services/ticker: add test logs to identify inputs that make test flaky #2064

Merged 2 commits into stellar:master on Dec 16, 2019

Conversation

leighmcculloch (Member) commented:

PR Checklist

PR Structure

  • This PR has reasonably narrow scope (if not, break it down into smaller PRs).
  • This PR avoids mixing refactoring changes with feature changes (split into two PRs
    otherwise).
  • This PR's title starts with the name of the package that is most changed in the PR, e.g.
    services/friendbot, or all or doc if the changes are broad or impact many
    packages.

Thoroughness

  • This PR adds tests for the most critical parts of the new functionality or fixes.
  • I've updated any docs (developer docs, .md files, etc.) affected by this change.
    Take a look in the docs folder for a given service, like this one.

Release planning

  • I've updated the relevant CHANGELOG (here for Horizon) if
    needed with deprecations, added features, breaking changes, and DB schema changes.
  • I've decided if this PR requires a new major/minor version according to
    semver, or if it's mainly a patch change. The PR is targeted at the next
    release branch if it's not a patch change.

What

Add logs to TestInsertOrUpdateAsset that print the exact time values being used in the test.
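
For context, a minimal sketch of the kind of logging this adds (the test and variable names below are illustrative only and do not reproduce the actual diff):

```go
package ticker_test

import (
	"testing"
	"time"
)

// Illustrative only: log the exact time inputs so a flaky failure records
// the values involved. The real test inserts and updates an asset and
// compares the stored timestamps against these values.
func TestInsertOrUpdateAsset_LoggingSketch(t *testing.T) {
	firstTime := time.Now()
	updatedTime := time.Now()

	// t.Logf output is shown on failure (or with -v), so a flaky run
	// captures the exact nanosecond values being compared.
	t.Logf("firstTime: %v (unixnano=%d)", firstTime, firstTime.UnixNano())
	t.Logf("updatedTime: %v (unixnano=%d)", updatedTime, updatedTime.UnixNano())
}
```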

Why

The test has been reported as flaky a couple of times, first in #1733 and then in #2063. An imperfect fix was put in place in 3e23070, but we've since seen another failure. Previously, when I debugged this, I wrote test cases that ran long series of inputs to find one that caused a failure. This time I think it's a better use of our time to improve the logging in this test, so that the next time it fails we know exactly which time inputs were used and can address the continued flakiness.

For #2063

Known limitations

This doesn't fix the reported issue; it only gives us visibility into a set of inputs we can reproduce with, so that when we do spend time fixing it we can be confident about whether the fix actually worked.

@bartekn (Contributor) left a comment:

I think it LGTM if you want to debug it more, but maybe it's easier to just use assert.WithinDuration (as I suggested in #1733 (comment)). I don't think we need to check the values here down to the millisecond, because this test only checks that the time values were inserted/updated properly.
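
For reference, a minimal sketch of that suggestion using testify's assert.WithinDuration (the one-second tolerance and the test name here are illustrative choices, not values from the PR):

```go
package ticker_test

import (
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
)

// Compare the stored time against the expected time with a tolerance,
// instead of requiring exact (nanosecond) equality.
func TestTimestampsWithinTolerance(t *testing.T) {
	expected := time.Now()
	stored := expected.Round(time.Microsecond) // e.g. precision lost in a DB round trip

	assert.WithinDuration(t, expected, stored, time.Second)
}
```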

@leighmcculloch (Member, Author) replied:

WithinDuration would probably be a good way to go 👍. Either way, I'd like a set of inputs so that I know I've actually fixed the issue, since it's flaky. I can switch to that function once it fails one more time.

Also, lots of people hit this issue when passing times between Go and Postgres, and assuming the cost of doing so is low, I'd like to see how a fix could be contributed back in a way that helps everyone.
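
For anyone reading this later, a sketch of the usual shape of the Go/Postgres mismatch, assuming the commonly reported nanosecond-vs-microsecond precision difference is what's biting here (the source doesn't confirm the exact root cause):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's time.Time carries nanosecond precision, while a Postgres
	// timestamp column stores microseconds, so a value read back from the
	// database may no longer Equal() the value that was written.
	written := time.Now()
	readBack := written.Round(time.Microsecond) // roughly what Postgres hands back

	fmt.Println(written.Equal(readBack)) // often false
	fmt.Println(readBack.Sub(written))   // sub-microsecond drift
}
```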

leighmcculloch merged commit 86f2db7 into stellar:master on Dec 16, 2019
leighmcculloch deleted the issue2063debug branch on Dec 16, 2019, 22:19