services/ticker: add test logs to identify inputs that make test flaky #2064
PR Checklist
PR Structure
This PR avoids mixing refactoring changes with feature changes (split into two PRs otherwise).
This PR's title starts with the name of the package that is most changed in the PR, ex. services/friendbot, or all or doc if the changes are broad or impact many packages.
Thoroughness
This PR adds tests for the most critical parts of the new functionality or fixes.
I've updated any docs (developer docs, .md files, etc...) affected by this change. Take a look in the docs folder for a given service, like this one.
Release planning
I've updated the relevant CHANGELOG (here for Horizon) if needed with deprecations, added features, breaking changes, and DB schema changes.
I've decided if this PR requires a new major/minor version according to semver, or if it's mainly a patch change. The PR is targeted at the next release branch if it's not a patch change.
What
Add logs to the TestInsertOrUpdateAsset test that print out the exact times that are being used. A sketch of what that logging could look like follows the Why section below.

Why
The test has been reported as flaky a couple of times, first in #1733 and then in #2063. An imperfect fix was put in place in 3e23070, but we've seen another test failure since. Previously when I debugged this, I wrote test cases that ran long series of inputs to find one that caused a failure. This time I think it's a better use of our time to improve the logging in this test, so that the next time it fails we know exactly which time inputs were used and can then address the continued flakiness.
For #2063
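To make the intent concrete, here is a minimal sketch of the kind of logging described above, using t.Logf so the exact time values show up in the test output when a run fails. The package name, the dbAsset type, its LastValid field, and the faked database round trip are all assumptions for illustration only; the real TestInsertOrUpdateAsset in services/ticker exercises an actual database.

```go
package tickerdb_test

import (
	"testing"
	"time"
)

// dbAsset stands in for the row the real test reads back from the database.
// The type and its LastValid field are illustrative, not the actual names.
type dbAsset struct {
	LastValid time.Time
}

func TestInsertOrUpdateAsset_loggingSketch(t *testing.T) {
	firstTime := time.Now()

	// Log the exact input with full precision, so a flaky CI run records
	// which value was in play when the failure happened.
	t.Logf("inserting asset with time: %v (unix nano: %d)", firstTime, firstTime.UnixNano())

	// The real test inserts the asset and reads it back from the database;
	// here the round trip is faked (with microsecond truncation) so the
	// sketch compiles and runs on its own.
	got := dbAsset{LastValid: firstTime.Truncate(time.Microsecond)}

	// Log what came back so the two values can be compared in the output.
	t.Logf("time read back: %v (unix nano: %d)", got.LastValid, got.LastValid.UnixNano())

	if !got.LastValid.Equal(firstTime.Truncate(time.Microsecond)) {
		t.Fatalf("times differ: sent %v, got back %v", firstTime, got.LastValid)
	}
}
```

Note that go test only prints t.Logf output for failing tests (or when run with -v), so logs like these add no noise to passing CI runs while still recording the exact time inputs whenever the flake reproduces.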
Known limitations
This doesn't fix the reported issue; it only gives us more visibility into a set of inputs we can reproduce with, so that when we spend time fixing it we can be confident whether or not we have actually fixed it.