go/oasis-test-runner: configurable sanitychecker #3719
Codecov Report
@@            Coverage Diff             @@
##           master    #3719      +/-   ##
==========================================
+ Coverage   66.90%   67.01%    +0.10%
==========================================
  Files         401      401
  Lines       39855    39855
==========================================
+ Hits        26665    26708       +43
+ Misses       9412     9372       -40
+ Partials     3778     3775        -3
Continue to review full report at Codecov.
Force-pushed from b12d513 to 41bc63d
Force-pushed from c51fa2b to 2eb29ed
@kostko This is ready for review. The last two failed runs were due to runtime staking messages being resubmitted (related to a compute node being restarted at an unlucky time), causing the operation to be applied a second time (since the runtime doesn't use nonces). I think we should tackle those in a subsequent PR, as this one got big enough and it's kinda hard to keep testing the changes here while the daily tests are also being run. This PR should, however, fix the permanent test failures that are currently happening on the daily test runs.
Force-pushed from 2eb29ed to fcbcc97
Nice work, thanks!
if cfg.EnableProfiling {
	val.Node.pprofPort = net.nextNodePort
	net.nextNodePort++
}
Would be nice if we could unify/clean this up, as it is the same for every Node; but this is also true for the Node constructor above, so it's probably best handled in a separate issue/PR.
👍 I'll open an issue
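For reference, a minimal sketch of the unification suggested above, assuming a hypothetical nextPort helper on the network object (names are illustrative; the actual oasis-test-runner types may differ):

// nextPort is a hypothetical helper that centralizes node port
// allocation; today every node type repeats the increment inline.
func (net *Network) nextPort() uint16 {
	port := net.nextNodePort
	net.nextNodePort++
	return port
}

Each call site would then shrink to:

if cfg.EnableProfiling {
	val.Node.pprofPort = net.nextPort()
}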
Daily long-term tests have been failing because validator-0 was falling behind the rest of the network due to the supplementary-sanity checker. This moves the supplementary-sanity checker to a new node (client-1), which can fall behind without affecting the rest of the network.
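For illustration only, the fixture change could look roughly like the sketch below; the field names are assumptions, not the exact oasis-test-runner fixture API:

// Hypothetical fixture sketch (field names are assumptions): run the
// supplementary-sanity checker on a dedicated client node instead of a
// validator, so a slow checker cannot stall consensus.
Clients: []oasis.ClientFixture{
	{}, // client-0: serves regular client traffic.
	{
		// client-1: runs the supplementary-sanity checker and may fall
		// behind the chain head without affecting the rest of the network.
		SupplementarySanityInterval: 1, // assumed field name
	},
},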
Also increase the timeout between governance workload iterations, as the number of proposals was quickly piling up over time (and both the queries workload and the sanity checker go through all proposals and votes, even expired ones).
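As a rough sketch of that pacing change, assuming a hypothetical workload loop (the names and the interval value below are illustrative, not the actual workload code):

// Sketch only: a longer pause between governance iterations bounds how
// many proposals accumulate for the queries workload and the
// supplementary-sanity checker to scan.
package workload

import (
	"context"
	"time"
)

const governanceIterationInterval = 2 * time.Minute // assumed value

func runGovernanceWorkload(ctx context.Context, iterate func(context.Context) error) error {
	for {
		if err := iterate(ctx); err != nil {
			return err
		}
		select {
		case <-time.After(governanceIterationInterval):
		case <-ctx.Done():
			return ctx.Err()
		}
	}
}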
TODO: