ci: merge staging to master #474
Conversation
Pipeline Attempt on 658670966 for 06df14a https://gitlab.com/MatrixAI/open-source/Polykey/-/pipelines/658670966
@tegefaulkes the NAT tests failed here. There's a failure that is not just a timeout.
Looking at it now. The initial problem here is that the pings are failing after the default timeout of 20 seconds, so it's not related to the bug from yesterday. I need to look more closely at the bin command to know why the ping is failing, but right now it's consistent with failing to connect.
It's odd; the core problem is that the connection is timing out with […]. Another odd thing is that setting a shorter timeout of 10000 ms doesn't change the outcome. I'd expect it to abort with a different timeout error, but it doesn't, so there is a bug in the code here somewhere. That relates to the nodes domain cancellability I'm currently working on, so it should be addressed there.
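To illustrate the expected behaviour, here is a minimal sketch (hypothetical names, not Polykey's actual API) of a cancellable ping: a caller-supplied `AbortSignal` races against the connection attempt, so a 10000 ms signal should surface a distinct abort error before the 20 s default connection timeout ever fires. `AbortSignal.timeout` assumes a recent Node version.

```ts
// Hypothetical sketch: `attemptConnection` stands in for the real
// connection logic; `ErrorPingAborted` is an illustrative error name.
declare function attemptConnection(host: string, port: number): Promise<boolean>;

async function pingNode(
  host: string,
  port: number,
  signal?: AbortSignal,
): Promise<boolean> {
  if (signal?.aborted) throw new Error('ErrorPingAborted');
  const aborted = new Promise<never>((_, reject) => {
    // Reject as soon as the caller's signal fires.
    signal?.addEventListener('abort', () =>
      reject(new Error('ErrorPingAborted')),
    );
  });
  // Whichever settles first wins; the abort rejection should preempt
  // the default connection timeout.
  return Promise.race([attemptConnection(host, port), aborted]);
}

// Expected: rejects with ErrorPingAborted after ~10 s, not after 20 s.
void pingNode('127.0.0.1', 1314, AbortSignal.timeout(10000));
```

If the shorter timeout doesn't change the outcome, the signal is presumably not being threaded through to the connection attempt, which is consistent with the cancellability bug described above.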
The only change was updating the network domain with cancellability. The network and nodes tests are passing, so the connection logic is still working, including […].
Does the test work locally? Maybe you do need to fix the bug in the nodes domain first.
The tests are failing locally. The problem doesn't seem to be the nodes domain specifically. So far as I can tell, the […]. I'll come back to this; I need to think on it for a little bit.
Pipeline Attempt on 660898712 for e428206 https://gitlab.com/MatrixAI/open-source/Polykey/-/pipelines/660898712
Pipeline Attempt on 669366315 for d0ffca0 https://gitlab.com/MatrixAI/open-source/Polykey/-/pipelines/669366315
Pipeline Attempt on 675968580 for 0421f73 https://gitlab.com/MatrixAI/open-source/Polykey/-/pipelines/675968580
Pipeline Attempt on 677224551 for b381e29 https://gitlab.com/MatrixAI/open-source/Polykey/-/pipelines/677224551
Pipeline Attempt on 678321421 for bb9f39b https://gitlab.com/MatrixAI/open-source/Polykey/-/pipelines/678321421
Pipeline Attempt on 678384368 for 04737bc https://gitlab.com/MatrixAI/open-source/Polykey/-/pipelines/678384368
Make sure any leftover logs follow the logging structure used by the rest of the code.
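For context, a minimal sketch of what "following the rest of the code" could look like, assuming the `Logger`/`getChild` API of `@matrixai/logger` (which Polykey uses); the child-logger name here is illustrative.

```ts
// Sketch assuming @matrixai/logger's API (default Logger export, LogLevel,
// StreamHandler, getChild). Leftover logs should go through a namespaced
// child logger rather than ad-hoc console.log calls.
import Logger, { LogLevel, StreamHandler } from '@matrixai/logger';

const logger = new Logger('Polykey', LogLevel.INFO, [new StreamHandler()]);
// Each domain gets its own namespaced child logger.
const connLogger = logger.getChild('NodeConnectionManager');
connLogger.info('Creating connection to seed node');
connLogger.debug('Handshake parameters negotiated');
```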
Pipeline Attempt on 688345460 for 1890801 https://gitlab.com/MatrixAI/open-source/Polykey/-/pipelines/688345460
There seems to be a bug causing the seed nodes to crash when receiving certain connections. It is internal to the Node.js implementation, and may be related to nodejs/node#35695. For now I'm going to disable the testnet connection tests due to the seed nodes' instability.
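If the crash really is inside Node's HTTP/2 internals (an assumption; nodejs/node#35695 is only a candidate), a defensive sketch using Node's `http2` module would be to attach error handlers at the session level, so one bad session is torn down instead of crashing the whole seed node process.

```ts
// Defensive sketch: whether this guards against the specific crash in
// nodejs/node#35695 is an assumption.
import * as http2 from 'http2';

const server = http2.createServer();
server.on('session', (session) => {
  session.on('error', (err) => {
    // Destroy only the failing session, not the process.
    console.error('HTTP/2 session error:', err);
    session.destroy();
  });
});
server.on('sessionError', (err) => {
  // Emitted for session errors before per-session handlers attach.
  console.error('HTTP/2 sessionError:', err);
});
server.listen(0);
```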
Pipeline Attempt on 689345933 for 2e972e7 https://gitlab.com/MatrixAI/open-source/Polykey/-/pipelines/689345933
Pipeline Attempt on 689348586 for 6a18b6b https://gitlab.com/MatrixAI/open-source/Polykey/-/pipelines/689348586
I think we've seen that exception before? Is this what is causing the random failures, or are the random failures coming from elsewhere? Are you sure it's HTTP2, and not the UDP, uTP, or elsewhere?
Also, we should try to replicate that, and use […].
Yeah, I think we've seen this one before. I'm not sure exactly where it's happening, but I found that issue after a quick search. Triggering this bug might be a little tricky; right now it seems to be caused when I connect to a seed node with the […].
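To help locate the throw site while manually reproducing it, process-level hooks (standard Node APIs) can log the full stack of the uncaught exception; running with `NODE_DEBUG=http2` can additionally confirm or rule out the HTTP/2 layer. A minimal sketch:

```ts
// Log the origin of the crash before the process dies, so the stack trace
// shows whether it comes from HTTP/2, UDP, uTP, or elsewhere.
process.on('uncaughtException', (err) => {
  console.error('Uncaught exception:', err.stack);
  // Exit explicitly so the failure is still visible to CI.
  process.exit(1);
});
process.on('unhandledRejection', (reason) => {
  console.error('Unhandled rejection:', reason);
});
```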
Pipeline Attempt on 689376099 for ab8d759 https://gitlab.com/MatrixAI/open-source/Polykey/-/pipelines/689376099
In order to make it easier for us to see if our manual test is succeeding, we need a […]. If this succeeds and we don't crash the testnet, then the testnet connection tests should succeed automatically too; we just need to ensure we are doing the same thing. However, our networking system is getting too complex, and it is resulting in bugs that are hard to figure out. We may need to start on a networking refactoring soon after we merge in the […].
We can also add a […].
We'll look into network refactoring after feature-crypto (this should eliminate the situations where we have random failures). That will also need to fix MatrixAI/js-mdns#1. Also, try to rule out this happening: https://aws.amazon.com/premiumsupport/knowledge-center/ecs-resolve-outofmemory-errors/. It seems it never gets past 50% memory usage in the service. Remove the old […].
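To rule the ECS OOM scenario in or out, one option is to periodically log `process.memoryUsage()` (a standard Node API) and compare it against the ~50% memory ceiling observed in the service; a minimal sketch:

```ts
// Periodically log RSS and heap so OOM kills can be correlated with
// memory growth; numbers are converted to MiB for readability.
setInterval(() => {
  const { rss, heapUsed, heapTotal } = process.memoryUsage();
  console.log(
    `rss=${(rss / 1024 / 1024).toFixed(1)}MiB ` +
      `heap=${(heapUsed / 1024 / 1024).toFixed(1)}/` +
      `${(heapTotal / 1024 / 1024).toFixed(1)}MiB`,
  );
}, 60_000).unref(); // unref so the timer doesn't keep the process alive
```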
Turns out it wasn't needed. I ran all the tests and there were no errors relating to it. [ci skip]
`js-quic` integration and Agent migration
Pipeline Attempt on 951440760 for fc82a3d https://gitlab.com/MatrixAI/open-source/Polykey/-/pipelines/951440760
Once #534 is merged, this is expected to pass and merge into master. However, CI deployment to testnet and mainnet will be moved to the Polykey-CLI repository.
Pipeline Attempt on 954294586 for f7708f6 https://gitlab.com/MatrixAI/open-source/Polykey/-/pipelines/954294586
Pipeline Succeeded on 954294586 for f7708f6 https://gitlab.com/MatrixAI/open-source/Polykey/-/pipelines/954294586
This is an automatic PR generated by the CI/CD pipeline. It will be automatically fast-forward merged if successful.
Tasks
- [ ] Investigate random crashes (unnecessary since CLI tests will go to Polykey-CLI)