Are WebSockets production ready? #97
Comments
I am having the same problem. I am also grateful for the great work the Infura team does!
same to me
Same issue here. Losing connection randomly.
Having similar issues here, randomly dropping connection on
Infura's websockets aren't quite production ready yet, but they will be soon. http://status.infura.io/ will have websocket providers added to it when they are production ready. For now, assume that the endpoints will go up/down as they work on the architecture. MetaMask's InPageProvider provides a polyfill for subscription support. This is probably your best bet if you're trying to go to production now.
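One way to act on this advice is to prefer an injected provider (such as MetaMask's InPageProvider, which polyfills subscriptions) and fall back to an HTTP provider URL otherwise. A minimal sketch, with the provider shapes stubbed out since the real objects (e.g. `window.ethereum`, a web3 `HttpProvider`) depend on the environment:

```javascript
// Prefer an injected provider when one is available; otherwise fall back to
// an HTTP endpoint. The return shape here is illustrative, not a web3 API.
function pickProvider(injectedProvider, httpUrl) {
  if (injectedProvider) {
    return { kind: "injected", provider: injectedProvider };
  }
  return { kind: "http", url: httpUrl };
}
```

The caller would then construct the actual web3 instance from whichever branch was taken.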
I can't wait until they get the wss stable. Such a useful service for the future development of dapps.
Agreed. Can't wait for this to be production ready. Constantly getting:
Hi, for those using the Infura websocket provider: do you have to supply your API key in the URL like we do with the HTTP provider, i.e. https://rinkeby.infura.io/api-key? If so, what is the format? Or is the WS provider only for reading/listening to events, so there is no need for an API key?
@lakamsani The latter. There's no need to specify an API key.
Any idea when this will be production ready?
Any updates on this topic? @MeoMix as far as I see, you have some info about it :)
Any updates on this yet?
+1 on this.
If you have specific issues with websockets please open a new ticket with a reproducible test case.
+1 on this - I've been working with wss://kovan.infura.io/ws for a while now - and mostly it works well. However, sporadically I see:
followed by several:
At this moment (10:39 AM US Eastern, Oct 12, 2018), I've been unable to connect for more than 10 minutes. Does Infura recommend that users implement graceful degradation to HTTP with polling for filters?
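Pending an official recommendation, a common client-side pattern is exponential backoff between reconnect attempts, falling back to an HTTP provider with polling after repeated failures. A minimal sketch of the timing logic (helper names and thresholds are illustrative, not an Infura API):

```javascript
// Delay (ms) before the n-th reconnect attempt: doubles each try, capped at maxMs.
function reconnectDelay(attempt, baseMs = 500, maxMs = 30000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

// After maxAttempts failed reconnects, the caller would switch to an
// HTTP provider and poll filters instead of using subscriptions.
function shouldFallbackToHttp(attempt, maxAttempts = 6) {
  return attempt >= maxAttempts;
}
```

A reconnect loop would call `reconnectDelay(n)` before each retry and check `shouldFallbackToHttp(n)` to decide when to give up on the websocket.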
@yarrumretep I see no indications of any issues on our end. If it happens again, can you get a wireshark/tcpdump of an attempted connection? While you may occasionally be disconnected during maintenance or due to network timeouts, you should be able to immediately reconnect (any maintenance we do is via rolling deploys, but we didn't do any maintenance today and everything in the logs and monitoring looks normal).
@ryanschneider - here's a wireshark capture from today. Still seeing this sporadically. Also seeing some 404 errors occasionally during WS connecting. Seems like the problem was with 54.174.165.50 - this occurred around 15:14 US Eastern time on October 18.
We're having similar results; we're seeing this error more and more.
I think we've found the issue. It looks to be a bug in our infrastructure layer which is adding the wrong instances to the load balancer (a couple of unrelated servers that unfortunately pass the same health check). Investigating now.
Ok, the issue is resolved and we're working on determining a) how it happened and b) mitigations to make sure it doesn't happen again. For those curious, this AWS bug seems to be the root cause, or at least related: https://github.com/aws/amazon-ecs-agent/issues/1417
Our problem persists (see my comment above for console output). We are using Kovan.
@xardass that appears to be a separate issue, can you open a separate GitHub issue, ideally with details on the exact
Thank you, but we narrowed our problem down to a Parity update, unrelated to web3.
@ryanschneider - I'm still seeing this with wss://kovan.infura.io/ws
@yarrumretep can you provide more details on what state your client connection was in at the time? Since everything is in TLS I can't see the actual payloads, but it seems like the issue happened a couple of seconds after the session was negotiated, so if you could log on your end what you were sending to the server, that would help debug. Also, if you want to see the payloads yourself and are generating the traffic from a browser, you can use this method: https://jimshaver.net/2015/02/11/decrypting-tls-browser-traffic-with-wireshark-the-easy-way/ However, if you do so, please don't post the sslkeylog file, since it appears that it would contain session keys for all of your recent TLS sessions and so could be used by an eavesdropper to decrypt your other TLS traffic.
Issue also happens here.
I am having the same issues while connecting to wss://kovan.infura.io/ws using the latest web3 1.0 release.
I'm having the same issue on mainnet with an Infura node.
Same, nothing helps 👎
Same issue here, all good on Ropsten but on mainnet I get disconnected.
Any updates on this? I'm still getting:
In our case the issue was that we were loading too much data through eth_filterLogs. On Ropsten there was much less data, therefore the websocket connection wasn't closing. Hope this helps someone.
@ianaz how do you manage how much data you load through eth_filterLogs?
@Sm00g15 by loading the data in smaller batches with
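The batching approach described above can be sketched as splitting a large block range into fixed-size windows and issuing one log query per window, so no single request returns enough data to stall the websocket. A hypothetical helper (names are illustrative, not a web3 API):

```javascript
// Split an inclusive block range into windows of at most `size` blocks.
// Each window would back one eth_filterLogs / getPastEvents request.
function chunkBlockRange(fromBlock, toBlock, size) {
  const ranges = [];
  for (let start = fromBlock; start <= toBlock; start += size) {
    ranges.push({
      fromBlock: start,
      toBlock: Math.min(start + size - 1, toBlock),
    });
  }
  return ranges;
}
```

A caller would loop over `chunkBlockRange(0, latestBlock, 10000)` and fetch logs one window at a time, concatenating the results.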
Started experiencing the same issue as @hadarbmdev:
Based on the docs, use the v3 scheme:
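For reference, a small helper that builds the v3-style endpoint URLs; the path layout is assumed from Infura's v3 documentation, and the project ID is a placeholder you get from the Infura dashboard:

```javascript
// Build Infura v3 endpoint URLs for a given network and project ID.
// Path layout assumed from Infura's v3 docs; projectId is a placeholder.
function infuraV3Urls(network, projectId) {
  return {
    http: `https://${network}.infura.io/v3/${projectId}`,
    ws: `wss://${network}.infura.io/ws/v3/${projectId}`,
  };
}
```

For example, `infuraV3Urls("mainnet", "<your-project-id>")` yields both the HTTP and websocket URLs for mainnet.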
In the past day or so, I've been testing out Infura websocket endpoints at wss://mainnet.infura.io for Web3 1.0 compatibility. It seems that it's randomly disconnecting, does not support long-running subscriptions (for example, subscribing to events generated by a contract), and some methods like getBlock() are not working. Can we get some clarity on the state of Infura WS?
FWIW, the wss://mainnet.infura.io/_ws endpoint seems to be working more reliably than wss://mainnet.infura.io/ws.
Infura with HTTP is awesome. We rely on it every day. Thank you for your incredible work!