
Websocket client closes with "unexpected reserved bits 0x60" on phantom packet #373

Closed
john-wilkinson opened this issue Apr 19, 2018 · 5 comments

Comments

@john-wilkinson

I have an open client websocket connection and receive data just fine until I hit a particular packet:

02 42 0a 00 00 08 02 42 4a 19 30 e2 08 00 45 00
00 46 0f f6 40 00 3e 06 16 3d 0a ee 01 8a 0a 00
00 08 ad 74 d9 06 a6 d4 78 f7 66 8f df b1 80 18
00 eb 41 a0 00 00 01 01 08 0a 0e 76 a4 6e 0e 76
be b1 81 10 1b 5b 73 1b 5b 32 35 3b 37 38 48 46
36 1b 5b 75

This packet gets acked and then the websocket client connection gets closed with the error:

websocket: unexpected reserved bits 0x60

The value of p at that time (from p, err := c.read(2)) is 6d6d.
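
For context, a sketch of the bit math behind that error (the masks come from RFC 6455, not from the original report): the first byte of a WebSocket frame header carries FIN, the three reserved bits, and the opcode, and gorilla/websocket rejects any frame with a reserved bit set when no extension negotiated it. A first byte of 0x6d has RSV1 and RSV2 set, which is exactly the reported 0x60:

package main

import "fmt"

// Frame header bit masks per RFC 6455, section 5.2.
const (
	finBit  = 0x80 // FIN
	rsv1Bit = 0x40 // RSV1
	rsv2Bit = 0x20 // RSV2
	rsv3Bit = 0x10 // RSV3
	opMask  = 0x0f // opcode
)

func main() {
	b := byte(0x6d) // first byte of the phantom header bytes 6d6d

	rsv := b & (rsv1Bit | rsv2Bit | rsv3Bit)
	fmt.Printf("reserved bits: 0x%02x, opcode: 0x%x\n", rsv, b&opMask)
	// Prints: reserved bits: 0x60, opcode: 0xd
	// which matches "websocket: unexpected reserved bits 0x60".
}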

This is entirely reproducible.

I have been unable to inspect the full packet being read to determine why it is reading 6d6d.

Information on how to dump the contents of the entire packet being read, or on why/where it would be reading these phantom values, would be great.

@garyburd
Contributor

garyburd commented Apr 19, 2018

Run the application with the race detector and report back after fixing any issues. Include information about the server and how to reproduce the problem.

Note that concurrent reads are not supported.
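
A minimal sketch of the supported pattern, assuming the standard gorilla/websocket API (the channel and function names here are illustrative, not from the issue): exactly one goroutine reads from the connection, and everything else consumes messages from a channel.

package main

import (
	"log"

	"github.com/gorilla/websocket"
)

// readLoop is the only goroutine allowed to call ReadMessage on conn.
// Other goroutines consume from the out channel instead of reading directly.
func readLoop(conn *websocket.Conn, out chan<- []byte) {
	defer close(out)
	for {
		_, msg, err := conn.ReadMessage()
		if err != nil {
			log.Println("read:", err)
			return
		}
		out <- msg
	}
}

func main() {
	conn, _, err := websocket.DefaultDialer.Dial("ws://example.com/ws", nil)
	if err != nil {
		log.Fatal("dial:", err)
	}
	defer conn.Close()

	messages := make(chan []byte)
	go readLoop(conn, messages)

	for msg := range messages {
		log.Printf("received %d bytes", len(msg))
	}
}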

@john-wilkinson
Author

john-wilkinson commented Apr 20, 2018

I "fixed" the problem by resetting the internal buffer at the end of conn.ReadMessage

websocket/conn.go

Line 1024 in cd94665

p, err = ioutil.ReadAll(r)

c.br.Reset(c.conn)

I haven't run it with the race detector, but I am also only reading from the websocket in a single location.

After dumping the entire buffer, it became clear that there was data from previous packets stuck in there, and for some reason it was getting pushed to the end of the buffer. I'm still not entirely sure what's going on, but my gut instinct is that it has something to do with the payload size value in the header and how we calculate whether or not we have the entire message. But that's just a feeling; I am not familiar enough with the entire model to understand what is actually going on under the hood.
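
One way to dump everything the library reads off the wire, without patching conn.go, is to wrap the underlying net.Conn with a logging shim through Dialer.NetDial. This is a debugging sketch, not code from this issue; the teeConn name and the URL are illustrative.

package main

import (
	"encoding/hex"
	"log"
	"net"

	"github.com/gorilla/websocket"
)

// teeConn wraps a net.Conn and hex-dumps every chunk the websocket
// library reads, so unexpected bytes can be traced back to the wire.
type teeConn struct {
	net.Conn
}

func (t teeConn) Read(p []byte) (int, error) {
	n, err := t.Conn.Read(p)
	if n > 0 {
		log.Printf("raw read, %d bytes:\n%s", n, hex.Dump(p[:n]))
	}
	return n, err
}

func main() {
	d := websocket.Dialer{
		NetDial: func(network, addr string) (net.Conn, error) {
			c, err := net.Dial(network, addr)
			if err != nil {
				return nil, err
			}
			return teeConn{c}, nil
		},
	}
	conn, _, err := d.Dial("ws://example.com/ws", nil)
	if err != nil {
		log.Fatal("dial:", err)
	}
	defer conn.Close()

	for {
		if _, _, err := conn.ReadMessage(); err != nil {
			log.Println("read:", err)
			return
		}
	}
}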

@garyburd
Contributor

garyburd commented Apr 20, 2018

Run the race detector. I want to rule out the possibility that the application is reading the connection concurrently before investigating further.
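
For reference, the race detector is enabled with the -race flag (go run -race ., go test -race ./...). Below is a sketch, with illustrative names and not code from the reporter's application, of the kind of pattern it can flag in this situation:

package wsexample

import "github.com/gorilla/websocket"

// badConcurrentReads starts two goroutines that both call ReadMessage on
// the same *websocket.Conn. gorilla/websocket does not support concurrent
// reads, so the goroutines can interleave frame bytes and trigger errors
// such as "unexpected reserved bits"; the race detector can report the
// conflicting accesses to the connection's internal read state.
func badConcurrentReads(conn *websocket.Conn) {
	for i := 0; i < 2; i++ {
		go func() {
			for {
				if _, _, err := conn.ReadMessage(); err != nil {
					return
				}
			}
		}()
	}
}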

@john-wilkinson
Author

It looks like I do have race conditions, which is very interesting. I'll fix those and see if that fixes the problem.

@john-wilkinson
Author

It looks like fixing the concurrency issue worked! Although I'm still confused about what was going on....

Hopefully this issue will help other people who run into the same error.

Thank you for your help @garyburd
