Panic when reading file #391
Comments
Thank you @nadenf.
This reproduced locally. I will now take a look.
Ok, quick update: I know what this is now. This seems to be a regression introduced by @HippoBaro's work on the read_many changes. What happens is that the network streams keep adding waiters to a source (which increases the reference count on the waker) every time they are polled. I added a print statement and can see the source's waiter vector growing. There is code to flush this vector that should be called, but I don't see it being called. So there are still missing pieces of this puzzle, but we'll get there!
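To make the mechanism above concrete, here is a minimal sketch using a simplified stand-in for the real types: a per-source list of waiters where each poll clones the task's waker (which bumps its reference count), plus the flush path that should eventually drain it. This is illustrative only, not glommio's actual `Source` implementation.

```rust
use std::task::Waker;

// Hypothetical sketch of the pattern described above, not glommio's actual
// `Source` type: each poll registers a waiter by cloning the task's Waker
// (cloning typically bumps the waker's reference count).
struct Source {
    waiters: Vec<Waker>,
}

impl Source {
    fn add_waiter(&mut self, waker: &Waker) {
        // If nothing ever drains `waiters`, repeatedly polling a stream that
        // is never ready makes this vector grow without bound.
        self.waiters.push(waker.clone());
    }

    fn wake_all(&mut self) {
        // The flush path: drain and wake every stored waiter. In the scenario
        // described above, this path was apparently not being reached.
        for waker in self.waiters.drain(..) {
            waker.wake();
        }
    }
}
```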
👋 bug author here! Thanks for looking into this @glommer. The vector in question is drained when processing events from the ring. My intuition is that this code may process sources outside of the ring somehow. If that's the case, then I can see how the Vec could grow unbounded.
It used to be the case that we assumed there was a single waiter per source, but in the read_many work we moved that to many waiters. The function that manipulates the wakers was kept the same, so whether or not we had many wakers was entirely up to the user.

However, I just found a use case (issue DataDog#391) where many waiters are added to a source that only expects a single waiter. That's a use case in which we reuse a source, and we keep calling the poll function on a stream even though the stream is never ready. It's not clear to me (yet) why this is the case. It is certainly surprising.

While I want to get to the bottom of this, it is not a bad idea to require users to state their intentions explicitly. In the future, if we can indeed guarantee that the waiter list of a single-waiter source should be empty, we can use this opportunity for a stronger assert. For now, this reverts to the old behavior for the original users and at least gets rid of this particular regression.

Fixes DataDog#391
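As a sketch of what requiring users to state their intentions explicitly could look like, the same illustrative `Source` could expose separate single-waiter and many-waiter registration methods. The names `add_waiter_single` and `add_waiter_many` below are hypothetical, not glommio's actual API.

```rust
use std::task::Waker;

// Same illustrative `Source` as in the earlier sketch; the two methods below
// show the hypothetical "state your intentions" split.
struct Source {
    waiters: Vec<Waker>,
}

impl Source {
    /// For callers that expect at most one pending waiter (e.g. a reused
    /// source that is polled repeatedly): replace whatever is stored instead
    /// of accumulating clones.
    fn add_waiter_single(&mut self, waker: &Waker) {
        // Once the "never ready" behavior is understood, this could become an
        // assertion that the vector was already empty.
        self.waiters.clear();
        self.waiters.push(waker.clone());
    }

    /// For callers such as `read_many` that legitimately register many
    /// waiters on the same source.
    fn add_waiter_many(&mut self, waker: &Waker) {
        self.waiters.push(waker.clone());
    }
}
```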
Hi @nadenf, I just opened a PR that should "fix" this issue. However, I'm only pushing it now because I am very respectful of your contributions and mindful of your time, and want to unblock any work of yours that may be blocked on it. So feel free to use that for now.

However, I may withhold merging for a while, for the following reason: it makes no sense to me that the network stream never completes. The behavior I see is that we keep calling the poll function on a stream that is never ready. I am fine with breaking this assumption, but it is really strange: if the poll function is called, I would expect that to be because the operation completed.

In a nutshell, I'd like to understand a bit more why this is. Maybe this is related to hyper, and they may have something artificially driving the poll function. But how does it make sense that it is never ready? Stay tuned!
I looked into this a bit more, and hyper indeed has logic that calls the poll function again before the previous operation completes, so this is legitimate. We don't want to cancel the existing source (which was my biggest fear), and because this is a stream, it immediately means that we're not interested in the old waiter anymore. I want to think a bit more about this, to make sure that there aren't cases in which we are supposed to keep both wakers in the stream.
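For a stream where only the most recent waiter matters, a common generic pattern is a single waker slot that each poll overwrites, using `will_wake` to skip redundant clones. This is a general illustration of that idea, not the code in the PR.

```rust
use std::task::{Context, Waker};

// Generic illustration (not glommio's code): for a stream, only the waker
// from the most recent poll needs to be woken, so one option is a single
// waker slot that is overwritten whenever a new task polls.
struct WaiterSlot {
    waker: Option<Waker>,
}

impl WaiterSlot {
    fn register(&mut self, cx: &Context<'_>) {
        match &self.waker {
            // Skip the clone if the same task is polling again.
            Some(existing) if existing.will_wake(cx.waker()) => {}
            _ => self.waker = Some(cx.waker().clone()),
        }
    }

    fn wake(&mut self) {
        if let Some(waker) = self.waker.take() {
            waker.wake();
        }
    }
}
```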
Code
Steps to Recreate
In another Terminal: