colexecop: reset the internal batch in a test operator #78703
Conversation
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @michae2 and @yuzefovich)
-- commits, line 12 at r1:
So even though the performance declined, should we see some benchmarks with better memory usage?
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @michae2 and @rharding6373)
Previously, rharding6373 (Rachael Harding) wrote…
So even though the performance declined, should we see some benchmarks with better memory usage?
Existing benchmarks don't exercise the scenario fixed by this commit - namely, all values are inlined in the existing benchmarks.
I guess I'm a bit on the fence about whether the speed regressions are worth it to address the scenario described in the commit message. Currently, the contract is that `Bytes.Reset` should be called as an optimization, and we do reset the vector in all places (because this was mandatory until the Bytes vector refactor), so we're getting a perf hit in order to behave more sanely in a scenario that is only hypothetical or test-only. Thoughts?
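To make the contract concrete, here is a minimal, self-contained sketch of the failure mode under discussion. The `bytesVec` type below is an illustrative stand-in, not the actual `coldata.Bytes` implementation: values share one append-only buffer, and skipping `Reset` between batches lets that buffer grow with every batch.

```go
package main

import "fmt"

// bytesVec is a simplified stand-in for a flat bytes vector: values are
// appended to a single shared buffer, and offsets record where each
// element ends. Illustrative only, not the coldata.Bytes code.
type bytesVec struct {
	buf     []byte
	offsets []int
}

// Set appends the value for the next row to the shared buffer.
func (b *bytesVec) Set(v []byte) {
	b.buf = append(b.buf, v...)
	b.offsets = append(b.offsets, len(b.buf))
}

// Reset truncates the vector so the buffer can be reused. If callers skip
// this between batches, buf keeps growing with every Set.
func (b *bytesVec) Reset() {
	b.buf = b.buf[:0]
	b.offsets = b.offsets[:0]
}

func main() {
	var v bytesVec
	row := make([]byte, 64) // a non-inlined value
	for batch := 0; batch < 3; batch++ {
		// v.Reset() // without this call, the buffer grows by 64 KiB per batch
		for i := 0; i < 1024; i++ {
			v.Set(row)
		}
		fmt.Printf("after batch %d: buffer holds %d bytes\n", batch, len(v.buf))
	}
}
```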
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @michae2 and @yuzefovich)
Previously, yuzefovich (Yahor Yuzefovich) wrote…
Existing benchmarks don't exercise the scenario fixed by this commit - namely, all values are inlined in the existing benchmarks.
I guess I'm a bit on the fence about whether the speed regressions are worth it to address the scenario described in the commit message. Currently, the contract is that `Bytes.Reset` should be called as an optimization, and we do reset the vector in all places (because this was mandatory until the Bytes vector refactor), so we're getting a perf hit in order to behave more sanely in a scenario that is only hypothetical or test-only. Thoughts?
If the existing behavior isn't buggy and isn't responsible for (or won't reduce) OOMs, but this change results in performance hits across the board, maybe it's better to hold off until either a better solution is found or we can demonstrate more benefits.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @michae2 and @rharding6373)
Previously, rharding6373 (Rachael Harding) wrote…
If the existing behavior isn't buggy and isn't responsible for (or won't reduce) OOMs, but this change results in performance hits across the board, maybe it's better to hold off until either a better solution is found or we can demonstrate more benefits.
I agree with @rharding6373, seems better to hold off if we're already calling `Bytes.Reset` enough to make this moot.
This commit makes a test operator that "owns" a batch properly reset that batch on each `Next` call. This is an expectation that we forgot to follow. It was discovered when running `BenchmarkLikeOps`, which uses a Bytes vector: due to the way `Bytes.Copy` is implemented, the Bytes vector's buffer could grow arbitrarily large because the test operator never reset the vector.

Release note: None
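As a rough illustration of the pattern the commit describes (not the actual colexecop code; `testBatch` and `testOperator` below are simplified stand-ins), an operator that owns its output batch needs to reset that batch before repopulating it on each `Next` call:

```go
package main

import "fmt"

// testBatch is a simplified stand-in for coldata.Batch: it has one
// variable-length bytes column backed by a shared buffer.
type testBatch struct {
	buf    []byte // shared buffer for the bytes column
	length int
}

// reset truncates the batch so its buffer can be reused on the next call,
// analogous to resetting the internal batch in the real operator.
func (b *testBatch) reset() {
	b.buf = b.buf[:0]
	b.length = 0
}

// testOperator owns its output batch and repopulates it on every Next call,
// mirroring the test operator fixed by this commit.
type testOperator struct {
	out  testBatch
	rows [][]byte
}

// Next fills the owned batch with the operator's rows. The reset at the top
// is the fix: without it, every call keeps appending to the same buffer and
// the bytes column grows arbitrarily large across calls.
func (o *testOperator) Next() *testBatch {
	o.out.reset()
	for _, r := range o.rows {
		o.out.buf = append(o.out.buf, r...)
		o.out.length++
	}
	return &o.out
}

func main() {
	op := testOperator{rows: [][]byte{make([]byte, 64), make([]byte, 64)}}
	for i := 0; i < 3; i++ {
		b := op.Next()
		fmt.Printf("call %d: %d rows, buffer %d bytes\n", i, b.length, len(b.buf))
	}
}
```

With the reset in place, the buffer stays bounded by a single batch's worth of data across calls; without it, each call appends on top of the previous one.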
Reviewable status: complete! 0 of 0 LGTMs obtained (and 1 stale) (waiting on @michae2 and @rharding6373)
Previously, michae2 (Michael Erickson) wrote…
I agree with @rharding6373, seems better to hold off if we're already calling `Bytes.Reset` enough to make this moot.
Sounds good, removed the first commit. PTAL.
Reviewable status: complete! 1 of 0 LGTMs obtained (waiting on @michae2 and @rharding6373)
TFTRs! bors r+
Build succeeded.
This commit makes a test operator that "owns" a batch properly reset that batch on each `Next` call. This is an expectation that we forgot to follow. It was discovered when running `BenchmarkLikeOps`, which uses a Bytes vector: due to the way `Bytes.Copy` is implemented, the Bytes vector's buffer could grow arbitrarily large because the test operator never reset the vector.

Informs: #78592.

Release note: None