fix: account for recursion when stringing to avoid overflow #1315
Conversation
Codecov Report

Additional details and impacted files

@@            Coverage Diff             @@
##           master    #1315      +/-   ##
==========================================
+ Coverage   56.08%   56.27%   +0.18%
==========================================
  Files         421      422       +1
  Lines       65436    65772     +336
==========================================
+ Hits        36700    37010     +310
+ Misses      25870    25869       -1
- Partials     2866     2893      +27

View full report in Codecov by Sentry.
Looks good to me. Your commit resolves my issue.
The only thing I'm uncertain about is when mixing slices and pointers, where your "seen" map is reinitialized each time. I'll test some more unusual cases later.
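To make the concern above concrete, here is a minimal, hypothetical sketch (not the actual gno ProtectedString code; the type and function names are invented): if the "seen" set is re-created when descending into a slice element instead of being passed through, a cycle that runs through the slice is never detected.

package main

import "fmt"

// V mixes pointers with a slice, as in the mixed case mentioned above.
type V struct {
	name  string
	elems []*V
}

// render threads the seen map through every recursive call; re-creating it
// per element (render(e, map[*V]bool{})) would lose the history and recurse
// forever on a cyclic value.
func render(v *V, seen map[*V]bool) string {
	if seen[v] {
		return "<cycle>"
	}
	seen[v] = true
	out := v.name + "["
	for i, e := range v.elems {
		if i > 0 {
			out += " "
		}
		out += render(e, seen)
	}
	return out + "]"
}

func main() {
	a := &V{name: "a"}
	a.elems = []*V{a} // a cycle that goes through the slice
	fmt.Println(render(a, map[*V]bool{})) // prints: a[<cycle>]
}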
Cool. Yeah, there are definitely some issues due to this change -- see the failing tests, for example. I'm going to take a closer look to figure out what is going wrong with those.
This reverts commit 25455af. Renaming receivers is not trivial due to linting rules
There is one way we could possibly make this more efficient. Instead of using a map, use a slice. Of course, this is on the assumption that small maps, even when initialized to some reasonable size, still cost more than a linear scan over a small slice.
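For illustration only, here is a rough sketch of the slice-based idea (the function name, signature, and initial capacity are assumptions, not the actual gno code): the slice is allocated lazily with a small capacity and scanned linearly before descending.

package main

import "fmt"

type node struct {
	name     string
	children []*node
}

// protectedString guards against recursion with a small slice of ancestors:
// scan linearly, allocate lazily, and append the current node before
// descending into its children.
func protectedString(n *node, seen []*node) string {
	for _, s := range seen {
		if s == n {
			return "<recursion>"
		}
	}
	if seen == nil {
		seen = make([]*node, 0, 8) // assumed small initial capacity
	}
	seen = append(seen, n)
	out := n.name + "("
	for i, c := range n.children {
		if i > 0 {
			out += ", "
		}
		out += protectedString(c, seen)
	}
	return out + ")"
}

func main() {
	a := &node{name: "a"}
	b := &node{name: "b", children: []*node{a}}
	a.children = []*node{b} // a <-> b cycle
	fmt.Println(protectedString(a, nil)) // prints: a(b(<recursion>))
}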
I can try it, but I want to understand your thinking before I do. Where would efficiency be improved by using a slice instead of a map? The map guarantees O(1) lookups, so it seems like it would always be much faster for checking whether a value has been encountered before than iterating over a slice.
It's the usual story: big O is useful for understanding how an algorithm scales, but it is useless as a performance metric on its own. Consider the following microbenchmark:

package x
import (
	"slices"
	"testing"
)

// nItems controls the size of the set. It is assumed here that the original
// benchmark defined it elsewhere and varied it between runs.
var nItems = 100

func BenchmarkSlice(b *testing.B) {
	for i := 0; i < b.N; i++ {
		s := make([]int, 0, nItems)
		for j := 1; j <= nItems; j++ {
			s = append(s, j*2)
		}
		for j := 0; j < (nItems * 2); j++ {
			_ = slices.Contains(s, j)
		}
	}
}

func BenchmarkMap(b *testing.B) {
	for i := 0; i < b.N; i++ {
		s := make(map[int]struct{}, nItems)
		for j := 1; j <= nItems; j++ {
			s[j*2] = struct{}{}
		}
		for j := 0; j < (nItems * 2); j++ {
			_, ok := s[j]
			_ = ok
		}
	}
}
I am running the benchmark with a few different values for nItems.
Here are the results on my machine
As you can see, they are not exactly perfect and scientific, but I think you can see the general trend: BenchmarkSlice starts off orders of magnitude more efficient than BenchmarkMap, but as the values grow the map starts to catch up until the two are about tied.

The key insight I'm trying to make: when you want to check for "existence in a set", maps will definitely beat a linear search over a slice in the long run, but not if you are initializing the values in a "fire and forget" manner and are working with relatively small collections anyway. If you create a slice with a reasonable capacity at the beginning of ProtectedString (i.e. when seen is nil), the linear scan is likely to be cheaper than the map for the sizes we expect.

Finally, there is a case that we can handle better with such a system than with a map. Consider the following structure:

var i int = 42
type S struct{ A, B *int }
var s = &S{A: &i, B: &i}

From my understanding of how your code would work, the second reference to i (through B) would be treated as already seen and suppressed, even though s is not recursive at all; it simply points to the same value twice. What we are looking for instead is to guard against recursive structures, and as you can see, a set of every value ever visited catches more than that.

Of course, regarding the performance concerns, I still invite you to do some benchmarks to confirm what I'm saying and possibly prove me wrong, but my hunch is that a slice will work better with the amount of values we are likely to see :)

To make it clear, the rough idea is that seen should only contain the values on the current recursion path, so that revisiting a value that is currently being printed is treated as recursion, while merely encountering the same value twice in different branches is not.
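To make the shared-pointer point concrete, here is a small, hypothetical comparison (again, not the gno implementation): a set that remembers every pointer ever visited suppresses the second &i, while a guard that only tracks the current path does not.

package main

import "fmt"

type S struct{ A, B *int }

// withGlobalSeen remembers every pointer it has ever printed, so the second
// occurrence of &i is suppressed even though nothing is recursive.
func withGlobalSeen(p *int, seen map[*int]bool) string {
	if seen[p] {
		return "<seen>"
	}
	seen[p] = true
	return fmt.Sprint(*p)
}

// withPathSeen only checks the pointers currently being printed above us;
// since *int cannot point back into S, nothing is ever suppressed here.
func withPathSeen(p *int, path []*int) string {
	for _, q := range path {
		if q == p {
			return "<recursion>"
		}
	}
	return fmt.Sprint(*p)
}

func main() {
	i := 42
	s := &S{A: &i, B: &i} // shared, but not recursive

	seen := map[*int]bool{}
	fmt.Println(withGlobalSeen(s.A, seen), withGlobalSeen(s.B, seen)) // 42 <seen>

	fmt.Println(withPathSeen(s.A, nil), withPathSeen(s.B, nil)) // 42 42
}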
That's a great explanation. After reading it and doing some tests of my own, I agree with you; it's unlikely that the depth of the stack trace will reach the point where using the map would be more performant. I'll make the necessary modifications.
@thehowl can you let me know what you think of the most recent changes when you have a chance?
Resolved it. Good to merge @moul
Co-authored-by: Morgan <[email protected]>
Addresses #1291. This is to fix the issue causing the entire interpreter process to crash, not the underlying issue that is causing it to panic in the first place.