1.1 beta 3: "fatal error: concurrent map iteration and map write" #432
Comments
Thanks! It would be good to see something deeper in the stack trace to know what code lines triggered the panic (e.g., the last …)
Catching up on this: there were three crashes. I figured more data is best data, so here are the full stack traces, each being 7,000 or so lines:
Excellent, thanks! We're taking a look!
Looks like a concurrent map access on the request Headers, on all 3 traces. Should be an easy one to track down and fix up. We'll aim to have beta4 w/ this patch ready tomorrow.
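For background, Go's `http.Header` is a plain `map[string][]string`, so ranging over a request's headers in one goroutine while another goroutine sets a header triggers exactly this runtime fault. The sketch below is an illustration of the general pattern and one common remedy (copying the header before it crosses a goroutine boundary); it is not the actual Trickster patch, and the header names and function names are placeholders.

```go
// Illustration only, not Trickster's code: http.Header is a map, so
// iterating it in one goroutine while another goroutine writes to it
// triggers "fatal error: concurrent map iteration and map write".
// Handing each goroutine its own copy avoids the race entirely.
package main

import (
	"net/http"
	"sync"
)

func inspectHeadersAsync(h http.Header, wg *sync.WaitGroup) {
	// Clone before sharing: the goroutine ranges over its own map,
	// so later writes to the original header cannot race with it.
	own := h.Clone()
	wg.Add(1)
	go func() {
		defer wg.Done()
		for k, v := range own {
			_, _ = k, v // stand-in for real logging/metrics work
		}
	}()
}

func main() {
	var wg sync.WaitGroup
	h := http.Header{}
	h.Set("X-Example", "1") // placeholder header name

	inspectHeadersAsync(h, &wg)
	h.Set("Cache-Control", "no-store") // safe: the goroutine reads a copy

	wg.Wait()
}
```

A `sync.RWMutex` around the shared map works just as well when copying is too expensive; either way, the iteration and the write can no longer interleave.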
Pulled down the 1.1.0 Beta 4 Docker image and unleashed the traffic on it. Got a panic; stack trace here. EDIT: Fixed the gist link. If you clicked the gist via an email alert, it was to an older Gist. Sorry! Back to 1.0.3 for now! 😄
Wesley and I worked through this in a separate channel, and the fix for the map iteration was applied in Beta 6. We'll close for now, and Wesley can reopen if any more issues arise relating to InfluxDB.
Previously we were testing out 1.0.3, and I just dropped 1.1 beta 3 in, pulled fresh from Docker Hub:
We're using an in-memory cache. Our backend is InfluxDB 1.7. We're sending about 5,000 queries per minute through Trickster, and within just a minute or two of the container being up we're seeing:

fatal error: concurrent map iteration and map write

...and the usual avalanche of about a million stack trace log lines after that. Our running config is fairly simple, with just one cache and one origin. We've given Trickster about 200GB of memory and run it in a Docker container on Linux (technically CoreOS). I can provide any further details that would be helpful in tracking this down, but so far that's about all I've got. 😄
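For anyone seeing this message for the first time: it comes from the Go runtime's built-in detector for unsynchronized map access, it aborts the whole process, and it cannot be caught with `recover`. A minimal standalone reproduction, unrelated to Trickster's code, looks roughly like this:

```go
// Standalone reproduction of the runtime error, unrelated to Trickster:
// one goroutine keeps writing to a shared map while the main goroutine
// ranges over it. This usually aborts almost immediately with
// "fatal error: concurrent map iteration and map write".
package main

func main() {
	m := map[string]int{"a": 1}

	go func() {
		for i := 0; ; i++ {
			m["b"] = i // unsynchronized write
		}
	}()

	for {
		for range m { // unsynchronized iteration
		}
	}
}
```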