max-sql-memory not set #35054
Here is an example of the failing query. Table structure:
Query:
Here is the EXPLAIN statement for the query above:
EXPLAIN (OPT) result:
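For reference, CockroachDB produces the optimizer plan with `EXPLAIN (OPT)`; a minimal sketch, with a hypothetical rollup-style query standing in for the one above:

```sql
-- Hypothetical stand-in for the failing aggregate query
EXPLAIN (OPT)
  SELECT device_id, date_trunc('hour', ts), avg(value)
  FROM measurements
  GROUP BY device_id, date_trunc('hour', ts);
```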
Thanks for the report. I'm a bit perplexed because the line here:
is exactly the line that reflects the size of the root memory budget. As you point out, the error message reflects 128 MB, not 3.8 GB. Are you completely sure that the server that printed that error message was the same one that you ran the query on? If you are running a cluster, all nodes need to be started with that flag, not just one.
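To make that concrete, here is a sketch of what this would look like on a three-node cluster (the hostnames and join addresses are placeholders, not from this report):

```sh
# Run on every node: --cache and --max-sql-memory are per-process
# settings, not cluster-wide ones.
cockroach start --insecure \
  --store=/mnt/data/cockroachdb \
  --cache=.25 --max-sql-memory=.25 \
  --join=node1:26257,node2:26257,node3:26257
```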
Hi @jordanlewis, I restarted the server on Monday with the 3.8 GiB cache and SQL memory pool size. That's also when we started seeing 3.8 GiB in the startup logs. The query errors in the logs with the 128 MiB budget limit are from today. All applications using CockroachDB have been restarted as well. We are a bit puzzled too. We run a single-node setup at the moment.
Can you please confirm that running the problematic query in `cockroach sql` directly also produces the error?
Right now it works when I run the query via `cockroach sql`.
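For a reproducible check, the built-in shell can run the statement non-interactively; a sketch matching the insecure single-node setup described above (`failing_query.sql` is a hypothetical file holding the problematic statement):

```sh
# Feed the saved statement to the node's SQL shell on its stdin
cockroach sql --insecure --port=26257 < failing_query.sql
```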
We have about 500 Java clients using the Postgres driver that calculate daily/hourly/minute rollups, and we end up having spikes throughout the day when we see the "budget exceeded" error in cockroachdb.log. During the spikes, a handful of the 500 Java clients fail 20–50 times (see screenshot below). Database configs are 100% identical.
With our current setup we tested different memory settings to see the connection behaviour: at 128 MiB we could open 1245 connections. So we should be fine with our 500 clients.
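Back-of-the-envelope, assuming connection capacity scales roughly linearly with the pool size (an assumption on our part, not something CockroachDB guarantees):

```sql
-- Per-connection reservation implied by the 128 MiB test
SELECT 134217728 / 1245;                     -- ~107 KB per connection
-- Connections a 3.8 GiB pool might then support
SELECT (3.8 * 1024 * 1024 * 1024) / 107805;  -- ~37,800
```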
I'm surprised this aggregate would be the one to cause the out-of-memory errors. Are there other concurrent queries? All queries share the root memory pool; perhaps there's one that's much heavier weight? Is the error message you included at the top of the issue the actual captured error message from the application, after you changed the setting?
I checked again; there are multiple failing queries on top of the ones Marcel mentioned. The value they report for the size of the pool in the error message is always the same: the 128 MiB mentioned above.
I also traced through the cockroach code and everything looks correct to me, so the size should be set. But whatever we do, we always get those 128 MiB. I am 100% sure that it must be something very basic we are missing here, but we are currently out of ideas :)
Hi @jordanlewis, we found the problem. The log messages from our log management provider are two days behind but have current timestamps (see screenshot). Thank you for your help 👍
No problem, I'm glad it was something like that!
Describe the problem
After porting some memory-heavy queries from Postgres to CockroachDB, we see issues with exceeding the memory budget. We followed the production checklist and adjusted the values to the recommended 25%.
We still get:
ERROR: root: memory budget exceeded: 10240 bytes requested, 134210560 currently allocated, 134217728 bytes in budget
The value for "bytes in budget" still reflects 128 MiB, so it seems the values don't get passed through.
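Worth noting: 134217728 bytes is exactly 128 MiB, which (per the CockroachDB docs of this era) is the default for --max-sql-memory, i.e. the value used when the flag is not applied:

```sql
-- 134217728 bytes = 128 * 1024 * 1024, exactly the 128 MiB default
SELECT 134217728 / (1024 * 1024);  -- 128
```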
We checked cockroach.log and see the configured values reflected in the startup output, so it looks like we set the values correctly.
To Reproduce
This is the command line we use for starting cockroach:
/opt/cockroachdb/cockroach start --store=/mnt/data/cockroachdb --temp-dir=/tmp --port=26257 --http-port=7005 --log-dir=/mnt/logs/cockroachdb --cache=.25 --max-sql-memory=.25 --insecure
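As an aside, both flags also accept absolute sizes, which can make the effective budget easier to confirm in the startup logs; a sketch (the 3.8GiB figure assumes the pool size reported earlier):

```sh
# Same start command, with explicit sizes instead of percentages
/opt/cockroachdb/cockroach start --store=/mnt/data/cockroachdb --temp-dir=/tmp \
  --port=26257 --http-port=7005 --log-dir=/mnt/logs/cockroachdb \
  --cache=3.8GiB --max-sql-memory=3.8GiB --insecure
```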
Expected behavior
We expected to see the queries be able to use more than the 128 MiB.
Environment: