Custom Prombench Tests #321
Comments
It is just one extra step to open a PR from your branch and run it as usual, no?
I think personal forks and branches are a bit too much to ask. Can we provide a set of additional subcommands and config options to the prombench command? I'm sure Chris and I would be happy to help with a remote write test for prombench.
Yep, more flexibility in the prombench configs would be an amazing addition. @cstyan if you have the time to look into this that would be great. I guess a proposal doc for the implementation could be the way to discuss this?
I can help with something immediate to get over the memory issues so we can properly benchmark 2.16.0-rc.0, anything longer term I wouldn't be able to get to for a few weeks at least. @geekodour what would you suggest for a short term fix? |
@cstyan A benchmark is already running for prometheus/prometheus#6729. The last one OOMed at 8h; this one is still running 10h in. So the short-term fix is to rerun it (which is running now) and hope it does not crash again 😞 since we reduced the memory. Otherwise, we can revert the following to
@cstyan if this fails, we can maybe start another test with |
For the long-term resolution of the memory issue, if we do not intend to increase the memory of the instances, I think we can try reducing the number of series, to prevent this kind of out-of-memory failure in the future. Thoughts?
Did the
I think we should do this, just for this next 3d benchmark, so we can confirm 2.16.0 isn't performing worse in terms of memory usage than 2.15.x. Then going forward we could reduce the number of series and go back to
What do you guys think about that?
@cstyan Yes, it did work. I started another one manually to inspect the failing behavior again, since Grafana was not showing the previous Loki logs (#322). We can manually start a test with
I would say decrease the number of fake servers to decrease the number of ingested series.
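The suggestion above rests on simple arithmetic: ingested series scale linearly with the number of fake targets, so cutting targets cuts series (and, roughly, memory) proportionally. A minimal back-of-envelope sketch, with purely illustrative numbers (not prombench's actual defaults):

```python
# Hypothetical sketch: total ingested series scale linearly with the
# number of fake webserver targets. All numbers below are assumptions
# for illustration, not real prombench configuration values.
fake_targets = 100        # assumed number of fake targets being scraped
series_per_target = 1000  # assumed series exposed by each target

total_series = fake_targets * series_per_target
print(total_series)       # 100000

# Halving the targets halves the ingested series:
reduced_series = (fake_targets // 2) * series_per_target
print(reduced_series)     # 50000
```

So halving the fake-server count should roughly halve head-series memory pressure, which is the short-term lever being proposed here instead of resizing the instances.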
Running it in
link to Loki logs
Why does the time series count get very high when restarting Prometheus, though? @cstyan I am starting a new test for
It is clearly visible that the new Prometheus version uses more memory and does a lot more allocations, but yeah, let's reduce the number of targets and see how it goes.
@cstyan @krasi-georgiev I started a test for
Closing, since the memory issue was resolved and there is no requirement for such custom tests as of now.
The current prombench setup is rigid. Recently there has been some interest in running custom prombench tests, especially since the current prombench setup is running on low memory (prometheus/prometheus#6729).
I suggest having something like this:
This way prombench users will be able to fork prombench and add their custom changes to a branch, and the prombench infrastructure will use that branch when running the test. Additionally, for cases like requiring high memory, we can have a template branch in the prombench repo itself.
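To make the fork-plus-branch idea concrete, here is a purely hypothetical sketch of what a per-test configuration for it might look like. This is not an actual prombench schema, and the repo URL, branch name, and field names are all invented placeholders:

```yaml
# Hypothetical sketch only — not a real prombench config format.
test:
  # Personal fork carrying the customized manifests:
  repo: https://github.com/<your-user>/prombench
  # Branch the infrastructure would check out when deploying the test,
  # e.g. one based on a high-memory template branch in the main repo:
  branch: custom-high-memory-test
```

The point of the sketch is just the two knobs the proposal implies: which fork to pull the benchmark manifests from, and which branch on it to deploy.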
Please let me know what you think.
cc:
@codesome with prometheus/prometheus#6679
@csmarchbanks #249
@cstyan prometheus/prometheus#6729
@krasi-georgiev