[Discussion] Stress Testing monerod #9348
Comments
That ability has always existed. But starting on an ad hoc basis like this requires all participants to communicate with each other to tell each other their specific node addresses, so it takes some coordination.
Perhaps I am exaggerating the issues, but my thought was that using --add-exclusive-node to create an extra testnet is not a quality solution, both because of the difficulty of coordination and because someone might connect with a copy of the existing testnet chain. That would destroy/overwrite the alternative chain that people are attempting to create.
In the interest of trying to get something set up quickly, I would like to share my hasty attempt at a disposable testnet/stressnet (https://github.com/spackle-xmr/monero). It is a simple testnet replacement, making no other changes. My node's p2p address is stressnet.net:28080 if anyone wishes to use it.
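For anyone joining, the startup command would look roughly like this (a sketch assuming a daemon built from that repo and the standard testnet ports; adjust to your build):

```
# Run the patched daemon and peer with the stressnet node above.
# --add-exclusive-node restricts p2p connections to the listed peers only.
./monerod --testnet --add-exclusive-node stressnet.net:28080
```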
For the time being, in my ugly solution, what I am doing is to use […]. Maybe having […].
Another option might be to publish a copy of the testnet after running --pop-blocks to the most recent fork and then mining/churning to a single address for a while. Publishing that chain and the miner seed phrase would offer an end product that: […]
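For reference, a sketch of the rewind step (the count of 1000 is a placeholder; the right number depends on where the most recent fork sits):

```
# Interactively, from a running monerod console: rewind the chain N blocks.
pop_blocks 1000

# Or offline, via the blockchain import tool, which accepts --pop-blocks.
./monero-blockchain-import --testnet --pop-blocks 1000
```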
I expect that having this available will make independent stress testing more attractive. |
I made some tools to stress test monerod: https://github.com/Boog900/Monero-stress-test-tools

My idea was to pop blocks back to when we know the txpool was huge and push the transactions from the blocks after that point into the node's pool, doing this at height […]. Then I also created a tool to make and maintain a certain number of "fake" connections to a node; these connections do just enough to stay connected and nothing else, but monerod will still fluff txs to them. Using these tools I am able to reliably get a node killed.

The first thing to note is that even with no connections and spamming txs, monerod still likes to use a lot of RAM, however using […]. My node got killed in a VM with 10GB of RAM with ~150 connections (I can't remember how long it took), and I have killed a node 3 times in a VM with 5GB of RAM with 100 connections, within 20 minutes each. I wouldn't recommend setting the […].
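As a rough illustration of the tx-replay step (a minimal sketch, not the linked tools themselves; it assumes a local testnet daemon with RPC on 127.0.0.1:28081 and a txs.txt file of raw transaction hex blobs, one per line):

```python
# Minimal sketch: replay raw transaction hex blobs against a testnet
# daemon's send_raw_transaction endpoint. The RPC address and the txs.txt
# input file are assumptions for illustration.
import json
import urllib.request

RPC_URL = "http://127.0.0.1:28081/send_raw_transaction"

def push_tx(tx_hex: str) -> dict:
    """POST one raw transaction to the daemon and return its JSON reply."""
    body = json.dumps({"tx_as_hex": tx_hex, "do_not_relay": False}).encode()
    req = urllib.request.Request(
        RPC_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    with open("txs.txt") as f:
        for line in f:
            tx_hex = line.strip()
            if not tx_hex:
                continue
            reply = push_tx(tx_hex)
            # "status" is "OK" on acceptance; rejected txs set "reason"
            # and flags such as "double_spend" in the reply.
            print(reply.get("status"), reply.get("reason", ""))
```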
I want to confirm that the testnet fork / 'stressnet' set up here is now running with community support. There are over 35 nodes on the network, with flooding set to begin at 15:00 UTC on June 19th.
I see you on one of my nodes via 'print_cn'. Should be good to go. |
I have seen multiple people express the need for extensive stress testing of monerod. Per the recent MRL meeting (monero-project/meta#1015), this might be done either with simulation tools or via a dedicated/abusable testnet. The intention is to address any daemon performance issues which present a roadblock. It is important to note that the current set of issues does not appear to be readily reproducible in isolated environments.
My personal belief is that the present situation calls for the creation of a new/disposable testnet, though that would admittedly require significant participation for the testing to work as desired. I imagine an additional testnet could be integrated into the project as a recurring temporary network that runs for a limited time frame each year. Perhaps, as was suggested to me, an even more fully featured approach might be taken by adding the ability for monerod to spin up custom public testnets using command line parameters.
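To make that last suggestion concrete, such an interface might look roughly like the following (hypothetical flags only; no such options exist in monerod today):

```
# Hypothetical sketch -- these custom-net flags do not currently exist.
# Spin up or join an ephemeral public network identified by a custom
# network ID, bootstrapping from a community seed node.
./monerod --custom-net-id 7374726573736e6574 --seed-node stressnet.net:28080
```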
I do not have the background needed to discuss creating appropriate simulation tools, and I hope others will speak to that.
In any case, I believe an additional testing tool would be helpful and I hope this issue can guide collaboration on creating it.