High cpu usage and unresponding lightning-cli command #796
Comments
Those CHANNEL_OPENING peers never got far enough in negotiation to be permanently remembered: committing them to the db is basically a bug (though minor and not trivial to fix). Will test with 20 awaiting_lockin channels and see what the CPU usage is all about...
When most channels are restored to the …
Also, and this might also cause high system load: when debug-log is enabled, lots of lines are written to the logfile (1–4 MB/s). It is hard to debug other stuff because of all these lines. Maybe it's a good idea to move these spam lines to another log level (for example "EVERYTHING"), so that testers are able to use the debug log level for actual debugging.
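The suggestion above — an extra level below "debug" for high-volume gossip lines — can be sketched with Python's `logging` module. This is only an illustration of the idea (lightningd's log levels are its own C implementation); the logger name and level value here are made up for the example.

```python
import logging

# Hypothetical "TRACE" level, numerically below DEBUG (10), for per-message
# gossip spam. Testers running at DEBUG would no longer see these lines.
TRACE = 5
logging.addLevelName(TRACE, "TRACE")

logger = logging.getLogger("gossip")
logger.setLevel(logging.DEBUG)  # normal debugging session

logger.log(TRACE, "per-message gossip spam")  # filtered out at DEBUG
logger.debug("useful debugging line")          # still emitted
```

With this split, `debug` stays usable for actual debugging while the firehose is only enabled on request.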
I think I got c-lightning to the point where it is no longer usable. Already waiting 30 minutes for a return on the … command. I got 52 … Does anyone else see the same behavior when trying a node with more than a couple of payment channels?
It's most likely due to the initial gossip sync, and to the fact that some peers simply do not prune their …
That explains it. Is UTXO tracking for c-lightning something that will be implemented in the very near future, or will that take some time? I'm asking so I can consider alternatives for my goal of running a high-connectivity LN node.
I'll add a gettxout cache for the on-chain validation ASAP, since that also kills the store. The UTXO tracking will most likely take a bit longer, but the short-term solution should fix the worst problems.
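The short-term fix mentioned here — memoizing `gettxout` lookups so repeated on-chain validation of the same output doesn't hit bitcoind's RPC every time — can be sketched as follows. This is not the actual c-lightning patch; the RPC function is injected so the caching idea can be shown (and tested) without a running node.

```python
from functools import lru_cache

def make_gettxout_cache(rpc_gettxout, maxsize=100_000):
    """Wrap an injected gettxout-style RPC call with an LRU cache,
    keyed on (txid, vout)."""
    @lru_cache(maxsize=maxsize)
    def cached(txid, vout):
        return rpc_gettxout(txid, vout)
    return cached

# Usage sketch with a stand-in RPC that records how often it is hit:
calls = []
def fake_rpc(txid, vout):
    calls.append((txid, vout))
    return {"value": 0.5}  # placeholder result, not real gettxout output

gettxout = make_gettxout_cache(fake_rpc)
gettxout("aa" * 32, 0)
gettxout("aa" * 32, 0)  # second lookup is served from the cache
```

After the two lookups, `calls` holds a single entry: only the first request reached the (fake) RPC, which is exactly the load reduction the cache is meant to provide.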
Thank you. I'll wait for your fix before doing any further testing.
Closing due to inactivity; the issue is fixed, imho.
I did some drastic testing with the latest version and opened around 20–30 payment channels.
While 2 of them had the status:
CHANNELD_NORMAL
the rest had the status:
CHANNELD_AWAITING_LOCKIN
At this point I decided to stop the daemon, remove the directory (not the configuration directory), and update to the latest version.
After that I started the daemon again.
Results:
- High CPU usage (100%) for
lightning_gossipd
- Several lightning_channeld processes spawned, as expected; load is minimal on these specific processes.
- However, the logging (log-level=info, because debug is untraceable due to the high spam) says the following:
Before the upgrade these channels had a txid.
Also, the following command seems to stall (it only returned information after around 15 minutes): …
Is it normal that it takes so long to restore the channels?
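To keep an eye on how many channels are stuck in CHANNELD_AWAITING_LOCKIN versus CHANNELD_NORMAL, the JSON that `lightning-cli listpeers` returns can be tallied by state. The helper below assumes the documented peers/channels/state layout of that output; the sample data is fabricated for illustration.

```python
from collections import Counter

def count_channel_states(listpeers):
    """Tally channel states from listpeers-style JSON:
    {"peers": [{"channels": [{"state": ...}, ...]}, ...]}."""
    return Counter(
        ch["state"]
        for peer in listpeers.get("peers", [])
        for ch in peer.get("channels", [])
    )

# Fabricated sample mirroring the situation described in this issue:
sample = {"peers": [
    {"channels": [{"state": "CHANNELD_NORMAL"}]},
    {"channels": [{"state": "CHANNELD_NORMAL"}]},
    {"channels": [{"state": "CHANNELD_AWAITING_LOCKIN"}]},
    {"channels": [{"state": "CHANNELD_AWAITING_LOCKIN"}]},
]}
counts = count_channel_states(sample)
```

Running this periodically gives a quick view of whether the restore is making progress without having to grep a multi-MB/s debug log.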