Parity warp sync is no longer very warpy. #6372
Upon further inspection, it appears that the initial launch of Parity post-install is …
As an update, I just restarted my computer (for unrelated reasons) and Parity crashed on startup the first time; on re-launch it started the warp restore over again from 0%, and my best block says 3,831,654 (where it was at the time of restart). Perhaps this should be a separate issue (let me know if so), but it would seem that restarting in the middle of a warp restore causes problems.
Please share some logs. It's hard to tell why the warp sync is stuck at a certain state / block.
It has recovered at this point; is it possible to see a log history, or are the logs pruned over time? If not, the repro steps are:
If I end up resetting my chain and reproducing this myself, I'll try to enable log capture during sync.
Actually, I can reproduce this on different setups. Sometimes it feels random. https://gist.github.com/5chdn/683b905aa410de0232690fd9ddaf32fb

The current state grew to around 1.3 GB and we have 6.3 million unique addresses: https://etherscan.io/chart/address

Not sure what the future will bring for warp sync, but it's quite probable that end-users will switch to light mode by default on most machines.
#6371 is the same issue; the logs look similar to mine.
Yes, that's my issue #6371. Is there any way to restore a Parity account in another Ethereum-based wallet?
Just noticed that the operating mode is passive. Should it be that way?
@arkpar I was able to reproduce this on my office laptop today and logged the full trace of sync, network, and snapshot: https://5chdn.co/dump/g7xny97o/8i4h/warp-sync-fail.log.tar.bz2 It's around 65 minutes of logs; warp sync was stuck at 32% and fetching blocks in the background.
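For anyone wanting to capture a similar trace, a minimal sketch of the invocation is below. It assumes the `-l`/`--logging` and `--log-file` options of Parity builds from that period, and the `sync`, `network` and `snapshot` log targets named above; treat the exact target names as assumptions.

```sh
# Sketch: capture trace-level logs for the sync, network and snapshot
# modules and write them to a file (target names are assumptions).
parity --logging sync=trace,network=trace,snapshot=trace \
       --log-file warp-sync-debug.log
```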
@5chdn could you repeat the test with the …
@arkpar compiled and running. So far it looks good. However, I had to reset my configuration this morning to unregister some tokens. I wasn't able to fix the warp sync until I removed everything in …
Here is another user with this issue: https://www.screencast.com/t/bBcU5oYXKm For some reason it tries to fetch 700+ snapshots (and eventually fails), while a normal warp sync should only fetch ~360.
There are multiple problems here:
Re: 2) can't we force a default number of "snapshot peers"?
Did you delete my reply and "summarize" it in yours? O_o Seriously?
Could you rephrase that question?
My apologies, I think I'm overworked. I commented in the wrong issue. Feel free to delete my latest posts in this thread.
How is it that this is closed?
Sometimes it will sync snapshots, sometimes it will not. It appears to be completely random, and with my invocation(s) I can't get enough information out of Parity to figure out why it is doing what it is doing. Given sufficient time, I think I could build a cryptographically secure two-headed coin toss around the question, "Will it try to sync via warp on this invocation after 5 minutes?" HOW would I have Parity state things like:
The tool's complete lack of transparency as to why it can't sync is entirely frustrating.
Warp sync is hardly salvageable, I'm sorry. The key issue is #7436, which prevents most nodes from generating snapshots, but even if we could fix that temporarily, it would only be an intermediate duct-tape solution until the state grows too big again. With 1.8.6 I can recommend resetting the DB and trying a full sync, and in the long term (1.10+) we will stabilize the light-client experience for different use-cases.
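A minimal sketch of that suggested recovery path, assuming the `db kill` subcommand and the `--no-warp` flag available in the 1.8.x series (chain selection and base path are left at their defaults):

```sh
# Sketch: wipe the blockchain database for the selected chain,
# then restart with warp sync disabled to force a full sync.
parity db kill
parity --no-warp
```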
Ubuntu 16.04, Parity 1.8.11

Step 1 is important.
So is this a "won't fix"? If so, does that mean that Parity is being retired? This is a show-stopper for anyone looking to set up a new node. I have been trying for almost 4 days and have now resorted to trying the "no warp" sync, which looks like it may take another 4 days. If so, that is going to be pretty much useless in terms of bringing new nodes online.
With warp sync no longer working and the light client not ready yet, Parity is completely unusable for me and, I guess, for anyone who can't afford to sync the full blockchain. The suggested workaround of killing the DB does the trick, but it has to be redone every time I open Parity, so it's also unfeasible. If warp mode can't be fixed, can any mention of it at least be removed from the end-user documentation, so people don't waste their time trying it?
@ravensorb The issues have been identified and split into sub-tasks; you can find an overview here: https://wiki.parity.io/Known-Issues-Priorities

@codewiz Warp sync is not broken per se, and it works very well for any chain other than Ethereum. The current state size of Ethereum is a serious problem, and this is not a Parity issue in the first place.
@ravensorb we will definitely attempt to fix it; this issue is a duplicate. See the list of known issues, this is item 1.3.

@codewiz Did you try the light client? If you can't afford to sync the chain, then you can't afford a full node and …
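For completeness, a minimal sketch of starting the light client, assuming the experimental `--light` flag shipped in the 1.9/1.10 series:

```sh
# Sketch: run in light-client mode (headers only, state fetched on demand).
parity --light
```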
I tried the light client, 1.9.5-stable. Unfortunately, it took 5 hours to get synced. The database size is just 2.5 MB, but the download speed is extremely slow.
Also, if it helps: I think the warp sync issue also occurs when Parity maxes out the I/O of the storage system (it's not just low memory).
@melnikaite 5 hours is incredibly fast for verifying 5.3 million block headers :)

@ravensorb to get a full Parity DB synchronized, just leave your client running overnight, ideally on a machine with an SSD.
@5chdn Is it possible to disable verification?
@melnikaite yes #8075 - in 1.11:
@5chdn I was afraid of that :) I am now going on 23 hours using the nightly Docker image (pulled just before I started the sync) and it is stuck at 90.90%. I'll give it another few hours and then, if it doesn't make any progress, I'll kill it, clear the DB, and try again.
No need to kill the DB; just leave it running, otherwise it will start from scratch again.
@5chdn I noticed :) It seems to take about 2 hours to get to block 4,880,040, and then it drops to syncing a block every 2 seconds. If my math is right, that means I am looking at almost 10 days to complete the last 446k blocks. Does that sound correct?
Looks about right |
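For reference, the back-of-the-envelope arithmetic behind that estimate:

$$
446{,}000\ \text{blocks} \times 2\ \text{s/block} = 892{,}000\ \text{s} \approx 10.3\ \text{days}
$$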
@5chdn all I can say is WOW. If you're curious, in the past 48 hours it has only progressed 1%. Are there any options to speed this up? If it helps, here is the command I am using to launch the Docker container:
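The original command did not survive in this thread, so here is a hedged, illustrative invocation instead. It assumes the public `parity/parity` image, the container's default data directory, and the `--cache-size` flag discussed a few comments below; adjust the host path, image tag and ports to your setup.

```sh
# Illustrative only: nightly image, persistent data volume, default p2p
# port, and a larger discretionary cache (values are assumptions).
docker run -d --name parity \
  -v /srv/parity:/home/parity/.local/share/io.parity.ethereum \
  -p 30303:30303 -p 30303:30303/udp \
  parity/parity:nightly \
  --cache-size 4096
```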
Increase the cache size.
@5chdn What cache size do you recommend?
My favorite cache size is 12288 :) |
My favorite cache size is 31337 |
This problem is clearly still occurring, but I've noticed that when it drops out of warp mode into normal syncing, I usually get the error: …
I hope this log will be useful for investigating this issue: https://pastebin.com/YN8sUs2t
Hey!!! I suffered an accident. Out of hospital.
I just installed Parity onto a brand-new computer with a fresh install of Windows 10 Pro. The computer has Hyper-V enabled and Docker installed, but is otherwise a stock Windows 10 Pro machine of reasonable power (Dell XPS 9560). It is connected via 5 GHz 802.11 a/b/n wireless to a nearby router and can easily max out my internet connection (100 Mb/s down, 12 Mb/s up).
It launched after install about 12 hours ago, and at first the warp sync was moving quickly. However, at about 91.46% warp restore (pretty shortly into the initial sync), the restore progress froze and the best block started counting up from 0. About 12 hours later it is only up to best block 3,594,000.
On previous versions of Parity, launching with no chain data and no extra parameters (other than the default `ui`) would result in a full restore in about an hour on a less powerful computer.

Over the course of this time, it has written 2 TB to disk (presumably mostly overwrites, since I only have a 1 TB disk) and it has read almost nothing off disk (42 MB). It has received ~7 GB over the network and only sent about 300 MB.
It seems there are several issues floating around about Parity's footprint (one of them even opened by me), and I apologize if this should have been a comment on one of those, but none of them described the same symptoms, so I wasn't sure.
Some cumulative numbers across the lifetime of the process: