As several posts suggest, the database grows really fast (#2431, #2615; for me, 38 TB after one year), and pruning often does not work (#2441, #2580, #2586) (and is probably not viable for archival nodes anyway), so a periodic reinstall is a must. The problem:
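To put the "grows really fast" claim in perspective, here is a back-of-the-envelope calculation from the 38 TB/year figure quoted above. This is purely illustrative arithmetic, not an official growth metric:

```python
# Rough daily growth rate implied by ~38 TB of archive state per year.
tb_per_year = 38
gb_per_day = tb_per_year * 1024 / 365
print(f"~{gb_per_day:.0f} GB/day")
```

That works out to roughly 100 GB of new data per day, which explains why snapshots and pruning both struggle to keep up.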
```
INFO [09-06|08:50:28.971] created block l2Block=140,516,812 l2BlockHash=734523..689678
INFO [09-06|08:50:29.744] latest assertion not yet in our node staker=0x0000000000000000000000000000000000000000 assertion=16635 state="{BlockHash:0x168c0639b8aec4d60cd156c7d230990505abd5bb6a78005533457f9efe416692 SendRoot:0xe093081ccfa08219309709a929aa561ba3dda3906a3e839ca421a5e6be3b3404 Batch:683243 PosInBatch:0}"
INFO [09-06|08:50:29.971] created block l2Block=140,516,824 l2BlockHash=2933bc..58daef
INFO [09-06|08:50:30.392] catching up to chain blocks target="{BatchNumber:683243 PosInBatch:0}" current="{BatchNumber:0 PosInBatch:0}"
INFO [09-06|08:50:30.972] created block l2Block=140,516,842 l2BlockHash=650701..62a9a7
INFO [09-06|08:50:31.972] created block l2Block=140,516,866 l2BlockHash=99923d..e8bc84
```
The node appears to be re-checking "half the blocks", starting from around block 135,000,000 of 250,000,000. Even on my high-performance node (Ryzen 9 7900X, 3x NVMe in RAID 0), this is expected to take around 30 days to complete.
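One hedged way to sanity-check the ETA is to read the replay rate off the "created block" log timestamps above. This is a tiny sample (a few seconds of logs), so the real rate varies and the resulting figure is only an order-of-magnitude estimate:

```python
# Estimate block-replay ETA from the "created block" log lines above.
# Numbers are taken directly from the quoted log; this is an
# approximation from a ~3 s window, not an official sync metric.
start_block = 140_516_812       # first "created block" in the log
end_block = 250_000_000         # approximate current chain head

# Blocks created between 08:50:28.971 and 08:50:31.972 (~3 s window).
blocks_in_window = 140_516_866 - 140_516_812
window_seconds = 3.0
rate = blocks_in_window / window_seconds    # blocks per second

remaining = end_block - start_block
eta_days = remaining / rate / 86_400
print(f"~{rate:.0f} blocks/s, ETA ≈ {eta_days:.0f} days")
```

At the ~18 blocks/s visible in this short window, the replay would take on the order of weeks to months, which is consistent with (if anything worse than) the ~30-day estimate.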
Okay, I found your statement here, so in principle you gave up supporting archival node snapshots four months ago:
> As of May 2024, archive node snapshots for Arbitrum One, Arbitrum Nova, and Arbitrum Sepolia are no longer being updated on https://snapshot.arbitrum.foundation/index.html due to accelerated database and state growth.

https://docs.arbitrum.io/run-arbitrum-node/more-types/run-archive-node
But block 140516812 from the log above dates back to October 2023, so is the archive snapshot really that old? How are we expected to run a full archival node?
How can we skip this part? Here is my boot command:
Or is the snapshot image just really old? It is 5 TB in size.