This repository has been archived by the owner on Nov 6, 2020. It is now read-only.

Snapshot creation and restoration #1679

Merged: 88 commits, Aug 5, 2016
317a3bf
to_rlp takes self by-reference
rphmeier Jul 12, 2016
575de78
clean up some derefs
rphmeier Jul 13, 2016
aec9c9e
Merge branch 'master' into pv64
rphmeier Jul 13, 2016
0ebd69a
Merge branch 'master' into pv64
rphmeier Jul 14, 2016
a0f9759
out-of-order insertion for blockchain
rphmeier Jul 14, 2016
02d052d
implement block rebuilder without verification
rphmeier Jul 14, 2016
e8f7c4f
group block chunk header into struct
rphmeier Jul 14, 2016
9f96512
block rebuilder does verification
rphmeier Jul 14, 2016
a8da992
integrate snapshot service with client service; flesh out implementat…
rphmeier Jul 15, 2016
7b3b494
initial implementation of snapshot service
rphmeier Jul 16, 2016
51fc3d3
remove snapshottaker trait
rphmeier Jul 16, 2016
b77d8e2
snapshot writer trait with packed and loose implementations
rphmeier Jul 16, 2016
910b77f
write chunks using "snapshotwriter" in service
rphmeier Jul 16, 2016
039decc
have snapshot taking use snapshotwriter
rphmeier Jul 16, 2016
2c68467
implement snapshot readers
rphmeier Jul 16, 2016
dbc15c3
back up client dbs when replacing
rphmeier Jul 16, 2016
f3ad832
use snapshot reader in snapshot service
rphmeier Jul 17, 2016
b588feb
describe offset format
rphmeier Jul 17, 2016
4ecf7b9
use new get_db_path in parity, allow some errors in service
rphmeier Jul 18, 2016
fea41ad
merge with master
rphmeier Jul 18, 2016
5ea7324
merge with master
rphmeier Jul 18, 2016
13852b9
blockchain formatting
rphmeier Jul 18, 2016
257b8d3
implement parity snapshot
rphmeier Jul 18, 2016
1573acd
implement snapshot restore
rphmeier Jul 18, 2016
edf75dc
force blocks to be submitted in order
rphmeier Jul 19, 2016
d7a1f6e
fix bug loading block hashes in packed reader
rphmeier Jul 19, 2016
de0236b
fix seal field loading
rphmeier Jul 19, 2016
9ca9d0d
fix uncle hash computation
rphmeier Jul 19, 2016
0bebec3
fix a few bugs
rphmeier Jul 19, 2016
5c67073
store genesis state in db. reverse block chunk order in packed writer
rphmeier Jul 19, 2016
4187c54
merge with master
rphmeier Jul 19, 2016
0ec4da7
allow out-of-order import for blocks
rphmeier Jul 20, 2016
a119416
bring restoration types together
rphmeier Jul 20, 2016
d756943
only snapshot the last 30000 blocks
rphmeier Jul 20, 2016
8cf8ea9
merge with master
rphmeier Jul 20, 2016
df0f566
restore into overlaydb instead of journaldb
rphmeier Jul 20, 2016
3468cfa
commit version to database
rphmeier Jul 20, 2016
207e38a
use memorydbs and commit directly
rphmeier Jul 21, 2016
f6ca5ff
fix trie test compilation
rphmeier Jul 21, 2016
1e573fb
fix failing tests
rphmeier Jul 21, 2016
456c122
sha3_null_rlp, not H256::zero
rphmeier Jul 21, 2016
504a5ac
move overlaydb to ref_overlaydb, add new overlaydb without on-disk rc
rphmeier Jul 21, 2016
b04c065
port archivedb to new overlaydb
rphmeier Jul 21, 2016
3e8dbb4
add deletion mode tests for overlaydb
rphmeier Jul 21, 2016
b58806d
Merge branch 'overlaydb' into pv64
rphmeier Jul 21, 2016
f155fa4
use new overlaydb, check state root at end
rphmeier Jul 21, 2016
4bdbeb5
share chain info between state and block snapshotting
rphmeier Jul 21, 2016
200a2af
create blocks snapshot using blockchain directly
rphmeier Jul 22, 2016
2878a39
allow snapshot from arbitrary block, remove panickers from snapshot c…
rphmeier Jul 22, 2016
944f13a
begin test framework
rphmeier Jul 22, 2016
80dab3c
blockchain chunking test
rphmeier Jul 22, 2016
04ccbdf
implement stateproducer::tick
rphmeier Jul 22, 2016
53ef687
state snapshot test
rphmeier Jul 22, 2016
eeb143f
create block and state chunks concurrently, better restoration informant
rphmeier Jul 22, 2016
2784d83
fix tests
rphmeier Jul 22, 2016
fd566ff
add deletion mode tests for overlaydb
rphmeier Jul 21, 2016
22ed490
address comments
rphmeier Jul 26, 2016
64275fa
merge with master
rphmeier Jul 27, 2016
472c9c6
more tests
rphmeier Jul 27, 2016
3b8c4da
Merge branch 'overlaydb_no_archive' of https://github.com/rphmeier/pa…
gavofyork Jul 28, 2016
200a063
Fix up tests.
gavofyork Jul 28, 2016
c880ab6
remove a few printlns
rphmeier Jul 28, 2016
80bf077
merge with master
rphmeier Jul 30, 2016
b8d6986
add a little more documentation to `commit`
rphmeier Jul 30, 2016
1339ed6
fix tests
rphmeier Jul 30, 2016
8967db7
merge with master, break everything!
rphmeier Jul 30, 2016
0037aeb
get latest overlaydb
rphmeier Jul 30, 2016
d1d1727
fix ref_overlaydb test names
rphmeier Jul 30, 2016
70bcf86
Merge branch 'overlaydb_no_archive' into pv64
rphmeier Jul 30, 2016
3520fa7
snapshot command skeleton
rphmeier Jul 31, 2016
49d52e1
revert ref_overlaydb renaming
rphmeier Aug 2, 2016
72d8352
reimplement snapshot commands
rphmeier Aug 3, 2016
6368917
fix many errors
rphmeier Aug 3, 2016
82a58c0
everything but inject
rphmeier Aug 3, 2016
01835c5
merge with master
rphmeier Aug 3, 2016
ae45db8
get ethcore compiling
rphmeier Aug 3, 2016
758107e
get snapshot tests passing again
rphmeier Aug 3, 2016
917d1f3
instrument snapshot commands again
rphmeier Aug 3, 2016
64ea9d1
fix fallout from other changes, mark snapshots as experimental
rphmeier Aug 4, 2016
90aa435
merge with master
rphmeier Aug 4, 2016
19612ec
optimize injection patterns
rphmeier Aug 4, 2016
d36d5a8
do two injections
rphmeier Aug 4, 2016
d48e262
fix up tests
rphmeier Aug 4, 2016
2802734
take snapshots from 1000 blocks before
rphmeier Aug 4, 2016
30ea380
address minor comments
rphmeier Aug 5, 2016
96ef106
merge with master
rphmeier Aug 5, 2016
ad02931
fix a few io crate related errors
rphmeier Aug 5, 2016
57d4014
clarify names about total difficulty
rphmeier Aug 5, 2016
1 change: 1 addition & 0 deletions Cargo.lock

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions ethcore/Cargo.toml
@@ -34,6 +34,7 @@ ethjson = { path = "../json" }
ethcore-ipc = { path = "../ipc/rpc" }
ethstore = { path = "../ethstore" }
ethcore-ipc-nano = { path = "../ipc/nano" }
rand = "0.3"

[dependencies.hyper]
git = "https://github.com/ethcore/hyper"
2 changes: 1 addition & 1 deletion ethcore/src/block.rs
@@ -40,7 +40,7 @@ impl Block {
UntrustedRlp::new(b).as_val::<Block>().is_ok()
}

-/// Get the RLP-encoding of the block without the seal.
+/// Get the RLP-encoding of the block with or without the seal.
pub fn rlp_bytes(&self, seal: Seal) -> Bytes {
let mut block_rlp = RlpStream::new_list(3);
self.header.stream_rlp(&mut block_rlp, seal);
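The doc-comment fix above reflects that `rlp_bytes` can produce either the sealed or unsealed encoding depending on its `Seal` argument. A minimal standalone sketch of that pattern (simplified stand-in types, not the actual ethcore `Header` or RLP machinery):

```rust
// Hypothetical, simplified illustration: a Seal flag selects whether the
// seal fields are included in the serialized header.
#[derive(Clone, Copy)]
enum Seal {
    With,
    Without,
}

struct Header {
    number: u64,
    seal_fields: Vec<Vec<u8>>, // e.g. mix hash and nonce under Ethash
}

impl Header {
    // Emit the header fields, optionally followed by the seal fields.
    fn stream(&self, out: &mut Vec<u8>, seal: Seal) {
        out.extend_from_slice(&self.number.to_be_bytes());
        if let Seal::With = seal {
            for field in &self.seal_fields {
                out.extend_from_slice(field);
            }
        }
    }
}

fn main() {
    let header = Header { number: 1, seal_fields: vec![vec![0xAA; 8]] };
    let mut sealed = Vec::new();
    let mut unsealed = Vec::new();
    header.stream(&mut sealed, Seal::With);
    header.stream(&mut unsealed, Seal::Without);
    // The unsealed encoding carries only the 8-byte block number here.
    assert_eq!(unsealed.len(), 8);
    assert_eq!(sealed.len(), 16);
}
```

The same caller-chosen flag is what lets snapshot code hash or transmit headers without their proof-of-work seal.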
18 changes: 9 additions & 9 deletions ethcore/src/block_queue.rs
@@ -80,7 +80,7 @@ impl BlockQueueInfo {
/// Sorts them ready for blockchain insertion.
pub struct BlockQueue {
panic_handler: Arc<PanicHandler>,
-engine: Arc<Box<Engine>>,
+engine: Arc<Engine>,
more_to_verify: Arc<SCondvar>,
verification: Arc<Verification>,
verifiers: Vec<JoinHandle<()>>,
@@ -140,7 +140,7 @@ struct Verification {

impl BlockQueue {
/// Creates a new queue instance.
-pub fn new(config: BlockQueueConfig, engine: Arc<Box<Engine>>, message_channel: IoChannel<ClientIoMessage>) -> BlockQueue {
+pub fn new(config: BlockQueueConfig, engine: Arc<Engine>, message_channel: IoChannel<ClientIoMessage>) -> BlockQueue {
let verification = Arc::new(Verification {
unverified: Mutex::new(VecDeque::new()),
verified: Mutex::new(VecDeque::new()),
@@ -196,7 +196,7 @@ impl BlockQueue {
}
}

-fn verify(verification: Arc<Verification>, engine: Arc<Box<Engine>>, wait: Arc<SCondvar>, ready: Arc<QueueSignal>, deleting: Arc<AtomicBool>, empty: Arc<SCondvar>) {
+fn verify(verification: Arc<Verification>, engine: Arc<Engine>, wait: Arc<SCondvar>, ready: Arc<QueueSignal>, deleting: Arc<AtomicBool>, empty: Arc<SCondvar>) {
while !deleting.load(AtomicOrdering::Acquire) {
{
let mut more_to_verify = verification.more_to_verify.lock().unwrap();
@@ -226,7 +226,7 @@ impl BlockQueue {
};

let block_hash = block.header.hash();
-match verify_block_unordered(block.header, block.bytes, &**engine) {
+match verify_block_unordered(block.header, block.bytes, &*engine) {
Ok(verified) => {
let mut verifying = verification.verifying.lock();
for e in verifying.iter_mut() {
@@ -319,7 +319,7 @@ impl BlockQueue {
}
}

-match verify_block_basic(&header, &bytes, &**self.engine) {
+match verify_block_basic(&header, &bytes, &*self.engine) {
Ok(()) => {
self.processing.write().insert(h.clone());
self.verification.unverified.lock().push_back(UnverifiedBlock { header: header, bytes: bytes });
@@ -340,7 +340,7 @@ impl BlockQueue {
return;
}
let mut verified_lock = self.verification.verified.lock();
-let mut verified = verified_lock.deref_mut();
+let mut verified = &mut *verified_lock;
let mut bad = self.verification.bad.lock();
let mut processing = self.processing.write();
bad.reserve(block_hashes.len());
@@ -460,15 +460,15 @@ mod tests {
fn get_test_queue() -> BlockQueue {
let spec = get_test_spec();
let engine = spec.engine;
-BlockQueue::new(BlockQueueConfig::default(), Arc::new(engine), IoChannel::disconnected())
+BlockQueue::new(BlockQueueConfig::default(), engine, IoChannel::disconnected())
}

#[test]
fn can_be_created() {
// TODO better test
let spec = Spec::new_test();
let engine = spec.engine;
-let _ = BlockQueue::new(BlockQueueConfig::default(), Arc::new(engine), IoChannel::disconnected());
+let _ = BlockQueue::new(BlockQueueConfig::default(), engine, IoChannel::disconnected());
}

#[test]
@@ -531,7 +531,7 @@ mod tests {
let engine = spec.engine;
let mut config = BlockQueueConfig::default();
config.max_mem_use = super::MIN_MEM_LIMIT; // empty queue uses about 15000
-let queue = BlockQueue::new(config, Arc::new(engine), IoChannel::disconnected());
+let queue = BlockQueue::new(config, engine, IoChannel::disconnected());
assert!(!queue.queue_info().is_full());
let mut blocks = get_good_dummy_block_seq(50);
for b in blocks.drain(..) {
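The recurring change in this file swaps `Arc<Box<Engine>>` for `Arc<Engine>`, removing a pointer indirection: a trait object can live directly behind an `Arc`, so calls go through one deref (`&*engine`) rather than two (`&**engine`). A small self-contained illustration (modern `dyn` syntax; `Engine` here is a stand-in trait, not the ethcore one):

```rust
use std::sync::Arc;

trait Engine {
    fn name(&self) -> &str;
}

struct Ethash;

impl Engine for Ethash {
    fn name(&self) -> &str { "Ethash" }
}

// Before: Arc<Box<dyn Engine>> -- two pointer hops to reach the data.
// After:  Arc<dyn Engine> -- the vtable pointer lives in the Arc itself.
fn describe(engine: &Arc<dyn Engine>) -> String {
    // A single auto-deref through the Arc reaches the trait object.
    format!("running {}", engine.name())
}

fn main() {
    let engine: Arc<dyn Engine> = Arc::new(Ethash);
    assert_eq!(describe(&engine), "running Ethash");
}
```

Besides the saved indirection, this also lets the `Spec`'s existing `Arc` be cloned and handed to the queue without re-wrapping, which is why the tests drop their `Arc::new(engine)` calls.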
120 changes: 115 additions & 5 deletions ethcore/src/blockchain/blockchain.rs
@@ -533,6 +533,116 @@ impl BlockChain {
}
}

/// Inserts a verified, known block from the canonical chain.
///
/// Can be performed out-of-order, but care must be taken that the final chain is in a correct state.
/// This is used by snapshot restoration.
///
/// Supply a dummy parent total difficulty when the parent block may not be in the chain.
/// Returns true if the block is disconnected.
pub fn insert_snapshot_block(&self, bytes: &[u8], receipts: Vec<Receipt>, parent_td: Option<U256>, is_best: bool) -> bool {
let block = BlockView::new(bytes);
let header = block.header_view();
let hash = header.sha3();

if self.is_known(&hash) {
return false;
}

assert!(self.pending_best_block.read().is_none());

let batch = self.db.transaction();

let block_rlp = UntrustedRlp::new(bytes);
let compressed_header = block_rlp.at(0).unwrap().compress(RlpType::Blocks);
let compressed_body = UntrustedRlp::new(&Self::block_to_body(bytes)).compress(RlpType::Blocks);

// store block in db
batch.put(DB_COL_HEADERS, &hash, &compressed_header).unwrap();
batch.put(DB_COL_BODIES, &hash, &compressed_body).unwrap();

let maybe_parent = self.block_details(&header.parent_hash());

if let Some(parent_details) = maybe_parent {
// parent known to be in chain.
let info = BlockInfo {
hash: hash,
number: header.number(),
total_difficulty: parent_details.total_difficulty + header.difficulty(),
location: BlockLocation::CanonChain,
};

self.prepare_update(&batch, ExtrasUpdate {
block_hashes: self.prepare_block_hashes_update(bytes, &info),
block_details: self.prepare_block_details_update(bytes, &info),
block_receipts: self.prepare_block_receipts_update(receipts, &info),
transactions_addresses: self.prepare_transaction_addresses_update(bytes, &info),
blocks_blooms: self.prepare_block_blooms_update(bytes, &info),
info: info,
block: bytes
}, is_best);
self.db.write(batch).unwrap();

false
} else {
// parent not in the chain yet. we need the parent difficulty to proceed.
let d = parent_td
.expect("parent total difficulty always supplied for first block in chunk. only first block can have missing parent; qed");

let info = BlockInfo {
hash: hash,
number: header.number(),
total_difficulty: d + header.difficulty(),
[Review thread on the line above]
Collaborator: Is parent_diff parent's difficulty or total difficulty? If it is just difficulty then this is incorrect
Author (rphmeier): it is the total difficulty.
Contributor: maybe rename to parent_total_diff to make clear?
Author (rphmeier): i added documentation in my last commit which specifies it.

location: BlockLocation::CanonChain,
};

let block_details = BlockDetails {
number: header.number(),
total_difficulty: info.total_difficulty,
parent: header.parent_hash(),
children: Vec::new(),
};

let mut update = HashMap::new();
update.insert(hash, block_details);

self.prepare_update(&batch, ExtrasUpdate {
block_hashes: self.prepare_block_hashes_update(bytes, &info),
block_details: update,
block_receipts: self.prepare_block_receipts_update(receipts, &info),
transactions_addresses: self.prepare_transaction_addresses_update(bytes, &info),
blocks_blooms: self.prepare_block_blooms_update(bytes, &info),
info: info,
block: bytes,
}, is_best);
self.db.write(batch).unwrap();

true
}
}

/// Add a child to a given block. Assumes that the block hash is in
/// the chain and the child's parent is this block.
///
/// Used in snapshots to glue the chunks together at the end.
pub fn add_child(&self, block_hash: H256, child_hash: H256) {
let mut parent_details = self.block_details(&block_hash)
.unwrap_or_else(|| panic!("Invalid block hash: {:?}", block_hash));

let batch = self.db.transaction();
parent_details.children.push(child_hash);
[Review thread on the line above]
Collaborator: Is there a guarantee that the same child won't be pushed twice?
Author (rphmeier): this function is called at the very end to glue all disconnected chunks together. see BlockRebuilder::glue_chunks. Whenever we insert a snapshot block and the parent details don't exist (i.e. this is the first block of a chunk whose ancestor hasn't been fed yet), we mark it as "disconnected" and update the block details once we have processed all chunks. in the absence of a bad snapshot i think this can be guaranteed to not push the same child twice.

let mut update = HashMap::new();
update.insert(block_hash, parent_details);

self.note_used(CacheID::BlockDetails(block_hash));

let mut write_details = self.block_details.write();
batch.extend_with_cache(DB_COL_EXTRA, &mut *write_details, update, CacheUpdatePolicy::Overwrite);

self.db.write(batch).unwrap();
}
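The review exchange above describes the restoration flow: blocks inserted out of order are flagged as disconnected when their parent's details are missing, and a final glue pass calls `add_child` exactly once per disconnected block. A toy sketch of that bookkeeping (hashes modeled as `u64`; an illustration of the invariant, not the real `BlockRebuilder::glue_chunks`):

```rust
use std::collections::HashMap;

// Hypothetical miniature of the rebuilder flow: parent hash -> children,
// plus the set of blocks already inserted.
#[derive(Default)]
struct Chain {
    children: HashMap<u64, Vec<u64>>,
    known: Vec<u64>,
}

impl Chain {
    // Returns true when the block's parent is not yet known (disconnected),
    // mirroring insert_snapshot_block's return value.
    fn insert_snapshot_block(&mut self, hash: u64, parent: u64) -> bool {
        let disconnected = !self.known.contains(&parent);
        self.known.push(hash);
        if !disconnected {
            // Parent known: link the child immediately.
            self.children.entry(parent).or_default().push(hash);
        }
        disconnected
    }

    // Glue pass helper: attach a disconnected block to its (now known) parent.
    fn add_child(&mut self, parent: u64, child: u64) {
        self.children.entry(parent).or_default().push(child);
    }
}

fn main() {
    let mut chain = Chain::default();
    let mut disconnected = Vec::new();
    // Chunks restored newest-first: every block's parent is still unknown.
    for (hash, parent) in [(10u64, 9u64), (9, 8), (8, 7)] {
        if chain.insert_snapshot_block(hash, parent) {
            disconnected.push((parent, hash));
        }
    }
    // Glue pass at the end; each disconnected block was recorded once,
    // so add_child cannot push a duplicate child.
    for (parent, child) in disconnected {
        chain.add_child(parent, child);
    }
    assert_eq!(chain.children[&9], vec![10]);
}
```

The no-duplicates guarantee falls out of the bookkeeping: a block enters the disconnected list at most once, at insertion time.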

#[cfg_attr(feature="dev", allow(similar_names))]
/// Inserts the block into backing cache database.
/// Expects the block to be valid and already verified.
@@ -572,7 +682,7 @@ impl BlockChain {
blocks_blooms: self.prepare_block_blooms_update(bytes, &info),
info: info.clone(),
block: bytes,
-});
+}, true);

ImportRoute::from(info)
}
@@ -618,7 +728,7 @@ impl BlockChain {
}

/// Prepares extras update.
-fn prepare_update(&self, batch: &DBTransaction, update: ExtrasUpdate) {
+fn prepare_update(&self, batch: &DBTransaction, update: ExtrasUpdate, is_best: bool) {
{
for hash in update.block_details.keys().cloned() {
self.note_used(CacheID::BlockDetails(hash));
@@ -645,17 +755,16 @@
// update best block
match update.info.location {
BlockLocation::Branch => (),
-_ => {
+_ => if is_best {
batch.put(DB_COL_EXTRA, b"best", &update.info.hash).unwrap();
*best_block = Some(BestBlock {
hash: update.info.hash,
number: update.info.number,
total_difficulty: update.info.total_difficulty,
block: update.block.to_vec(),
});
}
},
}

let mut write_hashes = self.pending_block_hashes.write();
let mut write_txs = self.pending_transaction_addresses.write();

@@ -745,6 +854,7 @@
}

/// This function returns modified block details.
/// Uses the given parent details or attempts to load them from the database.
fn prepare_block_details_update(&self, block_bytes: &[u8], info: &BlockInfo) -> HashMap<H256, BlockDetails> {
let block = BlockView::new(block_bytes);
let header = block.header_view();
2 changes: 1 addition & 1 deletion ethcore/src/blockchain/extras.rs
@@ -41,7 +41,7 @@ pub enum ExtrasIndex {
fn with_index(hash: &H256, i: ExtrasIndex) -> H264 {
let mut result = H264::default();
result[0] = i as u8;
-result.deref_mut()[1..].clone_from_slice(hash);
+(*result)[1..].clone_from_slice(hash);
result
}

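The `with_index` helper above builds a 33-byte database key by prefixing a one-byte extras index to a 32-byte hash; the diff merely replaces the explicit `deref_mut()` call with an equivalent reborrow. A plain-array sketch of the same keying scheme (hypothetical standalone version, without the `H264`/`H256` wrapper types):

```rust
// Simplified sketch of the with_index pattern: one index byte followed by
// the 32-byte hash, giving each extras column its own key namespace.
fn with_index(hash: &[u8; 32], index: u8) -> [u8; 33] {
    let mut result = [0u8; 33];
    result[0] = index;
    // Copy the hash into bytes 1..33, after the index prefix.
    result[1..].copy_from_slice(hash);
    result
}

fn main() {
    let hash = [0xABu8; 32];
    let key = with_index(&hash, 4);
    assert_eq!(key[0], 4);
    assert_eq!(&key[1..], &hash[..]);
}
```

Because the prefix byte comes first, keys for different extras kinds never collide even when they wrap the same block hash.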
2 changes: 2 additions & 0 deletions ethcore/src/blockchain/generator/mod.rs
@@ -14,6 +14,8 @@
// You should have received a copy of the GNU General Public License
// along with Parity. If not, see <http://www.gnu.org/licenses/>.

//! Blockchain generator for tests.

mod bloom;
mod block;
mod complete;
2 changes: 1 addition & 1 deletion ethcore/src/blockchain/mod.rs
@@ -26,7 +26,7 @@ mod import_route;
mod update;

#[cfg(test)]
-mod generator;
+pub mod generator;

pub use self::blockchain::{BlockProvider, BlockChain};
pub use self::cache::CacheSize;