Merge pull request #1 from hicommonwealth/develop
Develop to master push/update
drewstone authored Aug 30, 2019
2 parents 0e89ea7 + 4af6124 commit 2a80448
Showing 99 changed files with 5,145 additions and 2,955 deletions.
3 changes: 1 addition & 2 deletions .gitlab-ci.yml
@@ -202,7 +202,6 @@ check-web-wasm:
- time cargo web build -p substrate-keystore
- time cargo web build -p substrate-executor
- time cargo web build -p substrate-network
- time cargo web build -p substrate-offchain
- time cargo web build -p substrate-panic-handler
- time cargo web build -p substrate-peerset
- time cargo web build -p substrate-primitives
@@ -336,7 +335,7 @@ check_warnings:
- docker push $CONTAINER_IMAGE:$VERSION
- docker push $CONTAINER_IMAGE:latest

publish-docker-substrate:
publish-docker-substrate:
stage: publish
<<: *publish-docker-release
# collect VERSION artifact here to pass it on to kubernetes
340 changes: 181 additions & 159 deletions Cargo.lock

Large diffs are not rendered by default.

42 changes: 37 additions & 5 deletions README.adoc
@@ -133,7 +133,7 @@ First let's get a template chainspec that you can edit. We'll use the "staging"
substrate build-spec --chain=staging > ~/chainspec.json
----

Now, edit `~/chainspec.json` in your editor. There are a lot of individual fields for each module, and one very large one which contains the Webassembly code blob for this chain. The easiest field to edit is the block `period`. Change it to 10 (seconds):
Now, edit `~/chainspec.json` in your editor. There are a lot of individual fields for each module, and one very large one which contains the WebAssembly code blob for this chain. The easiest field to edit is the block `period`. Change it to 10 (seconds):

[source, json]
----
@@ -160,7 +160,7 @@ It won't do much until you start producing blocks though, so to do that you'll n

[source, shell]
----
substrate --chain ~/mychain.json --validator --key ...
substrate --chain ~/mychain.json --validator
----

You can distribute `mychain.json` so that everyone can synchronize and (depending on your authorities list) validate on your chain.
@@ -281,9 +281,9 @@ cargo run \-- --dev

Detailed logs may be shown by running the node with the following environment variables set: `RUST_LOG=debug RUST_BACKTRACE=1 cargo run \-- --dev`.

If you want to see the multi-node consensus algorithm in action locally, then you can create a local testnet with two validator nodes for Alice and Bob, who are the initial authorities of the genesis chain specification that have been endowed with a testnet DOTs. We'll give each node a name and expose them so they are listed on link:https://telemetry.polkadot.io/#/Local%20Testnet[Telemetry] . You'll need two terminals windows open.
If you want to see the multi-node consensus algorithm in action locally, then you can create a local testnet with two validator nodes for Alice and Bob, who are the initial authorities of the genesis chain specification that have been endowed with testnet DOTs. We'll give each node a name and expose them so they are listed on link:https://telemetry.polkadot.io/#/Local%20Testnet[Telemetry]. You'll need two terminal windows open.

We'll start Alice's substrate node first on default TCP port 30333 with her chain database stored locally at `/tmp/alice`. The Bootnode ID of her node is `QmRpheLN4JWdAnY7HGJfWFNbfkQCb6tFf4vvA6hgjMZKrR`, which is generated from the `--node-key` value that we specify below:
We'll start Alice's Substrate node first on default TCP port 30333 with her chain database stored locally at `/tmp/alice`. The Bootnode ID of her node is `QmRpheLN4JWdAnY7HGJfWFNbfkQCb6tFf4vvA6hgjMZKrR`, which is generated from the `--node-key` value that we specify below:

[source, shell]
cargo run --release \-- \
@@ -294,7 +294,7 @@ cargo run --release \-- \
--telemetry-url ws://telemetry.polkadot.io:1024 \
--validator

In the second terminal, we'll run the following to start Bob's substrate node on a different TCP port of 30334, and with his chain database stored locally at `/tmp/bob`. We'll specify a value for the `--bootnodes` option that will connect his node to Alice's Bootnode ID on TCP port 30333:
In the second terminal, we'll run the following to start Bob's Substrate node on a different TCP port of 30334, and with his chain database stored locally at `/tmp/bob`. We'll specify a value for the `--bootnodes` option that will connect his node to Alice's Bootnode ID on TCP port 30333:

[source, shell]
cargo run --release \-- \
@@ -378,6 +378,38 @@ git checkout -b v1.0 origin/v1.0

You can then follow the same steps for building and running as described above in <<flaming-fir>>.

== Key management

Keys in Substrate are stored in the keystore on the file system. To store keys into this keystore,
you need to use one of the two provided RPC calls. If your keys are encrypted, or should be encrypted
by the keystore, you need to supply the password via one of the CLI arguments `--password`,
`--password-interactive` or `--password-filename`.

=== Recommended RPC call

For most users who want to run a validator node, the `author_rotateKeys` RPC call is sufficient.
The RPC call will generate `N` session keys for you and return their public keys. `N` is the number
of session keys configured in the runtime. The output of the RPC call can be used as input for the
`session::set_keys` transaction.

```
curl -H 'Content-Type: application/json' --data '{ "jsonrpc":"2.0", "method":"author_rotateKeys", "id":1 }' localhost:9933
```

=== Advanced RPC call

If the Session keys need to match a fixed seed, they can be set individually key by key. The RPC call
expects the key seed and the key type. The key types supported by default in Substrate are listed
https://github.com/paritytech/substrate/blob/master/core/primitives/src/crypto.rs#L767[here], but the
user can declare any key type.

```
curl -H 'Content-Type: application/json' --data '{ "jsonrpc":"2.0", "method":"author_insertKey", "params":["KEY_TYPE", "SEED"],"id":1 }' localhost:9933
```

`KEY_TYPE` - the 4-character key type identifier.
`SEED` - the seed of the key.
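As an illustrative sketch of the substitution, the snippet below builds the payload for a hypothetical `babe` session key using the well-known `//Alice` development seed. Both values are assumptions for demonstration only; substitute your own key type and seed.

```
# Build the author_insertKey payload from a key type and seed.
# "babe" and "//Alice" are illustrative values, not prescriptions.
KEY_TYPE="babe"
SEED="//Alice"
PAYLOAD=$(printf '{ "jsonrpc":"2.0", "method":"author_insertKey", "params":["%s", "%s"], "id":1 }' "$KEY_TYPE" "$SEED")
echo "$PAYLOAD"
# Send it to the node's local RPC endpoint:
# curl -H 'Content-Type: application/json' --data "$PAYLOAD" localhost:9933
```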

== Documentation

=== Viewing documentation for Substrate packages
5 changes: 3 additions & 2 deletions core/application-crypto/src/traits.rs
@@ -17,6 +17,7 @@
use primitives::crypto::{KeyTypeId, CryptoType, IsWrappedBy, Public};
#[cfg(feature = "std")]
use primitives::crypto::Pair;
use codec::Codec;

/// An application-specific key.
pub trait AppKey: 'static + Send + Sync + Sized + CryptoType + Clone {
@@ -72,7 +73,7 @@ pub trait AppSignature: AppKey + Eq + PartialEq + MaybeDebugHash {
/// A runtime interface for a public key.
pub trait RuntimePublic: Sized {
/// The signature that will be generated when signing with the corresponding private key.
type Signature;
type Signature: Codec + MaybeDebugHash + Eq + PartialEq + Clone;

/// Returns all public keys for the given key type in the keystore.
fn all(key_type: KeyTypeId) -> crate::Vec<Self>;
@@ -97,7 +98,7 @@ pub trait RuntimePublic: Sized {
/// A runtime interface for an application's public key.
pub trait RuntimeAppPublic: Sized {
/// The signature that will be generated when signing with the corresponding private key.
type Signature;
type Signature: Codec + MaybeDebugHash + Eq + PartialEq + Clone;

/// Returns all public keys for this application in the keystore.
fn all() -> crate::Vec<Self>;
14 changes: 2 additions & 12 deletions core/cli/src/informant.rs
@@ -21,22 +21,12 @@ use futures::{Future, Stream};
use futures03::{StreamExt as _, TryStreamExt as _};
use log::{info, warn};
use sr_primitives::{generic::BlockId, traits::Header};
use service::{Service, Components};
use tokio::runtime::TaskExecutor;
use service::AbstractService;

mod display;

/// Spawn informant on the event loop
#[deprecated(note = "Please use informant::build instead, and then create the task manually")]
pub fn start<C>(service: &Service<C>, exit: ::exit_future::Exit, handle: TaskExecutor) where
C: Components,
{
handle.spawn(exit.until(build(service)).map(|_| ()));
}

/// Creates an informant in the form of a `Future` that must be polled regularly.
pub fn build<C>(service: &Service<C>) -> impl Future<Item = (), Error = ()>
where C: Components {
pub fn build(service: &impl AbstractService) -> impl Future<Item = (), Error = ()> {
let client = service.client();

let mut display = display::InformantDisplay::new();
93 changes: 28 additions & 65 deletions core/cli/src/lib.rs
@@ -29,8 +29,8 @@ pub mod informant;
use client::ExecutionStrategies;
use service::{
config::Configuration,
ServiceFactory, FactoryFullConfiguration, RuntimeGenesis,
FactoryGenesis, PruningMode, ChainSpec,
ServiceBuilderExport, ServiceBuilderImport, ServiceBuilderRevert,
RuntimeGenesis, PruningMode, ChainSpec,
};
use network::{
self, multiaddr::Protocol,
@@ -317,13 +317,17 @@ pub struct ParseAndPrepareExport<'a> {

impl<'a> ParseAndPrepareExport<'a> {
/// Runs the command and exports from the chain.
pub fn run<F, S, E>(
pub fn run_with_builder<C, G, F, B, S, E>(
self,
builder: F,
spec_factory: S,
exit: E,
) -> error::Result<()>
where S: FnOnce(&str) -> Result<Option<ChainSpec<FactoryGenesis<F>>>, String>,
F: ServiceFactory,
where S: FnOnce(&str) -> Result<Option<ChainSpec<G>>, String>,
F: FnOnce(Configuration<C, G>) -> Result<B, error::Error>,
B: ServiceBuilderExport,
C: Default,
G: RuntimeGenesis,
E: IntoExit
{
let config = create_config_with_db_path(spec_factory, &self.params.shared_params, self.version)?;
@@ -338,9 +342,8 @@ impl<'a> ParseAndPrepareExport<'a> {
None => Box::new(stdout()),
};

service::chain_ops::export_blocks::<F, _, _>(
config, exit.into_exit(), file, from.into(), to.map(Into::into), json
).map_err(Into::into)
builder(config)?.export_blocks(exit.into_exit(), file, from.into(), to.map(Into::into), json)?;
Ok(())
}
}

@@ -352,13 +355,17 @@ pub struct ParseAndPrepareImport<'a> {

impl<'a> ParseAndPrepareImport<'a> {
/// Runs the command and imports to the chain.
pub fn run<F, S, E>(
pub fn run_with_builder<C, G, F, B, S, E>(
self,
builder: F,
spec_factory: S,
exit: E,
) -> error::Result<()>
where S: FnOnce(&str) -> Result<Option<ChainSpec<FactoryGenesis<F>>>, String>,
F: ServiceFactory,
where S: FnOnce(&str) -> Result<Option<ChainSpec<G>>, String>,
F: FnOnce(Configuration<C, G>) -> Result<B, error::Error>,
B: ServiceBuilderImport,
C: Default,
G: RuntimeGenesis,
E: IntoExit
{
let mut config = create_config_with_db_path(spec_factory, &self.params.shared_params, self.version)?;
@@ -377,7 +384,7 @@ impl<'a> ParseAndPrepareImport<'a> {
},
};

let fut = service::chain_ops::import_blocks::<F, _, _>(config, exit.into_exit(), file)?;
let fut = builder(config)?.import_blocks(exit.into_exit(), file)?;
tokio::run(fut);
Ok(())
}
@@ -440,67 +447,23 @@ pub struct ParseAndPrepareRevert<'a> {

impl<'a> ParseAndPrepareRevert<'a> {
/// Runs the command and reverts the chain.
pub fn run<F, S>(
pub fn run_with_builder<C, G, F, B, S>(
self,
builder: F,
spec_factory: S
) -> error::Result<()>
where S: FnOnce(&str) -> Result<Option<ChainSpec<FactoryGenesis<F>>>, String>,
F: ServiceFactory {
where S: FnOnce(&str) -> Result<Option<ChainSpec<G>>, String>,
F: FnOnce(Configuration<C, G>) -> Result<B, error::Error>,
B: ServiceBuilderRevert,
C: Default,
G: RuntimeGenesis {
let config = create_config_with_db_path(spec_factory, &self.params.shared_params, self.version)?;
let blocks = self.params.num;
Ok(service::chain_ops::revert_chain::<F>(config, blocks.into())?)
builder(config)?.revert_chain(blocks.into())?;
Ok(())
}
}

/// Parse command line interface arguments and executes the desired command.
///
/// # Return value
///
/// A result that indicates if any error occurred.
/// If no error occurred and a custom subcommand was found, the subcommand is returned.
/// The user needs to handle this subcommand on its own.
///
/// # Remarks
///
/// `CC` is a custom subcommand. This needs to be an `enum`! If no custom subcommand is required,
/// `NoCustom` can be used as type here.
/// `RP` are custom parameters for the run command. This needs to be a `struct`! The custom
/// parameters are visible to the user as if they were normal run command parameters. If no custom
/// parameters are required, `NoCustom` can be used as type here.
#[deprecated(
note = "Use parse_and_prepare instead; see the source code of parse_and_execute for how to transition"
)]
pub fn parse_and_execute<'a, F, CC, RP, S, RS, E, I, T>(
spec_factory: S,
version: &VersionInfo,
impl_name: &'static str,
args: I,
exit: E,
run_service: RS,
) -> error::Result<Option<CC>>
where
F: ServiceFactory,
S: FnOnce(&str) -> Result<Option<ChainSpec<FactoryGenesis<F>>>, String>,
CC: StructOpt + Clone + GetLogFilter,
RP: StructOpt + Clone + AugmentClap,
E: IntoExit,
RS: FnOnce(E, RunCmd, RP, FactoryFullConfiguration<F>) -> Result<(), String>,
I: IntoIterator<Item = T>,
T: Into<std::ffi::OsString> + Clone,
{
match parse_and_prepare::<CC, RP, _>(version, impl_name, args) {
ParseAndPrepare::Run(cmd) => cmd.run(spec_factory, exit, run_service),
ParseAndPrepare::BuildSpec(cmd) => cmd.run(spec_factory),
ParseAndPrepare::ExportBlocks(cmd) => cmd.run::<F, _, _>(spec_factory, exit),
ParseAndPrepare::ImportBlocks(cmd) => cmd.run::<F, _, _>(spec_factory, exit),
ParseAndPrepare::PurgeChain(cmd) => cmd.run(spec_factory),
ParseAndPrepare::RevertChain(cmd) => cmd.run::<F, _>(spec_factory),
ParseAndPrepare::CustomCommand(cmd) => return Ok(Some(cmd))
}?;

Ok(None)
}

/// Create a `NodeKeyConfig` from the given `NodeKeyParams` in the context
/// of an optional network config storage directory.
fn node_key_config<P>(params: NodeKeyParams, net_config_dir: &Option<P>)
6 changes: 5 additions & 1 deletion core/cli/src/params.rs
@@ -441,7 +441,11 @@ lazy_static::lazy_static! {
/// The Cli values for all test accounts.
static ref TEST_ACCOUNTS_CLI_VALUES: Vec<KeyringTestAccountCliValues> = {
keyring::Sr25519Keyring::iter().map(|a| {
let help = format!("Shortcut for `--key //{} --name {}`.", a, a);
let help = format!(
"Shortcut for `--name {} --validator` with session keys for `{}` added to keystore.",
a,
a,
);
let conflicts_with = keyring::Sr25519Keyring::iter()
.filter(|b| a != *b)
.map(|b| b.to_string().to_lowercase())
4 changes: 2 additions & 2 deletions core/client/db/src/cache/mod.rs
@@ -299,8 +299,8 @@ impl<Block: BlockT> BlockchainCache<Block> for DbCacheSync<Block> {
key: &CacheKeyId,
at: &BlockId<Block>,
) -> Option<((NumberFor<Block>, Block::Hash), Option<(NumberFor<Block>, Block::Hash)>, Vec<u8>)> {
let cache = self.0.read();
let storage = cache.cache_at.get(key)?.storage();
let mut cache = self.0.write();
let storage = cache.get_cache(*key).storage();
let db = storage.db();
let columns = storage.columns();
let at = match *at {
41 changes: 26 additions & 15 deletions core/client/db/src/light.rs
@@ -1030,26 +1030,37 @@ pub(crate) mod tests {

#[test]
fn cache_can_be_initialized_after_genesis_inserted() {
let db = LightStorage::<Block>::new_test();
let (genesis_hash, storage) = {
let db = LightStorage::<Block>::new_test();

// before cache is initialized => None
assert_eq!(db.cache().get_at(b"test", &BlockId::Number(0)), None);

// insert genesis block (no value for cache is provided)
let mut genesis_hash = None;
insert_block(&db, HashMap::new(), || {
let header = default_header(&Default::default(), 0);
genesis_hash = Some(header.hash());
header
});

// before cache is initialized => None
assert_eq!(db.cache().get_at(b"test", &BlockId::Number(0)), None);
// after genesis is inserted => None
assert_eq!(db.cache().get_at(b"test", &BlockId::Number(0)), None);

// insert genesis block (no value for cache is provided)
let mut genesis_hash = None;
insert_block(&db, HashMap::new(), || {
let header = default_header(&Default::default(), 0);
genesis_hash = Some(header.hash());
header
});
// initialize cache
db.cache().initialize(b"test", vec![42]).unwrap();

// after genesis is inserted => None
assert_eq!(db.cache().get_at(b"test", &BlockId::Number(0)), None);
// after genesis is inserted + cache is initialized => Some
assert_eq!(
db.cache().get_at(b"test", &BlockId::Number(0)),
Some(((0, genesis_hash.unwrap()), None, vec![42])),
);

// initialize cache
db.cache().initialize(b"test", vec![42]).unwrap();
(genesis_hash, db.db)
};

// after genesis is inserted + cache is initialized => Some
// restart && check that after restart value is read from the cache
let db = LightStorage::<Block>::from_kvdb(storage as Arc<_>).expect("failed to create test-db");
assert_eq!(
db.cache().get_at(b"test", &BlockId::Number(0)),
Some(((0, genesis_hash.unwrap()), None, vec![42])),