diff --git a/crates/topos-tce-storage/README.md b/crates/topos-tce-storage/README.md
new file mode 100644
index 000000000..77bc827a1
--- /dev/null
+++ b/crates/topos-tce-storage/README.md
@@ -0,0 +1,67 @@
+# topos-tce-storage
+
+The library provides the storage layer for the Topos TCE.
+It is responsible for storing and retrieving certificates, managing the
+pending certificates pool and the certificate statuses, and storing the
+various metadata related to the protocol and the internal state of the TCE.
+
+The storage layer is implemented using RocksDB.
+The library exposes multiple stores that are used by the TCE.
+
+
+### Architecture
+
+The storage layer is composed of multiple stores that are used by the TCE.
+Each store is described in detail in its own module.
+
+As an overview, the storage layer is composed of the following stores:
+
+<picture>
+  <source media="(prefers-color-scheme: dark)" srcset="./assets/store-dark.png">
+  <img alt="Overview of the stores composing the storage layer" src="./assets/store-light.png">
+</picture>
+
+### Usage
+
+Each store provides a different set of capabilities, but they all behave similarly and need the same kind
+of configuration in order to work.
+
+For instance, the [`EpochValidatorsStore`](struct@epoch::EpochValidatorsStore) only needs a [`PathBuf`](struct@std::path::PathBuf)
+argument to be instantiated, whereas [`FullNodeStore`](struct@fullnode::FullNodeStore) needs a few more arguments.
+
+The underlying mechanism of how data is stored is fairly simple: it relies heavily on [`rocksdb`] and is
+described below.
+
+As an example, in order to create a new [`EpochValidatorsStore`](struct@epoch::EpochValidatorsStore) you need to provide a
+path where the [`rocksdb`] database will be placed:
+
+```rust
+use std::path::PathBuf;
+use std::sync::Arc;
+use topos_tce_storage::epoch::EpochValidatorsStore;
+
+let mut path = PathBuf::from("/tmp/topos");
+path.push("epoch");
+let store: Arc<EpochValidatorsStore> = EpochValidatorsStore::new(path).unwrap();
+```
+
+### Special Considerations
+
+When using the storage layer, you need to be aware of the following:
+- The storage layer uses [`rocksdb`] as its backend, which means the data is stored on disk.
+- The storage layer uses [`Arc`](struct@std::sync::Arc) to share the stores between threads.
+- The storage layer uses [`async_trait`](https://docs.rs/async-trait/0.1.51/async_trait/) to expose methods that need to manage locks (see [`WriteStore`](trait@store::WriteStore)).
+- Some functions use [`DBBatch`](struct@rocks::db_column::DBBatch) to batch multiple writes in one transaction, but not all of them do.
+
+### Design Philosophy
+
+The choice of [`rocksdb`] as a backend was made because it is a well-known and battle-tested database.
+It is also very fast and efficient at reading and writing data. However, it is not the best when it comes
+to composing or filtering data, which is why we have multiple stores used for different purposes.
+
+For complex queries, another database such as [`PostgreSQL`](https://www.postgresql.org/) or [`CockroachDB`](https://www.cockroachlabs.com/) could be used as storage for projections.
+The source of truth would still be [`rocksdb`], but the projections would be stored in a relational database, allowing for more complex queries.
+
+As mentioned above, the different stores are wrapped in [`Arc`](struct@std::sync::Arc), allowing a single store to be instantiated once
+and then shared between threads. This is very useful for the [`FullNodeStore`](struct@fullnode::FullNodeStore), as it is used in various places.
+
+It also means that the store itself is immutable, which is a good thing when it comes to concurrency.
+The burden of managing locks is handled by the [`async_trait`](https://docs.rs/async-trait/0.1.51/async_trait/) crate when using [`WriteStore`](trait@store::WriteStore).
+The rest of the mutations on the data are handled by [`rocksdb`] itself.
+
diff --git a/crates/topos-tce-storage/assets/store-dark.png b/crates/topos-tce-storage/assets/store-dark.png
new file mode 100644
index 000000000..c95fd52e1
Binary files /dev/null and b/crates/topos-tce-storage/assets/store-dark.png differ
diff --git a/crates/topos-tce-storage/assets/store-light.png b/crates/topos-tce-storage/assets/store-light.png
new file mode 100644
index 000000000..2b0ab5f6f
Binary files /dev/null and b/crates/topos-tce-storage/assets/store-light.png differ
diff --git a/crates/topos-tce-storage/src/lib.rs b/crates/topos-tce-storage/src/lib.rs
index 6d17603a7..5facc3625 100644
--- a/crates/topos-tce-storage/src/lib.rs
+++ b/crates/topos-tce-storage/src/lib.rs
@@ -1,3 +1,74 @@
+//! The library provides the storage layer for the Topos TCE.
+//! It is responsible for storing and retrieving certificates, managing the
+//! pending certificates pool and the certificate statuses, and storing the
+//! various metadata related to the protocol and the internal state of the TCE.
+//!
+//! The storage layer is implemented using RocksDB.
+//! The library exposes multiple stores that are used by the TCE.
+//!
+//!
+//! ## Architecture
+//!
+//! The storage layer is composed of multiple stores that are used by the TCE.
+//! Each store is described in detail in its own module.
+//!
+//! As an overview, the storage layer is composed of the following stores:
+//!
+//!
+//!
+//! ## Usage
+//!
+//! Each store provides a different set of capabilities, but they all behave similarly and need the same kind
+//! of configuration in order to work.
+//!
+//! For instance, the [`EpochValidatorsStore`](struct@epoch::EpochValidatorsStore) only needs a [`PathBuf`](struct@std::path::PathBuf)
+//! argument to be instantiated, whereas [`FullNodeStore`](struct@fullnode::FullNodeStore) needs a few more arguments.
+//!
+//! The underlying mechanism of how data is stored is fairly simple: it relies heavily on [`rocksdb`] and is
+//! described below.
+//!
+//! As an example, in order to create a new [`EpochValidatorsStore`](struct@epoch::EpochValidatorsStore) you need to provide a
+//! path where the [`rocksdb`] database will be placed:
+//!
+//! ```
+//! # use topos_tce_storage::epoch;
+//! use epoch::EpochValidatorsStore;
+//! # use std::str::FromStr;
+//! # use std::path::PathBuf;
+//! # use std::sync::Arc;
+//! # let mut path = PathBuf::from_str(env!("CARGO_MANIFEST_DIR")).unwrap();
+//! # path.push("./../../target/tmp/");
+//! path.push("epoch");
+//! let store: Arc<EpochValidatorsStore> = EpochValidatorsStore::new(path).unwrap();
+//! ```
+//!
+//! ## Special Considerations
+//!
+//! When using the storage layer, you need to be aware of the following:
+//! - The storage layer uses [`rocksdb`] as its backend, which means the data is stored on disk.
+//! - The storage layer uses [`Arc`](struct@std::sync::Arc) to share the stores between threads.
+//! - The storage layer uses [`async_trait`](https://docs.rs/async-trait/0.1.51/async_trait/) to expose methods that need to manage locks (see [`WriteStore`](trait@store::WriteStore)).
+//! - Some functions use [`DBBatch`](struct@rocks::db_column::DBBatch) to batch multiple writes in one transaction, but not all of them do.
+//!
+//! ## Design Philosophy
+//!
+//! The choice of [`rocksdb`] as a backend was made because it is a well-known and battle-tested database.
+//! It is also very fast and efficient at reading and writing data. However, it is not the best when it comes
+//! to composing or filtering data, which is why we have multiple stores used for different purposes.
+//!
+//! For complex queries, another database such as [`PostgreSQL`](https://www.postgresql.org/) or [`CockroachDB`](https://www.cockroachlabs.com/) could be used as storage for projections.
+//! The source of truth would still be [`rocksdb`], but the projections would be stored in a relational database, allowing for more complex queries.
+//!
+//! As mentioned above, the different stores are wrapped in [`Arc`](struct@std::sync::Arc), allowing a single store to be instantiated once
+//! and then shared between threads. This is very useful for the [`FullNodeStore`](struct@fullnode::FullNodeStore), as it is used in various places.
+//!
+//! It also means that the store itself is immutable, which is a good thing when it comes to concurrency.
+//! The burden of managing locks is handled by the [`async_trait`](https://docs.rs/async-trait/0.1.51/async_trait/) crate when using [`WriteStore`](trait@store::WriteStore).
+//! The rest of the mutations on the data are handled by [`rocksdb`] itself.
+//!
use errors::InternalStorageError;
use rocks::iterator::ColumnIterator;
use serde::{Deserialize, Serialize};
diff --git a/crates/topos/tests/config.rs b/crates/topos/tests/config.rs
index e0dd1acaa..744cfa98f 100644
--- a/crates/topos/tests/config.rs
+++ b/crates/topos/tests/config.rs
@@ -1,6 +1,8 @@
use assert_cmd::prelude::*;
+use rstest::rstest;
use std::path::PathBuf;
use std::process::Command;
+use std::time::Duration;
use topos::install_polygon_edge;
async fn polygon_edge_path(path: &str) -> String {
@@ -23,7 +25,9 @@ async fn polygon_edge_path(path: &str) -> String {
installation_path.to_str().unwrap().to_string()
}
+#[rstest]
#[tokio::test]
+#[timeout(Duration::from_secs(5))]
async fn test_handle_command_init() -> Result<(), Box<dyn std::error::Error>> {
let temporary_test_folder = "/tmp/topos/handle_command_init";
let path = polygon_edge_path(temporary_test_folder).await;
@@ -86,7 +90,9 @@ fn test_nothing_written_if_failure() -> Result<(), Box<dyn std::error::Error>> {
Ok(())
}
+#[rstest]
#[tokio::test]
+#[timeout(Duration::from_secs(5))]
async fn test_handle_command_init_with_custom_name() -> Result<(), Box<dyn std::error::Error>> {
let temporary_test_folder = "/tmp/topos/test_handle_command_init_with_custom_name";
let node_name = "TEST_NODE";
diff --git a/scripts/check_readme.sh b/scripts/check_readme.sh
index 9ab3ff37a..6d7e76463 100755
--- a/scripts/check_readme.sh
+++ b/scripts/check_readme.sh
@@ -13,3 +13,4 @@ function check {
}
check crates/topos-tce-broadcast
+check crates/topos-tce-storage