Commit: Merge branch 'master' into refactor-captive-core

Showing 22 changed files with 333 additions and 180 deletions.
# Ledger Exporter Developer Guide

The ledger exporter is a tool that exports Stellar network transaction data to cloud storage in a format that is easy to access.
## Prerequisites

This document assumes that you have installed and can run the ledger exporter, and that you are familiar with its CLI and configuration. If not, please refer to the [Installation Guide](./README.md).
## Goal

The goal of the ledger exporter is to provide an easy-to-use tool that exports Stellar network ledger data to a configurable remote data store, such as cloud blob storage, and to:

- Use cloud storage optimally
- Minimize the network usage needed to export
- Make it easy and fast to search for a specific ledger or ledger range
## Architecture

To achieve these goals, the ledger exporter uses the following architecture, which consists of three main components:

- Captive-core, which extracts raw transaction metadata from the Stellar network.
- The export manager, which bundles and organizes the ledgers to get them ready for export.
- The cloud storage plugin, which writes to the cloud storage. This component is specific to the storage type, GCS in this case.

![ledgerexporter-architecture](./architecture.png)
## Data Format

- The ledger exporter uses [XDR](https://developers.stellar.org/docs/learn/encyclopedia/data-format/xdr) (External Data Representation), a compact and efficient binary format. A Stellar captive-core instance emits data in this format, and the data structure is referred to as `LedgerCloseMeta`. The exporter bundles multiple `LedgerCloseMeta` entries into a single object using a custom XDR structure called `LedgerCloseMetaBatch`, which is defined in [Stellar-exporter.x](https://github.com/stellar/go/blob/master/xdr/Stellar-exporter.x).

- The metadata for each batch is stored alongside the exported object. Supported metadata is defined in [metadata.go](https://github.com/stellar/go/blob/master/support/datastore/metadata.go).

- Objects are compressed before upload using the [zstd](http://facebook.github.io/zstd/) (Zstandard) compression algorithm to optimize network usage and storage needs.
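The batch structure can be sketched in Go as follows. The `LedgerCloseMetaBatch` fields mirror the XDR definition in `Stellar-exporter.x`; `LedgerCloseMeta` is reduced to a placeholder here, since the real type is generated from the Stellar XDR definitions:

```go
package main

import "fmt"

// LedgerCloseMeta is a simplified placeholder for the real XDR-generated
// type that captive-core emits for each closed ledger.
type LedgerCloseMeta struct {
	LedgerSeq uint32
}

// LedgerCloseMetaBatch mirrors the custom XDR structure defined in
// Stellar-exporter.x: a contiguous range of ledgers and their metadata.
type LedgerCloseMetaBatch struct {
	StartSequence    uint32
	EndSequence      uint32
	LedgerCloseMetas []LedgerCloseMeta
}

// newBatch bundles the ledgers in [start, end] into one batch.
func newBatch(start, end uint32) LedgerCloseMetaBatch {
	batch := LedgerCloseMetaBatch{StartSequence: start, EndSequence: end}
	for seq := start; seq <= end; seq++ {
		batch.LedgerCloseMetas = append(batch.LedgerCloseMetas, LedgerCloseMeta{LedgerSeq: seq})
	}
	return batch
}

func main() {
	// With ledgers_per_file = 64, the first exported object holds ledgers 0-63.
	batch := newBatch(0, 63)
	fmt.Println(len(batch.LedgerCloseMetas))
}
```

The whole batch, not each individual ledger, is what gets zstd-compressed and uploaded as one object.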
## Data Storage

- The `DataStore` interface abstracts the storage backend. An example implementation for Google Cloud Storage (GCS) is located in the [support](https://github.com/stellar/go/tree/master/support/datastore) package.
- The ledger exporter currently implements the interface only for GCS. The [GCS plugin](https://github.com/stellar/go/blob/master/support/datastore/gcs_datastore.go) uses GCS-specific behaviors such as conditional puts, automatic retries, metadata, and CRC checksums.
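Exported objects are keyed by the ledger range they contain (for example, with `ledgers_per_file = 64` and `files_per_partition = 10`, ledgers 64-127 land in `0-639/64-127.xdr.zstd`). The sketch below shows how such a key can be derived from a ledger sequence number; it is a simplified illustration, and the real `GetObjectKeyFromSequenceNumber` function in the datastore package may include additional key components:

```go
package main

import "fmt"

// objectKey derives an exported object's key from a ledger sequence
// number, given the schema settings ledgers_per_file and
// files_per_partition. Simplified sketch of the scheme described above.
func objectKey(seq, ledgersPerFile, filesPerPartition uint32) string {
	// The file covers the ledgersPerFile-sized window containing seq.
	fileStart := (seq / ledgersPerFile) * ledgersPerFile
	fileEnd := fileStart + ledgersPerFile - 1
	key := fmt.Sprintf("%d-%d.xdr.zstd", fileStart, fileEnd)

	// Files are grouped into partition directories of
	// ledgersPerFile * filesPerPartition ledgers each.
	if filesPerPartition > 1 {
		partitionSize := ledgersPerFile * filesPerPartition
		partStart := (seq / partitionSize) * partitionSize
		partEnd := partStart + partitionSize - 1
		key = fmt.Sprintf("%d-%d/%s", partStart, partEnd, key)
	}
	return key
}

func main() {
	// Ledger 100 falls in file 64-127 within partition 0-639.
	fmt.Println(objectKey(100, 64, 10))
}
```

Because the key is a pure function of the sequence number and the schema, a reader can locate any ledger without listing the bucket; this is also why the schema settings should not be changed after data has been exported.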
## Build, Run and Test using Docker

The Dockerfile contains all the necessary dependencies (e.g., stellar-core) required to run the ledger exporter.

- Build: To build the Docker container, use the provided [Makefile](./Makefile). Simply run `make docker-build` to build a new container after making any changes.

- Run: For instructions on running the Docker container, refer to the [Installation Guide](./README.md).

- Test: To test the Docker container, refer to the [docker-test](./Makefile) target for an example of how to use the [GCS emulator](https://github.com/fsouza/fake-gcs-server) for local testing.
## Adding support for a new storage type

Support for each data storage type is encapsulated as a 'plugin': an implementation of the `DataStore` interface in a Go package. To add a data storage plugin for a new storage type (e.g., AWS S3), follow these steps:

- Implement the [DataStore](https://github.com/stellar/go/blob/master/support/datastore/datastore.go) interface for the new storage type.
- Add support for datastore-specific features and implement any datastore-specific custom logic. Different datastores have different ways of handling:
  - race conditions
  - automatic retries
  - metadata storage, etc.
- Add the new datastore to the factory function [NewDataStore](https://github.com/stellar/go/blob/master/support/datastore/datastore.go).
- Add a [config](./config.example.toml) section for the new storage type. This may include configuration such as the destination, authentication information, etc.
- An emulator such as [fake-gcs-server](https://github.com/fsouza/fake-gcs-server) can be used for testing without connecting to real cloud storage.
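The steps above can be sketched as follows. The interface shown here is a simplified, hypothetical subset (the real method set lives in `support/datastore/datastore.go`), and the in-memory store stands in for a real cloud SDK client:

```go
package main

import (
	"errors"
	"fmt"
)

// DataStore is a simplified, hypothetical subset of the real interface
// defined in support/datastore/datastore.go.
type DataStore interface {
	PutFile(key string, data []byte) error
	GetFile(key string) ([]byte, error)
	Exists(key string) (bool, error)
}

// memoryDataStore is a toy in-memory plugin used to illustrate the shape
// of an implementation; a real S3 plugin would call the AWS SDK here.
type memoryDataStore struct {
	objects map[string][]byte
}

func newMemoryDataStore() *memoryDataStore {
	return &memoryDataStore{objects: map[string][]byte{}}
}

func (m *memoryDataStore) PutFile(key string, data []byte) error {
	m.objects[key] = data
	return nil
}

func (m *memoryDataStore) GetFile(key string) ([]byte, error) {
	data, ok := m.objects[key]
	if !ok {
		return nil, errors.New("not found: " + key)
	}
	return data, nil
}

func (m *memoryDataStore) Exists(key string) (bool, error) {
	_, ok := m.objects[key]
	return ok, nil
}

// newDataStore mirrors the role of the NewDataStore factory: dispatch on
// the configured storage type.
func newDataStore(storageType string) (DataStore, error) {
	switch storageType {
	case "MEMORY":
		return newMemoryDataStore(), nil
	default:
		return nil, fmt.Errorf("unsupported storage type %q", storageType)
	}
}

func main() {
	ds, err := newDataStore("MEMORY")
	if err != nil {
		panic(err)
	}
	_ = ds.PutFile("0-639/0-63.xdr.zstd", []byte("batch"))
	ok, _ := ds.Exists("0-639/0-63.xdr.zstd")
	fmt.Println(ok)
}
```

A real plugin would also be wired into the factory's supported types and given its own config section, as described in the steps above.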
### Design DOs and DON'Ts

- Multiple exporters should be able to run in parallel without the need for explicit locking or synchronization.
- Exporters, when restarted, have no memory of prior operation and rely on the already-exported data as much as possible to decide where to resume.
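Lock-free parallelism can lean on the datastore's conditional writes (the GCS plugin's conditional puts, mentioned above): an exporter writes an object only if it does not already exist, so concurrent exporters cannot clobber each other's uploads. A toy sketch of that "put if absent" semantic, with an in-memory map standing in for the bucket:

```go
package main

import (
	"fmt"
	"sync"
)

// store emulates a bucket supporting conditional puts. The mutex plays
// the role of the storage service's atomicity guarantee; it is not
// application-level locking between exporters.
type store struct {
	mu      sync.Mutex
	objects map[string][]byte
}

// putIfAbsent succeeds only if the key is not already present,
// mirroring a conditional put precondition.
func (s *store) putIfAbsent(key string, data []byte) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if _, exists := s.objects[key]; exists {
		return false // another exporter already wrote this object
	}
	s.objects[key] = data
	return true
}

func main() {
	s := &store{objects: map[string][]byte{}}
	first := s.putIfAbsent("0-639/0-63.xdr.zstd", []byte("batch"))
	second := s.putIfAbsent("0-639/0-63.xdr.zstd", []byte("batch"))
	fmt.Println(first, second)
}
```

With this property, two exporters racing on the same ledger range produce exactly one copy of each object, and neither needs to coordinate with the other.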
## Using exported data

The exported data in storage can be used in ETL pipelines to gather analytics and produce reports. To write a tool that consumes exported data, you can use the Stellar ingestion library's `ledgerbackend` package. This package includes a ledger backend called [BufferedStorageBackend](https://github.com/stellar/go/blob/master/ingest/ledgerbackend/buffered_storage_backend.go), which imports data from the storage and validates it. For more details, refer to the ledgerbackend [documentation](https://github.com/stellar/go/tree/master/ingest/ledgerbackend).
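The kind of validation a consumer performs can be illustrated with a toy contiguity check over batch ranges. This is an assumption-laden sketch, not the real `BufferedStorageBackend` API; see the ledgerbackend package for the actual behavior:

```go
package main

import "fmt"

// batch is a toy stand-in for an exported LedgerCloseMetaBatch,
// reduced to its ledger range.
type batch struct {
	start, end uint32
}

// validateContiguous checks that consecutive batches cover a gap-free,
// ordered ledger range - the kind of sanity check a consumer of the
// exported data would perform while importing it.
func validateContiguous(batches []batch) error {
	for i := 1; i < len(batches); i++ {
		prev, cur := batches[i-1], batches[i]
		if cur.start != prev.end+1 {
			return fmt.Errorf("gap between ledger %d and %d", prev.end, cur.start)
		}
	}
	return nil
}

func main() {
	ok := []batch{{0, 63}, {64, 127}, {128, 191}}
	gap := []batch{{0, 63}, {128, 191}}
	fmt.Println(validateContiguous(ok) == nil, validateContiguous(gap) != nil)
}
```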
## Contributing

For information on how to contribute, please refer to our [Contribution Guidelines](https://github.com/stellar/go/blob/master/CONTRIBUTING.md).
## Ledger Exporter: Installation and Usage Guide
This guide provides step-by-step instructions on installing and using the Ledger Exporter, a tool that exports Stellar network ledger data to a Google Cloud Storage (GCS) bucket for efficient analysis and storage.
* [Prerequisites](#prerequisites)
* [Setup](#setup)
  * [Set Up GCP Credentials](#set-up-gcp-credentials)
  * [Create a GCS Bucket for Storage](#create-a-gcs-bucket-for-storage)
* [Running the Ledger Exporter](#running-the-ledger-exporter)
  * [Pull the Docker Image](#1-pull-the-docker-image)
  * [Configure the Exporter](#2-configure-the-exporter-configtoml)
  * [Run the Exporter](#3-run-the-exporter)
* [Command Line Interface (CLI)](#command-line-interface-cli)
  1. [append: Continuously Export New Data](#1-append-continuously-export-new-data)
  2. [scan-and-fill: Fill Data Gaps](#2-scan-and-fill-fill-data-gaps)
## Prerequisites
* **Google Cloud Platform (GCP) account:** You will need a GCP account to create a GCS bucket for storing the exported data.
* **Docker:** Allows you to run the Ledger Exporter in a self-contained environment. See the official [Docker installation guide](https://docs.docker.com/engine/install/).
## Setup

### Set Up GCP Credentials
Create application default credentials for your Google Cloud Platform (GCP) project by following these steps:

1. Download the [Google Cloud SDK](https://cloud.google.com/sdk/docs/install).
2. Install and initialize the [gcloud CLI](https://cloud.google.com/sdk/docs/initializing).
3. Create [application authentication credentials](https://cloud.google.com/docs/authentication/provide-credentials-adc#google-idp) and store them in a secure location on your system, such as `$HOME/.config/gcloud/application_default_credentials.json`.

For detailed instructions, refer to the [Providing Credentials for Application Default Credentials (ADC)](https://cloud.google.com/docs/authentication/provide-credentials-adc) guide.
### Create a GCS Bucket for Storage
1. Go to the [GCP Console's Storage section](https://console.cloud.google.com/storage) and create a new bucket.
2. Choose a descriptive name for the bucket, such as `stellar-ledger-data`. Refer to the [Google Cloud Storage bucket naming guidelines](https://cloud.google.com/storage/docs/buckets#naming) for more information.
3. **Note down the bucket name**, as you'll need it later in the configuration process.
## Running the Ledger Exporter
### 1. Pull the Docker Image

Open a terminal window and download the Stellar Ledger Exporter Docker image using the following command:
```bash
docker pull stellar/ledger-exporter
```
### 2. Configure the Exporter (config.toml)

The Ledger Exporter relies on a configuration file (`config.toml`) to connect to your specific environment. This file defines details such as:
- The Google Cloud Storage (GCS) bucket where exported ledger data will be stored.
- Stellar network settings, such as the network you're using (testnet or pubnet).
- The datastore schema, which controls data organization.
A sample configuration file, [config.example.toml](config.example.toml), is provided. Copy it to `config.toml` and edit the copy to replace the placeholders with your specific details.
### 3. Run the Exporter

The following command demonstrates how to run the Ledger Exporter:
```bash
docker run --platform linux/amd64 \
  -v "$HOME/.config/gcloud/application_default_credentials.json":/.config/gcp/credentials.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/.config/gcp/credentials.json \
  -v ${PWD}/config.toml:/config.toml \
  stellar/ledger-exporter <command> [options]
```
**Explanation:**

* `--platform linux/amd64`: Specifies the platform architecture (adjust if needed for your system).
* `-v`: Mounts volumes to map your local GCP credentials and config.toml file into the container:
  * `$HOME/.config/gcloud/application_default_credentials.json`: Your local GCP credentials file.
  * `${PWD}/config.toml`: Your local configuration file.
* `-e GOOGLE_APPLICATION_CREDENTIALS=/.config/gcp/credentials.json`: Sets the environment variable for credentials within the container.
* `stellar/ledger-exporter`: The Docker image name.
* `<command>`: The Ledger Exporter command: [append](#1-append-continuously-export-new-data) or [scan-and-fill](#2-scan-and-fill-fill-data-gaps).
## Command Line Interface (CLI)
The Ledger Exporter offers two modes of operation for exporting ledger data:

### 1. append: Continuously Export New Data

Exports ledgers, initially searching from `--start` for the next ledger sequence number absent from the data store. If an absence is detected, the export range is narrowed to `--start <absent_ledger_sequence>`.
This mode requires ledgers to be present on the remote data store for some (possibly empty) prefix of the requested range and then absent for the (possibly empty) remainder.

In this mode, the `--end` ledger can be provided to stop the process once the export has reached that ledger; if it is absent or 0, the exporter will continuously export new ledgers as they are emitted from the network.
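The resume logic described above can be sketched as follows: scan forward from `--start` and narrow the range to the first absent ledger. A simple set stands in for the remote datastore's object keys; the real exporter queries the datastore itself:

```go
package main

import "fmt"

// findResumeStart implements the narrowing described for append mode:
// walk forward from start and return the first ledger sequence that is
// absent from the data store.
func findResumeStart(start uint32, exported map[uint32]bool) uint32 {
	seq := start
	for exported[seq] {
		seq++
	}
	return seq
}

func main() {
	// Ledgers 100-149 are already exported; `append --start 100`
	// narrows the export range to start at 150.
	exported := map[uint32]bool{}
	for seq := uint32(100); seq < 150; seq++ {
		exported[seq] = true
	}
	fmt.Println(findResumeStart(100, exported))
}
```

Note the prefix/remainder requirement above: this scan is only well-defined when the exported ledgers form a contiguous prefix of the requested range.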
It's guaranteed that ledgers exported during `append` mode from `start` up to the last ledger file logged as `Uploaded {ledger file name}` are contiguous, meaning all ledgers within that range were exported to the data lake with no gaps or missing ledgers in between.

**Usage:**
```bash
docker run --platform linux/amd64 -d \
  -v "$HOME/.config/gcloud/application_default_credentials.json":/.config/gcp/credentials.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/.config/gcp/credentials.json \
  -v ${PWD}/config.toml:/config.toml \
  stellar/ledger-exporter \
  append --start <start_ledger> [--end <end_ledger>] [--config-file <config_file>]
```
Arguments:
- `--start <start_ledger>` (required): The starting ledger sequence number for the export process.
- `--end <end_ledger>` (optional): The ending ledger sequence number. If omitted or set to 0, the exporter will continuously export new ledgers as they appear on the network.
- `--config-file <config_file_path>` (optional): The path to your configuration file, containing details like GCS bucket information. If not provided, the exporter will look for `config.toml` in the directory where you run the command.
### 2. scan-and-fill: Fill Data Gaps

Scans the datastore (GCS bucket) for the specified ledger range and exports any missing ledgers to the datastore. This mode avoids unnecessary exports if the data is already present. The range is specified using the `--start` and `--end` options.

**Usage:**
```bash
docker run --platform linux/amd64 -d \
  -v "$HOME/.config/gcloud/application_default_credentials.json":/.config/gcp/credentials.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/.config/gcp/credentials.json \
  -v ${PWD}/config.toml:/config.toml \
  stellar/ledger-exporter \
  scan-and-fill --start <start_ledger> --end <end_ledger> [--config-file <config_file>]
```
Arguments:
- `--start <start_ledger>` (required): The starting ledger sequence number in the range to export.
- `--end <end_ledger>` (required): The ending ledger sequence number in the range.
- `--config-file <config_file_path>` (optional): The path to your configuration file, containing details like GCS bucket information. If not provided, the exporter will look for `config.toml` in the directory where you run the command.
# Sample TOML Configuration

# Admin port configuration
# Specifies the port number for hosting the web service locally to publish metrics.
admin_port = 6061
# Datastore Configuration
[datastore_config]
# Specifies the type of datastore. Currently, only Google Cloud Storage (GCS) is supported.
type = "GCS"

[datastore_config.params]
# The Google Cloud Storage bucket path for storing data, with optional subpaths for organization.
destination_bucket_path = "your-bucket-name/<optional_subpath1>/<optional_subpath2>/"

[datastore_config.schema]
# Configuration for data organization
ledgers_per_file = 64     # Number of ledgers stored in each file.
files_per_partition = 10  # Number of files per partition/directory.
# Stellar-core Configuration
[stellar_core_config]
# Use the default captive-core config for the given network.
# Options are "testnet" for the test network or "pubnet" for the public network.
network = "testnet"

# Alternatively, you can manually configure captive-core parameters (these override the defaults when 'network' is also set).

# Path to the captive-core configuration file.
#captive_core_config_path = "my-captive-core.cfg"

# URLs for Stellar history archives; multiple URLs are allowed.
#history_archive_urls = ["http://testarchiveurl1", "http://testarchiveurl2"]

# Network passphrase for the Stellar network.
#network_passphrase = "Test SDF Network ; September 2015"

# Path to the stellar-core binary.
# Not required when running in a Docker container, as the container has stellar-core installed and the path set.
# When running outside of Docker, the exporter will look for stellar-core in the OS path if this is not set.
#stellar_core_binary_path = "/my/path/to/stellar-core"