diff --git a/docs/accessing-data/overview.mdx b/docs/accessing-data/overview.mdx
index fbc99b0c4..cf0bf6c18 100644
--- a/docs/accessing-data/overview.mdx
+++ b/docs/accessing-data/overview.mdx
@@ -5,7 +5,7 @@ sidebar_position: 0
## What is Hubble?
-Hubble is an open-source, publicly available dataset that provides a complete historical record of the Stellar network. Similar to Horizon, it ingests and presents the data produced by the Stellar network in a format that is easier to consume than the performance-oriented data representations used by Stellar Core. The dataset is hosted on BigQuery–meaning it is suitable for large, analytic workloads, historical data retrieval and complex data aggregation. **Hubble should not be used for real-time data retrieval and cannot submit transactions to the network.** For real time use cases, we recommend [running an API server](/docs/run-api-server).
+Hubble is an open-source, publicly available dataset that provides a complete historical record of the Stellar network. Similar to Horizon, it ingests and presents the data produced by the Stellar network in a format that is easier to consume than the performance-oriented data representations used by Stellar Core. The dataset is hosted on BigQuery, meaning it is suitable for large analytic workloads, historical data retrieval, and complex data aggregation. **Hubble should not be used for real-time data retrieval and cannot submit transactions to the network.** For real-time use cases, we recommend [running an API server](/docs/run-platform-server).
This guide describes when to use Hubble and how to connect. To view the underlying data structures, queries and examples, use the [Viewing Metadata](/docs/accessing-data/viewing-metadata) and [Optimizing Queries](/docs/accessing-data/optimizing-queries) tutorials.
diff --git a/docs/fundamentals-and-concepts/stellar-stack.mdx b/docs/fundamentals-and-concepts/stellar-stack.mdx
index 2b56fd5a9..cd88f688a 100644
--- a/docs/fundamentals-and-concepts/stellar-stack.mdx
+++ b/docs/fundamentals-and-concepts/stellar-stack.mdx
@@ -15,11 +15,11 @@ Nodes reach consensus using the Stellar Consensus Protocol, which can you can le
Anyone can run a Stellar Core node, but you don’t have to in order to build on Stellar. We recommend you do so if you issue an asset and want to ensure the accuracy of the ledger, if you want to participate in network governance by voting on protocol version, minimum fees, and resource and ledger limits, and/or if you want to contribute to Stellar’s overall health and decentralization. Check out our tutorial on installing, configuring, and maintaining your own node here: [Run a Core Node Tutorial](../run-core-node).
-## Horizon API
+## Platform Services
-Horizon is the client-facing RESTful HTTP API server that allows programmatic access to submit transactions and query the network’s historical data. It acts as the interface for applications that want to access the Stellar network. You can communicate with Horizon using an SDK, a web browser, or with simple command tools like cURL.
+Horizon is the client-facing RESTful HTTP API server in the platform layer that allows programmatic access to submit transactions and query the network’s historical data. It acts as the interface for applications that want to access the Stellar network. You can communicate with Horizon using an SDK, a web browser, or simple command-line tools like cURL.
-You do not need to run your own Horizon instance — when you're getting started, you can use the free SDF Horizon instance to access the network — but it is recommended that you do when you’re ready to launch a finished product. Check out how to do so here: [Run an API Server Tutorial](../run-api-server).
+You do not need to run your own Horizon instance — when you're getting started, you can use the free SDF Horizon instance to access the network — but it is recommended that you do when you’re ready to launch a finished product. Check out how to do so here: [Run Platform Services Tutorial](../run-platform-server).
Learn all there is to know about using Horizon in the Horizon [API Reference documentation](https://developers.stellar.org/api).
@@ -39,8 +39,7 @@ The Soroban CLI is the command line interface to Soroban and can be downloaded [
SDKs simplify some of the work of accessing Horizon and the Soroban RPC by converting the data into friendlier formats and allowing you to program in the language of your choice. Stellar’s SDKs show you how to request data and create and submit transactions. Soroban's SDKs allow you to write smart contracts in Rust and interact with smart contracts in a myriad of other languages.
-View Stellar's [SDK library](../tools-and-sdks#sdk-library) to access our SDKs and their documentation.
-View Soroban's [SDK library](https://soroban.stellar.org/docs/category/sdks) in the Soroban docs.
+View Stellar's [SDK library](../tools-and-sdks#sdk-library) to access our SDKs and their documentation. View Soroban's [SDK library](https://soroban.stellar.org/docs/category/sdks) in the Soroban docs.
## DeFi protocols
diff --git a/docs/run-api-server/configuring.mdx b/docs/run-api-server/configuring.mdx
deleted file mode 100644
index 0aa49f46c..000000000
--- a/docs/run-api-server/configuring.mdx
+++ /dev/null
@@ -1,147 +0,0 @@
----
-title: Configuring
-sidebar_position: 30
----
-
-import { Alert } from "@site/src/components/Alert";
-import { CodeExample } from "@site/src/components/CodeExample";
-
-Once Horizon is [installed](./installing.mdx), you are ready to configure it. Note that Horizon fulfills three important, distinct roles:
-
-- **serving requests** like a regular web-based API,
-- **ingesting ledgers** from the Stellar network to keep its world-view up to date, and
-- **transaction submission** for interacting with the Stellar network.
-
-Though we encourage operators to separate these responsibilities across instances for resilience and independent scaling, a single Horizon instance can perform all of these functions at once.
-
-For the remainder of this guide, we will assume that you want a single, standalone Horizon instance that performs ingestion and allows transaction submission. We'll cover ingestion in detail [later](./ingestion.mdx) if you want to read ahead and decide which approach is right for you.
-
-## Parameters
-
-Horizon can be configured by both command line flags and environment variables. To see Horizon's list of available command line flags, their default values, and their corresponding environment variable names, run:
-
-
-
-```bash
-stellar-horizon --help
-```
-
-
-
-You'll see that Horizon defines a large number of flags; however, only a handful are required to get started:
-
-| flag | envvar | example |
-| --- | --- | --- |
-| `--db-url` | `DATABASE_URL` | postgres://localhost/horizon_testnet |
-| `--history-archive-urls` | `HISTORY_ARCHIVE_URLS` | https://history.stellar.org/prd/core-testnet/core_testnet_001,https://history.stellar.org/prd/core-testnet/core_testnet_002 |
-
-- The most important parameter, `--db-url`, specifies the Horizon database; its value should be a valid [PostgreSQL Connection URI](http://www.postgresql.org/docs/9.6/static/libpq-connect.html#AEN46025). If you are running Horizon locally, you may want to add the `sslmode=disable` query parameter to the connection string, although this is not recommended in production environments.
-- The other parameter, `--history-archive-urls`, specifies a set of comma-separated locations from which Horizon should download [history archives](../run-core-node/publishing-history-archives.mdx).
-
-#### With Ingestion
-
-As outlined at the beginning, we presume you are interested in starting an ingesting instance. For this, you need to specify some additional flags:
-
-| flag | envvar | example |
-| --- | --- | --- |
-| `--captive-core-config-path` | `CAPTIVE_CORE_CONFIG_PATH` | /etc/default/stellar-captive-core.toml |
-| `--stellar-core-binary-path` | `STELLAR_CORE_BINARY_PATH` | /usr/bin/stellar-core |
-
-Note that **ingestion is enabled by default**.
-
-- The first parameter, `--captive-core-config-path`, points to a Captive Core configuration file. This TOML file only requires a few fields (explained [below](#configuring-captive-core)) to get up and running.
-- The second parameter, `--stellar-core-binary-path`, is a filesystem path to a Stellar Core binary. Horizon will actually search your PATH for `stellar-core` by default, so if your environment is configured appropriately, you don't need to pass this.
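-
-Putting these together, a minimal sketch of launching a single ingesting instance might look like this (all values are illustrative; adjust paths and URLs for your environment):
-
-```bash
-stellar-horizon \
-  --db-url="postgres://localhost/horizon_testnet" \
-  --history-archive-urls="https://history.stellar.org/prd/core-testnet/core_testnet_001,https://history.stellar.org/prd/core-testnet/core_testnet_002" \
-  --captive-core-config-path="/etc/default/stellar-captive-core.toml" \
-  --stellar-core-binary-path="/usr/bin/stellar-core"
-```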
-
-#### Without Ingestion
-
-If you aren't configuring your Horizon instance to perform ingestion, it still needs to be aware of what's happening on the Stellar network to be useful. Thus, you need to point Horizon to a running Stellar Core instance:
-
-| flag | envvar | example |
-| -------------------- | ------------------ | ---------------------- |
-| `--ingest` | `INGEST` | false |
-| `--stellar-core-url` | `STELLAR_CORE_URL` | http://127.0.0.1:11626 |
-
-This would be a [standalone](../run-core-node/) Stellar Core instance.
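-
-For example, a request-serving-only instance pointed at such a Core might be configured like so (a sketch; values are illustrative):
-
-```bash
-stellar-horizon \
-  --db-url="postgres://localhost/horizon_testnet" \
-  --ingest=false \
-  --stellar-core-url="http://127.0.0.1:11626"
-```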
-
-### Manual Installation
-
-Specifying command line flags every time you invoke Horizon can be cumbersome, so we recommend using environment variables. There are many tools you can use to manage them, such as [direnv](http://direnv.net/) or [dotenv](https://github.com/bkeepers/dotenv).
-
-For configuration related to [Captive Core](#configuring-captive-core), you should prepare a separate TOML file and pass it to the `--captive-core-config-path`/`CAPTIVE_CORE_CONFIG_PATH` argument.
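-
-For instance, with direnv, a project-local `.envrc` could carry the same settings discussed above (a sketch; values are examples):
-
-```bash
-# .envrc: loaded automatically by direnv when you enter the directory
-export DATABASE_URL="postgres://localhost/horizon_testnet"
-export HISTORY_ARCHIVE_URLS="https://history.stellar.org/prd/core-testnet/core_testnet_001"
-export CAPTIVE_CORE_CONFIG_PATH="/etc/default/stellar-captive-core.toml"
-```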
-
-### Package Manager Installation
-
-If you installed Horizon [via your package manager](./installing.mdx#package-manager), the provided `stellar-horizon-cmd` wrapper will import a configuration from `/etc/default/stellar-horizon` and set up the environment accordingly. Hence, if you want to change things, edit the configuration file in `/etc/default/stellar-horizon`.
-
-
-
-This script invokes Horizon as the `stellar` user, so make sure that permissions for this user are set up accordingly. For example: the `--captive-core-storage-path` (by default the current working directory) should be writable by this user; the user should be able to execute the `horizon` and `stellar-core` binaries; etc.
-
-
-
-Note that the default configuration (located at `/etc/default/stellar-horizon`) provided by the package manager **enables ingestion by default**. Again, refer to the later [Ingestion](./ingestion.mdx) page to see what setup is right for you. If you want certain nodes dedicated exclusively to fulfilling requests, you should set the `INGEST` flag to `false` on those nodes.
-
-## Preparing the Database
-
-Before running the Horizon server, you must first prepare the Horizon database specified by the `DATABASE_URL`. This database will be used for all of the information produced by Horizon, most notably historical information about transactions that have occurred on the Stellar network.
-
-To prepare a database for Horizon's use, you must first ensure it is blank. It's easiest to create a new database on your PostgreSQL server specifically for Horizon's use (e.g. `createdb horizon`). Note that you may need to [add a role](https://www.postgresql.org/docs/9.6/sql-createrole.html) for yourself (or the `stellar` user) through the `postgres` user if you're starting from scratch. Next, install the schema by running `stellar-horizon db init`. This command will log any errors that occur.
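-
-A minimal sketch of that sequence, assuming a local PostgreSQL server on which your user may create databases:
-
-```bash
-createdb horizon                  # create a blank database for Horizon's use
-export DATABASE_URL="postgres://localhost/horizon"
-stellar-horizon db init           # install the Horizon schema
-```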
-
-Remember to update the appropriate DB-related flags or environment variables to configure Horizon as explained [above](#parameters).
-
-### Postgres Configuration
-
-It is recommended to set `random_page_cost=1` in Postgres' configuration if you are using SSD storage. With this setting, the query planner will make better use of indices, especially for `JOIN` queries. We've noticed a huge speed improvement for some queries with this setting.
-
-To improve the availability of both ingestion and frontend servers, it's recommended to set the following values:
-
-- `tcp_keepalives_idle`: 10 seconds
-- `tcp_keepalives_interval`: 1 second
-- `tcp_keepalives_count`: 5
-
-With the config above, if there are no queries from a given client for 10 seconds, Postgres will start sending TCP keepalive packets, retrying up to 5 times at 1-second intervals. If the client still hasn't responded after that, Postgres will drop the connection.
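-
-Together, these correspond to the following lines in `postgresql.conf` (a sketch; apply them however you normally manage your Postgres configuration):
-
-```
-random_page_cost = 1
-tcp_keepalives_idle = 10
-tcp_keepalives_interval = 1
-tcp_keepalives_count = 5
-```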
-
-## Configuring Captive Core
-
-While a full Stellar Core node requires a complex configuration with [lots of possible fields](https://github.com/stellar/stellar-core/blob/master/docs/stellar-core_example.cfg), the Captive Core configuration file can be kept extremely barebones. Most of the configuration will be generated automagically at runtime. Here's a minimal working example, operating under the assumption that you want to connect to the testnet and trust SDF's validators exclusively:
-
-
-
-```toml
-[[HOME_DOMAINS]]
-HOME_DOMAIN="testnet.stellar.org"
-QUALITY="HIGH"
-
-[[VALIDATORS]]
-NAME="sdf_testnet_1"
-HOME_DOMAIN="testnet.stellar.org"
-PUBLIC_KEY="GDKXE2OZMJIPOSLNA6N6F2BVCI3O777I2OOC4BV7VOYUEHYX7RTRYA7Y"
-ADDRESS="core-testnet1.stellar.org"
-HISTORY="curl -sf http://history.stellar.org/prd/core-testnet/core_testnet_001/{0} -o {1}"
-
-[[VALIDATORS]]
-NAME="sdf_testnet_2"
-HOME_DOMAIN="testnet.stellar.org"
-PUBLIC_KEY="GCUCJTIYXSOXKBSNFGNFWW5MUQ54HKRPGJUTQFJ5RQXZXNOLNXYDHRAP"
-ADDRESS="core-testnet2.stellar.org"
-HISTORY="curl -sf http://history.stellar.org/prd/core-testnet/core_testnet_002/{0} -o {1}"
-
-[[VALIDATORS]]
-NAME="sdf_testnet_3"
-HOME_DOMAIN="testnet.stellar.org"
-PUBLIC_KEY="GC2V2EFSXN6SQTWVYA5EPJPBWWIMSD2XQNKUOHGEKB535AQE2I6IXV2Z"
-ADDRESS="core-testnet3.stellar.org"
-HISTORY="curl -sf http://history.stellar.org/prd/core-testnet/core_testnet_003/{0} -o {1}"
-```
-
-
-
-_(For the remainder of this guide, we'll assume this file lives at `/etc/default/stellar-captive-core.toml`.)_
-
-The minimum required fields are the `[[HOME_DOMAINS]]` and a set of `[[VALIDATORS]]`.
-
-If you want to adapt this and configure your nodes to work on the Stellar **pubnet**, you'll need to think more carefully about the validators you want to trust in your quorum. As inspiration, [here](https://github.com/stellar/go/blob/master/services/horizon/docker/stellar-core-pubnet.cfg#L15-L202) is the set of domains and validators that SDF includes in its pubnet quorum. You should also familiarize yourself with how to configure a proper quorum set; the [Core documentation](../run-core-node/configuring.mdx#choosing-your-quorum-set) has more on this.
-
-Captive Core's functionality is controlled through this file. Note that while the Captive Core configuration looks like a subset of a traditional Stellar Core configuration file, you cannot use a traditional Stellar Core configuration file to configure Captive Core. The TOML format is preserved for operator ease of [migrating](./migrating.mdx) from Horizon 1.x, but this is a fundamentally different architecture and should be treated as such.
-
-Now, jump ahead to [Running Horizon](./running.mdx)!
diff --git a/docs/run-api-server/index.mdx b/docs/run-api-server/index.mdx
deleted file mode 100644
index 6dc21ec1e..000000000
--- a/docs/run-api-server/index.mdx
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: "Overview"
-sidebar_position: 0
----
-
-Horizon is responsible for providing an HTTP API to data in the Stellar network. It ingests and re-serves the data produced by the Stellar network in a form that is easier to consume by the average application relative to the performance-oriented data representations used by Stellar Core.
-
-This guide describes how to administer a production Horizon 2.0+ instance (you can refer to the [Developers' Blog](https://www.stellar.org/developers-blog/a-new-sun-on-the-horizon) for some background on the performance and architectural improvements of this major version bump). For information about developing on the Horizon codebase, check out the [Development Guide](https://github.com/stellar/go/blob/master/services/horizon/internal/docs/developing.md).
-
-Before we begin, it's worth reiterating the sentiment echoed in the [Run a Core Node](../run-core-node) guide: **we do not endorse running Horizon backed by a standalone Stellar Core instance**, and especially not by a _validating_ Stellar Core. These are two separate concerns, and decoupling them is important for both reliability and performance. Horizon instead manages its own, pared-down version of Stellar Core optimized for its own subset of needs (we'll refer to this as a "Captive Core" instance).
-
-## Upgrading From Horizon 1.x
-
-If you're coming from an existing deployment of Horizon, you're probably running it alongside a standalone "Watcher" Stellar Core node. As noted above, this architecture is now **deprecated**, and support for it will be dropped in Horizon 3.x. The [Migration](./migrating.mdx) guide should facilitate the process to moving to the Captive Core architecture and get you up to speed.
-
-## Why Run Horizon?
-
-You don't need to run your own Horizon instance to build on Stellar: the Stellar Development Foundation runs two Horizon servers, one for the public network and one for the test network: https://horizon.stellar.org and https://horizon-testnet.stellar.org. These servers are free for anyone to use and should be fine for development and small-scale projects. They are, however, rate limited, and we don't recommend using them for production services that need strong reliability.
-
-Running Horizon within your own infrastructure provides a number of benefits. You can:
-
-- Disable request rate limiting for guaranteed network access
-- Have full operational control without dependency on the Stellar Development Foundation
-- Run multiple instances for redundancy and scalability
diff --git a/docs/run-api-server/ingestion-filtering.mdx b/docs/run-api-server/ingestion-filtering.mdx
deleted file mode 100644
index 6e330246f..000000000
--- a/docs/run-api-server/ingestion-filtering.mdx
+++ /dev/null
@@ -1,125 +0,0 @@
----
-title: Ingestion Filtering
-order: 46
----
-
-The Ingestion Filtering feature is now available for public beta testing in Horizon [version 2.18.0](https://github.com/stellar/go/releases/tag/horizon-v2.18.0) and up.
-
-## Overview
-
-Ingestion Filtering enables Horizon operators to drastically reduce the storage footprint of their Horizon DB by whitelisting Assets and/or Accounts that are relevant to their operations. This feature is ideally suited for private Horizon operators who do not need full history for all assets and accounts on the Stellar network.
-
-### Why is it useful:
-
-Previously, the only way to limit data storage was to limit the amount of history Horizon ingests, either by configuring the starting ledger to be later than the genesis block or via rolling retention (e.g. the last 30 days). This feature allows users to store the full history of the assets and accounts (and related entities) that they care about.
-
-For further context, running a full history Horizon instance currently takes ~15 TB of disk space (as of June 2022), with storage growing at a rate of ~1 TB/month. As a benchmark, filtering by even 100 of the most active accounts and assets reduces storage by over 90%. For the majority of users, who care about an even more limited set of assets and accounts, storage savings should be well over 99%. Other benefits include reduced storage operating costs, improved DB health metrics, and better query performance.
-
-### How does it work:
-
-This feature provides the ability to select which ledger transactions are accepted at ingestion time to be stored in Horizon's historical database. Filter whitelists are maintained via an admin REST API (and persisted in the DB). The ingestion process checks the list and persists transactions related to whitelisted Accounts and Assets. Note that the feature does not filter the current state of the ledger and related DB tables, only history tables.
-
-Whitelisting can include the following supported entities:
-
-- Account id
-- Asset id (canonical)
-
-Given that all transactions related to the whitelisted entities are included, all historical time series data related to those transactions is saved in Horizon's history DB as well. For example, whitelisting an Asset will also persist all Accounts that interact with that Asset, and vice versa: if an Account is whitelisted, all Assets held by that Account will also be included.
-
-## Configuration:
-
-The filters and their configuration are optional features and must be enabled with Horizon command line flags or environment variables:
-
-```
-admin-port=[your_choice]
-```
-
-and
-
-```
-exp-enable-ingestion-filtering=true
-```
-
-As environment variables:
-
-```
-ADMIN_PORT=[your_choice]
-```
-
-and
-
-```
-EXP_ENABLE_INGESTION_FILTERING=true
-```
-
-These should be included in addition to the standard ingestion parameters that must also be set for the ingestion engine to run, such as `ingest=true`, etc. Once these flags are included at Horizon runtime, filter configurations and their rules are initially empty and the filters are disabled by default. To enable filters, update the configuration settings; refer to the Admin API docs, which are published as an OpenAPI 3.0 document on the admin port at `http://localhost:<ADMIN_PORT>/`. You can paste the contents from that URL into any OpenAPI tool such as [Swagger](https://editor.swagger.io/), which will render a visual explorer of the API endpoints. Follow the details and examples for these endpoints:
-
-```
-/ingestion/filters/account
-/ingestion/filters/asset
-```
-
-## Operation:
-
-Entities can be added and removed by submitting `HTTP PUT` requests to the filter endpoints on the admin port (`http://localhost:<ADMIN_PORT>/ingestion/filters/...`).
-
-To add new filtered entities, submit an `HTTP PUT` request to the admin API endpoint for either the Asset or the Account filter. The PUT request body is JSON that expresses the filter rules; currently the rules model is a whitelist, expressed as a JSON string array. To remove entities, submit an `HTTP PUT` request that updates the list accordingly. To retrieve what is currently configured, submit an `HTTP GET` request.
-
-The OAPI doc published by the Admin Server can be pulled directly from the Github repo [here](https://github.com/stellar/go/blob/horizon-v2.18.0/services/horizon/internal/httpx/static/admin_oapi.yml).
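-
-As an illustration only (the payload shape and port here are hypothetical; the authoritative request schema is the OpenAPI doc linked above), enabling an account whitelist might look like:
-
-```bash
-curl -X PUT "http://localhost:4200/ingestion/filters/account" \
-  -H "Content-Type: application/json" \
-  -d '{"whitelist": ["<ACCOUNT_ID_1>", "<ACCOUNT_ID_2>"], "enabled": true}'
-```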
-
-### Reverting Options:
-
-1. Disable both the Asset and Account filter config rules via the [Admin API](https://github.com/stellar/go/blob/master/services/horizon/internal/httpx/static/admin_oapi.yml) by setting `enabled=false` in each filter rule, or set `--exp-enable-ingestion-filtering=false`. This will open forward ingestion back up to include all data again. It is then your choice whether to run a reingestion to capture older data that would have been dropped by the filters but can now be re-imported with filters off, e.g. `horizon db reingest range <start> <end>`.
-
-2. If you have a DB backup:
-
-- restore the DB
-- run a reingestion gap-fill command (`stellar-horizon db fill-gaps`) to fill in the gaps up to the current tip of the chain
-- resume Ingestion Sync
-
-3. Start over with a fresh DB (or see Patching Historical Data below)
-
-### Patching Historical Data:
-
-If new Assets or Accounts are added to the whitelist and you would like to patch in their missing historical data, reingestion can be run. The reingestion process is idempotent: it will re-ingest the data from the designated ledger range, overwriting existing rows or inserting new data not already in the current DB.
-
-## Sample Use Case:
-
-As an Asset Issuer, I have issued 4 assets and am interested in all transaction data related to those assets, including the customer Accounts that interact with them, along with the following:
-
-- Operations
-- Effects
-- Payments
-- Claimable balances
-- Trades
-
-I would like to store the full history of all transactions related to those assets from their genesis.
-
-### Pre-requisites:
-
-You have an existing Horizon instance installed and configured, with at least forward ingestion enabled, able to successfully sync to the current state of the Stellar network. Bonus if you are familiar with running reingestion.
-
-Steps:
-
-1. Configure the 4 whitelisted Assets via the Admin API. Also check `HISTORY_RETENTION_COUNT` and set it to `0` if you don't want any history purged now that you are filtering; otherwise, Horizon will continue to reap all data older than the retention window.
-
-2. Decide whether you want to wipe existing history data from the DB before filtering starts running. You can effectively clear the history by running
-
-```
-HISTORY_RETENTION_COUNT=1 stellar-horizon db reap
-```
-
-or drop/create the db and run `stellar-horizon db init`.
-
-Alternatively, if you do not need to free up old history tables, you can stop here. Any time filter rules are changed or enabled, the history tables will immediately reflect the filtered data per those latest rules, from the time the filter config is updated onward.
-
-3. If starting with a fresh DB, decide if you want to re-run ingestion from the earliest ledger number related to the whitelisted entities, to populate history for just the data allowed by the filters.
-
-- Tip: To find this ledger number, you can check for the earliest transaction of the Account issuing that asset.
-- Also consider running parallel workers to speed up the process.
-
-4. Optional: When re-ingestion is finished, run an ingestion gap fill `stellar-horizon db fill-gaps` to fill any gaps that may have been missed.
-
-5. Verify that your data is there
-
-- Do a spot check of Accounts that should have been automatically ingested against a full history Horizon instance such as SDF Horizon
diff --git a/docs/run-api-server/ingestion.mdx b/docs/run-api-server/ingestion.mdx
deleted file mode 100644
index 6c7538544..000000000
--- a/docs/run-api-server/ingestion.mdx
+++ /dev/null
@@ -1,305 +0,0 @@
----
-title: Ingestion
-sidebar_position: 45
----
-
-import { CodeExample } from "@site/src/components/CodeExample";
-
-Horizon provides access to both current and historical state on the Stellar network through a process called **ingestion**.
-
-Horizon provides most of its utility through ingested data, and your Horizon server can be configured to listen for and ingest transaction results from the Stellar network. Ingestion enables API access to both current (e.g. someone's balance) and historical state (e.g. someone's transaction history).
-
-## Ingestion Types
-
-There are two primary ingestion use-cases for Horizon operations:
-
-- ingesting **live** data to stay up to date with the latest, real-time changes to the Stellar network, and
-- ingesting **historical** data to see how the Stellar ledger has changed over time
-
-### Ingesting Live Data
-
-Ingestion is enabled by default, and in this guide we've [assumed](./configuring.mdx) you kept it on. If you've disabled it, pass the `--ingest` flag or set `INGEST=true` in your environment.
-
-For a serious setup, **we highly recommend having more than one live ingesting instance**, as this makes it easier to avoid downtime during upgrades and adds resilience to your infrastructure, ensuring you always have the latest network data.
-
-### Ingesting Historical Data
-
-Providing API access to historical data is facilitated by a Horizon subcommand:
-
-
-
-```
-stellar-horizon db reingest range <start> <end>
-```
-
-
-
-_(The command name is a bit of a misnomer: you can use `reingest` both to ingest new ledger data and reingest old data.)_
-
-You can run this process in the background while your Horizon server is up. It will continuously decrement the `history.elder_ledger` in your `/metrics` endpoint until the `<start>` ledger is reached and the backfill is complete. If Horizon receives a request for a ledger it hasn't ingested, it returns a 503 error and clarifies that it's `Still Ingesting` (see [below](#some-endpoints-are-not-available-during-state-ingestion)).
-
-#### Deciding on how much history to ingest
-
-You should think carefully about the amount of ingested data you'd like to keep around. Though the storage requirements for the entire Stellar network are substantial, **most organizations and operators only need a small fraction of the history** to fit their use case. For example,
-
-- If you just started developing a new application or service, you can probably get away with just doing live ingestion, since nothing you do requires historical data.
-
-- If you're moving an existing service away from reliance on SDF's Horizon, you likely only need history from the point at which you started using the Stellar network.
-
-- If you provide temporal guarantees to your users (a 6-month guarantee of transaction history like some online banks do, or history only for the last thousand ledgers; see [below](#managing-storage)), then you similarly don't have heavy ingestion requirements.
-
-Even a massively-popular, well-established custodial service probably doesn't need full history to service its users. It will, however, need full history to be a [Full Validator](../run-core-node/index.mdx#full-validator) with published history archives.
-
-#### Reingestion
-
-Regardless of whether you are running live ingestion or building up historical data, you may occasionally need to _reingest_ ledgers anew (for example, on certain upgrades of Horizon). For this, you use the same command as above.
-
-#### Parallel ingestion
-
-Note that historical (re)ingestion happens independently for any given ledger range, so you can reingest in parallel across multiple Horizon processes:
-
-
-
-```
-horizon1> stellar-horizon db reingest range 1 10000
-horizon2> stellar-horizon db reingest range 10001 20000
-horizon3> stellar-horizon db reingest range 20001 30000
-# ... etc.
-```
-
-
-
-#### Managing storage
-
-Over time, the recorded network history will grow unbounded, increasing storage used by the database. Horizon needs sufficient disk space to expand the data ingested from Stellar Core. Unless you need to maintain a [history archive](../run-core-node/publishing-history-archives.mdx), you should configure Horizon to only retain a certain number of ledgers in the database.
-
-This is done using the `--history-retention-count` flag or the `HISTORY_RETENTION_COUNT` environment variable. Set the value to the number of recent ledgers you wish to keep around, and every hour the Horizon subsystem will reap expired data. Alternatively, Horizon provides a command to force a collection:
-
-
-
-```bash
-stellar-horizon db reap
-```
-
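-As an illustration of the retention flag, keeping roughly 30 days of history at the network's ~5-second ledger cadence works out to about 518,400 ledgers (pick whatever window fits your use case):
-
-```bash
-export HISTORY_RETENTION_COUNT=518400  # 30 days * 24 h * 3600 s / 5 s per ledger
-```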
-
-
-### Common Issues
-
-Ingestion is a complicated process, so there are a number of things to look out for.
-
-#### Some endpoints are not available during state ingestion
-
-Endpoints that display state information are not available during initial state ingestion and will return a `503 Service Unavailable`/`Still Ingesting` error. An example is the `/paths` endpoint (built using offers). Such endpoints will become available after state ingestion is done (usually within a couple of minutes).
-
-#### State ingestion is taking a lot of time
-
-State ingestion shouldn't take more than a couple of minutes on an AWS `c5.xlarge` instance or equivalent.
-
-It's possible that the progress logs (see [below](#reading-the-logs)) will not show anything new for a long period of time, or will print a lot of progress entries every few seconds. This happens because of the way history archives are designed.
-
-The ingestion is still working but it's processing entries of type `DEADENTRY`. If there are a lot of them in the bucket, there are no _active_ entries to process. We plan to improve the progress logs to display actual percentage progress so it's easier to estimate an ETA.
-
-If you see that ingestion is not proceeding for a very long period of time:
-
-1. Check the RAM usage on the machine. It's possible that the system ran out of RAM and is using swap memory, which is extremely slow.
-1. If above is not the case, file a [new issue](https://github.com/stellar/go/issues/new/choose) in the [Horizon repository](https://github.com/stellar/go/tree/master/services/horizon).
-
-#### CPU usage goes high every few minutes
-
-**This is by design**. Horizon runs a state verifier routine that compares state in local storage to history archives every 64 ledgers to ensure data changes are applied correctly. If data corruption is detected, Horizon will block access to endpoints serving invalid data.
-
-We recommend keeping this security feature turned on; however, if it's causing problems (due to CPU usage) this can be disabled via the `--ingest-disable-state-verification`/`INGEST_DISABLE_STATE_VERIFICATION` parameter.
-
-## Ingesting Full Public Network History
-
-In some (albeit rare) cases, it can be convenient to (re)ingest the full Stellar Public Network history into Horizon (e.g. when running Horizon for the first time). Using multiple Captive Core workers on a high performance environment (powerful machines on which to run Horizon + a powerful database) makes this possible in ~1.5 days.
-
-The following instructions assume the reingestion is done on AWS. However, they should be applicable to any other environment with equivalent capacity. In the same way, the instructions can be adapted to reingest only specific parts of the history.
-
-### Prerequisites
-
-Before we begin, we make some assumptions about the required environment. Please refer to the [Prerequisites](./prerequisites.mdx) section for the current HW requirements to run Horizon reingestion, whether for historical catch-up or for real-time ingestion (staying in sync with the ledger). A few things to keep in mind:
-
-1. For reingestion, the more parallel workers you provision to speed up the process, the larger the machine required in terms of RAM, CPU, IOPS, and disk size. The RAM needed per worker also increases over time (14 GB RAM/worker as of mid-2022) due to the growth of the ledger. HW specs can be downsized once reingestion is complete.
-
-1. The latest version of [Horizon](./installing.mdx) installed on the machine from (1).
-
-1. The latest version of [Core](https://github.com/stellar/stellar-core) installed on the machine from (1).
-
-1. A Horizon database into which to reingest the history. Preferably, the database should be empty to minimize storage (Postgres accumulates data during usage, which is only deleted when `VACUUM`ed) and should meet the minimum specs for reingestion as outlined in [Prerequisites](./prerequisites.mdx).
-
-As the DB storage grows, the IO capacity will grow along with it. The number of workers (and the size of the instance created in (1)) should be increased accordingly if we want to take advantage of it. To make sure we are minimizing reingestion time, we should watch write IOPS, which should ideally always be close to the theoretical limit of the DB.
-
-### Parallel Reingestion
-
-Once the prerequisites are satisfied, we can spawn two Horizon reingestion processes in parallel:
-
-1. One for the first 17 million ledgers (which are almost empty).
-1. Another one for the rest of the history.
-
-This is because the first 17 million ledgers are almost empty, whilst the rest are much more packed. A single Horizon instance with enough workers to saturate the IO capacity of the machine for the first 17 million ledgers would kill the machine when reingesting the rest (during which there is higher CPU and memory consumption per worker).
-
-64 workers for (1) and 20 workers for (2) will saturate an instance's RAM and 15K IOPS. Again, as the DB storage grows, a larger number of workers and faster storage should be considered.
-
-In order to run the reingestion, first set the following environment variables in the [configuration](./configuring.mdx) (updating values to match your database environment, of course):
-
-
-
-```bash
-export DATABASE_URL=postgres://postgres:secret@db.local:5432/horizon
-export APPLY_MIGRATIONS=true
-export HISTORY_ARCHIVE_URLS=https://s3-eu-west-1.amazonaws.com/history.stellar.org/prd/core-live/core_live_001
-export NETWORK_PASSPHRASE="Public Global Stellar Network ; September 2015"
-export STELLAR_CORE_BINARY_PATH=$(which stellar-core)
-export ENABLE_CAPTIVE_CORE_INGESTION=true
-# Number of ledgers per job sent to the workers.
-# The larger the job, the better performance from Captive Core's perspective,
-# but, you want to choose a job size which maximizes the time all workers are
-# busy.
-export PARALLEL_JOB_SIZE=100000
-# Retries per job
-export RETRIES=10
-export RETRY_BACKOFF_SECONDS=20
-
-# Enable optional config when running captive core ingestion
-
-# For stellar-horizon to download buckets locally at specific location.
-# If not enabled, stellar-horizon would download data in the current working directory.
-# export CAPTIVE_CORE_STORAGE_PATH="/var/lib/stellar"
-
-```
-
-
-
-(Naturally, you can also edit the configuration file at `/etc/default/stellar-horizon` directly if you installed [from a package manager](./installing.mdx#package-manager).)
-
-If Horizon was previously running, first ensure it is stopped. Then, run the following commands in parallel:
-
-1. `stellar-horizon db reingest range --parallel-workers=64 1 16999999`
-1. `stellar-horizon db reingest range --parallel-workers=20 17000000 <latest_ledger>`
-
-(Where `<latest_ledger>` can be found under [SDF Horizon's](https://horizon.stellar.org/) `core_latest_ledger` field.)
-
-When saturating a database instance with 15K IOPS capacity:
-
-(1) should take a few hours to complete.
-
-(2) should take about 3 days to complete.
-
-Although there is a retry mechanism, reingestion may fail half-way. Horizon will print the recommended range to use in order to restart it.
-
-When reingestion is complete, it's worth running `ANALYZE VERBOSE [table]` on all tables to recalculate the stats; this should improve query speed.
-
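-For example, for one of Horizon's history tables (you would repeat this for the others):
-
-```bash
-psql "$DATABASE_URL" -c "ANALYZE VERBOSE history_transactions;"
-```
-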
-### Monitoring reingestion process
-
-This script should help monitor the reingestion process by printing the ledger subranges being reingested:
-
-
-
-```bash
-#!/bin/bash
-# List the catchup ledger-range argument of every running stellar-core
-# subprocess (field 15 of `ps aux` output), printed five ranges per row.
-echo "Current ledger ranges being reingested:"
-echo
-I=1
-for S in $(ps aux | grep stellar-core | grep catchup | awk '{print $15}' | sort -n); do
-  printf '%15s' $S
-  if [ $(( I % 5 )) = 0 ]; then
-    echo
-  fi
-  I=$(( I + 1 ))
-done
-```
-
-
-
-Ideally we would be using Prometheus metrics for this, but they haven't been implemented yet.
-
-Here is an example run:
-
-
-
-```
-Current ledger ranges being reingested:
- 99968/99968 199936/99968 299904/99968 399872/99968 499840/99968
- 599808/99968 699776/99968 799744/99968 899712/99968 999680/99968
- 1099648/99968 1199616/99968 1299584/99968 1399552/99968 1499520/99968
- 1599488/99968 1699456/99968 1799424/99968 1899392/99968 1999360/99968
- 2099328/99968 2199296/99968 2299264/99968 2399232/99968 2499200/99968
- 2599168/99968 2699136/99968 2799104/99968 2899072/99968 2999040/99968
- 3099008/99968 3198976/99968 3298944/99968 3398912/99968 3498880/99968
- 3598848/99968 3698816/99968 3798784/99968 3898752/99968 3998720/99968
- 4098688/99968 4198656/99968 4298624/99968 4398592/99968 4498560/99968
- 4598528/99968 4698496/99968 4798464/99968 4898432/99968 4998400/99968
- 5098368/99968 5198336/99968 5298304/99968 5398272/99968 5498240/99968
- 5598208/99968 5698176/99968 5798144/99968 5898112/99968 5998080/99968
- 6098048/99968 6198016/99968 6297984/99968 6397952/99968 17099967/99968
- 17199935/99968 17299903/99968 17399871/99968 17499839/99968 17599807/99968
- 17699775/99968 17799743/99968 17899711/99968 17999679/99968 18099647/99968
- 18199615/99968 18299583/99968 18399551/99968 18499519/99968 18599487/99968
- 18699455/99968 18799423/99968 18899391/99968 18999359/99968 19099327/99968
- 19199295/99968 19299263/99968 19399231/99968
-```
-
-
-
-## Reading Logs
-
-In order to check the progress and status of ingestion you should check your logs regularly; all logs related to ingestion are tagged with `service=ingest`.
-
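-If you run Horizon under systemd (as the package-manager installation does, with a `stellar-horizon` unit), one convenient way to follow just the ingestion logs is:
-
-```bash
-journalctl -u stellar-horizon -f | grep 'service=ingest'
-```
-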
-The logs start by informing you about state ingestion:
-
-
-
-```
-INFO[...] Starting ingestion system from empty state... pid=5965 service=ingest temp_set="*io.MemoryTempSet"
-INFO[...] Reading from History Archive Snapshot pid=5965 service=ingest ledger=25565887
-```
-
-
-
-During state ingestion, Horizon will log the number of processed entries every 100,000 entries (there are currently around 10M entries in the public network):
-
-
-
-```
-INFO[...] Processing entries from History Archive Snapshot ledger=25565887 numEntries=100000 pid=5965 service=ingest
-INFO[...] Processing entries from History Archive Snapshot ledger=25565887 numEntries=200000 pid=5965 service=ingest
-INFO[...] Processing entries from History Archive Snapshot ledger=25565887 numEntries=300000 pid=5965 service=ingest
-INFO[...] Processing entries from History Archive Snapshot ledger=25565887 numEntries=400000 pid=5965 service=ingest
-INFO[...] Processing entries from History Archive Snapshot ledger=25565887 numEntries=500000 pid=5965 service=ingest
-```
-
-
-
-When state ingestion is finished, it will proceed to ledger ingestion starting from the next ledger after the checkpoint ledger (25565887+1 in this example) to update the state using transaction metadata:
-
-
-
-```
-INFO[...] Processing entries from History Archive Snapshot ledger=25565887 numEntries=5400000 pid=5965 service=ingest
-INFO[...] Processing entries from History Archive Snapshot ledger=25565887 numEntries=5500000 pid=5965 service=ingest
-INFO[...] Processed ledger ledger=25565887 pid=5965 service=ingest type=state_pipeline
-INFO[...] Finished processing History Archive Snapshot duration=2145.337575904 ledger=25565887 numEntries=5529931 pid=5965 service=ingest shutdown=false
-INFO[...] Reading new ledger ledger=25565888 pid=5965 service=ingest
-INFO[...] Processing ledger ledger=25565888 pid=5965 service=ingest type=ledger_pipeline updating_database=true
-INFO[...] Processed ledger ledger=25565888 pid=5965 service=ingest type=ledger_pipeline
-INFO[...] Finished processing ledger duration=0.086024492 ledger=25565888 pid=5965 service=ingest shutdown=false transactions=14
-INFO[...] Reading new ledger ledger=25565889 pid=5965 service=ingest
-INFO[...] Processing ledger ledger=25565889 pid=5965 service=ingest type=ledger_pipeline updating_database=true
-INFO[...] Processed ledger ledger=25565889 pid=5965 service=ingest type=ledger_pipeline
-INFO[...] Finished processing ledger duration=0.06619956 ledger=25565889 pid=5965 service=ingest shutdown=false transactions=29
-INFO[...] Reading new ledger ledger=25565890 pid=5965 service=ingest
-INFO[...] Processing ledger ledger=25565890 pid=5965 service=ingest type=ledger_pipeline updating_database=true
-INFO[...] Processed ledger ledger=25565890 pid=5965 service=ingest type=ledger_pipeline
-INFO[...] Finished processing ledger duration=0.071039012 ledger=25565890 pid=5965 service=ingest shutdown=false transactions=20
-```
-
-
-
-## Managing Stale Historical Data
-
-Horizon ingests ledger data from a managed, pared-down Captive Stellar Core instance. In the event that Captive Core crashes, lags, or if Horizon stops ingesting data for any other reason, the view provided by Horizon will start to lag behind reality. For simpler applications, this may be fine, but in many cases this lag is unacceptable and the application should not continue operating until the lag is resolved.
-
-To help applications that cannot tolerate lag, Horizon provides a configurable "staleness" threshold. If enough lag accumulates to surpass this threshold (expressed in number of ledgers), Horizon will only respond with an error: [`stale_history`](https://github.com/stellar/go/blob/master/services/horizon/internal/docs/reference/errors/stale-history.md). To configure this option, use the `--history-stale-threshold`/`HISTORY_STALE_THRESHOLD` parameter.
-
-**Note:** Non-historical requests (such as submitting transactions or checking account balances) will not error out if the staleness threshold is surpassed.
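-
-For example, to have Horizon start returning `stale_history` once its view lags the network by more than five ledgers (the threshold value here is just an illustration):
-
-```bash
-export HISTORY_STALE_THRESHOLD=5
-```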
diff --git a/docs/run-api-server/installing.mdx b/docs/run-api-server/installing.mdx
deleted file mode 100644
index 9d8f4fd68..000000000
--- a/docs/run-api-server/installing.mdx
+++ /dev/null
@@ -1,86 +0,0 @@
----
-title: Installing
-sidebar_position: 20
----
-
-import { CodeExample } from "@site/src/components/CodeExample";
-
-To install Horizon, you have a few choices. You can...
-
-- install prebuilt binaries [from our repositories](#package-manager) via your package manager if running a Debian-based system,
-- download a [prebuilt release](https://github.com/stellar/go/releases/latest) of Horizon for your target architecture and operating system, or
-- [build Horizon and Stellar Core yourself](#building) from scratch.
-
-**The first method is recommended**: not only does it ensure OS compatibility and handle dependency management, it also installs some convenient wrappers that make running Horizon and Stellar Core in their respective environments much simpler.
-
-## Installation Methods
-
-### Package Manager
-
-SDF publishes new releases to its custom Ubuntu repositories. Follow [this guide](https://github.com/stellar/packages/blob/master/docs/adding-the-sdf-stable-repository-to-your-system.md#adding-the-sdf-stable-repository-to-your-system) to add the stable SDF repository to your system. [This page](https://github.com/stellar/packages/blob/master/docs/installing-individual-packages.md#installing-individual-packages) outlines the various commands that these packages make available. We'll need:
-
-
-
-```bash
-sudo apt update
-sudo apt install stellar-horizon stellar-core
-```
-
-
-
-Next, you can jump to [Testing Your Installation](#completing-and-testing-your-installation).
-
-### Building
-
-Should you decide not to use one of our prebuilt releases, you may instead build Horizon from source. To do so, you need to prepare a developer environment, including:
-
-- A Unix-like operating system with the common core commands (cp, tar, mkdir, bash, etc.)
-- A compatible distribution of [Golang](https://golang.org/dl/) (v1.15 or later)
-- [git](https://git-scm.com/)
-
-_(Though Horizon can run on Windows, *building* directly on Windows is not supported.)_
-
-At this point, you can easily build the Horizon binary:
-
-
-
-```bash
-git clone https://github.com/stellar/go monorepo && cd monorepo
-go install -v ./services/horizon
-```
-
-
-
-_(You should refer to the list of [Horizon releases](https://github.com/stellar/go/releases) and `git checkout` accordingly before building if you're looking for a stable release rather than the bleeding edge `master` branch.)_
-
-At this point, you can either copy the binary from the `GOPATH` to the system PATH (as [we'll do later](#completing-and-testing-your-installation)), or add Go binaries to your PATH in your `.bashrc` (or equivalent):
-
-
-
-```bash
-export PATH=$(go env GOPATH)/bin:$PATH
-```
-
-
-
-You will also need to compile Stellar Core from its source code if you need ingestion or transaction submission. You should refer to [their installation guide](https://github.com/stellar/stellar-core/blob/master/INSTALL.md) for details.
-
-Next, jump ahead to [Testing Your Installation](#completing-and-testing-your-installation).
-
-## Completing and Testing Your Installation
-
-If you [built from source](#building) or downloaded a release [from GitHub](https://github.com/stellar/go/releases), make sure to copy the native binary into a directory that is part of your PATH. Most Unix-like systems have `/usr/local/bin` in PATH by default, so unless you have a preference or know better, we recommend you copy the binary there:
-
-
-
-```bash
-sudo cp horizon /usr/local/bin/stellar-horizon
-```
-
-
-
-_(We've renamed it here to keep it consistent with the results of the recommended [Package Manager](#package-manager) method.)_
-
-To test your installation, simply run `stellar-horizon --help` from a terminal. If the help for Horizon is displayed, your installation was successful.
-
-**Note**: Some shells (such as [zsh](https://www.zsh.org/)) cache PATH lookups. You may need to clear your cache (by using `rehash` in zsh, for example) or restart your shell before trying to run the aforementioned command.
diff --git a/docs/run-api-server/migrating.mdx b/docs/run-api-server/migrating.mdx
deleted file mode 100644
index 78e73b410..000000000
--- a/docs/run-api-server/migrating.mdx
+++ /dev/null
@@ -1,155 +0,0 @@
----
-title: Migrating From 1.x
-sidebar_position: 15
----
-
-import { Alert } from "@site/src/components/Alert";
-import { CodeExample } from "@site/src/components/CodeExample";
-
-
-
-If you aren't coming from an existing deployment of Horizon, feel free to skip this section and move on to [installing Horizon](./installing.mdx)!
-
-
-
-## Introduction
-
-Starting with version 1.6.0, Horizon allows using Stellar Core in "captive" mode for ingestion. This mode has been enabled by default since Horizon 2.0, so even though you can enable captive mode on 1.6+, this migration guide is geared towards upgrading to 2.x, given the stability and configuration improvements introduced in the later versions.
-
-
-
-Please note that the Horizon team will support the previous non-captive mode for the time being. To use the previous method, set `ENABLE_CAPTIVE_CORE_INGESTION=false` in your ingesting instances. After 6 months, this compatibility flag will be removed.
-
-
-
-Please start with the [blog post](https://www.stellar.org/developers-blog/a-new-sun-on-the-horizon) to understand the major changes that Horizon 2.0 introduces with the Captive Core architecture. In summary, Captive Core is a specialized, narrowed-down Stellar Core instance with the sole aim of emitting transaction metadata to Horizon. It means:
-
-- no separate Stellar Core instance
-- no Core database: everything done in-memory
-- _much_ faster ingestion
-
-Captive Stellar Core completely eliminates all Horizon issues caused by connecting to Stellar Core's database, but it requires extra time to initialize and manage its Stellar Core subprocess. Captive Core can be used in both reingestion (`horizon db reingest range`) and normal Horizon operation (`horizon serve`). In fact, using Captive Core to reingest historical data is considerably faster than without it.
-
-### How It Works
-
-The blog post linked [above](#introduction) gives a high-level overview, while this section dives a little deeper into the technical differences relative to Horizon's relationship with standalone, "Watcher" Core.
-
-When using Captive Core, Horizon runs the `stellar-core` binary as a subprocess. The two processes then communicate over a filesystem pipe: Core sends `xdr.LedgerCloseMeta` structs with information about each ledger, and Horizon reads them.
-
-The behaviour is slightly different when reingesting old ledgers and when reading recently closed ledgers:
-
-- **When reingesting**, Stellar Core is started in a special `catchup` mode that simply replays the requested range of ledgers. This mode requires an additional 3GiB of RAM because all ledger entries are stored in memory, making it extremely fast. This mode only depends on the history archives, so a Captive Core configuration (see [below](#configuration)) **is not** required.
-
-- **When reading recently closed ledgers**, Core is started with a normal `run` command. This mode _also_ requires an additional 3GiB of RAM for in-memory ledger entries. In this case, a configuration file (again, read on [below](#configuration)) **is** required in order to configure a quorum set so that it can connect to the Stellar network.
-
-### Known Limitations
-
-As discussed earlier, Captive Core provides much better decoupling for Horizon at the expense of persistence. You should be aware of the following consequences:
-
-- Captive Core requires a couple of minutes to complete the "apply buckets" stage the _first_ time Horizon is started, but it should reuse the cached buckets on subsequent restarts (as of Horizon 2.5 and Core 17.1).
-- If the Horizon process terminates, Stellar Core is also terminated.
-- Running Horizon now requires more RAM and less disk space. You can refer to the earlier [Prerequisites](./prerequisites.mdx) page for details.
-
-To hedge against these limitations, we recommend running multiple ingesting Horizon servers in a single cluster. This allows other ingesting instances to maintain service without interruptions if a Captive Core instance is restarted.
-
-## Migration
-
-Now, we'll discuss migrating existing systems running the pre-2.0 versions of Horizon to the new Captive Core world.
-
-### Configuration
-
-The first major change from 1.x is how you will configure Horizon. You will no longer need your Stellar Core configuration, but will rather need to craft a configuration file describing Captive Core's behavior. Read [this section](./configuring.mdx#configuring-captive-core) to understand what the stub should contain.
-
-**Your old configuration cannot be used directly**: Horizon needs special settings for Captive Core. Otherwise, running Horizon may fail with the following error, or errors like it:
-
-
-
-```
-Invalid captive core toml file: LOG_FILE_PATH in captive core config file does not match Horizon captive-core-log-path flag
-```
-
-
-
-Again, while the Captive Core configuration file may appear to just be a subset of Stellar Core's configuration, you shouldn't think about it that way and treat it as its own format. It may diverge in the future, and not all of Core's options are available to Captive Core.
-
-You should pass the location of this new TOML configuration to the `--captive-core-config-path`/`CAPTIVE_CORE_CONFIG_PATH` command-line flag / environment variable.
-
-If you want to continue to have access to the underlying Stellar Core subprocess (like you did previously with a standalone Watcher Core), you should set the `HTTP_PORT` field in your configuration file accordingly.
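-
-For example, to keep the Core subprocess's HTTP endpoint reachable on its conventional port, the Captive Core TOML could include a line like this (the port value is an illustration):
-
-```toml
-HTTP_PORT=11626
-```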
-
-### Installation
-
-Once you have a configuration file ready, you should also modify your Horizon configuration to include Captive Core parameters. Within `/etc/default/stellar-horizon`, you should add:
-
-
-
-```bash
-# Captive Core Ingestion Config
-ENABLE_CAPTIVE_CORE_INGESTION=true
-STELLAR_CORE_BINARY_PATH=/usr/bin/stellar-core
-CAPTIVE_CORE_CONFIG_PATH=/etc/default/stellar-captive-core.toml
-CAPTIVE_CORE_STORAGE_PATH=/var/lib/stellar
-# end Captive Core
-```
-
-
-
-You may need to adjust these accordingly, for example by pointing `CAPTIVE_CORE_CONFIG_PATH` to your configuration file and possibly `CAPTIVE_CORE_STORAGE_PATH` to where you'd like Captive Core to store its bucket files (but keep in mind the [disk space](./prerequisites.mdx) and [permissions](./configuring.mdx#package-manager-installation) requirements).
-
-Finally, the process for upgrading both Stellar Core and Horizon is covered [here](https://github.com/stellar/packages/blob/master/docs/upgrading.md#upgrading).
-
-
-
-Depending on the version you're migrating from, you may need to include an additional step here: **manual reingestion**. This can still be accomplished with Captive Core; see [below](#reingestion).
-
-
-
-### Restarting Services
-
-Now, we can stop Core and restart Horizon:
-
-
-
-```bash
-sudo systemctl stop stellar-core
-sudo systemctl restart stellar-horizon
-```
-
-
-
-After a few moments, the logs should show Captive Core running successfully as a subprocess, and eventually Horizon will be running as usual except with Captive Core rapidly generating transaction metadata in-memory!
-
-## Private Networks
-
-If you want your Captive Core instance to connect to a private Stellar network, you will need to specify the validator(s) of the private network in the Captive Core configuration file.
-
-Assuming the validator of your private network has a public key of `GD5KD2KEZJIGTC63IGW6UMUSMVUVG5IHG64HUTFWCHVZH2N2IBOQN7PS` and can be accessed at `private1.validator.com`, then the Captive Core config would consist of the following:
-
-
-
-```toml
-UNSAFE_QUORUM=true
-FAILURE_SAFETY=0
-
-[[VALIDATORS]]
-NAME="private"
-HOME_DOMAIN="validator.com"
-PUBLIC_KEY="GD5KD2KEZJIGTC63IGW6UMUSMVUVG5IHG64HUTFWCHVZH2N2IBOQN7PS"
-ADDRESS="private1.validator.com"
-QUALITY="MEDIUM"
-```
-
-
-
-`UNSAFE_QUORUM=true` and `FAILURE_SAFETY=0` are required when there are too few validators in the private network to form a quorum.
-
-You will also need to set `RUN_STANDALONE=false` in the Stellar Core configuration _for the validator_. Otherwise, the validator will not accept connections on its peer port, which means Captive Core will not be able to connect to the validator.
-
-On a new Stellar network, the first history archive snapshot is published after ledger 63 is closed. Captive Core depends on the history archives, which means that Horizon ingestion via Captive Core will not begin until after ledger 63 is closed. Assuming the standard 5-second delay between ledgers, it will take ~5 minutes for the network to progress from the genesis ledger to ledger 63.
-
-There are cases where you may need to repeatedly create new private networks (e.g. spawning a private network during integration tests), and this 5-minute delay is too costly. In that case, you can consider including `ARTIFICIALLY_ACCELERATE_TIME_FOR_TESTING=true` in both the validator configuration and the Captive Core configuration. When this parameter is set, Stellar Core will publish a new ledger every _second_. It will also publish history archive snapshots every 8 ledgers, so you will need to set Horizon's checkpoint frequency parameter (`--checkpoint-frequency`/`CHECKPOINT_FREQUENCY`) to 8.
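-
-On the Horizon side, that accelerated-test setup boils down to one extra setting (shown here as an environment variable):
-
-```bash
-export CHECKPOINT_FREQUENCY=8
-```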
-
-## Reingestion
-
-After migrating to the Captive Core world, you will assuredly need to reingest your history again.
-
-The [Ingestion guide](./ingestion.mdx#reingestion) should refresh your memory on this: nothing has really changed aside from how quickly reingestion gets done. For example, a [full reingestion](#using-captive-core-to-reingest-the-full-public-network-history) of the entire network only takes ~1.5 days (as opposed to weeks previously) on an [m5.8xlarge](https://aws.amazon.com/ec2/pricing/on-demand/) instance.
diff --git a/docs/run-api-server/monitoring.mdx b/docs/run-api-server/monitoring.mdx
deleted file mode 100644
index 582961d32..000000000
--- a/docs/run-api-server/monitoring.mdx
+++ /dev/null
@@ -1,113 +0,0 @@
----
-title: Monitoring
-sidebar_position: 60
----
-
-import { CodeExample } from "@site/src/components/CodeExample";
-
-To ensure that your instance of Horizon is performing correctly, we encourage you to monitor it and provide both logs and metrics to do so.
-
-## Metrics
-
-Metrics are collected while a Horizon process is running and they are exposed _privately_ via the `/metrics` path, accessible only through the Horizon admin port. You need to configure this via `--admin-port` or `ADMIN_PORT`, since it's disabled by default. If you're running such an instance locally, you can access this endpoint:
-
-
-
-```
-$ stellar-horizon --admin-port=4200 &
-$ curl localhost:4200/metrics
-# HELP go_gc_duration_seconds A summary of the GC invocation durations.
-# TYPE go_gc_duration_seconds summary
-go_gc_duration_seconds{quantile="0"} 1.665e-05
-go_gc_duration_seconds{quantile="0.25"} 2.1889e-05
-go_gc_duration_seconds{quantile="0.5"} 2.4062e-05
-go_gc_duration_seconds{quantile="0.75"} 3.4226e-05
-go_gc_duration_seconds{quantile="1"} 0.001294239
-go_gc_duration_seconds_sum 0.002469679
-go_gc_duration_seconds_count 25
-# HELP go_goroutines Number of goroutines that currently exist.
-# TYPE go_goroutines gauge
-go_goroutines 23
-and so on...
-```
-
-
-
-## Logs
-
-Horizon will output logs to standard out. Information about what requests are coming in will be reported, but more importantly, warnings or errors will also be emitted by default. A correctly running Horizon instance will not output any warning or error log entries.
-
-Below we present a few standard log entries with associated fields. You can use them to build metrics and alerts. Please note that these represent Horizon app metrics only. You should also monitor your hardware metrics like CPU or RAM Utilization.
-
-### Starting HTTP request
-
-| Key | Value |
-| --- | --- |
-| **`msg`** | **`Starting request`** |
-| `client_name` | Value of `X-Client-Name` HTTP header representing client name |
-| `client_version` | Value of `X-Client-Version` HTTP header representing client version |
-| `app_name` | Value of `X-App-Name` HTTP header representing app name |
-| `app_version` | Value of `X-App-Version` HTTP header representing app version |
-| `forwarded_ip` | First value of `X-Forwarded-For` header |
-| `host` | Value of `Host` header |
-| `ip` | IP of a client sending HTTP request |
-| `ip_port` | IP and port of a client sending HTTP request |
-| `method` | HTTP method (`GET`, `POST`, ...) |
-| `path` | Full request path, including query string (ex. `/transactions?order=desc`) |
-| `streaming` | Boolean, `true` if request is a streaming request |
-| `referer` | Value of `Referer` header |
-| `req` | Random value that uniquely identifies a request, attached to all logs within this HTTP request |
-
-### Finished HTTP request
-
-| Key | Value |
-| --- | --- |
-| **`msg`** | **`Finished request`** |
-| `bytes` | Number of response bytes sent |
-| `client_name` | Value of `X-Client-Name` HTTP header representing client name |
-| `client_version` | Value of `X-Client-Version` HTTP header representing client version |
-| `app_name` | Value of `X-App-Name` HTTP header representing app name |
-| `app_version` | Value of `X-App-Version` HTTP header representing app version |
-| `duration` | Duration of request in seconds |
-| `forwarded_ip` | First value of `X-Forwarded-For` header |
-| `host` | Value of `Host` header |
-| `ip` | IP of a client sending HTTP request |
-| `ip_port` | IP and port of a client sending HTTP request |
-| `method` | HTTP method (`GET`, `POST`, ...) |
-| `path` | Full request path, including query string (ex. `/transactions?order=desc`) |
-| `route` | Route pattern without query string (ex. `/accounts/{id}`) |
-| `status` | HTTP status code (ex. `200`) |
-| `streaming` | Boolean, `true` if request is a streaming request |
-| `referer` | Value of `Referer` header |
-| `req` | Random value that uniquely identifies a request, attached to all logs within this HTTP request |
-
-### Metrics
-
-Using the entries above you can build metrics that will help understand performance of a given Horizon node. For example:
-
-- Number of requests per minute.
-- Number of requests per route (the most popular routes).
-- Average response time per route.
-- Maximum response time for non-streaming requests.
-- Number of streaming vs. non-streaming requests.
-- Number of rate-limited requests.
-- List of rate-limited IPs.
-- Unique IPs.
-- The most popular SDKs/apps sending requests to a given Horizon node.
-- Average ingestion time of a ledger.
-- Average ingestion time of a transaction.
-
-### Alerts
-
-Below are example alerts with potential causes and solutions. Feel free to add more alerts using your metrics:
-
-| Alert | Cause | Solution |
-| --- | --- | --- |
-| Spike in number of requests | Potential DoS attack | Lower rate-limiting threshold |
-| Large number of rate-limited requests | Rate-limiting threshold too low | Increase rate-limiting threshold |
-| Ingestion is slow | Horizon server spec too low | Increase hardware spec |
-| Spike in average response time of a single route | Possible bug in a code responsible for rendering a route | Report an issue in Horizon repository. |
-
-## I'm Stuck! Help!
-
-If any of the above steps don't work or you are otherwise prevented from correctly setting up Horizon, please join our community and let us know. Either post a question at [our Stack Exchange](https://stellar.stackexchange.com/) or chat with us on [Keybase in #dev_discussion](https://keybase.io/team/stellar.public) to ask for help.
diff --git a/docs/run-api-server/prerequisites.mdx b/docs/run-api-server/prerequisites.mdx
deleted file mode 100644
index 38f7b45d4..000000000
--- a/docs/run-api-server/prerequisites.mdx
+++ /dev/null
@@ -1,58 +0,0 @@
----
-title: Prerequisites
-sidebar_position: 10
----
-
-Horizon only has one true dependency: a PostgreSQL server that it uses to store data that has been processed and ingested from Stellar Core. **Horizon requires PostgreSQL version >= 9.5**.
-
-As far as system requirements go, there are a few main things to keep in mind. Starting from version 2.0, Horizon must be run as a standalone service. A full Horizon build consists of three functions:
-
-1. **ingesting data** _from_ the decentralized Stellar network,
-1. **submitting transactions** _to_ the network, and
-1. **serving** API requests
-
-The first two happen through _Captive Core_, a pared down, non-validating version of Stellar Core packaged directly into Horizon.
-
-With these three functions in mind, you can also run Horizon in two different ways: **real-time ingestion** and **historical catch-up**:
-
-- _Real-time ingestion_ is an “online” process: it involves keeping in sync with the live Stellar network state and digesting ledger data into a holistic view of the network. If you just want function (3) from above, you still need to do this.
-
-- _Historical catch-up_ is an “offline” process: it lets you look into the past and catch up your Horizon instance to a given retention period (e.g. 30 days of history). Because it’s typically done offline and a one-time process, you can dedicate more compute power and configure parallel workers to catch up faster.
-
-### Historical Catch-up
-
-In this scenario, the hardware specifications are more demanding than what is necessary for the day-to-day operation of real-time ingestion, but catch-up only needs to occur once.
-
-However, the requirements will vary depending on your chosen retention period and desired catch-up speed. Note that **most operators will not need full history**, and as the network continues to grow, tracking full history will become increasingly prohibitive. As of late 2021, DB storage to support historical retention is growing at a rate of 0.8 TB / month. It is highly recommended to configure retention of only the history needed to support your functionality.
-
-#### Requirements
-
-Minimally, your disk storage type **must** be an SSD (e.g. NVMe, Direct Attached Storage) and your I/O **must** handle >15k iops (I/O operations per second). The following table breaks down hardware specifications for ingestion at different retention levels and performance tiers.
-
-Note that each component can be scaled independently and for redundancy, in the manner of traditional _n_-tier systems which is covered later in [Scaling](./scaling.mdx). Ingestion can be sped up via configuring more Captive Core parallel workers (requiring more compute and RAM).
-
-| Component | | Retention Period | |
-| :-- | :-- | :-- | :-- |
-| | **30 days** | **90 days** | **Full History** |
-| **Parallel worker count**<br/>(est. ingestion time) | 6 workers (1 day) | 10 workers (1 day) | 20+ workers (2 days) |
-| **Horizon** | **CPU**: 10 cores (min: 6)<br/>**RAM**: 64 GB (min: 32) | **CPU**: 16 (min: 8)<br/>**RAM**: 128 GB (64) | **CPU**: 16 (10)<br/>**RAM**: 512 GB (256) |
-| **Database** | **CPU**: 16 cores (min: 8)<br/>**RAM**: 64 GB (min: 32 GB)<br/>**Storage**: 2 TB<br/>**IOPS**: 20K (min: 15K) | **CPU**: 16 (12)<br/>**RAM**: 128 GB (64)<br/>**Storage**: 4 TB<br/>**IOPS**: 20K (15K) | **CPU**: 64 (32)<br/>**RAM**: 512 GB (256)<br/>**Storage**: 10 TB<br/>**IOPS**: 20k (15k) |
-| **Storage**<br/>(all same) | | **SSD** (NVMe, Direct Attached Storage preferred) | |
-| **AWS**<br/>(reference) | **Captive Core**: `m5.2xlarge`<br/>**Database**: `r5.2xlarge` | **Captive Core**: `m5.4xlarge`<br/>**DB**: `r5.4xlarge` | **Captive Core**: `c5.2xlarge` (x2)<br/>**DB**: `r5.16xlarge` (ro)<br/>`r5.8xlarge` (rw) |
-
-### Real-Time Ingestion
-
-In this scenario, the goal is just to stay in sync with the Stellar network for day-to-day operations.
-
-There are two extremes to this spectrum: running a **single private instance** of Horizon for a specific application all the way up to a serious **enterprise public instance** of Horizon. In the former case, you’d run all three functions on a single machine and have low request volume; in the latter case, you’d have high-availability, redundancy, high request volume, full history, etc.
-
-#### Requirements
-
-The following table breaks down requirements along this spectrum; if you fall somewhere in between, interpolate the requirements accordingly.
-
-| Category | Private Instance | Enterprise Public Instance |
-| :-- | :-- | :-- |
-| **Compute** | Both **API Service** + **Captive Core**:<br/>**CPU**: 4<br/>**RAM**: 32 GB | **API Service**<br/>**CPU**: 4<br/>**RAM**: 8 GB<br/>N instances, load balanced<br/>**Captive Core**<br/>**CPU**: 8<br/>**RAM**: 256 GB<br/>2 instances for redundancy |
-| **Database** | **CPU**: 4<br/>**RAM**: 32 GB<br/>**IOPS**: 7k (min: 2.5k) | **CPU**: 32 - 64<br/>**RAM**: 256 - 512 GB<br/>**IOPS**: 20k (min: 15k)<br/>2 HA instances: 1RO, 1RW |
-| **Storage** (SSD) | depends on retention period | 10 TB |
-| **AWS** (reference) | **API Service + Captive Core**<br/>`m5.2xlarge`<br/>**Database**<br/>`r5.2xlarge` (ro)<br/>`r5.xlarge` (rw) | **API Service**<br/>`c5.xlarge` (_n_)<br/>**Captive Core**<br/>`c5.2xlarge` (x2)<br/>**Database** `r5.16xlarge` (ro)<br/>`r5.8xlarge` (rw) |
diff --git a/docs/run-api-server/running.mdx b/docs/run-api-server/running.mdx
deleted file mode 100644
index b05c6e969..000000000
--- a/docs/run-api-server/running.mdx
+++ /dev/null
@@ -1,40 +0,0 @@
----
-title: Running
-sidebar_position: 40
----
-
-import { CodeExample } from "@site/src/components/CodeExample";
-
-Once your Horizon database and Captive Core configuration is set up properly, you're ready to run Horizon. Run `stellar-horizon` with the [appropriate parameters](./configuring.mdx#parameters) set (or `stellar-horizon-cmd serve` if you [installed via the package manager](./installing.mdx#package-manager), which will automatically import your configuration from `/etc/default/stellar-horizon`), which starts the HTTP server and starts logging to standard out. When run, you should see output similar to:
-
-
-
-```
-INFO[...] Starting horizon on :8000 pid=29013
-```
-
-
-
-Note that the numbers may naturally be different for your installation. The log line above announces that Horizon is ready to serve client requests.
-
-Next, you can confirm that Horizon is responding correctly by loading the root resource. In the example above, that URL would be http://127.0.0.1:8000/, and simply running `curl http://127.0.0.1:8000/` would show you that the root resource loads correctly:
-
-
-
-```json
-{
- "_links": {
- "account": {
- "href": "http://127.0.0.1:8000/accounts/{account_id}",
- "templated": true
- },
- "accounts": {
- "href": "http://127.0.0.1:8000/accounts{?signer,sponsor,asset,cursor,limit,order}",
- "templated": true
- }
- }
- // etc.
-}
-```
-
-
diff --git a/docs/run-core-node/index.mdx b/docs/run-core-node/index.mdx
index be3d4fa11..756b04aec 100644
--- a/docs/run-core-node/index.mdx
+++ b/docs/run-core-node/index.mdx
@@ -9,7 +9,7 @@ Stellar is a peer-to-peer network made up of nodes, which are computers that kee
You don’t need to run a node to build on Stellar: you can start developing with your [SDK of choice](../tools-and-sdks.mdx#sdk-library), and use public instances of Horizon to query the ledger and submit transactions right away. In fact, the Stellar Development Foundation offers two public instances of Horizon — one for the public network and one for the testnet — which you can read more about in our [API reference docs](https://developers.stellar.org/api). [Lobstr](https://horizon.stellar.lobstr.co), [Public Node](https://horizon.publicnode.org/), and [Coinqvest](https://horizon.stellar.coinqvest.com) also offer public Horizon instances.
-Even if you _do_ want to run your [own instance of Horizon](../run-api-server/index.mdx), it bundles its own version of Core and manages its lifetime entirely, so there's no need to run a standalone instance.
+Even if you _do_ want to run your [own instance of Horizon](../run-platform-server/index.mdx), it bundles its own version of Core and manages its lifetime entirely, so there's no need to run a standalone instance.
If you’re serious about building on Stellar, have a production-level product or service that requires high-availability access network, or want to help increase network health and decentralization, then you probably _do_ want to run a node, or even a trio of nodes (more on that in the [Tier 1 section](./tier-1-orgs.mdx)). At that point, you have a choice: you can pay a service provider like [Blockdaemon](https://app.blockdaemon.com/marketplace/categories/-/stellar-horizon) to set up and run your node for you, or you can do it yourself.
@@ -27,7 +27,7 @@ The basic flow, which you can navigate through using the menu on the left, goes
## Types of nodes
-All nodes perform the same basic functions: they run Stellar Core, connect to peers, submit transactions, store the state of the ledger in a SQL [database](./configuring.mdx#database), and keep a duplicate copy of the ledger in flat XDR files called [buckets](./configuring.mdx#buckets). Though all nodes also support [Horizon](../run-api-server/index.mdx), the Stellar API, this is a deprecated way of architecting your system and will be discontinued soon. If you want to run Horizon, you don't need a separate Stellar Core node.
+All nodes perform the same basic functions: they run Stellar Core, connect to peers, submit transactions, store the state of the ledger in a SQL [database](./configuring.mdx#database), and keep a duplicate copy of the ledger in flat XDR files called [buckets](./configuring.mdx#buckets).
In addition to those basic functions, there are two key configuration options that determine how a node behaves. A node can:
diff --git a/docs/run-core-node/prerequisites.mdx b/docs/run-core-node/prerequisites.mdx
index f15296137..a62ab320f 100644
--- a/docs/run-core-node/prerequisites.mdx
+++ b/docs/run-core-node/prerequisites.mdx
@@ -9,8 +9,6 @@ You can install Stellar Core a [number of different ways](./installation.mdx), a
We recently asked Stellar Core operators about their setups, and should have some updated information soon based on their responses. So stay tuned. In early 2018, Stellar Core with PostgreSQL running on the same machine worked well on a [m5.large](https://aws.amazon.com/ec2/instance-types/m5/) in AWS (dual core 2.5 GHz Intel Xeon, 8 GB RAM). Storage-wise, 20 GB was enough in 2018, but the ledger has grown a lot since then, and most people seem to have at least 1TB on hand.
-If you decide to run Stellar Core on the same machine as Horizon (though note that this is a deprecated architecture, since Horizon now bundles Core for its needs), you will additionally need to ensure that your setup is also equipped to handle Horizon's [compute requirements](../run-api-server/prerequisites.mdx) as well.
-
Stellar Core is designed to run on relatively modest hardware so that a whole range of individuals and organizations can participate in the network, and basic nodes should be able to function pretty well without tremendous overhead. That said, the more you ask of your node, the greater the requirements.
## Network access
diff --git a/docs/run-api-server/_category_.json b/docs/run-platform-server/_category_.json
similarity index 66%
rename from docs/run-api-server/_category_.json
rename to docs/run-platform-server/_category_.json
index 528f1fdf3..2f8744a03 100644
--- a/docs/run-api-server/_category_.json
+++ b/docs/run-platform-server/_category_.json
@@ -1,6 +1,6 @@
{
"position": 70,
- "label": "Run an API Server",
+ "label": "Run Platform Services",
"link": {
"type": "doc", "id": "index"
}
diff --git a/docs/run-platform-server/configuring.mdx b/docs/run-platform-server/configuring.mdx
new file mode 100644
index 000000000..8f0de7818
--- /dev/null
+++ b/docs/run-platform-server/configuring.mdx
@@ -0,0 +1,170 @@
+---
+title: Configuring
+sidebar_position: 30
+---
+
+import { Alert } from "@site/src/components/Alert";
+
+## Prerequisites
+
+- You have identified the [installation](./installing.mdx) method for the host system:
+
+  - For bare-metal, you have two executables installed on the host operating system path: `stellar-horizon` and `stellar-core`.
+  - For running the Horizon image with the Docker daemon, you will use [stellar/stellar-horizon](https://hub.docker.com/r/stellar/stellar-horizon) hosted on Docker Hub. You have already pulled the image onto the host via `docker pull stellar/stellar-horizon:<tag>`. This image contains both `stellar-horizon` and `stellar-core` within.
+ - For Kubernetes with [Horizon Helm Chart](https://github.com/stellar/helm-charts/tree/main/charts/horizon), you have followed the [Install Horizon with Helm Chart](./installing.mdx#helm-chart-installation).
+
+- [Initialize database](#initialize-horizon-database)
+
+You are now ready to identify the configuration parameters needed to perform three important roles:
+
+- **Serving read-only API requests** via a regular web-based HTTP API;
+- **Ingesting ledgers** from Core nodes of the Stellar network to keep its world-view up to date;
+- **Submitting transactions** via a regular web-based HTTP API, forwarding the transaction submission request to the Stellar network.
+
+To perform these roles, you can choose one of the two deployment modes below: single instance deployment or multiple instance deployment. Each has its own configuration parameters.
+
+## Single Instance Deployment
+
+Run `stellar-horizon` in a single o/s process and it will perform all three roles simultaneously.
+
+| environment variable | example |
+| ------------------------- | ----------------------------------- |
+| `DATABASE_URL` | postgres://localhost/horizon_pubnet |
+| `NETWORK` | pubnet |
+| `HISTORY_RETENTION_COUNT` | 518400 |
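+
+As a minimal sketch, assuming a local PostgreSQL server with a `horizon_pubnet` database already initialized as described below, the table above translates to something like:
+
+```bash
+# illustrative values only; adjust to your environment
+export DATABASE_URL="postgres://localhost/horizon_pubnet"
+export NETWORK="pubnet"
+export HISTORY_RETENTION_COUNT=518400
+stellar-horizon
+```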
+
+## Multiple Instance Deployment
+
+In this scalable deployment variant, you run multiple instances of `stellar-horizon`, each performing a subset of the roles. This allows you to horizontally scale each of the role functions independently.
+
+### Ingestion Role Instance
+
+You must allocate **at least** one instance to perform ongoing ingestion to capture network activity. The configuration below limits storage of ingested network activity in the database to our recommended sliding window of the last 30 days.
+
+| environment variable | example |
+| ------------------------- | ----------------------------------- |
+| `DATABASE_URL` | postgres://localhost/horizon_pubnet |
+| `NETWORK` | pubnet |
+| `HISTORY_RETENTION_COUNT` | 518400 |
+| `DISABLE_TX_SUB` | true |
+
+### API Role Instance
+
+You can run zero or more instances to serve read-only API requests. Note that there is no need to define network settings here, as Horizon only reads from the database.
+
+| environment variable | example |
+| -------------------- | ----------------------------------- |
+| `DATABASE_URL` | postgres://localhost/horizon_pubnet |
+| `INGEST` | false |
+| `DISABLE_TX_SUB` | true |
+
+### Transaction Submission Role Instance
+
+You can run zero or more instances to serve transaction submission requests. If you run an instance with transaction submission enabled, the deployment must also include at least one instance performing the ingestion role against the same database: Horizon transaction submission depends on this **live** ingestion to confirm transaction submission status.
+
+If ingestion is planned to be done on a separate instance, add `INGEST=false` on this instance; otherwise, don't include the parameter, and Horizon will default to `INGEST=true`. When a transaction-submission-enabled instance has `INGEST=true` in effect, it automatically points the related `STELLAR_CORE_URL` parameter at the internally launched captive core instance, so the deployment does not need to set that configuration value explicitly.
+
+If setting `INGEST=false`, then you **must** define the `STELLAR_CORE_URL` variable on this transaction-submission-enabled instance: since there is no internally hosted captive core instance to reference, `STELLAR_CORE_URL` defines the URL of a Core instance's HTTP port to which Horizon will forward transaction submissions.
+
+| environment variable | example |
+| -------------------- | ----------------------------------- |
+| `DATABASE_URL` | postgres://localhost/horizon_pubnet |
+| `STELLAR_CORE_URL` | http://example.watcher.core:11626 |
+| `INGEST` | false |
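+
+Putting the roles together, a sketch of a three-process deployment sharing one database could look like the following; the database host and watcher Core URL are illustrative placeholders:
+
+```bash
+# ingestion role instance
+DATABASE_URL="postgres://db.internal/horizon_pubnet" NETWORK="pubnet" \
+  HISTORY_RETENTION_COUNT=518400 DISABLE_TX_SUB=true stellar-horizon
+
+# API role instance (reads from the database only)
+DATABASE_URL="postgres://db.internal/horizon_pubnet" INGEST=false \
+  DISABLE_TX_SUB=true stellar-horizon
+
+# transaction submission role instance (needs a Core URL when INGEST=false)
+DATABASE_URL="postgres://db.internal/horizon_pubnet" INGEST=false \
+  STELLAR_CORE_URL="http://example.watcher.core:11626" stellar-horizon
+```
+
+Each command would run as its own process, typically on separate hosts or containers.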
+
+## Notes
+
+### Ingestion
+
+If you have configured your deployment to perform the ingestion role, it is **strongly** recommended that you review [Ingestion](./ingestion.mdx) first and [Filtering](./ingestion-filtering) second, and factor both into your configuration parameters to achieve the best performance for your application's requirements before proceeding further.
+
+- Horizon will create a sub-directory under the current working directory of the o/s process to store captive core runtime data files. Refer to [Prerequisites](./prerequisites.mdx) for the type and amount of storage recommended. You can override this location with the optional `CAPTIVE_CORE_STORAGE_PATH` environment variable, set to a directory on the file system where captive core will store the runtime files.
+
+### `DISABLE_TX_SUB`
+
+This config parameter is optional and is set to FALSE by default. It controls whether Horizon will accept HTTP requests to the transaction submission API endpoint (`POST /transactions`) and forward them to the network. Refer to [Channel Accounts](../encyclopedia/channel-accounts.mdx) for some recommendations on optional client transaction submission optimizations.
+
+- When set to FALSE, a **live** ingestion process must be running against the same database, because Horizon depends on new ledgers from the network to confirm a transaction's submission status; Horizon will report a startup error if it detects no **live** ingestion. This requires `INGEST=true` or `STELLAR_CORE_URL` to be defined for access to a Core instance.
+- When transaction submission is disabled by setting it to TRUE, Horizon will return `405 Method Not Allowed` on POSTs to the transaction submission endpoint.
+
+### `NETWORK`
+
+This config parameter is optional and can be set to one of Stellar's public networks: 'pubnet' or 'testnet'. It triggers Horizon to automatically set the remaining Horizon configurations and generate the correct Core toml/cfg settings. If you only need Horizon to connect to one of those public Stellar networks, this takes care of all related configuration.
+
+- If you want to connect Horizon to a Stellar network other than pubnet or testnet, or override any of the defaults that `NETWORK` sets, the key environment variables that can be set are: `HISTORY_ARCHIVE_URLS`, `CAPTIVE_CORE_CONFIG_PATH`, `NETWORK_PASSPHRASE`, `CAPTIVE_CORE_STORAGE_PATH`, `STELLAR_CORE_URL`.
+
+### `DATABASE_URL`
+
+This config parameter is required and specifies the Horizon database. Its value follows this format: `dbname=<dbname> user=<user> password=<password> host=<host>`.
+
+### `LOG_LEVEL`
+
+This config parameter is optional and can be one of 'trace', 'debug', 'info', 'warn', or 'error'. The default is 'info'.
+
+### `HISTORY_RETENTION_COUNT`
+
+This config parameter is optional. It determines the maximum sliding window of historical network data retained in the database from ingestion. The value is expressed as an absolute ledger count, which is an indirect way to define a duration of time, each ledger taking approximately 5 seconds. It defaults to 0, which means no history is purged from the database. To enact the recommended sliding window of one month, set this to 518400, the approximate number of ledgers in 30 days (30 days × 17,280 ledgers/day at one ledger every 5 seconds). Refer to [Compute Resources](./prerequisites.mdx) for how database storage space is closely related to this setting.
+
+## Passing Configurations to Horizon
+
+The `stellar-horizon` binary searches the process environment variables for configuration. Depending on how Horizon was installed, the method you use to configure the process environment will differ:
+
+- Bare-metal
+ - Non-package manager: use O/S environment variables to pass configurations. There are many tools you can use to manage them, such as [direnv](http://direnv.net/) or [dotenv](https://github.com/bkeepers/dotenv).
+  - [Package manager](./installing.mdx#package-manager): the provided `stellar-horizon-cmd` wrapper will start a new process, create environment variables in that process from `/etc/default/stellar-horizon`, and then launch `stellar-horizon`. To set configurations, edit the file at `/etc/default/stellar-horizon`.
+
+ This script invokes Horizon with the `stellar` user, so make sure that
+ permissions for the user are set up accordingly. The current working
+ directory should be writable for this user and the user should be able to
+ execute the `stellar-horizon` and `stellar-core` binaries; etc.
+
+- Containerized
+ - Non-Helm: pass all configuration parameters to the horizon docker image as [docker environment variables](https://docs.docker.com/engine/reference/commandline/run/#env).
+ - Helm: pass all configuration parameters in the [Helm install command](https://helm.sh/docs/helm/helm_install/) as a values file.
+
+## Initialize Horizon Database
+
+Before running the Horizon server for the first time, you must initialize the Horizon database. This database will be used for all of the information produced by Horizon, most notably historical information about transactions that have occurred on the Stellar network.
+
+To prepare a database for Horizon's use, first ensure it is blank. It's easiest to create a new database on your PostgreSQL server specifically for Horizon's use. We recommend creating a new user (role) in Postgres dedicated to Horizon's database and assigning that user (role) as the owner of the database.
+
+To illustrate with `psql`, first log in to the database server using the `psql` command-line tool as a superuser, and then create the new user (role) and database for Horizon:
+
+```
+postgres=#
+postgres=# CREATE ROLE horizon WITH LOGIN;
+CREATE ROLE
+postgres=#
+postgres=# CREATE DATABASE horizon OWNER horizon;
+CREATE DATABASE
+postgres=#
+```
+
+Additionally, you can set a password on your new `horizon` postgres user with `ALTER USER`.
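+
+For example, from a shell with superuser access (the password value here is a placeholder):
+
+```bash
+psql -U postgres -c "ALTER USER horizon WITH PASSWORD 'choose-a-strong-password';"
+```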
+
+Once completed, you can compose the full value of the configuration parameter for db access `DATABASE_URL="dbname=horizon user=horizon password= host="`.
+
+Next, execute the Horizon binary to install the schema onto the empty DB from the command line. In this example, assume the current shell doesn't have `DATABASE_URL` in its environment yet, so export it first:
+
+```
+$ export DATABASE_URL="dbname=horizon user=horizon password= host="
+$ stellar-horizon db init
+```
+
+### Optional Postgres Configurations
+
+Based on performance observations over time, we recommend additional Postgres configuration settings (in `postgresql.conf`), though these are not required:
+
+- Set `random_page_cost=1` if you are using SSD storage. With this setting, the query planner will make better use of indices, especially for `JOIN` queries. We've noticed a huge speed improvement for some queries with this setting.
+
+- To improve availability of the ingestion, API, and transaction submission servers, we recommend setting the following values:
+
+- `tcp_keepalives_idle`: 10 seconds
+- `tcp_keepalives_interval`: 1 second
+- `tcp_keepalives_count`: 5
+
+With the config above, if there are no queries from a given client for 10 seconds, Postgres will start sending TCP keepalive packets, retrying every second up to 5 times. If there is still no response from the client after that, it will drop the connection.
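+
+As a sketch, one way to apply these values without hand-editing `postgresql.conf` is `ALTER SYSTEM` from a superuser session:
+
+```bash
+psql -U postgres <<'SQL'
+ALTER SYSTEM SET tcp_keepalives_idle = 10;
+ALTER SYSTEM SET tcp_keepalives_interval = 1;
+ALTER SYSTEM SET tcp_keepalives_count = 5;
+SELECT pg_reload_conf();  -- reload so the settings take effect
+SQL
+```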
+
+## Next Step
+
+After configuration is complete, you are now ready to proceed to [Running Horizon](./running.mdx)!
diff --git a/docs/run-platform-server/index.mdx b/docs/run-platform-server/index.mdx
new file mode 100644
index 000000000..a07458b59
--- /dev/null
+++ b/docs/run-platform-server/index.mdx
@@ -0,0 +1,17 @@
+---
+title: "Overview"
+sidebar_position: 0
+---
+
+Horizon is a central component of the Stellar platform: it provides an HTTP API to data in the Stellar network. It ingests and re-serves the data produced by the Stellar network in a form that is easier to consume by the average application relative to the performance-oriented data representations used by Stellar Core.
+
+This guide describes how to administer a production Horizon instance (refer to the [Developers' Blog](https://www.stellar.org/developers-blog/a-new-sun-on-the-horizon) for some background on the performance and architectural improvements of this major version bump). For information about developing on the Horizon codebase, check out the [Development Guide](https://github.com/stellar/go/blob/master/services/horizon/internal/docs/developing.md).
+
+Before we begin, it's worth reiterating the sentiment echoed in the [Run a Core Node](../run-core-node) guide: **we do not endorse running Horizon backed by a standalone Stellar Core instance**, and especially not by a _validating_ Stellar Core. These are two separate concerns, and decoupling them is important for both reliability and performance. Horizon instead manages its own, pared-down version of Stellar Core optimized for its own subset of needs (we'll refer to this as a "Captive Core" instance).
+
+## Why Run Horizon?
+
+Running Horizon within your own infrastructure provides a number of benefits. You can:
+
+- Have full operational control without dependency on the Stellar Development Foundation for network data and transaction submission to networks;
+- Run multiple instances for redundancy and scalability.
diff --git a/docs/run-platform-server/ingestion-filtering.mdx b/docs/run-platform-server/ingestion-filtering.mdx
new file mode 100644
index 000000000..536633cb2
--- /dev/null
+++ b/docs/run-platform-server/ingestion-filtering.mdx
@@ -0,0 +1,96 @@
+---
+title: Ingestion Filtering
+sidebar_position: 46
+---
+
+## Overview
+
+Ingestion Filtering enables Horizon operators to drastically reduce the storage footprint of the historical data in the Horizon database by white-listing Assets and/or Accounts that are relevant to their operations.
+
+### Why is it useful:
+
+Previously, the only way to limit data storage was by limiting the temporal range of history via rolling retention (e.g. the last 30 days). The filtering feature allows users to store a longer historical timeframe in the Horizon database for only whitelisted assets, accounts, and their related historical entities (transactions, operations, trades, etc.).
+
+For further context, running an unfiltered `full` history Horizon instance currently requires over 30TB of disk space (as of June 2023) with storage growing at a rate of about 1TB/month. As a benchmark, filtering by even 100 of the most active accounts and assets reduces storage by over 90%. For the majority of applications which are interested in an even more limited set of assets and accounts, storage savings should be well over 99%. Other benefits include reducing operating costs for maintaining storage, improved DB health metrics and query performance.
+
+### How does it work:
+
+The filtering feature operates during ingestion in the **live** and **historical range** processes. It tells the ingestion process to only accept incoming ledger transactions that match a filter rule; any transactions that don't match are skipped by ingestion and therefore not stored in the database.
+
+Some key aspects to note about filtering behavior:
+
+- Filtering applies only to ingestion of historical data in the database. It does not affect how the ingestion process maintains current state data stored in the database, i.e. the last known ledger entry for each unique entity within accounts, trustlines, liquidity pools, and offers. However, current state data consumes a relatively small amount of the overall storage capacity.
+- When filter rules are changed, they only apply to existing, running ingestion processes (**live** and **historical range**). They don't trigger any retroactive filtering or back-filling of existing historical data in the database.
+ - When the filter rules are updated to include additional accounts or assets in the white-list, the related transactions from **live** ingestion will only appear in the historical database data once the filter rules have been updated using the Admin API. The same applies to **historical range** ingestion, where the new filter rules will only affect the data from the current ledger within its configured range at the time of the update.
+  - Updating the filter rules to include additional accounts or assets does not trigger automatic back-filling related to the new entities in the historical database. To include prior history of newly white-listed entities in the database, you can manually run a new [Historical Ingestion Range](ingestion.mdx#ingesting-historical-data) after updating the filter rules.
+  - When the filter rules are updated to remove accounts or assets previously defined on the white-list, the historical data in the database will not be retroactively purged or filtered based on the updated rules. The data is stored in the history tables for the lifetime of the database or until the `HISTORY_RETENTION_COUNT` is exceeded. Once the retention limit is reached, Horizon will purge all historical data related to older ledgers, regardless of any filtering rules.
+- Filtering will not affect the performance or throughput rate of an ingestion process; it remains consistent whether filter rules are present or not.
+
+Filter rules define white-lists of the following supported entities:
+
+- Account id
+- Asset id (canonical)
+
+Given that all transactions related to the white-listed entities are included, all historical time series data related to those transactions is saved in Horizon's history DB, including the transaction itself, all operations in the transaction, and references to any ancillary entities from those operations.
+
+## Configuration:
+
+Filtering is enabled by default with no filter rules defined, which effectively means no filtering of ingested data occurs. To start filtering ingestion, you need to define at least one filter rule:
+
+- Enable the Horizon admin port with the environment configuration parameter `ADMIN_PORT=XXXXX`; this will allow you to access the port.
+- Define filter whitelists. Submit Admin HTTP API requests to view and update the filter rules:
+
+  Refer to the [Horizon Admin API Docs](https://github.com/stellar/go/blob/master/services/horizon/internal/httpx/static/admin_oapi.yml), which are also published on running Horizon instances as an Open API 3.0 doc on the Admin Port, at `http://localhost:<admin-port>/`. You can paste the contents from that URL into any OAPI tool such as [Swagger](https://editor.swagger.io/), which will render a visual explorer of the API endpoints. In the Swagger editor you can also load the published Horizon admin_oapi.yml directly as a URL; choose `File->Import URL`:
+
+ ```
+ https://raw.githubusercontent.com/stellar/go/master/services/horizon/internal/httpx/static/admin_oapi.yml
+ ```
+
+ Follow details and examples of request/response payloads to read and update the filter rules for these endpoints:
+
+ ```
+ /ingestion/filters/account
+ /ingestion/filters/asset
+ ```
+
+  Choosing the `Try it out` button on either endpoint will display `curl` examples of the entire HTTP request.
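+
+For example, to view the currently configured asset filter rules (assuming the admin port was set to 4200, as in the sample request further below):
+
+```bash
+curl -H 'accept: application/json' 'http://localhost:4200/ingestion/filters/asset'
+```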
+
+## Sample Use Case:
+
+As an Asset Issuer, I have issued 4 assets and am interested in all transaction data related to those assets, including the customer Accounts that interact with those assets, covering the following:
+
+- Operations
+- Effects
+- Payments
+- Claimable balances
+- Trades
+
+I would like to store the full history of all related transactions from the genesis of those assets.
+
+### Pre-requisites:
+
+You have installed Horizon with an empty database, and it has **live** ingestion enabled.
+
+### Steps:
+
+1. Configure a filter rule with 4 white-listed assets by PUT'ing the request below to the Horizon Admin API endpoint `<host>:<admin-port>/ingestion/filters/asset`.
+
+```
+curl -X 'PUT' \
+ 'http://localhost:4200/ingestion/filters/asset' \
+ -H 'accept: application/json' \
+ -H 'Content-Type: application/json' \
+ -d '{
+ "whitelist": [
+ "USDC:GAFRNZHK4DGH6CSF4HB5EBKK6KARUOVWEI2Y2OIC5NSQ4UBSN4DR456U",
+ "DOTT:GAFRNZHK4DGH6CSF4HB5EBKK6KARUOVWEI2Y2OIC5NSQ4UBSN4DR456U",
+ "ABCD:GAFRNZHK4DGH6CSF4HB5EBKK6KARUOVWEI2Y2OIC5NSQ4UBSN4DR456U",
+ "EFGH:GAFRNZHK4DGH6CSF4HB5EBKK6KARUOVWEI2Y2OIC5NSQ4UBSN4DR456U"
+ ],
+ "enabled": true
+}'
+```
+
+2. Since this is a new Horizon database and these are the first filter rules, there is nothing more to do; you can effectively stop here.
+
+3. However, for the sake of exercise, suppose you had already been running Horizon for a while with the database populated based on some filter rules, and these new rules added further white-listings. In that case, you choose whether to retroactively back-fill historical data in the Horizon database for the newly white-listed entities from a prior point in time up to the present, since it was dropped at the original ingestion time and is not in the database. If you decide to back-fill, run a separate Horizon **historical range** ingestion process; refer to [Historical Ingestion Range](ingestion.mdx#ingesting-historical-data) for steps.
diff --git a/docs/run-platform-server/ingestion.mdx b/docs/run-platform-server/ingestion.mdx
new file mode 100644
index 000000000..a50b44e6b
--- /dev/null
+++ b/docs/run-platform-server/ingestion.mdx
@@ -0,0 +1,97 @@
+---
+title: Ingestion
+sidebar_position: 45
+---
+
+import { CodeExample } from "@site/src/components/CodeExample";
+
+Horizon API provides most of its utility through ingested data, and your Horizon server can be configured to listen for and ingest transaction results from the Stellar network. Ingestion enables API access to both current state (e.g. someone's balance) and historical state (e.g. someone's transaction history).
+
+## Ingestion Types
+
+There are two primary ingestion use cases for Horizon operations:
+
+- Ingesting **live** data to stay up to date with the latest ledgers from the network, accumulating a sliding window of aged ledgers;
+- Ingesting **historical** data to retroactively add network data from a time range in the past to the database.
+
+## Determine Storage Space
+
+You should think carefully about the historical timeframe of ingested data you'd like to retain in Horizon's database. The storage requirements for transactions on the Stellar network are substantial and are growing unbounded over time. This is something that you may need to continually monitor and reevaluate as the network continues to grow. We have found that most organizations need only a small fraction of recent historical data to satisfy their use cases. Through analyzing traffic patterns on SDF's Horizon instance, we see that most requests are for very recent data.
+
+To keep your storage footprint small, we recommend the following:
+
+- Use **live** ingestion; use **historical** ingestion only in limited, exceptional cases.
+- If your application requires access to all network data and no filtering can be done, we recommend limiting historical retention of ingested data to a sliding window of 1 month (`HISTORY_RETENTION_COUNT=518400`).
+- If your application can work with a [filtered network dataset](./ingestion-filtering.mdx) based on specific accounts and assets, then we recommend applying ingestion filter rules. Because filtering reduces the overall database size to such a degree, it gives you the choice of a much longer historical retention timeframe: `HISTORY_RETENTION_COUNT` can be set in terms of years rather than months, or even disabled (`HISTORY_RETENTION_COUNT=0`).
+- If you cannot limit your history retention window to 30 days and cannot use filter rules, we recommend considering [Stellar Hubble Data Warehouse](https://developers.stellar.org/docs/accessing-data/overview) for any historical data.
+
+### Ingesting Live Data
+
+This option is enabled by default and is the recommended mode of ingestion to run. It is controlled with the environment configuration flag `INGEST`. Refer to [Configuration](./configuring.mdx) for how an instance of Horizon performs the ingestion role.
+
+For high availability requirements, **we recommend deploying more than one live ingesting instance**, as this makes it easier to avoid downtime during upgrades and adds resilience, ensuring you always have the latest network data (refer to [Ingestion Role Instance](./configuring.mdx#multiple-instance-deployment)).
+
+### Ingesting Historical Data
+
+Import network data from a past date range into the database:
+
+
+
+```
+stellar-horizon db reingest range <start-ledger> <end-ledger>
+```
+
+
+
+Running any historical range of ingestion requires coordination with your chosen data retention configuration. When setting a temporal limit on history with `HISTORY_RETENTION_COUNT=<count>`, the temporal limit takes precedence, and any data ingested beyond that limit will be automatically purged.
+
+Typically the only time you need to run historical ingestion is once, when bootstrapping a system after first deployment; from that point forward, **live** ingestion will keep the database populated with the expected sliding window of trailing historical data. One exception is if you think you have a gap in the database caused by **live** ingestion being down, in which case you can run a historical ingestion range to fill the gap.
+
+You can run historical ingestion in parallel in the background while your main Horizon server separately performs **live** ingestion. If the specified range overlaps with data already in the database, that's fine: the data will simply be overwritten, making the operation effectively idempotent.
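+
+For example, to back-fill a suspected gap while the live instance keeps running, you could launch something like the following in the background (the ledger range here is hypothetical):
+
+```bash
+nohup stellar-horizon db reingest range 52000000 52100000 &
+```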
+
+#### Parallel Ingestion Workers
+
+You can parallelize ingestion of a target historical ledger range by dividing it into sequential slices of smaller ranges and running the `db reingest range` command for each sub-range in parallel as a separate process, on the same or a different machine. The shorthand rule for best performance is to identify the number of CPU cores available per target machine: if multi-core, add `--parallel-workers <count>` to the command, which enables the command to further parallelize internally within a single process using multiple threads and sub-divided smaller ranges.
+
+
+
+```
+# target range 1 30000, on single machine with 1 CPU core
+horizon1> stellar-horizon db reingest range 1 30000
+
+# target range 1 30000, on single machine with 4 CPU cores
+horizon1> stellar-horizon db reingest range 1 30000 --parallel-workers 4
+
+# target range 1 30000, on two machines, each has 2 CPU cores
+horizon1> stellar-horizon db reingest range 1 15000 --parallel-workers 2
+horizon2> stellar-horizon db reingest range 15001 30000 --parallel-workers 2
+```
+
+
+
+### Notes
+
+#### Some endpoints may report as unavailable during **live** ingestion
+
+- Endpoints that display current state information from **live** ingestion may return a `503 Service Unavailable`/`Still Ingesting` error. An example is the `/paths` endpoint (built using offers). Such endpoints will become available after **live** ingestion has finished network synchronization and catch-up (usually within a couple of minutes).
+
+#### If more than five minutes have elapsed with no new ingested data:
+
+- Verify the host machine meets recommended [Prerequisites](./prerequisites.mdx).
+
+- Check Horizon log output.
+  - If there are many `level=error` messages, it may point to an environmental issue, such as an inability to access the database.
+  - **Live** ingestion will emit two key log lines about once every 5 seconds, based on the latest ledger emitted by the network. Tail the Horizon log output and grep for the presence of these lines with a filter:
+ ```
+    tail -f horizon.log | grep -E 'Processed ledger|Closed ledger'
+ ```
+    If you don't see output from this pipeline every few seconds for a new ledger, then ingestion is not proceeding; look at the full logs and see if other messages explain why. When connecting to pubnet you may see lines mentioning 'catching up', as it can take up to 5 minutes for the captive core process started by Horizon to catch up to the pubnet network.
+  - Check RAM usage on the machine; it's possible the system ran low on RAM and is using swap memory, which will result in slow performance. Verify the host machine meets the minimum RAM [prerequisites](./prerequisites.mdx).
+  - Verify the read/write throughput of the volume that the Horizon process's current working directory is using. Based on the [prerequisites](./prerequisites.mdx), the volume should sustain at least 10 MB/s. One way to roughly verify this from the host machine (Linux/macOS) command line:
+ ```
+ sudo dd if=/dev/zero of=/tmp/test_speed.img bs=1G count=1
+ ```
+
+#### Monitoring Ingestion Process
+
+For high-availability deployments, it is recommended to implement monitoring of the ingestion process for visibility into performance and health. Refer to [Monitoring](./monitoring.mdx) for accessing logs and metrics from Horizon. Stellar publishes the example [Horizon Grafana Dashboard](https://grafana.com/grafana/dashboards/13793-stellar-horizon/), which demonstrates queries against key Horizon ingestion metrics; look specifically at `Local Ingestion Delay [Ledgers]` and `Last ledger age` in the `Health Summary` panel.
diff --git a/docs/run-platform-server/installing.mdx b/docs/run-platform-server/installing.mdx
new file mode 100644
index 000000000..e244cd482
--- /dev/null
+++ b/docs/run-platform-server/installing.mdx
@@ -0,0 +1,78 @@
+---
+title: Installing
+sidebar_position: 20
+---
+
+import { CodeExample } from "@site/src/components/CodeExample";
+
+To install Horizon in production or non-development environments, we recommend the following based on target infrastructure:
+
+### Bare-Metal
+
+- If the host is Debian Linux, install prebuilt binaries [from repositories](#package-manager) using a package manager.
+- For any other host, download [prebuilt release binaries](#prebuilt-releases) of Stellar Horizon and Core for the host's target architecture and operating system, or [compile from source](https://github.com/stellar/go/blob/master/services/horizon/internal/docs/GUIDE_FOR_DEVELOPERS.md#building-horizon).
+
+### Containerized
+
+- Non-Orchestrated: if the target deployment environment does not include a container orchestrator such as Kubernetes, this means you intend to run the Horizon release image from [dockerhub.com/stellar/stellar-horizon](https://hub.docker.com/r/stellar/stellar-horizon) as a container directly with the Docker daemon on the host. Choose the tag of the Horizon image for the specific release version, then pull the image locally onto the host using `docker pull stellar/stellar-horizon:<tag>` (see the sketch after this list).
+- Orchestrated: when the target environment has container orchestration, such as Kubernetes cluster, we recommend using the [Horizon Helm Chart](https://github.com/stellar/helm-charts/tree/main/charts/horizon) to manage the installation and deployment lifecycle of the Horizon image as container(s) on the cluster.
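+
+For the non-orchestrated case, and assuming the image's entrypoint launches `stellar-horizon` and reads the same environment variables described in [Configuring](./configuring.mdx), pulling and starting a container might look like the sketch below (the tag, published port, and database address are placeholders, not recommendations):
+
+```bash
+# substitute <tag> with the release version you chose
+docker pull stellar/stellar-horizon:<tag>
+docker run --rm -p 8000:8000 \
+  -e DATABASE_URL="postgres://horizon@host.docker.internal/horizon_pubnet" \
+  -e NETWORK="pubnet" \
+  stellar/stellar-horizon:<tag>
+```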
+
+To install Horizon in development environments, refer to the [Horizon README](https://github.com/stellar/go/blob/master/services/horizon/README.md#try-it-out) from the source code repo for options available.
+
+### Notes on Installation
+
+#### Package Manager
+
+SDF publishes new releases to its custom Ubuntu repositories. Follow [this guide](https://github.com/stellar/packages/blob/master/docs/adding-the-sdf-stable-repository-to-your-system.md#adding-the-sdf-stable-repository-to-your-system) to add the stable SDF repository to your host system. If you are interested in installing release candidate versions of software that have yet to reach stable, refer to [Adding the Bleeding Edge Testing Repository](https://github.com/stellar/packages/blob/master/docs/adding-the-sdf-stable-repository-to-your-system.md#adding-the-bleeding-edge-testing-repository). Lastly, [install package](https://github.com/stellar/packages/blob/master/docs/installing-individual-packages.md#installing-individual-packages) outlines the various commands that these packages make available.
+
+To proceed with installation:
+
+
+
+```bash
+sudo apt update
+sudo apt install stellar-horizon stellar-core
+```
+
+
+
+#### Prebuilt Releases
+
+Refer to the list of [Horizon releases](https://github.com/stellar/go/releases) and [Core releases](https://github.com/stellar/stellar-core/releases). Copy the binaries to a location on the host PATH.
+
+#### Verify Bare-Metal Installations
+
+Run `stellar-horizon --help` from a terminal. If the help for Horizon is displayed, your installation was successful.
+
+Some shells (such as [zsh](https://www.zsh.org/)) cache PATH lookups. You may need to clear your cache (by using `rehash` in zsh, for example) or restart your shell before trying to run the command above.
+
+#### Helm Chart Installation
+
+If the deployment can be done on Kubernetes, there is a [Horizon Helm Chart](https://github.com/stellar/helm-charts/blob/main/charts/horizon) available. If you haven't already, install the [Helm CLI tool](https://helm.sh/docs/intro/install/) (version 3 or higher) on your workstation. Next, add the Stellar repo to the helm client's list of repos and confirm that you can view the list of available chart versions for the repo:
+
+
+
+```bash
+helm repo add stellar https://helm.stellar.org/charts
+helm repo update stellar
+helm search repo stellar/horizon --versions --devel
+```
+
+
+
+Hold off on installing the Horizon Helm Chart for now; that step happens after [Configuring](./configuring.mdx) is complete, in [Running](./running.mdx).
+
+If Kubernetes is not an option, the Helm charts may still be a good reference for how to configure and run the Horizon Docker container. Just run the helm command with `template` to display the generated Kubernetes manifests, which demonstrate all the container configurations needed:
+
+
+
+```bash
+git clone https://github.com/stellar/helm-charts; cd helm-charts
+helm template -f charts/horizon/values.yaml charts/horizon/
+```
+
+
+
+## Next Step
+
+After installation is complete, you are now ready to proceed to [Configuring Horizon](./configuring.mdx)!
diff --git a/docs/run-platform-server/monitoring.mdx b/docs/run-platform-server/monitoring.mdx
new file mode 100644
index 000000000..2b0389e43
--- /dev/null
+++ b/docs/run-platform-server/monitoring.mdx
@@ -0,0 +1,118 @@
+---
+title: Monitoring
+sidebar_position: 60
+---
+
+import { CodeExample } from "@site/src/components/CodeExample";
+
+## Metrics
+
+Metrics are emitted from Horizon over HTTP in [the de facto text-based exposition format](https://github.com/prometheus/docs/blob/main/content/docs/instrumenting/exposition_formats.md#text-based-format). The metrics are published on the _private_ `/metrics` path of the Horizon Admin port, an optional service which, when enabled, is bound by the Horizon process to the host machine's loopback network (localhost or 127.0.0.1). To enable the Admin port, add the environment configuration parameter `ADMIN_PORT=XXXXX`; the metrics endpoint will then be reachable on the host machine at `localhost:<admin-port>/metrics`. You can verify this by pointing any browser that can reach the address at it; it will print out all metrics keys.
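+
+For example, with `ADMIN_PORT=4200` set for the Horizon process, a quick check from the host could be:
+
+```bash
+curl localhost:4200/metrics
+```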
+
+### Exporting
+
+Once the Admin port is enabled, the Horizon metrics endpoint can be 'scraped' by external monitoring infrastructure. Since the metrics output is encoded in the [standard text-based format](https://github.com/prometheus/docs/blob/main/content/docs/instrumenting/exposition_formats.md#text-based-format), it is compatible with the many types of monitoring infrastructure that interoperate with that same standard format.
+
+In the case of Horizon, metrics are published on the Admin HTTP port, which is bound to the host machine's loopback network interface (127.0.0.1), so external monitoring systems or services cannot reach the port directly. To expose the metrics securely, we recommend following the exporter pattern, a common metrics scraping strategy.
+
+As one real-world example of exporting from a bare-metal Horizon installation (Horizon installed directly onto the operating system), use [FluentBit and the Prometheus Exporter](https://docs.fluentbit.io/manual/pipeline/outputs/prometheus-exporter) on the same host machine where Horizon is running. FluentBit performs a simple port-forwarding pipeline on the host machine: configure the input to be Horizon's `localhost:<admin-port>/metrics` and the output to be the `host` and `port` representing the target network interface and port on the host machine, then configure your monitoring infrastructure to scrape that host address.
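+
+A minimal FluentBit sketch of that pipeline, assuming Horizon's admin port is 4200 and the exporter should listen on port 9100 of the host (plugin option names can vary slightly between FluentBit versions):
+
+```
+[INPUT]
+    name            prometheus_scrape
+    host            127.0.0.1
+    port            4200
+    metrics_path    /metrics
+    scrape_interval 10s
+
+[OUTPUT]
+    name  prometheus_exporter
+    match *
+    host  0.0.0.0
+    port  9100
+```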
+
+In container-orchestrated environments such as Kubernetes, you can use the same exporter strategy. We assume you already have a metrics infrastructure deployment like Prometheus and Grafana set up on the cluster via the [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator), and you will just need to configure that infrastructure to scrape the Horizon pod based on the `ADMIN_PORT`.
+
+### Data model
+
+There are numerous application metrics keys emitted by Horizon at runtime, encoded as four metric types in the exposition format: `counter`, `gauge`, `histogram`, and `summary`. Each key is further qualified with labels for more granularity. To summarize, we can highlight groupings of metrics keys by common function, denoted in the prefix of their name:
+
+- `go_`: Go-specific runtime performance
+- `horizon_txsub_`: attributes of Horizon's transaction submission subsystem, if enabled
+- `horizon_stellar_core_`: runtime attributes of the Stellar network as reported by Captive Core
+- `horizon_order_book_`: runtime attributes of the in-memory order book Horizon maintains of the current Stellar network
+- `horizon_log_`: counters of how many log messages were printed at each severity level
+- `horizon_ingest_`: performance measurements and stateful aspects of Horizon's internal ingestion subsystem
+- `horizon_http_`: statistics and measurements of Horizon's HTTP API service, covering all aspects of request/response load and timings
+- `horizon_history_`: statistics on the historical ledgers Horizon has ingested
+- `horizon_db_`: measurements of database performance, query times per endpoint, and pooling stats
+- `process_`: generic host machine compute measurements
+
+Each key may have zero or more labels; the serialized output (exposition) format follows this template:
+
+```
+metric_key{label_1="value",label_2="value",...} metric_value
+```
+
+Rather than listing all individual metrics keys in the docs (they change often), we recommend performing an HTTP GET against the Horizon metrics endpoint, `localhost:XXXXX/metrics`, using any HTTP client (browser, curl, wget, etc.). The response contains the metrics keys and additional meta information for each key, including its description and type (counter, gauge, histogram, summary). As an example, for the key `horizon_http_requests_duration_seconds`:
+
+```
+# HELP horizon_http_requests_duration_seconds HTTP requests durations, sliding window = 10m
+# TYPE horizon_http_requests_duration_seconds summary
+horizon_http_requests_duration_seconds{method="GET",route="/",status="200",streaming="false",quantile="0.5"} 0.000186958
+horizon_http_requests_duration_seconds{method="GET",route="/",status="200",streaming="false",quantile="0.9"} 0.00043625
+horizon_http_requests_duration_seconds{method="GET",route="/",status="200",streaming="false",quantile="0.99"} 0.000645
+...
+
+```
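+
+To explore one functional grouping at a time, filter the response by prefix; for example (again assuming the illustrative `ADMIN_PORT=6060`):
+
+```bash
+# list only the ingestion subsystem metrics
+curl -s http://localhost:6060/metrics | grep '^horizon_ingest_'
+```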
+
+### Queries
+
+Build queries against the metrics data model to highlight the performance of a given Horizon deployment. Refer to Stellar's [Grafana Horizon Dashboard](https://grafana.com/grafana/dashboards/13793-stellar-horizon/) for examples of metrics queries that derive application performance indicators such as:
+
+- Number of requests per minute.
+- Number of requests per route (the most popular routes).
+- Average response time per route.
+- Maximum response time for non-streaming requests.
+- Number of streaming vs. non-streaming requests.
+- Number of rate-limited requests.
+- List of rate-limited IPs.
+- Unique IPs.
+- The most popular SDKs/apps sending requests to a given Horizon node.
+- Average ingestion time of a ledger.
+- Average ingestion time of a transaction.
+
+Choose the [revisions tab](https://grafana.com/grafana/dashboards/13793-stellar-horizon/?tab=revisions) and download the dashboard source file to access the Grafana dashboard source code and the metrics queries that build each panel.
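+
+As an illustration of one such query, 'requests per minute' can be derived from the counter that accompanies the `horizon_http_requests_duration_seconds` summary. This sketch assumes a Prometheus server scraping Horizon is reachable at `localhost:9090`:
+
+```bash
+# requests per minute across all routes, averaged over the last 5 minutes
+curl -s 'http://localhost:9090/api/v1/query' \
+  --data-urlencode 'query=sum(rate(horizon_http_requests_duration_seconds_count[5m])) * 60'
+```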
+
+### Alerts
+
+Once queries are developed on a Grafana dashboard, a convenient follow-on step is to add [alert rules](https://grafana.com/docs/grafana/latest/alerting/alerting-rules/create-grafana-managed-rule/) based on specific queries, triggering notifications when thresholds are exceeded.
+
+Here are some example alerts to consider with potential causes and solutions.
+
+| Alert | Cause | Solution |
+| --- | --- | --- |
+| Spike in number of requests | Potential DoS attack | Review network load balancer or content switch configurations |
+| Ingestion is slow | Host server compute resources are low | Increase compute specs |
+| HTTP API responses are returning errors | Host server compute resources are low, or networking to the DB is lost | Check the [Horizon logs](#logs) to see what errors are being emitted and narrow down the root cause from there |
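+
+For deployments that manage alerts in Prometheus rather than Grafana, the "ingestion is slow" case might look like the following sketch; the metric name and thresholds are assumptions to adapt to your environment:
+
+```bash
+# sketch of a Prometheus alerting rule file (metric name and timings assumed)
+cat > horizon-alerts.yml <<'EOF'
+groups:
+  - name: horizon
+    rules:
+      - alert: HorizonIngestionStalled
+        expr: increase(horizon_ingest_local_latest_ledger[5m]) == 0
+        for: 10m
+        labels:
+          severity: critical
+        annotations:
+          summary: Horizon has not ingested a new ledger for 10+ minutes
+EOF
+```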
+
+## Logs
+
+Horizon outputs logs to the operating system's standard out, logging all aspects of runtime, including HTTP requests and ingestion. Typically, very few `warn` or `error` severity level messages are emitted. The default severity level in Horizon is configured as `LOG_LEVEL=info`; this environment configuration parameter can be set to one of `trace, debug, info, warn, error`. The verbosity of log output is the inverse of the severity level chosen, i.e. for the most verbose logs use `trace`, for the least verbose use `error`.
+
+For production deployments, we recommend keeping the default `info` severity setting and choosing a log capture strategy appropriate to the deployment:
+
+- Bare-metal deployments direct to the operating system: redirect standard out from the Horizon process to a file on disk, and apply a log rotation tool such as [logrotate](https://man7.org/linux/man-pages/man8/logrotate.8.html) to the file to manage disk space usage (a sketch follows this list).
+- Orchestrated deployments on Kubernetes: use an EFK/ELK stack on the cluster, configured to capture standard out from the Horizon pod.
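+
+A minimal sketch of such a logrotate policy, assuming Horizon's standard out is redirected to a hypothetical `/var/log/stellar-horizon/horizon.log`:
+
+```bash
+# install a rotation policy for the Horizon log file (paths are illustrative)
+cat > /etc/logrotate.d/stellar-horizon <<'EOF'
+/var/log/stellar-horizon/horizon.log {
+    daily
+    rotate 14
+    compress
+    delaycompress
+    missingok
+    notifempty
+    copytruncate
+}
+EOF
+```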
+
+## Runtime Profiling
+
+Horizon is written in Go, so it can optionally emit Go runtime diagnostics and profiling output via [pprof](https://go.dev/doc/diagnostics). The pprof HTTP endpoints are hosted on Horizon's admin HTTP port, which is disabled by default; enable it by adding the environment configuration parameter `ADMIN_PORT=XXXXX`.
+
+Two of the standard predefined profiles are published:
+
+`localhost:XXXXX/debug/pprof/heap` - heap profiling
+
+`localhost:XXXXX/debug/pprof/profile` - CPU profiling
+
+Use Go's pprof command-line tool to access the published endpoints and visualize the emitted diagnostic data. A brief example of using the pprof tool from the command line to get started, with `web` displaying a graphical representation of current heap allocations:
+
+```
+$ go tool pprof http://localhost:6060/debug/pprof/heap
+Fetching profile over HTTP from http://localhost:6060/debug/pprof/heap
+Saved profile in ./pprof/pprof.stellar-horizon.alloc_objects.alloc_space.inuse_objects.inuse_space.022.pb.gz
+File: stellar-horizon
+Type: inuse_space
+Entering interactive mode (type "help" for commands, "o" for options)
+(pprof) web
+```
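+
+Similarly, a CPU profile can be collected over a sampling window using the standard `seconds` query parameter (the port again assumes the illustrative `ADMIN_PORT=6060`):
+
+```bash
+# sample CPU usage for 30 seconds, then open the interactive pprof prompt
+go tool pprof http://localhost:6060/debug/pprof/profile?seconds=30
+```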
+
+## I'm Stuck! Help!
+
+If any of the above steps don't work or you are otherwise prevented from correctly setting up Horizon, please join our community and let us know. Either post a question at [our Stack Exchange](https://stellar.stackexchange.com/) or chat with us on [Horizon Discord](https://discord.com/channels/897514728459468821/912466080960766012) to ask for help.
diff --git a/docs/run-platform-server/prerequisites.mdx b/docs/run-platform-server/prerequisites.mdx
new file mode 100644
index 000000000..dd5f99bd6
--- /dev/null
+++ b/docs/run-platform-server/prerequisites.mdx
@@ -0,0 +1,49 @@
+---
+title: Prerequisites
+sidebar_position: 10
+---
+
+The Horizon service is responsible for synchronizing with the Stellar network and processing ledger data. To understand the scope of Horizon's services, please read the [configuring](./configuring.mdx) section before moving on to the compute prerequisites.
+
+The Horizon service can be [installed](./installing.mdx) on bare metal or a virtual machine. It is natively supported on both Linux and Windows operating systems.
+
+## Single Instance Deployment Model
+
+For a basic setup using the [Single Instance Deployment model](./configuring.mdx#single-instance-deployment), you will need two distinct compute profiles:
+
+- One for hosting the Horizon service
+- Another for hosting the PostgreSQL server
+
+The minimum hardware specifications to effectively run Horizon are as follows:
+
+### Horizon Compute Instance:
+
+- CPU: 4 cores
+- RAM: 16 GB
+- Storage: SSD with a capacity of 100 GB capable of handling at least 1.5K IOPS (I/O operations per second)
+
+### PostgreSQL Database Server Compute Instance:
+
+- CPU: 4 cores
+- RAM: 32 GB
+- Storage: SSD with a capacity of 2 TB (NVMe or Direct Attached Storage) capable of handling at least 7K IOPS (I/O operations per second)
+
+Please note that PostgreSQL version 12 or later is required.
+
+These specifications assume a 30-day retention window for data storage. For a longer retention window, the system requirements will be higher. For more information about data ingestion, history retention, and managing storage, check the [ingestion](./ingestion.mdx) section.
+
+## Multiple Instance Deployment
+
+To achieve high availability, redundancy, and high throughput, explore the [scaling](./scaling.mdx) strategy. It provides detailed prerequisites and guidelines to determine the appropriate [number of Horizon instances](./configuring.mdx#multiple-instance-deployment) to deploy.
+
+## Network Access
+
+- Ensure that the Horizon instance can establish a connection with the PostgreSQL database instance. The default port for PostgreSQL is 5432 (see the connectivity check sketched after this list).
+
+- A stable and fast Internet connection is required for any Horizon instance running the ingestion role, to ensure efficient outbound connectivity to remote hosts in the [quorum set](https://developers.stellar.org/docs/run-core-node/configuring#choosing-your-quorum-set) and [archive urls](https://developers.stellar.org/docs/run-core-node/configuring#history) for the chosen Stellar network. During ingestion, the Horizon instance communicates with these hosts, receiving network transaction data through its local captive core sub-process.
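+
+As a quick sanity check of the first point, `pg_isready` (shipped with the PostgreSQL client tools) can verify connectivity from the Horizon host; the hostname below is hypothetical:
+
+```bash
+# verify the Horizon host can reach the PostgreSQL instance on its default port
+pg_isready -h my-postgres-host.internal -p 5432
+```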
+
+:::note
+
+Hardware requirements may increase as the Stellar network grows and/or if you're sharing resources or using custom configs.
+
+:::
diff --git a/docs/run-platform-server/running.mdx b/docs/run-platform-server/running.mdx
new file mode 100644
index 000000000..7878b0920
--- /dev/null
+++ b/docs/run-platform-server/running.mdx
@@ -0,0 +1,123 @@
+---
+title: Running
+sidebar_position: 40
+---
+
+import { CodeExample } from "@site/src/components/CodeExample";
+
+Once you have [established the Horizon database](./configuring.mdx#initialize-horizon-database) and have [identified the Horizon runtime config per host](./configuring.mdx#pre-requisites), you're ready to run Horizon.
+
+## Bare-metal installation
+
+Run the `stellar-horizon` binary with the [appropriate environment parameters](./configuring.mdx#parameters) set (or `stellar-horizon-cmd serve` if you [installed via the package manager](./installing.mdx#package-manager), which will automatically import your configuration from `/etc/default/stellar-horizon`).
+
+## Containerized installation
+
+You don't execute the Horizon binary directly; instead, the [stellar/stellar-horizon](https://hub.docker.com/r/stellar/stellar-horizon) image has a pre-defined entrypoint that starts Horizon at container startup. The Horizon process gets all configuration settings from container environment variables.
+
+### Docker daemon
+
+Use `docker run stellar/stellar-horizon:<tag> --env-file <env_file>`, specifying each Horizon configuration flag identified during [Configuring](./configuring.mdx) as a separate `HORIZON_CONFIG_PARAM=value` line in `<env_file>`.
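+
+For example, a minimal sketch (the file name, parameter values, and use of the `2.26.1` tag are illustrative, not recommendations):
+
+```bash
+# horizon.env: one HORIZON_CONFIG_PARAM=value per line
+cat > horizon.env <<'EOF'
+DATABASE_URL=postgres://horizon:secret@db.internal:5432/horizon
+INGEST=true
+ADMIN_PORT=6060
+EOF
+
+docker run --env-file horizon.env -p 8000:8000 stellar/stellar-horizon:2.26.1
+```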
+
+### Kubernetes using Helm Chart
+
+Ensure you have followed the [pre-requisite](./installing.mdx#helm-chart-installation) of installing the Helm CLI tool and adding the Stellar chart repo to the Helm client.
+
+The Horizon process [requires access to a Postgres 12 database](./configuring.mdx#preparing-the-database). First, use the standard Kubernetes CLI tool `kubectl` from your workstation to create, in the intended namespace of the Kubernetes cluster, a Kubernetes secret that will hold the Horizon database URL.
+
+
+
+```bash
+# copy your horizon DATABASE_URL into a secure file, no line breaks.
+echo -n 'database_url_here' > my_creds.txt
+
+# now generate the kubernetes secret from the file
+kubectl create secret generic \
+-n my-namespace \
+my-db-secret \
+--from-file=DATABASE_URL=my_creds.txt
+```
+
+
+
+Now deploy Horizon onto the cluster using the Helm Chart:
+
+
+
+```bash
+helm install my-horizon stellar/horizon \
+--namespace my-horizon-namespace-on-cluster \
+--set ingest.persistence.enabled=true \
+--set web.replicaCount=1 \
+--set web.enabled=true \
+--set ingest.enabled=true \
+--set ingest.replicaCount=1 \
+--set web.existingSecret=my-db-secret \
+--set global.image.horizon.tag=2.26.1 \
+--set global.network=testnet \
+--set ingest.existingSecret=my-db-secret \
+--set ingest.horizonConfig.captiveCoreUseDb=true \
+--set ingest.resources.limits.cpu=1 \
+--set ingest.resources.limits.memory=6Gi
+```
+
+
+
+This example of Helm Chart usage highlights some key aspects:
+
+- Uses the `global.network=[testnet|pubnet]` parameter; this automates generation of all the Horizon configuration parameters specific to the network, such as archive urls, captive core config, and other parameters mentioned in [Configuring](./configuring.mdx).
+- `global.image.horizon.tag` should be set to one of the Docker Hub tags published on [stellar/stellar-horizon](https://hub.docker.com/r/stellar/stellar-horizon)
+- Enables all roles on the deployment instance: ingesting and web API (including transaction submission). If you choose a multi-instance deployment with each instance performing a single role of just web API or ingestion, then you will do two Helm installations, one for each role: `my-horizon-ingestion-installation` and `my-horizon-api-installation`. Each of these Helm installations will set `ingest.enabled`, `web.enabled`, `ingest.replicaCount`, and `web.replicaCount` according to the role it performs.
+- To customize further, the best approach is to download the [Horizon Helm Chart values.yaml](https://github.com/stellar/helm-charts/blob/main/charts/horizon/values.yaml), update the settings in your local copy, and pass it to the Helm install rather than specifying many individual `--set` flags:
+
+
+
+```bash
+helm install myhorizon stellar/horizon \
+--namespace my-horizon-namespace-on-cluster \
+--values values.yaml
+```
+
+
+
+- Customizing network configuration parameters: if you want to connect to a network other than the `testnet` or `pubnet` presets, don't use `global.network`; instead, use a local copy of [values.yaml](https://github.com/stellar/helm-charts/blob/main/charts/horizon/values.yaml) and set `ingest.coreConfig`, referring to [\_core-config.tpl](https://github.com/stellar/helm-charts/blob/main/charts/horizon/templates/_core-config.tpl) for examples of all the key/value pairs to include.
+
+- Minimum resource limits: verify whether `LimitRange` defaults are defined on the target namespace in Kubernetes, and if so, ensure the defaults provide at least `6Gi` of memory and `1` CPU. Otherwise, define the limits explicitly on the Helm install via the `ingest.resources.limits.*` parameters shown in the example, to ensure the deployed pods have adequate resources.
+
+
+
+Once the Horizon process starts, it will emit logging to standard out, and you should see output similar to:
+
+
+
+```
+INFO[...] Starting horizon on :8000 pid=29013
+```
+
+
+
+Note that the numbers may naturally be different for your installation. The log line above announces that Horizon is ready to serve client requests.
+
+Next, you can confirm that Horizon is responding correctly by loading the root resource. In the example above, that URL would be http://127.0.0.1:8000/, and simply running `curl http://127.0.0.1:8000/` would return it:
+
+
+
+```json
+{
+ "_links": {
+ "account": {
+ "href": "http://127.0.0.1:8000/accounts/{account_id}",
+ "templated": true
+ },
+ "accounts": {
+ "href": "http://127.0.0.1:8000/accounts{?signer,sponsor,asset,cursor,limit,order}",
+ "templated": true
+ }
+ }
+ // etc.
+}
+```
+
+
+
+Refer to [Monitoring](./monitoring.mdx) for more details on the Horizon runtime logging and metrics available.
diff --git a/docs/run-api-server/scaling.mdx b/docs/run-platform-server/scaling.mdx
similarity index 83%
rename from docs/run-api-server/scaling.mdx
rename to docs/run-platform-server/scaling.mdx
index 5ed600723..299337890 100644
--- a/docs/run-api-server/scaling.mdx
+++ b/docs/run-platform-server/scaling.mdx
@@ -3,7 +3,7 @@ title: Scaling
sidebar_position: 70
---
-As alluded to in the discussion on [Prerequisites](./prerequisites.mdx), Horizon encompasses different logical tiers that can be scaled independently for high throughput, isolation, and high availability. The following components can be independently scaled:
+As alluded to in the discussion in [Prerequisites](./prerequisites.mdx), Horizon encompasses different logical tiers that can be scaled independently for high throughput, isolation, and high availability. The following components can be independently scaled:
- Web service API (serving)
- Captive Core (ingestion and transaction submission)
@@ -23,12 +23,6 @@ For low to medium load environments with up to 30-90 days of data history retent
![](/assets/horizon-scaling/Topology-2VMs.png)
-### Extension: Isolating Captive Core
-
-Additionally, Captive Core can be further isolated into its own VM, especially for isolating high throughput historical catch-up with parallel workers, leaving it unaffected by API request servicing load.
-
-![](/assets/horizon-scaling/Topology-3VMs.png)
-
## Enterprise _n_-Tier
This architecture services high request and data processing throughput with isolation and redundancy for each component. Scale the API service horizontally by adding a load balancer in front of multiple API service instances, each only limited by the database I/O limit. If necessary, use ALB routing to direct specific endpoints to specific request-serving instances, which are tied to a specific, dedicated DB. Now, if an intense endpoint gets clobbered, all other endpoints are unaffected.
@@ -39,8 +33,8 @@ Additionally, a second Captive Core instance shares ingestion load and serves as
![](/assets/horizon-scaling/Topology-Enterprise.png)
-### Extension: Redundant Hot Backup
+### Redundant Hot Backup
-The entire architecture can be replicated to a second cluster. The backup cluster can be upgraded independently or fail-overed to with no downtime. Additionally, capacity can be doubled in an emergency if needed.
+The entire architecture can be replicated to a second cluster. The backup cluster can be upgraded independently or failed over to with no downtime. Additionally, capacity can be doubled in an emergency if needed. This is synonymous with the [Blue/Green deployment model](https://en.wikipedia.org/wiki/Blue%E2%80%93green_deployment).
![](/assets/horizon-scaling/Topology-Enterprise-HotBackup.png)
diff --git a/docs/run-platform-server/upgrading.mdx b/docs/run-platform-server/upgrading.mdx
new file mode 100644
index 000000000..cf2c37411
--- /dev/null
+++ b/docs/run-platform-server/upgrading.mdx
@@ -0,0 +1,142 @@
+---
+title: Upgrading
+sidebar_position: 80
+---
+
+import { Alert } from "@site/src/components/Alert";
+import { CodeExample } from "@site/src/components/CodeExample";
+
+Here we'll describe the recommended steps for upgrading a Horizon 2.x installation.
+
+### Pre-requisites
+
+- An existing Horizon deployment consisting of one or more instances of Horizon.
+- All instances are on the same 2.x version to begin.
+- For [bare-metal](./installing.mdx#bare-metal) installs: you have shell or command-line access to each host with a Horizon installation.
+- For [deployments direct on the Docker daemon](./installing.mdx#containerized): you have command-line access to the host running the Docker daemon.
+- For [deployments on Kubernetes with the Helm chart](./installing.mdx#helm-chart-installation): you have the kubectl and helm command-line tools on your workstation and a user login with appropriate access levels to change resources in the target namespace of the Horizon deployment on the cluster.
+
+### Assess current installation
+
+- Identify the list of all instances of Horizon that need to be upgraded.
+
+ - Bare-metal installations: the list of hosts is managed by you.
+ - Docker daemon deployments: the list of hosts and running containers is managed by you.
+  - Kubernetes deployments: get the list of pods deployed from your prior Helm installation; they will carry the label `release=your_helm_horizon_installation_name`:
+
+
+
+ ```bash
+  kubectl get pods -l release=your_helm_horizon_installation_name -n <namespace>
+ ```
+
+
+
+- Identify your current Horizon software version:
+
+ - Obtain command line access to the operating system of each Horizon instance:
+    - Bare-metal installations: typically ssh on Linux or PowerShell on Windows.
+    - Docker daemon deployments: use `docker exec -it <container_id> /bin/bash`
+    - For Kubernetes deployments: use `kubectl exec -it <pod_name> -n <namespace> -- /bin/bash`
+  - On the command line of each instance, run `stellar-horizon version`
+
+- All instances should report the same version. If not, the system may be inconsistent; use this upgrade as an opportunity to establish consistency and get them all on the same version.
+
+### Determine the target version for upgrade
+
+Now that you know your current Horizon version, visit [Horizon Releases](https://github.com/stellar/go/releases) and choose the next version above your current one to upgrade to. Follow the steps [recommended by GitHub to compare releases](https://docs.github.com/en/repositories/releasing-projects-on-github/comparing-releases): click the `Compare` dropdown of the chosen release and select your current release, and GitHub will display the differences between the versions. Select the `Files changed` tab and go to `services/horizon/CHANGELOG.md`; it will highlight the release notes for changes that have occurred between your current version and the version you selected. Review these notes and look for any `Breaking Changes`, `State Rebuild`, and `DB Schema Migration` sections; the latter two will also mention the expected time for the state rebuild or DB migration to apply.
+
+### Install the new version
+
+Now that you have identified the new version and are aware of the potential impacts of upgrading to it based on the release notes, such as state rebuilds and DB migrations, you are informed and ready to proceed with the upgrade.
+
+Upgrading production deployments should leverage a secondary, hot-backup deployment, also known as a [blue/green model](./scaling.mdx#redundant-hot-backup), and perform the upgrade on the inactive deployment first. This avoids downtime for your external users, as the upgrade takes place on the inactive deployment.
+
+A good strategy for upgrading Horizon, applicable to single- or multi-instance deployments: shut all instances down, then install the new Horizon version on one of the ingesting instances first. Horizon will only initiate the `State Rebuild` and `DB Schema Migration` actions related to an upgrade on an instance where it detects that ingestion is enabled with the configuration parameter `INGEST=true`. This lowers complexity during the upgrade, as you only need to focus on one instance, and it avoids multiple concurrent Horizon ingestion processes attempting the same upgrade on the database.
+
+- Bare-metal installations: stop the Horizon process on all instances first, then shell into one instance that is configured for ingestion and use the apt package manager on Linux:
+
+
+
+ ```bash
+ sudo apt update
+ sudo apt install stellar-horizon=new_horizon_debian_pkg_version
+ ```
+
+
+
+  Restart Horizon using the configuration already in place, but include the `APPLY_MIGRATIONS=true` environment variable; this triggers Horizon to automatically run any DB migrations that it detects are needed.
+
+- Docker daemon deployments: stop all Docker containers first, then choose one container that has ingestion enabled, set the new image tag based on the release published on Docker Hub - [stellar/stellar-horizon](https://hub.docker.com/r/stellar/stellar-horizon/tags), and restart the container, including the `APPLY_MIGRATIONS=true` environment variable in the container environment; this triggers Horizon to automatically run any DB migrations that it detects are needed.
+- For Helm installations on Kubernetes, first use the Helm CLI tool to stop all Horizon instances by scaling the installations you created earlier in the [run steps](./running.mdx) (for ingest and web) down to 0 replicas:
+
+
+
+ ```bash
+ helm upgrade all-my-horizon-installations \
+ --namespace my-horizon-namespace-on-cluster \
+ --set ingest.replicaCount=0 \
+ --set web.replicaCount=0
+ ```
+
+
+
+  Now use Helm to start a single Horizon instance from an installation that has ingestion enabled, setting `global.image.horizon.tag` to the release tag published on [stellar/stellar-horizon](https://hub.docker.com/r/stellar/stellar-horizon/tags):
+
+
+
+ ```bash
+ helm upgrade my-horizon \
+ --namespace my-horizon-namespace-on-cluster \
+ --set global.image.horizon.tag=new_horizon_release_number \
+ --set ingest.horizonConfig.applyMigrations=True \
+ --set ingest.replicaCount=1
+ ```
+
+
+
+### Confirming the upgrade on a single ingestion instance first
+
+If you have [monitoring](./monitoring.mdx) infrastructure in place, then you have two options for assessing the upgrade status:
+
+- View metrics output using Grafana dashboards that run queries against the [Horizon metrics data model](./monitoring.mdx#data-model) to check that key stats, like the ingestion and network ledger numbers, are advancing and in step.
+
+- View the Horizon web server 'status' url path on the upgraded instance:
+
+
+
+ ```bash
+ curl http://localhost:8000/
+ ```
+
+
+
+  The response will have HTTP status code 200, and the body will be a text-based JSON data structure with diagnostic info on the current Horizon software version and the ledger numbers for ingestion and the network. Refresh the URL every 5 seconds or so; you should see the ingestion and network ledger numbers advancing and in step, indicating a good connection to the network and healthy ingestion.
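+
+  As a convenience for that polling, the sketch below re-checks the root resource every 5 seconds (field names reflect the Horizon root resource; `jq` is assumed to be installed):
+
+  ```bash
+  # poll the root resource and pull out the version and ledger fields
+  watch -n 5 "curl -s http://localhost:8000/ | jq '{horizon_version, ingest_latest_ledger, core_latest_ledger, history_latest_ledger}'"
+  ```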
+
+If metrics and/or the Horizon 'status' URL responses don't indicate a healthy status based on advancing ledger ingestion, there are two steps to triage further:
+
+- A delay in Horizon achieving healthy status after an upgrade is expected and legitimate whenever `State Rebuild` or `DB Migration` was noted in the release delta during the prior [Determine the target version for upgrade](#determine-the-target-version-for-upgrade) step. Typically the notes also mention relative timeframe expectations for those to complete, which can be factored into how long to wait.
+- Check the logs from the upgraded instance to confirm what's going on. Any `State Rebuild` or `DB Migration` initiated will be mentioned. For example, a DB migration is noted in the logs with lines like the following for start and finish:
+ ```
+ 2023/09/22 18:27:01 Applying DB migrations...
+ 2023/09/22 18:27:01 successfully applied 5 Horizon migrations
+ ```
+
+### Upgrade all remaining instances
+
+At this point, you have upgraded one ingesting instance to the new Horizon version, it has automatically updated the database if required, and the instance is running with healthy status. Now install the same Horizon software version on the remaining instances, restarting each after the upgrade. For bare-metal and Docker daemon installations, how to do this for the remaining instances is likely self-explanatory; for Helm chart installations, run the Helm upgrade again, setting the image tag and restoring the original `replicaCount`s:
+
+
+
+```bash
+helm upgrade all-my-horizon-installations \
+--namespace my-horizon-namespace-on-cluster \
+--set ingest.replicaCount=1 \
+--set web.replicaCount=1 \
+--set global.image.horizon.tag=new_horizon_release_number \
+--set ingest.horizonConfig.applyMigrations=False
+```
+
+
+
+For production deployments following the hot-backup or blue/green model, this is the opportunity to confirm that the inactive deployment has taken the upgrade correctly and is stable. At that point, switch the load balancers to forward traffic to the inactive deployment, making it the active deployment. You can then take your time performing the same upgrade on the other deployment, which is now inactive.