diff --git a/docs/pages/setup/admin/trustedclusters.mdx b/docs/pages/setup/admin/trustedclusters.mdx
index f8432db18ec0f..60308292efda9 100644
--- a/docs/pages/setup/admin/trustedclusters.mdx
+++ b/docs/pages/setup/admin/trustedclusters.mdx
@@ -4,287 +4,451 @@ description: How to configure access and trust between two SSH and Kubernetes en
h1: Trusted Clusters
---
-The design of trusted clusters allows Teleport users to connect to compute infrastructure
-located behind firewalls without any open TCP ports. The real-world usage examples of this
-capability include:
+Teleport can partition compute infrastructure into multiple clusters. A cluster
+is a group of Teleport resources connected to the cluster's Auth Service, which
+acts as a certificate authority (CA) for all users and Nodes in the cluster.
+
+Trusted Clusters allow the users of one cluster, the **root cluster**, to
+seamlessly SSH into the Nodes of another cluster, the **leaf cluster**, while
+remaining authenticated with only a single Auth Service. The leaf cluster can
+be running behind a firewall with no TCP ports open to the root cluster.
+
+Uses for Trusted Clusters include:
- Managed service providers (MSP) remotely managing the infrastructure of their clients.
-- Device manufacturers remotely maintaining computing appliances deployed on-premises.
-- Large cloud software vendors manage multiple data centers using a common proxy.
+- Device manufacturers remotely maintaining computing appliances deployed on premises.
+- Large cloud software vendors managing multiple data centers using a common proxy.
-**Example of a MSP provider using trusted cluster to obtain access to clients clusters.**
+Here is an example of an MSP using Trusted Clusters to obtain access to client clusters:
![MSP Example](../../../img/trusted-clusters/TrustedClusters-MSP.svg)
-The Trusted Clusters chapter in the Admin Guide
-offers an example of a simple configuration which:
+This setup works as follows: a leaf cluster creates an outbound reverse SSH
+tunnel to the root cluster and keeps the tunnel open. When a user tries to
+connect to a Node inside the leaf cluster using the root's Proxy Service, the
+reverse tunnel is used to establish this connection.
-- Uses a static cluster join token defined in a configuration file.
-- Does not cover inter-cluster Role-Based Access Control (RBAC).
+![Tunnels](../../../img/tunnel.svg)
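+
+For example, once the clusters are connected, the user experience looks roughly
+like this (the proxy address, cluster name, and host name below are
+placeholders):
+
+```code
+# Log in to the root cluster:
+$ tsh login --proxy=rootcluster.example.com
+
+# SSH into a host inside the root cluster:
+$ tsh ssh host
+
+# SSH into a host inside the leaf cluster. The connection is established
+# through rootcluster.example.com:
+$ tsh ssh --cluster=leafcluster.example.com host
+
+# See which clusters are available:
+$ tsh clusters
+```
+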
-This guide's focus is on more in-depth coverage of trusted clusters features and will cover the following topics:
+This guide will explain how to:
-- How to add and remove trusted clusters using CLI commands.
+- Add and remove Trusted Clusters using CLI commands.
- Enable/disable trust between clusters.
-- Establish permissions mapping between clusters using Teleport roles.
+- Establish permission mapping between clusters using Teleport roles.
-
- If you have a large number of devices on different networks, such as managed IoT devices or a couple of nodes on a different network you can utilize [Teleport Node Tunneling](./adding-nodes.mdx).
-
+## Prerequisites
+
+
+
+
+- Two running Teleport clusters. For details on how to set up your clusters, see
+ one of our [Getting Started](/docs/getting-started) guides.
+
+- The `tctl` admin tool and `tsh` client tool version >= (=teleport.version=).
+
+ ```code
+ $ tctl version
+ # Teleport v(=teleport.version=) go(=teleport.golang=)
+
+ $ tsh version
+ # Teleport v(=teleport.version=) go(=teleport.golang=)
+ ```
+
+ See [Installation](/docs/installation.mdx) for details.
+
+- A Teleport Node that is joined to one of your clusters. We will refer to this
+ cluster as the **leaf cluster** throughout this guide.
+
+ See [Adding Nodes](adding-nodes.mdx) for how to launch a Teleport Node in
+ your cluster.
+
+
+
+
+- Two running Teleport clusters. For details on how to set up your clusters, see
+ our Enterprise [Getting Started](/docs/enterprise/getting-started) guide.
+
+- The `tctl` admin tool and `tsh` client tool version >= (=teleport.version=),
+ which you can download by visiting the
+ [customer portal](https://dashboard.gravitational.com/web/login).
+
+ ```code
+ $ tctl version
+ # Teleport v(=teleport.version=) go(=teleport.golang=)
+
+ $ tsh version
+ # Teleport v(=teleport.version=) go(=teleport.golang=)
+ ```
+
+- A Teleport Node that is joined to one of your clusters. We will refer to this
+ cluster as the **leaf cluster** throughout this guide.
+
+ See [Adding Nodes](adding-nodes.mdx) for how to launch a Teleport Node in
+ your cluster.
+
+
+
-## Introduction
+- A Teleport Cloud account. If you do not have one, visit the
+ [sign up page](https://goteleport.com/signup/) to begin your free trial.
-As explained in the [architecture document](../../architecture/overview.mdx#design-principles),
-Teleport can partition compute infrastructure into multiple clusters.
-A cluster is a group of SSH nodes connected to the cluster's *auth server*
-acting as a certificate authority (CA) for all users and nodes.
+- A second Teleport cluster, which will act as the leaf cluster. For details on
+  how to set up this cluster, see one of our
+  [Getting Started](/docs/getting-started) guides.
-To retrieve an SSH certificate, users must authenticate with a cluster through a
-*proxy server*. So, if users want to connect to nodes belonging to different
-clusters, they would normally have to use different `--proxy` flags for each
-cluster. This is not always convenient.
+ As an alternative, you can set up a second Teleport Cloud account.
-The concept of *leaf clusters* allows Teleport administrators to connect
-multiple clusters and establish trust between them. Trusted clusters
-allow users of one cluster, the root cluster to seamlessly SSH into the nodes of
-another cluster without having to "hop" between proxy servers. Moreover, users don't
-even need to have a direct connection to other clusters' proxy servers.
+- The `tctl` admin tool and `tsh` client tool version >= (=cloud.version=).
+ To download these tools, visit the [Downloads](/docs/cloud/downloads) page.
+
+ ```code
+ $ tctl version
+ # Teleport v(=cloud.version=) go(=teleport.golang=)
+
+ $ tsh version
+ # Teleport v(=cloud.version=) go(=teleport.golang=)
+ ```
+
+- A Teleport Node that is joined to one of your clusters. We will refer to this
+ cluster as the **leaf cluster** throughout this guide.
+
+ See [Adding Nodes](adding-nodes.mdx) for how to launch a Teleport Node in
+ your cluster.
+
+
+
(!docs/pages/includes/permission-warning.mdx!)
-The user experience looks like this:
+## Step 1/5. Prepare your environment
+
+In this guide, we will enable users of your root cluster to SSH into the
+Teleport Node in your leaf cluster as the user `visitor`. First, we will create
+the `visitor` user and a Teleport role that allows users to assume this
+username when logging in to your Node.
+
+### Add a user to your Node
+
+On your Node, run the following command to add the `visitor` user:
```code
-# Log in using the root "root" cluster credentials:
-$ tsh login --proxy=root.example.com
+$ sudo useradd --create-home visitor
+```
-# SSH into some host inside the "root" cluster:
-$ tsh ssh host
+
-# SSH into the host located in another cluster called "leaf"
-# The connection is established through root.example.com:
-$ tsh ssh --cluster=leaf host
+This command also creates a home directory for the `visitor` user, which is
+required for accessing a shell on the Node.
-# See what other clusters are available
-$ tsh clusters
+
+
+### Create a role to access your Node
+
+On your local machine, log in to your leaf cluster using your Teleport username:
+
+
+
+```code
+# Log out of all clusters to begin this guide from a clean state
+$ tsh logout
+$ tsh login --proxy=leafcluster.teleport.sh --user=myuser
```
-Leaf clusters also have their own restrictions on user access, i.e.
-*permissions mapping* takes place.
+
+
-**Once a connection has been established it's easy to switch from the "root" root cluster**
-![Teleport Cluster Page](../../../img/trusted-clusters/teleport-trusted-cluster.png)
+```code
+# Log out of all clusters to begin this guide from a clean state
+$ tsh logout
+$ tsh login --proxy=leafcluster.example.com --user=myuser
+```
-Let's take a look at how a connection is established between the "root" cluster
-and the "leaf" cluster:
+
-![Tunnels](../../../img/tunnel.svg)
+Create a file called `visitor.yaml` with the
+following content:
+
+```yaml
+kind: role
+version: v5
+metadata:
+ name: visitor
+spec:
+ allow:
+ logins:
+ - visitor
+ # In case your Node is labeled, you will need to explicitly allow access
+ # to Nodes with labels in order to SSH into your Node.
+ node_labels:
+ '*': '*'
+```
-This setup works as follows:
+Create the role:
-1. The "leaf" creates an outbound reverse SSH tunnel to "root" and keeps the tunnel open.
-2. **Accessibility only works in one direction.** The "leaf" cluster allows users from "root" to access its nodes but users in the "leaf" cluster can not access the "root" cluster.
-3. When a user tries to connect to a node inside "leaf" using the root's proxy, the reverse tunnel from step 1 is used to establish this connection shown as the green line above.
+```code
+$ tctl create visitor.yaml
+role 'visitor' has been created
+```
-
- The scheme above also works even if the "root" cluster uses multiple proxies behind a load balancer (LB) or a DNS entry with multiple values.
- This works by "leaf" establishing a tunnel to *every* proxy in "root". This requires that an LB uses a round-robin or a similar balancing algorithm. Do not use sticky load balancing algorithms (a.k.a. "session affinity" or "sticky sessions") with
- Teleport proxies.
-
+Now you have a `visitor` role on your leaf cluster that enables users to assume
+the `visitor` login on your Node.
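+
+(Optional) To double-check the definition, you can retrieve the role you just
+created:
+
+```code
+$ tctl get role/visitor
+```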
+
+### Add a login to your root cluster user
-## Join Tokens
+The `visitor` role allows users with the `visitor` login to access Nodes in the
+leaf cluster. In the next step, we will add the `visitor` login to your user so
+you can satisfy the conditions of the role and access the Node.
-Lets start with the diagram of how connection between two clusters is established:
+Make sure that you are logged in to your root cluster.
-![Tunnels](../../../img/trusted-clusters/TrustedClusters-Simple.svg)
+
-The first step in establishing a secure tunnel between two clusters is for the *leaf* cluster "leaf" to connect to the *root* cluster "root". When this
-happens for *the first time*, clusters know nothing about each other, thus a shared secret needs to exist for "root" to accept the connection from "leaf".
+```code
+$ tsh logout
+$ tsh login --proxy=rootcluster.example.com --user=myuser
+```
-This shared secret is called a "join token". There are two ways to create join tokens: to statically define them in a configuration file or to create them on the fly using `tctl` tool.
+
+
-
- It's important to note that join tokens are only used to establish the connection for the first time. The clusters will exchange certificates and won't use tokens to re-establish their connection afterward.
-
+```code
+$ tsh logout
+$ tsh login --proxy=rootcluster.teleport.sh --user=myuser
+```
-### Static Join Tokens
+
-To create a static join token, update the configuration file on "root" cluster
-to look like this:
+Create a file called `user.yaml` with your current user configuration. Replace
+`myuser` with your Teleport username:
-```yaml
-# fragment of /etc/teleport.yaml:
-auth_service:
- enabled: true
- tokens:
- # If using static tokens we recommend using tools like `pwgen -s 32`
- # to generate sufficiently random tokens of 32+ byte length
- - trusted_cluster:mk9JgEVqsgz6pSsHf4kJPAHdVDVtpuE0
+```code
+$ tctl get user/myuser > user.yaml
```
-This token can be used an unlimited number of times.
+Make the following change to `user.yaml`:
-### Security implications
+```diff
+ traits:
+ logins:
++ - visitor
+ - ubuntu
+ - root
+```
+
+Apply your changes:
+
+```code
+$ tctl create -f user.yaml
+```
+
+In the next section, we will allow users on the root cluster to access your Node
+while assuming the `visitor` role.
-Consider the security implications when deciding which token method to use. Short-lived tokens decrease the window for an attack but will require any automation which uses these tokens to refresh them regularly.
+## Step 2/5. Establish trust between clusters
-### Dynamic Join Tokens
+Teleport establishes trust between the root cluster and a leaf cluster using
+a **join token**.
-Creating a token dynamically with a CLI tool offers the advantage of applying a time-to-live (TTL) interval on it, i.e. it will be impossible to re-use such token after a specified time.
+To register your leaf cluster as a Trusted Cluster, you will first create a
+join token via the root cluster's Auth Service. You will then use the Auth Service on
+the leaf cluster to create a `trusted_cluster` resource.
-To create a token using the CLI tool, execute this command on the *auth server*
-of cluster "root":
+The `trusted_cluster` resource will include the join token, proving to the root
+cluster that the leaf cluster is the one you expected to register.
+
+### Create a join token
+
+You can create a join token using the `tctl` tool.
+
+First, log out of all clusters and log in to the root cluster.
+
+
```code
-# Generates a trusted cluster token to allow an inbound connection from a leaf cluster:
-$ tctl tokens add --type=trusted_cluster --ttl=5m
-# Example output:
-# The cluster invite token: (=presets.tokens.first=)
-# This token will expire in 5 minutes
+$ tsh logout
+$ tsh login --user=myuser --proxy=rootcluster.example.com
+> Profile URL: https://rootcluster.example.com:443
+ Logged in as: myuser
+ Cluster: rootcluster.example.com
+ Roles: access, auditor, editor
+ Logins: root
+ Kubernetes: enabled
+ Valid until: 2022-04-29 03:07:22 -0400 EDT [valid for 12h0m0s]
+ Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty
+```
+
+
+
+
+```code
+$ tsh login --user=myuser --proxy=rootcluster.teleport.sh
+> Profile URL: https://rootcluster.teleport.sh:443
+ Logged in as: myuser
+ Cluster: rootcluster.teleport.sh
+ Roles: access, auditor, editor
+ Logins: root
+ Kubernetes: enabled
+ Valid until: 2022-04-29 03:07:22 -0400 EDT [valid for 12h0m0s]
+ Extensions: permit-agent-forwarding, permit-port-forwarding, permit-pty
+```
+
+
+
+Execute the following command on your development machine:
+
+```code
+# Generates a Trusted Cluster token to allow an inbound connection from a leaf cluster:
+$ tctl tokens add --type=trusted_cluster --ttl=15m
+The cluster invite token: (=presets.tokens.first=)
+This token will expire in 15 minutes
+
+Use this token when defining a trusted cluster resource on a remote cluster.
+```
-# Generates a trusted cluster token with labels:
-# every cluster joined using this token will inherit env:prod labels.
-$ tctl tokens add --type=trusted_cluster --labels=env=prod
+This command generates a Trusted Cluster join token. The token can be used
+multiple times and expires after 15 minutes.
-# You can also list the outstanding non-expired tokens:
+Copy the join token for later use. If you need to display your join token again,
+run the following command against your root cluster:
+
+```code
$ tctl tokens ls
+Token Type Labels Expiry Time (UTC)
+---------------------------------------------------------------- --------------- -------- ---------------------------
+(=presets.tokens.first=) trusted_cluster 28 Apr 22 19:19 UTC (4m48s)
+```
-# ... or delete/revoke an invitation:
+
+
+You can revoke a join token with the following command:
+
+```code
$ tctl tokens rm (=presets.tokens.first=)
```
-Users of Teleport will recognize that this is the same way you would add any
-node to a cluster. The token created above can be used multiple times and has
-an expiration time of 5 minutes.
+
+
+
+
+ It's important to note that join tokens are only used to establish a
+ connection for the first time. Clusters will exchange certificates and
+ won't use tokens to re-establish their connection afterward.
+
+
-Now, the administrator of "leaf" must create the following resource file:
+### Define a Trusted Cluster resource
+
+On your local machine, create a file called `trusted_cluster.yaml` with the
+following content:
```yaml
-# cluster.yaml
+# trusted_cluster.yaml
kind: trusted_cluster
version: v2
metadata:
- # The trusted cluster name MUST match the 'cluster_name' setting of the
- # root cluster
- name: root
+ name: rootcluster.example.com
spec:
- # This field allows to create tunnels that are disabled, but can be enabled later.
enabled: true
- # The token expected by the "root" cluster:
- token: ba4825847f0378bcdfe18113c4998498
- # The address in 'host:port' form of the reverse tunnel listening port on the
- # "root" proxy server:
- tunnel_addr: root.example.com:3024
- # The address in 'host:port' form of the web listening port on the
- # "root" proxy server:
- web_proxy_addr: root.example.com:443
- # The role mapping allows to map user roles from one cluster to another
- # (enterprise editions of Teleport only)
+ token: (=presets.tokens.first=)
+ tunnel_addr: rootcluster.example.com:11106
+ web_proxy_addr: rootcluster.example.com:443
role_map:
- - remote: "admin" # users who have "admin" role on "root"
- local: ["auditor"] # will be assigned "auditor" role when logging into "leaf"
+ - remote: "access"
+ local: ["visitor"]
```
-Then, use `tctl create` to add the file:
+Change the fields of `trusted_cluster.yaml` as follows:
+
+#### `metadata.name`
+
+Use the name of your root cluster, e.g., `teleport.example.com` or `mytenant.teleport.sh`.
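+
+If you are not sure of your root cluster's name, you can log in to the root
+cluster and list the clusters you can reach; the entry with the `root` cluster
+type is your root cluster:
+
+```code
+$ tsh clusters
+```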
+
+#### `spec.token`
+
+This is the join token you created earlier.
+
+#### `spec.tunnel_addr`
+
+This is the reverse tunnel address of the Proxy Service in the root cluster. Run
+the following command to retrieve the value you should use:
+
+
```code
-$ tctl create cluster.yaml
+$ PROXY=rootcluster.example.com
+$ curl https://${PROXY?}/webapi/ping | jq 'if .proxy.tls_routing_enabled == true then .proxy.ssh.public_addr else .proxy.ssh.ssh_tunnel_public_addr end'
+"rootcluster.example.com:443"
```
-At this point, the users of the "root" cluster should be able to see "leaf" in the list of available clusters.
+
+
-
- If the `web_proxy_addr` endpoint of the root cluster uses a self-signed or invalid HTTPS certificate, you will get an error: *"the trusted cluster uses misconfigured HTTP/TLS certificate"*. For
- ease of testing, the Teleport daemon on "leaf" can be started with the `--insecure` CLI flag to accept self-signed certificates. Make sure to configure
- HTTPS properly and remove the insecure flag for production use.
-
+```code
+$ PROXY=rootcluster.teleport.sh
+$ curl https://${PROXY?}/webapi/ping | jq 'if .proxy.tls_routing_enabled == true then .proxy.ssh.public_addr else .proxy.ssh.ssh_tunnel_public_addr end'
+"rootcluster.teleport.sh:443"
+```
-## RBAC
+
-When a *leaf* cluster "leaf" from the diagram above establishes trust with
-the *root* cluster "root", it needs a way to configure which users from
-"root" should be allowed in and what permissions should they have. Teleport offers
-two methods of limiting access, by using role mapping of cluster labels.
+#### `spec.web_proxy_addr`
-Consider the following:
+This is the address of the Proxy Service on the root cluster. Obtain this with the
+following command:
-- Both clusters "root" and "leaf" have their own locally defined roles.
-- Every user in Teleport Enterprise is assigned a role.
-- When creating a *trusted cluster* resource, the administrator of "leaf" must define how roles from "root" map to roles on "leaf".
-- To update the role map for an existing *trusted cluster* delete and re-create the *trusted cluster* with the updated role map.
+
-### Example
+```code
+$ curl https://${PROXY?}/webapi/ping | jq .proxy.ssh.public_addr
+"teleport.example.com:443"
+```
-Let's make a few assumptions for this example:
+
+
-- The cluster "root" has two roles: *user* for regular users and *admin* for local administrators.
-- We want administrators from "root" (but not regular users!) to have restricted access to "leaf". We want to deny them access to machines
- with "environment=production" and any Government cluster labeled "customer=gov"
+```code
+$ curl https://${PROXY?}/webapi/ping | jq .proxy.ssh.public_addr
+"mytenant.teleport.sh:443"
+```
-First, we need to create a special role for root users on "leaf":
+
-```yaml
-# Save this into root-user-role.yaml on the leaf cluster and execute:
-# tctl create root-user-role.yaml
-kind: role
-version: v5
-metadata:
- name: local-admin
-spec:
- allow:
- node_labels:
- '*': '*'
- # Cluster labels control what clusters user can connect to. The wildcard ('*') means
- # any cluster. If no role in the role set is using labels and the cluster is not labeled,
- # the cluster labels check is not applied. Otherwise, cluster labels are always enforced.
- # This makes the feature backward-compatible.
- cluster_labels:
- 'env': '*'
- deny:
- # Cluster labels control what clusters user can connect to. The wildcard ('*') means
- # any cluster. By default none is set in deny rules to preserve backward compatibility
- cluster_labels:
- 'customer': 'gov'
- node_labels:
- 'environment': 'production'
-```
+#### `spec.role_map`
+
+When a leaf cluster establishes trust with a root cluster, it needs a way to
+configure access for users in the root cluster. Teleport does this through
+role mapping.
-Now, we need to establish trust between roles "root:admin" and "leaf:admin". This is
-done by creating a trusted cluster [resource](../reference/resources.mdx) on "leaf"
-which looks like this:
+When creating a `trusted_cluster` resource, the administrator of the leaf
+cluster must define how roles from the root cluster map to roles on the leaf
+cluster.
+
+`trusted_cluster.yaml` uses the following configuration:
```yaml
-# Save this as root-cluster.yaml on the auth server of "leaf" and then execute:
-# tctl create root-cluster.yaml
-kind: trusted_cluster
-version: v1
-metadata:
- name: "name-of-root-cluster"
-spec:
- enabled: true
role_map:
- - remote: admin
- # admin <-> admin works for the Open Source Edition. Enterprise users
- # have great control over RBAC.
- local: [access]
- token: "join-token-from-root"
- tunnel_addr: root.example.com:3024
- web_proxy_addr: root.example.com:3080
+ - remote: "access"
+ local: ["visitor"]
```
-What if we wanted to let *any* user from "root" to be allowed to connect to
-nodes on "leaf"? In this case, we can use a wildcard `*` in the `role_map` like this:
+Here, if a user has the `access` role on the root cluster, the leaf cluster will grant
+them the `visitor` role when they attempt to log in to a Node.
+
+If your user on the root cluster has the `access` role, leave this as it is. If
+not, change `access` to one of your user's roles.
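+
+To check which roles your user has on the root cluster, inspect your active
+profile:
+
+```code
+$ tsh status
+```
+
+The `Roles` field in the output lists the roles encoded in your current
+certificate.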
+
+
+
+### Wildcard characters
+
+In role mappings, wildcard characters match any characters in a string.
+
+For example, if we wanted to let *any* user from the root cluster connect to the
+leaf cluster, we can use a wildcard `*` in the `role_map` like this:
```yaml
role_map:
@@ -292,15 +456,21 @@ role_map:
local: [access]
```
+In the following example, we map any role on the root cluster that begins with
+`cluster-` to the `clusteradmin` role on the leaf cluster:
+
```yaml
role_map:
- remote: 'cluster-*'
local: [clusteradmin]
```
-You can even use [regular expressions](https://github.com/google/re2/wiki/Syntax) to
-map user roles from one cluster to another, you can even capture parts of the remote
-role name and use reference it to name the local role:
+### Regular expressions
+
+You can also use regular expressions to map user roles from one cluster to
+another. Our regular expression syntax enables you to use capture groups to
+reference part of a remote role name that matches a regular expression in the
+corresponding local role:
```yaml
# In this example, remote users with a remote role called 'remote-one' will be
@@ -309,244 +479,497 @@ role name and use reference it to name the local role:
local: [local-$1]
```
-**NOTE:** The regexp matching is activated only when the expression starts
-with `^` and ends with `$`
+Regular expression matching is activated only when the expression starts
+with `^` and ends with `$`.
+
+Regular expressions use Google's re2 syntax, which you can read about in the re2 [syntax guide](https://github.com/google/re2/wiki/Syntax).
+
+
+
+
+
+You can share user SSH logins, Kubernetes users/groups, and database users/names between Trusted Clusters.
+
+Suppose you have a root cluster with a role named `root` and the following
+allow rules:
+
+```yaml
+logins: ["root"]
+kubernetes_groups: ["system:masters"]
+kubernetes_users: ["alice"]
+db_users: ["postgres"]
+db_names: ["dev", "metrics"]
+```
+
+When setting up the Trusted Cluster relationship, the leaf cluster can choose
+to map this `root` cluster role to its own `admin` role:
-### Trusted Cluster UI
+```yaml
+role_map:
+- remote: "root"
+ local: ["admin"]
+```
-For customers using Teleport Enterprise, they can easily configure *leaf* nodes using the
-Teleport Proxy UI.
+The role `admin` of the leaf cluster can now be set up to use the root cluster's
+role logins, Kubernetes groups and other traits using the following variables:
-**Creating Trust from the Leaf node to the root node.**
+```yaml
+logins: ["{{internal.logins}}"]
+kubernetes_groups: ["{{internal.kubernetes_groups}}"]
+kubernetes_users: ["{{internal.kubernetes_users}}"]
+db_users: ["{{internal.db_users}}"]
+db_names: ["{{internal.db_names}}"]
+```
+
+User traits that come from the identity provider (such as OIDC claims or SAML
+attributes) are also passed to the leaf clusters and can be accessed in role
+templates using the `external` variable prefix:
+
+```yaml
+logins: ["{{internal.logins}}", "{{external.logins_from_okta}}"]
+node_labels:
+ env: "{{external.env_from_okta}}"
+```
+
+
+
+### Create your Trusted Cluster resource
+
+Log out of the root cluster.
+
+```code
+$ tsh logout
+```
+
+Log in to the leaf cluster:
+
+
+
+```code
+$ tsh login --user=myuser --proxy=leafcluster.example.com
+```
+
+
+
+
+```code
+$ tsh login --user=myuser --proxy=leafcluster.teleport.sh
+```
+
+
+
+Create the Trusted Cluster:
+
+```code
+$ tctl create trusted_cluster.yaml
+```
+
+
+
+You can easily configure Trusted Clusters using the Teleport Web UI.
+
+Here is an example of creating trust between a leaf cluster and a root cluster:
![Tunnels](../../../img/trusted-clusters/setting-up-trust.png)
+
+
+
-## Updating Trusted Cluster role map
+To update the role map for a Trusted Cluster, run the following commands on the
+leaf cluster.
-To update the role map for a trusted cluster, first, we'll need to remove the cluster by executing:
+First, remove the cluster:
```code
$ tctl rm tc/root-cluster
```
-Then following updating the role map, we can re-create the cluster by executing:
+Next, update the role map in your resource file and re-create the Trusted
+Cluster:
```code
$ tctl create root-user-updated-role.yaml
```
-### Updating cluster labels
+
+
+Log out of the leaf cluster and log back in to the root cluster. When you run
+`tsh clusters`, you should see listings for both the root cluster and the leaf
+cluster:
-Teleport gives administrators of root clusters the ability to control cluster labels.
-Allowing leaf clusters to propagate their own labels could create a problem with
-rogue clusters updating their labels to bad values.
+
-An administrator of a root cluster can control a remote/leaf cluster's
-labels using the remote cluster API without any fear of override:
+```code
+$ tsh clusters
+Cluster Name Status Cluster Type Selected
+----------------------------------------------------- ------ ------------ --------
+rootcluster.example.com online root *
+leafcluster.example.com online leaf
+```
+
+
+
+```code
+$ tsh clusters
+Cluster Name Status Cluster Type Selected
+----------------------------------------------------- ------ ------------ --------
+rootcluster.teleport.sh online root *
+leafcluster.teleport.sh online leaf
+```
+
+
+## Step 3/5. Manage access to your Trusted Cluster
+
+### Apply labels
+
+When you created a `trusted_cluster` resource on the leaf cluster, the leaf
+cluster's Auth Service sent a request to the root cluster's Proxy Service to
+validate the Trusted Cluster. After validating the request, the root cluster's
+Auth Service created a `remote_cluster` resource to represent the Trusted
+Cluster.
+
+By applying labels to the `remote_cluster` resource on the root cluster, you can
+manage access to the leaf cluster. It is not possible to manage labels on the
+leaf cluster—allowing leaf clusters to propagate their own labels could create a
+problem with rogue clusters updating their labels to unexpected values.
+
+To retrieve a `remote_cluster`, make sure you are logged in to the root cluster
+and run the following command:
```code
$ tctl get rc
-# kind: remote_cluster
-# metadata:
-# name: two
-# status:
-# connection: online
-# last_heartbeat: "2020-09-14T03:13:59.35518164Z"
-# version: v3
+kind: remote_cluster
+metadata:
+ id: 1651261581522597792
+  name: leafcluster.example.com
+status:
+ connection: online
+ last_heartbeat: "2022-04-29T19:45:35.052864534Z"
+version: v3
```
-Using `tctl` to update the labels on the remote/leaf cluster:
+Still logged in to the root cluster, use `tctl` to update the labels on the leaf
+cluster:
+
+
```code
-$ tctl update rc/two --set-labels=env=prod
+$ tctl update rc/leafcluster.teleport.sh --set-labels=env=demo
-# Cluster two has been updated
+# Cluster leafcluster.teleport.sh has been updated
```
-Using `tctl` to confirm that the updated labels have been set:
+
+
```code
-$ tctl get rc
+$ tctl update rc/leafcluster.example.com --set-labels=env=demo
+
+# Cluster leafcluster.example.com has been updated
+```
+
+
+
+### Change cluster access privileges
+
+At this point, the `tctl get rc` command may return an empty result, and
+`tsh clusters` may only display the root cluster.
+
+This is because, if a Trusted Cluster has a label, a user must have explicit
+permission to access clusters with that label. Otherwise, the Auth Service will
+not return information about that cluster when a user runs `tctl get rc` or
+`tsh clusters`.
+
+While logged in to the root cluster, create a role that allows access to your
+Trusted Cluster by adding the following content to a file called
+`demo-cluster-access.yaml`:
+
+```yaml
+kind: role
+metadata:
+ name: demo-cluster-access
+spec:
+ allow:
+ cluster_labels:
+ 'env': 'demo'
+version: v5
+```
-# kind: remote_cluster
-# metadata:
-# labels:
-# env: prod
-# name: two
-# status:
-# connection: online
-# last_heartbeat: "2020-09-14T03:13:59.35518164Z"
+Create the role:
+
+```code
+$ tctl create demo-cluster-access.yaml
+role 'demo-cluster-access' has been created
```
-## Using Trusted Clusters
+Next, retrieve your user's role definition and overwrite the `user.yaml` file
+you created earlier. Replace `myuser` with the name of your Teleport user:
+
+```code
+$ tctl get user/myuser > user.yaml
+```
+
+Make the following change to `user.yaml`:
+
+```diff
+ spec:
+ roles:
+ - editor
+ - access
++ - demo-cluster-access
+```
-Now an admin from the "root" cluster can see and access the "leaf" cluster:
+Update your user:
```code
-# Log into the root cluster:
-$ tsh --proxy=root.example.com login admin
+$ tctl create -f user.yaml
```
+When you log out of the cluster and log in again, you should see the
+`remote_cluster` you just labeled.
+
+Confirm that the updated labels have been set:
+
```code
-# See the list of available clusters
-$ tsh clusters
+$ tctl get rc
+kind: remote_cluster
+metadata:
+ id: 1651262381521336026
+ labels:
+ env: demo
+  name: leafcluster.example.com
+status:
+ connection: online
+ last_heartbeat: "2022-04-29T19:55:35.053054594Z"
+version: v3
+```
+
+## Step 4/5. Access a Node in your remote cluster
+
+With the `trusted_cluster` resource you created earlier, you can log in to the
+Node in your leaf cluster as a user of your root cluster.
+
+First, make sure that you are logged in to the root cluster:
-# Cluster Name Status
-# ------------ ------
-# root online
-# leaf online
+
+
+```code
+$ tsh logout
+$ tsh --proxy=rootcluster.example.com --user=myuser login
```
+
+
+
```code
-# See the list of machines (nodes) behind the leaf cluster:
-$ tsh ls --cluster=leaf
+$ tsh logout
+$ tsh --proxy=rootcluster.teleport.sh --user=myuser login
+```
+
+
+
+Before logging in to your Node, confirm that it is joined to your leaf cluster:
-# Node Name Node ID Address Labels
-# --------- ------------------ -------------- -----------
-# db1.leaf cf7cc5cd-935e-46f1 10.0.5.2:3022 role=db-leader
-# db2.leaf 3879d133-fe81-3212 10.0.5.3:3022 role=db-follower
+```code
+$ tsh ls --cluster=leafcluster.example.com
+
+Node Name Address Labels
+--------------- -------------- ------------------------------------
+mynode 127.0.0.1:3022 env=demo,hostname=ip-172-30-13-38
```
+SSH into your Node:
+
```code
-# SSH into any node in "leaf":
-$ tsh ssh --cluster=leaf user@db1.leaf
+$ tsh ssh --cluster=leafcluster.example.com visitor@mynode
```
+
+
+The Teleport Auth Service on the leaf cluster checks the permissions of users in
+remote clusters similarly to how it checks permissions for users in the same
+cluster: using certificate-based SSH authentication.
+
+You can think of an SSH certificate as a "permit" issued and time-stamped by a
+certificate authority. A certificate contains four important pieces of data:
+
+- List of allowed Unix logins a user can use. They are called "principals" in
+ the certificate.
+- Signature of the certificate authority that issued it (the Teleport Auth Service)
+- Metadata (certificate extensions): additional data protected by the signature
+ above. Teleport uses the metadata to store the list of user roles and SSH
+ options like "permit-agent-forwarding".
+- The expiration date.
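+
+You can see these fields in your own certificate by running `tsh status` right
+after `tsh login`:
+
+```code
+$ tsh status
+```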
+
+When a user from the root cluster attempts to access a Node in the leaf cluster,
+the leaf cluster's Auth Service authenticates the user's certificate and reads
+these pieces of data from it. It then performs the following actions:
+
+- Checks that the certificate signature matches one of its Trusted Clusters.
+- Applies role mapping (as discussed earlier) to associate a role on the leaf
+ cluster with one of the remote user's roles.
+- Checks if the local role allows the requested identity (Unix login) to have
+ access.
+- Checks that the certificate has not expired.
+
+
+
+
+
+ The leaf cluster establishes a reverse tunnel to the root cluster even if the
+ root cluster uses multiple proxies behind a load balancer (LB) or a DNS entry
+ with multiple values. In this case, the leaf cluster establishes a tunnel to
+ *every* proxy in the root cluster.
+
+ This requires that an LB use a round-robin or a similar balancing algorithm.
+ Do not use sticky load balancing algorithms (i.e., "session affinity" or
+ "sticky sessions") with Teleport Proxies.
+
+
+
- Trusted clusters work only one way. So, in the example above users from "leaf"
- cannot see or connect to the nodes in "root".
+
+ Trusted Clusters work only in one direction. In the example above, users from
+ the leaf cluster cannot see or connect to Nodes in the root cluster.
+
-### Disabling trust
+## Step 5/5. Remove trust between your clusters
+
+### Temporarily disable a Trusted Cluster
+
+You can temporarily disable the trust relationship by logging in to the leaf
+cluster and editing the `trusted_cluster` resource you created earlier.
-To temporarily disable trust between clusters, i.e. to disconnect the "leaf"
-cluster from "root", edit the YAML definition of the trusted cluster resource
-and set `enabled` to "false", then update it:
+Retrieve the Trusted Cluster resource you created earlier:
+
+
```code
-$ tctl create --force cluster.yaml
+$ tctl get trusted_cluster/rootcluster.example.com > trusted_cluster.yaml
```
-### Remove Leaf Cluster relationship from both sides
+
+
-Once established, to fully remove a trust relationship between two clusters, do
-the following:
+```code
+$ tctl get trusted_cluster/rootcluster.teleport.sh > trusted_cluster.yaml
+```
-- Remove the relationship from the leaf cluster: `tctl rm tc/root.example.com` (`tc` = trusted cluster)
-- Remove the relationship from the root cluster: `tctl rm rc/leaf.example.com` (`rc` = remote cluster)
+
-### Remove Leaf Cluster relationship from the root
+Make the following change to the resource:
-Remove the relationship from the root cluster: `tctl rm rc/leaf.example.com`.
+```diff
+ spec:
+- enabled: true
++ enabled: false
+ role_map:
+ - local:
+ - visitor
+```
-
- The `leaf.example.com` cluster will continue to try and ping the root cluster,
- but will not be able to connect. To re-establish the trusted cluster relationship,
- the trusted cluster has to be created again from the leaf cluster.
-
+Update the Trusted Cluster:
-### Remove Leaf Cluster relationship from the leaf
+```code
+$ tctl create --force trusted_cluster.yaml
+```
-Remove the relationship from the leaf cluster: `tctl rm tc/root.example.com`.
+This closes the reverse tunnel between your leaf cluster and your root cluster.
+It also deactivates the root cluster's certificate authority on the leaf
+cluster.
-## Sharing user traits between Trusted Clusters
+You can enable the trust relationship again by setting `enabled` to `true`.
-You can share user SSH logins, Kubernetes users/groups, and database users/names between Trusted Clusters.
+### Remove a leaf cluster relationship from both sides
-Suppose you have a root cluster with a role named `root` and the following
-allow rules:
+If you want to remove a trust relationship entirely, rather than temporarily
+disabling it, take the following steps.
-```yaml
-logins: ["root"]
-kubernetes_groups: ["system:masters"]
-kubernetes_users: ["alice"]
-db_users: ["postgres"]
-db_names: ["dev", "metrics"]
-```
+On the leaf cluster, run the following command. This performs the same tasks as
+setting `enabled` to `false` in a `trusted_cluster` resource, but also removes
+the Trusted Cluster resource from the Auth Service backend:
-When setting up the Trusted Cluster relationship, the leaf cluster can choose
-to map this `root` cluster role to its own `admin` role:
+
-```yaml
-role_map:
-- remote: "root"
- local: ["admin"]
+```code
+$ tctl rm trusted_cluster/rootcluster.example.com
```
-The role `admin` of the leaf cluster can now be set up to use the root cluster's
-role logins, Kubernetes groups and other traits using the following variables:
+
+
-```yaml
-logins: ["{{internal.logins}}"]
-kubernetes_groups: ["{{internal.kubernetes_groups}}"]
-kubernetes_users: ["{{internal.kubernetes_users}}"]
-db_users: ["{{internal.db_users}}"]
-db_names: ["{{internal.db_names}}"]
+```code
+$ tctl rm trusted_cluster/rootcluster.teleport.sh
```
-User traits that come from the identity provider (such as OIDC claims or SAML
-attributes) are also passed to the leaf clusters and can be access in the role
-templates using `external` variable prefix:
+
-```yaml
-logins: ["{{internal.logins}}", "{{external.logins_from_okta}}"]
-node_labels:
- env: "{{external.env_from_okta}}"
-```
+Next, run the following command on the root cluster. This command deletes the
+certificate authorities associated with the remote cluster and removes the
+`remote_cluster` resource from the root cluster's Auth Service backend.
-## How does it work?
+
-At a first glance, Trusted Clusters in combination with RBAC may seem
-complicated. However, it is based on certificate-based SSH authentication
-which is fairly easy to reason about:
+```code
+$ tctl rm rc/leafcluster.example.com
+```
-One can think of an SSH certificate as a "permit" issued and time-stamped by a
-certificate authority. A certificate contains four important pieces of data:
+
+
-- List of allowed UNIX logins a user can use. They are called "principals" in the certificate.
-- Signature of the certificate authority who issued it (the *auth* server)
-- Metadata (certificate extensions): additional data protected by the signature above. Teleport uses the metadata to store the list of user roles and SSH
- options like "permit-agent-forwarding".
-- The expiration date.
+```code
+$ tctl rm rc/leafcluster.teleport.sh
+```
-Try executing `tsh status` right after `tsh login` to see all these fields in the
-client certificate.
+
-When a user from "root" tries to connect to a node inside "leaf", her
-certificate is presented to the auth server of "leaf" and it performs the
-following checks:
+
-- Checks that the certificate signature matches one of the trusted clusters.
-- Tries to find a local role that maps to the list of principals found in the certificate.
-- Checks if the local role allows the requested identity (UNIX login) to have access.
-- Checks that the certificate has not expired.
+  You can remove the relationship from the root cluster alone by running only
+  `tctl rm rc/leafcluster.example.com`.
+
+ The leaf cluster will continue to try and ping the root cluster, but will not
+ be able to connect. To re-establish the Trusted Cluster relationship, the
+ Trusted Cluster has to be created again from the leaf cluster.
+
+
## Troubleshooting
+
+
There are three common types of problems Teleport administrators can run into when configuring
trust between two clusters:
- **HTTPS configuration**: when the root cluster uses a self-signed or invalid HTTPS certificate.
-- **Connectivity problems**: when a leaf cluster "leaf" does not show up in
- `tsh clusters` output on "root".
-- **Access problems**: when users from "root" get "access denied" error messages trying to connect to nodes on "leaf".
+- **Connectivity problems**: when a leaf cluster does not show up in the output
+ of `tsh clusters` on the root cluster.
+- **Access problems**: when users from the root cluster get "access denied" error messages
+ trying to connect to nodes on the leaf cluster.
### HTTPS configuration
-If the `web_proxy_addr` endpoint of the root cluster uses a self-signed or invalid HTTPS certificate,
-you will get an error: "the trusted cluster uses misconfigured HTTP/TLS certificate". For ease of
-testing, the teleport daemon on "leaf" can be started with the `--insecure` CLI flag to accept
-self-signed certificates. Make sure to configure HTTPS properly and remove the insecure flag for production use.
+If the `web_proxy_addr` endpoint of the root cluster uses a self-signed or
+invalid HTTPS certificate, you will get an error: "the trusted cluster uses
+misconfigured HTTP/TLS certificate". For ease of testing, the `teleport` daemon
+on the leaf cluster can be started with the `--insecure` CLI flag to accept
+self-signed certificates. Make sure to configure HTTPS properly and remove the
+insecure flag for production use.
### Connectivity problems
-To troubleshoot connectivity problems, enable verbose output for the auth
-servers on both clusters. Usually this can be done by adding `--debug` flag to
+To troubleshoot connectivity problems, enable verbose output for the Auth
+Servers on both clusters. Usually this can be done by adding `--debug` flag to
`teleport start --debug`. You can also do this by updating the configuration
-file for both auth servers:
+file for both Auth Servers:
```yaml
# Snippet from /etc/teleport.yaml
@@ -572,9 +995,29 @@ how your network security groups are configured on AWS.
Troubleshooting access denied messages can be challenging. A Teleport administrator
should check to see the following:
-- Which roles a user is assigned on "root" when they retrieve their SSH certificate via `tsh login`. You can inspect the retrieved certificate with `tsh status` command on the client-side.
-- Which roles a user is assigned on "leaf" when the role mapping takes place.
- The role mapping result is reflected in the Teleport audit log. By default,
- it is stored in `/var/lib/teleport/log` on a *auth* server of a cluster.
- Check the audit log messages on both clusters to get answers for the
- questions above.
+- Which roles a user is assigned on the root cluster when they retrieve their SSH
+ certificate via `tsh login`. You can inspect the retrieved certificate with the
+ `tsh status` command on the client-side.
+- Which roles a user is assigned on the leaf cluster when the role mapping takes
+ place. The role mapping result is reflected in the Teleport audit log. By
+ default, it is stored in `/var/lib/teleport/log` on the Auth Server of a
+ cluster. Check the audit log messages on both clusters to get answers for the
+ questions above.
+
+
+Troubleshooting "access denied" messages can be challenging. A Teleport administrator
+should check to see the following:
+
+- Which roles a user is assigned on the root cluster when they retrieve their SSH
+ certificate via `tsh login`. You can inspect the retrieved certificate with the
+ `tsh status` command on the client-side.
+- Which roles a user is assigned on the leaf cluster when the role mapping takes
+ place. The role mapping result is reflected in the Teleport audit log, which
+ you can access via the Teleport Web UI.
+
+
+
+## Further reading
+- Read more about how Trusted Clusters fit into Teleport's overall architecture:
+ [Architecture Introduction](../../architecture/overview.mdx).
+
diff --git a/docs/pages/setup/reference/cli.mdx b/docs/pages/setup/reference/cli.mdx
index dd460ea557b7d..4b2d8a184993b 100644
--- a/docs/pages/setup/reference/cli.mdx
+++ b/docs/pages/setup/reference/cli.mdx
@@ -510,7 +510,7 @@ $ tsh login [] []
#### Arguments
-- `` - the name of the cluster, see [Trusted Cluster](../../setup/admin/trustedclusters.mdx#introduction) for more information.
+- `` - the name of the cluster, see [Trusted Cluster](../../setup/admin/trustedclusters.mdx) for more information.
#### Flags