An internal investigation led to the discovery of an issue in the Consul 1.4 ACL system that, given a very specific set of conditions and events, can allow an unauthorized client to gain the privileges of a single other arbitrary ACL token within secondary datacenters. This affects Consul versions 1.4.0, 1.4.1, and 1.4.2.
Summary
You should take action if you use Consul 1.4.0 or later with ACL token replication (multi-datacenter). Because these ACL changes were introduced in version 1.4.0, we recommend upgrading to the 1.4.3 point release regardless of whether you use the ACL replication features.
Remediation steps with an upgrade to Consul 1.4.3:
Upgrade to version 1.4.3 of Consul in all datacenters. Only the Consul servers need to be upgraded; clients are unaffected.
If the conditions outlined in the Background section had been met, the suspect token will automatically be invalidated and removed. Users unaware of this vulnerability will not be affected by this token removal, as they would not have known of the token's existence.
Remediation steps without an upgrade - disabling token replication:
Unless you are using local tokens, the simplest way to remediate the problem without an upgrade is to disable token replication. If you rely on local tokens this is not an option, as token replication is a prerequisite for local tokens.
Set the acl.token_replication configuration to false for Consul servers in the secondary datacenters.
Perform a rolling restart of the Consul servers within those secondary datacenters.
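The two steps above can be sketched as a server configuration fragment for a secondary datacenter. This is an illustrative sketch, not a drop-in file: the datacenter name is hypothetical, and note that the Consul agent configuration reference spells this key `enable_token_replication` inside the `acl` stanza.

```hcl
# Hypothetical server config fragment for a secondary datacenter.
datacenter = "dc2"
server     = true

acl {
  enabled                  = true
  # Disable ACL token replication to remediate without upgrading.
  enable_token_replication = false
}
```

After updating the configuration on each server, perform a rolling restart of the servers in that secondary datacenter so the change takes effect without losing quorum.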
Background
Privilege escalation to a specific token can occur when all of the following conditions are met:
ACL token replication must be enabled in secondary datacenters.
Replication tokens must be assigned to the Consul servers in secondary datacenters.
The assigned replication token must have read permission on ACLs but not write permission.
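For illustration, the vulnerable combination in the last condition corresponds to a replication token whose policy grants only read access to ACLs. This is a hypothetical sketch using Consul 1.4's new ACL rule syntax, shown only to make the condition concrete, not as a recommended configuration:

```hcl
# replication-policy.hcl (hypothetical name)
# Grants ACL read but not write, which satisfies the
# vulnerable condition described above.
acl = "read"
```

Such a policy would typically be created with `consul acl policy create` and attached to the replication token with `consul acl token create`.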
When those conditions are met, the ACL token replication process in a secondary datacenter can end up injecting a token with a secret of <hidden> into the local state store. Other clients could then pass <hidden> as the token in various API calls, instead of the token's original secret, and gain access to whatever that token is allowed to access.
Note that the token will have the secret ID of the literal string <hidden>; this is not obfuscation. The string <hidden> is used to redact token values in some internal endpoints for security reasons, and in this case a bug caused that redacted placeholder to be stored as the token's actual secret.
Detection
It is possible to detect if a secondary datacenter is affected using the following Consul CLI command:
consul acl token read -token "<hidden>" -self
This assumes that you are running the consul command from within the secondary datacenter on a Consul node with the HTTP API enabled on localhost:8500. If those assumptions do not hold, other CLI options may be used to point the command at the correct HTTP endpoint and datacenter; these options are documented in the Consul CLI documentation.
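When the assumptions above do not hold, the detection command can be pointed at a remote server explicitly. The address and datacenter name below are placeholders, and the flags are the standard Consul CLI HTTP API options; adapt them to your environment:

```shell
# Sketch: run the detection check against a server in a
# secondary datacenter named "dc2" (hypothetical values).
consul acl token read \
  -token "<hidden>" \
  -self \
  -http-addr=http://consul.dc2.example.com:8500 \
  -datacenter=dc2
```

If the command returns token details, the datacenter is affected; an error indicating the token was not found suggests it is not.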
Mitigations
Consul already prevents multiple tokens from having the same secret so this can only grant access to a single token, though it is not deterministic which token that will be. In the worst case it could be a privileged token capable of unrestricted access in the secondary datacenter.
The <hidden> secret is still invalid within the primary datacenter and therefore no new policies or globally valid tokens may be created (or any operation within the primary datacenter). The replication process updates tokens in secondary DCs in batches and will error out the whole batch if more than one token has an error inserting. So the bad token can only be injected when only a single token needs replicating from the primary datacenter.
Changes to token replication and the ACL system were made in 1.4.0. This vulnerability affects versions 1.4.0, 1.4.1, and 1.4.2. Version 1.4.3 fixes this and provides a mitigation for users coming from affected versions. If a token exists with the <hidden> value in these versions, after upgrading and when a Consul 1.4.3 server in the secondary datacenter has gained leadership, the offending token will be removed from the data store. This happens prior to replication being started and will allow the token to be fetched again and inserted locally with the correct secret ID.
mkeeler changed the title from TBD to "Consul CVE-2019-8336: Potential Privilege Escalation in ACL Replication" on Mar 5, 2019.