
Vault panic when vault-agent tries to renew token #10715

Closed
pepsi1k opened this issue Jan 18, 2021 · 4 comments · Fixed by hashicorp/consul-template#1447 or #10756
Labels
bug Used to indicate a potential bug

Comments


pepsi1k commented Jan 18, 2021

Describe the bug
When vault-agent tries to renew the Vault token, it panics with a nil pointer dereference.

To Reproduce
Steps to reproduce the behavior:

  1. Configure k8s-auth method
    # enable k8s auth method:
    vault auth enable kubernetes
    
    # configure vault to talk to kubernetes
    vault write auth/kubernetes/config \
      token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
      kubernetes_host=https://${KUBERNETES_PORT_443_TCP_ADDR}:443 \
      kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    
    # create a named role:
    vault write auth/kubernetes/role/demo \
      bound_service_account_names=vault-cli \
      bound_service_account_namespaces=api \
      policies=default \
      ttl=30s
  2. Try to inject the Vault token via pod annotations
    annotations:
      vault.hashicorp.com/agent-pre-populate: 'false'
      vault.hashicorp.com/agent-inject: 'true'
      vault.hashicorp.com/agent-inject-token: 'true'
      vault.hashicorp.com/role: 'demo'
  3. See the error
2021-01-18T14:40:18.497Z [INFO]  sink.file: creating file sink
2021-01-18T14:40:18.497Z [INFO]  sink.file: file sink configured: path=/home/vault/.vault-token mode=-rw-r-----
2021-01-18T14:40:18.500Z [INFO]  template.server: starting template server
2021/01/18 14:40:18.500455 [INFO] (runner) creating new runner (dry: false, once: false)
2021/01/18 14:40:18.501015 [INFO] (runner) creating watcher
2021-01-18T14:40:18.501Z [INFO]  auth.handler: starting auth handler
2021-01-18T14:40:18.501Z [INFO]  auth.handler: authenticating
2021-01-18T14:40:18.501Z [INFO]  sink.server: starting sink server
2021-01-18T14:40:18.511Z [ERROR] auth.handler: error authenticating: error="Put "http://vault.vault.svc:8200/v1/auth/kubernetes/login": dial tcp 172.20.56.237:8200: connect: connection refused" backoff=1.069381047
2021-01-18T14:40:19.580Z [INFO]  auth.handler: authenticating
2021-01-18T14:40:19.586Z [ERROR] auth.handler: error authenticating: error="Put "http://vault.vault.svc:8200/v1/auth/kubernetes/login": dial tcp 172.20.56.237:8200: connect: connection refused" backoff=1.256373445
2021-01-18T14:40:20.842Z [INFO]  auth.handler: authenticating
2021-01-18T14:40:20.942Z [ERROR] auth.handler: authentication returned nil auth info: backoff=1.442667843
2021-01-18T14:40:22.385Z [INFO]  auth.handler: authenticating
2021-01-18T14:40:22.443Z [ERROR] auth.handler: authentication returned nil auth info: backoff=2.662632468
2021-01-18T14:40:25.105Z [INFO]  auth.handler: authenticating
2021-01-18T14:40:25.663Z [INFO]  auth.handler: authentication successful, sending token to sinks
2021-01-18T14:40:25.663Z [INFO]  auth.handler: starting renewal process
2021-01-18T14:40:25.664Z [INFO]  sink.file: token written: path=/home/vault/.vault-token
2021-01-18T14:40:25.664Z [INFO]  template.server: template server received new token
2021/01/18 14:40:25.664415 [INFO] (runner) stopping
2021/01/18 14:40:25.664544 [INFO] (runner) creating new runner (dry: false, once: false)
2021/01/18 14:40:25.664767 [INFO] (runner) creating watcher
2021/01/18 14:40:25.664817 [INFO] (runner) starting
2021/01/18 14:40:25.732946 [WARN] vault.read(auth/token/lookup-self): failed to check if auth/token/lookup-self is KVv2, assume not: Error making API request.

URL: GET http://vault.vault.svc:8200/v1/sys/internal/ui/mounts/auth/token/lookup-self
Code: 403. Errors:

* preflight capability check returned 403, please ensure client's policies grant access to path "auth/token/lookup-self/"
2021/01/18 14:40:25.774889 [INFO] (runner) rendered "(dynamic)" => "/vault/secrets/token"
2021-01-18T14:40:47.583Z [INFO]  auth.handler: lifetime watcher done channel triggered
2021-01-18T14:40:47.583Z [INFO]  auth.handler: authenticating
2021-01-18T14:40:47.609Z [INFO]  auth.handler: authentication successful, sending token to sinks
2021-01-18T14:40:47.609Z [INFO]  auth.handler: starting renewal process
2021-01-18T14:40:47.609Z [INFO]  sink.file: token written: path=/home/vault/.vault-token
2021-01-18T14:40:47.609Z [INFO]  template.server: template server received new token
2021/01/18 14:40:47.609854 [INFO] (runner) stopping
2021/01/18 14:40:47.610000 [INFO] (runner) creating new runner (dry: false, once: false)
2021/01/18 14:40:47.610201 [INFO] (runner) creating watcher
2021/01/18 14:40:47.610277 [INFO] (runner) starting
2021/01/18 14:40:47.611388 [INFO] (runner) received finish
2021-01-18T14:40:47.641Z [INFO]  auth.handler: renewed auth token
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x30 pc=0x1910825]

goroutine 79 [running]:
github.com/hashicorp/vault/vendor/github.com/hashicorp/consul-template/dependency.isKVv2(0xc000460460, 0xc000058600, 0x16, 0x0, 0x0, 0xc000a3f900, 0x0, 0x0)
	/gopath/src/github.com/hashicorp/vault/vendor/github.com/hashicorp/consul-template/dependency/vault_common.go:305 +0x365
github.com/hashicorp/vault/vendor/github.com/hashicorp/consul-template/dependency.(*VaultReadQuery).readSecret(0xc000b0aeb0, 0xc000b6e1e0, 0xc000b0afa0, 0x1, 0x203000, 0x203000)
	/gopath/src/github.com/hashicorp/vault/vendor/github.com/hashicorp/consul-template/dependency/vault_read.go:142 +0x3fa
github.com/hashicorp/vault/vendor/github.com/hashicorp/consul-template/dependency.(*VaultReadQuery).fetchSecret(0xc000b0aeb0, 0xc000b6e1e0, 0xc000b0af50, 0xc000b0af50, 0x0)
	/gopath/src/github.com/hashicorp/vault/vendor/github.com/hashicorp/consul-template/dependency/vault_read.go:96 +0x8b
github.com/hashicorp/vault/vendor/github.com/hashicorp/consul-template/dependency.(*VaultReadQuery).Fetch(0xc000b0aeb0, 0xc000b6e1e0, 0xc000b0af50, 0x1, 0x1, 0xc0005d45f0, 0x1, 0x1)
	/gopath/src/github.com/hashicorp/vault/vendor/github.com/hashicorp/consul-template/dependency/vault_read.go:79 +0xd8
github.com/hashicorp/vault/vendor/github.com/hashicorp/consul-template/watch.(*View).fetch(0xc0000d3b00, 0xc000b0da40, 0xc000b0daa0, 0xc000b6a960)
	/gopath/src/github.com/hashicorp/vault/vendor/github.com/hashicorp/consul-template/watch/view.go:204 +0x168
created by github.com/hashicorp/vault/vendor/github.com/hashicorp/consul-template/watch.(*View).poll
	/gopath/src/github.com/hashicorp/vault/vendor/github.com/hashicorp/consul-template/watch/view.go:117 +0x44d

Expected behavior
Vault-agent renews the Vault token without panicking.

Environment:

  • Kubernetes: 1.18
  • hashicorp/vault chart: 0.9.0 (app version 1.6.1)
  • Istio: 1.8

Vault server configuration:

listener "tcp" {
  tls_disable = 1
  address = "[::]:8200"
  cluster_address = "[::]:8201"
}

storage "postgresql" {
  connection_url = "xxxxx"
  ha_enabled = "true"
  ha_table = "vault_ha_locks"
  table = "vault_kv_store"
}

service_registration "kubernetes" {}

seal "awskms" {
  region = "eu-central-1"
  kms_key_id = "xxxxx"
}
HridoyRoy (Contributor) commented Jan 20, 2021

Hi @pepsi1k , thanks for filing this issue!

I believe this happens because /v1/sys/internal/ui/mounts/ returns nil, which causes a panic later on in the consul-template code. I'll schedule a fix for this in 1.7, and look into backporting to 1.6.
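The failure mode described above can be sketched in Go: the KVv2 preflight check asks Vault for mount metadata, and if that call yields a nil response without an error, dereferencing a field on it panics, exactly as in the stack trace. This is a minimal, hypothetical reconstruction; `lookupMount`, `mountInfo`, and the function names are illustrative stand-ins, not the actual consul-template code:

```go
package main

import (
	"errors"
	"fmt"
)

// mountInfo is a stand-in for the response from
// /v1/sys/internal/ui/mounts/<path>. A client can plausibly
// return (nil, nil) when the endpoint yields no usable data.
type mountInfo struct {
	Options map[string]string
}

// lookupMount simulates the problematic case: no error, but no data either.
func lookupMount(path string) (*mountInfo, error) {
	return nil, nil
}

// isKVv2Unsafe mirrors the pre-fix behavior: it dereferences the
// response without a nil check, which panics on (nil, nil).
func isKVv2Unsafe(path string) (bool, error) {
	m, err := lookupMount(path)
	if err != nil {
		return false, err
	}
	return m.Options["version"] == "2", nil // panics when m is nil
}

// isKVv2Safe adds the guard conceptually needed: a nil response is
// reported as an error instead of being dereferenced.
func isKVv2Safe(path string) (bool, error) {
	m, err := lookupMount(path)
	if err != nil {
		return false, err
	}
	if m == nil {
		return false, errors.New("no mount info returned; assuming not KVv2")
	}
	return m.Options["version"] == "2", nil
}

func main() {
	// The unsafe variant panics here; recover to demonstrate the crash.
	func() {
		defer func() {
			if r := recover(); r != nil {
				fmt.Println("unsafe variant panicked:", r)
			}
		}()
		isKVv2Unsafe("auth/token/lookup-self")
	}()

	ok, err := isKVv2Safe("auth/token/lookup-self")
	fmt.Println("safe variant:", ok, err)
}
```

The unsafe variant reproduces the same runtime error as the report ("invalid memory address or nil pointer dereference"); the guard turns the crash into an ordinary error the caller can handle.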

Edit: Because this fix incorporates a large dependency upgrade, we will not be backporting it to 1.6; it will ship with 1.7.

HridoyRoy added the bug label Jan 20, 2021

eugene-dounar commented Jan 21, 2021

This seems to happen when Vault is configured with awskms and EC2 instance metadata (IMDSv1) is disabled. Re-enabling IMDSv1 can be used as a workaround.


pepsi1k commented Jan 22, 2021

@eugene-dounar, curl http://169.254.169.254/latest/meta-data/ works fine

HridoyRoy (Contributor) commented:

Hi @pepsi1k , I have submitted the PR above as a fix to this issue. Meanwhile, it may also be helpful to create a duplicate issue for the consul-template repo and link the issues together.

Thanks so much!
