
vault agent panics with more than 1 redirect 302 #5813

Closed · JulienBalestra opened this issue Nov 17, 2018 · 1 comment

@JulienBalestra (Contributor) commented:

Describe the bug

When a request goes through more than one 302 redirect, the following function from "github.com/hashicorp/vault/api" returns a nil *Secret together with a nil error:

func (c *Logical) Write(path string, data map[string]interface{}) (*Secret, error)

The vault write ... command therefore does nothing, yet still exits with status 0.

Vault agent panics.
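Below is a minimal sketch (not the agent's actual code; the role and JWT values are placeholders) of how any caller that only checks the returned error ends up dereferencing a nil *api.Secret:

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	// Point the client at the redirecting front-end described below (see "To Reproduce").
	cfg := api.DefaultConfig()
	cfg.Address = "http://127.0.0.1:8201"

	client, err := api.NewClient(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// When the request ends on a second 302, Write returns (nil, nil),
	// so checking err alone is not enough.
	secret, err := client.Logical().Write("auth/pupernetes/login", map[string]interface{}{
		"role": "vaultd",
		"jwt":  "<service-account-token>",
	})
	if err != nil {
		log.Fatal(err)
	}

	// secret is nil here, so this line panics with a nil pointer dereference,
	// just like the agent does in auth.go.
	fmt.Println(secret.Auth.ClientToken)
}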

To Reproduce
Steps to reproduce the behavior:

  1. Set up a Vault server behind at least two redirects. For example, I have a Vault server listening on http://127.0.0.1:8200 and run the following snippet in front of it:
package main

import (
	"net/http"
)

func edgeHandler(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Location", "http://127.0.0.1:8201/v1/auth/pupernetes/login2")
	w.WriteHeader(302)
}

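// handlerToRealVault issues the second 302, this time to the real Vault
// server on 127.0.0.1:8200.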
func handlerToRealVault(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Location", "http://127.0.0.1:8200/v1/auth/pupernetes/login")
	w.WriteHeader(302)
}

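// handlerRenew redirects token renewal calls to the real Vault server as well.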
func handlerRenew(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Location", "http://127.0.0.1:8200/v1/auth/token/renew-self")
	w.WriteHeader(302)
}

func main() {
	http.HandleFunc("/v1/auth/pupernetes/login", edgeHandler)
	http.HandleFunc("/v1/auth/pupernetes/login2", handlerToRealVault)
	http.HandleFunc("/v1/auth/token/renew-self", handlerRenew)
	http.ListenAndServe("127.0.0.1:8201", nil)
}
  2. Configure Vault's Kubernetes auth method properly, for example:
kubectl --context p8s apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: vaultd
  namespace: default
EOF

kubectl  --context p8s apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vaultd-tokenreview-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: vaultd
  namespace: default
EOF

token_id=$(kubectl get serviceaccount vaultd --namespace=default -o 'jsonpath={.secrets[0].name}')
token=$(kubectl get secret $token_id --namespace=default -o 'jsonpath={.data.token}' | base64 --decode -)
ca=$(kubectl get secret $token_id --namespace=default -o 'jsonpath={.data.ca\.crt}' | base64 --decode -)

vault write sys/policy/vaultd policy='path "pki/sign/*" { capabilities = ["update"] }
path "pki/issue/*" { capabilities = ["update"] }'

vault secrets enable -path=pki pki
vault write pki/root/generate/internal common_name=dev.com ttl=87600h
vault write pki/roles/vaultd allow_any_name=true ttl=87600h policy=vaultd
vault auth enable --path pupernetes kubernetes

vault write auth/pupernetes/config \
  token_reviewer_jwt="$token"  \
  kubernetes_host="https://127.0.0.1:6443" \
  kubernetes_ca_cert="$ca"

vault write auth/pupernetes/role/vaultd \
    name=vaultd \
    bound_service_account_names=default,metrics-server \
    bound_service_account_namespaces=default,kube-system \
    policies=vaultd \
    period=600s

echo $(kubectl get secret $(kubectl get serviceaccount default --namespace=default -o 'jsonpath={.secrets[0].name}') --namespace=default -o 'jsonpath={.data.token}' | base64 --decode -) > service-account-token
  3. Run vault write auth/pupernetes/login role=vaultd jwt=@service-account-token -address=http://127.0.0.1:8201. The command reports success and exits 0 even though no auth data comes back:
Success! Data written to: auth/pupernetes/login
$ echo $?
0

This is also an issue for Vault agent, which panics.

Tested on master:

==> Vault agent configuration:
2018-11-17T18:14:54.096+0100 [INFO]  sink.file: creating file sink

                     Cgo: enabled
2018-11-17T18:14:54.096+0100 [INFO]  sink.file: file sink configured: path=/tmp/vault-token
               Log Level: info
2018-11-17T18:14:54.096+0100 [INFO]  sink.server: starting sink server
                 Version: Vault v1.0.0-beta2

==> Vault server started! Log data will stream in below:

2018-11-17T18:14:54.096+0100 [INFO]  auth.handler: starting auth handler
2018-11-17T18:14:54.096+0100 [INFO]  auth.handler: authenticating
2018-11-17T18:14:54.097+0100 [INFO]  auth.handler: auth handler stopped
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x50 pc=0x832a7f]

goroutine 37 [running]:
github.com/hashicorp/vault/command/agent/auth.(*AuthHandler).Run(0xc0004ce280, 0x2ed7980, 0xc0004ce200, 0x2ed7d80, 0xc0006ad590)
	/home/jb/go/src/github.com/hashicorp/vault/command/agent/auth/auth.go:179 +0xecf
created by github.com/hashicorp/vault/command.(*AgentCommand).Run
	/home/jb/go/src/github.com/hashicorp/vault/command/agent.go:336 +0x14ab

In our production environment, running Vault agent v0.10.4:

URL: PUT https://***/v1/auth/token/renew-self
Code: 500. Errors:
* 1 error occurred:
* failed to persist lease entry: 1 error occurred:
* error closing connection: Post https://www.googleapis.com/upload/storage/v1/b/***/o?alt=json&projection=full&uploadTy
2018-11-09T10:43:15.566Z [INFO ] auth.handler: authenticating
2018-11-09T10:43:17.990Z [ERROR] auth.handler: error authenticating: error="Error making API request.
URL: PUT https://***/v1/auth/gcp/login
Code: 500. Errors:
* internal error" backoff=2.526192433
2018-11-09T10:43:20.517Z [INFO ] auth.handler: authenticating
2018-11-09T10:43:20.613Z [INFO ] auth.handler: auth handler stopped
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x50 pc=0x7f251c]
goroutine 27 [running]:
github.com/hashicorp/vault/command/agent/auth.(*AuthHandler).Run(0xc420687e00, 0x277b940, 0xc420687dc0, 0x277bc40, 0xc4202d6500)
        /gopath/src/github.com/hashicorp/vault/command/agent/auth/auth.go:158 +0xeac
created by github.com/hashicorp/vault/command.(*AgentCommand).Run
        /gopath/src/github.com/hashicorp/vault/command/agent.go:326 +0x1457
vault-agent.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
vault-agent.service: Failed with result 'exit-code'.

Expected behavior
In the case of Vault agent, I would like it to handle the failure gracefully instead of panicking.

The code here panics because secret is nil.
A patch could be:

diff --git a/command/agent/auth/auth.go b/command/agent/auth/auth.go
index 8288e9c18..a706468e5 100644
--- a/command/agent/auth/auth.go
+++ b/command/agent/auth/auth.go
@@ -176,7 +176,7 @@ func (ah *AuthHandler) Run(ctx context.Context, am AuthMethod) {
                        }
 
                default:
-                       if secret.Auth == nil {
+                       if secret == nil || secret.Auth == nil {
                                ah.logger.Error("authentication returned nil auth info", "backoff", backoff.Seconds())
                                backoffOrQuit(ctx, backoff)
                                continue
diff --git a/command/agent/auth/kubernetes/kubernetes.go b/command/agent/auth/kubernetes/kubernetes.go
index 89f1f1053..bed3a21a5 100644

But this only handles the Vault agent case; all clients should be impacted because of this code in the API client.
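As a stopgap, here is a minimal sketch of the defensive check any consumer of Logical().Write can apply today (the login helper and its error message are hypothetical, not part of the API):

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/vault/api"
)

// login is a hypothetical helper showing the nil checks a caller needs
// until the client itself reports this situation as an error.
func login(client *api.Client, path string, data map[string]interface{}) (*api.Secret, error) {
	secret, err := client.Logical().Write(path, data)
	if err != nil {
		return nil, err
	}
	// Guard against the (nil, nil) return seen with chained 302 redirects.
	if secret == nil || secret.Auth == nil {
		return nil, fmt.Errorf("no auth info returned from %q (request may have ended on a redirect)", path)
	}
	return secret, nil
}

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}
	secret, err := login(client, "auth/pupernetes/login", map[string]interface{}{"role": "vaultd"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(secret.Auth.ClientToken)
}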

Environment:

  • Vault Server Version (retrieve with vault status): 0.10.4 and 0.11.0
  • Vault CLI Version (retrieve with vault version): 0.11.1
  • Server Operating System/Architecture: Ubuntu 18.04 LTS

Additional context

This happened while we were hitting issue #5419, reported by my co-workers.
When all the Vault servers behind a Google load balancer report a 500, the load balancer then sends traffic to all of them, and the HA configuration with Consul produces an endless chain of redirects.

@chrishoffman (Contributor) commented:

Fixed in #5814.