Describe the bug
In case of more than one redirect (302), the client function in "github.com/hashicorp/vault/api" returns a tuple of nil.
The command line vault write ... does nothing and returns zero. The Vault agent panics.

To Reproduce
Steps to reproduce the behavior: a Vault server listening on http://127.0.0.1:8200 and the following snippet:

$ vault write auth/pupernetes/login role=vaultd jwt=@service-account-token -address=http://127.0.0.1:8201
Success! Data written to: auth/pupernetes/login
$ echo $?
0
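The same behavior can be seen through the Go client directly. The sketch below assumes the login path and parameters from the snippet above; the jwt value is a placeholder. With more than one redirect, both return values come back nil, so the caller sees neither data nor an error:

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/vault/api"
)

func main() {
	// Same address as in the reproduction above.
	cfg := api.DefaultConfig()
	cfg.Address = "http://127.0.0.1:8201"

	client, err := api.NewClient(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Roughly what the CLI does for `vault write auth/pupernetes/login ...`.
	// The jwt value is a placeholder for the service account token.
	secret, err := client.Logical().Write("auth/pupernetes/login", map[string]interface{}{
		"role": "vaultd",
		"jwt":  "<service-account-token>",
	})

	// With more than one 302 this prints "secret=<nil> err=<nil>":
	// the request silently fails.
	fmt.Printf("secret=%v err=%v\n", secret, err)
}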
It's also an issue with vault agent because it panics:
Tested on master:
==> Vault agent configuration:

                     Cgo: enabled
               Log Level: info
                 Version: Vault v1.0.0-beta2

==> Vault server started! Log data will stream in below:

2018-11-17T18:14:54.096+0100 [INFO] sink.file: creating file sink
2018-11-17T18:14:54.096+0100 [INFO] sink.file: file sink configured: path=/tmp/vault-token
2018-11-17T18:14:54.096+0100 [INFO] sink.server: starting sink server
2018-11-17T18:14:54.096+0100 [INFO] auth.handler: starting auth handler
2018-11-17T18:14:54.096+0100 [INFO] auth.handler: authenticating
2018-11-17T18:14:54.097+0100 [INFO] auth.handler: auth handler stopped
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x50 pc=0x832a7f]
goroutine 37 [running]:
github.com/hashicorp/vault/command/agent/auth.(*AuthHandler).Run(0xc0004ce280, 0x2ed7980, 0xc0004ce200, 0x2ed7d80, 0xc0006ad590)
/home/jb/go/src/github.com/hashicorp/vault/command/agent/auth/auth.go:179 +0xecf
created by github.com/hashicorp/vault/command.(*AgentCommand).Run
/home/jb/go/src/github.com/hashicorp/vault/command/agent.go:336 +0x14ab
In our production environment, with Vault agent v0.10.4:
URL: PUT https://***/v1/auth/token/renew-self
Code: 500. Errors:
* 1 error occurred:
* failed to persist lease entry: 1 error occurred:
* error closing connection: Post https://www.googleapis.com/upload/storage/v1/b/***/o?alt=json&projection=full&uploadTy
2018-11-09T10:43:15.566Z [INFO ] auth.handler: authenticating
2018-11-09T10:43:17.990Z [ERROR] auth.handler: error authenticating: error="Error making API request.
URL: PUT https://***/v1/auth/gcp/login
Code: 500. Errors:
* internal error" backoff=2.526192433
2018-11-09T10:43:20.517Z [INFO ] auth.handler: authenticating
2018-11-09T10:43:20.613Z [INFO ] auth.handler: auth handler stopped
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x50 pc=0x7f251c]
goroutine 27 [running]:
github.com/hashicorp/vault/command/agent/auth.(*AuthHandler).Run(0xc420687e00, 0x277b940, 0xc420687dc0, 0x277bc40, 0xc4202d6500)
/gopath/src/github.com/hashicorp/vault/command/agent/auth/auth.go:158 +0xeac
created by github.com/hashicorp/vault/command.(*AgentCommand).Run
/gopath/src/github.com/hashicorp/vault/command/agent.go:326 +0x1457
vault-agent.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
vault-agent.service: Failed with result 'exit-code'.
Expected behavior
In the case of the Vault agent, I would like it to handle the failure.
The code here panics because of the nil value of secret.
A patch could be:
diff --git a/command/agent/auth/auth.go b/command/agent/auth/auth.go
index 8288e9c18..a706468e5 100644
--- a/command/agent/auth/auth.go
+++ b/command/agent/auth/auth.go
@@ -176,7 +176,7 @@ func (ah *AuthHandler) Run(ctx context.Context, am AuthMethod) {
 			}
 
 		default:
-			if secret.Auth == nil {
+			if secret == nil || secret.Auth == nil {
 				ah.logger.Error("authentication returned nil auth info", "backoff", backoff.Seconds())
 				backoffOrQuit(ctx, backoff)
 				continue
diff --git a/command/agent/auth/kubernetes/kubernetes.go b/command/agent/auth/kubernetes/kubernetes.go
index 89f1f1053..bed3a21a5 100644
But this only handles the case of the Vault agent, whereas all clients should be impacted because of this code.
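Until the client itself is fixed, callers of the api package can at least surface the failure with a defensive check along these lines. This is only a sketch: the path and error message are illustrative, not taken from the Vault code base, and it assumes the "errors" and "github.com/hashicorp/vault/api" imports:

// Sketch of a caller-side guard for api package users; the path and
// error message are illustrative.
func login(client *api.Client, data map[string]interface{}) (*api.Secret, error) {
	secret, err := client.Logical().Write("auth/pupernetes/login", data)
	if err != nil {
		return nil, err
	}
	if secret == nil || secret.Auth == nil {
		// A redirect loop currently yields (nil, nil), so without this
		// check the failure goes completely unnoticed.
		return nil, errors.New("login returned no secret and no auth info")
	}
	return secret, nil
}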
Environment:
Vault Server Version (retrieve with vault status): 0.10.4 and 0.11.0
Vault CLI Version (retrieve with vault version): 0.11.1
Server Operating System/Architecture: Ubuntu 18.04 LTS
Additional context
This happened during issue #5419, reported by my co-workers.
When all Vault servers behind a Google load balancer report a 500, the load balancer will then send the traffic to all of them.
The HA configuration with Consul creates an infinite number of redirects.
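For context, Go's standard net/http client turns such a loop into an explicit error once a redirect cap is hit. The snippet below is a generic illustration of that behavior, not the Vault api client's own redirect handling, and the URL is just the reproduction address from above:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Generic illustration: cap redirects so that a loop surfaces as an
	// error instead of being silently swallowed.
	client := &http.Client{
		CheckRedirect: func(req *http.Request, via []*http.Request) error {
			if len(via) >= 10 {
				return fmt.Errorf("stopped after %d redirects", len(via))
			}
			return nil
		},
	}

	// Any endpoint that keeps redirecting would do here.
	_, err := client.Get("http://127.0.0.1:8200/v1/auth/pupernetes/login")
	fmt.Println(err) // with a redirect loop: "... stopped after 10 redirects"
}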