Unable to login using local AWS profile with role_arn / source_profile and MFA #5767

Open · gwilym opened this issue Nov 13, 2018 · 22 comments
Labels: auth/aws, bug

gwilym commented Nov 13, 2018

Describe the bug

Attempting vault login with this particular IAM profile setup in ~/.aws/credentials fails with the following error:

Error authenticating: failed to retrieve credentials from credential chain: NoCredentialProviders: no valid providers in chain. Deprecated.
	For verbose messaging see aws.Config.CredentialsChainVerboseErrors

The same setup works OK with the official aws CLI (AWS_PROFILE=admin aws sts get-caller-identity works), as well as with basic usage of the Go SDK.

Example of ~/.aws/credentials:

[main]
aws_access_key_id = ?
aws_secret_access_key = ?
[admin]
mfa_serial = arn:aws:iam::MAINACCOUNTID:mfa/USERNAME
role_arn = arn:aws:iam::SUBACCOUNTID:role/admin
source_profile = main

Note: the account IDs above may be the same account, though for this case it likely doesn't matter because Vault fails during the credential-load stage. There are likely two separate issues here: one with credential loading, and one with enabling an MFA token provider for the AWS SDK.

To Reproduce
Steps to reproduce the behavior:

  1. Establish an AWS account setup that uses MFA and role-switching
  2. Run vault server -dev with credentials to utilise aws auth
  3. Run vault auth enable aws
  4. Run vault write auth/aws/config/client iam_server_id_header_value=vault.example.com
  5. Run vault write auth/aws/role/admin auth_type=iam 'bound_iam_principal_arn=arn:aws:sts::SUBACCOUNTID:assumed-role/admin/*' max_ttl=8h
  6. Run AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=admin vault login -method=aws header_value=vault.example.com role=admin
  7. See error

Expected behavior

  1. Vault should prompt for an MFA code.
  2. Vault should use credentials of the assumed role to generate signed payloads for the login action.
  3. In theory, the login should succeed and you should be able to access Vault, but I haven't reached this point yet.

Environment:

  • Vault Server Version (retrieve with vault status): 1.0.0-beta2
  • Vault CLI Version (retrieve with vault version): Vault v1.0.0-beta2 ('8f61c4953620801477ad40f9d75063659acb5d84')
  • Server Operating System/Architecture: darwin/amd64

Vault server configuration file(s):

None, I've been using -dev.

Additional context

Apologies up front if I'm missing anything fundamental: I am brand new to Vault. If anything looks off here let me know and I will try to clarify.

When I modify Vault's awsutil package to enable verbose errors like so ...

-	creds := credentials.NewChainCredentials(providers)
+	creds := credentials.NewCredentials(&credentials.ChainProvider{
+		Providers:     providers,
+		VerboseErrors: true,
+	})

... I get the following extra info:

Error authenticating: failed to retrieve credentials from credential chain: NoCredentialProviders: no valid providers in chain
caused by: EnvAccessKeyNotFound: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY not found in environment
SharedCredsAccessKey: shared credentials admin in CREDSFILENAME did not contain aws_access_key_id
caused by: error when getting key of section 'admin': key 'aws_access_key_id' not exists
EC2RoleRequestError: no EC2 instance role found
caused by: RequestError: send request failed
caused by: Get http://169.254.169.254/latest/meta-data/iam/security-credentials/: dial tcp 169.254.169.254:80: connect: host is down

The EC2 errors are expected since I'm running this locally. However, the admin profile shouldn't need access keys within it, because of source_profile. If I copy the access keys into the admin profile, that prevents the role switch from happening and Vault attempts to log in with the original credentials instead (which is not expected).

Not sure if this helps, but here's an example of a simple, working-as-expected Go AWS SDK usage:

package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials/stscreds"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sts"
)

func main() {
	// NewSessionWithOptions honours AWS_SDK_LOAD_CONFIG / AWS_PROFILE, and the
	// token provider below prompts on stdin for an MFA code when the profile
	// requires one (i.e. when mfa_serial is set).
	sess, err := session.NewSessionWithOptions(session.Options{
		Config:                  *aws.NewConfig(),
		AssumeRoleTokenProvider: stscreds.StdinTokenProvider,
	})

	if err != nil {
		log.Fatalf("session error: %v", err)
	}

	svc := sts.New(sess)
	result, err := svc.GetCallerIdentity(&sts.GetCallerIdentityInput{})
	if err != nil {
		log.Fatalf("GetCallerIdentity error: %v", err)
	}

	log.Printf("GetCallerIdentity result: %#v", result)
}
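
For reference, the above needs the shared config enabled, same as the failing vault invocation (a sketch of how I run it, assuming the file is saved as main.go):

AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=admin go run main.go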
gwilym (Author) commented Nov 15, 2018

I haven't had time to learn the Vault codebase yet, but I was able to whip up a workaround for this that others might find usable. It's based on the signing code present in the Vault CLI.

https://gist.github.com/gwilym/1db446f67a4d62db50d1139082e5b719

The output of this tool should be usable as part of a vault write, like below (assuming you build it as vault-aws-login):

$ AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=aws-profile-name vault-aws-login -server vault.example.com
# capture the output above using your preferred method, remembering that it may prompt for an MFA code on stdin, then use it below, like ...
$ vault write auth/aws/login role=vault-profile-name $output
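
For the capture itself, something like this should work (a sketch, assuming bash and that the MFA prompt is written to the terminal rather than stdout):

$ output=$(AWS_SDK_LOAD_CONFIG=1 AWS_PROFILE=aws-profile-name vault-aws-login -server vault.example.com)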

Vince-Chenal commented

Do you have any update? I still have the exact same problem.

@michelvocks added the auth/aws and bug labels Nov 12, 2019
kevinpgrant commented Jan 3, 2020

I've read this post a few times while trying to solve a different issue, but then something finally clicked: you have used source_profile in ~/.aws/credentials, but according to the docs it can only be used in the CLI config file ~/.aws/config:

"Note that configuration variables for using IAM roles can only be in the AWS CLI config file."

Example configuration using source_profile:

# In ~/.aws/credentials:
[development]
aws_access_key_id=foo
aws_secret_access_key=bar

# In ~/.aws/config
[profile crossaccount]
role_arn=arn:aws:iam:...
source_profile=development

see https://docs.aws.amazon.com/cli/latest/topic/config-vars.html
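
With that layout, the role-switching profile can be verified independently of Vault, for example:

AWS_PROFILE=crossaccount aws sts get-caller-identity

which should print the assumed-role identity rather than the development user's.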

@spangenberg spangenberg self-assigned this Jan 22, 2020
spangenberg (Contributor) commented
As @kevinpgrant correctly pointed out, ~/.aws/credentials should not contain source_profile and/or role_arn; they belong in ~/.aws/config.
Nevertheless, I verified that the bug still exists and needs to be addressed.
@gwilym thanks a lot for the effort you already put in to investigate and come up with a workaround.

spangenberg (Contributor) commented
We haven't heard back regarding this issue in over 29 days. To try and keep our GitHub issues current, we'll be closing this issue in approximately seven days if we do not hear back regarding this issue. Please let us know if you can still reproduce this issue, and if there is any more information you could share, otherwise we'll be closing this issue.

dalvizu commented Feb 22, 2020

Yes, this is still an issue; the last comment was from you, confirming it is still an issue?

spangenberg (Contributor) commented
Sorry, didn't mean to put the comment there. Too many open tabs, please ignore it.

cablespaghetti commented Feb 22, 2020

I know this ticket is about MFA, but am I correct in thinking that it also currently isn't possible to use AWS CLI profiles which assume roles? Currently I'm working around this with the aws sts assume-role command and exporting various environment variables from its output.

Currently doing this:

AWS_ROLE=<role-arn-here>
CREDENTIALS=`aws sts assume-role --role-arn "$AWS_ROLE" --role-session-name vaultSession --duration-seconds 3600 --output=json`
export AWS_ACCESS_KEY_ID=`echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId'`
export AWS_SECRET_ACCESS_KEY=`echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey'`
export AWS_SESSION_TOKEN=`echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken'`
export AWS_EXPIRATION=`echo ${CREDENTIALS} | jq -r '.Credentials.Expiration'`
vault login -method=aws
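
For the MFA case this ticket is actually about, the same workaround should extend with the --serial-number and --token-code flags of aws sts assume-role (an untested sketch; the MFA serial below is a placeholder for your device ARN):

MFA_SERIAL=arn:aws:iam::ACCOUNTID:mfa/USERNAME
read -p "MFA code: " TOKEN_CODE
CREDENTIALS=`aws sts assume-role --role-arn "$AWS_ROLE" --role-session-name vaultSession --serial-number "$MFA_SERIAL" --token-code "$TOKEN_CODE" --duration-seconds 3600 --output=json`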

gw0 commented Apr 22, 2020

@cablespaghetti I came to exactly the same conclusion and the same workaround.

@spangenberg removed their assignment Apr 22, 2020
cablespaghetti commented
@gw0 I actually stopped doing this as it was a horrible user experience. I now have a dockerised bash script running as a Kubernetes cron job which syncs up the members of an IAM Group with Vault, so they can log in as their normal user.

cyrus-mc (Contributor) commented
Still an issue for me. Would be nice to have this fixed.

midacts commented Aug 26, 2020

I too am hitting this when logging into a central AWS account role that is used to assume roles into other AWS accounts.

dzmitry-kankalovich commented Oct 24, 2020

I actually managed to make it (somewhat) work in docker-compose, but I have a particular problem which drives me crazy.

First of all, I need to mention that I use this approach to simulate ECS agent behavior for the containers I have in docker-compose: https://aws.amazon.com/blogs/compute/a-guide-to-locally-testing-containers-with-amazon-ecs-local-endpoints-and-docker-compose/

This essentially creates a credentials provider server at 169.254.170.2 which the AWS SDK, the AWS CLI, and supposedly Vault should fall back to for credentials if none are found in env vars / config files.
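
(For completeness: the SDKs only consult that endpoint when the container credentials environment variable is set, which the linked guide wires up per service, roughly like this; the exact value depends on your docker-compose setup:

export AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/creds
)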

So inside the container I run this script:

if [[ ${IS_LOCAL_ENV} =~ (true) ]]; then
    export VAULT_ROLE="<my role ARN>";
    
    rm -rf ~/.aws
    mkdir ~/.aws

    echo "Running in local env. Assuming IAM Role ${VAULT_ROLE}";
    aws sts assume-role \
    --role-arn ${VAULT_ROLE} \
    --role-session-name docker-compose-local > ~/creds.json;
    
    echo "Setting up AWS credentials..."

    echo "[default]" > ~/.aws/credentials
    echo "aws_access_key_id     = $(cat ~/creds.json | jq -r '.Credentials.AccessKeyId')" >> ~/.aws/credentials
    echo "aws_secret_access_key = $(cat ~/creds.json | jq -r '.Credentials.SecretAccessKey')" >> ~/.aws/credentials
    echo "aws_session_token     = $(cat ~/creds.json | jq -r '.Credentials.SessionToken')" >> ~/.aws/credentials

    echo "[profile assumed]" > ~/.aws/config
    echo "role_arn = $(cat ~/creds.json | jq -r '.AssumedRoleUser.Arn')" >> ~/.aws/config
    echo "source_profile = default" >> ~/.aws/config

    export AWS_PROFILE=assumed
else
    echo "Running in AWS. Falling back to the associated IAM role.";
fi

echo "Using AWS Profile: ${AWS_PROFILE}"

echo "Running Vault login..."

vault login -method=aws -path=somepath -namespace=somens header_value=someaddress role=read-only

Now the problem:
for some reason, this last vault login statement fails, telling me my IAM user is not authorized, while it should be using the assumed role ARN; it completely ignores what I've put in AWS_PROFILE.

HOWEVER, if I afterwards run exactly the same vault login command in the same container outside of the script, or just put it in another script and execute that, it correctly picks up the AWS_PROFILE value, resolves to the assumed role, and finally issues a login token.

I just cannot understand what causes vault login to ignore the AWS setup in the script above, yet makes it work in the cases mentioned.

UPDATE: found the root of the issue: AWS_PROFILE should name a profile from ~/.aws/credentials, NOT from ~/.aws/config. It's a super confusing design, considering that ~/.aws/config literally contains the word profile and a reference to the source of credentials, but it is what it is. Looking at the original question, this could be exactly the same problem. To be fair, this is a problem of the AWS SDK / CLI's confusing design, not Vault's.
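
For illustration, the shape that ended up working is AWS_PROFILE pointing at a named profile inside ~/.aws/credentials itself (a sketch with placeholder values):

# In ~/.aws/credentials
[assumed]
aws_access_key_id     = <AccessKeyId from assume-role>
aws_secret_access_key = <SecretAccessKey from assume-role>
aws_session_token     = <SessionToken from assume-role>

export AWS_PROFILE=assumed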

erks commented Apr 13, 2021

The relevant issue on the aws-sdk-go side: aws/aws-sdk-go#3660

jlestrada (Contributor) commented
This is still an issue when attempting to use named AWS profiles with the Vault CLI (v1.9.3). I am not entirely certain where the issue resides, whether with AWS or with Vault. It fails regardless of MFA configuration. As suggested above, setting the AWS environment variables is the workaround. The following script can be leveraged as inspiration: it grabs the credentials from the assume-role process, sets the appropriate environment variables, logs into Vault, reads database credentials, unsets the AWS environment variables, and lastly logs into a psql database.

#!/bin/bash

unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_EXPIRATION PGHOST PGDATABASE PGPASSWORD PGUSER

AWS_ROLE=$1
VAULT_ROLE=$2
DB_MOUNT_ROLE_PATH=$3 
export PGHOST=$4
export PGDATABASE=$5

echo "Attempting AWS Assume Role for $AWS_ROLE"
CREDENTIALS=`aws sts assume-role --role-arn "$AWS_ROLE" --role-session-name vaultSession --duration-seconds 3600 --output=json`
if [ ! -z "$CREDENTIALS" ]
then
    export AWS_ACCESS_KEY_ID=`echo ${CREDENTIALS} | jq -r '.Credentials.AccessKeyId'`
    export AWS_SECRET_ACCESS_KEY=`echo ${CREDENTIALS} | jq -r '.Credentials.SecretAccessKey'`
    export AWS_SESSION_TOKEN=`echo ${CREDENTIALS} | jq -r '.Credentials.SessionToken'`
    export AWS_EXPIRATION=`echo ${CREDENTIALS} | jq -r '.Credentials.Expiration'`
    
    echo "AWS Assume Role Credentials Successful"
else
    echo "AWS Assume Role Credentials Failed"
    exit 1
fi

if vault login -no-print -method=aws role=$VAULT_ROLE
then
    echo "Vault Login Successful"
    DB_CREDENTIALS=`vault read -format=json $DB_MOUNT_ROLE_PATH`
    export PGPASSWORD=`echo ${DB_CREDENTIALS} | jq -r '.data.password'`
    export PGUSER=`echo ${DB_CREDENTIALS} | jq -r '.data.username'`

    unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_EXPIRATION
else
    echo "Vault Login Failed"
    unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN AWS_EXPIRATION
    exit 1
fi

psql

Execution example

./db_vault_access.sh <assume_role_arn> <vault_binded_role> <mount_path_to_database_role_creds> <db_endpoint> <db_name>

Hope this helps!

briantist (Contributor) commented
This is still an issue. I would like to use my profiles in the AWS config file, and they use source_profile with credential_process.

It would be especially great if Vault Agent could work with this... but I would probably file a new issue for that if it were fixed in the CLI.
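
For concreteness, the kind of profile layout I mean (a sketch; the profile names and helper command are placeholders, not the actual setup):

# In ~/.aws/config
[profile base]
credential_process = /usr/local/bin/my-credential-helper

[profile target]
role_arn = arn:aws:iam::123456789012:role/example
source_profile = base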

Westixy commented Sep 7, 2022

Any news on this issue? I have the same problem but in my case, it is for Terraform.

NB: I was able to implement a workaround (inspired by previous examples) but it looks pretty bad IMHO: https://gist.github.com/Westixy/bc70ee782fe759094bf5c1c65c248f6c

sud0nick commented Oct 4, 2022

This is affecting me as well. I'm surprised to see this issue is 4 years old and we still can't set a source_profile or role_arn in the provider block. Thanks to @Westixy for the workaround as it unblocked me.

kaplanben commented
same issue here

alpozcan commented
Here's my version of @Westixy's script. It doesn't write credentials onto the filesystem.

It also assumes that the AWS backend is configured to require the auth header set to the URL of Vault; this is the 3rd parameter, which is also the URL. You'll want to remove lines #12 and #29 if this is not applicable.

caleb-devops commented Jul 21, 2023

I use this workaround to enable the Vault Terraform provider to have a consistent config in environments where an EC2 instance or IRSA role can be used. This method assumes the selected role and stores the AWS credentials in environment variables. To use it, add the following function to your ~/.bashrc or ~/.zshrc:

# Usage: vault-aws-auth arn:aws:iam::123456789012:role/MyRole

vault-aws-auth() {
  AWS_ROLE_ARN="$1"

  unset AWS_ACCESS_KEY_ID
  unset AWS_SECRET_ACCESS_KEY
  unset AWS_SESSION_TOKEN

  export $(printf "AWS_ACCESS_KEY_ID=%s AWS_SECRET_ACCESS_KEY=%s AWS_SESSION_TOKEN=%s" \
    $(aws sts assume-role \
    --role-arn $AWS_ROLE_ARN \
    --role-session-name vault \
    --query "Credentials.[AccessKeyId,SecretAccessKey,SessionToken]" \
    --output text))
}

After running vault-aws-auth, you can authenticate to Vault using the aws login method:

vault login -method=aws header_value=${VAULT_ADDR}

For the Vault Terraform provider, auth_login_aws does not work due to hashicorp/terraform-provider-vault#1754. Instead, use the auth_login config as follows:

provider "vault" {
  address = var.vault_addr
  auth_login {
    path   = "auth/aws/login"
    method = "aws"
    parameters = {
      role         = var.vault_role
      header_value = var.vault_addr
    }
  }
}
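
With the credentials exported by vault-aws-auth still in the environment, a Terraform run then looks something like this (the role ARN is a placeholder):

vault-aws-auth arn:aws:iam::123456789012:role/MyRole
terraform plan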

gcavalcante8808 commented Jul 19, 2024

I can confirm that the local aws profile is not being used at all.

I'm doing an AWS IAM Roles Anywhere setup which relies on credential_process and aws_signing_helper. I can confirm the profile itself works, because I've used the following code to test the default profile on the Vault pod:


package main

import (
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	if len(os.Args) < 2 {
		fmt.Println("you must specify a bucket")
		return
	}

	sess := session.Must(session.NewSession())

	svc := s3.New(sess)

	i := 0
	err := svc.ListObjectsPages(&s3.ListObjectsInput{
		Bucket: &os.Args[1],
	}, func(p *s3.ListObjectsOutput, last bool) (shouldContinue bool) {
		fmt.Println("Page,", i)
		i++

		for _, obj := range p.Contents {
			fmt.Println("Object:", *obj.Key)
		}
		return true
	})
	if err != nil {
		fmt.Println("failed to list objects", err)
		return
	}
}

I've tested with Python too, and in both cases it worked using the default provider chain.
