aws ssm put-parameter performs an HTTP GET request when the value param is a URL #2507
Comments
Going by my reading of the code, it appears that when something looks like a URI on the CLI, we attempt to fetch it, assuming that the URI points to the real value rather than being the value itself. I'm not sure how to handle this, because it certainly seems undesirable to be unable to pass a "raw" URI on the CLI, but I don't see any way for you to say "but no really". Flagging this to get more discussion.
Hi @dstufft, I was wondering if this behavior is a feature or a bug, but it definitely seems like a bug. What happens if the URL responds with another URL? Thanks for your reply.
If a URI responds with another URI, we won't recursively fetch them; we'll just use the content of the initial URL. As it currently stands it's not a bug, since it is expected behavior (in the code, not for end users) that it does this. This was explicitly added, so it wasn't a mistake or anything. If anything, I can see this as a feature request to add a mechanism to escape the URI, allowing "raw" parameters that don't fetch the URI.
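To illustrate the behavior being discussed, here is a hypothetical invocation (the parameter name and URL are made up for this sketch):

```sh
# CLI v1 sees a value starting with "https://" and dereferences it:
# what gets stored is the body of the HTTP response,
# not the URL string itself.
aws ssm put-parameter --name "/example/endpoint" --type String \
  --value "https://example.com/endpoint"
```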
Hi @dstufft, thanks for your response. I agree with you: we can treat this as a feature request to escape the URL. We should also check the feature, because it is not working at all; for example:
I've been caught out by this today. I found this similar issue: #1475 and have successfully used the workaround mentioned in the comments.
Is there any update on this issue?
@armitagemderivitec the update is that this behavior is a feature, not a bug.
I just want to say that this is the type of strange issue, declared to be a feature-not-a-bug, that will lead to a security exploit in the future. Unexpected behavior that executes an HTTP request is very dangerous and exploitable for the unwary developer. It is completely counter-intuitive. I was able to use the workaround for a simple script, but only for testing purposes; this behavior has made sure that additional security reviews are necessary for AWS API integration.
Hi everyone, just wanted to give an update on this. As a few people have mentioned, this is a documented feature in the CLI (http://docs.aws.amazon.com/cli/latest/userguide/cli-using-param.html#cli-using-param-file-remote), but we do understand that this can result in unintuitive behavior and would like to address it. What would help is if someone has a concrete proposal on how to disable this functionality. The only constraint is that we have to preserve backwards compatibility, so we can't change the default behavior, but we can certainly make an additive change that lets people disable this.
We absolutely need a parameter to specify a "raw" value such as a URL. I'm having issues on my team when setting new variables via the CLI for our HTTP endpoints; it fetches the URL instead of using it as a string.
@jamesiri A few ideas:
...or anything else that works; not picky really 😄
I think the simplest thing would be to add a switch, e.g. --isLiteral. Any ideas on a time horizon for this?
Just kill this 'feature', seriously. Even apart from security concerns, it certainly violates the principle of least surprise. If anyone has an actual use case for it, then give this functionality a completely different option. (My "concrete proposal" is therefore simply to treat the supplied value as a literal.)
This is beyond surprising. If you escape URLs, then what about IP addresses? I agree with toby5box: if someone needs a feature like this, separate it into another option.
I have also experienced this issue, and I agree with toby5box as well. This is something that needs to be reconsidered as a separate option.
Just in case someone else hits this, I found a workaround (a silly one). I was able to put the URL parameter using the AWS console, under EC2 | SYSTEMS MANAGER SHARED RESOURCES | Parameter Store. Still, I don't know how to do the same directly from the aws-cli.
Depending on how you manage your parameters, this Ansible module might be one possible workaround for this problem.
I found a workaround by using --cli-input-json:
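A minimal sketch of what such a call might look like (the parameter name and value here are illustrative, not from the original comment):

```sh
# Passing the request as JSON bypasses the CLI's URL handling for
# the --value option, so the URL string is stored verbatim.
aws ssm put-parameter --cli-input-json '{
  "Name": "/example/webhook-url",
  "Value": "https://hooks.example.com/T000/B000",
  "Type": "String"
}'
```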
I can see why you might think this is a "feature," but it makes almost no sense for SSM parameters. As a backwards-compatibility solution, correct the "feature" and release CLI 2.x with the good behavior, plus a flag or other option to re-enable URL resolution.
Good Morning! We're closing this issue here on GitHub as part of our migration to UserVoice for feature requests involving the AWS CLI. This will let us get the most important features to you, by making it easier to search for and show support for the features you care the most about, without diluting the conversation with bug reports.

As a quick UserVoice primer (if not already familiar): after an idea is posted, people can vote on the ideas, and the product team will be responding directly to the most popular suggestions.

We've imported existing feature requests from GitHub - search for this issue there! And don't worry, this issue will still exist on GitHub for posterity's sake. As it's a text-only import of the original post into UserVoice, we'll still be keeping in mind the comments and discussion that already exist here on the GitHub issue. GitHub will remain the channel for reporting bugs.

Once again, this issue can now be found by searching for the title on: https://aws.uservoice.com/forums/598381-aws-command-line-interface

-The AWS SDKs & Tools Team

This entry can specifically be found on UserVoice at: https://aws.uservoice.com/forums/598381-aws-command-line-interface/suggestions/33243994-aws-ssm-put-parameter-performs-an-http-get-request
Are you sure this is a "feature request" and not a bug report? Some of the users who've been bitten by this would likely classify it as the latter. Presumably bug reports get fixed, not voted on.
Due to backward-compatibility concerns, this feature cannot be universally disabled. We've marked this as a feature request to track the work needed to provide a customer opt-in flag which disables this. Sorry for the confusing messaging; this was done as part of an automatic migration of features we want to track.
The ticket is long since closed but this ridiculous (and insecure) bug still remains. Here's the fix:

```ini
# ~/.aws/config
[default]
cli_follow_urlparam = false
```
Hey everyone, just a quick update. While we can't remove this functionality in the current major version of the CLI, based on everyone's feedback, we plan on removing this for CLI v2: #3590
This is the dumbest thing I've ever seen. If you wanted to fetch the contents of a URL and use it as the value, you'd put a curl call into your CLI call:
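For example, a hypothetical invocation of that explicit approach (the parameter name and URL are illustrative):

```sh
# Fetch the remote content explicitly with curl; nothing implicit
# happens, and a plain URL value would be stored as-is.
aws ssm put-parameter --name "/example/param" --type String \
  --value "$(curl -s https://example.com/value.txt)"
```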
Why anyone at AWS would think this was a desirable default behaviour is completely beyond my grasp of human understanding. |
I found myself here today, trying to store a Slack webhook URI as a plaintext secret. The solution was to use --cli-input-json. Change this:

```js
execa.shellSync(
  `aws secretsmanager create-secret --kms-key-id "${keyId}" --name "${service}/${stage}/${name}" --secret-string "${value}"`,
);
```

To this:

```js
execa.shellSync(
  `aws secretsmanager create-secret --cli-input-json '{
    "KmsKeyId": "${keyId}",
    "Name": "${service}/${stage}/${name}",
    "SecretString": "${value}"
  }'`,
);
```
How is this a feature? Parameter Store is supposed to be a place to store secrets and environment variables. Some of them contain a URL pointing to somewhere! It is nonsense that the AWS CLI tries to execute that as a request without explicitly being told to. Being unable to use the CLI for this is TERRIBLY inconvenient. Please add a fix ASAP!
@dankolesnikov See the new-ish config setting: https://docs.aws.amazon.com/cli/latest/topic/config-vars.html#general-options
Thank you @lorengordon !! <3
I have added cli_follow_urlparam = false to my config. I want to add that I also feel that having this as the default behavior is wrong. So, how do you store URLs in the parameter store? The cli_follow_urlparam setting isn't working on my AWS CLI version.
@luckyvalentine, update your version; see two comments up.
Just found this today. A workaround is using the option mentioned above, or adding it from the web console in the Parameter Store.
What an ugly feature; who came up with this idea at AWS... We had to do this:
Check this one out, it might be useful for you: https://github.com/Bharathkumarraju/uploadsecrets_to_paramstore
You need to have this at the beginning of the script if you need to use a URL value right now:
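A minimal sketch of such a preamble, assuming `aws configure set` is used to write the option into ~/.aws/config before any put-parameter calls (CLI v1 only):

```sh
# Write cli_follow_urlparam = false for the default profile so
# subsequent put-parameter calls store URL values verbatim.
aws configure set cli_follow_urlparam false
```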
Bizarre feature.
This feature is still incredibly frustrating because it's counterintuitive. Developers have to end up on this issue to figure out how to use the command in a very plausible way. The idea that someone wants to store their params at a specific URL seems more like an implementation detail for specific use cases of certain users, not a concern of the CLI. Those users could have just curl'ed the value themselves...
They removed this "feature" in v2 of the CLI! 🎉🎉🎉 https://docs.aws.amazon.com/cli/latest/userguide/cliv2-migration.html#cliv2-migration-paramfile
This adds a pipeline file that Concourse uses to run exports. It has 3 jobs: the first keeps this pipeline up to date in Concourse, the second runs on a schedule at 10am to perform the live export, and the last is a test export that runs against the draft survey and can be used for testing.

The export job is started at approximately 10am (it is based on when Concourse first checks the timer, which is on a 2-minute interval). It starts by sending a Slack notification to say the job is starting, then runs the export task. If the export task fails or has an error, another Slack notification is sent to inform the team (it notifies a group). If the export is successful, it outputs a file `output/slack-message.txt`, which is consumed by the success Slack notification to include contextual information.

For simplicity I've hard-coded the test job with Google folder IDs and my own email, since I didn't feel these needed to be secret (we only use these folders for test data). This references a number of secrets that are already configured in Concourse under the govuk-ask-export namespace.

This does a few WTFs that I'll try to explain (some might be because this is my first Concourse file):

- Using the google-drive branch: this is temporary and will be switched to master once merged. This is done to keep Concourse up to date while this is under review.
- The 12pm time: Concourse will only allow running the job when the time is between the times specified. I've set this to 12pm to allow a 2-hour window where we can retry the job manually if, for instance, the job failed due to an intermittent error.
- Only specifying a path for the Slack notification: because of a truly bizarre [AWS SSM behaviour](aws/aws-cli#2507), gds-cli can't seem to add full URL secrets to Concourse and won't be able to until they update their AWS library. A path is a workaround.
- The strange `<!subteam^S0145ESTQE8>` in the Slack notification is the syntax to reference a group. This references the govuk-ask-support group on GDS Slack. I couldn't work out a nice way to stop duplicating it (I didn't really want to add another secret / env var).
Workaround (for me!): insert a leading space before 'http' and enclose the value in quotes so the CLI 'sees' the space. Some libraries (e.g. Python requests) will ignore the space.
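A sketch of this trick (the parameter name and URL are illustrative):

```sh
# The leading space inside the quotes means the value no longer
# starts with "https://", so CLI v1 stores it verbatim -
# including the space, which consumers must tolerate or strip.
aws ssm put-parameter --name "/example/webhook" --type String \
  --value " https://hooks.slack.com/services/T000/B000"
```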
Memo to AWS: PLEASE document this in the AWS CLI documentation, even for the most recent version, in case people do not have CLI v2.
To build on the already-mentioned workaround:
Original issue description: when you try to put a parameter into the SSM Parameter Store with a URL as the value, the aws-cli performs an HTTP GET request to that value.