AMI tags not shared between accounts #353

Closed
wking opened this issue Oct 5, 2018 · 17 comments
Labels: lifecycle/rotten (denotes an issue or PR that has aged beyond stale and will be auto-closed)

Comments

@wking (Member) commented Oct 5, 2018

Spun off from the discussion in #314, which landed the tags. Here are the two accounts: rh-dev and the one we use for CI:

$ AWS_PROFILE=rh-dev aws sts get-caller-identity --query Account --output text
531415883065
$ AWS_PROFILE=ci aws sts get-caller-identity --query Account --output text
460538899914

rh-dev sees alpha tags on AMIs, while CI does not:

$ AWS_PROFILE=rh-dev AWS_DEFAULT_REGION=us-east-1 aws ec2 describe-images --filters "Name=tag:rhcos_tag,Values=alpha" --query 'sort_by(Images, &CreationDate)[-1].Name' --output text
rhcos_dev_4.0.6562-hvm
$ AWS_PROFILE=ci AWS_DEFAULT_REGION=us-east-1 aws ec2 describe-images --filters "Name=tag:rhcos_tag,Values=alpha" --query 'sort_by(Images, &CreationDate)[-1].Name' --output text
None

Get the actual AMI ID:

$ AWS_PROFILE=rh-dev AWS_DEFAULT_REGION=us-east-1 aws ec2 describe-images --filters "Name=tag:rhcos_tag,Values=alpha" --query 'sort_by(Images, &CreationDate)[-1].ImageId' --output text
ami-0a08f82608c65f0bc

The AMI is public:

$ AWS_PROFILE=rh-dev AWS_DEFAULT_REGION=us-east-1 aws ec2 describe-image-attribute --image-id ami-0a08f82608c65f0bc --attribute launchPermission --output text
ami-0a08f82608c65f0bc
LAUNCHPERMISSIONS  all

But I can't describe it from the CI account:

$ AWS_PROFILE=ci AWS_DEFAULT_REGION=us-east-1 aws ec2 describe-image-attribute --image-id ami-0a08f82608c65f0bc --attribute launchPermission --output text

An error occurred (AuthFailure) when calling the DescribeImageAttribute operation: Not authorized for image:ami-0a08f82608c65f0bc

The AMI is there for listing (switching to the rhcos_dev_4.0.6554-hvm image for this check and the diff below):

$ AWS_PROFILE=ci AWS_DEFAULT_REGION=us-east-1 aws ec2 describe-images --image-ids ami-08b37b9b9700f5a0d --query Images[].Name --output text
rhcos_dev_4.0.6554-hvm

Comparing the describe-images output between accounts:

$ alias get='AWS_DEFAULT_REGION=us-east-1 aws ec2 describe-images --image-ids ami-08b37b9b9700f5a0d --query Images[0] --output json'
$ diff -u <(AWS_PROFILE=rh-dev get) <(AWS_PROFILE=ci get)
--- /dev/fd/63  2018-10-05 11:18:12.323494210 -0700
+++ /dev/fd/62  2018-10-05 11:18:12.323494210 -0700
@@ -1,24 +1,6 @@
 {
     "VirtualizationType": "hvm",
     "Description": "Red Hat CoreOS 4.0.6554 (eca924619ba62e615665c6dfda32a12593edb331277aed113ae9217d44746ffd)",
-    "Tags": [
-        {
-            "Value": "alpha",
-            "Key": "rhcos_tag"
-        },
-        {
-            "Value": "rhcos_dev_4.0.6554-hvm",
-            "Key": "Name"
-        },
-        {
-            "Value": "4.0.6554",
-            "Key": "ostree_version"
-        },
-        {
-            "Value": "eca924619ba62e615665c6dfda32a12593edb331277aed113ae9217d44746ffd",
-            "Key": "ostree_commit"
-        }
-    ],
     "Hypervisor": "xen",
     "EnaSupport": true,
     "SriovNetSupport": "simple",

So why are those tags not visible to me in the CI account?

CC @miabbott

@miabbott (Member) commented Oct 5, 2018

StackOverflow to the rescue yet again

https://stackoverflow.com/questions/46396906/aws-cross-account-shared-ami-tags-not-showing-up

...points to

https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_Tags.html#tag-restrictions

...specifically

> You can tag public or shared resources, but the tags you assign are available only to your AWS account and not to the other accounts sharing the resource.

Which basically means tags on public resources are useless. sigh
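
As a minimal sketch of what that restriction still allows: a consuming account can apply its own copy of the tags to the shared AMI, and those tags are then visible only within that account. The profiles and values below reuse the ones from above; the final output assumes the tag takes effect:

$ # tag the shared AMI from the CI account; the tag is visible to this account only
$ AWS_PROFILE=ci AWS_DEFAULT_REGION=us-east-1 aws ec2 create-tags --resources ami-0a08f82608c65f0bc --tags Key=rhcos_tag,Value=alpha
$ AWS_PROFILE=ci AWS_DEFAULT_REGION=us-east-1 aws ec2 describe-images --filters "Name=tag:rhcos_tag,Values=alpha" --query 'sort_by(Images, &CreationDate)[-1].Name' --output text
rhcos_dev_4.0.6562-hvm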

@wking (Member, Author) commented Oct 5, 2018

So revert #314 (and the other tags?) and back to the drawing board? We can also leave them in AWS for just the rh-dev or other blessed accounts if we want, but the installer shouldn't be looking at them.

@miabbott (Member) commented Oct 5, 2018

We are also publishing a JSON file with AMI IDs to S3 after our own tests pass (same criteria for the alpha tag)...could the installer consume that file to figure out which image to use?

I think the tagging of the oscontainer is still useful, but we can remove the tag metadata from the AMI.
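
As a hedged sketch of what consuming that file could look like: the bucket name, key, and JSON schema below are hypothetical, and only the AMI ID and image name are taken from this thread:

$ # hypothetical S3 location and schema; only the values are real
$ aws s3 cp s3://example-rhcos-builds/aws-us-east-1.json - | jq .
{
  "ami": "ami-0a08f82608c65f0bc",
  "name": "rhcos_dev_4.0.6562-hvm"
}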

@wking changed the title from "AMI tags not avialable in the CI account" to "AMI tags not shared between accounts" on Oct 5, 2018
@wking (Member, Author) commented Oct 5, 2018

> We are also publishing a JSON file with AMI IDs to S3 after our own tests pass (same criteria for the alpha tag)...could the installer consume that file to figure out which image to use?

Yes, although I'm not sure how to square that with different internal/public release cadence. More discussion on this starting here.

@miabbott (Member) commented Oct 5, 2018

If we wish to maintain the idea that the official API is the AMI JSON files, we could publish multiple variants of the JSON per release channel (aws-buildmaster.json, aws-alpha.json, aws-stable.json, etc). While all of the JSON files would be public, we could restrict the individual AMIs to only be accessed by certain accounts. WDYT?
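
An installer-side lookup against those per-channel files might then be a one-liner (reusing the hypothetical bucket and schema sketched above):

$ # pick the current alpha AMI from the published channel file
$ aws s3 cp s3://example-rhcos-builds/aws-alpha.json - | jq -r .ami
ami-0a08f82608c65f0bc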

@wking (Member, Author) commented Oct 5, 2018

> While all of the JSON files would be public, we could restrict the individual AMIs to only be accessed by certain accounts.

So would the buildmaster AMIs be mostly internal, while the alpha and stable AMIs would always be public? Otherwise things seem sticky. For example, if the alpha AMI was private, there would be no way for an external caller to ask "what's the most recent public alpha AMI?".

@miabbott (Member) commented Oct 5, 2018

> For example, if the alpha AMI was private, there would be no way for an external caller to ask "what's the most recent public alpha AMI?".

Hmm...I didn't know we were targeting having multiple public release channels. Or for that matter, an internal + public version of each release channel. I guess it would help to know how many release channels/variants we are on the hook for. And if we need an internal and public version of each.

@ashcrow (Member) commented Oct 5, 2018

For now we're just targeting one public stream. We know we will add more later, but the specifics haven't been worked out yet.

@wking (Member, Author) commented Oct 5, 2018

> For now we're just targeting one public stream.

So basically, "we're fine with the installer as it stands and don't need openshift/installer#409 or other ways to select RHCOS channels"? And you folks can just grant access to whatever internal account (or make the image public) to release it to those users.

@ashcrow (Member) commented Oct 5, 2018

> So basically, "we're fine with the installer as it stands and don't need openshift/installer#409 or other ways to select RHCOS channels"? And you folks can just grant access to whatever internal account (or make the image public) to release it to those users.

For customer production usage, yes. There will be one stream now and possibly more later. For people doing testing, kicking the tires, or running CI systems, tagging is as important as noted, IMHO.

@wking (Member, Author) commented Oct 5, 2018

> There will be one stream now and possibly more later.

Right. We can figure that out when we get multiple public streams.

> For people doing testing, kicking the tires, or running CI systems, tagging is as important as noted, IMHO.

Can't we cover this use-case with a combination of granting accounts access to private AMIs (like we used to do before #304) and an explicit AMI override before calling the installer (so no installer lookup)? Then whoever is doing testing can either ride the bleeding edge in their testing account (so still a single stream, just different from the public stream) or pin to a specific AMI they want to test before it gets promoted to being a public AMI.
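
As a sketch of that flow, under two assumptions: per-account access is granted with modify-image-attribute, and the installer honors an OS-image override environment variable (the name OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE is an assumption here, not something this thread confirms):

$ # from the owning account: grant the CI account (from above) launch permission on the private AMI
$ AWS_PROFILE=rh-dev AWS_DEFAULT_REGION=us-east-1 aws ec2 modify-image-attribute --image-id ami-0a08f82608c65f0bc --launch-permission "Add=[{UserId=460538899914}]"
$ # from the testing account: pin the installer to that AMI, so no installer lookup happens
$ OPENSHIFT_INSTALL_OS_IMAGE_OVERRIDE=ami-0a08f82608c65f0bc openshift-install create cluster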

@ashcrow (Member) commented Oct 8, 2018

> Right. We can figure that out when we get multiple public streams.

👍

> Can't we cover this use-case with a combination of granting accounts access to private AMIs (like we used to do before #304) and an explicit AMI override before calling the installer (so no installer lookup)?

That sounds reasonable to me. Let me tag in @miabbott and/or @dm0- for a second opinion.

@miabbott (Member) commented Oct 12, 2018

> Can't we cover this use-case with a combination of granting accounts access to private AMIs (like we used to do before #304) and an explicit AMI override before calling the installer (so no installer lookup)?

Makes sense to me. I think in this scenario, making the AMIs truly public would be owned by OpenShift (after they had completed some level of additional testing against the AMI).

Going back to the tagging proposal from #150 (and #201):

> An initial proposal for a tagging scheme:
>
>   • buildmaster: freshly built from the pipeline; not tested
>   • alpha: image has been successfully sanity tested in AWS; made available to OpenShift
>   • stable: image has been successfully tested by OpenShift; requires feedback mechanism from OpenShift

...an alpha AMI would be one that we grant specific accounts access to, and a stable AMI would be publicly available.
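
In launch-permission terms, that promotion could be a two-step sketch (the account ID is reused from above for illustration):

$ # alpha: share the AMI with specific OpenShift accounts only
$ aws ec2 modify-image-attribute --image-id ami-0a08f82608c65f0bc --launch-permission "Add=[{UserId=460538899914}]"
$ # stable: open the AMI to everyone
$ aws ec2 modify-image-attribute --image-id ami-0a08f82608c65f0bc --launch-permission "Add=[{Group=all}]"

Note that, per the restriction quoted earlier, neither step shares the AMI's tags across accounts; only launch permission travels.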

@openshift-bot commented

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci-robot added the lifecycle/stale label on Aug 27, 2020
@openshift-bot commented

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 26, 2020
@openshift-bot commented

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci-robot commented

@openshift-bot: Closing this issue.

In response to this:

> Rotten issues close after 30d of inactivity.
>
> Reopen the issue by commenting /reopen.
> Mark the issue as fresh by commenting /remove-lifecycle rotten.
> Exclude this issue from closing again by commenting /lifecycle frozen.
>
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
