
Add configurable working directory #1231

Merged (3 commits) on Jun 16, 2023

Conversation

@cartermckinnon (Member) commented Mar 24, 2023

Issue #, if available:

Closes #1229

Description of changes:

This aims to address issues brought up in #1230 and #1198.

The underlying issue is that the base working directory used throughout the AMI template (/tmp) may not be suitable for all use cases. Resources are staged in this directory (/tmp/worker) before being moved into their final position (such as the contents of files/), and it is also used to stash the AWS CLI installer, among other things.

After this change:

  • remote_folder can still be used to control the location of Packer scripts on the builder instance.
  • working_dir will default to a sub-directory of remote_folder, and can be adjusted as needed. All provisioners that create files on the builder instance will do so within this directory. This directory is deleted at the end of the build. (A sketch of the resulting defaults follows this list.)
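
For reference, a minimal sketch of how the resulting defaults might look in eks-worker-al2-variables.json (the working_dir expression is quoted later in this conversation; the /tmp default for remote_folder is assumed from the existing behavior described above):

  "remote_folder": "/tmp",
  "working_dir": "{{user `remote_folder`}}/worker"

Overriding either variable on the make command line, as in Testing Done below, moves all staged files accordingly.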

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

Testing Done

Defaults succeed:

make 1.25

Different remote_folder succeeds:

make 1.25 remote_folder=/home/ec2-user
# working_dir will be /home/ec2-user/worker

Different working_dir succeeds:

make 1.25 working_dir=/tmp/foo

@im-wanyama

LGTM

@matt-corbalt

@cartermckinnon This PR fixes the issues we were having with hardened linux builds. Thanks again!

@dims (Member) commented Mar 28, 2023

one question inline. LGTM otherwise!

Resolved review threads: eks-worker-al2-variables.json (2), eks-worker-al2.json (2), scripts/install-worker.sh (1)
@matt-corbalt

Hey @cartermckinnon, any idea when this might be merged to master? Thanks again for the work

Resolved review threads: hack/generate-template-variable-doc.py (5, of which 2 outdated)
@matt-corbalt

Hi @cartermckinnon, another friendly ping as this PR solves some issues we've been having and we're pinned on it until merged. Let me know if there's anything I can do to help work towards merging this. Thanks!

@im-wanyama

Hi @cartermckinnon any update on when this is going to get merged?

@cartermckinnon (Member, Author)

Sorry folks, I was out of the office and had some other priorities. I'll resolve the conflicts and wrap this one up.

@alexkimin

Hi, any update on this PR?

Comment on lines +119 to +120
"mkdir -p {{user `working_dir`}}",
"mkdir -p {{user `working_dir`}}/log-collector-script"
Member:

Can't this just be combined into one line?

Member Author:

Yeah. I think I separated this to make it very obvious what the intent of this provisioner is; I don't feel too strongly either way.

Member:

Ya, I had mixed feelings. It's unnecessary, but it avoids the impression that we're creating this directory just for the log-collector script.
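
For context, the combined single-line form being discussed would be roughly the following (a sketch; mkdir -p creates the parent working_dir as well, so one line is sufficient):

"mkdir -p {{user `working_dir`}}/log-collector-script"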

{
"type": "shell",
"inline": [
"rm -rf {{user `working_dir`}}"
Member:

Is it possible bad things will happen here? I think not, since we're creating {{user remote_folder}}/worker. I don't see how we could accidentally delete anything too problematic.

Member Author:

Yup, we're using /tmp right now and deleting it in the cleanup script. I added this to directly mirror the provisioner up top where we create this dir.
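
In other words, the setup and cleanup provisioners mirror each other, roughly like this sketch assembled from the excerpts above (surrounding provisioners omitted):

{ "type": "shell", "inline": ["mkdir -p {{user `working_dir`}}"] },
...
{ "type": "shell", "inline": ["rm -rf {{user `working_dir`}}"] }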

@mmerkes (Member) left a comment:

LGTM

@alexkimin

Thanks for the good PR; it allows using this repo with CIS-hardened Amazon Linux 2. Hope it merges soon and things work again.

@Sandeepsac commented Jun 19, 2023

@cartermckinnon I am still getting this error:

/sandeep/amazon-eks-ami/scripts/generate-version-info.sh
2023-06-19T07:29:38Z: amazon-ebs: ctr: failed to dial "/run/containerd/containerd.sock": context deadline exceeded: connection error: desc = "transport: error while dialing: dial unix:///run/containerd/containerd.sock: timeout"
2023-06-19T07:29:38Z: ==> amazon-ebs: Downloading /home/ec2-user/worker/version-info.json => amazon-eks-node-1.27-v20230619-version-info.json
2023-06-19T07:29:38Z: ==> amazon-ebs: Provisioning step had errors: Running the cleanup provisioner, if present...
2023-06-19T07:29:38Z: ==> amazon-ebs: Terminating the source AWS instance...
2023-06-19T07:30:39Z: ==> amazon-ebs: Cleaning up any extra volumes...
2023-06-19T07:30:39Z: ==> amazon-ebs: No volumes to clean up, skipping
2023-06-19T07:30:39Z: ==> amazon-ebs: Deleting temporary security group...
2023-06-19T07:30:39Z: ==> amazon-ebs: Deleting temporary keypair...

Command :
make 1.27 aws_region=$AWS_REGION source_ami_id=$AMI_ID source_ami_owners=$AMI_OWNER_ACCOUNT_ID source_ami_filter_name="$AMI_NAME" subnet_id=subnet-*** remote_folder=/home/ec2-user

EKS version 1.27
CIS hardening AMI: CIS Amazon Linux 2 Benchmark - Level 2
https://aws.amazon.com/marketplace/pp/prodview-wm36yptaecjnu?sr=0-1&ref_=beagle&applicationId=AWSMPContessa#pdp-pricing

@mmerkes (Member) commented Jun 19, 2023

@Sandeepsac Doesn't look like it's one of the errors that this PR is meant to fix. Looks like it's a new one. You probably need to look at the containerd logs. Looks like containerd hasn't started.

@cartermckinnon (Member, Author)

The ctr line is just a warning, and it's not causing the build failure (I can see that the subsequent provisioner is running, the one that downloads the version-info.json file).

@Sandeepsac are you using the default for working_dir?

@Sandeepsac

Yes @cartermckinnon, I am using the default for working_dir:

"working_dir": "{{user `remote_folder`}}/worker"

which is in the eks-worker-al2-variables.json file.

@cartermckinnon (Member, Author)

@Sandeepsac My guess is there's a permissions error at rm -rf {{user 'working_dir'}}. If you add a complete make line here, I can try to reproduce. I'd also try different values for remote_folder.
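
For example, following the pattern of the Testing Done commands above, a run with an alternative remote_folder might look like this (the /var/tmp value is purely illustrative):

make 1.27 remote_folder=/var/tmp
# working_dir would then default to /var/tmp/worker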

@Sandeepsac commented Jun 22, 2023

@cartermckinnon I will check the permissions, thanks.

Make command:
make 1.27 aws_region=$AWS_REGION source_ami_id=$AMI_ID source_ami_owners=$AMI_OWNER_ACCOUNT_ID source_ami_filter_name="$AMI_NAME" subnet_id=subnet-*** remote_folder=/home/ec2-user

I am referring to this blog for custom AMI creation:
https://aws.amazon.com/blogs/containers/building-amazon-linux-2-cis-benchmark-amis-for-amazon-eks/

CIS hardening AMI: CIS Amazon Linux 2 Benchmark - Level 2
Region: us-west-1
EKS version: 1.27

@im-wanyama

Hi there, is there an ETA on when this is going to get tagged?

@Sandeepsac commented Jul 3, 2023

@cartermckinnon Any update on this error?

2023-07-03T06:53:02Z: ==> amazon-ebs: Provisioning with shell script: /eks-1.26/amazon-eks-ami/scripts/generate-version-info.sh
2023-07-03T06:53:13Z: amazon-ebs: ctr: failed to dial "/run/containerd/containerd.sock": context deadline exceeded: connection error: desc = "transport: error while dialing: dial unix:///run/containerd/containerd.sock: timeout"
2023-07-03T06:53:13Z: ==> amazon-ebs: Downloading /home/ec2-user/worker/version-info.json => amazon-eks-node-1.26-v20230703-version-info.json
2023-07-03T06:53:13Z: ==> amazon-ebs: Provisioning step had errors: Running the cleanup provisioner, if present...
2023-07-03T06:53:13Z: ==> amazon-ebs: Terminating the source AWS instance...
2023-07-03T06:54:44Z: ==> amazon-ebs: Cleaning up any extra volumes...
2023-07-03T06:54:44Z: ==> amazon-ebs: No volumes to clean up, skipping
2023-07-03T06:54:44Z: ==> amazon-ebs: Deleting temporary security group...
2023-07-03T06:54:44Z: ==> amazon-ebs: Deleting temporary keypair...
2023-07-03T06:54:44Z: Build 'amazon-ebs' errored after 5 minutes 47 seconds: open amazon-eks-node-1.26-v20230703-version-info.json: permission denied

==> Wait completed after 5 minutes 47 seconds

==> Some builds didn't complete successfully and had errors:
--> amazon-ebs: open amazon-eks-node-1.26-v20230703-version-info.json: permission denied

==> Builds finished but no artifacts were created.
make[1]: *** [k8s] Error 1

Successfully merging this pull request may close these issues.

Not able to build eks ami with CIS BENCHMARK AMAZON LINUX 2