amazon-ebs: Error waiting for SSH: handshake failed: ssh: unable to authenticate #788
Comments
I am also seeing this today.
Also seeing the same thing. I tried with Packer 0.5.1 today and had the same issue with 0.4.1 before. Even when launching the Packer-created "raw" image manually via the EC2 console, it is not possible to log in manually with SSH and the keypair; it asks for a password. This is only an issue with CentOS; it works as expected with Ubuntu and RHEL.
I get the same error, using CentOS.
Same here, I get the same error using CentOS. I can successfully create the virtual network, cloud service, VM, disk, and so on, but at the end, before running provisioners, it fails with exactly the same error! Has anybody found a solution yet? :)
Same issue here. I tried with:
|
Same error message here with a CentOS 6.5 image.
Same here with the Amazon Linux AMI ami-bba18dd2.
The problem is (arguably) the timing of when the vanilla CentOS image gets its SSH keypair from the metadata server. The bit of code that does it is in rc.local, which runs after SSH starts up. Depending on when Packer tries to authenticate, it might catch a listening SSH daemon but no authorized key for the root user yet. It would be awesome for Packer to optionally retry failed SSH authentication.
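The retry behaviour wished for above could look roughly like this; a minimal sketch, not anything Packer actually provides (the function name and attempt count are made up, and `"$@"` stands in for the real SSH attempt, e.g. `ssh -o BatchMode=yes root@$HOST true`, so the loop can be exercised without a live instance):

```shell
#!/bin/sh
# Keep attempting an SSH auth check until it succeeds or we give up,
# to survive the window where sshd is listening but rc.local has not
# yet installed the authorized key.
wait_for_ssh() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then
      return 0              # key finally present, auth succeeded
    fi
    i=$((i + 1))
    sleep 1                 # real code would back off between tries
  done
  return 1                  # gave up: the key never appeared
}
```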
I just ran into this problem and spent an hour trying to figure out what the heck was happening. Here is my scenario, which is pretty similar to yours. I wanted to set up a minimal environment in Amazon based on CentOS 6. All I wanted was a fully patched AMI so I could start provisioning on top of it. I ran into the same handshake failure. Here is what is happening: I simply did an "rm -rf /root/.ssh/authorized_keys" after patching up the AMI. In my case this had nothing to do with Packer; it was the CentOS key-fetching script which held the solution.
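In template form, the workaround above amounts to a final shell provisioner that strips the baked-in key before the AMI snapshot is taken. A sketch (the path comes from the comment above; whether you need `sudo` depends on the build user, which is an assumption here):

```json
{
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo rm -f /root/.ssh/authorized_keys"
      ]
    }
  ]
}
```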
@felin-arch - thank you so much for writing that up. My hair is (somewhat) intact because of your gentlemanly contribution.
@felin-arch, this does indeed fix one case of the problem described by the OP, but the problem also exists when going from official -> raw. If @mitchellh considers this one closed, I can file a new bug detailing the issue with the official image. My workaround was to build a new base image by installing into a chroot and configuring cloud-init to handle the key setup, removing the bits in the dist rc.local.
@felin-arch @mwedgwood-rmn Guys, was this problem ever solved somehow? I still see the same problem with CentOS 6.5.
@i-sam, |
@felin-arch Ok, thank you for the fast answer. Got it.
I agree with @felin-arch. Sorry, guys.
I think the actual cause is different: the "provided" AMIs that AWS offers add the key to the ec2-user user. That is effectively hardcoded, enforced by the cloud-init script. For me, adding "ssh_username": "ec2-user" made it work (on a CentOS AMI); ubuntu is needed when using an Ubuntu-based AMI.
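A minimal builder stanza reflecting that fix might look like this (the region, source AMI, and AMI name are placeholders; the username depends on the image: ec2-user here, ubuntu for Ubuntu-based AMIs):

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.micro",
      "ssh_username": "ec2-user",
      "ami_name": "centos-base-{{timestamp}}"
    }
  ]
}
```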
I can confirm @igmar's solution, which seems like the right one to me. Even apart from Packer, trying to ssh into a CentOS-based AMI as user |
For reference, the contents of /etc/rc.d/rc.local on the official CentOS 6.5 image are as follows:
|
Does rebuilding the official Ubuntu base image with "rm -rf /root/.ssh/authorized_keys" also work as a workaround for this issue?
@hyperfocus1337 I believe so. That is basically what I did.
I'll try it out and report back.
Unfortunately, that didn't work for me. I made an EC2 instance manually with the default Ubuntu 14.04 AMI (ami-9eaa1cf6), logged in through SSH, and ran "rm -rf /root/.ssh/authorized_keys". After that I saved a snapshot. Running the template again with the self-created source AMI and EBS snapshot gives me the same problem. I have shared my Packer template here: https://github.com/hyperfocus1337/packer-amazon-ebs-template Any hints on how I can troubleshoot this further?
You are trying to SSH to the machine as the user ubuntu, but you removed the keys for the root user. You have to check the exact way Ubuntu fetches keys and adjust your method. I do not know how Ubuntu fetches keys, but I would guess it does the same thing as CentOS; the only difference is that it adds the keys under the ubuntu user instead of root.
This time I removed the authorized_keys file for both the root and ubuntu users, which didn't work. Removing it only for the ubuntu user didn't work either. I also tried using a private SSH key; I can't even log in manually with it. Is there another way to make the amazon-ebs builder work with Ubuntu? Where can I find the Ubuntu equivalent of the "/etc/rc.d/rc.local" script on CentOS 6.5? Is my "ami_block_device_mappings" setting correct in my template? I'm not sure I fully understand it. And once Packer automatically creates an SSH key, where does it store it for later access? The documentation only mentions the -debug flag, which doesn't sound like the only way to retrieve it.
@hyperfocus1337, you will have to check what Ubuntu does to fetch the keys. As I remember, when you use -debug it saves the key to the current directory (I think it also echoes the key location to the console).
Thanks once again for the fast response. Can you elaborate a little more on "what Ubuntu does to fetch keys"? I tried to research it but I don't know where to start. What files or directories should I look into? Should I look through Upstart or systemd files, since Ubuntu doesn't use the same init system as CentOS?
When you fire up an EC2 instance you can specify a key you want to use to access that machine. The VM needs to set up this key before you can access it. Various distributions use different ways of fetching the correct public key from AWS. Yes, init.d would be a good start, and cloud-init must have some documentation that should help you.
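Roughly, what those boot scripts do is pull the public key from the EC2 instance-metadata service and append it to the target user's authorized_keys. A hedged sketch (the helper name is made up; the metadata URL in the comment is the standard EC2 endpoint; the function is parameterised so it can be tried against any URL curl understands, not just a live instance):

```shell
#!/bin/sh
# install_ec2_key: fetch a public key from a URL and append it to an
# authorized_keys file, mimicking what cloud-init/rc.local do at boot.
# On a real instance the URL would be:
#   http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key
install_ec2_key() {
  url=$1
  keyfile=$2
  mkdir -p "$(dirname "$keyfile")"   # ensure ~/.ssh (or similar) exists
  curl -s "$url" >> "$keyfile"       # append, don't clobber existing keys
  chmod 600 "$keyfile"               # sshd refuses world-readable key files
}
```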
I think this is the script; it's located at /usr/lib/cloud-init/write-ssh-key-fingerprints. A list of all cloud-init files on Ubuntu 14.04 is here: http://packages.ubuntu.com/trusty/all/cloud-init/filelist.
Another cloud-init configuration option that might be interesting: https://cloudinit.readthedocs.org/en/latest/topics/examples.html#configure-instances-ssh-keys. There are also three SSH modules: https://cloudinit.readthedocs.org/en/latest/topics/modules.html#ssh. I'm not good with scripting languages and maybe too inexperienced to figure this out myself, but I'll try to make sense of it and report back. In the meantime, can someone help me verify what it does and determine what steps I should take to resolve this issue?
The script you posted above does not do any key fetching. I googled around, and you may need to use the ec2-user to log in. Before trying to automate things, you may want to do all the steps manually to make sure you understand what is happening when Packer is in play.
What do you mean by the ec2-user? Your Amazon IAM username? Login with that in the Packer template? Tried it with:
Unfortunately, it didn't work yet, and I'm clueless on how to proceed. @sethvargo @mitchellh The amazon-ebs builder is not of much use for Ubuntu users at this time without clear instructions on how to make the SSH login work, even though it's not an issue on the Packer side. I'm guessing many users use both EC2 and Ubuntu. I would love to be able to have this integrated with Atlas. This is my template: https://github.com/hyperfocus1337/packer-amazon-ebs-template
@felin-arch This was the fix for an issue I was having, thank you!
for CentOS 7
|
@EliasGoldberg's solution worked for me.
@EliasGoldberg's solution also worked for me on CentOS 7.
Setting ssh_username to centos worked for me too (on CentOS 7).
I can confirm that @EliasGoldberg's solution worked for me. Thanks a bunch.
Yes, it worked for me too with the ubuntu user; it was giving an error with the default template.
Setting the "correct" user fixed the error for me as well. Thanks to all. FYI, there is a list of SSH users based on the instance type in a Tip at: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html
Interestingly I was hitting this problem and the cause was the name I was using in the
If I got the name completely wrong, it would fail early, saying it could not find the AMI. But using the name with the wrong case seemed to allow it to find the AMI and then fail at the SSH connection step, triggering this error. I was using |
@codekipple That's really interesting... maybe we should add a line to the docs stating that the AMI name is case sensitive.
@codekipple The biggest problem with that filter is that you don't specify the owner of the AMI, so you get the latest public AMI that matches your filter. "Always" specify |
@rickard-von-essen Ah, OK. I'm new to Packer and I'm trying to build a system to create AMIs in 4 different AWS accounts. I omitted the owners on purpose so it worked for all accounts and I didn't have to keep track of the owner IDs.
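For reference, a filter that pins the owner might look like the sketch below (the name pattern is a guess at a Trusty image; 099720109477 is Canonical's AWS account ID, and `most_recent` picks the newest match):

```json
{
  "source_ami_filter": {
    "filters": {
      "name": "ubuntu/images/*ubuntu-trusty-14.04-amd64-server-*",
      "virtualization-type": "hvm",
      "root-device-type": "ebs"
    },
    "owners": ["099720109477"],
    "most_recent": true
  }
}
```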
I solved it by not forgetting to add the public key pair (of the AWS account).
FWIW, I commented out the original 'ec2-user' username and kept getting this error due to how it's pulled from .kitchen.yml. You should only have one reference to 'username'!
"builders" : [
{
"type" : "amazon-ebs",
"profile" : "your-aws-profile",
"region" : "{{user `region`}}",
"instance_type" : "t2.micro",
"source_ami" : "ami-XXXX",
"communicator": "ssh",
"ssh_username" : "ubuntu",
"ssh_keypair_name": "XXXX",
"ssh_private_key_file": "/path/to/XXXX.pem",
"ami_name" : "Ubuntu-Sample-AMI",
"ami_description" : "Some message",
"run_tags" : {
"Name" : "Hello World",
"Tool" : "Packer",
"Author" : "XXXX"
}
}
] It is very important to give correct username for different linux flavors, which are. I hope this saves your time!!! |
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
I noticed that this problem has occurred before in #130, but I am not sure if this is the same issue.
I am attempting to build my images in stages: I go from the official release AMI to a "raw" image, then from the "raw" image to a "bootstrapped" image.
Where "raw" is basically a local copy of the official image, bootstrapped includes Puppet and Docker, and the base image is the result of a Puppet Apply run that installs and configures our commons.
I am creating AMIs/images for Ubuntu 12.04 and CentOS 6.4.
With Ubuntu, I have successfully completed the "raw" and "bootstrapped" images. With CentOS, the "raw" image builds fine (though I had to increase the ssh_timeout setting). However, when going from "raw" to "bootstrapped" with CentOS, it fails with:
It's worth noting that my "raw" build executes a shell script, but that shell script is empty, so no filesystem changes that I am aware of have occurred that could have caused this problem.
command
packer-config.json
output