Packer build for hyperv-iso fails with "Waiting for SSH" error #5049
Please add debug log output by running packer with the environment variable |
Attached detailed debug log. |
Are you seeing an IP address assigned to the VM? Is it able to download the preseed file or run updates? If it gets an IP address, make sure that the SSH server is up and running, that your user is configured for the SSH server, and check the firewall on the VM. Can you SSH to the VM? Then check the firewall on the machine running Packer, and anywhere in between. Windows Firewall has blocked access to Packer's HTTP server for me before. |
IP address is not assigned to the VM. It is not able to run updates. It gets stuck waiting for the SSH connection to happen. |
I've found a post, "Ubuntu install hangs on Hyper-V" - it's an old post, but maybe it helps... |
There seems to be no solution to this IMO with the current configuration. The issue is not with packer but rather with the provider implementation using Hyper-V. Is there a working example somewhere within packer? |
I am having this problem too. |
I observe the same symptoms. What data can I gather and deliver to move the issue forward? |
I've wrestled with this issue many times. You need to get the Hyper-V plugin running inside the VM during the install process, or packer will never detect the IP and thus never connect. It's trickier than it sounds, especially if you only want to install the Hyper-V plugins when building Hyper-V boxes. I've managed to get it working on Debian, Ubuntu, Alpine, Oracle, CentOS, RHEL, Fedora, Arch, Gentoo and FreeBSD. See here. Which target are you going for? As an aside, a major hurdle I've been having is the installer finishing, then rebooting, only it doesn't eject the install media, and boots from it again. That issue can also cause the symptom you're seeing. It would be nice if packer set up the machines with the hard disk higher in the boot priority, or auto-detected the reboot and ejected the media... as I never hit this issue on VMware, VirtualBox or QEMU. |
For Ubuntu (I just noticed the JSON file above), make sure you have these packages being installed via your config:
and you probably need to run this command (assuming you're trying to log in via root to provision the box):
|
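The package list and command from the comment above were lost during extraction. As a hedged reconstruction (the package names and preseed directives here are assumptions based on common Ubuntu-on-Hyper-V guidance, not the original comment), an Ubuntu preseed might include something like:

```
# Hypothetical preseed fragment: pull in the Hyper-V guest tools during install
# (package names assumed from Ubuntu's cloud-tools packaging; verify for your release).
d-i pkgsel/include string openssh-server linux-cloud-tools-virtual linux-tools-virtual

# If provisioning as root over SSH (assumption), permit root password logins:
d-i preseed/late_command string \
    sed -i 's/.*PermitRootLogin.*/PermitRootLogin yes/' /target/etc/ssh/sshd_config
```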
This is amazing advice, @mwhooker can this be added to the hyperv-iso documentation on packer.io to ensure success with this great tool and relieve frustration :) |
There is a nifty tool for determining what hypervisor you are running on
If that is the case it should definitely be documented. |
Wow, thx for all answers.
|
I prefer dmidecode, as it uses far fewer dependencies, and is more generally available.

```shell
if [[ `dmidecode -s system-product-name` == "VirtualBox" ]]; then
    : # (VirtualBox-specific steps elided in the original comment)
fi
if [[ `dmidecode -s system-manufacturer` == "Microsoft Corporation" ]]; then
    : # (Hyper-V-specific steps elided in the original comment)
fi
if [[ `dmidecode -s system-product-name` == "VMware Virtual Platform" ]]; then
    : # (VMware-specific steps elided in the original comment)
fi
if [[ `dmidecode -s system-product-name` == "KVM" || `dmidecode -s system-manufacturer` == "QEMU" ]]; then
    : # (KVM/QEMU-specific steps elided in the original comment)
fi
```

Or for those situations where dmidecode and awk aren't available, such as during an automated install process, all you really need is dmesg and grep. For example, with Debian I use:

```shell
d-i preseed/late_command string \
    sed -i -e "s/.*PermitRootLogin.*/PermitRootLogin yes/g" /target/etc/ssh/sshd_config ; \
    dmesg | grep "Hypervisor detected: Microsoft HyperV" ; \
    if [ $? -eq 0 ]; then \
        chroot /target /bin/bash -c 'service ssh stop ; echo "deb http://deb.debian.org/debian jessie main" >> /etc/apt/sources.list ; apt-get update ; apt-get install hyperv-daemons' ; \
        eject /dev/cdrom ; \
    fi
``` |
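The DMI-string checks above can be factored into a small helper that maps dmidecode output to a hypervisor label. This is a sketch: the function name `detect_hypervisor` and the label strings are my own, but the match strings mirror the dmidecode comparisons in the comment above.

```shell
#!/bin/sh
# Map `dmidecode -s system-product-name` / `dmidecode -s system-manufacturer`
# values to a hypervisor label, using the same strings as the checks above.
detect_hypervisor() {
    product="$1"       # e.g. output of `dmidecode -s system-product-name`
    manufacturer="$2"  # e.g. output of `dmidecode -s system-manufacturer`
    case "$product/$manufacturer" in
        "VirtualBox"/*)               echo "virtualbox" ;;
        */"Microsoft Corporation")    echo "hyperv" ;;
        "VMware Virtual Platform"/*)  echo "vmware" ;;
        "KVM"/*|*/"QEMU")             echo "kvm" ;;
        *)                            echo "unknown" ;;
    esac
}

# Usage on a live system (dmidecode typically requires root):
#   detect_hypervisor "$(dmidecode -s system-product-name)" \
#                     "$(dmidecode -s system-manufacturer)"
```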
I tried to install Ubuntu 16.04 using
The gist, which contains
The build is based on https://github.com/geerlingguy/packer-ubuntu-1604. The YouTube video shows that the installation got stuck without any information and didn't end correctly. In the video, the waiting time between roughly 2'34" and 3'38" was cut out (about 40 minutes of timeout in total). |
@it-praktyk see my post above regarding an Ubuntu install on Hyper-V. You need to add the following to your pkgsel/include line:
That is the easiest way to get the Hyper-V daemon setup on Ubuntu during the install process, and should solve your problem. |
Yes - I didn't mention it, but I tried that today as well. Do you build images using Windows 10? |
Yes. |
The hard way to solve this problem is to open the virtual machine console using the Hyper-V manager, wait until it reboots, and then log in via the console. Once there, install the Hyper-V daemons manually, and packer should connect via SSH within 1 or 2 minutes. Note, you might need to manually enable the daemons using systemctl (it varies between distros, and I don't know whether they are enabled by default on Ubuntu). |
I should add that if the daemons are running and you still can't connect, then you need to manually confirm SSH is working properly... so from the console, run ifconfig to determine the IP, and see if you can log in using the credentials specified in the packer JSON config. It's possible a setting in the sshd_config is blocking access. For example, password logins may be disabled, or direct root logins may be disabled. If you can log in manually via the credentials in the JSON file, and you've confirmed the Hyper-V daemons are running (KVP and VSS), and packer still isn't connecting, let us know. |
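A quick way to check for the two sshd_config problems mentioned above is to grep the config for explicitly restrictive directives. The helper name and its output labels are my own; it only covers the two directives called out in the comment, not sshd's full effective-configuration logic (for that, `sshd -T` is the authoritative tool).

```shell
#!/bin/sh
# Report whether an sshd_config explicitly blocks the logins packer
# typically needs: password authentication and direct root logins.
check_sshd_config() {
    config="$1"
    if grep -Eq '^[[:space:]]*PasswordAuthentication[[:space:]]+no' "$config"; then
        echo "password-logins-disabled"
    elif grep -Eq '^[[:space:]]*PermitRootLogin[[:space:]]+(no|prohibit-password|without-password)' "$config"; then
        echo "root-logins-restricted"
    else
        echo "ok"
    fi
}

# Usage (on the guest, from the console):
#   check_sshd_config /etc/ssh/sshd_config
```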
I don't think that is a problem specifically related to Hyper-V. We need a topic about how to support OSes that don't have built in drivers/support for the Hypervisor you have selected to use. I have run into the problem of ejecting the cd rom as well (installing Pfsense). During an installation process there may be multiple reboots (looking at Windows here with patches). The way to tackle that is to eject the cd from the installation process of the OS. Think of doing something like this:
For a real bastard of an install have a look at: https://github.com/taliesins/packer-baseboxes/blob/master/hyperv-pfsense-2.3.2.json |
I was experiencing this same issue when trying to build RHEL 7.3 and Ubuntu. In my case I found that I first had to ensure an External VM switch was already set up within Hyper-V as packer would only create an internal one. This got Ubuntu working OK, but for RHEL I additionally had to install the Microsoft LIS drivers from https://www.microsoft.com/en-us/download/details.aspx?id=51612 as the built-in ones didn't seem to work. |
For RHEL 7.3 you need the following in your Kickstart file:
|
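The Kickstart lines were lost from the comment above. As a hedged sketch (the `hyperv-daemons` package name and the service names are assumptions based on RHEL packaging, not the original comment), the relevant portion might look like:

```
# Hypothetical Kickstart fragment: install and enable the Hyper-V guest
# daemons so packer can detect the VM's IP after reboot.
%packages
hyperv-daemons
%end

%post
systemctl enable hypervkvpd hypervvssd
%end
```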
I started watching this thread with the hope packer would get better at detecting Hyper-V guest IP addresses (like it does with other providers), but it appears nobody is working on that, so I'm going to mute this topic. As such, if anybody else needs help getting packer to work with a different distro, please message me directly. |
These issues with Hyper-V are specifically related not just to drivers being present, but to daemons as well. Most of the "popular" distros now include the required drivers; however, they do not, by default, include the daemons. Instructions for installing and enabling the daemons are documented on the distribution-specific pages linked at the bottom of that doc. Once the daemons are installed and running, they will report their IPs, and you can winrm, powershell, or ssh to your heart's content. As the implementation is distro-specific, I agree this is not a packer problem, but it could very well be remedied by updating the hyperv-iso docs to direct users to the MS docs. |
@ladar you can avoid all the mount madness if you force the network to be available in %post with
For some as-yet-undetermined reason, hyperv doesn't seem to initialize the network connection on its own during the installation; forcing it in the kickstart with the |
@wickedviking The mount in the snippet above is RHEL specific, and is required for RHEL installations because the network repos aren't accessible until you register the machine with the RHN. If the machine is registered, you are correct, those commands aren't needed. For example with my CentOS Kickstart config I pull in the packages via the network. As for your suggestion above, I don't believe "pointing" at the MS docs is sufficient. The hard part isn't installing the drivers/daemons, as you're correct most distros include them. The hard part is getting Hyper-V builds to include the daemons during installation so that when the machine reboots, the provisioning process will execute automatically. Notes on what's required for the various operating systems would be nice, but that would require quite a bit of work. |
Just an update on this issue. I've managed to overcome some of the issues people were facing through the use of a legacy network adapter (see #7128). And then overcome issues with the guest rebooting after install by changing the boot order (see #7147), leaving with me with a VM booted and ready for connections. Unfortunately the lack of Hyper-V daemon support chronicled above is blocking further progress. I tried working around that issue using pre-known IP addresses and the |
also, @ladar, how would you feel about me linking to your robox repo from our community tools page? |
@SwampDragons the Hyper-V daemons are still needed to autodetect a guest IP address, and that is obviously necessary when the guest IP address is unpredictable, such as with most DHCP configurations. What this fix does, is give the user the ability to use a defined hard-coded, predictable IP, in the Of course I'm assuming Did I make sense? |
@SwampDragons yes you are welcome to add the robox repo. |
yep yep makes sense; thanks for the clarification. |
I think the way I want to move forward on getting this issue closed is to 1) merge 4825, and 2) clearly document the need for the daemons and the affected operating systems. |
Merge 4825? Isn't that an issue? As I understand it, there are stale/closed pull requests which have tried to find alternative ways to detect the guest IP without relying on the Hyper-V daemons, but those attempts have all failed? Yes, I agree the Hyper-V daemon issue needs better documentation. I think part of the problem is that some distros, namely Ubuntu, auto-detect Hyper-V and then auto-install the daemons... which makes the existing documentation examples deceptively simple. A full write-up, with workarounds for various distros, would be a big task though, as every distro requires a different approach. In general, distros require the daemons to be installed separately, which varies in difficulty depending on the distro/installer. And that isn't well documented on the But of course, there are also some operating systems where the daemons just aren't available in any form. That was the case with NetBSD (there is a fork with experimental support, but it would require rebuilding the entire install ISO from source to use). Hence why I finally relented and sought the use of the That said, based on the issues I've seen opened over the last couple of years, I think the boot-ordering problem, and possibly legacy network adapter issues, were causing a subset of the reported failures, which is why I referenced this issue in my write-ups. I think those problems, albeit a minority, had nothing to do with the Hyper-V daemon issue, but got lumped in with it. Bug profiling at its worst. @SwampDragons I'm not aware of a PR which resolves the need for the Hyper-V daemons when auto-detecting the IP, but I could be wrong. All I know of are the workarounds, like those I put in my configs, which force the daemons to be installed, and/or use a predefined/static IP to avoid needing the daemons altogether. Of course the latter only worked once the |
Sorry, I meant #7154, which provides a workaround (static IP), as you said. |
@SwampDragons d'oh. I thought you were saying we shouldn't fix the |
nope, already merged it! |
The sad part is that I don't think we can "solve" this daemon problem on the Packer side. We can document the need for daemons, but it's beyond the scope of our docs to provide intimate detail on how to run every operating system on every hypervisor. And it's definitely beyond the scope of the tool to install daemons on guest systems when they boot. So I think that documenting it well and providing a workaround is the best we're going to get. |
@SwampDragons I agree. I just wanted to include my various fixes, so that people who hit the "Waiting for SSH" issue realize it might not be the daemon issue, and/or know about the If anybody does decide to tackle a write-up, they are welcome to rip the relevant portions from my configurations and use them as examples. |
If someone decides to work on this issue again, I think the solution might be looking up the guest MAC address in the ARP table. I confirmed that a guest IP is present in the ARP table, even if the Hyper-V daemons are missing (see screenshot). The possible drawbacks I can think of are: with this strategy the hypervisor, and thus Just throwing around ideas. |
Another workaround for this problem is to create both internal and external vswitches, then share the external one with the internal one (ncpa.cpl -> right-click -> Properties -> Sharing). |
For me on Windows 10, the problems were the firewall and the fact that the Hyper-V "Standardswitch" was not identified and was thus treated as a public network. This should go into the documentation. Fix (run in PowerShell as admin):

```powershell
$VS = "Standardswitch"
$IF_ALIAS = (Get-NetAdapter -Name "vEthernet ($VS)").ifAlias
New-NetFirewallRule -Displayname "Allow incoming from $VS" -Direction Inbound -InterfaceAlias $IF_ALIAS -Action Allow
Set-NetConnectionProfile -InterfaceAlias $IF_ALIAS -NetworkCategory Private
``` |
Regarding the ISO not getting ejected on Generation 2 VMs: why not have a boot command that will eject the DVD? AFAIK there is PowerShell to do the same: https://goodworkaround.com/2012/11/08/eject-dvd-iso-from-hyper-v-2012-using-powershell/ That should fix the issue for all old OSes where the installer cannot eject itself. |
I hit this (again) while trying to create a Photon OS VM using DHCP - and a Is there some other way I can tell Packer what the guest VM IP is? I'm not sure if there is any way to communicate with the Packer client while it's building? I don't mind some scripting, or even a way to manually input the IP address, but I need some way to tell Packer what it is! |
@cocowalla You can manually input the IP address using the ssh_host option: https://www.packer.io/docs/communicators/ssh.html#ssh_host but you'll need to make sure your preseed file sets up a static IP. |
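A minimal sketch of that combination (the builder values below are illustrative, not from this thread; `ssh_host` is the documented option, and the hard-coded IP must match whatever static address your preseed file configures; other required builder fields such as the ISO settings are omitted):

```json
{
  "builders": [
    {
      "type": "hyperv-iso",
      "communicator": "ssh",
      "ssh_username": "packer",
      "ssh_password": "packer",
      "ssh_host": "192.168.1.50"
    }
  ]
}
```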
@SwampDragons as you mentioned though, that's only going to work for static IPs. I'm already aware of the I was thinking more along the lines of some way to programmatically (or even interactively) provide the IP during the build, while Packer is waiting for the IP. I thought perhaps it might listen for commands over HTTP, for example. Thankfully it turned out that Photon does include the Hyper-V daemon, it's just that they gave the package a different name ( |
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. |
BUG:
We are trying to create an Ubuntu Vagrant box using the hyperv-iso image type. We are stuck with the error "Waiting for SSH to be available". After a few minutes, it times out and the build fails.