This repository provides infrastructure-as-code examples to automate the creation of virtual machine images and their guest operating systems on VMware vSphere using HashiCorp Packer and the Packer Plugin for VMware vSphere (`vsphere-iso`). All examples are authored in the HashiCorp Configuration Language ("HCL2").
Use of this project is mentioned in the VMware Validated Solution: Private Cloud Automation for VMware Cloud Foundation authored by the maintainer. Learn more about this solution at vmware.com/go/vvs.
By default, the machine image artifacts are transferred to a vSphere Content Library as an OVF template and the temporary machine image is destroyed. If an item of the same name exists in the target content library, Packer will update the existing item with the new version of the OVF template.
The following builds are available:
- VMware Photon OS 4
- Debian 11
- Ubuntu Server 22.04 LTS (cloud-init)
- Ubuntu Server 20.04 LTS (cloud-init)
- Ubuntu Server 18.04 LTS
- Red Hat Enterprise Linux 9 Server
- Red Hat Enterprise Linux 8 Server
- Red Hat Enterprise Linux 7 Server
- AlmaLinux OS 9
- AlmaLinux OS 8
- Rocky Linux 9
- Rocky Linux 8
- CentOS Stream 9
- CentOS Stream 8
- CentOS Linux 7
- SUSE Linux Enterprise Server 15
- Microsoft Windows Server 2022 - Standard and Datacenter
- Microsoft Windows Server 2019 - Standard and Datacenter
- Microsoft Windows 11
- Microsoft Windows 10
Note
The Microsoft Windows 11 machine image uses a virtual trusted platform module (vTPM). Refer to the VMware vSphere product documentation for requirements and prerequisites.
The Microsoft Windows 11 machine image is not transferred to the content library by default, because cloning an encrypted virtual machine to a content library as an OVF template is not supported. You can adjust the common content library settings to use VM templates instead.
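If you want the Windows 11 image to land in the content library as a VM template, one option is to disable the OVF conversion in the common settings; this sketch assumes the `common_content_library_ovf` variable described later in this document:

```hcl
// config/common.pkvars.hcl -- sketch, not a verified configuration:
// store content library items as VM templates rather than OVF templates.
common_content_library_ovf = false
```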
Operating Systems:

- VMware Photon OS 4
- Ubuntu Server 22.04 LTS and 20.04 LTS
- macOS Monterey and Big Sur (Intel)

Note
Operating systems and versions tested with the project.
If you are using VMware Photon OS 4.0 or Ubuntu 22.04, update your `/etc/ssh/ssh_config` or `~/.ssh/ssh_config` to allow authentication with RSA keys by including the following:

```
PubkeyAcceptedAlgorithms ssh-rsa
HostkeyAlgorithms ssh-rsa
```
Packer:

- HashiCorp Packer 1.8.2 or higher.

Note
Click on the operating system name to display the installation steps.

- Photon OS:

  ```shell
  PACKER_VERSION="1.8.2"
  OS_PACKAGES="wget unzip"
  if [[ $(uname -m) == "x86_64" ]]; then
    LINUX_ARCH="amd64"
  elif [[ $(uname -m) == "aarch64" ]]; then
    LINUX_ARCH="arm64"
  fi
  tdnf install ${OS_PACKAGES} -y
  wget -q https://releases.hashicorp.com/packer/${PACKER_VERSION}/packer_${PACKER_VERSION}_linux_${LINUX_ARCH}.zip
  unzip -o -d /usr/local/bin/ packer_${PACKER_VERSION}_linux_${LINUX_ARCH}.zip
  ```

- Ubuntu:

  ```shell
  sudo apt-get update && sudo apt-get install -y gnupg software-properties-common curl
  curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
  sudo apt-add-repository "deb [arch=amd64] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
  sudo apt-get update && sudo apt-get install packer
  ```

- macOS:

  ```shell
  brew tap hashicorp/tap
  brew install hashicorp/tap/packer
  ```
- HashiCorp Packer Plugin for VMware vSphere (`vsphere-iso`) 1.0.8 or higher.
- Packer Plugin for Windows Updates 0.14.1 or higher - a community plugin for HashiCorp Packer.

Note
Required plugins are automatically downloaded and initialized when using `./build.sh`. For dark sites, you may download the plugins and place them in the same directory as your Packer executable (e.g., `/usr/local/bin`) or in `$HOME/.packer.d/plugins`.
Additional Software Packages:

The following software packages must be installed on the operating system running Packer.

Note
Click on the operating system name to display the installation steps.
- Git command-line tools.
  - Photon OS: `tdnf install git`
  - Ubuntu: `apt-get install git`
  - macOS: `brew install git`
- Ansible 2.9 or higher.
  - Photon OS: `tdnf install ansible`
  - Ubuntu: `apt-get install ansible`
  - macOS: `brew install ansible`
- A command-line .iso creator. Packer will use one of the following:
  - Photon OS: `tdnf install xorriso`
  - Ubuntu: `apt-get install xorriso`
  - macOS: `hdiutil` (native)
- `mkpasswd`
  - Ubuntu: `apt-get install whois`
  - macOS: `brew install --cask docker` (mkpasswd is run in a container, as shown in the examples below)
- Coreutils
  - macOS: `brew install coreutils`
- HashiCorp Terraform 1.2.8 or higher.
  - Photon OS:

    ```shell
    TERRAFORM_VERSION="1.2.8"
    OS_PACKAGES="wget unzip"
    if [[ $(uname -m) == "x86_64" ]]; then
      LINUX_ARCH="amd64"
    elif [[ $(uname -m) == "aarch64" ]]; then
      LINUX_ARCH="arm64"
    fi
    tdnf install ${OS_PACKAGES} -y
    wget -q https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
    unzip -o -d /usr/local/bin/ terraform_${TERRAFORM_VERSION}_linux_${LINUX_ARCH}.zip
    ```

  - Ubuntu: `sudo apt-get update && sudo apt-get install terraform`
  - macOS: `brew install hashicorp/tap/terraform`
- Gomplate 3.11.2 or higher.
  - Ubuntu:

    ```shell
    GOMPLATE_VERSION="3.11.2"
    LINUX_ARCH="amd64" # set to arm64 on aarch64 hosts
    sudo curl -o /usr/local/bin/gomplate -sSL https://github.com/hairyhenderson/gomplate/releases/download/v${GOMPLATE_VERSION}/gomplate_linux-${LINUX_ARCH}
    sudo chmod 755 /usr/local/bin/gomplate
    ```

  - macOS: `brew install gomplate`
Platform:

- VMware vSphere 7.0 Update 3D or higher

Download the latest release. You may also clone `main` for the latest prerelease updates.

Example:

```shell
git clone https://github.com/vmware-samples/packer-examples-for-vsphere.git
```
The directory structure of the repository.
├── build.sh
├── config.sh
├── set-envvars.sh
├── LICENSE
├── NOTICE
├── README.md
├── ansible
│ ├── roles
│ │ └── <role>
│ │ └── *.yml
│ ├── ansible.cfg
│ └── main.yml
├── artifacts
├── builds
│ ├── ansible.pkvars.hcl.example
│ ├── build.pkvars.hcl.example
│ ├── common.pkvars.hcl.example
│ ├── proxy.pkvars.hcl.example
│ ├── rhsm.pkvars.hcl.example
│ ├── scc.pkvars.hcl.example
│ ├── vsphere.pkvars.hcl.example
│ ├── linux
│ │ └── <distribution>
│ │ └── <version>
│ │ ├── *.pkr.hcl
│ │ ├── *.auto.pkrvars.hcl
│ │ └── data
│ │ └── ks.pkrtpl.hcl
│ └── windows
│ └── <distribution>
│ └── <version>
│ ├── *.pkr.hcl
│ ├── *.auto.pkrvars.hcl
│ └── data
│ └── autounattend.pkrtpl.hcl
├── manifests
├── scripts
│ └── windows
│ └── *.ps1
└── terraform
    ├── vsphere-role
    └── vsphere-virtual-machine
The files are distributed in the following directories:

- `ansible` - contains the Ansible roles to prepare Linux machine image builds.
- `artifacts` - contains the OVF artifacts exported by the builds, if enabled.
- `builds` - contains the templates, variables, and configuration files for the machine image builds.
- `scripts` - contains the scripts to initialize and prepare Windows machine image builds.
- `manifests` - contains the manifests created after the completion of the machine image builds.
- `terraform` - contains example Terraform plans to create a custom role and test machine image builds.
Warning
When forking the project for upstream contribution, please be mindful not to make changes that may expose your sensitive information, such as passwords, keys, certificates, etc.
The project supports configuring the ISO from either a datastore or URL source. By default, the project uses the datastore source.
Follow the steps below to configure either option.
If you are using a datastore to store your guest operating system `.iso` files, you must download the files and upload them to a datastore path.
- Download the x64 guest operating system `.iso` files.

  Linux Distributions:
  - VMware Photon OS 4 - Download the 4.0 Rev2 release of the FULL `.iso` image. (e.g., `photon-4.0-xxxxxxxxx.iso`)
  - Debian 11 - Download the latest netinst release `.iso` image. (e.g., `debian-11.x.0-amd64-netinst.iso`)
  - Ubuntu Server 22.04 LTS - Download the latest LIVE release `.iso` image. (e.g., `ubuntu-22.04.x-live-server-amd64.iso`)
  - Ubuntu Server 20.04 LTS - Download the latest LIVE release `.iso` image. (e.g., `ubuntu-20.04.x-live-server-amd64.iso`)
  - Ubuntu Server 18.04 LTS - Download the latest legacy NON-LIVE release `.iso` image. (e.g., `ubuntu-18.04.x-server-amd64.iso`)
  - Red Hat Enterprise Linux 9 Server - Download the latest release of the FULL `.iso` image. (e.g., `rhel-baseos-9.x-x86_64-dvd.iso`)
  - Red Hat Enterprise Linux 8 Server - Download the latest release of the FULL `.iso` image. (e.g., `rhel-8.x-x86_64-dvd1.iso`)
  - Red Hat Enterprise Linux 7 Server - Download the latest release of the FULL `.iso` image. (e.g., `rhel-server-7.x-x86_64-dvd1.iso`)
  - AlmaLinux OS 9 - Download the latest release of the FULL `.iso` image. (e.g., `AlmaLinux-9.x-x86_64-dvd1.iso`)
  - AlmaLinux OS 8 - Download the latest release of the FULL `.iso` image. (e.g., `AlmaLinux-8.x-x86_64-dvd1.iso`)
  - Rocky Linux 9 - Download the latest release of the FULL `.iso` image. (e.g., `Rocky-9.x-x86_64-dvd1.iso`)
  - Rocky Linux 8 - Download the latest release of the FULL `.iso` image. (e.g., `Rocky-8.x-x86_64-dvd1.iso`)
  - CentOS Stream 9 - Download the latest release of the FULL `.iso` image. (e.g., `CentOS-Stream-9-latest-x86_64-dvd1.iso`)
  - CentOS Stream 8 - Download the latest release of the FULL `.iso` image. (e.g., `CentOS-Stream-8-x86_64-latest-dvd1.iso`)
  - CentOS Linux 7 - Download the latest release of the FULL `.iso` image. (e.g., `CentOS-7-x86_64-DVD.iso`)
  - SUSE Linux Enterprise 15 - Download the latest 15.3 release of the FULL `.iso` image. (e.g., `SLE-15-SP3-Full-x86_64-GM-Media1.iso`)

  Microsoft Windows:
  - Microsoft Windows Server 2022
  - Microsoft Windows Server 2019
  - Microsoft Windows 11
  - Microsoft Windows 10
- Obtain the checksum type (e.g., `sha256`, `md5`, etc.) and checksum value for each guest operating system `.iso` from the vendor. These will be used in the build input variables.

- Upload your guest operating system `.iso` files to the datastore and update the configuration variables, leaving the `iso_url` variable as `null`.

  Example: `config/common.pkvars.hcl`

  ```hcl
  common_iso_datastore = "sfo-w01-cl01-ds-nfs01"
  ```

  Example: `builds/<type>/<build>/*.auto.pkvars.hcl`

  ```hcl
  iso_url            = null
  iso_path           = "iso/linux/photon"
  iso_file           = "photon-4.0-xxxxxxxxx.iso"
  iso_checksum_type  = "md5"
  iso_checksum_value = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  ```
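Before uploading, you can also verify a downloaded image by computing its checksum locally and comparing it with the vendor's published value. A minimal sketch, assuming a GNU coreutils host (the filename is illustrative):

```shell
# Compute the checksum of a downloaded .iso and compare it with the
# vendor's published value. On macOS, use: shasum -a 256 "${ISO}"
ISO="photon-4.0-xxxxxxxxx.iso"   # illustrative filename
if [ -f "${ISO}" ]; then
  sha256sum "${ISO}"             # or md5sum / sha512sum to match iso_checksum_type
fi
```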
If you are using a URL source to obtain your guest operating system `.iso` files, you must update the input variables to use the URL source.

Update the `iso_url` variable to download the `.iso` from a URL. The `iso_url` variable takes precedence over any other `iso_*` variables.

Example: `builds/<type>/<build>/*.auto.pkvars.hcl`

```hcl
iso_url = "https://artifactory.rainpole.io/iso/linux/photon/4.0/x86_64/photon-4.0-xxxxxxxxx.iso"
```
Create a custom vSphere role with the required privileges to integrate HashiCorp Packer with VMware vSphere. A service account can be added to the role to ensure that Packer has least privilege access to the infrastructure. Clone the default Read-Only vSphere role and add the following privileges:
Category | Privilege | Reference |
---|---|---|
Content Library | Add library item | ContentLibrary.AddLibraryItem |
... | Update Library Item | ContentLibrary.UpdateLibraryItem |
Datastore | Allocate space | Datastore.AllocateSpace |
... | Browse datastore | Datastore.Browse |
... | Low level file operations | Datastore.FileManagement |
Network | Assign network | Network.Assign |
Resource | Assign virtual machine to resource pool | Resource.AssignVMToPool |
vApp | Export | vApp.Export |
Virtual Machine | Configuration > Add new disk | VirtualMachine.Config.AddNewDisk |
... | Configuration > Add or remove device | VirtualMachine.Config.AddRemoveDevice |
... | Configuration > Advanced configuration | VirtualMachine.Config.AdvancedConfig |
... | Configuration > Change CPU count | VirtualMachine.Config.CPUCount |
... | Configuration > Change memory | VirtualMachine.Config.Memory |
... | Configuration > Change settings | VirtualMachine.Config.Settings |
... | Configuration > Change Resource | VirtualMachine.Config.Resource |
... | Configuration > Set annotation | VirtualMachine.Config.Annotation |
... | Edit Inventory > Create from existing | VirtualMachine.Inventory.CreateFromExisting |
... | Edit Inventory > Create new | VirtualMachine.Inventory.Create |
... | Edit Inventory > Remove | VirtualMachine.Inventory.Delete |
... | Interaction > Configure CD media | VirtualMachine.Interact.SetCDMedia |
... | Interaction > Configure floppy media | VirtualMachine.Interact.SetFloppyMedia |
... | Interaction > Connect devices | VirtualMachine.Interact.DeviceConnection |
... | Interaction > Inject USB HID scan codes | VirtualMachine.Interact.PutUsbScanCodes |
... | Interaction > Power off | VirtualMachine.Interact.PowerOff |
... | Interaction > Power on | VirtualMachine.Interact.PowerOn |
... | Provisioning > Create template from virtual machine | VirtualMachine.Provisioning.CreateTemplateFromVM |
... | Provisioning > Mark as template | VirtualMachine.Provisioning.MarkAsTemplate |
... | Provisioning > Mark as virtual machine | VirtualMachine.Provisioning.MarkAsVM |
... | State > Create snapshot | VirtualMachine.State.CreateSnapshot |
If you would like to automate the creation of the custom vSphere role, a Terraform example is included in the project.
1. Navigate to the directory for the example.

   ```shell
   cd terraform/vsphere-role
   ```

2. Duplicate the `terraform.tfvars.example` file to `terraform.tfvars` in the directory.

   ```shell
   cp terraform.tfvars.example terraform.tfvars
   ```

3. Open the `terraform.tfvars` file and update the variables according to your environment.

4. Initialize the current directory and the required Terraform provider for VMware vSphere.

   ```shell
   terraform init
   ```

5. Create a Terraform plan and save the output to a file.

   ```shell
   terraform plan -out=tfplan
   ```

6. Apply the Terraform plan.

   ```shell
   terraform apply tfplan
   ```
Once the custom vSphere role is created, assign Global Permissions in vSphere for the service account used for the HashiCorp Packer to VMware vSphere integration. Global permissions are required for the content library. For example:
- Log in to the vCenter Server at `<management_vcenter_server_fqdn>/ui` as [email protected].
- Select Menu > Administration.
- In the left pane, select Access control > Global permissions and click the Add permissions icon.
- In the Add permissions dialog box, enter the service account (e.g., [email protected]), select the custom role (e.g., Packer to vSphere Integration Role) and the Propagate to children check box, and click OK.
In an environment with many vCenter Server instances, such as management and workload domains, you may wish to further reduce the scope of access across the infrastructure in vSphere for the service account. For example, if you do not want Packer to have access to your management domain, but only allow access to workload domains:
1. From the Hosts and clusters inventory, select the management domain vCenter Server to restrict the scope, and click the Permissions tab.
2. Select the service account with the custom role assigned and click the Change role icon.
3. In the Change role dialog box, from the Role drop-down menu, select No Access, select the Propagate to children check box, and click OK.
The variables are defined in `.pkvars.hcl` files.

Run the config script `./config.sh` to copy the `.pkvars.hcl.example` files to the `config` directory.

The `config` folder is the default. You may override the default by passing an alternate value as the first argument.

```shell
./config.sh foo
./build.sh foo
```

For example, this is useful for running machine image builds for different environments.
San Francisco: us-west-1

```shell
./config.sh config/us-west-1
./build.sh config/us-west-1
```

Los Angeles: us-west-2

```shell
./config.sh config/us-west-2
./build.sh config/us-west-2
```
Edit the `config/build.pkvars.hcl` file to configure the following:

- Credentials for the default account on machine images.

Example: `config/build.pkvars.hcl`

```hcl
build_username           = "rainpole"
build_password           = "<plaintext_password>"
build_password_encrypted = "<sha512_encrypted_password>"
build_key                = "<public_key>"
```
You can also override the `build_key` value with the contents of a file, if required. For example:

```hcl
build_key = file("${path.root}/config/ssh/build_id_ecdsa.pub")
```
Generate a SHA-512 encrypted password for the `build_password_encrypted` variable using tools such as mkpasswd.
Example: mkpasswd using Docker on Photon:

```shell
rainpole@photon> sudo systemctl start docker
rainpole@photon> sudo docker run -it --rm alpine:latest
mkpasswd -m sha512
Password: ***************
[password hash]
rainpole@photon> sudo systemctl stop docker
```

Example: mkpasswd using Docker on macOS:

```shell
rainpole@macos> docker run -it --rm alpine:latest
mkpasswd -m sha512
Password: ***************
[password hash]
```

Example: mkpasswd on Ubuntu:

```shell
rainpole@ubuntu> mkpasswd -m sha-512
Password: ***************
[password hash]
```
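If mkpasswd is not available, OpenSSL (1.1.1 or later) can also produce a compatible SHA-512 crypt hash. A sketch, with an explicit salt shown only for illustration (omit `-salt` to have one generated):

```shell
# Generate a SHA-512 crypt(3) hash; the output begins with the $6$ scheme marker.
openssl passwd -6 -salt 'examplesalt' '<plaintext_password>'
```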
Generate a public key for the `build_key` variable for public key authentication.
Example: macOS and Linux.

```shell
rainpole@macos> cd .ssh/
rainpole@macos ~/.ssh> ssh-keygen -t ecdsa -b 521 -C "[email protected]"
Generating public/private ecdsa key pair.
Enter file in which to save the key (/Users/rainpole/.ssh/id_ecdsa):
Enter passphrase (empty for no passphrase): **************
Enter same passphrase again: **************
Your identification has been saved in /Users/rainpole/.ssh/id_ecdsa.
Your public key has been saved in /Users/rainpole/.ssh/id_ecdsa.pub.
```
The content of the public key, `build_key`, is added to the `.ssh/authorized_keys` file of the `build_username` account on the guest operating system.
Warning
Replace the default public keys and passwords.
By default, both public key authentication and password authentication are enabled for Linux distributions. If you wish to disable password authentication and only use public key authentication, comment out or remove the relevant portion of the associated Ansible `configure` role.
Edit the `config/ansible.pkvars.hcl` file to configure the following:

- Credentials for the Ansible account on Linux machine images.

Example: `config/ansible.pkvars.hcl`

```hcl
ansible_username = "ansible"
ansible_key      = "<public_key>"
```
Note
A random password is generated for the Ansible user.
You can also override the `ansible_key` value with the contents of a file, if required. For example:

```hcl
ansible_key = file("${path.root}/config/ssh/ansible_id_ecdsa.pub")
```
Edit the `config/common.pkvars.hcl` file to configure the following common variables:
- Virtual Machine Settings
- Template and Content Library Settings
- OVF Export Settings
- Removable Media Settings
- Boot and Provisioning Settings
- HCP Packer Registry
Example: `config/common.pkvars.hcl`

```hcl
// Virtual Machine Settings
common_vm_version           = 19
common_tools_upgrade_policy = true
common_remove_cdrom         = true

// Template and Content Library Settings
common_template_conversion     = false
common_content_library_name    = "sfo-w01-lib01"
common_content_library_ovf     = true
common_content_library_destroy = true

// OVF Export Settings
common_ovf_export_enabled   = false
common_ovf_export_overwrite = true

// Removable Media Settings
common_iso_datastore = "sfo-w01-cl01-ds-nfs01"

// Boot and Provisioning Settings
common_data_source      = "http"
common_http_ip          = null
common_http_port_min    = 8000
common_http_port_max    = 8099
common_ip_wait_timeout  = "20m"
common_shutdown_timeout = "15m"

// HCP Packer
common_hcp_packer_registry_enabled = false
```
`http` is the default provisioning data source for Linux machine image builds. If iptables is enabled on your Packer host, you will need to open the `common_http_port_min` through `common_http_port_max` ports.
Example: Open a port range in iptables.

```shell
iptables -A INPUT -p tcp --match multiport --dports 8000:8099 -j ACCEPT
```
You can change the `common_data_source` from `http` to `disk` to build supported Linux machine images without the need to use Packer's HTTP server. This is useful for environments that may not be able to route back to the system from which Packer is running.
The `cd_content` option is used when selecting `disk`, unless the distribution does not support a secondary CD-ROM; for those distributions the `floppy_content` option is used.

```hcl
common_data_source = "disk"
```
If you need to bind Packer's HTTP server to a specific IPv4 address on your host, change the `common_http_ip` variable from `null` to a `string` value that matches an IP address on your Packer host. For example:

```hcl
common_http_ip = "172.16.11.254"
```
Edit the `config/proxy.pkvars.hcl` file to configure the following:

- SOCKS proxy settings used for connecting to Linux machine images.
- Credentials for the proxy server.

Example: `config/proxy.pkvars.hcl`

```hcl
communicator_proxy_host     = "proxy.rainpole.io"
communicator_proxy_port     = 8080
communicator_proxy_username = "rainpole"
communicator_proxy_password = "<plaintext_password>"
```
Edit the `config/redhat.pkvars.hcl` file to configure the following:

- Credentials for your Red Hat Subscription Manager account.

Example: `config/redhat.pkvars.hcl`

```hcl
rhsm_username = "rainpole"
rhsm_password = "<plaintext_password>"
```
These variables are only used if you are performing a Red Hat Enterprise Linux Server build and are used to register the image with Red Hat Subscription Manager during the build for system updates and package installation. Before the build completes, the machine image is unregistered from Red Hat Subscription Manager.
Edit the `config/scc.pkvars.hcl` file to configure the following:

- Credentials for your SUSE Customer Connect account.

Example: `config/scc.pkvars.hcl`

```hcl
scc_email = "[email protected]"
scc_code  = "<plaintext_code>"
```
These variables are only used if you are performing a SUSE Linux Enterprise Server build and are used to register the image with SUSE Customer Connect during the build for system updates and package installation. Before the build completes, the machine image is unregistered from SUSE Customer Connect.
Edit the `config/vsphere.pkvars.hcl` file to configure the following:

- vSphere Endpoint and Credentials
- vSphere Settings

Example: `config/vsphere.pkvars.hcl`

```hcl
vsphere_endpoint            = "sfo-w01-vc01.sfo.rainpole.io"
vsphere_username            = "[email protected]"
vsphere_password            = "<plaintext_password>"
vsphere_insecure_connection = true
vsphere_datacenter          = "sfo-w01-dc01"
vsphere_cluster             = "sfo-w01-cl01"
vsphere_datastore           = "sfo-w01-cl01-ds-vsan01"
vsphere_network             = "sfo-w01-seg-dhcp"
vsphere_folder              = "sfo-w01-fd-templates"
```
If you prefer not to save potentially sensitive information in cleartext files, you can add the variables to environment variables using the included `set-envvars.sh` script:

```shell
rainpole@macos> . ./set-envvars.sh
```

Note
You need to run the script as source or with the shorthand `.`.
Edit the `*.auto.pkvars.hcl` file in each `builds/<type>/<build>` folder to configure the following virtual machine hardware settings, as required:

- CPUs (int)
- CPU Cores (int)
- Memory in MB (int)
- Primary Disk in MB (int)
- .iso URL (string)
- .iso Path (string)
- .iso File (string)
- .iso Checksum Type (string)
- .iso Checksum Value (string)

Note
All `*.auto.pkvars.hcl` files default to using the VMware Paravirtual SCSI controller and the VMXNET 3 network card device types.
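For example, the hardware portion of a Linux build's `*.auto.pkvars.hcl` might look like the following sketch (variable names and values are illustrative, not verbatim from the project):

```hcl
// Illustrative virtual machine hardware settings.
vm_cpu_count = 2     // CPUs (int)
vm_cpu_cores = 1     // CPU cores per socket (int)
vm_mem_size  = 2048  // Memory in MB (int)
vm_disk_size = 40960 // Primary disk in MB (int)
```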
If required, modify the configuration files for the Linux distributions and Microsoft Windows.
Username and password variables are passed into the kickstart or cloud-init files for each Linux distribution as Packer template files (`.pkrtpl.hcl`) to generate these on demand. Ansible roles are then used to configure the Linux machine image builds.
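As a sketch of how the template rendering fits together (the exact attribute and variable names vary by build; this excerpt is an assumption, not copied from the project), a source block can render the kickstart template and serve it from Packer's HTTP server:

```hcl
// Hypothetical excerpt from a *.pkr.hcl source block.
http_content = {
  "/ks.cfg" = templatefile("${abspath(path.root)}/data/ks.pkrtpl.hcl", {
    build_username           = var.build_username
    build_password_encrypted = var.build_password_encrypted
    build_key                = var.build_key
  })
}
```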
Variables are passed into the Microsoft Windows unattend files (`autounattend.xml`) as Packer template files (`autounattend.pkrtpl.hcl`) to generate these on demand. By default, each unattend file is set to use the KMS client setup keys as the Product Key.
PowerShell scripts are used to configure the Windows machine image builds.
Need help customizing the configuration files?
- VMware Photon OS - Read the Photon OS Kickstart Documentation.
- Ubuntu Server - Install and run system-config-kickstart on an Ubuntu desktop.

  ```shell
  sudo apt-get install system-config-kickstart
  ssh -X rainpole@ubuntu
  sudo system-config-kickstart
  ```

- Red Hat Enterprise Linux (as well as CentOS Linux/Stream, AlmaLinux OS, and Rocky Linux) - Use the Red Hat Kickstart Generator.
- SUSE Linux Enterprise Server - Use the SUSE Configuration Management System.
- Microsoft Windows - Use the Microsoft Windows Answer File Generator if you need to customize the provided examples further.
If you are new to HCP Packer, review the following documentation and video to learn more before enabling an HCP Packer registry:
Before you can use the HCP Packer registry, you need to create it by following the Create HCP Packer Registry procedure.
Edit the `config/common.pkvars.hcl` file to enable the HCP Packer registry.

```hcl
// HCP Packer
common_hcp_packer_registry_enabled = true
```
Then, export your HCP credentials before building.

```shell
rainpole@macos> export HCP_CLIENT_ID=<client_id>
rainpole@macos> export HCP_CLIENT_SECRET=<client_secret>
```
Start a build by running the build script (`./build.sh`). The script presents a menu that simply calls Packer and the respective build(s).

You can also start a build based on a specific source for some of the virtual machine images. For example, if you want to build only a Microsoft Windows Server 2022 Standard Core image, run the following:
Initialize the plugins:

```shell
rainpole@macos> packer init builds/windows/server/2022/.
```

Build a specific machine image:

```shell
rainpole@macos> packer build -force \
  --only vsphere-iso.windows-server-standard-core \
  -var-file="config/vsphere.pkrvars.hcl" \
  -var-file="config/build.pkrvars.hcl" \
  -var-file="config/common.pkrvars.hcl" \
  builds/windows/server/2022
```
You can set environment variables if you would prefer not to save sensitive information in cleartext files. You can add these using the included `set-envvars.sh` script.

```shell
rainpole@macos> . ./set-envvars.sh
```

Note
You need to run the script as source or with the shorthand `.`.
Initialize the plugins:

```shell
rainpole@macos> packer init builds/windows/server/2022/.
```

Build a specific machine image using environment variables:

```shell
rainpole@macos> packer build -force \
  --only vsphere-iso.windows-server-standard-core \
  builds/windows/server/2022
```
The build script (`./build.sh`) can be generated using a template (`./build.tmpl`) and a configuration file in YAML (`./build.yaml`).

Generate a custom build script:

```shell
rainpole@macos> gomplate -c build.yaml -f build.tmpl -o build.sh
```
Happy building!
- Read Debugging Packer Builds.
- Owen Reynolds (@OVDamn) - VMware Tools for Windows installation PowerShell script.