Attempting to use packer-builder-arm as an automatic plugin results in naming error #100
In a similar vein, I found the current README.md unhelpful in understanding how to install packer-builder-arm as a plugin that I can use in other packer-based projects. So let me work up a quick PR for that.
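(A quick sketch for anyone who lands here before the README is updated: with the classic, pre-`packer init` discovery mechanism, Packer finds third-party plugins by binary name on the plugin path, so the repo name only matters insofar as it produces the right binary name. Paths below assume the Linux/macOS defaults.)

```sh
# Packer maps a builder used as "arm" to a binary named packer-builder-arm
# found in ~/.packer.d/plugins (or next to the template being built).
git clone https://github.com/mkaczanowski/packer-builder-arm
cd packer-builder-arm
go build -o packer-builder-arm .
mkdir -p ~/.packer.d/plugins
cp packer-builder-arm ~/.packer.d/plugins/
```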
Renaming the repo might break a few things for people who already use the plugin. I wonder if there is some aliasing option on GH.
When you rename a repo the old one automatically redirects to the new one. Are you sure it's referring to the repo name, though, and not getting it from somewhere in Go? Annoying if so; it doesn't seem at all redundant to me to put 'packer' in the name of the repository for a packer plugin...
I think this could be a feature/bugfix to raise with packer's community, because the name of the repo shouldn't be relevant for this, and it makes a lot of sense to keep the packer prefix. It seems that the implicit GitHub URL feature is causing the problem/confusion here: https://www.packer.io/docs/plugins#implicit-github-urls
When you rename a repo, GitHub will automatically create redirects from the old name: https://docs.github.com/en/repositories/creating-and-managing-repositories/renaming-a-repository. It would be nice for this plugin to be compatible with the new packer init feature in 1.7+ (although I agree the "implicit GitHub URLs" functionality is a bit confusing).
Any update on this? It would be great to use the plugin with the packer init feature! Thanks in advance.
I was also trying to set this up today after a long period of maintaining a fork with my own configs and scripts added. In the meantime I've set up a repo with only my things that has:

```sh
git submodule add https://github.com/mkaczanowski/packer-builder-arm
cd packer-builder-arm
git fetch origin --tags
git checkout v1.0.1
cd ..
git add packer-builder-arm
git commit -m "Added arm source [email protected]"
```

At the time of writing v1.0.1 is the latest tag, but you should check what the current version is and keep it up to date as new versions come out. Then I use this shell script, which is a little kludgy, but the context is that I want it to run in a GitLab CI pipeline and then store the built images and SHAs in my artifact repository (which is how I built images previously with my fork). To do this in my new repo I have written a shell script that is aware of the submodule. My repo is called sbc-images:

```sh
#!/bin/bash
set -o errtrace -o nounset -o pipefail -o errexit #-o xtrace
# setup (teardown if no previous successful teardown)
[ ! -f .cleaned ] && rm -rf packer-builder-arm/sbc-images
# remove successful teardown lockfile if present
[ -f .cleaned ] && rm -f .cleaned
# ensure you have good qemu versions
sudo docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
mkdir -p packer-builder-arm/sbc-images
# built images
cp -r artifacts/ packer-builder-arm/sbc-images/
# board configs
cp -r boards/ packer-builder-arm/sbc-images/
# conf files to use directly
cp -r conf/ packer-builder-arm/sbc-images/
# Dockerfiles or docker-compose.yml files
cp -r docker/ packer-builder-arm/sbc-images/
# Scripts and HCL templates
cp -r scripts/ packer-builder-arm/sbc-images/
cd packer-builder-arm
go mod download
go build
# PACKER_LOG=1 \ # You can uncomment this if something in the build step is going wrong
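# (note: the grep below just drops the very chatty "Cannot stat file /proc/"
# warnings from the build output so real errors are easier to spot)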
sudo -E packer build sbc-images/boards/"$1"."${2:-pkr.hcl}" | grep -v "Cannot stat file /proc/"
cd ..
# grab anything we built
cp -f packer-builder-arm/sbc-images/artifacts/* artifacts/
# teardown
rm -rf packer-builder-arm/sbc-images
# indicate successful teardown via lockfile
touch .cleaned
```

You invoke the build script like `PKR_VAR_some_var=... ./build.sh armbian/lepotato`. The script assumes you are using HCL configs (note the `${2:-pkr.hcl}` default); if your board config is JSON instead, pass the extension as the second argument: `PKR_VAR_some_var=... ./build.sh armbian/lepotato json`. Then my board configs look like this:

```hcl
...variables...
source "arm" "customizer" {
...
image_path = "sbc-images/artifacts/${var.image_name}.img"
...
}
build {
sources = [
"source.arm.customizer",
]
# HCL templates are written as they would be with everything co-located
provisioner "file" {
content = templatefile("../../scripts/raspios/templates/headless_user.sh.pkrtpl.hcl", {
user_login = var.user_login
user_password = var.user_password
})
destination = "/tmp/headless_user.sh"
}
  # for non-template file provisioners, don't forget to prefix with the path to your repo
provisioner "file" {
source = "sbc-images/conf/cupsd.conf"
destination = "/tmp/cupsd.conf"
}
# and the same is true of any shell scripts you run with the shell provisioner:
provisioner "shell" {
scripts = [
# boot (sneaky system + bios)
"sbc-images/scripts/raspios/headless_ssh.sh",
"sbc-images/scripts/common/git_completion_prompt_bash-new_users.sh",
"sbc-images/scripts/raspios/call-headless_user.sh",
"sbc-images/scripts/common/preconfigure_wifi.sh",
# system
"sbc-images/scripts/common/call-hostname.sh",
"sbc-images/scripts/common/apt_get_update_upgrade_autoremove__y.sh",
"sbc-images/scripts/common/call-static_ip.sh",
# dependencies
"sbc-images/scripts/common/git.sh",
# langs
"sbc-images/scripts/common/call-rbenv.sh",
"sbc-images/scripts/common/call-python3_pyenv_pipenv_pip.sh",
# hardware
"sbc-images/scripts/raspios/call-hyperpixel4.sh",
"sbc-images/scripts/raspios/call-lil-screenie-boi.sh",
"sbc-images/scripts/raspios/call-pi-official-7in-display.sh",
"sbc-images/scripts/common/call-hplip.sh",
# applications
"sbc-images/scripts/common/vim.sh",
"sbc-images/scripts/common/call-cups.sh",
"sbc-images/scripts/common/call-cloud_print.sh",
"sbc-images/scripts/raspios/call-unimon.sh",
"sbc-images/scripts/common/onboard.sh",
    ]
  }
}
```

(If you haven't used HCL templates before, they rule, but if you are templating a shell script you'll need to use a caller script to run it; a sketch of that pattern is at the end of this comment.) This set-up works for me and prevents me from having to maintain a fork. If and when we get the new plugin support I can ditch the submodule and the build script and uncomment this, which is sitting in anticipation at the top of my board configs:

```hcl
packer {
required_plugins {
arm = {
version = ">= 1.0.1"
source = "github.com/mkaczanowski/packer-builder-arm"
}
}
}
```

And remove the repo name from the source `image_path`, the non-template file provisioners, and the shell provisioner paths. That's the hope at least. I love this project, and this setup allows me to have just one config per board. I just pass the packer vars as environment variables to ./build.sh in CI or on the CLI, and hopefully in the future just to `packer build` directly. Hope this helped somebody!
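As mentioned above, here is roughly what one of those caller scripts looks like. This is a sketch of the pattern, not the actual contents of call-headless_user.sh, and it assumes the /tmp destination used by the file provisioner earlier:

```sh
#!/bin/bash
# the file provisioner has already uploaded the rendered template to /tmp,
# so the caller just makes it executable, runs it, and cleans up after itself
set -o errexit
chmod +x /tmp/headless_user.sh
/tmp/headless_user.sh
rm -f /tmp/headless_user.sh
```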
Hey, just a heads up... merged lots of PRs in the last days. Will look into this topic and the existing PR for it next.
It would be superb to be able to "automatically" pull this plugin into a local packer environment via the plugin capabilities added in 1.7. I seem to be hitting a naming convention error when I try this.
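My reading of the packer docs (an interpretation, not something confirmed in this thread) is that the naming error falls out of how `packer init` resolves source addresses: the last component of the address is expanded into a `packer-plugin-` repository name, as sketched below.

```sh
# How packer init resolves a plugin, as I understand the docs:
#   source = "github.com/<org>/<name>"
# is fetched from the GitHub releases of the repo <org>/packer-plugin-<name>,
# so this project can't currently be resolved that way: the repo is named
# packer-builder-arm rather than packer-plugin-arm.
packer init .    # installs everything declared under required_plugins
```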