ARM64 support for image #194

Closed
guillermoap opened this issue Apr 25, 2021 · 22 comments

Comments

@guillermoap

Hello, I'm trying to use this image on an M1 Mac, but I'm facing the same issue reported here. The solution to that issue, which worked for me with other images, is to use arm64-compatible images.

After checking the supported arch/OS for the images, I noticed that only amd64 is supported. Would it be possible to add support for arm64?

Thanks

@edmorley
Member

edmorley commented May 14, 2021

Hi! Thank you for opening this.

I would be open to adding support for this - though from some initial searching, it seems there are some limitations when generating multi-arch images/manifests, in that they have to be pushed directly to the registry, which means the deployment workflow in this repo would need some refactoring.
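
For illustration, here's a rough sketch (not this repo's actual deploy workflow; the image tag is just a placeholder) of what such a multi-arch build typically looks like with Docker buildx - the combined manifest can't be loaded into the local daemon, so it has to be pushed straight to a registry:

```
# Hypothetical sketch only; the tag below is a placeholder.
# Create a builder instance that supports multi-platform builds:
docker buildx create --use
# Build both architectures in one invocation; multi-platform manifests
# can't be --load'ed into the local daemon, so the result is pushed:
docker buildx build --platform linux/amd64,linux/arm64 \
  --tag registry.example.com/heroku-like:test --push .
```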

This isn't something that we'll be working on soon, but I would be happy to take a look at PRs if someone wanted to work on it in the meantime :-)

@mmmmmrob

These images are super helpful for local dev and test. Thanks for maintaining them.

I've just upgraded to an M1 MacBook Pro and could really do with an arm64 Heroku image.

@edmorley
Member

edmorley commented Mar 17, 2022

One thing worth noting is that if ARM64 support were added to the heroku/heroku:* images, I'm presuming Docker clients would automatically (and silently) start using those images locally on Apple M1 machines, which would cause unexpected breakage in some cases.

For example, people whose Dockerfiles install/run AMD64 binaries without checking architecture (which I'm presuming is most of them), or people running Heroku buildpacks inside these images (since there is currently no ARM64 support for Heroku buildpacks). Worse, for anyone using these images to generate binaries for use on Heroku (eg runtime binaries for use by a buildpack), the change would silently cause the generated binaries to target a different architecture than that used on Heroku.
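
To make that concrete, here's a hedged sketch of the kind of guard a Dockerfile RUN step would need before fetching a prebuilt binary (the tool name and URLs are purely illustrative):

```
# Illustrative only: pick the download matching the image's architecture
# instead of hard-coding the amd64 build.
arch="$(dpkg --print-architecture)"   # "amd64" on Heroku today; "arm64" on an Apple Silicon-native image
case "$arch" in
  amd64) url="https://example.com/mytool-linux-amd64" ;;
  arm64) url="https://example.com/mytool-linux-arm64" ;;
  *) echo "Unsupported architecture: $arch" >&2; exit 1 ;;
esac
curl -fsSL -o /usr/local/bin/mytool "$url"
chmod +x /usr/local/bin/mytool
```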

Also, if people are using the Heroku stack images locally as a way to try and ensure dev-prod parity, running different architectures in development vs production isn't really achieving that (even ignoring the other differences, such as external services/datastores). The only true way to ensure your app works in production is via Heroku CI/Review Apps/pre-production Heroku apps, environments running in the same environment as production. At which point, why not use a slimmer non-Heroku Docker image for development, or else just run your app on your local machine outside of Docker?

That's not to say we won't ever consider adding ARM64 support, but (a) it will need to be considered carefully, (b) it will need lots of support from other parts of the ecosystem too (eg buildpacks), and (c) it may actually achieve the opposite of what people are hoping for when using these images (wrt dev-prod parity).

edmorley added a commit to heroku/heroku-buildpack-python that referenced this issue Mar 23, 2022
Currently when any of the Docker related `Makefile` targets are invoked
from a machine that is not using the AMD64 (x86-64) architecture (such
as a machine using the Apple M1), it emits the following warning:

```
WARNING: The requested image's platform (linux/amd64) does not match the
detected host platform (linux/arm64/v8) and no specific platform was requested
```

In addition, were the `heroku/heroku:*` images ever to support ARM64
(see heroku/base-images#194), relying on an
implicit platform value would mean the runtime generation tasks would
silently start to generate binaries for a different architecture.

To prevent this warning, and prevent such surprises with binary
generation, the platform is now specified explicitly using `--platform`.

GUS-W-10884947.
@boboldehampsink

boboldehampsink commented Jul 8, 2022

This would really speed up local development, at the very least. x86 emulation in Docker on M1 is so slow that it isn't even worth having an M1 over a good old Intel Mac.

Here's an article on using buildx on GitHub to do multi-arch builds: https://itnext.io/building-multi-cpu-architecture-docker-images-for-arm-and-x86-1-the-basics-2fa97869a99b
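
For anyone following that approach: cross-building the non-native architecture on a typical amd64 CI runner relies on QEMU emulation being registered first. A commonly used setup step (shown here only as an illustration) is:

```
# Register QEMU binfmt handlers so buildx can emulate other architectures,
# using the widely used tonistiigi/binfmt helper image.
docker run --privileged --rm tonistiigi/binfmt --install all
```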

@ms-ati

ms-ati commented Aug 12, 2022

@edmorley this is also an issue at Panorama Education, as our engineering team moves to M1 Macs. Is there anything we can do to help accelerate this work?

@ms-ati

ms-ati commented Aug 12, 2022

And yes, the buildpacks would also need to have M1-compatible binary builds -- is this not an issue for many/most Heroku clients today? Should we reach out through our account rep?

@edmorley
Member

No one has yet replied to this part of my earlier comment:

Also, if people are using the Heroku stack images locally as a way to try and ensure dev-prod parity, running different architectures in development vs production isn't really achieving that (even ignoring the other differences, such as external services/datastores). The only true way to ensure your app works in production is via Heroku CI/Review Apps/pre-production Heroku apps, environments running in the same environment as production. At which point, why not use a slimmer non-Heroku Docker image for development, or else just run your app on your local machine outside of Docker?

That's not to say we won't ever consider adding ARM64 support, but (a) it will need to be considered carefully, (b) it will need lots of support from other parts of the ecosystem too (eg buildpacks), and (c) it may actually achieve the opposite of what people are hoping for when using these images (wrt dev-prod parity).

ie:

The whole point of these ridiculously large (compared to single purpose images) Heroku stack Docker images is to provide a way to ensure dev-prod parity, or to help debug any potential "Heroku deployment specific" problems.

However running different architectures in development vs production isn't really achieving dev-prod parity.

In which case, why not just use a simpler/faster approach locally anyway? (Such as a slimmer image which is already ARM64-compatible, or just running the Python/Ruby/... runtime on your local machine, as a lot of people do for best performance anyway.)

I mean, it would be great if we could have both (full dev-prod parity and great performance), but until Docker sorts out the performance issues from emulation (eg by replacing QEMU, see docker/roadmap#384), there aren't really many great options here.

@ms-ati

ms-ati commented Aug 12, 2022

Hmm, is the idea that rebuilding the exact same stack of software, but built for a different CPU architecture, prevents the delivery of 100% of the value that development teams get out of using Docker locally and then deploying on Heroku?

If so, I'm not sure that makes sense to me. Can you say a bit more about what is lost in terms of dev-prod parity, by rebuilding the identical software stack for the other CPU architecture? I would have imagined it's only the risk of architecture-specific bugs in the binaries?

@edmorley
Member

Building with different architectures will be close enough most of the time, yes.

My point (from one of my earlier comments) is more that I don't feel using Heroku's stack images locally is the best choice most of the time. It's very hard to achieve true production parity - and in most cases the extra effort and reduced performance that has to be incurred just isn't worth it (eg making devs or third-party CI pull down 1GB+ Heroku stack images).

Even if you try to emulate production with Docker + Heroku's stack images + manually running buildpacks in the Dockerfile (I'm presuming this is what you are doing now?), it's quite far from what happens in production:

  1. Heroku doesn't use Docker, and uses slugs rather than OCI images
  2. the Heroku build system does other things in addition to what's in the buildpack (and those things won't be in your local Dockerfile)
  3. there will be many other (potentially more significant) differences locally - think datastores, external services, different env vars, running in debug mode, etc.

As such, "it works locally" will never 100% translate to "it works in production". Therefore you are always going to need something like Heroku CI, Review Apps or a staging app to catch such issues.

The question then becomes what configuration locally minimises how many times something is only caught by CI/Review Apps/staging app, but also isn't a pain to maintain / is still performant. IMO right now that sweet spot is to use a lightweight language-specific Docker image locally (not the Heroku stack images), which also happen to already support ARM64.
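
As an illustration of that sweet spot (the image tag and mount paths are only an example, not a recommendation of a specific runtime/version):

```
# Run the app's language runtime from an official slim image that already
# ships an arm64 variant, mounting the project directory from the host.
docker run --rm -it -v "$PWD":/app -w /app ruby:3.2-slim bash
```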

With CNBs (Cloud Native Buildpacks) some of the above dev-prod compatibility issues will go away (1 and 2), and running pack build locally will both get you closer to what's used in production and be more performant (due to the way CNB caching works) - at which point the sweet spot for what to use locally will likely change. However, the migration to CNBs is a way off, and the upstream CNB project doesn't yet support multi-arch images either.

As such, in the meantime, I really would just recommend using slimmer official upstream single-language Docker images that already support ARM64.

@benalavi
Contributor

@edmorley (& others): We've been using these Heroku stack images as a base for VMs to have (roughly) dev/prod parity for quite a while now (we install the stack image then run buildpacks on top to install the rest of the dependencies). The goal was really to have good-enough dev/prod parity that we don't have to maintain ourselves. We have rarely run into a bug where we had a local package that wasn't present on production; more often it has helped us fix things, because we have pretty much the same ImageMagick with pretty much the same build configuration locally.

With the move to ARM Macs we've been experimenting with where our local dev environments are going to go. Since x86 emulation seems to be a non-starter, we're thinking we could run ARM builds locally and they might be "good enough". So far we were able to install the heroku-20 stack images on an Ubuntu 20.04 ARM image pretty easily with a couple of minor changes (roughly sketched after the list below):

  • Change the sources from http://archive.ubuntu.com/ubuntu/ to http://ports.ubuntu.com/
  • The syslinux package isn't available on ARM, but the syslinux-common package may work instead (we haven't tried this yet)
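
Roughly, the two tweaks above amount to something like this (a hedged sketch; exact repository paths and package names may need adjusting per release):

```
# 1. arm64 packages are served from ports.ubuntu.com rather than archive.ubuntu.com:
sed -i 's|http://archive.ubuntu.com/ubuntu/|http://ports.ubuntu.com/|g' /etc/apt/sources.list
apt-get update
# 2. syslinux has no arm64 build; syslinux-common is the closest substitute (untested, as noted above):
apt-get install -y --no-install-recommends syslinux-common
```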

We're now working on installing arm instead of x86 binaries via the Ruby & Node buildpacks (our other buildpacks have worked so far).

If we can get everything installed and our tests run then I think we'll have "good enough" dev/prod parity. The benefit for us is really that we have VMs with mostly the same packages/versions as production that we don't have to keep up to date ourselves (same reason we use Heroku really 😅).

The build performance isn't really an issue for our case. We don't rebuild our VMs that often, and when we do we only rebuild the stack images on a clean install (otherwise we rebuild starting w/ the buildpacks).

Notably we aren't using Docker. We could give it a shot (I hear it's popular) and it sounds like that might take the place of needing to install the stack images, but we'd still need to install the buildpacks w/ arm binaries on top, so I'm not sure it would be a huge difference considering the stack images already pretty much work (or at least seem to).

So at least for now I think we may have a reasonable use case for ARM versions of the Heroku stack images, but to be fair we haven't been able to get everything installed and run our tests yet, so I'm not sure if it's going to work out as we are hoping. If it does work and Heroku doesn't want to maintain them, we could probably automate the changes pretty easily (at least for heroku-20), so that might be another option.

Alternatively there might be some way better solution we should pursue here, but I think we really want to avoid having to maintain our local/dev dependencies separately from our production/CI dependencies (we also use Heroku CI btw).

@boboldehampsink

Thanks @benalavi, with your input I have submitted a PR that would build the stack images for both amd64 and arm64: #227

Needs testing, so any feedback would be nice

@boboldehampsink

FWIW I now have a multi-arch setup fully functioning. Running Heroku docker images on M1 is now blazing fast.

@ms-ati

ms-ati commented Sep 15, 2022

HELLO! Is there an official plan at Heroku to merge #227 or otherwise enable “blazing fast” Docker images on M1 Macs yet? We’ve reached out to our account rep as well.

@ms-ati

ms-ati commented Sep 15, 2022

@boboldehampsink Do you think it might be possible to write up a post detailing all the steps taken to enable other dev teams to follow your lead? I know it would be super valuable!

@boboldehampsink

@ms-ati apart from building the stack image for arm64, every other step was specific to the buildpack (I used PHP) - do you perhaps use PHP?

@edmorley
Member

Is there an official plan at Heroku to merge #227 or otherwise enable “blazing fast” docker images on M1 Macs yet?

Hi! That PR is a great initial step; however, it's not ready for merging yet (doing so would cause a lot of end-user breakage, for several reasons). The PR hadn't been reviewed yet since it was still in "draft" status; however, I've now left some comments to avoid any further misunderstanding about its readiness. See:
#227 (review)

@ms-ati

ms-ati commented Sep 17, 2022

@boboldehampsink thanks for the public example; our team is seeing if we can do the same. We'll report back for anyone else following along.

We are using Ruby on Rails with some Buildpacks, including self-created ones to add additional binary dependencies.

@edmorley
Member

edmorley commented Jan 18, 2023

it would be great if we could have both (full dev-prod parity and great performance), but until Docker sort out the performance issues from emulation (eg by replacing QEMU, see docker/roadmap#384), there aren't really many great options here.

The latest version of Docker for macOS includes a beta of the Rosetta support mentioned above, which has sped up some of our internal x86_64-on-M1 image workflows by 3x - and it seems others are experiencing similar gains, eg:
docker/roadmap#384 (comment)

If you haven't tried it I can highly recommend taking a look - you'll need:

  • the latest Docker Desktop update
  • macOS 13 (it isn't supported on older macOS versions)
  • to enable it, tick "Use Rosetta for x86/amd64 emulation on Apple Silicon" via Settings -> "Features in development" (the feature has since been GAed, so it is now found under Settings -> General); a quick sanity check is sketched below
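
Once enabled, a quick way to sanity-check that amd64 images run under emulation (the image tag is just an example):

```
# Should print "x86_64" even on an Apple Silicon host, confirming the amd64
# variant of the image is being pulled and emulated.
docker run --rm --platform linux/amd64 ubuntu:22.04 uname -m
```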

In terms of next steps for ARM64 images, this talk discusses some of the upcoming ideas/work for how the upstream Cloud Native Buildpacks project may support multi-architecture images (which is a pre-requisite for us supporting ARM64 for CNBs):
https://www.youtube.com/watch?v=Sdr5axlOnDI

@fabswt

fabswt commented Feb 1, 2024

An ARM64 Heroku image would be really helpful for local dev. Here's another example:

I use a Docker image + Flask. Flask automatically reloads when a Python file changes – great for dev! But it's super slow to reload because qemu crashes:

```
[2024-01-25 06:36:37 +0100] [353] [INFO] Worker reloading: /var/www/html/python-tests/pronunciation-demo/application/routes/check_vue.py modified
[2024-01-25 06:37:07 +0100] [353] [INFO] Worker exiting (pid: 353)
[2024-01-25 06:37:07 +0100] [350] [CRITICAL] WORKER TIMEOUT (pid:353)
qemu: uncaught target signal 6 (Aborted) - core dumped
[2024-01-25 06:37:07 +0100] [350] [ERROR] Worker (pid:353) was sent SIGABRT!
[2024-01-25 06:37:07 +0100] [379] [INFO] Booting worker with pid: 379
USE_AZURE_CLASSIC_STT_FOR_FREEMIUM: False
[2024-01-25 06:37:09 +0100] [BENCHMARK] app.py loading...
ENGINEIO_LOGGER=OFF
[2024-01-25 06:37:12 +0100] [BENCHMARK] app.py loaded in 2.815551996231079 seconds
Running Python 3.11.7
Running via wsgi.py...
[2024-01-25 06:37:12 +0100] [BENCHMARK] app.py loading...
[2024-01-25 06:37:12 +0100] [BENCHMARK] wsgi.py loaded in 0.00024628639221191406 seconds
```

35 seconds and a qemu error. This totally breaks the flow when coding. It's faster to MANUALLY stop and restart Flask.

@edmorley
Member

edmorley commented Feb 29, 2024

@fabswt For now, I would recommend not using qemu, but instead Docker Desktop's Rosetta emulation support - it's much faster and doesn't suffer from the qemu crashes. See my earlier post above for how to enable it. Since that post it's also been GAed:
https://www.docker.com/blog/docker-desktop-4-26/
https://docs.docker.com/desktop/release-notes/#4250

Longer term, multi-arch is coming to our base images starting with Heroku-24 (Ubuntu 24.04). The initial work for that is occurring in #245, though there are a number of other steps that will need to follow (e.g. buildpack support; and that will be primarily focused on CNBs for now).

@edmorley
Member

edmorley commented May 29, 2024

Starting with Heroku-24 (which is due to reach GA soon), we're publishing our base images as multi-architecture images that support both AMD64 and ARM64:
https://github.com/heroku/base-images#heroku-base-images
https://hub.docker.com/r/heroku/heroku/tags?page=&page_size=&ordering=&name=24

Our next-generation Cloud Native Buildpacks (currently in preview) are the focus of new development moving forward - and will officially support ARM64 when using Heroku-24. To learn more about CNBs / experiment with them, see:
https://github.com/heroku/buildpacks
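
For example, a local CNB build can be tried with the pack CLI (the builder tag shown is the Heroku-24 builder referenced in that repo and may change):

```
# Build an OCI image for the app in the current directory using Heroku's
# next-generation (CNB) builder; on Apple Silicon this produces an arm64 image.
pack build my-app --builder heroku/builder:24 --path .
```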

In the meantime, support for ARM in our classic (existing generation) buildpacks will be handled on a best-effort basis. For current status see the issue filed in each language's classic buildpack repo:

If you use the Heroku base images from Docker Hub and wish to upgrade to Heroku-24, but one of the buildpacks or tools you use in your Dockerfile isn't compatible with ARM64, then you can force Docker to use AMD64 by passing --platform linux/amd64 to any docker build or docker run commands (or by using FROM --platform=linux/amd64 IMAGE_NAME in your Dockerfile).
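
For example (the image name is a placeholder):

```
# Force the amd64 variant of a multi-arch base image when building and running:
docker build --platform linux/amd64 -t my-app .
docker run --platform linux/amd64 --rm -it my-app
```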

Note also that starting with Heroku-24, the default image Linux user is heroku, which does not have root permissions. If you need to modify locations outside of /home/heroku or /tmp you will need to switch back to the root user. You can do this by adding USER root to your Dockerfile when building images, or by passing --user root to any docker run commands.
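
For example:

```
# Heroku-24 images default to the non-root "heroku" user; switch back to root
# when a command needs to write outside of /home/heroku or /tmp:
docker run --user root --rm -it heroku/heroku:24 bash
```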

@edmorley
Member

Closing since the Heroku-24 stack has now been officially released:
https://devcenter.heroku.com/changelog-items/2898
https://devcenter.heroku.com/articles/heroku-24-stack

See the comment above for more details on ARM support status on CNB vs classic.
