Scripts to build and use docker images including GHDL

This repository contains scripts and YAML workflows for GitHub Actions (GHA) to build and to deploy the container images that are used and/or published by the GHDL GitHub organization. All of them are pushed to hub.docker.com/u/ghdl.


ATTENTION: Some images related to synthesis and PnR were moved to hdl/containers and hub.docker.com/u/hdlc. See DEPRECATED.


Images for development (i.e., building and/or testing ghdl):

  • ghdl/build images include development/build dependencies for ghdl.
  • ghdl/run images include runtime dependencies for ghdl.
  • ghdl/pkg images include the content of ghdl tarballs built in ghdl/build images.
  • ghdl/debug is based on ghdl/build:buster-mcode and ghdl/pkg:buster-mcode; it includes Python pip, GNAT GPS, Graphviz and GDB.
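
These development images can be used interactively from the command line. A minimal sketch, assuming the bookworm-mcode tag (any published distro/backend tag works the same way):

  # Open a shell in a development image with the current directory mounted.
  # 'bookworm-mcode' is an illustrative tag; see hub.docker.com/u/ghdl for the full list.
  docker run --rm -it -v "$PWD":/work -w /work ghdl/build:bookworm-mcode bash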

Ready-to-use images:

  • ghdl/ghdl images, which are based on the corresponding ghdl/run images, include ghdl along with minimal runtime dependencies.
  • ghdl/vunit images, which are based on ghdl/ghdl:bookworm-* images, include ghdl along with VUnit.
    • *-master variants include the latest VUnit (master branch), while the others include the latest stable release (installed through pip).
  • ghdl/ext images include GHDL and complements (ghdl-language-server, GtkWave, VUnit, etc.).
  • ghdl/cosim images include GHDL and other tools for co-simulation, such as SciPy, Xyce or GNU Octave.
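
As a quick usage sketch, a design can be analyzed and simulated without installing GHDL on the host; the tag, file name and entity name below are illustrative assumptions:

  # Analyze, elaborate and run 'hello.vhdl' from the current directory.
  docker run --rm -v "$PWD":/work -w /work ghdl/ghdl:bookworm-mcode \
    /bin/sh -c "ghdl -a hello.vhdl && ghdl -e hello && ghdl -r hello"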

See USE_CASES.md if you are looking for usage examples from a user perspective.

GHA workflows

· base

Build and push all the ghdl/build:* and ghdl/run:* docker images (a per-job build sketch follows the list):

  • A pair of images is created in one job for [ ls ].
  • One job is created for each of [ fedora (37 | 38), debian (buster | bullseye | bookworm), ubuntu (20 | 22)], and six images are created in each job; two (ghdl/build:*, ghdl/run:*) for each supported backend [ mcode, llvm*, gcc ].
    • ghdl/debug:base is created in the debian buster job.
    • ghdl/build:doc is created in the debian bookworm job.
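
Roughly, each distro job can be pictured as the sketch below; the dockerfile name, build target and build argument are illustrative assumptions, and the actual logic lives in this repository's scripts:

  # Sketch of one 'base' job (debian bookworm): one ghdl/build and one ghdl/run image per backend.
  # Dockerfile name, targets and args are assumptions for illustration only.
  set -e
  for backend in mcode llvm gcc; do
    docker build --target build --build-arg BACKEND="$backend" \
      -t "ghdl/build:bookworm-$backend" -f bookworm.dockerfile .
    docker build --target run --build-arg BACKEND="$backend" \
      -t "ghdl/run:bookworm-$backend" -f bookworm.dockerfile .
  done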

· test

Build and push almost all the ghdl/ghdl:* and ghdl/pkg:* images. A pair of images is created in one job for each combination of:

  • [ fedora: [37, 38], debian: [bullseye], ubuntu: [20, 22] ] and [mcode, llvm*].
  • [ fedora: [37, 38], debian: [bullseye] ] and [gcc*].
  • For Debian only, [bullseye, bookworm] and [mcode] and [--gpl].
  • For Debian Buster, only [mcode].
    • ghdl/debug is created in this job.

The procedure in each job is as follows:

  • Repo ghdl/ghdl is cloned.
  • ghdl is built in the corresponding ghdl/build:* image.
  • A ghdl/ghdl:* image is created based on the corresponding ghdl/run:* image.
  • The testsuite is executed inside the ghdl/ghdl:* image created in the previous step.
  • If successful, a ghdl/pkg:* image is created from scratch, with the content of the tarball built in the first step.
  • ghdl/ghdl:* and ghdl/pkg:* images are pushed to hub.docker.com/u/ghdl.
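
In shell terms, one job boils down to something like the following sketch; tags, mount paths, build commands and the testsuite entry point are assumptions meant to convey the flow, not the exact scripts used here:

  set -e
  # Clone the sources.
  git clone https://github.com/ghdl/ghdl
  # Build ghdl (and its install tree) inside the matching ghdl/build image.
  docker run --rm -v "$PWD/ghdl":/src -w /src ghdl/build:bookworm-mcode \
    sh -c "./configure --prefix=/usr/local && make && make install DESTDIR=/src/install"
  # Create a ghdl/ghdl image on top of the matching ghdl/run image (Dockerfile not shown).
  docker build -t ghdl/ghdl:bookworm-mcode ghdl
  # Run the testsuite inside the freshly built image.
  docker run --rm -v "$PWD/ghdl":/src -w /src/testsuite ghdl/ghdl:bookworm-mcode ./testsuite.sh
  # On success, the install tree is packed into a 'FROM scratch' ghdl/pkg image and both images are pushed.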

NOTE: images with GCC backend include lcov for code coverage analysis.

· bookworm [scheduled daily]

Complement of ghdl.yml, to be run after each successful run of the main workflow in ghdl/ghdl. One job is scheduled for each combination of [ bookworm ] and [ mcode, llvm-14, gcc-12.3.0 ].

· vunit [triggered after workflow 'bookworm']

Build and push all the ghdl/vunit:* images, which are based on the ones created in the 'bookworm' workflow.

  • Two versions are published for each backend: one with the latest stable VUnit (from PyPI) and one with the latest master (from Git).
  • Images with GCC backend include lcov and gcovr for code coverage analysis.
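
For example, a project's VUnit run script can be executed straight from these images. A sketch, where run.py belongs to the user's project and the llvm tag stands for any of the published variants:

  # Run a VUnit test suite mounted from the current directory.
  docker run --rm -t -v "$PWD":/work -w /work ghdl/vunit:llvm python3 run.py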

· ext [triggered after workflow 'vunit']

Build and push all the ghdl/ext:* images:

  • ls: ghdl/ext:ls-debian and ghdl/ext:ls-ubuntu (a job for each of them). These include ghdl/ghdl, the ghdl/ghdl-language-server backend and the vscode-client (precompiled but not preinstalled).
  • gui:
    • ghdl/ext:gtkwave: includes GtkWave (gtk3) on top of ghdl/vunit:llvm-master.
    • ghdl/ext:broadway: adds a script to ghdl/ext:gtkwave in order to launch a Broadway server, which makes it possible to use GtkWave from a web browser.
    • ghdl/ext:ls-vunit: includes VUnit (master) on top of ghdl/ext:ls-debian.
    • ghdl/ext:latest: includes GtkWave on top of ghdl/ext:ls-vunit.
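
On a Linux host, a common way to use the GUI images is to share the X11 socket with the container; this is a generic pattern rather than something these images require, and the waveform file name is illustrative:

  # Open a waveform with GtkWave from ghdl/ext:latest, forwarding the host's X server.
  docker run --rm -e DISPLAY="$DISPLAY" -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v "$PWD":/work -w /work ghdl/ext:latest gtkwave wave.ghw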

· cosim

Build and push the ghdl/cosim:* images. See ghdl/ghdl-cosim: docker and ghdl.github.io/ghdl-cosim/vhpidirect/examples/vffi_user.

  • ghdl/cosim:mcode: based on ghdl/ghdl:bookworm-mcode, includes GCC.
  • ghdl/cosim:py: based on ghdl/ghdl:bookworm-llvm-7, includes Python.
    • ghdl/cosim:vunit-cocotb: based on ghdl/cosim:py, includes VUnit, cocotb and g++ (required by cocotb).
    • ghdl/cosim:matplotlib: based on ghdl/cosim:py, includes pytest, matplotlib, numpy and ImageMagick.
    • ghdl/cosim:octave: based on ghdl/cosim:py, includes GNU Octave.
    • ghdl/cosim:xyce: based on ghdl/cosim:py, includes Xyce.

NOTE: *-slim variants of matplotlib, octave and xyce images are provided too. Those are based on ghdl/cosim:vunit-cocotb, instead of ghdl/cosim:py.
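
As a usage sketch, a VUnit/cocotb co-simulation project can run its script directly in the corresponding image; run.py belongs to the user's project, not to this repository:

  # Execute a VUnit/cocotb run script inside the co-simulation image.
  docker run --rm -t -v "$PWD":/work -w /work ghdl/cosim:vunit-cocotb python3 run.py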

Packaging

Multiple artifacts of GHDL are generated in these workflows. For example, each job in test.yml generates a tarball that is then installed in a ghdl/ghdl:* image, and the content is published in a ghdl/pkg:* image. These resources might be useful for users/developers who:

  • Want to use a base image which is compatible but different from the ones we use. E.g., use python:3-slim-bookworm instead of debian:bookworm-slim.
  • Do not want to build and test GHDL every time.
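
For the first point, a hedged sketch of reusing the pre-built artifacts on a different base image; the tag, the copy destination and the runtime packages are assumptions and must match what the corresponding ghdl/run image actually installs:

  # Example Dockerfile copying GHDL from a ghdl/pkg image onto a Python base, then built with docker.
  cat > Dockerfile.ghdl-python <<'EOF'
  FROM python:3-slim-bookworm
  # The copy destination is an assumption; inspect the ghdl/pkg image layout first.
  COPY --from=ghdl/pkg:bookworm-mcode / /usr/local/
  # Runtime packages are assumptions; mirror what the matching ghdl/run image installs.
  RUN apt-get update \
   && apt-get install -y --no-install-recommends libgnat-12 zlib1g \
   && rm -rf /var/lib/apt/lists/*
  EOF
  docker build -t ghdl-python -f Dockerfile.ghdl-python .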

However, using these pre-built artifacts to install GHDL on host systems is discouraged.
