- Release Notes
- Version 0.1 (March 6 2020)
- Version 0.2 (March 16 2020)
- Version 0.3 (March 27 2020)
- Version 0.4 (April 10 2020)
- Version 0.5 (April 24 2020)
- Version 0.6 (April 25 2020)
- Version 0.7 (April 30 2020)
- Version 0.8 (May 18 2020)
- Version 0.9 (May 27 2020)
- Version 0.10 (Jul 3 2020)
- Version 0.11 (Aug 23 2020)
- Version 0.12 (Sep 28 2020)
- Version 0.13 (Oct 09 2020)
- Version 0.14 (Oct 19 2020)
- Version 0.15 (Nov 10 2020)
- Version 0.16 (Dec 02 2020)
- Version 0.17 (Dec 21 2020)
- 1. Fetching the source using Git
- 2. Build prerequisites
- 3. Building a firmware using cqfd
- 4. Building the firmware manually
- 5. Building an SDK Installer
- 6. Flashing the flash image to a USB key
- 7. Flashing the firmware to the disk
- 8. Tests
- 9. Hypervisor updates
- 10. About this documentation
The Yocto firmware generation has been tested on Ubuntu 18.04. You can either use your host machine’s tools, or use cqfd to build. More details are given in the next sections of this document.
- Adds KVM/Qemu support
- Adds libvirt / virsh tools for VM management
- Adds RT tests on guest machines
- Adds Open vSwitch / DPDK support
- Adds Docker and Kubernetes
- Adds VM deployment and testing tools
- Adds missing drivers for Intel I210
- Adds PMD drivers for DPDK
- Adds pciutils for DPDK
- Adds guest images in qcow2 format
- Compresses generated host images
- Adds a data partition mounted in /var in all images
- Updates flash description
- Adds a High Availability VM solution based on Pacemaker
- Adds a Distributed Storage solution based on Ceph
- Adds a test tool to check data synchronization
- Adds support for interfacing with an Active Directory using SSSD/Realmd
- Adds support for user authentication from a RADIUS server
- Adds deployment scripts to perform a configuration similar to the High Availability test setup
- Fixes upstream source download issues
- Does not start unconfigured systemd services at startup
- Adds "test" and "debug" image variants with BIOS support
- Generates "guest" images in VMware disk format
- First version published on the SEAPATH GitHub
- Adds licenses and copyright information
- Adds hybrid guest and host images
- Adds images to perform the first installation
- Provides cluster configuration support with Ansible. In this version, only network configuration, cluster creation and customization of kernel parameters are available
- Adds support in images for the Intel 6300esb watchdog, virtualizable by libvirt
- Updates Ceph to version 14.2.15
- Uses Python3 instead of Python2 for Ceph
- Runs Ceph with the ceph user instead of root
- Enhances the Ansible cluster configuration with the Pacemaker and Ceph configuration
- Modifies the network configuration made by Ansible to be able to generate several Open vSwitch network layers
We are using repo to synchronize the source code using a manifest (an XML file) which describes all the git repositories required to build a firmware. The manifest file is hosted in a git repository named repo-manifest.
First initialize repo:
$ cd my_project_dir/
$ repo init -u <manifest_repo_url>
$ repo sync
For instance, for the Seapath yocto-bsp project:
$ cd my_project_dir/
$ repo init -u https://github.com/seapath/repo-manifest.git
$ repo sync
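repo sync can fetch the repositories in parallel, which speeds up the initial synchronization. For instance, to use four jobs (the job count below is only an example; adapt it to your machine):
$ repo sync -j4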
Note: The initial build process takes approximately 4 to 5 hours on a current developer machine and will produce approximately 50GB of data.
Before building, you must put an SSH public key in keys/ansible_public_ssh_key.pub. It will be used by Ansible to communicate with the machines. See keys/README for more information.
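For instance, you can generate a dedicated key pair and copy its public part there (the key type and file name below are only an example):
$ ssh-keygen -t ed25519 -N '' -f ~/.ssh/seapath_ansible_key
$ cp ~/.ssh/seapath_ansible_key.pub keys/ansible_public_ssh_key.pub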
cqfd is a quick and convenient way to run commands in the current directory, but within a pre-defined Docker container. Using cqfd allows you to avoid installing anything other than Docker and repo on your development machine.
Note: We recommend using this method as it greatly simplifies the build configuration management process.
- Install Docker: see the Docker manual ("Install docker").
- Install repo. On Ubuntu 20.04:
$ sudo curl -o /usr/local/bin/repo https://storage.googleapis.com/git-repo-downloads/repo
$ sudo chmod +x /usr/local/bin/repo
$ sudo sed 's|/usr/bin/env python|/usr/bin/env python3|' -i /usr/local/bin/repo
- Install cqfd:
If necessary, install the make and pkg-config packages first. For instance, on an Ubuntu/Debian distribution:
$ sudo apt-get install build-essential pkg-config
Then:
$ git clone https://github.com/savoirfairelinux/cqfd.git
$ cd cqfd
$ sudo make install
The project page on GitHub contains detailed information on usage and installation.
- Make sure that Docker does not require sudo. Use the following commands to add your user account to the docker group:
$ sudo groupadd docker
$ sudo usermod -aG docker $USER
Log out and log back in, so that your group membership is re-evaluated.
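You can then verify that Docker works without sudo by running a test container (hello-world is the minimal image Docker provides for this kind of check):
$ docker run hello-world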
The first step with cqfd is to create the build container. For this, use the cqfd init command:
$ cqfd init
Note: The step above is only required once: once the container image has been created on your machine, it will persist. Further calls to cqfd init will do nothing, unless the container definition (.cqfd/docker/Dockerfile) has changed in the source tree.
cqfd provides different flavors that allow calling build.sh with predefined image, distro and machine parameters. To list the available flavors, run:
$ cqfd flavors
Here is a description of the flavors:
- all: all flavors
- flash_bios: BIOS image to flash a server disk
- flash_efi: EFI image to flash a server disk
- flash_pxe: PXE image to flash a server disk
- guest_efi: EFI guest image (VM)
- guest_efi_test: similar to guest_efi with additional test packages
- guest_efi_dbg: similar to guest_efi with debug tools
- host_bios: BIOS host image (including security, clustering and readonly features)
- host_bios_dbg: similar to host_bios with debug tools
- host_bios_minimal: similar to host_bios without security, clustering and readonly features
- host_bios_no_iommu: similar to host_bios without IOMMU enabled
- host_bios_test: similar to host_bios with additional test packages
- host_bios_test_no_iommu: similar to host_bios_no_iommu with additional test packages
- host_efi: EFI host image (including security, clustering and readonly features)
- host_efi_dbg: similar to host_efi with debug tools
- host_efi_test: similar to host_efi with additional test packages
- host_efi_swu: EFI host update image (SwUpdate)
- monitor_bios: BIOS monitor image (used to monitor the cluster)
- monitor_efi: EFI monitor image (used to monitor the cluster)
- monitor_efi_swu: EFI monitor update image (SwUpdate)
To build one of these flavors, run:
$ cqfd -b <flavor>
Note:
- bash completion works with the -b parameter
- the detailed command used for each flavor is described in the .cqfdrc file
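For example, to build the production EFI host image:
$ cqfd -b host_efi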
This method relies on the manual installation of all the tools and dependencies required on the host machine.
The following packages need to be installed:
$ sudo apt-get update && sudo apt-get install -y ca-certificates build-essential
$ sudo apt-get install -y gawk wget git-core diffstat unzip texinfo gcc-multilib \
  build-essential chrpath socat cpio python python3 python3-pip python3-pexpect \
  xz-utils debianutils iputils-ping libsdl1.2-dev xterm repo
The build is started by running the following command:
$ ./build.sh -i seapath-host-efi-image -m boardname --distro distroname
You can also pass custom BitBake commands using the -- separator:
$ ./build.sh -i seapath-host-efi-image -m boardname --distro distroname -- bitbake -c clean package_name
Images can be produced for either UEFI- or BIOS-compatible firmware.
You can find the list of Yocto images below (with [FW]=bios or [FW]=efi):
- Host images
  - seapath-host-[FW]-image: production image
  - seapath-host-[FW]-dbg-image: debug image
  - seapath-host-[FW]-test-image: production image with test tools
- Guest images
  - seapath-guest-efi-image: QEMU-compatible virtual machine production image (UEFI only)
  - seapath-guest-efi-dbg-image: QEMU-compatible virtual machine debug image (UEFI only)
  - seapath-guest-efi-test-image: guest production image with test tools (UEFI only)
- Hybrid images
  - seapath-guest-host-bios-image: a production image working as host and guest
  - seapath-guest-host-bios-test-image: a production image working as host and guest with test tools
  - seapath-guest-host-bios-dbg-image: a debug image working as host and guest
- Flasher images
  - seapath-flash-[FW]: USB key flash image used to flash firmware images on disk
  - seapath-flash-pxe: flash image used to flash firmware images on disk, usable during a PXE boot
- Observer images
  - seapath-monitor-[FW]: production image for an observer (needed for clustering quorum establishment)
Different distros can be used:
- seapath-flash: distro used for flash images
- seapath-guest: distro used for guest images
- seapath-host: distro used for host images with security, readonly and clustering features
- seapath-host-cluster-minimal: distro used for host images with clustering features
- seapath-host-minimal: distro used for host images without security, readonly and clustering features
- seapath-host-sb: distro used for host images with security, readonly, clustering and secure boot features
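The distro is passed to build.sh with the --distro option. For instance, to build the production EFI host image with the full-featured host distro:
$ ./build.sh -i seapath-host-efi-image -m boardname --distro seapath-host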
You can create an SDK matching your system’s configuration using the following command:
$ ./build.sh -i seapath -m boardname --sdk
Note: prefix this command with cqfd run if using cqfd.
When the bitbake command completes, the toolchain installer will be in tmp/deploy/sdk/ under your build directory.
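The installer is a self-extracting shell script; the usual Yocto workflow (the exact installer and environment file names below are placeholders and depend on your image, machine and distro settings) is to run it and then source the environment file it installs before cross-compiling:
$ ./tmp/deploy/sdk/<sdk-installer>.sh
$ . <sdk-install-dir>/environment-setup-<target>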
To be able to install Seapath firmware on machines, you need to use a USB key running a specific application. This application is available in seapath-flash-bios for machines running a BIOS and seapath-flash-efi for machines running a UEFI.
To create the flash USB key on a Linux system, you can use the dd command. The image is compressed in gzip format and must be uncompressed with gzip first. For instance, if the USB key device is /dev/sdx:
$ sudo umount /dev/sdx*
$ gzip -d -c build/tmp/deploy/image/boardname/seapath-flash-bios.wic.gz \
  | sudo dd of=/dev/sdx bs=16M conv=fsync
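Writing to the wrong device with dd is destructive, so double-check which device corresponds to the USB key before flashing, for instance with lsblk:
$ lsblk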
Copy the generated image, in wic or wic.gz format, to the flasher_data partition of the USB key.
Boot on the USB key. Use the flash script to write the firmware image on the disk. flash takes two arguments:
- --image: the path to the image to be flashed. The image partitions are mounted on /media.
- --disk: the disk to flash. Usually /dev/sda.
For instance:
$ flash --image /media/seapath-host-efi-image.wic.gz --disk /dev/sda
The Yocto image seapath-test-image includes Real-Time tests such as cyclictest.
On the target, call:
$ cyclictest -l100000000 -m -Sp90 -i200 -h400 -q >output
Note: this test will run for around 5 hours.
Then generate the graphics:
$ ./tools/gen_cyclic_test.sh -i output -n 28 -o seapath.png
Note: we reused OSADL tools.
All Seapath Yocto images include the ability to run guest Virtual Machines (VMs).
We use KVM and Qemu to run them. As there is no window manager on the host system, VMs should be launched in console mode and their console output must be correctly set.
For testing purposes, we can run our Yocto image as a guest machine. We do not use the .wic image, which includes the Linux kernel and the rootfs, because we need to set the console output. Instead, we use two distinct files so we can modify the Linux kernel command line:
- bzImage: the Linux Kernel image
- seapath-test-image-votp.ext4: the Seapath rootfs
Then run:
$ qemu-system-x86_64 -accel kvm -kernel bzImage -m 4096 -hda seapath-test-image-votp.ext4 -nographic -append 'root=/dev/sda console=ttyS0'
Ptest (package test) is a concept for building, installing and running the test suites that are included in many upstream packages, and producing a consistent output format for the results.
ptest-runner is included in seapath_test_image and allows running those tests.
For instance:
$ ptest-runner openvswitch libvirt qemu rt-tests
The usage of ptest-runner is as follows:
Usage: ptest-runner [-d directory] [-l list] [-t timeout] [-h] [ptest1 ptest2 ...]
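For instance, to list the ptests available on the image without running them, use the -l option:
$ ptest-runner -l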
Hypervisor updates are enabled only for production EFI images:
- legacy BIOS images do not implement the update mechanism
- debug and test update images are not offered
A/B partitioning is used to allow for an atomic and recoverable update procedure. The update will be written to the passive partition. Once the update is successfully transferred to the device, the device will reboot into the passive partition which thereby becomes the new active partition.
If the update causes any failures, a roll back to the original active partition can be done to preserve uptime.
The following partitioning is used on hypervisors:

| Slot A | Slot B |
|---|---|
| Boot A partition (Grub + Kernel) [/dev/<disk>1] | Boot B partition (Grub + Kernel) [/dev/<disk>2] |
| Rootfs A partition [/dev/<disk>3] | Rootfs B partition [/dev/<disk>4] |
| Logs partition [/dev/<disk>5] | |
| Persistent data partition [/dev/<disk>6] | |
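To check which rootfs slot is currently active, look at the device mounted on / (assuming the disk is /dev/sda, /dev/sda3 is rootfs A and /dev/sda4 is rootfs B):
$ findmnt -no SOURCE /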
Hypervisor updates can be performed with SwUpdate.
First, create a SwUpdate image (.swu):
$ cqfd -b host_efi_swu
Then, you have different options.
SwUpdate can interact with a Hawkbit server to push updates on the device.
We use docker-compose as explained in the Hawkbit documentation.
$ git clone https://github.com/eclipse/hawkbit.git
$ cd hawkbit/hawkbit-runtime/docker
We decided to enable anonymous connections. To do that, add this line in hawkbit-runtime/docker/docker-compose.yml:
- HAWKBIT_SERVER_DDI_SECURITY_AUTHENTICATION_ANONYMOUS_ENABLED=true
And start the server:
$ docker-compose up -d
Then you can access the HTTP server on port 8080. In the System Config menu, enable "Allow targets to download artifact without security credentials", so that anonymous updates can be used. More documentation is available on the Hawkbit website.
The Hawkbit server URL and port must be configured in /etc/sysconfig/swupdate_hawkbit.conf or directly in meta-seapath (/recipes-votp/system-config/system-config/efi/swupdate_hawkbit.conf).
A systemd daemon (swupdate_hawkbit.service) is started automatically at boot. If you want to modify swupdate_hawkbit.conf at runtime, you must restart the systemd service.
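For instance, after editing the configuration at runtime:
$ systemctl restart swupdate_hawkbit.service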
Once the systemd service is started, you should see the device in the Hawkbit interface. Once an update has been performed on the device, it will reboot.
This documentation uses the AsciiDoc documentation generator. It is a convenient format that allows using plain-text formatted writing that can later be converted to various output formats such as HTML and PDF.
In order to generate an HTML version of this documentation, use the following command (the asciidoc package will need to be installed in your Linux distribution):
$ asciidoc README.adoc
This will result in a README.html file being generated in the current directory.
If you prefer a PDF version of the documentation instead, use the following command (the asciidoctor-pdf tool will need to be installed on your Linux distribution):
$ asciidoctor-pdf README.adoc