
Core build system

The core part of the build system is triggered by the make command and is based on a single global Makefile.

The global Makefile detects the host system and defaults the target architecture to that of the build host. To override this, provide the ARCH environment variable. Currently supported values for ARCH are "x64" and "aarch64".

This is in contrast to the past where the compiler (optionally a cross-compiler) was launched to determine the target.

Examples:

    make -j8 ARCH=aarch64
    make -j8 ARCH=x64
    make -j8              # in this case the architecture of the build host is used for the target

The ARCH variable is later translated into an internal "arch" Make variable, which is checked all over the place to build the right modules for the right target.
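
As an illustration, the translation could look roughly like the following sketch (only ARCH and arch come from the description above; the host detection details and the host_arch helper variable are assumptions):

    # sketch: detect the build host and default the target architecture to it
    HOST_CPU := $(shell uname -m)
    ifeq ($(HOST_CPU),x86_64)
      host_arch := x64
    else
      host_arch := $(HOST_CPU)
    endif

    # ARCH can be given in the environment or on the command line;
    # "arch" is the internal variable the rest of the build checks
    ARCH ?= $(host_arch)
    arch := $(ARCH)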

The "all" target triggers the ant parts of the build, then finally the submake for the specific architecture and mode. The other relevant target (only enabled for x64) for now is "check", which starts the regression tests.

There are two modes: release and debug. Currently we use only release for AArch64, as we have enough problems already without maintaining two different builds.

The submake Makefile that gets triggered is, for example:

build/release.aarch64/Makefile

for the AArch64 release build,

build/release.x64/Makefile

for the X86_64 release build.

Those builds can in theory coexist. Those makefiles then invoke the global **build.mk**.
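
Such a per-build Makefile is essentially a thin wrapper around build.mk; a minimal sketch of what build/release.aarch64/Makefile amounts to (the exact variables set before the include are assumptions, not the literal content):

    # sketch of a per-build wrapper Makefile
    mode := release
    arch := aarch64
    include ../../build.mk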

build.mk still contains the compiler detection code to determine the architecture, but this time the result is used for the "host", not for the target.

CROSS_PREFIX can still be used to specify a crosscompiler.

Example:

    export CROSS_PREFIX=aarch64-linux-gnu-
    make -j8 ARCH=aarch64

CROSS_PREFIX currently affects the variables CXX, CC, LD, STRIP and OBJCOPY.

The HOST_CC variable is used for the host compiler.
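
The effect on the toolchain variables can be pictured roughly like this (a sketch of the idea, not the literal lines from build.mk):

    # toolchain binaries get the cross prefix prepended, if one is set
    CXX     = $(CROSS_PREFIX)g++
    CC      = $(CROSS_PREFIX)gcc
    LD      = $(CROSS_PREFIX)ld
    STRIP   = $(CROSS_PREFIX)strip
    OBJCOPY = $(CROSS_PREFIX)objcopy

    # HOST_CC keeps pointing at the native compiler of the build host
    HOST_CC = gcc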

The image variable controls which OSv image is actually built. For x86_64 the default image contains an enormous amount of software and is called "default". For AArch64 the default image is much smaller, is called "uush", and contains only a very trimmed-down proof-of-concept shell.
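
Assuming image is picked up like the other Make variables, it can also be set explicitly on the command line, for example (using the image names mentioned above):

    make -j8 ARCH=aarch64 image=uush
    make -j8 image=default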

ARCH-specific and mode-specific CFLAGS are currently stored in conf/$(arch).mk and conf/$(mode).mk.

Example:

    conf/aarch64.mk
    conf/x64.mk
    conf/release.mk
    conf/debug.mk
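
These fragments are pulled into the build via Make includes, conceptually along these lines (a sketch, not the literal include statements):

    # pick up the arch-specific and mode-specific compiler flags
    include conf/$(arch).mk
    include conf/$(mode).mk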

The fundamental parts of the build.mk monster are the ones which build the loader image and the user image.

For X64, the loader image includes the loader compression and decompression routines, which occasionally trigger bugs which are still very difficult to trace and reproduce.

For AArch64, the loader image is simpler.

For both x64 and AArch64 the loader.elf rule is the same. bootfs.manifest.skel is used to determine what goes into this initial filesystem, which is then used to construct the ZFS filesystem for usr.img.
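
Each manifest line essentially maps a path inside the image to a source file on the build host; a purely illustrative example of the shape of such an entry (not taken from the real skeleton):

    /tools/example.so: path/on/the/build/host/example.so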

The kernel_base variable is set to the starting address of the kernel (0x200000 for x64, 0x40080000 for AArch64).
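
In Make terms this is just a per-architecture assignment, roughly (only the two addresses come from the description above):

    # sketch: the kernel load address differs per architecture
    ifeq ($(arch),x64)
      kernel_base := 0x200000
    else
      kernel_base := 0x40080000
    endif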

The final result for both architectures is loader.img. This is the image that contains only a RAM filesystem, and it is then executed to create the final usr.img, which is of course terrible for cross-compilation and for debugging issues (as is always the case with build systems that feed their own build artifacts back into the build).

usr.img is a different beast: it is constructed by launching loader.img, with usr.manifest determining which modules go into the ZFS filesystem. The build starts from bare.raw, created with qemu-img create; loader.img is added to it using the "setpartition" command of the python script imgedit.py. bare.raw is then resized to its final size, and the mkfs.py script is run to create the initial ZFS filesystem based on bootfs.manifest. mkfs.py launches OSv to create that initial filesystem using libzfs.so inside OSv itself. Finally, usr.img is created by running upload_manifest.py, which again runs OSv to add entries to the ZFS filesystem based on usr.manifest.
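
As a rough outline of that pipeline (a sketch: paths, arguments and ordering details are simplified, the real recipe lives in build.mk):

    qemu-img create bare.raw ...    # start from an empty raw disk image
    imgedit.py setpartition ...     # add loader.img and record the partition layout
    # bare.raw is then resized to its final size
    mkfs.py                         # boots OSv to create the initial ZFS from bootfs.manifest
    upload_manifest.py ...          # boots OSv again to add the usr.manifest entries, producing usr.img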

This complex operation requires pci, virtio, virtio-blk and virtio-net to be enabled before there is any chance of getting a usr.img built.

Finally the command line is edited using the imgedit.py script's "setargs" command.
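
For illustration only (the exact argument syntax of imgedit.py is not covered here):

    imgedit.py setargs <image> <new kernel command line>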

This is only a high-level description of the core part of the OSv build, which ignores all management tools, modules, apps, etc., but it is of the most interest for new architecture enablement and highlights the biggest issues with the current system:

  1. lack of a consistent design and end goal for the build system (role of environment variables, build/host/target, lots of ad hoc solutions for each target).

  2. problems with cross-compilation due to: a) use of arch-dependent Python scripts that are launched as part of the build process, and b) execution of the OSv images as part of the OSv image build process itself.

  3. problems with early new architecture enablement due to: a) the requirement to have pci, virtio, virtio-blk and virtio-net before even trying to start a usr.img build, and b) the requirement to use native hardware for a full usr.img build due to the cross-compilation issues mentioned above.

  4. general complexity of single monolithic makefiles, and lack of delegation and uniformity, also leading to

  5. poor tracking of dependencies, resulting in very long builds due to needless steps being performed, as well as hard-to-debug race conditions when running parallel builds.

Ongoing work

To address mostly point 5), and to clearly separate the mostly "working" bits from the more difficult parts, there is work currently ongoing by Nadav in this tree (master branch):

https://github.com/nyh/osv

To address 2b, 3a and 3b, there is work currently ongoing at Huawei to come up with a user-space tool that creates the ZFS filesystem on the Linux build host without having to launch VMs.

Some initial material will be posted to GitHub fairly soon, and a link will hopefully be added here.
