Add support for non-recursive single-toolchain multi-image builds #13672
Conversation
@SebastianBoe thanks for the PR. I would like to first understand what the issue with "recursive" builds (i.e. using the same framework, unchanged, multiple times) is. Could you please briefly describe the problems with that approach?
The simple answer is that non-recursive building is a […]

The more concrete answer is that non-recursive builds will be […]

Our goal is to produce multiple elf files, or images, and to give […]

There are two ways to do multi-image builds with Zephyr. Either […]

If not familiar with CMake, then imagine the difference between […]

All mainstream build systems use the same model internally, you […]

Meaning, this data structure is important for the speed of the […]

The most important difference between "recursive make" and […]

In a multi-DAG build there would be a parent DAG, and one or more […]

There also exists a performance problem where the same child DAG […]

In addition to a myriad of performance problems related to […]

There is also the matter of controlling the number of running threads.

Creating dependencies between images would also be more complex, […]

When build issues arise it is more difficult to debug a multi-DAG system than […]

It also becomes more difficult to share information between […]

In addition to performance considerations there is a matter of […]

Whereas with a recursive build the user would have to somehow […]

In addition to an awkward target interface it is not clear how […]

And I'm sure there are many other issues that I am not aware of.

[0] Miller, P.A. (1998), Recursive Make Considered Harmful, AUUGN Journal of AUUG Inc., 19(1), pp. 14–25.
For this to work out-of-the-box, an update will be needed to […]

If you add the following to […]

That will allow people to check out this PR, and simply do a […]

And then when JuulLabs-OSS/mcuboot#430 is merged, update this PR to the corresponding SHA in mcuboot.
Done
Add support for non-recursive single-toolchain multi-image builds. This resolves zephyrproject-rtos#7868. We are seeing the need to boot Zephyr applications in multiple stages; a real-world use-case of even a 3-stage bootloader has been identified and tested. Each bootloader needs to be individually upgrade-able and have its own configurations of Zephyr libraries. To achieve this, each bootloader is organized as its own executable. The problem is that the build system can only build one executable. This creates a usability issue, as the user must invoke a large set of arcane commands to build, sign, and flash each bootloader. To resolve this, we re-organize the build system such that it can build multiple executables. Signed-off-by: Sebastian Bøe <[email protected]> Signed-off-by: Håkon Øye Amundsen <[email protected]>
Port the openamp sample to use multi-image. Signed-off-by: Sebastian Bøe <[email protected]>
Note: This was originally submitted upstream in zephyrproject-rtos#13672 and was rejected. NCS is carrying it as a noup for now. Another, more upstream-friendly approach will be needed. We are seeing the need to boot Zephyr applications in multiple stages; a real-world use-case of even a 3-stage bootloader has been identified and tested. Each bootloader needs to be individually upgrade-able and have its own configurations of Zephyr libraries. To achieve this, each bootloader is organized as its own executable. The problem is that the build system can only build one executable. This creates a usability issue, as the user must invoke a large set of arcane commands to build, sign, and flash each bootloader. To resolve this, we re-organize the build system such that it can build multiple executables. To work within CMake restrictions that logical target names must be globally unique (among other naming restrictions), we add an IMAGE variable which is used to name the current Zephyr application being built. Signed-off-by: Sebastian Bøe <[email protected]> Signed-off-by: Håkon Øye Amundsen <[email protected]> Signed-off-by: Øyvind Rønningstad <[email protected]> Signed-off-by: Robert Lubos <[email protected]> Signed-off-by: Ruth Fuchss <[email protected]> (cherry picked from commit 3db22ac) (Conflicts were resolved in lib/posix/CMakeLists.txt. Changes to ia32.cmake were dropped entirely since Intel has rejected this approach. Finally, these follow-up fixes and dependent patches were squashed in: 02eeeb7 c6c3fed 56f6a33 1dc0ca5 6e348b8) Signed-off-by: Marti Bolivar <[email protected]> (cherry picked from commit 1e0a3dc) (Conflict resolved in cmake/app/boilerplate.cmake) Signed-off-by: Robert Lubos <[email protected]>
From upstream zephyrproject-rtos#13672 Added a minimal integration test for MCUBoot. It has just enough coverage to catch build issues, which is better than before (no coverage in the Zephyr CI). Signed-off-by: Sebastian Bøe <[email protected]> (cherry picked from commit 1937b41) (cherry picked from commit 026bd21) (cherry picked from commit 4b3a15b) Signed-off-by: Martí Bolívar <[email protected]>
Closing permanently. Upstream did not accept a solution that increased the complexity of the build system in this manner.
We are seeing the need to boot Zephyr applications in multiple
stages, a real-world use-case of even a 3-stage bootloader has
been identified and tested.
Each bootloader needs to be individually upgrade-able and have
its own configurations of Zephyr libraries. To achieve this, each
bootloader is organized as its own executable.
The problem is that the build system can only build one
executable. This creates a usability issue as the user must
invoke a large set of arcane commands to build, sign, and flash
each bootloader.
It also causes a problem for the configuration system. If the build and
configuration system can only build one image, it cannot automatically share
configuration between images or enforce that one image's configuration is
compatible with another's. The user must instead manually keep each image's
configuration in sync.
To resolve this we re-organize the build system such that it can build
multiple executables.
For the user, building both a bootloader and an application now looks
like:
In contrast to the 8 or so commands that were needed previously, or the
five commands that are needed with west:
#12903 (comment)
Internally, the build system uses two mechanisms to prevent the
images from conflicting with each other. Firstly, each image is
executed in its own 'subdirectory', created with
'add_subdirectory'. This ensures that variables set in a
subdirectory (image 3) do not corrupt the variables set in the
parent directory (image 2).
e.g. in the code snippet below:
Due to CMake scoping rules, the variable FOO will be '0' after
'add_subdirectory' has been processed, even if the subdirectory
temporarily set FOO to something else.
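The original snippet was lost in extraction; a minimal sketch of the behaviour described above (the file layout is hypothetical) could look like:

```cmake
# parent/CMakeLists.txt
set(FOO 0)
add_subdirectory(child)           # child/CMakeLists.txt does: set(FOO 1)
message(STATUS "FOO is ${FOO}")   # prints 'FOO is 0' -- the child's set()
                                  # only affected the child directory scope
```

The child's `set(FOO 1)` creates a copy in the subdirectory's scope; without `PARENT_SCOPE` it never propagates back up, which is exactly the isolation the multi-image scheme relies on.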
When scoping rules can not be used to prevent aliasing issues between
the images, e.g. for global symbols like target names, global
properties, and CACHE variables, a disambiguating prefix is used in
the symbol name.
Meaning, instead of using the library name 'kernel', an
MCUBoot-enabled application would have two libraries, 'kernel' and
'mcuboot_kernel'. When writing build scripts that refer to the two
kernel library names, one would use;
or equivalently;
where IMAGE is set by boilerplate.cmake to the empty string '' for the
first executable that is processed and then later on to 'mcuboot_' for
the mcuboot image.
or also equivalently;
here, the user of the kernel library does not need to know that the
library name depends on the IMAGE, only that it should refer to the
library name indirectly through the variable KERNEL_LIBRARY.
The same applies to other globals, like targets, and global
properties.
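The snippets themselves were lost in extraction; a reconstructed sketch of the three equivalent forms, using only names taken from the prose above (the `my_lib` target is hypothetical), might be:

```cmake
# 1. Spelling the per-image library names out directly:
target_link_libraries(my_lib PRIVATE mcuboot_kernel)

# 2. Using the IMAGE prefix set by boilerplate.cmake
#    ('' for the first image, 'mcuboot_' for the mcuboot image):
target_link_libraries(my_lib PRIVATE ${IMAGE}kernel)

# 3. Indirectly, through a variable that hides the prefix entirely:
target_link_libraries(my_lib PRIVATE ${KERNEL_LIBRARY})
```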
This
fixes #7868
fixes #19918
Challenges with this approach
This does not support multi-toolchain builds. This is due to a limitation in CMake. Only
one program can be set to be the active C toolchain.
It is not clear how to eventually solve this use-case. One option could be to declare it
as out-of-scope for the build system, and instruct users to do automatic or manual recursive
make for each toolchain.
Another option could be to introduce a single delegator program that CMake believes to
be the C toolchain, which again delegates to each enabled toolchain when invoked by ninja.
Similar to how west delegates to ninja, ninja delegates to ccache, ccache delegates to gcc,
and gcc delegates to ld.
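The delegator idea could be sketched as a small wrapper script. This is purely illustrative: the per-image toolchain mapping, the `mcuboot_` prefix convention, and the function name are all assumptions, not anything this PR implements.

```shell
#!/bin/sh
# Hypothetical delegator: CMake would be told this one script is "the"
# C compiler, and it would dispatch to the real toolchain depending on
# which image is currently being built (assumed mapping below).
toolchain_for_image() {
  case "$1" in
    mcuboot_*) echo "arm-none-eabi-gcc" ;;  # assumed: mcuboot image uses ARM GCC
    *)         echo "gcc" ;;                # assumed: default image uses host GCC
  esac
}

toolchain_for_image "mcuboot_app"   # -> arm-none-eabi-gcc
toolchain_for_image "app"           # -> gcc
```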
To test
sanitycheck -T$ZEPHYR_BASE/samples/subsys/ipc/openamp
TODO: