Add RPM builds to the CI pipeline #2982
I think you could telegraph the plot a bit more clearly in a few places, but I infer this is likely sound. I'm approving, but I won't pre-resolve the conversations because I'd like to see the answers.
Conversations resolved, though I think your responses could be used to form some additional commentary in the files that might help future spelunkers.
Force-pushed from 46d4526 to cc6e842.
Since I had to rebase anyway, I improved (I think...) the Makefile comment per Dave's suggestion.
Switching to draft mode while I explore the differences between my environment and Mr. Jenkins's. 😝
Force-pushed from 4121d10 to 2e5e5d8.
I'm sure that there are many ways to address this problem, each with a different set of requirements and maintenance loads. Using the approach which you suggest, of invoking each stage inside its own container instead of invoking all the stages inside a single container, trades the problem of "how do you invoke a container from inside a container" (which is now solved) for the problem of "how do you get the build products out of the container and share them with the next stage". This, of course, is easily done by making a bunch of interesting changes to ….

Under the approach in this PR, there is a single container execution. This means that all of the dependency management is done in a single container (which has its good points and its liabilities), so it is all pre-packaged for the user. And it means that the file system(s) used by the execution can easily be decoupled from the host. That is, a developer can issue a single command and run the build in isolation from the host, without installing any dependencies or having to worry about file system conflicts. By default, the build outputs are placed in temporary volumes which don't interact with the user's local environment, so the builds and tests will not be adversely affected by artifacts which the user has lying around, and, when the build is done, there won't be artifacts left around for the user to trip on. However, users (and the CI) can easily override this default behavior if they want the build artifacts to be captured on a local file system instead of having them placed on a temporary volume.

I can certainly pursue the approach that you suggest. It looks workable on its face, but it means more points of involvement on the host side. So, I think it's pretty close to six of one vs. a half-dozen of the other in terms of the ultimate complexity.

I would be happy to give you a guided tour of these changes if that would allay your concerns. There are a lot of constraints here, and many of them are not obvious. I think this PR presents the most versatile solution which meets all of the requirements. I would like to know why you think that we should not use `podman-remote`.
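For example, a developer run might look something like this (a hypothetical sketch, assuming the `WORKSPACE_TMP` override described in the PR description below; the exact invocation is illustrative, and the directory name is made up):

```bash
# Default behavior: build products land in a temporary Podman volume,
# which is cleaned up afterwards, leaving the host untouched.
jenkins/run ./build.sh

# Override: capture the build products on a local file system instead.
mkdir -p "${HOME}/pbench-artifacts"
WORKSPACE_TMP="${HOME}/pbench-artifacts" jenkins/run ./build.sh
```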
Force-pushed from 529abe8 to a559f30.
Rebased onto the end of main and resolved the conflicts introduced by the merge of #2937.
Withdrawing my "change request" after talking with Webb. I still have reservations about the additional level of complexity (however minimal it is) and the reliance on the new `podman-remote` mechanism when there seems to be a good alternative.

If the rest of the team agrees with the change, great. I'll submit an alternative for the team to consider later.
Force-pushed from 9dad3d3 to d1aa9b6.
Back to draft mode while I work out support for Git "worktrees".
PBENCH-720
- Use WORKSPACE from outside the container for mappings requested inside it
- Change destination for RPM build from ~/rpmbuild* to ~/pbench/rpmbuild*
- Remove spurious trailing slash from relative "Pbench Top" location
- Use symbol for Podman socket location
- Tweak the rpmbuild directory mapping
Force-pushed from 2c1b7a7 to c8ab511.
Force-pushed from c8ab511 to 1c09c35.
The problem with Git worktrees turned out to be not in my changes so much as in the fact that RHEL-7 comes with a very old version of Git which doesn't include worktree support! So, the previous support for Git worktrees in ….

The exercise did bring to light the fact that, if you are using an alternate worktree, then the path to your main worktree must not be ….

So, this PR is once again ready for review.
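For context (not from the PR), a quick sketch of the Git worktree feature involved; `git worktree` arrived in Git 2.5, while RHEL-7 ships Git 1.8.x. The path and branch name below are made up for illustration:

```bash
# Git worktrees let one repository back multiple working directories.
# Requires git >= 2.5; RHEL-7's git 1.8.x predates the feature.
git --version

# Create a second working tree alongside the main checkout:
git worktree add ../pbench-rpm-ci my-rpm-ci-branch

# In a linked worktree, .git is a plain file pointing back at the main
# worktree's repository; this is why the main worktree's path matters
# when directories get remapped inside a container:
cat ../pbench-rpm-ci/.git    # => gitdir: /path/to/pbench/.git/worktrees/...
```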
This PR enhances `build.sh` to extend the CI pipeline to include building binary RPMs for the Agent and the Server. There are a few related changes which include:

- `build.sh`: create the `${HOME}/.config` directory if it doesn't exist. Previously, in the container scenarios, we mapped this to a temporary file system; now it just resides on the same file system as `${HOME}`, but we cannot be guaranteed that it will exist unless we create it.
- `jenkins/Makefile`: a change requested in another PR for a code comment.
- `jenkins/ci.fedora.Dockerfile`:
  - Add the `podman-remote` package to allow us to invoke containers on the host from within the container running the build.
  - Some `dnf` tweaks that I ran across in my investigations which seemed like a good idea.
  - A couple of `npm` tweaks (one which `npm` itself suggested, and the other avoids a timeout problem).
- `jenkins/run`: changes to support running a container from inside the container (see the sketch after this description).
  - If the `CONTAINER_HOST` environment variable is defined, then we set up the new-container invocation to be made with `podman-remote`, which will run the container at the location specified by `CONTAINER_HOST`.
  - If the `CONTAINER_HOST` environment variable is not defined, then we set up the new-container invocation to be made with `podman`, and we start a `systemd` service which will listen for a request from `podman-remote`.
  - We use `WORKSPACE` to determine where the Git checkout is. When running in the CI, this is defined by the Jenkins environment; if it is undefined, we set it to the current working directory, which `jenkins/run` requires to be the Git checkout. We define this environment variable inside the container, so that it knows where on the host the Git checkout is, so that invocations of `podman-remote` can set up the mapping properly, even though they are made from inside a container.
  - We use `WORKSPACE_TMP` to determine where the home directory inside the container should be mapped. When running in the CI, this is defined by the Jenkins environment (although we have to create the directory tree, which is done in `build.sh`). If it is undefined, we set up a temporary Podman volume for the mapping (using a UUID for the volume name; the volume is deleted at the end of the script if it is no longer in use). `WORKSPACE_TMP` can also be defined to point to a real file system on the host, which a developer can use to capture the build products locally. We define this environment variable inside the container, so that it knows what on the host to map to the home directory inside the container, so that invocations of `podman-remote` can set up the mapping properly, even though they are made from inside a container.
  - We define the `CONTAINER_HOST` environment variable in the container, to trigger the use of `podman-remote` inside the container, and we point it at the service which we've defined if it was not already defined.
  - We map the Podman socket into the container so that the `podman-remote` command can use it to communicate to the service on the host.
- `utils/rpm.mk`:
  - Set the build root (`${BLD_ROOT}`) to `${HOME}` by default, but allow the distro-specific build invocations to override it. Likewise, set the build subdirectory to `rpmbuild` but allow it to be overridden, as well.
  - Add a `TMP` directory to the `RPMDIRS` list and use it instead of `/tmp`, so that we don't have collisions between concurrent distros' builds. Define a variable, `${RPMTMP}`, for use instead of `${TMPDIR}`.
  - Change the `<distro>-rpm` target so that it requires only the build output directories and not the spec file and source RPM. (The `rpm` target of the sub-make already has those dependencies.)
  - By default, the build products still land in `~/rpmbuild`, but, if the build is run inside a container, this location will be remapped appropriately by `jenkins/run`.
  - … `${BLD_ROOT}` and `${BLD_SUBDIR}`.

PBENCH-720
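To make the container-from-container flow concrete, here is a minimal sketch of the `jenkins/run` dispatch logic described above. It is an illustration written against this description rather than the actual script; the `systemd` service name, the image name (`pbench-ci-fedora`), and the in-container paths are assumptions:

```bash
#!/bin/bash
# Illustrative sketch, NOT the real jenkins/run: choose between podman
# and podman-remote, and default the mapping variables as described.

# Where the Git checkout lives on the HOST.  Jenkins defines WORKSPACE;
# otherwise the current directory must be the checkout.
export WORKSPACE=${WORKSPACE:-$(pwd)}

if [[ -n "${CONTAINER_HOST}" ]]; then
    # Already inside a container: ask the host's Podman service (reached
    # via CONTAINER_HOST) to launch the new container.
    podman_cmd="podman-remote"
else
    # On the host: use local podman, and start the Podman API socket so
    # that a nested podman-remote can reach back out to us.
    podman_cmd="podman"
    systemctl --user start podman.socket     # assumed service name
    socket="${XDG_RUNTIME_DIR}/podman/podman.sock"
fi

# Home-directory mapping: a host path if WORKSPACE_TMP is defined,
# otherwise a throwaway volume named with a UUID and removed on exit.
if [[ -z "${WORKSPACE_TMP}" ]]; then
    volume="pbench-$(uuidgen)"
    trap 'podman volume rm "${volume}" >/dev/null 2>&1' EXIT
fi

# Pass the host-side locations and the socket into the container so that
# nested podman-remote invocations can build correct host mappings.
${podman_cmd} run --rm \
    -v "${socket:-/run/podman/podman.sock}:/run/podman/podman.sock" \
    -e CONTAINER_HOST="unix:///run/podman/podman.sock" \
    -e WORKSPACE="${WORKSPACE}" \
    -e WORKSPACE_TMP="${WORKSPACE_TMP}" \
    -v "${WORKSPACE}:${WORKSPACE}" \
    -v "${WORKSPACE_TMP:-${volume}}:/home/builder" \
    pbench-ci-fedora "$@"                    # image name is an assumption
```

The essential hand-off is that `WORKSPACE`, `WORKSPACE_TMP`, and the Podman socket are all expressed to the inner container in host terms, which is what lets a `podman-remote` invocation made from inside the container request mappings that the host can actually satisfy.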