Merge #166
166: Customize builds for different platforms r=jgallag88 a=jgallag88

This change provides the ability to build versions of the appliance
customized for different platforms (hypervisors and cloud providers).
This is done by installing different versions of the delphix-platform
and delphix-kernel packages depending on which platform we are
building for. Since we only want to have a single upgrade image per
variant, this change also adds a second stage to the build which
combines the live-build output for multiple platform versions of the
same variant into a single upgrade tarball.

The live-build stage of the build is now run by invoking 'gradle' with
a target which is a combination of variant and platform, e.g.
`gradle buildInternalDevEsx`.

The second stage of the build is run by invoking 'gradle' with a variant
as a target, e.g. `gradle buildUpgradeImageInternalDev`. When the second
stage is run, an environment variable 'AWS_S3_URI_LIVEBUILD_ARTIFACTS'
can be passed. If it is used, previously built live-build artifacts will
be downloaded from the provided S3 URIs, and placed in
`live-build/build/artifacts` as if they had been built locally. If it is
not used, live-build will be invoked for each of the hypervisors
specified in the 'DELPHIX_PLATFORMS' environment variable.
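The source-selection logic described above can be sketched as a small shell function (hypothetical names; the real logic lives in the Gradle build):

```shell
# Hypothetical sketch of the second-stage source selection described above.
# If AWS_S3_URI_LIVEBUILD_ARTIFACTS is set, previously built live-build
# artifacts are fetched; otherwise live-build runs once per platform listed
# in DELPHIX_PLATFORMS (defaulting to 'kvm', matching the build's default).
second_stage_sources() {
	local variant="$1"
	if [ -n "${AWS_S3_URI_LIVEBUILD_ARTIFACTS:-}" ]; then
		echo "fetch: ${AWS_S3_URI_LIVEBUILD_ARTIFACTS}"
	else
		for platform in ${DELPHIX_PLATFORMS:-kvm}; do
			echo "live-build: ${variant} on ${platform}"
		done
	fi
}
```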

A few notes about the implementation:

1. This change replaces the Make build with a Gradle one. The build logic
    needed for this change was difficult to express using Make and
    resulted in a Makefile which was very difficult to understand. The
    use of Gradle made handling this build logic more straightforward
    and also made it possible to add better support for incremental
    builds.
2. This change removes the idea of the 'base' live-build variant. The
    base variant contains the kernel, and because the kernel differs
    between hypervisors, it cannot be shared between different hypervisor
    builds. It would be possible to have a different version of the base variant
    per hypervisor, and share that between different variants built for
    the same hypervisor. However, this likely isn't worth the effort
    because it doesn't help in either of the two most common use cases:
      - Building via a Jenkins job: when building via Jenkins, each
         variant will now be built via a sub-job running on its own build
         VM, so the base would be rebuilt for each sub-job anyway.
      - Developers iterating on changes on personal build VMs: in this
         case developers are most likely to be building a single variant,
         in which case the 'base' variant would be less likely to be
         re-used.
3. We no longer do the live-build in place (that is, directly in
    `live-build/variant/<variant>/`). Now that we have multiple builds per
    variant, we need to make sure that intermediate live-build
    artifacts from one build are not incorrectly re-used in the next
    build of the same variant, which might be for a different
    hypervisor. The simplest way to accomplish this is just to do the
    live-build in a throw-away directory.
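The throw-away-directory approach in note 3 amounts to something like the following sketch (names and paths are illustrative, not the actual build code):

```shell
# Hypothetical sketch: run each live-build in a fresh temporary directory
# so intermediate artifacts from one platform's build cannot leak into the
# next build of the same variant.
run_isolated_build() {
	local variant="$1" platform="$2" build_dir
	build_dir=$(mktemp -d)
	# (the real build would copy the variant config here and invoke live-build)
	echo "built ${variant} for ${platform}" > "${build_dir}/result"
	cat "${build_dir}/result"
	rm -rf "${build_dir}" # nothing survives to taint the next build
}
```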

In light of these changes, some of the current layout of the
repository no longer makes sense, so this change re-arranges a number
of files in the repo, particularly in the `live-build/` directory.

Co-authored-by: John Gallagher <[email protected]>
bors[bot] and jgallag88 committed Jan 25, 2019
2 parents fd2243c + ebd884a commit 3552fdc
Showing 76 changed files with 822 additions and 427 deletions.
20 changes: 19 additions & 1 deletion .gitignore
@@ -1 +1,19 @@
ancillary-repository
#
# Copyright 2018 Delphix
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

.gradle/
.gradleUserHome/
build/
8 changes: 4 additions & 4 deletions .travis.yml
@@ -11,12 +11,12 @@ services:
- docker

env:
- TARGET=ansiblecheck
- TARGET=shellcheck
- TARGET=shfmtcheck
- TARGET=ansibleCheck
- TARGET=shellCheck
- TARGET=shfmtCheck

install:
- docker build -qt appliance-build:latest docker

script:
- ./scripts/docker-run.sh make $TARGET
- ./scripts/docker-run.sh gradle $TARGET
90 changes: 0 additions & 90 deletions Makefile

This file was deleted.

84 changes: 63 additions & 21 deletions README.md
@@ -20,9 +20,9 @@ Log into that VM using the "ubuntu" user, and run these commands:
$ git clone https://github.com/delphix/appliance-build.git
$ cd appliance-build
$ ansible-playbook bootstrap/playbook.yml
$ ./scripts/docker-run.sh make internal-minimal
$ ./scripts/docker-run.sh gradle buildInternalMinimalKvm
$ sudo qemu-system-x86_64 -nographic -m 1G \
> -drive file=live-build/artifacts/internal-minimal.qcow2
> -drive file=live-build/build/artifacts/internal-minimal-kvm.qcow2

To exit "qemu", use "Ctrl-A X".

@@ -106,50 +106,92 @@ correcting any deficiencies that may exist. This is easily done like so:

Now, with the "bootstrap" VM properly configured, we can run the build:

$ ./scripts/docker-run.sh make
$ ./scripts/docker-run.sh gradle ...

This will create a new container based on the image we previously
created, and then execute "make" inside of that container.
created, and then execute "gradle" inside of that container.

The "./scripts/docker-run.sh" script can also be run without any arguments,
which will provide an interactive shell running in the container
environment, with the appliance-build git repository mounted inside of
the container; this can be useful for debugging and/or experimenting.

By default, all "internal" variants will be built when "make" is
specified without any options. Each variant will have ansible roles
applied according to playbooks in per variant directories under
live-build/variants. A specific variant can be built by passing in the
variant's name:
Each variant will have ansible roles applied according to playbooks in
per-variant directories under live-build/variants. An appliance can be
built by invoking the gradle task for the variant and platform desired.
The task name has the form 'build\<Variant\>\<Platform\>'. For instance,
the task to build the 'internal-minimal' variant for KVM is
'buildInternalMinimalKvm':

$ ./scripts/docker-run.sh make internal-minimal
$ ./scripts/docker-run.sh gradle buildInternalMinimalKvm
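The naming convention can be illustrated with a small (hypothetical) shell helper that camel-cases a hyphenated variant name plus a platform into the corresponding Gradle task name:

```shell
# Hypothetical illustration of the task-naming convention: each
# hyphen-separated word of the variant, then the platform, is capitalized
# and appended to "build".
task_name() {
	local variant="$1" platform="$2" part out="build"
	for part in $(echo "$variant" | tr '-' ' ') "$platform"; do
		out="${out}$(printf '%s' "$part" | cut -c1 | tr 'a-z' 'A-Z')$(printf '%s' "$part" | cut -c2-)"
	done
	echo "$out"
}

task_name internal-minimal kvm # prints "buildInternalMinimalKvm"
```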

When this completes, the newly built VM artifacts will be contained in
the "live-build/artifacts" directory:
the "live-build/build/artifacts/" directory:

$ ls -l live-build/artifacts
total 6.0G
-rw-r--r-- 1 root root 975M Apr 30 19:47 internal-minimal.ova
-rw-r--r-- 1 root root 1009M Apr 30 19:43 internal-minimal.qcow2
-rw-r--r-- 1 root root 2.8G Apr 30 19:44 internal-minimal.vhdx
-rw-r--r-- 1 root root 975M Apr 30 19:47 internal-minimal.vmdk
$ ls -lh live-build/build/artifacts/
total 1.9G
-rw-r--r-- 1 root root 275M Jan 11 22:31 internal-minimal-kvm.debs.tar.gz
-rw-r--r-- 1 root root 45 Jan 11 22:31 internal-minimal-kvm.migration.tar.gz
-rw-r--r-- 1 root root 636M Jan 11 22:33 internal-minimal-kvm.qcow2

The appliance produced will contain a kernel optimized for the
specified platform (which can be one of 'aws', 'azure', 'esx', 'gcp',
or 'kvm'). The appliance will also contain kernel modules built for
that optimized kernel, and perhaps some other modules relevant to that
platform only.

### Step 5: Use QEMU for Boot Verification

Once the live-build artifacts have been generated, we can then leverage
the "qemu" tool to test the "qcow2" artifact:

$ sudo qemu-system-x86_64 -nographic -m 1G \
> -drive file=live-build/artifacts/internal-minimal.qcow2
> -drive file=live-build/build/artifacts/internal-minimal-kvm.qcow2

This will attempt to boot the "qcow2" VM image, minimally verifying that
any changes to the build don't cause a boot failure. Further, after the
image boots (assuming it boots successfully), one can log in via the
console and perform any post-boot verification that's required (e.g.
verify certain packages are installed, etc).
console (username and password are both 'delphix') and perform any
post-boot verification that's required (e.g. verify certain packages are
installed, etc).

To exit "qemu", one can use "Ctrl-A X".

## Building an Upgrade Image

An upgrade image for a particular variant can be built by running the
'buildUpgradeImage\<Variant\>' tasks. For instance, the task to build
an upgrade image for the internal-minimal variant is
'buildUpgradeImageInternalMinimal':

$ DELPHIX_PLATFORMS='kvm aws' ./scripts/docker-run.sh gradle buildUpgradeImageInternalMinimal

An upgrade image can contain the necessary packages to upgrade
appliances running on multiple different platforms. Which platforms are
supported by a particular upgrade image is determined by the list of
platforms specified in the `DELPHIX_PLATFORMS` environment variable. When the
build completes, the upgrade image can be found in the "build/artifacts"
directory:

$ ls -lh build/artifacts/
total 837M
-rw-r--r-- 1 root root 837M Jan 11 22:35 internal-minimal.upgrade.tar.gz

## Using Gradle

As noted in the previous sections, the build logic is implemented using
Gradle. The most commonly used tasks are likely to be

- `build<Variant><Platform>` - Builds the given variant of the appliance for the given platform
- `buildUpgradeImage<Variant>` - Builds an upgrade image for the given variant
- `check` - Runs all style checks
- `format` - Runs all code formatting tasks
- `clean` - Removes all existing build artifacts

The complete list of tasks can be listed using the 'tasks' task:

$ ./scripts/docker-run.sh gradle tasks

## Creating new build variants

This repository contains different build variants which are used to
@@ -219,4 +261,4 @@ For this example, we add our new role to the playbook as shown below:
See the instructions [above](#step-4-run-live-build) to setup your build
environment and kick off the build:

$ ./scripts/docker-run.sh make internal-dcenter
$ ./scripts/docker-run.sh gradle buildInternalDcenterEsx
10 changes: 6 additions & 4 deletions bootstrap/roles/appliance-build.bootstrap/tasks/main.yml
@@ -37,10 +37,12 @@
- qemu
- zfsutils-linux

- docker_image:
path: "{{ toplevel.stdout }}/docker"
name: appliance-build
force: true
#
# We can't use the docker_image module because it doesn't yet support passing
# the 'network' parameter: https://github.com/ansible/ansible/pull/50313, which
# we need to be able to fetch things from Artifactory.
#
- shell: docker build --network host --tag "appliance-build:latest" "{{ toplevel.stdout }}/docker"

- modprobe:
name: zfs
110 changes: 110 additions & 0 deletions build.gradle
@@ -0,0 +1,110 @@
/*
* Copyright 2019 Delphix
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/

apply plugin: 'base'

apply from: "${rootProject.projectDir}/gradle-lib/util.gradle"

// Build upgrade images for KVM if no platforms are specified via an environment variable
def DEFAULT_PLATFORMS = 'kvm'

createArtifactsDirTask(this)

for (variant in allVariants) {
def taskName = "buildUpgradeImage${toCamelCase(variant).capitalize()}"
tasks.create(taskName, Exec) { task ->
group = 'Build'
description = "Builds an upgrade image for the ${variant} variant of the appliance"
dependsOn mkArtifactsDir

/*
* When building an upgrade image, there are two ways to get the *.debs.tar.gz artifacts
* that are produced by live build and consumed by the build-upgrade-image.sh script. We
* can directly run live build for the appropriate platforms for this variant, or we can
* fetch from S3 the artifacts from previous runs of live-build. Which strategy we use is
* controlled by the AWS_S3_URI_LIVEBUILD_ARTIFACTS and DELPHIX_PLATFORMS env variables,
* so check them and set the appropriate task dependencies.
*/
if (System.getenv("AWS_S3_URI_LIVEBUILD_ARTIFACTS") != null) {
dependsOn ":live-build:fetchLiveBuildArtifacts"
} else {
def platforms = System.getenv("DELPHIX_PLATFORMS") ?: DEFAULT_PLATFORMS
for (platform in platforms.trim().split()) {
def dependentTask = "build" +
toCamelCase(variant).capitalize() +
platform.capitalize()
dependsOn ":live-build:${dependentTask}"
}
}

for (envVar in ["DELPHIX_PLATFORMS", "AWS_S3_URI_LIVEBUILD_ARTIFACTS"]) {
inputs.property(envVar, System.getenv(envVar)).optional(true)
}

doFirst {
if (System.getenv("AWS_S3_URI_LIVEBUILD_ARTIFACTS") == null &&
System.getenv("DELPHIX_PLATFORMS") == null) {

logger.quiet("""
Neither 'AWS_S3_URI_LIVEBUILD_ARTIFACTS' nor 'DELPHIX_PLATFORMS' is defined as an
environment variable, so this upgrade image will be built for the default platform
('${DEFAULT_PLATFORMS}'). To change which platforms are included in the image,
re-run with DELPHIX_PLATFORMS set to a space-delimited list of platforms for which
to build (e.g 'DELPHIX_PLATFORMS="esx aws kvm" gradle ...') or with
AWS_S3_URI_LIVEBUILD_ARTIFACTS set to a space-delimited set of S3 URIs from which
to fetch previously built live-build artifacts.
""".stripIndent())
}
}

commandLine "${rootProject.projectDir}/scripts/build-upgrade-image.sh", "${variant}"
}
}

def shellScripts = fileTree("scripts") +
fileTree("live-build/config/hooks").include({ details ->
details.file.canExecute()
}) +
fileTree("live-build/misc/migration-scripts") +
fileTree("upgrade/upgrade-scripts", {
exclude "README.md"
})

task shfmt(type: Exec) {
commandLine(["shfmt", "-w"] + shellScripts.getFiles())
}

task shfmtCheck(type: Exec) {
commandLine(["shfmt", "-d"] + shellScripts.getFiles())
}

task shellCheck(type: Exec) {
commandLine(["shellcheck", "--exclude=SC1090,SC1091"] + shellScripts.getFiles())
}

task ansibleCheck(type: Exec) {
def ansibleFiles = fileTree("bootstrap").include("**/playbook.yml") +
fileTree("live-build/variants").include("**/playbook.yml")
commandLine(["ansible-lint", "--exclude=SC1090,SC1091"] + ansibleFiles.getFiles())
}

tasks.check.dependsOn shellCheck, shfmtCheck, ansibleCheck

task format() {
dependsOn shfmt
group = "Formatting"
description "Runs all auto-formatting tasks"
}
