RFC: Add a docker-app package manager #1189
Conversation
This allows an image to include references to a docker app. Details on docker apps can be found here: advancedtelematic/aktualizr#1189 Signed-off-by: Andy Doan <[email protected]>
It would probably be nice to run an ostree+docker-app install configuration through this logic once merged:
You mean for testing? Or do you want to pipe OSTree updates through docker somehow? Thanks for your patience with this; we just moved offices and have been busy/distracted with several important things. We haven't forgotten, just catching up with other tasks!
Yes. Once that's merged, I was thinking I should rebase this branch and update that test to also check this new code path.
As an FYI to people following along trying to understand how this all can fit together: here's how I'm planning to manage publishing our "release" OTA builds with docker-apps merged in: https://github.com/foundriesio/core-containers/pull/2
Ping: now that the latest release is tagged, I was hoping I might start getting some preliminary feedback.
On the whole, I think this is looking good. I appreciate your patience; it seems cool. Naturally, I have some questions, though!
bool start() {
  auto bin = boost::filesystem::canonical(compose_bin).string();
  std::string cmd("cd " + app_root.string() + " && " + bin + " up --remove-orphans -d");
  if (std::system(cmd.c_str()) != 0) {
There's no problem with using std::system directly, but any reason to prefer it over Utils::shell?
Good question. The "start" command can take a while to run, so std::system will print out progress as it's made. So basically it's a little nicer this way when watching things run. Not sure that's enough to justify doing things differently than everywhere else in the project, though.
If there's a good reason, it's no problem. It would also be interesting to consider a version of Utils::shell that piped output directly to the logger. For now, it's probably fine, but I'd suggest leaving a comment to explain what you just explained here so that it doesn't get changed by an overzealous refactorer in the future.
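For what it's worth, the "Utils::shell that pipes output directly to the logger" idea could be sketched roughly like this. This is not aktualizr's real API: the helper name and the std::cout stand-in for the logger are assumptions, but it shows why streaming beats buffering for a long-running `docker-compose up`.

```cpp
#include <cstdio>
#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical Utils::shell-style helper that forwards the child's output
// as it arrives instead of buffering it until the command finishes, which
// is the progress-visibility benefit std::system gives today.
int shell_streamed(const std::string& cmd) {
  FILE* pipe = popen(cmd.c_str(), "r");
  if (pipe == nullptr) {
    throw std::runtime_error("popen failed for: " + cmd);
  }
  char buf[512];
  while (fgets(buf, sizeof(buf), pipe) != nullptr) {
    std::cout << buf;  // forward each chunk immediately (stand-in for the logger)
  }
  // pclose() returns the child's termination status (as from wait());
  // 0 means the command exited successfully.
  return pclose(pipe);
}
```

A real version would route the chunks through aktualizr's logging macros instead of std::cout, but the popen/fgets loop is the essential difference from a buffered Utils::shell.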
auto apps = target.custom_data()["docker_apps"];
bool res = true;
Uptane::ImagesRepository repo;
repo.checkMetaOffline(*storage_);
Why checkMetaOffline here? That should be unnecessary unless I'm overlooking something. It is on our radar that we should recheck the metadata (offline) before downloading and installing; we currently do it redundantly during checkUpdates. It's on my todo list, but once it is fixed, it will be fixed for all package managers, which would make this call explicitly redundant (if I'm not mistaken).
checkMetaOffline pulls in data from INvStorage to properly initialize the targets member. It could be avoided by getting the SotaUptaneClient to pass its copy of that variable, but when I tried that approach I didn't really like the way the code looked. I'm open to trying it that way again and letting you decide which you prefer.
Yeah, that would require changing all of the package managers to support it. For now it's probably fine; it just seems like an indirect way to get what you are looking for. Again, a comment to explain why the check is useful might be helpful.
cd $DEST_DIR
echo "Running repo server port on $PORT"
find ./
exec python3 -m http.server $PORT
Not critical, but it might be worth naming this with the word "test" somewhere in the name just to make it clear at a high level what the intention is.
CopyFromConfig(docker_app_bin, "docker_app_bin", pt);
CopyFromConfig(docker_compose_bin, "docker_compose_bin", pt);
}
#endif
Ideally, we wouldn't have package manager-specific configuration options, but I think we need to think a bit more about how to make that a reality, so for now, this is probably fine. We have some examples of config sections that are specific to certain modules, but I don't know if that'd work here because of the package manager design.
Can you provide an example of what the docker_apps_root would be in practice? Would you imagine it being somewhere in /var, or should it really be a temporary directory? And you don't seem to use docker_app_params; can you explain when it would be helpful? (My apologies if this is obvious to people who use docker-apps more regularly.)
(Edited because I read your commit messages again.)
Can you provide an example of what the docker_apps_root would be in practice? Would you imagine it being somewhere in /var, or should it really be a temporary directory?
It winds up being a directory structure like:
root//{logs.dockerapp,docker-compose.yml}
I really just wanted this to use something relative to StorageConfig::path, and the configuration I set for everyone has been /var/sota/docker-apps. I didn't see a great way to infer this that was very clean. I'm open to suggestions. Maybe we could do something like this:
- Change this directory to always be compiled in and be named PackageManagerConfig::packages_dir.
- Change Config::postUpdateValues to always set this relative to StorageConfig::path?
I really just wanted this to use something relative to StorageConfig::path and the configuration I set for everyone has been /var/sota/docker-apps. I didn't see a great way to infer this that was very clean. I'm open to suggestions.
Now I understand. We have the same issue with import.base_path. Currently it is actually independent of storage.path, but in practice we never change the defaults, which are /var/sota/import and /var/sota, respectively. It probably makes sense to copy that style for now, but I'm open to discussion on that point. Either of your alternate suggestions could work, but I want to keep custom logic in the Config class to a minimum if possible.
// The DockerAppManager needs the actual Target as defined in targets.json so
// it can access the custom data.
auto repo_target = findTargetInDelegationTree(target);
return package_manager_->install(*repo_target);
Why is the target received via uptaneInstall not good enough? It's whatever you provide it, normally what you receive from checkUpdates. Is it an issue of the version from Director vs. Images? If so, this is another "known issue" type of thing. I'm trying to think of better logic to handle comparing and synchronizing the metadata.
It's the Director's version of the target. What you get from the Director will not have the custom data in it. So things worked just fine when I first tested with aktualizr-lite, but when I started testing with OTA Connect it failed.
Yeah, I'm not surprised it didn't work. Unfortunately that won't "just work". What really needs to happen is that we need logic to determine which things must match and which things can be pulled from one or the other. The first part is partly related to an Uptane question that was resolved in uptane/uptane-standard#98, which we haven't fully implemented yet. The second part is up to the implementation to decide. I don't think we can get away with simply throwing away what is unique to the Director version, and clearly just ignoring what is in the Images version doesn't work for you. I'm aware that we need to sort this out, but it's not even on my roadmap right now. You could try to take a stab at it (or simply use the Director version and copy the custom field in question from the Images version, leaving the rest of this issue untouched), or you can wait until someone on our side can get around to it.
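The "copy just the custom field" compromise could look roughly like this. It is an illustrative sketch only: Target here is a simplified stand-in for aktualizr's Uptane::Target (which this code does not use), and the flattened string map is an assumption made to keep the example self-contained.

```cpp
#include <map>
#include <string>

// Simplified stand-in for Uptane::Target: just a name plus flattened
// custom metadata. NOT the real aktualizr type.
struct Target {
  std::string filename;
  std::map<std::string, std::string> custom;
};

// Install the Director's version of the target (authoritative for
// everything else), grafting in only the "docker_apps" custom field
// from the Images repo's copy, since the Director metadata lacks it.
Target mergeDockerApps(const Target& director, const Target& images) {
  Target merged = director;  // Director's version wins by default
  auto it = images.custom.find("docker_apps");
  if (it != images.custom.end()) {
    merged.custom["docker_apps"] = it->second;  // copy just the one field
  }
  return merged;
}
```

The point of the narrow copy is that nothing unique to the Director version is thrown away, while everything else unique to the Images version is deliberately ignored until the larger metadata-synchronization question is settled.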
Now that I've gotten most of the target iterator stuff out of sotauptaneclient, I think I could handle this lookup directly in the docker-app logic so that we aren't throwing away the original Director target. I'll give that a shot and see what happens.
client->package_manager_->install(target);
std::string content = Utils::readFile(config.pacman.docker_apps_root / "app1/docker-compose.yml");
ASSERT_EQ("DOCKER-APP RENDER OUTPUT\nprimary\n", content);
I think the fact that the docker-app file being used is named "primary.txt" is confusing me. So is this actually testing that you can install a file with that name as a docker app? I think an explanation of what this test is specifically covering would be really helpful. Historically, we've linked test comments to items in the actions.md file to track what is covered by tests. That might get revised, but either way, at least having a comment preceding this test to explain what makes it interesting would be great.
I'd just copy/pasted that from another test in this repository. I can fix that.
@patrickvacek - just did a force-push with some cleanups. I think the biggest thing I haven't addressed is the config entry for
@patrickvacek - I think this addresses everything now. Please take a close look at aefca65 to make sure that's sufficient compared to how I was doing it with the previous hack in sotauptaneclient.cc. I've hit one interesting issue while testing, but I think it may be orthogonal to this: my OTA-TUF repo server is configured to store targets in AWS S3. I've put the docker app target files there and things have worked just fine under aktualizr-lite. However, when I run aktualizr, I get this error:
I grabbed that URL right after the error and could download it with
Codecov Report

@@            Coverage Diff            @@
##             master    #1189   +/-   ##
=========================================
  Coverage          ?   79.33%
=========================================
  Files             ?      170
  Lines             ?    10044
  Branches          ?        0
=========================================
  Hits              ?     7968
  Misses            ?     2076
  Partials          ?        0
=========================================

Continue to review full report at Codecov.
I think something might be missing... I'm still seeing the change in sotauptaneclient.cc that fetches the image repo target and uses that instead.
I agree, it does look like the problem is something with using SSL certs when they aren't necessary. Not sure why that only came up now when it normally works. I guess we normally go through our treehub instance and get redirected, so maybe we just assume we always need them. It is probably an orthogonal issue that might be better addressed in a separate PR, although I'm not quite sure what the right solution is yet.
I've pushed the proper update that doesn't alter sotauptaneclient. I agree with you that the SSL issue can probably be treated orthogonally for now.
Force-pushed from a949abc to 94f08ba.
This will be an array of docker-apps the device will be deploying. Signed-off-by: Andy Doan <[email protected]>
This will be needed for the DockerAppManager to know if there's update data it needs to handle. Signed-off-by: Andy Doan <[email protected]>
PV: fixed a CMake block warning. Signed-off-by: Andy Doan <[email protected]> Signed-off-by: Patrick Vacek <[email protected]>
Force-pushed from 94f08ba to 5612a16.
These are the places we'll need to do docker-app actions. PV: fix formatting. Signed-off-by: Andy Doan <[email protected]> Signed-off-by: Patrick Vacek <[email protected]>
Signed-off-by: Andy Doan <[email protected]>
Signed-off-by: Andy Doan <[email protected]>
Docker Apps have the notion of a local parameters file that allows you to render an app with host-specific customizations. An example of a parameters file is: https://github.com/docker/app/blob/master/examples/voting-app/voting-app.dockerapp/parameters/production.yml Where its default parameters are: https://github.com/docker/app/blob/master/examples/voting-app/voting-app.dockerapp/parameters.yml Signed-off-by: Andy Doan <[email protected]>
Force-pushed from 5612a16 to 3879216.
@doanac I rebased to fix a merge conflict and fixed a couple of very minor issues. It looks fine from my perspective and can be merged if you are okay with it!
@patrickvacek +1. Thanks so much for your help!
This is related to issue #1118. This PR adds an optional (both compile-time and run-time) package manager to aktualizr that handles OSTree based systems that can be customized with docker-apps.
Conceptually, docker-app is almost identical to docker-compose. However, it addresses some issues with docker-compose that make it very attractive for our device-management use case:
https://github.com/docker/app/#the-problem-application-packages-solve
Docker app is still under active development (as of April 2019), but is functional enough for our needs. There are two ways you can use docker apps right now:
For now, I'm using approach #1. This could easily be changed, runtime-configured, or #ifdef'd in the future.
What It Looks Like
The best place to start looking at this is from the TUF targets.json file. We first add new docker app targets to the file that look like:
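The actual snippet isn't preserved in this thread, but a docker-app target entry in targets.json would plausibly look something like the sketch below. All field values are illustrative assumptions; only the "logs.dockerapp" filename is taken from the directory layout shown later in the discussion, and the hashes/length/custom keys follow the general TUF targets format.

```json
{
  "targets": {
    "logs.dockerapp-1": {
      "hashes": { "sha256": "<sha256 of the .dockerapp file>" },
      "length": 812,
      "custom": { "version": "1" }
    }
  }
}
```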
The OTA Connect Reposerver also stores a copy of these files which are valid docker-app formatted.
An installation target in TUF can have an optional "custom" data dictionary. We are (ab)using this value to link an OSTree install target with a list of valid docker-apps:
The docker_apps dictionary provides pointers to the correct docker-app file/version for this OSTree image. This gives aktualizr all the information it needs to then manage docker apps.
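As a hedged illustration of that custom dictionary (the PR's exact key names aren't reproduced in this thread; "docker_apps" and the app name "logs" come from the discussion above, while the nested structure is assumed):

```json
{
  "custom": {
    "docker_apps": {
      "logs": {
        "filename": "logs.dockerapp-1"
      }
    }
  }
}
```

Under this shape, each entry names an app and points at the specific .dockerapp target (by filename/version) that the OSTree image expects, which is exactly the pointer the DockerAppManager needs at install time.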