Begin exec rework #5088
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: mheon. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Update: looking more into exec sessions in Docker, it looks like they are removed immediately after termination - probably just the exit code is retrieved via Inspect. We will need the ability to autoremove exec sessions for detached containers. I still see some potential for leaking, but we can probably counter that by clearing them on container restart.
Force-pushed from 21e3a65 to 4204f35
DB changes should be complete. Still doesn't compile, but all functions are there. Going to convince everything to use the new exec sessions, then write unit tests for the DB.
Force-pushed from 4204f35 to 46aeb44
Exec API has been rewritten to use the new structure.
Force-pushed from 46aeb44 to f304b6a
Force-pushed from f304b6a to 96a5bd4
Libpod work mostly done. It compiles, even. Next steps: need to rewrite resize code, write unit tests for DB code.
Force-pushed from 5e12f80 to 8cb75a8
Force-pushed from 8cb75a8 to 3395e88
Added resizing. Going to chase down test failures, add some unit tests, then we can merge.
Force-pushed from 3395e88 to 1202f9d
Rebased, healthcheck errors should be fixed. Stripping WIP, should be ready for merge.
sessionExists := execBucket.Get(sessionID)
if sessionExists == nil {
	return define.ErrNoSuchExecSession
wrap with id?
up to you i guess, if the caller wraps it, then ok too
libpod/boltdb_state_internal.go (Outdated)
		return errors.Wrapf(define.ErrCtrExists, "container %s has active exec sessions: %s", ctr.ID(), strings.Join(sessions, ", "))
	}
}
this block and the previous block at 800 appear to be the same and also have the same error message. any chance we could stub that out?
Actually they're completely redundant. Oops.
libpod/container_exec.go (Outdated)
}

if session.State != define.ExecStateRunning {
	return errors.Wrapf(define.ErrExecSessionStateInvalid, "can only stop running exec sessions, while container %s session %s state is %q", c.ID(), session.ID(), session.State.String())
% is not running, current state %s ?
Fixed
lgtm, couple of things to think about
Force-pushed from 328d53b to 8e180ab
As part of the rework of exec sessions, we want to split Create and Start - and, as a result, we need to keep everything needed to start exec sessions in the struct, not just the bare minimum for tracking running ones. Signed-off-by: Matthew Heon <[email protected]>
As part of the rework of exec sessions, we need to address them independently of containers. In the new API, we need to be able to fetch them by their ID, regardless of what container they are associated with. Unfortunately, our existing exec sessions are tied to individual containers; there's no way to tell what container a session belongs to and retrieve it without getting every exec session for every container. This adds a pointer to the container an exec session is associated with to the database. The sessions themselves are still stored in the container. Exec-related APIs have been restructured to work with the new database representation. The originally monolithic API has been split into a number of smaller calls to allow more fine-grained control of lifecycle. Support for legacy exec sessions has been retained, but in a deprecated fashion; we should remove this in a few releases. Signed-off-by: Matthew Heon <[email protected]>
This produces detailed information about the configuration of an exec session in a format suitable for the new HTTP API. Signed-off-by: Matthew Heon <[email protected]>
Force-pushed from 8e180ab to e89c638
Rebased on top of fixed Conmon. Fingers are crossed.
@mheon is this the same issue you were seeing or a different one?
Tests are (amazingly) passing now, save a network flake. @TomSweeneyRedHat @rhatdan @vrothberg PTAL, let's see about getting this landed.
@rhatdan Different issue, looks like a flake in the APIv2 tests.
@mheon Do we want to get this into podman 1.8.2 or wait until we ship it?
LGTM
LGTM and happy green tests buttons.
/hold
/hold cancel
As part of the rework of exec sessions, we need to address them independently of containers. In the new API, we need to be able to fetch them by their ID, regardless of what container they are associated with. Unfortunately, our existing exec sessions are tied to individual containers; there's no way to tell what container a session belongs to and retrieve it without getting every exec session for every container.
I originally debated just adding a registry to the database to point individual exec sessions at each container, but if we're going so far as to completely restructure exec sessions, it makes sense to store them separately as well. Existing exec sessions will be retained as-is for legacy purposes, while the new-style exec sessions will be stored separately from containers and accessed via their own API endpoints.
There's some potential performance impact to be discussed here: we'll now need one DB call to get each exec session, and then one call per exec session to update its state, whenever we need to do an operation that checks for running exec sessions. I'm also concerned about how long sessions remain in the database; I need to verify on the Docker side when they are cleaned up (only on reboot? on container restart as well? when the exec session finishes?).
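The access pattern behind that performance concern (one call to fetch the session list, plus one call per session) can be sketched as follows; `fakeDB` and the function name are hypothetical stand-ins for the bolt-backed state, and the read counter makes the cost explicit:

```go
package main

import "fmt"

type execState int

const (
	stateCreated execState = iota
	stateRunning
	stateStopped
)

// fakeDB stands in for the bolt database: one bucket read to list a
// container's exec sessions, then one read per session for its state.
type fakeDB struct {
	ctrSessions  map[string][]string  // container ID -> session IDs
	sessionState map[string]execState // session ID -> state
}

// hasRunningExecSessions models the pattern discussed above and
// returns the number of DB reads it needed: 1 + N for a container
// with N exec sessions in the worst case, which is the cost the
// description worries about.
func hasRunningExecSessions(db *fakeDB, ctrID string) (bool, int) {
	reads := 1 // fetch the container's session ID list
	for _, id := range db.ctrSessions[ctrID] {
		reads++ // fetch this session's state
		if db.sessionState[id] == stateRunning {
			return true, reads
		}
	}
	return false, reads
}

func main() {
	db := &fakeDB{
		ctrSessions:  map[string][]string{"ctrA": {"s1", "s2"}},
		sessionState: map[string]execState{"s1": stateStopped, "s2": stateRunning},
	}
	running, reads := hasRunningExecSessions(db, "ctrA")
	fmt.Println(running, reads)
}
```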
This is very early; I'm only about halfway through DB changes, which will then require a refactor of existing exec code to use the new fields in the DB, and associated changes to the lifecycle of exec sessions.
TODO: