Add Second partition for storage-drive testing #3594
Conversation
Force-pushed from e436b79 to 1f07da1
Tests aren't hip @cevich
Oh no they're not, completely FUBARd. I'm on one of the VMs twiddling bits, trying to figure out how to make this work :D
Force-pushed from 6fa1242 to c2b366d
☔ The latest upstream changes (presumably #3601) made this pull request unmergeable. Please resolve the merge conflicts.
Force-pushed from baa0572 to 4d1122b
Force-pushed from b882875 to 2468826
Force-pushed from 3f0cfd6 to 87f4677
Force-pushed from f7e27de to 297e81b
@rhatdan @nalind PTAL, this is ultimately for buildah's benefit. Once this can merge, I'll work on bringing the capability over there. @mheon @baude I'm assuming libpod CI doesn't currently need to test with a device-mapper storage driver or a similar "special" storage setup (swap enabled?). Otherwise, let me know.
LGTM
Oh whoops. No matter,
@rhatdan this PR is the other reason I want to try using the libpod-produced VM cache-images in c/storage and c/buildah. (The first reason being: fixing the stupid "dpkg is locked" thing.)
/approve |
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: cevich, mheon. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
@mheon @edsantiago @vrothberg @TomSweeneyRedHat @baude PTAL and let's get this merged.
LGTM
I'm pretty baffled by this, but that's because I have no understanding of the /dev/sda
setup offered by the CI environment. It might be nice, in this or a future PR, to document that and
explain how it is that parted can be relied upon to have that space available on the sda device. As long as this works, I guess it's OK; it's just confusing.
I'll add a comment about this. The "expansion space" exists because of the difference between the base-image build-time storage request (20gig) and the runtime request (200gig, always, otherwise gcloud issues a warning). We have a unit-test in place to catch the 200gig case, and I tested this manually to verify. There are other ways this can break though, and no good solutions that are less complicated (I spent about two weeks trying - no joke 😦)
This is mainly/initially to support use of Cirrus-CI in https://github.com/containers/buildah since that setup re-uses the VM images from this project. However, it also opens doors here, if libpod ever needs/wants to do things with a dedicated storage device and/or storage-drivers. Signed-off-by: Chris Evich <[email protected]>
Force-pushed from 297e81b to 0a05af1
That's right - I remember now. It's just that there are so many obscure trivial details to remember. Comments are helpful for that. Thanks.
Well, I guess that makes me the king of obscure trivial details then (depending on my current level of caffeination) 🤣 I do really appreciate the feedback. There's nothing better than another pair of eyes to spot camouflaged assumptions.
(This PR does not need to go in until after release)
/lgtm |
This is mainly/initially to support use of Cirrus-CI
in https://github.com/containers/storage since that setup
re-uses the VM images from this project. However, it also
opens doors here, if libpod ever needs/wants to do things
with a dedicated storage device and/or storage-drivers.
Depends on #3632