Review OCI tests #269
Codecov Report

@@             Coverage Diff             @@
##             main     #269       +/-   ##
===========================================
+ Coverage   40.29%   55.47%   +15.17%
===========================================
  Files          46       46
  Lines        2169     2248       +79
===========================================
+ Hits          874     1247      +373
+ Misses       1236      909      -327
- Partials       59       92       +33

Continue to review full report at Codecov.
I'm struggling a bit to run these integration tests locally. I'm guessing we need to make sure devpool.sh has run, that containerd is running, and that CTR_SOCK_PATH is set. It would be good to document this, or to automate the setup and teardown (the teardown especially can be annoying to remember).
Also, when I have all of that set up and run make test-with-cov, the tests are still skipped. I think this is because the tests are executed with sudo in the make target and the environment is not preserved (bear in mind that we cannot do sudo make test-with-cov in the workflow file, because then they would run with a different Go version 🤦♀️).
Looking at the action logs, these integration tests are not running:
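For reference, here is a minimal sketch of the kind of environment guard that would produce the skipping described above. The variable name CTR_SOCK_PATH comes from this thread, but the guard itself is an assumption for illustration, not flintlock's actual code:

```go
package containerd_test

import (
	"os"
	"testing"
)

func TestImageServiceIntegration(t *testing.T) {
	// If sudo drops the caller's environment, this lookup comes back
	// empty and the test silently skips, which would match the
	// behaviour described above. Running `sudo -E`, or preserving the
	// variable in the make target, would keep it set.
	if os.Getenv("CTR_SOCK_PATH") == "" {
		t.Skip("CTR_SOCK_PATH is not set; skipping containerd integration test")
	}

	// ... connect to containerd via the socket and run the test ...
}
```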
Can we add some comments or log lines to each stage of the test, just so that it is easier to tell at a glance what should be happening? This will be useful later when/if something breaks, to know exactly what should be going on and whether the test is legitimately doing that. Something like: // When <condition X>, <thing Y> should happen
Log lines sound like a good idea, I'll add them. However, I'm bad at writing logs like that: I can write a long message or nothing. In this case the best I can do is simply form a sentence from the name of the function, which I think gives not much more help than the function name itself.
Note from Slack: "oh i was talking about the integration tests". Missing test run, interesting, maybe I don't set an environment variable with my action. (No, it is there.)
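As a sketch of the comment style being asked for, with a stand-in function rather than flintlock's actual API:

```go
package oci_test

import (
	"errors"
	"testing"

	g "github.com/onsi/gomega"
)

// pullImage is a stand-in for the image service call under test.
func pullImage(imageName string) error {
	if imageName == "" {
		return errors.New("image name is empty")
	}
	return nil
}

func TestPullImage(t *testing.T) {
	gm := g.NewWithT(t)

	// When the image name is empty, pulling should fail.
	err := pullImage("")
	gm.Expect(err).To(g.HaveOccurred())

	// When the image name is valid, pulling should succeed.
	err = pullImage("ghcr.io/example/image:latest")
	gm.Expect(err).NotTo(g.HaveOccurred())
}
```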
* use yitsushi/devmapper-containerd-action@v1
* extends imageservice test
* invalid image pull
* double check if volume is mounted
* volume is not mounted test
* kernel volume mount test

fixes liquidmetal-dev#15
	ImageName: testImage,
	Owner:     testOwner,
})
g.Expect(err).To(g.HaveOccurred())
let's check that the error is the one we expect here (ditto all the rest too probably)
I wouldn't go that deep, as they can be very ugly by design: we just pass the error through without checking anything on it, and at the end it's a wrapped error wrapped into a new error that's wrapped into a new error that's wrapped... and who knows how deep that goes. From the point we call client.Pull, the error is basically the same one containerd throws back, and we just wrap it with fmt.Errorf(..., err).
That would be cool if we could refactor this code to be less chaotic (and a bit more complex) in its error handling, but with the code we have now, I don't think there is any use in testing whether the error is nope wrapped in N layers.
you can do something like a ContainSubstring and look for a certain piece
but yeh LOTS of refactoring would be good in all this area
don't we wrap it? if we hit the failure at that point i would expect to have substring "listing existing containerd leases: nope"
yes we wrap, but we wrap all of them, so it will always be something: something: something: something: nope (the number of somethings changes based on what we called), but it will 100% contain nope.
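A minimal, self-contained sketch of the behaviour being described; the function names and intermediate messages are invented, but the final message shape is the one from this thread:

```go
package main

import (
	"errors"
	"fmt"
)

// errNope stands in for the raw error containerd would return.
var errNope = errors.New("nope")

func listLeases() error {
	return errNope
}

func pullImage() error {
	if err := listLeases(); err != nil {
		return fmt.Errorf("listing existing containerd leases: %w", err)
	}
	return nil
}

func getAndMountImage() error {
	if err := pullImage(); err != nil {
		return fmt.Errorf("getting image for microvm: %w", err)
	}
	return nil
}

func main() {
	err := getAndMountImage()
	// Prints: getting image for microvm: listing existing containerd leases: nope
	fmt.Println(err)

	// However deep the wrapping goes, the original error stays reachable.
	fmt.Println(errors.Is(err, errNope)) // true
}
```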
so we can still do
- g.Expect(err).To(g.HaveOccurred())
+ g.Expect(err).To(g.MatchError(ContainSubstring("listing existing containerd leases: nope")))
to match the tail end of the actual thing which caused the error and the point in our code where we caught the 3rd party fail
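As an aside, recent Gomega versions also let MatchError take an error value directly and compare it via errors.Is, so if the fake client injects a sentinel error (like the nope stub here), the test could match on the value instead of the message. A sketch, assuming the sentinel is visible to the test as errNope:

```go
g.Expect(err).To(g.MatchError(errNope))
```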
(pure personal opinion) In my eyes it falls into the "we can test it for coverage and because it looks good, but it's useless" category.
i mean it doesn't look great, but seriously 90% of the reviews i have done in my life have been "do we not want to catch that error?" followed by an "oh shit yeh!" response. so checking that errors are returned when we expect, and in the way we expect them, is an unhealthy obsession of mine 😭
@yitsushi do you know why we call
Co-authored-by: Claudia <[email protected]>
(pure assumption)
LGTM, just some final trolls
	Use:          opts.Use,
	OwnerUsageID: testOwnerUsageID,
})
Expect(err).ToNot(HaveOccurred())
- Expect(err).ToNot(HaveOccurred())
+ Expect(err).NotTo(HaveOccurred())
But but but but bu but... to not have occurred vs not to have occurred, the first one sounds better to my ears.
honestly they both sound off to me, i have had this discussion like 100 times 😂 i have a vim shortcut which autocompletes to NotTo and I have no idea why
everywhere else (in this file) we have NotTo, so for consistency i guess?
flintlock on 15-review-oci-tests via v1.17.1
❯ ag 'NotTo' | wc -l
97
flintlock on 15-review-oci-tests via v1.17.1
❯ ag 'ToNot' | wc -l
31
flintlock on 15-review-oci-tests via v1.17.1
❯ ag -l 'NotTo'
core/models/vmid_test.go
core/application/app_test.go
core/steps/runtime/dir_create_test.go
core/plans/microvm_delete_test.go
core/plans/microvm_create_update_test.go
test/e2e/utils/utils.go
test/e2e/utils/runner.go
pkg/wait/wait_test.go
pkg/log/log_test.go
pkg/validation/validate_test.go
pkg/planner/actuator_test.go
infrastructure/containerd/repo_test.go
infrastructure/containerd/image_service_integration_test.go
infrastructure/controllers/microvm_controller_test.go
infrastructure/network/utils_test.go
flintlock on 15-review-oci-tests via v1.17.1
❯ ag -l 'ToNot'
core/models/vmid_test.go
core/steps/runtime/kernel_mount_test.go
core/steps/runtime/initrd_mount_test.go
core/steps/runtime/repo_release_test.go
core/steps/runtime/volume_mount_test.go
core/steps/microvm/start_test.go
core/steps/microvm/create_test.go
core/steps/microvm/delete_test.go
core/steps/network/interface_create_test.go
core/steps/network/interface_delete_test.go
core/steps/event/publish_test.go
test/e2e/utils/utils.go
test/e2e/e2e_test.go
infrastructure/containerd/image_service_test.go
infrastructure/containerd/image_service_integration_test.go
it's more like 2/3.
lool and some of those ToNots were mine 🤦♀️
What this PR does / why we need it:
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): fixes #15
Special notes for your reviewer:
Checklist: