[e2e] Cluster is deleted before running "Full upgrade" test #2654
/assign |
They are defined in this structure:
So the AfterEach only applies at the Describe level, not to each Specify. The correct way to run the tests (and this is my fault for not documenting it appropriately) is to use the FOCUS var. Try |
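For readers without the repo handy, here is a minimal sketch of the kind of layout being discussed (the spec names and helpers are hypothetical stand-ins, not the actual CAPD code):

```go
package e2e_test

import (
	. "github.com/onsi/ginkgo"
)

var _ = Describe("Docker", func() {
	// Cleanup registered at the Describe level; whether this fires once
	// per container or after every spec is exactly what the thread below
	// sorts out.
	AfterEach(func() {
		deleteCluster()
	})

	Specify("basic cluster creation", func() {
		createCluster() // first spec creates the cluster
	})

	Specify("Full upgrade", func() {
		upgradeCluster() // second spec expects that cluster to still exist
	})
})

// Hypothetical helpers standing in for the real test code.
func createCluster()  {}
func upgradeCluster() {}
func deleteCluster()  {}
```

Ginkgo's own mechanism for this is the --focus flag (e.g. `ginkgo --focus="Full upgrade"`), which is presumably what the FOCUS variable feeds into: only the matching specs run, so the AfterEach of one spec never tears down state another spec needs.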
I was running without a FOCUS and observed it being triggered but will recheck. |
I think there's a lot of room for improvement in how the capd-e2e tests are organized. I'm not sure if this helps or not, but this is what I'm looking for in the test organization:
I think |
I started with
Logs |
Hmm, that's strange. It shouldn't be calling AfterEach here, but maybe there's something weird in our failure handler. Does it work for you when the basic test passes? |
After reading https://onsi.github.io/ginkgo/#organizing-specs-with-containers-describe-and-context it looks like my assertions are wrong and we should for sure fix this. I almost want to move the Deletes into their own Describe, but that gets weird with failure cleanups :( Maybe what I want is not easily possible with Ginkgo |
FYI: @chuckha pointed out that the basic test was failing on my local machine because of containers left over from previous runs, so it is important to have a clean Docker environment before running these tests. We still needed to avoid deleting the clusters in each Describe, so I moved the deletion to AfterSuite() in the PR. |
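For reference, a minimal sketch of suite-level cleanup in Ginkgo, assuming a single shared cluster and an illustrative deleteCluster helper (not the actual PR code):

```go
package e2e_test

import (
	"testing"

	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

func TestE2E(t *testing.T) {
	RegisterFailHandler(Fail)
	RunSpecs(t, "CAPD e2e suite")
}

// AfterSuite runs exactly once, after all specs have finished, so the
// shared cluster survives across Describes and is torn down at the end.
var _ = AfterSuite(func() {
	deleteCluster() // illustrative cleanup helper
})

func deleteCluster() {}
```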
I also ran into the upgrade issue and luckily found this bug report. |
Absolutely agree. It's not ideal and is quite confusing. I haven't found a nice way to achieve both speed and convenience, but I'm definitely open to ideas! The requirement is to run the CI job as quickly as possible (only spin up one cluster). It would be nice if we could reuse the same cluster for the other operations to optimize the run, but it's starting to look like that's not possible without internal state management, which simply doesn't exist and might be fairly ugly to implement. |
If all jobs require a baseline cluster to be deployed, then you could theoretically spin it up in the BeforeSuite and tear it down in the AfterSuite. That said, that would only work if we never expect to run the jobs in parallel, since parallel invocations could clobber each other, and each test would have to leave the cluster in the expected state at the end of its run. Unless resource utilization is a major concern, I think the best course of action would be something similar to what we are doing in the AWS tests, where each test brings up its own cluster and we improved runtime by parallelizing the test runs (though if there are shared prerequisites, you have to do some trickery to ensure those are configured correctly, but I don't expect that to be the case for CAPD). |
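A sketch of that per-test pattern, with the cluster type and helpers invented purely for illustration:

```go
package e2e_test

import (
	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

var _ = Describe("Full upgrade", func() {
	var cluster *testCluster

	// Each spec provisions and tears down its own cluster, so specs can
	// run in parallel without clobbering each other's state.
	BeforeEach(func() {
		cluster = newTestCluster()
	})

	AfterEach(func() {
		cluster.Delete()
	})

	Specify("upgrading the control plane succeeds", func() {
		Expect(cluster.Upgrade()).To(Succeed())
	})
})

// testCluster and its methods are hypothetical stand-ins for the real
// CAPD machinery.
type testCluster struct{}

func newTestCluster() *testCluster    { return &testCluster{} }
func (c *testCluster) Delete()        {}
func (c *testCluster) Upgrade() error { return nil }
```

Once clusters are isolated like this, the suite can be parallelized with `ginkgo -p`, which is roughly the AWS-tests approach described above.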
Yeah, that might be the right answer for our general use case |
Makes sense. I will spin up a different cluster for each test and then look into parallelizing them. |
/reopen to track failing e2e test. |
Should this issue have remained open? |
@wfernandes I will create another issue so we don't have to reread this thread, since it includes some already-solved issues.
What steps did you take and what happened:
Run e2e tests by
There are two e2e tests for Docker. The cluster created in the first test is deleted in AfterEach(), even though it is also used by the second test.
What did you expect to happen:
I'd expect it either not to delete the cluster after each test, or to make the tests self-sufficient so that they don't share any state.
/kind bug