
Integration Tests #151

Closed
richardcase opened this issue Aug 8, 2018 · 9 comments
Labels
help wanted Extra attention is needed
Milestone

Comments

@richardcase
Contributor

A general testing strategy was introduced with #46 and there has been progress on including tests. However, no work has been done on integration tests.

This issue is to track progress on creating a separate integration test suite that tests against AWS. This suite can be run manually initially, but if we can find a sponsor for the AWS costs then we can run it automatically on a regular basis.

I quite like how minikube has done this. We'd have to perform operations with eksctl and then check it's done what we expected using the AWS SDK directly (or something else). Also, this package could help with the integration suite.
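A minimal sketch of that verification step, assuming the state is first fetched with the AWS SDK (e.g. `eks.DescribeCluster` plus the EC2/Auto Scaling APIs) into a plain struct; the `ClusterState` and `VerifyCluster` names here are hypothetical, not eksctl code:

```go
package main

import "fmt"

// ClusterState captures the fields we care about from the AWS API responses.
// In a real test these would be populated via the AWS SDK after running eksctl.
type ClusterState struct {
	Status    string
	NodeCount int
}

// VerifyCluster returns an error describing the first mismatch, or nil if the
// cluster looks the way `eksctl create cluster` should have left it.
func VerifyCluster(got ClusterState, wantNodes int) error {
	if got.Status != "ACTIVE" {
		return fmt.Errorf("cluster status %q, want ACTIVE", got.Status)
	}
	if got.NodeCount != wantNodes {
		return fmt.Errorf("node count %d, want %d", got.NodeCount, wantNodes)
	}
	return nil
}

func main() {
	// Fabricated state, just to show the assertion shape.
	state := ClusterState{Status: "ACTIVE", NodeCount: 2}
	if err := VerifyCluster(state, 2); err != nil {
		fmt.Println("FAIL:", err)
		return
	}
	fmt.Println("cluster verified")
}
```

Keeping the assertions in a pure helper like this makes them unit-testable without touching AWS.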

@errordeveloper errordeveloper modified the milestones: 0.1.0, 0.1.1 Aug 9, 2018
@errordeveloper
Contributor

errordeveloper commented Aug 9, 2018

Adding this to the 0.1.1 milestone for now, to indicate that it should block the release until we have a basic test suite.

@richardcase
Contributor Author

Gomega will also help with this, specifically this.
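The core of what gexec gives us — start the eksctl binary, wait for it, assert on the exit code — can be sketched with just the standard library; in a Ginkgo suite the equivalent would be `gexec.Start(cmd, ...)` followed by `Eventually(session).Should(gexec.Exit(0))`:

```go
package main

import (
	"fmt"
	"os/exec"
)

// RunCommand runs a binary to completion and returns its exit code.
func RunCommand(name string, args ...string) (int, error) {
	err := exec.Command(name, args...).Run()
	if err == nil {
		return 0, nil
	}
	if exitErr, ok := err.(*exec.ExitError); ok {
		return exitErr.ExitCode(), nil
	}
	return -1, err // command could not be started at all
}

func main() {
	// An integration test would invoke the real binary here, e.g.
	// RunCommand("eksctl", "create", "cluster", "--nodes", "2").
	// This stand-in assumes a Unix shell is available.
	code, err := RunCommand("sh", "-c", "exit 0")
	fmt.Println(code, err)
}
```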

@richardcase richardcase self-assigned this Aug 9, 2018
@richardcase richardcase mentioned this issue Aug 17, 2018
@richardcase
Contributor Author

richardcase commented Aug 22, 2018

@errordeveloper @Raffo - if we are creating an integration test for "cluster creation", we want to check that the command completes and the exit code is 0. To what level of detail should we check what's created on AWS? For instance, do we only care that we have an EKS cluster, the right number of instances, and that the instance sizes are correct? And we don't care that we have CloudFormation stacks named X, Y & Z?

@errordeveloper
Contributor

To what level of detail should we check what's created on AWS? For instance, do we only care that we have an EKS cluster, the right number of instances, and that the instance sizes are correct?

I am not entirely sure this is worth checking so deeply; we should trust the APIs and have a unit test that parses flags, applies defaults, and checks the resulting config struct. After the CloudFormation changes land, we should be able to test such things in stages, I believe.

The create cluster code path already waits for --nodes/--min-nodes, so in effect a 0 exit code guarantees this already. I think what will be most critical to check is whether the resulting kubeconfig can be used to deploy a workload and that the workload runs. Eventually we will need more rigorous tests that e.g. create an ELB and access the workload, but I think it's okay to leave that out for the purposes of the 0.1.1 release.

And we don't care that we have CloudFormation stacks named X, Y & Z?

I would actually test that (it should be fairly easy). It will change, but it'd make sense to validate this. And when we add a test for deletion, I'd check that those same stacks get deleted. But I don't think we need all that for 0.1.1.
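A sketch of that stack check, assuming a naming convention of the form `eksctl-<cluster>-cluster` and `eksctl-<cluster>-nodegroup-<name>` (the exact prefixes are an assumption here and may differ by version). A test would list stacks via `cloudformation.DescribeStacks`, check the expected names are present after creation, and check they are gone after deletion:

```go
package main

import "fmt"

// ExpectedStackNames returns the CloudFormation stack names we expect eksctl
// to have created for a cluster (naming convention assumed, not guaranteed).
func ExpectedStackNames(cluster string, nodegroups []string) []string {
	names := []string{fmt.Sprintf("eksctl-%s-cluster", cluster)}
	for _, ng := range nodegroups {
		names = append(names, fmt.Sprintf("eksctl-%s-nodegroup-%s", cluster, ng))
	}
	return names
}

// MissingStacks reports which expected stack names are absent from the list
// returned by a DescribeStacks call.
func MissingStacks(expected, actual []string) []string {
	have := make(map[string]bool, len(actual))
	for _, name := range actual {
		have[name] = true
	}
	var missing []string
	for _, name := range expected {
		if !have[name] {
			missing = append(missing, name)
		}
	}
	return missing
}

func main() {
	// After `eksctl create cluster`, MissingStacks should return nothing;
	// after `eksctl delete cluster`, it should return every expected name.
	fmt.Println(ExpectedStackNames("it-cluster", []string{"ng-1"}))
}
```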

@richardcase richardcase mentioned this issue Aug 22, 2018
5 tasks
@errordeveloper
Contributor

It'd be nice to figure out how to get coverage reports; I recall we had something in Scope that did this. But we surely don't need it for 0.1.1, just a note for now.

@errordeveloper
Contributor

errordeveloper commented Sep 4, 2018

@richardcase I think we should be able to close this, once we have defined some follow-up improvements based on the conversation here.

So far this is what I've been able to parse:

  • test how flag combinations affect the resulting CloudFormation stacks (requires a mode where we only output the stacks as JSON) – this is kind of a unit test, but we might go via an exec
  • more rigorous tests that create an ELB, attach a volume, etc.
  • delete via eksctl delete cluster, check that resources get deleted

Anything you'd like to add?

@Raffo do you have any thoughts on this?

@richardcase
Contributor Author

The integration test that we started was always meant to be (in my head, that is) a complete "happy path" lifecycle test, so:
create->get->delete
Perhaps we could hold off closing this issue until that is implemented and, as you say, until we define the future improvements.

I like your idea of testing combinations of the command line and what effect this has on the resulting stack. I have seen this done with a table test where there is a list of 'parameter files' that specify different combinations of command-line args and what the resulting output should be. I think this would enable us to cover a lot of use cases without having to write repetitive tests.

I'd be happy to look at this fairly soon if you felt it would be a valuable addition.
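The table-test idea above might look something like this: each case pairs a set of CLI args with the config struct we expect after parsing and defaulting. The `buildConfig` helper, the struct fields, and the defaults (2 nodes, m5.large) are illustrative assumptions, not eksctl's actual code:

```go
package main

import (
	"flag"
	"fmt"
)

// ClusterConfig is a hypothetical subset of the config built from flags.
type ClusterConfig struct {
	Nodes    int
	NodeType string
}

// buildConfig parses one combination of CLI args, applying defaults.
func buildConfig(args []string) (ClusterConfig, error) {
	fs := flag.NewFlagSet("create-cluster", flag.ContinueOnError)
	nodes := fs.Int("nodes", 2, "desired node count")
	nodeType := fs.String("node-type", "m5.large", "EC2 instance type")
	if err := fs.Parse(args); err != nil {
		return ClusterConfig{}, err
	}
	return ClusterConfig{Nodes: *nodes, NodeType: *nodeType}, nil
}

func main() {
	// The table: in the issue's proposal these combinations would live in
	// per-case 'parameter files' rather than inline.
	cases := []struct {
		args []string
		want ClusterConfig
	}{
		{nil, ClusterConfig{Nodes: 2, NodeType: "m5.large"}},
		{[]string{"--nodes", "3"}, ClusterConfig{Nodes: 3, NodeType: "m5.large"}},
		{[]string{"--nodes", "5", "--node-type", "t2.medium"}, ClusterConfig{Nodes: 5, NodeType: "t2.medium"}},
	}
	for _, c := range cases {
		got, err := buildConfig(c.args)
		if err != nil || got != c.want {
			fmt.Printf("args %v: got %+v, want %+v (err %v)\n", c.args, got, c.want, err)
		}
	}
	fmt.Println("all cases checked")
}
```

Adding a use case then means adding one table entry (or one parameter file), not a new test function.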

@Lazyshot

An initial pass on the integration tests including a single "happy path" was put into #255.

@gemagomez

We are closing this issue; it will be addressed as part of smaller, more actionable feature chunks.

torredil pushed a commit to torredil/eksctl that referenced this issue May 20, 2022