Increase test coverage of deployment and task serializers #138
Conversation
This looks really great! I think these tests have really helped to highlight some design issues with our existing code. Going through the tests, a few things popped up which I thought would be good to discuss during our regular meeting.
- Unlike our cloudbridge tests, the Django tests themselves are quite opaque because we are testing at a REST API level - it's sometimes hard to follow the functional intent - and that may explain why we never really got around to writing these in the first place. The objects are quite low level and the tests also directly create backend objects, so the actual logic of the test gets lost in the details. I think the API refactoring work you're doing will help greatly here - but once that's done, how can we make these tests better/clearer?
- What should we do about testing the Celery async tasks?
def test_only_one_launch_task(self):
    """Test LAUNCH task not allowed if one already exists."""
    app_deployment = self._create_test_deployment(user=self.user)
    ApplicationDeploymentTask.objects.create(
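        # (hunk truncated in the scrape - a hedged guess at the continuation;
        #  the field names, URL, and expected status below are illustrative,
        #  not the PR's actual code)
        deployment=app_deployment,
        action=ApplicationDeploymentTask.LAUNCH)
    # a second LAUNCH request via the API should then be rejected
    url = "/api/v1/deployments/%s/tasks/" % app_deployment.id
    response = self.client.post(url, {"action": "LAUNCH"})
    self.assertEqual(response.status_code, 400)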
I think this test has really highlighted a design issue with the existing cloudlaunch code - we should probably move the duplicate launch check from the serializer to the model?
Yeah, I think that's a good idea.
@nuwang after doing some additional reading, the way I understand it, in DRF you should do validation in the serializer, not in the model. See "Differences between ModelSerializer validation and ModelForm":
This change also means that we no longer use the .full_clean() method on model instances, but instead perform all validation explicitly on the serializer. This gives a cleaner separation, and ensures that there's no automatic validation behavior on ModelSerializer classes that can't also be easily replicated on regular Serializer classes.
...
The one difference that you do need to note is that the .clean() method will not be called as part of serializer validation, as it would be if using a ModelForm. Use the serializer .validate() method to perform a final validation step on incoming data where required.
.clean() is the most natural place to put the model validation, but DRF recommends against that. The way it is currently implemented, with the validate_action() method, is the recommended way.
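A minimal sketch of that recommended shape, assuming the model names from the diff hunks in this PR and a hypothetical serializer class; how the target deployment reaches the serializer context depends on the view:

from rest_framework import serializers

from cloudlaunch.models import ApplicationDeploymentTask  # assumed module path


class DeploymentTaskSerializer(serializers.ModelSerializer):  # hypothetical name

    class Meta:
        model = ApplicationDeploymentTask
        fields = "__all__"

    def validate_action(self, value):
        # DRF calls validate_<field_name> hooks during is_valid(),
        # before the object is ever saved
        deployment = self.context["deployment"]  # hypothetical context key
        if (value == ApplicationDeploymentTask.LAUNCH and
                ApplicationDeploymentTask.objects.filter(
                    deployment=deployment,
                    action=ApplicationDeploymentTask.LAUNCH).exists()):
            raise serializers.ValidationError(
                "A LAUNCH task already exists for this deployment.")
        return value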
I think the original reason this would have been useful was to support the internal API layer. In that case, when saving a deployment task directly, the duplicate task check wouldn't have run, since it lives in the serializer. I still wonder whether this constraint should ideally be double-checked in the save() method, since the constraint shouldn't be violated no matter how we save the object?
    )
    return app_deployment

@patch("cloudlaunch.tasks.health_check.delay")
I was wondering - since there is a Celery option to execute tasks synchronously (task_always_eager, formerly CELERY_ALWAYS_EAGER), should we allow the task to execute, or just mock it out?
I decided to mock it out because I just want to guard against regressions when refactoring the code in the serializers and views. For the refactor, it doesn't much matter what happens in the task itself.
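For reference, a self-contained sketch of the patching pattern, with a hypothetical test class name (the patch target is the one used in this PR):

from unittest.mock import patch

from django.test import SimpleTestCase


class HealthCheckDispatchTests(SimpleTestCase):  # hypothetical test class

    @patch("cloudlaunch.tasks.health_check.delay")
    def test_task_is_enqueued_not_executed(self, mock_delay):
        mock_delay.return_value.id = "fake-celery-id"  # stand-in AsyncResult id
        from cloudlaunch import tasks  # module implied by the patch target
        result = tasks.health_check.delay()  # in production this would enqueue
        self.assertEqual(result.id, "fake-celery-id")
        mock_delay.assert_called_once()  # dispatched, but the body never ran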
Thanks @nuwang. Regarding each of your points:
Changed DeploymentSerializer.to_internal_value to make a copy of the data dict before changing 'application_version', because in the test framework the data dict is immutable.
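Roughly, per the commit message (the version-rewriting logic itself is elided; this is a sketch, not the actual diff):

def to_internal_value(self, data):
    # the test client's QueryDict is immutable, so mutate a copy instead
    data = data.copy()
    # ...adjust data['application_version'] here, as before...
    return super(DeploymentSerializer, self).to_internal_value(data)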
@nuwang these tests are ready for review. Thanks.
I think the tests are much clearer after moving test data creation to the setup method. The mock/patching logic is a bit complicated, but I guess there's no way around that. The one remaining issue is the atomicity of the duplicate launch check. I'll merge this in now, but I guess we should file an issue to solve that later.
def save(self, *args, **kwargs):
    # validate at most one LAUNCH task per deployment
    if self.action == self.LAUNCH:
        if ApplicationDeploymentTask.objects.filter(
One problem with this is that the check is not atomic. I guess we should make this transactional, so that there's no race condition between checking and saving. Or we could just ignore it for now...
Very good point; I hadn't thought of that. Thanks, Nuwan.
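For the eventual issue, one possible shape for the fix - a sketch that assumes the task has a deployment foreign key whose row can be locked (ApplicationDeployment and deployment_id are assumed names); a database-level unique constraint would be an alternative:

from django.db import transaction

def save(self, *args, **kwargs):
    if self.action == self.LAUNCH:
        with transaction.atomic():
            # lock the parent deployment row so concurrent LAUNCH saves
            # serialize instead of racing the existence check below
            locked = ApplicationDeployment.objects.select_for_update().get(
                pk=self.deployment_id)
            if ApplicationDeploymentTask.objects.filter(
                    deployment=locked, action=self.LAUNCH).exists():
                raise ValueError("A LAUNCH task already exists.")
            return super(ApplicationDeploymentTask, self).save(*args, **kwargs)
    return super(ApplicationDeploymentTask, self).save(*args, **kwargs)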
This is for issue #137.
This PR isn't finished yet, but I thought I would go ahead and create it to get some early feedback. @afgane and @nuwang, if you have some time and can take a look, I would appreciate your feedback.