
setup and teardown methods #194

Closed
ppcano opened this issue May 2, 2017 · 15 comments

@ppcano
Contributor

ppcano commented May 2, 2017

When running a test, it's a common practice to prepare the system under test. For example, database setup or data population are common actions to perform before a test begins.

I think it may be useful to provide an API to support the setup case, and additionally a teardown API.

Because of the async nature of HTTP requests, the setup callback could be allowed to return an async request or promise object, so that the test starts only after its completion.

Below is an example (not a proposal) to show the idea:

export default {
  setup: function() {
    return http.post('xxx/admin/prepare-load-test');
  }
  // my load test code goes here
}

Thoughts?

@liclac
Contributor

liclac commented May 2, 2017

This has been talked about before, but was shut down… I can't entirely remember why though? @ragnarlonn @robingustafsson

@micsjo
Contributor

micsjo commented May 5, 2017

It could very well be done by a test management framework/tool that controls the test execution outside of k6, where k6 handles only the execution of the test itself.

The principle is sound - but it might not belong inside k6? Maybe more of an implementation sample using some reasonable test management framework/tool?

@ppcano
Contributor Author

ppcano commented May 5, 2017

As a user, I see this feature as a k6 API:

  • Running the test locally: the setup is executed only once, before the X virtual users execute the test.

  • Running the test in a cloud execution: the setup is executed only once, before the X virtual users execute the test on Y distributed machines.

@micsjo My doubt is whether this feature is useful (an in-demand feature) when load testing some systems. I may be wrong in drawing the analogy with other test frameworks.

@kingletas

@ppcano I think this is a very useful feature for logging and notifying stakeholders - I'd love to see it implemented in k6 as well, but as @liclac suggested, I think we can do this via a bash script for now.

@robingustafsson
Member

I got more requests for this when demoing k6 to a group of people a week or so ago.

What about something like this:

export let options = {
    lifecycle: {
        setup: setupFun,
        teardown: teardownFun
    }
};

function setupFun() {
    ...
}

function teardownFun() {
    ...
}

export default function() {
    ...
}

In the case of local execution these would execute once. In the case of distributed execution they could execute once on each node/server, unless the user uses env vars or similar to limit it to one node/server or region (rough sketch below). Thoughts?
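
A minimal sketch of such a guard under that proposal; NODE_ROLE is a hypothetical env var the user would set per node, not an existing k6 feature, and the endpoint is made up:

import http from "k6/http";

function setupFun() {
    // Hypothetical guard: only the node the user designates as
    // "primary" performs the one-time preparation in a distributed run.
    if (__ENV.NODE_ROLE !== "primary") {
        return;
    }
    http.post("https://test.example.com/admin/prepare-load-test");
}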

@liclac
Contributor

liclac commented Jun 13, 2017

Could work, alternatively something like:

export function setup() {
  // ...
}

export function teardown() {
  // ...
}

export default function() {
  // ...
}

I don't think they should be run per-instance though; their primary use will be in setting up test databases, etc., and it doesn't make sense to do that multiple times, nor does it really matter where it's done from.

@robingustafsson
Member

@liclac Yeah, that looks like a better API, and it will make sure the setup/teardown functions always have the same names.

Agree that we should start with the most common use case and iterate from there, so start with running it global-once (in a distributed/cloud context). I guess it will require some additional flag being added to the CLI, like --no-setup-teardown, --replica-mode or similar (or at least being able to set that through the k6 REST API).

@ppcano
Contributor Author

ppcano commented Jun 14, 2017

  1. What about the asynchronous behavior of API requests in the setup callback? I commented on this above.

If users want to perform some initialization routines, they may need to make requests to their systems and wait for those tasks to complete before the load test execution starts.

Is this an issue? Thoughts?

@liclac @robingustafsson

  2. @liclac The API syntax LGTM.

@liclac
Contributor

liclac commented Jun 15, 2017

There's no asynchrony in k6, things run and return when finished.
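
In other words, a setup routine can issue its requests sequentially and each call blocks until it completes. A minimal sketch, with a made-up endpoint:

import http from "k6/http";

export function setup() {
    // http.post() blocks until the response arrives, so when it returns
    // the system under test is ready; no promises or callbacks needed.
    let res = http.post("https://test.example.com/admin/prepare-load-test");
    // res.status can be checked here before the load test starts
}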

@ppcano
Contributor Author

ppcano commented Jun 15, 2017

There's no asynchrony in k6, things run and return when finished.

Perfect, go ahead!

I am looking forward to seeing how users will make use of this feature.

@robingustafsson
Member

After some offline discussion the following API was proposed:

export function setup() {
  let setupData = ...;
  return JSON.stringify(setupData); // return valid JSON string
}

export function teardown() {
  // ...
}

export default function(setupData) {
  let parsedSetupData = JSON.parse(setupData);
  // ...
}

This API seems to work well for both the local and the cloud/clustered execution contexts, as it'd let k6 execute setup() once and then distribute the setupData JSON to all nodes participating in the execution of a test.
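
For illustration, a minimal sketch of a script using this proposed API; the auth endpoint and the token field are assumptions for the example, not part of the proposal:

import http from "k6/http";

export function setup() {
    // Runs once, even in a distributed test; fetch a shared token.
    let res = http.post("https://test.example.com/auth", { user: "loadtest" });
    let token = JSON.parse(res.body).token;
    return JSON.stringify({ token: token }); // must be a valid JSON string
}

export default function(setupData) {
    // Every VU on every node receives the same setup data.
    let data = JSON.parse(setupData);
    http.get("https://test.example.com/api/orders", {
        headers: { "Authorization": "Bearer " + data.token },
    });
}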

@liclac @ppcano @martinfijal Did I miss anything from the discussion we had regarding this issue?

@liclac
Contributor

liclac commented Sep 1, 2017

There's not even a need for JSON.parse(), we can do that faster out-of-JS, but yeah.

I'll also introduce warnings for unrecognised exports, now that we're ending up with a few more.

@marklagendijk
Contributor

marklagendijk commented Jan 30, 2018

I want to give some input on a nice use case for this.

Currently our project has the following structure:

- test
  - context
  - description
  - execution

The idea is like this:

  • context: context information that can differ between executions of the same test. For example: if you execute the same test on the qa environment, you need different URLs than on the staging environment.
  • description: the test code, which uses variables for the context information.
  • execution: files which combine context with description.

Example
context/qa.js:

export default {
  someUrl: 'http://example.com'
}

description/test.js:

import http from "k6/http";

export default function ({ someUrl }) {
  http.get(someUrl);
}

execution/qa/test.js:

import description from '../../description/test.js';
import context from '../../context/qa.js';

export default () => description(context);

The setup feature would make it easier to use this kind of approach. It would be perfect if the setup could be supplied as a path to a file; this way you wouldn't need execution files.

@ppcano
Contributor Author

ppcano commented Jan 30, 2018

@marklagendijk

The other day I had a similar situation when discussing environment variables. I am not sure if this solves your case, but what about the following structure?

./settings.js

export default {
  default: {someUrl: 'http://dev.example.com'},
  staging: {someUrl: 'http://staging.example.com'},
  production: {someUrl: 'http://example.com'},
}

./script.js

import settings from "./settings.js";

export default function() {
    var context = __ENV.CONTEXT || 'default';
    var setting = settings[context];
    // setting.someUrl and all your settings are available
}

Now, you could easily change the script context with an environment variable, like:

CONTEXT=staging k6 run script.js

In case you want your context settings in different files, you could also modify your settings.js to import the particular context settings from separate files, as sketched below.
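
A sketch of that variant; the ./context/*.js paths are hypothetical:

// settings.js, importing per-context files (paths are hypothetical)
import qa from "./context/qa.js";
import staging from "./context/staging.js";
import production from "./context/production.js";

export default {
    default: qa,
    qa: qa,
    staging: staging,
    production: production,
};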

@marklagendijk
Contributor

@ppcano Interesting approach. However, one of the things I want to accomplish is that the description does not need to know anything about the context it will be executed with (apart from what information it needs from that context).
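
For what it's worth, the proposed setup() API could carry the context while keeping description/test.js unaware of which context it runs with. A sketch combining the two ideas above; the wiring is hypothetical, assuming the API from the earlier comments:

// script.js, hypothetical wiring using the proposed setup() API
import description from "./description/test.js";
import settings from "./settings.js";

export function setup() {
    // Pick the context once and hand it to every VU as setup data.
    let context = settings[__ENV.CONTEXT || "default"];
    return JSON.stringify(context);
}

export default function(setupData) {
    description(JSON.parse(setupData));
}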
