chore: add redirects to load test types pages (#1509)
1 parent: c6a77ca, commit: 30561f5
Showing 7 changed files with 42 additions and 582 deletions.
src/data/markdown/translated-guides/en/06 Test Types/00 Load test types.md
91 changes: 1 addition & 90 deletions
@@ -1,93 +1,4 @@

---
title: 'Load test types'
head_title: 'Understanding the Different Types of Load Tests: Goals and Recommendations'
excerpt: 'A series of conceptual articles explaining the different types of load tests. Learn about planning, running, and interpreting different tests for different performance goals.'
canonicalUrl: https://grafana.com/load-testing/types-of-load-testing/
redirect: https://grafana.com/load-testing/types-of-load-testing/
---

Many things can go wrong when a system is under load.
The system must run numerous operations simultaneously and respond to different requests from a variable number of users.
To prepare for these performance risks, teams use load testing.

But a good load-testing strategy requires more than just executing a single script.
Different patterns of traffic create different risk profiles for the application.
For comprehensive preparation, teams must test the system against different _test types_.

![Overview of load test shapes](./images/chart-load-test-types-overview.png)

## Different tests for different goals

Start with smoke tests, then progress to higher loads and longer durations.

The main types are as follows. Each type has its own article outlining its essential concepts.

- [**Smoke tests**](/test-types/smoke-testing) validate that your script works and that the system performs adequately under minimal load.

- [**Average-load tests**](/test-types/load-testing) assess how your system performs under expected normal conditions.

- [**Stress tests**](/test-types/stress-testing) assess how a system performs at its limits when load exceeds the expected average.

- [**Soak tests**](/test-types/soak-testing) assess the reliability and performance of your system over extended periods.

- [**Spike tests**](/test-types/spike-testing) validate the behavior and survival of your system in cases of sudden, short, and massive increases in activity.

- [**Breakpoint tests**](/test-types/breakpoint-testing) gradually increase load to identify the capacity limits of the system.

<Blockquote mod="note" title="">

In k6 scripts, configure the workload using [`options`](/get-started/running-k6/#using-options) or [`scenarios`](/using-k6/scenarios). This separates the workload configuration from the iteration logic.

</Blockquote>
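As a minimal sketch of that separation, the workload can live entirely in `options.scenarios` while the iteration logic stays untouched. The `shared-iterations` executor is a real k6 executor, but the VU and iteration counts here are illustrative, and in a real k6 script the `options` object would be exported with `export const options`:

```javascript
// A workload described entirely by configuration. Swapping this scenario
// for a different executor changes the test type without touching the
// iteration logic below. The numbers are illustrative.
const options = {
  scenarios: {
    smoke: {
      executor: 'shared-iterations', // run a fixed total number of iterations
      vus: 2,
      iterations: 10,
    },
  },
};

// The iteration logic stays the same regardless of the workload above.
// In a k6 script, this would be the exported default function.
const iteration = () => {
  // http.get(...), checks, and sleep(...) would go here.
};
```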

## Test-type cheat sheet

The following table provides some broad comparisons.

| Type | VUs/Throughput | Duration | When? |
|------|----------------|----------|-------|
| [Smoke](/test-types/smoke-testing) | Low | Short (seconds or minutes) | When the relevant system or application code changes. It checks functional logic, baseline metrics, and deviations |
| [Load](/test-types/load-testing) | Average production | Mid (5-60 minutes) | Often, to check that the system maintains performance under average use |
| [Stress](/test-types/stress-testing) | High (above average) | Mid (5-60 minutes) | When the system may receive above-average loads, to check how it manages |
| [Soak](/test-types/soak-testing) | Average | Long (hours) | After changes, to check the system under prolonged continuous use |
| [Spike](/test-types/spike-testing) | Very high | Short (a few minutes) | When the system prepares for seasonal events or receives frequent traffic peaks |
| [Breakpoint](/test-types/breakpoint-testing) | Increases until break | As long as necessary | A few times, to find the upper limits of the system |

## General recommendations

When you write and run different test types in k6, consider the following.

### Start with a smoke test

Start with a [smoke test](/test-types/smoke-testing).
Before beginning larger tests, validate that your scripts work as expected and that your system performs well with a few users.

After you know that the script works and the system responds correctly to minimal load,
you can move on to average-load tests.
From there, you can progress to more complex load patterns.

### The specifics depend on your use case

Systems have different architectures and different user bases. As a result, the correct load-testing strategy is highly dependent on the risk profile of your organization. Avoid thinking in absolutes.

For example, k6 can model load by either the number of VUs or the number of iterations per second ([open vs. closed](https://k6.io/docs/using-k6/scenarios/concepts/open-vs-closed/)).
When you design your test, consider which pattern makes sense for the type.
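The two models map to different k6 scenario executors. As a hedged sketch (the executor names and their fields are real k6 configuration; the numbers are illustrative):

```javascript
// Closed model: a fixed pool of VUs loops over the iteration logic;
// the arrival rate depends on how quickly each iteration finishes.
const closedModel = {
  executor: 'constant-vus',
  vus: 50,
  duration: '10m',
};

// Open model: iterations start at a fixed rate regardless of how long
// each one takes; k6 draws VUs from the pre-allocated pool as needed.
const openModel = {
  executor: 'constant-arrival-rate',
  rate: 100, // iterations started per timeUnit
  timeUnit: '1s',
  duration: '10m',
  preAllocatedVUs: 50,
};
```

Either object would go under `options.scenarios` in a k6 script; the open model is often the better fit when the system, not the test, should dictate how long requests take.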

What's more, **no single test type eliminates all risk**, and no single test can uncover all issues.
To assess different failure modes of your system, incorporate multiple test types.
The risk profile of your system determines which test types to emphasize:

- Some systems are more at risk from prolonged use, in which case soak tests should be prioritized.
- Others are more at risk from intensive use, in which case stress tests should take precedence.

Finally, the categories themselves are relative to use cases. A stress test for one application is an average-load test for another. Indeed, no consensus even exists about the names of these test types (each of the following topics provides alternative names).
### Aim for simple designs and reproducible results

While the specifics are greatly context-dependent, what's constant is that you want results that you can compare and interpret.

Stick to simple load patterns. For most test types, three phases are enough: ramp up, plateau, ramp down.

Avoid "rollercoaster" series, where load increases and decreases multiple times. These waste resources and make it hard to isolate issues.
src/data/markdown/translated-guides/en/06 Test Types/01 Smoke Testing.md
80 changes: 1 addition & 79 deletions
@@ -1,82 +1,4 @@

---
title: "Smoke testing"
head_title: 'What is Smoke Testing? How to create a Smoke Test in k6'
excerpt: "A Smoke test is a minimal load test to run when you create or modify a script."
canonicalUrl: https://grafana.com/blog/2024/01/30/smoke-testing/
redirect: https://grafana.com/blog/2024/01/30/smoke-testing/
---

Smoke tests have a minimal load.
Run them to verify that the system works well under minimal load and to gather baseline performance values.

This test type consists of running tests with a few VUs (more than 5 VUs could be considered a mini load test).

Similarly, the test should execute for a short period, either a low number of [iterations](/using-k6/k6-options/reference/#iterations) or a [duration](/using-k6/k6-options/reference/#duration) from seconds to a few minutes maximum.
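Both bounds can be expressed directly in the `options` object. A minimal sketch, with illustrative limits (in a k6 script either object would be exported as `export const options`):

```javascript
// Two ways to keep a smoke test short: cap the total iterations, or cap the duration.
const byIterations = { vus: 3, iterations: 10 }; // 10 iterations shared across 3 VUs, then stop
const byDuration = { vus: 3, duration: '30s' };  // loop for 30 seconds, then stop
```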

![Overview of a smoke test](images/chart-smoke-test-overview.png)

In some testing conversations, smoke tests are also called shakeout tests.

## When to run a smoke test

Teams should run smoke tests whenever a test script is created or updated. Smoke testing should also be done whenever the relevant application code is updated.

It's a good practice to run a smoke test as a first step, with the following goals:

- Verify that your test script doesn't have errors.
- Verify that your system doesn't throw any errors (performance or system related) when under minimal load.
- Gather baseline performance metrics of your system's response under minimal load.
- With simple logic, serve as a synthetic test to monitor the performance and availability of production environments.
## Considerations

When you prepare a smoke test, consider the following:

- **Each time you create or update a script, run a smoke test**

  Because smoke tests verify test scripts, try to run one every time you create or update a script. Avoid running other test types with untested scripts.

- **Keep throughput small and duration short**

  Configure your test script to be executed by a small number of VUs (from 2 to 20) with few iterations or brief durations (30 seconds to 3 minutes).

## Smoke testing in k6

The following script is an example smoke test. You can copy it, change the endpoints, and start testing. For more comprehensive test logic, refer to [Examples](/examples).

<CodeGroup labels={["smoke.js"]} lineNumbers={[]} showCopyButton={[true]}>

```javascript
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 3, // Key for smoke test. Keep it at 2, 3, max 5 VUs
  duration: '1m', // This can be shorter or just a few iterations
};

export default () => {
  const urlRes = http.get('https://test-api.k6.io');
  check(urlRes, { 'status is 200': (r) => r.status === 200 });
  sleep(1);
  // MORE STEPS
  // Here you can have more steps or a more complex script
  // Step 1
  // Step 2
  // etc.
};
```

</CodeGroup>

The VU chart of a smoke test should look similar to this:

![The shape of the smoke test as configured in the preceding script](images/chart-smoke-test-k6-script-example.png)

## Results analysis

The smoke test initially validates that your script runs without errors. If any script-related errors appear, correct the script before trying any more extensive tests.

On the other hand, if you notice poor performance with these low VU numbers, report it, fix your environment, and try again with a smoke test before any further tests.

Once your smoke test shows zero errors and the performance results seem acceptable, you can proceed to other test types.
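The "zero errors, acceptable performance" pass criteria can be encoded as thresholds so the smoke test fails automatically instead of relying on manual inspection. A sketch using the built-in k6 metrics `http_req_failed` and `http_req_duration`; the limits are illustrative, not a recommendation:

```javascript
// Thresholds make k6 exit with a non-zero status when the criteria fail,
// which is useful when the smoke test runs in CI.
const options = {
  vus: 3,
  duration: '1m',
  thresholds: {
    http_req_failed: ['rate==0'],     // no failed requests at all
    http_req_duration: ['p(95)<500'], // 95% of requests complete under 500 ms
  },
};
```

In a k6 script this object would be exported as `export const options`.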

src/data/markdown/translated-guides/en/06 Test Types/02 Load Testing.md
106 changes: 1 addition & 105 deletions
@@ -1,108 +1,4 @@

---
title: 'Load testing'
head_title: 'What is Load Testing? How to create a Load Test in k6'
excerpt: 'An average load test assesses the performance of your system in terms of concurrent users or requests per second.'
canonicalUrl: https://grafana.com/blog/2024/01/30/average-load-testing/
redirect: https://grafana.com/blog/2024/01/30/average-load-testing/
---

An average-load test assesses how the system performs under typical load: a regular day in production, or an average moment of traffic.

Average-load tests simulate the number of concurrent users and requests per second that reflect average behaviors in the production environment. This type of test typically increases the throughput or VUs gradually and keeps that average load for some time. Depending on the system's characteristics, the test may stop suddenly or have a short ramp-down period.

![Overview of an average load test](images/chart-average-load-test-overview.png)

Since "load test" might refer to all types of tests that simulate traffic, this guide uses the name _average-load test_ to avoid confusion.
In some testing conversations, this test also might be called a day-in-life test or volume test.

## When to run an average-load test

Average-load testing helps you understand whether a system meets performance goals on a typical day (commonplace load). _Typical day_ here means when an average number of users access the application at the same time, doing normal, average work.

You should run an average-load test to:

* Assess the performance of your system under a typical load.
* Identify early degradation signs during the ramp-up or full-load periods.
* Ensure that the system still meets performance standards after system changes (code and infrastructure).
## Considerations

When you prepare an average-load test, consider the following:

* **Know the specific number of users and the typical throughput per process in the system.**

  To find this, look through APMs or analytics tools that provide information from the production environment. If you can't access such tools, the business must provide these estimations.

* **Gradually increase load to the target average.**

  That is, use a _ramp-up period_. This period usually lasts between 5% and 15% of the total test duration. A ramp-up period has many essential uses:

  * It gives your system time to warm up or auto-scale to handle the traffic.
  * It lets you compare response times between the low-load and average-load stages.
  * If you run tests using our cloud service, a ramp-up lets the automated performance alerts learn the expected behavior of your system.

* **Maintain the average load for a period longer than the ramp-up.**

  Aim for an average-load duration at least five times longer than the ramp-up to assess the performance trend over a significant period of time.

* **Consider a ramp-down period.**

  The ramp-down is when virtual user activity gradually decreases. The ramp-down usually lasts as long as the ramp-up or a bit less.
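The sizing guidance above (ramp-up around 5-15% of the total duration, a plateau at least five times longer, a ramp-down similar to the ramp-up) can be sketched as a small helper. `buildAverageLoadStages` is a hypothetical function written for illustration, not part of k6:

```javascript
// Hypothetical helper: turn a total test duration and a target VU count
// into a simple ramp-up / plateau / ramp-down stage list.
function buildAverageLoadStages(totalMinutes, targetVUs) {
  // Ramp-up ~10% of the total, at least 1 minute.
  const rampMinutes = Math.max(1, Math.round(totalMinutes * 0.1));
  const plateauMinutes = totalMinutes - 2 * rampMinutes;
  return [
    { duration: `${rampMinutes}m`, target: targetVUs },    // ramp-up
    { duration: `${plateauMinutes}m`, target: targetVUs }, // hold average load
    { duration: `${rampMinutes}m`, target: 0 },            // ramp-down
  ];
}

// Example: a 40-minute test peaking at 100 VUs.
const stages = buildAverageLoadStages(40, 100);
// → ramp-up 4m, plateau 32m (8x the ramp-up), ramp-down 4m
```

The resulting array slots directly into `options.stages` in a k6 script.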

## Average-load testing in k6

<Blockquote mod="note" title="Start small">

If this is your first time running load tests, we recommend starting small or configuring the ramp-up to be slow. Your application and infrastructure might not be as rock solid as you think. We've had thousands of users run load tests that quickly crash their applications (or staging environments).

</Blockquote>

The goal of an average-load test is to simulate the average amount of activity on a typical day in production. The pattern follows this sequence:

1. Increase the script's activity until it reaches the desired number of users and throughput.
1. Maintain that load for a while.
1. Depending on the test case, stop the test or let it ramp down gradually.

Configure the load in the `options` object:

<CodeGroup labels={["average-load.js"]} lineNumbers={[]} showCopyButton={[true]}>

```javascript
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  // Key configurations for avg load test in this section
  stages: [
    { duration: '5m', target: 100 }, // traffic ramp-up from 0 to 100 users over 5 minutes
    { duration: '30m', target: 100 }, // stay at 100 users for 30 minutes
    { duration: '5m', target: 0 }, // ramp down to 0 users
  ],
};

export default () => {
  const urlRes = http.get('https://test-api.k6.io');
  sleep(1);
  // MORE STEPS
  // Here you can have more steps or a more complex script
  // Step 1
  // Step 2
  // etc.
};
```

</CodeGroup>

This script logic has only one request (to open a web page). Your test behavior likely has more steps. If you would like to see more complex tests that use groups, checks, thresholds, and helper functions, refer to [Examples](/examples).

The VU or throughput chart of an average-load test looks similar to this:

![The shape of the average-load test as configured in the preceding script](images/chart-average-load-test-k6-script-example.png "Note that the number of users or throughput starts at 0, gradually ramps up to the desired value, and stays there for the indicated period. Then load ramps down for a short period.")

## Results analysis

An initial outcome appears during the ramp-up period: whether response time degrades as load increases. Some systems might even fail during the ramp-up period.

The test also validates whether the system's performance and resource consumption stay stable during the period of full load, as some systems display erratic behavior in this period.

Once you know your system performs well and survives a typical load, you may need to push it further to determine how it behaves at above-average conditions. Tests of above-average conditions are known as [stress tests](/test-types/stress-testing).
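As a rough sketch of that next step, a stress variant commonly keeps the same simple stage shape but raises the target above the expected average. The numbers here are illustrative only, not a prescription:

```javascript
// Same ramp-up / hold / ramp-down shape as the average-load test, with the
// target raised above the average (here 200 instead of 100).
const stressStages = [
  { duration: '10m', target: 200 }, // ramp up to above-average load
  { duration: '30m', target: 200 }, // hold above-average load
  { duration: '5m', target: 0 },    // ramp down to 0 users
];
```

In a k6 script, this array would replace the `stages` value in the `options` object.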