Add tests #2

Open · wants to merge 1 commit into master
2 changes: 2 additions & 0 deletions .gitignore
@@ -2,3 +2,5 @@ dist/
 node_modules/
 data.json
 .idea/
+test-results/
+playwright-report/
21 changes: 21 additions & 0 deletions README - Tests.md
@@ -0,0 +1,21 @@
# Introduction

## Scripts

Please check the `setup` and `test` scripts in package.json.

## Parametrization

This demo showcases test parametrization. Tests are ultimately functions and can take parameters, which lets a compact body of code cover a large volume of test cases.

## Response Validation

This demo showcases using schemas to validate response structures. Real production responses are often highly complex; schemas are a standardized, concise way to validate them.

## The problem of scale vs. the problem of troubleshooting in QA automation

QA automation has some very standard problems. One of them is the problem of scale: it is trivial to manage a project with 50 e2e test cases, but 500 is hard and 5,000 is a living nightmare. The most obvious way to manage scale is to reuse code. This demo showcases two strategies for doing that: parametrization and schemas.

Both strategies are completely unnecessary for this test, because the problem solved here is very simple; my solution is intended as a showcase.

Troubleshooting failing tests is the most common activity in QA. The most common way to waste a QA automation budget is to make processing test results (including re-running and troubleshooting) a nightmare. A common criticism of the strategies showcased here is that they make troubleshooting harder, and there is some truth to that: as with anything in programming, we are dancing between trade-offs. If this topic interests you, we can dive much deeper.
77 changes: 76 additions & 1 deletion package-lock.json
(Generated file; diff not rendered.)
7 changes: 5 additions & 2 deletions package.json
@@ -5,13 +5,16 @@
   "main": "index.js",
   "scripts": {
     "start": "npx tsc && node dist/index.js",
-    "test": "echo \"Error: no test specified\" && exit 1"
+    "setup": "npm install && npx playwright install",
+    "test": "npx playwright test"
   },
   "keywords": [],
   "author": "",
   "license": "ISC",
   "devDependencies": {
+    "@playwright/test": "^1.42.1",
     "@types/node": "^20.14.10",
-    "typescript": "^5.5.3"
+    "typescript": "^5.5.3",
+    "zod": "^3.23.8"
   }
 }
48 changes: 48 additions & 0 deletions playwright.config.ts
@@ -0,0 +1,48 @@
import { defineConfig, devices } from '@playwright/test';

/**
* Read environment variables from file.
* https://github.com/motdotla/dotenv
*/
// require('dotenv').config();

/**
* See https://playwright.dev/docs/test-configuration.
*/
export default defineConfig({
  testDir: './tests',
  /* Run tests in files in parallel */
  fullyParallel: true,
  /* Fail the build on CI if you accidentally left test.only in the source code. */
  forbidOnly: !!process.env.CI,
  /* Retry on CI only */
  retries: process.env.CI ? 2 : 0,
  /* Opt out of parallel tests on CI. */
  workers: process.env.CI ? 1 : undefined,
  /* Reporter to use. See https://playwright.dev/docs/test-reporters */
  reporter: [['list'], ['html']],
  // required because of mailslurp latency
  timeout: 120000,
  /* Shared settings for all the projects below. See https://playwright.dev/docs/api/class-testoptions. */
  use: {
    /* Base URL; can be set per environment */
    baseURL: 'http://localhost:3000',

    /* Maximum timeout for an individual interaction, in milliseconds */
    actionTimeout: 5 * 1000,

    /* Collect a trace when retrying a failed test. See https://playwright.dev/docs/trace-viewer */
    trace: 'on-first-retry',
  },

  expect: {
    timeout: 10000,
  },

  projects: [
    {
      name: 'api_tests',
    },
  ],
});
120 changes: 120 additions & 0 deletions tests/demo.spec.ts
@@ -0,0 +1,120 @@
import { test, expect } from '@playwright/test';
import { UserSchema, UserListSchema } from './schemas';

test.describe('Basic api tests', () => {

  // A few custom validation functions
  // They validate the contents of responses
  const validateUserList = (res: any) => {
    expect(res).toMatchObject(
      { users: [{ name: 'Alice' }, { name: 'Bob' }] }
    );
    expect(res.users.length).toBe(2);
  };

  const validateUser = (res: any) => {
    expect(res).toMatchObject(
      { name: 'Bob' }
    );
  };

  // The data for all test cases
  const TEST_CASES = [
    {
      name: "Get All Users",
      url: "/users",
      method: "GET",
      expectedStatus: 200,
      schema: UserListSchema,
      customValidation: validateUserList,
    },
    {
      name: "Get Specific User",
      url: "/users/2",
      method: "GET",
      expectedStatus: 200,
      schema: UserSchema,
      customValidation: validateUser,
    },
    {
      name: "Get Nonexistent Endpoint",
      url: "/nonexistent",
      method: "GET",
      expectedStatus: 404,
    },
    {
      name: "Get Nonexistent user",
      url: "/users/22",
      method: "GET",
      expectedStatus: 404,
    },
    {
      name: "Create user - unsupported",
      url: "/users",
      method: "POST",
      expectedStatus: 404,
    },
    {
      name: "Update user - unsupported",
      url: "/users/2",
      method: "PUT",
      expectedStatus: 404,
    },
    {
      name: "DELETE user - unsupported",
      url: "/users/2",
      method: "DELETE",
      expectedStatus: 404,
    },
    {
      name: "Get Health",
      url: "/health",
      method: "GET",
      expectedStatus: 200,
      expectedNonJsonText: "OK",
    },
  ];

  for (const params of TEST_CASES) {
    test(`Scenario: ${params.name}`, async ({ request }) => {
      // Send request
      let response;
      switch (params.method) {
        case 'GET':
          response = await request.get(params.url);
          break;
        case 'POST':
          response = await request.post(params.url);
          break;
        case 'PUT':
          response = await request.put(params.url);
          break;
        case 'DELETE':
          response = await request.delete(params.url);
          break;
        default:
          throw new Error(`Unsupported HTTP method: ${params.method}`);
      }

      // Check status code
      expect(response.status()).toBe(params.expectedStatus);

      // Validate schema (optional param)
      if (params.schema !== undefined) {
        const res = await response.json();
        params.schema.parse(res);
      }

      // Additional custom validation (optional param)
      if (params.customValidation !== undefined) {
        const res = await response.json();
        params.customValidation(res);
      }

      // Some endpoints return plain text (optional param)
      if (params.expectedNonJsonText !== undefined) {
        const actualText = await response.text();
        expect(actualText).toBe(params.expectedNonJsonText);
      }
    });
  }
});
12 changes: 12 additions & 0 deletions tests/schemas.ts
@@ -0,0 +1,12 @@
import { z } from 'zod';

// User
export const UserSchema = z.object({
  id: z.string(),
  name: z.string(),
});

// List of users
export const UserListSchema = z.object({
  users: z.array(UserSchema),
});
2 changes: 1 addition & 1 deletion tsconfig.json
@@ -3,7 +3,7 @@
   "target": "ES6",
   "module": "commonjs",
   "outDir": "./dist",
-  "rootDir": "./src",
+  "rootDirs": ["./src", "./tests"],
   "strict": true,
   "esModuleInterop": true
 }