diff --git a/docs/guides/2-cli.md b/docs/guides/2-cli.md
index e4707dd3e..897e8ffb9 100644
--- a/docs/guides/2-cli.md
+++ b/docs/guides/2-cli.md
@@ -39,6 +39,7 @@ Other options include:
       --stdin-filepath            path to a file to pretend that stdin comes from  [string]
       --resolver                  path to custom json-ref-resolver instance  [string]
   -r, --ruleset                   path/URL to a ruleset file  [string]
+      --scoring-config            path/URL to a scoring config file  [string]
   -F, --fail-severity             results of this level or above will trigger a failure exit code  [string] [choices: "error", "warn", "info", "hint"] [default: "error"]
   -D, --display-only-failures     only output results equal to or greater than --fail-severity  [boolean] [default: false]
@@ -60,6 +61,99 @@ Here you can build a [custom ruleset](../getting-started/3-rulesets.md), or exte
 - [OpenAPI ruleset](../reference/openapi-rules.md)
 - [AsyncAPI ruleset](../reference/asyncapi-rules.md)
 
+## Scoring the API
+
+Scoring an API definition is a way to understand, at a high level, how compliant the definition is with the provided rulesets. This helps teams gauge the quality of their API definitions.
+
+The score is reported as two different metrics:
+
+- A numeric score, calculated by subtracting a percentage from 100% for the errors and warnings found.
+- A letter score, which groups the numeric score into configurable letters, with A being the best score.
+
+It also introduces a quality gate: an API scoring below the configured threshold will fail in a pipeline.
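The letter mapping and quality gate described above can be sketched in TypeScript. This is an illustration only; `toLetter` and `passesGate` are hypothetical helper names, not part of Spectral's API:

```typescript
// Hypothetical sketch of the letter mapping and quality gate described above.
// A letter is earned when the score strictly exceeds its configured percentage.
type ScoringLetters = Record<string, number>;

function toLetter(score: number, letters: ScoringLetters): string {
  const keys = Object.keys(letters);
  // Default to the worst (last) letter, then walk from worst to best,
  // keeping the best letter whose minimum the score exceeds.
  let level = keys[keys.length - 1];
  for (const key of [...keys].reverse()) {
    if (score > letters[key]) level = key;
  }
  return level;
}

function passesGate(score: number, threshold: number): boolean {
  // The quality gate: any score below the threshold fails the pipeline.
  return score >= threshold;
}
```

With the letter table used later in this guide (`{"A": 75, "B": 65, "C": 55, "D": 45, "E": 0}`), a score of 90 maps to A, while 35 maps to E.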
+
+Scoring is enabled with the new `--scoring-config` parameter, which points to a scoring configuration file where you define how errors and warnings affect the score.
+
+Usage:
+
+```bash
+spectral lint ./reference/**/*.oas*.{json,yml,yaml} --ruleset mycustomruleset.js --scoring-config ./scoringFile.json
+```
+
+Here's an example of this scoring config file:
+
+```json
+{
+  "scoringSubtract": {
+    "error": {
+      "1": 55,
+      "2": 65,
+      "3": 75,
+      "6": 85,
+      "10": 95
+    },
+    "warn": {
+      "1": 3,
+      "2": 7,
+      "3": 10,
+      "6": 15,
+      "10": 18
+    }
+  },
+  "scoringLetter": {
+    "A": 75,
+    "B": 65,
+    "C": 55,
+    "D": 45,
+    "E": 0
+  },
+  "threshold": 50,
+  "onlySubtractHigherSeverityLevel": true,
+  "uniqueErrors": false
+}
+```
+
+Where:
+
+- scoringSubtract : an object with a key/value map per severity level, where each key is a number of results of that severity and the value is the percentage subtracted from the score once that count is reached
+- scoringLetter : an object with key/value pairs mapping each scoring letter to the percentage the score must exceed to earn that letter
+- threshold : the minimum percentage for the checked file to be considered valid; any score below this threshold marks the API as a failure
+- onlySubtractHigherSeverityLevel : a boolean that decides whether only the highest severity level present in the results is subtracted from the score, or every severity level is subtracted
+
+See sample:
+
+    true
+
+        API with Errors and Warnings, only Errors subtract from scoring
+        API with Warnings, Warnings subtract from scoring
+
+    false
+
+        API with Errors and Warnings, Errors and Warnings subtract from scoring
+        API with Warnings, Warnings subtract from scoring
+
+- uniqueErrors : A boolean to count unique errors or all errors. 
An error is considered unique if its code and message have not been seen before.
+
+Example:
+
+    With the previous scoring config file, if we have:
+
+        1 error, the scoring is 45% and E
+        2 errors, the scoring is 35% and E
+        3 errors, the scoring is 25% and E
+        4 errors, the scoring is 25% and E
+        and so on
+
+Output:
+
+    Below the output log you can see the scoring, for example:
+
+    ✖ SCORING: A (93%)
+
 ## Error Results
 
 Spectral has a few different error severities: `error`, `warn`, `info`, and `hint`, and they're in order from highest to lowest. By default, all results are shown regardless of severity, but since v5.0, only the presence of errors causes a failure status code of 1. Seeing results and getting a failure code for it are now two different things.
diff --git a/packages/cli/src/commands/__tests__/lint.test.ts b/packages/cli/src/commands/__tests__/lint.test.ts
index eb98e7050..fc3fe1928 100644
--- a/packages/cli/src/commands/__tests__/lint.test.ts
+++ b/packages/cli/src/commands/__tests__/lint.test.ts
@@ -146,6 +146,22 @@ describe('lint', () => {
     );
   });
 
+  it('calls lint with document, ruleset and scoring config file', async () => {
+    const doc = './__fixtures__/empty-oas2-document.json';
+    const ruleset = 'custom-ruleset.json';
+    const configFile = 'scoring-config.json';
+    await run(`lint -r ${ruleset} --scoring-config ${configFile} ${doc}`);
+    expect(lint).toBeCalledWith([doc], {
+      encoding: 'utf8',
+      format: ['stylish'],
+      output: { stylish: '' },
+      ruleset: 'custom-ruleset.json',
+      stdinFilepath: undefined,
+      ignoreUnknownFormat: false,
+      failOnUnmatchedGlobs: false,
+    });
+  });
+
   it.each(['json', 'stylish'])('calls formatOutput with %s format', async format => {
     await run(`lint -f ${format} ./__fixtures__/empty-oas2-document.json`);
     expect(formatOutput).toBeCalledWith(results, format, { failSeverity: DiagnosticSeverity.Error });
   });
@@ -244,13 +260,13 @@ describe('lint', () => {
     expect(process.stderr.write).nthCalledWith(2, `Error #1: ${chalk.red('some unhandled 
exception')}\n`);
     expect(process.stderr.write).nthCalledWith(
       3,
-      expect.stringContaining(`packages/cli/src/commands/__tests__/lint.test.ts:236`),
+      expect.stringContaining(`packages/cli/src/commands/__tests__/lint.test.ts:252`),
     );
     expect(process.stderr.write).nthCalledWith(4, `Error #2: ${chalk.red('another one')}\n`);
     expect(process.stderr.write).nthCalledWith(
       5,
-      expect.stringContaining(`packages/cli/src/commands/__tests__/lint.test.ts:237`),
+      expect.stringContaining(`packages/cli/src/commands/__tests__/lint.test.ts:253`),
     );
     expect(process.stderr.write).nthCalledWith(6, `Error #3: ${chalk.red('original exception')}\n`);
diff --git a/packages/cli/src/commands/lint.ts b/packages/cli/src/commands/lint.ts
index 7f4521960..aaab79257 100644
--- a/packages/cli/src/commands/lint.ts
+++ b/packages/cli/src/commands/lint.ts
@@ -14,6 +14,14 @@ import { formatOutput, writeOutput } from '../services/output';
 import { FailSeverity, ILintConfig, OutputFormat } from '../services/config';
 import { CLIError } from '../errors';
+import { ScoringConfig } from '../formatters/types';
+import {
+  getScoringConfig,
+  getScoringLevel,
+  groupBySource,
+  getCountsBySeverity,
+  getUniqueErrors,
+} from '../formatters/utils';
 
 const formatOptions = Object.values(OutputFormat);
@@ -127,6 +135,10 @@ const lintCommand: CommandModule = {
       description: 'path/URL to a ruleset file',
       type: 'string',
     },
+    'scoring-config': {
+      description: 'path/URL to a scoring config file',
+      type: 'string',
+    },
     'fail-severity': {
       alias: 'F',
       description: 'results of this level or above will trigger a failure exit code',
@@ -168,6 +180,7 @@ const lintCommand: CommandModule = {
       failSeverity,
       displayOnlyFailures,
       ruleset,
+      scoringConfig,
       stdinFilepath,
       format,
       output,
@@ -197,20 +210,30 @@ const lintCommand: CommandModule = {
         results = filterResultsBySeverity(results, failSeverity);
       }
 
+      const scoringConfigData = await getScoringConfig(scoringConfig);
+
       await Promise.all(
         format.map(f => {
-          const 
formattedOutput = formatOutput(results, f, { failSeverity: getDiagnosticSeverity(failSeverity) }); + const formattedOutput = formatOutput(results, f, { + failSeverity: getDiagnosticSeverity(failSeverity), + scoringConfig: scoringConfigData, + }); return writeOutput(formattedOutput, output?.[f] ?? ''); }), ); if (results.length > 0) { - process.exit(severeEnoughToFail(results, failSeverity) ? 1 : 0); + process.exit( + scoringThresholdNotEnough(results, scoringConfigData) ? 1 : severeEnoughToFail(results, failSeverity) ? 1 : 0, + ); } else if (config.quiet !== true) { const isErrorSeverity = getDiagnosticSeverity(failSeverity) === DiagnosticSeverity.Error; process.stdout.write( `No results with a severity of '${failSeverity}' ${isErrorSeverity ? '' : 'or higher '}found!\n`, ); + if (scoringConfig !== void 0) { + process.stdout.write(`SCORING: (100%)\nPASSED!`); + } } } catch (ex) { fail(isError(ex) ? ex : new Error(String(ex)), config.verbose === true); @@ -273,6 +296,25 @@ const filterResultsBySeverity = (results: IRuleResult[], failSeverity: FailSever return results.filter(r => r.severity <= diagnosticSeverity); }; +const scoringThresholdNotEnough = (results: IRuleResult[], scoringConfig: ScoringConfig | undefined): boolean => { + if (scoringConfig !== void 0) { + const groupedResults = groupBySource(results); + let groupedUniqueResults = { ...groupedResults }; + if (scoringConfig.uniqueErrors) { + groupedUniqueResults = { ...groupBySource(getUniqueErrors(results)) }; + } + return ( + scoringConfig.threshold > + getScoringLevel( + getCountsBySeverity(groupedUniqueResults), + scoringConfig.scoringSubtract, + scoringConfig.onlySubtractHigherSeverityLevel, + ) + ); + } + return false; +}; + export const severeEnoughToFail = (results: IRuleResult[], failSeverity: FailSeverity): boolean => { const diagnosticSeverity = getDiagnosticSeverity(failSeverity); return results.some(r => r.severity <= diagnosticSeverity); diff --git a/packages/cli/src/formatters/json.ts 
b/packages/cli/src/formatters/json.ts index 4ff9fbce9..a411dc59e 100644 --- a/packages/cli/src/formatters/json.ts +++ b/packages/cli/src/formatters/json.ts @@ -1,6 +1,15 @@ -import { Formatter } from './types'; +import { ISpectralDiagnostic } from '@stoplight/spectral-core'; +import { Formatter, FormatterOptions } from './types'; -export const json: Formatter = results => { +import { groupBySource, getUniqueErrors, getCountsBySeverity, getScoringText } from './utils'; + +export const json: Formatter = (results: ISpectralDiagnostic[], options: FormatterOptions) => { + let groupedResults; + let scoringText = ''; + if (options.scoringConfig !== void 0) { + groupedResults = groupBySource(getUniqueErrors(results)); + scoringText = getScoringText(getCountsBySeverity(groupedResults), options.scoringConfig); + } const outputJson = results.map(result => { return { code: result.code, @@ -11,5 +20,16 @@ export const json: Formatter = results => { source: result.source, }; }); - return JSON.stringify(outputJson, null, '\t'); + let objectOutput; + if (options.scoringConfig !== void 0) { + const scoring = +(scoringText !== null ? 
scoringText.replace('%', '').split(/[()]+/)[1] : 0); + objectOutput = { + scoring: scoringText.replace('SCORING:', '').trim(), + passed: scoring >= options.scoringConfig.threshold, + results: outputJson, + }; + } else { + objectOutput = outputJson; + } + return JSON.stringify(objectOutput, null, '\t'); }; diff --git a/packages/cli/src/formatters/pretty.ts b/packages/cli/src/formatters/pretty.ts index 3d1a40403..38c9a62b1 100644 --- a/packages/cli/src/formatters/pretty.ts +++ b/packages/cli/src/formatters/pretty.ts @@ -24,12 +24,22 @@ * @author Ava Thorn */ +import { ISpectralDiagnostic } from '@stoplight/spectral-core'; import { printPath, PrintStyle } from '@stoplight/spectral-runtime'; -import { IDiagnostic, IRange } from '@stoplight/types'; +import { IDiagnostic, IRange, DiagnosticSeverity } from '@stoplight/types'; import chalk from 'chalk'; -import { Formatter } from './types'; -import { getColorForSeverity, getHighestSeverity, getSummary, getSeverityName, groupBySource } from './utils'; +import { Formatter, FormatterOptions } from './types'; +import { + getColorForSeverity, + getHighestSeverity, + getSummary, + getSeverityName, + groupBySource, + getScoringText, + getCountsBySeverity, + getUniqueErrors, +} from './utils'; function formatRange(range?: IRange): string { if (range === void 0) return ''; @@ -37,9 +47,10 @@ function formatRange(range?: IRange): string { return ` ${range.start.line + 1}:${range.start.character + 1}`; } -export const pretty: Formatter = results => { +export const pretty: Formatter = (results: ISpectralDiagnostic[], options: FormatterOptions) => { const cliui = require('cliui'); let output = '\n'; + const DEFAULT_TOTAL_WIDTH = process.stdout.columns; const COLUMNS = [10, 13, 25, 20, 20]; const variableColumns = DEFAULT_TOTAL_WIDTH - COLUMNS.reduce((a, b) => a + b); @@ -50,10 +61,23 @@ export const pretty: Formatter = results => { const PAD_TOP1_LEFT0 = [1, 0, 0, 0]; const ui = cliui({ width: DEFAULT_TOTAL_WIDTH, wrap: true }); + 
const uniqueResults = getUniqueErrors(results); const groupedResults = groupBySource(results); - const summaryColor = getColorForSeverity(getHighestSeverity(results)); + const summaryColor = getColorForSeverity(getHighestSeverity(uniqueResults)); const summaryText = getSummary(groupedResults); + let groupedUniqueResults = { ...groupedResults }; + let scoringColor = ''; + let scoringText = null; + + if (options.scoringConfig !== void 0) { + if (options.scoringConfig.uniqueErrors) { + groupedUniqueResults = { ...groupBySource(uniqueResults) }; + } + scoringColor = getColorForSeverity(DiagnosticSeverity.Information); + scoringText = getScoringText(getCountsBySeverity(groupedUniqueResults), options.scoringConfig); + } + const uniqueIssues: IDiagnostic['code'][] = []; Object.keys(groupedResults).forEach(i => { const pathResults = groupedResults[i]; @@ -83,6 +107,15 @@ export const pretty: Formatter = results => { output += ui.toString(); output += chalk[summaryColor].bold(`${uniqueIssues.length} Unique Issue(s)\n`); output += chalk[summaryColor].bold(`\u2716${summaryText !== null ? ` ${summaryText}` : ''}\n`); + if (options.scoringConfig !== void 0) { + output += chalk[scoringColor].bold(`\u2716${scoringText !== null ? ` ${scoringText}` : ''}\n`); + const scoring = +(scoringText !== null ? 
scoringText.replace('%', '').split(/[()]+/)[1] : 0); + if (scoring >= options.scoringConfig.threshold) { + output += chalk['green'].bold(`\u2716 PASSED!\n`); + } else { + output += chalk['red'].bold(`\u2716 FAILED!\n`); + } + } return output; }; diff --git a/packages/cli/src/formatters/stylish.ts b/packages/cli/src/formatters/stylish.ts index 7f0aecf34..5ee66bb9f 100644 --- a/packages/cli/src/formatters/stylish.ts +++ b/packages/cli/src/formatters/stylish.ts @@ -24,15 +24,26 @@ * @author Sindre Sorhus */ -import type { DiagnosticSeverity, IRange } from '@stoplight/types'; +import { ISpectralDiagnostic } from '@stoplight/spectral-core'; +import type { IRange } from '@stoplight/types'; +import { DiagnosticSeverity } from '@stoplight/types'; import chalk from 'chalk'; import stripAnsi = require('strip-ansi'); import table from 'text-table'; import { printPath, PrintStyle } from '@stoplight/spectral-runtime'; import type { IRuleResult } from '@stoplight/spectral-core'; -import type { Formatter } from './types'; -import { getColorForSeverity, getHighestSeverity, getSeverityName, getSummary, groupBySource } from './utils'; +import type { Formatter, FormatterOptions } from './types'; +import { + getColorForSeverity, + getHighestSeverity, + getSummary, + getSeverityName, + groupBySource, + getScoringText, + getCountsBySeverity, + getUniqueErrors, +} from './utils'; // ----------------------------------------------------------------------------- // Helpers @@ -55,12 +66,26 @@ function getMessageType(severity: DiagnosticSeverity): string { // Public Interface // ----------------------------------------------------------------------------- -export const stylish: Formatter = results => { +export const stylish: Formatter = (results: ISpectralDiagnostic[], options: FormatterOptions) => { let output = '\n'; + + const uniqueResults = getUniqueErrors(results); const groupedResults = groupBySource(results); - const summaryColor = getColorForSeverity(getHighestSeverity(results)); + 
const summaryColor = getColorForSeverity(getHighestSeverity(uniqueResults)); const summaryText = getSummary(groupedResults); + let groupedUniqueResults = { ...groupedResults }; + let scoringColor = ''; + let scoringText = null; + + if (options.scoringConfig !== void 0) { + if (options.scoringConfig.uniqueErrors) { + groupedUniqueResults = { ...groupBySource(uniqueResults) }; + } + scoringColor = getColorForSeverity(DiagnosticSeverity.Information); + scoringText = getScoringText(getCountsBySeverity(groupedUniqueResults), options.scoringConfig); + } + Object.keys(groupedResults).map(path => { const pathResults = groupedResults[path]; @@ -92,6 +117,15 @@ export const stylish: Formatter = results => { } output += chalk[summaryColor].bold(`\u2716 ${summaryText}\n`); + if (options.scoringConfig !== void 0) { + output += chalk[scoringColor].bold(`\u2716${scoringText !== null ? ` ${scoringText}` : ''}\n`); + const scoring = +(scoringText !== null ? scoringText.replace('%', '').split(/[()]+/)[1] : 0); + if (scoring >= options.scoringConfig.threshold) { + output += chalk['green'].bold(`\u2716 PASSED!\n`); + } else { + output += chalk['red'].bold(`\u2716 FAILED!\n`); + } + } return output; }; diff --git a/packages/cli/src/formatters/types.ts b/packages/cli/src/formatters/types.ts index 80607838e..5b020a4a3 100644 --- a/packages/cli/src/formatters/types.ts +++ b/packages/cli/src/formatters/types.ts @@ -1,8 +1,27 @@ import { ISpectralDiagnostic } from '@stoplight/spectral-core'; +import type { HumanReadableDiagnosticSeverity } from '@stoplight/spectral-core'; import type { DiagnosticSeverity } from '@stoplight/types'; +export type ScoringTable = { + [key in HumanReadableDiagnosticSeverity]: ScoringSubtract[]; +}; +export interface ScoringSubtract { + [key: number]: number; +} +export interface ScoringLevel { + [key: string]: number; +} +export type ScoringConfig = { + scoringSubtract: ScoringTable[]; + scoringLetter: ScoringLevel[]; + threshold: number; + 
onlySubtractHigherSeverityLevel: boolean;
+  uniqueErrors: boolean;
+};
+
 export type FormatterOptions = {
   failSeverity: DiagnosticSeverity;
+  scoringConfig?: ScoringConfig;
 };
 
 export type Formatter = (results: ISpectralDiagnostic[], options: FormatterOptions) => string;
diff --git a/packages/cli/src/formatters/utils/getCountsBySeverity.ts b/packages/cli/src/formatters/utils/getCountsBySeverity.ts
new file mode 100644
index 000000000..ff1dc144b
--- /dev/null
+++ b/packages/cli/src/formatters/utils/getCountsBySeverity.ts
@@ -0,0 +1,38 @@
+import { IRuleResult } from '@stoplight/spectral-core';
+import { DiagnosticSeverity, Dictionary } from '@stoplight/types';
+import { groupBySeverity } from './groupBySeverity';
+
+export const getCountsBySeverity = (
+  groupedResults: Dictionary<IRuleResult[]>,
+): {
+  [DiagnosticSeverity.Error]: number;
+  [DiagnosticSeverity.Warning]: number;
+  [DiagnosticSeverity.Information]: number;
+  [DiagnosticSeverity.Hint]: number;
+} => {
+  let errorCount = 0;
+  let warningCount = 0;
+  let infoCount = 0;
+  let hintCount = 0;
+
+  for (const results of Object.values(groupedResults)) {
+    const {
+      [DiagnosticSeverity.Error]: errors,
+      [DiagnosticSeverity.Warning]: warnings,
+      [DiagnosticSeverity.Information]: infos,
+      [DiagnosticSeverity.Hint]: hints,
+    } = groupBySeverity(results);
+
+    errorCount += errors.length;
+    warningCount += warnings.length;
+    infoCount += infos.length;
+    hintCount += hints.length;
+  }
+
+  return {
+    [DiagnosticSeverity.Error]: errorCount,
+    [DiagnosticSeverity.Warning]: warningCount,
+    [DiagnosticSeverity.Information]: infoCount,
+    [DiagnosticSeverity.Hint]: hintCount,
+  };
+};
diff --git a/packages/cli/src/formatters/utils/getScoring.ts b/packages/cli/src/formatters/utils/getScoring.ts
new file mode 100644
index 000000000..984622556
--- /dev/null
+++ b/packages/cli/src/formatters/utils/getScoring.ts
@@ -0,0 +1,68 @@
+import { SEVERITY_MAP } from '@stoplight/spectral-core';
+import { DiagnosticSeverity } from '@stoplight/types';
+import { ScoringConfig, ScoringTable, ScoringSubtract } from '../types';
+import * as path from '@stoplight/path';
+import fs from 'fs';
+
+export const getScoringConfig = async (scoringFile?: string): Promise<ScoringConfig | undefined> => {
+  if (scoringFile === void 0) {
+    return undefined;
+  } else if (!path.isAbsolute(scoringFile)) {
+    scoringFile = path.join(process.cwd(), scoringFile);
+  }
+
+  const scoringConfig: ScoringConfig = JSON.parse(await fs.promises.readFile(scoringFile, 'utf8')) as ScoringConfig;
+
+  return scoringConfig;
+};
+
+export const getScoringLevel = (
+  issuesCount: {
+    [DiagnosticSeverity.Error]: number;
+    [DiagnosticSeverity.Warning]: number;
+    [DiagnosticSeverity.Information]: number;
+    [DiagnosticSeverity.Hint]: number;
+  },
+  scoringSubtract: ScoringTable[],
+  onlySubtractHigherSeverityLevel: boolean,
+): number => {
+  let scoring = 100;
+  Object.keys(issuesCount).forEach(key => {
+    const scoringKey = Object.keys(SEVERITY_MAP).filter(mappedKey => SEVERITY_MAP[mappedKey] == key)[0];
+    if (scoringSubtract[scoringKey] !== void 0) {
+      // Once a higher severity level has already reduced the score, skip the
+      // remaining levels when onlySubtractHigherSeverityLevel is enabled.
+      if (scoring < 100 && onlySubtractHigherSeverityLevel) return;
+      let subtractValue = 0;
+      Object.keys(scoringSubtract[scoringKey] as ScoringSubtract[]).forEach((subtractKey: string): void => {
+        subtractValue = (
+          issuesCount[key] >= subtractKey
+            ? (scoringSubtract[scoringKey] as ScoringSubtract[])[subtractKey]
+            : subtractValue
+        ) as number;
+      });
+      scoring -= subtractValue;
+    }
+  });
+  return scoring > 0 ? 
scoring : 0; +}; + +export const getScoringText = ( + issuesCount: { + [DiagnosticSeverity.Error]: number; + [DiagnosticSeverity.Warning]: number; + [DiagnosticSeverity.Information]: number; + [DiagnosticSeverity.Hint]: number; + }, + scoringConfig: ScoringConfig, +): string => { + const { scoringSubtract, scoringLetter, onlySubtractHigherSeverityLevel } = scoringConfig; + const scoring = getScoringLevel(issuesCount, scoringSubtract, onlySubtractHigherSeverityLevel); + let scoringLevel: string = Object.keys(scoringLetter)[Object.keys(scoringLetter).length - 1]; + Object.keys(scoringLetter) + .reverse() + .forEach(key => { + if (scoring > (scoringLetter[key] as number)) { + scoringLevel = key; + } + }); + return `SCORING: ${scoringLevel} (${scoring}%)`; +}; diff --git a/packages/cli/src/formatters/utils/index.ts b/packages/cli/src/formatters/utils/index.ts index 7733c6615..1f076d762 100644 --- a/packages/cli/src/formatters/utils/index.ts +++ b/packages/cli/src/formatters/utils/index.ts @@ -1,8 +1,11 @@ export * from './getColorForSeverity'; +export * from './getCountsBySeverity'; export * from './getHighestSeverity'; +export * from './getScoring'; export * from './getSeverityName'; export * from './getSummary'; export * from './groupBySeverity'; export * from './groupBySource'; export * from './pluralize'; +export * from './uniqueErrors'; export * from './xmlEscape'; diff --git a/packages/cli/src/formatters/utils/uniqueErrors.ts b/packages/cli/src/formatters/utils/uniqueErrors.ts new file mode 100644 index 000000000..efa469353 --- /dev/null +++ b/packages/cli/src/formatters/utils/uniqueErrors.ts @@ -0,0 +1,16 @@ +import { IRuleResult } from '@stoplight/spectral-core'; + +export const getUniqueErrors = (results: IRuleResult[]): IRuleResult[] => { + const filteredResults: IRuleResult[] = []; + results.forEach((result: IRuleResult) => { + if ( + !filteredResults.some( + (element: IRuleResult) => element.code === result.code && element.message === result.message, + ) + 
) {
+      filteredResults.push(result);
+    }
+  });
+
+  return filteredResults;
+};
diff --git a/packages/cli/src/services/__tests__/__fixtures__/scoring-config.json b/packages/cli/src/services/__tests__/__fixtures__/scoring-config.json
new file mode 100644
index 000000000..4d6abc0b3
--- /dev/null
+++ b/packages/cli/src/services/__tests__/__fixtures__/scoring-config.json
@@ -0,0 +1,32 @@
+{
+  "scoringSubtract":
+  {
+    "error":
+    {
+      "1":55,
+      "2":65,
+      "3":75,
+      "6":85,
+      "10":95
+    },
+    "warn":
+    {
+      "1":3,
+      "2":7,
+      "3":10,
+      "6":15,
+      "10":18
+    }
+  },
+  "scoringLetter":
+  {
+    "A": 75,
+    "B": 65,
+    "C": 55,
+    "D": 45,
+    "E": 0
+  },
+  "threshold": 50,
+  "onlySubtractHigherSeverityLevel": true,
+  "uniqueErrors": false
+}
\ No newline at end of file
diff --git a/packages/cli/src/services/__tests__/linter.test.ts b/packages/cli/src/services/__tests__/linter.test.ts
index e17a1b77e..69d299cee 100644
--- a/packages/cli/src/services/__tests__/linter.test.ts
+++ b/packages/cli/src/services/__tests__/linter.test.ts
@@ -18,6 +18,7 @@ jest.mock('../output');
 const validCustomOas3SpecPath = resolve(__dirname, '__fixtures__/openapi-3.0-valid-custom.yaml');
 const invalidRulesetPath = resolve(__dirname, '__fixtures__/ruleset-invalid.js');
 const validRulesetPath = resolve(__dirname, '__fixtures__/ruleset-valid.js');
+const validScoringConfigRulesetPath = resolve(__dirname, '__fixtures__/scoring-config.json');
 const validOas3SpecPath = resolve(__dirname, './__fixtures__/openapi-3.0-valid.yaml');
 
 async function run(command: string) {
@@ -368,6 +369,24 @@ describe('Linter service', () => {
     });
   });
 
+  describe('--scoring-config', () => {
+    describe('when single scoring-config option provided', () => {
+      it('outputs normal output if it does not exist', () => {
+        return expect(
+          run(`lint ${validCustomOas3SpecPath} -r ${validRulesetPath} --scoring-config non-existent-path`),
+        ).resolves.toEqual([]);
+      });
+
+      it('outputs no issues', () => {
+        return expect(
+          run(
+            `lint 
${validCustomOas3SpecPath} -r ${validRulesetPath} --scoring-config ${validScoringConfigRulesetPath}`, + ), + ).resolves.toEqual([]); + }); + }); + }); + describe('when loading specification files from web', () => { it('outputs no issues', () => { const document = join(__dirname, `./__fixtures__/stoplight-info-document.json`); diff --git a/packages/cli/src/services/__tests__/output.test.ts b/packages/cli/src/services/__tests__/output.test.ts index c9f08e305..88d38eeaf 100644 --- a/packages/cli/src/services/__tests__/output.test.ts +++ b/packages/cli/src/services/__tests__/output.test.ts @@ -2,6 +2,7 @@ import { DiagnosticSeverity } from '@stoplight/types'; import * as fs from 'fs'; import * as process from 'process'; import * as formatters from '../../formatters'; +import { ScoringLevel, ScoringTable } from '../../formatters/types'; import { OutputFormat } from '../config'; import { formatOutput, writeOutput } from '../output'; @@ -14,6 +15,23 @@ jest.mock('fs', () => ({ })); jest.mock('process'); +const scoringConfig = { + scoringSubtract: { + error: [0, 55, 65, 75, 75, 75, 85, 85, 85, 85, 95], + warn: [0, 3, 7, 10, 10, 10, 15, 15, 15, 15, 18], + } as unknown as ScoringTable[], + scoringLetter: { + A: 75, + B: 65, + C: 55, + D: 45, + E: 0, + } as unknown as ScoringLevel[], + threshold: 50, + onlySubtractHigherSeverityLevel: true, + uniqueErrors: false, +}; + describe('Output service', () => { describe('formatOutput', () => { it.each(['stylish', 'json', 'junit'])('calls %s formatter with given result', format => { @@ -41,6 +59,34 @@ describe('Output service', () => { (formatters[format] as jest.Mock).mockReturnValueOnce(output); expect(formatOutput(results, format as OutputFormat, { failSeverity: DiagnosticSeverity.Error })).toEqual(output); }); + + it.each(['stylish', 'json', 'pretty'])('calls %s formatter with given result and scoring-config', format => { + const results = [ + { + code: 'info-contact', + path: ['info'], + message: 'Info object should contain 
`contact` object.', + severity: DiagnosticSeverity.Information, + range: { + start: { + line: 2, + character: 9, + }, + end: { + line: 6, + character: 19, + }, + }, + source: '/home/Stoplight/spectral/src/__tests__/__fixtures__/petstore.oas3.json', + }, + ]; + + const output = `value for ${format}`; + (formatters[format] as jest.Mock).mockReturnValueOnce(output); + expect( + formatOutput(results, format as OutputFormat, { failSeverity: DiagnosticSeverity.Error, scoringConfig }), + ).toEqual(output); + }); }); describe('writeOutput', () => { diff --git a/packages/cli/src/services/config.ts b/packages/cli/src/services/config.ts index 50024e510..cba83ef8b 100644 --- a/packages/cli/src/services/config.ts +++ b/packages/cli/src/services/config.ts @@ -19,6 +19,7 @@ export interface ILintConfig { output?: Dictionary; resolver?: string; ruleset?: string; + scoringConfig?: string; stdinFilepath?: string; ignoreUnknownFormat: boolean; failOnUnmatchedGlobs: boolean; diff --git a/packages/core/src/ruleset/index.ts b/packages/core/src/ruleset/index.ts index 50addc0e8..84f5769f9 100644 --- a/packages/core/src/ruleset/index.ts +++ b/packages/core/src/ruleset/index.ts @@ -1,5 +1,5 @@ export { assertValidRuleset, RulesetValidationError } from './validation/index'; -export { getDiagnosticSeverity } from './utils/severity'; +export { getDiagnosticSeverity, SEVERITY_MAP } from './utils/severity'; export { createRulesetFunction, SchemaDefinition as RulesetFunctionSchemaDefinition } from './function'; export { Format } from './format'; export { RulesetDefinition, RuleDefinition, ParserOptions, HumanReadableDiagnosticSeverity } from './types'; diff --git a/packages/core/src/ruleset/utils/severity.ts b/packages/core/src/ruleset/utils/severity.ts index aadc8c77f..b950e71cd 100644 --- a/packages/core/src/ruleset/utils/severity.ts +++ b/packages/core/src/ruleset/utils/severity.ts @@ -3,7 +3,7 @@ import { HumanReadableDiagnosticSeverity } from '../types'; export const DEFAULT_SEVERITY_LEVEL = 
DiagnosticSeverity.Warning; -const SEVERITY_MAP: Record = { +export const SEVERITY_MAP: Record = { error: DiagnosticSeverity.Error, warn: DiagnosticSeverity.Warning, info: DiagnosticSeverity.Information, diff --git a/test-harness/scenarios/formats/results-default-format-scoring-json.scenario b/test-harness/scenarios/formats/results-default-format-scoring-json.scenario new file mode 100644 index 000000000..56a9b97c2 --- /dev/null +++ b/test-harness/scenarios/formats/results-default-format-scoring-json.scenario @@ -0,0 +1,147 @@ +====test==== +Invalid document outputs results with scoring data --format=json +====document==== +--- +info: + version: 1.0.0 + title: Stoplight +====asset:ruleset.json==== + { + "rules": { + "api-servers": { + "description": "\"servers\" must be present and non-empty array.", + "recommended": true, + "given": "$", + "then": { + "field": "servers", + "function": "schema", + "functionOptions": { + "dialect": "draft7", + "schema": { + "items": { + "type": "object", + }, + "minItems": 1, + "type": "array" + } + } + } + }, + "info-contact": { + "description": "Info object must have a \"contact\" object.", + "recommended": true, + "type": "style", + "given": "$", + "then": { + "field": "info.contact", + "function": "truthy", + } + }, + "info-description": { + "description": "Info \"description\" must be present and non-empty string.", + "recommended": true, + "type": "style", + "given": "$", + "then": { + "field": "info.description", + "function": "truthy" + } + } + } + } +====asset:scoring-config.json==== +{ + "scoringSubtract": + { + "error": + { + "1":55, + "2":65, + "3":75, + "6":85, + "10":95 + }, + "warn": + { + "1":3, + "2":7, + "3":10, + "6":15, + "10":18 + } + }, + "scoringLetter": + { + "A": 75, + "B": 65, + "C": 55, + "D": 45, + "E": 0 + }, + "threshold": 50, + "onlySubtractHigherSeverityLevel": true, + "uniqueErrors": false +} +====command==== +{bin} lint {document} --format=json --ruleset "{asset:ruleset.json}" --scoring-config 
"{asset:scoring-config.json}" +====stdout==== +{ + "scoring": "A (90%)", + "passed": true, + "results": [ + { + "code": "api-servers", + "path": [], + "message": "\"servers\" must be present and non-empty array.", + "severity": 1, + "range": { + "start": { + "line": 0, + "character": 0 + }, + "end": { + "line": 3, + "character": 18 + } + }, + "source": "{document}" + }, + { + "code": "info-contact", + "path": [ + "info" + ], + "message": "Info object must have a \"contact\" object.", + "severity": 1, + "range": { + "start": { + "line": 1, + "character": 5 + }, + "end": { + "line": 3, + "character": 18 + } + }, + "source": "{document}" + }, + { + "code": "info-description", + "path": [ + "info" + ], + "message": "Info \"description\" must be present and non-empty string.", + "severity": 1, + "range": { + "start": { + "line": 1, + "character": 5 + }, + "end": { + "line": 3, + "character": 18 + } + }, + "source": "{document}" + } + ]} diff --git a/test-harness/scenarios/formats/results-default-scoring.scenario b/test-harness/scenarios/formats/results-default-scoring.scenario new file mode 100644 index 000000000..4757b259c --- /dev/null +++ b/test-harness/scenarios/formats/results-default-scoring.scenario @@ -0,0 +1,95 @@ +====test==== +Invalid document returns results with scoring data in default (stylish) format +====document==== +--- +info: + version: 1.0.0 + title: Stoplight +====asset:ruleset.json==== +{ + "rules": { + "api-servers": { + "description": "\"servers\" must be present and non-empty array.", + "recommended": true, + "given": "$", + "then": { + "field": "servers", + "function": "schema", + "functionOptions": { + "dialect": "draft7", + "schema": { + "items": { + "type": "object", + }, + "minItems": 1, + "type": "array" + } + } + } + }, + "info-contact": { + "description": "Info object must have a \"contact\" object.", + "recommended": true, + "type": "style", + "given": "$", + "then": { + "field": "info.contact", + "function": "truthy", + } + }, + 
"info-description": { + "description": "Info \"description\" must be present and non-empty string.", + "recommended": true, + "type": "style", + "given": "$", + "then": { + "field": "info.description", + "function": "truthy" + } + } + } +} +====asset:scoring-config.json==== +{ + "scoringSubtract": + { + "error": + { + "1":55, + "2":65, + "3":75, + "6":85, + "10":95 + }, + "warn": + { + "1":3, + "2":7, + "3":10, + "6":15, + "10":18 + } + }, + "scoringLetter": + { + "A": 75, + "B": 65, + "C": 55, + "D": 45, + "E": 0 + }, + "threshold": 50, + "onlySubtractHigherSeverityLevel": true, + "uniqueErrors": false +} +====command==== +{bin} lint {document} --ruleset "{asset:ruleset.json}" --scoring-config "{asset:scoring-config.json}" +====stdout==== +{document} + 1:1 warning api-servers "servers" must be present and non-empty array. + 2:6 warning info-contact Info object must have a "contact" object. info + 2:6 warning info-description Info "description" must be present and non-empty string. info + +✖ 3 problems (0 errors, 3 warnings, 0 infos, 0 hints) +✖ SCORING: A (90%) +✖ PASSED! 
diff --git a/test-harness/scenarios/formats/results-format-stylish-scoring.scenario b/test-harness/scenarios/formats/results-format-stylish-scoring.scenario new file mode 100644 index 000000000..93f99854e --- /dev/null +++ b/test-harness/scenarios/formats/results-format-stylish-scoring.scenario @@ -0,0 +1,96 @@ +====test==== +Invalid document outputs results with scoring data when --format=stylish +====document==== +--- +info: + version: 1.0.0 + title: Stoplight +paths: {} +====asset:ruleset.json==== +{ + "rules": { + "api-servers": { + "description": "\"servers\" must be present and non-empty array.", + "recommended": true, + "given": "$", + "then": { + "field": "servers", + "function": "schema", + "functionOptions": { + "dialect": "draft7", + "schema": { + "items": { + "type": "object", + }, + "minItems": 1, + "type": "array" + } + } + } + }, + "info-contact": { + "description": "Info object must have a \"contact\" object.", + "recommended": true, + "type": "style", + "given": "$", + "then": { + "field": "info.contact", + "function": "truthy", + } + }, + "info-description": { + "description": "Info \"description\" must be present and non-empty string.", + "recommended": true, + "type": "style", + "given": "$", + "then": { + "field": "info.description", + "function": "truthy" + } + } + } +} +====asset:scoring-config.json==== +{ + "scoringSubtract": + { + "error": + { + "1":55, + "2":65, + "3":75, + "6":85, + "10":95 + }, + "warn": + { + "1":3, + "2":7, + "3":10, + "6":15, + "10":18 + } + }, + "scoringLetter": + { + "A": 75, + "B": 65, + "C": 55, + "D": 45, + "E": 0 + }, + "threshold": 50, + "onlySubtractHigherSeverityLevel": true, + "uniqueErrors": false +} +====command==== +{bin} lint {document} --format=stylish --ruleset "{asset:ruleset.json}" --scoring-config "{asset:scoring-config.json}" +====stdout==== +{document} + 1:1 warning api-servers "servers" must be present and non-empty array. + 2:6 warning info-contact Info object must have a "contact" object. 
info + 2:6 warning info-description Info "description" must be present and non-empty string. info + +✖ 3 problems (0 errors, 3 warnings, 0 infos, 0 hints) +✖ SCORING: A (90%) +✖ PASSED! diff --git a/test-harness/scenarios/formats/too-few-outputs.scenario b/test-harness/scenarios/formats/too-few-outputs.scenario index 733e54185..f4b630195 100644 --- a/test-harness/scenarios/formats/too-few-outputs.scenario +++ b/test-harness/scenarios/formats/too-few-outputs.scenario @@ -24,6 +24,7 @@ Options: --stdin-filepath path to a file to pretend that stdin comes from [string] --resolver path to custom json-ref-resolver instance [string] -r, --ruleset path/URL to a ruleset file [string] + --scoring-config path/URL to a scoring config file [string] -F, --fail-severity results of this level or above will trigger a failure exit code [string] [choices: "error", "warn", "info", "hint"] [default: "error"] -D, --display-only-failures only output results equal to or greater than --fail-severity [boolean] [default: false] --ignore-unknown-format do not warn about unmatched formats [boolean] [default: false] diff --git a/test-harness/scenarios/formats/too-many-outputs.scenario b/test-harness/scenarios/formats/too-many-outputs.scenario index c127e994a..f31ac9898 100644 --- a/test-harness/scenarios/formats/too-many-outputs.scenario +++ b/test-harness/scenarios/formats/too-many-outputs.scenario @@ -24,6 +24,7 @@ Options: --stdin-filepath path to a file to pretend that stdin comes from [string] --resolver path to custom json-ref-resolver instance [string] -r, --ruleset path/URL to a ruleset file [string] + --scoring-config path/URL to a scoring config file [string] -F, --fail-severity results of this level or above will trigger a failure exit code [string] [choices: "error", "warn", "info", "hint"] [default: "error"] -D, --display-only-failures only output results equal to or greater than --fail-severity [boolean] [default: false] --ignore-unknown-format do not warn about unmatched formats 
[boolean] [default: false] diff --git a/test-harness/scenarios/formats/unmatched-outputs.scenario b/test-harness/scenarios/formats/unmatched-outputs.scenario index 69f7f1fc5..8abf03ea1 100644 --- a/test-harness/scenarios/formats/unmatched-outputs.scenario +++ b/test-harness/scenarios/formats/unmatched-outputs.scenario @@ -24,6 +24,7 @@ Options: --stdin-filepath path to a file to pretend that stdin comes from [string] --resolver path to custom json-ref-resolver instance [string] -r, --ruleset path/URL to a ruleset file [string] + --scoring-config path/URL to a scoring config file [string] -F, --fail-severity results of this level or above will trigger a failure exit code [string] [choices: "error", "warn", "info", "hint"] [default: "error"] -D, --display-only-failures only output results equal to or greater than --fail-severity [boolean] [default: false] --ignore-unknown-format do not warn about unmatched formats [boolean] [default: false] diff --git a/test-harness/scenarios/help-no-document.scenario b/test-harness/scenarios/help-no-document.scenario index 8e686198b..d9274754f 100644 --- a/test-harness/scenarios/help-no-document.scenario +++ b/test-harness/scenarios/help-no-document.scenario @@ -25,6 +25,7 @@ Options: --stdin-filepath path to a file to pretend that stdin comes from [string] --resolver path to custom json-ref-resolver instance [string] -r, --ruleset path/URL to a ruleset file [string] + --scoring-config path/URL to a scoring config file [string] -F, --fail-severity results of this level or above will trigger a failure exit code [string] [choices: "error", "warn", "info", "hint"] [default: "error"] -D, --display-only-failures only output results equal to or greater than --fail-severity [boolean] [default: false] --ignore-unknown-format do not warn about unmatched formats [boolean] [default: false] diff --git a/test-harness/scenarios/overrides/aliases-scoring.scenario b/test-harness/scenarios/overrides/aliases-scoring.scenario new file mode 100644 
index 000000000..12b9d4061 --- /dev/null +++ b/test-harness/scenarios/overrides/aliases-scoring.scenario @@ -0,0 +1,133 @@ +====test==== +Respect overrides with aliases and scoring +====asset:spectral.js==== +const { DiagnosticSeverity } = require('@stoplight/types'); +const { pattern } = require('@stoplight/spectral-functions'); + +module.exports = { + aliases: { + Info: ['$.info'], + }, + rules: { + 'description-matches-stoplight': { + message: 'Description must contain Stoplight', + given: '#Info', + recommended: true, + severity: DiagnosticSeverity.Error, + then: { + field: 'description', + function: pattern, + functionOptions: { + match: 'Stoplight', + }, + }, + }, + 'title-matches-stoplight': { + message: 'Title must contain Stoplight', + given: '#Info', + then: { + field: 'title', + function: pattern, + functionOptions: { + match: 'Stoplight', + }, + }, + }, + 'contact-name-matches-stoplight': { + message: 'Contact name must contain Stoplight', + given: '#Info.contact', + recommended: false, + then: { + field: 'name', + function: pattern, + functionOptions: { + match: 'Stoplight', + }, + }, + }, + }, + overrides: [ + { + files: [`**/*.json`], + rules: { + 'description-matches-stoplight': 'error', + 'title-matches-stoplight': 'warn', + }, + }, + { + files: [`v2/**/*.json`], + rules: { + 'description-matches-stoplight': 'info', + 'title-matches-stoplight': 'hint', + }, + }, + ], +}; +====asset:scoring-config.json==== +{ + "scoringSubtract": + { + "error": + { + "1":55, + "2":65, + "3":75, + "6":85, + "10":95 + }, + "warn": + { + "1":3, + "2":7, + "3":10, + "6":15, + "10":18 + } + }, + "scoringLetter": + { + "A": 75, + "B": 65, + "C": 55, + "D": 45, + "E": 0 + }, + "threshold": 50, + "onlySubtractHigherSeverityLevel": true, + "uniqueErrors": false +} +====asset:v2/document.json==== +{ + "info": { + "description": "", + "title": "", + "contact": { + "name": "" + } + } +} +====asset:legacy/document.json==== +{ + "info": { + "description": "", + "title": "", + 
"contact": { + "name": "" + } + } +} +====command==== +{bin} lint **/*.json --ruleset {asset:spectral.js} --fail-on-unmatched-globs --scoring-config "{asset:scoring-config.json}" +====stdout==== + +{asset:legacy/document.json} + 3:20 error description-matches-stoplight Description must contain Stoplight info.description + 4:14 warning title-matches-stoplight Title must contain Stoplight info.title + +{asset:v2/document.json} + 3:20 information description-matches-stoplight Description must contain Stoplight info.description + 4:14 hint title-matches-stoplight Title must contain Stoplight info.title + +✖ 4 problems (1 error, 1 warning, 1 info, 1 hint) +✖ SCORING: E (42%) +✖ FAILED! diff --git a/test-harness/scenarios/severity/fail-on-error-no-error-scoring.scenario b/test-harness/scenarios/severity/fail-on-error-no-error-scoring.scenario new file mode 100644 index 000000000..66dfb2b87 --- /dev/null +++ b/test-harness/scenarios/severity/fail-on-error-no-error-scoring.scenario @@ -0,0 +1,64 @@ +====test==== +Will only fail if there is an error, and there is not. Can still see all warnings with scoring data. 
+====document==== +- type: string +- type: number +====asset:ruleset.json==== +{ + "rules": { + "valid-type": { + "given": "$..type", + "then": { + "function": "enumeration", + "functionOptions": { + "values": ["object"] + } + } + } + } +} +====asset:scoring-config.json==== +{ + "scoringSubtract": + { + "error": + { + "1":55, + "2":65, + "3":75, + "6":85, + "10":95 + }, + "warn": + { + "1":3, + "2":7, + "3":10, + "6":15, + "10":18 + } + }, + "scoringLetter": + { + "A": 75, + "B": 65, + "C": 55, + "D": 45, + "E": 0 + }, + "threshold": 50, + "onlySubtractHigherSeverityLevel": true, + "uniqueErrors": false +} +====command==== +{bin} lint {document} --ruleset "{asset:ruleset.json}" --fail-severity=error --scoring-config "{asset:scoring-config.json}" +====status==== +0 +====stdout==== +{document} + 1:9 warning valid-type "string" must be equal to one of the allowed values: "object" [0].type + 2:9 warning valid-type "number" must be equal to one of the allowed values: "object" [1].type + +✖ 2 problems (0 errors, 2 warnings, 0 infos, 0 hints) +✖ SCORING: A (93%) +✖ PASSED! 
diff --git a/test-harness/scenarios/severity/fail-on-error-scoring.scenario b/test-harness/scenarios/severity/fail-on-error-scoring.scenario new file mode 100644 index 000000000..b30c9b5e5 --- /dev/null +++ b/test-harness/scenarios/severity/fail-on-error-scoring.scenario @@ -0,0 +1,78 @@ +====test==== +Will fail and return 1 as exit code because errors exist with scoring data +====document==== +- type: string +- type: array +====asset:ruleset.json==== +{ + "rules": { + "valid-type": { + "given": "$..type", + "severity": "error", + "then": { + "function": "enumeration", + "functionOptions": { + "values": ["object"] + } + } + }, + "no-primitive-type": { + "given": "$..type", + "severity": "warn", + "then": { + "function": "enumeration", + "functionOptions": { + "values": ["string", "number", "boolean", "null"] + } + } + } + } +} +====asset:scoring-config.json==== +{ + "scoringSubtract": + { + "error": + { + "1":55, + "2":65, + "3":75, + "6":85, + "10":95 + }, + "warn": + { + "1":3, + "2":7, + "3":10, + "6":15, + "10":18 + } + }, + "scoringLetter": + { + "A": 75, + "B": 65, + "C": 55, + "D": 45, + "E": 0 + }, + "threshold": 50, + "onlySubtractHigherSeverityLevel": true, + "uniqueErrors": false +} +====command-nix==== +{bin} lint {document} --ruleset "{asset:ruleset.json}" --fail-severity=error --scoring-config "{asset:scoring-config.json}" +====command-win==== +{bin} lint {document} --ruleset "{asset:ruleset.json}" --fail-severity error --scoring-config "{asset:scoring-config.json}" +====status==== +1 +====stdout==== +{document} + 1:9 error valid-type "string" must be equal to one of the allowed values: "object" [0].type + 2:9 warning no-primitive-type "array" must be equal to one of the allowed values: "string", "number", "boolean", "null" [1].type + 2:9 error valid-type "array" must be equal to one of the allowed values: "object" [1].type + +✖ 3 problems (2 errors, 1 warning, 0 infos, 0 hints) +✖ SCORING: E (32%) +✖ FAILED! 
diff --git a/test-harness/scenarios/strict-options.scenario b/test-harness/scenarios/strict-options.scenario index 8b1cb3708..cdab123cf 100644 --- a/test-harness/scenarios/strict-options.scenario +++ b/test-harness/scenarios/strict-options.scenario @@ -25,6 +25,7 @@ Options: --stdin-filepath path to a file to pretend that stdin comes from [string] --resolver path to custom json-ref-resolver instance [string] -r, --ruleset path/URL to a ruleset file [string] + --scoring-config path/URL to a scoring config file [string] -F, --fail-severity results of this level or above will trigger a failure exit code [string] [choices: "error", "warn", "info", "hint"] [default: "error"] -D, --display-only-failures only output results equal to or greater than --fail-severity [boolean] [default: false] --ignore-unknown-format do not warn about unmatched formats [boolean] [default: false] diff --git a/test-harness/scenarios/valid-no-errors.oas2-scoring.scenario b/test-harness/scenarios/valid-no-errors.oas2-scoring.scenario new file mode 100644 index 000000000..50c16f90b --- /dev/null +++ b/test-harness/scenarios/valid-no-errors.oas2-scoring.scenario @@ -0,0 +1,57 @@ +====test==== +Valid OAS2 document returns no results with scoring data +====document==== +swagger: "2.0" +info: + version: 1.0.0 + title: Stoplight + description: lots of text + contact: + name: fred +host: localhost +schemes: + - http +paths: {} +tags: + - name: my-tag +====asset:ruleset==== +const { oas } = require('@stoplight/spectral-rulesets'); +module.exports = oas; +====asset:scoring-config.json==== +{ + "scoringSubtract": + { + "error": + { + "1":55, + "2":65, + "3":75, + "6":85, + "10":95 + }, + "warn": + { + "1":3, + "2":7, + "3":10, + "6":15, + "10":18 + } + }, + "scoringLetter": + { + "A": 75, + "B": 65, + "C": 55, + "D": 45, + "E": 0 + }, + "threshold": 50, + "onlySubtractHigherSeverityLevel": true, + "uniqueErrors": false +} +====command==== +{bin} lint {document} --ruleset "{asset:ruleset}" --scoring-config 
"{asset:scoring-config.json}" +====stdout==== +No results with a severity of 'error' found! +SCORING: (100%)PASSED! diff --git a/test-harness/scenarios/valid-no-errors.oas2.scenario b/test-harness/scenarios/valid-no-errors.oas2.scenario index b671062f2..f2db9703c 100644 --- a/test-harness/scenarios/valid-no-errors.oas2.scenario +++ b/test-harness/scenarios/valid-no-errors.oas2.scenario @@ -9,7 +9,7 @@ info: contact: name: fred host: localhost -schemes: +schemes: - http paths: {} tags:
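The scenarios above all share the same `scoring-config.json`, and their expected outputs (`A (90%)` for 3 warnings, `A (93%)` for 2 warnings, `E (32%)` for 2 errors + 1 warning, `E (42%)` for 1 error + 1 warning) pin down the scoring arithmetic. The sketch below is a hypothetical re-implementation inferred from those expected outputs, not Spectral's actual code: the "largest table key ≤ count" lookup is an assumption, and `onlySubtractHigherSeverityLevel` is deliberately not modelled, because the expected outputs subtract both severity levels even though the fixtures set it to `true`.

```javascript
// Hypothetical sketch of the scoring arithmetic exercised by these
// scenarios -- NOT Spectral's implementation. Semantics inferred from
// the expected stdout of the scenarios above.
const scoringConfig = {
  scoringSubtract: {
    error: { 1: 55, 2: 65, 3: 75, 6: 85, 10: 95 },
    warn: { 1: 3, 2: 7, 3: 10, 6: 15, 10: 18 },
  },
  scoringLetter: { A: 75, B: 65, C: 55, D: 45, E: 0 },
  threshold: 50,
};

// Subtraction for `count` results of one severity: take the value of
// the largest table key that is <= count (assumption; counts past the
// last key reuse its value).
function subtractFor(table, count) {
  if (count === 0) return 0;
  const keys = Object.keys(table).map(Number).sort((a, b) => a - b);
  let value = 0;
  for (const k of keys) {
    if (count >= k) value = table[k];
  }
  return value;
}

function score(config, counts) {
  let percent = 100;
  for (const [severity, table] of Object.entries(config.scoringSubtract)) {
    percent -= subtractFor(table, counts[severity] ?? 0);
  }
  percent = Math.max(percent, 0);
  // Highest letter whose minimum percentage the score still meets.
  const letter = Object.entries(config.scoringLetter)
    .sort((a, b) => b[1] - a[1])
    .find(([, min]) => percent >= min)[0];
  return { letter, percent, passed: percent >= config.threshold };
}
```

For example, `score(scoringConfig, { warn: 3 })` reproduces the `A (90%)` / `PASSED` result of the stylish-format scenario, and `score(scoringConfig, { error: 2, warn: 1 })` reproduces `E (32%)` / `FAILED` from the fail-on-error scenario.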