
doc: added importance and custom markers section #43

Merged: 1 commit into SSSD:master, Sep 27, 2023

Conversation

@danlavu commented Sep 12, 2023

No description provided.

@jakub-vavra-cz (Contributor) commented:

Hello, looks mostly good.
We might want to add also "security" marker from the start and label tests that cover logins with wrong password, expired/disabled accounts eg. . WDYIT @pbrezina?

@jakub-vavra-cz (Contributor) previously approved these changes on Sep 12, 2023:

LGTM

@danlavu (Author) commented Sep 12, 2023

@jakub-vavra-cz, some of it should be under authentication, and we don't have any access control tests ported in yet, so that group will be added eventually.

@pbrezina (Member) commented:

> Hello, looks mostly good. We might also want to add a "security" marker from the start and label tests that cover logins with a wrong password, expired/disabled accounts, etc. WDYT @pbrezina?

AFAIK this is the "authentication" or "authorization" marker. So my opinion is to either use these, or scratch them in favor of "security".


From my upstream point of view, both importance and custom markers do not really make sense. Before a release and before a pull request is pushed, all tests must be green. So IIUIC it is more related to how you want to run the tests downstream, and this should be explained here so whoever reads it knows the meaning of these markers.

@danlavu (Author) commented Sep 15, 2023

@pbrezina, it's simply a way to organize the tests when we execute regression runs, so we can run the subset of tests that a patch, build, or erratum affects without executing all tests. Gating, for example, is supposed to be a quick set of core tests that touch all features, aka "critical". "High" will be a comprehensive run of the features. "Medium" will be features that are not used all the time or will not impact customers' operations if they fail, like the CLI tools sssctl and sss_override.

I'll expand the documentation and add this kind of thing to the doc.
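
For illustration, an importance marker of this kind would be applied as an ordinary pytest marker. A minimal sketch, assuming the marker takes the level as a string argument; the test names and bodies here are hypothetical, not actual SSSD tests:

    import pytest

    # Marker levels follow the tiers discussed above; the tests themselves
    # are hypothetical placeholders.
    @pytest.mark.importance("critical")
    def test_login_with_correct_password():
        """Core "sunny day" scenario: short runtime, suitable for gating."""
        ...

    @pytest.mark.importance("medium")
    def test_sssctl_config_check():
        """CLI tooling: a failure does not impact operational functionality."""
        ...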

@pbrezina (Member) commented:

> @pbrezina, it's simply a way to organize the tests when we execute regression runs, so we can run the subset of tests that a patch, build, or erratum affects without executing all tests. Gating, for example, is supposed to be a quick set of core tests that touch all features, aka "critical". "High" will be a comprehensive run of the features. "Medium" will be features that are not used all the time or will not impact customers' operations if they fail, like the CLI tools sssctl and sss_override.
>
> I'll expand the documentation and add this kind of thing to the doc.

Yes, exactly this kind of thing should go to the docs.

@jakub-vavra-cz (Contributor) commented:

@pbrezina, @sidecontrol, as for the difference between critical, high, medium, and low:
The categorization of tests is important internally for dividing tests into tiers (which will have different execution frequencies), and it is useful for evaluating the seriousness of a possible failure (a failure of a low-importance test will have a different impact than a failure of a critical test). A selection sketch follows the tier list below.

"Critical" tests should contain only small sample of tests for the most important functionality.
These should have very short runtime so they can be run often (like on PRs).
They should be covering only the "sunny day" (positive test) scenarios.

"High" tests that cover all functionality properly still focusing on positive scenarios.
Consider tests with properly configured sssd and kerberos, automount, ... making sure that everything works properly.

"Medium" tests comprehensively including some less used functionality and more complex setups.
Tests with slightly misconfigured sssd, less common setups, basic failover.

"Low" tests for a rarely used functionality, edge cases and very specific complex setups, performance/stress tests.
Tests where the sssd / environment is misconfigured.
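
Tier selection along these lines can be implemented with a small pytest collection hook. A minimal sketch for a conftest.py, assuming the importance("level") marker form shown earlier; the --importance command-line option is an assumption for illustration, not an existing framework flag:

    # conftest.py -- sketch only; the --importance option is hypothetical,
    # not an existing SSSD test framework flag.
    import pytest

    LEVELS = ("critical", "high", "medium", "low")

    def pytest_addoption(parser):
        # Example gating run: pytest --importance critical --importance high
        parser.addoption("--importance", action="append", choices=LEVELS,
                         help="run only tests marked with these importance levels")

    def pytest_collection_modifyitems(config, items):
        wanted = config.getoption("--importance")
        if not wanted:
            return  # no filter requested: run everything
        skip = pytest.mark.skip(reason="importance level not selected for this run")
        for item in items:
            marker = item.get_closest_marker("importance")
            level = marker.args[0] if marker and marker.args else None
            if level not in wanted:
                item.add_marker(skip)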

@danlavu (Author) commented Sep 18, 2023

@jakub-vavra-cz, how does this sound?

* - critical
  - Core subset of tests that covers all operational features. This is used for gating, where it may be run several
    times a day. To manage resources, the execution of all these tests should be kept under an hour.
* - high
  - The most comprehensive set of tests, covering all operational features.
* - medium
  - Tests that do not impact operational functionality, like the CLI commands included in the sssd-tools package.
* - low
  - Tests that may have a long execution time, edge cases, or complex scenarios that demand a lot of resources.
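
For the markers in this table to be usable without a PytestUnknownMarkWarning, they also need to be registered with pytest. A minimal sketch using the standard pytest_configure hook in conftest.py; the description text is paraphrased from the table above, and the actual registration in the SSSD tree may differ:

    # conftest.py -- sketch; description paraphrased from the table above.
    def pytest_configure(config):
        config.addinivalue_line(
            "markers",
            "importance(level): test importance: critical, high, medium or low")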

@jakub-vavra-cz (Contributor) commented:

Hi @sidecontrol,
I would shift some stuff from critical to high. Gating is Crit+High; I would keep Critical for smoke tests that we will execute
on commits/PRs.

    • critical
    • Core subset of tests that covers the most important operational features. This is used in pipelines where it may be run multiple times a day. The execution should be kept as short as possible.
    • high
    • The comprehensive set of tests, covering all operational features. This is used for gating, where it may be run several times a day. To manage resources, the execution of all these tests should be kept under an hour.
    • medium
    • Extended set that covers tests that do not impact operational functionality, like the CLI commands included in the sssd-tools package. Tests that cover negative scenarios and a misconfigured environment fit here as well.
    • low
    • Tests that may have a long execution time, edge cases, or complex scenarios that demand a lot of resources. Consider performance and stress tests part of this set.

@danlavu (Author) commented Sep 20, 2023

Updated.

@jakub-vavra-cz (Contributor) previously approved these changes on Sep 20, 2023:

LGTM

@pbrezina (Member) left a comment:

Thanks, see minor comments inside.

@patriki01 (Contributor) commented:

Hi @sidecontrol,
It seems that you forgot to add the config marker in your original PR to the Testing Framework. Jakub has already added the marker. Could you please add it to the docs as well?
Thanks
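
Applied in a test, the config marker would look like any other custom pytest marker. A minimal hypothetical sketch; the test name and body are illustrative only:

    import pytest

    @pytest.mark.config
    def test_sssd_config_handling():
        """Hypothetical test exercising configuration handling."""
        ...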

@danlavu (Author) commented Sep 26, 2023

@patriki01 Done.

@jakub-vavra-cz (Contributor) left a comment:

LGTM

@pbrezina merged commit 8da78df into SSSD:master on Sep 27, 2023
2 checks passed
@danlavu deleted the markers branch on Apr 17, 2024