I posted this on Slack, and @s0undt3ch suggested that it would be better to file an issue so it won't get lost.
The idea is to focus on adding low-to-medium-effort / high-impact tests that would help improve feature quality and parity. There are many things in Salt that can be invoked in multiple ways (or are pluggable) and are expected to work identically. Just a few examples:
Feature parity between different commands like `salt`, `salt-call`, `salt-ssh`, `salt-run salt.*`, and `salt-run ssh.cmd`. For example, `salt-ssh` is missing wrappers: #50196 (No way to debug templates over salt-ssh). What if some automated tool could run each module through all of these `salt-*` commands and report any inconsistencies or crashes?
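A very rough sketch of what such a tool could look like (this is not existing Salt test code; the target name `minion-1`, a working salt-ssh roster, and the exact JSON shapes are assumptions about the environment):

```python
# Hypothetical sketch: run the same module function through several Salt
# front ends and flag crashes or missing results. Names and JSON shapes
# below are assumptions, not something Salt ships today.
import json
import subprocess

import pytest

FRONTENDS = {
    "salt": ["salt", "--out=json", "minion-1", "test.ping"],
    "salt-call": ["salt-call", "--local", "--out=json", "test.ping"],
    "salt-ssh": ["salt-ssh", "--out=json", "minion-1", "test.ping"],
}


@pytest.mark.parametrize("frontend,cmd", sorted(FRONTENDS.items()))
def test_test_ping_parity(frontend, cmd):
    """Every front end should run test.ping without crashing and return True."""
    proc = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
    assert proc.returncode == 0, f"{frontend} failed: {proc.stderr}"
    result = json.loads(proc.stdout)
    # The output is keyed differently per front end (salt-call nests the
    # value under "local"), so only check that a True comes back somewhere.
    assert True in result.values(), f"{frontend} returned {result!r}"
```

A real version would iterate over all loadable modules and diff the outputs between front ends, not just check `test.ping`, but the shape would be the same.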
Another pylint plugin to detect states that do not respect `test=True` (this is one of the basic expectations about Salt!). Just a rough example: `comm -23 <(find salt/states -name '*.py' | sort -u) <(find salt/states -name '*.py' -exec grep -l -E '__opts__.*test' \{\} \; | sort -u)`
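The same idea expressed as a (hypothetical) pytest check instead of a shell pipeline, so it could run in CI. The path, the regex, and the lack of a whitelist for states that legitimately never branch on test mode are all simplifications:

```python
# Sketch of the comm one-liner above as a CI check: flag state modules under
# salt/states/ that never mention __opts__ and "test" on the same line.
# The regex mirrors `grep -E '__opts__.*test'`; a real check would need a
# whitelist for states that honour test mode indirectly.
import pathlib
import re

OPTS_TEST_RE = re.compile(r"__opts__.*test")
STATES_DIR = pathlib.Path("salt/states")


def test_states_reference_test_mode():
    offenders = [
        str(path)
        for path in sorted(STATES_DIR.rglob("*.py"))
        if not OPTS_TEST_RE.search(path.read_text(errors="ignore"))
    ]
    assert not offenders, (
        "state modules that never look at __opts__['test']: " + ", ".join(offenders)
    )
```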
These meta-tests (or conformance tests) could automatically check that all new code conforms to existing coding conventions/invariants/architectural decisions. Two examples: #52368, #51900
I'm sure there is a lot of institutional knowledge that could be transformed into these kinds of tests that will assist in PR reviews.