Add automated tests for demo code snippets #7
Conversation
README.md
@@ -198,13 +202,13 @@ malicious code.

 ```shell
 cd ../functionary_carl
-echo "something evil" >> demo-project/foo.py
+echo something evil >> demo-project/foo.py
 ```
 Carl thought that this is the sane code he got from Bob and
Probably shouldn't be in this PR but maybe okay to sneak in?
sane -> same
I actually did mean to write "sane", but since you're the second person (#3) to stumble across this, I'll change it.
Does "genuine" sound right?
Yes, I think genuine could work better than sane.
I also noticed that #5 adds …

Regarding …
Add a script that extracts the demo code snippets from README.md and runs them in a shell, raising `SystemExit` if the output is not as expected. This is an automated testing alternative to the existing `run_demo.py`, which replicates the commands from the demo instructions; the advantage is that the commands no longer have to be kept in sync. NOTE: The script requires the in-toto version specified in requirements.txt, i.e. 0.2.3 at the moment.
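The extract-and-run loop described above could be sketched roughly as follows. This is a simplified illustration, not the actual `run_demo_md.py`: the regex, the way expected outputs are supplied, and the error reporting are all assumptions.

```python
import re
import subprocess

# Match only fenced blocks tagged `shell`; blocks tagged `bash` are skipped.
SNIPPET_PATTERN = re.compile(r"```shell\n(.*?)```", re.DOTALL)


def run_snippets(markdown_text, expected_outputs):
    """Run each shell snippet and compare its stdout to the expected output.

    An expected output of None means "don't check this snippet".
    """
    snippets = SNIPPET_PATTERN.findall(markdown_text)
    for snippet, expected in zip(snippets, expected_outputs):
        result = subprocess.run(
            snippet, shell=True, capture_output=True, text=True)
        if expected is not None and result.stdout != expected:
            # Abort the test run on the first mismatch.
            raise SystemExit(
                f"Unexpected output for snippet:\n{snippet}\n"
                f"expected: {expected!r}\ngot: {result.stdout!r}")


if __name__ == "__main__":
    demo_md = "```shell\necho hello\n```\n"
    run_snippets(demo_md, ["hello\n"])
```

A real implementation would also need to preserve the working directory across snippets (the demo `cd`s between functionary directories), which a per-snippet subshell as above does not do.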
Adapt the demo instructions for use with the newly created script that extracts snippets from fenced code blocks and runs them. This commit marks two snippets for exclusion by specifying the snippet language, used for syntax highlighting, as `bash` (`run_demo_md.py` only extracts `shell` snippets). Plus some minor cleanup.
`tree` behaves (sorts) differently on different platforms, which is a problem when testing against an expected output.
Requires changes in the expected output of the automated demo run script.
This is necessary to get the same output in different environments, which in turn is necessary to compare it to hardcoded expected output.
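One way to sidestep `tree`'s platform-dependent sorting is to produce the listing with an explicit sort. This is a hypothetical replacement, not necessarily what the demo adopted:

```python
import os


def sorted_listing(root):
    """Return a deterministic, sorted list of all file paths under root."""
    paths = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Sort dirnames in place so os.walk descends in a fixed order
        # on every platform.
        dirnames.sort()
        for name in sorted(filenames):
            paths.append(os.path.relpath(os.path.join(dirpath, name), root))
    return paths
```

Because the result depends only on the file names, it can be compared against hardcoded expected output regardless of platform.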
These are treated differently with `set -x` on different systems and thus break the build, e.g. on Travis (Ubuntu): `+ echo something evil`, and locally: `+ echo 'something evil'`.
Kudos to @CameronLonsdale and @adityasaky for pointing it out.
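The quoting difference is visible in the `set -x` trace, which shells write to stderr: bash re-quotes arguments that contain spaces, while other `sh` implementations (e.g. dash, as used on Ubuntu) may print them unquoted. A small check of bash's behavior (assuming `bash` is on the PATH):

```python
import subprocess

# Run a quoted echo under `set -x`; the trace goes to stderr.
result = subprocess.run(
    ["bash", "-xc", 'echo "something evil"'],
    capture_output=True, text=True)

print("stdout:", result.stdout, end="")
print("trace :", result.stderr, end="")
# bash typically traces this as `+ echo 'something evil'`, whereas dash
# would print `+ echo "something evil"` differently -- dropping the quotes
# from the command itself makes the trace identical everywhere.
```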
This PR adds a script that extracts the demo code snippets from README.md and runs them in a shell, raising `SystemExit` if the output is not as expected. Comparing outputs requires some minor modifications in the used commands (to make the output cross-platform deterministic). The PR further …

In the future, automatic builds will also be triggered when a new version of in-toto is released (using dependabot).

In terms of testing, this is an alternative to the existing `run_demo.py` script (which just replicates the commands from README.md), with the advantage that the commands no longer have to be kept in sync.

If we decide to remove `run_demo.py`, we might want to add a simple clean command to remove files added during the demo, and consider live output (see the comment about `in_toto.process.run_duplicate_streams` in `run_demo_md.py`).

RELATED WORK: This is a rather custom solution for a problem that was also discussed in updateframework/python-tuf#808. A more generic tool would need to allow different languages, e.g. shell and Python (using `doctest`), and maybe more options to customize the test success/fail conditions.