Pinocchio is a set of extensions to the nose unit testing framework for Python.
You can get the most recent version from pypi:
https://pypi.python.org/pypi/pinocchio/
You can install it via easy_install or pip:

easy_install pinocchio
pip install pinocchio
Pinocchio only works with nose 0.9a1 and above, and is compatible with both Python 2 and Python 3.
Contents
- Extensions
- License
- Author Information
Extensions

pinocchio.stopwatch

Sometimes your unit tests just seem to take forever. Well, now you can get rid of the slow ones automatically!
The pinocchio.stopwatch extension module lets you time the unit tests being run, and -- once times have been recorded -- then lets you select only those that run faster than a given amount of time. As a bonus, the test names and run times are stored in a simple format -- a pickled dictionary -- so you can target specific tests for speedup, too.
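Since the times file is just a pickled dictionary mapping test names to run times, you can inspect it directly. Here's a minimal sketch (assuming a previous --with-stopwatch run has already written the default '.nose-stopwatch-times' file):

import pickle

# Load the times recorded by a previous --with-stopwatch run.
with open('.nose-stopwatch-times', 'rb') as f:
    times = pickle.load(f)  # dict: test name -> run time in seconds

# Print the ten slowest tests -- prime candidates for speedup.
for name, seconds in sorted(times.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print('%8.3fs  %s' % (seconds, name))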
There's really only one use case here: your tests take too long to run, so you and your developers aren't running them very frequently. The stopwatch module lets you pick out the fast ones to be run via the command line; you can always run the slow ones in your continuous build system, right?
--with-stopwatch
enables test timing.
--stopwatch-file
changes the filename used to save the pickled test times from '.nose-stopwatch-times' to the specified file.
--faster-than
sets an upper time limit (in seconds); only tests previously recorded as faster than this limit will be run.
See examples/test_stopwatch.py for some examples; use

nosetests -w examples/ --with-stopwatch --faster-than 1 test_stopwatch.py

to run a subset of the tests. Note that you need to run it twice -- once to record the times (i.e. all tests will be run, independent of the --faster-than parameter) and again to select only the "fast" tests.
pinocchio.decorator

The attrib extension module for nose is a great way to select subsets of tests based on attributes you've given the test functions, classes, or methods. But what if you don't want to modify the source code to add the attributes?
The pinocchio.decorator extension module lets you provide the names of functions, classes, and methods to which to add tags. For example,
TestModule.test_function: a
TestModule.TestClass: b
TestModule.TestClass.test_method: c
would set attributes on a function, a class, and a method. Then
nosetests -a a
would run only the function,
nosetests -a b
would run all methods on the class, and
nosetests -a c
would run only the method.
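Putting that together, a matching test module might look like the following sketch (the names here are hypothetical, chosen to mirror the tag file above):

# TestModule.py -- hypothetical tests matching the tag file above.
def test_function():
    # selected by 'nosetests -a a' via the decorator file
    assert True

class TestClass:
    # tagged 'b', so 'nosetests -a b' runs all of its test methods
    def test_method(self):
        # additionally tagged 'c'
        assert True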
There are a couple of scenarios where this can come in handy:
- You're working on a bunch of tests that are failing, and you only want to execute the failing tests. If modifying the failing test code itself would be more work than simply listing them in a file, then use the decorator extension.
- You have a 'private' set of unit tests that bear on the code you're working on. You want to iterate quickly, just running those unit tests. Again, if modifying the test code itself is more work than simply listing the relevant tests in a file, use the decorator extension.
- You have a few unit tests that are failing, and you don't want to execute them (that is, the reverse of the first scenario).
--decorator-file
specifies the file containing the tags to use.
See examples/test_decorator.py for some examples; use examples/test_decorator.attribs as the decorator file. For example, try the following commands:

nosetests --decorator-file examples/test_decorator.attribs examples/test_decorator.py -a one
nosetests --decorator-file examples/test_decorator.attribs examples/test_decorator.py -a two
nosetests --decorator-file examples/test_decorator.attribs examples/test_decorator.py -a three
pinocchio.figleafsections

(You'll need to install figleaf to use this plugin; it will install the figleaf package and the annotate-sections script.)
This plugin lets you record code coverage per unit test, and then annotate your Python source code with which unit tests are running which lines of code. It's a useful way to figure out which nose tests are exercising what parts of your program.
See http://ivory.idyll.org/blog/feb-07/figleaf-goodness.html for some detailed examples.
To try it out, do this:
nosetests -w examples/ --with-figleafsections examples/test_sections.py
annotate-sections examples/test_sections.py
The output will be placed in the file examples/test_sections.py.sections.
pinocchio.outputsave

This plugin records the stdout from each test into a separate file, with a prefix indicating whether or not the test succeeded.
The main use case is when you have MANY failing tests and you want to take a look at the output without having to page through the nose error output linearly.
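For illustration, a test along these lines (a sketch, not the bundled example file) prints to stdout; with the plugin enabled, that output ends up in its own file:

# A sketch of a test whose stdout the plugin captures.
def test_prints_debug_info():
    print("intermediate state: %r" % ([1, 2, 3],))  # saved to success-<testname> on a pass
    assert sum([1, 2, 3]) == 6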
--with-outputsave
enables the plugin. Output from successful tests will be placed in success-<testname>, output from failed tests in fail-<testname>, and output from errors in error-<testname>.
--omit-success
does not save output from successful tests, i.e. only 'fail-' and 'error-' output files will be created.
--save-directory
places all saved output into the given directory, creating it if it does not exist.
Try:
nosetests -w examples/ --with-outputsave --save-directory=output examples/test_outputsave.py
Then look at the 'output' directory.
pinocchio.spec

spec lets you generate a "specification" similar to testdox. The spec plugin can generate simple documentation directly from the class and method names of test cases. For example, a test case like:
class TestFoobar:
    def test_is_a_singleton(self):
        pass

    def test_can_be_automatically_documented(self):
        pass
during the test run will generate the following specification:
Foobar
- is a singleton
- can be automatically documented
Test functions put directly into a module will have a context based on the name of the containing module. For example, if you define functions test_are_marked_as_deprecated() and test_doesnt_work_with_sets() in a module test_containers.py, you'll get the following specs:
Containers
- are marked as deprecated
- doesn't work with sets
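For reference, a minimal test_containers.py producing that output could look like this:

# test_containers.py -- the module name supplies the "Containers" context.
def test_are_marked_as_deprecated():
    pass

def test_doesnt_work_with_sets():
    pass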
If you follow a good naming convention for your tests, you'll get a free, up-to-date specification of your application -- it will be as accurate as your tests are.
--with-spec
enables the plugin and automatically sets nose's verbosity level to "detailed output". During the test run, all your test descriptions are shown as a special kind of specification: your test classes set up a context and your test methods provide the individual specifications.
--spec-color
enables colored output. Successful tests are marked in green, failures and errors in red, and skipped and deprecated test cases in yellow. You need an ANSI terminal to use this.
--spec-doctests
enables experimental support for doctests.
--spec-file=SPEC_FILE
outputs the specification to a separate file instead of the default nose stream. When this option is used, the nose reporter is not replaced, so error details will still go to stderr.
Try:
nosetests --with-spec --spec-color examples/test_spec.py
(Yes, you should see an error.)
Look at the examples/test_spec.py source code and the tests inside the tests/spec_test_cases/ directory to see how test cases are mapped into specifications.
License

pinocchio is available under the MIT license.
Author Information

The author of the stopwatch, decorator, figleafsections, and outputsave extensions is Titus Brown. You can contact him at [email protected], or check out his main site at http://ivory.idyll.org/.
The author of the spec plugin is Michal Kwiatkowski. His homepage is at http://joker.linuxstuff.pl/ and his mail address is [email protected].