Bob Fahr edited this page Nov 14, 2018 · 19 revisions

Submitting Contributions to the Project

Before contributing, please review the contribution guide.

Before You Submit Your Pull Request

  • Ensure your commit message is useful and properly formatted.
  • Ideally your pull request should contain a single commit; squash all of your intermediate work-in-progress commits.
  • Include unit tests to cover both positive and negative cases utilizing reasonable examples of input data.
  • Run py.test to validate your code submission.
  • Run coverage tests to ensure that you are sufficiently testing your code:
    $ py.test --cov=insights
    
  • Run flake8 in the project root directory to check all files for PEP8 compliance. The .flake8 file in the project root configures the compliance checks that are performed.
  • Make sure your code is sufficiently documented, following the Sphinx Google style for documentation. Include:
    • example input data;
    • an Examples: section showing examples of use and the attribute data structures;
    • a clear description of how to use the code, assuming the user has some knowledge of the domain; and
    • a clear statement of what “empty” data (falsey data objects) represents.
  • Check spelling in your documentation and comments:
    $ pylint --disable all --enable spelling --spelling-dict en_US --spelling-private-dict-file $HOME/my.dict module_filename.py
    You may have to install pylint, python-enchant, and aspell-en in order to perform the spell check. You will also want to create your own dictionary ($HOME/my.dict) to add words like redhat that may not be in the default dictionary.
  • Check your documentation by running the generation script and verifying that the generated documentation looks correct and is accurate. Note: Python 2.6 is not supported for Sphinx documentation builds; please use Python 2.7 or greater.
    $ cd docs
    $ make clean html_debug
    $ cd _build/html
    $ firefox index.html
  • You may also check your module documentation with $ pydoc module_name and the PEP 257 checking tool described in this article.
  • If you are exposing an attribute of type defaultdict or any other data structure that would mutate as a side effect of accessing the object, consider converting to a different type, for example dict.
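As a sketch of the documentation style described above, a parser class might carry a Google-style docstring with sample input and an Examples: section. The class, file path, and input data here are hypothetical, not part of the insights API:

```python
SAMPLE_INPUT = "uptime: 3 days"


class UptimeInfo(object):
    """Parse the hypothetical ``/proc/sample_uptime`` file.

    Attributes:
        days (int): Number of days of uptime, or 0 if the data is empty.
            An empty (falsey) value means no uptime data was collected.

    Examples:
        >>> info = UptimeInfo(SAMPLE_INPUT)
        >>> info.days
        3
    """

    def __init__(self, content):
        self.days = 0
        for line in content.splitlines():
            if line.startswith("uptime:"):
                self.days = int(line.split()[1])
```

Pairing the example input with the Examples: section lets readers see both how to invoke the parser and what the resulting attribute data looks like.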
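The side effect mentioned in the last point above can be surprising: merely reading a missing key from a defaultdict inserts it. A minimal illustration of why converting to a plain dict before exposing the attribute is safer:

```python
from collections import defaultdict

counts = defaultdict(int)
counts["a"] += 1

# Reading a missing key mutates the defaultdict as a side effect...
_ = counts["typo"]
assert "typo" in counts  # the key now exists, with value 0

# ...so convert to a plain dict before exposing it as a public attribute.
safe = dict(counts)
try:
    safe["another_typo"]
except KeyError:
    print("plain dict raises instead of mutating")
```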

After Submission of Your Pull Request

Once you have submitted your pull request, a member of the project will review your submission. You may also add one or more of your team members as reviewers on your pull request. Continuous integration tests run automatically when the pull request is submitted. The following tests will run:

  1. py.test will be executed for the project.
  2. Documentation will be generated to check for any errors.
  3. flake8 will be executed to check for PEP8 compliance.

If any of these tests fail, you will receive a notification, and you will need to fix the problems before the pull request can be reviewed by the project team.

Review by the Project Team

One or more members of the project team will review your pull request and either provide feedback or merge it. If reviewers have commented on your pull request, you must resolve those comments before the request can be merged.

If you make subsequent commits to the same pull request branch, there is no need to squash them yourself; multiple commits may be squashed when your request is merged. Upon a successful merge, the topic branch will be removed.

Spec Changes

If you have made any changes or additions to the specs, you will need to perform some additional steps before your PR can be merged: submit a PR to the insights-frontend-assets project that modifies the uploader.json file for your spec changes. Follow these steps:

  1. Fork and clone https://github.com/redhatinsights/insights-frontend-assets.
  2. Check out the uploader_json branch.
  3. Make the appropriate change:
    1. If modifying an existing spec, find the spec in the JSON and modify the command/file.
    2. If adding a new spec, add a new JSON subdocument to the appropriate section. Make sure the subdocument has the following keys:
      • "file" for files, or "command" for commands;
      • "pattern": the set of filters, if any (must be a list); and
      • "symbolic_name": the symbolic name of the corresponding spec.
  4. Log in to a RHEL system with insights-client installed.
  5. Copy your modified uploader.json to /etc/insights-client/.cache.json.
  6. Run as root: # insights-client --offline --no-gpg.
  7. Optional: Manually inspect the archive to see if your file is present as expected.
  8. Copy the archive to a location accessible to an insights-core virtualenv.
  9. Run: $ python -m insights -p <your parser module> -- <path to archive>. The output should indicate your file/command was passed to your parser.
  10. Optional: Perform further inspection by analyzing parser output.
  11. Once validated, push your changes to a branch on your fork and open a pull request against the insights-frontend-assets/uploader_json branch.
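As a sketch, a new file spec subdocument with the keys listed in step 3 might look like the following; the path and symbolic name are invented for illustration, so check existing entries in uploader.json for the exact surrounding structure:

```python
import json

# Hypothetical new file spec; use "command" instead of "file" for command specs.
new_spec = {
    "file": "/etc/example/example.conf",
    "pattern": [],  # the set of filters, if any; must be a list
    "symbolic_name": "example_conf",
}
print(json.dumps(new_spec, indent=4, sort_keys=True))
```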

Checklist for Reviewers

  • Update the branch in GitHub to ensure it is up to date
  • Check out the pull request. See Resources [1]
  • Remove any pre-existing .pyc files from the project directory:
    $ find . -name '*.pyc' -delete
  • Run tests: py.test
  • Run coverage tests to determine test coverage for the changed code. Coverage should be 100%, but there may be good reasons why it is not. Note that 100% coverage does not ensure testing is sufficient, so be sure to review the tests.
    $ py.test --cov=insights
  • Generate the documentation and review the rendered HTML in a browser to check for problems.
    • Parsers and combiners should have headings in one of two forms:
      • For a module containing a single class that rules would use, the heading should be:
        '''
        ParserClass - (file ``/path/to/file``|command ``/path/to/command with arguments``)
        ==================================================================================
        '''
        Documentation for the class should be included in the module documentation.
      • For a module containing multiple classes that rules would use, the heading should be:
        '''
        Descriptive title for these parsers
        ===================================
        
        This module contains the following parsers:
        
        ParserClass - (file ``/path/to/file``|command ``/path/to/command with arguments``)
        ----------------------------------------------------------------------------------
        ParserClass - (file ``/path/to/file``|command ``/path/to/command with arguments``)
        ----------------------------------------------------------------------------------
        ParserClass - (file ``/path/to/file``|command ``/path/to/command with arguments``)
        ----------------------------------------------------------------------------------
        '''
        Documentation for each class should be included in the class documentation.
      • This ensures that they are listed consistently in the shared listings.
  • Review code changes/additions to ensure the code follows good coding practices, is maintainable, and performs the described functions.
  • Make sure only datasources, parsers, combiners, and associated bug fixes are accepted in PRs to master. No interface breaking changes are allowed as part of a regular release.
  • Review the test data and documentation to make sure no sensitive information, e.g. usernames, passwords, FQDNs, or public IP addresses, is included.
  • Review the commit history to determine if the pull request needs to be squashed during the merge.
  • Add comments to the merge request if required.
  • Merge the request when all comments have been resolved. Use the Squash and Merge option if the request includes multiple commits.

Resources

[1]: Consider using Hub for Git to make working with GitHub repos easier.