
pygmt.test() failed due to matplotlib.testing.exceptions.ImageComparisonFailure #315

Closed
holishing opened this issue Jul 12, 2019 · 4 comments


holishing commented Jul 12, 2019

Description of the problem

Full code that generated the error

import pygmt
pygmt.test()
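As an alternative to `pygmt.test()`, the suite can also be driven through pytest directly, which makes it easier to rerun a single failing test with full verbosity. This is a hedged sketch (it assumes pytest and pytest-mpl are installed alongside pygmt, and the `-k` pattern names one of the failing tests from the log below):

```python
# Run the installed pygmt test suite via pytest instead of pygmt.test().
# Guarded so this is a no-op in environments where pygmt is not installed.
import importlib.util

# Select a single failing image-comparison test from the report below.
args = ["--pyargs", "pygmt", "-v", "-k", "test_basemap_polar"]

if importlib.util.find_spec("pygmt") is not None:
    import pytest  # assumed available in the same environment as pygmt
    pytest.main(args)
```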

Full error message
https://gist.github.com/holishing/07537de8f8fb313a9d6302a12dbbbe45

error message
Loaded libgmt:
  binary dir: /home/rau/Tools/pygmt/bin
  cores: 4
  grid layout: rows
  library path: /usr/lib/x86_64-linux-gnu/libgmt.so
  padding: 2
  plugin dir: /usr/lib/x86_64-linux-gnu/gmt/plugins
  share dir: /usr/share/gmt
  version: 6.0.0rc2
============================= test session starts ==============================
platform linux -- Python 3.7.3, pytest-5.0.1, py-1.8.0, pluggy-0.12.0 -- /home/rau/Tools/pygmt/bin/python
cachedir: .pytest_cache
Matplotlib: 3.1.1
Freetype: 2.6.1
rootdir: /home/rau/Tools/pygmt
plugins: mpl-0.10, cov-2.7.1
collecting ... collected 159 items

lib/python3.7/site-packages/pygmt/base_plotting.py::pygmt.base_plotting.BasePlotting._preprocess PASSED [ 0%]
lib/python3.7/site-packages/pygmt/figure.py::pygmt.figure.Figure PASSED [ 1%]
lib/python3.7/site-packages/pygmt/clib/conversion.py::pygmt.clib.conversion._as_array PASSED [ 1%]
lib/python3.7/site-packages/pygmt/clib/conversion.py::pygmt.clib.conversion.as_c_contiguous PASSED [ 2%]
lib/python3.7/site-packages/pygmt/clib/conversion.py::pygmt.clib.conversion.dataarray_to_matrix PASSED [ 3%]
lib/python3.7/site-packages/pygmt/clib/conversion.py::pygmt.clib.conversion.kwargs_to_ctypes_array PASSED [ 3%]
lib/python3.7/site-packages/pygmt/clib/conversion.py::pygmt.clib.conversion.vectors_to_arrays PASSED [ 4%]
lib/python3.7/site-packages/pygmt/clib/session.py::pygmt.clib.session.Session PASSED [ 5%]
lib/python3.7/site-packages/pygmt/clib/session.py::pygmt.clib.session.Session._check_dtype_and_dim PASSED [ 5%]
lib/python3.7/site-packages/pygmt/clib/session.py::pygmt.clib.session.Session.extract_region PASSED [ 6%]
lib/python3.7/site-packages/pygmt/clib/session.py::pygmt.clib.session.Session.get_libgmt_func PASSED [ 6%]
lib/python3.7/site-packages/pygmt/clib/session.py::pygmt.clib.session.Session.open_virtual_file PASSED [ 7%]
lib/python3.7/site-packages/pygmt/clib/session.py::pygmt.clib.session.Session.virtualfile_from_grid PASSED [ 8%]
lib/python3.7/site-packages/pygmt/clib/session.py::pygmt.clib.session.Session.virtualfile_from_matrix PASSED [ 8%]
lib/python3.7/site-packages/pygmt/clib/session.py::pygmt.clib.session.Session.virtualfile_from_vectors PASSED [ 9%]
lib/python3.7/site-packages/pygmt/datasets/earth_relief.py::pygmt.datasets.earth_relief._is_valid_resolution PASSED [ 10%]
lib/python3.7/site-packages/pygmt/datasets/earth_relief.py::pygmt.datasets.earth_relief._shape_from_resolution PASSED [ 10%]
lib/python3.7/site-packages/pygmt/helpers/decorators.py::pygmt.helpers.decorators.fmt_docstring PASSED [ 11%]
lib/python3.7/site-packages/pygmt/helpers/decorators.py::pygmt.helpers.decorators.kwargs_to_strings PASSED [ 11%]
lib/python3.7/site-packages/pygmt/helpers/decorators.py::pygmt.helpers.decorators.use_alias PASSED [ 12%]
lib/python3.7/site-packages/pygmt/helpers/tempfile.py::pygmt.helpers.tempfile.GMTTempFile PASSED [ 13%]
lib/python3.7/site-packages/pygmt/helpers/utils.py::pygmt.helpers.utils.build_arg_string PASSED [ 13%]
lib/python3.7/site-packages/pygmt/helpers/utils.py::pygmt.helpers.utils.data_kind PASSED [ 14%]
lib/python3.7/site-packages/pygmt/helpers/utils.py::pygmt.helpers.utils.dummy_context PASSED [ 15%]
lib/python3.7/site-packages/pygmt/helpers/utils.py::pygmt.helpers.utils.is_nonstr_iter PASSED [ 15%]
lib/python3.7/site-packages/pygmt/tests/test_basemap.py::test_basemap_required_args PASSED [ 16%]
lib/python3.7/site-packages/pygmt/tests/test_basemap.py::test_basemap PASSED [ 16%]
lib/python3.7/site-packages/pygmt/tests/test_basemap.py::test_basemap_list_region PASSED [ 17%]
lib/python3.7/site-packages/pygmt/tests/test_basemap.py::test_basemap_loglog PASSED [ 18%]
lib/python3.7/site-packages/pygmt/tests/test_basemap.py::test_basemap_power_axis FAILED [ 18%]
lib/python3.7/site-packages/pygmt/tests/test_basemap.py::test_basemap_polar FAILED [ 19%]
lib/python3.7/site-packages/pygmt/tests/test_basemap.py::test_basemap_winkel_tripel FAILED [ 20%]
lib/python3.7/site-packages/pygmt/tests/test_basemap.py::test_basemap_aliases FAILED [ 20%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_load_libgmt PASSED [ 21%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_load_libgmt_fail PASSED [ 22%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_get_clib_path PASSED [ 22%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_check_libgmt PASSED [ 23%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_clib_name PASSED [ 23%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_getitem PASSED [ 24%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_create_destroy_session PASSED [ 25%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_create_session_fails PASSED [ 25%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_destroy_session_fails PASSED [ 26%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_call_module PASSED [ 27%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_call_module_invalid_arguments PASSED [ 27%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_call_module_invalid_name PASSED [ 28%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_call_module_error_message PASSED [ 28%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_method_no_session PASSED [ 29%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_parse_constant_single PASSED [ 30%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_parse_constant_composite PASSED [ 30%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_parse_constant_fails PASSED [ 31%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_create_data_dataset PASSED [ 32%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_create_data_grid_dim PASSED [ 32%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_create_data_grid_range PASSED [ 33%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_create_data_fails PASSED [ 33%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_put_vector PASSED [ 34%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_put_vector_invalid_dtype PASSED [ 35%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_put_vector_wrong_column PASSED [ 35%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_put_vector_2d_fails PASSED [ 36%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_put_matrix PASSED [ 37%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_put_matrix_fails PASSED [ 37%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_put_matrix_grid PASSED [ 38%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_virtual_file PASSED [ 38%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_virtual_file_fails PASSED [ 39%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_virtual_file_bad_direction PASSED [ 40%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_virtualfile_from_vectors PASSED [ 40%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_virtualfile_from_vectors_transpose PASSED [ 41%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_virtualfile_from_vectors_diff_size PASSED [ 42%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_virtualfile_from_matrix PASSED [ 42%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_virtualfile_from_matrix_slice PASSED [ 43%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_virtualfile_from_vectors_pandas PASSED [ 44%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_virtualfile_from_vectors_arraylike PASSED [ 44%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_extract_region_fails PASSED [ 45%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_extract_region_two_figures PASSED [ 45%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_write_data_fails PASSED [ 46%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_dataarray_to_matrix_dims_fails PASSED [ 47%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_dataarray_to_matrix_inc_fails PASSED [ 47%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_get_default PASSED [ 48%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_get_default_fails PASSED [ 49%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_info_dict PASSED [ 49%]
lib/python3.7/site-packages/pygmt/tests/test_clib.py::test_fails_for_wrong_version PASSED [ 50%]
lib/python3.7/site-packages/pygmt/tests/test_coast.py::test_coast FAILED [ 50%]
lib/python3.7/site-packages/pygmt/tests/test_coast.py::test_coast_iceland FAILED [ 51%]
lib/python3.7/site-packages/pygmt/tests/test_coast.py::test_coast_aliases FAILED [ 52%]
lib/python3.7/site-packages/pygmt/tests/test_coast.py::test_coast_world_mercator FAILED [ 52%]
lib/python3.7/site-packages/pygmt/tests/test_contour.py::test_contour_fail_no_data PASSED [ 53%]
lib/python3.7/site-packages/pygmt/tests/test_contour.py::test_contour_vec FAILED [ 54%]
lib/python3.7/site-packages/pygmt/tests/test_contour.py::test_contour_matrix FAILED [ 54%]
lib/python3.7/site-packages/pygmt/tests/test_contour.py::test_contour_from_file FAILED [ 55%]
lib/python3.7/site-packages/pygmt/tests/test_datasets.py::test_japan_quakes PASSED [ 55%]
lib/python3.7/site-packages/pygmt/tests/test_datasets.py::test_sample_bathymetry PASSED [ 56%]
lib/python3.7/site-packages/pygmt/tests/test_datasets.py::test_usgs_quakes PASSED [ 57%]
lib/python3.7/site-packages/pygmt/tests/test_datasets.py::test_earth_relief_fails PASSED [ 57%]
lib/python3.7/site-packages/pygmt/tests/test_datasets.py::test_earth_relief_60 PASSED [ 58%]
lib/python3.7/site-packages/pygmt/tests/test_datasets.py::test_earth_relief_30 PASSED [ 59%]
lib/python3.7/site-packages/pygmt/tests/test_figure.py::test_figure_region PASSED [ 59%]
lib/python3.7/site-packages/pygmt/tests/test_figure.py::test_figure_region_multiple PASSED [ 60%]
lib/python3.7/site-packages/pygmt/tests/test_figure.py::test_figure_region_country_codes PASSED [ 61%]
lib/python3.7/site-packages/pygmt/tests/test_figure.py::test_figure_savefig_exists PASSED [ 61%]
lib/python3.7/site-packages/pygmt/tests/test_figure.py::test_figure_savefig_transparent PASSED [ 62%]
lib/python3.7/site-packages/pygmt/tests/test_figure.py::test_figure_savefig PASSED [ 62%]
lib/python3.7/site-packages/pygmt/tests/test_figure.py::test_figure_show PASSED [ 63%]
lib/python3.7/site-packages/pygmt/tests/test_figure.py::test_shift_origin PASSED [ 64%]
lib/python3.7/site-packages/pygmt/tests/test_grdcontour.py::test_grdcontour FAILED [ 64%]
lib/python3.7/site-packages/pygmt/tests/test_grdcontour.py::test_grdcontour_labels FAILED [ 65%]
lib/python3.7/site-packages/pygmt/tests/test_grdcontour.py::test_grdcontour_slice FAILED [ 66%]
lib/python3.7/site-packages/pygmt/tests/test_grdcontour.py::test_grdcontour_file PASSED [ 66%]
lib/python3.7/site-packages/pygmt/tests/test_grdcontour.py::test_grdcontour_interval_file_full_opts FAILED [ 67%]
lib/python3.7/site-packages/pygmt/tests/test_grdcontour.py::test_grdcontour_fails PASSED [ 67%]
lib/python3.7/site-packages/pygmt/tests/test_grdimage.py::test_grdimage PASSED [ 68%]
lib/python3.7/site-packages/pygmt/tests/test_grdimage.py::test_grdimage_slice PASSED [ 69%]
lib/python3.7/site-packages/pygmt/tests/test_grdimage.py::test_grdimage_file PASSED [ 69%]
lib/python3.7/site-packages/pygmt/tests/test_grdimage.py::test_grdimage_fails PASSED [ 70%]
lib/python3.7/site-packages/pygmt/tests/test_helpers.py::test_unique_name PASSED [ 71%]
lib/python3.7/site-packages/pygmt/tests/test_helpers.py::test_kwargs_to_strings_fails PASSED [ 71%]
lib/python3.7/site-packages/pygmt/tests/test_helpers.py::test_kwargs_to_strings_no_bools PASSED [ 72%]
lib/python3.7/site-packages/pygmt/tests/test_helpers.py::test_gmttempfile PASSED [ 72%]
lib/python3.7/site-packages/pygmt/tests/test_helpers.py::test_gmttempfile_unique PASSED [ 73%]
lib/python3.7/site-packages/pygmt/tests/test_helpers.py::test_gmttempfile_prefix_suffix PASSED [ 74%]
lib/python3.7/site-packages/pygmt/tests/test_helpers.py::test_gmttempfile_read PASSED [ 74%]
lib/python3.7/site-packages/pygmt/tests/test_image.py::test_image PASSED [ 75%]
lib/python3.7/site-packages/pygmt/tests/test_info.py::test_info PASSED [ 76%]
lib/python3.7/site-packages/pygmt/tests/test_info.py::test_info_c PASSED [ 76%]
lib/python3.7/site-packages/pygmt/tests/test_info.py::test_info_i PASSED [ 77%]
lib/python3.7/site-packages/pygmt/tests/test_info.py::test_info_c_i PASSED [ 77%]
lib/python3.7/site-packages/pygmt/tests/test_info.py::test_info_t PASSED [ 78%]
lib/python3.7/site-packages/pygmt/tests/test_info.py::test_info_fails PASSED [ 79%]
lib/python3.7/site-packages/pygmt/tests/test_info.py::test_grdinfo PASSED [ 79%]
lib/python3.7/site-packages/pygmt/tests/test_info.py::test_grdinfo_file PASSED [ 80%]
lib/python3.7/site-packages/pygmt/tests/test_info.py::test_grdinfo_fails PASSED [ 81%]
lib/python3.7/site-packages/pygmt/tests/test_logo.py::test_logo FAILED [ 81%]
lib/python3.7/site-packages/pygmt/tests/test_logo.py::test_logo_on_a_map FAILED [ 82%]
lib/python3.7/site-packages/pygmt/tests/test_logo.py::test_logo_fails PASSED [ 83%]
lib/python3.7/site-packages/pygmt/tests/test_plot.py::test_plot_red_circles PASSED [ 83%]
lib/python3.7/site-packages/pygmt/tests/test_plot.py::test_plot_fail_no_data PASSED [ 84%]
lib/python3.7/site-packages/pygmt/tests/test_plot.py::test_plot_fail_size_color PASSED [ 84%]
lib/python3.7/site-packages/pygmt/tests/test_plot.py::test_plot_projection FAILED [ 85%]
lib/python3.7/site-packages/pygmt/tests/test_plot.py::test_plot_colors FAILED [ 86%]
lib/python3.7/site-packages/pygmt/tests/test_plot.py::test_plot_sizes PASSED [ 86%]
lib/python3.7/site-packages/pygmt/tests/test_plot.py::test_plot_colors_sizes PASSED [ 87%]
lib/python3.7/site-packages/pygmt/tests/test_plot.py::test_plot_colors_sizes_proj FAILED [ 88%]
lib/python3.7/site-packages/pygmt/tests/test_plot.py::test_plot_matrix FAILED [ 88%]
lib/python3.7/site-packages/pygmt/tests/test_plot.py::test_plot_matrix_color PASSED [ 89%]
lib/python3.7/site-packages/pygmt/tests/test_plot.py::test_plot_from_file PASSED [ 89%]
lib/python3.7/site-packages/pygmt/tests/test_plot.py::test_plot_vectors PASSED [ 90%]
lib/python3.7/site-packages/pygmt/tests/test_psconvert.py::test_psconvert PASSED [ 91%]
lib/python3.7/site-packages/pygmt/tests/test_psconvert.py::test_psconvert_twice PASSED [ 91%]
lib/python3.7/site-packages/pygmt/tests/test_psconvert.py::test_psconvert_int_options PASSED [ 92%]
lib/python3.7/site-packages/pygmt/tests/test_psconvert.py::test_psconvert_aliases PASSED [ 93%]
lib/python3.7/site-packages/pygmt/tests/test_session_management.py::test_begin_end PASSED [ 93%]
lib/python3.7/site-packages/pygmt/tests/test_sphinx_gallery.py::test_pygmtscraper PASSED [ 94%]
lib/python3.7/site-packages/pygmt/tests/test_surface.py::test_surface_input_file PASSED [ 94%]
lib/python3.7/site-packages/pygmt/tests/test_surface.py::test_surface_input_data_array PASSED [ 95%]
lib/python3.7/site-packages/pygmt/tests/test_surface.py::test_surface_input_xyz PASSED [ 96%]
lib/python3.7/site-packages/pygmt/tests/test_surface.py::test_surface_input_xy_no_z PASSED [ 96%]
lib/python3.7/site-packages/pygmt/tests/test_surface.py::test_surface_wrong_kind_of_input PASSED [ 97%]
lib/python3.7/site-packages/pygmt/tests/test_surface.py::test_surface_with_outfile_param PASSED [ 98%]
lib/python3.7/site-packages/pygmt/tests/test_surface.py::test_surface_short_aliases PASSED [ 98%]
lib/python3.7/site-packages/pygmt/tests/test_which.py::test_which PASSED [ 99%]
lib/python3.7/site-packages/pygmt/tests/test_which.py::test_which_fails PASSED [100%]

=================================== FAILURES ===================================
___________________________ test_basemap_power_axis ____________________________
Error: Image files did not match.
RMS Value: 35.671429514168814
Expected:
/tmp/tmp9b0v1gcd/baseline-test_basemap_power_axis.png
Actual:
/tmp/tmp9b0v1gcd/test_basemap_power_axis.png
Difference:
/tmp/tmp9b0v1gcd/test_basemap_power_axis-failed-diff.png
Tolerance:
2
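The "RMS Value" reported above is the root-mean-square of the per-pixel differences between the baseline and generated images; the test fails when it exceeds the tolerance (2 here, against an RMS of ~35.7). A minimal NumPy sketch of that computation (a reimplementation for illustration, not matplotlib's actual code, with made-up pixel values):

```python
import numpy as np

def rms_difference(expected, actual):
    """Root-mean-square of per-pixel differences, in the spirit of
    matplotlib.testing.compare.calculate_rms (sketch only)."""
    diff = expected.astype(np.float64) - actual.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

# Two tiny fake "images" of identical shape that differ by 4 everywhere.
expected = np.zeros((2, 2, 3), dtype=np.int16)
actual = np.full((2, 2, 3), 4, dtype=np.int16)

rms = rms_difference(expected, actual)
print(rms)  # 4.0 -- would fail a comparison run with tolerance 2
```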
______________________________ test_basemap_polar ______________________________

args = (), kwargs = {}
baseline_dir = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x7f8c7ccff898>
filename = 'test_basemap_polar.png', result_dir = '/tmp/tmpp6x_r0m2'
test_image = '/tmp/tmpp6x_r0m2/test_basemap_polar.png'
baseline_image_ref = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline/test_basemap_polar.png'
baseline_image = '/tmp/tmpp6x_r0m2/baseline-test_basemap_polar.png'

@wraps(item.function)
def item_function_wrapper(*args, **kwargs):

    baseline_dir = compare.kwargs.get('baseline_dir', None)
    if baseline_dir is None:
        if self.baseline_dir is None:
            baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), 'baseline')
        else:
            baseline_dir = self.baseline_dir
        baseline_remote = False
    else:
        baseline_remote = baseline_dir.startswith(('http://', 'https://'))
        if not baseline_remote:
            baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir)

    with plt.style.context(style, after_reset=True), switch_backend(backend):

        # Run test and get figure object
        if inspect.ismethod(original):  # method
            # In some cases, for example if setup_method is used,
            # original appears to belong to an instance of the test
            # class that is not the same as args[0], and args[0] is the
            # one that has the correct attributes set up from setup_method
            # so we ignore original.__self__ and use args[0] instead.
            fig = original.__func__(*args, **kwargs)
        else:  # function
            fig = original(*args, **kwargs)

        if remove_text:
            remove_ticks_and_titles(fig)

        # Find test name to use as plot name
        filename = compare.kwargs.get('filename', None)
        if filename is None:
            filename = item.name + '.png'
            filename = filename.replace('[', '_').replace(']', '_')
            filename = filename.replace('/', '_')
            filename = filename.replace('_.png', '.png')

        # What we do now depends on whether we are generating the
        # reference images or simply running the test.
        if self.generate_dir is None:

            # Save the figure
            result_dir = tempfile.mkdtemp(dir=self.results_dir)
            test_image = os.path.abspath(os.path.join(result_dir, filename))

            fig.savefig(test_image, **savefig_kwargs)
            close_mpl_figure(fig)

            # Find path to baseline image
            if baseline_remote:
                baseline_image_ref = _download_file(baseline_dir, filename)
            else:
                baseline_image_ref = os.path.abspath(os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir, filename))

            if not os.path.exists(baseline_image_ref):
                pytest.fail("Image file not found for comparison test in: "
                            "\n\t{baseline_dir}"
                            "\n(This is expected for new tests.)\nGenerated Image: "
                            "\n\t{test}".format(baseline_dir=baseline_dir, test=test_image), pytrace=False)

            # distutils may put the baseline images in non-accessible places,
            # copy to our tmpdir to be sure to keep them in case of failure
            baseline_image = os.path.abspath(os.path.join(result_dir, 'baseline-' + filename))
            shutil.copyfile(baseline_image_ref, baseline_image)
          msg = compare_images(baseline_image, test_image, tol=tolerance)

lib/python3.7/site-packages/pytest_mpl/plugin.py:275:


lib/python3.7/site-packages/matplotlib/testing/compare.py:458: in compare_images
rms = calculate_rms(expected_image, actual_image)


expected_image = array([[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[255, 255, 255],
...[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]]], dtype=int16)
actual_image = array([[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[255, 255, 255],
...[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]]], dtype=int16)

def calculate_rms(expected_image, actual_image):
    "Calculate the per-pixel errors, then compute the root mean square error."
    if expected_image.shape != actual_image.shape:
        raise ImageComparisonFailure(
            "Image sizes do not match expected size: {} "
          "actual size {}".format(expected_image.shape, actual_image.shape))

E matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (1822, 1961, 3) actual size (1821, 1958, 3)

lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
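Note that this failure is raised before any pixel values are compared: the baseline and generated PNGs have different dimensions, (1822, 1961, 3) vs (1821, 1958, 3), which typically points to a different rendering stack (e.g. another GMT or Ghostscript version) producing slightly different page sizes. A sketch of the shape guard that triggers it (a hypothetical reimplementation with a stand-in exception class, not matplotlib's actual code):

```python
import numpy as np

class ImageComparisonFailure(Exception):
    """Stand-in for matplotlib.testing.exceptions.ImageComparisonFailure."""

def check_shapes(expected, actual):
    # Mirrors the guard at the top of calculate_rms: a size mismatch
    # fails the comparison outright, before any RMS is computed.
    if expected.shape != actual.shape:
        raise ImageComparisonFailure(
            "Image sizes do not match expected size: {} "
            "actual size {}".format(expected.shape, actual.shape))

# Shapes taken from the test_basemap_polar failure above.
baseline = np.zeros((1822, 1961, 3), dtype=np.int16)
result = np.zeros((1821, 1958, 3), dtype=np.int16)

try:
    check_shapes(baseline, result)
except ImageComparisonFailure as err:
    print(err)
```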
__________________________ test_basemap_winkel_tripel __________________________

args = (), kwargs = {}
baseline_dir = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x7f8c7c82aa58>
filename = 'test_basemap_winkel_tripel.png', result_dir = '/tmp/tmprjs3spdz'
test_image = '/tmp/tmprjs3spdz/test_basemap_winkel_tripel.png'
baseline_image_ref = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline/test_basemap_winkel_tripel.png'
baseline_image = '/tmp/tmprjs3spdz/baseline-test_basemap_winkel_tripel.png'

[traceback identical to test_basemap_polar above]
E matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (1961, 3120, 3) actual size (1958, 3128, 3)

lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
_____________________________ test_basemap_aliases _____________________________

args = (), kwargs = {}
baseline_dir = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x7f8c7c82bf98>
filename = 'test_basemap_aliases.png', result_dir = '/tmp/tmp1n4kmr_c'
test_image = '/tmp/tmp1n4kmr_c/test_basemap_aliases.png'
baseline_image_ref = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline/test_basemap_aliases.png'
baseline_image = '/tmp/tmp1n4kmr_c/baseline-test_basemap_aliases.png'

[traceback identical to test_basemap_polar above]
E matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (1057, 2267, 3) actual size (1057, 2275, 3)

lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
__________________________________ test_coast __________________________________

args = (), kwargs = {}
baseline_dir = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x7f8c7ccec828>
filename = 'test_coast.png', result_dir = '/tmp/tmpbjxega_2'
test_image = '/tmp/tmpbjxega_2/test_coast.png'
baseline_image_ref = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline/test_coast.png'
baseline_image = '/tmp/tmpbjxega_2/baseline-test_coast.png'

[traceback identical to test_basemap_polar above; remainder of log truncated]
            if baseline_remote:
                baseline_image_ref = _download_file(baseline_dir, filename)
            else:
                baseline_image_ref = os.path.abspath(os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir, filename))

            if not os.path.exists(baseline_image_ref):
                pytest.fail("Image file not found for comparison test in: "
                            "\n\t{baseline_dir}"
                            "\n(This is expected for new tests.)\nGenerated Image: "
                            "\n\t{test}".format(baseline_dir=baseline_dir, test=test_image), pytrace=False)

            # distutils may put the baseline images in non-accessible places,
            # copy to our tmpdir to be sure to keep them in case of failure
            baseline_image = os.path.abspath(os.path.join(result_dir, 'baseline-' + filename))
            shutil.copyfile(baseline_image_ref, baseline_image)
          msg = compare_images(baseline_image, test_image, tol=tolerance)

lib/python3.7/site-packages/pytest_mpl/plugin.py:275:


lib/python3.7/site-packages/matplotlib/testing/compare.py:458: in compare_images
rms = calculate_rms(expected_image, actual_image)


expected_image = array([[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[255, 255, 255],
...[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]]], dtype=int16)
actual_image = array([[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[255, 255, 255],
...[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]]], dtype=int16)

def calculate_rms(expected_image, actual_image):
    "Calculate the per-pixel errors, then compute the root mean square error."
    if expected_image.shape != actual_image.shape:
        raise ImageComparisonFailure(
            "Image sizes do not match expected size: {} "
          "actual size {}".format(expected_image.shape, actual_image.shape))

E matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (2768, 2081, 3) actual size (2765, 2089, 3)

lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
______________________________ test_coast_iceland ______________________________

args = (), kwargs = {}
baseline_dir = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x7f8c7cd5e198>
filename = 'test_coast_iceland.png', result_dir = '/tmp/tmpz2hnhqg6'
test_image = '/tmp/tmpz2hnhqg6/test_coast_iceland.png'
baseline_image_ref = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline/test_coast_iceland.png'
baseline_image = '/tmp/tmpz2hnhqg6/baseline-test_coast_iceland.png'

    [... wrapper source, truncated image arrays, and calculate_rms source identical to the test_coast traceback above ...]

E matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (1441, 2585, 3) actual size (1438, 2593, 3)

lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
______________________________ test_coast_aliases ______________________________

args = (), kwargs = {}
baseline_dir = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x7f8c7c82b550>
filename = 'test_coast_aliases.png', result_dir = '/tmp/tmpameqrse_'
test_image = '/tmp/tmpameqrse_/test_coast_aliases.png'
baseline_image_ref = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline/test_coast_aliases.png'
baseline_image = '/tmp/tmpameqrse_/baseline-test_coast_aliases.png'

    [... wrapper source, truncated image arrays, and calculate_rms source identical to the test_coast traceback above ...]

E matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (2768, 2081, 3) actual size (2765, 2089, 3)

lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
__________________________ test_coast_world_mercator ___________________________

args = (), kwargs = {}
baseline_dir = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x7f8c7c7bebe0>
filename = 'test_coast_world_mercator.png', result_dir = '/tmp/tmpc_luw7x_'
test_image = '/tmp/tmpc_luw7x_/test_coast_world_mercator.png'
baseline_image_ref = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline/test_coast_world_mercator.png'
baseline_image = '/tmp/tmpc_luw7x_/baseline-test_coast_world_mercator.png'

    [... wrapper source, truncated image arrays, and calculate_rms source identical to the test_coast traceback above ...]

E matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (2480, 3281, 3) actual size (2477, 3289, 3)

lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
_______________________________ test_contour_vec _______________________________
Error: Image files did not match.
RMS Value: 20.626215523959708
Expected:
/tmp/tmpqkzi2u4k/baseline-test_contour_vec.png
Actual:
/tmp/tmpqkzi2u4k/test_contour_vec.png
Difference:
/tmp/tmpqkzi2u4k/test_contour_vec-failed-diff.png
Tolerance:
2
_____________________________ test_contour_matrix ______________________________
Error: Image files did not match.
RMS Value: 24.87899400349341
Expected:
/tmp/tmprfynr4hj/baseline-test_contour_matrix.png
Actual:
/tmp/tmprfynr4hj/test_contour_matrix.png
Difference:
/tmp/tmprfynr4hj/test_contour_matrix-failed-diff.png
Tolerance:
2
____________________________ test_contour_from_file ____________________________
Error: Image files did not match.
RMS Value: 6.095108676368807
Expected:
/tmp/tmppi6oun6_/baseline-test_contour_from_file.png
Actual:
/tmp/tmppi6oun6_/test_contour_from_file.png
Difference:
/tmp/tmppi6oun6_/test_contour_from_file-failed-diff.png
Tolerance:
2
_______________________________ test_grdcontour ________________________________
Error: Image files did not match.
RMS Value: 4.004358658997154
Expected:
/tmp/tmpfreaysdx/baseline-test_grdcontour.png
Actual:
/tmp/tmpfreaysdx/test_grdcontour.png
Difference:
/tmp/tmpfreaysdx/test_grdcontour-failed-diff.png
Tolerance:
2
____________________________ test_grdcontour_labels ____________________________
Error: Image files did not match.
RMS Value: 25.29289935666731
Expected:
/tmp/tmp059qv41o/baseline-test_grdcontour_labels.png
Actual:
/tmp/tmp059qv41o/test_grdcontour_labels.png
Difference:
/tmp/tmp059qv41o/test_grdcontour_labels-failed-diff.png
Tolerance:
2
____________________________ test_grdcontour_slice _____________________________
Error: Image files did not match.
RMS Value: 2.28545667589373
Expected:
/tmp/tmph_nen23w/baseline-test_grdcontour_slice.png
Actual:
/tmp/tmph_nen23w/test_grdcontour_slice.png
Difference:
/tmp/tmph_nen23w/test_grdcontour_slice-failed-diff.png
Tolerance:
2
___________________ test_grdcontour_interval_file_full_opts ____________________
Error: Image files did not match.
RMS Value: 12.619357010431845
Expected:
/tmp/tmploh5cjoo/baseline-test_grdcontour_interval_file_full_opts.png
Actual:
/tmp/tmploh5cjoo/test_grdcontour_interval_file_full_opts.png
Difference:
/tmp/tmploh5cjoo/test_grdcontour_interval_file_full_opts-failed-diff.png
Tolerance:
2
__________________________________ test_logo ___________________________________

args = (), kwargs = {}
baseline_dir = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x7f8c7c829b00>
filename = 'test_logo.png', result_dir = '/tmp/tmpaq6uehuf'
test_image = '/tmp/tmpaq6uehuf/test_logo.png'
baseline_image_ref = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline/test_logo.png'
baseline_image = '/tmp/tmpaq6uehuf/baseline-test_logo.png'

    [... wrapper source, truncated image arrays, and calculate_rms source identical to the test_coast traceback above ...]

E matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (304, 601, 3) actual size (304, 600, 3)

lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
______________________________ test_logo_on_a_map ______________________________

args = (), kwargs = {}
baseline_dir = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x7f8c7c854550>
filename = 'test_logo_on_a_map.png', result_dir = '/tmp/tmpin582jvi'
test_image = '/tmp/tmpin582jvi/test_logo_on_a_map.png'
baseline_image_ref = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline/test_logo_on_a_map.png'
baseline_image = '/tmp/tmpin582jvi/baseline-test_logo_on_a_map.png'

    [... wrapper source, truncated image arrays, and calculate_rms source identical to the test_coast traceback above ...]

E matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (1986, 2023, 3) actual size (1983, 2031, 3)

lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
_____________________________ test_plot_projection _____________________________

args = ()
kwargs = {'data': array([[43.4847, 0.6227, 0.5309],
[22.331 , 3.7556, 0.3817],
[40.8023, 5.5903, 0.7764],
... 0.7622],
[61.7074, 1.4425, 0.4305],
[28.1125, 3.8456, 0.9338],
[47.8333, -0.7225, 0.5969]])}
baseline_dir = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x7f8c7c79ef60>
filename = 'test_plot_projection.png', result_dir = '/tmp/tmp4jyvb8ih'
test_image = '/tmp/tmp4jyvb8ih/test_plot_projection.png'
baseline_image_ref = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline/test_plot_projection.png'
baseline_image = '/tmp/tmp4jyvb8ih/baseline-test_plot_projection.png'

@wraps(item.function)
def item_function_wrapper(*args, **kwargs):

    baseline_dir = compare.kwargs.get('baseline_dir', None)
    if baseline_dir is None:
        if self.baseline_dir is None:
            baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), 'baseline')
        else:
            baseline_dir = self.baseline_dir
        baseline_remote = False
    else:
        baseline_remote = baseline_dir.startswith(('http://', 'https://'))
        if not baseline_remote:
            baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir)

    with plt.style.context(style, after_reset=True), switch_backend(backend):

        # Run test and get figure object
        if inspect.ismethod(original):  # method
            # In some cases, for example if setup_method is used,
            # original appears to belong to an instance of the test
            # class that is not the same as args[0], and args[0] is the
            # one that has the correct attributes set up from setup_method
            # so we ignore original.__self__ and use args[0] instead.
            fig = original.__func__(*args, **kwargs)
        else:  # function
            fig = original(*args, **kwargs)

        if remove_text:
            remove_ticks_and_titles(fig)

        # Find test name to use as plot name
        filename = compare.kwargs.get('filename', None)
        if filename is None:
            filename = item.name + '.png'
            filename = filename.replace('[', '_').replace(']', '_')
            filename = filename.replace('/', '_')
            filename = filename.replace('_.png', '.png')

        # What we do now depends on whether we are generating the
        # reference images or simply running the test.
        if self.generate_dir is None:

            # Save the figure
            result_dir = tempfile.mkdtemp(dir=self.results_dir)
            test_image = os.path.abspath(os.path.join(result_dir, filename))

            fig.savefig(test_image, **savefig_kwargs)
            close_mpl_figure(fig)

            # Find path to baseline image
            if baseline_remote:
                baseline_image_ref = _download_file(baseline_dir, filename)
            else:
                baseline_image_ref = os.path.abspath(os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir, filename))

            if not os.path.exists(baseline_image_ref):
                pytest.fail("Image file not found for comparison test in: "
                            "\n\t{baseline_dir}"
                            "\n(This is expected for new tests.)\nGenerated Image: "
                            "\n\t{test}".format(baseline_dir=baseline_dir, test=test_image), pytrace=False)

            # distutils may put the baseline images in non-accessible places,
            # copy to our tmpdir to be sure to keep them in case of failure
            baseline_image = os.path.abspath(os.path.join(result_dir, 'baseline-' + filename))
            shutil.copyfile(baseline_image_ref, baseline_image)
          msg = compare_images(baseline_image, test_image, tol=tolerance)

lib/python3.7/site-packages/pytest_mpl/plugin.py:275:


lib/python3.7/site-packages/matplotlib/testing/compare.py:458: in compare_images
rms = calculate_rms(expected_image, actual_image)


expected_image = array([[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[255, 255, 255],
...[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]]], dtype=int16)
actual_image = array([[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[255, 255, 255],
...[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]]], dtype=int16)

def calculate_rms(expected_image, actual_image):
    "Calculate the per-pixel errors, then compute the root mean square error."
    if expected_image.shape != actual_image.shape:
        raise ImageComparisonFailure(
            "Image sizes do not match expected size: {} "
          "actual size {}".format(expected_image.shape, actual_image.shape))

E matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (894, 1367, 3) actual size (891, 1375, 3)

lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
_______________________________ test_plot_colors _______________________________
Error: Image files did not match.
RMS Value: 2.5153753240346766
Expected:
/tmp/tmpoz8_twfq/baseline-test_plot_colors.png
Actual:
/tmp/tmpoz8_twfq/test_plot_colors.png
Difference:
/tmp/tmpoz8_twfq/test_plot_colors-failed-diff.png
Tolerance:
2
_________________________ test_plot_colors_sizes_proj __________________________

args = ()
kwargs = {'data': array([[43.4847, 0.6227, 0.5309],
[22.331 , 3.7556, 0.3817],
[40.8023, 5.5903, 0.7764],
...1.4425, 0.4305],
[28.1125, 3.8456, 0.9338],
[47.8333, -0.7225, 0.5969]]), 'region': [10, 70, -5, 10]}
baseline_dir = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x7f8c7c8295c0>
filename = 'test_plot_colors_sizes_proj.png', result_dir = '/tmp/tmpu46ags4g'
test_image = '/tmp/tmpu46ags4g/test_plot_colors_sizes_proj.png'
baseline_image_ref = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline/test_plot_colors_sizes_proj.png'
baseline_image = '/tmp/tmpu46ags4g/baseline-test_plot_colors_sizes_proj.png'

@wraps(item.function)
def item_function_wrapper(*args, **kwargs):

    baseline_dir = compare.kwargs.get('baseline_dir', None)
    if baseline_dir is None:
        if self.baseline_dir is None:
            baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), 'baseline')
        else:
            baseline_dir = self.baseline_dir
        baseline_remote = False
    else:
        baseline_remote = baseline_dir.startswith(('http://', 'https://'))
        if not baseline_remote:
            baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir)

    with plt.style.context(style, after_reset=True), switch_backend(backend):

        # Run test and get figure object
        if inspect.ismethod(original):  # method
            # In some cases, for example if setup_method is used,
            # original appears to belong to an instance of the test
            # class that is not the same as args[0], and args[0] is the
            # one that has the correct attributes set up from setup_method
            # so we ignore original.__self__ and use args[0] instead.
            fig = original.__func__(*args, **kwargs)
        else:  # function
            fig = original(*args, **kwargs)

        if remove_text:
            remove_ticks_and_titles(fig)

        # Find test name to use as plot name
        filename = compare.kwargs.get('filename', None)
        if filename is None:
            filename = item.name + '.png'
            filename = filename.replace('[', '_').replace(']', '_')
            filename = filename.replace('/', '_')
            filename = filename.replace('_.png', '.png')

        # What we do now depends on whether we are generating the
        # reference images or simply running the test.
        if self.generate_dir is None:

            # Save the figure
            result_dir = tempfile.mkdtemp(dir=self.results_dir)
            test_image = os.path.abspath(os.path.join(result_dir, filename))

            fig.savefig(test_image, **savefig_kwargs)
            close_mpl_figure(fig)

            # Find path to baseline image
            if baseline_remote:
                baseline_image_ref = _download_file(baseline_dir, filename)
            else:
                baseline_image_ref = os.path.abspath(os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir, filename))

            if not os.path.exists(baseline_image_ref):
                pytest.fail("Image file not found for comparison test in: "
                            "\n\t{baseline_dir}"
                            "\n(This is expected for new tests.)\nGenerated Image: "
                            "\n\t{test}".format(baseline_dir=baseline_dir, test=test_image), pytrace=False)

            # distutils may put the baseline images in non-accessible places,
            # copy to our tmpdir to be sure to keep them in case of failure
            baseline_image = os.path.abspath(os.path.join(result_dir, 'baseline-' + filename))
            shutil.copyfile(baseline_image_ref, baseline_image)
          msg = compare_images(baseline_image, test_image, tol=tolerance)

lib/python3.7/site-packages/pytest_mpl/plugin.py:275:


lib/python3.7/site-packages/matplotlib/testing/compare.py:458: in compare_images
rms = calculate_rms(expected_image, actual_image)


expected_image = array([[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[255, 255, 255],
...[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]]], dtype=int16)
actual_image = array([[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[255, 255, 255],
...[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]]], dtype=int16)

def calculate_rms(expected_image, actual_image):
    "Calculate the per-pixel errors, then compute the root mean square error."
    if expected_image.shape != actual_image.shape:
        raise ImageComparisonFailure(
            "Image sizes do not match expected size: {} "
          "actual size {}".format(expected_image.shape, actual_image.shape))

E matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (908, 3225, 3) actual size (905, 3234, 3)

lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
_______________________________ test_plot_matrix _______________________________

args = ()
kwargs = {'data': array([[43.4847, 0.6227, 0.5309],
[22.331 , 3.7556, 0.3817],
[40.8023, 5.5903, 0.7764],
... 0.7622],
[61.7074, 1.4425, 0.4305],
[28.1125, 3.8456, 0.9338],
[47.8333, -0.7225, 0.5969]])}
baseline_dir = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline'
baseline_remote = False, fig = <pygmt.figure.Figure object at 0x7f8c7c7e66a0>
filename = 'test_plot_matrix.png', result_dir = '/tmp/tmpa0cnc3v_'
test_image = '/tmp/tmpa0cnc3v_/test_plot_matrix.png'
baseline_image_ref = '/home/rau/Tools/pygmt/lib/python3.7/site-packages/pygmt/tests/baseline/test_plot_matrix.png'
baseline_image = '/tmp/tmpa0cnc3v_/baseline-test_plot_matrix.png'

@wraps(item.function)
def item_function_wrapper(*args, **kwargs):

    baseline_dir = compare.kwargs.get('baseline_dir', None)
    if baseline_dir is None:
        if self.baseline_dir is None:
            baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), 'baseline')
        else:
            baseline_dir = self.baseline_dir
        baseline_remote = False
    else:
        baseline_remote = baseline_dir.startswith(('http://', 'https://'))
        if not baseline_remote:
            baseline_dir = os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir)

    with plt.style.context(style, after_reset=True), switch_backend(backend):

        # Run test and get figure object
        if inspect.ismethod(original):  # method
            # In some cases, for example if setup_method is used,
            # original appears to belong to an instance of the test
            # class that is not the same as args[0], and args[0] is the
            # one that has the correct attributes set up from setup_method
            # so we ignore original.__self__ and use args[0] instead.
            fig = original.__func__(*args, **kwargs)
        else:  # function
            fig = original(*args, **kwargs)

        if remove_text:
            remove_ticks_and_titles(fig)

        # Find test name to use as plot name
        filename = compare.kwargs.get('filename', None)
        if filename is None:
            filename = item.name + '.png'
            filename = filename.replace('[', '_').replace(']', '_')
            filename = filename.replace('/', '_')
            filename = filename.replace('_.png', '.png')

        # What we do now depends on whether we are generating the
        # reference images or simply running the test.
        if self.generate_dir is None:

            # Save the figure
            result_dir = tempfile.mkdtemp(dir=self.results_dir)
            test_image = os.path.abspath(os.path.join(result_dir, filename))

            fig.savefig(test_image, **savefig_kwargs)
            close_mpl_figure(fig)

            # Find path to baseline image
            if baseline_remote:
                baseline_image_ref = _download_file(baseline_dir, filename)
            else:
                baseline_image_ref = os.path.abspath(os.path.join(os.path.dirname(item.fspath.strpath), baseline_dir, filename))

            if not os.path.exists(baseline_image_ref):
                pytest.fail("Image file not found for comparison test in: "
                            "\n\t{baseline_dir}"
                            "\n(This is expected for new tests.)\nGenerated Image: "
                            "\n\t{test}".format(baseline_dir=baseline_dir, test=test_image), pytrace=False)

            # distutils may put the baseline images in non-accessible places,
            # copy to our tmpdir to be sure to keep them in case of failure
            baseline_image = os.path.abspath(os.path.join(result_dir, 'baseline-' + filename))
            shutil.copyfile(baseline_image_ref, baseline_image)
          msg = compare_images(baseline_image, test_image, tol=tolerance)

lib/python3.7/site-packages/pytest_mpl/plugin.py:275:


lib/python3.7/site-packages/matplotlib/testing/compare.py:458: in compare_images
rms = calculate_rms(expected_image, actual_image)


expected_image = array([[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[255, 255, 255],
...[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]]], dtype=int16)
actual_image = array([[[255, 255, 255],
[255, 255, 255],
[255, 255, 255],
...,
[255, 255, 255],
...[255, 255, 255],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]]], dtype=int16)

def calculate_rms(expected_image, actual_image):
    "Calculate the per-pixel errors, then compute the root mean square error."
    if expected_image.shape != actual_image.shape:
        raise ImageComparisonFailure(
            "Image sizes do not match expected size: {} "
          "actual size {}".format(expected_image.shape, actual_image.shape))

E matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size: (908, 3225, 3) actual size (905, 3234, 3)

lib/python3.7/site-packages/matplotlib/testing/compare.py:366: ImageComparisonFailure
=============================== warnings summary ===============================
lib/python3.7/site-packages/_pytest/mark/structures.py:332
/home/rau/Tools/pygmt/lib/python3.7/site-packages/_pytest/mark/structures.py:332: PytestUnknownMarkWarning: Unknown pytest.mark.mpl_image_compare - is this a typo? You can register custom marks to avoid this warning - for details, see https://docs.pytest.org/en/latest/mark.html
PytestUnknownMarkWarning,

-- Docs: https://docs.pytest.org/en/latest/warnings.html
============== 21 failed, 138 passed, 1 warnings in 28.75 seconds ==============

The main error is:

matplotlib.testing.exceptions.ImageComparisonFailure: Image sizes do not match expected size
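For context, the size check that raises this exception happens before any pixels are compared. A minimal NumPy sketch of the same logic (a simplification of matplotlib's `calculate_rms`, not the real implementation) looks like this:

```python
import numpy as np

def calculate_rms(expected_image, actual_image):
    """Simplified sketch of matplotlib's per-pixel RMS comparison."""
    if expected_image.shape != actual_image.shape:
        # This is the branch hit in the log above: a few pixels of height
        # or width difference fails before any pixel values are compared.
        raise ValueError(
            "Image sizes do not match expected size: {} actual size {}".format(
                expected_image.shape, actual_image.shape
            )
        )
    diff = expected_image.astype(float) - actual_image.astype(float)
    return np.sqrt((diff ** 2).mean())

# Same-shape images with a constant difference of 2 give an RMS of 2.0,
# right at the tolerance of 2 reported in the test_plot_colors failure.
expected = np.zeros((4, 4, 3), dtype=np.int16)
actual = np.full((4, 4, 3), 2, dtype=np.int16)
print(calculate_rms(expected, actual))  # 2.0
```

So the failures above come in two flavors: outright shape mismatches (different page dimensions) and same-shape images whose RMS exceeds the tolerance.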

System information

  • Operating system: Debian Buster 10.0
  • Python installation (Anaconda, system, ETS): system (with virtualenv)
  • Version of GMT: 6.0.0rc2
  • Version of Python: 3.7.3
  • Version of this package: master version ( d84f308 )
  • I'm using Python 3.7.3 with virtualenv (not conda); the output of pip list --local is below:
output of pip list --local
Package                  Version  
------------------------ ---------
alabaster                0.7.12   
appdirs                  1.4.3    
astroid                  2.2.5    
atomicwrites             1.3.0    
attrs                    19.1.0   
Babel                    2.7.0    
backcall                 0.1.0    
black                    19.3b0   
bleach                   3.1.0    
certifi                  2019.6.16
cftime                   1.0.3.4  
chardet                  3.0.4    
Click                    7.0      
coverage                 4.5.3    
cycler                   0.10.0   
decorator                4.4.0    
defusedxml               0.6.0    
docutils                 0.14     
entrypoints              0.3      
flake8                   3.7.8    
idna                     2.8      
imagesize                1.1.0    
importlib-metadata       0.18     
ipykernel                5.1.1    
ipython                  7.6.1    
ipython-genutils         0.2.0    
ipywidgets               7.5.0    
isort                    4.3.21   
jedi                     0.14.0   
Jinja2                   2.10.1   
jsonschema               3.0.1    
jupyter                  1.0.0    
jupyter-client           5.3.1    
jupyter-console          6.0.0    
jupyter-core             4.5.0    
kiwisolver               1.1.0    
lazy-object-proxy        1.4.1    
MarkupSafe               1.1.1    
matplotlib               3.1.1    
mccabe                   0.6.1    
mistune                  0.8.4    
more-itertools           7.1.0    
nbconvert                5.5.0    
nbformat                 4.4.0    
nbsphinx                 0.4.2    
netCDF4                  1.5.1.2  
nose                     1.3.7    
notebook                 5.7.8    
numpy                    1.16.4   
numpydoc                 0.9.1    
packaging                19.0     
pandas                   0.24.2   
pandocfilters            1.4.2    
parso                    0.5.0    
pexpect                  4.7.0    
pickleshare              0.7.5    
Pillow                   6.1.0    
pip                      19.1.1   
pkg-resources            0.0.0    
pluggy                   0.12.0   
prometheus-client        0.7.1    
prompt-toolkit           2.0.9    
ptyprocess               0.6.0    
py                       1.8.0    
pycodestyle              2.5.0    
pyflakes                 2.1.1    
Pygments                 2.4.2    
pygmt                    0+unknown
pylint                   2.3.1    
pyparsing                2.4.0    
pyrsistent               0.15.3   
pytest                   5.0.1    
pytest-cov               2.7.1    
pytest-mpl               0.10     
python-dateutil          2.8.0    
pytz                     2019.1   
pyzmq                    18.0.2   
qtconsole                4.5.1    
requests                 2.22.0   
Send2Trash               1.5.0    
setuptools               41.0.1   
six                      1.12.0   
snowballstemmer          1.9.0    
Sphinx                   1.8.5    
sphinx-gallery           0.4.0    
sphinx-rtd-theme         0.4.3    
sphinxcontrib-websupport 1.1.2    
terminado                0.8.2    
testpath                 0.4.2    
toml                     0.10.0   
tornado                  6.0.3    
traitlets                4.3.2    
typed-ast                1.4.0    
urllib3                  1.25.3   
wcwidth                  0.1.7    
webencodings             0.5.1    
wheel                    0.33.4   
widgetsnbextension       3.5.0    
wrapt                    1.11.2   
xarray                   0.12.3   
zipp                     0.5.2    
@welcome

welcome bot commented Jul 12, 2019

👋 Thanks for opening your first issue here! Please make sure you filled out the template with as much detail as possible. You might also want to take a look at our contributing guidelines and code of conduct.

@leouieda
Member

Hi @holishing thanks for reporting this failure! Could you please post here the version of ghostscript that you have installed? A lot of these image mismatches are due to different versions of ghostscript generating slightly different pngs.

It would also be very helpful if you could find the images mentioned in the error log (Actual, Expected, Difference) and post them here. The file names are something like this: /tmp/tmpqkzi2u4k/baseline-test_contour_vec.png. Since they are in temp folders, they might be gone by now.
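(For anyone hitting the same thing: a quick way to rescue those images before the temp folders are cleared is something like the sketch below. The `/tmp/tmp*` glob is a guess based on the paths in this log; adjust it to the folder names printed in your own error output.)

```shell
# Collect pytest-mpl result images out of the temp folders before they
# disappear (paths are hypothetical examples; match your own log).
mkdir -p "$HOME/pygmt-failed-images"
cp /tmp/tmp*/*.png "$HOME/pygmt-failed-images/" 2>/dev/null || true
tar czf "$HOME/pygmt-failed-images.tar.gz" -C "$HOME" pygmt-failed-images
```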

@holishing
Author

holishing commented Jul 17, 2019

The related files are still preserved:
https://drive.google.com/file/d/1BXHopy1zhQKMxJKOU7xl0I8kUcUMLZ2J/view?usp=sharing

SHA256 checksum of the tar file:

4605876cfd97645117dbcd01b7e23f8af8ca8286e55d1e935c1ecffc5f343af7  tmpqkzi2u4k.tar.gz
$ ghostscript --version
9.27
$ apt show ghostscript
Package: ghostscript
Version: 9.27~dfsg-2
Priority: optional
Section: text
Maintainer: Debian Printing Team <[email protected]>
Installed-Size: 230 kB
Provides: postscript-viewer
Depends: libgs9 (= 9.27~dfsg-2), libc6 (>= 2.4)
Recommends: gsfonts
Suggests: ghostscript-x
Homepage: https://www.ghostscript.com/
Tag: admin::hardware, hardware::printer, implemented-in::c,
 interface::commandline, role::program, scope::utility, use::driver,
 use::printing, works-with-format::pdf, works-with-format::postscript,
 works-with::dtp, works-with::font, works-with::image,
 works-with::image:raster, works-with::image:vector, works-with::text
Download-Size: 94.6 kB
APT-Manual-Installed: no
APT-Sources: http://free.nchc.org.tw/debian buster/main amd64 Packages
Description: interpreter for the PostScript language and for PDF
 GPL Ghostscript is used for PostScript/PDF preview and printing.
 Usually as a back-end to a program such as ghostview, it can display
 PostScript and PDF documents in an X11 environment.
 .
 Furthermore, it can render PostScript and PDF files as graphics to be
 printed on non-PostScript printers. Supported printers include common
 dot-matrix, inkjet and laser models.

@leouieda
Member

OK, the baseline images that we use were generated with gs 9.21, so there are bound to be some slight differences. In this case, the difference is really small (see test_contour_vec-failed-diff.png):

(image: test_contour_vec-failed-diff.png)

So it's safe to ignore these errors. Thanks for reporting! Please feel free to let us know if you have any suggestions/requests.
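If you want to double-check how small a mismatch is yourself, matplotlib's public `compare_images` helper (the same function the failing tests call) can be run directly on a pair of PNGs. The sketch below generates two identical stand-in images rather than using the real pygmt baselines, so the file names here are placeholders:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, no display needed
import matplotlib.pyplot as plt
from matplotlib.testing.compare import compare_images

# Create two identical stand-in images. In a real check you would point
# these paths at the baseline-*.png and *.png files from the temp folder.
for name in ("baseline-demo.png", "demo.png"):
    fig, ax = plt.subplots()
    ax.plot([1, 2, 3], [1, 4, 9])
    fig.savefig(name, dpi=100)
    plt.close(fig)

# compare_images returns None when the images agree within the tolerance,
# otherwise a message describing the RMS failure.
result = compare_images("baseline-demo.png", "demo.png", tol=2)
print(result)  # None
```

With identical inputs the RMS is 0 and the comparison passes; with the gs-version differences above you would see a small nonzero RMS like the 2.52 reported for test_plot_colors.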
