Replace flakeheaven and isort with ruff #2747

Merged (6 commits) on Oct 15, 2023
Changes from 3 commits
2 changes: 1 addition & 1 deletion .github/workflows/format-command.yml
@@ -32,7 +32,7 @@ jobs:
# Install formatting tools
- name: Install formatting tools
run: |
python -m pip install black blackdoc docformatter flakeheaven isort
python -m pip install black blackdoc docformatter ruff
python -m pip list
sudo apt-get install dos2unix

4 changes: 2 additions & 2 deletions .github/workflows/style_checks.yaml
@@ -34,11 +34,11 @@ jobs:

- name: Install packages
run: |
python -m pip install black blackdoc docformatter flakeheaven pylint isort
python -m pip install black blackdoc docformatter pylint ruff
python -m pip list
sudo apt-get install dos2unix

- name: Formatting check (black, blackdoc, docformatter, flakeheaven and isort)
- name: Formatting check (black, blackdoc, docformatter, ruff)
run: make check

- name: Linting (pylint)
2 changes: 1 addition & 1 deletion .gitignore
@@ -17,8 +17,8 @@ MANIFEST
.coverage
coverage.xml
htmlcov/
.flakeheaven_cache/
.pytest_cache/
.ruff_cache/
results/
result_images/
tmp-test-dir-with-unique-name/
9 changes: 4 additions & 5 deletions Makefile
@@ -15,8 +15,8 @@ help:
@echo " fulltest run the test suite (including all doctests)"
@echo " doctest run the doctests only"
@echo " test_no_images run the test suite (including all doctests) but skip image comparisons"
@echo " format run black, blackdoc, docformatter and isort to automatically format the code"
@echo " check run code style and quality checks (black, blackdoc, docformatter, flakeheaven and isort)"
@echo " format run black, blackdoc, docformatter and ruff to automatically format the code"
@echo " check run code style and quality checks (black, blackdoc, docformatter, ruff)"
@echo " codespell run codespell to check common misspellings"
@echo " lint run pylint for a deeper (and slower) quality check"
@echo " clean clean up build and generated files"
@@ -60,17 +60,16 @@ test_no_images: PYTEST_ARGS=-o addopts="--verbose --durations=0 --durations-min=
test_no_images: _runtest

format:
isort .
docformatter --in-place $(FORMAT_FILES)
black $(FORMAT_FILES)
blackdoc $(FORMAT_FILES)
ruff check --fix $(FORMAT_FILES)

check:
isort . --check
docformatter --check $(FORMAT_FILES)
black --check $(FORMAT_FILES)
blackdoc --check $(FORMAT_FILES)
FLAKEHEAVEN_CACHE_TIMEOUT=0 flakeheaven lint $(FORMAT_FILES)
ruff check $(FORMAT_FILES)

codespell:
@codespell
4 changes: 2 additions & 2 deletions doc/conf.py
@@ -7,7 +7,7 @@
import datetime
from importlib.metadata import metadata

# isort: off
# ruff: isort: off
from sphinx_gallery.sorting import ( # pylint: disable=no-name-in-module
ExplicitOrder,
ExampleTitleSortKey,
@@ -16,7 +16,7 @@
from pygmt import __commit__, __version__
from pygmt.sphinx_gallery import PyGMTScraper

# isort: on
# ruff: isort: on

extensions = [
"myst_parser",
6 changes: 3 additions & 3 deletions doc/contributing.md
@@ -476,7 +476,7 @@ We use some tools to format the code so we don't have to think about it:
- [Black](https://github.com/psf/black)
- [blackdoc](https://github.com/keewis/blackdoc)
- [docformatter](https://github.com/myint/docformatter)
- [isort](https://pycqa.github.io/isort/)
- [ruff](https://docs.astral.sh/ruff)

Black and blackdoc loosely follows the [PEP8](http://pep8.org) guide but with a few
differences. Regardless, you won't have to worry about formatting the code yourself.
@@ -499,14 +499,14 @@ words bridged only by consonants, such as `distcalc`, and `crossprofile`. This
Expand All @@ -499,14 +499,14 @@ words bridged only by consonants, such as `distcalc`, and `crossprofile`. This
convention is not applied by the code checking tools, but the PyGMT maintainers
will comment on any pull requests as needed.

We also use [flakeheaven](https://flakeheaven.readthedocs.io) and
We also use [ruff](https://docs.astral.sh/ruff) and
[pylint](https://pylint.pycqa.org/) to check the quality of the code and quickly catch
common errors.
The [`Makefile`](https://github.com/GenericMappingTools/pygmt/blob/main/Makefile)
contains rules for running both checks:

```bash
make check # Runs black, blackdoc, docformatter, flakeheaven and isort (in check mode)
make check # Runs black, blackdoc, docformatter, ruff (in check mode)
make lint # Runs pylint, which is a bit slower
```

3 changes: 1 addition & 2 deletions environment.yml
@@ -26,9 +26,8 @@ dependencies:
- blackdoc
- codespell
- docformatter>=1.7.2
- flakeheaven>=3
- isort>=5
- pylint
- ruff
# Dev dependencies (unit testing)
- matplotlib
- pytest-cov
4 changes: 2 additions & 2 deletions examples/projections/cyl/cyl_universal_transverse_mercator.py
@@ -14,7 +14,7 @@

.. _GMT_utm_zones:

.. figure:: https://docs.generic-mapping-tools.org/latest/_images/GMT_utm_zones.png # noqa: W505
.. figure:: https://docs.generic-mapping-tools.org/latest/_images/GMT_utm_zones.png
:width: 700 px
:align: center

@@ -34,7 +34,7 @@

The projection is set with **u** or **U**. *zone* sets the zone for the figure,
and the figure size is set with *scale* or *width*.
"""
""" # noqa: W505

# %%
import pygmt
4 changes: 2 additions & 2 deletions pygmt/datasets/earth_age.py
@@ -15,7 +15,7 @@ def load_earth_age(resolution="01d", region=None, registration=None):
r"""
Load the Earth seafloor crustal age dataset in various resolutions.

.. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_age.png # noqa: W505
.. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_age.png
:width: 80 %
:align: center

@@ -91,7 +91,7 @@ def load_earth_age(resolution="01d", region=None, registration=None):
... region=[120, 160, 30, 60],
... registration="gridline",
... )
"""
""" # noqa: W505
Member Author:
In commit ece0730, I had to move these noqa: W505 (doc-too-long) comments (mostly added in #2728) from inline to behind the closing triple quotes, because ruff requires them to be placed there. See astral-sh/ruff#3972 and https://docs.astral.sh/ruff/configuration/#error-suppression.
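As a minimal sketch of that placement rule (the function name and docstring text below are made up for illustration, not taken from PyGMT):

```python
def load_example_grid():
    """
    Load a hypothetical example grid (illustration only).

    The returned grid keeps its registration and gtype metadata, but these properties may be lost after slicing.
    """  # noqa: W505
    # flake8/pycodestyle matched "# noqa" text on the physical line, so the old
    # suppressions could sit inside the docstring on the long line itself; ruff
    # only recognizes real comment tokens, so the suppression goes on the line
    # with the closing triple quotes instead (see the linked ruff issue).
```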

grid = _load_remote_dataset(
dataset_name="earth_age",
dataset_prefix="earth_age_",
4 changes: 2 additions & 2 deletions pygmt/datasets/earth_free_air_anomaly.py
@@ -16,7 +16,7 @@ def load_earth_free_air_anomaly(resolution="01d", region=None, registration=None
Load the IGPP Global Earth Free-Air Anomaly datatset in various
resolutions.

.. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_faa.jpg # noqa: W505
.. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_faa.jpg
:width: 80 %
:align: center

@@ -95,7 +95,7 @@ def load_earth_free_air_anomaly(resolution="01d", region=None, registration=None
... region=[120, 160, 30, 60],
... registration="gridline",
... )
"""
""" # noqa: W505
grid = _load_remote_dataset(
dataset_name="earth_free_air_anomaly",
dataset_prefix="earth_faa_",
4 changes: 2 additions & 2 deletions pygmt/datasets/earth_geoid.py
@@ -15,7 +15,7 @@ def load_earth_geoid(resolution="01d", region=None, registration=None):
r"""
Load the EGM2008 Global Earth Geoid dataset in various resolutions.

.. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_geoid.jpg # noqa: W505
.. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_geoid.jpg
:width: 80 %
:align: center

@@ -84,7 +84,7 @@ def load_earth_geoid(resolution="01d", region=None, registration=None):
... region=[120, 160, 30, 60],
... registration="gridline",
... )
"""
""" # noqa: W505
grid = _load_remote_dataset(
dataset_name="earth_geoid",
dataset_prefix="earth_geoid_",
6 changes: 3 additions & 3 deletions pygmt/datasets/earth_magnetic_anomaly.py
@@ -24,8 +24,8 @@ def load_earth_magnetic_anomaly(

* - Global Earth Magnetic Anomaly Model (EMAG2)
- World Digital Magnetic Anomaly Map (WDMAM)
* - .. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_mag4km.jpg # noqa: W505
- .. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_wdmam.jpg # noqa: W505
* - .. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_mag4km.jpg
- .. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_wdmam.jpg

The grids are downloaded to a user data directory
(usually ``~/.gmt/server/earth/earth_mag/``,
@@ -132,7 +132,7 @@ def load_earth_magnetic_anomaly(
>>> grid = load_earth_magnetic_anomaly(
... resolution="20m", registration="gridline", data_source="wdmam"
... )
"""
""" # noqa: W505
magnetic_anomaly_sources = {
"emag2": "earth_mag_",
"emag2_4km": "earth_mag4km_",
4 changes: 2 additions & 2 deletions pygmt/datasets/earth_mask.py
@@ -15,7 +15,7 @@ def load_earth_mask(resolution="01d", region=None, registration=None):
r"""
Load the GSHHG Global Earth Mask dataset in various resolutions.

.. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_mask.png # noqa: W505
.. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_mask.png
:width: 80 %
:align: center

@@ -88,7 +88,7 @@ def load_earth_mask(resolution="01d", region=None, registration=None):
>>> # location (170°E, 50°N) is in oceanic area (0)
>>> grid.sel(lon=170, lat=50).values
array(0, dtype=int8)
"""
""" # noqa: W505
grid = _load_remote_dataset(
dataset_name="earth_mask",
dataset_prefix="earth_mask_",
4 changes: 2 additions & 2 deletions pygmt/datasets/earth_relief.py
@@ -23,7 +23,7 @@ def load_earth_relief(
Load the Earth relief datasets (topography and bathymetry) in various
resolutions.

.. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_gebcosi.jpg # noqa: W505
.. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_gebcosi.jpg
:width: 80 %
:align: center

@@ -136,7 +136,7 @@ def load_earth_relief(
... registration="gridline",
... use_srtm=True,
... )
"""
""" # noqa: W505
# resolutions of original land-only SRTM tiles from NASA
land_only_srtm_resolutions = ["03s", "01s"]

4 changes: 2 additions & 2 deletions pygmt/datasets/earth_vertical_gravity_gradient.py
@@ -18,7 +18,7 @@ def load_earth_vertical_gravity_gradient(
Load the IGPP Global Earth Vertical Gravity Gradient dataset in various
resolutions.

.. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_vgg.jpg # noqa: W505
.. figure:: https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_vgg.jpg
:width: 80 %
:align: center

@@ -97,7 +97,7 @@ def load_earth_vertical_gravity_gradient(
... region=[120, 160, 30, 60],
... registration="gridline",
... )
"""
""" # noqa: W505
grid = _load_remote_dataset(
dataset_name="earth_vgg",
dataset_prefix="earth_vgg_",
7 changes: 3 additions & 4 deletions pygmt/datasets/samples.py
@@ -301,7 +301,6 @@ def list_sample_data():


def load_sample_data(name):
# pylint: disable=line-too-long
"""
Load an example dataset from the GMT server.

@@ -317,8 +316,8 @@ def load_sample_data(name):
Returns
-------
:class:`pandas.DataFrame` or :class:`xarray.DataArray`
Sample dataset loaded as a :class:`pandas.DataFrame` for tabular data or
:class:`xarray.DataArray` for raster data.
Sample dataset loaded as a :class:`pandas.DataFrame` for tabular data
or :class:`xarray.DataArray` for raster data.

See Also
--------
@@ -345,7 +344,7 @@ def load_sample_data(name):
'usgs_quakes': 'Table of earthquakes from the USGS'}
>>> # load the sample bathymetry dataset
>>> data = load_sample_data("bathymetry")
"""
""" # noqa: W505
if name not in datasets:
raise GMTInvalidInput(f"Invalid dataset name '{name}'.")
return datasets[name].func()
4 changes: 2 additions & 2 deletions pygmt/src/nearneighbor.py
@@ -57,7 +57,7 @@ def nearneighbor(data=None, x=None, y=None, z=None, **kwargs):
criteria and :math:`r_i` is the distance from the node to the *i*'th data
point. If no data weights are supplied then :math:`w_i = 1`.

.. figure:: https://docs.generic-mapping-tools.org/dev/_images/GMT_nearneighbor.png # noqa: W505
.. figure:: https://docs.generic-mapping-tools.org/dev/_images/GMT_nearneighbor.png
:width: 300 px
:align: center

@@ -146,7 +146,7 @@ def nearneighbor(data=None, x=None, y=None, z=None, **kwargs):
... region=[245, 255, 20, 30],
... search_radius="10m",
... )
"""
""" # noqa: W505
with GMTTempFile(suffix=".nc") as tmpfile:
with Session() as lib:
table_context = lib.virtualfile_from_data(
33 changes: 18 additions & 15 deletions pyproject.toml
@@ -85,27 +85,30 @@ make-summary-multi-line = true
wrap-summaries = 79
wrap-descriptions = 79

[tool.flakeheaven]
max_line_length = 88
max_doc_length = 79
show_source = true
[tool.ruff]
line-length = 88 # E501 (line-too-long)
show-source = true
select = [
"E", # pycodestyle
"F", # pyflakes
"I", # isort
"W", # pycodestyle warnings
]
ignore = ["E501"] # Avoid enforcing line-length violations
Comment on lines -88 to +97
Member Author:
Previously with flakeheaven, we set max_line_length=88 (to match black). But setting line-length=88 with ruff without setting ignore = ["E501"] would raise errors on lines like these:

pygmt/helpers/decorators.py:206:89: E501 Line too long (98 > 88 characters)
    |
204 |     "interpolation": r"""
205 |         interpolation : str
206 |             [**b**\|\ **c**\|\ **l**\|\ **n**][**+a**][**+b**\ *BC*][**+c**][**+t**\ *threshold*].
    |                                                                                         ^^^^^^^^^^ E501
207 |             Select interpolation mode for grids. You can select the type of
208 |             spline used:
    |

Note that black doesn't reformat these lines to fit within 88 characters, yet ruff complains about them. The thing is, we recently disabled pylint's line-length checking in #2735 because we assumed flakeheaven's pycodestyle checker would handle it, but ruff's pycodestyle checker behaves a little differently.

So what should we do in this case?

Member:
Can we ignore both E501 (for code) and W505 (for docs) in ruff and enable pylint's line-length checking (I assume it checks both code and docs)?

For long URLs, we can either add # pylint: disable=line-too-long to the lines or improve the regex for the ignore-long-lines option to make it work for lines like

.. figure: https://very-long-urls.com
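For illustration only, here is a sketch of what an extended ignore-long-lines pattern might look like; the regex and sample lines are assumptions, not something this PR adopted:

```python
# Hypothetical ignore-long-lines pattern that also exempts Sphinx figure
# directives pointing at long URLs (pylint's default only exempts bare URLs).
import re

CANDIDATE = re.compile(r"^\s*(#\s*)?(\.\. figure:: )?<?https?://\S+>?$")

samples = [
    ".. figure:: https://docs.generic-mapping-tools.org/latest/_images/GMT_utm_zones.png",
    "    an ordinary prose line that is simply too long and should still be flagged by pylint",
]
for line in samples:
    # Prints True for the figure/URL line and False for the prose line.
    print(bool(CANDIDATE.match(line)), line[:50])
```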

Member Author:
> enable pylint's line-length checking (I assume it checks both code and docs).

I'm considering replacing pylint with ruff eventually (see #2741 (comment)), so I would prefer not to add pylint's line-length checking back.

> Can we ignore both E501 (for code) and W505 (for docs) in ruff

E501 is already ignored right now. We could ignore W505 too, I suppose, assuming that blackdoc and docformatter are sufficient and we don't plan to replace both with ruff 🙂

Member:
> assuming that blackdoc and docformatter are sufficient and we don't plan to replace both with ruff 🙂

docformatter: astral-sh/ruff#1335
blackdoc: astral-sh/ruff#7146

Member:
> We could ignore W505 too, I suppose, assuming that blackdoc and docformatter are sufficient and we don't plan to replace both with ruff

Sounds good to me.

Member Author:
Hmm, actually, I just did some testing and maybe we shouldn't ignore W505. I just tried ignoring W505 in pyproject.toml and removing all the noqa: W505 comments, and blackdoc/docformatter don't format things properly when I run make format:

docformatter --in-place pygmt doc/conf.py examples
black pygmt doc/conf.py examples
All done! ✨ 🍰 ✨
277 files left unchanged.
blackdoc pygmt doc/conf.py examples

All done! ✨ 🍰 ✨
277 files left unchanged.
ruff check --fix pygmt doc/conf.py examples
pygmt/datasets/earth_age.py:75:80: W505 Doc line too long (133 > 79 characters)
   |
73 |     (i.e., ``grid.gmt.registration`` and ``grid.gmt.gtype`` respectively).
74 |     However, these properties may be lost after specific grid operations (such
75 |     as slicing) and will need to be manually set before passing the grid to any PyGMT data processing or plotting functions. Refer to
   |                                                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ W505
76 |     :class:`pygmt.GMTDataArrayAccessor` for detailed explanations and
77 |     workarounds.
   |

Member:
https://docs.astral.sh/ruff/rules/doc-line-too-long/#why-is-this-bad

> Ignores lines that end with a URL, as long as the URL starts before the line-length threshold.

I think it means that we can simply remove all noqa: W505 and don't have to ignore W505.
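A small sketch of that exemption (made-up docstring content, not PyGMT code): the first long doc line below ends with a URL that starts before column 79, so W505 skips it, while the second long line ends in ordinary prose and would still be reported.

```python
def w505_exemption_sketch():
    """
    Illustration of ruff's W505 URL exemption (not real PyGMT code).

    See https://www.generic-mapping-tools.org/remote-datasets/_images/GMT_earth_age.png
    However, these properties may be lost after slicing and must be reset manually before plotting the grid again.
    """
    # Running "ruff check" with W505 enabled reports only the second doc line.
```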

Member Author:
> https://docs.astral.sh/ruff/rules/doc-line-too-long/#why-is-this-bad
>
> Ignores lines that end with a URL, as long as the URL starts before the line-length threshold.
>
> I think it means that we can simply remove all noqa: W505 and don't have to ignore W505.

I tried removing all the noqa: W505 comments in a208ca8. The Style Checks workflow shows these errors:

ruff check pygmt doc/conf.py examples
pygmt/datasets/earth_age.py:75:80: W505 Doc line too long (133 > 79 characters)
   |
73 |     (i.e., ``grid.gmt.registration`` and ``grid.gmt.gtype`` respectively).
74 |     However, these properties may be lost after specific grid operations (such
75 |     as slicing) and will need to be manually set before passing the grid to any PyGMT data processing or plotting functions. Refer to
   |                                                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ W505
76 |     :class:`pygmt.GMTDataArrayAccessor` for detailed explanations and
77 |     workarounds.
   |

pygmt/datasets/samples.py:334:80: W505 Doc line too long (80 > 79 characters)
    |
332 |     >>> # use list_sample_data to see the available datasets
333 |     >>> pprint(list_sample_data(), width=120)
334 |     {'bathymetry': 'Table of ship bathymetric observations off Baja California',
    |                                                                                ^ W505
335 |      'earth_relief_holes': 'Regional 20 arc-minutes Earth relief grid with holes',
336 |      'fractures': 'Table of hypothetical fracture lengths and azimuths',
    |

pygmt/datasets/samples.py:335:80: W505 Doc line too long (82 > 79 characters)
    |
333 |     >>> pprint(list_sample_data(), width=120)
334 |     {'bathymetry': 'Table of ship bathymetric observations off Baja California',
335 |      'earth_relief_holes': 'Regional 20 arc-minutes Earth relief grid with holes',
    |                                                                                ^^^ W505
336 |      'fractures': 'Table of hypothetical fracture lengths and azimuths',
337 |      'hotspots': 'Table of locations, names, and symbol sizes of hotpots from Müller et al. (1993)',
    |

pygmt/datasets/samples.py:337:80: W505 Doc line too long (100 > 79 characters)
    |
335 |      'earth_relief_holes': 'Regional 20 arc-minutes Earth relief grid with holes',
336 |      'fractures': 'Table of hypothetical fracture lengths and azimuths',
337 |      'hotspots': 'Table of locations, names, and symbol sizes of hotpots from Müller et al. (1993)',
    |                                                                                ^^^^^^^^^^^^^^^^^^^^^ W505
338 |      'japan_quakes': 'Table of earthquakes around Japan from the NOAA NGDC database',
339 |      'mars_shape': 'Table of topographic signature of the hemispheric dichotomy of Mars from Smith and Zuber (1996)',
    |

pygmt/datasets/samples.py:338:80: W505 Doc line too long (85 > 79 characters)
    |
336 |      'fractures': 'Table of hypothetical fracture lengths and azimuths',
337 |      'hotspots': 'Table of locations, names, and symbol sizes of hotpots from Müller et al. (1993)',
338 |      'japan_quakes': 'Table of earthquakes around Japan from the NOAA NGDC database',
    |                                                                                ^^^^^^ W505
339 |      'mars_shape': 'Table of topographic signature of the hemispheric dichotomy of Mars from Smith and Zuber (1996)',
340 |      'maunaloa_co2': 'Table of CO2 readings from Mauna Loa',
    |

pygmt/datasets/samples.py:339:80: W505 Doc line too long (117 > 79 characters)
    |
337 |      'hotspots': 'Table of locations, names, and symbol sizes of hotpots from Müller et al. (1993)',
338 |      'japan_quakes': 'Table of earthquakes around Japan from the NOAA NGDC database',
339 |      'mars_shape': 'Table of topographic signature of the hemispheric dichotomy of Mars from Smith and Zuber (1996)',
    |                                                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ W505
340 |      'maunaloa_co2': 'Table of CO2 readings from Mauna Loa',
341 |      'notre_dame_topography': 'Table 5.11 in Davis: Statistics and Data Analysis in Geology',
    |

pygmt/datasets/samples.py:341:80: W505 Doc line too long (93 > 79 characters)
    |
339 |      'mars_shape': 'Table of topographic signature of the hemispheric dichotomy of Mars from Smith and Zuber (1996)',
340 |      'maunaloa_co2': 'Table of CO2 readings from Mauna Loa',
341 |      'notre_dame_topography': 'Table 5.11 in Davis: Statistics and Data Analysis in Geology',
    |                                                                                ^^^^^^^^^^^^^^ W505
342 |      'ocean_ridge_points': 'Table of ocean ridge points for the entire world',
343 |      'rock_compositions': 'Table of rock sample compositions',
    |

pygmt/src/nearneighbor.py:43:80: W505 Doc line too long (80 > 79 characters)
   |
41 |     Grid table data using a "Nearest neighbor" algorithm.
42 | 
43 |     **nearneighbor** reads arbitrarily located (*x*, *y*, *z*\ [, *w*]) triplets
   |                                                                                ^ W505
44 |     [quadruplets] and uses a nearest neighbor algorithm to assign a weighted
45 |     average value to each node that has one or more data points within a search
   |

pygmt/src/nearneighbor.py:142:80: W505 Doc line too long (83 > 79 characters)
    |
140 |     >>> data = pygmt.datasets.load_sample_data(name="bathymetry")
141 |     >>> # Create a new grid with 5 arc-minutes spacing in the designated region
142 |     >>> # Set search_radius to only consider points within 10 arc-minutes of a node
    |                                                                                ^^^^ W505
143 |     >>> output = pygmt.nearneighbor(
144 |     ...     data=data,
    |

Found 9 errors.
make: *** [Makefile:72: check] Error 1

You're right that the long URLs from #2728 don't raise W505 errors anymore, but there are 9 others we'll need to fix.
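One possible way to clear an error like the nearneighbor one above is plain rewrapping rather than a noqa; this is only a sketch of that option, not necessarily how the PR resolved each of the nine:

```python
def rewrap_sketch():
    """
    Illustration only: split the long doctest comment across two lines so
    neither exceeds the 79-character doc limit.

    >>> # Set search_radius to only consider points within
    >>> # 10 arc-minutes of a node
    """
```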


[tool.ruff.pycodestyle]
max-doc-length = 79

[tool.flakeheaven.plugins]
pycodestyle = ["+*", "-E501", "-W503"]
pyflakes = ["+*"]
[tool.ruff.per-file-ignores]
"__init__.py" = ["F401"] # Ignore `F401` (unused-import) in all `__init__.py` files

[tool.flakeheaven.exceptions."**/__init__.py"]
pyflakes = ["-F401"]
[tool.ruff.isort]
known-third-party = ["pygmt"]

[tool.pytest.ini_options]
minversion = "6.0"
addopts = "--verbose --durations=0 --durations-min=0.2 --doctest-modules --mpl --mpl-results-path=results"

[tool.isort]
profile = "black"
skip_gitignore = true
Comment on lines -104 to -106
Member Author:
Note that ruff's isort rules are nearly equivalent to isort's profile="black" settings (see https://docs.astral.sh/ruff/faq/#how-does-ruffs-import-sorting-compare-to-isort). Also, files that are gitignored are automatically skipped from being checked (see astral-sh/ruff#1234), so there is no need for these settings anymore.
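A rough sketch of the effect of the `known-third-party = ["pygmt"]` entry in the new `[tool.ruff.isort]` table above (illustrative file content, not the PR's own example): imports of pygmt in files outside the package, such as doc/conf.py, keep sorting into the third-party block.

```python
# How ruff's isort rules would group imports in a file like doc/conf.py under
# known-third-party = ["pygmt"] (illustrative ordering only):

# standard library
import datetime
from importlib.metadata import metadata

# third party -- pygmt is placed here by the known-third-party setting
from pygmt import __commit__, __version__
from sphinx_gallery.sorting import ExplicitOrder
```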

known_third_party = "pygmt"

[tool.pylint.MASTER]
# Use multiple processes to speed up Pylint. Specifying 0 will auto-detect the
# number of processors available to use.
@@ -136,5 +139,5 @@ max-module-lines=2000
disable=[
"duplicate-code",
"import-error",
"line-too-long", # Already checked by flakeheaven/pycodestyle
"line-too-long", # Already checked by ruff's pycodestyle
]