
Ruff 0.6 #12834

Merged
merged 13 commits on Aug 14, 2024

Conversation

@MichaReiser MichaReiser added the breaking Breaking API change label Aug 12, 2024
@MichaReiser MichaReiser added this to the v0.6 milestone Aug 12, 2024

codspeed-hq bot commented Aug 12, 2024

CodSpeed Performance Report

Merging #12834 will improve performance by 5.36%

Comparing ruff-0.6 (8da1c12) with main (3898d73)

Summary

⚡ 1 improvement
✅ 31 untouched benchmarks

Benchmarks breakdown

Benchmark | main | ruff-0.6 | Change
linter/default-rules[pydantic/types.py] | 1.9 ms | 1.8 ms | +5.36%


github-actions bot commented Aug 12, 2024

ruff-ecosystem results

Linter (stable)

ℹ️ ecosystem check detected linter changes. (+1753 -1275 violations, +2000 -0 fixes in 22 projects; 32 projects unchanged)

DisnakeDev/disnake (+1 -0 violations, +0 -0 fixes)

+ disnake/utils.py:1161:56: RUF101 [*] `PGH001` is a redirect to `S307`

RasaHQ/rasa (+1 -0 violations, +0 -0 fixes)

+ tests/graph_components/validators/test_default_recipe_validator.py:815:72: RUF101 [*] `RUF011` is a redirect to `B035`
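RUF101 marks `noqa` directives that suppress a rule through an old code that has since been redirected. A minimal sketch of the pattern (hypothetical snippet, using the `RUF011` → `B035` redirect reported above):

```python
# Before the autofix (RUF101 flagged): the directive uses the old,
# redirected rule code.
data = {"x": v for v in range(3)}  # noqa: RUF011

# After the autofix: the directive names the current code for the same rule
# (B035, static key in a dict comprehension).
data = {"x": v for v in range(3)}  # noqa: B035
```

At runtime the `noqa` comment is inert; only the linter reads it, so the fix never changes behavior.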

Snowflake-Labs/snowcli (+181 -2 violations, +0 -0 fixes)

+ src/snowflake/cli/_app/cli_app.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/commands_registration/command_plugins_loader.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/commands_registration/typer_registration.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/dev/docs/commands_docs_generator.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/dev/docs/generator.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/dev/docs/project_definition_docs_generator.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/loggers.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/main_typer.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/printing.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/snow_connector.py:15:1: I001 [*] Import block is un-sorted or un-formatted
... 173 additional changes omitted for project

alteryx/featuretools (+5 -0 violations, +0 -0 fixes)

+ docs/source/guides/time_series.ipynb:cell 1:4:1: E402 Module level import not at top of cell
+ docs/source/guides/time_series.ipynb:cell 1:6:1: E402 Module level import not at top of cell
+ docs/source/guides/time_series.ipynb:cell 1:7:1: E402 Module level import not at top of cell
+ docs/source/guides/time_series.ipynb:cell 1:8:1: E402 Module level import not at top of cell
+ docs/source/resources/frequently_asked_questions.ipynb:cell 101:5:8: E721 Use `is` and `is not` for type comparisons, or `isinstance()` for isinstance checks
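The E721 hit above concerns type comparisons. A minimal sketch of what the rule flags and the two preferred forms (hypothetical values):

```python
x = 3.0

# Flagged by E721: comparing type objects with `==`
is_float_eq = type(x) == float

# Preferred: `is` for exact-type checks...
is_float_is = type(x) is float

# ...or isinstance() when subclasses should also match
is_float_inst = isinstance(x, float)
```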

PlasmaPy/PlasmaPy (+264 -0 violations, +0 -0 fixes)

+ docs/notebooks/analysis/fit_functions.ipynb:cell 12:3:39: NPY002 Replace legacy `np.random.normal` call with `np.random.Generator`
+ docs/notebooks/analysis/nullpoint.ipynb:cell 11:10:1: T201 `print` found
+ docs/notebooks/analysis/nullpoint.ipynb:cell 11:9:1: T201 `print` found
+ docs/notebooks/analysis/nullpoint.ipynb:cell 14:10:1: T201 `print` found
+ docs/notebooks/analysis/nullpoint.ipynb:cell 14:11:1: T201 `print` found
+ docs/notebooks/analysis/nullpoint.ipynb:cell 9:1:5: D103 Missing docstring in public function
+ docs/notebooks/analysis/swept_langmuir/find_floating_potential.ipynb:cell 24:29:42: FBT003 Boolean positional value in function call
+ docs/notebooks/analysis/swept_langmuir/find_floating_potential.ipynb:cell 24:29:48: FBT003 Boolean positional value in function call
+ docs/notebooks/analysis/swept_langmuir/find_floating_potential.ipynb:cell 24:30:42: FBT003 Boolean positional value in function call
+ docs/notebooks/analysis/swept_langmuir/find_floating_potential.ipynb:cell 24:30:48: FBT003 Boolean positional value in function call
... 254 additional changes omitted for project
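The NPY002 finding in the PlasmaPy notebooks concerns NumPy's legacy global random API. A minimal sketch of the migration the rule suggests (hypothetical parameters):

```python
import numpy as np

# Flagged legacy style: module-level function backed by global state
# noise = np.random.normal(0.0, 1.0, size=100)

# Preferred: an explicit, seedable Generator instance
rng = np.random.default_rng(seed=42)
noise = rng.normal(0.0, 1.0, size=100)
```

The Generator form makes seeding local and reproducible rather than mutating process-wide state.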

apache/airflow (+37 -317 violations, +1214 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --no-preview --select ALL

- airflow/__init__.py:108:5: RET506 Unnecessary `elif` after `raise` statement
+ airflow/__init__.py:108:5: RET506 [*] Unnecessary `elif` after `raise` statement
- airflow/api/auth/backend/kerberos_auth.py:128:9: RET505 Unnecessary `elif` after `return` statement
+ airflow/api/auth/backend/kerberos_auth.py:128:9: RET505 [*] Unnecessary `elif` after `return` statement
- airflow/api/auth/backend/kerberos_auth.py:165:13: RET505 Unnecessary `elif` after `return` statement
+ airflow/api/auth/backend/kerberos_auth.py:165:13: RET505 [*] Unnecessary `elif` after `return` statement
- airflow/api_connexion/endpoints/config_endpoint.py:119:5: RET505 Unnecessary `elif` after `return` statement
+ airflow/api_connexion/endpoints/config_endpoint.py:119:5: RET505 [*] Unnecessary `elif` after `return` statement
... 917 additional changes omitted for rule RET505
- airflow/api_connexion/endpoints/forward_to_fab_endpoint.py:47:9: RET506 Unnecessary `else` after `raise` statement
+ airflow/api_connexion/endpoints/forward_to_fab_endpoint.py:47:9: RET506 [*] Unnecessary `else` after `raise` statement
- airflow/cli/commands/dag_command.py:280:5: RET506 Unnecessary `elif` after `raise` statement
+ airflow/cli/commands/dag_command.py:280:5: RET506 [*] Unnecessary `elif` after `raise` statement
... 255 additional changes omitted for rule RET506
+ airflow/cli/commands/kubernetes_command.py:91:5: PLR1730 [*] Replace `if` statement with `min_pending_minutes = max(min_pending_minutes, 5)`
- airflow/dag_processing/manager.py:586:21: RET508 Unnecessary `elif` after `break` statement
+ airflow/dag_processing/manager.py:586:21: RET508 [*] Unnecessary `elif` after `break` statement
... 1553 additional changes omitted for project
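The airflow diff above shows `RET505`/`RET506` findings gaining the `[*]` marker, i.e. these rules became auto-fixable. A minimal sketch of what RET505 flags and what the fix produces (hypothetical function):

```python
def sign_before(n: int) -> str:
    if n > 0:
        return "positive"
    elif n < 0:  # RET505: unnecessary `elif` after `return`
        return "negative"
    else:
        return "zero"


def sign_after(n: int) -> str:
    # After the fix: the branch above already returned, so a plain
    # `if` and a trailing return express the same logic.
    if n > 0:
        return "positive"
    if n < 0:
        return "negative"
    return "zero"
```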

apache/superset (+739 -129 violations, +24 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --no-preview --select ALL

- RELEASING/verify_release.py:52:5: RET505 Unnecessary `else` after `return` statement
+ RELEASING/verify_release.py:52:5: RET505 [*] Unnecessary `else` after `return` statement
- RELEASING/verify_release.py:93:9: RET505 Unnecessary `elif` after `return` statement
+ RELEASING/verify_release.py:93:9: RET505 [*] Unnecessary `elif` after `return` statement
- scripts/build_docker.py:67:5: RET505 Unnecessary `elif` after `return` statement
+ scripts/build_docker.py:67:5: RET505 [*] Unnecessary `elif` after `return` statement
... 11 additional changes omitted for rule RET505
+ superset-frontend/plugins/legacy-plugin-chart-country-map/scripts/Country Map GeoJSON Generator.ipynb:cell 10:3:15: Q000 [*] Single quotes found but double quotes preferred
+ superset-frontend/plugins/legacy-plugin-chart-country-map/scripts/Country Map GeoJSON Generator.ipynb:cell 10:3:1: T201 `print` found
+ superset-frontend/plugins/legacy-plugin-chart-country-map/scripts/Country Map GeoJSON Generator.ipynb:cell 10:3:38: Q000 [*] Single quotes found but double quotes preferred
+ superset-frontend/plugins/legacy-plugin-chart-country-map/scripts/Country Map GeoJSON Generator.ipynb:cell 11:1:1: PD901 Avoid using the generic variable name `df` for DataFrames
... 882 additional changes omitted for project

aws/aws-sam-cli (+4 -0 violations, +0 -0 fixes)

+ samcli/lib/observability/cw_logs/cw_log_puller.py:136:17: PLR1730 [*] Replace `if` statement with `max` call
+ samcli/lib/observability/xray_traces/xray_event_puller.py:163:21: PLR1730 [*] Replace `if` statement with `max` call
+ samcli/lib/observability/xray_traces/xray_events.py:52:13: PLR1730 [*] Replace `if` statement with `max` call
+ samcli/lib/observability/xray_traces/xray_events.py:87:13: PLR1730 [*] Replace `if` statement with `max` call
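PLR1730 flags an `if` statement that reimplements `min()`/`max()`. A minimal sketch of the before/after shape (hypothetical names, modeled on the `max(..., 5)` suggestion reported for airflow above):

```python
def clamp_before(minutes: int) -> int:
    # PLR1730 flags this as a reimplementation of max()
    if minutes < 5:
        minutes = 5
    return minutes


def clamp_after(minutes: int) -> int:
    # Suggested fix: a single max() call
    return max(minutes, 5)
```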

bokeh/bokeh (+185 -457 violations, +306 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --no-preview --select ALL

- docs/bokeh/docserver.py:31:1: I001 [*] Import block is un-sorted or un-formatted
- docs/bokeh/source/conf.py:9:1: I001 [*] Import block is un-sorted or un-formatted
- docs/bokeh/source/docs/first_steps/examples/first_steps_5_vectorize_color_and_size.py:1:1: I001 [*] Import block is un-sorted or un-formatted
- examples/advanced/extensions/wrapping.py:1:1: I001 [*] Import block is un-sorted or un-formatted
- examples/advanced/integration/d3-voronoi.py:1:1: I001 [*] Import block is un-sorted or un-formatted
- examples/basic/annotations/band.py:11:1: I001 [*] Import block is un-sorted or un-formatted
... 299 additional changes omitted for rule I001
+ examples/models/structure/ModelStructureExample.ipynb:cell 11:2:5: T201 `print` found
+ examples/models/structure/ModelStructureExample.ipynb:cell 13:1:1: B018 Found useless expression. Either assign it to a variable or remove it.
+ examples/models/structure/ModelStructureExample.ipynb:cell 13:2:17: Q000 [*] Single quotes found but double quotes preferred
+ examples/models/structure/ModelStructureExample.ipynb:cell 13:2:40: Q000 [*] Single quotes found but double quotes preferred
... 938 additional changes omitted for project

freedomofpress/securedrop (+53 -0 violations, +0 -0 fixes)

+ admin/tests/test_integration.py:534:1: PT001 [*] Use `@pytest.fixture` over `@pytest.fixture()`
+ molecule/testinfra/app-code/test_securedrop_app_code.py:55:1: PT023 [*] Use `@pytest.mark.skip_in_prod` over `@pytest.mark.skip_in_prod()`
+ molecule/testinfra/app-code/test_securedrop_app_code.py:69:1: PT023 [*] Use `@pytest.mark.skip_in_prod` over `@pytest.mark.skip_in_prod()`
+ molecule/testinfra/app/test_app_network.py:12:1: PT023 [*] Use `@pytest.mark.skip_in_prod` over `@pytest.mark.skip_in_prod()`
+ molecule/testinfra/app/test_appenv.py:18:1: PT023 [*] Use `@pytest.mark.skip_in_prod` over `@pytest.mark.skip_in_prod()`
+ molecule/testinfra/app/test_appenv.py:90:1: PT023 [*] Use `@pytest.mark.skip_in_prod` over `@pytest.mark.skip_in_prod()`
+ molecule/testinfra/app/test_ossec_agent.py:42:1: PT023 [*] Use `@pytest.mark.xfail` over `@pytest.mark.xfail()`
... 23 additional changes omitted for rule PT023
+ securedrop/tests/conftest.py:102:1: PT001 [*] Use `@pytest.fixture` over `@pytest.fixture()`
+ securedrop/tests/conftest.py:122:1: PT001 [*] Use `@pytest.fixture` over `@pytest.fixture()`
+ securedrop/tests/conftest.py:151:1: PT001 [*] Use `@pytest.fixture` over `@pytest.fixture()`
... 43 additional changes omitted for project
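PT001 and PT023 both concern empty parentheses on pytest decorators. A minimal sketch of the two styles (hypothetical fixture names; behavior is identical either way):

```python
import pytest


# Flagged style: redundant empty parentheses on the decorator
@pytest.fixture()
def db_before():
    return {}


# Preferred style under Ruff's default pytest-style settings
@pytest.fixture
def db_after():
    return {}


# PT023 applies the same preference to marks, e.g. `xfail` without `()`
@pytest.mark.xfail
def test_known_gap():
    raise AssertionError("documented gap")
```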

jrnl-org/jrnl (+2 -0 violations, +0 -0 fixes)

+ tests/lib/given_steps.py:12:23: ICN001 `xml.etree.ElementTree` should be imported as `ET`
+ tests/lib/then_steps.py:7:23: ICN001 `xml.etree.ElementTree` should be imported as `ET`
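ICN001 enforces conventional import aliases; for `xml.etree.ElementTree` the expected alias is `ET`. A minimal sketch (hypothetical XML):

```python
# Preferred over `import xml.etree.ElementTree as ElementTree` or no alias
import xml.etree.ElementTree as ET

root = ET.fromstring("<journal><entry title='first'/><entry title='second'/></journal>")
titles = [entry.get("title") for entry in root.iter("entry")]
```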

langchain-ai/langchain (+31 -0 violations, +0 -0 fixes)

+ libs/cli/langchain_cli/integration_template/docs/chat.ipynb:cell 11:4:89: E501 Line too long (102 > 88)
+ libs/cli/langchain_cli/integration_template/docs/chat.ipynb:cell 12:1:1: T201 `print` found
+ libs/cli/langchain_cli/integration_template/docs/chat.ipynb:cell 14:7:89: E501 Line too long (97 > 88)
+ libs/cli/langchain_cli/integration_template/docs/chat.ipynb:cell 3:5:89: E501 Line too long (98 > 88)
+ libs/cli/langchain_cli/integration_template/docs/document_loaders.ipynb:cell 12:1:1: T201 `print` found
+ libs/cli/langchain_cli/integration_template/docs/document_loaders.ipynb:cell 3:4:89: E501 Line too long (94 > 88)
+ libs/cli/langchain_cli/integration_template/docs/llms.ipynb:cell 3:5:89: E501 Line too long (98 > 88)
+ libs/cli/langchain_cli/integration_template/docs/provider.ipynb:cell 2:1:1: I001 [*] Import block is un-sorted or un-formatted
+ libs/cli/langchain_cli/integration_template/docs/provider.ipynb:cell 2:1:29: F401 [*] `__module_name__.Chat__ModuleName__` imported but unused
+ libs/cli/langchain_cli/integration_template/docs/provider.ipynb:cell 2:2:29: F401 [*] `__module_name__.__ModuleName__LLM` imported but unused
... 21 additional changes omitted for project

milvus-io/pymilvus (+14 -0 violations, +0 -0 fixes)

+ examples/hello_milvus.ipynb:cell 14:13:9: T201 `print` found
+ examples/hello_milvus.ipynb:cell 14:14:1: T201 `print` found
+ examples/hello_milvus.ipynb:cell 16:5:1: T201 `print` found
+ examples/hello_milvus.ipynb:cell 16:6:1: T201 `print` found
+ examples/hello_milvus.ipynb:cell 18:7:9: T201 `print` found
+ examples/hello_milvus.ipynb:cell 18:8:1: T201 `print` found
... 5 additional changes omitted for rule T201
+ examples/hello_milvus.ipynb:cell 2:1:1: I001 [*] Import block is un-sorted or un-formatted
+ pymilvus/bulk_writer/buffer.py:270:13: PLR1730 [*] Replace `if` statement with `max` call
+ pymilvus/bulk_writer/buffer.py:272:13: PLR1730 [*] Replace `if` statement with `min` call
+ pymilvus/orm/iterator.py:54:9: PLR1730 [*] Replace `if` statement with `min` call
... 4 additional changes omitted for project

... Truncated remaining completed project reports due to GitHub comment length restrictions

Changes by rule (87 rules affected)

code total + violation - violation + fix - fix
RET505 1576 0 0 1576 0
PT023 661 196 465 0 0
Q000 539 539 0 0 0
I001 492 193 299 0 0
RET506 372 0 0 372 0
PT004 356 0 356 0 0
PT001 295 159 136 0 0
T201 151 151 0 0 0
E501 74 74 0 0 0
ANN001 62 62 0 0 0
PLW0642 52 52 0 0 0
RET507 36 0 0 36 0
NPY002 32 32 0 0 0
D103 25 25 0 0 0
W293 23 23 0 0 0
ANN201 22 22 0 0 0
TCH001 18 18 0 0 0
TCH002 18 0 18 0 0
FBT003 17 17 0 0 0
RET508 16 0 0 16 0
PYI062 15 15 0 0 0
PLR1730 13 13 0 0 0
F401 12 12 0 0 0
W291 11 11 0 0 0
COM812 11 11 0 0 0
E402 7 7 0 0 0
PTH123 6 6 0 0 0
C408 6 6 0 0 0
F821 5 5 0 0 0
ICN001 5 5 0 0 0
ERA001 5 5 0 0 0
ANN202 5 5 0 0 0
E701 5 5 0 0 0
PD011 4 4 0 0 0
PD901 4 4 0 0 0
PLR2004 4 4 0 0 0
UP031 4 4 0 0 0
RUF101 3 3 0 0 0
B007 3 3 0 0 0
T203 3 3 0 0 0
SLF001 3 3 0 0 0
RUF001 3 3 0 0 0
PTH110 3 3 0 0 0
E741 2 2 0 0 0
PLR1736 2 2 0 0 0
PLW2901 2 2 0 0 0
I002 2 2 0 0 0
PD002 2 2 0 0 0
PLW0602 2 2 0 0 0
ARG001 2 2 0 0 0
E721 1 1 0 0 0
B008 1 1 0 0 0
TID253 1 1 0 0 0
SIM115 1 1 0 0 0
S301 1 1 0 0 0
PLW0211 1 1 0 0 0
PLR0913 1 1 0 0 0
BLE001 1 1 0 0 0
RUF010 1 1 0 0 0
PD008 1 1 0 0 0
S113 1 1 0 0 0
PTH111 1 1 0 0 0
PTH202 1 1 0 0 0
PTH102 1 1 0 0 0
FBT001 1 1 0 0 0
D400 1 1 0 0 0
D415 1 1 0 0 0
PTH107 1 1 0 0 0
C405 1 1 0 0 0
B018 1 1 0 0 0
N803 1 1 0 0 0
C901 1 1 0 0 0
SIM910 1 1 0 0 0
E711 1 1 0 0 0
RUF005 1 1 0 0 0
D205 1 1 0 0 0
D212 1 1 0 0 0
F841 1 1 0 0 0
SIM108 1 1 0 0 0
UP030 1 1 0 0 0
UP032 1 1 0 0 0
S506 1 1 0 0 0
PLW0127 1 1 0 0 0
PLW0128 1 1 0 0 0
F811 1 1 0 0 0
PYI057 1 1 0 0 0
PT005 1 0 1 0 0

Linter (preview)

ℹ️ ecosystem check detected linter changes. (+278 -744 violations, +0 -0 fixes in 19 projects; 35 projects unchanged)

DisnakeDev/disnake (+1 -1 violations, +0 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview

+ disnake/ext/commands/flag_converter.py:314:28: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- disnake/ext/commands/flag_converter.py:314:28: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
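The preview diffs mostly show `RUF025` being recoded as `C420`; the underlying check is unchanged. A minimal sketch of what it flags (hypothetical names):

```python
flags = ["verbose", "dry_run", "color"]

# Flagged: a dict comprehension that maps every key to the same value
defaults_before = {name: False for name in flags}

# Suggested fix: dict.fromkeys() builds the identical mapping
defaults_after = dict.fromkeys(flags, False)
```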

RasaHQ/rasa (+0 -3 violations, +0 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview

- rasa/core/evaluation/marker_stats.py:114:51: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- rasa/core/featurizers/single_state_featurizer.py:119:20: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- tests/core/featurizers/test_precomputation.py:110:17: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead

Snowflake-Labs/snowcli (+181 -2 violations, +0 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview

+ src/snowflake/cli/_app/cli_app.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/commands_registration/command_plugins_loader.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/commands_registration/typer_registration.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/dev/docs/commands_docs_generator.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/dev/docs/generator.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/dev/docs/project_definition_docs_generator.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/loggers.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/main_typer.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/printing.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/snow_connector.py:15:1: I001 [*] Import block is un-sorted or un-formatted
... 173 additional changes omitted for project

PlasmaPy/PlasmaPy (+1 -0 violations, +0 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview

+ src/plasmapy/particles/ionization_state_collection.py:693:40: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead

apache/airflow (+3 -319 violations, +0 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview --select ALL

+ helm_tests/airflow_aux/test_annotations.py:409:63: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- helm_tests/airflow_aux/test_annotations.py:409:63: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- helm_tests/airflow_aux/test_pod_template_file.py:48:9: PT004 Fixture `setup_test_cases` does not return anything, add leading underscore
- kubernetes_tests/conftest.py:27:5: PT004 Fixture `initialize_providers_manager` does not return anything, add leading underscore
- kubernetes_tests/test_base.py:58:9: PT004 Fixture `base_tests_setup` does not return anything, add leading underscore
- kubernetes_tests/test_kubernetes_pod_operator.py:89:5: PT004 Fixture `mock_get_connection` does not return anything, add leading underscore
- kubernetes_tests/test_kubernetes_pod_operator.py:98:9: PT004 Fixture `setup_tests` does not return anything, add leading underscore
- tests/always/test_pandas.py:31:9: PT004 Fixture `setup_test_cases` does not return anything, add leading underscore
... 311 additional changes omitted for rule PT004
- tests/providers/ssh/operators/test_ssh.py:68:9: PT005 Fixture `_patch_exec_ssh_client` returns a value, remove leading underscore
+ tests/test_utils/get_all_tests.py:26:23: ICN001 `xml.etree.ElementTree` should be imported as `ET`
+ tests/ti_deps/deps/test_mapped_task_upstream_dep.py:459:36: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- tests/ti_deps/deps/test_mapped_task_upstream_dep.py:459:36: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
... 310 additional changes omitted for project

apache/superset (+3 -43 violations, +0 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview --select ALL

+ superset/reports/notifications/email.py:58:28: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- superset/reports/notifications/email.py:58:28: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
+ superset/reports/notifications/slack_mixin.py:107:43: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- superset/reports/notifications/slack_mixin.py:107:43: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
+ superset/reports/notifications/slack_mixin.py:97:39: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- superset/reports/notifications/slack_mixin.py:97:39: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- tests/integration_tests/celery_tests.py:71:5: PT004 Fixture `setup_sqllab` does not return anything, add leading underscore
- tests/integration_tests/charts/api_tests.py:1274:9: PT004 Fixture `load_energy_charts` does not return anything, add leading underscore
- tests/integration_tests/charts/api_tests.py:89:9: PT004 Fixture `clear_data_cache` does not return anything, add leading underscore
- tests/integration_tests/charts/data/api_tests.py:93:5: PT004 Fixture `skip_by_backend` does not return anything, add leading underscore
... 36 additional changes omitted for project

bokeh/bokeh (+23 -318 violations, +0 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview --select ALL

- docs/bokeh/docserver.py:31:1: I001 [*] Import block is un-sorted or un-formatted
- docs/bokeh/source/conf.py:9:1: I001 [*] Import block is un-sorted or un-formatted
- docs/bokeh/source/docs/first_steps/examples/first_steps_5_vectorize_color_and_size.py:1:1: I001 [*] Import block is un-sorted or un-formatted
- examples/advanced/extensions/wrapping.py:1:1: I001 [*] Import block is un-sorted or un-formatted
- examples/advanced/integration/d3-voronoi.py:1:1: I001 [*] Import block is un-sorted or un-formatted
- examples/basic/annotations/band.py:11:1: I001 [*] Import block is un-sorted or un-formatted
... 292 additional changes omitted for rule I001
+ src/bokeh/command/subcommands/file_output.py:145:17: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- src/bokeh/command/subcommands/file_output.py:145:17: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
+ src/bokeh/command/subcommands/serve.py:831:17: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- src/bokeh/command/subcommands/serve.py:831:17: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- src/bokeh/events.py:182:9: PLW0642 Invalid assignment to `cls` argument in class method
+ src/bokeh/events.py:182:9: PLW0642 Reassigned `cls` variable in class method
+ src/bokeh/sampledata/us_counties.py:45:33: ICN001 `xml.etree.ElementTree` should be imported as `ET`
+ src/bokeh/sampledata/us_states.py:44:33: ICN001 `xml.etree.ElementTree` should be imported as `ET`
+ tests/integration/models/test_plot.py:27:28: TCH001 Move application import `bokeh.document.Document` into a type-checking block
- tests/integration/models/test_plot.py:27:28: TCH002 Move third-party import `bokeh.document.Document` into a type-checking block
... 325 additional changes omitted for project

docker/docker-py (+1 -0 violations, +0 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview

+ docker/types/containers.py:725:22: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead

freedomofpress/securedrop (+1 -0 violations, +0 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview

+ molecule/testinfra/ossec/test_journalist_mail.py:14:45: RUF100 [*] Unused `noqa` directive (non-enabled: `PT004`)

ibis-project/ibis (+1 -1 violations, +0 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview

+ ibis/common/egraph.py:776:17: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- ibis/common/egraph.py:776:17: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead

jrnl-org/jrnl (+2 -0 violations, +0 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview

+ tests/lib/given_steps.py:12:23: ICN001 `xml.etree.ElementTree` should be imported as `ET`
+ tests/lib/then_steps.py:7:23: ICN001 `xml.etree.ElementTree` should be imported as `ET`

mlflow/mlflow (+1 -0 violations, +0 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview

+ tests/metrics/genai/test_genai_metrics.py:554:37: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead

pandas-dev/pandas (+51 -51 violations, +0 -0 fixes)

ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview

- pandas/core/arrays/categorical.py:572:13: PLW0642 Invalid assignment to `self` argument in instance method
+ pandas/core/arrays/categorical.py:572:13: PLW0642 Reassigned `self` variable in instance method
- pandas/core/arrays/datetimelike.py:1065:9: PLW0642 Invalid assignment to `self` argument in instance method
+ pandas/core/arrays/datetimelike.py:1065:9: PLW0642 Reassigned `self` variable in instance method
- pandas/core/arrays/datetimelike.py:1080:9: PLW0642 Invalid assignment to `self` argument in instance method
+ pandas/core/arrays/datetimelike.py:1080:9: PLW0642 Reassigned `self` variable in instance method
- pandas/core/arrays/datetimelike.py:1081:9: PLW0642 Invalid assignment to `self` argument in instance method
+ pandas/core/arrays/datetimelike.py:1081:9: PLW0642 Reassigned `self` variable in instance method
- pandas/core/arrays/datetimelike.py:1109:9: PLW0642 Invalid assignment to `self` argument in instance method
+ pandas/core/arrays/datetimelike.py:1109:9: PLW0642 Reassigned `self` variable in instance method
... 92 additional changes omitted for project
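For PLW0642, only the diagnostic wording changed ("Invalid assignment to `self`" became "Reassigned `self` variable"); the trigger is the same. A minimal sketch of the flagged pattern (hypothetical class):

```python
class Wrapper:
    def __init__(self, value: int) -> None:
        self.value = value

    def shifted(self, delta: int) -> "Wrapper":
        # PLW0642: `self` is rebound to a new object instead of being
        # used as the method's receiver
        self = Wrapper(self.value + delta)  # noqa: PLW0642
        return self
```

Rebinding `self` only changes the local name; the original instance is untouched, which is why the rule calls it out.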

... Truncated remaining completed project reports due to GitHub comment length restrictions

Changes by rule (10 rules affected)

code total + violation - violation + fix - fix
I001 480 181 299 0 0
PT004 356 0 356 0 0
PLW0642 104 52 52 0 0
C420 18 18 0 0 0
TCH001 18 18 0 0 0
RUF025 18 0 18 0 0
TCH002 18 0 18 0 0
ICN001 5 5 0 0 0
RUF100 4 4 0 0 0
PT005 1 0 1 0 0

Formatter (stable)

ℹ️ ecosystem check detected format changes. (+1240 -987 lines in 26 files in 8 projects; 46 projects unchanged)

PlasmaPy/PlasmaPy (+69 -55 lines across 3 files)

docs/notebooks/analysis/fit_functions.ipynb~L291

     "plt.legend(fontsize=14, loc=\"upper left\")\n",
     "\n",
     "txt = f\"$f(x) = {explin.latex_str}$\\n$r^2 = {explin.rsq:.3f}$\\n\"\n",
-    "for name, param, err in zip(explin.param_names, explin.params, explin.param_errors, strict=False):\n",
+    "for name, param, err in zip(\n",
+    "    explin.param_names, explin.params, explin.param_errors, strict=False\n",
+    "):\n",
     "    txt += f\"{name} = {param:.3f} $\\\\pm$ {err:.3f}\\n\"\n",
     "txt_loc = [-13.0, ax.get_ylim()[1]]\n",
     "txt_loc = ax.transAxes.inverted().transform(ax.transData.transform(txt_loc))\n",

docs/notebooks/analysis/swept_langmuir/find_floating_potential.ipynb~L308

     "axs[0].legend(fontsize=12)\n",
     "\n",
     "# zoom on fit\n",
-    "for ii, label, rtn in zip([1, 2], [\"Exponential\", \"Linear\"], [results, results_lin], strict=False):\n",
+    "for ii, label, rtn in zip(\n",
+    "    [1, 2], [\"Exponential\", \"Linear\"], [results, results_lin], strict=False\n",
+    "):\n",
     "    vf = rtn[0]\n",
     "    extras = rtn[1]\n",
     "\n",

docs/notebooks/simulation/particle_tracker.ipynb~L6

    "metadata": {
     "collapsed": false
    },
+   "outputs": [],
    "source": [
     "%matplotlib inline"
-   ],
-   "outputs": []
+   ]
   },
   {
    "cell_type": "markdown",

docs/notebooks/simulation/particle_tracker.ipynb~L28

    "metadata": {
     "collapsed": false
    },
+   "outputs": [],
    "source": [
     "import astropy.units as u\n",
     "import matplotlib.pyplot as plt\n",

docs/notebooks/simulation/particle_tracker.ipynb~L41

     "from plasmapy.simulation.particle_tracker.termination_conditions import (\n",
     "    TimeElapsedTerminationCondition,\n",
     ")"
-   ],
-   "outputs": []
+   ]
   },
   {
    "cell_type": "raw",

docs/notebooks/simulation/particle_tracker.ipynb~L66

    "metadata": {
     "collapsed": false
    },
+   "outputs": [],
    "source": [
     "grid_length = 10\n",
     "grid = CartesianGrid(-1 * u.m, 1 * u.m, num=grid_length)"
-   ],
-   "outputs": []
+   ]
   },
   {
    "cell_type": "markdown",

docs/notebooks/simulation/particle_tracker.ipynb~L88

    "metadata": {
     "collapsed": false
    },
+   "outputs": [],
    "source": [
     "Bx_fill = 4 * u.T\n",
     "Bx = np.full(grid.shape, Bx_fill.value) * u.T\n",

docs/notebooks/simulation/particle_tracker.ipynb~L96

     "Ey = np.full(grid.shape, Ey_fill.value) * u.V / u.m\n",
     "\n",
     "grid.add_quantities(B_x=Bx, E_y=Ey)\n",
-    "ExB_drift(np.asarray([0, Ey_fill.value, 0]) * u.V / u.m, np.asarray([Bx_fill.value, 0, 0]) * u.T)"
-   ],
-   "outputs": []
+    "ExB_drift(\n",
+    "    np.asarray([0, Ey_fill.value, 0]) * u.V / u.m,\n",
+    "    np.asarray([Bx_fill.value, 0, 0]) * u.T,\n",
+    ")"
+   ]
   },
   {
    "cell_type": "markdown",
-   "source": [
-    "|ParticleTracker| takes arrays of particle positions and velocities of the shape [nparticles, 3], so these arrays represent one particle starting at the origin."
-   ],
    "metadata": {
     "collapsed": false
-   }
+   },
+   "source": [
+    "|ParticleTracker| takes arrays of particle positions and velocities of the shape [nparticles, 3], so these arrays represent one particle starting at the origin."
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
    "source": [
     "x0 = [[0, 0, 0]] * u.m\n",
     "v0 = [[1, 0, 0]] * u.m / u.s\n",
     "particle = Particle(\"p+\")"
-   ],
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": []
+   ]
   },
   {
    "cell_type": "markdown",
+   "metadata": {
+    "collapsed": false
+   },
    "source": [
     "Initialize our stop condition and save routine. We can determine a relevant\n",
     "duration for the experiment by calculating the gyroperiod for the particle."
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
+   "metadata": {
+    "collapsed": false
+   },
+   "outputs": [],
    "source": [
-    "particle_gyroperiod = 1 / gyrofrequency(Bx_fill, particle).to(u.Hz, equivalencies=u.dimensionless_angles())\n",
+    "particle_gyroperiod = 1 / gyrofrequency(Bx_fill, particle).to(\n",
+    "    u.Hz, equivalencies=u.dimensionless_angles()\n",
+    ")\n",
     "\n",
     "simulation_duration = 100 * particle_gyroperiod\n",
     "save_interval = particle_gyroperiod / 10\n",
     "\n",
     "termination_condition = TimeElapsedTerminationCondition(simulation_duration)\n",
     "save_routine = IntervalSaveRoutine(save_interval)"
-   ],
-   "metadata": {
-    "collapsed": false
-   },
-   "outputs": []
+   ]
   },
   {
    "cell_type": "markdown",
-   "source": [
-    "Initialize the trajectory calculation."
-   ],
    "metadata": {
     "collapsed": false
-   }
+   },
+   "source": [
+    "Initialize the trajectory calculation."
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
-   "source": [
-    "simulation = ParticleTracker(grid, save_routine=save_routine, termination_condition=termination_condition, verbose=False)"
-   ],
    "metadata": {
     "collapsed": false
    },
-   "outputs": []
+   "outputs": [],
+   "source": [
+    "simulation = ParticleTracker(\n",
+    "    grid,\n",
+    "    save_routine=save_routine,\n",
+    "    termination_condition=termination_condition,\n",
+    "    verbose=False,\n",
+    ")"
+   ]
   },
   {
    "cell_type": "markdown",
+   "metadata": {
+    "collapsed": false
+   },
    "source": [
     "We still have to initialize the particle's velocity. We'll limit ourselves to\n",
     "one in the x direction, parallel to the magnetic field B -\n",
     "that way, it won't turn in the z direction.\n",
     "\n"
-   ],
-   "metadata": {
-    "collapsed": false
-   }
+   ]
   },
   {
    "cell_type": "code",
    "execution_count": null,
-   "source": [
-    "simulation.load_particles(x0, v0, particle)"
-   ],
    "metadata": {
     "collapsed": false
    },
-   "outputs": []
+   "outputs": [],
+   "source": [
+    "simulation.load_particles(x0, v0, particle)"
+   ]
   },
   {
    "cell_type": "markdown",

docs/notebooks/simulation/particle_tracker.ipynb~L205

    "metadata": {
     "collapsed": false
    },
+   "outputs": [],
    "source": [
     "simulation.run()"
-   ],
-   "outputs": []
+   ]
   },
   {
    "cell_type": "markdown",

docs/notebooks/simulation/particle_tracker.ipynb~L221

    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
+   "outputs": [],
    "source": [
     "results = save_routine.results\n",
     "particle_trajectory = results[\"x\"][:, 0]\n",
     "particle_position_z = particle_trajectory[:, 2]\n",
     "\n",
     "plt.scatter(results[\"time\"], particle_position_z)"
-   ],
-   "outputs": []
+   ]
   },
   {
    "cell_type": "markdown",

docs/notebooks/simulation/particle_tracker.ipynb~L246

      "nbsphinx-thumbnail"
     ]
    },
+   "outputs": [],
    "source": [
     "fig = plt.figure()\n",
-    "ax = fig.add_subplot(projection='3d')\n",
+    "ax = fig.add_subplot(projection=\"3d\")\n",
     "\n",
     "ax.plot(*particle_trajectory.T)\n",
     "ax.set_xlabel(\"X\")\n",
     "ax.set_ylabel(\"Y\")\n",
     "ax.set_zlabel(\"Z\")"
-   ],
-   "outputs": []
+   ]
   },
   {
    "cell_type": "markdown",

docs/notebooks/simulation/particle_tracker.ipynb~L271

    "metadata": {
     "collapsed": false
    },
+   "outputs": [],
    "source": [
     "v_mean = results[\"v\"][:, :, 2].mean()\n",
     "print(\n",
     "    f\"The calculated drift velocity is {v_mean:.4f} to compare with the \"\n",
     "    f\"expected E0/B0 = {-(Ey_fill/Bx_fill).value:.4f}\"\n",
     ")"
-   ],
-   "outputs": []
+   ]
   }
  ],
  "metadata": {

apache/airflow (+20 -20 lines across 2 files)

dev/stats/explore_pr_candidates.ipynb~L19

    "metadata": {},
    "outputs": [],
    "source": [
-    "file = open(\"prlist\",\"rb\") # open the pickled file\n",
+    "file = open(\"prlist\", \"rb\")  # open the pickled file\n",
     "selected_prs = pickle.load(file)"
    ]
   },

dev/stats/explore_pr_candidates.ipynb~L33

     "\n",
     "for pr_stat in selected_prs:\n",
     "    data = {\n",
-    "        'number': [pr_stat.pull_request.number],\n",
-    "        'url': [pr_stat.pull_request.html_url],\n",
-    "        'title': [pr_stat.pull_request.title],\n",
-    "        'overall_score': [pr_stat.score],\n",
-    "        'label_score': [pr_stat.label_score],\n",
-    "        'length_score': [pr_stat.length_score],\n",
-    "        'body_length': [pr_stat.body_length],\n",
-    "        'comment_length': [pr_stat.comment_length],\n",
-    "        'interaction_score': [pr_stat.interaction_score],\n",
-    "        'comments': [pr_stat.num_comments],\n",
-    "        'reactions': [pr_stat.num_reactions],\n",
-    "        'reviews': [pr_stat.num_reviews],\n",
-    "        'num_interacting_users': [pr_stat.num_interacting_users],\n",
-    "        'change_score': [pr_stat.change_score],\n",
-    "        'additions': [pr_stat.num_additions],\n",
-    "        'deletions': [pr_stat.num_deletions],\n",
-    "        'num_changed_files': [pr_stat.num_changed_files],\n",
+    "        \"number\": [pr_stat.pull_request.number],\n",
+    "        \"url\": [pr_stat.pull_request.html_url],\n",
+    "        \"title\": [pr_stat.pull_request.title],\n",
+    "        \"overall_score\": [pr_stat.score],\n",
+    "        \"label_score\": [pr_stat.label_score],\n",
+    "        \"length_score\": [pr_stat.length_score],\n",
+    "        \"body_length\": [pr_stat.body_length],\n",
+    "        \"comment_length\": [pr_stat.comment_length],\n",
+    "        \"interaction_score\": [pr_stat.interaction_score],\n",
+    "        \"comments\": [pr_stat.num_comments],\n",
+    "        \"reactions\": [pr_stat.num_reactions],\n",
+    "        \"reviews\": [pr_stat.num_reviews],\n",
+    "        \"num_interacting_users\": [pr_stat.num_interacting_users],\n",
+    "        \"change_score\": [pr_stat.change_score],\n",
+    "        \"additions\": [pr_stat.num_additions],\n",
+    "        \"deletions\": [pr_stat.num_deletions],\n",
+    "        \"num_changed_files\": [pr_stat.num_changed_files],\n",
     "    }\n",
     "    df = pd.DataFrame(data)\n",
-    "    rows = pd.concat([df, rows]).reset_index(drop = True)"
+    "    rows = pd.concat([df, rows]).reset_index(drop=True)"
    ]
   },
   {

tests/system/providers/papermill/input_notebook.ipynb~L91

     }
    ],
    "source": [
-    "sb.glue('message', msgs)"
+    "sb.glue(\"message\", msgs)"
    ]
   }
  ],

apache/superset (+512 -402 lines across 1 file)

superset-frontend/plugins/legacy-plugin-chart-country-map/scripts/Country Map GeoJSON Generator.ipynb~L96

     "if not os.path.exists(data_dir):\n",
     "    os.mkdir(data_dir)\n",
     "\n",
+    "\n",
     "def download_files(skip_existing: bool):\n",
     "    for url in [\n",
     "        \"https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/cultural/ne_10m_admin_0_countries.zip\",\n",
     "        \"https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/cultural/ne_10m_admin_1_states_provinces.zip\",\n",
-    "        \"https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/50m/cultural/ne_50m_admin_1_states_provinces.zip\"\n",
+    "        \"https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/50m/cultural/ne_50m_admin_1_states_provinces.zip\",\n",
     "    ]:\n",
-    "        file_name = url.split('/')[-1]\n",
-    "        full_file_name = f'{data_dir}/{file_name}'\n",
+    "        file_name = url.split(\"/\")[-1]\n",
+    "        full_file_name = f\"{data_dir}/{file_name}\"\n",
     "        # temporary fix\n",
-    "        url = url.replace(\"https://www.naturalearthdata.com/http//www.naturalearthdata.com/download\", \"https://naciscdn.org/naturalearth\")\n",
+    "        url = url.replace(\n",
+    "            \"https://www.naturalearthdata.com/http//www.naturalearthdata.com/download\",\n",
+    "            \"https://naciscdn.org/naturalearth\",\n",
+    "        )\n",
     "        with requests.get(\n",
     "            url,\n",
     "            headers={\n",
     "                \"accept-encoding\": \"gzip, deflate, br\",\n",
-    "                \"user-agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36\"\n",
+    "                \"user-agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36\",\n",
     "            },\n",
     "            stream=True,\n",
     "        ) as res:\n",
-    "            file_size = int(res.headers['content-length'])\n",
+    "            file_size = int(res.headers[\"content-length\"])\n",
     "            if res.status_code != 200:\n",
-    "                print(\"Error downloading files. Please open the URL to download them from browser manually.\")\n",
+    "                print(\n",
+    "                    \"Error downloading files. Please open the URL to download them from browser manually.\"\n",
+    "                )\n",
     "                break\n",
     "            if (\n",
-    "                skip_existing and\n",
-    "                os.path.exists(full_file_name) and\n",
-    "                file_size == os.path.getsize(full_file_name)\n",
+    "                skip_existing\n",
+    "                and os.path.exists(full_file_name)\n",
+    "                and file_size == os.path.getsize(full_file_name)\n",
     "            ):\n",
     "                print(f\"Skip {file_name} because it already exists\")\n",
     "                continue\n",

superset-frontend/plugins/legacy-plugin-chart-country-map/scripts/Country Map GeoJSON Generator.ipynb~L130

     "                fh.write(res.content)\n",
     "    print(\"Done.                                                            \")\n",
     "\n",
+    "\n",
     "download_files(skip_existing=False)"
    ]
   },

*[Comment body truncated]*

edhinard and others added 13 commits August 14, 2024 18:08
Co-authored-by: Alex Waygood <[email protected]>
Closes #12754
## Summary

Occasionally, we receive bug reports that imports in `src` directories
aren't correctly detected. The root of the problem is that we default to
`src = ["."]`, so users have to set `src = ["src"]` explicitly. This PR
extends the default to cover _both_ of them: `src = [".", "src"]`.

Closes #12454.
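For projects that want the old behavior back (or want to be explicit about the new one), the setting can be pinned in `pyproject.toml`. A minimal sketch using Ruff's documented `src` option:

```toml
[tool.ruff]
# Restore the pre-0.6 default: only the project root is a source root.
src = ["."]

# The new default is equivalent to:
# src = [".", "src"]
```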

## Test Plan

I replicated the structure described in #12453 and verified that the
imports were considered sorted under the new default, but that explicitly
setting `src = ["."]` surfaced an error.
@MichaReiser MichaReiser enabled auto-merge (rebase) August 14, 2024 16:13
@MichaReiser MichaReiser merged commit 73160dc into main Aug 14, 2024
19 checks passed
@MichaReiser MichaReiser deleted the ruff-0.6 branch August 14, 2024 16:15
Labels
breaking Breaking API change