Ruff 0.6 #12834
CodSpeed Performance Report

Merging #12834 will improve performance by 5.36%.
Benchmarks breakdown
code | total | + violation | - violation | + fix | - fix |
---|---|---|---|---|---|
RET505 | 1576 | 0 | 0 | 1576 | 0 |
PT023 | 661 | 196 | 465 | 0 | 0 |
Q000 | 539 | 539 | 0 | 0 | 0 |
I001 | 492 | 193 | 299 | 0 | 0 |
RET506 | 372 | 0 | 0 | 372 | 0 |
PT004 | 356 | 0 | 356 | 0 | 0 |
PT001 | 295 | 159 | 136 | 0 | 0 |
T201 | 151 | 151 | 0 | 0 | 0 |
E501 | 74 | 74 | 0 | 0 | 0 |
ANN001 | 62 | 62 | 0 | 0 | 0 |
PLW0642 | 52 | 52 | 0 | 0 | 0 |
RET507 | 36 | 0 | 0 | 36 | 0 |
NPY002 | 32 | 32 | 0 | 0 | 0 |
D103 | 25 | 25 | 0 | 0 | 0 |
W293 | 23 | 23 | 0 | 0 | 0 |
ANN201 | 22 | 22 | 0 | 0 | 0 |
TCH001 | 18 | 18 | 0 | 0 | 0 |
TCH002 | 18 | 0 | 18 | 0 | 0 |
FBT003 | 17 | 17 | 0 | 0 | 0 |
RET508 | 16 | 0 | 0 | 16 | 0 |
PYI062 | 15 | 15 | 0 | 0 | 0 |
PLR1730 | 13 | 13 | 0 | 0 | 0 |
F401 | 12 | 12 | 0 | 0 | 0 |
W291 | 11 | 11 | 0 | 0 | 0 |
COM812 | 11 | 11 | 0 | 0 | 0 |
E402 | 7 | 7 | 0 | 0 | 0 |
PTH123 | 6 | 6 | 0 | 0 | 0 |
C408 | 6 | 6 | 0 | 0 | 0 |
F821 | 5 | 5 | 0 | 0 | 0 |
ICN001 | 5 | 5 | 0 | 0 | 0 |
ERA001 | 5 | 5 | 0 | 0 | 0 |
ANN202 | 5 | 5 | 0 | 0 | 0 |
E701 | 5 | 5 | 0 | 0 | 0 |
PD011 | 4 | 4 | 0 | 0 | 0 |
PD901 | 4 | 4 | 0 | 0 | 0 |
PLR2004 | 4 | 4 | 0 | 0 | 0 |
UP031 | 4 | 4 | 0 | 0 | 0 |
RUF101 | 3 | 3 | 0 | 0 | 0 |
B007 | 3 | 3 | 0 | 0 | 0 |
T203 | 3 | 3 | 0 | 0 | 0 |
SLF001 | 3 | 3 | 0 | 0 | 0 |
RUF001 | 3 | 3 | 0 | 0 | 0 |
PTH110 | 3 | 3 | 0 | 0 | 0 |
E741 | 2 | 2 | 0 | 0 | 0 |
PLR1736 | 2 | 2 | 0 | 0 | 0 |
PLW2901 | 2 | 2 | 0 | 0 | 0 |
I002 | 2 | 2 | 0 | 0 | 0 |
PD002 | 2 | 2 | 0 | 0 | 0 |
PLW0602 | 2 | 2 | 0 | 0 | 0 |
ARG001 | 2 | 2 | 0 | 0 | 0 |
E721 | 1 | 1 | 0 | 0 | 0 |
B008 | 1 | 1 | 0 | 0 | 0 |
TID253 | 1 | 1 | 0 | 0 | 0 |
SIM115 | 1 | 1 | 0 | 0 | 0 |
S301 | 1 | 1 | 0 | 0 | 0 |
PLW0211 | 1 | 1 | 0 | 0 | 0 |
PLR0913 | 1 | 1 | 0 | 0 | 0 |
BLE001 | 1 | 1 | 0 | 0 | 0 |
RUF010 | 1 | 1 | 0 | 0 | 0 |
PD008 | 1 | 1 | 0 | 0 | 0 |
S113 | 1 | 1 | 0 | 0 | 0 |
PTH111 | 1 | 1 | 0 | 0 | 0 |
PTH202 | 1 | 1 | 0 | 0 | 0 |
PTH102 | 1 | 1 | 0 | 0 | 0 |
FBT001 | 1 | 1 | 0 | 0 | 0 |
D400 | 1 | 1 | 0 | 0 | 0 |
D415 | 1 | 1 | 0 | 0 | 0 |
PTH107 | 1 | 1 | 0 | 0 | 0 |
C405 | 1 | 1 | 0 | 0 | 0 |
B018 | 1 | 1 | 0 | 0 | 0 |
N803 | 1 | 1 | 0 | 0 | 0 |
C901 | 1 | 1 | 0 | 0 | 0 |
SIM910 | 1 | 1 | 0 | 0 | 0 |
E711 | 1 | 1 | 0 | 0 | 0 |
RUF005 | 1 | 1 | 0 | 0 | 0 |
D205 | 1 | 1 | 0 | 0 | 0 |
D212 | 1 | 1 | 0 | 0 | 0 |
F841 | 1 | 1 | 0 | 0 | 0 |
SIM108 | 1 | 1 | 0 | 0 | 0 |
UP030 | 1 | 1 | 0 | 0 | 0 |
UP032 | 1 | 1 | 0 | 0 | 0 |
S506 | 1 | 1 | 0 | 0 | 0 |
PLW0127 | 1 | 1 | 0 | 0 | 0 |
PLW0128 | 1 | 1 | 0 | 0 | 0 |
F811 | 1 | 1 | 0 | 0 | 0 |
PYI057 | 1 | 1 | 0 | 0 | 0 |
PT005 | 1 | 0 | 1 | 0 | 0 |
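Most of the new fixes in the table above come from `RET505`–`RET508` (superfluous branches after `return`/`raise`/`continue`/`break`). A minimal sketch of the `RET505` pattern and what the autofix produces (function names are hypothetical):

```python
# Before: flagged by RET505, `else` is unnecessary after `return`
def sign_before(n: int) -> str:
    if n < 0:
        return "negative"
    else:
        return "non-negative"


# After the autofix: the `else` is dropped and its body dedented
def sign_after(n: int) -> str:
    if n < 0:
        return "negative"
    return "non-negative"


assert sign_before(-2) == sign_after(-2) == "negative"
assert sign_before(7) == sign_after(7) == "non-negative"
```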
Linter (preview)
ℹ️ ecosystem check detected linter changes. (+278 -744 violations, +0 -0 fixes in 19 projects; 35 projects unchanged)
DisnakeDev/disnake (+1 -1 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview
+ disnake/ext/commands/flag_converter.py:314:28: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- disnake/ext/commands/flag_converter.py:314:28: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
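The `C420`/`RUF025` message above (the rule was recoded, not changed) refers to dict comprehensions that map every key to the same constant. A minimal sketch of the flagged pattern and the suggested replacement:

```python
keys = ["a", "b", "c"]

# Flagged pattern: a dict comprehension mapping every key to one constant
mapping = {key: None for key in keys}

# Suggested fix: dict.fromkeys builds the identical mapping
assert mapping == dict.fromkeys(keys)

# A second positional argument supplies a different constant value
assert dict.fromkeys(keys, 0) == {"a": 0, "b": 0, "c": 0}
```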
RasaHQ/rasa (+0 -3 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview
- rasa/core/evaluation/marker_stats.py:114:51: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- rasa/core/featurizers/single_state_featurizer.py:119:20: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- tests/core/featurizers/test_precomputation.py:110:17: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
Snowflake-Labs/snowcli (+181 -2 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview
+ src/snowflake/cli/_app/cli_app.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/commands_registration/command_plugins_loader.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/commands_registration/typer_registration.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/dev/docs/commands_docs_generator.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/dev/docs/generator.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/dev/docs/project_definition_docs_generator.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/loggers.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/main_typer.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/printing.py:15:1: I001 [*] Import block is un-sorted or un-formatted
+ src/snowflake/cli/_app/snow_connector.py:15:1: I001 [*] Import block is un-sorted or un-formatted
... 173 additional changes omitted for project
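`I001` asks for import blocks grouped and alphabetized isort-style. A stdlib-only sketch of the layout it enforces (the flagged ordering is shown in comments so the snippet stays runnable):

```python
# An import block I001 would flag (not alphabetized):
#
#     import sys
#     import collections.abc
#
# The sorted form: one stdlib group, alphabetized. Third-party imports
# would follow after a blank line (omitted to keep this stdlib-only).
import collections.abc
import sys

# list (the type of sys.path) is registered as a MutableSequence
assert isinstance(sys.path, collections.abc.MutableSequence)
```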
PlasmaPy/PlasmaPy (+1 -0 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview
+ src/plasmapy/particles/ionization_state_collection.py:693:40: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
apache/airflow (+3 -319 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview --select ALL
+ helm_tests/airflow_aux/test_annotations.py:409:63: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- helm_tests/airflow_aux/test_annotations.py:409:63: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- helm_tests/airflow_aux/test_pod_template_file.py:48:9: PT004 Fixture `setup_test_cases` does not return anything, add leading underscore
- kubernetes_tests/conftest.py:27:5: PT004 Fixture `initialize_providers_manager` does not return anything, add leading underscore
- kubernetes_tests/test_base.py:58:9: PT004 Fixture `base_tests_setup` does not return anything, add leading underscore
- kubernetes_tests/test_kubernetes_pod_operator.py:89:5: PT004 Fixture `mock_get_connection` does not return anything, add leading underscore
- kubernetes_tests/test_kubernetes_pod_operator.py:98:9: PT004 Fixture `setup_tests` does not return anything, add leading underscore
- tests/always/test_pandas.py:31:9: PT004 Fixture `setup_test_cases` does not return anything, add leading underscore
... 311 additional changes omitted for rule PT004
- tests/providers/ssh/operators/test_ssh.py:68:9: PT005 Fixture `_patch_exec_ssh_client` returns a value, remove leading underscore
+ tests/test_utils/get_all_tests.py:26:23: ICN001 `xml.etree.ElementTree` should be imported as `ET`
+ tests/ti_deps/deps/test_mapped_task_upstream_dep.py:459:36: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- tests/ti_deps/deps/test_mapped_task_upstream_dep.py:459:36: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
... 310 additional changes omitted for project
apache/superset (+3 -43 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview --select ALL
+ superset/reports/notifications/email.py:58:28: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- superset/reports/notifications/email.py:58:28: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
+ superset/reports/notifications/slack_mixin.py:107:43: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- superset/reports/notifications/slack_mixin.py:107:43: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
+ superset/reports/notifications/slack_mixin.py:97:39: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- superset/reports/notifications/slack_mixin.py:97:39: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- tests/integration_tests/celery_tests.py:71:5: PT004 Fixture `setup_sqllab` does not return anything, add leading underscore
- tests/integration_tests/charts/api_tests.py:1274:9: PT004 Fixture `load_energy_charts` does not return anything, add leading underscore
- tests/integration_tests/charts/api_tests.py:89:9: PT004 Fixture `clear_data_cache` does not return anything, add leading underscore
- tests/integration_tests/charts/data/api_tests.py:93:5: PT004 Fixture `skip_by_backend` does not return anything, add leading underscore
... 36 additional changes omitted for project
bokeh/bokeh (+23 -318 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview --select ALL
- docs/bokeh/docserver.py:31:1: I001 [*] Import block is un-sorted or un-formatted
- docs/bokeh/source/conf.py:9:1: I001 [*] Import block is un-sorted or un-formatted
- docs/bokeh/source/docs/first_steps/examples/first_steps_5_vectorize_color_and_size.py:1:1: I001 [*] Import block is un-sorted or un-formatted
- examples/advanced/extensions/wrapping.py:1:1: I001 [*] Import block is un-sorted or un-formatted
- examples/advanced/integration/d3-voronoi.py:1:1: I001 [*] Import block is un-sorted or un-formatted
- examples/basic/annotations/band.py:11:1: I001 [*] Import block is un-sorted or un-formatted
... 292 additional changes omitted for rule I001
+ src/bokeh/command/subcommands/file_output.py:145:17: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- src/bokeh/command/subcommands/file_output.py:145:17: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
+ src/bokeh/command/subcommands/serve.py:831:17: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- src/bokeh/command/subcommands/serve.py:831:17: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- src/bokeh/events.py:182:9: PLW0642 Invalid assignment to `cls` argument in class method
+ src/bokeh/events.py:182:9: PLW0642 Reassigned `cls` variable in class method
+ src/bokeh/sampledata/us_counties.py:45:33: ICN001 `xml.etree.ElementTree` should be imported as `ET`
+ src/bokeh/sampledata/us_states.py:44:33: ICN001 `xml.etree.ElementTree` should be imported as `ET`
+ tests/integration/models/test_plot.py:27:28: TCH001 Move application import `bokeh.document.Document` into a type-checking block
- tests/integration/models/test_plot.py:27:28: TCH002 Move third-party import `bokeh.document.Document` into a type-checking block
... 325 additional changes omitted for project
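The `TCH001`/`TCH002` messages above concern imports used only in type annotations; the fix moves them under `if TYPE_CHECKING:` so they carry no runtime cost. A stdlib-only sketch of the target layout (the function is hypothetical):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Imported for annotations only; skipped entirely at runtime
    from collections.abc import Sequence


def head(items: Sequence[int]) -> int:
    """Return the first element; the annotation resolves lazily."""
    return items[0]


assert head([3, 1, 2]) == 3
```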
docker/docker-py (+1 -0 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview
+ docker/types/containers.py:725:22: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
freedomofpress/securedrop (+1 -0 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview
+ molecule/testinfra/ossec/test_journalist_mail.py:14:45: RUF100 [*] Unused `noqa` directive (non-enabled: `PT004`)
ibis-project/ibis (+1 -1 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview
+ ibis/common/egraph.py:776:17: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
- ibis/common/egraph.py:776:17: RUF025 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
jrnl-org/jrnl (+2 -0 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview
+ tests/lib/given_steps.py:12:23: ICN001 `xml.etree.ElementTree` should be imported as `ET`
+ tests/lib/then_steps.py:7:23: ICN001 `xml.etree.ElementTree` should be imported as `ET`
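`ICN001` enforces conventional import aliases; for `xml.etree.ElementTree` the expected alias is `ET`. A minimal sketch:

```python
# ICN001-conforming alias for the stdlib XML module
import xml.etree.ElementTree as ET

root = ET.fromstring("<doc><item>1</item></doc>")
assert root.tag == "doc"
assert root.find("item").text == "1"
```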
mlflow/mlflow (+1 -0 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview
+ tests/metrics/genai/test_genai_metrics.py:554:37: C420 [*] Unnecessary dict comprehension for iterable; use `dict.fromkeys` instead
pandas-dev/pandas (+51 -51 violations, +0 -0 fixes)
ruff check --no-cache --exit-zero --ignore RUF9 --output-format concise --preview
- pandas/core/arrays/categorical.py:572:13: PLW0642 Invalid assignment to `self` argument in instance method
+ pandas/core/arrays/categorical.py:572:13: PLW0642 Reassigned `self` variable in instance method
- pandas/core/arrays/datetimelike.py:1065:9: PLW0642 Invalid assignment to `self` argument in instance method
+ pandas/core/arrays/datetimelike.py:1065:9: PLW0642 Reassigned `self` variable in instance method
- pandas/core/arrays/datetimelike.py:1080:9: PLW0642 Invalid assignment to `self` argument in instance method
+ pandas/core/arrays/datetimelike.py:1080:9: PLW0642 Reassigned `self` variable in instance method
- pandas/core/arrays/datetimelike.py:1081:9: PLW0642 Invalid assignment to `self` argument in instance method
+ pandas/core/arrays/datetimelike.py:1081:9: PLW0642 Reassigned `self` variable in instance method
- pandas/core/arrays/datetimelike.py:1109:9: PLW0642 Invalid assignment to `self` argument in instance method
+ pandas/core/arrays/datetimelike.py:1109:9: PLW0642 Reassigned `self` variable in instance method
... 92 additional changes omitted for project
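The reworded `PLW0642` message refers to methods that rebind the local name `self` (or `cls`). A sketch of why that is usually a mistake (the class is hypothetical):

```python
class Counter:
    def __init__(self, value: int) -> None:
        self.value = value

    def reset_buggy(self) -> None:
        # PLW0642: this only rebinds the local variable `self`;
        # the caller's instance is left untouched
        self = Counter(0)  # noqa: F841

    def reset(self) -> None:
        # The intended effect: mutate the instance in place
        self.value = 0


c = Counter(5)
c.reset_buggy()
assert c.value == 5  # unchanged: the reassignment had no effect
c.reset()
assert c.value == 0
```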
... Truncated remaining completed project reports due to GitHub comment length restrictions
Changes by rule (10 rules affected)
code | total | + violation | - violation | + fix | - fix |
---|---|---|---|---|---|
I001 | 480 | 181 | 299 | 0 | 0 |
PT004 | 356 | 0 | 356 | 0 | 0 |
PLW0642 | 104 | 52 | 52 | 0 | 0 |
C420 | 18 | 18 | 0 | 0 | 0 |
TCH001 | 18 | 18 | 0 | 0 | 0 |
RUF025 | 18 | 0 | 18 | 0 | 0 |
TCH002 | 18 | 0 | 18 | 0 | 0 |
ICN001 | 5 | 5 | 0 | 0 | 0 |
RUF100 | 4 | 4 | 0 | 0 | 0 |
PT005 | 1 | 0 | 1 | 0 | 0 |
Formatter (stable)
ℹ️ ecosystem check detected format changes. (+1240 -987 lines in 26 files in 8 projects; 46 projects unchanged)
PlasmaPy/PlasmaPy (+69 -55 lines across 3 files)
docs/notebooks/analysis/fit_functions.ipynb~L291
"plt.legend(fontsize=14, loc=\"upper left\")\n",
"\n",
"txt = f\"$f(x) = {explin.latex_str}$\\n$r^2 = {explin.rsq:.3f}$\\n\"\n",
- "for name, param, err in zip(explin.param_names, explin.params, explin.param_errors, strict=False):\n",
+ "for name, param, err in zip(\n",
+ " explin.param_names, explin.params, explin.param_errors, strict=False\n",
+ "):\n",
" txt += f\"{name} = {param:.3f} $\\\\pm$ {err:.3f}\\n\"\n",
"txt_loc = [-13.0, ax.get_ylim()[1]]\n",
"txt_loc = ax.transAxes.inverted().transform(ax.transData.transform(txt_loc))\n",
docs/notebooks/analysis/swept_langmuir/find_floating_potential.ipynb~L308
"axs[0].legend(fontsize=12)\n",
"\n",
"# zoom on fit\n",
- "for ii, label, rtn in zip([1, 2], [\"Exponential\", \"Linear\"], [results, results_lin], strict=False):\n",
+ "for ii, label, rtn in zip(\n",
+ " [1, 2], [\"Exponential\", \"Linear\"], [results, results_lin], strict=False\n",
+ "):\n",
" vf = rtn[0]\n",
" extras = rtn[1]\n",
"\n",
docs/notebooks/simulation/particle_tracker.ipynb~L6
"metadata": {
"collapsed": false
},
+ "outputs": [],
"source": [
"%matplotlib inline"
- ],
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
docs/notebooks/simulation/particle_tracker.ipynb~L28
"metadata": {
"collapsed": false
},
+ "outputs": [],
"source": [
"import astropy.units as u\n",
"import matplotlib.pyplot as plt\n",
docs/notebooks/simulation/particle_tracker.ipynb~L41
"from plasmapy.simulation.particle_tracker.termination_conditions import (\n",
" TimeElapsedTerminationCondition,\n",
")"
- ],
- "outputs": []
+ ]
},
{
"cell_type": "raw",
docs/notebooks/simulation/particle_tracker.ipynb~L66
"metadata": {
"collapsed": false
},
+ "outputs": [],
"source": [
"grid_length = 10\n",
"grid = CartesianGrid(-1 * u.m, 1 * u.m, num=grid_length)"
- ],
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
docs/notebooks/simulation/particle_tracker.ipynb~L88
"metadata": {
"collapsed": false
},
+ "outputs": [],
"source": [
"Bx_fill = 4 * u.T\n",
"Bx = np.full(grid.shape, Bx_fill.value) * u.T\n",
docs/notebooks/simulation/particle_tracker.ipynb~L96
"Ey = np.full(grid.shape, Ey_fill.value) * u.V / u.m\n",
"\n",
"grid.add_quantities(B_x=Bx, E_y=Ey)\n",
- "ExB_drift(np.asarray([0, Ey_fill.value, 0]) * u.V / u.m, np.asarray([Bx_fill.value, 0, 0]) * u.T)"
- ],
- "outputs": []
+ "ExB_drift(\n",
+ " np.asarray([0, Ey_fill.value, 0]) * u.V / u.m,\n",
+ " np.asarray([Bx_fill.value, 0, 0]) * u.T,\n",
+ ")"
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "|ParticleTracker| takes arrays of particle positions and velocities of the shape [nparticles, 3], so these arrays represent one particle starting at the origin."
- ],
"metadata": {
"collapsed": false
- }
+ },
+ "source": [
+ "|ParticleTracker| takes arrays of particle positions and velocities of the shape [nparticles, 3], so these arrays represent one particle starting at the origin."
+ ]
},
{
"cell_type": "code",
"execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
"source": [
"x0 = [[0, 0, 0]] * u.m\n",
"v0 = [[1, 0, 0]] * u.m / u.s\n",
"particle = Particle(\"p+\")"
- ],
- "metadata": {
- "collapsed": false
- },
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
"source": [
"Initialize our stop condition and save routine. We can determine a relevant\n",
"duration for the experiment by calculating the gyroperiod for the particle."
- ],
- "metadata": {
- "collapsed": false
- }
+ ]
},
{
"cell_type": "code",
"execution_count": null,
+ "metadata": {
+ "collapsed": false
+ },
+ "outputs": [],
"source": [
- "particle_gyroperiod = 1 / gyrofrequency(Bx_fill, particle).to(u.Hz, equivalencies=u.dimensionless_angles())\n",
+ "particle_gyroperiod = 1 / gyrofrequency(Bx_fill, particle).to(\n",
+ " u.Hz, equivalencies=u.dimensionless_angles()\n",
+ ")\n",
"\n",
"simulation_duration = 100 * particle_gyroperiod\n",
"save_interval = particle_gyroperiod / 10\n",
"\n",
"termination_condition = TimeElapsedTerminationCondition(simulation_duration)\n",
"save_routine = IntervalSaveRoutine(save_interval)"
- ],
- "metadata": {
- "collapsed": false
- },
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
- "source": [
- "Initialize the trajectory calculation."
- ],
"metadata": {
"collapsed": false
- }
+ },
+ "source": [
+ "Initialize the trajectory calculation."
+ ]
},
{
"cell_type": "code",
"execution_count": null,
- "source": [
- "simulation = ParticleTracker(grid, save_routine=save_routine, termination_condition=termination_condition, verbose=False)"
- ],
"metadata": {
"collapsed": false
},
- "outputs": []
+ "outputs": [],
+ "source": [
+ "simulation = ParticleTracker(\n",
+ " grid,\n",
+ " save_routine=save_routine,\n",
+ " termination_condition=termination_condition,\n",
+ " verbose=False,\n",
+ ")"
+ ]
},
{
"cell_type": "markdown",
+ "metadata": {
+ "collapsed": false
+ },
"source": [
"We still have to initialize the particle's velocity. We'll limit ourselves to\n",
"one in the x direction, parallel to the magnetic field B -\n",
"that way, it won't turn in the z direction.\n",
"\n"
- ],
- "metadata": {
- "collapsed": false
- }
+ ]
},
{
"cell_type": "code",
"execution_count": null,
- "source": [
- "simulation.load_particles(x0, v0, particle)"
- ],
"metadata": {
"collapsed": false
},
- "outputs": []
+ "outputs": [],
+ "source": [
+ "simulation.load_particles(x0, v0, particle)"
+ ]
},
{
"cell_type": "markdown",
docs/notebooks/simulation/particle_tracker.ipynb~L205
"metadata": {
"collapsed": false
},
+ "outputs": [],
"source": [
"simulation.run()"
- ],
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
docs/notebooks/simulation/particle_tracker.ipynb~L221
"cell_type": "code",
"execution_count": null,
"metadata": {},
+ "outputs": [],
"source": [
"results = save_routine.results\n",
"particle_trajectory = results[\"x\"][:, 0]\n",
"particle_position_z = particle_trajectory[:, 2]\n",
"\n",
"plt.scatter(results[\"time\"], particle_position_z)"
- ],
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
docs/notebooks/simulation/particle_tracker.ipynb~L246
"nbsphinx-thumbnail"
]
},
+ "outputs": [],
"source": [
"fig = plt.figure()\n",
- "ax = fig.add_subplot(projection='3d')\n",
+ "ax = fig.add_subplot(projection=\"3d\")\n",
"\n",
"ax.plot(*particle_trajectory.T)\n",
"ax.set_xlabel(\"X\")\n",
"ax.set_ylabel(\"Y\")\n",
"ax.set_zlabel(\"Z\")"
- ],
- "outputs": []
+ ]
},
{
"cell_type": "markdown",
docs/notebooks/simulation/particle_tracker.ipynb~L271
"metadata": {
"collapsed": false
},
+ "outputs": [],
"source": [
"v_mean = results[\"v\"][:, :, 2].mean()\n",
"print(\n",
" f\"The calculated drift velocity is {v_mean:.4f} to compare with the \"\n",
" f\"expected E0/B0 = {-(Ey_fill/Bx_fill).value:.4f}\"\n",
")"
- ],
- "outputs": []
+ ]
}
],
"metadata": {
apache/airflow (+20 -20 lines across 2 files)
dev/stats/explore_pr_candidates.ipynb~L19
"metadata": {},
"outputs": [],
"source": [
- "file = open(\"prlist\",\"rb\") # open the pickled file\n",
+ "file = open(\"prlist\", \"rb\") # open the pickled file\n",
"selected_prs = pickle.load(file)"
]
},
dev/stats/explore_pr_candidates.ipynb~L33
"\n",
"for pr_stat in selected_prs:\n",
" data = {\n",
- " 'number': [pr_stat.pull_request.number],\n",
- " 'url': [pr_stat.pull_request.html_url],\n",
- " 'title': [pr_stat.pull_request.title],\n",
- " 'overall_score': [pr_stat.score],\n",
- " 'label_score': [pr_stat.label_score],\n",
- " 'length_score': [pr_stat.length_score],\n",
- " 'body_length': [pr_stat.body_length],\n",
- " 'comment_length': [pr_stat.comment_length],\n",
- " 'interaction_score': [pr_stat.interaction_score],\n",
- " 'comments': [pr_stat.num_comments],\n",
- " 'reactions': [pr_stat.num_reactions],\n",
- " 'reviews': [pr_stat.num_reviews],\n",
- " 'num_interacting_users': [pr_stat.num_interacting_users],\n",
- " 'change_score': [pr_stat.change_score],\n",
- " 'additions': [pr_stat.num_additions],\n",
- " 'deletions': [pr_stat.num_deletions],\n",
- " 'num_changed_files': [pr_stat.num_changed_files],\n",
+ " \"number\": [pr_stat.pull_request.number],\n",
+ " \"url\": [pr_stat.pull_request.html_url],\n",
+ " \"title\": [pr_stat.pull_request.title],\n",
+ " \"overall_score\": [pr_stat.score],\n",
+ " \"label_score\": [pr_stat.label_score],\n",
+ " \"length_score\": [pr_stat.length_score],\n",
+ " \"body_length\": [pr_stat.body_length],\n",
+ " \"comment_length\": [pr_stat.comment_length],\n",
+ " \"interaction_score\": [pr_stat.interaction_score],\n",
+ " \"comments\": [pr_stat.num_comments],\n",
+ " \"reactions\": [pr_stat.num_reactions],\n",
+ " \"reviews\": [pr_stat.num_reviews],\n",
+ " \"num_interacting_users\": [pr_stat.num_interacting_users],\n",
+ " \"change_score\": [pr_stat.change_score],\n",
+ " \"additions\": [pr_stat.num_additions],\n",
+ " \"deletions\": [pr_stat.num_deletions],\n",
+ " \"num_changed_files\": [pr_stat.num_changed_files],\n",
" }\n",
" df = pd.DataFrame(data)\n",
- " rows = pd.concat([df, rows]).reset_index(drop = True)"
+ " rows = pd.concat([df, rows]).reset_index(drop=True)"
]
},
{
tests/system/providers/papermill/input_notebook.ipynb~L91
}
],
"source": [
- "sb.glue('message', msgs)"
+ "sb.glue(\"message\", msgs)"
]
}
],
apache/superset (+512 -402 lines across 1 file)
"if not os.path.exists(data_dir):\n",
" os.mkdir(data_dir)\n",
"\n",
+ "\n",
"def download_files(skip_existing: bool):\n",
" for url in [\n",
" \"https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/cultural/ne_10m_admin_0_countries.zip\",\n",
" \"https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/10m/cultural/ne_10m_admin_1_states_provinces.zip\",\n",
- " \"https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/50m/cultural/ne_50m_admin_1_states_provinces.zip\"\n",
+ " \"https://www.naturalearthdata.com/http//www.naturalearthdata.com/download/50m/cultural/ne_50m_admin_1_states_provinces.zip\",\n",
" ]:\n",
- " file_name = url.split('/')[-1]\n",
- " full_file_name = f'{data_dir}/{file_name}'\n",
+ " file_name = url.split(\"/\")[-1]\n",
+ " full_file_name = f\"{data_dir}/{file_name}\"\n",
" # temporary fix\n",
- " url = url.replace(\"https://www.naturalearthdata.com/http//www.naturalearthdata.com/download\", \"https://naciscdn.org/naturalearth\")\n",
+ " url = url.replace(\n",
+ " \"https://www.naturalearthdata.com/http//www.naturalearthdata.com/download\",\n",
+ " \"https://naciscdn.org/naturalearth\",\n",
+ " )\n",
" with requests.get(\n",
" url,\n",
" headers={\n",
" \"accept-encoding\": \"gzip, deflate, br\",\n",
- " \"user-agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36\"\n",
+ " \"user-agent\": \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.71 Safari/537.36\",\n",
" },\n",
" stream=True,\n",
" ) as res:\n",
- " file_size = int(res.headers['content-length'])\n",
+ " file_size = int(res.headers[\"content-length\"])\n",
" if res.status_code != 200:\n",
- " print(\"Error downloading files. Please open the URL to download them from browser manually.\")\n",
+ " print(\n",
+ " \"Error downloading files. Please open the URL to download them from browser manually.\"\n",
+ " )\n",
" break\n",
" if (\n",
- " skip_existing and\n",
- " os.path.exists(full_file_name) and\n",
- " file_size == os.path.getsize(full_file_name)\n",
+ " skip_existing\n",
+ " and os.path.exists(full_file_name)\n",
+ " and file_size == os.path.getsize(full_file_name)\n",
" ):\n",
" print(f\"Skip {file_name} because it already exists\")\n",
" continue\n",
" fh.write(res.content)\n",
" print(\"Done. \")\n",
"\n",
+ "\n",
"download_files(skip_existing=False)"
]
},
*[Comment body truncated]*
Co-authored-by: Alex Waygood <[email protected]>
## Summary

Occasionally, we receive bug reports that imports in `src` directories aren't correctly detected. The root of the problem is that we default to `src = ["."]`, so users have to set `src = ["src"]` explicitly. This PR extends the default to cover _both_ of them: `src = [".", "src"]`.

Closes #12454.

## Test Plan

I replicated the structure described in #12453, and verified that the imports were considered sorted, but that adding `src = ["."]` showed an error.
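The new default described above corresponds to this `pyproject.toml` fragment (a sketch; with this change the value is the default, so writing it out explicitly is only needed for clarity or as a base to override):

```toml
[tool.ruff]
# Ruff 0.6 default: treat both the project root and `src/` as source roots
src = [".", "src"]
```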
Co-authored-by: Alex Waygood <[email protected]>

Closes: #12456
Closes: astral-sh/ruff-vscode#546
Summary
Feature branch for Ruff 0.6
Changelog
- `unnecessary-dict-comprehension-for-iterable` (`RUF025`) as `C420` #12533
- `UP027` #12843
- `ASYNC100`, `ASYNC109`, `ASYNC110`, `ASYNC115` and `ASYNC116` behavior changes #12844
- `RET50{5-8}` #12840
- `flake8-pyi` rules #12860
- `redirected-noqa` (`RUF101`) #12869
- `src` layouts by default #12848

Merging

Don't squash merge this PR. Use Rebase and Merge!