More Ruff checks, and make it fix #3239
Conversation
Force-pushed from 7067e86 to 96ebcf3.
I added `noqa: I001` and the import line stays in place.

Now when I try to look at the pull request to see tests, GitHub is giving me 404s half the time, so I think it's flaky at the moment.
|
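For illustration, here is a minimal sketch of that kind of suppression (a hypothetical snippet, not the actual line from this PR). A `# noqa: I001` comment on the first line of an import block tells Ruff's import-sorting rule (I001) to leave that block alone, so `ruff --fix` will not reorder it:

```python
# Hypothetical sketch: keep these imports in this deliberate, non-alphabetical
# order. The "# noqa: I001" on the first line of the block suppresses Ruff's
# import-sorting rule (I001) for the whole block, so "ruff --fix" leaves it be.
import sys  # noqa: I001
import os

print(os.name, sys.platform)
```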
Given the quirks so far, I will have to go through the diff in detail to make sure nothing strange is going on. |
Is this PR ready? |
Sorry, I missed this message. Works for me. |
Rebased to current main. Thoughts? |
pyproject.toml (Outdated)

```toml
    "RUF005",
    "RUF012",
    "PLR0915",
    "INP001",
```
I don't get this. `INP` has only one rule, `INP001`. What's the point of adding checks for it but ignoring the only thing it checks? I didn't look at all of them, but there are other similar ones, e.g. `ISC` has 3 rules but two of them are ignored, or `T20` has two rules (but practically `T201` is the most common one) and that is ignored...
I just did the quick thing: I added each rule group in the list and suppressed the sub-rules that had violations, figuring there were probably other rules still being checked. It's like closing the gate, so we don't introduce new violations.
You've found a couple of weird cases, it's true, but I don't understand how you'd like to move forward.
I thought it was probably useful to enforce a bunch of the rules quickly. Perhaps you disagree, but what now?
I just want to understand the logic behind some checks.
- `INP`, for example, disallows implicit namespace packages. That's not going to work. We have namespace packages and will probably add more. Why should we disallow that? I understand the reason behind the rule ("Hey, maybe you forgot to add `__init__.py`?"), but that's not applicable here. We intentionally use implicit namespace packages.
- `T20` disallows `print` (or its cousin `pprint`), which is a very project-specific convention. We clearly have scripts that `print` stuff (`pelican-quickstart` etc.). Either we don't care about this rule or we switch from `print` to something else. Until then, this rule is effectively not enforceable.
- `ISC` is one thing I can agree with, but we are ignoring most of them :).

I understand that to un-ignore some of these rules, some manual fixes are perhaps needed. That is fine, but then I'd treat this as a "work-in-progress" and the ignored rules more like a "to-do list" :).
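For context, here is a minimal sketch (hypothetical code, not taken from the Pelican codebase) of the kind of thing the `T20` group flags and what a per-line suppression looks like. `T201` fires on every `print` call, which is why script-style tools either ignore the rule entirely or mark each call as deliberate:

```python
# Hypothetical quickstart-style script: printing to stdout is intentional here,
# so Ruff's T201 ("print found") would flag each call unless it is suppressed.
def main() -> None:
    print("Welcome to the quickstart")  # noqa: T201  # deliberate CLI output
    print("Answer a few questions to create your site.")  # noqa: T201


if __name__ == "__main__":
    main()
```

A per-file ignore in the Ruff configuration (rather than line-by-line `noqa` comments) is the other common way to handle scripts like this.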
I really appreciate your careful review. Thanks.
My overall goal is to make some progress here and merge something, in particular the auto-fix (`--fix`) and the auto-formatting.
I agree the ignored rules are like a TODO list. I was hoping to merge that TODO list, since it represents some knowledge that might be useful to lay out in the codebase.
However, if you don't want to merge that, then another possibility is to take out the extra rules and the suppressions.
What is the quickest path to moving forward?
Another possibility would be to not include any rules that have suppressions, and drop INP and T20. Not sure where that would leave the list, I'd have to try it.
Also, probably drop these comments:

- `suppression in order of # of violations:`
- `TODO: these only have one violation each:`

While they might be accurate at the time of writing, these kinds of comments have a very strong tendency to become incorrect and/or misleading quickly.
Good idea. Done.
"Done" for adding the comments.
For dropping, how about I add "in Dec 2023"? That would preserve the info (these were in order) without lying.
If I were to work on the TODO list, I'd start with the ones that had one violation each in 2023, it might be helpful to have them marked.
I added the "in Dec 2023". But I'm happy to drop those comments entirely if you'd prefer.
I agree that comments next to rules are helpful. As I've mentioned before, the set of rules and ignores that I've used for some of my plugin code-style check experimentation is an example of that practice: https://github.com/pelican-plugins/featured-image/blob/3d3473269c27dfd3ae123a73aebeb1af4d8b3dd0/pyproject.toml#L58-L91
@boxydog: You kindly asked, "What is the quickest path to moving forward?", and yet the moving forward has not been very quick. Please accept my sincere apologies for that, as well as my gratitude for your patience. I made a few changes to bring the branch current with code that has been merged to `master` in the interim.
If you are still willing to work on the TODO list as you indicated a few months ago, then I for one would be happy to merge this PR in its current form. Would you be amenable to that?
@avaris: Any follow-up thoughts on the current state of this PR?
(Note: Windows test failures in CI are unrelated — see #3299)
Sounds good to me. |
Oh, I'd be willing to work on TODOs as well. I have no particular order in mind, though. Open to suggestions. I'd probably just pick a couple marked as TODO, uncomment them, and see what ruff does. |
Alternatively, I could try to pick some of the TODOs at the bottom of the "ignore" list and see if I can get rid of them. (The ones at the bottom have only one violation apiece.) Again, I'm open to suggestions. |
So where is this PR? I have no planned work. At one point you had said you'd merge it as-is. I'd suggest adding If you wished, I could work on other ruff changes in a later PR. |
Ah, right you are.
Yes. So I did the following:
So now the change just added --exit-non-zero-on-fix |
A test is still failing. It's only failing on Windows, an environment I don't have set up. It's this:
|
You might try just re-running those tests? I don't think I have permissions to do it. |
Given that re-running the tests now failed on macOS, which previously succeeded, I think it's a flaky test. Perhaps the behavior of that deprecated_attribute decorator depends on import order? |
I'm going to create another branch that's not in this pull request and try a few variants to see if I can probe more precisely what's failing. For one thing, the test uses a method without |
boxydog#2 is a pull request with only changes to pyproject.toml, no Python changes. It still has that failing test. That says to me that it's a flaky test. I am not sure why it's flaky, but I'm surprised it has not failed before. |
boxydog#3 is a pull request that just adds one blank line to pyproject.toml, and it has the same flaky test failing. |
https://github.com/boxydog/pelican/actions/runs/9239087370/job/25417800210?pr=4 reveals a bit more of the story (from boxydog#4). I printed some debugging output to stderr:
This shows:
I don't understand why that should be, especially since the test often succeeds. What about LogCountHandler is flaky? This is a subtle bug somewhere. LogCountHandler is a pelican class in support.py, added as a handler in pelican's LoggedTestCase. LogCountHandler derives from logging.handlers.BufferingHandler, a python class. So BufferingHandler is sometimes not receiving the log messages? Is there something about the order the tests are called in that makes adding the handler not work? Help? |
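For readers following along, here is a rough sketch of the pattern being described, reconstructed under assumptions rather than copied from pelican/tests/support.py: a `BufferingHandler` subclass that buffers records so a test can count matches, attached to the root logger in `setUp`. The names below (`CountingHandler`, `count_logs`, `LoggedTestCaseSketch`) are illustrative stand-ins. The key property is that the handler only sees records emitted while it is attached, so anything logged before `setUp` runs (for example during import or test collection) is invisible to it:

```python
import logging
import re
import unittest
from logging.handlers import BufferingHandler


class CountingHandler(BufferingHandler):
    """Buffer log records so a test can count the ones matching a pattern."""

    def __init__(self, capacity=1000):
        super().__init__(capacity)

    def count_logs(self, msg=None, level=None):
        # Count buffered records whose message matches the regex `msg` and
        # whose level equals `level`, when those filters are provided.
        return len([
            record
            for record in self.buffer
            if (msg is None or re.match(msg, record.getMessage()))
            and (level is None or record.levelno == level)
        ])


class LoggedTestCaseSketch(unittest.TestCase):
    def setUp(self):
        super().setUp()
        self._log_handler = CountingHandler()
        logging.getLogger().addHandler(self._log_handler)

    def tearDown(self):
        logging.getLogger().removeHandler(self._log_handler)
        super().tearDown()
```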
I will try rewriting the unit test to use mocks instead, if that's acceptable. |
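One possible shape for such a rewrite, sketched under assumptions (the `Widget` class and its attributes below are hypothetical stand-ins, and this is not the fix that was eventually merged; see avaris's diff further down): instead of counting records with a custom handler, the test patches `logging.Logger.warning` with a mock and asserts it was called, so the check no longer depends on when a handler was installed:

```python
import logging
import unittest
from unittest import mock


class Widget:
    """Hypothetical class exposing a deprecated attribute that logs a warning."""

    _new_attribute = "new_value"

    @property
    def _old_attribute(self):
        logging.getLogger(__name__).warning(
            "_old_attribute has been deprecated; use _new_attribute instead"
        )
        return self._new_attribute


class TestDeprecatedAttribute(unittest.TestCase):
    def test_old_attribute_warns(self):
        # Patch the warning method on the Logger class itself, so the call is
        # captured no matter which logger instance emits it.
        with mock.patch("logging.Logger.warning") as warning:
            value = Widget()._old_attribute
        self.assertEqual(value, "new_value")
        warning.assert_called_once()


if __name__ == "__main__":
    unittest.main()
```

The standard library's `assertLogs` context manager would work just as well here and avoids patching logging internals.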
No objection from my side. Thank you for digging into that, @boxydog! 😊 |
With that change, collection of the aforementioned test triggers the warning:

and during actual testing the warning is not emitted a second time because of

Something like this helps:

```diff
diff --git a/pelican/tests/test_utils.py b/pelican/tests/test_utils.py
index 22dd8e38..bfdcc50a 100644
--- a/pelican/tests/test_utils.py
+++ b/pelican/tests/test_utils.py
@@ -23,9 +23,18 @@ from pelican.tests.support import (
 from pelican.writers import Writer
-class TestUtils(LoggedTestCase):
+class ClassDeprAttr:
     _new_attribute = "new_value"
+    @utils.deprecated_attribute(
+        old="_old_attribute", new="_new_attribute", since=(3, 1, 0), remove=(4, 1, 3)
+    )
+    def _old_attribute():
+        return None
+
+
+class TestUtils(LoggedTestCase):
+
     def setUp(self):
         super().setUp()
         self.temp_output = mkdtemp(prefix="pelicantests.")
@@ -34,15 +43,10 @@ class TestUtils(LoggedTestCase):
         super().tearDown()
         shutil.rmtree(self.temp_output)
-    @utils.deprecated_attribute(
-        old="_old_attribute", new="_new_attribute", since=(3, 1, 0), remove=(4, 1, 3)
-    )
-    def _old_attribute():
-        return None
-
     def test_deprecated_attribute(self):
-        value = self._old_attribute
-        self.assertEqual(value, self._new_attribute)
+        test_class = ClassDeprAttr()
+        value = test_class._old_attribute
+        self.assertEqual(value, test_class._new_attribute)
         self.assertLogCountEqual(
             count=1,
             msg=(
```
Oh great! Someone knows what is happening. Thanks! I filed #3309 to track this. I'll try filing a pull request with your fix. |
@justinmayer - #3310 indeed passes tests, so perhaps you can merge that unit test fix, then we can merge it in here, and get this PR in? |
I merged the test fix into

Any objections if I squash all the commits in this PR into a single commit before merging this PR?

Assuming there are no objections, after merging this PR, I will add that commit hash to

Which reminds me… In the future we should try to split code formatting PRs into two commits: one for the code style configuration changes, and the other for the application of that new configuration to the code itself. That way, only the latter commit's hash can be added to |
I have no objections to you squashing. On all my projects I only do squash commits, so there are fewer commits and it's easier to revert things if need be.

I also made #3311, which I think is what you wanted: config changes in one commit, auto-fixes (plus one manual noqa) in another commit. You might use that one instead if you review it and find it is what you want. It's a non-trivial change because it has to preserve import order. I tried to scroll through both changes and see that the right thing happened, but it's subtle.

Personally, I would not get so hung up on the git history. I would focus on speed, i.e. evaluating pull requests faster (getting them in, responding to them, closing them, whatever is appropriate). Speed is what moves a project and gets people excited. But it's your project and I respect that. I will try to do what you wish. |
Thanks for sharing your thoughts. I usually prefer a small number of commits, squashing to a single commit when appropriate, but I sometimes like to ask folks (particularly when there's been a lot of effort invested) if they have preferences before merging. I do my best to keep things clean but am not a Git-history purist. I do prefer to keep

I agree that speed is helpful and do my best in that regard. That is sometimes hampered by (1) how much time I can devote to open source, and (2) how long it takes to get feedback from other folks, as I prefer to have something resembling consensus rather than making all decisions unilaterally. I realize that last point can inhibit speed and try my best to strike a good balance. |
I added the relevant commit hash to |
I agree that open source is a complicated balance of many factors. I'm glad you are still shepherding pelican. Some open source maintainers just give up because it's too much time and hassle for the return. |
Closing this because #3311 did the same thing, but with git commits the way Justin wanted. |
Pull Request Checklist
Resolves: #3216
Note: there are no manual Python fixes in this PR; it's all ruff.
Also, making ruff and ruff-format both fix means that if there are pre-commit failures, you can just run the commit again and it might pass. This is a common pattern for me: run the commit once (the hooks auto-fix the files), then run it again (the commit succeeds).