Delete multiple default_off rulesets (^Z) #10516
Conversation
I'm not feeling comfortable with just deleting […]. I have no doubts for server-related issues, like […]
I've already merged a lot of these deletion pull requests, for what it's worth. If the ruleset is bad then there's no point in keeping it around. For your point about cert errors, 99.9% of the time an invalid certificate for any reason will be understood as broken/buggy by the user. For that 0.1% case where the user has a special relationship to the site and is willing to trust their self-signed certificate or whatever, they can make a custom rule. I do think that a ruleset should be […]
@J0WI In addition to the reasons @jeremyn pointed out, the rulesets I delete are trivial ones which only rewrite […]
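For context, a sketch of what such a "trivial" ruleset typically looks like. The name `Example.com` and the `default_off` reason are made up; the XML structure follows the HTTPS Everywhere ruleset format:

```shell
# Made-up example of a "trivial" default_off ruleset: its only rule is the
# generic http -> https rewrite, so deleting the file loses nothing that a
# script could not regenerate later.
ruleset='<ruleset name="Example.com" default_off="failed ruleset test">
  <target host="example.com" />
  <target host="www.example.com" />
  <rule from="^http:" to="https:" />
</ruleset>'
printf '%s\n' "$ruleset"
```

If such a site later starts working over HTTPS, an equally trivial replacement can be generated, which is why deletion rather than indefinite `default_off` is being argued for here.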
@cschanaj would it be too much overhead to differentiate between rules that have been turned off and rules that have been created with the […]?
@J0WI I do not know which rules are created with […]
I fear […]
@J0WI a relatively quick way to check is `$ git log --oneline -- Example.com.xml | head -n 2 | wc -l`; if the output is […]
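A sketch of how that check could be applied across many files, demonstrated on a throwaway repository (the filenames are made up; in the real repo the loop would run inside the rules directory):

```shell
# Build a throwaway repo: one file committed once, one committed twice.
repo=$(mktemp -d)
cd "$repo"
git init -q
printf '<ruleset/>\n' > Example.com.xml
printf '<ruleset/>\n' > Other.com.xml
git add .
git -c user.name=t -c user.email=t@example.com commit -qm 'add rulesets'
printf '<ruleset name="edited"/>\n' > Other.com.xml
git -c user.name=t -c user.email=t@example.com commit -qam 'edit Other.com.xml'

# head -n 2 caps the count so the check stays cheap on long histories:
# output 1 means the file has only its creation commit, i.e. it was
# never modified after being added.
for f in *.xml; do
  n=$(git log --oneline -- "$f" | head -n 2 | wc -l)
  if [ "$n" -eq 1 ]; then
    echo "never modified: $f"
  fi
done
```

As the thread notes, a single-commit history is only a hint (not proof) that a ruleset came from the bulk-generation script rather than a human contributor.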
I'm still not sure why we would want to keep a […]
e.g. because of […]
@J0WI I agree with that specific example. Looking at the list in #9906 (comment):
[…]
Noted that I believe the number of rulesets created with […]. It should be totally fine in terms of the level of protection we provide.
Given the discussion here and my discovery of the […]: there have been several pages of new deletion PRs, and it would be great to get clarification before we get too many more.
AFAIK, the script can only re-activate […]. Noted that a number of deletion PRs come from #10595, where the candidate list is generated such that […], as described in #10595 (comment). For reference: #10378
I am interested in this conversation. For your information, here is how I have been picking rules to remove until now: […]
I was also pondering the possibility of removing rulesets that have been failing for the exact same reason in the two […]. I think rules outside those cases should not be removed but simply […]
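Assuming each fetch-test run leaves behind a plain failure log of `ruleset<TAB>reason` lines (a made-up format, purely for illustration), the "failed for the exact same reason in both runs" candidates fall out of a `comm` over the sorted logs:

```shell
# Hypothetical failure logs from two consecutive fetch-test runs; the
# tab-separated "ruleset<TAB>reason" format is an assumption for this sketch.
cd "$(mktemp -d)"
printf 'A.com.xml\tcertificate expired\nB.com.xml\ttimeout\n' > run1.txt
printf 'A.com.xml\tcertificate expired\nB.com.xml\tDNS failure\n' > run2.txt
sort run1.txt > run1.sorted
sort run2.txt > run2.sorted
# Lines common to both runs: the same ruleset failing for the same reason.
comm -12 run1.sorted run2.sorted
```

Here only `A.com.xml` would surface as a removal candidate; `B.com.xml` failed in both runs but for different reasons, so under the criterion above it would only be disabled, not deleted.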
For reference, there are around 230 rules that failed for the exact same reasons in the comprehensive fetch test; see […]
Linking this with #2088
ping @Hainish
When the full fetch test tool is run on a regular / continual basis, it will be easier to follow a standardized rule for deleting rulesets that fail subsequent runs. If we decide to run it every week, for instance, two fails for the same reason would be a hasty criterion for deletion. Deleting rulesets which fail the fetch test over an extended period of time (say 6 months) makes a lot of sense to me. I think 6 months would also be a good default for how often we run the full fetch test, so streamlining the deletion of rules with the periodic runs of the tool strikes me as reasonable, especially if we have a fetch whitelist (#10227) in place. If this seems like a reasonable course of action, we should not only implement it but document it in […]
Note that in the above comment, when I say "run the full fetch test" I mean follow the "3-colocation" methodology I outline in #10538 (comment), so we don't get overaggressive in disabling rulesets.
It would be nice if […]. @Hainish, what do you want to do with the many recent open PRs that delete rulesets? Do you want to say that only […]
@Hainish As mentioned in #10538 (comment), for […]. Related: #9906.
@jeremyn I think there is some value in manually reviewing the deletion of rules, but it may not be worth the time if we're going to auto-disable rulesets. Does human oversight give enough added value, in your experience, to warrant continued close inspection? If not, we shouldn't continue doing it. I haven't been watching the process closely enough to have insight here. One thing we may want to do is, after every run of the fetch tester, create a list of the rulesets that would be deleted due to consecutive disables with the same reason, and then manually review whether they actually should be deleted. If there is a failure with just one host, for example, this would give us a chance to fix it instead of auto-deleting the ruleset. This would allay some of @cschanaj's concerns.
@cschanaj no, the […]
@Hainish I've made the very occasional comment on deletion PRs. I don't think human review is critical for the simple cases.
@jeremyn in that case we can probably safely close the deletion PRs and leave that to the […]
From #9906: all `target`s not working over HTTPS. I would like to delete more rulesets with a single yet relatively small PR, thanks!

Remark:

- `target` mismatches, `problematic` etc. in filename
- `target` not working over HTTPS (alexa-top-1m)
Source code: https://github.com/cschanaj/covfefe