Add a link checker for this repo #359
@duglin any chance you might be interested in doing this?
I'd like to work on this. I've looked at the verify-links.sh script that @duglin wrote, but I'm not totally clear on how you want the link checking implemented. Are you wanting something like a scheduled task that regularly scans for dead links, or just something to run manually from time to time?
@chupman it should run as part of the verification checks in the normal kube build process, so basically it should run on every PR, similar to what is done for the lint and gofmt checks.
I'm most likely overthinking it, but my concern was that if you run on every PR it could cause unrelated errors when an existing link goes dead. Would it make sense to expect an extra argument with the files changed, in addition to the repo base dir? Or is it better to keep it simple, scan the whole repo, and let the reviewer read the error log?
The purpose behind running the test is exactly what you said... to find dead links. So we want things to fail if there are dead links. I suspect the first time we turn this on we'll find a TON, and it'll result in the PR that adds this also including a ton of md file fixes. :-) Now if you're concerned about having to check too many files and it being slow, we should wrap the call to the checker with code that only calls it for the files changed in the current PR/commit.
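The "only check the files changed in the current PR" wrapper described above could be sketched roughly like this. This is just an illustration, not the actual kube verify script: the function name is hypothetical, and in real CI the changed-file list would come from something like `git diff --name-only` rather than a hardcoded list.

```python
import fnmatch

def files_to_check(changed_paths):
    """Filter a PR's changed files down to the Markdown docs the
    link checker should scan (helper name is hypothetical)."""
    return [p for p in changed_paths if fnmatch.fnmatch(p, "*.md")]

# In a real PR job the changed list would come from `git diff --name-only`;
# here we only illustrate the filtering step.
print(files_to_check(["README.md", "main.go", "docs/devel/api.md"]))
# → ['README.md', 'docs/devel/api.md']
```

The filtered list would then be passed to the checker itself, so unchanged docs with pre-existing dead links don't fail unrelated PRs.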
For reference, here's the repo with the link checker: https://github.com/duglin/vlinker
If nobody else is already working on this, I'd like to help out. @duglin are there any docs about how to set up CI on a new repo? I'm not sure what the conventions are - e.g. is it okay to use Travis or does everything need to go through the existing https://github.com/kubernetes/test-infra?
Also, instead of writing a new script, perhaps we could use or refactor https://github.com/kubernetes/kubernetes/blob/master/cmd/mungedocs/links.go and run all the munge scripts on a PR?
I'm currently waiting on code review from @duglin for some enhancements to his link checker. Despite the fact that I've already invested some time in this, I agree that if there's something out there that's already maintained, it's probably a better option. In case we do go with vlinker, I'd also like to know about the CI integration. I was using Travis on my own forks for testing while I was whittling down false positives.
https://github.com/k8s-oncall works https://github.com/search?q=k8s-support-oncall&type=Users&utf8=%E2%9C%93 shows no matches, and https://github.com/k8s-support-oncall is a 404. I guess this is another call for kubernetes#359.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-help
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
/assign
@spiffxp: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed. In response to this: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
While looking at the API conventions docs this morning, I found that many of the links in that doc were broken. We should add a link checker for this repo (and eventually other repos in the community).
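To make the ask concrete, the core of a Markdown link checker can be sketched in a few lines of Python. This is a simplification, not @duglin's actual implementation: the regex handles only inline-style links, and the `is_alive` probe is injected so the sketch stays offline-testable (a real checker would issue an HTTP request there).

```python
import re

# Matches inline Markdown links like [text](url) and captures the url.
LINK_RE = re.compile(r"\[[^\]]*\]\(([^)\s]+)\)")

def extract_links(markdown_text):
    """Pull the URL targets out of inline Markdown links."""
    return LINK_RE.findall(markdown_text)

def find_dead_links(markdown_text, is_alive):
    """Return the links for which the supplied `is_alive` probe fails.
    In a real checker `is_alive` would perform an HTTP HEAD/GET."""
    return [u for u in extract_links(markdown_text) if not is_alive(u)]

doc = "See [the docs](https://example.com/ok) and [old page](https://example.com/gone)."
dead = find_dead_links(doc, lambda u: not u.endswith("/gone"))
print(dead)  # → ['https://example.com/gone']
```

A CI job would run this over the repo's md files (or just the ones changed in the PR) and fail the build when the dead-link list is non-empty.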