upstream request timeout #856

We run osv-scanner as part of our CI, and have been seeing lots of "upstream request timeout" errors for the last few days. Probably this is related to google/osv.dev#2039.

We are scanning a Cargo.lock that is 202485 bytes and has 758 entries in it, so it is a bit large. AFAICS there is no way to configure the timeout. I am wondering whether that might be a useful addition, as we would prefer it take longer rather than cause a failure in a large, complicated CI workflow.
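(osv-scanner does not expose such a setting today; purely as a sketch of what a configurable client-side timeout could look like in Go, the project's language. The `--timeout` flag and its default here are hypothetical, not an existing option.)

```go
package main

import (
	"flag"
	"net/http"
	"time"
)

func main() {
	// Hypothetical --timeout flag; osv-scanner has no such option today.
	timeout := flag.Duration("timeout", 60*time.Second,
		"per-request timeout for OSV API calls")
	flag.Parse()

	// The client would then be threaded through to the OSV query code.
	client := &http.Client{Timeout: *timeout}
	_ = client
}
```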
Comments
We're currently looking into the timeout situation; there seems to be a recent uptick in the number of timeouts we are getting. Can you share how long it took to time out for your CI?
This command is part of a much larger step in our CI (standard GitHub Actions), so we can't give a duration for the timeout.
Out of 100 attempts, 12 failed with that error.
Thanks @jayvdb! Is there any way you'd be able to share the Cargo.lock with us? If it's something that's OK to share with us privately, you can email ochang AT google.com.
Sent via email.
The timeout issue should be mitigated now! Please test it out!
The timeout is on the server side, and ideally no query should come close to the 60-second limit, no matter how many packages are being queried (queries are split into chunks of 1000). We're looking at more options to prevent this from happening in the future.
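(As a rough sketch of the chunking described above; the `Query` type and helper name are placeholders, not the actual osv-scanner types.)

```go
// maxQueriesPerRequest mirrors the 1000-package chunk size mentioned above.
const maxQueriesPerRequest = 1000

// Query is a placeholder for a single package query in a batch request.
type Query struct {
	Name    string `json:"name"`
	Version string `json:"version"`
}

// chunkQueries splits a batch into chunks of at most 1000 queries, so each
// request stays small enough to finish well under the 60-second limit.
func chunkQueries(queries []Query) [][]Query {
	var chunks [][]Query
	for len(queries) > 0 {
		n := len(queries)
		if n > maxQueriesPerRequest {
			n = maxQueriesPerRequest
		}
		chunks = append(chunks, queries[:n])
		queries = queries[n:]
	}
	return chunks
}
```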
We had enough CI jobs run without this occurring for me to be happy that it is resolved. As this is essentially a repeat of google/osv.dev#1363, I do believe something additional is needed. A few ideas come to mind:
I opened #860 to update the retry logic, and we are also doing a few things in the backend to reduce the number of timeouts. There are some plans for adding verbosity levels to tell the user more about what is happening (the first part is completed in #727), though nothing concrete there yet.
Fixes #856

This fixes a couple of things with our OSV API retry logic:
- Also retry non-200 responses, in addition to retrying on network errors. (Is there a need to retry only for responses >= 500? The current logic retries all non-200 responses.)
- Fix the batch query code so that retries are not sending empty data.
- Increase maxRetryAttempts from 3 to 4.
- Implement exponential backoff (actually polynomial/quadratic backoff).
- Correctly close the body after reading the error message.
- Implement retries for the determineversion API call.
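(A self-contained sketch of the retry behavior this PR describes; `doWithRetry` and `makeReq` are hypothetical names, and the real osv-scanner code differs, but the retry conditions, attempt count, quadratic backoff, and body handling follow the list above.)

```go
package osvapi

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

const maxRetryAttempts = 4

// doWithRetry illustrates the PR's retry rules: retry on network errors and
// on any non-200 response, rebuild the request on every attempt, back off
// quadratically, and close the body after reading the error message.
func doWithRetry(client *http.Client, makeReq func() (*http.Request, error)) (*http.Response, error) {
	var lastErr error
	for attempt := 1; attempt <= maxRetryAttempts; attempt++ {
		// Rebuild the request each time so retries never send empty data
		// (one of the bugs the PR fixes).
		req, err := makeReq()
		if err != nil {
			return nil, err
		}
		resp, err := client.Do(req)
		if err != nil {
			lastErr = err // network error: retry
		} else if resp.StatusCode == http.StatusOK {
			return resp, nil
		} else {
			// Non-200: read the error message, then close the body.
			msg, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			lastErr = fmt.Errorf("server returned %d: %s", resp.StatusCode, msg)
		}
		if attempt < maxRetryAttempts {
			// Quadratic backoff: sleep 1s, 4s, 9s between attempts.
			time.Sleep(time.Duration(attempt*attempt) * time.Second)
		}
	}
	return nil, fmt.Errorf("all %d attempts failed: %w", maxRetryAttempts, lastErr)
}
```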