Added retry logic to retriever download
MSt-10 committed Mar 11, 2024
1 parent a555027 commit a7514e5
Showing 1 changed file with 37 additions and 6 deletions.
43 changes: 37 additions & 6 deletions action.yml
@@ -152,12 +152,43 @@ runs:
       - name: Download Retriever
         shell: bash
         run: |
-          curl -s ${{ env.retriever }} \
-            | grep -E 'browser_download_url' \
-            | grep linux \
-            | grep x86_64 \
-            | grep -Eo 'https://[^\"]*' \
-            | xargs wget -O "${{ env.tmp_dir }}/retriever.zip"
+          # Set maximum retries
+          max_retries=3
+          attempt=0
+          success=0
+          while [ $attempt -lt $max_retries ]; do
+            ((attempt=attempt+1))
+            echo "Attempt $attempt of $max_retries..."
+            # Execute the curl command and save its output
+            output=$(curl --retry 3 -s ${{ env.retriever }})
+            # Check for 'browser_download_url' (this is sometimes missing, which causes an error -> retry)
+            echo "$output" | grep 'browser_download_url' > /dev/null
+            if [ $? -eq 0 ]; then
+              # If found, use the URL to download the file with wget
+              echo "$output" | grep -E 'browser_download_url' \
+                | grep linux \
+                | grep x86_64 \
+                | grep -Eo 'https://[^\"]*' \
+                | xargs wget -O "${{ env.tmp_dir }}/retriever.zip"
+              success=1
+              break # Exit loop on success
+            else
+              echo "URL not found, retrying..."
+              sleep 5
+            fi
+          done
+          # Check if the download was successful
+          if [ $success -ne 1 ]; then
+            echo "Failed to find the download URL after $max_retries attempts."
+            exit 1
+          fi


- name: Extract Retriever
shell: bash
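The pattern the commit adds (retry until the API response actually contains `browser_download_url`) can be sketched as a reusable shell function. This is an illustrative sketch only: the function name `retry_until_match`, the short sleep interval, and the generic command argument are assumptions, not part of the action itself.

```shell
#!/bin/sh
# retry_until_match PATTERN MAX_ATTEMPTS CMD...
# Runs CMD up to MAX_ATTEMPTS times; succeeds (and prints the output)
# as soon as the output contains PATTERN, otherwise returns 1.
retry_until_match() {
  pattern=$1
  max_attempts=$2
  shift 2
  attempt=0
  while [ "$attempt" -lt "$max_attempts" ]; do
    attempt=$((attempt + 1))
    # Capture the command's output; tolerate a failing command and retry
    output=$("$@") || output=""
    if printf '%s\n' "$output" | grep -q "$pattern"; then
      printf '%s\n' "$output"
      return 0
    fi
    sleep 1  # back off briefly before the next attempt (the action uses 5s)
  done
  return 1
}
```

In the action's context this would be invoked roughly as `retry_until_match 'browser_download_url' 3 curl --retry 3 -s "$API_URL"`, with the matching output then piped through the existing grep/wget chain. Note that `curl --retry` alone only retries transient transport errors; the outer loop is what covers a well-formed 200 response whose body lacks the download URL.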
