Various iNaturalist updates #3846
Conversation
I can't test this PR because I get this error in the check_for_file_updates step:
[2024-03-04, 06:40:29 UTC] {base.py:83} INFO - Using connection ID 'aws_default' for task execution.
[2024-03-04, 06:40:29 UTC] {connection_wrapper.py:378} INFO - AWS Connection (conn_id='aws_default', conn_type='aws') credentials retrieved from login and password.
[2024-03-04, 06:40:29 UTC] {taskinstance.py:2698} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/models/taskinstance.py", line 428, in _execute_task
result = execute_callable(context=context, **execute_callable_kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 199, in execute
return_value = self.execute_callable()
File "/home/airflow/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 216, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/opt/airflow/catalog/dags/providers/provider_api_scripts/inaturalist.py", line 205, in compare_update_dates
last_modified = s3_client.head_object(
File "/home/airflow/.local/lib/python3.10/site-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/airflow/.local/lib/python3.10/site-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (404) when calling the HeadObject operation: Not Found
I tried setting the AWS_ACCESS_KEY and AWS_SECRET_KEY values (from the infrastructure repo) in the .env file, but the error persists.
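For context, the failing call is an S3 HeadObject, so the 404 just means no object exists at that key for whichever credentials and endpoint are in use. A minimal sketch of that kind of check (the bucket and key names here are hypothetical, not necessarily the ones the DAG actually uses):

```python
import boto3
from botocore.exceptions import ClientError

# Hypothetical bucket/key, for illustration only.
BUCKET = "inaturalist-open-data"
KEY = "observations.csv.gz"

# endpoint_url matches the local in-compose S3 service from env.template.
s3_client = boto3.client("s3", endpoint_url="http://s3:5000")

try:
    response = s3_client.head_object(Bucket=BUCKET, Key=KEY)
    print(f"{KEY} last modified: {response['LastModified']}")
except ClientError as err:
    # botocore surfaces a missing object as a ClientError with code 404,
    # which is exactly what the traceback above shows.
    if err.response["Error"]["Code"] == "404":
        print(f"{KEY} not found in {BUCKET}")
    else:
        raise
```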
🥳 This works for me!
I found it fiddly to try to stop the initial dagrun in time, so for testing it might be easier to just temporarily update the local default for sql_rm_source_data_after_ingesting to False here. I got 14 images.
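For reference, this is a DAG-level parameter, so flipping the default is a one-line change. A rough sketch of how a boolean param like this is declared with Airflow's Param API (everything apart from the param name is illustrative):

```python
from airflow.models.param import Param

# Illustrative only: with the default flipped to False, the intermediate source
# tables are kept after ingestion, which makes repeated local test runs easier.
params = {
    "sql_rm_source_data_after_ingesting": Param(
        False,  # the shipped default is True; set False temporarily for local testing
        type="boolean",
        description="Drop the intermediate source tables after ingestion completes.",
    ),
}
```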
@obulat have you modified AIRFLOW_CONN_AWS_DEFAULT in your local .env, maybe from testing something else? Can you reset it to the default from env.template:
AIRFLOW_CONN_AWS_DEFAULT=aws://test_key:test_secret@?region_name=us-east-1&endpoint_url=http%3A%2F%2Fs3%3A5000
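Decoded, that URI hands Airflow's default AWS connection the test credentials and the in-compose S3 endpoint. A quick, purely illustrative breakdown (this is not how Airflow itself parses connection URIs):

```python
from urllib.parse import parse_qs, urlsplit

uri = "aws://test_key:test_secret@?region_name=us-east-1&endpoint_url=http%3A%2F%2Fs3%3A5000"
parts = urlsplit(uri)
extras = parse_qs(parts.query)  # parse_qs percent-decodes the values

print(parts.username, parts.password)  # test_key test_secret
print(extras["region_name"][0])        # us-east-1
print(extras["endpoint_url"][0])       # http://s3:5000
```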
Thank you, @stacimc; I think resetting the local .env fixed the issue. Next up, an error involving the COL zip file.
@obulat are you copying the COL file into the container? 😮 or just running the DAG?
Just running the DAG... Where is the COL file supposed to be? I tried looking inside the catalog container, but couldn't find anything inside that folder.
@obulat I also received that error when I first tried. My guess was that it had something to do with not stopping the first, automatic dagrun in time and getting into a weird failure state. I wiped my local catalog and then retried it, setting the default for sql_rm_source_data_after_ingesting to False here instead so you can just turn the DAG on and test immediately.
Based on the medium urgency of this PR, the following reviewers are being gently reminded to review this PR: @rwidom. Excluding weekend days, this PR was ready for review 5 day(s) ago. PRs labelled with medium urgency are expected to be reviewed within 4 weekday(s). @AetherUnbound, if this PR is not ready for a review, please draft it to prevent reviewers from getting further unnecessary pings.
Thank you, @stacimc! I also didn't manage to stop the first DAG run before it went on to the later steps. I also had an error in the s3 steps. Turns out, I didn't have the AWS variables set in the root .env. After I added the variables, the whole DAG ran smoothly.
I managed to successfully run the DAG locally, and the code and test changes look great.
@AetherUnbound, do you think the error I had with the COL zip file can ever happen in production?
It definitely should not! iNaturalist, like our other provider DAGs, has "max active runs" set to 1. The only reason this happened locally was that we triggered a manual run, which overrides some of these rules and caused the race condition you mentioned. #3847 should help with this scenario in the future because we can just enable the DAG and let the scheduled workflow run instead of having to trigger it with a parameter. Thanks for the reviews, and for the troubleshooting assistance, @stacimc!
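For context, the guard being described is the DAG-level max_active_runs setting. A minimal, purely illustrative declaration (the DAG id, schedule, and task are placeholders; the real provider DAGs are configured elsewhere in the catalog):

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.empty import EmptyOperator

# Placeholder DAG: with max_active_runs=1, the scheduler will not start a new
# run while another run of the same DAG is still in flight.
with DAG(
    dag_id="example_provider_workflow",
    start_date=datetime(2024, 1, 1),
    schedule="@monthly",
    max_active_runs=1,
    catchup=False,
) as dag:
    EmptyOperator(task_id="ingest")
```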
Fixes
Fixes #3631 by @rwidom
Description
This PR fixes a number of aspects of the iNaturalist ingestion, with the hope that we should be able to turn it back on in production after this!
One of the changes is to use the common.sql methods directly. It seems it had been a minute since we've gotten iNaturalist running 😅 There were quite a few changes, but I think I've covered them all and added references to documentation for where to look if they change again.
Testing Instructions
Due to our use of parameters over variables for defining when to skip the removal of the source data, this is a little tricky: you'll need to set sql_rm_source_data_after_ingesting in the parameter settings before running the DAG.
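If you do end up triggering it manually, one way to pass the override is through the dag run conf. A hypothetical sketch against Airflow's stable REST API (the host, port, credentials, and DAG id are all assumptions about a local setup):

```python
import requests

# All of these values are assumptions about a local Airflow instance with the
# basic-auth API backend enabled; adjust them to match your environment.
resp = requests.post(
    "http://localhost:9090/api/v1/dags/inaturalist_workflow/dagRuns",
    auth=("airflow", "airflow"),
    json={"conf": {"sql_rm_source_data_after_ingesting": False}},
)
resp.raise_for_status()
print(resp.json()["dag_run_id"])
```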