Collect image dimensions from Europeana #2782
Conversation
# Testing with the first webresource from the first aggregation, got dimensions
# for 1424 / 1589 images; so not too worried about performance implications of
# the loops. Limiting factor was much more the delay between requests.
if item_object := item_response.get("object"):
We could probably reduce nesting if we used something like
if aggregations := item_response.get("object", {}).get("aggregations"):
and
for webresource in aggregation.get("webResources", [])
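For illustration, a flattened version of the extraction along those lines might look like this (the function and variable names here are mine, not the PR's actual code):

def get_webresources(item_response: dict) -> list[dict]:
    # The walrus assignment folds the None-check into the condition,
    # so the happy path needs one less level of nesting.
    webresources = []
    if aggregations := item_response.get("object", {}).get("aggregations"):
        for aggregation in aggregations:
            for webresource in aggregation.get("webResources", []):
                webresources.append(webresource)
    return webresources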
I ran the script locally and the dimensions are being picked up well. I'll defer to @stacimc on the delay timing because she has more experience with it.
I added a comment on making the code a little bit less nested, but I don't want to block on it.
@rwidom Thanks for adding these four fields in one PR! :) Seems like this is changing the delay of 30 seconds that we currently have. @stacimc probably has better reasons to argue about that, as Olga said, but I'll just ask: did you confirm there is no way to grab these fields from the search endpoint?
ITEM_HAPPY_RESPONSE = _get_resource_json("item_full.json")
ITEM_NOT_1ST_RESPONSE = _get_resource_json("item_not_first_webresource.json")
ITEM_HAPPY_WEBRESOURCE = {
This looks like it could be extracted from the item_full.json file. Could it be the same for ITEM_NOT_1ST_WEBRESOURCE?
@pytest.mark.parametrize(
    "item_data, expected",
    [
        pytest.param(
            ITEM_HAPPY_WEBRESOURCE, {"width": 381, "height": 480}, id="happy_path"
        ),
        pytest.param({"no": "dimensions"}, {}, id="no_dimensions"),
    ],
)
def test_get_image_dimensions(item_data, expected, record_builder):
It would be nice to add the cases where there is only one of the dimensions.
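Something like the following params could cover those, assuming the webresource fixtures use EDM-style ebucoreWidth / ebucoreHeight keys and that the builder returns whichever dimension is present (both assumptions on my part):

pytest.param({"ebucoreWidth": 381}, {"width": 381}, id="width_only"),
pytest.param({"ebucoreHeight": 480}, {"height": 480}, id="height_only"),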
I looked in the search API documentation here, and it seems like it maybe should be possible (with some kind of profile magic?), but it's not clear to me how to do it. Definitely when I look at the test files we've actually downloaded, I don't see anything that looks like those fields. It would be great if we didn't have to query individual objects, because then we could leave the 30 seconds alone.
Thanks for taking this on @rwidom!
I spent some time looking at the API documentation and found the aggregations very confusing. My takeaway is that what we're getting back from the search endpoint is a collection of "metadata records", each of which may have multiple actual images associated with it. When we query the individual record endpoint, then, we may get multiple images in the aggregations with different filesizes etc.
So I took a look at one of the examples for record id /2021650/memorix_0000ee69_1b9e_6823_478f_c88af61736c6. The image url that we get from ingestion is https://images.memorix.nl/nda/thumb/fullsize/010a31d7-9316-54d7-6ad0-83b9914688d9.jpg. That url is one of the webresources we can find at the record endpoint, but it's not the first one -- that's a (smaller) version of the same image, https://images.memorix.nl/nda/thumb/640x480/010a31d7-9316-54d7-6ad0-83b9914688d9.jpg.
The result is that, at least some of the time, the final record that gets ingested will have image dimensions for a different size than the one we're linking out to, which I don't think we want. Moreover, based on the API documentation, I'm worried that we might also get different photos entirely -- like related photos or multiple angles of the same object, in which case it would be really strange to get the dimensions of the wrong image.
I wonder if we could try to filter the webresources for one that exactly matches the url we used for image_url?
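As a rough sketch, assuming each webresource exposes its url under an "about" key as in Europeana's EDM records (worth verifying against a real response):

def find_matching_webresource(aggregation: dict, image_url: str) -> dict | None:
    # Prefer the webresource whose url exactly matches the one we ingested,
    # instead of defaulting to the first entry in the aggregation.
    for webresource in aggregation.get("webResources", []):
        if webresource.get("about") == image_url:
            return webresource
    return None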
@stacimc, I think I made the code-only changes you asked for, except the timing one. I'm not sure of the best way to reach out to Europeana, but while I was looking, I came across this "Harvesting and Downloads" page, which seems directly relevant to us, though I haven't looked in more detail. Hmmm...
Thanks for talking through the timeout issue @stacimc and @AetherUnbound!
This looks great! Just a couple of tiny clean-up requests, and it looks like we need to run just catalog/generate-dag-docs, but then this should be good to go! 🚢
@@ -56,15 +55,6 @@ def __post_init__(self):

PROVIDER_REINGESTION_WORKFLOWS = [
    ProviderReingestionWorkflow(
🥳
This all looks and tests great @rwidom! 🚀 Thanks for leading the discussion on this one, I'm excited for this :)
Fixes
Fixes #1484 by @stacimc
Description
Adds image dimensions, filetype, and filesize from Europeana item data.
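For context, the dimensions come from the item (record) endpoint's webresources rather than from search results. A minimal sketch of the extraction, assuming EDM's ebucoreWidth / ebucoreHeight technical-metadata fields (illustrative, not the PR's exact code):

def get_image_dimensions(webresource: dict) -> dict:
    # Collect width and height only when present, so a webresource
    # without technical metadata yields an empty dict.
    dimensions = {}
    if width := webresource.get("ebucoreWidth"):
        dimensions["width"] = width
    if height := webresource.get("ebucoreHeight"):
        dimensions["height"] = height
    return dimensions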
Testing Instructions
At a high level, the goal is to run all the tests using pytest, then run the dag in airflow, then look at the data to confirm that it looks reasonable.
- just catalog/test-session and then pytest tests/dags/providers/provider_api_scripts/test_europeana.py. When I ran pytest alone from within the same test session, I got a bunch of errors from test_ingestion_server.test_index_readiness_check, but that's just because the ingestion server stuff isn't started up when you're just running docker for a catalog test session.
- just catalog/recreate. Then you can turn on the dag and it will go through daily runs and be sure to pick up some data. Check out the results using sql after just catalog/pgcli.
- I temporarily replaced the dag start date with 8/8/2023 in provider_workflows.py to get a smaller sample size for testing. Only two days out of nine had data, but it seems like the 3 second timeout might be reasonable, at least for the daily runs, if these numbers of records are comparable with production. I would need to know the total number of records in production to figure out if that is workable for a refresh.
The new fields are populated in all of the inserted records.
Checklist
- My pull request has a descriptive title (not a vague title like Update index.md).
- My pull request targets the default branch of the repository (main) or a parent feature branch.
Developer Certificate of Origin