This repository has been archived by the owner on Aug 4, 2023. It is now read-only.

Handle Science Museum errors with batches larger than 50 pages #905

Merged: 3 commits merged into main on Dec 7, 2022

Conversation

@stacimc (Contributor) commented Dec 2, 2022

Fixes

Fixes WordPress/openverse#1363

Background

The Science Museum API appears to be throwing 400 errors when you attempt to access a page number greater than 50 for large datasets, even when there should legitimately be > 50 pages. For example, a recent run received this response for the 50th page of a dataset:

# Excerpt from response_json
  "links":{
      "self":"https://collection.sciencemuseumgroup.org.uk/search/date[from]/1875/date[to]/1900/images/image_license?page[size]=100&page[number]=50",
      "first":"https://collection.sciencemuseumgroup.org.uk/search/date[from]/1875/date[to]/1900/images/image_license?page[size]=100",
      "last":"https://collection.sciencemuseumgroup.org.uk/search/date[from]/1875/date[to]/1900/images/image_license?page[size]=100&page[number]=65",
      "prev":"https://collection.sciencemuseumgroup.org.uk/search/date[from]/1875/date[to]/1900/images/image_license?page[size]=100&page[number]=49",
      "next":"https://collection.sciencemuseumgroup.org.uk/search/date[from]/1875/date[to]/1900/images/image_license?page[size]=100&page[number]=51"
   },
   "meta":{
      "total_pages":66,
      ...

Note that total_pages > 50, and the links section does contain a link to page 51. However, if you actually request page 51, you get a 400 error, and the DAG eventually fails after a few retry attempts.

We were already splitting the Science Museum data into year_range intervals to get smaller batches, presumably to avoid this error. It looks as though some of the year ranges have since grown too large.
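The interval-splitting idea can be sketched as follows; the helper name `build_year_ranges` and the boundary values here are illustrative assumptions, not the DAG's actual code:

```python
def build_year_ranges(boundaries: list[int]) -> list[tuple[int, int]]:
    """Turn a sorted list of boundary years into consecutive
    (from, to) intervals: [0, 1500, 1900] -> [(0, 1500), (1500, 1900)]."""
    return list(zip(boundaries, boundaries[1:]))


# Recent periods hold far more records, so boundaries would be denser
# there to keep each interval's result set under 50 pages.
year_ranges = build_year_ranges([0, 1500, 1750, 1850, 1900, 1950, 2000, 2025])
```

Each interval is then queried independently via the API's `date[from]`/`date[to]` parameters, as seen in the URLs above.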

Description

This PR:

  • Reduces the length of our existing year_range intervals so that none contains more than 50 pages of data
  • Updates the year_range calculation to be future-proof
  • Future-proofs the DAG by detecting when we're about to exceed 50 pages, halting ingestion for the batch early, and alerting in Slack.

In order to get the Slack alert working I had to make a small modification to pass dag_id into provider data ingesters, so there are a few changes related to that. I also updated docs and cleaned up a few small things.
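A minimal sketch of the early-halt-and-alert check described above (the class, the `page_number` attribute, and the `send_alert` helper are illustrative assumptions, not the PR's exact implementation):

```python
MAX_PAGE = 50  # the API returns 400 for page numbers beyond this


class ScienceMuseumSketch:
    """Toy stand-in for the provider data ingester."""

    def __init__(self, page_number: int = 1):
        self.page_number = page_number
        self.alerts: list[str] = []

    def send_alert(self, message: str) -> None:
        # Stand-in for the real Slack alerting helper.
        self.alerts.append(message)

    def get_should_continue(self, response_json: dict) -> bool:
        next_url = response_json.get("links", {}).get("next")
        if next_url is None:
            # Normal end of the batch: no further pages advertised.
            return False
        if self.page_number >= MAX_PAGE:
            # The API advertises a next page it will reject with a 400;
            # halt this batch early and flag it so the year ranges can
            # be re-split.
            self.send_alert(
                f"Science Museum: batch has more than {MAX_PAGE} pages; "
                "halting ingestion early for this year range."
            )
            return False
        return True
```

This mirrors the cases in the PR's parametrized test below: continue on a normal page, stop quietly when `next` is None, and stop with an alert at page 50.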

Testing Instructions

just test

Run the DAG locally and ensure that you don't see any Slack alerts or errors. Setting an ingestion_limit will prevent the buggy scenario from being reached, so if you want to speed up the process you can instead manually edit the starting page_number in get_next_query_params:

    from_, to_ = kwargs["year_range"]
    if not prev_query_params:
        # Reset the page number to 49 for testing
        self.page_number = 49
    else:
        # Increment the page number
        self.page_number += 1

To test the Slack alerting when a page number greater than 50 is reached, you can edit the year_ranges locally to be a few very long ranges that will definitely have more than 50 pages:

year_ranges = [(0, 1900), (1900, 2023),]

Then run the DAG again and verify that you see the send_alert message in the logs but that ingestion continues (i.e., you should ingest the first 50 pages' worth of data for each of your long ranges).

Checklist

  • My pull request has a descriptive title (not a vague title like Update index.md).
  • My pull request targets the default branch of the repository (main) or a parent feature branch.
  • My commit messages follow best practices.
  • My code follows the established code style of the repository.
  • I added or updated tests for the changes I made (if applicable).
  • I added or updated documentation (if applicable).
  • I tried running the project locally and verified that there are no visible errors.

Developer Certificate of Origin

Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.


Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.

@stacimc stacimc self-assigned this Dec 2, 2022
@stacimc stacimc added bug Something isn't working 🟧 priority: high Stalls work on the project or its dependents 🛠 goal: fix Bug fix 💻 aspect: code Concerns the software code in the repository labels Dec 2, 2022
@stacimc stacimc marked this pull request as ready for review December 2, 2022 01:49
@stacimc stacimc requested a review from a team as a code owner December 2, 2022 01:49
@stacimc stacimc changed the title Stop ingestion early and alert in Slack if we get more than 50 pages Handle Science Museum errors with batches larger than 50 pages Dec 2, 2022
@AetherUnbound

In doing some poking around on the Science Museum side of things, I was able to find this issue: TheScienceMuseum/collectionsonline#735. Looks like they came to a similar conclusion as us: WordPress/openverse-api#857

Pagination is a hard problem it seems!

What's interesting is that their API limits to the first 150 pages still:

https://github.com/TheScienceMuseum/collectionsonline/blob/6e29580414256db34783993a97624d75d60e9b8d/lib/search.js#L15

Do you think we should reach out for this case and file a bug on their repo, given that the issue we're seeing is on page 50 and not 150? And also given that a next link is still provided even though we aren't allowed to traverse beyond page 50.

@AetherUnbound (Contributor) left a comment

This is excellent! I'm not able to test it right now, but the code looks good. Love the new tests 🙌🏼 Just a few notes on wording, plus my earlier comment wondering if we should file an upstream issue.

Comment on lines +354 to +374
@pytest.mark.parametrize(
    "next_url, page_number, should_continue, should_alert",
    [
        # Happy path, should continue
        (
            "https://collection.sciencemuseumgroup.org.uk/search/date[from]/1875/date[to]/1900/images/image_license?page[size]=100&page[number]=20",
            20,
            True,
            False,
        ),
        # Don't continue when next_url is None, regardless of page number
        (None, 20, False, False),
        (None, 50, False, False),
        # Don't continue and DO alert when page number is 50 and there is a next_url
        (
            "https://collection.sciencemuseumgroup.org.uk/search/date[from]/1875/date[to]/1900/images/image_license?page[size]=100&page[number]=50",
            50,
            False,
            True,
        ),
    ],

Fantastic! 🤩

@AetherUnbound

I was able to test, and this worked well! One note: your example has year_ranges = [(0-1900), (1900-2023),]; I think it needs to be year_ranges = [(0, 1900), (1900, 2023),] (I was getting a TypeError when I copied and pasted the example, since it was just subtracting the numbers).

@krysal (Member) left a comment

LGTM! Lots of nice improvements here.

@stacimc stacimc merged commit 4a6e1b9 into main Dec 7, 2022
@stacimc stacimc deleted the fix/science-museum-large-batch-issue branch December 7, 2022 00:46

Successfully merging this pull request may close these issues.

Science museum script needs to implement get_should_continue to stop ingestion