
feat!: Drop the block_structure.storage_backing_for_cache WaffleSwitch #35185

Merged
merged 16 commits into from
Nov 6, 2024

Conversation

feanil
Contributor

@feanil feanil commented Jul 25, 2024

This work is part of DEPR openedx/public-engineering#32

See the above link for the rationale. This PR makes it so that the course structure
cache is always backed by persistent storage.

BREAKING CHANGE: The block_structure.storage_backing_for_cache will no
longer exist and its value will be ignored. If you have this switch set
in your instance you can remove it. The backing cache is now always ON.

Questions

Q: Do we need to run any commands to generate the initial
cache for all existing operators? Are there docs we can link to or create for
this?

A: Unclear, but as part of the get method on the BlockStructureStore, if the structure is
not found in the cache/storage, it will be retrieved from the backing store and added
to the cache. But currently the same check is not happening for getting the structure
from the model. Should we update this? I think operators will need to run the
generate_course_blocks management command before they deploy this code, or
they will see errors in their deployment.

e.g. ./manage.py lms generate_course_blocks --all_courses --settings=production

@feanil feanil force-pushed the feanil/blockstructure_cache branch from 88f76e1 to 092f565 Compare July 29, 2024 17:44
@feanil feanil force-pushed the feanil/blockstructure_cache branch 2 times, most recently from 18d943f to 2548c2c Compare August 8, 2024 18:39
@feanil feanil force-pushed the feanil/blockstructure_cache branch from 2548c2c to fd1e239 Compare August 9, 2024 14:26
@feanil feanil marked this pull request as ready for review August 9, 2024 17:24
@feanil feanil force-pushed the feanil/blockstructure_cache branch from fd1e239 to 8f4ae4e Compare August 13, 2024 18:31
@feanil
Contributor Author

feanil commented Aug 20, 2024

@ormsbee per our conversation last week, I dug a bit deeper into the query count differences, and it looks like a lot of the extra queries are savepoints around updating a blockstructure and then also setting the modified date on the parent block when a block structure changes. I've collected the before and after queries for the test that changed by more than one in this file.

Contributor

@ormsbee ormsbee left a comment

Some minor questions and requests.

In terms of the deployment risk for this: I agree that operators should run the backfill command when upgrading. Honestly, operators should in general probably run a simulate_publish on all their courses every time they upgrade, because not all devs will be as conscientious as you are about thinking through their upgrade process.

I suspect that whether to lazily re-generate the data on read or to just fail the request has to do with the failure mode when something goes wrong with the publish process. The collect process is so expensive for large courses that it's probably better for a high volume site to fail the request altogether than to try to dynamically regenerate on the fly.

Unclear, but as part of the get method on the BlockStructureStore, if the structure is not found in the cache/storage, it will be retrieved from the backing store and added to the cache. But currently the same check is not happening for getting the structure from the model. Should we update this?

I don't think we should update it. Grabbing a completely generated blob of data from S3/filestore and stuffing it into memcached/redis is relatively cheap. Regenerating the collected data is much more expensive.

At edX, the publish process would sometimes break block structure collection for a certain course because of some content edge case. The result would be that some data would be unavailable or out of date (e.g. the outline not showing a newly added subsection). It's frustrating for users, but it's better to show that stale data quickly than to constantly hammer the site to break on the same errors over and over again.
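The read-through behavior described above can be sketched with a toy store. The names here are illustrative only, not the actual BlockStructureStore API: a cache miss falls back to durable storage and warms the cache, but never regenerates (collects) the data, because collection is the expensive step.

```python
class ToyBlockStructureStore:
    """Toy sketch of a read-through cache: a cache miss falls back to
    persistent storage, but never regenerates (collects) the data."""

    def __init__(self, cache, storage):
        self.cache = cache      # fast, volatile (memcached/redis stand-in)
        self.storage = storage  # durable (database/S3 stand-in)

    def get(self, key):
        # 1. Cheap path: the structure is already in the cache.
        if key in self.cache:
            return self.cache[key]
        # 2. Fallback: fetch the fully-collected blob from storage and
        #    warm the cache with it -- cheap compared to regeneration.
        if key in self.storage:
            self.cache[key] = self.storage[key]
            return self.cache[key]
        # 3. Deliberately do NOT regenerate here: collection is expensive
        #    for large courses, so a missing structure is an error for the
        #    caller to handle (or for an operator to backfill).
        raise KeyError(key)
```

The design choice in step 3 matches the reasoning above: failing fast (or serving stale data) beats hammering the site with repeated expensive regeneration attempts.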

# Review Question: Should we be doing this here? Does it make sense to do
# this in the xmodule/modulestore/tests/factories.py BlockFactory class
# as a part of the publish there?
bs_api.update_course_in_cache(self.course_data.course_key)
Contributor

I think doing it here manually is okay. I'd probably just enable the published signal for this test case and let it build everything–to test part of the machinery as well, since the triggers for that may change over time. It might involve splitting off this test from the rest of the test case though.

I wouldn't put it in xmodule/modulestore/tests/factories.py though, since that's called by a lot of things that may not require this functionality.

Contributor Author

Makes sense, I think I'll leave it as is then.

@@ -76,44 +77,45 @@ def setUp(self):
CourseEnrollment.enroll(self.student, self.course.id)
self.instructor = UserFactory.create(is_staff=True, username='test_instructor', password=self.TEST_PASSWORD)
self.refresh_course()
# Since this doesn't happen automatically and we don't want to run all the publish signal handlers
# Just make sure we have the latest version of the course in cache before we test the problem.
bs_api.update_course_in_cache(self.course.id)
Contributor

Again, I'd probably emit the signal and let everything run (I think this is the most expensive part of it anyhow), but I think either way is okay.

Contributor Author

So this class already has the course_published signal enabled, so I thought the issue might be that the CourseFactory needed the emit_signals parameter set to true, but even with that this did not work. How important is it that this be figured out for this PR? I can dig into it further if needed.

Contributor

Does it work when you test the functionality locally via browser? If it doesn't happen automatically, it's possible that something is going wrong (e.g. it's going to trigger later on lazy-load). I'm really sorry to keep dragging this PR out, but so many things rely on block structures that I think it's worth investigating why it doesn't refresh.

Contributor Author

@ormsbee I've tested it locally and created a new course and made various changes to the course sections/subsections and I can see them all propagate correctly to the course outline page. Let me know if there is anything else you want me to test.

@kdmccormick kdmccormick removed their request for review September 16, 2024 17:13
@feanil feanil requested a review from ormsbee September 23, 2024 18:21
@feanil feanil force-pushed the feanil/blockstructure_cache branch 2 times, most recently from 5541aae to 9fb8492 Compare October 2, 2024 14:13
Anywhere the storage-backed cache for block structures was previously enabled
conditionally, it is now ON by default.
Since the cache is now always on, remove the test cases that cover the
disabled state.
Remove the `enable_storage_backing_for_cache_in_request` function and
its uses in the platform.  The function is no longer needed because the
storage backing for the block_structure cache will be ON by default
moving forward.

BREAKING CHANGE: The `enable_storage_backing_for_cache_in_request`
function no longer exists and any calls to it should be removed.  The
cache it enables is now always ON.
This work is part of DEPR openedx/public-engineering#32

Now that we've removed all uses of this switch, remove the declaration
as well.

BREAKING CHANGE: The `block_structure.storage_backing_for_cache` will no
longer exist and its value will be ignored. If you have this switch set
in your instance you can remove it. The backing cache is now always ON.
Previously, we were not caching BlockStructures to the database when we
were adding them to the store by default.  Now that we are doing that,
the BlockStructureFactory test failed because it was taking a shortcut
that would no longer work.  It was just creating a blockstructure that
had relations but not any block information.

Now that we always persist updates to the database, this broke:
persisting the structure to the database requires looking up the block
information from the block_structure, which now fails.

This change updates the test mixin to add more data so that the content
can be persisted to the database successfully as a part of this test.
We don't call the modulestore or update the cache here now that we are
backed by the database model.  Previously the cache would change because
the `_encode_root_cache_key` function in the `BlockStructureStore` class
used `BlockStructureBlockData.VERSION` as part of the cache key when
the data was not being cached in a DB model.
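The cache-key versioning mentioned above can be illustrated with a toy example. The real key encoding in `BlockStructureStore` differs; this sketch only shows why embedding a data-format version in the key makes a VERSION bump naturally invalidate old cache entries.

```python
def encode_root_cache_key(usage_key, data_version):
    # Including the data-format version in the key means a VERSION bump
    # misses every entry written under the old format -- no explicit
    # cache flush is needed.
    return f"v{data_version}.root.key.{usage_key}"

cache = {}
cache[encode_root_cache_key("block-v1:Org+C+Run", 2)] = "old-format blob"

# After a VERSION bump, lookups build a new key and miss the stale entry.
new_key = encode_root_cache_key("block-v1:Org+C+Run", 3)
assert new_key not in cache
```

When the data lives in a DB model instead, the model row (not the cache key) carries the version, which is why the cache key no longer needs to change here.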
Now that the model backed cache is on by default, we don't need to keep
the StubModel object around.
With the new block_structure model cache enabled by default, we're issuing
queries to it as part of other tests.
Because signals are disabled by default for performance reasons, this
doesn't happen automatically.  So we manually refresh the course in the
cache after all the changes have been made so that the course in the
cache matches the latest version in the modulestore.
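The test pattern this describes can be sketched abstractly. The names below are hypothetical stand-ins; the real code wires this through the course_published Django signal and bs_api.update_course_in_cache.

```python
class ToyModulestore:
    """Stand-in for the modulestore: publishing normally fires signal
    handlers, but tests disable them for performance."""

    def __init__(self):
        self.courses = {}
        self.signal_handlers = []  # empty in tests: signals disabled

    def publish(self, course_key, content):
        self.courses[course_key] = content
        for handler in self.signal_handlers:  # no-op when disabled
            handler(course_key)


class ToyStructureCache:
    """Stand-in for the block structure cache/store."""

    def __init__(self, store):
        self.store = store
        self.cached = {}

    def update_course_in_cache(self, course_key):
        # Manual refresh, mirroring what the tests call explicitly
        # because the publish signal handlers don't run.
        self.cached[course_key] = self.store.courses[course_key]
```

With signals disabled, publishing leaves the cache stale until the test calls update_course_in_cache itself, which is exactly the manual refresh step the commit adds.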
This test had a redundant call to get the course data from the store,
because that already happens at the end of the setup function.  Also,
because the expected call structure was being built inside the assert, it
was harder to inspect when debugging.  Make the code a little bit
easier to debug in case we're back here in the future.
Provide a link to more docs from the code in case we need more context
in the future.
@feanil feanil force-pushed the feanil/blockstructure_cache branch from 9fb8492 to 4768568 Compare November 5, 2024 16:09
@feanil feanil merged commit 5904b54 into master Nov 6, 2024
49 checks passed
@feanil feanil deleted the feanil/blockstructure_cache branch November 6, 2024 15:52
@edx-pipeline-bot
Contributor

2U Release Notice: This PR has been deployed to the edX staging environment in preparation for a release to production.

@edx-pipeline-bot
Contributor

2U Release Notice: This PR has been deployed to the edX production environment.
