Prevent user sign up on LOD imports #11827
Conversation
Use the dataset flag to infer whether we should allow sign up or not.
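The mechanism is essentially: look up the FacilityDataset for the relevant user, check a flag on it, and cache the answer per dataset_id. Here is a minimal sketch of that idea using the stdlib decorator; the helper name, the maxsize value, and the learner_only_import field are illustrative assumptions, not the PR's actual code (the PR itself uses Kolibri's lru_cache backport for Python 2.7 compatibility):

```python
from functools import lru_cache

from kolibri.core.auth.models import FacilityDataset


@lru_cache(maxsize=16)  # only a handful of datasets exist on a device, so a small bound is plenty
def dataset_allows_sign_up(dataset_id):
    """Hypothetical helper: should sign up be offered for this dataset?"""
    dataset = FacilityDataset.objects.get(id=dataset_id)
    # Assumed flag name: a dataset brought in by a learner-only device (LOD)
    # import should never offer sign up, regardless of the facility setting.
    if getattr(dataset, "learner_only_import", False):
        return False
    return dataset.learner_can_sign_up
```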
LGTM - learners on a LOD can no longer sign up. Tested on Android and Ubuntu with imported, created and migrated learners, plus regression tested.
Manual QA passes, good to go! 💯 👏🏽
Code looks good. I've left a couple of notes as a reminder to myself of things to do in future Kolibri versions.
@@ -19,6 +19,7 @@
from kolibri.core.content.constants.schema_versions import MIN_CONTENT_SCHEMA_VERSION
from kolibri.utils.android import ANDROID_PLATFORM_SYSTEM_VALUE
from kolibri.utils.android import on_android
from kolibri.utils.lru_cache import lru_cache
Let's remember to remove this file in 0.17, after stopping Python 2.7 support.
@@ -486,6 +487,7 @@ def get_device_info(version=DEVICE_INFO_VERSION):
    return info


@lru_cache()
Using lru_cache without a max_size does not use LRU capabilities; it's a plain cache with no memory limit. That is not a problem in this case, because only a few dataset_ids will ever be passed to this function. Maybe using a @cache decorator would result in clearer code for future devs.
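To make the distinction concrete, here is a small self-contained illustration with the stdlib decorator (not Kolibri's backport): a bounded maxsize evicts the least-recently-used entry once the cache is full, whereas maxsize=None never evicts anything and is effectively what functools.cache (Python 3.9+) is shorthand for.

```python
from functools import lru_cache


@lru_cache(maxsize=2)
def lookup(dataset_id):
    print("computing", dataset_id)
    return dataset_id.upper()


lookup("a")
lookup("b")
lookup("c")   # "a" is now the least recently used entry and gets evicted
lookup("a")   # recomputed: prints "computing a" a second time
print(lookup.cache_info())  # CacheInfo(hits=0, misses=4, maxsize=2, currsize=2)
```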
Ah, I was misled by our backport, which has a default max_size parameter: https://github.com/learningequality/kolibri/blob/develop/kolibri/utils/lru_cache.py#L54 - we should ensure other uses of lru_cache in the codebase add a max_size parameter as well, as none of them do.
Looks like the functools version has a default maxsize too? https://docs.python.org/3/library/functools.html#functools.lru_cache
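For reference, the stdlib decorator called with no arguments bounds the cache at 128 entries, which cache_info() reports directly:

```python
from functools import lru_cache


@lru_cache()  # no arguments: functools defaults to maxsize=128
def square(x):
    return x * x


print(square.cache_info())  # CacheInfo(hits=0, misses=0, maxsize=128, currsize=0)
```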
Summary
References
Fixes #11817
Reviewer guidance
Do a learner-only import from a facility that allows learners to sign up.
Ensure that you cannot create an account.
Testing checklist
PR process
Reviewer checklist
(yarn and pip)