Move files + update cache one by one instead of bulk update #13775
Comments
@icewind1991 can you please comment on this? It might be the solution for #13391
For cross-storage moves this seems good
@icewind1991 I think the problem in #13391 was about local moves. It's just that the cache takes too much time to update at the end, so there is a time window where the disk content and the cache are completely different. We need to shrink that window as much as possible (unless we can come up with a better solution, some kind of locking?)
#13948 for the cache issue
Problem was fixed through transactional file locking. |
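The transactional file locking mentioned above can be illustrated with a minimal sketch (this is not ownCloud's actual implementation, just an assumed shape using POSIX advisory locks): an exclusive lock is held for the duration of the move, so any reader that takes a shared lock on the same lock file never observes the half-moved state.

```python
import fcntl
import os

def locked_move(lock_path, src, dst):
    """Illustrative sketch: serialize a move behind an exclusive
    advisory lock. Readers taking LOCK_SH on lock_path will block
    until the move is complete, so they never see an in-between state."""
    with open(lock_path, "w") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)  # exclusive lock
        try:
            os.replace(src, dst)               # atomic rename on a local FS
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)
```

The key property is that the lock scope covers the whole transition, which is what makes the fix "transactional" from a reader's point of view.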
When moving a huge folder, all the files are moved first and the cache is only updated in one bulk operation at the end.
If a PHP timeout occurs and/or the process is killed, the discrepancy between cache and filesystem is very big.
We should rather move the files one by one, and for each moved file, update the cache directly.
This way, if there is a crash, the database would still be in sync up to the last completed move. The unmoved files would remain in their old location with their matching cache entries. The worst-case damage would be a single broken file.
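The proposed approach can be sketched as follows. This is an assumed, simplified model (not ownCloud's real filecache schema or mover): each file move is followed immediately by its own committed cache update, so a crash between iterations leaves the database consistent with everything moved so far.

```python
import os
import sqlite3

def move_folder_incrementally(db, src_root, dst_root):
    """Sketch: move a folder file-by-file, committing a cache (DB)
    update after each individual move. The `filecache(path)` table is
    a hypothetical stand-in for the real file cache."""
    os.makedirs(dst_root, exist_ok=True)
    for dirpath, dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target_dir = dst_root if rel == "." else os.path.join(dst_root, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(target_dir, name)
            os.replace(src, dst)  # atomic on a local FS
            db.execute(
                "UPDATE filecache SET path = ? WHERE path = ?",
                (dst, src),
            )
            db.commit()  # cache matches disk after every single file
    # cleanup of the now-empty source directories is omitted for brevity
```

A crash at any point leaves at most the file currently being processed in an ambiguous state; everything before it is fully moved and recorded, everything after it is fully untouched.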
@icewind1991 @MorrisJobke @DeepDiver1975 what do you think?
But on a local FS I'd still expect `rename()` to be atomic, so the benefit would mostly be on non-local storages. Even for local storages, though, the final bulk DB update for all the entries might be huge and take a long time.