fix(sharing): Avoid (dead)locking during orphan deletion #43252
Conversation
The old test still passes. This is good to go.
/backport to stable28
/backport to stable27
/backport to stable26
Wouldn't it be cleaner to select everything and chunk the deletes with array_chunk? I think that avoids the premature stop.
It avoids the stop, but could exhaust memory if there are lots of orphans.
Agreed, same as #41272.
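For illustration, a minimal sketch of the select-everything-and-chunk variant discussed above, assuming the OCP query builder; `$selectAllOrphans`, the chunk size, and the surrounding job context are illustrative, not code from this PR:

```php
use OCP\DB\QueryBuilder\IQueryBuilder;

// Hypothetical sketch: fetch ALL orphaned share ids up front, then delete
// them in bounded chunks. Not the approach merged in this PR.
$result = $selectAllOrphans->executeQuery(); // same orphan query, but without a LIMIT
$orphanIds = array_column($result->fetchAll(), 'id');
$result->closeCursor();

// array_chunk() bounds each DELETE statement, but $orphanIds itself is
// unbounded: with very many orphans this is where memory would run out.
foreach (array_chunk($orphanIds, 1000) as $chunk) {
	$delete = $this->db->getQueryBuilder();
	$delete->delete('share')
		->where($delete->expr()->in('id', $delete->createNamedParameter($chunk, IQueryBuilder::PARAM_INT_ARRAY)));
	$delete->executeStatement();
}
```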
Hum, ok, but that would mean like a LOT of orphans. Other things will break first :-P
Anyway, fine with either approach.
I've also created #43605 to reduce the interval from every 15 minutes to daily.
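For context, that interval change is a one-liner in the job's constructor. A sketch of what #43605 might look like, assuming the job extends OCP\BackgroundJob\TimedJob (the actual diff lives in that PR):

```php
use OCP\AppFramework\Utility\ITimeFactory;
use OCP\BackgroundJob\TimedJob;
use OCP\IDBConnection;

class DeleteOrphanedSharesJob extends TimedJob {
	public function __construct(ITimeFactory $time, private IDBConnection $db) {
		parent::__construct($time);
		// Run daily instead of every 15 minutes (see #43605).
		$this->setInterval(24 * 60 * 60);
	}
}
```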
This is done.
Conflicting due to #43605.
Force-pushed from f0743d5 to 5073fdb
rebased
Signed-off-by: Christoph Wurst <[email protected]>
Force-pushed from 5073fdb to 5c20f5b
Summary
Concurrent modifications of shared entries in oc_filecache make the share orphan background job lock up. This is fine: if the row is UPDATEd, we don't care; if the row is DELETEd, the next background job run will catch it.
This replaces the widely locking DELETE with a SELECT followed by a targeted DELETE. See the code comments.
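In shape, the new pattern is roughly the following (a sketch, assuming the OCP query builder; the batch size and loop structure are illustrative, see the code comments in the diff for the real implementation):

```php
use OCP\DB\QueryBuilder\IQueryBuilder;

// Sketch of the SELECT+DELETE pattern described above; not the literal diff.
// 1) Read a bounded batch of orphaned share ids. A plain SELECT takes no
//    wide locks, so concurrent filecache writers are not blocked.
$select = $this->db->getQueryBuilder();
$select->select('s.id')
	->from('share', 's')
	->leftJoin('s', 'filecache', 'fc', $select->expr()->eq('s.file_source', 'fc.fileid'))
	->where($select->expr()->isNull('fc.fileid'))
	->setMaxResults(100); // illustrative batch size

// 2) Delete only those ids, so the DELETE locks just the targeted oc_share
//    rows. Stale ids are harmless: an UPDATEd filecache row doesn't matter,
//    and a freshly DELETEd one is picked up by the next job run.
$delete = $this->db->getQueryBuilder();
$delete->delete('share')
	->where($delete->expr()->in('id', $delete->createParameter('ids')));

do {
	$result = $select->executeQuery();
	$ids = array_column($result->fetchAll(), 'id');
	$result->closeCursor();
	if ($ids === []) {
		break;
	}
	$delete->setParameter('ids', $ids, IQueryBuilder::PARAM_INT_ARRAY);
	$deleted = $delete->executeStatement();
} while ($deleted > 0);
```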
How to test
The real-world (dead)lock is hard to reproduce in a lab setup, so you have to simulate concurrent modifications of the filecache:
1. Run `SELECT id FROM oc_jobs WHERE class = 'OCA\\Files_Sharing\\DeleteOrphanedSharesJob';` and note down the job id.
2. Run `SELECT file_target, file_source FROM oc_share ORDER BY id DESC LIMIT 2;`. These are your f1 and f2 shares. Note down the file_source of f1.
3. Run `SELECT storage, path_hash FROM oc_filecache WHERE fileid = <file_source>;` and note down the storage and path_hash values.
4. In a database shell, lock f1's filecache row with an open transaction:
   ```sql
   START TRANSACTION;
   UPDATE `oc_filecache` SET `mtime` = GREATEST(`mtime`, 1706777193), `etag` = '659cf751000cd' WHERE (`storage` = <storage>) AND (`path_hash` IN (<path_hash>));
   ```
5. Run `php occ background-job:execute <id> --force-execute`.
6. On master: the job will stall. Run `ROLLBACK;` in the database shell to unlock the occ command.
7. Here: the job will finish and clean up the oc_share entry of f2 despite the lock on f1's filecache entry. Run `ROLLBACK;` to unlock your dev env for other operations ;-)

Checklist