[Bug]: Queries from occ maintenance:repair-share-owner stuck for 40+ hours (#47184)
Comments
The query does not use indices and has to scan a lot of data, especially in oc_mounts. It looks like the query comes from \OC\Core\Command\Maintenance\RepairShareOwnership::getWrongShareOwnership. The code was moved in 3d68a52; before that there was no cast, and the cast seems to prevent index usage.
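For illustration, the join in question boils down to the fragment below (a simplified sketch reduced from the full query in the bug description, not the exact generated SQL). Because the cast is applied to the indexed column, the database has to compute it for every oc_filecache row and cannot use the PRIMARY index on fileid for the join:

-- Simplified sketch (assumption: reduced from the full query below, selected columns trimmed)
SELECT s.id
FROM oc_share s
INNER JOIN oc_filecache f
  ON s.item_source = CAST(f.fileid AS CHAR);  -- cast on the indexed side blocks index use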
Discussing this with @icewind1991, it seems the cast was put in place because PostgreSQL isn't as forgiving as MariaDB when comparing strings to integers. An easy approach would be to add a condition to the code so that the cast is only used for PostgreSQL; otherwise, use a bare query.
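To sketch the idea on the SQL level (the platform check itself would live in the PHP code; what the generated queries would look like is an assumption here):

-- MySQL/MariaDB: bare comparison, implicit coercion, PRIMARY index on fileid stays usable
SELECT s.id
FROM oc_share s
INNER JOIN oc_filecache f
  ON s.item_source = f.fileid;
-- PostgreSQL: keep the existing explicit cast (as in the query in the bug description),
-- since it does not implicitly compare text with integers.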
That will still make the query perform poorly on Postgres. Are there other ways to restructure it into a more efficient execution plan?
Can't we cast the […]?
It works too and allows the DB to use the PRIMARY index.
If that works with Postgres it's a really nice solution. The original query takes >3 s on my personal instance; without the cast, and with the changed cast, it finishes in <=0.003 s. Uneducated 👍
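For completeness, a sketch of what the "changed cast" could look like: moving the cast to the non-indexed side of the comparison, i.e. casting oc_share.item_source instead of oc_filecache.fileid. The exact column and target type used in the actual fix are assumptions here; this is MySQL-flavoured SQL, and a cross-database version would need to go through the query builder's cast helpers:

-- Sketch (assumption): cast the oc_share side so the join can still seek the
-- PRIMARY index on oc_filecache.fileid
SELECT s.id
FROM oc_share s
INNER JOIN oc_filecache f
  ON CAST(s.item_source AS UNSIGNED) = f.fileid;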
Bug description
Queries from occ maintenance:repair-share-owner remain in state=executing for 40+ hours on large instances.
SELECT `s`.`id`, `m`.`user_id`, `s`.`uid_owner`, `s`.`uid_initiator`, `s`.`share_with`, `s`.`file_target` FROM `oc_share` `s` INNER JOIN `oc_filecache` `f` ON `s`.`item_source` = CAST(`f`.`fileid` AS CHAR) INNER JOIN `oc_mounts` `m` ON `f`.`storage` = `m`.`storage_id` WHERE (`m`.`user_id` <> `s`.`uid_owner`) AND (CONCAT('/', `m`.`user_id`, '/') = `m`.`mount_point`)
EXPLAIN output
Steps to reproduce
- Have cronjobs active (every 15 min)
- Run occ maintenance:repair-share-owner
- Cause some changes to the file cache every hour:
  - Create 1000 files directly in the data directory, each with 1 MB of random binary content
  - Run occ files:scan on that directory
  - Delete the files
  - Run occ files:scan again on that directory
- Wait for 12+ hours ...
Expected behavior
Database doesn't get stuck
Installation method
None
Nextcloud Server version
28
Operating system
Debian/Ubuntu
PHP engine version
PHP 8.1
Web server
Apache (supported)
Database engine version
MySQL
Is this bug present after an update or on a fresh install?
Upgraded to a MAJOR version (ex. 28 to 29)
Are you using the Nextcloud Server Encryption module?
Encryption is Disabled
What user-backends are you using?
Configuration report
No response
List of activated Apps
Nextcloud Signing status
No response
Nextcloud Logs
No response
Additional info
mysql-processes