Bulk Upload - Deadlock found when trying to get lock #29985
cc @artonge @nextcloud/desktop
Why was this closed? The referenced issue #29987 points back to this one. I am still having this issue.
I also had this exact same issue and decided to do some re-installations and tests. Since I have a NAS at home, I decided to keep all files on the NAS but run Nextcloud on a separate server (Proxmox). Mounting the NFS share on the Nextcloud server works fine. The problems started for me when also hosting the database (both Postgres and MariaDB) on the NAS and accessing it over the network by IP in Nextcloud. When I changed to a locally installed database on the Nextcloud server itself, accessed over a socket, everything worked flawlessly. Could it be network speed, or just the fact that the DB is accessed externally? LAN speed is 1000/1000 in my tests. I also tried with and without Redis as cache, but that didn't change anything. I've been running Nextcloud with a local database for several weeks now without any problems.
Yes, I had deadlocks here too. They corresponded to the query; it does not look like a very complex statement, so it's up to MariaDB to handle it. I also have not found any bad behaviour of NC or PHP around this (the earlier deadlocks made it impossible to sync, as the client always restarted the sync and always ran into the same deadlock). I changed the MariaDB config to use fsync instead of direct writes (https://mariadb.com/docs/reference/mdb/system-variables/innodb_flush_method/), then upped the MariaDB version to 10.6.5, and there have been no deadlocks for some days now, although my system is currently not under heavy load. Maybe check your config and temporary tables using mysqltuner.pl and your slow log to eliminate bottlenecks for your DB; deadlocks may become rarer when the DB performs well in general.
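For reference, the flush-method change described above would look roughly like this in the MariaDB server configuration. This is only a sketch; the exact file (for example /etc/mysql/mariadb.conf.d/50-server.cnf) varies between distributions.

```ini
# Sketch of the change described above: make InnoDB use fsync()
# instead of direct (O_DIRECT) writes. Verify against your own setup.
[mysqld]
innodb_flush_method = fsync
```

After restarting MariaDB, the active value can be checked with `SHOW VARIABLES LIKE 'innodb_flush_method';`.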
Still an issue on NC 23.0.2
Still an issue on NC 23.0.3
OK, still a problem here too. Can you check in the logs whether this is related to the "update oc_filecache" query, as it is for me? In my case it was caused by deleting multiple files in a directory and then recalculating the directory size, which seems to be done recursively (scanning all parent folders) by the function move2trash of "class":"OCA\Files_Trashbin\Trashbin". The way this is done suggests the calculation of directory sizes is not synchronised by Nextcloud, so it would not surprise me if the next deletion caused deadlocks in that table for the same entry; that may not be bad as such, but it may lead to wrong data. Maybe an approach for Nextcloud here: synchronise filecache operations...
My log from the web interface is:
Yes, this is all due to filecache updates: in your case putting a file into Nextcloud, in my case deleting one, and in the top case too (maybe in the versions manager, which irritated me, but it's basically the same). Nextcloud should really think about serialising the "update" function of the class "OC\Files\Cache\Updater" so it does not interfere with itself when doing many changes, but I would rate it minor, as it will not impact basic functionality of the server.
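To make the failure mode concrete: one classic way such a deadlock can arise is two transactions updating the same ancestor rows of oc_filecache in a different order. This is only a hypothetical illustration of the general pattern, not a trace of what Nextcloud actually executes; the fileid values are invented.

```sql
-- Hypothetical illustration of conflicting size updates; fileid values are invented.

-- Session 1: propagating a size change for /a/b, then its parent /a
BEGIN;
UPDATE oc_filecache SET size = size - 1024 WHERE fileid = 42;  -- /a/b
UPDATE oc_filecache SET size = size - 1024 WHERE fileid = 7;   -- /a   (blocks on session 2)

-- Session 2: another operation touching the same ancestors in the opposite order
BEGIN;
UPDATE oc_filecache SET size = size + 2048 WHERE fileid = 7;   -- /a
UPDATE oc_filecache SET size = size + 2048 WHERE fileid = 42;  -- /a/b (blocks on session 1)

-- InnoDB detects the lock cycle and aborts one transaction with
-- "Deadlock found when trying to get lock".
```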
still an issue with Nextcloud 24 |
Still a blocking issue; users cannot bulk-upload directories, making Nextcloud useless for them…
Is there any reliable workaround for this? It is a huge issue when users are migrating data from an old DMS to Nextcloud, because the migration breaks somewhere in the middle.
Same as #22482, which is locked and which all of these get duped to.
This has become much more prominent with the v25 release: simply uploading something with loads of small files through the web UI always results in this issue, which makes for simply unworkable situations.
I got the same issue over and over on a new installation despite following much advice from different forums & docs. I tried the snap install on another VM and was able to upload the full directory smoothly without issue.
Same problem when uploading a folder with lots of small files. A compressed file uploaded without a hitch.
I've applied the recommended MySQL configuration (MariaDB for me) seen here: https://docs.nextcloud.com/server/stable/admin_manual/configuration_database/linux_database_configuration.html. Adding this to my conf file:
makes the error disappear.
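The exact snippet isn't preserved above, but judging from the linked documentation and the discussion below about the transaction isolation level and binlog_format, the relevant lines are most likely along these lines (verify against the current Nextcloud docs):

```ini
# Likely the settings referred to above, based on the Nextcloud database
# configuration recommendations; double-check against the linked documentation.
[mysqld]
transaction_isolation = READ-COMMITTED
binlog_format = ROW
```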
Thanks @nderambure, it seems your suggestion fixed the issue, and I am running NC 25. For others who, like me, are not so familiar with SQL and this sort of thing: I added the following to
Wow.
Tested with 200x random 10 kB files in 1 directory. NC25. This goes in an awesome direction. Approaching happiness. btw
Thank you for figuring this out! According to the MySQL manual:
This might explain the lack of deadlocks with this configuration. Nextcloud could also set this isolation level at connection time (so we wouldn't need to set it globally, affecting every database):
The situation is similar with binlog_format:
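As a sketch of what the per-connection variant could look like, these are standard MySQL/MariaDB statements run right after connecting. This is illustrative rather than something Nextcloud currently does, and changing binlog_format at session scope needs elevated privileges.

```sql
-- Illustrative only: set the isolation level for the current session
-- instead of globally for the whole database server.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Session-scope binlog_format needs SUPER / BINLOG_ADMIN privileges,
-- so in practice it usually remains a server-side (global) setting.
SET SESSION binlog_format = 'ROW';
```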
About the "discover" of this configuration recommandation in NC doc, I think that when I first Install my NC (the 15th !), this doc was not existing, and since I'm not a Database specialist, I missed it. Thank you @szotsaki for the precision, I'm wondering why NC does not set this for himself instead of configuring it globally, which force to use a custom instance of MySQL specifically for NC. Any NC dev that could enlighten us ? |
Changed the aforementioned values in MariaDB, re-enabled bulk upload in the config, deleted and uploaded 3k files, and still encountered the same issues.
Closing as this seems to be a configuration issue based on #29985 (comment) |
Same here: changing the DB config to READ_COMMITTED did not solve the problem.
I'm running Postgres (where default_transaction_isolation = 'read committed' is actually the default) and the problem still exists with NC 26.0.1.
@robbytobby @szaimen @szotsaki Unfortunately I am currently low on resources, but please, Nextclouders, have a look at this! The best approach imo would be either to cache the size calculation (ideally in memcached/Redis) and only write the results to the DB once after the bunch of commits (when using bulk uploads), or to serialise access to the DB, which would be much slower and maybe complicated to do, but would solve the deadlocks.
@obel1x Hi, just wondering if there is any temporary solution? Like manually deleting something? My Nextcloud has been totally stopped for days.
It seems that this error did not actually stop my Nextcloud from working. I deleted all the file locks in MySQL and executed
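The exact command isn't preserved above, but the commonly documented way to clear stale file locks looks roughly like this; it assumes the default oc_ table prefix and that the instance is put into maintenance mode first (occ maintenance:mode --on, then --off again afterwards).

```sql
-- Clear all file locks from the database (run while in maintenance mode).
-- Assumes the default "oc_" table prefix; adjust if yours differs.
DELETE FROM oc_file_locks;
```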