big folder owncloud/data/<user>/cache #9513

Closed
mzaian opened this issue Jul 8, 2014 · 41 comments
mzaian commented Jul 8, 2014

Hello,
I'm having a big issue with my ownCloud setup at work. I recently upgraded the server to 6.0.4 and the desktop clients to the latest version, 1.6.1.

Some ownCloud users have a large cache directory, and 10 MB chunks keep being created in their respective directories. I tried deleting these files manually, but they keep coming back, and my storage keeps growing since about half of my users are affected.

I need to solve it as soon as possible.

Thanks,


PVince81 commented Jul 8, 2014

Please try switching the cron setting to system cron on the admin page, then set up the crontab as specified here: http://doc.owncloud.org/server/6.0/admin_manual/configuration/background_jobs.html#cron
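
For reference, a typical crontab entry for the web server user looks roughly like this (the install path /var/www/owncloud and the 15-minute interval are assumptions, adjust them to your setup):

# edit the www-data crontab with: crontab -u www-data -e
*/15 * * * * php -f /var/www/owncloud/cron.php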

There is a cleanup routine that auto-deletes abandoned chunks that are older than one day.

Not sure why you have that many abandoned (cancelled) chunks in the first place.

PVince81 added the Bug label Jul 8, 2014

mzaian commented Jul 9, 2014

I discussed the issue with you yesterday on IRC. I have set cron to system mode, but I'm still getting the same problem with my users.

What is the cleanup routine? Do you mean using find and deleting the files?


PVince81 commented Jul 9, 2014

No, I mean there is already garbage-collection code here: https://github.com/owncloud/core/blob/master/lib/private/cache/file.php#L118

And now that I see it, it seems to be only triggered when the user logs in again... and the sync clients keep the current session.

I'll double-check; I remember seeing a background job that was supposed to call that function somehow.


PVince81 commented Jul 9, 2014

CC @icewind1991 for clarification


PVince81 commented Jul 9, 2014

@mzaian in the meantime you could try doing it with a shell script run through a cron job.
The script should use find to locate all files older than a day or so and delete them.
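
A minimal sketch of such a cron-driven cleanup, assuming the default data directory /var/www/owncloud/data (the path and the one-day age threshold are assumptions, adjust them to your installation):

# remove chunk files older than one day from every user's cache folder
find /var/www/owncloud/data/*/cache -type f -mtime +1 -delete

Run it as the web server user (e.g. www-data) so file ownership and permissions stay consistent.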


mzaian commented Jul 9, 2014

I will clean up the cache dirs manually until I have a fix for this issue.


PVince81 commented Jul 9, 2014

Let us know whether that worked.
If yes, then it confirms that putting a background job in ownCloud to clean up old chunks will work as well.
Thanks.


mzaian commented Jul 12, 2014

The background job is not cleaning up the old stray chunks, so it's not the solution. Can anyone confirm whether we are hit by a bug, or what the root cause of this problem is?

@PVince81

@mzaian I don't understand. Do you mean you set up a cron job to clean up old chunks (ideally once a day) but the cache folder is still full of abandoned chunks?

Something else to check: have a look at a specific known file.
For example, you could have data1.bin, which is 59 MB.
Then try uploading it with the sync client.
It should generate 6 chunks of 10 MB, the last one with 9 MB.

Now, to find out whether the remaining chunks are from an abandoned/cancelled upload, check whether the total size of the chunks matches the size of the known file.

If the size matches: it's a bug, because the chunks must be deleted after the final file has been assembled.
If the size does not match or a chunk is missing: it's an aborted upload.
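
One quick way to compare, sketched here with GNU coreutils and the hypothetical data1.bin example from above (adjust the data directory path to your setup):

cd /var/www/owncloud/data/<user>/cache
du -cb data1.bin-chunking-*     # the final "total" line should equal the original file size if all chunks arrived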


mzaian commented Jul 14, 2014

Doing a cleanup using find and rm manually or using a cronjob is fine.

This is an example for a specific file :-
File name: oaj2se.rar
Size : 14.3 MB (15,016,406 bytes)

Now to the user's cache directory in the server:-

root@owncloud:/data/owncloud/data/eomar/cache# ls -ltrah oaj2se*
-rw-r--r-- 1 www-data www-data 10M Jul 14 2014 oaj2se.rar-chunking-1855759923-0
-rw-r--r-- 1 www-data www-data 10M Jul 14 2014 oaj2se.rar-chunking-1855769361-0
-rw-r--r-- 1 www-data www-data 10M Jul 14 2014 oaj2se.rar-chunking-1855781090-0
-rw-r--r-- 1 www-data www-data 10M Jul 14 2014 oaj2se.rar-chunking-1855776849-0
-rw-r--r-- 1 www-data www-data 10M Jul 14 2014 oaj2se.rar-chunking-1855750221-0
-rw-r--r-- 1 www-data www-data 10M Jul 14 2014 oaj2se.rar-chunking-1855771204-0
-rw-r--r-- 1 www-data www-data 10M Jul 14 2014 oaj2se.rar-chunking-1855775489-0
-rw-r--r-- 1 www-data www-data 10M Jul 14 2014 oaj2se.rar-chunking-1855773964-0
-rw-r--r-- 1 www-data www-data 10M Jul 14 2014 oaj2se.rar-chunking-1855776794-0
-rw-r--r-- 1 www-data www-data 10M Jul 14 2014 oaj2se.rar-chunking-1855759486-0
-rw-r--r-- 1 www-data www-data 10M Jul 14 2014 oaj2se.rar-chunking-1855767325-0
-rw-r--r-- 1 www-data www-data 10M Jul 14 2014 oaj2se.rar-chunking-1855778454-0
-rw-r--r-- 1 www-data www-data 10M Jul 14 2014 oaj2se.rar-chunking-1855753758-0


mzaian commented Jul 16, 2014

No updates? Has the issue been confirmed or not?

@PVince81

I cannot confirm your issue as I don't have an environment where it happens.
In my case the chunks are always auto-cleaned properly.

From what I see in your last info, it seems that chunk number 0 is recreated over and over again.
The other big number is the transaction id. As you can see, each attempt starts a new transaction with the first chunk. For some reason it looks like the client is re-uploading the first chunk multiple times.

Can you provide a similar listing where the time appears? (There's only the date.)
I want to check how often this happens.
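
For example, with GNU ls (the path and file name follow your earlier listing):

ls -ltrah --full-time /data/owncloud/data/eomar/cache/oaj2se.rar-chunking-*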

It would also be good to check owncloud.log and the web server error log to see whether there are errors related to such files / repeated uploads.

You can also check the sync client log from the same time period to see whether the client was getting errors that made it retry multiple times.


mzaian commented Jul 16, 2014

The issue has been reported in the forums by many users. I will post the file timestamps from the server and the matching log from the user's sync client.

@PVince81

Can you post a link to the forum topics you had in mind so I can have a look?
If they also contain the required info, that will help debugging.


mzaian commented Jul 16, 2014

Here are the file chunks from the server side :-

root@owncloud:/data/owncloud/data/mhazem/cache# ls -ltrah --full-time *
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 09:58:00.000000000 +0200 visualvm_122.zip-chunking-3068839342-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 09:58:43.000000000 +0200 visualvm_122.zip-chunking-3068830077-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 10:03:56.000000000 +0200 visualvm_122.zip-chunking-3068848323-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 10:09:08.000000000 +0200 visualvm_122.zip-chunking-3068842398-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 10:14:20.000000000 +0200 visualvm_122.zip-chunking-3068834074-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 10:19:31.000000000 +0200 visualvm_122.zip-chunking-3068849677-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 10:24:43.000000000 +0200 visualvm_122.zip-chunking-3068845714-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 10:29:54.000000000 +0200 visualvm_122.zip-chunking-3068831735-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 10:35:07.000000000 +0200 visualvm_122.zip-chunking-3068824583-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 10:40:18.000000000 +0200 visualvm_122.zip-chunking-3068836555-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 10:45:30.000000000 +0200 visualvm_122.zip-chunking-3068824620-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 10:50:41.000000000 +0200 visualvm_122.zip-chunking-3068848342-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 10:55:54.000000000 +0200 visualvm_122.zip-chunking-3068849637-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 11:01:06.000000000 +0200 visualvm_122.zip-chunking-3068831493-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 11:06:18.000000000 +0200 visualvm_122.zip-chunking-3068828783-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 11:11:30.000000000 +0200 visualvm_122.zip-chunking-3068830407-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 11:16:41.000000000 +0200 visualvm_122.zip-chunking-3068837922-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 11:21:54.000000000 +0200 visualvm_122.zip-chunking-3068851757-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 11:27:05.000000000 +0200 visualvm_122.zip-chunking-3068853579-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 11:32:31.000000000 +0200 visualvm_122.zip-chunking-3068833348-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 11:37:43.000000000 +0200 visualvm_122.zip-chunking-3068840562-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 11:42:55.000000000 +0200 visualvm_122.zip-chunking-3068832301-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 11:48:06.000000000 +0200 visualvm_122.zip-chunking-3068849349-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 11:53:18.000000000 +0200 visualvm_122.zip-chunking-3068838102-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 11:58:30.000000000 +0200 visualvm_122.zip-chunking-3068847174-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 12:03:41.000000000 +0200 visualvm_122.zip-chunking-3068823928-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 12:08:53.000000000 +0200 visualvm_122.zip-chunking-3068847065-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 12:14:04.000000000 +0200 visualvm_122.zip-chunking-3068842877-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 12:19:16.000000000 +0200 visualvm_122.zip-chunking-3068835992-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 12:24:28.000000000 +0200 visualvm_122.zip-chunking-3068827076-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 12:29:39.000000000 +0200 visualvm_122.zip-chunking-3068826715-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 12:34:51.000000000 +0200 visualvm_122.zip-chunking-3068844362-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 12:40:02.000000000 +0200 visualvm_122.zip-chunking-3068824653-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 12:45:14.000000000 +0200 visualvm_122.zip-chunking-3068847221-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 12:50:25.000000000 +0200 visualvm_122.zip-chunking-3068846943-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 12:55:37.000000000 +0200 visualvm_122.zip-chunking-3068823139-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 13:00:49.000000000 +0200 visualvm_122.zip-chunking-3068846108-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 13:06:03.000000000 +0200 visualvm_122.zip-chunking-3068845308-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 13:11:15.000000000 +0200 visualvm_122.zip-chunking-3068823851-0
-rw-r--r-- 1 www-data www-data 10M 2014-07-17 13:16:26.000000000 +0200 visualvm_122.zip-chunking-3068827903-0

Here is the sync client log :-

#=#=#=# Syncrun started 2014-07-16T12:29:28 until 2014-07-16T12:29:39 (10856 msec)
|0|Desktop/tmp/EGYTRANS/visualvm_122.zip|INST_NEW|Up|1279524740||10680841||3|Local file changed during sync.|0|0|0|||INST_NONE|
#=#=#=# Syncrun started 2014-07-16T12:34:40 until 2014-07-16T12:34:51 (10941 msec)
|0|Desktop/tmp/EGYTRANS/visualvm_122.zip|INST_NEW|Up|1279524740||10680841||3|Local file changed during sync.|0|0|0|||INST_NONE|
#=#=#=# Syncrun started 2014-07-16T12:39:51 until 2014-07-16T12:40:02 (10892 msec)
|0|Desktop/tmp/EGYTRANS/visualvm_122.zip|INST_NEW|Up|1279524740||10680841||3|Local file changed during sync.|0|0|0|||INST_NONE|
#=#=#=# Syncrun started 2014-07-16T12:45:03 until 2014-07-16T12:45:14 (10871 msec)
|0|Desktop/tmp/EGYTRANS/visualvm_122.zip|INST_NEW|Up|1279524740||10680841||3|Local file changed during sync.|0|0|0|||INST_NONE|
#=#=#=# Syncrun started 2014-07-16T12:50:15 until 2014-07-16T12:50:26 (10956 msec)
|0|Desktop/tmp/EGYTRANS/visualvm_122.zip|INST_NEW|Up|1279524740||10680841||3|Local file changed during sync.|0|0|0|||INST_NONE|
#=#=#=# Syncrun started 2014-07-16T12:55:26 until 2014-07-16T12:55:37 (11222 msec)
|0|Desktop/tmp/EGYTRANS/visualvm_122.zip|INST_NEW|Up|1279524740||10680841||3|Local file changed during sync.|0|0|0|||INST_NONE|
#=#=#=# Syncrun started 2014-07-16T13:00:38 until 2014-07-16T13:00:49 (11309 msec)
|0|Desktop/tmp/EGYTRANS/visualvm_122.zip|INST_NEW|Up|1279524740||10680841||3|Local file changed during sync.|0|0|0|||INST_NONE|
#=#=#=# Syncrun started 2014-07-16T13:05:50 until 2014-07-16T13:06:03 (12873 msec)
|0|Desktop/tmp/EGYTRANS/visualvm_122.zip|INST_NEW|Up|1279524740||10680841||3|Local file changed during sync.|0|0|0|||INST_NONE|
#=#=#=# Syncrun started 2014-07-16T13:11:04 until 2014-07-16T13:11:15 (11305 msec)
|0|Desktop/tmp/EGYTRANS/visualvm_122.zip|INST_NEW|Up|1279524740||10680841||3|Local file changed during sync.|0|0|0|||INST_NONE|
#=#=#=# Syncrun started 2014-07-16T13:16:15 until 2014-07-16T13:16:27 (11022 msec)
|0|Desktop/tmp/EGYTRANS/visualvm_122.zip|INST_NEW|Up|1279524740||10680841||3|Local file changed during sync.|0|0|0|||INST_NONE|

@PVince81

Okay thanks. Interesting, every five minutes. And always only the first chunk.
It seems to match the sync rhythm of the client.

Have you ever been able to upload a file bigger than 10 MB?

Now let's see if we find clues in your logs about those aborted chunks.

@PVince81

CC @dragotin @ogoffart: did you see something similar before?

@ogoffart

What it might be: the file is constantly changing on the client side, and the client detects that it has changed between chunks, so it aborts the upload after the first chunk.

Maybe the server could detect that the client is starting to upload the file from scratch and delete the stale chunks. But that's hard to do if we don't want to break the use case of two clients uploading the same file in parallel (someone needs to win the race eventually, and we should not end up in a livelock).

@PVince81

@ogoffart I think we should be able to find this in the sync client logs, right? That is, find whether the file has actually been changed.
Otherwise it could also be a bug in the file-change detection on the client side.

I agree that the server cannot safely delete chunks for the current file.
The only thing the server can do (still to be implemented) is clean up old expired chunks after a few hours (what @mzaian's cron job is currently doing), which is a workaround.

Still hoping to find the core issue.

@ogoffart

Well, there is the error "Local file changed during sync.", which one can see in the log.

I think the client does the right thing. If there were an API to cancel the chunked upload, the client could also use it.

@ogoffart

I mean, if the file is actually NOT changed on the local file system, then there is a bug in the client, yes.

@PVince81

One idea would be for the client to send the previous transaction ID with the new transaction.
This way we can identify the chunk to remove.

Yes, we still need to find out why the client thinks the file changed that often.

@ogoffart

You mean a header like X-OC-Chunking-Cancel: 1855759923?
That would be very easy to implement in the client.

@PVince81

Yes 😄
Make sure it's a transaction that already reached the server.
Not sure if there are cases where you'd need to send multiple ones?

@PVince81

I'll make a separate ticket for that.

Let's keep @mzaian's ticket here to investigate why the sync client is repeatedly sending the same file.

@PVince81

See #9676 for discussing the new header.


mzaian commented Jul 17, 2014

Do you want me to provide any other logs or information?

@PVince81

@mzaian yes, please provide the owncloud.log and sync log of a failing file.
For the sync log you'll need to run the client with an extra option as specified here:
https://forum.owncloud.org/viewtopic.php?f=17&t=6526

The goal is to find out why the sync client is re-sending the same chunk over and over again instead of finishing the file.
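
As a rough sketch (the exact flags depend on the client version, so treat these options as assumptions and check owncloud --help or the forum post above):

# start the desktop client with verbose logging written to a file
owncloud --logdebug --logfile /tmp/owncloud-sync.log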

@PVince81

@mzaian can you check the "creation date" and "modified date" of the broken files?
We discovered a bug in the sync client with dates from 1998, or when the modified date is earlier than the creation date.
See #9781
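
On a Linux client you could check both timestamps with GNU stat, for example (the birth time may show as "-" on filesystems that don't record it; the path is a placeholder):

stat --printf 'birth:    %w\nmodified: %y\n' /path/to/broken-file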

Just wanted to see if your issue was the same or something different.


mzaian commented Jul 25, 2014

Some of the broken files were created months ago. If you check the info provided above, the files appear on the server with a +1 day offset.

@christianrj

As you can see in the discussion of #9781, it seems the ownCloud client changes the timezone used by the client computer to UTC or something else when syncing some files, breaking the sync operation. I did a more extensive test and reported it in #9781. Hope this gets fixed soon. This is a serious bug.

@FlatOutRoot

Hi everyone, I just had a look at my ownCloud server and saw 13.6 GiB of chunk files in my cache folder. I remember very well what happened when I tried to upload these files, because I created an issue for that reason: the upload using the desktop client for OS X failed. Please see owncloud/client#2042. Maybe these two problems are related?

@PVince81

Stray chunks are there because of failed uploads.
They should be cleaned up when you log in again (there's a login hook, it seems).
But as this isn't the best solution, I raised #10661 to implement a better cleanup mechanism.

Note that this isn't related to what was discussed with @ogoffart above where the client would send the old transaction id. That part would be a secondary mechanism to expire chunks even earlier.

lemmy04 commented Dec 8, 2014

Running ownCloud 7.0.3 and the latest desktop clients: the cleanup does not happen.
Can I just delete the files manually?


PVince81 commented Dec 8, 2014

Does the cleanup happen when the user logs in over the web UI?

You can delete older files manually, yes.


lemmy04 commented Dec 8, 2014

No, cleanup doesn't happen on user login, nor through the cron job.


PVince81 commented Dec 8, 2014

We're looking into ways to write chunks directly into the final file and remove the need for the cache folder (#4997).

@PVince81

Cleanup of chunks will now happen as part of a background job as per #14500 (CC @icewind1991)


mzaian commented Mar 25, 2015

Which ownCloud server version will include this? I'm currently using 7.0.3.

@PVince81

It will be in the upcoming 8.1.

@PVince81

I see it was backported, so it will probably be in 8.0.3.

@lock lock bot locked as resolved and limited conversation to collaborators Aug 12, 2019