Etag changed without visible reason #5264
Ah, needless to say that this problem blocks not only error-free syncing, but also the debugging of errors that might happen on the client side, which currently really bugs me. That's why I set Panic!
Because this happens without explicit user interaction, the periodic file scanner is the prime suspect. The scanner CAN preserve the etag - but it MUST NOT, depending on the flag. @icewind1991 any idea? THX
I did a small experiment which may hopefully help you. If you don't change the file, but you remove its entry from the oc_filecache server-side and then you PROPFIND this file, you get a different etag back. The two etags have the first 6 bytes in common:
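The PROPFIND experiment above can be scripted. A minimal sketch, assuming a standard WebDAV multistatus response: it extracts the `getetag` property and counts the leading characters two etags share (the sample etags are the ones reported later in this thread, used purely for illustration).

```python
import xml.etree.ElementTree as ET

NS = {"d": "DAV:"}

def extract_etag(multistatus_xml: str) -> str:
    """Pull the getetag property out of a WebDAV PROPFIND response."""
    root = ET.fromstring(multistatus_xml)
    etag = root.find(".//d:getetag", NS)
    return etag.text.strip('"') if etag is not None else ""

def common_prefix_len(a: str, b: str) -> int:
    """Count how many leading characters two etags have in common."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

# Hypothetical multistatus body, modeled on a typical ownCloud PROPFIND reply:
sample = """<?xml version="1.0"?>
<d:multistatus xmlns:d="DAV:">
  <d:response>
    <d:href>/remote.php/webdav/PR</d:href>
    <d:propstat>
      <d:prop><d:getetag>"5236da35cfee7"</d:getetag></d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
</d:multistatus>"""

print(extract_etag(sample))                                  # 5236da35cfee7
print(common_prefix_len("5236da35cfee7", "5267e3eed8e06"))   # 2
```

Running this against two PROPFIND responses taken before and after removing the oc_filecache row would make the etag change directly visible.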
@moscicki Yes. Because it is currently a random number. We should change it to be calculated in a predictable way, as mentioned above.
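A predictable etag could be derived from stable file metadata instead of a random number. A hypothetical sketch of such a scheme (this is not ownCloud's actual algorithm; the path/mtime/size inputs and the 13-character length are illustrative assumptions):

```python
import hashlib

def deterministic_etag(path: str, mtime: int, size: int) -> str:
    """Derive an etag from stable metadata, so rescanning an
    unchanged file reproduces the same value (hypothetical scheme,
    not ownCloud's actual algorithm)."""
    payload = f"{path}|{mtime}|{size}".encode()
    return hashlib.md5(payload).hexdigest()[:13]  # 13 hex chars, like OC etags

# Same inputs always yield the same etag, even after a cache wipe:
a = deterministic_etag("/docs/PR", 1382000000, 4096)
b = deterministic_etag("/docs/PR", 1382000000, 4096)
assert a == b
```

The point is that losing the oc_filecache row would no longer force a new, random etag: recomputation from unchanged metadata gives back the old value.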
I've had no luck reproducing it myself. I've tried reproducing various cases in test cases but never got unexpected etag changes.
We could write the filename and the etag to the debug log whenever something is newly generated. Perhaps we can find some bugs with that.
That probably won't help find the cause of the changes though; we'll need a stack trace for that, or a request log from around that time.
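The two suggestions above can be combined: log the filename and the new etag together with the call stack at the point of generation. A minimal sketch of such a hook (ownCloud itself is PHP; this Python version, with a hypothetical `log_new_etag` helper, just illustrates the idea):

```python
import logging
import traceback

logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(message)s")

def log_new_etag(filename: str, etag: str) -> str:
    """Record every freshly generated etag together with the call
    stack that produced it, so unexpected regenerations can be
    traced back to their origin. Returns the message for inspection."""
    stack = "".join(traceback.format_stack())
    message = f"new etag {etag} for {filename}\n{stack}"
    logging.debug(message)
    return message

msg = log_new_etag("/docs/PR", "5267e3eed8e06")
```

With a hook like this in place, an unexplained mass etag change would leave a trail pointing at whichever code path regenerated the values.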
@icewind1991 what about the scanner which is running periodically? The scanner can preserve the etag - but it must not, depending on the flag - could that be a reason?
@DeepDiver1975 the background scanner uses shallow scanning, which should preserve the etag. I'll look into the background scanner nevertheless, since I can't think of many other causes.
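The preserve-vs-regenerate decision described here can be sketched as follows. This is a hypothetical simplification of the scanner logic, not ownCloud's actual code; the function name and flags are assumptions:

```python
import secrets

def scan_etag(existing_etag: str, mtime_changed: bool, shallow: bool) -> str:
    """Decide whether a scan keeps or regenerates a file's etag.
    Hypothetical simplification: a shallow scan of an unchanged file
    must preserve the cached etag; anything else gets a fresh random
    one (as ownCloud generated them at the time of this thread)."""
    if shallow and existing_etag and not mtime_changed:
        return existing_etag
    return secrets.token_hex(7)[:13]  # new random 13-char hex etag

# A shallow rescan of an unchanged file keeps the etag:
assert scan_etag("5236da35cfee7", mtime_changed=False, shallow=True) == "5236da35cfee7"
# A modified file gets a new one:
assert scan_etag("5236da35cfee7", mtime_changed=True, shallow=True) != "5236da35cfee7"
```

The bug reports in this thread would be consistent with some code path taking the regenerate branch for files that did not actually change.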
I too have experienced this issue, quite a few times. This is a snippet from the log file of the sync client. Basically the file PR used to have a server etag of 5236da35cfee7, then all of a sudden it has a server etag of 5267e3eed8e06 for no apparent reason. That, and 1.6 GB of other files with changed etags. This is quite a major issue since this is a shared area and that dataset is synced to about 16 machines. I know that it has changed because I have a backup of last night's data and it was 5236da35cfee7 last night... and yet when I check the Apache logs, that file hasn't been touched during that time.
Any help on this would be appreciated.
@icewind1991 Any idea why the scanner is not scanning the Shared folder? I observed an etag change in a Shared folder after the owner logged in again after some time.
@DeepDiver1975 This makes a lot of sense, and now that I think about it, my etag change occurred exactly when I logged in to the web portal with the user who "controls" all the shares and who almost never logs in.
I came to this bug report via owncloud/client#994 (comment). My gut tells me you guys are on to something here. I run OC clients on Linux, Windows, and Mac OS and things seem to sync right along. The times I've experienced the re-download behavior are after I've logged in to the web interface. The last time it happened, I just adjusted the sharing settings on one particular folder and a completely different sub-folder suddenly started to resync on all of my clients. Oddly, as I watched this behavior manifest, the resync counts on the various clients were slightly different, i.e. 34 GB on one machine, 22 GB on another, only 7 GB on a third (I don't remember the exact numbers, but they were in this ballpark). I'm now running client logging wherever I can; is there something on the server side I can do that would help track this down? I have yet to find a way to reliably cause it to happen though, so it's tough to know exactly when I need to be watching.
We spent quite some time last week during our bug fixing hackathon on this issue. ... we are not yet there
I'm not sure, but it seems this bug is causing other problems as well. I have discovered that sometimes when deleting files, only some of them will be marked as deleted. Others will be redownloaded after a few minutes. Could this be due to mismatched etags?
I am sure this question has been answered many times, but given how critical etags have become, shouldn't we have some kind of redundancy or checking in case they are lost or changed incorrectly? Ideas around this are: make the etag an MD5 hash of the file (resource intensive, but only done once for unchanging files, and if it does get done again due to data loss, at least it ends up the same), or an etag change log of some kind where we can keep track of previous etags.
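The content-hash idea can be sketched as below. This is a minimal illustration assuming local file access; as noted above, the cost scales with file size, which is why it would only be paid when a file actually changes or the cache is lost:

```python
import hashlib

def content_etag(path: str, chunk_size: int = 1 << 20) -> str:
    """MD5 of the file's bytes, read in 1 MiB chunks so large files
    don't need to fit in memory. Recomputing after cache loss yields
    the same etag for an unchanged file - exactly the redundancy
    proposed above."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Unlike a random etag, this value survives an oc_filecache wipe: the rescan recomputes the identical hash, so clients see no spurious change.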
Hello, The full redownload happened again right now for me. It seems to be triggered by web activity. Here is what happened:
So either of these two web actions has triggered the full redownload of my sync folder on all devices. BTW, I have folder pairs (client sync folder and Shared/folder) set up explicitly (so not syncing the whole ownCloud account). I don't think it matters, but just in case, that's my setup. Are you able to reproduce this bug? kuba
We merged some improvements to git master - not sure if the root cause has been solved.
Thomas, were these changes merged into the OC 5 releases or just OC 6? Thanks, Mark
Only git master for now. Backport will only happen if this really helps at all. Sent from Samsung Mobile
What is the status here? Has anyone seen this problem in the meantime?
I can confirm that this still exists in .13.
@rendezz Great. Can you create a step-by-step way to reproduce that, ideally based on a fresh installation? We still have trouble finding this issue. If you can reproduce it, then that's already half of the fix.
I haven't had the time to sit down and reproduce this on a fresh installation, but I can pretty much confirm the circumstances under which it happens. We have a central user (which isn't an active user) called utility; utility holds all the shared documents and isn't used for anything else. After creating the shared structure, many other users (10+) access and sync those folders without a problem (43 GB worth). I know right now that if I log in as utility, it will scan the entire shared folder structure and reset every single etag. I back up the etags every 2 hours, so if this happens I can restore them, but if I leave it for another period of time without logging in as utility and then log in as utility (web interface), it will happen again. I don't know how long this interval is, but it's certainly more than a few days. Btw, I think the oc_filecache table name is a total misnomer; caches are most commonly ephemeral stores that aid in lookups and such, while this table is critical to the functioning of ownCloud. It should be called oc_fileregistry or something... but I digress.
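The backup-and-restore workaround described here can be done directly against the filecache table. A sketch using SQLite purely for illustration (ownCloud deployments typically use MySQL or similar; the two-column schema below is a stand-in for the real oc_filecache table):

```python
import sqlite3

def backup_etags(conn: sqlite3.Connection) -> dict:
    """Snapshot (path, etag) pairs from the filecache table."""
    return dict(conn.execute("SELECT path, etag FROM oc_filecache"))

def restore_etags(conn: sqlite3.Connection, snapshot: dict) -> None:
    """Write a saved snapshot back, undoing an unwanted mass reset."""
    conn.executemany(
        "UPDATE oc_filecache SET etag = ? WHERE path = ?",
        [(etag, path) for path, etag in snapshot.items()],
    )
    conn.commit()

# Demo against an in-memory stand-in for oc_filecache:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE oc_filecache (path TEXT PRIMARY KEY, etag TEXT)")
conn.execute("INSERT INTO oc_filecache VALUES ('files/PR', '5236da35cfee7')")
snap = backup_etags(conn)
conn.execute("UPDATE oc_filecache SET etag = '5267e3eed8e06'")  # simulate the reset
restore_etags(conn, snap)
print(conn.execute("SELECT etag FROM oc_filecache").fetchone()[0])  # 5236da35cfee7
```

Restoring the old etags this way stops clients from redownloading, at the cost of masking any genuine change made between snapshot and restore.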
@rendezz do you use some kind of external storage? Where are the files located - on the local harddisk or somewhere else?
The storage is all local in a data directory, I have never used any other kind of storage. |
Fixed by #6201.
I was running the owncloud branch fix_5126_2 (SHA dd202d9) when the thing people constantly report (and slowly run out of patience over) in the mirall issue tracker happened to me: all of a sudden, the client redownloaded a whole part of the synced directory, in my case a subdirectory somewhere in the tree.
I could grab the log, and it shows that all ETags of the files within the directory have changed, so the client was downloading them for a valid reason. BUT the files cannot have changed. They were not shared, and I was monitoring them closely because I was debugging something else. For that, I edited a file not within the directory in question with the text editor.
In the ownCloud log file I find a lot :-/ I try to match by timestamp. No idea if it's related:
I am not sure though if this problem is really oC6-specific; as said, other people report it against 5.x as well, see owncloud/client#994