[Bug]: files not shown with S3 primary storage #34407
Comments
Hi, do you see the file listings in your instance? (via the web interface) Take a good hard look at your SQL tables. Could you post the oc_storages info? (do anonymize the data) I have quite a bit of experience with NextCloud->S3, so I might just be able to help you out.. |
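For reference, a read-only way to dump that table is a query along these lines (the database name `nextcloud` and the MariaDB user are assumptions; adjust them to your setup):

```bash
# List the configured storages; the 'id' column shows whether an entry is
# local ("home::user" / "local::/path") or object storage ("object::store:...").
mysql -u nextcloud -p nextcloud \
  -e "SELECT numeric_id, id, available FROM oc_storages;"
```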
please answer:
|
The web interface does not show the files or previews, and it does not even try to load them. In Cloudflare R2 I have more than 1 TB of data, but I cannot access it without Nextcloud. |
I have had trouble with migrating to S3 storage.. and have written a migration tool to local (take a look at: lukasmu/nextcloud-s3-to-disk-migration#6) and later have built the reverse back to S3.. What you will need to do is first make backups! Of your data.. of your SQL. Did you have a separate storage config? (that was part of my trouble..) Is there data in your LOCAL data folder? (besides the default stuff) Check your oc_filecache: are there storage IDs attached to number 1? And 49? This BRUTE FORCE approach might work (but do make backups..........):
This quick & dirty approach might work.. (but do make backups..........) The problem is you are now "in limbo": things are set to local AND S3.. Nextcloud seems to prefer local.. resulting in you not seeing S3.. |
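The actual brute-force SQL from that comment is not reproduced above; as a hedged, read-only starting point (the database name `nextcloud` is an assumption), the commands below back up the database and then count how many oc_filecache rows hang off each storage ID, which shows whether entries still point at the old local storages (e.g. 1 and 49) as well as at the S3 one:

```bash
# Back up the database first, as advised above.
mysqldump -u nextcloud -p nextcloud > nextcloud-before-cleanup.sql

# Count filecache entries per storage ID (read-only, safe to run).
mysql -u nextcloud -p nextcloud \
  -e "SELECT storage, COUNT(*) AS entries FROM oc_filecache GROUP BY storage;"
```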
Yes, I already have a backup of my database; I use Plesk so it is very easy, and I also have the R2 bucket backed up in another bucket. |
hi @mrAceT |
NO!! Contrary to local storage, S3 storage can NOT rebuild itself from the data in S3 (that should be possible, but for some reason NextCloud has decided to use the cryptic numbers; I have asked about this in the forum, but to my knowledge no one has ever given a good answer).
With that last sentence I meant the numbers of all those 'home & local' IDs.. if you would remove those, all data in there should refer to S3 data.. So for Sonya.. how many entries are there in oc_filecache with storage ID 28? Does that "feel right"? |
I deleted the oc_filecache entries with ID 1, ID 28 and ID 49; it does not solve the problem either. I remember this happened to me about three months ago and I managed to solve it with a command that I executed with the occ binary. That command was written by someone in the official forum, but I couldn't find it again. |
I'm pretty sure my problem lies with the database, because I installed a clean instance with S3 and it works fine; it only breaks when I import the database. |
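The exact forum command was never identified in this thread; one plausible candidate, offered here only as an assumption, is Nextcloud's built-in file scanner. Note that with S3 as primary storage the scanner works from the database rather than by listing the bucket, so it may not help if the filecache itself is damaged:

```bash
# Re-scan user files and the appdata folder (www-data is assumed as the web server user).
sudo -u www-data php occ files:scan --all
sudo -u www-data php occ files:scan-app-data
```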
Oh dear.. number 28 contains the entries for OBJECT->S3.. do you understand the structure of the numbers? You need to remove the NOT-object numbers. |
The truth is that all the data that matters to me is in the admin account, and since I share it with other users, I can delete the other user accounts without problems. Is there any way to do a kind of cleanup to the database without touching the admin data? |
Then I'd keep it simple.. and remove all the other accounts via the web interface.. then you would have a nice clean oc_storages (the base S3 entry and that one user). Then remove all but those two numbers from the oc_filecache.. If nothing else has been broken, you should see all the files you'd want to rescue in your filecache.. Also, if the other users are "simply shares".. then you'd not have any other files lingering in your S3 storage.. |
I am going to delete all the other accounts and try to clean up the database cache; I will write here later to report back. |
hi
|
./occ files:cleanup won't do a thing with S3 to my knowledge.. This isn't a version problem, it's a database problem (which likely arose because Nextcloud was migrated to S3 and then an update got confused between local and S3). Questions:
|
Answers:
1. Yes
2. ID 1: object::store:amazon::nextcloud
3. Exactly; in oc_filecache there are only entries with ID 1 and ID 2
4. I can't create new users; it gives me an error when registering, even after enabling the registration app from the console (I had it disabled)
5. I can log in as admin, but when I enter everything looks broken, as if the CSS styles and other things did not load; the files are not listed as they should be
6. Correct, the IDs match
7. When I enter the Cloudflare R2 panel, I see all the files with names urn:oid*** |
Regarding your answer 5 ("I can log in as admin, but when I enter everything looks broken, as if the CSS styles and other things did not load; the files are not listed as they should be"): I have had trouble with S3 (and a lot of it..) but it never broke the styling!? Are you still in that beta version? Something else broke down.. it must have.. Another "weird idea":
Get out of maintenance mode, and let me know how it goes.. |
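For reference, maintenance mode is toggled with occ (www-data as the web server user is an assumption):

```bash
# Disable maintenance mode so the web interface is reachable again.
sudo -u www-data php occ maintenance:mode --off
# Re-enable it later with --on if you need to work on the database.
```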
Did this solve it? |
No, friend, I couldn't fix it. Thank you very much for your help and the suggestions you have given me. |
The last thing I tried was to create a clean install of Nextcloud, regardless of the broken database, and connect the broken install with rclone via WebDAV. I can list the file folders correctly, but when I try to send from my broken Nextcloud to my new Nextcloud, rclone just stays thinking for hours and does not show an error or any information. |
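As a point of comparison, a typical rclone transfer between two Nextcloud WebDAV endpoints looks roughly like the sketch below; the remote names `oldnc` and `newnc` are placeholders configured beforehand with `rclone config` (type `webdav`, vendor `nextcloud`), and `--progress` at least shows whether anything is moving instead of rclone appearing to hang:

```bash
# Copy everything from the old instance to the new one over WebDAV.
rclone copy oldnc:/ newnc:/ --progress --transfers 4 --checkers 8
```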
In all honesty I do not understand why my last option did not work.. it should have.. Do you have 1 TB of disk space available?
Your data is in there.. You could create a "spin-off" of my s3-to-local script that simply creates the folder structure with the correct file names.. In essence that is what that script does.. If that does not work.. I could fix it for you, I'm practically certain.. but I'd need shell access to your account with 1 TB of space and access to your S3 credentials and database.. |
OK.. it's past midnight and I seriously need to go to bed, but here is my "special standalone S3 > local" version for you.. I am hoping you know about 'vendor/autoload' and know how to install the Aws\S3\S3Client package.. NEEDED:
Set up the variables and let it rip.. My first run, of I believe about 100 GB, took hours.. so be patient.. There is a progress indication.. but do be patient.. All I'll ask for it is that you go to OpenStreetMap and look up Friesland (in The Netherlands) and remember that that's where your saviour lives ;) [update] $NR_OF_COPY_ERRORS_OK has no real use in this case => removed
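As an aside, the Aws\S3\S3Client dependency and vendor/autoload.php are normally pulled in with Composer; a minimal sketch, assuming Composer is installed and the standalone script lives in its own directory:

```bash
# Install the AWS SDK for PHP next to the standalone script.
# This creates vendor/autoload.php, which the script can then require.
composer require aws/aws-sdk-php
```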
Let me know how it went.. |
(no) success? |
no my friend, i tell you what i am trying to do today, i am using your script to see if data access, create a new install using primary s3, mount that new install to a folder on my server using webdav, now i am seeing if with your script it is possible to recover the data and store it in that new installation, I thought I would write here if it worked for me |
The script should continue.. it means that number 482060 does exist in your database, but not in your S3 storage. It is very obvious something went horribly wrong.. what that was, I can not tell you.. But did you cancel the script or did you let it continue? If it canceled by itself.. show me more.. |
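A hedged way to cross-check such an entry (bucket name, account ID and credentials are placeholders; fileid 482060 comes from the error above) is to look the fileid up in the database and then ask the S3/R2 endpoint whether the matching urn:oid object exists:

```bash
# What does the database think fileid 482060 is?
mysql -u nextcloud -p nextcloud \
  -e "SELECT fileid, storage, path, size FROM oc_filecache WHERE fileid = 482060;"

# Does the corresponding object exist in the bucket? (R2 is S3-compatible.)
aws s3api head-object \
  --bucket YOUR_BUCKET \
  --key urn:oid:482060 \
  --endpoint-url https://YOUR_ACCOUNT_ID.r2.cloudflarestorage.com
```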
The script is running; it has been running for more than an hour, and the percentage is slowly increasing. I am also monitoring the storage and it does not increase, that is, no information is being downloaded from the S3 storage. I suppose the script is currently going through the thousands of index entries that I have in my database; I will calmly wait as many hours as necessary. |
Are you using S3 via WebDAV to use S3 as a "mounted folder"? If so, I have tried that.. with various ways of connecting.. I'd advise against it for "live usage".. I got it working, but it was (beyond) slow.. If it's for restoring your data because of lack of disk space, I'd expect you will need to be extremely patient with that amount of data. PS: maybe change PS2: if you have set NON_EMPTY_TARGET_OK to '1' you can abort and restart it; the script will skip the files it already got! |
I was using WebDAV mounted in a folder, but it was extremely slow, so I cancelled that process; I'm now doing it with a normal system folder, and of course it goes faster. |
Hi, please update to 24.0.9 or better 25.0.3 and report back if it fixes the issue. Thank you! My goal is to add a label like e.g. 25-feedback to this ticket of an up-to-date major Nextcloud version where the bug could be reproduced. However this is not going to work without your help. So thanks for all your effort! If you don't manage to reproduce the issue in time and the issue gets closed but you can reproduce the issue afterwards, feel free to create a new bug report with up-to-date information by following this link: https://github.com/nextcloud/server/issues/new?assignees=&labels=bug%2C0.+Needs+triage&template=BUG_REPORT.yml&title=%5BBug%5D%3A+ |
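For completeness, a typical command-line upgrade path (assuming the built-in updater is run from the shell and www-data is the web server user) looks like this:

```bash
# Run the built-in command-line updater, then finish the upgrade with occ.
sudo -u www-data php updater/updater.phar   # step through the updater prompts
sudo -u www-data php occ upgrade
sudo -u www-data php occ maintenance:mode --off
```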
No, friend. In the end I lost almost 1 TB of files that I had in Cloudflare R2 and I could not recover them in any way; unfortunately I did not have them backed up elsewhere. I tried everything to recover them, using clean installations, and nothing worked for me. In fact, I had to delete the bucket where I had that data because Cloudflare was charging me and my clients did not have access to the information. Now I am using an independent company to take care of storing my data and redirected my clients to their links. I will close this topic; thanks for the help anyway. |
Hello MrAceT, first of all, thank you very much for your script. I would also like to reduce the complexity of my NC installation, so I have also tried to use your script to switch from S3 to local. Unfortunately I have not (yet) been successful, as the script always aborts. I get similar inconsistency messages as @edwinosky, but let the script run until it aborts. However, I cannot see where it breaks off. Your script copies all the data from S3 to local, but since it then aborts, it probably does not update the DB entries to the end or run through all the other necessary steps, because all the shares no longer exist as soon as I log into Nextcloud. Perhaps I have configured something incorrectly. I still have the following questions about your script that are not quite clear to me:
Thanks in advance for your answer. Best regards PS: I just got this error message. It copied all data, then asked to continue: [...] Continue?Y Copying files finished |
Yikes.. I am guessing that the data loss was regrettable but not a disaster? (I think I would have been able to rescue that data.) |
Bug description
Hello guys, I have spent more than a week dealing with a serious problem with my Nextcloud installation, and I tried many things before writing here.
I have had a Nextcloud installation for months that I use to share content with family, friends and clients, using Cloudflare's R2 as primary S3 storage. Everything worked great for months, until a week ago the installation broke completely and no longer shows the files. If I open a previously shared URL it does show the list of files, but when trying to play or load them an error is shown. I already did a new installation, but when I configure S3 as primary storage it breaks again and does not load the CSS styles of the installation. I will leave some screenshots of how it looks.
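For anyone hitting the same symptoms, a quick way to confirm that S3 really is configured as the primary storage (and which bucket/endpoint is in use) is the occ inspection below; www-data as the web server user is an assumption:

```bash
# Print the objectstore (primary storage) section of config.php, if any.
sudo -u www-data php occ config:system:get objectstore
```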
Steps to reproduce
Expected behavior
That everything works correctly, as this great software usually does.
Installation method
Community Web installer on a VPS or web space
Operating system
Debian/Ubuntu
PHP engine version
PHP 8.0
Web server
Apache (supported)
Database engine version
MariaDB
Is this bug present after an update or on a fresh install?
Updated to a major version (ex. 22.2.3 to 23.0.1)
Are you using the Nextcloud Server Encryption module?
No response
What user-backends are you using?
Configuration report
List of activated Apps
Nextcloud Signing status
Nextcloud Logs
Additional info