Script aborting #11
Hi mrAceT, I modified some settings in my.cnf so that the script ran to the end. But unfortunately I don't see any shared files. In the overview of files I see the shared folders with their size, but when I enter the folders, they are empty. Do you have any ideas? occ files:scan + files:scan-app-data did not do the trick. Thank you for your help. Greetings
Hi @tomcatcw1980, First of all, it is rarely wise to add comments to a closed issue. Second, in response:
You are welcome (buy me a cup of coffee ;) )
The 'cloud user' is the user of your NextCloud installation. When you perform actions as root, all file actions are performed as root, and thus it is very likely your NextCloud installation does not have the rights to access the migrated files. If you are unable to run the script as your NextCloud installation user, you will need to manually set the owner of the files and folders to your NextCloud user!
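A minimal sketch of that manual ownership fix, assuming the web server user/group is www-data and the data folder is /var/nc_data (both values are assumptions, adjust to your setup):

```php
// Assumption: the script ran as root, so hand the migrated data back to the
// web server user so that Nextcloud can read it.
$cloudUser = 'www-data:www-data';
$dataDir   = '/var/nc_data';
exec('chown -R ' . escapeshellarg($cloudUser) . ' ' . escapeshellarg($dataDir), $out, $rc);
if ($rc !== 0) {
    die('chown failed; are you running as root?' . PHP_EOL);
}
```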
Looking at the next line, the 'base path' in your case will be something like '/var/www/nextcloud'
I am guessing something like '/public' or '/public_html' (the web part of your NextCloud installation).
Good question! I expect that the NextCloud structure will retain shares, because my script only redirects "the pointer" to the file from S3 to local (or the other way around).
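For illustration only (the script's real queries may differ): Nextcloud identifies each storage by the id column in oc_storages. With S3 as primary storage a user's home storage id looks like 'object::user:<uid>', while a local home looks like 'home::<uid>', so "redirecting the pointer" amounts to rewriting those ids, roughly:

```php
// Conceptual sketch, not the script's actual code. Assumes the default
// 'oc_' table prefix and an open $mysqli connection.
// 'object::user:' is 13 characters long, so the uid starts at position 14.
$mysqli->query(
    "UPDATE oc_storages
        SET id = CONCAT('home::', SUBSTRING(id, 14))
      WHERE id LIKE 'object::user:%'"
);
```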
I expect the few lines above your quote will be more revealing to me..
Yikes, does this happen every time? This means the connection to your MySQL server has been lost. The part above line 367 regularly uses the $mysqli connection.. so I am thinking database corruption? Place NextCloud in maintenance mode and, to be sure, perform some database repair actions on the table named in the error at your line 367. If that doesn't do the trick, try a reconnection (take the part "connect to sql-database..." somewhere around line 80 and copy those lines directly below "Copying files finished") and see if that does the trick.
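A sketch of that reconnection idea, assuming the script's connect block is a standard mysqli connect (the variable names here are placeholders; mirror the ones actually used around line 80):

```php
// Untested sketch: after the long copy phase the MySQL connection may have
// timed out, so re-open it directly below "Copying files finished".
$mysqli = new mysqli($dbHost, $dbUser, $dbPassword, $dbName);
if ($mysqli->connect_error) {
    die('Reconnect to database failed: ' . $mysqli->connect_error . PHP_EOL);
}
```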
But first, it would be nice if you had any idea where the shares have gone. By modifying the my.cnf the script ran to the end. When I log into the NC instance with the migrated data, the shares don't work anymore, see the screenshots above. I have >200 users, so I can't have each user restore their shares manually. I ran occ sharing:delete-orphan-shares; after that, no shares are shown anymore because they are all orphaned. I would be very grateful for help. Greetings
If still possible, I'd restore the backup first (reconnect to your S3, use the backup.sql)
I can undo everything. I set up a second instance and can try and try. What do you mean exactly: shall I restore the backup.sql before running the script again?
Ah, you read my manual.. wow, people who read the manual and act on it really exist! ;) So my theory that the shares would remain intact doesn't hold. Darn.. That means I will need to (re)build my test setup and find out how a share changes.. I need to dive into the share structure and find out what needs to be migrated :-/ About a year ago I did some digging into shares (for my project https://github.com/GeoArchive) and need to brush up on it for my automation in creating an account on that platform (which uses NextCloud for the data).. but I haven't had the time for that yet.. how much of a hurry are you in?
AD: If you can "correct" one share and tell me EXACTLY what you've changed, I'll try to build the migration code on that, idea?
Yes ;-) for my own safety. This instance is productive. If I destroyed it I would be dead ;-)
I can't push you. I'm really grateful that you're taking care of this problem.
You mean I should re-share an old share that didn't work anymore? I think this isn't a good idea, because I'm in a staging environment. I guess the user would then be informed by mail that a new share was made on another instance. Can we avoid that?
If I understood you correctly, you have done a test migration in a copied instance. I need to (re)build my test setup to find out what needs to be changed. I was hoping you could tell me what needs to be changed (in the MySQL database table(s)). Based upon that I could extend the migration script. But I will admit that'll require a bit of digging (trust me, I know ;) ), so I won't blame you if you leave that to me ;)
I have done the following:
I noticed the following thing, but I have no idea if it matters: the data that is downloaded from the S3 bucket apparently does not contain a file named .ocdata. I also had to copy this over from the old location to the new NC data folder. I was then able to log into the staging instance and can see all the migrated data from S3. But unfortunately the shares no longer work, although they are apparently displayed. I can delete everything on this instance again, restore the original DB of the old instance, and turn everything back to the beginning. Since the S3 data is only copied, nothing is broken. But I'm afraid I can contribute little here. I would therefore be grateful if you could somehow get to the bottom of the problem.
Hi, I did a bit of research again and made an extract of the oc_share table. In the production system the table is filled with data:
Remember: after running the script, shares were still displayed for the user, but the folder was empty when I entered it. I then ran occ sharing:delete-orphan-shares. After that, the oc_share table was empty and the folders were no longer displayed as shared for the user. Now I have imported the backup.sql again. The oc_share table is filled again, but the shares are not displayed for the user. Somehow we are overlooking something.
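One way to check whether those restored rows count as orphaned, as a hedged sketch (assumes the default 'oc_' table prefix; "orphaned" means the share's file_source no longer matches a row in oc_filecache, which is what sharing:delete-orphan-shares removes):

```php
// Diagnostic sketch only: count shares whose source file id is missing
// from the file cache after the migration.
$res = $mysqli->query(
    "SELECT s.id, s.file_source
       FROM oc_share s
       LEFT JOIN oc_filecache f ON f.fileid = s.file_source
      WHERE f.fileid IS NULL"
);
echo 'orphaned shares: ' . $res->num_rows . PHP_EOL;
```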
I have no peace about it. I have now performed the migration again. Immediately after the successful migration (the script ran cleanly to the end) I looked in the oc_share table: the data is there. I have just noticed one more thing: when I click on the info box of a folder that is displayed as a share (by the owner), an error message appears when I switch to the share tab: "Selected keywords could not be loaded". With occ sharing:delete-orphan-shares I got this result: I hope you find a solution somehow. Thank you very much.
That might be the root cause of your problem. The script already does a "files:scan --all" (line 397).. but since you need to do it again, I think line 397 actually breaks the migration, because the data isn't accessible by Nextcloud. This is because you are unable to run as the user "www-data" (which was an assumption in the creation of the migration script). Try this:
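The original snippet was not preserved in this thread; what follows is a hypothetical reconstruction of the idea described in the next comment (the user/group and the plain exec() approach are assumptions):

```php
// Untested sketch: fix ownership of the migrated data BEFORE leaving
// maintenance mode, so the files:scan at script line 397 can read it.
echo 'setting owner of the data folder...' . PHP_EOL;
exec('chown -R www-data:www-data ' . escapeshellarg($PATH_DATA));
// ...the script then continues with maintenance:mode --off and files:scan --all
```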
(assuming the user and the group of the data folder is 'www-data') I 'abuse' the function 'occ' I created to perform this owner swap. Then the owner is correct BEFORE we pull out of maintenance mode.. that might just do the trick.. I think/hope (this idea is untested). Please try and let me know. [update] Changed the location of the added lines (this spot is better).
I have updated the S3toLocal script to version 0.32. Someone else pointed out another hiatus in the migration script (one that I had copied from a part of LocalToS3..). I also added the option that I am assuming will help you in the migration.. Could you try?
Thanks very much. I will try. Is there a way not to copy the whole of the data from S3 each time? I have 140GB that I copy every time. PS: I hope you got the little motivation via PP.
I saw it, thanks! (added the option via Paypal now :)
RTM ;) check line 25:
Yikes, that qualifies as an OOPS. Fixed the code, could you retry? (I am unable to test atm, my instance is (happily) running via S3.) I have taken a look at your image: /nc_data (a root folder) is the location of your NextCloud files; /var/nc_data is the location where the script will look for data from a previous migration. If the data (size/date) is the same it will use that file: it will not download the file again, but will simply copy it from /var/nc_data to /nc_data. PS: you are able to run as 'sudo -u www-data'? Then the extra '$CLOUDUSER' value should not be needed..
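As an illustration of that reuse logic (a sketch only; $relPath, $size and $mtime are hypothetical values taken from the file listing, and the real script's checks may differ):

```php
// If a file of the same size and date already exists in the
// previous-migration folder, copy it locally instead of downloading
// it from S3 again.
$bkp = $PATH_DATA_BKP . $relPath; // e.g. under /var/nc_data from the test run
$dst = $PATH_DATA . $relPath;     // e.g. under /nc_data
if (is_file($bkp) && filesize($bkp) === $size && filemtime($bkp) === $mtime) {
    copy($bkp, $dst);             // reuse the local copy
} else {
    // download the file from S3 (omitted here)
}
```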
It must be the time.. typo => updated
Now I get this, but I don't understand it. The path of the new Nextcloud data directory is set to $PATH_DATA = $PATH_BASE.'/nc_data'; and there I copied the .ocdata file from the old instance. But it doesn't work. What am I doing wrong? I'm going to call it a day for now; I won't be able to get back to it until tomorrow evening. Thanks for your help so far. Greetings and have a nice evening.
This looks like a 'clouduser' thing.. it looks like you changed something that doesn't allow you to set maintenance mode to 'on' (that sudo -u part?). Do one or the other: run as root and do a chown, or run as the clouduser (and do not chown). By the looks of a previous image, you have a dedicated server running for Nextcloud, and the data is situated in a root folder (although I would suggest using '/var/nextcloud_data' as the data folder and '/var/nextcloud_data.bkp' as the backup/previous-migration folder; I don't like to put stuff in root...). PS: when you have $NON_EMPTY_TARGET_OK set to 1 you may get some files in your final setup that a user has deleted. That is why I created '$PATH_DATA_BKP': the script then looks in that folder and you get a clean migration. Check step 5 in the readme (you understand?)
Hi, two questions: a) Shall I add a user named clouduser and give him sudo rights, so that I can adopt your script without modifying anything? b) A basic question that comes to mind: my instance runs with S3 configured as primary storage. In my config.php there is a datadirectory that points to a local folder, /var/nc_data. Does this variable exist regardless of whether the instance hosts its data locally or uses S3 as primary? Or do I have a relic in my config.php? I ask because in your script there are only variables for the new data directory and for the data directory holding existing data from previous migrations from S3. Doesn't there have to be a variable for the data directory of the current installation?
Ah, then you need to use the existing one! /var/nc_data. I expect the 'chown' option I added will do the trick. In your case I'd try:
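The actual suggested settings were not preserved in this thread; a hypothetical reconstruction based on the surrounding comments (the variable names are the script's own, the values are assumptions for this setup):

```php
// Reconstructed guess, not the maintainer's verbatim suggestion.
$PATH_DATA = '/var/nc_data'; // the existing datadirectory from config.php
$CLOUDUSER = 'www-data';     // the new 'chown' option: owner to set on the data
```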
Hi, I think it worked now. Thank you so much! I will now do some tests and try it again so that I have a safe way to migrate. In another thread (I think on help.nextcloud.com; I can't find it anymore) I saw the hint that the Circles app may cause some broken links, so before migrating I disabled it just to be safe. I really appreciate your help. Greetings
Glad to have helped! To speed up migration in your live setup, use the $PATH_DATA_BKP option!
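For instance (a sketch; the .bkp path follows the naming suggested earlier in this thread and is an assumption):

```php
// Point $PATH_DATA_BKP at the data folder of the earlier test migration:
// matching files are copied locally instead of downloaded from S3 again.
$PATH_DATA     = '/var/nextcloud_data';
$PATH_DATA_BKP = '/var/nextcloud_data.bkp';
```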
One thing I forgot that I did: I had trouble logging in; it told me something about an invalid token. occ maintenance:data-fingerprint helped then.
That is most likely because you used a copy of your live instance.. each instance must have its own 'fingerprint'.
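For reference, a sketch of running that command (the occ path and the web server user are assumptions; adjust them to your install):

```php
// maintenance:data-fingerprint tells sync clients that the server's data
// has changed, which clears the invalid-token state described above.
exec('sudo -u www-data php /var/www/nextcloud/occ maintenance:data-fingerprint', $out);
echo implode(PHP_EOL, $out) . PHP_EOL;
```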
Hi there,
I don't want to duplicate, but I don't know if my issue is better placed here:
nextcloud/server#34407 (comment)
Greetings
Christian