
driver specific files need to be separated with a dot (.) #13

Closed
wants to merge 1 commit

Conversation

chiraganand
Contributor

See this link:

> restore.<SYSTEM_TM>: [Only for KVM drivers] If this script exists, the restore script will execute it right at the beginning to extract the checkpoint from the system datastore. For example, for the ceph system datastore the restore.ceph script is defined.

  1. File: vmm/kvm/save_linstor_un should be renamed to vmm/kvm/save.linstor_un.
  2. File: vmm/kvm/restore_linstor_un should be renamed to
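The rename matters because, per the documentation quoted above, the KVM vmm wrapper only picks up a hook whose name is literally `<action>.<SYSTEM_TM>` (dot-separated). A minimal sketch of that lookup, assuming illustrative names (this is not the actual OpenNebula source):

```shell
#!/bin/sh
# Sketch (an assumption, not OpenNebula's real code) of how a vmm
# wrapper could locate a datastore-specific hook: it checks for an
# executable file named "<action>.<system_tm>", so an underscore-named
# file like save_linstor_un would never be found by this lookup.

find_hook() {
    action="$1"      # e.g. "save" or "restore"
    system_tm="$2"   # e.g. "ceph" or "linstor_un"
    dir="$3"         # directory holding the vmm driver scripts
    hook="${dir}/${action}.${system_tm}"
    if [ -x "$hook" ]; then
        echo "$hook"
    else
        echo ""      # no hook found: wrapper falls back to generic behaviour
    fi
}

# Demo: only the dot-separated file is picked up.
tmp=$(mktemp -d)
touch "${tmp}/save_linstor_un" && chmod +x "${tmp}/save_linstor_un"
touch "${tmp}/save.ceph"       && chmod +x "${tmp}/save.ceph"
find_hook save linstor_un "$tmp"   # empty: underscore name is ignored
find_hook save ceph       "$tmp"   # prints the save.ceph path
rm -rf "$tmp"
```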

@chiraganand chiraganand marked this pull request as ready for review July 21, 2020 12:30
@kvaps
Collaborator

kvaps commented Jul 21, 2020

Hi, there was a big discussion about this:
https://forum.opennebula.io/t/shared-mode-for-the-ceph-based-system-datastires/7164/3?u=kvaps

Actually, save_linstor_un and restore_linstor_un are not exactly the same as save.linstor_un and restore.linstor_un. The main difference is that save.linstor_un and restore.linstor_un are handled by the save and restore scripts of the vmm driver remotely, on the compute node. But we need to handle them like any other tm action, on the controller side, because only the controller has access to the Linstor API.

There was no other option except overriding the standard save and restore vmm actions with local ones, which you do by updating the VM_MAD driver config:

```diff
 VM_MAD = [
     NAME           = "kvm",
-    ARGUMENTS      = "-t 15 -r 0 kvm",
+    ARGUMENTS      = "-t 15 -r 0 kvm -l save=save_linstor_un,restore=restore_linstor_un",
 ]
```
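For context, the `-l` flag tells the vmm driver to run the named actions as local scripts on the controller instead of remotely on the compute node. A sketch of the full section as it might look in `/etc/one/oned.conf` after the change (fields other than ARGUMENTS are taken from a stock install and may differ on your system):

```
# /etc/one/oned.conf -- VM_MAD section with local-action overrides.
# "-l save=...,restore=..." maps the standard save/restore vmm actions
# to local scripts run on the controller, which has Linstor API access.
VM_MAD = [
    NAME       = "kvm",
    EXECUTABLE = "one_vmm_exec",
    ARGUMENTS  = "-t 15 -r 0 kvm -l save=save_linstor_un,restore=restore_linstor_un",
    TYPE       = "kvm"
]
```

The OpenNebula daemon needs a restart after editing this file for the new driver arguments to take effect.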

BTW there is an unfinished PR on upstream project OpenNebula/one#3273, which aims to add presave, postsave, prerestore and postrestore tm actions, which actually solves this issue.

@kvaps
Collaborator

kvaps commented Jul 21, 2020

Unfortunately, I can't accept your PR because it would break the driver logic by calling save.linstor_un and restore.linstor_un on the remote node side, where they can never finish successfully.

@kvaps kvaps closed this Jul 21, 2020
@chiraganand
Contributor Author

> Hi, there was a big discussion about this:
> https://forum.opennebula.io/t/shared-mode-for-the-ceph-based-system-datastires/7164/3?u=kvaps
>
> Actually, save_linstor_un and restore_linstor_un are not exactly the same as save.linstor_un and restore.linstor_un. The main difference is that save.linstor_un and restore.linstor_un are handled by the save and restore scripts of the vmm driver remotely, on the compute node. But we need to handle them like any other tm action, on the controller side, because only the controller has access to the Linstor API.

Thanks for the explanation, Andrei. This makes sense.

> There was no other option except overriding the standard save and restore vmm actions with local ones, which you do by updating the VM_MAD driver config:

I have re-deployed linstor_un using this now and everything is working fine. I'm guessing there was an issue with these file names in the earlier OpenNebula (5.6), which seems to be gone now. Thanks!
