
Fix remove error, add flags #26

Closed · wants to merge 6 commits
Conversation

@Psayker (Contributor) commented Jan 28, 2020

  • Add new args for more detailed settings
  • Fix the error in Remove
  • Unified prefix for all driver args and environment variables

The error in Remove: if we manually delete the VM from Proxmox VE, a later remove can delete the wrong VM, because the lookup is done by VMID only.
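
For illustration, a minimal sketch in Go of the idea behind the fix; vmInfo, lookupVM and deleteVM are placeholders I made up, not the driver's actual API, and only the "check before deleting by VMID" logic reflects this PR:

package proxmoxve

import (
	"errors"
	"fmt"
)

// vmInfo, errNotFound, lookupVM and deleteVM are hypothetical stand-ins for
// the driver's Proxmox VE API calls; the real names in this PR may differ.
type vmInfo struct{ Name string }

var errNotFound = errors.New("vm not found")

func lookupVM(vmID string) (*vmInfo, error) { return nil, errNotFound } // would query Proxmox VE
func deleteVM(vmID string) error            { return nil }              // would delete in Proxmox VE

// safeRemove deletes the VM with the stored VMID only if that VMID still
// belongs to the machine docker-machine created.
func safeRemove(vmID, machineName string) error {
	vm, err := lookupVM(vmID)
	if errors.Is(err, errNotFound) {
		// Already deleted manually in Proxmox VE: nothing left to remove.
		return nil
	}
	if err != nil {
		return err
	}
	if vm.Name != machineName {
		// A new, unrelated VM was assigned our old VMID: refuse to delete it.
		return fmt.Errorf("VMID %s now belongs to %q, not %q; refusing to delete",
			vmID, vm.Name, machineName)
	}
	return deleteVM(vmID)
}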

Archive with binaries, in case you have any error downloading from the repo:
docker-machine-driver-proxmoxve.macos-amd64.tar.gz
docker-machine-driver-proxmoxve.linux-amd64.tar.gz

@lnxbil (Owner) left a comment


Just curious:
This is a nice idea, but why do we need this? Or more precisely, what problem does this solve?

@Psayker (Contributor, Author) commented Jan 28, 2020

The Remove method has a problem: it deletes the VM in Proxmox by VMID alone.
Steps to reproduce the error:

  1. Create a VM via docker-machine
  2. Delete that VM manually in Proxmox
  3. Create a new VM
  4. Run docker-machine rm vm_name
    At step 4, docker-machine uses the locally saved VMID and sends a delete request for that VMID, but the VM originally created by docker-machine is already gone, and a new VM has since been created and assigned that same VMID.

The new flags give us more flexible settings for our VM. The unified driver name helps when you want to build a custom driver or modify it and keep both versions, the modified and the base one. For example:

EnvVar: "PROXMOXVE_VM_CPU",
Name: "proxmoxve-vm-cpu-cores",
EnvVar: upperDriverName + "_VM_SCSI_ARGS",
Name: driverName + "-vm-scsi-args",
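
For context, a rough sketch of how such a flag might be declared with the unified prefix inside GetCreateFlags; GetCreateFlags and mcnflag.StringFlag are docker-machine's standard driver-flag machinery, while the placeholder Driver struct, the Usage text and the exact flag set are assumptions of mine:

package proxmoxve

import (
	"strings"

	"github.com/docker/machine/libmachine/mcnflag"
)

// driverName is the single source of truth for flag names;
// upperDriverName is derived from it for environment variables.
const driverName = "proxmoxve"

var upperDriverName = strings.ToUpper(driverName)

// Driver is a placeholder here; the real driver struct has many more fields.
type Driver struct{}

// GetCreateFlags builds every flag and env var from the same prefix, so
// renaming the driver only requires changing driverName.
func (d *Driver) GetCreateFlags() []mcnflag.Flag {
	return []mcnflag.Flag{
		mcnflag.StringFlag{
			EnvVar: upperDriverName + "_VM_SCSI_ARGS",
			Name:   driverName + "-vm-scsi-args",
			Usage:  "additional SCSI controller arguments for the VM",
		},
		// ...the remaining flags follow the same driverName-prefixed pattern...
	}
}

The point of this pattern is that a modified build can change driverName once and every flag and environment variable follows.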


Covers #28

@Psayker (Contributor, Author) commented Feb 4, 2020

Fixed a few mistakes: deleted the hardcoded thinpool and added an if directive in the Makefile.

@lnxbil (Owner) commented Feb 6, 2020

The Remove method has a problem: it deletes the VM in Proxmox by VMID alone.
Steps to reproduce the error:

  1. Create a VM via docker-machine
  2. Delete that VM manually in Proxmox
  3. Create a new VM
  4. Run docker-machine rm vm_name
    At step 4, docker-machine uses the locally saved VMID and sends a delete request for that VMID, but the VM originally created by docker-machine is already gone, and a new VM has since been created and assigned that same VMID.

Oh sorry, now I get what the problem is. The solution is very nice.

We've added a lot of complexity to the whole driver (in comparison to other drivers) and need a good test environment. Any ideas on how to set one up?

Mine currently looks like this:
I created a VM in my PVE with a nested PVE inside that has LVM, ZFS, NFS, and Directory storage to test with. I was unable to install a single-node Ceph setup, and the environment is still lacking lvm-thin and iSCSI. That is only the storage side. We also need network testing (for the VLAN stuff), and now also the manual deletion of a VM, and possibly any other combination of flags.

@onegreyonewhite

VLANs can safely be used with Open vSwitch (we checked this). In Proxmox VE, essentially only file-oriented storage differs from all the others, so it is enough to add verification only for lvm-thinpool.
One way or another, it is enough to verify that the passed settings are correct; testing Proxmox VE functionality itself would be unnecessary.

@travisghansen (Contributor)

@onegreyonewhite this is a great PR. I have implemented most of the features in a new branch here #34 if you'd like to collaborate at all.

Essentially I've made it much more 'cloud-native' in behavior with cloud-init support and using clone functionality.

@onegreyonewhite

@travisghansen thanks for the tip. I think we're going to fork this project later and will develop it independently, because it is unacceptable to wait six months for pull requests to be accepted.

@travisghansen (Contributor)

Fair enough. I've pretty much got it in a final state. Just starting to test with Rancher now, so I'm sure there will be some nuances to clean up as a result of that.

@lnxbil (Owner) commented Jul 28, 2020

@travisghansen thanks for the tip. I think we're going to fork this project later and will develop it independently, because it is unacceptable to wait six months for pull requests to be accepted.

Sorry about that. My time for open source is very limited at the moment.

@lnxbil (Owner) commented Jul 29, 2020

As @travisghansen said, this work is included in his branch, so I am closing this pull request.

@lnxbil closed this on Jul 29, 2020