[852] Implement deploy from ova #985
Conversation
Hi @GullapalliAkhil @Haroon-Dweikat-Ntx, is there any chance you could have a look at this?
Hi @GullapalliAkhil @Haroon-Dweikat-Ntx, could you let me know if anything is needed from my side?
@kvandenhoute Thanks for your contributions!! OVA (create, deploy) support has been part of the 2.3.2 release. Why is this review needed?
2.3.2 indeed includes OVA create, and that's it. There is no support for update or delete, nor does it keep anything correctly in state. I made the PR while 2.3.2 was still in progress, with the idea that it could maybe be included in that release. That's too late now, but I would still like the changes to be included at some point.

The review is needed because I would like this functionality to be released so I no longer have to run my own builds. I think this PR makes ova_deploy far more useful for a lot of people, as right now it does the bare minimum. I am willing to make further changes if needed after review; I would like this to become production grade. I know you probably have an internal roadmap for implementing things.
```go
ReadContext:   ResourceNutanixOvaVMDeploymentRead,
UpdateContext: ResourceNutanixOvaVMDeploymentUpdate,
DeleteContext: ResourceNutanixOvaVMDeploymentDelete,
Importer:      &schema.ResourceImporter{
```
IMHO: import is not needed here?
Thank you for taking the time to have a look.
Following the broader idea of the PR, I think it might need an importer. However, now that I've given it a second look, I'm not sure this import will work as is (it is the only part I have not tried out; the create/update/delete I am running my own build of in test).
When we agree on the direction of this PR, I will either write a custom import function or remove it.
This resource represents an action only; there are no existing objects to import.
```go
d.SetId(*vmUUID)
log.Printf("[DEBUG] OVA VM deployment completed successfully: vm_id=%s", *vmUUID)

// Handle additional disks after initial VM deployment
```
This resource is just for deploying the VM from an OVA. Once the deployment is done, the VM can be managed with the nutanix_virtual_machine_v2 resource via import, rather than having disks and power state managed here.
I don't see how it's a viable solution to use deploy_from_ova and then import. When you have to deploy hundreds of VMs, doing a manual import for every one is certainly not feasible. With a nuke-and-replace strategy (deploy from OVA combined with a volume group for persistent data) this would also mean re-importing on every deploy.
Using the import {} block does not seem to work either, because the id of an import block cannot be a variable, so I would need to hard-code the created VM id:
```hcl
resource "nutanix_ova_vm_deploy_v2" "vm" {
  ...
}

import {
  to = nutanix_virtual_machine_v2.vm
  id = nutanix_ova_vm_deploy_v2.vm.id # this id must be known during the plan operation, so this won't work
}

resource "nutanix_virtual_machine_v2" "vm" {
  ...
}
```

When using Prism Central, you can do this in one flow (deploy a VM from an OVA and add disks/NICs/...), so I would expect to be able to do something similar in Terraform.
I might be missing something here. How would you suggest creating a VM from an OVA, with optional extra disks and an optional power state (basically everything you can do on a normal VM), with one terraform apply?
> When using prism central, you can do this in 1 flow (deploy vm from ova + being able to add disks/nics/...), so I would expect to be able to do something similar in terraform.
As of now in the UI: if a user deploys a VM from an OVA, it uses the v3 API (https://PCIP:9440/api/nutanix/v3/vms) to deploy. IMHO: the API we use for this resource should be improved to support adding disks, power state, etc.
Yes, I have noticed that, which is why my first PR used the v3 API, and I think, as long as the v4 API is not ready for real usage, the v3 API should be used. But since the Nutanix Terraform provider deprecated everything below v4, this is not an option.
I agree it would be nicer if the v4 API implemented this, but at this moment it doesn't. It's also not open source afaik, so creating a PR there is not a solution. I don't think it makes sense to do it in the ntnx-api-golang-clients either, as that's just a translation of the API. So I have to do it here.
When the Prism API gets this update, the provider can be adapted. This way the provider can move forward.
It's not like the Prism API is abused in this PR; it just implements the 'import mechanism' in provider code by calling the VM update API with the id of the VM that was just created. It is indeed multiple API calls, create - update - update, and it would be nicer if it were one. However, all of them are properly awaited and their results captured, so I don't think that is a big issue.
Hi @GullapalliAkhil,
What is the way forward here?

Hi @GullapalliAkhil, have you heard anything about the Prism v4 API supporting proper OVA VM creation? Should I keep maintaining this PR, or are you not planning on including it? Thank you
```go
    },
  },
},
"disks": {
```
The override_vm_config does not support the disks attribute, as shown in the API docs.
Indeed, the API does not support this. I think that is a limitation of the API, but it should not limit the functionality of the Terraform provider. By combining a few API calls, we can deliver this functionality in the provider. It would indeed be nice if the API supported this natively, but it does not, and it is closed source.
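The "combining a few API calls" idea can be sketched like this. The `vmClient` interface and its method names are hypothetical placeholders for the real v4 SDK calls; the point is only the ordering: deploy from OVA, then attach extra disks, then apply the power state.

```go
package main

import "fmt"

// vmClient is a hypothetical abstraction over the v4 VM API; the real
// provider would call the generated SDK instead.
type vmClient interface {
	DeployFromOva(ovaID string) (vmID string, err error)
	AddDisk(vmID string, sizeBytes int64) error
	SetPowerState(vmID, state string) error
}

// deployVM chains the calls the resource would make: deploy the VM from the
// OVA, then attach each extra disk, then apply the desired power state.
func deployVM(c vmClient, ovaID string, diskSizes []int64, powerState string) (string, error) {
	vmID, err := c.DeployFromOva(ovaID)
	if err != nil {
		return "", err
	}
	for _, size := range diskSizes {
		if err := c.AddDisk(vmID, size); err != nil {
			return "", err
		}
	}
	if powerState != "" {
		if err := c.SetPowerState(vmID, powerState); err != nil {
			return "", err
		}
	}
	return vmID, nil
}

// fakeClient records the calls so the flow can be shown without a cluster.
type fakeClient struct{ calls []string }

func (f *fakeClient) DeployFromOva(ovaID string) (string, error) {
	f.calls = append(f.calls, "deploy:"+ovaID)
	return "vm-123", nil
}
func (f *fakeClient) AddDisk(vmID string, size int64) error {
	f.calls = append(f.calls, fmt.Sprintf("disk:%s:%d", vmID, size))
	return nil
}
func (f *fakeClient) SetPowerState(vmID, state string) error {
	f.calls = append(f.calls, "power:"+vmID+":"+state)
	return nil
}

func main() {
	c := &fakeClient{}
	vmID, _ := deployVM(c, "ova-1", []int64{1073741824}, "ON")
	fmt.Println(vmID, c.calls)
}
```

Each step would of course also await its task, as discussed elsewhere in this thread, before the next call is made.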
@kvandenhoute I feel that, instead of us making it work with custom patches, it would be much better if the API supported it, like you mentioned above. I have already raised several OVA improvement requests that would help us achieve this scenario.
Hi @Haroon-Dweikat-Ntx @GullapalliAkhil,
Thanks for taking the time to review.
If you would like to wait for the API changes, does it make sense for me to keep working on this PR? How do you envision these changes getting in place?
Thanks,
Karel
```go
Description:  "Memory size of the VM in bytes.",
ValidateFunc: validation.IntAtLeast(1),
},
"power_state": {
```
As @GullapalliAkhil said, it would be much better if the API supported it.
```go
}

var vmUUID *string
for _, entity := range taskResult.EntitiesAffected {
```
Can you please update this? We now have a utils func to extract the UUID from the task details:

```go
uuid, err := common.ExtractEntityUUIDFromTask(taskDetails, utils.RelEntityTypeVM, "Virtual Machine")
if err != nil {
	return diag.FromErr(err)
}
```

```go
	}
}

if vmUUID == nil {
```
This validation is redundant now that the utility function extracts the entity ID from the task details.
```hcl
locals {
  cluster_ext_id = data.nutanix_clusters_v2.clusters.cluster_entities[0].ext_id
  config         = jsondecode(file("%[1]s"))
```
This line cannot be removed; we read the test variables from the test variables file, and removing it would break the test.
```hcl
data "nutanix_subnets_v2" "subnets" {
  filter = "name eq '${local.vmm.subnet_name}'"
```
We are selecting a specific subnet by filtering on its name; removing this would result in a random subnet being used.
```go
}

`, filepath, vmName, vmDescription, ovaName)
```
Please revert this change.
```hcl
}

data "nutanix_subnets_v2" "subnets" {
  filter = "name eq '${local.vmm.subnet_name}'"
```
same as previous
```hcl
locals {
  cluster_ext_id = data.nutanix_clusters_v2.clusters.cluster_entities[0].ext_id
  config         = jsondecode(file("%[1]s"))
```
same as previous
@kvandenhoute JFYI: we discussed this internally with the team and are taking it in. Let us work together and pave the way for this PR to land in master. + @Haroon-Dweikat-Ntx
Great to hear, thanks for moving this forward. However, I am currently on holiday, so I will start implementing the requested changes in two weeks.
@GullapalliAkhil @Haroon-Dweikat-Ntx |
/ok-to-test TestAccV2NutanixOvaVmDeployResource

```
================================================== 🧪 TEST SUMMARY 🧪 ==================================================
Total Test Cases Run 🚀: 5
Total Test Cases Passed ✅: 5 (100 %)
Total Test Cases Failed ❌: 0 (0 %)
Total Test Cases Skipped ⚠️: 0 (0 %)
================================================== TESTS SUCCEEDED ✅ ==================================================
✅ TestAccV2NutanixOvaVmDeployResource_DeployVMFromOva
✅ TestAccV2NutanixOvaVmDeployResource_DeployVmFromOvaDoesNotExist
✅ TestAccV2NutanixOvaVmDeployResource_BasicUpdate
✅ TestAccV2NutanixOvaVmDeployResource_FullUpdate
✅ TestAccV2NutanixOvaVmDeployResource_DiskUpdate
================================================== TESTS FAILED ❌ =====================================================
🎉💃 No tests failed 🕺🎉
================================================== TESTS SKIPPED ⚠️ ====================================================
🎉💃 No tests skipped 🕺🎉
========================================================================================================================
```
```go
args := make(map[string]interface{})
args["If-Match"] = getEtagHeader(readResp, vmmConn)

var powerResp interface{}
```
There is a bug here: the PowerOn/Off API returns a response of type ApiResp, not a TaskReference.
Below is the correct implementation.

```suggestion
var taskUUID *string
switch newPowerState {
case "ON":
	powerResp, err := vmmConn.VMAPIInstance.PowerOnVm(utils.StringPtr(d.Id()), args)
	if err != nil {
		return diag.Errorf("error powering on VM: %v", err)
	}
	TaskRef := powerResp.Data.GetValue().(import3.TaskReference)
	taskUUID = TaskRef.ExtId
case "OFF":
	powerResp, err := vmmConn.VMAPIInstance.PowerOffVm(utils.StringPtr(d.Id()), args)
	if err != nil {
		return diag.Errorf("error powering off VM: %v", err)
	}
	TaskRef := powerResp.Data.GetValue().(import3.TaskReference)
	taskUUID = TaskRef.ExtId
default:
	return diag.Errorf("unsupported power state: %s", newPowerState)
}
```
|
Hi @kvandenhoute, can you please address my requested changes?
Co-authored-by: Haroon Dweikat Ntx <haroon.dweikat@nutanix.com>
```go
var powerResp interface{}
var taskUUID *string

switch newPowerState {
case "ON":
	powerResp, err = vmmConn.VMAPIInstance.PowerOnVm(utils.StringPtr(d.Id()), args)
	if err != nil {
		return diag.Errorf("error powering on VM: %v", err)
	}
	TaskRef := powerResp.(*import3.TaskReference)
	taskUUID = TaskRef.ExtId
case "OFF":
	powerResp, err = vmmConn.VMAPIInstance.PowerOffVm(utils.StringPtr(d.Id()), args)
	if err != nil {
		return diag.Errorf("error powering off VM: %v", err)
	}
	TaskRef := powerResp.(*import3.TaskReference)
	taskUUID = TaskRef.ExtId
default:
	return diag.Errorf("unsupported power state: %s", newPowerState)
}
```
Can you please address this change again? There is a bug here: the PowerOn/Off API returns a response of type ApiResp, not a TaskReference.
Below is the correct implementation.

```suggestion
var taskUUID *string
switch newPowerState {
case "ON":
	powerResp, err := vmmConn.VMAPIInstance.PowerOnVm(utils.StringPtr(d.Id()), args)
	if err != nil {
		return diag.Errorf("error powering on VM: %v", err)
	}
	TaskRef := powerResp.Data.GetValue().(import3.TaskReference)
	taskUUID = TaskRef.ExtId
case "OFF":
	powerResp, err := vmmConn.VMAPIInstance.PowerOffVm(utils.StringPtr(d.Id()), args)
	if err != nil {
		return diag.Errorf("error powering off VM: %v", err)
	}
	TaskRef := powerResp.Data.GetValue().(import3.TaskReference)
	taskUUID = TaskRef.ExtId
default:
	return diag.Errorf("unsupported power state: %s", newPowerState)
}
```
Can you please fix the lint issues?
Haroon-Dweikat-Ntx left a comment:
LGTM. @GullapalliAkhil, could you please take a final look and approve it?
Thanks.
Co-authored-by: kvhoute <karel.vanden-houte.ext@eurocontrol.int> Co-authored-by: Haroon Dweikat Ntx <haroon.dweikat@nutanix.com>
This PR implements the read/update/delete of the OVA deploy resource.
It also adds disks and power_state to override_vm_config to make it more complete.
I do not expect this to be included in 2.3.2, but since ova_deploy is only added from 2.3.2, I based my branch on it.
If any changes need to happen, please let me know.
Fixes #852, extends #904