This project provides an Ansible-based orchestration system for managing Orka VMs across multiple hosts. It allows you to plan and deploy VMs in a controlled manner, ensuring proper capacity management and distribution.
- Ansible installed on the control host
- Orka Engine License Key and URL to installer
- SSH access to remote hosts
- Python 3.x on both control and remote hosts
- sshpass installed on the control host for the Ansible runner
A web-based UI for running playbooks is available via Ansible Semaphore. This lets end users execute playbooks from a browser without needing CLI access. See semaphore/README.md for setup instructions.
```
├── deploy.yml               # Main deployment playbook
├── delete.yml               # Main deletion playbook
├── vm.yml                   # Main playbook for managing (delete, start, stop) a specific VM
├── pull_image.yml           # Main playbook for pulling an image on all hosts
├── create_image.yml         # Main playbook for creating an image and pushing it to a remote registry
├── list.yml                 # Main playbook for listing VMs
├── install_engine.yml       # Main playbook for installing Orka Engine
├── install_android_sdk.yml  # Main playbook for installing Android SDK
├── sdkmanager_install.yml   # Main playbook for installing Android SDK platforms and system images
├── sdkmanager_uninstall.yml # Main playbook for uninstalling Android SDK platforms and system images
├── deploy_avd.yml           # Main playbook for creating and running Android Virtual Devices
├── delete_avd.yml           # Main playbook for deleting Android Virtual Devices
├── provision_user.yml       # Main playbook for provisioning an admin user on a VM
├── install_citrix_vda.yml   # Main playbook for installing Citrix VDA on a VM
├── register_citrix_vda.yml  # Main playbook for registering a Citrix VDA with a Delivery Controller
└── dev/                     # Development environment
    ├── inventory            # Inventory file for development
    └── group_vars/          # Test vars for development
```
1. Create the development vars:

   ```
   mkdir -p dev/group_vars/all
   touch dev/group_vars/all/main.yml
   ```

2. Add `ansible_user` and `vm_image` to the variables.

3. Create an inventory file in `dev/inventory` with your hosts:

   ```
   [hosts]
   host1_ip
   host2_ip
   ```
- `vm_name`: Name of the VM to deploy or manage (required)
- `max_vms_per_host`: Maximum number of VMs allowed per host (default: defined in your inventory or group vars)
- `vm_image`: The image used to deploy VMs from
- `ansible_user`: The user used to connect to the Mac hosts
- `engine_binary`: Path to the Orka engine binary (default: defined in your inventory or group vars)
- `network_interface`: The network to attach the VM to, such as `en0` (default: none, will deploy via NAT mode)
- `cpu`: The number of vCPUs to allocate to a given VM (default: 2)
- `memory`: The amount of memory in MB to allocate to a given VM (default: 4096)
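For example, a minimal `dev/group_vars/all/main.yml` could look like the following. Every value here is a placeholder for your environment, not a required default:

```yaml
# dev/group_vars/all/main.yml -- example values only
ansible_user: admin
vm_image: ghcr.io/macstadium/orka-images/sonoma:latest
max_vms_per_host: 2
cpu: 2
memory: 4096
```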
To install Orka Engine, run:

```
ansible-playbook install_engine.yml -i dev/inventory -e "orka_license_key=<license_key>" -e "engine_url=<engine_url>"
```

where:

- `orka_license_key` - the Engine license key
- `engine_url` - the URL to download Engine from
Note - To force redeployment or an upgrade, pass `-e "install_engine_force=true"`.
To install the Android SDK (including Java JDK, command-line tools, and platform-tools) on target hosts:
```
ansible-playbook install_android_sdk.yml -i dev/inventory
```

This will:
- Install Eclipse Temurin JDK 21 (if not already present)
- Download and set up Android command-line tools
- Accept Android SDK licenses
- Install base SDK packages
- Configure `JAVA_HOME`, `ANDROID_HOME`, and `PATH` in the user's `.zshrc`
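As an illustration, the resulting `.zshrc` additions typically look like the lines below. The exact paths depend on your JDK and SDK install locations, so treat these values as assumptions:

```shell
# Hypothetical ~/.zshrc additions; adjust paths to your installation
export JAVA_HOME="/Library/Java/JavaVirtualMachines/temurin-21.jdk/Contents/Home"
export ANDROID_HOME="$HOME/Library/Android/sdk"
export PATH="$ANDROID_HOME/cmdline-tools/latest/bin:$ANDROID_HOME/platform-tools:$PATH"
```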
Note - To force reinstallation, pass `-e "install_android_sdk_force=true"`.
To install an Android SDK platform and its system images on target hosts:
```
ansible-playbook sdkmanager_install.yml -i dev/inventory
```

This will:
- Verify that `sdkmanager` is available (requires the Android SDK to be installed first)
- Install the specified platform (default: `android-35`)
- Install system images for the specified image types (default: `default,google_apis`)
Optional variables:
- `platform` - The Android platform to install (default: `android-35`)
- `image_types` - Comma-separated list of system image types to install (default: `default,google_apis`)
Example with custom platform and image types:
```
ansible-playbook sdkmanager_install.yml -i dev/inventory -e "platform=android-34" -e "image_types=default,google_apis,google_apis_playstore"
```

To uninstall an Android SDK platform and all of its system images from target hosts:
```
ansible-playbook sdkmanager_uninstall.yml -i dev/inventory
```

This will:
- Verify that `sdkmanager` is available
- Find and uninstall all system images for the specified platform
- Uninstall the platform itself
Optional variables:
- `platform` - The Android platform to uninstall (default: `android-35`)
Example:
```
ansible-playbook sdkmanager_uninstall.yml -i dev/inventory -e "platform=android-34"
```

Run the `deploy_avd.yml` playbook with `--tags plan` to see a plan for which host the AVD will be created on:
```
ansible-playbook deploy_avd.yml -i dev/inventory -e "vm_name=my-vm" --tags plan
```

Then, to create an Android Virtual Device (AVD) on the host where a specific VM is running:
```
ansible-playbook deploy_avd.yml -i dev/inventory -e "vm_name=my-vm"
```

The AVD name is derived automatically from the VM name using the pattern `{vm_name}-avd-{index}`, where the index increments for each new AVD associated with the VM (e.g. `my-vm-avd-0`, `my-vm-avd-1`).
This will:
- Gather VM data from all hosts
- Find the host where the specified VM is running
- Determine the next available AVD index for the VM
- Create an AVD on that host only
- Verify that `avdmanager` is available (requires the Android SDK to be installed first)
- Run the AVD and set up network connectivity between the specified VM and the AVD
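The index selection described above can be sketched as follows. `next_avd_index` is a hypothetical helper for illustration, not code from the playbook:

```shell
# Sketch: derive the next AVD index for a VM from a list of existing AVD
# names, assuming the {vm_name}-avd-{index} naming pattern described above.
next_avd_index() {
  vm="$1"; shift
  max=-1
  for name in "$@"; do
    case "$name" in
      "${vm}-avd-"*)
        # Strip everything up to and including "-avd-" to get the index
        idx="${name##*-avd-}"
        if [ "$idx" -gt "$max" ]; then max="$idx"; fi
        ;;
    esac
  done
  echo $((max + 1))
}
```

With `my-vm-avd-0` and `my-vm-avd-1` already present, `next_avd_index my-vm my-vm-avd-0 my-vm-avd-1` yields `2`; with no existing AVDs it yields `0`.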
Required variables:
- `vm_name` - The name of the VM where the AVD should be created (must be running on one of the hosts)
Optional variables:
- `platform` - The Android platform to use (default: `android-35`)
- `image_type` - The system image type to use (default: `default`)
- `run_avd` - Whether to run the AVD after creation (default: `true`)
- `cpu` - The number of vCPUs to allocate when running the AVD (default: let the host decide)
- `memory` - The amount of memory in MB to allocate when running the AVD (default: let the host decide)
Example with custom settings:
```
ansible-playbook deploy_avd.yml -i dev/inventory -e "vm_name=my-vm" -e "platform=android-34" -e "image_type=google_apis" -e "cpu=4" -e "memory=2048"
```

To delete an AVD from the host where a specific VM is running:
```
ansible-playbook delete_avd.yml -i dev/inventory -e "vm_name=my-vm" -e "avd_index=0"
```

The AVD name is derived from the VM name and index (e.g. `avd_index=0` deletes `my-vm-avd-0`).
This will:
- Gather VM data from all hosts
- Find the host where the specified VM is running
- Verify the AVD exists
- Check that no emulator is currently running for the AVD
- Delete the AVD
Required variables:
- `vm_name` - The name of the VM where the AVD is located
- `avd_index` - The index of the AVD to delete (e.g. `0` for `my-vm-avd-0`)
To plan a deployment without actually creating VMs:
```
ansible-playbook deploy.yml -i dev/inventory -e "vm_name=my-vm" --tags plan
```

This will:
- Check capacity on all hosts
- Check if a VM with the given name already exists
- Create a deployment plan
- Display the plan without executing it
To actually deploy the VM:
```
ansible-playbook deploy.yml -i dev/inventory -e "vm_name=my-vm" -e "vm_image=<image>"
```

- Capacity Check: The system first checks the current capacity and running VMs on each host.
- Planning: Creates a deployment plan. If a VM with the given name already exists, no new VM is deployed.
- Deployment: Executes the deployment plan, creating the VM on the selected host.
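Conceptually, the planning step picks a host with spare capacity before anything is created. A simplified sketch of that selection (`pick_host` is a hypothetical helper, not the playbook's actual logic):

```shell
# Sketch of the plan step's host selection: scan hosts in order and pick
# the first whose running-VM count is below max_vms_per_host.
pick_host() {
  max_vms="$1"; shift
  # Remaining arguments alternate host name and current VM count, e.g. h1 3 h2 1
  while [ "$#" -ge 2 ]; do
    host="$1"; count="$2"; shift 2
    if [ "$count" -lt "$max_vms" ]; then
      echo "$host"
      return 0
    fi
  done
  echo "no host has spare capacity" >&2
  return 1
}
```

For instance, with `max_vms_per_host=2`, a host already running 2 VMs is skipped in favor of one running 1.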
To plan a deletion without actually deleting a VM:
```
ansible-playbook delete.yml -i dev/inventory -e "vm_name=my-vm" --tags plan
```

This will:
- Check capacity on all hosts
- Find the VM with the given name
- Create a deletion plan
- Display the plan without executing it
To actually delete the VM:
```
ansible-playbook delete.yml -i dev/inventory -e "vm_name=my-vm"
```

- Capacity Check: The system first checks the current capacity and running VMs on each host.
- Planning: Finds the VM by name and creates a deletion plan. The playbook fails if no VM with the given name is found.
- Deletion: Executes the deletion plan, removing the VM from its host.
If you want to delete a single VM run:
```
ansible-playbook vm.yml -i dev/inventory -e "vm_name=<vm_name>" -e "desired_state=absent"
```

where `vm_name` is the name of the VM you want to delete. It can be a partial match.
NOTE - This playbook deletes all VMs matching the provided name. If you want to delete a VM on a specific host, use:

```
ansible-playbook vm.yml -i dev/inventory -e "vm_name=<vm_name>" -e "desired_state=absent" --limit <host>
```

where `host` is the host you want to delete the VM from.
If you want to stop a VM run:
```
ansible-playbook vm.yml -i dev/inventory -e "vm_name=<vm_name>" -e "desired_state=stopped"
```

where `vm_name` is the full name or a partial match of the VM or VMs you want to stop.
NOTE - This playbook stops all VMs matching that name. If you want to stop a VM on a specific host, use:

```
ansible-playbook vm.yml -i dev/inventory -e "vm_name=<vm_name>" -e "desired_state=stopped" --limit <host>
```

where `host` is the host you want to stop the VM on.
If you want to start a VM run:
```
ansible-playbook vm.yml -i dev/inventory -e "vm_name=<vm_name>" -e "desired_state=running"
```

where `vm_name` matches the name or names of the VMs you want to start.
NOTE - This playbook starts all VMs matching that name. If you want to start a VM on a specific host, use:

```
ansible-playbook vm.yml -i dev/inventory -e "vm_name=<vm_name>" -e "desired_state=running" --limit <host>
```

where `host` is the host you want to start the VM on.
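The three `desired_state` values above each map to one action on every matching VM. A minimal sketch of that mapping (illustrative only; the playbook itself drives the Orka engine):

```shell
# Hypothetical sketch: map a desired_state value to the action taken
# on each VM whose name matches vm_name.
action_for_state() {
  case "$1" in
    absent)  echo delete ;;
    stopped) echo stop ;;
    running) echo start ;;
    *) echo "unknown desired_state: $1" >&2; return 1 ;;
  esac
}
```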
To find a specific VM matching a given name:
```
ansible-playbook list.yml -i dev/inventory -e "vm_name=my-vm"
```

You can also list all VMs across all hosts:
```
ansible-playbook list.yml -i dev/inventory
```

To pull an OCI image to the hosts, run:

```
ansible-playbook pull_image.yml -i dev/inventory -e "remote_image_name=<image_to_pull>"
```

where `image_to_pull` is the OCI image you want to pull. Optionally, you can also specify the following variables:
- `registry_username` - The username to authenticate to the registry with
- `registry_password` - The password to authenticate to the registry with
- `insecure_pull` - Whether to allow pulling via HTTP
This workflow:
- Deploys a VM from a specified base image
- Configures the VM by running all bash scripts inside the scripts folder
- Pushes an image from the VM to a specified remote OCI registry
- Deletes the VM
Note - By default, VMs are not accessible from outside the host they are deployed on. To connect to the VMs and configure them, the workflow uses port forwarding. `sshpass` is required on the Ansible runner in order to connect to the VM.
To configure an image and push it to a remote registry:
- Ensure you have added your bash scripts to the scripts folder
- Run:

  ```
  ansible-playbook create_image.yml -i dev/inventory -e "remote_image_name=<remote_destination>" -e "vm_image=<base_image>"
  ```

where `remote_destination` is the OCI image you want to push to and `base_image` is the image you want to deploy from. Optionally, you can also specify the following variables:
- `registry_username` - The username to authenticate to the registry with
- `registry_password` - The password to authenticate to the registry with
- `insecure_push` - Whether to allow pushing via HTTP
- `upgrade_os` - Whether you want the OS to be upgraded as part of the image creation process
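Scripts in the scripts folder run in order inside the VM during image creation. A hypothetical example of one such script (the name and contents are illustrative, not part of this repository):

```shell
#!/bin/bash
# Hypothetical scripts/00_example.sh, executed inside the VM by create_image.yml.
# Keep scripts idempotent so re-running image creation is safe.
set -eu
marker="configured-by-create-image"
echo "$marker"
```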
To provision an admin user account on a running VM:

```
ansible-playbook provision_user.yml -i dev/inventory \
  -e "vm_name=<vm_name>" \
  -e "vm_username=<vm_username>" \
  -e "vm_password=<vm_password>" \
  -e "new_username=<new_username>" \
  -e "new_user_password=<new_user_password>"
```

where:
- `vm_name` - the exact name of the running VM to provision the user on
- `vm_username` - the existing admin username on the VM used to connect
- `vm_password` - the password for the existing admin user
- `new_username` - the username for the new account
- `new_user_password` - the password for the new account
The playbook is idempotent — if the user already exists it will skip creation.
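The skip-if-exists behavior can be pictured with a small sketch (hypothetical helpers; the real playbook manages the account on the macOS VM itself):

```shell
# Sketch of the idempotency check: only create the account when no user
# with that name already exists.
user_exists() {
  id "$1" >/dev/null 2>&1
}

ensure_user() {
  if user_exists "$1"; then
    echo "skip: $1 exists"
  else
    echo "create: $1"
  fi
}
```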
Note — VMs may not be directly accessible from outside the host they run on,
depending on networking configuration. The playbook connects to the VM via SSH
through the Mac host as a jump proxy. sshpass must be installed on the Ansible
runner. Apple Command Line Tools will be installed on the VM automatically if not
already present.
To install Citrix Virtual Delivery Agent (VDA) on a running VM:

```
ansible-playbook install_citrix_vda.yml -i dev/inventory \
  -e "vm_name=<vm_name>" \
  -e "vm_username=<vm_username>" \
  -e "vm_password=<vm_password>" \
  -e "citrix_installer_url=<citrix_dmg_url>" \
  -e "hostname_suffix=<domain_suffix>"
```

where:
- `vm_name` - the exact name of the running VM to install Citrix VDA on (also used as the VM hostname)
- `vm_username` - the existing admin username on the VM used to connect
- `vm_password` - the password for the existing admin user
- `citrix_installer_url` - download URL for the Citrix VDA `.dmg`. We recommend hosting your installer in an S3 bucket with a presigned URL.
- `hostname_suffix` - domain suffix appended to `vm_name` to form the full hostname (e.g. `corp.example.com`). This value can be blank to use just the VM name as the hostname.
This playbook:
- Locates the VM by name across all hosts and adds it to the inventory via a jump proxy
- Installs developer tools and .NET runtime prerequisites
- Sets the VM hostname using the provided name and suffix
- Downloads, mounts, and installs the Citrix VDA package
- Grants required TCC permissions (screen capture and accessibility) for Citrix components when SIP is disabled
- Reboots the VM to complete installation
Note — sshpass must be installed on the Ansible runner. The playbook connects to the VM via the Mac host as a jump proxy.
After installing Citrix VDA, register the VM with a Citrix Delivery Controller using an enrollment token:

```
ansible-playbook register_citrix_vda.yml -i dev/inventory \
  -e "vm_name=<vm_name>" \
  -e "vm_username=<vm_username>" \
  -e "vm_password=<vm_password>" \
  -e "enrollment_token=<enrollment_token>"
```

where:
- `vm_name` - the exact name of the running VM to register
- `vm_username` - the existing admin username on the VM used to connect
- `vm_password` - the password for the existing admin user
- `enrollment_token` - Citrix enrollment token for registering the VDA with the Delivery Controller
You can group VMs together by giving them a shared prefix. This allows you to start, stop, or delete multiple VMs across various nodes by running a single task in Semaphore, or by executing a single Ansible run from the command line.
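Because `vm_name` can be a partial match, a shared prefix selects every VM whose name contains it. A small sketch of that behavior (`matches` is a hypothetical helper, not playbook code):

```shell
# Sketch of partial VM-name matching: a VM matches when its name contains
# the provided vm_name as a substring.
matches() {
  case "$2" in
    *"$1"*) return 0 ;;
    *)      return 1 ;;
  esac
}

# With vm_name=build, build-01 and build-02 match; test-01 does not.
for vm in build-01 build-02 test-01; do
  if matches build "$vm"; then echo "$vm matches"; fi
done
```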