Add additional packages and container images verification automation #337

RajAlppy wants to merge 4 commits into dell:automation-v2.1.0.0 from
Conversation
- Add discovery module for verifying additional RPM packages and container images
- Implement kube control plane normalization (single node = first; with multiple nodes, the 1st is first)
- Add support for displaying Slurm/non-K8s nodes in container images results
- Create comprehensive test cases with clean output format matching telemetry/prepare_oim style

New files:
- automation_library/discovery/vars/additional_pkgs_vars.py
- automation_library/discovery/functions/additional_pkgs_func.py
- automation_library/discovery/messages/additional_pkgs_msgs.py
- molecule/discovery/tests/test_additional_packages_and_container_images.py

Updated files:
- automation_library/discovery/__init__.py
- automation_library/discovery/functions/__init__.py
- automation_library/discovery/messages/__init__.py
- automation_library/discovery/vars/__init__.py

Features:
- Verifies RPM packages from additional_packages.json on all nodes
- Verifies container images on K8s nodes (crictl/podman)
- Normalizes service_kube_control_plane vs service_kube_control_plane_first based on node count
- Shows [treated as: service_kube_control_plane_first] indicator in output
- Displays non-K8s nodes with 'no images expected' message
- Follows Windsurf global rules and reuses core module functions
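The normalization rule above can be sketched as follows. This is a minimal illustration, not the module's actual code: `normalize_kube_control_plane` is a hypothetical stand-in, assuming the nodes arrive in PXE mapping order.

```python
def normalize_kube_control_plane(nodes):
    """Hypothetical sketch of the normalization rule: the first node
    in PXE mapping order is treated as service_kube_control_plane_first,
    and any remaining nodes stay service_kube_control_plane. Input
    order is preserved; nothing is sorted."""
    roles = []
    for index, node in enumerate(nodes):
        role = ("service_kube_control_plane_first" if index == 0
                else "service_kube_control_plane")
        roles.append((node, role))
    return roles
```

With a single control plane node, that node is the "first"; with multiple nodes, only the first entry gets the `_first` role.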
…l packages automation

- Merged Balaji's PR dell#335 changes from dell/omnia-artifactory
- Resolved conflicts in discovery module __init__.py files
- Combined both sets of imports: Balaji's refactored discovery structure + additional packages automation
- All functions, variables, and messages now properly exported
- Discovery module now includes: SSH, Slurm, LDAP, K8s, and additional packages verification
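The per-node RPM verification this PR adds could look roughly like the sketch below. `package_installed` and its injectable `runner` are assumptions for illustration, not the module's real API; the only grounded fact is that `rpm -q <package>` exits non-zero when the package is absent.

```python
import subprocess


def package_installed(package, runner=subprocess.run):
    """Hypothetical per-package check: `rpm -q <package>` returns
    exit code 0 only when the package is installed. `runner` is
    injectable so the check can be unit-tested without rpm."""
    result = runner(
        ["rpm", "-q", package],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0
```

In tests, a fake runner can be passed in place of `subprocess.run` to simulate installed and missing packages.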
    all_kube_cp_nodes.extend(nodes)

    # Sort by hostname to ensure consistent ordering
    all_kube_cp_nodes.sort(key=lambda n: n.get("hostname", ""))
We should not sort the control plane nodes by hostname, because users can assign any name to their Kubernetes control plane nodes. In all cases the first node should be treated as service_kube_control_plane_first.
Removed hostname sorting for kube control plane nodes
    admin_ip = node.get("admin_ip", "")
    missing = node.get("missing", [])

    if not admin_ip:
Can you change the output format to something like this?

[role_name1]
  node name1
    package1 [installed]
    package2 [not installed]
  node name2
    package1 [installed]
    package2 [not installed]
[role_name2]
  node name1
    package1 [installed]
    package2 [not installed]
  node name2
    package1 [installed]
    package2 [not installed]

Refer to the build_image output.
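The requested layout can be produced by a formatter along these lines. This is a sketch: `format_package_report` and the shape of its input are assumptions, not the test's actual code.

```python
def format_package_report(results):
    """Hypothetical formatter for the role -> node -> package layout:
    `results` maps a role name to {node name: {package: installed_bool}}."""
    lines = []
    for role, nodes in results.items():
        lines.append(f"[{role}]")
        for node, packages in nodes.items():
            lines.append(f"  {node}")
            for package, installed in packages.items():
                status = "installed" if installed else "not installed"
                lines.append(f"    {package} [{status}]")
    return "\n".join(lines)
```

Grouping by role first, then node, keeps the report readable when the same package list is checked across many nodes.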
Updated output format to match build_image style
- Changed from per-node summary to per-package status display
- Format now shows: [role_name] -> node_name -> package [installed/not installed]
- Applied same format to both test_additional_packages and test_additional_container_images
- Matches build_image test output style as requested by reviewer
- Removed checkmarks/symbols, using cleaner bracket notation
Example output:

[service_kube_control_plane_x86_64]
  k8scp1
    htop [installed]
    atop [installed]
[slurm_control_node_x86_64]
  slurm-control-node1
    htop [installed]
    atop [not installed]
- Removed hostname sorting logic from _normalize_kube_control_plane_role
- First node in PXE mapping order is now always treated as service_kube_control_plane_first
- Users can assign any hostname to their kubernetes control plane nodes
- Order is preserved as it appears in the PXE mapping file

This addresses reviewer feedback that sorting by hostname was incorrect, since users can assign arbitrary names to their control plane nodes.