
Add additional packages and container images verification automation#337

Open
RajAlppy wants to merge 4 commits into dell:automation-v2.1.0.0 from RajAlppy:automation-v2.1.0.0

Conversation

@RajAlppy

  • Add discovery module for verifying additional RPM packages and container images
  • Implement kube control plane normalization (with a single node, that node is first; with multiple nodes, the first listed node is first)
  • Add support for displaying Slurm/non-K8s nodes in container images results
  • Create comprehensive test cases with clean output format matching telemetry/prepare_oim style

New files:

  • automation_library/discovery/vars/additional_pkgs_vars.py
  • automation_library/discovery/functions/additional_pkgs_func.py
  • automation_library/discovery/messages/additional_pkgs_msgs.py
  • molecule/discovery/tests/test_additional_packages_and_container_images.py

Updated files:

  • automation_library/discovery/__init__.py
  • automation_library/discovery/functions/__init__.py
  • automation_library/discovery/messages/__init__.py
  • automation_library/discovery/vars/__init__.py

Features:

  • Verifies RPM packages from additional_packages.json on all nodes
  • Verifies container images on K8s nodes (crictl/podman)
  • Normalizes service_kube_control_plane vs service_kube_control_plane_first based on node count
  • Shows [treated as: service_kube_control_plane_first] indicator in output
  • Displays non-K8s nodes with 'no images expected' message
  • Follows Windsurf global rules and reuses core module functions

Super User added 2 commits March 26, 2026 00:04
…l packages automation

- Merged Balaji's PR dell#335 changes from dell/omnia-artifactory
- Resolved conflicts in discovery module __init__.py files
- Combined both sets of imports: Balaji's refactored discovery structure + additional packages automation
- All functions, variables, and messages now properly exported
- Discovery module now includes: SSH, Slurm, LDAP, K8s, and additional packages verification
all_kube_cp_nodes.extend(nodes)

# Sort by hostname to ensure consistent ordering
all_kube_cp_nodes.sort(key=lambda n: n.get("hostname", ""))
Contributor


We should not sort the control plane nodes by hostname, because users can assign any name to their Kubernetes control plane nodes. In all cases, the first node should be considered service_kube_control_plane_first.

Author


Removed hostname sorting for kube control plane nodes

admin_ip = node.get("admin_ip", "")
missing = node.get("missing", [])

if not admin_ip:
Contributor


Can you change the output format to this:

[role_name1]
  node name1
    package1 [installed]
    package2 [not installed]
  node name2
    package1 [installed]
    package2 [not installed]
[role_name2]
  node name1
    package1 [installed]
    package2 [not installed]
  node name2
    package1 [installed]
    package2 [not installed]

Refer to the build_image output.

Author


Updated output format to match build_image style

Super User added 2 commits April 3, 2026 10:21
- Changed from per-node summary to per-package status display
- Format now shows: [role_name] -> node_name -> package [installed/not installed]
- Applied same format to both test_additional_packages and test_additional_container_images
- Matches build_image test output style as requested by reviewer
- Removed checkmarks/symbols, using cleaner bracket notation

Example output:
[service_kube_control_plane_x86_64]
  k8scp1
    htop [installed]
    atop [installed]

[slurm_control_node_x86_64]
  slurm-control-node1
    htop [installed]
    atop [not installed]
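
A minimal formatter for this layout can be sketched as below. The results dict shape (role -> node -> package -> status) is an assumption for illustration; the actual test code may organize its data differently.

```python
# Illustrative formatter for the build_image-style grouped output; the
# input shape {role: {node: {package: status}}} is an assumed layout.
def format_results(results):
    """Render nested results as [role] / node / package [status] blocks."""
    blocks = []
    for role, nodes in results.items():
        lines = [f"[{role}]"]
        for node, packages in nodes.items():
            lines.append(f"  {node}")
            for pkg, status in packages.items():
                lines.append(f"    {pkg} [{status}]")
        blocks.append("\n".join(lines))
    # Blank line between role sections, matching the example above
    return "\n\n".join(blocks)


print(format_results({
    "service_kube_control_plane_x86_64": {
        "k8scp1": {"htop": "installed", "atop": "installed"},
    },
    "slurm_control_node_x86_64": {
        "slurm-control-node1": {"htop": "installed", "atop": "not installed"},
    },
}))
```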
- Removed hostname sorting logic from _normalize_kube_control_plane_role
- First node in PXE mapping order is now always treated as service_kube_control_plane_first
- Users can assign any hostname to their kubernetes control plane nodes
- Order is preserved as it appears in the PXE mapping file

This addresses reviewer feedback that sorting by hostname was incorrect since
users can assign arbitrary names to their control plane nodes.
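
The corrected behavior of _normalize_kube_control_plane_role can be sketched like this. The standalone function name and return shape here are illustrative assumptions; only the ordering rule comes from the commit description.

```python
# Hypothetical sketch of the corrected normalization: no hostname sorting,
# the first node in PXE mapping order becomes service_kube_control_plane_first.
def normalize_kube_control_plane(nodes):
    """Split control-plane nodes into 'first' and the rest, preserving order.

    With a single node, that node is the 'first'; with multiple nodes, the
    first entry as listed in the PXE mapping file is the 'first'.
    """
    if not nodes:
        return {
            "service_kube_control_plane_first": [],
            "service_kube_control_plane": [],
        }
    return {
        "service_kube_control_plane_first": [nodes[0]],
        "service_kube_control_plane": nodes[1:],
    }
```

Note that hostnames play no part in the split, so nodes named "zeta" or "alpha" keep their mapping-file positions.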
