Something funky is going on in migrators.
openmpi uses both ucx and libhwloc, but both ucx120 and libhwloc2122 migrations completed without issuing PRs to openmpi.
This has led to downstream failures like:
I'm not sure how the graph is computed, but openmpi is a v1 recipe that uses these dependencies conditionally:
```yaml
host:
  - if: mpi_type != 'external'
    then:
      - libevent
      - libfabric-devel ${{ libfabric }}.*
      - libhwloc
      - libpmix-devel
      - zlib
  - if: linux
    then:
      - libnl
  - if: linux and not ppc64le
    then:
      - ucc
      - ucx
  - if: with_cuda
    then:
      - cuda-version ${{ cuda_compiler_version }}.*
```
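To illustrate why the conditionals shouldn't hide these deps from the graph, here is a minimal sketch (not the actual conda-forge bot code; `resolve_host_deps` and its parameters are hypothetical) that evaluates the conditions above for the common linux-64 build. Both `libhwloc` and `ucx` end up in the host requirements, so openmpi should appear in both migrators' graphs:

```python
# Hypothetical sketch: resolve the conditional host dependencies above
# for a given platform/variant. This is NOT how the bot computes the
# graph; it just shows which deps the conditions select.

def resolve_host_deps(mpi_type: str, platform: str, with_cuda: bool) -> set[str]:
    deps: set[str] = set()
    if mpi_type != "external":
        deps |= {"libevent", "libfabric-devel", "libhwloc", "libpmix-devel", "zlib"}
    if platform.startswith("linux"):
        deps.add("libnl")
    if platform.startswith("linux") and "ppc64le" not in platform:
        deps |= {"ucc", "ucx"}
    if with_cuda:
        deps.add("cuda-version")
    return deps

# Default linux-64 build: both migrated packages are present.
deps = resolve_host_deps(mpi_type="internal", platform="linux-64", with_cuda=False)
assert {"libhwloc", "ucx"} <= deps
```

So even if the graph builder evaluates conditionals per-variant rather than taking the union of all branches, the standard linux-64 variant alone should have pulled openmpi into both migrations.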
Possibly related: is a migration supposed to be considered complete when all PRs have been opened, or when they are merged? Both migrators are now marked as completed despite several open, failing PRs.