If I use batch_bins in ESPnet, it triggers a multi-GPU bug.
For example, with two GPUs and a final batch size of 61 under DataParallel, the batch is split into 30 and 31 samples per GPU.
When I then try to use torch-audiomentations, it raises the following error.
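A minimal sketch of the uneven split (DataParallel scatters the batch along dim 0 with `torch.chunk`-style semantics, so 61 samples on two devices become chunks of 31 and 30; the waveform length 16000 is just an illustrative assumption):

```python
import torch

# Simulate DataParallel's scatter: a batch of 61 waveforms split across 2 GPUs.
batch = torch.zeros(61, 16000)  # 61 one-second waveforms at 16 kHz (assumed shape)
chunks = batch.chunk(2, dim=0)  # same chunking DataParallel applies along dim 0
sizes = [c.shape[0] for c in chunks]
print(sizes)  # the per-device batch sizes are unequal
```

With unequal per-device batch sizes, any augmentation module that assumes a fixed batch dimension on every replica can fail.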

Must the batch size on each card be the same, or is there another way to avoid this bug?
Looking forward to a reply.
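One workaround I am considering (a sketch only; the `collate_divisible` helper is hypothetical, not part of ESPnet): drop trailing samples in the collate function so the batch length is always divisible by the number of GPUs, making every per-device chunk the same size:

```python
import torch

def collate_divisible(samples, n_gpus=2):
    # Hypothetical helper: trim the batch so len(samples) % n_gpus == 0,
    # so DataParallel's chunking yields equal per-device batch sizes.
    usable = len(samples) - (len(samples) % n_gpus)
    return torch.stack(samples[:usable])

batch = collate_divisible([torch.zeros(16000) for _ in range(61)])
print(batch.shape[0])  # 60, which splits evenly into 30 + 30
```

This discards at most `n_gpus - 1` samples per batch, which may or may not be acceptable for training.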