Fix _benchmark dropping configs that fail compilation #1942
fulvius31 wants to merge 2 commits into pytorch:main
Conversation
These changes make sense, but I'm not a big fan of adding an extra layer of complexity and tracking to an already complex function. Ideally we'd handle broken configs inline as we progress through the function, e.g. emitting error results directly in the exception handler rather than doing a post-hoc fixup at the end. That said, doing it cleanly requires restructuring how the filtered lists flow through precompilation and benchmarking, which is a bigger change.
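For illustration, a minimal sketch of the inline approach suggested above. All names here (`Config`, `Result`, `benchmark_inline`, the `compile_config`/`run_config` callables) are hypothetical stand-ins, not the actual autotuner API; the point is only that emitting the error result inside the exception handler keeps the results list the same length as the input with no fixup pass:

```python
import math
from dataclasses import dataclass

# Hypothetical stand-ins for the autotuner types; names are
# illustrative only, not the real Helion/PyTorch classes.
@dataclass
class Config:
    block_size: int

@dataclass
class Result:
    config: Config
    perf: float  # math.inf marks a config that failed to compile

def benchmark_inline(configs, compile_config, run_config):
    """Emit an error result directly in the exception handler,
    so len(results) == len(configs) holds at every step."""
    results = []
    for config in configs:
        try:
            fn = compile_config(config)
            results.append(Result(config, run_config(fn)))
        except Exception:
            # Inline inf-perf placeholder instead of a post-hoc fixup.
            results.append(Result(config, math.inf))
    return results
```

With this shape, any caller that zips configs against results 1:1 stays aligned by construction.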
I agree, but this is a bug that needs to be addressed. I wasn't able to run autotuning at all since it crashed.
I think the general changes are good for now. This function could however use some tidying up in the future. |
Summary
- `_benchmark` skips configs that fail `compile_config` but didn't emit a placeholder result, making the results list shorter than the input.
- `parallel_benchmark_population` zips members against results 1:1, and the misaligned pairing hits `assert result.config is member.config`.
- Fix: append inf-perf placeholders for failed configs at the end.
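The bug and the fix described in the summary can be sketched as follows. This is a toy reproduction under assumed names (`Config`, `Result`, `benchmark`), not the actual `_benchmark` implementation; it shows failing configs being dropped during compilation and then re-added as inf-perf placeholders so the output length matches the input:

```python
import math

# Illustrative stand-ins; not the actual autotuner classes.
class Config:
    pass

class Result:
    def __init__(self, config, perf):
        self.config = config
        self.perf = perf

def benchmark(configs, compile_config, run_config):
    """Configs that fail compilation are skipped during benchmarking,
    then appended as inf-perf placeholders so the results list stays
    the same length as the input."""
    results, failed = [], []
    for config in configs:
        try:
            fn = compile_config(config)
        except Exception:
            failed.append(config)  # dropped from benchmarking...
            continue
        results.append(Result(config, run_config(fn)))
    # ...but restored as inf-perf placeholders at the end, so callers
    # that expect one result per input config see equal lengths.
    results.extend(Result(config, math.inf) for config in failed)
    return results
```

Without the final `extend`, a caller zipping members against results 1:1 pairs the i-th member with a result belonging to a later config, which is exactly what trips the `result.config is member.config` assertion.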