Didn't find an issue for this. Cross-reference: #19259 (comment)
I believe some of the "noisy" performance regressions seen lately are caused by this.
On my machine (Haswell) the pisum() benchmark on master gives
```julia
julia> @benchmark pisum()
BenchmarkTools.Trial:
  memory estimate:  0.00 bytes
  allocs estimate:  0
  --------------
  minimum time:     33.362 ms (0.00% GC)
  median time:      33.445 ms (0.00% GC)
  mean time:        33.532 ms (0.00% GC)
  maximum time:     34.761 ms (0.00% GC)
  --------------
  samples:          150
  evals/sample:     1
  time tolerance:   5.00%
  memory tolerance: 1.00%
```
while on 0.5 I get
```julia
julia> @benchmark pisum()
BenchmarkTools.Trial:
  memory estimate:  0.00 bytes
  allocs estimate:  0
  --------------
  minimum time:     7.548 ms (0.00% GC)
  median time:      7.571 ms (0.00% GC)
  mean time:        7.611 ms (0.00% GC)
  maximum time:     8.404 ms (0.00% GC)
  --------------
  samples:          657
  evals/sample:     1
  time tolerance:   5.00%
  memory tolerance: 1.00%
```
This is despite both versions producing practically the same LLVM code.
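For reference, `pisum` is the classic microbenchmark from Julia's `test/perf` suite; a sketch of it (which I believe matches the upstream definition) is:

```julia
# pisum: repeatedly accumulate the partial sum of the Basel series 1/k^2.
# The outer loop redoes the work 500 times so the benchmark runs long
# enough to time reliably; the result approaches pi^2/6.
function pisum()
    s = 0.0
    for j = 1:500
        s = 0.0
        for k = 1:10000
            s += 1.0 / (k * k)
        end
    end
    return s
end

pisum()  # ≈ 1.6448, i.e. pi^2/6 minus the tail of the series
```

The inner loop is pure scalar floating-point work with no allocation, so any slowdown here points at codegen rather than the GC.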
cc @yuyichao