Using this new mini-plot.py script, we can see the behavior of each individual benchmark much more clearly:
Observable expectation values
This contrasts with our current plotting scripts, which place all benchmarks on a single shared Y axis for both the simulation and compilation benchmarks, making the plots difficult to read at times, e.g.
Compilation benchmarks
For this issue, you should:
- update our official ucc-bench plotting script to separate the different benchmarks into separate subplots, as in the mini-plot script
- for the simulation benchmarks, also plot the uncompiled noisy expectation value, so we can compare the performance of compiled vs. uncompiled circuits under noise
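As a rough sketch of the intended layout, the snippet below draws one subplot per benchmark and overlays the uncompiled noisy baseline as a dashed line. The DataFrame columns (`benchmark`, `compiler`, `expval_compiled`, `expval_uncompiled`) and the sample values are hypothetical; the real ucc-bench results schema may differ.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical results frame; column names are assumptions, not the
# actual ucc-bench output format.
df = pd.DataFrame({
    "benchmark": ["qft", "qft", "qaoa", "qaoa"],
    "compiler": ["ucc", "qiskit", "ucc", "qiskit"],
    "expval_compiled": [0.91, 0.88, 0.75, 0.70],
    "expval_uncompiled": [0.80, 0.80, 0.60, 0.60],
})

benchmarks = df["benchmark"].unique()
fig, axes = plt.subplots(
    1, len(benchmarks), figsize=(5 * len(benchmarks), 4), squeeze=False
)

for ax, name in zip(axes[0], benchmarks):
    sub = df[df["benchmark"] == name]
    ax.bar(sub["compiler"], sub["expval_compiled"], label="compiled")
    # The uncompiled noisy baseline is per-benchmark (independent of
    # compiler), so draw it once as a horizontal reference line.
    ax.axhline(
        sub["expval_uncompiled"].iloc[0],
        color="gray", linestyle="--", label="uncompiled",
    )
    ax.set_title(name)
    ax.set_ylabel("noisy expectation value")
    ax.legend()

fig.tight_layout()
fig.savefig("expval_subplots.png")
```

Each subplot then has its own Y scale, so a benchmark with small expectation values is no longer flattened by one with large ones.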