Understanding benchmark results #239
Replies: 1 comment
Hi @nonhermitian, thank you for the feedback! Your comment spurred internal discussions to help clarify the choices we’ve made in UCC, for ourselves and for new users: see #242, #245, #246. We'd be happy to receive your continued feedback as we address these issues and any others you’d like to comment on.

**Benchmark defaults**

In general, when configuring the other compilers in our benchmarking suite, we had to decide what a fair comparison would be, and our guiding principle was the most common usage in online examples and documentation.

In Qiskit, this was fairly clear. In Cirq, there is no default "cirq.transpile" to my knowledge; instead, the recommendation and most common usage I found online was to use `cirq.optimize_for_target_gateset` with one of the built-in gatesets.

In PyTKET, there is a default `FullPeepholeOptimise` pass, but in our early benchmarking it took roughly 50x as long to run as the other compilers. At the time this did not seem like a fair comparison and was deemed infeasible to run as part of a regular benchmarking suite, so instead we opted for a relatively basic, manually constructed optimization pass.

In both cases, we've reached out to the maintainers of these repos to ask whether these are sensible defaults for comparison.

**UCC development philosophy**

To speak to your broader point: indeed, UCC as it stands today is a wrapper around the transpiler infrastructure built by Qiskit. As we continue to develop UCC, our plans include error mitigation, low-level quantum control, and more: a true compiler collection. Our aim in benchmarking UCC now is to develop in public and show our work and progress as we move along this development path.
As it stands now, ucc is basically a wrapper around Qiskit transpiler passes. As such, it is hard to understand what is to be gained by benchmarking ucc vs Qiskit. In addition, looking at the pytket vs Qiskit comparison, the latter is set to its full optimization level, whereas the former runs only a small portion of a compilation chain. Again, it is hard to see what a user is to gain from these biased comparisons. In both cases, this is not the default behavior of the SDK pipelines.