[Riverlane] Error budgeting: understanding your issues is the first step to solving them.
Introduction
Imagine trying to find a needle in a haystack while blindfolded, and being offered the chance to remove the blindfold. Would you take it?
Now imagine trying to improve the error correction on your quantum chip without error budgeting, and being offered an easy-to-use, fully implemented error-budgeting procedure. Would you take the offer?
Main findings
deltakit now includes such a procedure. The catch? There is none: it is open source and entirely free to use.
Importance and impact
For hardware manufacturers, this means you can now easily find out which source of error dominates your logical error rates, and plan improvements with an idea of the return on investment.
For quantum error-correction researchers and implementers, it is now easy to assess the impact of common noise sources on your shiny new quantum error-correction code, find ways to make it more resilient, or tailor it to specific architectures with different noise profiles.
Technical insights
It was really fun getting into the weeds of quantum error-correction metrics such as the logical error probability, the logical error probability per round and the error-suppression factor in order to finally be able to compute an error budget.
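To make these quantities concrete, here is a minimal sketch of the standard relations between them. It is plain Python rather than deltakit's actual API, and it assumes the usual model where logical errors accumulate independently from round to round:

```python
# Illustrative sketch (not deltakit's API): standard relations between the
# logical error probability, the per-round logical error probability and the
# error-suppression factor Lambda.

def per_round_error(p_logical: float, rounds: int) -> float:
    """Logical error probability per round, from the probability of a logical
    error after `rounds` rounds, using 1 - 2*p = (1 - 2*eps)**rounds."""
    return 0.5 * (1.0 - (1.0 - 2.0 * p_logical) ** (1.0 / rounds))

def suppression_factor(eps_d: float, eps_d_plus_2: float) -> float:
    """Error-suppression factor: ratio of per-round logical error
    probabilities when the code distance is increased by 2."""
    return eps_d / eps_d_plus_2

# Example: 3% of shots fail after 10 rounds at distance d,
# 0.5% of shots fail after 10 rounds at distance d + 2.
eps_d = per_round_error(0.03, 10)
eps_d2 = per_round_error(0.005, 10)
print(f"eps/round at d   : {eps_d:.5f}")
print(f"eps/round at d+2 : {eps_d2:.5f}")
print(f"Lambda           : {suppression_factor(eps_d, eps_d2):.2f}")
```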
Beyond the metrics themselves, this work let me revisit some of the most interesting lessons I learned in school on numerical analysis and error propagation. Getting the whole pipeline to report standard deviations for the returned estimates was an interesting challenge and a very valuable addition, as it lets you weigh the confidence you can place in the reported error budget.
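As an illustration of what such uncertainty tracking can look like (an assumed approach, not necessarily the one implemented in deltakit), one can combine the binomial standard error on the measured logical error probability with a first-order, delta-method propagation through the per-round formula:

```python
# Illustrative sketch (assumed approach): binomial standard error on the
# measured logical error probability, propagated to the per-round estimate.
import math

def logical_error_with_std(n_failures: int, n_shots: int) -> tuple[float, float]:
    """Estimated logical error probability and its binomial standard error."""
    p = n_failures / n_shots
    return p, math.sqrt(p * (1.0 - p) / n_shots)

def per_round_error_with_std(p: float, p_std: float, rounds: int) -> tuple[float, float]:
    """Per-round logical error probability and its propagated standard deviation."""
    eps = 0.5 * (1.0 - (1.0 - 2.0 * p) ** (1.0 / rounds))
    # First-order propagation: d(eps)/d(p) = (1/rounds) * (1 - 2p)**(1/rounds - 1)
    deriv = (1.0 / rounds) * (1.0 - 2.0 * p) ** (1.0 / rounds - 1.0)
    return eps, abs(deriv) * p_std

p, p_std = logical_error_with_std(300, 10_000)  # 300 failures in 10,000 shots
eps, eps_std = per_round_error_with_std(p, p_std, rounds=10)
print(f"p_L       = {p:.4f} ± {p_std:.4f}")
print(f"eps/round = {eps:.5f} ± {eps_std:.5f}")
```

Reporting an estimate together with its standard deviation makes it immediately clear which entries of the budget are statistically significant and which would need more shots to be trusted.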
Context and timing
Error budgeting is a crucial tool in the quantum error-correction researcher's toolbox, as it allows you to quickly and easily understand the bottlenecks of a given code or hardware platform.
This is the first open-source implementation of the error-budgeting procedure, usable by anyone. We hope that, with more and more hardware providers shifting to a quantum error-correction mindset, this new feature will help shorten their improvement cycles.
Personal perspective
Personally, I have always been convinced that we need good tools and libraries to assess the performance of individual components in a larger system. Quoting myself from 5 years ago (I was talking about optimising the implementation of a quantum algorithm, but it translates nearly word for word here):
Before even thinking about modifying code to optimise, a crucial thing should be clear in your head:
BENCHMARK IT
[…]
Without benchmarks, you cannot know for sure which part of your implementation is slow. You may guess, and you may guess right, but you will not know for sure until you have implemented the optimisation and seen the results. Benchmarking gives you certainty about the problematic parts of your code.
The above comment led to the implementation of the first quantum program profiler, qprof, during my PhD, and I see the error-budgeting implementation as the natural continuation of the motto “benchmark it!”, applied to quantum error correction.
Conclusion
A quantum error-correction team without error budgeting is like a carpenter without a chainsaw: unable to do their job efficiently.
Thanks to the new open-source implementation in deltakit, it is now easy to perform error-budgeting estimations for your particular noise model, quantum error-correction code, or implementation.