Unpacking the science behind the “quantum supremacy” breakthrough
It’s like the “Bourne Supremacy,” but for fancy computers.
When assessing the commercial viability of quantum computing, one of the basic questions to answer is, “What can you do that a classical computer can’t?”
Most attempts to establish this so-called “quantum supremacy” have revolved around simply trying to out-compute classical computers, without much regard for whether the end product of that compute has any utility.
Which leads to the second question: “Well, can you do anything a classical computer can’t that could make or save me money right now?”
On March 12, D-Wave Quantum claimed the company had answered both these questions in the affirmative based on a peer-reviewed paper published in the journal Science.
That announcement, along with a very encouraging set of quarterly results, caused the stock to double in just three sessions.
But with all due respect to the authors, the report is completely inscrutable to those of us whose science education never included physics to begin with and ended with chemistry in 11th grade. Even the journalist failsafe of “read the abstract, read the conclusion, and you’ll kind of know what’s going on” is rendered completely useless when the abstract contains such phrases as “area-law scaling of entanglement in the model quench dynamics” and “stretched-exponential scaling of effort.”
When we recently spoke with D-Wave Quantum CEO Dr. Alan Baratz, one of the first things we asked was what the heck all this actually meant. Basically, quantum computers were able to identify what types of materials can make good sensors and how to make them as sensitive as possible. Here’s his longer explanation:
“Essentially what we’ve done is we have computed several different properties of magnetic materials. But to put a little bit finer point on that, what we are looking at is how these materials behave when they get close to what’s known as a phase transition.
OK, so what’s a phase transition? That’s like water freezing, or water boiling and a gas being created. Well, magnetic materials also go through a phase transition, but that phase transition doesn’t necessarily occur as a result of temperature changes; it occurs as a result of putting them inside a magnetic field. You’ve got a magnetic material that you put inside a magnetic field, and depending on the actual structure and strength of that magnetic field, that magnetic material may go through a quantum phase transition.

Now the reason phase transitions are so important in magnetic materials is that a lot of times magnetic materials are used as sensors, like in an MRI. And what we know is that if the magnetic material is close to its phase transition point, it becomes a much more sensitive sensor. It can detect more, smaller, or fainter properties. So for any magnetic material, you’d like to understand where its phase transition point is and understand its sensitivity as it gets close to that point, because that will help you identify materials that are good sensors and help you determine how you should operate those materials, what kind of a magnetic environment you should place them in as you’re using them as a sensor.
So that’s essentially what we’ve done. We’ve demonstrated that you can take a variety of different types of magnetic materials and put them in a magnetic field to get them right to their phase transition point. You can find out what that phase transition point is, and you can find out their sensitivity at that phase transition point. And that’s a really important set of properties to understand as you’re thinking about using these materials as sensors. Now, the upshot of all of this is that you can investigate new kinds of materials that have never been created before and determine if they make good sensors before you actually go try to fabricate them. So you can identify new types of materials much faster.”
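If you want to poke at the idea yourself, here’s a classical toy version of the exercise Baratz is describing (a minimal sketch, not D-Wave’s actual method, which ran on its quantum annealer at scales no desktop can touch). The transverse-field Ising model below is a textbook stand-in for a quantum magnet, broadly the same family of models the paper studies: sweep the field strength, watch the susceptibility spike, and you’ve located the phase transition point and the sensitivity around it.

```python
import numpy as np

# Toy stand-in for the experiment: a transverse-field Ising chain,
#   H = -J * sum(Z_i Z_{i+1}) - h * sum(X_i),
# diagonalized exactly for a handful of spins. Sweeping the field h and
# watching the susceptibility peak locates the quantum phase transition,
# i.e. the operating point where the "sensor" is most sensitive.

N = 8    # number of spins; kept tiny so brute-force diagonalization is feasible
J = 1.0  # coupling strength between neighboring spins

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def op_on_site(op, site):
    """Tensor a single-spin operator into the full N-spin space."""
    out = np.array([[1.0]])
    for i in range(N):
        out = np.kron(out, op if i == site else I2)
    return out

def hamiltonian(h):
    H = np.zeros((2**N, 2**N))
    for i in range(N - 1):   # nearest-neighbor ZZ coupling
        H -= J * op_on_site(Z, i) @ op_on_site(Z, i + 1)
    for i in range(N):       # transverse magnetic field on every spin
        H -= h * op_on_site(X, i)
    return H

# Average transverse magnetization, the quantity whose response we track.
Mx = sum(op_on_site(X, i) for i in range(N)) / N

fields = np.linspace(0.2, 2.0, 19)
mags = []
for h in fields:
    _, evecs = np.linalg.eigh(hamiltonian(h))
    ground = evecs[:, 0]             # lowest-energy state
    mags.append(ground @ Mx @ ground)

# Susceptibility dM/dh: its peak marks the (finite-size) transition point.
chi = np.gradient(np.array(mags), fields)
h_star = fields[np.argmax(chi)]
print(f"Susceptibility peaks near h = {h_star:.2f} "
      f"(the true critical point approaches h = J as N grows)")
```

The catch, and the whole reason quantum hardware matters here, is that the matrix doubles in size with every spin you add. Eight spins is a 256-by-256 problem any laptop eats for breakfast; a lattice of a few hundred spins, closer to what the paper tackles, is hopeless to brute-force this way.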
A little more, from the company’s press release on this breakthrough:
“Magnetic materials simulations, like those conducted in this work, use computer models to study how tiny particles not visible to the human eye react to external factors. Magnetic materials are widely used in medical imaging, electronics, superconductors, electrical networks, sensors, and motors...
Materials discovery is a computationally complex, energy-intensive and expensive task. Today’s supercomputers and high-performance computing (HPC) centers, which are built with tens of thousands of GPUs, do not always have the computational processing power to conduct complex materials simulations in a timely or energy-efficient manner.”
When asked how this was different from what Alphabet was able to pull off last December with its Willow chip, Baratz replied:
“The problem that they address with Willow is called random circuit sampling. So basically what you do is you take a quantum computer and you have it perform a random set of computations that have no value whatsoever. Nobody can do anything useful with this random sequence of computations, but you have it perform a random sequence of computations. And then you see if a classical computer could do the same thing. And what you find is that because these random computations are quantum mechanical computations, it’s very hard for classical computers to simulate them.
Right. But that’s all they’ve done. They’ve built a quantum system. They’ve had it perform a random sequence of quantum computations, and then they ask, how hard would it be for a classical computer to simulate that? And the answer is, it will be very hard. Now, what is important about Willow — because it was an important breakthrough — is that Google tried to do this in 2019 and they claimed quantum supremacy back then on this totally worthless problem. Interesting, but worthless. OK. The problem is shortly after that, it was shown that you actually could perform that computation classically.
Why? Because the Google system was so error-prone that you could only do relatively few of these computations before you got errors. So I think the circuit depth, or the number of computations you could do, is like 22 or 23, something like that. What Willow did was add some partial error correction to the system. And what they showed is that with partial error correction, they could do a longer sequence of these random computations, and that longer sequence could not be simulated classically. So there were two important things that came out of Willow. One: it’s a demonstration that you actually can do some partial error correction, a first demonstration of error correction on a quantum computer. It’s small, it’s partial, but it’s a step forward. Two: when you do that partial error correction, you can run longer computations before you get errors, long enough that classical computers probably cannot simulate them.
So that’s what Willow did. What we did is something very different. We’re not doing random anything. We are taking a real-world problem and basically performing the computation for that problem, which would be effectively impossible for a classical computer to perform. And those two are very different.”
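For a concrete picture of random circuit sampling, here’s a toy classical simulation of the idea (a sketch of the concept, not Google’s actual setup): layers of random single-qubit rotations interleaved with entangling gates, then bitstrings sampled from the final state.

```python
import numpy as np

# Toy version of random circuit sampling: layers of random single-qubit
# rotations plus entangling CZ gates applied to a small state vector, then
# bitstrings sampled from the result. The 2**n_qubits amplitudes are the
# whole story: that footprint grows exponentially, which is why a large,
# deep random circuit defeats classical simulation.

rng = np.random.default_rng(0)
n_qubits, depth = 5, 10
dim = 2 ** n_qubits

def random_unitary(rng):
    """Haar-random 2x2 unitary via QR decomposition of a Gaussian matrix."""
    m = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(m)
    return q * (np.diag(r) / np.abs(np.diag(r)))

def apply_1q(state, gate, q):
    """Apply a 2x2 gate to qubit q of the state vector."""
    psi = np.moveaxis(state.reshape([2] * n_qubits), q, 0)
    psi = np.tensordot(gate, psi, axes=(1, 0))
    return np.moveaxis(psi, 0, q).reshape(dim)

def apply_cz(state, a, b):
    """Controlled-Z on qubits a and b: flip the sign where both bits are 1."""
    idx = np.arange(dim)
    both = ((idx >> (n_qubits - 1 - a)) & 1) & ((idx >> (n_qubits - 1 - b)) & 1)
    out = state.copy()
    out[both == 1] *= -1
    return out

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                                   # start in |00000>

for layer in range(depth):
    for q in range(n_qubits):                    # random rotation on every qubit
        state = apply_1q(state, random_unitary(rng), q)
    for q in range(layer % 2, n_qubits - 1, 2):  # brick-work entangling layer
        state = apply_cz(state, q, q + 1)

probs = np.abs(state) ** 2
probs /= probs.sum()                             # guard against float drift
samples = rng.choice(dim, size=5, p=probs)
print([format(int(s), f"0{n_qubits}b") for s in samples])
```

At five qubits this runs instantly; at Willow’s roughly 100 qubits, the state vector alone would dwarf the storage capacity of every computer on Earth. That gap is the entire supremacy argument.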
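And the depth cap of “22 or 23” that Baratz mentions is just compounding error at work. A back-of-the-envelope sketch, assuming a purely illustrative 3% per-layer error rate (our number, not Google’s published figure):

```python
# If each circuit layer succeeds with probability (1 - p), the odds of an
# error-free run decay exponentially with depth. p = 0.03 is an assumed,
# illustrative per-layer error rate.
p = 0.03
for depth in (10, 22, 40, 80):
    print(depth, round((1 - p) ** depth, 3))
# At depth 22 the clean-run probability is already down to ~0.51; error
# correction effectively shrinks p, which stretches the usable depth.
```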
If that didn’t help, maybe this will:

[Figure from the paper]

Yeah, totally clears it up.