Google’s Willow Quantum Chip Crushes Classical Computers on a Cosmic Timescale


Google just debuted its latest quantum chip, Willow, which the tech giant claims can perform calculations in five minutes that would take the world’s fastest supercomputers 10 septillion years. For reference, the universe isn’t even 14 billion years old—a fraction of a fraction of that timescale.

Quantum computers perform their calculations in a fundamentally different way than classical supercomputers. The team’s research, published today in Nature, outlines the error suppression in the Willow processor and the system’s superlative performance, which, the team wrote, “if scaled, could realize the operational requirements of large scale fault-tolerant quantum algorithms.”

Quantum devices are famously finicky; to perform their remarkable calculations they must be kept in a quantum state, which generally means a laboratory environment at near-absolute-zero temperatures. In such frigid conditions the chip’s circuits become superconducting, enabling the device to carry out operations governed by quantum rather than classical physics.

The Willow chip. Photo: Google

The outstanding issue—or goal, depending on your framing—is that quantum computers cannot yet solve useful problems beyond the reach of classical machines. That’s the real grail in quantum computing: a device with practical commercial applications beyond what would make sense, or even be possible, on cutting-edge classical computers.

Unlike conventional bits of information in a classical computer, which represent a value of “0” or “1”, quantum bits (or qubits) can occupy a blend of “0” and “1” at once, a state known as superposition. That property, spread across many entangled qubits, is what lets the machine attack certain calculations far faster than traditional devices. If too many errors occur in the quantum system, however, the operation falls apart.
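In standard textbook notation, a qubit’s state is a weighted combination of the two classical values, with the weights setting the odds of reading out a “0” or a “1”:

```latex
% A single qubit in superposition: measuring it yields 0 with probability
% |alpha|^2 and 1 with probability |beta|^2, so the two weights must sum to 1.
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad |\alpha|^{2} + |\beta|^{2} = 1
\]
```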

A major part of Willow’s significance is that the more qubits it devotes to error correction, the fewer errors the system makes. Errors can cause quantum operations to collapse, but instead of multiplying as the computer grows, they are suppressed.

In a press release accompanying the announcement, Hartmut Neven, the founder and lead of Google Quantum AI, wrote that “we tested ever-larger arrays of physical qubits, scaling up from a grid of 3×3 encoded qubits, to a grid of 5×5, to a grid of 7×7 — and each time, using our latest advances in quantum error correction, we were able to cut the error rate in half.”

“In other words,” Neven wrote, “we achieved an exponential reduction in the error rate.”
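Taken at face value, that halving per step implies a simple rule of thumb: each jump in grid size (what researchers call the code distance) cuts the logical error rate by another factor of two. Here is a minimal sketch of that projection, using a made-up starting error rate rather than any figure from the paper:

```python
# Projected logical error rate versus code distance, assuming the press
# release's "cut the error rate in half" at each step (3x3 -> 5x5 -> 7x7).
# The distance-3 starting rate below is a placeholder, not a measured value.
base_distance = 3
base_error = 1e-2  # hypothetical logical error rate for the 3x3 grid

for distance in (3, 5, 7, 9, 11):
    steps = (distance - base_distance) // 2   # each step adds 2 to the distance
    error = base_error * 0.5 ** steps         # halved once per step
    print(f"distance {distance:2d}: ~{error:.1e} logical error rate")
```

Under that assumption, every extra ring of qubits buys another factor-of-two improvement, which is why Neven describes the suppression as exponential.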

Operating with this kind of error reduction is known as going “below threshold,” and it is a watershed moment in the quest to build future quantum computers with even fewer errors. According to a Google release, the Willow system also showed substantive advances in real-time error correction, meaning the computer was fixing errors as they arose while it was working on a problem. Additionally, the encoded qubit arrays were longer-lived than the individual physical qubits that make them up, indicating that error correction was improving the resiliency of the entire quantum chip.

Willow’s performance on the random circuit sampling (RCS) benchmark would take the Frontier supercomputer—the fastest classical supercomputer in the world until last month—10 septillion years to match, far longer than the lifetime of the universe. To put that progress in perspective: In 2019, Google’s Sycamore quantum computer took 200 seconds to solve a problem that would have taken a supercomputer about 10,000 years, a landmark that allowed Google to declare quantum supremacy.

In July, the quantum computing company Quantinuum announced a 56-qubit system that outperformed the Sycamore processor on one of the benchmarks tested in 2019, the linear cross-entropy benchmark. Now, Google has drawn a new line in the sand. The team used the RCS benchmark, which asks a quantum processor to sample bitstrings from the output of a randomly chosen quantum circuit, a task that becomes exponentially harder to simulate classically as the circuit grows. Random circuit sampling doesn’t have useful applications, but it is a fundamental hurdle for quantum computers as scientists chase commercial, beyond-classical use cases.
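For readers curious how such a benchmark is scored, the linear cross-entropy fidelity is easy to sketch: the device submits measured bitstrings, and the score rewards bitstrings that the ideal circuit would output with above-average probability, so a noiseless sampler lands near 1 and a maximally noisy one near 0. The toy example below uses a Haar-random state as a stand-in for a real random circuit’s output distribution; it illustrates only the scoring formula, not Google’s experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 12                       # number of qubits in the toy example
dim = 2 ** n

# A Haar-random state has the same heavily skewed output-probability
# statistics (Porter-Thomas) used to model random-circuit-sampling outputs.
amps = rng.normal(size=dim) + 1j * rng.normal(size=dim)
probs = np.abs(amps) ** 2
probs /= probs.sum()

def xeb_fidelity(samples, ideal_probs, dim):
    """Linear cross-entropy benchmark score: F = D * <p_ideal(x_i)> - 1."""
    return dim * ideal_probs[samples].mean() - 1

shots = 100_000
perfect = rng.choice(dim, size=shots, p=probs)   # noiseless sampler
noisy = rng.integers(dim, size=shots)            # fully depolarized sampler

print(f"ideal sampler  F_XEB ~ {xeb_fidelity(perfect, probs, dim):.3f}")  # close to 1
print(f"uniform noise  F_XEB ~ {xeb_fidelity(noisy, probs, dim):.3f}")    # close to 0
```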

“Even if the people on Main Street don’t care, it could still be very interesting,” said John Preskill, the director of Caltech’s Institute for Quantum Information and Matter, in a Google video accompanying the news. “I think the quantum hardware has reached a stage now where it can advance science. We can study very complex quantum systems in a regime we’ve never had access to before.”

“Quantum algorithms have fundamental scaling laws on their side, as we’re seeing with RCS,” Neven said. “There are similar scaling advantages for many foundational computational tasks that are essential for AI. So quantum computation will be indispensable for collecting training data that’s inaccessible to classical machines, training and optimizing certain learning architectures, and modeling systems where quantum effects are important.”

The Google team is now approaching the third milestone in its six-step quantum roadmap towards an error-corrected quantum computer. Neven believes that commercial applications may be three to five years away, instead of multiple decades away. As is the case with a qubit’s actual value, it’s impossible to say for sure—but the Willow result shows that real progress is being made.
