IBM built the biggest, coolest quantum computer. Now comes the hard part
The world’s largest quantum computer system is quietly humming away, doing god knows what, in the middle of a bare-bones conference room, just off the lobby of a building in Westchester County, New York.
This is the Thomas J. Watson Research Lab, which gave birth to the modern laser, DRAM, the Mandelbrot set, the chess-champion Deep Blue, the Jeopardy!-winning Watson, and a host of technologies that helped astronauts land on the moon. Perched atop a hill in Yorktown Heights, about an hour’s drive from Manhattan, and shaped like a crescent from above, the building was designed in 1961 for IBM Research by Eero Saarinen, its curves echoing those of his now-shuttered TWA terminal at JFK. It still houses the company’s most cutting-edge research—things like AI and semiconductor design—and it might be the last of America’s major historical research labs still going. Even now, hanging above the staircase, in wood, is the old motto of the building’s namesake: Think.
Against all that history, the machine, called the Quantum System Two, was introduced last month not as a new chapter in the story of computing so much as a whole new book. The 22-foot-wide, 15-foot-high hexagonal platform of glass and polished aluminum, featuring three Quantum Heron processors, was designed to expand in modular fashion. (And designed is the operative word: A winner in Fast Company’s 2023 Innovation by Design Awards, the Quantum System Two is the work of an in-house team along with Map, Universal Design Studio, and Milan-based Goppion, which built the protective glass case for the Mona Lisa.) Its looks and ambition evoke the world-changing room-size mainframes of IBM in the ’60s and the sexy supercomputers of Cray in the ’80s. And if you squint: HAL and Skynet too.
At the same time, because it is sitting right next to the lobby, and because it looks so cool, and because what it’s trying to do is so outrageous, you might mistake it for a work of corporate art—an extravagant trade show display, or an industrial fridge in the style of the Cybertruck.
Jay Gambetta, who led the team that built the System Two, would like me to see past that facade. Inside—at some of the coldest temperatures in the universe, a fraction of a degree above absolute zero, and at the tiniest of scales—is what he calls the building block for a working, fault-tolerant quantum computer: a chip that lays the foundation for a room full of System Twos, a quantum supercomputer, another era.
“We’re really proud of it,” he says.
For Gambetta, 44, a soft-spoken Australian and the vice president of IBM Quantum, that pride is fueled by nearly two decades of painstaking, mind-boggling work. Quantum computing, first imagined in the early ’80s, is still incredibly hard to do and hard to scale, typically requiring massive cryogenic fridges and complex electronics. The machines are still plagued by noise and errors that can destroy reliable calculations; to really work, they’ll need to be bigger, and they’ll need systematic error correction. Some doubt a fully fledged, reliable quantum computer will happen anytime soon.
But according to IBM’s latest roadmap, an error-corrected machine is coming within a decade. About the problems that remain, Gambetta says, “I think of them as an engineering challenge, but I don’t see any blockers.”
Taking flight
The idea of a quantum computer is to replace the simple 1s and 0s of bits—the building blocks of everything digital, from your websites to your 90 Day Fiancé to the simulations behind new kinds of batteries and lifesaving drugs—with something far more powerful. By exploiting the weird principles of quantum mechanics, like superposition and entanglement, qubits allow for a mix of 0 and 1 . . . a cat that’s both alive and dead, or a coin mid-toss.
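In standard textbook notation—general quantum mechanics, not anything specific to IBM’s machines—that mix is a weighted superposition of both values:

```latex
% A single qubit state: alpha and beta are complex amplitudes.
% Measuring the qubit yields 0 with probability |alpha|^2
% and 1 with probability |beta|^2.
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
```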
It’s all hard to make heads or tails of; Einstein himself was bothered by quantum mechanics, with its “spooky action at a distance,” and insisted that God “does not play dice with the universe.” But the weirdnesses are real, and for some problems, the resulting speedups from quantum computations could make classical computers look like abacuses.
Whereas adding another bit to your classical computer grows your computing power in a linear fashion, each new qubit in a quantum system expands your computation space exponentially: Two qubits represent 4 possible values; three represent 8; four, 16; and so on. With enough qubits, an error-corrected quantum computer could handle calculations out of reach for the best classical computers. In April, Google (which is also spending billions on quantum computing) reported that its 70-qubit machine tackled a problem that would’ve taken a supercomputer nearly five decades—and solved it in seconds.
The neatest application may be the one theoretical physicist Richard Feynman imagined when he first proposed these computers in 1982: simulating the behavior of stuff at the quantum level. Classical computers can simulate a quantum system but struggle to do so above 50 particles, or qubits; around 100 and beyond, says Gambetta, “a lot of interesting [science] problems happen at that range.”
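To get a feel for why 50 particles is roughly where classical machines hit a wall, consider the memory needed just to store a quantum state’s full list of amplitudes. A back-of-the-envelope sketch in Python (16 bytes per amplitude is a standard double-precision complex number; the function name is ours):

```python
# Memory needed to hold the full state vector of an n-qubit system
# on a classical machine: 2^n complex amplitudes at 16 bytes each.
def state_vector_bytes(n_qubits: int) -> int:
    return (2 ** n_qubits) * 16

for n in (2, 3, 4, 30, 50):
    print(f"{n:>2} qubits -> {state_vector_bytes(n):.3e} bytes")
# 30 qubits already needs ~17 gigabytes; 50 qubits, ~18 petabytes.
```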
Running those simulations on a quantum computer could help scientists discover new drugs and fuels and batteries, or help unravel some of the universe’s thorniest mysteries. Their math could supercharge AI and crack the hardest problems—including the prime-factoring problem that protects all of our digital secrets.
Those and other promises have fueled a gold rush of quantum companies, which raised $2.35 billion last year, according to McKinsey, just slightly above the previous year’s total. Alongside billions in government funding—led by the U.S. and China—Google, Amazon, and Microsoft are also investing in research. Deep-pocketed companies have been exploring uses in oil and gas, chemicals, aviation, pharmaceuticals, and finance. Governments are pushing out new standards to secure data from encryption-breaking quantum machines.
Since IBM first put a quantum computer on the cloud in 2016 for anyone to start experimenting with, it has maintained a leading spot in the business and the science, and has laid out the clearest, most concrete technology roadmap in the field. Remarkably, it has been keeping many of its promises too.
Using a new benchmark that looks at the quality rather than the quantity of qubits, IBM says the new chip, called Heron, is its “most performant” quantum processor yet, three times better than its previous record-breaking chip, the 127-qubit Eagle, and even better than last year’s whopper, the 433-qubit Osprey. Alongside Heron, Gambetta’s team also released Condor, which, with 1,121 qubits, is the world’s largest quantum chip. “I like birds,” he says.
But after years of going bigger and bigger, Gambetta’s team is now focusing its efforts on smaller, higher-quality birds, and developing ways of linking them together into larger, parallel systems, like classical supercomputers. This will require new ways of connecting adjacent and distant qubits using both classical and quantum networking. (For comparison, the classical system used to train OpenAI’s GPT models relied on 285,000 CPU cores, 10,000 GPUs, and 400 gigabits per second of network connectivity for each GPU server.)
By 2033, according to its revised roadmap, IBM will link together multiple System Twos, forming a system capable of executing 1 billion gates across many thousands of qubits. A parade of ever-larger processor designs—Kookaburra, Cockatoo, and Starling—will culminate that year in a 2,000-qubit chip that Gambetta’s team calls Blue Jay.
“I didn’t propose it, but I’m gonna accept that name,” he says with a fading grin. “Let’s hope the device works.”
In the meantime, even while the machines are still “noisy,” they’re starting to prove more useful, beating classical computers on certain physics simulations, Gambetta says. In 2019, researchers at Google’s quantum lab claimed their computer could outperform classical machines, but only on a niche calculation without any practical use. In a paper in Nature last June, however, a team of physicists at IBM and UC Berkeley used a 127-qubit Eagle processor to beat a classical computer at approximating a property called the average magnetization, using a simulation that naturally maps onto a quantum computer.
Gambetta, who’s been working on building quantum computers since 2004, says the paper, along with other recent research that has used Eagle for problems with no a priori answers, shows that things are different now: “We’ve entered the era of utility.”
Noise machines
As crucial as the calculation was, Gambetta, an avid surfer, is really stoked about how the researchers got it: by winning a temporary battle against the errors that are the bane of all quantum engineers. The principles that make each qubit incredibly powerful also make it incredibly fragile, sensitive to the slightest noise—from the environment, the control electronics, even other qubits.
In this case, IBM used a set of new techniques for “error mitigation,” which seek to characterize the noise well enough to cancel out its effects—a bit like the way noise-canceling headphones work. With the help of the error-mitigation method, IBM achieved a record-size quantum circuit, at a scale of 124 qubits with 2,600 entangling gates.
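The flagship technique in IBM’s experiment was a form of zero-noise extrapolation: run the same circuit at deliberately amplified noise levels, then extrapolate the results back to zero noise. Here is a schematic sketch in Python—`run_circuit_with_noise_scale` and its exponential decay model are invented stand-ins for illustration, not IBM’s code:

```python
import numpy as np

# Schematic zero-noise extrapolation (ZNE): measure an observable at several
# amplified noise levels, fit the trend, and extrapolate to zero noise.
def run_circuit_with_noise_scale(scale: float) -> float:
    """Hypothetical stand-in for hardware: noise exponentially damps the signal."""
    true_value = 0.80                       # what a noiseless machine would return
    return true_value * np.exp(-0.3 * scale)

scales = np.array([1.0, 1.5, 2.0, 3.0])    # 1.0 = the circuit's native noise level
values = np.array([run_circuit_with_noise_scale(s) for s in scales])

# Fit log(value) against scale, then evaluate the fit at scale = 0.
slope, intercept = np.polyfit(scales, np.log(values), 1)
print(f"mitigated estimate: {np.exp(intercept):.3f} (true value 0.800)")
```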
The performance of Eagle on the calculation represented “a very big mind shift for a community or even the larger public, that perhaps only perceived that useful quantum computation can only happen when you have this big error-corrected thing,” said Abhinav Kandala, an experimental physicist at IBM Quantum, and the paper’s lead author.
The goal now is to build more high-quality entangling gates between qubits—the logical operations the computer runs. The number of gates, or the depth of the circuit, determines how sophisticated an algorithm the machine can execute accurately. After years of adding more qubits, says Gambetta, “it’s time, now that we’re in this utility phase, to focus on how we add more gates.”
Gambetta says IBM will soon have a system capable of running 100 qubits and 3,000 gates, which should open up more utility. By the end of 2024, IBM is aiming for 5,000 gates. That number increases steadily each year until 2029, when it reaches 15,000 gates using 200 qubits. “That’ll be another era,” Gambetta says, “when we implement error correction.”
But error mitigation is only a prelude to a more difficult technique: full-on quantum error correction, or QEC. Even the best-made qubit in the world will need to be error corrected, which will require building many high-quality redundant physical qubits—possibly thousands or more—to make a single “logical” qubit.
Building qubits that are good enough simply to do QEC has long been a goal for quantum computer engineers. So far, superconducting qubit chips like IBM’s and Google’s make an error roughly once every 100 to 1,000 operations; the goal is to shrink that rate down to more like one in a million, the point at which error-correction techniques start to become feasible.
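A toy example shows why driving down the physical error rate matters so much. Take the simplest possible code—three physical bits voting on one logical bit, purely classical and far cruder than anything IBM runs—and watch the logical error rate collapse as the physical rate falls:

```python
# Three-bit repetition code: a logical error requires at least two of the
# three physical bits to flip, so the logical rate is 3p^2(1-p) + p^3.
# Real quantum error correction is far subtler (it must catch errors
# without directly measuring the data qubits), but the payoff of
# redundancy scales the same way in spirit.
def logical_error_rate(p: float) -> float:
    return 3 * p**2 * (1 - p) + p**3

for p in (0.1, 0.01, 0.001):
    print(f"physical error {p}: logical error {logical_error_rate(p):.1e}")
# 0.1 -> 2.8e-02, 0.01 -> 3.0e-04, 0.001 -> 3.0e-06
```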
As IBM tries to squeeze more usefulness out of the current set of noisy processors and build better qubits, it’s also been working on ways to reduce the number of extra qubits needed for error correction. An error-correcting code—the best known is the surface code—describes how the redundant physical qubits on a chip are arranged on a grid to create one working logical qubit, and IBM has been testing variations adapted to the intricate hexagonal geometries of its chip designs.
In August, IBM shared research describing a new approach it calls the Gross code, named for the mathematical unit equal to 12 dozen, or 144. Instead of a single grid layer, it works by coupling together qubits that aren’t direct neighbors across two parallel grids, which means far fewer physical qubits are needed per logical qubit. Surface codes used today can require some 4,000 physical qubits to host 12 logical qubits; the Gross code, says IBM, could encode the same dozen using only 288 physical qubits.
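Taking those figures at face value, the savings work out to roughly a factor of 14 in overhead (a quick check, assuming both codes host a dozen logical qubits):

```python
# Physical-qubit overhead per logical qubit, using the figures above.
surface = 4000 / 12   # ~333 physical qubits per logical qubit
gross = 288 / 12      # 24 physical qubits per logical qubit
print(f"surface: ~{surface:.0f} per logical qubit, "
      f"gross: {gross:.0f} per logical qubit, "
      f"~{surface / gross:.0f}x fewer")
```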
Still, error correction is only part of a symphony of tricks needed to run and measure a reliable quantum computation in the face of a cascade of errors. All of it—control, readout, decoding, mitigation, correction—must be performed at rapid speed, before the qubits decohere or the whole system is overwhelmed by errors. Think of it like a complex caper at the tiniest of scales in deep-space temperatures: a stunt-filled Mission: Impossible heist in supercomputer form.
All the challenges involved, especially as the machines get larger, have fueled persistent doubts. But with Heron and other ongoing research, Gambetta now sees more light at the end of the tunnel, a way to get past the era of “noisy” error-prone machines.
“We’re saying pretty, pretty, pretty strongly that we have a path to error correction now,” says Gambetta. And with Heron, IBM has found a basic design and process that will take it all the way. “The understanding of materials and the design of the qubits, I’m not gonna say it’s a solved problem—but it’s basically solved.”
How to make a qubit
The physics of these chips is complicated as hell, and the engineering reflects that. To manage and read each qubit, IBM’s machines use microwave pulses. In System Two, this means a number of wires, connections, and classical devices for each qubit; and that means thousands of gold-plated microwave cables snaking down from the top of the fridge through a series of concentric plates until they reach the processor at the very bottom. (There are three Herons inside System Two.)
The result of all this is the “quantum chandelier”: the giant, teetering upside-down steampunk ziggurat, which has become a staple of quantum computer design. The chandelier is kept in a vacuum inside a giant fridge at some of the coldest temperatures in the universe.
IBM’s and Google’s chips use what’s known as a transmon qubit, a tiny superconducting circuit that can be made using existing microchip technology. Superconductors operate with no electrical resistance, provided they are kept cold enough—in this case, roughly 20 millikelvin, just above -273 degrees Celsius. (This is one reason why a room-temperature superconductor would be such a big deal.)
Others are making progress with different approaches. Shortly after IBM announced Heron and System Two, a lab at Harvard—working with Boston-based QuEra, MIT, and a joint program between the National Institute of Standards and Technology and the University of Maryland—announced a breakthrough: It managed to turn 280 physical qubits into as many as 48 logical qubits, more than 10 times as many logical qubits as had ever been demonstrated. Scott Aaronson, director of the University of Texas at Austin’s Quantum Information Center, wrote on his blog last month that if it stands, “it’s plausibly the top experimental quantum computing advance of 2023.”
Unlike IBM’s, QuEra’s qubits are made of rubidium atoms, and its system uses laser beams, not multiple wires, to control each qubit; others are building qubits with trapped ions, neutral atoms, quantum dots, photonics—in theory, anything that behaves like an atom.
“As a scientist, I think we totally support looking at alternative qubits,” says Gambetta, who began his career working with photonics-based qubits. “And if great ideas come, we’ll incorporate them.”
IBM, Google, D-Wave, and others have also explored an alternative qubit to the transmon called fluxonium, which boasts the longest coherence times of any superconducting qubit: 1.48 milliseconds, according to a team at the University of Maryland, with fidelity rates of around 99.991%. “Fluxonium has some attractive properties,” Gambetta says.
Seeqc, a Westchester County-based startup involved in the fluxonium research, designs quantum computer control devices using a superconducting technology that consumes less energy and sits directly next to the qubits. This offers speedups and other efficiencies over semiconductor-based devices, and could make it much easier to scale up qubits, says cofounder and CEO John Levy. Now, he says the big question IBM and every other quantum company needs to address is, “How do we build an architecture that addresses that scalability to begin with?”
The biggest chip and the biggest fridge
For years, the goal for companies trying to scale up IBM’s kind of quantum computer was to squeeze ever more qubits together on chips. That quest culminated last month when Gambetta’s team unveiled Condor, the world’s largest quantum computer chip yet, with 1,121 qubits. Atom Computing, a California startup, beat IBM to the 1,000-qubit mark in October, with even more: 1,225 qubits. But its computer is made of trapped neutral atoms, not tiny superconducting circuits, which is why IBM calls Condor the world’s largest quantum processor.
Matthias Steffen, IBM’s chief quantum architect, says it demonstrated IBM’s ability to produce even smaller qubits and consistently fabricate large numbers of them. And, he says, “it successfully pushed the limits of scaling a single processor, as well as the ability to cool and measure a large processor in a single refrigerator.”
Because the chip would need to be cooled down to near absolute zero, IBM also built the world’s largest fridge. (System Two’s fridge, made by the Finnish company Bluefors, is also one of the largest cryogenic fridges of its kind.) When it was finally turned on and cooled down in late 2022, IBM’s fridge—called Goldeneye—brought a volume larger than three home kitchen refrigerators to temperatures colder than outer space.
At the time, says Steffen, “it was the largest dilution refrigerator built.”
On the outside, it looked cool. But this is the quantum world. Zoom in and things get tricky.
For Gambetta and Steffen and everyone else, the giant chip and fridge were stark proof of the hard limits involved in making a quantum system bigger. Each additional qubit requires additional cables and classical devices, which bring more noise and errors. And apart from the quantum effects, there are sheer physical limits to how many qubits and other components you can cram together. (Semiconductor chips have their own version of this: Moore’s Law says the number of transistors on a chip doubles about every two years, but eventually there is a physical limit to how many you can squeeze together before quantum effects start ruining the computation.)
Just look at the size of the Condor. The chip where the qubits sit is about the size of two large postage stamps, and it’s dense—packing in those 1,121 qubits using a honeycomb design across a record five layers of superconducting metal. But the rest of the QPU, the plate surrounding the processor—what Gambetta calls the motherboard, where the wiring comes in—is giant, stretching to the size of a vinyl album cover. Pull out all the wiring that connects to it, and it would extend for two miles.
“Shrinking [the qubits] much more and not dealing with this problem,” the cables and devices for control and readout, “and you’re not actually solving anything,” he says.
Simply turning the temperature down to near absolute zero and back up can kill your machine for days. Connectors can get loose; tiny things contract and expand. Consider just the 433 qubits of the Osprey chip, released last year. “If you now have five connectors per coax [cable] times 433 connectors, the probability that one of the connectors is loose goes to one,” says Steffen. And each needs to be reconnected. “My fingers aren’t small enough.”
A matter of time
Instead of building ever-bigger chips, IBM engineers plan to link together adjacent and eventually distant qubits on multiple chips, with the help of a feature called a tunable coupler. Previously used by other efforts, including Google’s, this is an adjustable device that sits between two qubits and lets engineers switch their interaction on and off, enabling operations with fewer errors and less crosstalk. Over four years, a team studied errors and practiced with the coupler on small, 6- to 10-qubit devices before integrating it into the new Heron. One of the biggest challenges turned out to be figuring out how to turn the coupler fully off. “That sounds so trivial, but a lot of work went into that,” says Steffen.
The result is that the Heron’s error rates and gate speeds beat Eagle’s by a factor of four to five, says Gambetta. According to a paper posted last year, its 2-qubit gate fidelities are up to 99.68%. “Crosstalk is virtually not measurable,” he says.
Engineers are still trying to better understand a zoo of errors, including a mysterious class of errors related to an unknown microscopic defect. The hope is to reach 99.99%, considered the holy grail necessary for error correction.
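A rough calculation shows why those last decimal places are the whole game: if gate errors were independent, the chance of an entire circuit running error-free would shrink exponentially with gate count. (This ignores error mitigation and correction entirely; the gate counts echo IBM’s roadmap figures.)

```python
# Chance of an error-free run, assuming independent gates: fidelity ** n.
for fidelity in (0.9968, 0.9999):        # Heron today vs. the "holy grail"
    for n_gates in (100, 3000):
        p_clean = fidelity ** n_gates
        print(f"fidelity {fidelity}, {n_gates} gates: {p_clean:.1%} error-free")
# At 3,000 gates, 99.68% fidelity gives ~0.007%; 99.99% gives ~74%.
```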
To further lower the error rates—and to control and read qubits and correct errors—engineers also need to build longer-living qubits. The longer a qubit’s coherence time, the longer it holds its quantum state, and the more time there is to carry out complex calculations before errors set in. For years, these coherence times have been stuck in the tens of microseconds, which is not long enough. With its latest test qubits, IBM has reached coherence times of 1 millisecond and more. “If you convert this to a predicted error per gate, given the gate time and so forth, we’d be in the about 10 to the minus-4 range,” says Steffen, referring to that holy grail. “We need to get those [qubits] into our large devices.”
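Steffen’s conversion, sketched roughly (the 100-nanosecond gate time is a typical figure for superconducting processors, our assumption rather than a number from IBM):

```python
# A gate's error floor is roughly its duration as a fraction of the
# qubit's coherence time.
gate_time = 100e-9   # ~100 ns two-qubit gate, an assumed typical value
coherence = 1e-3     # the ~1 millisecond IBM reports for its test qubits
print(f"predicted error per gate ~ {gate_time / coherence:.0e}")  # 1e-04
```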
Just as tiny defects can cause problems in your qubits, other tiny physical tweaks can bring precious efficiencies. A new kind of flexible ribbon cable—like a very expensive version of the kind inside a laptop or a microwave oven—allows for almost twice as many connections to the chip. A multilevel wiring technique developed for the Eagle keeps wires and other components needed for readout and control on separate layers, allowing for more qubits and supplemental devices. Similar innovations in geometry, first used on the 433-qubit Osprey, allow for greater qubit density while reducing crosstalk.
Qubits are only a piece of the puzzle. Bigger, better systems will depend upon the performance of the classical devices surrounding the processor. Getting the microwave engineering right, including the electronics that send and receive microwave signals to and from the qubits, says Steffen, is “one of the big challenges.”
For System Two, IBM also upgraded the control and readout devices that sit in the nearby racks, which are built with blazing-fast field-programmable gate array (FPGA) chips. Eventually, the team plans to swap these out completely in favor of application-specific integrated circuit (ASIC) chips and cryo-CMOS controllers, which can run at roughly 4 kelvin—inside the fridge, far closer to the millikelvin stage where the qubits sit. That proximity yields speedups. And because the newer chips are smaller and use far less energy than the FPGA chips, they generate less noise and make scaling a bit easier. With the new control devices, Gambetta says, System Two’s energy use is expected to go from about 100 watts per qubit to about 10 milliwatts per qubit, allowing for even bigger systems.
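Taken at face value, that drop is a factor of 10,000 per qubit—at scale, the difference between a power plant and a wall outlet. A quick sketch (the 100,000-qubit machine is hypothetical, chosen only to make the contrast concrete):

```python
# Control-electronics power budget implied by the per-qubit figures.
fpga_watts = 100.0    # ~100 W per qubit with today's FPGA controllers
asic_watts = 0.010    # ~10 mW per qubit with ASIC/cryo-CMOS controllers
n_qubits = 100_000    # hypothetical future machine
print(f"reduction: {fpga_watts / asic_watts:,.0f}x")
print(f"{n_qubits:,} qubits: {n_qubits * fpga_watts / 1e6:.0f} MW (FPGA) "
      f"vs {n_qubits * asic_watts / 1e3:.0f} kW (ASIC)")
```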
To scale its chips to match its new error-correction approach, Gambetta’s team must figure out how to construct the qubits on future chips, with each qubit linked to a number of others. This will require a series of new couplers to link distant qubits, or qubits on other chips—L couplers, C couplers, M couplers—as well as high-speed classical and quantum networking. And his team will need to ensure that the whole system operates in concert and at unthinkable speeds.
Standing behind System Two, Gambetta describes this strict orchestration—from the gates to the readout to the error correction to the cloud software—gesturing from its frigid center to the nearby cabinets of control electronics and the cloud servers beyond.
“It’s a timescale,” he says. “This needs to be nanoseconds, this needs to be microseconds, and then cloud could be milliseconds.”
Steffen, smiling, is stuck on another timescale. He remembers cooling down his first clunky cryogenic fridge more than two decades ago.
“If you asked me in my first year in the postdoc, if you’ll have a fridge like this, with qubits that have the coherence times that we do, that have the performance that we do, I would have said that’s absurd.”
“But basically, in some sense, I’m the living proof of, Never say ‘this is impossible.’”
Correction: An earlier version of this article said IBM would achieve 16,632 qubits by 2033; IBM says the actual final qubit count may change depending on subsequent breakthroughs in hardware and code implementation.