
Quantum error correction just crossed a line (here's why it matters)

After decades of theory, quantum error correction started working in practice between late 2024 and early 2026. Here's what actually happened, who did it, and what it means for the timeline.

Tags: error correction, fundamentals, evaluation, Google, Quantinuum, Microsoft, IBM, AWS, QuEra

If you have been following quantum computing for a while, you have probably heard about error correction more than you wanted to. For years it was the thing everyone said was necessary but nobody had demonstrated convincingly. That changed.

Between late 2024 and early 2026, at least six separate teams crossed milestones that moved quantum error correction from theoretical requirement to experimental fact. Not finished. Not solved. But proven to work in the way the theory always said it should.

That matters more than another qubit count headline. Here is what actually happened.

Why error correction is the bottleneck

Quantum computers make mistakes. A lot of them.

A typical physical qubit in 2026 has an error rate somewhere between one in a hundred and one in ten thousand, depending on the hardware. That sounds tolerable until you realize that useful quantum algorithms need thousands or millions of operations chained together. At those depths, errors accumulate and destroy the computation long before it finishes.
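A quick back-of-envelope calculation makes the problem concrete. Assuming errors are independent (a simplification real hardware does not fully obey), the chance that a circuit finishes without a single error decays exponentially with its length:

```python
# Probability that a circuit of n_ops operations completes with no errors,
# given a per-operation error rate p. Assumes independent errors.

def success_probability(p: float, n_ops: int) -> float:
    """Chance that all n_ops operations succeed when each fails with probability p."""
    return (1 - p) ** n_ops

# A decent physical qubit today: roughly one error per thousand operations.
p = 1e-3
for n in (100, 1_000, 10_000, 1_000_000):
    print(f"{n:>9,} ops -> {success_probability(p, n):.2%} chance of an error-free run")
```

At a thousand operations you are already down to about a one-in-three chance of a clean run; at a million, effectively zero. That is why uncorrected physical qubits cannot run deep algorithms, no matter how many of them you have.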

Classical computers have the same problem in principle. In practice, classical bits are so reliable that simple redundancy schemes handle it invisibly. Quantum error correction is harder because of a fundamental constraint: you cannot copy a qubit. The no-cloning theorem forbids it. So quantum error correction uses a different trick: spreading one logical qubit across many physical qubits in a way that lets you detect and correct errors without ever looking directly at the quantum information.

The theory has existed since the mid-1990s. Peter Shor and others showed that if the physical error rate is below a certain threshold, you can always add more physical qubits to make the logical error rate as low as you need. Above that threshold, adding more qubits just adds more noise.

For roughly 30 years, the question was: can real hardware get below the threshold, and does error correction actually improve things when you scale up?

Now we know the answer is yes. Here is the evidence.

Google Willow: the threshold crossed

In December 2024, Google’s quantum team published results from Willow, a 105-qubit superconducting processor. The headline result was not a qubit count. It was a demonstration that quantum error correction works the way theory predicts.

The team built surface codes at three sizes: distance-3, distance-5, and distance-7. Each step up uses more physical qubits to protect fewer logical qubits. The critical question was whether each step up would actually reduce the logical error rate.

It did. Google reported that the logical error rate roughly halved with each increase in code distance. That is the threshold in action. Below the threshold, bigger codes mean better protection. Above it, bigger codes mean more noise. Google’s hardware was below the threshold.
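The scaling behavior can be sketched with a toy model. The numbers below are illustrative, not Willow's published figures; the only real input is the idea from the result above, that each step up in code distance divides the logical error rate by a roughly constant factor (call it Lambda, close to 2 in Google's report):

```python
# Toy model of below-threshold scaling: each increase in code distance
# (d -> d + 2) divides the logical error rate by a factor Lambda.
# The d=3 starting rate here is an assumption for illustration.

def logical_error_rate(p_d3: float, lam: float, distance: int) -> float:
    """Logical error rate at odd code distance d, extrapolated from the d=3 rate."""
    steps = (distance - 3) // 2          # number of d -> d+2 increases
    return p_d3 / (lam ** steps)

p_d3 = 3e-3   # assumed distance-3 logical error rate per cycle (illustrative)
lam = 2.0     # suppression factor per distance step (roughly what Willow showed)

for d in (3, 5, 7, 11, 15, 21):
    print(f"d = {d:>2}: logical error rate ~ {logical_error_rate(p_d3, lam, d):.1e}")
```

The point of the exponential form is that below threshold, modest increases in code size buy large improvements in reliability. Above threshold, Lambda would be less than 1 and the same formula would make things worse.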

This was the first convincing experimental demonstration of below-threshold quantum error correction. Others had shown pieces of the puzzle before, but Willow was the first to show the full scaling behavior that the theory demands.

The caveat: this was demonstrated on a specific error correction benchmark, not on a useful computation. And the logical error rates achieved, while improving with scale, are still far too high for the algorithms that would transform industries. But the principle is established. That is what matters.

Microsoft Majorana 1: a different bet

In February 2025, Microsoft took a different path entirely. They announced Majorana 1, a chip based on topological qubits rather than the superconducting or trapped-ion designs used by most competitors.

The idea behind topological qubits is that certain exotic quantum states are inherently resistant to local noise because the quantum information is stored in global properties of the system rather than local ones. If it works at scale, topological error correction could require far fewer physical qubits per logical qubit than surface codes.

Microsoft had spent over a decade pursuing this approach with limited public results. The Majorana 1 announcement was significant because it showed the company had reached the point of fabricating and testing a chip, not just publishing theory.

The honest assessment: Majorana 1 is early. Microsoft has not published the kind of detailed error rate benchmarks that Google or Quantinuum have. The topological approach is higher-risk, higher-reward. If it works, it could dramatically simplify the path to fault tolerance. If the engineering challenges prove harder than expected, Microsoft will be years behind competitors who chose more conventional architectures.

The field is watching closely, but the jury is still out.

AWS Ocelot: a radically different approach

Also in February 2025, Amazon Web Services announced Ocelot, their first quantum chip. Instead of trying to push conventional qubit error rates lower, AWS took a different approach entirely: cat qubits.

Cat qubits (named after Schrödinger’s thought experiment) are designed so that one type of error — bit-flip errors — is suppressed by the physics of the qubit itself, not by an external correction code. That means the error correction system only needs to handle the remaining error type (phase flips), which cuts the overhead dramatically. AWS claims Ocelot can reduce error correction costs by up to 90% compared to standard approaches.

The chip is small: nine qubits on a chip about one centimeter square, cryogenically cooled. It is a prototype, not a product. But the approach matters because the path to useful error correction is not just about better qubits. It is also about needing fewer of them per logical qubit. If cat qubits deliver on their promise, a fault-tolerant quantum computer might need an order of magnitude fewer physical qubits than surface-code-based designs.
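A rough sense of why handling only one error type helps: a standard distance-d surface code costs about 2d² − 1 physical qubits per logical qubit, while a repetition code that only needs to catch one error type costs about 2d − 1. The comparison below is schematic and ignores the extra hardware and control complexity cat qubits themselves require:

```python
# Schematic overhead comparison: a full distance-d surface code vs. a
# distance-d repetition code that only corrects one error type.
# Ignores routing, magic-state factories, and other real-world costs.

def surface_code_qubits(d: int) -> int:
    return 2 * d * d - 1      # d^2 data qubits plus d^2 - 1 measure qubits

def repetition_code_qubits(d: int) -> int:
    return 2 * d - 1          # d data qubits plus d - 1 ancillas

for d in (5, 11, 21):
    s, r = surface_code_qubits(d), repetition_code_qubits(d)
    print(f"d = {d:>2}: surface code {s:>4} qubits, repetition code {r:>3} ({s // r}x fewer)")
```

The gap grows linearly with distance, which is where claims like "an order of magnitude fewer physical qubits" come from. Whether the full system-level savings materialize is exactly what prototypes like Ocelot are meant to test.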

QuEra: neutral atoms enter the race

The error correction story is not limited to superconducting and trapped-ion hardware. QuEra Computing, using neutral-atom arrays, demonstrated below-threshold error correction with up to 96 logical qubits in 2025, and showed that their 3,000-qubit array could run continuously for over two hours.

Neutral atoms bring a distinctive advantage: they can be rearranged dynamically during computation, which makes certain error correction schemes more efficient. Working with Harvard and Yale, QuEra demonstrated what they call “Transversal Algorithmic Fault Tolerance,” claiming algorithms execute 10-100x faster with error correction than previous approaches.

Those are strong claims that need independent verification and careful comparison to other platforms. But they add another data point to the central theme: error correction is working across multiple, fundamentally different hardware architectures.

Quantinuum: logical qubits you can count

While Google proved the threshold and Microsoft bet on topology, Quantinuum took a third approach. They used their trapped-ion hardware to demonstrate high-fidelity logical qubits and then started scaling the count.

In March 2026, Quantinuum announced 94 logical qubits on their H-series processor. That number is notable not because more is automatically better, but because it shows that trapped-ion systems can maintain the coherence and gate fidelity needed for error correction while operating at a scale that starts to be relevant.

Trapped ions have a natural advantage here: their physical error rates are among the lowest in the industry, and they offer all-to-all qubit connectivity without the routing overhead that plagues superconducting architectures. The tradeoff is speed. Ion traps are slower per operation than superconducting qubits, so the wall-clock time for a computation can be longer even if the error rate per operation is lower.

Quantinuum’s results are important because they demonstrate that error correction is not just a Google result. Multiple hardware platforms are reaching the point where logical qubits outperform physical ones. That makes the trend more credible than any single announcement.

IBM: the modular roadmap

IBM’s approach to error correction is less about a single breakthrough and more about a systematic engineering plan.

Their current hardware, including the Heron processor family, is designed around a modular architecture. The idea is to connect multiple quantum processors together with classical and quantum communication links, building larger effective systems without requiring a single monolithic chip to do everything.

On the error correction front, IBM’s Quantum Loon demonstration showed all the hardware elements of fault-tolerant quantum computing working together, including real-time error decoding in under 480 nanoseconds using qLDPC (quantum low-density parity-check) codes. That decoding speed — a 10x improvement over the previous leading approach — matters because error correction only works if you can detect and correct errors faster than new ones appear. IBM said they achieved this a year ahead of their own roadmap.

Their quantum-HPC integration strategy embeds quantum error correction into a broader classical computing workflow. The bet is that quantum processors will not operate in isolation. They will be specialized accelerators inside hybrid systems, and error correction will be managed partly by classical infrastructure.

IBM has published a roadmap targeting a first large-scale fault-tolerant system called Starling by 2029: roughly 200 logical qubits from about 10,000 physical qubits, capable of 100-million-gate circuits. Beyond that, they are targeting 100,000 qubits by 2033. Roadmaps are not results, but IBM’s track record of delivering hardware on their published timelines has been better than most.

What “early fault tolerance” actually means

You will hear the phrase “early fault tolerance” more and more through 2026. It is worth understanding what it means and what it does not.

Early fault tolerance means: hardware that can run small error-corrected computations with logical qubits that outperform physical qubits on specific tasks. It does not mean: hardware that can run arbitrary quantum algorithms with negligible error.

The gap between those two definitions is enormous. Current demonstrations use error correction codes that protect a few logical qubits with relatively short circuit depths. The algorithms that would transform drug discovery, materials science, or cryptanalysis need thousands of logical qubits running circuits millions of operations deep.
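The size of that gap can be sketched with the same surface-code bookkeeping used above: roughly 2d² − 1 physical qubits per distance-d logical qubit. Every number below is a back-of-envelope assumption, and the estimate ignores routing, magic-state distillation, and other real overheads, which add substantially more:

```python
# Back-of-envelope for the qubit gap: physical qubits needed for a target
# logical qubit count, assuming ~2d^2 - 1 physical qubits per distance-d
# logical qubit. Distances chosen are illustrative assumptions.

def physical_qubits(logical_qubits: int, distance: int) -> int:
    return logical_qubits * (2 * distance**2 - 1)

for n_logical, d in [(100, 15), (1_000, 21), (4_000, 27)]:
    print(f"{n_logical:>5} logical qubits at d = {d}: ~{physical_qubits(n_logical, d):,} physical qubits")
```

Even this optimistic accounting puts transformative applications in the hundreds of thousands to millions of physical qubits, against chips that today have around a hundred.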

Think of it this way. The Wright brothers proved that heavier-than-air flight was possible in 1903. Commercial aviation did not arrive until decades later. We are somewhere between the Wright Flyer and the DC-3. The principle works. The engineering is underway. The timeline to “useful for passengers” is still measured in years, not months.

What changes because of these results

Three things are different now compared to two years ago.

1. The theoretical question is settled. Quantum error correction works in practice, not just on paper. Below-threshold behavior has been demonstrated on real hardware. This removes the single largest source of existential doubt about quantum computing. The remaining questions are engineering questions, not physics questions.

2. Multiple platforms work. Superconducting qubits (Google, IBM), trapped ions (Quantinuum), neutral atoms (QuEra), cat qubits (AWS), and potentially topological qubits (Microsoft) can all support error correction. That hardware diversity makes the field more robust. If one approach hits a wall, others can continue.

3. The timeline conversation has shifted. Before these results, reasonable people could argue that practical error correction might never work, or might require decades of further physics research. That argument is much harder to make now. The debate has moved from “will it work?” to “how fast can we scale it?” Those are different conversations with different implications for investment, policy, and talent.

What has not changed

It would be easy to read these milestones and conclude that fault-tolerant quantum computing is around the corner. It is not.

The qubit gap is still massive. Current demonstrations use tens to hundreds of logical qubits. Transformative applications need thousands. Bridging that gap requires continued improvement in physical error rates, qubit counts, connectivity, and control electronics, all at the same time.

Clock speed matters. Error correction adds overhead. Every logical operation requires many physical operations. Even if the logical error rate is low enough, the wall-clock time for a useful computation could be prohibitive on current hardware. This is especially true for trapped-ion systems.

Cost is an open question. Running error-corrected computations requires dramatically more physical qubits than uncorrected ones. That means more hardware, more cooling, more control infrastructure, and more cost. Whether the results justify the cost for specific applications is an economic question that has not been answered yet.

Nobody has run a useful error-corrected algorithm. The demonstrations so far are of error correction itself, not of error-corrected algorithms solving real problems. That next step — running a meaningful computation that benefits from error correction and could not be done classically — has not happened yet.

The timeline (honest version)

Now (2026): Error correction works in principle. Logical qubits outperform physical qubits on specific benchmarks. No practical advantage from error-corrected computation yet.

Near-term (2027-2029): Expect the first demonstrations of small error-corrected algorithms that match or modestly exceed what classical computers can do for narrow tasks. Likely in quantum chemistry simulation or specific optimization problems.

Medium-term (2030-2033): If current scaling trends hold, systems with hundreds to low thousands of logical qubits become available. This is the window where the first genuinely useful fault-tolerant applications become plausible, not guaranteed.

Long-term (2034+): Large-scale fault-tolerant quantum computing for transformative applications in drug discovery, materials science, cryptanalysis, and optimization. This is where the biggest claims about quantum computing live. Getting here requires sustained progress with no fundamental roadblocks.

The caveat that always applies: these timelines assume continued exponential improvement in hardware quality and scale. History suggests that engineering scaling curves can plateau, encounter unexpected obstacles, or accelerate unpredictably. Anyone offering precise dates is guessing. Including us.

Why you should care now

If quantum error correction were still an open theoretical question, it would be reasonable to wait and watch. It is not. The question has moved to engineering, scaling, and cost — the same questions that govern every other computing technology.

That does not mean you need to buy quantum hardware or rewrite your software stack. It means the smart bet is to understand what is happening, track which problems are likely to benefit first, and build enough technical literacy to evaluate claims as they come.

If you are new to quantum computing, start with what a qubit actually is and why quantum computers are not just faster classical ones. If you want to evaluate quantum claims more critically, see our guide to benchmarking quantum computers.

The error correction turning point does not mean quantum computers are useful yet. It means the path to useful is no longer blocked by an unproven assumption. That is a meaningful difference.

Sources and further reading

  • Google Quantum AI, “Quantum error correction below the surface code threshold,” Nature (December 2024)
  • Microsoft, “Majorana 1: Microsoft’s first quantum processor powered by topological qubits” (February 2025)
  • Amazon Web Services, “Amazon announces Ocelot quantum chip” (February 2025)
  • QuEra Computing, “Record 2025 as the year of fault tolerance” (2025)
  • Quantinuum, “94 logical qubits on the H-series processor” — see our detailed coverage
  • IBM Quantum Loon demonstration and updated roadmap — see our HPC integration analysis
  • Riverlane, “Quantum error correction: 2025 trends and 2026 predictions” (2026)
  • For surface code fundamentals, see our explainer: The surface code in 15 minutes