
Quantinuum Squeezes 94 Logical Qubits from 98 Physical Qubits - And They Work Better

Researchers demonstrate nearly 1:1 encoding efficiency with logical qubits that outperform physical qubits, marking a critical milestone toward practical fault-tolerant quantum computing.


Quantinuum researchers have demonstrated quantum computations using up to 94 error-protected logical qubits encoded in just 98 physical qubits on their Helios trapped-ion processor. More importantly, these logical qubits achieved 99.94% fidelity, compared to 99.68% for the underlying physical qubits.

This is the first time logical qubits have outperformed physical qubits at this scale. It’s a critical milestone because it shows that error correction is no longer just adding overhead - it’s actually making quantum computers better.

What They Actually Did

The team used a family of codes called “iceberg codes” - named because most of the logical qubits sit beneath a small error-checking layer, like an iceberg beneath the waterline. The encoding is remarkably efficient:

  • 94 logical qubits from 98 physical qubits using error-detection codes - a nearly 1:1 encoding ratio
  • 48 logical qubits using two-level concatenated error-correction codes

Compare this to traditional error correction approaches, where each logical qubit might require dozens or hundreds of physical qubits. Iceberg codes achieve this efficiency by protecting many qubits with just a small number of additional checking qubits - in the simplest case, two extra qubits monitor an entire system.
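
For a rough sense of the difference, here's a back-of-the-envelope comparison in Python. The surface-code cost below is a textbook approximation, not a figure from the paper, and the gap only widens as the code distance grows:

```python
# Rough overhead comparison: an iceberg-style detection code versus a
# distance-d surface code. The surface-code cost is a textbook
# approximation (~2*d^2 physical qubits per logical), not from the paper.

def iceberg_physical(k: int) -> int:
    """[[k+2, k, 2]] iceberg code: k logical qubits need k + 2 physical."""
    # Note: the paper reports 98 physical qubits for 94 logical; the extra
    # beyond k + 2 likely covers check/ancilla qubits (an assumption here).
    return k + 2

def surface_physical(k: int, d: int = 5) -> int:
    """Approximate cost of k surface-code logical qubits at distance d."""
    return k * 2 * d * d

k = 94
print(f"iceberg:       {iceberg_physical(k):5d} physical qubits for {k} logical")
print(f"surface (d=5): {surface_physical(k):5d} physical qubits for {k} logical")
```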

Why “Beyond Break-Even” Matters

For years, quantum error correction made things worse. Adding error correction meant more operations, more qubits, more complexity - and ultimately more errors. Researchers called the point where error correction starts helping instead of hurting “break-even.”

This work crosses that threshold decisively:

Logical gate error rates: ~1 error in 10,000 operations
Physical gate error rates: significantly higher - the 99.68% physical fidelity quoted above works out to roughly 32 errors in 10,000 operations, and logical operations showed "an order of magnitude" improvement in some tests

The team tested logical gates using cycle benchmarking, which repeatedly applies gate sequences and tracks error accumulation. The encoded operations consistently beat the raw hardware.
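
Here's a toy Monte Carlo sketch of that idea, with a made-up per-cycle error rate standing in for the paper's data:

```python
import numpy as np

# Toy version of the cycle-benchmarking idea, not the paper's exact protocol:
# apply m identical cycles, each failing independently with probability p,
# then fit the exponential survival decay to recover the per-cycle error.

rng = np.random.default_rng(0)
p_true = 1e-4                              # hypothetical per-cycle error rate
depths = np.array([100, 300, 1000, 3000])  # number of cycles per experiment
shots = 20_000

# Simulated survival probability at each depth.
survival = np.array(
    [rng.binomial(shots, (1 - p_true) ** m) / shots for m in depths]
)

# log(survival) = m * log(1 - p), so a linear fit recovers the error rate.
slope, _ = np.polyfit(depths, np.log(survival), 1)
print(f"estimated per-cycle error: {1 - np.exp(slope):.2e}")   # ~1e-4
```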

The XY Model Test: 64 Logical Qubits Simulating Quantum Magnetism

To demonstrate real computational work, the researchers simulated a 3D quantum magnetic system described by the XY model - a mathematical description of interacting quantum spins found in condensed-matter physics.

They encoded 64 logical qubits into the system and ran the simulation using mirror benchmarking (run the circuit forward and backward - if you return to the starting state, the computation was accurate).

Result: Encoding the system with iceberg codes reduced the effective two-qubit gate error rate by roughly 30% compared to unencoded circuits.
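
A minimal statevector sketch of the mirror idea, with an invented noise model rather than anything from the experiment, looks like this:

```python
import numpy as np

# Toy mirror benchmark on a tiny statevector, illustrating the idea only:
# apply a random circuit U, inject some noise, run U-dagger, and measure
# the probability of returning to |00...0>. The noise model is made up.

rng = np.random.default_rng(1)
n_qubits = 6
dim = 2 ** n_qubits

def random_unitary(d: int) -> np.ndarray:
    """Haar-style random unitary via QR decomposition."""
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

state = np.zeros(dim, dtype=complex)
state[0] = 1.0                     # start in |00...0>

u = random_unitary(dim)
state = u @ state                  # forward half of the mirror circuit

eps = 0.05                         # hypothetical leakage out of the ideal path
noise = rng.normal(size=dim) + 1j * rng.normal(size=dim)
noise /= np.linalg.norm(noise)
state = np.sqrt(1 - eps) * state + np.sqrt(eps) * noise
state /= np.linalg.norm(state)

state = u.conj().T @ state         # backward half (the mirror)
print(f"return probability: {abs(state[0]) ** 2:.3f}")   # ~1 - eps
```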

The researchers note that simulating large quantum spin systems with complex entanglement patterns can be difficult for classical computers, making this a meaningful test of quantum advantage potential.

How Iceberg Codes Work

The key insight behind iceberg codes is asymmetric resource allocation. Instead of spreading each logical qubit across many physical qubits, you protect many logical qubits with a small shared error-checking layer.

The simplest iceberg code protects k logical qubits using just k+2 physical qubits - two extra qubits detect errors across the entire system. For distance-2 detection, this gives you nearly 1:1 efficiency.
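
A minimal Pauli-frame simulation of those two global checks, assuming the [[n, n-2, 2]] structure described above and an invented error rate, shows both sides of the trade-off - high acceptance, plus a small tail of even-weight errors that slip through:

```python
import numpy as np

# Pauli-frame sketch of the iceberg code's two global checks: an all-X
# stabilizer flags odd-weight Z errors and an all-Z stabilizer flags
# odd-weight X errors. Distance 2 means even-weight errors of the same
# type go undetected. Error rate below is hypothetical, not the paper's.

rng = np.random.default_rng(2)
n = 96                      # 94 logical + 2 extra qubits, as in the text
p = 0.002                   # hypothetical per-qubit error probability
shots = 100_000

accepted_clean = []         # for accepted shots: was the run truly error-free?
for _ in range(shots):
    x_err = rng.random(n) < p
    z_err = rng.random(n) < p
    detected = (x_err.sum() % 2) or (z_err.sum() % 2)
    if not detected:
        accepted_clean.append(not (x_err.any() or z_err.any()))

print(f"acceptance rate: {len(accepted_clean) / shots:.3f}")
print(f"undetected errors among accepted: {1 - np.mean(accepted_clean):.2e}")
```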

The team also demonstrated concatenation - stacking one code on top of another to increase error-correction strength. Code distances multiply under concatenation, so two stacked distance-2 codes yield distance 4; their two-level concatenated codes achieved distance-4 protection for 48 logical qubits.

The Entanglement Test: 94-Qubit GHZ States

The researchers generated large GHZ states (Greenberger-Horne-Zeilinger states) - maximally entangled quantum states that link all qubits together. GHZ states are foundational for many quantum algorithms and error correction protocols.

With 94 logical qubits, they achieved GHZ state fidelities around 95%, showing that large-scale entanglement can be maintained across encoded systems.
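
A common experimental estimator for GHZ fidelity combines the end-state populations with the off-diagonal coherence. Here's a toy version on 4 qubits; the estimator is a standard one assumed here, not taken from the paper, and the noise level is invented:

```python
import numpy as np

# Toy GHZ fidelity estimate via F = (populations + coherence) / 2,
# on 4 qubits instead of 94, with a hypothetical depolarizing mixture.

n = 4
dim = 2 ** n
ghz = np.zeros(dim)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)        # (|0000> + |1111>) / sqrt(2)

eps = 0.05                               # invented noise strength
rho = (1 - eps) * np.outer(ghz, ghz) + eps * np.eye(dim) / dim

populations = rho[0, 0] + rho[-1, -1]    # P(00...0) + P(11...1)
coherence = 2 * abs(rho[0, -1])          # off-diagonal GHZ coherence
print(f"GHZ fidelity: {(populations + coherence) / 2:.3f}")   # ~0.953
```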

In some tests using concatenated error-correction codes, no logical errors were observed across thousands of experimental runs.

What’s Still Missing

This isn’t fault-tolerant quantum computing yet. The work falls into “partially fault-tolerant computing” - methods that improve performance on current hardware but may not scale indefinitely.

Postselection: The experiments discarded runs where errors were detected. This is standard practice in quantum experiments, but it increases the number of repetitions needed to get results. True fault-tolerant systems must correct errors on the fly without discarding runs.
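
The repetition overhead is simple arithmetic (illustrative numbers, not the paper's):

```python
# Back-of-the-envelope cost of postselection: if a run passes the error
# checks with probability a, you need roughly shots / a runs in total
# to keep the target number of usable results.

def runs_needed(target_shots: int, acceptance: float) -> int:
    return round(target_shots / acceptance)

for acceptance in (0.9, 0.5, 0.1):
    print(f"acceptance {acceptance:.0%}: "
          f"{runs_needed(10_000, acceptance):,} runs for 10,000 kept shots")
```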

Hardware scale: Even with near-perfect encoding efficiency, 98 physical qubits still yield only 94 logical qubits. You need hundreds or thousands of logical qubits to run useful algorithms.

Algorithm depth: The XY model simulation ran relatively shallow circuits. Deeper circuits accumulate more errors, requiring stronger error correction.

The Helios Advantage

Quantinuum’s Helios trapped-ion processor has two critical features for error correction:

All-to-all connectivity: Any qubit can interact with any other qubit. Iceberg codes require interactions among widely separated qubits, making flexible connectivity essential.

Long coherence times: Trapped ions maintain quantum states for seconds, not microseconds. This gives you time to run classical decoding algorithms and make real-time decisions during circuit execution.

The team implemented their decoders in Rust and compiled to WebAssembly (Wasm), enabling efficient classical processing integrated directly into quantum programs.
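
Conceptually, a detection-code decoder can be as simple as a syndrome lookup table. This Python sketch shows the shape of that logic only - it is not Quantinuum's implementation, whose Rust/Wasm internals aren't reproduced here:

```python
from itertools import product

# Minimal lookup-table "decoder" for a pure detection code: every nonzero
# syndrome just flags the run for discard. A correcting code would instead
# map each syndrome to a Pauli fix.

LOOKUP = {
    (0, 0): "accept",     # both global parity checks clean
    (0, 1): "discard",    # odd Z-error parity detected
    (1, 0): "discard",    # odd X-error parity detected
    (1, 1): "discard",    # both checks fired
}

def decode(syndrome: tuple[int, int]) -> str:
    return LOOKUP[syndrome]

for s in product((0, 1), repeat=2):
    print(s, "->", decode(s))
```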

Business Context: When Does This Matter?

This is a critical milestone, but it’s not the end of the journey. Here’s what enterprises should track:

Timeline to commercial advantage: Still 3-5 years for most applications. Current systems can run small simulations, but you need thousands of logical qubits and much deeper circuits for drug discovery, materials design, or cryptography applications.

What vendors should show you: If a vendor claims fault-tolerant quantum computing, ask:

  • What’s the logical-to-physical qubit ratio?
  • What are the logical gate error rates?
  • Can you run circuits without postselection?
  • What’s the circuit depth you can reliably execute?

What to do now:

  • Track which companies hit “beyond break-even” at scale
  • Build internal quantum literacy (hire or train technical staff)
  • Identify candidate problems where quantum might help in 3-5 years
  • Don’t invest in hardware yet, but watch the logical qubit scaling curve

What Comes Next

The researchers outline several next steps:

Higher-distance codes: Concatenate additional layers to detect and correct more complex errors. Higher distance = stronger protection.

Better decoders: Improve the classical algorithms that decide how to correct detected errors. Smarter decoders mean better logical fidelity.

Scale up: Move from 94 logical qubits to hundreds or thousands. This requires both more physical qubits and maintaining low error rates as the system grows.

According to Quantinuum’s blog post: “This is just the beginning: we are officially entering the era of large-scale logical computing. The path to fault-tolerance is no longer just theoretical—it is being built, gate by gate, on Helios.”

Why This Is a Big Deal

For years, error correction was a theoretical solution waiting for hardware to catch up. Physical qubit quality wasn’t good enough - adding error correction just made things worse.

This work shows the transition point. Physical qubits are now good enough that error correction helps instead of hurts. And the encoding efficiency is extraordinary - nearly 1:1 for error detection.

That efficiency matters because it means you don’t lose most of your qubits to overhead. You can run larger algorithms on the same hardware.

The next test is scaling: can these codes maintain beyond-break-even performance as you scale to hundreds or thousands of logical qubits? That’s the real question for practical quantum computing.

But this milestone proves the principle. Logical qubits can be better than physical qubits. Error correction works. The path to fault-tolerant quantum computing is no longer hypothetical - it’s being demonstrated on real hardware.


Australian Quantum Scaling: SQC Secures $20M for Atomic Manufacturing

In related news, Australia's Silicon Quantum Computing (SQC) received AUD$20 million from the National Reconstruction Fund Corporation to scale its atomic-precision chip manufacturing. According to the company, SQC is the only firm manufacturing quantum chips with atomic precision while delivering quantum-enhanced AI products today.

The efficiency advantage: SQC can design, produce, and test new quantum chips in under one week using their proprietary PAQMan™ (Precision Atom Qubit Manufacturing) process. That rapid iteration speed - enabled by fully in-house manufacturing - sets them apart in the race to commercial-scale quantum computing.

Products in market: SQC already has two commercial products:

  • Watermelon™: Quantum machine learning system (Telstra reported a dramatic reduction in model training time)
  • Quantum Twins™: Quantum simulator for molecule and materials discovery

Scale ambitions: The NRFC investment supports expansion toward fault-tolerant, commercial-scale quantum computers built in silicon. SQC currently employs over 100 people and expects to create hundreds more high-skilled jobs in chip design, quantum engineering, and hardware development.

Australia’s National Quantum Strategy estimates the quantum industry could reach $6 billion and 19,400 jobs by 2045. SQC - one of only 11 companies advancing to Stage B of DARPA’s rigorous Quantum Benchmarking Initiative - anchors that strategy with globally competitive manufacturing capability.

