
Rigetti Breaks the Size Barrier for Quantum Optimization

New algorithm enables drug design problems 10x larger than previous quantum demonstrations, using physics of magnets to coordinate small quantum processors.

Tags: news, rigetti, optimization, drug-design, algorithms

Quantum computers face a stubborn problem: real-world applications need hundreds of variables, but today’s hardware can only handle a fraction of that. A drug molecule binding to a protein requires optimizing 252 variables across a solution space of 10⁷⁶ possibilities. Current quantum processors tap out around 100 qubits.

Rigetti Computing just demonstrated a solution. Their new self-consistent mean-field QAOA algorithm breaks large optimization problems into manageable pieces while maintaining the connections that make solutions high-quality. On March 17, they published results showing a 252-variable molecular docking problem running on their 21-qubit Ankaa-3 system.

This is the first time quantum algorithms have been tested on realistic drug discovery problem sizes. Previous demonstrations maxed out at “tens of variables.” Rigetti just jumped to 252.

The Physics of Magnets, Applied to Optimization

The core idea comes from condensed matter physics. In a magnet, each atom doesn’t track every single interaction with its neighbors individually—that would be computationally impossible. Instead, each atom aligns itself to the average magnetic field created by all the other atoms.
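The textbook version of this idea is the mean-field self-consistency equation for an Ising magnet, m = tanh(βJzm): each spin's average orientation m both creates the mean field and responds to it. A minimal sketch of solving it by fixed-point iteration (standard condensed-matter material, not from the Rigetti paper):

```python
import math

def mean_field_magnetization(beta_J_z, tol=1e-10, max_iter=10_000):
    """Solve m = tanh(beta*J*z*m) by fixed-point iteration.
    beta_J_z bundles inverse temperature, coupling strength,
    and number of neighbors into a single parameter."""
    m = 0.5  # small nonzero guess, so iteration can find the ordered solution
    for _ in range(max_iter):
        m_new = math.tanh(beta_J_z * m)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

# Below the critical point (beta*J*z < 1) the only solution is m = 0;
# above it, a spontaneous magnetization appears.
print(mean_field_magnetization(0.5))  # disordered phase: collapses to ~0
print(mean_field_magnetization(2.0))  # ordered phase: nonzero fixed point
```

The same pattern of "solve against an averaged environment, then update the environment" is exactly what the optimization loop below does.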

Rigetti applies this same principle to quantum optimization. Instead of solving a 252-variable problem all at once (impossible on current hardware), they:

  1. Break the problem into 12 subproblems of 21 variables each
  2. Create a “shared environment” that captures how the pieces influence each other
  3. Solve each subproblem using QAOA on available quantum hardware
  4. Update the environment based on the results
  5. Repeat until the system stabilizes

The environment acts like a messenger, summarizing the influence of the rest of the system so each piece can be solved accurately on its own. This feedback loop preserves the correlations that would be lost by just solving independent subproblems.
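The five-step loop can be sketched classically. In this illustrative stand-in (my sketch, not Rigetti's implementation), each block of an Ising-style problem is minimized by brute force instead of by QAOA, and the frozen spins outside a block play the role of the shared environment:

```python
import itertools
import numpy as np

def solve_block(J, idx, s):
    """Exactly minimize the energy over one block of spins while the rest of
    the system stays frozen; the frozen spins influence the block only through
    an effective field, the 'shared environment'. (In the paper, this step
    would be a QAOA run on quantum hardware.)"""
    best_e, best_cfg = np.inf, None
    for cfg in itertools.product([-1, 1], repeat=len(idx)):
        trial = s.copy()
        trial[idx] = cfg
        e = trial @ J @ trial  # includes block-environment cross terms
        if e < best_e:
            best_e, best_cfg = e, np.array(cfg)
    return best_cfg

def self_consistent_solve(J, block_size=4, sweeps=50, seed=0):
    """Steps 1-5 from the article: partition, solve each block against the
    current environment, update the environment, repeat until stable."""
    n = J.shape[0]
    rng = np.random.default_rng(seed)
    s = rng.choice([-1, 1], size=n)  # random starting environment
    blocks = [np.arange(i, min(i + block_size, n)) for i in range(0, n, block_size)]
    for _ in range(sweeps):
        changed = False
        for idx in blocks:
            new_cfg = solve_block(J, idx, s)
            if not np.array_equal(s[idx], new_cfg):
                s[idx] = new_cfg
                changed = True
        if not changed:  # converged: every block is consistent with its environment
            break
    return s, float(s @ J @ s)

# Small random symmetric couplings (a toy Sherrington-Kirkpatrick-style instance)
rng = np.random.default_rng(1)
J = rng.normal(size=(12, 12))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)
s, energy = self_consistent_solve(J, block_size=4)
print(energy)
```

The brute-force inner solver costs 2^block_size per block, which is precisely why the real version delegates that step to a 21-qubit quantum circuit.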

99.6% Resource Reduction Without Quality Loss

Rigetti validated the approach on the Sherrington-Kirkpatrick model, a standard stress-test for optimization algorithms. Results on 256-variable problems:

Gate count: Dropped 99.6% when decomposing into 16 subproblems (from ~63,000 gates to ~250 gates)

Solution quality: Matched or exceeded standard QAOA run on the full problem, itself a computation at the edge of what simulation tools can handle at this scale

Comparison to baseline: Massively outperformed solving independent subproblems without the shared environment

The key insight: the mean-field environment effectively captures complex correlations between separated pieces. You get near-full-problem quality at a fraction of the computational cost.
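The 99.6% figure is easy to sanity-check with back-of-envelope gate counting. For a fully connected n-variable problem, one QAOA layer needs one two-qubit ZZ gate per variable pair; the quoted totals (~63,000 vs. ~250) presumably also count mixer layers and repetitions, but the per-circuit ratio comes out the same:

```python
def zz_gates_per_layer(n):
    """Two-qubit ZZ interactions in one QAOA layer of a fully connected
    (all-to-all coupled) n-variable problem: one gate per variable pair."""
    return n * (n - 1) // 2

full = zz_gates_per_layer(256)       # monolithic 256-variable QAOA circuit
sub = zz_gates_per_layer(256 // 16)  # one of 16 subproblems, 16 variables each

print(full, sub)  # 32640 120
print(f"{1 - sub / full:.1%} fewer entangling gates per circuit")  # 99.6% fewer
```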

Drug Design: 252 Variables on 21 Qubits

The team applied this to a molecular docking problem relevant to cancer research. The task: predict how a drug molecule (ligand) binds to a protein target—like finding the angle at which a key enters a lock.

Problem size: 252 variables, 10⁷⁶ solution space
Decomposition: 12 subproblems of 21 variables each
Hardware: Rigetti Ankaa-3 superconducting processor
Resources required: 21 qubits and a few hundred gates (vs. 252 qubits and tens of thousands of gates for direct QAOA)
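The problem-size numbers above are internally consistent, assuming binary decision variables (which is what the 10⁷⁶ figure implies, since 2²⁵² ≈ 10⁷⁶):

```python
import math

n_vars, n_blocks = 252, 12
block = n_vars // n_blocks       # 21 variables -> a 21-qubit circuit per block

# Size of the full binary search space, expressed in powers of ten
digits = n_vars * math.log10(2)
print(block, round(digits, 1))   # 21 75.9  (i.e. ~10^76 configurations)
```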

Results:

  • Simulator: Self-consistent approach nearly matched full-problem QAOA
  • Real hardware: Outperformed independent subproblem solvers despite device noise
  • First demonstration of quantum optimization at realistic drug discovery scale

What’s Still Missing

Let’s be clear about limitations:

Hardware noise reduced performance. The Ankaa-3 results were lower than simulations—expected given current gate fidelities. The advantage over independent solvers persisted, but noise remains a challenge.

No classical comparison reported. The paper doesn’t show how the quantum approach compares to state-of-the-art classical optimization for this specific drug docking problem. That benchmark matters for evaluating practical advantage.

Still a research demonstration. This is not yet a production tool for pharmaceutical companies. It’s a proof-of-concept showing the approach works on realistic problem sizes.

Why This Matters for Quantum Advantage

Previous quantum optimization demonstrations hit a wall around 50-100 variables. Real applications—drug design, logistics, financial portfolios—need 200-1000 variables. That gap blocked meaningful benchmarking against classical methods.

Rigetti just closed that gap. Not by building bigger quantum processors (still limited to ~100 qubits), but by algorithmic innovation that makes better use of available hardware.

For enterprises exploring quantum:

  • Timeline: This brings quantum optimization closer to practical advantage, but you’re still looking at 3-5 years for commercial chemistry applications
  • What to do now: Identify candidate optimization problems in your domain (hundreds of variables, complex constraints), build quantum literacy in technical teams, track which vendors hit these algorithm milestones
  • Vendor evaluation: When a quantum company claims advantage for your problem, ask: What’s the classical baseline? What’s the actual speedup (wall-clock time)? Can you run a proof-of-concept on their hardware with a real instance of your problem?

Beyond Drug Design

The self-consistent mean-field framework extends beyond molecular docking. Any large optimization problem that can be decomposed into coupled subproblems could benefit:

  • Logistics: Vehicle routing with interdependent delivery windows
  • Finance: Portfolio optimization with correlated assets
  • Materials science: Simulating complex quantum materials
  • Machine learning: Large quantum ML models decomposed into coupled sub-units

As quantum processors improve in scale and fidelity, this algorithmic technique maximizes the value extracted from every available qubit.

The Next Milestone

Watch for: Can this approach scale to 500+ variables while maintaining the quality advantage? And critically, can it outperform classical state-of-the-art methods (not just independent subproblems) on wall-clock time for a practical application?

Those benchmarks will determine whether this opens a path to near-term quantum advantage or remains a clever research technique.

For now, Rigetti demonstrated something important: you don’t always need more qubits. Sometimes you need better algorithms that make smarter use of the qubits you have.
