Shots, depth, and error: the three numbers that decide if a circuit works

A practical cost model for quantum experiments: how sampling and noise turn ‘a circuit’ into ‘a result.’

Tags: practical, cost-model, noise

When you run a quantum program on real hardware, you don’t just “execute once and get an answer.” You run many shots, the circuit has finite depth, and the device has errors.

If you want a simple mental model for whether something is feasible, start here.

1) Shots: results are estimates

Most outputs you care about are expectations like:

  • an average energy \( \langle H \rangle \) (VQE),
  • a probability of a “good” bitstring (QAOA),
  • or a histogram you compare to a target.

With \(S\) independent samples, many estimates converge like:

\[ \text{error} \propto \frac{1}{\sqrt{S}} \]

That means cutting the error in half typically requires 4× the shots.
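The scaling above can be turned into a quick calculator. A minimal sketch, using the standard error of a Bernoulli estimate (the "probability of a good bitstring" case); the function name and example numbers are illustrative, not from any particular library:

```python
import math

def shots_for_error(p, target_err):
    """Shots needed so the standard error of a Bernoulli estimate
    (probability p of measuring the 'good' bitstring) falls below
    target_err, since stderr = sqrt(p * (1 - p) / S)."""
    return math.ceil(p * (1 - p) / target_err ** 2)

# Halving the target error quadruples the shot count:
print(shots_for_error(0.5, 0.01))   # → 2500
print(shots_for_error(0.5, 0.005))  # → 10000
```

The worst case is p = 0.5; estimates of probabilities near 0 or 1 need fewer shots for the same absolute error, but often you care about relative error there, which is harder.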

Practical implication

Even if a circuit is “short,” it can be expensive if it needs huge shot counts for acceptable confidence.

2) Depth: time is also a resource

Depth is an imperfect proxy, but it correlates with:

  • longer total runtime (more exposure to decoherence),
  • more opportunities for gate errors,
  • more idle time where errors still accumulate.

Compilation can secretly increase depth:

  • routing (SWAP insertion),
  • device-native gate decomposition,
  • scheduling constraints.

Practical implication

Two circuits with the same logical description can have very different real depths on different devices.

3) Error: fidelity compounds

If each layer has a small error probability, errors compound with depth. A crude cartoon model is:

\[ \text{success} \approx (1 - \epsilon)^{d} \]

where \(\epsilon\) is an effective per-layer error and \(d\) is depth.

This is not “the truth,” but it’s directionally useful: deeper circuits need dramatically better control.
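Plugging numbers into the cartoon model makes the compounding vivid. A one-line sketch (the 1% per-layer error is an illustrative figure, not a measured one):

```python
def survival(epsilon, depth):
    """Cartoon model: probability that no layer failed."""
    return (1 - epsilon) ** depth

# With a 1% effective per-layer error:
for d in (10, 100, 1000):
    print(d, round(survival(0.01, d), 3))
# depth 10 survives ~90% of the time, depth 100 ~37%, depth 1000 ~0%
```

The cliff between depth 100 and depth 1000 is the whole story: a 10× deeper circuit doesn't cost 10× more signal, it costs essentially all of it unless \(\epsilon\) drops too.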

Put them together: the “total cost” picture

To get one plotted point in a paper you often pay:

  • compile time (classical),
  • \(S\) shots (quantum time),
  • repeated across parameters (a sweep) or iterations (an optimizer),
  • plus calibration overhead (often omitted from headlines).

So “I ran a circuit” becomes:

“I ran \(S\) samples of a compiled depth-\(d\) circuit across \(K\) settings, and the device was stable enough to interpret the statistics.”
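Those factors multiply, which is the point. A back-of-envelope runtime estimate, with every number below a hypothetical placeholder (shot duration, sweep size, and the mitigation multiplier all vary wildly by device and method):

```python
def quantum_seconds(settings, shots, shot_time_s, mitigation_factor=1):
    """Back-of-envelope quantum runtime for one figure in a paper:
    K settings x S shots each, times any extra circuits demanded by
    error mitigation. All inputs are hypothetical placeholders."""
    return settings * shots * shot_time_s * mitigation_factor

# 50-point sweep, 10k shots per point, ~1 ms per shot,
# 3x circuit overhead for mitigation:
hours = quantum_seconds(50, 10_000, 1e-3, 3) / 3600
print(f"{hours:.2f} h")  # → 0.42 h
```

Note what this omits: compile time, calibration drift between settings, and queue time. Multiplying four modest numbers is how "a quick experiment" becomes an overnight job.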

A quick checklist for readers (and authors)

When you see a result, ask for:

  • How many shots per datapoint?
  • What was the compiled depth (or duration)?
  • How many two-qubit gates (often the limiter)?
  • How many datapoints / iterations?
  • What mitigation was used, and how many extra circuits did it require?

If those are missing, you can’t estimate feasibility: you can only admire the plot.