Chapter 1 of 7

The Quantum Computing Stack, End to End

A tour of the full quantum computing stack, from physics to applications: hardware modalities compared, plus control electronics, error correction, compilers, and the application layer.

A Stack Where Every Layer Is Still Being Built

When you deploy a web application, you don’t think about the transistor physics. You shouldn’t have to. Decades of abstraction layers, each one hardened by engineering, separate your code from the silicon.

Quantum computing has no such luxury. Today, a team building a quantum application must think about the physics. They must account for which specific qubits on which specific chip will execute their gates, how long those qubits will stay coherent, which pairs of qubits can interact directly, and how the noise profile of last Tuesday’s calibration differs from today’s.

This is roughly equivalent to writing software where you need to know which transistors are running your code, whether the copper traces on your motherboard are warmer than usual, and whether the guy in the next rack bumped into your server.

Understanding the full stack is not optional for making good decisions about quantum. Here is what it looks like, bottom to top.

Five Layers, All Under Construction

The quantum computing stack has five layers: physical qubits, control electronics, error correction, compilation/transpilation, and applications. Only the bottom two are reasonably mature. The rest are active engineering problems.

Layer 1: Physical Qubits

At the bottom of everything is the qubit itself, a physical system that can exist in a superposition of two quantum states. The choice of physical system defines nearly everything about the machine built on top of it.

Superconducting qubits use tiny circuits cooled to 15 millikelvin, colder than outer space. They’re fast: gate operations take 20-100 nanoseconds. They decohere quickly too, typically losing their quantum state within 100-300 microseconds. IBM’s Eagle, Condor, and Flamingo processors use transmon qubits, a type of superconducting qubit. Google’s Sycamore and Willow chips use a similar approach. These systems currently reach 1,000+ physical qubits, the largest of any platform.

Trapped ions suspend individual charged atoms in electromagnetic fields and manipulate them with lasers. Gate operations are slower (microseconds instead of nanoseconds) but far more precise: two-qubit gate fidelities regularly exceed 99.5%, with the best demonstrations above 99.9%. Coherence times are measured in seconds or minutes, not microseconds. Quantinuum’s H2 processor and IonQ’s systems use this approach. Current scale: 20-56 qubits, but with all-to-all connectivity (any qubit can interact with any other).

Neutral atoms trap individual atoms (typically rubidium or cesium) using focused laser beams called optical tweezers. The remarkable property here is reconfigurability: atoms can be physically moved during computation, rewiring the connectivity on the fly. This makes certain error correction codes much more natural to implement. QuEra, Pasqal, and Atom Computing work in this space. Demonstrations have reached 200+ qubits with long coherence times, and the architecture is inherently scalable. The first demonstrations of quantum error correction on neutral atom platforms appeared in 2024.

Photonic qubits encode information in particles of light. They operate at room temperature, which eliminates the enormous cryogenic infrastructure other platforms require. The challenge is different: photons don’t naturally interact with each other, making two-qubit gates probabilistic rather than deterministic. PsiQuantum, Xanadu, and ORCA Computing pursue this path, with PsiQuantum betting on manufacturing-scale silicon photonics.

Superconducting: 1,000+ qubits, gate speed of 20-100 ns, coherence of 100-300 μs. Fast but short-lived.

Trapped ions: 20-56 qubits, gate fidelity of 99.5-99.9%, coherence in seconds to minutes. Precise but slower.

Each modality is a genuine engineering approach with real physics behind it. None is a clear winner. The choice depends on which engineering challenges you believe will be solved first: the short coherence of superconducting qubits, the slow gates of trapped ions, the early-stage control systems of neutral atoms, or the probabilistic gates of photonics.
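One useful way to compare these trade-offs is a rough "gate budget": how many sequential operations fit inside one coherence window. The sketch below uses midpoints of the ranges quoted above; real devices vary widely, and gate errors usually bite long before decoherence does.

```python
# Rough "gate budget" per modality: coherence time divided by gate time.
# Numbers are illustrative midpoints of the ranges quoted above.

modalities = {
    # name: (gate_time_seconds, coherence_time_seconds)
    "superconducting": (50e-9, 200e-6),   # ~50 ns gates, ~200 us coherence
    "trapped_ion":     (10e-6, 10.0),     # ~10 us gates, ~10 s coherence
}

for name, (t_gate, t_coh) in modalities.items():
    budget = t_coh / t_gate  # sequential gates before coherence runs out
    print(f"{name}: ~{budget:,.0f} gates per coherence window")
```

Both platforms land in the thousands-to-millions range by this crude measure, which is why neither speed nor coherence alone settles the modality question.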

Layer 2: Control Electronics

Between the quantum processor and the classical software sits a layer of specialized electronics that translates digital instructions into the precise analog signals qubits need.

For superconducting systems, this means microwave pulse generators that produce signals accurate to fractions of a nanosecond and stable to parts per million. These signals must pass through multiple temperature stages of a dilution refrigerator, from room temperature down to 15 millikelvin, with appropriate filtering at each stage to keep thermal noise from destroying quantum states.

For trapped-ion systems, it means laser control systems that can address individual ions separated by a few micrometers, with frequency precision better than one part in a billion.

This layer is more mature than what sits above it. Companies like Zurich Instruments, Keysight, and Quantum Machines (with their OPX+ platform) build commercial control hardware. The bottleneck is scaling: current systems use roughly one to two control lines per qubit. A million-qubit machine would need a fundamentally different approach, likely involving cryogenic classical processors sitting inside the refrigerator next to the qubits.
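The wiring problem is easy to see in plain arithmetic. The one-to-two lines-per-qubit figure is from current systems as described above; the multiplexing factor below is purely a hypothetical illustration of why cryogenic control is attractive.

```python
# Control-wiring back-of-envelope. lines_per_qubit reflects current
# systems; the multiplex factor is hypothetical, for illustration only.

def control_lines(n_qubits, lines_per_qubit=1.5, multiplex=1):
    return n_qubits * lines_per_qubit / multiplex

print(control_lines(1_000))                     # today's scale: 1500.0 lines
print(control_lines(1_000_000))                 # naive million-qubit: 1.5M lines
print(control_lines(1_000_000, multiplex=100))  # with 100x multiplexing: 15000.0
```

A dilution refrigerator cannot physically accommodate 1.5 million coax lines, which is why the "fundamentally different approach" above is not optional at scale.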

Layer 3: Quantum Error Correction

This is where the stack goes from “works in a lab” to “we have a problem.”

Classical error correction is mature and largely invisible. ECC memory detects and corrects single-bit flips. RAID arrays handle disk failures. TCP retransmits dropped packets. You don’t think about these systems because they work.

Quantum error correction is none of these things. It is immature, enormously expensive in qubit overhead, and the subject of intense active research.

The core difficulty: you cannot copy a quantum state (the no-cloning theorem), and you cannot measure a quantum state without disturbing it. Both of these are fundamental physical laws, not engineering limitations. So the classical strategies of “make a backup copy” and “check the value” are both forbidden.

Instead, quantum error correction spreads the information of one “logical qubit” across many physical qubits, then performs indirect measurements (called syndrome measurements) that detect errors without revealing the encoded information. The most studied approach, the surface code, requires roughly 1,000 physical qubits to create one high-quality logical qubit. More optimistic estimates using newer codes (like those based on the bivariate bicycle construction) suggest this ratio could improve to perhaps 100:1 or even 10:1 for certain code parameters, but these are at the frontier of current research.
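The syndrome idea has a classical analogue worth sketching: a three-bit repetition code, where parity checks locate a single bit-flip without ever reading the encoded value. This is only an analogue, not a real quantum code (it ignores phase errors entirely), but it shows the indirect-measurement trick.

```python
# Classical sketch of syndrome measurement with a 3-bit repetition code.
# Real quantum codes must also protect phase information; this only
# illustrates how parity checks locate an error without reading the bit.

def encode(bit):
    return [bit, bit, bit]

def syndrome(codeword):
    # Two parity checks: (q0,q1) and (q1,q2). Neither reveals the bit itself.
    return (codeword[0] ^ codeword[1], codeword[1] ^ codeword[2])

def correct(codeword):
    s = syndrome(codeword)
    flip = {(1, 0): 0, (1, 1): 1, (0, 1): 2}.get(s)  # which bit to flip
    if flip is not None:
        codeword[flip] ^= 1
    return codeword

word = encode(1)
word[0] ^= 1                       # a single bit-flip error
assert syndrome(word) == (1, 0)    # checks locate the error...
assert correct(word) == [1, 1, 1]  # ...and correction restores the codeword
```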

In 2024, Google demonstrated that increasing the size of its surface code, from distance-3 to distance-5 to distance-7, steadily reduced the logical error rate, the critical milestone of operating “below threshold,” which proves the error correction is helping rather than hurting. Quantinuum demonstrated real-time error correction on trapped ions. Microsoft and Atom Computing showed error correction on a neutral-atom platform. These are genuine milestones. They are also still far from the millions of physical qubits needed for useful fault-tolerant computation.
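"Below threshold" can be captured in the standard surface-code scaling heuristic: the logical error rate falls exponentially in code distance d only when the physical error rate p is below the threshold p_th. The constants below are illustrative, not measured values from any device.

```python
# Heuristic surface-code scaling: p_logical ~ A * (p / p_th)^((d+1)/2).
# A and p_th here are illustrative constants, not measured device values.

def logical_error(p, p_th=0.01, d=3, A=0.1):
    return A * (p / p_th) ** ((d + 1) / 2)

# Below threshold (p < p_th): growing the code distance helps.
print(logical_error(0.005, d=3) > logical_error(0.005, d=5))  # True

# Above threshold (p > p_th): growing the distance makes things worse.
print(logical_error(0.02, d=3) < logical_error(0.02, d=5))    # True
```

This is why the demonstrations above matter: they show real hardware sitting on the good side of that exponent.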

  • ~1,000:1 physical-to-logical ratio: surface code overhead today.
  • ~100:1 optimistic ratio: newer codes, still frontier research.
  • ~10:1 theoretical best: certain code parameters only.

Chapter 4 covers errors in detail. For now, the key point: this layer is the primary bottleneck in the entire stack.

Layer 4: Compilation and Transpilation

A quantum algorithm, as written in a textbook, assumes perfect qubits with arbitrary connectivity. Real hardware has neither. The compiler’s job is to bridge this gap.

Quantum compilation translates a high-level quantum circuit into the specific gate set that the hardware supports. Different platforms have different native gates: IBM’s processors natively execute CX (controlled-NOT) and single-qubit rotations. Quantinuum’s trapped-ion machines natively execute ZZ gates. The compiler must decompose any abstract operation into these native gates with minimal overhead.

Transpilation goes further, mapping abstract qubits onto physical qubits and inserting SWAP operations where the hardware connectivity doesn’t allow a direct interaction. On a superconducting chip where each qubit connects to only 2-4 neighbors, a circuit that needs distant qubits to interact requires a chain of SWAP operations to move the quantum state across the chip. Each SWAP is three two-qubit gates. Each gate introduces noise. This overhead can double or triple the effective circuit depth.
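The overhead is easy to quantify in the simplest topology, a linear chain. This toy model just counts SWAPs for one two-qubit gate; real routers are smarter and share SWAPs across many gates.

```python
# SWAP overhead on a linear-chain chip: making qubits at positions i and j
# adjacent takes |i - j| - 1 SWAPs, and each SWAP compiles to three
# two-qubit gates. A toy model; real routers amortize SWAPs across gates.

def routing_overhead(i, j):
    swaps = abs(i - j) - 1
    return swaps, 3 * swaps  # (SWAPs inserted, extra two-qubit gates)

print(routing_overhead(0, 1))  # neighbors: (0, 0), no overhead
print(routing_overhead(0, 5))  # distant qubits: (4, 12) extra noisy gates
```

Twelve extra two-qubit gates to execute one, on a device where every two-qubit gate costs roughly a percent of fidelity, is the difference between a usable result and noise.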

This is an NP-hard optimization problem, and the quality of the solution directly affects whether a computation succeeds or drowns in noise. Tools like IBM’s Qiskit transpiler, Google’s Cirq, and Quantinuum’s TKET attempt this optimization using heuristic methods.

If you come from classical computing, think of this as an extremely aggressive compiler optimization pass, except the penalty for a suboptimal solution isn’t slower execution but wrong answers.

Compilation Overhead Is Real

Each SWAP operation is three two-qubit gates, and each gate introduces noise. On chips where qubits connect to only 2-4 neighbors, compilation overhead can double or triple the effective circuit depth.

Layer 5: Applications

At the top of the stack sit the algorithms and applications that users actually care about. This is where quantum computing either delivers value or doesn’t.

The application layer today looks nothing like classical software. There are no quantum operating systems, no quantum databases, no quantum web frameworks. Instead, there are specific quantum algorithms (covered in Chapter 3) that address specific computational problems, wrapped in classical orchestration code.

The dominant pattern in 2025-2026 is the variational algorithm: a hybrid loop where a classical computer proposes parameters, a quantum processor evaluates a cost function, and the classical computer adjusts. VQE (Variational Quantum Eigensolver) for chemistry and QAOA (Quantum Approximate Optimization Algorithm) for optimization follow this pattern. Chapter 6 covers this hybrid architecture in detail.
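The shape of that hybrid loop can be sketched in a few lines. The "quantum" cost evaluation below is a classical stand-in; in practice it would run a parameterized circuit on hardware. The cost function, learning rate, and all names are illustrative, not any particular SDK's API.

```python
# Shape of a variational (hybrid) loop. evaluate_cost stands in for a
# quantum expectation value; on real hardware each call runs a circuit.
import math

def evaluate_cost(theta):
    # Illustrative stand-in with its minimum at theta = pi.
    return 1.0 + math.cos(theta)

def variational_loop(theta=0.3, lr=0.4, steps=200, eps=1e-6):
    for _ in range(steps):
        # Finite-difference gradient: each step costs two "circuit" runs.
        grad = (evaluate_cost(theta + eps) - evaluate_cost(theta - eps)) / (2 * eps)
        theta -= lr * grad  # the classical optimizer adjusts the parameters
    return theta

theta = variational_loop()
print(round(evaluate_cost(theta), 3))  # converges to the minimum cost, 0.0
```

Note the cost structure this implies: hundreds of quantum circuit executions per optimization step, which is why queue times and shot budgets dominate practical variational workloads.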

SDKs and frameworks sit here too: Qiskit, Cirq, PennyLane, Amazon Braket, Azure Quantum. These provide the programming interface for specifying quantum circuits. They are usable, documented, and improving steadily. They are also, in a meaningful sense, writing assembly language. Higher-level abstractions that let you specify a problem without thinking about gates are emerging but not mature.

What This Means for Decision-Making

The classical computing stack took 70 years to harden, from the first transistors to the cloud infrastructure you use without thinking. The quantum stack has had perhaps 15 years of serious engineering effort, less if you count from when multi-qubit processors became reliable enough to run meaningful circuits.

Every layer is simultaneously under active development. A breakthrough in error correction changes what the application layer can do. A new qubit modality changes what error correction is possible. A better compiler changes how many physical qubits you need.

This interconnection is both the excitement and the risk. When a vendor tells you their system has 1,000 qubits, the number that matters isn’t 1,000. It’s how many of those qubits can participate in a useful computation after you account for connectivity, gate fidelity, coherence time, compilation overhead, and error correction requirements.
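A first-order version of that calculation, using only the overhead ratios quoted earlier in this chapter and ignoring connectivity, fidelity, and compilation losses entirely, already deflates the headline number:

```python
# Back-of-envelope "headline vs. useful" qubit count, using only the
# error-correction ratios from this chapter. Illustrative, not a forecast.

def logical_qubits(physical, overhead_ratio):
    return physical // overhead_ratio

for ratio in (1000, 100, 10):  # surface code / newer codes / theoretical best
    print(f"1,000 physical qubits at {ratio}:1 -> {logical_qubits(1000, ratio)} logical")
```

Under today's surface-code overhead, a 1,000-qubit machine yields roughly one logical qubit, before any of the other losses are counted.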

That calculation, the real one, is what the rest of this guide helps you understand.

Key Takeaways

  • The quantum stack has five layers. Only physical qubits and control electronics are reasonably mature.
  • Error correction is the primary bottleneck, requiring roughly 1,000 physical qubits per logical qubit with current approaches.
  • No qubit modality is a clear winner. Superconducting leads in scale, trapped ions lead in fidelity, neutral atoms lead in connectivity.
  • When a vendor quotes qubit count, the real number is how many qubits survive after accounting for connectivity, fidelity, coherence, compilation, and error correction.