The Algorithms That Matter Right Now
Quantum algorithms ranked by practical relevance in 2025-2026. What works on current NISQ hardware, what needs error correction, and what results look like today.
In the NISQ era (2025-2026), only variational algorithms like VQE and QAOA run on real hardware, and their results rarely beat classical methods. The algorithms with proven exponential advantages, like Shor's and quantum phase estimation, require fault-tolerant hardware that doesn't exist yet. The honest split: useful algorithms need better hardware, and available hardware supports algorithms that aren't yet clearly useful.
Two Lists That Don’t Overlap
There’s an uncomfortable mismatch at the center of quantum computing. The algorithms with mathematically proven, dramatic speedups require hardware that won’t exist for years. The hardware that exists today runs algorithms whose advantages over classical methods remain unproven for practical problems.
This isn’t a crisis. It’s a phase. But understanding which algorithms sit on which side of this divide is the difference between informed investment and expensive disappointment.
An engineering lead at a European pharmaceutical company described it well: “We spent six months implementing a variational quantum eigensolver for a molecule we’d already simulated classically. The classical answer was better. But the exercise taught us exactly what the hardware would need to look like for quantum to beat us. Now we know when to come back.”
That’s the right attitude. The value of understanding these algorithms is knowing what to watch for, not what to deploy today.
The NISQ Algorithms: What Runs Today
NISQ stands for Noisy Intermediate-Scale Quantum, a term coined by John Preskill in 2018 to describe machines with tens to low thousands of noisy qubits. These machines can run short computations before errors accumulate and destroy the answer. The algorithms designed for them share a common pattern: keep the quantum part shallow, and let classical computers do the heavy lifting.
Variational Quantum Eigensolver (VQE)
What it does: Finds the lowest-energy state of a molecular system, which tells you the molecule’s ground-state properties, bond lengths, reaction energies, and electronic structure.
How it works: A classical optimizer proposes a set of parameters. These parameters define a quantum circuit (the “ansatz”) that prepares a trial quantum state. The quantum computer measures the energy of this state. The classical optimizer adjusts the parameters and tries again. The loop repeats hundreds or thousands of times, converging on the lowest energy.
What it needs: The quantum circuit must be shallow enough to complete before decoherence destroys the state. For current hardware, this limits VQE to molecules with roughly 10-20 qubits of meaningful simulation. The classical optimizer must navigate a parameter space that can have barren plateaus, regions where the gradient vanishes and the optimizer gets lost.
Current results: VQE has been demonstrated on small molecules: hydrogen (H2), lithium hydride (LiH), beryllium hydride (BeH2), and water (H2O). These results correctly reproduce known energies, confirming the method works. But these molecules are also easily simulated on a classical laptop. The gap between what VQE can handle today and what would constitute useful chemistry (drug-relevant molecules with 50-100+ atoms) remains large.
Honest assessment: VQE is a proof of concept, not a production tool. Its value is in demonstrating the hybrid quantum-classical loop and building expertise. It does not yet solve problems that classical methods cannot.
Today: 10-20 qubits of meaningful simulation. Small molecules (H2, LiH, BeH2, H2O) that a classical laptop handles easily.
Needed: 50-100+ atom drug-relevant molecules. Thousands of error-corrected logical qubits. Chemical accuracy within 1 kcal/mol.
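The hybrid loop described above can be sketched in a few lines. This is a toy, not real VQE: the "molecule" is a made-up one-qubit Hamiltonian H = Z + 0.5X, the ansatz is a single Ry rotation, and the energy is evaluated exactly rather than estimated from repeated hardware measurements. The structure (classical optimizer proposing parameters, quantum evaluation of the energy, repeat) is the part that carries over.

```python
import math

# Toy Hamiltonian H = Z + 0.5 X on one qubit (a stand-in; real VQE
# Hamiltonians come from molecular electronic-structure calculations).
# Ansatz: |psi(theta)> = Ry(theta)|0> = cos(theta/2)|0> + sin(theta/2)|1>,
# giving <Z> = cos(theta), <X> = sin(theta).

def energy(theta):
    # On hardware this expectation comes from repeated measurements of the
    # prepared state; here we evaluate it exactly.
    return math.cos(theta) + 0.5 * math.sin(theta)

def vqe_loop(theta=0.0, lr=0.1, steps=200, eps=1e-4):
    # Classical outer loop: finite-difference gradient descent.
    for _ in range(steps):
        grad = (energy(theta + eps) - energy(theta - eps)) / (2 * eps)
        theta -= lr * grad
    return theta, energy(theta)

theta, e = vqe_loop()
# The exact ground energy of H = Z + 0.5 X is -sqrt(1.25) = -1.118...
print(e)
```

On this smooth one-parameter landscape the optimizer converges easily; the barren-plateau problem mentioned above only bites once the parameter space has hundreds of dimensions.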
Quantum Approximate Optimization Algorithm (QAOA)
What it does: Finds approximate solutions to combinatorial optimization problems such as maximum cut of a graph, satisfiability, portfolio selection, and scheduling.
How it works: Similar variational structure to VQE. A quantum circuit alternates between two operations: one that encodes the objective function and one that mixes the solutions. The angles of these operations are optimized classically; the number of alternating layers (the depth p) is chosen in advance. In theory, deeper circuits produce better approximations.
What it needs: Enough qubits to encode the problem (one qubit per variable for many formulations) and circuit depth proportional to the desired solution quality. At depth p=1 (the shallowest useful version), QAOA on the max-cut problem provably achieves an approximation ratio of at least 0.6924 on 3-regular graphs, proven by Farhi, Goldstone, and Gutmann. Classical algorithms achieve 0.878 (the Goemans-Williamson bound). Higher depths should improve the quantum result, but higher depths also mean more noise on current hardware.
Current results: QAOA has been run on dozens of qubits for toy-size problems. On small instances it beats random guessing and is competitive with basic classical heuristics, but it does not consistently beat optimized classical solvers like Gurobi or local search algorithms.
Honest assessment: QAOA is the most-studied quantum optimization algorithm and remains a strong candidate for future advantage. But “future” is doing heavy work in that sentence. The theoretical scaling advantages of QAOA at high depth remain conjectured, and classical optimization algorithms improve continuously. This is an active race, not a settled outcome.
Variational Quantum Simulation
What it does: Simulates the time evolution of quantum systems, how a quantum state changes over time under specific physical interactions. This is relevant for materials science, condensed-matter physics, and understanding chemical reactions as they happen.
How it works: Uses parameterized quantum circuits to approximate the evolution operator, optimized similarly to VQE. An alternative, Trotterized simulation, breaks the evolution into small time steps and applies the interactions directly, without a variational loop.
Current results: Simulations of small spin chains (up to about 20-30 qubits), certain lattice gauge theory models, and simple condensed-matter systems have been demonstrated. These results are beginning to approach the boundary of what is classically simulable.
Honest assessment: Quantum simulation is the nearest-term candidate for genuine quantum advantage because the advantage is inherent: simulating quantum physics on quantum hardware is natural. The bottleneck is error rates, not algorithmic capability.
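The Trotterized approach mentioned above can be shown at its smallest scale: one qubit evolving under H = X + Z. Because X and Z don't commute, exp(-i(X+Z)t) is not exp(-iXt)exp(-iZt), and splitting the evolution into n slices shrinks the error roughly like 1/n. Everything here is a classical 2x2-matrix sketch of the idea, not a hardware procedure.

```python
import cmath, math

# First-order Trotterization of exp(-i(X+Z)t) for a single qubit.
# Matrices are 2x2 lists of lists of complex numbers.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_x(t):   # exp(-i X t) = cos(t) I - i sin(t) X
    c, s = math.cos(t), -1j * math.sin(t)
    return [[c, s], [s, c]]

def exp_z(t):   # exp(-i Z t) = diag(e^{-it}, e^{it})
    return [[cmath.exp(-1j * t), 0], [0, cmath.exp(1j * t)]]

def exact(t):
    # (X+Z)^2 = 2I, so exp(-i(X+Z)t) = cos(sqrt(2)t) I - i sin(sqrt(2)t) (X+Z)/sqrt(2).
    c = math.cos(math.sqrt(2) * t)
    s = -1j * math.sin(math.sqrt(2) * t) / math.sqrt(2)
    return [[c + s, s], [s, c - s]]

def trotter(t, n):
    # n slices of exp(-iX t/n) exp(-iZ t/n).
    step = matmul(exp_x(t / n), exp_z(t / n))
    u = [[1, 0], [0, 1]]
    for _ in range(n):
        u = matmul(u, step)
    return u

def error(t, n):
    u, v = trotter(t, n), exact(t)
    return max(abs(u[i][j] - v[i][j]) for i in range(2) for j in range(2))

# More Trotter steps, smaller error.
print(error(1.0, 4), error(1.0, 64))
```

The hardware bottleneck shows up here directly: more Trotter steps mean a deeper circuit, so on a noisy machine the algorithmic error you remove is traded against the gate errors you add.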
Quantum Sampling and Quantum Machine Learning
Sampling tasks like random circuit sampling (the basis of Google’s 2019 supremacy claim) can be performed by quantum computers in ways that are classically hard to replicate exactly. Whether these specific sampling distributions are useful for anything practical remains debated.
Quantum machine learning (QML) applies quantum circuits as trainable models, analogous to neural networks. For classical data, QML has not demonstrated practical advantage at any meaningful scale. For quantum data (output from quantum sensors or quantum simulations), QML shows more promise because the data is naturally represented in quantum states.
The Fault-Tolerant Algorithms: What Works Tomorrow
These algorithms assume error-corrected logical qubits. They can run circuits thousands or millions of gates deep without errors accumulating. The results are mathematically precise, not approximate.
Shor’s Algorithm (Integer Factoring)
What it does: Factors large integers into their prime components in polynomial time. Since RSA encryption relies on the hardness of factoring, a sufficiently large quantum computer running Shor’s algorithm breaks most current public-key cryptography.
What it needs: For a 2048-bit RSA key, estimates range from about 4,000 to 20,000 logical qubits, depending on the implementation. At 1,000:1 physical-to-logical ratios (surface code), that's 4 to 20 million physical qubits. Recent optimizations and newer error-correcting codes could reduce this significantly, but "significantly" still means millions of physical qubits.
Current status: Has been run on trivially small numbers (largest reliably factored: around 21, though some contested claims go higher using hybrid or special-purpose methods). The algorithm is proven correct. The hardware gap is everything.
4,000-20,000: logical qubits required for 2048-bit RSA.
4-20M: physical qubits at a 1,000:1 ratio (surface code).
~21: largest number factored on real quantum hardware.
Timeline: Most serious estimates place cryptographically relevant factoring at 2030-2040. This is why NIST finalized post-quantum cryptography standards in 2024 and governments are mandating migration now. The threat is real but deferred, and preparation must happen years before the capability arrives.
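The structure of Shor's algorithm is visible even without a quantum computer, because only one subroutine is quantum. The sketch below factors N = 15 with the quantum part (period finding) replaced by classical brute force; on a fault-tolerant machine, quantum phase estimation would do that one step in polynomial time, and all the pre- and post-processing shown here would stay classical.

```python
import math

# Skeleton of Shor's algorithm with the quantum subroutine replaced by
# classical brute force. Only find_order changes on a quantum computer.

def find_order(a, n):
    # Smallest r > 0 with a^r = 1 (mod n). Classically this takes time
    # exponential in the bit length of n; this brute force is the step
    # quantum phase estimation accelerates.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    if math.gcd(a, n) != 1:
        return math.gcd(a, n)          # lucky guess already shares a factor
    r = find_order(a, n)
    if r % 2 == 1:
        return None                    # odd order: retry with another a
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                    # trivial square root: retry
    return math.gcd(y - 1, n)          # nontrivial factor of n

# a = 7 has order 4 mod 15, and 7^2 = 4 (mod 15), so gcd(3, 15) = 3.
print(shor_factor(15, 7))  # 3, and 15 // 3 gives the cofactor 5
```

This is why the hardware gap is everything: the classical scaffolding has been ready for decades, and the entire exponential speedup lives inside `find_order`.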
Quantum Phase Estimation
What it does: Precisely measures the eigenvalues of a quantum operator. This is the workhorse of fault-tolerant quantum computing, underlying applications from chemistry to cryptography.
Why it matters: QPE gives exact energies rather than the variational approximations of VQE. For chemistry, this means calculating molecular properties with chemical accuracy (within 1 kcal/mol), the threshold at which predictions become useful for drug design and materials engineering.
What it needs: Deep circuits (thousands of gates), long coherence times, and error correction. The circuit depth scales with the desired precision.
Current status: Small demonstrations on a few qubits. Not yet practical for useful problem sizes.
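What QPE's measurement statistics look like can be computed classically from the textbook closed form: after the inverse QFT, the probability of reading out the m-bit string k is the squared magnitude of a geometric sum in the eigenphase. The sketch below evaluates that distribution directly; the phase value and bit count are illustrative, and no circuit is being simulated gate by gate.

```python
import cmath, math

# Measurement distribution of m-bit phase estimation on an eigenstate with
# eigenphase phi. The outcome k concentrates on the best m-bit
# approximation of phi, and precision doubles with every extra bit
# (hence the deep circuits QPE needs).

def qpe_distribution(phi, m):
    big_m = 2 ** m
    probs = []
    for k in range(big_m):
        # Amplitude (1/M) * sum_j exp(2*pi*i*j*(phi - k/M)).
        amp = sum(cmath.exp(2j * math.pi * j * (phi - k / big_m))
                  for j in range(big_m)) / big_m
        probs.append(abs(amp) ** 2)
    return probs

phi = 0.3
probs = qpe_distribution(phi, 4)                  # 4 bits: grid spacing 1/16
k_best = max(range(16), key=probs.__getitem__)
print(k_best / 16, probs[k_best])                 # peak near phi = 0.3
```

The standard guarantee is that the nearest grid point carries probability at least 4/pi^2 (about 0.41), which is what makes a single run informative once error correction allows the circuit to finish.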
Amplitude Estimation
What it does: Estimates the probability of a quantum event quadratically faster than classical sampling. If classical Monte Carlo needs a million samples for a given precision, quantum amplitude estimation needs about a thousand.
Why it matters: Monte Carlo simulation drives risk analysis in finance, reliability engineering, and scientific computation. A quadratic speedup on Monte Carlo is commercially relevant for institutions running millions of daily simulations. Goldman Sachs, JPMorgan, and other financial institutions have published research on quantum amplitude estimation for derivatives pricing and risk analysis.
What it needs: Error-corrected qubits. The circuit depth grows with the desired precision, requiring fault tolerance.
Current status: Proof-of-concept demonstrations only. Practical advantage requires early fault-tolerant machines.
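The quadratic speedup is easiest to see as a back-of-envelope query count. For additive precision eps, classical Monte Carlo needs on the order of p(1-p)/eps^2 samples, while canonical amplitude estimation needs on the order of pi/eps oracle calls. The constants below are indicative rather than tight, and real resource estimates also charge for the depth of each oracle call.

```python
import math

# Rough sample/query counts for estimating a probability p to within eps.

def classical_samples(p, eps):
    # Variance of a Bernoulli(p) sample is p(1-p); standard error
    # sqrt(p(1-p)/N) <= eps gives N >= p(1-p)/eps^2.
    return math.ceil(p * (1 - p) / eps ** 2)

def qae_queries(eps):
    # Canonical amplitude estimation: error ~ pi/M for M oracle calls.
    return math.ceil(math.pi / eps)

for eps in (1e-2, 1e-3, 1e-4):
    print(eps, classical_samples(0.5, eps), qae_queries(eps))
```

At eps = 10^-4 the gap is roughly 25 million samples versus about 31,000 queries, which is why the finance research cited above keeps returning to this algorithm despite the fault-tolerance requirement.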
Quantum Linear Algebra (HHL and Successors)
What it does: The Harrow-Hassidim-Lloyd algorithm solves systems of linear equations exponentially faster than classical methods, under specific conditions.
The catch: The exponential speedup assumes you can efficiently load classical data into quantum states and extract useful information from the quantum solution. Both of these steps can erase the speedup entirely for many practical problem types. Subsequent work has clarified exactly when HHL provides genuine end-to-end advantage, and the conditions are restrictive.
Current status: Theoretical. The data loading problem (sometimes called the “input problem” or “quantum data loading bottleneck”) remains a major practical barrier for any quantum algorithm that operates on classical data.
The Comparison Table
| Algorithm | Era | Speedup | Hardware Needed | Current Best Result | Practical Timeline |
|---|---|---|---|---|---|
| VQE | NISQ | Unknown vs classical | 10-50 noisy qubits | Small molecules (H2, LiH) | Running now, no advantage shown |
| QAOA | NISQ | Unknown vs classical | 10-100+ noisy qubits | Toy optimization (20-50 variables) | Running now, no advantage shown |
| Quantum simulation (variational) | NISQ | Possibly near-term | 50-200 noisy qubits | Spin models, lattice models | Possible useful results 2026-2028 |
| Shor’s (factoring) | Fault-tolerant | Exponential | 4,000-20,000 logical qubits | Factored small numbers (~21) | 2030-2040 |
| Quantum phase estimation | Fault-tolerant | Exponential for eigenvalues | 1,000+ logical qubits | Few-qubit demos | Late 2020s (early fault-tolerant) |
| Amplitude estimation | Fault-tolerant | Quadratic | 100-1,000 logical qubits | Proof-of-concept | Late 2020s |
| HHL (linear algebra) | Fault-tolerant | Exponential (conditional) | 1,000+ logical qubits | Tiny demonstrations | Unknown; input problem unresolved |
What to Watch
Three signals will tell you when the balance shifts.
First: quantum simulation beating classical simulation. This is the most likely first demonstration of genuine quantum advantage for a useful problem. When a quantum computer simulates a molecular system or materials property that cannot be classically verified (because the classical simulation is itself intractable), we’ve crossed a threshold. Several groups are targeting this for late 2026 to 2028.
Second: error correction reaching practical scale. When machines demonstrate 100+ logical qubits with error rates below 10^-6 per gate, the fault-tolerant algorithms unlock. This changes the entire value proposition. Watch for announcements about “logical qubit count” rather than physical qubit count.
Third: classical competition stalling. Quantum advantage is relative. If classical algorithms keep improving at their current pace, the crossover point recedes. If classical progress plateaus for specific problem classes, quantum’s moment arrives sooner. Track the classical benchmarks as carefully as the quantum ones.
2025-2026: NISQ era; variational algorithms, no proven advantage.
2026-2028: Quantum simulation may beat classical simulation.
Late 2020s: Early fault-tolerant machines; phase estimation and amplitude estimation unlock.
2030-2040: Cryptographically relevant factoring (Shor's).
The pharmaceutical company that spent six months on VQE and learned what hardware to wait for was making a smart investment. They weren’t deploying quantum. They were calibrating their expectations with real data. That’s what these algorithms are for right now: building the institutional knowledge to move fast when the hardware catches up.
Key Takeaways
- NISQ algorithms (VQE, QAOA) run on today’s hardware but have not demonstrated practical advantage over classical methods.
- Fault-tolerant algorithms (Shor’s, QPE, amplitude estimation) have proven speedups but require error-corrected hardware that doesn’t exist yet.
- Quantum simulation is the nearest-term candidate for genuine useful advantage.
- The value of studying these algorithms now is knowing exactly what hardware milestones to watch for.