
The classical simulation wall isn't at 50 qubits — it's at entanglement depth. A 1,000-qubit circuit can be easier to simulate than a 20-qubit one.

Reddit r/QuantumComputing (RSS)

The "50-qubit wall" gets repeated constantly, but it's not quite right. The actual limit is bond dimension, not qubit count. In MPS/tensor-network simulation, the bond dimension satisfies χ ≤ 2^d, where d is the number of entangling layers (and χ can never exceed 2^(N/2), the cap for an N-qubit chain, which is why the deep-VQE row below saturates at 1,024 rather than 2^20). Memory scales as N · χ² · 16 bytes. That means:

| Circuit | N | depth | χ | Memory |
|---|---|---|---|---|
| Deep VQE ansatz | 20 | 20 | 1,024 | 335 MB |
| Willow-scale RCS | 105 | 5 | 32 | 1.7 MB |
| Large shallow circuit | 1,000 | 3 | 8 | 1 MB |

The 1,000-qubit circuit is cheaper to simulate than the 20-qubit one. Both are classically exact.

The reason the "50-qubit wall" persists is that most benchmark circuits (RCS, random Clifford, etc.) are designed to be maximally entangling — so they hit the depth wall fast regardless of N. But for VQE, QAOA, chemistry ansätze, and any circuit with a brickwork structure below depth ~10, qubit count is essentially irrelevant.

This is well known in condensed matter (Vidal 2003; Hastings' area law, 2007) but seems underappreciated in the broader QC community. Single-qubit gates don't grow bond dimension at all — only two-qubit gates count.

Curious whether others have run into this distinction in practice, especially in near-term algorithm design where circuit depth is the actual bottleneck.

submitted by /u/nnoorbakhsh
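As a sanity check, the memory estimate is easy to reproduce. This is a minimal sketch of the scaling argument, not any particular simulator's accounting; the `min` with 2^(N/2) is the standard bond-dimension cap for an N-qubit chain, and 16 bytes assumes complex128 amplitudes:

```python
def mps_memory_bytes(n_qubits: int, depth: int) -> int:
    # Bond dimension chi is capped by both the number of entangling
    # layers (2^depth) and the chain midpoint (2^(n/2)).
    chi = min(2 ** depth, 2 ** (n_qubits // 2))
    # N tensors, each of order chi^2 complex128 entries (16 bytes each).
    return n_qubits * chi ** 2 * 16

for name, n, d in [("Deep VQE ansatz", 20, 20),
                   ("Willow-scale RCS", 105, 5),
                   ("Large shallow circuit", 1000, 3)]:
    print(f"{name}: {mps_memory_bytes(n, d) / 1e6:.1f} MB")
```

Running it reproduces the table: 335.5 MB, 1.7 MB, and 1.0 MB respectively — the 1,000-qubit shallow circuit really is the cheapest of the three.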


Tags

quantum-algorithms
quantum-hardware
