New Logical Qubit Tracker: Quantum Error Correction and the Rise of Logical Qubits

Quantum computing aims to revolutionise fields from drug discovery to materials science. Yet, there’s a fundamental problem standing in the way. Qubits are extraordinarily fragile. Unlike classical computer bits that reliably hold their 0 or 1 state, quantum bits lose their quantum properties within microseconds due to interference from their environment. This is where quantum error correction and logical qubits come into play. They have become the most important metrics for tracking genuine progress in the field.
The Error Problem in Quantum Computing

To understand why error correction matters so much, consider this: even the best physical qubits today have error rates around 1 in 1,000 per operation. That might sound acceptable, but running a useful quantum algorithm such as Shor's factoring algorithm would require billions of operations. With a 0.1% error rate per operation, a computation would almost certainly fail long before producing any meaningful result.

Classical computers solved their error problems decades ago using redundancy: they simply copied important bits multiple times, and if one bit flips due to a cosmic ray or electrical noise, a majority vote determines the correct value. But quantum mechanics complicates matters. The no-cloning theorem states that you cannot make an exact copy of an unknown quantum state, so you cannot simply duplicate a qubit as a backup.

Enter the Logical Qubit

The ingenious solution, first proposed by Peter Shor in the 1990s, involves spreading quantum information across multiple physical qubits (PQs) in an entangled state, creating what we now call a logical qubit (LQ). These two terms, PQ and LQ, have become the essential vocabulary for understanding quantum computing progress.

Think of it this way: a single physical qubit is like a message written in disappearing ink that might vanish at any moment. A logical qubit is like encoding that same message across several notebooks using a clever code, with a team constantly checking and correcting any smudges. The message survives even if individual letters get corrupted, provided not too many errors accumulate at once.

The key insight is that while you cannot copy a qubit's state, you can entangle multiple PQs. This entanglement allows error-detecting measurements on ancillary qubits to reveal whether an error occurred, and of what type, without disturbing the actual quantum information being protected.

Surface Codes as the Leading Approach

Among the various quantum error correction schemes, surface codes have emerged as the most promising for near-term hardware. In a surface code, PQs are arranged in a grid pattern, with "data qubits" storing actual quantum information at the corners and "measure qubits" at the centres performing continuous error checks.

The "distance" of a surface code sets both its error-protecting power and how many PQs encode a single LQ: a distance-d code uses 2d² − 1 PQs, so distance-3 needs 17 PQs per LQ, distance-5 needs 49, and distance-7 needs 97. Larger distances provide better error protection but require more physical qubits. The critical question has always been: as you add more PQs to increase the code distance, does error correction actually improve, or do the additional qubits simply introduce more opportunities for errors?

Breaking the Threshold with Google's Willow Chip

In December 2024, Google Quantum AI achieved what many consider the most significant milestone in quantum error correction to date. Their new Willow processor demonstrated "below-threshold" operation, meaning that adding more qubits actually reduced the LQ error rate rather than increasing it. Testing surface codes built on 3×3, 5×5, and 7×7 grids of data qubits, the Google team showed that each increase in code distance cut the logical error rate roughly in half (the sketch below makes this scaling concrete). This exponential improvement with scale is precisely what theory predicted would happen once PQ quality passed a certain threshold, but it had never been demonstrated experimentally until Willow.
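As a minimal sketch of this scaling, the snippet below assumes the standard surface-code count of 2d² − 1 physical qubits at distance d and a suppression factor of about 2 per distance step, roughly what Google reported; the starting error rate is a made-up placeholder, not Google's measured figure.

```python
# Illustrative below-threshold scaling; LAMBDA and the starting error
# rate are assumptions, not Google's measured figures.
LAMBDA = 2.0      # assumed error-suppression factor per code-distance step
EPS_D3 = 3e-3     # hypothetical logical error rate per cycle at distance 3

for d in (3, 5, 7):
    steps = (d - 3) // 2              # distance increments beyond d = 3
    eps = EPS_D3 / LAMBDA ** steps    # logical error rate halves per step
    n_pq = 2 * d ** 2 - 1             # physical qubits in a distance-d code
    print(f"d={d}: {n_pq} PQs per LQ, logical error rate ~ {eps:.1e}")
```

The qubit counts reproduce the 17, 49, and 97 PQs quoted above; only the error rates are placeholders.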
The 105-qubit Willow chip achieved T1 coherence times approaching 100 microseconds, a five-fold improvement over Google's previous Sycamore processor. This gives error correction protocols significantly more time to operate before quantum information degrades. Google estimates that the benchmark computation Willow performed in under five minutes would take the world's fastest supercomputer 10 septillion years, a figure that vastly exceeds the age of the universe.

The announcement sent Alphabet's stock price climbing over 5%, one of the company's largest single-day gains in months, as investors recognised the strengthened competitive position against IBM and Microsoft in quantum computing.

Quantinuum's Trapped Ion Advances

While Google pursues superconducting qubits, Quantinuum has achieved remarkable results using trapped ion technology. In late 2024, Quantinuum demonstrated 50 entangled LQs with fidelities exceeding 98%, a new industry benchmark for scalable quantum systems.
Their Quantum Charge-Coupled Device (QCCD) architecture offers a key advantage: all-to-all qubit connectivity, meaning any qubit can directly interact with any other qubit in the system. This flexibility allows Quantinuum to implement various error correction codes and test different approaches.

In collaboration with researchers from the University of Colorado, Quantinuum also achieved another first: entangling four LQs with better fidelity than their underlying PQs. The logical qubits achieved fidelities between 99.5% and 99.7%, compared to 97.8% to 98.7% for uncorrected physical qubits, demonstrating that error protection was genuinely adding value rather than just overhead.

Microsoft and the Road to Fault Tolerance

Microsoft, working with Quantinuum and Atom Computing, has been pursuing complementary approaches. Using a qubit virtualisation system with Quantinuum's trapped ion quantum computer, Microsoft created 12 LQs, the most reliable logical qubits on record at the time of the announcement. This development significantly reduced error rates, bringing the industry closer to practical quantum computing solutions. Microsoft's collaboration with Atom Computing then pushed the boundary further, achieving entanglement across 24 LQs, the highest number of entangled logical qubits demonstrated at that point.

Additionally, Microsoft and Photonic demonstrated a teleported CNOT gate between qubits physically separated by 40 metres, confirming the remote quantum entanglement required for long-distance quantum communication. Microsoft's roadmap emphasises what the company calls "Level 2 Resilient" quantum computing as a necessary stepping stone before reaching "Level 3" practical quantum advantage.

Global Progress from China's Wukong Processor

The race for logical qubits is not confined to Western companies. In November 2024, researchers at the University of Science and Technology of China demonstrated a universal set of logical gates on their "Wukong" superconducting quantum processor. Using a distance-2 surface code, the team encoded two LQs within an 8-PQ region, implemented a transversal CNOT gate, and performed arbitrary single-qubit rotations.
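In general, a transversal CNOT applies a physical CNOT between the i-th qubit of the control code block and the i-th qubit of the target block:

$$\mathrm{CNOT}_L = \prod_{i=1}^{n} \mathrm{CNOT}(c_i, t_i)$$

Because each physical gate touches only one qubit in each block, a single faulty gate cannot spread errors within a code block, which is what makes transversal gates naturally fault-tolerant.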
The team successfully prepared logical Bell states and confirmed entanglement by violating the CHSH inequality (see below), demonstrating that quantum information is being reliably preserved and manipulated at the LQ level. This marks a critical milestone for superconducting platforms in Asia.
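For context, the CHSH quantity combines correlation measurements at two analyser settings per side:

$$S = E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad |S| \le 2 \ \text{(classical)}, \qquad |S| \le 2\sqrt{2} \ \text{(quantum)}$$

An ideal Bell pair measured at optimal angles gives $|S| = 2\sqrt{2} \approx 2.83$, so observing $|S| > 2$ at the logical level certifies genuine entanglement between the encoded qubits.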
Novel Approaches Beyond Traditional Surface Codes

Researchers continue exploring alternative error correction strategies.
Oxford Quantum Circuits (OQC) has demonstrated a hardware-efficient approach using dual-rail dimon qubits, a fixed-frequency multimode superconducting technology that establishes robust error detection capabilities. By creating a logical subspace distinctly separate from error-detected states, OQC achieved substantial reductions in LQ error rates compared to physical modes, providing a roadmap toward 2035.

Meanwhile, researchers have demonstrated autonomous quantum error correction that extends the lifetime of a photonic LQ by 18% beyond the best PQ in the same system, definitively surpassing the "break-even" point where error correction provides net benefit. This approach, which uses engineered dissipation rather than measurement-based feedback, points toward more practical implementations.

What the Numbers Mean for Real Applications

McKinsey forecasts the quantum technology sector to reach $97 billion by 2035, driven by advances in both quantum computing and sensing. But to realise this value, LQs must progress from laboratory demonstrations to commercially useful systems. To put current achievements in perspective, consider the requirements for transformative applications.

Breaking RSA-2048 encryption with Shor's algorithm would require approximately 4,000 LQs with extremely low error rates (around 10⁻¹² per operation). With Alice & Bob's efficient approach, this might require fewer than 100,000 PQs; with traditional surface codes, closer to 20 million. Current achievements remain far from either target.

Quantum chemistry simulations for drug discovery might need 100-1,000 LQs, depending on the molecular system. Industry roadmaps suggest this range could become achievable by around 2030. Google's team notes that quantum computing could dramatically accelerate drug discovery, potentially solving molecular modelling problems in minutes that would take classical computers millennia.

Quantum machine learning applications vary widely in their requirements, but many near-term proposals assume 50-200 high-quality LQs. The gap between current capabilities (single-digit to double-digit LQ counts with error rates around 10⁻³) and these targets explains why most experts estimate practical quantum advantage remains 5-10 years away for most applications, though the trajectory of progress in 2024-2025 has been remarkably fast.

Reducing the Overhead in the Race for Efficient Error Correction

One of the most promising developments in making LQs practical is the dramatic reduction in PQ-to-LQ ratios. Traditional surface codes require around 1,000 PQs per LQ for useful error rates, a daunting overhead that makes scaling difficult. French startup Alice & Bob, working with research institute Inria, has developed a new architecture using low-density parity-check (LDPC) codes on cat qubits that could enable 100 high-fidelity LQs with as few as 1,500 physical cat qubits. Even more dramatically, their approach could run Shor's algorithm with fewer than 100,000 PQs, a roughly 200-fold improvement over competing methods that require approximately 20 million qubits. The arithmetic behind these ratios is sketched below.
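A minimal sketch of the overhead arithmetic, using only the figures quoted above (the constants are the article's reported values, not independent estimates):

```python
# Physical-to-logical overhead implied by the figures quoted in this article.
SURFACE_PQ_PER_LQ = 1_000        # traditional surface-code overhead (as reported)
CAT_PQS, CAT_LQS = 1_500, 100    # Alice & Bob LDPC-on-cat-qubit design (as reported)

print(f"Surface code:   ~{SURFACE_PQ_PER_LQ} PQs per LQ")
print(f"Cat-qubit LDPC: ~{CAT_PQS // CAT_LQS} PQs per LQ")   # -> ~15 PQs per LQ

# Shor's algorithm on RSA-2048 (~4,000 LQs at ~1e-12 error per operation)
SHOR_SURFACE, SHOR_CAT = 20_000_000, 100_000                 # reported qubit totals
print(f"Shor overhead reduction: ~{SHOR_SURFACE // SHOR_CAT}x")   # -> ~200x
```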
As Boston Consulting Group's Jean-François Bobier noted, over 90% of quantum computing value depends on strong error correction, making these efficiency gains critical for commercial viability.

Company Roadmaps in the Race to 100 LQs

Analysis of public company roadmaps reveals a striking convergence: multiple companies are targeting approximately 100 LQs by around 2030, regardless of their underlying qubit technology.

IBM's roadmap centres on its Quantum Starling system, targeted for 2028 and featuring 200 LQs. Using IBM's efficient LDPC codes, Starling will require approximately 10,000 PQs, a tenfold improvement over traditional surface codes. By 2029, the system aims to execute 100 million gates on those 200 logical qubits. IBM's current Heron processor, with 156 qubits and real-time classical communication capabilities, establishes the foundation for this progression.

Google aims to demonstrate a "useful, beyond-classical" computation on current hardware, then scale to thousands of LQs in the late 2020s. Quantinuum has demonstrated the fastest progress in LQ count to date and continues to advance its trapped ion platform.

Tracking Progress with the Entangled Future Logical Qubit Tracker

For investors, researchers, and enthusiasts wanting to monitor this rapidly evolving landscape, Entangled Future (entangledfuture.com), powered by Quantum Zeitgeist, provides a comprehensive resource through its Quantum Navigator platform.
The Logical Qubit Tracker offers a dedicated progress dashboard focused on fault-tolerant quantum computing developments. The platform tracks over 943 quantum computing companies across 48 countries, with dedicated sections for monitoring the quantum stack, investment rounds, and technological progress.

The tracker provides a focused view of which companies are achieving error correction milestones and how the field is progressing toward fault tolerance. As LQs become the primary metric for measuring genuine progress in quantum computing, moving beyond simple PQ counts that tell only part of the story, this kind of systematic tracking becomes essential.

The Quantum Navigator complements Quantum Zeitgeist's ongoing coverage of the quantum industry, providing the data infrastructure to understand where the field truly stands.
The Road Ahead

Logical qubits represent quantum computing's transition from laboratory curiosity to practical technology. The achievements of 2024-2025 (Google breaking the error correction threshold, Quantinuum demonstrating 50 entangled LQs, Microsoft reaching 24 entangled LQs) mark the beginning of what many call the "resilient quantum" era.

However, significant challenges remain. Current demonstrations involve relatively small numbers of LQs and quantum memory operations rather than full logical gate operations. Scaling from today's achievements to the thousands of LQs needed for transformative applications requires continued advances in PQ quality, control systems, and error correction algorithms. The encouraging news is that the fundamental approach is now proven: quantum error correction works. The remaining challenges are engineering challenges, formidable certainly, but no longer questions of whether but when.

Key Sources and References

Google Quantum AI, "Meet Willow, our state-of-the-art quantum chip" (December 2024) – blog.google/technology/research/google-willow-quantum-chip/
Nature, "Quantum error correction below the surface code threshold" (December 2024) – Google's peer-reviewed Willow results
Quantinuum, "50 Entangled Logical Qubits", presentation at the Q2B Conference (December 2024)
Nature, "Entangling logical qubits with lattice surgery" (2021) – foundational work on logical qubit operations
Nature, "Logical quantum processor based on reconfigurable atom arrays" (December 2023) – Harvard/QuEra programmable logical qubit processor
IEEE Spectrum, "Quantinuum Successfully Teleports a Logical Qubit" (September 2024)
Quantum Zeitgeist, "Google's New Willow Chip Quantum Computer With 105 Qubits Outperforms Classical Computers" (December 2024)
Quantum Zeitgeist, "Alice & Bob Novel Error Correction Reduces Number Of Qubits" (January 2024)
Quantum Zeitgeist, "Quantum In 2024: A Look In Rear View Mirror At A Momentous Year" (December 2024)
Quantum Zeitgeist, "Chinese Quantum Processor Achieves Universal Logical Gates For Error Correction" (November 2024)
Quantum Zeitgeist, "OQC Advances Quantum Error Correction With Dual-Rail Dimon Qubit Technology" (June 2025)
Quantum Zeitgeist, "Autonomous Quantum Error Correction Surpasses Break-Even, Protecting Photonic Logical Qubits" (October 2025)
Quantum Zeitgeist, "Quantum Computing Future – 6 Alternative Views Of The Quantum Future Post 2025" – IBM Starling roadmap details
Entangled Future Quantum Navigator (powered by Quantum Zeitgeist) – entangledfuture.com – Logical Qubit progress tracker and quantum company database
