
Quantum Error Correction: Surface Code & Fault-Tolerant Computing

News on quantum error correction (QEC): logical qubits, the surface code, fault-tolerant quantum computing, and error mitigation and suppression.

4,214 Articles
Updated Daily

Quantum error correction (QEC) is the critical enabler for fault-tolerant quantum computing, protecting quantum information from environmental noise through redundant encoding across multiple physical qubits. Recent breakthroughs demonstrated below-threshold error correction where logical qubit error rates fall below physical qubit rates.

The 2D surface code is the leading QEC approach due to its high error threshold (~1%), local nearest-neighbor interactions, and compatibility with superconducting chip designs. Recent milestones include Google's Willow chip, which demonstrated below-threshold surface-code scaling, and IBM's heavy-hex lattice, which optimizes qubit connectivity for surface-code implementation.
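The meaning of "below threshold" can be sketched with the standard phenomenological scaling relation for surface-code logical error rates, p_L ≈ A·(p/p_th)^((d+1)/2). This is an illustrative textbook heuristic, not a result from any article above; the prefactor A and threshold p_th values are assumptions chosen for the example.

```python
def logical_error_rate(p, d, p_th=0.01, A=0.1):
    """Heuristic logical error rate per round for a distance-d surface code.

    p     : physical error rate
    d     : code distance (odd integer)
    p_th  : error threshold (~1% for the surface code; assumed value)
    A     : phenomenological prefactor (assumed value)
    """
    return A * (p / p_th) ** ((d + 1) / 2)

# Below threshold (p < p_th): increasing distance suppresses logical errors.
for d in (3, 5, 7):
    print(d, logical_error_rate(0.001, d))   # rates fall as d grows

# Above threshold (p > p_th): adding qubits makes the logical qubit worse.
for d in (3, 5, 7):
    print(d, logical_error_rate(0.02, d))    # rates grow as d grows
```

This is the crossover the Willow experiment demonstrated in hardware: each step up in code distance cut the logical error rate, confirming operation below threshold.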

India's Quantum Error Correction Research

India's National Quantum Mission includes quantum error correction in its basic science research component. The Foundation for QC Innovation at IISc Bengaluru addresses error correction as part of its quantum computing development. The Harish-Chandra Research Institute (HRI) and Institute of Mathematical Sciences (IMSc) conduct theoretical research on quantum error correction codes.

The NQM targets developing intermediate-scale quantum computers with 50-1000 physical qubits, requiring error mitigation and eventually error correction to achieve quantum advantage. The mission includes development of indigenous control electronics and error mitigation techniques.


Quantum AI Shortcut Could Speed up Language Models with Reduced Complexity

Scientists are developing novel methods to improve sequence prediction, a crucial task in areas such as natural language processing and dynamical systems modelling. Alessio Pecilli and Matteo Rosati, both from the Dipartimento di Ingegneria Civile, Informatica e delle Tecnologie Aeronautiche at the Università degli Studi Roma Tre, and colleagues present a variational implementation of self-attention, termed Quantum Attention by Overlap Interference (QSA), which leverages quantum principles to predict future sequence elements. This research is significant because QSA achieves nonlinearity through state-overlap interference and directly calculates a loss function as an observable expectation value, circumventing conventional decoding processes. Moreover, the team demonstrates that QSA exhibits potentially advantageous computational scaling compared to classical methods and successfully learns sequence prediction from both classical data and complex many-body quantum systems, establishing a trainable attention mechanism for dynamical modelling.

Quantum self-attention via direct Rényi-1/2 entropy measurement

Scientists have developed a novel quantum self-attention mechanism, termed QSA, that directly addresses computational bottlenecks within transformer architectures and large language models. This breakthrough focuses on the core self-attention operation, crucial for predicting sequential data by weighting combinations of past information. Unlike previous quantum approaches, the research realises the necessary non-linearity through interference of quantum state overlaps, directly translating a Rényi-1/2 cross-entropy loss into an expectation value measurable as an observable. This design bypasses the complex decoding processes typically required to convert quantum predictions into classical outputs, streamlining the training procedure. Furthermore, QSA naturally integrates a trainable data embedding.

Source: Quantum Zeitgeist

Quantum Communication Secured by Choosing Measurement Basis Offers Ultimate Privacy

Scientists have developed a novel protocol for one-way quantum secure direct communication, utilising the choice of measurement basis as the secret key. Santiago Bustamante and Boris A. Rodríguez, both from Universidad de Antioquia, alongside Elizabeth Agudelo of TU Wien, demonstrate a system where information is encoded and decoded through measurements performed in either the computational or Hadamard basis. This research is significant because it establishes information-theoretic security against BB84-symmetric attacks using finite ensembles of entangled pairs and a public channel. Importantly, the protocol requires no local unitary operations by the receiver, making it particularly suitable for practical implementation in network configurations such as star networks.

This research addresses the fundamental question of distinguishing ensembles described by identical compressed density operators and introduces a method for encoding and decoding classical information through measurements in either the computational or Hadamard basis. Employing quantum wiretap channel theory, the study rigorously assesses the secure net bit rates and certifies the information-theoretic security of various implementations against BB84-symmetric attacks. A key advantage of this model is the elimination of local unitary operations by the receiver, making it particularly suitable for star network configurations. The work builds upon finite ensembles of entangled EPR pairs, each shared between two parties, Alice and Bob, and explores how local measurements influence the transmission of a single bit of information. The researchers define a compressed density operator as the state of an average entity within an ensemble, acknowledging that this operator may not fully capture all information about the ensemble's preparation. By measuring qubits in either the computational or Hadamard basis, Alice and Bob induce correlated collapses in their respective states.
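The correlated-collapse ingredient the protocol relies on can be reproduced in a few lines of statevector simulation. This is a minimal sketch of that one ingredient only, not the authors' protocol: it prepares a Bell pair and shows that when both parties measure in the same basis (computational "Z" or Hadamard "X"), their outcomes always agree.

```python
import numpy as np

rng = np.random.default_rng(7)

# |Phi+> = (|00> + |11>) / sqrt(2), amplitude order |00>, |01>, |10>, |11>
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # Hadamard gate
I2 = np.eye(2)

def measure_both(state, basis_a, basis_b):
    """Measure each qubit in the 'Z' (computational) or 'X' (Hadamard) basis.

    Rotating a qubit by H before a Z measurement is equivalent to
    measuring it in the Hadamard (X) basis.
    """
    U_a = H if basis_a == "X" else I2
    U_b = H if basis_b == "X" else I2
    amps = np.kron(U_a, U_b) @ state
    probs = np.abs(amps) ** 2
    outcome = rng.choice(4, p=probs)      # index 0..3 encodes bits (a, b)
    return outcome >> 1, outcome & 1

# Same basis on both sides -> outcomes perfectly correlated for |Phi+>
same_z = [measure_both(phi_plus, "Z", "Z") for _ in range(200)]
same_x = [measure_both(phi_plus, "X", "X") for _ in range(200)]
print(all(a == b for a, b in same_z))  # True
print(all(a == b for a, b in same_x))  # True
```

The protocol's security question — distinguishing ensembles with identical compressed density operators — is of course not captured by this toy; the sketch only shows why the basis choice is usable as a shared secret at all.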

Source: Quantum Zeitgeist

Quantum Compilation Speeds up 100x, Bringing Practical Quantum Computers Closer

Researchers are tackling the challenge of efficiently translating complex quantum algorithms into instructions for near-term quantum hardware. Aaron Hoyt from the University of Washington and Pacific Northwest National Laboratory, alongside Meng Wang and Fei Hua from Pacific Northwest National Laboratory and colleagues, presents QASMTrans, a novel end-to-end quantum compilation framework designed for just-in-time deployment. This work is significant because QASMTrans achieves over 100x faster compilation than existing tools like Qiskit on certain circuits, while maintaining comparable fidelity and uniquely offering direct integration with hardware control systems via pulse generation. By bridging the gap between logical circuits and physical implementation, and by incorporating noise-aware optimisation and circuit space sharing, QASMTrans facilitates the development and execution of real-time adaptive quantum algorithms on current quantum processing units.

Rapid Quantum Circuit Transpilation via Pulse-Level Gate Set Optimisation

Scientists have unveiled QASMTrans, a high-performance quantum compiler designed to rapidly translate abstract quantum algorithms into device-specific control instructions. This C++-based framework achieves over 100x faster compilation than existing tools like Qiskit for certain circuits, enabling the transpilation of large, complex circuits in a matter of seconds. QASMTrans distinguishes itself by offering complete, end-to-end device-pulse compilation and direct integration with quantum control systems such as QICK, effectively bridging the gap between logical circuits and the underlying hardware. The research focuses on accelerating transpilation, the process that converts high-level quantum circuits into a format compatible with the limitations of near-term quantum devices. By employing latency-aware, application-tailored gate sets at the pulse level, QASMTrans identifies critical sequences within a circuit and generates optimised pulse schedules.

Source: Quantum Zeitgeist

Quantum Entanglement's 'No-Signalling' Rule Bends, but Doesn't Break

Scientists are increasingly scrutinising the no-signalling principle, a cornerstone of Bell inequality and steering experiments, as experimental flaws can mimic violations beyond statistical fluctuations. Lucas Maquedano (Federal University of Paraná), Sophie Egelhaaf (University of Geneva), and Amro Abou-Hachem (Lund University), with colleagues including Jef Pauwels and Armin Tavakoli, present extensions to local hidden variable and local hidden state theories that accommodate quantifiable signalling. Their research develops non-classicality tests applicable to these extended models, utilising both complete statistical analysis and corrections to established Bell and steering inequalities. This work is significant because it addresses apparent signalling in realistic scenarios, specifically demonstrating its applicability to data arising from processor imperfections and inefficient detectors.

These violations, previously attributed to statistical fluctuations, can arise from subtle systematic effects present in realistic experimental setups. The work introduces extensions to local hidden variable and local hidden state theories, allowing for bounded and quantifiable amounts of signalling between entangled particles. This approach moves beyond simply enforcing no-signalling through data post-processing, instead explicitly relaxing classical models to incorporate a measurable signalling parameter. The study establishes methods for developing non-classicality tests applicable to these extended models, utilising both exact calculations based on complete statistical data and corrections to standard Bell and steering inequalities. These techniques were demonstrated using two scenarios known to exhibit apparent signalling: data obtained from an IBM quantum processor and post-selected data originating from inefficient detectors. By quantifying the permissible signalling, the research provides a means to distinguish genuine quantum non-classicality from artefacts introduced by experimental imperfections.

Source: Quantum Zeitgeist

Quantum Algorithm Cuts Molecular Energy Calculations’ Costs with Streamlined Approach

Scientists are continually seeking improvements to variational quantum eigensolver algorithms for accurate molecular ground-state energy calculations. Runhong He, Xin Hong (Key Laboratory of System Software, Chinese Academy of Sciences), and Qiaozhen Chai, alongside Ji Guan, Junyuan Zhou, and Arapat Ablimit, present a novel approach to enhance the adaptive derivative-assembled pseudo-Trotter variational eigensolver (ADAPT-VQE). Their research introduces Param-ADAPT-VQE, an algorithm that intelligently selects excitation operators using a parameter-based criterion, effectively reducing redundancy and associated measurement costs. By combining this with a sub-Hamiltonian technique and a hot-start optimisation strategy, the authors demonstrate significant gains in computational accuracy and scalability, paving the way for more practical applications of ADAPT-VQE in molecular simulations.

Parameter selection optimises variational quantum eigensolver performance for molecular simulations, leading to improved accuracy and efficiency

Scientists have developed a new algorithm, Param-ADAPT-VQE, that significantly enhances the efficiency of molecular ground-state energy calculations performed on quantum computers. This breakthrough addresses critical limitations in existing methods by reducing computational inaccuracies, minimising the size of the required quantum circuits, and dramatically lowering the number of measurements needed to achieve reliable results. The research introduces a parameter-based criterion for selecting excitation operators, a key component in building the quantum circuit, effectively avoiding the inclusion of redundant operators that hinder performance. This approach moves beyond traditional gradient-based methods, offering a more robust and streamlined pathway to accurate molecular simulations. The core of this advancement lies in the optimisation of the adaptive derivative-assembled pseudo-Trotter variational quantum eigensolver, ADAPT-VQE.

Source: Quantum Zeitgeist

Quantum Simulations Take a Leap Forward with Superconducting Circuits

Quantum computing promises to revolutionise several scientific and technological domains through fundamentally new ways of processing information. Laurin E. Fischer, affiliated with the Laboratoire de théorie et simulation des matériaux, Faculté des sciences et techniques de l'ingénieur, and IBM Quantum, alongside colleagues, demonstrates significant progress in enabling large-scale digital quantum simulations using superconducting qubits. This research is particularly significant because it addresses a critical limitation of current quantum devices: imperfections that hinder practical advantage for complex problems in fields such as condensed matter physics and materials science. By exploring methods across the computational stack, including hardware innovations, noise modelling, error mitigation, and algorithmic improvements, this work represents a crucial step towards extracting meaningful results from noisy quantum data and realising the full potential of quantum simulation.

The thesis was presented on 28 October at the Faculty of Science and Engineering, Laboratory of Theory and Simulation of Materials, Doctoral Programme in Materials Science and Engineering, for the degree of Doctor of Science by Laurin Elias Fischer. It was accepted on the proposal of the jury, with Professor Harald Brune as president, Professors Nicola Marzari and Ivano Tavernelli as thesis directors, and Professors Zoë Holmes, Zoltán Zimborás, and Frank Wilhelm-Mauch as rapporteurs. The work is documented as arXiv:2602.04719v1 [quant-ph], February 2026.

Advancing quantum simulation through hardware innovation, noise mitigation and algorithmic refinement promises to unlock previously intractable scientific challenges

Scientists across condensed matter physics and materials science widely recognise the transformative potential of quantum computing. However, practical quantum advantage for such problems remains out of reach on today's imperfect devices.

Source: Quantum Zeitgeist

DST Task Force Report: India Prepares for Post-Quantum Security by 2028

India is preparing to defend its digital infrastructure against the looming threat of quantum computing, with a national task force outlining a roadmap to achieve post-quantum security by 2028. The February 2026 report, "Implementation of Quantum Safe Ecosystem in India," details a phased approach, beginning with pilot programs in critical sectors like banking and finance. Recognizing the risk of "Harvest Now, Decrypt Later" (HNDL) attacks, the Task Force emphasizes proactive measures, stating that all cryptographic transition planning shall proceed under an "assume breach" principle. This ambitious plan includes establishing a National PQC Testing & Certification Program by December 2026 and mandating the adoption of quantum-safe products in government procurement, signaling a significant investment in future-proof cybersecurity.

Quantum Computing Threat & India's National Quantum Mission

This isn't a distant concern; the report outlines phased actions, beginning with pilots in high-priority systems like banking and finance, to be implemented by 2028, with Critical Information Infrastructure (CII) targeted by 2027. Procurement requirements will prioritize "crypto-agile and PQC-compliant assets," including a detailed "Bill of Materials (BOM)" encompassing software, hardware, and cryptographic configurations. Furthermore, the report emphasizes the need to "promote the adoption of existing indigenous quantum-safe solutions" developed by Indian R&D labs, industries, and startups, while simultaneously initiating new product development where gaps exist. This strategic roadmap positions India alongside nations formally defining PQC migration timelines, aiming for a secure and resilient digital future.

Report of the Task Force: Sub-Group Summaries

The current landscape of cryptographic security is bracing for a paradigm shift, driven by the rapidly approaching threat of quantum computing. Short-term actions are targeted for completion by 2028, and by 2027 for Critical Information Infrastructure.

Source: Quantum Zeitgeist

Nu Quantum Opens Trapped-Ion Networking Laboratory in Cambridge

Nu Quantum has announced the opening of a new trapped-ion networking laboratory in Cambridge, UK, marking the first dedicated industrial R&D facility for distributed trapped-ion quantum computing in Europe. The state-of-the-art facility doubles the company's existing research infrastructure and serves as the primary testbed for its Entanglement Fabric roadmap. The lab is designed to prove the company's Qubit-Photon Interface (QPI) technology with trapped-ion qubits, transitioning from theoretical modeling to in-house experimental validation of modular, multi-node quantum architectures.

The technical core of the new facility is the advancement of Nu Quantum's QPI, which utilizes optical microcavity technology to enhance the interaction between stationary qubits and flying photons. These interfaces employ nanostructured mirrors with active stabilization (achieving cavity length control with a precision of <5 picometres) to ensure resonance with specific qubit wavelengths. By integrating these microcavities into custom-built ion traps, the system facilitates high-rate, high-fidelity entanglement links between discrete quantum processing units (QPUs). This hardware-agnostic approach is designed to interconnect clusters of commercial processors into a distributed fabric, aiming to exceed current state-of-the-art remote entanglement rates and fidelities.

The expansion follows Nu Quantum's $60 million Series A funding round, the largest for a pure-play quantum networking company globally. The investment supports a growth phase focused on recruiting specialist Atomic, Molecular, and Optical (AMO) physics talent and expanding international operations. The laboratory integrates a specialized laser suite with wavelength stabilization developed in partnership with the National Quantum Computing Centre (NQCC). Collaborative efforts also involve the University of Sussex, Cisco, and Infineon Technologies.

Source: Quantum Computing Report

Entanglement reveals the difficulty of computational problems

[Figure: an example problem in adiabatic quantum computing, represented by an energy landscape. Each point on the landscape represents a candidate solution; the deepest valley, in dark blue, is the actual solution with the lowest energy. A difficult problem has multiple valleys of similar depth and therefore similar energy, and reaching the lowest-energy valley requires a large amount of entanglement and time; this is where quantum speed-up can be most crucial. Courtesy: Einar Gabbassov]

Entanglement is a key resource for quantum computation and quantum technologies, but it can also tell us much about a computational problem. That is the conclusion of a recent paper by Achim Kempf and Einar Gabbassov, applied mathematicians at Canada's University of Waterloo who are affiliated with Waterloo's Institute for Quantum Computing and the Perimeter Institute for Theoretical Physics. Writing in Quantum Science and Technology, Gabbassov and Kempf show how entanglement plays a fundamental role in determining both the efficiency and the hardness of quantum computation problems.

They considered the role of entanglement in adiabatic quantum computing, which pictures the problem as a landscape of hills and valleys whose shape depends on the problem to be solved. A point on the landscape represents a candidate solution to the problem: this could be a configuration of possible states of three qubits, for example.
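The landscape picture above can be made concrete in a toy numerical experiment. In adiabatic quantum computing one interpolates H(s) = (1 - s)·H0 + s·Hp between a driver Hamiltonian H0 and a diagonal problem Hamiltonian Hp whose diagonal entries are the landscape; the runtime needed scales roughly with the inverse square of the minimum spectral gap along the path. This sketch is illustrative only: the three-qubit cost landscape below is invented for the example, not taken from the paper.

```python
import numpy as np
from functools import reduce

X = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli-X
I2 = np.eye(2)

def op_on(site, op, n):
    """Embed a single-qubit operator on `site` of an n-qubit register."""
    return reduce(np.kron, [op if i == site else I2 for i in range(n)])

n = 3
# Driver: -sum_i X_i, whose ground state is the uniform superposition.
H0 = -sum(op_on(i, X, n) for i in range(n))

# Problem Hamiltonian: a diagonal "energy landscape" over the 2^n bitstrings.
# These costs are made up for illustration; bitstring 101 is the unique minimum.
costs = np.array([3.0, 2.0, 2.5, 1.0, 2.0, 0.0, 1.5, 2.5])
Hp = np.diag(costs)

def min_gap(H0, Hp, steps=200):
    """Smallest gap between ground and first excited state along H(s)."""
    gaps = []
    for s in np.linspace(0.0, 1.0, steps):
        evals = np.linalg.eigvalsh((1 - s) * H0 + s * Hp)  # ascending order
        gaps.append(evals[1] - evals[0])
    return min(gaps)

print(min_gap(H0, Hp))  # runtime needed scales roughly as 1/gap^2
```

Making the competing valleys closer in energy (e.g. setting another cost near 0.0) shrinks this minimum gap, which is the numerical face of "hard problems need more entanglement and time."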

Source: Physics World

Complex Chemical Calculations Made 25% Cheaper with New Quantum Technique

Researchers are continually seeking methods to reduce the computational cost of accurately modelling electronic structure, particularly for strongly correlated systems. Prateek Vaish and Brenda M. Rubenstein, both from the Department of Chemistry at Brown University, and colleagues present a novel active-space partitioning approach to significantly reduce the expense of Unitary Coupled Cluster (UCC) theory. Their work addresses the limitations imposed by the steep scaling of UCC's Baker-Campbell-Hausdorff expansion by combining a truncated UCCSD(4) method within a selected active space with MP2 treatment of external excitations. This innovation offers a tractable pathway for modelling correlated molecules and reactions on current classical computers and, importantly, provides a viable strategy for scaling UCC calculations to meet the demands of resource-constrained hardware.

This work introduces an active-space UCCSD(4)/MP2 method, effectively partitioning the complex calculations to make them tractable for both classical computers and emerging quantum hardware. The research centres on a fourth-order truncation of UCCSD within a selected active space, complemented by treatment of external excitations at the MP2 level, offering a pathway to scale UCC calculations for resource-constrained systems. Two distinct formulations were explored: a composite method summing internal and external contributions, and an interacting method coupling amplitudes for enhanced accuracy. Testing encompassed the GW100 dataset, a metaphosphate hydrolysis reaction, and the strongly correlated torsion of ethylene, revealing key insights into the performance of each formulation. Results demonstrate that the interacting method, utilising canonical orbitals, maintains robustness and accurately reproduces full UCCSD(4) potential energy curves while employing only 15–25% of the virtual orbitals within its active space.

Source: Quantum Zeitgeist

Twisted Quantum Codes Boost Error Correction and Extend Computing Potential

Researchers investigate finite-length qudit quantum low-density parity-check codes constructed using translation-invariant CSS constructions on two-dimensional tori with twisted boundary conditions. Mourad Halla and colleagues demonstrate that twisting generalized toric patterns, viewed through a bivariate-bicycle framework, substantially improves finite-size performance. This work extends the search to qudit codes over finite fields, employing algebraic methods to compute qudit numbers and pinpoint compact codes exhibiting favourable rate-distance trade-offs. The findings reveal that, across the finite sizes examined, twisted-torus qudit constructions generally attain greater distances than untwisted codes and surpass previously published twisted instances, with the most promising new codes tabulated.

Finite-length qudit LDPC codes on twisted tori enhance quantum error correction performance significantly

Scientists are pioneering advancements in quantum error correction through the development of qudit codes on twisted tori, achieving improved performance over existing qubit instances. Recent work demonstrated that twisting generalized toric patterns significantly enhances finite-size performance, a concept now extended to qudits over finite fields. This research focuses on finite-length qudit quantum low-density parity-check (LDPC) codes constructed from translation-invariant CSS constructions on two-dimensional tori with twisted boundary conditions. By employing algebraic methods, the researchers compute the number of logical qudits and identify compact codes exhibiting favourable rate-distance trade-offs. The study builds upon the bivariate-bicycle viewpoint, measuring finite-size performance by the ratio kd²/n, where n is the number of physical qudits, k the number of logical qudits, and d the code distance. Extending this insight, the work explores qudit codes over finite fields.
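The kd²/n figure of merit mentioned above is easy to evaluate for familiar qubit codes, which gives a feel for why "favourable rate-distance trade-offs" matter. The comparison below uses two well-known published parameter sets, not codes from this article: the distance-12 rotated surface code ([[144, 1, 12]]) and the [[144, 12, 12]] bivariate-bicycle "gross" code (Bravyi et al., 2024).

```python
def merit(n, k, d):
    """kd^2/n: logical (qu)dits times squared distance per physical (qu)dit."""
    return k * d**2 / n

surface_d12 = merit(144, 1, 12)   # rotated surface code, one logical qubit
gross_code  = merit(144, 12, 12)  # bivariate-bicycle code, 12 logical qubits

print(surface_d12)  # -> 1.0
print(gross_code)   # -> 12.0
```

Both codes use 144 physical qubits at distance 12, but the bivariate-bicycle construction encodes twelve logical qubits instead of one, so its kd²/n is twelve times higher; the twisted-torus qudit search in this work hunts for compact codes that push this ratio further.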

Source: Quantum Zeitgeist

Superconducting Qubits Edge Closer to Becoming a Practical Quantum Computer

Researchers are increasingly focused on superconducting qubit devices as a leading architecture for scalable quantum computation, owing to their maturity and compatibility with existing semiconductor manufacturing techniques. Hiu Yung Wong from San Jose State University, alongside colleagues, presents a comprehensive review of these devices, examining the fundamental principles of superconductivity and Josephson junctions that underpin their operation. This work is significant because it not only details the various qubit designs and entanglement gate schemes currently employed, but also addresses the critical challenges hindering progress, such as two-level-system defects that limit coherence. Furthermore, the authors explore strategies for large-scale integration, drawing parallels with established electronic design automation techniques used in conventional semiconductor technology, paving the way for more powerful and practical quantum computers.

Superconducting qubit technology and the path towards scalable quantum processors

Scientists are rapidly approaching the scale necessary for practical quantum computation, with research now focused on the substantial engineering challenges of building systems with sufficient qubits. A recent review details the current state of superconducting qubit technology, highlighting the critical need for large-scale integration to realise a truly useful quantum computer. The work comprehensively examines the foundational elements, from qubit design and control to error mitigation strategies, paving the way for more robust and scalable quantum processors. This analysis underscores the progress made in superconducting qubits and identifies key areas for future development. Superconducting qubit computers represent a leading architecture for large-scale quantum integration due to their compatibility with existing semiconductor manufacturing processes.

Source: Quantum Zeitgeist

Artificial Black Holes Emit Radiation, Mimicking Hawking’s Groundbreaking Prediction

Researchers are increasingly exploring condensed-matter systems to simulate and understand phenomena associated with black holes, and a new study by Jaiswal, Shankaranarayanan, and colleagues from the Department of Physics, Indian Institute of Technology Bombay, details the emergence and detection of Hawking radiation within a quenched chiral spin chain. This work is significant because it moves beyond simply demonstrating analogous black-hole conditions to analysing the characteristics of the emitted radiation and proposing methods for its unambiguous detection. By employing both field-theoretic calculations and models of operational quantum sensors, the team reveals deviations from ideal blackbody spectra and establishes a clear protocol for differentiating genuine analogue Hawking radiation from background noise in experimental platforms.

Analogue Hawking radiation emerges from a chirally driven spin-chain quantum simulator through collective excitations of magnons and triplons

Researchers have demonstrated the emergence and detection of Hawking radiation within a one-dimensional chiral spin chain, offering a novel platform for investigating quantum-gravity phenomena. This work simulates gravitational collapse using a sudden quantum quench, inducing a phase transition that mimics the formation of a black-hole horizon. By mapping the spin-chain dynamics onto a Dirac fermion in a curved spacetime, the study analyses the resulting radiation spectrum and its detectability through two distinct approaches: field-theoretic modes and operational quantum sensors. Initial findings reveal that the observed radiation spectrum deviates from the ideal Planckian form, exhibiting frequency-dependent characteristics analogous to greybody factors, yet maintains robust Poissonian statistics indicative of information loss at the formation scale. To probe this analogue Hawking radiation further, a qubit was introduced as a stationary Unruh-DeWitt detector coupled to the chiral spin chain.

Source: Quantum Zeitgeist
Quantum Computers Sidestep Major Flaw, Paving Way for Larger, More Accurate Calculations

Scientists are increasingly exploring variational quantum eigensolvers as practical approaches to preparing ground states, but their potential for quantum advantage remains unclear. Baptiste Anselme Martin from Eviden Quantum Lab and Thomas Ayral from CPHT, CNRS, École Polytechnique, IP Paris, and colleagues demonstrate a novel method utilising differentiable 2D tensor networks to optimise parameterised circuits for the transverse-field Ising model. This research is significant because it enables the preparation of highly accurate ground states for systems exceeding one dimension and, crucially, mitigates the detrimental barren-plateau issue by identifying enhanced gradient zones that maintain performance as system size increases. By evaluating the classical simulation cost at these optimised starting points, the team delineates regimes where quantum hardware may ultimately outperform tensor-network simulations.

Tensor network pre-optimisation overcomes barren plateaus in variational quantum circuits by improving initial parameterisation

Researchers are pioneering a new approach to harnessing the power of quantum computing by integrating classical tensor-network algorithms with parameterised quantum circuits. This work details the use of differentiable two-dimensional tensor networks to optimise circuits designed to prepare the ground state of the transverse-field Ising model, achieving high energy accuracy even for complex systems beyond one dimension. The study demonstrates that pre-optimisation using tensor networks effectively mitigates the barren-plateau issue, a significant obstacle in quantum computation, by unlocking enhanced gradient zones whose size is maintained even as system complexity increases. Specifically, the research focuses on optimising quantum circuits using projected entangled pair states, a type of two-dimensional tensor network, combined with automatic differentiation techniques, allowing for the efficient preparation of accurate ground states.

Quantum Zeitgeist

Quantum Computer Controls Refined to Pinpoint Sources of Error in Calculations

Researchers are increasingly focused on mid-circuit measurements as essential building blocks for scalable quantum computation. Piper C. Wysocki (University of New Mexico and Sandia National Laboratories), Luke D. Burkhart (MIT Lincoln Laboratory), Madeline H. Morocco (MIT Lincoln Laboratory), and colleagues present a detailed characterisation of these measurements on a transmon qubit, offering a significant advance in understanding their underlying mechanisms. Their work tackles the difficulty of interpreting experimentally obtained measurement data by adapting a generator formalism, previously used for noisy quantum gates, to mid-circuit measurements. With this new analysis, the team quantified contributions from amplitude damping, readout errors, and imperfect state collapse, demonstrating a parsimonious model that recovers key features of dispersive readout and provides a more physically intuitive understanding of this crucial quantum process.

Characterising mid-circuit measurement errors using an error generator formalism

Researchers have developed a new method for dissecting and understanding errors within mid-circuit measurements, a crucial component for building large-scale, fault-tolerant quantum computers. These measurements, which read qubit states during computation without fully collapsing them, are essential for quantum error correction and advanced quantum algorithms. However, characterising the errors inherent in these mid-circuit measurements has proven challenging, limiting the ability to debug and improve quantum circuits. This work introduces a framework adapting the error generator formalism, previously used to analyse noisy quantum gates, to the unique characteristics of mid-circuit measurements. The study overcomes a key obstacle by constructing a representation of errors that mirrors the established error generators used for logic gates, despite the fundamentally different nature of mid-circuit measurement transfer maps.
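To give a feel for how distinct error sources can be separated from measurement statistics, here is a deliberately simplified toy model (not the paper's generator formalism, and the rates and shot count are invented): a measurement whose errors come from amplitude damping before readout, at rate gamma, and classical readout bit-flips, at rate p. Because the two mechanisms enter the observed error rates differently, both parameters can be recovered from preparation-and-measurement experiments.

```python
import numpy as np

# Illustrative parameters (assumptions, not measured values).
rng = np.random.default_rng(7)
gamma_true, p_true, shots = 0.03, 0.02, 200_000

def measure(prepared: int) -> int:
    """One noisy measurement: damping can relax |1> -> |0>, then readout flips."""
    state = prepared
    if state == 1 and rng.random() < gamma_true:
        state = 0                # amplitude-damping event before readout
    outcome = state
    if rng.random() < p_true:
        outcome ^= 1             # classical readout (assignment) error
    return outcome

# Observed error rates from repeated prepare-and-measure experiments.
e0 = np.mean([measure(0) == 1 for _ in range(shots)])  # false "1" on |0>
e1 = np.mean([measure(1) == 0 for _ in range(shots)])  # false "0" on |1>

# Invert the parsimonious model: e0 = p and e1 = p + gamma * (1 - 2p),
# since damping only affects |1> while readout flips affect both states.
p_est = e0
gamma_est = (e1 - p_est) / (1 - 2 * p_est)
print(f"p ~ {p_est:.3f}, gamma ~ {gamma_est:.3f}")
```

The asymmetry between the two prepared states is what makes the decomposition possible; with only a single aggregate error rate, damping and readout flips would be indistinguishable.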

Quantum Zeitgeist

OQC and QinetiQ Demonstrate Critical Quantum Computing Application for Defence and Security

PRESS RELEASE

London, UK — 10 February 2026 — OQC and QinetiQ have announced the successful completion of a joint research project demonstrating how quantum computing can be applied to strengthen the security and resilience of defence communication networks. Using OQC’s Toshiko quantum computer and cloud-accessible API, QinetiQ integrated its Quantum Approximation Optimisation Algorithm (QinetiQAOA) to identify critical nodes in Mobile Ad-Hoc Networks (MANETs), the dynamic, infrastructure-free networks used in military and emergency operations. The collaboration represents a significant step forward in applying quantum computing to real-world defence challenges, showing that quantum systems have the potential to offer valuable insights into complex network vulnerabilities and support more secure, adaptive communications in the field.

Unlocking Quantum Advantage for Defence

The project used OQC’s Toshiko system to model MANETs as optimisation problems, helping to identify nodes whose failure could disrupt mission-critical communications. The results demonstrated that quantum algorithms can successfully pinpoint these vulnerabilities: a breakthrough with direct implications for network resilience, cyber defence, and operational planning. By analysing network topologies, QinetiQ and OQC showed how defence organisations could one day use quantum computing to:

- Strengthen communications networks against interference or attack;
- Optimise logistics and mission planning;
- Support more informed decision-making in dynamic environments.

“This project is a tangible example of quantum computing’s power to deliver real operational value,” said Gerald Mullally, CEO of OQC. “Working with QinetiQ has shown how sovereign quantum technology can be applied today to challenges that directly impact defence capability.”

While the project successfully identified critical nodes within com…
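The release does not detail QinetiQAOA, but the underlying question, which node's failure most fragments a network, has a straightforward classical baseline that a quantum optimiser would aim to beat at scale. The standard-library sketch below (a hypothetical toy MANET, not project data) ranks nodes by how many extra connected components their removal creates, using exhaustive removal with a union-find component count.

```python
def components(nodes, edges):
    """Count connected components via union-find with path halving."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(n) for n in nodes})

def critical_nodes(nodes, edges):
    """Rank nodes by how many extra components their removal creates."""
    base = components(nodes, edges)
    impact = {}
    for n in nodes:
        rest = [m for m in nodes if m != n]
        kept = [(u, v) for u, v in edges if n not in (u, v)]
        impact[n] = components(rest, kept) - base
    return sorted(impact.items(), key=lambda kv: -kv[1])

# Toy MANET: node "C" is the sole bridge between the {A, B} and {D, E} clusters.
nodes = ["A", "B", "C", "D", "E"]
edges = [("A", "B"), ("B", "C"), ("A", "C"),
         ("C", "D"), ("D", "E"), ("C", "E")]
print(critical_nodes(nodes, edges))  # "C" ranks first: removing it splits the network
```

This brute-force scan costs one component count per node; the appeal of encoding the problem for QAOA-style solvers is the prospect of handling much larger, dynamically changing topologies where exhaustive analysis becomes expensive.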

Oxford Quantum Circuits

Theory of quantum error mitigation for non-Clifford gates

Abstract

Quantum error mitigation techniques mimic noiseless quantum circuits by running several related noisy circuits and combining their outputs in particular ways. How well such techniques work is thought to depend strongly on how noisy the underlying gates are. Weakly-entangling gates, like $R_{ZZ}(\theta)$ for small angles $\theta$, can be much less noisy than entangling Clifford gates, like CNOT and CZ, and they arise naturally in circuits used to simulate quantum dynamics. However, such weakly-entangling gates are non-Clifford, and are therefore incompatible with two of the most prominent error mitigation techniques to date: probabilistic error cancellation (PEC) and the related form of zero-noise extrapolation (ZNE). This paper generalizes these techniques to non-Clifford gates, and comprises two complementary parts. The first part shows how to effectively transform any given quantum channel into (almost) any desired channel, at the cost of a sampling overhead, by adding random Pauli gates and processing the measurement outcomes. This enables us to cancel or properly amplify noise in non-Clifford gates, provided we can first characterize such gates in detail. The second part therefore introduces techniques to do so for noisy $R_{ZZ}(\theta)$ gates. These techniques are robust to state preparation and measurement (SPAM) errors, and exhibit concentration and sensitivity, crucial statistical properties for many experiments. They are related to randomized benchmarking, and may also be of interest beyond the context of error mitigation. We find that while non-Clifford gates can be less noisy than related Cliffords, their noise is fundamentally more complex, which can lead to surprising and sometimes unwanted effects in error mitigation. Whether this trade-off can be broadly advantageous remains to be seen.

Featured image: An illustration of probabilistic error cancellation (PEC) generalized to $R_{ZZ}$ gates with a non-Clifford angle. The gate noise can be accurate…
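The core mechanism of PEC, expressing the inverse of a noise channel as a quasi-probability combination of implementable operations, can be shown in the simplest textbook setting. The sketch below is a stand-in, not the paper's construction: it uses a single-qubit $R_X(\theta)$ (a non-Clifford rotation playing the role of $R_{ZZ}(\theta)$) followed by depolarizing noise, and cancels the noise exactly with a deterministic linear combination of Pauli conjugations; the angle and noise rate are illustrative choices.

```python
import numpy as np

Id = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1., -1.]).astype(complex)

def rx(t):
    """Non-Clifford rotation standing in for R_ZZ(theta)."""
    return np.cos(t / 2) * Id - 1j * np.sin(t / 2) * X

def depolarize(rho, p):
    return (1 - p) * rho + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z)

theta, p = 0.7, 0.05                     # illustrative angle and noise rate
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)

U = rx(theta)
rho_noisy = depolarize(U @ rho0 @ U.conj().T, p)   # noisy gate = noise after ideal

# Quasi-probability inverse of depolarizing noise: it shrinks every Pauli
# component by f = 1 - 4p/3, so the inverse mixes "do nothing" (weight a > 1)
# with Pauli conjugations (negative weight b, split evenly over X, Y, Z).
f = 1 - 4 * p / 3
a = (3 / f + 1) / 4
b = 1 - a

def expval_Z(rho):
    return float(np.real(np.trace(Z @ rho)))

mitigated = a * expval_Z(rho_noisy) + (b / 3) * sum(
    expval_Z(P @ rho_noisy @ P) for P in (X, Y, Z))
print(mitigated, np.cos(theta))  # mitigated value recovers the ideal <Z> = cos(theta)
```

In a real experiment the negative weight $b$ forces Monte-Carlo sampling of the correction with signs, which is the source of PEC's sampling overhead ($|a|+|b| > 1$); the paper's contribution is extending this kind of inversion to noise that has first been characterized on non-Clifford $R_{ZZ}(\theta)$ gates, where the simple depolarizing picture above no longer holds.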

Quantum Journal