Quantum Software Development: Qiskit, Cirq & Quantum Programming
Quantum programming news: Qiskit, Cirq, quantum SDKs, compilers. Quantum software stack & hybrid quantum-classical development.
Quantum software development bridges abstract quantum algorithms with physical hardware execution, requiring specialized programming frameworks, compilers, and hybrid classical-quantum orchestration.
Major programming frameworks include Qiskit (IBM) with 500,000+ users including substantial Indian participation; Cirq (Google); and PennyLane (Xanadu) for differentiable quantum programming.
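As a concrete illustration of the kind of program these SDKs express, here is a two-qubit Bell-state circuit simulated directly with NumPy rather than with any particular framework (a minimal sketch using textbook big-endian gate matrices; note that Qiskit itself orders qubits little-endian):

```python
import numpy as np

# Single-qubit Hadamard and the two-qubit CNOT gate as explicit matrices.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
I2 = np.eye(2)

# Two-qubit register starting in |00>.
state = np.zeros(4)
state[0] = 1.0

# Apply H to qubit 0, then CNOT (control 0, target 1) -- the same circuit a
# Qiskit or Cirq program would express with gate calls like h(0); cx(0, 1).
state = CNOT @ np.kron(H, I2) @ state

print(state)  # ≈ [0.7071, 0, 0, 0.7071], the Bell state (|00> + |11>)/√2
```

The point of an SDK is that the user writes the gate sequence while the framework handles the state representation, transpilation, and hardware dispatch that this sketch does by hand.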
India's Quantum Software Development Landscape
India's software development capabilities feature prominently in NQM plans. Tata Consultancy Services (TCS) partners with IBM to develop cloud-based interfaces and quantum algorithms. The DRDO-TIFR-TCS collaboration developed the cloud interface for India's 6-qubit superconducting quantum processor.
The NQM Thematic Hub at IISc Bengaluru develops quantum software including compilers, control electronics, and algorithm libraries. The Centre for Development of Advanced Computing (C-DAC) integrates quantum computing with India's high-performance computing infrastructure.
Educational institutions including IISc Bengaluru, IIT Delhi, and IIT Bombay offer quantum computing courses and certifications. The IISc Centre for Continuing Education provides a Certificate Programme in Quantum Computing and Artificial Intelligence with hands-on training in Qiskit and PennyLane.
World's largest quantum circuit simulation for quantum chemistry achieved on 1,024 GPUs
A joint research team from the Center for Quantum Information and Quantum Biology (QIQB) at The University of Osaka and Fixstars Corporation has demonstrated one of the world's largest classical simulations of iterative quantum phase estimation (IQPE) circuits for quantum chemistry on up to 1,024 GPUs, surpassing the previous 40-qubit limit. The result expands the scale of molecular systems available for the development and validation of quantum algorithms for future fault-tolerant quantum computers, supporting progress toward industrial applications in drug discovery and materials development.
Source: Phys.org Quantum Section
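For context on why the 40-qubit mark is a barrier: a full statevector simulation stores one complex amplitude per basis state, so memory doubles with every added qubit. A back-of-envelope sketch (assuming dense complex128 amplitudes, a detail the article does not specify):

```python
# Memory needed for a dense statevector of n qubits at complex128
# (16 bytes per amplitude). At 40 qubits this is already 16 TiB, far beyond
# a single node, which is why pushing past it requires distributing the
# state across many GPUs.
def statevector_bytes(n_qubits: int, bytes_per_amp: int = 16) -> int:
    return (2 ** n_qubits) * bytes_per_amp

for n in (30, 40, 45):
    gib = statevector_bytes(n) / 2**30
    print(f"{n} qubits: {gib:,.0f} GiB")
```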
Quantum Computer Simulates Magnetic Material with over 400 Components
A programmable dipolar square spin-ice model is demonstrated using a superconducting-qubit quantum annealer by Krzysztof Giergiel and Piotr Surówka at the Institute of Theoretical Physics. The model achieves access to previously unattainable quantum-coherent dynamics. Effective dipolar interactions on frustrated lattices containing over 400 vertices are realised through a direct mapping of lattice spins to physical qubits and engineered extended couplings. Observation of super-diffusive monopole transport and dynamics beyond classical stochastic relaxation provides a scalable platform for exploring fractionalised excitations and emergent gauge dynamics within engineered quantum matter.

Quantum spin ice exhibits super-diffusion of magnetic monopoles via engineered qubit couplings

Super-diffusive monopole transport has been observed within a quantum spin ice model, a behaviour previously unseen and unattainable in artificial spin ice systems. Realising effective dipolar interactions on frustrated lattices comprising over 400 vertices enabled this breakthrough, exceeding previous limitations reliant on short-range couplings and lacking dipolar interactions. By directly mapping lattice spins to superconducting qubits and engineering extended couplings, researchers gained access to a previously unexplored quantum-coherent regime, allowing observation of monopole dynamics beyond classical stochastic relaxation. The significance of this lies in the potential to study emergent phenomena arising from frustrated magnetism, a field crucial for understanding complex materials and potentially developing novel quantum technologies. The resulting platform allows detailed investigation of fractionalized excitations and emergent gauge dynamics, potentially paving the way for novel quantum technologies and a deeper understanding of complex magnetic systems. This scalable system offers a new avenue for exploring fundamental concepts in engineered quantum matter.
Source: Quantum Zeitgeist
Quantum Optimisation Cuts Measurement Needs with New Bayesian Approach
Scientists Siran Zhang and Shuming Cheng at Tongji University have developed a resource-efficient Quantum Approximate Optimisation Algorithm (QAOA) framework that optimises performance by focusing on the cut value of the most probable measured bitstring, alongside Bayesian optimisation and adaptive shot allocation. The method achieves solution quality comparable to conventional QAOA, but requires fewer measurements to reach the same accuracy. It offers a pathway to more practical quantum optimisation, particularly valuable when measurement resources are constrained.

Maximising discrete MaxCut solutions with reduced quantum measurement demands

A reduction in required quantum shots was achieved when solving 3-regular MaxCut problems using a new QAOA framework, enabling comparable discrete-solution quality with fewer measurements under limited resource conditions. The MaxCut problem is a classical combinatorial optimisation problem where the goal is to partition the vertices of a graph into two disjoint sets such that the number of edges crossing the partition (the ‘cut’) is maximised. The 3-regular variant specifies that each vertex has exactly three edges connected to it, adding a specific structural constraint. QAOA is a promising algorithm for tackling such problems on near-term quantum computers, but its effectiveness is heavily reliant on efficient resource utilisation. The team replaced the conventional expectation-based objective, which optimises the average performance of a solution, with an approach focused on the cut value of the most probable bitstring, directly targeting high-quality discrete outputs. Traditionally, QAOA aims to maximise the expected value of the objective function, averaging over all possible measurement outcomes. This new approach, however, prioritises finding the single, most likely solution and maximising its cut value.
Source: Quantum Zeitgeist
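The quantity the new objective scores, the cut value of a single measured bitstring, is purely classical and cheap to evaluate; a minimal sketch (the example graph and helper names are illustrative, not taken from the paper):

```python
import itertools

# Cut value of a bitstring for a MaxCut instance: the number of edges whose
# endpoints fall on opposite sides of the partition. This is the quantity
# the modified QAOA objective evaluates on the most probable bitstring.
def cut_value(bits, edges):
    return sum(1 for u, v in edges if bits[u] != bits[v])

# A small 3-regular graph: the complete graph K4 (every vertex has degree 3).
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

# Brute-force the optimum for reference (only feasible for tiny graphs).
best = max(itertools.product([0, 1], repeat=4),
           key=lambda b: cut_value(b, edges))
print(best, cut_value(best, edges))  # any 2-2 split of K4 cuts 4 edges
```

In conventional QAOA the quantum device estimates the average of this cut value over all sampled bitstrings; the new framework instead feeds only the most frequently observed bitstring's cut value to the Bayesian optimiser.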
Brooklyn-based Pizza Makers Declare Quantum Supremacy in Qulinary™ Breakthrough
Insider Brief: Aleesia and Roberto’s Fine Italian Diner and Pizzeria claims to have used a quantum algorithm to design an optimized multi-topping pizza that would take classical computing impractically long to replicate. The team applied quantum-style simulation and optimization to model ingredients at a chemical level and determine an ideal topping combination, while humorously noting limits such as missing samples and rejecting pineapple as fundamentally incompatible. The restaurant plans to expand into a full “quantum menu” and pursue commercialization by 2030, including a potential SPAC listing to fund quantum-enhanced Italian food products.

PRESS RELEASE — Sure, quantum science can be complex and intense, but now it can also be delicious, according to pizza makers from Brooklyn-based Aleesia and Roberto’s Fine Italian Diner and Pizzeria. In a study published on the premier pre-print server for the pizza industry, anXovi, a team of pizza scientists reported they were able to use a quantum algorithm to create one of Aleesia and Roberto’s regionally famous multi-topping pizzas, or, put scientifically, a complex, thermally transformed carbohydrate matrix supporting a heterogeneous distribution of proteins, lipids and plant-derived compounds. Now referred to as a Qulinary™ breakthrough, the quantum pizza supremacy experiment addresses two long-standing challenges in the pizza industry: how many green peppers on a pizza is just too much, and what’s the deal with black olives on a pizza anyway. The resulting quantum-enhanced pizza recipe would take a classical supercomputer 10,000 years to formulate and make, the team reported. “And that’s not even including the time in the oven!” said Roberto Fermi, lead author of both the study and the menu. The experimental focus on pizza is ideal for quantum devices because it relies on simulation and optimization, two calculations that quantum computers can theoretically master and that often overwhelm their classical counterparts.
Source: Quantum Daily
Senior Researcher in Fault-Tolerant Quantum Computing
Application deadline: Wednesday, September 30, 2026
Employer web page: http://www.neqxt.org
Job type: Fellowship / Other / PostDoc
Tags: quantum error correction, fault tolerance, decoding, quantum computing, quantum algorithms, industry, startup

We are seeking Senior Researchers in Fault-Tolerant Quantum Computing to contribute to the development of key components of the fault-tolerant stack. These roles focus on decoders and quantum algorithms at the logical level, addressing how quantum information can be processed reliably in large-scale, error-corrected systems. The positions involve developing and analyzing decoding strategies under realistic noise models, as well as designing and evaluating quantum algorithms within fault-tolerant architectures, including resource estimation and logical-level optimization. The emphasis is on rigorous, hands-on research that advances the performance, scalability, and understanding of error-corrected quantum computation. A central aspect of the role is contributing to the integration of these methods into modular fault-tolerant architectures, ensuring consistency between algorithmic, decoding, and system-level considerations.

Company Overview

neQxt is a trailblazing company at the forefront of quantum computing technology. With decades of expertise in ion-trap technology, neQxt was founded at Johannes Gutenberg University Mainz, Germany, emerging from the renowned research group of Prof. Dr. Ferdinand Schmidt-Kaler. At neQxt, our theory team explores the principles that make scalable quantum computing possible. We study how fault tolerance can be realized under realistic conditions, designing circuits, developing simulation methods, and analyzing noise at the circuit level. A key part of our work is to understand trade-offs between different approaches and how they can be embedded into modular architectures. We aim to connect theoretical insights with practical feasibility.
Source: Quantiki
Group leader in quantum algorithms & machine learning at the Center for Quantum-Enabled Computing (Warsaw, Poland)
Application deadline: Sunday, May 17, 2026
Employer web page: www.cft.edu.pl
Job type: Professorship
Tags: quantum technologies, quantum computing, quantum simulation, quantum information, machine learning, quantum algorithms

GROUP LEADER in NEUTRAL ATOMS APPLICATIONS (f/m/x)
Ref. Number: MAB/05/2026
Location: Warsaw, Poland
Salary: 20,750-24,250 PLN/month gross (approx. 16,300-18,500 PLN net per month); employment contract: 1 FTE; full social security and health insurance
Number of positions available: 1
Work Arrangement: Hybrid
Start of the position: Negotiable, preferably on July 1, 2026
Period of employment: Until the end of 2029. Employment may be extended beyond the project period under a standard CTP PAS contract and salary scale, subject to a positive performance evaluation.
Keywords: atomic physics, simulations, Rydberg atoms, many-body quantum physics, quantum information theory, quantum computing, artificial intelligence, machine learning, quantum computational advantage
Important dates: Application deadline May 17, 2026; candidates will be informed about the results by the end of June 2026.
Source of financing: Center for Quantum-Enabled Computing / Centrum Obliczeń Wspomaganych Kwantowo (FENG.02.01-IP.05-M032/25). The project is carried out within the International Research Agendas programme of the Foundation for Polish Science, co-financed by the European Union under the European Funds for Smart Economy 2021-2027 (FENG).

ABOUT THE PROJECT AND US
The Center for Quantum-Enabled Computing project’s overarching objective is to address several key challenges in the field of computing by paving the way to a verifiable, energy-efficient, reliable, and scalable computational advantage based on quantum systems. Project temporary website: https://remik24-web.github.io/QT-website/ Candidates are welcome to inquire about the project details, research agenda and organizational issues. Questions should be sent by email to R. Augusiak (http://raugusia
Source: Quantiki
Group leader in quantum computing & machine learning at the Center for Quantum-Enabled Computing (Warsaw, Poland)
Application deadline: Sunday, May 17, 2026
Employer web page: www.cft.edu.pl
Job type: Professorship
Tags: quantum computing, quantum algorithms, quantum simulation, quantum advantage, quantum technologies, quantum information, machine learning
Source: Quantiki
Optimal Timing Allows Quantum Search to Benefit from Controlled Noise
Researchers at the University of Strathclyde, led by Afaf El Kalai, have identified a specific timetable to maximise fidelity during target state searches, even when decoherence is present. The study delivers closed-form expressions for the optimal evolution schedule and minimum runtime for adiabatic Grover search, alongside a key dephasing threshold defining the limits of noise-assisted acceleration. This clarifies the trade-off between speed and accuracy in open quantum systems and establishes physically realisable boundaries for dephasing-based adiabatic quantum search protocols.

Dephasing noise enables faster adiabatic Grover search with quantified limitations

Minimum runtimes for adiabatic Grover search have been reduced by a factor of √N by exploiting dephasing, a form of environmental noise that gradually diminishes quantum coherence. This improvement is contingent on maintaining dephasing below a critical threshold: exceeding the N^(-1/2) threshold negates any acceleration benefits and introduces errors, a limitation previously unquantified. Previously, adiabatic quantum algorithms were constrained by runtimes scaling inversely with the square of the minimum spectral gap, hindering practical application, but now a defined boundary exists for utilising noise to enhance search speed. Grover’s search algorithm, a quantum algorithm designed to find a specific item within an unsorted list of N items, typically offers a quadratic speedup over classical algorithms. However, implementing this algorithm using the adiabatic quantum computation paradigm presents challenges related to runtime and maintaining coherence. The adiabatic approach relies on slowly evolving a quantum system from a known initial Hamiltonian to a final Hamiltonian whose ground state represents the solution to the search problem. The runtime is fundamentally limited by the minimum spectral gap of the evolving Hamiltonian, which dictates how slowly the system must change to remain in its ground state.
Source: Quantum Zeitgeist
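The spectral-gap bottleneck described above can be checked numerically for the standard, noise-free adiabatic Grover Hamiltonian H(s) = (1-s)(I - |+><+|) + s(I - |m><m|); the minimum gap, reached at s = 1/2, scales as 1/√N. This is a sketch of the textbook construction, not of the paper's dephasing model:

```python
import numpy as np

# Adiabatic Grover search over N items: interpolate from a Hamiltonian whose
# ground state is the uniform superposition |+> to one whose ground state is
# the marked item |m>. The runtime of the noiseless schedule is governed by
# the minimum spectral gap, which equals 1/sqrt(N) at s = 1/2.
N = 64
plus = np.ones((N, 1)) / np.sqrt(N)   # uniform superposition |+>
m = np.zeros((N, 1)); m[0, 0] = 1.0   # marked item |m>
P_plus, P_m = plus @ plus.T, m @ m.T

def gap(s):
    H = (1 - s) * (np.eye(N) - P_plus) + s * (np.eye(N) - P_m)
    evals = np.linalg.eigvalsh(H)
    return evals[1] - evals[0]

min_gap = min(gap(s) for s in np.linspace(0.0, 1.0, 201))
print(min_gap, 1 / np.sqrt(N))  # both ≈ 0.125 for N = 64
```

The article's result concerns how dephasing modifies this picture; the noiseless 1/√N gap above is the baseline the noise-assisted schedule is compared against.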
Securing Elliptic Curve Cryptocurrencies against Quantum Vulnerabilities: Resource Estimates and Mitigations
arXiv:2603.28846 (quant-ph), submitted 30 Mar 2026
Title: Securing Elliptic Curve Cryptocurrencies against Quantum Vulnerabilities: Resource Estimates and Mitigations
Authors: Ryan Babbush, Adam Zalcman, Craig Gidney, Michael Broughton, Tanuj Khattar, Hartmut Neven, Thiago Bergamaschi, Justin Drake, Dan Boneh
Abstract: This whitepaper seeks to elucidate implications that the capabilities of developing quantum architectures have on blockchain vulnerabilities and mitigation strategies. First, we provide new resource estimates for breaking the 256-bit Elliptic Curve Discrete Logarithm Problem, the core of modern blockchain cryptography. We demonstrate that Shor's algorithm for this problem can execute with either <1200 logical qubits and <90 million Toffoli gates or <1450 logical qubits and <70 million Toffoli gates. In the interest of responsible disclosure, we use a zero-knowledge proof to validate these results without disclosing attack vectors. On superconducting architectures with 1e-3 physical error rates and planar connectivity, those circuits can execute in minutes using fewer than half a million physical qubits. We introduce a critical distinction between fast-clock (such as superconducting and photonic) and slow-clock (such as neutral atom and ion trap) architectures. Our analysis reveals that the first fast-clock CRQCs would enable on-spend attacks on public mempool transactions of some cryptocurrencies. We survey major cryptocurrency vulnerabilities through this lens, identifying systemic risks associated with advanced features in some blockchains such as smart contracts, Proof-of-Stake consensus, and Data Availability Sampling, as well as the enduring concern of abandoned assets.
Source: arXiv Quantum Physics
Real Variance-Based Variational Quantum Eigensolver for Non-Hermitian Matrices
arXiv:2603.28892 (quant-ph), submitted 30 Mar 2026
Title: Real Variance-Based Variational Quantum Eigensolver for Non-Hermitian Matrices
Authors: Durgesh Pandey, Ankit Kumar Das, P. Arumugam
Abstract: Non-Hermitian operators naturally arise in the description of open quantum systems, which exhibit features such as resonances and decay processes, where the associated eigenvalues are complex. Standard quantum algorithms, including the Variational Quantum Eigensolver (VQE), are designed for Hermitian operators and are ineffective in recovering correct eigenvalues for non-Hermitian matrices. We present a systematic formulation based on a Real Variance-based Variational Quantum Eigensolver (RVVQE) for non-Hermitian operators. A correct cost function that guarantees convergence to the true eigenstates is identified. Our implementation utilizes Hermitian measurements only, rendering the algorithm easily deliverable. The performance and scalability of the proposed algorithm on a hierarchy of dense non-Hermitian matrices of increasing dimension are demonstrated with numerical results and computational metrics.
DOI: https://doi.org/10.48550/arXiv.2603.28892 (arXiv-issued DOI via DataCite, pending registration)
Submitted by P. Arumugam, Mon, 30 Mar 2026 18:16:42 UTC
Source: arXiv Quantum Physics
Quantum Models Predict Vanishing Stability Times in Large Systems
Scientists at the University of Malta, led by Emanuel Schwarzhans, have presented a novel analytical expression for approximating the average equilibration time within the Gaussian Unitary Ensemble (GUE). The GUE serves as a foundational framework within random matrix theory (RMT), a branch of theoretical physics concerned with the statistical properties of large, random matrices. Understanding equilibration times, the period required for a closed quantum system to reach a stable, equilibrium state, is crucial for characterising the behaviour of these systems. The derived expression confirms that the average equilibration time is independent of both the initial quantum state and the observable being measured, a consequence of the inherent rotational invariance within the GUE. However, the research reveals a counterintuitive trend: the calculated equilibration time diminishes as the system size increases, ultimately approaching zero in sufficiently large systems. This finding suggests that realistic chaotic many-body systems possess key physical characteristics beyond those captured by the GUE model, which fundamentally govern their true equilibration timescales.

GUE equilibration timescales shorten with increasing quantum state numbers

An analytical expression for the average equilibration time within the Gaussian Unitary Ensemble has been rigorously derived, demonstrating a decrease in this time as the system size expands, a result contrasting with some prior quantitative predictions. The GUE is particularly relevant to the study of quantum chaos, where the behaviour of systems is highly sensitive to initial conditions and exhibits seemingly random fluctuations. The analytical derivation is based on examining the decay of correlations between the initial state and the time-evolved state, utilising techniques from perturbative quantum mechanics and statistical physics. Numerical simulations employed established algorithms for generating and diagonalising random matrices.
Source: Quantum Zeitgeist
Quantum Algorithm Speeds up Complex Problem Solving at Later Stages
Scientists at the Centre for Exploratory Research have developed a new quantum algorithm, Quantum Riemannian Hamiltonian Descent (QRHD), to enhance continuous optimisation on Riemannian manifolds. Yoshihiko Abe and Ryo Nagai present a method that builds upon existing Quantum Hamiltonian Descent (QHD) techniques by explicitly integrating the geometric structure of the parameter space via a position-dependent metric within the kinetic term of the Hamiltonian. Their formulation, explored through both operator and path integral approaches, reveals the presence of quantum corrections to the action integral, which are demonstrably suppressed as the optimisation process progresses. This suggests that while quantum effects dominate the initial dynamics, the classical potential ultimately governs convergence near optimal solutions. The team estimate a lower bound for convergence time and demonstrate the algorithm’s functionality through numerical analysis, alongside a discussion of its potential implementation via time-dependent Hamiltonian simulation and an estimation of its query complexity.

Geometric optimisation accelerates quantum parameter estimation

A convergence time of 0.11 is reported for Quantum Riemannian Hamiltonian Descent, or QRHD, representing a measurable improvement over standard Quantum Hamiltonian Descent techniques. Traditional continuous optimisation algorithms often struggle with the complexities of high-dimensional and non-convex parameter spaces, frequently becoming trapped in local minima. QRHD addresses this limitation by explicitly incorporating the geometry of the search space, allowing for more efficient exploration of potential solutions. Enabling parameter searches on specified Riemannian manifolds (curved, multidimensional surfaces characterised by a locally defined metric tensor) allows the algorithm to navigate these complex landscapes more effectively.
Source: Quantum Zeitgeist
Faster Quantum Computing Now Possible Despite Increased Error Rates
Scientists from Johannes Kepler University Linz and collaborators demonstrated an approach to improve the reliability of quantum computations on trapped-ion systems. Their work implements local robust shadows, an error mitigation protocol designed to counteract measurement errors. The method recovers computational accuracy when measurement procedures are accelerated, improving the fidelity of results across different quantum states. However, this comes at the cost of increased sampling requirements, resulting in no overall efficiency gain.

High fidelity quantum computation enabled by strong shadow protocol error mitigation

A fidelity of around 0.96 was observed in the experiments, demonstrating that the robust shadows protocol can effectively reduce errors caused by shortened measurement times. Rather than representing a universal improvement over all previous methods, this result highlights the protocol’s ability to recover accuracy in scenarios where faster measurements introduce additional noise. Measurement errors are a major limitation in near-term quantum devices, and reducing the measurement pulse duration, for example to 150 μs, can significantly increase these errors. The robust shadows approach mitigates this effect, helping restore reliable estimates despite the noisier conditions. The protocol was tested across three different quantum states, including a local Haar-random state and two states used in quantum approximate optimisation algorithms (QAOA). In each case, it consistently reduced the impact of measurement errors. The method alternates between a calibration phase and a shadow estimation phase. A key component is Pauli-X twirling, where random bit-flip operations are applied before measurement to symmetrize errors. This enables the estimation of single-qubit expansion coefficients, which are then used to construct corrected “shadow” representations of the quantum state for classical post-processing. The results show that this approach can reduce bias in the resulting estimates.
Source: Quantum Zeitgeist
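The Pauli-X twirling step can be illustrated on a single-qubit readout model: randomly flipping the qubit before measurement and un-flipping the recorded bit averages the confusion matrix with its bit-flipped counterpart, symmetrising the error. A sketch with illustrative error rates (the numbers are not from the paper):

```python
import numpy as np

# Pauli-X twirling of readout error: with probability 1/2, apply X before
# measurement and flip the recorded outcome. The effective channel is the
# average of the confusion matrix M and X @ M @ X, so an asymmetric readout
# error becomes a symmetric bit-flip channel at the averaged rate.
e0, e1 = 0.08, 0.02                      # P(read 1 | true 0), P(read 0 | true 1)
M = np.array([[1 - e0, e1],
              [e0, 1 - e1]])             # columns: true state, rows: outcome
X = np.array([[0, 1], [1, 0]])           # bit flip (relabels state and outcome)

M_twirled = 0.5 * (M + X @ M @ X)
print(M_twirled)  # symmetric, off-diagonal error = (e0 + e1) / 2 = 0.05
```

Symmetrised errors of this form are what make the single-qubit correction coefficients in the shadow post-processing step estimable from the calibration phase.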
Trapped Molecules Unlock a New Route to Efficient Quantum Computation
Sakthikumaran Ravichandran and colleagues at the University of Warsaw show that the structure of optical tweezers, used to control individual molecules, can be used to create efficient quantum gates. Their numerical modelling of dipolar molecules in these traps reveals trap-induced resonances, offering a pathway to state-dependent dynamics and potential applications in both quantum computation and sensing. The work reframes motional dephasing, traditionally viewed as a limitation, as a potentially valuable resource for advancing quantum technologies.

Coupled-channel simulations demonstrate enhanced dipolar interactions via trap-induced resonances

Numerical simulations now reveal that trap-induced resonances enhance dipolar interactions by a factor of up to 50 when compared to previous calculations utilising simplified models. This enhancement is achieved through a rigorous coupled-channel treatment of the molecular dynamics, incorporating experimentally realistic parameters such as trap frequency and molecular polarisability. The substantial increase in interaction strength unlocks the potential for strong quantum gate implementation, previously hampered by the inherently weak, long-range nature of dipolar interactions. Dipolar molecules, possessing a permanent electric dipole moment, interact via electrostatic forces that fall off rapidly with distance, making strong coupling difficult to achieve. These trap-induced resonances emerge as avoided crossings between vibrational states of the trapped molecules and short-range molecular bound states formed by the trapping potential. The precise control afforded by these resonances allows for tailored manipulation of the molecular interactions. The coupled-channel approach employed accounts for the simultaneous excitation of multiple vibrational modes, providing a more accurate description of the system than single-channel approximations.
Source: Quantum Zeitgeist
Quantum Computation Using Limited Interactions Rivals Standard Models’ Power
A new model termed Exchange Quantum Polynomial Time (XQP) utilises the exchange interaction for quantum computation. Jędrzej Burkat and colleagues at the University of Oxford show that XQP circuits, constructed from restricted operations, occupy a computational complexity between classically tractable and fully quantum algorithms. Simulating these circuits efficiently would have key implications for computational complexity, potentially collapsing the polynomial hierarchy. The team also prove that even a limited subset of these circuits remains challenging to simulate, and reveal connections to established mathematical structures like the Gelfand-Tsetlin basis and statistical physics models, suggesting suitability for implementation on near-term quantum hardware.

XQP simulation unlocks efficient modelling of broader quantum computational complexity

Additive-error simulation of the Exchange Quantum Polynomial Time (XQP) model now enables efficient additive-error simulation of arbitrary BQP computations, a feat previously unattainable. It establishes XQP as a new computational model utilising only computational basis states and the isotropic Heisenberg exchange interaction, effectively bridging the gap between classical and quantum computation. Structurally, XQP captures decoherence-free subspace computation without requiring access to singlet states, simplifying potential hardware implementations. This is particularly significant as singlet states are notoriously difficult to create and maintain with high fidelity in physical quantum systems, often being highly susceptible to decoherence. The isotropic Heisenberg exchange interaction, a fundamental interaction in quantum mechanics, provides a natural and potentially robust mechanism for implementing the required qubit interactions. Circuits comprised solely of √SWAP gates, defined by a pulse angle of π/4, remain remarkably difficult to simulate even with these restrictions, challenging existing classical simulation techniques.
Source: Quantum Zeitgeist
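The √SWAP gate mentioned above is the square root of the SWAP gate generated by the isotropic exchange interaction; a quick NumPy check of its defining properties (the explicit matrix is the standard textbook form):

```python
import numpy as np

# The sqrt-SWAP gate: squaring it yields the full SWAP, and circuits built
# only from this gate (pulse angle pi/4) are the restricted XQP instances
# that nonetheless remain hard to simulate classically.
a, b = (1 + 1j) / 2, (1 - 1j) / 2
SQRT_SWAP = np.array([[1, 0, 0, 0],
                      [0, a, b, 0],
                      [0, b, a, 0],
                      [0, 0, 0, 1]])
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])

print(np.allclose(SQRT_SWAP @ SQRT_SWAP, SWAP))                 # True
print(np.allclose(SQRT_SWAP.conj().T @ SQRT_SWAP, np.eye(4)))   # unitary: True
```

The gate acts only within the {|01>, |10>} subspace, which is exactly the exchange-interaction structure the XQP model restricts itself to.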
Alice & Bob and Partners Awarded $3.9M ARPA-E Grant for Quantum Magnet Design
Alice & Bob, in collaboration with Los Alamos National Laboratory and GE Vernova, has been awarded $3.9 million from the U.S. Department of Energy’s ARPA-E Quantum Computing for Computational Chemistry (QC3) program. The three-year project aims to develop fault-tolerant quantum algorithms to identify rare-earth-free permanent magnets, which are essential components for electric motors and turbines. The effort is directed toward providing a technical alternative to neodymium-iron-boron (NdFeB) magnets, whose supply chains are currently geographically concentrated and subject to political constraints. The technical objective of the project is to achieve a 10,000-fold speed-up in computing time compared to current state-of-the-art classical simulations. The team will implement a hybrid approach where classical algorithms, developed by a group led by Professor Emanuel Gull, calculate environmental parameters while Alice & Bob’s quantum algorithms simulate highly correlated electronic systems. Los Alamos National Laboratory will contribute tensor network tools for quantum circuit optimization, and the performance targets will be validated experimentally on Alice & Bob’s cat-qubit hardware as well as through theoretical resource estimates. Classical computers struggle to accurately model the complex quantum interactions between electrons that define the magnetic behavior of candidate materials. By using quantum processors to model these systems directly, the consortium intends to enable realistic material calculations within a 24-hour window. GE Vernova’s Advanced Research accelerator will support the project by performing a technoeconomic analysis to evaluate the commercial viability of materials discovered through the hybrid algorithm. If successful, the researchers indicate that the framework could be adapted to broader applications in computational chemistry and materials science.
Quantum Computing Report
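The stated targets imply a striking gap between the hybrid stack and purely classical simulation. As a rough sanity check (the arithmetic is ours, not the article's), a 10,000-fold speed-up combined with the project's 24-hour target window implies a classical runtime measured in decades:

```python
# Back-of-envelope check (illustrative, not from the article): what the
# project's stated 10,000x speed-up target implies, given its goal of
# completing a realistic material calculation within 24 hours.
SPEEDUP = 10_000       # stated target quantum-vs-classical speed-up
QUANTUM_HOURS = 24     # stated target runtime window for one calculation

classical_hours = QUANTUM_HOURS * SPEEDUP
classical_years = classical_hours / (24 * 365)
print(f"Equivalent classical runtime: {classical_hours:,} hours "
      f"(~{classical_years:.0f} years)")
```

In other words, a calculation the consortium hopes to finish overnight would correspond to roughly a 27-year classical computation at the stated speed-up factor.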
memQ Secures $10M Series A to Develop Distributed Quantum Networking Hardware
memQ, a University of Chicago spin-out, has closed a $10 million Series A financing round co-led by Quantonation and Ocean Azul Partners. The funding is allocated toward the development and commercialization of the company’s xQNA (Extensible Quantum Network Architecture) portfolio. The primary technical objective is to enable modular, scale-out configurations for quantum computers, allowing separate quantum processing units (QPUs) to be networked over standard optical telecommunication links. This approach aims to address the limitations of monolithic quantum architectures by facilitating distributed quantum computing and cooperative processing across local and wide-area networks. The company’s hardware suite includes Quantum Network Interface Controllers (QNICs), designed to interface various qubit modalities with a network without decohering the quantum state. Supporting this infrastructure are Quantum Memory Modules (QMMs), which provide stable storage for entanglement operations, and a Quantum Control System (QCS) for sub-nanosecond orchestration of distributed tasks. On the software side, memQ’s xDQC (Distributed Quantum Compiler) manages workload allocation across the network based on available quantum resources. The architecture is built on commercial fabrication processes and is intended to be qubit-agnostic, supporting connectivity regardless of the underlying hardware of the connected systems. Market analysis by Global Quantum Intelligence (GQI) indicates that memQ’s use of standard photonic integrated circuits (PICs) and commercial fab platforms is a viable path for delivering quantum networking at scale. The modular strategy is currently being evaluated by hardware developers such as Atom Computing to support the scaling requirements of neutral-atom systems, and the component suite also targets “blind” cloud quantum computing and secure quantum networking.
Quantum Computing Report
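To make the compiler's role concrete: a distributed quantum compiler must decide which circuit runs on which networked QPU, falling back to cross-QPU (distributed) execution when no single device has enough free qubits. The sketch below is a minimal greedy allocator illustrating that scheduling problem; it is our own toy, not memQ's actual xDQC, and the QPU names and capacities are invented:

```python
# Illustrative sketch only: a greedy workload allocator in the spirit of
# a distributed quantum compiler. Jobs that fit nowhere are returned as
# candidates for splitting across QPUs over the network. This is NOT
# memQ's xDQC; names and capacities are hypothetical.
from dataclasses import dataclass, field

@dataclass
class QPU:
    name: str
    capacity: int                     # free physical qubits
    jobs: list = field(default_factory=list)

def allocate(jobs, qpus):
    """Place each (job_name, qubits_needed) on the QPU with the most free
    capacity, largest jobs first; return jobs that fit on no single QPU."""
    unplaced = []
    for name, need in sorted(jobs, key=lambda j: -j[1]):
        target = max(qpus, key=lambda q: q.capacity)
        if target.capacity >= need:
            target.jobs.append(name)
            target.capacity -= need
        else:
            unplaced.append(name)     # candidate for distributed execution
    return unplaced

qpus = [QPU("neutral-atom-A", 40), QPU("superconducting-B", 24)]
leftover = allocate([("vqe", 30), ("qaoa", 20), ("chem", 28)], qpus)
print([q.jobs for q in qpus], leftover)
```

A real compiler would also weigh fidelity, connectivity, and entanglement-link bandwidth, but the core resource-matching step looks like this.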
Quantum Factoring Needs Just 10,000 Qubits
A new analysis from Madelyn Cain and colleagues at the University of California and Oratomic finds that Shor’s algorithm, a key tool with implications for modern cryptography, may be achievable with far fewer qubits than previously thought. By optimising quantum error correction and circuit design, cryptographically relevant runs of Shor’s algorithm could be performed with as few as 10,000 reconfigurable atomic qubits, down from earlier estimates in the millions. The reduction was achieved through advances in quantum error-correcting codes, efficient logical instruction sets, and optimised circuit design that densely packs logical qubits to minimise overhead. Under plausible assumptions, the team’s analysis indicates that a system of 26,000 physical qubits could solve discrete logarithms on the P-256 elliptic curve in a matter of days. The result sharply lowers the resource requirements for a quantum computer capable of breaking widely used encryption algorithms, suggesting such a machine may be within reach given continued advances in neutral-atom technology and recent experimental demonstrations of fault-tolerant operations on increasingly large qubit arrays. The findings, built on high-rate quantum error-correcting codes, highlight the potential of neutral atoms for fault-tolerant quantum computing and wider scientific applications.
Quantum Zeitgeist
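The qubit counts above refer to the quantum order-finding step; the rest of Shor's algorithm is classical number theory. The toy below (our illustration, using a trivially small N) shows that classical scaffolding: given the multiplicative order r of a mod N, gcd computations recover the factors. On a real machine, only finding r would run on the quantum processor.

```python
# Classical toy of the number-theoretic core of Shor's algorithm.
# The quantum speed-up lies entirely in finding the order r; the
# post-processing below is classical. N = 15 is for illustration only.
from math import gcd

def order(a, N):
    """Smallest r > 0 with a^r = 1 (mod N), found by brute force."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N, a):
    assert gcd(a, N) == 1
    r = order(a, N)          # the step a quantum computer accelerates
    if r % 2:
        return None          # odd order: retry with another base a
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None          # trivial square root: retry
    return sorted((gcd(y - 1, N), gcd(y + 1, N)))

print(shor_classical(15, 7))   # -> [3, 5]
```

Brute-force order finding is exponential in the bit length of N, which is exactly why the 10,000-qubit estimate for the quantum version is the headline number.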
Quantum Code Boosts Data Capacity while Shielding Against Errors
A new quantum error correction code, the Majorana-XYZ code, offers a pathway towards scalable and robust quantum computation. Tobias Busse and Lauri Toikka at Aalto University demonstrate a subsystem code in which the amount of logical quantum information scales macroscopically and benefits from topologically non-trivial protection. The code, with parameters $n = L^2$ physical qubits, $k = \lfloor L/2 \rfloor$ logical qubits, and distance $L$, detects single- and two-qubit errors, alongside higher-weight errors constrained by its distance. Key undetected errors remain confined to the gauge group, preserving the integrity of logical information, and the code’s structure combines aspects of both topological and local gauge codes to achieve many topological logical qubits. The authors derive the code from a system of Majorana fermions arranged on a honeycomb lattice using only nearest-neighbour interactions, suggesting potential feasibility for experimental realisation. Encoding $\lfloor L/2 \rfloor$ logical qubits is a substantial improvement over previous topological codes, which typically encode only a constant number of logical qubits regardless of lattice size, limiting their scalability for complex quantum algorithms. Macroscopic scaling of the logical qubit count matters because quantum algorithms often require many logical qubits to represent and manipulate quantum data effectively; without enough of them, even theoretically powerful algorithms become impractical on physical hardware.
Quantum Zeitgeist
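The reported parameters ($n = L^2$, $k = \lfloor L/2 \rfloor$, $d = L$) can be tabulated to see the scaling claim directly; the snippet below is an illustrative tabulation of those formulas, not code from the paper:

```python
# Tabulate the Majorana-XYZ subsystem code parameters reported in the
# article: n = L^2 physical qubits, k = floor(L/2) logical qubits,
# distance d = L. The logical count k grows with lattice size L, unlike
# codes whose logical count stays constant.
def xyz_params(L):
    return L * L, L // 2, L          # (n, k, d) per the article's summary

for L in (4, 8, 16, 32):
    n, k, d = xyz_params(L)
    print(f"L={L:>2}: [[n={n:>4}, k={k:>2}, d={d:>2}]]  rate k/n = {k/n:.4f}")
```

Note the trade-off the table exposes: while k grows linearly in L, the encoding rate k/n still falls as roughly 1/(2L), since n grows quadratically.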
Alice & Bob Secures ARPA-E Award to Design Rare-Earth-Free Magnets
Alice & Bob has secured a $3.9 million award from the U.S. Department of Energy’s ARPA-E program to develop fault-tolerant quantum algorithms for discovering rare-earth-free permanent magnets. These magnets are essential components in electric motors and turbines, and current reliance on materials such as neodymium-iron-boron presents geopolitical and supply chain challenges. The company aims to achieve a 10,000-fold speed-up in computing time compared to existing classical simulations, potentially enabling realistic material calculations within a single day. “Designing high-performance magnets without rare earth elements is one of the hardest problems in material science,” said Juliette Peyronnet, U.S. General Manager at Alice & Bob, “as these materials are extremely difficult to simulate with classical computers.” The three-year project, conducted in collaboration with Los Alamos National Laboratory and GE Vernova, could accelerate the creation of more sustainable magnets and offer a broadly applicable solution for complex materials science problems. The effort will combine classical and quantum computing techniques, drawing on Los Alamos National Laboratory, GE Vernova’s Advanced Research accelerator, and researchers led by Professor Emanuel Gull. Los Alamos will focus on optimizing quantum circuits using tensor network tools, while GE Vernova will assess the economic viability of materials discovered through the hybrid algorithm. Marco Cerezo, Los Alamos scientist and Laboratory lead on the project, emphasized the synergistic nature of the research: “Finding ways to prepare high quality states via tensor network optimization is a critical tool that will help develop fault-tolerant quantum algorithms applied to challenges like rare-earth-free permanent magnets.”
Quantum Zeitgeist