Large-scale Lindblad Learning from Time-series Data Enables Robust Quantum Control of 156-qubit Systems

Understanding and correcting errors is crucial for building practical quantum computers, and recent work by Ewout van den Berg, Brad Mitchell, Ken Xuan Wei, and Moein Malekakhlagh, all of IBM Quantum, presents a significant advance in this field.
The team developed a method for learning the precise nature of the errors that occur during quantum computations, enabling more accurate control and correction of these fragile systems. Their approach analyzes time-series data from quantum processors to extract how quantum states evolve under the influence of noise, effectively building a detailed map of the errors present. The result is the first learning experiment of its kind on a 156-qubit processor, paving the way toward more robust and reliable quantum computation through more effective error mitigation strategies. At its core, the protocol establishes a linear system of equations that determines the model parameters from observable values and their gradients, obtained by fitting the time-series data with sums of exponentially damped sinusoids.
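For background, the textbook (GKLS) form of the Lindblad master equation shows why observable gradients are linear in the model parameters; this is standard theory, not the paper's exact parametrization:

```latex
% Lindblad master equation (GKLS form) for an open quantum system:
\frac{d\rho}{dt} = \mathcal{L}(\rho)
  = -\,i\,[H,\rho]
  + \sum_{k} \gamma_k \left( L_k \rho L_k^{\dagger}
      - \tfrac{1}{2}\,\bigl\{ L_k^{\dagger} L_k,\ \rho \bigr\} \right)

% For any observable O, the time series gives access to
\langle O\rangle(t) = \operatorname{tr}\bigl( O\,\rho(t) \bigr),
\qquad
\frac{d\langle O\rangle}{dt} = \operatorname{tr}\bigl( O\,\mathcal{L}(\rho) \bigr)

% The right-hand side is linear in the Hamiltonian coefficients and the rates
% \gamma_k once H and the jump operators L_k are expanded in a fixed operator
% basis: hence a linear system of equations for the model parameters.
```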
The team also developed a robust curve-fitting procedure that identifies the most accurate representation of the data while accounting for inherent shot noise, and demonstrated the approach by learning the Lindbladian, the mathematical description of the system's noisy evolution, for a full layer of gates on a 156-qubit superconducting quantum processor.
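The fitting step builds on the generalized pencil-of-function method (described further below). As a rough illustration of the underlying matrix-pencil idea, here is a minimal Python sketch; the function name `fit_damped_sinusoids`, the pencil-parameter choice, and the relative singular-value cutoff standing in for an estimated shot-noise floor are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fit_damped_sinusoids(y, dt, rel_tol=0.1):
    """Matrix-pencil fit of y[n] ~ sum_k a_k * z_k**n, z_k = exp((i*w_k - g_k)*dt).

    rel_tol is a relative singular-value cutoff, a stand-in for the estimated
    shot-noise floor that the paper's protocol derives from the data.
    """
    N = len(y)
    L = N // 2                                       # pencil parameter
    # Hankel data matrix: row i holds the window y[i], ..., y[i+L]
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    _, s, Vh = np.linalg.svd(Y, full_matrices=False)
    M = max(1, int(np.sum(s > rel_tol * s[0])))      # modes kept above the floor
    V = Vh.conj().T[:, :M]                           # dominant right singular vectors
    V1, V2 = V[:-1, :], V[1:, :]                     # row-shifted sub-blocks
    z = np.linalg.eigvals(np.linalg.pinv(V1) @ V2)   # pole estimates z_k
    # Amplitudes from a linear least-squares fit on the Vandermonde basis
    A = z[np.newaxis, :] ** np.arange(N)[:, np.newaxis]
    a, *_ = np.linalg.lstsq(A, y.astype(complex), rcond=None)
    freqs = np.angle(z) / dt                         # w_k
    decays = -np.log(np.abs(z)) / dt                 # g_k (>= 0 for damped modes)
    return a, freqs, decays

# Example: one damped sinusoid plus simulated shot noise
rng = np.random.default_rng(1)
t = np.arange(200) * 0.05
y = 0.8 * np.exp(-0.3 * t) * np.cos(2.0 * t) + 0.01 * rng.standard_normal(t.size)
amps, freqs, decays = fit_damped_sinusoids(y, dt=0.05)
print(freqs, decays)   # recovers w ~ +/-2.0 and g ~ 0.3
```

The singular-value truncation is where parsimony enters: candidate modes whose singular values sit below the noise floor are discarded rather than fit, which is the spirit of the shot-noise-aware model selection described here.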
Quantum Noise Characterization and Mitigation Strategies

This collection of research focuses on quantum computing, noise mitigation, system identification, and optimization. The central theme is understanding and reducing errors in quantum systems, which are incredibly sensitive to environmental disturbances. Researchers investigate methods for characterizing different types of noise, such as dephasing, relaxation, and readout errors, and develop models to predict their effects on quantum states. A key focus is on techniques to mitigate these errors, including methods that reduce errors without full error correction and methods that specifically address errors in measuring the final state of qubits. Many of the techniques rely on optimization algorithms, including gradient-based optimization, constrained optimization, and limited-memory algorithms.

Core concepts include the Lindblad master equation, which describes the evolution of open quantum systems, and the density matrix, a representation of the state of a quantum system. Quantum state tomography reconstructs the density matrix from measurement data, while process tomography characterizes the quantum process implemented by a circuit. Noise characterization is crucial for building accurate models and developing effective mitigation strategies, and quantum tomography provides the data needed to estimate the parameters of noise models. Readout error mitigation and tomography are closely linked, as accurately characterizing readout errors is essential for mitigating them. Hardware-efficient compilation and noise mitigation techniques aim to reduce overall circuit depth, which limits the accumulation of errors due to noise.
Across this body of work, the focus is on practical implementation and optimization of superconducting qubit systems, with significant emphasis on understanding and mitigating readout errors, often a dominant source of noise in quantum computations.
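As a concrete illustration of the state-tomography concept mentioned above, the textbook single-qubit case reconstructs the density matrix from Pauli expectation values. The numbers below are made up for the example and are unrelated to the paper's protocol:

```python
import numpy as np

# Single-qubit state tomography: rho = (I + <X>X + <Y>Y + <Z>Z) / 2
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

ex, ey, ez = 0.0, 0.0, 0.95        # illustrative measured expectations (near |0>)
rho = 0.5 * (I2 + ex * X + ey * Y + ez * Z)
# A physical density matrix has unit trace and nonnegative eigenvalues:
print(np.real(np.trace(rho)), np.linalg.eigvalsh(rho))
```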
Lindblad Model Learning on a 156-Qubit Processor

Scientists have developed a protocol for learning time-independent Lindblad models, which describe quantum operations applied repeatedly on a quantum computer, and demonstrated it on a 156-qubit processor, the first experiment of its kind. The core of the work is a linear system of equations that determines the model parameters from observable values and their gradients, obtained by fitting time-series data with sums of exponentially damped sinusoids. A robust curve-fitting procedure identifies the most accurate representation of the data while accounting for inherent shot noise; because the damped-sinusoid ansatz matches the true functional form of the dynamics, unlike traditional polynomial fits, the method accurately describes observable evolution even in the presence of readout errors. A specialized fitting protocol based on the generalized pencil-of-function method avoids overfitting by estimating the expected shot noise and selecting the most parsimonious fit within a chosen tolerance. The resulting linear system is solved with a splitting conic solver under a positive-semidefinite constraint that keeps the learned Lindblad model physically valid. The protocol is highly scalable: when each qubit couples to a fixed number of neighbors, the number of model parameters grows only linearly with the number of qubits. Simulations confirm the accuracy of the learning algorithm, demonstrating its potential for characterizing and optimizing quantum operations.
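To make the final solve step concrete, the sketch below poses a PSD-constrained least-squares problem and hands it to SCS (a splitting conic solver) via cvxpy. The coefficient matrices `G` and right-hand side `b` are random stand-ins for the quantities the protocol extracts from the time-series fits, and the real-symmetric matrix is a simplification of the Hermitian dissipator parametrization a full Lindblad model would use:

```python
import numpy as np
import cvxpy as cp

# Illustrative sizes only: m fitted gradient equations and an n x n
# dissipator (Kossakowski-type) matrix.
m, n = 40, 4
rng = np.random.default_rng(0)
G = rng.standard_normal((m, n, n))   # stand-in coefficient matrices
b = rng.standard_normal(m)           # stand-in fitted observable gradients

C = cp.Variable((n, n), symmetric=True)  # real-symmetric simplification of the
                                         # Hermitian dissipator matrix
residual = cp.hstack([cp.trace(G[i] @ C) for i in range(m)]) - b
problem = cp.Problem(cp.Minimize(cp.sum_squares(residual)),
                     [C >> 0])           # PSD constraint: physically valid model
problem.solve(solver=cp.SCS)             # SCS is a splitting conic solver
print("residual norm:", np.sqrt(problem.value))
```

The `C >> 0` constraint is what enforces physical validity: in the GKLS form, the dynamics are completely positive exactly when the dissipator matrix is positive semidefinite.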
Lindblad Model Learning From Quantum Data

Scientists have developed a new protocol for learning the parameters of a time-independent Lindblad model, which describes the evolution of open quantum systems, directly from experimental data obtained on a quantum computer. The method characterizes quantum operations, including ones applied many times in succession, is particularly suited to systems with local interactions, and is resilient to errors in initial state preparation. The core of the technique is the linear relationship between the model parameters and the observable values and gradients, extracted by decomposing time-series data into exponentially damped sinusoids.
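Tying the two ingredients together: once a fit of the form ⟨O⟩(t) ≈ Σₖ aₖ exp(sₖ t) is in hand, the observable's value and gradient at t = 0 follow directly from the fitted amplitudes and poles, supplying one equation of the linear system. A small sketch with made-up numbers, continuing the naming of the earlier illustration:

```python
import numpy as np

# Fitted complex amplitudes a_k and poles s_k = -g_k + i*w_k for one observable
a = np.array([0.4 + 0.1j, 0.4 - 0.1j])    # illustrative amplitudes
s = np.array([-0.3 + 2.0j, -0.3 - 2.0j])  # illustrative poles

value_at_0 = np.real(np.sum(a))           # <O>(0)      = sum_k a_k
gradient_at_0 = np.real(np.sum(a * s))    # d<O>/dt|_0  = sum_k a_k s_k
print(value_at_0, gradient_at_0)
```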
The team successfully demonstrated this approach by learning the Lindbladian for a full layer of gates on a 156-qubit processor. The results indicate that the protocol accurately reconstructs the system's dynamics even under realistic experimental limitations. Future work could focus on mitigating these limitations and extending the protocol to more complex quantum systems and larger numbers of qubits. The authors also suggest exploring optimization techniques to further improve the accuracy and efficiency of the learning process, potentially enabling more precise control and characterization of quantum devices.

👉 More information
🗞 Large-scale Lindblad learning from time-series data
🧠 ArXiv: https://arxiv.org/abs/2512.08165
