Realizing Feynman’s vision for the future of simulation | IBM Quantum Computing Blog

⚡ Quantum Brief
IBM has unveiled a reference architecture for quantum-centric supercomputing, enabling high-performance computing (HPC) centers to integrate quantum systems into existing workflows without overhauling infrastructure. Pre-fault-tolerant quantum computers can now outperform classical methods on specific simulation problems, like molecular ground-state energy calculations, marking a shift from theoretical benchmarks to practical scientific applications. New algorithms like sample-based Krylov quantum diagonalization (SKQD) show this in practice, converging on problems where a leading classical technique fails, as demonstrated in experiments on IBM’s Heron processor. Researchers used quantum-centric workflows to simulate a 300-atom protein and to study a novel "half-Möbius" molecule, pointing to quantum’s growing role in drug discovery and materials science. The architecture provides a scalable blueprint for hybrid quantum-classical systems, ensuring seamless integration with tools like Qiskit and CUDA as quantum hardware matures.

A reference architecture for quantum-centric supercomputing is extending useful quantum computing to HPC centers.

Our mission is to bring useful quantum computing to the world. So, what is useful quantum computing, and how do you bring it to the world? For physicists, quantum computers have been useful since IBM put them on the cloud a decade ago. These systems served as hands-on ways to explore the rules underlying the universe, where each new quantum computer was the largest test of those rules to date. But to the broader world, quantum’s value will come from its advanced computational abilities—such as predicting a chemical’s physical properties beyond anything possible on today’s computers. This would be a revolutionary tool for problems like drug discovery or catalyst design. Famed physicist Richard Feynman put it best during a lecture at the 1981 Physics of Computation Conference, co-sponsored by MIT and IBM:

"Nature isn't classical, dammit, and if you want to make a simulation of nature, you'd better make it quantum mechanical, and by golly it's a wonderful problem, because it doesn't look so easy."

New demonstrations are driving a clear expansion of that usefulness. Previously, quantum research focused on physics studies, purpose-built demonstrations, and benchmarking against classical methods. Today, new hardware, algorithms, and research from our partners are handling calculations of real scientific relevance and providing insights for cutting-edge experiments in chemistry and beyond. In fact, the current trajectory shows classical methods starting to falter—and even pre-fault-tolerant quantum computers replacing them as the most logical technique for certain simulation problems. The truest realization of Feynman’s vision will soon emerge: simulating an interesting molecule or property with a quantum computer, and then bringing it to life in the lab.

So, how are we bringing it to the world? Today, we’re releasing a detailed reference architecture that demonstrates how quantum fits into today’s supercomputing workflows so computational scientists can recreate these exciting experiments themselves. This reference architecture doesn’t require revolutionary changes to existing infrastructure. Rather, it’s a blueprint for augmenting these workflows with quantum—because only with real quantum hardware and high-performing quantum software can users begin accessing Feynman’s vision for computing’s future, today. For more details on the new reference architecture, read our technical note on the IBM Research blog.

Quantum computers interested Feynman—and interest us—because they let us encode information and manipulate it using the same mathematics that govern the behavior of interacting atoms and molecules. You can efficiently represent these behaviors on a quantum computer using computational objects called quantum circuits. However, classical computers must awkwardly recreate quantum circuits using exponentially many binary logic operations. Quantum computers are also innately noisy and error-prone, and the field is constantly developing new techniques to handle these errors while working toward a large-scale, fault-tolerant quantum computer—one that can detect and correct errors as they arise while tackling valuable calculations. In the past few years, however, ever-improving quantum hardware has emerged that can run quantum circuits that classical computers alone can’t recreate exactly.
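To make that exponential bookkeeping concrete, here is a minimal Qiskit sketch (Qiskit is one of the SDKs named in the architecture below; the circuit is illustrative, not one from the experiments described). A brute-force classical simulator must store one complex amplitude per basis state, so its memory footprint doubles with every added qubit:

```python
# Sketch: why classical simulation of circuits scales exponentially.
# Assumes Qiskit is installed (pip install qiskit); any recent 1.x works.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

n = 20  # qubits
qc = QuantumCircuit(n)
qc.h(0)                      # put qubit 0 in superposition
for q in range(n - 1):
    qc.cx(q, q + 1)          # entangle a chain of qubits

# A classical simulator must track one complex amplitude per basis state:
sv = Statevector.from_instruction(qc)
print(len(sv.data))          # 2**20 = 1,048,576 amplitudes for just 20 qubits
```

Twenty qubits already demand about a million amplitudes; at roughly 50 qubits the statevector alone would outstrip the memory of even the largest supercomputers.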
These demonstrations were interesting, but not necessarily of interest to the scientists hoping to create new molecules, drugs, and materials…that is, until now, thanks to quantum-centric supercomputing (QCSC). Even the highest-performing quantum computers and most efficient algorithms require classical computing to orchestrate workflows, to help fix the errors innate to quantum computation, and to run the computations that classical systems do best. Last month, we demonstrated how and where classical and quantum systems are beginning to work together. New workflows employ GPUs to aid quantum error mitigation techniques, allowing us to remove the effects of noise from computations run on noisy quantum computers.

Equally important are novel quantum-centric supercomputing algorithms designed to offload parts of quantum computations onto classical hardware, like quantum diagonalization methods. These let us compute with quantum circuits on QPUs and tensor mathematics on GPUs simultaneously for molecular simulations. A few different flavors of these algorithms have emerged, but among the most exciting is sample-based Krylov quantum diagonalization (SKQD), whose convergence and verifiability properties set it apart from other near-term quantum algorithms for calculating ground-state energies. For example, in a new preprint, researchers from IBM, RIKEN, and the University of Chicago constructed a set of ground-state energy problems, called Hamiltonians, that satisfy the criteria for SKQD to converge. Going beyond the theory, these problems were tested experimentally using SKQD on an IBM Quantum Heron processor, as well as with a popular classical method called selected configuration interaction (SCI). SKQD successfully converged to the ground state, while SCI failed to do so. The test problems in this study are synthetic and don’t describe any real-world physical systems—but they illustrate the existence of use cases where QCSC running SKQD can outperform leading classical-only methods. A toy sketch of the subspace idea behind these diagonalization methods follows this section.

Thanks to these advances, world-leading chemists, pharmaceutical researchers, and materials scientists are adding quantum to their toolbox as a valid and accurate simulation technique alongside well-established algorithms like SCI, the density matrix renormalization group (DMRG), and coupled cluster methods like CCSD. One such paper from the Cleveland Clinic Foundation (CCF) predicts the energies of different configurations of the 300-atom Tryptophan-cage miniprotein, a synthetic protein that serves as a ubiquitous lab rat for computational studies—among the largest molecular simulations yet. This work employs a technique called wave function–based embedding (EWF) to fragment (and reconstruct) the molecule’s Hamiltonian, then calculates the energies of the most challenging fragments using sample-based quantum diagonalization (SQD).

Meanwhile, researchers from IBM, Oxford, the University of Manchester, ETH Zurich, École Polytechnique Fédérale de Lausanne, and the University of Regensburg enlisted quantum to help study entirely new molecules. Using time-tested atomic force microscopy (AFM) and scanning tunneling microscopy techniques, the team led by IBM’s Leo Gross engineered a new “half-Möbius” molecule—a ring of carbon atoms whose electronic structure forms with a half-twist as you go around. Further, they used an SQD-based algorithm called SqDRIFT to predict the properties and behaviors of this molecule. Simulating these experiments strains classical methods, and we are nearing the limit of how far classical compute alone can be pushed.
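For intuition, here is a toy numpy sketch of the subspace idea these diagonalization methods share: sample computational-basis configurations (on real hardware, from circuits run on a QPU), project the Hamiltonian into the sampled subspace, and diagonalize the small projected matrix classically. The random Hamiltonian and uniform sampling below are stand-ins for illustration only; SKQD samples from Krylov-like circuits, and the real Hamiltonians are molecular.

```python
# Toy sketch of the sample-based diagonalization idea behind SQD/SKQD:
# sample basis configurations (on real hardware, from a quantum circuit),
# project the Hamiltonian into the sampled subspace, diagonalize classically.
# The Hamiltonian here is a random Hermitian stand-in, NOT a molecular one.
import numpy as np

rng = np.random.default_rng(0)
dim = 2**10                       # full Hilbert space of a 10-qubit toy problem
H = rng.normal(size=(dim, dim))
H = (H + H.T) / 2                 # make it Hermitian (real symmetric here)

# Stand-in for QPU sampling: pretend measurements returned these basis states.
samples = rng.choice(dim, size=64, replace=False)

# Project H onto the span of the sampled computational-basis states...
H_sub = H[np.ix_(samples, samples)]

# ...and diagonalize the small projected matrix on a CPU or GPU.
approx_ground = np.linalg.eigvalsh(H_sub)[0]
exact_ground = np.linalg.eigvalsh(H)[0]   # feasible only at toy sizes
print(approx_ground, exact_ground)        # subspace energy upper-bounds exact
```

Because the sampled basis states are orthonormal, the smallest eigenvalue of the projected matrix is a variational upper bound on the true ground-state energy; the quality of that bound depends entirely on how well the sampled configurations cover the ground state, which is where the quantum circuit earns its keep.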
Meanwhile, we see a clear trajectory of quantum producing ever-improving results where those classical-only techniques will fail. Soon, we hope to see the fullest realization of Feynman’s quantum simulator: a computer that predicts a molecule’s properties, allowing us to blueprint a material for storing energy or a new molecule for fighting disease and then craft it in the lab. These results demonstrate the ingredients required for this workflow. Quantum is now a tool capable of performing useful scientific work as part of a quantum-centric supercomputing workflow.

But how can computational scientists begin to forge similar paths on their own? What if you have an interesting chemistry or optimization problem and hope to explore the potential of quantum algorithms and quantum hardware in these spaces? How do quantum and HPC centers scale together? Performing quantum simulations beyond leading classical methods requires a few things: access to a quantum computer, access to classical computing, and an architecture governing how the two communicate.

Today, IBM has released a reference architecture for quantum-centric supercomputing. This document is a blueprint for computation centers whose computational scientists are excited to explore quantum in their workflows. At the same time, it’s also a roadmap for how these hybrid systems will scale up and out as quantum and classical mature. But we’re not re-inventing the wheel. With this architecture, we intend to complement and co-design with today’s high-performance computers so computational scientists can easily pull quantum into their existing HPC workflows.

At the highest level, the architecture considers quantum-centric applications: programs that incorporate both quantum and classical libraries for workflows like simulation, optimization, or differential equation solving. One layer down, these libraries map problems to appropriate data structures, including tensors and quantum circuits, the core units of computation. In turn, the middleware layer prepares these structures to run on the appropriate hardware: tools like OpenMP, MPI, and SHMEM prepare data for processing on GPUs using CUDA, Triton, and PyTorch, while quantum SDKs like Qiskit, TKET, and Cirq prepare circuits to run on QPUs. Below the middleware are the workflow and resource management tools that perform orchestration and allocate resources across the appropriate hardware. The quantum resource management interface (QRMI) is one such open tool: a vendor-agnostic library for HPC systems to access, control, and monitor quantum computational resources. And finally comes the actual processing and post-processing, executed on the hardware these layers have orchestrated.

We use five use-case categories to guide orchestration at this lowest layer, incorporating QPUs and interconnects to scale up and scale out systems of CPUs and GPUs. For example, algorithms like SKQD require scale-out and closed loops, yielding temporal and spatial coupling considerations. Meanwhile, error mitigation requires high-throughput CPU and GPU resources, while exploring error correction will call for low-latency classical systems integrated more closely with the QPU. With this reference architecture, computational centers can now bring quantum computing to their own CPU and GPU clusters and fit them into an overarching quantum-centric workflow, as the sketch below illustrates.
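To show how those layers meet in practice, here is a minimal, hypothetical sketch of one closed quantum-classical loop in Python with Qiskit. The function names and the generic backend interface are illustrative placeholders, not the QRMI API or the architecture’s actual interfaces; consult the reference architecture document for those.

```python
# Hypothetical sketch of one pass through the layered workflow described
# above: application -> circuit -> middleware (transpile) -> QPU dispatch ->
# classical post-processing. Function names are illustrative, not the QRMI API.
from qiskit import QuantumCircuit, transpile  # quantum SDK (middleware layer)

def build_circuit() -> QuantumCircuit:
    """Application/library layer: map the problem to a quantum circuit."""
    qc = QuantumCircuit(2)
    qc.h(0)            # superposition on qubit 0
    qc.cx(0, 1)        # entangle qubits 0 and 1
    qc.measure_all()
    return qc

def run_workflow(backend, shots: int = 4096) -> str:
    """Workflow layer: couple QPU sampling with classical post-processing."""
    qc = transpile(build_circuit(), backend)   # fit circuit to the hardware
    job = backend.run(qc, shots=shots)         # resource layer: QPU dispatch
    counts = job.result().get_counts()
    # The classical side (CPUs/GPUs) would do error mitigation or
    # diagonalization here; returning the most frequent bitstring
    # stands in for that post-processing step.
    return max(counts, key=counts.get)
```

With qiskit-aer installed, run_workflow(AerSimulator()) exercises the whole loop on a laptop; in a quantum-centric supercomputer, the same shape of loop runs with the QPU dispatched through resource managers like QRMI and the post-processing spread across GPU nodes.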
Further, they can plan for and anticipate how quantum and classical will continue to grow together as quantum matures and new applications arise. The architecture shows computational scientists who are interested in quantum computing and have access to HPC how to take the steps required to explore Feynman’s vision today. Given our ever-maturing AI infrastructure, we’re building and investing in a future reliant on ever-growing GPU clusters, which we’re ready to augment with quantum. Feynman presented a vision for the future of simulation—and that future is emerging now. IBM is committed to helping you realize that future yourself.


Tags

quantum-computing
ibm

Source Information

Source: Google News – Quantum Computing