
AI Designs Better Drug Candidates with Quantum Aid

Quantum Zeitgeist

Researchers are tackling the challenge of generating high-quality, drug-like molecules with deep generative models, a technology poised to accelerate pharmaceutical research and development. Hayato Kunugi, Yosuke Iyama, and Yutaro Hirono of the Innovation to Implementation Laboratories, Central Pharmaceutical Research Institute, Japan Tobacco Inc., working in collaboration with Mohsen Rahmani, Matthew Woolway, Vladimir Vargas-Calderón, William Kim, Kevin Chern, and Mohammad Amin at D-Wave Systems Inc., present a novel framework integrating deep generative models with quantum annealing computation. Their approach utilises a newly developed Neural Hash Function (NHF) for both regularisation and binarisation, bridging the continuous and discrete signals passed between the classical neural networks and the quantum annealer within the objective function. Significantly, compounds generated by this method demonstrate improved validity and drug-likeness compared with those from traditional models, even surpassing the characteristics of the original training data without specific optimisation constraints. These findings suggest a substantial advancement in stochastic generator design, enhancing feature-space sampling and extraction for effective drug discovery. The motivation is scale: the space of potential drug candidates is estimated at 10⁶⁰ molecules, making discovery intensely challenging. This work offers a pathway to navigate that complexity, designing more effective compounds and shortening the path to market.
Existing molecular generative models often struggle to produce enough compounds resembling effective drugs, hindering the pace of discovery. Compounds created using this quantum-annealing approach demonstrated improved validity and drug-likeness compared with those generated by traditional, fully classical models. Remarkably, the newly generated molecules even surpassed the drug-likeness of the original training data, without any deliberate optimisation to favour such outcomes. This suggests a powerful ability to sample and extract key features relevant to drug design, extending the performance of existing feature-space exploration techniques.

The scale of the challenge is considerable: of an estimated 10⁶⁰ molecules in the total chemical space, only approximately 10¹⁰ are currently considered synthesizable, a disparity that demands more efficient search strategies. By combining the strengths of deep learning with the unique capabilities of quantum annealing, this research offers a potential pathway to navigate this immense space more effectively. Once validated, the technology could substantially reduce both the time and cost of bringing new medications to market.

Achieving this required a departure from conventional quantum machine learning approaches. Most early work focused on gate-based quantum circuits, which often face scalability issues due to trainability problems. This study instead leveraged quantum annealing, an analogue approach that maps learning problems onto an Ising Hamiltonian and searches for low-energy configurations. Sampling from these configurations, guided by training the Hamiltonian, allows for large-scale exploration of complex energy landscapes.
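To make the Ising picture concrete, the sketch below minimises a tiny, hypothetical Ising Hamiltonian with classical simulated annealing, a stand-in for the quantum hardware; the fields and couplings are illustrative, not taken from the paper.

```python
import math
import random

def ising_energy(spins, h, J):
    """Energy of a configuration: E = sum_i h_i*s_i + sum_(i,j) J_ij*s_i*s_j."""
    e = sum(h[i] * s for i, s in enumerate(spins))
    e += sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e

def anneal(h, J, n_sweeps=2000, seed=0):
    """Metropolis single-spin-flip annealing on a ramped inverse temperature,
    tracking the best (lowest-energy) configuration seen."""
    rng = random.Random(seed)
    n = len(h)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    best, best_e = list(spins), ising_energy(spins, h, J)
    for sweep in range(n_sweeps):
        beta = 0.1 + 5.0 * sweep / n_sweeps  # anneal from hot to cold
        for i in range(n):
            # Local field on spin i: h_i plus couplings to its neighbours.
            local = h[i]
            for (a, b), Jij in J.items():
                if a == i:
                    local += Jij * spins[b]
                elif b == i:
                    local += Jij * spins[a]
            dE = -2.0 * spins[i] * local  # energy change if spin i flips
            if dE < 0 or rng.random() < math.exp(-beta * dE):
                spins[i] = -spins[i]
                e = ising_energy(spins, h, J)
                if e < best_e:
                    best, best_e = list(spins), e
    return best

# A toy 3-spin problem (values chosen purely for illustration).
h = [0.5, -0.5, 0.0]
J = {(0, 1): 1.0, (1, 2): -1.0}
ground = anneal(h, J)
```

A quantum annealer plays the role of `anneal` here: rather than flipping spins one at a time, it samples low-energy configurations of the trained Hamiltonian in hardware, which is what gives the generative model its sampling power.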
Recent experiments have shown that quantum annealing can achieve a scaling advantage in finding these low-energy states, and that the resulting sampling distributions are difficult to replicate with classical simulations. This work builds upon Variational Autoencoders (VAEs), a type of generative model commonly used for creating and analysing chemical compounds. VAEs approximate the distribution of latent variables and optimise an evidence lower bound, enabling efficient training. A key hurdle in integrating VAEs with quantum computing, however, lies in the non-differentiability of converting continuous data into the binary representation required by quantum systems. To address this, the researchers introduced a novel approach that allows gradients to be backpropagated through the latent variables, enabling end-to-end training of the entire system.

Novel generative modelling expands accessible chemical space and drug candidate quality

The generated compounds exhibited improved validity and drug-likeness, surpassing the characteristics present within the original training data without deliberate optimisation. This success stems from integrating the novel neural network architecture with a stochastic generator and quantum annealing techniques. The framework was developed precisely to navigate the vast gap between the theoretically possible chemical space and the small fraction of it that can practically be synthesised in a laboratory, a search space where conventional methods struggle.
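The article does not spell out how gradients cross the non-differentiable binarisation step mentioned above. A standard technique for this, shown here only as an illustrative sketch and not as the paper's exact mechanism, is the straight-through estimator: binarise on the forward pass, but treat the operation as a (clipped) identity on the backward pass.

```python
def binarize_forward(h):
    """Forward pass: hard-threshold the continuous code H into ±1 bits Z."""
    return [1.0 if x >= 0.0 else -1.0 for x in h]

def binarize_backward(grad_z, h, clip=1.0):
    """Straight-through backward pass: pass the decoder's gradient through
    as if binarisation were the identity, zeroing it where |h| exceeds a
    clip threshold (a common stabilisation)."""
    return [g if abs(x) <= clip else 0.0 for g, x in zip(grad_z, h)]

# Example: three latent coordinates and a downstream gradient signal.
h = [0.3, -1.7, 0.05]
z = binarize_forward(h)                          # bits fed to the sampler
grad_h = binarize_backward([0.5, 0.5, 0.5], h)   # gradient reaching the encoder
```

This is what makes end-to-end training possible: the encoder still receives a useful error signal even though the hard threshold has zero gradient almost everywhere.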
Once implemented, the generative models produced compounds with demonstrably superior qualities. Validity, referring to the chemical plausibility of a generated structure, was notably increased. Simultaneously, drug-likeness, a measure of how closely a molecule resembles known pharmaceuticals, also improved. The gains were not merely incremental: the generated molecules exceeded the drug-likeness observed in the training dataset itself, and the models can now sample and extract features characteristic of good drug design with extended performance. The integration of a D-Wave computer proved essential, as the framework leverages the intrinsic sampling power of quantum annealing for quantum-enhanced generative modelling. The implications extend beyond generating more “drug-like” molecules: by harnessing quantum annealing, the research offers a potential pathway to accelerate drug discovery, reducing both the time and cost associated with bringing new medications to market, a benefit with far-reaching consequences for healthcare and pharmaceutical innovation.

Molecular Representation Learning via Discrete Latent Variable Autoencoding

A Transformer-based encoder-decoder architecture incorporating discrete latent variables served as the foundation for this work. The system processes tokenized SMILES strings, a standard text representation of molecular structures, in which each molecule is a sequence of N tokens drawn from a vocabulary of size V. These tokens are first embedded into a d_model-dimensional space before being fed into the encoder, f_φ, which comprises Transformer layers, a Neural Tensor Network block, and multilayer perceptron (MLP) layers, ultimately producing fixed-length representations H ∈ ℝ^(N×D). Binary latent variables Z are then derived from H through a binarization function and input into a decoder, g_θ, which also includes a Neural Tensor Network and Transformer layers. Outputs from the decoder are passed through a softmax layer to predict the probabilities of reconstructing the original tokens X.

The model is trained with the Neural Hash Function (NHF) loss, L_nhf, which comprises three distinct components: a reconstruction loss, L_rec, calculated as the cross entropy between input and reconstructed tokens; a prior loss, L_prior, approximating the cross entropy of the prior distribution; and a quantization loss, L_quant, designed to refine the binary codes produced during binarization. The quantization loss includes a Frobenius-norm term to minimise the difference between Z and H, alongside a term encouraging orthogonality of the weight matrices within the encoder's MLP layers. To optimise the model parameters with stochastic gradient descent, the gradients of L_nhf with respect to the decoder, prior, and encoder parameters were defined and calculated, allowing efficient updates from the error signal. The entire model structure and the training and generation workflow are visually represented in Figure 1.
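The quantization term can be sketched directly from its description: a squared Frobenius distance between Z and H plus an orthogonality penalty on an encoder weight matrix. The weighting factor `lam` below is an assumption for illustration; the paper's actual balancing of the terms is not given here.

```python
def frobenius_sq(A, B=None):
    """Squared Frobenius norm of A, or of the difference A - B."""
    if B is None:
        B = [[0.0] * len(row) for row in A]
    return sum((a - b) ** 2
               for ra, rb in zip(A, B) for a, b in zip(ra, rb))

def orthogonality_penalty(W):
    """||W^T W - I||_F^2 for a weight matrix W given as a list of rows."""
    cols = len(W[0])
    WtW = [[sum(W[r][i] * W[r][j] for r in range(len(W)))
            for j in range(cols)] for i in range(cols)]
    I = [[1.0 if i == j else 0.0 for j in range(cols)] for i in range(cols)]
    return frobenius_sq(WtW, I)

def quantization_loss(Z, H, W, lam=0.1):
    """L_quant ≈ ||Z - H||_F^2 + lam * ||W^T W - I||_F^2
    (lam is an illustrative weighting, not from the paper)."""
    return frobenius_sq(Z, H) + lam * orthogonality_penalty(W)
```

Pushing H toward its own binarisation Z while keeping the MLP weights near-orthogonal is what lets the continuous encoder output double as a clean binary code for the annealer.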
Validity, defined as the proportion of SMILES strings successfully converted into molecular structures, was used as a key metric for evaluating the generative models' performance.

Quantum computing refines artificial intelligence for targeted molecular discovery

Scientists are beginning to reshape the search for new medicines, moving beyond generating just any molecule toward generating molecules with a real chance of becoming viable drugs. By combining deep learning with the unusual power of quantum computing, this work does not promise instant cures, but it represents a subtle shift in how molecular design is approached, aiming to expand the pool of genuinely drug-like candidates. The integration of quantum annealing, a specialised form of quantum computation, is not merely a technological flourish: by employing a quantum computer to refine the generative model, the team achieved a noticeable improvement in the drug-likeness of the molecules produced, surpassing even the characteristics of the data used to train the system. The ability to generate compounds with improved properties without explicit instruction, once considered a distant prospect, is a compelling step forward. It is important to acknowledge, however, that quantum computers are not yet ready to replace conventional high-performance computing; at present, the benefit appears to lie in refining the generative process rather than in enabling the creation of entirely new molecular structures. Beyond this, the field needs to address the critical issue of synthesizability: can these computationally designed molecules actually be made in a laboratory, and at what cost?
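The validity metric defined above is simple to compute: parse every generated string and take the fraction that succeed. Real pipelines use a cheminformatics toolkit such as RDKit (`Chem.MolFromSmiles`); the sketch below substitutes a toy bracket-balance check for the real parser, purely to illustrate the metric.

```python
def looks_parseable(smiles):
    """Toy stand-in for a real SMILES parser (e.g. RDKit's Chem.MolFromSmiles):
    only checks that '(' / '[' brackets are balanced, for illustration."""
    pairs = {')': '(', ']': '['}
    stack = []
    for ch in smiles:
        if ch in '([':
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False
    return not stack

def validity(samples, parser=looks_parseable):
    """Proportion of generated SMILES strings accepted by the parser."""
    if not samples:
        return 0.0
    return sum(1 for s in samples if parser(s)) / len(samples)

# Two plausible strings and one truncated fragment -> validity 2/3.
score = validity(["CCO", "c1ccccc1", "C(C("])
```

Swapping `parser` for a genuine chemistry parser turns this into the metric reported in the paper: the toy check accepts strings no chemistry parser would, so it only demonstrates the bookkeeping, not the chemistry.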
Future work will likely focus on bridging this gap, perhaps by incorporating automated synthesis planning into the design loop and by exploring different quantum algorithms to further enhance the efficiency of the search. Ultimately, the true measure of success will be whether this approach translates into faster, cheaper, and more effective drug development.

More information: Molecular Design beyond Training Data with Novel Extended Objective Functionals of Generative AI Models Driven by Quantum Annealing Computer. ArXiv: https://arxiv.org/abs/2602.15451

Tags

quantum-annealing
drug-discovery
quantum-machine-learning
quantum-investment
quantum-computing
quantum-algorithms
google
d-wave
partnership

Source Information

Source: Quantum Zeitgeist