Startup Quantum Elements Brings AI, Digital Twins To Quantum Computing - The Next Platform

AI is having a seismic impact on processes across all industries, speeding things up and driving costs down. And digital twins aren’t immune to the tectonics. McKinsey & Co wrote that creating a digital twin for specialized applications such as routing vehicles or creating a production schedule for multiple machines can take six months or more. Large language models can spin up code for digital twins, which would reduce the labor and time needed to create them. Analysts with the global consultancy wrote about the developing symbiotic relationship between the two, saying that “Gen AI can structure inputs and synthesize outputs of digital twins, and digital twins can provide a robust test-and-learn environment for gen AI. By combining these technologies, organizations could produce synergies that reduce costs, accelerate deployment, and provide substantially more value than either could deliver on its own.” AI could even go so far as to create “a generalized digital-twin solution – a foundational, universal model – that facilitates design and serves as a starting point for developers across digital-twin projects and even industries,” they wrote.

Quantum Elements, a startup that came out of stealth this fall with backing from QNDL Participations – which funds and supports quantum startups and founders – and the USC Viterbi School of Engineering, is using that combination of AI and digital twins in its Constellation platform, which is aimed at accelerating the timeline to commercial, fault-tolerant quantum computing. The company’s AI-native quantum development platform includes AI agents, natural language interfaces, and Quantum Elements’ simulation tools that organizations can use to generate code and to create, run, and test quantum algorithms and applications.
At the same time, companies can use the platform and what the founders call an “advanced noisy-qubit simulator” to create virtual prototypes – digital twins – of quantum systems, a key step in reducing the time and costs needed to adopt the still-developing technology. Until now, there has been no development platform that offers what’s needed for quantum computing: the ability to see a system and understand how it behaves under perfect conditions, and to generate the massive amounts of data that represent changes in the system and their use cases, according to Izhar Medalsy, co-founder and chief executive officer of Quantum Elements. “The reason this doesn’t exist is very simple,” Medalsy tells The Next Platform. “Hardware is scarce and expensive, it ever evolves, and you have different modalities, you have different chip manufacturers and different chip generations … which is all great. We need that in order to move forward. But what is dragging behind is the ability to virtualize and simulate those systems as they are at scale. The point is you need the digital twin. You need the ability to look at the system in the same way that flow simulators are simulating the flow of air on the wing of an airplane, or in the same way that Cadence and Ansys or Synopsys are simulating transistors in order to be able to virtualize those huge GPUs and CPUs and be able to predict how the next generation is going to look.” The expected timetable to true fault-tolerant quantum systems appears to be contracting as a host of major vendors like IBM, Microsoft, Google, and Amazon Web Services (AWS) take steps to address the challenge of error correction and qubits – not only the number that can run in a system but also ensuring enough stability to make them do real work – while a growing number of pure-play companies push their own roadmaps.
Quantum Elements wants to use AI-fueled simulation capabilities to make it easier to develop and run quantum software and hardware. Simulating quantum infrastructure is much more difficult than with classical systems, Medalsy says, pointing to the complex nature of qubits and the expanding number of modalities – from superconducting and trapped ions to neutral atoms, photonics, and silicon spin – that differ in everything from coherence time (how long qubits can maintain their quantum states) to gate fidelity, connectivity, and scalability. “The biggest difference between classical systems and quantum systems is that in classical systems, a bit is a bit, a zero is a zero, and a one is a one,” the CEO says. “You can use different compilers and different operating systems. At the end of the day, the commands that they’re going to send will be the same. In quantum systems, because of the different modalities and the different systems and the different approaches, each qubit has a different way of behaving. On each qubit, we are realizing computation in a different way, different pulses.” This means that whatever is running on a quantum system needs to be able to understand what kind of modality is being used for the qubit and the changes that it brings. Given the scale that the prototypes can reach, they also can address such issues as error correction – through advanced techniques like surface codes and qLDPC codes – and error suppression. “If in classical systems … a zero is a zero and a one is a one, in quantum systems, if you need to do calibration and then you need to do circuit optimization and error suppression and error correction and error mitigation, all of them have to be hardware-aware,” he says.
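To make the coherence-time differences between modalities concrete, here is a minimal sketch. The T2 figures below are rough orders of magnitude chosen for illustration, not vendor specifications, and the simple exponential dephasing model is a textbook approximation:

```python
import math

# Illustrative T2 (phase coherence) figures for two of the modalities
# named above -- rough orders of magnitude, not vendor specs.
MODALITIES = {
    "superconducting": {"t2_us": 80.0},
    "trapped_ion": {"t2_us": 1_000_000.0},
}

def retained_coherence(t2_us: float, idle_us: float) -> float:
    """Fraction of phase coherence left after idling for idle_us
    microseconds, under simple exponential dephasing: exp(-t / T2)."""
    return math.exp(-idle_us / t2_us)

for name, params in MODALITIES.items():
    c = retained_coherence(params["t2_us"], idle_us=50.0)
    print(f"{name}: {c:.4f} of coherence left after a 50 µs idle")
```

The same 50-microsecond idle that barely touches a trapped-ion qubit costs a superconducting qubit roughly half its coherence, which is why any hardware-aware scheduler or simulator has to carry per-modality timing parameters.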
“If your hardware is very scarce, if it keeps on changing, you need to get to a point where you’re able to predict, simulate, and emulate those components at scale with all the noise and all the things that are prohibiting them from working correctly, and use that to train your AI models. That’s exactly what we’re doing.” With Constellation, organizations can build a digital twin of the hardware in the quantum system they want to test, including the modality it will use. They can then run their applications and algorithms on those simulations to understand how they will perform and adjust accordingly. That includes the noise – environmental disturbances like heat, electromagnetic fields, sounds, or other qubits – that can break apart a qubit’s fragile quantum states, causing errors and decoherence, the loss of information. It drives down the cost and time needed to run the tests because organizations don’t have to build a physical quantum prototype, which can save months of work and hundreds of thousands of dollars. Instead, such work can take minutes. The company claims the platform delivers a 20X improvement in productivity and a 100X improvement in development speed. “You have a physical system,” Medalsy says. “When you connect to this physical system, you can pull all of those metrics that define this system. In quantum, we call them the ‘dephasing rates.’ How quickly will your qubit stop functioning correctly? That defines the time span that this system works and, effectively, the idea of what are the noise models that are governing your system. This is the physical representation. We pulled all of this information, and now we’re giving you a digital representation of your system with all of the noise models that we built.” Quantum Elements’ software can then make changes to the digital twins to ensure the software runs better and gives the best performance.
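A toy version of the noise-model idea Medalsy describes can be sketched as a one-qubit phase-damping channel: take a T2 figure characterized from the physical hardware and apply the corresponding Kraus operators to the qubit’s density matrix. This is the standard textbook dephasing channel, not Quantum Elements’ simulator:

```python
import numpy as np

def dephasing_channel(rho: np.ndarray, idle_us: float, t2_us: float) -> np.ndarray:
    """Phase-damping (pure dephasing) channel on a single-qubit density
    matrix. Populations are untouched; the off-diagonal coherences shrink
    by exp(-t / T2) -- the decay a measured dephasing rate parameterizes."""
    lam = 1.0 - np.exp(-idle_us / t2_us)            # damping strength in [0, 1)
    k0 = np.sqrt(1.0 - lam / 2.0) * np.eye(2)       # Kraus operator ~ identity
    k1 = np.sqrt(lam / 2.0) * np.diag([1.0, -1.0])  # Kraus operator ~ Pauli Z
    return k0 @ rho @ k0.conj().T + k1 @ rho @ k1.conj().T

# Start in the superposition state |+><+| and idle for one T2 period:
plus = 0.5 * np.ones((2, 2))
out = dephasing_channel(plus, idle_us=80.0, t2_us=80.0)
print(np.round(out.real, 4))  # off-diagonal terms have decayed by 1/e
```

A digital twin of a real device would compose many such channels – one per qubit and per gate, plus crosstalk terms – with parameters pulled from live calibration data, which is where the data volumes Medalsy mentions come from.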
As an example, Medalsy pointed to a test of the effect that crosstalk between qubits has on Shor’s algorithm, which is used to find the prime factors of an integer. The challenge is that qubits like to “talk,” he says, so while you’re doing an operation on one, it’s contaminating the other, creating what he calls a “Whack-a-Mole problem.” What Quantum Elements wanted to do was see how that impacted the algorithm. It’s an operation that, without a simulated digital twin, could take four to six months and cost more than $100,000. The scientists would need to fabricate a component, cut the die, cool it down multiple times, and deal with the crosstalk between qubits. Instead, with the platform from Quantum Elements, “you are selecting the platform that you’re interested in – in this case, it was IBM – and which QPU family you are interested in,” Medalsy says. “We’re giving you a canvas, and this digital canvas allows you to build your virtual quantum processor based on your needs. You are selecting your qubits, you are connecting them, and you can choose whichever connectivity you would like to have. Once they’re connected, you can go and control all of those parameters. You can go and plug in the numbers that you would like in order to drive those virtual qubits, what kind of noise models you want to have. You can even control this crosstalk.” For the test, Quantum Elements reached 99 percent accuracy for Shor’s algorithm, which it says is a world record. The two-year-old company – whose other co-founders are Daniel Lidar, chief scientific officer and co-founder and director of USC’s Center for Quantum Information Science and Technology, and Amir Yacoby, a Harvard professor and member of the National Academy of Sciences – not only is getting strong financial backing but also has a range of impressive partnerships, including with quantum players like IBM, AWS, Quantum Machines, Nvidia, and Rigetti, as well as both USC and UCLA.
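For readers unfamiliar with why Shor’s algorithm is the benchmark here: it reduces factoring to order finding, and only the order-finding step needs a quantum computer. A purely classical sketch of that skeleton (brute-forcing the order that the quantum subroutine would find exponentially faster):

```python
from math import gcd

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 with a**r == 1 (mod n). Brute force here; this is
    the step a quantum computer accelerates in Shor's algorithm."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int, a: int) -> tuple[int, int]:
    """Classical skeleton of Shor's algorithm for a cooperative base a:
    reduce factoring n to finding the order r of a mod n, then read the
    factors off gcd(a**(r//2) +/- 1, n)."""
    assert gcd(a, n) == 1, "base must be coprime to n"
    r = find_order(a, n)
    assert r % 2 == 0, "this base gives an odd order; pick another"
    half = pow(a, r // 2, n)  # a^(r/2) mod n
    return gcd(half - 1, n), gcd(half + 1, n)

print(shor_factor(15, 7))  # base 7 has order 4 mod 15, yielding factors 3 and 5
```

On real hardware, crosstalk during the modular-exponentiation circuit corrupts the measured order, which is why Quantum Elements used exactly this algorithm to stress-test its crosstalk models.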
“For me, it’s as clear as it can be that this is the enabling technology to accelerate the field,” Medalsy says. “If you think about how other industries evolved, do we see the aviation industry without flight simulators or without flow dynamics? Do we see the classical device community and industry without simulating the performance of layouts and systems? Or do we see AI and self-driving cars without the ability to augment and provide digital twins of the cars that are driving on the road? This is something that is a must.” AI is emerging as quantum computing’s missing ingredient.
