
Podcast with Bob Sorensen, SVP Research & Chief Quantum Analyst at Hyperion Research

Quantum Computing Report
⚡ Quantum Brief
- HPC centers are adopting a problem-first quantum strategy: identify computational bottlenecks, benchmark classical solutions, then evaluate quantum accelerators against clear ROI and procurement targets.
- Cloud vs. on-prem quantum deployment remains split, with cost and upgradeability driving decisions; vendors must offer modular systems with short lifecycles to match rapid quantum advancements.
- HPC managers prioritize seamless integration over quantum specifics, fearing workflow disruptions; their focus is on reducing time-to-solution and queue wait times without operational headaches.
- Quantum advantage now includes speed, capability, and power efficiency, with real-world gains like HSBC's 34% productivity boost in bond trading proving more compelling than theoretical supremacy claims.
- Error-correction architecture is reshaping modality choices, with trapped ions and neutral atoms gaining traction due to superior scalability and low-density parity-check code compatibility.
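The problem-first playbook summarized above can be sketched as a simple triage calculation: price the classical fix, price the quantum fix, and compare both against the cost of doing nothing. This is an editor's illustrative sketch only; the `PainPoint` and `triage` names and every dollar figure are hypothetical, not from the podcast.

```python
# Illustrative sketch of the problem-first playbook: price the classical fix,
# price the quantum fix, and compare both against the cost of doing nothing.
from dataclasses import dataclass

@dataclass
class PainPoint:
    name: str
    cost_of_doing_nothing: float  # annual cost of leaving the bottleneck unsolved
    classical_fix_cost: float     # estimated cost to solve it classically
    quantum_fix_cost: float       # estimated cost to solve it with a quantum accelerator
    quantum_addressable: bool     # can quantum plausibly address it at all?

def triage(points):
    """Pick the cheapest credible option for each pain point, or 'do nothing'."""
    decisions = []
    for p in points:
        options = {"classical": p.classical_fix_cost}
        if p.quantum_addressable:
            options["quantum"] = p.quantum_fix_cost
        best = min(options, key=options.get)
        if options[best] < p.cost_of_doing_nothing:
            decisions.append((p.name, best, options[best]))
        else:
            decisions.append((p.name, "do nothing", p.cost_of_doing_nothing))
    return decisions

# Hypothetical portfolio of pain points (all figures invented for illustration)
portfolio = [
    PainPoint("queue wait times", 2.0e6, 1.5e6, 2.5e6, True),
    PainPoint("molecular dynamics time-to-solution", 5.0e6, 6.0e6, 3.0e6, True),
    PainPoint("I/O bottleneck", 0.8e6, 0.3e6, 0.0, False),
]

for name, decision, cost in triage(portfolio):
    print(f"{name}: {decision} (${cost:,.0f}/yr)")
```

The point of the exercise, as Sorensen argues below, is that the output doubles as a business case: each line carries a number you can take to the C-suite.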

Yuval Boger interviews Bob Sorensen of Hyperion Research about the growing convergence of quantum computing and high-performance computing. They outline a problem-first adoption playbook for HPC centers: identify bottlenecks, benchmark classical options and costs, then evaluate quantum as an accelerator with clear ROI and procurement targets. Sorensen weighs cloud versus on-prem tradeoffs, argues quantum hardware needs short lifecycles with upgrade paths, and explains why HPC managers mainly worry about seamless integration. They close with practical definitions of quantum advantage (speed, capability, and power), real-world case studies, and why error-correction-driven architecture is increasingly shaping modality decisions.

Transcript

Yuval: Hello, Bob, and thank you for joining me today.

Bob: Most glad to be here, and thanks for the invitation to come back and have a chat.

Yuval: So remind me, who are you and what do you do?

Bob: My name is Bob Sorensen. I work for a little consulting firm called Hyperion Research. We do basically advanced computing insights. It's kind of our business. And my particular area of coverage for—gosh, it's been almost eight, nine years now—is what quantum computing and HPC can do for each other going forward. It's fascinating because we're reaching a time period now where the reality of quantum is hitting the HPC user community and they're starting to ask all the right questions about what do we do next and what's the potential and what does the future hold for us. So it's a real exciting time to be an HPC end user and have this potential performance accelerator really right around the corner. And the preparations should really start soon to make that happen across the HPC ecosystem.

Yuval: Just to explain, I do know who you are, but just in case someone who's listening to this episode doesn't, then they should.
I think there's a hypothesis that says, okay, we quantum vendors understand that the future is hybrid, that not very many applications are going to run purely on quantum computers, that the HPC connection is required, that sometimes the quantum computer will be on the cloud, sometimes the quantum computer will be on-premises. So first, is that sort of universally accepted?

Bob: I would moderate that statement a little bit and not say that quantum requires HPC as much as HPC end users will be the most interested in quantum early on, because they are the ones that are most flexible and amenable to adopting new technologies. In some sense, we're not ignoring, but at least compensating for, the fact that the inclusion may be somewhat complicated, but they're always looking for the next best computational boost, and quantum brings that to the table. So while we could talk about the requirements for quantum and HPC to work in a hybrid environment, what that really means, are there only applications that require a supercomputer or not? The jury's still out on that. To me, the bottom line is that HPC end users are where the most interest is going to be going forward. And those are the end users who are probably going to end up helping vendors develop applications and use cases that maybe move out beyond the HPC environment and become much more friendly and amenable to a lot of enterprise applications as well. But that'll take a little bit more time, I think.

Yuval: It seems that HPC centers are indeed catching on, at least circumstantially. We see, for instance, that quantum sessions at Supercomputing have become much more popular than they were in the past; now people are sometimes waiting outside the room, whereas it was really difficult to fill three rows in the past. Other than listening, what are HPC managers doing about it, and what should they do to think about quantum?

Bob: Well, first off, as I've been telling people, there was the confluence, the coincidence, of going to SC25 in St. Louis and seeing that all of the quantum computing seminars and birds-of-a-feather meetings and panel discussions had been planned by the SC organizers saying, hey, we know this is important. And then those rooms were packed. And then going to Q2B a few weeks later out in Santa Clara and seeing how much interest there was from the HPC community in dealing with quantum. So we really have this meeting of the minds. And that's why one of the things that I was invited to give at Q2B was really a talk on how HPC centers should strategically think about the adoption of quantum computing. And I'm not talking about making sure this plug fits that plug, or this API has a hook into a quantum application. It's more about what's the strategic plan. And the thing that struck me in preparing this was that the first couple of steps in that process have nothing to do with quantum. They have everything to do with HPC centers looking at their current computational workload and isolating the pain points that matter most to them. And by pain points, we could say: are you interested in time to solution, time to science, reducing queue wait times for some of your most expensive subject matter experts? Something along those lines that really matters to you, and isolating those things. And the first step is to look at those issues and say, how can we solve these problems classically? Decide if the technology exists, decide if you could implement it in a reasonable amount of time, and decide if you can afford it. And by putting a price tag on addressing those pain points, you've now started to build a valuable use case for what quantum brings to the table. So at that point, once you have all that in hand, you start to look at quantum opportunities to address the identified key pain points. You've got that base and you say, what can quantum address here? Can it address X, Y, and Z? Yes. Can it address A, B, C? No. So you concentrate on the pain points.
You look at what it takes to address those using quantum, and you start to assess how much you're willing to spend to do that, because you've already done the work on the classical counterparts. And only at that point do you really start to engage vendors and think about a procurement process that works for you. But the key point here is that because you've built a business case for quantum versus classical on the jobs that matter most, you can go to your C-suite and start talking numbers, budgets, schedules, procurement opportunities and such in a much more business-like way, as opposed to just saying, "Hey, quantum's cool, let's go do it." And so that's really what I think the smart HPC sites are going to start doing: looking at this problem as an analytical process to address the main pain points within their advanced computing ecosystem. And once you start to do that, you will either go down the path of quantum or you'll go down the path of classical or something in between. But the key here, which I always like to say, is it allows you to assess the cost of doing nothing. How much do we hurt if we don't address these pain points, either classically or with quantum? And that generates an argument that is almost undeniable when you get to the people who write the checks. So to me, those are the steps that need to be taken. Once you have all that in hand, you can engage with vendors. You can discuss how they can solve your problems, because you don't want to go to a vendor and ask them, "What's your qubit roadmap? What are your gate fidelities? How do you feel about CNOT gates?" You can say to them, "Here are my problems. Can you solve them? And can you guarantee me a certain level of performance increase to address them?" Those are the questions you want to ask, and those are the questions that a thoughtful quantum computing supplier will be able to answer.

Yuval: Who do you think quantum computing companies should be marketing to?
And let me give you a pharmaceutical analogy. A pharmaceutical company could say, "Hey, if you have pain, you should really take Tylenol." So they're advertising directly to the end user. And sometimes they're saying, "Talk to your doctor about prescribing XYZ." So should quantum companies be talking to HPC managers and saying, "This is coming. This is why you should really think about it," or should they be focusing on the end user and saying, "Hey, talk to your HPC manager to see how quantum should integrate into your compute infrastructure"?

Bob: It's interesting, because right now you're not going to be able to go to a company and find the VP in charge of quantum adoption. The job titles aren't there. And we actually did a survey a while ago where we talked to about 200 different enterprise organizations, HPC enterprise users. And we asked: if you are quantum curious, or even someone who's looking at quantum perhaps from a proof-of-concept perspective or just doing research on it, who are the main drivers? What are the job titles of the main drivers of that effort within your company? And we found this wonderful diversity in who was interested. Sometimes it came down from above, a senior VP who reads an article in Fortune that says you'd better jump on the quantum bandwagon, all the way down to a subject matter expert who is a recent hire, who did computational chemistry research as a student and understands that quantum brings something to the table, and everybody in between. HPC centers, those guys are always looking for new technology; they go to conferences like SC, hear about and see the buzz around quantum, and start to scratch their heads and say, what should we be doing here? So the answer is there's no single answer. And that's the complexity of this: as a quantum computing supplier, you have to have a narrative that can adjust to the kind of person you're talking to.
So if you're talking to a computational chemist at an oil and gas company, or a geophysicist, you have to understand what their language is. If you're talking to the CFO of that company, you have to bring a different skill set to the problem. And that, in some sense, to my mind, is a major challenge in the quantum computing sector, because they haven't developed the narrative for the broad class of people who ultimately are going to be making the decisions and driving the introduction and adoption of quantum into their overall compute capability.

Yuval: Let's talk a little bit about cloud versus on-premises. And I think you're sort of very proud of your prediction. I mean, you published your assertion that it would be split roughly half and half between cloud and on-prem, and it ended up being 52/48. So good for you. So first, is the cloud the enemy or the friend of the HPC manager? I mean, you could imagine the CFO coming to the HPC manager and saying, "Why do you need to buy all these expensive computers? Just use them on the cloud."

Bob: Well, I'm using our experience in HPC here to think about quantum going forward. And we have a couple of sayings here. The first one is: if you have the potential to utilize an HPC system on-prem at 30% utilization or more, stay on-prem, because cloud is probably going to cost more. The second one is: everyone loves cloud until they get the first monthly bill. And there's the issue right there. For a lot of aggressive HPC end users, the cloud does become very expensive and somewhat difficult to manage in many respects. And ultimately you're making some sacrifices. You don't get the exact architecture you need. In some cases, you don't get the kind of workload-specific hardware that may be best suited for your application. But you do get some really interesting things. You get access to the newest technology that was rolled out the week before.
Nowadays, with HPC procurements, you could buy a machine today that you're going to have to live with for the next five, six, or seven years. If you do that, you may be looking at a machine that, as it reaches its end of life, is two or three generations behind what's commercially available and leading edge, especially if you're on the NVIDIA train. So there you have the advantage of moving to the most recent, leading-edge technologies available in the cloud that you can't get in your on-prem environment. So there's good and bad, but a lot of times it does come down to cost: cloud can cost significantly more than an on-prem environment if you have your workload defined and it's not dramatically changing over time.

Yuval: And what is your expectation of the lifetime of a quantum computer? I mean, you mentioned five to six years for classical HPC. We are recording this in 2026. Let's go back five or six years; we're at 2020 or 2021. So now you're using a computer that has what, four qubits and 95% fidelity? I mean, doesn't it seem like the rate of change in quantum computing is so much faster that the math should really be different?

Bob: Yeah, I believe that's the case. And I think that the smart quantum vendor understands that. As I jokingly like to say, there's nothing more obsolete than a four-year-old quantum computer four years from today. So you're not going to buy a quantum system today and keep it running on the shop floor for the next five to seven years. It has to be upgradable. It has to be modular. It has to offer midlife kickers or some other capability that allows you to stay on the trajectory of continual quantum improvement, which is moving at a pace that is seriously above what we used to see. In the old days, we used to count HPC performance gains as doubling every 13 or 14 months. Quantum, from my perspective, is moving beyond that.
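Bob's "30% utilization" rule of thumb from the cloud-versus-on-prem discussion above can be made concrete with a one-line break-even calculation. This is an editor's sketch; the function name and every price below are hypothetical placeholders, not figures from the conversation.

```python
# A toy break-even calculation behind the "30% utilization" rule of thumb.
# All prices are hypothetical placeholders, not market rates.

def breakeven_utilization(onprem_annual_cost, cloud_rate_per_node_hour, nodes):
    """Utilization fraction above which on-prem is cheaper than renting
    the equivalent capacity full-time in the cloud."""
    hours_per_year = 8760
    full_time_cloud_cost = cloud_rate_per_node_hour * nodes * hours_per_year
    return onprem_annual_cost / full_time_cloud_cost

# e.g. a $3M/year on-prem system vs. $4/node-hour cloud pricing on 300 nodes
u = breakeven_utilization(3_000_000, 4.0, 300)
print(f"break-even utilization: {u:.0%}")  # about 29%; above this, on-prem wins
```

With these invented numbers the break-even lands near 29%, which is roughly where the rule of thumb sits; the real threshold shifts with negotiated cloud rates, staffing, and power costs.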
So you're going to have to deal with the fact that, no, you're not going to buy a quantum computer, drop it off at the loading dock, put it in the basement, and run it for the next five to seven years. It's got to have some kind of upgrade path. And if that means that your on-prem system is a lease arrangement, or has some of these other midlife advantages, ways to improve it incrementally over time, then so be it. But that's got to be a solution that, again, the smart vendor is going to have figured out architecturally, financially, and quite honestly, physically. They're going to have to ship a chassis that can be upgraded for the next X years without having to tear down the entire system sitting in someone's data center.

Yuval: I'm curious about the role of the HPC manager in classical compute and whether you think this translates to quantum. So I believe that in classical compute, the HPC manager or his team obviously keeps the hardware running, but then advises users: this is what you could do. And of course, the science bandwidth helps with any kind of tweaks or implementation changes. Do you see that happening in quantum as well? Do you see HPC managers really advising on what application to use and even what computer to use?

Bob: In the old days, I would have said HPC managers reigned supreme: they basically looked at the systems that they thought were the most useful to keep the users reasonably happy, but they didn't pay too much attention to particular users or particular workloads. One of my examples: many years ago, I was working in a facility where we had a Cray Y-MP, and we bought it with eight megawords of memory. Yeah, that's right, eight megawords. And about a year into it, we upgraded from eight megawords to 16 megawords. Well, the next day when I came in, I noticed a note from the systems guy, the guy that ran the HPC center, saying they'd upgraded to 12 megawords. And I said, what happened? I thought we put in 16.
He said, oh, we did, but we're just going to keep four shut off for six months. So when the users start to complain, we'll give them 16 and everyone will be happy. That was kind of the mentality of systems operators. Nowadays, it's much more end-user driven. End users have a diversity of workloads. They have much more say in what goes on. So I think a smart HPC center will not linger in the philosophy of "I buy what I think works and you adjust your workload to meet those requirements"; nowadays, there's much more positive input from the end user saying, this is what we need to do our jobs; we can help you build a system that's most effective for the diverse workloads that we're all trying to solve here. So hopefully the progressive sites will do that. I think the old-school HPC managers will hopefully either get on board or get off board, whatever it takes to really respond to user requirements mattering more than just buying the machine that everybody has to live with.

Yuval: The typical HPC manager, if there is such a person, when you talk to them, are they excited about quantum? Are they afraid? Are they worried? How would you rate how they're feeling right now?

Bob: Well, first off, the biggest problem that HPC managers face is integrating new technology into existing capabilities. That is their ultimate fear. Every time something new comes along, and for those listening, I've got a gray beard: I was there when we had something called the attack of the killer micros. People used to build giant custom processors, literally built almost by hand. You had a one- or two-processor system, and all of a sudden came this idea that, instead of building one powerful processor, we can use a thousand microprocessors. The idea was: which can eat an elephant faster, a lion or a thousand piranhas, or some other mixed metaphor like that. And there was great consternation, mainly because: how do you integrate that technology?
And there have been equally big paradigm shifts in the HPC world continually: massively parallel systems, then hybrid on-prem/cloud setups, dealing with massively parallel data systems, AI coming to the fore, GPUs changing how one does math, and then of course AI inclusion into HPC. So HPC managers are used to change. Now, quantum is a little more aggressive a change, because you're switching from an incremental semiconductor-based classical architecture to something new and different, but they're used to change. Their biggest concern is not quantum per se; it's how difficult it is going to be to make sure that users have a reasonably seamless transition from 100% classical to quantum-classical architectures. And so that's really their big concern. It's not about new technology; it's about how new technology can upset the apple cart from their perspective. They don't want to get phone calls in the middle of the night saying something's not working. If quantum doesn't generate those phone calls, they're going to adopt it like crazy.

Yuval: Do you remember Tim Russert? He was the host of Meet the Press, and his thing was to bring politicians in and play them video clips and say, "Well, three years ago you said this; how are you thinking about it today?" So in that spirit, I think you once surveyed 303 companies and came up with a total market of $51 billion for optimization once things reach a steady state. But today, we're probably at 1 billion for the overall market size. So how do you feel about that? Is it just wait another 20 years and it'll all be fine, or do you want to change your expectations?

Bob: Okay, I'm a little confused, because I don't think I ever said anything about multiple billions of dollars for markets. I don't believe I'm even vaguely clever enough to stand behind a statement like that.
I tend to be much more data-driven and much more conservative, and I view overestimation and the potential of unmet expectations in the quantum sector as perhaps one of the most damaging forces for the sector writ large. So I would have shied away from that, to be quite honest with you. I expect that good things are going to happen in the future, but frankly, it's more up to the sector to make that happen than it is for me to make an accurate prediction at this point. If use cases come to the fore, if quantum proliferates across the HPC space and then really starts to find applicability in more quotidian enterprise applications, we could start to see some really big numbers here.

Yuval: When someone asks you about a case study, a success story of quantum and HPC integration, do you have something that comes to mind?

Bob: It's funny you mention that, because, let me get it right here, I wrote a little piece a couple of weeks ago about an announcement by HSBC and IBM that they came up with an interesting use case for a bond trading scheme. And the thing I loved about it was, A, it's an interesting, relevant use case in the financial sector, but it was also the fact that they said it makes our insights 34% more productive. And I love that number because it's 34%. It's not quantum superiority. It's not exponential performance improvement. It's 34%, which from the standpoint of a competitive improvement in the financial sector is a staggering result. So to me, it's really nice because it says: this is what quantum can bring to the table. If you go to a C-suite and talk about a 34% performance improvement, you're done. You don't even have to continue arguing. And so to me, it's use cases that demonstrate end-use productivity that matter; random circuit samples that give you a 10-to-the-28th performance improvement don't move the needle for me. It's the realistic performance gains at the very end.
What was the application? What did it used to run at, and what did quantum solve? And to me, 34%, man, if you can do that on a wide swath of applications out there, you've got a killer app, without a doubt.

Yuval: The definition of quantum advantage has expanded in recent years. It used to be: I'm able to do something that I just wasn't able to do classically. And then it expanded to: maybe I can just do it faster than classically. But also: I can do it with much less energy than the exact same calculation would take on a classical computer. Which of these is more interesting these days to HPC managers? Solving something that you couldn't do before, getting something more accurate, or doing something with dramatically lower energy consumption?

Bob: This comes back to my initial discussion about defining a pain point. And there's not going to be one answer, because some organizations may not be overly concerned about power reduction. I think that is becoming less and less of a debate: more organizations, in fact just about every organization out there that has any kind of high-end computing, are worried about power consumption and the cost of it. So to me, if you've got near-equal performance but at three orders of magnitude less power consumption, that's a quantum advantage to be sure. I think early on, because of the idea of trying to figure out what quantum brings to the table, the emphasis is going to be on accelerating current workloads, the pain-point issue I discussed earlier. I think as more and more quantum systems become available, and people start to think about applications and algorithms that can move into different verticals, there will be more interest in the exploration of new capabilities. But right now, I think we're more or less limited by the interest of HPC end users in addressing their current problem set.
One of the things that we asked in a survey, I want to say last year, was: what are some of the things that you think quantum brings to the table for you? Or: why would you adopt quantum? And what we found is that about 92% of the survey respondents, when you unwrap their answers, said: we have unmet computational requirements, things that we would like to do but simply can't. And when we dug a little deeper into how they define "unmet," the average answer was: we could do twice as much compute right now if we had the budget. That was the average answer. Some of them said they could do 500% more. Some of them said 20 to 30%. But almost all of them said they have unmet computational requirements. And if quantum can address even a percentage of that, then that is enough, again, to move the needle on quantum adoption. Only then, after those problems have been solved, are people going to start to say: here are some new, innovative ways that we can think about dealing with quantum, because we have the experience to do it. We know what it can do for our existing applications. We have an understanding of the potential here. Let's start to move into new frontiers. I think that's a reasonable, reliable, and steadfast progression of the technology.

Yuval: Let's assume that someone hired Hyperion Research, came to you, the quantum expert at Hyperion, and said, "Okay, we're really thinking about buying a computer. Which modality should we choose?" Maybe that ties into our cloud versus on-prem discussion. What would you tell them?

Bob: Okay, here's my Tim Russert answer. It was funny, because yesterday I had occasion to search for some old surveys that I had done, old being from 2022. And we asked about the most promising modalities. And to be honest with you, superconducting qubits was at the top. The next one, at nearly the same preference level, was photonics. Neutral atoms was way at the bottom.
So think about the transition of what's happened in the last four years in terms of a shifting of emphasis, in terms of looking at different modalities and the things they bring to the table. One of the things that left an impression on me at Q2B was the idea that the linkage between modality types and the ability to do error correction, surface codes versus low-density parity-check codes, is a significant determinant right now in what really works. And so there may start to be a certain amount of differentiation, if you will, between modalities, based not on the things that we've tracked in the past, gate fidelities, circuit depth and such, but maybe on some of the other aspects that are a little more architectural. And it's the all-to-all connectivity that low-density parity-check codes favor which is generating a lot of interest. So if we ran that survey today, I think we would see things like trapped ions and neutral atoms being much more prevalent, much more desired, if for no other reason than the promise of more effective error-correction capabilities. So to me, the answer is: who knows. The bottom line is it may be something else in a few years, but it's going to be something that is, I think, at a higher order of abstraction. It's going to be determined by architectural advances as opposed to things at the qubit level. So what I'm looking for from a modality selection process is scaling, error correction, and power, as opposed to just gate fidelities and some of the other lower-level aspects of all this. So in some sense, what we've done is transition away from the very low-level specifics of quantum toward an architectural or even a systems-design perspective when we evaluate modalities.

Yuval: And is that also shared by HPC managers?
I mean, for you as an analyst, it's really nice that you go from "okay, it took me a while to understand two-qubit gate fidelity" to now looking at QLDPC codes and error-correction codes and so on. But do HPC managers just rely on you, or are they actually more focused on "just tell me what problem I can solve and how quickly I can solve it"?

Bob: The key here is to look at procurement processes, and I'll use U.S. government ones because they tend to be very visible, but they also have a leadership aspect to them, in that people tend to look at them. And when the Department of Energy procures a new system, they don't go down into the details. They don't talk about, "Gee, we'd better have gate-all-around versus FinFET transistors on our processors." They say things like, "Here's a mini-application suite. We'd like a 9x performance improvement on this application, a 13x performance improvement on this one, and a 5x performance improvement on this one." So they are basically saying: we really don't care too much about what's under the hood, because that doesn't matter to us nearly as much as accelerating these particular workloads. We've identified the ones that are most important, and we've identified how much we'd like to see those important applications improve in performance over time. And that, to me, is the working model for how people should be thinking about quantum, how to market it, and how to answer end-user questions about quantum computing capabilities.

Yuval: I hope you don't think I'm setting you up for the next discussion in a year or two, but what's your expectation for quantum advantage? When will HPC managers really be able to deploy quantum in a way that's valuable to their businesses?

Bob: I think, as I said, the smart companies are already going down the path of understanding what quantum brings to their workloads.
And so I would say in the next one to two years, we're going to move away from the early adopters, who were optimistic and just really psyched about quantum because it's cool and it's interesting and it's leading edge, to a more pragmatic look at what's going on. So there are going to be harder questions to answer from the vendor perspective, but there are going to be more realistic expectations for quantum, and more thought about what we need to do next, so that when the machine we view as useful becomes available, we can move quickly and make that happen seamlessly from an applications and end-user perspective. That could take a couple of years, but it's not going to take five years. It may not even take three for some of the more aggressive organizations. And we've already started to see that there are folks out there who are just not taking quantum as a flyer anymore. They're actually thinking: okay, we're going to go down this road because we see utility soon. It's no longer a hope and a prayer, or a wing and a prayer, whatever metaphor you like. It's more: yeah, we're on board because we see the potential here. And as with any new technology that comes around, we want to make sure it works right, and then we're going to adopt it like crazy.

Yuval: And there's a question I like to ask all my guests, and I'll ask it with a little twist for you. If you hypothetically could have dinner with one of the quantum or HPC greats, who would that be?

Bob: I'm not going to say Feynman; that's too simple an answer. But what I'd love to do is sit down with Richard Feynman and Seymour Cray and just be a fly on the wall to see how they would talk about advanced computing next. What do we need to do next? What are the new technologies? How would they see the sector progressing?
Because those, in my mind, are the two pioneers in terms of generating interest in advanced computing developments, keeping the process going beyond Moore's law, beyond Dennard scaling and all this other stuff, and driving paradigm shifts in compute. I've said, rather derisively, over the last couple of years that anybody can build a board that goes in a rack that goes in a system. HPC architecture has kind of stagnated, and it really is time for a reset, from both the classical perspective and bringing quantum to the fore, but also optical computing, analog computing, neuromorphic computing. I think we've reached a point where we've run out of opportunities in classical computing the way it exists today. So let's get some smart people together and break down some of the tradition and the inertia that the sector is laboring under right now. And let's talk about what's next. What's the next big paradigm shift in compute?

Yuval: Bob, thank you so much for spending some time with me today.

Bob: Always a joy to talk to you. Thanks for spending the time.

Yuval Boger is the Chief Commercial Officer of QuEra Computing.

March 23, 2026


Tags

government-funding
quantum-computing
quantum-algorithms
quantum-hardware
quantum-advantage

Source Information

Source: Quantum Computing Report