
Unified Platform Delivers Reliable Quantum Computer Performance Assessments

Quantum Zeitgeist
⚡ Quantum Brief
A collaborative team led by Alessandro Cosentino of the Unitary Foundation, with researchers from Quantum Circuits Inc. and the University of Central Florida, has launched Metriq, an open-source platform that standardises quantum computer benchmarking across diverse hardware. Metriq unifies benchmark creation, execution, and analysis, reporting error rates as low as 0.6% across more than ten quantum computers and enabling reproducible, cross-platform comparisons that fragmented evaluation methods previously made impossible. The platform introduces a composite Metriq Score for aggregated performance assessment while tracking system-level metrics such as entanglement quality and gate fidelity, alongside application-inspired protocols such as quantum machine learning. Automated execution via metriq-gym normalises tests across vendors, and results are stored in a version-controlled dataset (metriq-data) to ensure transparency and long-term performance tracking. While acknowledging the Metriq Score's limitations in predicting how devices scale to complex tasks, the platform shifts the focus from marketing claims to objective evaluation, accelerating quantum hardware development through shared, standardised benchmarks.

A new collaborative platform called Metriq addresses inconsistencies in quantum computing evaluation. Alessandro Cosentino of the Unitary Foundation, together with Changhao Li, Vincent Russo, Bradley A. Chase, William J. Zeng, and colleagues, developed Metriq in collaboration with Tom Lubinski and Siyuan Niu of Quantum Circuits, Inc. and the University of Central Florida, and Neer Patel, also of the University of Central Florida. The open-source platform unifies benchmark definition, execution, data collection, and presentation, resolving the currently fragmented landscape of quantum computer assessment. By combining a reproducible, cross-platform approach with a curated dataset from more than ten quantum computers and a composite Metriq Score, it provides a foundation for meaningful comparison and ongoing refinement of quantum benchmarking methodologies as the field progresses.

Standardised benchmarking reveals performance across diverse quantum hardware

Error rates as low as 0.6% were measured across multiple quantum computers using the new open-source platform, results that can now be compared directly rather than reported in isolation, system by system. Standardised tools and shared datasets make systematic, reproducible benchmarking possible where it previously was not. Metriq integrates benchmark creation, execution, and data analysis into a single workflow, collecting results from more than ten quantum computers built by diverse hardware vendors; the gathered data also reveals performance variation beyond the headline 0.6% figure. The curated dataset exposes device performance and highlights the limitations of individual benchmarks, supporting continuous refinement of testing methodologies. This unified approach enables objective comparisons and transparent data provenance, establishing a foundation for meaningful progress in quantum computing.

The platform incorporates system-level metrics assessing entanglement quality, circuit speed, and gate performance (the accuracy of quantum operations), providing detailed hardware characterisation. Metriq's benchmarks extend to application-inspired protocols, evaluating performance in areas such as quantum machine learning and optimisation and demonstrating the platform's flexibility. A composite index, the Metriq Score, summarises performance across the entire benchmark suite, allowing aggregated comparisons. The platform also assesses fundamental device properties, scaling benchmarks to processor size for practical evaluation.
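The article does not spell out how the Metriq Score aggregates individual benchmark results. As a rough illustration only, the sketch below normalises each result to a common scale and combines them with a geometric mean; the function names, the normalisation ranges, and the choice of geometric mean are all assumptions made here, not Metriq's actual scoring method.

```python
"""Illustrative sketch of a composite benchmark score.

NOT Metriq's actual implementation: the normalisation scheme and
the geometric-mean aggregation are assumptions for illustration.
"""
import math


def normalize(value: float, worst: float, best: float) -> float:
    """Map a raw benchmark result onto [0, 1], where 1 is best."""
    span = best - worst
    if span == 0:
        return 0.0
    return min(max((value - worst) / span, 0.0), 1.0)


def composite_score(results: dict[str, float],
                    scales: dict[str, tuple[float, float]]) -> float:
    """Aggregate normalised benchmark results with a geometric mean.

    A geometric mean penalises a device that excels on one benchmark
    but fails badly on another, which is one plausible design for a
    composite index of this kind.
    """
    scores = [normalize(results[name], *scales[name]) for name in results]
    # Floor at a small epsilon so a single zero does not erase the score.
    scores = [max(s, 1e-6) for s in scores]
    return math.exp(sum(math.log(s) for s in scores) / len(scores))


# Hypothetical results for one device: gate fidelity, entanglement
# quality, and an application-inspired protocol score.
results = {"gate_fidelity": 0.994, "entanglement": 0.91, "qml_protocol": 0.78}
scales = {"gate_fidelity": (0.9, 1.0), "entanglement": (0.0, 1.0),
          "qml_protocol": (0.0, 1.0)}
print(f"Composite score: {composite_score(results, scales):.3f}")
```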

Standardised Quantum Benchmarking via Automated Execution and Reproducible Data Management

Metriq streamlines quantum computer evaluation through a unified workflow, integrating how benchmarks are defined, run, and analysed. A 'runner' called metriq-gym acts as the central interface for executing tests across diverse quantum hardware, regardless of the underlying technology. The runner normalises the process, abstracting each vendor's system-specific instructions behind a common interface shared across all connected quantum computers. By standardising execution, Metriq avoids the pitfalls of comparing results generated by fundamentally different methods. The resulting data is stored in a carefully organised, version-controlled dataset, metriq-data, ensuring reproducibility and allowing performance changes to be tracked over time. Quantum computers were evaluated on parameters including qubit count and gate fidelity, alongside benchmarks inspired by real-world applications such as machine learning. Prioritising an open-source platform over vendor-specific tools ensures unbiased comparisons and transparent data provenance.

Quantifying quantum advantage requires careful consideration of benchmark limitations

Establishing a common yardstick for quantum computers promises to accelerate development, moving the field beyond isolated performance claims towards verifiable progress. Metriq offers a vital foundation by integrating testing and data sharing into one system, yet the platform's current Metriq Score, a single number summarising performance, presents a potential oversimplification. The authors acknowledge that this composite index does not yet fully predict how devices will scale to genuinely complex problems, raising the question of whether a single score can capture the subtler nuances of quantum capability. Even so, the platform represents a key step forward for the quantum computing industry: a shared, open-source benchmarking system moves the field beyond marketing claims and towards objective evaluation, and Metriq's integration of testing and data sharing supports the transparency needed to pinpoint the strengths and weaknesses of different quantum computers.

Metriq thus establishes a shared, openly accessible resource for evaluating quantum computers, moving beyond isolated performance reports. The curated dataset not only reveals how different quantum computers perform but also highlights weaknesses within the benchmarks themselves, enabling continuous improvement. As a result, Metriq shifts the focus from simply measuring performance to understanding the limitations of both the hardware and the tests used to assess it, opening questions about how to design benchmarks that accurately predict scaling to more complex computations.
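The article describes metriq-gym as a runner that normalises execution across vendors and metriq-data as a version-controlled store of results, but it does not detail their interfaces. The following is a hypothetical sketch of that general pattern, a runner dispatching a common benchmark specification to pluggable provider adapters, with each result appended to a content-hashed, append-only record. All class, function, and provider names here are invented for illustration and do not reproduce the real metriq-gym API.

```python
"""Hypothetical sketch of a hardware-agnostic benchmark runner.

The names below are invented for illustration; they do not
reproduce metriq-gym's actual API or metriq-data's real schema.
"""
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone


@dataclass
class BenchmarkResult:
    benchmark: str
    device: str
    provider: str
    metrics: dict
    timestamp: str


class Runner:
    """Dispatch one benchmark definition to any registered provider."""

    def __init__(self):
        self._providers = {}

    def register(self, name, execute_fn):
        """execute_fn(benchmark_spec) -> dict of raw metrics."""
        self._providers[name] = execute_fn

    def run(self, spec: dict, provider: str, device: str) -> BenchmarkResult:
        # The provider adapter handles vendor-specific job submission;
        # the runner sees only a common spec and a common result shape.
        metrics = self._providers[provider](spec)
        return BenchmarkResult(
            benchmark=spec["name"],
            device=device,
            provider=provider,
            metrics=metrics,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )


def store(result: BenchmarkResult, path: str = "results.jsonl") -> str:
    """Append the result as one JSON line and return a content hash.

    An append-only file of content-hashed records mimics, in miniature,
    the provenance guarantees of a version-controlled dataset.
    """
    record = json.dumps(asdict(result), sort_keys=True)
    digest = hashlib.sha256(record.encode()).hexdigest()[:12]
    with open(path, "a") as f:
        f.write(record + "\n")
    return digest


# Toy provider adapter returning canned metrics.
runner = Runner()
runner.register("toy_vendor", lambda spec: {"error_rate": 0.006})
result = runner.run({"name": "bell_state_fidelity"}, "toy_vendor", "toy_qpu_1")
print("stored as", store(result))
```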
👉 More information
🗞 Metriq: A Collaborative Platform for Benchmarking Quantum Computers
🧠 ArXiv: https://arxiv.org/abs/2603.08680


Tags

quantum-standards
quantum-computing
quantum-hardware
partnership

Source Information

Source: Quantum Zeitgeist