All News

Stay updated with the latest quantum computing developments from around the world

IfM Researchers Detail NQCC’s £30 Million Testbed Programme For Quantum Computing Platforms
research

Professor Chander Velu from the Institute for Manufacturing (IfM) and Keith Norman, formerly of IfM and now with the QCi3 Hub, detail the UK’s £30 million Testbed Programme for quantum computing platforms in a new report. The study explores the pioneering work of the National Quantum Computing Centre (NQCC) in developing and benchmarking diverse quantum technologies. The initiative, launched in 2023, selected seven companies to deliver cutting-edge testbeds spanning key approaches such as photonic and superconducting systems. The report highlights a collaborative innovation model designed to advance both the technology and the surrounding business ecosystem required for realizing quantum computing’s potential benefits.

UK Quantum Computing: Pioneering Testbeds and Ecosystem Growth

The UK is actively fostering growth in quantum computing through pioneering testbeds and a collaborative ecosystem, as detailed in the new IfM report. The £30 million Testbed Programme, launched in 2023 by the NQCC, aims to create and benchmark diverse quantum computing platforms, representing a significant national effort and underscoring the UK government’s commitment to accelerating the development and scaling of this strategically important technology. Seven companies (Aegiq, Infleqtion, ORCA Computing, Oxford Ionics, Quantum Motion, QuEra Computing, and Rigetti) were selected to deliver these cutting-edge testbeds, spanning technologies including photonic, trapped-ion, superconducting, and silicon-spin systems. According to Professor Chander Velu, Head of the Business Model Innovation Research Group at the IfM, these testbeds are not just about advancing the technology itself, but also about building the surrounding business and innovation ecosystem. This includes providing world-class technical facilities and a collaborative innovation model that brings together government, academia, and industry. Building on this, the NQC

Hardware-Efficient 4-Bit Multiplier for Xilinx FPGAs Achieves Minimal Resource Usage with 11 LUTs and 2.75 ns Delay
research

The increasing demand for efficient processing in applications like the Internet of Things and edge computing necessitates optimising both the speed and the size of fundamental arithmetic circuits. Misaki Kida and Shimpei Sato, from Shinshu University, address this challenge with a new design for a 4-bit multiplier tailored specifically to AMD Xilinx 7-series FPGAs. Their approach achieves a significant reduction in hardware resources, requiring only eleven lookup tables (LUTs) and two carry blocks, while simultaneously improving performance by shortening the critical path. This advancement is a step towards more powerful and energy-efficient systems for the wide range of applications that demand parallel, low-bitwidth calculations. The hardware-efficient and accurate design reorganizes the logic functions mapped to the LUTs, reducing the LUT count compared to existing designs while also shortening the critical path; evaluation confirms the circuit attains minimal resource usage, using 11 LUTs and two CARRY4 blocks, with a critical-path delay of 2.750 ns. With the proliferation of the Internet of Things (IoT) and edge computing, there is growing demand for arithmetic circuits that deliver near-real-time, high throughput under tight resource budgets.

Optimized 4-bit Multiplier for Xilinx FPGAs

Scientists developed a highly efficient 4-bit multiplier design specifically for AMD Xilinx 7-series FPGAs, achieving a significant reduction in resource usage and latency compared to existing designs. The work centers on optimizing the fundamental building blocks within the FPGA architecture, namely lookup tables (LUTs) and dedicated carry logic. The researchers harnessed the flexibility of Xilinx LUTs, configuring them to operate in a specialized mode and realizing a 4-bit multiplier with only 11 LUTs, a reduction compared to previously published designs.
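To make the arithmetic concrete, the short Python sketch below models the same operation in software: a 4-bit by 4-bit unsigned multiplication built from shifted partial products and verified exhaustively over all 256 input pairs. It is an illustrative reference model only and does not reproduce the paper’s LUT and CARRY4 mapping, which is a hardware-level optimisation; the function name is a placeholder.

# Minimal software reference model of a 4-bit unsigned multiplier.
# It mirrors the arithmetic the FPGA circuit must implement
# (4-bit x 4-bit -> 8-bit product), not the paper's LUT/CARRY4 mapping.

def mul4_shift_add(a: int, b: int) -> int:
    """Multiply two 4-bit unsigned integers by summing shifted partial products."""
    assert 0 <= a < 16 and 0 <= b < 16
    product = 0
    for i in range(4):
        if (b >> i) & 1:           # i-th bit of b selects a partial product
            product += a << i      # partial product a * 2^i
    return product & 0xFF          # the result always fits in 8 bits

# Exhaustive check against Python's built-in multiplication:
# only 256 input pairs, so a full sweep is cheap.
for a in range(16):
    for b in range(16):
        assert mul4_shift_add(a, b) == a * b
print("all 256 products correct")

Because the operand width is so small, exhaustive verification like this is exactly how a hardware reference model of the circuit would typically be validated.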

Multi-Cloud SLA-Based Broker Intelligently Translates Metrics, Overcoming Provider Lock-In for Cloud Consumers
research

Cloud computing underpins the vast majority of modern applications and services, yet realising its full potential remains challenging for many users. Víctor Rampérez, Javier Soriano, and David Lizcano, from Universidad Politécnica de Madrid and Madrid Open University, together with Shadi Aljawarneh from Jordan University of Science and Technology and Juan A. Lara, address a key obstacle: the difficulty cloud consumers face in ensuring consistent service levels across different providers. The team presents a novel approach that automatically translates complex service-level agreements into measurable, vendor-neutral metrics, removing the need for specialist expertise and preventing provider lock-in. This knowledge-based system not only monitors performance across multiple cloud platforms, but also offers feedback to users, acting as an intelligent tutoring system that helps optimise cloud resource allocation and unlock the benefits of multi-cloud environments, as validated through use cases involving leading cloud providers.

Cloud SLAs, Auto-Scaling and Multi-Cloud Challenges

This research comprehensively explores Service Level Agreements (SLAs), cloud computing, auto-scaling, and related technologies, revealing key themes and challenges in modern cloud environments. It focuses on the importance of clearly defined SLAs, examining how to translate high-level service objectives into measurable policies and metrics that accurately reflect user experience. The study also investigates the opportunities and challenges presented by cloud computing, particularly in multi-cloud environments, advocating for standardized approaches to resource management and orchestration. Auto-scaling techniques are central to this work, crucial for achieving elasticity in cloud applications by dynamically adjusting resources based on demand. The research further considers the role of fog and edge computing in extending cloud capabilities to the network edge, enabl
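The sketch below illustrates the core translation idea in Python: vendor-specific monitoring metrics are renamed to a vendor-neutral metric and then checked against a single service-level objective. The provider names, metric names, and thresholds are hypothetical placeholders chosen for illustration; they are not the actual broker, rule base, or metric catalogue described in the paper.

# Illustrative sketch only: map provider-specific metric names onto one
# vendor-neutral name, then evaluate the same SLO against any provider.
# All names and thresholds below are hypothetical placeholders.

METRIC_ALIASES = {
    "aws":   {"CPUUtilization": "cpu_utilization_percent"},
    "azure": {"Percentage CPU": "cpu_utilization_percent"},
    "gcp":   {"instance/cpu/utilization": "cpu_utilization_percent"},
}

def normalize(provider: str, samples: dict[str, float]) -> dict[str, float]:
    """Rename provider-specific metrics to vendor-neutral names.

    Any unit conversion a real broker performs would also belong here.
    """
    aliases = METRIC_ALIASES[provider]
    return {aliases[name]: value for name, value in samples.items() if name in aliases}

def slo_satisfied(neutral_samples: dict[str, float], metric: str, max_value: float) -> bool:
    """Check a simple 'metric must stay below threshold' service-level objective."""
    return neutral_samples.get(metric, float("inf")) <= max_value

# Example: the same SLO evaluated against two providers' native metrics.
aws_ok = slo_satisfied(normalize("aws", {"CPUUtilization": 72.0}),
                       "cpu_utilization_percent", 80.0)
gcp_ok = slo_satisfied(normalize("gcp", {"instance/cpu/utilization": 91.0}),
                       "cpu_utilization_percent", 80.0)
print(aws_ok, gcp_ok)  # True False

Keeping the SLO expressed only in vendor-neutral terms is what lets the same policy be monitored, and resources rebalanced, across several clouds without rewriting it per provider.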

Quantum Circuits Achieve Constant-Cost Clifford Operations with Four Applications of Global Interactions, Matching Theoretical Limits
research

Quantum computing relies on performing complex sequences of operations, and implementing these sequences efficiently is crucial for building practical machines. Jonathan Nemirovsky, Lee Peleg, and Amit Ben Kish, alongside Yotam Shapira from Quantum Art, demonstrate a significant advance in this area by achieving the theoretically optimal cost for performing any sequence of Clifford operations. The team shows how to execute these operations using a constant number of applications, no more than four, of powerful all-to-all entangling gates, importantly without requiring additional helper qubits. This breakthrough not only minimises the number of operations needed, but also reduces the energy demands of the process, paving the way for more scalable and energy-efficient quantum computers. The team implements any sequence of CNOT gates, of any length, with four applications of such global gates, again without ancillae, and demonstrates that extending this to general Clifford operations incurs no additional cost. The work introduces a practical and computationally efficient algorithm to realise these compilations, which are central to many quantum information processing applications.

Constant Commutative Depth Clifford Operations

This research addresses a key challenge in quantum computing: efficiently implementing Clifford operations, the fundamental building blocks of quantum algorithms. The team has developed a method to implement Clifford operations with a constant commutative depth, a significant improvement that allows operations to be performed in parallel, potentially accelerating quantum computations. The breakthrough leverages global interactions between qubits, meaning operations that affect multiple qubits simultaneously. The core innovation lies in representing the necessary transformations using these global interactions in a way that minimizes the required resources. This approach contrasts with traditional methods that rely on sequential operations, limiting t
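As background to why long CNOT sequences can be compiled so compactly, the Python sketch below uses the standard linear-algebra view: each CNOT acts on computational basis states as an invertible matrix over GF(2), so an arbitrarily long CNOT sequence collapses to a single n-by-n binary matrix. This is a generic illustration of the representation such compilation schemes work with, not the paper’s four-global-gate construction.

import numpy as np

# Background illustration (not the paper's construction): every CNOT circuit
# is summarised by one invertible binary matrix acting on bit vectors mod 2.

def cnot_matrix(n: int, control: int, target: int) -> np.ndarray:
    """GF(2) matrix of a single CNOT: the target bit gets XORed with the control bit."""
    m = np.eye(n, dtype=np.uint8)
    m[target, control] = 1
    return m

def compose(n: int, gates: list[tuple[int, int]]) -> np.ndarray:
    """Multiply the per-gate matrices (mod 2) to summarise the whole sequence."""
    total = np.eye(n, dtype=np.uint8)
    for control, target in gates:
        total = cnot_matrix(n, control, target) @ total % 2   # later gates act on the left
    return total

# A 3-qubit example: a long CNOT sequence reduces to one 3x3 binary matrix.
sequence = [(0, 1), (1, 2), (0, 2), (2, 0), (0, 1)]
M = compose(3, sequence)
print(M)

# Applying M to a basis-state bit vector gives the circuit's action directly.
x = np.array([1, 0, 1], dtype=np.uint8)
print(M @ x % 2)

It is this compact matrix form, rather than the gate-by-gate circuit, that a compiler can then re-express using a small, fixed number of global entangling operations.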

Scientists Compute Injective Norm of CSS Quantum Error-correcting Codes, Revealing Connections to Matroid Theory
research

Quantum error correction relies on creating entanglement between qubits to protect information, but quantifying this entanglement presents a significant challenge. Stephane Dartois from Ecole Polytechnique and Gilles Zémor from the Institut de Mathématiques de Bordeaux now calculate the injective norm, a measure of genuine multipartite entanglement, for a broad class of quantum error-correcting codes known as CSS codes. This achievement extends previous work focused on specific codes, such as the Kitaev code, and provides an exact solution for an infinite family of quantum states. The team’s calculations not only advance our understanding of entanglement in quantum systems, but also reveal a surprising link between quantum information theory and the mathematical field of matroid theory, specifically through Edmonds’ intersection theorem. Computing this measure is generally a computationally challenging task, yet it had previously been computed exactly for specific codes in condensed matter theory, notably the Kitaev code and its extensions. This research extends those results to all CSS codes, thereby establishing the injective norm for a nontrivial, infinite family of quantum states, and in doing so uncovers an interesting connection to matroid theory and Edmonds’ intersection theorem.

Entanglement, Error Correction and Tensor Networks

This body of research explores the interconnected fields of quantum error correction, entanglement measures, and tensor networks. A significant portion of the work focuses on quantum error-correcting codes, including surface codes and topological codes, with a clear interest in identifying and understanding their capabilities. Scientists also investigate robust and meaningful ways to characterize entanglement in quantum states, employing geometric measures to quantify this crucial property. Tensor networks are increasingly used as a tool for representing and simulating quantum states, particularly in complex many-body
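For reference, the injective norm mentioned above is the largest overlap a state has with any product state, the quantity underlying the geometric measure of entanglement. The display below records the standard definition as background; it is not a statement of the paper’s results for CSS codes, and the logarithmic form of the geometric measure shown is one common convention.

% Standard definition of the injective norm of an n-partite state |psi>,
% included as background only (not the paper's result for CSS codes).
\[
  \|\psi\|_{\mathrm{inj}}
  \;=\;
  \max_{|a_1\rangle,\dots,|a_n\rangle}
  \bigl|\langle a_1 \otimes a_2 \otimes \cdots \otimes a_n \,|\, \psi\rangle\bigr|,
  \qquad \||a_i\rangle\| = 1,
\]
% so a larger injective norm means a larger overlap with some product state;
% one common convention for the geometric measure of entanglement is then
\[
  E_G(\psi) \;=\; -\log \|\psi\|_{\mathrm{inj}}^{2}.
\]

A smaller injective norm therefore signals stronger genuine multipartite entanglement, which is why computing it exactly for a whole family of code states is notable.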

Estimating and Decoding Coherent Errors in Quantum Error Correction Experiments with Detector Error Models
Featured
general

Quantum error correction holds immense promise for building practical quantum computers, but accurately interpreting the results of experiments remains a significant challenge. Evangelia Takou and Kenneth R. Brown, from Duke University, now demonstrate a method for estimating and decoding errors in quantum error correction experiments without relying on extensive prior device characterisation. Their work reveals that the history of error syndromes alone provides sufficient information to detect and quantify coherent errors, a type of noise that has previously been difficult to assess. Crucially, the team shows that detector models estimated from experimental data function effectively across both stochastic and coherent noise environments, and their simulations, utilising both Majorana and Monte Carlo techniques, capture the unique signatures of coherent errors, ultimately leading to improved decoding thresholds and a more accurate understanding of quantum error correction performance.

Coherent Error Mitigation in Quantum Error Correction

This work explores the challenges posed by coherent errors in quantum error correction, going beyond the standard focus on simple bit flips and phase flips. Coherent errors introduce complex distortions that can significantly degrade performance and require specialized mitigation strategies. Scientists present a combination of theoretical analysis, simulations, and experimental considerations to address these challenges, aiming to advance the field and build more robust quantum computers. The team developed and utilized detector error models and hypergraphs to represent the complex relationships between errors and syndrome measurements, allowing for more accurate modelling of the error landscape. They also explored techniques for self-consistent learning, refining error models directly from experimental data, which is crucial for adapting to the specific noise characteristics of quantum hardware.

Noise Characterisation From Quantum Error Correction Data
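To illustrate what learning a detector error model from syndrome data can look like, the Python sketch below implements one widely used correlation-based estimator: assuming a pair of detectors shares a single error mechanism and is otherwise flipped independently, that mechanism’s probability can be recovered from the detectors’ click statistics alone. This is shown as generic background and is not necessarily the exact self-consistent procedure used by the authors; the synthetic data and function name are placeholders.

import numpy as np

# Illustrative sketch: estimate the probability p_ij that one error mechanism
# flips a pair of detectors (i, j), using only observed detector outcomes.

def pairwise_error_probability(clicks: np.ndarray, i: int, j: int) -> float:
    """Estimate p_ij from a (shots x detectors) array of 0/1 detector outcomes.

    Assumes detectors i and j share one error mechanism (probability p_ij)
    and are otherwise flipped independently; under that model
        cov(x_i, x_j) / (1 - 2<x_i> - 2<x_j> + 4<x_i x_j>) = p_ij (1 - p_ij).
    """
    xi, xj = clicks[:, i].mean(), clicks[:, j].mean()
    xij = (clicks[:, i] * clicks[:, j]).mean()
    ratio = (xij - xi * xj) / (1 - 2 * xi - 2 * xj + 4 * xij)
    return 0.5 * (1 - np.sqrt(max(0.0, 1 - 4 * ratio)))

# Synthetic check: a shared flip with probability 0.03 plus independent noise.
rng = np.random.default_rng(7)
shots = 200_000
shared = rng.random(shots) < 0.03
x0 = shared ^ (rng.random(shots) < 0.01)
x1 = shared ^ (rng.random(shots) < 0.02)
clicks = np.stack([x0, x1], axis=1).astype(np.uint8)
print(pairwise_error_probability(clicks, 0, 1))  # close to 0.03

Estimates of this kind, gathered over all detector pairs, are what populate the detector error model or hypergraph that a decoder then consumes.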