
Research

Scientific breakthroughs, academic studies, peer-reviewed papers, and fundamental research discoveries.


IfM Researchers Detail NQCC’s £30 Million Testbed Programme For Quantum Computing Platforms

Professor Chander Velu of the Institute for Manufacturing (IfM) and Keith Norman, formerly of IfM and now with the QCi3 Hub, detail the UK's £30 million Testbed Programme for quantum computing platforms in a new report. The study explores the pioneering work of the National Quantum Computing Centre (NQCC) in developing and benchmarking diverse quantum technologies. Launched in 2023, the initiative selected seven companies to deliver cutting-edge testbeds spanning key approaches such as photonic and superconducting systems. The report highlights a collaborative innovation model designed to advance both the technology and the surrounding business ecosystem required to realise quantum computing's potential benefits.

UK Quantum Computing: Pioneering Testbeds and Ecosystem Growth

The UK is actively fostering growth in quantum computing through pioneering testbeds and a collaborative ecosystem. The Testbed Programme represents a significant national effort and underscores the UK government's commitment to accelerating the development and scaling of this strategically important technology. Seven companies (Aegiq, Infleqtion, ORCA Computing, Oxford Ionics, Quantum Motion, QuEra Computing, and Rigetti) were selected to deliver the testbeds, spanning photonic, trapped-ion, superconducting, and silicon-spin technologies. According to Professor Velu, Head of the Business Model Innovation Research Group at the IfM, the testbeds are not just about advancing the technology itself, but also about building the surrounding business and innovation ecosystem. This includes providing world-class technical facilities and a collaborative innovation model that brings together government, academia, and industry.

Hardware-Efficient 4-Bit Multiplier for Xilinx FPGAs Achieves Minimal Resource Usage with 11 LUTs and 2.75 ns Delay

The increasing demand for efficient processing in applications such as the Internet of Things (IoT) and edge computing makes it necessary to optimise both the speed and the size of fundamental arithmetic circuits. Misaki Kida and Shimpei Sato, from Shinshu University, address this challenge with a new 4-bit multiplier design tailored to AMD Xilinx 7-series FPGAs. By reorganizing the logic functions mapped to the lookup tables (LUTs), their design requires only 11 LUTs and two CARRY4 blocks, a reduction in LUT count compared with existing designs, while simultaneously shortening the critical path. Evaluation confirms that the circuit attains minimal resource usage with a critical-path delay of 2.750 ns.

Optimized 4-bit Multiplier for Xilinx FPGAs

Researchers harnessed the flexibility of Xilinx LUTs, configuring them to operate in a specialized mode to realize a 4-bit multiplier with only 11 LUTs, fewer than in previously published designs. The work centres on optimizing the fundamental building blocks of the FPGA architecture, namely the LUTs and the dedicated carry logic, yielding a significant reduction in both resource usage and latency for the parallel, low-bitwidth calculations demanded by IoT and edge workloads.
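As background on the arithmetic being packed into the LUT fabric, here is a minimal behavioural sketch in Python of 4-bit multiplication via shifted partial products; it models only the logic function, not the paper's LUT/CARRY4 mapping, and the function name is illustrative.

```python
def mul4(a: int, b: int) -> int:
    """Behavioural model of a 4-bit unsigned multiplier.

    Forms one partial product per bit of b (a gated by bit i of b,
    shifted left by i) and sums them: the same structure a synthesis
    tool decomposes into LUTs plus carry chains on an FPGA.
    """
    assert 0 <= a < 16 and 0 <= b < 16
    product = 0
    for i in range(4):
        if (b >> i) & 1:          # bit i of b selects the partial product
            product += a << i     # shifted partial product a * 2^i
    return product & 0xFF         # 8-bit result

# Exhaustive check against Python's multiplication (256 cases).
assert all(mul4(a, b) == a * b for a in range(16) for b in range(16))
```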

Multi-Cloud SLA-Based Broker Intelligently Translates Metrics, Overcoming Provider Lock-In for Cloud Consumers

Cloud computing underpins the vast majority of modern applications and services, yet realising its full potential remains challenging for many users. Víctor Rampérez, Javier Soriano, and David Lizcano, from Universidad Politécnica de Madrid and Madrid Open University, together with Shadi Aljawarneh from Jordan University of Science and Technology and Juan A. Lara, address a key obstacle: the difficulty cloud consumers face in ensuring consistent service levels across different providers. The team presents a novel approach that automatically translates complex service-level agreements (SLAs) into measurable, vendor-neutral metrics, removing the need for specialist expertise and preventing provider lock-in. This knowledge-based system monitors performance across multiple cloud platforms and offers feedback to users, acting as an intelligent tutoring system that helps optimise cloud resource allocation and unlock the benefits of multi-cloud environments. The approach is validated through use cases involving leading cloud providers.

Cloud SLAs, Auto-Scaling and Multi-Cloud Challenges

The research explores SLAs, cloud computing, auto-scaling, and related technologies, revealing key themes and challenges in modern cloud environments. It focuses on the importance of clearly defined SLAs, examining how to translate high-level service objectives into measurable policies and metrics that accurately reflect user experience. The study also investigates the opportunities and challenges of multi-cloud environments, advocating standardized approaches to resource management and orchestration. Auto-scaling techniques, which achieve elasticity by dynamically adjusting resources to demand, are central to the work, which further considers the role of fog and edge computing in extending cloud capabilities to the network edge.
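To illustrate the kind of translation layer such a broker performs, here is a hypothetical Python sketch mapping one vendor-neutral service-level objective onto provider-native monitoring metrics. The mapping table, function, and rule format are invented for illustration and are not the authors' system; the provider metric names are the publicly documented CloudWatch, Azure Monitor, and Cloud Monitoring identifiers.

```python
# Vendor-neutral metric name -> per-provider native metric identifier
# (illustrative mapping; a real broker would cover many more metrics).
METRIC_MAP = {
    "cpu_utilisation": {
        "aws": "AWS/EC2:CPUUtilization",
        "azure": "Microsoft.Compute:Percentage CPU",
        "gcp": "compute.googleapis.com/instance/cpu/utilization",
    },
}

def translate_slo(metric: str, threshold: float, provider: str) -> dict:
    """Translate a vendor-neutral SLO into a provider-specific monitoring rule."""
    native = METRIC_MAP[metric][provider]
    return {"provider": provider, "metric": native, "max": threshold}

# One neutral SLO ("keep CPU below 80%") becomes three provider rules.
for p in ("aws", "azure", "gcp"):
    print(translate_slo("cpu_utilisation", 80.0, p))
```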

Quantum Circuits Achieve Constant-Cost Clifford Operations with Four Applications of Global Interactions, Matching Theoretical Limits

Quantum computing relies on performing complex sequences of operations, and implementing these sequences efficiently is crucial for building practical machines. Jonathan Nemirovsky, Lee Peleg, and Amit Ben Kish, together with Yotam Shapira from Quantum Art, demonstrate a significant advance in this area by achieving the theoretically optimal cost for performing any sequence of Clifford operations. The team shows how to execute any such sequence using a constant number of applications, no more than four, of powerful all-to-all entangling gates, without requiring additional helper (ancilla) qubits. In particular, any sequence of CNOT gates of any length can be implemented with four applications of such global gates, and extending this to general Clifford operations incurs no additional cost. Beyond minimising the number of operations, the scheme reduces the energy demands of the process, paving the way for more scalable and energy-efficient quantum computers, and the work introduces a practical, computationally efficient algorithm to realise these compilations, which are central to many quantum information processing applications.

Constant Commutative Depth Clifford Operations

This research addresses a key challenge in quantum computing: efficiently implementing Clifford operations, the fundamental building blocks of quantum algorithms. The team's method implements Clifford operations with constant commutative depth, allowing operations to be performed in parallel and potentially accelerating quantum computations. The approach leverages global interactions, operations that affect many qubits simultaneously, and its core innovation lies in representing the necessary transformations using these global interactions in a way that minimises the required resources. This contrasts with traditional methods that rely on long sequential gate sequences.
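One way to picture the compilation target: a CNOT-only circuit on n qubits acts linearly on computational-basis labels, so any sequence of CNOTs, however long, collapses to a single invertible n-by-n matrix over GF(2). The NumPy sketch below demonstrates this standard fact; it is not the authors' global-gate algorithm.

```python
import numpy as np

def cnot_matrix(n: int, gates: list[tuple[int, int]]) -> np.ndarray:
    """GF(2) matrix of a CNOT circuit: gate (c, t) XORs row c into row t.

    Standard fact: a CNOT-only circuit acts linearly on basis-state
    labels, so any sequence reduces to one invertible binary matrix,
    which a compiler can then re-synthesise with fewer operations.
    """
    m = np.eye(n, dtype=np.uint8)
    for control, target in gates:
        m[target] ^= m[control]   # row XOR implements CNOT's action
    return m

# A length-4 CNOT sequence on 3 qubits reduces to a single matrix.
print(cnot_matrix(3, [(0, 1), (1, 2), (0, 2), (1, 0)]))
```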

Scientists Compute Injective Norm of CSS Quantum Error-correcting Codes, Revealing Connections to Matroid Theory

Quantum error correction relies on creating entanglement between qubits to protect information, but quantifying this entanglement presents a significant challenge. Stephane Dartois from Ecole Polytechnique and Gilles Zémor from the Institut de Mathématiques de Bordeaux calculate the injective norm, a measure of genuine multipartite entanglement, for the broad class of quantum error-correcting codes known as CSS codes. Computing this measure is in general computationally hard, and exact results had previously been obtained only for specific codes in condensed matter theory, notably the Kitaev code and its extensions. The present work extends these results to all CSS codes, establishing the injective norm exactly for a nontrivial, infinite family of quantum states. In doing so, it uncovers a surprising link between quantum information theory and the mathematical field of matroid theory, specifically through Edmonds' intersection theorem.

Entanglement, Error Correction and Tensor Networks

This body of research spans the interconnected fields of quantum error correction, entanglement measures, and tensor networks. A significant portion of the work concerns quantum error-correcting codes, including surface codes and topological codes, with a clear interest in identifying and understanding their capabilities. Scientists also investigate robust and meaningful ways to characterise entanglement in quantum states, employing geometric measures to quantify this crucial property, while tensor networks are increasingly used to represent and simulate quantum states, particularly in complex many-body systems.
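For reference, the injective norm of an n-partite state is its largest overlap with a product state; in one common convention (normalisations vary across papers) it also fixes the geometric measure of entanglement:

```latex
% Injective norm of an n-partite state |psi> (standard definition;
% normalisation conventions differ between papers).
\[
  \|\psi\|_{\mathrm{inj}}
    \;=\; \max_{\|\phi_1\| = \cdots = \|\phi_n\| = 1}
      \bigl|\langle \phi_1 \otimes \cdots \otimes \phi_n \,|\, \psi \rangle\bigr|,
  \qquad
  E_G(\psi) \;=\; -2\log_2 \|\psi\|_{\mathrm{inj}}.
\]
```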

Landmark Benchmark Initiative Models Partially Magnetized ExB Plasmas Using Seventeen Codes, Validating Large-Scale Coherent-Structure Simulations

Low-temperature plasmas underpin a wide range of scientific research and industrial processes, and scientists increasingly rely on computer simulations to understand their complex behaviour. Andrew T. Powis and Eduardo Ahedo, working with colleagues at various international institutions, present a rigorous benchmark study designed to validate and improve these crucial simulation tools. The team, including Alejandro Álvarez Laguna and Nicolas Barléon, challenged seventeen different plasma simulation codes to model a partially magnetized E×B plasma configuration known to exhibit large-scale rotating structures. The work, a continuation of the Landmark benchmarking initiative, demonstrates an unprecedented level of agreement between codes on key plasma properties, validating existing models and providing valuable guidance for future software development. The success of this collaborative effort, led by Lucas Beving and Enrique Bello-Benítez, also highlights lessons learned for conducting effective benchmarking campaigns in plasma physics.

PIC Simulations, Energy and Charge Conservation

The study surveys plasma simulation research, particularly Particle-in-Cell (PIC) methods, covering core techniques, advanced optimizations, specific applications, and parallelization strategies. Fundamental to the field are the basic PIC algorithms for tracking particles, interpolating fields, and managing boundary conditions, with a major focus on ensuring accurate energy and charge conservation so that simulations remain realistic and stable. Researchers employ implicit methods to achieve this, allowing larger time steps and faster simulations by solving linear systems. Advanced techniques, including dynamic load balancing, sparse grid methods, reduced-order PIC, and domain decomposition, further optimize PIC simulations, and these methods are applied to diverse plasma scenarios such as capacitively coupled plasmas.
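For readers new to the method, the sketch below is a deliberately minimal 1D electrostatic PIC loop in Python (normalised units, periodic boundaries, cloud-in-cell weighting, spectral Poisson solve, leapfrog push). It illustrates only the deposit-solve-gather-push cycle and has none of the conservation machinery or E×B physics of the seventeen benchmarked codes.

```python
import numpy as np

ng, n_p, box, dt = 64, 10_000, 2 * np.pi, 0.1   # grid cells, particles, length, step
dx = box / ng
k = 2 * np.pi * np.fft.fftfreq(ng, d=dx)        # spectral wavenumbers
rng = np.random.default_rng(0)
x = rng.uniform(0, box, n_p)                    # electron positions
v = rng.normal(0, 1, n_p)                       # electron velocities

def step(x, v):
    # 1) Deposit charge with linear (cloud-in-cell) weighting.
    g = x / dx
    i = np.floor(g).astype(int) % ng
    w = g - np.floor(g)
    ne = np.zeros(ng)
    np.add.at(ne, i, 1 - w)
    np.add.at(ne, (i + 1) % ng, w)
    ne *= ng / n_p                              # normalised electron density
    rho = 1.0 - ne                              # fixed ion background minus electrons

    # 2) Field solve: Poisson's equation spectrally, ik * E_k = rho_k.
    rho_k = np.fft.fft(rho)
    E_k = np.zeros(ng, dtype=complex)
    E_k[1:] = rho_k[1:] / (1j * k[1:])          # k = 0 mode carries no field
    E = np.real(np.fft.ifft(E_k))

    # 3) Gather field to particles and leapfrog-push (electrons, q/m = -1).
    Ep = E[i] * (1 - w) + E[(i + 1) % ng] * w
    v = v - Ep * dt
    x = (x + v * dt) % box                      # periodic boundary
    return x, v

for _ in range(100):
    x, v = step(x, v)
print("mean kinetic energy:", 0.5 * np.mean(v**2))
```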

Jordan–Schwinger Tomographic Transformation Connects Discrete and Continuous-Variable Quantum Systems

Hybrid quantum systems, which combine the strengths of discrete- and continuous-variable architectures, represent a significant advance in information science, yet translating concepts between these fundamentally different platforms poses considerable challenges. Vladimir Orlov, Liubov Markovich, Alexey Rubtsov, and Vladimir Man'ko address this issue by constructing a theoretical bridge between discrete and continuous systems, using tomographic probability representations and the Jordan-Schwinger map. Their work connects observable random variables, such as spin projections, photon numbers, and quadratures, through probabilistic representations (spin, photon-number, and symplectic tomograms, alongside the Wigner function), allowing a system's state to be reconstructed directly across different representations.

This transformation lets researchers represent discrete quantum systems in terms of continuous variables, simplifying certain quantum algorithms and potentially reducing resource requirements for quantum communication and computation. The team develops a general formalism for the Jordan-Schwinger tomographic transformation that preserves key quantum properties such as unitarity and entanglement, and applies it to various quantum states and operations, with explicit examples of converting states between representations. The resulting framework offers a unified way to compare and transfer information between diverse hardware platforms, potentially enabling hybrid protocols in which spin-based memories interface with photonic communication channels, and providing a valuable tool for benchmarking and validating quantum algorithms across heterogeneous architectures.
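The Jordan-Schwinger map underlying the construction is the textbook realisation of spin (su(2)) operators in terms of two bosonic modes a and b; the tomographic formalism of the paper builds on this standard form:

```latex
% Jordan-Schwinger realisation of su(2) with two bosonic modes a, b.
\[
  \hat{J}_{+} = \hat{a}^{\dagger}\hat{b}, \qquad
  \hat{J}_{-} = \hat{a}\,\hat{b}^{\dagger}, \qquad
  \hat{J}_{z} = \tfrac{1}{2}\bigl(\hat{a}^{\dagger}\hat{a} - \hat{b}^{\dagger}\hat{b}\bigr),
\]
\[
  [\hat{J}_{z}, \hat{J}_{\pm}] = \pm \hat{J}_{\pm}, \qquad
  [\hat{J}_{+}, \hat{J}_{-}] = 2\hat{J}_{z},
\]
% A spin-j system corresponds to the two-mode Fock sector with fixed
% total excitation number N = a^dag a + b^dag b = 2j.
```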

Q-RAN Architecture Secures O-RAN Networks Against Future Cryptanalytically Relevant Quantum Computers

The future of mobile networks, built on the increasingly flexible and open architectures known as Open Radio Access Networks (O-RAN), faces a significant security challenge from the rapidly advancing field of quantum computing. Vipin Rathi, Lakshya Chopra, and Madhav Agarwal, together with their colleagues, address this threat with Q-RAN, a comprehensive security framework for protecting disaggregated O-RAN ecosystems. Their research shows how to integrate newly standardised post-quantum cryptographic algorithms, including ML-KEM and ML-DSA, with robust random number generation to defend against the 'Harvest Now, Decrypt Later' attack strategy, in which encrypted data is intercepted and stored today for decryption once cryptanalytically relevant quantum computers arrive. By deploying these algorithms across all O-RAN interfaces and establishing a centralised post-quantum certificate authority, the work provides a complete blueprint for securing the next generation of mobile communications against powerful future adversaries.

The team systematically integrates the NIST-standardized ML-KEM and ML-DSA algorithms into key telecommunications security protocols, including mTLS, DTLS, and IPsec, addressing the vulnerabilities created by O-RAN's disaggregated architecture. Q-RAN combines classical and post-quantum cryptography to enable a gradual, minimally disruptive transition, leveraging the xFAPI interface to integrate post-quantum algorithms into the network architecture. A key innovation is the use of composite signatures, which pair established classical algorithms with ML-DSA for enhanced security and compatibility, supported by composite certificates that facilitate a smooth transition to a fully quantum-resistant network. Q-RAN also incorporates Zero Trust architecture principles.
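To make the key-establishment side concrete, here is a minimal sketch of an ML-KEM encapsulation handshake, assuming the open-source liboqs-python bindings (the oqs package) built with ML-KEM-768 enabled; it shows the generic KEM flow only, not Q-RAN's integration into mTLS, DTLS, or IPsec.

```python
import oqs  # liboqs-python bindings (assumes liboqs built with ML-KEM)

ALG = "ML-KEM-768"  # NIST FIPS 203 parameter set

with oqs.KeyEncapsulation(ALG) as receiver, oqs.KeyEncapsulation(ALG) as sender:
    # Receiver publishes a public key; the secret key stays inside `receiver`.
    public_key = receiver.generate_keypair()

    # Sender encapsulates: derives a shared secret plus a ciphertext to send.
    ciphertext, secret_sender = sender.encap_secret(public_key)

    # Receiver decapsulates the ciphertext to recover the same shared secret.
    secret_receiver = receiver.decap_secret(ciphertext)

    assert secret_sender == secret_receiver  # both sides now share a key
    print("shared secret established:", secret_sender.hex()[:32], "...")
```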

Renormalization-group Prepares Matrix Product States on up to 80 Qubits, Enabling Shallower Circuits for Quantum Systems

Preparing complex, many-body entangled states across numerous qubits is a significant hurdle in quantum computing, and researchers are now demonstrating substantial progress. Moritz Scheer, Alberto Baiardi, and Elisa Bäumer Marty, all from IBM Quantum, together with Zhi-Yuan Wei and Daniel Malz, present a method for generating matrix product states (MPS) using an algorithm rooted in renormalization-group techniques. MPS represent many-body entangled states and are crucial for simulating complex quantum systems and understanding their behaviour. The team prepared these states on superconducting hardware at scales of up to 80 qubits and shows that the approach yields circuits of significantly reduced depth compared with traditional methods.

The team successfully prepared states exhibiting a phase transition between a symmetry-protected topological phase and a trivial phase, the largest demonstration to date of preparing states in this ordered phase away from a fixed point. Compared with traditional sequential approaches, the new method achieves exponentially shallower circuit depths as system size increases, and the reduced depth improves resilience to noise, a critical factor for practical quantum computation. Experiments show that the shallower circuits consistently outperform sequential circuits for larger systems, demonstrating a clear advantage on currently available hardware, and measurements of string-order-like local expectation values and energy densities confirm the superior scaling of the new protocol with increasing system size. This paves the way for creating and studying complex quantum states, including those exhibiting symmetry-protected topological order, beyond previously accessible scales.
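As background on the object being prepared: a matrix product state stores one small tensor per site and recovers amplitudes by multiplying matrices. The NumPy sketch below writes the n-qubit GHZ state as a bond-dimension-2 MPS purely as illustration; it does not reproduce the authors' renormalization-group preparation circuit.

```python
import numpy as np

# GHZ state (|00...0> + |11...1>)/sqrt(2) as a bond-dimension-2 MPS:
# one 2x2 matrix per site, selected by that site's physical bit.
n = 5
M = {0: np.array([[1., 0.], [0., 0.]]),   # matrix for physical bit 0
     1: np.array([[0., 0.], [0., 1.]])}   # matrix for physical bit 1
vL = vR = np.array([1., 1.])              # boundary vectors

def amplitude(bits):
    """<bits|GHZ> via the boundary-vector / matrix-product contraction."""
    m = np.eye(2)
    for s in bits:
        m = m @ M[s]                      # multiply one matrix per site
    return (vL @ m @ vR) / np.sqrt(2)

# Only the all-0 and all-1 strings have weight, each 1/sqrt(2).
for bits in ([0] * n, [1] * n, [0, 1, 0, 1, 0]):
    print(bits, amplitude(bits))
```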