
Quantum Computing’s Decision Trees Offer Speed with Minimal Extra Cost

Quantum Zeitgeist
5 min read
⚡ Quantum Brief
MIT researchers led by Simone Roncallo developed a hybrid quantum decision tree that classifies multiple categories with logarithmic computational overhead (O(log₂ K)), a breakthrough over classical O(K) scaling. The approach combines quantum binary models into a multinomial classifier, achieving 97.4% accuracy on MNIST’s six-class problem—nearly matching one-vs-one (98.7%) and one-vs-rest methods while drastically reducing resource demands. Benchmarks on MNIST, Fashion MNIST, and CIFAR-10 confirmed efficiency across varying dataset complexities, with performance remaining robust despite smaller neuron counts (20–40) and batch sizes (64–128). The binary decision tree’s hierarchical structure minimizes computational cost by recursively partitioning data, enabling scalable quantum machine learning for tasks like image recognition and medical diagnostics. Future work targets optimizing the method for near-term quantum hardware, addressing qubit coherence and gate fidelity limitations to unlock broader practical applications.
Simone Roncallo and colleagues at the Massachusetts Institute of Technology present a hybrid approach that combines multiple quantum binary models into a system capable of distinguishing between numerous classes. The work benchmarks different combination strategies, including one-vs-one and one-vs-rest methods, alongside a binary decision tree, weighing the computational cost of each against its potential for quantum advantage. The results show that the decision tree offers a particularly efficient solution, matching the accuracy of the alternatives while incurring a computational overhead that scales only logarithmically with the number of classes.

Logarithmic scaling overcomes linear resource demands in quantum multinomial classification

A computational overhead of O(log₂ K) is achieved when combining quantum binary models into a multinomial classifier, a substantial improvement over methods that typically require O(K) overhead. Previously, classifying items into more than two categories with quantum computers demanded resources that grew linearly with the number of classes, a significant barrier to deploying quantum machine learning on complex, real-world problems involving many categories, such as detailed image recognition or nuanced medical diagnosis. Benchmarks on datasets such as MNIST and CIFAR-10 demonstrate the efficiency of this hybrid approach, even when compared against more expressive classical models.
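The scaling difference between the three strategies can be made concrete with a short sketch. The function and its exact accounting are illustrative, not taken from the paper: one-vs-one needs one binary model per pair of classes, one-vs-rest one per class, while a balanced decision tree evaluates only about log₂ K models per prediction.

```python
import math

def classifier_counts(k: int) -> dict:
    """Binary-model evaluations each multiclass strategy needs per
    prediction, for K classes (illustrative accounting)."""
    return {
        "one_vs_one": k * (k - 1) // 2,          # one model per class pair
        "one_vs_rest": k,                        # one model per class
        "tree_depth": math.ceil(math.log2(k)),   # decisions root-to-leaf
    }

for k in (6, 10, 100):
    print(k, classifier_counts(k))
```

For 100 classes the tree needs only 7 binary decisions per prediction, against 100 for one-vs-rest and 4,950 pairwise models for one-vs-one, which is the gap the logarithmic scaling result exploits.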
A binary decision tree strategy built from quantum binary models yields 97.4 per cent accuracy on a six-class MNIST problem, closely matching the 98.7 per cent achieved by the one-vs-one and one-vs-rest methods. Performance on Fashion MNIST was similar, with the decision tree attaining 92.3 per cent against 93.2 and 92.6 per cent for the other two strategies. On CIFAR-10 the approach achieved 55.4 per cent against 57.3 and 47.8 per cent. The small accuracy gap suggests that the logarithmic scaling comes without significant performance degradation, and restricting MNIST to six classes allowed a focused evaluation of the scaling behaviour without the added complexity of a larger number of categories.

The datasets were chosen to increase progressively in complexity. MNIST (handwritten digits) and Fashion MNIST (clothing items) served as initial testbeds thanks to their relatively low dimensionality and established benchmarks, while CIFAR-10, a set of colour images in 10 classes, poses a considerably harder task owing to its higher dimensionality and greater visual variability. Models were trained with 20 or 40 neurons and batch sizes of 64 or 128 samples, and the consistent results across these parameters indicate the method's robustness. Data pre-processing followed standard image-classification practice: greyscale conversion, flattening, and L2-norm standardisation, which reduce dimensionality and improve model convergence.
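The pre-processing pipeline described above can be sketched in a few lines of NumPy. The exact greyscale weighting used by the authors is not specified here, so a plain channel mean is assumed:

```python
import numpy as np

def preprocess(images: np.ndarray) -> np.ndarray:
    """Greyscale-convert, flatten, and L2-normalise a batch of images.

    images: (N, H, W) greyscale or (N, H, W, 3) RGB arrays.
    Returns an (N, H*W) array of unit-norm feature vectors.
    """
    if images.ndim == 4:                      # RGB -> greyscale (channel mean)
        images = images.mean(axis=-1)
    flat = images.reshape(len(images), -1).astype(np.float64)
    norms = np.linalg.norm(flat, axis=1, keepdims=True)
    return flat / np.where(norms == 0, 1.0, norms)  # guard all-zero images

batch = np.random.rand(4, 28, 28)             # four fake MNIST-sized images
features = preprocess(batch)
print(features.shape)                          # (4, 784)
```

L2 normalisation is a natural choice in this setting because quantum amplitude encodings require unit-norm input vectors.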
Quantum algorithms for categorising data beyond simple yes-or-no decisions

Multiclass classification, essential for tasks ranging from medical diagnosis to image recognition, has long demanded greater computational resources than binary sorting. A viable method for building quantum systems that categorise items into more than two groups, a process known as multinomial classification, is now available: combining multiple simpler quantum binary models yields a computational overhead that grows very slowly as the number of categories increases, in contrast with the linear growth typical of classical approaches.

The one-vs-one method trains a separate binary classifier for every pair of classes, while the one-vs-rest method trains one classifier to distinguish each class from all others. Both are effective but suffer from at least linear scaling of computational resources with the number of classes. The binary decision tree offers an alternative, functioning like a series of simple sorting choices: each node represents a binary classification, and the path from the root to a leaf determines the predicted class. Because the data are recursively partitioned by these binary decisions, only a logarithmic number of nodes must be traversed to reach a decision, which is what minimises the computational demand.

While this approach demonstrates a compelling reduction in overhead at comparable accuracy, it remains unclear whether the efficiency gain is large enough to justify the complexities of implementation on current quantum hardware, where limited qubit coherence times and gate fidelities pose significant challenges to running complex quantum algorithms.
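The root-to-leaf traversal described above can be sketched with the binary models abstracted away. Here `decide` is a hypothetical stand-in for a trained (quantum) binary classifier that says which half of the remaining class set a sample belongs to; only about log₂ K such calls are made for K classes:

```python
from typing import Callable, Sequence

def tree_classify(classes: Sequence[int],
                  decide: Callable[[Sequence[int], Sequence[int]], bool]) -> int:
    """Classify by recursively halving the class set with binary decisions.

    decide(left, right) plays the role of a trained binary model and
    returns True if the sample belongs to the `left` class subset.
    """
    if len(classes) == 1:
        return classes[0]             # leaf reached: predicted class
    mid = len(classes) // 2
    left, right = classes[:mid], classes[mid:]
    return tree_classify(left if decide(left, right) else right, decide)

# toy oracle: pretends the sample always belongs to the subset containing class 4
oracle = lambda left, right: 4 in left
print(tree_classify(list(range(6)), oracle))  # -> 4
```

For the paper's six-class MNIST problem this means three binary decisions per sample, versus six one-vs-rest models or fifteen one-vs-one models.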
Further research will focus on optimising performance on near-term quantum devices, potentially unlocking wider applications, and the development opens avenues for more efficient and scalable quantum machine learning algorithms. In particular, techniques for reducing circuit depth and minimising the number of qubits required will be crucial for realising the approach's full potential on practical quantum computers.

In summary, the researchers demonstrated a method for combining multiple quantum binary models into a classifier that handles several categories. The binary decision tree approach maintains accuracy similar to the other strategies while its computational demands scale logarithmically with the number of classes, offering a more efficient solution than classical alternatives. The study benchmarked one-vs-one, one-vs-rest, and binary decision tree strategies, finding the last to be the most cost-effective, and the authors intend to optimise its performance on near-term quantum devices.

👉 More information
🗞 Divide et impera: hybrid multinomial classifiers from quantum binary models
🧠 ArXiv: https://arxiv.org/abs/2604.08094

Tags

quantum-machine-learning
quantum-computing
quantum-advantage

Source Information

Source: Quantum Zeitgeist