
Neuromorphic Eye Tracking Achieves Low-Latency Pupil Detection, Enabling 850x Faster Response with 20x Reduced Power

Quantum Zeitgeist


Developing effective eye-tracking technology for wearable devices requires both speed and energy efficiency, yet conventional systems often struggle with motion blur and high power consumption. Paul Hueber, Luca Peres, and Florian Pitters, along with colleagues from the University of Manchester and Viewpointsystem GmbH, address this challenge by pioneering a new approach using neuromorphic computing. Their research demonstrates that replacing complex components in state-of-the-art eye-tracking models with lightweight spiking neural networks dramatically reduces both model size and power consumption, achieving accuracy comparable to specialised hardware.

The team’s models attain a mean error of 3.7 to 4.1 pixels while reducing theoretical power consumption to an estimated 3.9 to 4.9 mW with a latency of 3 ms, representing a significant step towards truly responsive and immersive augmented and virtual reality experiences.

Event Cameras Track Eye Movements Precisely

Research focuses on developing and applying event-based vision, using event cameras, for precise eye tracking. Event cameras only report changes in brightness, offering high temporal resolution, low latency, and reduced data volume, making them ideal for real-time, high-precision tracking even in challenging conditions. Researchers are also exploring neuromorphic computing to efficiently process event data using spiking neural networks, which mimic biological neurons.

Current research aims to estimate the full six degrees of freedom of eye movement, accurately determining where a person is looking, detecting the pupil center, and identifying blinks. Techniques like rigid-motion scattering are being investigated for feature extraction in event-based vision, supporting the development of algorithms for change-based processing and transformer networks. Applications for this technology are diverse, ranging from extended reality and human-computer interaction to mental health diagnosis and assistive technology. Researchers are creating datasets to train and evaluate event-based eye tracking algorithms, and the ultimate goal is to achieve kilohertz-level eye tracking, capturing even the fastest eye movements. A key advantage of event-based vision is its potential for low power consumption and robustness to motion artifacts, making it suitable for mobile and wearable devices.

Spiking Networks for Efficient Event-Based Eye-Tracking

Scientists have pioneered a new approach to eye-tracking by redesigning high-performing artificial neural network models as spiking neural networks, addressing limitations of conventional systems for wearable applications. Researchers replaced recurrent and attention modules within established models with lightweight leaky integrate-and-fire layers, significantly reducing computational complexity and enabling efficient processing of event-based vision data.
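The leaky integrate-and-fire dynamics that replace the recurrent and attention modules can be sketched in a few lines. This is a minimal illustration of the neuron model only, not the authors' implementation; the decay factor and firing threshold below are arbitrary example values.

```python
def lif_step(v, x, beta=0.9, v_th=1.0):
    """One leaky integrate-and-fire update: leak, integrate, threshold, reset."""
    v = beta * v + x          # membrane potential decays, then integrates input
    spike = 1 if v >= v_th else 0
    if spike:
        v = 0.0               # hard reset after firing
    return v, spike

def run_lif(inputs, beta=0.9, v_th=1.0):
    """Drive a single LIF neuron with a sequence of input currents."""
    v, spikes = 0.0, []
    for x in inputs:
        v, s = lif_step(v, x, beta, v_th)
        spikes.append(s)
    return spikes
```

Because the state is a single scalar per neuron updated with one multiply-add per timestep, a LIF layer is far cheaper than the recurrent or attention blocks it stands in for, which is the source of the efficiency gains reported below.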
Depth-wise separable convolutions were implemented to further minimize model size and operational demands. The experimental setup involved processing raw event streams into one-millisecond bins, then slicing these into 450-millisecond time windows for training. Data augmentation techniques were applied to enhance model robustness and generalization, and models were trained using a sliding-window approach. The best-performing model was evaluated in a continuous, online setting on a separate test set to assess performance metrics under realistic operating conditions.

The resulting models achieve a mean gaze estimation error of 3.7 to 4.1 pixels, approaching the accuracy of the Retina system, while simultaneously delivering substantial efficiency gains. The redesigned spiking neural networks are projected to operate at an estimated power consumption of 3.9 to 4.9 milliwatts with a latency of 3 milliseconds at a sampling rate of 1 kilohertz, positioning them as a promising solution for seamless and responsive interaction in augmented and virtual reality applications.
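The preprocessing described above, accumulating raw events into one-millisecond bins and slicing the bins into fixed-length training windows, can be sketched as follows. The event tuple layout and the window stride are assumptions for illustration; the paper's actual pipeline may differ.

```python
from collections import defaultdict

def bin_events(events, bin_ms=1):
    """Accumulate (timestamp_us, x, y, polarity) events into per-bin counts.

    Each bin spans `bin_ms` milliseconds; timestamps are in microseconds.
    """
    bins = defaultdict(int)
    for t_us, x, y, p in events:
        bins[t_us // (bin_ms * 1000)] += 1
    return bins

def sliding_windows(n_bins, window=450, stride=450):
    """Yield (start, end) bin indices for fixed-length training windows."""
    for start in range(0, n_bins - window + 1, stride):
        yield start, start + window
```

With `window=450` and 1 ms bins, each training sample covers a 450-millisecond slice of the event stream; a smaller stride would produce the overlapping windows implied by the sliding-window training regime.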

Spiking Networks Match Retina’s Eye-Tracking Accuracy

Scientists have achieved a breakthrough in eye-tracking technology by redesigning high-performing event-based models as spiking neural networks, addressing limitations of conventional frame-based systems. Researchers replaced recurrent and attention modules with lightweight leaky integrate-and-fire layers and implemented depth-wise separable convolutions to reduce computational complexity. Experiments demonstrate that these redesigned SNNs achieve a mean error of 3.7 to 4.1 pixels, approaching the accuracy of the Retina system. Importantly, the new models achieve this performance with a 20-fold reduction in model size and an 850-fold reduction in theoretical compute compared to their artificial neural network counterparts. Tests confirm that these efficient variants are projected to operate at an estimated power consumption of 3.9 to 4.9 milliwatts while maintaining a low latency of 3 milliseconds at a 1 kilohertz sampling rate. The research team evaluated the models using continuous 1-millisecond windows, reflecting the demands of high-temporal-resolution applications. This higher sampling rate, combined with the reduced computational load, positions the technology as a viable solution for real-time wearable eye-tracking systems.
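Part of the reported model-size reduction comes from the depth-wise separable convolutions, whose parameter savings are easy to check with back-of-the-envelope arithmetic. The channel counts and kernel size below are hypothetical, chosen only to show how the factorization shrinks a layer; the paper does not publish its layer dimensions here.

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a standard k×k convolution (bias omitted)."""
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    """Depth-wise k×k filter per input channel, then a 1×1 point-wise mix."""
    return c_in * k * k + c_in * c_out

standard = conv_params(64, 128, 3)        # 64 * 128 * 9  = 73728
separable = dw_separable_params(64, 128, 3)  # 576 + 8192 = 8768
ratio = standard / separable              # ≈ 8.4× fewer parameters
```

For this illustrative layer the factorization saves roughly an 8× in parameters; combined with the cheap LIF state updates, repeated savings of this kind across the network are consistent with the overall 20-fold size and 850-fold compute reductions the authors report.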

Spiking Networks Enable Efficient Real-Time Eye-Tracking

This research demonstrates the successful redesign of event-based eye-tracking architectures as spiking neural networks, achieving substantial gains in computational efficiency while maintaining practical accuracy for real-time wearable applications. By replacing complex recurrent and attention components with lightweight layers, the team reduced computational cost by a factor of 30 to 1,000 and compressed model size by 22 to 45 times, sustaining gaze estimation errors of 3.7 to 4.1 pixels on a standard benchmark. The resulting models are projected to operate at an estimated 3.9 to 4.9 milliwatts with a latency of approximately 3 milliseconds at 1 kilohertz, making them suitable for continuous, always-on eye-tracking systems with tight power and latency constraints. While a slight accuracy gap remains compared to leading conventional neural network models, these results clearly show that a neuromorphic redesign can preserve much of their performance at a significantly reduced computational footprint. Future work will focus on hardware validation, architectural expansion, and refinement of temporal processing to further advance neuromorphic eye tracking towards both high accuracy and extreme efficiency.

The team hopes that these findings and accompanying open-source implementations will encourage further exploration of neuromorphic architectures for high-speed, energy-constrained eye-tracking systems.

More information: Neuromorphic Eye Tracking for Low-Latency Pupil Detection, arXiv: https://arxiv.org/abs/2512.09969
