
Neuromorphic Networks with Dual Memory Pathways Achieve Over 4x Throughput and 5x Energy Efficiency, Stabilising Learning with Up to 60% Fewer Parameters

Quantum Zeitgeist

Spiking neural networks offer a powerful approach to event-driven sensing and long-term context maintenance, but realising their potential within tight energy and memory constraints remains a significant hurdle. Pengfei Sun, Zhe Su, and Jascha Achterberg, working with Giacomo Indiveri, Dan F. M. Goodman, and Danyal Akarca, address this challenge through a novel co-design of algorithms and hardware. Their work introduces a neural network architecture inspired by the brain’s fast-slow cortical organisation, featuring a dual memory pathway that combines rapid spiking activity with a compact, low-dimensional state summarising recent activity. This approach stabilises learning while preserving energy-efficient sparsity, achieving competitive accuracy with fewer parameters than existing networks. It also enables a near-memory hardware architecture that improves throughput more than fourfold and energy efficiency more than fivefold, establishing a scalable paradigm for real-time learning systems.

Dual Memory Pathway for Long Sequence Learning

The study addresses the challenge of building energy-efficient spiking neural networks capable of processing long sequences of data, inspired by the fast-slow cortical organisation observed in the brain. The researchers developed a novel dual memory pathway (DMP) architecture, integrating fast spiking activity with a slow memory pathway to maintain context over extended timescales. Each network layer includes an explicit slow memory pathway that stores a compact, low-dimensional state summarising recent activity and modulating spiking dynamics, stabilising learning while preserving event-driven sparsity. The DMP-SNN achieves competitive accuracy on long-sequence benchmarks, notably reaching 99.3% on S-MNIST and 97.3% on PS-MNIST with 40-60% fewer parameters than existing state-of-the-art spiking neural networks. To fully leverage the DMP architecture, the team engineered a near-memory computing system, optimising dataflow across the sparse-spike and dense-memory pathways. Experiments demonstrate a more than fourfold increase in throughput and over a fivefold improvement in energy efficiency compared to current implementations. The methodology involved benchmarking four spiking network architectures, including the DMP-SNN, across temporally structured datasets. Performance was assessed under varying network parameters, with the DMP-SNN achieving peak accuracy at 202K parameters on PS-MNIST and S-MNIST. Further investigation examined the impact of temporal sparsity, revealing that vision tasks tolerate coarser updates while auditory tasks require tighter coupling.
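To make the architecture concrete, here is a minimal sketch of how a fast-slow layer of this kind could look. It is not the paper’s exact formulation: the slow-state update rule, the point at which the state modulates the neurons, and all names and constants (DMPLayer, state_dim, tau_fast, tau_slow) are our assumptions for illustration.

```python
import torch
import torch.nn as nn

class DMPLayer(nn.Module):
    """Illustrative fast-slow layer: leaky integrate-and-fire (LIF) units
    whose membrane dynamics are modulated by a compact, slowly evolving
    shared state. A sketch under assumed dynamics, not the paper's code."""

    def __init__(self, in_dim, hidden_dim, state_dim=8,
                 tau_fast=0.9, tau_slow=0.99, threshold=1.0):
        super().__init__()
        self.ff = nn.Linear(in_dim, hidden_dim)                         # fast feedforward weights
        self.to_state = nn.Linear(hidden_dim, state_dim, bias=False)    # compress spikes into slow modes
        self.from_state = nn.Linear(state_dim, hidden_dim, bias=False)  # slow state modulates neurons
        self.tau_fast, self.tau_slow, self.threshold = tau_fast, tau_slow, threshold

    def forward(self, x_seq):
        # x_seq: (time, batch, in_dim) sequence of input spikes or features.
        T, B, _ = x_seq.shape
        v = x_seq.new_zeros(B, self.ff.out_features)        # fast membrane potentials
        s = x_seq.new_zeros(B, self.to_state.out_features)  # compact shared slow state
        spikes = []
        for t in range(T):
            # Fast pathway: leaky integration of input plus a slow-state bias.
            v = self.tau_fast * v + self.ff(x_seq[t]) + self.from_state(s)
            spk = (v >= self.threshold).float()             # fire (training would need a surrogate gradient)
            v = v - spk * self.threshold                    # soft reset
            # Slow pathway: low-pass filter a compressed summary of recent spiking.
            s = self.tau_slow * s + (1.0 - self.tau_slow) * self.to_state(spk)
            spikes.append(spk)
        return torch.stack(spikes)                          # (time, batch, hidden_dim)
```

The design point the sketch tries to capture is that the slow state is shared across the layer and far smaller than the spiking population, so the dense computation it adds is cheap relative to a full recurrent weight matrix.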

The team demonstrated that increasing the state buffer length, which captures relevant temporal information, accelerated convergence and improved accuracy, particularly on the SHD dataset. Analysis of delay distributions showed that longer state buffers reduced the need for explicit axonal delays, suggesting a trade-off between memory capacity and computational complexity. The DMP-SNN consistently outperformed the other architectures, achieving 65.37% accuracy on SSC with only 24K parameters while maintaining high performance across all datasets.
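The article does not spell out how the state buffer is implemented; as a rough illustration of the buffer-length knob it describes, the exponential filter in the sketch above could be replaced by an explicit window over the last T compressed summaries (the function name and averaging rule are our assumptions):

```python
from collections import deque
import torch

def update_slow_state(buffer: deque, summary: torch.Tensor) -> torch.Tensor:
    """Append the newest compressed activity summary and average the window.
    Create buffer as deque(maxlen=T): a larger T lets the slow state span
    more history, which, per the reported results, can reduce the need for
    explicit axonal delays at the cost of more stored state."""
    buffer.append(summary)
    return torch.stack(list(buffer)).mean(dim=0)

# Hypothetical usage: a window of T=32 eight-dimensional summaries.
buf = deque(maxlen=32)
s = update_slow_state(buf, torch.zeros(8))
```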

Dual Memory Pathway Stabilizes Spiking Networks

Scientists have developed a novel spiking neural network architecture that addresses the challenge of maintaining long-term memory while respecting tight energy and memory constraints. The research team drew inspiration from the fast-slow cortical organisation observed in the brain, introducing a dual memory pathway (DMP) architecture in which each network layer maintains a compact, low-dimensional state vector summarising recent activity. This shared state, evolving through slow dynamics, modulates spiking activity and stabilises learning, achieving competitive accuracy on long-sequence benchmarks with 40-60% fewer parameters than existing state-of-the-art spiking neural networks.

The team’s work demonstrates that compressing recent activity into a few slow modes, rather than storing full spike histories or relying on dense recurrent activity, significantly reduces parameter requirements without sacrificing performance. Experiments reveal that this approach maintains long-range temporal context while preserving event-driven sparsity, a crucial feature for efficient computation. To fully leverage the DMP architecture, the researchers designed a near-memory-compute architecture that optimises dataflow across the heterogeneous sparse-spike and dense-memory pathways. This design keeps the compact shared state on-chip, enabling efficient execution and overcoming limitations of prior hardware implementations. Measurements confirm a greater than fourfold increase in throughput and over a fivefold improvement in energy efficiency compared to state-of-the-art implementations. The results demonstrate that this biologically inspired approach, pairing fast spiking dynamics with compact temporal memory, establishes a scalable co-design paradigm for real-time neuromorphic computation and learning. This breakthrough delivers a functional abstraction that is both algorithmically effective and hardware-efficient, paving the way for future advances in energy-efficient artificial intelligence.

Fast-Slow Networks Enable Efficient Long-Term Memory

This research demonstrates a new approach to building spiking neural networks, successfully addressing the challenge of maintaining long-term memory within energy and memory constraints. The scientists developed a network architecture inspired by the fast-slow cortical organisation observed in the brain, introducing a dual memory pathway that combines rapid spiking activity with a compact, low-dimensional slow memory. This explicit memory pathway stabilises learning and preserves event-driven sparsity, achieving competitive accuracy on challenging long-sequence benchmarks while significantly reducing the number of parameters compared to existing spiking networks.
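A quick, back-of-the-envelope comparison shows why a few slow modes are cheaper than dense recurrence; the sizes below are illustrative and not taken from the paper:

```python
hidden, state = 128, 8                 # spiking units vs. low-dimensional slow state (assumed sizes)
dense_recurrent = hidden * hidden      # full N x N recurrent matrix: 16,384 weights
low_rank_slow = 2 * hidden * state     # N x r compression plus r x N modulation: 2,048 weights
print(dense_recurrent, low_rank_slow)  # 16384 2048 -> roughly 8x fewer recurrent weights
```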

The team further enhanced this algorithmic innovation with a corresponding hardware architecture, termed a near-memory system. This design fully leverages the benefits of the dual memory pathway by retaining the compact shared state on-chip and optimising dataflow across both the sparse-spike and dense-memory pathways. Experimental results confirm substantial improvements, demonstrating over a fourfold increase in throughput and a greater than fivefold improvement in energy efficiency compared to state-of-the-art implementations. The authors acknowledge that the current implementation relies on strict last-timestep supervision, and future work could explore alternative learning paradigms. Nevertheless, this research establishes a scalable co-design paradigm for real-time learning systems, demonstrating that biological principles can guide the development of neural networks that are both algorithmically effective and hardware-efficient. The key finding is that by making the slow state explicit, shared, and low-rank, and by executing the fast and slow pathways in parallel, long temporal horizons can coexist with implementation efficiency.

👉 More information
🗞 Algorithm-hardware co-design of neuromorphic networks with dual memory pathways
🧠 ArXiv: https://arxiv.org/abs/2512.07602
