
Cognisnn Enables Neuron-Expandability and Dynamic-Configurability Via Random Graph Architectures in Spiking Neural Networks

Quantum Zeitgeist
5 min read

Spiking neural networks represent a promising next generation of artificial intelligence, aiming to more closely mimic the efficiency and adaptability of the brain, but current designs often replicate the rigid structures of traditional systems. Yongsheng Huang, Peibo Duan, and colleagues from Northeastern University, alongside Kai Sun from Monash University, now present a new approach called CogniSNN, which incorporates random graph architecture to unlock key characteristics of biological neural networks, including neuron expandability, pathway reusability, and dynamic configurability. This innovative system addresses challenges in deep networks and enables efficient multi-task learning by selectively reusing critical neural pathways and dynamically growing new connections, ultimately achieving performance comparable to, and often exceeding, state-of-the-art spiking neural networks on challenging image datasets. By enhancing continuous learning and improving robustness, CogniSNN demonstrates the potential of brain-inspired intelligence and paves the way for practical deployment of spiking neural networks on dedicated hardware.
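The random-graph wiring at the heart of CogniSNN can be illustrated with a short sketch. This is a minimal illustration under assumed conventions, not the authors' implementation: it orders the nodes and permits only forward edges, which keeps the random graph acyclic so it can still be evaluated like a feed-forward network while retaining stochastic connectivity.

```python
import random

def random_graph_architecture(n_nodes, p, seed=0):
    """Build a random DAG: edge i -> j (with i < j) exists with probability p.

    Allowing only forward edges over an ordered node set guarantees
    acyclicity, so a plain 0..n-1 sweep is a valid evaluation order,
    while the wiring itself remains random rather than chain-like.
    """
    rng = random.Random(seed)
    return [(i, j)
            for i in range(n_nodes)
            for j in range(i + 1, n_nodes)
            if rng.random() < p]

edges = random_graph_architecture(8, 0.3)
# Every edge points forward, so the graph is acyclic by construction.
assert all(i < j for i, j in edges)
```

Fixing the seed makes the architecture reproducible, which matters when the same random topology must be rebuilt for deployment.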

Spiking Networks Mimic Brain Connectivity and Learning

Scientists developed Cognition-aware Spiking Neural Networks (CogniSNN) by incorporating a Random Graph Architecture (RGA), moving beyond the chain-like structures of most spiking neural networks and mirroring the stochastic interconnectivity observed in biological brains.

To address network degradation and unbounded value accumulation in deep pathways, the team engineered a pure spiking OR-gate residual mechanism combined with an Adaptive Pooling scheme, achieving a mathematically strict identity mapping without relying on floating-point additions. This resolves the value-accumulation problem while preserving event-driven computation, which is crucial for efficient spiking neural network operation.

The researchers further tackled the limitations of continual learning by introducing a Key Pathway-based Learning without Forgetting (KP-LwF) algorithm. Drawing on graph theory, they defined key pathways by betweenness centrality, enabling selective reuse of topological structures when acquiring new knowledge. The model adapts to diverse tasks by recruiting pathways with high or low betweenness centrality, mirroring the brain's functional allocation and surpassing methods that operate only on intra-layer structures or full-network fine-tuning.

To achieve dynamic configurability, the study introduced a Dynamic Growth Learning (DGL) algorithm that simulates the evolution of neural pathways along the temporal dimension. Unlike pruning-based methods, DGL allows neurons and synapses to grow dynamically, enhancing robustness against noise and frame loss, reducing training time, and improving deployment flexibility on neuromorphic hardware, in line with the structural plasticity observed in biological brains.
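The pure spiking OR-gate residual can be sketched as follows. This is a hedged sketch, not the paper's code: it assumes spike trains are binary sequences, and replaces the usual floating-point residual sum `y = x + f(x)` with a logical OR, so activations stay in {0, 1} and a silent block yields a strict identity mapping.

```python
def or_gate_residual(x_spikes, block):
    """Residual connection for binary spike trains using logical OR.

    Unlike y = x + f(x), which can accumulate unbounded values along
    deep pathways, OR keeps every activation in {0, 1}, so the
    computation remains purely event-driven.
    """
    f_x = block(x_spikes)
    return [xi | fi for xi, fi in zip(x_spikes, f_x)]

# Identity-mapping check: if the block emits no spikes, the input
# passes through unchanged -- no floating-point addition involved.
silent_block = lambda spikes: [0] * len(spikes)
x = [1, 0, 1, 1, 0]
assert or_gate_residual(x, silent_block) == x
```

The key property is boundedness: however many residual stages are stacked, outputs never leave {0, 1}, avoiding the value blow-up an additive shortcut would cause.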
Extensive experiments on neuromorphic and static datasets demonstrate that CogniSNN achieves performance comparable to state-of-the-art models while exhibiting superior anti-interference capability and continual-learning potential.

CogniSNN Achieves State-of-the-Art Spiking Performance

This work introduces Cognition-aware Spiking Neural Networks (CogniSNN), a new paradigm designed to more closely mimic the brain's structure and function, and demonstrates significant advancements in spiking neural network technology. Researchers addressed limitations in existing networks by incorporating a Random Graph Architecture (RGA), moving away from traditional, rigid hierarchical designs. Experiments reveal that CogniSNN achieves performance comparable to, and in some cases exceeding, current state-of-the-art SNNs on standard datasets and the challenging Tiny-ImageNet benchmark.

A key innovation lies in the pure spiking residual mechanism, alongside an adaptive pooling strategy, which mitigates network degradation and dimensional mismatch in deep pathways. The team also designed a Key Pathway-based Learning without Forgetting (KP-LwF) approach, enabling efficient multi-task transfer by selectively reusing critical neural pathways while preserving previously learned knowledge.

The research further introduces a Dynamic Growth Learning (DGL) algorithm, allowing neurons and synapses to grow dynamically over time. This dynamic growth significantly enhances the network's robustness against interference and improves its adaptability for deployment on neuromorphic hardware. Extensive testing demonstrates that CogniSNN possesses superior anti-interference capability and continual-learning potential compared to traditional chain-like architectures.
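The growth-over-time idea behind DGL can be sketched in a few lines. This is a simplified illustration under assumed conventions, not the authors' algorithm: starting from a sparse forward-edge set, a few new synapses are added per time step, so the pathway set expands as learning proceeds instead of being pruned down from a fully connected network.

```python
import random

def dynamic_growth(edges, n_nodes, grow_per_step, steps, seed=0):
    """Grow new synapses step by step instead of pruning a fixed net.

    At each time step a few new forward edges (i -> j, i < j) are
    added at random, mimicking structural plasticity: the network's
    topology expands along the temporal dimension.
    """
    rng = random.Random(seed)
    edges = set(edges)
    history = [len(edges)]          # edge count after each step
    for _ in range(steps):
        candidates = [(i, j)
                      for i in range(n_nodes)
                      for j in range(i + 1, n_nodes)
                      if (i, j) not in edges]
        for e in rng.sample(candidates, min(grow_per_step, len(candidates))):
            edges.add(e)
        history.append(len(edges))
    return edges, history

# Starting from a single synapse, the edge count grows monotonically.
edges, history = dynamic_growth({(0, 1)}, n_nodes=5, grow_per_step=2, steps=3)
assert history == [1, 3, 5, 7]
```

Because growth only ever adds forward edges, every intermediate network remains a valid acyclic architecture, which is what would make such a scheme flexible under varying time-step constraints.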

The team highlights that the primary goal of this work is to integrate random graph structures into SNNs, strengthening the link between neuroscience and artificial intelligence and creating systems that are inherently robust, adaptable, and capable of lifelong learning.

CogniSNN Learns Continuously, Exceeds Network Performance

This research introduces Cognition-aware Spiking Neural Networks (CogniSNN), a new approach to artificial intelligence inspired by the brain's complex neural structure. Departing from traditional artificial neural networks, CogniSNN incorporates a random graph architecture, allowing for more flexible and efficient information processing. Researchers addressed challenges in deep networks by developing a pure spiking residual mechanism and adaptive pooling strategy, enhancing performance and stability. A Key Pathway-based Learning approach was designed to selectively reuse critical neural pathways, enabling continuous learning and mitigating forgetting when transferring knowledge between tasks.

Experiments demonstrate that CogniSNN achieves performance comparable to, and often exceeding, current state-of-the-art spiking neural networks on benchmark datasets. High-betweenness-centrality pathways encode essential features for familiar tasks, while low-betweenness-centrality pathways prove more effective when adapting to novel scenarios, mirroring the brain's own strategies. A Dynamic Growth Learning algorithm further enhances robustness against noise and allows for flexible deployment under varying time-step constraints. This work represents a significant step towards brain-inspired intelligence and lays the groundwork for practical applications of spiking neural networks in hardware.

More information: CogniSNN: Enabling Neuron-Expandability, Pathway-Reusability, and Dynamic-Configurability with Random Graph Architectures in Spiking Neural Networks. ArXiv: https://arxiv.org/abs/2512.11743
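The pathway-selection idea — rank structures by betweenness centrality and reuse the high-ranked (or low-ranked) ones depending on the task — can be sketched with a standard centrality computation. This is an illustrative sketch, not the paper's KP-LwF implementation: it uses Brandes' algorithm on a small directed graph and simply returns the top-k (or bottom-k) nodes as the "key pathway".

```python
from collections import deque

def betweenness(nodes, adj):
    """Brandes' algorithm: how many shortest paths pass through each node."""
    bc = {v: 0.0 for v in nodes}
    for s in nodes:
        stack, pred = [], {v: [] for v in nodes}
        sigma = {v: 0 for v in nodes}; sigma[s] = 1
        dist = {v: -1 for v in nodes}; dist[s] = 0
        q = deque([s])
        while q:                                  # BFS from source s
            v = q.popleft(); stack.append(v)
            for w in adj.get(v, []):
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; pred[w].append(v)
        delta = {v: 0.0 for v in nodes}
        while stack:                              # back-propagate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def key_pathway(nodes, adj, k=2, high=True):
    """Pick the k nodes with highest (or lowest) betweenness centrality."""
    bc = betweenness(nodes, adj)
    return sorted(nodes, key=lambda v: bc[v], reverse=high)[:k]

# Diamond graph 0 -> {1, 2} -> 3 -> 4: node 3 is the bottleneck every
# shortest path crosses, so it tops the high-centrality pathway.
adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
assert key_pathway([0, 1, 2, 3, 4], adj, k=1) == [3]
```

Under this reading, `high=True` would recruit the heavily shared bottleneck structures for tasks close to old knowledge, while `high=False` would recruit peripheral, lightly used structures for novel tasks.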
