Spatial Spiking Neural Networks Maintain Full Accuracy at 90% Sparsity with 18× Fewer Parameters through Efficient Temporal Computation

The pursuit of efficient artificial intelligence increasingly focuses on mimicking the brain’s remarkable ability to process information with minimal energy consumption, and spiking neural networks (SNNs) offer a promising pathway. Lennart Landsmeer, Amirreza Movahedin, and Mario Negrello, from Delft University of Technology and Erasmus Medical Center, alongside Said Hamdioui and Christos Strydis, present a new framework called Spatial Spiking Neural Networks (SpSNNs), which fundamentally alters how these networks handle the crucial element of time. Their research demonstrates that modelling synaptic delays as a consequence of neuron positions within a defined space, rather than as freely adjustable parameters, yields significant reductions in computational cost and memory usage. This approach not only outperforms conventional spiking neural networks on standard benchmarks, but also reveals a surprising geometric principle governing temporal processing, paving the way for more scalable and energy-efficient artificial intelligence systems that more closely resemble biological brains. SNNs communicate using precisely timed spikes, offering potential advantages in energy efficiency and biological realism, and the authors investigate how best to design and train such networks for tasks like pattern classification and distance estimation while keeping computational demands low.
The work centers on how information can be encoded in the precise timing of spikes rather than in continuous values. To improve efficiency, the researchers reduce the number of connections by dynamically adjusting connectivity during learning, and they pair this sparsification with realistic neuron models that capture richer dynamics.
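To make the idea of spike-timing codes concrete, here is a minimal latency-coding sketch in Python (our own illustration, not the paper's encoder): stronger inputs fire earlier, so the information is carried by when a neuron spikes rather than by a continuous activation value.

```python
import numpy as np

def latency_encode(values, t_max=100.0, eps=1e-9):
    """Map normalized input intensities in [0, 1] to first-spike times (ms).

    Stronger inputs fire earlier; an input of 0 never fires (time = inf).
    This is one common spike-timing code; the paper's encoding may differ.
    """
    values = np.clip(np.asarray(values, dtype=float), 0.0, 1.0)
    times = np.full(values.shape, np.inf)
    nonzero = values > eps
    # Linear latency code: intensity 1.0 -> spike at t = 0, intensity -> 0 -> t -> t_max.
    times[nonzero] = (1.0 - values[nonzero]) * t_max
    return times

# Example: three input channels with different intensities.
print(latency_encode([1.0, 0.5, 0.0]))   # [  0.  50.  inf]
```

A downstream spiking neuron can then read the input off the order and timing of arriving spikes instead of a continuous activation.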
The team also investigates how the spatial arrangement of neurons, and in particular the length of the connections between them, affects learning and processing, with the goal of creating SNNs that are accurate, efficient, and biologically plausible. Experiments show that these networks achieve competitive performance on pattern classification and distance estimation: dynamic sparsity improves efficiency and generalization, realistic neuron models such as the adaptive exponential integrate-and-fire (AdEx) model capture complex neuronal dynamics, and restricting the neuron arrangement to two or three dimensions can itself improve performance.
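For readers unfamiliar with AdEx, the following sketch simulates a single adaptive exponential integrate-and-fire neuron with forward-Euler integration. The parameter values are the commonly cited Brette–Gerstner defaults and the constant input current is arbitrary, so they need not match the paper's settings.

```python
import numpy as np

# Adaptive exponential integrate-and-fire (AdEx) neuron, forward-Euler integration.
# Parameters follow the widely used Brette & Gerstner (2005) set, not the paper's.
C, g_L, E_L = 281.0, 30.0, -70.6          # capacitance (pF), leak conductance (nS), rest (mV)
V_T, Delta_T = -50.4, 2.0                 # threshold (mV), slope factor (mV)
tau_w, a, b = 144.0, 4.0, 80.5            # adaptation time constant (ms), coupling (nS), jump (pA)
V_reset, V_peak = -70.6, 0.0              # reset and spike-detection voltages (mV)

dt, T = 0.1, 500.0                        # time step and duration (ms)
I_ext = 800.0                             # constant input current (pA), chosen above rheobase

V, w = E_L, 0.0
spike_times = []
for k in range(int(T / dt)):
    # Leak + exponential spike-initiation term - adaptation current + external input.
    dV = (-g_L * (V - E_L) + g_L * Delta_T * np.exp((V - V_T) / Delta_T) - w + I_ext) / C
    dw = (a * (V - E_L) - w) / tau_w
    V += dt * dV
    w += dt * dw
    if V >= V_peak:                       # spike: reset voltage, bump adaptation current
        spike_times.append(k * dt)
        V = V_reset
        w += b

if spike_times:
    print(f"{len(spike_times)} spikes, first at {spike_times[0]:.1f} ms")
else:
    print("no spikes")
```

The exponential term models the sharp spike upstroke, while the adaptation current w, incremented at every spike, lets the neuron reproduce behaviours such as spike-frequency adaptation that simpler leaky integrate-and-fire units miss.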
The study is strengthened by its comprehensive exploration of techniques, its emphasis on biological realism, and its detailed experimental evaluation; promising directions for future work include exploring different learning rules, developing more realistic neuron models, and investigating network topology. At the heart of the approach lies a simple change of perspective. Conventional SNNs treat communication delays as independent trainable parameters, which drives up computational and memory costs. SpSNNs instead ground temporal computation in a learnable spatial structure, mirroring the brain’s organization, in which delays arise from the physical distances between neurons. Each neuron is embedded at a position in a low-dimensional Euclidean space, and spike-propagation delays emerge naturally from the inter-neuron distances, removing the need to train every delay independently.
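As a rough illustration of how delays can emerge from positions, and of the parameter savings this buys, here is a minimal NumPy sketch (our own, using hypothetical layer sizes and a unit conduction velocity, not the authors' implementation): every neuron gets a coordinate in a d-dimensional space, and the delay on a connection is simply the Euclidean distance between the two coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_delays(pre_pos, post_pos, velocity=1.0):
    """Spike-propagation delays as Euclidean distances between neuron positions.

    pre_pos:  (n_pre, d) coordinates of presynaptic neurons
    post_pos: (n_post, d) coordinates of postsynaptic neurons
    Returns an (n_pre, n_post) delay matrix; delays change whenever positions move.
    """
    diff = pre_pos[:, None, :] - post_pos[None, :, :]
    return np.linalg.norm(diff, axis=-1) / velocity

# Hypothetical layer sizes, for illustration only.
n_pre, n_post, d = 700, 256, 3
pre_pos = rng.normal(size=(n_pre, d))
post_pos = rng.normal(size=(n_post, d))

delays = pairwise_delays(pre_pos, post_pos)          # (700, 256) delays derived from the coordinates

# Parameter bookkeeping: per-synapse delays vs. per-neuron coordinates.
per_synapse_delays = n_pre * n_post                  # one delay per connection
per_neuron_coords = (n_pre + n_post) * d             # d numbers per neuron
print(f"per-synapse delays: {per_synapse_delays:,}")      # 179,200
print(f"per-neuron coordinates: {per_neuron_coords:,}")   # 2,868
```

Only delay-related parameters are counted here; synaptic weights remain one per connection, which is why the end-to-end savings the authors report (up to 18× with dynamic sparsification) are smaller than this raw ratio. The gradient of each delay with respect to the two positions is just the normalized difference vector, one ingredient that makes it possible to optimize positions jointly with weights by gradient descent.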
This research pioneers a method for jointly optimizing synaptic weights and neuron coordinates, yielding SNNs with significantly fewer parameters. The researchers tested SpSNNs across varying dimensionalities, embedding neurons in 0-, 1-, 2-, and 3-dimensional spaces to probe the impact of spatial constraints on network performance. Networks constrained to two or three dimensions consistently outperformed both networks with unconstrained delays and networks with no delays at all: across tasks, performance peaked in 2D and 3D networks rather than in networks whose delay space is effectively infinite-dimensional, revealing a beneficial geometric regularization effect.

To train these networks efficiently, the team engineered a custom automatic differentiation framework that computes exact gradients for trainable delays. The framework handles spike-induced updates through novel, custom-derived gradients, allowing it to integrate with diverse neuron models and network architectures. The forward pass runs a time-discretized simulation that carefully manages spike-induced updates, while the backward pass applies automatic differentiation with the custom gradients to obtain the loss gradients used to update neuron positions and synaptic weights.

Replacing fully trainable delays with position learning substantially reduces the number of parameters while preserving the ability to process temporal information. On benchmark tasks, SpSNNs outperformed conventional SNNs with unconstrained delays despite using fewer parameters, and dynamically sparsified SpSNNs maintained full accuracy even at 90% sparsity, matching standard delay-trained SNNs while using up to 18× fewer parameters. On one task, test accuracy rose as the number of hidden neurons increased, demonstrating scalability. For a more complex task, the team organized the neurons in a recurrent fashion and used a rate-coded scheme; here too, accuracy generally increased with the number of hidden neurons, and the three-dimensional SpSNN achieved the highest accuracy, outperforming both the other dimensionalities and the infinite-dimensional network. Together, these results indicate that grounding delays in a low-dimensional geometry acts as an effective regularizer while dramatically cutting the resources needed for training.

👉 More information
🗞 Spatial Spiking Neural Networks Enable Efficient and Robust Temporal Computation
🧠 ArXiv: https://arxiv.org/abs/2512.10011
