Lecture by Timothée Masquelier, researcher at the French National Centre for Scientific Research (CNRS).
My main line of research is the study of spiking neural networks (SNNs). Biological neurons use short, stereotyped electrical impulses called "spikes" to compute and transmit information. Spike times, in addition to spike rates, are known to play an important role in how neurons process information. SNNs are thus more biologically realistic than the artificial neural networks used in deep learning (which use continuous activations), and are arguably the most viable option if one wants to understand how the brain computes at the neuronal level. But SNNs are also appealing for AI technology, because they can be implemented efficiently on low-power neuromorphic chips that leverage distributed, event-driven computation.

SNNs have recently become a very hot topic thanks to a major breakthrough: back-propagation, THE algorithm behind the deep learning revolution, can now be used in SNNs – something that was considered impossible until a few years ago because of non-differentiability issues. The trick is to use a "surrogate gradient" (SG) instead of the true one [1,2], which in practice allows training SNNs in state-of-the-art deep learning frameworks such as PyTorch or TensorFlow, which leverage automatic differentiation – a huge convenience.

Recently, my group has successfully used this SG technique to train SNNs to solve many different problems efficiently: speech command classification [3], epileptic seizure detection from electroencephalograms [4], encrypted Internet traffic classification [5], and a wide range of computer vision tasks: ImageNet classification [6], as well as object/action recognition [7] and depth estimation [8] from event-based cameras.
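The SG trick can be illustrated in a few lines of PyTorch. The sketch below is a minimal, hypothetical example, not the code from the cited papers: the forward pass emits a hard, non-differentiable Heaviside spike, while the backward pass substitutes the smooth derivative of a fast sigmoid, in the spirit of [1]. The names (SpikeFn, lif_forward) and parameter values (scale, beta, threshold) are illustrative assumptions.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a fast-sigmoid surrogate gradient (illustrative)."""

    scale = 10.0  # surrogate steepness; an illustrative value

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()            # non-differentiable spike

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Surrogate derivative: 1 / (scale * |v| + 1)^2, i.e. the
        # derivative of a scaled fast sigmoid, used in place of the
        # (zero almost everywhere) true gradient of the Heaviside step.
        sg = 1.0 / (SpikeFn.scale * v.abs() + 1.0) ** 2
        return grad_output * sg

def lif_forward(inputs, beta=0.9, threshold=1.0):
    """Run a leaky integrate-and-fire layer over time.

    inputs: tensor of shape (time, batch, features) holding input currents.
    """
    v = torch.zeros_like(inputs[0])
    spikes = []
    for x in inputs:                      # loop over time steps
        v = beta * v + x                  # leaky integration of input
        s = SpikeFn.apply(v - threshold)  # spike when v crosses threshold
        v = v - s * threshold             # soft reset after a spike
        spikes.append(s)
    return torch.stack(spikes)
```

Because autograd routes gradients through the surrogate, an SNN built from such layers can be trained end-to-end with standard PyTorch losses and optimizers.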
References:
1. Neftci EO, Mostafa H, Zenke F: Surrogate Gradient Learning in Spiking Neural Networks. IEEE Signal Process Mag 2019, 36(6):51–63.
2. Zenke F, Bohté SM, Clopath C, Comsa IM, Göltz J, Maass W, Masquelier T, Naud R, Neftci EO, Petrovici MA, Scherr F, Goodman DFM: Visualizing a joint future of neuroscience and neuromorphic engineering. Neuron 2021, 109:571–575.
3. Pellegrini T, Zimmer R, Masquelier T: Low-Activity Supervised Convolutional Spiking Neural Networks Applied to Speech Commands Recognition. In 2021 IEEE Spoken Language Technology Workshop (SLT). IEEE; 2021:97–103.
4. Soltani Zarrin P, Zimmer R, Wenger C, Masquelier T: Epileptic Seizure Detection Using a Neuromorphic-Compatible Deep Spiking Neural Network. In Lecture Notes in Computer Science, vol. 12108; 2020:389–394.
5. Rasteh A, Delpech F, Aguilar-Melchor C, Zimmer R, Shouraki SB, Masquelier T: Encrypted Internet traffic classification using a supervised Spiking Neural Network. arXiv 2021:1–22.
6. Fang W, Yu Z, Chen Y, Huang T, Masquelier T, Tian Y: Deep Residual Learning in Spiking Neural Networks. In NeurIPS; 2021.
7. Fang W, Yu Z, Chen Y, Masquelier T, Huang T, Tian Y: Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks. In ICCV; 2021.
8. Rançon U, Cuadrado-Anibarro J, Cottereau BR, Masquelier T: StereoSpike: Depth Learning with a Spiking Neural Network. arXiv 2021.
Instituto de Microelectrónica de Sevilla
13 December 2021