Training Deep Spiking Neural Networks Using Backpropagation
Lee JH; Delbruck T; Pfeiffer M
Front Neurosci. 2016; 10: 508
PMID: 27877107
Deep spiking neural networks (SNNs) hold the potential for improving the latency
and energy efficiency of deep neural networks through data-driven event-based
computation. However, training such networks is difficult due to the
non-differentiable nature of spike events. In this paper, we introduce a novel
technique that treats the membrane potentials of spiking neurons as
differentiable signals, with discontinuities at spike times regarded as
noise. This enables an error backpropagation mechanism for deep SNNs that follows
the same principles as in conventional deep networks, but works directly on spike
signals and membrane potentials. Compared with previous methods relying on
indirect training and conversion, our technique has the potential to capture the
statistics of spikes more precisely. We evaluate the proposed framework on
artificially generated events from the original MNIST handwritten digit
benchmark, and also on the N-MNIST benchmark recorded with an event-based dynamic
vision sensor, in which the proposed method reduces the error rate by a factor of
more than three compared to the best previous SNN, and also achieves a higher
accuracy than a conventional convolutional neural network (CNN) trained and
tested on the same data. We demonstrate in the context of the MNIST task that
thanks to their event-driven operation, deep SNNs (both fully connected and
convolutional) trained with our method achieve accuracy equivalent to that of
conventional neural networks. In the N-MNIST example, equivalent accuracy is
achieved with about five times fewer computational operations.
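The core idea, backpropagating through membrane potentials while treating the spike-time discontinuities as noise, can be sketched in a few lines. The PyTorch sketch below is a minimal illustration of that idea, not the authors' exact formulation: the names (SpikeFn, lif_step), the boxcar surrogate derivative, and all constants (decay 0.9, threshold 1.0, window width 0.5, the toy objective) are illustrative assumptions.

import torch

class SpikeFn(torch.autograd.Function):
    # Forward: Heaviside step on the membrane potential.
    # Backward: ignore the discontinuity and pass the gradient through a
    # window around threshold, treating the spike-time jump as noise.
    @staticmethod
    def forward(ctx, v, threshold):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Boxcar surrogate derivative (an assumption, not the paper's form).
        window = ((v - ctx.threshold).abs() < 0.5).float()
        return grad_output * window, None

def lif_step(v, x, w, decay=0.9, threshold=1.0):
    # One leaky integrate-and-fire step: leak, integrate weighted input,
    # spike where the potential crosses threshold, reset by subtraction.
    v = decay * v + x @ w
    s = SpikeFn.apply(v, threshold)
    v = v - s * threshold
    return v, s

# Toy usage: unroll one layer over T time steps and backpropagate through
# the membrane potentials, as in conventional deep-network training.
T, batch, n_in, n_out = 20, 8, 100, 10
w = (torch.randn(n_in, n_out) * 0.1).requires_grad_()
v = torch.zeros(batch, n_out)
spike_count = torch.zeros(batch, n_out)
for _ in range(T):
    x = (torch.rand(batch, n_in) < 0.2).float()  # Bernoulli input spikes
    v, s = lif_step(v, x, w)
    spike_count = spike_count + s
loss = spike_count.sum()  # placeholder objective for the sketch
loss.backward()
print(w.grad.abs().mean())

Because the forward pass emits binary spikes, inference can in principle be event-driven; the dense-tensor loop above is written for clarity only.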