
  1. (Department of Computer Engineering, KwangWoon University, Korea {parkjow15, unangsunarya2040, rmsqhwkd2, cndtjq97, jw03070, lunit, parkcheolsoo}@kw.ac.kr )
  2. ( School of Applied Science, Telkom University, Bandung, Indonesia unangsunarya@telkomuniversity.ac.id)



Spiking neural network, Machine learning, Neuromorphic, Artificial intelligence

1. Introduction

The spiking neural network (SNN) is a third-generation artificial neural network (ANN) [1] in which each neuron receives trigger signals from other neurons and fires a spike when a certain threshold is reached. Unlike other neural networks, such as the ANN, that use more than one-bit communication between cells, the SNN uses spikes with a precision of one bit [2]. In an SNN, it is not the signal shape that matters but the timing and number of spikes: the spikes generated by each neuron have the same form, but every neuron can generate different numbers of spikes at different times, which is the key feature of the SNN. Implementing conventional neural network algorithms in hardware is challenging because of their energy consumption, especially for large networks; the SNN can consume less energy than conventional networks [3].

Spiking neural networks can be used in a variety of ways. Harvest yield estimation, for example, attempts to forecast a wheat crop yield [1]. They are also used for digit pattern recognition [2,4] and EEG signal classification [5,6]. In addition, some SNN researchers try to keep the network biologically plausible [2]; that is, they set all or most of the neuron parameters to match actual neurobiological observations, which can reduce accuracy [7,8]. Others have implemented an SNN via rate-based learning in an ANN: the ANN is trained with backpropagation and then converted to a spiking neural network [9-11]. The basic principle of the SNN is to transform input signals (temporal information) into spike trains representing event-sensitive timing [1]. In this paper, several SNN algorithms and model variations are explained in Section 2. There are several encoding methods, and the encodings utilized by SNNs are discussed in Section 3. Section 4 addresses learning methods for SNNs, and Section 5 concludes this paper.

2. The Spiking Neural Network

The SNN is a biologically based model that mimics the behavior and activities of neurons [2]. In an SNN, spike trains encode temporal input information. The neurons' firing rates change in response to different input strengths, and spike trains generated by one neuron flow into other neurons.

There are three main parts of a neuron: the dendrites, the axon, and the soma. Dendrites receive spike trains from an input neuron through a synapse. The accumulated spikes then build up the membrane potential of the neuron, which rises with incoming spikes and decays exponentially, depending on the number of spike trains within a certain time frame. If the accumulated membrane potential crosses a certain threshold, the neuron fires and generates a spike; the threshold can be defined by each model. The more often a neuron receives spike trains, the faster it will fire spikes. Generated spikes are transmitted to other neurons via the axon, whose terminal is called the synapse. Fig. 1 illustrates this mechanism of the neuronal system [5,12].

Fig. 1. The neuronal system.
../../Resources/ieie/IEIESPC.2023.12.1.64/fig1.png

2.1 Classes of Spiking Neural Networks

There are various classes of SNN. The first is the evolving spiking neural network (eSNN), which combines a neural network's ability to learn with approximate reasoning and linguistically meaningful features [13-15]. The second is the dynamic evolving spiking neural network (deSNN), which utilizes dynamic synapses and rank-order learning together with spike-driven synaptic plasticity in a fast, online mode [16-18]. Lastly, the NeuCube SNN learns from spatio/spectro-temporal brain data, building connections between clusters of neurons that trace neuronal activity [1,19]. The details of the above-mentioned algorithms are described below.

▪ Evolving Spiking Neural Network

An eSNN integrates neural network adaptive learning with fuzzy-rule-based logical reasoning. To discover new patterns in incoming data, the eSNN applies a one-pass, rank-order strategy to create new spiking neurons and neural connections [16]. This architecture, a combination of an Evolving Connectionist System (ECoS) and an SNN, creates spiking neurons and merges them incrementally to find groups and similarities in incoming data. The eSNN is claimed to be adaptive and fast to train [13]. One implementation of the eSNN controls animats in a task requiring temporal pattern recognition [20]. That paper proposed a mechanism to find target sources of specific signals (temporal patterns of sound) in the presence of distractors.

▪ Dynamic Evolving Spiking Neural Network

The deSNN is an improved eSNN model that utilizes temporal spike coding and rank-order spike coding [18]. A deSNN also utilizes dynamic synapses and rank-order learning from spatio/spectro-temporal data (SSTD) in a fast, online mode. The deSNN models have better performance in terms of speed and accuracy, compared to SNN models that employ only STDP or only rank-order learning [16].

▪ NeuCube

The NeuCube is an SNN model originally developed to handle streams of spatiotemporal data. It is widely used to process spatiotemporal remote sensing data for crop-yield estimation and early prediction of events [1]. A NeuCube is also used for understanding, mapping, and learning spatiotemporal brain data (STBD) [19]. Fig. 2 illustrates the NeuCube structure [19]. The NeuCube architecture consists of four functional modules [19]:

Fig. 2. The NeuCube architecture for spatiotemporal brain data.
../../Resources/ieie/IEIESPC.2023.12.1.64/fig2.png

1. Input data encoding

2. 3D SNN reservoir (SNNr)

3. Classification output function

4. Optional Gene Regulatory Network (GRN)

Several software implementations of the NeuCube SNN exist in Python with PyNN, Matlab, C++, and Java. SpiNNaker, accessible in the cloud, is a computing platform that supports the implementation of NeuCube models [21].

2.2 Neuron Model

There are a variety of neuron models: the Izhikevich, Hodgkin-Huxley, and Leaky Integrate and Fire models, among others [22]. This paper describes these spiking neuron models below.

▪ Izhikevich Neuron Model

Izhikevich's neuron model can handle nonlinear and linear pattern recognition problems [24]:

(1)
$ C\dot{v}=k\left(v-v_{r}\right)\left(v-v_{t}\right)-u+I \\ \dot{u}=a\left\{b\left(v-v_{r}\right)-u\right\} \\ if~ v\geq v_{peak},~ then~ v\leftarrow c,~ u\leftarrow u+d $

where a is the recovery time constant; the neuron behaves as an integrator if b < 0 and as a resonator if b > 0; k is a constant that can be determined once the input resistance is known; and c and d account for the action of high-threshold currents activated during spike firing. The membrane potential is represented by v, the recovery variable by u, the membrane capacitance by C, and the resting membrane potential by v$_{r}$; $v_{t}$ is the instantaneous threshold potential, $\dot{v}$ is the first derivative of the voltage over time, and $v_{peak}$ is the spike cutoff value.
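As an illustration, Eq. (1) can be integrated with a simple forward-Euler scheme, as in the following Python sketch; the parameter values are illustrative regular-spiking settings rather than values taken from [24].

import numpy as np

# Izhikevich model (Eq. 1), integrated with forward Euler.
# Parameter values below are illustrative regular-spiking settings.
C, k = 100.0, 0.7          # capacitance (pF) and gain constant
vr, vt = -60.0, -40.0      # resting and instantaneous threshold potentials (mV)
a, b = 0.03, -2.0          # recovery time constant and sensitivity
c, d = -50.0, 100.0        # after-spike reset values
v_peak = 35.0              # spike cutoff (mV)

dt, T = 0.1, 1000.0        # time step and duration (ms)
v, u = vr, 0.0
spike_times = []

for step in range(int(T / dt)):
    I = 70.0 if step * dt > 100.0 else 0.0            # step input current (pA)
    v += dt * (k * (v - vr) * (v - vt) - u + I) / C   # membrane potential update
    u += dt * a * (b * (v - vr) - u)                  # recovery variable update
    if v >= v_peak:                                   # spike: reset v and update u
        v, u = c, u + d
        spike_times.append(step * dt)

print(f"{len(spike_times)} spikes, first at {spike_times[0]:.1f} ms" if spike_times else "no spikes")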

▪ Hodgkin-Huxley Neuron Model

In the Hodgkin-Huxley neuron model, the current flowing through the membrane is determined by three ion channels: a sodium channel with conductance $g_{Na}$, a potassium channel with conductance $g_{K}$, and a leak channel with conductance $g_{CL}$ [25]:

(2)
$ C\frac{dV_{i}}{dt}=g_{Na}^{max}m^{3}h\left(V_{Na}-V_{i}\right)+g_{K}^{max}n^{4}\left(V_{K}-V_{i}\right)+g_{CL}^{max}\left(V_{CL}-V_{i}\right)+I_{syn}^{i,j} $

where $V_{i}$ is the membrane potential of the i-th neuron, t is time in ms, and $I_{syn}^{i,j}$ is the synaptic current flowing into neuron i from neuron j. Parameters m, h, and n are gating variables for sodium activation, sodium inactivation, and potassium activation, respectively.
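A minimal forward-Euler simulation of Eq. (2) could look like the following Python sketch; it uses the classic Hodgkin-Huxley rate functions and squid-axon parameters, which are assumptions here rather than values taken from [25].

import numpy as np

# Hodgkin-Huxley membrane equation (Eq. 2) with classic rate functions.
# Voltages in mV, time in ms, currents in uA/cm^2 (illustrative values).
C = 1.0
g_Na, E_Na = 120.0, 50.0
g_K,  E_K  = 36.0, -77.0
g_L,  E_L  = 0.3, -54.4

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * np.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T = 0.01, 50.0
V = -65.0
# start the gating variables at their steady-state values
m = alpha_m(V) / (alpha_m(V) + beta_m(V))
h = alpha_h(V) / (alpha_h(V) + beta_h(V))
n = alpha_n(V) / (alpha_n(V) + beta_n(V))

for step in range(int(T / dt)):
    I_syn = 10.0 if step * dt > 5.0 else 0.0   # injected current
    # membrane equation (Eq. 2)
    dV = (g_Na * m**3 * h * (E_Na - V) + g_K * n**4 * (E_K - V)
          + g_L * (E_L - V) + I_syn) / C
    # gating-variable kinetics
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    V += dt * dV

print(f"membrane potential after {T} ms: {V:.2f} mV")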

▪ Leaky Integrate and Fire Model

Another model that describes the interaction between neurons is the Leaky Integrate and Fire (LIF) neuron model. It models a neuron using simple electrical circuit elements, such as a resistor, a capacitor, a voltage source, and a current source:

(3)
$ \tau _{m}\frac{du}{dt}=-u\left(t\right)+RI\left(t\right) $

where $\tau _{m}$ is a membrane time constant, $u\left(t\right)$ is the membrane potential at time t, R is membrane resistance, and $I\left(t\right)$ is input current at time t. This model encodes input value intensity into spikes [2,26]. Fig. 3 shows the structure of the LIF model [9].

Fig. 3. Structure of the Leaky Integrate and Fire model.
../../Resources/ieie/IEIESPC.2023.12.1.64/fig3.png
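A minimal forward-Euler simulation of Eq. (3), with a simple threshold-and-reset rule added, could look like the following Python sketch; the threshold, reset, and input values are illustrative assumptions rather than values from [9] or [26].

import numpy as np

# LIF model (Eq. 3) with threshold/reset; all values are illustrative.
tau_m = 10.0                                 # membrane time constant (ms)
R = 1.0                                      # membrane resistance
u_rest, u_reset, u_thresh = 0.0, 0.0, 15.0   # potentials (mV)

dt, T = 0.1, 200.0
u = u_rest
spikes = []

for step in range(int(T / dt)):
    I = 20.0 if 50.0 <= step * dt <= 150.0 else 0.0    # input current
    u += dt * (-(u - u_rest) + R * I) / tau_m          # leaky integration
    if u >= u_thresh:                                  # fire and reset
        spikes.append(step * dt)
        u = u_reset

print(f"LIF neuron fired {len(spikes)} spikes")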

3. Encoding

Neurons are cells in the body that generate signals and rapidly propagate them. They accomplish this by producing characteristic electrical pulses known as action potentials. A sensory neuron changes its activity in response to an external stimulus by firing a series of action potentials in various temporal patterns. As these action potential patterns are transmitted throughout the brain, information about the stimuli is encoded [27]. Information is converted to a different form or format for standardization, security, processing speed improvement, and storage space savings [27]. The most common encoding methods are rate coding and temporal coding. The two approaches are distinguished by how much importance they place on the precise timing and order of the spikes that convey the information.

3.1 Rate Coding

In 1926, Adrian and Zotterman proposed rate coding [29], also known as frequency coding, since the rate refers to an average over time [28]. This is the standard coding scheme, which assumes that the firing rate carries the majority of the information about the stimuli [30]. Because the sequence of action potentials generated by a given stimulus varies from trial to trial, neuronal responses are treated statistically or probabilistically [30]. Thus, it is possible to characterize the firing rate rather than a specific spike sequence [31]. Rate coding became the standard method for characterizing sensory and cortical neurons in the following decades, owing to the relative ease of measuring rates experimentally [32]. However, this approach can ignore information contained in the precise timing of spikes, and growing experimental evidence suggests that a firing rate based on time averages may not be a sufficient description of brain activity [32].
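In simulations, rate coding is commonly realized by drawing spikes from a Poisson-like process whose rate is proportional to the input intensity. The following Python sketch illustrates the idea; the maximum rate and encoding window are illustrative assumptions.

import numpy as np

# Rate coding: map an input intensity in [0, 1] to a firing rate and
# draw a Poisson-like spike train with that rate.
def rate_encode(intensity, max_rate=100.0, duration_ms=100.0, dt_ms=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    rate_hz = intensity * max_rate              # firing rate in Hz
    p_spike = rate_hz * dt_ms / 1000.0          # spike probability per time bin
    steps = int(duration_ms / dt_ms)
    return (rng.random(steps) < p_spike).astype(int)

train = rate_encode(0.8)
print("spike count:", train.sum())              # about 8 spikes on average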

3.2 Temporal Coding

A neural code is described as a temporal code when it transmits information through precise spike timing. Neuronal firing rates fluctuate at high frequencies, and these fluctuations can either carry information or be noise [31]. Temporal coding offers an alternative explanation: what appears to be noise actually encodes and transmits information and influences neural activity. In some cases, temporal codes are referred to as spike codes. Essential information can be lost because a rate code cannot capture all the information contained in spike trains. Additionally, responses to similar stimuli differ enough to suggest that distinct spike patterns contain more information than can be encoded by a rate code [34].
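One simple temporal scheme is time-to-first-spike (latency) coding, in which stronger inputs produce earlier spikes. The following Python sketch illustrates the idea; the encoding window is an illustrative assumption.

import numpy as np

# Time-to-first-spike (latency) coding: stronger inputs fire earlier.
def latency_encode(intensities, t_max_ms=100.0):
    intensities = np.clip(np.asarray(intensities, dtype=float), 0.0, 1.0)
    # intensity 1.0 -> spike at t = 0; intensity 0.0 -> latest time (t_max)
    return (1.0 - intensities) * t_max_ms

print(latency_encode([1.0, 0.5, 0.1]))   # spike times of 0, 50, and 90 ms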

4. Types of Learning

Learning is the process by which a network builds a model from input training data. After training, the network has a model that can be used to recognize or make decisions about new data. In general, types of learning are classified as shown in Fig. 4 [35,36].

Fig. 4. Types of Learning.
../../Resources/ieie/IEIESPC.2023.12.1.64/fig4.png

▪ Supervised Learning

In supervised learning, the algorithm uses label information during training and then applies what it has learned to new test data. Fig. 5 shows a supervised neural network process [37].

The output of this learning process is a model that can be applied to new test data; the model estimates the labels of the test data using the network weights calculated during training. General pseudocode for supervised learning with a support vector machine (SVM) is shown below.
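The SVM pseudocode referenced here does not appear in this version of the text. As a stand-in, the following is a minimal supervised-learning sketch in Python using the scikit-learn SVM implementation; the library, kernel, and dataset choices are assumptions for illustration only.

from sklearn import datasets, svm
from sklearn.model_selection import train_test_split

# Generic supervised-learning flow with an SVM classifier:
# train on labeled data, then predict labels for unseen test data.
X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = svm.SVC(kernel="rbf")            # 1. choose a model
model.fit(X_train, y_train)              # 2. learn weights from labeled training data
accuracy = model.score(X_test, y_test)   # 3. evaluate on held-out test data
print(f"test accuracy: {accuracy:.3f}")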

One implementation of supervised learning in spiking neural networks was presented in [38]. That study used supervised learning in an SNN to test complex nonlinear classification problems on an iris dataset and demonstrated that the SNN could perform nonlinear, separable classification tasks with 97.33\% accuracy after 1200 iterations. A supervised learning rule for classification of spatiotemporal spike patterns is another supervised SNN solution [39]. This implementation takes into account axonal and synaptic delays caused by weights and STDP, as in a remote supervised method (e.g., ReSuMe). It was compared with other methods using the tempotron learning rule and the spike pattern association neuron (SPAN), and it outperformed them, reaching 100\% accuracy in training and testing.

Fig. 5. A supervised neural network.
../../Resources/ieie/IEIESPC.2023.12.1.64/fig5.png

▪ Unsupervised Learning

An unsupervised learning algorithm updates the weights of networks without involving label information during the training process [37]. Fig. 6 diagrams an unsupervised neural network [37].

Although learning is conducted without ground truth, the algorithm explores the input data and draws inferences from the dataset to describe its hidden structure [28]. General pseudocode for K-means clustering, a representative unsupervised learning method, is shown below.
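The K-means pseudocode referenced above is likewise missing from this version. A minimal Python sketch of the standard algorithm is given below; the initialization and stopping criteria are illustrative assumptions.

import numpy as np

def kmeans(X, k, n_iter=100, rng=None):
    """Minimal K-means: assign points to the nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    if rng is None:
        rng = np.random.default_rng(0)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # assignment step: nearest centroid for every point
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # update step: mean of the points in each cluster
        new_centroids = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                  else centroids[j] for j in range(k)])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5.0])
labels, centroids = kmeans(X, k=2)
print("cluster sizes:", np.bincount(labels))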

Several studies implementing the SNN in an unsupervised way have been suggested. In [40], the authors implemented an unsupervised clustering SNN that classifies an iris dataset through efficient utilization of neurons. That study demonstrated how Hebbian learning induces and exploits synchronous neurons with enhanced unsupervised clustering capabilities; the resulting SNN architecture uses neurons efficiently but still yields reliable clustering of high-dimensional multi-modal data. Another unsupervised SNN study proposed an approach based on hybridization with self-organizing maps (SOM) [41]. This method combined STDP rules with an SOM algorithm to train on the MNIST dataset of handwritten digits [41,42]. Fig. 7 shows the architecture of an unsupervised spiking neural network [41,42].

In this network, the weights of the connections between the input and excitatory layers are trained using the STDP learning rule (see Fig. 7). An SNN using unsupervised learning was implemented in [2], where biologically plausible mechanisms for learning new data were designed. This SNN model was tested on the MNIST dataset and achieved 95\% accuracy.

Fig. 6. An unsupervised neural network.
../../Resources/ieie/IEIESPC.2023.12.1.64/fig6.png
Fig. 7. An unsupervised spiking neural network architecture.
../../Resources/ieie/IEIESPC.2023.12.1.64/fig7.png
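As a concrete illustration of the STDP rule used to train these connections, the following is a minimal pair-based STDP weight update in Python; the learning rates and time constants are illustrative assumptions rather than the values used in [2] or [41,42].

import numpy as np

# Pair-based STDP: potentiate when the presynaptic spike precedes the
# postsynaptic spike, depress otherwise.
def stdp_dw(t_pre, t_post, A_plus=0.01, A_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    dt = t_post - t_pre                        # timing difference (ms)
    if dt >= 0:                                # pre before post -> potentiation
        return A_plus * np.exp(-dt / tau_plus)
    return -A_minus * np.exp(dt / tau_minus)   # post before pre -> depression

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (30.0, 28.0), (50.0, 52.0)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)   # keep the weight bounded
print(f"final weight: {w:.3f}")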

▪ Semi-supervised Learning

The semi-supervised learning algorithm has properties of both supervised and unsupervised learning, using a small amount of labeled data and a large amount of unlabeled data during the training process. Varone et al. noted that this approach can outperform conventional supervised or unsupervised models [36]. Fig. 8 is a diagram of a semi-supervised neural network [37].

The semi-supervised learning approach was also applied in a pre-training phase [43], where STDP was used for unsupervised pre-training of the network weights; labeled data were then used to train the classifier. General pseudocode for the self-training form of semi-supervised learning is shown below.
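The self-training pseudocode referenced above is missing from this version. A minimal Python sketch is given below; the base classifier (scikit-learn logistic regression) and the confidence threshold are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

def self_training(X_labeled, y_labeled, X_unlabeled, confidence=0.9, max_rounds=10):
    """Self-training: fit on labeled data, pseudo-label confident
    unlabeled samples, and repeat until nothing new is added.
    Inputs are assumed to be NumPy arrays."""
    X_l, y_l, X_u = X_labeled.copy(), y_labeled.copy(), X_unlabeled.copy()
    model = LogisticRegression(max_iter=1000)
    for _ in range(max_rounds):
        model.fit(X_l, y_l)
        if len(X_u) == 0:
            break
        proba = model.predict_proba(X_u)
        confident = proba.max(axis=1) >= confidence
        if not confident.any():
            break
        # move confident pseudo-labeled samples into the labeled set
        X_l = np.vstack([X_l, X_u[confident]])
        y_l = np.concatenate([y_l, model.classes_[proba[confident].argmax(axis=1)]])
        X_u = X_u[~confident]
    return model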

Fig. 8. A semi-supervised neural network.
../../Resources/ieie/IEIESPC.2023.12.1.64/fig8.png

▪ Reinforcement Learning

The reinforcement learning algorithm interacts with the environment by producing actions and receiving rewards/errors from the environment [36,37]. Fig. 9 shows the reinforcement learning diagram [37].

This algorithm is commonly used when an optimal interaction policy must be learned, and general pseudocode is presented below.
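The reinforcement learning pseudocode referenced above is missing from this version. A minimal tabular Q-learning sketch in Python is given below, illustrating the action/reward interaction loop on a toy chain environment; the environment and all parameter values are illustrative assumptions.

import numpy as np

# Tabular Q-learning on a toy 5-state chain: the agent moves left or
# right and is rewarded for reaching the rightmost state.
n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    s = 0
    for _ in range(20):
        # epsilon-greedy action selection
        a = rng.integers(n_actions) if rng.random() < epsilon else Q[s].argmax()
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update from the received reward
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if r > 0:
            break

print("greedy policy:", Q.argmax(axis=1))   # expect mostly action 1 (move right)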

A reinforcement learning SNN was designed in [44], in which the model implements the action-selection process biologically and updates the weights of the network during the next iteration of the process. That study presented a model of prefrontal cortex (PFC) function based on the Leaky Integrate and Fire neuron model, allowing it to perform operant and spatial goal-directed tasks. The model reuses known representations in the PFC that were previously learned through Hebbian STDP. Another implementation of a reinforcement learning SNN [45] presented a spiking neuron model that learns how to carry out a motor control task. Here, reinforcement learning is applied to train the model, and global reward and punishment signals are controlled using STDP.

Fig. 9. Reinforcement learning diagram.
../../Resources/ieie/IEIESPC.2023.12.1.64/fig9.png

5. Conclusion

The SNN models reviewed in this paper have their own advantages and disadvantages depending on the type of input data: the higher the data complexity, the higher the complexity of the network needed to solve the problem. Performance, biological plausibility, simplicity, and computing time are often trade-offs. The Hodgkin-Huxley neuron model is more biologically plausible and can yield highly accurate performance, while the Leaky Integrate and Fire neuron model requires less computing time. In terms of learning methods, clear and simple supervised learning would be the best choice for modeling the network, and semi-supervised learning could be an option for improving accuracy between supervised and unsupervised methods. The reinforcement learning method evaluates the network based on the rewards received for previous actions to see whether those actions worked well, which is why the trial-and-error process is crucial.

ACKNOWLEDGMENTS

This research was supported by the Ministry of Science and ICT (MSIT), under the National Program for Excellence in SW (2017-0-00096), supervised by the Institute of Information & Communications Technology Planning & Evaluation (IITP) and the Ministry of Science and ICT (MSIT), Korea, under the Information Technology Research Center (ITRC) support program (IITP-2022-RS-2022-00156225) supervised by the Institute for Information & Communications Technology Planning & Evaluation (IITP).

REFERENCES

1 
P. Bose, N. K. Kasabov, L. Bruzzone, R. N. Hartono, “Spiking Neural Network for Crop Yield Estimation Based on Spatiotemporal Analysis of Image Time Series”, IEEE Transactions on Geoscience and Remote Sensing, Vol. 54, No. 11, November 2016.URL
2 
P. U. Diehl and M. Cook, “Unsupervised Learning of Digit Recognition Using Spike-Timing-Dependent Plasticity”, Frontiers in Computational Neuroscience, August 2015.DOI
3 
G. Srinivasan, S. Roy, V. Raghunathan, K. Roy, “Spike Timing Dependent Plasticity Based Enhanced Self-Learning for Efficient Pattern Recognition in Spiking Neural Networks”, International Joint Conference on Neural Network (IJCNN), May 2017.URL
4 
K. Kiani and E. M. Korayem, “Classification of Persian Handwritten Digits Using Spiking Neural Network”, International Conference on Knowledge-Based Engineering and Innovation (KBEI), November 5-6, 2015.URL
5 
A. Tahtirvancu and B. Yilmaz, “Classification of EEG Signals Using Spiking Neural Network”, International Conference on Knowledge-Based Engineering and Innovation (KBEI), November 2015.URL
6 
Z. G. Doborjeh, M. Doborjeh, N. Kasabov, “EEG Pattern Recognition Using Brain-Inspired Spiking Neural Network for Modelling Human Decision Processes”, International Joint Conference on Neural Networks (IJCNN), July 2018.URL
7 
D. Querlioz, O. Bichler, P. Dollfus and C. Gamrat, "Immunity to Device Variations in a Spiking Neural Network With Memristive Nanodevices," in IEEE Transactions on Nanotechnology, vol. 12, no. 3, pp. 288-295, May 2013.URL
8 
M. Beyeler, N. D. Dutt, and J. L. Krichmar, “Categorization and Decision-Making in a Neurobiologically Plausible Spiking Network Using an STDP-like Learning Rule”, Neural Networks, Vol. 48, pp. 109-124, July 2013.DOI
9 
P. Merolla, J. Arthur, F. Akopyan, N. Imam, R. Manohar, D. S. Modha, “A Digital Neurosynaptic Core Using Embedded Crossbar Memory with 45 pJ per Spike in 45 nm”, IEEE Custom Integrated Circuits Conference (CICC), September 2011.URL
10 
P. O’Connor, D. Neil, S.C. Liu, T. Delbruck, M. Pfeiffer, “Real Time Classification and Sensor Fusion with a Spiking Deep Belief Network”, Frontiers in Computational Neuroscience, October 2013.URL
11 
S. Hussain, S.C. Liu, A. Basu, “Improved Margin Multi-Class Classification using dendritic neurons with morphological Learning”, in Circuits and Systems (ISCAS), IEEE International Symposium, June 2014.URL
12 
M. C. Ergene, A. Durdu, H. Cetin “Imitation and Learning of human Hand Gesture Tasks of the 3D Printed Robotic Hand By Using Artificial Neural Networks”, International conference on Electronics, Computer and Artificial Intelligence (ECAI), July 2016.URL
13 
N. K. Kasabov, “Evolving Connectionist Systems for Adaptive Learning and Knowledge Discovery: Trends and Directions”, Elsevier: Knowledge-Based Systems, May 2015.URL
14 
S. G. Wysoski, L. Benuskova, N. Kasabov, “Evolving Spiking Neural Networks for Audio Visual Information Processing”, Elsevier: Neural Networks, Vol. 23, No. 7, 2010.URL
15 
S. Schliebs and N. Kasabov, “Evolving Spiking Neural Network - A survey”, Evolving Syst. Vol. 4, No. 2, February 2013.URL
16 
N. K. Kasabov, K. Dhoble, N. Nuntalid, G. Indiveri, “Dynamic Evolving Spiking Neural Networks for On-line Spatio- and Spectro-Temporal Pattern Recognition”, Elsevier: Neural Networks, Vol. 41, May 2013.URL
17 
A. Mohemmed, S. Schliebs, S. Matsuda, N. Kasabov, “Training Spiking Neural Networks to Associate Spatio-temporal Input-Output Spike Patterns”, Neurocomputing, Vol. 107, May 2013.DOI
18 
K. Dhoble, N. Nuntalid, G. Indiveri, N. Kasabov, “Online Spatio-Temporal Pattern Recognition with Evolving Spiking Neural Networks Utilising Address Event Representation, Rank order, and Temporal Spike Learning”. IEEE WCCI, Brisbane. June 2012.URL
19 
N. K. Kasabov, “NeuCube: A Spiking Neural Network Architecture for Mapping, Learning and Understanding of Spatio-Temporal Brain Data”, Elsevier: Neural Networks, February 2013.DOI
20 
C. Bensmail, V. Steuber, N. Davey, B. Wrobel, “Evolving Spiking Neural Networks to Control Animats for Temporal Pattern Recognition and Foraging”, IEEE Symposium Series on Computational Intelligence (SSCI), December 2017.URL
21 
N.K. Kasabov, “Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence”, Springer Series on Bio and Neurosystems 7, August 2018.DOI
22 
A. A. Abusnaina and R. Abdullah, “Spiking Neuron Models: A review”, International Journal of Digital Content Technology and its Applications (JDCTA), Vol. 8 No. 3, June 2014.URL
23 
E.M. Izhikevich, “Simple Model of Spiking Neurons”, IEEE Transactions on Neural Networks Vol. 14 Issue. 6 November 2003.URL
24 
R. Vazquez, “Izhikevich Neuron Model and Its Application in Pattern Recognition”, Neurodynamics: Australian Journal of Intelligent Information Processing System, Vol. 11, No. 1, 2010.URL
25 
M. Lu, J.L. Wang, J. Wen, X.W. Dong, “Implementation of Hodgkin-Huxley Neuron Model in FPGAs”, 7th Asia Pacific International Symposium on Electromagnetic Compatibility, May 2016.URL
26 
M. Dimopoulou, E. Doutsi, M. Antonini “A Retina-Inspired Encoder: An Innovative Step on Image Coding Using Leaky Integrate and Fire Neurons”, IEEE International Conference on Image Processing (ICIP), October 2018.URL
27 
B. Sengupta, S. B. Laughlin, J. E. Niven, “Consequences of converting graded to action potentials upon neural information coding and energy efficiency”, PLoS Comput. Biol., Vol. 10, No. 1, Jan 2014.DOI
28 
D. Auge, J. Hille, E. Mueller, A. Knoll, “A Survey of Encoding Techniques for signal Processing in Spiking Neural Networks”, Neural Process Lett., Vol 53, pp. 4693-4710, July 2021.DOI
29 
E. D. Adrian, Y. Zotterman, “The impulses produced by sensory nerve-endings: Part II. The response of a Single End-Organ”, J Physiol, Vol. 61(2), pp. 151-171, April 1926.DOI
30 
J. Gautrais, S. Thorpe, “Rate coding versus temporal order coding: a theoretical approach”, Biosystems,Vol. 48, pp. 57-65, November 1998.DOI
31 
S. J. Thorpe, “Spike arrival times: A highly efficient coding scheme for neural networks”, Parallel Processing in Neural Systems and Computers, pp. 91-94, January 1990.URL
32 
R. B. Stein, E. R. Gossen, K. E. Jones, “Neuronal variability: noise or part of the signal?”, Nature Reviews Neuroscience, Vol. 6, pp. 389-397, May 2005.DOI
33 
F. Theunissen, J. P. Miller, “Temporal encoding in nervous systems: A rigorous definition”, J comput Neurosci, Vol 2, pp. 149-162, November 1994.DOI
34 
C.F. Stevens, A. Zador, “Neural Coding: The enigma of the brain”, Current Biology, Vol 5, No 12, pp. 1370-1371, December 1995.DOI
35 
A. Dey, “Machine Learning Algorithms: A Review”, International Journal of Computer Science and Information Technology, Vol. 7(3), 2016.URL
36 
M. Varone, D. Mayer, A. Melegari et al, “What is Machine Learning? A definition”,URL
37 
S.B Hiregoudar, “A Survey: Research Summary on Neural Network ”, International journal of Research in Engineering and Technology, ISSN: 2319 1163, Vol. 03, S.I. 03, pages: 385-389, May 2014.URL
38 
J. Xin and M.J. Embrechts, “Supervised Learning with Spiking Neural Network”, International Joint Conference on Neural Networks, July 2001.URL
39 
L. Guo, Z. Wang, M. Adjouadi, “A Supervised Learning Rule for Classification of Spatiotemporal Spike Patterns” 38th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), August 2016.URL
40 
S. M. Bohte, J. N. Kok, H. La Poutre, “Unsupervised Classification of Complex Clusters in Networks of Spiking Neurons”, Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN), July 2000.URL
41 
H. Hazan, D. Saunders, D. T. Sanghavi, H. Siegelmann and R. Kozma, "Unsupervised Learning with Self-Organizing Spiking Neural Networks," 2018 International Joint Conference on Neural Networks (IJCNN), Rio de Janeiro, July 2018.URL
42 
D. J. Saunders, H. T. Siegelmann, R. Kozma, M. Ruszinko, “STDP Learning of Image Features with Spiking Neural Networks”, IEEE/INNS IJCNN, July 2018.URL
43 
Y. Dorogyy and V. Kolisnichenko, "Unsupervised Pre-Training with Spiking Neural Networks in Semi-Supervised Learning," 2018 IEEE First International Conference on System Analysis & Intelligent Computing (SAIC), Kiev, October 2018.URL
44 
R. A. Koene and M. E. Hasselmo, “An Integrate and Fire Model of Prefrontal Cortex Provides a Biological Implementation of Action Selection in Reinforcement Learning Theory that Reuses Known Representations”, IEEE International Joint Conference on Neural Networks, 2005.URL
45 
M. Spüler, S. Nagel and W. Rosenstiel, "A spiking neuronal model learning a motor control task by reinforcement learning and structural synaptic plasticity," 2015 International Joint Conference on Neural Networks (IJCNN), Killarney, July 2015.URL
46 
I. Garg, S. S. Chowdhury, K. Roy, “DCT-SNN: Using DCT to Distribute Spatial Information over Time for Learning Low-Latency Spiking Neural Networks”, arXiv preprint arXiv: 2010.01795, October 2020.DOI

Author

YunTae Park
../../Resources/ieie/IEIESPC.2023.12.1.64/au1.png

YunTae Park is a senior in the Computer Engineering Department at Kwangwoon University, Seoul, South Korea.

Unang Sunarya
../../Resources/ieie/IEIESPC.2023.12.1.64/au2.png

Unang Sunarya is a PhD student in the Computer Engineering Department at Kwangwoon University, South Korea. He received a diploma from Bandung State Polytechnic (POLBAN), and bachelor’s and master’s degrees from Telkom University, Indonesia. His research interests include machine learning, robotics, and signal processing.

Geunbo Yang
../../Resources/ieie/IEIESPC.2023.12.1.64/au3.png

Geunbo Yang received his B.S. from the Department of Computer Engineering at Kwangwoon University, Seoul, South Korea. He is currently pursuing an M.S. in computer engineering at the same university. His research interests are signal processing, machine learning algorithms, and computational neuroscience.

Choongseop Lee
../../Resources/ieie/IEIESPC.2023.12.1.64/au4.png

Choongseop Lee received his B.S. in computer science and engineering from Kwangwoon University in Seoul, South Korea. His research interests include machine learning and computational neuroscience.

Jaewoo Baek
../../Resources/ieie/IEIESPC.2023.12.1.64/au5.png

Jaewoo Baek received a B.S. in computer engineering from Kwangwoon University, Seoul, South Korea, where he is currently pursuing a master’s degree in computer engineering. His research interests include biological signal processing, machine learning, deep learning, and reinforcement learning.

Suwhan Baek
../../Resources/ieie/IEIESPC.2023.12.1.64/au6.png

Suwhan Baek is a junior in computer engineering at Kwangwoon University in Seoul, South Korea. His research interests include overall medical AI and auto ML. He is also attracted to reinforcement learning, generative models, and the SNN.

Cheolsoo Park
../../Resources/ieie/IEIESPC.2023.12.1.64/au7.png

Cheolsoo Park is an assistant professor in the Computer Engineering Department at Kwangwoon University, Seoul, South Korea. He received a B.Eng. in electrical engineering from Sogang University, Seoul, and an M.Sc. in biomedical engineering from Seoul National University, South Korea. In 2012, he received his Ph.D. in adaptive nonlinear signal processing from Imperial College London, London, U.K., and worked as a postdoctoral researcher in the bioengineering department at the University of California, San Diego, U.S.A. His research interests are mainly in the areas of machine learning and adaptive and statistical signal processing, with applications in brain-computer interfaces, computational neuroscience, and wearable technology.