# Classification of BMI control commands from rat's neural signals using extreme learning machine

Youngbum Lee^{1}, Hyunjoo Lee^{2}, Jinkwon Kim^{1}, Hyung-Cheul Shin^{2} and Myoungho Lee^{1} (email author)

*BioMedical Engineering OnLine* **8**:29

https://doi.org/10.1186/1475-925X-8-29

© Lee et al; licensee BioMed Central Ltd. 2009

**Received: **23 February 2009

**Accepted: **28 October 2009

**Published: **28 October 2009

## Abstract

A recently developed machine learning algorithm referred to as Extreme Learning Machine (ELM) was used to classify machine control commands out of time series of spike trains of ensembles of CA1 hippocampus neurons (n = 34) of a rat, which was performing a target-to-goal task on a two-dimensional space through a brain-machine interface system. Performance of ELM was analyzed in terms of training time and classification accuracy. The results showed that some processes such as class code prefix, redundancy code suffix and smoothing effect of the classifiers' outputs could improve the accuracy of classification of robot control commands for a brain-machine interface system.

## Keywords

## Introduction

A brain-machine interface (BMI) is a communication channel that transforms a subject's thought processes into command signals for controlling various devices, for example a computer application, a wheelchair, a robot arm, or a neural prosthesis. Many studies have addressed the real-time prediction of human voluntary movement intention, by invasive or noninvasive methods, with the aim of offering severely motor-disabled persons some motor control and communication abilities. Noninvasive methods record electroencephalographic (EEG) signals and extract intentional traits or movement-related EEG features, such as the P300 component of an event-related evoked potential [1], EEG mu rhythm conditioning [2–4], or visually evoked potentials [5]. Noninvasive methods have low spatial resolution, since they take readings from the entire brain rather than from a specific part of it [6]. An invasive method, on the other hand, delivers better signal quality at the expense of its invasiveness. Typical invasive approaches include electrocorticograms [7], single-neuron recordings [8], and multi-neuron population recordings [9]. Advanced research on invasive methods is being actively pursued with the aim of recovering complex and precise movements by decoding motor information in motor-related brain areas [10, 11]. Naturally, such research has raised the hopes of paralyzed people. With the advance of science and medical technology, life expectancy has increased, and with age comes a higher incidence of multiple chronic conditions, so the numbers of motor-disabled and solitary aged people are also increasing. However, the resources needed to care for the aged are not meeting the demand. A virtual-reality environment linked to a general-purpose BMI could be one way to compensate for this shortage of care resources in an aging society and to meet the need for assistive technology.

The BMI system processed *m* spike trains *s*_{ j }, *j* = 1, 2, ⋯, *m* in real-time, where *t*_{ p }^{ j } denotes the time of occurrence of the p'th spike emitted by the j'th neuron. Each spike train during a time interval (0, *T*] was transformed into time series data *X*_{ j } in the feature extraction unit, where *z* = *T*/Δ*t* is the number of bins and Δ*t* = *t*_{ i } - *t*_{i-1} is the bin size of the time series data. The neuronal response function *ρ*^{ j }(*t*_{ i }) was evaluated as sums over spikes from the j'th neuron for 0 ≤ *t* ≤ *i*Δ*t* [13]. The correlation coefficients *r*_{ jk } and the partial correlation coefficients *r*_{jk,l} of the time series data were then calculated using the equations given in reference [14]. The correlation coefficient *r*_{ jk } measures the correlation between the time series data *X*_{ j } and *X*_{ k }. The partial correlation coefficient *r*_{jk,l} measures the net correlation between the time series data *X*_{ j } and *X*_{ k } after excluding the common influence of (i.e., holding constant) the time series data *X*_{ l } [13]. The source selection unit classified the time series data *X*_{ j } into two groups, a correlated and an uncorrelated group, according to the values of the correlation coefficients. Each group was again subdivided into two subgroups based on the values of the partial correlation coefficients of its elements. Two spike trains *s*_{j1} and *s*_{j2} were then selected such that the corresponding time series data belonged to the uncorrelated group but not to the same subgroup. As a result, *s*_{j1} and *s*_{j2} were independent of each other and differed markedly in their correlations with the other spike trains. The coding unit encoded a series of motor functions into the spike trains *s*_{j1} and *s*_{j2} by a coding function and transformed, in real-time, the relative difference between the neuronal activities of the two spike trains into a command signal corresponding to one of the motor functions. The control unit received the command signal from the coding unit and executed it correspondingly to control a water disk or a robot of the BMI system.
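The grouping of time series by correlation and partial correlation can be illustrated with a toy computation. This is a sketch with synthetic data; the first-order partial-correlation formula is the standard one from [14], and the neuron counts here are invented:

```python
import numpy as np

def partial_corr(r_jk, r_jl, r_kl):
    """First-order partial correlation r_{jk,l}: correlation between
    series j and k with the influence of series l held constant."""
    return (r_jk - r_jl * r_kl) / np.sqrt((1 - r_jl**2) * (1 - r_kl**2))

# Toy binned spike counts: rows = neurons, columns = 200 ms bins.
rng = np.random.default_rng(0)
X = rng.poisson(3.0, size=(4, 400)).astype(float)
X[1] += X[0]          # make neuron 1 strongly correlated with neuron 0

R = np.corrcoef(X)    # pairwise correlation coefficients r_{jk}

# A critical value r_c at the 5% significance level separates
# correlated from uncorrelated pairs (the paper reports r_c = 0.098
# for sample size n = 400).
r_c = 0.098
uncorrelated_with_0 = [j for j in range(1, 4) if abs(R[0, j]) < r_c]

# Net correlation of neurons 0 and 1 holding neuron 2 constant.
p = partial_corr(R[0, 1], R[0, 2], R[1, 2])
```

Spike trains whose pairwise correlations fall below *r*_{c} go to the uncorrelated group; the partial correlations then split each group into subgroups.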

The aim of this study was to examine the efficient use of ensembles of many simultaneously recorded single units for generating specified directional commands in a BMI system that lets a rat manipulate an external target on a two-dimensional space according to its own volition. For this purpose, ELM was used to classify machine control commands out of the time series spike trains of 34 simultaneously recorded CA1 hippocampus neurons.

## Materials and methods

The practical usability and efficiency of the presented BMI system were tested in experiments on a 'water drinking' (WD) task using 11 rats. The subject had to control the degree and direction of rotation of a wheel with the neuronal activity of its SI cortex in order to access water in the WD task. The water was contained in one quarter of a circular dish positioned on top of the wheel. The experiments were carried out with approval from the Hallym University Animal Care and Use Committee. Adult male or female SPF Sprague-Dawley rats weighing 200-220 g were used. Two multi-wire recording electrode arrays (eight channels per array; tungsten microwire, A-M Systems, USA, 75 μm diameter, Teflon-coated) were implanted bilaterally into the SI vibrissae area of the right (RH) and left (LH) hemispheres of each rat. Lesions were made to the vibrissae motor cortices in both hemispheres. The infraorbital and facial nerves were bilaterally sectioned to prevent sensory input from, and motor output to, the whisker pads. Four weeks after the lesions, the rats were deprived of water for 24 h. Each rat was then placed in front of the wheel to perform the WD task for a trial of an experimental session. Three experimental sessions were carried out over six days for each rat, and the rat was deprived of water for 24 h before each session. A session comprised 40 s of preprocessing, five 300 s trials, and a 300 s rest period between trials. In the preprocessing, the spike trains from the SI cortex of the rat were assessed by the correlations among them, two spike trains *s*_{j1} and *s*_{j2} were selected, and a series of motor functions was encoded into them. The bin size Δ*t* used in the feature extraction unit was 200 ms. A critical value *r*_{c} of the correlation coefficient was estimated at the significance level of 0.05 to assign spike trains to the uncorrelated group, e.g. *r*_{c} = 0.098 for the sample size *n* = 400.
Seven motor functions were set up for the directions and degrees of rotation of the wheel, embodied by seven command signals *C*_{q} for *q* = -3, -2, ..., 3. The polarity and the absolute value of the subscript *q* of the command signal described the direction and the number of steps of the wheel rotation, respectively. If *q* was positive, the resulting rotation was clockwise (CW) on the rat's side, and if negative, counter-clockwise (CCW). A one-step (*C*_{±1}) rotation turned the wheel exactly 14.5°, a two-step (*C*_{±2}) rotation 21.5°, and a three-step (*C*_{±3}) rotation 28.5°. In the case of the zero-step (*C*_{0}) command the wheel did not turn. During the trials, the relative difference between the neuronal activities of the spike trains *s*_{j1} and *s*_{j2} was evaluated and categorised into one of the motor functions by the encoding function *f*(*s*_{j1}, *s*_{j2}), and one of the seven command signals *C*_{q} was generated every 200 ms. An Intel i80196 microprocessor in the control unit then received the command signal *C*_{q} and executed it correspondingly. The implanted electrodes were connected to a preamplifier whose outputs were sent to a data acquisition system (Plexon Inc., Dallas, TX, USA) for online multi-channel spike sorting and unit recording.
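The actual encoding function *f*(*s*_{j1}, *s*_{j2}) is not reproduced here; purely as an illustration of a thresholded relative-difference rule of this kind, a sketch could look like the following (the threshold values are invented, not the paper's):

```python
def command_from_rates(rate1, rate2, thresholds=(0.2, 0.5, 0.8)):
    """Map the relative difference between two neurons' firing rates in a
    200 ms window to one of the seven command signals C_q, q = -3..3.
    The thresholds are illustrative assumptions."""
    total = rate1 + rate2
    if total == 0:
        return 0                      # no activity -> zero-step command C_0
    d = (rate1 - rate2) / total       # relative difference in [-1, 1]
    q = 0
    for t in thresholds:              # count how many thresholds |d| clears
        if abs(d) >= t:
            q += 1
    return q if d >= 0 else -q        # sign of d sets rotation direction
```

Balanced firing yields *C*_{0}; a strong imbalance toward one neuron yields a full three-step command in the corresponding direction.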

It has been shown that a single-hidden-layer feedforward network (SLFN) with *N* hidden neurons and randomly chosen input weights can learn *N* distinct patterns with arbitrarily small error [6]. ELM is based on this result: its learning process uses randomly chosen input weights and biases of the hidden neurons [5]. Given *N* arbitrary distinct samples (**x**_{ j }, **t**_{ j }), where **x**_{ j } = [*x*_{j1}, *x*_{j2}, ⋯, *x*_{jn}]^{T} and **t**_{ j } = [*t*_{j1}, *t*_{j2}, ⋯, *t*_{jm}]^{T} are the j'th input and output vectors, an SLFN with *N*_{h} hidden neurons and activation function *g*(·) is modeled as eq. (1),

∑_{i=1}^{N_h} **β**_{ i } *g*(**w**_{ i } · **x**_{ j } + *b*_{ i }) = **t**_{ j },  *j* = 1, ⋯, *N*,  (1)

where *b*_{ i } is the bias of the i'th hidden neuron, **w**_{ i } = [*w*_{i1}, *w*_{i2}, ⋯, *w*_{in}]^{T} is the input weight vector connecting the i'th hidden neuron to the input layer, and **β**_{ i } = [*β*_{i1}, *β*_{i2}, ⋯, *β*_{im}] is the output weight vector connecting the i'th hidden neuron to the output layer. Eq. (1) can be represented as a matrix equation,

**Hβ** = **T**,  (2)

where **H** is the output matrix of the hidden layer. When the input weights **w**_{ i } and biases *b*_{ i } of the hidden neurons are fixed, **H** is determined by the input vectors **x**_{ j }; in that case the SLFN is a linear system. If **H** has an inverse, we can obtain **β** as **H**^{-1} · **T**. But in general the number of samples is greater than the number of hidden neurons, so **H** is a non-square matrix and **H**^{-1} may not exist. The optimal output weights are those that minimize the difference between **Hβ** and **T**:

**β̂** = arg min_{**β**} ||**Hβ** - **T**||.  (3)

Using the Moore-Penrose generalized inverse **H**^{†}, we obtain the minimum norm least-squares solution of (3),

**β̂** = **H**^{†} **T**,  (4)

which is optimal in the sense of having both the smallest training error and the smallest norm among the least-squares solutions [5].

The ELM learning algorithm for SLFNs thus proceeds as follows:

1. Choose random values for the input weights **w**_{ i } and biases *b*_{ i } of the hidden neurons.

2. Calculate the hidden layer output matrix **H**.

3. Obtain the optimal output weights as **β̂** = **H**^{†} **T**.

Because the ELM learning process randomly chooses the input weights and analytically determines the output weights of the SLFN, there is no iteration, which makes the learning time of ELM far smaller than that of a back-propagation neural network (BPNN).
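The three steps above fit in a few lines of NumPy. This is a minimal illustration of the ELM procedure, not the authors' implementation; the sigmoid activation and the XOR toy problem are assumptions:

```python
import numpy as np

def elm_train(X, T, n_hidden, seed=0):
    """Step 1: random input weights/biases; step 2: hidden output matrix H;
    step 3: output weights via the Moore-Penrose pseudo-inverse."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))            # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ T                      # beta_hat = H† T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Sanity check on the XOR problem with one-hot targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[1, 0], [0, 1], [0, 1], [1, 0]], float)
W, b, beta = elm_train(X, T, n_hidden=20)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
```

With more hidden neurons than samples, **H** has full row rank and the pseudo-inverse fits the targets essentially exactly, so `pred` recovers the XOR labels.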

The universal approximation capability of ELM is also critical for showing that ELM can, in theory, be applied in such applications. ELM has several variants, such as I-ELM [16], C-ELM [17], and EI-ELM [18]. I-ELM [16] is the incremental ELM. According to conventional neural network theories, single-hidden-layer feedforward networks (SLFNs) with additive or radial basis function (RBF) hidden nodes are universal approximators when all the parameters of the network are adjustable. However, as observed in most neural network implementations, tuning all the parameters may make learning complicated and inefficient, and it may be difficult to train networks with non-differentiable activation functions, such as threshold networks. Unlike conventional neural network theories, I-ELM proves by an incremental constructive method that, for SLFNs to work as universal approximators, one may simply choose the hidden nodes randomly and then adjust only the output weights linking the hidden layer to the output layer. C-ELM [17] is the fully complex ELM. C-ELM extends the ELM algorithm from the real domain to the complex domain and applies it to nonlinear channel equalization. Simulation results show that the C-ELM equalizer significantly outperforms other neural network equalizers, such as the complex minimal resource allocation network (CMRAN), complex radial basis function (CRBF) network, and complex back-propagation (CBP) equalizers: C-ELM achieves a much lower symbol error rate (SER) and learns faster. EI-ELM [18] is an enhanced method for I-ELM. The incremental extreme learning machine (I-ELM) proposed by Huang et al. [16] randomly generates hidden nodes and then analytically determines the output weights, and Huang et al. [16] proved in theory that, although the additive or RBF hidden nodes are generated randomly, the network constructed by I-ELM can work as a universal approximator. It was subsequently found that some of the hidden nodes in such networks may play only a very minor role in the network output and thus may needlessly increase the network complexity. To avoid this issue and obtain a more compact network architecture, EI-ELM was proposed [18]. At each learning step, several hidden nodes are generated randomly, and among them the hidden node leading to the largest decrease in residual error is added to the existing network; the output weight of the new node is calculated in the same simple way as in the original I-ELM. Generally speaking, EI-ELM works for the widespread type of piecewise continuous hidden nodes.
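The EI-ELM idea can be sketched as follows. This is a sketch under assumptions (scalar output, sigmoid nodes, invented candidate counts and test function), not the algorithm as specified in [18]:

```python
import numpy as np

def ei_elm(X, t, n_nodes=30, k=10, seed=0):
    """EI-ELM sketch for a scalar target: at each step try k random sigmoid
    nodes, keep the one that shrinks the residual most, and set its output
    weight analytically as in I-ELM: beta = <e, h> / <h, h>."""
    rng = np.random.default_rng(seed)
    e = t.astype(float).copy()                # current residual error
    nodes, betas = [], []
    for _ in range(n_nodes):
        best = None
        for _ in range(k):                    # k random candidate nodes
            w = rng.standard_normal(X.shape[1])
            b = rng.standard_normal()
            h = 1.0 / (1.0 + np.exp(-(X @ w + b)))
            beta = (e @ h) / (h @ h)          # least-squares weight for node
            new_e = e - beta * h
            if best is None or new_e @ new_e < best[0]:
                best = (new_e @ new_e, w, b, beta, new_e)
        _, w, b, beta, e = best               # greedily keep the best node
        nodes.append((w, b))
        betas.append(beta)
    return nodes, np.array(betas), e

# Fit a smooth 1-D target; the residual shrinks monotonically by construction.
X = np.linspace(-1, 1, 50)[:, None]
t = np.sin(np.pi * X[:, 0])
nodes, betas, resid = ei_elm(X, t)
```

Because each added node's weight minimizes the residual along that node's output, the residual norm never increases, and picking the best of *k* candidates speeds the decrease compared with plain I-ELM.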

## Results

Class code allocation for 5 Events.

Event1 | Event2 | Event3 | Event4 | Event5 | Class |
---|---|---|---|---|---|
0 | 0 | 0 | 0 | 0 | 0 |
1 | 0 | 0 | 0 | 0 | 1 |
0 | 1 | 0 | 0 | 0 | 2 |
0 | 0 | 1 | 0 | 0 | 3 |
0 | 0 | 0 | 1 | 0 | 4 |
0 | 0 | 0 | 0 | 1 | 5 |

It may seem strange to add a class code prefix and a redundancy code suffix to the raw data when constructing input vectors for the ELM algorithm or for other learning algorithms. However, the class code prefix and the redundancy code suffix are feature components for effective pattern classification, not the target label or target vector that the algorithm is supposed to learn or predict. The ELM algorithm extracts the feature vector from the input vectors and, in the testing phase, evaluates the classification performance for the output vectors using this feature vector.
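A minimal sketch of this input-vector augmentation follows; the one-hot prefix width and the repeated-bit suffix are assumptions for illustration, since the exact widths and bit definitions used in the study are not given here:

```python
import numpy as np

def augment(features, class_id, n_classes=6, n_redundancy=3):
    """Prefix a one-hot class code and suffix repeated redundancy bits to a
    raw feature vector (widths here are illustrative assumptions)."""
    prefix = np.zeros(n_classes)
    prefix[class_id] = 1.0                               # class code prefix
    event_bit = float(class_id > 0)                      # 1 if any event fired
    suffix = np.full(n_redundancy, event_bit)            # redundancy suffix
    return np.concatenate([prefix, features, suffix])

raw = np.array([0.4, 0.1, 0.9])       # e.g. binned firing-rate features
vec = augment(raw, class_id=2)
```

The augmented vector, not the raw features alone, is what the classifier sees during training, which is how the prefix and suffix act as features rather than targets.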

## Discussion

In this study, a recently developed machine learning algorithm [19] referred to as Extreme Learning Machine (ELM) was used to classify machine control commands, such as directions (forward, backward, right, left) and steps, out of the time series spike trains of 34 simultaneously recorded CA1 hippocampus neurons. The performance of ELM was analyzed in terms of training time and classification accuracy. The study showed that processes such as a class code prefix, a redundancy code suffix, and smoothing of the classifiers' outputs can clearly improve the classification accuracy of the commands used for the BMI system [20].

In this study, at first, the validation accuracy of the ELM classifier was just below 30%. This was quite natural, since the commands of our BMI were encoded every 200 ms by two neurons, one for direction and the other for distance; the remaining 32 neurons were not directly used for BMI machine control. The 30% validation accuracy may suggest that about one third of the simultaneously recorded CA1 neurons in the vicinity of the two command-encoding neurons were synchronously active in every 200 ms window [21].

Our results showed that adding a class code column as a prefix to the raw data doubled the training accuracy, up to 50%, but reduced the testing accuracy as the number of hidden neurons increased. This class code insertion appeared to increase the tendency of the other 32 neurons to behave synchronously with the two neurons directly responsible for command generation. However, their heterogeneous characteristics, shaped by continuous interactions with other modulatory inputs [22] (i.e., the hidden neurons of the ELM) in the CA1 circuits, might act against an increase in the testing accuracy for command generation.

The results of the current study demonstrated that adding redundancy event bits on top of the class code prefix dramatically increased the classification accuracy, especially as the number of hidden neurons increased. This feature of ELM could be used as a new BMI command generation algorithm to either supplement or replace the current threshold algorithm, in which neural firing rates during every 200 ms window were classified manually into one of four activity ranges. This may increase the efficiency of the BMI system and may reduce the time the rat needs to use the system for its own volition [23].

However, much remains to be done in future studies. First, we need to obtain testing accuracy for each event, such as directions (forward, backward, right, left) and steps. Second, it is necessary to build a comparison table for each event showing the correlation between actual and estimated activities. Third, additional performance evaluation parameters, such as sensitivity and specificity, should be calculated. Lastly, it is necessary to compare the results of the ELM method with other classifiers, such as BPNN [24], support vector machines [25], and evolutionary ELM [26].

## Declarations

### Acknowledgements

This study was supported by the Yonsei University Institute of TMS Information Technology, a Brain Korea 21 Program, Korea, and by grants to HCSHIN (MEST-Frontier research-2009K001280 & MKE-Industrial Source Technology Development Program-10033634-2009-11).

## Authors’ Affiliations

## References

1. Farwell LA, Donchin E: Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials. *Electroencephalogr Clin Neurophysiol* 1988, **70:** 510–523. doi:10.1016/0013-4694(88)90149-6
2. McFarland DJ, Neat GW, Read RF, Wolpaw JR: An EEG-based method for graded cursor control. *Psychobiology* 1993, **21:** 77–81.
3. Pfurtscheller G, Flotzinger D, Kalcher J: Brain-Computer Interface - a new communication device for handicapped persons. *J Microcomputer Appl* 1993, **16:** 293–299. doi:10.1006/jmca.1993.1030
4. Wolpaw JR, McFarland DJ, Neat GW, Forneris CA: An EEG-based brain-computer interface for cursor control. *Electroencephalogr Clin Neurophysiol* 1991, **78:** 252–259. doi:10.1016/0013-4694(91)90040-B
5. Sutter EE: The brain response interface: communication through visually induced electrical brain response. *J Microcomputer Appl* 1992, **15:** 31–45. doi:10.1016/0745-7138(92)90045-7
6. Nunez PL: Toward a quantitative description of large-scale neocortical dynamic function and EEG. *Behav Brain Sci* 2000, **23:** 371–398. doi:10.1017/S0140525X00003253
7. Huggins JE, Levine SP, Fessler JA, Sowers WM, Pfurtscheller G, Graimann B, Schloegl A, Minecan DN, Kushwaha RK, BeMent SL, Sagher O, Schuh LA: Electrocorticogram as the basis for a direct brain interface: opportunities for improved detection accuracy. *Proceedings of the 1st International IEEE EMBS Conference on Neural Engineering* 2003, 20–22.
8. Serruya MD, Hatsopoulos NG, Paninski L, Fellows MR, Donoghue JP: Instant neural control of a movement signal. *Nature* 2002, **416:** 141–142. doi:10.1038/416141a
9. Chapin JK: Using multi-neuron population recordings for neural prosthetics. *Nature Neurosci* 2004, **7:** 452–455. doi:10.1038/nn1234
10. Chapin JK, Moxon KA, Markowitz RS, Nicolelis MAL: Real-time control of a robot arm using simultaneously recorded neurons in the motor cortex. *Nature Neurosci* 1999, **2:** 664–670. doi:10.1038/10223
11. Wu W, Black MJ, Mumford D, Gao Y, Bienenstock E, Donoghue JP: Modeling and decoding motor cortical activity using a switching Kalman filter. *IEEE Transactions on Biomedical Engineering* 2004, **51:** 933–942. doi:10.1109/TBME.2004.826666
12. Lee U, Lee HJ, Kim S, Shin HC: Development of intracranial brain-computer interface system using non-motor brain area for series of motor functions. *Electronics Letters* 2006, **42:** 198–200. doi:10.1049/el:20063595
13. Dayan P, Abbott LF: *Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems.* MIT Press; 2001.
14. Izzett R: SPSS Windows instructions for PSYCH 280 & PSYCH 290. 2nd edition. [http://www.oswego.edu/~psychol/spss/partial.pdf]
15. Huang GB, Zhu QY, Siew CK: Extreme learning machine: theory and applications. *Neurocomputing* 2006, **70:** 489–501. doi:10.1016/j.neucom.2005.12.126
16. Huang GB, Chen L, Siew CK: Universal approximation using incremental constructive feedforward networks with random hidden nodes. *IEEE Transactions on Neural Networks* 2006, **17:** 879–892. doi:10.1109/TNN.2006.875977
17. Li MB, Huang GB, Saratchandran P, Sundararajan N: Fully complex extreme learning machine. *Neurocomputing* 2005, **68:** 306–314. doi:10.1016/j.neucom.2005.03.002
18. Huang GB, Chen L: Enhanced random search based incremental extreme learning machine. *Neurocomputing* 2008, **71:** 3460–3468. doi:10.1016/j.neucom.2007.10.008
19. Huang GB, Zhu QY, Siew CK: Real-time learning capability of neural networks. *IEEE Transactions on Neural Networks* 2006, **17:** 863–878. doi:10.1109/TNN.2006.875974
20. Kim J, Shin H, Lee Y, Lee M: Algorithm for classifying arrhythmia using extreme learning machine and principal component analysis. *Conf Proc IEEE Eng Med Biol Soc* 2007, 3257–3260.
21. Isomura Y, Sirota A, Ozen S, Montgomery S, Mizuseki K, Henze DA, Buzsáki G: Integration and segregation of activity in entorhinal-hippocampal subregions by neocortical slow oscillations. *Neuron* 2006, **52:** 871–882. doi:10.1016/j.neuron.2006.10.023
22. Klausberger T, Somogyi P: Neuronal diversity and temporal dynamics: the unity of hippocampal circuit operations. *Science* 2008, **321:** 53–57. doi:10.1126/science.1149381
23. Liang NY, Saratchandran P, Huang GB, Sundararajan N: Classification of mental tasks from EEG signals using extreme learning machines. *International Journal of Neural Systems* 2006, **16:** 29–38. doi:10.1142/S0129065706000482
24. Chen Y, Akutagawa M, Katayama M, Zhang Q, Kinouchi Y: Additive and multiplicative noise reduction by back propagation neural network. *Conf Proc IEEE Eng Med Biol Soc* 2007, 3184–3187.
25. Qin J, Li Y, Sun W: A semisupervised support vector machines algorithm for BCI systems. *Comput Intell Neurosci* 2007, 94397.
26. Huynh HT, Won Y, Kim JJ: An improvement of extreme learning machine for compact single-hidden-layer feedforward neural networks. *Int J Neural Syst* 2008, **18:** 433–441. doi:10.1142/S0129065708001695

## Copyright

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.