Wednesday 1 July 2015

Mind is motion


Whilst you are alive, your brain is continuously active. Every second of every day, your neurons are firing and signals are being passed along perhaps the most complex communication network in the universe. Your brain is continuously processing, even in the absence of any obvious stimulus (e.g. when in a coma or asleep). This continuous activity or 'motion' appears to be fundamental to all our conscious cognitive activities (e.g. perception, reasoning, motor control, speech, etc.). However, this poses a couple of problems for would-be brain modellers:

How does this continuously active network distinguish between the ongoing 'background' activity and the activity caused by a new stimulus arriving from the senses? What's more, how does the network learn from new stimuli with all this 'noise' going on in the background? (A bit like trying to listen to the teacher in a very noisy classroom.)

Back in 2007, I invented the term "Transient Computation" (i.e. computation based on transients or motions) to describe an approach to modelling this aspect of the brain and published it in a paper (Crook, 2007). I have since learned that this paper is generally thought to be somewhat impenetrable, so in this post I will try to demystify what I proposed back then.

Prior to my publication in 2007, "Liquid State Machines" (Maass et al., 2002) had become popular and had been shown to be successful at a range of processing tasks. The basic concept of the LSM is that you have a large "pool" of randomly connected neurons that are active (i.e. intermittently firing - see Fig. 1). Input signals are projected onto the pool and perturb the activity of the neurons. Different inputs will result in different perturbations. Simple linear neurons can be trained to recognise these differences and identify the class of input that caused them. The analogy often used to describe the LSM is of how the surface of a pool of water (neural activity) is perturbed by objects ("inputs") that are dropped into it. Different sizes and shapes of object would cause different perturbations or ripples on the pool surface. Hence the ripples enable you to classify the objects that caused them.
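To make that idea a little more concrete, here is a minimal sketch of the LSM concept in Python. It is my own illustration rather than the implementation of Maass et al.: it uses simple leaky rate units in place of spiking neurons, and the pool size, leak rate, input classes and ridge-regression readout are all arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 100      # neurons in the pool (illustrative size)
T = 50       # time steps per input sequence
leak = 0.3   # leak rate of each unit

# Random, sparse recurrent weights play the role of the "liquid".
W = rng.normal(0, 1, (N, N)) * (rng.random((N, N)) < 0.1)
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # keep the pool's activity bounded
W_in = rng.normal(0, 1, N)                       # input projection weights

def run_pool(u):
    """Drive the pool with input sequence u; return its final (perturbed) state."""
    x = np.zeros(N)
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in * u_t)
    return x

def make_input(cls):
    """Two classes of input 'objects': a slow or a fast noisy sine wave."""
    freq = 0.05 if cls == 0 else 0.15
    return np.sin(2 * np.pi * freq * np.arange(T)) + 0.05 * rng.normal(size=T)

labels = np.arange(200) % 2
states = np.array([run_pool(make_input(c)) for c in labels])

# A simple linear readout trained by ridge regression on the perturbed states.
W_out = np.linalg.solve(states.T @ states + 1e-3 * np.eye(N),
                        states.T @ (2 * labels - 1))
preds = (states @ W_out > 0).astype(int)
print("training accuracy:", (preds == labels).mean())
```

The only part that is ever trained is the linear readout at the end; the "pool" itself stays fixed and simply transforms each input into a distinctive pattern of ripples.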

The term "Liquid State Machine" became associated with large randomly connected pools of neurons. I introduced the term "Transient Computation" because I believe it describes a more general class of computation that doesn't necessarily depend on large pools of neurons. In my 2007 paper, I presented a model consisting of just two neurons that possessed all the essential properties of an LSM.

These neurons were slightly more complex than those typically used in LSMs in that they had weakly chaotic internal states based on the well-known Rössler equations (Rössler, 1976). See Fig. 2.
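For reference, the Rössler system is the three-variable flow

\[
\dot{x} = -y - z, \qquad \dot{y} = x + a y, \qquad \dot{z} = b + z(x - c),
\]

which is chaotic for the commonly used parameter values a = b = 0.2, c = 5.7 (the exact values used in the 2007 paper may differ).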

The first video below shows the evolution of two of the variables that represent the internal state of the neuron. When the state reaches a firing threshold (in the y direction), the neuron outputs a spike and the state moves to a reset value (in the y direction), mimicking the spike and reset of a biological neuron's membrane potential.
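As a rough illustration of that mechanism (my own sketch, not the code from the paper), the following Python snippet integrates the Rössler equations with a simple Euler step and applies a threshold-and-reset rule to the y variable. The threshold and reset values here are arbitrary choices for the example.

```python
import numpy as np

def rossler_spiking_neuron(steps=20000, dt=0.01, a=0.2, b=0.2, c=5.7,
                           threshold=4.0, reset=-4.0):
    """Integrate the Rössler system; emit a spike and reset y at the threshold.

    Parameters, threshold and reset are illustrative, not those of Crook (2007).
    """
    x, y, z = 0.1, 0.0, 0.0
    trace, spike_times = [], []
    for i in range(steps):
        dx = -y - z
        dy = x + a * y
        dz = b + z * (x - c)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if y >= threshold:               # firing threshold in the y direction
            spike_times.append(i * dt)   # record a spike
            y = reset                    # reset, as a membrane potential would
        trace.append((x, y, z))
    return np.array(trace), spike_times

trace, spike_times = rossler_spiking_neuron()
print(f"{len(spike_times)} spikes over {20000 * 0.01:.0f} time units")
```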







This next video shows the corresponding time series and spike output of the neuron:




Fig. 3 illustrates the structure of the network. To put it simply, one of the neurons (NP) acts as a pacemaker, which regulates the activity of the other neuron (NT) that acts as the "pool" or transient neuron. The pacemaker neuron NP has time-delayed feedback, which enables it to stabilise onto an Unstable Periodic Orbit (see here for a brief explanation of UPOs). The connection from the pacemaker NP to NT also stabilises NT onto the same UPO. External input to NT pushes it away from the UPO in a direction that is proportional to the input. Simple readout neurons can then be trained to classify the external input based on the transient that NT follows as it returns to the stabilised UPO.
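A rough sketch of this arrangement is given below. It uses time-delayed feedback of the Pyragas form K·(y(t−τ) − y(t)) to nudge the pacemaker onto a periodic orbit, couples the transient neuron to the pacemaker through a term of the same form, and then perturbs the transient neuron with a brief external input. The coupling scheme, gains and delay are my own illustrative choices and are not taken from the 2007 paper.

```python
import numpy as np

dt, a, b, c = 0.01, 0.2, 0.2, 5.7
tau_steps = 600            # feedback delay tau = 6.0 time units (illustrative)
K_fb, K_couple = 0.2, 0.2  # feedback and coupling gains (illustrative)

def rossler_step(state, drive):
    """One Euler step of the Rössler system with an additive drive on the y variable."""
    x, y, z = state
    dx = -y - z
    dy = x + a * y + drive
    dz = b + z * (x - c)
    return np.array([x + dt * dx, y + dt * dy, z + dt * dz])

pacemaker = np.array([0.1, 0.0, 0.0])   # NP
transient = np.array([0.0, 0.1, 0.0])   # NT
history = []                             # past y values of NP for the delayed feedback

steps = 40000
external = np.zeros(steps)
external[20000:20200] = 1.0              # a brief external input delivered to NT

nt_trace = []
for t in range(steps):
    history.append(pacemaker[1])
    y_delayed = history[-tau_steps] if len(history) > tau_steps else pacemaker[1]

    # NP: delayed self-feedback (Pyragas-style) nudges it onto a periodic orbit.
    pacemaker = rossler_step(pacemaker, K_fb * (y_delayed - pacemaker[1]))

    # NT: coupled towards NP's orbit, plus the external input that creates the transient.
    transient = rossler_step(transient,
                             K_couple * (pacemaker[1] - transient[1]) + external[t])
    nt_trace.append(transient.copy())

nt_trace = np.array(nt_trace)
# Readout neurons would be trained on samples of nt_trace after step 20000,
# i.e. on the transient NT follows as it returns towards the stabilised orbit.
```

In the actual model the readout neurons are trained on the transient itself; here that final step is only indicated by the comment at the end of the sketch.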


So, I hear you ask, what is the point of having a chaotic neuron that is controlled into a periodic orbit? The point is this: the chaotic attractor of the NT neuron provides a rich set of dynamics which ensures that the transients in its internal state caused by external input are sensitive to the character or class of that input, thereby enabling the readout neurons to easily differentiate between different classes of external input.

The results presented in (Crook, 2007) show that this model possesses the key properties of separation, approximation and fading memory required for transient computation. They also show that its performance is comparable to that of the LSM, but with significantly fewer neurons.

References



Crook, N.T. and Goh, W.J. (2008) Nonlinear Transient Computation as a potential “kernel trick” in cortical processing. BioSystems 94(1), 55-59. doi:10.1016/j.biosystems.2008.05.010

Crook, N. (2007) Nonlinear transient computation. Neurocomputing 70(7-9), 1167-1176. doi:10.1016/j.neucom.2006.10.148

Crook, N.T. and Goh, W.J. (2007) Human motion recognition using nonlinear transient computation. In: M. Verleysen (Ed.), Proceedings of the 15th European Symposium on Artificial Neural Networks (ESANN’2007), d-side, Bruges, Belgium, April 2007.

Goh, W.J. and Crook, N.T. (2007) Pattern recognition using chaotic transients. In: M. Verleysen (Ed.), Proceedings of the 15th European Symposium on Artificial Neural Networks (ESANN’2007), d-side, Bruges, Belgium, April 2007.

Maass, W., Natschläger, T. and Markram, H. (2002) Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Computation 14(11), 2531-2560.

Rössler, O.E. (1976) An equation for continuous chaos. Physics Letters A 57(5), 397-398.
