Wednesday 29 July 2015

Mind is motion (Part 2)


In my last post ('Mind is motion'), I introduced the concept of Nonlinear Transient Computation, which exploits the dynamics of chaos for pattern classification. In this post, I want to develop this a little further and unpack the underlying concepts of the approach (more details are given in Crook & Goh, 2008).

Fig. 1 illustrates the transient computation concept.
The blue lines inside the box represent the continuously changing state of the dynamical system (e.g. a controlled chaotic attractor). The dotted line illustrates the path that the system would have followed had there been no inputs. The solid line illustrates the changes in the evolution of the dynamical system following three inputs presented consecutively at times t1, t2 and t3. On the right is a bank of observers that are trained to recognise the characteristic changes in the dynamical system that are produced by certain classes of input patterns.

But how are the inputs presented to the dynamical system? Essentially, the inputs are scaled and added on to one or more of the state variables of the dynamical system (Fig. 2 illustrates this with a two dimensional input vector I1). This gives a 'kick' to the system, usually away from the attractor in a direction and with a magnitude that is proportional to the input vector. The readout devices ('observers') then observe the path that the dynamical system follows back to the attractor (which, due to the properties of chaos, will depend on the direction and magnitude of the 'kick' - i.e. of the input), as illustrated by the animation shown to the right.
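
To make this concrete, here is a minimal sketch in Python (this is not the code from the paper; the Rössler-like dynamics, the choice of which variables receive the input, and the gain of 0.5 are all illustrative):

import numpy as np

def rossler_deriv(s, a=0.2, b=0.2, c=5.7):
    # Rossler (1976): dx/dt = -y - z, dy/dt = x + a*y, dz/dt = b + z*(x - c)
    return np.array([-s[1] - s[2], s[0] + a * s[1], b + s[2] * (s[0] - c)])

def present_input(state, input_vec, gain=0.5):
    # The 'kick': scale the 2-D input and add it to the x and y state
    # variables, pushing the system off its attractor in a direction
    # and with a magnitude proportional to the input.
    kicked = state.copy()
    kicked[:2] += gain * np.asarray(input_vec)
    return kicked

state = np.array([1.0, 1.0, 0.0])
state = present_input(state, [0.8, -0.3])   # a two-dimensional input like I1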

The readout devices (e.g. simple linear perceptrons) take a series of time delayed samples of the values of one or more of the state variables of the dynamical system as illustrated in Fig. 3.
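
In code, the sampling might look like this (again a sketch: the sampled variable, the number of samples and the delays are illustrative, and the perceptron's weights would come from training):

import numpy as np

def rossler_deriv(s, a=0.2, b=0.2, c=5.7):
    return np.array([-s[1] - s[2], s[0] + a * s[1], b + s[2] * (s[0] - c)])

def collect_transient(state, n_samples=20, stride=10, dt=0.01):
    # Take time-delayed samples of the x variable as the system
    # relaxes back towards the attractor after a kick.
    samples = []
    for _ in range(n_samples):
        for _ in range(stride):
            state = state + dt * rossler_deriv(state)   # Euler step
        samples.append(state[0])
    return np.array(samples)

def perceptron_readout(samples, weights, bias=0.0):
    # A simple linear readout: class 1 if the weighted sum is positive.
    return int(np.dot(weights, samples) + bias > 0)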

A key issue for pattern classification is how to differentiate between patterns that are similar, and thus likely to belong to the same class, and patterns that are different and should belong to different classes. Patterns that are similar will tend to be close to each other in input space. Fig. 4 below gives an example of patterns that have two features (p1 and p2). When you plot the positions of two similar patterns (red dot and blue dot), they appear close together in the input space (Fig. 4, left). In our case, these input patterns would be scaled and added to the variable(s) of the dynamical system, essentially giving it two similar 'kicks'.


For pattern classification to succeed in our Transient Computation model, similar inputs must produce similar transients for the readout neurons to detect (and conversely, different inputs must produce different transients). As can be seen in the animation below, the two similar input patterns produce similar trajectories in phase space, at least to begin with. Later, because of the sensitivity of chaos to initial conditions, the trajectories begin to diverge.
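
You can also watch this divergence numerically: integrate the same system from two nearby post-kick states and print the distance between the trajectories (a rough sketch; the step size and kick values are illustrative):

import numpy as np

def rossler_deriv(s, a=0.2, b=0.2, c=5.7):
    return np.array([-s[1] - s[2], s[0] + a * s[1], b + s[2] * (s[0] - c)])

dt = 0.01
a_state = np.array([1.0, 1.0, 0.0])               # first 'kick'
b_state = a_state + np.array([0.01, 0.01, 0.0])   # a similar 'kick'

for i in range(3001):
    if i % 1000 == 0:
        # The distance stays small at first, then grows as the
        # trajectories diverge (sensitivity to initial conditions).
        print("t = %.0f, distance = %.4f" % (i * dt, np.linalg.norm(a_state - b_state)))
    a_state = a_state + dt * rossler_deriv(a_state)
    b_state = b_state + dt * rossler_deriv(b_state)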


There is much more to be said about this approach, some of which I will cover in future blog posts. The most surprising feature of Transient Computation, however, is that it is capable of correctly classifying linearly inseparable patterns of inputs (e.g. XOR-type arrangements of inputs). More on this, and the story behind it, in my next blog.

References


Crook, N.T. and Goh, W.J. (2008) Nonlinear Transient Computation as a potential “kernel trick” in cortical processing. BioSystems 94(1), 55-59. doi:10.1016/j.biosystems.2008.05.010

Crook, N.T. (2007) Nonlinear transient computation. Neurocomputing 70(7-9), 1167-1176. doi:10.1016/j.neucom.2006.10.148

Crook, N.T. and Goh, W.J. (2007) Human motion recognition using nonlinear transient computation. In: M. Verleysen (ed.), Proceedings of the 15th European Symposium on Artificial Neural Networks (ESANN'2007), Bruges, Belgium: d-side.



Wednesday 1 July 2015

Mind is motion


Whilst you are alive, your brain is continuously active. Every second of every day, your neurons are firing and signals are being passed along perhaps the most complex communication network in the universe. Your brain is continuously processing, even in the absence of any obvious stimulus (e.g. when in a coma or asleep). This continuous activity or 'motion' appears to be fundamental to all our conscious cognitive activities (e.g. perception, reasoning, motor control, speech, etc.). However, this poses a couple of problems for would-be brain modellers:

How does this continuously active network distinguish between this ongoing 'background' activity and the activity caused by a new stimulus arriving from the senses? What's more, how does the network learn from new stimuli with all this 'noise' going on in the background? (A bit like trying to listen to the teacher in a very noisy classroom.)

Back in 2007, I invented the term "Transient Computation" (i.e. computation based on transients or motions) to describe an approach to modelling this aspect of the brain and published it in a paper (Crook, 2007). I have since learned that this paper is generally thought to be somewhat impenetrable, so in this post I will try to demystify what I proposed back then.

Prior to my publication in 2007, "Liquid State Machines" (Maass et al., 2002) had become popular and had been shown to be successful at a range of processing tasks. The basic concept of the LSM is that you have a large "pool" of randomly connected neurons that are active (i.e. intermittently firing - see Fig. 1). Input signals are projected on to the pool and perturb the activity of the neurons. Different inputs will result in different perturbations. Simple linear neurons can be trained to recognise these differences and identify the class of input that caused them. The analogy often used to describe the LSM is of how the surface of a pool of water (the neural activity) is perturbed by objects (the inputs) that are dropped into it. Different sizes and shapes of object cause different perturbations, or ripples, on the pool's surface. Hence the ripples enable you to classify the objects that caused them.
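
A caricature of the idea in a few lines of Python (this is a rate-based sketch in the spirit of the LSM, not Maass's spiking implementation; the pool size and weight scales are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
N = 100                                        # size of the 'pool'
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # random recurrent connections
W_in = rng.normal(0.0, 1.0, (N, 2))            # input projection

def pool_step(x, u):
    # Each input u perturbs the ongoing pool activity; different inputs
    # leave different 'ripples' for a linear readout to classify.
    return np.tanh(W @ x + W_in @ u)

x = np.zeros(N)
for u in ([1.0, 0.0], [1.0, 0.0], [0.0, 1.0]):
    x = pool_step(x, np.asarray(u))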

The term "Liquid State Machine" became associated with large randomly connected pools of neurons. I introduced the term "Transient Computation" because I believe it describes a more general class of computation that doesn't necessarily depend on large pools of neurons. In my 2007 paper, I presented a model consisting of just two neurons that possessed all the essential properties of an LSM.

These neurons were slightly more complex than the neurons typically used in LSMs in that they had weakly chaotic internal states based on the well-known Rössler equations (Rössler, 1976). See Fig. 2.
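
For reference, the Rössler system, with its standard parameter values a = b = 0.2 and c = 5.7, is:

dx/dt = -y - z
dy/dt = x + ay
dz/dt = b + z(x - c)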

The first video below shows the evolution of two of the variables that represent the internal state of the neuron. When the state reaches a firing threshold (in the y direction), the neuron outputs a spike and the state jumps to a reset value (in the y direction), mimicking the firing and reset of a biological neuron's membrane potential.
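
A sketch of that firing mechanism (the threshold and reset values here are illustrative, not the ones used in the paper):

import numpy as np

def rossler_deriv(s, a=0.2, b=0.2, c=5.7):
    return np.array([-s[1] - s[2], s[0] + a * s[1], b + s[2] * (s[0] - c)])

dt = 0.01
THRESHOLD, RESET = 4.0, -4.0     # illustrative firing threshold and reset (y)
s = np.array([1.0, 1.0, 0.0])
spike_times = []

for i in range(20000):
    s = s + dt * rossler_deriv(s)
    if s[1] >= THRESHOLD:            # internal state crosses the threshold...
        spike_times.append(i * dt)   # ...the neuron emits a spike...
        s[1] = RESET                 # ...and y jumps to the reset value

print(len(spike_times), "spikes in", 20000 * dt, "time units")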

This next video shows the corresponding time series and spike output of the neuron:

Fig. 3 illustrates the structure of the network. To put it simply, one of the neurons (NP) acts as a pacemaker which regulates the activity of the other neuron (NT), which acts as the "pool" or transient neuron. The pacemaker neuron NP has time-delayed feedback which enables it to stabilise onto an Unstable Periodic Orbit (see here for a brief explanation of UPOs). The connection from the pacemaker NP to NT also stabilises NT onto the same UPO. External input to NT pushes it away from the UPO in a direction that is proportional to the input. Simple readout neurons can then be trained to classify the external input based on the transient that the NT neuron follows on its way back to the stabilised UPO.
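
This kind of time-delayed feedback is essentially Pyragas-style chaos control: a term K(y(t - tau) - y(t)) is added to one of the equations, and it vanishes once the trajectory settles onto a periodic orbit of period tau. A sketch (the gain K = 0.2 and delay tau = 5.88, roughly one orbit period, are typical values from the chaos-control literature, not necessarily the ones in my paper):

import numpy as np
from collections import deque

def rossler_deriv(s, a=0.2, b=0.2, c=5.7):
    return np.array([-s[1] - s[2], s[0] + a * s[1], b + s[2] * (s[0] - c)])

dt = 0.01
K, tau = 0.2, 5.88                   # feedback gain and delay (illustrative)
delay_steps = int(tau / dt)
history = deque(maxlen=delay_steps)  # stores y(t - tau) ... y(t - dt)
s = np.array([1.0, 1.0, 0.0])

for _ in range(100000):
    # F(t) = K * (y(t - tau) - y(t)) fades to zero once the
    # trajectory settles onto a periodic orbit of period tau.
    control = K * (history[0] - s[1]) if len(history) == delay_steps else 0.0
    history.append(s[1])
    ds = rossler_deriv(s)
    ds[1] += control                 # feedback applied to the y equation
    s = s + dt * ds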


So, I hear you ask, what is the point of having a chaotic neuron that is controlled into a periodic orbit? The point is this: the chaotic attractor of the NT neuron provides a rich set of dynamics which ensures that the transients in its internal state caused by external input are sensitive to the character, or class, of that input, thereby enabling the readout neurons to differentiate easily between different classes of external input.

The results presented in (Crook, 2007) show that this model possesses the key properties of separation, approximation and fading memory required for transient computation. They also show that its performance is comparable to that of the LSM, but with significantly fewer neurons.
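
The separation property, in particular, is easy to probe in simulation: inputs from different classes should leave the system in measurably different states, while similar inputs should (at least initially) leave it in similar states. A rough illustration, not the measure used in the paper:

import numpy as np

def rossler_deriv(s, a=0.2, b=0.2, c=5.7):
    return np.array([-s[1] - s[2], s[0] + a * s[1], b + s[2] * (s[0] - c)])

def state_after_kick(kick, steps=500, dt=0.01):
    s = np.array([1.0, 1.0, 0.0])
    s[:2] += np.asarray(kick)        # the input 'kick'
    for _ in range(steps):
        s = s + dt * rossler_deriv(s)
    return s

# Separation: different inputs should end up far apart...
print(np.linalg.norm(state_after_kick([1.0, 0.0]) - state_after_kick([0.0, 1.0])))
# ...while similar inputs stay (initially) close together.
print(np.linalg.norm(state_after_kick([1.0, 0.0]) - state_after_kick([0.99, 0.01])))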

References



Crook, N.T. and Goh, W.J. (2008) Nonlinear Transient Computation as a potential “kernel trick” in cortical processing. BioSystems 94(1), 55-59. doi:10.1016/j.biosystems.2008.05.010

Crook, N.T. (2007) Nonlinear transient computation. Neurocomputing 70(7-9), 1167-1176. doi:10.1016/j.neucom.2006.10.148

Crook, N.T. and Goh, W.J. (2007) Human motion recognition using nonlinear transient computation. In: M. Verleysen (ed.), Proceedings of the 15th European Symposium on Artificial Neural Networks (ESANN'2007), Bruges, Belgium: d-side.

Goh, W.J. and Crook, N.T. (2007) Pattern recognition using chaotic transients. In: M. Verleysen (ed.), Proceedings of the 15th European Symposium on Artificial Neural Networks (ESANN'2007), Bruges, Belgium: d-side.

Maass, W., Natschläger, T. and Markram, H. (2002) Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Computation 14(11), 2531-2560.

Rössler, O.E. (1976) An equation for continuous chaos. Physics Letters A 57(5), 397-398.