Thursday, 7 April 2016

Essential Dimensions of an Ethical Robot?


First off, I'm not an ethicist, so what I am about to say needs to be taken with a pinch of salt. But I do have an amateur interest in the human self and what makes us tick as ethical beings. There has been some really interesting work on ethical robots recently which has piqued my interest (see, for example, Prof Alan Winfield's blog post entitled Towards an Ethical Robot).

In what follows I sketch out a proposal for what I consider to be the essential dimensions of an ethical robot. What I outline here could be categorised in the 'virtue ethics' tradition of moral philosophy: it is not that the robot should exhibit some preprogrammed ethical behaviour, which is ultimately a projection of the human designer's or engineer's ethics onto the robot, but rather that the robot is so designed that it autonomously develops as an ethical agent, based partly on external moral guidance and partly on its observations of the consequences of its decisions. In other words, the robot would develop moral character over time.

What I am proposing is an adaptation of the work of the late Prof. Dallas Willard, who was a Professor in the School of Philosophy at the University of Southern California. In his work on human ethical behaviour, he identified five essential dimensions of the human self and arranged them in concentric circles (the first four are shown in Fig. 1) [Willard, 2014]. The dimensions represented by the inner circles are somehow contained in (or are sub-parts of) the ones represented by the outer circles.

Briefly, the outer circle represents the social context in which the ethical agent operates. This social dimension encompasses all of the interactions (or relationships) which the agent has with other agents. Note that the social context encompasses the other three dimensions in Fig. 1. So it includes the body (how the agent acts in relation to themselves and others), the mind (how the agent thinks of others) and the will (what decisions the agent makes that affect others).

The body constitutes the personal 'power pack' of the ethical agent. It locates the agent in time and space and enables it to interact with the physical world. It provides the essential sensory apparatus which endows the agent with the ability to perceive the physical environment in which it is situated. It also has actuation mechanisms which enable control of various parts of the agent's body.

Thought brings things before the heart/will/spirit in different ways. It enables the agent to reason about things and explore possibilities. It includes the agent's imagination and creative abilities, which incorporate the ability to anticipate the consequences of perceived events and planned actions, as illustrated in Prof Winfield's work. Feelings constitute the emotions that incline the agent towards or away from whatever comes before the agent's mind in thought.

Heart, will and spirit are three facets of the same thing. This dimension includes the agent's capacity to choose and to generate original ideas (I am sidestepping the issue of 'free will' here, which some would argue even human moral agents don't actually possess - I do have some thoughts on this which I may well share in a future blog post). The ability to make moral choices is, of course, fundamental to ethical agency and the development of moral character.

Crudely, information flows from the agent's social context, passing through the body's sensory systems, and is represented to the will in thought and associated feelings (Fig. 2). The intentions to act that are originated by the will pass through thought, feeling and the body and are effected in the social context.


According to Prof Willard, we do not live from the will alone. He suggests that we live largely from the soul, which he proposes is the fifth dimension of the self. He suggests that the soul integrates all of the other dimensions together to form the whole person (Fig. 3).  Inspired by Willard's analogy, I would say that this understanding of the soul is similar to the operating system of a computer which integrates all the different parts of the computer (memory, CPU, input/output devices, software, etc) to enable the computer to function as one device.

Traditionally, the soul is understood to be the source of life and order (or disorder, depending on the inner state of the individual). Also, the soul is traditionally seen as the seat of the personality and that over time it takes on ('learns') the moral character of the decisions and behaviour of the agent.

Under this view, moral action stems not just from choosing to 'do the right thing', whatever that might be, but can also be strongly influenced by the personality of the agent. In some cases the weight of the agent's personality can overrule the intentions formed by the will, resulting in moral (or immoral) behaviour that is contrary to what the agent actually wants to do.

I am intrigued by the possibility of designing and creating robots with the ability to develop moral character. I believe that a scaled-down version of Willard's perspective on moral agency could, in principle, be implemented in real robots. The question is, should we go down this route?

References

D. Willard (2014) Renovation of the Heart: Putting on the Character of Christ. Tyndale.

Wednesday, 29 July 2015

Mind is motion (Part 2)


In my last post ('Mind is motion'), I introduced the concept of Nonlinear Transient Computation which takes advantage of the dynamics of chaos in the task of pattern classification. In this post, I want to develop this a little further and unpack the underlying concepts of the approach (more details are given in Crook & Goh, 2008). 

Fig. 1 illustrates the transient computation concept.
The blue lines inside the box represent the continuously changing state of the dynamical system (e.g. a controlled chaotic attractor). The dotted line illustrates the path that the system would have followed had there been no inputs. The solid line illustrates the changes in the evolution of the dynamical system following three inputs presented consecutively at times t1, t2 and t3. On the right is a bank of observers that are trained to recognise the characteristic changes in the dynamical system that are produced by certain classes of input patterns.

But how are the inputs presented to the dynamical system? Essentially, the inputs are scaled and added on to one or more of the state variables of the dynamical system (Fig. 2 illustrates this with a two-dimensional input vector I1). This gives a 'kick' to the system, usually away from the attractor, in a direction and with a magnitude that are proportional to the input vector. The readout devices ('observers') then observe the path that the dynamical system follows back to the attractor (which, due to the properties of chaos, will depend on the direction and magnitude of the 'kick' - i.e. on the input), as illustrated by the animation shown to the right.
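This 'kick' mechanism is simple enough to sketch in Python. The sketch below is purely illustrative: the function name, the gain factor and the three-variable state are my own stand-ins, not values from the published model.

```python
def apply_input(state, input_vec, gain=0.1):
    """Present an input to the dynamical system by scaling it and adding
    it onto the first len(input_vec) state variables - the 'kick'."""
    kicked = list(state)
    for i, value in enumerate(input_vec):
        kicked[i] += gain * value
    return kicked

# A three-variable system state receives a two-dimensional input 'kick':
# x and y are nudged in proportion to the input, z is left untouched.
kicked = apply_input([0.5, -0.2, 1.0], [1.0, 2.0])
```

The gain matters in practice: too small a kick and different inputs produce indistinguishable transients; too large and the state may be thrown outside the attractor's basin.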

The readout devices (e.g. simple linear perceptrons) take a series of time delayed samples of the values of one or more of the state variables of the dynamical system as illustrated in Fig. 3.
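A minimal sketch of such a readout, with made-up sampling parameters (three samples, five steps apart) and hand-picked perceptron weights:

```python
def delayed_samples(series, t, n_samples=3, delay=5):
    """Collect time-delayed samples of one recorded state variable,
    working backwards from time step t."""
    return [series[t - k * delay] for k in range(n_samples)]

def linear_readout(samples, weights, bias=0.0):
    """A simple linear perceptron: weighted sum plus a hard threshold."""
    activation = sum(w * s for w, s in zip(weights, samples)) + bias
    return 1 if activation > 0 else 0

series = [0.1 * i for i in range(20)]    # stand-in for one state variable's history
samples = delayed_samples(series, t=15)  # values at steps 15, 10 and 5
label = linear_readout(samples, weights=[1.0, -0.5, -0.5])
```

In practice the perceptron weights would be learned from labelled examples of the transients; here they are fixed by hand just to show the mechanics.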

A key issue for pattern classification is how to differentiate between patterns that are similar, and thus likely to belong to the same class, and patterns that are different, and which should belong to different classes. Patterns that are similar will tend to be close to each other in input space. Fig. 4 below gives an example of patterns that have two features (p1 and p2). When you plot the positions of two similar patterns (red dot and blue dot), they appear close together in the input space (Fig. 4, left). In our case, these input patterns would be scaled and added to the variable(s) of the dynamical system, essentially giving it two similar 'kicks'.


For pattern classification to succeed in our Transient Computation model, similar inputs must produce similar transients for the readout neurons to detect (and conversely, different inputs must produce different transients). As can be seen in the animation below, the two similar input patterns produce similar trajectories in phase space, at least to begin with. Later, because of the sensitivity of chaos to initial conditions, the trajectories begin to diverge.

[Video]
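This divergence of initially similar trajectories can be demonstrated with even the simplest chaotic system. Here is a sketch using the logistic map as an illustrative stand-in for the controlled attractor: two orbits that start a millionth apart track each other closely at first, then separate completely.

```python
def logistic_orbit(x0, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x), which is chaotic at r = 4."""
    orbit = [x0]
    for _ in range(steps):
        orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

orbit_a = logistic_orbit(0.400000, 50)
orbit_b = logistic_orbit(0.400001, 50)  # starts a millionth away

early_gap = abs(orbit_a[5] - orbit_b[5])                             # still tiny
late_gap = max(abs(orbit_a[k] - orbit_b[k]) for k in range(35, 51))  # diverged
```

This is exactly the trade-off the readout neurons must work with: early in the transient, similar inputs still look similar; wait too long and sensitivity to initial conditions washes the similarity out.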

There is much more to be said about this approach, some of which I will cover in future blog posts. The most surprising feature of Transient Computation, however, is that it is capable of correctly classifying linearly inseparable patterns of inputs (e.g. XOR-type arrangements of inputs). More on this, and the story behind it, in my next blog post.

References


Crook, N.T. and Goh, W.J. (2008) Nonlinear Transient Computation as a potential “kernel trick” in cortical processing. BioSystems 94(1), 55-59. doi:10.1016/j.biosystems.2008.05.010

Crook, N.T. (2007) Nonlinear transient computation. Neurocomputing 70(7-9), 1167-1176. doi:10.1016/j.neucom.2006.10.148

Crook, N.T. and Goh, W.J. (2007) Human motion recognition using nonlinear transient computation. In: M. Verleysen (Ed.), Proceedings of the 15th European Symposium on Artificial Neural Networks (ESANN 2007), Bruges, Belgium, d-side, April 2007.



Wednesday, 1 July 2015

Mind is motion


Whilst you are alive, your brain is continuously active. Every second of every day, your neurons are firing and signals are being passed along perhaps the most complex communication network in the universe. Your brain is continuously processing, even in the absence of any obvious stimulus (e.g. when in a coma or asleep). This continuous activity or 'motion' appears to be fundamental to all our conscious cognitive activities (e.g. perception, reasoning, motor control, speech, etc.). However, this poses a couple of problems for would-be brain modellers:

How does this continuously active network distinguish between this ongoing 'background' activity and the activity caused by a new stimulus arriving from the senses? What's more, how does the network learn from new stimuli with all this 'noise' going on in the background? (A bit like trying to listen to the teacher in a very noisy classroom.)

Back in 2007, I invented the term "Transient Computation" (i.e. computation based on transients or motions) to describe an approach to modelling this aspect of the brain and published it in a paper (Crook, 2007). I have since learned that this paper is generally thought to be somewhat impenetrable, so in this post I will try to demystify what I proposed back then.

Prior to my publication in 2007, "Liquid State Machines" (Maass et al., 2002) had become popular and had been shown to be successful at a range of processing tasks. The basic concept of the LSM is that you have a large "pool" of randomly connected neurons that are active (i.e. intermittently firing - see Fig. 1). Input signals are projected on to the pool and perturb the activity of the neurons. Different inputs will result in different perturbations. Simple linear neurons can be trained to recognise these differences and identify the class of input that caused them. The analogy often used to describe the LSM is of how the surface of a pool (neural activity) is perturbed by objects ("inputs") that are dropped in the pool. Different sizes and shapes of object would cause different perturbations or ripples on the pool surface. Hence the ripples enable you to classify the objects that caused them.
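The pool idea can be sketched with a toy network of randomly connected leaky neurons. Everything here (pool size, weight range, leak rate, the way the input is projected onto a single neuron) is an illustrative stand-in, not the spiking model of Maass et al.:

```python
import math
import random

def pool_response(input_value, steps=30, n=20, seed=1):
    """A toy 'liquid': a pool of randomly connected leaky rate neurons.
    The input perturbs the pool; the state it leaves behind is the
    'ripple' pattern that a simple readout could then classify."""
    rng = random.Random(seed)  # same random wiring every call, for comparability
    weights = [[rng.uniform(-0.5, 0.5) for _ in range(n)] for _ in range(n)]
    state = [0.0] * n
    state[0] = input_value     # project the input onto one neuron of the pool
    for _ in range(steps):
        state = [0.9 * state[i]  # leaky decay of each neuron's activity
                 + 0.1 * math.tanh(sum(weights[i][j] * state[j] for j in range(n)))
                 for i in range(n)]
    return state

# Different 'objects dropped in the pool' leave different ripples
r1 = pool_response(1.0)
r2 = pool_response(2.0)
```

Because the recurrent wiring is fixed and random, only the readout needs training, which is what makes the LSM approach so cheap to use.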

The term "Liquid State Machine" became associated with large randomly connected pools of neurons. I introduced the term "Transient Computation" because I believe it describes a more general class of computation that doesn't necessarily depend on large pools of neurons. In my 2007 paper, I presented a model consisting of just two neurons that possessed all the essential properties of an LSM.

These neurons were slightly more complex than the neurons typically used in LSMs in that they had weakly chaotic internal states based on the well-known Rössler equations (Rössler, 1976). See Fig. 2.

The first video below shows the evolution of two of the variables that represent the internal state of the neuron. When the state reaches a firing threshold (in the y direction), the neuron outputs a spike and the state moves to a reset value (in the y direction), mimicking the firing and reset of a biological neuron's membrane potential.





[Video]


This next video shows the corresponding time series and spike output of the neuron:


[Video]
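The behaviour shown in these videos can be reproduced with a few lines of Python: a crude Euler integration of the Rössler equations with a firing threshold and reset applied to the y variable. The threshold and reset values (and the integration step) are illustrative guesses, not the settings used in the published model:

```python
def rossler_neuron(steps=20000, dt=0.01, a=0.2, b=0.2, c=5.7,
                   threshold=4.0, reset=-4.0):
    """Euler integration of the Rossler equations, with a firing
    threshold and reset applied to the y state variable."""
    x, y, z = 0.1, 0.0, 0.0
    spikes = []
    for step in range(steps):
        dx = -y - z
        dy = x + a * y
        dz = b + z * (x - c)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if y > threshold:        # state has crossed the firing threshold...
            spikes.append(step)  # ...record a spike...
            y = reset            # ...and jump the state to the reset value
    return spikes

spike_times = rossler_neuron()   # time steps at which the neuron 'fired'
```

Starting from a small initial state, the chaotic oscillation grows until y first crosses the threshold, after which the neuron fires roughly once per orbit of the attractor.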


Fig. 3 illustrates the structure of the network. To put it simply, one of the neurons (NP) acts as a pacemaker which regulates the activity of the other neuron (NT), which acts as the "pool" or transient neuron. The pacemaker neuron NP has time-delayed feedback which enables it to stabilise into an Unstable Periodic Orbit (see here for a brief explanation of UPOs). The connection from the pacemaker NP to NT also stabilises NT onto the same UPO. External input to NT pushes it away from the UPO in a direction that is proportional to the input. Simple readout neurons can then be trained to classify the external input based on the transient that the NT neuron takes to return to the stabilised UPO.
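Time-delayed feedback control of this kind (often called Pyragas control) feeds the difference between the delayed and the current state back into one of the system's equations. A minimal sketch of the feedback term, with an illustrative gain K; the delay and gain used in the actual model may well differ:

```python
def pyragas_feedback(history, x_now, tau_steps, K=0.2):
    """Time-delayed feedback term K * (x(t - tau) - x(t)). Added to one
    of the system's equations, it nudges the state towards where it was
    tau steps ago, stabilising an orbit of period tau."""
    if len(history) < tau_steps:
        return 0.0  # not enough history recorded yet: apply no control
    return K * (history[-tau_steps] - x_now)

# Once the trajectory is tau-periodic, the control force vanishes, so the
# stabilised UPO is a genuine orbit of the original (uncontrolled) system.
periodic = [0.0, 1.0, 0.0, -1.0] * 5                        # a period-4 signal
force = pyragas_feedback(periodic, x_now=0.0, tau_steps=4)  # next value is 0.0
```

That vanishing property is the appeal of the method: unlike driving the system with an external periodic signal, delayed feedback leaves the stabilised orbit's shape untouched.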


So, I hear you ask, what is the point of having a chaotic neuron that is controlled into a periodic orbit? The point is this: the chaotic attractor of the NT neuron provides a rich set of dynamics which ensures that the transients in its internal state caused by external input are sensitive to the character or class of that input, thereby enabling the readout neurons to differentiate easily between different classes of external input.

The results presented in (Crook, 2007) show that this model possesses the key properties of separation, approximation and fading memory required for transient computation. They also show that its performance is comparable to that of the LSM, but with significantly fewer neurons.

References



Crook, N.T. and Goh, W.J. (2008) Nonlinear Transient Computation as a potential “kernel trick” in cortical processing. BioSystems 94(1), 55-59. doi:10.1016/j.biosystems.2008.05.010

Crook, N.T. (2007) Nonlinear transient computation. Neurocomputing 70(7-9), 1167-1176. doi:10.1016/j.neucom.2006.10.148

Crook, N.T. and Goh, W.J. (2007) Human motion recognition using nonlinear transient computation. In: M. Verleysen (Ed.), Proceedings of the 15th European Symposium on Artificial Neural Networks (ESANN 2007), Bruges, Belgium, d-side, April 2007.

Goh, W.J. and Crook, N.T. (2007) Pattern recognition using chaotic transients. In: M. Verleysen (Ed.), Proceedings of the 15th European Symposium on Artificial Neural Networks (ESANN 2007), Bruges, Belgium, d-side, April 2007.

Maass, W., Natschläger, T. and Markram, H. (2002) Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Computation 14(11), 2531-2560.

Rössler, O.E. (1976) An equation for continuous chaos. Physics Letters A 57(5), 397-398.

Friday, 26 June 2015

Do you have a chaotic brain?

There is some (disputed) biological evidence [1,2,3] that what goes on inside your brain is chaotic (I could find plenty of real evidence for this in my case!). But this is not the common meaning of the term 'chaos', which is normally used to describe something highly disorganised. On the contrary, in this context we are using the term 'chaos' in its specialist mathematical sense, to describe something which is highly organised but difficult to predict.


Early in my research career, I became interested in the possibility that brain activity is chaotic in this mathematical sense. I pondered what this might mean. Does this chaos offer any advantage to the brain in terms of the memories it can store and retrieve, or in terms of how it processes information? One thing that is notable about chaotic systems is that they are dynamic, restless, ceaselessly moving, creating new paths, new possibilities, and yet remaining constrained, bounded in a small sub-region of their potential 'space'. Sounds impossible? Have a look at this:


[Video]
Here you see a classic chaotic system called the Lorenz attractor [4]. The light blue line illustrates the two-winged sub-region that this chaotic system is constrained within (i.e. its 'attractor'). The red point is a particular state of the system which, as you will see, starts far away from the attractor, but moves towards it over time and then continues to describe a path around it. Although this system is constrained to its two-winged attractor, it will never stop moving, it will never be at exactly the same point twice on its attractor, and it will continuously trace new paths (transients) as it goes along. Pretty cool!
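For the curious, the Lorenz system is easy to simulate yourself. A minimal Euler-integration sketch with the classic parameter values (the step size and iteration count are arbitrary choices of mine):

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz equations with the classic parameters."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

state = (1.0, 1.0, 1.0)
trajectory = []
for _ in range(10000):
    state = lorenz_step(state)
    trajectory.append(state)
# The trajectory never settles down, yet remains bounded on the
# butterfly-shaped attractor - organised, but unpredictable.
```

Plotting x against z for the points in `trajectory` reproduces the familiar two-winged butterfly shape.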

Although the system will never be at exactly the same point twice (i.e. it will never be in exactly the same state more than once), it will come very close to points that it has previously visited, forming complex loop-like structures called 'unstable periodic orbits' (UPOs - see the image on the right, taken from [5]).

Another remarkable feature of a chaotic attractor is that it can embed an infinite number of these UPOs. One question we asked ourselves early on in this research was: what if each of these UPOs represented a memory of the network [5]? If this could be achieved, then the memory capacity of the network (and, by implication, of the brain) would also be theoretically infinite.

This begs lots of questions, most of which we have been unable to answer. However, in my next blog, I will begin to outline some of the work I did with colleagues at Oxford Brookes University on developing chaotic models of neural information processing in the brain.

References

[1]  Babloyantz, A., Lourenco, C., 1996. Brain chaos and computation. International Journal of Neural Systems 7, 461–471.

[2] Freeman, W.J., 1987. Simulation of chaotic EEG patterns with a dynamic model of the olfactory system. Biological Cybernetics 56, 139–150.

[3] Freeman, W.J., Barrie, J.M., 1994. Chaotic oscillations and the genesis of meaning in cerebral cortex. In: Buzsaki, G., et al. (Eds.), Temporal Coding in the Brain. Springer–Verlag, Berlin, pp. 13–37.

[4] Lorenz, Edward Norton (1963). "Deterministic nonperiodic flow". Journal of the Atmospheric Sciences 20 (2): 130–141.

[5] Crook, N.T. & olde Scheper, T. (2002) Adaptation Based on Memory Dynamics in a Chaotic Neural Network.  Cybernetics and Systems 33 (4), 341-378.

Wednesday, 17 June 2015

Rudely Interrupted!

Have you ever been rudely interrupted? You're part way through saying something of significance (to you at least) and the person you are speaking to barges in with a comment or a question. How do you react? Ignore it and carry on regardless? Deal with their comment/question and return to what you were saying? This was one of the problems we faced in the Companions Project when we developed an animated avatar called Samuela capable of engaging in social conversation (see this post for a very brief overview).

Companions Dialogue System Interface

Occasionally, Samuela would make long multi-sentence utterances commenting on what the user had said about their day at work. Here's an example of one of Samuela's long utterances:
"I understand exactly your current situation. It's right that you are pleased with your position at the minute. In my opinion having more free time because of the decreased workload is fantastic. Meeting new people is a great way to pass the time outside of work. I'm sure Peter will provide you with excellent assistance. Try not to let Sarah bother you either."
These long utterances provided the opportunity for (and often provoked) the user to interrupt the avatar mid-speech. We realised that Samuela would need to be able to handle these interruptions and respond to them in a human-like way if she was to engage in believable social conversation with the user. A detailed description of how we implemented this barge-in interruption handling facility can be found here (Crook et al., 2012).

In summary, we faced two problems when developing this interruption handling capability. The first was detecting the occurrence of genuine interruptions and distinguishing them from back-channel utterances from the user (e.g. 'Aha', 'Yes' etc). The second was to equip the system with human-like strategies for responding to them in a natural way and continuing with the conversation.

If the user starts speaking whilst Samuela (denoted as ECA in the figure below) is speaking, then the system uses thresholds on both the intensity and the sustained duration of the audio signal from the user's microphone to determine whether this counts as a genuine interruption. This is illustrated in the schematic below, which shows four cases of Samuela speaking, two of which (cases 3 and 4) are designated as interruptions by the system:
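The detection logic can be sketched as a simple rule over successive audio frames. The threshold value and frame count below are illustrative placeholders, not the tuned values from the actual system:

```python
def is_interruption(frames, intensity_threshold=0.3, min_frames=5):
    """Treat user audio as a genuine interruption only if its intensity
    stays above the threshold for enough consecutive frames; anything
    shorter is taken to be a back-channel ('Aha', 'Yes', ...)."""
    run = 0
    for level in frames:
        run = run + 1 if level > intensity_threshold else 0
        if run >= min_frames:
            return True   # sustained loud speech: a real barge-in
    return False          # too brief: ignore it and keep talking

backchannel = [0.6, 0.5, 0.0, 0.0]         # a short burst of sound
barge_in = [0.6, 0.7, 0.6, 0.5, 0.6, 0.4]  # sustained user speech
```

The duration condition is what separates the two cases: a brief 'Aha' clears the intensity threshold but not the sustained-duration one.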


The second challenge, which was to equip Samuela with strategies for responding to user barge-in interruptions, required us to understand more about the strategies that humans use in such situations. To gather information about this, we analysed some transcripts of the BBC Radio 4 programme Any Questions. This is a discussion programme consisting of a panel of public figures, including politicians, who regularly interrupt each other - so this was a rich source of examples for us!

In brief, our analysis showed that two things were happening when panelists were interrupted: the first was addressing the interruption; the second was the resumption (or recovery) of speech after the interruption. We found it necessary to classify the types of interruption that we observed, and focussed on implementing the six that were found to be most common. We then classified the types of recovery that we observed for each type of interruption and implemented these in the system controlling Samuela.

Here are a couple of examples of Samuela responding to user interruptions that are taken from the paper. The down arrow in the system turn (S) indicates the point at which the user (U) interrupted (the remainder of what the system had planned to say is shown in italics). The right arrow shows the output of the speech recogniser when the interruption occurred.



We were unable to do a full evaluation of the interruption handling before the end of the project, which is a pity, because I believe that this is the most sophisticated user barge-in interruption handling system that has yet been developed.

Friday, 12 June 2015

Social Robotics Motivation Part II: Human Identity

In my last post (found here) I began to explain why I found myself increasingly interested in social robotics as a focus for my research. Today, I want to complete this picture by explaining that, at heart, my motivation stems from a desire to understand what it is to be human. I want to start with a quote from one of my all-time favourite movies:
"There have always been ghosts in the machine. Random segments of code that group together to form unexpected protocols. Unanticipated, these free radicals engender questions of free will, creativity and even the nature of what we might call the soul.
When does a perceptual schematic become consciousness? When does the difference-engine become the search for truth? When does a personality simulation become the bitter mote of a soul?" (I, Robot, dir. Alex Proyas).
Although I don't believe in the "random segments of code that group together to form unexpected protocols" part, Proyas's film I, Robot raises some deeply interesting questions.

For those of you who haven't seen it, here is the official trailer. I, Robot tells the thrilling story of a future in which humanoid robots are fully integrated into society. As the sinister plot unfolds, it becomes clear that the central robot character, Sonny, is unique amongst the robot population in that he appears to be more human than robotic. The film raises deep questions about the robot's true identity: Is he a person in his own right, possessing free will, creativity and even a soul?

For me the film also implicitly raises important questions about human identity: If machines are created that successfully simulate personhood to the degree of accuracy portrayed here, does that mean that humans are nothing more than biological machines?

I believe that the study of social robotics has a part to play in answering this question.

Friday, 5 June 2015

How I became involved in social robotics

Some say that conversation is an art. When you try to build an artificial agent capable of even a limited form of social conversation, you begin to understand what people are getting at. In 2008 I was employed as an RA/developer at Oxford University to work on the EU-funded Companions Project, which sought to develop an animated avatar called Samuela that you could have a 'social' conversation with about your day at work.




Samuela was designed to be emotionally intelligent, recognising the user's emotional state through voice patterns and sentiment analysis, and using her voice, facial expressions and gestures to show empathy towards the user. She was also capable of generating long utterances in which she gave advice to the user about how they were responding emotionally to the events of their day. Here is a video which introduces the prototype system and shows a couple of sample conversations with a user:


[Video]

If you want to know more of the technical details of the system, have a look at the selected references listed below. I will summarise my contributions to the Companions project in following blog posts.

Working on this project was one of the most exciting and challenging periods of my research career. It introduced me to the deeply interesting and challenging area of creating artefacts capable of social interaction with people. Such systems require us to go far beyond the traditional mainstream challenges of AI (e.g. NLP, reasoning, learning, dialogue management, etc.), into a world dominated by social norms and protocols, emotion, ethical patterns of behaviour and much more. I also realised the importance of 'presence' in social interaction, and in particular, bodily presence. An avatar on a screen (just like a human on a screen) involves a certain remoteness and lack of presence. For this reason, I turned to the use of robots to study and develop technologies capable of social interaction with people.

In 2011 I was appointed as Head of Computing and Communication Technologies at Oxford Brookes University. Soon afterwards I opened a new Cognitive Robotics lab there and began work on producing robots capable of social interaction, including our own skeletal head-and-neck robot called Eddie (more about this in later blog posts), which we have built to mimic human head movements during conversation. We have also recently completed a study of the effect that upper-body pose mirroring has on human-robot interaction. In this series of blog posts I will summarise this and subsequent work and give some insights into the stories behind the publications.