First off, I’m not an ethicist, so what I am about to say needs to be taken with a pinch of salt. But I do have an amateur interest in the human self and what makes us tick as ethical beings. There has been some really interesting work on ethical robots recently which has piqued my interest (see, for example, Prof Alan Winfield’s blog post entitled Towards an Ethical Robot).

In what follows I sketch out a proposal for what I consider to be the essential dimensions of an ethical robot. What I outline here could be categorised in the ‘virtue ethics’ tradition of moral philosophy: it is not that the robot should exhibit some preprogrammed ethical behaviour, which is ultimately a projection of the human designer’s or engineer’s ethics onto the robot, but rather that the robot is designed so that it autonomously develops as an ethical agent, based partly on external moral guidance and partly on its observations of the consequences of its own decisions. In other words, the robot would develop moral character over time.

What I am proposing is an adaptation of the work of the late Prof. Dallas Willard, who was a Professor in the School of Philosophy at the University of Southern California. In his work on human ethical behaviour, he identified five essential dimensions of the human self and arranged them in concentric circles (the first four are shown in Fig. 1) [Willard, 2014]. Each dimension represented by an inner circle is contained in (or is a sub-part of) the dimensions represented by the circles outside it.

Briefly, the outer circle represents the social context in which the ethical agent operates. This social dimension encompasses all of the interactions (or relationships) which the agent has with other agents. Note that the social context encompasses the other three dimensions in Fig. 1: it includes the body (how the agent acts in relation to itself and others), the mind (how the agent thinks of others) and the will (the decisions the agent makes that affect others).

The body constitutes the personal ‘power pack’ of the ethical agent. It locates the agent in time and space and enables it to interact with the physical world. It provides the essential sensory apparatus which endows the agent with the ability to perceive the physical environment in which it is situated. It also has actuation mechanisms which enable control of the various parts of the agent’s body.

Thought brings things before the heart/will/spirit in different ways. It enables the agent to reason about things and explore possibilities. It includes the agent’s imagination and creative abilities, which incorporate the ability to anticipate the consequences of perceived events and planned actions, as illustrated in Prof Winfield’s work. Feelings constitute the emotions that incline the agent towards or away from whatever comes before the agent’s mind in thought.

Heart, will and spirit are three facets of the same thing. This dimension includes the agent’s capacity to choose and to generate original ideas (I am sidestepping the issue of ‘free will’ here, which some would argue even human moral agents don’t actually possess; I do have some thoughts on this which I may well share in a future blog post). The ability to make moral choices is, of course, fundamental to ethical agency and the development of moral character.

Crudely, information flows from the agent’s social context, passes through the body’s sensory systems, and is represented to the will in thought and associated feelings (Fig. 2). The intentions to act that originate in the will pass back through thought, feeling and the body, and are effected in the social context.
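To make this flow a little more concrete, here is a minimal Python sketch of how these dimensions might be wired together as components of a robot’s control loop. I should stress that this is my own illustrative sketch, not Willard’s model or anyone’s existing architecture; the class names and methods (SocialContext, Body, Thought, Feelings, Will, moral_control_loop) are assumptions made purely for the example.

```python
class SocialContext:
    """Stub for the world of other agents that the robot interacts with."""
    def __init__(self, events=None):
        self.events = events or []     # things happening around the agent
        self.actions = []              # actions the agent has taken in the world

    def observable_events(self):
        return list(self.events)

    def receive_action(self, intention):
        self.actions.append(intention)


class Body:
    """Sensing and actuation: locates the agent in the world and lets it act."""
    def sense(self, context):
        return context.observable_events()

    def act(self, intention, context):
        context.receive_action(intention)


class Thought:
    """Brings sensed events 'before the will', e.g. by anticipating consequences."""
    def represent(self, percepts):
        return [{"event": p, "anticipated": self.anticipate(p)} for p in percepts]

    def anticipate(self, percept):
        return "unknown"               # placeholder for a consequence-prediction model


class Feelings:
    """Attaches an inclination towards or away from each represented event."""
    def appraise(self, representations):
        return [dict(r, inclination=0.0) for r in representations]


class Will:
    """Chooses which, if any, represented event to act upon."""
    def choose(self, appraised):
        return appraised[0]["event"] if appraised else None


def moral_control_loop(context, body, thought, feelings, will):
    """One pass of the sense -> represent -> feel -> choose -> act cycle (Fig. 2)."""
    percepts = body.sense(context)
    appraised = feelings.appraise(thought.represent(percepts))
    intention = will.choose(appraised)
    if intention is not None:
        body.act(intention, context)
    return intention


# Example: moral_control_loop(SocialContext(["greeting"]), Body(), Thought(), Feelings(), Will())
```

A real robot would of course replace these stubs with actual perception, consequence prediction and action selection; the point is only to show where each dimension sits in the loop.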

According to Prof Willard, we do not live from the will alone. He suggests that we live largely from the soul, which he proposes is the fifth dimension of the self. In his view, the soul integrates all of the other dimensions to form the whole person (Fig. 3). Inspired by Willard’s analogy, I would say that this understanding of the soul is similar to the operating system of a computer, which integrates all the different parts of the computer (memory, CPU, input/output devices, software, etc.) so that it functions as one device.

Traditionally, the soul is understood to be the source of life and order (or disorder, depending on the inner state of the individual). The soul is also traditionally seen as the seat of the personality, and over time it takes on (‘learns’) the moral character of the agent’s decisions and behaviour.
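Staying with the operating-system analogy, a rough sketch of a soul-as-integrator might look like the following, reusing the hypothetical component classes and the moral_control_loop function from the sketch above. Again, this is illustrative only; treating character as simply a log of past choices is a deliberate oversimplification.

```python
class Soul:
    """Integrates the other dimensions and accumulates character over time."""
    def __init__(self, body, thought, feelings, will):
        self.body, self.thought, self.feelings, self.will = body, thought, feelings, will
        self.character = []            # record of past choices: the raw material of character

    def live(self, context, steps=1):
        """Run the agent as one integrated whole, like an OS scheduling a machine's parts."""
        for _ in range(steps):
            intention = moral_control_loop(
                context, self.body, self.thought, self.feelings, self.will)
            if intention is not None:
                self.character.append(intention)
```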

Under this view, moral action stems not just from choosing to ‘do the right thing’, whatever that might be, but can also be strongly influenced by the personality of the agent. In some cases the weight of the agent’s personality can over-rule the intentions formed by the will, resulting in moral (or immoral) behaviour that is contrary to what the agent actually wants to do.
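As a purely speculative illustration of that last point, here is one way an accumulated character could come to outweigh the will’s choice. The Character class, the weighting scheme and the override_margin threshold are all assumptions of mine, sketched only to show the kind of mechanism that might be involved.

```python
from collections import Counter


class Character:
    """A crude record of how often the agent has taken each kind of action."""
    def __init__(self):
        self.history = Counter()

    def record(self, action):
        self.history[action] += 1      # repetition entrenches a disposition

    def weight(self, action):
        return self.history[action]


def choose_with_character(options, willed_choice, character, override_margin=5):
    """Return the willed choice unless an entrenched habit outweighs it."""
    habitual = max(options, key=character.weight, default=willed_choice)
    if character.weight(habitual) - character.weight(willed_choice) >= override_margin:
        return habitual                # the 'weight of personality' wins out
    return willed_choice
```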

I am intrigued by the possibility of designing and creating robots with the ability to develop moral character. I believe that a scaled-down version of Willard’s perspective on moral agency could, in principle, be implemented in real robots. The question is, should we go down this route?

References

D. Willard (2014) Renovation of the Heart: Putting on the Character of Christ. Tyndale.
