Thinking Machines

The Artbot project is interesting for the way it opens onto a number of persistent questions in fields concerned with notions of intelligence and creativity – both things that at one point were taken to define our humanity (and for the most part still do).

It is nearly impossible to grapple with robotics – even in its most practical (instrumental) instantiations – without at the same time being knee-deep in these issues, even unknowingly.

One of my favourite ideas that gets to the heart of this issue is described by Richard Doyle in ‘Wetwares: Experiments in Postvital Living’ (http://www.amazon.com/Wetwares-Experiments-Postvital-Living-Theory/dp/0816640092/ref=sr_1_9?ie=UTF8&qid=1461569827&sr=8-9&keywords=wetware).

A rather sophisticated Artbot

In that book Doyle describes the way robots are formed in a process of what we might call narcissistic sexual selection – which is to say, we have tended to build robots that aim to think in the ways we like to think of ourselves as thinking. We choose to proceed with the development of models and robotic projects that mirror our own sense of self and the forms of intelligence and creativity that we think define us (and we marginalise those that don’t).

Contemporary robotics has moved away from this tendency – in reaction to the spaces in robotics development that the narcissistic project left open. Roboticists have followed neuro-philosophy and biology in finding models of thought derived from swarm and viral modes of a highly reconfigured notion of ‘intelligence’ (and creativity). That said, robotics is still thoroughly tied up with this dialectic between the human and the non-human.

 

The cheap and cheerful depiction of this shift goes something like this: we began by trying to make robots and artificial intelligence that imagined the brain as the seat of thought and consciousness – so-called ‘Strong Artificial Intelligence’ roughly proceeded by attempting to give a robot brain enough information about the world to make it navigable. In esoteric terms, we seemed to think that, given enough knowledge, the electronic brain might become a seat of the ‘soul’. In practical terms, this might mean that we provide a map of the world and locate the robot body within that map as a means of navigation.

In Artbot terms this might look something like asserting a (mostly Cartesian) grid ahead of time, so that at any moment the robot can move to a specific point anywhere within the world – but only as that world is defined by the grid. These robots look much more like printers or plotters than what we would traditionally call a robot. Perhaps this is a point of definition worth thinking about.
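A minimal sketch of this map-first approach might look like the following. The names here (`GridBot`, `move_to`) are illustrative, not from any real robotics library – the point is only that every move is a jump to an absolute coordinate on a grid that was asserted ahead of time:

```python
# The map-first (plotter-like) robot: it carries an absolute grid and
# every move is a jump to a point on that grid. Anything outside the
# grid simply does not exist for it.

class GridBot:
    def __init__(self, width, height):
        self.width, self.height = width, height  # the pre-asserted grid
        self.x, self.y = 0, 0                    # the zero point

    def move_to(self, x, y):
        """Move to an absolute grid position."""
        if not (0 <= x < self.width and 0 <= y < self.height):
            raise ValueError("target lies outside the robot's world model")
        self.x, self.y = x, y

    def home(self):
        self.move_to(0, 0)  # we can always return to a known zero point

bot = GridBot(100, 100)
bot.move_to(42, 17)  # precise, repeatable - but only within the map
```

Note how the failure mode is built in: anything the grid doesn’t describe raises an error, which is exactly the sandbox limitation the alternative approach escapes.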

What is the alternative? Many thinkers had moved well away from this model of a centralised intelligence by the time the 60s were in full swing. They had begun to abandon the idea that the brain holds a map of the world by which we navigate the actual concrete world, along with associated/analogous approaches.

The second wave of cybernetics (late 60s to mid 80s) had already pushed toward a more ‘embodied’ and ‘relational’ notion of intelligence, and began to understand perception itself as a fundamental aspect of the thinking machine we call a body – the brain was no longer the sole mechanism of intelligence. By ‘embodied’ we mean that it is the body’s interaction with the world in the ‘moment’ that provides the best premise for an intelligent and timely response. By ‘relational’ we mean that ‘decisions’ are made according to an immediate interaction and response between body and world.

In addition, a new field called Artificial Life looked for emergent qualities in much more distributed systems. John Conway’s Game of Life (http://www.bitstorm.org/gameoflife/) is a good, simple demonstration of this emergent quality. Of course these simple life-like systems aren’t displaying ‘intelligence’ (as we know it), but many thinkers began to imagine that this was only a problem of degrees of complexity. The very notion of what intelligence might look like began to shift. Narcissus’ reflection took on another dimension entirely.
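The Game of Life is worth seeing in code because the whole thing is so small: Conway’s two rules (a live cell survives with two or three live neighbours; a dead cell comes alive with exactly three), yet gliders, oscillators and whole menageries of behaviour emerge. A compact step function:

```python
from collections import Counter

def life_step(live):
    """Conway's Game of Life: live is a set of (x, y) live cells;
    return the next generation."""
    # Count how many live neighbours each cell has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# The 'blinker' oscillates between a row and a column of three cells:
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))  # {(1, 0), (1, 1), (1, 2)}
```

Nothing in those two rules mentions blinkers, gliders or growth – that behaviour is entirely emergent, which is the point being made above.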

Robotics took a while to catch up (and I have to cut some corners for clarity’s sake). Eventually a maverick computer scientist, engineer (and, I would argue, philosopher) named Rodney Brooks (working at MIT) suggested we were doing it all wrong. Our robots had generally been slow (due to the complexity of the information and abstraction they were based on), expensive and obsessed with finite control. As any maverick would, he took a completely different approach, building robots that were ‘Fast, Cheap and Out of Control’ (also the title of his well-known paper on the subject).

Many of the robotics projects that makers are playing with today are inspired by Brooks’s project – and so are many of our vacuum cleaners, as the Roomba was initially an offshoot of Brooks’s research. Rather than create an intelligent robot modelled on the human being’s idea of itself, he created systems modelled on the behaviour of insects. One of his famous premises was that if it looks like a duck, it probably is – if it looks like intelligence, then it is probably telling us something about intelligence. It might not even be our intelligence, but that no longer seems to matter.

Rather than try to build a robot that ‘knows’ its world – that carries knowledge of its world – Brooks built robots that were responsive to sensory input from the world in ways that were ‘hard-coded’ (determined). Building a cockroach-like model might start with simple rules: run until light = 0; if noise < 10 then wander, else run until light = 0 (with a few edge conditions for obstacles). Once enacted, such a robot looks a lot like a duck (I mean, cockroach). More importantly, it looks like a cockroach in the lab, in the house, in the street.
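Those cockroach rules can be sketched as a sense-act loop in a few lines. This is only an illustration of the idea, not Brooks’s actual architecture – the function name and the sensor values standing in for hardware reads are my own:

```python
# The hard-coded sense-act loop: no world model, no map. Each tick the
# robot reads raw sensors and reacts according to fixed rules.

def behave(light, noise):
    """Pick an action from raw sensor readings, following the rules:
    run until light = 0; if noise < 10 then wander, else run."""
    if light > 0:       # any light at all: run for darkness
        return "run"
    if noise < 10:      # dark and quiet: wander about
        return "wander"
    return "run"        # dark but noisy: flee again

# The same rules work in the lab, the house, the street:
for light, noise in [(5, 3), (0, 3), (0, 50)]:
    print(light, noise, "->", behave(light, noise))
```

Because nothing here refers to a particular environment, the behaviour carries over to any environment – which is exactly why the robot described in the next paragraph can be dropped anywhere without getting confused.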

You can move this cockroach bot anywhere and it won’t get confused and stop operating – it will continue to act like a cockroach. It no longer requires a model of the world, and so functions well beyond the limits of any sandbox environment – it deals interestingly with complexity. In fact emergent qualities tend to emerge as an intelligence grapples with its new space – with the intriguing notion that perhaps intelligence is a function of a particular coupling between body and world… the ‘mind’ might extend into the world itself. (For more on this see Andy Clark’s notion of the ‘extended mind’.)

What the hell has all this got to do with us? Artbot can take two approaches. As I said before, it might be premised on the notion of an abstracted model of the world in which it can operate – we can draw far more accurately with this approach, and we can always return to a zero point. But what would happen if we abandoned this approach? Instead of telling our robot what to draw, we could use the robot to discover emergent qualities of drawing as a function of its relation with the world through which it moves (or even a sense of its own drawing). It is here that emergent qualities are potentialised – that is, it’s here we might better find what we didn’t know we were looking for.

Perhaps it’s here we get closer to that thing even more difficult than simulating intelligence – that most elusive of qualities: creativity… but for that we need another step, and perhaps (I’d assert) we need to move back to Richard Doyle’s focus on the relationship between body and robot – and it’s there, I think, that the most interesting possibilities emerge…

We can save that step for another day.