Wednesday, June 17, 2015

System Crashes Will Forestall the Robot Apocalypse

The Conversation

Do you find yourself worried by the implications of Humans, Channel 4’s new drama about the exploits of near-human intelligent robots? Have you ever fretted over the apocalyptic warnings of Stephen Hawking and Elon Musk about the threat of superintelligent artificial intelligence? Have your children ever lain awake wide-eyed thinking about killer robots, such as those in Marvel’s film Avengers: Age of Ultron?

If you answered “yes” to any of these questions, you should immediately watch footage from the recent DARPA Robotics Challenge.

The challenge is unusual in that it requires bipedal robots to do only the everyday things humans do: getting out of cars, walking into buildings, climbing stairs, negotiating uneven ground, turning valves, and picking up and using a saw to cut a hole in a wall. Hardly the stuff of science fiction, but to KAIST, the winning team that walked away with the US$2m prize, and to all the teams that failed, it was tough.

The winning robot completed only eight of the nine tasks, many of which would not trouble a seven-year-old. In fact, all but three teams failed the rather basic challenge of getting out of a stationary car, even with no door to complicate matters.

Even simple things are hard

This reveals a cognitive bias: if a machine can do something we find hard, we tend to assume it can easily do the simple stuff as well. Like all biases, this assumption isn’t necessarily true.

We also assume a generality bias: since we can do many different things, we assume that a machine which can do one of them can do the others as well. This conflicts with the way computing research happens, which tends to focus on getting a computer to do one thing (partly because there’s no way to easily research “doing everything”). Machines have grown up in a completely different environment from us, so it shouldn’t come as a surprise they are good at doing different things.

Science fiction still … fiction

The idea of the humanlike machine remains a powerful one. Hollywood, and fiction in general, loves robots: from classic cinema to Channel 4’s Humans, the “machine person” is an easy trope with which to explore complex issues of embodied identity.

In fact robots (from the Czech robota, meaning drudgery or forced labour) emerged not from research but from Rossum’s Universal Robots, a 1920 play by the Czech writer Karel Čapek, which played upon universal fears of the servants, the working class, taking over. So it’s the equivalent of fearing what would happen if Orcs took over London, or of working out how to cope with a zombie apocalypse: it’s fun, but unrelated to reality.

Computers aren’t people

Jaron Lanier says the problem lies with the myth of computers as people, which survives due to a domineering subculture in the technical world. Visions of robots drive researchers on, generating new achievements that feed back into myth-making in fiction, which in turn encourages funding and further research.

In the 1960s, the film 2001: A Space Odyssey saw full artificial intelligence as only ten or 20 years away, a horizon that has remained remarkably constant among experts before and since. Our reactions are channelled by the computers-as-people myth, pushing us to see a choice between stopping Skynet, Terminator-style, and submitting to our new machine overlords. At heart, these fears expose the parallel and competing visions for what computing should be.

Early AI pioneer Alan Turing strongly articulated the vision of the computer as the beginnings of a synthetic human being: his Turing test defines intelligence as conversational behaviour indistinguishable from a human’s.

On the other hand, Douglas Engelbart pioneered an alternative vision: computing as a means to “augment human intellect” (Engelbart also gave us the mouse, bitmapped screens, and the graphical user interface). The closest Hollywood ever got to Engelbart’s vision was Neil Burger’s film Limitless, in which a pill allows humans to use the potential power of their entire brain. But as mere augmentation doesn’t raise the kind of philosophical questions demanded by fiction, it’s unlikely to create a mythology juggernaut.

If you’re worried about AI and the coming robot apocalypse, Lanier points out that while computer power has improved, reliability has not: the time between failures hasn’t changed much in the last 40 years, so a conquered human race need only wait until the next system crash. And in any case, if DARPA’s challenge is anything to go by, shutting your door seems to be a very effective way of keeping robots out.

