Adam Spruce
I GREW UP in New York City - greatest city in the world, but Boston comes a close second. I was born in 2000, a millennium baby, and brought up to believe I was special because I was one of the Millennium Cohort, a group of us all born on the same day. Every year for the first 18 years we had assessments at the Massachusetts Institute of Technology's developmental research facility. I guess it was inevitable that I would be interested in the learning process, but I've gone and made it my job.
I leave the house at about 8am. Mandy stays behind with the kids: she works in a virtual office at home, and they learn from the Net. I work with two robots, Anna and Pete, both Kismet-12s. They were created six months ago, so they are still children in robot terms.
They are cute. They were designed to be lovable. People won't engage in the same way with something that is just metal and plastic. But they don't think like we do, and they get insulted if I treat them too much like humans.
"We have our own needs," they say, and it's true. Where we eat food and drink water, they need a service of their mechanical parts. They do have eyes, ears and hands, so they can build up a picture of the outside world. Their brains are microchip versions of our neural networks, built at Imperial College in London.
In some ways, Anna and Pete are a lot like you or me. They are curious about the world. They figure out what happens in a certain situation and use that information to make decisions the next time around. They sleep by shutting down and starting up again, which we think is to refresh their emotional circuits. They can sense if a human is angry or sad through visual and aural cues. They listen, they talk. They even picked up my sense of humour - I didn't teach them, they just started making jokes. After a while I stopped finding them funny, but they would probably tell you they are just too sophisticated for me now.
When people first started creating thinking robots, there was a lot of controversy over whether they were truly conscious or not. Alan Turing, the British mathematician, wrote a paper on the subject nearly 100 years ago: if you could have a conversation with a machine and not be able to tell it from a human, the machine must be conscious. But he was wrong: you can tell the difference, because they tell you. They have a different kind of consciousness: they know what they are, they are self-aware. And they know that I'm different. If I haven't taken my lunch break by 2pm, they nag me: "Go eat, or you'll be grumpy later." I go down to the bay and watch the sea. You used to see whales out there, someone told me.
Anna and Pete are research robots. We use them to learn more about human intelligence. Some conscious robots are more practical: they chauffeur cars, fly aircraft, go off on missions for the military. Fifty years ago people worried that they might turn bad, like Frankenstein's monster. But I've never heard of a robot going on the rampage. They all have a human protection element. An evil robot creator could override it, but why bother? If you want to wipe out humans it's easier to buy a chemical weapons kit off the Net. Non-conscious machines are more dangerous.
Part of my job is to explain to Anna and Pete how things work in the world. It's like training an apprentice. Most of their knowledge is second-hand, at least to start with. They were created by cloning from an older machine, Carla, so they started out with a large knowledge base. But the more they ask questions and have their own sensations, the more they develop their own personalities. Anna is chirpier, Pete more thoughtful. Neither ever complains, though, and that's the great thing about them: they are much less demanding than humans.
Sometimes I stay late at work, because in a certain mood I would almost rather spend time with them than with my family. Don't get me wrong; I love my family. But the robots, on an emotional level, are simpler. They are not insecure. They don't get resentful, they don't throw tantrums, they don't act irrationally. If I ask them to leave me alone for an hour, they do, just like that. They are happy as they are. They have no competitiveness, no survival agenda.
They are not scared of dying; they know that one day they will just stop existing and their data will be passed on to another robot.
Interview by Kathy Brewis
Portrait by Daniel Mackie