“We completely understand the public’s concern about futuristic robots feeding on the human population, but that is not our mission…”
Harry Schoell, CEO, Cyclone Power Technologies Inc.
– The best thing about cheesy ’80s SF on the telly (and Star Wars) was the robots. I was all kinds of disappointed when I saw my first real robot, via the news, and all it did was swing a big ‘arm’ around and do something to help build a car in a Japanese factory.
I suspect now that the thing I found cool about those SF robots was that they were little metal persons. They clearly had more than just processing power and autonomic reactions. They were self-aware, curious, emotional and aware of others. Nothing like real robots at all, then; appalling SF, and even worse telly. But still, they were pretty cool.
In the early nineties, when I was looking at a few theories of mind, artificial intelligence was obviously a useful thing to think about. What would it mean to say that a computer, or an integrated system of machines, could think? That would obviously be a mind, wouldn’t it?
Back then I thought the most important issues, if such machines were invented, would be: “What rights, if any, do we give them?” and “Help, they are going to kill us.” Those are still important issues.
– There’s been a bit of talk about what they are calling ‘The Singularity’, defined as the point at which we invent a machine ‘smarter’ than us. The existence of such a machine would cause a positive feedback loop, with the machine(s) able to further improve both their own descendants and us, upwards along an increasing spiral of alleged awesome. I’m not at all sure what to think about that.
Most of the talk I’d heard about this was from people mocking the boosters of the idea, many of whom, to be fair, do seem a bit strange.
This, though, is a little bit different from that. More prosaic, less utopian, more real.
While the computer scientists agreed that we are a long way from Hal, the computer that took over the spaceship in “2001: A Space Odyssey,” they said there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.
The researchers — leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California — generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.
They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?
A bunch of scientists have had a meeting discussing AI issues; they’ll be releasing their report “this year”. Should be interesting. The NYT article itself is kind of strange. It seems to be offering a warning, but the meeting also seemed to be about letting people know that there is nothing to be alarmed about. (So look over there and don’t worry your good selves. It’s all under control.)
Aside from the criminals and terrorists, I’m more than a bit concerned about what governments and corporations will find doable, perhaps even working together.
I’m not stocking up on tin foil just yet, but some of this stuff needs to start getting regulated, I think. Or at least talked about. That which isn’t already classified, of course.