Strange things are afoot at the Circle K

“We completely understand the public’s concern about futuristic robots feeding on the human population, but that is not our mission…”
Harry Schoell, CEO, Cyclone Power Technologies Inc.

– The best thing about cheesy ’80s SF on the telly (and Star Wars) was the robots. I was all kinds of disappointed when I saw my first real robot, via the news, and all it did was swing a big ‘arm’ around and do something to help build a car in a Japanese factory.

I suspect now that the thing I found cool about those SF robots was that they were little metal persons. They clearly had more than just processing power and autonomic reactions. They were self-aware, curious, emotional and aware of others. Nothing like real robots at all then, appalling SF, and even worse telly. But still, they were pretty cool.

In the early nineties, when I was looking at a few theories of mind, Artificial Intelligence was obviously a useful thing to think about. What would it mean to say that a computer, or an integrated system of machines, could think? That would obviously be a mind, wouldn’t it?

Back then I thought the most important issues, if such machines were invented, would be: “What rights, if any, do we give them?” and “Help, they are going to kill us.” Those are still important issues.

– There’s been a bit of talk about what they are calling ‘The Singularity’, defined as the point at which we invent a machine ‘smarter’ than us. The existence of such a machine would cause a positive feedback loop, with the machine(s) being able to further improve both their own descendants and us, upwards along an increasing spiral of alleged awesome. I’m not at all sure what to think about that.

Most of the talk I’d heard about this was from people mocking the boosters of the idea, many of whom, to be fair, do seem a bit strange.

This, though, is a little bit different from that. More prosaic, less utopian, more real.

While the computer scientists agreed that we are a long way from Hal, the computer that took over the spaceship in “2001: A Space Odyssey,” they said there was legitimate concern that technological progress would transform the work force by destroying a widening range of jobs, as well as force humans to learn to live with machines that increasingly copy human behaviors.
The researchers — leading computer scientists, artificial intelligence researchers and roboticists who met at the Asilomar Conference Grounds on Monterey Bay in California — generally discounted the possibility of highly centralized superintelligences and the idea that intelligence might spring spontaneously from the Internet. But they agreed that robots that can kill autonomously are either already here or will be soon.

They focused particular attention on the specter that criminals could exploit artificial intelligence systems as soon as they were developed. What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?

A bunch of scientists have had a meeting discussing AI issues, and they’ll be releasing the report “this year”. Should be interesting. The NYT report itself is kind of strange. It seems to be offering a warning, but the meeting also seemed to be about letting people know that there is nothing to be alarmed about. (So look over there and don’t worry your good selves. It’s all under control.)

Aside from the criminals and terrorists, I’m more than a bit concerned about what governments and corporations will find do-able, even working together.

I’m not stocking up on tin foil just yet, but some of this stuff needs to start getting regulated, I think. Or at least talked about. That which isn’t already classified, of course.

10 thoughts on “Strange things are afoot at the Circle K”

  1. Launching a drone that then totally autonomously engages and destroys a target with a Hellfire missile arguably already breaks the “rules” as postulated by Asimov.

    “Smarter” always seems to be about sentience, but a sentient machine would, presumably, develop free will and morality, and one doubts that Pentagon planners would wish to engage in an ethical debate with their missile before it agrees to destroy itself and a school which may house a Taliban HQ.

    The drone above is already “smarter” in at least one way (its target acquisition systems) than a human. It is only a matter of time before battlefield systems like tanks become fully automated, with command links that allow for a fully automatic mode.

    So, I agree – it is well past time for bodies like the U.N. to develop a convention on robot weapons.

  2. Why not tie back responsibility for a drone’s actions to the person or agency which launched it? As in, if you deploy an armed drone, then any and all actions taken and harm caused by the drone were legally taken by you.

    There are implementation problems, and problems with chains of command and ascertaining responsibility, but wouldn’t this be a strong starting principle?

    L

  3. I’m with Lew: we hold people responsible for the direct and sometimes indirect consequences of their actions. Why can’t we just hold the people responsible for the consequences of their robots?

  4. Here’s the press release, but the funniest bit is the Safe Harbor Statement:

    Certain statements in this news release may contain forward-looking information within the meaning of Rule 175 under the Securities Act of 1933 and Rule 3b-6 under the Securities Exchange Act of 1934, and are subject to the safe harbor created by those rules. All statements, other than statements of fact, included in this release, including, without limitation, statements regarding potential future plans and objectives of the company, are forward-looking statements that involve risks and uncertainties. There can be no assurance that such statements will prove to be accurate and actual results and future events could differ materially from those anticipated in such statements. The company cautions that these forward-looking statements are further qualified by other factors. The company undertakes no obligation to publicly update or revise any statements in this release, whether as a result of new information, future events or otherwise.

  5. In a battle between regulation and technology, especially I.T. or computer-based technology, technology will win simply because regulation cannot keep up with such rapid development. Look at how the recording industry and governments continually go after file-sharers, yet the threat keeps changing. From the early peer-to-peer networks to torrents and proxy servers… what comes next is unpredictable and unregulatable.

    I’m with Lew: we hold people responsible for the direct and sometimes indirect consequences of their actions. Why can’t we just hold the people responsible for the consequences of their robots?

    I agree in principle, i.e.: If you let a crazy killer drone go and it blows the wrong person up, then you have to suffer the consequences. But if I may take it a bit further, who (or what) should be held responsible should the robot do something completely unpredictable? Is there any degree to which the owner could be held not responsible for the actions of his property? This need not apply only to military drones or robots…

    Btw, I enjoyed that NYTimes article, but I too believe it massively overplays the AI “threat” – to my knowledge, there is no other field that has produced such a vast amount of optimism and subsequent disappointment.

  6. Corey,

    I agree in principle, i.e.: If you let a crazy killer drone go and it blows the wrong person up, then you have to suffer the consequences. But if I may take it a bit further, who (or what) should be held responsible should the robot do something completely unpredictable? Is there any degree to which the owner could be held not responsible for the actions of his property? This need not apply only to military drones or robots…

    I’d be comfortable with enshrining the idea that people are responsible for the actions of robots they could reasonably have foreseen, or for not having taken reasonable care, which, I assume, would cover them not taking the time to figure out what they should be able to foresee. Similarly, stepping up the culpability in the “with reckless disregard” kind of space.

  7. Corey,

    In a battle between regulation and technology, especially I.T. or computer-based technology, technology will win simply because regulation cannot keep up with such rapid development.

    I completely agree with this, but that isn’t to say that there shouldn’t be norms and regulations codified in law, even if they are hopelessly optimistic. Of course, ‘the terrorists’ will always find a way to turn good tech bad – but as a starting point, we need to establish means by which good tech cannot be purposefully turned bad by the good guys as well.

    Anita and Corey, you’re right – a driver isn’t held culpable if their accelerator gets stuck or their brakes fail due to a manufacturing defect or general unforeseeable failure. This should also be true of robots and such – although with the proviso that a lot more would seem to be foreseeable when you arm an (artificially) intelligent machine.

    Btw, I enjoyed that NYTimes article, but I too believe it massively overplays the AI “threat” – to my knowledge, there is no other field that has produced such a vast amount of optimism and subsequent disappointment.

    I have two words for you: flying car.

    L

  8. I have two words for you: flying car.

    I’ll be okay, as long as nobody mentions that Moon Base we were all promised…

  9. “Should denying the people a moon base as part of good space exploration be a criminal offence?”

    Do you think we can get 300,000 signatures?
