Bruce Sterling: Since when do machines need an ethical code? For 80 years, visionaries have imagined robots that look like us, work like us, perceive the world, judge it, and take action on their own. The robot butler is still as mystical as the flying car, but there’s trouble rising in the garage.

In Nobel’s vaulted ballroom, experts uneasily point out that automatons are challenging humankind on four fronts.

First, this is a time of war. Modern military science is attempting to pacify tribal peoples with machines that track and kill by remote control. Even the resistance’s weapons of choice are unmanned roadside bombs, commonly triggered by transmitters designed for radio-controlled toys.

The prospect of autonomous weapons naturally raises ethical questions. Who is to be held morally accountable for an unmanned war crime? Are machines permitted to give orders? In a world of networked minefields and ever-smarter bombs, are we blundering into mechanized killing fields we would never have built by choice?

The second ominous frontier is brain augmentation, best embodied by the remote-controlled rat recently created at SUNY Downstate in Brooklyn. Rats are ideal lab animals because most anything that can be done to a rat can be done to a human. So this robo-rat, whose direction of travel can be determined by a human with a transmitter standing up to 547 yards away, evokes a nightmare world of violated human dignity, a place where Winston Smith of Orwell’s 1984 isn’t merely eaten by rats but becomes one.