Should robots eat? Should they carry weapons, or win patents? These are the questions we need to answer as automation advances.

Most people’s expectations of robots are driven by fantasy. These marvelous machines, optimists hope, will follow Moore’s law, doubling in quality every 18 months, and lead to a Jetsonian utopia. Or, as pessimists fear, humanoid bots will reproduce, increase their intelligence, and wipe out humanity.

Both visions are wrong. The artificial intelligence needed to animate robots remains several orders of magnitude less capable than what’s required. We have yet to master software engineering or self-organization, and until we do, our most intelligent designers cannot play in the same league as Mother Nature.

My definition of a robot is any device controlled by software that can work 24/7 and put people out of work. The machines are not intelligent. They cannot comprehend Isaac Asimov’s Three Laws of Robotics to protect and obey humans before preserving themselves. Yet they are all around us. In case you missed them, today’s most popular robots are ATMs and computer printers.

While our hopes for and fears of robots may be overblown, there is plenty to worry about as automation progresses. The future will have many more robots, and they’ll most certainly be much more advanced. This raises important ethical questions that we must begin to confront.

1. Should robots be humanoid? Humanlike robots today are showbots, created for marketing purposes. They allow corporations to display technological machismo, wooing consumers to trust their cars and stereos. The risk is not humanoids running amok, but that as these electronic puppets become more lifelike, they become door-to-door spambots who trick people into buying snake oil and junk bonds.

2. Should humans become robots? We are nearing an age in which humans and computers may be connected via direct neural interfaces, technology indistinguishable from telepathy and telekinesis. In the input direction, computers might use electrodes to format information for our brains to understand. In the output direction, humans might be trained to think in distinct ways so that sensors and software could classify thoughts into signals to control equipment. While potentially beneficial for paraplegics, there is also the frightening possibility of using animals as cheap, disposable robot bodies.

3. Should robots excrete byproducts? When cars were invented, no one imagined that hundreds of millions of them would spew carbon monoxide into the atmosphere. But they do, and yet we still feel entitled to drive them. Imagine the pollution levels if we add hundreds of millions of robots powered by internal combustion engines.

4. Should robots eat? There are proposals to allow robots to gain energy by combusting biological matter, either food or waste items. If this mode of fuel becomes popular, will we really want to compete for resources against our own technological progeny?

5. Should telerobotic labor be regulated? A telerobot is an electronic puppet controlled across a wire by a human using a PC and devices like joysticks and gloves. Consider replacing the on-site operator with a $10-per-day handler in an overseas call center. Instead of outsourcing jobs, we could import brains over broadband to manage machinery in factories, to teach in schools, or to clean houses. Should local labor laws apply to overseas workers who telecommute?

6. Should robots carry weapons? We must distinguish autonomous robot weapons from remote control armaments – unmanned telerobots supervised by humans. The ethical difference between the two: Who’s responsible for pulling the trigger?

7. Should machines be awarded patents? Evolutionary software has already designed simple circuits, as well as physical mechanisms like the ratchet and cantilever. As these automatic design systems improve and progress from simple geometric forms to novel integrated systems, intellectual property laws must change. If a robot invents, does the patent go to its owner or the patent holder of its artificial intelligence?

These questions are the beginning of a dialog that should precede, rather than react to, the enormous social, economic, and legal changes wrought by continued automation. Managed correctly, the increased labor and intelligence provided by machines can lead to greater human prosperity and improved conditions on Earth. We need reasonable policies informed by the robots of reality, not of fantasy.

Jordan Pollack is a professor of computer science and complex systems at Brandeis University.