
Robots are becoming an inevitable part of our future.

But questions remain over whether the increased use of artificial intelligence will be a good thing for humanity.

Now academics are becoming concerned that autonomous machines will break the law – and we will be powerless to stop them.

In a study published in the Vanderbilt Journal of Entertainment & Technology Law, Amitai Etzioni and Oren Etzioni from the Allen Institute for Artificial Intelligence in Seattle discussed the potential for robots to break the law.

The problem is not a hypothetical one.

Recently, a Google self-driving car was pulled over for travelling too slowly.

‘But who should the policeman have cited?’ the researchers asked.

Whether the fault lay with the passenger, the car or the person who wrote the car’s software was unclear.

Because artificial intelligence learns as it goes along, the researchers say, a self-driving car could begin exceeding speed limits by small amounts without consequence – and eventually end up speeding and causing an accident.

‘AI programs may stray considerably from the guidelines their programmers initially gave them. Indeed, smart instruments may counteract their makers’, the researchers wrote.

‘A self-driving car may note that other cars exceed the speed limit by a few miles per hour without harm or consequences and increase its own speed accordingly – more and more.’

To make sure robots do not go haywire, the academics called for the introduction of ‘AI guardians’ to watch over the machines.

They also said a ‘readily locatable off switch’ should be there in case of emergency.

Without such safeguards, they warn, machines could make life-and-death determinations outside of human control.

‘Unassisted human agents—from auditors and accountants to inspectors and police—cannot ensure that smart instruments abide by the law,’ the authors added. 

Because growth in human intelligence is unlikely to keep pace with growth in artificial intelligence, humans may have to draw on AI to keep AI in check, the researchers say.

In a report published earlier this year, Human Rights Watch highlighted that if a robot unlawfully kills someone in the heat of battle, nobody can be held responsible for the death.

The organisation said something must be done about this lack of accountability – and it is calling for a ban on the development and use of ‘killer robots’.

Called ‘Mind the Gap: The Lack of Accountability for Killer Robots’, the report details the legal hurdles to holding anyone accountable when robots kill without human control.

‘No accountability means no deterrence of future crimes, no retribution for victims, no social condemnation of the responsible party,’ said Bonnie Docherty, senior Arms Division researcher at Human Rights Watch and the report’s lead author.

Image credit: Melinda Sue Gordon
Article via: Daily Mail