Researchers have created a machine that they claim can tell if a person is a convicted criminal simply from their facial features. The artificial intelligence, created at Shanghai Jiao Tong University, was able to correctly identify criminals from a selection of 186 photos nine out of 10 times by assessing their eyes, nose and mouth.
The findings add support to an often-discredited view that criminals have particular facial features, suggesting that the structure of someone’s face, including “lip curvature, eye inner corner distance, and the so-called nose-mouth angle”, can identify criminality.
The technology would be highly controversial if deployed, and it raises fears that China could add such information to its surveillance capabilities, which already include a dossier on almost everyone called the dang'an. The files, collected since the Mao era, contain personal and confidential information such as health records and school reports.
As part of the research, Xiaolin Wu and Xi Zhang trained the artificial intelligence with around 1,670 pictures of Chinese men, half of whom were convicted criminals. The pictures analysed were taken from identification cards in which the men, aged 18 to 55, were clean-shaven and holding neutral poses.
Having trained the system, Mr Wu and Mr Zhang then fed it a further 186 images and asked it to sort them into criminals and non-criminals.
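The workflow described above is a standard supervised-learning pipeline: fit a classifier on labelled examples, then measure its accuracy on a held-out set. The sketch below illustrates only that pipeline shape, using synthetic two-dimensional feature vectors and a deliberately simple nearest-centroid classifier; the actual study used real ID photos and more sophisticated models, and nothing here reproduces, or endorses, its claims.

```python
# Illustrative sketch of a train-then-evaluate pipeline, NOT the study's
# method. Features, class separation, and sample sizes are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: ~1,670 samples split evenly between two classes,
# drawn from two overlapping Gaussians standing in for facial measurements.
n = 835
X_train = np.vstack([rng.normal(0.0, 1.0, (n, 2)),
                     rng.normal(1.5, 1.0, (n, 2))])
y_train = np.concatenate([np.zeros(n), np.ones(n)])

# Nearest-centroid classifier: each class is summarised by the mean of its
# training feature vectors; new points get the label of the nearer mean.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def classify(X):
    """Assign each row of X to the class with the nearest centroid."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Held-out evaluation set of 186 samples, mirroring the paper's test step.
m = 93
X_test = np.vstack([rng.normal(0.0, 1.0, (m, 2)),
                    rng.normal(1.5, 1.0, (m, 2))])
y_test = np.concatenate([np.zeros(m), np.ones(m)])

accuracy = (classify(X_test) == y_test).mean()
print(f"held-out accuracy: {accuracy:.2f}")
```

Note that a high held-out accuracy on synthetic, well-separated data says nothing about real-world validity; as the critics quoted below argue, small and biased datasets let classifiers latch onto spurious correlations.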
The accuracy of its guesses, which were based on features it associates with criminality, led the researchers to claim that, “despite the historical controversy”, people who have committed a crime have certain unique facial features.
“The faces of general law-abiding public have a greater degree of resemblance compared with the faces of criminals, or criminals have a higher degree of dissimilarity in facial appearance than normal people,” said Mr Wu and Mr Zhang.
The researchers acknowledge that more work covering different races, genders and facial expressions would be required before the tool could be widely used.
The research could add to China’s vast security apparatus, which already includes AI-based “predictive policing”.
Earlier this year, Beijing hired the China Electronics Technology Group, the country’s largest defence contractor, to create an AI that can analyse the behaviour of people in CCTV footage for signs that they’re about to commit an act of terror.
Once complete, the system will be used to predict “security events” so that police or the military can be deployed in advance.
Digital rights experts warned that using AI in this way could be dangerous and that “reaching generalised conclusions from such small data poses huge problems for innocent people”.
Dr Richard Tynan, technologist at Privacy International, said: “This is no different than Craniometry from the 1800s, which has been debunked. In fact, the problem runs much deeper because it can be impossible to know why a machine has made a certain decision about you.
“It demonstrates the arbitrary and absurd correlations that algorithms, AI, and machine learning can find in tiny data sets. This is not the fault of these technologies but rather the danger of applying complex systems in inappropriate contexts.”