The neglect of AI ethics extends from universities to industry
A study by data science firm Anaconda found an absence of AI ethics initiatives in both academia and industry.
Amid a growing backlash over AI's racial and gender biases, numerous tech giants are launching their own ethics initiatives — of dubious intent.
The schemes are billed as altruistic efforts to make tech serve humanity. But critics argue their main concern is evading regulation and scrutiny through “ethics washing.”
At least we can rely on universities to teach the next generation of computer scientists to make AI ethical. Right? Apparently not, according to a new survey of 2,360 data science students, academics, and professionals by software firm Anaconda.
Only 15% of instructors and professors said they’re teaching AI ethics, and just 18% of students indicated they’re learning about the subject.
Notably, these worryingly low figures aren't due to a lack of interest. Nearly half of respondents said the social impacts of bias or privacy were the "biggest problem to tackle in the AI/ML arena today." But those concerns clearly aren't reflected in their curricula.
The AI ethics pipeline
Anaconda’s survey of data scientists from more than 100 countries found the ethics gap extends from academia to industry. While organizations can mitigate the problem through fairness tools and explainability solutions, neither appears to be gaining mass adoption.
Only 15% of respondents said their organization has implemented a fairness system, and just 19% reported they have an explainability tool in place.
The study authors warned that this could have far-reaching consequences:
Above and beyond the ethical concerns at play, a failure to proactively address these areas poses strategic risk to enterprises and institutions across competitive, financial, and even legal dimensions.
The survey also revealed concerns around the security of open-source tools, business training, and data drudgery. But it's the disregard of ethics that most troubled the study authors:
Of all the trends identified in our study, we find the slow progress to address bias and fairness, and to make machine learning explainable, the most concerning. While these two issues are distinct, they are interrelated, and both pose important questions for society, industry, and academia.
While businesses and academics are increasingly talking about AI ethics, their words will mean little if they don't turn into actions.