Better coding, not just laws and regulations, is the solution for tech’s failure to address the needs of actual humans
Calls for stronger government regulation of large technology companies have become increasingly urgent and ubiquitous. But many of the technology failures we hear about every day—including fake news, privacy violations, discrimination, and filter bubbles that amplify online isolation and confrontation—have algorithmic failures at their core.
For problems that are primarily algorithmic in nature, human oversight of outcomes is insufficient. We cannot expect, for example, armies of regulators to check for discriminatory online advertising in real time. Fortunately, there are algorithmic improvements that companies can and should adopt now, without waiting for regulation to catch up.
Given their frequent media portrayal as mysterious black boxes, we might worry that rogue algorithms have escaped their creators' ability to understand and rein in their behavior. The reality is thankfully not so dire. In recent years, hundreds of scientists in machine learning, artificial intelligence and related fields have been working hard at what we call socially aware algorithm design. Many of the most prominent and damaging algorithmic failures are well understood (at least in hindsight) and, furthermore, have algorithmic solutions.