How AI reinforces systemic bias
Vox asked me to explain the many, many ways in which AI systems can reinforce biases, a topic that has become even more urgent with the introduction of popular generative AI tools.

A lot of people who pay attention to technology reporting will know about the biases in training data: AI, generally speaking, is designed to recognize patterns and execute tasks based on those patterns. To get there, the systems need to be trained on a data set. If that data set contains biases, the system will learn those biases. For example, if someone builds an AI system to recognize faces and trains it on a data set of mostly white faces, that system will be less effective at identifying nonwhite faces. But the issue of AI bias goes much deeper than training data: it touches our lives every day, and it hurts vulnerable people the most. A toy sketch of the training-data mechanism follows below.

You can read the piece here.
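As a minimal sketch of that mechanism (my own illustration, not from the piece): the synthetic data, the two "groups," and the hand-rolled logistic regression below are all assumptions made for the demo, but they show how a model trained on a 95/5 group split learns the majority group's decision boundary and underperforms on the minority group, even though the model never sees group labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Two-class 2-D blobs; `shift` moves the class means so the
    best decision boundary differs between the two groups."""
    y = rng.integers(0, 2, n)
    x = rng.normal(0.0, 1.0, (n, 2))
    x[:, 0] += np.where(y == 1, 1.5, -1.5) + shift
    return x, y

# Skewed training set: 95% group A, 5% group B.
xa, ya = make_group(950, shift=0.0)
xb, yb = make_group(50, shift=2.0)
X = np.vstack([xa, xb])
y = np.concatenate([ya, yb])

# Minimal logistic regression fit by gradient descent.
X_aug = np.hstack([X, np.ones((len(X), 1))])  # add a bias column
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X_aug @ w))
    w -= 0.1 * X_aug.T @ (p - y) / len(y)

def accuracy(x, y_true):
    x_aug = np.hstack([x, np.ones((len(x), 1))])
    return np.mean((x_aug @ w > 0) == y_true)

# Evaluate on balanced, held-out sets for each group: the model does
# well on the group it mostly saw and noticeably worse on the other.
xta, yta = make_group(1000, shift=0.0)
xtb, ytb = make_group(1000, shift=2.0)
print(f"group A accuracy: {accuracy(xta, yta):.2f}")
print(f"group B accuracy: {accuracy(xtb, ytb):.2f}")
```

The exact numbers depend on the random seed, but the gap between the two printed accuracies is the point: nothing in the model refers to group membership, and the disparity still falls out of who was, and was not, in the training data.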