Case Study: Breaking Bias in AI

Machines, like children, model what they are taught. They start out bias-free but can quickly learn, and even amplify, biased human behavior. This talk shows how to mitigate bias by first understanding how it is introduced and then being intentional about removing it, so you can start to build systems that are bias-free. S. A. M. (Suspicious Activity Monitor), a predictive policing model, serves as the case study. Attendees will learn how human bias is passed on to machines, practical strategies for identifying and removing bias, and tips for making machine learning algorithms transparent.

  • Kesha Williams, Software Engineering Manager, Chick-fil-A