In the span of a few years, we've gone from a world where machine learning is a curiosity that makes hilariously bad recommendations to one where learned models make increasingly intrusive decisions about all aspects of our daily lives. The very way in which we think about how decisions are made, what it means to be fair and unbiased, and what (or who) is accountable for these decisions is undergoing a radical shift, driven by computational metaphors that are only now entering mainstream discussions in society at large.
This shift presents challenges: how do we adapt the way we design models to address the larger issues of discrimination and bias in society, and how can we instrument our models to provide more clarity about their inner workings? But it also presents opportunities: can we use concepts from data science to build better tools for decision-making?
In this talk I'll present examples of these challenges and opportunities from my own work. I'll discuss the problem of defining fairness mathematically, and how we might build fair models and inspect them. I'll also describe new work on predictive policing that abstracts the problem of feedback in decision systems and uses ideas from reinforcement learning to mitigate it.