Timnit Gebru

Executive Director

Distributed AI Research Institute (DAIR)

Timnit Gebru co-authored "Gender Shades" (2018) with Joy Buolamwini, a study demonstrating that commercial facial recognition systems from Microsoft, IBM, and Face++ had significantly higher error rates for darker-skinned women than for lighter-skinned men. The differential was not marginal: the worst-performing system misclassified the gender of darker-skinned women 34.7% of the time, against an error rate below 1% for lighter-skinned men. "Gender Shades" is among the most widely cited empirical studies of algorithmic bias and directly influenced subsequent regulatory scrutiny of biometric AI systems.

In 2020, Gebru co-led Google's Ethical AI team and co-authored "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" with Emily Bender, Angelina McMillan-Major, and Margaret Mitchell. The paper examined the environmental costs, bias amplification, and coherence-without-understanding problems of large language models. Google management asked that the paper either be withdrawn or the authors' names removed before publication. When Gebru declined, her employment was terminated, a dismissal that Google initially characterised as a resignation. The incident prompted significant debate about whether large technology companies can sustain credible AI ethics research internally when that research conflicts with their commercial interests.

Gebru founded the Distributed AI Research Institute (DAIR) in 2021 to conduct AI ethics research outside technology company structures. Her work consistently centres the people most likely to be harmed by AI systems — communities with less power to contest algorithmic decisions — rather than the developers who build them.
