From predictive policing to automated credit scoring, algorithms applied at massive scale and left unchecked pose a serious threat to society. Dr. Rumman Chowdhury, director of Machine Learning Ethics, Transparency and Accountability at Twitter, joins Azeem Azhar to explore how businesses can practice responsible AI to minimize unintended bias and the risk of harm.
They also discuss:
- How you can assess and diagnose bias in unexplainable “black box” algorithms.
- Why responsible AI demands top-down organizational change, including new metrics and systems of redress.
- How Twitter audited its own image-cropping algorithm, which was alleged to favor white faces over the faces of people of color.
- The emerging field of responsible machine learning operations ("responsible MLOps").
@ruchowdh
@azeem
@exponentialview
Further resources:
- “Sharing learnings about our image cropping algorithm” (Twitter Blog, 2021)
- “As AI develops, so does the debate over profits and ethics” (Financial Times, 2021)
- “It’s time for AI ethics to grow up” (Wired, 2020)
- “Auditing Algorithms for Bias” (Harvard Business Review, 2018)
HBR Presents is a network of podcasts curated by HBR editors, bringing you the best business ideas from the leading minds in management. The views and opinions expressed are solely those of the authors and do not necessarily reflect the official policy or position of Harvard Business Review or its affiliates.