What should you do when a machine’s decision does you wrong?
The overlooked problem with AI-driven tools. Recent controversies over data-driven prediction show that shifting decisions from humans to machines can go badly wrong. These cases raise concerns about racial and gender disparities embedded in machine learning tools, and prompt the question of whether AI systems can be actively hostile to people because of their race or gender identity. Aziz Huq, a law professor, examines the potentially disastrous consequences of badly designed machine decision-making, which increasingly looks likely to exacerbate the very problems it claims to reduce.