In 2017, the city of Rotterdam in the Netherlands deployed an artificial intelligence (AI) system to predict how likely welfare recipients were to commit fraud. After analyzing the data, the system developed biases: it flagged as “high risk” people who were female, young, had children, and had limited proficiency in Dutch.
Local governments are using AI without clear rules or policies, and the public has no idea
