Van der Vliet and other welfare advocates I met on my trip, like representatives from the Amsterdam Welfare Union, described what they see as a number of challenges faced by the city's some 35,000 benefits recipients: the indignities of having to constantly re-prove the need for benefits, the increases in cost of living that benefits payments do not reflect, and the general feeling of distrust between recipients and the government.

City welfare officials themselves recognize the flaws of the system, which is "held together by rubber bands and staples," as Harry Bodaar, a senior policy advisor to the city who focuses on welfare fraud enforcement, told us. "And if you're at the bottom of that system, you're the first to fall through the cracks."

So the Participation Council didn't want Smart Check at all, even as Bodaar and others working in the department hoped that it could fix the system. It's a classic example of a "wicked problem," a social or cultural issue with no one clear answer and many potential consequences.

After the story was published, I heard from Suresh Venkatasubramanian, a former tech advisor to the White House Office of Science and Technology Policy who co-wrote Biden's AI Bill of Rights (now rescinded by Trump). "We need participation early on from communities," he said, but he added that it also matters what officials do with the feedback, and whether there is "a willingness to reframe the intervention based on what people actually want."

Had the city started with a different question, what people actually want, perhaps it might have developed a different algorithm entirely. As the Dutch digital rights advocate Hans De Zwart put it to us, "We are being seduced by technological solutions for the wrong problems. Why doesn't the municipality build an algorithm that searches for people who do not apply for social assistance but are entitled to it?"

These are the kinds of fundamental questions AI developers will need to consider, or they run the risk of repeating (or ignoring) the same mistakes over and over again.

Venkatasubramanian told me he found the story to be affirming in highlighting the need for those in charge of governing these systems "to ask hard questions, starting with whether they should be used at all."

But he also called the story humbling: "Even with good intentions, and a desire to benefit from all the research on responsible AI, it's still possible to build systems that are fundamentally flawed, for reasons that go well beyond the details of the system's construction."

To better understand this debate, read our full story here. And if you want more detail on how we ran our own bias tests after the city gave us unprecedented access to the Smart Check algorithm, check out the methodology over at Lighthouse. (For any Dutch speakers out there, here's the companion story in Trouw.) Thanks to the Pulitzer Center for supporting our reporting.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.