I took an algorithm to court in Sweden. The algorithm won | Charlotta Kronblad


We like to imagine that injustice announces itself loudly. That when something goes wrong in the public system, alarms go off and someone takes responsibility or is held accountable if they do not. But in 2020 in Gothenburg, injustice arrived quietly, disguised as efficiency.

For the first time, the city used an algorithm to allocate places in its schools. After all, working out geographical catchment areas and admissions is an administrative headache for any municipality. What better than a machine to optimise distances, preferences and capacity? The system was designed to serve public efficiency: framed as neutral, streamlined and objective.

But something went terribly wrong. Hundreds of children were allocated places in schools miles from their homes – across rivers and fjords, over major highways, in neighbourhoods they had never visited and had no connection to. Parents stared at the decisions in disbelief. Had anyone checked whether a 13-year-old could reasonably walk that route in winter? What rationale guided these decisions? Were their stated preferences simply ignored? No one in the schools administration seemed able – or willing – to explain what had happened or to address the errors.

I watched this unfold as a researcher in technology and a former lawyer, but also as a mother. My then 12-year-old son was among the children affected by the algorithm. Our frustration grew with the schools administration’s lack of response. Calmly, they told us we could appeal if we had an issue with our placement – as if it were a matter of taste. As if the problem were one of individual dissatisfaction rather than systemic malfunction. Around kitchen tables across the city, the same confusion and anger simmered. Something was off, and the severity of the problem was becoming increasingly clear.

It was nearly a year before city auditors confirmed what many of us had suspected: the algorithm had been given flawed instructions. It had calculated distances “as the crow flies”, not the distances of actual walking routes. Gothenburg has a major river running through it. The failure to factor that in meant children were facing hour-long commutes. Reaching the opposite riverbank by walking or cycling (as the law stipulates is the appropriate way to get to school) was simply not possible for many.
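To see how much the choice of distance measure can matter, consider a minimal sketch in Python comparing a straight-line (haversine) distance with the length of an actual walking route. The coordinates, school position and route length below are purely illustrative assumptions, not the city’s data or a reconstruction of its code.

```python
# Illustrative only: contrast a straight-line ("as the crow flies") distance
# with an actual walking-route distance. All numbers below are made up.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points given in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical home and school on opposite sides of a river.
home = (57.70, 11.95)
school = (57.71, 11.93)

crow_flies = haversine_km(*home, *school)

# A routing engine would instead return the length of a real walking path,
# which has to detour to the nearest bridge. Illustrative value only:
walking_route = 6.8  # km

print(f"straight line: {crow_flies:.1f} km")    # looks like a short trip
print(f"walking route: {walking_route:.1f} km")  # the distance a child actually travels
```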

After an outcry from families, procedures were improved for the subsequent school year. But for roughly 700 children already affected by the faulty algorithm, nothing changed. They would spend their entire junior high years in the “wrong” schools.

The official line was that individual appeals were sufficient. But this misses the point. Algorithms do not merely make isolated decisions; they generate systems of decisions. When 100 children are wrongly placed in schools on the opposite riverbank, they take the places intended for others. Those children are consequently pushed to different schools, displacing others in turn. Like dominoes, the errors cascade. By the fifth or sixth displacement, the injustice becomes almost impossible to detect, let alone to contest and prove in court.
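A toy model makes the cascade visible. The sketch below assigns children to schools greedily, one seat at a time; the schools, names, capacities and preference lists are invented, and the greedy rule is a simplification rather than the city’s actual algorithm. The point is only that a single distorted input reshuffles placements far beyond the child it concerns.

```python
# Toy illustration of the domino effect: one misplaced child consumes a seat
# meant for someone else, who is pushed onward, and so on. Everything here is invented.
def assign(children, capacity):
    """Give each child, in order, the first school on their list with a free seat."""
    seats = dict(capacity)
    placement = {}
    for child, prefs in children:
        for school in prefs:
            if seats[school] > 0:
                seats[school] -= 1
                placement[child] = school
                break
    return placement

capacity = {"West": 1, "North": 1, "East": 1, "South": 1}

# Correct preference lists (nearest school by actual walking route listed first).
correct = [
    ("Alice", ["West",  "North", "East",  "South"]),
    ("Bo",    ["North", "East",  "South", "West"]),
    ("Cleo",  ["East",  "South", "West",  "North"]),
    ("Dana",  ["South", "West",  "North", "East"]),
]

# The same children, except Alice's list is distorted by a straight-line measure
# that ignores the river: "North" now wrongly looks nearest to her.
flawed = [("Alice", ["North", "East", "South", "West"])] + correct[1:]

print(assign(correct, capacity))
# {'Alice': 'West', 'Bo': 'North', 'Cleo': 'East', 'Dana': 'South'}
print(assign(flawed, capacity))
# {'Alice': 'North', 'Bo': 'East', 'Cleo': 'South', 'Dana': 'West'}
# One bad input has changed every child's placement, not just Alice's.
```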

The resulting algorithmic injustice is not an abstract problem, nor one specific to the Swedish context; it painfully echoes recent scandals across Europe. One is the Post Office scandal in the UK, where the Horizon IT system falsely accused hundreds of post office operators of theft, leading to prosecutions, bankruptcies and even imprisonment. For years, the system output was treated as near-infallible. Human testimony was bent to the authority of the machine. Another example is the childcare benefits scandal in the Netherlands, where a system deployed by the Dutch tax authority wrongly flagged thousands of parents as fraudsters. Families were plunged into debt. Many lost their homes. Children were taken into foster care. In both these cases, the algorithmic malfunctions continued for many years, as the automated systems operated behind a veil of technical complexity and institutional defensiveness. Errors multiplied. Harm deepened. Accountability lagged.

Back in Gothenburg in 2020, it became clear to me that simply appealing against my son’s placement would not be enough. You cannot fix a systemic error through individual redress. So, as part of a research project, I sued the city to see what happens when algorithms are taken to court. I did not contest my son’s individual placement but the legality of the entire decision-making system and all its output, arguing that the algorithm’s design violated applicable legislation.

Lacking access to the system, as my repeated requests for disclosure of the algorithm had gone unanswered, I could not present the algorithm to the court. Instead, I conducted a painstaking analysis of hundreds of placements, using addresses and school choices to reconstruct how the system must have operated, and supplied this analysis as evidence.

The city’s defence was breathtakingly simple. They claimed the decision-making system had functioned merely as a “support tool”. They insisted they had done nothing wrong, yet provided no evidence to support the claim: no technical documentation, no code, no explanation of their processes.

And, to my astonishment, they did not have to. The court placed the burden of proof squarely on me. It was my responsibility, the judges said, to demonstrate that the system was unlawful. The analysis of decisions was not enough. Without direct evidence of the code, I could not meet the evidentiary threshold. The case was dismissed. In other words: prove what is in the black box, or lose.

This, more than the initial administrative failure, is what keeps me awake at night. We know that algorithms will sometimes fail. That is precisely why we have courts – to compel disclosure, to scrutinise, and to correct. But when procedural frameworks remain stubbornly analogue, and when the judges lack the tools, the competence and the mandate to interrogate algorithmic systems, injustice will prevail. While our public authorities deploy opaque systems at scale, citizens, confronted with life-altering outcomes, are told to appeal – one by one – without access to the underlying code.

The lessons from the Post Office and the Dutch child benefit scandals echo what I found in Gothenburg. When courts defer to technology rather than interrogate it, and when the burden of proof rests on those harmed rather than those who designed and deployed the system, algorithmic injustice will not only appear, but can go on for years. Even when the technology itself is relatively simple, as in Gothenburg, where the error lay in using bird’s-eye distance rather than actual walking routes, citizens are still confronted with a black box that must be uncovered before it can be contested. In this case: a glass box covered in multiple layers of black wrapping paper.

It is time to demand that our courts open the black boxes of algorithmic decision-making. We need to shift the burden of proof to the party that actually has access to the algorithm, and to design procedural rules for effective systemic redress. Until we adapt our legal procedures to the realities of digital society, we will continue to stumble from scandal to scandal. When injustice is delivered by code in near silence, accountability must answer at full volume.


