This blog is part of a series on ‘the Politics of Food and Technology’, in collaboration with the SOAS Food Studies Centre. All of the blogs in this series are contributions to the similarly titled panel at the International Humanitarian Studies Association (IHSA) Conference in Istanbul-Bergen, October 2025. To read the rest of the blogs in this series, please click here.
Digital tools promise efficiency and impartiality in humanitarian response. In food aid, biometric systems are meant to ensure that the ‘right’ people receive assistance. But when the verification of need depends on being readable by a machine, accountability shifts. Drawing on field experience in South Sudan, Hayley Umayam explores how exclusions come to look like a system error rather than a downstream effect of human decision-making.

Needs-based programming is the organizing principle of most contemporary humanitarian action. In South Sudan, where millions require assistance each year, and resources are consistently insufficient to meet needs, organizations justify allocation choices through a ‘logic of impartiality’: aid should go to those most in need. This logic is increasingly operationalized through digital and technocratic systems designed to make suffering measurable, commensurable, quantifiable, and thus ‘governable’.
Over the past decade, humanitarian agencies have turned to digital tools like fingerprint scanners and unique digital identifiers to manage service delivery. These tools promise accuracy and efficiency, an appeal that is easy to understand in a world of shrinking aid budgets and growing demand. They offer a way to demonstrate that limited resources are used responsibly and that assistance is delivered to the “right” people, thereby reinforcing claims of impartiality. There are plenty of technological evangelists, too, highlighting the potential use of Artificial Intelligence or Machine Learning in ‘streamlining’ the aid process.
Within this paradigm of impartiality-through-efficiency, accountability becomes largely procedural. It risks being defined less by relationships with affected communities than by the ability to show that needs-based logic has been correctly applied. If you can demonstrate that you used the right indicators, vulnerability criteria, and verification procedures, with some level of “community buy-in”, you are seen as accountable. In other words, claiming that “the most in need” were reached is a way of demonstrating impartiality, and accountability becomes a matter of legitimizing hard choices in contexts where almost everyone can qualify as in need. Paradoxically, humanitarian hyper-prioritization may actually reduce the number of people who can access aid.
South Sudan makes the limits of this approach especially visible. Routinely described as complex and protracted, it is a setting where identifying the “most in need” is not only contested but, in practice, impossible to do in any complete sense. Selection is therefore less about discovering need than about justifying exclusion in the most acceptable way under conditions of scarcity.
When I reflect on the promises and risks of digitalization in these conditions, I return to a moment early in the rollout of biometric systems at food distributions I helped monitor. The encounter may seem mundane, but it shows how core ideas of need, accountability, and responsibility are shifting as humanitarian action becomes increasingly digitally mediated.
“Before the computer, we used to get food”
At a food distribution site in Lakes State, a woman presses her finger onto a biometric scanner. The machine beeps, and the screen shows a red X: Not matched. She wipes her hand, prays, and tries again. After several attempts, the screen finally turns green. The next woman in line is less fortunate. Her fingerprints fail repeatedly. After trying multiple machines, she is sent home without food, her distress visible.
“They have brought computers in and these useless cards that make some of us not get food,” she says. “Before, without the computer and with our previous cards, we used to get food.”
During these early months of biometric rollout, moments like this were common. Fingerprint readers often struggled with calloused, dusty, or sooty hands. People waited anxiously to undergo a process they did not fully understand. Some prayed before placing their finger on the device; others cried with relief when the screen flashed green. And when it didn’t, there was little to be done but blame the computer.
The long social and moral labor of being selected, being summoned for a distribution, queuing, and presenting oneself as deserving collapses into a single, opaque interaction between body and machine. At that moment, one’s neediness is technical, not social or relational.
“It’s the system that decides”
Frontline staff experienced these moments of biometric failure with their own mix of frustration, sympathy, and resignation. They had been trained on the new equipment, but they could not control how the machines behaved. When the screens displayed error messages, there was often little they could do to fix the problem on the spot. They could not see inside the system or override its judgement. While they could log exclusions in the hope of a ‘catch-up’ distribution cycle, I seldom saw these logs mentioned in upstream reporting. In practice, an unrecognized fingerprint simply meant no food, while a distribution that adhered to its list of scannable beneficiaries ticked the box of impartiality.
Biometric systems were introduced into an already tense moral terrain. Even before digitalization, frontline staff were the face of decisions that they often had no control over. Caseload numbers were set elsewhere, and it was the unenviable task of field teams to turn those inevitably constrained numbers into a verified list of the “most in need.”
In this context, some staff began to see digital tools as a buffer against the reactions of the affected-but-excluded. Instead of saying “we cannot assist you”, staff could say “the system does not recognize you”.
Who is accountable for technical errors?
Some of these early rollout issues have been partially mitigated over time. Nevertheless, the encounter at the scanner still matters because it offers a glimpse into how humanitarian need and accountability are being reconfigured, a shift that will likely only deepen as digital aid practices expand.
Exclusion appears as a technical error rather than a consequence of prioritization and human decision-making. This sustains a humanitarian fantasy of impartial needs-based programming in which responsibility defaults to technical systems and procedures. By transforming moral and political decisions into technical ones, humanitarian organizations can maintain legitimacy amid chronic shortfalls while displacing responsibility onto machines and caseloads. This procedurally legitimizes needs-based distributions while making certain bodies invisible, producing a formal sense of impartiality even as real-world access remains uneven. Meanwhile, those whose fingerprints cannot be recognized have little recourse and few avenues for holding anyone accountable.
None of this means digital tools should be rejected outright. In many contexts, they can limit some forms of abuse and allow aid to reach people who might otherwise be excluded. But if we evaluate them only in terms of their supposed efficiency or as neutral tools of impartiality, we miss how they redistribute responsibility, normalize exclusion, and translate need into something that exists only when a system can verify it.
Opinions expressed in Bliss posts reflect solely the views of the author of the post in question.
About the author:

Hayley Umayam is a PhD candidate at the Geneva Graduate Institute. Her research focuses on the politics of knowledge and expertise in famine and mass starvation. She holds an MA in Peace and Justice Studies from the University of San Diego.
Are you looking for more content about Global Development and Social Justice? Subscribe to Bliss, the official blog of the International Institute of Social Studies, and stay updated about interesting topics our researchers are working on.