Politics of Food and Technology Series | Beyond the Numbers: Humanitarian Response in the Absence of Data


This blog is part of a series on ‘the Politics of Food and Technology’, in collaboration with the SOAS Food Studies Centre. All of the blogs in this series are contributions made at the International Humanitarian Studies Association (IHSA) Conference in Istanbul-Bergen, October 2025, to the panel of the same name. To read the rest of the blogs in this series, please click here.

In this blog, Jeremy Taylor (PhD), Regional Head of Advocacy at the Norwegian Refugee Council, looks at some of the programming issues inherent within a reliance upon (good!) data in the humanitarian space, proposing some points for further discussion and improvement to build a digitally resilient humanitarian system.

Operational INGOs are grappling with two different yet intersecting dynamics. Firstly, the aid system is powered by numbers and is heavily reliant on the accompanying digital platforms for storing and analysing a range of quantitative instruments: needs analyses, severity indices, caseload targets, and response monitoring frameworks. Secondly, in many frontline contexts the numbers are partial, delayed, politicized—or simply unavailable. When digital systems are fragile or authorities restrict information, our dependence on purely quantitative proof risks excluding the very people we exist to serve.

This is the double-bind the humanitarian industry now inhabits: we are more dependent than ever on quantified proof to allocate, prioritize and report, while the very places that need aid most are increasingly data-poor, data-contested or data-controlled.

Consider Sudan in early 2024. A nationwide communications blackout severed internet and phone networks for tens of millions, stalling assessments, beneficiary verification and even basic security checks. Humanitarian agencies warned operations could not continue without connectivity, and OCHA dashboards logged communications blackouts as a binding constraint across clusters. In a response architecture that assumes constant digital reporting, the data pipeline simply collapsed.

Or Ethiopia in 2021. An effective blockade of the Tigray region drastically curtailed the ability of aid agencies and the UN to conduct assessments. Not only was humanitarian space significantly impacted by political pressure, but the physical access constraints also carried with them data gaps that could not be plugged. While the Tigrayan authorities claimed there was a famine, the lack of telecoms, fuel and physical access to many parts of the region meant the quantitative data bar for a famine declaration could not be fully met. The episode revealed the need for additional qualitative indicators, given how quickly a core evidence channel can be closed. It also showed that the limited international response and accountability mechanisms that do exist, such as UN Security Council resolution 2417 on the use of starvation as a weapon of war, are ultimately reliant on a credible data-based determination of food insecurity—which, in this case, was not possible to obtain.

Sometimes the choke point is formalized. In Burkina Faso, what began as de facto restrictions has been codified as a visa statistique regime: any survey, census or statistical study must obtain prior authorization from the national institute, with detailed procedures governing approvals. At the same time, public statistics for internally displaced people (IDPs) have not been updated since March 2023, leaving planners to operate against stale baselines they cannot independently refresh. The intent may be order and security; the effect is constrained verification and publication, and ultimately constrained and patchy service delivery.

Compounding this is a second, overlapping reality: sweeping budget cuts. By early 2025 the system was told to “hyper-prioritize”—and “do more with less”. In addition to the sweeping cuts by the United States, further reductions are expected from many traditional donors in 2026 and beyond, with few new faces around the OECD DAC table. In this climate, prioritization becomes a requirement for almost all parts of the aid system—and, inevitably, it is determined by the evidence we can marshal. When that evidence is uneven, the risks of mis-prioritization magnify.

For an operational agency, the contradictions show up in small, human ways. We try to distinguish between displaced families and impoverished hosts who share needs indicators. We worry about counting the same person three times as frontlines shift and people are displaced multiple times. We see local authorities nudge figures up to unlock supplies, or down to preserve legitimacy. We sit with community leaders who have learned, over years, which phrases trigger which boxes on which forms. And all the while, the planning machine asks us for clean imputations that field reality does not provide.

All of this leaves us with some questions.

What level of uncertainty are we prepared to accept to save lives? We have become adept at demanding high-frequency, comparable indicators; we are less comfortable acting on imperfect, triangulated signals. Yet places in blackout or under blockade will not produce gold-standard datasets. Are donors and agencies willing to define explicit “no-regrets” thresholds—a ladder of evidence that, once crossed, unlocks time-bound, life-saving response even when the denominator is fuzzy?

How should algorithmic or formula-based allocations treat invisibility? Many institutional donor funding models effectively reward measurable burden and penalize missing data. In a year of cuts, that potentially shifts resources toward where surveillance is strongest, not necessarily where need is greatest. Should allocation formulas include an “uncertainty margin” for data-denied contexts, weighted by independent access analysis and expert consensus, so that lack of visibility does not equal lack of value?

Can we protect impartial analysis space from political veto—without losing the ability to operate? The IPC experience in Ethiopia—and the hyper-contested statistics environment more broadly—shows how easily analysis can be stymied. What minimum guarantees (on methods, publication and dissenting notes) are we, as a system, willing to insist on before we put our logos on a consensus number? And if those guarantees are absent, can we normalize transparent ranges and scenario narratives rather than offering a false precision open to further political manipulation?

Where are our “minimum viable indicators” when digital systems fail? Sudan’s blackout laid bare our dependency on connectivity. What is the offline core data or indicators—two or three proxies per sector—that can be collected safely and quickly, with paper-first redundancies and simple integrity checks, to steer assistance for weeks at a time? If we cannot answer that now, we will keep rediscovering this vulnerability in every conflict with degraded infrastructure and intentional blockages.

How can we include data collection and dissemination as integral to the protection of humanitarian space? The Burkina Faso visa statistique offers a glimpse of a future where data permission is proceduralized as much as physical access – and becomes another indicator in the wider trajectory of tightening humanitarian space.

None of these questions diminish the real gains of the data revolution. Needs overviews are sharper than a decade ago; anticipatory models have prevented suffering; digital platforms allow us to reach more people in more creative ways. But the current equilibrium—absolute dependence on quantified proof in places where proof is systematically degraded—has institutionalized a bias toward the visible. Cuts make that bias costlier. Every time we “hyper-prioritize” using incomplete evidence, we risk reinforcing a hierarchy of suffering determined by data richness rather than human need.

So what does “beyond the numbers” look like for an operational agency? It looks like codifying uncertainty—writing it into proposals, dashboards and board papers, not burying it in footnotes. It looks like donors rewarding honest ranges and scenario-triggered scale-ups. It looks like protecting the independence of analysis even when it is inconvenient, and building redundancies for when the lights go out. Above all, it looks like keeping faith with people who exist whether or not the spreadsheet can currently count them.


BLISS will be publishing various blogs from this series over the next few months. For more information about the project ‘Digitalising Food Assistance: Political economy, governance and food security effects across the Global North-South divide’, check out the project website, or overview on the website of SOAS, University of London. You can read other entries from this series here.


Opinions expressed in Bliss posts reflect solely the views of the author of the post in question.


About the author:
Jeremy Taylor

Jeremy Taylor is the Regional Head of Advocacy at the Norwegian Refugee Council (NRC) covering East and Southern Africa. Based in Nairobi, he collaborates closely with institutional partners and international organizations, and his work links operational complexity to policy solutions in protracted humanitarian and conflict contexts.  With a background in research and peacebuilding, his current role encompasses coordinating and leading briefings to inter-agency forums, donors, and the diplomatic community. He holds a PhD from SOAS, University of London.


Are you looking for more content about Global Development and Social Justice? Subscribe to Bliss, the official blog of the International Institute of Social Studies, and stay updated about interesting topics our researchers are working on.

