
16 Days of Activism Against GBV Blog Series | Holding Both Ends of the Line in the Fight Against Digital Violence

Prevailing responses to digital violence against women and girls remain overwhelmingly reactive. We demand justice only after revenge‑porn, doxxing, or cyber‑bullying has already shattered a woman’s livelihood, dignity, or sense of safety. The scale of the crisis is undeniable: globally, between 16% and 58% of women have experienced some form of online violence, and in Nigeria, 45% of women self‑report digital abuse. Yet our interventions continue to treat symptoms while leaving the systems that enable digital violence unchallenged.

We are holding only one end of the line.

In this blog, Emaediong Akpan argues for a dual approach that confronts both the structural and cultural roots of digital violence. First, we must hold tech platforms and legal systems accountable for the architectures that make abuse easy, anonymous, and viral. Second, we must rethink how we prepare and support the next generation, beginning with digital literacy from childhood. This is not about shifting responsibility to users; it is about building collective resilience against the weaponized shame that underpins digital abuse. When we meet survivors with belief, care, and solidarity, we disrupt the culture of silence and return shame to its rightful place — with abusers and the systems that protect them.

 

Photo Credit: UN Women


Beyond Reactions

Nearly half of the world’s women and girls have no legal protection from digital violence. The uncomfortable truth in our fight for digital safety is that we often act after the fact. There is an overwhelming number of safety nets: legal, social, and psychological, designed to ‘protect’ women and girls after they have experienced harm in digital spaces. Yet according to Amnesty International, 76% of women report altering their online behavior due to abuse. This statistic reveals the limitation of our reactionary approach: we are treating the consequences of digital violence but failing to confront the architecture that exposes women and girls to harm. Our reactionary approach, though vital, is a partial victory at best; it means holding only one end of the line. My call is to extend our hands and hold both ends.

The reactionary approach operates after the fact, after the harm has been done. It fails to confront the underlying issue: a digital ecosystem that is engineered, through its architecture, business model, and algorithms, to facilitate and profit from such harm. To address digital violence against women and girls, we must adopt a dual approach. This approach requires us to hold the line of platform accountability on one hand while engaging in foundational prevention rooted in early digital literacy and communal care on the other.

Understanding the Impact of Digital Violence on Women’s Participation in Public Life

Globally, 16–58% of women have experienced online violence. In Nigeria, 45% of women self-report experiencing digital violence, with girls aged 12–17 and young women up to 35 being targeted. Globally, 85% of women have witnessed digital violence such as cyberbullying, false and misleading smear campaigns, doxxing, image- and text-based threats, and more. Although the forms of digital violence vary, the motive remains the same: to shame, silence, and exclude women and girls from public life. Below I explain the impact of two particularly insidious forms.

  • Cyber-Stalking: Research indicates that an estimated 7.5 million people have experienced cyberstalking, demonstrating that anyone with a smartphone, social media account, or GPS-enabled device is vulnerable. Data from domestic violence programs in multiple countries indicates that 71–85% of domestic violence perpetrators use technology, from smartphones and GPS to spyware, to stalk, monitor, and threaten survivors. The intimate violence of the physical world now follows women into every digital space, collapsing any boundary between public and private life.

 

What Do We Mean by ‘Digital Violence’?

Without a universal conceptualization, this phenomenon operates under a cluster of terms, each highlighting a different aspect of this menace.

I use “digital violence” throughout this blog because it is conceptually encompassing. It captures not only the act of violence (harassment, doxxing) but also the structural nature of the harm. It points to a violent digital environment shaped by the algorithmic amplification of harm and the prioritization of engagement and virality over safety. Digital violence as a concept draws attention to the platform not as a neutral mirror of gender-based violence offline but as an active participant in these acts of violence.

Holding Platforms and Systems Accountable

Our response ought to begin with the platforms whose digital architectures are designed to maximize ‘engagement’, irrespective of whether that engagement is driven by joy, outrage, or hatred. The algorithms reward inflammatory content with increased visibility, providing fertile ground for digital violence to thrive. In adopting this approach, we must move beyond reactive content moderation to safety-by-design principles that place the responsibility on these platforms to mitigate systemic risks, including gender-based violence.

Our laws should specifically criminalize forms of digital violence including but not limited to cyber-stalking, disinformation, revenge porn, and doxxing. Although the Nigerian Violence Against Persons Prohibition Act 2015 is a good starting point, its effective application to address digital violence requires both amendment and judicial activism. The Act currently lacks explicit provisions for image-based sexual abuse, cyber-stalking, and platform liability. Courts must be willing to interpret existing provisions broadly while legislators work to close these gaps. We need legal frameworks that recognize the unique harms of digital violence—its permanence, its viral spread, its capacity to follow victims across every platform and into every space.

Digital Literacy as a Complementary Strategy

Preventive approaches have been critiqued, often rightly, for placing the responsibility on potential victims while absolving platforms of responsibility. My suggested approach does not absolve platforms of their responsibility. Rather, I argue that building communal resilience is not a parallel response but a complementary strategy in the fight against digital violence. Even in a utopia with perfectly regulated platforms, harm can exist. The goal is to change the social and psychological terrain on which these attacks land.

Fostering a child’s critical consciousness does not excuse a platform’s toxic design; it can help mitigate the effect of that design. The inoculation I speak of is not against infection but against the shame that digital violence weaponizes. Where young girls and women have the nonjudgmental support of their community, it becomes harder to manipulate them into feeling shame, and they are better equipped to identify and resist abusive dynamics.

Building Communal Resilience from the Cradle

Today’s children are digital natives in a profound sense. Globally, one in three internet users is a child. In high-income countries, 60% of children use the internet by age five. In Africa, with the world’s youngest population and smartphone adoption surpassing 50%, children are primary users of family devices, entering complex digital publics with little to no guidance. This strategy ought to begin with digital literacy.

Critical consciousness from early childhood: Teaching children to question what they see online. Who benefits from this content? Who might be harmed? Why is this being shown to me? This is media literacy adapted for an algorithmic age.

Bodily autonomy and consent: Children need to understand they have the right to set boundaries online, to say no to requests for images or information, and that consent given under pressure is not consent at all. These conversations must happen before children encounter coercion, not after.

Trusted adult networks: Every child should be able to identify at least two adults they can turn to if something online makes them uncomfortable or afraid. This requires adults who respond without panic, judgment, or punishment, a significant cultural shift in many contexts.

Community response models: When digital violence occurs, the community’s response matters as much as the legal one. Schools, religious institutions, and community organizations must be prepared to support survivors with unwavering belief rather than interrogation, with resources rather than blame. In Nigeria, organizations like the International Federation of Women Lawyers, Feminist Coalition, and StandToEndRape have pioneered such models, but they need to become the norm, not the exception.

The evidence supports this approach. In Finland, where comprehensive digital literacy has been integrated into education since 2014, young people report higher confidence in identifying misinformation and manipulation online. In South Korea, where digital citizenship education is mandatory, rates of cyber-bullying have declined even as internet usage has increased. Nigeria has the capacity to develop contextually grounded approaches that respond to our specific realities of digital violence.

Conclusion: Holding Both Ends of the Line

The fight against digital violence is a struggle for the future of public space, discourse, and democracy itself. A singular focus on post-harm justice, while morally imperative, is strategically incomplete. It addresses the symptoms but does not prepare the next generation for these realities.

We must confront digital violence by contesting the exploitative architectures of platforms while simultaneously building a critically conscious population from the cradle. We must demand that platforms redesign their systems for safety while teaching young people to navigate these systems with critical awareness. We must prosecute abusers while building communities that refuse to shame survivors. This dual approach is not a compromise; it is a recognition that structural change and cultural transformation must advance together. One end of the line without the other leaves us perpetually playing catch-up, counting casualties, offering comfort after the fact.

It is time to hold both ends of the line. Our children are counting on it.

 

Opinions expressed in Bliss posts reflect solely the views of the author of the post in question.

 

About the author:

Emaediong Akpan is a legal practitioner and an alumna of the International Institute of Social Studies. With extensive experience in the development sector, her work spans gender equity, social inclusion, and policy advocacy. She is also interested in exploring the intersections of law, technology, and feminist policy interventions to promote safer digital environments. Read her blogs here: 1, 2, 3, 4, 5

Are you looking for more content about Global Development and Social Justice? Subscribe to Bliss, the official blog of the International Institute of Social Studies, and stay updated about interesting topics our researchers are working on.

 

 

From Content Production to Meaningful Engagement: A Collective Reflection on Communicating Development Research Online


The communications landscape around us is changing — seemingly at breakneck speed. Since our last meeting as the EADI Research Communications Working Group more than five years ago, the online communications environment in particular has been all but transformed. These changes are forcing us to reflect on how we communicate and whether it is sufficient, including from a social justice perspective. The recent workshop for EADI members held in Bonn, Germany, was a moment for us to come together and reflect on recent changes and our responses.

Get with the times or fall behind

As the communications environment changes, we as research communications professionals are changing how we communicate scientific research. Sometimes this change takes place naturally. The recent changes to Twitter (X) — a platform long favoured by (development) researchers and research institutions for keeping vital discussions alive online — are a prime example. What we see now is a call coming from within the research community to find alternatives to a platform that no longer aligns with the mission of researchers and research communications professionals. We are not being forced to abandon Twitter; it is a choice that we make.

Fear the algorithm — it does as it pleases (or does it?)

At other times, we don’t have a choice; we are forced to change our communications strategies to prevent what we’re communicating from going unheard, from not making the impact we want it to, or from being misused. At the workshop, several participants highlighted difficulties they were facing when producing content for social media: the algorithms for platforms such as Instagram and Facebook decide which content is visible — and we don’t always know why. Algorithms all but govern social media, one participant observed.

Another recounted that their organization’s Facebook page was disabled because the word “climate” had been used; the word was considered politically inflammatory. The organization didn’t realize this had happened until they investigated, and even then it took some puzzling to determine that this specific word had triggered the freezing of the account. Something similar happened at another organization that had posted political content on TikTok: the account they used was banned from the platform. In both cases, they eventually learned what had triggered the ban, but not how to prevent it.

At other times, the algorithm suppresses or highlights content seemingly at random. Many of us do not fully understand how this works. What we do know is that social media platforms want to keep people on them for as long as possible. For this reason, content with links is suppressed because it takes users to another site. Embedding content on these platforms or on website pages might be one way to circumvent this, but this is not always possible, especially when we link to longer texts that simply cannot be posted on social media.

Too much information

This is linked to the problem of oversaturation: a wealth of content gets posted on social media, meaning that content gets ‘lost’. And if the algorithm sends ‘undesired’ content to the bottom of the pile, the chances are even smaller that a post will be seen. How can we deal with this problem? Perhaps cross-posting on social media can ensure that content reaches more people. Researchers themselves could also play a key role, as their online presence complements that of research communications teams, and their voices are preferred over the more ‘generic’ voices of those who communicate professionally. How to get researchers to want to communicate their research is discussed in another blog article on the workshop that follows this one.

We still need Twitter — but we don’t want to

Getting back to quitting Twitter: moving away from the platform is not as easy as we might imagine. One participant remarked that they use the platform to reach journalists and that they would simply fail to do so if they stopped using it. There is also no strong enough alternative to the platform yet. Several participants had joined BlueSky but have not yet been able to determine whether the platform is useful; not many researchers have joined it, either.

And until everyone who’s important for our communications efforts (both researchers and our target audiences) has joined an alternative platform like BlueSky, or enough people join it to start a new community, Twitter will probably remain the dominant platform. A coordinated migration by development research and education institutes to a new platform was suggested as one possible way to make this shift, but the loss of followers that had taken several years to amass was identified as one disadvantage of this strategy. Yet other platforms, such as Threads, do not allow political content to be posted, something which several of the organizations wish to do.

LinkedIn is more important than ever

The discussion clearly showed the rise of LinkedIn, which not only performs well but is also becoming preferred by (development) researchers and practitioners alike. While other platforms such as Facebook are also used for personal reasons, LinkedIn is used by professionals to find information they need to do their work, one participant commented. This includes what’s happening in the field — new developments and possibly new partners to collaborate with. LinkedIn Groups are also useful for locating epistemic communities and those researchers and practitioners working on particular subjects or in particular fields. One participant shared how she had spent time on LinkedIn scanning groups in which to (re)post relevant content.

Accessibility is key, but the digital divide persists

Accessibility is also becoming increasingly important. Videos are being produced with subtitles for those who cannot access the audio, or for those who watch them while commuting, for example. Other formats remain less accessible; these include podcasts, which, like videos, require data that is expensive in many countries (where there is also limited access to Wi-Fi networks). In such contexts, mainstream media – television, radio, and newspapers – are still seen to play an important role.

Building and nurturing relationships

One of the important lessons we learned at the workshop is that communicating is more than simply producing and disseminating content; it is much more than that. One participant commented — and this struck me — that we need to focus not only on the “media” aspect of social media but also on its “social” aspect. We have a responsibility as research communicators to create and nurture social spaces.

Related to this, another participant commented that communication is about building relationships. From this perspective, we need to focus on enduring engagement, which means nurturing the social spaces for dialogue we’ve created. Focusing only on spreading content is not enough.

And, last of all, meaningful engagement should be a key priority that drives our communications strategies so that our messages are not only heard but also heeded.


This blog article was first published here


Image: Taken from the workshop


About the author:

Lize Swartz is an academic blogging specialist, academic editor, and development researcher. She is Editor of Bliss, the blog of the International Institute of Social Studies, where she also conducts PhD research on experiences of and responses to water scarcity in urban contexts.

 

Technological solutions for socio-political problems: revisiting an open humanitarian debate by Rodrigo Mena


The use of technology in the humanitarian aid sector is showing a steady increase, based on a sense of hope that technology could help to improve the delivery of aid and solve multiple systemic problems. Technological solutions alone, however, cannot properly address such complex problems. This blog engages in an ongoing debate among development scholars on some of the hopes and concerns related to the use of digital and web-based technology in this sector. The main conclusion: we need more case research on the use of technology, and, in the meantime, the careful use of technology is advised.


The application of technology is gaining popularity in the humanitarian sector due to the perceived benefits and ‘solutions’ it seems to provide. Increasingly, development scholars are warning of the unintended consequences that such technological ‘solutions’ can produce—some of them negative. Dr. Duncan Green, Senior Strategic Adviser at Oxfam GB, in one of his blog posts, cautions us about the limits of technological solutions, saying that ‘just because technologies can allow us to collect, store, analyse and communicate data and ideas in unprecedented ways should not lull us to think they can address old, entrenched problems in unprecedented ways. The primary constraints for human action are non-technological in nature.’

Long-term research on the topic by Dr. Kristin Bergtora Sandvik, Dr. Katja Lindskov Jacobsen, and Sean Martin McDonald, from the Peace Research Institute Oslo (PRIO), reminds us of how technology shapes humanitarian action; they also write in a blog post that technology is implemented in the humanitarian sector without adequate legal, ethical, and methodological frameworks. Another warning comes from Dr. Emre Eren Korkmaz, post-doctoral researcher at the University of Oxford, who in a recent blog post shows how the use of blockchain technologies[i] by aid agencies to support people in need, especially refugees, is embraced with great hopes but also brings along deep concerns. He highlights the complexity of certain socio-environmental problems that are unlikely to be sufficiently addressed by technological solutions alone. Sandvik, Jacobsen, and Korkmaz, in deepening the debate, then call for more research on specific cases of the application of digital and web-based technology in the humanitarian aid sector.

The utility of technological ‘solutions’

Is the use of more technology really making humanitarian aid and disaster responses better, faster or more efficient? Even though it is difficult to find a single answer to this question, the reality is that many believe that technology can fulfil this ideal. Let’s consider a few examples:

Satellite images are being used for data collection and project monitoring with the hope that this technology will obtain more accurate information, more quickly. Iris and fingerprint scanning for the registration of the recipients of aid bring the hope of reducing duplication in the delivery of aid and of more focused assistance. The use of Skype, email, and cloud systems is essential for the day-to-day management of humanitarian aid, but the hope remains that they will also improve the coordination of disaster responses and humanitarian aid provision within and among organisations and agencies.

Technology, it is said, will also reduce excessive bureaucratic bottlenecks and could provide a solution to problems of access and increased insecurity in the field. The use of digital payment systems, e-transfers, or “mobile money” has revolutionised the ways of delivering economic aid, promising more flexible, faster, and safer economic assistance as compared to moving and distributing cash. Finally, there is hope that the use of technology will help to avoid problems of corruption, power struggles, or inequality. It is believed that using technology is politically neutral, but this belief has proven to be far from reality.

A panacea for deeper problems?

Despite the benefits that these technologies can bring, they cannot be used naïvely, as the use of any technology (and the use of the information obtained along with it) involves multiple political and social variables. New technologies interplay with the realities of the places where they are implemented, and in places requiring humanitarian aid, with the existing and emerging needs of people.

We must question how these technologies interact with the inequalities of these places or their political regimes. As Korkmaz warns in his blog, there is also a risk of abuse: institutions can use digital identities ‘to track people’s choices and desires, which could lead to increased surveillance and the use of information against refugees’.

Technology is also subject to instrumentalisation and can be used for purposes quite the opposite of those humanitarian purposes it is intended to serve. The way in which information is collected, analysed, and presented can also be motivated by other, non-humanitarian objectives. In other words, the use of technology is never politically neutral — it affects and is affected by actors and processes, in ways not always fully understood. Reflecting on this is as important as thinking about the benefits of using new data-collection technologies. And we must also identify when, how, and which technology to use.

The need for more case studies

The expansion[ii] and international call[iii] for the use of technology need to go hand-in-hand with greater reflection and deeper knowledge of the real impact, benefits and consequences of technology’s use. As McDonald, Sandvik, and Jacobsen argue in their blog post, ‘humanitarians need both an ethical and evidence-driven human experimentation framework for new technologies.’

As the discussion on the need for awareness about the use of technology is already ongoing, it is important to start gathering information on specific cases showing how and which technologies are used in reality. Afghanistan presents a good case for examining the application of aid technology, as its use has increased there over the last decade [4–6].

Ongoing research I’m carrying out as Visiting Scholar of the Afghanistan Research and Evaluation Unit (AREU) on the (political) use and the introduction of data-collection technology in Afghanistan seeks to map this technology, also reflecting on who uses it, who can get access to the collected information, and how and for which purposes it is used. The research importantly also asks: does technology really fulfil the promises it carries?

The promotion of technology is still alive in Afghanistan and globally, as multiple new forms of technology, such as bitcoin and blockchain technology [9, 10], are being implemented by the humanitarian sector. However, those applying technology in the humanitarian sector should not be blind to its potential negative effects. Technology can be tremendously helpful, but it must also pass the ‘do no harm’ test [11, 12] and should be applied in a reflective manner. In the meantime, the thoughtful use of technology and more research on the topic are invited.


[i] Blockchain technology refers to a distributed and decentralized database of continuously growing records of digital information, ordered, linked, and secured using cryptography.
[ii] The use of technology in the humanitarian sector, though far from new, has been a growing phenomenon since the late 20th century [1–3]. The difference nowadays lies in its expansion and penetration at all levels of the humanitarian aid system.
[iii] There has been an international call to innovate and introduce more technology. For instance, two reports from 2013 reinforced the use of multiple communications and data-collection technologies in the humanitarian system: the World Disasters Report from the International Federation of Red Cross and Red Crescent Societies (IFRC), and the document Humanitarianism in the Network Age from the United Nations Office for the Coordination of Humanitarian Affairs (OCHA).

References:
  1. Stephenson, R. and P.S. Anderson (1997) ‘Disasters and the information technology revolution’, Disasters 21, 305–334.
  2. Sandvik, K.B., M. Gabrielsen Jumbert, J. Karlsrud and M. Kaufmann (2014) ‘Humanitarian technology: a critical research agenda’, International Review of the Red Cross 96, 219–242.
  3. Harvard Humanitarian Initiative (2011) ‘Disaster Relief 2.0: The Future of Information Sharing in Humanitarian Emergencies’, UN Foundation & Vodafone Foundation Technology Partnership.
  4. IRIN (2013) ‘Innovative ICT helps aid workers in Afghanistan’. Available at: http://www.irinnews.org/feature/2013/05/02/innovative-ict-helps-aid-workers-afghanistan.
  5. Boone, J. (2010) ‘US army amasses biometric data in Afghanistan’, The Guardian. Available at: http://www.theguardian.com/world/2010/oct/27/us-army-biometric-data-afghanistan.
  6. Zax, D. (2016) ‘In Afghanistan, Cash Has Become The Most Effective Form Of Aid’, Fast Company. Available at: https://www.fastcompany.com/3065011/in-afghanistan-cash-has-become-the-most-effective-form-of-aid.
  7. Jacobsen, K.L. (2015) ‘Experimentation in humanitarian locations: UNHCR and biometric registration of Afghan refugees’, Security Dialogue 46, 144–164.
  8. Jacobsen, K.L. (2017) ‘Humanitarian biometrics’, in The Politics of Humanitarian Technology: Good Intentions, Unintended Consequences and Insecurity, pp. 57–87. Routledge.
  9. Digital Humanitarian Network (2016) ‘Blockchain for the Humanitarian Sector: Future Opportunities’. Available at: http://digitalhumanitarians.com/resource/blockchain-humanitarian-sector-future-opportunities.
  10. Bello Perez, Y. (2015) ‘Can Bitcoin Make a Difference in the Global Aid Sector?’, CoinDesk. Available at: https://www.coindesk.com/can-bitcoin-make-a-difference-in-the-global-aid-sector/.
  11. Jacobsen, K.L. (2015) ‘Humanitarian technology: revisiting the “do no harm” debate’, ODI HPN. Available at: https://odihpn.org/blog/humanitarian-technology-revisiting-the-%c2%91do-no-harm%c2%92-debate/.
  12. The Sphere Project ‘Protection Principle 1: Avoid exposing people to further harm as a result of your actions’, The Sphere Handbook. Available at: http://www.spherehandbook.org/en/protection-principle-1-avoid-exposing-people-to-further-harm-as-a-result-of-your-actions/ (accessed 5 January 2018).

About the author:

Rodrigo (Rod) Mena is a socio-environmental AiO-PhD researcher at the International Institute of Social Studies of the Erasmus University Rotterdam. His current research project focuses on disaster response and humanitarian aid in complex and high-intensity conflict-affected scenarios.