Tag Archives: AI

AI and Emerging Tech for Humanitarian Action: Opportunities and Challenges


The use of digital and emerging technologies such as artificial intelligence in the humanitarian sector is not new. Since the advent of these technologies, particularly over the last two decades, the sector has gone through several transitions as data collection, storage, and processing have become increasingly available and sophisticated. However, recent advances in computational power, together with the ‘big data’ now at the disposal of the public and private sectors, have allowed for a widespread and pervasive use of these digital technologies in every sphere of human life – notably also in humanitarian contexts. AI is rapidly reshaping the humanitarian sector through projects such as Project Jetson by UNHCR, AI-supported mapping for the emergency response in Mozambique, AI chatbots for displaced populations, and more besides.

Humanitarian workers must therefore ask the following questions. How can responsible AI, along with emerging technology, be used for humanitarian action? What priority areas and conditions should the humanitarian sector insist on while employing these technologies? And do these emerging technologies present ethical challenges for the sector?

There is enormous potential in AI technology, with its ability to predict events and outcomes that can support international humanitarian action. Given the rate at which disasters and conflicts have increased in recent years, the humanitarian sector – particularly in terms of funding – simply cannot provide relief and responses to the degree that the world requires[1]. In this light, strengthening disaster resilience and risk reduction by building community resilience, through initiatives such as better early warning systems, becomes crucial.

Case Study: Using AI to Forecast Seismic Activity

A study using hybrid methodologies was conducted to develop a model that could forecast seismic activity in the region of Gaziantep, Türkiye (bordering Syria). The system was trained using data gathered after the massive 7.8 magnitude earthquake in early 2023, which was followed by more than 4,300 minor tremors. To create the algorithm, key dimensions and indicators – such as social, economic, institutional and infrastructural capacity – were identified from open-source websites. During the research, two regional states were identified as having extremely low resilience to earthquakes. Incidentally, this area is also home to a large number of Syrian refugees. After gathering two years of seismic data from more than 250 geographers on the ground and other open sources, two Convolutional Neural Network models were applied that could predict 100 data points into the future (with 93% accuracy), which amounts to roughly ten seconds ahead.

The study underlines the regional challenges in data collection. Several indicators were omitted because openly available data were absent. This highlights the influence of power asymmetry, which produces biased results and conclusions and pushes researchers away from new understandings. As a case in point, data pertaining to areas and neighborhoods where Syrian refugees reside was not gathered and was thus excluded by default from the research findings. Despite these political challenges, there is great potential in this technology when it is provided with relevant data sets. AI becomes the model it is trained to be, and it is therefore important to have as complete a data set as possible to avoid reproducing real-world human biases.
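The study’s exact architecture and code are not reproduced here, but the general technique – a small one-dimensional convolutional network that maps a window of past waveform samples onto the next 100 samples – can be sketched in a few lines. The following is a minimal, illustrative sketch assuming TensorFlow/Keras; the window length, layer sizes, and synthetic data are assumptions for demonstration, not the study’s actual model or data.

```python
# Minimal sketch (not the study's code): a 1-D CNN that learns to map a window of
# past waveform samples onto the next HORIZON samples. All sizes are illustrative.
import numpy as np
from tensorflow.keras import layers, models

WINDOW = 1000   # past samples fed to the network (assumed, not from the study)
HORIZON = 100   # future samples to predict - roughly "100 data points ahead"

def make_windows(series, window=WINDOW, horizon=HORIZON, stride=25):
    """Slice a 1-D waveform into (past window, future horizon) training pairs."""
    X, y = [], []
    for start in range(0, len(series) - window - horizon, stride):
        X.append(series[start:start + window])
        y.append(series[start + window:start + window + horizon])
    return np.asarray(X)[..., np.newaxis], np.asarray(y)   # add a channel axis

def build_model():
    """A small convolutional regressor over the forecast horizon."""
    return models.Sequential([
        layers.Input(shape=(WINDOW, 1)),
        layers.Conv1D(32, kernel_size=7, activation="relu"),
        layers.MaxPooling1D(4),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(4),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(HORIZON),          # regression: the next HORIZON samples
    ])

if __name__ == "__main__":
    # Synthetic stand-in for a ground-motion trace; real work would use station data.
    rng = np.random.default_rng(0)
    trace = np.sin(np.linspace(0, 200 * np.pi, 20_000)) + 0.1 * rng.standard_normal(20_000)
    X, y = make_windows(trace)
    model = build_model()
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2)
```

A model of this kind is entirely dependent on what it is fed: trained only on stations and indicators that happen to be openly available, it will forecast well for some neighborhoods and remain silent about others.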

Fears of techno-colonialism and Asymmetric Power Structures

This case highlights the need for transparent, complete, and bias-free data sets, which remain a challenge in most parts of the world. Further, who owns these data sets? Who oversees data collection and training, and what is omitted? As AI and various deep learning methodologies transform our world, fears of techno-colonialism, techno-solutionism and surveillance are omnipresent.

Today’s post-colonial world, which in fact continues to carry forward colonial power hierarchies, albeit in a new setting with changed roles, is riddled with inequalities. These inequalities and pre-existing biases, both in data and in people, are then transferred to AI because of the way it is being (or not being) trained. Even ‘creative’ AI tools are still a conglomeration of the data they are trained on.

AI and deep learning methodologies are tools that can be targeted to provide a solution. They require data as input, and if that data carries bias or racism to some degree, then the output will also reflect it[2]. Questions such as who is training the AI, what funds are being used, and who is the recipient of the effort become critical to answer. Unfortunately, very few companies and countries in the world have the capacity to create the data sets that train AI. These are often large conglomerates that work for profit within a capitalist ideology in which a human-centered approach is at best secondary. Decision-making power therefore lies in the hands of a few, forming a new kind of colonialism.

Is AI then a tool, or a medium for maintaining the status quo of power structures? If the few people in power, driven by capitalism, are invested in maintaining those power structures, how will AI help in decisions about resource allocation? This also points to the much-needed democratization of AI and related tools. Otherwise, human-centric AI will remain a paradox.

Looking at Responsible AI and humanitarian principles

Can we employ AI that does no harm? For AI and similar tools to be viable and inclusive, one must ensure transparency and inclusion in the data gathering that forms the data sets. This requires a conscious effort that is not technology-driven but policy-driven – one that invites people with diverse thought processes from diverse communities, and especially minorities and vulnerable populations, to be in a position of action and not just participation. One way is to rethink the humanitarian sector and its functioning. The other is to take a more community-centered approach when thinking of AI applications, as James Landay puts it. He describes a community-centered approach as one in which the members of the community discuss and decide how and which resources should be allocated, according to their own priorities and needs. This method stands in contrast to top-down politics, where communities are merely seen as consumers or beneficiaries.

Drawing on Edward Soja’s theory, Anisa Abeytia (2023) adds a fourth sphere or space to Soja’s existing three-layer model, which Abeytia argues is relevant to the use of AI.

According to the model, “Firstspace” is the geographic location that includes human and non-human (living and nonliving) entities and environments. “Secondspace” comprises our communal areas (libraries, schools, etc.). “Thirdspace” is the liminal landscape – the way people accept or reject ideas and technologies, including their apprehensions and fears about new transitions and change. Lastly, Abeytia adds a “Fourthspace” to represent the digital world, which today is as real as physical geographies. An important rubric for measuring the viability of an AI application is how it will affect each of these spaces – the personal, the communal, the transitional, and the digital. For example, we can see AI affecting all four spaces in the project run by the University of Utah and a refugee resettlement agency, which used Virtual Reality (VR) headsets as a reception and resettlement tool to help refugees integrate into American society.

Survey: What are the needs of the sector?

As members of the humanitarian sector, we must strive to develop our own solutions to the challenges we face, ensuring inclusivity for all. The identification of these challenges should also come from within the sector itself. Recently, a survey was conducted among key stakeholders to identify areas where AI could make a significant contribution. The most commonly highlighted areas of interest were as follows:

● Can AI assist in creating bias-free intelligence that improves the victim–state relationship and victims’ relationships with others?

● Can AI be utilized to measure the intolerance and widening hatred between communities that lead to riots, such as those in the UK and South Asia?

● Can AI provide guidance in identifying uncertainties around risks and resilience, along with humanitarian action insights that we have not yet spotted?

● Can AI conduct contribution analysis for impact evaluation?

● How can AI be employed to identify methods of empowerment in decision-making and to develop strategies for offering universal humanitarian assistance?

● How can we harness the power of AI to analyze epidemic preparedness and improve responses in health crises like monkeypox or Covid?

It is essential to actively investigate the use of AI and emerging technologies across the identified spheres. Efforts to make AI more equitable should include advocating for inclusive methodologies, creating transparent and diverse data sets, and amplifying the voices of Indigenous, marginalized and vulnerable populations.

While working towards more equitable systems, several critical questions arise: How can these projects be funded? Are they viable in a landscape where only a fraction of resources reaches those in need? What is the carbon footprint of developing AI and deep learning tools? How can Indigenous knowledge from resilient communities be integrated into AI systems? Each of these issues warrants thorough discussion, and every major humanitarian organization should address them.

Further reading:

Tom L. Beauchamp and James F. Childress, Principles of Biomedical Ethics, 8th ed., Oxford University Press, Oxford, 2019; Luciano Floridi and Josh Cowls, “A Unified Framework of Five Principles for AI in Society”, Harvard Data Science Review, Vol. 1, No. 1, 2019.

Opinions expressed in Bliss posts reflect solely the views of the author of the post in question.

 

Authors: Anisa Abeytia, Shanyal Uqaili, Mihir Bhatt and Khayal Trivedi are members of the Humanitarian Observatory Initiative South Asia (HOISA).

Are you looking for more content about Global Development and Social Justice? Subscribe to Bliss, the official blog of the International Institute of Social Studies, and stay updated about interesting topics our researchers are working on.

 

ChatGPT can be our ally when conducting scientific research — but academic integrity must guide its use


Several papers that have recently been published in peer-reviewed journals display obvious signs of having been written by the AI tool ChatGPT. This has sparked a heated online debate about the transparency of research communication and academic integrity in cases where AI is used in the academic writing process. In this blog article, Kim Tung Dao discusses the ethical implications of using AI for academic writing and ponders the future impact of AI in academic research, urging a balance between the efficiency of AI tools and research integrity.

Used for everything from streamlining everyday tasks to revolutionizing industries, artificial intelligence (AI) has come to profoundly affect our lives in the past few decades. The emergence of new forms of AI in recent years has led to a heated debate in academia about whether students should be allowed to use AI tools — usually large language models (LLMs) such as ChatGPT — in their writing. And if they are permitted, a related question is to what extent they should be used, especially in higher education.

A new issue related to the rise of LLMs is now rearing its head within the realm of scientific research: the publication of LLM-generated content in peer-reviewed journals. This worrying trend not only reflects the rapid advancements in LLMs’ ability to replicate human work but also gives rise to discussions on the ethics of research (communication) and research integrity.

More and more researchers are attempting to leverage generative AI such as ChatGPT to act as a highly productive research assistant. It is very tempting to have an LLM compose content for you, as these AI-generated pieces often exhibit sophisticated language, conduct statistical analyses seamlessly, and even discuss new research findings expertly. The line between human- and machine-generated content is blurring. In addition, these LLMs work tirelessly and quickly, which can be considered highly beneficial for human scholars.

However, beneath the surface of effectiveness and efficiency lies a complex labyrinth of ethical concerns and potential repercussions for the integrity of scientific research. Publishing academic research in journals remains the most popular way for many researchers to disseminate their findings, communicate with their peers, and contribute to scientific knowledge production. Peer reviewing ensures that research findings and truth claims are meticulously evaluated by experts in the field to sustain quality and credibility in the formulation of academic theories and policy recommendations. Hence, when papers with AI-generated content are published in peer-reviewed journals, readers can’t help but question the integrity of the entire scientific publishing process.

There is a big difference between receiving assistance from generative AI and allowing it to generate entire research texts, or significant parts of them, without appropriate supervision and monitoring. Such supervision can entail smaller tasks, such as proofreading AI-generated content before its distribution or publication, but it can also play a much more critical role in ensuring the originality and significance of AI-enhanced research. This is why this article reflects on the abuse of AI in the writing of academic texts by researchers and comments on the insufficiency of the current peer-review system. I also try to initiate a thoughtful discussion on the implications of AI for the future of research.

Falling through the cracks

The latest volume of Elsevier’s Surfaces and Interfaces journal recently caught the attention of researchers on X (Twitter), as one of its papers has evidently been written by ChatGPT. The first line of the paper states: “Certainly, here is a possible introduction for your topic: […].” Any ChatGPT user knows that this is the typical reply generated by the LLM when it responds to a prompt. Without any expertise in AI or related fields, a ChatGPT user with ordinary common sense can therefore tell that this sentence, and at least the following paragraph, if not many others, have been generated by ChatGPT.

But this paper is certainly not the only one in this new line of LLM-generated publications. ChatGPT prompt replies have been found in other papers published in different peer-reviewed journals and are not limited to any specific fields of science. For example, a case report published in Radiology Case Reports (another Elsevier journal) includes a whole ChatGPT prompt reply stating “I’m very sorry, but I don’t have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about […], but for specific cases, it is essential to consult with a medical professional […].”
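None of the journals involved have published their screening procedures, but the kind of check a desk editor or reviewer could run against such slips is almost trivial: scan a submission’s plain text for the boilerplate phrases chatbots tend to produce. The sketch below is purely illustrative; the phrase list, the hypothetical manuscripts/ folder, and the assumption of plain-text input are mine, and a match is only a flag for human inspection, not proof of misconduct.

```python
import re
from pathlib import Path

# Illustrative telltale phrases of the kind quoted above; a real screening list
# would need curation and regular updating.
TELLTALE_PATTERNS = [
    r"certainly,? here is a possible introduction",
    r"as an ai language model",
    r"i don.t have access to real.time information",
    r"regenerate response",
]

def flag_chatbot_boilerplate(text: str) -> list[str]:
    """Return the telltale patterns found in a manuscript's plain text."""
    lowered = text.lower()
    return [p for p in TELLTALE_PATTERNS if re.search(p, lowered)]

if __name__ == "__main__":
    # Hypothetical folder of submissions already converted to plain text.
    for path in Path("manuscripts").glob("*.txt"):
        hits = flag_chatbot_boilerplate(path.read_text(encoding="utf-8", errors="ignore"))
        if hits:
            print(f"{path.name}: possible LLM boilerplate -> {hits}")
```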

Hallucinating information

What is more worrisome is the quality, integrity, and credibility of scientific research conducted by these LLMs, as ChatGPT has the tendency to hallucinate information and to draw on seemingly non-existent citations and references to support the texts it generates. For example, in a forum discussion where contributors talked about detecting AI-generated content in academic publications, one contributor pointed out that they could not find the references cited in a paper titled “Automatic Detection of Coagulation of Blood in Brain Using Deep Learning Approach”. Several other cases are mentioned in the discussion thread.
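Hallucinated references are easier to check than hallucinated prose, because bibliographic databases can be queried automatically. As an illustration (not something the cases above did), the sketch below uses the public CrossRef REST API to look up the closest indexed match for a cited work; the example query string is a placeholder. A missing match does not prove a reference was fabricated, and a match does not prove the citation is accurate, but the output tells a reviewer which references deserve a manual check.

```python
import requests

CROSSREF_API = "https://api.crossref.org/works"

def best_crossref_match(citation: str) -> dict | None:
    """Return CrossRef's closest bibliographic match for a cited work, if any."""
    resp = requests.get(
        CROSSREF_API,
        params={"query.bibliographic": citation, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0] if items else None

if __name__ == "__main__":
    # Placeholder: substitute the reference one actually wants to verify.
    cited = "A Unified Framework of Five Principles for AI in Society, Harvard Data Science Review"
    match = best_crossref_match(cited)
    if match is None:
        print("No plausible CrossRef record found - check the reference by hand.")
    else:
        title = (match.get("title") or ["<untitled>"])[0]
        print(f"Closest indexed record: {title} (DOI: {match.get('DOI')})")
```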

Besides likely contributing to the publication of false or unevidenced information, the use of LLMs in the writing up of scientific research also highlights the failure of peer reviewers to catch or question these practices, showing either their carelessness or their irresponsibility. The peer-review system has long served as the gatekeeper of scholarly knowledge, aiming to uphold the high standards of quality, integrity, and credibility that are part and parcel of academic research and publishing. But with obvious evidence of LLM-generated content being included in papers published in peer-reviewed journals, it might be time to start questioning the transparency and accountability inherent in the peer-review process. When a peer-reviewed publication starts with ChatGPT’s typical prologue, it is reasonable to wonder how such an article was reviewed.

A call for responsible use

AI is not all bad. Clearly, it can be a powerful assistant to researchers in the research process, used for anything ranging from brainstorming, developing research strategies, coding, analyzing empirical results, and language editing to acting as a competent, critical reviewer that provides useful feedback for improvement. But to work with this powerful assistant, researchers still need to have solid knowledge of the research topic, make the significant decisions on the research strategy, and, most importantly, ensure that the research is an original contribution to the literature and can be applied. Relying heavily on AI to finish a research project without understanding the foundation and the essence of the research is plainly ethical contamination and fraudulent behavior.

AI is not a scientific researcher — and might never be

Beyond the immediate finger-pointing at the peer-review system and research practices, the increasing influence of AI in research outputs carries broader implications for the role and integrity of human researchers, the nature of scientific discovery, and the social perception of AI. Even if the potential for deception and manipulation is ignored, AI-generated research outputs might still lack genuine insight and critical analysis, and might fail to take ethical considerations into account without human guidance. Moreover, for research outputs to be meaningful for human life and society, they need to be validated by human researchers.

We don’t necessarily need to fear AI; we do need to fear the improper use of AI, and we need to play an active role in preventing this from happening. Thus, instead of fearing being replaced by AI, human researchers should start acknowledging its abilities and using it to shape our projects. Let’s board the ship of technological advancement to boost our research efficiency and accelerate the pace of scientific discovery. But let us remain cautious. We are responsible for ensuring that AI contributes to, instead of compromising, scientific knowledge production.

Writing this post with the help of ChatGPT 3.5 (which I used to improve my language), I can’t help but recall the question I was asked when receiving my doctoral degree: “Do you promise to continue to perform your duties according to the principles of academic integrity: honestly and with care; critically and transparently; and independently and impartially?”

I promise.

Opinions expressed in Bliss posts reflect solely the views of the author of the post in question.

About the author:

Kim Tung Dao is a recent PhD graduate of the International Institute of Social Studies. Her research interests include globalization, international trade, development, and the history of economic thought.


Are you looking for more content about Global Development and Social Justice? Subscribe to Bliss, the official blog of the International Institute of Social Studies, and stay updated about interesting topics our researchers are working on.


EADI/ISS Series | Two faces of the automation revolution: impacts on working conditions of migrant labourers in the Dutch agri-food sector


by Tyler Williams, Oane Visser, Karin Astrid Siegmann and Petar Ivosevic

Rapid advances in robotics and artificial intelligence (AI) are enabling production increases in the Dutch agri-food sector, but they are also creating harsh working conditions, as the sector remains dependent on manual labour even while implementing new technologies. To ensure better working conditions for the migrants who form the majority of manual labourers in this sector, ‘worker-friendly’ implementation of new technologies is necessary to limit the negative effects of the automation revolution.


The ‘Threat’ of Automation?

Decades-old debates about the extent of job loss induced by the automation revolution were re-ignited by the seminal work of Frey and Osborne (2013), who suggested large numbers of jobs would be replaced by automation. Where jobs are not lost, automation impacts labour conditions as facilities are geared towards the optimal use of new technology. Novel ICTs offer possibilities to increase labour productivity and to free workers from harsh and repetitive tasks (OECD 2018). Yet they also enable high levels of remote, covert monitoring and measurement of work, often resulting in increased work pressure and the risk of turning workplaces into ‘electronic sweatshops’ (Fernie and Metcalf 1998).

Ever since Keynes (1930) warned about “technological unemployment” in his essay ‘Economic Possibilities for our Grandchildren’, tech innovations have been eliminating jobs across sectors (e.g., in manufacturing), while simultaneously leading to the creation of new types of work (e.g., machine engineers). However, the ‘fourth industrial revolution’ (Schwab 2016) currently taking place might differ from earlier ones: automation is accelerating, affecting a wider variety of jobs, and is now also penetrating sectors like agriculture. Likely candidates for new automation waves are ‘3D jobs’ (dirty, dangerous and demeaning) which are overrepresented in agriculture and often performed by migrant workers (manual mushroom picking, for example, which is physically demanding and carries myriad other risks like respiratory issues). Therefore, this sector – understudied in research on automation – deserves attention.

Farm Robots and Migrant Workers

‘Milking robots’, drones, and (semi-)automated tractors have appeared on farms in the U.S. and the EU. As the second largest exporter of agricultural products and the ‘Silicon Valley of Agriculture’ (Schultz 2017), the Netherlands is at the forefront of such innovations. Yet despite this position, Dutch agriculture still depends strongly on manual labour, as the complexity and variability of nature (crops, animals, soils, and weather) have hampered automation.

Technological innovation and the recourse to low-paid, flexible migrant labour in the Dutch agri-food sector both represent cost-saving responses to the increased market power by supermarkets (Distrifood 2019) and the financialisation of agriculture. A FNV (Federation of Dutch Trade Unions) representative asserted: “Employers see those people as machines […]. Employers need fingers, cheap fingers, if I may call it like that”[1].

However, an educated migrant workforce provides benefits to employers beyond ‘cheap fingers’. The majority of labour migrants from Central and Eastern Europe (CEE), the largest group of migrant labour workers on Dutch soil (CBS 2019), hold a post-secondary education (Snel et al 2015: 524). As the Dutch are reluctant to do the low-paid 3D jobs, agriculture depends heavily on migrants from CEE countries, especially from Poland (Engbersen et al 2010). An estimated 30 percent of all CEE migrants in the Netherlands work in agri-food, contributing almost 2 billion euros to the country’s GDP in that sector (ABU 2018).

While technology can and does assist in and accelerate the harvesting process, this educated workforce can flexibly perform manifold tasks like identifying and communicating inconsistencies in products or processes to their supervisors, including plant illness, irregular production, etc. This makes them invaluable in improving agricultural production processes and output[2]. However, their working conditions remain precarious. Consequently, grasping the impact that technological innovations have on agriculture necessitates studying transnational labour.

To this end, ISS scholars – with the Centre for Research on Multinational Corporations (SOMO) – initiated a research project titled ‘Technological change in the agro-food sector in the Netherlands: mapping the role and responses of CEE migrant workers’. So far, it has included interviews with organisations in the agri-food sector, trade unions, engineering/labour experts, and migrant workers; this formed the basis for the MA theses of Petar Ivosevic and Tyler Williams. First results were discussed during an ISS workshop with practitioners in December 2018, and a follow-up workshop will be held on 17 March 2020. In addition, two sessions on the topic will be organised at the 2020 EADI Conference taking place from 29 June to 2 July at the ISS.

Industry versus Workers

To date, the benefits of automation for industry and farm workers are highly unevenly distributed. For example, technologies such as (semi-)automated LED lighting allow for more crops to be grown indoors, accelerating crop growth and extending the growing season. This benefits the agricultural industry and supermarkets by leading to all-year production. It also initially improved agricultural labour conditions: workers received a more stable, year-round income and a reduction in time spent working outdoors in difficult weather conditions. However, these improvements also brought negative consequences for labourers. The workweek increased (from 40 to roughly 60 hours – occasionally 80 hours – per week), and smart LED-lighting technologies, sterile environments, and novel ways of conserving heat and humidity created harsher working conditions (cf. Pekkeriet 2019).

Moving Forward

How can decent labour conditions for (migrant) farmworkers be ensured while further automation of agricultural workplaces takes place? First, further research involving (migrant) workers themselves, growers, and other practitioners is needed to inform policy. So far, policy debates on the future of agriculture have paid only scant attention to (migrant) workers and labour conditions. Labour ‘shortages’ in agriculture are often narrowly and one-sidedly discussed in terms of a supposed ‘unwillingness’ to work in agriculture per se or the tendency of CEE migrants to return to their home countries where economic growth has picked up. Such a position ignores the harsh (and often insecure) working conditions or postulates them as a given. It strongly underestimates the (potential) role of ‘worker-friendly’ implementation of new technologies and decent labour conditions in shaping the quality (and attractiveness) of farm work. Support from Dutch labour unions – which have started to organise and include CEE migrant workers – could increase migrant workers’ voice. Insecure, dependent work arrangements, language problems, and fragmentation of the migrant workforce have thus far impeded migrants’ own collective action. Finally, food certifications in the Netherlands primarily target food safety and sustainability. Including social (labour-related) criteria would reward farms with sound labour conditions[3].


[1] FNV Representative. 18 June 2018, interviewed by Karin Astrid Siegmann and Petar Ivosevic.
[2] Municipality Westland Presentation, World Horticulture Centre, 19 February 2019.
[3] For instance, the pillar of fair food in the slow food manifesto includes respectful labour conditions.

This article is part of a series launched by the EADI (European Association of Development Research and Training Institutes) and the ISS in preparation for the 2020 EADI/ISS General Conference “Solidarity, Peace and Social Justice”. It was also published on the EADI blog.


About the authors:

Tyler Williams recently completed the Migration and Diversity track of the ISS MA in Development Studies and co-organised the abovementioned workshop.

 


Oane Visser (associate professor, Political Ecology research group, ISS) leads an international research project on the socio-economic effects of and responses to big data and automatization in agriculture.

 

Karin Astrid Siegmann is a Senior Lecturer in Labour and Gender Economics at the International Institute of Social Studies (ISS), studying how precarious workers challenge marginalization of their labour.

 

Petar Ivosevic graduated from the ISS MA program in Development Studies in 2018, with a major in Agrarian, Food and Environmental Studies.