Tag Archives: Artificial intelligence

From Hands-On to High-Tech: How Dutch Care Workers Navigate Digitalization and Robotization


Whether we embrace it or not, digital technologies and AI are here to stay, and they are fundamentally changing the human world of labour. As new technologies revolutionize the healthcare landscape, they are reshaping the lives and work of care workers. In this blog, Sreerekha Sathi shares insights from her research on how digital technologies are reshaping care work in the Netherlands: how these innovations are affecting care workers, and how care homes are adapting to digital solutions and AI-assisted robotics. What specific forms of AI-assisted robotics are currently being used in Dutch care homes, and how can we evaluate the benefits, challenges and risks associated with their implementation?

Source: Unsplash

Digitalization, robotization and the care worker

The Dutch healthcare sector faces growing inequality in access to care, staff shortages, rising workloads and a rapidly aging population. Around two thousand government-funded care homes serve the elderly and people with dementia, disabilities and other care needs.

Like other countries in Europe, the Netherlands has been experimenting with digitization and robotization in health care. Over the past two decades, AI-assisted digital tools and Socially Assistive Robots (SARs) have become more common in surgery, patient monitoring, consultations, diagnostics, rehabilitation, telemedicine, and cognitive and emotional care, especially in the post-pandemic period (Getson & Nejat 2021; Kang et al. 2023). Globally, countries like China and Japan lead these developments, with Sweden and the Netherlands close behind.

The use of digital solutions and AI-assisted robotics has moved beyond the experimental phase into early adoption. Current discussion focuses on opportunities for collaboration between private companies, academic institutions and healthcare providers. This pilot study involved conversations with a few care workers in care homes, innovation managers, company officials and academic scholars in the Netherlands.

Conversations with care workers show that most technologies in use are still relatively simple – medication dispensers, sensor systems and communication tablets – selected for their affordability and ease of use. Once prescribed, digital care tools like Compaan, Freestyle Libre, MelioTherm, Medido, Sansara or Mono Medical are introduced to clients by neighbourhood digital teams, usually via smartphone apps connected through Wi-Fi as part of online digital care.

The introduction of robots is slowly gaining ground. Many universities, including Erasmus University, are collaborating with private companies on new projects in robotization and digitalization in health care. Robots currently popular in Europe include TinyBots (Tessa), Zorabots (NAO), Pepper, Paro and other robotic pets, and SARA, which supports dementia patients. Some care workers believe that robots promote social contact and enhance patients’ independence, while others appreciate that robots taking over peripheral tasks can make their own work easier.

Care workers are required to learn and engage with new technologies that directly affect their everyday lives. Although they are relatively well paid, their workload and stress often exceed what their pay reflects. Larger, well-funded care homes employ lower-paid support staff who assist care workers with indirect, non-medical tasks. When new technologies are introduced without sufficient involvement and input from workers, they can add to workers’ burden in terms of time and labour. For these workers, new technologies are often ‘thrown over the fence’, with insufficient training or involvement of care workers in design or decision-making, leading to frustration, resistance and underuse even when the tools are effective. They argue: ‘we don’t need fancy tools – just the right tools used in the right way.’

Many workers feel that if a robot takes on physical tasks, care workers can give clients more time and attention. When the purpose of a tool is clearly explained, and workers remain present in critical moments, clients and families are more accepting of new technology.

Gender and labour in new technologies

Feminist Science and Technology Studies (FSTS) has long shown how technologies carry gendered biases. Feminist histories of computing have highlighted women’s contributions to the invention and introduction of computers and software (Browne, Stephen & McInerney, 2023). A relevant question to explore today is whether new technologies using AI-assisted robotics will replicate the same biases. Although new technologies are often presented as objective, they are built upon datasets and assumptions that can reproduce biases and stereotypes, depending on the data fed into them and on who builds and accesses them (1). Robots, for instance, often reflect idealized gendered traits: nurse robots are designed with feminine or childlike features – extroverted and friendly – while ‘techno-police’ styled security robots are stoic and masculine.

Care work remains a heavily gendered profession, though more men are joining the field. While some male care workers face occasional pushback from clients, they are increasingly welcomed amid staff shortages. Many care workers worry about being replaced by robots, yet most agree that the emotional presence of caregivers – especially in elderly and dementia care – remains essential: robots may support, but cannot substitute for, the human connection that defines good care work.

Further, workers stress that technology must be context-sensitive: its success depends on the socio-economic profile of the area, staff availability and the lived preferences of the people receiving care. They advocate flexible, context-based implementation rather than top-down standardization of new machines. Core to the debates on digitalization and robotization in care are ethical issues that are often narrowly framed as privacy concerns but that extend to autonomy, emotional dignity, and growing surveillance and inequality.

Insights into the future

The study observes that many attempts to introduce digital technologies or robotics in care homes stall in the pilot phase, often disliked or abandoned by care professionals or clients. Care workers need time and training to trust these devices, especially regarding the risks and uncertainties involved. They emphasize early involvement through co-design as essential for building trust, transparency and accountability. For sustainable implementation, the focus should shift from what is ‘new’ to what is ‘useful’.

Future debates will likely centre on whether to prioritize digitization in health care or SARs in physical care. Persistent challenges range from time constraints to software failures (Huisman & Kort 2019). As efforts to create ‘smart homes’ and support independent living continue (Allaban, Wang & Padir 2020), environmental sustainability and climate resilience must become priorities.

Another important step is to critically analyze the growing corporatization and monopolization of digitization and robotization (Zuboff, 2019; Hao, 2025). Rather than leaving healthcare innovation to monopolies or private capital, public or community-based state welfare support must retain agency in how digital and robotic tools are implemented. Finally, redirecting resources from military robotics towards socially beneficial technologies – such as health care or waste management – needs to be prioritized.

As a work in progress, this research is significant for understanding the social impacts of digitalization and robotization. In the next step of this study, these conversations will bring together care workers, academics and innovation managers from the global South and the global North to foster dialogue about how these changes are reshaping the healthcare economy, care homes and the future of care workers.


End Note:

  1. A focus on changing forms of labour, along with concerns around gender stereotypes and gendered knowledge attributed to social robots, is important for further exploration in the field of AI-assisted occupations. The introduction of new machines relies on invisible human labour – mostly performed by ‘ghost workers’ from the global South, whether in data work, coding or mining. Stereotypes inherent to existing social contexts, including those of gender, class and race, are already heavily compromising the digital world.

Acknowledgements: This research was supported by a small grant from the Erasmus Trustfonds for 2024-2025, which enabled me to embark on this short study to explore these questions. Although the grant period concluded in June 2025, the research continues. I would like to thank Ms. Julia van Stenis for her invaluable support in making this study possible.


Opinions expressed in Bliss posts reflect solely the views of the author of the post in question.


About the author:

Sreerekha Sathi

Sreerekha Sathi works on issues of gender, political economy, and critical development studies. Her current research explores the intersections of gender, care, and labour with digitalization, AI, and the future of work, and engages with critical debates in decolonial thought. She is a member of the editorial board of Development and Change.


Are you looking for more content about Global Development and Social Justice? Subscribe to Bliss, the official blog of the International Institute of Social Studies, and stay updated about interesting topics our researchers are working on.

ChatGPT can be our ally when conducting scientific research — but academic integrity must guide its use


Several papers that have recently been published in peer-reviewed journals display obvious signs of having been written by the AI tool ChatGPT. This has sparked a heated online debate about the transparency of research communication and academic integrity in cases where AI is used in the academic writing process. In this blog article, Kim Tung Dao discusses the ethical implications of using AI for academic writing and ponders the future impact of AI in academic research, urging for a balance between the efficiency of AI tools and research integrity.

Used for everything from streamlining everyday tasks to revolutionizing industries, artificial intelligence (AI) has come to profoundly affect our lives in the past few decades. The emergence of new forms of AI in recent years has led to a heated debate in academia about whether students should be allowed to use AI tools — usually large language models (LLMs) such as ChatGPT — in their writing. And if they are permitted, a related question is to what extent they should be used, especially in higher education.

A new issue related to the rise of LLMs is now rearing its head within the realm of scientific research: the publication of LLM-generated content in peer-reviewed journals. This worrying trend not only reflects the rapid advancements in LLMs’ ability to replicate human work but also gives rise to discussions on the ethics of research (communication) and research integrity.

More and more researchers are attempting to leverage generative AI such as ChatGPT as a highly productive research assistant. It is very tempting to have an LLM compose content for you: AI-generated pieces often exhibit sophisticated language, present statistical analyses seamlessly, and even discuss new research findings expertly. The line between human- and machine-generated content is blurring. In addition, these LLMs work tirelessly and quickly, which can be highly beneficial for human scholars.

However, beneath the surface of effectiveness and efficiency lies a complex labyrinth of ethical concerns and potential repercussions for the integrity of scientific research. Publishing academic research in journals remains the most popular way for many researchers to disseminate their findings, communicate with their peers, and contribute to scientific knowledge production. Peer reviewing ensures that research findings and truth claims are meticulously evaluated by experts in the field to sustain quality and credibility in the formulation of academic theories and policy recommendations. Hence, when papers with AI-generated content are published in peer-reviewed journals, readers can’t help but question the integrity of the entire scientific publishing process.

There is a big difference between receiving assistance from generative AI and allowing it to generate entire research texts, or significant parts of them, without appropriate supervision and monitoring. Such supervision can entail smaller tasks, such as proofreading AI-generated content before its distribution or publication, but it also plays a much more critical role in ensuring the originality and significance of AI-enhanced research. This is why this article reflects on the abuse of AI by researchers in the writing of academic texts and comments on the insufficiency of the current peer-review system. I also try to initiate a thoughtful discussion on the implications of AI for the future of research.

Falling through the cracks

The latest volume of Elsevier’s Surfaces and Interfaces journal recently caught the attention of researchers on X (Twitter), as one of its papers had evidently been written by ChatGPT. The first line of the paper states: “Certainly, here is a possible introduction for your topic: […].” Any ChatGPT user knows that this is the typical opening the LLM generates when responding to a prompt. Even without expertise in AI or related fields, a ChatGPT user with common sense can therefore tell that this sentence, and at least the paragraph that follows, if not many others, was generated by ChatGPT.

But this paper is certainly not the only one in this new line of LLM-generated publications. ChatGPT prompt replies have been found in other papers published in different peer-reviewed journals and are not limited to any specific fields of science. For example, a case report published in Radiology Case Reports (another Elsevier journal) includes a whole ChatGPT prompt reply stating “I’m very sorry, but I don’t have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about […], but for specific cases, it is essential to consult with a medical professional […].”

Hallucinating information

What is more worrisome is the quality, integrity and credibility of scientific research produced with these LLMs, as ChatGPT has a tendency to hallucinate information and draw on seemingly non-existent citations and references to support the texts it generates. For example, in a forum discussion on detecting AI-generated content in academic publications, one contributor pointed out that they could not find the references cited in a paper titled “Automatic Detection of Coagulation of Blood in Brain Using Deep Learning Approach”. Several other cases are mentioned in the discussion thread.

Besides likely contributing to the publication of false or unevidenced information, the use of LLMs in the writing up of scientific research also highlights the failure of peer reviewers to catch or question these practices, showing either their carelessness or their irresponsibility. The peer-review system has long served as the gatekeeper of scholarly knowledge, aiming to uphold the high standards of quality, integrity, and credibility that are part and parcel of academic research and publishing. But with obvious evidence of LLM-generated content being included in papers published in peer-reviewed journals, it might be time to start questioning the transparency and accountability inherent in the peer-review process. When a peer-reviewed publication starts with ChatGPT’s typical prologue, it is reasonable to wonder how the article was reviewed.

A call for responsible use

AI is not all bad. Clearly, it can be a powerful assistant in the research process, used for anything from brainstorming, developing research strategies, coding, analyzing empirical results, and language editing to acting as a competent critical reviewer providing useful feedback. But to work with this powerful assistant, researchers still need a solid knowledge of the research topic, must make the significant decisions on research strategy, and, most importantly, must ensure that the research makes an original contribution to the literature and can be applied. Relying heavily on AI to finish a research project without understanding the foundation and the essence of the research is plainly unethical and fraudulent.

AI is not a scientific researcher — and might never be

Beyond the immediate finger-pointing at the peer-review system and research practices, the increasing influence of AI on research outputs carries broader implications for the role and integrity of human researchers, the nature of scientific discovery, and the social perception of AI. Even setting aside the potential for deception and manipulation, AI-generated research outputs might still lack genuine insight and critical analysis, and might fail to take ethical considerations into account without human guidance. Moreover, for research outputs to be meaningful for human life and society, they need to be validated by human researchers.

We don’t necessarily need to fear AI; we do need to fear its improper use, and we need to play an active role in preventing this from happening. Thus, instead of fearing being replaced by AI, human researchers should acknowledge its abilities and use it to shape our projects. Let’s board this ship of technological advancement to increase our research efficiency and accelerate the pace of scientific discovery. But let us remain cautious: we are responsible for ensuring that AI contributes to, instead of compromising, scientific knowledge production.

Writing this post with the help of ChatGPT 3.5 (which I used to improve my language), I can’t help but recall the question I was asked when receiving my doctoral degree: “Do you promise to continue to perform your duties according to the principles of academic integrity: honestly and with care; critically and transparently; and independently and impartially?”

I promise.

Opinions expressed in Bliss posts reflect solely the views of the author of the post in question.

About the author:

Kim Tung Dao is a recent PhD graduate of the International Institute of Social Studies. Her research interests include globalization, international trade, development, and the history of economic thought.


