
ChatGPT can be our ally when conducting scientific research — but academic integrity must guide its use


Several papers recently published in peer-reviewed journals display obvious signs of having been written by the AI tool ChatGPT. This has sparked a heated online debate about the transparency of research communication and about academic integrity in cases where AI is used in the academic writing process. In this blog article, Kim Tung Dao discusses the ethical implications of using AI for academic writing and ponders the future impact of AI on academic research, urging a balance between the efficiency of AI tools and research integrity.

Used for everything from streamlining everyday tasks to revolutionizing industries, artificial intelligence (AI) has come to profoundly affect our lives in the past few decades. The emergence of new forms of AI in recent years has led to a heated debate in academia about whether students should be allowed to use AI tools — usually large language models (LLMs) such as ChatGPT — in their writing. And if they are permitted, a related question is to what extent they should be used, especially in higher education.

A new issue related to the rise of LLMs is now rearing its head within the realm of scientific research: the publication of LLM-generated content in peer-reviewed journals. This worrying trend not only reflects the rapid advancement of LLMs' ability to replicate human work but also raises questions about the ethics of research (communication) and about research integrity.

More and more researchers are attempting to leverage generative AI such as ChatGPT as a highly productive research assistant. It is very tempting to have an LLM compose content for you: these tools write in sophisticated language, appear to conduct statistical analyses seamlessly, and even discuss new research findings with apparent expertise. The line between human- and machine-generated content is blurring. In addition, LLMs work tirelessly and quickly, which can be highly beneficial for human scholars.

However, beneath the surface of effectiveness and efficiency lies a complex labyrinth of ethical concerns and potential repercussions for the integrity of scientific research. Publishing in academic journals remains the most common way for researchers to disseminate their findings, communicate with their peers, and contribute to scientific knowledge production. Peer review is meant to ensure that research findings and truth claims are meticulously evaluated by experts in the field, sustaining the quality and credibility of academic theories and policy recommendations. Hence, when papers with AI-generated content appear in peer-reviewed journals, readers can't help but question the integrity of the entire scientific publishing process.

There is a big difference between receiving assistance from generative AI and allowing it to generate entire research texts, or significant parts of them, without appropriate supervision and monitoring. Such supervision ranges from smaller tasks, such as proofreading AI-generated content before it is distributed or published, to the much more critical role of ensuring the originality and significance of AI-enhanced research. This article therefore reflects on researchers' misuse of AI in the writing of academic texts, comments on the shortcomings of the current peer-review system, and seeks to initiate a thoughtful discussion on the implications of AI for the future of research.

Falling through the cracks

The latest volume of Elsevier's journal Surfaces and Interfaces recently caught the attention of researchers on X (Twitter) because one of its papers had evidently been written with ChatGPT. The paper's first line reads: "Certainly, here is a possible introduction for your topic: […]." Anyone who has used ChatGPT will recognise this as the tool's typical opening to a prompt reply. No expertise in AI or any related field is needed to tell that this sentence, and at least the paragraph that follows it (if not many others), was generated by ChatGPT.
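Boilerplate like this is so formulaic that even a trivial screening script could flag it before publication. Here is a minimal sketch in Python; the phrase list is my own illustrative assumption, far from exhaustive, and any real screening tool would need a human to review every match:

```python
import re

# Illustrative list of boilerplate phrases that ChatGPT-style models often
# prepend to replies. This is an assumption for the sketch, not a vetted list.
TELLTALE_PHRASES = [
    r"certainly, here is a possible introduction",
    r"as an ai language model",
    r"i don't have access to real-time information",
    r"as of my last knowledge update",
]

def flag_llm_boilerplate(text: str) -> list[str]:
    """Return every telltale phrase found in a manuscript's text."""
    lowered = text.lower()
    return [p for p in TELLTALE_PHRASES if re.search(p, lowered)]

manuscript = "Certainly, here is a possible introduction for your topic: ..."
print(flag_llm_boilerplate(manuscript))
# -> ['certainly, here is a possible introduction']
```

A regular-expression match is obviously no substitute for peer review, but it shows how little effort it would have taken to catch the cases discussed here.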

But this paper is certainly not the only one in this new line of LLM-assisted publications. ChatGPT prompt replies have been found in papers published in various peer-reviewed journals and are not limited to any specific field of science. For example, a case report published in Radiology Case Reports (another Elsevier journal) includes a whole ChatGPT prompt reply stating: "I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about […], but for specific cases, it is essential to consult with a medical professional […]."

Hallucinating information

What is even more worrisome is the quality, integrity, and credibility of research produced with these LLMs, as ChatGPT tends to hallucinate information, drawing on citations and references that appear not to exist to support the text it generates. For example, in a forum discussion about detecting AI-generated content in academic publications, one contributor pointed out that they could not find the references cited in a paper titled "Automatic Detection of Coagulation of Blood in Brain Using Deep Learning Approach". Several other cases are mentioned in the same thread.
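Checking whether a cited work exists at all need not be laborious. As a rough sketch of a first-pass check, the snippet below queries the public Crossref REST API for works matching a cited title. An empty or unrelated result does not prove fabrication, since Crossref does not index everything, so every flag still needs follow-up by hand:

```python
import requests

def find_candidate_matches(cited_title: str, rows: int = 3) -> list[dict]:
    """Query Crossref for indexed works whose metadata matches a cited title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": cited_title, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [
        {"title": (item.get("title") or [""])[0], "doi": item.get("DOI")}
        for item in items
    ]

# A reference whose closest Crossref matches all look unrelated is a
# candidate for manual verification, not automatic rejection.
cited = "Automatic Detection of Coagulation of Blood in Brain Using Deep Learning Approach"
for match in find_candidate_matches(cited):
    print(match["doi"], "-", match["title"])
```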

Besides likely contributing to the publication of false or unevidenced information, the use of LLMs in the writing up of scientific research also highlights the failure of peer reviewers to catch or question these practices, revealing either carelessness or irresponsibility. The peer-review system has long served as the gatekeeper of scholarly knowledge, aiming to uphold the high standards of quality, integrity, and credibility that are part and parcel of academic research and publishing. But with obvious evidence of LLM-generated content appearing in papers published in peer-reviewed journals, it might be time to start questioning the transparency and accountability of the peer-review process. When a peer-reviewed publication opens with ChatGPT's typical prologue, it is reasonable to wonder how such an article was reviewed at all.

A call for responsible use

AI is not all bad. Clearly, it can be a powerful assistant throughout the research process, used for anything from brainstorming, developing research strategies, coding, analysing empirical results, and language editing to acting as a critical reviewer that provides useful feedback. But to work with this powerful assistant, researchers still need a solid knowledge of the research topic, must make the significant decisions on the research strategy themselves, and, most importantly, must ensure that the research makes an original contribution to the literature and has practical relevance. Relying heavily on AI to finish a research project without understanding its foundation and essence is plainly unethical and fraudulent.

AI is not a scientific researcher — and might never be

Beyond the immediate finger-pointing at the peer-review system and at research practices, the increasing influence of AI on research outputs carries broader implications for the role and integrity of human researchers, the nature of scientific discovery, and the social perception of AI. Even if the potential for deception and manipulation is set aside, AI-generated research outputs might still lack genuine insight and critical analysis and, without human guidance, might fail to take ethical considerations into account. Moreover, for research outputs to be meaningful for human life and society, they need to be validated by human researchers.

We don't necessarily need to fear AI; what we need to fear is the improper use of AI, and we need to play an active role in preventing it. Thus, instead of fearing being replaced by AI, we human researchers should acknowledge its abilities and use it to shape our projects. Let's board this ship of technological advancement to boost our research efficiency and accelerate the pace of scientific discovery. But let us remain cautious: we are responsible for ensuring that AI contributes to, rather than compromises, scientific knowledge production.

Writing this post with the help of ChatGPT 3.5 (which I used to improve my language), I can’t help but recall the question I was asked when receiving my doctoral degree: “Do you promise to continue to perform your duties according to the principles of academic integrity: honestly and with care; critically and transparently; and independently and impartially?”

I promise.

Opinions expressed in Bliss posts reflect solely the views of the author of the post in question.

About the author:

Kim Tung Dao is a recent PhD graduate of the International Institute of Social Studies. Her research interests include globalization, international trade, development, and the history of economic thought.


Epistemic Diversity | From ‘do no harm’ to making research useful: a conversation on ethics in development research by Karin Astrid Siegmann


Ethical dilemmas are part and parcel of the research processes that researchers are engaged in. This article details a recent conversation between ISS students and staff in which they tried to make sense of some of the ethical issues that researchers face. While the ‘do no harm’ principle was emphasised as an overall yardstick, the discussion went beyond that, raising broader questions about epistemic and social justice.


With thanks to Andrea Tauta Hurtado, Zhiren Ye, Kristen Cheney, Roy Huijsmans and Andrew Fischer.


Scholars in Development Studies are quick to brag about how relevant their research is for the underdogs of society. The reality is that representatives of marginalised groups rarely knock at our office doors to ask for scholarly support. In fact, development research often does harm by justifying economic and social inequalities, reproducing stereotypes and stigma, and misrepresenting or even erasing knowledge about the lives of marginalised people.

How can scholars prevent such harm from being done through their research? This question was discussed by ISS students majoring in Social Policy for Development and staff members in a workshop on “ethical, integrity, and security challenges”. The discussion aimed to prepare ISS students for their fieldwork. While in our conversation the ‘do no harm’ principle was emphasised as an overall yardstick for our research, the discussion went beyond that, raising broader questions about epistemic and social justice.

Challenges to informed consent and ensuring anonymity

Roy Huijsmans' example from his master's research on Dutch school-going children's employment experiences illustrated that research participants' informed consent is crucial, but also complicated by the power relations structuring the research arena. Teachers in his former school had facilitated meetings with their students, several of whom had expressed interest in and consented to participating in Roy's study. When he conducted telephone interviews with these children, however, some parents became suspicious: who is that adult male calling their child? Roy's experience raises the issue of whether it is adequate to understand informed consent individually. If not, what role do we give to the (in this case generational) power relations in which consent is embedded? Can ethics protocols that require consent from parents or other gatekeepers alongside the children's own consent answer these questions?

In my own research, class-based power relations call for special attention to research participants' anonymity. Referring to a recent study on working conditions in South Asian tea plantations, I flagged that if workers' and unionists' statements could be traced back to them, this could lead to their dismissal, or worse. Our research team addressed this concern by not providing names, neither of people nor of research locations. Andrew Fischer challenged me: would that really prevent identification? Few people are willing to stick their necks out as labour leaders, making those who do all the more easily recognisable.

One student followed up, asking how she could protect the identity of chemsex users (people having sex while using hard drugs), whose experiences she plans to investigate. Referring to the 'do no harm' principle, Roy encouraged her to reflect on the consequences of research participants' names leaking out: the Dutch government currently tolerates illegal drug consumption, so enforcement agencies are unlikely to arrest users. However, such political priorities can easily change over time. Andrew therefore recommended anonymising the transcripts, with the key linking pseudonyms to identities stored outside the computer, as sketched below.
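A minimal sketch of what that could look like in practice, assuming Python and simple exact-name matching; real transcripts would need more careful handling of nicknames, places, and indirect identifiers, and the key file itself must be encrypted and kept off the machine that holds the data:

```python
import json
import re
import secrets

def pseudonymise(transcript: str, names: list[str], key_path: str) -> str:
    """Replace real names with random pseudonyms and write the
    name-to-pseudonym key to key_path, which should point to a medium
    kept apart from the computer holding the transcripts."""
    key = {name: f"Participant-{secrets.token_hex(3)}" for name in names}
    for name, pseudonym in key.items():
        transcript = re.sub(re.escape(name), pseudonym, transcript)
    with open(key_path, "w", encoding="utf-8") as f:
        json.dump(key, f, indent=2)
    return transcript

# Hypothetical example: the re-identification key ends up on an offline USB stick.
print(pseudonymise(
    "Maria said the supervisor threatened Maria's crew.",
    names=["Maria"],
    key_path="/media/offline-usb/reid-key.json",
))
```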

The quest for epistemic justice and diversity

In recent years, I have become increasingly concerned with the responsible representation of the lives, concerns, and demands of the people who participate in my research, or, put differently, with epistemic justice. For instance, how will I represent the plantation workers who generously shared their experiences in our tea study? Will I do so in a way that responds to the academic pressure to publish in highly ranked journals with their specific theoretical fancies? Or will research participants' concerns guide my writing? This relates to questions that Marina Cadaval and Rosalba Icaza raise in their earlier post on this blog: 'who generates and distributes knowledge, for which purposes, and how?'

Other participants in the discussion shared this concern for fair representation. The student who engages with chemsex users' experiences was acutely aware of the role of race in her research. In exploratory interviews, she learned how race shapes the exercise of power in chemsex users' sexual relationships and how it either enables or bars their access to support from the healthcare system. How can she do justice to participants' narratives without simultaneously repeating and reinforcing the underlying stereotypes?

For me, one way to pursue this quest for epistemic justice has been to engage in processes of activist scholarship, i.e. in collaboration and joint knowledge production with people who struggle for recognition and redistribution. Activist scholarship involves moves towards epistemic diversity, challenging the widely assumed supremacy of scientific knowledge predominantly produced in Northern academic institutions. For instance, I have been involved in the campaign of a Florida-based farmworker organisation, the Coalition of Immokalee Workers (CIW), to make the Dutch retailer Ahold sign on to its programme for better working conditions in US agriculture. In dialogue with the CIW, I have written about lessons from that campaign for how precarious workers can effectively organise. Sruti Bala points out that this implies 'to listen to articulations radically different from the frameworks that I may be trained in, but more than good listening is required in order for those articulations and insights to translate themselves into what we might call knowledge'. These processes of listening, dialoguing, and learning didn't lead to 'consensus-based writing', though: we had disagreements, and I tried to make them visible in my writing.

Moreover, there may be internal power hierarchies within the movements with which we collaborate. My colleague Silke Heumann has warned that through our decisions about who participates in our research and who doesn't, we run the risk of reinforcing existing power relations and of legitimising an elite's perspective on a movement.

This approach may not be feasible for a master's thesis. What is possible in most cases, though, is to seek research participants' feedback on, critique of, and validation of how I understood our conversations and of my wider observations about their lives. Time is a key resource in this effort to respect their knowledge as experts on their own lives. Taking time for research participants, rather than racing from one respondent to the next, enables us to conduct research in a more responsible manner. I want to integrate this principle ever more firmly into my research because I believe it not only helps to prevent harm; over and above that, it enables me to treat my research participants and their concerns with care. The more time I plan for and spend on engaging with those who participate in my research, the greater the likelihood that the research will embody epistemic justice.


 

This article forms part of a series on Epistemic Diversity. You can read the other articles here and here.

About the author:

Holding a PhD in Agricultural Economics, Dr Karin Astrid Siegmann works as a Senior Lecturer in Labour and Gender Economics at the International Institute of Social Studies (ISS) of Erasmus University Rotterdam in The Hague, the Netherlands. She is the convenor of the ISS Major in Social Policy for Development (SPD).