ChatGPT can be our ally when conducting scientific research — but academic integrity must guide its use


Several papers recently published in peer-reviewed journals display obvious signs of having been written by the AI tool ChatGPT. This has sparked a heated online debate about the transparency of research communication and academic integrity in cases where AI is used in the academic writing process. In this blog article, Kim Tung Dao discusses the ethical implications of using AI for academic writing and ponders the future impact of AI on academic research, urging a balance between the efficiency of AI tools and research integrity.

Used for everything from streamlining everyday tasks to revolutionizing industries, artificial intelligence (AI) has come to profoundly affect our lives in the past few decades. The emergence of new forms of AI in recent years has led to a heated debate in academia about whether students should be allowed to use AI tools — usually large language models (LLMs) such as ChatGPT — in their writing. And if they are permitted, a related question is to what extent they should be used, especially in higher education.

A new issue related to the rise of LLMs is now rearing its head within the realm of scientific research: the publication of LLM-generated content in peer-reviewed journals. This worrying trend not only reflects the rapid advancement of LLMs’ ability to replicate human work but also gives rise to discussions on the ethics of research (communication) and research integrity.

More and more researchers are attempting to leverage generative AI such as ChatGPT as a highly productive research assistant. It is very tempting to have an LLM compose content for you, as these AI-generated pieces often exhibit sophisticated language, present seamless statistical analyses, and even discuss new research findings with apparent expertise. The line between human- and machine-generated content is blurring. In addition, these LLMs work tirelessly and quickly, which can be considered highly beneficial for human scholars.

However, beneath the surface of effectiveness and efficiency lies a complex labyrinth of ethical concerns and potential repercussions for the integrity of scientific research. Publishing academic research in journals remains the most popular way for many researchers to disseminate their findings, communicate with their peers, and contribute to scientific knowledge production. Peer reviewing ensures that research findings and truth claims are meticulously evaluated by experts in the field to sustain quality and credibility in the formulation of academic theories and policy recommendations. Hence, when papers with AI-generated content are published in peer-reviewed journals, readers can’t help but question the integrity of the entire scientific publishing process.

There is a big difference between receiving assistance from generative AI and allowing it to generate entire research texts, or significant parts of them, without appropriate supervision and monitoring. Such supervision can entail smaller tasks, such as proofreading AI-generated content before its distribution or publication, but it also plays a much more critical role in ensuring the originality and significance of AI-enhanced research. This article therefore reflects on the abuse of AI by researchers in the writing of academic texts, comments on the insufficiency of the current peer-review system, and seeks to initiate a thoughtful discussion on the implications of AI for the future of research.

Falling through the cracks

The latest volume of Elsevier’s Surfaces and Interfaces journal recently caught the attention of researchers on X (Twitter), as one of its papers has evidently been written by ChatGPT. The first line of the paper states: “Certainly, here is a possible introduction for your topic: […].” Any ChatGPT user knows that this is the typical opening the LLM generates when responding to a prompt. Even without any expertise in AI or related fields, an ordinary ChatGPT user can therefore tell that this sentence, and at least the paragraph that follows it, if not many others, have been generated by ChatGPT.

But this paper is certainly not the only one in this new line of LLM-generated publications. ChatGPT prompt replies have been found in other papers published in different peer-reviewed journals and are not limited to any specific fields of science. For example, a case report published in Radiology Case Reports (another Elsevier journal) includes a whole ChatGPT prompt reply stating “I’m very sorry, but I don’t have access to real-time information or patient-specific data, as I am an AI language model. I can provide general information about […], but for specific cases, it is essential to consult with a medical professional […].”

Hallucinating information

What is more worrisome is the quality, integrity, and credibility of scientific research produced with these LLMs, as ChatGPT has a tendency to hallucinate information and to draw on apparently non-existent citations and references to support the texts it generates. For example, in a forum discussion where contributors talked about detecting AI-generated content in academic publications, one contributor pointed out that they could not find the references cited in a paper titled “Automatic Detection of Coagulation of Blood in Brain Using Deep Learning Approach”. Several other cases are mentioned in the discussion thread.

Besides likely contributing to the publication of false or unevidenced information, the use of LLMs in the writing up of scientific research also highlights the failure of peer reviewers to catch or question these practices, revealing either carelessness or irresponsibility. The peer-review system has long served as the gatekeeper of scholarly knowledge, aiming to uphold the high standards of quality, integrity, and credibility that are part and parcel of academic research and publishing. But with clear evidence of LLM-generated content appearing in papers published in peer-reviewed journals, it might be time to start questioning the transparency and accountability of the peer-review process. When a peer-reviewed publication opens with ChatGPT’s typical prologue, it is reasonable to wonder how such an article was reviewed at all.

A call for responsible use

AI is not all bad. Clearly, it can be a powerful assistant to researchers throughout the research process, used for anything from brainstorming, developing research strategies, coding, and analyzing empirical results to language editing and acting as a critical reviewer that provides useful feedback. But to work with this powerful assistant, researchers still need to have solid knowledge of the research topic, make the significant decisions on research strategy themselves, and, most importantly, ensure that the research makes an original and applicable contribution to the literature. Relying heavily on AI to finish a research project without understanding its foundation and essence is plainly unethical and fraudulent.

AI is not a scientific researcher — and might never be

Beyond the immediate finger-pointing at the peer-review system and research practices, the increasing influence of AI on research outputs carries broader implications for the role and integrity of human researchers, the nature of scientific discovery, and the social perception of AI. Even setting aside the potential for deception and manipulation, AI-generated research outputs may lack genuine insight and critical analysis and may fail to take ethical considerations into account without human guidance. Moreover, for research outputs to be meaningful for human life and society, they need to be validated by human researchers.

We don’t necessarily need to fear AI; we do need to fear its improper use, and we need to play an active role in preventing this from happening. Thus, instead of fearing being replaced by AI, human researchers should start acknowledging its abilities and using it to shape our projects. Let’s board this ship of technological advancement to boost our research efficiency and accelerate the speed of scientific discovery. But let us remain cautious: we are responsible for ensuring that AI contributes to, rather than compromises, scientific knowledge production.

Writing this post with the help of ChatGPT 3.5 (which I used to improve my language), I can’t help but recall the question I was asked when receiving my doctoral degree: “Do you promise to continue to perform your duties according to the principles of academic integrity: honestly and with care; critically and transparently; and independently and impartially?”

I promise.

Opinions expressed in Bliss posts reflect solely the views of the author of the post in question.

About the author:

Kim Tung Dao is a recent PhD graduate of the International Institute of Social Studies. Her research interests include globalization, international trade, development, and the history of economic thought.


Are you looking for more content about Global Development and Social Justice? Subscribe to Bliss, the official blog of the International Institute of Social Studies, and stay updated about interesting topics our researchers are working on.


Development Dialogue 19 | Why we need alternatives to mainstream education — and how the ‘Nook’ model of learning can show us the way


Contemporary education models continue to reflect and perpetuate colonial educational priorities and are thereby intricately tied to goals of shaping ‘children as future adults’ and creating a ‘productive’ workforce through education. In the process, they exclude marginalised groups of people, denying them the opportunity to learn and thrive. Alternatives to mainstream education models have been sought all over the world and are gaining traction. In this blog article, Anoushka Gupta discusses ‘Nooks’, alternative community learning spaces that non-profit organisation Project DEFY has introduced in several Asian and African countries, and shows how they are transforming the way in which people approach learning.

Learners working on projects during the design phase. Source: Project DEFY.

Situating systemic challenges within mainstream education models

It has long been recognised that several mainstream education models are outdated and fail to enable individuals and communities to respond to emerging challenges. Yet, not much has been done to question the foundational principles of these models or to find enduring alternatives. Such alternatives are needed particularly in Asia and Africa, where several systemic challenges confront educational systems.

It is well known, for example, that the founding principles of schooling systems rest on the assumption that child development is a linear process — it is thereby assumed that a child of a particular age must learn certain skills and competencies before progressing further[1]. As a result, as children move through school, their worth is increasingly tied to their performance in standardized examinations, placing immense pressure on them to do well and limiting opportunities to explore interests or enjoy the process of learning. Metrics to understand what constitutes ‘success’ over the years (through assessment results or further educational trajectories) have standardised experiences and divorced education from its local context[2].

Moreover, differences in material wealth and social location play an important role in understanding variations in ‘success’ defined through assessment results. For example, Dalit and Adivasi communities in India who were historically excluded from economic resources and formal educational systems face challenges in meeting the uniform testing criteria, which puts them at a disadvantage in many disciplines and professions even today[3]. In Uganda, high rates of teenage pregnancy and associated stigma reproduce exclusion and drive girls to drop out[4].

These instances demonstrate that mainstream schooling is built on rigid eligibility rules and criteria for success that fail to secure an environment where learners feel safe and heard and where they can explore their interests instead of sticking to uniform curricula, often detached from their own realities. In the next section, I will show how the Nook learning model seeks to contend with such hegemonic education models and creates safe spaces in which learners can thrive without excessive pressure to perform.

Questioning why we learn

First conceptualised in 2016 by Abhijit Sinha, founder of the India-based non-profit organisation Project DEFY,[5] Nooks are physical community learning environments located in under-resourced places that are accessible to learners irrespective of their age, gender, marital status, and socio-economic background. These spaces are built on questioning the fundamental purpose of learning, which for mainstream models often means creating a productive workforce by teaching learners standardised knowledge and skills instead of centring interest as the main driver of learning.

Sinha’s experiment started in a small village in Karnataka, India. Disillusioned with his own educational experiences in one of India’s top engineering colleges, he envisioned a space equipped with basic tools and without strict instructions or rules that would push learners to really explore their interests and would encourage resourcefulness, teamwork, and innovation. These spaces later expanded, went through several iterations, and became the ‘Nooks’ they are today. And they continue to be adapted to new conditions and the needs of learners and communities. Since 2016, 41 Nooks have been set up and 32 are currently operational through partnerships with local organisations across Uganda, Rwanda, Zimbabwe, India, and Bangladesh.

The freedom to choose how (and what) to learn

Nooks follow ‘self-designed learning’ as their pedagogical orientation, the core belief being that learners define and design their own educational goals in an enabling environment. Each space is equipped with basic tools, raw materials, the internet, and laptops, and has two fellows who act as mentors.

The Nook follows a cycle-based structure comprising four stages:

  1. Exploration — fellow-guided sessions that introduce learners to diverse learning areas (from robotics to art to storytelling).
  2. Goal Setting — the identification and articulation by learners of a specific learning goal based on their interest, either from areas in the exploration stage or something totally different, as well as their definition of the steps and resources required to translate the goal into a project.
  3. Design — the execution by learners of the project, which they spend approximately three to six months on (the length of the cycle differs depending on the Nook).
  4. Exhibition — the presentation of their work at an event known as an ‘external exhibition’, which is used as a platform for showcasing learner projects to community members and external stakeholders.

Conversations, reflections, and enjoyment

In each cycle, beyond working on projects, learners gather twice a day in opening and closing circles to discuss any troubles they have faced, be it related to their project or something that bothers them in general. Reflections during these designated discussion hours are meant to build a sense of community in the Nook. Many learners have chosen to take up problems in their community – for instance, learners are trying to tackle environmental pollution in the Barishal Nook in Bangladesh. This approach to learning allows individuals to share challenges without judgment and allows them to flexibly explore their interests without assessments or pressures of completion. It intends to recentre the role of learners’ agency and to foster an understanding of individuals as part of a larger collective.

An opening circle in one of the Nooks. Source: Project DEFY.

The Nooks have also had a wider impact. First, self-designed learning naturally implies that projects differ across and within Nooks. A common thread, however, is that learners tend to pick up problems they see in their surroundings or delve deeper into an area they were curious about. In the Bulawayo Nook in Zimbabwe, for example, a learner articulated his desire to build an artificial limb, explaining, “Personally, I need it. I would also want to help other people in my community who are disabled once I achieve this goal. The cost of artificial legs is very expensive in the country so that is why I decided to make a cheaper and innovative one”.

Several learners also revealed that their goals challenged normative gendered ideas of learning and work. For instance, in the Gahanga Nook in Rwanda, a female learner spoke of how she intended to learn tailoring initially. However, with exposure to different areas, she discovered her interest in welding despite initial resistance from her family. With time and through encouragement from peers and fellows, she created a hanger and a garden chair, ultimately convincing her family to support her.

Lastly, Nooks foster a community identity. Before Nooks are set up, a community mapping exercise is carried out to understand how the space could add value to the lives of community members. The eventual goal of each Nook is for learners to run it independently. While Nooks are still young and none is yet run entirely by its learners, several seeds of leadership have been sown within them. Beyond taking on day-to-day responsibilities, steering opening and closing circles, and mentoring fellow learners, the transition of several learners to Nook facilitator roles is encouraging.

Expanding the ‘idea’ behind and beyond Nooks — some final takeaways

Globally, enhancing access to schooling is hailed as a marker of development. Yet, the exclusion and disempowerment that are part of both the design and implications of such beliefs are rarely questioned. In contexts where disempowerment stems from wider socio-economic barriers that trickle down to schooling, Nooks demonstrate the value of learning spaces that allow flexibility to explore one’s interests without imposing restrictions on what to learn. In turn, the emphasis on contextual learning and engagement with community challenges as part of the learning journey seeks to upturn individualised notions of education.

Finally, while ‘community-led development’ is increasingly used as the go-to buzzword among development practitioners and donors, very few are truly willing to let go of predetermined criteria to measure the ‘output’ and ‘outcomes’ of education interventions. Truly recognising the agency of the learners and communities means first questioning our own metrics of what constitutes ‘success.’


This blog article draws on a recent working paper published by Project DEFY that can be accessed here.


References:

[1] Prout, A. & James, A. (1997) ‘A New Paradigm for the Sociology of Childhood? Provenance, Promise and Problems’ in Prout, A. & James, A. (ed.) Constructing and Reconstructing Childhood: Contemporary Issues in the Sociological Study of Childhood. Second edition. London: Falmer Press. pp. 7-32.

[2] Ydesen, C. & Andreasen, K. (2020) ‘Historical Roots of the Global Testing Culture in Education’, Nordic Studies in Education, 40(2), pp. 149-166. DOI: 10.23865/nse.v40.2229

[3] See Ch. 2, ‘School Education and Exclusion’, in India Exclusion Report 2013-14, pp. 44-75. Available at: IndiaExclusionReport2013-2014.pdf (idsn.org)

[4] ‘Study Report on Linkages between Pregnancy and School Dropout’. Available at: Study-report-on-Linkages-between-Pregnancy-and-School-dropout.pdf (faweuganda.org)

[5] For more on Project DEFY, see https://hundred.org/en/innovations/project-defy-design-education-for-yourself


About the author:

Anoushka Gupta is a researcher based out of India. Her research interests include child and youth wellbeing, understanding social exclusion, and utilising participatory methods in community-based research. She has worked extensively with non-profit organisations primarily in India on educational quality and community-based learning models. She previously majored in Social Policy as part of the MA in Development Studies from the International Institute of Social Studies, Erasmus University Rotterdam and holds a Bachelor’s degree in History from St. Stephen’s College, University of Delhi.