
ICYMI: ChatGPT - A Boon or Threat to Scientific Publication?

Editor's note: This article originally appeared February 6, 2023, and is being shared again this week in case you missed it. 

ChatGPT is a new artificial intelligence chatbot that has dramatically changed the digital worlds of education, research, graphic design, statistics, and more. While this AI-driven platform has untold potential for generating written content, there is considerable concern about ensuring the veracity of human-generated content in research, education, and publishing.

ChatGPT was created and released by OpenAI, a Microsoft-supported company, in November 2022 (GPT stands for “generative pretrained transformer”).

It is a freely accessible natural language processing tool born of AI and delivered by a chatbot that simulates human conversation in response to questions and queries. The application can assist with writing, analysis, and research, including grammar, language, references, statistical analysis, and reporting standards. It can also help detect plagiarism, image manipulation, and ethical issues.

ChatGPT is a prolific tool that can facilitate all of the above. Whether it will further our learning or be misused (e.g., for homework assignments, student essays, and examinations) remains a topic of debate. There have recently been reports of ChatGPT passing the LSAT, the U.S. Medical Licensing Examination (USMLE), and other medical licensing examinations.

In January 2023, Nature reported on published articles that included ChatGPT as a bylined (nonhuman) author, noting that such authorship had already been indexed in PubMed and Google Scholar.

Nature, other journals, and publishing organizations are developing policies that bar nonhuman “authors” and the technologies that generate such content. In situations where human authors use these AI tools, transparency and detailed descriptions of the tools and how they were applied are expected in the Methods or Acknowledgment sections.

The scholarly publishing community is concerned about potential misuse of these language models in scientific publication. Many have acknowledged that these AI tools are not ready to be used as a source of trusted information, and certainly not without transparency and human accountability for their use.

JAMA and the JAMA Network journals have addressed these concerns by revising the authorship policies in the journals’ Instructions for Authors. While defining criteria for authorship credit, accountability, and transparent reporting of writing or editing, they acknowledge that these criteria will continue to evolve. The overarching principle is that authors must take responsibility for the integrity of the content generated by these novel models and tools. The JAMA revisions include, in part:

Author Responsibilities

  • Nonhuman artificial intelligence, language models, machine learning, or similar technologies do not qualify for authorship.
  • If these models or AI tools are used to create content or assist in writing or manuscript preparation, the authors must take responsibility for the integrity of the content generated and, when appropriate, report the use of artificial intelligence, language models, machine learning, or similar technologies to create content in the Acknowledgment or Methods sections.
  • This should include a description of the content that was created or edited and the name of the language model or tool, version and extension numbers, and manufacturer.

Reproduced and Re-created Material

  • Content created by artificial intelligence, language models, machine learning, or similar technologies is discouraged.

Image Integrity

  • Submission and publication of images created by artificial intelligence, machine learning tools, or similar technologies is discouraged, unless part of formal research design or methods, and must be clearly described (as above).

Statistical Analysis

  • If authors use statistical analysis software, they are required to adhere to the EQUATOR Network reporting guidelines, including guidance for trials that include AI interventions (e.g., CONSORT-AI and SPIRIT-AI) and for machine learning in modeling studies (e.g., MI-CLAIM).

Publishers’ practices will evolve; in the future they may screen for AI-generated content or, alternatively, affirm human-generated content.

“In this era of pervasive misinformation and mistrust, responsible use of AI language models and transparent reporting of how these tools are used in the creation of information and publication are vital to promote and protect the credibility and integrity of medical research and trust in medical knowledge.”

Join The Discussion

Muhammad Ahmed Saeed

| Jul 11, 2023 7:08 am

It is indeed a very interesting and challenging process.
We as a community have to be flexible but at the same time should not compromise on integrity.


Disclosures
The author has no conflicts of interest to disclose related to this subject.