Anyone with access to the internet now has free access to artificial intelligence (AI) applications that can quickly develop text-based responses to specific questions. Large language model applications such as ChatGPT have made it possible to construct research manuscripts, abstracts, and letters to the editor that are extremely difficult to differentiate from human-derived work (see Supplementary Material).
This rapid improvement in AI capabilities may offer some benefits to journals, publishers, readers, and, ultimately, patients. For example, large language models such as ChatGPT might – with suitable human oversight – be able to create plain-language summaries of complex research quickly and at scale, which might make the scientific record more accessible to the public.1 AI-based tools also may facilitate the creation of consistent, clear visual presentations of complex data. And, of course, an exciting feature of transformative technologies is the potential for benefits that we cannot imagine at the outset.
However, misuse of these tools can undermine the integrity of the scholarly record; indeed, there are examples of this happening already. Researchers and authors should be aware that AI-detection software is being actively refined; once such tools are available, our journals will deploy them in the same way that plagiarism-detection software is currently used. Some have suggested that large language models should be considered authors; in fact, ChatGPT has been listed as a co-author in published research,2 and is even registered as an author in the ORCID and Scopus databases. This practice is inappropriate. Under the authorship guidelines of the International Committee of Medical Journal Editors, which all of our journals follow, an author must meet a number of important standards, including being willing to be accountable for all aspects of the work, to ensure that questions related to the accuracy or integrity of the work will be suitably investigated and resolved, to be able to identify which co-authors are responsible for specific parts of the work, and to have confidence in the integrity of the contributions of their co-authors.3 A large language model has no means to comply with these standards, and, for that reason – as well as, we believe, simple common sense – AI-based tools cannot be authors on scientific papers.
Other important concerns have been raised about the use of AI-driven tools in scientific reporting, including the possibilities that they may produce material that is inaccurate or out of date,4 they may conjure up “sources” that do not exist,5 and – this from the team that built ChatGPT – they may generate “plausible-sounding but incorrect or nonsensical answers,” which the coders have said is “challenging” to fix because “during RL [reinforcement learning] training, there’s currently no source of truth”.6 We believe that our readers, and the patients for whom they are responsible, deserve better.
For these reasons and others, our editorial boards have agreed on the following standards concerning AI applications that create text, tables, figures, images, computer code, and/or video:
- AI applications cannot be listed as authors.
- Whether and how AI applications were used in the research or the reporting of its findings must be described in detail in the Methods section and should be mentioned again in the Acknowledgements section.
Our editorial boards will closely follow the scientific developments in this area and will adjust editorial policy as frequently as required.
References
1. Rosenberg A, Walker J, Griffiths S, Jenkins R. Plain language summaries: Enabling increased diversity, equity, inclusion and accessibility in scholarly publishing. Learned Publishing. 2023;36(1):109–118.
2. O’Connor S. Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Educ Pract. 2023;66:103537.
3. No authors listed. Defining the role of authors and contributors. International Committee of Medical Journal Editors. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html (date last accessed 15 March 2023).
4. Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL. Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA. 2023;329(8):637–639.
5. Davis P. Did ChatGPT just lie to me? The Scholarly Kitchen. 2023. https://scholarlykitchen.sspnet.org/2023/01/13/did-chatgpt-just-lie-to-me/ (date last accessed 15 March 2023).
6. No authors listed. Introducing ChatGPT. OpenAI. 2022. https://openai.com/blog/chatgpt (date last accessed 15 March 2023).
The authors of this editorial are the Editors-in-Chief of Clinical Orthopaedics and Related Research, The Bone & Joint Journal, the Journal of Orthopaedic Research, and The Journal of Bone and Joint Surgery, respectively, and this editorial is being published concurrently in all four of those journals. The articles are identical except for minor stylistic and spelling differences in keeping with each journal’s style. Citation from any of the four journals can be used when citing this article.
ICMJE COI statement
All ICMJE Disclosure of Potential Conflicts of Interest forms for Clinical Orthopaedics and Related Research Editors are on file with the publication and can be viewed on request; the Editors’ disclosure statements also appear each month in print on the masthead of Clinical Orthopaedics and Related Research. The ICMJE Disclosure form for the Editor of The Bone & Joint Journal is available with the BJJ online version of this article. The ICMJE Disclosure form for the Editor of the Journal of Orthopaedic Research is available from the Orthopaedic Research Society. The ICMJE Disclosure form for the Editor of The Journal of Bone and Joint Surgery is provided with the JBJS online version of this article.
Acknowledgements
Joseph Bernstein, MD, a member of the Editorial Board of Clinical Orthopaedics and Related Research, provided the prompts for (and responses from) ChatGPT in the Supplementary Material.
Open access statement
This article is distributed under the terms of the Creative Commons Attribution (CC BY 4.0) licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium or format, provided the original author and source are credited.