ICYMI: Humans Are (still) Responsible

On the use of artificial intelligence (AI)-assisted technologies in medical publications
01 August 2025
WMW Editorial Team

“Open the pod bay doors, HAL.”
“I’m sorry, Dave. I’m afraid I can’t do that.”
– 2001: A Space Odyssey

Popular culture has long foretold the rise of artificial intelligence (AI), perhaps most famously with HAL, the sentient computer and main antagonist of the 1968 film 2001: A Space Odyssey. In today's reality, AI tools such as large language models (LLMs) and chatbots have become useful in many aspects of life and are increasingly being adopted by the scientific and medical communities. In the 2023/2024 update to the International Committee of Medical Journal Editors (ICMJE) Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals, guidelines were added for reporting the use of AI-assisted technologies in medical manuscripts submitted for publication.1

*In case you missed it (ICYMI), humans are still responsible.

@#$%!

So you used AI in your work…now what?

ICMJE recommends that journals require authors to disclose their use of AI at the time of submission, both in the cover letter to the editor and in the relevant section of the manuscript. Authors should disclose the use of AI for writing assistance in the Acknowledgements section and for data collection, data analysis, and generation of figures in the Methods section.

AI is many things, but it’s not an author

You may be wondering: if AI helped analyze my data and write my manuscript, shouldn't it be listed as an author? The answer is no. According to the ICMJE recommendations, "humans are responsible for any submitted material that included the use of AI-assisted technologies." AI can't be held responsible for the accuracy, integrity, and originality of the work, nor can it give approval for the final version of the manuscript to be published; therefore, AI can't meet all ICMJE authorship criteria.

ICMJE also advises that authors carefully review all AI-generated content. While LLMs and other AI-assisted technologies can improve the efficiency of some tasks, they also have limitations. For example, 'hallucination' is a phenomenon in which an LLM misinterprets inputs and produces inaccurate outputs.2 You've likely already seen an example of this in AI-generated images of people. Notice how the hands never seem to look quite right? These misinterpretations can happen for a number of reasons, such as model overfitting or bias and inaccuracy in the training data, and, unlike anatomically incorrect hands, they may even appear scientifically plausible.2,3

AI-assisted technologies don't (currently) have minds of their own like HAL did, but they aren't infallible either. At the end of the day, it's still the sole responsibility of humans to do our due diligence and ensure that any work submitted for medical publication is accurate and original.

Stay tuned in!

Keep checking wiesenmed.com for more newsletters!

References

1. International Committee of Medical Journal Editors (ICMJE). Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly Work in Medical Journals. Updated April 2025. Accessed July 1, 2025. https://www.icmje.org/icmje-recommendations.pdf

2. International Business Machines Corporation (IBM). What are AI hallucinations? Accessed March 14, 2024. https://www.ibm.com/topics/ai-hallucinations

3. Sallam M. ChatGPT Utility in Healthcare Education, Research, and Practice: Systematic Review on the Promising Perspectives and Valid Concerns. Healthcare (Basel). 2023;11(6):887.

Further Reading…

tbd
