Artificial intelligence (AI) can be a useful writing tool. For example, it can help craft content outlines, organize literature reviews, and write the first draft of an abstract. As with any tool, however, AI has drawbacks that authors need to know about. In future blogs, I’ll discuss some of the positive ways writers can engage with AI, but let’s first consider some cautions and limitations.
Bias
As I discussed in a previous blog, AI platforms are trained on large volumes of data. Unfortunately, biases within those data sets can be carried over into the AI platform. For example, race-based algorithms have the potential to perpetuate poor care; yet, as Palmer and McFarling note, clinicians continue to use them. It seems likely, then, that some of these algorithms could end up in AI tools.
More work needs to be done on how to avoid bias transfer. A systematic review by Chen and colleagues found only a limited number of mitigation strategies, none of which had been tested in actual clinical settings. Although the review focused on electronic health records, the takeaway applies to writing as well.
Hallucinations
When the training data aren’t well matched to the task AI is trying to accomplish, AI can generate “hallucinations” or false information. This information frequently sounds quite authoritative despite being completely wrong. Therefore, we must always check the accuracy of information obtained through AI.
Outdated information
Hallucinations are a particular risk when the information in an AI platform isn’t current but the user is asking for recent updates (such as recent research into long COVID). Many free versions of AI tools limit access to recent information. For example, Khan writes that if a user asks too many questions or there’s heavy traffic on the free version of ChatGPT, the model may downgrade access from GPT-4o to GPT-3.5, which has information current only up to January 2022.
Lack of privacy
AI privacy policies vary considerably. For instance, some state that the data you input aren’t used for training, while others state that your prompts (questions to the AI platform) could be sold to third parties. It’s best to assume that nothing you enter is private and, of course, never enter HIPAA-protected patient data.
Copyright and potential plagiarism
AI doesn’t always provide information sources, which can lead to plagiarism. In addition, nefarious “authors” have used AI to create entire articles that may or may not be of high quality. Those of low quality could end up in reputable databases, leading to inappropriate citations. Existing plagiarism detection tools have added AI detection capabilities, but little data exist regarding their effectiveness.
In addition, it’s interesting to note that publishers are selling papers that have appeared in their journals to organizations that then use them to train AI platforms. Most authors aren’t aware of this because they’ve transferred copyright to the publisher. Overall, many issues regarding copyright and AI remain unresolved.
Lack of comprehension
The most important drawback of AI is that it lacks the depth of understanding and contextual awareness that humans have. It can’t engage in critical thinking.
But wait…
Despite these drawbacks, AI can be beneficial for authors. I’ll share some of those benefits in my next blog.
Hi, I’m Cynthia Saver, MS, RN, owner of CLS Development, which provides writing and editing services, and editor of Anatomy of Writing for Publication for Nurses, 4th ed. I’m also past editorial director for American Nurse Journal.
I’ve been a full-time professional nurse writer and editor for many years, and that doesn’t count the writing I did as I fulfilled my nursing roles in clinical, research, education, and management. My passion is helping nurses share their expertise through the written word, including, but not limited to, publication. Writing can be scary and intimidating. I hope to make it less so and to help you develop your writing skills the same way you’ve developed your nursing skills.
Whether you’re considering your first or your 50th publication, want to contribute to your organization’s newsletter, or want to become a better communicator online and in print, I hope you’ll find what I write helpful. The nurse publishing colleagues I’ve learned from over the years (many of whom are contributors to my book) may not be listed by name, but I’m grateful for their willingness to share. In that spirit, I’m looking forward to sharing with you! If you have feedback, feel free to email me at csaver57@gmail.com.
References
Buriak JM, Akinwande D, Artzi N, et al. Best practices for using AI when writing scientific manuscripts. ACS Nano. 2023;17(5):4091-3. doi:10.1021/acsnano.3c01544
Chen F, Wang L, Hong J, Jiang J, Zhou L. Unmasking bias in artificial intelligence: A systematic review of bias detection and mitigation strategies in electronic health record-based models. J Am Med Inform Assoc. 2024;31(5):1172-83. doi:10.1093/jamia/ocae060
Khalifa M, Albadawy M. Using artificial intelligence in academic writing and research: An essential productivity tool. Comp Meth Prog Biom Update. 2024;100145. doi:10.1016/j.cmpbup.2024.100145
Khan I. ChatGPT free vs. ChatGPT plus: Worth the $20 upgrade? CNET. July 14, 2024. cnet.com/tech/services-and-software/chatgpt-free-vs-chatgpt-plus-worth-the-20-upgrade
Martin K. AI language models are transforming the medical writing space – like it or not! Medical Writing. 2023;32(3):22-7.
Palmer K, McFarling UL. Embedded bias, part 1: Doctors use problematic race-based algorithms to guide care every day. Why are they so hard to change? STAT. September 3, 2024. statnews.com/2024/09/03/embedded-bias-investigation-health-equity-clinical-algorithms