Artificial intelligence (AI) permeates all aspects of life, though it often goes unrecognized and misunderstood by those who use it daily. Because AI is not a single function or technology but a collection of development methods and subspecialties, it has eluded a uniform definition (Buiten, 2019; Jennings, 2019; Jiang et al., 2022; Robert, 2019). The lack of consensus on a clear characterization of AI, combined with its extraordinary growth, has outpaced ethical guidelines and the normative frameworks that would establish legal authority to regulate AI (Buiten, 2019; Jennings, 2019).
There is a paucity of legal precedent for cases involving the complexities of AI, and the precedents that do exist are a poor fit for ever-evolving technology (Daneshjou et al., 2021; Surden, 2019). Limited case law, combined with rapid technological advancement and the interdisciplinary collaboration that AI requires, leaves the bench uncertain about which laws dominate and how they apply. Judges experience difficulty adjudicating what they cannot demystify. Despite the lack of clarity, novel cases are litigated in the courts, and judicial decisions are rendered. In 2018, Dr. Steven Thaler applied for copyright protection for artwork generated by an algorithm on his computer. In his application for copyright, Dr. Thaler identified the AI algorithm as the sole author of the artwork. Dr. Thaler’s application was denied by the U.S. Office of Copyright Review Board (2022), citing the Copyright Act of 1976, which requires human authorship for works submitted. The denial was later upheld on appeal by the U.S. District Court for the District of Columbia (Thaler v. Perlmutter, 2023).
In December 2023, the New York Times (“Times”) filed a lawsuit in federal court against OpenAI, Inc. and Microsoft Corporation for copyright infringement (New York Times Company v. Microsoft Corporation, 2024). The complaint alleges that OpenAI chatbots are trained on massive amounts of text from Times stories, which equates to copyright infringement. The prayer for relief concludes the complaint and specifies what the court is being asked to do. In this section, the Times demanded judgment “Ordering destruction under 17 U.S.C. § 503(b) (2022) of all GPT or other LLM models and training sets that incorporate Times works.” Indeed, copyright law grants the judiciary the power to order the destruction of infringing goods and the equipment used to create them. In response, OpenAI, Inc. filed a motion to dismiss, citing fair use as its defense under 17 U.S.C. § 107 (2022). Even legal experts are not confident in how the court will decide. Other causes of action filed in generative AI (GAI) cases include personal injury, criminal liability, antitrust, data breach, privacy breach, discrimination, defamation, and malicious-use claims such as deepfakes, hate speech, and scams.
If real-world cases are not thought-provoking enough, they are easily overshadowed by the hypothetical legal questions that AI generates. As futurists and tech executives seek to humanize computers, Hallevy (2010) proposed questions such as, “Is AI with human thinking capability able to commit a crime under the legal definition of criminal intent (mens rea)?” “If an AI algorithm malfunctions, is insanity a possible defense?” “If an electronic virus infects an AI algorithm, could it claim coercion or intoxication as a defense?” It is not difficult to envision the turmoil that could ensue in the legal field if these hypothetical scenarios become reality. Although rare, smartphone companion apps and chatbots have been implicated in promoting suicide and homicide (Possati, 2023).
To be fair, there is no question that AI has efficiently streamlined work processes for both the medical and legal fields. AI automation has increased the speed and accuracy of data analysis, optimized retrieval of research, and provided enhanced imaging. AI has also created more work for attorneys. Discovery is the process in litigation in which opposing sides seek information from the parties related to the case; in conducting it, attorneys must be vigilant in requesting the preservation of data from mobile devices, cloud repositories, social media, online meeting apps, and other sources of electronically stored information (ESI).
Grimm et al. (2021) observed that a working knowledge of AI is needed for attorneys to introduce or object to evidence and for judges, as “gatekeepers,” to determine the admissibility of evidence. Attorneys and judges must educate themselves on what AI is and how it works, what it does accurately and reliably, and what its limitations are (Grimm et al., 2021). The authors emphasize the significance of assessing validity (the accuracy of the technology in performing what it is programmed to do) and reliability (the technology’s consistency in producing like results when used in like circumstances). The rules of evidence demand this much (Manes et al., 2007).
There are ethical concerns regarding AI: violation of privacy, worker displacement, bias, lack of transparency, misinformation, cybersecurity risks, plagiarism, and harmful content (Daneshjou et al., 2021; Jiang et al., 2022). Bias and lack of transparency are two areas where nursing input is key (Robert, 2019). Incorporating strategies to mitigate bias is a fundamental role for nurses. Experts have identified points of entry for bias across the stages of AI development: the pre-design stage, data collection, data input, algorithm construction during testing and training, and transfer of bias at deployment (Daneshjou et al., 2021; Johnson et al., 2023; Nazer et al., 2023). According to Johnson et al. (2023), acknowledging the proper context is critically important in developing AI algorithms to mitigate bias and prevent end-user liability. In matters of safe patient care and reducing bias, Dudding and Gephart (2023) opined that the essential contributors to ensuring algorithm developers are advised of the proper context are nurses, who provide information on the significance of workflow and the nurses’ workload.
Another issue of ethical importance in AI is transparency (Buiten, 2019). Transparency is the ability to obtain clear answers about an algorithm’s decision-making process and is determined by four criteria: explainability, justifiability, accessibility, and known error rate. The value of transparency is evidenced in the research subtopic called Explainable AI (XAI) (Jiang et al., 2022), and nurse informaticists are vitally important to ensuring transparency is accomplished (Dudding & Gephart, 2023). As the abilities and benefits of AI advance to improve patient outcomes, nurses will play an even greater role.
Experts are divided on the future of AI. Kurzweil (2005), a leading futurist and computer scientist, postulates that “the singularity” is the future point at which machine intelligence equals and then surpasses human intelligence. Other AI experts warn of the dangers of unregulated AI advancement and the potential for misuse (Buiten, 2019; Jiang et al., 2022; Mitchell, 2020). Education on the perils and promise of AI is every person’s duty. Society should promote and support innovation, creativity, and technological progression, as well as ethical safeguards, cautious development, and regulatory governance. If we humans are smart enough to annihilate ourselves, we humans are smart enough to save ourselves.
References
Buiten, M. C. (2019). Toward intelligent regulation of artificial intelligence. European Journal of Risk Regulation, 10(1), 41-59. https://doi.org/10.1017/err.2019.8
Daneshjou, R., Smith, M. P., Sun, M. D., Rotemberg, V., & Zou, J. (2021). Lack of transparency and potential bias in artificial intelligence data sets and algorithms. JAMA Dermatology, 157(11), 1362. https://doi.org/10.1001/jamadermatol.2021.3129
Dudding, K. M., & Gephart, S. M. (2023). Nurse, know your value: Designing technology to transform outcomes [Editorial]. Advances in Neonatal Care, 23(1), 1-3. https://doi.org/10.1097/anc.0000000000001057
Grimm, P. W., Grossman, M. R., & Cormack, G. C. (2021). Artificial intelligence as evidence. Northwestern Journal of Technology and Intellectual Property, 19(1/2), 9-106. https://scholarlycommons.law.northwestern.edu/njtip/vol19/issue1/2
Hallevy, G. (2010). The criminal liability of artificial entities. http://dx.doi.org/10.2139/ssrn.1564096
Jennings, C. (2019). Artificial intelligence: Rise of the lightspeed learners. Rowman & Littlefield.
Jiang, Y., Li, X., Luo, H., Yin, S., & Kaynak, O. (2022). Quo vadis artificial intelligence? Discover Artificial Intelligence, 2(4). https://doi.org/10.1007/s44163-022-00022-8
Johnson, E. A., Dudding, K. M., & Carrington, J. M. (2023). When to err is inhuman: An examination of the influence of artificial intelligence-driven nursing care on patient safety. Nursing Inquiry, 31(1), 1-8. https://doi.org/10.1111/nin.12583
Kurzweil, R. (2005). The singularity is near: When humans transcend biology. Penguin Books.
Manes, G. W., Downing, E., Watson, L., & Thrutchley, C. (2007, April). New Federal rules and digital evidence [Paper presentation]. Annual ADFSL Conference on Digital Forensics, Security, and the Law, Arlington, Virginia.
Mitchell, M. (2020). Artificial intelligence: A guide for thinking humans. Picador Paper.
Nazer, L. H., Zatarah, R., Waldrip, S., Ke, J. X., Moukheiber, M., Khanna, A. K., Hicklen, R. S., Moukheiber, L., Moukheiber, D., Ma, H., & Mathur, P. (2023). Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digital Health, 2(6), e0000278. https://doi.org/10.1371/journal.pdig.0000278
Possati, L. M. (2023). Psychoanalyzing artificial intelligence: The case of Replika. AI & Society, 38(4), 1725-1738. https://doi.org/10.1007/s00146-021-01379-7
Robert, N. (2019). How artificial intelligence is changing nursing. Nursing Management, 50(9), 30-39. https://doi.org/10.1097/01.NUMA.0000578988.56622.21
Surden, H. (2019). Artificial intelligence and the law: An overview. Georgia State University Law Review, 35(4), 1304-1337. https://readingroom.law.gsu.edu/gsulr/vol35/iss4/8
Thaler v. Perlmutter, No. 1:22-cv-01564-BAH (D.D.C. Aug. 18, 2023). https://ia601401.us.archive.org/14/items/gov.uscourts.dcd.243956/gov.uscourts.dcd.243956.1.0.pdf
New York Times Company v. Microsoft Corporation, No. 1:23-cv-11195 (SHS) (OTW) (S.D.N.Y. Feb. 26, 2024). https://fingfx.thomsonreuters.com/gfx/legaldocs/byvrkxbmgpe/OPENAI%20MICROSOFT%20NEW%20YORK%20TIMES%20mtd.pdf
17 U.S.C. § 107, Limitations on exclusive rights: Fair use (2022). https://www.law.cornell.edu/uscode/text/17/107
U.S. Office of Copyright Review Board. (2022). Second Request for Reconsideration for Refusal to Register a Recent Entrance to Paradise (Correspondence ID 1-3ZPC6C3; SR # 1-7100387071). https://www.copyright.gov/rulings-filings/review-board/docs/a-recent-entrance-to-paradise.pdf