Ongoing discourse is needed on how we can use generative artificial intelligence (AI) to meet the needs of the aging population in a responsible and ethical manner and promote health equity.
To discuss the challenges and opportunities associated with the use of generative AI and large language models (LLM) in gerontology, the University of Pennsylvania Artificial Intelligence and Technologies Collaboratory for Healthy Aging, known as PennAITech, invited a group of interdisciplinary experts for a roundtable discussion. Speakers and experts represented the following disciplines: computer science, human computer interaction, law, medicine, nursing, and ethics. The workshop (ChatGPT and Aging: Implications of Generative AI for Gerontology), held on December 5 and 6, received funding from the National Institute on Aging and the University of Pennsylvania School of Nursing. The workshop explored how LLM could effectively address the unique considerations of older adult populations.
AI, generative AI, and LLM—What are they?
Kok defines AI as an area of study that develops computers or systems to mimic aspects of human intelligence, such as reasoning, problem-solving, learning, and self-correction. Rather than simply being coded to perform a specific type of task, these machines are trained to think or act in ways that correspond with intelligent human behavior.
According to Duffourc and Gerke, generative AI, a subset of AI, refers to technology that can generate new content, such as text, images, music, or any other form of data. While generative AI models have broad applications, LLM specifically focus on handling language-related tasks and creating human-like text responses. LLM can summarize literature, engage in dialogue with users, or even write an essay on their own. These models are the foundation for AI-based chatbots, the best-known example of which is OpenAI’s ChatGPT, which sparked both enthusiasm and controversy upon its first public release in 2022.
In the context of gerontology
Although generative AI and LLM are robustly used in specialties such as radiology and neurology, their application in gerontology remains largely unexplored. This gap highlights the importance of studying how tools like ChatGPT can be applied in geriatric and gerontological care. Beyond supporting clinical decision-making and disease prediction, generative AI and LLM could change how we provide care for older adults in many ways. For example, these technologies could generate health information tailored to the user’s needs and digital literacy. Generative AI models also could incorporate the end user’s data, including audio and text, to predict cognitive decline and promote timely interventions. In addition, the potential exists for using conversational chatbots to provide digital companionship, which might address social isolation among older adults.
PennAITech Workshop
Throughout the PennAITech workshop, experts offered short presentations on their research and work related to generative AI and aging. Despite diverse viewpoints, the experts shared common ground: the critical need to understand aging in our rapidly evolving digital world.
On Day 1 of the workshop, key discussion points emerged that raised concerns about the ethical considerations of deploying LLM technology among older adults. Concerns included the corporate backing behind many LLM-based chatbots and whether the technology was designed with older adults in mind. Experts also expressed concern about the lack of legal protection addressing older adults’ vulnerability. Some asked whether these populations are truly vulnerable to AI, which led to discussions about the social constructs behind vulnerability and what precisely makes older adults vulnerable to these types of technologies. For example, we discussed abuse and fraud specifically targeting older adults and advocated for universal guidelines in generative AI to safeguard data privacy.
Existing systematic biases, such as racism and ageism, in AI algorithms also can exacerbate the marginalization of certain older adult groups. This happens when the data on which an algorithm has been trained don’t represent the diversity of society. Acknowledging this potential harm, experts emphasized techniques to address these biases, such as finetuning, which Wang and Russakovsky describe as updating a model’s pretrained weights by continuing training on a smaller, task-specific dataset. The ethical question of who decides what needs adjusting to make the data more representative remained unresolved.
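Finetuning in this sense can be illustrated with a minimal sketch. Everything below is hypothetical and illustrative only: a real bias-mitigation pipeline would continue training a large pretrained language model on a curated, representative dataset, not the two-parameter linear model used here. The sketch shows only the core mechanic of reusing pretrained weights and updating them on a smaller, task-specific dataset.

```python
# Illustrative toy example of finetuning: start from pretrained weights,
# then continue training on a smaller, task-specific dataset.
# All names, data, and hyperparameters here are hypothetical.

def train(weights, data, lr=0.1, epochs=2000):
    """Fit y = w0 + w1*x by gradient descent on mean squared error."""
    w0, w1 = weights
    for _ in range(epochs):
        g0 = g1 = 0.0
        for x, y in data:
            err = (w0 + w1 * x) - y
            g0 += err
            g1 += err * x
        n = len(data)
        w0 -= lr * g0 / n
        w1 -= lr * g1 / n
    return w0, w1

# "Pretraining" on a large, generic dataset (y ≈ 2x).
pretrain_data = [(x / 10, 2 * x / 10) for x in range(10)]
pretrained = train((0.0, 0.0), pretrain_data)

# Finetuning: reuse the pretrained weights and continue training on a
# small task-specific dataset with a shifted relationship (y = 2x + 1).
finetune_data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]
finetuned = train(pretrained, finetune_data)

print(round(finetuned[0], 1), round(finetuned[1], 1))  # prints: 1.0 2.0
```

The design point the toy example captures is that finetuning does not train from scratch; it inherits whatever the pretrained weights encode, which is why the choice of finetuning data, and who curates it, carries the ethical weight discussed above.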
Consequences of the wide dissemination and use of LLM within health systems also were discussed. If these models become part of standard patient care, we questioned whether failing to consult AI outputs would be considered malpractice. Experts suggested the need to identify to what extent providers are deemed responsible when using LLM in patient care.
On Day 2, we divided into three groups, each focusing on a scenario involving specific older adult populations where AI could be applied, including the following: aging in place, older adults with dementia, and hospitalized older adults. It was clear that generative AI can offer opportunities for older adults across various settings, such as providing virtual assistance or companionship, supporting care plans, and detecting signs of cognitive decline based on the users’ communication. However, attendees highlighted crucial ethical considerations related to bias, privacy concerns, over- and underreliance, nontransparent data governance and usage, and the potential exacerbation of existing digital divides based on subscription to AI tools or language barriers. Concrete guidelines to develop and deploy these technologies are needed to facilitate ethical and equitable care across various realms within gerontology.
Over the 2 days of this workshop, interdisciplinary colleagues shared insights regarding AI technology in gerontology. As the uses of generative AI and LLM become more prevalent in our daily lives, we should vigilantly watch for potential harm while promoting and engaging older adults to use this innovative technology. Further research and recommendations are required to ensure generative AI is used to responsibly and equitably meet the needs of older adults.
Oonjee Oh, MSN, RN is a doctoral student in the University of Pennsylvania School of Nursing. Her research interests center around palliative and end-of-life care. Specifically, she focuses on supporting persons living with dementia and family caregivers as they transition to hospice care.
Hannah Cho, MSN, AGPCNNP, RN is a doctoral student in the School of Nursing at the University of Pennsylvania. Her research interests are palliative care and dementia care. Her goal is to become a nurse scientist who can increase the quality of life for persons living with dementia and their family members.
References
Duffourc M, Gerke S. Generative AI in health care and liability risks for physicians and safety concerns for patients. JAMA. 2023;330(4):313-4. doi:10.1001/jama.2023.9630
Kok JN. Artificial Intelligence. Paris: Eolss Publishers Company; 2009.
van Dis EAM, Bollen J, Zuidema W, van Rooij R, Bockting CL. ChatGPT: Five priorities for research. Nature. 2023;614(7947):224-6. doi:10.1038/d41586-023-00288-7
Wang A, Russakovsky O. Overwriting pretrained bias with finetuning data. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023:3957-68.