ARTIFICIAL INTELLIGENCE AND GPT

As explained in the July 2023 issue of the journal Trends in Cognitive Sciences, artificial intelligence (AI) is an “agent of replacement” designed to take over tasks once performed by humans. Generative large language models, such as GPT, are replacing much human labor, including in psychological science, where researchers use these tools to help edit papers, conduct literature reviews, and create scale items. A provocative question worth considering is “Could AI replace human participants?” For that replacement to occur, AI must give humanlike responses, and the “humanness” of AI has long been questioned.

As described in the July 20, 2023 issue of the journal Nature, two scientists produced a research paper in less than an hour with the help of ChatGPT, a tool driven by AI that can understand and generate humanlike text. The researchers designed a software package that automatically fed prompts to ChatGPT and built on its responses to refine the paper over time. This autonomous data-to-paper system led the chatbot through a step-by-step process that mirrors scientific practice, from initial data exploration, through writing data-analysis code and interpreting the results, to producing a polished manuscript. The article was fluent, insightful, and presented in the expected structure for a scientific paper, but the effort was not perfect. For instance, the paper states that the study “addresses a gap in the literature,” a phrase that is common in manuscripts but inaccurate in this case.
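For readers curious about the mechanics, the sketch below shows how such an iterative prompting loop might be assembled. It is a minimal illustration, assuming the openai Python package and a chat-style model; the stage prompts, model name, and system message are assumptions made for the example, not the researchers’ actual data-to-paper package.

```python
# Minimal illustrative sketch of an iterative "data-to-paper" prompting loop.
# Assumes the openai Python package (v1.x) and an OPENAI_API_KEY environment
# variable; the stages, model name, and system message are hypothetical
# examples, not the system described in the Nature article.
from openai import OpenAI

client = OpenAI()

# Stages mirroring the step-by-step process described above.
STAGES = [
    "Explore the attached dataset and summarize its key variables.",
    "Write data-analysis code to test a hypothesis suggested by the data.",
    "Interpret the results of the analysis.",
    "Draft a structured scientific manuscript based on the findings.",
]

# The running conversation carries each stage's output forward, so later
# prompts build on earlier responses -- refining the paper over time.
messages = [{"role": "system", "content": "You are a scientific research assistant."}]
for stage in STAGES:
    messages.append({"role": "user", "content": stage})
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})

print(messages[-1]["content"])  # the drafted manuscript text
```

Because each stage’s reply is appended to the conversation before the next prompt is sent, every step builds on the one before it, which is the essence of the refine-over-time behavior the researchers describe.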

While AI shows promise for improving basic and translational science, medicine, and public health, its success is not guaranteed, as described in the July 20, 2023 issue of the journal Science. Numerous examples have arisen of racial, ethnic, gender, disability, and other biases in AI applications to health care. Consensus has emerged among scientists, ethicists, and policy makers that minimizing bias is a shared responsibility among all involved in AI development. Ensuring equity will require more than unbiased data and algorithms. It will also entail reducing biases in how clinicians and patients use AI-based algorithms, a potentially more challenging task than reducing biases in the algorithms themselves.

Allied health is an umbrella term that encompasses a wide range of health professions. The August 3, 2023 issue of the New England Journal of Medicine contains a paper acknowledging that medical trainees and clinicians already use AI, which means that medical education does not have the luxury of watchful waiting. The field needs to grapple now with the effects of AI. Many valid concerns already have been raised about AI’s effects on medicine, including the propensity for AI to make up information that it then presents as fact (termed a “hallucination”), its implications for patient privacy, and the risk of biases being baked into source data. A major concern is the ways in which this technology could affect the thought structures and practice patterns of medical trainees and physicians for generations to come. Such issues have implications for many other professions, including allied health. More information on this overall topic will be available at the 2023 ASAHP Annual Conference on October 17-19 in Fort Lauderdale, FL.