Bridging Motivation Gaps: LLMs and Health Behavior Change

Summary: A new study explores how large language models (LLMs) like ChatGPT, Google Bard, and Llama 2 address different motivational states in health-related contexts, revealing a significant gap in their ability to support behavior change. While these generative conversational agents excel at providing information to users with clear goals, they struggle to guide those uncertain about making health-related changes, such as adopting a more active lifestyle to manage conditions like diabetes.

This research underscores the need for LLMs to integrate psychological theories and natural language processing to effectively promote preventive health behaviors, pointing to new directions for enhancing digital health solutions.

Key Facts:

  1. Generative conversational agents can identify users’ motivation states and provide relevant information for goal-oriented individuals but fall short in assisting those ambivalent about changing behaviors.
  2. The study highlights a crucial gap in LLMs’ ability to support users with uncertain motivation, emphasizing the importance of incorporating behavioral science into LLM development for health promotion.
  3. The research team, led by PhD student Michelle Bak and Assistant Professor Jessie Chin, aims to develop digital health interventions that leverage LLMs to encourage positive health behavior changes.

Source: University of Illinois

A new study recently published in the Journal of the American Medical Informatics Association (JAMIA) reveals how large language models (LLMs) respond to different motivational states.

In their evaluation of three LLM-based generative conversational agents (GAs), ChatGPT, Google Bard, and Llama 2, PhD student Michelle Bak and Assistant Professor Jessie Chin of the School of Information Sciences at the University of Illinois Urbana-Champaign found that while GAs can identify users’ motivation states and provide relevant information when individuals have established goals, they are less likely to provide guidance when users are hesitant or ambivalent about changing their behavior.


Bak provides the example of an individual with diabetes who is resistant to changing their sedentary lifestyle.  

“If they were advised by a doctor that exercising would be necessary to manage their diabetes, it would be important to provide information through GAs that helps them increase their awareness of healthy behaviors, become emotionally engaged with the changes, and realize how their unhealthy habits might affect people around them.

“This kind of information can help them take the next steps toward making positive changes,” said Bak.

Current GAs lack specific information about these processes, which puts the individual at a health disadvantage. Conversely, for individuals who are committed to changing their physical activity levels (e.g., those who have joined personal fitness training to manage chronic depression), GAs are able to provide relevant information and support.

“This major gap of LLMs in responding to certain states of motivation suggests future directions of LLMs research for health promotion,” said Chin.

Bak’s research goal is to develop a digital health solution that uses natural language processing and psychological theories to promote preventive health behaviors. She earned her bachelor’s degree in sociology from the University of California Los Angeles.

Chin’s research aims to translate social and behavioral sciences theories to design technologies and interactive experiences to promote health communication and behavior across the lifespan. She leads the Adaptive Cognition and Interaction Design (ACTION) Lab at the University of Illinois.

Chin holds a BS in psychology from National Taiwan University, an MS in human factors, and a PhD in educational psychology with a focus on cognitive science in teaching and learning from the University of Illinois.

About this LLM and AI research news

Author: Cindy Brya
Source: University of Illinois
Contact: Cindy Brya – University of Illinois
Image: The image is credited to Neuroscience News

Original Research: Closed access.
“The potential and limitations of large language models in identification of the states of motivations for facilitating health behavior change” by Jessie Chin et al., Journal of the American Medical Informatics Association.


Abstract

The potential and limitations of large language models in identification of the states of motivations for facilitating health behavior change

Importance

The study highlights the potential and limitations of large language models (LLMs) in recognizing different states of motivation in order to provide appropriate information for behavior change. Following the Transtheoretical Model (TTM), we identified a major gap of LLMs in responding to certain states of motivation through validated scenario studies, suggesting future directions for LLM research in health promotion.

Objectives

LLM-based generative conversational agents (GAs) have shown success in identifying user intents semantically. Little is known about their capabilities to identify motivation states and provide appropriate information to facilitate behavior change progression.

Materials and Methods

We evaluated 3 GAs, ChatGPT, Google Bard, and Llama 2, in identifying motivation states following the TTM stages of change. The GAs were evaluated using 25 validated scenarios covering 5 health topics across the 5 TTM stages. We assessed the relevance and completeness of the responses in covering the TTM processes needed to proceed to the next stage of change.
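The evaluation design described above — crossing health topics with TTM stages and scoring how completely a response covers the change processes for a stage — can be sketched in Python. This is a minimal illustrative reconstruction, not the authors' code: the topic names and the example process labels are assumptions; only the 5-stage × 5-topic = 25-scenario grid and the relevance/completeness idea come from the abstract.

```python
from itertools import product

# The 5 TTM stages of change named in the study.
TTM_STAGES = [
    "precontemplation", "contemplation", "preparation", "action", "maintenance",
]

# Illustrative stand-ins: the abstract says 5 health topics but does not list them.
HEALTH_TOPICS = [
    "physical activity", "diet", "smoking cessation", "sleep", "stress management",
]

def build_scenario_grid(topics, stages):
    """Cross each health topic with each TTM stage: one scenario per cell."""
    return [{"topic": t, "stage": s} for t, s in product(topics, stages)]

def completeness(covered_processes, required_processes):
    """Fraction of a stage's TTM change processes that a GA response covers."""
    if not required_processes:
        return 0.0
    return len(set(covered_processes) & set(required_processes)) / len(required_processes)

scenarios = build_scenario_grid(HEALTH_TOPICS, TTM_STAGES)
print(len(scenarios))  # 25 scenarios, matching the study's design

# Hypothetical scoring example: a response covering 1 of 5 required processes
# would score 0.2, in the 20%-30% range the study reports for the
# precontemplation and contemplation stages.
score = completeness(
    ["consciousness raising"],
    ["consciousness raising", "dramatic relief", "environmental reevaluation",
     "self-reevaluation", "social liberation"],
)
print(score)  # 0.2
```

The grid makes the 25-scenario count explicit, and the completeness score mirrors the abstract's "covering about 20%-30% of the processes" finding for the early motivation stages.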

Results

The 3 GAs identified the motivation states in the preparation stage and provided sufficient information to proceed to the action stage. Their responses to the motivation states in the action and maintenance stages were adequate, covering partial processes for individuals to initiate and maintain their behavior changes. However, the GAs were unable to identify users’ motivation states in the precontemplation and contemplation stages, providing irrelevant information that covered only about 20%-30% of the relevant processes.

Discussion

GAs are able to identify users’ motivation states and provide relevant information when individuals have established goals and commitments to take and maintain an action. However, individuals who are hesitant or ambivalent about behavior change are unlikely to receive sufficient and relevant guidance to proceed to the next stage of change.

Conclusion

The current GAs effectively identify motivation states of individuals with established goals but may lack support for those ambivalent towards behavior change.
