[citation report] Transforming Healthcare Education: Harnessing Large Language Models for Frontline Health Worker Capacity Building using Retrieval-Augmented Generation (2024)

2023

DOI: 10.1101/2023.12.15.23300009

Preprint

Yasmina Al Ghadban, Huiqi (Yvonne) Lu, Uday Adavi, et al.

Abstract: In recent years, large language models (LLMs) have emerged as a transformative force in several domains, including medical education and healthcare. This paper presents a case study on the practical application of using retrieval-augmented generation (RAG) based models for enhancing healthcare education in low- and middle-income countries. The model described in this paper, SMARThealthGPT, stems from the necessity for accessible and locally relevant medical information to aid community health workers in delive…
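
As a rough illustration of the retrieval-augmented generation pattern the abstract refers to, the sketch below retrieves the guideline snippets most similar to a health worker's question and grounds the prompt in them before generation. The corpus, the bag-of-words similarity scorer, and the prompt template are illustrative assumptions, not SMARThealthGPT's actual components, which would rely on a proper embedding model, vector store, and LLM endpoint.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern described above.
# NOT the SMARThealthGPT implementation: the corpus, the bag-of-words scorer, and the
# prompt template are stand-ins for a real embedding model, vector store, and LLM call.
from collections import Counter
import math

# Hypothetical snippets standing in for locally relevant clinical guidelines.
corpus = [
    "Check blood pressure at every antenatal visit and refer if above 140/90 mmHg.",
    "Advise a diet low in salt and rich in vegetables for patients with hypertension.",
    "Community health workers should record fasting blood glucose for diabetes screening.",
]

def vectorize(text: str) -> Counter:
    """Crude term-frequency vector; a production system would use embeddings."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k guideline snippets most similar to the query."""
    query_vec = vectorize(query)
    ranked = sorted(corpus, key=lambda doc: cosine(query_vec, vectorize(doc)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Ground the model's answer in retrieved context before generation."""
    context = "\n".join(retrieve(query))
    return f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    # The assembled prompt would then be sent to an LLM (e.g. a GPT-class model).
    print(build_prompt("What should I do if a pregnant woman has high blood pressure?"))
```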

Cited by 2 publications (1 citation statement)

References 16 publications (15 reference statements)

“…These use cases can augment the limited financial, logistical, and human resources available in LMICs [8]. Initial studies on clinician perception on the usefulness of a combination of OpenAI’s “gpt-3.5-turbo” / “gpt-4” and Retrieval Augmented Generation (RAG), as a health education tool in India, an LMIC, revealed that though clinicians believed the tool held potential, they were generally not satisfied with its performance [9]. In that study, the authors identified the need to enhance the contextual and cultural relevance of the models’ responses.…”

Section: Related Work

mentioning, confidence: 99%

All You Need Is Context: Clinician Evaluations of various iterations of a Large Language Model-Based First Aid Decision Support Tool in Ghana

Mensah, Quao, Dagadu et al. 2024

Preprint

As advancements in research and development expand the capabilities of Large Language Models (LLMs), there is a growing focus on their applications within the healthcare sector, driven by the large volume of data generated in healthcare. There are a few medicine-oriented evaluation datasets and benchmarks for assessing the performance of various LLMs in clinical scenarios; however, there is a paucity of information on the real-world usefulness of LLMs in context-specific scenarios in resource-constrained settings. In this work, 5 iterations of a decision support tool for medical emergencies using 5 distinct generalized LLMs were constructed, alongside a combination of Prompt Engineering and Retrieval Augmented Generation techniques. Quantitative and qualitative evaluations of the LLM responses were provided by 12 physicians (general practitioners) with an average of 2 years of practice experience managing medical emergencies in resource-constrained settings in Ghana.
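
Purely as an illustration of how the prompt engineering and retrieval-augmented generation combination described above might be wired together, the sketch below layers a first aid role instruction over retrieved protocol text in a chat-style message list. The role wording, the retrieve_protocols placeholder, and the message format are assumptions, not the evaluated tool's actual design.

```python
# Illustrative only: layering a role instruction (prompt engineering) over retrieved
# protocol text (RAG) in a chat-style message list. The role text, retrieve_protocols
# helper, and message format are hypothetical, not the cited tool's actual design.

def retrieve_protocols(query: str) -> list[str]:
    """Placeholder for a retrieval step over first aid protocols (see RAG sketch above)."""
    return ["For severe bleeding, apply firm, direct pressure to the wound with a clean cloth."]

def build_messages(query: str) -> list[dict]:
    """Assemble system + user messages that a generalized chat LLM would consume."""
    context = "\n".join(retrieve_protocols(query))
    system = (
        "You are a first aid decision support assistant for clinicians in "
        "resource-constrained settings. Ground every answer in the provided protocols "
        "and say so explicitly when the protocols do not cover the situation."
    )
    user = f"Protocols:\n{context}\n\nEmergency: {query}"
    return [{"role": "system", "content": system}, {"role": "user", "content": user}]

if __name__ == "__main__":
    for message in build_messages("A patient has a deep cut on the forearm that is bleeding heavily."):
        print(f"[{message['role']}]\n{message['content']}\n")
```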

Can Large Language Models Provide Emergency Medical Help Where There Is No Ambulance? A Comparative Study on Large Language Model Understanding of Emergency Medical Scenarios in Resource-Constrained Settings

Mensah, Quao, Dagadu 2024

Preprint

The capabilities of Large Language Models (LLMs) have advanced since their popularization a few years ago. The healthcare sector operates on, and generates, a large volume of data annually and thus there is a growing focus on the applications of LLMs within this sector. There are a few medicine-oriented evaluation datasets and benchmarks for assessing the performance of various LLMs in clinical scenarios; however, there is a paucity of information on the real-world usefulness of LLMs in context-specific scenarios in resource-constrained settings. In this study, 16 iterations of a decision support tool for medical emergencies using 4 distinct generalized LLMs were constructed, alongside a combination of 4 Prompt Engineering techniques: In-Context Learning with 5-shot prompting (5SP), chain-of-thought prompting (CoT), self-questioning prompting (SQP), and a stacking of self-questioning prompting and chain-of-thought (SQCT). In total, 428 model responses were quantitatively and qualitatively evaluated by 22 clinicians familiar with the medical scenarios and background contexts. Our study highlights the benefits of In-Context Learning with few-shot prompting, and the utility of the relatively novel self-questioning prompting technique. We also demonstrate the benefits of combining various prompting techniques to elicit the best performance of LLMs in providing contextually applicable health information. We also highlight the need for continuous human expert verification in the development and deployment of LLM-based health applications, especially in use cases where context is paramount.
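
The four prompting configurations named in the abstract (5SP, CoT, SQP, and the stacked SQCT) can be read as different ways of wrapping the same emergency scenario. The sketch below assembles an illustrative prompt for each; the worked examples and instruction wording are hypothetical stand-ins, not the prompts used in the study.

```python
# Illustrative prompt assembly for the four configurations named above: 5-shot
# in-context learning (5SP), chain-of-thought (CoT), self-questioning (SQP), and
# the stacked SQCT. The shot content and instruction wording are hypothetical.

FEW_SHOTS = [  # a real 5SP setup would carry five worked examples
    ("A child has swallowed kerosene.", "Do not induce vomiting; rinse the mouth and seek urgent care."),
    ("An adult has a suspected snake bite on the leg.", "Keep the limb still and below heart level; transport immediately."),
]

COT_INSTRUCTION = "Think through the scenario step by step before giving your final advice."
SQP_INSTRUCTION = (
    "First list the clarifying questions a clinician would ask about this scenario, "
    "answer them from the information given, then give your final advice."
)

def assemble_prompt(scenario: str, technique: str) -> str:
    """Wrap the same emergency scenario according to the chosen prompting technique."""
    parts: list[str] = []
    if technique == "5SP":
        for question, answer in FEW_SHOTS:
            parts.append(f"Scenario: {question}\nAdvice: {answer}")
    if technique in ("CoT", "SQCT"):
        parts.append(COT_INSTRUCTION)
    if technique in ("SQP", "SQCT"):  # SQCT stacks self-questioning on top of CoT
        parts.append(SQP_INSTRUCTION)
    parts.append(f"Scenario: {scenario}\nAdvice:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    for technique in ("5SP", "CoT", "SQP", "SQCT"):
        print(f"--- {technique} ---\n{assemble_prompt('A farmer has a deep burn on the hand.', technique)}\n")
```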
