
Enhancing AI in Healthcare: Large Language Models Surpass USMLE Scores
TL;DR –
Artificial intelligence (AI) and large language models (LLMs) are increasingly being integrated into healthcare, and a recent study found that an enhanced AI LLM outperformed most physicians and other AI tools on the US Medical Licensing Examination. Current uses of AI in healthcare include in-hospital patient monitoring, medication management, treatment planning, and disease detection. While LLMs show promise, the researchers caution that successful clinical deployment requires methods for keeping them accurate over time; one suggested approach is retrieval-augmented generation, which grounds LLM output in external, up-to-date sources.
The Emergence of AI and LLMs in Healthcare
The intersection of artificial intelligence (AI) and healthcare is giving rise to the use of enhanced Large Language Models (LLMs) for improved patient care. A study in JAMA Network Open highlighted how a superior AI LLM excelled in the US Medical Licensing Examination (USMLE), surpassing most physicians.
According to a 2024 Statista survey, 18% of healthcare workers use LLMs for biomedical research, while around a fifth use them for patient communication. AI is reshaping healthcare through LLMs such as OpenAI's chatbot ChatGPT, which attracted 100 million users within two months of its November 2022 release. Other examples include Meta's Llama, Google's BERT, and Microsoft's Orca.
AI Adoption in Global Healthcare
Global healthcare is embracing AI, with 43% of hospitals using it for patient monitoring, according to the Future Health Index 2024. Other applications include medication management (37%), treatment planning (37%), and preventative care (36%). For instance, Mayo Clinic deploys AI for early detection of several health conditions, while Johns Hopkins University uses it for patient medical chart summarization.
Understanding LLMs and Improving AI Accuracy
LLMs, a type of AI machine learning program, are pre-trained on enormous datasets to predict text sequences. Despite their advanced pattern-recognition abilities, researchers note that accuracy must improve before the technology can deliver its full benefit. Model performance can be improved with larger, higher-quality training datasets and greater computational power during training. LLMs can also benefit from techniques such as prompt distillation and prompt engineering.
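For readers curious what "predicting text sequences" looks like in practice, the short Python sketch below uses the open-source Hugging Face transformers library with the small GPT-2 model (assumptions for illustration only, not the models or methods in the study) to complete a prompt, and shows how adding instructions to a prompt, a simple form of prompt engineering, can steer the output.

```python
# Minimal sketch of LLM text-sequence prediction, assuming the Hugging Face
# `transformers` library and the small GPT-2 model (illustrative only; not
# the models used in the study).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Plain prompt: the model simply predicts a likely continuation.
plain = generator("A common symptom of anemia is", max_new_tokens=15)

# Prompt engineering (toy example): extra instructions steer the output.
engineered = generator(
    "You are a clinician. In one short sentence, name a common symptom of anemia:",
    max_new_tokens=15,
)

print(plain[0]["generated_text"])
print(engineered[0]["generated_text"])
```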
Enhanced LLMs Through Retrieval-Augmented Generation
In their study, researchers enhanced LLMs with retrieval-augmented generation (RAG), a technique that uses external knowledge to refine LLM output. This provides greater flexibility in data source input and enables the use of up-to-date information from sources like news sites and social media feeds.
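As a rough illustration of the idea, the plain-Python sketch below retrieves the most relevant snippets from a tiny document store and prepends them to the prompt before asking the model to answer. The overlap-based retriever, the example documents, and the generate placeholder are simplifications assumed here for clarity, not the study's actual pipeline.

```python
# Simplified retrieval-augmented generation (RAG) sketch in plain Python.
# The keyword-overlap retriever and the `generate` placeholder are
# illustrative assumptions, not the researchers' implementation.

KNOWLEDGE_BASE = [
    "Metformin is a first-line medication for type 2 diabetes.",
    "The USMLE is a three-step examination for medical licensure in the US.",
    "Retrieval-augmented generation grounds model answers in external documents.",
]

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call (e.g., a locally hosted model)."""
    return f"[LLM answer based on prompt: {prompt[:60]}...]"

def rag_answer(question: str) -> str:
    # Retrieved snippets are injected into the prompt so the model can
    # ground its answer in external, up-to-date information.
    context = "\n".join(retrieve(question, KNOWLEDGE_BASE))
    prompt = (
        "Use only the context below to answer.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return generate(prompt)

print(rag_answer("What is a first-line medication for type 2 diabetes?"))
```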
Researchers also used semantic triples, grouping data by subject, relation, and object to add context to the LLM. The team coined their technique “Semantic Clinical Artificial Intelligence” (SCAI).
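The sketch below, using made-up example triples, shows how subject-relation-object facts can be flattened into short sentences and supplied to an LLM as added context; it illustrates the general idea of semantic triples rather than the study's SCAI implementation.

```python
# Illustrative sketch of semantic triples (subject, relation, object) being
# rendered as extra context for an LLM prompt. The triples and formatting
# are assumptions for illustration, not the study's actual data.

triples = [
    ("metformin", "treats", "type 2 diabetes"),
    ("metformin", "has_side_effect", "gastrointestinal upset"),
    ("type 2 diabetes", "is_a", "metabolic disorder"),
]

def triples_to_context(triples):
    """Render each (subject, relation, object) triple as a short sentence."""
    return "\n".join(f"{s} {r.replace('_', ' ')} {o}." for s, r, o in triples)

print(triples_to_context(triples))
# metformin treats type 2 diabetes.
# metformin has side effect gastrointestinal upset.
# type 2 diabetes is a metabolic disorder.
```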
They tested SCAI-enhanced LLMs built on Meta's Llama models against USMLE questions and found significantly improved scores. The researchers believe collaboration with AI will become standard in healthcare, with AI serving as an assistive tool rather than a replacement for human clinicians.