Medical Rag Report
Abstract—Large language models (LLMs) have been a game-changer in a number of fields in recent years, including healthcare and medical education. This work offers a case study on the real-world implementation of retrieval-augmented generation (RAG) models for improving healthcare education in low- and middle-income nations. The need for easily available and locally relevant medical information to support community health workers in providing high-quality maternity care led to the development of the SMARThealth GPT model, which is the subject of this research. We outline the whole RAG pipeline development process, including parameter selection and optimization, knowledge embedding and retrieval, response generation, and the establishment of a knowledge base of Indian pregnancy-related guidelines. This case study demonstrates how LLMs may improve guideline-based health education and develop the capabilities of frontline healthcare workers. It also provides ideas for comparable applications in resource-constrained environments. It is a resource for machine learning researchers, teachers, medical experts, and legislators who want to use LLMs to significantly enhance education.

Keywords—Machine Learning, Large Language Models, Retrieval-Augmented Generation, Natural Language Processing, Medical Assistant.

I. INTRODUCTION

Large Language Models (LLMs) have become the standard approach for the majority of text-related tasks. Nevertheless, their factual accuracy, a drawback of their generative nature, remains a serious concern. LLMs are designed to produce plausible text based on learned patterns rather than to recall exact facts [1]. Contextualizing LLMs through pertinent input tokens that shape their output is a common method of improving their factuality. This ranges from straightforward prompting strategies such as "Let's think step by step" to more complex Retrieval-Augmented Generation (RAG) methods. Indeed, integrating a context retrieval system may greatly improve LLM performance and dependability [1].

Recently, with the growing availability of pre-trained large language models (LLMs), including OpenAI's GPT, LLaMA, and PaLM, the field of natural language processing (NLP) has witnessed remarkable advancements. These models have been applied in a variety of sectors and are increasingly used in healthcare and medical education. Two effective techniques for adapting pre-trained LLMs to particular applications are retrieval-augmented generation (RAG) and fine-tuning. In a "closed-book" scenario, fine-tuning adjusts the model's weights according to a task-specific dataset, depending only on extra input-output pairs of training data for learning. On the other hand, RAG does not require labeled training data and functions in an "open-book" environment.

A. What is RAG

The implementation of goal-oriented large language models (LLMs) in conjunction with various LLM-oriented frameworks is expanding the range of AI applications and improving LLMs' ability to perform complicated tasks. Modern LLMs are quite capable, ranging from chatbots that can generate programming code to responding to inquiries on legal papers with latent provenance. But this enhanced potential also brings new complications. Despite their strength at traditional text-based activities, emerging LLMs require outside assistance to keep up with changing knowledge [2].

Fig. 1. RAG Model

Non-parametric retrieval-based approaches, such as retrieval-augmented generation (RAG), are becoming essential to the most recent LLM applications in order to overcome this difficulty, particularly for domain-specific tasks.
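To make the "open-book" idea concrete, the retrieval-then-augment step of a RAG system can be sketched as below. This is a minimal illustration, not the paper's pipeline: it uses a toy bag-of-words embedding and cosine similarity, whereas a production system would use a learned sentence encoder, a vector index, and an actual LLM to generate the final answer. All function names and the sample documents are hypothetical.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": lowercase word counts (stand-in for a sentence encoder).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank knowledge-base passages by similarity to the query ("open book").
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # Augment the user's question with retrieved context before calling the LLM;
    # the model answers from the supplied passages rather than its weights alone.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical knowledge-base passages for illustration only.
docs = [
    "Pregnant women should attend regular antenatal care visits.",
    "Hypertension in pregnancy requires regular blood pressure monitoring.",
]
prompt = build_prompt("Why are antenatal care visits important?", docs)
print(prompt)
```

Because the knowledge lives in the retrieved passages rather than the model's weights, updating the guideline documents updates the system's answers with no retraining, which is the practical advantage over closed-book fine-tuning noted above.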