Divide and Conquer Radiology Report Generation via Observation Level Fine-grained Pretraining and Prompt Tuning

Yuanpin Zhou, Huogen Wang


Abstract
The automation of radiology report generation (RRG) holds immense potential to alleviate radiologists’ workloads and improve diagnostic accuracy. Despite advancements in image captioning and vision-language pretraining, RRG remains challenging due to the lengthy and complex nature of radiology reports. In this work, we propose the Divide and Conquer Radiology Report Generation (DCRRG) model, which breaks down full-text radiology reports into concise observation descriptions. This approach enables the model to capture fine-grained representations from each observation through a two-stage process: an encoding stage that learns fine-grained representations via observation prediction tasks, and a decoding stage that integrates these descriptions into cohesive and comprehensive radiology reports. Experimental results on two benchmark datasets demonstrate that DCRRG achieves significant improvements across all evaluated metrics, underscoring its capability to generate semantically coherent and clinically accurate radiology reports.
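To make the divide-and-conquer idea concrete, the sketch below shows one possible way to decompose a full report into observation-level descriptions and then recompose them. This is a minimal illustration only, not the authors' implementation: the observation label list and the keyword-matching rule are assumptions chosen for clarity.

```python
# Illustrative sketch of the "divide" (report -> observation descriptions)
# and "conquer" (descriptions -> report) steps. The observation vocabulary
# and keyword matching below are hypothetical simplifications.

from typing import Dict, List

# Hypothetical CheXpert-style observation labels (assumption).
OBSERVATIONS = ["cardiomegaly", "pleural effusion", "edema", "pneumothorax"]


def divide_report(report: str) -> Dict[str, List[str]]:
    """Assign each sentence of the full report to the observations it mentions."""
    per_observation: Dict[str, List[str]] = {obs: [] for obs in OBSERVATIONS}
    for sentence in report.split("."):
        sentence = sentence.strip()
        if not sentence:
            continue
        for obs in OBSERVATIONS:
            if obs in sentence.lower():
                per_observation[obs].append(sentence)
    return per_observation


def conquer_report(per_observation: Dict[str, List[str]]) -> str:
    """Concatenate the observation-level descriptions back into one report."""
    parts = [" ".join(sents) for sents in per_observation.values() if sents]
    return (". ".join(parts) + ".") if parts else ""


if __name__ == "__main__":
    report = (
        "The heart is enlarged, consistent with cardiomegaly. "
        "There is a small left pleural effusion. No pneumothorax."
    )
    observations = divide_report(report)   # "divide": observation-level splits
    print(observations)
    print(conquer_report(observations))    # "conquer": re-assembled report
```

In the actual model, the divided observation descriptions supervise an encoding stage (observation prediction) and a prompt-tuned decoding stage; the rule-based split above only illustrates the data decomposition.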
Anthology ID:
2024.emnlp-main.433
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
7597–7610
URL:
https://aclanthology.org/2024.emnlp-main.433/
DOI:
10.18653/v1/2024.emnlp-main.433
Cite (ACL):
Yuanpin Zhou and Huogen Wang. 2024. Divide and Conquer Radiology Report Generation via Observation Level Fine-grained Pretraining and Prompt Tuning. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 7597–7610, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
Divide and Conquer Radiology Report Generation via Observation Level Fine-grained Pretraining and Prompt Tuning (Zhou & Wang, EMNLP 2024)
PDF:
https://aclanthology.org/2024.emnlp-main.433.pdf
