We present MedicalNarratives, a framework for building a comprehensive medical vision-language dataset from educational videos. In brief:
We propose MedicalNarratives, a dataset curated from medical pedagogical videos, similar in spirit to the data collected in Think-Aloud studies and inspired by Localized Narratives. MedicalNarratives collects grounded image-text data by pairing instructors' speech with their mouse cursor movements, synchronized in time. The dataset supports pretraining with both semantic and dense objectives, removing the need to train medical semantic and dense tasks separately for lack of reasonably sized datasets. It contains 4.7M image-text pairs from videos and articles, of which 1M samples carry dense annotations in the form of cursor traces and bounding boxes. To evaluate the utility of MedicalNarratives, we train GenMedCLIP, a CLIP-based model, on our dataset spanning 12 medical domains and show that it outperforms previous state-of-the-art models on a newly constructed medical imaging benchmark that evaluates performance comprehensively across all modalities.
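The core curation step is temporal alignment: each transcribed utterance is paired with the cursor points recorded during its time window. The sketch below illustrates one way this could look; the data structures, field names, and example values are illustrative assumptions, not the released pipeline.

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    start: float  # seconds from video start
    end: float
    text: str

@dataclass
class CursorPoint:
    t: float  # timestamp in seconds
    x: float  # normalized [0, 1] image coordinates
    y: float

def align_traces(utterances, cursor_points):
    """Group cursor points under the utterance whose time window contains them.

    Returns a list of (utterance text, trace) pairs, where a trace is the
    ordered list of cursor points recorded while the instructor spoke.
    """
    grounded = []
    for utt in utterances:
        trace = [p for p in cursor_points if utt.start <= p.t <= utt.end]
        if trace:  # keep only utterances with accompanying cursor movement
            grounded.append((utt.text, trace))
    return grounded

# Hypothetical example: one narrated segment with three cursor samples.
utts = [Utterance(12.0, 15.5, "note the ground-glass opacity in the right lower lobe")]
points = [CursorPoint(12.4, 0.61, 0.72), CursorPoint(13.1, 0.63, 0.70), CursorPoint(14.8, 0.66, 0.69)]
print(align_traces(utts, points))
```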
MedicalNarratives contains:
- 4.7M image-text pairs curated from medical pedagogical videos and articles, spanning 12 medical domains
- 1M samples with dense annotations in the form of cursor traces and bounding boxes
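For a sense of how the dense annotations could be consumed, here is a minimal sketch that filters a dataset export for samples carrying traces or boxes. It assumes a JSON-lines file with hypothetical field names (`image`, `caption`, `trace`, `bboxes`, `domain`); the actual release format may differ.

```python
import json
from collections import Counter

def iter_dense_samples(path):
    """Yield records that carry dense annotations (traces and/or boxes).

    Assumes each JSON-lines record has hypothetical fields 'image', 'caption',
    'trace' (list of [t, x, y]) and 'bboxes' (list of [x1, y1, x2, y2]).
    """
    with open(path) as f:
        for line in f:
            record = json.loads(line)
            if record.get("trace") or record.get("bboxes"):
                yield record

# Example: count dense samples per domain in a (hypothetical) export file.
counts = Counter(r.get("domain", "unknown") for r in iter_dense_samples("medicalnarratives.jsonl"))
print(counts.most_common())
```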
We evaluate our approach through multiple experiments:
- Zero-shot classification performance across different medical domains (see the sketch below)
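Zero-shot evaluation follows the standard CLIP protocol: embed the image and a set of class prompts, then pick the prompt with the highest cosine similarity. Below is a minimal sketch using the open_clip library; the backbone name, checkpoint path, prompt template, and class names are illustrative assumptions, not the exact evaluation setup.

```python
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-16",                  # assumed backbone
    pretrained="genmedclip.pt",  # hypothetical local checkpoint path
)
tokenizer = open_clip.get_tokenizer("ViT-B-16")
model.eval()

labels = ["chest X-ray", "dermoscopy image", "brain MRI", "histopathology slide"]
prompts = tokenizer([f"a medical image of a {c}" for c in labels])
image = preprocess(Image.open("example.png")).unsqueeze(0)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(prompts)
    # Cosine similarity between the image and each class prompt.
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(labels, probs.squeeze(0).tolist())))
```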
@article{medicalnarratives2025,
  title={MedicalNarratives: Connecting Medical Vision and Language with Localized Narratives},
  author={Author Names},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2025}
}