Video and Synthetic MRI Pre-training of 3D Vision Architectures for Neuroimage Analysis

Nikhil J. Dhinagar*, Amit Singh, Saket Ozarkar, Ketaki Buwa, Sophia I. Thomopoulos, Conor Owens-Walton, Emily Laltoo, Yao Liang Chen, Philip Cook, Corey McMillan, Chih Chien Tsai, J. J. Wang, Yih Ru Wu, Paul M. Thompson

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Transfer learning represents a recent paradigm shift in the way we build artificial intelligence (AI) systems. In contrast to training task-specific models, transfer learning involves pre-training deep learning models on a large corpus of data and minimally fine-tuning them to adapt to specific tasks. Even so, for 3D medical imaging tasks, it is unclear whether it is best to pre-train models on natural images, medical images, synthetically generated MRI scans, or video data. To evaluate these alternatives, we benchmarked vision transformers (ViTs) and convolutional neural networks (CNNs) initialized with varied upstream pre-training approaches. These models were then adapted to three distinct downstream neuroimaging tasks spanning a range of difficulty: Alzheimer's disease (AD) classification, Parkinson's disease (PD) classification, and "brain age" prediction. Our experiments led to the following key observations: (1) pre-training improved performance across all tasks, including a boost of 7.5% for AD classification and 4.5% for PD classification with the ViT, and a 19.1% boost for PD classification and a 1.26-year reduction in brain age prediction error with CNNs; (2) pre-training on large-scale video or synthetic MRI data boosted the performance of ViTs; (3) CNNs were robust in limited-data settings, and in-domain pre-training enhanced their performance; (4) pre-training improved generalization to out-of-distribution datasets and sites. Overall, we benchmarked different vision architectures, revealing the impact of pre-training them on emerging datasets for model initialization. The resulting pre-trained models can be adapted to a range of downstream neuroimaging tasks, especially when training data for a domain-specific target task is limited.
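
As a concrete illustration of the transfer-learning recipe described in the abstract, the sketch below (PyTorch; not the authors' exact pipeline) adapts a Kinetics-400 video-pretrained 3D CNN to a binary neuroimaging classification task. The input dimensions, labels, and the replication of single-channel MRI to three channels are assumptions for illustration only.

    import torch
    import torch.nn as nn
    from torchvision.models.video import r3d_18, R3D_18_Weights

    # Upstream step: initialize a 3D ResNet with Kinetics-400 video pre-training.
    model = r3d_18(weights=R3D_18_Weights.KINETICS400_V1)

    # Replace the 400-way action-recognition head with a 2-way diagnostic head.
    model.fc = nn.Linear(model.fc.in_features, 2)

    # Minimal fine-tuning: freeze the backbone and train only the new head.
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith("fc")

    optimizer = torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )
    criterion = nn.CrossEntropyLoss()

    # Dummy batch standing in for preprocessed MRI volumes, replicated to three
    # channels to match the video input format: (N, C, D, H, W). The sizes are
    # hypothetical, not taken from the paper.
    x = torch.randn(2, 3, 64, 112, 112)
    y = torch.tensor([0, 1])  # e.g., control vs. patient

    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()

The same recipe extends naturally to full fine-tuning: once the new head has converged, deeper backbone layers can be unfrozen and trained at a lower learning rate.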

Original language: English
Title of host publication: Medical Imaging 2024
Subtitle of host publication: Computer-Aided Diagnosis
Editors: Weijie Chen, Susan M. Astley
Publisher: SPIE
ISBN (Electronic): 9781510671584
DOIs
State: Published - 2024
Event: Medical Imaging 2024: Computer-Aided Diagnosis - San Diego, United States
Duration: 19 Feb 2024 – 22 Feb 2024

Publication series

Name: Progress in Biomedical Optics and Imaging - Proceedings of SPIE
Volume: 12927
ISSN (Print): 1605-7422

Conference

Conference: Medical Imaging 2024: Computer-Aided Diagnosis
Country/Territory: United States
City: San Diego
Period: 19/02/24 – 22/02/24

Bibliographical note

Publisher Copyright:
© 2024 SPIE.

Keywords

  • MRI
  • Transfer learning
  • vision transformers
