Abstract: Alzheimer’s disease (AD) is the primary cause of dementia worldwide (1), with an increasing morbidity burden that may outstrip diagnosis and management capacity as the population ages. Current methods integrate patient history, neuropsychological testing and magnetic resonance imaging (MRI) to identify likely cases, yet effective practices remain variably applied and lacking in sensitivity and specificity (2). Here we report an explainable deep learning strategy that delineates unique AD signatures from multimodal inputs of MRI, age, gender, and mini-mental state examination (MMSE) score. Our framework linked a fully convolutional network (FCN) to a multilayer perceptron (MLP) to construct high-resolution maps of disease probability from local brain structure. This enabled precise, intuitive visualization of individual AD risk en route to accurate diagnosis. The model was trained using clinically diagnosed AD and cognitively normal (NC) subjects from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) dataset (n=417) (3), and validated on three independent cohorts: the Australian Imaging, Biomarker & Lifestyle Flagship Study of Ageing (AIBL, n=382) (4), the Framingham Heart Study (FHS, n=102) (5), and the National Alzheimer’s Coordinating Center (NACC, n=582) (6). Model performance was consistent across datasets, with mean accuracy values of 0.966, 0.948, 0.815, and 0.916 for ADNI, AIBL, FHS and NACC, respectively. Moreover, our approach exceeded the diagnostic performance of a multi-institutional team of practicing neurologists (n=11), and high-risk cerebral regions predicted by the model closely tracked postmortem histopathological findings. This framework provides a clinically adaptable strategy for using routinely available imaging techniques such as MRI to generate nuanced neuroimaging signatures for AD diagnosis, as well as a generalizable approach for linking deep learning to pathophysiological processes in human disease.
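The FCN-to-MLP pipeline described above can be caricatured in a few lines of NumPy: a shared patchwise scorer stands in for the fully convolutional stage, producing a local disease-probability map, and a small perceptron fuses map statistics with the non-imaging covariates (age, gender, MMSE). Every weight, shape, and feature choice below is an illustrative assumption for exposition, not the published architecture or its trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fcn_risk_map(img, w, b, patch=8, stride=4):
    """Score every local patch with one shared weight kernel, as a fully
    convolutional layer would, yielding a grid of local AD probabilities."""
    steps = range(0, img.shape[0] - patch + 1, stride)
    return np.array([[sigmoid(np.sum(img[i:i + patch, j:j + patch] * w) + b)
                      for j in steps] for i in steps])

def mlp_diagnosis(risk_map, age, sex, mmse, W1, b1, W2, b2):
    """Fuse summary statistics of the risk map with covariates in a
    two-layer perceptron to get one subject-level AD probability."""
    feats = np.array([risk_map.mean(), risk_map.max(),
                      age / 100.0, sex, mmse / 30.0])
    h = np.maximum(0.0, W1 @ feats + b1)   # ReLU hidden layer
    return sigmoid(W2 @ h + b2)

# Toy forward pass with random, untrained weights.
img = rng.standard_normal((64, 64))            # stand-in for one MRI slice
w, b = rng.standard_normal((8, 8)) * 0.1, 0.0
rmap = fcn_risk_map(img, w, b)                 # 15x15 local probability map
W1, b1 = rng.standard_normal((4, 5)) * 0.1, np.zeros(4)
W2, b2 = rng.standard_normal(4) * 0.1, 0.0
p = mlp_diagnosis(rmap, age=76, sex=1, mmse=24,
                  W1=W1, b1=b1, W2=W2, b2=b2)  # scalar probability in (0, 1)
```

The point of the sketch is the division of labor: because the patch scorer shares its weights across locations, its output map can be rendered over the brain as an interpretable risk heat map, while the MLP alone sees the non-imaging covariates.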
Bio: Vijaya Kolachalama is an Assistant Professor within the Section of Computational Biomedicine, Department of Medicine, Boston University School of Medicine. Research in his group is focused on developing deep learning algorithms for disease risk assessment and designing software technologies to assist clinical decision-making. Current projects include fully convolutional networks and multimodal fusion models that predict the risk of Alzheimer’s disease and osteoarthritis from digital data such as MR imaging, as well as computer vision tasks such as semantic segmentation, image classification, and object detection for digital pathology applications. His group is also developing recurrent neural network approaches for protein sequence analysis.
Before joining Boston University, Dr. Kolachalama held appointments as a Postdoctoral Associate at MIT, as an ORISE Fellow at the US Food and Drug Administration, and as a Principal Member of Technical Staff at the Charles Stark Draper Laboratory. He holds a bachelor’s degree in Aerospace Engineering from the Indian Institute of Technology, Kharagpur, India, and a PhD in Mechanical Engineering from the University of Southampton, UK. His recent accomplishments include recognition as a Research Fellow and a Junior Faculty Fellow by Boston University’s Hariri Institute of Computing, and as a Fellow by Boston University’s Institute for Health System Innovation & Policy. He was recently elected a Fellow of the American Heart Association.