Page Manager: Webmaster
Last update: 9/11/2012 3:13 PM

Predicting MCI Status From Multimodal Language Data Using Cascaded Classifiers

Journal article
Authors Kathleen Fraser
Kristina Lundholm Fors
Marie Eckerström
Fredrik Öhman
Dimitrios Kokkinakis
Published in Frontiers in Aging Neuroscience
Volume 11
Article 205
ISSN 1663-4365
Publication year 2019
Published at Institute of Neuroscience and Physiology
Department of Swedish
Centre for Ageing and Health (Agecap)
Language English
Links dx.doi.org/10.3389/fnagi.2019.00205
Keywords mild cognitive impairment, language, speech, eye-tracking, machine learning, multimodal, early, Alzheimer's disease, spontaneous speech, picture description, memory, integration, decline, identification, comprehension, recognition
Subject categories Neurosciences, Language Technology (Computational Linguistics), Linguistics

Abstract

Recent work has indicated the potential utility of automated language analysis for the detection of mild cognitive impairment (MCI). Most studies combining language processing and machine learning for the prediction of MCI focus on a single language task; here, we consider a cascaded approach to combine data from multiple language tasks. A cohort of 26 MCI participants and 29 healthy controls completed three language tasks: picture description, reading silently, and reading aloud. Information from each task is captured through different modes (audio, text, eye-tracking, and comprehension questions). Features are extracted from each mode, and used to train a series of cascaded classifiers which output predictions at the level of features, modes, tasks, and finally at the overall session level. The best classification result is achieved through combining the data at the task level (AUC = 0.88, accuracy = 0.83). This outperforms a classifier trained on neuropsychological test scores (AUC = 0.75, accuracy = 0.65) as well as the "early fusion" approach to multimodal classification (AUC = 0.79, accuracy = 0.70). By combining the predictions from the multimodal language classifier and the neuropsychological classifier, this result can be further improved to AUC = 0.90 and accuracy = 0.84. In a correlation analysis, language classifier predictions are found to be moderately correlated (rho = 0.42) with participant scores on the Rey Auditory Verbal Learning Test (RAVLT). The cascaded approach for multimodal classification improves both system performance and interpretability. This modular architecture can be easily generalized to incorporate different types of classifiers as well as other heterogeneous sources of data (imaging, metabolic, etc.).
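The cascaded architecture described above can be sketched as follows: one classifier is trained per mode within a task, the per-mode probability outputs feed a task-level classifier, and (by the same pattern) task-level outputs would feed a session-level classifier. This is a minimal illustrative sketch on synthetic data, not the authors' implementation; the feature matrices, mode names, and class separation are all hypothetical assumptions.

```python
# Hedged sketch of a cascaded ("late fusion") classifier stack.
# Synthetic data only; feature values and mode names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 55  # 26 MCI participants + 29 healthy controls, as in the study
y = np.array([1] * 26 + [0] * 29)

# Hypothetical per-mode feature matrices for one task (e.g. picture
# description captured as both audio and transcribed text).
modes = {
    "audio": rng.normal(y[:, None] * 0.8, 1.0, size=(n, 10)),
    "text": rng.normal(y[:, None] * 0.8, 1.0, size=(n, 8)),
}

def fit_level(feature_sets, y):
    """Train one classifier per feature set; return their stacked
    class-1 probabilities as the feature matrix for the next level."""
    probs = []
    for X in feature_sets:
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        probs.append(clf.predict_proba(X)[:, 1])
    return np.column_stack(probs)

# Mode level -> task level: the task classifier sees only the per-mode
# probability outputs, never the raw features. Repeating this step over
# several tasks would yield the session-level prediction.
mode_probs = fit_level(modes.values(), y)
task_clf = LogisticRegression().fit(mode_probs, y)
task_pred = task_clf.predict(mode_probs)
print("task-level training accuracy:", (task_pred == y).mean())
```

Note that this sketch reports training-set accuracy for brevity; in practice the probability features passed to each upper level should come from out-of-fold (cross-validated) predictions, otherwise the cascade overfits.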
