Automated Multimodality Image-based Classifiers for Early Detection of Alzheimer's Disease
Project description
The main results demonstrated by the software are:
- Functional modalities such as functional and perfusion MRI yield higher precision in early stages than traditional MRI.
- For the final classification, combining multiple scans is not better than the best single scan alone.
- Applying a trained classifier to unseen data will always involve multiple steps: first to determine the most suitable test, and then to perform that test with the most suitable modality.
- This initial step must be multimodal: using a modality suited to one particular stage can bias the result towards a particular outcome. The project included a survey to determine the advantages and disadvantages of such a longer scan session. It turned out that:
- People have no objection to additional scanning, provided they know in advance what will happen and why, and provided a familiar person accompanies them.
- Recording and using patient experiences can be done far more extensively than is currently the case.
Reports
Final report
Summary of the application
Brain imaging is playing an increasingly important role as a diagnostic tool for identifying Alzheimer's disease (AD) [1], enabling intervention to bring timely support to patients, families and carers. However, this is more difficult in early AD, where aberrations are more subtle. Different image types, or modalities, such as anatomical magnetic resonance imaging (MRI) scans, metabolic scans using positron emission tomography (PET), arterial spin labelling (ASL) scans and functional MRI (fMRI) activation scans [1-4], show different aspects of the development path towards AD. However, no integrated analysis of these modalities exists that improves early diagnosis of AD. Attempts have been made to construct one model of AD-related changes in different stages of AD [5], [6]. This model represents the evolution of independent biological measurements (biomarkers) in the course towards dementia. We propose to apply pattern recognition software to multimodality image data, i.e., combinations of more than one image type. By constructing a combined imaging biomarker of AD (a biological measure that represents the disease), we will create a window of opportunity for interventions by predicting different stages of AD. Trained classification patterns enable assessment of single-subject examinations: they can be directly applied to new acquisitions, enabling their use in the clinic as an automated diagnostic tool. Data and methods will be easily interchangeable between institutes: the trained classifier no longer needs access to the training data set. This is not possible with current methods, which rely on between-group analyses for statistical power. The resulting package is a first step towards a system that can evaluate each new examination by automatically gathering multimodality imaging data and processing them in the classifier for diagnostic support in clinical and therapeutic studies.
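The portability argument above — train once on a reference cohort, then ship only the frozen classifier, never the training data — can be sketched as follows. This is a minimal illustration with synthetic feature vectors standing in for multimodality image data; the feature dimensions, labels and classifier choice are assumptions for demonstration, not the project's actual pipeline.

```python
import pickle

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for multimodality imaging features:
# 100 subjects x 50 voxel/region features, labels 0 = control, 1 = AD.
X_train = rng.normal(size=(100, 50))
y_train = (X_train[:, :5].mean(axis=1) > 0).astype(int)  # toy signal

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Serialise the trained classifier; the training data are no longer needed.
blob = pickle.dumps(clf)

# At another institute: load the classifier, score a new single-subject scan.
clf_remote = pickle.loads(blob)
new_scan = rng.normal(size=(1, 50))
prediction = clf_remote.predict(new_scan)[0]             # 0 or 1
probability = clf_remote.predict_proba(new_scan)[0, 1]   # estimated P(AD)
```

The key point is that `blob` contains only the learned decision boundary, so exchanging it between institutes does not expose the training cohort.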
To realise this, we plan to integrate the software with the portal software of medical image storage databases. This project will combine modern, efficient pattern classification methods with integrated representations of multimodality data. Its main milestones are:
- to tailor pattern recognition methods to neuroimaging data by introducing optimal data structures that represent the common spatial structure of multimodality inputs;
- to train the software using an optimised normative multimodality imaging data set from the ADNI-2 cohort (N=550, controls and patients);
- to validate the clinical relevance of the resulting biomarkers in terms of reliability in a test-retest setting, and in terms of validity/generalisability in a cross-validation setting;
- to apply and validate these biomarkers in existing, ecological multi-modality imaging cohorts from 1. the VUmc (N=160 patients, Alzheimer Center) and 2. CITA-Alzheimer (N=480 elderly controls, recruited via the regional media);
- to quantify classifier accuracy by relating its outcomes to disease variables of amyloid-beta, tau, genetic and cognition data;
- to define, validate and test diagnostic patterns for various early stages of AD to facilitate clinical decision making;
- to develop a quantitative diagnostic tool for decision support and to assess its clinical value.
The ADNI-2 data set [7] is a multi-centre, multi-modality data set of volunteers and patients with different degrees of AD, recruited via the Internet, advertising in print and via physicians. Multi-modality imaging data are collected, as well as cognitive tests, APOE genotyping, and amyloid-beta and tau concentrations in the cerebrospinal fluid (CSF). We will use a subset of the cohort with equal patient group sizes, optimally matched and balanced for age and gender, restricted to subjects who have had both 3-month and 12-month follow-up scans.
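The subset-selection step — equal group sizes, matched and balanced for age and gender — could be sketched like this. This is a toy example on a synthetic cohort table; the column names and age bands are assumptions for illustration, not the actual ADNI-2 field definitions.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic cohort metadata (stand-in for the ADNI-2 subject table).
n = 550
cohort = pd.DataFrame({
    "subject": [f"S{i:03d}" for i in range(n)],
    "diagnosis": rng.choice(["control", "AD"], size=n, p=[0.6, 0.4]),
    "age": rng.integers(60, 90, size=n),
    "gender": rng.choice(["F", "M"], size=n),
})

# Balance by stratified sampling: within each (gender, age-band) stratum,
# keep the same number of controls and patients (the smaller group's size).
cohort["age_band"] = pd.cut(cohort["age"], bins=[59, 69, 79, 90])
selected = []
for _, stratum in cohort.groupby(["gender", "age_band"], observed=True):
    if stratum["diagnosis"].nunique() < 2:
        continue  # stratum lacks one of the groups; skip it
    per_group = stratum["diagnosis"].value_counts().min()
    for _, grp in stratum.groupby("diagnosis"):
        selected.append(grp.sample(per_group, random_state=0))
matched = pd.concat(selected)
```

Because every retained stratum contributes equally to both groups, the resulting subset has identical group sizes and matched age/gender distributions by construction.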
The software will be trained on the baseline data to produce quantitative markers that are validated internally (using cross-validation) as well as externally (by matching them to non-imaging disease variables). To assess reliability, the controls and AD patients whose diagnosis and status have not changed at month 12 will be used in a test-retest classification using the baseline and month-3 data. Applying the classifier to the 'ecological' VUmc imaging study of subjects with diagnoses ranging from healthy controls to severe AD [8] will give better insight into the generalisability of the classifier across populations and acquisition configurations. Afterwards, a classifier based on the ecological data set will be applied to the ADNI-2 data to assess the generalisability of the method. The cohort at CITA-Alzheimer is an ecological control cohort of healthy elderly. Subjects were recruited via popular media, to sample the whole adult population, and subjects with cognitive or neurological problems were excluded. Applying the classifier to these control data will yield important quantitative insights into the classifier's sensitivity to brain changes that precede AD.
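The two validation notions used above — internal validity via cross-validation on the baseline data, and reliability via test-retest agreement between baseline and month-3 scans — can be sketched as follows. Synthetic data stand in for real imaging features; modelling the month-3 scan as the baseline plus acquisition noise is an assumption made purely to illustrate the agreement measure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

# Synthetic baseline features and labels (0 = control, 1 = AD).
X_base = rng.normal(size=(120, 30))
y = (X_base[:, 0] + 0.5 * rng.normal(size=120) > 0).astype(int)

# Internal validity: 5-fold cross-validated accuracy on the baseline data.
clf = LogisticRegression(max_iter=1000)
cv_acc = cross_val_score(clf, X_base, y, cv=5).mean()

# Reliability: month-3 scans modelled here as baseline plus small noise;
# test-retest agreement is the fraction of identically predicted labels.
X_m3 = X_base + 0.1 * rng.normal(size=X_base.shape)
clf.fit(X_base, y)
agreement = (clf.predict(X_base) == clf.predict(X_m3)).mean()
```

A clinically useful marker needs both numbers to be high: cross-validated accuracy speaks to validity, while test-retest agreement speaks to reliability under repeated acquisition.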