About the project
Dementia affects about 44 million individuals today, a number that is expected to nearly double by 2030 and triple by 2050. With an estimated cost of 604 billion USD annually, dementia represents a major and growing economic burden for both industrialised and developing countries, in addition to the significant physical and emotional burden it places on individuals, family members and caregivers.
Currently, neither a cure for dementia nor a reliable way to slow its progress exists. Improving prevention and diagnosis, and discovering potential treatments and cures, requires a better understanding of the mechanisms underlying neurodegeneration. However, these mechanisms are complex and shaped by a range of genetic and environmental factors that may have no immediately apparent connection to brain health.
The promises of big data
At the core of dementia research are data obtained purposively in medical settings, such as images; clinical, genetic, proteomic and biological data; and cognitive tests or surveys. Big data approaches add new types of data, and new ways of analysing them, to this repertoire: data from electronic medical records, registries and other routine health sources, from online patient platforms, and even from retailers and mobile phone providers, which can offer social and lifestyle insights.
Both broad data (covering a large number of individuals in a dataset) and deep data (offering many measures, at fine granularity, for each individual) are part of the big picture for dementia research. Each type presents challenges for researchers in generating, linking, using and sharing data in a way that respects individual rights to privacy without unnecessarily constraining dementia research.
In December 2013, the G8 Global Dementia Summit in London identified the better use of available data, resource sharing and researcher collaboration as key priorities. With the ambition to find a cure or disease-modifying therapy by 2025, the G8 health ministers mandated the OECD to report on how big data can be used and shared more efficiently for dementia research.
This project is one part of that OECD work, and is aimed at a wide audience of policymakers, funders, the private sector and researchers. The results will be reported to the World Dementia Council and presented to the G7 health ministers at the First WHO Ministerial Conference on Global Action Against Dementia in Geneva on 16-17 March 2015.
Methods and project setup
For this report, the authors interviewed 37 leading experts from academia and beyond, with a particular focus on four case studies of data sharing initiatives: ADNI, AddNeuroMed, UK Biobank and the Swedish Brain Power studies. These cases represent important examples of efforts to create, federate, catalogue and share data, and cover the spectrum of ongoing data sharing activities in terms of geographic coverage, general or dementia-specific focus, size, maturity, openness by design and linkage to routine data.
The project ran between July 2014 and March 2015, when the final report “Big data for advancing dementia research” was published. The project reported to the OECD International Advisory Group, chaired by Rob Buckle, whose members included Philippe Amouyel, Neil Buckholtz, Giovanni Frisoni, Yves Joanette, Richard Johnson, Miia Kivipelto, Martin Rossor, Donald Stuss and Yoshiaki Tojo. On the OECD side, the work was led by Elettra Ronchi and Christian Reimsbach-Kounatze.