Ethics of Digital Phenotyping in Alzheimer’s and Related Disorders
Artificial intelligence is reshaping healthcare, including how we detect and diagnose neurological conditions like Alzheimer’s disease. Digital phenotyping—the use of data collected from everyday devices to identify patterns associated with health conditions—presents exciting possibilities for early detection and intervention. But as this technology advances, so do ethical concerns that demand our attention. Earlier this year, our lab published a review of these considerations in The Journal of Medical Ethics.
The Promise of Digital Speech Analysis
Alzheimer’s disease presents an enormous challenge for our health system. It affects millions worldwide, causing emotional and economic hardship for patients and their families. One of the most promising developments in early detection involves computational analysis of speech patterns.
Changes in language have been noted since the initial descriptions of Alzheimer’s by Alois Alzheimer in 1906, and research now suggests these changes may predate more obvious symptoms by decades. Speech analytics could provide a cost-effective, rapid, and accurate screening tool that doesn’t add time to clinical practice.
With algorithms approaching 90% accuracy in detecting Alzheimer’s from speech patterns, this technology could revolutionize early diagnosis. But who’s considering the ethical implications as these tools move outside traditional clinical settings?
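Part of what makes this technology so easy to deploy outside the clinic is that many of the speech features involved are simple to compute. As a purely illustrative sketch (not a diagnostic tool; the filler-word list and feature choices here are hypothetical examples, not drawn from any specific study), a few lexical measures often discussed in this literature, such as vocabulary diversity and filler-word rate, can be extracted from a transcript in a handful of lines:

```python
# Illustrative only: simple lexical features of the kind speech-based
# screening research examines. The filler-word list is a made-up example.
import re
from collections import Counter

FILLERS = {"um", "uh", "er"}  # hypothetical filler-word inventory

def lexical_features(transcript: str) -> dict:
    """Compute a few basic lexical measures from a raw transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    if not words:
        return {"type_token_ratio": 0.0, "filler_rate": 0.0, "mean_word_length": 0.0}
    counts = Counter(words)
    return {
        # Vocabulary diversity: unique words divided by total words
        "type_token_ratio": len(counts) / len(words),
        # Proportion of words that are fillers like "um" and "uh"
        "filler_rate": sum(counts[f] for f in FILLERS) / len(words),
        # Average word length in characters
        "mean_word_length": sum(len(w) for w in words) / len(words),
    }

feats = lexical_features("Well um I went to the the store and um bought things")
```

Real systems combine many acoustic and linguistic features with trained classifiers, but the point stands: signals like these are cheap to extract from any recorded speech, which is exactly what raises the ethical stakes when microphones are everywhere.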
The Ubiquity of Data Collection
Many of us already live surrounded by devices with microphones—smartphones, smart speakers, televisions, and more. These devices can potentially collect speech data without our explicit awareness.
Consider these real-world examples:
- Samsung faced a class action lawsuit in 2017 over collecting users’ data through voice-activated televisions
- During the 2019 Women’s World Cup, a mobile app activated phone microphones to detect if users were watching unauthorized broadcasts
- Amazon patented technology to assess health conditions, including respiratory infections, through voice analysis to tailor marketing
While medical providers operate within strict legal and ethical boundaries, what protections exist when diagnostic data are collected “in the wild”? For individuals with Alzheimer’s disease, the stakes are particularly high as this information could affect their independence, insurance costs, employment opportunities, and more—often without their knowledge.
The Consent Conundrum
“No one in the universe… has read the terms and conditions.” — Eddie Izzard
Medical ethics centers on principles of autonomy, beneficence, non-maleficence, and justice. Informed consent is fundamental to respecting a person’s autonomy and right to make decisions about their care. But this becomes incredibly complex with digital phenotyping technology for several reasons:
- Capacity concerns: Alzheimer’s disease affects decision-making capacity, potentially compromising a person’s ability to understand, retain, or weigh information about data collection
- Opaque consent processes: Terms and conditions for technology are typically long, difficult to understand, and vague
- Changing capacity: Someone’s ability to provide informed consent may fluctuate daily and decline over time
- Lack of real-time assessment: Unlike clinical settings, there’s typically no trained professional assessing understanding in real-time
In medical settings, when capacity is compromised, a legally authorized representative (LAR) may provide consent. But how can such protections be implemented in technologies operating outside clinical environments?
Who Owns Your Health Data?
In the United States, medical information is governed by HIPAA, with strict regulations on confidentiality and severe consequences for breaches. But what about health information inferred from everyday interactions with technology?
Outside medical settings, once information is shared with a third party, individuals typically relinquish their right to privacy. Companies generally aren’t required to inform users what information they possess or how they’re using it. This creates a troubling situation where sensitive medical information could be distributed without the standard precautions to which patients and physicians are accustomed.
If an algorithm detects a high likelihood of Alzheimer’s disease through everyday speech analysis, who—if anyone—is responsible for communicating this information to the person? And how should this sensitive information be shared?
When physicians diagnose a medical condition, they carefully communicate results, guide patients to appropriate resources, and assess understanding in real-time. How can such care be replicated when algorithms make diagnoses based on surreptitiously collected data?
Understanding AI Results: Bias and Interpretability
Machine learning algorithms offer a veneer of objectivity but are only as good as the data they’re trained on. They may integrate and reinforce societal biases in ways that are difficult to detect. Consider:
- Will algorithms be equally accurate across different ethnicities, genders, socioeconomic backgrounds, and language variations?
- How will algorithms account for known risk factors (like women being more likely to develop Alzheimer’s) without reinforcing societal biases?
- If biases are discovered in outcomes for specific populations, who is responsible for addressing them?
Many machine learning models are opaque even to their designers, and proprietary algorithms are typically protected as intellectual property, further limiting transparency. This “black box” nature makes it difficult for users, medical professionals, and the public to understand the limitations and implications of the technology they’re using.
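One concrete way to probe the first question, whether accuracy holds up across populations, is a subgroup audit: score the model separately for each demographic group and compare. A minimal sketch, using fabricated labels and a made-up grouping purely to illustrate the computation (a real audit would need representative samples and statistical tests, not raw gaps):

```python
# Minimal subgroup-accuracy audit sketch. All data below are fabricated
# solely to illustrate the computation.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Per-group accuracy for a binary screening model."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        totals[group] += 1
        hits[group] += int(truth == pred)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical predictions for two fabricated language groups
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
groups = ["native", "native", "native", "native",
          "non-native", "non-native", "non-native", "non-native"]

audit = accuracy_by_group(y_true, y_pred, groups)
# In this toy data the model is right 4/4 times for one group and only
# 1/4 times for the other: the kind of gap that should trigger scrutiny.
```

An aggregate accuracy figure, like the 90% cited earlier, can conceal exactly this kind of disparity, which is why audits disaggregated by group matter before deployment.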
The Public Sphere: Beyond Medical Settings
What happens when this technology becomes widely available to the general public? We’ve already seen analyses of public figures’ speech patterns suggesting cognitive decline (like Ronald Reagan’s speeches showing linguistic changes years before his Alzheimer’s diagnosis was announced).
If technology becomes sufficiently user-friendly, we could see unauthorized analyses conducted during phone calls or video meetings—by family members concerned about a loved one’s cognition, by employers making hiring decisions, or by political opponents seeking leverage.
Moving Forward Responsibly
The convergence of technology and healthcare requires a multidisciplinary approach that balances innovation with ethical considerations. Potential solutions include:
- Establishing robust regulatory frameworks prioritizing transparency, minimizing biases, and ensuring confidentiality
- Developing informed consent processes that account for fluctuating cognitive capacities
- Implementing stronger data protection measures that extend beyond traditional healthcare settings
- Including diverse stakeholders in technology design, including medical professionals, ethicists, legal experts, patients, and caregivers
- Integrating education about AI technologies into healthcare training
- Conducting thorough impact assessments to identify and mitigate potential biases
By taking a proactive approach to the development and deployment of these technologies, we can harness their potential benefits while minimizing risks. The goal should always be to use technology to enhance patient care while protecting dignity, autonomy, and privacy.
Conclusion
Digital phenotyping and AI-based diagnostic tools offer tremendous potential for early detection and intervention in Alzheimer’s disease. However, we must ensure that the pursuit of technological advancement doesn’t outpace ethical considerations, particularly for vulnerable populations.
As we navigate this new frontier, our guiding principle should remain: first, do no harm. By thoughtfully addressing these ethical challenges, we can develop technologies that truly benefit patients with dementia and their families while preserving their dignity, autonomy, and rights.