Alzheimer’s: How phone conversations could aid early detection
- Cheap, accessible, and reliable tests to diagnose Alzheimer’s disease at an early stage are currently lacking.
- People with the condition tend to speak more slowly and with longer pauses.
- In a recent study, researchers used machine learning to develop models that use the acoustic features of a person’s conversations to identify whether they may have early Alzheimer’s disease.
- If further tests prove successful, the models could help identify the early stages of the condition via a smartphone app or online.
Alzheimer’s disease involves progressive degeneration in the parts of the brain that govern thoughts, memory, and language.
The Centers for Disease Control and Prevention (CDC) report that in 2020, up to 5.8 million people in the United States were living with the condition.
Research suggests that early diagnosis is important because it allows doctors to begin clinical interventions to manage a person’s symptoms as soon as possible.
However, no inexpensive, widely accessible, and reliable tools are currently available to diagnose Alzheimer’s disease during its preclinical stage.
One possible diagnostic indicator may be that in everyday conversation, people with the disease tend to speak more slowly, pausing as they try to find the right words. As a result, their speech can lack fluency compared with people without the condition.
Scientists from McCann Healthcare Worldwide, Tokyo Medical and Dental University, Keio University, and Kyoto University in Japan reasoned that a fully automated model could use acoustic features of speech, such as pauses, pitch, and voice intensity, to predict who is likely to develop Alzheimer’s disease.
They used machine learning to create models that they believe could eventually be just as good as, or even better than, a standard test that doctors use to diagnose the disease.
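To illustrate the general idea, below is a minimal sketch of how acoustic features such as pitch, voice intensity, and pauses can be extracted from a recording in Python using the librosa library. The specific features, thresholds, and parameters here are illustrative assumptions, not the study’s actual pipeline.

```python
# A minimal sketch of acoustic feature extraction, assuming the librosa
# library. Feature choices and thresholds are illustrative assumptions.
import numpy as np
import librosa

def extract_features(path):
    y, sr = librosa.load(path, sr=16000)

    # Fundamental frequency (pitch) estimated with the pYIN algorithm.
    f0, voiced, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)
    pitch_mean = np.nanmean(f0)
    pitch_std = np.nanstd(f0)

    # Voice intensity, approximated by per-frame root-mean-square energy.
    rms = librosa.feature.rms(y=y)[0]

    # Pauses: gaps between non-silent intervals (silence threshold is a guess).
    intervals = librosa.effects.split(y, top_db=30)
    gaps = (intervals[1:, 0] - intervals[:-1, 1]) / sr
    long_pauses = int(np.sum(gaps > 0.5))  # count pauses longer than 0.5 s

    return [pitch_mean, pitch_std, rms.mean(), rms.std(),
            gaps.mean() if len(gaps) else 0.0, long_pauses]
```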
The scientists have reported their results in the journal PLOS ONE.
Machine-learning algorithms
The team used three machine-learning algorithms to analyze voice data from 24 people with Alzheimer’s disease and 99 people without, all of whom were aged 65 years or older.
The audio recordings came from a public health program in Hachioji that involved participants talking on the phone about lifestyle changes to reduce their risk of dementia.
As part of the program, the participants also underwent the Japanese version of a standard test of cognitive functioning called the Telephone Interview for Cognitive Status (TICS-J).
For the new study, the scientists used vocal features from some of the audio recordings to train the machine-learning algorithms to differentiate between people with Alzheimer’s disease and controls.
They used the remainder of the recordings to gauge the performance of the resulting models.
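As a rough illustration of that train-and-test step, the sketch below fits an XGBoost classifier, the study’s best-performing algorithm, on per-recording feature vectors. The synthetic data, split ratio, and hyperparameters are assumptions for demonstration only, not the study’s setup.

```python
# Hedged sketch of training and evaluating an XGBoost classifier.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Synthetic stand-in data: 123 "recordings" with 6 acoustic features each,
# mirroring the study's 24 cases and 99 controls. Real input would be the
# feature vectors produced by something like extract_features above.
rng = np.random.default_rng(0)
X = rng.normal(size=(123, 6))
y = np.array([1] * 24 + [0] * 99)  # 1 = Alzheimer's, 0 = control

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X_train, y_train)

# Evaluate on the held-out recordings.
probs = model.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, probs))
```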
One of the models, which was based on an algorithm called extreme gradient boosting (XGBoost), performed better than TICS-J, although the difference between the two did not reach the threshold for statistical significance.
Feeding the model several audio files from each individual improved the reliability of its predictions.
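One simple way to combine several recordings, sketched below, is to average the model’s predicted probabilities across all of a person’s calls before applying a decision threshold. This is a hypothetical illustration; the study’s actual aggregation method may differ.

```python
# Sketch of the multi-recording idea: average predicted probabilities
# over one person's recordings before thresholding. The 0.5 threshold
# is an assumption for illustration.
import numpy as np

def predict_person(model, recordings):
    """recordings: list of feature vectors from one person's calls."""
    probs = model.predict_proba(np.asarray(recordings))[:, 1]
    return int(probs.mean() >= 0.5)  # flag if the average probability >= 0.5
```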
Both XGBoost and TICS-J had a sensitivity of 100%, meaning that neither produced any false negatives: they did not miss anyone who had the condition.
XGBoost also achieved perfect specificity, meaning that it produced no false positives: everyone it flagged as having Alzheimer’s disease did have the condition. TICS-J, in comparison, scored only 83.3% on specificity.
In other words, TICS-J wrongly flagged 16.7% of the cognitively healthy participants as having Alzheimer’s disease.
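For readers unfamiliar with these metrics, the short example below computes sensitivity and specificity from a confusion matrix. The labels are synthetic, chosen only to mirror the numbers reported above.

```python
# Worked example of sensitivity and specificity from a confusion matrix.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 0, 0, 0, 0, 0, 0]   # 1 = Alzheimer's, 0 = control
y_pred = [1, 1, 0, 0, 0, 0, 1, 0]   # one healthy person wrongly flagged

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # 1.0: no cases missed (no false negatives)
specificity = tn / (tn + fp)  # 5/6 = 0.833: one false positive among 6 controls
print(sensitivity, specificity)
```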
The researchers say that developers could incorporate their model into websites or mobile apps, allowing the general public to access it for themselves.
They believe that such a predictive tool could guide people in the earliest stages of the disease to seek professional help.
They conclude:
“Our achievement in predicting [Alzheimer’s disease] well using only vocal features from daily conversation indicates the possibility of developing a pre-screening tool for [Alzheimer’s disease] among the general population that is more accessible and lower cost.”
“[W]e are now planning to conduct this test again with a larger sample size in the new field by the end of this year in order to further validate our results,” said lead author Akihiro Shimoda of McCann Healthcare Worldwide in Tokyo.
“McCann Health wants to improve this diagnostic screening method further to develop its own service named ‘Dearphone’ aimed at contributing to prevention and early detection of dementia,” he told Medical News Today.
He said that alongside apps and online platforms, developers could incorporate their model into a conventional phone service for older people who do not use a smartphone or computer.
“Actually, we are looking for a partner that can collaborate with us to develop and implement our model to society,” he added.
Limitations of the study
One of the main limitations of the study was that it used audio data from people who had already received a diagnosis of Alzheimer’s disease.
To confirm that the model works, researchers would need to test it on a larger sample from the general population and then follow them over time to see who developed the condition.
The authors note some other limitations of their work. For example, the study did not differentiate between people with Alzheimer’s disease and those with mild cognitive impairment, who may have different speech characteristics. In addition, the sample size was relatively small.
They also note that a future model could incorporate speech content and sentence structure to improve its performance.