Can A.I.-Driven Voice Analysis Help Identify Mental Disorders?

This article is part of a limited series on the ability of artificial intelligence to solve everyday problems.

Imagine a test as quick and easy as having your temperature taken or your blood pressure measured that could reliably identify an anxiety disorder or predict an impending depressive relapse.

Although health care providers have many tools to gauge a patient’s physical condition, there are no reliable biomarkers for assessing mental health – objective indicators of a medical state that can be observed from outside the patient.

But some artificial intelligence researchers now believe that the sound of your voice may be the key to understanding your mental state – and that AI is perfectly suited to detect such changes, which are otherwise difficult, if not impossible, to perceive. The result is a set of apps and online tools designed to track your mental state, as well as programs that deliver real-time mental health assessments to telehealth and call-center providers.

Psychologists have long known that certain mental health problems can be detected by listening not only to what a person says but how they say it, said Maria Espinola, a psychologist and assistant professor at the University of Cincinnati College of Medicine.

With depressed patients, Dr. Espinola said, “their speech is generally more monotone, flatter and softer. They also have a reduced pitch range and lower volume. They take more pauses. They stop more often.”

Patients with anxiety feel more tension in their bodies, which can also change the way they sound, she said. “They tend to speak faster. They have more difficulty breathing.”

Today, these types of vocal features are being used by machine learning researchers to predict depression and anxiety, as well as other mental illnesses such as schizophrenia and post-traumatic stress disorder. Deep-learning algorithms can uncover additional patterns and characteristics, captured in short voice recordings, that may not be evident even to trained experts.
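The vocal markers described above (volume, pitch range, pauses) can be approximated with basic signal processing. As a minimal illustrative sketch – not any vendor’s actual pipeline – the following Python computes frame energy, a crude pause ratio and an autocorrelation-based pitch estimate from a synthetic signal. Real systems feed far richer features, or the raw audio itself, into deep-learning models.

```python
import numpy as np

def voice_features(signal, sr, frame_len=1024):
    """Toy acoustic features: mean energy, pause ratio, pitch estimate."""
    # Split the signal into fixed-length frames and compute per-frame RMS energy.
    n = len(signal) // frame_len
    frames = signal[: n * frame_len].reshape(n, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))

    # Call a frame a "pause" if its energy is below 10% of the loudest frame.
    pause_ratio = float((rms < 0.1 * rms.max()).mean())

    # Crude pitch estimate: autocorrelation peak of the loudest frame,
    # searched within a plausible speech range of 50-400 Hz.
    f = frames[rms.argmax()]
    ac = np.correlate(f, f, mode="full")[frame_len - 1:]
    lo, hi = sr // 400, sr // 50
    lag = lo + int(np.argmax(ac[lo:hi]))
    return {
        "mean_energy": float(rms.mean()),
        "pause_ratio": pause_ratio,
        "pitch_hz": sr / lag,
    }

# Synthetic "voice": a 150 Hz tone with a silent gap simulating a pause.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
sig = np.sin(2 * np.pi * 150 * t)
sig[sr // 3: sr // 2] = 0.0
print(voice_features(sig, sr))
```

On the synthetic tone, the pitch estimate lands near 150 Hz and the pause ratio reflects the silent gap; a clinical-grade system would use many more features (jitter, shimmer, spectral measures, speech rate) and learn their weighting from labeled data.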

Kate Bentley, an assistant professor at Harvard Medical School and a clinical psychologist at Massachusetts General Hospital, said, “The technology we’re using now can shed light on features that might not be discernible even to the human ear.”

“There’s a lot of excitement around finding biological or more objective indicators of psychiatric diagnoses that go beyond the more subjective forms of assessment traditionally used, such as clinician-rated interviews or self-report measures,” she said. Other indicators that researchers are tracking include changes in activity levels, sleep patterns and social media data.

These technological advances come at a time when the need for mental health care is especially acute: According to a report by the National Alliance on Mental Illness, one in five adults in the United States experienced mental illness in 2020. And the numbers continue to climb.

Although AI technology cannot address the shortage of qualified mental health care providers (there are not nearly enough to meet the country’s needs, Dr. Bentley said), there is hope that it can lower the barriers to receiving a correct diagnosis, help clinicians identify patients who may be hesitant to seek care, and facilitate self-monitoring between visits.

“A lot can happen between appointments, and technology can really offer us the potential to improve monitoring and assessment in a more continuous way,” Dr. Bentley said.

To test this new technology, I started by downloading the Mental Fitness app from the health technology company Sonde Health to see if my feelings of malaise were a sign of something serious or if I was just feeling run down. Described as a “voice-powered mental fitness tracking and journaling product,” the free app invited me to record my first check-in, a 30-second verbal journal entry, which would rate my mental health on a scale of 1 to 100.

A minute later, my score appeared: an unimpressive 52. “Pay attention,” the app warned.

The app flagged that the level of liveliness in my voice was notably low. Did I sound monotone merely because I had been trying to speak quietly? Should I heed the app’s suggestions to improve my mental fitness by going for a walk or decluttering my space? (That first question may point to one of the app’s potential flaws: As a consumer, it can be difficult to know why your vocal scores fluctuate.)

Later, feeling uneasy between interviews, I tested another voice-analysis program, this one focused on measuring stress levels. The StressWaves Test is a free online tool from Cigna, the health care and insurance group, developed in collaboration with the AI specialist Ellipsis Health to evaluate stress levels using 60-second samples of recorded speech.

“What keeps you awake at night?” the website prompted. After I spent a minute describing my persistent worries, the program scored my recording and sent me an email with its verdict: “Your stress level is moderate.” Unlike the Sonde app, Cigna’s email offered no helpful self-improvement tips.

Other technologies add a potentially helpful layer of human interaction, such as Kintsugi, a Berkeley, California-based company that raised $20 million in Series A funding earlier this month. Kintsugi is named after the Japanese practice of mending broken pottery with veins of gold.

Founded by Grace Chang and Rima Seilova-Olson, who bonded over their shared past struggles to access mental health care, Kintsugi develops technology for telehealth and call-center providers that can help them identify patients who might benefit from further support.

Using Kintsugi’s voice-analysis program, a nurse might be prompted, for example, to take an extra minute to ask a harried parent about his or her own well-being.

One concern with the development of this type of machine learning technology is the issue of bias – ensuring that programs work equally for all patients, regardless of age, gender, ethnicity, nationality or other demographic criteria.

“For machine learning models to work well, you really need to have a very large and diverse and robust set of data,” Ms. Chang said, adding that Kintsugi uses voice recordings in many different languages from around the world specifically to guard against this problem.

Another major concern in this nascent field is privacy, particularly for voice data, which can be used to identify individuals, Dr. Bentley said.

And even when patients agree to be recorded, the question of consent is sometimes twofold. In addition to assessing a patient’s mental health, some voice-analysis programs use the recordings to develop and refine their own algorithms.

Another challenge, Dr. Bentley said, is consumers’ potential distrust of machine learning and so-called black box algorithms, which work in ways that even their developers cannot fully explain, particularly which features they use to make predictions.

“There’s an algorithm created, and the algorithm has to be understood,” said Dr. Alexander S. Young, interim director of the Semel Institute for Neuroscience and Human Behavior and chair of psychiatry at the University of California, Los Angeles, echoing the concerns that many researchers have about AI and machine learning in general.

For now, Dr. Young remains cautiously optimistic about the potential of voice-analysis techniques, especially as a tool for patients to monitor themselves.

“I believe you can model people’s mental health status, or at least estimate it in general,” he said. “People like to be able to self-monitor their condition, especially with chronic illnesses.”

But before automated voice-analysis techniques enter mainstream use, some are calling for rigorous testing of their accuracy.

“We really need more validation, not only of voice technology but of AI and machine learning models built on other data streams,” Dr. Bentley said. “And that validation needs to come from large-scale, well-designed representative studies.”

Until then, AI-powered voice-analysis technology remains a promising but unproven tool, one that may eventually become an everyday method for taking the temperature of our mental well-being.
