Our brains are exceptional, especially when it comes to picking out individual voices in a crowded room. This is a task that even the most advanced hearing aids have struggled with. However, researchers at Columbia University in New York City are currently developing new artificial intelligence (AI) technology that could better amplify the correct speaker in a group.
One thing modern hearing aids do well is select a speaker’s voice and amplify it while suppressing distracting background noise like traffic and clanking dishes. However, these devices have a hard time boosting an individual’s voice over other voices, and instead tend to amplify all speakers at once. This is known as the cocktail party problem, and it severely hinders a hearing aid wearer’s ability to participate in a conversation.
Instead of designing yet another device that relies solely on external microphones and sound amplification, the new technology currently in development actually monitors the wearer’s brain waves, so it can boost the voice they want to focus on.
The technology itself is complex, combining speech-separation algorithms with neural networks, mathematical models that imitate the brain’s natural computational abilities.
The system first separates the voices of individual speakers from a group, then compares the voice of each speaker to the brain waves of the person listening. The speaker whose voice pattern most closely matches the listener’s brain waves is then amplified.
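The matching step can be sketched in a simplified toy example. This is only an illustration of the general idea, assuming the separated voices and the listener’s neural signal are already available as arrays; the actual Columbia system uses deep neural networks for both the speech separation and the brain-wave decoding, and the function names and the simple correlation measure here are hypothetical stand-ins:

```python
import numpy as np

def select_attended_speaker(separated_voices, brain_signal):
    """Toy sketch: pick the separated voice whose amplitude envelope
    correlates best with the listener's neural signal. A hypothetical
    stand-in for the real neural-network decoder."""
    scores = []
    for voice in separated_voices:
        envelope = np.abs(voice)  # crude amplitude envelope of this voice
        r = np.corrcoef(envelope, brain_signal)[0, 1]  # Pearson correlation
        scores.append(r)
    return int(np.argmax(scores))  # index of the best-matching speaker

def remix(separated_voices, attended_idx, gain=4.0):
    """Boost the attended speaker relative to the others and remix."""
    mix = np.zeros_like(separated_voices[0])
    for i, voice in enumerate(separated_voices):
        mix += voice * (gain if i == attended_idx else 1.0)
    return mix

# Synthetic demo: two "voices"; the simulated brain signal tracks
# the envelope of speaker 1, as if the listener were attending to them.
rng = np.random.default_rng(0)
voices = [rng.standard_normal(1000), rng.standard_normal(1000)]
brain = np.abs(voices[1]) + 0.1 * rng.standard_normal(1000)

idx = select_attended_speaker(voices, brain)
output = remix(voices, idx)
print(idx)  # index of the speaker the system would amplify
```

In this sketch the speaker whose envelope correlates best with the neural signal is boosted in the output mix, mirroring the separate-compare-amplify pipeline described above.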
“By creating a device that harnesses the power of the brain itself, we hope our work will lead to technological improvements that enable the hundreds of millions of hearing-impaired people worldwide to communicate just as easily as their friends and family do,” explained senior study author Nima Mesgarani, Ph.D.
To test the effectiveness of this new technology, the Columbia research team partnered with Ashesh Dinesh Mehta, M.D., Ph.D., a neurosurgeon at the Northwell Health Institute for Neurology and Neurosurgery.
Mehta’s epilepsy patients volunteered to listen to various speakers while their brain waves were monitored using electrodes placed in their brains. The algorithm tracked the volunteers’ attention as they shifted their focus between speakers, and the volume levels did, indeed, change to reflect each shift.
The research team is currently investigating how to transform their prototype into a noninvasive device that can be worn externally. They’re also hoping to refine the algorithm so it can function in a broader range of environments.
For more information on the benefits of hearing aids or to schedule an appointment, call Amarillo Hearing Clinic today.