Hearing loss, whether mild or severe, affects millions of people worldwide. The World Health Organization (WHO) projects that by 2050, around 2.5 billion people, or one in four, will live with some degree of hearing loss.
With the help of technological innovations, medical professionals can now provide more people with the ability to hear, understand speech, and participate in social activities.
Regulators are also advancing efforts to bring new technologies to those with hearing loss. On November 10, an FDA rule went into effect allowing adults 18 and older to purchase hearing aids directly from a seller, without a doctor's prescription, medical exam, or fitting by an audiologist. The rule should help more people with mild to moderate hearing loss get the devices they need to hear better and live life to the fullest.
Continue reading to take a deeper dive into how MedTech is benefiting those with hearing impairments, and what the future may hold for those with hearing loss.
Hearing Aids
The first electronic hearing aids used a telephone transmitter, which converted speech into electrical signals. The technology has since evolved to offer more personalized frequency response, and modern hearing aids include features such as Bluetooth connectivity, companion mobile apps, and even artificial intelligence (AI) capabilities.
Assistive listening devices are also boosting the efficacy of hearing aids by filtering out background noise, particularly in crowded areas. Some common types of assistive listening devices include:
- FM systems
- Hearing loops
- Personal amplification devices
- Infrared systems
Many of these devices can connect to a user’s cochlear implant or hearing aid for ease of use.
Cochlear Implants
Cochlear implants (CIs) remain one of the most significant advances in hearing technology. A CI is an electronic neuroprosthesis that provides sound perception to people with profound hearing loss. The FDA regulates CIs as Class III medical devices, its most stringent category, in part because they remain in a patient's body indefinitely.
A CI has two components, one external and one internal, that work together to stimulate hearing. On a standard CI, the external component is a sound processor worn behind the ear or attached to clothing. The processor contains microphones, a battery, digital signal processing (DSP) electronics, and a coil that transmits signals through the skin to the implant.
The internal portion bypasses the damaged parts of the cochlea, delivering electrical impulses directly to the auditory nerve. From there, signals travel to the brain, where they are recognized as sound. However, patients with CIs experience hearing differently than those without them: rather than restoring natural hearing, CIs give people with profound hearing loss useful representations of sound and speech.
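To make that processing chain more concrete, here is a minimal sketch of a widely used CI coding strategy known as continuous interleaved sampling (CIS): the processor splits incoming audio into frequency bands, extracts the loudness envelope of each band, and uses each envelope to drive one electrode. This is an illustrative approximation, not any manufacturer's actual firmware; the sample rate, channel count, band edges, and compression function below are all assumptions chosen for readability.

```python
# Illustrative sketch of a CIS-style cochlear implant coding strategy.
# NOT real device firmware: sample rate, channel count, band edges, and
# the compression function are assumptions chosen for readability.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

FS = 16_000          # sample rate in Hz (assumed)
N_CHANNELS = 8       # number of electrodes (assumed)

def band_edges(n, lo=200.0, hi=7000.0):
    """Logarithmically spaced band edges, roughly mimicking cochlear tonotopy."""
    return np.geomspace(lo, hi, n + 1)

def cis_envelopes(signal, fs=FS, n_channels=N_CHANNELS):
    """Split `signal` into bands; return one compressed envelope per electrode."""
    edges = band_edges(n_channels)
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        env = np.abs(hilbert(band))      # envelope via the analytic signal
        env = np.log1p(10 * env)         # crude loudness compression (assumed)
        envelopes.append(env)
    return np.stack(envelopes)           # shape: (n_channels, n_samples)

# Example: a 1 kHz test tone should mostly drive the mid-frequency channel.
t = np.arange(FS) / FS
tone = 0.5 * np.sin(2 * np.pi * 1000 * t)
env = cis_envelopes(tone)
print(env.mean(axis=1).round(3))         # per-electrode average drive
```

Running the example shows the 1 kHz tone concentrating its energy in the band that contains 1 kHz, which mirrors the place-based frequency mapping the implant relies on.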
The original CIs, developed in the late 1950s, delivered stimulation through a single channel. Research eventually showed that single-channel CIs were limited in usefulness: because they could not stimulate different regions of the auditory nerve, they could not distinguish between low and high frequencies, a requirement for understanding speech.
Modern CIs use multiple electrodes to stimulate different regions of the cochlea at different times, allowing the user to perceive a much wider range of pitches.
Today's CIs also deliver electrical current with far more precision than older models. With this finer control, adjacent electrodes can work together to create "virtual channels" by steering current between them, delivering finer pitch detail than the physical electrodes alone could provide.
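As a rough illustration of the idea, the short sketch below splits a fixed total current between two adjacent electrodes; shifting the weight toward one electrode shifts the perceived pitch toward that electrode's place in the cochlea. The linear weighting and the current value are assumptions made for illustration, not parameters from any real device.

```python
# Illustrative current-steering sketch: a "virtual channel" is created by
# splitting a fixed total current between two adjacent electrodes.
# The linear weighting and current level are assumptions for illustration.
def steer_current(total_uA: float, alpha: float) -> tuple[float, float]:
    """Return (current on electrode i, current on electrode i+1).

    alpha = 0.0 stimulates only electrode i (lower pitch);
    alpha = 1.0 stimulates only electrode i+1 (higher pitch);
    intermediate values are perceived as pitches in between.
    """
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be in [0, 1]")
    return (1.0 - alpha) * total_uA, alpha * total_uA

# Sweeping alpha yields several perceived pitch steps from one electrode pair.
for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(alpha, steer_current(100.0, alpha))
```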
CIs have also become much smaller. The original devices were worn on a belt; today, they are similar in size to standard hearing aids.
In 2019, the FDA approved CIs for unilateral hearing loss, making the technology available to a broader range of patients.
Sign Language Translation Software
While devices and assistive technologies improve quality of life by restoring the ability to process sound, companies are also seeking new ways to help those with hearing loss communicate.
Google's AI team is currently working on products that translate spoken language into sign language, reducing barriers for those who rely heavily on sign language to communicate.
Additionally, Google has a team of researchers developing AI speech recognition designed specifically to understand people with atypical speech patterns. The technology could potentially help people with speech impediments, Down syndrome, cerebral palsy, or ALS, as well as those recovering from a stroke or traumatic brain injury, in addition to those with profound hearing loss.
Hearing loss can have profound effects on patient quality of life. It can make ordinary tasks more difficult to complete and affect one’s ability to socialize and succeed in school or work. Thankfully, the MedTech community continues to identify and develop new tools and devices that offer more personalized and more effective options for the deaf and hard of hearing.