How is hearing technology adapting to complex listening needs and environments?

An individual’s listening experience is shaped both by the environment they are in and by their own complex listening needs. BIHIMA spoke with Leonard Cornelisse, Lead System Architect and manager of the Hearing System Engineering group at Unitron, to discuss how advances in technology are addressing the difficulties that complex listening needs and environments can present for people with hearing loss.

Published on 16 November 2022

By Karen Noble

BIHIMA: Can you tell us a little more about your role?

Leonard Cornelisse (LC): The Hearing System Engineering group works within R&D and is involved in many stages of product development, from concept and exploration of ideas to the evolution, testing and optimisation of products and product features.

BIHIMA: Can you tell me what is meant by complex listening needs and difficult listening environments?

LC: We can look at these as two dimensions. A listening environment is about the acoustics of the place the individual is in: for example, how many competing sounds are present, how similar or different those sounds are, and what directions they are coming from. A difficult listening environment is one in which there are many competing sounds, similar or very dissimilar in nature, coming from different directions. An example of a difficult listening environment is a busy restaurant with many groups having conversations; if there’s music playing in the background, that presents another layer of difficulty. Complex scenes with many sound sources and a poor signal-to-noise ratio (SNR) make it challenging for the person to focus on what they want to hear.

Complex listening needs relate to the individual themselves, to their internal needs. There is more to hearing ability than simply what’s shown on an audiogram. Frequency selectivity, temporal selectivity and susceptibility to masking can all affect a person’s hearing abilities in any given situation. In addition, other factors such as cognitive load, motivation and fatigue play a role.

“An example of a difficult listening environment is a busy restaurant with many groups having conversations; if there’s music playing in the background…”

BIHIMA: How does hearing technology support people in difficult listening environments, and how has this changed over time?

LC: The number one complaint from people with hearing aids is the challenge of focusing on the person they want to hear in difficult situations. A lot of effort has gone into trying to improve that, especially in the last ten years. For example, as an industry we’ve developed things like noise cancellers and directionality, or beamformers that try to focus in a certain direction. It’s possible to do noise reduction based on the signal-to-noise or speech-to-noise ratio (SNR), or on an estimate of the SNR derived from spatial location.
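As a rough illustration of the kind of SNR-driven noise reduction Cornelisse describes, here is a minimal sketch of a generic Wiener-style gain rule (an illustrative assumption, not Unitron’s actual implementation): estimate how much of each frequency band is noise, then attenuate the bands where noise dominates.

```python
import numpy as np

def snr_based_gain(noisy_power: np.ndarray, noise_power: np.ndarray,
                   floor_db: float = -12.0) -> np.ndarray:
    """Per-band Wiener-style gain computed from an SNR estimate.

    noisy_power: power spectrum of the incoming speech-plus-noise signal
    noise_power: running estimate of the noise-only power spectrum
    floor_db:    maximum attenuation, so speech cues are never removed entirely
    """
    eps = 1e-10
    # Estimated per-band SNR (clamped so it never goes negative).
    snr = np.maximum(noisy_power - noise_power, eps) / (noise_power + eps)
    # Wiener gain: high-SNR bands pass through; noise-dominated bands are attenuated.
    gain = snr / (snr + 1.0)
    # Limit attenuation to a floor, as hearing instruments typically do.
    return np.maximum(gain, 10.0 ** (floor_db / 20.0))
```

In a real instrument a rule of this kind runs continuously across a filter bank, with the noise estimate updated during speech pauses; the spatial variant Cornelisse mentions would derive the noise estimate from direction of arrival instead.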

One of the major advances, apart from Bluetooth giving us wireless connectivity, has been guided directionality: the ability to change direction so the hearing instrument is not always focused to the front. The sounds a person wants to listen to do not always come from the front. The way hearing instruments characterise the listening environment has evolved over time, in terms of recognising where speech is located around the person and automatically steering the beam based on that. Technology now enables the exchange of data and audio signals between the two hearing instruments, which allows an automated binaural decision where in the past a manual interaction from the wearer was required.
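A minimal sketch of how such an automated binaural decision could work, assuming each instrument produces per-direction SNR estimates that are shared over the ear-to-ear link (the direction labels, fusion rule and switching margin are illustrative assumptions, not Unitron’s design):

```python
def steer_beam(left_snr: dict, right_snr: dict,
               current: str, margin_db: float = 3.0) -> str:
    """Choose a beam direction from binaurally fused SNR estimates.

    left_snr / right_snr: per-direction SNR estimates in dB from each
    instrument, e.g. {"front": 2.0, "left": 8.5, "right": 1.0, "back": 0.5}.
    """
    # Fuse the two ears' estimates (a simple average here).
    fused = {d: 0.5 * (left_snr[d] + right_snr[d]) for d in left_snr}
    best = max(fused, key=fused.get)
    # Only re-steer when another direction clearly beats the current one,
    # which prevents rapid back-and-forth switching.
    if fused[best] - fused.get(current, float("-inf")) > margin_db:
        return best
    return current
```

With speech dominating on the left, for example, steer_beam({"front": 2.0, "left": 8.5, "right": 1.0, "back": 0.5}, {"front": 1.5, "left": 7.5, "right": 1.2, "back": 0.8}, current="front") would switch the beam to "left" with no action from the wearer.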

BIHIMA: How is hearing technology adapting to support people with complex listening needs, and how has this changed over time?

LC: This relates to my previous answer, in that the needs of the listener are addressed by the automation: this helps the listener because they don’t need to interact with their hearing instrument. Technology has allowed us to develop an automatic system, using sophisticated event-driven detection, that adjusts to match the listener’s needs as closely as possible. Of course, you can’t be 100% correct, so the next challenge is how to let the listener interact with the device in those moments when they want something different, and that’s an area Unitron has recently focused on.

At Unitron, in addition to adjustment within the automatic programme, we offer optional app programmes that are configured for specific situations. The user can choose one of these programmes according to the listening environment they’re in, for example in a café or inside a moving car. And within that programme the user also has the option to fine-tune the audio processing to their satisfaction.
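That programme-plus-fine-tuning structure can be pictured as a preset lookup with a user adjustment layered on top. The programme names and parameters below are invented for illustration; they are not Unitron’s actual fitting parameters.

```python
# Hypothetical presets keyed by listening situation (illustrative values only).
PROGRAMMES = {
    "cafe":       {"beam": "adaptive", "noise_reduction_db": 8,  "gain_offset_db": 0},
    "moving_car": {"beam": "omni",     "noise_reduction_db": 12, "gain_offset_db": -2},
    "default":    {"beam": "front",    "noise_reduction_db": 4,  "gain_offset_db": 0},
}

def active_settings(selected: str, fine_tune_db: int = 0) -> dict:
    """Settings for a user-selected programme, with the user's own
    fine-tuning adjustment applied on top of the preset."""
    preset = dict(PROGRAMMES.get(selected, PROGRAMMES["default"]))
    preset["gain_offset_db"] += fine_tune_db
    return preset

# e.g. the user picks the café programme and nudges the gain up slightly:
# active_settings("cafe", fine_tune_db=2)
```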

“In the past, manufacturers have only really focused on the devices. In the future, I think it will be about the entire process: how we in the hearing technology industry interact with clients through their HCP, and the process of fitting the devices.”

BIHIMA: How do you see hearing technology adapting to complex listening needs and difficult listening environments as we move forward?

LC: Technology will continue to evolve along these two dimensions, making listening easier in difficult situations. On the technical side we’re working to improve what I call the perceptual SNR: when the listener has a certain sound they want to hear and they’re in a challenging listening situation, we will continue to focus on trying to make that easier for them. Determining what exactly the listener wants to hear, especially as we get into very complex situations, is not always obvious. For example, think about a busy restaurant where, at the table next to you, someone is talking very loudly. That person can somehow dominate the conversation you’re actually trying to listen to. For a hearing instrument to make intelligent choices in situations like that is still technically challenging, so this is something we’re working on. You could perhaps develop electrodes on the outside of the hearing instrument that pick up brain-wave signals, or use artificial intelligence to pick up and respond to voice signals from the listener, but another major challenge is the size and power constraints of hearing instruments.

BIHIMA: What needs to be a priority for the future?

LC: It’s very difficult to know what the future is going to bring. What I see as important is the interaction between the hearing healthcare professional (HCP) and the client. In the past, manufacturers have only really focused on the devices. In the future, I think it will be about the entire process: how we in the hearing technology industry interact with clients through their HCP, and the process of fitting the devices. People who’ve been diagnosed with hearing loss are going through a difficult adjustment in their lives, and getting used to wearing hearing instruments can be challenging. Well-fitted hearing instruments make a big difference – people who have had a good fitting are generally the most satisfied with their listening abilities. So, even though we tend to focus on technology, there’s a very human dimension that is critically important. Hearing technology is an interesting industry to be in, and we’re working to improve user experience and quality of life. I’ve been in the hearing industry for over 30 years and consider myself very lucky that I’ve found something interesting that I really like, and that I have been able to contribute to the quality of life of individuals with hearing loss.

Source: Unitron
