EMBRACE AI!
Society’s guarded welcome to the complex computing dynamic most of us are now uncomfortably used to calling artificial intelligence is a defence mechanism, one perhaps best challenged by experts who have tested the benefits AI brings in areas that, from now on, no longer function without it. Audiology, for example.
One of the key reasons why audiology will need AI is that the number of people with hearing loss is growing rapidly, “but the number of audiologists is remaining flat in almost every country,” says Dr. Edwards, pointing to data suggesting that even if audiologists become more efficient with their services, the gap between the demand for audiologists and those available will still grow. “AI can help fill that gap by helping you be more efficient to see more patients,” Edwards underlines.
As he pointed out in his address to BAA members in November 2024, the benefits of AI have had to shine past many clouding concerns, not least of which are ethical considerations and the apparent hallucinations of AI systems. Winning our trust depends on the answers to questions such as: Was this system trained on bad data? Was it trained on data that is not representative of the general population? Can it produce biases towards race or gender or other aspects of individuals? What about the privacy of the healthcare records that are being used and given to the AI system? Is it learning individual data that it shouldn’t really know or understand?
Tolerating the hallucinating AI “bull***tter”
If Brent Edwards wants his audience of audiologists to embrace AI in their professional lives, it is because, working with the NAL on developing useful AI tools for hearing healthcare, he has not only seen and harvested evidence of its advantages, but found ways to develop a mindset of acceptance of AI’s less helpful aspects, such as its hallucinations, one interpretation of which equates with a common human foible. Yes, there is, says Dr. Edwards, a “proven” clinical diagnosis of AI as a bull***tter. His answer is to think of a new colleague you might have in your practice – “let’s call this person Eugene” – who has smart ways of talking about his professional experience that we would be kind to just call bull***ting, the kind of lies we politely indulge. “No, he’s not hallucinating; he’s lying to you and you know,” Edwards tells the gathered BAA crowd.
“Don’t be afraid of AI. AI is Eugene, someone who you can trust, who’s pretty good at things, but you can’t trust that they’re going to do everything right. And sometimes they’re going to bull***t you. So you’ve got to be on the alert and you’ve got to use your own professional judgement to make sure you’re still providing the best care for your patient, even though AI is used to help you. It’s just a Eugene in your practice, on your computer” is how Dr. Edwards wrapped up his lecture.
Indeed, using AI to help not only you but also your patient was the serious substance of a presentation that always had, in the best sense, a light feel to it.
AI as a transformative tool in audiology
While urging his audience to maintain and even sharpen their professional judgement, Dr. Edwards encouraged them to use AI and to see it emerging as a transformative tool in audiology:
Enhancing diagnostics – AI systems excel at visual diagnosis, identifying conditions like otitis media with some models achieving over 90% accuracy, higher than many clinicians.
Training audiologists – virtual AI-powered patients simulate conversations with individuals with varying types of hearing loss. The system is being piloted in Australian universities and professional organisations to improve case-history gathering, treatment recommendation, and the handling of patient objections.
Child hearing loss detection – by leveraging voice-pattern analysis, an AI system identifies children under five with hearing loss. The system converts speech into spectrograms analysed by neural networks, achieving 97% accuracy in initial studies. “We [NAL] are developing a system that kind of piggybacks on what other people are doing in using voice patterns to identify disease states. Voice patterns have been used to identify Parkinson’s or even to identify COVID in people. We’re looking at whether we can use the voice patterns of children to identify whether they have started to get hearing loss or not.”
Hearing aid fitting – AI integration in fitting algorithms (e.g., NAL-NL2) has existed for over a decade. Future developments, like NL3, promise more precise and individualised fitting strategies.
Assisting with administrative work – “I know an audiologist who has AI listen to the conversation he had with his patient and then immediately summarise that information, put it in his electronic medical record, format it automatically, and then create a summary for the patient to take away with them on what was discussed. And instead of taking an hour, a couple of hours, two or three hours a day on that stuff, it takes a minute. How good is that?” Edwards heartily asserts.
Screening and triaging patients
Advising on treatment
Monitoring and motivating patients
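The child hearing loss detection pipeline described above – speech converted into spectrograms, which a neural network then classifies – can be illustrated in miniature. This is a hypothetical sketch, not NAL’s actual system: the sample rate, frame sizes, and test tone are invented for illustration, and the neural-network stage is only noted in a comment.

```python
import numpy as np

def spectrogram(signal, frame_len=256, hop=128):
    """Slice a waveform into overlapping windowed frames and return
    log-magnitude FFT bins: a time-by-frequency 'image'."""
    frames = [signal[i:i + frame_len] * np.hanning(frame_len)
              for i in range(0, len(signal) - frame_len + 1, hop)]
    mags = np.abs(np.fft.rfft(np.asarray(frames), axis=1))
    return np.log1p(mags)

# Toy one-second waveform at 8 kHz: a 440 Hz tone standing in for a recording
# of a child's voice.
sr = 8000
t = np.arange(sr) / sr
wave = np.sin(2 * np.pi * 440 * t)

spec = spectrogram(wave)
# spec is the (time frames, frequency bins) array a convolutional
# neural network would classify as "hearing loss" vs "typical hearing".
print(spec.shape)
```

The point of the spectrogram step is that it turns a one-dimensional waveform into an image-like array, letting the same convolutional architectures that excel at visual diagnosis be reused on voice data.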
Teasing his audience, Dr. Edwards asked: “What if you knew that this whole presentation, and every single word I was saying, was actually generated by AI, and I was just a puppet kind of repeating it to you? Would you still be interested in this lecture?”
Such is the world we march into, dear reader. In what you have read, and what you are about to, what is it that convinces you that this entire article and interview was not generated by ChatGPT?
INTERVIEW: Watch our FULL interview with Dr. Brent Edwards (below) for questions and answers on socio-cultural implications of AI, beyond the profession…
Or read on for a version edited to professional audiology perspectives.
Audiology News UK (ANUK): AI is here to stay. Do we now have to recognise it as unavoidably beneficial?
Dr. Brent Edwards (BE): Unavoidably?! You could avoid it, but why would you, because people who embrace AI are going to be more capable, more effective, more efficient, and more successful in their practice. So, it’s like saying you’re not going to use a computer when hearing aids went digital; well, you could… but why would you?
ANUK: So, please tell us about the ways in which AI will help the audiologist.
BE: First of all, it will allow your patient to do a little bit of self-assessment before they come to see you, if you set your practice up that way. So, you can use AI tools to do a little bit of needs assessment which offload some of that time when you’re in the clinic. Once you’re in the clinic, AI – if you have the the tool provided to you – can help you in determining the treatment strategy for that unique individual. Everyone’s different. Two people with the same audiogram will respond differently, but, using AI, you can look at a vast database of over a million patients and start to identify unique segments and unique treatment strategies for each individual, that falls in front of you based on data. So imagine you could say ” well, according to our AI database, you have a 90% chance of being a success if we can give you this hearing aid in conjunction with the remote microphone, based on 10,000 people who have had the same results as you”. How confident would that patient be in the solution that you’re giving them, and in being confident to use that technology with that?
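The database lookup Dr. Edwards describes – quoting a success probability from similar past patients – can be sketched roughly as below. The record fields, matching criteria, and sample data are illustrative assumptions, not the structure of NAL’s actual million-patient database.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """Hypothetical fitting-outcome record from a historical database."""
    audiogram_class: str    # e.g. "ski-slope", "reverse-slope"
    used_remote_mic: bool   # whether a remote microphone was part of the fitting
    success: bool           # whether the patient reported a successful outcome

def success_rate(db, audiogram_class, used_remote_mic):
    """Share of matching past patients who reported a successful fitting,
    plus the number of matches the estimate is based on."""
    matches = [r for r in db
               if r.audiogram_class == audiogram_class
               and r.used_remote_mic == used_remote_mic]
    if not matches:
        return None, 0
    return sum(r.success for r in matches) / len(matches), len(matches)

# Tiny stand-in for a historical outcomes database.
db = [Record("ski-slope", True, True),
      Record("ski-slope", True, True),
      Record("ski-slope", True, False),
      Record("ski-slope", False, False)]

rate, n = success_rate(db, "ski-slope", used_remote_mic=True)
print(f"{rate:.0%} success across {n} similar patients")  # → 67% success across 3 similar patients
```

Reporting the match count alongside the rate matters: a 90% success figure backed by 10,000 similar patients carries the kind of weight Edwards describes, while the same figure from three patients would not.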
ANUK: In your Adrian Davis lecture at the 2024 BAA Annual Conference, you underlined the role of AI in patient monitoring and motivation. How might this work?
BE: Well, that patient’s life would be best if you could follow them around their whole lives, watch them, monitor them when they’re at home, when they’re at work, when they’re socialising. Obviously you can’t do that, nor would you if you could. But AI can, with the sensors on a hearing aid or on a smartphone or on a Fitbit, monitor the environment and monitor how they’re doing, provide advice to the person right there based on what they’re experiencing, or inform you that the patient is having a challenge and that maybe you can ring them up and say “it looks like maybe we can help help you succeed more in these situations”.
ANUK: And you have had some help from an Apple tool in these situations. Tell us about that, please.
BE: We’ve used the Apple Watch in order to look at biometrics that can determine the amount of fatigue that someone with a hearing aid is experiencing. So imagine you could say, “well, it looks like in your time that evening you were more fatigued than normal, let’s see if we can help you with that”.
ANUK: What AI projects is the National Acoustic Laboratories currently developing?
BE: We’ve been using AI in a variety of ways. One way is using large language models like ChatGPT to simulate different types of patients, in order to help train audiologists better in case history taking, in discussing the results of the audiometry, in terms of recommending treatment strategies. So if you are new, you may not know how to overcome objections, or you may not know how to discuss in the right way the outcomes of the test. You only get that through practice. Well, it would be better to practise on an AI patient rather than practising on real patients, where you might make mistakes when you don’t want to. So that’s one tool that we’re developing for training audiologists and for assisting new audiologists in developing their communication skills.
We’re also leveraging big data. We have access to a database of over a million patients where we can look at, for example, reverse-slope hearing loss – a very difficult type of hearing loss to treat. We can look in the database and see how audiologists have treated those patients over the past 20 years across thousands of patients, and apply AI in order to extract meaningful solutions for future patients who fit that same audiometric configuration.
AI can also be used to predict what might happen in the future with your patient. So, a certain configuration of a certain hearing loss in a certain lifestyle may mean that they’re going to have greater needs from you down the road than a different kind of patient with a different hearing loss configuration, a different lifestyle. If you knew in advance, based on this AI prediction, what the future holds for them, you can prepare them better and prepare yourself better. So for example, if you knew that they were more at risk of a deteriorating hearing loss, you could say, “you know what, I really need to see you every year because there’s a 70% chance that your hearing loss is going to get worse and we need to adjust your hearing aids for you”, as opposed to another patient to whom you may say, “you’ve got an 80% chance that nothing’s going to change, just call me if you notice something”. And as a healthcare provider that makes you a lot more efficient and effective with your time and with your patients’ time.
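The risk-stratified recall idea Edwards describes reduces, in its simplest form, to mapping a predicted probability of deterioration onto a review schedule. The sketch below is purely illustrative: the thresholds are invented for the example and are not clinical guidance.

```python
def recall_interval(p_worsening):
    """Map a model's predicted probability that a patient's hearing loss
    will deteriorate onto a recall schedule. Thresholds are illustrative."""
    if p_worsening >= 0.5:
        return "annual review"
    if p_worsening >= 0.25:
        return "review every two years"
    return "contact us if you notice a change"

# The two patients from Edwards's example:
print(recall_interval(0.7))   # 70% chance of worsening → annual review
print(recall_interval(0.2))   # 80% chance nothing changes → contact us if you notice a change
```

In practice the prediction itself would come from a model trained on historical outcomes; the scheduling rule is the easy part, and the clinical value lies in how well calibrated those probabilities are.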
ANUK: What about areas where AI cannot operate…will knowing where these are help audiologists to reassess their future roles?
BE: Sure. AI can do simple tasks. I mean think about audiometry, doing a hearing test. Do you need a highly skilled professional audiologist to measure pure tone threshold? Probably not. I think an AI system can probably do that pretty well. So where is your value? Your value is understanding the complex needs of your patient, giving them the trust, the motivation, the confidence that the solutions that you’re recommending – both technology solutions, lifestyle changes, behavioural changes – are going to help them. You know, you could go on to Google and you could say “I have a hearing loss, what should I do?” And Google could spit out something, but is a patient going to trust that? No, they’re not. When it comes to health, humans want to talk to other human medical professionals to make sure that they’re doing the right thing, they’re getting the right treatment, that they have the confidence that the approach that they’re taking is the best. That’s not going to change, I think, in my lifetime. And so the counselling, the motivation, the human care in understanding what they’re going through and giving them the solutions: that is the critical human component.
ANUK: When it comes to important steps, such as diagnosis and treatment planning, what are the factors that influence our readiness to trust a machine over a human professional?
BE: When mobile phones first came out, I think people might have been reluctant to integrate them into their lives and rely on them too much. People who grew up with that from the get-go, they probably couldn’t live without it. They’ve got no problem adopting new technology. So I think for younger generations, this will be the norm of what they expect. I think today there are diagnoses where AI is much better than a typical human because the level of accuracy is very high. Even though I know that, I think most people, including myself, still want a trusted healthcare professional to look at the results and translate them to me and give me the confidence that yes, what this is coming out with makes sense, because we know that AI can hallucinate or simply make mistakes. And so even if I trust that statistically AI is going to come to the right answer and the right diagnosis, I want the professional to give it their seal of approval and also be able to answer my questions effectively, again in a way that I can trust and believe.