October 30, 2024
AI Transcription Tool Faces Accuracy Concerns in Healthcare Settings
The Verge | Contributed by: Drex DeFord
Summary
Hospitals are increasingly adopting an AI transcription tool built on OpenAI's Whisper model, despite emerging concerns about its accuracy. Research indicates that the model can produce significant errors, including fabricating entire phrases that are not present in the original audio. Nabla, the company behind the tool, has transcribed around 7 million medical conversations and acknowledges Whisper's limitations, saying it is working on improvements. A study presented at the Association for Computing Machinery's FAccT conference found that approximately 1% of transcriptions contained fabricated content, and that about 38% of those fabrications could lead to harmful outcomes. Notably, errors appear more often when speakers pause for extended periods, which is common among patients with language disorders such as aphasia, underscoring the need for reliable transcription in critical medical settings.