AI 'hallucinations' flagged by researchers in medical field

Tech giant OpenAI has promoted its AI-driven transcription tool, Whisper, as nearly achieving “human-level robustness and accuracy.”

However, Whisper has a significant drawback: it frequently generates false text, sometimes entire sentences, according to interviews with more than a dozen software engineers, developers, and academic researchers, as reported by AP.

This phenomenon, referred to in the industry as “hallucinations,” can include inappropriate racial comments, violent language, and fabricated medical treatments.

Experts warn that these inaccuracies are concerning, especially since Whisper is being adopted across industries worldwide for tasks such as translating and transcribing interviews, generating text for popular consumer technologies, and creating video subtitles.

Of particular concern is the rapid adoption of Whisper-based tools by medical facilities to transcribe patient consultations, despite OpenAI’s advisories against using the tool in “high-risk domains.”