Current and former FDA employees told CNN about problems with Elsa, the generative AI tool the federal agency unveiled last month. Three employees said that in practice, Elsa has hallucinated nonexistent studies or misrepresented real research. "Anything that you don't have time to double-check is unreliable," one source told the publication. "It hallucinates confidently." That isn't exactly ideal for a tool that's supposed to speed up the clinical review process and help the agency make efficient, informed decisions to benefit patients.
Leadership at the FDA appeared unfazed by the potential problems posed by Elsa.
→ Continue reading at Engadget