AI-generated X-rays are now realistic enough to fool doctors, and they could disrupt the entire healthcare system.
A new study published in Radiology, the journal of the Radiological Society of North America (RSNA), finds that both radiologists and advanced multimodal large language models (LLMs) struggle to reliably distinguish real X-rays from artificial intelligence (AI)-generated “deepfake” versions. The results point to growing risks tied to synthetic medical images and highlight the urgent need for better detection tools and specialized training to protect the integrity of medical records.
A “deepfake” is an image, video, or audio recording that appears authentic but has been created or altered using AI.
“Our study demonstrates that these deepfake X-rays are realistic enough to deceive radiologists, the most highly trained medical image specialists, even when they were aware that AI-generated images were present,” said lead study author Mickael Tordjman, M.D., a post-doctoral fellow at the Icahn School of Medicine at Mount Sinai in New York.
“This creates a high-stakes vulnerability for fraudulent litigation if, for example, a fabricated fracture is indistinguishable from a real one. There is also a significant cybersecurity risk if hackers were to gain access to a hospital’s network and inject synthetic images to manipulate patient diagnoses or cause widespread clinical chaos by undermining the fundamental reliability of the digital medical record.”
Study Design and Global Participation
The study involved 17 radiologists from 12 medical centers across six countries (United States, France, Germany, Turkey, United Kingdom, and United Arab Emirates). Participants ranged from newcomers to experts with up to 40 years in the field. The researchers analyzed 264 X-ray images, split evenly between real and AI-generated scans.