Investigation Reveals New Risks for Google Users

- Google reduces visibility of warnings about AI medical information accuracy
- Experts warn of risks from misleading health information for users
Experts have warned that the way Google presents AI-generated medical information could put users at risk, as the company has reduced the prominence of warnings that flag possible errors or inaccuracies.
The company maintains that its AI-generated summaries encourage users to consult specialists rather than rely entirely on automated answers. However, an investigation by a British newspaper found that these warnings do not appear when medical advice is first displayed; they are shown only after the user clicks “Show More,” and then in smaller, less noticeable text.
AI researchers say that the absence of prominent warnings at the outset weakens users’ awareness of potential errors, especially since current models may provide inaccurate or incomplete information.
A Massachusetts Institute of Technology expert noted that warnings are a key element in breaking automatic trust in AI answers and encouraging users to verify medical information.
A professor at Queen Mary University of London argued that the issue stems from a design that prioritizes speed over accuracy, which could lead to dangerous errors in health information.
Previous investigations showed that some AI summaries contained misleading statements, prompting the company to remove the feature from certain medical searches while keeping it in others.
Meanwhile, a Stanford University researcher explained that placing AI summaries at the top of search results gives users a false sense that the summary is sufficient, discouraging them from checking details that may include warnings.
The summary may contain both correct and incorrect information, which can be hard to distinguish without prior knowledge.
A representative of the Anthony Nolan Foundation called for warnings to be displayed immediately and clearly at the top of results, stressing that misleading health information could be extremely dangerous if users treat AI-generated content as verified without consulting a qualified medical professional.
