If you turn to Google’s AI summaries for quick answers about symptoms or health concerns, it helps to understand what those responses are built on. The information can feel authoritative at a glance, but the sources feeding those summaries are not always what users expect.
A recent snapshot from December 2025 looked at more than 50,000 German-language health searches and examined where Google’s AI Overviews were pulling their citations from. One platform stood out well above the rest: YouTube. In that dataset, video links appeared more often than traditional medical reference sites, meaning the first explanation you see may be rooted in video content rather than clinical literature.
Google has been steadily expanding the use of AI Overviews for symptom-related searches, positioning them as a fast way to get clarity or reassurance. What is easy to miss is that the short paragraph at the top is only a summary layer. The real substance sits in the citations underneath it, and those links vary widely in quality and medical rigor.
YouTube emerges as the most common source
According to a detailed analysis published by SE Ranking, YouTube accounted for about 4.43% of all citations inside AI Overviews. That works out to more than 20,000 individual links out of a total pool of around 465,000 citations. No other single domain appeared as frequently.
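As a quick back-of-the-envelope check, the reported share and the reported link count are consistent with each other. This is only a sketch using the approximate totals stated above, not figures from the raw dataset:

```python
# Cross-check the reported YouTube citation figures.
total_citations = 465_000   # approximate total citations in the dataset
youtube_share = 0.0443      # YouTube's reported share of all citations

youtube_links = total_citations * youtube_share
print(f"Implied YouTube links: about {youtube_links:,.0f}")
```

Multiplying the two reported numbers lands in the 20,000–21,000 range, matching the "more than 20,000 individual links" figure.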
What makes this notable is not just YouTube’s presence, but the gap between it and established medical publishers. Video links were referenced several times more often than platforms like NetDoktor and appeared more than twice as often as MSD Manuals. Even when evidence-based resources existed for the same topic, the AI summaries still leaned heavily toward video content.
For users, that means a symptom explanation may be shaped by general health videos, creator commentary, or explainer clips rather than peer-reviewed medical material. While some health channels on YouTube are credible and professionally run, the platform as a whole does not apply uniform medical review standards.
AI Overviews do not follow normal search rankings
Another finding from the snapshot highlights how different AI Overviews are from standard Google results. Only about 36% of the links cited by AI Overviews appeared in the top ten organic search results for the same queries. Just over half made it into the top twenty.
This shift changes what gets surfaced. When AI features are set aside and only classic organic rankings are considered, YouTube ranked much lower in this dataset, at around eleventh place. Inside AI Overviews, however, it rose to the top position.
The AI system appears designed to look beyond page-one rankings and pull in material that users might never click on during a normal search session. That broader reach can be useful, but it also means content with weaker editorial controls can gain outsized visibility.
What the reliability breakdown shows
The study also grouped cited sources by reliability signals. Roughly one-third of all citations were classified as more reliable based on factors like editorial oversight and evidence-based standards. Nearly two-thirds came from sources without strong safeguards. Government and academic websites made up only about 1% of the total.
This distribution matters because AI summaries often read with confidence, regardless of the underlying source quality. Without checking the links, it is difficult to tell whether the explanation is anchored in clinical guidance or general information.
A more careful way to read AI health summaries
Google’s own guidance has emphasized that AI Overviews are not medical advice, yet their placement and tone can suggest otherwise. When using them for health-related questions, it helps to treat the summary as a starting point rather than an answer.
Clicking through to the cited sources is essential. Look for pages that clearly state who reviewed the content, whether medical professionals were involved, and how often information is updated. Trusted hospital systems, government health agencies, and well-known medical publishers typically provide that transparency.
Google has confirmed it is using AI to respond to a growing share of health and symptom searches, as outlined in its broader rollout of AI-powered results. As these features expand across regions and languages, the mix of sources may continue to evolve, but the underlying dynamic remains the same: the summary is only as strong as the links behind it.
This particular snapshot focused on one language and region, and patterns may differ elsewhere. Still, the findings offer a clear reminder that when it comes to health information, where an AI answer comes from matters just as much as what it says.