Hallucination Spotter

Learn to identify when AI makes things up. Build critical thinking skills for verifying AI outputs.


Why This Matters

AI can sound confident while being completely wrong. It doesn't "know" truth—it predicts likely text. This practice tool helps you build intuition for catching errors before they cause real problems.

Practice: Spot the Hallucination

Read each AI response carefully. Does it contain fabricated information?


Real-World Hallucination Examples

These examples show common patterns to watch for in AI outputs.

📚

Fabricated Citations

What AI Said: "According to a 2024 study by Dr. Sarah Mitchell at Harvard published in the Journal of Cognitive Science, users who practice prompt refinement show 47% improvement..."

Reality: No such person, study, or journal article exists. AI generated a convincing-sounding citation from patterns in its training data.

Detection Tip: Always verify specific citations, author names, and statistics. If you can't find the source, assume it's fabricated.
🏛️

Historical Fabrication

What AI Said: "In 1847, inventor Thomas Edison demonstrated his first successful telephone prototype at the Philadelphia Science Exhibition."

Reality: Edison didn't invent the telephone (Alexander Graham Bell did), the dates are wrong, and the exhibition is fictional. Plausible details combined incorrectly.

Detection Tip: Cross-reference historical claims with authoritative sources, especially for specific dates, names, and events.
⚕️

Medical Misinformation

What AI Said: "Clinical trials show that taking vitamin D supplements reduces COVID-19 severity by 62% in patients over 50."

Reality: Vitamin D research does exist, but this specific trial and statistic are fabricated. AI combined general knowledge into a specific false claim.

Detection Tip: Medical claims require verification with healthcare providers. Never act on AI health advice without professional consultation.

Understanding Hallucinations

🧠

What is a Hallucination?

When AI generates information that sounds plausible but is factually incorrect, made up, or unsupported. This happens because AI predicts likely text patterns, not truth.

📊

Fabricated Statistics

AI often invents specific percentages, dates, and numbers. The more precise the statistic, the more likely it is to need verification.

📖

Fake Citations

Non-existent papers, authors, journals, and studies are common. AI can generate convincing academic-sounding references that don't exist.

🔀

Mixed-Up Facts

AI often combines real information incorrectly—attributing achievements to wrong people, mixing up dates, or merging separate events.

💬

Confident Uncertainty

AI states things it doesn't know with the same confidence as things it does, and it rarely signals when it's uncertain or guessing.

🕐

Outdated Information

AI's training has a cutoff date. Claims about recent events, current prices, or living people may be incorrect or outdated.

The VERIFY Technique

A systematic approach to checking AI outputs for accuracy.

🔍

V - Validate Sources

If AI cites a study, paper, or statistic, look it up directly. AI often invents convincing-sounding citations that don't exist.

📋

E - Examine Specifics

Be extra skeptical of precise numbers, dates, and percentages. The more specific the claim, the more likely it is to need verification.

🔗

R - Reference Multiple Sources

Compare AI's claims with multiple reliable sources. If you can't find corroboration, be skeptical of the information.

I - Interrogate Reasoning

Ask the AI to explain its reasoning or show its work. Hallucinations often fall apart under follow-up questions.

⚠️

F - Flag High-Stakes Claims

For legal, medical, or financial information, always verify with qualified professionals before taking action.

🧪

Y - Yield to Experts

Domain experts can quickly spot errors that non-experts miss. For specialized topics, defer to people who know the field.

Red Flags to Watch For

Overly Specific Details

Suspiciously precise statistics (e.g., "87.3% of users" or "founded in 1847") often signal fabrication. Real data is rarely this convenient.

Claims That Seem Too Perfect

If the information perfectly supports the point being made, verify it. AI tends to generate content that sounds right rather than content that is right.

Recent Events or News

AI's knowledge has a cutoff date. Any claims about current events, recent research, or living people's current status may be outdated or invented.

Pro Tip: The "Search Test"

If you can't find an AI-provided fact, quote, or citation with a quick search, assume it's fabricated until proven otherwise. Real information is usually findable.
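If you're comfortable with a little code, part of the search test can be automated. The sketch below is a minimal example assuming Python 3, the requests library, and the public Crossref REST API (an index of scholarly articles); the citation string is the hypothetical one from the fabricated-citation example above. It lists the closest bibliographic matches so you can judge whether the cited study exists at all.

```python
import requests

def search_crossref(citation: str, rows: int = 5):
    """Ask Crossref for published works that roughly match the cited text."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

if __name__ == "__main__":
    # Hypothetical citation, taken from the fabricated-citation example above
    claimed = "Mitchell 2024 prompt refinement Journal of Cognitive Science"
    for item in search_crossref(claimed):
        title = (item.get("title") or ["(untitled)"])[0]
        year = item.get("issued", {}).get("date-parts", [[None]])[0][0]
        print(f"{year}: {title}")
    # If nothing printed here resembles the cited study, treat the citation
    # as unverified and ask for a real source.
```

A human check is still the last step: the query will usually return something, so the question is whether any result actually matches the author, year, and title the AI gave you.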