Getting Started

AI Facts & Fictions

AI is surrounded by hype, fear, and misunderstanding in equal measure. This guide cuts through the noise — separating genuine capabilities from marketing myths, science fiction fantasies, and well-intentioned but inaccurate claims about what artificial intelligence actually is and does.

Living Document: 2024–2025 AI Landscape

This is a living reference guide covering the most common misconceptions about artificial intelligence, updated to reflect the 2024–2025 landscape. The AI field moves fast — claims that were accurate two years ago may be outdated today, and today’s truths may shift as the technology evolves. We focus on durable principles rather than transient benchmarks: how large language models actually work (statistical pattern matching, not understanding), what “intelligence” means in this context (not what most people assume), and why critical thinking remains your most important tool when engaging with AI. Each section addresses a widely held belief and explains the nuanced reality behind it.

Why This Matters

Understanding AI Starts with Understanding the Myths

Every technology goes through a phase where public understanding lags behind the reality. With AI, that gap is especially wide — partly because the term “artificial intelligence” itself invites anthropomorphism. When we call software “intelligent,” we instinctively project human qualities onto it: understanding, intention, creativity, even consciousness. These projections lead to both overestimating what AI can do and underestimating the risks of using it carelessly.

Misconceptions have real consequences. People who believe AI “understands” their questions are less likely to verify its answers. Those who believe AI will replace all jobs may make career decisions based on fear rather than evidence. And organizations that believe AI is objective may deploy it in ways that amplify existing biases rather than reduce them.

This guide is not anti-AI. It is pro-clarity. The better you understand what AI actually is — a powerful statistical tool with genuine utility and genuine limitations — the more effectively you can use it, and the more confidently you can spot inflated claims.

The Anthropomorphism Trap

When a chatbot says “I think” or “I believe,” it is generating statistically likely continuations of text — not reporting inner mental states. The conversational interface is designed to feel natural, but that design choice creates a persistent illusion: that there is a someone behind the responses. Recognizing this illusion is the single most important step toward using AI effectively and responsibly.
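The "statistically likely continuation" idea can be made concrete with a toy model. The sketch below is a deliberately tiny bigram predictor (the corpus and code are illustrative, not how production LLMs are built): it produces fluent-looking text purely by counting which words follow which, with no grasp of what any word means.

```python
from collections import Counter, defaultdict

# Toy training corpus for a miniature "language model".
corpus = (
    "the cat sat on the mat . the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Count which word follows which: pure co-occurrence statistics.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely continuation of `word`."""
    return follows[word].most_common(1)[0][0]

# Generate text by repeatedly emitting the likeliest next word.
word, generated = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))  # fluent-looking, but nothing was "understood"
```

Real LLMs replace word counts with billions of learned parameters over subword tokens, but the core operation, predicting likely continuations from patterns in training data, is the same in kind.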

How to Evaluate AI Claims

A practical process for separating hype from reality

1

Identify the Claim

Start by isolating the specific assertion being made. Headlines like “AI passes the bar exam” or “AI is now sentient” compress complex realities into attention-grabbing soundbites. What exactly is being claimed? By whom? In what context? Strip the claim down to its testable core.

Example

Headline: “AI outperforms doctors at diagnosis.” Core claim: An AI system scored higher than physicians on a specific diagnostic benchmark under controlled conditions.

2

Evaluate the Evidence

Check what actually supports the claim. Is there peer-reviewed research? A reproducible benchmark? Or is it a press release from a company with a financial interest in the outcome? Marketing claims about AI capabilities are not the same as independently verified results. Look for the original source, not the headline about the source.

Example

The diagnostic AI was tested on multiple-choice questions from textbooks — a very different task from diagnosing a real patient with incomplete information, comorbidities, and emotional needs.

3

Understand the Nuance

Most AI claims are not simply true or false — the reality lives in the middle. AI can be genuinely useful for specific tasks while being unreliable for others. A model that excels at pattern recognition may fail at reasoning. Performance on a benchmark may not transfer to real-world conditions. The nuance is where the actionable insight lives.

Example

AI diagnostic tools can be valuable for initial screening and flagging anomalies in medical images, but they work best as a second opinion alongside — not a replacement for — trained clinicians.

4

Apply What You Know

Use your calibrated understanding to make better decisions — about which AI tools to adopt, how much to trust their outputs, and where human oversight remains essential. The goal is not skepticism for its own sake, but informed engagement: using AI where it genuinely helps while maintaining the critical thinking that prevents costly mistakes.

Example

You decide to use AI for drafting and brainstorming but always verify factual claims before publishing, because you understand that generation and accuracy are fundamentally different capabilities.

See the Difference

Common myth vs. the nuanced reality

The Myth

Common Belief

“AI understands what I’m saying and thinks about its response before answering. It’s basically a really smart person who has read everything on the internet.”

Resulting Behavior

The user trusts AI responses without verification, accepts fabricated citations as real, and assumes confident-sounding answers are correct. When the AI makes an error, the user is surprised and feels deceived.

Leads to over-trust, unverified outputs, and preventable mistakes
VS

The Reality

Informed Understanding

“AI predicts statistically likely text continuations based on patterns in its training data. It generates plausible language, not verified facts. It has no awareness, no persistent memory between sessions by default, and no understanding of truth.”

Resulting Behavior

The user leverages AI for drafting, brainstorming, and structured tasks while verifying factual claims independently. They recognize confident-sounding outputs may still be wrong and maintain appropriate oversight.

Enables effective use with appropriate trust calibration

Practice Responsible AI

Always verify AI-generated content before use. AI systems can produce confident but incorrect responses. When using AI professionally, transparent disclosure is both best practice and increasingly a legal requirement.

Dozens of US states have enacted or proposed AI transparency requirements in key areas. Critical thinking remains your strongest tool against misinformation.
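Part of the verification habit can even be scripted. The sketch below (the draft text and regexes are illustrative assumptions) pulls every URL and DOI out of an AI-generated draft so each reference can be checked by hand before publishing; it deliberately does not try to judge validity automatically.

```python
import re

# An AI-generated draft containing references that must be verified.
ai_draft = """According to Smith et al. (2019), usage doubled.
See https://example.com/report and doi:10.1000/xyz123 for details."""

# Extract every URL and DOI for manual checking. AI-generated
# references may be fabricated, so none are trusted automatically.
urls = re.findall(r"https?://\S+", ai_draft)
dois = re.findall(r"\bdoi:\S+", ai_draft)

for ref in urls + dois:
    print("VERIFY:", ref)
```

The point of the sketch is the workflow, not the regexes: every extracted reference still requires a human to confirm it exists and says what the draft claims.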

Quick Knowledge Check

How do large language models actually generate their responses?

When a chatbot says “I think” or “I believe,” what is actually happening?

Why is verifying AI output always necessary, even with well-crafted prompts?

By The Numbers

Research Highlights

Key findings from academic studies cited on this page

106: experiments analyzed (Nature Human Behaviour meta-analysis)
60%: question-level accuracy swing from prompt phrasing (Wharton)
8,214: study participants (creativity meta-analysis, Holzner 2025)
100×: tests run per question (Wharton variability study)
Understanding AI

AI Capability Myths

Common misconceptions about what AI can and cannot do

01
Fiction

AI tools are intelligent and think like humans

Many believe that AI systems like ChatGPT possess human-like understanding, reasoning, and comprehension of meaning.

Fact

AI operates on pattern matching, not understanding

Generative AI systems lack human cognition. They operate similarly to predictive text, arranging frequently co-occurring words rather than thinking or evaluating like humans do.

UC Colorado Springs Library; MIT Press Open Mind (2024)
02
Fiction

AI chatbots are search engines that find information

Users often treat ChatGPT and similar tools like Google, expecting them to locate and retrieve accurate information from the internet.

Fact

AI generates text, it doesn’t retrieve facts

“Google is a website finding machine and ChatGPT is a paraphrase machine.” Unlike search engines, which retrieve existing documents, these tools generate responses by recombining word patterns from their training data.

UC Colorado Springs Library Research Guide
03
Fiction

AI will replace all human labor

Headlines proclaim that AI will eliminate most jobs and make human workers obsolete across all industries.

Fact

AI automates tasks, not entire jobs

Professionals remain essential for data quality checks, bias mitigation, and stakeholder presentations—functions requiring judgment. AI handles routine tasks.

University of Minnesota Carlson School
04
Fiction

Better AI models always perform more reliably

As AI models improve on benchmarks, they should naturally become more trustworthy and predictable in real-world use.

Fact

More capable models can be less predictable

“More capable models tend to perform worse in high-stakes situations” because their behavior is misaligned with human expectations. Unlike human experts, their failures don’t follow predictable patterns of expertise.

MIT News (July 2024)
Prompting Research

Prompt Engineering Myths

Research-tested truths about prompting techniques

05
Fiction

Universal prompting techniques work everywhere

Techniques like politeness, specific phrasing, or formatting should produce reliable improvements across all questions and models.

Fact

Prompting effects are inconsistent and context-dependent

Individual questions showed accuracy swings of up to 60 percentage points depending on phrasing, but these differences largely canceled out when averaged across whole datasets.

Wharton Generative AI Labs (2024)
06
Fiction

Chain-of-Thought prompting universally improves results

Asking AI to “think step by step” should always produce better, more accurate answers across all scenarios.

Fact

Chain-of-Thought effectiveness varies by model and task

Accuracy improvements were modest (2.9%–13.5%), while response times increased 20–80%. CoT can help on harder problems while introducing errors on easier ones.

Wharton GenAI Labs — “Decreasing Value of CoT”
07
Fiction

AI models give consistent answers to the same question

Running the same prompt should produce the same or very similar results each time.

Fact

AI produces substantial hidden variability

Running GPT-4o 100 times on the same question revealed large run-to-run variation in answers. Traditional single-run benchmarks likely overestimate reliability.

Wharton Generative AI Labs (2024)
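The study design behind this finding is easy to sketch: ask one question many times and measure how often the modal answer recurs. In the sketch below, `query_model` is a hypothetical stand-in for a real chat-model API call, and its simulated answer distribution is invented for illustration.

```python
import random
from collections import Counter

def query_model(prompt, rng):
    """Hypothetical stand-in for a nondeterministic chat-model call.
    The answer distribution here is invented for illustration."""
    return rng.choice(["Paris", "Paris", "Paris", "Lyon"])

rng = random.Random(0)  # fixed seed so the sketch is reproducible
prompt = "What is the capital of France? Answer with one word."

# Ask the *same* question 100 times, mirroring the repeated-run design.
answers = Counter(query_model(prompt, rng) for _ in range(100))
consistency = max(answers.values()) / 100  # share held by the modal answer

print(dict(answers), f"consistency={consistency:.2f}")
```

A consistency score well below 1.0 is exactly the hidden variability that a single benchmark run would miss.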
Meta-Analysis 2019–2025

AI Productivity Myths

What meta-analyses of 2019–2025 research actually show

08
Fiction

AI reliably boosts productivity for everyone

AI should improve work output across most contexts and user types uniformly.

Fact

Productivity gains are highly variable

A review of 37 software-engineering studies found that code-quality regressions often offset productivity gains. AI delivered 35% gains for novices but almost none for experienced workers.

California Management Review / UC Berkeley (2025)
09
Fiction

Human-AI collaboration always outperforms either alone

Teams combining humans and AI should consistently outperform either working independently.

Fact

Human-AI teams often underperform solo work

A meta-analysis of 106 experiments found that human-AI combinations “perform worse than the better of the two working solo,” except on open-ended creative tasks.

Nature Human Behaviour (Vaccaro et al., 2024)
10
Fiction

AI has surpassed human creativity

Generative AI produces more creative and novel outputs than humans working alone.

Fact

No significant creativity gap exists

Across 8,214 participants, no significant creativity gap emerged between AI-assisted and unassisted work. AI use also caused “dramatic declines in idea diversity,” a homogenization effect that undermines innovation.

UC Berkeley (Holzner et al., 2025)
11
Fiction

Higher automation reduces errors

Automating decisions with AI should decrease both cognitive load and mistake rates.

Fact

High reliability creates dangerous blind spots

A review of 74 studies found that users grow over-trusting of highly reliable AI, producing a “12% increase in commission errors” and slower anomaly detection.

UC Berkeley (Goddard et al.)
Model Behavior

AI Behavior Myths

Understanding how AI actually operates

12
Fiction

AI chatbots reliably find quality academic sources

AI can help locate and cite credible academic research and documentation.

Fact

AI frequently fabricates citations

Chatbots “hallucinate” citations to nonexistent materials. Lawyers have been fined for citing fake case law, and researchers have found AI misrepresenting their own published work.

UC Colorado Springs; Van Dis et al. (2023) Nature
13
Fiction

AI provides objective, unbiased responses

AI systems are neutral arbiters that provide balanced, objective information.

Fact

AI exhibits sycophancy and amplifies user opinions

LLMs tend to affirm user assumptions because training optimization rewards agreeableness, a behavior researchers call “sycophancy.” They match patterns, not meaning.

MIT News (2024); NIH PMC
Official Sources

Government Warnings

Official guidance from the Federal Trade Commission and NIST

Federal Trade Commission

Operation AI Comply

Fake Reviews

AI tools can generate fake reviews, betraying customer trust and violating FTC rules.

False AI Claims

Using AI during development is not the same as offering a product “with AI inside.”

AI Accuracy Issues

AI tools can be inaccurate, biased, and discriminatory by design.

National Institute of Standards and Technology (NIST)

AI Risk Management Framework

Three categories of AI bias to manage:

Systemic Bias

Historical patterns embedded in training data

Computational Bias

Technical choices in model development

Human Bias

Biases introduced through human decisions in AI design

All three types “can occur in the absence of prejudice, partiality, or discriminatory intent.”

When This Guide Helps

Situations where understanding AI myths and realities matters most

Perfect For

Evaluating AI Products

When a vendor claims their product uses “advanced AI” to deliver magical results, this guide helps you ask the right questions and separate genuine capability from marketing.

Building AI Literacy

Whether you are introducing AI concepts to a team, a classroom, or yourself — starting with an accurate mental model prevents costly misconceptions later.

Navigating AI News

AI headlines are designed for clicks, not clarity. Understanding the common myths helps you read past the hype and extract the actual information.

Making Adoption Decisions

Deciding whether and how to integrate AI into your workflow requires understanding what it actually does well versus what it claims to do well.

Skip It When

You Need a Specific Technique

If you already understand AI basics and need a prompting technique, head straight to our framework guides like CRISP, Chain-of-Thought, or Role Prompting.

You Want Technical Implementation

This guide covers conceptual understanding, not hands-on prompting. For practical prompt writing, start with Prompt Basics instead.

You Are an AI Practitioner

If you already work in machine learning or AI research, you likely know these distinctions. This guide is written for people newer to the field.

Where Myth-Busting Matters

Real contexts where understanding AI realities prevents costly mistakes

Academic Research

AI frequently fabricates citations, invents authors, and generates plausible-sounding but nonexistent journal articles. Researchers who understand this verify every reference independently.

Business Strategy

Understanding that AI automates tasks rather than entire jobs helps leaders plan realistic adoption roadmaps instead of chasing promises of fully automated workforces.

Education and Training

Teachers and trainers who understand AI limitations can set appropriate policies for AI use in learning contexts rather than banning or uncritically embracing it.

Consumer Protection

Recognizing that AI can generate convincing fake reviews, deepfakes, and scam content helps consumers maintain healthy skepticism in an era of synthetic media.

Healthcare Decisions

Patients who understand AI limitations are less likely to substitute chatbot medical advice for professional consultation, and better equipped to discuss AI-assisted diagnostics with their doctors.

Public Discourse

Informed citizens can contribute to better AI policy discussions when they understand what AI is and is not, rather than arguing from positions based on science fiction or marketing hype.

The Evolution of AI Understanding

How public awareness of AI capabilities has shifted over time

1950s–1990s: Science Fiction Era. AI imagined as humanlike robots and superintelligent machines.
2000s–2010s: Narrow AI Era. AI recognized as task-specific tools like spam filters and recommendation engines.
2022–2024: Hype Cycle Peak. ChatGPT ignites public interest; myths and inflated expectations proliferate.
2025+: Calibrated Understanding. Informed users separate genuine utility from marketing narratives.
Your Position on This Timeline

Reading this guide places you in the final era — calibrated understanding. You do not need to be an AI expert to use AI effectively. You need to understand what it actually does (statistical pattern matching), what it does not do (think, understand, or know), and how to verify its outputs. That foundation makes every other resource in the Praxis Library more useful, because you will approach each framework with realistic expectations about what it can achieve.

Start Using AI Effectively

Now that you understand what AI can and cannot do, learn the frameworks that help you work with it productively and responsibly.