CARE Framework

Four components that ensure every prompt carries the right context, assigns a clear role, and defines what success looks like. CARE treats expected outcomes as a first-class element — so the AI knows not just what to do, but what “done well” means.

Framework Context: 2023

Introduced: CARE was popularized in 2023 through usability and AI communication communities, with notable contributions from Nielsen Norman Group’s research on effective AI prompting. The acronym stands for Context, Action, Role, and Expectation — four dimensions that mirror how experienced communicators naturally structure requests to human collaborators. A common variation, Context-Ask-Rules-Examples, adapts the same philosophy for complex multi-constraint tasks by substituting explicit rules and demonstrations for the role and expectation components.

Modern LLM Status: CARE remains highly practical and widely recommended as an introductory framework for prompt engineering. Its four-component structure is deliberately lean — easy to memorize and quick to apply — while still covering the critical dimensions that most casual prompts neglect. Whether you use Claude, GPT-4, or Gemini, specifying context, action, role, and expectation consistently produces more focused and higher-quality outputs than leaving any of these dimensions implicit. The Expectation component is especially valuable: it gives the AI a success criterion to aim for rather than guessing at what “good enough” means.

The Core Insight

Tell AI What Success Looks Like

Most prompts describe a task but never define what a successful result looks like. The AI fills the gap with its best guess — producing output that is technically responsive but often misaligned with what you actually needed. CARE closes this gap by making the expected outcome an explicit, non-negotiable part of every prompt.

CARE structures prompts around four pillars. Context sets the stage — the situation, background, and constraints the AI needs to understand. Action defines the specific task or deliverable. Role assigns a persona or expertise level that shapes the AI’s perspective and vocabulary. Expectation declares the success criteria — what the output should achieve, how it should be measured, or what qualities it must demonstrate.

Think of it like briefing a consultant. You would not just say “write a report” — you would explain the business situation, specify the deliverable, clarify who they are writing as, and define what a successful report would accomplish. CARE formalizes that same natural process for AI interactions.

Why Expectations Change Everything

Without an explicit expectation, an AI optimizes for plausibility — producing text that sounds reasonable but may not meet your actual standard. When you define the expectation (“persuade a skeptical executive,” “pass a technical review,” “be accessible to a non-technical audience”), the AI has a concrete target to optimize toward. This single addition often produces more dramatic improvements than adding any other prompt component.

The CARE Process

Four components that transform vague requests into targeted prompts

1. Context — Set the Stage

Provide the background information, situation, and constraints that the AI needs to understand before it can respond appropriately. Context includes the domain, the current state of affairs, relevant history, and any limitations. Without context, the AI defaults to the most generic interpretation of your request.

Example

“Our SaaS startup just closed Series A funding. We have 50 enterprise customers, a 12-person engineering team, and we are preparing to scale from 5,000 to 50,000 users over the next 6 months.”

2. Action — Define the Task

Clearly state what you need the AI to produce or accomplish. The action should be specific and unambiguous — a concrete deliverable rather than a vague direction. Focus on the what, not the how — give the AI room to apply its expertise within the boundaries you have set.

Example

“Create a technical infrastructure scaling plan that identifies the top 5 bottlenecks we will hit during this growth phase and proposes solutions for each.”

3. Role — Assign a Persona

Specify who the AI should be for this task. A well-chosen role shapes the vocabulary, depth, perspective, and problem-solving approach the AI brings to the response. Roles can be professional identities (“senior DevOps engineer”), communication styles (“explain like a patient teacher”), or domain experts (“healthcare compliance specialist”).

Example

“Act as a senior platform engineer who has scaled three SaaS products from startup to enterprise scale. You favor pragmatic, battle-tested solutions over cutting-edge experimentation.”

4. Expectation — Define Success Criteria

Declare what a successful response looks like. Expectations can be qualitative (“convincing enough for a board presentation”), quantitative (“under 500 words”), structural (“table format with cost estimates”), or audience-focused (“understandable by a non-technical CEO”). This is the component that separates adequate output from precisely targeted output.

Example

“The plan should be presentable to our CTO and VP of Engineering. Each bottleneck needs a severity rating, estimated timeline to hit, and a solution with rough cost range. Format as a structured document with an executive summary.”
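The four components just illustrated can be assembled into a single prompt with a small helper. This is a minimal sketch: the function name, section labels, and blank-line separators are illustrative conventions of this sketch, not part of the CARE framework itself.

```python
def build_care_prompt(context: str, action: str, role: str, expectation: str) -> str:
    """Join the four CARE components into one prompt string.

    The "Context:"/"Action:"/"Role:"/"Expectation:" labels and the
    blank-line separators are illustrative, not required by CARE.
    """
    sections = [
        f"Context: {context}",
        f"Action: {action}",
        f"Role: {role}",
        f"Expectation: {expectation}",
    ]
    return "\n\n".join(sections)


# Abbreviated versions of the example components from this section:
prompt = build_care_prompt(
    context="Our SaaS startup just closed Series A funding...",
    action="Create a technical infrastructure scaling plan...",
    role="Act as a senior platform engineer who has scaled three SaaS products...",
    expectation="The plan should be presentable to our CTO and VP of Engineering...",
)
```

Making each component a required named parameter means a prompt with a missing dimension fails loudly before it is ever sent — exactly the failure mode CARE is designed to prevent.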

See the Difference

Why four structured components produce better results than a single instruction

Casual Prompt

Prompt

Help me write an email to my team about our new remote work policy.

Response

Subject: New Remote Work Policy

Hi team, I wanted to let you know about our updated remote work policy. Starting next month, we will be implementing a hybrid work model. Please review the attached policy document for details. Let me know if you have any questions.

Generic, no tone, no context, no clear outcome defined

CARE Prompt

Structured Prompt

C: Our 200-person company is shifting from fully remote to hybrid (3 days in-office). Employee sentiment surveys show 40% resistance. The policy takes effect March 1st.
A: Write an internal email announcing the new hybrid policy to all employees.
R: You are the VP of People & Culture who is known for transparent, empathetic communication.
E: The email should acknowledge employee concerns, explain the reasoning, outline the transition timeline, and end with an invitation to a live Q&A. Tone should be warm but direct. Under 400 words. Always verify policy details with HR before sending.

Response

Subject: Our Path Forward — Hybrid Work Starting March 1st

Team,

I will be straightforward: changing how we work is hard, and I know many of you have built routines around full remote that you value. I hear you, and this decision was not made lightly.

After reviewing collaboration data, team feedback, and project outcomes over the past year, we believe a hybrid model — three days together, two days remote — gives us the best of both worlds: the spontaneous collaboration that drives innovation plus the focused deep-work time that remote provides...

Join me for a live Q&A on February 20th at 2 PM to share your questions and concerns directly.

Note: Please verify all policy details against the official HR document before making scheduling decisions.

Empathetic tone, context-aware, structured, clear next steps

Practice Responsible AI

Always verify AI-generated content before use. AI systems can produce confident but incorrect responses. When using AI professionally, transparent disclosure is both best practice and increasingly a legal requirement.

A growing number of US states now require AI transparency in key areas. Critical thinking remains your strongest tool against misinformation.

CARE in Action

See all four components working together across different scenarios

CARE Prompt

Context: We are a B2B project management tool. New users complete sign-up but 60% never create their first project within the first week. Our onboarding data shows the biggest drop-off happens between account creation and first project setup.

Action: Write a 3-email onboarding sequence sent on Day 1, Day 3, and Day 5 after sign-up, designed to guide users to create their first project.

Role: You are a customer success manager who specializes in product-led growth for SaaS companies. Your communication style is helpful without being pushy.

Expectation: Each email should be under 150 words, include one clear CTA button, and progressively increase urgency. The sequence should feel like a helpful guide, not a sales funnel. Include a note reminding users they can contact support for help. Verify all product feature descriptions are accurate before sending.

AI Output

Email 1 (Day 1): Welcome — Your First Project is 2 Minutes Away

Hi [Name],

Welcome aboard. The fastest way to see what [Product] can do? Create your first project. It takes about 2 minutes, and we have a template ready to go so you do not have to start from scratch...

[Create Your First Project]

Email 3 (Day 5): Still Figuring Things Out?

No pressure — but teams who set up their first project in the first week are 3x more likely to make [Product] a daily habit. Need a hand? Reply to this email or book a 10-minute walkthrough with our team...

Note: Contact support@[product].com anytime for setup assistance. Verify all metrics and feature claims against current product documentation.

CARE Prompt

Context: Our e-commerce platform experienced a 45-minute outage during peak holiday shopping hours. Root cause was a database connection pool exhaustion triggered by an unoptimized query in a newly deployed feature. Revenue impact estimated at $180K.

Action: Write a blameless post-mortem report covering timeline, root cause analysis, impact assessment, and remediation steps.

Role: You are a senior site reliability engineer who writes post-mortems focused on systemic improvements rather than individual blame.

Expectation: The report should follow the format: Executive Summary, Timeline, Root Cause, Impact, Remediation (immediate + long-term), and Lessons Learned. Technical enough for the engineering team but with an executive summary readable by non-technical leadership. Include specific action items with owners and deadlines. Always verify incident details and metrics against actual monitoring data before distributing.

AI Output

Incident Post-Mortem: Database Connection Pool Exhaustion
Date: [Date] | Duration: 45 minutes | Severity: P1

Executive Summary
A newly deployed product recommendation feature contained an unoptimized database query that consumed all available connection pool slots under peak traffic. This caused cascading failures across the checkout and product catalog services for 45 minutes during our highest-traffic shopping period...

Remediation — Long Term
1. Implement mandatory load testing for all database-touching features before production deployment (Owner: Platform Lead, Due: [Date])
2. Add connection pool monitoring alerts at 70% and 90% thresholds...

Note: Verify all timestamps, metrics, and revenue impact figures against monitoring dashboards and financial reports before circulating this document.

CARE Prompt

Context: Our nonprofit runs after-school STEM programs in underserved communities. We serve 800 students across 12 schools in the metro area. We are applying for a $250K federal education grant to expand into 8 additional schools over two years.

Action: Write the executive summary section of the grant proposal (the first thing the review committee reads).

Role: You are a grant writer with 15 years of experience securing federal education funding. You know how to balance data-driven impact claims with compelling human stories.

Expectation: Under 500 words. Open with a compelling hook, include 2–3 key impact metrics from our current program, clearly state the funding request and what it will accomplish, and close with a forward-looking vision. The tone should be confident but not boastful. All statistics must be clearly identified as requiring verification against actual program data before submission.

AI Output

Executive Summary

When Maria, a seventh-grader from Jefferson Middle School, presented her water filtration prototype at our annual showcase last spring, she told the judges: “I did not know engineers could look like me until this program.” Maria is one of 800 students across 12 schools who have discovered that STEM is not a spectator sport...

With $250,000 in federal support over 24 months, [Organization] will expand into 8 additional Title I schools, reaching an estimated 550 new students annually. Our model — which has demonstrated a [X]% increase in science proficiency scores and a [X]% improvement in STEM career interest among participants — is ready to scale...

Note: All statistics marked [X] must be verified against actual program evaluation data. Have the program director review all impact claims before submission.

When to Use CARE

Best for tasks where defining the expected outcome is as important as defining the task itself

Perfect For

Professional Communication

Emails, announcements, and stakeholder updates where the wrong tone or missing context can derail the message entirely.

Outcome-Driven Tasks

When the deliverable has a measurable success criterion — persuade a specific audience, meet a word count, or follow a particular format.

Quick Structured Prompts

When you need structure but do not have time for a six-component framework. CARE’s four elements cover the essentials without overhead.

Beginner-Friendly Prompting

Teaching prompt engineering to newcomers — CARE’s four letters are easy to remember and immediately improve output quality.

Skip It When

Complex Multi-Constraint Tasks

When you need to specify examples, adjustments, tone, audience, and format separately. Frameworks like CO-STAR or CREATE offer more granular control.

Reasoning-Heavy Analysis

Mathematical proofs, logic problems, or code debugging where structured thinking techniques like Chain-of-Thought are more appropriate.

Open-Ended Exploration

Early brainstorming where you want the AI to surprise you. Defining a strict expectation too early can constrain creative exploration.

Use Cases

Where CARE delivers the most value

Business Communication

Draft emails, memos, and announcements where context and expected outcomes are critical for tone and effectiveness across different organizational levels.

Report Writing

Generate structured reports — post-mortems, quarterly reviews, competitive analyses — where the success criteria and audience shape every paragraph.

Educational Content

Create lesson plans, explainers, and study guides where the role defines the teaching approach and the expectation sets the comprehension target.

Stakeholder Presentations

Build slide decks, talking points, and briefing documents where the audience’s technical level and decision-making authority shape the content strategy.

Policy and Compliance

Draft policy documents, compliance summaries, and regulatory responses where context accuracy and expected legal precision are non-negotiable.

Customer Success

Craft onboarding sequences, churn-prevention outreach, and account review summaries where expected engagement outcomes guide every word.

Where CARE Fits

CARE bridges minimal prompting and comprehensive structured frameworks

Zero-Shot (raw instructions): a single request with no structure.
CARE (outcome focused): context, action, role, and success criteria.
CO-STAR (audience centered): a full communication brief with six dimensions.
CREATE (example driven): persona, demonstrations, and iterative refinement.
CARE as a Starting Point

CARE is deliberately lean — four components that cover the essentials without overwhelming new prompt engineers. If you find yourself adding more detail to the Expectation component (audience, format, tone, examples), that is a signal to graduate to a more comprehensive framework like CO-STAR or CREATE. Use CARE when speed and simplicity matter, and upgrade when precision demands it. And always verify the AI’s output against your actual requirements before using it.
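That graduation signal can even be checked mechanically. The sketch below is a rough, entirely illustrative heuristic — the separator pattern and the threshold of four criteria are assumptions made for this example, not part of any framework.

```python
import re


def suggest_framework(expectation: str, max_criteria: int = 4) -> str:
    """Rough heuristic: count comma/semicolon/period-separated criteria
    in the Expectation component. The threshold of 4 is an arbitrary
    illustration, not an established rule."""
    criteria = [c for c in re.split(r"[,.;]", expectation) if c.strip()]
    if len(criteria) > max_criteria:
        return "Consider a fuller framework such as CO-STAR or CREATE."
    return "CARE's four components are likely sufficient."


suggest_framework("Under 400 words.")
# A crowded expectation (tone, audience, format, length, examples, CTA)
# triggers the suggestion to graduate to a larger framework.
suggest_framework("warm tone; exec audience; table format; under 500 words; two examples; one CTA")
```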

Build Your CARE Prompt

Structure your next prompt with all four CARE components or find the right framework for your specific task.