Few-Shot Prompting

Few-shot prompting provides 2-5 input/output examples before your actual request, teaching the AI the pattern you want by demonstration rather than description. Format: show the pattern with "Input: X → Output: Y" examples, then present your real input. Works exceptionally well for tone matching, data formatting, classification, and extraction tasks. Use 3 examples for most tasks — adding more rarely improves results and wastes tokens.

Few-shot prompting gives an AI model 2-5 examples of the exact input/output pattern you want before asking your real question. Instead of describing what you need in words, you demonstrate it — which is almost always more effective than trying to explain it. The technique is called "few-shot" because you're providing a small number of training examples at inference time, without fine-tuning the model. It's one of the most powerful and underused prompting techniques available. Once you understand the structure, you'll use it constantly — for tone matching, data extraction, classification, formatting, and any task where you can show a pattern more easily than you can describe it.
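The pattern is easy to build programmatically. Below is a minimal sketch of assembling a few-shot prompt from demonstration pairs; the function name, the "Input:/Output:" labels, and the city-to-country task are illustrative choices, not a required format.

```python
def build_few_shot_prompt(instruction, examples, query):
    """Assemble an instruction, a few demonstration pairs, and the real input."""
    lines = [instruction, ""]
    for example_input, example_output in examples:
        lines.append(f"Input: {example_input}")
        lines.append(f"Output: {example_output}")
        lines.append("")
    # End with the real input and a trailing "Output:" so the model completes it.
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Convert each city to its country.",
    [("Dublin", "Ireland"), ("Lyon", "France"), ("Porto", "Portugal")],
    "Galway",
)
print(prompt)
```

The trailing "Output:" matters: it tells the model to continue the pattern rather than comment on it.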

Last updated: May 2026


Ready-to-Use AI Prompts for Few-Shot Prompting

Tone Matching — Rewrite in Brand Voice

Show 3 before/after examples to teach the AI your exact writing style.

Rewrite the following marketing copy in our brand voice. Here are three examples of our style:

Original: "Our platform offers comprehensive analytics capabilities for enterprise teams."
Ours: "See exactly what's working. Real-time dashboards your whole team can actually use."

Original: "We provide seamless integration with existing workflows."
Ours: "Plugs into the tools you already use. No migration. No training. Just go."

Original: "Our customer success team is available to assist with onboarding."
Ours: "A real person helps you get set up. Then stays available when you need them."

Now rewrite this in the same voice: "Our solution enables organisations to achieve operational efficiency through data-driven decision-making processes."

Data Extraction — Consistent Formatting

Use few-shot examples to extract structured data from messy text.

Extract company information from these descriptions and output as JSON. Follow the exact format shown in these examples:

Input: "Acme Corp was founded in Dublin in 2018. They sell B2B SaaS for HR teams and have raised €4.2M."
Output: {"name": "Acme Corp", "founded": 2018, "location": "Dublin", "category": "HR SaaS", "funding": "€4.2M"}

Input: "DataFlow is a Cork-based analytics startup. Founded 2021, raised €1.8M seed, focuses on retail data."
Output: {"name": "DataFlow", "founded": 2021, "location": "Cork", "category": "Retail Analytics", "funding": "€1.8M"}

Now extract from this:

Input: "Nimbus Health launched in Galway in 2020. They are a medtech company helping GPs with patient scheduling. Raised €3M in 2023."
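If you run extraction prompts like this in a pipeline, it is worth validating the model's reply against the schema your examples demonstrate. A small sketch, assuming the reply is plain JSON; the `reply` string below is a hypothetical model answer for the Nimbus Health input, not real output, and `REQUIRED_KEYS` simply mirrors the fields shown in the few-shot examples.

```python
import json

# The fields demonstrated in the few-shot examples above.
REQUIRED_KEYS = {"name", "founded", "location", "category", "funding"}

def parse_company(model_output):
    """Parse the model's JSON reply and check it matches the demonstrated schema."""
    record = json.loads(model_output)  # raises json.JSONDecodeError on bad JSON
    missing = REQUIRED_KEYS - record.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return record

# Hypothetical model reply for illustration only.
reply = '{"name": "Nimbus Health", "founded": 2020, "location": "Galway", "category": "Medtech Scheduling", "funding": "€3M"}'
company = parse_company(reply)
print(company["name"])
```

Failing loudly on a missing field catches the most common few-shot drift: the model silently dropping or renaming a key you demonstrated.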

Classification — Customer Feedback Tagging

Few-shot classification is faster and more consistent than writing classification rules.

Classify customer feedback as one of: BUG, FEATURE_REQUEST, PRAISE, BILLING, or OTHER. Use these examples:

Feedback: "The export button does nothing when I click it on Firefox."
Category: BUG

Feedback: "Would love if I could schedule reports to email automatically."
Category: FEATURE_REQUEST

Feedback: "Your support team is incredible, fixed my issue in 10 minutes."
Category: PRAISE

Feedback: "I was charged twice for my subscription this month."
Category: BILLING

Now classify these:

1. "The dashboard takes 30 seconds to load every time I open it."
2. "Can you add a dark mode? My eyes hurt after long sessions."
3. "Sarah on your team went above and beyond helping me migrate my data."
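When classifying at scale, models occasionally reply in lowercase or with extra words, so it helps to normalise replies to the closed label set. A small illustrative sketch (the function name and fallback-to-OTHER policy are assumptions, not part of the prompt above):

```python
# The closed label set from the classification prompt above.
VALID_LABELS = {"BUG", "FEATURE_REQUEST", "PRAISE", "BILLING", "OTHER"}

def normalise_label(model_reply):
    """Map a raw model reply to one of the allowed categories, else OTHER."""
    label = model_reply.strip().upper()
    return label if label in VALID_LABELS else "OTHER"

print(normalise_label("bug"))       # BUG
print(normalise_label("Not sure"))  # OTHER
```

Falling back to OTHER keeps one malformed reply from breaking a batch job; you can log those cases for review instead.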

How to Use These Prompts

1. Copy the prompt
Copy the full prompt text to your clipboard.

2. Paste it into your AI tool
Paste the prompt into ChatGPT, Claude, Gemini, or your preferred AI tool.

3. Customize and run
Replace the final input at the end of each prompt (the text after "Now rewrite", "Now extract", or "Now classify") with your own content, then run the prompt.

Frequently Asked Questions

What is few-shot prompting?

Few-shot prompting provides an AI model with 2-5 examples of input/output pairs before your actual request. The model learns the desired pattern — tone, format, classification logic, extraction structure — from the examples rather than from a verbal description. It's one of the highest-impact prompting techniques for tasks where you can demonstrate the pattern more easily than you can describe it.

How many examples should I provide in a few-shot prompt?

Three examples is the sweet spot for most tasks. One example (one-shot) often works for simple patterns but can be brittle. Two to three examples covers the pattern reliably. More than five rarely improves results and increases token usage. The exception is classification tasks with many categories — provide at least one example per category. For chain-of-thought few-shot prompting, even one worked example showing the reasoning process is highly effective.
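For classification prompts specifically, the one-example-per-category rule is easy to check mechanically. A quick sanity-check sketch, assuming you store your demonstrations as (text, label) pairs; the helper name is illustrative.

```python
def missing_categories(examples, categories):
    """Return the categories that have no demonstration example yet."""
    covered = {label for _, label in examples}
    return sorted(categories - covered)

examples = [
    ("The export button does nothing.", "BUG"),
    ("Please add dark mode.", "FEATURE_REQUEST"),
]
print(missing_categories(examples, {"BUG", "FEATURE_REQUEST", "PRAISE"}))
# ['PRAISE']
```

A category with no demonstration is exactly where few-shot classifiers tend to misfire, so this check is worth running before you ship the prompt.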

What is the difference between few-shot and zero-shot prompting?

Zero-shot prompting describes what you want in words, without examples. Few-shot prompting demonstrates what you want with 2-5 examples. Zero-shot is faster to write and works well for straightforward tasks. Few-shot is more work upfront but produces more consistent and accurate results for tasks involving specific formats, styles, or classification logic. Use zero-shot first; switch to few-shot when zero-shot produces inconsistent results.

Does few-shot prompting work better than fine-tuning?

For most business use cases, few-shot prompting achieves 70-90% of fine-tuning performance at near-zero cost. Fine-tuning bakes the pattern into the model weights permanently and is faster at inference, but requires hundreds or thousands of examples and ongoing maintenance. Few-shot prompting is the practical choice for most teams — start here. Only invest in fine-tuning if you have a clear quality gap, a high-volume use case, and quality training data.

What tasks benefit most from few-shot prompting?

Highest-benefit tasks: format conversion (converting one data structure to another), tone matching (rewriting in a specific brand voice), classification (tagging content into predefined categories), structured data extraction (pulling fields from unstructured text), and code translation (converting code between languages or patterns). Tasks that benefit less: open-ended generation, research, and tasks where the desired output is fully described by a verbal instruction.