ReAct Prompting

ReAct prompting interleaves Thought, Action, and Observation steps in a loop: the AI reasons about the current state (Thought), takes an action (searches, calls a tool, or reads data), observes the result, then reasons about the next step. Use it for tasks requiring external information, multi-step planning, or tool use. It is the foundation of most AI agent architectures including LangChain agents.

ReAct (Reasoning + Acting) is a prompting framework that interleaves reasoning traces with action steps. Rather than only thinking or only acting, the model alternates between forming a thought about the current state, deciding on an action, and observing the result before reasoning about the next step. This loop continues until the task is complete. ReAct was introduced in a 2022 Princeton/Google paper and underpins many AI agent frameworks, including LangChain. Understanding it helps you design better multi-step AI workflows, write more effective prompts for agentic tasks, and reason about why AI agents sometimes fail.
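The loop described above can be sketched in code. This is a minimal illustration, not any framework's actual implementation: the `react_loop` function, the regex for the Action line, and the scripted stand-in model are all assumptions made for the example.

```python
import re

def react_loop(llm, tools, task, max_steps=10):
    """Minimal ReAct loop: alternate Thought/Action/Observation
    until the model emits a Final Answer or the step budget runs out."""
    transcript = f"Task: {task}\n"
    for _ in range(max_steps):
        step = llm(transcript)                  # model writes Thought + Action
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        match = re.search(r"Action:\s*(\w+)\((.*)\)", step)
        if match:
            name, arg = match.groups()
            tool = tools.get(name, lambda a: f"unknown tool: {name}")
            transcript += f"Observation: {tool(arg)}\n"  # feed result back in
    return None                                 # no answer within budget

# Scripted stand-in for a real model, so the loop runs end to end:
script = iter([
    "Thought: I need to multiply.\nAction: calculate(6*7)",
    "Thought: I have the product.\nFinal Answer: 42",
])
tools = {"calculate": lambda expr: str(eval(expr))}  # toy evaluator only
answer = react_loop(lambda transcript: next(script), tools, "What is 6 x 7?")
print(answer)  # 42
```

In a real agent, `llm` would call a model API and `tools` would wrap real search or calculation backends; the loop structure stays the same.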

Last updated: May 2026


Ready-to-Use AI Prompts for ReAct Prompting

ReAct Reasoning Trace — Without Tools

Apply the ReAct Thought/Action/Observation pattern to complex reasoning tasks, even without external tools.

Solve the following problem using a Thought/Action/Observation loop. At each step, write:

Thought: [your current reasoning]
Action: [what you will do next — calculate, look up, check, etc.]
Observation: [what you find or conclude]

Continue until you reach a final answer.

Problem: A company started January with 1,200 customers. Each month they gain 8% new customers but lose 3% through churn. What is the customer count at the end of June, rounded to the nearest whole number?
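The arithmetic in this sample problem can be checked directly, assuming the 8% gain and 3% churn both apply to each month's starting count (i.e. a net +5% per month):

```python
# Net monthly change, assuming gains and churn both apply to the
# month's starting count: +8% - 3% = +5%.
customers = 1200.0
for _ in range(6):               # January through June
    customers *= 1.05
print(round(customers))          # 1608
```

A model working through the ReAct trace should converge on the same figure; if it applies churn to the post-growth count instead, the answer will differ slightly, which is itself a useful check on its reasoning.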

ReAct Agent Prompt Template

The classic ReAct system prompt structure for agentic tasks with tool access.

You are a research assistant with access to the following tools:
- search(query): searches the web and returns relevant results
- calculate(expression): evaluates mathematical expressions
- summarise(text): condenses long text to key points

For every task, follow this loop:

Thought: reason about what you know and what you need next
Action: tool_name(input)
Observation: result of the action
... repeat until you have enough information ...
Final Answer: your complete response to the user

Task: [user task here]
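When a program drives this template, the Action line the model emits has to be parsed and routed to the right tool. One way that parsing might look, assuming the three tool names above (the regex and error handling here are illustrative, not a fixed standard):

```python
import re

# Matches an Action line for the three tools named in the template.
ACTION_RE = re.compile(r"^Action:\s*(search|calculate|summarise)\((.*)\)\s*$",
                       re.MULTILINE)

def parse_step(step):
    """Return ('final', answer) or ('action', (tool, arg)); raise otherwise."""
    if "Final Answer:" in step:
        return "final", step.split("Final Answer:", 1)[1].strip()
    m = ACTION_RE.search(step)
    if m is None:
        raise ValueError("no parseable Action line in model output")
    return "action", (m.group(1), m.group(2))

kind, payload = parse_step("Thought: I should look this up.\n"
                           "Action: search(ReAct paper 2022)")
print(kind, payload)  # action ('search', 'ReAct paper 2022')
```

Misparsed action lines are a common agent failure mode, which is why the template spells out the exact `tool_name(input)` format for the model to follow.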

ReAct for Debugging — Step-by-Step Investigation

Apply ReAct structure to systematic debugging to prevent jumping to conclusions.

Debug the following issue using a Thought/Action/Observation structure. Investigate one hypothesis at a time before moving to the next.

Issue: Our API endpoint /api/orders returns a 500 error for approximately 15% of requests. The error rate started 48 hours ago. No code was deployed in that period.

Use this structure:

Thought: [what could cause this?]
Action: [what would you check first?]
Observation: [what you find — I will provide the actual data when you tell me what to check]

Begin your investigation.

How to Use These Prompts

1. Copy the Prompt: Click the "Copy Prompt" button to copy the prompt to your clipboard.
2. Paste in AI Tool: Paste the prompt into ChatGPT, Claude, Gemini, or your preferred AI tool.
3. Customize & Use: Fill in the bracketed sections with your specific information and get results!

Frequently Asked Questions

What is ReAct prompting?

ReAct (Reasoning + Acting) is a prompting framework where an AI alternates between Thought steps (reasoning about the current state) and Action steps (performing an operation like search or tool call), then processes Observations (results of actions) before the next thought. This interleaved loop enables AI to solve multi-step tasks that require both planning and real-time information gathering. It was introduced in a 2022 paper and underpins most AI agent frameworks.

How is ReAct different from chain-of-thought prompting?

Chain-of-thought prompting produces a reasoning trace that leads directly to an answer from existing knowledge. ReAct interleaves reasoning with actions — the AI can fetch external information, use tools, and incorporate new observations into its reasoning at each step. CoT is for reasoning over known information; ReAct is for tasks that require gathering information, using tools, or taking actions in a multi-step loop. ReAct is effectively CoT extended with agency.

When should I use ReAct prompting?

Use ReAct for tasks requiring external information retrieval, multi-step planning with decision points, tool-use workflows, or any task where the AI needs to take intermediate actions and incorporate their results. If your task can be answered from the model's existing knowledge, chain-of-thought is sufficient. ReAct adds value specifically when actions (search, calculate, read, API call) are required as part of the reasoning process.

Is ReAct the same as LangChain agents?

LangChain's ReAct agent implements the ReAct framework with programmatic tool integration. The underlying prompting pattern (Thought/Action/Observation loop) is the same. LangChain handles the plumbing: parsing action outputs, routing them to the right tool, and feeding observations back. Understanding the ReAct prompt structure helps you debug LangChain agents, customise their behaviour, and design better tools for them to use.

What are the limitations of ReAct prompting?

ReAct chains can fail in several ways: the model can get stuck in loops, misparse tool outputs, or compound errors over many steps. Each additional step amplifies any earlier reasoning errors. Token costs increase with chain length. For tasks with many steps (10+), the model's attention to early context degrades. Best practices: keep each action atomic, add explicit stop conditions, validate critical observations before proceeding, and test the chain on representative inputs before deploying.
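Two of these best practices, a hard step budget and an explicit stop on repeated actions, can be enforced outside the model. A sketch, with the guard thresholds and the `(action, final_answer)` step interface chosen arbitrarily for illustration:

```python
def run_guarded(step_fn, max_steps=8):
    """Run a ReAct-style step function with two stop conditions:
    a hard step budget, and a break on exactly repeated actions
    (a common sign the model is stuck in a loop)."""
    seen = set()
    for i in range(max_steps):
        action, final = step_fn(i)      # each step yields (action, final_answer)
        if final is not None:
            return final
        if action in seen:
            raise RuntimeError(f"loop detected: repeated action {action!r}")
        seen.add(action)
    raise RuntimeError("step budget exhausted without a Final Answer")

# A stuck agent that repeats the same search forever:
stuck = lambda i: ("search(order errors)", None)
try:
    run_guarded(stuck)
except RuntimeError as err:
    caught = str(err)
print(caught)  # loop detected: repeated action 'search(order errors)'
```

Catching these failures in code, rather than hoping the model notices it is looping, keeps token costs bounded and makes agent failures visible instead of silent.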