
ReAct Prompting: Teaching AI to Think AND Act

Learn the ReAct framework that combines reasoning with actions. Essential for building AI agents that can actually get things done.

Boost Prompt Team

I built a customer support bot last year that could answer questions brilliantly. But when someone asked "Can you check my order status?" it would just say "I can't access that information."

Useless.

The bot could reason about problems but couldn't take actions to solve them.

That's the gap ReAct prompting fills.

ReAct stands for Reasoning + Acting. It's a framework where AI doesn't just think about what to do—it actually does it, then thinks about the result, then acts again.

It's the difference between a chatbot that talks about helping and an AI agent that actually helps.

What Is ReAct Prompting?

Traditional prompting is all thinking, no doing:

User: "What's my order status?"
AI: "I would need to check the database to answer that."

That's where it stops.

ReAct creates a loop:

User: "What's my order status?"

AI THINKS: I need the order ID and access to the order database
AI ACTS: Call get_order_status(order_id=12345)
AI OBSERVES: Order shipped yesterday, arrives tomorrow
AI THINKS: I have the info needed to answer
AI RESPONDS: "Your order shipped yesterday and will arrive tomorrow."

The AI reasons about what action to take, takes it, observes the result, and continues until it solves the problem.

This is how you build AI that actually gets stuff done instead of just talking about it.

The Core ReAct Loop

Every ReAct interaction follows this pattern:

1. THINK: What do I need to do next?
2. ACT: Take that action
3. OBSERVE: What was the result?
4. THINK: Based on that result, what's next?
5. Repeat until the problem is solved
6. RESPOND: Give the final answer

The magic is in the loop. The AI can take multiple actions, learning from each result.
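
In code, the loop is just that: call the model, run whatever action it names, feed the result back, and repeat. Here's a minimal sketch in Python. It isn't any specific library's API; call_llm, parse_action, run_action, and SYSTEM_PROMPT are placeholders for your own LLM call, an "ACT:" line parser, an action dispatcher, and the prompt template covered in the next section.

def react_loop(user_request: str, max_steps: int = 5) -> str:
    transcript = SYSTEM_PROMPT + f"\n\nUser request: {user_request}\n"
    for step in range(max_steps):
        # THINK + ACT: the model reasons, then names its next action
        reply = call_llm(transcript)
        transcript += reply + "\n"
        if "RESPOND:" in reply:
            return reply.split("RESPOND:", 1)[1].strip()
        # OBSERVE: execute the named action and feed the result back in
        parsed = parse_action(reply)
        if parsed is None:
            transcript += "OBSERVE: No valid action found. Use ACT: name(args).\n"
            continue
        name, args = parsed
        transcript += f"OBSERVE: {run_action(name, *args)}\n"
    return "Step limit reached. Here's what I found so far:\n" + transcript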

Here's a real example I use for a research assistant:

User: "Find me the latest pricing for Notion's enterprise plan"

THINK: I need to search for Notion's enterprise pricing
ACT: web_search("Notion enterprise pricing 2025")
OBSERVE: Found Notion's pricing page URL

THINK: I should fetch the actual page content
ACT: fetch_url("notion.so/pricing")
OBSERVE: Page loaded, but pricing says "Contact sales" for enterprise

THINK: Need to find if there's any public information on enterprise pricing ranges
ACT: web_search("Notion enterprise plan cost per user")
OBSERVE: Found 3 articles with pricing estimates

THINK: I have enough information to provide a useful answer
RESPOND: "Notion doesn't publicly list enterprise pricing—it's custom based on company size. Based on recent reports, enterprise plans typically range from $15-25 per user/month for companies over 100 employees, but you'll need to contact their sales team for exact pricing."

Without ReAct, the AI would stop at "I don't have access to current pricing information."

With ReAct, it searches, fetches, searches again, and synthesizes a useful answer.

The ReAct Prompt Template

Here's the basic structure I use:

You are an AI assistant that can think and act.

Available actions:
- web_search(query): Search the web
- fetch_url(url): Get content from a URL
- calculate(expression): Do math
- [list your available tools]

For each user request, use this process:

THINK: [Reason about what you need to do]
ACT: [Choose an action and specify exactly how to use it]
OBSERVE: [I'll tell you the result]
THINK: [Reason about the result and decide next step]
... repeat as needed ...
RESPOND: [Final answer to the user]

Always show your thinking. Be explicit about which action you're taking and why.

User request: [THE ACTUAL QUESTION]

The key is making the thinking visible and structured.
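
If you're wiring the template up in code, it helps to generate the action list from the same registry you dispatch on, so the prompt never drifts out of sync with the tools you actually expose. A small sketch (the registry is hypothetical; the action names are just the examples above):

# Hypothetical registry: action name -> one-line description shown to the model
ACTION_DESCRIPTIONS = {
    "web_search": "web_search(query): Search the web",
    "fetch_url": "fetch_url(url): Get content from a URL",
    "calculate": "calculate(expression): Do math",
}

def build_system_prompt() -> str:
    actions = "\n".join(f"- {desc}" for desc in ACTION_DESCRIPTIONS.values())
    return (
        "You are an AI assistant that can think and act.\n\n"
        f"Available actions:\n{actions}\n\n"
        "For each request: THINK, then ACT, wait for OBSERVE, repeat as needed, "
        "and finish with RESPOND.\n"
        "Always use the exact format: ACT: action_name(parameters)"
    )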

Real-World Example: Data Analysis Assistant

Here's how I built a data analyst agent:

User: "Our conversion rate dropped 15% this week. Figure out why."

THINK: I need to see the data first
ACT: query_database("SELECT date, conversion_rate, traffic_source FROM analytics WHERE date >= '2025-01-15'")
OBSERVE: Data shows overall traffic same, but conversion rate dropped from 8% to 6.8%

THINK: Need to break this down by traffic source to find the problem
ACT: query_database("SELECT traffic_source, AVG(conversion_rate) FROM analytics WHERE date >= '2025-01-15' GROUP BY traffic_source")
OBSERVE: Google Ads: 4.2% (down from 7%), Organic: 9.1% (same), Email: 12% (up)

THINK: Google Ads conversion dropped significantly. Need to check what changed
ACT: query_database("SELECT date, campaign_name, conversion_rate FROM ad_campaigns WHERE date >= '2025-01-15'")
OBSERVE: Campaign "Product Launch" started Jan 15, getting 60% of ad traffic but only 2.5% conversion

THINK: Found it. The new campaign has poor conversion
RESPOND: "The drop is from your new 'Product Launch' campaign which started Jan 15. It's driving 60% of ad traffic but only converting at 2.5% vs 7% for your other campaigns. Either pause it and optimize the landing page, or shift budget back to your better-performing campaigns."

A simple chatbot would have said "Check your analytics."

ReAct actually dug into the data, found the root cause, and recommended specific actions.

Building Blocks: What Actions to Include

The actions you give your AI define what it can do.

For Research Assistants:

  • web_search(query) - Search the internet
  • fetch_url(url) - Get webpage content
  • extract_data(content, schema) - Pull structured data
  • summarize(text) - Condense information

For Data Analysis:

  • query_database(sql) - Run database queries
  • calculate(expression) - Do math
  • create_chart(data, type) - Visualize data
  • export_to_csv(data) - Save results

For Customer Support:

  • lookup_order(order_id) - Check order status
  • update_ticket(ticket_id, status) - Change ticket status
  • send_email(to, subject, body) - Email customer
  • escalate_to_human(reason) - Pass to human agent

For Development:

  • run_code(code, language) - Execute code
  • read_file(path) - Access files
  • write_file(path, content) - Create/modify files
  • run_tests() - Check if code works

The more useful actions you provide, the more problems your AI can actually solve.
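
Whatever set you choose, each action name the model can emit has to map to real code your loop can call. Here's one way to register them, sketched in Python with illustrative stubs (the web_search body is a placeholder, not a real search API):

import math

def calculate(expression: str) -> str:
    # Fine for a demo; use a proper expression parser in production
    return str(eval(expression, {"__builtins__": {}}, vars(math)))

def web_search(query: str) -> str:
    return f"[stub] top results for: {query}"  # swap in a real search API here

ACTIONS = {
    "calculate": calculate,
    "web_search": web_search,
}

def run_action(name: str, *args) -> str:
    if name not in ACTIONS:
        return f"Unknown action: {name}"
    return ACTIONS[name](*args)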

Making It Work in Practice

The template is simple. Making it work reliably is harder.

Challenge 1: AI doesn't always format actions correctly

Early on, my AI would say "I should search for that" instead of "ACT: web_search('that topic')".

Fix: Be very explicit in your system prompt about the exact format. Show examples.

Example of correct action format:
ACT: web_search("OpenAI pricing 2025")

NOT this:
- "Let me search for OpenAI pricing"
- "I'll do a web search"

Always use: ACT: action_name(parameters)
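
A strict parser on your side reinforces the prompt: accept only the exact ACT format and reject free-form text. A minimal sketch (it assumes simple comma-separated arguments, which covers the actions above but would break on nested calls):

import re

# Only lines matching "ACT: name(args)" count as actions
ACT_PATTERN = re.compile(r"^ACT:\s*(\w+)\((.*)\)\s*$", re.MULTILINE)

def parse_action(reply: str):
    match = ACT_PATTERN.search(reply)
    if match is None:
        return None  # no well-formed ACT line; ask the model to restate it
    name, raw_args = match.groups()
    args = [a.strip().strip("\"'") for a in raw_args.split(",") if a.strip()]
    return name, args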

Challenge 2: Infinite loops

The AI keeps searching and searching, never deciding it has enough info.

Fix: Add a step limit and force decision-making.

Maximum steps: 5 actions

After each action, you must decide:
- Do I have enough information to answer?
- OR do I need one more specific action?

If unsure, respond with current information and note what's missing.
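
It also helps to enforce the cap in code, not just in the prompt. A tiny sketch, reusing step, max_steps, and transcript from the loop sketched earlier:

# Inside the loop: on the last allowed step, push the model to answer with
# what it has instead of taking yet another action
if step == max_steps - 1:
    transcript += (
        "\nThis is your last step. If anything is still missing, say so, "
        "then RESPOND with your best answer from the information you have.\n"
    )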

Challenge 3: Taking wrong actions

The AI calls delete_database() when it should call query_database().

Fix: Be very clear about what each action does and when to use it.

Available actions:

query_database(sql): READ data from database. Use for: checking status, getting info, analyzing data. Safe to use freely.

update_database(sql): MODIFY data in database. Use only when user explicitly asks to change something. Confirm before using.

delete_database(table): DANGEROUS. Almost never use this. Requires explicit user confirmation.
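
Descriptions help, but a code-level guardrail is safer: classify actions by risk and refuse to run destructive ones without explicit confirmation. A small sketch (the action names mirror the examples above):

SAFE_ACTIONS = {"query_database", "web_search", "calculate"}
NEEDS_CONFIRMATION = {"update_database", "delete_database"}

def allowed(name: str, user_confirmed: bool = False) -> bool:
    if name in SAFE_ACTIONS:
        return True
    if name in NEEDS_CONFIRMATION:
        return user_confirmed  # only run after the user explicitly says yes
    return False  # unknown or unlisted actions never run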

Advanced: Multi-Step Planning

For complex tasks, add a planning phase:

PLAN: [Break down the problem into steps]
STEP 1:
  THINK: [What to do]
  ACT: [Action]
  OBSERVE: [Result]
STEP 2:
  THINK: [What to do]
  ACT: [Action]
  OBSERVE: [Result]
...
RESPOND: [Final answer]

Example:

User: "Compare our pricing to our top 3 competitors"

PLAN:
1. Identify our top 3 competitors
2. Find each competitor's pricing
3. Get our current pricing
4. Create comparison
5. Provide insights

STEP 1:
THINK: Need to identify top competitors
ACT: web_search("top competitors to [our product]")
OBSERVE: Found: Competitor A, B, C

STEP 2:
THINK: Get pricing for Competitor A
ACT: fetch_url("competitorA.com/pricing")
OBSERVE: Their pricing is $29/month

[Continue through steps...]

RESPOND: [Full comparison with insights]

Planning first prevents the AI from wandering aimlessly.
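
In code, the plan can come from a separate model call, with the ReAct loop run once per step. A rough sketch, reusing the hypothetical call_llm and react_loop helpers from earlier:

def plan_and_execute(user_request: str) -> str:
    plan = call_llm(f"Break this request into 3-5 numbered steps:\n{user_request}")
    step_results = []
    for step in plan.splitlines():
        if step.strip():
            step_results.append(react_loop(f"{user_request}\nCurrent step: {step}"))
    return call_llm(
        "Combine these step results into one final answer:\n" + "\n".join(step_results)
    )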

Combining ReAct with Other Techniques

ReAct + Chain-of-Thought:

Use chain-of-thought in the THINK steps for better reasoning.

THINK: Let me think through what action to take:
  (1) User wants order status
  (2) I need order ID - they provided #12345
  (3) I have access to lookup_order() function
  (4) This is the right action to take
ACT: lookup_order(12345)

ReAct + Few-Shot:

Show examples of good ReAct loops:

Example 1:
User: "What's 15% of $240?"
THINK: This is a math calculation
ACT: calculate(240 * 0.15)
OBSERVE: Result is 36
RESPOND: "15% of $240 is $36"

Now you try:
User: [Their question]

For more on these techniques, check our guides on chain-of-thought and few-shot prompting.

Real Implementation: Customer Support Bot

Here's my actual ReAct prompt for a support bot:

You are a helpful customer support agent with access to our systems.

Available actions:
- lookup_order(order_id): Get order details
- check_shipping(tracking_number): Get shipping status
- lookup_customer(email): Get customer account info
- create_ticket(issue, priority): Create support ticket
- send_email(to, subject, body): Email customer
- escalate(): Pass to human agent

Process:
1. THINK: Understand what the customer needs
2. ACT: Take the appropriate action
3. OBSERVE: See the result
4. Repeat until you can solve their problem or need to escalate
5. RESPOND: Give customer a helpful answer

Rules:
- Be helpful and empathetic
- If you can't solve it in 3 actions, escalate to human
- Always confirm before canceling orders or issuing refunds
- Show your thinking but don't overwhelm customer with details

Customer message: {message}

This bot can actually solve problems instead of just apologizing and escalating everything.

Common Use Cases

Research Agent: Searches multiple sources, synthesizes information, provides comprehensive answers with sources.

Data Analyst: Queries databases, calculates metrics, finds patterns, creates visualizations.

Code Assistant: Reads files, runs code, checks tests, fixes errors, updates documentation.

Shopping Assistant: Searches products, compares prices, checks availability, recommends options.

Task Automation: Breaks down complex tasks, executes each step, handles errors, completes workflows.

Tools That Support ReAct

LangChain: Built-in ReAct agent

from langchain.agents import AgentType, initialize_agent, Tool

# search, calculate, and llm are your own search function, math function,
# and chat model instance (e.g. from langchain_openai)
tools = [
    Tool(name="Search", func=search, description="Search the web"),
    Tool(name="Calculate", func=calculate, description="Do math")
]

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,  # the classic ReAct agent
    verbose=True,
)

OpenAI Function Calling: Native support for tool use with structured outputs.
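
For example, with the OpenAI Python SDK you declare each tool as a JSON schema and the model returns structured tool calls (the model name and the web_search tool here are illustrative):

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tools = [{
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web for current information",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model you have access to
    messages=[{"role": "user", "content": "Find Notion's enterprise pricing"}],
    tools=tools,
)

print(response.choices[0].message.tool_calls)  # the actions the model chose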

Claude Tool Use: Clean API for defining tools the model can call.

AutoGPT / BabyAGI: Full frameworks built on ReAct principles.

Limitations and Gotchas

Cost: ReAct uses way more tokens than simple prompting. Each think-act-observe cycle adds up.

Latency: Multiple actions mean multiple API calls. Can be slow.

Reliability: AI might take wrong actions or get stuck in loops. Needs good error handling.

Security: If AI can take actions, it can take wrong actions. Validate everything.

For security considerations, read our guide on prompt injection and security.

When to Use ReAct

Use ReAct when:

  • AI needs to interact with external systems
  • Multiple steps are required to solve the problem
  • You're building autonomous agents
  • Actions depend on previous results

Don't use ReAct when:

  • Simple question-answer
  • No actions available
  • Latency is critical
  • You just need text generation

For simpler needs, check our guide on types of prompts.

Getting Started

Start small:

  1. Pick ONE simple action (like web search)
  2. Implement the basic ReAct loop
  3. Test with clear tasks
  4. Add more actions gradually
  5. Improve error handling as you find issues

My first ReAct agent could only search Google. Then I added URL fetching. Then database queries. Now it can do 15+ actions.

Build incrementally.

The Future of AI Agents

ReAct is the foundation of autonomous AI agents.

Right now, most AI tools are glorified chatbots. You ask, they answer.

With ReAct, you can build AI that:

  • Researches topics autonomously
  • Manages workflows end-to-end
  • Debugs and fixes its own errors
  • Completes complex multi-step tasks

We're still early. But this is where AI is heading.

The companies building useful AI products in 2025 aren't just using better language models. They're using better prompting frameworks like ReAct.


ReAct builds on foundational techniques like chain-of-thought prompting and few-shot learning.

See how it fits into the complete prompting landscape in our types of prompts guide.

For building ReAct into production workflows, check our guide on AI workflows for productivity.

And for the best tools to implement ReAct agents, see our roundup of prompt engineering tools for 2025.
