Agent Washing: The One Question That Exposes Fake AI Agents

Does it act on its own initiative or only when you push a button? How to tell real autonomous agents from workflows in disguise.

By Sourav Mishra (@souravvmishra) · 5 min read

"We use AI agents" is the new "we use AI." Half the time it's a triggered workflow with an LLM step. I call that agent washing. In this post I give you the one question that exposes it, show how to tell real agents from workflows, and explain why the distinction matters for design and security, plus where to find a concrete pattern for building a real agent.

The Test: Initiative vs Trigger

The test: Does it act on its own initiative, or only when you hit Run? Real agents loop: observe → decide → act → reflect. They choose tools and paths at runtime based on what just happened. If you can draw the full flowchart before execution and it never changes, you have a workflow. The LLM might fill in boxes (e.g. "approve or reject"); it doesn't control the graph. So when someone says they have an agent, ask: Can it change strategy based on the last tool's output? If the answer is no, don't pay agent prices for it. If you're building, use a real loop and tool-calling with a stop condition—here's a concrete pattern.
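The loop described above can be sketched in a few lines of TypeScript. This is a minimal illustration, not a production implementation: the decide function is a hard-coded stand-in for an LLM call, and all names (runAgent, tools, State) are mine, not from any library.

```typescript
// Minimal agent loop: observe -> decide -> act -> reflect, with a stop condition.

type ToolName = "search" | "calculate" | "done";

interface State {
  steps: number;
  notes: string[]; // accumulated tool results
}

// Hypothetical tools the agent may choose between at runtime.
const tools: Record<string, (input: string) => string> = {
  search: (q) => `results for "${q}"`,
  calculate: (expr) => `computed ${expr}`,
};

// Stand-in for the LLM: a real agent would send the current state
// (messages + tool results) to a model and parse its tool choice.
function decide(state: State): { tool: ToolName; input: string } {
  if (state.notes.length === 0) return { tool: "search", input: "agent washing" };
  if (state.notes.length === 1) return { tool: "calculate", input: "2+2" };
  return { tool: "done", input: "" };
}

function runAgent(maxSteps = 5): State {
  const state: State = { steps: 0, notes: [] };
  while (state.steps < maxSteps) {          // stop condition: step limit
    const { tool, input } = decide(state);  // decide: model picks the next tool
    if (tool === "done") break;             // ...or decides it is finished
    const result = tools[tool](input);      // act: call the chosen tool
    state.notes.push(result);               // reflect: fold the result into state
    state.steps += 1;                       // next iteration observes the new state
  }
  return state;
}
```

The point is structural: the path through the tools is chosen inside the loop, at runtime, based on the state the last tool produced. A workflow fixes that path before the loop ever runs.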

The difference isn't semantic. Workflows are easier to secure and cost-predict. Agents are flexible and dangerous. Mislabeling either direction screws your design and your security model. I spell out the distinction in agents vs workflows.

What Agent Washing Looks Like in Practice

Agent washing is marketing something as an "AI agent" when it behaves like a fixed workflow or script—no real autonomy or dynamic tool choice. Examples: a "support agent" that's really a form → LLM → predefined response template; a "research agent" that's really a fixed sequence of search → summarize → return. The LLM does work, but the path is fixed. You could draw it once and it wouldn't change at runtime. That's a workflow with an LLM in the middle, not an agent.
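For contrast, here is what that "research agent" looks like as code, a sketch with illustrative names where summarizeWithLLM stands in for a real model call. Notice the graph is drawn once, in the function body, and nothing the tools return can change it.

```typescript
// A "research agent" that is really a fixed workflow:
// search -> summarize -> return, every time, regardless of output.

function search(query: string): string {
  return `raw results for "${query}"`;
}

// The LLM fills in this box, but it doesn't control the graph.
function summarizeWithLLM(text: string): string {
  return `summary of: ${text}`; // placeholder for a model call
}

function researchWorkflow(query: string): string {
  const results = search(query);              // step 1: always runs
  const summary = summarizeWithLLM(results);  // step 2: always runs
  return summary;                             // step 3: always returns
}
```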

Real agents can decide to call different tools, loop back, or try a different strategy based on what they just saw. So the one question to ask: Can it change strategy based on the last tool's output? If yes, you're in agent territory. If no, you're in workflow territory—which is fine, but call it what it is and design (and price) accordingly.

Why the Distinction Matters

Security and audit. Workflows have a fixed graph. You can enumerate all paths and lock down each node. Agents can call tools in unpredictable order and count; they're harder to audit and easier to abuse (e.g. runaway API calls). So if you're sold an "agent" but it's a workflow, you might overestimate the attack surface—or vice versa. Clarity here keeps your security model right.

Cost and expectations. Workflows have bounded steps (one per node on a path). Agents can run until a stop condition, so cost and latency are less predictable. If you think you're buying a workflow and you're actually getting an agent (or the other way around), your capacity and cost planning break.

Building the right thing. If you need open-ended behavior (research, support, "figure out how to do X"), a workflow will be limiting. If you need a fixed, auditable process (approvals, form pipelines), an agent is riskier than necessary. So tell them apart and build (or buy) the right one.

How to Build a Real Agent

If you're building and you want a real agent: single loop, tools, LLM, stop condition. The agent observes state (and tool results), decides what to do next, acts (calls a tool or returns), reflects (updates state), and repeats until done or until a step limit. I use the Vercel AI SDK and patterns like stopWhen: stepCountIs(N) so the loop can't run forever. Full implementation: building an agentic chatbot. For production you add least privilege per tool, human-in-the-loop for irreversible actions, and (if you have multiple agents) verification at handoffs. See production-ready agents and agents vs workflows.
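One of those guardrails, the human-in-the-loop gate for irreversible actions, can be sketched in plain TypeScript. This is not the Vercel AI SDK's API; the Tool shape, the irreversible flag, and the approve callback are all illustrative assumptions about how you might wire such a gate yourself.

```typescript
// Sketch of a human-in-the-loop gate: irreversible tools (send email,
// delete data) require explicit approval before they execute.

interface Tool {
  run: (input: string) => string;
  irreversible: boolean; // true for actions you can't undo
}

const tools: Record<string, Tool> = {
  lookup: { run: (q) => `found ${q}`, irreversible: false },
  sendEmail: { run: (to) => `sent to ${to}`, irreversible: true },
};

function callTool(
  name: string,
  input: string,
  approve: (name: string) => boolean, // human-in-the-loop hook
): string {
  const tool = tools[name];
  if (tool.irreversible && !approve(name)) {
    return `blocked: ${name} needs human approval`;
  }
  return tool.run(input);
}
```

In a real system the approve hook would pause the loop and wait for a person; the agent only proceeds past irreversible steps when someone signs off.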

Can LangGraph be used for real agents? Yes. It's agentic when you use loops and dynamic tool choice. Otherwise it's a workflow with an LLM in the graph. Same question: can it change strategy based on the last output?

Key Takeaways

  • Agent washing = marketing a workflow as an agent. Test: Can it change strategy based on the last tool's output? If no, it's a workflow.
  • Real agents = observe → decide → act → reflect; tools and paths chosen at runtime. Workflows = fixed graph; LLM may fill boxes but doesn't change the graph.
  • Why it matters: security/audit, cost/expectations, and building the right thing. Workflows are easier to secure; agents are flexible and riskier.
  • Build a real agent: loop + tools + LLM + stop condition. Building an agentic chatbot; agents vs workflows.

Written by Sourav Mishra, Full Stack Engineer (Next.js and AI).

Frequently Asked Questions

Q: What is agent washing? Marketing something as an "AI agent" when it behaves like a fixed workflow or script—no real autonomy or dynamic tool choice. The one question: can it change strategy based on the last tool's output?

Q: How do I tell if something is a real agent? Can it choose different tools or paths based on results? If it only runs when you trigger it and follows a fixed sequence, it's not an agent; it's a workflow with an LLM.

Q: Can LangGraph be used for real agents? Yes. It's agentic when you use loops and dynamic tool choice. Otherwise it's a workflow with an LLM in the graph. See agents vs workflows.

Q: Where do I get a concrete agent implementation? Building an agentic chatbot with the Vercel AI SDK—tools, loop, stopWhen, and production guardrails.
