Why Agents Are Actual Game Changers
Most people’s experience with AI goes something like this: you type a question into ChatGPT, get a confident-sounding answer, and later find out it was wrong. Or you ask it to build something, it says “done!”, and what you get doesn’t work.
This isn’t because AI is dumb. It’s because of how it works under the hood.
The autocomplete problem
When you use a large language model in a normal chat window, here’s what actually happens: you type something, it predicts the most statistically likely next tokens, and it outputs text. That’s it. No verification. No feedback. No way of knowing if what it just told you is right.
When you ask that autocomplete “when’s the cheapest time to fly to Lisbon?” it doesn’t search flights. It doesn’t check prices. It generates text that sounds like an answer. It has no mechanism to know if it’s wrong.
So yeah, of course it hallucinates. It has no feedback loop. It can’t course-correct.
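To make that concrete, here’s a deliberately tiny toy: a bigram “model” that predicts the next word purely from which words followed which in its training text. It’s a caricature of a real LLM, and the corpus and function names are mine, not anything from an actual model, but it shows the core problem: the generator produces fluent-sounding output with zero machinery for checking whether any of it is true.

```python
import random
from collections import defaultdict

# Toy bigram "language model": the next word is picked from the words
# that followed the current word in the training text. Nothing here
# knows anything about flights or prices; it only knows word patterns.
corpus = (
    "the cheapest flights to lisbon are in winter . "
    "the cheapest flights to porto are in spring ."
).split()

# Map each word to the list of words observed right after it.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, length=8, seed=0):
    random.seed(seed)
    word, out = start, [start]
    for _ in range(length):
        candidates = followers.get(word)
        if not candidates:
            break  # dead end: nothing ever followed this word
        word = random.choice(candidates)  # a statistically plausible next token
        out.append(word)
    return " ".join(out)

# Fluent-sounding text comes out -- but at no point did anything
# check a flight price, so there is nothing to be "right" about.
print(generate("the"))
```

Scale this up a few billion parameters and you get much better prose, but the same shape: prediction in, text out, no verification step anywhere in the loop.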
What changes when you add a feedback loop
I build with AI constantly. A while back I asked Claude to build me a web app. It churned away, then announced: “Here’s your finished web app!” I opened it: blank screen, confusing red error messages.
Same task, different setup: I gave it a browser tool, the ability to actually open the page and see what’s on screen. I gave it the ability to run tests. Same prompt.
This time it didn’t finish in 10 seconds. It spent time. It built something, opened it, saw the error, fixed it, checked again. Iterated. And I got something that worked.
The code wasn’t the constraint. The feedback loop was.
This is what “agentic AI” actually means
Not AI that’s smarter. Not a better model. Just AI with a loop:
Do → Check → Correct → Repeat
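That loop can be sketched in a few lines. Everything below is a stand-in: `run_checks` and `attempt_fix` are hypothetical placeholders for “run the tests / open the page in a browser” and “feed the errors back to the model and apply its patch”. A real agent wires real tools into those two slots; the structure is the point.

```python
def run_checks(state):
    # Placeholder: in a real agent this runs the test suite, loads the
    # page, reads the console -- anything that can say "wrong, and how".
    return list(state["errors"])

def attempt_fix(state, errors):
    # Placeholder: in a real agent this sends the errors back to the
    # model and applies its proposed fix. Toy version: resolve one error.
    fixed = dict(state)
    fixed["errors"] = errors[1:]
    return fixed

def agent_loop(state, max_iterations=10):
    for _ in range(max_iterations):          # Do
        errors = run_checks(state)           # Check
        if not errors:
            return state                     # verified done, not just "done!"
        state = attempt_fix(state, errors)   # Correct, then Repeat
    raise RuntimeError("gave up: checks still failing")

# Start with a "finished" app that is actually broken.
app = {"errors": ["blank screen", "TypeError in console"]}
finished = agent_loop(app)  # loops until the checks pass
```

Note what the loop does *not* contain: a smarter model. The only addition is a verification step the output must survive before the agent is allowed to say it’s finished.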
When you give an AI agent the tools to verify its own output — to know when it’s wrong and by how much — you start getting something that can actually do work. Flight search. Postage calculations. Code that fulfills acceptance criteria. What matters is: can it check itself?
If you’re thinking “wait, ChatGPT already does some of this” — you’re right. The latest models are starting to build pieces of this loop in: web search, code execution, multi-step reasoning. That’s not a coincidence. The companies building these tools are slowly turning their chatbots into agents, because that’s where the value is.
What this means for your product
This is the lens I bring to every product I work on. When I’m building features, designing workflows, or evaluating whether AI can solve a problem for a client’s app, the question is always the same: where’s the feedback loop?
The answer to that question is the difference between an AI feature that demos well and one that actually works in production.
If you’re building a product and thinking about where AI fits in, let’s talk.