AI Agentic Patterns and Anti-Patterns
This week, I was on stage at the Tech Rocks Summit 2025 in the beautiful Théâtre de Paris. It was my first time attending this event, which gathers a nice crowd of CTOs, tech leads, architects, and decision makers.
My talk focused on what everyone is talking about right now: AI agents. In particular, I wanted to share with the audience what I’ve seen work (and not work) at companies and startups, drawn from tons of discussions with AI practitioners at conferences, meetups, and customer meetings.
Without further ado, here’s the deck (in French 🇫🇷 for now, sorry!) I showed on stage:
A Quick Historical Recap
We saw the Transformer wave in 2017, the ChatGPT tsunami in 2023, and the RAG (Retrieval Augmented Generation) trend in 2024. In 2025, here we are: Agents are the new frontier for LLMs.
But concretely, what does this change for us, devs and tech leaders? What works, what doesn’t work? Here are the key points of my presentation.
What is an Agent, Really?
Forget the magic for two minutes. An agent is a fairly simple equation:
Agent = LLM + Memory + Planning + Tools
It is no longer just a model predicting the next word. It is a system that observes, plans, acts, and reflects (the famous Reflection loop that lets it correct its own errors).
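To make this concrete, here is a minimal sketch of that loop in Python. Everything in it is illustrative: `call_llm` is a stub for whatever model client you use, and the tool and prompt formats are not taken from any particular framework.

```python
from typing import Callable

def call_llm(prompt: str) -> str:
    """Placeholder: swap in your actual LLM client call."""
    raise NotImplementedError

def schedule_meeting(topic: str) -> str:
    """Hypothetical business-oriented tool."""
    return f"Meeting about '{topic}' scheduled."

TOOLS: dict[str, Callable[[str], str]] = {"schedule_meeting": schedule_meeting}

def run_agent(goal: str, max_steps: int = 5) -> str:
    memory: list[str] = [f"Goal: {goal}"]  # Memory: running transcript of the episode
    for _ in range(max_steps):
        # Planning: ask the model what to do next, given everything seen so far
        plan = call_llm("\n".join(memory) + "\nNext action as 'tool:argument', or 'FINISH:<answer>'")
        if plan.startswith("FINISH:"):
            return plan.removeprefix("FINISH:")
        tool_name, _, arg = plan.partition(":")
        # Tools: act on the world through a named function, not raw text
        observation = TOOLS.get(tool_name, lambda a: f"Unknown tool: {tool_name}")(arg)
        # Reflection: let the model critique the step before continuing
        critique = call_llm(f"Action: {plan}\nResult: {observation}\nWhat went wrong, if anything, and what should happen next?")
        memory += [f"Action: {plan}", f"Observation: {observation}", f"Reflection: {critique}"]
    return "Stopped after reaching max_steps."
```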
Architecture Patterns that Work
I presented 4 patterns to avoid reinventing the wheel:
- The Orchestrator: A supervisor agent that delegates to specialized sub-agents. This is crucial for breaking down a complex task into digestible chunks.
- Rethinking Tools: Don’t just throw your raw REST API at the LLM. Create “business task” oriented tools (e.g., “Schedule Meeting” rather than POST /calendar/v1/events). Fewer tools = less confusion = more determinism (see the sketch after this list).
- MCP (Model Context Protocol): This is the emerging standard, essentially the USB for AI tools. It standardizes how an agent connects to its tools; launched by Anthropic and now widely adopted (but still rapidly evolving).
- A2A (Agent to Agent): Google and its partners are pushing this extensible protocol so that agents can discover and collaborate with each other, regardless of their language or framework.
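To illustrate the “Rethinking Tools” point, here is a sketch of what a business-task tool can look like, using the common JSON-schema function-calling convention. The tool name, fields, and calendar logic are all illustrative assumptions, not a specific product’s API.

```python
import datetime

# One intention-level tool instead of exposing POST /calendar/v1/events directly.
schedule_meeting_tool = {
    "name": "schedule_meeting",
    "description": "Schedule a meeting with one or more colleagues.",
    "parameters": {
        "type": "object",
        "properties": {
            "attendees": {"type": "array", "items": {"type": "string"},
                          "description": "Email addresses of the participants"},
            "topic": {"type": "string", "description": "What the meeting is about"},
            "duration_minutes": {"type": "integer", "default": 30},
        },
        "required": ["attendees", "topic"],
    },
}

def schedule_meeting(attendees: list[str], topic: str, duration_minutes: int = 30) -> str:
    """One business task: internally it could check availability, create the
    calendar event, and send invitations, instead of making the LLM chain
    three raw REST calls itself. The slot below is a placeholder."""
    start = datetime.datetime.now() + datetime.timedelta(hours=1)
    return f"Scheduled '{topic}' with {', '.join(attendees)} at {start:%Y-%m-%d %H:%M} ({duration_minutes} min)."
```

The fewer, more intention-level the tools, the less room the model has to misuse them, which is exactly the “fewer tools = less confusion = more determinism” argument.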
Traps to Avoid (Anti-Patterns)
I insisted on this because I see teams falling into these traps:
- The “Chatbot Mandate”: Does your leadership want “a chatbot”? Resist. AI should often be invisible (like a Head-Up Display), not necessarily an endless conversation.
- Insufficient Vibe-Checking: “It looks like it works” is not a testing strategy. You need Golden Responses, LLM-as-a-Judge, and a real evaluation phase (see the evaluation sketch after this list).
- Silent Confabulation: RAG is great, but if the AI invents things, it’s dangerous. Force source citation and aim for IVO (Immediately Validatable Output, coined by my colleague Zack Akil): the user must be able to verify the result at a glance.
- The Coding “Rabbit Hole”: Coding agents are stunning, but they can lead you down the wrong path with incredible confidence (“You’re absolutely right!”). Keep a cool head and focus on value (MVP), not feature creep.
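To make the evaluation point concrete, here is a minimal sketch combining golden responses with LLM-as-a-Judge scoring. The dataset, judge prompt, and `call_llm` stub are all illustrative assumptions, not a specific evaluation framework.

```python
from typing import Callable

def call_llm(prompt: str) -> str:
    """Placeholder: your actual model client call."""
    raise NotImplementedError

# Hypothetical golden dataset: questions paired with reference answers.
GOLDEN_SET = [
    {"question": "What is our refund policy for annual plans?",
     "golden": "Full refund within 30 days of purchase, prorated afterwards."},
]

JUDGE_PROMPT = """You are a strict evaluator.
Question: {question}
Reference answer: {golden}
Candidate answer: {candidate}
Reply with only PASS or FAIL, judging factual agreement with the reference."""

def evaluate(agent_fn: Callable[[str], str]) -> float:
    """Run each golden question through the agent and score it with an LLM judge."""
    passed = 0
    for case in GOLDEN_SET:
        candidate = agent_fn(case["question"])
        verdict = call_llm(JUDGE_PROMPT.format(candidate=candidate, **case))
        passed += verdict.strip().upper().startswith("PASS")
    return passed / len(GOLDEN_SET)
```

Even a small golden set like this, run on every change, tells you far more than another round of “it looks fine to me.”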
Back at the Office: What Do We Do?
I concluded with a “Todo List” for when attendees are back at the office:
- Don’t ask yourself “Where can I squeeze in a chatbot?”. Instead, identify the most painful business process (the Critical User Journey).
- Experiment small. The goal is to learn.
- Measure & Evaluate. It’s your users who will tell you if you’re right, not the hype.
The agent might not buy happiness, but implemented well, it can seriously contribute to it! 😄
