Guillaume Laforge

On AI Standards and Protocols: Focus on MCP and A2A

At SnowCamp 2026, with my Cast Codeurs buddy Emmanuel Bernard of Hexactgon, I had the chance to deliver a talk on AI standards and protocols, with a big focus on MCP (Model Context Protocol) and A2A (the Agent2Agent protocol).

Without further ado, here’s the slide deck we presented:

This talk is based on the Devoxx 2025 deep dive session that I delivered with Emmanuel and my colleague Mete Atamel. As the talk wasn’t recorded during SnowCamp, I’ll share with you the 3h-long video from Devoxx below:
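
Both protocols exchange JSON-RPC 2.0 messages under the hood. As a concrete taste of MCP, here's the request a client sends to discover a server's tools, along with a sketch of a possible response (the envelope follows the MCP specification, but the `get_weather` tool is a made-up example):

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {} }
```

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "tools": [
      {
        "name": "get_weather",
        "description": "Returns the current weather for a city",
        "inputSchema": {
          "type": "object",
          "properties": { "city": { "type": "string" } },
          "required": ["city"]
        }
      }
    ]
  }
}
```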

Read more...

Implementing the Interactions API with Antigravity

Google and DeepMind have announced the Interactions API, a new way to interact with Gemini models and agents.

Here are some useful links to learn more about this new API:

About the Interactions API

The Rationale and Motivation

The Interactions API was introduced to address a shift in AI development, moving from simple, stateless text generation to more complex, multi-turn agentic workflows. It serves as a dedicated interface for systems that require memory, reasoning, and tool use. It provides a unified interface for both simple LLM calls and more complex agent calls.

Read more...

AI Agentic Patterns and Anti-Patterns

This week, I was on stage at the Tech Rocks Summit 2025 in the beautiful Théâtre de Paris. This is the first time I'm attending this event, which gathers a nice crowd of CTOs, tech leads, architects, and decision makers.

My talk focused on what everyone is talking about right now: AI Agents. And in particular, I was interested in sharing with the audience things I’ve seen work or not work in companies, startups, and via tons of discussions with AI practitioners I met at conferences, meetups, or customer meetings.

Read more...

Gemini Is Cooking Bananas Under Antigravity

What a wild title, isn't it? It's a catchy one, not generated by AI, to illustrate this crazy week of announcements from Google. Of course, there are big highlights like Gemini 3 Pro, Antigravity, and Nano Banana Pro, but that's not all, and the purpose of this article is to share everything with you, including links to all the interesting material about these announcements.

Gemini 3 Pro

The community was eagerly anticipating the release of Gemini 3. Gemini 3 Pro is a state-of-the-art model with excellent multimodal capabilities and advanced reasoning, and it excels at coding and other agentic tasks.

Read more...

Driving a web browser with Gemini's Computer Use model in Java

In this article, I’ll guide you through the process of programmatically interacting with a web browser using the new Computer Use model in Gemini 2.5 Pro. We’ll accomplish this in Java ☕, leveraging Microsoft’s powerful Playwright Java SDK to handle the browser automation.

The New Computer Use Model

Unveiled in this announcement article and made available in public preview last month via the Gemini API on Google AI Studio and Vertex AI, Gemini 2.5 Pro introduces a pretty powerful “Computer Use” feature.
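
The overall control loop is: take a screenshot, send it to the model, execute the UI action the model suggests, and repeat until the model says it's done. Here's a minimal Java sketch of that loop, where `askModel` and `execute` are hypothetical stubs standing in for the real Gemini API and Playwright calls (the real code would send the screenshot bytes to the model and drive the page via Playwright):

```java
public class ComputerUseLoop {
    // Hypothetical action returned by the model, e.g. a click at (x, y), or "done".
    record Action(String name, int x, int y) {}

    // Stub for the Gemini Computer Use call: real code would send the
    // screenshot to the model and parse the UI action it returns.
    static Action askModel(byte[] screenshot, int step) {
        return step < 2 ? new Action("click_at", 100, 200) : new Action("done", 0, 0);
    }

    // Stub for Playwright: real code would call e.g. page.mouse().click(x, y).
    static void execute(Action action) {
        System.out.println("Executing " + action.name());
    }

    // Screenshot -> model -> action loop, capped at maxSteps iterations.
    static int runLoop(int maxSteps) {
        int steps = 0;
        for (int i = 0; i < maxSteps; i++) {
            byte[] screenshot = new byte[0]; // real code: page.screenshot()
            Action action = askModel(screenshot, i);
            steps++;
            if (action.name().equals("done")) break; // model finished the task
            execute(action);
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println("Loop finished after " + runLoop(10) + " steps");
    }
}
```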

Read more...

A Javelit frontend for an ADK agent

Continuing my journey with Javelit, after creating a frontend for “Nano Banana” to generate images and a chat interface for a LangChain4j-based Gemini chat model, I decided to see how I could integrate an ADK agent with a Javelit frontend.

A Javelit interface for an ADK search agent

The key ingredients of this interface:

  • a title (with some emojis 😃)
  • a container that displays the agent’s answer
  • a text input field to enter the search query

The ADK agent

For the purpose of this article, I built a simple search agent, with a couple of search tools:

Read more...

Building AI Agents with ADK for Java

At Devoxx Belgium, I recently had the chance to present this new talk dedicated to ADK for Java, the open source Agent Development Kit framework developed by Google.

The presentation covered:

  • an introduction to the notion of AI agents
  • how to get started in a Java and Maven project
  • how to create your first agent
  • how to debug an agent via the Dev UI
  • the coverage of the various tools (custom function tools, built-in tools like Google Search or code execution, an agent as tool, MCP tools)
  • an overview of the different ways to combine agents into a multi-agent system: sub-agents, sequential agents, parallel agents, loop agents
  • some details on the event loop and services (session and state management, artifacts, runner…)
  • structured input / output schemas
  • the various callbacks in the agent lifecycle
  • the integration with LangChain4j (to give access to the plethora of LLMs supported by LangChain4j)
  • the definition of agents via configuration in YAML
  • the new long-term memory support
  • the plugin system
  • the new external code executors (via Docker containers or backed by Google Cloud Vertex AI)
  • how to launch an agent with the Dev UI from JBang
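
To illustrate the YAML-based agent definition mentioned in the list above, a minimal config could look like the following sketch (the field names are an approximation for illustration purposes; check the ADK for Java documentation for the actual schema):

```yaml
# Hypothetical sketch of an ADK agent defined via YAML configuration
name: search_assistant
model: gemini-2.0-flash
description: An agent that answers questions using web search
instruction: |
  You are a helpful assistant.
  Use the search tool to ground your answers.
tools:
  - name: google_search
```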

Slides of the presentation

The slide deck of this session is embedded below:

Read more...

Creative Java AI agents with ADK and Nano Banana 🍌

Large Language Models (LLMs) are all becoming “multimodal”. They can process not only text, but also other input “modalities”, like pictures, videos, or audio files. But models that output more than just text are less common…

Recently, I wrote about my experiments with Nano Banana 🍌 (in Java), a Gemini chat model flavor that can create and edit images. This is particularly handy for interactive creative tasks: for example, a marketing assistant that helps you design a new product by describing it, further tweaking its look, showing it in different settings for marketing ads, etc.

Read more...

Creating a Streamable HTTP MCP server with Micronaut

In previous articles, I explored how to create an MCP server with Micronaut by vibe-coding one following the Model Context Protocol specification (which was a great way to better understand its underpinnings), and how to create an MCP server with Quarkus.

Micronaut used to lack a dedicated module for creating MCP servers, but fortunately it recently gained official support for MCP, so I was eager to try it out!
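
As a reminder of what “Streamable HTTP” means in practice: the client simply POSTs JSON-RPC messages to the server’s MCP endpoint, and the server replies with either a plain JSON response or a server-sent events stream. For instance, a session starts with an initialize request shaped like this (per the MCP specification; the clientInfo values are placeholders):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "initialize",
  "params": {
    "protocolVersion": "2025-03-26",
    "capabilities": {},
    "clientInfo": { "name": "demo-client", "version": "1.0.0" }
  }
}
```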

For the impatient

You can check out the code we’ll be covering in this article on GitHub.

Read more...

Vibe-coding a Chrome extension with Gemini CLI to summarize articles

I often find myself staring at a wall of text online. It could be a lengthy technical article, a detailed news report, or a deep-dive blog post. My first thought is often: “Is this worth the time to read in full?” On top of that, for my podcast, Les Cast Codeurs, I’m constantly gathering links and need to create quick shownotes, which is essentially… a summary.

My first attempt to solve this was a custom Gemini Gem I created: a personalized chatbot that could summarize links. It worked, but I often ran into a wall: it couldn’t access paywalled content, pages that required a login, or dynamically generated sites that I was already viewing in my browser. The solution was clear: I needed to bring the summarization to the content, not the other way around. The idea for a Chrome extension was born.

Read more...