Guillaume Laforge

Large Language Models

Visualizing ADK multiagent systems

Let me share an interesting experiment I worked on to visualize the structure of your AI agents, more specifically, Agent Development Kit (ADK) multi-agent systems.

The more complex your agents become, as you split tasks and spin off more specialized and focused sub-agents, the harder it is to see what your system is really made of and how its various components interact.

This is also something I experienced when I was covering Google Cloud Workflows: the more steps, loops, indirections, and conditions a workflow had, the trickier it was to understand and debug. And sometimes, as the saying goes, a picture is worth a thousand words. So when I was working on my recent series of articles on ADK agentic workflows (drawing diagrams by hand), the idea of experimenting with an ADK agent visualizer came up immediately.
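To give an idea of what such a visualizer can do, here is a minimal sketch (not the actual tool from the article) that walks an agent hierarchy and emits Graphviz DOT, assuming the ADK Java BaseAgent type exposes name() and subAgents() accessors:

```java
import com.google.adk.agents.BaseAgent;

public class AgentGraph {

    // Render an agent hierarchy as Graphviz DOT, one edge per parent/child pair.
    public static String toDot(BaseAgent root) {
        StringBuilder dot = new StringBuilder("digraph agents {\n");
        appendEdges(root, dot);
        return dot.append("}\n").toString();
    }

    // Depth-first traversal of the sub-agent tree.
    private static void appendEdges(BaseAgent agent, StringBuilder dot) {
        for (BaseAgent child : agent.subAgents()) {
            dot.append("  \"").append(agent.name()).append("\" -> \"")
               .append(child.name()).append("\";\n");
            appendEdges(child, dot);
        }
    }
}
```

Feeding the resulting DOT string to the dot command-line tool then produces a picture of the whole agent tree.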

Read more...

Mastering agentic workflows with ADK: the recap

Over the past few articles, we’ve taken a deep dive into the powerful agentic workflow orchestration capabilities of the Agent Development Kit (ADK) for Java. We’ve seen how to build robust, specialized AI agents by moving beyond single, monolithic agents. We’ve explored how to structure our agents for flexible delegation with sub-agents, fixed pipelines with sequential agents, concurrent fan-out with parallel agents, and iterative refinement with loop agents.

In this final post, let’s bring it all together. We’ll summarize each pattern, clarify when to use one over the other, and show how their true power is unlocked when you start combining them.

Read more...

Mastering agentic workflows with ADK: Loop agents

Welcome to the final installment of our series on mastering agentic workflows with the ADK for Java. We’ve covered a lot of ground: sub-agents for flexible delegation, sequential agents for fixed pipelines, and parallel agents for concurrent, independent tasks.

Now, we’ll explore a pattern that enables agents to mimic a fundamental human problem-solving technique: iteration. For tasks that require refinement, trial-and-error, and self-correction, the ADK provides a LoopAgent.
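As a teaser, here is a minimal sketch of such an iteration loop, assuming the ADK Java builder API; the agent names, instructions, and state keys are illustrative, not taken from the article:

```java
import com.google.adk.agents.LlmAgent;
import com.google.adk.agents.LoopAgent;

public class RefinementLoop {

    public static LoopAgent build() {
        // Writer produces (or rewrites) a draft, stored in session state under "draft".
        LlmAgent writer = LlmAgent.builder()
            .name("writer")
            .model("gemini-2.0-flash")
            .instruction("Write or improve the draft found in state under 'draft', "
                + "taking the latest 'critique' into account if present.")
            .outputKey("draft")
            .build();

        // Critic reviews the draft and stores actionable feedback under "critique".
        LlmAgent critic = LlmAgent.builder()
            .name("critic")
            .model("gemini-2.0-flash")
            .instruction("Critique the 'draft' and list concrete improvements.")
            .outputKey("critique")
            .build();

        // Each iteration runs writer then critic; maxIterations bounds the loop.
        return LoopAgent.builder()
            .name("refinement-loop")
            .subAgents(writer, critic)
            .maxIterations(3)
            .build();
    }
}
```

In practice, such a loop would typically also exit early once the critic judges the draft good enough, rather than always running the maximum number of iterations.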

Read more...

Mastering agentic workflows with ADK for Java: Parallel agents

Let’s continue our exploration of ADK for Java (Agent Development Kit for building AI agents). In this series, we’ve explored two fundamental agentic workflows so far: sub-agents for flexible, LLM-driven delegation, and sequential agents for fixed, ordered pipelines.

But what if your problem isn’t about flexibility or a fixed sequence? What if it’s about efficiency? Some tasks don’t depend on each other and can be done at the same time. Why wait for one to finish before starting the next?
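Here is a minimal sketch of such a parallel fan-out, assuming the ADK Java builder API; the two sub-agents below are independent of each other, so they can run concurrently (their names and instructions are illustrative):

```java
import com.google.adk.agents.LlmAgent;
import com.google.adk.agents.ParallelAgent;

public class FanOut {

    public static ParallelAgent build() {
        // Two research tasks that don't need each other's output.
        LlmAgent newsAgent = LlmAgent.builder()
            .name("news-agent")
            .model("gemini-2.0-flash")
            .instruction("Summarize today's news for the topic found in state.")
            .outputKey("news")
            .build();

        LlmAgent trendsAgent = LlmAgent.builder()
            .name("trends-agent")
            .model("gemini-2.0-flash")
            .instruction("Describe current community trends for the topic found in state.")
            .outputKey("trends")
            .build();

        // Both sub-agents run concurrently; results land under separate state keys.
        return ParallelAgent.builder()
            .name("research-fan-out")
            .subAgents(newsAgent, trendsAgent)
            .build();
    }
}
```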

Read more...

Mastering agentic workflows with ADK for Java: Sequential agents

In the first part of this series, we explored the “divide and conquer” strategy using sub-agents to create a flexible, modular team of AI specialists. This is perfect for situations where the user is in the driver’s seat, directing the flow of conversation. But what about when the process itself needs to be in charge?

Some tasks are inherently linear. You have to do Step A before Step B, and Step B before Step C. Think about a CI/CD pipeline: you build, then you test, then you deploy. You can’t do it out of order… or if you do, be prepared for havoc!
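To illustrate, here is a minimal sketch of such a fixed pipeline, assuming the ADK Java builder API; the three steps and their state keys are made up for the example:

```java
import com.google.adk.agents.LlmAgent;
import com.google.adk.agents.SequentialAgent;

public class Pipeline {

    public static SequentialAgent build() {
        LlmAgent outliner = LlmAgent.builder()
            .name("outliner")
            .model("gemini-2.0-flash")
            .instruction("Produce an outline for the requested article.")
            .outputKey("outline")
            .build();

        LlmAgent writer = LlmAgent.builder()
            .name("writer")
            .model("gemini-2.0-flash")
            .instruction("Write the article following the 'outline'.")
            .outputKey("article")
            .build();

        LlmAgent editor = LlmAgent.builder()
            .name("editor")
            .model("gemini-2.0-flash")
            .instruction("Proofread and tighten the 'article'.")
            .outputKey("finalArticle")
            .build();

        // Sub-agents run strictly in declaration order: outline, write, edit.
        return SequentialAgent.builder()
            .name("writing-pipeline")
            .subAgents(outliner, writer, editor)
            .build();
    }
}
```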

Read more...

Mastering agentic workflows with ADK for Java: Sub-agents

Let me come back to the Agent Development Kit (ADK) for Java! We recently discussed the many ways to extend ADK agents with tools. But today, I want to explore the multi-agent capabilities of ADK by talking about sub-agent workflows.

In upcoming articles in this series, we’ll also talk about sequential, parallel, and loop flows.

The “divide and conquer” strategy

Think of building a complex application. You wouldn’t put all your logic in a single, monolithic class, would you? You’d break it down into smaller, specialized components. The sub-agent workflow applies this same “divide and conquer” principle to AI agents.
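Here is a minimal sketch of that principle, assuming the ADK Java builder API: the coordinator’s LLM picks a specialist to delegate to based on the sub-agents’ descriptions. The support-desk domain and the agent names are illustrative:

```java
import com.google.adk.agents.LlmAgent;

public class SupportTeam {

    public static LlmAgent build() {
        // Specialists advertise what they do through their descriptions,
        // which is what the coordinator's LLM uses to pick a delegate.
        LlmAgent billing = LlmAgent.builder()
            .name("billing-agent")
            .model("gemini-2.0-flash")
            .description("Handles invoices, payments, and refund questions.")
            .instruction("Answer billing questions precisely.")
            .build();

        LlmAgent techSupport = LlmAgent.builder()
            .name("tech-support-agent")
            .model("gemini-2.0-flash")
            .description("Handles technical issues and troubleshooting.")
            .instruction("Diagnose and solve technical problems step by step.")
            .build();

        // The coordinator delegates each request to whichever specialist fits.
        return LlmAgent.builder()
            .name("support-coordinator")
            .model("gemini-2.0-flash")
            .instruction("Route each user request to the most relevant specialist.")
            .subAgents(billing, techSupport)
            .build();
    }
}
```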

Read more...

The Sci-Fi naming problem: Are LLMs less creative than we think?

Like many developers, I’ve been exploring the creative potential of Large Language Models (LLMs). At the beginning of the year, I crafted a project to build an AI agent that could generate short science-fiction stories. I used LangChain4j to create a deterministic workflow to drive Gemini for the story generation, and Imagen for the illustrations. The initial results were fascinating. The model could weave narratives, describe futuristic worlds, and create characters with seemingly little effort. But as I generated more stories, a strange and familiar pattern began to emerge…

Read more...

AI Agents, the New Frontier for LLMs

I recently gave a talk titled “AI Agents, the New Frontier for LLMs”. The session explored how we can move beyond simple request-response interactions with Large Language Models to build more sophisticated and autonomous systems.

If you’re already familiar with LLMs and Retrieval Augmented Generation (RAG), the next logical step is to understand and build AI agents.

What makes a system “agentic”?

An agent is more than just a clever prompt. It’s a system that uses an LLM as its core reasoning engine to operate autonomously. The key characteristics that make a system “agentic” include autonomy, reasoning and planning, tool use, and memory.
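For illustration, here is a minimal sketch of an agent that can autonomously decide to call a tool, assuming ADK Java’s FunctionTool.create(...) helper; the weather tool and its canned answer are made up for the example:

```java
import com.google.adk.agents.LlmAgent;
import com.google.adk.tools.FunctionTool;

import java.util.Map;

public class WeatherAgent {

    // A plain static method exposed as a tool; the model decides when to call it.
    public static Map<String, String> getWeather(String city) {
        return Map.of("city", city, "forecast", "sunny"); // canned data for the sketch
    }

    public static LlmAgent build() {
        return LlmAgent.builder()
            .name("weather-agent")
            .model("gemini-2.0-flash")
            .instruction("Answer weather questions; call getWeather when you need data.")
            .tools(FunctionTool.create(WeatherAgent.class, "getWeather"))
            .build();
    }
}
```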

Read more...

Advanced RAG: Using Gemini and long context for indexing rich documents (PDF, HTML...)

A very common question I get when presenting and talking about advanced RAG (Retrieval Augmented Generation) techniques is how to best index and search rich documents like PDFs or web pages that contain both text and rich elements, like pictures or diagrams.

Another very frequent question people ask me is about RAG versus long context windows. Indeed, models with long context windows usually build a more global understanding of a document, seeing each excerpt in its overall context. But of course, you can’t feed all of your users’ or customers’ documents into one single augmented prompt. RAG also has other advantages, like much lower latency, and it is generally cheaper.
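One way to combine the two approaches, sketched below (this is the general idea, not necessarily the article’s exact pipeline), is to let a multimodal model like Gemini turn each rich page, rendered as an image, into self-contained text, and then index that text like any other RAG chunk. The sketch assumes LangChain4j’s Vertex AI Gemini integration and its pre-1.0 generate() API; the project, location, and prompt are placeholders:

```java
import dev.langchain4j.data.message.ImageContent;
import dev.langchain4j.data.message.TextContent;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.vertexai.VertexAiGeminiChatModel;

public class PageDescriber {

    // Gemini, reached through LangChain4j's Vertex AI integration.
    private final ChatLanguageModel gemini = VertexAiGeminiChatModel.builder()
        .project("my-gcp-project")   // placeholder project and region
        .location("us-central1")
        .modelName("gemini-1.5-pro")
        .build();

    // Turn one page image into standalone text, ready to be embedded and indexed.
    public String describe(String base64Png) {
        return gemini.generate(UserMessage.from(
            ImageContent.from(base64Png, "image/png"),
            TextContent.from("Describe this page in detail, including diagrams "
                + "and figures, as standalone text suitable for retrieval.")
        )).content().text();
    }
}
```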

Read more...

Advanced RAG: Hypothetical Question Embedding

In the first article of this Advanced RAG series, I talked about an approach I called sentence window retrieval, where we calculate vector embeddings per sentence, but the chunk of text returned (and added to the context of the LLM) also contains the surrounding sentences, to give more context to that embedded sentence. Embedding a single sentence tends to give better vector similarity than embedding the whole surrounding context. It is one of the techniques I’m covering in my talk on advanced RAG techniques.
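Here is a minimal sketch of that windowing idea in plain Java (the embedding call itself is left out, so any embedding model can be plugged in): each sentence is what you embed, while the windowed text is what you later hand to the LLM:

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class SentenceWindows {

    // embeddedSentence is what gets vectorized; windowedText is what the LLM sees.
    record Chunk(String embeddedSentence, String windowedText) {}

    public static List<Chunk> chunk(String document, int window) {
        List<String> sentences = splitSentences(document);
        List<Chunk> chunks = new ArrayList<>();
        for (int i = 0; i < sentences.size(); i++) {
            int from = Math.max(0, i - window);
            int to = Math.min(sentences.size(), i + window + 1);
            String context = String.join(" ", sentences.subList(from, to));
            chunks.add(new Chunk(sentences.get(i), context));
        }
        return chunks;
    }

    // Naive sentence splitting with the JDK's BreakIterator.
    private static List<String> splitSentences(String text) {
        BreakIterator it = BreakIterator.getSentenceInstance(Locale.ENGLISH);
        it.setText(text);
        List<String> sentences = new ArrayList<>();
        for (int start = it.first(), end = it.next();
             end != BreakIterator.DONE; start = end, end = it.next()) {
            String sentence = text.substring(start, end).trim();
            if (!sentence.isEmpty()) {
                sentences.add(sentence);
            }
        }
        return sentences;
    }
}
```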

Read more...