Guillaume Laforge

Large Language Models

Advanced RAG — Using Gemini and long context for indexing rich documents (PDF, HTML...)

A very common question I get when presenting and talking about advanced RAG (Retrieval Augmented Generation) techniques is how to best index and search rich documents like PDFs or web pages that contain both text and rich elements such as pictures or diagrams.

Another very frequent question people ask me is about RAG versus long context windows. Indeed, models with long context windows usually have a more global understanding of a document, and of each excerpt in its overall context. But of course, you can’t feed all of your users’ or customers’ documents into one single augmented prompt. RAG also has other advantages: it offers much lower latency, and it is generally cheaper.

Read more...

Advanced RAG — Hypothetical Question Embedding

In the first article of this Advanced RAG series, I talked about an approach I called sentence window retrieval, where we calculate vector embeddings per sentence, but the chunk of text returned (and added to the context of the LLM) also contains the surrounding sentences, to give more context to that embedded sentence. Embedding a single sentence tends to give a better vector similarity match than embedding the whole surrounding context. It is one of the techniques I’m covering in my talk on advanced RAG techniques.
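
To make the idea concrete, here is a minimal, self-contained Java sketch of the indexing side of sentence window retrieval. The embed() method is a hypothetical placeholder for whatever embedding model you use; only the windowing logic matters here:

```java
import java.util.ArrayList;
import java.util.List;

public class SentenceWindowIndexer {

    /** The embedding key is a single sentence, but the stored payload
        is that sentence plus its neighbors. */
    record IndexedChunk(float[] sentenceEmbedding, String windowText) {}

    static List<IndexedChunk> index(List<String> sentences, int windowSize) {
        List<IndexedChunk> chunks = new ArrayList<>();
        for (int i = 0; i < sentences.size(); i++) {
            // Embed the single sentence, for precise similarity matching...
            float[] embedding = embed(sentences.get(i));
            // ...but keep the surrounding window as the text handed to the LLM.
            int from = Math.max(0, i - windowSize);
            int to = Math.min(sentences.size(), i + windowSize + 1);
            String window = String.join(" ", sentences.subList(from, to));
            chunks.add(new IndexedChunk(embedding, window));
        }
        return chunks;
    }

    // Hypothetical placeholder: delegate to your embedding model of choice.
    static float[] embed(String text) {
        throw new UnsupportedOperationException("plug in a real embedding model");
    }
}
```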

Read more...

Expanding ADK AI agent capabilities with tools

In a nutshell, the AI agent equation is the following:

AI Agent = LLM + Memory + Planning + Tool Use

AI agents are nothing without tools! And they are actually more than mere Large Language Model calls. They require some memory management to handle the context of the interactions (short-term, long-term, or contextual information as in the Retrieval Augmented Generation approach). Planning is also important (with variations around the Chain-of-Thought prompting approach, and LLMs with reasoning or thinking capabilities) for an agent to carry out its tasks.
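
As an illustration of the “tool use” term in that equation, here is a sketch of an ADK Java agent exposing a single function tool. It follows the shape of the ADK samples, but treat the details (the model name, the @Schema annotations, the tool method signature) as assumptions to double-check against the ADK documentation:

```java
import com.google.adk.agents.LlmAgent;
import com.google.adk.tools.Annotations.Schema;
import com.google.adk.tools.FunctionTool;
import java.util.Map;

public class WeatherAgent {

    // A tool is a plain static method; the LLM decides when to call it.
    @Schema(description = "Returns the weather forecast for a given city")
    public static Map<String, String> getWeather(
            @Schema(name = "city", description = "The name of the city") String city) {
        // Canned answer for the sketch; a real tool would call a weather API.
        return Map.of("city", city, "forecast", "sunny, 22°C");
    }

    // The agent wires the LLM, its instructions, and the tool together.
    public static LlmAgent ROOT_AGENT = LlmAgent.builder()
            .name("weather-agent")
            .model("gemini-2.0-flash") // assumed model identifier
            .instruction("Answer weather questions using the getWeather tool.")
            .tools(FunctionTool.create(WeatherAgent.class, "getWeather"))
            .build();
}
```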

Read more...

Expanding ADK Java LLM coverage with LangChain4j

Recently on these pages, I’ve covered ADK (Agent Development Kit) for Java, launched at Google I/O 2025. I showed how to get started writing your first Java agent, and I shared a GitHub template that you can use to kick-start your development.

But you also know that I’m a big fan of, and a contributor to, the LangChain4j project, where I’ve worked on the Gemini support, embedding models, GCS document loaders, Imagen generation, etc.

Read more...

An ADK Java GitHub template for your first Java AI agent

With the unveiling of the Java version of the Agent Development Kit (ADK), which lets you build AI agents in Java, I recently covered how to get started developing your first agent.

The installation and quickstart documentation also helps with the first steps, but I realized it would be handy to provide a template project, to further accelerate your time-to-first-conversation with your Java agents! This led me to play with GitHub’s template project feature, which allows you to create a copy of the template project in your own account or organization. It comes with a ready-made project structure, a configured pom.xml file, and a first Java agent you can customize at will and run from either the command line or the ADK Dev UI.
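
For reference, the heart of that pom.xml is just the two ADK dependencies. The coordinates below are my assumption based on the ones used at launch; check the template itself for the current version numbers:

```xml
<!-- Core ADK Java API -->
<dependency>
    <groupId>com.google.adk</groupId>
    <artifactId>google-adk</artifactId>
    <version>0.1.0</version> <!-- version at launch; check for updates -->
</dependency>
<!-- The ADK Dev UI web server, handy during development -->
<dependency>
    <groupId>com.google.adk</groupId>
    <artifactId>google-adk-dev</artifactId>
    <version>0.1.0</version>
</dependency>
```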

Read more...

Things you never dared to ask about LLMs — Take 2

Recently, I had the chance to deliver this talk on the mysteries of LLMs at Devoxx France, with my good friend Didier Girard. It was fun to uncover the oddities of LLMs, and to better understand where they thrive or fail, and why. I also delivered this talk alone at Devoxx Poland.

In this post, I’d like to share an update of the presentation deck, with a few additional slides here and there, to cover, for example…

Read more...

Beyond the chatbot or AI sparkle: a seamless AI integration

When I talk about Generative AI, whether it’s with developers at conferences or with customers, I often find myself saying the same thing: chatbots are just one way to use Large Language Models (LLMs).

Unfortunately, I see many articles and presentations that focus solely on demonstrating LLMs at work within the context of chatbots. I’m guilty of showing the traditional chat interfaces too. But there’s so much more to it!

Read more...

Vibe coding an MCP server with Micronaut, LangChain4j, and Gemini

Unlike Quarkus and Spring Boot, Micronaut doesn’t (yet?) provide a module to facilitate the implementation of MCP (Model Context Protocol) servers. But since Micronaut is my favorite framework, I decided to see what it takes to build a quick implementation, by vibe coding it with the help of Gemini!

In a recent article, I explored how to use the MCP reference implementation for Java to implement an MCP server, served as a servlet via Jetty, and how to call that server from LangChain4j’s great MCP support. One approach with Micronaut would have been to somehow integrate the servlet I had built via Micronaut’s servlet support, but that didn’t really feel like a genuine, native way to implement a server, so I decided to do it from scratch.
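
To give an idea of the shape this takes, here is a minimal sketch of a Micronaut controller dispatching MCP’s JSON-RPC methods over a plain HTTP POST endpoint. It is a deliberate simplification (no error objects, no notifications, none of MCP’s standard transports like stdio or streamable HTTP), with a hypothetical echo tool:

```java
import io.micronaut.http.annotation.Body;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Post;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

@Controller("/mcp")
public class McpController {

    @Post
    public Map<String, Object> handle(@Body Map<String, Object> request) {
        String method = (String) request.get("method");
        Object result = switch (method) {
            // Handshake: advertise the server and its capabilities.
            case "initialize" -> Map.of(
                "protocolVersion", "2024-11-05",
                "capabilities", Map.of("tools", Map.of()),
                "serverInfo", Map.of("name", "micronaut-mcp", "version", "0.1.0"));
            // Describe the tools this server exposes.
            case "tools/list" -> Map.of("tools", List.of(Map.of(
                "name", "echo",
                "description", "Echoes its input back",
                "inputSchema", Map.of(
                    "type", "object",
                    "properties", Map.of("text", Map.of("type", "string"))))));
            // Execute a tool call and wrap the output as MCP text content.
            case "tools/call" -> {
                Map<?, ?> params = (Map<?, ?>) request.get("params");
                Map<?, ?> args = (Map<?, ?>) params.get("arguments");
                yield Map.of("content", List.of(
                    Map.of("type", "text", "text", String.valueOf(args.get("text")))));
            }
            default -> Map.of();
        };
        Map<String, Object> response = new HashMap<>();
        response.put("jsonrpc", "2.0");
        response.put("id", request.get("id")); // HashMap tolerates a null id
        response.put("result", result);
        return response;
    }
}
```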

Read more...

LLMs.txt to help LLMs grok your content

Since I started my career, I’ve been sharing what I’ve learned along the way on this blog. It makes me happy when developers find solutions to their problems, or discover new things, thanks to articles I’ve written here. So it’s important to me that readers are able to find those posts. Of course, my blog is indexed by search engines, and people usually find out about it via Google or other engines, or discover it through the links I share on social media. But there is also a way to make your content more easily grokkable by LLM-powered tools like Gemini, ChatGPT, or Claude: the llms.txt proposal.
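
The llms.txt proposal is a plain Markdown file served at /llms.txt, with a title, a short summary in a blockquote, and sections of links for LLMs to follow. A made-up example for a blog like this one (the URLs and descriptions are illustrative):

```markdown
# Guillaume Laforge's blog

> Articles on Java, Groovy, Generative AI, LLMs, and Google Cloud.

## Posts

- [Advanced RAG — Sentence Window Retrieval](https://example.dev/posts/rag-sentence-window/):
  embedding single sentences while returning their surrounding context
- [Vibe coding an MCP server](https://example.dev/posts/micronaut-mcp/):
  building an MCP server with Micronaut, LangChain4j, and Gemini
```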

Read more...

Advanced RAG — Sentence Window Retrieval

Retrieval Augmented Generation (RAG) is a great way to expand the knowledge of Large Language Models to let them know about your own data and documents. With RAG, LLMs can ground their answers on the information you provide, which reduces the chances of hallucinations.

Implementing RAG is fairly trivial with a framework like LangChain4j. However, the results may not be on par with your quality expectations. Often, you’ll need to further tweak different aspects of the RAG pipeline, like the document preparation phase (in particular, document chunking), or the retrieval phase, to find the best information in your vector database.
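
As a baseline before any tweaking, a minimal LangChain4j ingestion pipeline looks roughly like this. The splitter parameters (segment size and overlap) are arbitrary starting points, and you’d pass in whichever EmbeddingModel implementation you use (Gemini, a local model, etc.):

```java
import dev.langchain4j.data.document.Document;
import dev.langchain4j.data.document.splitter.DocumentSplitters;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.store.embedding.EmbeddingStoreIngestor;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

public class RagIngestion {

    static EmbeddingStore<TextSegment> ingest(Document document, EmbeddingModel embeddingModel) {
        EmbeddingStore<TextSegment> store = new InMemoryEmbeddingStore<>();
        EmbeddingStoreIngestor.builder()
            // The chunking strategy (segment size, overlap) is often the
            // first thing to tweak when retrieval quality disappoints.
            .documentSplitter(DocumentSplitters.recursive(300, 30))
            .embeddingModel(embeddingModel)
            .embeddingStore(store)
            .build()
            .ingest(document);
        return store;
    }
}
```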

Read more...