The power of large context windows for your documentation efforts

My colleague Jaana Dogan pointed me at Anthropic’s MCP (Model Context Protocol) documentation pages, which describe how to build MCP servers and clients. The interesting twist is that the documentation is prepared so that Claude can assist you in building those MCP servers & clients, rather than merely documenting how to build them yourself.
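
The same idea applies to Gemini: with a large context window, you can paste the entire documentation into the prompt and let the model build against it. Here’s a minimal sketch with LangChain4j; the model name, file path, and prompt are illustrative assumptions, not taken from the original post:

```java
import dev.langchain4j.model.googleai.GoogleAiGeminiChatModel;
import java.nio.file.Files;
import java.nio.file.Path;

public class DocsInContext {
    public static void main(String[] args) throws Exception {
        // Load the full documentation into memory: a large context window
        // (a million-plus tokens for Gemini 1.5 Pro) can hold an entire doc set.
        String docs = Files.readString(Path.of("mcp-docs.txt")); // hypothetical file

        var model = GoogleAiGeminiChatModel.builder()
            .apiKey(System.getenv("GEMINI_API_KEY"))
            .modelName("gemini-1.5-pro-002") // large-context model
            .build();

        // Pass the whole documentation as context, then ask the model
        // to build against it instead of paraphrasing it.
        String answer = model.generate(
            "Here is the full documentation:\n\n" + docs +
            "\n\nUsing it, write an MCP server skeleton.");
        System.out.println(answer);
    }
}
```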


A Generative AI Agent with a real declarative workflow

In my previous article, I detailed how to build an AI-powered short story generation agent using Java, LangChain4j, Gemini, and Imagen 3, deployed on Cloud Run jobs.

This approach involved writing explicit Java code to orchestrate the entire workflow, defining each step programmatically. This follow-up article explores an alternative, declarative approach using Google Cloud Workflows.

I’ve written extensively on Workflows in the past, and for AI agents that follow a very explicit plan and orchestration, I believe Workflows is a great approach for building such declarative agents.

Read more...

An AI agent to generate short sci-fi stories

This project demonstrates how to build a fully automated short story generator using Java, LangChain4j, Google Cloud’s Gemini and Imagen 3 models, and a serverless deployment on Cloud Run.

Every night at midnight UTC, a new story is created, complete with AI-generated illustrations, and published via Firebase Hosting. So if you want to read a new story every day, head over to:

short-ai-story.web.app

The code of this agent is available on GitHub, so don’t hesitate to check it out:

Read more...

Let's think with Gemini 2.0 Flash's experimental thinking mode and LangChain4j

Yesterday, Google released yet another cool Gemini model update: Gemini 2.0 Flash thinking mode. By natively and transparently integrating chain-of-thought techniques, the model can take more thinking time: it automatically decomposes a complex task into smaller steps and explores various paths in its thinking process. Thanks to this approach, Gemini 2.0 Flash is able to solve more complex problems than Gemini 1.5 Pro or the recent Gemini 2.0 Flash experimental model.
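
Here’s a minimal sketch of trying the thinking model from LangChain4j; the experimental model name and the sample prompt are assumptions on my part:

```java
import dev.langchain4j.model.googleai.GoogleAiGeminiChatModel;

public class ThinkingDemo {
    public static void main(String[] args) {
        // Point the Google AI Gemini module at the experimental
        // thinking model (the model name is an assumption).
        var model = GoogleAiGeminiChatModel.builder()
            .apiKey(System.getenv("GEMINI_API_KEY"))
            .modelName("gemini-2.0-flash-thinking-exp")
            .build();

        // The model takes extra "thinking" time, decomposing the
        // problem into steps before answering.
        System.out.println(model.generate(
            "How many R's are there in the word 'strawberry'?"));
    }
}
```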

Read more...

Detecting objects with Gemini 2.0 and LangChain4j

Hot on the heels of the announcement of Gemini 2.0, I played with the new experimental model, both from within Google AI Studio and with LangChain4j.

Google released Gemini 2.0 Flash with new modalities, interleaving images, audio, text, and video, both in input and output. There’s even a live bidirectional speech-to-speech mode, which is really exciting!
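
The object detection trick of the title boils down to asking the model for bounding box coordinates. Here’s a minimal sketch with LangChain4j; the model name, the image URL, and the coordinate convention in the prompt are illustrative assumptions:

```java
import dev.langchain4j.data.message.ImageContent;
import dev.langchain4j.data.message.TextContent;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.googleai.GoogleAiGeminiChatModel;

public class ObjectDetection {
    public static void main(String[] args) {
        var model = GoogleAiGeminiChatModel.builder()
            .apiKey(System.getenv("GEMINI_API_KEY"))
            .modelName("gemini-2.0-flash-exp") // the experimental 2.0 model
            .build();

        // Send an image alongside a prompt asking for bounding boxes as JSON.
        var message = UserMessage.from(
            ImageContent.from("https://example.com/street-scene.jpg"), // hypothetical image
            TextContent.from("Detect all the objects in this picture. " +
                "Reply with a JSON array of {label, box_2d} objects, where " +
                "box_2d is [ymin, xmin, ymax, xmax] normalized to 0-1000."));

        System.out.println(model.generate(message).content().text());
    }
}
```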

When experimenting with AI Studio, what caught my attention was its new starter apps section. There are 3 examples (including links to the GitHub projects showing how they were implemented):

Read more...

Lots of new cool Gemini stuff in LangChain4j 0.35.0

While LangChain4j 0.34 introduced my new Google AI Gemini module, a new 0.35.0 version is already here today, with some more cool stuff for Gemini and Google Cloud!

Let’s have a look at what’s in store!

Gemini 1.5 Pro 002 and Gemini 1.5 Flash 002

This week, Google announced the release of new versions of the Gemini 1.5 models:

  • gemini-1.5-pro-002
  • gemini-1.5-flash-002

Of course, both models are supported by LangChain4j! The Google AI Gemini module also supports the gemini-1.5-flash-8b-exp-0924 8-billion parameter model.
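
Using the new versions is simply a matter of picking the right model name; a minimal sketch, assuming the same builder pattern as the other examples:

```java
import dev.langchain4j.model.googleai.GoogleAiGeminiChatModel;

public class NewModels {
    public static void main(String[] args) {
        // Target one of the freshly released 002 versions by name.
        var model = GoogleAiGeminiChatModel.builder()
            .apiKey(System.getenv("GEMINI_API_KEY"))
            .modelName("gemini-1.5-flash-002")
            // .modelName("gemini-1.5-flash-8b-exp-0924") // or the 8B experimental model
            .build();

        System.out.println(model.generate("Hello, Gemini 1.5 Flash 002!"));
    }
}
```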

Read more...

New Gemini model in LangChain4j

A new version of LangChain4j, the super powerful LLM toolbox for Java developers, was released today. In 0.34.0, a new Gemini chat model has been added. This time, it’s not the Gemini flavor from Google Cloud Vertex AI, but the Google AI variant.

It was a feature frequently requested by LangChain4j users, so I took a stab at developing a new chat model for it during my summer vacation break.

Gemini, show me the code!

Let’s dive into some code examples to see it in action!
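
As a first taste, here’s a minimal sketch of the new Google AI chat model; reading the API key from an environment variable is my assumption for configuration:

```java
import dev.langchain4j.model.googleai.GoogleAiGeminiChatModel;

public class GoogleAiGeminiDemo {
    public static void main(String[] args) {
        // The Google AI variant needs no Google Cloud project:
        // just an API key obtained from AI Studio.
        var model = GoogleAiGeminiChatModel.builder()
            .apiKey(System.getenv("GEMINI_API_KEY"))
            .modelName("gemini-1.5-flash")
            .build();

        System.out.println(model.generate("Why is the sky blue?"));
    }
}
```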

Read more...

Gemini Nano running locally in your browser

Generative AI use cases are usually about running large language models somewhere in the cloud. However, with the advent of smaller models and open models, you can run them locally on your machine, with projects like llama.cpp or Ollama.

And what about in the browser? With MediaPipe and TensorFlow.js, you can train and run small neural networks for tons of fun and useful tasks (like recognising hand movements through the webcam of your computer), and it’s also possible to run Gemma 2B and even 7B models.

Read more...

Analyzing video, audio and PDF files with Gemini and LangChain4j

Certain models like Gemini are multimodal. This means that they accept more than just text as input. Some models support text and images, but Gemini goes further and also supports audio, video, and PDF files. So you can mix and match text prompts and different multimedia files or PDF documents.

Until LangChain4j 0.32, the framework only supported text and images, but since my PR got merged into the newly released 0.33 version, you can use all those file types with the LangChain4j Gemini module!
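
Here’s a minimal sketch of mixing a PDF with a text prompt through the Vertex AI Gemini module; the content class names and the sample Cloud Storage URI reflect my understanding of the 0.33 API, so treat them as assumptions:

```java
import dev.langchain4j.data.message.PdfFileContent;
import dev.langchain4j.data.message.TextContent;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.vertexai.VertexAiGeminiChatModel;

public class PdfAnalysis {
    public static void main(String[] args) {
        var model = VertexAiGeminiChatModel.builder()
            .project(System.getenv("GCP_PROJECT_ID"))
            .location("us-central1")
            .modelName("gemini-1.5-flash")
            .build();

        // Mix a text prompt with a PDF stored in Cloud Storage
        // (AudioContent and VideoContent work the same way).
        var message = UserMessage.from(
            PdfFileContent.from("gs://my-bucket/report.pdf"), // hypothetical URI
            TextContent.from("Summarize this PDF document in 3 bullet points."));

        System.out.println(model.generate(message).content().text());
    }
}
```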

Read more...

Latest Gemini features support in LangChain4j 0.32.0

LangChain4j 0.32.0 was released yesterday, including my pull request with the support for lots of new Gemini features:

  • JSON output mode, to force Gemini to reply using JSON, without any markup,
  • JSON schema, to control and constrain the JSON output to comply with a schema,
  • Response grounding with Google Search web results and with private data in Vertex AI datastores,
  • Easier debugging, thanks to new builder methods to log requests and responses,
  • Function calling mode (none, automatic, or a subset of functions),
  • Safety settings to catch harmful prompts and responses.

Let’s explore those new features together, thanks to some code examples! And at the end of the article, if you make it through, you’ll also discover 2 extra bonus points.
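
As a preview, here’s a minimal sketch combining two of those features, JSON output mode and request/response logging; the builder method names reflect my reading of the 0.32.0 API:

```java
import dev.langchain4j.model.vertexai.VertexAiGeminiChatModel;

public class JsonModeDemo {
    public static void main(String[] args) {
        var model = VertexAiGeminiChatModel.builder()
            .project(System.getenv("GCP_PROJECT_ID"))
            .location("us-central1")
            .modelName("gemini-1.5-flash")
            .responseMimeType("application/json") // JSON output mode, no markdown fences
            .logRequests(true)                    // easier debugging
            .logResponses(true)
            .build();

        System.out.println(model.generate(
            "List 3 planets as a JSON array of {name, diameter_km} objects."));
    }
}
```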

Read more...