AI Inktober — Generating ink drawings with Imagen 3

Every October, the Inktober challenge takes place: each day of the month, you make a drawing representing that day’s official prompt word. I participated in some of the daily challenges in past years, but I never completed them all. This year, just for fun, I thought I could ask Google’s Imagen 3 image model to draw for me! Read more...

Lots of new cool Gemini stuff in LangChain4j 0.35.0

While LangChain4j 0.34 introduced my new Google AI Gemini module, a new 0.35.0 version is already here today, with some more cool stuff for Gemini and Google Cloud! Let’s have a look at what’s in store! Gemini 1.5 Pro 002 and Gemini 1.5 Flash 002: this week, Google announced the release of new versions of the Gemini 1.5 models, gemini-1.5-pro-002 and gemini-1.5-flash-002. Of course, both models are supported by LangChain4j! The Google AI Gemini module also supports the gemini-1. Read more...
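To show how the Google AI Gemini module is typically wired up with one of these new 002 models, here is a minimal sketch; the builder calls follow the module’s usual pattern, and the GEMINI_AI_KEY environment variable name is my own choice, not something from the article:

```java
import dev.langchain4j.model.googleai.GoogleAiGeminiChatModel;

public class Gemini002Demo {
    public static void main(String[] args) {
        // Build a chat model against the Google AI Gemini endpoint (not Vertex AI).
        // The API key environment variable name is chosen for this sketch.
        GoogleAiGeminiChatModel model = GoogleAiGeminiChatModel.builder()
                .apiKey(System.getenv("GEMINI_AI_KEY"))
                .modelName("gemini-1.5-flash-002") // one of the new 002 versions
                .build();

        // Simple text generation call.
        System.out.println(model.generate("Say hello in three languages."));
    }
}
```

The same builder works for gemini-1.5-pro-002; only the model name changes.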

New Gemini model in LangChain4j

A new version of LangChain4j, the super powerful LLM toolbox for Java developers, was released today. In 0.34.0, a new Gemini model was added. This time, it is not the Gemini flavor from Google Cloud Vertex AI, but the Google AI variant. It was a feature frequently requested by LangChain4j users, so I took a stab at developing a new chat model for it during my summer vacation break. Read more...

Let LLM suggest Instagram hashtags for your pictures

In this article, we’ll explore another great task where Large Language Models shine: entity and data extraction. LLMs are really useful beyond mere chatbots (even smart ones using Retrieval Augmented Generation). Let me tell you a little story of a handy application we could build for wannabe Instagram influencers! Great Instagram hashtags, thanks to LLMs: when posting Instagram pictures, I often struggle to find the right hashtags to engage with the community. Read more...
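As a rough idea of what such an application could look like with LangChain4j’s AiServices, here is a sketch where the LLM returns a list of hashtags for a picture description; the HashtagService interface, the prompt wording, and the example description are hypothetical, not taken from the article:

```java
import java.util.List;

import dev.langchain4j.model.vertexai.VertexAiGeminiChatModel;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.service.UserMessage;

public class HashtagSuggester {

    // Hypothetical service interface: the LLM extracts/generates hashtags
    // from a plain-text description of the picture.
    interface HashtagService {
        @UserMessage("Suggest 10 relevant Instagram hashtags for this picture: {{it}}")
        List<String> suggestHashtags(String pictureDescription);
    }

    public static void main(String[] args) {
        VertexAiGeminiChatModel model = VertexAiGeminiChatModel.builder()
                .project(System.getenv("PROJECT_ID"))
                .location(System.getenv("LOCATION"))
                .modelName("gemini-1.5-flash-001")
                .build();

        HashtagService service = AiServices.create(HashtagService.class, model);

        List<String> hashtags = service.suggestHashtags(
                "A golden retriever catching a frisbee on a sunny beach at sunset");
        hashtags.forEach(System.out::println);
    }
}
```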

Sentiment analysis with few-shot prompting

In a recent article, we talked about text classification using Gemini and LangChain4j. A typical example of text classification is sentiment analysis. In my LangChain4j-powered Gemini workshop, I used this use case to illustrate the classification problem:

ChatLanguageModel model = VertexAiGeminiChatModel.builder()
    .project(System.getenv("PROJECT_ID"))
    .location(System.getenv("LOCATION"))
    .modelName("gemini-1.5-flash-001")
    .maxOutputTokens(10)
    .maxRetries(3)
    .build();

PromptTemplate promptTemplate = PromptTemplate.from("""
    Analyze the sentiment of the text below.
    Respond only with one word to describe the sentiment.

    INPUT: This is fantastic news! Read more...
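To make the few-shot idea concrete, here is a minimal sketch of how such a prompt could be completed and sent to the model; the example inputs, labels, and variable names are my own, not taken from the workshop:

```java
import java.util.Map;

import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.input.Prompt;
import dev.langchain4j.model.input.PromptTemplate;
import dev.langchain4j.model.vertexai.VertexAiGeminiChatModel;

public class SentimentFewShot {
    public static void main(String[] args) {
        ChatLanguageModel model = VertexAiGeminiChatModel.builder()
                .project(System.getenv("PROJECT_ID"))
                .location(System.getenv("LOCATION"))
                .modelName("gemini-1.5-flash-001")
                .maxOutputTokens(10)
                .build();

        // Few-shot prompt: a couple of labeled examples steer the model
        // towards single-word answers (POSITIVE / NEGATIVE).
        PromptTemplate promptTemplate = PromptTemplate.from("""
            Analyze the sentiment of the text below.
            Respond only with one word to describe the sentiment.

            INPUT: This is fantastic news!
            OUTPUT: POSITIVE

            INPUT: The traffic this morning was horrible.
            OUTPUT: NEGATIVE

            INPUT: {{text}}
            OUTPUT:
            """);

        Prompt prompt = promptTemplate.apply(
                Map.of("text", "I really enjoyed that concert last night."));

        System.out.println(model.generate(prompt.text())); // expected: POSITIVE
    }
}
```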

Analyzing video, audio and PDF files with Gemini and LangChain4j

Certain models like Gemini are multimodal. This means that they accept more than just text as input. Some models support text and images, but Gemini goes further and also supports audio, video, and PDF files. So you can mix and match text prompts and different multimedia files or PDF documents. Until LangChain4j 0.32, the models could only support text and images, but since my PR got merged into the newly released 0. Read more...
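As an illustration of the multimodal message API, here is a minimal sketch mixing text and an image in a single user message; the image URI is a placeholder, and the newer AudioContent, VideoContent, and PdfFileContent types are assumed to follow the same from(uri) pattern as ImageContent:

```java
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.data.message.ImageContent;
import dev.langchain4j.data.message.TextContent;
import dev.langchain4j.data.message.UserMessage;
import dev.langchain4j.model.output.Response;
import dev.langchain4j.model.vertexai.VertexAiGeminiChatModel;

public class MultimodalDemo {
    public static void main(String[] args) {
        VertexAiGeminiChatModel model = VertexAiGeminiChatModel.builder()
                .project(System.getenv("PROJECT_ID"))
                .location(System.getenv("LOCATION"))
                .modelName("gemini-1.5-flash-001")
                .build();

        // A single user message can combine several pieces of content.
        // AudioContent, VideoContent, and PdfFileContent can be mixed in the
        // same way (assumption: they use the same from(uri) factory style).
        UserMessage message = UserMessage.from(
                TextContent.from("Describe what you see in this picture."),
                ImageContent.from("gs://my-bucket/picture.png") // placeholder URI
        );

        Response<AiMessage> response = model.generate(message);
        System.out.println(response.content().text());
    }
}
```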

Text classification with Gemini and LangChain4j

Generative AI has potential applications far beyond chatbots and Retrieval Augmented Generation. For example, a nice use case is text classification. I had the chance to meet some customers and prospects who needed to triage incoming requests, or to label existing data. In the first case, a government entity was tasked with routing citizens’ requests for access to undisclosed information to the right governmental service that could grant or reject that access. Read more...

Latest Gemini features support in LangChain4j 0.32.0

LangChain4j 0.32.0 was released yesterday, including my pull request with support for lots of new Gemini features:
- JSON output mode, to force Gemini to reply using JSON, without any markup,
- JSON schema, to control and constrain the JSON output to comply with a schema,
- Response grounding with Google Search web results and with private data in Vertex AI datastores,
- Easier debugging, thanks to new builder methods to log requests and responses,
- Function calling mode (none, automatic, or a subset of functions),
- Safety settings to catch harmful prompts and responses.
Read more...
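As an illustration of the JSON output mode and the debugging switches, here is a small sketch; the builder method names (responseMimeType, logRequests, logResponses) reflect LangChain4j’s usual conventions rather than the release notes verbatim, so treat them as assumptions:

```java
import dev.langchain4j.model.vertexai.VertexAiGeminiChatModel;

public class JsonModeDemo {
    public static void main(String[] args) {
        // JSON output mode plus request/response logging.
        // Builder method names are assumed from the usual LangChain4j naming
        // conventions; check them against the 0.32.0 javadoc.
        VertexAiGeminiChatModel model = VertexAiGeminiChatModel.builder()
                .project(System.getenv("PROJECT_ID"))
                .location(System.getenv("LOCATION"))
                .modelName("gemini-1.5-flash-001")
                .responseMimeType("application/json") // pure JSON replies, no markup
                .logRequests(true)                    // easier debugging
                .logResponses(true)
                .build();

        System.out.println(model.generate(
                "List three European capitals as a JSON array of objects with 'city' and 'country' keys."));
    }
}
```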

Let's make Gemini Groovy!

The happy users of Gemini Advanced, the powerful AI web assistant powered by the Gemini model, can execute some Python code, thanks to a built-in Python interpreter. So, for math, logic, or calculation questions, the assistant can have Gemini write a Python script and execute it, so that users get a more accurate answer to their queries. But with my Apache Groovy hat on, I wondered if I could get Gemini to invoke some Groovy scripts as well, for advanced math questions! Read more...
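The plumbing for that idea is simple enough to sketch: ask the model for a Groovy script, then evaluate it with GroovyShell. This is my own minimal reconstruction under those assumptions, not the article’s actual code; the prompt wording is hypothetical:

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.vertexai.VertexAiGeminiChatModel;
import groovy.lang.GroovyShell;

public class GroovyCalculator {
    public static void main(String[] args) {
        ChatLanguageModel model = VertexAiGeminiChatModel.builder()
                .project(System.getenv("PROJECT_ID"))
                .location(System.getenv("LOCATION"))
                .modelName("gemini-1.5-flash-001")
                .build();

        // Ask Gemini to answer with code rather than with an approximate number.
        // (In practice, you may also want to strip markdown code fences from the reply.)
        String script = model.generate("""
            Write a Groovy script that computes the sum of the squares of the first 1000 integers.
            Reply with only the Groovy code, without markdown formatting or explanations.
            The script must end with an expression whose value is the result.
            """);

        // Execute the generated script locally and print the result.
        Object result = new GroovyShell().evaluate(script);
        System.out.println(result);
    }
}
```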

Grounding Gemini with Web Search results in LangChain4j

The latest release of LangChain4j (version 0.31) added the capability of grounding large language models with results from web searches. There’s an integration with Google Custom Search Engine, and also with Tavily. Grounding an LLM’s response with results from a search engine lets the model find relevant information about the query on the web, information that is likely more up-to-date than anything the model saw during training, before its cut-off date. Read more...
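Here is a rough sketch of how such grounding can be wired, with a web search engine acting as a content retriever for an AI service; the class names (TavilyWebSearchEngine, WebSearchContentRetriever), their packages, and the builder methods are written from memory of the LangChain4j web search modules and should be treated as assumptions:

```java
import dev.langchain4j.model.vertexai.VertexAiGeminiChatModel;
import dev.langchain4j.rag.content.retriever.WebSearchContentRetriever;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.web.search.WebSearchEngine;
import dev.langchain4j.web.search.tavily.TavilyWebSearchEngine;

public class GroundedAssistant {

    // Hypothetical assistant interface for this sketch.
    interface Assistant {
        String answer(String question);
    }

    public static void main(String[] args) {
        VertexAiGeminiChatModel model = VertexAiGeminiChatModel.builder()
                .project(System.getenv("PROJECT_ID"))
                .location(System.getenv("LOCATION"))
                .modelName("gemini-1.5-flash-001")
                .build();

        // Tavily web search engine (needs a Tavily API key).
        WebSearchEngine searchEngine = TavilyWebSearchEngine.builder()
                .apiKey(System.getenv("TAVILY_API_KEY"))
                .build();

        // Retrieve a few web results and inject them into the prompt (RAG-style grounding).
        WebSearchContentRetriever retriever = WebSearchContentRetriever.builder()
                .webSearchEngine(searchEngine)
                .maxResults(3)
                .build();

        Assistant assistant = AiServices.builder(Assistant.class)
                .chatLanguageModel(model)
                .contentRetriever(retriever)
                .build();

        System.out.println(assistant.answer("What was announced at the latest Google Cloud Next?"));
    }
}
```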