Guillaume Laforge

Hands on Codelabs to dabble with Large Language Models in Java

Hot on the heels of the release of Gemini, I’d like to share a couple of resources I created to help you get hands-on with large language models, using LangChain4J and the PaLM 2 model. Later on, I’ll also share articles and codelabs that take advantage of Gemini, of course.

The PaLM 2 model supports 2 modes:

  • text generation,
  • and chat.

In both codelabs, you’ll need to have created an account on Google Cloud and created a project. The codelabs will guide you through the steps to set up the environment, and show you how to use the Google Cloud built-in shell and code editor to develop in the cloud.
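
To give you a taste of what the codelabs walk through, here is a minimal sketch of the text generation mode, calling PaLM 2 through LangChain4J’s Vertex AI integration (the langchain4j-vertex-ai artifact); the project ID is a placeholder, and the exact return type of generate() depends on the library version. The chat mode is covered by a similar VertexAiChatModel builder.

```java
import dev.langchain4j.model.vertexai.VertexAiLanguageModel;

public class PalmTextSample {
    public static void main(String[] args) {
        // Placeholder project ID; use your own Google Cloud project and region.
        VertexAiLanguageModel model = VertexAiLanguageModel.builder()
            .endpoint("us-central1-aiplatform.googleapis.com:443")
            .project("your-project-id")
            .location("us-central1")
            .publisher("google")
            .modelName("text-bison@001")
            .maxOutputTokens(400)
            .build();

        // Text generation mode: one prompt in, one completion out.
        // Depending on the LangChain4J version, generate() returns a plain String
        // or a Response<String> wrapper.
        var completion = model.generate("Give me three fun facts about the Java language");
        System.out.println(completion);
    }
}
```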

Read more...

Get Started with Gemini in Java

Google announced today the availability of Gemini, its latest and most powerful Large Language Model. Gemini is multimodal, which means it’s able to consume not only text, but also images or videos.

I had the pleasure of working on the Java samples and helping with the Java SDK, alongside wonderful engineer colleagues, and I’d like to share some examples of what you can do with Gemini, using Java!

First of all, you’ll need an account on Google Cloud and a project. The Vertex AI API must be enabled to access the Generative AI services, and in particular the Gemini large language model. Be sure to check out the instructions.
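
To give you an idea, here is a minimal sketch of a text-only call to Gemini with the Vertex AI Java SDK; the project ID is a placeholder, and package names moved out of a preview namespace across early releases, so adjust the imports to the SDK version you use.

```java
import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.ResponseHandler;

public class GeminiTextSample {
    public static void main(String[] args) throws Exception {
        // Placeholder project ID and region; the Vertex AI API must be enabled on the project.
        try (VertexAI vertexAI = new VertexAI("your-project-id", "us-central1")) {
            GenerativeModel model = new GenerativeModel("gemini-pro", vertexAI);

            // Simple text prompt; Gemini also accepts images and videos in multimodal requests.
            GenerateContentResponse response =
                model.generateContent("What is the largest planet of the solar system?");

            System.out.println(ResponseHandler.getText(response));
        }
    }
}
```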

Read more...

Generative AI in practice: Concrete LLM use cases in Java, with the PaLM API

Large Language Models, available through easy-to-use APIs, put powerful machine learning tools in the hands of developers. Although Python is usually seen as the lingua franca of everything ML, with LLM APIs and LLM orchestration frameworks, complex tasks become easier to implement for enterprise developers.

Abstract

Large language models (LLMs) are a powerful new technology that can be used for a variety of tasks, including generating text, translating languages, and writing different kinds of creative content. However, LLMs can be difficult to use, especially for developers who are not proficient in Python, the lingua franca for AI. So what about us Java developers? How can we make use of Generative AI?

Read more...

Tech Watch #5 — November 15, 2023

  • Some friends shared this article from Uwe Friedrichsen, titled back to the future, that talks about this feeling of “déjà-vu”, this impression that in IT we keep on reinventing the wheel. With references to mainframes, Uwe compared CICS to Lambda function scheduling, JCL to step functions, and mainframe software development environments to the trendy platform engineering. There are two things I like about this article. First of all, it rings a bell with me, as we’ve seen the pendulum swing as we keep reinventing some patterns or rediscovering certain best practices, sometimes favoring an approach one day, and coming back to another approach the next day. But secondly, Uwe referenced Gunter Dueck, who talked about spirals rather than a pendulum. I’ve had that same analogy in mind for years: rather than swinging from one side to the other and back, I always had this impression that we’re circling and spiraling, but each time, even when passing on the same side, we’ve learned something along the way, and we’re getting closer to an optimum, with a slightly different view angle, and hopefully with a better view and more modern practices. Last week at FooConf #2 in Helsinki, I was just talking with my friend Venkat Subramaniam about this spiral visualisation, and I’m glad to see I’m not the only one thinking that IT is spiraling rather than swinging like a pendulum.

    Read more...

Tech Watch #3 — October 20, 2023

  • Stop Using char in Java. And Code Points
    It’s a can of worms when you start messing with chars and code points, and you’re likely to get it wrong in the end. As much as possible, stay away from chars and code points; instead, rely on String methods like indexOf() / substring(), and on some regex when you really need to find grapheme clusters (see the sketch after this list).

  • Paul King shared his presentations on Why use Groovy in 2023 and an update on the Groovy 5 roadmap. It’s interesting to see how and where Groovy goes beyond what is offered by Java, sometimes thanks to its dynamic nature, sometimes because of its compile-time transformation capabilities. When Groovy adopts the latest Java features, there’s always a twist to make things even groovier in Groovy!
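
To illustrate the first point, here is a small sketch showing how char counts, code points, and grapheme clusters diverge on a simple emoji sequence (the \X grapheme matcher and Matcher.results() require Java 9 or later):

```java
import java.util.regex.Pattern;

public class GraphemeDemo {
    public static void main(String[] args) {
        // A single "family" emoji, actually built from several code points joined with ZWJ.
        String family = "👨‍👩‍👧";

        // Number of UTF-16 code units (what char and length() really count).
        System.out.println(family.length());                            // 8

        // Number of Unicode code points.
        System.out.println(family.codePointCount(0, family.length()));  // 5

        // Number of grapheme clusters, i.e. what a human perceives as one character.
        long graphemes = Pattern.compile("\\X").matcher(family).results().count();
        System.out.println(graphemes);                                   // 1
    }
}
```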

    Read more...

Client-side consumption of a rate-limited API in Java

In the literature, you’ll easily find information on how to rate-limit your API. I even talked about Web API rate limitation years ago at a conference, covering the usage of HTTP headers like X-RateLimit-*.

Rate limiting is important to help your service cope with too much load, or to implement a tiered pricing scheme (the more you pay, the more requests you’re allowed to make in a certain amount of time). There are useful libraries like Resilience4j that you can configure for your Micronaut web controllers, or Bucket4j for your Spring controllers.
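
As a taste of the client-side angle, here is a minimal sketch using Resilience4j’s standalone RateLimiter to throttle outgoing calls; the limits and the callRemoteApi() method are made up for the example.

```java
import io.github.resilience4j.ratelimiter.RateLimiter;
import io.github.resilience4j.ratelimiter.RateLimiterConfig;
import io.github.resilience4j.ratelimiter.RateLimiterRegistry;

import java.time.Duration;
import java.util.function.Supplier;

public class ClientSideRateLimiting {
    public static void main(String[] args) {
        // Allow at most 10 calls per second from this client,
        // and wait up to 2 seconds for a permit before failing.
        RateLimiterConfig config = RateLimiterConfig.custom()
            .limitForPeriod(10)
            .limitRefreshPeriod(Duration.ofSeconds(1))
            .timeoutDuration(Duration.ofSeconds(2))
            .build();

        RateLimiter rateLimiter = RateLimiterRegistry.of(config).rateLimiter("remote-api");

        // Decorate the outgoing call so each invocation first acquires a permit.
        Supplier<String> throttledCall =
            RateLimiter.decorateSupplier(rateLimiter, ClientSideRateLimiting::callRemoteApi);

        for (int i = 0; i < 20; i++) {
            System.out.println(throttledCall.get());
        }
    }

    // Hypothetical remote call, standing in for the real rate-limited API.
    private static String callRemoteApi() {
        return "response from the remote API";
    }
}
```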

Read more...

Discovering LangChain4J, the Generative AI orchestration library for Java developers

As I started my journey with Generative AI and Large Language Models, I’ve been overwhelmed with the omnipresence of Python. Tons of resources are available with Python front and center. However, I’m a Java developer (with a penchant for Apache Groovy, of course). So what is there for me to create cool new Generative AI projects?

When I built my first experiment with the PaLM API, using the integration within Google Cloud’s Vertex AI offering, I called the available REST API from my Micronaut application. I used Micronaut’s built-in mechanism to marshal / unmarshal the REST API constructs to proper classes. Pretty straightforward.
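
For context, the kind of declarative Micronaut client I’m alluding to looks roughly like the following sketch; the path and the request/response records are simplified placeholders rather than the exact PaLM REST contract, and a real call also needs an OAuth2 bearer token.

```java
import io.micronaut.http.annotation.Body;
import io.micronaut.http.annotation.Header;
import io.micronaut.http.annotation.Post;
import io.micronaut.http.client.annotation.Client;

import java.util.List;

// Declarative Micronaut HTTP client: Micronaut generates the implementation
// and marshals the records to and from JSON.
@Client("https://us-central1-aiplatform.googleapis.com")
public interface PalmApiClient {

    // Simplified placeholder path and payloads, not the exact PaLM REST contract.
    @Post("/v1/projects/{project}/locations/us-central1/publishers/google/models/text-bison:predict")
    PredictionResponse predict(String project,
                               @Header("Authorization") String bearerToken,
                               @Body PredictionRequest request);

    record PredictionRequest(List<Instance> instances) {}
    record Instance(String prompt) {}
    record PredictionResponse(List<Prediction> predictions) {}
    record Prediction(String content) {}
}
```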

Read more...

Creating kids stories with Generative AI

Last week, I wrote about how to get started with the PaLM API in the Java ecosystem, and particularly, how to overcome the lack of Java client libraries (at least for now) for the PaLM API, and how to properly authenticate. However, what I didn’t explain was what I was building! Let’s fix that today, by telling you a story, a kids’ story! Yes, I was using the trendy Generative AI approach to generate bedtime stories for kids.

Read more...

Getting started with the PaLM API in the Java ecosystem

Large Language Models (LLMs for short) are taking the world by storm, and things like ChatGPT have become very popular, used by millions of users daily. Google came up with its own chatbot called Bard, which is powered by its ground-breaking PaLM 2 model and API. You can also find and use the PaLM API from within Google Cloud (as part of the Vertex AI Generative AI products) and thus create your own applications based on that API. However, if you look at the documentation, you’ll only find Python tutorials or notebooks, or explanations on how to make cURL calls to the API. But since I’m a Java (and Groovy) developer at heart, I was interested in seeing how to do this from the Java world.
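
As a flavour of that approach, the following sketch calls the Vertex AI PaLM endpoint directly with the JDK’s built-in HTTP client; the project ID is a placeholder, the JSON body is simplified, and the access token is assumed to come from gcloud auth print-access-token or the Google Auth library, as discussed in the article.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PalmRestCall {
    public static void main(String[] args) throws Exception {
        // Placeholder project ID; the access token would typically come from
        // `gcloud auth print-access-token` or the Google Auth Java library.
        String project = "your-project-id";
        String accessToken = System.getenv("ACCESS_TOKEN");

        String endpoint = "https://us-central1-aiplatform.googleapis.com/v1/projects/" + project
            + "/locations/us-central1/publishers/google/models/text-bison:predict";

        // Simplified JSON payload with a single prompt instance.
        String body = """
            {"instances": [{"prompt": "Suggest a title for a bedtime story about a dragon"}],
             "parameters": {"maxOutputTokens": 256}}
            """;

        HttpRequest request = HttpRequest.newBuilder(URI.create(endpoint))
            .header("Authorization", "Bearer " + accessToken)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body());
    }
}
```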

Read more...

Build and deploy Java 17 apps on Cloud Run with Cloud Native Buildpacks on Temurin

In this article, let’s revisit the topic of deploying Java apps on Cloud Run. In particular, I’ll deploy a Micronaut app, written with Java 17, and built with Gradle.

With a custom Dockerfile

On Cloud Run, you deploy containerised applications, so you have to decide how you want to build a container for your application. In a previous article, I showed an example of using your own Dockerfile, based on an OpenJDK 17 image and enabling preview features of the language.
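
The article shows the actual file, but as a rough idea, a sketch along those lines could look like this (the base image tag and jar path are assumptions, not the article’s exact Dockerfile):

```dockerfile
# Assumed OpenJDK 17 base image and jar path, for illustration only.
FROM eclipse-temurin:17
COPY build/libs/app-all.jar app.jar
# --enable-preview switches on the Java 17 preview language features.
ENTRYPOINT ["java", "--enable-preview", "-jar", "app.jar"]
```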

Read more...