โฏ Guillaume Laforge

Micronaut

Visualize PaLM-based LLM tokens

As I was working on tweaking the Vertex AI text embedding model in LangChain4j, I wanted to better understand how the textembedding-gecko model tokenizes text, in particular when implementing the Retrieval Augmented Generation approach. The various PaLM-based models offer a computeTokens endpoint, which returns a list of tokens (encoded in Base64) and their respective IDs. Note: At the time of this writing, there’s no equivalent endpoint for Gemini models. Read more...
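To see those tokens for yourself, you can call the computeTokens endpoint directly over REST. Below is a minimal sketch in plain Java, assuming a placeholder project ID and an access token obtained separately; the exact JSON field names of the response (the Base64-encoded tokens and their IDs) are best checked against the Vertex AI documentation.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ComputeTokensDemo {
    public static void main(String[] args) throws Exception {
        String project  = "my-gcp-project";      // placeholder: your Google Cloud project ID
        String location = "us-central1";
        String model    = "textembedding-gecko";
        // OAuth 2 access token for the cloud-platform scope,
        // e.g. obtained with: gcloud auth print-access-token
        String token = System.getenv("ACCESS_TOKEN");

        // computeTokens endpoint of the PaLM-based publisher models on Vertex AI
        String url = String.format(
            "https://%s-aiplatform.googleapis.com/v1/projects/%s/locations/%s/publishers/google/models/%s:computeTokens",
            location, project, location, model);

        String body = "{\"instances\": [{\"content\": \"Once upon a time, a dragon lived in a castle\"}]}";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
            .header("Authorization", "Bearer " + token)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        // The response body is JSON listing the Base64-encoded tokens and their token IDs
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```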

Serving static assets with Micronaut

My go-to framework when developing Java apps or microservices is Micronaut. For the apps that need a web frontend, I rarely use Micronaut Views and its templating support. Instead, I prefer to serve static assets straight from my resources folder, and let some JavaScript framework (usually Vue.js) populate my HTML content (often using Shoelace for its nice Web Components). However, the static asset documentation is a bit light on explanations. Read more...
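For reference, the kind of configuration the article digs into looks roughly like this. It’s a sketch of Micronaut’s static-resources settings, assuming the assets live under src/main/resources/public; check the documentation of your Micronaut version for the exact keys.

```yaml
# application.yml: serve files from src/main/resources/public at the root of the app
micronaut:
  router:
    static-resources:
      default:
        enabled: true
        mapping: /**
        paths: classpath:public
```

With that mapping, files placed under the public folder (index.html, JavaScript bundles, CSS) are served directly by Micronaut, no template engine involved.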

Discovering LangChain4J, the Generative AI orchestration library for Java developers

As I started my journey with Generative AI and Large Language Models, I’ve been overwhelmed by the omnipresence of Python: tons of resources are available with Python front and center. However, I’m a Java developer (with a penchant for Apache Groovy, of course). So what is out there for me to create cool new Generative AI projects? When I built my first experiment with the PaLM API, using its integration in Google Cloud’s Vertex AI offering, I called the available REST API directly from my Micronaut application. Read more...
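As a taste of what the library enables, here is a minimal sketch of calling the PaLM text model through LangChain4j’s Vertex AI module. The class and builder parameter names below (VertexAiLanguageModel, endpoint, project, location, publisher, modelName) reflect the langchain4j-vertex-ai module of that period and should be treated as assumptions to double-check against the version you actually use; the project ID is a placeholder.

```java
import dev.langchain4j.model.vertexai.VertexAiLanguageModel;

public class LangChain4jPalmDemo {
    public static void main(String[] args) {
        // Point the model at the PaLM 2 text model (text-bison) hosted on Vertex AI
        VertexAiLanguageModel model = VertexAiLanguageModel.builder()
            .endpoint("us-central1-aiplatform.googleapis.com:443")
            .project("my-gcp-project")      // placeholder: your Google Cloud project ID
            .location("us-central1")
            .publisher("google")
            .modelName("text-bison")
            .temperature(0.5)
            .build();

        // Depending on the LangChain4j version, generate() returns either a plain String
        // or a Response wrapper around it; adjust accordingly.
        System.out.println(model.generate("Suggest three titles for a bedtime story about a shy dragon"));
    }
}
```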

Creating kids stories with Generative AI

Last week, I wrote about how to get started with the PaLM API in the Java ecosystem, and in particular how to overcome the lack of Java client libraries (at least for now) for the PaLM API, and how to properly authenticate. However, what I didn’t explain was what I was building! Let’s fix that today, by telling you a story, a kids’ story! Yes, I was using the trendy Generative AI approach to generate bedtime stories for kids. Read more...

Getting started with the PaLM API in the Java ecosystem

Large Language Models (LLMs for short) are taking the world by storm, and tools like ChatGPT have become very popular, used by millions of users daily. Google came up with its own chatbot called Bard, which is powered by its ground-breaking PaLM 2 model and API. You can also find and use the PaLM API from within Google Cloud (as part of the Vertex AI Generative AI products) and thus create your own applications based on that API. Read more...
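Concretely, once you’re authenticated against a Google Cloud project, calling the PaLM API boils down to sending a JSON payload to the model’s predict endpoint. Here is a minimal sketch using Application Default Credentials and plain Java; the project ID is a placeholder, and the request body follows the documented format for the text-bison model.

```java
// requires the google-auth-library-oauth2-http dependency
import com.google.auth.oauth2.GoogleCredentials;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PalmPredictDemo {
    public static void main(String[] args) throws Exception {
        // Application Default Credentials: locally, run
        //   gcloud auth application-default login
        // or point GOOGLE_APPLICATION_CREDENTIALS at a service account key file.
        GoogleCredentials credentials = GoogleCredentials.getApplicationDefault()
            .createScoped("https://www.googleapis.com/auth/cloud-platform");
        credentials.refreshIfExpired();
        String token = credentials.getAccessToken().getTokenValue();

        // Placeholder project; text-bison is the PaLM 2 text model on Vertex AI
        String url = "https://us-central1-aiplatform.googleapis.com/v1/projects/my-gcp-project"
                   + "/locations/us-central1/publishers/google/models/text-bison:predict";

        String body = """
            {"instances":  [{"prompt": "Write a one-sentence bedtime story about a curious robot"}],
             "parameters": {"temperature": 0.5, "maxOutputTokens": 256}}""";

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
            .header("Authorization", "Bearer " + token)
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();

        // The JSON response contains the generated text under the predictions field
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```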

Build and deploy Java 17 apps on Cloud Run with Cloud Native Buildpacks on Temurin

In this article, let’s revisit the topic of deploying Java apps on Cloud Run. In particular, I’ll deploy a Micronaut app, written with Java 17, and built with Gradle. On Cloud Run, you deploy containerised applications, so you have to decide how you want to build a container for your application. In a previous article, I showed an example of using your own custom Dockerfile, which would look as follows with an OpenJDK 17 image and the language’s preview features enabled: Read more...
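The teaser stops right before the Dockerfile itself, but as a rough sketch of what such a file can look like (not necessarily the exact one from the article, and assuming the Gradle build produces a runnable fat jar under build/libs):

```dockerfile
# Sketch of a custom Dockerfile for a Java 17 Micronaut app built with Gradle
FROM openjdk:17
WORKDIR /app
# assumes the Gradle Shadow plugin produces a single runnable "-all" jar
COPY build/libs/*-all.jar app.jar
EXPOSE 8080
# --enable-preview turns on the Java 17 preview language features mentioned above
CMD ["java", "--enable-preview", "-jar", "app.jar"]
```

The article itself, as its title suggests, is about using Cloud Native Buildpacks with Temurin instead, so you don’t have to hand-write this file at all.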

Reuse old smartphones to monitor 3D prints with WebRTC WebSockets and serverless

Monitoring my 3D prints in my basement means climbing lots of stairs back and forth! So here’s my story about how I reused an old smartphone to check the status of my prints. I built a small web app that uses WebRTC to exchange video streams between my broadcasting smartphone and viewers, with WebSockets for signaling, and a serverless platform for easily deploying and hosting my containerized app. Read more...

Skyrocketing Micronaut microservices into Google Cloud

Instead of spending too much time on infrastructure, take advantage of readily available serverless solutions. Focus on your Micronaut code, and deploy it rapidly as a function, an application, or within a container, on Google Cloud Platform, with Cloud Functions, App Engine, or Cloud Run. In this presentation, you’ll discover the options you have to deploy your Micronaut applications and services on Google Cloud. With Micronaut Launch, it’s easy to get started with a template project, and with a few tweaks, you can then push your code to production. Read more...

Running Micronaut serverlessly on Google Cloud Platform

Last week, I had the pleasure of presenting Micronaut in action on Google Cloud Platform, via a webinar organized by OCI. Particularly, I focused on the serverless compute options available: Cloud Functions, App Engine, and Cloud Run. Here are the slides I presented. However, the real meat is in the demos which are not displayed on this deck! So let’s have a closer look at them, until the video is published online. Read more...

Start the fun with Java 14 and Micronaut inside serverless containers on Cloud Run

Hot on the heels of the announcement of the general availability of JDK 14, I couldn’t resist taking it for a spin. Without messing up my environment (I’ll confess I’m running 11 on my machine, but I’m still not even using everything that came past Java 8!), I decided to test this new edition within the comfy setting of a Docker container. A minimal OpenJDK 14 image running JShell is super easy to get started with (assuming you have Docker installed on your machine): create a Dockerfile with the following content: Read more...
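The actual file is in the full post, but a sketch of such a minimal Dockerfile would look along these lines:

```dockerfile
# Official OpenJDK 14 base image, dropping straight into an interactive JShell session
FROM openjdk:14
CMD ["jshell"]
```

Building it with docker build -t jdk14 . and running it with docker run -it jdk14 then puts you at a JShell prompt on JDK 14, where you can try out the new language features without touching your local setup.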