Tech Watch #3 — October 20, 2023
Stop Using char in Java. And Code Points
It’s a can of worms when you start messing with chars and code points, and you’re likely to get it wrong in the end. Stay away from chars and code points as much as possible, and instead use String methods like substring(), plus some regex when you really need to find grapheme clusters.
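To see why char is treacherous, here is a minimal sketch (class name is illustrative) using only standard String and Character methods: an emoji outside the Basic Multilingual Plane takes two UTF-16 char units, so length() and charAt() stop telling you what you think they do.

```java
public class CharPitfall {
    public static void main(String[] args) {
        // "😄" (U+1F604) is outside the BMP, so it is stored as a surrogate pair
        String s = "a\uD83D\uDE04b"; // "a😄b"

        System.out.println(s.length());                      // 4 UTF-16 char units, not 3 characters
        System.out.println(s.codePointCount(0, s.length())); // 3 code points

        // charAt(1) returns an unpaired high surrogate, not the emoji
        System.out.println(Character.isSurrogate(s.charAt(1))); // true

        // Safer: iterate by code point instead of by char
        s.codePoints().forEach(cp ->
                System.out.println(Integer.toHexString(cp))); // 61, 1f604, 62
    }
}
```

Even code points aren’t the whole story: grapheme clusters such as emoji ZWJ sequences span several code points, which is where the grapheme-cluster regex advice above comes in.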
Paul King shared his presentations on Why use Groovy in 2023 and an update on the Groovy 5 roadmap
It’s interesting to see how and where Groovy goes beyond what is offered by Java, sometimes thanks to its dynamic nature, sometimes because of its compile-time transformation capabilities. When Groovy adopts the latest Java features, there’s always a twist to make things even groovier in Groovy!
The State of WebAssembly in 2023
Tell your LLM to take a deep breath!
We tend to humanize large language models through anthropomorphism, much as we see human faces everywhere thanks to pareidolia, although LLMs are neither sentient nor human. So it’s pretty ironic that to get better results on some logic problems, we need to tell the LLM to actually take a deep breath! Are they now able to breathe?
Wannabe security researcher asks Bard for vulnerabilities in cURL
Large language models can be super creative; that’s why we employ them to imagine new stories, create narratives, etc. But some wannabe security experts seem to believe that what LLMs say is pure fact, which is probably what happened to the person who reported that they had asked Bard to find a vulnerability in cURL. Bard indeed managed to be creative enough to craft a hypothetical exploit, even explaining where a possible integer overflow could take place. Unfortunately, the generated exploit text contained many errors: a wrong method signature, an invented changelog, code that doesn’t compile, etc.
LLMs confabulate, they don’t hallucinate
A few times on social networks, I’ve seen the claim that we should say LLMs confabulate rather than hallucinate. Confabulation is usually a brain disorder that makes people confidently state things that may or may not be true, in a convincing fashion (they don’t even know it’s false or a lie). Hallucination is more of a misinterpretation of sensory input, like having the impression of seeing a pink elephant! The article linked above explains the rationale.
Greg Kamradt tweets about the use cases for multimodal vision+text LLMs
You’d think that you could just get a model to describe a picture as text, and then mix that description with other text snippets. But models that truly understand both images and text are far more powerful than that. In this tweet, Greg distinguishes different scenarios: description, interpretation, recommendation, conversion, extraction, assistance, and evaluation. For example, we could imagine transforming an architecture diagram into a proper Terraform configuration, or a UI mockup into a snippet of code that actually builds that UI. You could show a picture of a dish and ask for its recipe!
The Story of AI Graphics at JetBrains
I’ve always loved generative and procedural art, both for games and for art itself. I really enjoyed this article, which goes through the story of how JetBrains generates the nice splash screens and animations for its family of products. Neural networks at play here!