Websim: when AI hallucinations are actually a cool product feature

Making the internet fun again when LLMs make silly mistakes

Chanon Roy
3 min read · May 1, 2024

Look around the corporate landscape these days and you’ll find that every business is trying to get ChatGPT to put on a suit and work a 9-to-5 as a highly capable chatbot assistant. European fintech company Klarna claims that two-thirds of its support requests are now handled by an AI support agent. With reasoning capabilities and vector-based context retrieval, this alien intelligence is actually working surprisingly well.
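(For the curious: “vector-based context retrieval” just means embedding your documents as vectors and fetching the ones most similar to the user’s question to feed the model. Here’s a minimal Python sketch of the idea; the embed() function is a toy stand-in for a real embedding model, and a production system would use a vector database rather than a linear scan.)

```python
# A minimal sketch of vector-based context retrieval (the "RAG" pattern).
# embed() is a hypothetical toy stand-in for a real embedding model;
# production systems use a vector database instead of a linear scan.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy embedding: hash characters into a fixed-size, normalized vector.
    vec = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        vec[(i + ord(ch)) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    # Rank documents by cosine similarity to the query and return the
    # closest ones; these get pasted into the LLM's prompt as context.
    q = embed(query)
    ranked = sorted(documents, key=lambda d: float(q @ embed(d)), reverse=True)
    return ranked[:top_k]

support_docs = [
    "Refunds are issued within 14 days of an approved return request.",
    "Our support team is available 24/7 via chat.",
    "Shipping to the EU takes 3-5 business days.",
]
print(retrieve("How long do refunds take?", support_docs, top_k=1))
```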

LLM hallucinations and half-baked launches

There’s a catch, though. Everything feels magical until the large language model (LLM) hallucinates something nonsensical, irrelevant, or even disturbing. Solving this has become a mission-critical objective for businesses and model creators alike, because getting it wrong is costly: when Air Canada’s AI agent made up a refund policy, a tribunal ordered the airline to honor it.
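Most fixes boil down to the same move: don’t let the model say anything it can’t back up with a source. As a toy illustration (the word-overlap check and 0.5 threshold below are made-up stand-ins for the semantic-similarity or entailment checks a real system would use), a guard might look like this:

```python
# A deliberately crude sketch of a grounding check: only send the bot's
# reply if it overlaps an approved policy document, otherwise escalate.
# The word-overlap metric and the 0.5 threshold are made-up illustrations;
# real systems use semantic similarity or entailment models instead.
def grounded_or_escalate(reply: str, policies: list[str], threshold: float = 0.5) -> str:
    reply_words = set(reply.lower().split())
    for policy in policies:
        overlap = len(reply_words & set(policy.lower().split())) / max(len(reply_words), 1)
        if overlap >= threshold:
            return reply  # Reply is backed by a real policy: safe to send.
    return "Let me connect you with a human agent who can confirm our policy."

policies = ["Refunds are issued within 14 days of an approved return request."]
print(grounded_or_escalate("Refunds are issued within 14 days.", policies))
print(grounded_or_escalate("We offer lifetime refunds, no questions asked!", policies))
```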

Researchers at Google actually invented the transformer architecture (in the 2017 paper “Attention Is All You Need”), which became the basis for LLMs and ChatGPT. Hallucination concerns prevented Google from being confident in releasing this…
