Don't Imitate, Understand - #4
Hello!
In this fourth issue of the "Don't Imitate, Understand" newsletter you'll get:
- Course news and a discussion of why anthropomorphizing your LLM hurts the quality of your code output.
- A bunch of interesting links!
Course News
- Today is Prime Day, so Understanding React is on sale for one day.
- My upcoming course Understanding Modern JavaScript Frameworks is still open for pre-sale.
- I'm happy to announce that production is in progress on a new AI course! Many of you have asked for one, and I've found an approach that matches the "Don't Imitate, Understand" mantra. It will be on Udemy. Stay tuned!
Stop Anthropomorphizing Your Code Generator
Your LLM won't always do what you want. It hallucinates. It makes mistakes. All this despite your best prompting efforts.
The big mistake I see developers making is responding to those mistakes as if the LLM were a person. To "anthropomorphize" something means to attribute human characteristics to something that isn't human. Anthropomorphizing your LLM is a natural human tendency, and a developer mistake.
I see developers repeatedly asking, even pleading with, the LLM to change its behavior. But the LLM is not a person. Rephrasing the same request over and over is not the way to get good results.
You need to remember that an LLM is a pattern-matching machine. Your context and prompts create a multi-dimensional mathematical space that the LLM navigates in concert with everything in its training data. Your prompts and context should aid that navigation.
Repeating yourself when things go wrong, or using all caps to plead with the LLM, isn't helping it navigate that "semantic space".
Let's say an LLM just won't output proper HTML for you; it keeps giving you divs and spans. There's 1) the anthropomorphized way to respond, and 2) the "semantic space" way:
The Wrong Way (Anthropomorphized)
"PLEASE stop using divs and spans everywhere. USE SEMANTIC HTML."
This may work. Sometimes. But it isn't reliable. If the pattern-matching machine is outputting patterns you don't like, you need to provide patterns for it to follow and redirect its navigation of the semantic space built from its training data.
The Right Way
"You are an HTML author that cares about accessibility and semantics. Always opt for semantic HTML over divs and spans whenever possible. For example use <ul> for lists, <article> and <section> for sectioning pages and content, and <b>, <i>, <strong>, and <em> for emphasizing words."
You've given the LLM a role ("an HTML author") and example patterns ("article over div") to assist its navigation of the semantic space. You will get better results this way.
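If you're calling an LLM from code rather than a chat window, the same principle applies: put the role and the example patterns where they shape every response. Here's a minimal sketch, assuming the OpenAI Node SDK (npm install openai) and an OPENAI_API_KEY environment variable; the model name and user message are illustrative, not part of the technique.

```typescript
import OpenAI from "openai";

// Reads OPENAI_API_KEY from the environment.
const client = new OpenAI();

// The "semantic space" prompt: a role plus concrete example patterns,
// sent once as a system message rather than pleaded in follow-ups.
const systemPrompt = `You are an HTML author that cares about accessibility
and semantics. Always opt for semantic HTML over divs and spans whenever
possible. For example, use <ul> for lists, <article> and <section> for
sectioning pages and content, and <strong> and <em> for emphasizing words.`;

async function main() {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: "Generate markup for a blog post listing page." },
    ],
  });

  console.log(response.choices[0].message.content);
}

main();
```

The SDK is beside the point: whichever tool you use, the role and the example patterns belong in the persistent instructions, not in exasperated follow-up messages.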
An LLM is not a person
An LLM does not truly think or reason. It doesn't "understand". It's a statistical guessing machine that is guided by the words and patterns you give it, and the words and patterns in its training data.
So stop thinking about the LLM as a person you are conversing with. Think of it as what it is: a mathematical device that you are guiding and interacting with, using your words as input.
Stop anthropomorphizing your LLM. Start treating it like a computer program. Give it good inputs.
Links
- A non-anthropomorphized view of LLMs. I agree 100% with this take on not anthropomorphizing LLMs.
- Angular's best practices for developing with LLMs.
- Everything LLMs do is a hallucination.
- Understanding AI. A good breakdown of the core concepts.
That's it for this fourth issue!
Happy coding!
Tony Alicea