The LLMentalist Effect

URL: https://softwarecrisis.dev/letters/llmentalist/

A wonderful letter that maps the psychic's con onto LLMs, breaking down, step by step, why the two make an apt like-to-like comparison.

With great headers like:

  • The rise of the mechanical psychic
  • The Psychic's Con
    1. The Audience Selects Itself
    2. The Scene is Set
    3. Narrowing Down the Demographic
    4. The Mark is Tested
    5. The Subjective Validation Loop
    6. "Wow! That psychic is the real thing!"
  • Many psychics fool themselves
  • The LLMentalist Effect
    1. The Audience Selects Itself
    2. The Scene is Set
    3. The Prompt Establishes the Context
    4. The Marks Test Themselves
    5. The Subjective Validation Loop
    6. "Wow! This chatbot thinks! It has sparks of general intelligence!"
  • It's easy to fall for this
  • This new era of tech seems to be built on superstition and pseudoscience

LLMs are a mathematical model of language tokens. You give an LLM text, and it will give you a mathematically plausible response to that text.
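As a rough illustration of what "mathematically plausible" means here, the toy sketch below extends a prompt by sampling one token at a time from a made-up probability table. The vocabulary and probabilities are invented for illustration; in a real LLM they come from billions of learned parameters, but the mechanism is the same: pick a likely continuation, not a "thought".

```python
import random

# Toy stand-in for a language model: for a given context, a made-up
# probability distribution over possible next tokens. A real LLM computes
# these probabilities with a neural network over its whole vocabulary.
NEXT_TOKEN_PROBS = {
    "the cat": {"sat": 0.6, "ran": 0.3, "sneezed": 0.1},
    "the cat sat": {"on": 0.8, "quietly": 0.2},
    "the cat sat on": {"the": 0.9, "a": 0.1},
    "the cat sat on the": {"mat": 0.7, "sofa": 0.3},
}

def generate(prompt: str, max_tokens: int = 4) -> str:
    """Extend the prompt by repeatedly sampling a plausible next token."""
    text = prompt
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(text)
        if dist is None:  # no continuation known for this context
            break
        tokens = list(dist.keys())
        weights = list(dist.values())
        # Choose the next token in proportion to its probability:
        # statistically plausible, nothing more.
        text += " " + random.choices(tokens, weights=weights)[0]
    return text

print(generate("the cat"))  # e.g. "the cat sat on the mat"
```

The output reads as if the program "knows" what cats do, but it is only following the statistics of its table; that gap between plausibility and understanding is exactly where the con described above operates.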

Well, the tech industry just isn’t that good at software. This illusion is, honestly, too clever to have been created intentionally by those making it.

All of these are proposed applications of “AI” systems, but they are also all common psychic scams. Mind reading, police assistance, faith healing, prophecy, and even psychic employee vetting are all right out of the mentalist playbook.

Delegating your decision-making, ranking, assessment, strategising, analysis, or any other form of reasoning to a chatbot becomes the functional equivalent of phoning a psychic for advice.