
Gell-Mann's One-Armed Bandit

A small observation about Large Language Models (LLMs), human psychology, and some common pricing models for the APIs of hosted models:

LLMs induce Gell-Mann Amnesia1 when asked about any topic with which the inquirer is unfamiliar. Ask an LLM a question to which one already knows the answer, and one scoffs at the errors, small and large, that it makes. Ask it about something unfamiliar, and it outputs something "reasonable" that is difficult not to take at face value.

LLMs, especially ones with a per-token cost, operate much like slot machines. Exhibiting psychological similarities to a Skinner box2, chatbots whose output is only sometimes useful perform variable-schedule operant conditioning on the user, a schedule known to be highly effective at causing addiction3. This comes on top of other psychological effects that paying per token induces, like plan continuation bias4, triggered by output that almost meets the requester's standards but falls short in some critical way.
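To make the slot-machine comparison concrete, here is a minimal sketch with entirely made-up prices, response lengths, and hit rates (none of them taken from any real API): when only some fraction of paid completions turns out to be useful, the number of attempts before a "win" is geometrically distributed, which is the same variable-ratio structure described in footnote 3.

```python
import random

# Hypothetical figures for illustration only; real per-token prices,
# response lengths, and "usefulness" rates vary widely.
PRICE_PER_1K_TOKENS = 0.03   # dollars per 1,000 output tokens (made up)
TOKENS_PER_RESPONSE = 800    # average completion length (made up)
P_USEFUL = 0.2               # assumed chance a completion meets the bar

def spend_until_useful(rng: random.Random) -> float:
    """Pay for completions until one is useful; return the total spend.

    Each attempt is an independent Bernoulli trial, so the number of paid
    pulls follows a geometric distribution, the same variable-ratio
    reinforcement schedule a slot machine uses.
    """
    cost_per_pull = PRICE_PER_1K_TOKENS * TOKENS_PER_RESPONSE / 1000
    spend = 0.0
    while True:
        spend += cost_per_pull
        if rng.random() < P_USEFUL:
            return spend

def main() -> None:
    rng = random.Random(0)
    runs = [spend_until_useful(rng) for _ in range(10_000)]
    print(f"average spend per useful answer: ${sum(runs) / len(runs):.2f}")
    print(f"worst observed run:              ${max(runs):.2f}")

if __name__ == "__main__":
    main()
```

The expected spend is just the per-attempt cost divided by the hit probability, but the variance around that average, long dry streaks punctuated by the occasional immediate win, is exactly what the reinforcement-schedule literature flags as habit-forming.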

A short search shows I'm not the first5 to notice that using LLMs might be addictive, nor the first6 to notice the Gell-Mann Amnesia effect they induce.

I'm a little nervous about the idea that we have created addictive bullshit7 machines that sound like experts on any topic about which the audience is uninformed. That seems like a problem.

Footnotes


1. Gell-Mann amnesia: the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies.

3. Variable schedule operant conditioning: "Variable ratio schedule: Reinforcement occurs after a variable number of responses have been emitted since the previous reinforcement. This schedule typically yields a very high, persistent rate of response." (emphasis mine)

4. Plan continuation bias is a form of the sunk-cost fallacy.