Gell-Mann's One-Armed Bandit
A small observation about Large Language Models (LLMs), human psychology, and some common pricing models for the APIs of hosted models:
LLMs induce Gell-Mann Amnesia¹ when asked about any topic with which the inquirer is unfamiliar. If one asks an LLM a question to which one already knows the answer, one scoffs at the small or large errors it makes. When one asks it about something unfamiliar, it outputs something "reasonable" that is difficult not to take at face value.
LLMs, especially ones that have a per-token cost, operate much like slot machines. Exhibiting psychological similarities to a Skinner box², chatbots that sometimes produce useful output perform variable schedule operant conditioning on the user, a highly effective schedule known for causing addiction³. This is in addition to other psychological effects that paying per token induces, like plan continuation bias⁴, triggered by output that almost meets the requester's standards but falls short in some critical way.
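To make the slot machine comparison concrete, here is a small sketch of what that reinforcement schedule looks like when every "pull" is a paid prompt. The cost and hit-rate numbers are invented for illustration; they are not measurements of any real model or pricing plan.

```python
import random

# Illustration only: made-up numbers, not real pricing or model behavior.
COST_PER_PROMPT_USD = 0.02   # hypothetical per-prompt token cost
P_USEFUL = 0.25              # hypothetical chance a response is good enough

def prompts_until_useful(rng: random.Random) -> int:
    """Count how many prompts get sent before one finally 'pays out'."""
    attempts = 0
    while True:
        attempts += 1
        if rng.random() < P_USEFUL:
            return attempts

rng = random.Random(0)
trials = [prompts_until_useful(rng) for _ in range(10_000)]
mean_attempts = sum(trials) / len(trials)

# Reinforcement arrives after a variable number of responses, and every
# attempt costs money: the same structure as feeding coins into a slot machine.
print(f"mean prompts per useful answer: {mean_attempts:.2f}")
print(f"mean spend per useful answer:  ${mean_attempts * COST_PER_PROMPT_USD:.2f}")
```

The payout shows up after an unpredictable number of attempts, which is exactly the kind of schedule the quote in the footnotes describes, and every attempt is billed whether or not it pays out.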
A short search shows I'm not the first⁵ to notice that using LLMs might be addictive. I'm not the first⁶ to notice the Gell-Mann Amnesia effect they have, either.
I'm a little nervous about the idea that we have created addictive bullshit⁷ machines that sound like experts on any topic about which the audience is uninformed. That seems like a problem.
Footnotes
1. the tendency of individuals to critically assess media reports in a domain they are knowledgeable about, yet continue to trust reporting in other areas despite recognizing similar potential inaccuracies
2. Variable schedule operant conditioning
3. Variable ratio schedule: Reinforcement occurs after a variable number of responses have been emitted since the previous reinforcement. This schedule typically yields a very high, persistent rate of response. (emphasis mine)
4. Plan continuation bias is a form of the sunk-cost fallacy
5. a Medium blog about LLMs feeling addictive, another one, someone on Reddit "joking" about it, someone on Twitter who notices the effect but doesn't seem to understand the gravity of the situation or how real his words are when he says "a slight indication", and a Hacker News commenter with a bad feeling about how "using an LLM to code is like gambling and you're always hoping that just one more prompt will get you what you want."
6. A Hacker News commenter notices the Gell-Mann Amnesia effect while using an LLM, so does this blogger, and this one, and I am sure there are more.