
I have met the LLMs, and they are us

Well, actually not all of us, really just me (I?). 

I was listening to a vaguely interesting podcast this morning about David Bessis' book Mathematica, which (apparently?) emphasises the importance of intuition in mathematical thinking rather than logical proofs. In particular, the idea was that becoming a good mathematician means learning to train your intuition to agree with reality, battering it into the right shape when you make a mistake instead of just giving up. It can then be a useful guide and inspiration when exploring new mathematical concepts.

What most struck me about this idea of 'training your intuition' was that it's analogous to machine learning (LLMs/AI/whatever). You don't really care about the steps or formally proving something, you're just trying to make the outputs of the system agree with reality. Our big advantage over LLMs in this context is that for us reality is more than 'all the text on the internet'.

But then I got to thinking about the other things that make us distinct from LLMs. You'll have to forgive me here as I rant primarily based on machine learning knowledge from twenty years ago rather than the state of the art; it's entirely possible that I'm underplaying the complexities of modern systems (err RAG? Chain of thought? ...).

The most obvious cause of "I am a human and I deny the AI is human" is a different understanding of time and memory. When 'chatting' to an LLM, we interact with a static model every time; everyone else in the world is interacting with that same model, and most of the time it's better to start from scratch rather than let the model be confused by some previous tokens from either you or it. Sure, you can leave those tokens there, but they're ultimately a surface-level manipulation of the underlying model's abilities and inclinations.

The other way we pat ourselves on the back and say "well, I'm smarter than a computer" is that LLM output doesn't have obvious tells about what's incorrect. Also known as high-quality bullshit. Sure, some humans are 'better' at it than others, but you can tune for the individual human given time. People who are skilled bullshit generators tend to be completely useless at providing good information, or at least you can clearly identify a whole topic area where they know nothing. But for almost any topic in the world an LLM will provide amazing information 90% of the time, and the 10% of absolute tosh will be written just as clearly as the 90%.

That all sounds great, and some small bulwark for our continuing delusion that humanity is somehow a special and unique snowflake, but unfortunately I'm here to explain to you that you're probably safer treating me as an LLM than as a real person. Evidence presented in dot point form, in memory of my father (whose natural affinity was with a computer program rather than an LLM):

  • I am aphantasic, which for those who haven't followed the repetitive pop sci articles of the last few years means that, like 1% to 5% of the population, I don't do mental imagery. And two of the characteristics of most aphantasics are that they're a bit detached and their memory is rubbish.
  • I have an instinctive aversion to dealing with the outside world, and my main experience of reality is consuming vast quantities of text without remembering any of it specifically or viscerally. For instance, I've read a decent proportion of the English literature canon, but am essentially unable to discuss any particular book because I don't remember them; they've mostly been training data for my 'intuition'. I think this is affected by my not having an internal monologue, not verbalising when reading, and not having any mental imagery (see above), so I charge quickly and almost thoughtlessly through most text.
  • I can quickly generate plausible words in sequences about factual-ish topics, and am sometimes faster at typing them than speaking them. People whom I have known for a while (ok, primarily my wife) have often remarked (ok, complained) that I will sound perfectly convincing about things that I have very little knowledge of, and can do this with approximately 0.5 seconds of prior thought.
  • I am getting older. This means that my neural connections are increasingly pruned/optimised, and the effect of any new event on my neural architecture is smaller and smaller. In short, my whole brain is becoming more like a pre-trained model, with limited ability to adjust and adapt to different circumstances; I am asymptotically approaching pure LLM-ness.

I think the only real question here is what to do about this. Start a religion? Put up an internet quiz ("How close are you to an LLM?")? Sell my brain to science? Try to be more human for my family? Eh, clearly rhetorical: all I'm actually likely to do is write an annoying blog post.

Oops.
