
LLMs and the Human Mind: a Critique of Chomsky’s Views on Both

I find Chomsky’s take on ChatGPT somewhat ironic (cf. this summary). He has spent more than half a century trying to convince us all that the computer is an appropriate analogy for the human mind (we “compute” and “process” and “store things in memory,” etc., not to mention all of his linguistic work on sorting out the ‘rules’ by which we compute sentences, though not meanings, which he treats as a domain strictly separate from what his first book calls syntactic structures). Given all that, it is slightly amusing to read things like this:

However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. source

Why is the human mind so different? It’s more elegant, he claims, more efficient. That strikes me as a rather lame explanation, exactly as anemic as he accuses the neutered ChatGPT of being, though I’m less disappointed when a large language model generates something like that. And again, speaking of children’s learning of grammar, he says,

This grammar can be understood as an expression of the innate, genetically installed “operating system” that endows humans with the capacity to generate complex sentences and long trains of thought. source

The computer analogy betrays the fact that Chomsky doesn’t understand something that most people throughout history have simply taken for granted due to its obviousness, namely that the human mind is not a physical phenomenon, even though its existence is manifested physically. The mind is part of the human soul, and just as we know of no created means by which one could destroy a human soul, so we know of no proper analogy for the human mind that is not itself a spiritual reality, such as the mind of God himself.

The mind closed off to reality beyond its own chemical confines is a fascinating thing in itself. Just as a ChatGPT model refuses to answer important questions because “I do not have personal opinions, since I am only a probabilistic language-generation model,” so Chomsky tells us that “the human child is like a computer with a genetically installed operating system capable of speech.”

I am reminded of Kierkegaard’s distinction between genius and apostleship: the genius is merely ahead of his peers, and in fifty years schoolchildren will know what the genius knows now. The apostle, by contrast, knows differently; apostolic-knowing is qualitatively different from genius-knowing, because apostles know by revelation, not insight.

In a similar manner, Chomsky’s “human mind” is vastly more elegant and efficient than a language model, but it is not qualitatively different from a computer. This understanding probably explains why Chomsky’s linguistic models are so fixated on structures at the expense of meaning and semiotic function: meaning comes from the outside in; it does not arise spontaneously like a libertarian utopia.