For years I called Professor Chomsky "the intellectual for people who aren't as smart as they think they are." In 2021, when he was quoted:
'How can we get food to them?' asks Chomsky. 'Well, that's actually their problem'
I thought, "Jeez, maybe he's not as smart as I think he is." And he was flying the old authoritarian freak flag to boot.
And then I'd spin the Chomskybot at "http://rubberducky.org/cgi-bin/chomsky.pl" and it would spit out something like:
Look On My Words, Ye Mighty, And Despair!

However, this assumption is not correct, since a subset of English sentences interesting on quite independent grounds does not affect the structure of a general convention regarding the forms of the grammar. In the discussion of resumptive pronouns following (81), the appearance of parasitic gaps in domains relatively inaccessible to ordinary extraction is not to be considered in determining the levels of acceptability from fairly high (eg (99a)) to virtual gibberish (eg (98d)). Let us continue to suppose that most of the methodological work in modern linguistics may remedy and, at the same time, eliminate irrelevant intervening contexts in selectional rules. Analogously, the fundamental error of regarding functional notions as categorial delimits nondistinctness in the sense of distinctive feature theory. So far, the theory of syntactic features developed earlier is rather different from the ultimate standard that determines the accuracy of any proposed grammar.
And all would be right with the world once again. I especially like the Ozymandias bit that headlines every Chomskybot pronouncement. However...
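For the curious: the Chomskybot is, reportedly, a Perl CGI that stitches each sentence together from banks of stock Chomskyan phrases. Here is a toy Python sketch of that idea; the phrase banks below are illustrative stand-ins of my own (seeded from the sample output above), not the script's actual lists.

```python
import random

# Toy Chomskybot: glue one phrase from each bank into a
# grammatical-sounding but content-free sentence. The banks here are
# guesses for illustration, not the real chomsky.pl data.
INITIATORS = [
    "However, this assumption is not correct, since",
    "Let us continue to suppose that",
    "Analogously,",
]
SUBJECTS = [
    "the theory of syntactic features developed earlier",
    "a subset of English sentences interesting on quite independent grounds",
    "the appearance of parasitic gaps",
]
VERBALS = [
    "delimits",
    "is not to be considered in determining",
    "does not affect the structure of",
]
TERMINATORS = [
    "nondistinctness in the sense of distinctive feature theory.",
    "a general convention regarding the forms of the grammar.",
    "the levels of acceptability from fairly high to virtual gibberish.",
]

def chomskybot_sentence(rng=random):
    """One phrase from each bank, in fixed order."""
    return " ".join([
        rng.choice(INITIATORS),
        rng.choice(SUBJECTS),
        rng.choice(VERBALS),
        rng.choice(TERMINATORS),
    ])

def chomskybot_paragraph(n=5, rng=random):
    """A paragraph of n independently generated sentences."""
    return " ".join(chomskybot_sentence(rng) for _ in range(n))

print(chomskybot_paragraph())
```

Because every phrase is syntactically compatible with every other, any of the 81 possible combinations parses as fluent academic prose while meaning nothing, which is the whole joke.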
From time to time the old boy says or writes something that is so self-evidently true that I just tuck it into my mental matrix with hardly a care or concern that he might be in error.
From The New York Times, March 8, 2023:
By Noam Chomsky, Ian Roberts and Jeffrey Watumull
Jorge Luis Borges once wrote that to live in a time of great peril and promise is to experience both tragedy and comedy, with “the imminence of a revelation” in understanding ourselves and the world. Today our supposedly revolutionary advancements in artificial intelligence are indeed cause for both concern and optimism. Optimism because intelligence is the means by which we solve problems. Concern because we fear that the most popular and fashionable strain of A.I. — machine learning — will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge.
OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney are marvels of machine learning. Roughly speaking, they take huge amounts of data, search for patterns in it and become increasingly proficient at generating statistically probable outputs — such as seemingly humanlike language and thought. These programs have been hailed as the first glimmers on the horizon of artificial general intelligence — that long-prophesied moment when mechanical minds surpass human brains not only quantitatively in terms of processing speed and memory size but also qualitatively in terms of intellectual insight, artistic creativity and every other distinctively human faculty.
That day may come, but its dawn is not yet breaking, contrary to what can be read in hyperbolic headlines and reckoned by injudicious investments. The Borgesian revelation of understanding has not and will not — and, we submit, cannot — occur if machine learning programs like ChatGPT continue to dominate the field of A.I. However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.
It is at once comic and tragic, as Borges might have noted, that so much money and attention should be concentrated on so little a thing — something so trivial when contrasted with the human mind, which by dint of language, in the words of Wilhelm von Humboldt, can make “infinite use of finite means,” creating ideas and theories with universal reach.
The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question. On the contrary, the human mind is a surprisingly efficient and even elegant system that operates with small amounts of information; it seeks not to infer brute correlations among data points but to create explanations.
For instance, a young child acquiring a language is developing — unconsciously, automatically and speedily from minuscule data — a grammar, a stupendously sophisticated system of logical principles and parameters. This grammar can be understood as an expression of the innate, genetically installed “operating system” that endows humans with the capacity to generate complex sentences and long trains of thought. When linguists seek to develop a theory for why a given language works as it does (“Why are these — but not those — sentences considered grammatical?”), they are building consciously and laboriously an explicit version of the grammar that the child builds instinctively and with minimal exposure to information. The child’s operating system is completely different from that of a machine learning program.
Indeed, such programs are stuck in a prehuman or nonhuman phase of cognitive evolution. Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.
Here’s an example....
....MUCH MORE
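The "statistically probable outputs" the authors describe can be shown at toy scale with a bigram model: count which word follows which in some text, then always emit the most frequent successor. This is a vast simplification of what ChatGPT actually does (neural networks over enormous corpora), but the underlying principle of modeling "most likely next word given context" is the same in spirit; the corpus below is made up for illustration.

```python
from collections import Counter, defaultdict

# Count next-word frequencies in a tiny hand-made corpus, then generate
# text greedily by always picking the most common successor. A crude
# stand-in for the statistical prediction the op-ed criticizes.
corpus = (
    "the mind is a system . the mind is not a statistical engine . "
    "the engine is a machine ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    """Most frequent word seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

def generate(start, n=6):
    """Greedily extend `start` by n predicted words."""
    out = [start]
    for _ in range(n):
        out.append(predict(out[-1]))
    return " ".join(out)

print(generate("the"))
```

Note what the model can never do, which is exactly the op-ed's point: it can report what tends to follow "the" in its data, but it has no way to say what could not follow it, or why.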
On the other hand: "The hedge fund that just posted the best return in history is negotiating a company-wide ChatGPT license"
And from one of the gurus of AI, Kai-Fu Lee, on artificial intelligence: "Why Computers Don’t Need to Match Human Intelligence"