Great Expectations?: Exploring Chomsky’s Critique of ChatGPT

It is fair to claim that ChatGPT has changed the world, and the scope of this change is bound to expand, given the tremendous pace at which its capacities for human-computer interaction are growing. These platforms work primarily by finding patterns within large data sets and generating outputs that are statistically probable. However, the day when such large language models will be able to match human capabilities in language generation and use has not yet arrived, given how these platforms are riddled with problems like bias, misinformation, and a lack of contextual understanding.
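The idea of "statistically probable" output can be illustrated with a toy bigram model, a minimal sketch of next-word prediction by counting. This is only a caricature of the principle; ChatGPT itself uses neural networks trained on vastly larger data, and the corpus below is invented for illustration.

```python
from collections import Counter, defaultdict

# Invented toy corpus; real models train on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_probable_next(word):
    """Return the statistically most likely continuation of `word`."""
    counts = following[word]
    total = sum(counts.values())
    best, n = counts.most_common(1)[0]
    return best, n / total

# "the" is followed by "cat" 2 out of 4 times in this corpus.
print(most_probable_next("the"))  # ('cat', 0.5)
```

However sophisticated the model, the underlying move is the same: continue with what the data makes likely, not with what a reason would demand.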

It is therefore important to be critical of platforms like ChatGPT and wary of overly celebratory claims that foreground their game-changing nature, especially in a world where the rise of artificial intelligence coincides with problems like the underfunding of the humanities and the loss of jobs for artists. It also becomes important to question the ways in which such huge amounts of resources are being funneled into large language models, which, despite their vast capacities, pale in comparison to the limitlessness of the human mind, which, as the German linguist and philosopher Wilhelm von Humboldt described it, makes “infinite use of finite means”.

In an article written for The New York Times by Noam Chomsky, Ian Roberts, and Jeffrey Watumull, the authors argue that the human mind does not work statistically like ChatGPT, merely creating patterns and correlations; instead, it looks for explanations and reasons, an ability critically missing in ChatGPT. Even a young child acquiring language unconsciously and automatically begins inferring the logical principles of its grammatical system, an ability that lets developing children frame complex sentences in that language. By pale comparison, AI can only mimic its sample data, mixing and matching what is already available.

The biggest problem with ChatGPT is that its pattern-oriented approach to language fails in the face of the irregularities that mark linguistic thought. Because it cannot grasp the causal explanation behind linguistic statements, the moment someone utters an unusual phrase or a neologism, its ability to parse and interpret the utterance correctly collapses; it lacks the intuition needed to understand why someone said what they said. Thus, insofar as AI relies on what is merely probable and likely, it misses the improbable, the odd, and the strange. Yet human experience is marked by precisely these oddities, as the diversity of human thought and language attests.
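The failure mode described above can be sketched with the same toy counting model: a word it has never seen, such as the invented neologism "glimber" below, simply has no continuations at all. This is an assumption-laden caricature; large neural models degrade more gracefully than raw counts, but the underlying dependence on seen data is the same.

```python
from collections import Counter, defaultdict

# Invented toy corpus for illustration only.
corpus = "the cat sat on the mat".split()

# Count each word's observed continuations.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

# A neologism never seen in training has an empty distribution:
# the purely statistical model has literally nothing to say about it.
print(following["glimber"])  # Counter()
```

A human hearer, by contrast, can often guess what a new coinage means from the reason it was coined; a frequency table cannot.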

Given this inability to understand human language, it is imperative that we remain wary of claims that treat artificial intelligence and large language models as instruments that will save our world from its pressing socioeconomic concerns. Only when we situate AI within this context will we be able to derive meaningful benefits from it.

Sources

https://www.nytimes.com/2023/03/08/opinion/noam-chomsky-chatgpt-ai.html
