Learning Language: Differences in the Acquisition Abilities of Humans and AI

As AI has become an essential part of life in the 21st century, with the capacity to generate original content and apparently mimic the human capacity for language, it is critical to compare the language acquisition abilities of AI with those of humans.

Despite the abilities of systems such as ChatGPT, GPT-3, LaMDA, and MUM, AI remains confined in its linguistic ability. The large language models (LLMs) behind these chatbots are predictive: they are trained to guess the next word in a block of text. Because a fairly large share of conversation is predictable, unoriginal, and similar to what has been said before, this lets AI keep a conversation flowing productively using simple inference mechanisms. Yet even as LLMs display apparent common sense and linguistic reasoning, they also frequently produce embarrassing nonsense and non-sequiturs.
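To make the next-word prediction mechanism concrete, the sketch below is a minimal illustration, assuming the open-source Hugging Face transformers library and the small, publicly released GPT-2 model rather than any of the chatbots named above. It asks the model to score candidate next words for a short prompt; text generation is simply this step repeated over and over.

```python
# Minimal sketch of next-word prediction (assumes the Hugging Face
# "transformers" library and the public GPT-2 model; the specific chatbots
# named above differ in scale and training, but the core mechanism is the same).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The cat sat on the"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # a score for every vocabulary token at every position
next_token_scores = logits[0, -1]        # scores for the position right after the prompt
probs = torch.softmax(next_token_scores, dim=-1)

# Show the five continuations the model considers most likely.
top = torch.topk(probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)]):>10}  {prob.item():.3f}")
```

Nothing in this loop involves understanding what a cat or a mat is; the model is only ranking which word is statistically most likely to come next, which is the point the paragraphs that follow develop.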

This leads to the argument that, despite major advances, AI language models will never approximate the human capacity for language. These models rest on the assumption that there is a fixed, clear, and transparent connection between thought and language. While AI can draw on massive datasets to construct blocks of text whose sentences follow logically from one another, it does not have the capacity to ideate and understand.

Even though AI can talk about anything, it does not understand what it is talking about. The structures of language are only a mode of representing knowledge; they do not themselves constitute knowledge. Language compresses information, and that compression leaves much unsaid. Humans pick up on these cues and infer what has been left out, whereas AI, restricted to what is concretely written, cannot. Humans also do not need a perfect language, because they share a non-linguistic understanding of the contexts in which words are used, which supports richer inferences; AI, by contrast, is limited to context-free comprehension and so remains shallow in its understanding.

It can therefore be argued that LLMs cannot match the deep, immersive understanding of the world that humans have, an understanding that far exceeds anything AI can capture. There is no way for AI to approximate it, because AI cannot go beyond the words on the page to grasp what is actually being talked about.

Sources:

  1. https://www.noemamag.com/ai-and-the-limits-of-language/

  2. https://files.eric.ed.gov/fulltext/EJ1363313.pdf
