Mimicking Emotion: Exploring the Limits of AI in Replicating Human Language

The giant strides AI has made in replicating human language allow it to generate text that convincingly passes for human writing. Large language models (LLMs) like ChatGPT and GPT-3 are trained on colossal sets of data, cataloging the patterns of human language and using statistical analysis to reproduce them. It is incredibly important, then, to emphasize that AI does not have the ability to understand human language or even produce it organically: it simply has the ability to mimic the examples fed into it.
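This pattern-cataloging can be illustrated with a deliberately tiny sketch. The toy bigram model below is in no way how real LLMs are built (they use neural networks at vastly larger scale), but it captures the principle: the program counts which word follows which in its training text and replays those patterns, with no notion of what any word means.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Catalog which word tends to follow which: pure pattern counting,
    with no notion of what any word means."""
    model = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start, length=8, seed=0):
    """Reproduce the cataloged patterns by chaining observed next words."""
    random.seed(seed)
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "i am happy to see you . i am glad you came . i am happy you came ."
model = train_bigrams(corpus)
print(generate(model, "i"))  # fluent-looking, but nothing is understood
```

The output reads like plausible English only because the input did; scale the corpus up by many orders of magnitude and the mimicry becomes convincing, but the mechanism remains imitation.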

AI therefore lacks a core component of what makes human speech distinct and gives it its uniqueness: emotion. Emotions are a significant part of human speech because they contribute to the meaning that speech carries. Further, because emotions are culturally encoded, and thus manifested and interpreted differently across societies, AI's ability to process these differences and navigate their nuances is negligible. A smile, for instance, means different things in different societies. By ignoring these differences and flattening them into decontextualized language, AI lags far behind the human ability to infuse language with emotion and to understand the shades of meaning that emotions bestow on language.

Given how subjective emotions are, deploying AI to interpret them is dangerous, especially because of how prone the technology is to bias. Multiple studies have shown that AI systems are more likely to associate negative emotions with people of certain races than with others. This can have disastrous consequences for individuals from those groups if AI-generated analysis consistently portrays certain identities as exhibiting negative traits.
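The mechanism behind such bias can be sketched in a few lines. The toy classifier below is a deliberately simplified illustration, not any real system, and the group names are abstract placeholders: because the model only reflects co-occurrence statistics in its training data, a skew in that data becomes a skew in its predictions.

```python
from collections import Counter, defaultdict

# Hypothetical skewed training data: one group appears only in
# positive contexts, the other only in negative ones.
biased_training_data = [
    ("group_a person was celebrated", "positive"),
    ("group_a person was praised", "positive"),
    ("group_b person was criticized", "negative"),
    ("group_b person was blamed", "negative"),
]

def train(data):
    """Count how often each word co-occurs with each label."""
    counts = defaultdict(Counter)
    for sentence, label in data:
        for word in sentence.split():
            counts[word][label] += 1
    return counts

def predict(counts, sentence):
    """Score a sentence by summing its words' label counts."""
    tally = Counter()
    for word in sentence.split():
        tally.update(counts[word])
    return tally.most_common(1)[0][0] if tally else "unknown"

counts = train(biased_training_data)
# An emotionally neutral sentence is labeled by group association alone:
print(predict(counts, "group_b person arrived"))  # -> "negative"
```

Real emotion-recognition systems are far more sophisticated, but the failure mode is the same: the model has no access to ground truth about emotions, only to correlations in the examples it was shown.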

As AI has begun to power platforms like chatbots that respond to consumer needs, their deftness at automated conversation has led to the misconception that these chatbots exhibit empathy. This is absolutely not the case: empathy requires genuinely understanding and sharing emotions, which lies outside what AI can do, for it can only mimic understanding.
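A minimal sketch makes the gap between simulated and felt empathy concrete. The keyword-matching bot below (all names hypothetical, and far cruder than a production chatbot) produces responses that read as caring, yet each one is a table lookup rather than a shared feeling.

```python
# Canned "empathetic" templates keyed on emotion words.
RESPONSES = {
    "sad": "I'm so sorry to hear that. That sounds really hard.",
    "angry": "That sounds frustrating. Your feelings are valid.",
    "happy": "That's wonderful! I'm glad things are going well.",
}

def reply(message):
    """Scan the message for emotion keywords; no emotion is ever felt."""
    for keyword, canned_line in RESPONSES.items():
        if keyword in message.lower():
            return canned_line
    return "Tell me more about how you're feeling."

print(reply("I've been feeling sad all week"))
# The response sounds sympathetic, but it is retrieval, not empathy.
```

A statistical chatbot replaces the lookup table with learned patterns, but the principle the essay describes holds: the sympathetic surface is generated, not experienced.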

Despite these shortcomings, AI performs important functional tasks in our world today, making our lives much easier. And yet it is important not to overemphasize its utility: it falters when faced with complex human behavior. Therefore, we need to make a realistic assessment of its abilities to avoid giving it responsibilities that would require it to make decisions about the well-being of individuals.

