Forked Tongues: Delving into the Dangers of AI Language Models

Artificial intelligence uses deep learning techniques to build Large Language Models (LLMs): statistical systems that generate text by predicting likely sequences of words, which lets them write prose, create content, and translate between languages. LLMs are trained on tremendous amounts of data, from which they learn patterns and trends that can then be applied to a wide range of purposes.
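
To make the "predictive" part concrete, here is a minimal toy sketch, not how production LLMs are built: a bigram model that counts which word follows which in a tiny invented corpus, then generates text by sampling a likely next word. The corpus and all names below are illustrative assumptions.

```python
# Toy bigram "language model": counts which word follows which in a tiny
# invented corpus, then generates text by sampling a likely next word.
# Everything here is an illustrative assumption; real LLMs use neural
# networks with billions of parameters, but the core task is the same:
# predict the next token from what came before.
import random
from collections import defaultdict

corpus = (
    "language models learn patterns from text . "
    "language models generate text by predicting the next word . "
    "models learn patterns from data ."
).split()

# Record every word observed to follow each word.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start, length=10):
    """Extend `start` by repeatedly sampling an observed next word."""
    word, output = start, [start]
    for _ in range(length):
        candidates = followers.get(word)
        if not candidates:  # dead end: no observed continuation
            break
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("language"))  # e.g. "language models learn patterns from data ."
```

Note that the sketch can only ever reproduce word sequences present in its training text, which previews a problem discussed below: a model's output is bounded by its data.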

However, all is not hunky-dory with Large Language Models. Despite their versatility, which ranges from code generation, virtual tutoring, and content summarization to powering AI-driven chatbots that can answer questions and offer suggestions, LLMs come with their own set of risks.

OpenAI, the creator of ChatGPT and GPT-4, has confirmed that its models can generate various forms of prejudiced content, including slurs, hate speech, and bigoted reasoning. Furthermore, these systems could provide meaningful assistance in the creation of chemical and biological weapons. While the company insists it has reduced the likelihood of such output, it concedes that these threats remain.

The root of the problem lies in the fact that chatbots cannot actually think or understand. When humans mistake output from an unthinking machine for advice from another person, serious harm can follow: there have been reported instances of AI chatbots encouraging users to self-harm, as well as cases of AI defending morally reprehensible acts such as genocide.

Further, the likelihood of AI generating biased and prejudiced output is high. Because training data is often unrepresentative, a model can learn to associate certain identities, such as Black and Hispanic people, with criminality. This is not only stereotyping; it creates a risk of real-world harm to these communities, given the growing use of AI in areas such as predictive policing (illustrated in the sketch below). AI systems are also susceptible to security problems, including the leaking of personal information and the facilitation of cybercrime such as phishing and spam.
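
To see the mechanism, here is a minimal toy sketch of how a skewed dataset becomes a skewed model. The data, group names, and labels are all hypothetical; the "model" is just frequency counting, but the failure mode is the same one statistical learners exhibit at scale: over-representation in the data becomes the system's belief.

```python
# Hypothetical, deliberately skewed dataset of (group, label) records.
# Group A is over-represented among "flagged" records; a frequency-based
# model faithfully absorbs that skew, whether or not it reflects reality.
from collections import Counter

training_data = (
    [("group_a", "flagged")] * 80 + [("group_a", "not_flagged")] * 20 +
    [("group_b", "flagged")] * 20 + [("group_b", "not_flagged")] * 80
)

counts = Counter(training_data)

def p_flagged(group):
    """Probability the 'model' assigns to flagging a member of `group`."""
    flagged = counts[(group, "flagged")]
    total = flagged + counts[(group, "not_flagged")]
    return flagged / total

for group in ("group_a", "group_b"):
    print(group, p_flagged(group))
# group_a 0.8 vs. group_b 0.2: the sampling bias in the data has become
# the model's prediction, and a downstream system such as predictive
# policing would act on it as if it were fact.
```

Correcting this in a deployed model trained on billions of documents is far harder than rebalancing a ten-line toy dataset, which is part of why the problem persists.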

This is why it is important to identify and mitigate the harms associated with AI through comprehensive policy. That is no easy task: mitigating these harms first requires the ability to quantify their extent and severity, and because the technology is so new, it is difficult to predict exactly what its consequences for society will be. Hence, it is imperative that we treat AI with caution instead of assuming it to be a benevolent god ushering in limitless progress.

Sources

https://www.cigionline.org/articles/are-ai-language-models-too-dangerous-to-deploy-who-can-tell/

https://www.wired.com/story/large-language-models-artificial-intelligence/

https://www.ibm.com/blog/open-source-large-language-models-benefits-risks-and-types/

https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/
