
Apr 16, 2025 - 21:41
Unregulated LLMs Could be Catastrophic

As LLMs continue to be integrated into workflows and people's daily lives, some argue that mainstream LLMs such as ChatGPT are too censored, and that this censorship "lobotomizes" the LLM, cutting it off from its true power.

Having researched what uncensored LLMs can generate, I would like to paint an opposing picture. A technology that can generate any kind of text on command, with no human level of accountability, is a recipe for disaster.

The most immediate danger is the mass proliferation of hate speech. LLMs are trained on internet data, which includes massive amounts of hate speech. The ability to produce targeted hate speech, drawing on knowledge of stereotypes about each specific racial, ethnic, religious, or gender group, will lead to a deluge of "quality" (for lack of a better word) hate-speech propaganda flooding the internet.

Uncensored LLMs can also be used to spread targeted speech across languages, which terrorist organizations can abuse to produce convincing propaganda in any language, including English.

The next biggest danger is the potential for the mass production of inappropriate images. The pornography "industry" is already controversial for reports of sexual abuse and sex trafficking. With even less regulation in the AI space, I believe that within the next few years we could see the rise of realistic-looking inappropriate AI deepfakes and revenge pornography. Image and video generation models can already produce content extremely quickly. There will be an unprecedented volume of pornography on the internet, and this could have dire effects on Generation Alpha as they come of age.

Finally, even without explicitly nefarious intentions, generative AI poses danger in another form: the personalization of content. As entertainment gets more personalized, it gets more addictive. From TV, we got YouTube and Netflix "binge watching". Then we got social media and "likes". Most recently, we got personalized short-form content such as Instagram Reels and TikTok. With AI, there will be _hyper_-personalized content that is even more addictive.

Without regulation on the kinds of data we can feed LLMs and the outputs they produce, these are scenarios that I envision happening. I understand the support for free speech for LLMs, but I believe that LLM speech should be more regulated than human speech, as LLMs do not have the same accountability as humans.

An LLM, especially an uncensored one, never says no. It will say and produce things that humans would balk at saying or making. It has no inherent moral compass. It is up to society to make sure AI abides by our collective values.