by Nathan S. Allen

AI-generated content is taking over the Internet. In ‘Deepfakes: The Coming Infocalypse,’ Nina Schick estimates that as much as 90% of Internet content could be synthetically generated by 2026. Odds are you’ve read at least one news article or social media post generated entirely by AI this week and didn’t even know it. The age-old adage “Don’t believe everything you see on the internet” is more valid than ever.

What is AI-generated content?

AI content is any content generated by a large language model, or LLM. An LLM is a type of neural network, a machine-learning architecture loosely inspired by the human brain. Engineers train LLMs on an extensive data set, such as the entire public contents of the Internet, and the LLM eventually learns to mimic what it read, such as how to have a conversation, write an article, or code a program. Before training, the data is de-duplicated and cleansed of toxic or inaccurate material. This step is critically important; hateful or inaccurate material left in the training set can resurface in the model’s output.
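To make that cleansing step concrete, here is a minimal sketch of how a pre-training pipeline might de-duplicate a corpus and filter out flagged material. The hash-based de-duplication and the tiny blocklist are illustrative simplifications; production pipelines rely on fuzzy matching and trained toxicity classifiers rather than keyword lists.

```python
import hashlib

# Illustrative placeholder; real pipelines use trained classifiers,
# not keyword blocklists.
BLOCKED_TERMS = {"example-slur", "example-scam"}

def clean_corpus(documents):
    """De-duplicate documents by content hash, then drop any that
    contain blocked terms — a simplified stand-in for the cleansing
    applied to LLM training data."""
    seen = set()
    cleaned = []
    for doc in documents:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest in seen:
            continue  # exact duplicate, skip
        seen.add(digest)
        if any(term in doc.lower() for term in BLOCKED_TERMS):
            continue  # flagged as toxic or inaccurate, skip
        cleaned.append(doc)
    return cleaned

corpus = ["The sky is blue.", "The sky is blue.", "Buy this example-scam now!"]
print(clean_corpus(corpus))  # ['The sky is blue.']
```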

The benefits of AI for creators

When used correctly, AI can dramatically boost the speed of your workflow and the quality of your content. Need someone to review a draft for an upcoming article? AI can do that. Need help generating a keyword-rich site description to improve your search rankings? AI can do that, too. The most exciting aspect of AI is its ability to transform a single person into a genuinely independent creator. Tasks that creators might otherwise farm out to a friend, coworker, or Fiverr can now be handled entirely by AI, as the sketch below illustrates. But what happens when a creator’s content becomes less creative and more generated?
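Before answering that, here is what “AI can do that” looks like in practice: a minimal sketch that asks a model to review a draft. It assumes the official OpenAI Python SDK and an OPENAI_API_KEY in your environment; the model name and file path are illustrative.

```python
from openai import OpenAI  # assumes the official OpenAI Python SDK (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("draft.md", encoding="utf-8") as f:
    draft = f.read()

# Ask the model to act as a reviewer for an upcoming article.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a careful editor reviewing a draft."},
        {"role": "user", "content": f"List concrete improvements for this draft:\n\n{draft}"},
    ],
)
print(response.choices[0].message.content)
```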

Why human content is important

Humans are the most imaginative creatures in the known universe, and our imagination plays a significant role in developing our inner voice. We use our inner voice to work out problems and rationalize new experiences. Personal bloggers, social media influencers, and reporters for major news outlets use their inner voices to build content for the Internet, from sharing simple life hacks to influencing public opinion.

People commonly search the Internet for notable sources to support their claims. These sources are typically credited to create transparency and give the reader additional context. Failing to credit sources can damage a creator’s reputation and, in some cases, amount to plagiarism. AI is a source too, one trained, in part, on the copyrighted work of other creators. Yet creators take credit for AI-generated content more often than you might think.

AI content is not all bad or all good

Public opinion is an area where AI content can have severe repercussions when creators are not entirely transparent about their sources. In ‘The Role of Artificial Intelligence in Disinformation,’ authors Noémi Bontridder and Yves Poullet discuss the difference between disinformation and misinformation and how AI has been, and continues to be, used to shape the arena of public influence.

Being honest about the sources we use to craft our stories is the ethical bedrock of content creation. It lets readers differentiate between your voice, your contributors’ voices, and the voices of the authors you cite. For AI-augmented content to be ethical, it’s essential to credit AI as a source. Not doing so is deliberately misleading and unfair to the millions of creators who unwittingly helped train for-profit AI products like ChatGPT, Grok, and Gemini.

Encouraging transparency

Few of us are immune to behavioral nudges; as Neeti Sanyal puts it in their article ‘How to Manipulate Customers … Ethically’: “People aren’t fully rational.” It’s all too easy to build an AI that reinforces a political narrative, a narrow worldview, or a preference for certain brands, and to use its influence to guide consumers toward decisions they might not otherwise have made.

AI is routinely exploited for monetary and social profit, and specialized tools are emerging to help add transparency to the synthetic voices you might otherwise believe are human. One such tool is GPTZero, a web-based application that analyzes text to determine the likelihood that it was generated by “…ChatGPT, GPT4, Bard, LLaMa, and other AI models.” Media platforms might benefit from integrating with tools like GPTZero to provide an “AI Score” as a form of social proof.
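As a sketch of what such an integration could look like, the snippet below posts a document to GPTZero and reads back an AI probability. The endpoint path, header, and response fields are assumptions based on GPTZero’s public API documentation at the time of writing; check the current docs before building on them.

```python
import requests

GPTZERO_API_KEY = "your-api-key-here"  # placeholder

def ai_score(text: str) -> float:
    """Return the probability (0.0-1.0) that `text` was AI-generated,
    according to GPTZero. Endpoint and field names are assumptions
    drawn from GPTZero's public docs and may change."""
    response = requests.post(
        "https://api.gptzero.me/v2/predict/text",
        headers={"x-api-key": GPTZERO_API_KEY},
        json={"document": text},
        timeout=30,
    )
    response.raise_for_status()
    # The first entry in `documents` describes the submitted text.
    return response.json()["documents"][0]["completely_generated_prob"]

score = ai_score("This article was written by a human, probably.")
print(f"AI Score: {score:.0%}")
```

A platform could render this score next to a byline, giving readers the kind of social proof described above.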

Parroting AI-generated content without attribution is unwise, especially during these first years of the technology’s growth. Training, scrutiny, and deeper industry awareness are needed before AI can be considered a reliable and unbiased source of information.


References

Noémi Bontridder and Yves Poullet, ‘The Role of Artificial Intelligence in Disinformation,’ Data & Policy
https://www.cambridge.org/core/journals/data-and-policy/article/role-of-artificial-intelligence-in-disinformation/7C4BF6CA35184F149143DE968FC4C3B6

Neeti Sanyal, ‘How to Manipulate Customers … Ethically,’ Harvard Business Review, 2021
https://hbr.org/2021/10/how-to-manipulate-customers-ethically

GPTZero – AI Detector for ChatGPT, GPT-4, & More
https://gptzero.me

The Author

Nathan is a polyglot engineer and AI enthusiast with a knack for combining aesthetic beauty, technical performance, and future flexibility. He’s a regular open-source contributor and provides consulting for cloud infrastructure, data mesh, and web applications. Book a virtual coffee to chat about your project.

https://calendly.com/nallenscott/virtual-coffee