The Risks of AI-Generated Content

It’s no secret that there are positives to AI-generated content. It accelerates certain content development processes. It facilitates brainstorming. And it expedites the production of FAQs based on information you’ve already written. AI is poised to transform the efficiency of content development workflows across the board.

But for all the good, there are risks. Plenty of them!

And in the rush to take advantage of all the positives of AI content, it can be easy to overlook those risks. In Jurassic Park, the ever-cynical Ian Malcolm says this of the entire project: “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.”

This, in some ways, sums up the current state of affairs when it comes to AI-generated content.

AI shows much promise. But there are serious risks associated with its use that cannot be overlooked.
 

Risk #1: Brand Mediocrity

AI content is like a remix artist who never produces anything original. AI content generators take only what’s already been created and remix it in various ways. The algorithms can only produce content based on the existing material they’ve already ingested.

To be fair, AI content has some positives. It’s fast, grammatically correct, flows well, and will accept feedback without being a diva.

But it doesn’t have that human spark. It doesn’t have that creative genius that makes you…you. It doesn’t introduce new, innovative ideas and thoughts.

Can it help your business? Maybe. But there’s a good chance it will be bland. Boring. Forgettable. Like oatmeal with no maple syrup, raisins, or brown sugar.

It certainly won’t stand out from the crowd. It won’t attract attention or help you establish your authority within your industry. It won’t attract backlinks or get mentioned in other pieces. It will just be.

Sarah Waldbuesser of Destination Legal says:

…One of the biggest risks [of AI-generated content] is that you’re losing authenticity in your content. And I think it’s going to end up being a bit of a turnoff because part of what we crave in the marketing space is connection and authenticity, and it’s hard to do that with ChatGPT.

Sarah Waldbuesser, Founder, Destination Legal

Your marketing team, on the other hand, can provide deep insights found nowhere else. You can insert your personality, your knowledge, your wit, your experience, and your creativity.

AI can’t produce anything new (at least not yet). If you want to gain organic traction in Google, build backlinks, and be recognized as an authority in your industry, be better than what’s already out there.
 

Risk #2: Poor User Experience

AI doesn’t write with the user experience in mind. It just pumps out content. That’s all it does. It’s not thinking about the value the user is getting or optimizing the page around user behavior. It’s just an obedient “robot,” spewing words and sentences in accordance with its algorithms.

If your content isn’t novel and doesn’t deliver sufficient value, your site visitors won’t stick around long. They won’t explore other pages on your site.

On top of this, if your page is lacking in engagement triggers or interesting visuals or anything that compels the reader to keep scrolling, abandonment will spike. All of this translates into low conversion rates.

Risk #3: No Copyright Protection

Let’s turn to copyright law as it relates to content produced with AI technology. There is significant ambiguity regarding the ownership and copyright protection of AI-generated content. And this protection can differ depending on the country, as well. Companies relying on generative AI without understanding the related laws are shouldering legal and reputational risks, and often without even realizing they are doing so. The evolving state of copyright law as it pertains to AI today makes it even more challenging to protect your brand.

Casey Nobile, Head of Content at Skyword, says:

Right now, it’s sort of the Wild West and copyright law hasn’t really caught up to the rate at which the technology is advancing.

Casey Nobile, Head of Content, Skyword

 

As the law firm Davis and Gilbert notes, “The U.S. Copyright Office has taken the position that AI-generated works generally do not qualify for copyright protection and cannot be registered because there is no ‘human authorship’ involved.”

In other words, content produced by AI tools without additional human input cannot be copyrighted. If you crank out (copy & paste) 10 blog posts using solely AI, there’s nothing to stop someone else from copying those blog posts and using them for their own benefit.

Note: Some generative AI tools state in their terms of service that any rights in the output are assigned to you. But that’s a contractual arrangement, not a copyright registration, so you’d need to check the terms, conditions, and fine print of the generative AI tool you’re using to confirm what actually applies to your situation.

What about content that is partially written by humans and partially by AI? That’s where things get blurry. Currently, the U.S. Copyright Office will consider the degree of human authorship when determining whether to grant a copyright.

If you use an AI tool to write 50% of a blog post, article, or ebook, the Copyright Office may or may not give you ownership of it.

Waldbuesser notes:
 

…technically, under ChatGPT’s terms of service, you own the copyright to whatever is created. However, according to the copyright office and the government, and the laws that are in place, you can only own something that is human-generated. So, to get copyright protection, to get it registered with a copyright office, it has to be human-generated. Now, where the gray area is coming into place is when someone uses ChatGPT and then updates it with their own spin.

Sarah Waldbuesser, Founder, Destination Legal

Ryan M. Martin, associate attorney of the law firm Loeb & Loeb LLP, similarly points out:

The most notable and immediate risk [of AI-generated content] falls under copyright. The open copyright issues are currently being litigated in several cases. Most consequentially to an end user, if a court were to find that the training or use of the tool involves the unauthorized copying or creation of derivative works, and that copying or creation is not protected by a fair use defense, there would be a significant litigation risk against the tools and end users.

Ryan M. Martin, Associate Attorney, Loeb & Loeb LLP

Martin continues, “There is no shortage of potential claimants whose works were used in the training corpus of materials for AI tools who could sue seeking statutory damages up to $150,000 for willful infringement, per violation.

“Most tools do not offer meaningful protection for their end users, meaning end users – like agencies and advertisers – could be left having to defend potential lawsuits.”

 

Risk #4: False Information

Generative AI tools do not have any fact-checking built into the content development process. AI platforms have been fed massive data sets, but a major problem is that some of the information is not correct and some of it is actually highly biased.

The result is that AI content tools will sometimes state things as facts which are not, in fact, fact. They’ll confidently create quotes out of thin air, list statistics that are made up, and sometimes produce content that is simply not true.

Nobile states:

[AI has] no critical thinking skills. Because of its fluency and proficiency, it tricks people into thinking that it’s intelligent, but it is “artificial” intelligence for a reason. Not actually intelligent enough to make critical decisions about the sources that it’s using or where it’s pulling from. It’s just trying to generate a satisfactory answer.

Casey Nobile, Head of Content, Skyword

And what happens when AI eventually trains on other AI-generated content? A group of researchers in the UK and Canada found that models trained on AI-generated content degrade, a phenomenon now known as “model collapse”: “We find that use of model-generated content in training causes irreversible defects in the resulting models.”

This can be dangerous, especially if you use AI to write about sensitive topics like health or finance or law. People make significant decisions based on what they read on the internet. If you publish something that’s not true, you run the risk of misleading people in massive ways.

Not only is this perilous for individuals, it’s dangerous for your brand as well.

If you’re going to use AI-generated content in some manner, you should, at a minimum, fact-check it. Just because AI produced it doesn’t mean it’s true.

Risk #5: Biased Content

There is a lot of bad content out there. Biased content. Racist content. Sexist and homophobic content. Undoubtedly, a lot of this bad content has been consumed by AI learning algorithms.

Yes, most AI tools have checks in place to keep them from publishing the worst of the worst, but that doesn’t mean AI won’t produce biased content.

Hopefully, you would catch the worst instances of this biased content, but what about more subtle content? What about content that unintentionally contains biases against certain races, genders, or religions?

If you want your writing to be informed and inclusive, don’t leave that up to the discretion of an algorithm.

Risk #6: SEO Risk

Google cares very much about content that is written by experienced experts who are trustworthy authorities. Its E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) guidelines make this very clear. Google wants content that is highly valuable, can be trusted, and is written by the best.

Generative AI content is the opposite of E-E-A-T. It’s generic content written by an algorithm. As I’ve noted, it’s often mediocre. And it can’t always be trusted to be accurate.

If you rely on AI-generated content, you run the risk of performing poorly in Google. This past year, Google released its Helpful Content update, which is specifically designed to ensure that the best content ranks for any given query.

Regarding the update, Google states:

“The helpful content update aims to better reward content where visitors feel they’ve had a satisfying experience, while content that doesn’t meet a visitor’s expectations won’t perform as well. How can you ensure you’re creating content that will be successful with our new update? By following our long-standing advice and guidelines to create content for people, not for search engines. People-first content creators focus first on creating satisfying content, while also utilizing SEO best practices to bring searchers additional value.”

Another SEO risk to consider: if all of your competitors are using generative AI content based on the same keywords, there’s going to be a lot of highly similar content in your industry. As mentioned, AI-generated content lacks E-E-A-T, lacks differentiation, and does not deliver new, creative, or innovative value for the reader.

What makes you think that Google is going to pull your particular AI content out of the heap and place it in the Google top 10?

Risk #7: Branding Risk

Do you want to create a strong, resonant, amazing brand that captures people’s attention? Do you want your messaging to cut through the drone of boring content? Do you want a strong brand personality that’s empathetic to the audience? Then don’t rely on AI.

AI can’t do amazing.

AI can do adequate. AI can do okay. AI can do mediocre.

But surely you want more than that, right? Surely you want your brand to be something that stands out and that your audience remembers. If that’s the case, don’t look to AI for that. An amazing brand will come from you. Not a robot or algorithm.

There’s a branding angle here, too: as more and more brands lean on the same AI tools, online content turns into a veritable sea of sameness. If all of your competitors are pumping out nearly identical content, where’s the brand differentiation? Where’s the competitive advantage?

Staying Up-to-Date

What’s an effective way for marketers to stay up-to-date on the legal considerations and ramifications of AI-generated content as the technology continues to evolve?

Martin answers:

Certainly keeping an eye on ongoing litigation, talking to your legal teams, and having regular conversations with your stakeholders who are monitoring the technological and legal development. We are certainly in early days of this technology, but it is here to stay. Companies will need to stay on top of the legal landscape in order to continually balance the risks against the goals and benefits of using the technology. The lawyers at Loeb & Loeb LLP are constantly monitoring the landscape and speaking and publishing, so subscribe to our newsletters and blogs!

Ryan M. Martin, Associate Attorney, Loeb & Loeb LLP

 

In addition, you’d be wise to monitor the U.S. Copyright Office’s “Copyright and Artificial Intelligence” page. It’s a good source for new, related announcements, as well as links to events and resources.

 

Final Thoughts

I believe that the best content is published by humans for humans. The goal is not quantity but quality and impact.

Rather than have AI publish 100 articles in a month for your brand, focus on publishing a smaller number of pieces that resonate deeply with your audience and build brand affinity. Ultimately, that’s what’s going to move the needle for your business.

If you’re passionate about your brand (and you should be), don’t put it in the hands of algorithms. Put real-life content creators on the job. If you want help building a content marketing strategy that gets real, lead-generating results, hire a B2B content strategy agency like Stratabeat.