
YouTube Cracks Down on AI and Fake Videos

In a new help article titled “Disclosing use of altered or synthetic content”, YouTube now requires creators to reveal when their videos contain realistic altered or synthetic content. This marks a shift for the platform, which previously let such content run without labels.

Why the new focus on transparency?

YouTube seeks to empower viewers to make informed decisions about the content they consume. Deepfakes, AI-generated videos, and other manipulated content have the potential to mislead.

What do creators need to disclose?

Anything that could make a viewer believe they’re seeing something real that didn’t actually happen. This includes:

  • Digitally inserting someone into a scene where they weren’t present.
  • Fabricating audio or video to make it seem like someone said or did something they didn’t.
  • Creating hyperrealistic depictions of events, places, or people that never existed.

Not everything needs a label

Minor edits, artistic filters, or clearly fictional content (think “unicorn rides”) don’t require a label. Neither does using AI for behind-the-scenes production help, such as generating scripts, content ideas, or captions.

Can YouTube tell what’s real and what’s fake?

YouTube wants creators to disclose their AI-generated content, but can the platform even tell the difference?

Not reliably. Even OpenAI, the company behind ChatGPT, admitted its own AI text classifier was far from foolproof. “Our classifier is not fully reliable,” the company stated: it correctly flagged a measly 26% of AI-written text while mislabeling 9% of human-written text as AI, and OpenAI quietly retired the tool in 2023 over its poor accuracy.
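
To see why those numbers matter, here’s a quick back-of-the-envelope sketch in plain Python. The 26% and 9% figures are OpenAI’s reported rates; the share of AI-made uploads is purely an assumption for illustration:

```python
# Back-of-the-envelope: how trustworthy is a "this is AI" flag?
# Rates reported by OpenAI for its (now retired) text classifier:
TRUE_POSITIVE_RATE = 0.26   # AI-written text correctly flagged
FALSE_POSITIVE_RATE = 0.09  # human-written text wrongly flagged

# Assumption (not from OpenAI): suppose 20% of uploads are AI-generated.
AI_SHARE = 0.20

correct_flags = AI_SHARE * TRUE_POSITIVE_RATE            # AI content caught
wrongful_flags = (1 - AI_SHARE) * FALSE_POSITIVE_RATE    # humans hit anyway

precision = correct_flags / (correct_flags + wrongful_flags)
print(f"Chance a flagged upload is actually AI: {precision:.0%}")
# ~42%: worse than a coin flip, while most AI content still slips through.
```

Under those assumptions, a detector with OpenAI-level accuracy would be wrong more often than right about the uploads it flags, which is exactly why automated enforcement is such a shaky foundation.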

So, don’t expect YouTube to do much better. For now, YouTube’s taking creators at their word – a risky bet when it comes to the ever-evolving world of deepfakes and misinformation.

What happens if creators don’t play by the rules?

Undisclosed content could earn a label from YouTube itself, and repeat offenders risk content removal or even YouTube Partner Program suspension.

Now picture this: you put your heart and soul into a video, using your own music, your own footage. Suddenly, it’s flagged as “altered content” by a competitor or a disgruntled viewer. Sound familiar?

This opens the door to a whole new wave of takedown abuse, reminiscent of creators falsely hit with Content ID claims on their original work. YouTube’s new policy, while well-intentioned, could easily become a weapon in the hands of those seeking to stifle competition or just cause trouble.

Opinion: It’s about time

This feels like YouTube finally grappling with the ethical dilemmas of the AI age.

While there’s always a risk of overreach, giving viewers basic context to understand what they’re watching is a positive step towards fighting rampant misinformation.