YouTube now requires disclosure of AI-generated content


Imagine an AI-generated news report about a fire in a historic building, only to learn the incident never happened. Scary, isn’t it? In response to this kind of misleading content, YouTube is rolling out a new policy: creators must disclose when their videos contain “altered or synthetic” content, such as material made with generative AI (artificial intelligence), that viewers could mistake for the real thing. The rule targets realistic-looking content, not obviously fictional material.

This means creators must flag videos where they have:

  • Used deepfakes to put a real person’s face on someone else’s body (imagine finding yourself in one).
  • Altered footage of real events or places to show something that never happened, such as a real building being blown to pieces.
  • Generated realistic-looking scenes for fictional stories that could pass for real footage.
  • Clearly unrealistic content does not need a label, and neither do routine production touches. A compilation of kitten clips with perfect color balancing is exempt.

The goal? Transparency. YouTube’s philosophy is that viewers should know what’s real and what’s not. Labels matter most for sensitive topics such as news, finance, or elections, where misinformation can cause serious problems.

So the next time you watch a YouTube video, keep an eye out for the new “Altered or Synthetic” disclaimer. It’s a small label, but it gives viewers the power to reward or dismiss content based on how trustworthy they judge it to be once they know how it was made.

Source
