YouTube is updating its harassment and cyberbullying policies to prohibit content that "realistically simulates" deceased minors or victims of violent events narrating their own deaths. The changes, set to take effect on January 16, are aimed at addressing concerns raised by the use of artificial intelligence to recreate the likeness of deceased or missing children in true crime content.
Content creators have reportedly used AI to generate childlike voices describing the deaths of victims in high-profile cases, such as the abduction and murder of James Bulger, the disappearance of Madeleine McCann, and the killing of Gabriel Fernández. The disturbing nature of these AI narrations has sparked criticism, with families of the victims calling the content "disgusting."
Under the updated policies, YouTube will remove violating content and issue strikes against the offending channels. A first strike results in a one-week suspension of video uploads, livestreams, and Stories. Penalties escalate for repeat offenses within a 90-day period, potentially leading to the permanent removal of the user's channel from YouTube.
This policy change follows YouTube's recent introduction of responsible-disclosure requirements for AI content and tools for requesting the removal of deepfakes. Users must now disclose when they create altered or synthetic content that appears realistic. Failure to comply with these disclosure requirements can result in penalties including content removal, suspension from the YouTube Partner Program, or other consequences.
YouTube's move is part of a broader industry trend toward the responsible use of AI-generated content. In September 2023, TikTok introduced a tool that lets creators label AI-generated content and began requiring disclosure when posting synthetic or manipulated media depicting realistic scenes. YouTube's policy reflects a growing recognition of the need to regulate AI-driven creation tools and to establish guidelines that prevent the spread of misleading or harmful content.