This does sound like a decent step toward fighting disinformation on the platform. However, much of the onus still falls on the people who were the subjects of the deepfakes in the first place. After a government official, journalist, or political candidate uploads a video verifying their identity, YouTube will flag potential deepfakes, which they can then report and request to have removed. However, “detection does not guarantee removal,” reads the blog. “YouTube has a long history of protecting free expression and content in the public interest—including preserving content like parody and satire, even when used to critique world leaders or influential figures. We’ll continue to carefully evaluate these exceptions when we receive requests for removal.” We’re sure the line between satire and disinformation on YouTube will be an easy and uncontroversial one to draw!
If you can’t get your likeness removed, you might at least be able to profit from deepfakes of yourself before long. “Will likeness detection ever become even more like Content ID, where it goes beyond just takedown requests and could help you make money from AI versions of yourself?” asks Rene Ritchie, YouTube’s Creator Liaison, in a video attached to the blog. “Right now, YouTube’s absolute priority is on safety and protection, but YouTube is exploring similar future paths for likeness that could open up entirely new revenue opportunities for creators and artists to manage, authorize, and benefit from AI likeness because ultimately YouTube wants a future where AI helps creativity thrive and that means building the legal and technical frameworks to ensure that creators and artists stay in the driver’s seat.” This potential has already been discussed in other corners of the entertainment industry, according to a story published by The Ankler last week. As SAG-AFTRA negotiates with the AMPTP, we could see these kinds of royalties take off in the near future.