First off: Netflix seems to be completely okay with its various creatives using genAI to brainstorm, noting that, at worst, creatives need to “socialize” the intended use with their Netflix contacts. (Don’t look at us; we’re not the lawyers who were apparently uncomfortable with just saying “Talk to us about it.”) That includes creating temporary assets—provided that partners don’t feed any of Netflix’s own tasty proprietary data or information into the machines in order to generate them. (A lot of these guidelines boil down to “If you give these companies permission to feast on our data, it’s your ass,” honestly.)
Things get a lot more rigorous once film and TV creators start wanting to include actual AI assets in the finished product, with the guidelines stating that you can maybe skate by without much oversight if the object in question is a background element that never gets alluded to in the scene. Anything beyond that is going to require far more extensive review for “legal or reputational implications,” though, and that goes double for any elements that will actually be the focal point of a scene. (The doc specifically notes that something like adding a second killer doll to Squid Game‘s infamous Red Light, Green Light sequences would need written consent from the powers that be before it could be included.)

Things get even stricter on the topic of replicating or altering performances, where Netflix asserts that creators are expected not only to get written permission, but also to follow guidelines from the acting guilds (the subject of much of the fighting during the 2023 strikes). There’s some more detailed wording about when you are or aren’t expected to get consent from an actor before using AI to alter anything—requirements are more lax on stuff that’s become industry-standard, like using computers to fix visual mistakes—but the doc notes that, for sure, any genAI models created from a performer’s voice or likeness won’t be re-used in any future projects without explicit permission. (That’s been one of the big fears underpinning much of the Hollywood fighting on this topic, i.e., that studios could just feed an actor’s performance into the computer and then generate new takes in perpetuity.)
For those holding to the basic, extremely understandable premise that generative AI is a blight on the arts, full stop, removing jobs (like creating those same background elements) that a human artist could just as easily do, these guidelines are going to be little more than palliative care. That being said, they do suggest that Netflix is adopting caution as its default posture, and really doesn’t want to get caught off-guard by its creators using AI without telling the company about it. None of this, after all, is “Absolutely don’t do it”—except for “giving our data to third parties,” of course—but it does signal that the streamer is leaning toward heavy oversight as it wades into the digitally generated, multi-fingered quagmire.