One attorney quoted in the piece does note that the added warnings may accomplish one thing: When the inevitable court cases—like the one Disney and Universal are currently bringing against “bottomless pit of plagiarism” Midjourney—eventually go to trial, there is some legal strength in explicitly telling people they don’t have your permission to do something. (Judges and juries alike tend to be harsher on groups that straight-up ignore a warning like this than on those who can semi-plausibly shrug and go “But we didn’t know how you felt!”)
What these warnings can’t do, though, is clear the basic hurdle that’s going to crop up in many of these cases: Proving that the tech companies trained their AI on Universal’s specific copyrighted content in the first place. Sure, it’s pretty easy to prove when AI is used to generate something close enough to copyrighted content to trigger legal action; you just look at the AI Shrek and our real, beautiful Original Shrek, and compare the two. But it’s way harder to look at a picture of “My Totally Original Big Green Scottish Swamp Man” and prove that the AI that spat it out was actually trained on the genuine article, since these big training data sets tend to jam huge amounts of material together. As one expert pointed out, proving this stuff is also complicated by the PR machines of the studios themselves: Even outside the actual movies, studios like Universal put out so many images and trailers for public consumption that it’s easy for Team AI to train on just the publicly available material, possibly skirting the warnings entirely.
On top of all that, there’s still a very basic unresolved legal question at work here, i.e., whether scraping copyrighted works for training purposes falls under “fair use” in the first place. (A federal judge ruled back in June that scraping books to help power a large language model was fair use, but that was one case, and nobody is expecting it to stand as the final word on the matter.) The upshot is that we’re headed for a lot of massive legal battles on this topic, possibly with the Trump White House (and its copyright-hostile “AI Action Plan”) trying to put its thumb on the scales—and in those fights, it’s not at all clear how much a “We said you can’t!” disclaimer is going to weigh.