In what feels like an arsonist cheerfully offering to sell you a new and improved kind of hose to put out the fire he just set inside your house—and, quick note to future readers, “hoses” were a way we in the Wet Times used to transport a substance called “water” for various life-adjacent purposes—Google has announced it’s rolling out a new tool that will let users identify content created with its own generative AI tools.
Now, to be very clear, the company’s SynthID Detector is not a catch-all tool for identifying any and all gen-AI content. That’s because it only works on images, videos, audio, and text that have been embedded with Google’s own SynthID watermark, which, understandably, is only going to happen on stuff people have created using Google’s various tools. (NVIDIA has apparently partnered with the tech giant to embed the invisible-to-humans watermark into videos generated by its own slop-creator, Cosmos, too.) Most importantly for those of us trying to keep our sanity in the current hellscape we’re all living in, SynthID can do dick-all to identify material generated via OpenAI’s various products, including ChatGPT, so all it can really tell you is whether the AI weirdo you’re trying to catch is a big fan of Google’s stuff, specifically.
Detecting AI-created material is, unsurprisingly, a hugely complicated topic, one that's already grown into a burgeoning arms race. On one side, you have the detectors, which purport (with dubious claims of accuracy and authenticity) to be able to ID whether any particular piece of content was spewed out of a computer's butthole. And on the other, there are the butthole farmers themselves, who know all the techniques the detectors are using to catch them, and are trying to find new ways to dodge or fool them. Watermarking, like what Google's doing with SynthID, can work—at least until people find new ways to remove it (or insert it into human-created works, just to create a little more chaos). And, again, it has to be opt-in: The easiest way to not get caught by SynthID Detector, presumably, is to use tools that don't place the watermark there in the first place.
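For the curious, here's a deliberately dumbed-down Python sketch of how a statistical text watermark of this general family tends to work. To be very clear, this is not Google's actual SynthID algorithm (the real thing nudges token probabilities inside the model during generation and uses a far fancier detector), and every name in it (SECRET_KEY, is_green, and so on) is invented purely for illustration.

```python
import hashlib
import random

# Toy statistical watermark, for illustration only. Not Google's SynthID.
# The idea: a secret key deterministically marks ~half the vocabulary as
# "green." The generator prefers green words; the detector just counts how
# green a text is, and flags anything far above the ~50% chance baseline.

SECRET_KEY = "not-googles-real-key"  # hypothetical key, invented for this demo


def is_green(word: str, key: str = SECRET_KEY) -> bool:
    """A word is 'green' if a keyed hash of it lands in the bottom half."""
    digest = hashlib.sha256((key + word.lower()).encode()).digest()
    return digest[0] < 128  # pseudorandomly selects ~50% of all words


def watermarked_choice(candidates: list[str], rng: random.Random) -> str:
    """Pick a word, heavily preferring green ones (this is the 'embedding')."""
    green = [w for w in candidates if is_green(w)]
    return rng.choice(green) if green else rng.choice(candidates)


def detect(text: str, threshold: float = 0.75) -> bool:
    """Flag text whose green-word fraction is far above the ~50% baseline."""
    words = text.split()
    if not words:
        return False
    fraction = sum(is_green(w) for w in words) / len(words)
    return fraction >= threshold


if __name__ == "__main__":
    rng = random.Random(0)
    vocab = ["the", "a", "cat", "dog", "sat", "ran", "quickly", "slowly",
             "on", "under", "mat", "rug", "and", "then", "it", "he"]
    generated = " ".join(watermarked_choice(vocab, rng) for _ in range(40))
    print("watermarked text flagged:", detect(generated))  # True
    # Unmarked text hovers near the 50% baseline, so it's usually not flagged.
    print("human text flagged:", detect("the cat sat on the mat and then it ran"))
```

Note how the whole scheme collapses if the generator simply never calls the marking step: a model that doesn't bias its word choices leaves the green fraction sitting at the chance baseline, which is exactly the opt-in problem described above. Paraphrasing watermarked output scrubs it for the same reason, since swapping words wrecks the green count.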
Anyway, Google announced the detector (which does seem kind of interesting, insofar as it doesn't just flag images wholesale, but can ID which parts were specifically made with AI) yesterday, although it won't be available to the public for some time. There's currently a waitlist for journalists and researchers to get access to the tech.
[via The Verge]