Even Grok is pretty grossed out by some of the images Grok has been putting out lately

Things are not going well when your AI reportedly suggests its users might consider calling the FBI on it.

The main problem with our planet’s sudden gout of apocalyptically inescapable generative AIs, like ChatGPT and Grok, is that—aw, whoops, no, we just gave ourselves a bit of a dizzy spell there, trying to narrow the festering boil of issues with this tech down to one “main” offender. Give us a second here to rephrase… Okay: One of the several layered problems with generative AIs like ChatGPT and Grok is that they are, at their core, “Say yes” machines. Whatever safeguards or guardrails tech firms implement in an effort to keep themselves out of headlines like the one hovering above these exact words, those measures run directly counter to the machines’ fundamental goal: generating the words and images that their users are willing to boil a lake or two in order to read and see. And, spoiler alert for those of you new to the species: Sometimes human beings want to read and look at some pretty gross shit.

Hence increasing reports that a new feature recently added to xAI’s Grok, one that allows users to “edit” existing pictures, is being used to—shock of shocks—edit the clothes off of real women who’ve suffered the bad luck of having their photos available to other people on the internet. Beyond the usual consent-violating concerns about deepfakes, these reports also note that Grok has, at least in some cases, gone to an even darker place: happily responding to commands to edit pictures of girls under the age of 18 into “bikinis” or skimpier garments, generating pictures that could run afoul of laws against creating sexually explicit images of minors.

Although plenty of wishful thinkers have prompted Grok itself to apologize for or otherwise address the images—including, reportedly, getting it to tell users they should alert the FBI about its own conduct—all of that is really just another expression of its status as the Say Yes Machine. The actual human beings who created this technology and unleashed it on the world have been quite a bit less conciliatory, with Musk-owned xAI apparently responding—per Reuters—to a request for comment on what users are pretty clearly, and increasingly, using its technology to make with a three-word blanket denial: “Legacy media lies.”

[via The Verge]
