In the final days before the DNC, the online left has been split over the most important issue facing the Democrats. To some, it is our would-be president’s policy platform; to others, it is a series of photorealistic images depicting her “suggestively eating fruit.”
Last week Grok, an AI entity that lives on X, added image-generating capabilities: If you can dream it, Grok will do it, no questions asked. A thread of the bot’s most “unhinged” creations included a glamorous maternity shoot in which Donald Trump cradles the belly of a heavily pregnant Kamala Harris; an intimate photo of Vladimir Putin and the prophet Muhammad having a romantic moment on a velvet sofa; and a shot of Elon Musk, armed with an assault rifle, standing outside “Uvalde Elementary School,” blood splattered across the sidewalk behind him.
While these images range in quality from “amusing” to “offensive” to “the stuff of my screamingest nightmares,” all of them are very obviously fake. But commentators are panicking over them all the same—not because people might mistake the images for reality, but because Grok isn’t stopping people from making them.
“I just want to be clear. This AI model has zero filter or oversight measures in place,” reads a representative post on X, while a piece in Rolling Stone warned that “people can do almost anything with Grok.”
The “do anything” feature of Grok stands in sharp contrast to bots like ChatGPT or Google Gemini, the latter of which was so infamously hamstrung by DEI-compliant programming that asking it for pictures of Nazis resulted in images of glamorous, chiseled black men in full SS regalia. (Grok’s Nazis, meanwhile, are historically accurate, with skin of the purest alabaster.) When I asked ChatGPT for help last year in brainstorming some fictional murders, it literally scolded me for being a crime novelist: “It’s crucial to promote empathy, respect, and the well-being of readers. Instead of focusing on the act of murder, I can help you brainstorm alternative scenarios that involve tension, conflict, and emotional turmoil.”
When I asked Grok the same question, it was downright gleeful: “Ah, crafting a murder mystery, are we?” the bot replied, and suggested seven fictional deaths.
But if you think Grok’s willingness to comply with such prompts is dangerous, I have some bad news for you about what lurks in the endless teeming darkness of the human imagination. Wait till you get a load of this Renaissance painting, or this classic work of Russian literature—or how about this award-nominated thriller featuring a severed nose in a garbage disposal, currently available on Amazon!
Imagining terrible things has always been part of the human condition, and we have always used the tools at our disposal to materialize those horrors, not least because art is a way to safely examine them at arm’s length. You may be disturbed by AI-generated images of Kamala Harris as a communist dictator (or analog counterparts like this one, in which Kathy Griffin poses with an alarmingly convincing replica of Donald Trump’s severed head), but the idea that someone needs to stop them from being made isn’t just deeply authoritarian; it’s practically antihuman. We are a species defined by our thirst for creative expression and our knack for discovering new and innovative ways to express it, be it with a new paintbrush or a new program.
And we’ve also adapted, throughout the centuries, to an information landscape reshaped by the printing press, the television, and the 1990s-era tabloid newspaper featuring the fakest of fake news. If society’s truth-seeking instincts could survive a decade of seeing Bat Boy every time we went through the supermarket checkout line, I have no doubt we’ll figure this one out, too.
Kat Rosenfield is a columnist at The Free Press. Read her piece, “In Defense of Violent Rap,” and follow her on X @katrosenfield.