Saying "nope" to Nvidia's "yassify" tech

Nvidia's new graphics technology reinforces bad behavior through the lie of AI.

It’s called DLSS 5—that is, “Deep Learning Super Sampling,” and apparently we’re already on the fifth one. Nvidia’s plan with DLSS, at least initially, was to improve graphics on older tech that might struggle to support modern dazzle, simulating the experience of having a more powerful, expensive computer (which is, by the way, increasingly scarce). But with the fifth version of DLSS, revealed at GTC 2026, what Nvidia offers isn’t just a sharper, smoother, more impressive picture.

Using AI neural rendering, DLSS 5 hallucinates its way into the gaps, ostensibly to reproduce dynamic lighting and ray tracing. Instead, the faces of the characters Nvidia has used to demonstrate the technology—principally, Grace Ashcroft of Resident Evil Requiem—have been not only revised but entirely replaced: casualties of what AI is being trained to think we all want. The tech industry (or rather, the humans in tech who care about the actual business, and ownership, of creative labor) has, rightfully, lost its absolute mind over Nvidia pushing this on us. We’re through the looking glass now.

When industry veteran Will Smith called out DLSS 5 as a glorified “yassify filter,” his critique immediately entered the popular parlance. Admittedly, I’ve been a vocal skeptic of the slippery slope of arbitrating what cup size constitutes an aesthetically ‘serious’ videogame character, but even I had to immediately concede Smith’s point. DLSS 5 really has veered into the realm of “Bold Glamour”—that is, the Instagram or TikTok filter that is the direct antecedent of Mar-a-Lago face. Grace Ashcroft, by the way, is an FBI analyst. The effect is too much like walking in on your boomer mom who watches fire-department procedurals on network television. 

This raises an obvious question: Just how sexy do we really need our videogame characters to be? Is this sexy at all? It’s an uncomfortable question that goes back at least as far as Lara Croft: Tomb Raider’s polygon boobies, but which reached a fever pitch in the months and weeks right before GamerGate, when many gamers felt that game reviewers and tech critics were trying to prevent or totally outlaw sexiness in order to ruin gaming—just to be mean, to prove a point. Now, 12 years later, DLSS 5 has taken the supposed ‘side’ of that hypothetical gamer: the player who might demand maximum protagonist hotness as a sort of aesthetic default.

“This kind of tech undermines the artistic intent of countless artists, animators, lighting engineers, and designers in games,” John Warren of VGBees tells me. Indeed, this isn’t the democratization of taste; it’s the flattening of art and creative labor in favor of whatever the lowest common denominator might want. Which is something AI can only guess at, to our collective peril. This realtime “lighting” filter also diminishes and devalues the skill of the original work. The entitlement! The extremely questionable decision-making of it all! (It also, not for nothing, introduces questions about artists’ consent and where the hell all the “fair use” laws went.)

“Imagine if this tech also decided Grace Ashcroft’s voice needed to match the visual version DLSS 5 renders,” Warren continues. “You’d laugh the person who wants that out of the room, I hope. But it’s transformational, right? Warping intent for technical ‘improvement’ is anti-art.” Warren’s point yields another very real problem: A new, yassified face grafted on top of an ‘old’ voice would be uncanny. It introduces dissonance, a strange mismatch, a conflict of two interests.

In a swiftly deleted post, likely a variation on a gag meme that in this instance felt far too true, someone had replaced Harrison Ford—that is, the face of videogame and movie hero Indiana Jones—with the absolute blandest looksmaxxed Chad, a face developed not with the artistry of human hands or hearts, but rather, by analyzing the already-blandest human faces in the world and settling on whatever horrific facemorph distortion emerges from it as the ultimate standard for male beauty: a lantern-jawed meme of a man.

Where does this lead? Nowhere good. We iterate on aesthetic standards; one minor tweak begets the next. Have you ever seen those social-media stars who’ve spent way too long using a FaceTune app? Or someone who’s had what was maybe one too many surgical revisions? Distortions echo upon other distortions, and soon you’ve blown past dysmorphia entirely, embracing something much more alien, something uncanny. This is where a disordered self-image meets your bank account: Someone will always have one more little tweak to sell you. Discernment, human intervention, is needed, because, if anything has become abundantly clear, it’s that AI cannot pump its own brakes.

This is really what AI is all about—crowdsourcing artistic vision. Not to get too “death of the author” about it, but gamemakers’ intent really does matter less than whatever we, the players, ultimately put of ourselves into the work. And how we feel matters—we are never merely consumers of a piece of media or literature (“content”); rather, we are in constant breathing dialogue with it. That’s because the reader, or player, is a person who is alive, with their own personal context and biases. The instinct to want to challenge a piece of media, as opposed to consuming it uncritically, is a good thing.

It’s one thing to want to challenge something, and quite another to feel absolutely threatened by it, to want to seize control of it, to dominate it: to change it to suit us, seizing control of someone else’s creation, reskinning it, then assuming credit for what amounts to a fresh coat of paint. It is “mod as authorship,” except in this specific case, the mod itself is artless. (Artful mods do exist! What Nvidia is offering here is not it!)

For years, gamers have wanted to seize greater authorship, greater authority, over the games they play. That isn’t new; they’ve been sending death threats to game developers for as long as there have been games. It comes down to control issues. The initial impulse is almost understandable, if pathological. When you love something, when you value it, your instinct might be to also lay some sort of claim to it, to put it on a pedestal or in a little box, to trim it like a bonsai, to clip its wings, to feel territorial, to try to possess it: to declare war on it. To escalate threats, to saber-rattle, to enact a terror campaign until the developers and narrative designers and artists cede to a list of demands.

Already anticipating where this trail leads—how catering to this initial impulse can accelerate—games journalist Leigh Alexander wrote in 2014, effectively, “Creators, you do not have to go there,” in a piece that launched a thousand ships, inciting further mob campaigns under the auspices of a “consumer movement.” One thing quickly became apparent: We may collectively be in the thrall of billion-dollar corporations, but corporations, in turn, will absolutely defer to a mob’s will.

Here is the seductive promise of the mob, though: If anything ever goes wrong, the hope is, no one person can be blamed. The mob’s promise is the same as the promise of AI itself: an end to personal accountability, to ever being held responsible for one’s own individual thoughts, actions, behaviors, or freaky-ass desires. For good or ill, thoughts, actions, and even desires are the things that make you individually you. Those are the things to lay claim to, to own. You shouldn’t sacrifice or outsource those to the cloud or the AI or the amorphous mob. Unfortunately, AI can never be held accountable for any harms done because, very much like a mob, there’s no one ‘there.’ Moreover, AI cannot give you what you want, only its best neural guess at what it thinks you want. That is to say, AI can only tell you what you want.

It seems as if there is a larger push against seeing ‘real’ people, even pretend-real people: against fictions that have the audacity to depict real physical flaws real individuals might have. Humans, the billionaire consensus seems to be, have failed you, disappointed you. Real life has failed you. Or you’re failing it. Hard to say. Imagine if you could go somewhere better, more utopian, where conversation never stalls and the girls are seriously hot, and your grandma feels alive, at least. Maybe better than alive! And there’s no friction, no cognitive load, because the AI will help you think, will even take over your thinking for you, if you start to get too tired, too pained, by all the thinky-ouchies. 

Endemic to the DLSS 5 brouhaha is this bizarre expectation of having to like everything all the time—to have every personal whim catered to. Unfortunately, with the persistent availability of on-demand Everything, the discomfort of disliking stuff can start to feel foreign, even intolerable. That is why we should all be friction-maxxing our 2026, embracing the imperfect. Maybe we, individually, can’t stop AI’s harms—therein lies the paradox, where collective action is required for sustained change—but we can each enforce checks and balances in our own lives, learning to conscientiously pump the brakes when needed.

 