DLSS 5: Why It Isn’t Really Free Performance (Hidden Costs)

There is a version of this article where I just explain what DLSS 5 is and tell you it looks pretty good and leave it there. I am not going to write that article. I’ve been trying to figure out why this announcement bothered me as much as it did and I think I’ve got it, or most of it anyway, so bear with me because this is going to take a minute.

Okay so. Jensen Huang got up at GTC on March 16 and told everyone DLSS 5 was “the GPT moment for graphics.” The room went with it. The demo looked good. Todd Howard called the tech “amazing.” Ubisoft’s Charlie Guillemot said his team could finally build the worlds they’d always wanted. CAPCOM’s Jun Takeuchi talked about cinematic atmosphere. Starfield, Resident Evil Requiem, Assassin’s Creed Shadows, Hogwarts Legacy, Oblivion Remastered, a bunch of others. Fall release. All very exciting, apparently.

Gaming Twitter watched the same demo and arrived at a completely different conclusion. The word that stuck was “yassified,” which I’ll get to in a second. Worth understanding what DLSS actually does first though, because the reaction makes a lot more sense in context, and what this technology does is genuinely pretty strange.

DLSS has existed since 2018 and started life as a performance trick. You render your game at a lower resolution, an AI figures out what the higher-res version probably would have looked like, the GPU does less work, the frame rate goes up. It worked well enough that over 750 games eventually shipped with it, at which point it sort of stopped being a feature anyone made a conscious decision about and became more like infrastructure. DLSS 4.5, out this past January, had gotten to the point where the AI was generating 23 out of every 24 pixels you actually see on screen — not reconstructing them from something the GPU drew, but conjuring them from scratch based on the 1 real pixel in 24. At some point the “game image” stopped being a picture and became more of a prompt. I’m not sure exactly when that happened but I think it already had by the time most people noticed.
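If that 23-in-24 figure sounds implausible, the arithmetic behind it is simple: the rendered fraction is one over (area upscale factor × total frames per rendered frame). Here’s a back-of-the-envelope sketch — my own illustration, not NVIDIA’s published math, and the specific combination of a 2x linear upscale with 5 generated frames per rendered frame is just one hypothetical pairing that happens to land on 1 real pixel in 24:

```python
def generated_pixel_fraction(upscale_factor: float, frames_generated: int) -> float:
    """Fraction of on-screen pixels produced by the AI rather than rendered.

    upscale_factor: linear resolution multiplier (2.0 means each axis is
    upscaled 2x, so 1 rendered pixel becomes 4 output pixels).
    frames_generated: AI-generated frames inserted per rendered frame.
    """
    pixels_per_rendered = upscale_factor ** 2   # upscaling scales with area
    frames_total = 1 + frames_generated         # the rendered frame plus generated ones
    rendered_share = 1 / (pixels_per_rendered * frames_total)
    return 1 - rendered_share

# 2x linear upscaling with 5 generated frames per rendered frame:
# rendered share = 1 / (4 * 6) = 1/24, so 23 of 24 pixels are AI-generated.
print(generated_pixel_fraction(2.0, 5))  # 0.9583333...
```

Other pairings give the same ratio (a 3x linear upscale area-covers 9x, for instance, so fewer generated frames would be needed); the point is just how quickly two modest multipliers compound into “almost everything on screen is synthesized.”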

DLSS 5 is a different thing from that. It’s not really about performance. It takes colour and motion data from each frame and runs it through a model trained to add photoreal lighting on top of whatever the developer built — skin that actually scatters light, fabric that has sheen, hair that behaves like hair instead of a solid mass. The AI reads the scene and makes judgment calls: is this character backlit, what does this material do in overcast light, how does translucent skin behave here. Huang’s pitch was that this closes the gap between real-time game rendering and the kind of VFX work Hollywood spends hours per frame on. Which, in a narrow technical sense, is probably true.

The problem is that CAPCOM’s artists spent years deciding what Grace Ashcroft should look like, and DLSS 5 looked at their work and thought it could do better.

Grace is the protagonist of Resident Evil Requiem. Before the DLSS 5 announcement, critics who’d played the game had been quietly passing around screenshots of her face as an example of how good real-time character rendering had gotten. The imperfections in her skin, the uneven way light caught her features, the specific quality that made her look like a person rather than a video game character. IGN’s Simon Cardy had played the whole game ahead of release. He knew what she was supposed to look like. When he saw what DLSS 5 did to her face he called it “yassified,” which is a word that started as a meme about AI beauty filters that smooth and symmetrize faces until they stop looking like anyone in particular. He wrote that the DLSS 5 version of Grace was “devoid of any discernible character, as if the light behind her eyes had been switched off,” and he was pretty clearly furious about it. “AI has no artistic or authorial intent,” he said. DLSS 5 was replacing “the paintbrush held in a human hand with an AI slopping a big vat of oil over the canvas.”

A lot of developers agreed with him and said so publicly, some by name and some not. Cullen Dwyer, who does gameplay and tech design at Doinksoft, called DLSS 5 a perfect example of the gap between what developers actually want and what NVIDIA assumes they want. Developer SolidPlasma said it strips out everything original about a character’s design and in his view whitewashes them in the process. Someone with 15 years of AAA experience, speaking without their name attached, told Kotaku it felt like DLSS 5 was “taking away some authorial intent from artists” and producing work that was “less distinct or aesthetically cohesive than the original intent.” And one more anonymous developer, whose quote I keep thinking about, just said it feels like there’s no future for them in this industry.

Now here’s where I want to be careful because some of the backlash was based on bad comparisons and I don’t want to just pile on without acknowledging that. A lot of the screenshots going around were pulled from pre-release footage with unfinished settings and stacked against the polished final game, which isn’t a fair fight and some people were doing it knowingly. At a press Q&A after the keynote, Tom’s Hardware’s Paul Alcorn asked Huang point-blank whether critics had a case — was the tech homogenizing games, projecting NVIDIA’s taste onto work that wasn’t theirs? Huang said flatly that they were completely wrong. It’s not a filter on top of a finished image, he said, it runs at geometry level, and artists keep control over how it’s applied. Bethesda backed that up separately, saying their implementation in Starfield would be optional and stay under the artists’ hands.

All of that might be true. But then Insider Gaming reported that at least one Ubisoft developer found out Assassin’s Creed Shadows was getting DLSS 5 at the same time as the general public. From the press release. Ubisoft’s co-CEO had publicly described the technology as helping his studio realise the worlds they’d always wanted to make. Some of the people actually building that game apparently weren’t part of that conversation.

I don’t know what to do with that information exactly but it’s been sitting wrong with me. The pitch from NVIDIA and the studios is that this is a tool for artists, something that expands what’s possible rather than overriding choices that were already made. And maybe for some people working with it, that’s genuinely how it feels — Howard’s enthusiasm sounds real, and Takeuchi at CAPCOM was supportive. But Bethesda felt the need to separately clarify that their implementation would be optional and stay under artist control, which is the kind of clarification you only have to make when the original announcement gave people reason to worry it might not be. And over at Ubisoft, some of the people whose game is on the DLSS 5 list read about it the same place the rest of us did. Those things all happened in the same week and I find it hard to hold them together into a coherent story about artist empowerment.

What I keep getting stuck on, more than the Grace screenshots, is the longer-term version of this. Games look wildly different from each other right now. You can tell Disco Elysium from Cyberpunk from Hi-Fi Rush in half a second because artists working on those projects made choices nobody else would have made in quite the same way, and those choices are in every frame. What DLSS 5 does is take those frames and pass them through a model that has its own opinions about what photoreal lighting looks like and applies them on top. Maybe the artists retain enough control that this doesn’t matter much in practice. But the underlying logic of the system is that a single model’s aesthetic judgment is available to run across the entire industry’s output, and if it ends up getting applied as a default rather than a deliberate choice, games are going to start rhyming with each other in ways they don’t right now. That might sound abstract. I don’t think it is.

And look, the demo ran on two RTX 5090s. That’s about eight thousand dollars of GPU. Narrative designer Bruno Diaz wrote that the tech would be “so performance costly that few people will be willing to use it.” So there’s also just the practical question of who this is actually available to, and whether NVIDIA building AI infrastructure demand that requires its most expensive hardware is really the same thing as improving games for regular people.

I genuinely don’t know how DLSS 5 lands once it actually ships. There’s a version where studios with time to work with it produce something that feels earned and the positive quotes turn out to be right. The underlying research is not fake, the gap between game rendering and VFX-quality output is a real thing that’s worth trying to close, and if NVIDIA has found a way to do some of that in real time, that’s not nothing.

But the developer who said they felt like there was no future for them wasn’t making a technical argument about whether the before-and-after screenshots were based on finished settings. That’s a different kind of statement. And I think the instinct to respond to the backlash by correcting the technical misunderstandings, while not wrong exactly, sidesteps the thing people are actually upset about, which is that something is being handed over here and nobody’s quite sure what they’re getting in return. I don’t think NVIDIA has answered that yet. I’m not sure they’ve tried.
