Does Nvidia’s Slop-ified DLSS 5 Game Lighting Make Any Sense?


Nvidia’s newfangled “neural rendering” technology, DLSS 5, dramatically modifies (aka “slop-ifies”) game visuals with the help of AI. Nvidia promised the technology would revolutionize in-game lighting at its most fundamental level. After hearing from lighting experts, developers, and Nvidia itself, it turns out that DLSS 5 isn’t nearly as impressive as all that.

Both gamers and game developers were none too happy about DLSS 5 “slop-ifying” faces in games like Resident Evil Requiem, Starfield, and Hogwarts Legacy. For its part, Nvidia was adamant this technology was not modifying characters at the “geometry level.” On Thursday, YouTuber Daniel Owen went back and forth with Nvidia’s own “GeForce Evangelist” Jacob Freeman about what’s going on inside DLSS 5. To summarize, it’s a sophisticated, ultra-fast, AI image generator.

Freeman confirmed DLSS 5 is only analyzing a “2D frame plus motion vectors as input” and feeding that information into a generative AI model. Essentially, the AI only analyzes the surface information of each rendered frame and uses motion vectors to determine what the next frame will look like. These motion vectors are data points that describe how far in-game objects move from one frame to the next. Nvidia’s AI uses those vectors to guesstimate where objects will land in the next frame to make motion look smoother. The AI doesn’t have access to intrinsic scene information, such as PBR (physically based rendering) data. PBR tells a game engine what materials are in a scene, such as wood, metal, etc. Instead, with DLSS 5, “materials are inferred from the rendered frame.” The AI merely guesses what each object should look like, then papers over the frame with its own interpretation.
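To make that distinction concrete, here’s a minimal, purely illustrative Python sketch (our own example, not Nvidia’s code) of what “a 2D frame plus motion vectors” actually gives a model to work with: pixels and per-pixel offsets, and nothing about what the surfaces in the scene are really made of.

```python
# Illustrative only -- not Nvidia's code. It shows what "a 2D frame plus motion
# vectors" means in practice: every pixel of the previous frame gets shifted by
# its motion vector to predict where it ends up in the next frame.
import numpy as np

def warp_with_motion_vectors(frame: np.ndarray, motion: np.ndarray) -> np.ndarray:
    """frame: (H, W, 3) color image; motion: (H, W, 2) per-pixel (dx, dy) offsets."""
    h, w, _ = frame.shape
    warped = np.zeros_like(frame)
    ys, xs = np.mgrid[0:h, 0:w]
    # Destination = source position + motion vector, clamped to the frame bounds.
    dst_x = np.clip(xs + motion[..., 0].round().astype(int), 0, w - 1)
    dst_y = np.clip(ys + motion[..., 1].round().astype(int), 0, h - 1)
    warped[dst_y, dst_x] = frame[ys, xs]
    return warped

# A generative model fed this data has to infer materials and lighting from the
# pixels alone -- it never sees the engine's PBR data saying "this is wood" or
# "this is metal," which is exactly the gap Freeman described.
```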

Here's some of the DLSS 5 material we saw in the demos but didn't get a chance to film. Here I think you can see the strengths of DLSS 5 – reflections become much more attractive. Starfield doesn't have great lighting to begin with, so the differences can be profound. pic.twitter.com/LXa9e25xMe

— Oliver Mackenzie (@oliemack) March 19, 2026

Changing a scene’s textures and lighting may also disrupt how players perceive the environment. In a follow-up video posted by Digital Foundry, the AI reinterprets two ceramic-looking cups on a table and renders them as if they were made entirely of metal.

Gizmodo reached out to Nvidia to confirm our interpretations of Freeman’s comments. We’ll update this story if we hear back. It’s likely inaccurate to say the AI is interpreting a “screenshot” of each scene, as Owen put it in his video. However, it remains unclear which stage of the rendering pipeline the AI takes its information from. “This is a very early preview of the tech,” Freeman responded when asked whether the AI was capable of reinterpreting a scene.

The AI can only guess where lights should go

This diagram from Nvidia offers very little perspective on how DLSS 5 even works. © Nvidia

In a Q&A hosted at GTC 2026, Nvidia CEO Jensen Huang was adamant that the technology doesn’t result in homogenized graphics in games. He then went on to describe the technology in vague terms that mean little or nothing to the wider gaming public.

“DLSS 5 fuses the controllability of the geometry and textures and everything about the game with generative AI,” Huang is recorded saying at the Q&A. “It’s not post-processing; it’s not post-processing at the frame level; it’s generative control at the geometry level.”

The Nvidia CEO has a tendency to fall into technobabble when describing his company’s products. He initially described DLSS 5 and its “neural rendering” technology as “combined 3D graphics—structured data—with generative AI—probabilistic computing.”

Huang’s comments seem to fly directly in the face of Freeman’s description. DLSS 5 is not remapping new textures and lighting onto 3D objects. It’s spitting out AI-generated 2D images at such a rapid rate that it can keep up with a game’s frame rate. As with all generative techniques in games such as multi-frame gen, this will likely increase latency and lead to odd artifacts within scenes, especially with objects in motion.
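To put rough numbers on the latency point, here’s a back-of-the-envelope Python sketch with made-up but representative figures, assuming interpolation-style frame generation like today’s multi-frame gen (where the GPU has to finish the next real frame before it can show the AI frames that sit in between):

```python
# Back-of-the-envelope estimate only; the exact DLSS 5 pipeline hasn't been disclosed.
render_fps = 60                     # frames the engine actually renders per second
generated_per_rendered = 3          # e.g. "4x" frame gen: 1 real frame + 3 AI frames
render_interval_ms = 1000 / render_fps

displayed_fps = render_fps * (1 + generated_per_rendered)
added_latency_ms = render_interval_ms   # roughly one rendered frame must be buffered

print(f"Displayed frame rate: {displayed_fps:.0f} fps")
print(f"Added input latency: roughly {added_latency_ms:.1f} ms")
# Prints 240 fps on screen but ~16.7 ms of extra delay -- smoother to look at,
# not faster to respond to.
```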

The AI can completely change the tone of a scene

Nvidia’s CEO Jensen Huang pretends to hold up a fake championship belt during a keynote address at Nvidia’s GTC Conference on March 16. © Benjamin Fanjoy / Getty Images

DLSS 5 lighting may look more “realistic” in the sense that it reflects accurately off objects in a scene. The issue is that the AI will not understand the artistic direction behind each moment in a game. The one thing missing from this debate is whether the lighting makes any sense outside of programming and gaming circles.

Gizmodo reached out to Jonathan Harris, an independent photographer and filmmaker based in New York, to get his take on DLSS 5’s realism.

“In some cases, the lighting does look better, but at the same time, you lose the creative edge,” Harris said. “It seems like it’s optimizing light.” Gizmodo showed the indie filmmaker two screenshots of environmental lighting from one of Nvidia’s demo scenes. He pointed out that the one with DLSS 5 enabled looked as if it were trying to resemble an overcast sky. The AI even added clouds that weren’t there in the original frame.

Left, one of Nvidia’s demo scenes with DLSS 5 on; right, the same scene with DLSS 5 off. © Nvidia

“Things do seem to look sharper, which can be nice, but in a game like Resident Evil, which is supposed to have a little bit of fog and haze, the lighting overall is just a bit brighter… and that may work against the tone of the game,” Harris said.

Nvidia said in its FAQ that game devs will have the ability to change color grading and mask off parts of scenes. This means creators could keep the AI from modifying the light and texture on specific characters or objects in a frame. One Redditor pointed out that the issue with Nvidia’s original DLSS 5 images may be more due to the AI’s awful tone mapping. When the overbaked colors in those screenshots are toned down, the characters seem to fit much better with the original games. Other than color grading, developers may not have control over whether the AI can add elements to a scene that weren’t there before.
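Here’s a rough sketch of what that kind of masking control could amount to (our own illustration, not a real DLSS 5 API): a developer-authored mask decides, pixel by pixel, whether the original rendered frame or the AI’s reinterpretation ends up on screen, and a simple exposure tweak stands in for the tone-mapping fix that Redditor described.

```python
# Illustrative sketch only -- not an actual DLSS 5 interface.
import numpy as np

def composite(original: np.ndarray, ai_frame: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """original, ai_frame: (H, W, 3) images in [0, 1]; mask: (H, W), 1.0 = protect the original pixel."""
    m = mask[..., None]                       # broadcast the mask across color channels
    return m * original + (1.0 - m) * ai_frame

def tone_down(frame: np.ndarray, exposure: float = 0.85) -> np.ndarray:
    """Crude exposure reduction, standing in for proper tone mapping of overbaked colors."""
    return np.clip(frame * exposure, 0.0, 1.0)
```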

i wrote several of the breakdowns you're referring to, and i agree this is NOT an Instagram filter by any mean! This is Tiktok filter!
Context, i don't sub or watch Daniel Owen for my own reasons, but i've been sent the video too to check, and i can see that the Nvidia person…

— mamoniem (@_mamoniem) March 20, 2026

In a thread on X, longtime game dev and ex-Ubisoft programmer Muhammad Moniem said “it is an AI trained post-processing filter.” While he stopped short of calling it “slop,” he also said, “This is not an Instagram filter by any mean [sic]! This is a TikTok filter.”

There are still too many unknowns

Let’s get one thing straight. There’s a reason why many of these DLSS 5-modified characters look horrendous. They all bear too much resemblance to AI-generated Instagram and TikTok videos that have flooded users’ feeds. Whether you actually wanted a “photorealistic” game or not, these AI-infused frames miss the mark by miles.

While we’ve seen hands-on coverage from outlets like Digital Foundry and PC Guide, most of the footage was cut up and spliced. Digital Foundry’s original video of the tech showcased a few static environments and character models. The only extended video of DLSS 5 we’ve seen comes from the YouTube channel HotHardware.

The problem with this footage is that Nvidia seems to restrict people from seeing the lighting effects while the characters are in motion. Nvidia staff would turn off DLSS 5, move to the other end of the room in games like Hogwarts Legacy and Starfield, then turn the upscaler back on. Nvidia’s staff said that the model is getting information “from what’s on screen and what’s moving on screen.” What’s still unclear is how fast it will be able to do this when the player’s camera is moving as well.

A perfect storm of misinformation and bad press

The AI can add elements to a scene that weren’t there before, such as extra hair on the right side of the male character’s head. © Nvidia

Nvidia showed DLSS 5 to an extremely limited press pool, and it’s done a very poor job explaining how the technology even works. The game and the DLSS 5 model were running on two Nvidia GeForce RTX 5090 GPUs, with one graphics card dedicated to running the AI model exclusively. Nvidia told multiple outlets it was already testing DLSS 5 running on a single GPU. Despite all the lingering questions about whether it will look good, or even run on anything but the highest-end GPUs, Nvidia still intends to launch DLSS 5 this fall.

The wider developer community seems much less on board. “A lot of these companies take the time to high-res 3D scan these people’s faces,” said Mike York, an animator who has worked on games like Red Dead Redemption 2 and GTA V, in a YouTube video. “They want to get those original textures. They want to get that little scar underneath the eye… [DLSS 5 is] squashing over somebody’s hard work.”

Our friends at Kotaku quoted a whole assortment of game developers who do not like DLSS 5 or what it represents for the industry. Nvidia’s CEO showed this technology off at its own AI-centric GTC conference, even though it had the opportunity to bring it to developers at GDC (Game Developers Conference) 2026, which took place just a few weeks prior.

Nvidia was so focused on AI that it ignored everything the gaming community actually wants from today’s games. It was never about achieving “photorealism.” It was about creating an experience that matters, that impacts the players, and that adds something to the world. You know—art.
