From what Nvidia’s showing (and what Huang emphasized in that Tom’s Hardware Q&A at GTC 2026), DLSS 5 is pitched as “neural rendering” or “content-control generative AI.” It uses the game’s existing 3D data (motion vectors, scene semantics like hair/skin/fabric, lighting conditions) as grounded inputs, then the AI infuses photoreal lighting, materials, and enhancements while staying consistent frame-to-frame. Devs get tools to fine-tune intensity, color grading, masking, etc., so they can dial in how much “photorealism” gets applied without losing their artistic intent.
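To make the “dial it in” idea concrete: the simplest way to picture per-region intensity/masking control is a weighted blend between the raw render and the AI-enhanced frame. This is purely a sketch of the concept — Nvidia hasn’t published an API, so every name and parameter here is hypothetical:

```python
import numpy as np

def blend_neural(raw_frame, neural_frame, semantic_mask, intensity):
    """Blend the game's raw render with a hypothetical AI-enhanced frame.

    raw_frame, neural_frame: HxWx3 float arrays (the two render outputs).
    semantic_mask: HxW per-pixel weights, e.g. 1.0 on hair/skin regions
        the dev wants enhanced, 0.0 elsewhere (illustrative only).
    intensity: global 0..1 dial for how much "photorealism" is applied.
    """
    # Per-pixel blend weight, clamped to [0, 1] and broadcast over RGB
    w = np.clip(semantic_mask * intensity, 0.0, 1.0)[..., None]
    return raw_frame * (1.0 - w) + neural_frame * w
```

At intensity 0 the dev gets their untouched art back, at 1 the masked regions are fully AI-enhanced — which is presumably the spirit of the artist-intent controls, even if the real pipeline does this inside the network rather than as a post-blend.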