Especially once they start considering PS5/Xbox versions, where everyone will expect ray tracing in some shape or form.
But for now we can only wait and see.
Hopefully the game will make it easy to compare exact pictures with and without it. In Monster Hunter World, for example, the image updates in real time as I go through the different AA options, and the least blurry choice by a long shot is "off".
That alone makes it somewhat desirable in a performance-starved scenario, especially in the context of raytracing, where the cost scales directly with target resolution, so cutting the pixel count by 50% nets a solid performance boost.
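The arithmetic behind that 50% figure is worth spelling out, since pixel count scales with the square of the per-axis resolution. A quick back-of-envelope sketch (the scale factors here are illustrative, not the actual ratios any particular DLSS mode uses):

```python
# Back-of-envelope: how internal render resolution cuts pixel count.
# Pixel count scales with the SQUARE of the per-axis scale factor,
# so halving the pixel count only needs ~71% resolution per axis.
target = (3840, 2160)  # 4K output

def pixel_fraction(scale):
    """Fraction of target pixels shaded when rendering at `scale` per axis."""
    return scale * scale

for name, scale in [("native", 1.0), ("~71% per axis", 0.707), ("50% per axis", 0.5)]:
    w, h = int(target[0] * scale), int(target[1] * scale)
    print(f"{name}: {w}x{h}, {pixel_fraction(scale):.0%} of native pixels shaded")
```

So "cutting pixel count by 50%" corresponds to roughly 71% resolution per axis, and rendering at half resolution per axis shades only a quarter of the pixels.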
Also, compared to TAA, the other widely used AA method in modern engines, DLSS is not temporal, and as such it does not suffer from the ghosting that gets rapidly worse as framerate drops.
I think my skepticism comes from asking: "where is the image quality benefit people mention?" Any image quality improvement is going to depend entirely on the level of detail the graphic artists aim for, like how far they want to zoom in on the pixels to blend them, or whatever it is they do. If the graphic artists do a good job of blending the colors together, DLSS could actually harm image quality.
In principle it sits close to methods like TAA, FXAA, SMAA and the other "post-process" methods, in that it's essentially more or less educated guesswork about how the gaps in the scene should be filled.
Traditional AA methods like SSAA genuinely rendered the entire scene at X times the resolution and then scaled it down, with the gaps filled using averaged samples (so there was actual data to begin with, rather than a guess).
Of course the backbone of DLSS is entirely different, given that it's a pretrained AI model, but in the end it's still a guessing game for the machine.
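The SSAA downscale step described above is just an average over real samples. A minimal sketch of that idea, assuming a hypothetical grayscale framebuffer stored as a list of lists of floats:

```python
# Minimal sketch of the SSAA downscale step: render at 2x in each axis,
# then average each 2x2 block of real samples into one output pixel.
# (Hypothetical grayscale framebuffer as a list of lists of floats.)

def ssaa_downscale_2x(hires):
    """Average each 2x2 block of the high-res image into one pixel."""
    out = []
    for y in range(0, len(hires), 2):
        row = []
        for x in range(0, len(hires[0]), 2):
            total = (hires[y][x] + hires[y][x + 1] +
                     hires[y + 1][x] + hires[y + 1][x + 1])
            row.append(total / 4.0)
        out.append(row)
    return out

# A hard black/white edge rendered at 2x resolution...
hires = [
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 1.0, 1.0, 1.0],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 1.0],
]
# ...downscales to smooth intermediate values: every output pixel is
# derived from real rendered data, not a reconstruction guess.
lowres = ssaa_downscale_2x(hires)
```

That's the whole contrast: SSAA averages samples it actually rendered, while DLSS-style upscaling has to infer the missing detail.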
If you ask me, you can never substitute "true" data with a guess, but DLSS 2.0 had some truly impressive examples where it produced an image with more clarity than the native 4K image from the same game.
Whether DLSS really is that good at guessing, or whether it just disables other post-processing that would make the natively rendered image blurrier (DLSS usually cannot be run alongside TAA), is anyone's guess.
Now maybe the performance benefit will be enough that we can tell the artists "hey, just zoom in 50% less" or something like that, and it will be better overall. We will see. I remain skeptical until I can play around with it myself across a wide variety of products.