1 32GB DDR4, RTX 3090 O.C. 2120/9875
I only use 1440p so I only needed more fps in one game.
Some say DLSS is blurry in most games, is it true?
DLSS is about getting more performance; if you don't need the performance boost and would rather have full native visual quality, leave it off.
Depends on the game and how good the implementation is in that game.
DLSS quality upscales to 1440p from just 960p. It will look better than native 1080p but worse than native 1440p.
However, upscaling from 1440p to 4K would look better than native 1440p but worse than real 4K.
There are rare cases when DLSS can look better than native but that's usually the case for older games with bad AA and lower resolution textures.
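To put rough numbers on those resolutions, here is a small sketch of the internal render resolutions behind DLSS's modes. The scale factors are the commonly cited defaults (Quality ≈ 0.667, Balanced ≈ 0.58, Performance = 0.5, Ultra Performance ≈ 0.333); individual games can override them, so treat this as an illustration rather than a spec.

```python
# Illustrative DLSS internal render resolutions per quality mode.
# Scale factors are the commonly cited defaults; games can override them.
DLSS_SCALE = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def internal_resolution(output_w, output_h, mode):
    """Return the (width, height) DLSS renders at before upscaling to output."""
    s = DLSS_SCALE[mode]
    return round(output_w * s), round(output_h * s)

for mode in DLSS_SCALE:
    w, h = internal_resolution(2560, 1440, mode)
    print(f"1440p {mode}: renders at {w}x{h}, upscales to 2560x1440")
# 1440p Quality: renders at 1707x960 -> the "960p" mentioned above
```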
Dude, this is 100% wrong.
Scientist John Hess:
“Our eye sees all frame rates, and they all have their look and feel,” he concludes, “so we should approach frame rate for what it offers aesthetically. As such, cinema remains at 24 because [that frame rate] is aesthetically pleasing and culturally significant… video games can take whatever frame rate your graphics card can muster, because that too is aesthetically pleasing: different mediums, different forms of expression, different frame rates.”
DLSS is a real-time deep learning image enhancement and upscaling technique that lets the GPU run the game at a lower resolution to increase performance, while at the same time inferring a higher-resolution image so the gain doesn't come at the cost of completely degraded image quality. It literally upscales the image and then uses AI to enhance it back to approximately native quality.
DLSS 1.0 failed to do this. DLSS 2.0 was quite a bit better, as it brought AI acceleration through the GPU's tensor cores. DLSS 3.0 made use of RTX 40 series hardware to generate frames in between rendered frames to further increase performance (that's the only part of it that's "fake", and there's nothing wrong with using deep learning/machine learning/AI to improve performance, since there are limits to what silicon is capable of). DLSS 3.5 added ray reconstruction, replacing multiple algorithms with a single AI model trained on a lot more data than DLSS 3 had.
With each revision of DLSS the image quality gets better; with the latest version it's barely distinguishable from native, but the game actually has to support that version, and a lot of the hate comes from games that use older revisions. As it continues to improve, and as long as games support the later revisions, they'll be able to improve performance without degrading quality at all.
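Putting that data flow in one place, here is a rough conceptual sketch in Python. None of these function names are NVIDIA's actual API; it only mirrors the idea that super resolution takes a low-resolution frame plus motion vectors and the previous output, and that DLSS 3 frame generation interpolates an extra frame between two rendered ones.

```python
# Conceptual sketch of DLSS super resolution + frame generation (not real API).
from dataclasses import dataclass

@dataclass
class Frame:
    width: int
    height: int
    label: str

def dlss_super_resolution(low_res, motion_vectors, previous_output, target):
    # Stand-in for the tensor-core model: temporal reconstruction + upscale.
    w, h = target
    return Frame(w, h, f"upscaled({low_res.label})")

def dlss_frame_generation(frame_a, frame_b, optical_flow):
    # DLSS 3 only: synthesize an intermediate frame between two rendered frames.
    return Frame(frame_a.width, frame_a.height,
                 f"interpolated({frame_a.label}, {frame_b.label})")

# Render internally at 1707x960 (Quality mode at 1440p), present at 2560x1440.
prev_out = None
rendered = [Frame(1707, 960, f"frame{i}") for i in range(3)]
outputs = []
for f in rendered:
    out = dlss_super_resolution(f, motion_vectors=None,
                                previous_output=prev_out, target=(2560, 1440))
    outputs.append(out)
    prev_out = out  # previous output feeds the next frame's reconstruction

# With frame generation on, an extra frame is inserted between each pair.
extra = dlss_frame_generation(outputs[0], outputs[1], optical_flow=None)
print(outputs[0].label, "->", extra.label, "->", outputs[1].label)
```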
Ampere doesn't support frame generation because it lacks the hardware changes that Lovelace has. If you force it to use DLSS Frame Generation, it won't be able to properly generate frames in between rendered frames and won't look quite right as a result (it causes ghosting and other issues).
DLSS itself doesn't cause latency, only frame gen can, and NVIDIA has settings to improve latency regardless so maybe you should take a look at that.
A point of clarification on an otherwise decent overview of what DLSS is doing: with DLSS 3.5 it wasn't just replacing multiple algorithms with a single AI model trained on more data. It replaced the multiple denoising steps in the process with one denoiser at the end of the process in the new AI model, which significantly increased fine detail, especially in lighting, reflections, and edges, specifically with regard to ray tracing.
The way the AI model worked prior to 3.5 was that the game engine would generate the materials and geometry without a lighting pass, then the AI model would take that as input alongside the temporal data and generate a reflection image and a diffuse global illumination image. Both of those are separately put through different hand-tuned denoisers, and then the AI model generates an internal-resolution composite image from the two denoised inputs. Finally, the AI model upscales the composite image to native resolution.
With DLSS 3.5 they've essentially rebuilt the AI model to include an AI-based denoiser, removing the multiple hand-tuned denoisers and the compositing of the separately denoised images. The process becomes: the engine generates the materials and geometry at the internal resolution, ray casts are sampled, and the resulting color data from those samples is fed into the AI model along with the motion vectors from the game engine and/or prior frame data (e.g. the temporal data previously sent to the GI denoiser). The AI model then generates a native-resolution frame that is already denoised and, on a 40-series card, passes that image to the optical flow accelerator (hardware) to generate the optical flow field data. The resulting native-resolution image is also fed back into the AI model as an input for the next frame, along with its ray sample data and motion vectors, to provide better temporal feedback and stability. Finally, again assuming a 40-series card, that native-resolution image and the optical flow field are passed as inputs to the AI model for frame generation.
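To make the before/after ordering easier to follow, here is a pseudocode-style sketch of the two pipelines described above. All function bodies are stand-in string-building stubs and the names are made up, so it captures only the ordering of steps, not NVIDIA's actual internals.

```python
# Step ordering of the pre-3.5 pipeline vs. DLSS 3.5 ray reconstruction
# as described above. Function names and bodies are illustrative stubs.

def pre_3_5_pipeline(gbuffer, temporal_data):
    # Separate reflection and diffuse GI signals, each through its own
    # hand-tuned denoiser, composited at internal res, then upscaled.
    reflections = f"denoise_refl({gbuffer},{temporal_data})"
    diffuse_gi = f"denoise_gi({gbuffer},{temporal_data})"
    composite = f"composite({reflections},{diffuse_gi})"
    return f"upscale({composite})"

def ray_reconstruction_pipeline(gbuffer, ray_samples, motion_vectors, prev_frame):
    # DLSS 3.5: a single AI model takes raw ray samples plus temporal inputs
    # and outputs an already-denoised native-resolution frame, which is fed
    # back as prev_frame for the next iteration (and, on a 40-series card,
    # on to the optical flow accelerator for frame generation).
    return f"ai_model({gbuffer},{ray_samples},{motion_vectors},{prev_frame})"

prev = "none"
for i in range(2):
    prev = ray_reconstruction_pipeline("gbuf", f"rays{i}", f"mv{i}", prev)
print(prev)  # shows the temporal feedback: frame 0's output feeds frame 1
```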
It's akin to the old "making a copy of a copy of a copy" trope where making a copy inherently loses some of the original data/detail and then making a copy of that copy loses some more data/detail.
Similar, but not the exact same, which is why FSR looks significantly worse than DLSS. We'll have to see how FSR4 does with the RX 8000 series.
Eventually it will get to the point where DLSS should be able to increase image quality beyond native resolution while still increasing performance.
Both DLSS upscaling and FSR add an extra step to the image processing pipeline, which also means extra processing time. But because the shader cores (CUDA cores, mentioning NVIDIA since its cards support both) are offloaded by processing the image at a lower resolution, if the GPU is the bottleneck this lets fresher data flow through the pipeline faster, so the two costs roughly balance out. If it's the other way around and the GPU isn't the bottleneck, the extra step will just hurt latency.
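As a back-of-the-envelope illustration of that trade-off: assume GPU render time scales roughly with pixel count and the upscale pass costs a fixed ~1 ms per frame. Both of those are made-up modeling assumptions, not measurements.

```python
# Rough frame-time model: GPU-bound games win from upscaling, CPU-bound ones don't.
NATIVE = (2560, 1440)
INTERNAL = (1707, 960)       # DLSS Quality internal resolution at 1440p
UPSCALE_COST_MS = 1.0        # assumed fixed cost of the extra upscaling pass

def gpu_frame_time(native_ms, resolution):
    """Scale a native-res GPU frame time by pixel count (rough approximation)."""
    scale = (resolution[0] * resolution[1]) / (NATIVE[0] * NATIVE[1])
    return native_ms * scale

def effective_frame_time(gpu_ms, cpu_ms):
    """Whichever side is slower sets the real frame time (the bottleneck)."""
    return max(gpu_ms, cpu_ms)

for native_gpu_ms, cpu_ms in [(16.0, 8.0), (8.0, 12.0)]:
    off = effective_frame_time(native_gpu_ms, cpu_ms)
    on = effective_frame_time(
        gpu_frame_time(native_gpu_ms, INTERNAL) + UPSCALE_COST_MS, cpu_ms)
    print(f"GPU {native_gpu_ms}ms / CPU {cpu_ms}ms: native {off:.1f}ms -> DLSS {on:.1f}ms")
# GPU-bound case: ~16.0ms drops to ~8.1ms (big win).
# CPU-bound case: stays at 12.0ms; the upscale pass only adds work per frame.
```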