This topic has been locked
Why we can't trust Nvidia users anymore. DLSS.
"4070S is a 4K card"


Lists all games using DLSS...

So none of them are 4K.

In the comments, "no but I consider it 4K because of final image."


Cognitive dissonance and software reliance for performance, not GPU horsepower.

The new delulu.

https://www.reddit.com/r/nvidia/comments/1duz20n/blown_away_by_how_capable_the_4070s_is_even_at_4k/
Showing 76-90 of 107 comments
Originally posted by C1REX:
Originally posted by Illusion of Progress:
When was this ever true though? I think you're once again exaggerating something.
PC currently has some serious limitations as well.

- No dedicated DirectStorage decompression hardware, so all decompression is done by either the CPU or the GPU, and it's very demanding. That's why so few games use it.
- More importantly, CPUs can't use super-fast GDDR6 (or soon GDDR7) and are stuck with much slower DDR5. Current PCs (Windows specifically) need to decompress data into system RAM first and then move it to VRAM (sketched below). Such a silly bottleneck and a waste of computing power. There's no way to have unified fast memory like on consoles.

There are plans for PC to catch up with that technology, but who knows when and if that will happen.
And yet none of that makes one bit of difference in the actual playing of games. But it sounds great, don't it!! Thanks for the laugh.
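A minimal sketch of the copy path described in the quoted post above, with zlib standing in for the real decompressor and made-up buffer names rather than DirectStorage or any actual console API; it only shows where the extra hop sits:

```python
import zlib

# Toy model of the two asset-loading paths. Not DirectStorage, not a real
# console API; it just makes the extra RAM->VRAM copy explicit.

def load_asset_pc_today(compressed: bytes) -> bytearray:
    staging = zlib.decompress(compressed)  # CPU decompresses into system RAM (DDR)
    vram = bytearray(staging)              # extra copy over PCIe into a VRAM-side buffer
    return vram

def load_asset_unified_memory(compressed: bytes) -> bytes:
    # Console-style unified memory: decompress once, the result already lives in
    # the shared GDDR pool the GPU reads from, so there is no second copy.
    return zlib.decompress(compressed)

asset = zlib.compress(b"texture data" * 1000)
assert load_asset_pc_today(asset) == load_asset_unified_memory(asset)
```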
I have an i9 10900K OC 5.1, 32GB DDR4, RTX 3090 OC 2120/9875.
I only use 1440p, so I only needed more FPS in one game.
Some say DLSS is blurry in most games, is it true?
Originally posted by (415!)Fle@B@gL@ne:
I have an i9 10900K OC 5.1, 32GB DDR4, RTX 3090 OC 2120/9875.
I only use 1440p, so I only needed more FPS in one game.
Some say DLSS is blurry in most games, is it true?
Depends on the game and which version: DLSS 1.0 sucked; 2.0 was much better and actually worth using, though it can have noticeably worse visual quality; 3.0 is close enough to native that it's hard to tell the difference.

DLSS is about getting more performance. If you don't need the performance boost and would rather have fully native visual quality, leave it off.
Last edited by r.linder; 7 Jul 2024, 22:19
Originally posted by (415!)Fle@B@gL@ne:
Some say DLSS is blurry in most games, is it true?

Depends on the game and how good the implementation in that game is.
DLSS Quality upscales to 1440p from just 960p. It will look better than native 1080p but worse than native 1440p.
However, upscaling from 1440p to 4K will look better than native 1440p but worse than real 4K.

There are rare cases where DLSS can look better than native, but that's usually in older games with bad AA and lower-resolution textures.
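For reference, the internal render resolutions behind those numbers are easy to work out. A quick sketch, assuming the commonly cited per-axis scale factors for the DLSS 2+ presets (individual games can override these):

```python
# Internal render resolution for the standard DLSS upscaling presets.
# The scale factors below are the commonly cited per-axis values; treat them
# as approximate, since games can ship their own presets.
PRESETS = {
    "Quality": 2 / 3,
    "Balanced": 0.58,
    "Performance": 0.5,
    "Ultra Performance": 1 / 3,
}

def internal_resolution(out_w: int, out_h: int, preset: str) -> tuple[int, int]:
    scale = PRESETS[preset]
    return round(out_w * scale), round(out_h * scale)

print(internal_resolution(2560, 1440, "Quality"))      # (1707, 960)  -> the "960p" render
print(internal_resolution(3840, 2160, "Quality"))      # (2560, 1440) -> a 1440p render
print(internal_resolution(3840, 2160, "Performance"))  # (1920, 1080)
```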
Originally posted by Dutchgamer1982:
Originally posted by Andrius227:
Yeah, DLSS is great. People who call it 'fake frames' and stuff are just stupid. Humans don't even see in 'frames'. As long as the overall image is clear and moves smoothly, that's all that matters.

Actually, human brains DO see the world in frames :) ... something about how our brain processes what the eye sees.

Not all humans have the same speed at this: for some it is as low as 60 fps, but the upper limit for humans is about 120 fps; more than that, not even the most visually gifted of us can see.

What we CAN see, however, is when frames are not smooth (which is of course the difference between watching a real object and a screen). While we cannot truly "see" this, it does tire us to look at, and this is why syncing, and having slightly higher fps than our eyes can natively handle, can prevent it.

Dude, this is 100% wrong.

Scientist John Hess:


“Our eye sees all frame rates, and they all have their look and feel,” he concludes, “so we should approach frame rate for what it offers aesthetically. As such, cinema remains at 24 because [that frame rate] is aesthetically pleasing and culturally significant… video games can take whatever frame rate your graphics card can muster, because that too is aesthetically pleasing: different mediums, different forms of expression, different frame rates.”
Originally posted by gardotd426:
Dude, this is 100% wrong.

Scientist John Hess:


“Our eye sees all frame rates, and they all have their look and feel,” he concludes, “so we should approach frame rate for what it offers aesthetically. As such, cinema remains at 24 because [that frame rate] is aesthetically pleasing and culturally significant… video games can take whatever frame rate your graphics card can muster, because that too is aesthetically pleasing: different mediums, different forms of expression, different frame rates.”
And here I thought people learned to stop using the "BUT HUMAN EYES CAN ONLY SEE XX FPS!!!" comments back in 2010. I'm shocked someone is still trying to use this in 2024. That was proven false so long ago that it's honestly funny to see you try to use it now. 😂😂😂
Everything running DLSS on my 3080 looks like garbage... fake frames do nothing when there is so much input lag just from turning it on...
never buying nvidia again cuz of this only buying amd now
Since people don't seem to understand what DLSS even does:

DLSS is a real-time deep-learning image enhancer and upscaler that lets the GPU run the game at a lower resolution to increase performance while inferring a higher-resolution image, so the extra performance doesn't come at the cost of completely degraded image quality. It literally upscales the image and then uses AI to enhance it back to approximately native quality.

DLSS 1.0 failed to do this. DLSS 2.0 was quite a bit better, as it brought AI acceleration through the GPU's tensor cores. DLSS 3.0 made use of RTX 40 series hardware to generate frames in between rendered frames to further increase performance (which is the only part of it that's "fake", and there's nothing wrong with using deep learning/machine learning/AI to improve performance; there are limits to what silicon is capable of). DLSS 3.5 added ray reconstruction and replaced multiple algorithms with a single AI model trained on a lot more data than DLSS 3 had.

With each revision of DLSS, the image quality gets better; with the latest version it's barely distinguishable from native, but the game actually has to support that version, and a lot of the hate comes from games that use older revisions. As it continues to improve, and as long as games support the later revisions, they'll be able to improve performance without degrading quality at all.

Originally posted by smokerob79:
Everything running DLSS on my 3080 looks like garbage... fake frames do nothing when there is so much input lag just from turning it on...
Ampere doesn't support frame generation because it lacks the hardware changes that Lovelace has. If you force it to use DLSS Frame Gen, it won't be able to properly generate frames in between rendered frames and won't look quite right as a result (it causes ghosting and other issues).

DLSS itself doesn't add latency; only frame gen can, and NVIDIA has settings to improve latency regardless, so maybe you should take a look at those.
Last edited by r.linder; 8 Dec 2024, 13:09
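A compact way to restate the hardware split described above; this is a rough summary of the post, not NVIDIA's official support matrix:

```python
# Rough feature matrix implied by the overview above (a sketch, not an official
# compatibility table): Super Resolution and Ray Reconstruction run on any RTX
# card with tensor cores, but Frame Generation relies on the 40-series optical
# flow hardware, which is why forcing it on a 3080 misbehaves.
FEATURES_BY_SERIES = {
    "RTX 20 (Turing)":   {"super_resolution", "ray_reconstruction"},
    "RTX 30 (Ampere)":   {"super_resolution", "ray_reconstruction"},
    "RTX 40 (Lovelace)": {"super_resolution", "ray_reconstruction", "frame_generation"},
}

def supports(series: str, feature: str) -> bool:
    return feature in FEATURES_BY_SERIES.get(series, set())

print(supports("RTX 30 (Ampere)", "frame_generation"))    # False
print(supports("RTX 40 (Lovelace)", "frame_generation"))  # True
```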
Originally posted by Blueberry {JESUS IS LORD}:
never buying nvidia again cuz of this only buying amd now
AMD has FSR 3 and its frame gen too, so by that logic you should trust Intel Arc more.
Originally posted by r.linder:
Since people don't seem to understand what DLSS even does:

DLSS is a real-time deep-learning image enhancer and upscaler that lets the GPU run the game at a lower resolution to increase performance while inferring a higher-resolution image, so the extra performance doesn't come at the cost of completely degraded image quality. It literally upscales the image and then uses AI to enhance it back to approximately native quality.

DLSS 1.0 failed to do this. DLSS 2.0 was quite a bit better, as it brought AI acceleration through the GPU's tensor cores. DLSS 3.0 made use of RTX 40 series hardware to generate frames in between rendered frames to further increase performance (which is the only part of it that's "fake", and there's nothing wrong with using deep learning/machine learning/AI to improve performance; there are limits to what silicon is capable of). DLSS 3.5 added ray reconstruction and replaced multiple algorithms with a single AI model trained on a lot more data than DLSS 3 had.

With each revision of DLSS, the image quality gets better; with the latest version it's barely distinguishable from native, but the game actually has to support that version, and a lot of the hate comes from games that use older revisions. As it continues to improve, and as long as games support the later revisions, they'll be able to improve performance without degrading quality at all.

Originally posted by smokerob79:
Everything running DLSS on my 3080 looks like garbage... fake frames do nothing when there is so much input lag just from turning it on...
Ampere doesn't support frame generation because it lacks the hardware changes that Lovelace has. If you force it to use DLSS Frame Gen, it won't be able to properly generate frames in between rendered frames and won't look quite right as a result (it causes ghosting and other issues).

DLSS itself doesn't add latency; only frame gen can, and NVIDIA has settings to improve latency regardless, so maybe you should take a look at those.

A point of clarification on an otherwise decent overview of what DLSS is doing. DLSS 3.5 wasn't simply replacing multiple algorithms with a single AI model trained on more data. It replaced the multiple denoising steps in the process with one denoiser at the end of the process, inside the new AI model, which significantly increased fine detail, especially in lighting, reflections, and edges, specifically with regard to ray tracing.

The way the AI model worked prior to 3.5 was that the game engine would generate the materials and geometry without a lighting pass, then the AI model took that as input alongside the temporal data and generated a reflection image and a diffused global-illumination image. Both of those were separately put through different hand-tuned denoisers, and then the AI model generated an internal-resolution composite image from the two denoised inputs. Finally, the AI model upscaled the composite image to native resolution.

With DLSS 3.5 they essentially rebuilt the AI model to include an AI-based denoiser, removing the multiple hand-tuned denoisers and the compositing of the separately denoised images. The process becomes: the engine generates the materials and geometry at the internal resolution; ray casts are sampled, and the resulting color data from the sampled rays is fed into the AI model along with the motion vectors from the game engine and/or prior-frame data (e.g. the temporal data previously sent to the GI denoiser). The AI model then generates a native-resolution frame that is already denoised and, on a 40-series card, passes that image to the optical flow accelerator (hardware) to generate the optical flow field data. The resulting native-resolution image is also fed back into the AI model as an input for the next frame, along with its ray sample data and motion vectors, to provide better temporal feedback and stability. Finally, assuming a 40-series card, that native-resolution image and the optical flow field are passed as inputs to the AI model for frame generation.

The old multi-denoiser chain was akin to the "making a copy of a copy of a copy" trope, where making a copy inherently loses some of the original data/detail and then making a copy of that copy loses some more.
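To make the before-and-after easier to compare, here is that description restated as two ordered step lists; this is just the post's wording rearranged, not NVIDIA's actual implementation:

```python
# The two pipelines described above, restated as plain step lists for comparison.
PRE_3_5 = [
    "engine renders materials/geometry at internal resolution (no lighting pass)",
    "AI model + temporal data -> reflection image and diffuse GI image",
    "each image goes through its own hand-tuned denoiser",
    "AI model composites the two denoised images at internal resolution",
    "AI model upscales the composite to native resolution",
]

DLSS_3_5 = [
    "engine renders materials/geometry at internal resolution",
    "sampled ray-cast color data + motion vectors + prior-frame data feed one AI model",
    "that single model denoises and upscales straight to a native-resolution frame",
    "(40-series only) optical flow accelerator builds the flow field from that frame",
    "native frame + flow field go to frame generation; the frame also loops back as temporal input",
]

for name, steps in (("Before DLSS 3.5", PRE_3_5), ("With DLSS 3.5", DLSS_3_5)):
    print(name)
    for number, step in enumerate(steps, start=1):
        print(f"  {number}. {step}")
```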
Originally posted by Blueberry {JESUS IS LORD}:
never buying nvidia again cuz of this only buying amd now
AMD does the exact same thing. With AMD it's called FSR instead of DLSS. Exact same technology in the way it functions.
Originally posted by Ontrix_Kitsune:
Originally posted by Blueberry {JESUS IS LORD}:
never buying nvidia again cuz of this only buying amd now
AMD does the exact same thing. With AMD it's called FSR instead of DLSS. Exact same technology in the way it functions.

Similar, not exactly the same, which is why FSR looks significantly worse than DLSS. We'll have to see how FSR 4 does with the RX 8000 series.
It's probably still not going to hold a candle to DLSS 3.5+; FSR is unlikely to ever catch up unless NVIDIA just stops updating DLSS.

Eventually it will reach the point where DLSS can increase image quality beyond native resolution while still increasing performance.
Last edited by r.linder; 8 Dec 2024, 15:30
Originally posted by r.linder:
DLSS itself doesn't add latency; only frame gen can, and NVIDIA has settings to improve latency regardless, so maybe you should take a look at those.
I would clarify this:
Both DLSS upscaling and FSR add an extra step to the image-processing pipeline, which also means extra processing time is required. But because the shader/CUDA cores (mentioning Nvidia since it supports both) only have to render the image at a lower resolution, a GPU-bound system can push new frames through the pipeline faster, and the two effects roughly balance out in its favor. If it's the other way around and the GPU isn't the bottleneck, this will hurt latency.
Last edited by A&A; 8 Dec 2024, 15:54
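A toy frame-time model of that trade-off, with invented numbers, just to illustrate the GPU-bound versus CPU-bound cases (it ignores pipelining and frame generation entirely):

```python
# Toy model: the slower of the GPU and CPU sides sets the frame time, and frame
# time is a rough proxy for input latency. All numbers below are invented.
def frame_time_ms(render_ms: float, upscaler_ms: float, cpu_ms: float) -> float:
    return max(render_ms + upscaler_ms, cpu_ms)

# GPU-bound: rendering at the lower internal resolution saves more time than the
# upscaling pass costs, so frame time (and latency) improves.
print(frame_time_ms(render_ms=16.0, upscaler_ms=0.0, cpu_ms=8.0))   # 16.0 ms, native
print(frame_time_ms(render_ms=9.0,  upscaler_ms=1.5, cpu_ms=8.0))   # 10.5 ms, upscaled

# CPU-bound: the GPU was already waiting on the CPU, so the extra pass buys nothing.
print(frame_time_ms(render_ms=6.0, upscaler_ms=0.0, cpu_ms=12.0))   # 12.0 ms, native
print(frame_time_ms(render_ms=4.0, upscaler_ms=1.5, cpu_ms=12.0))   # 12.0 ms, no gain
```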

Date Posted: 4 Jul 2024, 1:46
Posts: 107