Aye, dual vBIOS model as well. I run it in the factory OC BIOS mode with an Afterburner profile for further boost, saving the 'stock' mode should I need it. Stable Cyberpunk 2077 settings: +90 core, +350 memory, 105% power limit, 93c thermal limit, custom fan curve, voltage yeeted, hot spot at 83-85 max in monitoring. It still reports power/voltage limited, but I'm not interested in flashing the LN2 OC BIOS to it lol.
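(Side note: if you want to see exactly which cap the driver is reporting, a rough sketch like the one below will print the throttle-reason bits via NVML. NVML only exposes the power/thermal bits; the specific "voltage limited" flag Afterburner shows comes from NVAPI instead, so it won't appear here. Error handling skipped; assumes the CUDA toolkit's nvml.h and library are available.)

```cpp
// Rough sketch: query board power, temperature, and throttle reasons via NVML.
// Link against the NVML library from the CUDA toolkit / driver package.
#include <cstdio>
#include <nvml.h>

int main() {
    nvmlInit_v2();
    nvmlDevice_t dev;
    nvmlDeviceGetHandleByIndex_v2(0, &dev);

    unsigned int drawMw = 0, capMw = 0, tempC = 0;
    nvmlDeviceGetPowerUsage(dev, &drawMw);          // current board draw, mW
    nvmlDeviceGetEnforcedPowerLimit(dev, &capMw);   // active power cap, mW
    nvmlDeviceGetTemperature(dev, NVML_TEMPERATURE_GPU, &tempC);
    printf("draw %.1f W / cap %.1f W at %u C\n",
           drawMw / 1000.0, capMw / 1000.0, tempC);

    unsigned long long reasons = 0;
    nvmlDeviceGetCurrentClocksThrottleReasons(dev, &reasons);
    if (reasons & nvmlClocksThrottleReasonSwPowerCap)
        printf("capped: software power limit\n");
    if (reasons & nvmlClocksThrottleReasonSwThermalSlowdown)
        printf("capped: software thermal slowdown\n");
    if (reasons & nvmlClocksThrottleReasonHwSlowdown)
        printf("capped: hardware slowdown\n");

    nvmlShutdown();
    return 0;
}
```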
There's a thread in troubleshooting that seems related to transient spikes knocking systems offline. The vBIOS update you mention likely partly explains why, though it's still worth being aware of as a concern with my generation of GPU, it hasn't been an issue for me so far.
When I first got the card, the default OC vBIOS allowed a 420W power limit (108%, with spikes to 113%) and a higher voltage limit if you wanted to OC it beyond the 'stock OC'.
Within a couple of months of having it, a vBIOS update was released that pulled that back to 400W (105%, with spikes to 108%) and dropped the voltage curve slightly.
I've stuck with that since, as it's been stable.
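Quick back-of-the-envelope check on what those slider percentages imply in actual watts (my numbers, rounded; interestingly the two vBIOS versions imply slightly different reference TDPs, so treat these as approximate):

```cpp
// Back-of-the-envelope: implied stock limit and transient spikes in watts.
#include <cstdio>

int main() {
    // Old vBIOS: 420 W at 108%, spikes to 113%.
    double oldStock = 420.0 / 1.08;   // ~389 W implied stock limit
    printf("old: stock ~%.0f W, spikes ~%.0f W\n",
           oldStock, oldStock * 1.13);             // spikes ~439 W

    // Updated vBIOS: 400 W at 105%, spikes to 108%.
    double newStock = 400.0 / 1.05;   // ~381 W implied stock limit
    printf("new: stock ~%.0f W, spikes ~%.0f W\n",
           newStock, newStock * 1.08);             // spikes ~411 W
    return 0;
}
```

So the update shaved roughly 30 W off the worst-case transients, which lines up with it being aimed at spike-related shutdowns.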
Error checking code rewritten.
Logging code rewritten.
Added better support for texture dimensions/formats changing at runtime.
Added a developer config option to show only interpolated frames.
Improved nvngx wrapper DLL compatibility.
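For anyone wondering what the nvngx wrapper actually is: below is a hypothetical, stripped-down sketch of the generic wrapper-DLL pattern such a shim uses. This is not the mod's actual source; the file and symbol names are made up for illustration (apart from NVSDK_NGX_D3D12_Init, a real NGX export used only as a forwarding example).

```cpp
// Hypothetical wrapper-DLL pattern (illustration only, not the mod's code).
// The wrapper is loaded under the filename the game expects, forwards the
// exports it doesn't care about to the renamed original, and intercepts the
// entry points it wants to reroute.
#include <windows.h>

// MSVC linker directive: forward an untouched export straight through to the
// renamed original ("_nvngx_real" is an assumed filename), one per symbol.
#pragma comment(linker, "/export:NVSDK_NGX_D3D12_Init=_nvngx_real.NVSDK_NGX_D3D12_Init")

// Lazily load the renamed original (avoids calling LoadLibrary in DllMain).
static HMODULE RealModule() {
    static HMODULE h = LoadLibraryW(L"_nvngx_real.dll");
    return h;
}

// Intercepted export (hypothetical name): do the mod's work here, e.g. feed
// the frame to FSR3 frame generation instead of the DLSS-G path, then chain
// through to the original implementation if it exists.
extern "C" __declspec(dllexport) int InterceptedEntryPoint() {
    using Fn = int (*)();
    auto original = reinterpret_cast<Fn>(
        GetProcAddress(RealModule(), "InterceptedEntryPoint"));
    // ... mod-specific replacement logic would go here ...
    return original ? original() : 0;
}
```

The compatibility headaches come from games shipping different NGX versions with different export sets, which is presumably what that changelog line is about.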
Sweet!
You realize they work in different ways on different parts of GPU hardware...right?
And you do realize that both the 2000 and 3000 series also have an Optical Flow Accelerator, same as the 4000 series uses for FG. The gain from FG most likely wouldn't be as much as on the 4000 series, but it should still be more than enough to smooth gameplay in all new games. Why do some people still ignore the fact that Nvidia intentionally locks you out of a feature that should be available to you, just to force you to buy new GPUs?
I am aware Turing and Ampere both have OFA units. Turing's is used for video playback only, IIRC from past discussions based on the whitepaper.
I'm also aware of generational hardware improvements such as instruction optimization, core optimizations/count changes, and transistor changes/count changes. Each generation of RTX has had generational improvements to the OFA (as documented in said whitepaper), yes, but that is only part of the solution for FG as Nvidia manages it, the other part being SER (Shader Execution Reordering).
I'm willing to believe their engineers when they say they've tested it all and it incurs too high a rendering-time cost to potentially be worth it. The last time they lied about hardware specifications, they got a nice class action suit dumped on them after trying to hide that the GTX 970 split its total VRAM across two different speeds of RAM.
https://arstechnica.com/tech-policy/2016/07/nvidia-offers-30-to-gtx-970-customers-in-class-action-lawsuit-over-ram/
Even comparing shader cores/shader core counts to gauge cross-generation performance uplift isn't accurate, due to generational changes.
I get it though, it's fun to play conspiracy theorist about everything. I don't think Nvidia (or AMD, or any company really) is some bastion of honesty; they want money. But I also don't think everything has an ulterior motive behind it.
In that case though, a better theory is that Nvidia killed off non-RT/tensor-core GPUs on purpose solely to refine their hardware and software AI divisions, as they'll need alternate revenue once rendering hits a performance wall due to physical process limitations.
All the above said, I would actually love it if it became known that full DLSS 3 support on Turing/Ampere is possible with acceptable frame-time cost through some sort of pipeline optimization.
Would be unlikely, but also funny.
So FSR3 is only good for sub-6XXX-series cards, or Nvidia users who hate Nvidia's stance on FG.
Dunno what's taking CDPR so long to add native support.