The 50 series is still fake frames.
No more fake than anything else, it's simply a new way of doing things.
But hey, good luck with your slideshows, or with simply not using half the new tech developed. I'll be enjoying fluid gameplay with all the eye candy turned up.
It's not, though. This isn't Lossless Scaling's rubbish program but very expensive dedicated hardware. It really isn't noticeable unless you are a very high-end competitive player, but then, any competitive game doesn't need it anyway...
There is still a typical generational increase; it's just that most of the gains they're advertising come from multi-frame generation, which is currently only plausible on the 50 series due to architectural improvements.
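For anyone unclear on what "multi-frame generation" actually means: the idea is that for every pair of rendered frames, the GPU synthesizes extra frames in between. Here's a toy Python sketch of that idea only; this is my own linear-blend illustration, not Nvidia's actual pipeline, which uses motion vectors, optical flow, and neural networks running on dedicated hardware.

```python
# Toy illustration of multi-frame generation: given two rendered frames,
# synthesize N intermediate frames between them. Real implementations use
# motion vectors, optical flow, and neural networks on dedicated hardware;
# naive linear blending like this would ghost badly in practice.
import numpy as np

def generate_intermediate_frames(frame_a: np.ndarray,
                                 frame_b: np.ndarray,
                                 count: int) -> list[np.ndarray]:
    """Blend `count` frames between two rendered frames."""
    frames = []
    for i in range(1, count + 1):
        t = i / (count + 1)                       # interpolation weight
        frames.append((1.0 - t) * frame_a + t * frame_b)
    return frames

# Two fake 4x4 grayscale "rendered" frames.
rendered_a = np.zeros((4, 4))
rendered_b = np.ones((4, 4))

# 3 generated frames per rendered pair is the "multi" part: a 60 fps render
# rate becomes a 240 fps presentation rate.
generated = generate_intermediate_frames(rendered_a, rendered_b, count=3)
print(len(generated), "generated frames, first pixel values:",
      [round(float(f[0, 0]), 2) for f in generated])
```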
Frames obtained through DLSS and frame generation aren't fake just because AI is involved; it's AI that's integrated into the GPU. It's still the GPU. They're improving more on the AI front, but that's because it's newer, and it's capable of more.
Can you describe a real frame to me so I know the difference between a frame generated on a GPU vs a frame generated on a GPU?
Well, Nvidia has only invented every change in the way graphics are rendered, and is the company that has single-handedly brought every new leap in tech to the gaming industry.
I think these guys sitting at home in their bedrooms know best, personally, and if you give them a chance they'll lay down a way better ten-year plan than Nvidia can come up with; it's why these guys earn the big bucks.
This is true for pretty much any hardware change. AMD had latency issues to resolve after Zen 2 split the die: the I/O die added latency, and their Ryzen 9 SKUs had latency between chiplets. Intel is going through the same thing with Arrow Lake's I/O die and with P-cores and E-cores. Eventually the technology is improved to the point that the latency isn't noticeable.
That's not describing a real frame. You're on about FPS and its tie to control input, the linear process that governs latency. Now, if only Nvidia were creating a method to help with that...
And how do you think you'll be getting those 80 fps, exactly? I mean, nothing exists that can run CP2077 truly maxed out at native 4K anywhere near that, so your argument is moot.
Edit.
Also, any added latency is very small, and it does not play like it's at 30 fps. There is a VERY distinct difference in how 30 or 60 fps feels versus, say, DLSS/frame gen at 120+ (rough frame-time numbers sketched below).
Maybe try the tech out?
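Quick back-of-the-envelope numbers behind that feel difference, with assumed values (60 fps base render rate, 2x frame generation; the actual queueing overhead varies by game and settings):

```python
# Rough frame-time arithmetic for why frame-gen 120 fps doesn't feel like
# 30 fps. The base rate and 2x multiplier are illustrative assumptions,
# not measurements.
def frame_time_ms(fps: float) -> float:
    """Milliseconds per frame at a given frame rate."""
    return 1000.0 / fps

base_fps = 60                        # assumed rendered (input-sampled) rate
generated_per_pair = 1               # assumed 2x frame generation
display_fps = base_fps * (generated_per_pair + 1)

print(f"30 fps native:     {frame_time_ms(30):.1f} ms per frame")
print(f"60 fps native:     {frame_time_ms(60):.1f} ms per frame")
print(f"{display_fps} fps displayed: {frame_time_ms(display_fps):.1f} ms per displayed frame, "
      f"with input still sampled every {frame_time_ms(base_fps):.1f} ms "
      f"plus a small queueing cost")
```

The point being: the displayed motion is smoother than the base rate, while input latency stays close to the base rate rather than ballooning to 30 fps territory.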
And that's why they are revamping input, which will only get better. Now, I'll grant you that for the time being twitch shooters are still best at native, but for story games that push FX you won't be able to tell.
Also, they have to move on, as they are phasing out rasterization; it's pretty much a dead end now. Two things will be phased out in gaming over the long term, the second being x86. Both are very messy and unnecessarily bulky now with legacy crap.
Consumers cry "now, now, now" while innovators see a future. Maybe we should have stuck to 2D sprites and called it a day.