I am just shocked how little people know about tech before getting into a thread like this and commenting.
Like what made you come to such a conclusion? It wasn't math or benchmarks.
First off, READ THE WHOLE SPEC SHEET. Not only does the XT model have better clock speeds, but it also has TWICE the PCIe lanes.
If you run an older board which many people do who are in the position to be looking at such low end cards this WILL hurt performance.
And for a 10% uplift in performance due to more VRAM? This tells me you don't know what VRAM is or does.
Adding more memory either for system or graphics doesn't "increase performance", it prevents performance decreases.
It's already a documented FACT that 8GB isn't enough even at 1080p. Daniel Owen has beaten this topic TO DEATH already. Even the exact same card, like the 4060 Ti, loses 34% performance when going down to the 8GB model at 1080p. Not just in one game but in MANY. Just about every modern title. And 8 vs 16 is also the difference between running games like the RE remakes at 4K 60FPS solid versus less than 1 FPS.
This is literally because 8GB isn't enough and it hits system RAM.
This is made WAY WORSE for cards like the 7600, which only has HALF the PCIe lanes of the XT model, and worse still if you are running an older board with an older PCIe version.
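For a rough sense of how fast 8GB fills up, here's a back-of-envelope sketch. The numbers are illustrative assumptions, not measurements from any specific game: BC7-compressed textures land around 1 byte per pixel, and a full mip chain adds roughly a third on top.

```python
# Back-of-envelope texture memory math (illustrative numbers only).
def texture_mb(width, height, bytes_per_pixel=1.0, mips=True):
    """BC7-compressed textures are ~1 byte/pixel; a full mip chain
    adds roughly one third on top of the base level."""
    base = width * height * bytes_per_pixel
    return base * (4 / 3 if mips else 1) / (1024 ** 2)

one_4k_texture = texture_mb(4096, 4096)
print(f"one 4K BC7 texture: {one_4k_texture:.0f} MB")   # ~21 MB
# A scene with ~300 unique 4K material textures already eats ~6 GB,
# before render targets, geometry, and everything else the game allocates.
print(f"300 such textures: {300 * one_4k_texture / 1024:.1f} GB")
```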
Please, don't bother speculating on things you clearly haven't even bothered to try and understand.
https://www.techpowerup.com/review/sapphire-radeon-rx-7600-xt-pulse/
Yes? Put the other popular ones, because idc.
Clock speed? Less than 4% difference
2655/2755. So is there math? Yes.
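The arithmetic in one line, using the boost clocks quoted above:

```python
# Clock-speed gap between the two cards (boost clocks from the
# spec sheet cited above: RX 7600 = 2655 MHz, RX 7600 XT = 2755 MHz).
base, xt = 2655, 2755
diff_pct = (xt - base) / base * 100
print(f"{diff_pct:.1f}% higher boost clock")  # ~3.8%, i.e. under 4%
```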
PCIe is still 8 lanes on both cards, and if you don't think so, show me where it says otherwise. By the way, good luck finding that one on the AMD product page. And don't tell me the PCIe lanes make a meaningful impact on such a low-end GPU, because they don't.
If you REALLY want MOAR benchmarks, there is one from 8 years ago on a GTX 1080.
https://www.techpowerup.com/review/nvidia-geforce-gtx-1080-pci-express-scaling/24.html
And if we compare how PCIe scaling works on a GPU like the RTX 4090, which pushes 4x more data than the mentioned RX and 8x more than the GTX, the conclusion is the same: it doesn't matter. They certainly have different memory management systems, but it's just an offset of +- something on the graph.
https://www.techpowerup.com/review/nvidia-geforce-rtx-4090-pci-express-performance-scaling-with-core-i9-13900k/
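For anyone who wants the bandwidth numbers behind that argument: the per-lane figures below are the commonly quoted effective rates per direction (assumed round values), which is why a Gen4 x8 card only actually loses bandwidth when dropped into a Gen3 board.

```python
# Rough per-direction PCIe bandwidth (GB/s per lane, commonly quoted
# effective rates after link encoding overhead).
PER_LANE = {3: 0.985, 4: 1.969}

def bandwidth(gen, lanes):
    return PER_LANE[gen] * lanes

# An x8 card like the RX 7600 in a Gen4 slot roughly matches a full
# Gen3 x16 slot; put it in a Gen3 board and the bandwidth halves.
print(f"Gen4 x8 : {bandwidth(4, 8):.1f} GB/s")   # ~15.8
print(f"Gen3 x16: {bandwidth(3, 16):.1f} GB/s")  # ~15.8
print(f"Gen3 x8 : {bandwidth(3, 8):.1f} GB/s")   # ~7.9
```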
You think I don't know what VRAM is or how it works? Oh well, let me quote you:
"Adding more memory either for system or graphics doesn't "increase performance", it prevents performance decreases."
OK, if it "prevents performance decreases", then put in 24GB of VRAM and run games from the past 10 years. You are preventing ABSOLUTELY nothing. Put in 48GB, same scenario, ABSOLUTELY NOTHING. Not only that, but the performance may even be worse due to slower access times...
Why do I say it improves performance? Because game requirements keep rising, more VRAM will be required in the future, and that's where the impact outweighs the penalty of slower access times.
And who is Daniel Owen, the same guy who opens the overlay and points like an idiot: "guys, look, today I'm going to show you a green card and a red card, both have the same amount of VRAM, blablabla, oh look, this card for some reason uses less VRAM and performs better than the other in the same game and settings, so it must be better"? Meanwhile he completely ignores the system RAM and says nothing about why this is happening.
https://www.youtube.com/watch?v=VPABpqfb7xg
wtf is this... If you claim this guy knows why 8 GB isn't enough, meanwhile he never tells you why VRAM usage is intentionally kept lower and not used to 100% while he "debunks some myths". I can tell you that there is something called texture streaming. Instead of loading data to 100% and having to evict it half a second later, which is something you don't want, you keep VRAM usage lower so textures load faster, because VRAM doesn't have to do double the work at such a moment.
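The idea behind texture streaming can be sketched in a few lines. This is a toy illustration, not any engine's actual code: keep resident textures under a budget set below total VRAM, and evict the least-recently-used ones so a new load never has to stall waiting for space.

```python
from collections import OrderedDict

class TexturePool:
    """Toy texture-streaming pool: keeps usage under a budget that is
    deliberately below total VRAM, evicting least-recently-used
    textures so new loads always have headroom."""
    def __init__(self, budget_mb):
        self.budget = budget_mb
        self.used = 0
        self.resident = OrderedDict()  # name -> size_mb, in LRU order

    def request(self, name, size_mb):
        if name in self.resident:            # already resident: mark hot
            self.resident.move_to_end(name)
            return "hit"
        while self.used + size_mb > self.budget and self.resident:
            _, evicted_mb = self.resident.popitem(last=False)  # evict LRU
            self.used -= evicted_mb
        self.resident[name] = size_mb
        self.used += size_mb
        return "streamed in"

pool = TexturePool(budget_mb=6144)  # e.g. a ~6 GB budget on an 8 GB card
print(pool.request("rock_albedo", 21))   # streamed in
print(pool.request("rock_albedo", 21))   # hit
```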
Games on both consoles use fast, unified memory but both consoles use slower RAM for background tasks as well.
The PS4 has 256MB of DDR3 and a separate, dedicated ARM processor for the OS and background tasks.
PS5 has 512MB of DDR4.
PS5pro has 2GB of DDR5.
Consoles would definitely use more of normal memory if it was beneficial for gaming as it’s cheaper.
… but they don’t use cheaper memory for games for a reason.
16GB is enough in most cases, only a few instances where it isn't.
Counterpoint, why is no one asking why a game that looks worse than a 2011 game graphically needs so much VRAM?
I don't see anything there that looks like it needs 20GB VRAM at all. Maybe the dev team can help me understand how so much VRAM can be wasted on a scene that looks terrible in every single way.
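A rough calculation backs that question up. The render-target layout below is made up but typical of a heavy deferred renderer: even at 4K, the G-buffer itself is only a few hundred MB, so a multi-gigabyte budget is going to textures, geometry, and caching decisions, not to the framebuffer.

```python
# Illustrative math: even a fat deferred G-buffer at 4K is tiny next
# to a 20 GB budget (the layout below is assumed, not from any game).
W, H = 3840, 2160
targets_bytes_per_pixel = [8, 8, 8, 4, 4, 4]  # e.g. 3x RGBA16F + 3x RGBA8
total_mb = sum(W * H * bpp for bpp in targets_bytes_per_pixel) / 1024**2
print(f"4K G-buffer: ~{total_mb:.0f} MB")  # a few hundred MB
# So when a game asks for 20 GB, almost all of it is assets and
# streaming/caching choices, not the render targets themselves.
```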
A cracked GPU PCB can be fixed; it just takes more time, and not all shops will warranty the work, because a cracked GPU PCB is usually the fault of the consumer.
If you have the time, skill, and money, cracked GPU PCB repair (or even offering this type of service plus a warranty) might be extremely lucrative (or not: if the cracked PCB is, in fact, user error, you could lose money and time).
As costs rise globally, imo, PC refurbishing/repair is lucrative in certain countries, and staying honest will earn great profit - stay true to honesty in your business practice.
Demoscene is still pushing Commodore 64 limits. Because they have to.
Current PC devs don't have to. They have no incentive to make their code more elegant, they can just demand more overhead. More RAM. More cycles. More power.
We need a plateau on the hardware end, because all we're doing is incentivizing laziness. The code and optimization gets sloppier and sloppier and more and more bloated, because why should coders learn to do it better and better? They don't need to.
My hot take.
Nvidia giving less VRAM than the PS5 or even the upcoming Switch 2 is another problem.
Both problems add up making PC gaming worse.
Don't purchase Nvidia products. Nvidia is the worst value in the whole PC industry; that includes consoles, motherboards, graphics cards, CPUs, PSUs, towers, prebuilts, SSDs, RGB lighting, mice, keyboards, displays, USB devices, webcams, gamepads and whatever else I don't feel like listing right now.
Nvidia gets the golden award on the steam forums for worst value, biggest rip off, most useless gimmicks and stingiest designs.
This company would be broke if it wasn't for smooth-brained individuals.
Developers being terrible at optimising their games is not NVIDIA's fault, and simply adding VRAM isn't a solution, it's only going to give developers LESS incentive to fix their crap because they'll have more video memory to work with.
I think most people would be happy to, if Nvidia and even AMD weren't upselling the halo cards by pricing even the lower cards at a worse value. If someone buys a low-range or mid-range GPU, they are content with "not ultra" settings. The problem is that Nvidia doesn't even make those cards worth it, making their value proposition awful and the cards bad even at the lower settings they're meant to run.
Lack of VRAM is now detrimental even for 8 GB GPUs that are simply meant for 1080p. It's why Intel's B580 is being seen as the sole reasonable and rational card for the lower end that's actually being sold at an AFFORDABLE price point. And not for 4K, but simply for 1080p to possibly 1440p, and with at least 12 GB of VRAM. People just want a GPU that will actually run the game at a reasonable framerate and won't break the bank doing so. The problem is that Nvidia hasn't been offering any GPUs, even at the lower end, that could justify the price tag relative to the performance.
AMD has better value but that's because they have literally nothing else to offer. NVIDIA can get away with their performance per dollar value being worse because they're the dominant corporation in their respective field, they have technologies that AMD either failed to surpass or didn't even make their own variant of, their GPU design is more lucrative and expensive to produce, etc.
I mean, sure, but the question at hand is about VRAM and Nvidia being cheap. The rest is irrelevant since we're not talking about the economy or lawmakers here. Let's not derail the topic with arguments that aren't relevant to the conversation.
Simply pushing NVIDIA to throw more VRAM at everything like AMD likes to do is not a solution, it just adds fuel to the fire. All it's going to do is increase the "inflation" on how much VRAM is going to be used up by badly optimised games.
A much better move for NVIDIA would be to start helping and pushing developers to reduce wasted memory; that would remove the need to spend money on memory that doesn't really need to be there and would resolve the problems current GPUs have with "lacking" memory buffers.