The i9-9900K is still fine and will be fine for a few more years at least; there's no reason to upgrade just yet. It's comparable to an i7-10700K and i5-12400 at stock, though it's a different story with overclocking.
And is there even a game that benefits from this?
1. It was their first iteration of the new cache design to go into mass production, not a perfected design, so it has flaws.
2. 3D V-cache is sensitive to heat and electromigration, so they had to lower the clocks a bit relative to the 5800X and limit overclocking support, all to preserve the cache.
3. Ryzen 9 requires much more power, produces much more heat, and already has a lot of cache that also has to be shared between chiplets, IIRC. It just wouldn't be feasible unless they made the Ryzen 9 X3D SKU monolithic with only a single chiplet, or allowed just one chiplet to access all of the cache at once.
Cache is also hit or miss; not all games will benefit from it. Same goes for fast RAM, 3200 MHz is fine. Lastly, PCI-e 3.0 is still fine and will be fine for a few more years. Only the 3090 and beyond (and GPUs running fewer than 16 lanes) benefit from newer revisions, and in the former case it's such a tiny difference that it's not even worth worrying about. The 9900K isn't behind by any means, it's just not new.
The massive 96MB L3 cache in the 5800X3D allows it to beat the 12900K in some instances, but when cache doesn't help at all, it's slower than the regular 5800X, because the non-3D chip clocks higher and supports overclocking, so it can be pushed even further. You can see the cache effect in miniature with the pointer-chase sketch below.
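To make the cache point concrete, here's a minimal pointer-chasing sketch in C. It's my own illustration, not from anyone in this thread, and the working-set sizes and step count are arbitrary assumptions. Random accesses inside a set that fits in L3 come back in a few nanoseconds; once the set spills out of cache, every access pays the trip to DRAM. A game whose hot data fits in the 5800X3D's 96MB but not in a normal 32MB L3 is exactly the case where the X3D pulls ahead.

/*
 * Hypothetical pointer-chase sketch: times random accesses over working
 * sets of different sizes. Once the set no longer fits in L3, ns/access
 * jumps, which is the effect extra 3D V-cache exploits in cache-bound games.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static volatile size_t sink; /* keeps the chase loop from being optimized out */

static double ns_per_access(size_t bytes, size_t steps) {
    size_t n = bytes / sizeof(size_t);
    size_t *next = malloc(n * sizeof *next);
    if (!next) return -1.0;
    for (size_t i = 0; i < n; i++) next[i] = i;
    /* Sattolo's algorithm: one random cycle, defeats the hardware prefetcher.
     * rand() is crude randomness, but fine for a sketch. */
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }
    size_t p = 0;
    clock_t t0 = clock();
    for (size_t s = 0; s < steps; s++) p = next[p]; /* serial dependent loads */
    clock_t t1 = clock();
    sink = p;
    free(next);
    return (double)(t1 - t0) / CLOCKS_PER_SEC * 1e9 / (double)steps;
}

int main(void) {
    srand(42);
    /* 4 MiB fits in most L3 caches; 256 MiB forces trips to DRAM */
    size_t sizes[] = {4u << 20, 32u << 20, 256u << 20};
    for (int i = 0; i < 3; i++)
        printf("%4zu MiB: %.1f ns/access\n",
               sizes[i] >> 20, ns_per_access(sizes[i], 20000000));
    return 0;
}

Compile with something like gcc -O2 and you should see ns/access jump once the working set exceeds your CPU's L3 size, and stay low longer on a bigger-cache chip.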
RAM doesn't make that much of a difference in most instances for gaming; 3200 MHz is fine. Most people are too focused on primary timings to notice when the secondary and tertiary timings are garbage, and it's those timings that make the real difference in latency. Primary timings are just for advertising; the quick math below shows why the headline speed alone tells you little.
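As a rough illustration (my own arithmetic, not from the thread): first-word CAS latency in nanoseconds is roughly CL x 2000 / transfer rate in MT/s, because DDR's clock runs at half the transfer rate. A "faster" kit can have exactly the same real latency as a slower one.

/* Rough CAS latency math: latency_ns = CL * 2000 / (MT/s).
 * The kits below are generic examples, not recommendations. */
#include <stdio.h>

static double cas_ns(int cl, int mts) { return cl * 2000.0 / mts; }

int main(void) {
    printf("DDR4-3200 CL16: %.2f ns\n", cas_ns(16, 3200)); /* 10.00 ns */
    printf("DDR4-3600 CL18: %.2f ns\n", cas_ns(18, 3600)); /* 10.00 ns, same latency */
    printf("DDR4-3200 CL14: %.2f ns\n", cas_ns(14, 3200)); /*  8.75 ns, tighter timings win */
    return 0;
}

And that only covers one primary timing; the secondary and tertiary timings stack on top of it, which is exactly the point about garbage sub-timings hiding behind a good-looking CL number.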
PCI-e revisions also only matter if a GPU is either well beyond the limits of 3.0 x16 bandwidth, or if its lanes are limited to x8 or x4 like the 5500 XT and 6500 XT, in which case 4.0 actually does matter. The 6500 XT can gain over 15% performance in some games just by having PCI-e 4.0 bandwidth, since it only has 4 lanes; the quick bandwidth math below shows why. For people like you who have a 9900K, if you have something like a 3070 or 3080, you're fine.
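For the bandwidth numbers behind that (my own back-of-envelope, not from the thread): PCIe 3.0 and 4.0 both use 128b/130b encoding, so usable one-way bandwidth is roughly GT/s x lanes x 128/130 / 8.

/* Rough usable PCIe bandwidth per direction:
 * GB/s = GT/s * lanes * (128/130 encoding) / 8 bits per byte. */
#include <stdio.h>

static double gbps(double gts, int lanes) {
    return gts * lanes * (128.0 / 130.0) / 8.0;
}

int main(void) {
    printf("PCIe 3.0 x16: %5.2f GB/s\n", gbps(8.0, 16));  /* ~15.75 */
    printf("PCIe 4.0 x16: %5.2f GB/s\n", gbps(16.0, 16)); /* ~31.51 */
    printf("PCIe 3.0 x4 : %5.2f GB/s\n", gbps(8.0, 4));   /* ~3.94  */
    printf("PCIe 4.0 x4 : %5.2f GB/s\n", gbps(16.0, 4));  /* ~7.88  */
    return 0;
}

A 4-lane card on 3.0 gets under 4 GB/s, which is where the 6500 XT chokes; the same card on 4.0 nearly doubles that, while a 3.0 x16 slot already gives a 3070/3080-class card plenty.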
You could overclock it if you want, but it's not even necessary.
Maybe with the 4000 series, but from what I've seen they're only going to run about 20% faster, so 4K will still be hard to run. At least you'll probably be able to hold 60 FPS on high/ultra, so that's a plus.
...that gap even more. It's not the miracle 50% they're claiming, not even close, and you can bank on that.
The 3060 was pretty much as good as the 2080 SUPER, so it's really not much of a stretch to see the same thing happen again, especially since there's another die shrink, so the transistor count can skyrocket.
What game sees this improvement? The only benefit as far as I'm concerned is having more devices (SSDs, etc.) connected to the motherboard. I saw almost zero gain from PCIe 4.0.
https://www.youtube.com/watch?v=F9-IjqTBcNY& Potentially the difference between 60 FPS and less than that.
https://www.youtube.com/watch?v=PsdeJszdV7I If you pair it with the Ryzen 5 4500, it runs considerably worse than it does with the i3-12100. Same price range, but the i3 wins by a huge margin. Aside from CPU performance, the i3 supports PCI-e 4.0 while the mentioned R5 does not.
Now let's look at the 3090.
https://www.youtube.com/watch?v=b-nIMfaoZxo
Makes a small difference, not enough to really be concerned about. The only people who really care are the ones who obsess over it and blow thousands of dollars just for more FPS.
If the RTX 4070 comes in around RTX 3080 Ti/RTX 3090 performance, which I think is possible, then the RTX 4060 (is there going to be a lesser and a greater model at the onset for this tier, like usual?) landing above the RTX 3080 is probably a best-case scenario more than an average; it'll more likely fall between the RTX 3070/Ti and the RTX 3080. But that's PURELY a guess based on typical trends, which don't always hold (though they roughly have for a long time now).
Unfortunately, I'm expecting pricing for all of them to be a huge turn-off anyway. The whole thing with nVidia not wanting prices on current stuff to go lower is a bad sign for where they're likely to price the new stuff.
But regardless, I could have sworn the earlier benchmarks had the RTX 3060 performing closer to between the two GTX 1080s, with only the RTX 3060 Ti more firmly ahead, but maybe I'm mistaken. Of course, drivers change and tend to bring uplifts to newer cards, as does the suite of games review sites use to average these out (and newer games tend to run better on newer hardware, naturally).
There's always that debate, "should I only buy what's relevant at this moment?" or "should I buy something that has some headroom?"
Also, the point of multiple cores isn't necessarily to have single programs use all of them; sometimes it's about having plenty of headroom for multi-tasking.