This way the task becomes more memory-intensive than CPU-intensive. You are moving the information around multiple times to get a result, and the cache should also become less efficient.
Yes, but I guess it is still better than calculating the whole map on a single core. Also, I'm not talking about rendering (that's a GPU task), only about some simple game logic and some physics. And I would say "tick", not "frame" :)
When it comes to physics that is not always true; there are also cases where the physics are tied to the fps, for better efficiency I guess, or to look a little bit better if the values are adjustable. But in principle the tick rate determines them.
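For anyone following along, here is roughly what that tick/frame split looks like in code: a minimal fixed-timestep loop sketch. The 64 Hz tick rate, the sleep standing in for render work, and the commented-out update/render calls are illustrative assumptions, not any particular engine's implementation.

```cpp
// Sketch: game logic/physics advance at a fixed tick rate, rendering runs per frame.
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    using Clock = std::chrono::steady_clock;
    constexpr double tick_rate = 64.0;            // logic/physics updates per second (assumed)
    constexpr double tick_dt   = 1.0 / tick_rate; // fixed timestep in seconds

    double accumulator = 0.0;
    auto previous = Clock::now();
    long ticks = 0, frames = 0;

    while (frames < 300) {                        // run a few hundred frames for the demo
        auto now = Clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // Consume real time in fixed-size steps: this is the "tick".
        while (accumulator >= tick_dt) {
            // update_physics(tick_dt); update_game_logic(tick_dt);  // hypothetical calls
            accumulator -= tick_dt;
            ++ticks;
        }

        // Rendering happens once per loop iteration, independent of the tick count.
        // render();                                                 // hypothetical call
        ++frames;
        std::this_thread::sleep_for(std::chrono::milliseconds(3));   // stand-in for render work
    }
    std::printf("frames: %ld, ticks: %ld\n", frames, ticks);
}
```

Running this, the frame count and tick count diverge, which is exactly the point: physics stays deterministic at the tick rate while the frame rate floats.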
I thought we went over this.
It's a server CPU though; it was never meant for gaming in the first place. If you want a big upgrade, even something like a Ryzen 3600 would be a giant difference, but I would try to get a 5600 instead. AM4 motherboards are dirt cheap, so that makes it a good budget pick.
The CPU is not very good in single-threaded performance either.
Turbo:
1 core multiplier: 36
2 cores multiplier: 35
3 cores multiplier: 34
4-10 cores multiplier: 33
That's why your system is struggling. It's no better than a modern one running at half speed.
The E5 closes the gap a bit in mixed-core loads, but then falls flat on its face when tested with all-core loads against the same R9 chip. No matter how you slice it, your CPU sits anywhere from half to one third the speed of a modern consumer chip.
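To put rough numbers on the turbo bins above, here is a small sketch. The 100 MHz base clock is standard for that platform, but the 4.7 GHz "modern all-core" figure is only an assumed comparison point, and raw clocks ignore the per-clock IPC gap entirely, which is where the rest of the "half speed" comes from.

```cpp
// Effective frequency per active-core count from the quoted turbo multipliers.
#include <cstdio>

int main() {
    const int multipliers[] = {36, 35, 34, 33};   // 1, 2, 3, 4-10 active cores
    const char* labels[]    = {"1 core", "2 cores", "3 cores", "4-10 cores"};
    const double bclk_mhz   = 100.0;              // base clock of the platform
    const double modern_all_core_ghz = 4.7;       // assumed figure, varies by CPU

    for (int i = 0; i < 4; ++i) {
        double ghz = multipliers[i] * bclk_mhz / 1000.0;
        std::printf("%-10s -> %.1f GHz (%.0f%% of an assumed %.1f GHz modern all-core clock)\n",
                    labels[i], ghz, 100.0 * ghz / modern_all_core_ghz, modern_all_core_ghz);
    }
}
```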
As for the games issue, as mentioned, tick rate is the limit, and the reason it's not all split out is that it would be too difficult to piece back together. Games are a real-time rendering application, and they just don't have enough time to piece everything back together from a highly multi-threaded workload. For time's sake, many engines run most timing-related computation on a single thread (or a few threads). Then the main thread is always the limit as it tries to stitch everything together on the fly at 165 fps or more (for many modern gamers).
And as for UT4, it definitely could be data streaming causing issues, more so if the CPU core speed drops when multi-core use is in play. The game or app may only show a small usage, but the data transfer (even assuming storage and memory can keep up) will still cause a spike in kernel use as the data gets shuffled. That kernel-level use can push overall system load high enough to kick in multi-core use. If the game's core drops a few hundred MHz to accommodate the spin-up of extra cores, that could also impact the speed of the game in question.
Some people just don't like to be told that having more cores doesn't necessarily mean your processor is going to stay viable much longer than one with fewer cores from the same generation. It's all about how well those cores actually perform.
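A toy illustration of that stitching problem, with nothing engine-specific about it (the eight fake subsystem jobs and the busy-work inside them are made up): even when work is fanned out to worker threads, one thread still has to gather everything before the frame can go out, and at 165 fps the whole budget is only about 6 ms.

```cpp
// Fan work out to workers, then stitch results back on the main thread each frame.
#include <chrono>
#include <cstdio>
#include <future>
#include <vector>

int do_subsystem_work(int id) {
    // Stand-in for AI / physics island / animation work done on a worker thread.
    volatile int acc = 0;
    for (int i = 0; i < 200000; ++i) acc += i % (id + 2);
    return acc;
}

int main() {
    constexpr double fps = 165.0;
    std::printf("frame budget at %.0f fps: %.2f ms\n", fps, 1000.0 / fps);

    auto start = std::chrono::steady_clock::now();

    // Fan out to workers...
    std::vector<std::future<int>> jobs;
    for (int id = 0; id < 8; ++id)
        jobs.push_back(std::async(std::launch::async, do_subsystem_work, id));

    // ...but the main thread must wait for and combine every result before the
    // frame can be submitted, so the end of each frame is serialised here.
    long combined = 0;
    for (auto& j : jobs) combined += j.get();

    auto ms = std::chrono::duration<double, std::milli>(
                  std::chrono::steady_clock::now() - start).count();
    std::printf("fan-out + stitch took %.2f ms (combined=%ld)\n", ms, combined);
}
```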
Any proof of this with a real comparison and charts? I'm talking about games from 2018-2020; where did you get a one-third difference? In fact, my CPU should be close to an i5 10500 or i7 7700 (not in single-threaded mode, of course). Again, in work tasks the i7 7700U notebook that my company gave me feels *much* slower than my home Xeon when starting and stopping services or indexing files in the IDE, for example.
OK, thanks for the explanation, it seems clearer now. As for me, I would be happy with 120-130 fps, and I'm not even asking for ultra quality or 2K resolution; 1080p is enough for me.
Ah, that makes sense, and explains why my i5 2500K overclocked to 4.1 GHz performs significantly better in games like L4D2 and, to some extent, CS:GO. It has similar cores, but they run roughly 25% faster under all-core load (4.1 GHz versus the Xeon's 3.3 GHz).
As for me not wanting to hear the answers: that's because I have loans to pay, and a new PC from scratch costs about 1-1.5 times my monthly salary. So yes, if I can live without the upgrade for some time, I probably will. And even after that, I would stop and think whether I should exchange my build, which is totally fine for work and even some simple games, for 100-120 USD (which is close to nothing).
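For what it's worth, the raw clock gap implied there works out like this; just the frequency ratio, ignoring any IPC difference between the Sandy Bridge 2500K and the Ivy Bridge Xeon.

```cpp
// Quick check on the clock-speed gap: 4.1 GHz (overclocked 2500K) vs 3.3 GHz (Xeon all-core turbo).
#include <cstdio>

int main() {
    const double oc_2500k_ghz     = 4.1;
    const double xeon_all_core_ghz = 3.3;
    std::printf("4.1 / 3.3 = %.2f -> roughly %.0f%% higher clocks per core\n",
                oc_2500k_ghz / xeon_all_core_ghz,
                (oc_2500k_ghz / xeon_all_core_ghz - 1.0) * 100.0);
}
```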
https://www.techpowerup.com/review/intel-core-i9-14900k/17.html
Of course, the difference in practice will be smaller, since you're more often GPU-limited, whereas this test is done on an RTX 4090 at 720p precisely to show the difference in CPU performance.
You can indeed guesstimate that modern chips are between two and three times (or even more?) the performance of early-2010s CPUs now, and it's a guesstimate because many of them don't even show up; they're too old and slow to be tested in modern settings. But Haswell (Intel's 4th-generation CPUs) is often compared to the original Ryzen CPUs, and you can presume it sits somewhere between, what, 25% and 33% behind, based on where the Ryzen 5 2600 is? And it can get worse for those older consumer DDR3-era CPUs in modern games where the core count will limit them. You truly are looking at two to four times the uplift.
People are still acting like modern CPUs are barely twice as fast as their old Sandy Bridge or Ivy Bridge CPUs just because the stuff that immediately followed Sandy Bridge progressed so slowly. Well, that incremental uplift is small but compounds eventually, and then, after it adds up, gains also got bigger in the past five years compared to the five before them. So yes, we're between two and three (or more) times faster now than many of those mid- and late-DDR3-era platforms. We're really at a point where anything from the "quad-core era", which is Skylake/Kaby Lake and prior, is slow regardless of how many cores it has. 8th-generation stuff is about to be the floor with Windows 11 in a little over a year. LGA 1200 is about to be two platforms back. I don't know what it is with people on early-2010s platforms kidding themselves about new stuff barely being faster. If you're fine with the performance of those early-2010s platforms, that's one thing; I used mine until 2020 (goodness, that was already four years ago). But they are, in fact, many times slower than current offerings now.
Of course I am; why do you think I'm writing here? Just wondering what is wrong with some of the games :)
Why are you giving me a link to an i9 14900K review? To show what I can gain if I buy this year's CPU, so that I no longer hesitate to go and spend my money?
You see, the whole methodology is wrong IMO. We should compare to Intel 7xxx-8xxx and Ryzen 1-2, because I planned this upgrade in late 2017, when in fact there were 4 options for young students willing to make an upgrade (ordered from the worst and cheapest to the best and most expensive):
1. Buy a used Threadripper CPU
2. Buy a used Xeon 13xx/16xx/24xx CPU
3. Buy a new Ryzen CPU from a shop
4. Buy a new Intel Core i3/i5 7xxx from a shop
So, in many tests Threadrippers performed worse than Xeons, but of course that depended on the exact CPU models. And 26xx Xeons were not even suggested, because they were more expensive, and did not allow overclocking (which was considered important).
But when I actually bought my 2690v2, they were already cheap enough. Also, I didn't mind the lack of overclocking much, because I thought the high core count would be enough (and the cores in the 2nd generation were not as slow as in the 3rd-4th generation Xeons; also, the frequencies were close to those of my old desktop Sandy Bridge CPU).
That's the logic behind that choice. I chose the second-best CPU in the series (only one model was better, and it had a 12C/24T configuration).
Great! Mine is Ivy Bridge, so it's 3rd generation - let's compare it to the original Ryzens. If it loses by a little, I'm okay with that; at least it was cheap for me, and yes, it is a used part with no warranty. But I can hardly believe in a 25-33% loss, and how can you claim two or three times? That's not possible even in single-core mode.