14% behind the Ryzen 1800X in single-core and nearly equal in multi-core. The 1800X is from exactly 2017, and the game in question ("Get Even"), where I experienced lag when sprinting, also came out in late 2017. So no, that won't do the trick :)
The problem, IMHO, lies somewhere else (missing support for AVX2? A narrow video memory bus? FSB overclocking?)
You tell me? You're the one making the thread remarking about insufficient performance.
Why do you think I would defer to results showing the difference in CPU performance in games, when you asked for exactly that?
The performance of a CPU is an objective metric that can be measured, and it doesn't bend to subjective whims that reorder results based on some arbitrary factor like year of release or cost.
The bold is my emphasis; you were wrong on that part.
Extra cores help only if you're feeding them work that cuts down the time needed to complete the task. Otherwise, they just... exist and don't do anything. Games often aren't highly parallel. Some are starting to push more cores/threads now, but you can't substitute core count for core speed. It never works that way unless your task scales infinitely and linearly with core count. If you thought you could do that to resolve any performance concern you might ever have in any game, you're very mistaken.
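To put a number on the "scales with core count" point: Amdahl's law caps the speedup by the serial fraction of the work. A minimal sketch below, where the 30% parallel fraction is purely an illustrative assumption, not a measurement of any particular game:

```c
#include <stdio.h>

/* Amdahl's law: speedup from N cores when only a fraction P of the
 * work can run in parallel; the serial remainder (1 - P) caps the gain. */
static double amdahl_speedup(double parallel_fraction, int cores)
{
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores);
}

int main(void)
{
    const double p = 0.30; /* hypothetical: 30% of frame time parallelizes */

    for (int cores = 1; cores <= 16; cores *= 2)
        printf("%2d cores -> %.2fx speedup\n", cores, amdahl_speedup(p, cores));

    /* Even with infinite cores the limit is 1 / (1 - 0.30) ~= 1.43x,
     * which is why core speed matters more than core count here. */
    return 0;
}
```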
Is this the part where you lean on a synthetic benchmark like CPU-Z or Passmark and defer to a flawed single-core methodology as the be-all, end-all to say "look, the older stuff is not that much slower"?
As for the 25% to 33%, you're misattributing what I was referencing. I was guesstimating where Haswell/Ryzen 1000 series might roughly (two key words) land using extrapolation. But regardless of how close or far from the mark we are, we at least know it's slower than the slowest thing measured there. Look at the first chart in the link I provided. Within the suite of games tested, the Ryzen 5 2600 has 39.3% of the performance of the Core i9 14900K being reviewed, and the Ryzen 7 7800X3D is another 5% faster than that (I overlooked that the scaling was done in reference to the Core i9 14900K itself and not the fastest chip, the Ryzen 7 7800X3D, so if anything I was slightly overestimating where the slower stuff might land!). So this places the Ryzen 5 2600 at just about "a third of the performance" in my mind. First-generation Ryzen is slower than second-generation (big surprise, right?). Consumer Haswell roughly equals it, give or take? And that's presuming its quad-core limitation doesn't hold it back. Ivy Bridge is again a bit slower than Haswell (another big surprise, right?). See where we're going with this?
It doesn't matter if my guesstimate was exact; I gave a range for a reason. 25%? 27%? 30%? 33%? Either way, same thing. You're kidding yourself if you think current CPUs aren't typically many times faster in games (obviously you'll more often be GPU limited when you're not using a CPU so old and slow that it bottlenecks everything this severely, but the idea is to show the difference that can occur when you are CPU limited). Obviously the exact difference also moves if you change the games that comprise the test suite, but it's not going to move enough for the averaged picture to show early-2010s hardware matching current levels of performance.
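If anyone wants to check the arithmetic, here's the rebasing I described, with the 39.3% and 5% figures taken from the linked chart and everything else just division:

```c
#include <stdio.h>

int main(void)
{
    /* Figures quoted from the linked review's first chart. */
    const double r5_2600_vs_14900k = 0.393; /* Ryzen 5 2600 at 39.3% of the 14900K */
    const double x3d_vs_14900k     = 1.05;  /* 7800X3D roughly 5% faster than the 14900K */

    /* Rebase the 2600 against the fastest chip in the chart instead. */
    double r5_2600_vs_x3d = r5_2600_vs_14900k / x3d_vs_14900k;
    printf("Ryzen 5 2600 vs 7800X3D: %.1f%%\n", r5_2600_vs_x3d * 100.0);

    /* Prints ~37.4%, i.e. "roughly a third", and first-gen Ryzen,
     * Haswell, and Ivy Bridge land somewhere below that. */
    return 0;
}
```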
Edit: I just saw the new page and you're linking to Userbenchmark. I'm done as well. Many of us have given you information and reasoning to figure this out, but you seem intent on burying your head in the sand when the facts don't align with an already established conclusion, which is not how you reach a conclusion. You can lead a horse to water, but you can't force it to drink.
If you want better performance, simply upgrade. You don't even have to go with anything high end; your CPU is so old that any budget CPU made today, or even 5 years ago, would be a big upgrade.
This thread is done. There's no point in arguing with you anymore, OP. Not trying to sound rude, but you're wasting everyone's time.
I wouldn't count on Userbenchmark as a source for comparing pieces of hardware.
But overall, just because you have 10 cores/20 threads doesn't mean your CPU is automatically good. First-gen Ryzen has access to AVX2, brings a considerable increase in IPC, and doesn't suffer as much from latency, unlike the ring-based design of your current CPU.
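If you want to verify what your own chip exposes, a minimal sketch using the GCC/Clang x86 builtins (feature names as documented for __builtin_cpu_supports):

```c
#include <stdio.h>

int main(void)
{
    /* GCC/Clang builtins that query CPUID at runtime (x86 targets only). */
    __builtin_cpu_init();
    printf("AVX:  %s\n", __builtin_cpu_supports("avx")  ? "yes" : "no");
    printf("AVX2: %s\n", __builtin_cpu_supports("avx2") ? "yes" : "no");

    /* An Ivy Bridge Xeon like the E5-2690 v2 reports AVX but not AVX2. */
    return 0;
}
```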
Another aspect you have to factor in is security mitigations. Spectre and Meltdown mitigations hurt far more on old architectures than on newer ones.
Core count isn't the whole story. Modern 6c/12t CPUs will crush that Ivy Bridge Xeon any day, especially in gaming.
And like Chase said, it's missing certain important instructions like AVX2, which can create issues when running programs that use those sets.
More cores do not always equate to more performance; there are plenty of newer CPUs with fewer cores that can absolutely destroy that Xeon, even in multi-core performance, thanks to higher IPC, frequency, etc., and they draw less power while doing it.
Mitigations like Sinon mentioned can impact performance as well. Older CPUs were hit harder by the Spectre and Meltdown patches, and as a result some people disable them at their own risk.
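On Linux you can see exactly which mitigations the kernel applied by reading the sysfs vulnerability files. A minimal sketch; the three names below are a sample, the directory holds more entries:

```c
#include <stdio.h>

/* Prints the kernel's mitigation status from
 * /sys/devices/system/cpu/vulnerabilities/ (Linux only). */
static void show(const char *name)
{
    char path[256], line[256];
    snprintf(path, sizeof path,
             "/sys/devices/system/cpu/vulnerabilities/%s", name);
    FILE *f = fopen(path, "r");
    if (f) {
        if (fgets(line, sizeof line, f))
            printf("%-10s %s", name, line);
        fclose(f);
    }
}

int main(void)
{
    show("meltdown");
    show("spectre_v1");
    show("spectre_v2");
    return 0;
}
```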
I especially wouldn't use Userbenchmark to compare anything, least of all Intel vs AMD CPUs, because the site owner has a raging hard-on for Intel and NVIDIA and rewrites his entire algorithm every time AMD releases a new product to make their benchmarks look far worse than they should. Over the years this has drastically hurt the accuracy of benchmarks for everything on that site. It's common knowledge that the site is a joke, and it's frequently banned on tech forums, even sizeable communities like the Intel subreddit.
You're correct: AVX2 was introduced with the Haswell i3/i5/i7 in 2013, but not on Xeons like the E5-2690 v2 (an Ivy Bridge part), and not on Celerons and Pentiums (the latter two didn't get AVX2 until Tiger Lake in 2020).
Basically the same architecture as 3rd-gen Intel Core.
It also lacks TSX, a useful instruction set extension for scalable multi-threaded workloads that reduces the need for locks and other synchronization mechanisms.
https://www.anandtech.com/show/6290/making-sense-of-intel-haswell-transactional-synchronization-extensions
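For the curious, the programming model looks roughly like this: a hedged sketch of RTM lock elision with the immintrin.h intrinsics (build with gcc -mrtm; the runtime check keeps it from faulting on CPUs without TSX, such as this Xeon):

```c
#include <immintrin.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int lock_word; /* 0 = free, 1 = held */
static long counter;

static void increment(void)
{
    if (__builtin_cpu_supports("rtm") && _xbegin() == _XBEGIN_STARTED) {
        /* Read the lock word inside the transaction so a real lock
         * holder conflicts with us and aborts the transaction. */
        if (atomic_load_explicit(&lock_word, memory_order_relaxed) != 0)
            _xabort(0xff);
        counter++;          /* runs transactionally, no lock taken */
        _xend();
        return;
    }

    /* Fallback path: take the lock for real (also aborts elided peers). */
    int expected = 0;
    while (!atomic_compare_exchange_weak(&lock_word, &expected, 1))
        expected = 0;
    counter++;
    atomic_store(&lock_word, 0);
}

int main(void)
{
    increment();
    printf("counter = %ld\n", counter);
    return 0;
}
```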
An old and slow CPU is old and slow.
Games like per-core performance.
High FPS takes more CPU performance and more GPU performance.
Yes, I agree that the list of games makes a difference, and CS2 is not a heavy game for a CPU.
And why the heck is a CPU from spring 2013 considered *that old* at the end of 2017? Can you finally answer that simple question?
So yes, I'd be better off buying an i7 7500/7700 - but remember what I wrote before about the i7 7700U sometimes being slower in work tasks? So it's not that straightforward and simple here.
Because of higher core frequencies, some new instructions, or what? I tried a Core i5 11400F for 2 months, paired with an RTX 3060 12 GB. The games were smooth, but I didn't notice it being super fast in other tasks like video encoding, moving files, starting and stopping services and such. Though it also didn't feel slow.
https://ark.intel.com/content/www/us/en/ark/products/75279/intel-xeon-processor-e5-2690-v2-25m-cache-3-00-ghz.html
The Socket 2011 Xeons had slower cores, but more of them.