All a multi-core synthetic test does is run a specially coded piece of software and return a score based on how many iterations it completes in a given time span. That's it. More cores will boost this score; faster cores will boost this score.
That's only reflective of tasks that parallelize "infinitely" (and not even all of those).
Not all software works on that basis. Real-time software, like games, doesn't.
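To make that concrete, here's a toy sketch (in Python, purely illustrative, not how Cinebench or Geekbench are actually implemented) of what such a test boils down to: each worker just counts how many loops it finishes in a fixed time window, and the "score" is the total.

```python
# Toy multi-core synthetic benchmark: count loop iterations per time span.
# More cores or faster cores both raise the number; nothing else is measured.
import time
from multiprocessing import Pool, cpu_count

DURATION = 2.0  # seconds each worker spends counting

def count_iterations(_):
    end = time.perf_counter() + DURATION
    n = 0
    while time.perf_counter() < end:
        n += 1
    return n

if __name__ == "__main__":
    with Pool(cpu_count()) as pool:
        per_core = pool.map(count_iterations, range(cpu_count()))
    print("single-core score:", max(per_core))
    print("multi-core score: ", sum(per_core))
```

Notice that the multi-core number scales almost perfectly with core count, which is exactly the "infinitely parallel" assumption most real software (games especially) doesn't satisfy.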
This was answered by me and a few others. Games care about per-core performance more than core count.
So "bad" is relative, but yes, that score difference means nothing here. The single core score would be a bit more reflective (but even that can't be taken in a vacuum because core count does matter if it's too low).
You can run Geekbench too if you want. If it's in line with the Cinebench findings, all the better. I guess the point is not to take anything in a vacuum, but to correlate it with other findings and observations.
The i3-10100 wasn't that fast; AMD already had faster chips with more cores, so it was only relevant if you were fine with still buying a quad-core. Most people weren't, since AMD's offerings weren't that expensive.
Because people are still buying it due to its high price/performance value. It's actually worth it, unlike these old Xeons, as evidenced by the fact that you're here because of performance issues with a 10-year-old Xeon that wasn't even intended for gaming to begin with. When the 5600 was released is irrelevant.
A multi-core score isn't going to reflect relative performance except for software that loads the CPU the same exact way the synthetic did.
Even "single core" scores fall under this, and this is becoming a bigger inaccuracy in the last few years due to the v-cache yet there's still a lot of people that want to think certain ones are an infallible measure of all types of single core performance. For example, some of them might test the FPU or integer hard but then they fit into a tiny cache. Great, so what happens when something is cache (memory) bound? That score just lost its meaning as a reflective measure of performance for those types of scenarios.
It doesn't mean synthetics are useless, very far from it. It just means you should take a hefty grain of salt with anything that tries to boil performance down to a score or two. You trade off accuracy the more you boil it down.
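To illustrate why one number can't cover both cases, here's a toy Python sketch (assuming NumPy is available; the array sizes are illustrative) that runs the same summation when the data fits in cache versus when it has to stream from memory:

```python
# Same operation (summing floats), very different throughput depending on
# whether the working set fits in cache or has to come from RAM.
import time
import numpy as np

def throughput(n_elements, repeats):
    data = np.random.rand(n_elements)
    start = time.perf_counter()
    for _ in range(repeats):
        data.sum()
    elapsed = time.perf_counter() - start
    return n_elements * repeats / elapsed  # elements summed per second

small = throughput(32_000, 2_000)   # ~256 KB working set, cache-resident
large = throughput(64_000_000, 1)   # ~512 MB working set, memory bound
print(f"cache-resident: {small / 1e9:.2f} G elements/s")
print(f"memory-bound:   {large / 1e9:.2f} G elements/s")
```

A synthetic that only ever runs in the cache-resident regime will rank CPUs differently than one that spills to memory, which is why a single "single-core" score can mislead.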
No, it is relevant. This model just did not exist in autumn of 2018, when I first had the idea to try out a used CPU like a Xeon or Threadripper, based on one of those benchmarks from social network groups. So let's take the CPUs that were on the market that year :)
So that was five years ago, pretty much the point at which many users start considering a hardware change, and you should too, and not just to a V3 or V4 Xeon, because that's a waste of time and money. Whatever performance you can get out of those, you can get considerably more performance with considerably less power from a Ryzen 9, and still noticeably less power (saving 100W or more at max load) with current i9 processors while getting 24 cores.
These Xeons may be cheap to get, but they're not cheap to run: your E5-2690 v2 uses up to around 420 watts at max load and around 80 watts at idle. Some V3s and V4s can use over 530 watts at max load and over 100 watts at idle; they're using as much power as an RTX 4090, for crying out loud. There were already better options back in 2018, like the 2700X, which performs better in games and comes pretty close in multi-core (not slower by enough that anyone would care, considering the power consumption difference), while also averaging around 50~60W in gaming and using less than a third of the power your Xeon draws at max load.
https://www.cpubenchmark.net/compare/2780vs3238/Intel-Xeon-E5-2690-v4-vs-AMD-Ryzen-7-2700X
https://www.cpubenchmark.net/compare/2780vs3824/Intel-Xeon-E5-2690-v4-vs-Intel-i9-10850K
Even a used 10th-gen i9 would be a sizeable improvement without reducing core count, plus a massive reduction in overall power consumption; even at full load it only uses about half as much energy as the Xeon. There's more to cost than just the price tag: you're getting less-than-ideal performance while paying up the ass in running cost, and if you're not the one paying dollars for that, someone else is.
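As a rough back-of-the-envelope Python sketch of that running cost: the Xeon figures (~426W load, ~83W idle) are the ones quoted in this thread, while the electricity rate, daily usage hours, and the i9's numbers are assumptions for illustration only. Plug in your own values.

```python
# Rough yearly electricity cost comparison. All inputs below are examples.
RATE = 0.15        # assumed electricity price in $/kWh
HOURS_LOAD = 4     # assumed hours/day under heavy load
HOURS_IDLE = 4     # assumed hours/day sitting idle

def yearly_cost(load_w, idle_w):
    kwh_per_day = (load_w * HOURS_LOAD + idle_w * HOURS_IDLE) / 1000
    return kwh_per_day * 365 * RATE

xeon = yearly_cost(426, 83)   # figures quoted in this thread for the E5-2690 v2
i9 = yearly_cost(210, 45)     # assumed rough figures (~half the Xeon at load)
print(f"Xeon: ${xeon:.0f}/year")
print(f"i9:   ${i9:.0f}/year")
print(f"Difference: ${xeon - i9:.0f}/year")
```

With those example numbers the gap works out to tens of dollars per year, which adds up over the life of a "cheap" used server chip.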
Maybe, but they are more expensive too.
https://www.tomshardware.com/reviews/intel-xeon-e5-2600-v4-broadwell-ep,4514-8.html
Compared to the newer Broadwell-EP Xeons it was more efficient but it’s nowhere near as efficient as modern decacore CPUs.
He literally linked you to professional data on it; here, I'll link it a second time:
https://www.tomshardware.com/reviews/intel-xeon-e5-2600-v4-broadwell-ep,4514-8.html
Scroll a third of the way down, check the bar graph for your clearly listed E5-2690 v2, and witness for yourself the professionally validated power draw of the chip under actual load and at idle, which for the record again is:
83 W idle and 426 W load.
Please quit wasting people's time here arguing that your chip is the best when you have been blatantly ignoring publicly validated information for multiple posts and arguing about it.