Benchmarks are a useful tool if you know how to use them. They're not a perfect analog for gaming performance but are useful for making comparisons between different parts in specific tasks.
9900k @ 4.4 GHz - air, fans idle, max temps 62C
1070 @ +50 core / +400 mem
https://www.3dmark.com/fs/19904583
3DMark Score: 17724
Graphics Score: 19461
Physics Score: 22980
Combined Score: 8809
_____________________________________________
Laptop run:
8750h stock, air
2070 Max-Q stock
https://www.3dmark.com/3dm/37806452?
3DMark Score: 16060
Graphics Score: 18200
Physics Score: 16205
Combined Score: 8474
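For a rough sense of the gap, here's a quick sketch of the comparison arithmetic using the scores from the two runs above. The scores are copied straight from the posts; the percentages are just computed deltas, not anything 3DMark itself reports:

```python
# Rough comparison of the desktop and laptop runs posted above.
# Scores copied from the posts; the math is just percent deltas.
desktop = {"3DMark": 17724, "Graphics": 19461, "Physics": 22980, "Combined": 8809}
laptop = {"3DMark": 16060, "Graphics": 18200, "Physics": 16205, "Combined": 8474}

for test, d_score in desktop.items():
    delta = (d_score / laptop[test] - 1) * 100
    print(f"{test}: desktop ahead by {delta:+.1f}%")
```

The big spread is in the Physics (CPU) sub-score; the overall and graphics scores are much closer.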
No, because numerous games are not even designed to leverage hardware correctly. Look at the many games where the internal game programming is a barrier to the actual performance. Synthetics are optimised to push something to its absolute limits, but realistically some games are memory-bandwidth bound, some are CPU bound (no, this does not mean maximum CPU utilisation - "ARMA 3" is CPU bound yet sucks at overall resource use), and some are GPU bound, which is actually the "ideal" scenario when gaming. Being GPU bound means higher-end graphics hardware actually pays off - but only in that ideal scenario.
Performance gains across CPU generations are disturbingly poor. Take any Intel CPU (2015 - 2019), lock its core multiplier to 4.0 GHz all-core only, then do a side-by-side. You'd be surprised at the results.
I consider boost clocks to be effectively cheating.
Clock-for-clock comparisons give you real data on which CPU has the better raw IPC.
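To make "clock for clock" concrete: at a locked all-core frequency, relative IPC is approximated by the score ratio from a single fixed benchmark, or by score-per-GHz if the clocks differ. A minimal sketch, where the chip names and scores are invented purely for illustration:

```python
# Clock-for-clock IPC comparison sketch. All numbers here are
# hypothetical, NOT real benchmark results for any actual CPU.
runs = {
    "CPU A (2015)": {"score": 1000, "ghz": 4.0},
    "CPU B (2019)": {"score": 1080, "ghz": 4.0},
}

baseline = runs["CPU A (2015)"]["score"] / runs["CPU A (2015)"]["ghz"]
for name, r in runs.items():
    per_ghz = r["score"] / r["ghz"]  # score per GHz ~ relative IPC
    print(f"{name}: {per_ghz:.0f} pts/GHz, {per_ghz / baseline:.0%} of baseline")
```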
Not sure what your point is here. Afaik no one is pretending all games are coded equally. That's actually one of the reasons I value benchmarking software: BECAUSE games are frequently inefficient with the way they handle the resources available to them, an objective standard is useful to determine a rig's performance potential.
Doing that would demonstrate how much Intel's IPC has grown (or hasn't). But Intel's strides with their more recent chips have been more with frequency than with IPC anyway. If you're trying to argue that Intel's core development has stagnated a bit, I don't think anyone would disagree. But we're far afield from talking about benchmarks at this point.
For an academic discussion of IPC, sure, it's interesting. But unless you're buying a 9900k with the intention of down-clocking it, there's no point in testing a CPU at a speed it can easily exceed in almost every high-load scenario. Such a test isn't useful in determining effective speed (running benchmarks) or practical speed (common usage scenarios that benchmarks may not accurately simulate).
Do a side by side how? With a benchmark? So you can put an objective number to the performance and compare side by side?
What would locking the core multiplier show, anyway? Unless you wanted to specifically look at only a single part of performance. And what makes boost clocks cheating? Do you mean for regular use or for benchmarking purposes?
So, what would we test it in? Games? Production workloads? How exactly should we compare this performance?
The only thing that locking the core multiplier and disabling boost would do is show IPC, which is a valid reason, but the only way to test that is SYNTHETIC BENCHMARKS.
And I don't see how 'boosting' (in any form) is 'cheating.'
If that's the case, then ALL GPUs these days are 'fake performers', because they all boost.
Productivity testing would mean using 3D rendering software, CPU-bound fluid simulation, AutoCAD, or similar workloads.
Trying to determine real-world performance from synthetics tells you little about how varied workloads behave when applied to a system.
Yes, I believe that clock-boosting is cheating, and in a way it actually is. By measuring the overall load on the cores, the CPU can adjust the frequency of one or more cores so that the predominant thread - the one taking up residency on the "main" core, as it is known - runs at a higher "boost" clock, provided the CPU is within specific power and thermal limits.
The same thing can be found on GPUs: they squeeze out just that bit more performance by boosting, depending on a number of data characteristics.
It "cheats" physics by doing this. If we instead saw a baseline all-core 5 GHz from an i9, for example, it would get very hot and possibly thermal throttle or, worst case, shut down the computer.
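For illustration only, here's a toy model of the kind of decision a boost governor makes. The base/boost clocks, limits, and per-core step are invented numbers, and real firmware (turbo ratio tables, current limits, per-core sensors) is far more complex:

```python
# Toy boost-governor model. All numbers are invented; this is NOT how
# any real CPU firmware works, just the general shape of the tradeoff.
BASE_GHZ, MAX_BOOST_GHZ = 3.6, 5.0
POWER_LIMIT_W, TEMP_LIMIT_C = 95.0, 100.0

def pick_clock(active_cores: int, package_power_w: float, temp_c: float) -> float:
    """Choose a clock for the busiest core given current headroom."""
    if package_power_w >= POWER_LIMIT_W or temp_c >= TEMP_LIMIT_C:
        return BASE_GHZ  # out of headroom: fall back to the base clock
    # Fewer active cores leaves more power budget per core, so more boost.
    ceiling = MAX_BOOST_GHZ - 0.1 * max(0, active_cores - 1)
    return max(BASE_GHZ, ceiling)

print(pick_clock(1, 60.0, 70.0))  # lightly threaded: near max boost
print(pick_clock(8, 60.0, 70.0))  # all-core load: reduced boost
print(pick_clock(8, 96.0, 70.0))  # power-limited: base clock
```

This is also why a sustained all-core overclock behaves so differently from stock boost: it removes the governor's ability to trade cores against frequency.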
Hi Sawdust3d,
If I get a chance I'll do some OC on the GPU and see how far I can push it, then will post results.
The basic version is free but only comes with Time Spy. And you won't beat us, because the best score for your config hasn't even reached 5000 in standard Time Spy, which is nothing compared to the top scores of over 30000.
The RX 470 and GTX 970 are fairly close - curious to see how close. I expect the GTX 970 to score around 4400 or so.