nonsense. you can totally calculate a bottleneck down to an average (percentage);
A CPU/GPU bottleneck is generally not calculated with a single, precise formula. Instead, it is assessed by monitoring the utilization percentages of both the CPU and the GPU during demanding tasks. A significant disparity between the two, with the GPU noticeably underutilized while the CPU is working hard, indicates a CPU bottleneck; a GPU pinned near 100% while the CPU still has headroom indicates a GPU bottleneck. In short, during graphics-intensive applications the saturated component is the bottleneck, and the underutilized one is being held back by it.
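To make that heuristic concrete, here is a minimal Python sketch of the utilization-disparity idea. The 10-point gap threshold and the simple averaging are arbitrary assumptions for illustration, not what any real calculator implements:

```python
# Minimal sketch of the utilization-disparity heuristic described above.
# The 10-point threshold and the averaging are illustrative assumptions,
# not what any real bottleneck calculator uses.

def bottleneck_estimate(samples):
    """samples: list of (cpu_util, gpu_util) pairs in percent,
    logged while a demanding scene is running."""
    avg_cpu = sum(c for c, _ in samples) / len(samples)
    avg_gpu = sum(g for _, g in samples) / len(samples)
    gap = avg_cpu - avg_gpu
    # GPU clearly underutilized relative to the CPU -> the CPU is the limiter.
    if gap > 10:
        return f"CPU bottleneck, roughly {gap:.0f}% disparity"
    # GPU pegged while the CPU has headroom -> the GPU is the limiter.
    if gap < -10:
        return f"GPU bottleneck, roughly {-gap:.0f}% disparity"
    return "fairly balanced"

# Made-up numbers: the GPU sits near 99% while the CPU idles around 50%.
print(bottleneck_estimate([(45, 99), (50, 98), (48, 99)]))  # -> GPU bottleneck, ~51% disparity
```

Keep in mind that an overall CPU percentage averaged across all cores is a crude signal; a game can be CPU-limited on a single thread while total CPU usage stays low.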
a 20% bottleneck out of the box is already a deal breaker for me.
A single game with the exact same settings can be CPU bottlenecked in, let's say, a town and then GPU bottlenecked when something explodes. It completely goes out the window when you change settings and games.
300FPS average in GTA V...
At the same time the engine starts to stutter somewhere around 180 FPS.
And do people still use GTA V as a reference? wtf?
And it's crazy that they have CS:GO data, which is so stupid. I can literally tell you that the game data on this site is nothing but theoretical BS calculations.
Should we also comment on how, in synthetic benchmarks, an 8P+16E processor (basically the equivalent of a 14- or 16-P-core Xeon) is not the same thing as a chip with only 8 P-cores? Or how they praise the single-core turbo boost?
Also note that it rates the finer 4 nm lithography as worse than 10 nm. wtf?
And those 10% and 20% bottlenecks in general tasks? Lol, as if it can't even run Word or something.
The logic behind these numbers is garbage; it can very easily be shown that the configurator is unreliable.
Don't tell me the next source is going to be from "Test Games" or some other random joke.
bottleneck calculators act as a guide, just like benchmarks do. they are based on users who performed an assessment and submitted their result data to a website. while they are not absolute, they give us a general idea of what is deemed a baseline. ive used that site for many years and can vouch that it's quite accurate as well as useful.
the 9800X3D typically has a 20% bottleneck out of the box, especially when paired with a 4090, and that's considering a 2% margin of error give or take. anyone can test and repeat this themselves.
a bottleneck isn't dictated by a game or a setting. if you read my previous comment again, you will better understand what a bottleneck is and how to assess one.
A 9800X3D may show as being heavily bottlenecked by most GPUs while at the same time still reducing stutter and improving 1% lows.
So you can be GPU limited on average FPS and CPU limited on 1% lows at the exact same time.
These calculators are useless at best and misleading and harmful at worst.
these so called "bottleneck calculators" are not actually calculating anything but rather fetching result data from a database of user submitted results like any benchmarking site does to be used as a guide to help people as explained to you previously.
anyone can pair a 9800X3D with a 4090 in a test station using whatever motherboard and memory they please and assess that there is indeed a 19-21% bottleneck out of the box. this is a fact.
your opinion of whether these sites are accurate or not isn't much of a concern to the experts who will continue to use them. ;)
This data is extremely misleading if it ignores 1% lows or doesn't tell you which games and settings it uses for the calculation.
Almost all of us are CPU bottlenecked in some games in terms of 1% lows and GPU bottlenecked in terms of average FPS in single-player games, both at the same time, even on a 9800X3D or a tuned 14900KS.
A 30% bottleneck figure can give the wrong impression that one component is too strong for the other.
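To make the average-FPS-versus-1%-low point concrete, here is a small Python sketch that computes both metrics from a list of per-frame times. The frame times are invented purely for illustration, and note that sites differ on how they define "1% low" (some take the 1st percentile of instantaneous FPS, others average the slowest 1% of frames, as done here):

```python
# Average FPS vs. 1% low computed from per-frame times in milliseconds.
# The frame times below are invented purely for illustration.
frame_times_ms = [6.0] * 990 + [25.0] * 10   # mostly fast frames plus a few long stutters

def avg_fps(times_ms):
    return 1000.0 / (sum(times_ms) / len(times_ms))

def one_percent_low_fps(times_ms):
    # One common convention: average the slowest 1% of frames, then convert to FPS.
    worst = sorted(times_ms, reverse=True)[:max(1, len(times_ms) // 100)]
    return 1000.0 / (sum(worst) / len(worst))

print(f"average: {avg_fps(frame_times_ms):.0f} FPS")             # ~162 FPS, looks GPU-limited
print(f"1% low:  {one_percent_low_fps(frame_times_ms):.0f} FPS") # 40 FPS, dragged down by stutters
```

The average looks healthy while the 1% low tells a very different story, which is exactly what a single "bottleneck percentage" hides.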
https://www.youtube.com/watch?v=6QGnTlGUFn0
You can also use Intel's PresentMon utility and its GPU-wait metric to show how much time the GPU spends waiting for the CPU.
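As a rough idea of how you might crunch a PresentMon capture afterwards, here is a short Python sketch that sums GPU-wait time as a share of total frame time. The column names differ between PresentMon versions, so "FrameTime" and "GPUWait" below are assumptions on my part; check the header of your own CSV and adjust:

```python
# Rough sketch: share of frame time the GPU spends waiting on the CPU,
# computed from a PresentMon CSV capture. "FrameTime" and "GPUWait"
# (both in ms) are assumed column names -- verify them against your CSV header.
import csv

def gpu_wait_share(csv_path, frame_col="FrameTime", wait_col="GPUWait"):
    total_frame = total_wait = 0.0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            try:
                total_frame += float(row[frame_col])
                total_wait += float(row[wait_col])
            except (KeyError, ValueError):
                continue  # skip rows missing or garbling these columns
    return total_wait / total_frame if total_frame else 0.0

# Usage (hypothetical file name):
# print(f"GPU waited on the CPU for {gpu_wait_share('capture.csv'):.0%} of frame time")
```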
this video reminded me of another thing: the amount of frame dipping these 9800X3Ds are notorious for is another reason why I would avoid it and go with a 14900K. ;)
thank you for sharing!
It can also be RAM, the drive, the OS, background tasks, or other things.
It could also be the GPU itself, if it does not have enough VRAM, cache, or VRAM bandwidth.
Frankly, this old wives' tale about AMD frame dropping started with the 5800X3D. It doesn't pass the sniff test these days. Both Intel and AMD have pros and cons when it comes to games and frame times. Like it says in the video, "there is always going to be a bottleneck somewhere". A friend of mine has a 14900K/RTX 4090 combo and he complains that the frame times should be better in many titles.
Go with any processor you want, but don't bend facts to try to persuade others that a 14900K is the better all-round gaming CPU, because that one was put to bed some time ago.
Hopefully Intel and AMD can keep producing a whole range of excellent gaming CPU models, and I would like them to use less juice than some of the more recent chips.
I have to add that I struggle with Intel's Arrow Lake model numbers. It also appears that no one is talking about them, and even the Intel die-hards agree that they are best avoided. We could all benefit in the long run from a more capable Intel.
We don't need AMD becoming the Nvidia of X86 CPUs.
Game engines can also be a bottleneck: some, like UE, are well known for stuttering issues, and recent Hardware Unboxed GPU testing shows game engines appearing to hold hardware back.