The larger gap between the reported and the effective clock in "method 1" stems solely from the higher requested clock rate.
Scenario 1: Requested 2190@1000, reported 2160@1000, effective ~2080@1000
Scenario 2: Requested 2100@1000, reported 2100@1000, effective ~2080@1000
In both scenarios the "requested" clock frequencies are not achievable at the given voltage.
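To make that concrete, here is a minimal sketch (Python) using the numbers quoted above; the scenario labels and variable names are just illustrative:

# Compare reported vs. effective clock in both scenarios (all values MHz @ 1000 mV).
scenarios = {
    "method 1": {"requested": 2190, "reported": 2160, "effective": 2080},
    "method 2": {"requested": 2100, "reported": 2100, "effective": 2080},
}

for name, clocks in scenarios.items():
    gap = clocks["reported"] - clocks["effective"]
    print(f"{name}: reported-effective gap = {gap} MHz, effective = {clocks['effective']} MHz")

# Both land at ~2080 MHz effective: the silicon cannot sustain more at 1000 mV,
# so the extra requested headroom only widens the reported-to-effective gap.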
Also:
He LOCKS the GPU to a set frequency and voltage during the benchmark. The GPU does not adapt its clock rate and voltage across different workloads.
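As a rough illustration of that kind of lock, here is a minimal sketch (Python with pynvml, NVIDIA only, needs admin/root). It is not his exact setup: it pins the core clock only, while the voltage still comes from whatever V/F curve point is applied (e.g. via the Afterburner curve editor).

import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

# Lock the graphics clock to a single value (min == max), e.g. 2100 MHz.
pynvml.nvmlDeviceSetGpuLockedClocks(gpu, 2100, 2100)

# The driver now reports the locked clock; the effective clock can still sag
# below it if the voltage is not sufficient for that frequency.
reported = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
print(f"reported graphics clock: {reported} MHz")

# ... run the benchmark here ...

pynvml.nvmlDeviceResetGpuLockedClocks(gpu)  # restore normal clock management
pynvml.nvmlShutdown()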
Remark: Don't flatten out the curve.