GPU: Effective Clock vs Core Clock
Hey guys

I just stumbled upon this video.
He compares the core clock generally shown in Afterburner, Nvidia overlays and so on with the effective clock shown in HWiNFO.

I never realised there even was an effective clock, but apparently the generally shown clocks are not accurate. This would also explain why past a certain clock you see little or no in-game performance difference.
However, his method to adjust for the effective clock is to just use the Afterburner core clock slider instead of the voltage/frequency curve window, which I and probably a lot of others use to OC their GPU.
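For context: the effective clock HWiNFO shows is essentially a time-weighted average of what the chip actually ran at over the polling window, with gated or stretched moments included, while the Afterburner "core clock" is the requested boost state. Below is a minimal sketch of that idea in Python; the sample numbers are invented for illustration and this is not how HWiNFO actually measures it.

# Conceptual sketch only: "effective clock" as a duty-cycle-weighted average.
# The sample numbers below are invented for illustration; HWiNFO derives its
# value from the hardware/driver, not by polling like this.

samples = [
    # (duration_ms, instantaneous_clock_mhz)
    (8.0, 2160),  # running at the reported boost clock
    (0.3, 0),     # clock gated for a moment (idle bubble)
    (1.7, 2000),  # clock pulled down under load / voltage droop
]

def effective_clock(samples):
    """Time-weighted average clock over the window, gated periods included."""
    total_time = sum(t for t, _ in samples)
    return sum(t * mhz for t, mhz in samples) / total_time

print(f"reported ~2160 MHz, effective ~{effective_clock(samples):.0f} MHz")

With these made-up samples the reported clock stays at 2160 MHz while the average over the window lands around 2070 MHz, which is the kind of gap the video is talking about.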

My question now is: is this really the "best" way to do an OC, or is it just the most accurate one?

In his general performance videos I see him using the voltage-curve method; at least, that makes the most sense to me given the 2175 MHz clocks shown (with a 2080 Ti).

https://www.youtube.com/watch?v=RH3FZXvBkiE
I think the video is misleading and the conclusion is nonsensical.

The larger gap between the reported and the effective clock in "method 1" stems solely from the higher requested clock rate.

Scenario 1: requested 2190 MHz @ 1000 mV, reported 2160 MHz, effective ~2080 MHz
Scenario 2: requested 2100 MHz @ 1000 mV, reported 2100 MHz, effective ~2080 MHz

In both scenarios the "requested" clock frequency is not achievable at the given voltage.
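Put differently, the silicon tops out at roughly the same effective clock at 1000 mV no matter how much more you request, so a bigger requested number only widens the gap. A toy model of that, where the "achievable at this voltage" figure is just assumed to match the numbers above:

# Toy model: the clock the silicon can actually hold at a given voltage is a
# property of the chip; requesting more just inflates the reported-vs-effective
# gap. The 2080 MHz figure is assumed to match the scenarios above.
ACHIEVABLE_AT_1000MV = 2080

def effective(requested_mhz):
    # Whatever you ask for, the card can't sustain more than the voltage allows.
    return min(requested_mhz, ACHIEVABLE_AT_1000MV)

for requested, reported in [(2190, 2160), (2100, 2100)]:
    eff = effective(requested)
    print(f"requested {requested}, reported {reported}, "
          f"effective ~{eff}, gap {reported - eff} MHz")

The gap comes out as 80 MHz in the first case and 20 MHz in the second, yet the effective clock, and therefore the performance, is the same.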

Also:
He LOCKS the GPU to a set frequency and voltage during the benchmark. The GPU does not adapt its clock rate and voltage across different workloads.
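If you want to see what the clock does over a real, unlocked run, you can simply log the driver-reported graphics clock while the benchmark loops; note this is still the reported clock, not HWiNFO's effective clock. A rough sketch:

# Rough sketch: log the driver-reported graphics/SM clocks once per second
# while your benchmark runs. Note: this is the *reported* clock, not the
# effective clock HWiNFO shows, but the variation over time is still useful.
import subprocess
import time

def read_clocks(gpu_index=0):
    out = subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index),
         "--query-gpu=clocks.gr,clocks.sm",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    gr, sm = (int(v) for v in out.split(","))
    return gr, sm

if __name__ == "__main__":
    for _ in range(60):  # sample for about a minute
        gr, sm = read_clocks()
        print(f"{time.strftime('%H:%M:%S')}  graphics {gr} MHz  sm {sm} MHz")
        time.sleep(1)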

Remark: Don't flatten the curve out.
