R3v Oct 9, 2019 @ 11:58pm
ASUS 2070 ROG Strix core clock
Hello, wonderful people. I've been having a love/hate relationship with a ROG RTX 2070: love because I love the performance, hate because in the 5 months that I've owned it, I've already been through one repair and two replacements. Now the latest replacement is in my PC, and after monitoring it for a while I've noticed something unusual. I have not done any overclocking, but under high load (95%-100%) when playing Control (RTX on), the hardware monitor reads the core clock at 1980MHz. I've done some research, and everywhere it says that the card should have an out-of-the-box boost clock of 1815MHz. Is this normal, and if it is, what is it that I don't understand about the card's workings? The GPU holds a stable 64 degrees C, so I am not worried about any overheating. I am just worried that the extra clock speed could result in yet another malfunction and replacement. Again, I have not done any overclocking on my machine; I just got it out of the box and plugged it in.

My system specs are as follows:
Asus H170 Pro Gaming motherboard
i7-6700k
32 Gigs DDR4
ASUS RTX 2070 ROG Strix 8GB
dOBER Oct 10, 2019 @ 2:49pm
Originally posted by Snow:
Originally posted by dOBER:
Holy.. it looks like you are not joking. Tell me, please, why should you have stutter and input delay with maxed-out GPU usage? Anything to back this up?
Sure.
Here's a top PC showing awful frametime issues with the GPU running close to 100%.
https://www.youtube.com/watch?v=TGogOnb6qzg

Don't know why this guy has such bad frametimes, but I will show you how this looks on my "top PC". The frametime is a straight line in every game, without any stutter at all, and that with the GPU always maxed out!

Edit: uploading some random games I play
https://youtu.be/94QbgqdV10g
Last edited by dOBER; Oct 10, 2019 @ 2:53pm
Snow Oct 10, 2019 @ 3:02pm
Originally posted by R3v:
What Snow says does have some merit. I tested my system in a game which I know uses the GPU at around 60%, which is The Witcher 3. I was getting stable frametimes of 16ms with FPS locked at 60. Then I tested the aforementioned Control. As I previously said: GPU utilisation in the high 90s, FPS in the high 40s, and frametimes fluctuating between 18-22ms. Then I limited my framerate to 40 using RTSS and got a stable frametime of around 22. That said, the difference I felt was so minuscule that it doesn't really bother me. After all, I've been playing a lot on consoles, which drop frames like crazy in newer games.

As for the clocking issue, I did some more research on that as well. It's NVIDIA GPU Boost 4.0. Basically, the GPU will overclock itself if it decides the temps are fine and there's enough power-limit headroom. The advertised 1815MHz boost clock is basically the minimum you can expect until thermal throttling occurs. If the card feels like it (and given my temps, it seems it feels like it a lot), it will boost well over that, to a maximum of 1980MHz and an average clock of 1920MHz. So yeah, there goes that. Was a useful thread though, I learned a lot about frametimes and latency.
Glad to be of help, man! Btw, I haven't played much on modern consoles, but I play via DS3/DS4 on PC, and because of how smooth camera turns are with a controller, every microstutter is really noticeable to me. I had to play The Witcher 3 with a 30FPS lock as my PC is pretty low-spec, and I've got to say it was an enjoyable experience. Digital Foundry framerate tests show that console games almost always have a CPU-level framerate lock, and unless the framerate drops, frametimes are really, really stable, just like with RTSS. I've no idea why game developers don't include a CPU-level framerate limit in their games on PC; it's such a game changer, especially for weak PCs like mine.
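For what it's worth, the core of such a limiter is tiny. Here's a toy Python sketch of the idea (this is not how RTSS works internally, just the concept): do the frame's work, then sleep off whatever is left of the frame budget, so every frame goes out at the same pace.

[code]
# Toy CPU-level frame limiter: each loop iteration takes at least
# 1/TARGET_FPS seconds, which flattens frametimes (within sleep precision).
import time

TARGET_FPS = 60
BUDGET = 1.0 / TARGET_FPS

def render():
    time.sleep(0.005)  # stand-in for a real frame's work

prev = time.perf_counter()
for _ in range(120):
    render()
    remaining = BUDGET - (time.perf_counter() - prev)
    if remaining > 0:
        time.sleep(remaining)  # burn the leftover budget on the CPU
    now = time.perf_counter()
    print(f"frametime: {(now - prev) * 1000:.2f}ms")
    prev = now
[/code]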

And yeah, most 2070 reviews I've seen can't get clocks that high, but yours can. I guess you're just lucky - your particular 2070 shows awesome results.
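By the way, if you want to watch the boost behaviour yourself instead of trusting a single overlay reading, you can log the clocks over time. A minimal sketch, assuming an NVIDIA driver with nvidia-smi on the PATH (Python just for convenience):

[code]
# Log core clock, GPU temperature and power draw once per second, so you
# can watch GPU Boost ride above the advertised 1815MHz under load.
import subprocess
import time

QUERY = "clocks.gr,temperature.gpu,power.draw"

while True:
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    clock, temp, power = (field.strip() for field in out.split(","))
    print(f"core: {clock}  temp: {temp}  power: {power}")
    time.sleep(1)
[/code]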

Originally posted by dOBER:
Don't know why this guy has such bad frametimes, but I will show you how this looks on my "top PC". The frametime is a straight line in every game, without any stutter at all, and that with the GPU always maxed out!

Edit: uploading some random games I play
https://youtu.be/94QbgqdV10g
You don't expect the difference between 4ms and 7ms to be clearly visible on the default 0-100 graph, do you? Your frametime graph is nothing like a straight line; all you need is to either reduce the graph limits or just use simple maths.

Let's analyze some measurements from the very beginning of your video (the expected frametime is just 1000 divided by the FPS; there's a quick script to check this right after the list).
1st measure - 190fps at 4.4ms frametime, should've been 5.3ms.
2nd measure - 175fps at 5.0ms, should've been 5.7ms.
3rd measure - 168fps at 5.4ms, should've been 6.0ms.
4th measure - 171fps at 5.7ms, should've been 5.8ms, so far the closest to what you should get.
5th measure - 169fps at 5.8ms, should've been 5.9ms.
6th - 164fps at 6.1ms, this one is actually good.
7th - 162fps at 4.7ms vs 6.2ms, and now it's really bad.
8th - 171fps at 6.5ms and not 5.8ms, this time the frame took longer than it should - stutter and added input latency.
9th - still 171fps but 6.4ms frametime, once again stutter and extra input latency.
10th - 166fps at 6.7ms vs the correct 6.0ms, same old crap.
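Here's the "simple maths" as a quick Python check (the numbers are the ones I transcribed from your overlay above, nothing more):

[code]
# Expected frametime is simply 1000 / FPS; compare it to what the overlay reports.
measurements = [  # (fps, reported frametime in ms)
    (190, 4.4), (175, 5.0), (168, 5.4), (171, 5.7), (169, 5.8),
    (164, 6.1), (162, 4.7), (171, 6.5), (171, 6.4), (166, 6.7),
]

for i, (fps, reported) in enumerate(measurements, start=1):
    expected = 1000 / fps
    print(f"{i:2d}: {fps}fps -> expected {expected:.1f}ms, "
          f"reported {reported}ms, delta {reported - expected:+.1f}ms")
[/code]

A negative delta means the frame came out earlier than the framerate says it should, a positive one means it came out late.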

So, at measurements 1-4 and 7 we see the PC compensating for late frames by pushing some frames out faster than it should, 5 and 6 are close to perfect, and at measurements 8-10 we see the PC delivering frames too late. Those frametimes are absolutely not right, but there are two things to note here.
First - you've got around 170 frames per second for the most part during those measurements, and it took 4 seconds to get 10 measurements. We only know the results of 10 separate frames, while 4 seconds at 170fps is almost 700 frames. We can't truly analyse this video unless we analyse every single frame, but that takes too much time and some kind of log-analysing software that will do the job way faster than we would, hopefully before we get too old to see the numbers on the screen. Still, we get the general idea of what frame inconsistency is - we've got some fine frames, we've got some late frames, and we've got some early frames compensating for the late ones.
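Speaking of log-analysing software: if your overlay can dump a frametime log, summarising all ~700 frames is a few lines of Python. A rough sketch, assuming a plain-text log with one frametime in milliseconds per line (adjust the parsing to whatever your capture tool actually writes):

[code]
# Summarise a frametime log: average, 1% worst frames, absolute worst.
import statistics
import sys

with open(sys.argv[1]) as f:
    frametimes = [float(line) for line in f if line.strip()]

frametimes.sort()
n = len(frametimes)
avg = statistics.mean(frametimes)
p99 = frametimes[int(n * 0.99)]  # the threshold of the 1% slowest frames
print(f"frames: {n}")
print(f"average: {avg:.2f}ms (~{1000 / avg:.0f}fps)")
print(f"99th percentile: {p99:.2f}ms (~{1000 / p99:.0f}fps '1% low')")
print(f"worst: {frametimes[-1]:.2f}ms")
[/code]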
Second - yet again, around 170 frames per second. During measurements 2-10 (I exclude the 1st because it's not close to 170) the highest difference between frametimes you had was 1.7ms. You're compensating for frametime inconsistency by pushing more frames, and it does work - I doubt I personally would even notice a 1.7ms difference in input latency. That said, if you're running some kind of VSync (GSync and FreeSync are also just VSync variations) - each time a frame takes longer than it should, the front buffer doesn't get a new image and the screen shows the old image once again, resulting in stutter. Higher framerates and higher refresh rates make stutter and input latency inconsistency less and less noticeable, so when it gets close to 200fps - well, you might get awesome results with the graphics card at 99%. But the difference between 5.0ms and 6.7ms is still a 34% difference. If you were getting, say, around 60fps like I often do, that's equivalent to the difference between 16.67ms and 22.34ms. Now THAT kind of difference would be clear: it's almost 6ms of input latency difference, and it will look like it's running at 45fps.
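To illustrate the VSync point with a toy model (classic fixed-refresh VSync; variable refresh relaxes this, and this is just the arithmetic, not a measurement): a finished frame is held until the next refresh boundary, so a frame that misses its slot by even a fraction of a millisecond is displayed for a whole extra refresh.

[code]
# Toy model of VSync quantisation: display time is the frametime rounded
# UP to a multiple of the refresh interval, so barely missing the deadline
# costs a full extra refresh - that's the visible stutter.
import math

def displayed_ms(frametime_ms: float, refresh_hz: float) -> float:
    interval = 1000 / refresh_hz
    return math.ceil(frametime_ms / interval) * interval

for ft in (15.0, 17.0, 22.34):
    print(f"{ft:5.2f}ms rendered -> shown for {displayed_ms(ft, 60):5.2f}ms at 60Hz")
[/code]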

tl;dr - while you don't get consistent frametimes, at a framerate that high you're unlikely to tell the difference anyway.
Last edited by Snow; Oct 10, 2019 @ 3:43pm
R3v Oct 10, 2019 @ 3:10pm
As for the results: the ASUS RTX 2070 ROG Strix edition is supposed to be factory overclocked; I just didn't know it was by that much, and I didn't know about NVIDIA Boost, which kicks in on top of the factory OC. I owned a GTX 970 Strix before, which I used daily for 3 years, and it too consistently got better results than most out-of-the-box 970s.

Date Posted: Oct 9, 2019 @ 11:58pm
Posts: 18