When Unigine Valley launches, it begins rendering its scenes right away. You can let it do this for as long as you like, and it utilizes the GPU to the same extent as during a benchmark run. The only difference during the benchmark is that it plays its set of 18 scenes (the same ones as before) on a timer, keeping a running average of framerates and producing a quantifiable "score" for use in comparisons.
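The scoring described above (average the framerate across all timed scenes, then convert to a score) can be sketched roughly as follows. This is a hypothetical illustration, not Valley's actual implementation: the `score_run` function and the `fps_to_score` multiplier are assumptions; only the idea of a running average over the 18 scenes comes from the post.

```python
def score_run(frame_times_per_scene, fps_to_score=25.0):
    """Return (average fps, score) for a benchmark run.

    frame_times_per_scene: list of scenes, each a list of per-frame
    render times in seconds. The average fps is total frames divided
    by total elapsed time; the score is a simple linear scaling of
    that average (the multiplier here is an illustrative assumption).
    """
    total_frames = 0
    total_time = 0.0
    for scene in frame_times_per_scene:
        total_frames += len(scene)
        total_time += sum(scene)
    avg_fps = total_frames / total_time
    return avg_fps, avg_fps * fps_to_score

# Example: two scenes rendered at a steady 80 fps and 40 fps for 10 s each.
fast = [1 / 80] * 800   # 800 frames over 10 s
slow = [1 / 40] * 400   # 400 frames over 10 s
avg, score = score_run([fast, slow])  # avg is 60 fps overall
```

Note that a sustained drop (like 80 fps falling to 30) drags the whole-run average down, which is why the throttling below matters for the final score.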
My question is this: during "normal" rendering (when the program is running and drawing scenes but not benchmarking) I get 80-some fps. When I start the benchmark, the framerate holds at 80 for the first few seconds of the first scene, then drops to 30 or 60 for the rest of the run. This is irritating because I'm trying to get an apples-to-apples score to compare against my old video card setup (I just swapped cards), and my old cards never "throttled" like this. Vsync is off both in the benchmark and in the drivers.
CPU: AMD FX-8350 OC'd to 4.7 GHz
Mobo: Asus Crosshair V Formula-Z
GPU: MSI Nvidia GTX 970 Gaming 4G
RAM: Crucial Ballistix 16 GB 1600 MHz
SSDs: SanDisk 128 GB (OS) and two Corsair Force GT 240 GB (games)
OS: Windows 8.1