https://gpu.userbenchmark.com/Compare/Nvidia-RTX-4080-vs-AMD-RX-6700-XT/4138vs4109
And it's still more accurate.
You were already linked ACTUAL reviews from well-known sources but decided to ignore them and now post sites that just compile data without even reviewing. Get lost.
Nah, I accept the system requirements for Cyberpunk or Alan Wake 2. That a PS4 game requires top-tier hardware to run at 4K60 is a farce. I just hope they overshot the requirements because this should be maxed out on a 3080 at 4K60, a GPU that's actually around 74% faster than the 6700.
Once again, stop with the ignorant takes.
Yes, not ALL games. Obviously it won't run Cyberpunk, Alan Wake 2, or even most modern AAA games at max settings and get 4K60, but here's the catch: Ghost of Tsushima is not a modern game. Look at the OP. The requirements for max settings are the same as Rift Apart with Ultimate Ray Tracing. How the ♥♥♥♥ does a PS4 port of a 4-year-old game have the same requirements as a PS5 game with ray tracing? This doesn't add up. Either they made HUGE enhancements for the PC port that take the visuals to a whole new level, or those requirements are nonsensical. The PS5 port of GOT looks almost identical to the PS4 version with a few minor enhancements. The biggest jumps are the frame rate and resolution.
These are the benchmarks that were used:
PassMark is not a GPU benchmark?
3DMark Vantage is not a GPU benchmark?
3DMark 11 is not a GPU benchmark?
3DMark Fire Strike is not a GPU benchmark?
3DMark Cloud Gate is not a GPU benchmark?
3DMark Ice Storm is not a GPU benchmark?
Tell me, smart kid: what is a GPU benchmark in your opinion, and what type of benchmark should I use, synthetic or real?
OK, enough of this goalpost-moving nonsense. Short and to the point:
You first claimed the RTX 4080 would be 257% faster, then changed it to 2.5x faster, then to 2.3x faster.
Most of your replies are inconsistent.
Some people think "200% faster" simply means twice as fast, but twice as fast is mathematically only 100% faster: a 100% increase of the initial value doubles it.
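To make the conversion explicit, here is a small illustrative snippet (the 2.5x, 257% and 74% figures are just the numbers thrown around in this thread):

```python
# Converting between "N times as fast" and "percent faster".
def percent_faster(ratio):
    # A GPU that is `ratio` times as fast is (ratio - 1) * 100 percent faster.
    return (ratio - 1) * 100

def ratio_from_percent(pct):
    # A GPU that is pct percent faster is (1 + pct / 100) times as fast.
    return 1 + pct / 100

print(percent_faster(2.0))      # 100.0 -> "twice as fast" is 100% faster
print(percent_faster(2.5))      # 150.0 -> "2.5x as fast" is 150% faster
print(ratio_from_percent(257))  # 3.57  -> "257% faster" would be 3.57x as fast
print(ratio_from_percent(74))   # 1.74  -> "74% faster" is 1.74x as fast
```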
My figure of 74% faster is based on aggregated performance scores measured by (other) technicians under controlled conditions. I've already written this, too.
These results are information I've quoted once and don't care about as much as you do. I am only the messenger.
Another user asked you to do a quick calculation based on the UserBenchmark results you linked here yourself, which you did not do.
Instead you quoted him while leaving out and ignoring the relevant question that would have proven his point (and mine too), moving the goalposts further.
I really don't care about better or worse UserBenchmark results measured on a variety of different setups (CPU / RAM / mainboard / chipset / manufacturer [ASUS/MSI/GIGABYTE etc., stock or OC]). Your self-created, artificial "4K problem" doesn't affect me at all.
If you can't just accept the fact that this PC port of the Decima engine needs an RTX 4080 for guaranteed (minimum) 60 FPS at 4K and max settings, then whatever floats your boat.
Reasonable explanations for this were already given in this thread by multiple users.
Your possible options are:
(1) Ignore the facts and stomp your feet loudly on the ground
(2) Step down quality settings
(3) Lower rendering and/or screen resolution
(4) Buy an RTX 4090
(5) Skip this game
You are stuck in a loop, always pointing at option (1) and feeling very smart while doing so.
I, on the other hand, am just looking forward to playing Ghost of Tsushima at 1440p and max settings, because I prefer a minimum frame rate of 120 FPS over 4K and don't feel the need to create my own artificial problems just to rage about them in a forum like you do. A very, very simple and easy solution.
It's deplorable that YOU call other people stupid.
Ballpark. It can vary depending on the set of games, duh. Unless every single game gets included in a review, one data set might show 2.3x, another 2.5x, another 2.6x, and so on.
Also lol, 3DMark 11 and Fire Strike? Benchmarks that are over a decade old? I was using those on my GTX 670. Cloud Gate, which is meant to test freakin' notebooks? Are you trolling? Use Time Spy or Time Spy Extreme, more recent DX12 benchmarks (which GOT will use, as it doesn't even support DX11).
Here is a Time Spy benchmark where the 3080 beats the 6700 XT by 52%.
https://www.guru3d.com/data/publish/220/1ec799879f8efedcff534b240a1c3dc686af4a/untitled-52.png
That's the XT, which is around 10-15% faster than the non-XT. The 3080 alone is around your quoted 74% faster than the 6700. The 4080 is over twice as fast lol.
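Chaining those two gaps together is simple compounding; a rough back-of-the-envelope sketch (using the 52% Time Spy gap above and the assumed 10-15% XT-over-non-XT gap):

```python
# Compounding the two gaps quoted above: 3080 vs 6700 XT, and 6700 XT vs 6700.
gap_3080_over_6700xt = 1.52             # 3080 is ~52% faster than the 6700 XT
gap_xt_over_non_xt = (1.10, 1.15)       # 6700 XT is ~10-15% faster than the 6700

low = gap_3080_over_6700xt * gap_xt_over_non_xt[0]    # ~1.67
high = gap_3080_over_6700xt * gap_xt_over_non_xt[1]   # ~1.75
print(f"3080 vs 6700: {low:.2f}x to {high:.2f}x, "
      f"i.e. roughly {(low - 1) * 100:.0f}-{(high - 1) * 100:.0f}% faster")
```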
Your "74% faster" is wrong on all counts.
Except it's not minimum fps. Requirements are never for minimum fps but for an average. The minimum fps for Rift Apart on a 4090 is like 35 fps, but the requirement for 4K60 is a 4080, which dips way below 60 fps at 4K. System requirements have never ever been quoted for a minimum but for an average.
If a 4080 averages 100fps but gets 0.1% lows of 60fps at 4K60 and max settings? That's fine. If a 4080 averages 60fps in that game at 4K60 and max settings? That isn't fine.
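To illustrate the difference between an average and a 0.1% low, here is a small sketch with made-up frame-time data (the numbers are purely hypothetical, just to show how the two figures are computed):

```python
import numpy as np

# Hypothetical frame times (ms) for a run: mostly ~10 ms (100 fps),
# with a small number of dips to ~16.7 ms (60 fps).
rng = np.random.default_rng(0)
frame_times_ms = rng.normal(10.0, 0.5, size=10_000)
frame_times_ms[rng.choice(10_000, 20, replace=False)] = 16.7

avg_fps = 1000.0 / frame_times_ms.mean()
# The 0.1% low is the fps at the 99.9th-percentile (slowest) frame time.
low_01_fps = 1000.0 / np.percentile(frame_times_ms, 99.9)

print(f"average: {avg_fps:.0f} fps, 0.1% low: {low_01_fps:.0f} fps")  # ~100 fps avg, ~60 fps low
```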
Whatever floats your boat, kid.
I'll leave it to others to tell you that you have no clue what you are talking about.
Don't skip school!
Now leave, troll.
You did not even understand that I didn't use these benchmarks.
Told you already: whatever floats your boat. If you want to call me stupid or command me to leave, then do it in person or remain silent, kid. ;)
>These are the benchmarks used
>Mentions 3DMark 11
>I didn't use those benchmarks
Maybe don't post drunk? Anyway, away with you.
Yes, these are the benchmarks used by technicians under controlled conditions, but NOT by ME. I've quoted and linked them. See, you don't even understand this.
Already told you: if you want to give me any commands, then do it in person or remain silent.
Furthermore, this changes nothing. Using a DX11 synthetic benchmark that's almost 14 years old to test a 2022 GPU running a game that only supports DX12 is moronic, as is using Cloud Gate, a notebook and home computer benchmark. Ice Storm is for smartphones and tablets. How is this not trolling?
Time Spy and Time Spy Extreme are DX12 GAMING benchmarks and they're conspicuously missing from your list. Strange, huh?
OK, I'll bite anyway.
PassMark is one of the real GPU benchmarks that were used:
https://www.passmark.com/products/performancetest/pt_adv3d.php
Synthetic? No DX11?
You mentioned "accurate". What is more accurate:
Measuring only common DX12 functions, or measuring across the board, including synthetic and real benchmarks?
You proved once more that you have no clue what you are talking about.
PS: Since you complained about my foreign-language skills, I suggest that, if you wish, I continue this pointless discussion in my native language.
I've already linked it, and it was also used:
https://www.passmark.com/products/performancetest/pt_adv3d.php
Go on...
Nope, not this time. Your problem is that DX11 benchmarks were used instead of DX12 benchmarks, but both GPUs were tested on the same API. Speaking of real and synthetic benchmarks: double the FPS does not mean the GPU is twice as fast in general; it depends on many things, like which API you use and your skill at coding.
Speaking just of games: between DirectX 11 and DirectX 12, the most important difference is that DirectX 11 is a high-level API, while DirectX 12 is a low-level API. There are various layers between your game and your hardware. Low-level APIs are closer to the hardware, while high-level APIs are further away and more generalized.
It’s an important distinction between DirectX 11 and DirectX 12. In short, DirectX 12 allows game developers to target optimizations closer to the hardware, reducing the overhead incurred from the API and graphics driver. In turn, it’s also more difficult for developers to work with.
In order to quantify the performance of a GPU, three tests are used: How quickly can data be sent to the GPU or read back from it? How fast can the GPU kernel read and write data? How fast can the GPU perform computations?
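For what it's worth, here is a minimal sketch of those three measurements in Python using CuPy (my choice of library for illustration, not what PassMark itself uses, and the data sizes are arbitrary):

```python
# Hypothetical sketch only: three rough GPU measurements of the kind described
# above, written with CuPy. This is NOT PassMark's code, just an illustration.
import time
import numpy as np
import cupy as cp

N = 64 * 1024 * 1024                          # ~256 MB of float32 data
host = np.random.rand(N).astype(np.float32)

# 1) How quickly can data be sent to the GPU?
t0 = time.perf_counter()
dev = cp.asarray(host)                        # host -> device copy
cp.cuda.Device().synchronize()
t1 = time.perf_counter()
print(f"host->GPU transfer: {host.nbytes / (t1 - t0) / 1e9:.1f} GB/s")

# 2) How fast can the GPU read and write its own memory? (on-device copy)
t0 = time.perf_counter()
dev_copy = dev.copy()
cp.cuda.Device().synchronize()
t1 = time.perf_counter()
print(f"GPU memory copy: {2 * dev.nbytes / (t1 - t0) / 1e9:.1f} GB/s")

# 3) How fast can the GPU compute? (large float32 matrix multiply)
a = cp.random.rand(4096, 4096, dtype=cp.float32)
b = cp.random.rand(4096, 4096, dtype=cp.float32)
_ = a @ b                                     # warm-up (library initialization)
cp.cuda.Device().synchronize()
t0 = time.perf_counter()
c = a @ b
cp.cuda.Device().synchronize()
t1 = time.perf_counter()
print(f"matmul throughput: {2 * 4096**3 / (t1 - t0) / 1e12:.2f} TFLOP/s")
```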
But well, proceed?