Batman™: Arkham Knight

iemander Sep 3, 2015 @ 10:19am
GTX 770/GTX 680 should be able to max this game out @ 60FPS without extreme settings
It's been proven time and again by Digital Foundry that the PS4 is the equivalent of a GTX 750 Ti @ 30FPS. The GTX 770/GTX 680 offer a little less than twice that performance, so they should be considered what's necessary to max the game out @ 60FPS without going into the extreme settings.

If they're unable to max the game out at 1080p with the same graphical fidelity as the PS4 (which I assume is everything on high), it's a bad/unoptimized port.
Last edited by iemander; Sep 3, 2015 @ 10:21am
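A quick sketch of the scaling arithmetic in the opening post, assuming (as the post does) that framerate scales roughly linearly with GPU throughput at fixed settings; the 30 FPS baseline and the ~1.9x throughput ratio are the post's claims, not measurements:

```python
# Back-of-the-envelope FPS scaling, assuming framerate scales roughly
# linearly with GPU throughput at fixed settings.
ps4_fps = 30.0              # claimed PS4 / GTX 750 Ti baseline (from the post)
relative_throughput = 1.9   # "a little less than twice" a 750 Ti (assumed)

expected_fps = ps4_fps * relative_throughput
print(f"Expected GTX 770/680 framerate: ~{expected_fps:.0f} FPS")
# -> ~57 FPS: close to, but not automatically, a locked 60 at PS4 settings
```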
BlueMug Sep 3, 2015 @ 10:23am 
You are extremely right, in my book.
My 970 isn't maxed out to run smoothly, and the game has gotten worse with every update. When it came out, 60 FPS was the low end; now I only have it stable at 60 by changing settings and with the help of a settings guide. GeForce optimal settings, NVIDIA updates, and Rocksteady updates have it running like trash.
Stormspark Sep 3, 2015 @ 11:42am 
The PS4 is a LITTLE bit better than the 750 Ti. It's roughly equivalent to the Radeon HD 7850 (or R7 265 in the new naming scheme). But you are correct that the 770/680 DESTROYS it in performance.
Kaldaien Sep 3, 2015 @ 11:43am 
Originally posted by iemander:
It's been proven time and again by Digital Foundry that the PS4 is the equivalent of a GTX 750 Ti @ 30FPS. The GTX 770/GTX 680 offer a little less than twice that performance, so they should be considered what's necessary to max the game out @ 60FPS without going into the extreme settings.

If they're unable to max the game out at 1080p with the same graphical fidelity as the PS4 (which I assume is everything on high), it's a bad/unoptimized port.

Digital Foundry, if they have proven what you claim, are a bunch of technical dunces. There is soooooooooo much more high-speed memory available to the PS4's GPU that I don't think they claim what you are; you more than likely misinterpreted something they said.

The disparity between high-speed memory, relevant in games such as this, is a factor of 3:1 between the two GPUs. If you do something that requires high-resolution textures, the PS4 wins (it's got more high-speed memory plus no driver making life miserable when it comes to memory management). If you're purely compute / fillrate-bound, the GTX probably has a very real advantage. If you're somewhere in-between, there's no clear winner and Digital Foundry should understand this. It's case-by-case, with some more texture intensive games favoring the architecture of the PS4 and Xbox One, believe it or not.

If you argue a GTX 980 Ti is clearly superior to a PS4, nobody would argue with you there :) Even with the DX11 / OpenGL driver mucking things up, it's advantage PC. DX12 / Vulkan will make that even more lop-sided in favor of the PC.
Last edited by Kaldaien; Sep 3, 2015 @ 11:58am
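A rough sketch of where a "factor of 3:1" high-speed memory figure can come from; the OS reservation is an assumption (Sony never published an exact game-reservable figure), and the 2 GB card is the common GTX 770 configuration (4 GB variants existed):

```python
# Rough budget comparison behind the "factor of 3:1" claim.
ps4_unified_gib = 8.0    # total GDDR5 on the PS4
ps4_os_reserve  = 2.5    # assumed system/OS reservation
gtx770_vram_gib = 2.0    # common 2 GB model

ps4_game_budget = ps4_unified_gib - ps4_os_reserve
print(f"{ps4_game_budget:.1f} GiB vs {gtx770_vram_gib:.1f} GiB of "
      f"high-speed memory -> roughly {ps4_game_budget / gtx770_vram_gib:.1f}:1")
```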
Frogster 🐸 Sep 3, 2015 @ 11:49am 
Maybe without high textures on the 2 GB model and with GameWorks off.
iemander Sep 3, 2015 @ 1:14pm 
Originally posted by Kaldaien:
Originally posted by iemander:
It's been proven time and again by Digital Foundry that the PS4 is the equivalent of a GTX 750 Ti @ 30FPS. The GTX 770/GTX 680 offer a little less than twice that performance, so they should be considered what's necessary to max the game out @ 60FPS without going into the extreme settings.

If they're unable to max the game out at 1080p with the same graphical fidelity as the PS4 (which I assume is everything on high), it's a bad/unoptimized port.

Digital Foundry, if they have proven what you claim, are a bunch of technical dunces. There is soooooooooo much more high-speed memory available to the PS4's GPU that I don't think they claim what you are; you more than likely misinterpreted something they said.

The disparity between high-speed memory, relevant in games such as this, is a factor of 3:1 between the two GPUs. If you do something that requires high-resolution textures, the PS4 wins (it's got more high-speed memory plus no driver making life miserable when it comes to memory management). If you're purely compute / fillrate-bound, the GTX probably has a very real advantage. If you're somewhere in-between, there's no clear winner and Digital Foundry should understand this. It's case-by-case, with some more texture intensive games favoring the architecture of the PS4 and Xbox One, believe it or not.

If you argue a GTX 980 Ti is clearly superior to a PS4, nobody would argue with you there :) Even with the DX11 / OpenGL driver mucking things up, it's advantage PC. DX12 / Vulkan will make that even more lop-sided in favor of the PC.

Ehhhh... the high-speed memory factor is interesting. But that in itself does not really make that much of a difference. If the PC has 8 GB of system memory, some clever background caching/loading should easily be able to circumvent this problem. That's basically what porting means this generation: moving processes built around the PS4's unified RAM onto the typical system RAM/VRAM split of the PC while still making sure stuttering is not an issue.

Anyway, Digital Foundry uses the GTX 750 Ti in every benchmark they do comparing the PS4 with PC, like GTA V, Evolve, Dying Light, Project Cars, Witcher 3, SoM (I think), etc. It practically always gives somewhat better performance than the same games running on the PS4.
Last edited by iemander; Sep 3, 2015 @ 1:27pm
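A minimal sketch of the "clever background caching" idea: a worker thread pre-reads asset files into system RAM so later uploads to VRAM don't stall on disk I/O. The file path, queue, and cache are purely illustrative:

```python
# Background prefetching sketch: stage assets into system RAM ahead of use.
import threading, queue

ram_cache = {}                      # path -> bytes held in system RAM
prefetch_queue = queue.Queue()

def prefetch_worker():
    while True:
        path = prefetch_queue.get()
        if path is None:            # sentinel: shut down the worker
            break
        with open(path, "rb") as f:
            ram_cache[path] = f.read()   # staged, ready for a VRAM upload
        prefetch_queue.task_done()

worker = threading.Thread(target=prefetch_worker, daemon=True)
worker.start()
prefetch_queue.put("textures/gotham_block_03.tex")   # hypothetical asset
```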
Kaldaien Sep 3, 2015 @ 1:26pm 
Originally posted by iemander:
Originally posted by Kaldaien:

Digital Foundry, if they have proven what you claim, are a bunch of technical dunces. There is soooooooooo much more high-speed memory available to the PS4's GPU that I don't think they claim what you are; you more than likely misinterpreted something they said.

The disparity between high-speed memory, relevant in games such as this, is a factor of 3:1 between the two GPUs. If you do something that requires high-resolution textures, the PS4 wins (it's got more high-speed memory plus no driver making life miserable when it comes to memory management). If you're purely compute / fillrate-bound, the GTX probably has a very real advantage. If you're somewhere in-between, there's no clear winner and Digital Foundry should understand this. It's case-by-case, with some more texture intensive games favoring the architecture of the PS4 and Xbox One, believe it or not.

If you argue a GTX 980 Ti is clearly superior to a PS4, nobody would argue with you there :) Even with the DX11 / OpenGL driver mucking things up, it's advantage PC. DX12 / Vulkan will make that even more lop-sided in favor of the PC.

Ehhhh... the high-speed memory factor is interesting. But that in itself does not really make that much of a difference. If the PC has 8 GB of system memory, some clever background caching/loading should easily be able to circumvent this problem. That's basically what porting means this generation: moving processes built around the PS4's unified RAM onto the typical system RAM/VRAM split of the PC while still making sure stuttering is not an issue.

Anyway, Digital Foundry uses the GTX 750 Ti in every benchmark they do comparing the PS4 with PC, like GTA V, Witcher 3, SoM (I think), etc. It practically always gives somewhat better performance than the same games running on the PS4.

Not at all. The difference between PCIe bus bandwidth and VRAM bandwidth (a factor of 8 to 1 in the absolute best case, where you have a card operating in PCIe 3.0 x16 mode) is such that if a frame ever requires more than 2 GiB of data, your framerate will plummet. On the PS4 the CPU and GPU see identical bandwidth to the same memory; on the Xbox One it's segmented (high-speed 32 MiB eSRAM and low-speed DDR3). The Xbox One is closer in performance scaling to the PC because of this; it's equally hard to optimize for.
Last edited by Kaldaien; Sep 3, 2015 @ 1:28pm
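A back-of-the-envelope sketch of why spilling past VRAM hurts, using nominal spec bandwidths (assumed peak figures, not measured throughput):

```python
# Moving data over PCIe is far slower than reading it from on-card memory.
pcie3_x16_gbs   = 15.75   # PCIe 3.0 x16, theoretical peak (GB/s)
gtx770_vram_gbs = 224.3   # GTX 770 GDDR5 bandwidth, spec (GB/s)

overflow_gib = 1.0        # frame data that didn't fit in VRAM
gib_to_gb = 1.074         # GiB -> GB conversion factor

ms_over_pcie = overflow_gib * gib_to_gb / pcie3_x16_gbs   * 1000
ms_from_vram = overflow_gib * gib_to_gb / gtx770_vram_gbs * 1000
print(f"Fetching 1 GiB over PCIe: ~{ms_over_pcie:.0f} ms "
      f"(vs ~{ms_from_vram:.1f} ms from VRAM)")
# ~68 ms for the transfer alone -- a 16.6 ms (60 FPS) frame budget is gone.
```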
iemander Sep 3, 2015 @ 1:34pm 
Originally posted by Kaldaien:
Originally posted by iemander:

Ehhhh... the high-speed memory factor is interesting. But that in itself does not really make that much of a difference. If the PC has 8 GB of system memory, some clever background caching/loading should easily be able to circumvent this problem. That's basically what porting means this generation: moving processes built around the PS4's unified RAM onto the typical system RAM/VRAM split of the PC while still making sure stuttering is not an issue.

Anyway, Digital Foundry uses the GTX 750 Ti in every benchmark they do comparing the PS4 with PC, like GTA V, Witcher 3, SoM (I think), etc. It practically always gives somewhat better performance than the same games running on the PS4.

Not at all. The difference between PCIe bus bandwidth and VRAM bandwidth (a factor of 8 to 1) is such that if a frame ever requires more than 2 GiB of data, your framerate will plummet. On the PS4 the CPU and GPU see identical bandwidth to the same memory; on the Xbox One it's segmented (high-speed 32 MiB eSRAM and low-speed DDR3). The Xbox One is closer in performance scaling to the PC because of this; it's equally hard to optimize for.

That's an absolutely absurd scenario. Why in God's name would a scene need 2 GB of data? That's a true sign of absolutely horrible optimization. This is exactly why LoD exists, so detail is STREAMED into memory, not loaded outright. I doubt the PS4 even has the processing power to manage that amount of RAM without some extreme threading mechanisms.

Anyway, the PS4's high speed memory is an advantage because it makes it easier for the dev to manage loading and unloading resources. But frankly this is completely overshadowed by any PC with a far higher amount of memory.
Last edited by iemander; Sep 3, 2015 @ 1:35pm
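A minimal sketch of the LoD idea described above: pick a texture mip level from distance so only the detail actually needed is streamed in. The 10 m base distance and the sample distances are arbitrary, illustrative values:

```python
import math

# Distance-based LoD selection: each doubling of distance drops one mip
# level, so far-away objects never stream their full-resolution textures.
def mip_for_distance(distance_m, base_distance_m=10.0):
    if distance_m <= base_distance_m:
        return 0                                  # full-resolution mip
    return int(math.log2(distance_m / base_distance_m))

for d in (5, 20, 80, 320):
    print(f"{d:>4} m -> mip level {mip_for_distance(d)}")
```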
iemander Sep 3, 2015 @ 1:36pm 
Anyway, for all this talk about theory: Digital Foundry has real-world results showing the GTX 750 Ti is the PC's counterpart to the PS4.
Stormspark Sep 3, 2015 @ 1:38pm 
Originally posted by iemander:
Originally posted by Kaldaien:

Not at all. The difference between PCIe bus bandwidth and VRAM bandwidth (a factor of 8 to 1) is such that if a frame ever requires more than 2 GiB of data, your framerate will plummet. On the PS4 the CPU and GPU see identical bandwidth to the same memory; on the Xbox One it's segmented (high-speed 32 MiB eSRAM and low-speed DDR3). The Xbox One is closer in performance scaling to the PC because of this; it's equally hard to optimize for.

That's an absolutely absurd scenario. Why in God's name would a scene need 2 GB of data? That's a true sign of absolutely horrible optimization. This is exactly why LoD exists, so detail is STREAMED into memory, not loaded outright. I doubt the PS4 even has the processing power to manage that amount of RAM without some extreme threading mechanisms.

Anyway, the PS4's high speed memory is an advantage because it makes it easier for the dev to manage loading and unloading resources. But frankly this is completely overshadowed by any PC with a far higher amount of memory.

The PS4 will basically have zero advantage even in memory capacity in about a year, when video cards start coming with 6 GB of VRAM standard. Right now only the 980 Ti and Titan X have this much, but give it a year.
Last edited by Stormspark; Sep 3, 2015 @ 1:38pm
iemander Sep 3, 2015 @ 1:44pm 
Originally posted by Saerydoth:
Originally posted by iemander:

That's an absolutely absurd scenario. Why in God's name would a scene need 2 GB of data? That's a true sign of absolutely horrible optimization. This is exactly why LoD exists, so detail is STREAMED into memory, not loaded outright. I doubt the PS4 even has the processing power to manage that amount of RAM without some extreme threading mechanisms.

Anyway, the PS4's high speed memory is an advantage because it makes it easier for the dev to manage loading and unloading resources. But frankly this is completely overshadowed by any PC with a far higher amount of memory.

The PS4 will basically have zero advantage even in memory capacity in about a year, when video cards start coming with 6 GB of VRAM standard. Right now only the 980 Ti and Titan X have this much, but give it a year.
Well, to be honest, the PS4 will still have the advantage of unified memory.

But it doesn't really matter. The difference in memory speed between PS4 and PC is nothing more than an architectural difference. It has its advantages, but it doesn't really change matters much when you look at the amount of system RAM in the average PC and its available CPU processing power.

It's really more of a problem for porting between PS4 and PC, which it makes harder, as developers have to explicitly shift data between system RAM and VRAM.
Last edited by iemander; Sep 3, 2015 @ 1:47pm
Kaldaien Sep 3, 2015 @ 2:06pm 
Originally posted by iemander:
Originally posted by Kaldaien:

Not at all. The difference between PCIe bus bandwidth and VRAM bandwidth (a factor of 8 to 1) is such that if a frame ever requires more than 2 GiB of data, your framerate will plummet. On the PS4 the CPU and GPU see identical bandwidth to the same memory; on the Xbox One it's segmented (high-speed 32 MiB eSRAM and low-speed DDR3). The Xbox One is closer in performance scaling to the PC because of this; it's equally hard to optimize for.

That's an absolutely absurd scenario. Why in God's name would a scene need 2 GB of data? That's a true sign of absolutely horrible optimization. This is exactly why LoD exists, so detail is STREAMED into memory, not loaded outright. I doubt the PS4 even has the processing power to manage that amount of RAM without some extreme threading mechanisms.

Anyway, the PS4's high speed memory is an advantage because it makes it easier for the dev to manage loading and unloading resources. But frankly this is completely overshadowed by any PC with a far higher amount of memory.

Well, let's see here. A 4K framebuffer takes up 32 MiB for the output color buffer alone. Now, you need to do deferred shading in most modern games and you also need to do HDR. Let's assume a 64-bit depth buffer (32-bit floating-point depth + 8-bit stencil + 24-bit padding), 128-bit color (HDR), 32-bit normals, 32-bit specular / material ID. That is 256 bits = 32 bytes per pixel just to render the scene, period. Now, a lot of people add MSAA / TXAA on top of that these days, so storage requirements can go up another 4x beyond that.

Add image-space reflection, shadow maps, light probes, multi-frame post-processing; basically everything a modern game engine does, and you're pushing 1 GiB of memory consumption conservatively before you load a single texture. And that's 1 GiB of data actually used each frame, not stuff that might sit around for a while.

Just because you don't understand how memory is consumed by a graphics engine does not mean something is poorly optimized.

And no, the PS4 does not need any sort of processing to manage its memory. There's no contention for a cramped bus running at about 1/8 the speed of the actual memory. The CPU and GPU share the same bus, with a unified memory controller. Add to that no driver layer that has to figure out how to schedule memory I/O over said bus while the OS schedules other processes, and it's bliss.
Last edited by Kaldaien; Sep 3, 2015 @ 2:13pm
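The per-frame arithmetic from this post, worked through in a short script; the 256-bit G-buffer layout (depth/stencil + HDR color + normals + specular/material ID) is taken from the post itself:

```python
# G-buffer memory at 4K with the layout described in the post.
width, height = 3840, 2160           # 4K render target
bits_per_pixel = 64 + 128 + 32 + 32  # = 256 bits = 32 bytes per pixel
bytes_per_pixel = bits_per_pixel // 8

gbuffer_mib = width * height * bytes_per_pixel / 2**20
print(f"G-buffer at 4K: ~{gbuffer_mib:.0f} MiB")            # ~253 MiB
print(f"With 4x MSAA storage: ~{gbuffer_mib * 4:.0f} MiB")  # ~1012 MiB
# Add shadow maps, reflections, and post-processing targets, and ~1 GiB of
# per-frame working set is plausible before a single texture is loaded.
```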
iemander Sep 3, 2015 @ 2:10pm 
What you say sounds impressive but you're still wrong.

Let's turn this around.

Explain to me why a GTX 750 Ti coupled with an i3 is able to consistently beat or come close to the PS4 in all properly ported multiplatform games.
Last edited by iemander; Sep 3, 2015 @ 2:12pm
Kaldaien Sep 3, 2015 @ 2:13pm 
Originally posted by iemander:
What you say sounds impressive but you're still wrong.

Let's turn this around.

Explain to me why a GTX 750 Ti coupled with an i3 is able to consistently beat or come close to the PS4 in all properly ported multiplatform games.

Because most games don't use a whole lot of memory.

Run this comparison on anything running idTech 5 and you will see the numbers are drastically different in favor of the PS4.
Last edited by Kaldaien; Sep 3, 2015 @ 2:16pm
iemander Sep 3, 2015 @ 2:19pm 
Originally posted by Kaldaien:
Originally posted by iemander:
What you say sounds impressive but you're still wrong.

Let's turn this around.

Explain to me why a GTX 750 Ti coupled with an i3 is able to consistently beat or come close to the PS4 in all properly ported multiplatform games.

Because most games don't use a whole lot of memory.

Run this comparison on anything running idTech 5 and you will see the numbers are drastically different in favor of the PS4.

So yeah, it's not really all that important. Again, it's a nice feature and can be an advantage but it doesn't really make all that much difference.

Megatexture is getting removed in idTech 6 btw
