Digital Foundry, if they have really proven what you claim, are a bunch of technical dunces. There is soooooooooo much more high-speed memory available to the PS4's GPU that I don't think they actually claim what you say they do; you more than likely misinterpreted something they said.
The disparity in high-speed memory, which is relevant in games such as this, is a factor of about 3:1 between the two GPUs. If you do something that requires high-resolution textures, the PS4 wins (it's got more high-speed memory, plus no driver making life miserable when it comes to memory management). If you're purely compute / fillrate-bound, the GTX probably has a very real advantage. If you're somewhere in between, there's no clear winner, and Digital Foundry should understand this. It's case-by-case, with more texture-intensive games favoring the architecture of the PS4 and Xbox One, believe it or not.
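To put a rough number on that 3:1 figure, here is a back-of-the-envelope sketch. The inputs are my assumptions, not from the thread: a PC card with 2 GiB of VRAM and roughly 5.5 GiB of the PS4's unified GDDR5 pool left available to games (the exact split varies by firmware).

```python
# Rough sketch of where a ~3:1 "high-speed memory" ratio could come from.
# Assumptions (mine, not from the thread): a 2 GiB GDDR5 card on the PC side,
# and roughly 5.5 GiB of the PS4's unified GDDR5 pool available to a game.

GIB = 1024 ** 3

pc_vram_bytes  = 2 * GIB     # assumed VRAM on the PC card being compared
ps4_game_bytes = 5.5 * GIB   # assumed share of the PS4's 8 GiB pool usable by a game

ratio = ps4_game_bytes / pc_vram_bytes
print(f"high-speed memory ratio (PS4 : PC card) ~= {ratio:.2f} : 1")
# -> about 2.75 : 1, i.e. roughly the 3:1 figure quoted above
```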
If you argue a GTX 980 Ti is clearly superior to a PS4, nobody would argue with you there :) Even with the DX11 / OpenGL driver mucking things up, it's advantage PC. DX12 / Vulkan will make that even more lop-sided in favor of the PC.
Ehhhh... the high-speed memory factor is interesting, but it in itself does not really make that much of a difference. If the PC has 8 GB of system memory, some clever background caching/loading should easily be able to work around this problem. That's basically what porting means this generation: moving processes built around the PS4's unified RAM to the typical split system RAM / VRAM architecture of the PC while still making sure stuttering is not an issue.
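As an illustration of what that kind of caching can look like, here is a minimal sketch of a VRAM-budgeted texture cache with LRU eviction. It is not taken from any real engine; the names, sizes, and the 2 GiB budget are my own assumptions.

```python
from collections import OrderedDict

# Minimal sketch: keep everything in system RAM, but only the textures the
# renderer needs right now inside a fixed "VRAM" budget, evicting the
# least-recently-used ones when the budget is exceeded.

VRAM_BUDGET = 2 * 1024 ** 3   # pretend the card has 2 GiB of VRAM

class TextureCache:
    def __init__(self, budget=VRAM_BUDGET):
        self.budget = budget
        self.used = 0
        self.resident = OrderedDict()   # texture name -> size in bytes

    def request(self, name, size):
        """Make a texture resident, evicting least-recently-used ones if needed."""
        if name in self.resident:
            self.resident.move_to_end(name)   # mark as recently used
            return
        while self.used + size > self.budget and self.resident:
            _, evicted_size = self.resident.popitem(last=False)  # evict LRU entry
            self.used -= evicted_size
        self.resident[name] = size
        self.used += size

cache = TextureCache()
cache.request("rock_albedo_4k", 64 * 1024 ** 2)
cache.request("terrain_albedo_8k", 256 * 1024 ** 2)
print(f"resident: {list(cache.resident)}, used: {cache.used / 2**20:.0f} MiB")
```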
Anyway, Digital Foundry uses the GTX 750 Ti in every benchmark they do comparing the PS4 with PC, like GTA V, Evolve, Dying Light, Project Cars, The Witcher 3, SoM (I think), etc. It practically always gives somewhat better performance than the same games running on the PS4.
Not at all. The difference between PCIe bus bandwidth and VRAM bandwidth (a factor of roughly 8 to 1 in the absolute best case, where you have a card operating in PCIe 3.0 x16 mode) is such that if a frame ever requires more than 2 GiB of data, i.e. more than fits in the card's VRAM so the overflow has to be streamed over the bus, your framerate will plummet. On the PS4 the bandwidth between CPU and GPU is identical to the memory bandwidth itself, while on the Xbox One it's segmented (a high-speed 32 MiB of eSRAM plus low-speed DDR3). The Xbox One is closer to the PC in performance scaling because of this; it's equally hard to optimize for.
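To see why streaming over the bus hurts so much, here is the back-of-the-envelope version. The bandwidth figure is approximate, and the 2 GiB overflow is just the scenario described above, not a measurement.

```python
# Rough arithmetic behind "your framerate will plummet".
# PCIe 3.0 x16 tops out around ~15.75 GB/s usable, while GDDR5 on cards of
# that era is on the order of 86-224 GB/s, hence the bus being the bottleneck.

GIB = 1024 ** 3

pcie3_x16_bw = 15.75e9          # bytes/s, best case over the bus
overflow_per_frame = 2 * GIB    # data that does not fit in VRAM and must be streamed

seconds_per_frame = overflow_per_frame / pcie3_x16_bw
print(f"transfer time per frame: {seconds_per_frame * 1000:.0f} ms")
print(f"framerate ceiling from transfers alone: {1 / seconds_per_frame:.1f} fps")
# -> roughly 136 ms per frame, i.e. a hard cap of about 7 fps before any rendering
```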
That's an absolutely absurd scenario. Why in God's name would a scene need 2 GB of data? That's a true sign of absolutely horrible optimization. This is exactly why LoD exists, so detail is STREAMED into memory, not loaded outright. I doubt the PS4 even has the processing power to manage that amount of RAM without some extreme threading mechanisms.
Anyway, the PS4's high speed memory is an advantage because it makes it easier for the dev to manage loading and unloading resources. But frankly this is completely overshadowed by any PC with a far higher amount of memory.
The PS4 will basically have zero advantage even in high-speed memory in about a year, when video cards start coming with 6 GB of VRAM as standard. Right now only the 980 Ti and Titan X have that much, but give it a year.
But it doesn't really matter. The difference in memory speed between the PS4 and PC is nothing more than an architectural difference. It has its advantages, but it doesn't really change matters much when you look at the amount of system RAM in the average PC and its available CPU processing power.
It's really more of a problem for porting between PS4 and PC, which it makes harder, since devs have to explicitly shift data between system RAM and VRAM.
Well, let's see here. A 4K framebuffer takes up about 32 MiB for the output color buffer alone. Most modern games do deferred shading, and you also need HDR. Let's assume a 64-bit depth buffer (32-bit floating-point depth + 8-bit stencil + 24 bits of padding), 128-bit color (HDR), 32-bit normals, and a 32-bit specular / material ID. That is 256 bits = 32 bytes per pixel just to render the scene, period. A lot of people add MSAA / TXAA on top of that these days, so storage requirements can go up another 4x beyond that.
Add image-space reflections, shadow maps, light probes, multi-frame post-processing, basically everything a modern game engine does, and you're conservatively pushing 1 GiB of memory consumption before you load a single texture. And that's 1 GiB of data actually used every frame, not stuff that might sit around for a while.
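Spelling out the arithmetic from the two paragraphs above, using the same per-pixel figures (the 4x MSAA line is just one way to hit the "another 4x" multiplier mentioned):

```python
# Framebuffer arithmetic: 4K target, deferred G-buffer at 256 bits (32 bytes)
# per pixel, plus an optional 4x multiplier for MSAA-style storage.

WIDTH, HEIGHT = 3840, 2160
pixels = WIDTH * HEIGHT
MIB = 1024 ** 2

color_ldr = pixels * 4    # 32-bit output color buffer
gbuffer   = pixels * 32   # 64-bit depth/stencil + 128-bit HDR color
                          # + 32-bit normals + 32-bit specular/material ID
msaa_4x   = gbuffer * 4   # storage if you layer 4x MSAA on top

print(f"output color buffer: {color_ldr / MIB:.0f} MiB")   # ~32 MiB
print(f"G-buffer:            {gbuffer / MIB:.0f} MiB")     # ~253 MiB
print(f"with 4x MSAA:        {msaa_4x / MIB:.0f} MiB")     # ~1013 MiB, i.e. ~1 GiB
```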
Just because you don't understand how memory is consumed by a graphics engine does not mean something is poorly optimized.
And no, the PS4 does not need any sort of extra processing to manage its memory. There's no contention for a cramped bus running at about 1/8 the speed of the actual memory. The CPU and GPU share the same bus, with a unified memory controller. Add to that no driver layer that has to figure out how to schedule memory I/O over said bus while the OS schedules other processes, and it's bliss.
Let's turn this around.
Explain to me why a GTX 750 Ti coupled with an i3 is able to consistently beat or come close to the PS4 in all properly ported multiplatform games.
Because most games don't use a whole lot of memory.
Run this comparison on anything running idTech 5 and you will see the numbers are drastically different in favor of the PS4.
So yeah, it's not really all that important. Again, it's a nice feature and can be an advantage but it doesn't really make all that much difference.
Megatexture is getting removed in idTech 6 btw