The Last of Us™ Part I

Sub-Zero Mar 31, 2023 @ 4:43pm
Some insight into what's happening in The Last of Us regarding performance.
Let's get a few things out of the way first. I'm a Systems Engineer/Programmer and have dealt with COBOL, Pascal, Fortran (yay, I'm that ancient), C#, and others, so I have a fair knowledge of this programming jazz.

People here have varying degrees of expertise in dealing with PCs. On the technical side, over roughly the last 10 years GPUs have gained massive power for graphics. Need more FPS? Just throw more pipelines, shader cores, TMUs, ROPs, and CUDA cores at consumer devices. Is this good? Yes, it is, if you take proper advantage of it. Getting more resolution, more FPS, prettier graphics, and better textures is excellent. The bad part is what follows: in recent years, developers don't seem to care about following standard practices, simply "brute forcing" games onto new hardware.

Most well-known engines like Unity and Unreal have good garbage collection procedures and many other tools available for developers. BUT it gets a bit weird here; developers usually DO NOT follow good coding practices and do not thoroughly scrub/audit their code. Otherwise, those gross bugs that users end up "beta-testing" in recent games would never ship.

Engines do most of the "mainstream" heavy lifting for them (it saves time; publishers are pushy to release games ASAP, staff are underpaid, someone's dog died, bad company environment, and so on). This makes developers lazy in the long run, plain and simple. Despite the vast QA section you see in the credits when you finish a game, with 50+ people who helped pinpoint bugs, the real issues never get addressed. Garbage collection? Code optimization? Profiling? Removing dead ends and loops, improving algorithms, workload distribution, pipeline scheduling... whatever is available to you (and that you know how to use)? "Nah, none of that... let's just throw the code in as it is, the engine will (poorly) optimize and organize it, the new GPUs can handle it with brute power, and we can save time and deliver this product ASAP." This seems to be becoming commonplace. I'm not even counting spaghetti code and programming vices.
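To make the "code optimization" point concrete, here's a minimal, engine-agnostic sketch of the kind of per-frame fix that profiling usually surfaces. Everything in it (the Particle struct, the pool size) is hypothetical and not taken from any specific engine; it only illustrates replacing per-frame allocation churn with a reused pool.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical particle struct; any per-frame game object works the same way.
struct Particle { float x, y, z, life; };

// Naive approach: allocate and free every frame. Each frame churns the heap
// (or, in a managed engine, feeds the garbage collector), which is exactly
// the kind of avoidable per-frame cost profiling tends to surface.
std::vector<Particle*> spawn_naive(std::size_t n) {
    std::vector<Particle*> out;
    for (std::size_t i = 0; i < n; ++i) out.push_back(new Particle{});
    return out; // caller must delete every element later
}

// Pooled approach: allocate once up front, reuse the same storage each frame.
class ParticlePool {
public:
    explicit ParticlePool(std::size_t capacity) : storage_(capacity), used_(0) {}
    Particle* acquire() { return used_ < storage_.size() ? &storage_[used_++] : nullptr; }
    void reset() { used_ = 0; } // call once per frame; no frees, no GC pressure
private:
    std::vector<Particle> storage_;
    std::size_t used_;
};

int main() {
    ParticlePool pool(10'000);     // one allocation at load time
    for (int frame = 0; frame < 3; ++frame) {
        pool.reset();
        for (int i = 0; i < 5'000; ++i) pool.acquire(); // reuse, don't reallocate
    }
}
```

In a managed engine the same idea shows up as avoiding per-frame garbage that the collector later has to stop the game to clean up.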

In recent years, I've seen many notable Unity games with outrageous memory bugs. Some Unity patches specifically address things like "fixed a bug in memory leak/memory handling" and so on. Relying on such engines makes developers lazy. Want an example? Final Fantasy 15 had awful coding, leading to constant crashes due to memory leaks, and it is widely known. It was never fixed, so modders had to develop fixes for the most grievous issues.
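As an illustration of the kind of leak being described (not code from FF15 or any engine; the cache and texture sizes are made up), here's the classic pattern where streamed-in assets are never released, next to a version that reuses and evicts them:

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical texture blob; stands in for a real GPU resource handle.
struct Texture { std::vector<std::uint8_t> pixels; };

// Leaky pattern: every area transition inserts into the cache under a new key
// and nothing ever evicts, so memory climbs for the whole play session.
std::unordered_map<std::string, Texture*> g_cache;

Texture* load_texture_leaky(const std::string& path) {
    Texture* t = new Texture{ std::vector<std::uint8_t>(4096ull * 4096ull * 4ull) };
    g_cache[path + std::to_string(g_cache.size())] = t; // duplicate entries, never freed
    return t;
}

// Fixed pattern: look up before loading, own the memory, and evict when the
// area is unloaded (here just a full clear, for brevity).
std::unordered_map<std::string, Texture> g_cache_fixed;

Texture& load_texture(const std::string& path) {
    auto it = g_cache_fixed.find(path);
    if (it == g_cache_fixed.end())
        it = g_cache_fixed.emplace(path, Texture{ std::vector<std::uint8_t>(4096ull * 4096ull * 4ull) }).first;
    return it->second;
}

void unload_area() { g_cache_fixed.clear(); } // release what the last area used

int main() {
    load_texture("rock_albedo");
    load_texture("rock_albedo"); // cache hit, no second 64 MiB allocation
    unload_area();
}
```

The fix is boring bookkeeping, which is exactly the kind of auditing work that gets skipped when the schedule is tight.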

On that topic, buying new/better hardware won't fix problems. It only MITIGATES them, depending on your graphical settings, etc. The Dark Souls series carried a simple engine bug since the first iteration (Demon's Souls) that caused massive slowdowns in the console versions. It was fixed by analyzing a small part of the code, something like 20 lines, and a well-known modder released a fix for PC and consoles, with all the other games in the series getting some extra benefit. The same bug was present in Elden Ring until Digital Foundry lashed out hard at them. These things are commonplace, and they only changed because DF called them out. It shouldn't be so. The engine's fault? Sure. The devs who created the faulty code/engine? Sure. Devs not caring as long as things "just work"? Sure. Releasing a game with lousy optimization because hardware can just brute-force it? Sure. In a typical dev meeting, you can almost hear them saying, "Let's get this on the market ASAP and scramble to fix it later on." All of that could be non-existent with more capable (well-paid) devs given more time to follow PROPER coding procedures, even though engines have clunky ways to "stitch" it all together. If left unchecked, minor issues will pile up and mess up the final product entirely.

However, there’s a different issue happening in TLoU.

Specifically, in TLoU, it's not your GPU's or your CPU's fault. Enable TLoU's native OSD in the options to check the game's resource management. A friend runs TLoU Part I fine with his RTX 2080 Ti and a somewhat old Intel i7 7700K at 1080p, at around 150 FPS on his setup. Alas, when the game needs to stream new textures into memory, expect massive drops to 10 FPS every 5-10 seconds. This is unsolvable with current tech.

It boils down to the available bandwidth: northbridge/DMI, storage, DDR, and PCIe, even on the newest Z690 chipsets. We have a conundrum here.

Recent problems with Forspoken, Hogwarts Legacy, AND now The Last of Us Part I boil down to that. All of these share a common "trait," if you will: they require HUMONGOUS amounts of memory bandwidth, texture transfers, and read/write throughput to get the graphics onto your screen. Even the most ludicrously expensive setup does NOT have the throughput required to stream 4K assets on the fly in a game with massive textures. Not even PCIe 5.0 at 32 GT/s per lane, a blazing-fast M.2 drive, and DDR5 at 9000 MHz will address those ridiculous drops from 80 FPS to 10 FPS when assets are loaded on demand every 0.1 seconds. Technology isn't there yet, and it won't be for many years IF games keep being programmed the way they are today.
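To put rough numbers on the kind of budget being described, here's a back-of-envelope sketch. The texture size, mip factor, burst count, and interface ceilings are all assumptions picked for illustration, not measurements from TLoU or any other game; compressed formats and smaller bursts shrink these numbers considerably.

```cpp
#include <cstdio>

int main() {
    constexpr double bytes_per_texel  = 4.0;                 // uncompressed RGBA8
    constexpr double texels_4k        = 4096.0 * 4096.0;
    constexpr double mip_chain_factor = 4.0 / 3.0;           // full mip chain adds ~33%
    constexpr double one_texture_mb   = texels_4k * bytes_per_texel * mip_chain_factor / 1e6;

    constexpr double textures_per_burst = 100.0;             // assumed streaming burst
    constexpr double burst_window_s     = 0.1;               // the "every 0.1 seconds" case above
    constexpr double required_gb_s      = one_texture_mb * textures_per_burst / 1000.0 / burst_window_s;

    constexpr double nvme_gen4_gb_s = 7.0;                   // rough sequential-read ceiling
    constexpr double pcie5_x16_gb_s = 63.0;                  // rough one-direction ceiling

    std::printf("one 4K RGBA8 texture + mips : %.0f MB\n", one_texture_mb);   // ~90 MB
    std::printf("burst bandwidth needed      : %.0f GB/s\n", required_gb_s);  // ~90 GB/s
    std::printf("vs NVMe Gen4 ~%.0f GB/s, PCIe 5.0 x16 ~%.0f GB/s\n",
                nvme_gen4_gb_s, pcie5_x16_gb_s);
}
```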

For example, I'd like to point out a game that did this very well for the era it was released in, with the limited tools available back then. Metal Gear Solid 4 on PS3 had decent performance; its bottleneck was the GPU. To solve the texture problem, they worked on good texture compression to avoid taxing the system, and on decent graphics within the console's constraints. Was installing the chapters boring? Yeah, people complained a lot about loading for 16 minutes. But after that, you had no data streaming from the Blu-ray drive (the slowest interface on the console). The game played fine, with minor graphical drops caused by the GPU. The SATA interface was enough for them because the assets were budgeted realistically instead of going overboard with insane textures. Nowadays, I rarely even see complaints about 30-50 minutes of compiling shaders; that has also become commonplace.
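The compression point carries over directly to the GPU block-compressed formats used today (MGS4 itself used the older DXT family; the texture size here is an assumption, the ratios are the standard ones for these formats):

```cpp
#include <cstdio>

// Illustrative arithmetic only: why block compression matters for streaming.
int main() {
    constexpr double mib       = 1024.0 * 1024.0;
    constexpr double raw_rgba8 = 4096.0 * 4096.0 * 4.0 / mib; // 64 MiB uncompressed
    constexpr double bc7       = raw_rgba8 / 4.0;             // 4:1 -> 16 MiB, high-quality RGBA
    constexpr double bc1       = raw_rgba8 / 8.0;             // 8:1 ->  8 MiB, opaque/simple maps

    std::printf("4K texture: raw %.0f MiB, BC7 %.0f MiB, BC1 %.0f MiB\n",
                raw_rgba8, bc7, bc1);
}
```

Block-compressed textures also stay compressed in VRAM, so they cut streaming cost and memory footprint at the same time.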

P.S.: Adding to everything I mentioned above, engines like Unreal and Unity will almost always introduce memory-leak bugs of their own, which they openly admit to in some versions and fix, but those fixes only help if devs are willing to put some effort into auditing their code against the engine version they use.

tl;dr - Developers need to drop this absurd method of streaming textures on demand, on the fly. A 15-minute load to get assets into memory at the start of every chapter won't hurt people who already suffer through 50 minutes of shader compilation, and it would help remove a TON of those hiccups, which are not caused by the CPU or GPU at all (in TLoU's case).
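To make the tl;dr concrete, here's a minimal sketch contrasting the two loading strategies. Everything in it (the asset names, the 16 MiB size, the chapter manifest) is hypothetical; it's not how Naughty Dog's engine works, just an illustration of paying the disk cost behind a loading screen instead of mid-gameplay.

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical asset table; names and sizes are made up for illustration.
struct Asset { std::string name; std::vector<char> data; };

std::unordered_map<std::string, Asset> g_resident;   // assets currently in memory

Asset load_from_disk(const std::string& name) {
    return Asset{ name, std::vector<char>(16 * 1024 * 1024) }; // pretend 16 MiB read
}

// On-demand streaming: the first request for an asset pays the disk cost in
// the middle of gameplay, which is where mid-frame hitches come from.
const Asset& get_streaming(const std::string& name) {
    auto it = g_resident.find(name);
    if (it == g_resident.end())
        it = g_resident.emplace(name, load_from_disk(name)).first; // hitch happens here
    return it->second;
}

// Chapter preload: pay the whole cost once, behind a loading screen, so
// gameplay only ever touches memory that is already resident.
void preload_chapter(const std::vector<std::string>& manifest) {
    for (const auto& name : manifest)
        g_resident.emplace(name, load_from_disk(name));
}

int main() {
    preload_chapter({ "street_albedo", "street_normal", "joel_model" });
    get_streaming("street_albedo");   // already resident: no disk access, no hitch
}
```

The trade is exactly the one in the tl;dr: a longer, predictable load up front in exchange for gameplay that only touches memory already in place.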
Last edited by Sub-Zero; Mar 31, 2023 @ 4:49pm
Showing 1-15 of 43 comments
Juub Mar 31, 2023 @ 5:19pm 
Interesting but those issues aren't on the PS5 obviously. I assume this has to do with their hardware-accelerated decompression and overall layout of the PS5's software/hardware?

Also are you assuming or do you know this?
Last edited by Juub; Mar 31, 2023 @ 5:30pm
Malecite Mar 31, 2023 @ 5:25pm 
I didn't understand half of this, but I appreciate you taking the time to try and explain it.

It sounds like the problem is what we always expect with PC gaming: "live service" is doublespeak for "rush it out, fix it later." In this instance, the fact that the shader compile time puts people outside the return window, combined with zero advance review copies, makes me think this is just about the most egregious example of developer greed.

It blew up in their faces and hopefully that means something to someone in a suit.
Inferno Mar 31, 2023 @ 5:29pm 
I just want to point out that while you may understand programming, all your claims are based on assumptions. Also, the list of things you have experience with doesn't include any game engines (you only listed languages you know). I would also point out that it wouldn't matter if you had worked on a game engine like the ones you use in your comparisons (Unity, Unreal), because Naughty Dog uses their own in-house engine, and you have no idea how it works. I'm sorry if I'm coming off as rude, but I'm honestly just tired of posts from people claiming they know what's wrong when they don't. TLOU was also coded in C++ and Lisp, neither of which he knows.
Last edited by Inferno; Mar 31, 2023 @ 7:40pm
🧋🌷Sylvia🍃🌼
Why does the PS5 version run better than the PC version?
HOMERNOY Mar 31, 2023 @ 5:33pm 
Originally posted by Juub:
Interesting but those issues aren't on the PS5 obviously. I assume this has to do with their hardware-accelerated decompression and overall layout of the PS5's software/hardware?

Yeah, but Windows now supports Direct Storage, which is similar. Digital Foundry did a Forspoken comparison with the PS5, and the PC was faster (obviously with an M.2 NVMe).
Weird Potato Mar 31, 2023 @ 5:35pm 
Originally posted by 🧋🌷Sylvia🍃🌼:
Why does the PS5 version run better than the PC version?
lol it doesn't though. "Fidelity mode" is 30 FPS. PC can run this way, way higher than 30 FPS and look better while doing it.
Issues with crashing and taking forever to compile shaders notwithstanding.
The doctor is in Mar 31, 2023 @ 5:36pm 
I've been able to run the game for a good number of hours without any issues.

The thing is, you have to have a good bit of hardware that meets and exceeds the PS5-level hardware it was written for.

So:

8 true cores at 3.5 GHz or more.
I'm not talking about efficiency cores, I'm talking about performance cores.

Also, if you have a multi-die CPU, then you need to ensure that all game threads are running on the performance cores. Good luck.

I have an AMD 5700X, runs great.

Also, you need at least 10 GB of VRAM. I have 12 GB, so I run it at 1080p on the High preset.

But 16 GB of VRAM would be better.

Also, don't try to run the game while it's building the shaders, just wait.

I have an AMD RX 6700, runs great.

Also good to have 16 GB of RAM; I have 32 GB.

And the game installed on an M.2.

I have 2 of them, one for the system and another for games.

Then you need a good enough PSU and case ventilation.

With all that, your game will run great.

No crashes, no problems, no stalls, solid 60 FPS with vsync on.

I'm not sure what engine they are using.
Juub Mar 31, 2023 @ 5:39pm 
Originally posted by HOMERNOY:

Yeah, but Windows now supports Direct Storage, which is similar. Digital Foundry did a Forspoken comparison with the PS5, and the PC was faster (obviously with an M.2 NVMe).
Yeah, I saw this, but since it's been available only one game has used Direct Storage, and Forspoken used 2.1, which doesn't even have GPU decompression. 2.2 has been available for months and nobody has bothered using it.
Weird Potato Mar 31, 2023 @ 5:42pm 
Originally posted by The doctor is in:
with vsync on
Get a VRR monitor. It's pretty standard now. I'm surprised in 2023 anyone puts up with Vsync.
Pigmonkey Mar 31, 2023 @ 5:43pm 
Originally posted by Inferno:
I just want to point out that while you may understand programming, all your claims are based on assumptions. Also, the list of things you have experience with doesn't include any game engines (you only listed languages you know). I would also point out that it wouldn't matter if you had worked on a game engine like the ones you use in your comparisons (Unity, Unreal), because Naughty Dog uses their own in-house engine, and you have no idea how it works. I'm sorry if I'm coming off as rude, but I'm honestly just tired of posts from people claiming they know what's wrong when they don't.

You're essentially claiming you know how it all works, while at the same time telling us he can't know because no one can except Naughty Dog. It's a wash, then...

So how about not responding. He makes some very valid points, IMO.
Last edited by Pigmonkey; Mar 31, 2023 @ 5:44pm
HOMERNOY Mar 31, 2023 @ 5:47pm 
Originally posted by Juub:
Originally posted by HOMERNOY:

Yeah, but Windows now supports Direct Storage, which is similar. Digital Foundry did a Forspoken comparison with the PS5, and the PC was faster (obviously with an M.2 NVMe).
Yeah, I saw this, but since it's been available only one game has used Direct Storage, and Forspoken used 2.1, which doesn't even have GPU decompression. 2.2 has been available for months and nobody has bothered using it.

My point is that Direct Storage should start being used in PC games. Forspoken still works on slower drives, but if you have an M.2 in your system, it will solve all of the VRAM issues on PC.
AngryNapkin Mar 31, 2023 @ 5:49pm 
That huge wall of text just to say the developers are trash and can't code a game of Pong to save their lives.
Elenoe Mar 31, 2023 @ 5:52pm 
Originally posted by HOMERNOY:
Yeah, but Windows now supports Direct Storage, which is similar. Digital Foundry did a Forspoken comparison with the PS5, and the PC was faster (obviously with an M.2 NVMe).
Had they used it? But the OP's weird assumption that the game does this all the time is false. The game runs smoothly even on my dated architecture (9th-gen Intel) and in 4K, with presumably 14 GB of VRAM in use (AKA ultra textures and ♥♥♥♥). And the only time I experience brief FPS drops due to asset loading is at the start of every chapter, and it lasts about 4 steps and a 360 turn-around before it sorts itself out.

And this is not about "technology not being there yet." That part is about me not having what I could have (like PCIe 4) and about the engine's/devs' architecture that was built/tested/optimized for the PS5 with lower requirements/load. The game still works on a "loading a new area" idea instead of streaming. It just does it "seamlessly" once you jump off the cliff, that's all. And it's a heavy load on PC... done in the most generic way, it seems.
Shock_&_Awe Mar 31, 2023 @ 5:57pm 
Originally posted by Inferno:
I just want to point out that while you may understand programming, all your claims are based on assumptions. Also, the list of things you have experience with doesn't include any game engines (you only listed languages you know). I would also point out that it wouldn't matter if you had worked on a game engine like the ones you use in your comparisons (Unity, Unreal), because Naughty Dog uses their own in-house engine, and you have no idea how it works. I'm sorry if I'm coming off as rude, but I'm honestly just tired of posts from people claiming they know what's wrong when they don't.
This is pretty much where I lean as well. Most of this is assumptions. Maybe better assumptions than some, but still just assumptions. Without actually being in the industry, it's a tough sell to claim they know what is happening behind the scenes.
Last edited by Shock_&_Awe; Mar 31, 2023 @ 5:59pm
HOMERNOY Mar 31, 2023 @ 5:59pm 
Originally posted by Elenoe:
Originally posted by HOMERNOY:
Yeah, but Windows now supports Direct Storage, which is similar. Digital Foundry did a Forspoken comparison with the PS5, and the PC was faster (obviously with an M.2 NVMe).
Had they used it? But the OP's weird assumption that the game does this all the time is false. The game runs smoothly even on my dated architecture (9th-gen Intel) and in 4K, with presumably 14 GB of VRAM in use (AKA ultra textures and ♥♥♥♥). And the only time I experience brief FPS drops due to asset loading is at the start of every chapter, and it lasts about 4 steps and a 360 turn-around before it sorts itself out.

And this is not about "technology not being there yet." That part is about me not having what I could have (like PCIe 4) and about the engine's/devs' architecture that was built/tested/optimized for the PS5 with lower requirements/load. The game still works on a "loading a new area" idea instead of streaming. It just does it "seamlessly" once you jump off the cliff, that's all. And it's a heavy load on PC... done in the most generic way, it seems.

Honestly, I don't know. I don't have any problems with the game, but I have an RTX 4090 and an i7 12700K. I'm just thinking that if VRAM is an issue, it's cheaper to buy an NVMe drive than to replace your GPU, if Direct Storage were an option in some of these games.
Last edited by HOMERNOY; Mar 31, 2023 @ 6:01pm