A Plague Tale: Requiem and Cyberpunk 2077 - visit the market areas. They will destroy your CPU because AI is constantly spawning and despawning.
Crysis Remastered - Look at the overlook at the beginning of the game. It destroys even modern CPUs to this day because it's heavily dependent on single-threaded performance, which no CPU has enough of.
Physics: Look at Rockstar's RDR2 and GTA 5. CPU usage spikes every time your character falls or does something complex.
I run Space Engineers, and underneath the VRAGE engine is the Havok physics engine. A lot of the physics interactions are single-threaded from what I can tell, which makes the game very stuttery.
However, since physics is an extremely important part of a physics sandbox game, it is just something to be accepted in that game.
Why are you in this thread then? This thread is a discussion of which video was more objective. The HUB video was superficial, as I pointed out, so by all objective metrics the DF video was better. DF did not bother with the 8GB nonsense; they got to the heart of the issue, which is the game's crappy optimisation, while Steve made it a battle between AMD and Nvidia.
Neither of the channels played the game long enough. If you get into Pittsburgh, the 4090 runs at around 70% usage on a 12900K.
Digital Foundry's purpose was to understand what was wrong with the port because it had so many issues; benchmarking the game was pointless, as it would throw out outlier results.
Steve should not have gone ahead with the benchmark because higher-end GPUs are being held back by the awful CPU optimisation and there is terrible stuttering on entering new areas. Lower-end GPUs can barely run the game properly. The game allocates 5GB of VRAM for Windows on a 4090.
Benchmarking a shoddy game in this manner would throw out irrelevant results which do not reflect the actual performance of the GPUs, since the limiting factor is the game and not the GPU.
None of us can predict the future, but we can draw on objective facts to conclude certain things. The vast majority of the PC market uses Nvidia and is on 8GB cards. It's on the developers to optimise for these constraints, and if they do not, their games will be tossed into Steam's hall of shame like this game was.
You really should stop thinking so much about the specifics of the PS5 architecture. It honestly doesn't matter much, because this game is not a PS5 game; it's a PC game. That's the whole point of a port. It's a PC game now, so it should perform accordingly. You guys defend the VRAM hunger by pointing at the PS5's VRAM. Instead you should think about what quality of textures other games offer for a certain amount of VRAM. TLOU does not have better textures than other games, yet on medium (which is seemingly already the maximum you should go for with 8GB) it often looks like a PS3 game (which had 256MB of VRAM). Fact is, the game is unnecessarily hungry compared to other PC games, which means they did not properly optimize it for PC, which means it's a bad port.
Just for giggles I'm gonna go and take a peek at these performance levels at 720p and 1080p respectively. Obviously I'll turn DLSS off at 720p and 1080p, but at 1440p and 4K I have DLSS Quality on.
But the main reason I play at 4K, high preset with geometry on ultra and all the high preset's half-resolution options changed to full resolution, is that it gives me the most stable frametime experience without sacrificing visual fidelity. The mouse stutter is really the only thing that causes me any stutter whatsoever when I play. Even loading into new scenes, performance doesn't drop unless I go from a low-demand scene (50-60fps) to a high-demand scene (40-50fps). Even after all the background loading is done, performance doesn't increase back up for me, since it didn't decrease in the first place, because I'm not actually losing real-time gameplay performance on my CPU or GPU to whatever it has to finish loading during this process.
It is irrelevant that the game was developed with the PS5 as a baseline. When you port it to PC, it needs to meet the standards and requirements of the PC market, the majority of which is on 8GB cards. The PC market is not going to sell off their PCs and buy higher-end GPUs to make the devs' lives easier. What's likely to happen is they will destroy the game's review score and toss it into Steam's hall of shame, with only higher-end GPU owners or AMD users buying the game, and there aren't many of those.
If you are going to develop for PC, do it right or don't port at all.
The medium textures look like something from a game of the early 2000s, which is unacceptable when you look at beautiful games like A Plague Tale: Requiem that do run on 8GB cards.
Adjusting all settings to the high preset gives the same performance and the same stutter. Though with geometry turned up to high, the CPU bottleneck actually lasts long enough to notice it visibly, with peak CPU usage at 60% and lowest GPU usage at 90%.
The FPS cap I get without turning graphics settings down to force high FPS is 90, averaging 78 with 1% lows hitting 25. Why would I play at 1080p with performance that inconsistent, since the CPU begins to hit a bottleneck there, when 4K targeting a near-60fps experience gives me fluid frametimes and buttery smooth performance?
I can't replicate these 100% CPU usage situations people complain about unless I go into my BIOS and actually reduce the power limits so it won't ever use more than 95W. But I have it unlocked because my motherboard supports the peak total watts this CPU can draw without an overclock.
A lot of these 100% CPU usage issues are likely a combination of game performance problems from lack of optimization and potentially bad motherboard-CPU pairings or poor BIOS tuning for the installed CPU. That's very common when a low-priced motherboard is paired with a higher-end CPU.
It got way, way more positive reviews than negative ones, which is not something we can say about TLOU, which started out "mostly negative" at just over 30% and is still very low (47%). Returnal is at over 80%.
The difference between TLOU and Returnal is that the latter can scale really well and still deliver nice textures even for cards with significantly less VRAM than the PS5.
Maybe after some months of patching, sure. At release it was quite a mess. For everyone. And it's still lacking essential PC features, to be honest. I wouldn't quite call that initial state unplayable (I mean, I played it after all). And I'm not sure DF did really call it that. But it was pretty bad nonetheless.
Seeing that we have countless other games that offer similar or better visuals with less VRAM and a lot less CPU load, I'm certain we could have gotten a better port.
DF played on three systems and compared differences in look and performance. More people were likely looking for that information, which is why more people liked it. What I dislike about the video is that it doesn't show what kind of PC is required to run the game like on the PS5. What's even worse is that the way they conducted the analysis made people believe that the PC version looks worse or is broken.
Interesting fact: Flight Simulator is also heavily CPU bottlenecked, even at 4K native, when using a 4090. So this game should be a good benchmark for both CPU and GPU.
In my opinion, it's quite the opposite. Cyberpunk is likely the most popular GPU benchmark. HUB made a dry benchmark, while DF performed a game analysis. These are two completely different videos that do not contradict each other.
I see your point, and maybe you are right. I think, however, we will see a repeat of the PS4 ports, when almost overnight 4GB wasn't enough for many games.
That is not the objective of the exercise.
The objective is to intentionally turn graphics settings down to force high FPS and run at about 30% to 50% GPU usage with 100% CPU usage. This is not applicable to all areas.
AI is reportedly key to hitting the 100% CPU usage.
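For anyone who wants to check that condition against their own captures, here is a minimal sketch. It assumes a hypothetical per-second CSV log with 0-100 "cpu_util" and "gpu_util" columns; the file name and column names are illustrative, not tied to any particular monitoring tool:
[code]
# Minimal sketch: flag CPU-bound samples in a hypothetical utilization log.
# Assumes a CSV of per-second samples with 0-100 columns "cpu_util" and
# "gpu_util"; adjust the names to whatever your monitoring tool exports.
import csv

def find_cpu_bound_samples(path, gpu_ceiling=50.0, cpu_floor=95.0):
    """Return indices of samples where the GPU is starved while the CPU is pegged."""
    flagged = []
    with open(path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            gpu = float(row["gpu_util"])
            cpu = float(row["cpu_util"])
            # Low GPU usage plus maxed CPU usage is the signature described
            # above: settings turned down, FPS uncapped, CPU can't keep up.
            if gpu <= gpu_ceiling and cpu >= cpu_floor:
                flagged.append(i)
    return flagged

if __name__ == "__main__":
    hits = find_cpu_bound_samples("capture.csv")
    print(f"{len(hits)} CPU-bound samples found")
[/code]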
You wouldn't.
The question is why others would.
You can achieve 50-60 FPS with 8 cores at 4.9GHz sustained. That is roughly 1.5 extra cores' worth of compute over my 8 cores at 4.1GHz sustained, about a 20% performance increase.
4.1GHz × 8 cores = 32.8GHz of available compute capacity ≈ 40 FPS.
4.9GHz × 8 cores = 39.2GHz of available compute capacity ≈ 48 FPS (about 1.2 × 40), in line with the reported 50.
Intel has been tossing out quad cores for a decade; we're just now entering the 8-core era.
3.6GHz to 3.8GHz per core is the baseline; "Turbo" is not how one advertises a supported frequency. Requiring sustained 4.9GHz on an 8-core CPU, roughly 40GHz of aggregate compute, for 60 FPS is very heavy on the system requirements.
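As a rough illustration of that back-of-the-envelope math, here is a sketch that scales FPS linearly with aggregate clock (cores × sustained GHz). It only reproduces the estimate being made above; it is not a real performance model, since games rarely scale perfectly across cores:
[code]
# Sketch of the linear scaling estimate above: FPS is assumed proportional to
# cores * sustained GHz, calibrated to the 8-core 4.1GHz ~ 40 FPS baseline.
def estimated_fps(cores, ghz, baseline_fps=40.0, baseline_capacity=32.8):
    """Scale the measured baseline (8 cores at 4.1GHz ~ 40 FPS) to another CPU."""
    return baseline_fps * (cores * ghz) / baseline_capacity

print(round(estimated_fps(8, 4.1), 1))  # 40.0 - the baseline itself
print(round(estimated_fps(8, 4.9), 1))  # 47.8 - close to the reported 50 FPS
[/code]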
...
It was exacerbated when adjusting settings, since there is no "Please Wait" on apply and the change causes an additional performance impact.
I bought the game with the expectation that 720p/60/Medium would be an enjoyable experience. It was not worth a pre-order at full price.