Look at my post here. You might be able to fix it yourself. It's based on an AMD CPU, but it could work for Intel. I lowered my CPU usage from 100% to 50-60% while running HD2.
In theory, if you're CPU-bottlenecked, then changing your resolution, frame cap and so on is not going to have any effect, simply because those settings free up GPU resources, not CPU time. Not unless the graphics options you change also take a toll on the CPU.
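A toy way to see that, as a minimal sketch with made-up numbers (nothing below is measured from HD2 or any real hardware): the frame is only finished when both the CPU side and the GPU side are done, so the effective frame time is roughly the larger of the two.

```cpp
#include <algorithm>
#include <cstdio>

// Toy bottleneck model: effective frame time ~= max(cpu_ms, gpu_ms).
// All numbers are hypothetical, just to illustrate the point.
int main() {
    double cpu_ms = 22.0;            // simulation, draw-call prep, etc.
    double gpu_ms_at_1440p = 16.0;   // GPU work at the higher resolution
    double gpu_ms_at_1080p = 9.0;    // dropping resolution only shrinks this side

    double fps_1440p = 1000.0 / std::max(cpu_ms, gpu_ms_at_1440p);
    double fps_1080p = 1000.0 / std::max(cpu_ms, gpu_ms_at_1080p);

    // Both print ~45 fps: the CPU side is the ceiling either way.
    std::printf("1440p: %.1f fps, 1080p: %.1f fps\n", fps_1440p, fps_1080p);
}
```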
Just starting the game ran my CPU at 100% until I fixed my RAM timings.
It is the weather. It is getting warmer.
By the way, if you take a look at CPU usage in Witcher 3, Cyberpunk, Alan Wake, Control, the Battlefield games, Warhammer, Elden Ring (which actually has an identical "problem" to Helldivers 2: a resubmit thread that waits for a memory transfer, making thread saturation, or "CPU%", higher), RDR2... Spintires... a super old game, but it has a resubmit-to-GPU-memory routine and it's heavy on the CPU... Dishonored can be pretty heavy, depending on settings. Helldivers 2 is nothing special.
Cyberpunk, for example, had a persistent "fault" reported for yonks about not being able to utilize more than 60% of CPU time. Some optimisations (memory-based, like what is likely going on with the people complaining about HD2) could fix that and push the CPU time to 90% or more, while also increasing the GPU time. Which then started a wave of "waaah, overheat, game broke my PC, buy Intel/AMD, they are honest and love you, because that's how corporations work" complaints instead.
I wouldn't care about any of this, wouldn't give it a second thought, wouldn't laugh about it or anything like that, even when the volume is turned up to 11 on the forums.
/If/ it weren't for the fact that community manager cretins collect this kind of "feedback" and use it, together with corporate, to justify "recommendations" to the developer about reducing CPU loads and lowering utilisation of cores, the GPU, and so on in general. I've seen first-hand that this motivated changes to console games, in the belief that reducing CPU/GPU utilisation would keep the consoles from breaking. Grown-up people sat and argued, in all seriousness, that since consoles broke on that particular game (at the same time as sales of the console tripled, and things like that), the game must be the reason the cooling failed. There were people arguing that even just the perception that a game would break the console would affect sales.
So just... you know... lay off the exaggerations a little bit. And try to understand that a game developer really can't "fix" a CPU setup with half-rate RAM timings, or an overclock that is unstable. You can complain all you like, and even get the developer to change the game in various ways to mitigate a problem (which has happened).
But it is physically impossible to somehow adjust bad configs away from running badly, without basically going: actually, we're just going to develop the game for a 1080p@30fps target, calculate the CPU budget for a dual-core at 3.4 GHz (without shared cache/hyperthreading), of course lay off the SSE4 registers, and completely avoid any kind of resubmits to VRAM within the 16.6 ms/60 fps window for animation and such. Here you go: Tetris for everyone!
Without that approach, your entirely capable PC, which would run fine were it configured well, is still going to run into problems.
And in the meantime, every other person playing the game is going to be blessed with some ham-fisted table tweak, put in by an outsourced contractor for the publisher, that ruins the game.
Success...? Yay? You know... check your "feedback" a little bit, because these community manager people are insane, and are egged on by their employer to find "stuff that appeals to the community", as one of them put it while presenting downloadable hats in an FPS game, almost a year after it was developed to completion.
You might get your wishes addressed in some way - but you're not going to get them fulfilled. Unless your wish is "ruin the game for everyone, including myself". Then you're in luck. That wish they can and do fulfill all the time.
It's because Intel, based on previous rulings, holds the advantage in any potential lawsuit, which makes it effectively illegal for hardware manufacturers to "emulate" the x86 instruction set universally.
The reason for it is that Intel (and to a great extent AMD as well, even though they were stopped from making generic "compute" cores that would run GPU/shader programs as well as CPU tasks -- it would just be a core with a programmable or semi-programmable instruction set) wants to keep the "PC" schema intact. That is, making CPUs that fit into a socket on a mainboard, which then connects to a memory bus, which has a PCI bus where the "graphics card" goes. This is generally called the ISA, the Industry Standard Architecture.
Is this standard a good one? From the start, there were challengers to this schema in architectures like the one the Amiga was based on. But the cost of having (simplifying a lot here) L1 cache sizes that these instructions could be prepared in and run from (the Amiga had basically all of its "RAM" comparable to an x86 processor's L1 cache; it was the most important component of that processor) stopped this architecture from staying competitive, for a certain value of "competitive", very quickly. After 3dfx took off, there were genuine advantages to having a "graphics card" on the side as well, because what would turn into "shader code" and so on could run so much faster in parallel/SIMD (single instruction, multiple data) setups on dedicated hardware than it would on a CPU (even with many cores). So at the time, a lot of people thought of the 3D cards (especially Glide-compatible ones) as more or less a cheap external computer specialized for a particular task.
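To make the SIMD point concrete, here's a minimal sketch (plain CPU-side SSE, not tied to any game, driver, or GPU API): one SSE instruction already operates on four floats at once, and shader cores push that same "single instruction, multiple data" idea across thousands of lanes.

```cpp
#include <immintrin.h>
#include <cstdio>

// Minimal SIMD illustration: one SSE add instruction processes four floats.
int main() {
    alignas(16) float a[4]   = {1.f, 2.f, 3.f, 4.f};
    alignas(16) float b[4]   = {10.f, 20.f, 30.f, 40.f};
    alignas(16) float out[4];

    __m128 va = _mm_load_ps(a);
    __m128 vb = _mm_load_ps(b);
    _mm_store_ps(out, _mm_add_ps(va, vb)); // four additions in one instruction

    std::printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
}
```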
Since then, graphics cards have turned into a more and more general computing platform. It's not that the SIMD work runs against the bulk VRAM directly: any RTX or XT card will have a ton of RAM on the card, but all the shader operations, and even specialized Nvidia-proprietary stuff like CUDA, actually operate on a smaller area of memory that is still on the graphics card, but separate from the bulk storage RAM.
With how most toolkits are developed now, this makes a lot of sense, and graphics drivers for Nvidia and Radeon cards actually use system RAM for streaming resources now, instead of preloading infinite amounts of data into "VRAM" storage. It's also much quicker to have faster RAM closer to the GPU chiplets - but again, that faster RAM is of course more expensive. In the same way, BAR is another part of this: it's a dedicated piece of memory on the graphics card that the "CPU" main threads can put data into without waiting for all other memory operations. Which helps with the extremely expensive, but sometimes necessary, resubmit operations when using CPU logic on GPU-computed frame data. Occlusion checks, for example, can be approximated on a GPU - but it's so expensive that structuring the program to use more advanced math than your typical shader/stream processor can handle will save you an absurd amount of time.
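For the curious, here is a minimal Vulkan-flavoured sketch of what "using BAR" means in practice. This is an assumption-laden illustration, not how any particular engine or HD2's renderer does it: with resizable BAR the driver typically exposes a memory type that is both DEVICE_LOCAL and HOST_VISIBLE, so the CPU can map it and write straight into VRAM instead of going through a staging buffer and an extra copy submission.

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>
#include <optional>

// Hedged sketch: look for a memory type that is both DEVICE_LOCAL (lives on the
// card) and HOST_VISIBLE (mappable by the CPU). With resizable BAR the driver
// usually exposes one; without it, such a type may be tiny or absent, and
// uploads go through a staging copy instead.
std::optional<uint32_t> findBarMemoryType(VkPhysicalDevice gpu) {
    VkPhysicalDeviceMemoryProperties props{};
    vkGetPhysicalDeviceMemoryProperties(gpu, &props);

    const VkMemoryPropertyFlags wanted =
        VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT |
        VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT |
        VK_MEMORY_PROPERTY_HOST_COHERENT_BIT;

    for (uint32_t i = 0; i < props.memoryTypeCount; ++i) {
        if ((props.memoryTypes[i].propertyFlags & wanted) == wanted)
            return i; // allocate from this type, vkMapMemory it, memcpy directly
    }
    return std::nullopt; // no BAR-style memory type exposed on this setup
}
```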
In short, a GPU on an ISA bus is a schema for taking care of a specialized type of task, originally intended to offload the CPU. A lot of operations on polygons and pixels are still faster in "hardware" (GPU) rather than "software" (CPU). But increasingly, especially as compute targets running on multicore CPUs take off, even on x86, what is really happening is that the "CPU" tasks are becoming broader, while the "GPU" tasks are getting more general as well.
So in a sense, you would actually be correct in suggesting that, to a certain extent, it would be possible to balance tasks between a GPU and a CPU equally. But of course, that's not how this convention works - people still develop games and graphics with one traditional main thread, multiple calculation threads, and a graphics thread that may or may not need partial CPU time (like in Helldivers 2).
The convention itself is really designed so that the CPU simply prepares operations so the GPU can run at full speed.
The problem is that if you write logic like that, you are reliant on never changing the graphics context dynamically - or else approximating those changes with compute logic that is extremely expensive in terms of preparation time and resubmits. And of course, on a modern GPU more than half of the computation power is basically spent running anti-aliasing filters and super-sampling on the same data stream, while many modern games are still developed the way the first cutscene-like games were: the 3D card is basically packed with a scene and then renders it out (with manual optimisations, static resources, and so on) in a way that really could just as well have been rendered beforehand and played back as a video.
If you wanted to do something more interesting, however, you would need to start using more complex logic - and a GPU can't actually do that. If, for example, you are at some point wondering why the really, really bad GPU in the PS3 managed to display and render things that still can't be emulated in real time - even with simplifications, and even when avoiding instruction-level resubmits every frame (a way to dodge some of the crunch that is also used in a lot of modern, quintuple-A PC games, where animation, geometry and things like that are not deformed or rigged in real time but at a lower framerate) - it's basically because the GPU was practically only used for rendering the final product into the front buffer, while the generic streaming processors were used to run the shader logic in a stacked format.
So a processor similar to that, with cores that run programmable instruction sets, would be dynamic enough to basically "balance" GPU and CPU tasks perfectly. You could easily imagine an external bay of some of these cores still existing to take care of higher-latency work, while the "CPU", or the core complexes closer to the memory bus, would do the lower-latency work. But all these cores would then basically be generic, and simply programmed with (probably proprietary) instruction sets that do graphics operations. Examples of this approach are the Switch (based on Nvidia's Tegra chipset - an ARM chipset with programmable cores, carrying Nvidia graphics instructions, next to the generic processing elements) or Apple's M1 (which is an overclocked phone chip), which like all these other ARM devices basically has an embedded graphics core unit next to each "processor". Standard toolkits will use optimisations to run graphics operations on available cores like this, and it works great. There are challenges to it, of course, like there are with any architecture.
And a PC on x86 was, of course, originally made for executing linear, generic code that wouldn't need to be specialized for the quirks of any particular architecture. The question, of course, is whether you really need x86 on a computer like this. Or if we ever did. Since we could compile this stuff on server farms for the various architectures anyway, it is akin to making every "home user" keep a car factory in their garage so they can build the car every time they want to get groceries. Where the windshield is deployed by a second factory that runs ever more supersampling, with a shameful but "necessary" fps drop once in a while when the factory is not running at optimal build efficiency...
But sooner or later, Intel will go out of business, or RISC and RISC-V are going to completely absorb the whole games sphere. Because we are extremely close now to the point where small architectures are quick enough to serve a series of GPU modules for "traditional" GPU-bound computing. Or where cores with integrated or programmable instruction sets - for graphics, AI, various compute tasks, and so on - can be targeted and simply run on infinitely cheaper hardware than a "PC", and faster than what you have to deal with when chugging large amounts of memory across a PCI bus so outdated it genuinely should have been retired two decades ago. For example, the Ryzen chipsets with "integrated" graphics modules are faster than usual on memory operations and resubmits (with absurdly high compute scores in benchmarks) because they circumvent the standard memory controller altogether.
For the time being, though, you're going to be stuck with a 3000-euro graphics card that, the instant there is a resubmit between CPU and GPU logic, is going to chug to high heaven. And where the memory bus is going to be so slow that actively waiting threads poll memory again and again to check whether the transfer is complete, resulting in very high CPU%/thread saturation.
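That "waiting thread shows up as a pegged core" effect is easy to reproduce in isolation. A minimal sketch follows; the flag just stands in for a fence or copy-complete signal, and nothing here is taken from any actual engine. A spin-polling wait burns a full core while it waits, a blocking wait costs almost nothing.

```cpp
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <mutex>
#include <thread>

// Stand-in for "the GPU copy / memory transfer has finished".
std::atomic<bool> transfer_done{false};
std::mutex m;
std::condition_variable cv;

// Shows up as ~100% of one core in Task Manager while doing no useful work.
void spin_wait() {
    while (!transfer_done.load(std::memory_order_acquire)) {
        // aggressively re-check the flag; this is the "polling" pattern
    }
}

// Sleeps in the OS until notified; near-zero CPU time while waiting.
void blocking_wait() {
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return transfer_done.load(std::memory_order_acquire); });
}

int main() {
    std::thread spinner(spin_wait);
    std::thread sleeper(blocking_wait);

    // Pretend the "memory transfer" takes 50 ms, then signal completion.
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    {
        std::lock_guard<std::mutex> lk(m);
        transfer_done.store(true, std::memory_order_release);
    }
    cv.notify_all();

    spinner.join();
    sleeper.join();
}
```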
*lecture time with retroquark* all done now.
You can bring a horse to the river but you can't force it to drink.
And then they blame whatever game they are playing right now. It happens, again and again. Paradox games that are basically glorified text adventures, 2D platformers, you name it - they all get these kinds of posters complaining at some point. I once got ranted at by a guy who opened with a long tirade about how this clearly wasn't just the weather getting warmer, and how he had noticed, by sheer chance, that the cooling goop was getting a bit old just as the weather warmed up - oh, no! He knew what the problem was: the game was crap and the developer was a "Hungarian ♥♥♥♥-show circus". He had done his own research, and he knew it wasn't the glorious PC hardware from the blessed northern landmasses that was at fault.
And then it turned out the cooling goop really was degrading, which we could all see from high core temps alongside relatively low readings on the external sensors. And that didn't change his tune one bit - clearly(!) the goop cooked because of this ONE game! If he hadn't played this game, all would be well!
I'm not saying people never have legitimate problems, or that games won't have weird bugs that cause all kinds of CPU spikes, or just occupy cores with threads that do nothing. That happens a lot. And it might be that some people are running a setup that triggers a specific problem.
But in this case, I'm running this game at around 40 fps on a 6800U (a 15-29W processor, on the flimsiest laptop cooling known to man that OEM cretins can buy for chocolate coins) at a surprisingly low thread%. And people turn up with old and new CPUs that might not even have 8 cores, and still manage to run the game fairly well at a not-too-high CPU%. People turn up with the same hardware as someone with issues and point out, with evidence, that their CPU% is nowhere near what the other person is getting - even at 4K with 99xFSAA and things like that added to a super-downsampling run, just to push all the GPU cores to 90%.
And people still go: "ohmygawd, it's the bad optimisation, and the developer sucks". Those Swedish tårta-eating nalle-tweakers are ruining their PCs. And this considered, very rational opinion is extremely important to everyone! And the publisher must listen, or else suffer the consequences of a screaming guy on the internet! I wish I was exaggerating. But then you get the first guy who threatens to go and buy astroturfing from whatever replaced NeoGAF - and then goes through with it, just to make his opinion seem common. And then the developer actually does take it into account(!), because the publisher is beset by morons in an echo chamber of like-minded people. After the first time you see that, you start to sort of understand how some of these absurd decisions in the gaming and tech industry get made.
I'm just saying that if people ran some tests, posted some HWiNFO64 graphs, and maybe some data on memory timings and things like that, then we could figure out what is going on and help out a little bit. And if we can't, maybe there is an issue that could be handed to the developer to look at - one that now has enough context and data around it that they can actually make sense of it.
But the developer can't pre-emptively "fix" something like this without fundamentally changing how the game engine renders and computes the game world. Which, again, still would not make every game run well on a broken, badly configured PC. And it's extremely likely that the go-to fix for most publishers ("just disconnect the graphics from the main thread!") is not something you want to see happen to a game you like - a game you probably play precisely because it is dynamic and has unusual and interesting effects going on in it. Because those effects would now be gone.
Removing those interesting things pre-emptively to silence screaming gamers on the Internets does happen, though. And it's a genuine shame that it does.
But even after all that, the guy with the borked PC still has a borked PC anyway. Just wanted to point that out.