Otherwise, lock the framerate to 60 FPS to reduce the load on your GPU.
Also, given you are probably using a "high seas" version of the game, that might come with its own issues. Maybe the exe file is using your GPU to mine bitcoin or something like that.
Blow out dust, check fans, check thermal interfaces, try undervolting the GPU
The issue is actually due to a VRAM leak, not what I initially thought. However, the high temperatures can be blamed on the game's poor optimization. I use an AIO cooler, and since CPU usage is low, the fan RPM on the radiator stays low, so hot air gets trapped and temperatures climb. By running the AIO fans at maximum speed while playing, the GPU stays more than 10°C cooler than it was previously.
It's odd that you assumed I would use that version of the game. I took advantage of regional discounts and some credit left in my Epic wallet to purchase it there. As I mentioned in my previous reply, the issue isn't related to high temperatures as I initially thought; it turns out to be a VRAM leak, which others online with a similar AMD CPU and 3000-series Nvidia GPU have also experienced. This was evident when watching memory usage during the game: it kept rising until it hit its maximum, which is when the lag began.
You mean what it should be under load, which is perfectly expected and normal?
Inadequate cooling does that, not a video game. Low 80s is a normal load temp for that card. I had a 3080, so I know this. It won't throttle until 83°C, and you can raise that limit to 93°C by moving the slider in MSI Afterburner.
If you don't like it, cap your frame rate, adjust your fan profile(s), or improve your cooling.
Keep your clown hat. You have no idea what you're saying and you sound idiotic.
Your GPU is designed to be used. Stop being dumb.
These could lower GPU temps by more than 10 degrees.
In your post #9 you mention you notice lag as VRAM climbs, but that isn't proof of a memory leak on its own. It would be one thing if usage sat at, say, 5.8 GB for hours on end and then suddenly climbed to something like 12 GB, but that pattern isn't something you can confirm on your GPU, because the game already uses a lot of VRAM relative to your 8 GB card. At basically any 4K setting you're pushing the limit; on cards with more memory the game can reach 13.5 GB of VRAM from completely normal use. Even at 1440p it may be maxing out a mere 8 GB GPU, especially without any upscaling. Your GPU simply doesn't have enough VRAM headroom to gauge a memory leak given this game's typical usage; someone with considerably more VRAM would need to observe that kind of unusual growth. This is also why RAM-based memory leaks are often easier to spot: people tend to have far more system RAM than they need, whereas their VRAM is often already being pushed to its peak by the games they run.
The game cannot be blamed for making your GPU too hot. That would be your system having inadequate cooling for your configuration. With adequate cooling it would be stable no matter the load, even running Furmark and the like. Your GPU should be able to sit at 100% usage, exercising all of its hardware and any type of instruction set, and remain stable.
As an example, a common issue seen in the past (still relevant now, though much improved) is AVX instruction sets on the CPU. They were rarely used, so when people ran in-game stress tests, or even the more demanding Prime95 and similar applications, everything appeared fine. But when they ran Prime95 with AVX instructions enabled, or played the Battlefield 5 release, one of the first games to make heavy use of AVX in its code, temperatures suddenly rocketed to unstable levels, because they had never truly stress tested their PC properly or updated their cooling accordingly. As another example, a good deal of the gamers here can run games fine but can't survive the most extreme Furmark settings on their GPU, or heavy Prime95/OCCT/etc. configurations on a weaker, less demanding CPU. I can do so just fine: my GPU doesn't break 63°C at the most extreme settings, and my overclocked 16-core/32-thread CPU tops out around 74°C despite being vastly more demanding than most people's CPUs. Why? Because I have stable, adequate cooling, and I've confirmed through thorough, proper testing that it will hold no matter the load, for days or more.
100% GPU usage in GoT is not the same as 100% usage in another game or something like Furmark. They're not equivalent: the 100% figure hides a lot of context and is only an end-point number missing tons of detail.
Further, a game isn't poorly optimized just because it uses your GPU more than your CPU. It depends on how the load is distributed and which piece of hardware caps out first. For instance, a CPU might never go past 20% in a game because of limited multi-threading: most games and engines can't use every core/thread you have, and some are essentially single-threaded to begin with. So seeing 20% does not mean your CPU isn't the bottleneck. Checking individual CPU threads may show some of them pinned at 100%, and in that case, lowering your GPU settings dramatically wouldn't change your FPS much, because the CPU is the limit. The same can happen in reverse for the GPU. GPUs are usually the first to bottleneck because of how scaling works, especially with a more competent CPU; game CPU usage hasn't climbed nearly as fast as GPU demands, so GPUs have been the main bottleneck for over a decade now. There are exceptions, of course, like some CPU-intensive RTS (real-time strategy) games, but they're very rare.
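If you want to check this more rigorously than eyeballing an overlay, one option is to log VRAM usage over a long session and see whether it plateaus or keeps climbing. A minimal sketch, assuming an NVIDIA GPU with nvidia-smi available on PATH; the output file name and poll interval are arbitrary choices:

```python
# Sketch: log GPU VRAM usage over time to see whether it plateaus or keeps climbing.
# Assumes an NVIDIA GPU with nvidia-smi on PATH. Stop it with Ctrl+C.
import csv
import subprocess
import time

POLL_SECONDS = 30
LOG_FILE = "vram_log.csv"  # hypothetical output file name

def query_vram_mib() -> int:
    """Return current VRAM usage in MiB as reported by nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"],
        text=True,
    )
    return int(out.strip().splitlines()[0])  # first GPU only

with open(LOG_FILE, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "vram_used_mib"])
    start = time.time()
    while True:
        writer.writerow([round(time.time() - start), query_vram_mib()])
        f.flush()
        time.sleep(POLL_SECONDS)
```

A steady plateau near the card's capacity just means the game is using what it has; a line that only ever climbs until stutter starts, and resets after a game restart, is the pattern people usually point to as a leak.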
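One way to see the context hiding behind a flat "100%" figure is to sample power draw and temperature alongside utilization: two workloads can both report 100% while drawing very different amounts of power and producing very different heat. A rough sketch using standard nvidia-smi query fields; the sample count and interval are arbitrary:

```python
# Sketch: sample GPU utilization alongside power draw and temperature, since two
# workloads can both report "100% usage" while stressing the card very differently.
# Assumes an NVIDIA GPU with nvidia-smi on PATH.
import subprocess
import time

def sample():
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,power.draw,temperature.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    util, power, temp = (v.strip() for v in out.strip().split(","))
    return int(util), float(power), int(temp)

for _ in range(10):  # ten samples, five seconds apart
    util, power, temp = sample()
    print(f"util={util}%  power={power}W  temp={temp}C")
    time.sleep(5)
```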
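To check for this kind of hidden CPU bottleneck, you can watch per-core usage while the game runs instead of trusting the overall percentage. A small sketch assuming the third-party psutil package is installed (pip install psutil); the sample count and interval are arbitrary:

```python
# Sketch: print per-core CPU usage, since an overall figure like "20% CPU" can hide
# one or two threads pinned at 100% (a real CPU bottleneck).
# Assumes psutil is installed: pip install psutil
import psutil

for _ in range(12):  # roughly one minute of samples
    per_core = psutil.cpu_percent(interval=5, percpu=True)  # blocks 5 s per sample
    overall = sum(per_core) / len(per_core)
    busiest = max(per_core)
    print(f"overall={overall:5.1f}%  busiest core={busiest:5.1f}%  per-core={per_core}")
```

If the busiest core sits near 100% while the overall number looks low, lowering GPU settings won't raise your FPS much.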
What can you do?
You could improve your cooling situation in various ways:
- Lower the room temperature; ambient has some impact, though this isn't ideal and shouldn't be necessary.
- Clean out any dust buildup in the system that could be causing issues.
- Make sure the computer isn't placed somewhere that chokes its airflow.
- Limit the FPS, as others have mentioned, so the GPU doesn't sit at 100%, since it's struggling at that load in this game.
- Tidy up messy cabling inside the case; people often underestimate how much poor cable management hurts airflow.
- Undervolt the GPU, which often gives nearly identical performance (e.g. ~95% of stock) with dramatically less power draw and heat; this is almost certain to solve your issue at a minor performance cost (a rough sketch follows below).
- Make sure it isn't a coincidence and that your GPU's fans (especially if it has several) aren't starting to fail, e.g. two fans work but the third is dying.
- Test other GPU-demanding games and see whether you actually get similar results now.
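On the undervolting point above: a proper undervolt is done on the voltage/frequency curve in a tool like MSI Afterburner, but a quick, reversible way to get a similar thermal effect is to lower the board power limit with nvidia-smi. A sketch, assuming an NVIDIA GPU and an elevated/administrator prompt; the 250 W target is purely illustrative, not a recommendation for any specific card:

```python
# Sketch: cap the GPU board power limit with nvidia-smi as a rough stand-in for a
# proper undervolt (true undervolting is done on the V/F curve in MSI Afterburner).
# Requires administrator privileges; the change typically resets after a reboot.
import subprocess

TARGET_WATTS = 250  # hypothetical value; check your card's default/min limits first

# Show the currently enforced and default power limits before changing anything.
print(subprocess.check_output(
    ["nvidia-smi", "--query-gpu=power.limit,power.default_limit", "--format=csv"],
    text=True,
))

# Apply the lower limit (run from an elevated prompt).
subprocess.run(["nvidia-smi", "-pl", str(TARGET_WATTS)], check=True)
```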
I appreciate your perspective, but several sources and user reports suggest that there are indeed VRAM leak issues and optimization problems with the PC port of Ghost of Tsushima.
First, user experiences and technical discussions highlight recurring VRAM leaks. Users have observed significant increases in VRAM usage over time, leading to performance degradation. Restarting the game often resets VRAM usage to normal levels, a classic indicator of a memory leak accumulating over time. For example, there is a Steam thread where users report the same issue: https://steamcommunity.com/app/2215430/discussions/0/7093810350792471869/
Moreover, benchmarks and performance reviews of the game have shown that it can push hardware to its limits due to poor optimization. TechPowerUp's comprehensive benchmark review indicates that the game's VRAM usage can be extraordinarily high, particularly on GPUs with lower VRAM capacities. This excessive demand can lead to performance issues and overheating, which are often signs of suboptimal resource management by the game: https://www.techpowerup.com/review/ghost-of-tsushima-benchmark/6.html
Additionally, poor optimization leading to high temperatures is a common issue observed in many initial PC ports. When a game is not well-optimized, it often leads to inefficient use of system resources, causing components like the GPU to run hotter than they would with better-optimized games.
80°C or a little more is probably normal for the 3070 Ti desktop GPU in eye candy mode. Enable DLSS, cap the framerate to 60, and turn down a few settings; you should easily be able to drop below 80°C.
As an experiment, I'd run in potato mode just to see what temps you get and work your way up from there.
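If you try that, it helps to record temperatures for each preset as you step the settings back up, so you can see which change actually pushes the card past your comfort point. A minimal sketch, again assuming nvidia-smi is on PATH; the preset label and one-minute sample window are arbitrary:

```python
# Sketch: log GPU temperature for one settings preset at a time while the game runs,
# so you can compare "potato mode" against higher presets as you step up.
# Assumes an NVIDIA GPU with nvidia-smi on PATH.
import subprocess
import time

def gpu_temp_c() -> int:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=temperature.gpu", "--format=csv,noheader,nounits"],
        text=True,
    )
    return int(out.strip().splitlines()[0])

label = input("Settings preset you are testing (e.g. 'potato', 'medium'): ")
temps = []
for _ in range(60):  # sample once a second for a minute per preset
    temps.append(gpu_temp_c())
    time.sleep(1)
print(f"{label}: min={min(temps)}C  avg={sum(temps)/len(temps):.1f}C  max={max(temps)}C")
```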