https://www.youtube.com/watch?v=aIWai4acAhw
We all tried many times to reason with him and give detailed explanations, but he ignores and refuses everything. He just does not want to admit the truth. I guess it's some sort of "self-protection" for him.
Come on, this is even below kindergarten level now.
Here is a summary again of what is eco-simulated that results in the game's huge total of 94 threads: https://imgur.com/a/xjuvWbY
And around 20 of them are live world-environment calculations (memory address associations). Which means, in theory, the more cores your CPU has, up to 20 of them can have a positive impact on the game's performance and work against the CPU bottleneck.
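The idea that cores help "up to 20" and then stop can be sanity-checked with Amdahl's law. A minimal sketch, assuming (purely for illustration, the 80/20 split is made up, not measured from the game) that 80% of a frame's CPU work is spread across those world-simulation threads and 20% stays serial:

```python
def amdahl_speedup(p: float, n: int) -> float:
    """Amdahl's law: speedup with n cores when a fraction p of the
    work is parallelizable and (1 - p) must run serially."""
    return 1.0 / ((1.0 - p) + p / n)

# Hypothetical 80% parallel / 20% serial split:
for cores in (4, 8, 12, 20, 32):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.8, cores):.2f}x")
```

With these numbers the speedup climbs quickly up to around 20 cores and then flattens out well below 5x, which matches the intuition that extra cores beyond the simulation's thread count buy little.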
- NPCs and Palicos do not just pop up in the world and vanish as you get close to them, but have their full daily and task routines
https://www.youtube.com/watch?v=NUoU9LvbIF4
https://www.youtube.com/watch?v=M6RAv9CWZnA
- natural wildlife behavior in association with each other and the weather
https://www.youtube.com/watch?v=E2pfOUcPEuI
https://www.youtube.com/watch?v=hicme7RIQH0
- dynamic eco-environment with decay and dynamic interactions with it
https://www.reddit.com/r/MonsterHunter/comments/1in9zt8/corpses_decay_over_time_and_even_leak_body_fluids/
And all that and more, all the time.
While Zen 6 will have 24 cores, the X3D variant only 12 cores.
But the game needs a lot of cores if not overclocked.
Those details are fantastic but it doesn't excuse the poor performance.
https://www.youtube.com/watch?v=abhGU7AssjY
The TDP and °C will be an interesting factor too. After all, many like myself insist on air cooling for easier and more reliable long-term maintenance.
Cache is going to be an issue with that many cores, too.
To be fair, these details should not be CPU-heavy. The huge thread count makes me wonder how much this game, and games like it in the future, will start bottlenecking at the RAM.
As long as the RAM rate is in sync with the CPU's processing speed, this should not be an issue. Having all these tasks processed at the same time is what creates this huge multitasking load. It's just logical that it ended up being so CPU-intensive.
The little check I made on the MHWilds Benchmark showed this game likes multithreading a lot.
With over 94 active processing threads: https://imgur.com/a/xjuvWbY
And around 20 of them are live world-environment calculations (memory address associations). Which means, in theory, the more cores your CPU has, up to 20 of them can have a positive impact on the game's performance and work against the CPU bottleneck. That can not always be compensated by L3 cache.
Mark my words: if the next console generation comes with 20 or 16 cores as the default, this will also be the new standard for future games, like 8(+) cores is now with this generation.
That's not always the case. Sometimes it's very reasonable and even necessary to outsource a task into its own thread, mostly in cases where it's not needed for the "core process" and only has a supportive/additional role to play.
One question that we have not raised so far is: what if the netcode is part of the problem? After all, most of the mass eco-simulation and CPU behavior has to be communicated and stay in sync in an online co-op environment. What if the tasks themselves are already calculated quite fast, but the netcode that needs to sync this information with everyone else in the lobby puts brakes in between them for the sync data? We do know the host is not calculating the environment for everyone else; instead, everyone calculates his own part locally but still stays in sync to a certain point.
Yeah... How about we don't return to the days of single thread processing?
The only time you really should be using threads for simple processes is when you're handling stuff like netcode, or you have to access external resources, like preloading assets off the hard drive, etc.
The common trade-off between performance and bandwidth. Either you distribute the entire simulation state to all connected parties directly, or you limit the data transmitted and have each client calculate based on that data to reach (roughly) the same simulation state.
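The second option is essentially deterministic lockstep: as long as every client starts from the same seed and applies the same inputs, they reach identical state without ever transmitting it. A minimal sketch (the `simulate` function and its "tick" are made up stand-ins, not anything from the game):

```python
import random

def simulate(seed: int, steps: int) -> list[int]:
    """Advance a local copy of the world deterministically from a
    shared seed; only the seed and player inputs would go over the wire."""
    rng = random.Random(seed)
    state = []
    for _ in range(steps):
        state.append(rng.randrange(100))  # stand-in for one sim tick
    return state

host = simulate(seed=42, steps=10)
client = simulate(seed=42, steps=10)
assert host == client  # identical state, zero state bytes transmitted
```

The catch is exactly the sync problem described above: any divergence (a dropped input, floating-point differences) desyncs the clients, so periodic correction data still has to flow.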
That raises the question of what's part of the simulation state. Since the game under the hood is still mission-based, I'd argue the world is probably synced when the mission parameters are negotiated between clients: what monsters are on the map, their state and locations, etc. Since players are more likely to engage with large monsters, those probably have the highest priority in terms of state updates, followed by the small monsters. All the rest, e.g. critters, static collectibles, non-combat NPCs, etc., are likely not part of the simulation state at all but handled entirely client-side and only synced on demand after an interaction.
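That tiering speculation can be sketched as a simple update schedule. Everything here is hypothetical (the tier names and intervals are invented to illustrate the idea, not taken from the game's netcode):

```python
# Hypothetical sync tiers: large monsters update every tick, small
# monsters every 4th tick, critters never (client-side only, synced
# on demand after an interaction).
SYNC_INTERVAL = {"large_monster": 1, "small_monster": 4, "critter": 0}

def due_for_sync(kind: str, tick: int) -> bool:
    interval = SYNC_INTERVAL[kind]
    return interval > 0 and tick % interval == 0

# Over 8 ticks, large monsters dominate the bandwidth budget:
updates = [kind for tick in range(8)
           for kind in SYNC_INTERVAL
           if due_for_sync(kind, tick)]
```

The design benefit is that bandwidth scales with what players actually fight, not with the total entity count of the eco-simulation.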
Be happy you can upgrade every 5 years, instead of every 6 months to a year back in the '90s :D
One thing is quite overlooked here: dynamic scaling of the tasks. Most "one task per thread" designs are only possible if you know exactly what you will have to handle or what to expect. What if that is not the case here? Like, you do not know whether 20, 40 or 100 NPCs will be connected to that one task. Then you have to make it dynamic and better give it its own thread. A process where 100 tasks run on their own threads will always perform way better than a process with 100 tasks in one thread. And that is the better design, even back when there were just 2 cores or hyperthreads (see Gothic 3 back then, for example).
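One common way to handle a task count that is only known at runtime is a worker pool that scales with the load instead of hard-wiring a fixed thread layout. A minimal sketch (the `npc_routine` name and the cap of 20 workers are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def npc_routine(npc_id: int) -> str:
    # Stand-in for one NPC's daily-routine update.
    return f"npc-{npc_id} updated"

def update_npcs(count: int) -> list[str]:
    # The NPC count is only known at runtime, so the pool size scales
    # with it, capped at a hypothetical 20 hardware-friendly workers.
    with ThreadPoolExecutor(max_workers=max(1, min(count, 20))) as pool:
        return list(pool.map(npc_routine, range(count)))
```

Whether 20, 40 or 100 NPCs show up, the same code path handles them; the cap just keeps the thread count from outrunning the available cores.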
Putting all these loads on just one thread/core would be even more bottlenecking and CPU (heat) intensive than splitting it up into as many threads as possible. I think Capcom made the right decision here with this design, focusing on future hardware and the game's lifetime rather than the past (where Windows 10 and its runtimes are not even officially supported anymore). I know many will not agree with this, but in the long run, as more modern hardware becomes the common ground for everyone, this will be beneficial to the possibilities of the game.
The main problem with threading usually isn't the threads themselves but memory access timing inconsistencies. Threads run in parallel, which means you can easily run into problems where one thread relies on variables being already updated and thus reads them before the thread that should be doing the updating had a chance to do just that. Whenever that happens, you get a race condition: essentially, one thread that isn't expected to be faster is somehow running laps around the ones supposed to feed it information.
In the best case (for the player) this means you waste a bunch of cycles operating on outdated information. In the worst case you process objects that haven't been properly initialized yet. One common example of the latter are the funky strings / geometric faces that you sometimes see spiking all over the screen out of character models... well, or them T-posing menacingly. Or the game just crashes "unexpectedly", which, speaking as a developer, is the best thing that can happen, because then at least they get a crash dump which might or might not be helpful. Ironically, the absolute worst case for everyone involved is that nothing happens: a problem nobody knows about is one that never gets fixed.
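The classic lost-update race described above can be forced deterministically with two events. This is a contrived Python sketch (real races are timing-dependent, which is exactly why they are so hard to reproduce; here the bad interleaving is scripted on purpose):

```python
import threading

counter = 0
reader_got_value = threading.Event()
writer_done = threading.Event()

def stale_writer():
    global counter
    local = counter          # reads 0
    reader_got_value.set()   # deliberately let the other thread run now
    writer_done.wait()
    counter = local + 1      # writes back 0 + 1, clobbering the +10

def fast_writer():
    global counter
    reader_got_value.wait()
    counter += 10            # this update gets silently lost
    writer_done.set()

t1 = threading.Thread(target=stale_writer)
t2 = threading.Thread(target=fast_writer)
t1.start(); t2.start(); t1.join(); t2.join()
assert counter == 1  # not 11: a read-modify-write interleaved badly
```

Nothing crashes, no error is logged; the state is just quietly wrong, which is the "worst case for everyone" scenario above.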
Not all state that needs to be shared across threads can be made atomic, and that's when you need to lock threads to keep the entire house of cards from crashing down (sometimes literally). But temporarily locking threads willy-nilly to enforce memory access timing consistency is the last thing you want to do in a game where every little thing can be the difference between stable fps and 0.01% lows.
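A lock makes the read-modify-write one indivisible unit, which is why it fixes lost updates, and also why it costs: every thread that hits the lock while it's held has to wait. A minimal sketch of the correct-but-serializing version:

```python
import threading

counter = 0
lock = threading.Lock()

def add(n: int, times: int) -> None:
    global counter
    for _ in range(times):
        with lock:           # read-modify-write becomes one unit
            counter += n

threads = [threading.Thread(target=add, args=(1, 10_000)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
assert counter == 40_000  # no lost updates under the lock
```

The trade-off in the post above is visible here: the lock guarantees the 40,000, but inside the critical section the four threads run one at a time, and in a frame-budgeted game those stalls show up exactly in the 0.01% lows.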
I did some research in the benchmark lately, and my 13600K is bottlenecking somewhere around 100 fps.
Basically no problem, because that's more than enough, but at that point it's only around 40-50% load, and all cores are equally hovering somewhere between 20% and 60%.
So I'd say no matter how many of the 20 cores the game is really using, there should be quite some headroom left before any throttling.
At first glance it looks like some kind of thread management issue to me.
I remember things like this in World back then on my i5 8400.
More like with 6 threads instead of 20, but it was basically the same problem of bottlenecking without really having too much load on any core.