The problem is definitely on the client. It is a single-threaded game, due to Unreal Engine's rendering being single-threaded. On a multi-core CPU, you can start Process Explorer and check the game's threads. The main thread will take 12.5% of the CPU on a quad-core with Hyper-Threading (i.e., 1/8 of the CPU, i.e., one hardware thread fully utilized). Once the main thread uses 100% of one hardware thread (1/8 of a quad-core CPU, 1/12 of a 6-core CPU, and so on), the game cannot make use of any more CPU capacity.
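That arithmetic can be sketched in a few lines of Python (my own illustrative helper, not from the thread), assuming the monitoring tool reports usage as a percentage of all logical processors, as Process Explorer does:

```python
# Hypothetical helper: the maximum share of TOTAL CPU that one fully
# saturated thread can show in a usage readout.
def max_single_thread_usage(physical_cores: int, threads_per_core: int = 2) -> float:
    logical = physical_cores * threads_per_core
    return 100.0 / logical

print(max_single_thread_usage(4))  # quad-core with HT: 12.5
print(max_single_thread_usage(6))  # 6-core with HT: ~8.33
```

So a reading of 12.5% on a quad-core with Hyper-Threading is not "12.5% load"; it is one hardware thread completely maxed out.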
There are some other "utility" threads in addition to the main thread, used for sound and who-knows-what. That does not make the game "multi-threaded". The main thread, where all the animation and rendering happens, is what limits the game's ability to make better use of the available CPU.
It is a lack of discipline on the devs' part. It should not be hard to test the game on a PC with the minimum requirements and on a PC with the "recommended" requirements, and make sure it runs, worst case, at 50 FPS on the former and 60+ FPS on the latter. Worst case = 100 players, the heaviest map, a typical end-game with lots of explosions, HABs, and vehicles.
"Clear user data cache" is SNAKE OIL.
EDIT: An easier way to tell how close you are to a CPU bottleneck: type "stat unit" in the console. The "Game" time should be less than one frame time (16.66 ms for 60 FPS); i.e., 16.66 ms of game time means the main thread cannot do any more processing between consecutive 60 Hz VSYNCs.
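The budget check above is just arithmetic; here is a quick sketch of it (my own, not from the game), treating the "Game" time from "stat unit" as the main thread's work per frame:

```python
# Per-frame time budget at a target frame rate.
def frame_budget_ms(target_fps: float) -> float:
    return 1000.0 / target_fps

# If the main thread's work already fills the whole frame,
# the CPU cannot sustain the target frame rate.
def main_thread_bound(game_time_ms: float, target_fps: float = 60.0) -> bool:
    return game_time_ms >= frame_budget_ms(target_fps)

print(round(frame_budget_ms(60), 2))  # 16.67 ms per 60 Hz frame
print(main_thread_bound(18.0))        # True: over budget at 60 FPS
print(main_thread_bound(10.0))        # False: headroom left
```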
Name one game engine that splits its rendering thread onto multiple threads, please. It would make no sense to try to split a serial task into multiple threads unless your goal is to create a bigger workload for little or no gain and introduce more hard-to-find bugs.
Unreal Engine's main thread is used for a lot of things, like all the game logic, unless you specifically put things onto a different thread, but rendering has its own thread.
Unreal Engine is very much a multi-threaded game engine.
LOL, you should check your PC for bitcoin miner malware.
@Dangfoe, you are, sadly, ill-informed.
Rendering the world includes tasks like calculating the coordinates and orientation of all vegetation and buildings. Each tree and each house is an individual asset that can be computed independently of anything else in the scene, so it makes total sense for rendering of the game world to be split into as many slices as there are processor hardware threads (e.g., 8 slices on a quad-core CPU with Hyper-Threading), so they can be rendered simultaneously. This would result in 8 command lists being submitted to the GPU.
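The slicing described above can be sketched in Python. This is purely illustrative: the asset and transform names are made up, and CPython's GIL means a real engine would do this in native code where the threads truly run in parallel; the point is only the structure of the split.

```python
from concurrent.futures import ThreadPoolExecutor

def compute_transform(asset_id):
    # Stand-in for computing one asset's position/orientation;
    # each asset is independent of every other asset.
    return (asset_id, asset_id * 2.0)

def build_command_lists(asset_ids, hardware_threads=8):
    # Split the asset pool into one slice per hardware thread...
    slices = [asset_ids[i::hardware_threads] for i in range(hardware_threads)]
    with ThreadPoolExecutor(max_workers=hardware_threads) as pool:
        # ...and compute each slice concurrently, yielding one
        # "command list" per slice to submit to the GPU.
        return list(pool.map(
            lambda s: [compute_transform(a) for a in s], slices))

lists_ = build_command_lists(list(range(100_000)))
print(len(lists_))                    # 8 command lists
print(sum(len(cl) for cl in lists_))  # 100000: every asset covered
```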
Read Unreal Engine's own article about this, titled "Rendering Dependency Graph", to become better informed. RDG is treated as a "feature" in UE and is described as being still "under development". Epic Games is fully aware of what it would take to implement multi-threaded rendering, but their engine was not architected with multi-core CPUs in mind. The multi-core/multi-threading capability is merely being added as an afterthought.
Intel has been trying since the advent of multi-core CPUs to convince game developers to use multi-threading effectively. See the GDC presentation from 2010 titled "Don't Dread Threads". It's an introductory-level presentation:
https://www.gdcvault.com/play/1012189/Don-t-Dread
At the very beginning of the presentation they show the FPS effect of serial rendering vs. multi-threaded rendering (spoiler: 33 FPS vs. 104 FPS in their demonstration animation).
I'll have to call BS. There is no i7-8770. If you don't even know what CPU you have, you probably can't even measure FPS properly. LOL.
@Tachyons just because the game can run at the bare minimum 60FPS on (nearly) the best hardware available today does not mean it is optimized. It merely means that it can be brute-forced to a playable standard by throwing enough money at it.
Optimized would mean it runs at 60+ FPS on any map with 100 players on the "recommended requirements".
If OWI updated the "recommended requirements" to not be a complete lie, there would be fewer complaints, but also fewer players buying the game expecting that their hardware meets the "recommended requirements".
While the "recommended requirements" PC is not nearly good enough to run the game in its current state, the reason an "Intel Core i or AMD Ryzen with 6 physical cores/16GB/GTX 1060" cannot run the game well is that the game does not use the available power effectively. Sure, you can brute-force it with a much faster CPU, double the RAM, and 4x the graphics power; that will overcome the lack of optimization, but it does not mean the game is optimized, which would mean it runs well (60 FPS) on the recommended hardware.
Can anyone who claims they can run the game well post benchmark video proof on YT, on the v1.1 version of the game, while playing the heaviest map, Fallujah, with 100 players?
A lot of hardware configs can run at 100+ FPS on the older maps, but that's meaningless if they have to sit out Fallujah (or hate the ♥♥♥♥♥♥ experience) due to low FPS.
Thanks.
RDG is a GPU feature, not a CPU one. How would you split a serial task like taking a snapshot of a world across multiple threads?
There are a ton of features that manufacturers try to push onto developers that never get implemented in the real world, because developers don't live in a bubble like manufacturers do and actually deal with real-world variables.
Computing the world is not serial. There are 100,000 (or some such number) assets that need to be computed each frame, and that pool can be split into smaller batches, each batch executed in parallel.
Google "Rendering Dependency Graph" and watch Intel's video. Open your mind to new learning. I won't summarize it for you here.
Edit: The RDG is also referenced in Intel's presentation on multithreading at about the 15:00 mark.
Squad is not capable of using anything close to 100% of 4 cores. Something else is using yours.
Yes, I have cleared it; I do it after every update or driver update. I played the PTU on the 100-man servers with no problem. After the patch release, though, it just fell apart. I did everything support suggests and more, and it persists.