RTX 4080 16GB Power Usage
I have tested every game on high-to-ultra settings at 1080p, and the power draw never exceeds 110 W. Is that normal? Everything is stable at 80+ FPS, but it seems strange.
The RTX 4080 at 1080p hits the CPU performance limit and won't utilize its full potential.
It's kind of pointless to get cards like the RTX 4080 and 4090 for 1080p; no matter how powerful the CPU is, it will hit its performance limit and can't spit out more frames, so to speak.
Originally posted by Rumpelcrutchskin:
The RTX 4080 at 1080p hits the CPU performance limit and won't utilize its full potential.
It's kind of pointless to get cards like the RTX 4080 and 4090 for 1080p; no matter how powerful the CPU is, it will hit its performance limit and can't spit out more frames, so to speak.
I was just thinking that one of the PCIe connectors might not be plugged in; that's why I'm asking.
Originally posted by Rumpelcrutchskin:
The RTX 4080 at 1080p hits the CPU performance limit and won't utilize its full potential.
It's kind of pointless to get cards like the RTX 4080 and 4090 for 1080p; no matter how powerful the CPU is, it will hit its performance limit and can't spit out more frames, so to speak.
But you see, there's a catch: the CPU isn't even near 80% load. The question is what's going on.
Is that overall CPU use? On what CPU (specifically, how many cores/threads does the CPU have)?

If it's not a single-core CPU, which I'm really guessing it's not in this day and age and with an RTX 4080, then overall CPU use is almost pointless to look at. Looking at individual cores is necessary, but still not foolproof. All it takes is a single core at 100% (or sometimes not even that) for a potential CPU bottleneck.

Overall CPU use as an indicator of whether the CPU is the limitation hasn't been accurate since single-core CPUs.
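If you want to check this yourself, here's a quick Python sketch (assuming you have the psutil package installed) that prints overall vs. per-core load so a single pinned core stands out:
[code]
# Per-core CPU check: overall use can read ~50% while one core sits at 100%.
# Requires: pip install psutil
import psutil

overall = psutil.cpu_percent(interval=1)
per_core = psutil.cpu_percent(interval=1, percpu=True)

print(f"Overall CPU use: {overall}%")
for core, load in enumerate(per_core):
    flag = "  <-- possible bottleneck" if load > 95 else ""
    print(f"Core {core}: {load}%{flag}")
[/code]
Run it while the game is running; if one core is pegged while the overall number looks modest, that's your CPU limit.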
As I tested some games:
Tarkov: 30-75% usage, 1080p maxed
Cyberpunk: 70-80% usage, everything high except RT Psycho, getting 80-120 FPS
Minecraft: 15-50%, 32 chunks render distance with Chocapic13 shaders at maximum, on 1.19.1
Originally posted by Illusion of Progress:
Is that overall CPU use? On what CPU (specifically, how many cores/threads does the CPU have)?

If it's not a single-core CPU, which I'm really guessing it's not in this day and age and with an RTX 4080, then overall CPU use is almost pointless to look at. Looking at individual cores is necessary, but still not foolproof. All it takes is a single core at 100% (or sometimes not even that) for a potential CPU bottleneck.

Overall CPU use as an indicator of whether the CPU is the limitation hasn't been accurate since single-core CPUs.
I can't figure out what's going on. The CPU is not higher than 80% usage, and GPU power draw maxes out at 110 W.
What is the wattage when you isolate the GPU in a GPU-only stress test at 100% load (like AIDA64 and others)?
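Something like this (just a sketch; assumes an NVIDIA card with nvidia-smi on the PATH) will log power draw and GPU load once a second while the stress test runs:
[code]
# Poll GPU power draw and utilization via nvidia-smi while a stress test runs.
# Start the stress test separately, then run this to watch the numbers.
import subprocess
import time

for _ in range(10):
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=power.draw,utilization.gpu",
         "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    print(result.stdout.strip())  # e.g. "305.12 W, 99 %"
    time.sleep(1)
[/code]
If the card pulls its full rated wattage under a synthetic load, the 110 W you see in games points to a workload (bottleneck) issue, not a power-delivery issue.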
The answer is simple: you most likely need a faster CPU. It's time to get one.
Originally posted by smallcat:
The answer is simple: you most likely need a faster CPU. It's time to get one.

The answer may be even simpler: OP should get a monitor that is 1440p or higher.

Stop using high-end GPUs with 1080p monitors, folks
Originally posted by Kowalski:
I can't figure out what's going on. The CPU is not higher than 80% usage, and GPU power draw maxes out at 110 W.
I explained it, and you didn't clarify.

Is 80% your overall use? If so, you cannot use that to determine anything. You have to look at it on a per-core basis to better determine this, and even that isn't foolproof.
Originally posted by Kowalski:
Minecraft: 15-50%, 32 chunks render distance with Chocapic13 shaders at maximum, on 1.19.1
Minecraft Java is largely single-core limited. It IS multi-threaded, largely when the client first loads the game in the recent few versions (okay, that doesn't count) or when loading a lot of chunks fast. It also splits off threads for dimensions and some other things. But Minecraft is MOSTLY single-core limited in typical play, and almost always CPU bound. Versions 1.18 and newer are also rather heavy due to the terrain height changes.

But 32 render distance with shaders? Goooood luck! Every time you double the render distance, you quadruple the loaded chunks. The new simulation distance helps lower CPU load somewhat, but with shaders that's still a high render distance. Even shaders can be CPU limited in weird situations, especially if you're near a lot of entities and don't have a shader pack that disables entity shadows, because those became MUCH more taxing after 1.18 for me (the default video setting for this does NOT apply to shaders, so you NEED a shader pack that allows it to be adjusted). Found this out the hard way with BSL; once I updated it and a new version added that setting, it was a MASSIVE difference. An easy way to find out: turn entity distance down from 100% to 50% and see if it does anything. If not, this isn't your issue. In general, 32 render distance in 1.18 and up is harder, and shaders over 16 is already often hard (though I'm not sure whether an RTX 4080 should handle it or not).

So you could be CPU AND GPU limited here. Minecraft Java with shaders will vary a LOT. Like, a LOT.
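To put numbers on the "double the distance, quadruple the chunks" point, a rough back-of-the-envelope (r is render distance in chunks; the loaded area is roughly a (2r+1)-chunk square):
[code]
# Loaded chunks grow with the square of render distance r: roughly (2r + 1)^2.
for r in (8, 16, 32):
    chunks = (2 * r + 1) ** 2
    print(f"render distance {r}: ~{chunks} chunks loaded")
# render distance 8:  ~289 chunks
# render distance 16: ~1089 chunks (about 4x)
# render distance 32: ~4225 chunks (about 4x again)
[/code]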
The 10700K is too slow for the 4080
If it's an i7-10700K, it's not too slow, but perhaps it's overheating. I would run a benchmark to find out why this happens.
Originally posted by rezo:
The 10700K is too slow for the 4080

The 10700K is not too slow for a 4080. However, at 1080p, the 10700K WILL bottleneck a 4080 in many instances, especially in CPU-intensive games. Thus, you would not see full GPU usage in those games.

A 10700K would do the same thing to a 3080. Not in every game, but at 1080p, in many CPU-intensive games, it WILL bottleneck the GPU. Use 1440p or 4K resolution, and those CPU bottlenecks will be mostly eliminated.

With that said, GPUs like the 4080 and 4090 are not really designed for 1080p. To get the most out of them, you want to be using higher resolutions like 1440p or 4K. Now, if you have a 1080p 240Hz, or even 360Hz, monitor and are trying to max it out, then you will want a high-end GPU. However, you will also need one of the higher-end CPUs to ensure that the CPU is able to push the high frame rates that the GPU is rendering.

Nothing wrong with getting a modern mid-range CPU and a modern high-end GPU for 1440p or 4K. But at 1080p, you will definitely run into CPU bottlenecks; at that point, a mid-range CPU will not be enough to push the GPU to its full extent.
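If it helps to see the logic, here's a toy Python model (the millisecond numbers are made up for illustration, not measurements): whichever side takes longer per frame sets your FPS, so a faster GPU changes nothing once the CPU is the slower side.
[code]
# Toy bottleneck model: the slower per-frame stage caps FPS.
# All timings below are hypothetical, for illustration only.
def fps(cpu_ms_per_frame: float, gpu_ms_per_frame: float) -> float:
    return 1000 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# Hypothetical CPU that preps a frame in 8 ms (~125 FPS ceiling):
print(fps(cpu_ms_per_frame=8, gpu_ms_per_frame=4))   # 125.0 -> GPU idles (the 1080p case)
print(fps(cpu_ms_per_frame=8, gpu_ms_per_frame=12))  # ~83.3 -> GPU is the limit (the 4K case)
[/code]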
Last edited by ZeekAncient; 3 Jan 2023 @ 16:30
High resolution takes more GPU; high FPS takes more CPU and GPU.

If playing at 1080p at 300+ Hz/FPS, the CPU will be the bottleneck.
At any of the higher-K resolutions (1440p, 4K and up) and on multiple displays, the GPU will be the bottleneck with almost any modern CPU.
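Rough pixel math behind that (just arithmetic; GPU-side work scales roughly with pixels per frame):
[code]
# Pixels per frame at common resolutions, relative to 1080p.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
base = 1920 * 1080
for name, (w, h) in resolutions.items():
    px = w * h
    print(f"{name}: {px:,} pixels ({px / base:.2f}x 1080p)")
# 1080p: 2,073,600 (1.00x); 1440p: 3,686,400 (1.78x); 4K: 8,294,400 (4.00x)
[/code]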

Date Posted: 3 Jan 2023 @ 5:19
Posts: 14