David is Back Dec 19, 2023 @ 7:22am
Some games are seriously lagging on a Xeon E5-2690 v2
It seems to happen during map-segment preloading. Can it be because of my cheap SATA SSD or PCIe 3.0, or because AMD GPUs (mine is an RX 6600 XT) are screwed up for such games? Should I change my platform if CS2, which I play most of the time, runs fine and my PC is more than OK in work tasks?
A&A May 1, 2024 @ 4:25am
Originally posted by David Is Back:
I was thinking about the gaming stuff in terms of parallel execution yesterday.

I wonder if games can use parallel computation when calculating physics, projectile coordinates and object states, for example by dividing 3D space into large cubes and processing those cubes on different CPU cores.

I mean, why do so many games get bottlenecked by single-core speed? I still don't get the point.

Is it because of how AI vision is simulated (enemies should see not just 1 meter in front of them, but like actual humans do)? Or because of how data is fed to the GPU by a single core, like someone here already mentioned?

I'm a developer myself (not a game dev though), so I'm curious.
You can use multiple coordinate systems (which is easier to do if you have a single system first), but then there will be events where particles need to be transferred from one coordinate system to another, and you have to rewrite their positions in the destination system's coordinates. Depending on how big the regions are, the load on each core will obviously differ, and you can expect bugs. And after completing one frame, all the data must be collected back into the first system and sent to the GPU, which sounds worse than having dual GPUs in SLI.
This way the task becomes more memory-intensive than CPU-intensive: you are moving the information multiple times to get a result, and the cache also becomes less efficient.
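A minimal sketch of what that handoff might look like in C++, assuming a 1-D row of equally sized cells with one worker thread per cell; all the names here (Particle, Cell, step_cell, tick) are hypothetical, not from any real engine:

```cpp
// Hypothetical sketch: space is split into fixed-width cells, each cell is
// stepped by its own thread, and particles that cross a boundary are handed
// to the destination cell's "incoming" inbox under a lock.
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

struct Particle { float x, vx; };

struct Cell {
    std::vector<Particle> particles;
    std::vector<Particle> incoming;   // particles arriving from other cells
    std::mutex incoming_mtx;
};

// Advance one cell by dt; move escapees into the owning cell's inbox.
void step_cell(std::vector<Cell>& cells, std::size_t i, float cell_w, float dt) {
    Cell& c = cells[i];
    for (std::size_t p = 0; p < c.particles.size(); ) {
        Particle& part = c.particles[p];
        part.x += part.vx * dt;
        long owner = static_cast<long>(part.x / cell_w);
        if (owner >= 0 && owner < static_cast<long>(cells.size()) &&
            owner != static_cast<long>(i)) {
            Cell& dst = cells[static_cast<std::size_t>(owner)];
            std::lock_guard<std::mutex> lk(dst.incoming_mtx);
            dst.incoming.push_back(part);
            c.particles[p] = c.particles.back();  // unordered remove
            c.particles.pop_back();
        } else {
            ++p;
        }
    }
}

void tick(std::vector<Cell>& cells, float cell_w, float dt) {
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < cells.size(); ++i)
        workers.emplace_back(step_cell, std::ref(cells), i, cell_w, dt);
    for (auto& t : workers) t.join();  // sync point before the merge

    // The serial "collect everything back" step described above: merge each
    // cell's inbox before the results can be handed to the renderer.
    for (auto& c : cells) {
        c.particles.insert(c.particles.end(), c.incoming.begin(), c.incoming.end());
        c.incoming.clear();
    }
}
```

Even in this toy version, the join and the merge run serially, which is exactly the memory-shuffling overhead described above.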
Last edited by A&A; May 1, 2024 @ 4:58am
David is Back May 1, 2024 @ 6:23am
Originally posted by A&A:
You can use multiple coordinate systems (which is easier to do if you have a single system first), but then there will be events where particles need to be transferred from one coordinate system to another
Can't we use a single one, or is the resolution not enough for small things like particles?

Originally posted by A&A:
the load on each core will obviously differ
Yes, but I guess it's still better than calculating the whole map on a single core. Also, I'm not talking about rendering (that's a GPU task), only about some simple game logic and some physics. And I would say "tick", not "frame" :)
David is Back May 1, 2024 @ 6:27am
Also I want to mention that CS2 and Paladins are not lagging on my CPU, but many indie games on UE4 do. Sounds like a UE4 texture-streaming problem (maybe it doesn't play well with DDR3 RAM or the 128-bit memory bus of my GPU).
A&A May 1, 2024 @ 7:22am
Originally posted by David Is Back:
Originally posted by A&A:
You can use multiple coordinate systems (which is easier to do if you have a single system first), but then there will be events where particles need to be transferred from one coordinate system to another
Can't we use a single one, or is the resolution not enough for small things like particles?

Originally posted by A&A:
the load on each core will obviously differ
Yes, but I guess it's still better than calculating the whole map on a single core. Also, I'm not talking about rendering (that's a GPU task), only about some simple game logic and some physics. And I would say "tick", not "frame" :)
You can use a single coordinate system; it's just that to use multiple cores you need multiple tasks, and the implementation is more difficult (even if we ignore the other components of a game like input processing, AI calculation, audio, etc.). And since consumer-class computers don't have that many cores, a complex parallel design can end up slower than a straightforward one, and of course developers want whichever is fastest for the majority, unless we're Paradox, expecting a 64-core processor to run Cities: Skylines 2.

When it comes to physics, that is not always true: there are also cases where the physics are tied to the fps, for better efficiency I guess, or to look a little better if the values are adjustable? But in principle the tick rate determines them.
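For the tick-versus-frame point, here is a minimal sketch of the common fixed-timestep loop, where physics runs at a fixed tick rate no matter what the render fps is; update_physics and render are hypothetical stand-ins:

```cpp
// Hypothetical sketch of a fixed-timestep game loop: the simulation always
// advances in fixed ticks, and rendering happens as often as it can.
#include <chrono>

void game_loop(bool& running) {
    using clock = std::chrono::steady_clock;
    constexpr double tick_dt = 1.0 / 64.0;  // 64 ticks per second
    double accumulator = 0.0;
    auto prev = clock::now();

    while (running) {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - prev).count();
        prev = now;

        // Run as many fixed physics ticks as the elapsed time allows.
        while (accumulator >= tick_dt) {
            // update_physics(tick_dt);  // hypothetical simulation step
            accumulator -= tick_dt;
        }

        // render(accumulator / tick_dt);  // hypothetical interpolated draw
    }
}
```

Engines that instead step the physics once per rendered frame with a variable dt are the "tied to the fps" case mentioned above.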
Last edited by A&A; May 1, 2024 @ 7:23am
David is Back May 1, 2024 @ 7:40am
Originally posted by A&A:
You can use a single coordinate system; it's just that to use multiple cores you need multiple tasks, and the implementation is more difficult (even if we ignore the other components of a game like input processing, AI calculation, audio, etc.). And since consumer-class computers don't have that many cores
I have 10 cores, so what's my problem? My CPU is from 2013; games from 2017-2020 should not run that badly on it. Maybe some critical instructions are missing, or the L3 cache is too slow?
It's Chase May 1, 2024 @ 7:50am
Originally posted by David Is Back:
Originally posted by A&A:
You can use a single coordinate system; it's just that to use multiple cores you need multiple tasks, and the implementation is more difficult (even if we ignore the other components of a game like input processing, AI calculation, audio, etc.). And since consumer-class computers don't have that many cores
I have 10 cores, so what's my problem? My CPU is from 2013; games from 2017-2020 should not run that badly on it. Maybe some critical instructions are missing, or the L3 cache is too slow?
The architecture is just really old. More cores don't help in gaming when you're lacking single-threaded performance and lacking instructions that even other older CPUs have. I don't think yours even supports AVX2, which even an i7 4790 has. Your motherboard also only supports PCIe 3.0, which can also be causing issues.
I thought we went over this.

It's a server CPU, though; it was never meant for gaming in the first place. If you want a big upgrade, even something like a Ryzen 5 3600 would be a giant difference, but I would try to get a 5600 instead. AM4 motherboards are dirt cheap, so that makes it a good budget pick.
Last edited by It's Chase; May 1, 2024 @ 8:07am
A&A May 1, 2024 @ 8:08am
Originally posted by David Is Back:
I have 10 cores, so what's my problem? My CPU is from 2013; games from 2017-2020 should not run that badly on it. Maybe some critical instructions are missing, or the L3 cache is too slow?
You don't have AVX2 and FMA3, which could be used to optimize calculations. "Bit manipulation instructions" too, but I don't see a lot of use cases for those in gaming.

The CPU is not very good in single-threaded work either.
Turbo:
1 core multiplier: 36
2 cores multiplier: 35
3 cores multiplier: 34
4-10 cores multiplier: 33
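To make the instruction-set point concrete, here is a hedged sketch using real x86 intrinsics: with FMA3, a multiply and an add fuse into one instruction, while an AVX-only chip like Ivy Bridge needs two separate ones. The axpy helper name is made up:

```cpp
// Hypothetical sketch: y[i] += a * x[i], 8 floats at a time.
// Assumes n is a multiple of 8 and the code is built with AVX enabled
// (e.g. -mavx, plus -mfma where available).
#include <immintrin.h>

void axpy(float a, const float* x, float* y, int n) {
    __m256 va = _mm256_set1_ps(a);
    for (int i = 0; i < n; i += 8) {
        __m256 vx = _mm256_loadu_ps(x + i);
        __m256 vy = _mm256_loadu_ps(y + i);
#if defined(__FMA__)
        vy = _mm256_fmadd_ps(va, vx, vy);               // one fused op (FMA3)
#else
        vy = _mm256_add_ps(_mm256_mul_ps(va, vx), vy);  // two ops on AVX-only CPUs
#endif
        _mm256_storeu_ps(y + i, vy);
    }
}
```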
Last edited by A&A; May 1, 2024 @ 8:16am
xSOSxHawkens May 1, 2024 @ 8:11am
Originally posted by David Is Back:
Originally posted by A&A:
You can use a single coordinate system; it's just that to use multiple cores you need multiple tasks, and the implementation is more difficult (even if we ignore the other components of a game like input processing, AI calculation, audio, etc.). And since consumer-class computers don't have that many cores
I have 10 cores, so what's my problem? My CPU is from 2013; games from 2017-2020 should not run that badly on it. Maybe some critical instructions are missing, or the L3 cache is too slow?
You have 10 (slow) cores. They are a decade old, and core for core they are half the power/speed of anything modern. Running a (mostly single-thread dependent) game on your CPU would be no different than running a game on my 5950X with it power-limited and speed-limited to half its normal speeds...

That's why your system is struggling. It's no better than a half-speed modern one.

The E5 closes the gap a bit in mixed-core loads, but then falls flat on its face when tested with all-core loads against the same R9 chip. No matter how you slice it, your CPU is anywhere from one half to one third the speed of a modern consumer chip.

As for the games issue, as mentioned, tick rate is the limit, and the reason it's not all split out is that it would be too difficult to piece back together. Games are a real-time rendering application, and they just don't have enough time to piece everything back together from a highly multi-threaded workload. For time's sake, many engines run most timing-related computation on a single thread (or a few threads). Then the main thread is always the limit as it tries to stitch everything together on the fly at 165 fps or more (for many modern gamers).

And as for UE4, it definitely could be the data streaming causing issues, more so if the CPU core speed drops when multi-core use is in play. The game or app may only be showing a small usage, but the data transfer (even assuming storage and memory can keep up) will still cause a spike in kernel use as the data gets shuffled. That kernel-level use can push overall system use high enough to trigger multi-core turbo behavior. If the game's core drops a few hundred MHz to accommodate the spin-up of extra cores, that could also impact the speed of the game in question.
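A minimal sketch of that "stitching" bottleneck, assuming a toy job system; SubsystemResult, run_subsystem, and frame are illustrative names only:

```cpp
// Hypothetical sketch: subsystems run in parallel, but one main thread must
// gather every result before the frame can be submitted, so the gather loop
// is the serial bottleneck described above.
#include <future>
#include <vector>

struct SubsystemResult { /* transforms, poses, projectile states... */ };

SubsystemResult run_subsystem(int id) {
    return SubsystemResult{};  // stand-in for real per-subsystem work
}

void frame() {
    std::vector<std::future<SubsystemResult>> jobs;
    for (int id = 0; id < 4; ++id)  // physics, AI, audio, animation...
        jobs.push_back(std::async(std::launch::async, run_subsystem, id));

    // At 165 fps the main thread has ~6 ms to do all of this gathering
    // plus build and submit the actual draw calls.
    for (auto& j : jobs) {
        SubsystemResult r = j.get();  // wait for each worker in turn
        (void)r;                      // integrate r into frame state here
    }
}
```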
r.linder May 1, 2024 @ 9:14am
Originally posted by xSOSxHawkens:
Originally posted by David Is Back:
I have 10 cores, so what's my problem? My CPU is from 2013; games from 2017-2020 should not run that badly on it. Maybe some critical instructions are missing, or the L3 cache is too slow?
You have 10 (slow) cores. They are a decade old, and core for core they are half the power/speed of anything modern. Running a (mostly single-thread dependent) game on your CPU would be no different than running a game on my 5950X with it power-limited and speed-limited to half its normal speeds...

That's why your system is struggling. It's no better than a half-speed modern one.

The E5 closes the gap a bit in mixed-core loads, but then falls flat on its face when tested with all-core loads against the same R9 chip. No matter how you slice it, your CPU is anywhere from one half to one third the speed of a modern consumer chip.

As for the games issue, as mentioned, tick rate is the limit, and the reason it's not all split out is that it would be too difficult to piece back together. Games are a real-time rendering application, and they just don't have enough time to piece everything back together from a highly multi-threaded workload. For time's sake, many engines run most timing-related computation on a single thread (or a few threads). Then the main thread is always the limit as it tries to stitch everything together on the fly at 165 fps or more (for many modern gamers).

And as for UE4, it definitely could be the data streaming causing issues, more so if the CPU core speed drops when multi-core use is in play. The game or app may only be showing a small usage, but the data transfer (even assuming storage and memory can keep up) will still cause a spike in kernel use as the data gets shuffled. That kernel-level use can push overall system use high enough to trigger multi-core turbo behavior. If the game's core drops a few hundred MHz to accommodate the spin-up of extra cores, that could also impact the speed of the game in question.
That's more or less what I told him almost 6 months ago, and he went into a rage over it. I really wouldn't bother wasting any more of your time on this OP; he adamantly refused the answers he was given because they weren't the answers he wanted to hear.

Some people just don't like being told that having more cores doesn't necessarily mean your processor will stay viable much longer than one with fewer cores from the same generation. It's all about how well those cores actually perform.
Last edited by r.linder; May 1, 2024 @ 9:16am
David is Back May 1, 2024 @ 10:22pm
Originally posted by A&A:
You don't have AVX2 and FMA3
Is that crucial, or is it maybe not the main problem?
David is Back May 1, 2024 @ 10:32pm
Originally posted by xSOSxHawkens:
against the same R9 chip
An R9 is a high-end CPU; it's more expensive than mine for sure.

Originally posted by xSOSxHawkens:
to one third the speed of the modern consumer chip
Any proof of this with a real comparison and charts? I'm talking about games from 2018-2020; where did you get one third of the difference? In fact, my CPU should be close to an i5 10500 or i7 7700 (not in single-threaded mode, of course). Again, in work tasks the i7 7700U in the notebook my company gave me feels *much* slower than my home Xeon when starting and stopping services or indexing files in the IDE, for example.

Originally posted by xSOSxHawkens:
As for the games issue, as mentioned, tick rate is the limit, and the reason it's not all split out is that it would be too difficult to piece back together. Games are a real-time rendering application, and they just don't have enough time to piece everything back together from a highly multi-threaded workload. For time's sake, many engines run most timing-related computation on a single thread (or a few threads). Then the main thread is always the limit as it tries to stitch everything together on the fly at 165 fps or more (for many modern gamers).
OK, thanks for the explanation, it seems clearer now. But as for me, I would be happy with 120-130 fps, and I don't even ask for ultra quality or 2K resolution; 1080p is enough for me.

Originally posted by xSOSxHawkens:
That kernel-level use can push overall system use high enough to trigger multi-core turbo behavior. If the game's core drops a few hundred MHz to accommodate the spin-up of extra cores, that could also impact the speed of the game in question.
Ah, that makes sense, and it explains why my i5 2500K overclocked to 4.1 GHz performs significantly better in games like L4D2 and partly CS:GO: it has similar cores, but they run roughly 25-35% faster under multi-core load (4.1 GHz versus the Xeon's 3.0-3.3 GHz).
Last edited by David is Back; May 1, 2024 @ 10:44pm
David is Back May 1, 2024 @ 10:37pm
Originally posted by r.linder:
doesn't necessarily mean your processor will stay viable much longer than one with fewer cores from the same generation. It's all about how well those cores actually perform
Ah, LOL, are you talking about my 2500K again? I'm not using it anymore; it's sitting quietly in a box in the corner of my room...

As for me not wanting to hear the answers: that's because I have loans to pay, and a new PC from scratch costs like 1-1.5x my monthly salary. So yes, if I can live without the upgrade for some time, I probably will. And even after that I would stop and think whether I should trade away my build, which is totally fine for work and even some simple games, for the 100-120 USD it would fetch (which is close to nothing).
David is Back May 1, 2024 @ 10:42pm
For example, a Core i7 13700K or 14700K, which I wanted to buy, costs about 4x the money (it was 5x several months ago) I would get from selling my motherboard + CPU + RAM + air cooler. A decent motherboard with overclocking support is another 2.5-3x that amount, so 7x in the worst case, and good RAM will cost close to 100 USD on top. So about 8x total for a new modern PC; but is that really so crucial in daily life if I'm not even playing heavy games on a daily basis?
Illusion of Progress May 1, 2024 @ 11:56pm
Originally posted by David Is Back:
Originally posted by xSOSxHawkens:
to one third the speed of the modern consumer chip
Any proof of this with a real comparison and charts?
Sort of, yes, but it's a limited sample size of games, so the real difference will vary depending on the games you play; it's still probably a good example because of the range of CPUs it spans.

https://www.techpowerup.com/review/intel-core-i9-14900k/17.html

Of course the difference in practice will be lower, since you're more often GPU-limited, whereas this test is done on an RTX 4090 at 720p precisely to show the difference in CPU performance.

You can indeed guesstimate that modern chips have between twice and thrice (or even more?) the performance of early 2010s CPUs now, and it's guesstimating because many of them don't even show up; they're too old and slow to be tested in modern settings. But Haswell (Intel 4th generation CPUs) is often compared to the original Ryzen CPUs, and you can presume that's somewhere between, what, 25% to 33%, based on where the Ryzen 5 2600 sits? And it can get worse for those older consumer DDR3-era CPUs in modern games where the core count limits them. You truly are looking at two to four times the uplift.

People are still acting like modern CPUs are barely twice as fast as their old Sandy Bridge or Ivy Bridge CPUs just because the stuff that immediately followed Sandy Bridge progressed so slowly. Well, each incremental uplift was small but compounds eventually, and on top of that, gains got bigger in the past five years compared to the five before them. So yes, we're between two and three (or more) times faster now than many of those mid and late DDR3-era platforms.

We're really at a point where anything from the "quad core era", which is Skylake/Kaby Lake and prior, is slow regardless of how many cores it has. 8th generation stuff is about to be the floor with Windows 11 in a little over a year. LGA 1200 is about to be two platforms back. I don't know what it is with early 2010s platform owners kidding themselves about new stuff barely being faster. If you're fine with the performance of those early 2010s platforms, that's one thing. I used mine until 2020 (goodness, that was already four years ago). But they are, in fact, many times slower than current offerings now.
David is Back May 2, 2024 @ 1:57am
Originally posted by Illusion of Progress:
But they are, in fact, many times slower than current offerings now.
I can't agree with that. I already mentioned the 7700U in a notebook, which feels slower in many tasks (well, OK, it's a notebook, so its CPU is undervolted and underclocked, but anyway).

Originally posted by Illusion of Progress:
If you're fine with the performance of those early 2010s platforms, that's one thing
Of course I am; why do you think I'm writing here? Just wondering what is wrong with some of the games :)

Originally posted by Illusion of Progress:
https://www.techpowerup.com/review/intel-core-i9-14900k/17.html
Why are you giving me a link to the i9 14900K? To show what I could gain if I bought this year's CPU, so that I no longer hesitate to go and spend my money?

You see, the whole methodology is wrong IMO. We should compare to Intel 7xxx-8xxx and Ryzen 1000/2000, because I planned this upgrade in late 2017, when there were in fact 4 options for a young student willing to make an upgrade (ordered from worst and cheapest to best and most expensive):

1. Buy a used Threadripper CPU
2. Buy a used Xeon 13xx/16xx/24xx CPU
3. Buy a new Ryzen CPU from a shop
4. Buy a new Intel Core i3/i5 7xxx from a shop

So, in many tests Threadrippers performed worse than Xeons, though of course that depended on the exact CPU models. And 26xx Xeons were not even suggested, because they were more expensive and did not allow overclocking (which was considered important).

But by the time I actually bought my 2690 v2, they were already cheap enough. Also, I did not mind the lack of overclocking much, because I thought the high core count would be enough (and the cores in the 2nd generation were not as slow as in 3rd-4th generation Xeons; the frequencies were also close to those of my old desktop Sandy Bridge CPU).

That's the logic behind that choice. I chose the second-best CPU in the series (only one model was better, with a 12C/24T configuration).

Originally posted by Illusion of Progress:
But Haswell (Intel 4th generation CPUs) is often compared to the original Ryzen CPUs
Great! Mine is Ivy Bridge, so it's 3rd generation; let's compare it to the original Ryzens. If it loses a little, I'm okay with that; at least it was cheap for me, and yes, it's a used device with no warranty. But I hardly believe in a 25-33% loss, and how can you talk about twice or thrice? That's not possible even in single-core mode.