The highest-end Raptor Lake makes some promises of its own, like 6 GHz clock speeds at stock and 8 GHz with overclocks.
Interesting, nonetheless. I wonder if Intel's 13th generation will see a similar uplift.
ComputerBase tested Cyberpunk, Far Cry 6 and Spider-Man Remastered with RT[www.computerbase.de] and did not report any significant performance difference versus Intel's 12th gen.
Where's the Hitman 3 non-RT test[bilder.pcwelt.de] to validate his statement?
This makes some sense, as presumably the draw call throughput of Intel 12th gen and Ryzen 5k is similar?
If anyone here has the full version of 3DMark, I can work with you to get some comparisons. I can offer Ryzen 3k and Ryzen 5k results from the draw call API test the suite has; I just need someone with Intel 11th and 12th gen results to post ;)
Sadly their site doesn't allow searching or comparing results from the niche draw calls test :/
Whenever I said a 9900k is very close to being a bottleneck in Cyberpunk even at higher resolutions, no one wanted to believe me.
And Cyberpunk uses all the ray tracing there is (at least to my knowledge).
20% gain my ass
This makes it seem like you didn't read your own source, or are you questioning the validity of your own source?...
Seems like in the RT-loaded Metro it's a 20% lead, which is in line with the roughly 15-40 percent reported in various RT-loaded titles.
AMD 7K seems to be demonstrably better at RT loading. Thanks for another source showing this!
Until you realize 7600x gave the same number as 7900x. Tldr: faster cpu gives more fps than the slower cpu. Aka water is wet
TBH, chances are that the entire RT pipeline is a single thread, so the better ability to handle the RT calls (at least for Metro) is probably going to be present on most 7k chips.
This isn't simply a case of a faster CPU being faster, else we would see similar gains in non-RT titles against Intel and Ryzen 5k. Instead we see the 7k holding up well in regular loads while blasting out much higher spreads under RT loading.
Personally, I am both super surprised and super excited. I would never have expected such an impact from the CPU on RT. Very interested to see how this plays out in the next few weeks as more people get more time playing around with the chips.
You can go to the next pages and see the 7900X losing to the 12600K in RT performance (CP2077, Crysis 3). The results are too random to draw that conclusion.
We don't know for sure what settings the HardwareTimes review used, but we *do* know that they reported an average of 132 FPS on the 12700 and 138 FPS on the 7900 with a 3080 Ti. Meanwhile the testing you are linking to claims 220-240 FPS for those chips at 1440p and 290-300+ FPS at 1080p with a 3090.
There is definitely a difference between the 3090 and the 3080 Ti, just as there is between a 6800 XT and a 6900 XT. But not a 2-3x FPS difference.
This seems to imply that either Hardware Times used higher settings, DigitalFoundry used lower and less demanding settings, or a mixture of both.
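Just to put rough numbers on that gap, here is a quick back-of-the-envelope sketch using the figures quoted above; the 230 and 295 midpoints are my own rough picks from the 220-240 and 290-300+ ranges, not reported values:

```python
# Back-of-the-envelope comparison of the two reviews' figures quoted above.
# HardwareTimes numbers are on a 3080 Ti; the other test is on a 3090.
ht_12700 = 132        # HardwareTimes, 12700 + 3080 Ti (avg FPS)
ht_7900 = 138         # HardwareTimes, 7900 + 3080 Ti (avg FPS)
other_1440p = 230     # my rough midpoint of the 220-240 FPS claim (assumption)
other_1080p = 295     # my rough midpoint of the 290-300+ FPS claim (assumption)

print(f"7900 vs 12700 (HardwareTimes):  {ht_7900 / ht_12700:.2f}x")   # ~1.05x
print(f"3090 @1440p vs 3080 Ti figure:  {other_1440p / ht_12700:.2f}x")  # ~1.74x
print(f"3090 @1080p vs 3080 Ti figure:  {other_1080p / ht_12700:.2f}x")  # ~2.23x
```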
Either way, we are still seeing at least two sources now (yours and mine) with at least one-off examples of RT loads on 7k being abnormally and unexpectedly high.
That fact can't be argued, and so far it seems hard to explain.
On a side note, 3DMark has an API draw call test. I would love to get some results collated for that and can offer 3900X and 5950X results; if anyone has the app and an Intel 11th or 12th gen we can add those, and then when someone here has 7k we could see how they stack up in raw draw call testing. Sadly 3DMark doesn't make the draw call results searchable on their site for easy comparison, so we would all have to work out sharing the results ourselves; something like the rough collation sketch below is what I have in mind.
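This is just a throwaway Python sketch of what I mean by collating; the score values are made-up placeholders, not real 3DMark results, and the CPU list is only an example:

```python
# Throwaway sketch for collating posted draw call test results.
# All scores below are made-up placeholders, NOT real 3DMark numbers --
# replace them with the draw-calls-per-second figures people actually post.
results = {
    "Ryzen 3900X":   20_000_000,  # placeholder, would be my own run
    "Ryzen 5950X":   25_000_000,  # placeholder, would be my own run
    "Core i7-12700": None,        # waiting on someone with 12th gen
    "Ryzen 7900X":   None,        # waiting on someone with 7k
}

baseline_name = "Ryzen 3900X"
baseline = results[baseline_name]

for cpu, score in results.items():
    if score is None:
        print(f"{cpu:>14}: (no result posted yet)")
    else:
        print(f"{cpu:>14}: {score:,} draw calls/s  ({score / baseline:.2f}x vs {baseline_name})")
```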
Not really worth paying attention to; wait to see properly set up systems running at their best before making any decisions.
I mean, hell, my 9900k with a 3090 matches half those numbers!
To me, that shows the testing was half-arsed.
So it seems it's poorly optimised in the game engine, as it needs to be brute-forced to work as it should.
As I said, wait to see some numbers where both systems are properly set up.
Possibly related (or not) to the overall theme of "CPU plays a big part in the underlying rendering pipeline of something that mostly adds GPU demands": I play Minecraft with shaders. It's not ray tracing, but I noticed a similar peculiarity recently.
The 1.18 update of the game brought increased performance demands. It also brought some new settings. Playing with the same shaders I did in 1.16 (I skipped 1.17, but that's not essential here as 1.18 was the shift), I noticed a rather substantial performance drop in some areas of my world, namely my home village. Playing with some of the new settings, one of which is entity distance, I found some immediate results. This is typically something that's known to be CPU-reliant, so I dropped it from 100% to 50% (the new setting is a slider from 50% to 500% in steps of 25%), and performance went up. It was not quite as high as it was in 1.16, but at least playable in those spots.
Now normally, I'd chalk this up to "the new version is just more demanding and can't be compared to the old one" and be done with it. After all, one of the things 1.18 did was increase the ground depth so there may be more entities (monsters, namely) spawning underground. Except... if I play WITHOUT shaders, I can have that setting at 100% and that performance drop doesn't occur. Huh?
It made it hard to tell what, exactly, the limitation is. I think the numbers suggest it's the GPU, despite entity distance being a typically CPU-bound setting, which would make sense as the drop isn't present when shaders aren't involved. But entities are... definitely a classic CPU thing with Minecraft. And the GPU does shaders just fine when that entity thing isn't the limitation. Very confusing. The only way I can think of to figure it out for sure would be to find someone else with a Zen 2 CPU and a much more powerful GPU than my GTX 1060, have them test exactly my world in the same spots with exactly the same shaders/settings, and see if the drops are also there to the same extent. If so, it's CPU, and if not, it's GPU. I'm hoping the numbers suggesting GPU are correct. If it's CPU, I guess there's no CPU in existence that gets my village performing nicely?
Both Minecraft and the shaders for it are known to be... not efficient (not saying the shader authors do bad jobs, just that Minecraft itself isn't made for it, so getting it working takes a lot). Minecraft itself is also very random due to the random generation of worlds, so finding performance results for it is generally hard. I'm sort of hoping a GPU upgrade helps with this particular thing, but even if not, I'm hoping it brings my frame rate up enough to let me raise the render distance when not in an "entity distance obscurely tanking frame rate" situation. Thus far, just my one village does it, so I swap the setting down when in it. It changes nothing visually in my village, but it's a slight annoyance having to change it when entering/leaving my village.
Anyway... sorry for a bit of rambling. But my point is this thread reminded me of that. So while my first post expressed surprise that the CPU could play a big part in something that mostly adds GPU demand, I probably shouldn't have been as surprised because maybe I've experienced it myself (just not with ray tracing).