Memory speeds mean nothing when your game performance is still lower.
No, there is no magical "AMDip". There are no benchmarks showing that; in fact, looking at benchmarks shows the opposite: Intel is getting trashed. Period.
Stop making things up, this isn't the school yard in the 90s. I can literally look up every claim you fabricate. Stop embarrassing yourself.
It's real but far better on 9000 than it used to be.
Memory speeds impact raytracing, which is something I like.
I also play at 4k so the max fps at 1080p doesn't matter.
As I said, it's the lows I care about and I mainly play games that won't fit into the 3d cache, which is when the dips happen.
I really do not know why you are so angry at my choice and so defensive about a fairly widely known weakness of AMD chips. It won't even bother most people, but I hate stutters, so I picked the platform that, with heavy tweaking, will provide the experience I am after.
If you don't see or feel it, I'm happy that you enjoy your AMD chip; they have been great CPUs since the 5800X3D.
It's also notably worse than previous gens and AMD; no one should buy the Core Ultra series.
Pointing out your lunacy doesn't mean I'm mad.
Literally every single real-world benchmark and trusted entity vs a cringe lord youtuber memer, and you choose to trust the memer? Are you 12?
Plus, what makes you think RT, a GPU task, gets sped up by system memory speeds? Did you also get that from that cringe lord?
Also, if the non-X3D chips are already getting better lows than the Intel parts, how are the X3D chips magically worse during a cache miss?
None of your explanations make sense, and you have yet to provide a reliable source for anything. The guy you mentioned talks about benchmarks like they're some sort of conspiracy to promote AMD.
It's pretty much the entire world saying and demonstrating one thing, and you plus that nobody on YouTube saying another.
i highly doubt intel will pass amd in the next few years
at least for core performance and in games
intel figured out how to smack more cores on cpus for better overall performance in highly threaded tasks
rt is not a purely gpu task
it can be done on cpu or any gpu without 'rt' cores
like 20 years ago i did rt renderings on cpu only, it took hours per frame but did it
Techspot's numbers in the 14 game average show that the 9800x3D has 1% lows of 149, compared to 116 for the 14900k[www.techspot.com], and even the 7800x3D beats it in this regard with 1% lows of 136.
Maybe the KS is a little stronger than the K since it's binned to clock higher, and maybe you can improve it a little more with some tuning, but I doubt you're overcoming a 33-frame difference that makes the 9800x3D on average 28.4% stronger than the 14900k in the 1% lows, even with relatively extreme overclocking measures. And even if you could, the 9800x3D can be overclocked too, so we're probably losing as much ground as we gain by bringing that into the equation.
In the test suite, there isn't a single game where the 14900k beats the 9800x3D in the 1% lows.
Even in Hitman 3, which is a rare example of a game that favors the 14900k and the 285k over the 7800x3D, the 9800x3D has the highest 1% lows.
The 9800x3D's worst showing in this regard is Star Wars Outlaws, where the 1% lows are tied with the 14900k at 109 FPS, but even here we're seeing the Zen 4 3D V-Cache chips do better.
We're also seeing that the 14900k's 1% lows in Homeworld 3 are real, real bad. Most of the Ryzen chips used as a basis of comparison are getting between 43 and 47 FPS on the 1% lows, but the 14900k manages only 30 FPS, while the 9800x3D is performing like a champ with 63.
Tom's Hardware[www.tomshardware.com] is showing 1% lows of 148 at stock settings for the 9800x3D in the 13-game average, and enabling PBO improves that to 151.
On the Intel side of the equation, the 14900k and the 285k are both getting 1% lows of just 111. The 14900k doesn't win in any of the per-game breakdowns here. The only game where the 14900k comes close to the 9800x3D is F1 2024, but the 14900k still loses by a single frame (169 vs. 168).
Gamers Nexus[gamersnexus.net] doesn't provide a mean average, but their per-game breakdown isn't showing anything much different.
The 0.1% lows might be another story, but I'm not seeing it in the Gamers Nexus charts (which are the only ones of these three that show that metric). The lows are surprisingly bad for the 14900k in Warhammer III.
You're also experiencing 0.1% lows only about a tenth as often as 1% lows, and if the 1% lows are never any better and usually much worse on Intel processors, then it doesn't really matter: the whole point of evaluating the lows is consistency, and the 1% lows already show Intel losing soundly on that front. You'd only bother comparing 0.1% lows if the 1% lows were close enough to call.
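For anyone unclear on what these metrics actually measure, here is a rough sketch of how 1% and 0.1% lows are commonly derived from a capture of per-frame times. The exact math varies between capture tools, so treat the function and the sample numbers below as illustrative, not as any site's actual method:
[code]
# Rough sketch: deriving "1% low" and "0.1% low" FPS figures from per-frame
# times (in milliseconds). Real capture tools differ in the exact math; this
# just illustrates the common "average of the slowest frames" idea.

def low_fps(frame_times_ms, fraction):
    """Average FPS over the slowest `fraction` of frames (e.g. 0.01 for 1% lows)."""
    worst_first = sorted(frame_times_ms, reverse=True)   # slowest frames first
    count = max(1, int(len(worst_first) * fraction))     # how many frames to average
    avg_ms = sum(worst_first[:count]) / count             # mean time of the worst frames
    return 1000.0 / avg_ms                                # convert ms/frame to FPS

# Hypothetical capture: mostly ~6.7 ms frames (~150 FPS) with a few hitches.
frames = [6.7] * 990 + [12.0] * 9 + [30.0]

print(f"1% low:   {low_fps(frames, 0.01):.0f} FPS")    # dragged down by the 12 ms hitches
print(f"0.1% low: {low_fps(frames, 0.001):.0f} FPS")   # dominated by the single 30 ms spike
[/code]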
Granted, I haven't done any evaluation of the differential between the 1% lows and the average framerate on these processors, but insofar as the 9800x3D is concerned, that's what frame limiters are for. Currently, the 9800x3D is the undisputed king of gaming, and Intel is the embittered former champion looking for a rematch it just can't seem to win. It's just not in the cards. Intel is done, son, at least insofar as game performance goes.
This generation was Intel's best chance to finally recapture the championship, since Zen 5 stalled in gaming performance for the most part, but Intel fumbled the ball with an overall performance regression.
Why do I trust Frame Chasers' numbers? Because his results match mine on a fully overclocked system, not stock. For 99% of people, AMD IS BETTER.
As for stability / smoothness of gameplay, the smaller the gap between the lows and the highs, the better the experience.
In the examples you selected, the drop in frame rate on the AMD chips is larger than on Intel, by quite a margin.
I do not care about max fps, I care that it is consistent; the smaller the drop, the better.
If one drops 40 fps and the other 80, one is far more noticeable than the other. By percentage, AMD's drop is larger, and again, you are not showing the 0.1% numbers.
These sites are also testing everything at stock, not tuned and overclocked to get 100% out of the hardware, which is where I have MY system. The vast majority do not do that, so AMD is, once again, the far better option, just not for me or those like me who tune and overclock and want smooth, consistent frame times.
AMD drops far more frames, by percentage, when it does drop than Intel does, and if you go by actual gameplay, not just final benchmark numbers (and include the 0.1% lows), it tends to do it more often when the systems are fully set up.
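To put the "size of the drop" point in concrete terms, here is a trivial sketch with made-up numbers (placeholders, not measurements from either vendor) showing how the same absolute dip can be a very different percentage dip depending on the average it falls from:
[code]
# Toy illustration only: the numbers are placeholders, not benchmark results.
# The idea is that how jarring a dip feels relates to the drop as a share of
# the average frame rate, not just the absolute FPS lost.

def drop_percent(average_fps, low_fps):
    return (average_fps - low_fps) / average_fps * 100.0

# Hypothetical systems: both lose 40 FPS during a dip, from different baselines.
for label, avg, low in [("System A", 200, 160), ("System B", 120, 80)]:
    print(f"{label}: {avg} -> {low} FPS, a {avg - low} FPS drop ({drop_percent(avg, low):.0f}%)")
[/code]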
I am not trying to poop in your cereal or claim Intel is better for most, only that in MY personal use case, it suits ME better.
I do not care who makes it, I simply want the best experience for how I use it.
Nothing has gone over our heads, you are simply making things up.
There's no magical "fully tuned" 14900KS system with unicorn properties. Not only that, but you are also acting like AMD systems can't have tuned memory and be overclocked.
So I'll put this in short, simple terms: what you are claiming is not seen or measured by ANY trusted establishment. It's you and a small-time YouTube memer making one claim vs the likes of Gamers Nexus, Hardware Unboxed, JayzTwoCents, LevelOneTech, and literally the rest of the world showing the opposite. So no, I'm not buying that BS.
It's just him and a rando youtuber making things up.
rt is not a purely gpu task
it can be done on cpu or any gpu without 'rt' cores
like 20 years ago i did rt renderings on cpu only, it took hours per frame but did it
I'm aware of what RT is and how it works; that's why I called him out, because RT in games is LITERALLY done on the GPU. If it weren't, the game would be sub-1 FPS.
you can do rt on non rt gpus
many amd gpus without nvidia rt cores can do games just fine
nvidia rt gpus just have the shortcuts to do it faster or more efficiently
the cpu creates the scene and gives all that info to the gpu, and the gpu renders the frames
fps is often limited by either the cpu being unable to give new info to the gpu fast enough, so it can't make a new frame (gpu goes idle since it has nothing new to do)
or the gpu can't draw fast enough with the info it's being given (gpu at 100% load and fps still falling short of the refresh rate)
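A rough way to picture that relationship, as a deliberately simplified model with made-up timings (ignoring frame pacing, queue depth, and everything else a real engine does):
[code]
# Simplified model of a pipelined frame: the CPU prepares the next frame's
# work while the GPU renders the previous one, so the effective frame time is
# roughly whichever side takes longer. Timings below are made up.

def estimated_fps(cpu_ms_per_frame, gpu_ms_per_frame):
    frame_ms = max(cpu_ms_per_frame, gpu_ms_per_frame)   # the slower side sets the pace
    return 1000.0 / frame_ms

print(estimated_fps(cpu_ms_per_frame=5.0, gpu_ms_per_frame=10.0))   # GPU-bound: ~100 FPS, GPU near 100% load
print(estimated_fps(cpu_ms_per_frame=12.0, gpu_ms_per_frame=8.0))   # CPU-bound: ~83 FPS, GPU partly idle
[/code]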
Sorry, can you just google how this stuff works, please? For real. AMD GPUs don't need "Nvidia RT" (whatever you think that means) to do RT; AMD has their own hardware for that.
Ray tracing in games happens ENTIRELY on the GPU. Again, look this stuff up.
nvidia 2xxx and later gpus have 'rt' cores that have shortcuts for rt tasks
amd has similar things for hardware accelerated rt work
for the gpus without hardware rt support, they have to do it other ways that take longer
rt can also be done on cpu
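As a minimal illustration of that point, here is a toy ray-sphere intersection test in plain Python, the basic building block of ray tracing, running entirely on the CPU. It is nothing like a production renderer; real games fire enormous numbers of these per frame, which is why dedicated GPU hardware is what makes it playable:
[code]
# Toy example: one ray-sphere intersection test, the core operation of ray
# tracing, computed entirely on the CPU.
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance to the nearest hit in front of the origin, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                              # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t > 0 else None                  # nearest intersection ahead of the ray

# A ray from the origin pointing down -z, against a unit sphere 5 units away.
print(ray_hits_sphere((0, 0, 0), (0, 0, -1), (0, 0, -5), 1.0))   # prints 4.0
[/code]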
feel free to look up how physx worked, there were drivers to use cpu or gpu
there was even a standalone physx driver to use with amd cards that did not support nvidia's physx