So like... do correct me if I'm wrong on something (because maybe I am), but here's the basis of what I meant.
1. Larger dies typically have lower yields and higher failure rates? (That's on top of costing more to produce.)
2. Chip fabrication capacity is (at least sometimes) finite? That is, multiple products are vying for limited production time, so even if two products aren't the same, they're still competing with one another, in a zero-sum way, for that limited capacity, no?
3. The AI business is absolutely thriving for nVidia?
Are these three things not correct?
If so, then why is it unreasonable to presume that nVidia would be prioritizing AI products when production time is limited and AI products earn them so much more profit? This goes double since the gaming market has seemingly shown it will accept whatever nVidia puts out. The GeForce dies may not be direct scraps of AI accelerators, but that's never what I said. I was implying that the gaming market is effectively getting the scraps' share of both nVidia's priorities and its finite production time.
Is this not correct? From my limited understanding, that's how things appear to be.
Is AMD not forgoing the high end next generation? Fair enough if you want to say that it remains to be seen, but it seems likely to me from all the information we've been getting. If they happen to come out with something matching the RTX 5090, I'll admit this part was wrong.
Did AMD not fail to match nVidia at the high end this generation as well as they did in the prior generation?
Both things seem likely and true to me. So what else do you call that if not "giving up" at the high end?
It doesn't matter if they released the 7900 XTX and admitted beforehand that it wouldn't match the RTX 4090. That's like saying that if you preemptively confess to something, it means it didn't happen. Like, sure, it wasn't intended to compete with it. That's not the point. The point is that nVidia is unmatched at the high end and AMD is seemingly further ceding any attempt at it this time.
There WAS no gain in performance per watt with the 3000 series.
Up until the 2xxx series it was: top models used about 250 W, mid-range x70 models about 130 W, and budget x60 models about 100 W.
Each gen got roughly a 40% performance gain while TDP stayed the same.
Then starting with the 3000 series, the TDP started to rise, basically 40% per gen too.
Meaning the performance-per-watt gain between the 2000 series and the 4000 series was less than 10%, where it should have been close to 100%.
And now the 5000 series continues this madness of no real gain.
A 5090 that's 40% faster than the 4090 but also uses 600 W would net out at nearly 0% performance-per-watt gain.
Do note the ~40% gain per generation has held: 2000 → 3000 about 40%, 3000 → 4000 about 40%.
So that increase has not changed from before they started overcharging in more ways than one.
They're just cheaping out and not putting in the R&D budget to get that +40% without increasing TDP, as they did up until the 2000 series.
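To put rough numbers on that, here's a quick sketch of the compounding math, assuming the round figures above (~40% more performance per generation, flagship TDPs of 250 W → 350 W → 450 W, and the rumored 600 W for a 5090). These are ballpark inputs, not measurements:

```python
# Ballpark perf-per-watt math using the round numbers above (assumptions, not measurements):
# ~40% raw performance gain per generation, flagship TDPs of 250 W, 350 W, 450 W,
# and a rumored 600 W for a hypothetical 5090.

flagships = [
    ("2000 series", 1.00,       250),
    ("3000 series", 1.40,       350),
    ("4000 series", 1.40 ** 2,  450),
    ("5000 series", 1.40 ** 3,  600),  # rumored performance and TDP
]

base_perf, base_tdp = flagships[0][1], flagships[0][2]
base_ppw = base_perf / base_tdp

for name, perf, tdp in flagships:
    ppw = perf / tdp
    print(f"{name}: perf x{perf:.2f}, {tdp} W, "
          f"perf/W {(ppw / base_ppw - 1) * 100:+.0f}% vs 2000 series")

# With flat 250 W TDPs, two generations of +40% would have meant roughly
# +96% perf/W by the 4000 series instead of the ~+9% the loop prints above.
```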
AMD has certainly given up on competing at the high-end. When their leadership says as much publicly, I'd tend to think that is actually the case. I don't even think they are intending to compete with the x80 level of card as of now so you aren't wrong in your understanding of what has been said and reported on regarding AMD's next generation of GPUs.
From what has been reported on regarding the RX 8000 series thus far, I don't think we're going to see an "8900 XT" or "8900 XTX" tier card, both of which would have been based on (what appears to be the cancelled) Navi 41. The only things showing up in the supply chain are four GPUs based on two chips: Navi 44 and Navi 48.
IMO they know they have the potential to get squeezed if Battlemage has a more reasonable launch, and they are trying to focus on playing catch-up on features (most specifically ray tracing) in the largest area of the market. We're six years beyond the launch of RTX cards. Most new games are incorporating RT more heavily now. They know they can't continue to downplay the RT performance disadvantage, and with RDNA4 they look to be trying to get closer to the x60 and x70 tier of RT performance for the low-to-mid range (i.e. the bulk of the market), where they have a better shot at winning on value.
Soooo 350 W/450 W = 0.6 in your mind? Weird that my calculator shows that working out to about 0.78, i.e. the 4090's 450 W is roughly a 29% TDP increase over the 3090's 350 W, not 40%, so my calculator must be broken or something.
Also, from the 20-series Titan RTX at a 280 W TDP to the 3090's 350 W: 280 W/350 W = 0.6 in your mind? My calculator also seems to be off, showing that as 0.80, i.e. a 25% TDP increase, while the cards delivered an aggregate average performance increase of about 42%. Weird how that works out to an increase in performance per watt, like Illusion of Progress suggested. Do you have a calculator you could loan me to "fix" my math?
EDIT: Also, the 5090 leaks seem to indicate it will sit at 450 W (same as the 4090) to 500 W, and the performance increase is looking like one of the largest gen-on-gen jumps, around 70%.
From the 20-series generation to the 30-series generation, performance increased ~42% in aggregate against a TDP increase of about 25% (280 W → 350 W at the flagship level).
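For anyone who wants to check the arithmetic, here's a quick sketch using the figures quoted above (280 W, 350 W, and 450 W TDPs plus the ~42% aggregate performance gain; all round numbers from this thread, not measurements):

```python
# Quick check of the TDP and perf/W figures quoted above
# (inputs are the round numbers from this thread, not measurements).

def pct_increase(old: float, new: float) -> float:
    """Percentage increase going from old to new."""
    return (new / old - 1) * 100

# Titan RTX (280 W) -> 3090 (350 W) -> 4090 (450 W)
print(f"Titan RTX -> 3090 TDP: +{pct_increase(280, 350):.0f}%")  # ~25%, not 40%
print(f"3090 -> 4090 TDP:      +{pct_increase(350, 450):.0f}%")  # ~29%, not 40%

# ~42% more performance on ~25% more power for the 20 -> 30 series flagships
perf_gain = 1.42
tdp_ratio = 350 / 280
print(f"20 -> 30 series perf/W: +{(perf_gain / tdp_ratio - 1) * 100:.0f}%")  # ~14%
```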
I was just clarifying that my statement was referring to the changed process node between the RTX 30 series and RTX 40 series. I'm not sure how "between the RTX 30 series and RTX 40 series" got interpreted to include the RTX 20 series at all, so I was confused as to why that comparison was even raised.