Raoul 1 MAR 2024 at 6:41 a.m.
RTX 5090 70% faster vs RTX 4090 ?!?
Are 750W cards incoming, or are they just going to show us some slides of a very specific optimised use case in a single program?

https://www.pcgamesn.com/nvidia/geforce-rtx-5090-performance

Edit: 675W*
Last edited by Raoul; 4 MAR 2024 at 4:04 a.m.
Illusion of Progress 25 NOV 2024 at 11:39 p.m.
Originally posted by The_Abortionator:
First, what's with this delusion people have that AI cards for servers have ANYTHING to do with gaming cards? Where did this nonsense come from? What makes you think this?

It's like the clowns saying 4090s were hard to get because all the dies were being used for the H100. They aren't the same die, not even related.

AI accelerators DO NOT have the gaming components needed for gaming because that's not what they're for. Gaming cards aren't "AI scraps". That makes ZERO SENSE.
Maybe I should have phrased it more carefully, like "getting the scraps of nVidia's priority" (which is the AI sector), to avoid confusing people who might read into it. I never said AI accelerators and consumer GeForces were exactly the same.

So like... do correct me if I'm wrong on something (because maybe I am), but here's the basis of what I meant.

1. Larger dies typically have lower yields and higher failure rates? (That's on top of costing more to produce; see the yield sketch below.)

2. Chip fabrication capacity is (at least sometimes) finite? That is, multiple products are vying for limited production time, so even if two things aren't the same, they are still, in a zero-sum way, competing with one another for that limited production time, no?

3. AI business is absolutely thriving for nVidia?

Are these three things not correct?

If so, then why is it unreasonable to presume that nVidia would be prioritizing AI products when production time is limited and AI products earn them so much more profit? This goes double since the gaming market has seemingly shown it will accept whatever nVidia puts out. The GeForce dies may not be direct scraps of AI accelerators, but that's never what I said. I was implying that the gaming market is effectively getting the scraps of both nVidia's priorities and its finite production time.

Is this not correct? From my limited understanding, that's what things seem to be.
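To put a number on point 1, here's a minimal sketch using the classic Poisson yield model, Y = exp(-A * D0). The die areas and defect density below are illustrative placeholders, not real NVIDIA/TSMC figures:

```python
import math

def poisson_yield(die_area_mm2: float, defects_per_mm2: float) -> float:
    """Expected fraction of defect-free dies: Y = exp(-A * D0)."""
    return math.exp(-die_area_mm2 * defects_per_mm2)

D0 = 0.001  # assumed defect density (defects per mm^2), illustrative only

for name, area_mm2 in [("mid-range die", 200), ("halo die", 600)]:
    print(f"{name} ({area_mm2} mm^2): ~{poisson_yield(area_mm2, D0):.0%} good dies")
# mid-range die (200 mm^2): ~82% good dies
# halo die (600 mm^2): ~55% good dies
```

So a 3x larger die doesn't just use 3x the wafer area; a larger share of those dies is also defective, compounding the cost per good die.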
Originally posted by The_Abortionator:
Second what does "AMD has given up" even mean?
Is AMD not forgoing the high end next generation? Fair enough if you want to say that it remains to be seen, but it seems likely to me from all the information we've been getting. If they happen to come out with something matching the RTX 5090, I'll admit this part was wrong.

Did AMD not fail to match nVidia at the high end this generation as well as they did in the prior generation?

Both things seem likely and true to me. So what else do you call that if not "giving up" at the high end?

It doesn't matter if they released the 7900 XTX and admitted beforehand that it won't match the RTX 4090. That's like saying that if you preemptively confess to something, it didn't happen. Like, sure, it wasn't intended to compete with it. That's not the point. The point is that nVidia is unmatched at the high end and AMD is seemingly further ceding any attempt at it this time.
De Hollandse Ezel 26 NOV 2024 at 4:21 a.m.
Originally posted by Illusion of Progress:
Originally posted by Komarimaru:
Everyone thought the 4090 was going to be a 600+ watt card at launch; it's 450 watts.

I'd leave speculation to just that, and wait and see what the power draw and power efficiency are; no predictions have ever been right about it.
I agree with the "wait and see" approach, and while rumors can be all over the place, they tend to firm up as launch draws closer. There's also sound reasoning as to why we shouldn't expect a repeat of last time.

Namely, between the RTX 30 series and RTX 40 series, a "not insignificant" change to a more efficient process node occurred (Samsung 8nm to TSMC 5nm), and this is largely what allowed nVidia to advance in performance/watt with the RTX 40 series.

Such a shift is not going to happen this time. So any increases will instead have to come from either architecture changes (I don't think it's radically changing?) or increased power draw (which enables things like adding more cores, running a higher frequency, etc.).

The majority of nVidia's lineup (namely, x80 and below) is basically just getting scraps from the AI sector, as nVidia can sandbag so much because AMD has given up. But nVidia knows the lower SKU buyers don't upgrade as often (and also that many of them only buy nVidia), so nVidia can peddle lower performance increases, lower VRAM amounts, etc. down there and people will still eat it up. But they're going to actually need a substantial enough uplift at the halo end if they want to lure the FOMO users into ditching an RTX 3090, 4080, and even 4090 for an RTX 5090. How much is "substantial enough" to encourage that, I don't know. But there's a good chance they might have to raise the power to do that. I'm not saying it will be 600W, but I am saying I definitely won't be surprised if it's higher than it is now, even if that is up to 600W. It really depends on how much of a performance improvement nVidia intends to make it over the RTX 4090, mainly.


There WAS no gain in performance per watt with the 3000 series.

Up until the 2xxx series it was: top models use 250W, mid x70 models 130W, and budget x60 models 100W.

Each gen got a ~40% performance gain while TDP stayed the same.

Then starting with the 3000 series, the TDP started to rise by basically 40% per gen too, meaning the performance-per-watt gain between the 2000 series and the 4000 series was less than 10%, where it should have been 100%.

And now this 5000 series continues this madness of no real gain.

A 5090 that's 40% faster than the 4090 but also uses 600W would net out at nearly 0% performance-per-watt gain.

Do note the 40% gain per generation has remained: 2000 to 3000 about 40%, 3000 to 4000 about 40%. So that increase has not changed from before they started overcharging in more ways than one.

They're just cheaping out and not putting in the R&D budget to get that +40% without increasing TDP, as they did until the 2000 series.
Last edited by De Hollandse Ezel; 26 NOV 2024 at 4:26 a.m.
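To make the arithmetic in the post above concrete, here's a minimal sketch of the performance-per-watt calculation it implies. The +40% figure and the 600W number are the poster's claims and the rumor mill, not measured results:

```python
def perf_per_watt_gain(perf_gain: float, power_old_w: float, power_new_w: float) -> float:
    """Relative perf/W change when performance and power both move."""
    return (1 + perf_gain) * power_old_w / power_new_w - 1

# The post's claim: a 5090 that's 40% faster than a 4090 (450 W) at 600 W.
gain = perf_per_watt_gain(0.40, power_old_w=450, power_new_w=600)
print(f"perf/W gain: {gain:+.0%}")  # perf/W gain: +5% -- "nearly 0%", as claimed
```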
PopinFRESH 26 NOV 2024 at 5:06 a.m.
Originally posted by Illusion of Progress:
...
...
It doesn't matter if they released the 7900 XTX and admitted beforehand that it won't match the RTX 4090. That's like saying that if you preemptively confess to something, it didn't happen. Like, sure, it wasn't intended to compete with it. That's not the point. The point is that nVidia is unmatched at the high end and AMD is seemingly further ceding any attempt at it this time.

AMD has certainly given up on competing at the high end. When their leadership says as much publicly, I'd tend to think that is actually the case. I don't even think they are intending to compete with the x80 level of card as of now, so you aren't wrong in your understanding of what has been said and reported on regarding AMD's next generation of GPUs.

From what has been reported on regarding the RX 8000 series thus far, I don't think we're going to see an "8900 XT" or "8900 XTX" tier card, both of which would have been on (what appears to have been cancelled) Navi 41. The only things being seen in the supply chain are four GPUs based on two chips: Navi 44 and Navi 48.

IMO they know they have the potential to get squeezed if Battlemage has a more reasonable launch, and they are trying to focus on playing catch-up on features (most specifically ray tracing) in the largest area of the market. We're six years beyond the launch of RTX cards. Most new games are incorporating RT more heavily now. They know they can't continue to downplay the RT performance disadvantage, and with RDNA4 they look to be trying to get closer to the x60 and x70 tier of RT performance for the low-to-mid range (i.e. the bulk of the market), where they have a better shot at winning on value.
PopinFRESH 26 NOV 2024 at 5:19 a.m.
Originally posted by De Hollandse Ezel:
...
Then starting with the 3000 series, the TDP started to rise by basically 40% per gen too, meaning the performance-per-watt gain between the 2000 series and the 4000 series was less than 10%, where it should have been 100%.

Soooo 350W/450W = 0.6 in your mind? Weird that my calculator shows that working out to 0.77 (450W is about a 29% TDP increase over 350W), so it must be broken or something.

Also, from the 20-series Titan RTX at 280W TDP to the 3090's TDP of 350W... 280W/350W = 0.6 in your mind? My calculator also seems to be off, showing that as 0.80 (about a 25% TDP increase) while delivering an aggregate average performance increase of 42%. Weird how that seems to be an increase in performance per watt, like Illusion of Progress suggested. Do you have a calculator you could loan me to "fix" my math?

EDIT: Also, the 5090 leaks seem to indicate it will be at 450W (same as the 4090) to 500W, and the performance increase is looking like one of the largest gen-on-gen increases, at around 70%.
Last edited by PopinFRESH; 26 NOV 2024 at 5:27 a.m.
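For anyone checking the math in the post above, the ratios are easy to reproduce; the wattages and the ~42% aggregate performance figure are the posters' numbers, not independently verified:

```python
# TDP ratios cited in the thread (wattages as stated by the posters).
pairs = {
    "3090 (350 W) -> 4090 (450 W)": (350, 450),
    "Titan RTX (280 W) -> 3090 (350 W)": (280, 350),
}
for label, (old_w, new_w) in pairs.items():
    print(f"{label}: ratio {old_w/new_w:.2f}, TDP up {new_w/old_w - 1:.0%}")
# 3090 (350 W) -> 4090 (450 W): ratio 0.78, TDP up 29%
# Titan RTX (280 W) -> 3090 (350 W): ratio 0.80, TDP up 25%

# ~42% more performance over ~25% more TDP is still a perf/W improvement:
print(f"20 -> 30 series perf/W: {1.42 / 1.25 - 1:+.0%}")  # +14%
```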
Illusion of Progress 26 NOV 2024 at 1:47 p.m.
Originally posted by De Hollandse Ezel:
There WAS no gain in performance per watt with the 3000 series.
This is why I said "between RTX 30 series and RTX 40 series". The improvement was with the RTX 40 series.
PopinFRESH 26 NOV 2024 at 4:29 p.m.
Originally posted by Illusion of Progress:
Originally posted by De Hollandse Ezel:
There WAS no gain in performance per watt with the 3000 series.
This is why I said "between RTX 30 series and RTX 40 series". The improvement was with the RTX 40 series.
You would have been correct regardless as I showed above. Titan RTX performance:TDP -> RTX 3090 performance:TDP was an increase in performance-per-watt.

From the 20-series generation to the 30-series generation, performance increased ~42% in aggregate against a TDP increase of about 25%.
Illusion of Progress 26 NOV 2024 at 5:38 p.m.
Yeah, that definitely wasn't to say the RTX 30 series had no performance-per-watt increase over the RTX 20 series. Looking at the RTX 20 series to the RTX 30 series, it went from TSMC 12nm to Samsung 8nm (I'm looking at launch models, so I don't know if later refreshes changed this in between). That alone would have signalled to me that "there was no efficiency improvement" was almost surely incorrect.

I was just clarifying that my statement referred to the process node change between the RTX 30 series and the RTX 40 series. I'm not sure how "between the RTX 30 series and RTX 40 series" got interpreted to include the RTX 20 series at all, so it confused me why that comparison was even raised.
Last edited by Illusion of Progress; 26 NOV 2024 at 5:39 p.m.
Casey Payne 10 JAN at 1:16 p.m.
It’s hilarious reading this now 😂😂😂
