It'll be a LONG time before "twice the performance at the same TDP" ever becomes true again.
In my opinion it's only worth upgrading for big gains and not some mid crap, but I get your point on one part. The other I don't: the part where you say the 4xxx is a 350W card.
My 4090 doesn't really go above 250W, depending on the game.
And maybe you should lock your FPS?
The only way past that is to increase power and cooling to get better performance out of each core.
It's absolutely fun to discuss changing trends and how the market is shifting for discussion's sake, sure. I do that myself.
But there is no "standard" for how things will advance, and as it is, it does seem like chips (CPUs and GPUs) have been pushing power draw up in order to achieve their gains lately.
What will happen for that next generation? Time will tell.
I'm familiar with them, and they often seem revisionist about what really happened.
Case in point....
These performance spreads were definitely not typical. This is way off.
In what world was the x60 typically less than half the performance of the x70, but over half the cost? That would suggest the x60 had worse value than the x70, when the reality was the opposite: the x60 has almost always, if not always, been the best value in a generation.
Using TechPowerUp's relative performance and comparing the various tiers from within generations to see what trends emerge, it seems it was almost always more like this...
x80 is to be considered 100% of the performance of a given generation because it was traditionally the flagship and released first. So if there's a "baseline" for performance of a generation, this is often it.
x60 was typically somewhere around two-thirds (~66%) the performance of the generation baseline. For some examples, the GTX 1060 was 62% of the GTX 1080, and going back to both Kepler generations and Fermi, they were even closer. The GTX 660 and GTX 760 were 70%+ (!) of the respective x80 performance, and the GTX 560 Ti was almost three quarters of the GTX 580!
x50 is the biggest wild card, as it varies a lot. Sometimes it's as low as half of the x60 (Kepler and Pascal), which would put it around one-third (33%) of the performance of the generation baseline. Other times it's not much slower than the x60 at all, reaching up to 80% of the x60's performance (Maxwell generation).
Then the Ti models would typically come later as refreshes and be a bit faster than their base model.
So if the x80 isn't even typically double the x60, then what alternate reality are you living in where the x70 could be?
I'm also not sure why you listed the x90 for historical reference because other than the dual GPU models of the past, it was only introduced last generation. And since the RTX 3090 was little more than an RTX 3080 Ti with more VRAM, I'm not sure why you're putting such a performance spread between those tiers too. It was certainly not twice the performance of the x70 either.
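To make those ratios concrete, here's a minimal sketch of the normalization being described. The relative-performance numbers are illustrative placeholders in the spirit of TechPowerUp's summaries (only the GTX 1060 ≈ 62% figure comes from the post above), so swap in real data before drawing conclusions:

```python
# Normalize each card against the x80 of its generation (the "baseline").
# Numbers are illustrative placeholders, except the ~62% quoted for the GTX 1060.
pascal_relative_perf = {
    "GTX 1080": 100,   # generation baseline
    "GTX 1070": 80,    # placeholder
    "GTX 1060": 62,    # figure quoted above
    "GTX 1050": 35,    # placeholder
}

baseline = pascal_relative_perf["GTX 1080"]
for card, perf in pascal_relative_perf.items():
    print(f"{card}: {perf / baseline:.0%} of the x80 baseline")
```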
We can disagree.. and well, I might be doing the x60 of the very old series a bit dirty, because basically I never owned anything below an x70, nor did any gamer around me. x60 cards were looked down on as "budget crap", x70s were the staple for everyone, and even the non-Ti x80 was often seen as "just high mid-end".
-> but we can look up the GFLOPS for each chip, and you'll see it matches my list pretty neatly.
And yeah, I left the x50 out. Most generations did not have one, and when they did, it was generally a "for office use" card, i.e. for when you had a CPU without integrated graphics but needed nothing fancy.
As for the x90: it did exist in the older generations, before Titans became a thing (not every generation had one, but some did), and I tend to see Titans as basically rebranded x90s, or the reverse, x90s as rebranded Titans: the card above the x80 Ti, which was always the top card.
Those Titans were usually much more expensive (back when 1000-1100 euro for a GPU seemed like crazy money), so most people stuck to the x80 Ti, which was the best of the normally numbered cards in each series.
Starting with the 3xxx series they stopped launching Titans but reintroduced the x90, so I see those as Titans.
Some series had two separate launches of Titans, which would be what the x90 Ti is.
What I miss in the 4xxx series is an x80 Ti. Classically that's one of the cards a series launches with: it launches with the x70 and x80 Ti, then 6 months later an x80 appears, and 6 months after that an x60 and a Titan (or x90) are added.
But OK, fact remains: ALL the best cards up to the 2xxx series, including the Titans, never exceeded 250W (OK, one Titan had 270W, but then I have to point at the many that used 230W, so let's call it 250W +/- 20W for all the top models of both the regular lineup and the Titans).
Which still makes today's TDPs insane.
Given how expensive gas is, that's not an option either.. but a bunch of solar panels might do.
-> given how companies now ask you to PAY for any surplus energy you feed back..
Yeah, that's right..
Let's say on an average day:
your solar panels produce 10 kWh over 8 hours of daylight; you're at work, so you only use 2 kWh of that.
In the morning/at night, outside sun hours, you use 9 kWh.
In the past they looked at the total: 10 - 2 - 9 = -1, so you got charged for 1 kWh.
Now they do it differently:
each kWh you feed back COSTS you a little money, say 6 cents, so that's 8 x 6 = 48 cents.
Then in the evening/at night you pay for what you use, so that's 9 kWh at about 60 cents a pop, including taxes.
So since you cannot compensate high use at night with overproduction during the day, and you now actually have to PAY to deliver energy back to the grid, you'd better use it all yourself.
I.e. PC on during daylight, off at night.
that to me means no gains at all..
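A quick sketch of that billing difference, using the example numbers from the post above (the 6-cent feed-in charge and 60 cents/kWh are the poster's figures, treated here as assumptions):

```python
# Old net-metering vs. new feed-in billing, using the example numbers above.
produced    = 10    # kWh generated over ~8 daylight hours
used_day    = 2     # kWh consumed while the sun is up
used_night  = 9     # kWh consumed outside sun hours
price_kwh   = 0.60  # EUR per kWh bought from the grid (incl. taxes), per the post
feed_in_fee = 0.06  # EUR charged per kWh fed back to the grid, per the post

# Old scheme: production and consumption netted against each other.
net_use  = used_day + used_night - produced      # 1 kWh shortfall
old_bill = max(net_use, 0) * price_kwh           # pay only for the shortfall

# New scheme: surplus fed back costs money, night use is billed in full.
surplus  = produced - used_day                   # 8 kWh pushed onto the grid
new_bill = surplus * feed_in_fee + used_night * price_kwh

print(f"old scheme: {old_bill:.2f} EUR/day, new scheme: {new_bill:.2f} EUR/day")
```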
They should just pump a little more R&D into their graphics card division, now that they've raised prices like 2 or 3 times more than inflation alone would warrant.
Sure, they profit off AI.. but we gamers deserve R&D too..
Just pumping more voltage into it and cooling it better is overclocking.. we can do that ourselves.
What I need them to do is actually produce silicon with better performance per watt.
high ones to x90, and lower ones down to x50 or mobile versions
The RTX 4060 has the same performance as the RTX 3060, and its TDP is basically the same as the RTX 3050's. That's roughly a 40% power efficiency improvement.
The only explanation I can see for the TDP increase at the high end is that they removed SLI support. It is interesting that the RTX 6000 Ada is better than the RTX 4090 while being a 300W card. Obviously they have higher-binned chips, but isn't this difference just too big?
Will the march continue? It depends on AMD.
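For what it's worth, a rough sanity check of that efficiency figure; the TDP values below are assumptions based on commonly cited board power numbers and worth verifying against the official specs (the result lands in the same ballpark as the ~40% claim, depending on which figures you use):

```python
# Rough perf-per-watt comparison, assuming roughly equal performance
# between the two cards. TDP values are assumptions, not verified specs.
tdp_rtx_3060 = 170  # W, commonly cited board power (assumption)
tdp_rtx_4060 = 115  # W, commonly cited board power (assumption)

# At equal performance, the efficiency gain reduces to the power ratio.
gain = tdp_rtx_3060 / tdp_rtx_4060 - 1
print(f"~{gain:.0%} better performance per watt at similar performance")
```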
Sounds super easy. Have you considered starting your own GPU manufacturing company? Surely you'll be able to produce a GPU with better performance per watt than these power-hungry Nvidia chips.
I'd say it depends more on Intel at this point. AMD is getting squeezed, and they are still way behind on RT performance/capability. If Intel delivers what Battlemage looks like it will, and they can finish cutting their teeth on driver support for solid day-one playability in modern games, then AMD is going to be in a really bad spot: likely driven to target the low-to-mid end, and likely struggling to be price competitive with Intel in that range. At least until Intel starts gaining market share and decides to price their GPUs at the points they probably should be at.