This is nothing new; it's why tech like DLSS and frame gen became so prominent. You also decided to ignore the development seen in ray tracing plus the previously mentioned techs.
Just look at how much more complex GPUs are now vs then and it should be very plain and easy to see why gains slow down.
Not sure if your percentage gains are completely accurate either, but I guess it depends on how you decided to measure them.
Cards now are far more capable than ever at better prices with more features so it isn't all bad.
Dynamic voltage and frequency scaling played the main role here.
WRONG SON......we are getting to the end of transistor shrink and we will not see the gains we got from going from 45nm to 28nm ever again.....going from 5nm to 3 or even 1nm will not be that big of a jump.....
that is a major factor at this point....ROP stacks, VRAM limitations and GPU layout are going to become major factors
chiplets, just like in CPUs, will be a bigger factor as time goes on, simply because they will not be willing to risk making GPUs twice the size of a 5090 in a monolithic die.....
rim-mount GPUs are also in the pipeline, allowing 2 GPU dies to be stacked back to back, BUT they need cooling on both sides.....(this one could be a pipe dream for consumer-level cards just due to production costs)
...game developers have to up their craft and can't rely on recycling the same ideas, just with another visual upgrade. ;-) Plus, platforms such as the Commodore 64 had been in commercial use for over a decade. Coders got to know the machine inside out and by the end could squeeze far more out of every precious kilobyte than at the beginning. The most recent tech is by definition always the least proven and tested, as bugs, crashes and driver issues amply demonstrate.
Based on Steam account age alone, I'm very likely older than you. Second, you said I am wrong, then typed all that agreeing with everything I said....
Are you OK?
but every gen only gains around 5-10%, so not really that huge
pushing beyond the silicon's envelope takes a lot more power and gives much less performance per watt
so anything that's + performance is an improvement, not worse
at 45-38nm, the node names were the actual size of the transistors
somewhere around 10nm and below, they changed to the 'tolerance' of the mfg process,
not the size of any part of the components
if they can get the tolerance down from 5nm to 3nm, that's about a 66% improvement; that would be huge gains in efficiency
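The arithmetic behind that "66%" can be sketched if we treat the node number as a true linear dimension (a generous assumption, since later posts point out node names no longer map to any real feature):

```python
# Rough sketch: ideal scaling if "5nm" and "3nm" were true linear feature sizes.
# Assumption: transistor density scales with the inverse square of feature size.
old_nm, new_nm = 5.0, 3.0

linear_shrink = 1 - new_nm / old_nm          # 40% smaller in one dimension
area_shrink = 1 - (new_nm / old_nm) ** 2     # ~64% smaller area per transistor
density_gain = (old_nm / new_nm) ** 2        # ~2.78x transistors per mm^2

print(f"linear shrink: {linear_shrink:.0%}")   # 40%
print(f"area shrink:   {area_shrink:.0%}")     # 64%
print(f"density gain:  {density_gain:.2f}x")   # 2.78x
```

So the quoted "about 66%" is close to the ideal ~64% area reduction; real-world density gains on modern nodes come in well below this ideal.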
Incorrect. It's all about increasing the efficiency.
If you increase the power use more than you increase performance, that's utterly useless.
And for decades it was 1 generation per 18 months,
and each generation would be the same wattage but 40% more powerful
(or 100% more powerful at the same wattage over 2 generations),
OR alternatively halving the wattage for the same performance,
while the price of the product stayed more or less the same (only small inflation correction).
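That 40%-per-generation figure compounds to roughly the doubling claimed; a quick sanity check (the percentages are the poster's claim, not measured data):

```python
# Sanity check on the compounding claim: +40% performance per generation
# at equal wattage.
per_gen_gain = 1.40
two_gens = per_gen_gain ** 2   # 1.96 -> roughly "100% more" over two generations

# Equivalently, ~2x performance at the same wattage means the same
# performance at about half the wattage.
wattage_for_same_perf = 1 / two_gens   # ~0.51 of the original power

print(f"{two_gens:.2f}x over two generations")   # 1.96x
print(f"{wattage_for_same_perf:.2f} of original wattage for same performance")  # 0.51
```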
Since the 2000 series the price has been inflated by greed.
Since the 3000 series the power use has been inflated by laziness.
Since the 5000 series they gave up on proper performance gains, trying to hide 4 years of SLACKING in R&D with useless AI cores and idiotic power draws.
they are making GPUs that are stronger, not weaker
the lower end cores used in laptops should have slightly better efficiency (performance per watt)
Yes, such greed: they invest billions but decide not to capitalise on it by deliberately producing lackluster chips. It has nothing to do with how obscenely complex and small chips have gotten, to the point that we are at the limit of current material and manufacturing sciences.
You have to be trolling.
OK, let's look at transistor count from 10 years ago to today to see just how much more complex and expensive manufacturing has become, shall we?
The 980 is just over 10 years old, with a transistor count of 5.2 billion vs the 5090's 92 billion.
The complexity involved is truly mind boggling, now let's go back a few more generations to the 7800 gtx of about 20 years ago when they only had 302 million transistors.
Progress to fit in more and the gains from shrinking them were easy to achieve.
That GPUs are as cheap as they are is crazy; the gains we continue to see are pretty monumental, even if they are only on the very high end, but that filters down.
Entry-level cards have never offered so much performance at such a good value as they do right now.
I remember paying £700 for an asus 7800 GTX when it launched, with inflation that would be over £1200 today and entry level cards are an order of magnitude more complex, powerful and capable at 1/3rd of the price.
Let's look at the 2080 Ti, the start of the 'modern' GPU techs.
The 2080 Ti released in 2018 with 18.6 billion transistors costing £900; adjusted for inflation that is £1200.
Now a 5060 Ti 16GB has 21.9 billion transistors, costs £400 and handily beats the old king, again for a third of the price. How is that not solid progress?
Not to mention the added tech we have now.
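Taking the transistor counts quoted above at face value (980: 5.2 billion, 5090: 92 billion; the launch years 2014 and 2025 are my assumption), the implied growth in chip complexity works out like this:

```python
# Compound annual growth in transistor count, using the figures from the post.
# Assumed launch years: GTX 980 in 2014, RTX 5090 in 2025.
t_980, t_5090 = 5.2e9, 92e9
years = 2025 - 2014

growth = t_5090 / t_980             # ~17.7x more transistors
cagr = growth ** (1 / years) - 1    # ~30% more transistors every year, compounded

print(f"{growth:.1f}x over {years} years, roughly {cagr:.0%} per year")
```

That sustained ~30%/year growth in complexity, on ever-harder nodes, is the core of this poster's argument about manufacturing cost.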
If you really think there is deliberate stagnation and greed behind prices and smaller performance increases, you really should just walk away and try understanding what is involved.
Debatable. After GeForce 4, I used to buy exclusively cards that cost 150 EUR max. Up to 2017 that would net me a card that could play anything, albeit not necessarily everything at high settings, FHD. Plus 4GB of VRAM on top of that, that is double the amount of VRAM of minimum specs of games of that era (Dishonored 2, Mankind Divided, etc.)
Now I'm paying double to achieve the same (when it should be for WQHD by now given how old FHD is). Plus there's still cards for over 300 EUR released with 8GB, specifically marketed as FHD hardware. 8GB are the absolute minimum specs in 2025 (Doom, Indiana Jones, etc.) The CPU levels I target still cost pretty much the same -- there's also still decent budget CPUs for ~100 EUR and even a Ryzen 9950X3D is cheaper than an Athlon 64 FX-55 ever used to be 20 years ago. Generally, people are switching to higher priced graphics cards because they have to. 500, 600 bucks+ is the new midrange.
HOWEVER: This is offset by the fact that you can use cards longer, which is naturally connected to the slower gains -- also on consoles, which are really running the show on what gets developed on the blockbuster front. AAAA PC exclusives are pretty much dead, after all. Plus: The worth of cards used to be slashed in half after a year or two. You can still sell GPUs after years in use (also connected to the slower gains, naturally).
Nah, we are not at the end, mainly because of how nodes are named. Take the best one, "3nm": it's called 3nm for marketing, and realistically it's not 3nm.
Here are some results of the 3nm process I'll quote.
"For TSMC’s N3 process, the contacted gate pitch (the distance between transistor gates) is reported to be around 45-47 nm for real-world implementations, which is significantly larger than 3nm."
"The fin pitch (distance between fins in FinFET transistors) is estimated to be around 25-26 nm, and the metal pitch (distance between metal interconnects) is similarly in the range of 30 nm or slightly less. These dimensions are much larger than 3nm, showing that the node name is a marketing convention rather than a literal size."
"TSMC’s 3nm node still uses FinFET (fin field-effect transistor) technology, unlike Samsung’s 3nm node, which uses GAAFET (gate-all-around FET) with nanosheets. The smallest feature sizes in TSMC’s N3 process, such as gate width or channel thickness, are likely in the range of 10-20 nm, but no single feature is as small as 3nm."
So we are only at 10nm at the very best in the smallest parts of a chip, but more like 20 to 40nm+ for most of it.
With the marketing at some point they will have to release a -0.1 node at this rate.
"At better prices" seems like a pile of crap. Prices have gone insane, but a big part of that is that scalpers & crypto freaks showed the GPU companies that lunatics would overpay for GPUs. So they had no reason not to jack the prices up. The lunatics will still happily buy. . . and the rest of us are just stuck with awful prices and are forced to buy if we want to keep gaming.
But we can't even get those regular inflated prices, because lack of supply & scalpers drive it all even higher.