Vanythe Sep 9, 2023 @ 5:20pm
The problem with new-generation cards and their reliance on software features to perform.
Inflation has gone through the roof: new mid-range cards are sold at high-end, sometimes even enthusiast-level, prices while being weaker than the previous generation. Most modern games make use of FP16, so I'll use that as a general performance measurement, along with memory.


RTX 2060 Super - 14.4 TFLOPS FP16, 448 GB/s memory bandwidth, 256-bit bus.
RTX 3060 8GB - 12.8 TFLOPS FP16, 240 GB/s memory bandwidth, 128-bit bus.
RTX 4060 8GB - 15.1 TFLOPS FP16, 272 GB/s memory bandwidth, 128-bit bus.


All of these cards use GDDR6. Two of them have their bus cut in half. This will be a problem in the future because these new cards simply cannot move data fast enough.
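
For anyone wondering where those bandwidth figures come from, they fall straight out of the bus width and the GDDR6 data rate. A rough sketch in Python (the per-pin data rates are the published reference speeds these cards ship with, so treat them as assumptions for any particular board):

# Memory bandwidth (GB/s) = per-pin data rate (Gbit/s) * bus width (bits) / 8
cards = {
    "RTX 2060 Super": (14, 256),  # 14 Gbps GDDR6 on a 256-bit bus
    "RTX 3060 8GB":   (15, 128),  # 15 Gbps GDDR6 on a 128-bit bus
    "RTX 4060 8GB":   (17, 128),  # 17 Gbps GDDR6 on a 128-bit bus
}
for name, (gbps, bus_bits) in cards.items():
    print(f"{name}: {gbps * bus_bits / 8:.0f} GB/s")
# -> 448, 240 and 272 GB/s, the same figures as above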

Still, maybe that doesn't matter; the future of gaming is AI-assisted whatever and ray tracing, so that's where the tensor and RT cores play an important role. Let's see:


RTX 2060 Super - Tensor Cores: 272, RayTracing Cores: 34
RTX 3060 8GB - Tensor Cores: 112, RayTracing Cores: 28
RTX 4060 8GB - Tensor Cores: 96, RayTracing Cores: 24


How do you sell a new product that's worse in almost every way than the one you made two generations ago? Software "features", of course!

Both the 30 and 40 series cards have insane gains with DLSS/FSR options in modern AAA games, so that means they are better!! End of story!

Or hmm, where have I seen this before: GPUs that have their own proprietary software feature/API and rely on developers to use it in order for their product to appear superior (or even just good) to the customer?

3dfx and their Glide API - revolutionary at first, but obsolete a few years later because, lo and behold, the future of gaming was not 640x480 at 16-bit color. Still, at the time they were the "best" gaming choice... if you never thought about future releases. 3dfx went bankrupt and its assets were bought by NVIDIA in 2000.

S3 and their MeTaL API - same as above, but with even less support. It looked great in Unreal Tournament, the textures were so sharp and detailed... After that, nobody cared. Odd? Not really; other tech that worked on everything simply replaced it. S3's graphics business was eventually sold off around 2011.

Matrox and their EMBM (environment-mapped bump mapping) tech... yeah, that didn't save the abysmally performing card, even though the tech demos would make you believe it was the *future*. Matrox no longer produces their own GPUs; they rebrand existing low-end AMD chips as their own and provide custom (but very high quality) drivers for those cards.


My point is, something better than both DLSS and FSR could come along in a day, a week, or a month, and these cards simply don't have the "oomph" for any future tech that relies on raw performance.
Originally posted by Vanythe:
2060 has 3 variants, base, super and the 12GB one. I just chose the Super because I felt like it was the closest to some kind of mid-line for comparison. Even then, the Ti variant of the 4060 has the same cut bandwidth.
That's sort of what I mean though. Making your comparison more of a "like for like" would have made your point all the same, and people might see it as dishonest to compare different tiers.

I get why you did it. The "base" x60 feels more like an "LE" (or lesser) model and the "Ti" feels more like the "real, base" x60 model now, especially with the RTX 30 and 40 series. But I still would have tried to just do a like for like to keep people from being able to wave your argument away because they'll think you're using indirect comparisons to exaggerate a point.
Vanythe Sep 11, 2023 @ 9:51am 
Originally posted by Illusion of Progress:
Originally posted by Vanythe:
2060 has 3 variants, base, super and the 12GB one. I just chose the Super because I felt like it was the closest to some kind of mid-line for comparison. Even then, the Ti variant of the 4060 has the same cut bandwidth.
That's sort of what I mean though. Making your comparison more of a "like for like" would have made your point all the same, and people might see it as dishonest to compare different tiers.

I get why you did it. The "base" x60 feels more like an "LE" (or lesser) model and the "Ti" feels more like the "real, base" x60 model now, especially with the RTX 30 and 40 series. But I still would have tried to just do a like for like to keep people from being able to wave your argument away because they'll think you're using indirect comparisons to exaggerate a point.

Nitpicking would be possible no matter what kind of comparison I made. Even if I gave the best possible example, someone would probably reduce it to "but the newer one weighs less so it won't bend the connector".
PopinFRESH Sep 11, 2023 @ 9:59am 
Originally posted by Vanythe:
Originally posted by smallcat:
RTX 2060S 8GB - 5.81 TFLOPS, FP32
RTX 3060 12GB - 12.74 TFLOPS, FP32
RTX 4060 8GB - 15.11 TFLOPS, FP32
So, what are you complaining about? Inflation, no way back unfortunately. Memory is not everything.

Exactly, memory isn't everything, but it's one of the primary components of what makes a good graphics card. If you don't have enough memory, you really can't do much, regardless of TFLOPS. In this case, however, they have enough memory, but they simply cannot push it as fast as they should have been able to. How much processing power will be held in a "queue" because of the 240 GB/s bandwidth? You can have a 200 TFLOPS card, but if you give it a 240 GB/s limit, it's not going to be able to move the data fast enough, so the card itself is a bottleneck.

^ This shows you don't understand how GPU architecture works. The memory bandwidth is a function of the memory speed and bus width, and the memory bus width is a function of the GPCs (speaking of NVIDIA's GPUs). You are the one looking at a specific number/metric in isolation rather than looking at how the architecture works as a whole, just as you are looking at "number of cores" as a metric without considering what those cores are and how they differ. On the 30-series SMs, compared to the 20-series SMs, half of the shader cores can function as either INT32 or FP32; when needed, a 30-series card can have twice the FP32 performance of a 20-series card with the exact same number of SMs.

Your logic is the equivalent of comparing a Core 2 Quad Q6600 to a current-generation Pentium Gold G7400T and saying the Core 2 Quad is better because "it has 4 cores, and the Pentium Gold only has 2 cores". Meanwhile, back in reality, the Pentium Gold G7400T is about 230% faster in multi-threaded performance and more than 400% faster in single-threaded performance.

As nullable said, you are complaining about them doing more with less.
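
To put rough numbers on that: peak FP32 is just the number of FP32-capable cores x 2 (one fused multiply-add per clock) x boost clock. A quick sketch, assuming the reference boost clocks and one RT core per SM (which is why the SM counts below match the RT core counts in the OP):

# Peak FP32 TFLOPS ~= FP32-capable cores * 2 (FMA = 2 FLOPs/clock) * boost GHz / 1000
# Turing SM: 64 FP32 cores; Ampere/Ada SM: up to 128 FP32-capable cores
cards = {
    #                 (SMs, FP32 cores per SM, reference boost GHz)
    "RTX 2060 Super": (34,  64, 1.650),
    "RTX 3060 8GB":   (28, 128, 1.777),
    "RTX 4060 8GB":   (24, 128, 2.460),
}
for name, (sms, cores_per_sm, ghz) in cards.items():
    tflops = sms * cores_per_sm * 2 * ghz / 1000
    print(f"{name}: {tflops:.2f} TFLOPS FP32")
# -> ~7.2, ~12.7 and ~15.1 TFLOPS: fewer SMs, but far more FP32 throughput per SM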
Vanythe Sep 11, 2023 @ 10:44am 
Originally posted by PopinFRESH:
Originally posted by Vanythe:

Exactly, memory isn't everything, but it's one of the primary components of what makes a good graphics card. If you don't have enough memory, you really can't do much, regardless of TFLOPS. In this case, however, they have enough memory, but they simply cannot push it as fast as they should have been able to. How much processing power will be held in a "queue" because of the 240 GB/s bandwidth? You can have a 200 TFLOPS card, but if you give it a 240 GB/s limit, it's not going to be able to move the data fast enough, so the card itself is a bottleneck.

^ This shows you don't understand how GPU architecture works. The memory bandwidth is a function of the memory speed and bus width, and the memory bus width is a function of the GPCs (speaking of NVIDIA's GPUs). You are the one looking at a specific number/metric in isolation rather than looking at how the architecture works as a whole, just as you are looking at "number of cores" as a metric without considering what those cores are and how they differ. On the 30-series SMs, compared to the 20-series SMs, half of the shader cores can function as either INT32 or FP32; when needed, a 30-series card can have twice the FP32 performance of a 20-series card with the exact same number of SMs.

Your logic is the equivalent of comparing a Core 2 Quad Q6600 to a current-generation Pentium Gold G7400T and saying the Core 2 Quad is better because "it has 4 cores, and the Pentium Gold only has 2 cores". Meanwhile, back in reality, the Pentium Gold G7400T is about 230% faster in multi-threaded performance and more than 400% faster in single-threaded performance.

As nullable said, you are complaining about them doing more with less.

Again, I could be wrong, I'm no expert, but the benchmarks show them doing equal-or-less with less rather than more with less. Adding insult to injury, the 30 and 40 series draw 20-40 watts more to achieve around a 5% (Technical City's number) overall performance increase? Even then, some heavier benchmarks favor the 20-series cards.
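
To put my bandwidth worry into rough numbers instead of hand-waving: divide peak compute by bandwidth and you get how many FLOPs a workload has to do per byte fetched from VRAM before the bus stops being the limit. A back-of-the-envelope sketch with the FP16 and bandwidth figures from my first post (it ignores caches, and the much larger L2 on the 40 series exists precisely to soften this):

# Break-even arithmetic intensity (FLOPs per byte of VRAM traffic):
# below this ratio a workload is bandwidth-bound, above it compute-bound
cards = {
    #                 (peak FP16 GFLOPS, memory bandwidth GB/s)
    "RTX 2060 Super": (14400, 448),
    "RTX 3060 8GB":   (12800, 240),
    "RTX 4060 8GB":   (15100, 272),
}
for name, (gflops, gbs) in cards.items():
    print(f"{name}: needs ~{gflops / gbs:.0f} FLOPs per byte to stay compute-bound")
# -> ~32, ~53 and ~56 FLOPs/byte: the narrower-bus cards need much "denser"
#    math (or more cache hits) before their shaders stop waiting on memory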
Last edited by Vanythe; Sep 11, 2023 @ 10:45am
Originally posted by Vanythe:
Nitpicking would be possible no matter what kind of comparison I made. Even if I gave the best possible example, someone would probably reduce it to "but the newer one weighs less so it won't bend the connector".
You can't control others though. The fact that bad actors exist shouldn't be an excuse not to put forth the strongest argument possible. Let those who are going to nitpick anyway, nitpick. When you use bad supporting reasoning, those nitpicks become valid counterarguments.
emoticorpse Sep 11, 2023 @ 11:27am 
Originally posted by Vanythe:
Originally posted by PopinFRESH:

^ This shows you don't understand how GPU architecture works. The memory bandwidth is a function of the memory speed and bus width, and the memory bus width is a function of the GPCs (speaking of NVIDIA's GPUs). You are the one looking at a specific number/metric in isolation rather than looking at how the architecture works as a whole, just as you are looking at "number of cores" as a metric without considering what those cores are and how they differ. On the 30-series SMs, compared to the 20-series SMs, half of the shader cores can function as either INT32 or FP32; when needed, a 30-series card can have twice the FP32 performance of a 20-series card with the exact same number of SMs.

Your logic is the equivalent of comparing a Core 2 Quad Q6600 to a current-generation Pentium Gold G7400T and saying the Core 2 Quad is better because "it has 4 cores, and the Pentium Gold only has 2 cores". Meanwhile, back in reality, the Pentium Gold G7400T is about 230% faster in multi-threaded performance and more than 400% faster in single-threaded performance.

As nullable said, you are complaining about them doing more with less.

Again, I could be wrong, I'm no expert, but the benchmarks show them doing equal-or-less with less rather than more with less. Adding insult to injury, the 30 and 40 series draw 20-40 watts more to achieve around a 5% (Technical City's number) overall performance increase? Even then, some heavier benchmarks favor the 20-series cards.

Which benchmarks show 2000 series doing better than 4000 series?

And you mean like a 2080 vs a 4060 or something? Or like a 2080 vs a 4080?
PopinFRESH Sep 11, 2023 @ 12:03pm 
Originally posted by emoticorpse:
Originally posted by Vanythe:

Again, I could be wrong, I'm no expert, but the benchmarks show them doing equal-or-less with less rather than more with less. Adding insult to injury, the 30 and 40 series draw 20-40 watts more to achieve around a 5% (Technical City's number) overall performance increase? Even then, some heavier benchmarks favor the 20-series cards.

Which benchmarks show 2000 series doing better than 4000 series?

And you mean like a 2080 vs a 4060 or something? Or like a 2080 vs a 4080?

^ this

and

Originally posted by Illusion of Progress:
Originally posted by Vanythe:
Nitpicking would be possible no matter what kind of comparison I made. Even if I gave the best possible example, someone would probably reduce it to "but the newer one weighs less so it won't bend the connector".
You can't control others though. The fact that bad actors exist shouldn't be an excuse not to put forth the strongest argument possible. Let those who are going to nitpick anyway, nitpick. When you use bad supporting reasoning, those nitpicks become valid counterarguments.

^ This. When you make claims like these in the OP and above without substantiating them with anything valid, and then chalk it up to an excuse of "people are going to nitpick whatever, so...", your argument just comes off as being made in bad faith.

If you were specifically referring to rasterization performance and making this claim/complaint comparing a 10-series to a 20-series, then you'd have some level of rationale, because that move shifted die area toward additional fixed-function units for RT and Tensor acceleration. An RTX 2080 had roughly the same rasterization performance as a 1080 Ti. But making these claims from a 20-series to a 30-series, or even the 40-series, is just flat-out ignorant. The rasterization performance jump from the 20-series to the 30-series was fairly substantial, with most cards gaining nearly a model class.

Whether you like it or not, the near-term future of graphics rendering is ray tracing / full path tracing, and leveraging AI-based features to increase the perceived resolution, image quality, and frame rate is part of that future. Specifically in regard to NVIDIA, those aren't just "software features"; they rely on the additional fixed-function hardware introduced in the 20-series and have been improved gen-on-gen with each subsequent GPU generation. This is one of the primary reasons why DLSS 2 looks and performs far and away better than FSR 2, and why, from what we've seen so far, DLSS 3 looks substantially better than FSR 3. Not to mention the DLSS 3.5 improvement that unifies the denoisers and makes RT look substantially better, with "more correct" lighting and reflections, while also slightly increasing performance.

This really is no different from when things transitioned to hardware T&L and then, as new hardware advancements were made, to programmable shaders. The current "ray tracing" and AI features are an enabler for transitioning to fully path-traced renderers. The AI features are going to keep improving until they deliver better image quality than simply throwing more raw hardware at the current rendering concepts.
ZAP Sep 11, 2023 @ 12:43pm 
It's all still weird in the current environment. On both sides it seems like the cards got knocked down a SKU in performance or something. Even at $499, no tax or shipping, free game, an 8 in the model number... I don't know, man.

Bought a Nord Lead 4 instead 😐
Jamebonds1 Sep 11, 2023 @ 12:51pm 
Even if those video cards had specifications with higher numbers, it means nothing until confirmed by benchmarks, video games with integrated benchmarks, or reviewers playing games as normal with an FPS counter.

Important to note: an FPS counter is not an absolute number.
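
For example, here's a tiny sketch with made-up frame times showing why: the average FPS number can look fine while the 1% lows expose the stutter.

# Made-up frame times in milliseconds: mostly smooth, with a few heavy spikes
frame_times_ms = [10, 10, 11, 10, 45, 10, 11, 10, 50, 10] * 10

avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
worst = sorted(frame_times_ms)[-max(1, len(frame_times_ms) // 100):]  # slowest 1% of frames
low_1pct_fps = 1000 / (sum(worst) / len(worst))

print(f"average: {avg_fps:.0f} FPS, 1% low: {low_1pct_fps:.0f} FPS")
# Same run, very different impressions; a single FPS number hides the stutter.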
Last edited by Jamebonds1; Sep 11, 2023 @ 12:52pm
ZAP Sep 11, 2023 @ 1:16pm 
If only they took a card, stripped out all the fake-frame-pushing A.I.-simp technology, and just made a regular-priced card for once, if you know what I mean.
☎need4naiim☎ Sep 11, 2023 @ 1:42pm 
Originally posted by ZAP:
If only they took a card, stripped out all the fake-frame-pushing A.I.-simp technology, and just made a regular-priced card for once, if you know what I mean.
The problem for us consumers is that gamers tend to fall for GPU manufacturers' marketing traps very easily. Those paid YouTubers and other streamers have unfortunately been effective at keeping prices high ever since the mining craze, when GPU prices went to 4x their normal value. But things CAN CHANGE VERY QUICKLY. History speaks for itself.

The most palpable and doable thing for us gamers is to be stubborn about not purchasing overpriced items from those manufacturers. Trust me, we can afford to game on older hardware, but as manufacturers they can't afford plummeting sales.

At worst, we would still have numerous pretty games for entertainment (games like RDR2, D2R, DR2.0, FH4 or WRC Generations can easily be played even on cards from the Maxwell architecture), but manufacturers can't afford it if the GPUs they produce for the gaming market don't get sold.
Last edited by ☎need4naiim☎; Sep 11, 2023 @ 1:47pm
Vanythe Sep 11, 2023 @ 1:47pm 
It's not even that. I mean, look at Crysis from 2007. Do games *really* need to look better than that? Many of them don't come close even today, yet require 10 times the computer specs to achieve a third of what that game did. Insane, really.
☎need4naiim☎ Sep 11, 2023 @ 2:08pm 
Originally posted by Vanythe:
It's not even that. I mean, look at Crysis from 2007. Do games *really* need to look better than that? Many of them don't come close even today, yet require 10 times the computer specs to achieve a third of what that game did. Insane, really.
To be frank, I can live with games that look at least as good as the best-looking games of 2014.

I have played NFS: The Run, which is a 2011 game, and then NFS: Rivals, from 2013. The jump in quality from The Run to Rivals was bigger than the jump from Rivals to Unbound (if there is any at all). I am not joking. At best quality settings, that 2013 EA game NFS: Rivals looks awesome. And those graphics can easily be achieved with a 970/1060-range card. That's why I don't want to pay ridiculous prices for 8GB cards now.
Originally posted by ZAP:
It's all still weird in the current environment. On both sides it seems like the cards got knocked down a SKU in performance or something.
Absolutely, basically sums up my feelings.

To a point, the names and numbers are arbitrary, so it wouldn't be so bad if the price-to-performance uplifts were bigger, since that matters more, but even those feel pretty bad.

Still hoping the next generation shakes out a bit better and that this "complementary instead of replacement" generation is partially just the result of the oversupply issue from the mining boom, even if the next generation is likely a ways off. If that one is also a bad uplift, people will basically just stick with what they have until it actually fails, since it will almost never be worth upgrading anymore, and games will be forced to stagnate more as the average level of performance does as well.
Originally posted by Vanythe:
It's not even that. I mean, look at Crysis from 2007. Do games *really* need to look better than that? Many of them don't come close even today, yet require 10 times the computer specs to achieve a third of what that game did. Insane, really.
That's diminishing returns for you. Makes the stagnation possibility that much worse if it happens.
RSebire Sep 11, 2023 @ 3:45pm 
There are only so many floating-point operations you need.
For me, it is not about performance anymore.
It is all about efficiency.

Performance per Watt is where it is at.

This is not a race.
It is a marathon.
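
A rough way to put numbers on that, using the OP's FP16 figures and the reference board power (175 W, 170 W and 115 W; treat those as nominal, since real draw varies by board and load):

# Performance per watt = peak FP16 GFLOPS / reference board power (W)
cards = {
    #                 (peak FP16 GFLOPS, reference board power in W)
    "RTX 2060 Super": (14400, 175),
    "RTX 3060 8GB":   (12800, 170),
    "RTX 4060 8GB":   (15100, 115),
}
for name, (gflops, watts) in cards.items():
    print(f"{name}: ~{gflops / watts:.0f} GFLOPS per watt")
# -> ~82, ~75 and ~131 GFLOPS/W: on paper efficiency, the newest card pulls well ahead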