The other RTX 4080 had better performance, but it's actually a regression in price-to-performance.
Keep in mind the x80 was traditionally VERY close to the top in performance (single-digit difference) before this generation. Missing around one third of the performance (using the 30% to 40% figure being mentioned here, if true) would be a BIG reduction.
That RTX 3060 8GB should really be an RTX 3050 Ti (the space is open; there's no desktop version of the RTX 3050 Ti, only laptop). That it has the same core as the regular 12GB RTX 3060 is a plus, but the 33.3% reduction in memory bus width really cripples its performance. This change came in under the radar, with no big announcement like they'd tried with the RTX 4080 12GB Fake Edition; the backlash they got from that stunt taught them a valuable lesson. Hence, the RTX 4070 Ti was created, and nVidia quietly launched the RTX 3060 8GB.
Typical buyers, who aren't tech savvy at that, may not even know there's a difference; the only difference on the box is a smaller print of "12GB" or "8GB". They both carry the "RTX 3060" nomenclature and are about the same price. A typical buyer may not even be aware of which version he or she is buying. It all depends on which version is being pushed by the staff of a store. Sure, nV fans can argue that it's the store's fault then, but the fault lies originally with nGreedia for being duplicitous by creating the cut-down 8GB card with the same name as the regular version.
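For anyone curious what that bus cut works out to, here's a rough sketch of the bandwidth math. The bus widths and the 15 Gbps GDDR6 data rate are the commonly cited specs for the two 3060 variants, so treat them as assumptions:

```python
# Rough peak-memory-bandwidth comparison for the two RTX 3060 variants.
# Spec figures below are the commonly cited ones (assumed, not verified):
#   12GB model: 192-bit bus, 15 Gbps GDDR6
#    8GB model: 128-bit bus, 15 Gbps GDDR6

def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak theoretical bandwidth = (bus width in bytes) * per-pin data rate."""
    return bus_width_bits / 8 * data_rate_gbps

bw_12gb = bandwidth_gb_s(192, 15)  # 360 GB/s
bw_8gb = bandwidth_gb_s(128, 15)   # 240 GB/s
cut = 1 - bw_8gb / bw_12gb         # fraction of bandwidth lost

print(f"12GB: {bw_12gb:.0f} GB/s, 8GB: {bw_8gb:.0f} GB/s, cut: {cut:.1%}")
# → 12GB: 360 GB/s, 8GB: 240 GB/s, cut: 33.3%
```

Same memory speed, so the bandwidth loss tracks the bus cut exactly: a full third.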
I mean, last I heard, they weren't even interested in continuing to make GPUs, their ARC cards being their first and last attempt. I'm guessing they backpedaled on that and actually want to make a serious go of it, but they'll need to deliver SOMETHING on the raytracing front, because even AMD at least brute-forces it, and future games are unlikely to utilize it LESS.
Drivers mainly, but they're getting better. Pricing should be lower as well.
https://en.overclocking.com/intel-continues-to-improve-the-performance-of-its-arc-gpus/
I'd expect that they can deliver something good enough in time, if they keep at it. I really hope they will stay in the game long enough.
The people who understand the situation relax :)
The thing is, the people who don't frequent here don't focus on specs or even a slight slowdown in fps, so they wouldn't care the way some people on this forum do. If they're not on here, they probably actually enjoy gaming more than investigating how much performance they can squeeze out of a pc for an extra $50 or $100. Those people who drop in to ask a question are happy just to have a stable pc or get it back to stable, even if their board's VRMs aren't reputable or their GPU has less ram. They probably don't even look at FPS. Their only measurement is "does it look ok to me" or "does it look crappy to me".
I will call it ignorance, but it's justifiable. The other option is to spend their life on here forever waiting on the perfectly priced GPU instead of enjoying themselves on what's actually out. I personally would rather just spend the couple hundred and move on with my life than not spend that money and despise the situation.
Hey, I'm perfectly happy with my 3060 Ti, but I wouldn't begrudge someone right now for being in the market for a 30xx series card but wanting performance for value. Why should they get ripped off on a BUDGET card (because the x60 series IS midrange budget, I don't think anyone denies that) just because Nvidia doesn't mention the specs? When I bought my 2060 Super, I did so thinking I got a good deal at its $400 MSRP.
But for all I know, the 8 GB VRAM might not have meant much if it was slower in other respects. The fact that the current 3060 8 GB is technically slower than even a 2060 Super should offend ANYONE who cares about performance for value, because it's being sold for more than that card now, despite it being a "next gen" card. People don't like to get ripped, no matter what tier it is. That much was made clear with the 4080 12 GB.
The 4080 12G was also the leaked 4070; this is just a cheaply made 3060.
The last two GPUs I got for my HTPC, the GT 1030 and the GT 430, had the same exact thing where there were two versions of an otherwise identically named/numbered card: one decent, one bad (often priced very similarly too), and you had to research to make sure you were getting the better one. Worse, they didn't even have VRAM distinctions, so it wasn't always easy to see on listings which one it was. You had to look up part numbers and check what the bus width/CUDA cores were.
This is nothing new, sadly. What's new is that nVidia is trying it in the gaming segment (x50/x60 and above). They tried passing an x60 Ti off as an x80, people called them out for trying to pass an x70 off as an x80 (at which nVidia probably silently smirked, because that's actually what the OTHER RTX 4080 is...), and they learned their lesson. The lesson they learned wasn't to not do it; it was to bring as little attention to it as possible.
If anything, they’ll clock it higher and it’ll be about the same as a 3070
But even going from the RTX 4070 Ti to the RTX 4060 Ti loses a lot of CUDA cores (I know they don't scale linearly with performance, but it's a big indicator). Also, why do they always launch the x60 Ti model first? If these are going to be planned into the lineup instead of being refreshes now, just readjust the stack naming. They could introduce more tiers instead of only going down to x60 (or x50), but that'd go against their narrative of selling low-end chips as mid-range, mid-range as high-end, etc.
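To put a number on that core-count drop, here's a quick ratio. The counts below are the commonly reported/leaked figures for the two cards, so treat them as assumptions rather than confirmed specs:

```python
# CUDA-core comparison within the RTX 40 stack.
# Core counts are the commonly reported/leaked figures (assumed):
cores_4070_ti = 7680
cores_4060_ti = 4352

drop = 1 - cores_4060_ti / cores_4070_ti
print(f"RTX 4060 Ti has {drop:.0%} fewer CUDA cores than the 4070 Ti")
# → RTX 4060 Ti has 43% fewer CUDA cores than the 4070 Ti
```

Even granting that cores don't map linearly to frames, a ~43% cut is a much bigger step between adjacent tiers than older generations had.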
It's amazing that my GPU is over half a decade out of date and isn't even from a higher tier than the RTX 4060 Ti in name, and yet the RTX 4060 Ti may only have 2 GB more VRAM than it (it's likely going to be that, or 10 GB at best, because I don't see it having 12 GB given both RTX 4070s have that). I know Pascal offered a fair bit of VRAM for the time (I went from 1 GB to 6 GB when I changed to it), but still. I can and do run out of VRAM with one of the things I do, so drastically raising my GPU computing power while only adding 33% more VRAM won't do.
The 128-bit bus is actually not a big deal to me if the VRAM speed compensates enough for it, though it does lend to the idea that nVidia is selling lesser spec chips at higher names/tiers to get away with charging more for less.
Also, 220W might not be a lot today but it's almost double my GTX 1060. Is performance per watt that bad/slowing down that much in the last couple generations or what?
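To put the power question in numbers, a back-of-the-envelope check. The 120 W TDP for the GTX 1060 is the commonly listed figure (an assumption here); 220 W is the number from the post above:

```python
# Back-of-the-envelope perf-per-watt check.
# TDPs: the GTX 1060 is commonly listed at 120 W (assumed);
# 220 W is the figure mentioned for the newer card.
gtx_1060_tdp = 120  # watts
new_card_tdp = 220  # watts

power_ratio = new_card_tdp / gtx_1060_tdp
print(f"Power ratio: {power_ratio:.2f}x")  # ~1.83x, i.e. "almost double"

# For perf-per-watt merely to hold steady across those generations,
# the new card would need the same multiple in performance:
print(f"Break-even performance multiple: {power_ratio:.2f}x")
```

So unless the new card delivers well over ~1.83x the 1060's performance, efficiency really has stagnated relative to the jump in board power.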
I almost expect it to be somewhat mediocre in price for performance, like the RTX 3060 series was. With how the rest of the RTX 40 lineup is looking, it almost doesn't have room to be great. And this follows the cuts to the RTX 3060. There seems to be enough room for nVidia to position this as enough of an increase over the RTX 3060 while still being disappointing on its own, and I hope that's not where it lands (I won't be personally considering this card anyway, but it's the principle of it).