If it's an x8 card that only uses 8 lanes, then yes, you will notice a difference.
What video card do you have?
Would it still be faster in a PCIe 4.0 board? Yes, but I have no idea by how much. Maybe someone has run benchmarks on setups with both versions and will share that info. Nevertheless, I am very satisfied with the outcome here on my lower-end Z390 board. That's what counts.
Bottom line: it's perfectly fine to run a PCIe 4.0 GPU on a PCIe 3.0 board, provided the power supply and board specs can handle it. You just may not see the card's full potential.
There should be some benchmarks around if you can't test it yourself.
There are maybe two GPUs in particular where it matters to any meaningful extent (the RX 6400 and 6500 series, when put in a PCI Express 3.0, or especially 2.0, board). For any other GPU it's a non-issue; you don't need to worry about it.
The PCI Express version largely just determines the maximum amount of bandwidth between the CPU and GPU, and that amount of bandwidth tends to far, far exceed what games actually need.
So here's a hypothetical question: if a particular resource is nowhere near your limitation, and you double it, what happens? It's obvious; a whole lot of nothing. You don't gain performance, because it's not your limitation. And the inverse is true as well: you're not missing performance by not having it.
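To put rough numbers on that, here's a back-of-the-envelope sketch of the per-link bandwidth ceilings (the 3 GB/s "game needs" figure at the end is a made-up illustration, not a measurement):

```python
# Theoretical one-direction PCIe link bandwidth, in GB/s.
# Gen 1/2 use 8b/10b encoding; Gen 3+ use 128b/130b.
GT_PER_S = {1: 2.5, 2: 5.0, 3: 8.0, 4: 16.0, 5: 32.0}
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130, 4: 128 / 130, 5: 128 / 130}

def link_bw_gbs(gen: int, lanes: int) -> float:
    # One transfer carries 1 bit per lane, so divide by 8 for bytes.
    return GT_PER_S[gen] * ENCODING[gen] * lanes / 8

for gen in (2, 3, 4):
    print(" | ".join(f"PCIe {gen}.0 x{lanes}: {link_bw_gbs(gen, lanes):5.2f} GB/s"
                     for lanes in (4, 8, 16)))

# The bottleneck point: if a game only ever pushes ~3 GB/s over the link
# (hypothetical figure), neither ceiling is the limit -- doubling an
# unsaturated link changes nothing.
needed = 3.0
for gen in (3, 4):
    cap = link_bw_gbs(gen, 16)
    print(f"Gen {gen}.0 x16 ceiling {cap:.2f} GB/s, game pushes {needed} GB/s "
          f"-> link-limited: {needed > cap}")
```

That works out to roughly 15.75 GB/s for PCIe 3.0 x16 versus 31.5 GB/s for 4.0 x16; either one dwarfs a workload that only streams a few GB/s.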
Products (in this case, graphics cards) simply tend to support the newest version around when they're designed, but that doesn't mean a card is "under-performing" if you put it in a system that doesn't support that same version, nor does it mean much when games won't need it at all. This is actually the key part: people focus on the GPU like it's some constant thing, but it depends on what the game needs. Most games and uses won't push anywhere near enough bandwidth through that link to saturate the latest versions, and the few edge cases that can often aren't very "real world applicable", because you'll almost always have something else be a bottleneck first.
Look at the PCI Express link utilization in Afterburner if you want to check for your own reassurance. You might be surprised how low it often is.
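If you'd rather poll it from a script than watch Afterburner's graph, a minimal sketch like this works on NVIDIA cards (assuming the nvidia-ml-py package is installed; NVML reports the throughput counters as instantaneous samples in KB/s):

```python
import pynvml

pynvml.nvmlInit()
h = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

# Current negotiated link, e.g. gen 3 x16.
gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(h)
width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(h)

# Instantaneous PCIe throughput counters, reported by NVML in KB/s.
tx = pynvml.nvmlDeviceGetPcieThroughput(h, pynvml.NVML_PCIE_UTIL_TX_BYTES)
rx = pynvml.nvmlDeviceGetPcieThroughput(h, pynvml.NVML_PCIE_UTIL_RX_BYTES)

print(f"Link running at PCIe gen {gen} x{width}")
print(f"TX {tx / 1e6:.2f} GB/s, RX {rx / 1e6:.2f} GB/s")
pynvml.nvmlShutdown()
```

Run it while a game is loaded and compare the TX/RX numbers against the ceilings above.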
There might be split moments where it registers a difference that a benchmark will reflect, so you may find benchmarks showing a gap (usually only with top-end cards, and even then usually not far off from margin of error, so you definitely won't notice it blind), but in real-world use and in the vast majority of cases, no, it's not a major difference. The "lost potential" is a non-issue.
A PCI Express 4.0 graphics card will perform fine in a 3.0 system. Any limitation it faces will come from other things, such as the CPU itself, sooner than it will from the strict lack of bandwidth between the CPU and GPU due to the PCI Express version. The bandwidth we have there is just so far in excess of what is typically necessary.
It barely matters
Even on a much weaker GPU with 4-8 lanes, it will make a huge difference.
PCIe 4.0 is only really useful when the CPU has fewer lanes or the device can't use as many lanes.
The topmost PCIe x16-length slot should function as a full-speed x16 slot. Using PCIe 4.0 GPUs on a PCIe 3.0 motherboard is not a problem at all.
That site already did the test with so many games.
If the CPU has fewer lanes, it won't be able to feed all 16 lanes to the GPU slot,
or the GPU can be stuck in an x16-length slot that's wired with fewer lanes.
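Quick math on why a narrow link is the case that hurts (same per-lane formula as the sketch above; a x4 card like the RX 6500 XT is the usual example):

```python
# A x4 card loses half its already-small ceiling on a Gen 3 board.
gen3_x4 = 8.0 * (128 / 130) * 4 / 8   # ~3.9 GB/s on a PCIe 3.0 board
gen4_x4 = 16.0 * (128 / 130) * 4 / 8  # ~7.9 GB/s on a PCIe 4.0 board
print(f"x4 link: {gen3_x4:.1f} GB/s (Gen 3) vs {gen4_x4:.1f} GB/s (Gen 4)")
```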
The majority of NVIDIA GPUs will still operate at x16 even when put into a PCIe 3.0 x16-length slot.
However, to avoid possible issues, you'll want to limit any extra PCIe cards when doing this, since extra devices can take lanes away from the GPU slot.