https://youtu.be/kb5YzMoVQyw?feature=shared
There is no more reason to speculate.
Nvidia did not design any current balancing across the pins. Let me explain in simple terms.
If you want to push 48 A through 6 pins and 6 wires, where each wire has a 10 A limit, you need to make sure the current is balanced across all pins. If it isn't, then a bit of dirt or slight wear on the contact material creates a resistance difference between the lines. As a result, some line will exceed its 10 A limit and start melting. That is what is happening, and it will keep happening.
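To see how little it takes, here is a minimal sketch of the current division across parallel pins. The resistance numbers are illustrative assumptions of mine, not measurements:

```python
# Current division across six parallel 12 V pins: each pin carries
# current in proportion to its conductance (1/R). Resistances are
# illustrative assumptions, not measured values.
TOTAL_CURRENT_A = 48.0   # roughly 575 W at 12 V
PIN_LIMIT_A = 10.0

def per_pin_currents(resistances_mohm):
    conductances = [1.0 / r for r in resistances_mohm]
    total = sum(conductances)
    return [TOTAL_CURRENT_A * g / total for g in conductances]

# Healthy connector: six identical ~6 mOhm contact resistances.
print(per_pin_currents([6.0] * 6))
# -> 8.0 A on every pin, comfortably under the limit.

# Worn connector: five contacts degrade to 15 mOhm, one stays fresh at 6 mOhm.
print(per_pin_currents([6.0] + [15.0] * 5))
# -> 16.0 A on the fresh pin, 6.4 A on the rest: the BEST contact overloads.
```

Without per-pin balancing or per-pin fusing on the card, nothing stops that 16 A pin from cooking, even though the connector as a whole is still carrying its nominal 48 A.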
Now you can ask: why did Nvidia do it?
If you don't like them, you might say it's because they fired their last electronics engineer to hire yet another AI engineer (but that's not a serious answer).
Leaks show that prototypes used multiple connectors per card. Most probably they had current balancing between those connectors. Before release they removed the additional connectors but did not redesign the current balancing. Somebody probably spotted it, but it was too late: the PCBs were already printed, the release date was close, and the big boss said NO.
Typical corporate stuff. (That's just my guess.)
Hopefully other manufacturers will correct this.
One card sent to Der8auer; one card with daisy-chaining; Der8auer's attempt to replicate the situation; and one ASUS 5080 reported thus far.
I am not saying NVIDIA is not at fault, but you cannot fully blame them when a user knowingly decides to dismiss the recommended specs for whatever "reasons." It does not matter how much of an enthusiast you claim to be; there is a reason 12VHPWR is considered unsafe to keep using. Could it be a better design? Yes. But that does not negate the responsibility of the user. So far, every one of those cards has used cables built to the outdated spec. We have yet to see a credible report of a 12V-2x6 ATX 3.1 cable melting.
https://www.youtube.com/watch?v=6FJ_KSizDwM
I saw this one too. Corsair replied to it.
https://www.reddit.com/r/Corsair/comments/1iovz6i/corsair_cable_pins_cause_for_melting_concern/
https://x.com/FalconNW/status/1889428378769564121
https://www.reddit.com/r/nvidia/comments/1inpox7/rtx_50_series_12vhpwr_megathread/
Just use a good PSU with a new native 12V-2x6 cable, don't re-use old trash third-party 12VHPWR cables bought on Temu, and you'll be fine.
My personal conviction is that it's either a half-fake, isolated issue or, worse, a massive disinformation campaign; maybe Der8auer and that user with the supposedly melted 5090 received a paycheck to damage the Nvidia brand.
Just open your eyes: Der8auer is making clickbait content with his latest videos on this cable-melting topic. What he did was half-assed research, not real testing. As Jonny-Guru-Gerow (Corsair Head of R&D) said:
"I just think his video was too quick and dirty. Proper testing would be to move those connectors around the PSU interface. Unplug and replug and try again. Try another cable. At the very least, take all measurements at least twice. He's got everyone in an uproar and it's really all for nothing. Not saying there is no problem. I personally don't *like* the connector, but we don't have enough information right now and shouldn't be basing assumptions on some third party cable from some Hong Kong outfit."
Falcon Northwest statement:
"HUGE respect for @der8auer
's testing, but we're not seeing anything like his setup's results.
We tested many 5090 Founder's builds with multiple PSU & cable types undergoing days of closed chassis burn-in.
Temps (images in F) & amperages on all 12 wires are nominal."
ModDIY also released a statement recommending the use of 12V-2x6 cables instead of 12VHPWR:
"We recommend that all users upgrade to the new 12V-2X6 cables to take full advantage of the enhanced safety and performance features offered by this new standard."
OC3D statement:
"All lights were green when we switched to a new 12V-2×6 power cable. Only our hard-used 16-pin power cables had issues. This implies that general wear and tear could make the difference between a safe and a dangerous power cable."
"Today, we learned that worn/used 16-pin GPU power cables can have uneven power distribution across the cable."
"For consumers, our recommendation is clear. When you buy a power-hungry GPU, consider buying a new 16-pin power cable. If you bought a new PSU with your GPU, you won’t need a new cable. However, if you plan to reuse your power supply, a new 12V-2×6 cable could save your bacon. "
So, as I said: just use good brand-new cables, not garbage old used ones, and you will be fine. Don't believe trash clickbait content with poorly documented claims.
You can never sit here and tell me this bad design is somehow on the end user for any of its issues.
To kick this even farther down the road, the irony is that using the old 8-pin connector size with 16 AWG wire would have gotten past most of these problems, and line balancing would never even have been an issue; see the rough numbers below.
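For a sense of the margins involved, here is a back-of-the-envelope comparison. The per-pin ratings are commonly cited figures I am assuming, not datasheet quotes:

```python
# Safety factor = per-pin rating / per-pin current, assuming perfect
# balancing. Pin ratings below are commonly cited figures (assumptions).
VOLTAGE = 12.0
CONNECTORS = {
    # name: (rated watts, 12 V pins, assumed per-pin rating in amps)
    "8-pin PCIe": (150.0, 3, 8.0),
    "12V-2x6":    (600.0, 6, 9.5),
}

for name, (watts, pins, pin_rating_a) in CONNECTORS.items():
    per_pin_a = watts / VOLTAGE / pins
    print(f"{name}: {per_pin_a:.2f} A/pin, safety factor ~{pin_rating_a / per_pin_a:.1f}x")

# 8-pin PCIe: 4.17 A/pin, safety factor ~1.9x
# 12V-2x6:    8.33 A/pin, safety factor ~1.1x
```

With roughly double the headroom per pin, an imbalanced 8-pin cable shrugs off the kind of skew that pushes a 12V-2x6 pin past its rating.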
Nonsense no. 1.
Nonsense no. 2.
This (below) actually makes sense, and it confirms the BAD design from Nvidia.
If you want to know why these cables melt, watch the video from Buildzoid. He explains in technical terms why the design is wrong. If you don't understand it or don't believe it, take the video to somebody who studied electronics and watch it together. Ask them whether it makes sense.
Stop spreading nonsense. This really is science, not magic.
No one is arguing it is a good design. It could be better. But NVIDIA went to 12V-2x6 for a reason: 12VHPWR does not work. And when the 4090s started melting, they said to switch over. All the companies started recommending the new standard. So, although the design could be better, you cannot just blame them when a user knowingly disregards the new specs because of "reasons." If a company says "to use this product in the best possible condition, do this," and you ignore it, then it is on you when something foul happens.
"Oh it's actually YOUR FAULT that our cards melt on our new connectors! You didn't connect them properly! Disregard the fact that this doesn't happen with the other connectors when they aren't installed properly."
Plus, the fact that there are already multiple examples of this on a brand-new card does not bode well for the future. But considering this has been a problem for a while now, people will still gobble up Team Green's offerings even if they melt.
Their 40-series GPUs were cracking left and right (especially the top models).
GPU brands to stay far away from: ASUS, PNY, and Gigabyte.
From boards cracking for no reason, to a lack of fuses, to those blower-model GPUs: all junk.
There are no legal standards here like we have in house wiring, which specify wire gauge/thickness and the amps allowed per circuit.
You won't find 15-gauge wires on the GPU when you should.
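To ground the house-wiring analogy: residential codes pair each wire gauge with a breaker that trips before the wire overheats. A rough sketch (the NEC pairings are standard; the GPU-side figures reuse the assumptions from the sketches above):

```python
# US NEC residential pairings: the breaker is sized to protect the wire.
HOUSE_AWG_TO_BREAKER_A = {14: 15, 12: 20, 10: 30}

# 12V-2x6 at 600 W: ~8.3 A nominal per 16 AWG wire, with no per-wire
# fuse or breaker, so nothing trips when one wire takes far more.
GPU_PER_WIRE_A = 600 / 12 / 6

for awg, breaker_a in HOUSE_AWG_TO_BREAKER_A.items():
    print(f"house: {awg} AWG wire behind a {breaker_a} A breaker")
print(f"gpu:   16 AWG wire at {GPU_PER_WIRE_A:.1f} A nominal, no per-wire protection")
```

In a house, an overloaded wire trips its breaker; on the card, an overloaded wire just gets hot.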
oof