I am sorry I keep buying $15,000 Nvidia cards, my ML engineers are spoiled and refuse to use anything else.
Also, high-end consumer cards are fairly common for machine learning; we, for example, use 2080 Tis in a couple of our machines.
AI-oriented cards make them far more money than consumer-oriented ones, and they know that the majority of OEM computer manufacturers and consumers will blindly buy whatever they crap out as long as the price tag isn't TOO absurd.
Unless something changes the landscape, expect Nvidia to keep milking its reputation for stupid amounts of cash, consumers be damned.
People are dumb.
https://www.tomshardware.com/news/nvidia-rtx-4090-prices-have-been-creeping-upward
I don't think AI tech itself is going to go away, but I doubt most of the industry in its current form will last.
You as a consumer have been interacting with these systems for 10-15 years now; previously they were simply called algorithms.
But in the last three or four years a couple of things changed: the technology and methods advanced very rapidly, and marketing figured out they could sell machine learning as "AI", implying Skynet-like capabilities, which is utterly ridiculous.
ML has been here for a while, and as you said, it is not going anywhere. "Facebook for cats but with AI" indeed has no future.
This is pretty much the issue here. People being spoiled and people refusing to try alternatives. Businesses and managers need to quit accepting it when there is little to no reason for it now.
Proof there is little to no reason:
https://ts2.space/en/the-lamini-ceo-pokes-fun-at-nvidia-gpu-shortage-highlights-the-advantage-of-amd-gpus/
As the article points out, the claims should be taken with scrutiny since they are made by the company itself. But no more or less than with any other company that has previously said CUDA is better because *they* use it. The whole "they use it, so they are biased" argument has to go both ways or not at all.
In the end there are plenty of viable alternatives to CUDA, including, very arguably, at least one dead-on-equal option. Depending on use case and workload, there have been viable and/or equal alternatives for years. But at least now there is little argument against the existence of parity products, often with differing advantages that can offset any minor differences.
And yes, to be clear, I know that you (omega) have more first-hand experience here than I do. But I also know that Lamini and their CEO have more experience than you. So when weighing who I think is more correct, I will land on the CEO of a major ML/LLM company who says they can use AMD alternatives just as well as CUDA for ML/LLM compute.
Money. That simple. Money. They have people (read: companies and managers) who will gladly spend $15,000.00 per card simply because of the name on it, despite there being alternatives that do just as well or comparably well at a fraction of the cost. They will spend the money on the name, not on any actual tech advantage, and then they will claim they had to because their engineers demanded it. Imagine if auto techs got to demand Snap-on or bust, lmao. Sorry but not sorry here: devs and engineers demanding a specific tool because they are spoiled is just flat-out childish, especially if there are multiple tools that do the job more or less the same.
To be clear, I am not talking about people who *code* in CUDA. I am talking about people who use CUDA code. If someone has a working knowledge base in coding in CUDA, asking them to use alternatives is not fair. But if we are talking about software that runs *on* CUDA or other compute systems, where CUDA, ROCm, or another backend is used as a form of compute acceleration, there is often little reason not to use the alternatives.
When ROCm can be used just as well as CUDA, an ML engineer refusing to use one over the other is like an automotive engineer being unwilling to work because the tools are Bosch and not Snap-on. A tool is a tool. They need to use it and do their job, not refuse to use a specific brand of tool.
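For what it's worth, here is what the "runs *on* CUDA" case usually looks like in practice: most ML code never touches CUDA directly, it goes through a framework, and the ROCm builds of PyTorch expose the same torch.cuda API that the CUDA builds do. A minimal sketch (my own illustration, assuming PyTorch is installed as either the CUDA or the ROCm build):

```python
import torch

# Same code path for the CUDA build (Nvidia) and the ROCm build (AMD):
# the ROCm build exposes HIP through the torch.cuda namespace.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 1024).to(device)   # toy model, just for illustration
batch = torch.randn(32, 1024, device=device)     # toy input batch
out = model(batch)                               # runs on whichever GPU backend is present

print("ran on:", device)
```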
That's why Nvidia can afford to not be very competitive with AMD and Intel, yet they still go out of their way to try to inhibit them anyway because Jensen just wants to be another Steve Jobs at this point.
1. Full-range day-1 hardware support
2. Support by ML frameworks and models
ROCm is only functional on a select set of hardware, and support is slow to appear. It works on the 7900 XTX, for example, but not on the 7800 XT, and even that 7900 XTX support only landed a few days ago.
Many model maintainers will refuse to officially support AMD/ROCm-related issues; they will simply tell you to switch to Nvidia, because that is all the model was tested on.
ROCm is gaining traction, but it still carries all the negative connotations of yesteryear. AMD is fighting an uphill battle against a fortified monopoly, and these mathematicians who know Python don't give a crap about open standards and open source.
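To make the framework-support point concrete, whether a given PyTorch install will even see an AMD card comes down to which backend it was built against. A rough sketch of how you would check (my own illustration, not from the thread; torch.version.hip is set on ROCm builds and None or absent on CUDA builds):

```python
import torch

hip = getattr(torch.version, "hip", None)   # set on ROCm builds, None/absent otherwise
cuda = torch.version.cuda                   # set on CUDA builds, None otherwise

if hip:
    print("ROCm/HIP build:", hip)
elif cuda:
    print("CUDA build:", cuda)
else:
    print("CPU-only build")

print("GPU visible:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))
```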
I don't get it. They are in the gaming card business, so why not give gamers open-source drivers and use the community to improve their own product?
That'd be money for them.
I don't even know what you are talking about with "giving gamers their support". Are you trying to insinuate that there is some kind of gaming GPU shortage because Nvidia is selling cards for AI? Last I checked, every current- and last-gen card is widely available.