Kind of weird that it's Radeon; their consumer AI tech isn't anywhere near Nvidia's at the moment.
Maybe smaller ventures might be better served by a handful of GPUs than by proper AI chips.
Personally I'm not TOO worried. The difference with mining was that it turned GPUs into passive income generators, for anyone. You can make money with GPUs for AI or content creation... but that's just using them as a tool to do work. You still have to do the work. Mining was far, far more passive and accessible, and that's what allowed their value to skyrocket.
Youtubers will make videos for traffic, and I get the impression that's what this is. If there were substance to this possibly resulting in skyrocketing GPU value, it'd probably already be happening. That's how it went with mining anyway; the effects occurred before websites and Youtubers caught on and started pointing it out.
As always though, time will tell. But I don't see this as analogous to the mining situation at all.
I expect it to have no meaningful impact on consumer hardware prices.
CUDA is the only reason Nvidia is the go-to for these types of workloads. Everything is built around CUDA, which is exclusive to Nvidia.
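To show how baked-in that assumption is, here is a minimal sketch (assuming PyTorch; the toy model and tensor sizes are made up for illustration). The standard pattern in ML code is to target CUDA whenever an Nvidia GPU is present and fall back to the CPU otherwise, and much of the ecosystem simply expects the CUDA path to exist:

```python
# Minimal sketch, assuming PyTorch; the model and tensor sizes are made up for illustration.
import torch

# The standard pattern: target CUDA when an Nvidia GPU is present, fall back to CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(512, 10).to(device)  # toy model
batch = torch.randn(32, 512, device=device)  # dummy input batch
logits = model(batch)
print(logits.shape, "computed on", device)
```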
A GPU is a GPU; we do ML on 2080 Tis. The workload will determine whether the hardware is sufficient or not, and in 99% of cases these cards are perfectly sufficient. Not everyone is training the next ChatGPT; most ML projects are much smaller in scale and easier to train.
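As a rough sanity check of that claim (the 4x multiplier and the model sizes below are back-of-envelope assumptions, not measurements), you can estimate whether a model's training footprint even approaches a consumer card's VRAM:

```python
# Back-of-envelope sketch; the 4x multiplier and the model sizes are rough assumptions.
def training_vram_gb(params_millions: float, bytes_per_param: int = 4) -> float:
    """Very rough lower bound on VRAM needed to train a model in fp32 with Adam."""
    params = params_millions * 1e6
    # weights + gradients + two Adam moment buffers ~= 4x the parameter memory;
    # activations come on top of this, so real usage will be higher.
    return 4 * params * bytes_per_param / 1e9

for size_m in (10, 100, 1000):
    print(f"{size_m}M params: ~{training_vram_gb(size_m):.1f} GB (a 2080 Ti has 11 GB)")
```

By that estimate, anything up to a few hundred million parameters fits comfortably on an 11 GB card; it's only around the billion-parameter scale that consumer VRAM starts to become the bottleneck.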
"AI accelerator" is a very broad term which applies to any type of processor. The most suitable solution will depend on the workload being performed,
This tech is still very much in development and we do not know where it will go. For example some upcoming methods are doing ML on the CPU instead.
For 'hobbyists', perhaps.
Nvidia's H100 and A100 are the best suited for the job.
For the vast majority of ML implementations, a high-end consumer card is more than sufficient. Like I said in my previous post, not everyone is building the next ChatGPT.
My knee-jerk reaction was more that, because we hadn't already seen the sky fall, so to speak, with prices skyrocketing, either the "dedicated AI chips" like the H100 deliver much better results for the price, or there was more to it.
Either way I'm definitely not concerned about this becoming the next mining equivalent that skyrockets GPU value. There was a lot more going on with the Ethereum situation that isn't really present here. So the video just seems like it's trying to attract views.
The H100 is only needed for the most demanding of tasks.
I don't think the ML field is going to make a noticeable impact on chip availability, but it will most certainly have an impact on R&D; consumers are second-class citizens now.
When these cards are needed, they are typically just rented in the cloud. You can have a T4 for $150 a month or something, and you will only be using it for maybe a couple of hours at most, adding up to maybe $10 a month.
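To make that math concrete (the $0.20/hour figure is an illustrative assumption for a T4-class instance, not a quoted price; real cloud rates vary by provider and region):

```python
# Cost sketch; the hourly rate is an illustrative assumption, not a quoted price.
T4_HOURLY_USD = 0.20  # assumed on-demand rate for a T4-class instance

def monthly_cost(hours: float, rate: float = T4_HOURLY_USD) -> float:
    return hours * rate

print(f"Rented 24/7 (~720 h): ~${monthly_cost(720):.0f}/month")
print(f"Occasional use (~50 h): ~${monthly_cost(50):.0f}/month")
```

Paying only for the hours you actually use is what keeps occasional ML work far cheaper than buying dedicated hardware outright.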