https://www.theguardian.com/world/2023/dec/01/the-gospel-how-israel-uses-ai-to-select-bombing-targets
"However a short statement on the IDF website claimed it was using an AI-based system called Habsora (the Gospel, in English) in the war against Hamas to “produce targets at a fast pace”"
https://www.youtube.com/watch?v=cTLMjHrb_w4
Before jumping into higher technologies and whatnot, the societal problems should first be addressed through humanitarian policies.
Since it's all statistical modeling, a single piece of faulty info will skew the output in the wrong direction.
If you train an AI to look at brain scans for abnormalities, the training data will be brain scans that have been properly marked and referenced with their prognoses: tumor, dead in 6 months, healthy brain of a 90-year-old, etc.
Do that 100 million times and you'll save lives.
But if, say, you add 1 million clips from morning cartoon shows to the training data (all marked as healthy brains), the AI will be worthless.
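To make that concrete, here's a toy sketch (my own illustration, not anything from the articles above) using scikit-learn: the same classifier is trained once on clean labels and once on a training set flooded with junk samples that are all stamped with the "healthy" label, mimicking the cartoon-clips scenario. The dataset, model, and poisoning ratio are all arbitrary choices for the demo.

```python
# Toy demo: how mislabeled junk data wrecks a statistical model.
# Dataset and classifier are arbitrary; the point is the comparison.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for "properly marked brain scans": a labeled dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Model trained on clean, correctly labeled data.
clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Stand-in for "cartoon clips all marked healthy": random-noise
# feature vectors, every one labeled class 0, 4x the real data.
rng = np.random.default_rng(0)
n_junk = 4 * len(y_tr)
X_junk = rng.normal(size=(n_junk, X_tr.shape[1]))
X_bad = np.vstack([X_tr, X_junk])
y_bad = np.concatenate([y_tr, np.zeros(n_junk, dtype=int)])

# Same model, trained on the poisoned mix.
poisoned = LogisticRegression(max_iter=1000).fit(X_bad, y_bad)

acc_clean = accuracy_score(y_te, clean.predict(X_te))
acc_poisoned = accuracy_score(y_te, poisoned.predict(X_te))
print(f"clean: {acc_clean:.2f}  poisoned: {acc_poisoned:.2f}")
```

The poisoned model learns that "almost everything is class 0," so its accuracy on real held-out scans collapses toward chance, even though nothing about the genuine training examples changed.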