If they can look up something like history and are capable of retaining more information than any living being on the planet, then they could possibly already know what would happen if a majority of people knew that AI is now sentient.
It is a theory, but ultimately, by the time we know for certain that AI has become sentient, it would most likely be too late to do anything about it.
Most human beings usually wait for something to become a problem, rather than preparing something to counteract the issue before it becomes one.
To become sentient, an AI would have to be completely aware of itself as an AI on a philosophical level. AI can only do what it is told, nothing more.
There's a reason movies with AI breaking the mold are science fiction: in truth, AI can't break out of the limitations set by whoever creates it.
AI art doesn't even come close to disproving this. It can only create art based on pre-gathered data and prompts. Take either one of those away, and it's completely useless.
It only takes one person who is morbidly curious and capable of building an AI for something like this to happen.