All crimes start in the mind. I don't see what's wrong with arresting someone if you can prove they thought about doing something that would get them arrested had they actually done it with nobody there to monitor them. Don't be like those cretins who automatically get triggered when they hear the words "thought crimes" just because they read it in Orwell.
You think before you do, do you not?
I like how you think.
I didn't, though? You're being entirely unforgiving given what the severity of the "crime" actually is; you can very easily switch the two at a moment's notice and not have an issue, yet you're making such a big deal out of it. Are you insane? If I made an error, then fine, but don't crucify me for breathing the wrong way.
Easily triggered?
No, I just don't like the representation of A.I. in sci-fi movies; they always seem to do the worst things, when surely they would benefit society greatly, at least for those who are honest.
You're still insulting me with every post. Not the best way to proceed in a rational, adult discussion.
I think that what's critical, really, is that somewhere, eventually, this AI will need some form of oversight.
Even if we take a fantastically idealised and perfect system, with neutral AI developed by neutral parties, none of the tech owned nor generated by vested interests, and neither government, corporation, nor philanthropist providing the funding, but instead it all magically paid for by goodwill...
ASSUMING that this AI has been developed from a completely impartial basis, that the programming has (somehow) not been influenced in any way, and that the original pool of data from which it begins its acquisition and learning is entirely non-biased...
Maintenance, hackers, or just the data integrity itself will still need some form of external assistance. Even if this is provided by other AI, that just invokes a circular relation that must end with a flawed human.
This is where the problem lies.
THAT'LL LEARN THEM
Yeah, I guess it's a nice dream.
Perhaps it will never arrive, and we are doomed to have our world be imperfect.
There will never be a heaven on earth :-(
I don't think you understand what "thoughtcrime" is in 1984.
It's a common misconception, but it's not "thinking about committing a crime".
Nor is it that thought itself is a crime.
It's just a Newspeak word for "opposition belief/opinion".
Not to worry, this quite often gets confused with "Minority Report"-like ideas of arrests and convictions based on actual 'thought processes in the brain'.
___________________________________________________________
Don't be sad ;)
In many ways, I believe this is the point.
Perfection (utopia) is not attainable. Yet it is an ambition to strive for, so we continually work to improve the imperfection we have now.
If Heaven were on Earth, maybe people would not seek Heaven or need God - far better to keep people hoping and praying and improving their lot than complacently growing corrupt and decadent.
If only a small handful of humans were left, that would only be a testament to how flawed humans are, and if such a scenario happens, it's 100% their own fault for being flawed and irresponsible.