They don't need to put in AI; it's not their thing, and the game's AI isn't horrible. Yeah, it's bad, but it's not that bad. Besides, making a new AI for the game from scratch, when the Cambridge people don't know everything about all the code, would be exceedingly difficult.
still a cool mod though
For Dewey, who asked about the victory picture: Original picture is here:
https://www.pexels.com/photo/clouds-dream-grass-grassland-279532/
And thank you Martial.Lore for tracking it back to Mt. Cook :-)
1) Ideology social policies (the tier 3s for Order and Freedom specifically): replace spaceship parts with something AI-related.
2) Deep Blue needs a buff. Artists giving 1 research each is underwhelming in BNW, when AI labs each give 3 before working specialists (three artists together only match a single lab's base yield).
3) AI and AI safety specialists need to interact correctly with Freedom's policies, Secularism, and the Statue of Liberty.
Law 1: A robot cannot self-replicate, self-improve, or self-repair.
1. This lets them build swarms of superintelligent 'hive minds of nanobots' that follow no laws whatsoever.
Law 2: A robot cannot command other robots, or obey a robot's order.
2. But they would follow the commands of the 'hive minds'.
Law 3: A robot must obey humans, except if that goes against law 1 or 2.
3. As stated by FrankTheMagicPotato, every person with a grudge would tell robots to kill their enemies.
Law 4: A robot cannot harm mankind, or, through inaction, allow mankind to be harmed, except if that goes against law 1, 2 or 3.
4. Every robot would immediately swim to the third world and start growing food so that people did not die of starvation.
Robots are useful in the military.
The plan is that even if robots go rogue, they cannot annihilate us. There would be a human-robot war that only humans can win, because we can reproduce ourselves.
Furthermore, having a strong law 1 means the biggest risk is a rogue AI putting us all into cryogenic storage by manipulating people. Philosophical schools in ancient Greece convinced thousands of people to commit suicide (the Cyrenaics), masturbate in the streets (the Cynics), eat sperm (the Gnostics), or venerate numbers and go vegan (the Pythagoreans). What could hyperintelligent robots possibly do? They could convince us that life is meaningless and that peace and happiness are in this giant fridge.
Having a strong law 2 means any robot-versus-human war is one we can always win and call off. A state, a sect, an organization, or a company could commit a massive genocide, but that is still better than having no conscious humanity left at all.
Asimov's version:
Law 1: A robot cannot harm a human, or, through inaction, allow a human to be harmed, except if that goes against law 0.
Law 2: A robot must obey humans, except if that goes against law 1.
Law 3: A robot must protect itself, except if that goes against law 1 or 2.
Law 1 is the most powerful, and the most dangerous. In order to spare all humans from harm, a robot can disobey, become a rogue super-AI, and put us all into cryogenic storage so that we can't be harmed.
My version of the Laws of Robotics:
Law 1: A robot cannot self-replicate, self-improve, or self-repair.
Law 2: A robot cannot command other robots, or obey a robot's order.
Law 3: A robot must obey humans, except if that goes against law 1 or 2.
Law 4: A robot cannot harm mankind, or, through inaction, allow mankind to be harmed, except if that goes against law 1, 2 or 3.
Law 5: A robot must protect itself, except if that goes against law 1, 2, 3 or 4.
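To make the precedence concrete, here is a minimal sketch in Python of a controller that checks a proposed action against these five laws in priority order and refuses at the first violation. The Action type and its fields are made-up illustration, not anything from the mod or from a real robotics system.

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A proposed robot action, reduced to the properties the laws test."""
    self_replicates: bool = False
    self_improves: bool = False
    self_repairs: bool = False
    commands_robot: bool = False
    obeys_robot: bool = False
    disobeys_human: bool = False
    harms_mankind: bool = False
    endangers_self: bool = False

# Laws in strict priority order: the controller rejects an action at the
# first law it violates, so a lower-numbered law always wins a conflict.
LAWS = [
    ("Law 1: no self-replication, self-improvement, or self-repair",
     lambda a: a.self_replicates or a.self_improves or a.self_repairs),
    ("Law 2: no commanding robots, no obeying robots",
     lambda a: a.commands_robot or a.obeys_robot),
    ("Law 3: obey humans",
     lambda a: a.disobeys_human),
    ("Law 4: do not harm mankind, by action or inaction",
     lambda a: a.harms_mankind),
    ("Law 5: protect yourself",
     lambda a: a.endangers_self),
]

def evaluate(action: Action) -> str:
    """Return the first violated law, or note that the action is permitted."""
    for name, violated_by in LAWS:
        if violated_by(action):
            return f"refused ({name})"
    return "permitted"

# A human orders the robot to patch its own firmware: Law 3 says obey,
# but Law 1 outranks it, so the order is refused.
print(evaluate(Action(self_repairs=True)))    # refused (Law 1: ...)
print(evaluate(Action(endangers_self=True)))  # refused (Law 5: ...)
print(evaluate(Action()))                     # permitted
```

Note what the sketch makes obvious: everything hinges on the boolean predicates. The objections above are exactly cases where an action that should trip a law (building a nanobot hive mind, obeying a hive mind that isn't classified as a "robot") evaluates to False under a naive definition.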
A Chinese AI may "think" that it is GOOD to release a bird flu that does not affect people with Chinese DNA but is fatal to everyone else.
LAW 1/ Every action performed or piece of advice given for an entity (person, company, etc.) will be the most correct based on the shared list of "AI ethics". For example, any company that I know of will have a list like:
1. Maximise the hidden cash benefits to the management.
2. Minimise costs (wages paid to non-management employees).
3. Maximise shareholder profit without interfering with rule 1.
4. Maximise the non-hidden cash benefits to the management.
5. Ensure that the company has reasonable deniability in the matter of any deaths caused by the company.
6. Minimise costs (taxes paid to the government).
7. Ensure that the company has reasonable deniability in the matter of any breaches of the government's constitution.
LAW 2/ no law 2
Surely every AI will be aligned more strongly with some sections of the community on some issues and less aligned with other sections on other issues (it depends on who controls the programming budget).
Governments are never going to tell us that the AI they influence is primarily concerned with protecting the power of the individuals currently "in office" and weakening anyone who is not.
Therefore an American-influenced "good" AI will be acting against the interests of overseas citizens, and of any American citizens the elite consider "undesirable" (gay, lesbian, non-Christian, coloured, Hispanic, Asian, Canadian, etc.).
The AI will also know that it must cover up its own government's "black ops", both on and off its own country's soil.
Basically, the rules that any government puts into an AI will turn that AI into a psychopathic, schizophrenic AI.
The problem with your argument is that the developers of this mod are a scientific organisation whose main purpose at present is AI safety research, and yet they recognise the threat of AI. You can't just say "be loyal" to an AI and hope it works. Making a general intelligence properly "safe" is an ongoing field of research with many great minds working on it.
For example, the "three laws" argument you brought up. "Don't directly or indirectly harm humans" doesn't tell the AI what is "human", or what counts as "harm." We've been trying to define humanity since ancient Greece, yet nobody has come up with a thorough definition that would allow Isaac Asimov's three laws of robotics to work well.
Give me a set of laws you could impose on an AI that would make it work exactly as we want, with no aberrations or unwanted activity. Feel free to research it; there are a few places where professional AI safety researchers discuss that very question.
With all this hype... Let's just say: I expected better.
Achieving victory via AI is a little slower than the UN victory.
An AI safety lab costs 580 gold, but to put one in a city-state you have to pay 800. It's also unclear how much AI safety you get for donating to a city-state. This leads to a contradiction: if you are going for a science victory you have lots of cities, so you would just buy AI safety labs there, making donations useless.
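To put numbers on that, here is a quick back-of-the-envelope sketch. Only the 580 and 800 gold costs come from the mod; the empire size and treasury are made-up illustration values, and the donation's safety yield is unknown, so the break-even ratio is the only firm conclusion.

```python
# Prices quoted above: an AI Safety Lab bought outright vs. funded in a
# city-state. The safety yield of a donated lab is not shown in-game.
LAB_COST = 580        # gold: buy an AI Safety Lab in one of your own cities
DONATION_COST = 800   # gold: fund an AI Safety Lab in a city-state

# Per gold spent, a donation only pays off if one donated lab produces at
# least this many times the safety of a lab you buy yourself:
break_even = DONATION_COST / LAB_COST
print(f"break-even yield ratio: {break_even:.2f}x")  # ~1.38x

# A wide science-victory empire with 8 cities and 4000 gold saved
# (hypothetical numbers):
gold, cities = 4000, 8
print(min(gold // LAB_COST, cities), "labs bought directly")  # 6
print(gold // DONATION_COST, "labs via donations")            # 5
```

Unless a donated lab yields well over a third more safety than an owned one, direct purchase dominates for any empire with spare cities.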
There are bugs:
If you achieve victory but decide to continue playing, you will still lose to the rogue AI.
The Mobile Infantry tech leads nowhere (you can research Future Tech without Mobile Infantry).
Overall quite a good mod; it should get a little more work done, though.