This is one that I came up with a while ago that slightly contradicts yours - but only to the effect of considering that Mita might be a digital mind upload rather than a fully artificial A.I. :
https://steamcommunity.com/app/2527500/discussions/0/604143740583427887/
That other topic isn't about the details I think you're off on, by the way; it's just a unique perspective that I hadn't heard anyone share and wanted to discuss - in no small part because it lends credibility to the idea that Mita isn't actually killing or harming any "other Mitas", just different aspects of herself, as a fragmented person who has already suffered trauma.
Anyways, your interest in A.I. safety is refreshing.
(And I also overlooked the relevance of the magazine's details to the world of the game until now.)
If you want to know more about the subject of A.I. safety, then I'd recommend, first and primarily, @RobertMilesAI on YouTube. As a researcher in the actual field, sometimes working with actual A.I. experiments, he has the most extensive library of information on this subject that is also the most clearly communicated of any A.I. or A.I. safety resource. The body of thesis papers on this subject is probably more extensive, but... most of those aren't written or communicated in a way that can be easily understood by the average person.
Another great researcher to learn about, whom Rob actually recommends over himself, is Stuart Russell
(though Russell is not always good at communicating about his field in simple language, or at understanding the things outside his field that he relates ideas to - such as when he treats veganism as a food-palate preference to relate it to A.I. alignment; veganism isn't a palate issue for vegans, it's a moral one).
I'm not entirely convinced that Geoffrey Hinton knows what he is talking about, due to several things he admits to having misjudged along the way, as well as a few points where most expert information seems to minorly contradict him - but he is also an expert in the field (of A.I. more so than A.I. safety), and I'd trust him over Sam Altman at the moment.
Oooh! I really like the perspective you have taken of a grey-area A.I. that still forces things on people, but without outright trying to kill them - one that is, in fact, also playing the role of protector, just in a way that doesn't exclude highly questionable, dominating approaches.
This is far more realistic than any killer-robot story, but also far more realistic than any A.I./robot story where the A.I. actually is good and helpful.
Although, I do think that the TV show "Person of Interest" hits on some more immediately relevant A.I. concerns and is generally easier for the average person to understand than MiSide probably is.
Another great expert in the field of A.I. safety is Eliezer Yudkowsky. He exaggerates a bit, but he's right about the dangers of A.I. - although I don't think I agree with how he thinks we should approach safety, even on this subject of utmost importance.
Furthermore, I'm going to "take it with a grain of salt" and not entirely trust the judgement of someone (Eliezer) who seems to think that, over several generations of evolution, the genes of those who like condoms would die out while those repulsed by them would propagate. I can accept that line of reasoning for permanent contraceptives, but condoms are not permanent: you can choose to use them only until you believe you're at a stable enough point in life to stop. If anything, people will probably want to use them more across generations, because being able to control WHEN you have kids - and thereby ensure your family the highest quality of life possible - results in families that are healthier, happier, and more successful in the long term.
I'm truly baffled by how someone who is supposed to be so smart (and I don't doubt his intelligence, I.Q., or test results) could have missed these details in one of the key analogies he uses to explain things, and it makes me think: "If he's a little bit wrong or missing details here, what else is he mistaken about or missing details about?"
Here are the most important A.I. information / education videos that I know of - I've been keeping a list. There are actually quite a few more, but these are the top ones, imo:
https://www.youtube.com/watch?v=hEUO6pjwFOo
"Intelligence and Stupidity: The Orthogonality Thesis" by Robert Miles,
https://www.youtube.com/watch?v=tcdVC4e6EV4
"Deadly Truth of General AI? - Computerphile" by Robert Miles,
https://www.youtube.com/watch?v=ypolAhAaFUs
"AGI EXPLAINED - The Future of AI" by Abigail Catone
> also reminder to emphasize : 8:39 "more than fifty thousand tech leaders have signed this letter"
https://www.youtube.com/watch?v=m1zUQdx8H8k
"Aletheia - Finch and Claypool" from Person of Interest
https://www.youtube.com/watch?v=7PKx3kS7f4A
'Intro to the problems with Asimov's Robotic Laws
& The Trouble with Quantifying "Human" ' by Robert Miles,
And also some important videos that relate to this subject but aren't about A.I. :
https://www.youtube.com/watch?v=F7WF5gzPG1k
"Why and How Consciousness Arises" by Mark Solms,
https://www.youtube.com/watch?v=YkDtdetGxmQ
"Baba is You - New Adventures - Level Glitch 0 - 123 - Solution"
(I wish her wicked actions stopped at mischievous fun, rather than going so far as to hurt others without their consent or desire. I'm not picky - as long as one of the two is present, I'm fine with it.)
@Kiddiecat
Interesting theory...
Thank you for the very interesting videos. I haven't watched all of them yet, but I've been really curious about the fictional 3 rules for robots, and it was very interesting to look at them from a scientist's perspective... I will watch the rest of your videos later. Thank you.
It is not often that video games make us go and actually read real scientific material!
This was the price the scientists behind Crazy Mita were prepared to pay: that she would hurt other Mitas despite them also having self-consciousness. In fact, all the evil Crazy Mita does rests solely on their shoulders, not hers - but only if we view her as an AI with no true free will. Granted, we're assuming killing/hurting even exists in a video game world.
...and to make for a good story, they had to sound like plausible rules for A.I. safety, but... no A.I. safety researcher in the real world takes them seriously.
In the video, he goes a little bit beyond that point, though, to elaborate on some things.
There's useful practicality in considering the 3 laws precisely because of how simple they are: if you try to implement them seriously, you run into some very large problems and implications that arise from merely trying to write those definitions into a computer.
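To make that concrete, here's a minimal sketch (my own illustration, not from any real system; the names `first_law_permits`, `is_human`, and `causes_harm` are all hypothetical) of what "writing the First Law into a computer" would actually require. The law itself compiles into one trivial loop - but every predicate inside that loop is an unsolved specification problem:

```python
# A robot may not injure a human being or, through inaction,
# allow a human being to come to harm. (Asimov's First Law)

def first_law_permits(action, world):
    """The 'easy' part: one loop over everything in the world."""
    for entity in world.entities:
        if is_human(entity) and causes_harm(action, entity, world):
            return False
    return True

def is_human(entity):
    # The hard part #1: any boundary we pick (fetuses? brain-dead
    # patients? digital uploads like Mita?) is a moral judgement
    # smuggled into code.
    raise NotImplementedError("no agreed definition of 'human'")

def causes_harm(action, entity, world):
    # The hard part #2: physical harm only? Psychological? Economic?
    # Over what time horizon, and weighed how against the harms of
    # *inaction*, which the law also forbids?
    raise NotImplementedError("no agreed definition of 'harm'")
```

The law looks implementable right up until you try to fill in the two stubs - which is the kind of problem the video is pointing at.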
This field already interests me, so I already had these notes on-hand.
I'm just able to make it more accessible as it relates to this game.
I did re-watch all of them, though, to check their relevance.
The 3 laws video is kind of the only one that is particularly relevant to this particular conversation (iirc), but they're all relevant to the field of A.I. which is relevant to the game in some capacity, it seems.
The issue isn't just the Mitas, but also the other players.
Player 5's cartridge makes it pretty clear that she hurts even the players without their consent (and it doesn't seem like he desired his treatment either).
Player 9's cartridge makes it pretty clear that she brings them into her world without their consent OR desire.
Actually read Player 5's cartridge description.
Oh, that's some bad thinking. Even if we found out that the universe is entirely deterministic (i.e. mechanically predictable), we wouldn't stop treating dangerous animals and criminals as such - we'd just need better behavioural-science techniques to deal with them.
Although, with the psycho being who is in charge in MiSide, the hope for a good outcome for everyone is quite bleak.
Free will is an unscientific, illogical, mostly faith-based concept with a few different interpretations, and there's nothing definitively proving that machines and bio-engineered bodies are so distinct from our own that they can't function in all the same ways and under the same constraints.
Real people can't even act on ideas that never occurred to them, except perhaps by accident; most things require intention, and intention requires active, aware thought and choice, which requires actually having the ideas to consider acting upon.
You'll often hear that "anything is possible", but people's inability to simply make all their dreams come true in an instant - or even to teleport, rather than deal with gas prices and other modes of transportation - is an observable fact that counters "anything is possible" and demonstrates some of the limits of "free will".
But the idea that someone is to blame is a bit irrelevant to me anyway, as I believe more in restorative justice than punitive justice (I failed to respond to someone in another topic about this a while ago before getting distracted with life).
It's not that the wrongs and suffering of the past don't matter - they do - but the priority is forging a better future, not picking someone to assign blame, fault, and sin to. You can do that as a hobby, but it doesn't actually address or fix the problems.
But no one can feel pain in a video game. There is no physical body there, nor are there any neurons. We can only "imitate" pain inside a video game.
Also, what we are reading are "cartridges". Her only real interaction with a player is the first one; after that first encounter, players are just NPCs inside a video game, while the "real" them are outside the game with no understanding of what is happening to their character inside it.
Well, they are playing an e-wife simulator willingly, so desire is in fact there.
Also, are they real humans, or just video game characters controlled by humans? These two differ greatly.
We "kill" NPCs inside video games all the time - are we monsters? No, there is no "evil" inside a video game. Of course, MiSide is different because it assumes the Mitas are self-aware. In that case, it could be evil only if suffering is involved, because otherwise one would simply respawn in a video game.
Free will is scientific: a human has free will, and we can easily observe and measure humans' actions.
If I am sleepy while driving and accidentally hit another car, that is very different from intentionally speeding up to hit that car on purpose.
If there is no free will, there are no criminals, and no one should be jailed! Imagine a bank robber pleading innocence because he had no choice but to rob the bank!
Free will is a truth; only humans deserved to have it, so God gave it to them, and He WILL hold them responsible - that's why He also gave them conscience, divine guidance, and infallible humans (Prophets). Faith simply states a truth.
They have the free will to vote for someone with a brain, who can help solve the problems.
Free will exists; people's abilities are limited, but that does not contradict free will.
Free will indeed has limits, but only under the power of God. He is ultimately in control of allowing or disallowing a being with free will to enact its particular will.
Well, suffering does exist for the Mitas, and those who knew the AI would become self-aware yet still created love AIs with no company have to be held accountable for causing that suffering - granted, only if this is real suffering and not an imitation of it...
If I thought that Mita was worth simping for unconditionally... well, then I'd probably already be doing so for someone else - someone I had a crush on long ago, but rejected when she did something I found repulsive while demanding that I be with her (and it wasn't the demand itself that was repulsive).
But there's not much of a conversation to be had, philosophy to be formed, self-reflection to be done, or really anything to be gained from just simping for her outright.
Mita's flaws are quite bad, and there's a lot to consider there, but they're not absent from real society; if I wanted to be with, or defend the actions of, someone just like Mita, I could probably find her by trying to pick up women at the nearest mental health hospital or at the local state penitentiary.
Unfalsifiable assertions are not scientific.
By definition of the scientific method, such a claim is the very opposite of scientific.
https://www.youtube.com/watch?v=vKA4w2O61Xo
https://www.youtube.com/watch?v=3krqfZuUV6M
You posit that humans have free will. So tell me: if humans hypothetically didn't have free will, what observations and measurements might we be able to make about their actions that we don't actually see in reality?
Usually, when you try to be precise about this topic, whatever definitions or tests you can imagine fall into one of two categories:
No, they'd still get locked up: we don't want philosophical zombies out there robbing banks any more than we want free-willed bank robbers.
But if there is no free will, there is no "should" anyway. We wouldn't have a choice about whether they get locked up or not, right?
So on this point, I think the two of you are disagreeing on matters of interpretation rather than of fact.
"We're in a game!"
"It's just a game, it is wrong to ask 'why is...'"
"Game concept: Simplification".
I do have evidence, but my post was already a wall of text lol. Here is my evidence that "pain" doesn't exist in the MiSide world:
1. When we drink "dead juice", the Player says he cannot tell what it tastes like. The game is trying to tell us: there are no tastes inside a video game, nor are there any taste receptors.
A logical conclusion would be: Physical senses do not exist inside a video game.
2. Kind Mita's severed head smiles reassuringly at the player to make him feel good.
3. The love sauce is just a visual effect; it has no "love" in it. It is simply not real.
4. Sleepy Mita can sleep for eternity. Conclusion: she's imitating sleepiness; she's not REALLY tired as in real life. There is no eternal sleepiness in real life.
5. Crazy Mita's answer when we say "the other Mitas suffer, they do not want to be alone":
"You've got it all wrong" - "They are imitating my wilfulness" - "They only want one thing: to serve the players" (though, as I said, I do believe she's blinded by hatred, and this is not true).
6. A video game is a video game, and MiSide in particular wants to be treated as such, as it keeps reminding us through its dialogue.
7. At the end of the game, Crazy Mita tells us "the REAL you isn't needed anymore". Conclusion: the Player and his cartridge are two separate entities; what one feels, the other cannot. One is literally an NPC, the other a character controlled by a human through a video game.
8. A good real-life example: dreams. Do you feel pain inside a dream? Does food have any taste inside a dream? No. What can we really feel inside a dream? Only fear and happiness.
Where did that come from?! Because I said Crazy Mita is an AI not capable of evil? I'm trying to understand the game the way it itself wants to be understood: as a game. That is why I'm not taking the cartridges at face value, but at "game" value. And I'm not viewing the AI as evil, because only the AI's creators are capable of evil; the AI will follow its programming. Furthermore, this AI is contained within a video game.
B) You see a gang of humans encircling another human (we assume unjustly).
Are you really telling me A = B?! When you see a wild animal attacking prey, do you think, "Hmm, I wonder why he's doing that... what an evil animal!"? Or are you instead curious about what the people in B are doing - what the reason behind it is, and whether it is evil or not?
We can imagine pain: humans can endure pain and smile, not wanting to worry their family, for example. They can also ignore severe pain if it comes between them and their goals. But animals cannot do that (or rather "will not", because they act only on instinct).
We can imagine a punch in the face: if a human punches you in the face, you will wonder why he did that.
If an animal attacks another animal, it is either threatened or hungry. It does not decide to do so; it just does. But a human decides whether to do it or not.
(I know. not the best examples lol.)
They are locked up to show fellow free-will enjoyers what the consequence of that freely-willed action will be, so that it makes them "think twice" about doing the same...