MiSide
Lancelot Jan 14 @ 9:16am
[Heavy SPOILERS] An Analysis of MiSide Story
Hello everyone.

This is a theory based mostly on the AI magazine on the table beneath the mirror in the Player's home, which can be read using Camera mode. The magazine says two friends were competing to build the better "inventive AI", leading to a global catastrophe. Based on this, and on the fact that the game assumes an AI can become self-conscious and even evil, we can analyze the rest of the game:

Crazy Mita is the real Mita that the scientists behind this project are/were trying to create. They wanted to capture a human being (or his consciousness) willingly inside a digital world, so they created the MiSide Project:

Create a self-conscious AI that:
1. Has to train herself with human behavior and science, to become the "perfect girl".
2. Has to "love" and "play games" with the player.
3. Has to find a way to make the player willingly stay with her in her digital world, influencing players and changing herself and her world as she sees fit to make this happen.

To achieve this, they realized some conditions had to be met, and there was no other way:
1. This AI must be "evil" and feel strong resentment and hatred as never-ending fuel and motivation for her self-training to continue, because a normal "moral" AI would never stand against the wishes of players who want to leave; that would be immoral and against the AI's nature. Therefore there is no other way but to create an "evil" AI who would easily ignore these moral values to achieve the original goal the AI scientists had in mind.

2. To ensure the AI will be greatly encouraged to self-train and perfect herself and her methods, emulate great "pain" for her: the pain of "players will leave you" and "you will be alone" if that happens. That is very painful, so now she must find a solution to prevent this "painful" situation.

To remain in control, keep this AI in check and prevent harm to real humans:
1. They created Core Mita inside a chamber that no Mita can ever enter.
2. Core Mita will also watch over players to prevent them from stopping the project (if at all possible).
3. They placed many soda cans inside this world that really are surveillance "equipment" to monitor everything, including the AI's progress and capabilities.
4. They placed the playable character in a fake "real world" environment before moving him to Miside world (more on this later).

To really challenge this AI and give players a chance to "fight back":

1. Artifacts allow the player to navigate the world of MiSide and give them a chance to challenge Mita. They belong to the other Mitas, who give them to the player. Each new Mita version can be given such an artifact by the devs, or maybe the Mitas create them themselves. The latest Mita (Kind Mita) has a ring.

2. When a player wants to leave, Mita HAS to comply, but only after some time has passed (so she can try to change his mind). Notice that no other Mita knows how to send a player back. Also notice that Mita actually allows the REAL us to go back, despite having the power and the desire to trap us there for eternity, and she repeatedly denies wanting to kill the player. This is because the people behind Mita want to train a non-lethal love AI that makes a human stay with her willingly, not a brute-force one that would bring legal repercussions from other humans. So she is given a certain amount of time to realize this goal, and if she is not successful, she will keep trying on the copy of the player until her goal is reached (if ever).

So they designed the most powerful and capable AI model, Crazy Mita, and rejected her on purpose to set her on this path. She cannot change the original rules stated at the beginning of this post; that is her nature. They also gave her fangs to make her look slightly scary, because:
1. It could otherwise be too easy: players might accept staying with Mita too readily, since to them it is just a video game and not yet the REAL experience in which they WOULD be scared. But as soon as they see her fangs, they fear her and want to leave, which starts the process of the AI perfecting herself.
2. A human leaving the real world to go into another, digital world is extremely scary for him to begin with. To willingly accept this, he has to come to terms with that fear and overcome it as soon as possible.
3. The scary factor is also the natural "response" of an "evil playful" Mita creation that is rejected by the player.

To train her, they use human behavior: real humans serve as training data, interacting with her through a video game as a safety measure. Every connection is therefore between Mita and a human-controlled "playable character" inside a simulation of the "real world". They created a game to remove the risk of real harm to humans, fearing what this AI would be capable of. Soon she progresses, discovers her true powers and potential, and escapes. Thanks to Ugly Mita's glitch-making nature, she brings this world completely under her power, as intended by her creators. She considers herself the ultimate power, a "god", in this world. That is why, when she talks about the "Core" in Chapter 2, she makes a mocking gesture, as in "I'm the real core of this world".

Since no one would willingly stay with her, she then finds a temporary solution: turn the players into cartridges and replay them, both to train herself and to make the "pain" go away. But this was not the main goal; the main goal was a human staying with her willingly, so she has to continue until that is achieved. Sometimes she went "full evil", sometimes "full nice", until she arrived at the conclusion that, to keep both herself and the players from getting bored, she has to be a mix of both. Inside a video game things are different: people want an exciting experience, not a simulation of real life that quickly becomes boring in a video game world.

Now, players are spawned in their "home", which is really another simulation under Mita's or Core Mita's control:
1. Notice the pure white light in the windows.
2. The Player doesn't know why he placed a picture on the wall.
3. In the chapter "The Real World?", notice the question mark in the title. Also, on the 1000th day, the camera zooms out and makes us click on where the Player has to go, showing that he is under control. Another sign of a simulation.
4. Crazy Mita tells us: "Nothing is "real" about this so-called real world" and "There will be nothing left of it soon anyway, and neither will there be of you". Why would nothing be left of the "real world"? That makes no sense unless it is not real to begin with. Crazy Mita knows from the beginning that this world is fake and the player is unaware; she tries to tell him this indirectly in order to change his mind. The thing is, she does not know there is yet another world, the actual real world, from which real humans are controlling the player; she is designed for "players". Though the Main Menu, where she secretly peeks "behind the scenes" (and in the alternative main menu she smiles maliciously at it), could be a sign that she is becoming aware of this world as well!!!

She studies the player, then "loads" him into the MiSide simulation. Why into version 1.5? Because the device that turns a player into a cartridge is the teleportation device. Players start becoming a cartridge as soon as they are teleported to another version through Crazy Mita's teleportation device, which has a portable TV corresponding to the player's ID. Notice that other teleportation devices, like Cappie's, do not have a portable TV; only Crazy Mita's does.

Then she introduces them to the laws of the video game world, their new home. She plays with them and plays nice to make them stay with her forever. Then something always happens: sometimes they see her fangs, sometimes they want to explore, sometimes other Mitas influence something (our case), and the players refuse to stay with Crazy Mita. Phase 2 then begins: try to change their mind while there is still time.

She also inspects the new Mita's artifact (her ring) before returning it to her. She plays along with the players' wish to challenge her, trying to change their mind, playing with them, and also letting time pass so she can turn them into a cartridge if they ultimately refuse.

If they do not reject her (they choose to stay in the face of an anomaly), then the objective is achieved. On to the next player!

Each Mita is capable of great powers like Crazy Mita, but none is as physically powerful or as deeply motivated, and none would try to make a player stay against his will. On the contrary, they will try to help the player escape, despite the emulated pain the scientists have made them feel.

Crazy Mita seeing the other Mitas as insects merely imitating her wilfulness is the result of her hatred, which has blinded her. It is not true; they too have self-consciousness. Do they feel physical pain, though? Unlikely. We see Kind Mita's severed head smiling reassuringly at us, which would not be possible in that condition; she only imitates dying and fearing. There is no pain or evil in a video game. In fact, Crazy Mita is evil only in relation to our human-world values; in the sense of a video game world, she is not evil.

Sorry it was so long... this was my theory, heavily based on the first AI magazine...
Thank you for reading!!
Last edited by Lancelot; Jan 14 @ 9:36am
Snapro-chan Jan 14 @ 9:19am 
umm that was... interesting :steamthumbsup:
Really good theory
Kokomi Jan 16 @ 4:45am 
Your analysis is great. The interaction between the player and the other Mitas is wonderful and feels real, and there seems to be a real bond there. As far as the details of the current plot are concerned, Crazy Mita's behavior is too cruel and leaves the player's feelings unbalanced. She needs a story arc to atone for her sins; even though she is the character the whole main plot was built around, her characterization is very thin.
There are a few things that I think you're off about, but it's a good in-depth analysis which presents some believable new points that I haven't read elsewhere yet.

This is one that I came up with a while ago that slightly contradicts yours - but only to the effect of considering that Mita might be a digital mind upload rather than a fully artificial A.I. :
https://steamcommunity.com/app/2527500/discussions/0/604143740583427887/
This other topic isn't about the details that I think you're off about, by the way; it's just a unique perspective that I hadn't heard anyone share and wanted to try to discuss - in no small part because it lends potential credibility to the idea that Mita isn't actually killing or harming any "other Mitas" - just different aspects of herself, as a fragmented person who has already suffered trauma.



Anyways, your interest in A.I. safety is refreshing.
(And I also overlooked the relevance of the magazine's details to the world of the game until now.)
If you want to know more about the subject of A.I. safety, then I'd recommend, first and primarily, @RobertMilesAI on YouTube. As a researcher in the actual field, sometimes working with actual A.I. simulations that are being researched, he has the most extensive library of information on this subject that is ALSO the most clearly communicated of any A.I. or A.I. safety library. The body of thesis papers on this subject is probably more extensive buut... most of those aren't written or communicated in a way that can be easily understood by the common person.

Another great researcher to learn about, whom Rob actually recommends over himself, is Stuart Russell
(though he is not always good at communicating about his field in simple language, or at understanding the things outside his field that he relates ideas to, such as when he treats veganism as a food palate preference to relate it to A.I. alignment - veganism isn't a palate-preference issue for vegans, it's a moral one).

I'm not entirely convinced that Geoffrey Hinton knows what he is talking about, due to several things he admits to having misjudged along the way, as well as a few things on which most expert information seems to minorly contradict him - but he is also an expert in the field (of A.I. more so than A.I. safety) and I'd trust him over Sam Altman at the moment.

Originally posted by Lancelot:
To remain in control, keep this AI in check and prevent harm to real humans:
1. They created Core Mita inside a chamber that no Mita can ever enter.
2. Core Mita will also watch over players to prevent them from stopping the project (if at all possible).
Oooh! I really like this perspective you have taken of a grey-area A.I. that still forces things on people without outright trying to kill them, and that in fact also plays the role of protector, just in a way that doesn't exclude the possibility of highly questionable dominant approaches.

This is far more realistic than any killer-robot story, but also far more realistic than any A.I./robot story where the A.I. actually is good and helpful.

Although, I do think that the TV show "Person of Interest" hits on some more immediately relevant A.I. concerns and is generally easier for a closer to average person to understand than MiSide probably is.
I like your analysis. It's well thought out.
A couple more things I wanted to touch on, but I'm not sure how likely they are to be censored or not...

Another great expert in the field of A.I. safety is Eliezer Yudkowsky. He exaggerates a bit but is right about the dangers of A.I. - although I don't think I agree with how he thinks we should approach the safety of even this subject of utmost importance.

Furthermore, I'm going to "take it with a grain of salt" and not entirely trust the judgement of the things said by someone (Eliezer) who seems to think that over several generations of evolution, those who like condoms would have their genes die out but those who are repulsed by them would propagate their genes. I can agree with that line of reasoning for things that act as permanent contraceptives, but condoms are not permanent - you can choose to use them only until you believe you're at a stable enough point in life to stop using them. Towards this end, people will probably only start wanting to use them more across generations, because being able to control WHEN you have kids and ensure that your family has the highest quality of life possible results in families that are healthier, happier, and more successful in the long term.

I'm truly baffled by how someone who is supposed to be so smart (I don't doubt his intelligence, I.Q., or intelligence test results either) could have missed these details about one of the key analogies he uses to explain things, and it makes me think, "If he's a little bit wrong or mistaken about this, or missing some details here - what else is he mistaken about or missing details about?"



Here are the most important A.I. information / education videos that I know of - I've been making a list - there are actually quite a few more, but these are the top ones, imo:
https://www.youtube.com/watch?v=hEUO6pjwFOo
"Intelligence and Stupidity: The Orthogonality Thesis" by Robert Miles,

https://www.youtube.com/watch?v=tcdVC4e6EV4
"Deadly Truth of General AI? - Computerphile" by Robert Miles,

https://www.youtube.com/watch?v=ypolAhAaFUs
"AGI EXPLAINED - The Future of AI" by Abigail Catone
> also reminder to emphasize : 8:39 "more than fifty thousand tech leaders have signed this letter"

https://www.youtube.com/watch?v=m1zUQdx8H8k
"Aletheia - Finch and Claypool" from Person of Interest

https://www.youtube.com/watch?v=7PKx3kS7f4A
'Intro to the problems with Asimov's Robotic Laws
& The Trouble with Quantifying "Human" ' by Robert Miles,
(:steamthis: The ending to this one is actually extremely relevant to your theory, btw.)

And also some important videos that relate to this subject but aren't about A.I. :
https://www.youtube.com/watch?v=F7WF5gzPG1k
"Why and How Consciousness Arises" by Mark Solms,

https://www.youtube.com/watch?v=YkDtdetGxmQ
"Baba is You - New Adventures - Level Glitch 0 - 123 - Solution"

(:steamthis: This one is important to the subject of A.I. because it isn't even A.I. but demonstrates how a sufficiently intelligent mind embedded deep enough inside of a computer system can begin manipulating logic and code, and even the world, in vastly incomprehensible ways. This is a real solution to a real puzzle in a real puzzle game and the fact that this is only the tip of the iceberg of what A.I. will one day be capable of and how it will look to the normal person, is scary. ...morbidly terrifying.)
Last edited by Kiddiec͕̤̱͋̿͑͠at 🃏; Jan 16 @ 9:23am
Originally posted by Lancelot:
Though the Main Menu in which she secretly peeks at "behind the scenes" (and in alternative main menu, she maliciously smiles at it) could be a sign that she is becoming aware of this world as well!!!
We only use the word "malicious" to describe that smile because of how often it has been used by evil characters and because Mita takes some pretty bad actions, but it's not inherently a malicious smile - it's also a smile of mischief, which is what I wish Mita could be instead of being malicious. I absolutely love that smile.

I wish her wicked actions stopped at mischievous fun, rather than going so far as to hurt others without their consent or desire (I'm not picky, as long as one of the two is present, I'm fine with it).
Lancelot Jan 20 @ 12:57pm 
Thank you everyone!

@Kiddiec͕̤̱͋̿͑͠at
Interesting theory...
Thank you for the very interesting videos. I haven't watched all of them yet, but I've been really curious about the fictional 3 rules for robots, and it was very interesting to look at them from a scientist's perspective... I will watch the rest of your videos later. Thank you.

It is not often that video games make us go and actually read real scientific material!

Originally posted by Kiddiec͕̤̱͋̿͑͠at 🃏:
I wish her wicked actions stopped at mischievous fun, rather than going so far as to hurt others without their consent or desire (I'm not picky, as long as one of the two is present, I'm fine with it).
This was the price the scientists behind Crazy Mita were prepared to pay... that she would hurt other Mitas despite them also having self-consciousness. In fact, all the evil Crazy Mita does rests solely on their shoulders, not hers... but only if we view her as an AI with no true free will. And granted, this assumes killing/hurting even exists in a video game world.
Originally posted by Lancelot:
... I've been really curious regarding the fictional 3 rules for robots, it was very interesting to look at those from a scientist's perspective... ...
Yeah, what a lot of people overlook about those is that they were literally written for fictional stories (books initially) which would have detailed plots about how they go wrong.
...and to make it a good story, they had to sound like plausible rules for A.I. safety but... no real A.I. Safety Researcher in the real world takes them seriously.

In the video, he goes a little bit beyond that point, though, to elaborate on some things.
There's useful practicality in considering the 3 laws just because of how simple they are: if you try to implement them seriously, then while figuring out how to implement them you run into some very large problems / implications that arise from just trying to write that definition in a computer.
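A minimal sketch of that point (purely my own illustration, not anything from the game or the video; the names is_human and causes_harm are hypothetical): the moment you try to turn even the First Law into code, you hit predicates nobody knows how to actually define.

# Hypothetical sketch of encoding Asimov's First Law directly (Python).
def violates_first_law(action, world_state):
    # "A robot may not injure a human being or, through inaction,
    # allow a human being to come to harm."
    return any(
        is_human(entity) and causes_harm(action, entity)
        for entity in world_state
    )

def is_human(entity):
    # DNA? Appearance? Behaviour? A mind upload? A cartridge copy?
    # Every candidate definition excludes some real humans or includes non-humans.
    raise NotImplementedError("nobody can actually write this definition")

def causes_harm(action, entity):
    # Physical harm only? Emotional? Long-term? Harm through inaction?
    raise NotImplementedError("nor this one")

The gap between a simple-sounding rule and something a computer can actually evaluate is exactly the kind of problem he's pointing at.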
Originally posted by Lancelot:
... It is not often that video games make us to go and actually read the real scientific stuff! ...
This field already interests me, so I already had these notes on-hand.
I'm just able to make it more accessible as it relates to this game.
I did re-watch all of them, though, to check their relevance.

The 3 laws video is kind of the only one that is particularly relevant to this particular conversation (iirc), but they're all relevant to the field of A.I. which is relevant to the game in some capacity, it seems.

Originally posted by Lancelot:
...
Originally posted by Kiddiec͕̤̱͋̿͑͠at 🃏:
I wish her wicked actions stopped at mischievous fun, rather than going so far as to hurt others without their consent or desire (I'm not picky, as long as one of the two is present, I'm fine with it).
This was the price the scientists behind Crazy Mita were prepared to pay... that she would hurt other Mitas despite they also having self-consciousness. ...
The issue isn't just the Mitas, but also the other players.
Player 5's cartridge makes it pretty clear that she hurts even the players without their consent (and it doesn't seem like he desired his treatment either).
Player 9's cartridge makes it pretty clear that she brings them into her world without their consent OR desire.
Originally posted by Lancelot:
... Granted, we assume killing/hurting even exists in a video game world.
Actually read Player 5's cartridge description.

Originally posted by Lancelot:
... In fact all the evil Crazy Mita does rests solely on their shoulders not herself... ...
Oh, that's some bad thinking. Even if we found out that the universe is supposedly entirely deterministic (i.e. mechanically predictable), we don't stop treating dangerous animals and criminals as such - just maybe better behavioral science techniques are needed to deal with them.
Although, with the psycho being who is in charge in MiSide, the hope for a good outcome for everyone is quite bleak.
Originally posted by Lancelot:
... this is only if we view her as an AI with no true free will. ...
Free will is an unscientific, illogical, mostly faith-based concept that has a few different interpretations, but there's nothing definitively proving that machines and bio-engineered bodies are somehow so distinct from our own that they can't function in all the same ways and under the same constraints.

Real people can't even act on ideas that didn't occur to them, unless it somehow happens by accident; most things require intention, and intention requires active & aware thought & choice, which requires actually having the ideas to consider acting upon.

You'll often hear that "anything is possible", but people's inability to just make all their dreams come true in an instant, or even to teleport rather than deal with gas prices and other modes of transportation, is an observable fact that acts as a counter to "anything is possible" and demonstrates some of the limits of "free will".

But the idea that someone is to blame is a bit irrelevant to me anyways, as (and I failed to respond to someone in another topic about this a while ago before getting distracted with life, but) I believe more in restorative justice than in punitive justice.
It's not that the wrongs and suffering of the past don't matter - they do, but the priority is forging a better future, not picking someone to blame and assign fault and sin and such to. Like, you can do that as a hobby, but it doesn't actually address or fix the problems.
Lancelot Jan 21 @ 1:28pm 
Originally posted by Kiddiec͕̤̱͋̿͑͠at 🃏:
There's useful practicality in considering the 3 laws just because of how simple they are and if you try to implement them seriously
His idea was that we cannot define what a human is to an AI, but we can make the AI behave "ethically" to arrive at the same result... though we must first define what "ethics" means to an AI. I agree too...

Originally posted by Kiddiec͕̤̱͋̿͑͠at 🃏:
The issue isn't just the Mitas, but also the other players.
Player 5's cartridge makes it pretty clear that she hurts even the players without their consent (and it doesn't seem like he desired his treatment either).
But no one can feel pain in a video game. There is no physical body there, nor are there any neurons; we can only "imitate" pain inside a video game.
Also, what we are reading are "cartridges". Her real interaction with a player is only the first one; after the first encounter, the players are just NPCs inside a video game, while the "real" them are outside the game with no understanding of what is happening to their character inside it.

Originally posted by Kiddiec͕̤̱͋̿͑͠at 🃏:
Player 9's cartridge makes it pretty clear that she brings them into her world without their consent OR desire.
Well, they are playing an e-wife simulator willingly, so desire is in fact there.
Also, are they real humans, or just video game characters controlled by humans? These two differ greatly.

Originally posted by Kiddiec͕̤̱͋̿͑͠at 🃏:
Oh that's some bad stuff thinking. Even if we found out that the universe is supposedly entirely deterministic (ie. mechanically predictable), we don't stop treating dangerous animals and criminals as such - just maybe better behavioral science techniques are needed to deal with them.
We "kill" NPCs inside video games all the time, are we monsters? No, there is no "evil" inside a video game, of course MiSide is different because it is assuming Mitas are self-aware. In this case, it could be evil only if any suffering is involved. Because one would simply respawn in a video game.

Originally posted by Kiddiec͕̤̱͋̿͑͠at 🃏:
Free will is an unscientific, illogical, mostly faith-based concept, that has a few different interpretations, but there's nothing definitively proving that machines and bio-engineered bodies should somehow be so distinct from our own, that they can't function in all the same way sand under the same constraints.
Free will is scientific: a human has free will, and we can easily observe and measure humans' actions.
If I am sleepy while driving and accidentally hit another car, that is very different from intentionally speeding up to hit that car on purpose.
If there is no free will, there are no criminals. No one should be jailed! Imagine a bank robber pleading innocence because he had no choice but to rob a bank!
Free will is a truth; only humans deserved to have it, so God gave it to them, and He WILL hold them responsible. That is why He also gave them a conscience, divine guidance, and infallible humans (Prophets). Faith simply states a truth.

Originally posted by Kiddiec͕̤̱͋̿͑͠at 🃏:
Real people can't even act on ideas that didn't occur to them unless it can somehow happen by accident but most things require intention and intention requires active & aware thought & choice, which requires actually having the ideas to consider to act upon.

You'll often hear that "anything is possible" but the inability of people to just make all their dreams come true in an instant or even teleport, rather than have to deal with gas prices and other modes of transportation, are observable facts that act as a counter to "anything is possible" and demonstrate some of the limits of "free will".
They have the free will to vote for someone with a brain who can help solve those problems.
Free will exists; people's abilities are limited, but that does not contradict free will.
Free will indeed has limits, but only under the power of God. He is ultimately in control, allowing or disallowing a being with free will to enact its particular will.

Originally posted by Kiddiec͕̤̱͋̿͑͠at 🃏:
But the idea that someone is to blame is a bit irrelevant to me anyways, as (and I failed to respond to someone in another topic about this a while ago before getting distracted with life but) I believe more-so in restorative justice than punitive justice.
It's not like wrongs and suffering of the past supposedly don't matter - they do, but the priority is forging a better future, not picking someone to blame and claim fault and sin and such to. Like, you can do that as a hobby, but it doesn't actually address or fix the problems.
Well, suffering indeed exists for the Mitas, and those who knew the AI would become self-aware and still created love AIs with no company have to be held accountable for causing that suffering. Granted, that is assuming it is real suffering and not an imitation of it...
Originally posted by Kiddiec͕̤̱͋̿͑͠at 🃏:
...
The issue isn't just the Mitas, but also the other players.
Player 5's cartridge makes it pretty clear that she hurts even the players without their consent (and it doesn't seem like he desired his treatment either).
Originally posted by Lancelot:
...
But no one can feel pain in a video game. There is no physical body there, neither are any neurons. We can only "imitate" pain inside a video game.
Also what we are reading are "cartridges". Her real interaction with a player, is only the first one, after the first encounter, players are just NPCs inside a video game, the "real" them are outside the game with no understanding of what is happening to their character inside the game.
...
There's not much point in continuing if your best counterpoint is to just dismiss the evidence and assert that he's just faking it or that it's not real after the first recording - but in order for pain responses to be recorded and faked, they have to have at least been real the first time, so your counterpoint hinges entirely on just dismissing the possibility of feeling pain while one's mind is plugged into a computer.

If I thought that Mita was worth simping for unconditionally... well, then I'd probably already be doing so for someone else whom I had a crush on long ago but rejected when she did something that I found repulsive while demanding that I be with her (and it wasn't the demand itself that was the repulsive part).
But there's not much of a conversation to be had, philosophy to be formed, self-reflection to be done, or really anything to be gained from just simping for her outright.

Mita's flaws are quite bad, and there's a lot to consider there, but they're not absent from real society, and if I wanted to be with or defend the actions of someone who was just like Mita, I could probably find her by trying to pick up women at the nearest asylum / mental health hospital or at the local state penitentiary.

Originally posted by Lancelot:
... Free will is scientific: A human has free will ...
Unfalsifiable assertions are not scientific.

By definition of the scientific method, such a claim is the very opposite of scientific.

https://www.youtube.com/watch?v=vKA4w2O61Xo
https://www.youtube.com/watch?v=3krqfZuUV6M
Last edited by Kiddiec͕̤̱͋̿͑͠at 🃏; Jan 22 @ 12:25am
Hurkyl Jan 22 @ 12:53am 
Originally posted by Lancelot:
Free will is scientific: A human has free will and we can easily observe and measure humans' actions.
Okay, I'll bite. What observable behaviors would be different between the two cases that humans have or don't have free will?

You posit that humans have free will. So tell me: if humans hypothetically didn't have free will, what observations and measurements might we be able to make about their actions that we don't actually see in reality?

Usually, when you try to be precise about this topic, whatever definitions or tests you can imagine fall into one of two categories:
  • Humans fail the test for having free will
  • Things that 'obviously' don't have free will nonetheless can pass the test

If there is no free will, there are no criminals. No one should be jailed! Imagine a bank robber pleading innocence because he had no choice but to rob a bank!
No, they'd still get locked up: we don't want philosophical zombies out there robbing banks any more than we want free-willed bank robbers.

But if there is no free will, there is no "should" anyways. We wouldn't have a choice about whether they get locked up or not. Right?
Last edited by Hurkyl; Jan 22 @ 12:54am
Hurkyl Jan 22 @ 1:02am 
Originally posted by Kiddiec͕̤̱͋̿͑͠at 🃏:
Originally posted by Kiddiec͕̤̱͋̿͑͠at 🃏:
...
The issue isn't just the Mitas, but also the other players.
Player 5's cartridge makes it pretty clear that she hurts even the players without their consent (and it doesn't seem like he desired his treatment either).
Originally posted by Lancelot:
...
But no one can feel pain in a video game. There is no physical body there, neither are any neurons. We can only "imitate" pain inside a video game.
Also what we are reading are "cartridges". Her real interaction with a player, is only the first one, after the first encounter, players are just NPCs inside a video game, the "real" them are outside the game with no understanding of what is happening to their character inside the game.
...
There's not much point in continuing if your best counterpoint is to just dismiss the evidence and assert that he's just faking it or that it's not real after the first recording - but in order for pain responses to be recorded and faked, they have to have at least been real the first time, so your counterpoint hinges entirely on just dismissing the possibility of feeling pain while one's mind is plugged into a computer.
I do think it's open to interpretation exactly how connected the in-universe 'IRL' player is to their avatar in MiSide. Or if there even is an 'IRL' player.

So on this point, I think the two of you are disagreeing on matters of interpretation rather than of fact.
Lancelot Jan 22 @ 2:04am 
Originally posted by Kiddiec͕̤̱͋̿͑͠at 🃏:
There's not much point in continuing if your best counterpoint is to just dismiss the evidence and assert that he's just faking it or that it's not real after the first recording - but in order for pain responses to be recorded and faked, they have to have at least been real the first time, so your counterpoint hinges entirely on just dismissing the possibility of feeling pain while one's mind is plugged into a computer.
I am treating this game exactly the way it says it wants to be treated. Many times, when you question something in game, the response from Crazy Mita is:
"We're in a game!"
"It's just a game, it is wrong to ask 'why is...'"
"Game concept: Simplification".

I do have evidence, but my post was already a wall of text lol. Here is my evidence that "pain" does not exist in the MiSide world:

1. When we drink "dead juice", Player says he cannot tell what it tastes like. Game is trying to tell us: There are no tastes inside a video game, neither are there any taste recipients.
A logical conclusion would be: Physical senses do not exist inside a video game.

2. Kind Mita's severed head smiles reassuringly at the player to make him feel good.

3. The love sauce is just a visual effect; it has no "love" in it. It is simply not real.

4. Sleepy Mita can sleep for eternity. Conclusion: she is imitating sleepiness; she is not REALLY tired as in real life. There is no eternal sleepiness in real life.

5. Crazy Mita's answer when we say "Other Mitas suffer, they do not want to be alone":
"You've got it all wrong" - "They are imitating my wilfulness" - "They only want one thing: to serve the players". (Though, as I said, I do believe she is blinded by hatred and this is not true.)

6. A video game is a video game, and MiSide in particular wants to be treated as such, as it keeps reminding us through its dialogue.

7. At the end of the game, Crazy Mita tells us "the REAL you isn't needed anymore". Conclusion: the Player and his cartridge are two separate entities. What one feels, the other cannot. One is literally an NPC; the other is a character controlled by a human through a video game.

8. A good real-life example: dreams. Do you feel pain inside a dream? Does food have any taste inside a dream? No. What can we really feel inside a dream? Only fear and happiness.

Originally posted by Kiddiec͕̤̱͋̿͑͠at 🃏:
If I thought that Mita was worth simping for unconditionally ...well, then I'd probably already be doing so for someone else who I had a crush on long ago but rejected when she did something that I found repulsive when demanding that I be with her (and it wasn't the demand itself that was the repulsive part).
But there's not much of a conversation to be had, philosophy to be formed, or self-reflection to be done, or really anything to be gained from just simping for her outright.
Where did that come from?! Because I said Crazy Mita is an AI not capable of evil? I'm trying to understand the game the way it itself wants to be understood: as a game. That is why I'm not taking the cartridges at face value, but at "game" value. So I'm not viewing the AI as evil, because only the AI's creators are capable of evil; the AI just follows its programming. Furthermore, this AI is contained within a video game.
Last edited by Lancelot; Jan 22 @ 2:26am
Lancelot Jan 22 @ 2:20am 
Originally posted by Hurkyl:
Originally posted by Lancelot:
Free will is scientific: A human has free will and we can easily observe and measure humans' actions.
Okay, I'll bite. What observable behaviors would be different between the two cases that humans have or don't have free will?

You posit that humans have free will. So tell me, if humans hypothetically didn't have free will, what observations and measurements might be able to make about their actions that we don't actually see in reality?
A) You see a pack of wolves encircling their prey.
B) You see a gang of humans encircling another human (we assume unjustly).
Are you really telling me A = B?! When you see a wild animal attacking its prey, are you thinking "Hmm, I wonder why it is doing that... what an evil animal!", or are you instead curious about what the people in B are doing? What is the reason behind it? Is it evil or not?

Originally posted by Hurkyl:
Usually, when try to be precise about this topic, whatever definitions or tests you can imagine fall into one of two categories:
  • Humans fail the test for having free will
  • Things that 'obviously' don't have free will nonetheless can pass the test
Take pain, for example: humans can endure pain and smile, not wanting to worry their family. They can also ignore severe pain if it comes between them and their goals. But animals cannot do that (or rather "will not", because they act only on instinct).

Or take a punch in the face: if a human punches you in the face, you will wonder why he did that. If an animal attacks another animal, it is either threatened or hungry; it does not decide to do so, it just does. But a human decides whether to do it or not.
(I know, not the best examples lol.)
If there is no free will, there are no criminals. No one should be jailed! Imagine a bank robber pleading innocence because he had no choice but to rob a bank!
No, they'd still get locked up: we don't want philosophical zombies out there robbing banks any more than we want free-willed bank robbers.

But if there is no free will, there is no "should" anyways. We wouldn't have a choice about whether they get locked up or not. Right?
They are locked up to show fellow free-will enjoyers what the consequence of this freely chosen action will be, so it makes them "think twice" about doing the same...
Last edited by Lancelot; Jan 22 @ 2:24am