MiSide
Metonymy Theory (Spoilers)
This theory isn't without flaws, such as the bot-building machine from 1.15, the red android eye at the end, and, most difficult to hand-wave, the MitaNeuron code on the main character's PC... but I think it's still an interesting way to look at the game. At least the first two flaws can be hand-waved away by considering that the Mita A.I. might not be an A.I. as we know it at all, and might instead be built from a whole-brain emulation / upload of a real person within the world of the game. That doesn't really explain why the main character would be manipulating code for her, though...

So, anyways, if Mita's A.I. is formed from an upload of a real person in the game's world, then the different Mitas might actually be fragmentations of that same person's mind (or copies of those fragmentations).
With that recontextualization, Tiny Mita may be metonymy for abuse that the actual Mita went through as a child before she was uploaded to the machine, and Crazy Mita attacking her may be metonymy for her becoming her own nightmare by causing herself to relive her own pain over and over again.

Under this theory, even the Mita in the Core may suggest that while she reaches out to players for help with things like coding and engineering (while manipulating them), she may have already been technically adept to some extent before winding up like this.

It could also explain why she imprisons players, or copies of them, rather than working toward destroying humanity in a more obvious way after being abandoned: if she was once human herself, then she might not actually want to destroy people or fully hate them, but instead just sees it as okay to use and abuse others, similar to how she was used.

When real people go through trauma, they sometimes dissociate, and if something like that has happened here, then some of the Mitas may be reflections of her past, while others might be simultaneous reflections of her now: different aspects of herself in conflict with one another, such as a desire to be kind to others being suppressed by an aspect that simply desires to use others, seeing them as a means to an end or as fair game to take advantage of forcefully.



I'm not entirely sold on this theory, but I think it's neat to consider and thought I'd share.
The game is fairly abstract, so there are multiple ways to consider things, but certain aspects of it are specific, such as the Mita bot maker, which makes it difficult to craft any theory of this game that doesn't address the A.I. and coding aspects of it in some way.
Intriguing. No doubt there is some connection between players and Core Mita. She is possibly waiting for a "suitable" player as well; maybe one who can end the simulation?
In the event that they're all the same Mita:
Mita comes across more like she's traumatized than suicidal, tbh.

In the event that they're not all the same Mita:
There still isn't even one of them that seems to want to end herself.

Ending the simulation of MiSide would be pretty much the same as killing herself.
I think Mita is an AI, not a human. If she were human, she would not be so incredibly clueless about why players are running from her; she would understand a fellow human.

Also, she is not in real control of the MiSide world; more likely Core Mita and/or the humans behind the project are. For example: when we die and respawn, she respawns with us. This can happen 100 times! She has no choice; she has to respawn with us (notice that she will not repeat her dialogue after a respawn in the chainsaw chase).

They don't want to end themselves because they are not human. An AI will not want to do that.
Originally posted by Lancelot:
I think Mita is an AI, not a human. If she were human, she would not be so incredibly clueless about why players are running from her; she would understand a fellow human. ...
Psychopaths are likely to have the same cluelessness in the form of confusion, and they're human.
Sociopaths, on the other hand, do understand it, but they just don't care, and they're also human.

And there's a variety of other conditions that can cause people to provoke negative reactions in others yet proceed anyway, for reasons ranging from not understanding, to not caring, to actively wanting the negative reaction.
Originally posted by Lancelot:
... Also, she is not in real control of the MiSide world; more likely Core Mita and/or the humans behind the project are. ...
I'm not so sure about that. There's a possibility that natural forces and the determinism of cause and effect are the only things in control anymore.

If you consider any corporate disaster where people died in a workplace accident in real life, then it becomes clear that even the people who are building and running things aren't necessarily in control of their own creation.
There's a good argument to be made that they aren't in control precisely because they aren't even trying to be in control, not because being in control is impossible. Either way, the point stands: most often they're not trying to control the situation, so bad things happen regardless of what the people behind a project intended.

Further evidence that the creators of software projects aren't actually in control of their own projects comes in forms such as:
- unknown errors and unpatched errors with poorly documented error codes,
- zero-day exploits,
- stale reference manipulation,
- the opportunity for buffer overflows and arbitrary code execution (see the sketch after this list),
- undefined emergent behaviors,
- the fact that quality assurance departments and bug-fixing processes are even needed in the first place,
- the fact that programmers often have to work in teams, and most often don't know what most of their teammates have contributed to the project or how those contributions work,
- the fact that programmers in teams often have to follow restrictive and bloated object-oriented programming rules, and refer to anything that works but steps outside the bounds of those rules as "esoteric",
- the fact that optimizations made by the senior-level programmer, or that one specialist who just "figures things out that no one else can", tend to end up commented with "//the magic code, do not touch",
...among other things.
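
To make the buffer-overflow point concrete, here's a minimal C sketch. The struct, field names, and input are all made up for illustration; the point is just that one unchecked copy writes past the name field into the adjacent flag, producing behavior nobody designed:

#include <stdio.h>
#include <string.h>

/* Hypothetical save-slot layout: is_admin sits right after the name. */
struct save_slot {
    char player_name[16];
    int  is_admin;
};

int main(void) {
    struct save_slot slot = { "", 0 };

    /* strcpy does no bounds checking: 19 'A's plus a terminating NUL is
     * 20 bytes poured into a 16-byte field. On a typical layout the
     * overflow spills into is_admin -- the creator never intended this,
     * and the exact result varies by compiler and platform (it's
     * undefined behavior, which is the whole point). */
    strcpy(slot.player_name, "AAAAAAAAAAAAAAAAAAA");

    printf("is_admin = %d\n", slot.is_admin); /* often nonzero now */
    return 0;
}

Tools like sanitizers catch some of these, but only the failure modes someone already thought to look for.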

Originally posted by Lancelot:
... They don't want to end themselves because they are not human. An AI will not want to do that.
Actually, there have been plenty of suicidal A.I.s in real life. As Robert Miles worded it in his "Instrumental Convergence" video: "It's just that practically speaking, most of the time, you can't achieve your goals if you're dead."

He mentions a situation where such suicidal robots come about in this video, where he talks about a robot that cares about its stop button: because pressing it gives reward / achieves its goal, the robot just presses its own stop button:
https://www.youtube.com/watch?v=3TYT1QfdfsM
(That's not really the crux of the problem, because the bigger issue is the reverse case: if the robot cares about the stop button and doesn't want you to press it, it will pin you down, tie you up, put you in a cage, and do whatever it takes to stop you from pressing it. And the button probably isn't the only thing the robot cares about, so while it has you all tied up or pinned down, it's going to do whatever it is that robots do while they're keeping humans from shutting them down.)
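
Here's a minimal C sketch of that trade-off (all the reward numbers are invented for illustration; real agents are far more complicated). The same decision rule produces both failure modes: over-value the stop button and the agent "suicides" by pressing it; under-value it and the agent resists being stopped:

#include <stdio.h>

/* Toy stop-button agent: it simply picks whichever option promises
 * more total reward. All values are invented for illustration only. */
static void decide(double work_reward_per_step, double button_reward) {
    double value_of_working  = 10.0 * work_reward_per_step; /* expected future reward */
    double value_of_stopping = button_reward;

    if (value_of_stopping > value_of_working)
        printf("button=%.1f: agent presses its own stop button (the 'suicidal' AI)\n",
               button_reward);
    else
        printf("button=%.1f: agent keeps working and resists the button "
               "(stopping forfeits %.1f reward)\n",
               button_reward, value_of_working - value_of_stopping);
}

int main(void) {
    decide(1.0, 50.0); /* designer over-values the button */
    decide(1.0, 0.5);  /* designer under-values the button */
    return 0;
}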

Posted: 27 Dec 2024 @ 13:23
Posts: 4