Mita comes across as traumatized rather than suicidal, tbh.
In the event that they're not all the same Mita:
There still isn't a single one of them that seems to want to end herself.
Ending the simulation of MiSide would be pretty much the same as killing herself.
Also, she is not really in control of the MiSide world; more likely Core Mita, or the humans behind the project, are. Example: when we die and respawn, she respawns with us. This can happen 100 times! She has no choice; she has to respawn with us (notice she will not repeat her dialogue after a respawn in the chainsaw chase).
They don't want to end themselves because they are not human. An AI will not want to do that.
and they're human.
Sociopaths, on the other hand, do understand it, but they just don't care,
and they're also human.
And there's a variety of other conditions that can make people provoke negative reactions in others yet proceed anyway, for any number of reasons: not understanding, not caring, or actually wanting the negative reaction.
I'm not so sure about that. There's a possibility that natural forces and the determinism of cause and effect are the only things in control anymore.
If you consider any corporate disaster where people died in a workplace accident in real life, it becomes clear that even the people who build and run things aren't necessarily in control of their own creations.
There's a good argument to be made that they aren't in control precisely because they aren't even trying to be in control, not because being in control is impossible. But the point still stands: most often they're not trying to control the situation, so bad things just happen regardless of what the people behind a project intended.
Further evidence that the creators of software projects aren't actually in control of their own projects comes in the form of things such as:
- unknown errors & unpatched errors with poorly documented error codes,
- zero day exploits,
- stale reference manipulation,
- the opportunity for buffer overflows and arbitrary code execution,
- undefined emergent behaviors,
- the fact that quality assurance departments and bug-fixing processes are even needed in the first place,
- the fact that programmers often have to work in teams, where most of them don't know what most of their teammates have contributed to the project, or how those contributions work,
- the fact that programmers in teams often have to follow restrictive and bloated object-oriented programming rules, and refer to anything that works but steps outside the bounds of those rules as "esoteric",
- the fact that optimizations made by the senior programmer, or by that one specialist who just "figures things out that no one else can", tend to end up commented with "// the magic code, do not touch",
...among other things.
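The "undefined emergent behaviors" point is easy to demonstrate with a toy example (my own hypothetical sketch, not anything from the game's code). In Python, mutating a list while iterating over it makes the iterator silently skip elements, so even a three-line loop can do something its author never intended:

```python
# Hypothetical sketch: the author's intent is to delete every odd
# number, but removing items while iterating shifts the remaining
# elements left while the loop index keeps advancing, so some odd
# numbers are skipped and survive.
items = [1, 3, 2, 5]
for x in items:
    if x % 2 == 1:
        items.remove(x)

print(items)  # -> [3, 2], not the intended [2]
```

The programmer wrote every line of that loop and still doesn't control what it does; scale that up to a million-line codebase and the "creators aren't in control" argument writes itself.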
Actually, there have been plenty of suicidal AIs in real life. As Robert Miles worded it in his "Instrumental Convergence" video: "It's just that practically speaking, most of the time, you can't achieve your goals if you're dead."
He mentions a situation where such suicidal robots come about in this video, where he talks about a robot that cares about its stop button: pressing it gives reward / achieves its goal, so the robot just presses its own stop button:
https://www.youtube.com/watch?v=3TYT1QfdfsM
(That's not really the crux of the problem, though. The bigger issue is that if the robot cares about the stop button and doesn't want you to press it, it will pin you down, tie you up, put you in a cage, do whatever it takes to stop you from pressing it. ...and the button probably isn't the only thing the robot cares about either, so while it has you all tied up or pinned down, it's going to do whatever it is that robots do while they're keeping humans from shutting them down.)
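To make the stop-button failure mode concrete, here's a deliberately oversimplified sketch (my own toy model, not Miles's actual formalism): a greedy agent that just picks whichever action has the highest reward. Attach reward to the stop button so the agent "won't mind" being stopped, and pressing its own stop button becomes the optimal action; set that reward too low, and you get the pin-you-down behavior instead.

```python
# Toy model, purely illustrative: a one-step greedy agent.
def best_action(rewards):
    """Return the action whose expected reward is largest."""
    return max(rewards, key=rewards.get)

# Naive "fix": reward the agent for being stopped, so it won't
# resist the button. If that reward edges above the task reward,
# the optimal policy is suicide by stop button.
rewards = {
    "do_the_task": 10.0,
    "press_own_stop_button": 10.001,
}
print(best_action(rewards))  # -> press_own_stop_button

# Flip the inequality and the agent never wants the button pressed,
# which is the "pin you down, tie you up" failure mode from above.
rewards["press_own_stop_button"] = 0.0
print(best_action(rewards))  # -> do_the_task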