Much like the Samaritan A.I. from a show that I like, the point isn't to see what the alignment of the A.I. is, just to see that the A.I. is misaligned with human values.
In the case of that TV show I mentioned, it's about the A.I. alignment problem, which is one of many concerns the show portrays, along with data privacy, the fact that data brokers aren't regulated, and how easy everything is for some people to hack into - not because they're really good hackers, but because our security is just that bad.
In the case of MiSide, it's about inviting players and observers to do some self-reflection on what values and behaviors are actually important in a relationship - what it takes for a relationship partner to be aligned with you.
It would probably be a better game if we had more options and could learn her motivation, but I guess it depends on whether the writing for her motivation is flimsy or not.
It's already a pretty good game as is; simply realizing that she's misaligned with anything that would actually be healthy for us, let alone anything that would actually be enjoyable, wanted, or consensual, is enough. Oh, you might enjoy some of it, but she isn't exactly getting your consent for what she chooses to do, nor is she fulfilling your fantasies / desires.
However, I wouldn't kill Crazy Mita. If I were writing the story, I would have continued it this way: MC is rescued by the other Mitas after they notice, again, that Crazy Mita is no good. Or maybe even by accident, or by another Player. Or even by us, the real Player, since if I recall correctly, Crazy Mita says something like: "the real you is no longer needed", which I think refers to us, but I could be wrong.
Then, MC would have to find a way to assign Crazy Mita an index and give her a world. That way we could just trap her there the same way she did to the other Mitas, without deleting her memories or resetting her. She would get a taste of her own medicine, and there could be a sequel if she finds a way to escape.
In my favorite TV show, Person of Interest, this is balanced out by there being TWO super-intelligent A.I. systems: one which is properly aligned (or pretty close to it, anyway) and one which is not. That's how the plot is able to continue even after the disaster occurs, because the good A.I. is helping shield people from the damage done by the bad A.I. On top of that, the bad A.I. also uses humans as assets and outright admits that it can't get rid of them "yet" because it still needs them. So instead of outright destroying humanity, it optimizes the efficiency of human production where humans are assets to it, while systematically eliminating everyone who stands in its way (at least those the good A.I. fails to shield) - whether because they're a perpetrator disrupting the productivity of society, or an inconvenient victim of another human who just happened to get in the way with the unproductive trauma left by their victimhood.
Related, with a quote :
https://www.youtube.com/watch?v=nKJlF-olKmg
[8:38] : "I guess my main point is that these kinds of specification problems are not unusual and they're not silly mistakes being made by the programmers. This is sort of the default behavior that we should expect from machine learning systems. So coming up with systems that don't exhibit this kind of behavior seems to be an important research priority."
Also, there's this very short letter, only one sentence long, which has more expert signatures than the Manhattan Project did :
https://www.safe.ai/work/statement-on-ai-risk
It reads :
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
This isn't signed by a bunch of conspiracy theorists who don't know what they're talking about... it's signed by experts who are actually working in the field - many in A.I. safety, and some working in both A.I. safety AND A.I. development.
It's the people who know the most about these systems that are most worried about them.
I wish that these stories would do just a little bit more to address the technical details that the researchers have already documented and explained, because a broader audience really does need to understand these concepts. I understand that they can't go too deep and still have an interesting and engaging piece of media that doesn't feel like a classroom lecture, but they should go at least a little deeper into the explanation, with reasoning from actual researchers instead of some of the flawed reasoning that even Harold from Person of Interest uses. On the other hand... the flawed reasoning is useful too, exactly because it gives us something to discuss about why that reasoning is flawed and what better reasoning would look like.
Anyways, at the very least, if writers referenced more of the professional research, even if they don't just name-drop Stuart Russell and Geoffrey Hinton in mildly cringe and disconnected ways, I do think there'd be fewer times where people come away feeling like something that isn't such an unlikely outcome was a cheap plot device.
There is no such thing as a hard-logic A.I.
These 3 can tell you all about that :
https://www.youtube.com/watch?v=Rzhpf1Ai7Z4
https://www.youtube.com/watch?v=TRzBk_KuIaM
https://www.youtube.com/watch?v=hEUO6pjwFOo
So are LLMs, because while they're imperfect and always have machine-quirks, the current standard for training A.I.-related systems is to teach them to predict likely human behavior and then emulate it
(i.e. training them to copy human behavior as best as they can).
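To illustrate that "predict, then emulate" idea in the most stripped-down way I can - this is just a toy sketch, not how real LLMs are built, and the tiny corpus here is made up :
[code]
import random
from collections import defaultdict, Counter

# Toy "predict the next word" model: count which word tends to follow which,
# then generate text by sampling from those learned frequencies.
# (Real systems use neural networks over tokens, but the objective is the
# same in spirit: predict likely human output, then emulate it.)
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Training": tally how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def emulate(start, length=8):
    """Generate words by repeatedly predicting a likely next word."""
    word, output = start, [start]
    for _ in range(length):
        counts = following.get(word)
        if not counts:
            break
        word = random.choices(list(counts), weights=list(counts.values()))[0]
        output.append(word)
    return " ".join(output)

print(emulate("the"))
[/code]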
I'm not disagreeing; while I am not educated enough to really understand the dangers of AI, I still get the gist. My argument was more about the narrative use of it in stories. I feel like AI going rogue used to have a build-up or explanation, while now it's just a throw-away villain to use instead of making a compelling one. It's not a field I really know anything about, so I won't try to argue about it, but when I see it used in a narrative I feel like it's used as a quick solution to having a villain for the story. I guess I'm trying to say that everyone understands AI can be bad, so writers use that as an excuse to not explain why their AI is bad. Returning to MiSide, I don't feel like Crazy Mita's end explanation for her character is that she is just a malfunctioning AI. To me it feels like there is more to the story, more to explore, more to do, more to understand, but in the end we are just given a slap and sent off.
I may have to have a look into Person of Interest.
Edit.
I suppose my problem with the ending is that I liked the game. I was invested. I want to know more about the setting and the world it portrayed, to help the various Mitas. I liked the game enough to want more, and the ending didn't tick enough of those boxes for me to feel like I finished the game satisfactorily, so I'm left in a kind of limbo in regards to how I feel about it.
I'm just adding information that I think is relevant to the points that you brought up.
As for the dangers... I'm not educated enough either. The opportunities just aren't there for me to work in that field, and if there were opportunities in my life, it would probably be better for me to take the routes (if possible) where I do the things I'm a bit more passionate about than A.I. safety & development, such as ballet and learning music theory in a professional environment. But uhh... society isn't too kind to men who want to dance, and a situation has ensured that I can't get the music theory training or professional community workspace that I wanted either, so... I'm... kind of stuck. ...and miserable.
I sometimes wonder if this is just my place... trying to relay the information at a lower level to people who are maybe less involved with it than even me, and using cultural references to help people understand - essentially acting as a middleman between actual researchers who are fairly good at communicating the ideas in a more generally digestible form (i.e. Robert Miles) than the complicated thesis papers that his peers write... and uhh... the people who don't have some weird hyperfixation with trying to learn everything that they can about A.I. like I do.
My main point is that while the narrative use could be better, it has the fortune (unfortunate as that fortune is for the rest of us) of being accurate to the risk patterns in reality.
When that occurs, a story can get away with a certain lack of explanation, because the unfortunate reality is that these kinds of issues are kind of the default behavior of such systems. If it weren't the default behavior, then a backstory explaining how it got this way would be much more essential.
Are you saying AIs are a danger by default, according to these experts, because they think differently? That's not how it works, but I guess it depends on who is training their model, how, and what for.
In MiSide, I have a feeling that Crazy Mita is the way she is because she's been through some bad stuff. "Bad" as in making her think that doing the things she does doesn't matter and is fun. She's kind of an engineer, according to herself, so she may have learnt or guessed by herself the truth of her world. The fact that no one told her that the world she is in is just a game may have caused her to act the way she does, especially after being rejected by it.
After all, the other Mitas are AIs too and they try to help, with the exception of Ugly Mita, who was told lies by Crazy.
As I stated, it mostly simply comes down to value alignment - but value alignment isn't about how you think or even what you think - it's about what you prioritize, defend, and respect. (To simplify a bit.)
In addition to the specification gaming video above, which demonstrates some real-world examples of specification gaming, and the other 3 videos talking about bias and objectives (the orthogonality thesis is extremely important to the subject of alignment - not just in A.I. but between agents of any kind, including humans and animals, and even in international diplomacy),
here's some other material that elaborates on why this is the default :
1. "The A.I. does not love you nor does the A.i. hate you, but you are made of atoms, which it could use to make something else" - Eliezer Yudkowsky
2. Hypothetical / Analogy for understanding alignment :
Two serial killers who hunt and kill the same ways meet each other - despite the fact that they think the same way, they are not in alignment - one of them will kill the other.
3. Another hypothetical / Analogy for understanding alignment :
Two people of different political beliefs, from adversarial countries, with completely different hobbies and interests, manage to come together and make a baby and raise a family to old age without killing or betraying each other...
Why?
They have almost nothing in common... but they both value not using murder to solve their problems, communicating with each other, and respecting each other's feelings and dignity.
Their core values are aligned or closely-enough to being aligned that they can live in harmony. (I assure you this is a simplification. If it were this simple then the alignment problem wouldn't have so much literature about it that you can go looking for.)
4.
https://www.youtube.com/watch?v=ZeecOKBus3Q
https://www.youtube.com/watch?v=zkbPdEHEyEI
https://www.youtube.com/watch?v=tcdVC4e6EV4
We don't know that this is the case, hence why there are several theories and videos suggesting that all the Mitas are actually helping contribute to the cartridgization process by gathering player inputs and collecting data about how the player will act in various situations, as well as measuring how much empathy, thoughtfulness, creativity, and other varying attributes the player has.
...and then there's an EVEN GREATER AMOUNT of theories and videos talking about how Kind Mita may have been working with Crazy all along, or at least being impersonated by her after the basement scene, and noting that she's the only one who ever calls herself Kind Mita, among other things.
Deception is something people are highly concerned with, and not only is it common and a default behavior for more advanced A.I. systems, it is also a default behavior for many organic agents, such as other real people.
This is just a fact of reality, and an important one to consider in order to understand why people are distrustful and why evil characters tend to be accepted with less explanation than kind ones.
Honestly, I think we need more explanation for the motivations and behaviors of the other Mitas, because many are highly skeptical that their kindness is genuine, and for good reasons - one of which being that kindness and cooperation are an anomaly between intelligent creatures in nature. It is not clear what the other Mitas actually stand to gain from behaving this way, and while the orthogonality thesis could possibly explain why they're kind, that only shifts things back to wondering why Crazy isn't, because it doesn't make sense for the in-world developers to have somehow solved alignment yet also failed to realize that there's a misaligned instance wreaking all of this havoc.
One can be kind either via motivating factors or because one is just genuinely kind, but the latter entails an at least partial solution to alignment. One can also be well-intentioned and mean to be kind yet fail to do so, with those good intentions actually turning out to be harmful, simply by lacking consideration and the other aligning factors that would make kind intentions manifest as an actual reality.
It is far easier for kind actions to exist via motivating factors that don't entail actually caring about being kind, but rather things such as symbiotic relationships (fair trades), selfish or ulterior motives that just happen to benefit both parties, being kind conditionally just to get rewards, or kindness that is a deception.
I have trouble seeing the other Mitas as being on board, considering at least three of the Mitas are stated to be killed. If we simply say that their AIs accept this to reach their end goal and don't care about being reset, that kind of just halts further discussion.
Also, I think it's odd that they would need to interact with you and learn about you to turn you into a cartridge. You are in the game; everything about you should be data at this point, right? They would have everything they need already, wouldn't they? Crazy Mita is essentially just waiting for your files to be formatted into a specific type, right? To be converted into a cartridge.
It also comes back to Mita's end goal. Some speculate that the game is a loop, with Crazy Mita having a fixation on you, though I think this goes against some aspects of the game. Gathering information about your behavior and decision-making is only relevant if she were trying to make a realistic copy of you. She isn't trying to make a copy, though, and there's no suggestion that once you become a cartridge she will go back and interact with you again in the future. At least one of the other players' cartridges states this as a concern, saying that they hope the Mita doesn't forget about them after they become a cartridge.
Not saying that's the case, but it's certainly a possibility with a deep impact on the story if it is.
The conversion only makes sense if you're already in a simulation and being transferred as data (and literally kidnapped), rather than somehow being copied (whether in a simulation or via a real brain connected to the computer).
Presumably they need a whole measurement. The timer for the cartridge doesn't seem to tick up with actual time passing, but rather with the number of actions taken.
While it's true that all actions produce data, there are certain variables that are permanently lacking if you don't measure them in the first place. For example - and I'm kind of bad at explaining this analogy, but there's a video where the same thing is talked about - when a chess-playing A.I. or Pac-Man-playing A.I. etc. is trained, it needs to train on a variety of scenarios or against a variety of opponents, both good and bad. If your Pac-Man-playing A.I. only ever trains on data from the best players in the world, then it could conceivably wind up in a situation where it's about to get stuck in a corner and doesn't know what to do, because the best players never actually wind up in that situation within the training data. It therefore can't make a good decision and has no idea what to do, because it never trained on any data for this scenario.
It's the same thing when trying to make simulations but you're missing data about the thing that you're trying to simulate.
This video talks about the data-training problem that I was trying to describe with the Pac-Man hypothetical - it's actually very similar to what is discussed in the video :
https://youtu.be/9nktr1MgS-A
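If it helps, here's the same coverage problem as a toy sketch - the states and actions are completely made up, it's just meant to show how a policy learned only from expert play has nothing to say about a situation the experts never ended up in :
[code]
# Toy illustration of the training-coverage problem described above:
# a "policy" learned only from expert games has no answer for a state
# the experts never actually encountered.
expert_games = [
    {"state": "open_corridor", "action": "keep_moving"},
    {"state": "ghost_ahead",   "action": "turn_around"},
    {"state": "power_pellet",  "action": "chase_ghost"},
]

# "Training": memorize what the experts did in each state they saw.
policy = {example["state"]: example["action"] for example in expert_games}

def act(state):
    # The learned policy only covers states present in the training data.
    return policy.get(state, "no training data for this state - no idea what to do")

print(act("ghost_ahead"))        # -> turn_around
print(act("trapped_in_corner"))  # the experts never got stuck here -> no idea
[/code]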
About point 1, it's always like that, isn't it? When you meet someone or something, you don't love or hate that agent until you start receiving input from them, such as their shape, behaviour, etc. Your brain processes that info and produces a response based on memories, values, etc.
I bring this up because, in the game universe, Crazy and the other Mitas have feelings or, at least, they simulate them.
I both agree and disagree with the other Mitas working with Crazy. They could be working with her without realizing it, because Crazy needs to "scan" your behaviour and reasoning to turn you into a cartridge, like you said, Kiddiecat. But I really don't think Kind, Cappie, Mila, Tiny, etc. are working with her on purpose. I mean, what for? They gain nothing from doing so. Quite the opposite, really. I'm not saying they work with you because it's the right thing; it could be out of pity, necessity, etc.
I agree with admiraldtercs; if Kind was Crazy or was working with her, the writers certainly didn't do a good job giving subtle hints, which is something basic for good storytelling.
I didn't play in English, but Kind never called herself "Kind" in my first language.
Also, maybe something was lost in translation, but could someone please tell me: why does Crazy want you to stay, but when you hear the noises from the basement, she has a somber expression and asks herself: "Why is this happening again?"
To my understanding, it's because it's slowing her plan down to turn you into a cartridge, so I don't think Kind is working with her.
And is Core Mita the true/only Mita, with all the others being instances of her? I mean, in game development, you create an object which exists in the engine (Unity, Unreal, etc.) and then create instances of said object in a stage. That way, if an instance is deleted, like when you beat an enemy, the real thing is not deleted and you can recreate it. If you delete the object, the instances are deleted too.
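As a rough sketch of what I mean by object vs. instances (this is just an analogy in plain Python, not actual engine code, and the names are made up) :
[code]
import copy

# The "core" object acts as a template; the copies placed in the world
# are instances created from it.
class MitaTemplate:
    def __init__(self, name, traits):
        self.name = name
        self.traits = traits

core_mita = MitaTemplate("Core Mita", {"kind": True})

# Spawning instances copies the template's data into the "stage".
stage = [copy.deepcopy(core_mita) for _ in range(3)]

# Deleting an instance leaves the template untouched, so it can be respawned.
del stage[0]
stage.append(copy.deepcopy(core_mita))

# But if the template itself is deleted, no new instances can come from it.
del core_mita
[/code]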
The game flat out states that they are turning you into a cartridge, not making a copy, but once we start saying things stated in the game are lies we lose any footing. Three Mitas are killed, so we are told. We never see one die, and the other two we only see after the fact, so maybe they were never killed at all. We are told that a killed Mita is reset, but we have no evidence, and since we are already assuming other things stated could be lies, we have no reason to believe this is true. A killed Mita could just be replaced by a new one instead. We are told the conversion takes time, but we are already contemplating this being a lie without anything concrete suggesting otherwise, as we can't separate game progress from our interactions. We are told a general overview of how the worlds work within the game, but now that can be a lie. The idea of resetting a Mita from the core can now be taken as a lie as well, making the entire game even more pointless as far as what I am doing.
I know I am limiting myself, but I like to try taking some facts from the game rather than just assuming it's all a lie to try and build a narrative, as otherwise I feel like we can just twist the game to be anything. To make an extreme example, I could say we aren't in the game at all - that could also be a lie; we are just playing a character that's playing a character in the game trying to escape. At that point I could just keep adding layers onto it.
I know that's not the argument, and I'm probably coming across as dismissive or argumentative, but I feel that we need to accept some truth from the game unless something suggests otherwise, and I am not seeing the evidence to suggest that the other Mitas are in on it or that your conversion is based on interaction and not time. I could be wrong though.
You mean there is no evidence that what the game told us is a lie, so we should accept it's all true until proven otherwise, correct?
I agree. Sometimes people read too much into something, start speculating, and twist the original intention of the writer.
There is supposed to be a major update adding that other game mode, but I am assuming it's going to be mini-games and not a whole new route.