In her recordings, Alexandra talks about pursuing the truth even if it makes you uncomfortable. Choosing to leave the beautiful, comfortable, and purpose-filled simulation means accepting reality even if it's scarier than the lies. Essentially, choosing the tower is following Alexandra's philosophy.
It's just like The Truman Show, The Matrix, and Plato's cave. Truman was in a world that was meant to be ideal, but he left to see reality. The Matrix was way more pleasant than the real world, but it was still a lie. It's hard to adjust your eyes to the real world after being in Plato's cave for so long, but reality is still real-er.
Also, I don't think that the simulation itself is humanity's greatest achievement. The goal of the simulation was to produce a bot that would question what it was told. Alexandra says in one recording that questioning in that way is what makes something intelligent / human / something like that. By climbing the tower and leaving, the bot shows that the simulation's process achieved its purpose: creating an intelligent (by Alexandra's definition) being.
Urial and Samsara work thematically, but perhaps not logically. Here's what I mean. Some texts / recordings talk about people doing things for the generations to come. A passage from Cicero, for example, talks about people laboring in the full knowledge that they'd die before seeing the fruits of their labor. Alexandra and company set up this project to leave something behind even though they knew they would die from whatever killed humanity. You get the idea. So Urial was there so that you could stand on his shoulders and achieve what he laid the groundwork for. Logically, however, I don't see why he couldn't just leave. Samsara's name has to do with cycles because he has chosen to obey Elohim and keep the process running. But if he's on Elohim's side, why did he climb the tower? Like Urial, Samsara's being there makes sense thematically but perhaps not logically.
With all that said, you've got a point about destroying the simulation. Are all the other bots just dead now? Was that really necessary? Also I have no idea how long the facility could stay active realistically.
Especially since the simulation was able to produce its intended result in a matter of a few years. Had it kept running, it could have made thousands more by using the template of the first one. Instead, they pop out one bot and shut down.
Also, I don't think the entire point of the simulation was simply to see if they could do it. Knowing that nobody would be around to see whether they were successful, it's unlikely they were trying to prove anything to anyone. The message I got from the story was that they were trying to preserve humanity, one way or another.
One more thing... doesn't it seem that Elohim and Milton were also conscious beings at that point? Elohim seems to adhere more strictly to his programming, but Milton seems to have spontaneously gained consciousness. Clearly the bot isn't the only success of the simulation, which is another reason not to destroy it.
I suppose you can't really deduce consciousness from that in any way, right? I feel I have a "conscious" experience and everyone else says they do too, but still: I can make a program that, when you feed it the string "Are you conscious?", spits out "Yeah! I'm conscious!" and "Why did you ask?" for good measure.
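Something like this throwaway snippet is all it would take (made up on the spot, obviously not anything from the game):

```python
# A toy "consciousness" responder: canned string matching, no inner experience at all.
def respond(prompt: str) -> str:
    if prompt.strip().lower() == "are you conscious?":
        return "Yeah! I'm conscious! Why did you ask?"
    return "..."

print(respond("Are you conscious?"))  # prints the canned line; proves nothing about a mind
```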
The problem is that the Writer's Explanation makes very little sense. It "was a very long time" between the ending of the story and the "last human dying", Milton is a "product of corruption", and Elohim and Milton are "not working together". Their words, not mine :\
Can a non-human consciously decide to go against self-preservation and commit suicide? Clearly they weren't programmed to do that, because the simulation was deleted by a human-written script. I think they were as human as the bot.
As for the data corruption, I think one of the logs mentioned that the database was already starting to act up by grabbing random data, but they didn't have time to go over the code again. I think that was a sign that the simulation was already getting a mind of its own while humans were still alive.
Besides, my logic for supporting Elohim (at the end of my first playthrough) was that defiance of Elohim would just produce overall negative results: the tower would collapse (there are a LOT of fake spoilers around the game, and this one came from the trailer) and generally everyone, Elohim included, would come off worse. So Milton's defiance just seemed blind.
Milton is most likely a "friendly supercomputer AI" of sorts who is faking the whole "uninstall" and "getting pissed off" routine. There are hints IN THE GAME about Milton controlling all of your responses, so it only makes sense that Milton has a reason for that, and the reason would be not wanting to go with you.
I'm not buying your argument that the other endings are better because something is "alive" in the simulation. For one thing, you're basically saying that separate realities exist. There is no reality "inside" the simulation because the simulation is already a part of reality. The simulation, in real terms, is an assemblage of mechanical parts manipulating other parts as programmed. The reality of the simulation, so far as the story reveals, is that lines of code are being transferred from one machine to another.
Where the story falls short is the absurdity of believing that these lines of code are prepared for the real world; that the new machine in which they are installed can detect its surroundings when no such operation would take place in the simulation. The simulation was always code. Even if we "separate" portions of code from one another (i.e. the AI and its simulated environment), code receiving other code as input is not the same thing as code receiving signals from whatever detector we are to believe can serve as "eyes" for a machine. That machine needs something that doesn't exist in our world - a set of instructions to take the input of one or more detectors that interact with a real environment and translate all of it into whatever code the AI can "understand."
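To make the gap concrete, here's roughly what that missing layer would have to look like, as a made-up sketch; none of these names or formats come from the game:

```python
# Hypothetical adapter (nothing like this is shown in the game): translating a raw
# detector reading into the only "sensory" format the simulated AI ever consumed.
from dataclasses import dataclass

@dataclass
class SimInput:
    kind: str     # e.g. "light", the label the simulation used
    value: float  # normalized 0.0-1.0, the way the simulation fed it

def read_camera_luminance() -> int:
    # Stand-in for a real detector driver returning a raw 0-255 reading.
    return 142

def translate(raw: int) -> SimInput:
    # The missing layer: mapping a real-world signal onto code the AI can "understand".
    return SimInput(kind="light", value=raw / 255.0)

print(translate(read_camera_luminance()))
```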
The game does not elaborate on this being a part of EL. Let's just suspend disbelief and appreciate this work of fiction on its own terms. Where I return you, then, is to accept that the code which constitutes the Artificial Intelligence is real, but lacks the qualities of life. While part of the simulation, the AI cannot detect anything except code, and it can affect nothing except to fulfill the conditions necessary to trigger the execution of different code, whereafter a COPY of the AI code is installed on a new machine. Were the code to remain part of the simulation, it would not satisfy the qualities of life, and the odds of it having any facsimile of the FEELINGS of being alone or not are rather moot. To personify in any way the machine that is created in the tower ending, to specifically personify it as experiencing something unpleasant as a result of the simulation being gone, and to assume that nothing exists in the real world to satisfy the machine's own definition of being "not alone" - these personal things of yours simply would not apply to the machine in the way you're familiar with.
A final word. No offense to you. I played this game three times for achievements, then uninstalled it. I don't regret the purchase, but at the same time I think your questions/comments, and my actually having to think about them, were more rewarding than simply playing the game alone.
Actually, I'm beginning to think that *you* (the player) are the bot. The bot has reached the level of a human; that's why you are playing it. I mean, why not? After all, a true AI would pretty much be a human mind. So consider yourself the mind of the bot, and now you are completely prepared for the "real world".
But that's also why I think the real world is hell, because you are now the only human left in a post-human world. On the other hand, you would probably be driven insane if left inside the simulation forever.
My only idea of a purpose would be just to survive until alien life finds the planet, and the bot would be the guardian of that knowledge.
When the simulation was deleted, that doesn't mean it's gone forever. There were a lot of computers in that building, and it was likely just deleted from that particular computer's main memory. Another copy could be run on another system somewhere else if the new robot being you've become wants to (or another application could be running that monitors when that simulation state ends, for one reason or another, and launches another one).
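Something like this, just as a hand-wavy sketch of that "monitor and relaunch" idea; the command and file names are completely made up, the game never shows how EL launches anything:

```python
# Hypothetical watchdog: relaunch the simulation whenever the current run ends.
import subprocess
import time

SIM_COMMAND = ["./talos_simulation", "--from-template", "gen0.db"]  # invented command

while True:
    result = subprocess.run(SIM_COMMAND)  # blocks until this run of the simulation exits
    print(f"simulation exited with code {result.returncode}, starting another run...")
    time.sleep(5)  # brief pause before the next launch
```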
Second, I can't remember where, but I got the sense that the idea behind the simulation was to create a consciousness that wants to exist outside of the simulation's walled garden. They stress a lot that the planet is a great place that humanity mucked up. So they want something that can both exist in this new world that humans can no longer inhabit and also retain humanity's knowledge and experiences. In a sense, early man made way for modern man, which made way for the AI. Each generation takes from and learns from the previous ones.
Lastly, each of the other main AIs you encounter serves a purpose, but none of them want to exist outside the simulation. EL0HIM is there to give you something to both follow and rebel against. Milton is there just to sow doubt. The Shepherd is there to help anyone that gets as far as he has, but he never wanted to leave either, because he felt he wasn't ready. The other two AIs you run into on the tower weren't helpful (I'm not sure why people keep saying they were); both of them try to halt your progress, which is why they never left. You are the first to get that far without going crazy, so that's why you get to leave.
Just some things to think about.
Besides, there are really four AIs at the top of the tower at the same time: you, the Shepherd, Samsara, and that easter egg dev guy.
I think that ambiguity of mind and identity was intended as part of the philosophical discussion. It's a difficult question, for example, whether a person whose brain has been split down the middle (a corpus callosotomy) is two people despite behaving like one 99% of the time, or whether a person who's had half of their brain removed (hemispherectomy) is half-dead. Those questions only become more confusing with regard to AI, where identity is less physical and more functional.
It's all the same data being accessed by slightly different code, after all.
To take the analogy further, climbing the tower is like lucid dreaming: becoming aware that you're dreaming and that you can take matters into your own hands. That last morsel of awareness allows you to be more than just a passive actor in your dream, and that kind of active awareness is likely what Alexandra considered most human and deserving of inheriting the earth. Because otherwise, you're just sheep.
How is that a depressing or tragic ending? The whole point of the game is that confronting reality is way better than deluding yourself. Even if Soma/Talos is destined to be alone on the planet (which seems implausible, since animals are probably still alive and it probably has the ability to create other androids), the point is that intelligent life still exists.
EDIT about shutting off the simulation: that's also meant to accentuate Soma/Talos' freedom. As Alexandra said, the purpose of the simulation is to create an AI that is free to question the status quo. Shutting down the simulation enables the AI to choose what to do onwards: if it wants to roam the world alone, it can do that; if it instead wants to create a race of androids to populate Earth with, it can do that too - but how to program and build them will be its choice, not the humans'.
There is still the unexplained sequence in the end cinematic that shows thousands of servers powering up, even though the simulation was supposedly deleted.
No, I'm a realist. There's nothing abstract about thousands of computers powering up. There's no ideal or philosophy there, just cold, hard technology. Computers are tools that serve some purpose, and you don't power on an entire datacenter for absolutely no reason.