The Talos Principle

Soylent Mar 6, 2015 @ 5:25am
Ending story discussion [SPOILERS]
End-game spoilers here.

I just finished the game, and did all 3 endings. There are a few things I wanted to talk about regarding the storyline.

Clearly, the "tower ending" seems to be the "correct" ending, based on all the very climactic ending cinematics. However, to me it seems to be the worst ending for your character (the bot). Unlike the other two endings, the bot ends up alone in an alien world. Completely alone, because the simulation was destroyed, Elohim is gone and Milton may or may not be in your head.

My first contention with this ending is that there was no reason the simulation would have been programmed to be deleted. It fulfilled its duty, but why delete it when the simulation itself is the last great human achievement?
And what is this bot supposed to do now, just walk around looking at ruins till the end of time? He can't make more bots, because the simulation was destroyed, therefore preventing new "PASSED" processes from being generated. It sounds like a terrible existence to create for anyone.

But if the simulation had continued to exist, it could have kept pumping out more passing processes and allowed the first bot to start making new bots (assuming the humans left him with the capability), and humanity would eventually have been restored in the form of androids.

So I think that the other 2 endings are better for the bot, because they allow the bot to continue to live in a virtual world, with other bots and Elohim and Milton. He can continue to run the simulation over and over, or he can become a messenger and help others out. But he still has a purpose, the simulation continues, and all is well in virtual land.



I also wanted to speculate about how much time has passed since the last of the humans: I think it's only been a few years since humans died out. Judging from the excellent condition of the hardware, the lack of layers of dirt or dust, and the fact that the electronics, servers, and power are 100% functional, the facility could only be a few years old.

I'm no engineer, but power generators like water turbines do not last 100 years without maintenance. I do know about servers, and there's no way all those servers would have been completely functional after sitting for 10 years, let alone 100. (Speaking of servers, what was the purpose of those hundreds of servers booting up in the cinematic?) So putting all this together, I think it's been less than 10 years since humanity died out. My guess is 3-4 years.

Anyway, since a sudden virus wiped out the humans in about 1 year (according to references in some of the texts), there are going to be billions of skeletons and corpses littering the world.

Humans created hell for this bot.


[One last note: What was the deal with Uriel and Samsara? It said they couldn't get past the last part or something, but clearly they were right at the top of the tower]



Last edited by Soylent; Jun 23, 2015 @ 5:00am
Showing 1-15 of 35 comments
SpartanIII Mar 6, 2015 @ 12:34pm 
Originally posted by Soylent:

[...]

why delete it when the simulation itself is the last great human achievement? [...] And what is this bot supposed to do now, just walk around looking at ruins till the end of time? [...] It sounds like a terrible existence to create for anyone.

[...]

[One last note: What was the deal with Uriel and Samsara? It said they couldn't get past the last part or something, but clearly they were right at the top of the tower]

In her recordings, Alexandra talks about pursuing the truth even if it makes you uncomfortable. Choosing to leave the beautiful, comfortable, and purpose-filled simulation means accepting reality even if it's scarier than the lies. Essentially, choosing the tower is following Alexandra's philosophy.

It's just like The Truman Show, The Matrix, and Plato's cave. Truman was in a world that was meant to be ideal, but he left to see reality. The Matrix was way more pleasant than the real world, but it was still a lie. It's hard to adjust your eyes to the real world after being in Plato's cave for so long, but reality is still real-er.

Also, I don't think that the simulation is humanity's greatest achievement. The goal of the simulation was to produce a bot that would question what it was told. Alexandra says in one recording that questioning in that way is what makes something intelligent / human / something-like-that. By leaving the tower, the simulation's process has achieved its purpose: creating an intelligent (according to Alexandra's definition) being.

Uriel and Samsara work thematically, but perhaps not logically. Here's what I mean. Some texts / recordings talk about people doing things for the generations to come. A passage from Cicero, for example, talks about people laboring with full knowledge that they'd die before seeing the fruits of their labor. Alexandra and company set up this project to leave something behind them even though they knew that they would die from whatever killed humanity. You get the idea. So Uriel was there so that you could stand on his shoulders and achieve what he laid the ground for. Logically, however, I don't see why he couldn't just leave. Samsara's name has to do with cycles because he has chosen to obey Elohim and keep the process running. Though if he's on Elohim's side, why did he climb the tower? Again, like Uriel, Samsara's being there makes sense thematically but perhaps not logically.

With all that said, you've got a point about destroying the simulation. Are all the other bots just dead now? Was that really necessary? Also I have no idea how long the facility could stay active realistically.
Last edited by SpartanIII; Mar 6, 2015 @ 1:31pm
BlackWater Mar 6, 2015 @ 2:18pm 
My thoughts were that only one was needed, as Milton was a portable Elohim, so to speak, who could be copied and downloaded into other bots. The tools were there to make more robot bodies; they merely needed programming. As far as time goes, I assumed it to be centuries, based on the 999999 answer when asked how long the system was running. Then again, considering programs process things so fast from our perspective, generations could have been tested and reset in a matter of hours or less. The saddest moment in the game for me was the document about the pets; it made me remember "A world without people" that aired on Discovery a couple of years back or so.
Soylent Mar 6, 2015 @ 2:26pm 
Originally posted by SpartanIII:
With all that said, you've got a point about destroying the simulation. Are all the other bots just dead now? Was that really necessary? Also I have no idea how long the facility could stay active realistically.

Especially since the simulation was able to produce its intended result in a matter of a few years. Had it kept running, it could have made thousands more by using the template of the first one. Instead, they pop out one bot and shut down.

Also, I don't think the entire point of the simulation was simply to see if they could do it. Knowing that nobody would be around to see if they were successful, it's unlikely they were trying to prove anything to anyone. The message I got from the story was that they were trying to preserve humanity, one way or another.

One more thing... doesn't it seem that Elohim and Milton were also conscious beings at that point? Elohim seems to adhere more strictly to his programming, but Milton seems to have spontaneously gained consciousness. Clearly the bot isn't the only success of the simulation, another reason not to destroy it.
Last edited by Soylent; Mar 6, 2015 @ 2:28pm
Yup, Elohim and Milton seem to be much better at this sort of stuff. In theory they would be supercomputers, as Milton acts like an "iterative" kind of bot that has the special ability to change its own code. Having a bot with intelligence and ability equal to a human's would be remarkable, as the improvements cause a feedback loop leading to what some people dub the 'supercomputer singularity'.

I suppose that doesn't really prove consciousness in any way, right? I feel I have a "conscious" experience, and everyone else says the same, but tough: I can make a program that, when you feed the string "Are you conscious?" into it, spits out "Yeah! I'm conscious!" and "Why did you ask?" for good measure.
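That kind of canned-response program really is just a few lines. A minimal sketch (the function name and the fallback reply are just for illustration):

```python
# A trivial canned-response program of the kind described above: it
# "claims" to be conscious, but there is nothing resembling
# consciousness behind the answer - just a string comparison.
def chatbot(message: str) -> str:
    if message == "Are you conscious?":
        return "Yeah! I'm conscious! Why did you ask?"
    return "I don't understand."

print(chatbot("Are you conscious?"))
```

Point being: producing the right words is cheap, which is why the words alone can't settle the question.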

The problem is the Writer's Explanation makes very little sense. It says "a very long time" passed between the ending of the story and the last human dying, Milton is a "product of corruption", and Elohim and Milton are "not working together". Their words, not mine :\
Soylent Mar 6, 2015 @ 4:00pm 
Elohim said he was scared and wanted to live forever. Milton refuses to go with the bot (if you took that route), knowing he will also be deleted. The bot killed them for his own goals, but they were ok with it, like they were tired of existing.

Can a non-human consciously decide to go against self-preservation and commit suicide? Clearly they weren't programmed to do that, because the simulation was deleted by a human-written script. I think they were as human as the bot.

As for the data corruption, I think one of the logs mentioned that the database was already starting to act up by grabbing random data, but they didn't have time to go over the code again. I think that was a sign that the simulation was already getting a mind of its own while humans were still alive.
I think that's not really consciousness. It's self-awareness. The 'bot' you are playing as can't really be sure it's in a simulation. For all you know, the files were planted instead of honest.

Besides, my logic for supporting Elohim (at the end of my first playthrough) was that defiance of Elohim would just produce overall negative results: the tower would collapse (there are a LOT of fake spoilers around the game, and this one came from the trailer) and generally everyone would be worse off. So Milton's defiance just seemed blind.

Milton is most likely a "friendly supercomputer AI" of a sort who is faking the whole "uninstall" and "get pissed off". There are hints IN THE GAME about Milton controlling all of your responses, so it only makes sense that Milton has a reason for that, and the reason would be not wanting to go with you.
MassConnect Mar 6, 2015 @ 5:01pm 
What's this about bots "living" in the simulation and "living" in reality? Ignore for a moment the philosophical question as to whether a machine can achieve some semblance of consciousness by some definition. When did we throw out the premise that "life" was defined differently than consciousness? What justification is there to call lines of code interacting with other lines of code as having the qualities of life?

Not buying your argument that other endings are better because something is "alive" in the simulation. For one thing, you're basically saying that separate realities exist. There is no reality "inside" the simulation because the simulation is already a part of reality. The simulation, in real terms, is an assemblage of mechanical parts manipulating its parts as programmed. The reality of the simulation, so far as the story reveals, is that lines of code are being transferred from one machine to another.

Where the story falls short is the absurdity to which one is to believe that these lines of code are prepared for the real world; that the new machine in which they are installed can detect its surroundings whereas no such operation would take place in the simulation. The simulation was always code. Even if we "separate" portions of code from one another (i.e. the AI and its simulated environment), code receiving other code as input is not the same thing as code receiving signals from whatever detector we are to believe can serve as "eyes" for a machine. That machine needs something that doesn't exist in our world - a set of instructions to translate the input of one or more detectors that interact with a real environment and translate all of that into whatever code the AI can "understand."

The game does not elaborate on this being a part of EL. Let's just suspend disbelief and appreciate this work of fiction on its own terms. Where I return you, then, is to accept that the code which constitutes the Artificial Intelligence is real, but lacks the qualities of life. While a part of the simulation, the AI cannot detect anything except code, and it can affect nothing except to fulfill conditions necessary to trigger the execution of different code, whereafter a COPY of the AI code installs on a new machine. Were the code to remain a part of the simulation, it would not satisfy the qualities of life, and the odds of it having any facsimile to the FEELINGS of being alone or not are rather moot. To personify in any way the machine that is created in the tower ending, to specifically personify it as experiencing something unpleasant as a result of the simulation being gone, and to assume that nothing exists in the real world to satisfy the machine's own definition of being "not alone" - these personal things of yours simply would not apply to the machine in the way that you are familiar.

A final word. No offense to you. I played this game three times for achievements then uninstalled it. I don't regret the purchase, but at the same time I think your question/comments and my actually having to think about them was more rewarding than simply playing the game alone.
Last edited by MassConnect; Mar 6, 2015 @ 8:17pm
Soylent Mar 7, 2015 @ 4:35am 
Originally posted by Tweak:
What's this about bots "living" in the simulation and "living" in reality? [...]

Actually, I'm beginning to think that *you* (the player) are the bot. The bot has reached the level of a human; that's why you are playing it. I mean, why not? After all, a true AI would pretty much be a human mind. So consider yourself the mind of the bot, and now you are completely prepared for the "real world".
But that's also why I think the real world is hell, because you are now the only human left in a post-human world. On the other hand, you would probably be driven insane if left inside the simulation forever.
Last edited by Soylent; Mar 7, 2015 @ 4:39am
[TLGS] Mario-x Mar 7, 2015 @ 8:52am 
Elohim was programmed to keep it running; Milton was programmed to question everything and only believe what can be found in the files. The bot was made to learn. I considered climbing the tower to be the bad ending. I would have thought the bot would have been programmed to DO something once in the real world; since he was not programmed with a task, I would think the bot, at the end, would feel no purpose.

My only idea of a purpose would be just to survive until alien life finds the planet, with the bot serving as the guardian of humanity's knowledge.
fracturedorb Mar 7, 2015 @ 11:22am 
Some other ideas to kick around:

When the simulation was deleted, that doesn't mean it is gone forever. There were a lot of computers in that building, and it was likely just deleted from that particular computer's main memory. Another copy could be run on another system somewhere else if the new robot you become wants to (or another application could be running to monitor when that simulation state ends, for one reason or another, and launch another simulation).

Second, I can't remember where, but I got the sense that the idea behind the simulation was to create a consciousness that wants to exist outside of the simulation's walled garden. They stress a lot that the planet is a great place that humanity mucked up. So they want something that can both exist in this new world that humans can no longer inhabit and also retain humanity's knowledge and experiences. In a sense, early man made way for modern man, which made way for the AI. Each generation takes from and learns from the previous ones.

Lastly, each of the other main AIs you encounter serves a purpose, but none of them want to exist outside the simulation. EL0HIM is there to give you something to both follow and rebel against. Milton is there just to sow doubt. The Shepherd is there to help anyone who gets as far as he has, but he never wanted to leave either, because he felt he wasn't ready. Those other 2 AIs you run into on the tower weren't helpful; I'm not sure why people keep saying that. Both of them try to halt your progress, and that's why they never left. You are the first that got as far as you did without going crazy, so that's why you get to leave.

Just some things to think about.
As I said, the Writer's Explanation does not correlate at all, and Milton's role is muddled for many people. They say it's from corruption and random chance, and that Milton just started as a normal library assistant, but then why does it "reset" itself after death?

Besides, there are really 4 AIs at the top of the tower at the same time: you, Shepherd, Samsara, and that easter-egg dev guy.
dump Jun 22, 2015 @ 4:42pm 
I think a good analogy is when you dream that you're someone else. Each AI is a different dream personality. You don't need to mourn the loss of those different personalities when you wake up, because you know that they're all just differently exaggerated aspects of you. The difference between whether or not you take Milton along is about the same as the difference between whether or not you remember a particular dream. Even if you don't, everything that constituted Milton exists along with you -- except the frustrating conversations you have with it. So I don't think deleting the simulation is so much the end of all the other AIs in the simulation.

I think that ambiguity of mind and identity was intended as part of the philosophical discussion. It's a difficult question, for example, whether a person whose brain had been split down the middle (a corpus callosotomy) is two people despite behaving like one 99% of the time, or if a person who's had half of their brain removed (hemispherectomy) is half-dead. Those questions only become more confusing in regards to AI, where identity is less physical and more functional.

It's all the same data being accessed by slightly different code, after all.

To take the analogy further, climbing the tower is like lucid dreaming: becoming aware that you're dreaming and that you can take matters into your own hands. That last morsel of awareness allows you to be more than just a passive actor in your dream, and that kind of active awareness is likely what Alexandra considered most human and deserving of inheriting the earth. Because otherwise, you're just sheep.
Last edited by dump; Jun 22, 2015 @ 4:45pm
dome.barbato Jun 23, 2015 @ 4:19am 
Soma/Talos finds itself in the real world, free of limitation and authority, with all the surviving knowledge of humankind in its brain, with the capacity of experiencing the world at its full extent and continuing the legacy of humanity, thereby avoiding the full disappearance of human achievements from the universe.

How is that a depressing or tragic ending? The whole point of the game is saying that confronting reality is way better than deluding yourself. Even if Soma/Talos is destined to be alone on the planet (which, since animals are probably still alive and it probably has the ability to create other androids, seems implausible), the point is that intelligent life still exists.

EDIT about shutting off the simulation: that's also meant to accentuate Soma/Talos' freedom. As Alexandra said, the purpose of the simulation is creating an AI that is free to question the status quo. Shutting down the simulation enables the AI to choose what to do onwards: if it wants to roam the world alone, it can do that. If it wants instead to create a race of androids to populate Earth with, it also can - but how to program and build them will be its choice, not the humans'.
Last edited by dome.barbato; Jun 23, 2015 @ 4:41am
Soylent Jun 23, 2015 @ 11:39am 
Originally posted by Casual Toast:
It does not possess the sum total of human knowledge. The archive was heavily corrupted and much was lost. A parallel with the library at Alexandria was drawn very early on. This makes it even more tragic. So even if we are to assume that the contents of the library were uploaded with the AI program, which is ridiculous because there isn't enough storage space for it, there are huge holes in the data.

Confronting reality no matter how bleak is a positive message, but this does not diminish the tragedy of the ending in the slightest. I'll remind you that not only is the entire human race extinct, but also every other intelligent AI has just been horribly destroyed in a virtual apocalypse, leaving the protagonist utterly alone. It may take hundreds or thousands of years for it to accomplish the task of creating more sentient AI on its own, or it may never accomplish this at all. The one thing that would have helped has just been completely erased.

Very bleak, and very sad.

There is still the unexplained sequence in the end cinematic that shows thousands of servers powering up, even though the simulation was supposedly deleted.
Soylent Jun 23, 2015 @ 2:00pm 
Originally posted by Casual Toast:
You just want some kind of wiggle room to shove a wedge into so you can believe what you want to. It's almost like what a religious person does with god, inserting it wherever they can instead of facing reality. Maybe ascending the tower was the wrong ending for you. I think you may have missed the point of it.

No, I'm a realist. There's nothing abstract about thousands of computers powering up. There's no ideal or philosophy there, just cold, hard technology. Computers are tools which serve some purpose, and you don't power on an entire datacenter for absolutely no reason.

Date Posted: Mar 6, 2015 @ 5:25am
Posts: 35