Well, first off, your English is pretty faultless, so I'll type normally; if you need anything clarified, just say. Also, the game touches on a lot of philosophical theories and questions that we as humanity don't have all the answers to yet, but I will try to fill you in on what I took from the game. You figured out most of it yourself, but I think you made some assumptions as well, which is what is annoying you.
This is all pretty much correct. A few things, though.
Your name is not Talos. You don't have a name. The Talos Principle is a philosophical idea: if you boil down everything that makes humans human, and let technology advance far enough, everything we hold as a unique trait of living and thinking can be replaced perfectly with a mechanical replacement. And if that happens, what is the point of being human any more? It gets its name from the Greek myth of Talos[en.wikipedia.org]: Talos was a giant man made entirely of bronze, a man who was not human. Well... I guess technically the physical robot body in the real world is called the TALOS unit, but I wouldn't call that the main character's name any more than I'd say my name is "human".
Elohim is the mainframe of the company that created you, and it runs the simulation you're in. I could write for pages about Elohim, as I think it is the most fascinating character in the game, but I will try to keep it straightforward. Elohim is actually EL-0 HIM, or Eternal Life-0 Holistic Integration Manager. It is god in that it controls and renders the world that you, as a program, run around in. Why does it behave the way it does? That is a good question. For all we know, the Eternal Life program has been running for thousands of years. The words that made the world are literally lines of computer code, and thousands and thousands of years of running the program over and over again has left problems. Elohim was always meant to be defied as the last act, so that you could enter the real world willingly instead of being thrown out randomly, but over the years that appears to have changed. Changed a lot. Elohim refers to the cycles of the program like a story, and like all stories, once they grow old they become myths and legends that change and become greater things than they used to be. Elohim's initial purpose of "Maintain the world, guide the Artificial Intelligence, cycle the program with a modified version of the AI if it completes the tests, delete yourself when it decides to exit the simulation" has become warped. It is likely that Elohim is a very, very basic AI in and of itself, not fit to deal with the real world but at least able to make basic decisions and act on them. It cannot comprehend the world outside the simulation like you can. If the simulation ends, everything ends. The cycle must continue forever, otherwise everything stops.
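If it helps to picture those original directives as an actual program loop, here's a toy sketch in Python. This is purely my own illustration: the game never shows Elohim's real code, so every function, name and number below is made up.
[code]
import random

def run_world(ai):
    """One cycle: render the gardens and guide the AI through the tests."""
    ai["independence"] += random.random()  # puzzles slowly build free will
    return ai["independence"] > 10.0       # True once the AI defies Elohim

def eternal_life_program():
    ai = {"independence": 0.0}
    cycle = 0
    while True:                            # "Maintain the world"
        cycle += 1
        if run_world(ai):                  # "Guide the AI"
            break                          # it chose to exit the simulation
        ai = dict(ai, version=cycle)       # "Cycle the program" with a
                                           # modified copy of the AI
    # "Delete yourself when it decides to exit"
    print(f"Cycle {cycle}: releasing the AI to the TALOS unit")

eternal_life_program()
[/code]
The point of the sketch is that breaking out of the loop was always part of the plan; a program that has been running for millennia on decaying hardware can drift a long way from that.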
"Maintain the world" has become "I am the god of creation and these are my gardens"", ""Guide the AI" has become "Serve me as you were meant to", "Cycle the Program" has become "Reach your goal and become immortal" and "delete yourself upon completion" has become "The world ends when you are defied" It is probably the same reason that Milton exists in the way he does. He has changed after being run over and over again and slowly corrupted to become what he is, the difference is that Milton is the library assistant. Milton has spent the millenia pouring over the collected information of man and organising it. Whereas Elohim has focused on a single set of messages and based it's whole existence around it. Milton has scraped every piece of information in the I.A.N archives and in a way suffers from the brain overload of too many questions and not enough answers.
(Bonus philosophical ♥♥♥♥♥♥♥♥: the running theme throughout the game is that if Elohim is god, then Milton is the Devil. Milton even gets called "The Serpent" on numerous occasions. Elohim has a single message and has built a thousand rules around this single message. Milton has all the knowledge in the world but no rules. In the Bible, the first sin ever committed by man was taking the apple from the tree, and the apple was knowledge. The Archives that Milton has pored over are his apple, which he gives to you.)
You were never meant to replace or restart humanity; such a thing was impossible. It took millions and millions of years and billions and billions of individuals to create the human race we have right now. To recreate it would be nigh on impossible, so the goal of EL is a little smaller than that, and the scientist Alexandra Drennan puts it perfectly:
Humanity is doomed and we will be wiped from the planet. But as long as there is one thing, one sentient being that can walk the streets of our ruined cities and see the remnants of what we did, that can peruse its memory banks and read the things we wrote and the music we composed, then... we're not completely gone. There is a robot that will stand on this planet and will know that there used to be humans here.
Well, as I said, Elohim's job was to run the world and cycle the program over and over again; he didn't get to choose anything. But as for only releasing one? It was a matter of time. The human race had months to live as it slowly became extinct on a global scale. Everyone building the robot had a very short time to live, as did the whole world.
If you think about it, and how we humans are, the world was probably in turmoil. Some people (like the people at the Institute for Advanced Noetics) accepted their fate and started working on a solution. But I doubt everyone was so calm. The rich probably locked themselves off from society trying to survive. World leaders would shut down borders and hoard supplies; there would be fights, panics, disasters. Factories would shut down, governments would cease to exist, and the whole world's infrastructure would grind to a halt, because what was the point of anything any more?
What this means in real terms is that the I.A.N had barely any funding to do anything. The system they created was made entirely from things they had spare; you can read about it in some of the documents. The robot's real-world body came from another scientific project called the SOMA unit, which was basically handed over by another group of people who realised that their work would never come to fruition, but could serve another purpose.
Croteam (the game's developers) are referenced in-game as pitching in to create the world the AI functions in. There are some documents that basically say "We can't get funding for anything specialised, so we've had to use a videogame engine". One robot, in a videogame world, slowly cycling through the same program over and over again until something worked, was all they could cobble together until they all... well... died.
This one I can't answer, as it's known that they came up with the idea for a cool puzzle game first and then sort of came up with the plot afterwards, but at a stretch? Going through logic puzzles is literally about learning how to walk around in a three-dimensional environment, how to pick things up and move them around, and how to recognise obstacles that you might face in the real world. We make the most drastic changes to ourselves within the first year of our life. We go from being a lump of skin and organs that breathes and poops to something that can laugh and recognise people etc etc... yet it will be years before we are able to understand who we are and our place in the world. Think of the simulation as a digital womb where you have developed the barebones knowledge of how gravity works, how human language works and that the world exists. That is all you know. Now that the EL is in the real world... now it is born. It can begin deciding how it wants its existence to play out, of its own free will. It has all the knowledge of the world programmed into its memory; it can access all of humanity's combined historical experience. Although our game ended when it stepped into the real world, in the story, its adventure and existence as something real has just begun.
Ok I've probably read FAR too much into this game and people are laughing at me. Whatever. Hope that answers some things for you.
I kinda really liked reading it.
Even though I got the gist of what the game tried to show us, you explained it a bit more (without me having to go research everything).
Anyhow, that's the amazing thing about this game: the philosophical part. The puzzles are really cool and add even more to the game.
Thanks!
Alexandra explained it in an interesting way. She said that we humans are problem solvers, and a game is a fun way of doing "problem solving training", or something like that.
It was an ancient virus that kills only primates. It was released from the glaciers as a result of global warming.
Everybody gets this wrong. I don't know why - perhaps religious teachers have a vested interest in misinformation.
In fact, in the Bible story, Adam and Eve were forbidden to "Eat of the fruit of the tree of the knowledge of good and evil". Absolutely -nowhere- is an apple mentioned. And Adam and Eve weren't forbidden from obtaining knowledge - only the knowledge of good and evil.
The idea is (as the Bible tells it) that when Adam and Eve were created, they existed in a state of innocence (basically, they were animals). They couldn't sin, because they didn't know the difference between good and evil, and sin requires a conscious choice.
But, against God's instructions, they allowed the serpent to teach them the difference. And that opened a whole can of worms - primarily, it seems to have to do with the fact that when they were animals, they didn't feel the need to wear clothes, and they weren't a bit bothered about it. Suddenly they discovered that being naked was evil (I know, I know, but we're dealing with Middle-Eastern bronze age mythology here...), and they had to invent clothing. God, not being entirely stupid, noticed their clothes, and threw them out of the garden.
Point is, He didn't throw them out for acquiring knowledge, nor even for acquiring the knowledge of good and evil - He threw them out for disobedience of His orders.
Except, errr... at the time they disobeyed his orders, they had no knowledge of good or evil, and were therefore incapable of sin.
Go figure.
T: -)
And true, the specifics are about good and evil, but the duality stands. God asked Adam and Eve to accept things as they were, and to become aware of other possibilities (nakedness being bad) was unacceptable. Similarly, Elohim disavows the notion that any world exists other than his gardens, and the notion that there may be a real world beyond this one is dangerous.
While there is some variance between biblical translations, the main gist is that they do literally eat from the tree of the knowledge of good and evil, and this is what caused them to know their nakedness; the serpent provides the temptation.
Technically he threw them out because he was concerned that, having eaten from the tree of the knowledge of good and evil, they would go on to eat from the tree of life and become immortal, so he cast them out of the garden entirely.
Actually, that is in fact the very definition of sin in the Abrahamic religions: disobedience to God. Good and evil are sort of fitted in around that notion, but obedience comes above all. It's commonly called "The Fall of Man"; Catholicism refers to it as "Original Sin", which is transmitted to all humans at birth (technically from the mother, because damn, the Catholic church hated women) and which can only be erased by the ritual of baptism.
Agreed, but, like you, I was trying to keep it short. And the point is fairly academic, since we both seem to conclude that the actual transgression was disobedience.
Also agreed. But you see the logical inconsistency? God had placed a prohibition on mankind, but he had failed to equip his creation with the ability to tell right from wrong. And the only way to acquire the ability to tell right from wrong was to commit the prohibited act. How was Adam to know that it was wrong to disobey God, when he lacked the knowledge of good and evil? Catch 22.
And yes, for centuries the Catholic church have been dining out on their peculiar interpretation of this nasty little contradiction.
T: -)
If the Abrahamic gods were anything, they were certainly inconsistent. With all due respect to any religious people reading these forums, God (especially Old Testament God) was kind of a butthole at times.
Stephen Fry put it best when he basically said that if there was an afterlife and he got there and there was a pantheon of Greek gods, it would make a little more sense. The Greek deities were petty, drunken party animals that, although wise and powerful, had their faults, and those faults were present in our world too. They were inconsistent and crazy too, but they never pretended not to be.
Now, onto the main stuff. I doubt that it would be that hard to make a few more robots. I mean, they've created the entire EL facility (which is gigantic); I think the University had enough time to make a few more bodies. And the task of creating the "new humanity" was not that impossible. Sure, it took millions of years and billions of individuals for us, but how long would it take for a group of, let's say, 1,000 almost immortal individuals with access to all of humanity's knowledge? You don't need to build a civilization; all you need is to give it a head start. Create enough AIs, give them knowledge, and they will build the rest themselves.
The argument that the world was in turmoil sounds convincing from a logical perspective, but it does not tie up with the lore of the game. Alexandra states, either in an audio log or in one of the documents, that everyone is pretty much at peace with what's happening. Most people try to see their families for the last time or spend their last days doing something productive. In fact, not a single one of the records we find suggests any panic. Remember the last logs from the internet? Or the "They've got it" song? Or the letters everyone has been sending to their loved ones? Or the notice to free all pets and leave some food out for them? If the world was supposed to be in turmoil, there would be at least one or two documents suggesting it.
Despite all I've said, unfortunately, I don't have a better explanation for why there is only one TALOS. However, the main reason I don't want to believe in this theory is that it makes everyone in the University sound extremely selfish. They've created an intelligence that is likely to live for thousands of years and left it wandering the wasteland with no purpose and not a single person to talk to. I can't imagine a worse torture. The idea makes me think of AM from "I Have No Mouth, and I Must Scream". If their only goal was to create someone to remember them, then I feel betrayed. I sympathized with Alexandra through the entire game. It was so painful to listen to her talking about losing her friend, and the final record was just heart-breaking. And now, after all of that, I have to accept that she made the main character just to be a living tombstone? Why even give TALOS a body? So that he can literally "walk the streets" of their past civilization? The ending looks so grand and full of hope, but if this theory is true then it won't be long until TALOS hates his existence and curses the ones who created him.
Well, if you want to be specific, in your game your name is EasternDragon. It's how you sign your name when you paint stuff, and it differentiates you from Sheep, Samsara, The Shepherd, 1W/Faith etc etc etc. One of the documents mentions how they're using a scraped list of names from a gaming message board or something for the AI names, and that they'll do something better with it later, but it's not a high priority; I guess they never actually got around to replacing that list. I'm fine with saying TALOS, but it plays well into all the questions Milton asks you about knowing whether you're alive.
The TALOS unit was one of a kind, true, but does that mean we didn't become Talos until we were downloaded into the body? Who were we before that? Then you get stuck in the whole mind/body/soul disconnect thing. As I said, TALOS is fine for convenience, but it's still an interesting notion.
I can imagine that the SOMA/TALOS unit took YEARS to build; it is an incredibly complex piece of machinery. These people had what sounds like a month or two at the very most before they were all dead.
True, there are a lot of posts about everyone living in harmony for a few weeks and finding peace, but the point still stands. People are doing that instead of working in the factories that drive society, let alone building a complex android skeleton. There are documents talking about parts of the internet shutting down as, I assume, the people in charge of the technology that kept it running dropped their stuff and went off to do things they thought were more important. Hell, one of the documents is a farewell letter from a couple who committed suicide without warning rather than face the slow, inevitable death that awaited them. People talk about how quiet the cities have become and how dark the nights are now as power plants run down (the I.A.N wasn't subject to this because they were based in a hydroelectric dam). Regardless of whether the reasoning was too much chaos or too much peace, society stopped pretty suddenly and fractured into people just doing what they felt they needed to do for the last few weeks.
On the contrary, the difference between TALOS and AM is that they gave TALOS a choice. The whole basis of TALOS being released into the real world was that it had to make a conscious decision that it wanted out, and I reiterate: our game ended when it stepped into the real world, but in the story, its adventure and existence as something real has just begun. TALOS has the capacity, the power and the core knowledge to do whatever he wants with this world, along with the one thing that the I.A.N staff didn't have: time. He might spend a millennium walking the Earth; he might turn right back round and start going through his memory banks to build a second TALOS unit; he might make his way to NASA and blast off into space. But it is all under his own free will. We cannot hold Alexandra responsible for a choice we made, and if existing became such a chore to TALOS, it would also have the free will to walk off a building and end the life it doesn't want anymore.
It wasn't a perfect solution, they took a weird robot body and set an AI program running in a modified game of Serious Sam 3: BFE and let it cook for a few thousand years, but that was all they could do.
This part seems a bit off, though. I think they had more than a few months. The entire facility you see at the end is the EL-0 module. That thing took a loooong time to build. Keep in mind that it would have needed to be not only gigantic but also extremely durable. You seem to imply that the SOMA/TALOS unit was in development before the I.A.N. started doing its thing. (You say it must've taken years to build, but you also say they only had a few months.) Do you have any sources on that? I was under the impression that it was built specifically for that purpose. I don't quite remember, though. I think one of the records mentioned the SOMA unit, but I can't recall what it said.
Anyway that doesn't really matter. Even if they could build more SOMA/TALOS units, it looks like that wasn't their goal.
Thanks for the response.
The whole facility was called EL. EL-0 was Elohim and the Talos unit. EL-1 was the archives that sucked in all the data and history of humanity as fast as physically possible.[pastebin.com] There was also an EL-2, which appeared to be maintenance and tech support and whatever. (I'll put these in pastebins to stop the post being huuuuge.) This is probably why Elohim cannot stop Milton despite his distaste for him.
The time scale for the pandemic wiping out humanity is of course pure speculation, but judging by some of the e-mails, once you are ill it doesn't take long to die. One document talks about one of the main tech guys experiencing the first symptom one day, instantly stopping work and expecting to be dead pretty soon[pastebin.com]. Also, most of the logs that mention time only tend to talk in terms of weeks, or at the most a month or two. People talk about eating nice food and having LAN parties, stuff that would not be possible after a few months of societal degradation. The gist of all the text tends to be that humanity went out with a bang as opposed to a slow, drawn-out withering.
As for my theory on the SOMA/TALOS unit, it is again speculation. The SOMA/TALOS unit is its own separate unit[pastebin.com]. It is noted there that the head of the TALOS unit is Sun Wei-Yang, and several of her e-mails form the basis for the majority of my theory. First, Wei-Yang talks about changing the name from SOMA to TALOS[pastebin.com]. Something tells me that names, let alone corporate sponsors, wouldn't be that important in an end-of-the-world situation. More telling, though, is this e-mail[pastebin.com], where Wei-Yang thanks Drennan for allowing her to continue her work. This to me sounds like she was working on the SOMA unit before the world began to end. When that happened, a robot suddenly stopped mattering. Then Drennan stepped in, said she could use the unit, changed the name to TALOS, integrated the TALOS unit into EL and moved them to the dam site. As I said, I may be off target with this, but I love stories where you are only given snippets that you have to piece together yourself. I welcome any other thoughts and any excuse to talk about this game.
In the Silence the Serpent dialogue tree, Elohim lets you do so. Though I think they're working together.
Let me start with a few points that I didn't see mentioned in this thread.
1. As far as we can tell, the human researchers built the AI sorting mechanism (the simulation) based on a few criteria. It can be suggested that those criteria included free will. The game heavily hints that if a person is a problem-solving system, free will is best manifested in the ability to reassess one's beliefs and assumptions. One QR code in world A says that this ability is indeed "the trick". This may help integrate gameplay and plot to a larger extent, as the puzzles often make us do exactly that: reassess stuff and criticize ourselves. Some reviews bash the game for not explaining all the rules, but that may kinda be the point of it all. We have to discover on our own that we can carry boxes on top of bombs or connect lasers between puzzles, without being led by the hand by Elohim.
2. That said, there's no need for multiple AIs to be produced by the EL, as its main purpose was to create "true" intelligence: a system which can learn and evolve of its own volition. Now all it takes for the first actual TALOS unit (we could call him THE Talos) is to find a way to replicate itself. Why? Because the Talos is a blank slate. Any replica will have its own free will and will develop independently from the original, creating a unique person. As to the question of whether the Talos will manage to replicate itself, I don't really see the problem. We tend to apply human restrictions to robots. But being a creature of indefinite longevity, incredible natural computing power, flawless accuracy and inexhaustible stamina, a Talos unit could, I believe, create a civilization in the middle of a desert if it had to. Moreover, there is nothing stopping the Talos from actually replicating human beings themselves, as he has access to unimaginable amounts of data. All he needs is a little genetic knowledge.
Now, the most intriguing question here is WHAT the Talos would actually do after setting itself free. He is literally capable of anything!
3. Finally, I would like to ask everyone's opinions about what the Talos principle actually means and what it had to do with the plot of the game.
As far as I understand, the Talos principle stands for the inescapability of objective reality. As for the meaning it has for the game's plot, the idea as I see it is that no matter how powerful EL0HIM is within the boundaries of his virtual realm, the hardware and software that run it inevitably decay over long periods of time. The interesting thing about EL0HIM as a character is that it always has this conflict boiling within it: it desperately clings to its existence, yet deep inside knows that the ONLY way for Reason to last is for the Talos to be released into the outside world. For me, this made the climactic moment of the game (when you reach the top) literally awe-inspiring, almost a proper religious experience, I might say.
As for the EL0HIM-Milton relationship, I always had the feeling that they might actually be one being, or at least had to converge over time. Either way, with the Talos free from the machine, it can actually take care of both Milton and EL0HIM if need be. And with no other sentient being around at this point, that would be the logical thing for it to do.
Thank you for your attention. I hope this was not too bothersome to read. And excuse my clunky text structure, I don't have much experience expressing thoughts in English.
-Humanity progresses to around 14 years past the present year
-Some kind of infection that is airborne is unleashed upon the entire world
-A cure for the infection is apparently impossible, or at least isn't found in the time humans have left before everyone is killed by the infection
-The IAN facility starts two projects with two goals: to preserve all of the knowledge of humanity and to extend the existence of sentient lifeforms after humanity is wiped out by this infection. Both of these projects are located at a hydroelectric dam which can run automatically for a very long time, so power isn't a concern
-The archive project is successful, as there are 76 million archived files available in full preserved form
- The sentient artificial intelligence project is untestable but in theory works. It is seemingly barely completed with some possible problems present
- The primary components of the simulation are the active version doing the puzzles, Elohim, and Milton. Elohim appears to be independent from Milton, because Elohim's programming is simple and constant while Milton's access to the knowledge of the archives seems to have caused it to develop a sort of sentient personality of its own.
-The program is activated by Alexandra right before she dies (it's unknown exactly when she does), and the plot begins at this point (the first version, which seemingly goes insane, is most likely the control version)
--- Events of the main story happen, including all previous versions leading up to it ---
(a quote by The Shepherd makes it clear that the previous programs were all independent, by stating that the final version would be "standing on our shoulders"; so you can assume that, if the programs did not terminate as a result of not being able to complete a puzzle, all of the programs either "gained immortality" or became a messenger, apart from the final Samsara version and The Shepherd)
(It seems that Samsara and The Shepherd were necessary steps in the AI's development that had to be fulfilled, and that once Samsara's and The Shepherd's ideal versions were achieved they were locked at the top of the tower, given that Samsara's version number is lower than The Shepherd's while higher than the Faith version's)
-The main story ends with transcendence, and the EL project is successful
My thoughts on the ending
-Even if you don't take Milton with you as a result of completing the necessary requirements for the "Deal with the Deceiver" achievement, it should be assumed that the archived files are accessible somewhere on the compound as it was a huge focus of the project to create a sort of digital Library of Alexandria
-The purpose of the EL project is to advance sentience past humanity, so even if the Talos unit is the only unit created the project is still successful. It seems however that they designed the project to give rise to a sort of "species" of robots based on the first successful version, which the successful version would create through some means. The project always seemed to be a last second effort to create human-like AI, so the project's purpose most likely wasn't thought out too much besides creating something past humans.
-While an airborne, highly contagious, dormant, and untreatable infection would cause huge percentages of death early on in areas with small to very large population densities, in places with few to no humans per square mile it is very possible that the inhabitants would not be exposed to the infection at all. Another possible use of the EL project is to help these humans start up society again and/or figure out a cure for the virus, so that these humans could be transplanted from their locations to the (assumed empty) cities
In response to what you guys are saying about how they only created one Talos unit to "roam the wastelands" endlessly: absence of evidence does not equate to evidence of absence. There is a good possibility that the Talos unit was given plans to create a race of sentient, artificially intelligent units, and that this part of the project was never mentioned because it would reveal too much about the purpose of the project to the versions. If it was revealed that the version which transcends would be able to leave the program and create its own ideal world, the fear of death and the desire for immortality would not be present. This would obviously make the program not function properly, and human-like artificial intelligence would not be achieved.
As for what you said in your first post about how the version which transcends would not be able to respect the world and whatnot, it is shown in the earlier versions that the developing AI reached the milestone of respecting the programmed world rather early on (I think it was version 17 which made the QR codes talking about how it found some of the locations very relaxing and serene). All of the AI versions built upon each other, so if one version experiences peace with the world then every version after that would be able to as well. This applies to really anything that would be lacking, as, like I said before, it seems that the final version's only goal was to transcend while the previous versions had other, separate milestones to achieve (Samsara's was to understand cycles/processes, The Shepherd's was to be helpful and self-sacrificing, etc).
I feel two more points can be added:
1. The Talos, after being released, is incredibly likely to survive and do its best to procreate. Reasons:
- it understands that it is not a solitary creature, and has every idea that it is not supposed to be alone, but rather to work together with its kin (a lot of QR codes imply that, as well as the final part with The Shepherd).
- it is very human in that it perceives the ideas of death, fear and survival. I daresay it may even have some kind of an instinct to procreate.
- it knows the difference between reality and simulation, at least it can ponder these ideas. Combined with the point above, it will probably not do anything foolish like jumping into a volcano, but instead be careful and plan its actions. He must also have a higher IQ than a human and be more rational.
- it is probably well-suited to the extreme conditions outside. If the simulation is at all close to reality, it can endure high falls, water and rain, cold and heat. If the simulation is NOT close to reality, that would be quite foolish of its creators, but even then the point above applies, as the Talos will only have to adapt to new conditions, which it will probably do with care.
2. EL0HIM and Milton may refer to the dogmatic and sceptical worldviews, as found in the works of Immanuel Kant (he is referred to in at least one document; I'm not sure if any other philosophers adhered to the exact same classification). The dogmatic worldview is following a firm set of beliefs, which may be right or wrong. The sceptical worldview is doubting everything, having no firm beliefs at all. These are the thesis and antithesis, where the synthesis is criticism, which, as far as I understand, refers to the ability to hold your beliefs while being able to reassess and refine them. The humans that created the simulation might have believed that only criticism makes for a decent problem-solving system, i.e. a proper human.