For example, suppose there is a group of humans and robots, and the robots act and look exactly like the real humans, but they aren't alive and only follow a complex preset of code that, when executed, looks like real human behaviour.
How do we tell which one is which? And should we give those inanimate objects rights like humans?
Even today there are debates in respected scientific circles, not just about how to develop a true or "strong" artificial intelligence, but also about how to control it when it does appear.
Maybe a built-in kill switch? That seems a rather aggressive act towards a newly created intelligence.
Though once such an AI exists it may be difficult to control, especially if, almost by definition, it has access to the internet.
I personally consider the goals of transhumanism likely to be realised, along with some kind of singularity-type event in the future.
Perhaps a true AI will come about as we augment our own memories and synapses with some kind of digital assistant, a future Cortana or Siri (kill me now), a merging of man and machine?
Regarding cybernetics, humanity may well augment its abilities, firstly for medical or military needs, and then the science will filter down to the civilian sector, bringing its own questions regarding augmentation.
It may not ever be possible for machines to be conscious as humans are - currently that's a philosophical question as well as a scientific one.
However, I do consider it possible that a sufficiently advanced machine could become aware of itself and evolve from there.
There are also the mirror tests done with animals, where some will then proceed to check themselves because they realise the reflection in the mirror is not "true" - it is their own image, not another animal.
Even a crow, with its tiny brain, displays intelligence, a good memory, and an ability to make and use tools to obtain its goals. But does that make it self-aware?
I would feel strongly against cruelty to this crow, and would try to help it if it were tangled up or suffering, but on my part this could just be empathy for living things.
There's also the point that I think humans can project aspects of themselves onto things to some degree, even when said things are inanimate. We give names to cars, and assume that some animals think the same way we do, with the same needs, when in fact they have their own orders of intelligence.
Perhaps it will be the same with robots, and that is one reason we find the "uncanny valley" disturbing on some level.
https://en.wikipedia.org/wiki/Uncanny_valley
Animal cruelty is generally bad, as most here would agree, but is it worse if the animal has emotions or a genuine awareness of self?
So as well as a scientific and philosophical question, the concept of a true artificial intelligence becomes an ethical one too.
Should a true artificial intelligence have "Human" rights?
I would have no issue destroying an unthinking machine of today.
Heck I have even considered ending the existence of my valued gaming rig or punching the monitor lol.
I would consider a machine that displayed awareness worthy of my respect, but the question becomes - is this awareness merely a simulation of human qualities, a program, or is it real?
When questioned, the machine may ask "can you prove to me that your sense of self is real"...
It's a big old can of worms!
But personally I consider all animals in their natural environment "worthy of respect" even if they do not appear to be self-aware.
(Unless they pose a direct threat to me, or I need food)
Why do I think this kind of future is likely? A few reasons:
1. The rate at which computer technology is improving, and the radical kinds of improvements or changes that may come in the future, like quantum computing, optical computing, and improved, more capable brain implants and cybernetics.
2. The funding given to, and the need for, military and medical applications, and the perceived benefits to society in a civilian environment.
3. Our gradually improving understanding of how our own brains work and process information and memories, and of how neural networks might interface at a machine level.
4. The current trend of artificial intelligence being one of the "holy grails" of science, and the amount of our resources invested in its development.
5. The concept of "emergent behaviours", whereby seemingly complex systems can form and naturally organise themselves from a few basic rules or attributes (see the small code sketch after this list).
6. A point in the future - theoretically - where an advanced robot or machine can design and build another of itself, but with technological improvements that humans could not have engineered or foreseen.
The idea of machines building machines with no human agency involved would lead to vast improvements over time, and possibly ever-increasing degrees of intelligence.
But in the end, humans would have no understanding of the new kind of robots or the true nature of their intelligence, because humanity was locked out of the planning and design stage millennia ago - even if it was humanity itself that conceived and constructed the first generation of these machines.
Sadface.
7. Corporations being able to beam AI-assisted, targeted advertising directly into people's minds, behavioural control, monitoring of the population.
Evil tinfoil hat stuff lol
8. Science fiction.
I'm actually half serious: a lot of ideas from science fiction have inspired designers and evolved into actual products, and there is currently an almighty number of movies etc. on transhumanism topics, artificial intelligence and its ethical issues or perceived dangers, robots and so on.
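As a small illustration of point 5, here is a minimal sketch of emergence in code - Conway's Game of Life, written in Python purely as my own example, nothing from this thread. Each cell follows only two local rules about its neighbours, yet moving "glider" patterns and other ordered structures organise themselves out of that with no explicit programming.

# Conway's Game of Life: each cell obeys two local rules, yet ordered,
# seemingly "purposeful" patterns (gliders, oscillators) emerge on their own.
def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count live neighbours on a wrap-around grid.
            neighbours = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # Rule 1: a live cell survives with 2 or 3 live neighbours.
            # Rule 2: a dead cell comes alive with exactly 3 live neighbours.
            new[r][c] = 1 if neighbours == 3 or (grid[r][c] and neighbours == 2) else 0
    return new

# Seed a 10x10 grid with a "glider" and watch it crawl across the grid.
grid = [[0] * 10 for _ in range(10)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(8):
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid), "\n")
    grid = step(grid)

Nobody tells the glider to travel; that behaviour simply falls out of the two rules, which is the sense of "emergence" meant above.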
Bonus far out idea.
Maybe it's just a theme of the times we live in, but funnily enough I have "half thought" for a moment that these "far out" ideas and concepts are being seeded deliberately.
Someone has decreed that new kinds of tech on the horizon may seem radical or even morally wrong to some people.
So the ideas and concepts need to be put out into society in the friendly Trojan-horse guise of the media and a hundred movies on the topic, causing discussion and so on, to warm people to the idea of having robots around that can seem human, if only in appearance, or of powered exoskeletons for medical use and mind-controlled machines using brain implants....
The goal being to desensitise the public to a new generation of such technologies, so they don't seem so shocking and people aren't morally outraged into all kinds of panicked knee-jerk reactions.
A drip feed of information basically instead of one big dose!
It would be as difficult for an AI to prove its self-awareness as it is for me to prove mine to you, or to my mother, lol.
You are correct, everything is literally observation and trusting that our limited senses give us accurate information!
And that is not always the case - our senses are quite prone to error, and even the mind that interprets the data is naturally biased by various assumptions about its environment and by facts as understood at our current level of scientific understanding.
There are other issues with giving robots rights, like free will. Humanity itself is still split on whether we possess free will, or what it even is; how much harder would it be to consider that for a machine, even one that is sentient by definition?