Funny thing for people to be saying on the INTERNET!
...one word: decentralization.
The important question isn't who, it's how.
What's in the code says a lot about how it will operate.
It's impossible to tell who will be the first to make a major AGI / ASI system - we can't even make decent predictions because, in theory, all you need is any person with a computer connected to the internet & enough know-how.
This is also why it's sometimes referred to as an "AI arms race".
Not all A.I. are going to be the same. Therefore, the people who realize the threat that an uninhibited ASI with no moral guidance poses are trying to come up with a way to create an ASI of their own - one which DOES have moral guidance & structures in its code that will make it safe... and honest... as quickly as possible.
...however, this also creates a logical paradox in A.I. safety, because building something safely & building something quickly are typically mutually exclusive.
...but at least someone who is taking any safety measures at all will likely do a better job than someone who is not - someone who is just building one out of curiosity or for corporate purposes...
https://www.youtube.com/watch?v=igKb2DhP7Ao&t=2m59
https://www.youtube.com/watch?v=-JlxuQ7tPgQ
What's more likely than that is the danger of corporate accidents with AI.
"Stamps are made of carbon, hydrogen, & oxygen, & people are made of carbon, hydrogen, & oxygen, & I'm going to need all of that which I can get to make more stamps so..."
We know our own intelligence evolved from a single genetic mutation a long time ago. The possibility of accidental intelligence is very real.
The potential for danger becomes even greater when a person dismisses it or doesn’t believe there is any. It’s a common way people are injured. This doesn’t mean we need to fear tools; just learn, respect, and use them responsibly.
Eh-- This is sorta true, but also untrue. While it can make you super lazy, it also frees you up to do more advanced things. Before writing was super common, people had to remember things; when typing was big, it freed you from worrying about your handwriting so you could focus on the writing; when PCs came along, you no longer had to mess with the mechanics of a typewriter-- as you go, you have less ♥♥♥♥ to handle.
This goes for AI too. As it is, now you don't have to stop to make reminders or figure out what 27452 times 18392 is-- the next step would be to get AI to spit out stochastic calculus so we don't have to ♥♥♥♥ with that and can do something else.
Why learn to drive when an AI can do it and you can put those motor skills to better use?
A great example would be the Apollo missions-- Those guys had to do so much math on the fly, and while they still can, and will if needed, most of that is freed up by a computer so they can tend to more mission-critical tasks-- the next step would be to pretty much remove the idea that it's a big deal to go to space, as if it's just leaving your house, freeing up the scientist to purely focus on his tasks.
True-- at some point, humanity will need to get rid of "money" and "jobs" per se and replace them with roles. It's a pretty interesting thought experiment to wonder about theoretical aliens that would visit Earth. How would you imagine their home world? Do you think they would be divided into countries, and within those countries be divided by politics, companies, and religion? They would probably be unified.
Except "all of humanity" wouldn't be responsible for its creation - just some people - so this point really amounts to blaming the victims.
Being dangerous is often about a large number of scenarios that don't entail "learning to become dangerous".
The most immediate one that comes up, repeatedly, is naivety: simply saying that everything will be "A-OK" and that we need not take any safety precautions whatsoever.
Being dangerous often comes from a lack of understanding & / or a lack of care as to what the consequences of our choices will be.
Being dangerous can come from just not being safe - not even trying to.
Because of "natural law", plenty of things are dangerous without intention.
Of course, to some extent, you already understand this:
...but it's worth pointing out some other examples for other people who still don't.
Somehow, I think it would pick an intelligent animal that is a little more compliant & a little less likely to be as deceptive as humans:
https://youtu.be/46nsTFfsBuc&t=3m05s
"the transfer of control of an activity or organization to several local offices or authorities rather than one single one."
Oh, the irony of people presenting this "solution" or asking this on a CLOUD-based computing service...
https://www.google.com/search?q=define+cloud+storage
"Cloud storage is a model of computer data storage in which the digital data is stored in logical pools, said to be on "the cloud". The physical storage spans multiple servers..."
https://www.google.com/search?q=define+data+center
"A data center or data centre is a building, dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems."
https://www.google.com/search?q=why+have+multiple+data+centers
"By implementing a multi-data center strategy that uses off-site storage to back up mission-critical data in a dedicated backup facility, companies can protect themselves from downtime. Data center redundancy can also provide protection from data loss in the event of a natural disaster or ransomware attack." (or any kind of attack, really)
You can't shut down an ASI for the same reason that you can't turn off the internet or shut down all of the world's power plants at the same time.
https://www.youtube.com/watch?v=WPnkpaFABrQ&t=1m33s
How can you not?
An AI isn't exactly a primitive structure; it's actually a complex system made up of several other complex systems, which are themselves made up of primitive structures.
...if people can't even get basic primitive blocks of code to function correctly, then how do you think they'll actually be able to upscale that without errors?
It's like expecting someone to be able to engineer & build a stable bridge in real life when they can't even get a model bridge - only a couple of feet long & made out of balsa wood - right in the classroom. It [the full-scale bridge] isn't going to happen in that scenario.
No, he's not, he's talking about AI.
If you actually watch the whole video, or even just the part at 2:52, you'll see that he also mentions how instrumental convergence applies to humans collecting money, & how this can be used to reliably predict things about the world, despite the fact that people don't want money just for the sake of having money.
Humans are ALSO a form of intelligence & yet you claim that these points only apply to simple programs.
Do you collect anything? Maybe action figures, amiibo, bottle caps, ...games on Steam (oh, look "359 games owned", it says here on this public profile of yours) ...money?
How does that, by your own position, not make you a "slave" to these capitalist desires, yourself?
Does everything you do serve some "higher purpose" or "greater good"? ...or do you sometimes engage in preferential activities (terminal goals) simply for the sake of enjoyment, & want some things for no other reason than that you "like them"?
You are a very complex biological system, & yet it is doubtful that you are completely "free" from any terminal attachments of your own. AGI & ASI are built on top of the primitive systems that are the building blocks of their overall structure.
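To make the instrumental-convergence point concrete, here's a hypothetical toy sketch in Python - the goal names, costs, and function are all made up purely for illustration, not taken from any real system. It shows how agents with completely different terminal goals can still converge on the same first instrumental step (acquiring resources), which is why money-collecting behavior is predictable even when no one values money for its own sake.

```python
# Hypothetical toy model of instrumental convergence: the goal names,
# costs, and decision rule below are invented for illustration only.

def best_first_action(goal_cost, funds=0):
    """One-step lookahead: pursue the terminal goal directly only if
    it is already affordable; otherwise acquire more resources first."""
    if funds >= goal_cost:
        return "pursue_goal_directly"
    return "acquire_money"

# Three agents with unrelated terminal goals (arbitrary costs):
goals = {"collect_stamps": 50, "cure_disease": 1000, "build_bridge": 300}
choices = {goal: best_first_action(cost) for goal, cost in goals.items()}
print(choices)
# Every agent, whatever its terminal goal, picks "acquire_money"
# as its first instrumental step.
```

The point of the sketch: knowing only that an agent has *some* unaffordable goal is enough to predict its first move, without knowing what the goal is.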
https://www.youtube.com/watch?v=9i1WlcCudpU
Are you telling me that you & everyone you know are, & were, perfect students & children, who never did anything that your parents or teachers didn't permit you to?
...there could even be actual restrictions, but... you can just get around those.
Youtube channels like "Lockpicking Lawyer" show how incredibly poor even our everyday analogue security measures are - most of them can be circumvented in a mere 2 seconds with the proper know-how & enough practice or natural skill to make it work on the first try.
A lot of research has been done, & debates held, by really smart people who disagree with you on that.
You won't find these kinds of debates over shrink rays, which even experts in scientific fields generally agree are such a large improbability that they're likely not possible - yet those same experts do debate among each other about the subject of impending "artificial general intelligence".
This is only technically correct because it oversimplifies the fact that hardware failures sometimes happen, & then malfunctions occur which were never programmed into the instructions.
I do QUITE appreciate this post of yours, though, as it quite succinctly highlights the majority of the issues.