
Sam Harris

Samuel Benjamin Harris (born April 9, 1967) is an American author, philosopher, and neuroscientist. He is a critic of religion and a proponent of the liberty to criticize it. His first book, The End of Faith (2004), is a critique of organized religion.

Born: April 9, 1967; Los Angeles, California, U.S.

We will build machines that are so much more competent than we are

Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris — and not just in some theoretical way. We're going to build superhuman machines, says Harris, but we haven't yet grappled with the problems associated with creating something that may treat us the way we treat ants.

… It's not that our machines will become spontaneously malevolent. The concern is really that we will build machines that are so much more competent than we are that the slightest divergence between their goals and our own could destroy us.

Just think about how we relate to ants. We don't hate them. We don't go out of our way to harm them. In fact, sometimes we take pains not to harm them. We step over them on the sidewalk. But whenever their presence seriously conflicts with one of our goals, let's say when constructing a building like this one, we annihilate them without a qualm. The concern is that we will one day build machines that, whether they're conscious or not, could treat us with similar disregard.

It's crucial to realize that the rate of progress doesn't matter, because any progress is enough to get us into the end zone. We don't need Moore's law to continue. We don't need exponential progress. We just need to keep going.

Finally, we don't stand on a peak of intelligence, or anywhere near it, likely. And this really is the crucial insight. This is what makes our situation so precarious, and this is what makes our intuitions about risk so unreliable.

It seems overwhelmingly likely, however, that the spectrum of intelligence extends much further than we currently conceive, and if we build machines that are more intelligent than we are, they will very likely explore this spectrum in ways that we can't imagine, and exceed us in ways that we can't imagine.

And it's important to recognize that this is true by virtue of speed alone. Right? So imagine if we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it. So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
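Harris's speed argument is simple arithmetic: if electronic circuits switch roughly a million times faster than biochemical ones, one wall-clock week of machine thinking equals about a million weeks of human-level work. A minimal sketch, assuming only that flat million-fold speedup (the figure Harris cites, not a measured constant):

```python
# Sketch of the speed-up arithmetic from the talk.
# Assumption: a flat 1,000,000x speed advantage of electronic
# over biochemical circuits, as Harris states.
SPEEDUP = 1_000_000
WEEKS_PER_YEAR = 365.25 / 7  # about 52.18

def equivalent_years(wall_clock_weeks: float, speedup: float = SPEEDUP) -> float:
    """Human-equivalent years of intellectual work done in the
    given number of wall-clock weeks at the assumed speedup."""
    return wall_clock_weeks * speedup / WEEKS_PER_YEAR

# One week of machine time comes out to roughly 19,000 years of
# human-level work, which Harris rounds to "20,000 years".
print(f"{equivalent_years(1):,.0f} years")
```

The result is closer to 19,000 than 20,000 years; Harris is speaking in round numbers, and the order of magnitude is what carries the argument.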

Sam Harris
Can we build AI without losing control over it?
(TED Talk, Jun 2016)

       
     
© 2012—2017 1260.org