Is The A.I. Singularity Coming And If So When?

mudflaps

Original Poster:

317 posts

106 months

Thursday 16th July 2015
Following on from the thread about the TV programme 'Humans' (and so as not to take that thread O/T again), I'm interested to hear what the contributors to this Science forum think is the future timeline for the eventual creation of an A.I. that is at least as smart as the average human being.

Or maybe you don't think it'll ever happen.

Ray Kurzweil thinks it'll be here within the next 20 years, give or take 5, and after that everything will change almost overnight as it starts to run its own version of evolution at incredible speed. There are of course many naysayers to his predictions out there.

Polls of scientists show that over 50% of them think that it'll be here by 2045 making that a 'probable scenario' in the view of the scientific establishment.

I'd run a poll on the decade (2020s, 2030s, 2040s, >100 years, >1,000 years, Not At All, etc.) in which you think it'll happen, but I don't know how to.

mudflaps

Thursday 16th July 2015
Damn, 2 minutes longer than I thought. biggrin

mudflaps

Thursday 16th July 2015
BGARK said:
Didn't know that was out there. I started this thread with the intention of doing a poll but when composing the opening post didn't see the option for one.

mudflaps

Thursday 16th July 2015
Guvernator said:
I'd define true AI as the ability to learn and adapt to your environment without any external help or any pre-programmed/stored routines, i.e. the "code" would literally be written on the fly, much like the human brain creates new neuron pathways and connections as it learns.
From 2:12 onwards, a self-learning algorithm on a neural net:

http://www.bbc.co.uk/news/technology-33481535

Indeed, self-learning (deep-learning) algorithms are evolving at an exponential rate, as this talk states and demonstrates:

https://www.ted.com/talks/jeremy_howard_the_wonder...

(That link is very enlightening by the way)

The Blurb:

"What happens when we teach a computer how to learn? Technologist Jeremy Howard shares some surprising new developments in the fast-moving field of deep learning, a technique that can give computers the ability to learn Chinese, or to recognize objects in photos, or to help think through a medical diagnosis. (One deep learning tool, after watching hours of YouTube, taught itself the concept of “cats.”) Get caught up on a field that will change the way the computers around you behave … sooner than you probably think."
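For a feel of what "self-learning" means mechanically, here is a minimal sketch (not Google's system, just a toy perceptron of my own devising): the programmer never writes the rule; the weights start at zero and are adjusted purely from examples, which is the "writing the code on the fly" idea in miniature.

```python
# Toy perceptron learning the logical AND function from examples.
# Nothing about AND is pre-programmed: the weights start at zero and
# are adjusted purely from the training data (the "self-learning" step).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]   # weights, learned from data
b = 0.0          # bias, learned from data
lr = 0.1         # learning rate (chosen arbitrarily for this sketch)

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in examples:
        out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        err = target - out               # the error drives the weight update
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# After training, the learned weights reproduce AND: [0, 0, 0, 1]
print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in examples])
```

Deep learning stacks many layers of units like this and learns far richer functions, but the principle is the same: behaviour comes from data, not from hand-written rules.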


mudflaps

Thursday 16th July 2015
Guvernator said:
Certainly very interesting, but still quite limited in that they can only "learn" one specific task, and still with lots of help. We are getting quite good at limited or narrow AI; it's a step in the right direction to be sure, but I'd want to see proof that it can learn anything, even something it hasn't been programmed for, and that it actually understands what it is learning rather than just mimicking it with some clever algorithm.
I'm not arguing that AI is with us now, or even will be in 10 years' time, just that the growth is now on a fixed exponential curve. The chap in that link demonstrated that in some fields algorithms have already overtaken human capability, and in others they are now very, very close.

Of course there's still a long way to go, but an exponential growth rate devours "a long way to go" very, very quickly.
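To put a rough number on that: under exponential growth, the last "99% of the distance" takes about as long as the first 1% did. A quick sketch, assuming a purely illustrative doubling period of two years:

```python
# Illustrative only: assume capability doubles every 2 years (hypothetical rate).
# Starting at 1% of "human level", count the years until 100% is passed.
level = 1.0          # percent of the target capability
years = 0
while level < 100.0:
    level *= 2       # one doubling
    years += 2       # assumed doubling period in years
print(years, level)  # → 14 128.0: seven doublings and the gap is gone
```

The exact doubling period is an assumption, but the shape of the argument holds for any exponential: almost all of the apparent progress arrives in the final few doublings.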






mudflaps

Thursday 16th July 2015
Guvernator said:
but you could just as easily hit a brick wall which means you don't make any substantial breakthroughs for decades.
You 'could', but there is no evidence that we 'will'. Indeed, all of the evidence thus far IS for exponential growth having taken place since the mid-20th century and still being on course.

One stumbling block in one field doesn't make the overall picture any less so.

mudflaps

Thursday 16th July 2015
ZOLLAR said:
Did you read this webpage also? (the pics you post are from there)

http://waitbutwhy.com/2015/01/artificial-intellige...

It really blew my mind, part of me hopes I'm not around when it happens (based on predictions I will be) and another part of me wants to see it happen!
I did, a few weeks back, yes, and it is a bit of a wake-up call.

This is why leaders in the computing sector are now issuing warnings - they see the fast-approaching AI juggernaut and recognise that not enough is being done to prepare for it, whatever 'it' might be.

Did you read part 2?

http://waitbutwhy.com/2015/01/artificial-intellige...

Edited by mudflaps on Thursday 16th July 15:10

mudflaps

Thursday 16th July 2015
Guvernator said:
mudflaps said:
I did, a few weeks back, yes, and it is a bit of a wake-up call.

This is why leaders in the computing sector are now issuing warnings - they see the fast-approaching AI juggernaut and recognise that not enough is being done to prepare for it, whatever 'it' might be.
Again I ask the question, if "it" is coming, why do we automatically assume that's a bad thing? Why do we need to issue warnings?
I don't think we are automatically assuming that it'll be a bad thing; I think they are saying we had best start thinking about this, just in case.

mudflaps

Thursday 16th July 2015
p1stonhead said:
Going back to the fifth paragraph of my opening post

mudflaps said:
Polls of scientists show that over 50% of them think that it'll be here by 2045 making that a 'probable scenario' in the view of the scientific establishment.
Part 2 of that article says:

In 2013 Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: “For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such Human Level Machine Intelligence to exist?”

It asked them to name an optimistic year (one in which they believe there’s a 10% chance we’ll have AGI), a realistic guess (a year they believe there’s a 50% chance of AGI i.e. after that year they think it’s more likely than not that we’ll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we’ll have AGI). Gathered together as one data set, here were the results:

Median optimistic year (10% likelihood): 2022
Median realistic year (50% likelihood): 2040
Median pessimistic year (90% likelihood): 2075


So the median participant thinks it’s more likely than not that we’ll have AGI 25 years from now. The 90% median answer of 2075 means that if you’re a teenager right now, the median respondent, along with over half of the group of AI experts is almost certain AGI will happen within your lifetime.

A separate study conducted recently by author James Barrat at Ben Goertzel’s annual AGI Conference did away with percentages and simply asked when participants thought AGI would be achieved—by 2030, by 2050, by 2100, after 2100, or never. The results:

By 2030: 42% of respondents
By 2050: 25%
By 2100: 20%
After 2100: 10%
Never: 2%

Pretty similar to Müller and Bostrom’s outcomes. In Barrat’s survey, over two thirds of participants believe AGI will be here by 2050 and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don’t think AGI is part of our future.
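The article's summary statements are just cumulative sums over the quoted buckets; a quick check of the arithmetic:

```python
# Barrat's survey buckets exactly as quoted above (percent of respondents).
buckets = {"by 2030": 42, "by 2050": 25, "by 2100": 20, "after 2100": 10, "never": 2}

# "over two thirds of participants believe AGI will be here by 2050"
by_2050 = buckets["by 2030"] + buckets["by 2050"]
print(by_2050)             # → 67, i.e. just over two thirds

# "a little less than half predict AGI within the next 15 years" (2015 + 15 ≈ 2030)
print(buckets["by 2030"])  # → 42
```

(The buckets total 99% rather than 100%, presumably down to rounding in the original article.)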

mudflaps

Thursday 16th July 2015
anonymous said:
[redacted]
I think this is part of the perceived problem. Governments and the public at large are almost completely in the dark as to the up-to-the-minute state of play of the research and development. Google, Apple et al. will tell you that this is due to commercial considerations, but there appears to be almost zero oversight of what is happening in (say) Google's labs, and when we do find out some glimmer of news it's invariably out-of-date news.

Your quote about "their (Google's) systems are able to automatically translate between pretty much every spoken language on the planet, properly understand queries written in natural language" is now old(ish) news, and Google are prepared to talk to the BBC about it (I posted a link earlier), but there are clearly things being held back that are even further along the development road.

A bit like a manufacturer releasing Windows 10 to the public whilst, back in the office, version 12 is already well under way.

mudflaps

Thursday 16th July 2015
ReaderScars said:
p1stonhead said:
I agree, definitely the best article I've read about AI.
Everybody with an interest in the subject should read it.

mudflaps

Thursday 16th July 2015
Guvernator said:
In short, where's my jetpack? smile
hehe

Seriously. For £100 today you can carry around a device that has the sum of all human knowledge at its command - greater than every library on earth combined. There is almost nothing you cannot know almost instantly, provided you have a connection.

It can run applications for just about any activity you can think of (RunPee is the ultimate for me), make video calls and send messages to almost anywhere on the planet instantly, and give you directions to and from anywhere, and it knows where you are on the face of the globe.

Just 25 years ago my Dad would have said that was pure magic.

mudflaps

Thursday 16th July 2015
anonymous said:
[redacted]
I don't know, and that brings me back to the problem really. Who does? Who is thinking about the implications of one day waking up to a world where AGI suddenly exists? Where ASI might be just days away? Where's the 'brake' on this? The time for reflection and consideration of the possible impacts?

I don't think it'll necessarily be a bad thing, but where is the 'pause button' whilst greater intellects than mine have a conflab? Unfortunately I do think that the likes of Google, Apple, MS etc. are in an 'AI arms race', and so I worry that commercial expediency will inevitably outweigh thorough investigation. Can you imagine the riches to be made by being the first company to the altar of AGI?

mudflaps

Thursday 16th July 2015
warp9 said:
The recent developments with siri/talk to google etc and conversations like this thread make me wonder how close we are to this technology moving sideways into other real world applications.
Fully developed AGI threatens most of the service sector, I'd have thought, and the service sector is now where most of the Western world lives and works.

On another note I'd just like to reiterate what Bill Gates said. In response to a question about whether or not machine intelligence could become a threat, the former Microsoft CEO gave the following answer:

"First the machines will do a lot of jobs for us and not be super intelligent," he wrote. "That should be positive if we manage it well."

"A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned."

mudflaps

Friday 17th July 2015
Joey Ramone said:
You're thinking like a human. AI won't think like a human.
Agreed. I don't even think an AGI (Artificial General Intelligence, i.e. human-level intelligence) necessitates having a 'mind' or 'consciousness'. Although I've heard arguments that we might need to trick it into thinking that it has one in order to engender some types of behaviour that might be advantageous to us.

mudflaps

Friday 17th July 2015
anonymous said:
[redacted]
As I said in another thread, we might already have an alternative to quantum computers:

mudflaps said:
Other avenues are being explored to overcome the silicon-chip limit, for instance 3D silicon chips.

http://www.gizmag.com/high-rise-3d-chips-big-data/...

From that article

This research is still in its early stages, but the scientists say their design and manufacturing techniques are scalable and could lead to a significant leap in computing performance.

"Monolithic 3D integration of logic and memory and emerging nanotechnologies like CNT transistors are promising steps for building the next generation of ultra-high efficiency and high performance electronic systems that can operate on massive amounts of data," says Shulaker. "The ability to operate on massive amounts of data in an energy-efficient manner could enable new applications that we can’t dream of today."
And I agree with your "within our lifetime" comment.

mudflaps

Friday 17th July 2015
0000 said:
Dictation software is a classic example. Google and Apple get slightly better results predominantly through having access to vastly more data, a route they take because they've failed to improve the algorithms at anything other than a glacial pace let alone exponential.
So Google's move to deep learning is a "glacial pace"?

Not exponential, you say?

Jeremy would beg to differ: https://www.ted.com/talks/jeremy_howard_the_wonder...

mudflaps

Friday 17th July 2015
anonymous said:
[redacted]
I could ask how old you are, but to save your modesty I'll just ask when you think AGI will be with us. 2050? Sooner? Later?

mudflaps

Friday 17th July 2015
0000 said:
He's a salesman predicting a revolution who founded a machine learning company in August and made that talk in December...
Yeah, he's just a tad more than that biggrin



mudflaps

Friday 17th July 2015
0000 said:
He's a salesman predicting a revolution who founded a machine learning company in August and made that talk in December...
Oh, and "play the ball, not the man". If you disagree with what he's said then you need to say so and explain why - where he is wrong, etc. Because I was mighty impressed with that talk.