Is The A.I. Singularity Coming And If So When?


jbudgie

8,925 posts

212 months

Saturday 16th April 2016
When large intellects like Hawking and Kurzweil are worried about the implications of AI, it seems to me that we should listen closely.

Flooble

5,565 posts

100 months

Saturday 16th April 2016
ash73 said:
...
Yes, very good point. In the context of computers I think it's basically using spare processing capacity to think about other things, unprompted. It's a small step from there to becoming self-aware.
...
Ah yes, indeed, that's an interesting point about spare processing capacity. The question in my mind is how the AI reaches the point where it does that "unprompted thinking". I struggle to see how or why anyone would write a program that basically day-dreamed. But also how it would reach an intelligent point - humans take a good 20-25 years before their neural networks are "complete". (I really do mean 25 years, BTW - 18-year-old men are proven not to have finished building their risk-modelling pathways ... evidence available in your nearest hedge.)

ash73 said:
...
Just stop going to war for one year and we can go to Mars.
...
Sobs, don't remind me how much money Bush and Blair poured away :-(

0000

13,812 posts

191 months

Saturday 16th April 2016
RobDickinson said:
0000 said:
I'm a software developer. I say it won't happen without something revolutionary; it's not going to just evolve in my lifetime, same as it hasn't in the lifetime people have already been talking about this.
I find that an odd statement, especially from someone in IT.

We couldn't do it before, so we won't be able to do it in the future?

We've never had the raw processing power before. We're getting more experienced with learning systems etc.

See the news above, we've simulated 1 second of brain activity. That took 40min.

According to my calculations we can run that in real time in about 18 years, if nothing else changes to make it quicker.
Simulated being the operative word. It's not that we couldn't do it before so we can't do it in the future - in fact I suspect we can do it in the future - I just don't think relying on Moore's law is going to get us there.

My AI professor thought as much and while I was undecided when I heard him say it I've become increasingly convinced he was right.
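
(For what it's worth, the quoted 18-year figure is just doubling arithmetic: 40 minutes for 1 second of activity is a ~2,400x shortfall from real time, and at one doubling every 18 months - the usual Moore's law assumption, not something stated above - that's roughly log2(2400) ≈ 11 doublings, or about 17 years. A minimal sketch of that arithmetic:)

```python
import math

# Reported result: 1 second of brain activity took 40 minutes to simulate.
speedup_needed = 40 * 60 / 1            # ~2,400x short of real time

# Assumption for the sketch: processing power doubles every 18 months.
months_per_doubling = 18

doublings = math.log2(speedup_needed)   # ~11.2 doublings needed
years = doublings * months_per_doubling / 12

print(f"~{doublings:.1f} doublings, ~{years:.0f} years to real time")  # ~17 years
```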

RobDickinson

31,343 posts

254 months

Saturday 16th April 2016
As for commercial interests and incentives..

Google exist because they created a simple learning system.

Making AI would make you a fortune.

mebe

292 posts

143 months

Sunday 17th April 2016
Flooble said:
I wonder how many people on here in the "it's coming" camp also work in Software Development?

And how many "it'll never happen" fall into the category of "developer"?

I'd love to see a Venn Diagram. With experience comes cynicism ...
32 Years professionally and about 40 years total in writing software and/or managing people writing software.

My own view is we haven't a clue what makes intelligence. So we can soon simulate an entire brain - great, good luck knowing what to program it with. The brain is a complex thing: most of its inputs are subtle chemical signals, and the remainder are high-bandwidth things like vision, which need a sense of touch to interpret, then memory (but what did we remember?) to give any sense of understanding, and years of training (about 16-18 for us). Apparently cells in our nervous system pre-process inputs before sending them to the brain, and we are only just starting to think about what they are doing. No, it's not happening any time soon.

A computer isn't intelligent until it can understand itself, have needs, understand how it can change them, and be able to change, evaluate change and evolve. Sure, we have deep learning, but there is zero intelligence there, just great algorithms.

In the next 20 years computers will get a whole lot better at doing complex seemingly magical tasks that seem to be only possible with intelligence, but there will be no intelligence there.

Einion Yrth

19,575 posts

244 months

Sunday 17th April 2016
mebe said:
Flooble said:
I wonder how many people on here in the "it's coming" camp also work in Software Development?

And how many "it'll never happen" fall into the category of "developer"?

I'd love to see a Venn Diagram. With experience comes cynicism ...
32 Years professionally and about 40 years total in writing software and/or managing people writing software.

My own view is we haven't a clue what makes intelligence. So we can soon simulate an entire brain - great, good luck knowing what to program it with. The brain is a complex thing: most of its inputs are subtle chemical signals, and the remainder are high-bandwidth things like vision, which need a sense of touch to interpret, then memory (but what did we remember?) to give any sense of understanding, and years of training (about 16-18 for us). Apparently cells in our nervous system pre-process inputs before sending them to the brain, and we are only just starting to think about what they are doing. No, it's not happening any time soon.

A computer isn't intelligent until it can understand itself, have needs, understand how it can change them, and be able to change, evaluate change and evolve. Sure, we have deep learning, but there is zero intelligence there, just great algorithms.

In the next 20 years computers will get a whole lot better at doing complex seemingly magical tasks that seem to be only possible with intelligence, but there will be no intelligence there.
Much the same as Mebe in terms of time and experience. I think he's probably wrong.

mebe

292 posts

143 months

Sunday 17th April 2016
ash73 said:
I think you're using the word intelligence to describe, in some mystical sense, whatever it is we don't understand. Once we do understand the human brain's functions it will be nothing but algorithms too. At the end of the day it's a computer; imagine pulling apart a PC and trying to understand every detail of its programming using nothing but a voltmeter.
Guess the parameters of the question are ill-defined. To me AI means something more than a bunch of algorithms - I could probably knock up a quick snail simulator in a few moments, however it's not going to evolve and adapt to its environment.

Toaster

2,939 posts

193 months

Sunday 17th April 2016
Einion Yrth said:
mebe said:
Flooble said:
I wonder how many people on here in the "it's coming" camp also work in Software Development?

And how many "it'll never happen" fall into the category of "developer"?

I'd love to see a Venn Diagram. With experience comes cynicism ...
32 Years professionally and about 40 years total in writing software and/or managing people writing software.

My own view is we haven't a clue what makes intelligence. So we can soon simulate an entire brain - great, good luck knowing what to program it with. The brain is a complex thing: most of its inputs are subtle chemical signals, and the remainder are high-bandwidth things like vision, which need a sense of touch to interpret, then memory (but what did we remember?) to give any sense of understanding, and years of training (about 16-18 for us). Apparently cells in our nervous system pre-process inputs before sending them to the brain, and we are only just starting to think about what they are doing. No, it's not happening any time soon.

A computer isn't intelligent until it can understand itself, have needs, understand how it can change them, and be able to change, evaluate change and evolve. Sure, we have deep learning, but there is zero intelligence there, just great algorithms.

In the next 20 years computers will get a whole lot better at doing complex seemingly magical tasks that seem to be only possible with intelligence, but there will be no intelligence there.
Much the same as Mebe in terms of time and experience. I think he's probably wrong.
I think mebe is right but then again some would say I'm being negative

Toaster

2,939 posts

193 months

Sunday 17th April 2016
ash73 said:
I think you're using the word intelligence to describe, in some mystical sense, whatever it is we don't understand. Once we do understand the human brain's functions it will be nothing but algorithms too. At the end of the day it's a computer; imagine pulling apart a PC and trying to understand every detail of its programming using nothing but a voltmeter.
The brain is not a computer; it doesn't have software, and that's just for starters. It may be that there are conceptual parallels that can be drawn, but it's a living organism.

davepoth

29,395 posts

199 months

Sunday 17th April 2016
Toaster said:
The brain is not a computer; it doesn't have software, and that's just for starters. It may be that there are conceptual parallels that can be drawn, but it's a living organism.
http://motherboard.vice.com/read/komiku-neuron-com...
https://www.newscientist.com/article/dn15019-compu...
http://singularityhub.com/2016/03/17/this-amazing-...

I beg to differ.

glazbagun

14,280 posts

197 months

Sunday 17th April 2016
mebe said:
ash73 said:
I think you're using the word intelligence to describe, in some mystical sense, whatever it is we don't understand. Once we do understand the human brain's functions it will be nothing but algorithms too. At the end of the day it's a computer; imagine pulling apart a PC and trying to understand every detail of its programming using nothing but a voltmeter.
Guess the parameters of the question are ill-defined. To me AI means something more than a bunch of algorithms - I could probably knock up a quick snail simulator in a few moments, however it's not going to evolve and adapt to its environment.
I think our first intelligences will be very good search engines or Google/Tesla autopilots, i.e. specialized tool algorithms that have taught themselves using a pre-programmed process and whatever inputs they subsequently receive. When an ant is following a pheromone trail it's using intelligence, even if it isn't using self-awareness, and we have parts of our nervous system which process signals before they ever reach whatever our "consciousness" is.

If we can develop an intelligence whose job is to optimise another "tool kind of intelligence" that is part of its own system, I would consider that a proto-self-awareness, although I can't imagine how an algorithm comes up with the concept of "I" vs "me" by itself.

After all, there is no single "us" that lives only in one part of our brain, we are simply an amalgam of our own various processes.
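
(The ant example is easy to make concrete: trail-following needs nothing more than comparing two sensor readings and turning, with no model of "self" anywhere. A toy sketch - the grid, the two-antenna sensing and the turn rule are all invented for illustration, not a model of real ant behaviour:)

```python
import random

def follow_trail(pheromone, x, y, heading, steps=100):
    """Toy trail-follower: turn towards the stronger of two 'antennae' readings.

    pheromone maps (x, y) grid cells to scent strength. There is no
    self-model here, just a sense-compare-turn loop repeated each step.
    """
    # Eight compass headings as (dx, dy) unit steps.
    dirs = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
    for _ in range(steps):
        left = dirs[(heading + 1) % 8]
        right = dirs[(heading - 1) % 8]
        scent_left = pheromone.get((x + left[0], y + left[1]), 0.0)
        scent_right = pheromone.get((x + right[0], y + right[1]), 0.0)
        if scent_left > scent_right:
            heading = (heading + 1) % 8                           # veer left
        elif scent_right > scent_left:
            heading = (heading - 1) % 8                           # veer right
        else:
            heading = (heading + random.choice([-1, 0, 1])) % 8   # wander
        dx, dy = dirs[heading]
        x, y = x + dx, y + dy
    return x, y

# Example: a straight scent trail along y = 0, ant released just above it.
trail = {(i, 0): 1.0 for i in range(50)}
print(follow_trail(trail, x=0, y=1, heading=0))
```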

RobDickinson

31,343 posts

254 months

Monday 18th April 2016
According to waitbutwhy we already have very good limited AI - AI that is specific to searching etc. Google's language translation is entirely AI.

We even recently had an AI that learned to play a game and beat a master. This is generic AI.

There's a fair bit of "I can't imagine it / I don't understand it, therefore it can't happen" in this thread.

I'll admit I don't know how the brain works past the basics. We might not create AI by imitating a brain; it might (or might not) happen some other way.

But at this point in time we have, or will soon have (next 20 years), enough processing power to emulate a human brain. AI might come sooner, or take a little longer. But I'm with Hawking et al in that it will happen.

I've been writing software since 1981 too... and it's all been 'crud' for the most part, boring non-intelligent stuff for people to use. That doesn't mean there aren't propeller-heads out there doing crazy cool stuff.

mebe

292 posts

143 months

Monday 18th April 2016
RobDickinson said:
But at this point in time we have, or will soon have (next 20 years), enough processing power to emulate a human brain. AI might come sooner, or take a little longer. But I'm with Hawking et al in that it will happen.
We already have enough processing power to emulate smaller brains but the world isn't awash with artificial cat intelligences.

Einion Yrth

19,575 posts

244 months

Monday 18th April 2016
mebe said:
RobDickinson said:
But at this point in time we have, or will soon have (next 20 years), enough processing power to emulate a human brain. AI might come sooner, or take a little longer. But I'm with Hawking et al in that it will happen.
We already have enough processing power to emulate smaller brains but the world isn't awash with artificial cat intelligences.
Well, I don't think we can do a cat yet, as a straight simulation anyway. Also they'd be of limited utility, spending 23 out of every 24 hours asleep, for instance.

otolith

56,144 posts

204 months

Monday 18th April 2016
We have a literal nematode brain simulation.

One of the challenges of simulating the connectomes of larger brains is that you have to map them. The nematode studied has only 302 neurons.

https://en.wikipedia.org/wiki/List_of_animals_by_n...

glazbagun

14,280 posts

197 months

Monday 18th April 2016
otolith said:
We have a literal nematode brain simulation.

One of the challenges of simulating the connectomes of larger brains is that you have to map them. The nematode studied has only 302 neurons.

https://en.wikipedia.org/wiki/List_of_animals_by_n...
Yeah, OpenWorm was mentioned in the thread a few pages back. Naturally the next thing to do was give it a Lego body:

https://youtu.be/YWQnzylhgHc

otolith

56,144 posts

204 months

Monday 18th April 2016
I thought it might have been. We can map the connectome of vertebrates on a macro scale using scanning, but doing it at the cellular level requires old fashioned dissection, sectioning, staining and microscopy. I guess it would be possible to automate that. An alternative approach might be to look at the embryological development of vertebrate brains and see if we can write algorithms to model their growth.

Jabbah

1,331 posts

154 months

Monday 18th April 2016
The problem with direct brain simulations is that we don't know exactly how all the physical processes work inside the brain; the connectome is only half the story. The main problem is being able to work out whether certain synapses are excitatory or inhibitory. For C. elegans the best model was created using genetic algorithms to determine the synaptic strengths. This is possible for a simple worm with 302 neurons and 7,500 synapses, but when you get to larger mammal brains, such as a cat's with 760 million neurons and 10^13 synapses, it is slightly more difficult. It is also very inefficient to simulate the brain directly, as you are simulating the physical processes that are processing the information; much better to work out the basic rules of neural processing and create new processes in software and/or hardware to do similar processing.
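
(A rough illustration of the "genetic algorithm fits the synaptic strengths" idea, not the actual OpenWorm code: treat the unknown weights on a fixed wiring diagram as a genome, simulate, score against a recorded activity trace, keep the best candidates and mutate them. The network model, sizes and fitness target below are all stand-ins; in reality the target would be recorded worm behaviour.)

```python
import numpy as np

rng = np.random.default_rng(0)

N_NEURONS, N_STEPS = 20, 50        # tiny stand-in for 302 neurons / 7,500 synapses
connectome = rng.random((N_NEURONS, N_NEURONS)) < 0.2   # fixed wiring: who connects to whom

def simulate(weights, steps=N_STEPS):
    """Crude rate model: each step, activity is a squashed weighted sum of the last step."""
    w = np.where(connectome, weights, 0.0)   # only wired connections carry weight
    x = np.zeros(N_NEURONS)
    x[0] = 1.0                               # poke one neuron to start things off
    trace = []
    for _ in range(steps):
        x = np.tanh(w @ x)
        trace.append(x.copy())
    return np.array(trace)

# Synthetic "recorded" activity generated from hidden true weights (a stand-in).
true_weights = rng.uniform(-1, 1, (N_NEURONS, N_NEURONS))
target = simulate(true_weights)

def fitness(weights):
    """Negative error between the candidate's activity and the recorded trace."""
    return -np.mean((simulate(weights) - target) ** 2)

# Plain genetic algorithm: rank, keep the best, mutate, repeat.
pop = [rng.uniform(-1, 1, (N_NEURONS, N_NEURONS)) for _ in range(30)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    pop = parents + [p + rng.normal(0, 0.1, p.shape) for p in parents for _ in range(2)]

print("best fitness:", fitness(pop[0]))
```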

Most people seem to think of AI as a set of hand-crafted rules from experts - AI that is programmed to do certain tasks. This has changed massively over the last 15 years or so with the advent of neural networks, deep learning and the processing power for those algorithms to flourish. These AI systems learn from the huge amounts of data we are amassing rather than being programmed, and are reaching human level and beyond in many narrow tasks. Recurrent networks such as LSTMs are capable of learning through time, and other networks, such as Neural Turing Machines, have been created that allow learning algorithms to be applied to new kinds of data. New things being investigated include attention mechanisms and memory networks.
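
(To make "learns from data rather than being programmed" concrete, here is a bare-bones recurrent network trained with backpropagation through time to predict the next symbol of a repeating pattern - no rules about the pattern are written anywhere, only a generic weight-update loop. It's a vanilla RNN in plain numpy rather than an LSTM, purely to keep the sketch short.)

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy task: predict the next symbol of the repeating pattern 0, 1, 1, 0, 1, 1, ...
data = [0, 1, 1] * 40
VOCAB, HIDDEN, LR = 2, 8, 0.1

# Weights start random and are learned from the data, not hand-crafted.
Wxh = rng.normal(0, 0.1, (HIDDEN, VOCAB))
Whh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))
Why = rng.normal(0, 0.1, (VOCAB, HIDDEN))
bh, by = np.zeros(HIDDEN), np.zeros(VOCAB)

def one_hot(i):
    v = np.zeros(VOCAB)
    v[i] = 1.0
    return v

for epoch in range(300):
    xs, hs, ps, targets = [], [np.zeros(HIDDEN)], [], []
    loss = 0.0

    # Forward pass: the hidden state carries memory of where we are in the pattern.
    for t in range(len(data) - 1):
        x = one_hot(data[t])
        h = np.tanh(Wxh @ x + Whh @ hs[-1] + bh)
        p = np.exp(Why @ h + by); p /= p.sum()          # softmax over the 2 symbols
        xs.append(x); hs.append(h); ps.append(p); targets.append(data[t + 1])
        loss -= np.log(p[data[t + 1]])

    # Backward pass: backpropagation through time.
    dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
    dbh, dby, dh_next = np.zeros_like(bh), np.zeros_like(by), np.zeros(HIDDEN)
    for t in reversed(range(len(xs))):
        dy = ps[t].copy(); dy[targets[t]] -= 1.0        # softmax + cross-entropy gradient
        dWhy += np.outer(dy, hs[t + 1]); dby += dy
        dh = Why.T @ dy + dh_next
        draw = (1.0 - hs[t + 1] ** 2) * dh              # back through the tanh
        dbh += draw; dWxh += np.outer(draw, xs[t]); dWhh += np.outer(draw, hs[t])
        dh_next = Whh.T @ draw

    for param, grad in zip((Wxh, Whh, Why, bh, by), (dWxh, dWhh, dWhy, dbh, dby)):
        param -= LR * grad / len(xs)                    # plain gradient descent

# Should drop below ln(2) ~= 0.69 (pure guessing) as the pattern is learned.
print("average loss per step:", loss / len(xs))
```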

Things are really changing and ramping up in the machine learning space, especially now that it is good enough for commercial investment. There seem to be a lot of people who did some machine learning many years ago and so write it off; they should really look closely at the current state of the art and how quickly it is improving.

Jabbah

1,331 posts

154 months

Monday 18th April 2016
otolith said:
but doing it at the cellular level requires old fashioned dissection, sectioning, staining and microscopy. I guess it would be possible to automate that.
There are some new optogenetic techniques that cause neurons to light up when they fire, which makes this a lot easier. They have been used on zebrafish, amongst other things:

https://www.youtube.com/watch?v=YZxTvH-X61o

These techniques allow control of neuron behaviour using lasers too.

Jabbah

1,331 posts

154 months

Monday 18th April 2016
Toaster said:
The brain is not a computer; it doesn't have software, and that's just for starters. It may be that there are conceptual parallels that can be drawn, but it's a living organism.
The brain certainly seems to process information sent through the nerves as action potentials to neurons, which either fire or don't depending on their other inputs. Computers don't need software; they can be completely defined in hardware. How do you define what is alive? In the end it is just chemistry.
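
(That "fires or doesn't, depending on other inputs" picture is the oldest model in the field - the McCulloch-Pitts threshold unit - and it needs no software to exist; the same function can be wired directly as logic gates. A tiny sketch, with weights and threshold picked by hand just to show the idea:)

```python
def neuron(inputs, weights, threshold):
    """Threshold unit: fire (1) if the weighted sum of inputs from other
    neurons reaches the threshold, otherwise stay silent (0)."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Two excitatory inputs (weight +1) and one inhibitory input (weight -2):
# the unit fires only when both excitatory inputs are active and the inhibitor isn't.
for a in (0, 1):
    for b in (0, 1):
        for inhibit in (0, 1):
            print(a, b, inhibit, "->", neuron([a, b, inhibit], [1, 1, -2], threshold=2))
```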