Is The A.I. Singularity Coming And If So When?

mudflaps

Original Poster:

317 posts

107 months

Monday 20th July 2015
Nobody is saying that today's Siri is the finished article, far from it. What is being said is that Siri is a step along the road to the singularity. Yes, it's AI, but it's simple AI and not to be confused with AGI.

otolith

56,266 posts

205 months

Tuesday 21st July 2015
glazbagun said:
But probably easier for a machine to do than a human - look at how this forum software fragments discussion when you try to have a complicated debate. Two completely logical AIs arguing over the same data sets would be amazing. You could instruct one to support point A, the other B, and just wait until it was resolved.
Surely if both were completely logical and using the same data they would have nothing to argue about?

mudflaps

Original Poster:

317 posts

107 months

Tuesday 21st July 2015
ash73 said:
It was a hoax? Well, I still think they can pass the test with a neural net algorithm on a classical computer. It's not about the hardware; it just requires an Einstein programmer to come along.
We're not yet in the room but we might be knocking on the door shortly.

BGARK

5,494 posts

247 months

Tuesday 21st July 2015
anonymous said:
[redacted]
I do. The hardware these days is more than capable of running dumb AI; from a worm to a human, I am not sure where it would struggle or reach a limit. It's definitely not solely about transistor speeds.

What's missing is the software, and how we as humans create such software I have no idea, but I am sure someone will suss it out.


glazbagun

14,283 posts

198 months

Tuesday 21st July 2015
BGARK said:
I do. The hardware these days is more than capable of running dumb AI; from a worm to a human, I am not sure where it would struggle or reach a limit. It's definitely not solely about transistor speeds.

What's missing is the software, and how we as humans create such software I have no idea, but I am sure someone will suss it out.
You may have already read about it in the waitbutwhy article above, but here's the "worm emulator 1.0" running in a Lego body. They didn't program it to do anything, just gave it motors and a sensory device and let it go:


https://youtu.be/YWQnzylhgHc

Smithsonian article here:

http://www.smithsonianmag.com/smart-news/weve-put-...

With this project, I wonder how an "eat this / can't eat this" stimulus could be made to work. Presumably this worm is neither hungry nor full; such stimuli just aren't firing.
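
For anyone wondering what "didn't program it to do anything" looks like in practice, here's a toy sketch of the idea (illustrative only, not the actual OpenWorm code; the real C. elegans connectome has ~302 neurons with mapped synapses, and every number below is made up):

```python
import numpy as np

# Toy "connectome": 5 neurons with fixed connection weights.
# (Stand-in values; the real wiring is mapped, not random.)
rng = np.random.default_rng(seed=1)
weights = rng.normal(scale=0.5, size=(5, 5))

activation = np.zeros(5)
sensor = np.array([1.0, 0.0, 0.0, 0.0, 0.0])  # e.g. a touch sensor firing

# No behaviour is programmed: motor output simply emerges from
# activation flowing through the wiring.
for step in range(10):
    activation = np.tanh(weights @ activation + sensor)
    left, right = activation[3], activation[4]  # treat last two as motors
    print(f"step {step}: motor L={left:+.2f} R={right:+.2f}")
```

The "program" is the wiring itself, which is why the hungry/full question matters: whether an "eat this" response ever fires depends entirely on which sensory neurons get stimulated.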

Guvernator

13,170 posts

166 months

Wednesday 22nd July 2015
BGARK said:
I do. The hardware these days is more than capable of running dumb AI; from a worm to a human, I am not sure where it would struggle or reach a limit. It's definitely not solely about transistor speeds.

What's missing is the software, and how we as humans create such software I have no idea, but I am sure someone will suss it out.
Not sure how you work out that the hardware is there. Best guesses for the processing power of the human brain clock in at about 38 petaflops; it weighs about 1.5 kilos and just needs food and water a few times a day to power it. The fastest supercomputers on the planet only manage half that speed, take up a very large room and have the power requirements of a small town.

The last time they tried to emulate the human brain, they had to use a network of thousands of computers with 83,000 processors running in parallel, and even then it took them 40 minutes to simulate one second of brain activity. We may be able to simulate simple animals, but if we want to create AI as intelligent as we are, we are still many years away from building something with the same processing power and the amazing space/energy efficiency of the human brain.
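
To put that simulation figure in perspective, some rough arithmetic (the linear-scaling extrapolation is a naive assumption of mine; real systems scale worse):

```python
# Figures from the post above; the scaling assumption is mine.
sim_wall_time = 40 * 60                 # 40 minutes, in seconds
brain_time = 1                          # they simulated 1 second of activity
slowdown = sim_wall_time / brain_time   # 2400x slower than real time

processors = 83_000
needed = slowdown * processors          # naive linear extrapolation
print(f"{slowdown:.0f}x slowdown -> ~{needed:,.0f} processors for real time")
```

Roughly 200 million processors just to keep up with one brain in real time, before you even ask about power and space.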

ikarl

3,730 posts

200 months

Wednesday 22nd July 2015
Guvernator said:
Not sure how you work out that the hardware is there. Best guesses for the processing power of the human brain clock in at about 38 petaflops; it weighs about 1.5 kilos and just needs food and water a few times a day to power it. The fastest supercomputers on the planet only manage half that speed, take up a very large room and have the power requirements of a small town.

The last time they tried to emulate the human brain, they had to use a network of thousands of computers with 83,000 processors running in parallel, and even then it took them 40 minutes to simulate one second of brain activity. We may be able to simulate simple animals, but if we want to create AI as intelligent as we are, we are still many years away from building something with the same processing power and the amazing space/energy efficiency of the human brain.
With the talk of 'petaflops' (had to Google that) and exponential growth, how long do you estimate, Guv, before a computer with the processing power of the human brain is the size of a small car?

otolith

56,266 posts

205 months

Wednesday 22nd July 2015
I think we will find a better way round the problem sooner than we will have the computing power for brute force emulation.

mudflaps

Original Poster:

317 posts

107 months

Wednesday 22nd July 2015
ikarl said:
With the talk of 'petaflops' (had to Google that) and exponential growth, how long do you estimate, Guv, before a computer with the processing power of the human brain is the size of a small car?
And that is the key: exponential growth. With that, things you might estimate to be 50 or 100 years away, based on growth patterns to date, can be realised in a fraction of that time, say 10 to 20 years.
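
A quick back-of-the-envelope of why that works (a sketch, assuming capability doubles every two years, a Moore's-law-style assumption rather than a fact, and assuming a 100x shortfall to close):

```python
import math

shortfall = 100       # assumed gap: hardware needs to improve ~100x
doubling_years = 2    # assumed doubling period

doublings = math.log2(shortfall)     # ~6.6 doublings needed
years = doublings * doubling_years   # ~13 years
print(f"{doublings:.1f} doublings -> about {years:.0f} years")
```

Closing a 100x gap takes under seven doublings, so exponential improvement turns a "lifetime away" gap into roughly a decade; at a fixed, non-compounding rate of progress the same gap would take far longer.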

Guvernator

13,170 posts

166 months

Wednesday 22nd July 2015
ikarl said:
With the talk of 'petaflops' (had to Google that) and exponential growth, how long do you estimate, Guv, before a computer with the processing power of the human brain is the size of a small car?
Processing speed has actually been slowing down; we've not had a major speed leap in over five years, hence the move to multiple cores rather than trying to make each processor run faster. I suspect we are fast approaching the limits of current silicon technology, so unless we have a major breakthrough in 3D silicon processing or some other technology, I'm going to say 15-20 years to reach the size of a car and then maybe another 15-20 to get it down to the size of the human brain.

Some might see this as pessimistic when, within 30 years, we've gone from a computer the size of a room to a smartphone that beats it comprehensively, but I think we've been plucking all the low-hanging fruit, and exponential improvements will get harder from now on. Even if we managed a massive breakthrough in organic or quantum computing tomorrow, it would still take decades to turn it into a viable working computer.

0000

13,812 posts

192 months

Wednesday 22nd July 2015
I don't think running current algorithms faster is going to help much anyway.

otolith

56,266 posts

205 months

Wednesday 22nd July 2015
The emergent behaviour of the model of the worm brain suggests that literal modelling of the human brain would be possible given enough processing power; but I wonder if it might be like naively trying to model projectiles or planetary motion using some sort of iterative method without the shortcut of calculus.
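
To make that analogy concrete (my own toy sketch): you can get a projectile's flight time either by stepping the physics forward in tiny increments or by using the calculus shortcut that answers it in one line.

```python
# Naive iterative approach: step the projectile forward in tiny increments.
g, v0 = 9.81, 20.0      # gravity (m/s^2) and initial upward velocity (m/s)
dt = 1e-4               # timestep; smaller = more accurate but slower
t, y, v = 0.0, 0.0, v0
while y >= 0.0:
    y += v * dt
    v -= g * dt
    t += dt
print(f"iterative flight time:   {t:.3f} s (after {round(t/dt):,} steps)")

# The calculus shortcut: solve 0 = v0*t - g*t^2/2 directly.
print(f"closed-form flight time: {2 * v0 / g:.3f} s (one expression)")
```

Brute-force brain emulation may be the forty-thousand-step version of a problem that has a one-line answer we haven't found yet.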

Martin4x4

6,506 posts

133 months

Wednesday 22nd July 2015
ash73 said:
It was a hoax? Well, I still think they can pass the test with a neural net algorithm on a classical computer. It's not about the hardware; it just requires an Einstein programmer to come along.
Turing was brilliant; the Turing test is not.

https://www.newscientist.com/blogs/shortsharpscien...

CrutyRammers

13,735 posts

199 months

Wednesday 22nd July 2015
Guvernator said:
Processing speed has actually been slowing down; we've not had a major speed leap in over five years, hence the move to multiple cores rather than trying to make each processor run faster. I suspect we are fast approaching the limits of current silicon technology, so unless we have a major breakthrough in 3D silicon processing or some other technology, I'm going to say 15-20 years to reach the size of a car and then maybe another 15-20 to get it down to the size of the human brain.

Some might see this as pessimistic when, within 30 years, we've gone from a computer the size of a room to a smartphone that beats it comprehensively, but I think we've been plucking all the low-hanging fruit, and exponential improvements will get harder from now on. Even if we managed a massive breakthrough in organic or quantum computing tomorrow, it would still take decades to turn it into a viable working computer.
As I understand it, the focus of late has been on making chips more efficient, so they consume less power and produce less heat, so that you can put faster ones in your pocket devices.

glazbagun

14,283 posts

198 months

Wednesday 22nd July 2015
ash73 said:
The thing is, academics who experiment with neural nets won't have the first clue how to write fast code, and if you're scaling up a process across 80 billion neurons with a trillion connections, even the tiniest inefficiency will make it tank. Give it to some game programmers and see what they can do!
God help us. The first true AI will appear in Half-Life 5 and will take over the world by accident, thinking it's the aliens.
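
ash73's inefficiency point in rough numbers (the timestep and the per-update waste are assumptions of mine, picked only for illustration):

```python
connections = 1e12        # a trillion connections, per the post above
steps_per_sec = 1_000     # assume a 1 ms simulation timestep
wasted_per_op = 1e-9      # assume just one wasted nanosecond per update

extra = connections * steps_per_sec * wasted_per_op
print(f"{extra:,.0f} extra CPU-seconds per simulated second")  # 1,000,000
```

One wasted nanosecond per connection per step costs a million CPU-seconds for every simulated second, so "even the tiniest inefficiency will make it tank" is, if anything, an understatement.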

otolith

56,266 posts

205 months

Wednesday 22nd July 2015
Certainly a large set of individual program instances communicating by UDP is not an efficient way of doing it! But then the worm is just a test case; it's not meant to perform.

FarmyardPants

4,112 posts

219 months

Thursday 23rd July 2015
There are strong AI believers and weak AI believers. The strong AI believers reckon that the human thought process is algorithmic, and should in theory be programmable. The weak AI believers say that there is something else to a "mind" that transcends algorithms, and to program a mind is impossible. The sticky issue lurking somewhere between the two groups is the subject of consciousness (or conscious thought), which is where the Turing test comes in. From what I learnt it goes like this: since even we cannot say what consciousness is, Turing proposed that we define it as being that which we find indistinguishable from a known control that we have decided a priori possesses this property. If we are unable to decide if a test subject is conscious or not, how can we say that anybody we talk to is (assuming we have no 'clues' that would lead us to one conclusion or another)? It's a reasonable argument albeit not without some flaws/suppositions.

As far as strong AI/consciousness is concerned and the algorithmic nature of it, I believe it is not possible to directly devise an algorithm that implements a mind, for to do so implies that the implementor first understands what a mind is, which supposes a level of intelligence outside the boundaries of the mind itself. I think it is possible, however, to write an algorithm that writes other algorithms, that [by some process] produces an end result that is not comprehensible by the originator, and by this mechanism allows the complexity to escape beyond what would otherwise be achievable, i.e. it should be possible to "grow" a program. The problem with this is that such growth needs steering (training), and generally speaking the training process needs human input to tell the growing algorithm whether it's on the right track. The amount of data required to do this is enormous (think: growing child receiving sensory input). I see this as the major hurdle: you could in theory create something indistinguishable from a conscious being, but in reality you could never feed it enough data to get there. And you can't feed it data from another algorithm, because that in turn doesn't know what it's doing for the same reason - you can't bootstrap it.
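
A toy illustration of that "grow a program" idea (a minimal sketch: evolving a string stands in for growing a program, and the fixed target string stands in for the human trainer's feedback):

```python
import random

TARGET = "a mind is what a brain does"   # stands in for the trainer's signal
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    """How closely the grown 'program' matches what the trainer wants."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in candidate)

# Start from pure noise and let selection plus mutation do the growing.
population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(200)]

for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:20]   # keep the fittest, discard the rest
    population = parents + [mutate(random.choice(parents))
                            for _ in range(180)]

print(f"generation {generation}: {population[0]!r}")
```

Delete the fitness function and the loop has nothing to steer by, which is exactly the bootstrapping problem above: the feedback has to come from something that already knows what "right" looks like.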

Asterix

24,438 posts

229 months

Thursday 23rd July 2015
FarmyardPants said:
There are strong AI believers and weak AI believers. The strong AI believers reckon that the human thought process is algorithmic, and should in theory be programmable. The weak AI believers say that there is something else to a "mind" that transcends algorithms, and to program a mind is impossible. The sticky issue lurking somewhere between the two groups is the subject of consciousness (or conscious thought), which is where the Turing test comes in. From what I learnt it goes like this: since even we cannot say what consciousness is, Turing proposed that we define it as being that which we find indistinguishable from a known control that we have decided a priori possesses this property. If we are unable to decide if a test subject is conscious or not, how can we say that anybody we talk to is (assuming we have no 'clues' that would lead us to one conclusion or another)? It's a reasonable argument albeit not without some flaws/suppositions.

As far as strong AI/consciousness is concerned and the algorithmic nature of it, I believe it is not possible to directly devise an algorithm that implements a mind, for to do so implies that the implementor first understands what a mind is, which supposes a level of intelligence outside the boundaries of the mind itself. I think it is possible, however, to write an algorithm that writes other algorithms, that [by some process] produces an end result that is not comprehensible by the originator, and by this mechanism allows the complexity to escape beyond what would otherwise be achievable, i.e. it should be possible to "grow" a program. The problem with this is that such growth needs steering (training), and generally speaking the training process needs human input to tell the growing algorithm whether it's on the right track. The amount of data required to do this is enormous (think: growing child receiving sensory input). I see this as the major hurdle: you could in theory create something indistinguishable from a conscious being, but in reality you could never feed it enough data to get there. And you can't feed it data from another algorithm, because that in turn doesn't know what it's doing for the same reason - you can't bootstrap it.
Pfff

RobDickinson

31,343 posts

255 months

Thursday 23rd July 2015
Humans always think there is something magical going on until science nails it down and, oh look, it's just physics etc.

The brain is complicated, and neurons aren't as simple as binary switches, but we will get there one day.

That waitbutwhy article, though, talking about an AI's exponential growth: it will have limits. It can't grow its own processing substrate, just as we can't grow our own brains. Sure, it can optimise its code, and more processing can be created, but it won't just 'happen'.