Why is AI such an impossible goal?


Morningside

Original Poster:

24,110 posts

228 months

Monday 9th January 2017
Lots of android/robot programmes around lately, and I wondered why AI has been such an impossible goal. Each year some scientist predicts "AI in the next 20 years", but here we are... nothing!

I know we have limited 'intelligence' with Siri and Google, but nothing you could clearly communicate with. IBM's Watson was an amazing achievement, well beyond everything to date, but it still couldn't ask about its own existence.

I know something like chess was thought to be an impossible task due to the complexity of the game, and it also appears that Go (I think) has now been done, but these are games with strict rules, and it would never break out and become sentient.

Back in the 1950s and '60s we tried to build a brain, but is that really the way to do it? I'm not really talking about a conscious life, more a realistic Turing test, I suppose.

I suppose if we knew the answer to this, then the goal would be achieved.


Greetings, Professor Falken.
A strange game. The only winning move is not to play. How about a nice game of chess?

otolith

55,899 posts

203 months

Monday 9th January 2017
The machine that plays Go is a significant advance on chess playing machines because you can't just use brute force the way you can in chess. Too many possibilities.

We don't have general AI yet, but we are getting closer.
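For a rough feel of why brute force runs out of road, here's a minimal sketch of exhaustive minimax search on a toy "take 1, 2 or 3 coins, last coin wins" game. The game and the numbers are purely for illustration, not how any real engine is written:

```python
# Toy sketch: brute-force minimax on a "take 1, 2 or 3 coins, last coin wins" game.
# The same exhaustive-search idea (plus pruning and a hand-tuned evaluation)
# is what cracked chess; Go's branching factor (~250 legal moves per position
# versus ~35 in chess) makes this kind of tree explode far too fast.

def minimax(coins, maximising):
    """+1 if the maximising player can force taking the last coin, else -1."""
    if coins == 0:
        # The previous player took the last coin, so the side to move has lost.
        return -1 if maximising else 1
    moves = [take for take in (1, 2, 3) if take <= coins]
    scores = [minimax(coins - take, not maximising) for take in moves]
    return max(scores) if maximising else min(scores)

print(minimax(10, True))   # 1: the side to move can force a win from 10 coins
print(minimax(12, True))   # -1: multiples of 4 are lost against perfect play
```

The cost grows roughly as branching_factor to the power of search depth, which is why the trick that worked for chess doesn't transfer to Go.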

SystemParanoia

14,343 posts

197 months

Monday 9th January 2017

How about Global Thermonuclear War?

Morningside

Original Poster:

24,110 posts

228 months

Monday 9th January 2017
otolith said:
The machine that plays Go is a significant advance on chess playing machines because you can't just use brute force the way you can in chess. Too many possibilities.

We don't have general AI yet, but we are getting closer.
What/who is the latest best AI so far?

Ransoman

884 posts

89 months

Monday 9th January 2017
You can't program genuine free will and curiosity.

otolith

55,899 posts

203 months

Monday 9th January 2017
Morningside said:
What/who is the latest best AI so far?
We're seeing advances in single-purpose AI for functions like voice recognition or game playing, which are edging closer to the possibility of general-purpose AI.

For instance, the Go challenge was won by building a neural net, teaching it to recognise winning patterns used by human masters, and then having it play millions of games against other instances of itself and learn from those. Effectively it was taught to play and then sent away to think about it.

https://www.wired.com/2016/03/sadness-beauty-watch...

That's a very different thing to the algorithmic, mechanical way that chess was won.
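As a very loose illustration of that two-stage recipe (imitate the masters, then improve by self-play), here's a toy sketch using a tabular policy on a trivial coins game rather than a neural net and Go. Only the structure is meant to carry over; everything in it is invented for the example:

```python
# Toy sketch of "learn from the masters, then self-play": a tabular policy on the
# take-1/2/3-coins game stands in for the neural net, and the game stands in for Go.
import random
from collections import defaultdict

weights = defaultdict(lambda: {1: 1.0, 2: 1.0, 3: 1.0})   # position -> move preferences

# Stage 1: seed the policy from "expert" play (the known winning rule is to
# leave your opponent a multiple of four coins whenever you can).
def expert_move(coins):
    return coins % 4 if coins % 4 in (1, 2, 3) else random.choice([t for t in (1, 2, 3) if t <= coins])

for coins in range(1, 21):
    weights[coins][expert_move(coins)] += 2.0

def sample_move(coins):
    legal = {take: w for take, w in weights[coins].items() if take <= coins}
    return random.choices(list(legal), list(legal.values()))[0]

# Stage 2: self-play. Play the policy against itself many times and reinforce
# whichever moves were made by the side that took the last coin.
for _ in range(20000):
    coins, player, history = 20, 0, {0: [], 1: []}
    while coins > 0:
        move = sample_move(coins)
        history[player].append((coins, move))
        coins -= move
        winner = player            # whoever just moved took the last coin
        player = 1 - player
    for position, move in history[winner]:
        weights[position][move] += 0.5

# Show which move the policy now prefers in small positions.
print({c: max(weights[c], key=weights[c].get) for c in range(1, 9)})
```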

otolith

55,899 posts

203 months

Monday 9th January 2017
Ransoman said:
You can't program genuine free will and curiosity.
Yet we have biological machines in our heads which appear to do exactly that.

ZOLLAR

19,908 posts

172 months

Monday 9th January 2017
OP,

I quite enjoyed reading this a while back.

http://waitbutwhy.com/2015/01/artificial-intellige...


skeeterm5

3,328 posts

187 months

Monday 9th January 2017
You have to answer a different question first: what do you actually mean by AI? I think the Turing test is too one-dimensional and not that good a measure in today's world. To my mind, all it does is seek to show that AI is nothing more than replicating a human in a conversational machine.

I think there is also a real mix-up between AI and consciousness, two very distinct things in my mind, and I don't buy the "self-awareness" test.

I think you could make the case that some of the really good predictive and behavioural analytics display good elements of AI, taking lots of data and concluding something that may not even be in the data. If you consider this AI, then there are many very sophisticated examples.

S

Edited by skeeterm5 on Monday 9th January 17:05

Dr Jekyll

23,820 posts

260 months

Monday 9th January 2017
skeeterm5 said:
You have to answer a different question first: what do you actually mean by AI? I think the Turing test is too one-dimensional and not that good a measure in today's world. To my mind, all it does is seek to show that AI is nothing more than replicating a human in a conversational machine.

I think there is also a real mix-up between AI and consciousness, two very distinct things in my mind, and I don't buy the "self-awareness" test.

I think you could make the case that some of the really good predictive and behavioural analytics display good elements of AI, taking lots of data and concluding something that may not even be in the data. If you consider this AI, then there are many very sophisticated examples.
The problem with defining AI is that once you know how to do something artificially it stops looking like a sign of intelligence. But what do you mean by consciousness?

davepoth

29,395 posts

198 months

Monday 9th January 2017
Dr Jekyll said:
The problem with defining AI is that once you know how to do something artificially it stops looking like a sign of intelligence. But what do you mean by consciousness?
And that's the rub. We have yet to quantitatively define consciousness, so it's very difficult to tell whether a computer has it or not. That's especially true because we try to frame consciousness as a human thing, and a machine consciousness might look very different.

We are making steps, though. An Xbox with a Kinect fitted is probably more intelligent than a lot of single-celled life, and some of the best autonomous vehicles are approaching insect level.

Morningside

Original Poster:

24,110 posts

228 months

Tuesday 10th January 2017
Some good points raised here.

I was watching an ant-type thing crawling along the bathroom floor this morning and thought: does it have any idea where it is? Does it see me as a huge giant? Does it feel hunger or sorrow? Or is it just a simple chemical robot that does simple tasks that look conscious?

I mean the creature would have a very, very small brain with not many internal connections.

SystemParanoia

14,343 posts

197 months

Tuesday 10th January 2017
Morningside said:
Some good points raised here.

I was watching an ant-type thing crawling along the bathroom floor this morning and thought: does it have any idea where it is? Does it see me as a huge giant? Does it feel hunger or sorrow? Or is it just a simple chemical robot that does simple tasks that look conscious?
Ahem..

https://www.youtube.com/watch?v=ZLZW8Deq8vE

otolith

55,899 posts

203 months

Tuesday 10th January 2017
anonymous said:
[redacted]
But it's based on a very anthropocentric idea of the concept of intelligence. We are the most intelligent thing we know of, so we test for intelligence by the ability to pass as one of us, but there could be naturally occurring intelligences in the universe far exceeding our own but which would not be able to pass as human by making small talk. We could conceivably create a true AI which doesn't pass the Turing test - for one thing, it might choose not to! Or it might, particularly if it has been created by bootstrapping less sophisticated AI, appear to be entirely inscrutable.

SystemParanoia

14,343 posts

197 months

Tuesday 10th January 2017
The Amazon Echo/Dot is pretty good at picking up on inference.



Who is the POTUS?

What is his wife's name?

What are their children's names?


You can go pretty deep into the tree before it all falls apart.
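For anyone curious what "picking up on inference" might look like under the hood, here's a toy sketch of follow-up-question handling: remember which entity the conversation is about and resolve pronouns against it. The facts and names are placeholders; Amazon's actual pipeline is obviously far more sophisticated and not public:

```python
# Toy sketch: answer follow-up questions by remembering what the conversation is
# "about" and resolving pronouns against it. All facts and names are placeholders.
FACTS = {
    "potus": {"name": "President Example", "wife": "Spouse Example"},
    "President Example": {"wife": "Spouse Example", "children": "Child A and Child B"},
    "Spouse Example": {"children": "Child A and Child B"},
}

focus = None   # the entity the dialogue last referred to

def ask(question):
    global focus
    words = question.lower().strip("?").split()
    subject = focus if {"his", "her", "their"} & set(words) else None
    if "potus" in words:
        subject = "potus"
    if subject not in FACTS:
        return "Sorry, I lost the thread."           # where it all falls apart
    if "who" in words:
        wanted = "name"
    elif "wife" in words or "wife's" in words:
        wanted = "wife"
    elif "children" in words or "children's" in words:
        wanted = "children"
    else:
        return "Sorry, I don't know."
    answer = FACTS[subject].get(wanted, "Sorry, I don't know.")
    focus = answer if answer in FACTS else subject   # shift focus onto the answer
    return answer

print(ask("Who is the POTUS?"))                 # President Example
print(ask("What is his wife's name?"))          # Spouse Example
print(ask("What are their children's names?"))  # Child A and Child B
```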

Dr Jekyll

23,820 posts

260 months

Tuesday 10th January 2017
otolith said:
We could conceivably create a true AI which doesn't pass the Turing test - for one thing, it might choose not to! Or it might, particularly if it has been created by bootstrapping less sophisticated AI, appear to be entirely inscrutable.
It could get a headache trying to think down to our level.

otolith

55,899 posts

203 months

Tuesday 10th January 2017
Dr Jekyll said:
It could get a headache trying to think down to our level.
Brain the size of a planet and...

SystemParanoia

14,343 posts

197 months

Tuesday 10th January 2017
Google has been doing cool stuff with its neural networks.

Self-learning machines creating their own encryption.

Published paper:
https://arxiv.org/abs/1610.06918

Random Article:
http://arstechnica.co.uk/information-technology/20...

I feel they should try again, but give it unlimited access to a couple of supercomputers and the D-Wave ( http://www.dwavesys.com/ ).
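The rough shape of the setup in that paper (Alice and Bob learn to communicate over a shared key while Eve tries to eavesdrop) can be sketched like this. The tiny networks and the simplified loss below are stand-ins I've made up for illustration, not the authors' architecture or code:

```python
# Toy re-implementation of the *shape* of adversarial neural cryptography:
# Alice and Bob share a key; Bob should recover the message, Eve (no key) should
# not. Small MLPs and a simplified loss stand in for the paper's architecture.
import torch
import torch.nn as nn

BITS = 16                                           # message and key length

def net(inputs):
    return nn.Sequential(nn.Linear(inputs, 64), nn.ReLU(),
                         nn.Linear(64, BITS), nn.Tanh())

alice, bob, eve = net(2 * BITS), net(2 * BITS), net(BITS)
opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_eve = torch.optim.Adam(eve.parameters(), lr=1e-3)

def bits(n=256):                                    # random +/-1 vectors
    return torch.randint(0, 2, (n, BITS)).float() * 2 - 1

for step in range(3000):
    plain, key = bits(), bits()
    cipher = alice(torch.cat([plain, key], dim=1))

    # Eve's turn: try to read the ciphertext without the key.
    eve_err = (eve(cipher.detach()) - plain).abs().mean()
    opt_eve.zero_grad()
    eve_err.backward()
    opt_eve.step()

    # Alice and Bob's turn: reconstruct well while keeping Eve near chance
    # (an average error of ~1.0 per +/-1 bit is random guessing).
    bob_err = (bob(torch.cat([cipher, key], dim=1)) - plain).abs().mean()
    eve_err = (eve(cipher) - plain).abs().mean()
    ab_loss = bob_err + (1.0 - eve_err) ** 2
    opt_ab.zero_grad()
    ab_loss.backward()
    opt_ab.step()

print(f"Bob error {bob_err.item():.3f}  Eve error {eve_err.item():.3f}")
```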

Terminator X

14,921 posts

203 months

Tuesday 10th January 2017
Why wish for this as we're fked once it turns up.

TX.

skeeterm5

3,328 posts

187 months

Tuesday 10th January 2017
anonymous said:
[redacted]
By your definition a lot of humans wouldn't pass the test, and nor would any non-human intelligence. That is the fundamental issue with the test: it doesn't prove intelligence at all, it proves an ability to fool somebody into thinking they are talking to a human. How does that demonstrate intelligence?


If you had the desire, you could create a really deep relational database with good logic trees to spoof "intelligent" conversation. I think a better measure is the ability to create unique solutions to problems based on varied data sets, which then makes it useful.
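As a toy illustration of how far canned patterns can get you without anything resembling understanding, here's an ELIZA-style sketch; the rules are invented for the example:

```python
# Toy pattern-matching "conversation spoofer": canned rules, zero understanding.
import re

RULES = [
    (r"\bi feel (.+)", "Why do you feel {0}?"),
    (r"\bmy (\w+)\b", "Tell me more about your {0}."),
    (r"\b(hello|hi)\b", "Hello. What would you like to talk about?"),
    (r"\?$", "What do you think the answer is?"),
]

def reply(utterance):
    text = utterance.lower().strip()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())
    return "I see. Please go on."                    # catch-all filler

print(reply("I feel that AI is an impossible goal"))   # Why do you feel that ai is an impossible goal?
print(reply("Is the Turing test a good measure?"))     # What do you think the answer is?
```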