Why is AI such an impossible goal?


amancalledrob

1,248 posts

134 months

Wednesday 1st February 2017
otolith said:
I don't think brute force simulation is likely to be the answer - I'm merely pointing out that there is nothing magical or spiritual about how intelligence (and what we call consciousness) emerges from biological machines.
"Any sufficiently advanced technology is indistinguishable from magic" - Arthur C. Clarke

Biological machines are natural technology. Ergo: spiritual, no; magical, yes.

otolith

56,121 posts

204 months

Wednesday 1st February 2017
Indistinguishable from magic if you don't understand how they work but not, however, supernatural.

smn159

12,654 posts

217 months

Wednesday 1st February 2017
Ransoman said:
you can't program genuine free will and curiosity.
Well, given that we don't have genuine free will, that makes it a bit easier to program.

amancalledrob

1,248 posts

134 months

Wednesday 1st February 2017
otolith said:
Indistinguishable from magic if you don't understand how they work but not, however, supernatural.
Agree completely

The_Doc

4,885 posts

220 months

Thursday 16th February 2017
I don't think we will make a "spontaneous free-thought object" (my definition of AI) before we have understood the human brain.

The human brain is the most complicated object under consideration.

The human brain grows spontaneously from two cells and not only wires itself, but teaches itself how to think.

If we "grow" a human brain analogue as AI... are we not creating new life rather than AI?

Should the goal for AI not be to create a partially sentient machine?
Should the goal, when creating a partially sentient machine, be not to make it a real human?
When we create an AI, can we kill it or deny it what it wants? (Geneva Convention on (human) rights)

Any brute-force machine, even one that learns from its mistakes or works out new tactics, is a long, long way from AI in my book.

...just my two penn'orth...

The_Doc

4,885 posts

220 months

Thursday 16th February 2017
And just to bring it down to Earth...

Yesterday my 5-year-old, whose computer/brain I have been diligently programming for 5 years, said to me:

"When we stop talking, Daddy, where do the words go?"

Forget the Turing Test: I now define a new test, called the Hage Test, whereby an AI spontaneously asks a question of a totally abstract or existential nature. Programming the AI to ask questions disqualifies you from the test. It has to be spontaneous.

This "Go computer" is some way off that, eh?

Guvernator

13,156 posts

165 months

Thursday 16th February 2017
The_Doc said:
And just to bring it down to Earth...

Yesterday my 5-year-old, whose computer/brain I have been diligently programming for 5 years, said to me:

"When we stop talking, Daddy, where do the words go?"

Forget the Turing Test: I now define a new test, called the Hage Test, whereby an AI spontaneously asks a question of a totally abstract or existential nature. Programming the AI to ask questions disqualifies you from the test. It has to be spontaneous.

This "Go computer" is some way off that, eh?
Is it totally spontaneous, though? Your son's computer/brain has taken in stimuli, probably from interacting with you, to form a logical structure in his memory store of how speech works. His brain has accepted input, compared it against what he already knows, and then either come up with a valid answer or, in this case, decided he doesn't have enough information and requested more input. This isn't too dissimilar to what machines can do already; it's just that his brain has millions more lines of "code", so it seems more spontaneous but is in fact the logical conclusion of the process outlined above.

If you break down every process sufficiently, you can eventually get to the logical steps that got you there; we just haven't developed the technology to the point where a computer can work these out at a granular enough level for itself.
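Just to make the comparison concrete, here's a crude toy sketch of that "answer or ask for more input" loop in Python. Purely illustrative - the tiny "knowledge store" and the wording are made up, and it's obviously nothing like how a brain (or any real AI system) actually works:

# Toy illustration only: answer from what is already "known",
# or spot a gap in the knowledge and ask for more input.
knowledge = {"dog": "an animal that barks", "car": "a machine you drive"}

def respond(topic: str) -> str:
    topic = topic.strip().lower()
    if topic in knowledge:
        # input matches something already stored: give a valid answer
        return f"{topic.capitalize()}: {knowledge[topic]}."
    # not enough information: request more input, i.e. ask a question
    return f"I don't know about '{topic}' yet. Can you tell me more?"

def learn(topic: str, fact: str) -> None:
    # store the new information so the gap is filled next time
    knowledge[topic.strip().lower()] = fact

print(respond("dog"))    # answered from existing "code"/knowledge
print(respond("words"))  # gap detected, so it asks a question
learn("words", "sounds that stop existing once they've been spoken")
print(respond("words"))  # now it can answer

The interesting bit isn't the lookup, it's the branch: answer if you can, otherwise ask. Scale the store up by millions of lines and the asking starts to look spontaneous.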

Who says a human is anything more sophisticated than millions of programmed responses? If you were somehow able to record my lifetime of experiences, you could work out to a very high degree of accuracy how I'd react to any given situation. In effect I've been "programmed" by living the life I have. How is this any different to someone coding millions of lines of code, or even setting a computer to work through millions of scenarios and "learn" a response for each one?

I've been married for 8 years, and sure, my wife still surprises me at times with her "women's fuzzy logic", but I can pretty much predict her reaction to most situations, as I've worked out patterns in her learned behaviour just by being around her. Humans: not as spontaneous and unpredictable as we'd like to believe.

768

13,680 posts

96 months

Thursday 16th February 2017
I can't think of any existing AI which extrapolates an awareness of gaps in its knowledge quite like that. There are subtleties to intelligence that I suspect are beyond just more of the same.

The_Doc

4,885 posts

220 months

Thursday 16th February 2017
Guvernator said:
The_Doc said:
And just to bring it down to Earth...

Yesterday my 5-year-old, whose computer/brain I have been diligently programming for 5 years, said to me:

"When we stop talking, Daddy, where do the words go?"

Forget the Turing Test: I now define a new test, called the Hage Test, whereby an AI spontaneously asks a question of a totally abstract or existential nature. Programming the AI to ask questions disqualifies you from the test. It has to be spontaneous.

This "Go computer" is some way off that, eh?
Is it totally spontaneous, though? Your son's computer/brain has taken in stimuli, probably from interacting with you, to form a logical structure in his memory store of how speech works. His brain has accepted input, compared it against what he already knows, and then either come up with a valid answer or, in this case, decided he doesn't have enough information and requested more input. This isn't too dissimilar to what machines can do already; it's just that his brain has millions more lines of "code", so it seems more spontaneous but is in fact the logical conclusion of the process outlined above.

If you break down every process sufficiently, you can eventually get to the logical steps that got you there; we just haven't developed the technology to the point where a computer can work these out at a granular enough level for itself.

Who says a human is anything more sophisticated than millions of programmed responses? If you were somehow able to record my lifetime of experiences, you could work out to a very high degree of accuracy how I'd react to any given situation. In effect I've been "programmed" by living the life I have. How is this any different to someone coding millions of lines of code, or even setting a computer to work through millions of scenarios and "learn" a response for each one?

I've been married for 8 years, and sure, my wife still surprises me at times with her "women's fuzzy logic", but I can pretty much predict her reaction to most situations, as I've worked out patterns in her learned behaviour just by being around her. Humans: not as spontaneous and unpredictable as we'd like to believe.
Yes you are right.

My five-year-old also draws with felt tips for fun; there is no reward I can see for this: she puts pages in a box and doesn't come and collect treats. I also want my potential AI to compose pleasing (not just plink-plonk) music, and to listen to music and display an emotional response. Spontaneous creativity is the hallmark of an intelligence. But then, brains are just wiring... until you start to think about memories and emotions.

We are fundamentally far away from intelligence; we just have massively powerful pattern and data followers.

otolith

56,121 posts

204 months

Thursday 16th February 2017
That's a very anthropocentric concept of what intelligence could be.

768

13,680 posts

96 months

Thursday 16th February 2017
Tell that to Turing. smile

V8LM

5,174 posts

209 months

Thursday 16th February 2017
otolith said:
scorp said:
otolith said:
Yet we have biological machines in our heads which appear to do exactly that.
No one understands how they work though, so how would you begin to model something based on something you don't understand?
We don't know - but intelligence as we understand it is a property which can emerge out of something that works using nothing more than physics. We might not know how to do it, but it's clearly possible.
Harnessing four billion years of evolution is quite possibly a way forward:

http://pubs.acs.org/doi/full/10.1021/acscentsci.6b...

http://www.nature.com/nchem/journal/v9/n2/full/nch...

otolith

56,121 posts

204 months

Friday 17th February 2017
768 said:
Tell that to Turing. smile
It's a test. If you want to test for human-like intelligence it might even be a good one, though it's based on a 1950s idea of what computers would need genuine intelligence to do. I think we could probably manage that parlour trick without actually cracking strong AI. A machine with chimpanzee-level general intelligence would be a greater achievement, yet it would not pass the test.

The_Doc

4,885 posts

220 months

Friday 17th February 2017
otolith said:
That's a very anthropocentric concept of what intelligence could be.
Admittedly, but the most complicated and proficient intelligence is currently human.

Zad

12,699 posts

236 months

Sunday 19th February 2017
I wrote a 20,000-word thesis on AI at university. Admittedly that was 20 years ago, but not much has changed. The TL;DR of it is that we can't even work out the question, let alone the answer. I cannot prove objectively that anyone else is sentient. Hell, most mornings I have all on to prove that about myself.

Frimley111R

15,661 posts

234 months

Thursday 8th June 2017
Look out, we're all doomed, again. Elon says so...

https://flipboard.com/@FoxNews/-elon-musk-says-art...

Guvernator

13,156 posts

165 months

Thursday 8th June 2017
Some of that is highly extrapolated, but it will happen at some point if we continue down the path of technology development we are currently on. We don't even need true AI for a computer to do most of the jobs we do now, just a really well-programmed one.

I am 40-odd, so I don't think it will happen in my lifetime, but for my daughter this will be a very real issue when she enters her working life. The problem the Oxford study highlights will have a huge impact on our society. In 30 years, when a computer can do 90% of the jobs humans do now, but more safely, cheaply and efficiently, what will all those millions of humans do with their time and, more importantly, how will they earn a living to pay for food, clothes, housing, etc.?

smn159

12,654 posts

217 months

Thursday 8th June 2017
Guvernator said:
Some of that is highly extrapolated, but it will happen at some point if we continue down the path of technology development we are currently on. We don't even need true AI for a computer to do most of the jobs we do now, just a really well-programmed one.

I am 40-odd, so I don't think it will happen in my lifetime, but for my daughter this will be a very real issue when she enters her working life. The problem the Oxford study highlights will have a huge impact on our society. In 30 years, when a computer can do 90% of the jobs humans do now, but more safely, cheaply and efficiently, what will all those millions of humans do with their time and, more importantly, how will they earn a living to pay for food, clothes, housing, etc.?
Universal basic wage, funded through taxation on the companies which operate the AIs - i.e. equivalent to the wages paid to the replaced humans?
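Back of an envelope, with completely made-up numbers, just to show what "equivalent" would mean (a sketch, not a costed policy):

# Hypothetical figures only: a levy equal to the replaced wage bill,
# redistributed as a basic wage to the displaced workers.
replaced_workers = 1_000_000          # made-up number of automated-away jobs
average_wage = 25_000                 # made-up average annual wage in GBP

levy = replaced_workers * average_wage        # tax take = the old wage bill
basic_wage = levy / replaced_workers          # spread back over the same people

print(f"Levy raised: £{levy:,.0f} a year")           # £25,000,000,000
print(f"Basic wage per person: £{basic_wage:,.0f}")  # £25,000, i.e. roughly what they earned

The moment the levy is set below the old wage bill, though, the basic wage drops below the old wages; that's the obvious catch.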

Current machine intelligence is really just fairly basic decision-making, but applied to very large data sets.

Guvernator

13,156 posts

165 months

Thursday 8th June 2017
smn159 said:
Universal basic wage, funded through taxation on the companies which operate the AIs - i.e. equivalent to the wages paid to the replaced humans?
A few problems I can see with this solution.

1. A universal basic wage means you don't have the chance to better your situation. How do you improve your lot in life? What if you aspire to more than just a basic standard of living? Also, without the motivation to do better, what is your purpose in life?

Which leads nicely on to...

2. The devil makes work for idle hands: you'll have millions of bored humans sitting around with lots of time on their hands and not much to do. A basic wage might not allow them to live out their dreams of an idyllic lifestyle; you need money to properly enjoy all that free time. Some people can handle this; most won't, and it will cause grief.

3. What is the motivation for companies to replace humans with AI and then still pay those humans to sit around doing nothing? How will governments enforce this when they can't even make companies pay their proper share of taxes now? The more likely scenario is that humans will be out of work and left to fend for themselves.

4. With only a basic wage, who will be able to buy all the wonderful toys, cars, gadgets and services that these AIs will be producing at an accelerated rate?

The Oxford study is a good start, but I think this needs some serious consideration; otherwise we could face major problems as a society if we aren't properly prepared for the transition to AI. It's all well and good for sci-fi like Star Trek to portray a wonderful socialist utopia where money doesn't exist, everything is free and people work for the hell of it, but there will be massive social upheaval before we can approach anything like that, and I suspect the process won't be easy.

smn159

12,654 posts

217 months

Thursday 8th June 2017
Guvernator said:
A few problems I can see with this solution.

1. A universal basic wage means you don't have the chance to better your situation. How do you improve your lot in life? What if you aspire to more than just a basic standard of living? Also, without the motivation to do better, what is your purpose in life?

Which leads nicely on to...

2. The devil makes work for idle hands: you'll have millions of bored humans sitting around with lots of time on their hands and not much to do. A basic wage might not allow them to live out their dreams of an idyllic lifestyle; you need money to properly enjoy all that free time. Some people can handle this; most won't, and it will cause grief.

3. What is the motivation for companies to replace humans with AI and then still pay those humans to sit around doing nothing? How will governments enforce this when they can't even make companies pay their proper share of taxes now? The more likely scenario is that humans will be out of work and left to fend for themselves.

4. With only a basic wage, who will be able to buy all the wonderful toys, cars, gadgets and services that these AIs will be producing at an accelerated rate?

The Oxford study is a good start, but I think this needs some serious consideration; otherwise we could face major problems as a society if we aren't properly prepared for the transition to AI. It's all well and good for sci-fi like Star Trek to portray a wonderful socialist utopia where money doesn't exist, everything is free and people work for the hell of it, but there will be massive social upheaval before we can approach anything like that, and I suspect the process won't be easy.
Maybe the transition over time will be away from the current socio-economic model, whereby improving your lot in life is defined as earning more money so that you can consume more stuff and buy a bigger house or car. Maybe success will be measured in terms of personal betterment or growth in other ways.

I think that you're right and that the process will be a painful one. The tendency will be for those who control the AIs to want to accumulate the wealth for themselves.