Is The A.I. Singularity Coming And If So When?

SpudLink

5,778 posts

192 months

Monday 21st March 2016
Guvernator said:
Any A.I. which we might develop will be developed under the strictest conditions.
One would hope so. And if it were being developed by an American corporation or a British university, that might well be the case. However, suppose the Chinese government were to get hold of research from Google or MIT or DARPA. Can we trust them to apply the appropriately strict conditions? Or Russia? Or... North Korea?

I'm not scaremongering, but I don't think everyone shares our 'standards'.

popeyewhite

19,867 posts

120 months

Monday 21st March 2016
IainT said:
An AI may have neurons or directly equivalent.
What will manage the job that billions of neurons do so very well now? I don't think an AI will ever reach the complexity of emotional response that a human brain does.

IainT said:
It might not but the thing that we call 'empathy' may be a state reachable from different places.
Well, neuroscience and evolutionary psychology both suggest empathy arises from parental nurture. Any suggestions how empathy might be engineered?

IainT said:
Empathy is certainly not solely a human trait although only observed in biology identical to ours.
I think I've mentioned that primates have been observed to display empathetic traits, but birds have as well.

IainT said:
All we really know is that biological intelligence can work like ours and develop like ours. Machine intelligence may end up looking very much like ours or may not - it may be that it is empathetic, it may not.
Unless an AI can be engineered to grow and learn, it's unlikely that true empathy will ever be achieved by a machine.


ewenm

28,506 posts

245 months

Monday 21st March 2016
Easy to avoid the AI singularity - just give the AI a PH (or Facebook or Twitter or etc) logon and let it argue away ad infinitum. wink

IainT

10,040 posts

238 months

Monday 21st March 2016
My 'guess' would be that a true strong AI would not be implemented on standard computer hardware - super or not. It'll most likely take a breakthrough in quantum computing. I used to think we'd actually see it come in some hybrid form of computer-biology interface, but the raw speed difference between the two makes me doubt that as a likely route.

I'd expect AIs to develop/learn/grow much as biological entities do, but if it's nurturing that leads to empathy then I doubt we'll see AI exhibit it unless it's something built in by the original 'programming'. AI without empathy is quite a scary concept, just as it is in a person.

Piersman2

6,597 posts

199 months

Monday 21st March 2016
ewenm said:
Easy to avoid the AI singularity - just give the AI a PH (or Facebook or Twitter or etc) logon and let it argue away ad infinitum. wink
Hmmm... akin to the end scene in Wargames with the computer playing tic-tac-toe against itself! smile

AshVX220

5,929 posts

190 months

Monday 21st March 2016
I'll put this in here (again) for those that haven't seen it; it's a fascinating read.
Part 1 - http://waitbutwhy.com/2015/01/artificial-intellige...

Part 2 - http://waitbutwhy.com/2015/01/artificial-intellige...

Guvernator

13,153 posts

165 months

Monday 21st March 2016
ash73 said:
I disagree with nearly everything you've said, apart from the timescale. I think it's the one sci-fi scenario that is most likely to become reality; in fact I would say there is a probability of 1.0 that a self-aware AI will be created given sufficient time, and we have absolutely no idea what it will do. We're not talking about a mindless virus that can be controlled in lab conditions, we're talking about something smart, which will learn and reprogram itself at an exponential rate, created by logically imperfect minds. Good luck trying to predict its behaviour.
Oh I have no doubt that at some point AI will be a reality. If we just take into account the increase in processing power alone, we'll probably be in a position to equal the power of the brain in the next 20 years or so. We'll still have the huge obstacle of working out how we program intelligence, but again that's not an insurmountable problem given human ingenuity.
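For what it's worth, here's a rough back-of-envelope sketch of that extrapolation. Every number in it is an illustrative assumption, not a fact: a single high-end 2016 GPU at ~10^13 FLOPS as the starting point, compute doubling every two years, and brain estimates anywhere from 10^16 to 10^18 operations per second.

```python
import math

# Illustrative figures only - every one of these is a contested assumption.
DOUBLING_PERIOD_YEARS = 2.0   # Moore's-law-style doubling (historically ~18-24 months)
START_FLOPS = 1e13            # roughly a single high-end 2016 GPU
BRAIN_ESTIMATES = {           # estimates of the brain's "compute" vary wildly
    "low estimate (1e16 ops/s)": 1e16,
    "high estimate (1e18 ops/s)": 1e18,
}

for label, target in BRAIN_ESTIMATES.items():
    doublings = math.log2(target / START_FLOPS)   # doublings needed to reach the target
    years = doublings * DOUBLING_PERIOD_YEARS
    print(f"{label}: ~{years:.0f} years to raw-compute parity at this pace")
```

On the low brain estimate that lands at roughly the "20 years or so" figure; on the high estimate it's more like 30-plus, which is really the point: the answer is dominated by which brain estimate you believe, not by the hardware trend.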

So I'm not arguing that we won't achieve AI; what I am arguing is that everyone seems to think that as soon as we do, it will decide humans are surplus to its requirements and snuff us out. My argument is: why would it? If we've admitted we can't predict its behaviour, why does everyone seem so intent on predicting that it will annihilate us?

I'll offer a different scenario. We develop AI and it becomes stronger/faster/more intelligent than anything that has gone before. It takes a look at its creators, sees that we are struggling, and instead of wiping us out it decides to help us improve our situation, using its intelligence to spring us into a new age. My scenario is just as likely, if not more so, but for some reason everyone jumps to the default 'beware the rise of the machines' hyperbole. Like I said, too much bad sci-fi.

IainT

10,040 posts

238 months

Monday 21st March 2016
Guvernator said:
So I'm not arguing that we won't achieve AI; what I am arguing is that everyone seems to think that as soon as we do, it will decide humans are surplus to its requirements and snuff us out. My argument is: why would it? If we've admitted we can't predict its behaviour, why does everyone seem so intent on predicting that it will annihilate us?
It's because we tend to anthropomorphise things and assume it would do what we'd probably do!

Terminator X

15,075 posts

204 months

Tuesday 22nd March 2016
mondeoman said:
Just read up on the Go challenge (the AI won 4-1) and the current champ had to leave the room at one point, as he was so shocked by the "un-human" move(s) that were made.

An article suggested that AI is very likely to do things that we can't even think of, just because it has no boundaries, and we can't possibly think of all the boundaries that should be put in place.

This could all go horribly wrong, very quickly.
Don't worry, there are clever people in charge of it all smash

TX.

RobDickinson

31,343 posts

254 months

Tuesday 22nd March 2016
AI isn't (currently) likely to have the ability to kill us all.

It might have a small, local ability to kill some people, but generally there aren't enough dangerous things that can be operated remotely for it to be a problem.

And any AI would be constrained by the hardware it's running on. Perhaps it could 'escape' onto the internet, but lag time and the generally low power of the nodes out there would cause it real issues, in comparison to, say, Tianhe-2, which is approaching the size and power of a human brain (if not the complexity).
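To put some purely illustrative numbers on that lag-time point (a rough sketch; the latencies below are typical ballpark values, not measurements of anything):

```python
# Rough comparison of how long an AI would wait on data depending on where it lives.
# All latency figures are typical ballpark values and will vary in practice.
CLOCK_HZ = 3e9  # ~3 GHz core, just to give the waits a sense of scale

latencies = {
    "local RAM access": 100e-9,            # ~100 ns
    "same-datacentre round trip": 0.5e-3,  # ~0.5 ms
    "cross-internet round trip": 80e-3,    # ~80 ms, varies wildly with distance/route
}

for name, seconds in latencies.items():
    cycles_idle = seconds * CLOCK_HZ
    print(f"{name:28s}: {seconds * 1e6:>10.1f} us  (~{cycles_idle:,.0f} clock cycles idle)")
```

The exact values don't matter much; the point is the roughly million-fold gap between staying on one machine and trying to coordinate computation across arbitrary internet nodes.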

Terminator X

15,075 posts

204 months

Tuesday 22nd March 2016
ash73 said:
Guvernator said:
All I know is people seem to have some very negative feelings about AI based on nothing more than a bunch of Hollywood movies and a few scaremongering articles. I'm willing to put my stake in the ground and predict we won't be wiped out by manic AI in my lifetime, or in the next 100 years for that matter. Any A.I. which we might develop will be developed under the strictest conditions. It getting out, going rogue and deciding to wipe out humanity is entertaining science FICTION at best.
I disagree with nearly everything you've said, apart from the timescale. I think it's the one sci-fi scenario that is most likely to become reality; in fact I would say there is a probability of 1.0 that a self-aware AI will be created given sufficient time, and we have absolutely no idea what it will do. We're not talking about a mindless virus that can be controlled in lab conditions, we're talking about something smart, which will learn and reprogram itself at an exponential rate, created by logically imperfect minds. Good luck trying to predict its behaviour.
Switch under their chin though to turn them off ...



TX.

Monty Python

4,812 posts

197 months

Tuesday 22nd March 2016
No - can't see it happening myself. We don't even know how the human brain works as everyone is different. It's also constantly changing, creating new signal pathways in response to external stimuli.

The other question is why we would want to. Surely we'd be better off expending more effort on curing the various afflictions we suffer from at the moment than on trying to create "artificial people".

Guvernator

13,153 posts

165 months

Tuesday 22nd March 2016
Monty Python said:
No - can't see it happening myself. We don't even know how the human brain works as everyone is different. It's also constantly changing, creating new signal pathways in response to external stimuli.

The other question is why we would want to. Surely we'd be better off expending more effort on curing the various afflictions we suffer from at the moment than on trying to create "artificial people".
Why wouldn't we? There are three massive draws I can see for creating AI:

1) The hope that we'll be able to create something more intelligent than us, which in turn may help us to improve the human race.

2) Immortality. If we can create artificial brains and learn a lot about how the human brain works in the process, we may be able to combine the two and transfer our brains into artificial media, which means we could cheat death.

3) Playing God. For some, the lure of creating "artificial life" will be a goal in itself.

Monty Python

4,812 posts

197 months

Tuesday 22nd March 2016
Guvernator said:
Why wouldn't we? There are three massive draws I can see for creating AI:

1) The hope that we'll be able to create something more intelligent than us, which in turn may help us to improve the human race.

2) Immortality. If we can create artificial brains and learn a lot about how the human brain works in the process, we may be able to combine the two and transfer our brains into artificial media, which means we could cheat death.

3) Playing God. For some, the lure of creating "artificial life" will be a goal in itself.
The way we're going at the moment it'll be a race between this happening and us wiping each other out (or the planet doing it for us) - and I expect the latter to happen first.

Actually, thinking about it a bit more, how do we know we haven't already done so? Maybe they're testing us....

Edited by Monty Python on Tuesday 22 March 12:56

warp9

1,583 posts

197 months

Tuesday 22nd March 2016
So in this thread we've established that we don't know when or on what system true AI will occur, how we will measure it, or what it will think like, and we have no idea what it will do or how it will behave.

But there is a consensus that the human race is actively working towards it and in x years it will happen.

Is this a light bulb moment or a gradual awakening? At what point will it have its own opinion and agenda? Will we know?

Ex Machina was interesting in that the abilities of the AI robot were well ahead of what its human overlords thought, and it used replicated emotion to achieve its goals.

A previous poster mentioned remembering the first time they were lied to. My son first told fibs at two years old. Are we really so naive as to think an artificial construct wouldn't tell fibs?

An earlier poster in this thread said he had a theory that the internet was already conscious. Maybe they are testing!

Mothersruin

8,573 posts

99 months

Tuesday 22nd March 2016
ash73 said:
That film's interesting because of the act of deceit by the AI. I can actually remember the first time someone lied to me, when I was a toddler at playschool; I was really shocked.
Santa isn't real mate.

Guvernator

13,153 posts

165 months

Tuesday 22nd March 2016
warp9 said:
An earlier poster in this thread said he had a theory that the internet was already conscious. Maybe they are testing!
If it is, then it'll be a narcissistic nutcase pervert. Supposedly 90% of all internet traffic is porn and social media. I dread to think what that AI will be like when it decides to take over. wink

Derek Smith

45,655 posts

248 months

Tuesday 22nd March 2016
Would AI be immune to evolution?

1/ If so, then it will collapse, because it will be unlikely to change to fit its circumstances if it is self-aware.

2/ If it is subject to evolution, then it will change into something more sensible.

3/ If it is immune and tries to change, it will get it wrong, as it will have conceit.


Guvernator

13,153 posts

165 months

Tuesday 22nd March 2016
Derek Smith said:
Would AI be immune to evolution?

1/ If so, then it will collapse, because it will be unlikely to change to fit its circumstances if it is self-aware.

2/ If it is subject to evolution, then it will change into something more sensible.

3/ If it is immune and tries to change, it will get it wrong, as it will have conceit.

Evolution is a VERY slow process. A creature that is clever enough will be able to adapt itself or its tools to a slowly changing environment; it's what humans do now, after all. You could argue that modern humans have slowed the rate of evolution even further, as we adapt the environment to suit us rather than the other way round. No reason to think that an artificial intelligence wouldn't do the same.

mondeoman

11,430 posts

266 months

Tuesday 22nd March 2016
AshVX220 said:
mondeoman said:
Just read up on the Go challenge (the AI won 4-1) and the current champ had to leave the room at one point, as he was so shocked by the "un-human" move(s) that were made.

An article suggested that AI is very likely to do things that we can't even think of, just because it has no boundaries, and we can't possibly think of all the boundaries that should be put in place.


This could all go horribly wrong, very quickly.
Do you have a link, mondeoman? I found a couple, but not one that discusses what you mentioned (the score, or the fact the champ left the room in shock).
Cheers
http://qz.com/636637/the-beginning-of-the-end-googles-ai-has-beaten-a-top-human-player-at-the-complex-game-of-go/

Wasn't the original site, but it'll do.