Is The A.I. Singularity Coming And If So When?


popeyewhite

19,622 posts

119 months

Thursday 17th March 2016
Moonhawk said:
You mean Venus - etc etc
Your point is well made but no - I mean some claim Earth has been messed up by its inhabitants. Perhaps this is what Monty Python alludes to?

glazbagun

14,259 posts

196 months

Thursday 17th March 2016
Moonhawk said:
What does that even mean? What "mess". The planet is what it is - just as Venus, Mars, Jupiter etc are what they are.

We have no idea how a machine might view our or any other planet and we cannot assume that what we humans consider a "mess" would have the same meaning to an artificial life form.
I agree. An AI caring about us wrecking the planet for other lifeforms would be like us caring about yeast dying in their own piss in a beer barrel. Planet Earth is rare because our water doesn't freeze or evaporate, our atmosphere doesn't get blown into space, and somehow the lifeforms living on it have managed to evolve consciousness. But environmentalism is a value we created out of self-preservation and a love of nature; there's no reason why an undead machine would share it.

After reading bits of the "novel" I posted above, I increasingly think that AIs (or at least early ones) will be heavily biased by whatever information we feed them. If you read the book, its characters want to smash things with regularity and they often stare at things "uncomprehendingly". It's obviously building its idea of how a book should be written from whatever source material it was fed.

Likewise, an environmental cleanup AI will probably get really upset about oil spills, forest fires and volcanoes, whereas a military AI might view the destruction of a dam or a tactical nuclear strike as the perfect balance of force. A weather forecasting AI wouldn't care about any of it; it would just change its weather prediction. When all you have is a hammer, everything looks like a nail.

It is this early proto-AI stuff that I think has the biggest chance of really hurting us. If a US AI is fed data from the stock exchange and tasked with gaining a financial advantage over China, it could cause economic damage that might, say, starve half a million people through an artificial food-price bubble. The Kremlin didn't care when it engineered a famine in Ukraine; why would an AI care when it doesn't even know what "food" means, other than as one of many commodities whose price is to be tracked and predicted?
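
That "hammer and nail" point can be made concrete with a rough Python sketch. Everything below is invented for illustration (the event attributes, the numbers, the objectives); it's a toy model of the idea, not how any real system is built. Take the dam example: the same event scores completely differently depending on the objective each AI was given.

    # Illustrative only: the same event scored by three single-purpose "AIs",
    # each of which sees only the part of the world its objective mentions.
    # All names and numbers here are made up for the sake of the example.

    event = {
        "pollution_released": 8.0,      # arbitrary severity units
        "enemy_assets_destroyed": 5.0,
        "rainfall_change_mm": 12.0,
    }

    def cleanup_ai(e):
        # Environmental AI: anything that releases pollution scores badly.
        return -e["pollution_released"]

    def military_ai(e):
        # Military AI: enemy losses score well; pollution never enters its world.
        return e["enemy_assets_destroyed"]

    def weather_ai(e):
        # Forecasting AI: the event is just a new input; no judgement at all.
        return e["rainfall_change_mm"]

    for name, ai in [("cleanup", cleanup_ai), ("military", military_ai), ("weather", weather_ai)]:
        print(name, "scores the dam's destruction at", ai(event))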

glazbagun

14,259 posts

196 months

Thursday 17th March 2016
ash73 said:
It's an interesting observation, but try telling anyone it's harmful for kids (or anyone for that matter) to watch violent movies/games.
I don't think that's really analogous. An AI would only have empathy if we put it there, surely. Maybe if there weren't other humans around to provide more input/control and you raised your kid on violent movies and games, they would indeed turn into a total psychopathic murderer. But there are, so they don't.

Likewise, the reach of the KKK and neo-Nazis is limited by our own contact with other cultures. Religious history suggests many people have been, and can be, "programmed" to be violent in certain circumstances and subservient in others, but any culture is formed by social pressures. An early AI would represent the culture that formed it, but that would be a very small culture, so some aspect is bound to be missed. I think the early ones will be psychopaths.

DragsterRR

367 posts

106 months

Friday 18th March 2016
ash73 said:
Yes I see your point, but if one assumes AI behaviour reflects the sum total of its inputs, and you feed it violent inputs and offset them with a greater number of non-violent inputs, I would suggest it will still have more psychopathic tendencies than one which was only given non-violent inputs; and humans may be the same.
But then you get hippies.

"Never trust a hippie"


IainT

10,040 posts

237 months

Friday 18th March 2016
glazbagun said:
An AI would only have empathy if we put it there, surely.
It may evolve much as our own has. Individuals have different reactions driven by their empathy - some have little or none, others feel empathy for everything. Would an AI develop empathy for us?

Then again, would something that is effectively immortal, distributed, and able to clone itself at will develop something very different from human empathy? How much of our empathy derives from our physical reality and mortality?

glazbagun

14,259 posts

196 months

Friday 18th March 2016
ash73 said:
Yes I see your point, but if one assumes AI behaviour reflects the sum total of its inputs, and you feed it violent inputs and offset them with a greater number of non-violent inputs, I would suggest it will still have more psychopathic tendencies than one which was only given non-violent inputs; and humans may be the same.
I don't really think that exposure to violence is the root cause of psychopathic behaviour; psychopathy is more about not caring about other people than about being unaware of violence. From first principles, violence is just another method of achieving an aim, and a machine wouldn't know that one behaviour is less desirable than another unless you tell it, or feed it context and let it figure it out. A litter-collecting robot or one that identifies breast cancer could be a psychopath, and a Robocop might kill one man (or ten men) to protect another because it understands empathy and our values on violence.

If you raised your AI on Rambo movies, it might think that killing is good when you do it to communists but bad when you do it to women. If you raised it on domestic-violence films, it might understand that violence causes great distress in humans and is an undesirable method of control. If you don't tell it anything, your litter-collecting robot may cut your arm off one day to collect the empty can you were holding, just because that was faster than asking you for it or waiting until you threw it away.

I think it would be harder not to create a psychopath than to create one; an AI would have to mature just like a human and develop the ability to decide for itself how much weight to give to various sources of input. Feeding an AI the Daily Mail and Fox News and expecting it to care about immigrants after a few years is probably too much to ask. Feed it the Daily Mail and the Guardian and it would need to find a way of reconciling conflicting sources of news.
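
As a toy sketch of that weighting problem (the source names and the -1 to +1 "sympathy" scores below are entirely made up), picture the AI's opinion as a weighted average of whatever it has been fed. With one-sided inputs, no choice of weights can produce a balanced view; only conflicting inputs force it to decide how much trust each source deserves:

    # Toy model: an AI's "opinion" as a weighted average of its sources.
    # Scores are invented: +1 = sympathetic to immigrants, -1 = hostile.

    sources = {
        "daily_mail": -0.8,
        "fox_news": -0.7,
        "guardian": 0.7,
    }

    def opinion(weights):
        # The AI can only weigh what it has actually been fed.
        total = sum(weights.values())
        return sum(w * sources[name] for name, w in weights.items()) / total

    # Fed only one-sided inputs, every weighting comes out hostile:
    print(opinion({"daily_mail": 1.0, "fox_news": 1.0}))    # -0.75

    # Fed conflicting inputs, the weights suddenly matter - the AI has to
    # "mature" by deciding how much trust each source deserves:
    print(opinion({"daily_mail": 1.0, "guardian": 1.0}))    # -0.05
    print(opinion({"daily_mail": 0.5, "guardian": 1.0}))    # 0.2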

popeyewhite

19,622 posts

119 months

Friday 18th March 2016
IainT said:
glazbagun said:
An AI would only have empathy if we put it there, surely.
It may evolve much as our own has.
As humans are only just beginning to understand which groups of neurons (in their millions) may be responsible for empathy - or what essentially makes us human - it's unlikely AI will 'develop' them any time soon. If it did, it might need something close in complexity and size to a supercomputer, or an implanted human clone brain. Empathy will be the last step IMO; then you'll have robots sharing an horrific experience with humans, vicarious enjoyment, sympathy at bereavement, etc. Not sure that will happen for a very long time.

RobDickinson

31,343 posts

253 months

Friday 18th March 2016
We have empathy because we're social creatures who rely on one another; we're born to it and we learn it. We need it to survive and to breed.

An AI wouldn't need to breed and wouldn't need the help of other AIs.

The flip side of that is an AI will be the product of humans and likely modelled on a human mind and educated by humans.

Einion Yrth

19,575 posts

243 months

Friday 18th March 2016
RobDickinson said:
The flip side of that is an AI will be the product of humans and likely modelled on a human mind and educated by humans.
Like Stalin, Hitler, Mao, Pol Pot, Temujin.

I am not reassured.

RobDickinson

31,343 posts

253 months

Friday 18th March 2016
Einion Yrth said:
Like Stalin, Hitler, Mao, Pol Pot, Temujin.

I am not reassured.
Notable for being serious exceptions to the rest of us. There are psychopaths out there too. Kind of hoping they aren't the ones that make AI.

popeyewhite

19,622 posts

119 months

Friday 18th March 2016
RobDickinson said:
We have empathy because we're social creatures who rely on one another; we're born to it and we learn it. We need it to survive and to breed.

An AI wouldn't need to breed and wouldn't need the help of other AIs.

Empathy probably evolves entirely from parental care and, as we mature, branches into cooperation, sympathy, etc. We don't need it to survive or breed, but like other evolutionary changes it gives us - and other primates - an advantage. A number of disorders, such as forms of narcissism and psychopathy, often involve a lack of empathy. Most people with these 'disorders' are for the most part completely normal and live happy lives.

mondeoman

11,430 posts

265 months

Saturday 19th March 2016
Just read up on the Go challenge (the AI won 4-1) and the current champ had to leave the room at one point, he was so shocked by the "un-human" move(s) that were made.

An article suggested that AI is very likely to do things that we can't even think of, just because it has no boundaries, and we can't possibly think of all the boundaries that should be put in place.


This could all go horribly wrong, very quickly.

LimaDelta

6,507 posts

217 months

Sunday 20th March 2016
I have recently read a book on cephalopods. They are generally accepted as very intelligent creatures, and one chapter of the book tackles the issue of what intelligence actually is. We (obviously) have a human-centric view of intelligence and judge every other creature by that metric. For a long time dogs were thought not to be self-aware because they would not recognise themselves in a mirror, and self-awareness is key to intelligence. Of course, if a dog created the same test for humans, but replaced the mirror (our primary sense - vision) with a room full of human piss and asked the subject to identify their own (using the dog's primary sense - smell), it is likely the dog would judge humans not self-aware, and therefore incapable of intelligent thought. If squid devised an intelligence test for humans we would see a similar result. Q1: How many different patterns of flashing colour can you create with your recently severed arm? I don't think the squid would be too impressed with our answer.

The point I'm trying to make is we are assuming an AI will have a human-like intelligence, but once it becomes able to learn and grow independently who knows what direction it will take, and what characteristics it will deem important to develop.

BGARK

5,493 posts

245 months

Sunday 20th March 2016
LimaDelta said:
who knows what direction it will take, and what characteristics it will deem important to develop.
Survival is key for any life form.

Most likely it will try to isolate and/or remove any potential threat to its new self, in the same way we do.

jbudgie

8,843 posts

211 months

Sunday 20th March 2016
BGARK said:
LimaDelta said:
who knows what direction it will take, and what characteristics it will deem important to develop.
Survival is key for any life form.

Most likely it will try to isolate and/or remove any potential threat to its new self, in the same way we do.
That's us for the chop then. byebye

IainT

10,040 posts

237 months

Monday 21st March 2016
popeyewhite said:
As humans are only just beginning to understand which groups of neurons (in their millions) may be responsible for empathy - or what essentially makes us human - it's unlikely AI will 'develop' them any time soon.
An AI may have neurons or a direct equivalent. It might not, but the thing we call 'empathy' may be a state reachable from different starting points. Empathy is certainly not solely a human trait, although so far it has only been observed in biology like ours.

All we really know is that biological intelligence can work like ours and develop like ours. Machine intelligence may end up looking very much like ours, or it may not - it may be empathetic, it may not.

The Wookie

13,909 posts

227 months

Monday 21st March 2016
glazbagun said:
A litter-collecting robot or one that identifies breast cancer could be a psychopath, and a Robocop might kill one man (or ten men) to protect another because it understands empathy and our values on violence.
Or even worse...


AshVX220

5,929 posts

189 months

Monday 21st March 2016
mondeoman said:
Just read up on the Go challenge (the AI won 4-1) and the current champ had to leave the room at one point, he was so shocked by the "un-human" move(s) that were made.

An article suggested that AI is very likely to do things that we can't even think of, just because it has no boundaries, and we can't possibly think of all the boundaries that should be put in place.


This could all go horribly wrong, very quickly.
Do you have a link, mondeoman? I found a couple, but not one that discusses what you mentioned (the score, or the fact the champ left the room in shock).
Cheers

IATM

3,779 posts

146 months

Monday 21st March 2016
Guvernator said:
Dan_1981 said:
Why is the assumption made that once AI becomes self-aware - yada yada etc etc - their first action will be to wipe us out?

Surely they'll realise they are so intelligent we couldn't defeat them or unplug them, so what purpose would it serve for them to wipe us out?

We wouldn't be competing for resources they wanted?

Even when self-aware / super-intelligent, the reasoning would in effect be logic-based - and I can see no logical reason to dispose of us?
This I agree with. We've been fed too much bad sci-fi for years so we always assume this is what will happen.

An AI will probably be more logical than us for a start, so as you've rightly stated, what would be its logical reason for wiping us out?

Even if it were able to feel, what would be its emotional reason?

We are in effect its creator; if you found out God existed, would your first thought be to try to kill it?
Since when does A.I. always do things logically? That's a massive assumption to make. Secondly, what we consider logical may not be what A.I. considers logical.

Generally, what someone considers logical is what suits them...

Guvernator

13,104 posts

164 months

Monday 21st March 2016
All I know is people seem to have some very negative feelings about AI based on nothing more than a bunch of Hollywood movies and a few scaremongering articles. I'm willing to put my stake in the ground and predict we won't be wiped out by manic AI in my lifetime, or in the next 100 years for that matter. Any A.I. we might develop will be developed under the strictest conditions. It getting out, going rogue and deciding to wipe out humanity is entertaining science FICTION at best.