Is The A.I. Singularity Coming And If So When?


popeyewhite

19,622 posts

119 months

Friday 11th December 2015
quotequote all
So a machine would require knowledge of its own existence to be considered intelligent? What about a sense of identity, the nature of itself - some sort of ontological awareness?

otolith

55,899 posts

203 months

Friday 11th December 2015
quotequote all
Those are things that we tend to expect of strong AI, but it's arguable to what extent that's based on an inability to conceive of a consciousness other than our own.

popeyewhite

19,622 posts

119 months

Friday 11th December 2015
quotequote all
otolith said:
Those are things that we tend to expect of strong AI, but it's arguable to what extent that's based on an inability to conceive of a consciousness other than our own.
If intelligence in this context is measured as a human attribute, then consciousness should be as well, surely? Without consciousness, can emotion be expressed? Simple emotions such as anger, pain etc. don't require thought... but what about longing, desire, jealousy? Are these anticipated in 'strong AI'?

Guvernator

13,104 posts

164 months

Friday 11th December 2015
quotequote all
A lot of what we know as emotion is made up of chemical reactions as well as life experience, so I'm not sure how we would mimic that except in a purely logical way, i.e. if you see/hear/smell this, react like this.

I think where we will struggle the most is in making a computer actually understand abstract concepts. We still don't know how humans do it: when you ask a human what they think about something like a painting, the answer you get is based on so many disparate things, often accumulated through years of life experience unique to that individual. How would you mimic that in AI unless you built an AI that could also learn and accumulate experience?

We also get to the important question of what consciousness is: are we humans just really clever machines, or are we something more? If you create an AI so clever that it is able to display human behaviour, how do you know it is really doing that, or whether it is just responding to some really clever pre-programming? If someone asks me how I feel about something, how is the way I would arrive at the answer different from something that has been programmed?

Are things like consciousness, ego, id etc. quantifiable, and therefore, with enough processing power, mimicable? Or are we as humans unique in that regard (note I don't mean in the religious sense)?

Do other animals share some of our traits? Does a monkey understand abstract concepts like self-awareness, time or death? Is it aware enough to know that someday it will cease to exist completely, and is it frightened by that fact?

Until we answer those questions, if we ever can, I don't think we will get true strong/sentient/conscious/whatever-you-want-to-call-it AI; we will simply be mimicking intelligence, which may in itself be good enough for our purposes.

glazbagun

14,259 posts

196 months

Friday 11th December 2015
quotequote all
I was thinking about this topic again while watching The Hidden Life of the Cell:

http://www.dailymotion.com/video/x1f26gz_bbc-our-s...

Everything going on in there is completely brainless, yet wondrous. I found myself willing the cell's defences on, and felt a tinge of tragedy when the cell loses the battle... yet there is no Good/Bad/Right/Wrong going on in there; the projections are entirely my own. The cell just is - it's incapable of caring whether it lives or dies, just as the virus is indifferent to its own proliferation.

The thought of an intelligence without reason gives me the same fear. What can you say to an intelligence that decides that poisoning us all would lead to cleaner drinking water in the long term?

IainT

10,040 posts

237 months

Saturday 12th December 2015
quotequote all
glazbagun said:
What can you say to an intelligence that decides that poisoning us all would lead to cleaner drinking water in the long term?
Lack of empathy is a major part of the definition of psychopathy, and it's not something humanity is incapable of.

Why would an inorganic life form care about clean water? To want to achieve a given end (clean water) it would have to have that critical empathy - and that same empathy would surely lead to better solutions? While it's easy to imagine a machine AI making a cost-benefit analysis and deciding to wipe us out, I think it's entirely unfair on the AI to think of it as being as 'soulless' as an accountant.

IainT

10,040 posts

237 months

Saturday 12th December 2015
quotequote all
Guvernator said:
A lot of what we know as emotion is made up of chemical reactions as well as life experience, so I'm not sure how we would mimic that except in a purely logical way, i.e. if you see/hear/smell this, react like this.

I think where we will struggle the most is in making a computer actually understand abstract concepts. We still don't know how humans do it: when you ask a human what they think about something like a painting, the answer you get is based on so many disparate things, often accumulated through years of life experience unique to that individual. How would you mimic that in AI unless you built an AI that could also learn and accumulate experience?

We also get to the important question of what consciousness is: are we humans just really clever machines, or are we something more? If you create an AI so clever that it is able to display human behaviour, how do you know it is really doing that, or whether it is just responding to some really clever pre-programming? If someone asks me how I feel about something, how is the way I would arrive at the answer different from something that has been programmed?

Are things like consciousness, ego, id etc. quantifiable, and therefore, with enough processing power, mimicable? Or are we as humans unique in that regard (note I don't mean in the religious sense)?

Do other animals share some of our traits? Does a monkey understand abstract concepts like self-awareness, time or death? Is it aware enough to know that someday it will cease to exist completely, and is it frightened by that fact?

Until we answer those questions, if we ever can, I don't think we will get true strong/sentient/conscious/whatever-you-want-to-call-it AI; we will simply be mimicking intelligence, which may in itself be good enough for our purposes.
Some really big questions in there!

Your opener is too simplistic though - it's based on procedural programming concepts, and these will almost certainly not lead to AI, as procedures to cater for every situation are static and impractical. It also doesn't resemble how organic life 'thinks'. I remember programming a simple neural net as part of my degree course. It was just simple procedures and data structures, yet it was capable of learning to recognise 16x16 bitmap images of letters from a scanned image. Very little code, in reality, to do a complex task.
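
For flavour, something along these lines - a minimal illustrative sketch only, with made-up random data standing in for the scanned letters, not the actual coursework code:

```python
import numpy as np

# Illustrative only: a tiny one-layer network that learns to label 16x16
# bitmaps (flattened to 256 inputs) as one of 26 letters. Random noise
# stands in for real scanned letter images here.
rng = np.random.default_rng(0)
n_pixels, n_letters = 16 * 16, 26
X = rng.integers(0, 2, size=(200, n_pixels)).astype(float)  # fake bitmaps
y = rng.integers(0, n_letters, size=200)                     # fake labels

W = np.zeros((n_pixels, n_letters))  # one weight per pixel per letter
b = np.zeros(n_letters)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
for epoch in range(200):
    probs = softmax(X @ W + b)                 # predicted letter probabilities
    onehot = np.eye(n_letters)[y]
    W -= lr * X.T @ (probs - onehot) / len(X)  # nudge weights to reduce mistakes
    b -= lr * (probs - onehot).mean(axis=0)

print("training accuracy:", (probs.argmax(axis=1) == y).mean())
```

The point being that the "intelligence" isn't written as explicit rules anywhere - it falls out of repeatedly adjusting a pile of numbers.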

I'm not convinced that our lack of understanding of what consciousness is, and our inability to quantify it, precludes creating something that can have it. I'm of the view that the first strong general-purpose AI will have been designed by a weaker AI, and that the process of evolution of AI will be very quick, given the relatively short generations it'll have and the guided nature of its evolution.

As for 'mimicking' intelligence - how do you tell the difference between intelligence and something mimicking it? The only way to make the label 'mimicking' stick is to narrow the definition to intelligence = human intelligence, therefore anything not human is not intelligent - which is plainly daft.

We haven't (yet) discovered a magical substance/particle that would be required to make consciousness (or soul) a special case - everything points to it being an artefact of our engineering, the way our brain functions. That's enough for me to be fairly confident that human-level intelligence can be duplicated eventually, and likely surpassed.

ikarl

3,730 posts

198 months

Thursday 28th January 2016
quotequote all
Google achieves AI 'breakthrough' by beating Go champion

http://www.bbc.co.uk/news/technology-35420579


I saw this on the BBC news last night; the thing that made me take an interest was when they said the computer had been programmed to learn from itself by playing previous versions of itself.

The article said: "It now plays different versions of itself millions and millions of times, and each time it gets incrementally better. It learns from its mistakes."
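
Roughly the idea, as I understand it - a toy sketch of self-play learning on a trivially simple game, not anything like what DeepMind's system actually does:

```python
import random
from collections import defaultdict

# Toy self-play illustration (nothing to do with Go): two copies of the same
# value table play "take 1-3 sticks, whoever takes the last stick wins",
# and the positions that led to a win get reinforced after every game.
PILE, LR, EPSILON = 21, 0.1, 0.2
values = defaultdict(float)  # values[pile] ~ how good it is to move from that pile

def choose_move(pile):
    """Mostly pick the move that leaves the opponent in the worst position."""
    moves = [m for m in (1, 2, 3) if m <= pile]
    if random.random() < EPSILON:
        return random.choice(moves)            # occasional exploration
    return min(moves, key=lambda m: values[pile - m])

def play_one_game():
    pile, player, history = PILE, 0, {0: [], 1: []}
    while pile > 0:
        history[player].append(pile)           # positions each player moved from
        pile -= choose_move(pile)
        player ^= 1
    return player ^ 1, history                 # whoever took the last stick wins

for _ in range(20000):
    winner, history = play_one_game()
    for player, positions in history.items():
        reward = 1.0 if player == winner else -1.0
        for pile in positions:                 # learn from this game's outcome
            values[pile] += LR * (reward - values[pile])

# Piles that are multiples of 4 should end up looking bad for the player to move.
print({p: round(values[p], 2) for p in range(1, 9)})
```

Same principle: nobody tells it the winning strategy, it just plays itself over and over and gets incrementally better.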

Edited by ikarl on Thursday 28th January 09:47

Dan_1981

17,352 posts

198 months

Thursday 28th January 2016
quotequote all
I wonder if it uses a neural-net processor?

ikarl

3,730 posts

198 months

Thursday 28th January 2016
quotequote all
Bump for the evening crew

SpudLink

5,669 posts

191 months

Thursday 28th January 2016
quotequote all
This is interesting, but I'm more excited by the programs that are teaching themselves to play computer games without being 'told the rules'. That could lead to the development of self-reliant A.I. that functions in ways we can't predict.

Einion Yrth

19,575 posts

243 months

Thursday 28th January 2016
quotequote all
SpudLink said:
This is interesting, but I'm more excited by the programs that are teaching themselves to play computer games without being 'told the rules'. That could lead to the development of self-reliant A.I. that functions in ways we can't predict.
Read the article - it's a closely related technique, apparently.

Toaster

2,938 posts

192 months

Thursday 28th January 2016
quotequote all
Fear artificial stupidity, not artificial intelligence

https://www.newscientist.com/article/dn26716-fear-...

Monty Python

4,812 posts

196 months

Friday 29th January 2016
quotequote all
Even if we do create true AI, I only see it lasting a few minutes, just long enough for it to look at the mess this planet is in and switch itself off again.

glazbagun

14,259 posts

196 months

Thursday 17th March 2016
quotequote all
A pal of mine posted this to Facebook. It's a novel written by a computer. It's rubbish, and everybody in it feels like smashing things. But it obviously understands where everyone is in the house, how they talk to each other, and what a question is (or rather that a question needs an answer).

http://tinysubversions.com/nanogenmo/novel-2.pdf

Moonhawk

10,730 posts

218 months

Thursday 17th March 2016
quotequote all
Monty Python said:
...just long enough for it to look at the mess this planet is in....
What does that even mean? What "mess"? The planet is what it is - just as Venus, Mars, Jupiter etc. are what they are.

We have no idea how a machine might view our or any other planet and we cannot assume that what we humans consider a "mess" would have the same meaning to an artificial life form.

Would they even need a planet? Surely being constrained to a single sphere of rock is a hindrance.

RobDickinson

31,343 posts

253 months

Thursday 17th March 2016
quotequote all
ash73 said:
I think this is the key point: humans will never get far in space, imo, but AI beings could distribute automated probes and then literally beam themselves around the galaxy at the speed of light. I'm surprised we haven't already encountered them; maybe they just aren't that interested in us (no more than we are in watching penguins in the Antarctic).
There is an equation for replicating machines spreading through the galaxy, and it actually doesn't take that long ( https://en.wikipedia.org/wiki/Self-replicating_spa... says 1/2 million years).

We just don't know why it hasn't happened...
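
Rough numbers, just to show the scale - these are my own guesses rather than the article's figures:

```python
# Back-of-envelope only - all of these numbers are assumptions.
GALAXY_DIAMETER_LY = 100_000      # rough diameter of the Milky Way
PROBE_SPEED_FRACTION_C = 0.2      # assume probes cruise at 20% of light speed
HOP_LY = 100                      # assume a typical hop between target stars
REPLICATION_YEARS = 50            # assume time spent building copies at each stop

travel_years = GALAXY_DIAMETER_LY / PROBE_SPEED_FRACTION_C
replication_years = (GALAXY_DIAMETER_LY / HOP_LY) * REPLICATION_YEARS
total_years = travel_years + replication_years
print(f"~{total_years:,.0f} years to sweep the galaxy")  # ~550,000 with these guesses
```

Even with pessimistic speeds and long pit stops, the answer stays in the hundreds-of-thousands-to-millions-of-years range - an eyeblink on galactic timescales.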

popeyewhite

19,622 posts

119 months

Thursday 17th March 2016
quotequote all
Moonhawk said:
What does that even mean? What "mess"? The planet is what it is - just as Venus, Mars, Jupiter etc. are what they are.
At the last time of looking, none of the planets you mention other than Earth was overpopulated by humans or polluted with record concentrations of CO2, nor had 20 billion tonnes of plastic dumped into its oceans every year, 18 million acres deforested, coral destroyed, Arctic ice melting, etc. etc. I'd say it's certainly more of a mess than... Venus?

Moonhawk

10,730 posts

218 months

Thursday 17th March 2016
quotequote all
popeyewhite said:
Moonhawk said:
What does that even mean? What "mess"? The planet is what it is - just as Venus, Mars, Jupiter etc. are what they are.
At the last time of looking, none of the planets you mention other than Earth was overpopulated by humans or polluted with record concentrations of CO2, nor had 20 billion tonnes of plastic dumped into its oceans every year, 18 million acres deforested, coral destroyed, Arctic ice melting, etc. etc. I'd say it's certainly more of a mess than... Venus?
You mean Venus - the one with an atmosphere composed of 96.5% carbon dioxide (compared to Earth's 0.04%) at a pressure equivalent to a depth of a kilometre under Earth's oceans, with an average surface temperature approaching 500°C, and which frequently experiences sulphuric acid rain... a planet so hostile that it cannot support forests, let alone enough forest to warrant deforestation. It has no coral reefs to destroy, because coral could never survive there. A planet on which most common plastics would spontaneously decompose/carbonise, and which probably hasn't seen liquid water - let alone water ice - for billions of years, if ever. That Venus?

As for the Earth having "record concentrations of CO2" - you need to read up on some prehistoric atmosphere data. The Earth has experienced carbon dioxide concentrations many times its current level. During the Carboniferous period, for example, carbon dioxide was around 10 times its current level - where do you think all the oil and coal came from?

Even with all of the things you mentioned - Earth is a veritable paradise compared to Venus.

Not that artificially intelligent machines need worry about such things. Would a machine need to worry about forests, coral, plastic or CO2? Would it even need to remain on Earth? If machines could build themselves any body and go anywhere they liked - free from the limitations of an organic body with its need for water, food, oxygen, ambient temperatures etc. - why would they knowingly confine themselves to a place that happens to have such conditions? Earth may be a less than ideal location from an artificial life form's perspective, and it's pretty arrogant to assume an artificial intelligence would look at the Earth in the same way as humans do (or at least some humans).

Edited by Moonhawk on Thursday 17th March 23:20


Edited by Moonhawk on Friday 18th March 00:00