Is The A.I. Singularity Coming And If So When?


otolith

56,091 posts

204 months

Monday 18th April 2016
quotequote all
Jabbah said:
There are some new optogenetic techniques that cause neurons to light up when they fire, which makes this a lot easier. They have used it on zebrafish, amongst other things:

https://www.youtube.com/watch?v=YZxTvH-X61o

These techniques also allow neuron behaviour to be controlled using lasers.
That's very interesting, though a zebrafish brain is tiny - it's likely to be a lot harder with something physically larger.

Toaster

2,939 posts

193 months

Tuesday 19th April 2016
quotequote all
davepoth said:
No, actually I think in part we agree, and it's interesting that others haven't picked up on your postings. It's a bit Frankenstein-ish, but a biological computer, which is where this is going, is more likely to achieve the AI world's end goal of re-creating a human brain. I suspect that as time moves on, instead of brain cells in chips there will be a chip in the brain.

If a fully aware biological brain is created, and we are unable to distinguish it from a human's, and that artificially created biological brain is implanted into an android so it can walk, talk and learn, there is then a huge ethical and moral dilemma. Would it have a "soul"? Could it be switched off? Would it be a sentient being, even though an artificially created one?

I honestly do not think that hardware and software, with all the algorithms being discussed, will be the optimal AI platform.

Toaster

2,939 posts

193 months

Tuesday 19th April 2016
quotequote all
Jabbah said:
The brain certainly seems to process information sent through nerves as action potentials to neurons, which either fire or don't depending on other inputs. Computers don't need software; they can be completely defined in hardware. How do you define what is alive? In the end it is just chemistry.
I guess it's a view, a very reductionist one, but a view that I don't think holds much water.

The mind is an emergent property of the brain; also, electrical circuits do operate differently to electrochemical ones.

How do you define alive? Well, just a couple:

Awareness
Self determination


AshVX220

5,929 posts

190 months

Wednesday 20th April 2016
quotequote all
Toaster said:
I guess it's a view, a very reductionist one, but a view that I don't think holds much water.

The mind is an emergent property of the brain; also, electrical circuits do operate differently to electrochemical ones.

How do you define alive? Well, just a couple:

Awareness
Self determination
When do people here think we become "self-aware"? Are we born self-aware (I don't think so), or do we become self-aware as we develop? I don't have children so can't really answer the question, but I would imagine it's around the age of 18 months to 2 years; or is it later, or earlier? Is it when a child can recognise themselves in a mirror? Or when a child is aware of their place in society? Are any animals self-aware, or do they just plod on through life, doing whatever it is they do without any awareness of "self"? Indeed, what is "self-awareness"?

Jabbah

1,331 posts

154 months

Wednesday 20th April 2016
quotequote all
otolith said:
That's very interesting, though a zebra fish brain is tiny - likely to be a lot harder with something physically larger.
True, but it depends on what you are trying to accomplish. The evidence suggests that the brain is based on relatively simple repeating elements that combine to produce intelligent behaviour. For example, experiments have been done rewiring the signals from the eyes in ferrets to the auditory cortex, and the auditory cortex took over the job of processing visual information. Also, human DNA doesn't encode enough information to specify many distinct modules and their detailed wiring in the brain. So if we can determine how these elements work, using tools and techniques such as optogenetics, then it is likely that we will be able to create intelligent systems based on biologically realistic algorithms. Of course, these algorithms may not be the most efficient for intelligence outside a biological substrate; other algorithms might be better and scale to much more intelligent systems than would be possible with biology.
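As a toy illustration of the "simple repeating elements" idea, here is a minimal sketch in Python. Everything in it (the unit design, thresholds, learning rate) is made up for illustration; it is a Hebbian-style caricature, not a model from any real neuroscience toolkit:

```python
import random

class Unit:
    """One generic repeating element: sums weighted inputs, fires past a threshold."""
    def __init__(self, n_inputs, threshold=0.5):
        self.weights = [random.uniform(0.0, 0.5) for _ in range(n_inputs)]
        self.threshold = threshold

    def fire(self, inputs):
        activation = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if activation > self.threshold else 0

    def hebbian_update(self, inputs, output, rate=0.01):
        # "Cells that fire together wire together": strengthen the weights
        # of inputs that were active when the unit fired.
        if output:
            self.weights = [w + rate * x for w, x in zip(self.weights, inputs)]

# The same generic element can be fed "visual" or "audio" input alike;
# what it learns depends on what it is wired to, not on a bespoke design.
layer = [Unit(n_inputs=4) for _ in range(8)]
signal = [1, 0, 1, 0]  # could be pixel features, could be sound features
outputs = [u.fire(signal) for u in layer]
for u, out in zip(layer, outputs):
    u.hebbian_update(signal, out)
print(outputs)
```

The point of the sketch is only that identical, simple units plus a local learning rule can be stamped out in quantity, which is consistent with the ferret rewiring result and the limited wiring information in DNA.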

otolith

56,091 posts

204 months

Wednesday 20th April 2016
quotequote all
Agreed, that's the same idea I alluded to earlier thumbup

Jabbah

1,331 posts

154 months

Wednesday 20th April 2016
quotequote all
Toaster said:
The mInd is an emergent property of the Brain, also electrical circuits do opperate differently to electrochemical ones
What is the brain? For all intents and purposes it is a collection of atoms that have combined to process information through neural spikes and networks. The mind certainly seems to be an emergent property, but emergent from complex information processing.

Toaster said:
Awareness
Self determination
So a baby is not alive for the first six months? How about a plant? An amoeba? Replicating proteins? Where do you draw the line?

Assuming by awareness you mean consciously aware:
http://www.wired.com/2013/04/baby-consciousness/

Do you believe that awareness and self-determination are not possible for machines? What about when the information processing of an animal's brain is duplicated:

http://edition.cnn.com/2015/01/21/tech/mci-lego-wo...

Would that not have as much awareness as the worm?

What is self-determination other than being able to choose actions based on available information? Why isn't a machine capable of that?

Is there something special about organic chemistry that makes it capable of awareness and self-determination in a way that inorganic chemistry isn't? Other than mysticism, of course.
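To make "choosing actions based on available information" concrete, here is a throwaway Python sketch; the sensors, payoffs and weightings are all invented for illustration. Whether this counts as self-determination is, of course, exactly the point under debate:

```python
def choose_action(sensor_readings):
    """Pick the action whose expected payoff, given current information, is highest."""
    actions = {
        "seek_food":  sensor_readings["hunger"] * 2.0,   # weightings are arbitrary
        "seek_water": sensor_readings["thirst"] * 1.5,
        "rest":       sensor_readings["fatigue"],
    }
    return max(actions, key=actions.get)

print(choose_action({"hunger": 0.8, "thirst": 0.3, "fatigue": 0.5}))  # -> seek_food
```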

Edited by Jabbah on Wednesday 20th April 11:03

Toaster

2,939 posts

193 months

Wednesday 20th April 2016
quotequote all
Jabbah said:
So a baby is not alive for the first six months? How about a plant? An amoeba? Replicating proteins? Where do you draw the line?

Assuming by awareness you mean consciously aware:
http://www.wired.com/2013/04/baby-consciousness/

Do you believe that awareness and self-determination are not possible for machines? What about when the information processing of an animal's brain is duplicated:

http://edition.cnn.com/2015/01/21/tech/mci-lego-wo...

Would that not have as much awareness as the worm?

What is self-determination other than being able to choose actions based on available information? Why isn't a machine capable of that?

Is there something special about organic chemistry that makes it capable of awareness and self-determination in a way that inorganic chemistry isn't? Other than mysticism, of course.

Edited by Jabbah on Wednesday 20th April 11:03
https://www.psychologytoday.com/articles/199809/fetal-psychology

As if overturning the common conception of infancy weren't enough, scientists are creating a startling new picture of intelligent life in the womb. Among the revelations:

o By nine weeks, a developing fetus can hiccup and react to loud noises. By the end of the second trimester it can hear.

o Just as adults do, the fetus experiences the rapid eye movement (REM) sleep of dreams.

o The fetus savors its mother's meals, first picking up the food tastes of a culture in the womb.

o Among other mental feats, the fetus can distinguish between the voice of Mom and that of a stranger, and respond to a familiar story read to it.

o Even a premature baby is aware, feels, responds, and adapts to its environment.

o Just because the fetus is responsive to certain stimuli doesn't mean that it should be the target of efforts to enhance development. Sensory stimulation of the fetus can in fact lead to bizarre patterns of adaptation later on.

Jabbah

1,331 posts

154 months

Wednesday 20th April 2016
quotequote all
Reaction is not the same as awareness. A computer can react to stimuli too. What we are trying to determine here is whether your claim of:

Toaster said:
The Brain is not a computer... ...it's a living organism.
and then defining being alive as having awareness and self-determination is valid. You are essentially defining a computer as something that cannot have either of those things. Seems a bit of a circular argument to me.


Toaster

2,939 posts

193 months

Wednesday 20th April 2016
quotequote all
Jabbah said:
Reaction is not the same as awareness. A computer can react to stimuli too. What we are trying to determine here is whether your claim of:

Toaster said:
The Brain is not a computer... ...it's a living organism.
and then defining being alive as having awareness and self-determination is valid. You are essentially defining a computer as something that cannot have either of those things. Seems a bit of a circular argument to me.


I think you're being reductionist, but what the heck, someone has to argue the case smile The human mind is both like and unlike a computer; both positions can be argued for and against when it comes to AI and whether it can replicate a human brain, and we are never going to get the answer on this forum. For me, all I want to do is point out that the behaviourist / reductionist argument never really holds much water.

This is part of an interesting article written by Ari N. Schulman on why minds are not like computers. So whilst some on this thread marvel that we know how to model the neurons and pathways of a brain, as if it were just a case of replication and, bingo, we have AI that is at least as good as a human:

"As the high-level AI project has failed to meet its original goal, some attention has returned to the study of the brain itself under the belief that, if nothing else, we might make a computer think by simply copying the brain onto it. The unit of the mind typically targeted for replication is the neuron, and the assumption has thus been that the neuron is the basic functional unit of the mind. The neuron has been considered a prime candidate because it appears to have much in common with modules of computer systems: It has an electrical signal, and it has input and output for that signal in the form of dendrites and axons. It seems like it would be relatively straightforward, then, to replicate the neuron’s input-output function on a computer, scan the electrical signals of a person’s brain, and boot up that person’s mind on a suitably powerful computer. One ­proponent of this possibility, Zenon Pylyshyn https://en.wikipedia.org/wiki/Zenon_Pylyshyn, a professor at the Rutgers Center for Cognitive Science, describes the “rather astonishing view” of what would happen if replicating the neuron’s external behavior were not sufficient to replicate the mind:

If more and more of the cells in your brain were to be replaced by integrated circuit chips, programmed in such a way as to keep the input-output function of each unit identical to that of the unit being replaced, you would in all likelihood just keep right on speaking exactly as you are doing now except that you would eventually stop meaning anything by it. [His emphasis.]

If Pylyshyn is right that the neuron’s role in the mind is entirely a matter of its electrical input-output specification — that is, that the neuron is a black box — then he is clearly correct that the conclusion of his thought experiment is astonishing and false."

...
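Pylyshyn's thought experiment is easy to phrase in code terms: if the neuron really is just a black box, any replacement with the same input-output mapping is undetectable from outside. A hypothetical Python sketch (the threshold and test patterns are invented; real neurons are vastly more complicated):

```python
def biological_neuron(inputs):
    """Stand-in for the original cell: fires if summed input exceeds a threshold."""
    return 1 if sum(inputs) > 1.0 else 0

def silicon_replacement(inputs):
    """A chip programmed to reproduce exactly the same input-output function."""
    return 1 if sum(inputs) > 1.0 else 0

# From the outside the two are indistinguishable on every tested input...
test_patterns = [(0.2, 0.3), (0.9, 0.4), (1.1, 0.0)]
assert all(biological_neuron(p) == silicon_replacement(p) for p in test_patterns)

# ...so if the mind depends only on this mapping, swapping units one by one
# should change nothing. Denying that, as Pylyshyn notes, commits you to the
# "astonishing" view that behaviour persists while meaning drains away.
```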

Toaster

2,939 posts

193 months

Wednesday 20th April 2016
quotequote all
ash73 said:
I think consciousness is just a byproduct of linking sensory information with a mental model of the world; we use our spare capacity to continually update that model.
Possibly. However, interactive voice response (IVR) has no understanding; it just 'recognises' patterns.

Remember Spinvox? They claimed their technology was better than other IVR systems, and yet huge amounts of human resource were used to translate speech to text...

"We don't actually need to send any messages to human agents... All messages in the first instance will go through our automated voice message conversion system. Only if the system itself is unsure of a particular word or a particular fragment of the message will either a whole or part of the message be sent to an agent for quality control purposes. This in turn is fed back into the system to train it in a live learning mode.

http://www.theregister.co.uk/2009/07/29/spinvox_me...

The world of IVR is still not much further down the road to full automation, so even recognising simple words is far more complex than many would care to admit.
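The workflow described in the Register piece is essentially confidence-threshold routing with a human in the loop. Here is a schematic Python sketch; the threshold, the Fragment fields and both stand-in functions are invented for illustration, not Spinvox's actual system:

```python
from collections import namedtuple

# guess/confidence come from the engine; truth is what a human would type.
Fragment = namedtuple("Fragment", "guess confidence truth")

CONFIDENCE_THRESHOLD = 0.85  # illustrative figure only
training_data = []           # corrections fed back in "live learning mode"

def recognise(fragment):
    """Stand-in for the automated speech-to-text engine."""
    return fragment.guess, fragment.confidence

def send_to_agent(fragment):
    """Stand-in for routing a fragment to a human agent for quality control."""
    return fragment.truth

def convert_message(fragments):
    transcript = []
    for fragment in fragments:
        text, confidence = recognise(fragment)
        if confidence < CONFIDENCE_THRESHOLD:
            # The system is "unsure of a particular word": a human transcribes
            # it, and the correction is stored to retrain the engine.
            text = send_to_agent(fragment)
            training_data.append((fragment, text))
        transcript.append(text)
    return " ".join(transcript)

msg = [Fragment("call", 0.97, "call"), Fragment("me hack", 0.41, "me back")]
print(convert_message(msg))  # -> "call me back"; one fragment needed a human
```

The economics follow directly: the lower the engine's real-world confidence, the more fragments cross the threshold and land on human agents, which is exactly the gap the article highlighted.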

Jabbah

1,331 posts

154 months

Wednesday 20th April 2016
quotequote all
Toaster said:
...for me all I want to do is point out the behaviourist / reductionist argument which never really holds much water
They are not really the same thing, unless you are talking about behaviour at the level of reduction a reductionist would use, for example the ion pumps in the Hodgkin-Huxley neuron model; but then you are really getting towards the behaviour of physical systems as understood by physicists, rather than the behaviour of a person. As for not holding much water, you are far from showing that, let alone providing anything more compelling.

Toaster said:
This was part of an interesting article written by Ari N. Schulman as to why Minds are not like computers
I think you quoted the wrong bit; that passage is actually in support of the computational theory of mind. Schulman went on to say some things about why it may not be true, but he used a lot of hand-waving in the process. Those who claim that the brain is not reducible carry just as much burden of proof as those who claim it is. But yes, the brain does not have a well-defined abstraction layer. Spike trains may well be enough to make something intelligent, but there are many more processes in the brain, such as neurotransmitters, that have a large effect on things like emotional state.

I find that Schulman has mischaracterised the accuracy needed when producing models of neurons and so on. The behaviour of neurons appears stochastic, and the brain is built on such principles; as such it is highly resistant to neurons spiking at the wrong time or failing altogether. It is therefore very unlikely that the brain would be sensitive to artificial neurons that differ in behaviour by 0.1% or so from the original; various drugs do far more than this. Even then you are dealing with numbers of molecules in a stochastic way, so this level is likely to be abstracted to simple numbers. It is highly unlikely that the brain works with infinite precision, or that high analogue precision is required, as noise essentially drowns out any useful signal at those levels.
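A toy simulation makes the robustness point concrete: in a leaky integrate-and-fire caricature with noisy input, a 0.1% parameter perturbation vanishes into the noise floor. All the constants below are made up for illustration:

```python
import random

def spike_count(threshold, steps=10000, leak=0.95, drive=0.06, noise=0.05, seed=1):
    """Leaky integrate-and-fire neuron with noisy drive; returns spikes fired."""
    rng = random.Random(seed)
    v, spikes = 0.0, 0
    for _ in range(steps):
        v = v * leak + drive + rng.gauss(0, noise)  # integrate noisy input
        if v >= threshold:                          # fire and reset
            spikes += 1
            v = 0.0
    return spikes

original  = spike_count(threshold=1.0)
perturbed = spike_count(threshold=1.001)        # "artificial" unit off by 0.1%
rerun     = spike_count(threshold=1.0, seed=2)  # same unit, different noise
print(original, perturbed, rerun)
# The 0.1% perturbation shifts the spike count far less than ordinary
# trial-to-trial noise does, which is the sense in which the brain's own
# variability should drown out small modelling errors.
```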

This is all assuming that we want to create a human-like, not merely human-level, intelligence though. With regards to a singularity, what we want is greater-than-human-level intelligence that we can control. It would be rather troublesome for us to create an intelligence that is human-like but even more capable than we are...


Toaster

2,939 posts

193 months

Wednesday 20th April 2016
quotequote all
Jabbah said:
Toaster said:
...for me all I want to do is point out the behaviourist / reductionist argument which never really holds much water
They are not really the same thing, unless you are talking about behaviour at the level of reduction a reductionist would use, for example the ion pumps in the Hodgkin-Huxley neuron model; but then you are really getting towards the behaviour of physical systems as understood by physicists, rather than the behaviour of a person. As for not holding much water, you are far from showing that, let alone providing anything more compelling.

Toaster said:
This was part of an interesting article written by Ari N. Schulman as to why Minds are not like computers
I think you quoted the wrong bit; that passage is actually in support of the computational theory of mind. Schulman went on to say some things about why it may not be true, but he used a lot of hand-waving in the process. Those who claim that the brain is not reducible carry just as much burden of proof as those who claim it is. But yes, the brain does not have a well-defined abstraction layer. Spike trains may well be enough to make something intelligent, but there are many more processes in the brain, such as neurotransmitters, that have a large effect on things like emotional state.

I find that Schulman has mischaracterised the accuracy needed when producing models of neurons and so on. The behaviour of neurons appears stochastic, and the brain is built on such principles; as such it is highly resistant to neurons spiking at the wrong time or failing altogether. It is therefore very unlikely that the brain would be sensitive to artificial neurons that differ in behaviour by 0.1% or so from the original; various drugs do far more than this. Even then you are dealing with numbers of molecules in a stochastic way, so this level is likely to be abstracted to simple numbers. It is highly unlikely that the brain works with infinite precision, or that high analogue precision is required, as noise essentially drowns out any useful signal at those levels.

This is all assuming that we want to create a human-like, not merely human-level, intelligence though. With regards to a singularity, what we want is greater-than-human-level intelligence that we can control. It would be rather troublesome for us to create an intelligence that is human-like but even more capable than we are...
biggrin Possibly the most eloquent response I have had on PistonHeads. I need to reflect and come back to you on some of your points.

ZOLLAR

19,908 posts

173 months

Saturday 23rd April 2016
quotequote all
Just reading a bit of "Wait But Why" (I think its blog post on AI has popped up here a few times).

I was always in the camp of: if we created AI and it moved on to superintelligence (which seems likely), we could stop it swiftly by removing its power. But I read this paragraph on WBW and I sort of went "Ah".

(Turry, who he refers to, is a robotic AI whose programmed goal was to become constantly better at writing a simple one-sentence note; this resulted in it killing all humans, as they might stand in the way of that goal. It's worth reading the whole article actually: http://waitbutwhy.com/2015/01/artificial-intellige... )

"From everything I’ve read, once an ASI exists, any human attempt to contain it is laughable. We would be thinking on human-level and the ASI would be thinking on ASI-level. Turry wanted to use the internet because it was most efficient for her since it was already pre-connected to everything she wanted to access. But in the same way a monkey couldn’t ever figure out how to communicate by phone or wifi and we can, we can’t conceive of all the ways Turry could have figured out how to send signals to the outside world. I might imagine one of these ways and say something like, “she could probably shift her own electrons around in patterns and create all different kinds of outgoing waves,” but again, that’s what my human brain can come up with. She’d be way better. Likewise, Turry would be able to figure out some way of powering herself, even if humans tried to unplug her—perhaps by using her signal-sending technique to upload herself to all kinds of electricity-connected places. Our human instinct to jump at a simple safeguard: “Aha! We’ll just unplug the ASI,” sounds to the ASI like a spider saying, “Aha! We’ll kill the human by starving him, and we’ll starve him by not giving him a spider web to catch food with!” We’d just find 10,000 other ways to get food—like picking an apple off a tree—that a spider could never conceive of."


I've no idea whether AI or super AI would be good or bad for the human race, but one thing is for sure: if we aren't very, very careful it could get out of hand extremely quickly and we'd be in a Pandora's box situation.