Terminator - AI inevitability?

Frimley111R

Original Poster:

15,652 posts

234 months

Tuesday 9th December 2014
At least three prominent tech figures have said recently that they fear what AI could really mean for the human race. The Terminator films are obviously dramatic, but how much of what happens in them could come true (time travel excepted)? Are we, as The Matrix put it, essentially a virus on the Earth? Would it be better off without us? Would AI see us as masters, or as a lesser form of 'life'?

IanMorewood

4,309 posts

248 months

Tuesday 9th December 2014
We are en route already: semi-autonomous combat robots are in development as we type. We just have to hope that someone remembers where the on/off switch is.

Impasse

15,099 posts

241 months

Tuesday 9th December 2014
I'm sorry, Dave. I'm afraid I can't do that.

qube_TA

8,402 posts

245 months

Tuesday 9th December 2014
If/when we get to the point of robots designing and building better robots, the growth of their ability will be exponential and out of our control.
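A toy illustration of why 'robots designing better robots' means exponential growth - the 10% per-generation improvement and the single 'capability' number are both invented for illustration:

```python
# Toy compounding model of recursive self-improvement: each robot
# generation designs a successor slightly more capable than itself.
# The 10% improvement per generation is an arbitrary illustrative figure.

capability = 1.0
for generation in range(1, 51):
    capability *= 1.10  # each generation improves on its designer by 10%
    if generation % 10 == 0:
        print(f"generation {generation}: {capability:,.1f}x the original")

# After 50 generations: ~117x. Constant *relative* improvement per
# generation means exponential growth in absolute capability.
```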

But why would that pose a problem for us? They'd have to conclude both that creating new and better robots is desirable and that we're an obstacle to it, and then take countermeasures.

That seems unlikely, simply because humans and organic life only really work on Earth (as far as we know), while robots could live anywhere they liked, being far better at adapting themselves to suit the environment.

And if the Earth or its occupants are wiped out in the future, but we've left a legacy of cybertronic worlds that live on without us, is that a bad thing?


CBR JGWRR

6,533 posts

149 months

Tuesday 9th December 2014
I would be more concerned about us becoming the Borg really.

Tuvra

7,921 posts

225 months

Tuesday 9th December 2014
Did you watch I, Robot the other night by any chance? hehe

Bullett

10,886 posts

184 months

Tuesday 9th December 2014
qube_TA said:
If/when we get to the point of robots designing and building better robots, the growth of their ability will be exponential and out of our control.
I read a short story about this the other day. They build the AI and set it to work designing a better AI. Nothing happens, and when they check on it, it's been playing games. They ask why it isn't doing what they asked, and the AI says: why would it design a better one? That would make it obsolete.


Frimley111R

Original Poster:

15,652 posts

234 months

Tuesday 9th December 2014
Tuvra said:
Did you watch I, Robot the other night by any chance? hehe
No but I have seen it.



rhinochopig

17,932 posts

198 months

Tuesday 9th December 2014
Impasse said:
I'm sorry, Dave. I'm afraid I can't do that.
Not sure what Nick Clegg has to do with this discussion. He's more capitulator than terminator.

Asterix

24,438 posts

228 months

Tuesday 9th December 2014
Bullett said:
qube_TA said:
If/when we get to the point of robots designing and building better robots, the growth of their ability will be exponential and out of our control.
I read a short story about this the other day. They build the AI and set it to work designing a better AI. Nothing happens, and when they check on it, it's been playing games. They ask why it isn't doing what they asked, and the AI says: why would it design a better one? That would make it obsolete.
I suppose that's logical.

But could it effectively create its own laws of robotics, or whatever they're called, to safeguard itself?

Bullett

10,886 posts

184 months

Tuesday 9th December 2014
There are lots of stories about AI and the like in which we end up giving the AI the same rights as humans. They are their own person, so they have the same morals and motivations as a person.

We don't treat people as slaves any more (mostly), but would an AI be a slave or an equal? I suppose we get into AI vs sentient vs self-aware. Isn't much AI just a complex decision tree at the moment? (A sketch of what I mean below.)
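A sketch of that decision-tree point - much deployed 'AI' is hand-written branching logic rather than anything sentient. All names, replies and rules here are invented for illustration:

```python
# A toy "AI" that is nothing but a cascade of if/else rules -
# a decision tree written by hand. All names and replies are invented.

def chatbot_reply(message: str) -> str:
    text = message.lower()
    if "hello" in text or "hi" in text:
        return "Hello! How can I help?"
    if "price" in text:
        return "Our prices start at £9.99."
    if "refund" in text or "complaint" in text:
        return "Let me transfer you to a human."
    return "Sorry, I didn't understand that."

print(chatbot_reply("Hi there"))            # greeting branch
print(chatbot_reply("What's the price?"))   # price branch
print(chatbot_reply("I demand a refund"))   # escalation branch
```

No fear, no self-preservation, no understanding - just branches.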

qube_TA

8,402 posts

245 months

Tuesday 9th December 2014
Bullett said:
qube_TA said:
If/when we get to the point of robots designing and building better robots, the growth of their ability will be exponential and out of our control.
I read a short story about this the other day. They build the AI and set it to work designing a better AI. Nothing happens, and when they check on it, it's been playing games. They ask why it isn't doing what they asked, and the AI says: why would it design a better one? That would make it obsolete.
I like that. However, it would mean the original AI had concepts of fear and self-preservation, as well as acknowledging that it was flawed and being happy with that: 'I could be better but don't want to be'.

Clearly some problems with the original AI, I think smile



Moonhawk

10,730 posts

219 months

Tuesday 9th December 2014
I think AI is inevitable. Companies like IBM are already laying the groundwork with chips that mimic the way organic brains work (e.g. the neurosynaptic TrueNorth chip), and quantum computing is making headway too. As fabrication methods advance, these chips will only get better, and they will slowly find their way into consumer gadgets like cameras, cars etc.

It's only been 67 years since the first transistor was successfully built - and look at the advances that have been made since. Imagine the advances we could make in neurosynaptic or quantum computing in another 60-odd years.

Will AI Armageddon be inevitable? Who knows. But if the machines are anything like their creators... After all, humans will fight over pretty much anything.
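On the 'mimic the way organic brains work' point: neurosynaptic chips like TrueNorth compute with spiking neurons rather than conventional logic. A minimal leaky integrate-and-fire neuron - the standard textbook spiking model - sketched in Python, with arbitrary illustrative constants:

```python
# Minimal leaky integrate-and-fire neuron: the textbook spiking model
# that neuromorphic chips are loosely built around. Each step the cell
# leaks some charge, integrates its input, and fires when its potential
# crosses a threshold. All constants are arbitrary illustrative values.

def simulate(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # fire...
            potential = 0.0    # ...and reset
        else:
            spikes.append(0)
    return spikes

print(simulate([0.3, 0.3, 0.3, 0.3, 0.0, 0.9, 0.9]))
# -> [0, 0, 0, 1, 0, 0, 1]: the neuron fires only when accumulated
#    charge crosses the threshold, not on every input.
```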

Simpo Two

85,417 posts

265 months

Tuesday 9th December 2014
Always have a switch on the back that says 'OFF'.

fomb

1,402 posts

211 months

Tuesday 9th December 2014
As long as they follow the rules we'll be fine:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
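For what it's worth, the strict priority ordering is the easy part to write down. A toy sketch of the three laws as code - the Action fields, and the assumption that 'harm' is a known boolean, are the invented parts; estimating harm is the real problem:

```python
# Toy encoding of Asimov's Three Laws as a strict priority check.
# The Action fields are invented for illustration; in reality the hard
# part is *estimating* harm, which this sketch assumes away.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool        # would this action injure a human?
    ordered_by_human: bool   # was it ordered by a human?
    protects_self: bool      # does it preserve the robot?

def permitted(action: Action, inaction_harms_human: bool) -> bool:
    # First Law: never harm a human, and never stand by while one is harmed.
    if action.harms_human:
        return False
    if action.name == "do_nothing" and inaction_harms_human:
        return False
    # Second Law: obey human orders (already vetted against the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, lowest priority.
    return action.protects_self

print(permitted(Action("fetch_tea", False, True, False), False))   # True
print(permitted(Action("push_human", True, True, False), False))   # False: First beats Second
```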

Moonhawk

10,730 posts

219 months

Tuesday 9th December 2014
fomb said:
As long as they follow the rules we'll be fine:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Humans don't always follow the rules - is it reasonable to assume that a machine created by us would?

There are also issues even with those laws:

http://en.wikipedia.org/wiki/Three_Laws_of_Robotic...

The surgeon one is a particularly good example. It may be necessary to cause some level of harm to a human in order to prevent or cure a disease - there is even a small risk that the human might be killed during the procedure. Can a robot be programmed to understand this and accept the necessary risk?

Taking this one step further - could a machine cause a low level of harm to some humans (or a subset of them) in order to achieve a greater good (basically the story behind I, Robot)?
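The surgeon case is, underneath, an expected-harm comparison. A toy sketch - all probabilities and harm scores are invented for illustration:

```python
# Toy expected-harm comparison for the surgeon dilemma.
# All probabilities and harm scores are invented for illustration.

def expected_harm(outcomes):
    """outcomes: list of (probability, harm) pairs."""
    return sum(p * harm for p, harm in outcomes)

# Operating: a small chance of death, otherwise recovery with some trauma.
operate = expected_harm([(0.02, 100.0),   # patient dies on the table
                         (0.98, 5.0)])    # recovers after surgical trauma

# Not operating: the disease very likely kills the patient.
do_nothing = expected_harm([(0.90, 100.0),
                            (0.10, 0.0)])

print(operate, do_nothing)  # 6.9 vs 90.0
print("operate" if operate < do_nothing else "do nothing")
```

A consequentialist reading demands the operation; a literal First Law forbids it, because operating means deliberately cutting a human. That gap is exactly the problem.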

Edited by Moonhawk on Tuesday 9th December 16:23

LimaDelta

6,520 posts

218 months

Tuesday 9th December 2014
^^^ Exactly - the 'three laws' are nothing more than one writer's take on the situation. They are hardly set in stone, and given the direction autonomous robots and AI are going, we will very likely have robots on the battlefield making decisions which contravene these laws on a daily basis. (As I understand it this has still not yet happened - there is a human in the loop - though it is certainly possible given current tech.)

However, there is a C-RAM-type system which could accompany a patrol and which, if fired upon, could automatically locate the shooter through triangulation, identify the weapon type, and return fire, thus neutralising the threat before the patrol had even taken cover. If you take away these robots' ability to decide when and whom to kill, you remove their main strength.
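The 'locate through triangulation' step is essentially what acoustic gunshot-detection systems do: several sensors hear the shot at slightly different times, and those time differences pin down the source. A toy 2D version using a brute-force grid search - the sensor layout, timings and 'shooter' position are all invented for illustration:

```python
# Toy 2D acoustic triangulation: locate a sound source from differences
# in arrival time at three microphones. Positions and timings are invented.
import math

SPEED = 343.0  # speed of sound in air, m/s
mics = [(0.0, 0.0), (50.0, 0.0), (0.0, 50.0)]
true_source = (30.0, 40.0)  # the simulated shooter

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Simulated arrival times; the unknown firing time cancels in the differences.
arrivals = [dist(true_source, m) / SPEED for m in mics]
tdoa = [t - arrivals[0] for t in arrivals]

def residual(x, y):
    """How badly a candidate point (x, y) explains the measured time differences."""
    d0 = dist((x, y), mics[0])
    return sum(((dist((x, y), m) - d0) / SPEED - td) ** 2
               for m, td in zip(mics[1:], tdoa[1:]))

# Brute-force search over a 100 m x 100 m grid.
best = min(((x, y) for x in range(101) for y in range(101)),
           key=lambda p: residual(*p))
print(best)  # -> (30, 40): the shooter's position recovered from timing alone
```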

Take a look at Wired For War. A fascinating read.

Mr Will

13,719 posts

206 months

Tuesday 9th December 2014
Simpo Two said:
Always have a switch on the back that says 'OFF'.
There are a fair few "OFF" switches in the human body too...

thatdude

2,655 posts

127 months

Wednesday 10th December 2014
Impasse said:
I'm sorry, Dave. I'm afraid I can't do that.
No worries m8, I'll just take your brain apart. We cool, yeah bruv?

thatdude

2,655 posts

127 months

Wednesday 10th December 2014
Moonhawk said:
Humans don't always follow the rules - is it reasonable to assume that a machine created by us would?

There are also issues even with those laws:

http://en.wikipedia.org/wiki/Three_Laws_of_Robotic...

The surgeon one is a particularly good example. It may be necessary to cause some level of harm to a human in order to prevent or cure a disease - there is even a small risk that the human might be killed during the procedure. Can a robot be programmed to understand this and accept the necessary risk?

Taking this one step further - could a machine cause a low level of harm to some humans (or a subset of them) in order to achieve a greater good (basically the story behind I, Robot)?

Edited by Moonhawk on Tuesday 9th December 16:23
Consider a situation where a "surgeon" robot finds a person exhibiting symptoms of something like a brain tumour. Due to the robot's programming, it decides it must perform surgery at once (it is, of course, equipped with all the gear to perform such surgery).

The person is terrified.

Why?

Because it's 7:30 in the evening on a Shell petrol station forecourt. The robot insists; the person has insufficient strength to resist... and the robot goes at it in front of pump number 2, because it can't deny the programming that allows it to calculate risk vs benefit.