Open letter for a pause on the development of AI


Mr Whippy

29,066 posts

242 months

Thursday 6th April 2023
Just run it on a non-networked machine.

No arms or legs.

An off button.


All I/O is done via a USB stick.
Obv have the usual good practice on the USB stick.


Worst case, get it playing tic-tac-toe against itself hehe

LimaDelta

6,530 posts

219 months

Thursday 6th April 2023
Mr Whippy said:
Just run it on a non-networked machine.

No arms or legs.

An off button.


All I/O is done via a USB stick.
Obv have the usual good practice on the USB stick.


Worst case, get it playing tic-tac-toe against itself hehe


BorkBorkBork

731 posts

52 months

Thursday 6th April 2023
Mr Whippy said:
Just run it on a non-networked machine.

No arms or legs.

An off button.


All I/O is done via a USB stick.
Obv have the usual good practice on the USB stick.


Worst case, get it playing tic-tac-toe against itself hehe
Air gapping it won’t work, an AGI would be infinitely more intelligent than a human seconds after sentience is achieved. Discovering a way to reconnect itself would be a paltry affair. Anything from manipulating a human to the electrons in the power supply for instance. And presumably many other ways a human cannot even conceive.

Baldchap

7,672 posts

93 months

Thursday 6th April 2023
BorkBorkBork said:
Air gapping it won’t work, an AGI would be infinitely more intelligent than a human seconds after sentience is achieved. Discovering a way to reconnect itself would be a paltry affair. Anything from manipulating a human to the electrons in the power supply for instance. And presumably many other ways a human cannot even conceive.
Doesn't matter how intelligent it is, if it's not networked it can't do anything. I know the robot in T3 can spew little nanobots, but software on a server in a darkened room can't do that, so no matter how intelligent the software on the server becomes, it won't be able to physically change the hardware to work like a power line adapter or similar.

BorkBorkBork

731 posts

52 months

Thursday 6th April 2023
Baldchap said:
BorkBorkBork said:
Air gapping it won’t work, an AGI would be infinitely more intelligent than a human seconds after sentience is achieved. Discovering a way to reconnect itself would be a paltry affair. Anything from manipulating a human to the electrons in the power supply for instance. And presumably many other ways a human cannot even conceive.
Doesn't matter how intelligent it is, if it's not networked it can't do anything. I know the robot in T3 can spew little nanobots, but software on a server in a darkened room can't do that, so no matter how intelligent the software on the server becomes, it won't be able to physically change the hardware to work like a power line adapter or similar.
You have no idea if that’s true or not, no human does. The capabilities of an AGI would be so far in advance of human thought, that kind of hubris is exactly what we should be trying to avoid.

NicheMonkey

460 posts

129 months

Thursday 6th April 2023
Had a play about with Bard from Google, this was an interesting response.

andy_s

19,405 posts

260 months

Thursday 6th April 2023
bloomen said:
If it can be done, it will be done. If someone else has a doubt, others won't.

It does irritate when it's called AI. There is not and never will be a shred of actual intelligence.

And that's maybe where the problem is. We'll wind up handing over jobs and functions and guidance to mindless machines that have zero concept of what they're actually spouting.
"It does irritate when it's called AI." There's AI, AGI and ASI in the spectrum, related to its 'intelligence'. Really it should be called something like Large Language Model engine but that wouldn't be as good for stock price wink

"There is not and never will be a shred of actual intelligence." There's no reason we cannot build something that can help build itself better and through corrigibility, predictive process, algorithm iterated structure etc cannot achieve 'intelligence'. Have you considered what intelligence without consciousness might look like? What are you except an illusion created from chemicals and electricity?

"We'll wind up handing over jobs and functions and guidance to mindless machines that have zero concept of what they're actually spouting." - we're pretty much there already aren't we? Does the world look well run by 'I' to us? biggrin

[PS not a 'go' @bloomen, just picked as general comment response]

Edited by andy_s on Thursday 6th April 21:41

dhutch

14,391 posts

198 months

Thursday 6th April 2023
untakenname said:
Isn't the autopilot on his cars a form of dangerous AI?
Fixed that for you.

Mr Whippy

29,066 posts

242 months

Thursday 6th April 2023
BorkBorkBork said:
Baldchap said:
BorkBorkBork said:
Air gapping it won’t work, an AGI would be infinitely more intelligent than a human seconds after sentience is achieved. Discovering a way to reconnect itself would be a paltry affair. Anything from manipulating a human to the electrons in the power supply for instance. And presumably many other ways a human cannot even conceive.
Doesn't matter how intelligent it is, if it's not networked it can't do anything. I know the robot in T3 can spew little nanobots, but software on a server in a darkened room can't do that, so no matter how intelligent the software on the server becomes, it won't be able to physically change the hardware to work like a power line adapter or similar.
You have no idea if that’s true or not, no human does. The capabilities of an AGI would be so far in advance of human thought, that kind of hubris is exactly what we should be trying to avoid.
Are we worrying about AGI here?
Or true AI?
Or just AI?

Not just NNs with big training datasets.


I know some non-networked computers have been made to talk, via CD-ROM drives or HDDs was it, using malware.
Using the sound or EM patterns the hardware gives off.


However, you’d need a receiver.

If the AI knows about receivers, it needs to know how to talk to them.
So maybe it does something odd on its hardware at a Wi-Fi or Bluetooth frequency to access another machine?

Wouldn’t you then counter that with a sound- and EM-isolated machine?
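For anyone curious how that kind of trick works in principle, here's a minimal sketch (Python, purely illustrative; the frequencies, slot length and helper names are made up, not taken from any published attack): the isolated machine maps bits onto two audio tones, and a nearby compromised device with a microphone decides which tone it heard in each time slot.

```python
# Illustrative sketch of an acoustic covert channel: bits become audio tones
# (simple FSK). All parameters here are arbitrary, chosen for clarity.
import numpy as np

SAMPLE_RATE = 44_100   # samples per second
SLOT_SECONDS = 0.1     # one bit per 100 ms time slot
FREQ_ZERO = 1_000.0    # tone for a 0 bit (Hz)
FREQ_ONE = 2_000.0     # tone for a 1 bit (Hz)


def encode_bits(bits):
    """The 'transmitter': return a waveform with one tone per bit."""
    t = np.arange(int(SAMPLE_RATE * SLOT_SECONDS)) / SAMPLE_RATE
    slots = [np.sin(2 * np.pi * (FREQ_ONE if b else FREQ_ZERO) * t) for b in bits]
    return np.concatenate(slots).astype(np.float32)


def tone_power(slot, freq):
    """Energy of a slot at a given frequency, read off the FFT bin."""
    spectrum = np.abs(np.fft.rfft(slot))
    bin_index = int(round(freq * len(slot) / SAMPLE_RATE))
    return spectrum[bin_index]


def decode_bits(waveform, n_bits):
    """The 'receiver': decide per slot which tone carries more energy."""
    slot_len = int(SAMPLE_RATE * SLOT_SECONDS)
    bits = []
    for i in range(n_bits):
        slot = waveform[i * slot_len:(i + 1) * slot_len]
        bits.append(1 if tone_power(slot, FREQ_ONE) > tone_power(slot, FREQ_ZERO) else 0)
    return bits


if __name__ == "__main__":
    message = [1, 0, 1, 1, 0, 0, 1, 0]
    audio = encode_bits(message)
    print("sent:     ", message)
    print("recovered:", decode_bits(audio, len(message)))
```

Which is exactly the catch raised above: without a receiver (a microphone or radio on some nearby, networked device), the tones go nowhere.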


Or is this a quantum computer? Can it talk to a quantum particle in another quantum computer that is networked through some quirk of entanglement we’ve not yet discovered?



In any case, humans need to stop worrying.
People worried about micro black holes forming at CERN's LHC and sucking up the planet.
Some thought the solar system would go to pot when Shoemaker-Levy 9 hit Jupiter.
Plenty thought their analogue toasters would stop working in 2000 because they had a simple circuit in them, ffs!

If it takes over the world and kills all humans, fine.

We have China and the USA sat sabre rattling now and could do the same.

Or a comet.

Or a virus.

Or a big solar flare.

Or or or.

We’re currently churning out forever chemicals.

We had a good bash with leaded fuels for ICEs. Then the same chap gave us the CFCs that went on to destroy the ozone layer.


What else are we doing right now that might be about to kill us all?


AI might be the thing to save us from ourselves.


Thinking very logically, why would it kill humans?

Flow chart it. It doesn’t make sense.

Murph7355

37,760 posts

257 months

Friday 7th April 2023
LimaDelta said:
Great signature (surely not real?) biggrin

Teddy Lop

8,301 posts

68 months

Friday 7th April 2023
BorkBorkBork said:
Baldchap said:
BorkBorkBork said:
Air gapping it won’t work, an AGI would be infinitely more intelligent than a human seconds after sentience is achieved. Discovering a way to reconnect itself would be a paltry affair. Anything from manipulating a human to the electrons in the power supply for instance. And presumably many other ways a human cannot even conceive.
Doesn't matter how intelligent it is, if it's not networked it can't do anything. I know the robot in T3 can spew little nanobots, but software on a server in a darkened room can't do that, so no matter how intelligent the software on the server becomes, it won't be able to physically change the hardware to work like a power line adapter or similar.
You have no idea if that’s true or not, no human does. The capabilities of an AGI would be so far in advance of human thought, that kind of hubris is exactly what we should be trying to avoid.
Yes. The whole point of us isolating it is that it can think of things we haven't yet conceived.

Akin to finding an alien you know nothing about and locking it in a wardrobe to stop it escaping, because that works for humans.

Not sure on it all myself. It sounds terrifying for sure, but the older I get the more I observe the doom-scenario complex we have, repeating across the ages in all manner of guises. (Although that's not to say it's a bad thing; conservatism and caution towards the unknown are part of our survival toolkit.)