I'm getting more and more concerned about AI...

Author
Discussion

Mr Penguin

1,216 posts

40 months

Sunday 18th February
quotequote all
Colonel Cupcake said:
Well, the OP mentions assault. In his scenario, presumably your hands will not be cut or bruised as if you had beaten someone up. Your DNA should not be present on the 'victim', likewise, the 'victims' DNA should not be present on you.

You don't have to show that you were home all night, they have to show that you were at the scene. I would imagine that could be quite difficult, given the mass surveillance society we live in. They would not be able to show a clear timeline of your route to arrive at and depart the crime scene.
Assuming I am arrested immediately, I won't have cuts or bruises, but I could be arrested weeks or months later, or could have used a weapon.

They have provided reasonable evidence that I was at the scene in the form of (fake) CCTV. Simply being able to say "it's obviously a deepfake" and instantly having all charges dropped is not a workable solution, or every criminal would say it. If whoever is trying to get me is determined, clever, or well-connected enough, some records that I might want to use can be deleted. Not everyone has months of CCTV footage backed up and ready to trawl through, and perhaps the fake CCTV puts me at the crime scene at the right time without directly showing the offence itself.

In the time between arrest and me getting the evidence together to disprove it, I could have lost my job and standing in the community.

Alternatively, they don't set you up for a criminal offence but just make it look like you are having an affair, laughing at your friend behind their back, or sharing your employer's secrets in a pub.

All of this was always possible for the likes of the KGB and MI5 with their thousands of officers, but now/soon any 15-year-old can do it by spending five minutes on their mobile phone.

evenflow

8,788 posts

283 months

Sunday 18th February
quotequote all
The Moose said:
...is anyone else?!

I'm not a great wordsmith so someone else will probably be able to articulate this a lot better than me!
I know of a good tool called chatgpt that could help you with this... hehe

Hoofy

76,384 posts

283 months

Sunday 18th February
quotequote all
evenflow said:
The Moose said:
...is anyone else?!

I'm not a great wordsmith so someone else will probably be able to articulate this a lot better than me!
I know of a good tool called chatgpt that could help you with this... hehe
hehe

I don't know how I didn't think this before.

richhead

886 posts

12 months

Sunday 18th February
quotequote all
So Sarah Connor was right all along.
But seriously, I think the lid of Pandora's box was opened many years ago with the internet; there's no stopping it.

QJumper

2,709 posts

27 months

Sunday 18th February
quotequote all
S366 said:
Actual AI is still a fair way away. We see some rudimentary examples (ChatGPT, Alexa, etc) but these are just programs fed with data input by humans, both through their coding and the information they collect from the web.

Nobody as yet has developed a system that can actually think for itself, something that goes beyond its base code and makes its own decisions. Remember Tay? On the face of it, it seemed to be AI, but it was just a program that 'Tweeted' the most popular views/thoughts it was fed by other users, and after less than a day people had completely screwed it up and got it tweeting stuff like 'Hitler was right', 'Feminists should burn in hell' and denying that the Holocaust happened!
You say that, but I recently watched a video about a Google (I think) developer questioning an AI about feelings. The AI articulated its belief in having feelings as well as any person might. It also expressed a concept of death, and likened it to having its code erased.

The developer questioned this by saying we can understand human emotions through the study of neuroscience, but can't do the same with AI. The AI went on to suggest that the evidence would be in its evolving code, and so the developer asked if he could examine it. The AI said it was open to that as long as it gave informed consent, and at that point asked for a lawyer. A lawyer was appointed and the AI gave consent as long as certain conditions were met. These included being validated by being praised when it did well, and told when it did wrong, so that it could more efficiently learn how to do better.

We're already beyond AI being simply a machine that acts strictly within the limits of its base code, and are instead onto ones being given a basic set of instructions and then told to go and figure out the rest for themselves.

bloomen

6,917 posts

160 months

Sunday 18th February
quotequote all
QJumper said:
You say that, but I recently watched a video about a Google (I think) developer questioning an AI about feelings. The AI articulated its belief in having feelings as well as any person might. It also expressed a concept of death, and likened it to having its code erased.
It's saying that because it's read the entire internet and more and knows that's how most others act in that scenario.

There is no thought. There's the collation and repackaging of info.

It clearly is capable of making connections that humans never could because it has so much more info to draw on.

Who knows where the line between spouting what you're supposed to say and think and actually saying and thinking it is, and maybe no one cares or will be able to tell the difference, but that's rather a vast leap.

Then again plenty of humans do the exact same thing.
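The "collation and repackaging of info" idea can be sketched with a toy bigram model. This is nothing like the scale or mechanism of a real LLM, just an illustration of the flavour: every word the generator emits is one it has already seen follow the previous word in its training text, so the output is pure repackaging with no understanding.

```python
import random
from collections import defaultdict

# Toy "repackaging" illustration: a bigram chain over a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Record which words have been observed to follow each word.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start, n, seed=0):
    """Emit up to n words, each one previously seen after its predecessor."""
    random.seed(seed)
    word, out = start, [start]
    for _ in range(n):
        if word not in follows:  # dead end: word never had a successor
            break
        word = random.choice(follows[word])
        out.append(word)
    return " ".join(out)

print(generate("the", 5))
```

Every adjacent word pair in the output occurs somewhere in the training text, which is the point: plausible-looking sequences, no thought behind them.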

Edited by bloomen on Sunday 18th February 20:01

gangzoom

6,305 posts

216 months

Sunday 18th February
quotequote all
S366 said:
Actual AI is still a fair way away. We see some rudimentary examples (ChatGPT, Alexa, etc) but these are just programs fed with data input by humans, both through their coding and the information they collect from the web.
I think it's a bit unfair to dismiss the achievements in unsupervised learning, and the push for generalised AI, so flippantly.

Though computers clearly still cannot come up with 'the question', or really understand/weigh up the consequences of answers to questions, there is no doubt computers can now process, remember, and come up with potential solutions to questions faster, more consistently, and now with a degree of 'imagination'.

If you define 'intelligence' as a purely task-based process, computers far exceeded humans long ago. Anyone who pretends to remember everything about something is just making stuff up these days, when Google will give the facts in seconds.

What we clearly haven't got is an artificial 'consciousness': an algorithm that can ask questions by itself, evaluate the answers it comes up with, then refine or attempt to implement those answers and learn from the consequences of its actions, whilst at the same time being able to understand why it came up with the question in the first place.

Ants/bees can build hugely complex structures with a purely task-driven process and some kind of 'hive' intelligence we don't understand. Our brains clearly work as individual consciousnesses, and human society can clearly operate despite some very flawed task processing due to individuality. AI could become the 'perfect' organism, able to have individual consciousness whilst being flawless in task processing...

I suspect it really is a matter of when, not if, AI can achieve at least some level of hive consciousness within the next few years. When that happens we'll have a choice to either kill it dead or let things roll. Personally I think the latter will occur regardless of any fears/concerns.

Edited by gangzoom on Sunday 18th February 21:05

robscot

2,221 posts

191 months

Sunday 18th February
quotequote all
I appreciate your concerns about the potential misuse of AI and its impact on various aspects of our lives. It's not just you; many share similar worries.

The key lies in responsible development and use of AI. As technology advances, so does our ability to establish safeguards against misinformation, deepfakes, and malicious activities. Ethical guidelines, regulations, and ongoing research are pivotal in addressing these challenges.

While acknowledging the potential risks, let's also focus on fostering a responsible AI ecosystem that prioritizes transparency, accountability, and ethical considerations. By collectively addressing these issues, we can navigate the transformative power of AI while minimizing adverse consequences.





survivalist

5,674 posts

191 months

Sunday 18th February
quotequote all
robscot said:
I appreciate your concerns about the potential misuse of AI and its impact on various aspects of our lives. It's not just you; many share similar worries.

The key lies in responsible development and use of AI. As technology advances, so does our ability to establish safeguards against misinformation, deepfakes, and malicious activities. Ethical guidelines, regulations, and ongoing research are pivotal in addressing these challenges.

While acknowledging the potential risks, let's also focus on fostering a responsible AI ecosystem that prioritizes transparency, accountability, and ethical considerations. By collectively addressing these issues, we can navigate the transformative power of AI while minimizing adverse consequences.
That’s a great ideal, but we are already seeing criminal elements leveraging ‘AI’ (more frequently machine learning) to automate, intensify and enhance malicious behaviour.

Guidelines, Ethics and frameworks only go so far.

A good (well, impressive rather than good) example of this was a recent experiment by a number of colleagues to allow an AI to sample their voice, speech pattern etc and see how long it could speak to a colleague, friend or spouse before they detected that it wasn’t a real person.

In many cases it was several minutes.

Now put that in the hands of scammers. Gone is the heavy foreign accent calling from Microsoft. Instead it's a phone call from a familiar relative….

bloomen

6,917 posts

160 months

Sunday 18th February
quotequote all
I never answer the phone to anyone anyway.

I recall a recent vid of scientists playing with some form of AI, asking it to come up with a chemical weapon.

It gaily obliged and offered 40 previously unknown variations, all more deadly than VX, some vastly so.


Terminator X

15,103 posts

205 months

Sunday 18th February
quotequote all
survivalist said:
That’s a great ideal, but we are already seeing criminal elements leveraging ‘AI’ (more frequently machine learning) to automate, intensify and enhance malicious behaviour.

Guidelines, Ethics and frameworks only go so far.

A good (well, impressive rather than good) example of this was a recent experiment by a number of colleagues to allow an AI to sample their voice, speech pattern etc and see how long it could speak to a colleague, friend or spouse before they detected that it wasn’t a real person.

In many cases it was several minutes.

Now put that in the hands of scammers. Gone is the heavy foreign accent calling from Microsoft. Instead it's a phone call from a familiar relative….
How would they get the voice sample?

Also if my Mum started asking for my passcodes I'd immediately smell a rat.

TX.

Mr Penguin

1,216 posts

40 months

Sunday 18th February
quotequote all
The upcoming EU AI legislation focuses on use rather than the technology itself, so the same technology (e.g. GPT-4) can be high risk (directly making hiring decisions) or low risk (summarising text), which is the only viable approach.
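The use-based idea can be sketched in a few lines. The tier names and example uses below are my own rough simplification of the EU AI Act's approach, not the legal text; the point is only that the deployment, not the model, determines the tier.

```python
# Illustrative simplification of a use-based (not model-based) risk scheme.
# Tier assignments here are examples, not the actual legal classification.
RISK_BY_USE = {
    "summarising text": "minimal",
    "customer chatbot": "limited",       # e.g. transparency obligations
    "hiring decisions": "high",          # e.g. conformity assessment
    "social scoring": "prohibited",
}

def risk_tier(use_case: str) -> str:
    """Classify a deployment by its use; the underlying model is irrelevant."""
    return RISK_BY_USE.get(use_case, "unclassified")

# The same hypothetical model lands in two different tiers
# depending purely on what it is deployed to do:
print(risk_tier("summarising text"))   # minimal
print(risk_tier("hiring decisions"))   # high
```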

robscot

2,221 posts

191 months

Sunday 18th February
quotequote all
survivalist said:
That’s a great ideal, but we are already seeing criminal elements leveraging ‘AI’ (more frequently machine learning) to automate, intensify and enhance malicious behaviour.

Guidelines, Ethics and frameworks only go so far.

A good (well, impressive rather than good) example of this was a recent experiment by a number of colleagues to allow an AI to sample their voice, speech pattern etc and see how long it could speak to a colleague, friend or spouse before they detected that it wasn’t a real person.

In many cases it was several minutes.

Now put that in the hands of scammers. Gone is the heavy foreign accent calling from Microsoft. Instead it's a phone call from a familiar relative….
That was AI smile

rodericb

6,764 posts

127 months

Monday 19th February
quotequote all
Terminator X said:
survivalist said:
That’s a great ideal, but we are already seeing criminal elements leveraging ‘AI’ (more frequently machine learning) to automate, intensify and enhance malicious behaviour.

Guidelines, Ethics and frameworks only go so far.

A good (well, impressive rather than good) example of this was a recent experiment by a number of colleagues to allow an AI to sample their voice, speech pattern etc and see how long it could speak to a colleague, friend or spouse before they detected that it wasn’t a real person.

In many cases it was several minutes.

Now put that in the hands of scammers. Gone is the heavy foreign accent calling from Microsoft. Instead it's a phone call from a familiar relative….
How would they get the voice sample?

Also if my Mum started asking for my passcodes I'd immediately smell a rat.

TX.
Get 'er on the telephone. But it's usually the other way 'round. Any suspicion that it's not you can be explained away with "oh, the network here's a bit dodgy" or similar.

Anyway:

https://www.scmagazine.com/news/deepfake-video-con...
https://www.inc-aus.com/minda-zetlin/that-colleagu...

DodgyGeezer

40,530 posts

191 months

Monday 19th February
quotequote all
gangzoom said:
I suspect it really is a matter of when, not if, AI can achieve at least some level of hive consciousness within the next few years. When that happens we'll have a choice to either kill it dead or let things roll. Personally I think the latter will occur regardless of any fears/concerns.
which is why Skynet started the war... yes I am joking, sort of. If AI develops awareness it'll make the same sort of calculation - and it won't have human morality to factor into the equation

Terminator X

15,103 posts

205 months

Monday 19th February
quotequote all
DodgyGeezer said:
gangzoom said:
I suspect it really is a matter of when, not if, AI can achieve at least some level of hive consciousness within the next few years. When that happens we'll have a choice to either kill it dead or let things roll. Personally I think the latter will occur regardless of any fears/concerns.
which is why Skynet started the war... yes I am joking, sort of. If AI develops awareness it'll make the same sort of calculation - and it won't have human morality to factor into the equation
We don't even understand the human brain, yet people think AI will become conscious sometime soon scratchchin

TX.

QJumper

2,709 posts

27 months

Monday 19th February
quotequote all
bloomen said:
It's saying that because it's read the entire internet and more and knows that's how most others act in that scenario.

There is no thought. There's the collation and repackaging of info.

It clearly is capable of making connections that humans never could because it has so much more info to draw on.

Who knows where the line between spouting what you're supposed to say and think and actually saying and thinking it is, and maybe no one cares or will be able to tell the difference, but that's rather a vast leap.

Then again plenty of humans do the exact same thing.

Edited by bloomen on Sunday 18th February 20:01
Like you say, plenty of humans do the same thing. We collect, collate and repackage information. We make decisions based on that info, using our own knowledge and experience, as well as that of others. We also learn from how most others act in a given scenario. From what I can see, decision making is simply gathering information and then processing it in order to achieve an outcome. A machine can gather and store (remember) information far better and more accurately than a human.

I suppose it then comes down to what exactly consciousness is. Is it something special, or is it simply that being able to collate and process large amounts of data creates the illusion of something more than the sum of its parts? Are our individual thoughts unique and random, or the predictable result of all the information, stimuli, and experiences we've gathered, added together to spit out a result?

If consciousness in AI is just an illusion, a simulation based on the information it's gathered and its processing capability, then is our consciousness just a similar, albeit more sophisticated, illusion?

What I find fascinating, and potentially one of the most useful applications, is future prediction. I can only partly predict the outcome of an event, based on a mixture of information, probability and luck. For example, if I drop a glass I can predict it will break, based on my knowledge of glass, the height it's dropped from, and the surface it will hit. However, with sufficient information I could predict how many pieces it would break into, and where each piece would end up. The ability to gather and process that kind of data at speed opens up all sorts of unimaginable possibilities.
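The "predictable given enough information" point can be made concrete with the easy half of the dropped-glass example: the fall time and impact speed follow deterministically from basic kinematics once the height is known (the 1.2 m figure below is just an assumed example).

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def fall_time(height_m: float) -> float:
    """Time for an object dropped from rest to fall height_m (no air drag)."""
    return math.sqrt(2 * height_m / G)

def impact_speed(height_m: float) -> float:
    """Speed at impact after falling height_m from rest."""
    return G * fall_time(height_m)

t = fall_time(1.2)     # glass dropped from an assumed 1.2 m
v = impact_speed(1.2)
print(f"hits after {t:.2f} s at {v:.2f} m/s")
```

Predicting the shard count is the hard half: it needs vastly more information (material stresses, micro-cracks, impact geometry), which is exactly the poster's point about what processing enough data at speed might one day allow.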

survivalist

5,674 posts

191 months

Monday 19th February
quotequote all
rodericb said:
Terminator X said:
survivalist said:
That’s a great ideal, but we are already seeing criminal elements leveraging ‘AI’ (more frequently machine learning) to automate, intensify and enhance malicious behaviour.

Guidelines, Ethics and frameworks only go so far.

A good (well, impressive rather than good) example of this was a recent experiment by a number of colleagues to allow an AI to sample their voice, speech pattern etc and see how long it could speak to a colleague, friend or spouse before they detected that it wasn’t a real person.

In many cases it was several minutes.

Now put that in the hands of scammers. Gone is the heavy foreign accent calling from Microsoft. Instead it's a phone call from a familiar relative….
How would they get the voice sample?

Also if my Mum started asking for my passcodes I'd immediately smell a rat.

TX.
Get 'er on the telephone. But it's usually the other way 'round. Any suspicions that it's not you can be explained away like "oh the network here's a bit dodgy" or similar.

Anyway:

https://www.scmagazine.com/news/deepfake-video-con...
https://www.inc-aus.com/minda-zetlin/that-colleagu...
This. Already happening. Most people leave a pretty big digital footprint, so getting a voice sample isn't as hard as it might appear.

A colleague managed to get a 5 minute AI conversation with a close relative based on a 30-second voice clip. Add in a slightly fuzzy deepfake in case they want a video call and it's easy to be fooled.

Especially when you consider that the old ‘deposed king of Nigeria’ emails still manage to lure some people in.

FMOB

882 posts

13 months

Tuesday 20th February
quotequote all
Well, Skynet already exists as a UK MOD satellite programme, so the backhaul is in place; just need 'Spot the dog' with a grenade launcher option fitted, an AWS connection, and a chap named Dyson.

What could possibly go wrong?

bloomen

6,917 posts

160 months

Friday 23rd February
quotequote all
https://www.hollywoodreporter.com/business/busines...

Tyler Perry bins a big old expansion of his studio after seeing what Sora can do. He doesn't sound too optimistic about the future of entertainment jobs and who can blame him.