Is The A.I. Singularity Coming And If So When?


Guvernator

13,156 posts

165 months

Thursday 23rd July 2015
RobDickinson said:
Humans always think there is something magical going on until science nails it down and, oh look, it's just physics etc.

The brain is complicated and neurons aren't as simple as binary switches, but we will get there one day.

That Wait But Why article, though, talking about AI exponential growth: it will have limits. It can't grow its own processing substrate, just as we can't grow our own brains. Sure, it can optimise its code, and more processing can be created, but it won't just 'happen'.
This is what I was trying to get at. There will be physical limits on how much smarter it can get. You can't bypass the basic laws of physics. The idea that a machine could make itself super-intelligent within a few hours by self-improvement is a bit far-fetched.

The only way I could see this possibly happening is if the AI manifests itself on the internet and works out how to hack all the firewalls and security protocols, enabling it to keep hijacking servers and adding more processing power to its "neural net", but again that has its own complications. If Google becomes self-aware, then we might need to worry. smile

0000

13,812 posts

191 months

Thursday 23rd July 2015
ash73 said:
The thing is, academics who experiment with neural nets won't have the first clue how to write fast code, and if you're scaling up a process across 80 billion neurons with a trillion connections, even the tiniest inefficiency will make it tank. Give it to some game programmers and see what they can do!
Neural networks have been used in games for ages; the monsters in DOOM even used AI, albeit not ANNs. Game developers who were taught about ANNs as students have had children who are now adults, maybe even grandchildren, they've been around that long. They're also straightforward to implement, so you won't break new ground in algorithmic time complexity by having someone hunt for implementation improvements; the fifty years they've been around have delivered far more runtime speed through hardware than a game developer polishing an implementation ever will.
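For what it's worth, the core really is small. Here's a minimal sketch of a single fully-connected layer in Java; the layer sizes, random weights and sigmoid activation are arbitrary illustrative choices, not taken from any particular game or engine:

```java
// Minimal forward pass for one fully-connected layer with a sigmoid
// activation. Weights are random here purely for illustration.
import java.util.Random;

public class TinyAnnLayer {
    static double[] forward(double[] input, double[][] weights, double[] bias) {
        double[] out = new double[weights.length];
        for (int i = 0; i < weights.length; i++) {
            double sum = bias[i];
            for (int j = 0; j < input.length; j++) {
                sum += weights[i][j] * input[j];
            }
            out[i] = 1.0 / (1.0 + Math.exp(-sum)); // sigmoid
        }
        return out;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int inputs = 3, neurons = 4;
        double[][] w = new double[neurons][inputs];
        double[] b = new double[neurons];
        for (int i = 0; i < neurons; i++) {
            b[i] = rng.nextGaussian();
            for (int j = 0; j < inputs; j++) w[i][j] = rng.nextGaussian();
        }
        for (double v : forward(new double[]{0.5, -1.0, 2.0}, w, b))
            System.out.println(v);
    }
}
```

The point being: the inner loop is a multiply-accumulate, and there isn't much algorithmic headroom in it for someone to optimise; the wins come from hardware and parallelism, not cleverer code.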

IainT

10,040 posts

238 months

Thursday 23rd July 2015
DervVW said:
It is too high...
will an AI have compassion or a moral centre?
Both seem to be a function of our type of consciousness, which, to me, seems a function of our intelligence/meat-machine. If AI 'intelligence' is like ours but with more "oomph", then there's no reason to think it couldn't have both. Given how much of our morality is cultural and learned, we can't know what culture an AI would have.

When I say AI I'm referring to post-singularity AI - sentient AI rather than the AI we use and interact with every day. Most AI is single-task AI and quite hard to tell apart from a 'dumb' process.

Much as it's hard to see, people are gradually getting more and more compassionate - we have time and energy to spare for other people's plights.

Equally, society is moving away from war and becoming more tolerant and liberal (in the classic sense rather than the US/UK political one). I don't see why AI would necessarily be different. Maybe AI would recognise our sentience and help us to improve; maybe there would be a sense of loyalty towards us as its creator.

What is certain is that we will reach the singularity.

glazbagun

14,280 posts

197 months

Thursday 23rd July 2015
anonymous said:
[redacted]
It's an ignorance I'm really annoyed at myself for, though education has a lot to answer for in my day - computer science at my school was about using a spreadsheet! This post made me determined to do something about it but, like getting fit, it requires motivation beyond the occasional burst of enthusiasm.
http://coding2learn.org/blog/2013/07/29/kids-cant-...

FarmyardPants

4,108 posts

218 months

Thursday 23rd July 2015
anonymous said:
[redacted]
Are you referring to any post in particular?

FarmyardPants

4,108 posts

218 months

Thursday 23rd July 2015
Asterix said:
Pfff
Care to elaborate?

0000

13,812 posts

191 months

Thursday 23rd July 2015
ash73 said:
As an example, look at the architecture of Geppetto, the OpenWorm platform; specifically, look at the number of levels of abstraction: JVM, frameworks, middleware, libraries, modules, web services, etc. It's designed to make implementation of academic studies as easy as possible, which is perfectly sensible while they are still figuring out how it should work. But when you scale up to 80 billion nodes it's not just the efficiency of the learning algorithm that matters; even the tiniest overheads in the transport layer become significant.

When academics say they need a computer the size of a warehouse to simulate the human brain I think they are just extrapolating the performance of these crude prototypes.

Game programmers just use OpenGL APIs and game engines such as Unity these days, so I'm being a bit flippant with my suggestion, but the point is the current implementations are not designed to scale.
Not designed to scale because it's written for the JVM? The world's moved on. Using the JVM is a sensible choice for scaling that project - probably why it's used by Google, Twitter, the NSA and everyone else who wants their software to scale.

Anyway, I'm off to see how my Hadoop data load is going - but not before I ping them a message to let them know it's not designed to scale, because Java.

otolith

56,135 posts

204 months

Thursday 23rd July 2015
ash73 said:
When academics say they need a computer the size of a warehouse to simulate the human brain I think they are just extrapolating the performance of these crude prototypes.
I think what they are doing conceptually will scale with a saner implementation - but the worm they are modelling has 302 neurons. An arthropod has maybe three orders of magnitude more neurons; a lower vertebrate like a fish or frog, five orders more than a worm; a typical small mammal, six orders; a dolphin or lower primate, seven; a human, eight. It would be trivial to rewrite that software to be 100 or 1,000 times faster - what they've done looks pretty inefficient - so you're up to a fish. Ten times faster still for a rat, maybe. You're starting to get into diminishing returns, and you still need to find a hundredfold performance increase to run it for a human. Also, the ratio of synapses to neurons is not constant: it's about 23 for the worm and about 700 for a human.
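To put rough numbers on that, using only the figures above (the orders of magnitude are from this post, nothing more authoritative):

```java
// Back-of-envelope scaling using the figures quoted in this post.
public class BrainScale {
    public static void main(String[] args) {
        long wormNeurons = 302;                     // C. elegans
        int[] ordersAboveWorm = {3, 5, 6, 7, 8};    // arthropod..human
        String[] labels = {"arthropod", "fish/frog", "small mammal",
                           "dolphin/lower primate", "human"};
        for (int i = 0; i < ordersAboveWorm.length; i++) {
            double neurons = wormNeurons * Math.pow(10, ordersAboveWorm[i]);
            System.out.printf("%-22s ~%.1e neurons%n", labels[i], neurons);
        }
        // The synapse ratio grows too: ~23 per neuron for the worm,
        // ~700 for a human, so connections scale faster than neurons.
        System.out.printf("worm synapses:  ~%.1e%n", 302 * 23.0);
        System.out.printf("human synapses: ~%.1e%n",
                          302 * Math.pow(10, 8) * 700);
    }
}
```

The synapse totals are why each jump up the scale hurts far more than the neuron counts alone suggest.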

I don't think you can realistically do this in software - IBM are trying to do it in hardware:

http://www.kurzweilai.net/ibm-simulates-530-billon...

However, the worm model is based on microscale neuron analysis - the IBM work is not a literal model of a monkey brain to anywhere near the same level of detail. As far as I know, C. elegans is the only species for which we have that mapping. There is work being done on high-resolution automated brain tomography which will eventually allow mapping of larger brains to the same level of detail.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC445488...


0000

13,812 posts

191 months

Thursday 23rd July 2015
There is no performance overhead.

0000

13,812 posts

191 months

Thursday 23rd July 2015
Yeah, great, now show me a JVM that doesn't do function inlining. Or, for that matter, one that executes - what is that, Visual Basic?!

DragsterRR

367 posts

107 months

Thursday 23rd July 2015
Won't happen in our lifetimes, if ever.
Because accountants.

Unless someone can see a big profit in it... (Electronic slavery anyone?)

otolith

56,135 posts

204 months

Thursday 23rd July 2015
anonymous said:
[redacted]
I don't think the problem is the number of objects so much as the number of concurrent communications between them, which need to be resolved between each iteration of the model if you are to make the thing respond in real time. You can throw memory at it and work around the language's limitations on collection sizes - I don't think the storage of the current state of a neuron would take much memory; for a human brain you'd be looking at about 80GB of RAM for every byte of state per neuron.
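To make that concrete, here's a toy sketch of the kind of update loop involved - the topology, sizes and names are purely illustrative, not from the worm project. Every neuron's next state depends on the current state of everything feeding it, so each tick has to resolve every connection before the model can advance:

```java
// Illustrative synchronous update loop: each tick reads the current
// state buffer and writes the next one, so every connection must be
// resolved before the model can move forward.
public class TickLoop {
    public static void main(String[] args) {
        int n = 302;                  // worm-scale, for illustration
        int synapsesPerNeuron = 23;
        double[] current = new double[n];
        double[] next = new double[n];
        int[][] inputs = new int[n][synapsesPerNeuron];

        // Wire up an arbitrary ring topology, purely for the sketch.
        for (int i = 0; i < n; i++)
            for (int s = 0; s < synapsesPerNeuron; s++)
                inputs[i][s] = (i + s + 1) % n;

        current[0] = 1.0; // seed some activity
        for (int tick = 0; tick < 10; tick++) {
            for (int i = 0; i < n; i++) {
                double sum = 0;
                for (int j : inputs[i]) sum += current[j];
                next[i] = Math.tanh(sum / synapsesPerNeuron);
            }
            double[] tmp = current; current = next; next = tmp; // swap buffers
        }
        System.out.println("neuron 0 after 10 ticks: " + current[0]);
    }
}
```

At human scale that inner loop is roughly 80 billion neurons times ~700 inputs each - around 5.6x10^13 reads per tick - which is where the real-time problem lives.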

I think even now Java is still noticeably slower than C++ for mathematical operations, but in any case, is this something you'd really want to write in a high-level general-purpose programming language and run under a general-purpose operating system on off-the-shelf kit? IBM are building the logic into chips.

mudflaps

Original Poster:

317 posts

106 months

Thursday 23rd July 2015
DragsterRR said:
Won't happen in our lifetimes, if ever.
Because accountants.

Unless someone can see a big profit in it... (Electronic slavery anyone?)
Interesting premise: accountants don't like profits?

otolith

56,135 posts

204 months

Thursday 23rd July 2015
DragsterRR said:
Won't happen in our lifetimes, if ever.
Because accountants.

Unless someone can see a big profit in it... (Electronic slavery anyone?)
DARPA isn't looking for profit.

otolith

56,135 posts

204 months

Thursday 23rd July 2015
anonymous said:
[redacted]
To store the state of the complete set of neurons in the human brain would require about 80GB for each byte of storage each neuron needs - i.e. if each needed one byte, you'd need 80GB of RAM. It's obviously going to be more than that, not least because each neuron is going to need pointers to the set of neurons to which it is connected.
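Spelling the arithmetic out, with the thread's ~80 billion neuron figure (the bytes-per-neuron and pointer sizes are just illustrative assumptions):

```java
// Naive sizing of neuron state storage, per the 80-billion-neuron figure.
public class NeuronMemory {
    public static void main(String[] args) {
        long neurons = 80_000_000_000L;
        int[] bytesPerNeuron = {1, 16, 64};   // illustrative state sizes
        for (int b : bytesPerNeuron)
            System.out.printf("%2d bytes/neuron -> %5.0f GB state%n",
                              b, neurons * (double) b / 1e9);

        // Connectivity dominates: ~700 synapses per neuron at, say,
        // 8 bytes per pointer is huge before storing any state at all.
        double pointerBytes = neurons * 700.0 * 8;
        System.out.printf("pointers alone: ~%.0f TB%n", pointerBytes / 1e12);
    }
}
```

So the state itself is cheap; it's the connectivity that eats the memory.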

mudflaps

Original Poster:

317 posts

106 months

Tuesday 28th July 2015
More warnings of future problems.

http://www.bbc.co.uk/news/technology-33686581

ikarl

3,730 posts

199 months

Tuesday 28th July 2015
Killer robots: Tech experts warn against AI arms race

http://www.bbc.co.uk/news/technology-33686581

More than 1,000 tech experts, scientists and researchers have written a letter warning about the dangers of autonomous weapons.

In the latest outcry over "killer robots", the letter warns that "a military AI [artificial intelligence] arms race is a bad idea".

Among the signatories are scientist Stephen Hawking, entrepreneur Elon Musk and Apple co-founder Steve Wozniak.

The letter will be presented at an international AI conference today.

mudflaps

Original Poster:

317 posts

106 months

Tuesday 28th July 2015
You were pipped to the post, chap biggrin

ikarl

3,730 posts

199 months

Tuesday 28th July 2015
mudflaps said:
You were pipped to the post chap biggrin
dammit, if only I hadn't stopped to copy/paste that little bit of text hehe

Asterix

24,438 posts

228 months

Tuesday 28th July 2015
The military have the budget.

So much of our current-day tech was a military secret at one point.