Is The A.I. Singularity Coming And If So When?
Discussion
Guvernator said:
Certainly very interesting, but still quite limited in that they can only "learn" one specific task, and still with lots of help. We are getting quite good at limited or narrow AI, and it's a step in the right direction to be sure, but I'd want to see proof that it can learn anything, even something that it hasn't been programmed for, and that it actually understands what it is learning rather than just mimicking it with some clever algorithm.
I'm not arguing that AI is with us now, or even will be in 10 years' time, just that the growth is now on a fixed exponential curve. The chap on that link demonstrated that in some fields algorithms have already overtaken human capability, and in others it's now very, very close. Of course there's still a long way to go, but an exponential growth rate devours "a long way to go" very quickly.
Mods: Sorry, that pic contains a swear word, not my doing.
Personally I'm looking forward to downloading my brain into a new and improved android body.
This would probably be a direct offshoot of achieving the technology to create AI. If we can create an artificial brain, we have also created a vessel that might be capable of hosting a human one.
A brilliant read on AI by Wait But Why:
Part 1
http://waitbutwhy.com/2015/01/artificial-intellige...
Part 2
http://waitbutwhy.com/2015/01/artificial-intellige...
mudflaps said:
I'm not arguing that AI is with us now, or even will be in 10 years' time, just that the growth is now on a fixed exponential curve. The chap on that link demonstrated that in some fields algorithms have already overtaken human capability, and in others it's now very, very close.
Of course there's still a long way to go, but an exponential growth rate devours "a long way to go" very quickly.
The growth of processing power has been pretty much exponential so far, although there is recent evidence to show that this has been slowing down. Progress in the other scientific fields that are also required to support AI is anything but exponential; it looks more like a series of S-curves. We get a sudden breakthrough, then years of development where nothing much happens, and then another breakthrough. It's this erratic progress rate that means prediction isn't as easy as just plotting a nice upward-trending graph.
As an example, a year ago IBM managed to write one bit of information on a quantum field, and even the implications of that are predicted to take years to fully study and realise. Where do you see the next breakthrough, and the next? Yes, you could reach a tipping point where developments start to speed up exponentially, but you could just as easily hit a brick wall which means you don't make any substantial breakthroughs for decades.
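The exponential-versus-S-curve point above is easy to see numerically. Here's a toy sketch (all rates and the ceiling are invented for illustration): an exponential curve and a logistic S-curve look almost identical early on, which is why extrapolating early "exponential-looking" progress is risky.

```python
import math

def exponential(t, rate=0.5):
    # Unbounded exponential growth
    return math.exp(rate * t)

def logistic(t, rate=0.5, ceiling=100.0):
    # S-curve: looks exponential at first, then saturates at `ceiling`
    return ceiling / (1.0 + (ceiling - 1.0) * math.exp(-rate * t))

# Early on the two curves track each other; later they diverge wildly.
for t in range(0, 25, 4):
    print(f"t={t:2d}  exponential={exponential(t):12.1f}  s-curve={logistic(t):8.1f}")
```

Both start near 1, but by t=24 the exponential is in the hundreds of thousands while the S-curve has flattened out just below its ceiling.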
Guvernator said:
but you could just as easily hit a brick wall which means you don't make any substantial breakthroughs for decades.
You 'could', but there is no evidence that we 'will'. Indeed, all of the evidence thus far IS for exponential growth having taken place since the mid 20th century and still being on course. One stumbling block in one field doesn't make the overall picture any less so.
Dan_1981 said:
We wouldn't be competing for resources they wanted?
They may not wipe us out - but they could send us back to the stone age - thereby removing the competition for power.
Edited by Moonhawk on Thursday 16th July 13:32
anonymous said:
[redacted]
My comment on the first AI being biological was slightly tongue in cheek, pretty much on the basis that it could well be that synthetic biology researchers will produce an intelligent artificial organism before all the problems of creating a machine-based AI are solved.
anonymous said:
[redacted]
Because that's not how the brain works. An algorithm is just that, a clever program. Any program will have limitations in what it can do, defined by its programming, no matter how clever that might be. In order for AI to replicate the human ability to learn and understand its environment, it needs to be able to re-wire itself at a fundamental level or create new connections, much like the brain does. It may be possible to emulate intelligence in programming, but that would be highly inefficient (I'd like to see how many lines of code it would take) and also just that, an emulation. Real human-like intelligence needs something more; the brain's ability to re-write and create new hardware connections on the fly is one of the things that makes it the most efficient computer on the planet.
mudflaps said:
Guvernator said:
Certainly very interesting, but still quite limited in that they can only "learn" one specific task, and still with lots of help. We are getting quite good at limited or narrow AI, and it's a step in the right direction to be sure, but I'd want to see proof that it can learn anything, even something that it hasn't been programmed for, and that it actually understands what it is learning rather than just mimicking it with some clever algorithm.
I'm not arguing that AI is with us now, or even will be in 10 years' time, just that the growth is now on a fixed exponential curve. The chap on that link demonstrated that in some fields algorithms have already overtaken human capability, and in others it's now very, very close. Of course there's still a long way to go, but an exponential growth rate devours "a long way to go" very quickly.
Mods: Sorry, that pic contains a swear word, not my doing.
http://waitbutwhy.com/2015/01/artificial-intellige...
It really blew my mind; part of me hopes I'm not around when it happens (based on predictions I will be), and another part of me wants to see it happen!
ZOLLAR said:
Did you read this webpage also? (the pics you post are from there)
http://waitbutwhy.com/2015/01/artificial-intellige...
It really blew my mind; part of me hopes I'm not around when it happens (based on predictions I will be), and another part of me wants to see it happen!
I did, a few weeks back, and it is a bit of a wake-up call.
This is why leaders in the Computing sector are now issuing warnings - they see the fast approaching AI juggernaut and recognise that not enough is being done to prepare for it, whatever 'it' might be.
Did you read part 2?
http://waitbutwhy.com/2015/01/artificial-intellige...
Edited by mudflaps on Thursday 16th July 15:10
mudflaps said:
I did, a few weeks back and it is a bit of a wake up call.
This is why leaders in the Computing sector are now issuing warnings - they see the fast approaching AI juggernaut and recognise that not enough is being done to prepare for it, whatever 'it' might be.
Again I ask the question: if "it" is coming, why do we automatically assume that's a bad thing? Why do we need to issue warnings?
Guvernator said:
mudflaps said:
I did, a few weeks back and it is a bit of a wake up call.
This is why leaders in the Computing sector are now issuing warnings - they see the fast approaching AI juggernaut and recognise that not enough is being done to prepare for it, whatever 'it' might be.
Again I ask the question: if "it" is coming, why do we automatically assume that's a bad thing? Why do we need to issue warnings?
p1stonhead said:
A brilliant read on AI by Wait But Why:
Part 1
http://waitbutwhy.com/2015/01/artificial-intellige...
Part 2
http://waitbutwhy.com/2015/01/artificial-intellige...
Going back to the fifth paragraph of my opening post.
mudflaps said:
Polls of scientists show that over 50% of them think that it'll be here by 2045 making that a 'probable scenario' in the view of the scientific establishment.
Part 2 of that article says:
In 2013 Vincent C. Müller and Nick Bostrom conducted a survey that asked hundreds of AI experts at a series of conferences the following question: “For the purposes of this question, assume that human scientific activity continues without major negative disruption. By what year would you see a (10% / 50% / 90%) probability for such Human Level Machine Intelligence to exist?”
It asked them to name an optimistic year (one in which they believe there’s a 10% chance we’ll have AGI), a realistic guess (a year they believe there’s a 50% chance of AGI i.e. after that year they think it’s more likely than not that we’ll have AGI), and a safe guess (the earliest year by which they can say with 90% certainty we’ll have AGI). Gathered together as one data set, here were the results:
Median optimistic year (10% likelihood): 2022
Median realistic year (50% likelihood): 2040
Median pessimistic year (90% likelihood): 2075
So the median participant thinks it’s more likely than not that we’ll have AGI 25 years from now. The 90% median answer of 2075 means that if you’re a teenager right now, the median respondent, along with over half of the group of AI experts is almost certain AGI will happen within your lifetime.
A separate study conducted recently by author James Barrat at Ben Goertzel’s annual AGI Conference did away with percentages and simply asked when participants thought AGI would be achieved—by 2030, by 2050, by 2100, after 2100, or never. The results:
By 2030: 42% of respondents
By 2050: 25%
By 2100: 20%
After 2100: 10%
Never: 2%
Pretty similar to Müller and Bostrom’s outcomes. In Barrat’s survey, over two thirds of participants believe AGI will be here by 2050 and a little less than half predict AGI within the next 15 years. Also striking is that only 2% of those surveyed don’t think AGI is part of our future.
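The "median year" figures in the excerpt above are just the middle value of each confidence column across all respondents. A minimal sketch of that aggregation, using a small invented set of responses (the individual years below are made up; only the resulting medians match the article's reported figures):

```python
import statistics

# Hypothetical mini-dataset in the spirit of the Müller & Bostrom survey:
# each expert names a year for 10%, 50% and 90% confidence of AGI.
responses = [
    {"10%": 2020, "50%": 2035, "90%": 2060},
    {"10%": 2022, "50%": 2040, "90%": 2075},
    {"10%": 2025, "50%": 2045, "90%": 2090},
    {"10%": 2030, "50%": 2050, "90%": 2100},
    {"10%": 2021, "50%": 2038, "90%": 2070},
]

# Aggregate each confidence level independently by taking the median year.
for level in ("10%", "50%", "90%"):
    years = [r[level] for r in responses]
    print(f"Median {level} year: {statistics.median(years)}")
```

The median is used rather than the mean so that a few extreme answers ("never" proxied as a very late year) don't drag the headline figure around.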
mudflaps said:
ZOLLAR said:
Did you read this webpage also? (the pics you post are from there)
http://waitbutwhy.com/2015/01/artificial-intellige...
It really blew my mind; part of me hopes I'm not around when it happens (based on predictions I will be), and another part of me wants to see it happen!
I did, a few weeks back, and it is a bit of a wake-up call.
This is why leaders in the Computing sector are now issuing warnings - they see the fast approaching AI juggernaut and recognise that not enough is being done to prepare for it, whatever 'it' might be.
Did you read part 2?
http://waitbutwhy.com/2015/01/artificial-intellige...
Edited by mudflaps on Thursday 16th July 15:10
Slightly off topic, but read this also; it's about the Fermi paradox and other life in the universe.
Basically, from what I understand, if A.I. doesn't get us, our very existence may finish us off.
http://waitbutwhy.com/2014/05/fermi-paradox.html#
anonymous said:
[redacted]
It's certainly very impressive at what it does, but it's confined to doing a set number of limited tasks, most of which are related. However:
1) It doesn't actually understand what you are asking it. It merely responds with the best match according to an algorithm, hence you as a human usually having to sift through the top 10 or so answers to see if it actually returned the correct pattern match.
2) Because it doesn't really understand your question, it can't answer it; it merely finds information posted on the internet (by humans, funnily enough). It can't do anything else. Ask it to comment on whether it likes a particular painting and why, and get it to give you its own opinion rather than throwing up a random list of other people's, then I'll be impressed.
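The "best match according to an algorithm" point above can be made concrete with a toy ranker: a few lines of word-overlap scoring will rank documents against a query without any grasp of what the words mean. All documents and the query below are invented for illustration.

```python
def score(query, document):
    # Naive relevance score: count how many distinct query words
    # also appear in the document. No semantics involved at all.
    q_words = set(query.lower().split())
    d_words = set(document.lower().split())
    return len(q_words & d_words)

documents = [
    "why is the sky blue during the day",
    "recipe for blueberry pie",
    "the day the sky turned red",
]

query = "why is the sky blue"
ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
for doc in ranked:
    print(score(query, doc), doc)
```

The top-ranked document happens to be the relevant one, but only because its surface words overlap the query's; the ranker would be equally happy with a fluent-sounding wrong answer that reused the same words.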
anonymous said:
[redacted]
I think this is part of the perceived problem. Governments and the public at large are almost completely in the dark as to the up-to-the-minute state of play of the research and development. Google, Apple et al. will tell you that this is due to commercial considerations, but there appears to be almost zero oversight of what is happening in (say) Google's labs, and when we do find out some glimmer of news it's invariably out of date. Your quote about "their (Google's) systems are able to automatically translate between pretty much every spoken language on the planet, properly understand queries written in natural language" is now old(ish) news, and Google are prepared to talk to the BBC about it (I posted a link earlier), but there were things being held back that are clearly even further along the development road.
A bit like manufacturers releasing version 10 of Windows to the public while back in the office version 12 is already well under way.
anonymous said:
[redacted]
Google Translate doesn't use AI, not in the sense that the system understands the syntax and semantics of what is being translated, anyway. It uses a brute-force statistical approach based on publicly available documents from the UN, EU etc. that are produced in multiple languages. It works on the basis that what is being said has already been translated before, and searches the document base for the best match.
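A toy sketch of that "already been translated before" idea: look the sentence up in a parallel corpus, or reuse the translation of the closest previously seen sentence, with no analysis of meaning at all. The tiny corpus below is invented (in the spirit of the UN/EU documents mentioned above), and fuzzy matching here stands in for the real system's far more sophisticated statistics.

```python
import difflib

# Invented parallel corpus: English sentence -> French translation
parallel_corpus = {
    "the meeting is adjourned": "la séance est levée",
    "the committee approved the report": "le comité a approuvé le rapport",
    "thank you mister president": "merci monsieur le président",
}

def translate(sentence):
    # Find the closest already-translated sentence and reuse its
    # translation; give up if nothing is similar enough.
    matches = difflib.get_close_matches(sentence, parallel_corpus,
                                        n=1, cutoff=0.6)
    return parallel_corpus[matches[0]] if matches else None

print(translate("the meeting is adjourned"))        # exact hit in the corpus
print(translate("the committee approved a report"))  # near-miss, reused anyway
print(translate("completely unrelated gibberish"))   # no usable match
```

The system "translates" the near-miss sentence correctly without any idea what a committee or a report is, which is exactly the pattern-matching-versus-understanding distinction being argued above.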