
Thread: [News] AI will be smarter than humans in 30 years

  1. #1
    Join XS BOINC Team StyM's Avatar
    Join Date
    Mar 2006
    Location
    Tropics
    Posts
    9,468

    [News] AI will be smarter than humans in 30 years

    http://www.fudzilla.com/news/42978-a...ns-in-30-years

    SoftBank founder Masayoshi Son has said that computers running artificial intelligence programmes will exceed human intelligence within three decades.

    Son told a large audience at the Mobile World Congress, the telecom industry's annual conference in Barcelona, that he believed a computer will have an IQ equal to 1,000 times that of the average human in 30 years.

    "Even clothing like a pair of sneakers will have more computing power than a person. We will be less intelligent than our shoes," he said, to laughter. Many of his audience were holding iPhones, so we guess a lot of them were less intelligent than their shoes already. Son said that AI will only be dangerous if people react badly.

    "I believe this artificial intelligence is going to be our partner. If we misuse it, it will be a risk. If we use it right, it can be our partner.?

    That is good; humanity has a good track record of not using technology unwisely.

    The prediction of the inevitable rise of AI is not the first for the Japanese investor, who owns majority control of U.S. wireless carrier Sprint and last year bought British chip designer ARM Holdings. Last year, Son predicted that the "singularity," a fusing of AI programs and human society leading to massive technological advances, is "inevitable".

    Son said he bought ARM to capitalize on the growth of AI.

  2. #2
    Xtreme Addict
    Join Date
    Mar 2005
    Location
    Rotterdam
    Posts
    1,553
    We are quickly heading for the inevitable self-aware AI system the movies so greatly predicted. Tell me again why there is gargantuan investment in this segment without any regulation?
    Gigabyte Z77X-UD5H
    G-Skill Ripjaws X 16Gb - 2133Mhz
    Thermalright Ultra-120 eXtreme
    i7 2600k @ 4.4Ghz
    Sapphire 7970 OC 1.2Ghz
    Mushkin Chronos Deluxe 128Gb

  3. #3
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,955
    People have been saying this for years.

    AI will always be limited by the learning capability of its programmers. I think it's fair game to say that AI will be more efficient at doing things the smartest people on earth can do - but to say they'll outsmart the human race is outlandish.
    Quote Originally Posted by Hans de Vries View Post

    JF-AMD posting: IPC increases!!!!!!! How many times did I tell you!!!

    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    .....}
    until (interrupt by Movieman)


    Regards, Hans

  4. #4
    Xtremely High Voltage Sparky's Avatar
    Join Date
    Mar 2006
    Location
    Ohio, USA
    Posts
    16,040
    A computer is only as smart as the idiot programming it.

    Either that, or we'll have doomsday, and we'll be needing to look for a John Connor.
    The Cardboard Master
    Crunch with us, the XS WCG team
    Intel Core i7 2600k @ 4.5GHz, 16GB DDR3-1600, Radeon 7950 @ 1000/1250, Win 10 Pro x64

  5. #5
    Xtreme Addict
    Join Date
    Mar 2005
    Location
    Rotterdam
    Posts
    1,553
    Quote Originally Posted by AliG View Post
    People have been saying this for years.

    AI will always be limited by the learning capability of its programmers. I think it's fair game to say that AI will be more efficient at doing things the smartest people on earth can do - but to say they'll outsmart the human race is outlandish.
    I dunno... with companies like Google putting big money into Quantum A.I., one can expect a lot to happen.
    Gigabyte Z77X-UD5H
    G-Skill Ripjaws X 16Gb - 2133Mhz
    Thermalright Ultra-120 eXtreme
    i7 2600k @ 4.4Ghz
    Sapphire 7970 OC 1.2Ghz
    Mushkin Chronos Deluxe 128Gb

  6. #6
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,955
    Quote Originally Posted by Dimitriman View Post
    I dunno... with companies like Google putting big money into Quantum A.I., one can expect a lot to happen.
    Nah. I work with a lot of the machine learning guys in autonomous vehicles, who are in turn partnered with several big name AI groups.

    Wouldn't worry about it.
    Quote Originally Posted by Hans de Vries View Post

    JF-AMD posting: IPC increases!!!!!!! How many times did I tell you!!!

    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    .....}
    until (interrupt by Movieman)


    Regards, Hans

  7. #7
    Xtreme X.I.P. Particle's Avatar
    Join Date
    Apr 2008
    Location
    Kansas
    Posts
    3,219
    Quote Originally Posted by Sparky View Post
    A computer is only as smart as the idiot programming it.
    This is a mistake of linear thinking. AI needs to be bootstrapped by human thought and creation, but it's conceivable that we will eventually develop self learning systems that are better at designing self learning systems than we are. Even our current, fairly primitive, narrow AIs often work in ways that are opaque to the people who created them. That is sort of the point after all--to build a system that can be trained to figure out a faster or more effective way of accomplishing a task than the programmer would know how to explicitly define with traditional logic. Otherwise, we'd just use traditional programs to do all tasks.
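
    To make that concrete, here is a minimal toy sketch (Python, purely illustrative; the task, data and numbers are invented): instead of a programmer hand-writing the rule for a task, a tiny perceptron finds its own weights from labelled examples. The learned numbers do the job, but they do not read like the explicit logic a programmer would have written.

    Code:
# Toy illustration of the point above: instead of hand-coding the rule
# "x AND y", a tiny perceptron finds its own weights from labelled examples.
# Pure Python; the data and learning rate are invented for illustration.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

w = [0.0, 0.0]   # weights, discovered by training rather than written by hand
b = 0.0          # bias
lr = 0.1         # learning rate

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                       # a few passes over the data
    for x, target in examples:
        error = target - predict(x)       # adjust only when the guess is wrong
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print(w, b)                               # learned parameters, not explicit logic
print([predict(x) for x, _ in examples])  # matches the target outputs

    Scale that idea up to millions of parameters and you get exactly the kind of opacity described above.
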
    Particle's First Rule of Online Technical Discussion:
    As a thread about any computer related subject has its length approach infinity, the likelihood and inevitability of a poorly constructed AMD vs. Intel fight also exponentially increases.

    Rule 1A:
    Likewise, the frequency of a car pseudoanalogy to explain a technical concept increases with thread length. This will make many people chuckle, as computer people are rarely knowledgeable about vehicular mechanics.

    Rule 2:
    When confronted with a post that is contrary to what a poster likes, believes, or most often wants to be correct, the poster will pick out only minor details that are largely irrelevant in an attempt to shut out the conflicting idea. The core of the post will be left alone since it isn't easy to contradict what the person is actually saying.

    Rule 2A:
    When a poster cannot properly refute a post they do not like (as described above), the poster will most likely invent fictitious counter-points and/or begin to attack the other's credibility in feeble ways that are dramatic but irrelevant. Do not underestimate this tactic, as in the online world this will sway many observers. Do not forget: Correctness is decided only by what is said last, the most loudly, or with greatest repetition.

    Rule 3:
    When it comes to computer news, 70% of Internet rumors are outright fabricated, 20% are inaccurate enough to simply be discarded, and about 10% are based in reality. Grains of salt--become familiar with them.

    Remember: When debating online, everyone else is ALWAYS wrong if they do not agree with you!

    Random Tip o' the Whatever
    You just can't win. If your product offers feature A instead of B, people will moan how A is stupid and it didn't offer B. If your product offers B instead of A, they'll likewise complain and rant about how anyone's retarded cousin could figure out A is what the market wants.

  8. #8
    Xtremely High Voltage Sparky's Avatar
    Join Date
    Mar 2006
    Location
    Ohio, USA
    Posts
    16,040
    I was trying not to be too wordy.

    My point is, it can only go so far. It isn't like the computer is going to invent something totally new. It still requires a particular task given to it - even if it figures out some better way of doing it later - or it has to be trained to do something, etc. Still will depend on the person to manage that. And if that person has an ID 10 T problem, well, the computer just won't learn very well either!

    The idea that we'll create some sort of AI that then outsmarts us and makes us "obsolete" in a sense is, well, pretty absurd. How's that old comment go, "artificial intelligence is no match for natural stupidity."
    The Cardboard Master
    Crunch with us, the XS WCG team
    Intel Core i7 2600k @ 4.5GHz, 16GB DDR3-1600, Radeon 7950 @ 1000/1250, Win 10 Pro x64

  9. #9
    Xtreme Mentor
    Join Date
    Aug 2006
    Location
    HD0
    Posts
    2,646
    Quote Originally Posted by Dimitriman View Post
    We are quickly heading for the inevitable self-aware AI system the movies so greatly predicted. Tell me again why there is gargantuan investment in this segment without any regulation?
    You live in Denmark.

    I can tell you that in most of the rest of the world, politicians don't make as good of rules.

    Also, good luck getting a politician to understand a deep neural network. There are CS PhDs that struggle with that... all they know is that it works and that numerical stability is a pain.



    Quote Originally Posted by Sparky View Post
    A computer is only as smart as the idiot programming it.
    For certain classes of tasks, that's true.

    Right now it's more accurate to say that it's only as sophisticated as the person programming it.
    At the same time it can sort through 1,000,000x as many data points and do some pretty simple things blindingly fast... and if it can combine simple tasks reasonably well, then it can end up with some pretty complex outputs.
    Last edited by xlink; 02-28-2017 at 08:49 AM.

  10. #10
    Xtreme Enthusiast
    Join Date
    Sep 2005
    Location
    Toronto, Canada
    Posts
    570
    Dolphins are smarter than humans.
    q9550 @ 444 x 8.5 1.3v - Venomous X
    p5q DLX
    Ocz RPR 1066 4 x 2g @ 1066
    eah5870 V2 @ 920/1250 - HR-03gt
    Antec Fusion Remote MAX
    Xonar HDAV1.3
    Ocz zx850w


  11. #11
    Xtreme Member
    Join Date
    Jul 2008
    Location
    NYC
    Posts
    325
    Quote Originally Posted by Particle View Post
    This is a mistake of linear thinking. AI needs to be bootstrapped by human thought and creation, but it's conceivable that we will eventually develop self learning systems that are better at designing self learning systems than we are. Even our current, fairly primitive, narrow AIs often work in ways that are opaque to the people who created them. That is sort of the point after all--to build a system that can be trained to figure out a faster or more effective way of accomplishing a task than the programmer would know how to explicitly define with traditional logic. Otherwise, we'd just use traditional programs to do all tasks.
    Well said. I completely agree.
    Win XP Pro x64 / Win 7 x64 / Phenom II / Asus m3a79-t Deluxe / 8x2 GB GSkill and some other stuff.....

  12. #12
    Xtreme Member
    Join Date
    Jul 2008
    Location
    NYC
    Posts
    325
    Quote Originally Posted by Sparky View Post
    My point is, it can only go so far. It isn't like the computer is going to invent something totally new. It still requires a particular task given to it - even if it figures out some better way of doing it later - or it has to be trained to do something, etc. Still will depend on the person to manage that. And if that person has an ID 10 T problem, well, the computer just won't learn very well either!

    The idea that we'll create some sort of AI that then outsmarts us and makes us "obsolete" in a sense is, well, pretty absurd. How's that old comment go, "artificial intelligence is no match for natural stupidity."
    It's really only an absurd idea if you lack the imagination, or think too linearly.

    A different way to think about it, still sort of "linearly", is that even if this AI you're describing only does something we've trained it to do, only faster or in a new way, you could still consider the possibility that the combination of solutions will constitute something new as a result. In other words, if we train it to do A, B, C.... Z, we may never predict it would combine all of that into completely new 'solutions'. "Fine" you may think, "that's still things we understand even if we didn't predict them", but the thing is that if we reduce any given process to something "linear" (in this sense) then ALL problems and solutions fit in the same realm, and the "level of intelligence" that "outsmarts us" that you talk about is really only the sum of the "linear" parts.

    So we're really back to the Turing test, meaning in a more practical sense: "who cares" if the AI is really smarter in a human sense, or if it just outsmarts us in this "linear" sense? It still outsmarts us.
    Win XP Pro x64 / Win 7 x64 / Phenom II / Asus m3a79-t Deluxe / 8x2 GB GSkill and some other stuff.....

  13. #13
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Neural networks are the new hype of the day. We can build programs that generate output from input data better and faster than humans can, but that's what we designed computers for. And we call those programs Artificial Intelligence when the complexity (or amount of garbage) is so high that we no longer understand how the output was generated.
    But no matter how smart the programs are, we cannot build consciousness, nor can we train a net to be wise.

    Side note: neural networks are often used for cases where there is no other class of problem solver for the given task. In those scenarios, it's hard to prove that the solutions are correct, and many "see" the results as "outstanding" because that's what they want to see.

  14. #14
    Xtreme Member
    Join Date
    Jul 2008
    Location
    NYC
    Posts
    325
    Quote Originally Posted by sergiu View Post
    Neural networks are the new hype of the day. We can build programs that generate output from input data better and faster than humans can, but that's what we designed computers for. And we call those programs Artificial Intelligence when the complexity (or amount of garbage) is so high that we no longer understand how the output was generated.
    But no matter how smart the programs are, we cannot build consciousness, nor can we train a net to be wise.
    But again, what do you base that claim on?

    First of all you would have to define what "consciousness" and "wisdom" are. The latter is probably far easier to define, and it's probably not even that hard to get an AI to display wisdom. As for "consciousness": a lot of biologists and physicists think of it as being purely the result of the activity in the brain, nothing else. And the brain is something we're understanding more and more, and it's also what gave us the idea for neural networks.

    There's really no reason to think that our consciousness would be significantly different from that of an AI if we construct the neural network similarly to the human brain. The reason we resist seeing it as 'equal' in 'quality' is really just ego-centrism in my opinion. We think we're special, and so the concept of creating a machine that would have the same qualities or even surpass us is offensive. I think that's the underlying cause for resistance.
    Win XP Pro x64 / Win 7 x64 / Phenom II / Asus m3a79-t Deluxe / 8x2 GB GSkill and some other stuff.....

  15. #15
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,955
    Quote Originally Posted by sergiu View Post
    Neural networks are the new hype of the day. We can build programs that generate output from input data better and faster than humans can, but that's what we designed computers for. And we call those programs Artificial Intelligence when the complexity (or amount of garbage) is so high that we no longer understand how the output was generated.
    But no matter how smart the programs are, we cannot build consciousness, nor can we train a net to be wise.

    Side note: neural networks are often used for cases where there is no other class of problem solver for the given task. In those scenarios, it's hard to prove that the solutions are correct, and many "see" the results as "outstanding" because that's what they want to see.
    Right, and neural nets can be trained to solve problems that would otherwise require exponential computational time. And even though we write the algorithms, it's very hazy how the actual training models work (i.e. why certain nodes efficiently relate to each other, but others don't).

    But having worked with them for speech recognition, I also know for a fact there's no chance of them ever growing sentient LOL
    Quote Originally Posted by Hans de Vries View Post

    JF-AMD posting: IPC increases!!!!!!! How many times did I tell you!!!

    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    .....}
    until (interrupt by Movieman)


    Regards, Hans

  16. #16
    Xtreme Member
    Join Date
    Jul 2008
    Location
    NYC
    Posts
    325
    How do you know?

    Making claims on the internet is about the easiest thing in the world.
    Win XP Pro x64 / Win 7 x64 / Phenom II / Asus m3a79-t Deluxe / 8x2 GB GSkill and some other stuff.....

  17. #17
    Xtreme Guru
    Join Date
    May 2007
    Location
    Ace Deuce, Michigan
    Posts
    3,955
    Quote Originally Posted by MattiasNYC View Post
    How do you know?

    Making claims on the internet is about the easiest thing in the world.
    It's called being an engineer who understands how machine learning works

    This stuff always cracks me up. No one who actually works in the industry believes self-aware computers are possible.
    Quote Originally Posted by Hans de Vries View Post

    JF-AMD posting: IPC increases!!!!!!! How many times did I tell you!!!

    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    terrace215 post: IPC decreases, The more I post the more it decreases.
    .....}
    until (interrupt by Movieman)


    Regards, Hans

  18. #18
    Xtreme Member
    Join Date
    Jul 2008
    Location
    NYC
    Posts
    325
    Oh really? And as an engineer, did you also study "consciousness" or "self-awareness" as it applies to biology?
    Win XP Pro x64 / Win 7 x64 / Phenom II / Asus m3a79-t Deluxe / 8x2 GB GSkill and some other stuff.....

  19. #19
    Xtreme Addict
    Join Date
    Dec 2004
    Location
    Flying through Space, with armoire, Armoire of INVINCIBILATAAAAY!
    Posts
    1,939
    Quote Originally Posted by AliG View Post
    It's called being an engineer who understands how machine learning works

    This stuff always cracks me up. No one who actually works in the industry believes self-aware computers are possible.
    Counterexample: I worked in AI research, and I think that self-aware computers can exist. There's no reason why not.
    Trivially, you could just run an atomic-level simulation of a human brain. We already know how to simulate the interactions between atoms, and we also have a self-aware machine (human brain); so just simulate it!

    Horribly inefficient, of course, but it can obviously exist.
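
    For a sense of what "simulating the interactions between atoms" looks like in practice, here is a toy classical molecular dynamics sketch (Python with NumPy; a handful of particles and made-up parameters, nothing like a real biomolecular code): Lennard-Jones pair forces integrated with velocity Verlet. A brain-scale version would need unimaginably many more atoms, which is the "horribly inefficient" part.

    Code:
# Toy classical molecular dynamics sketch: eight "atoms" with Lennard-Jones
# pair forces, integrated with velocity Verlet. Parameters and units are
# made up for illustration; this is nowhere near a real biomolecular code.
import numpy as np

dt, steps = 0.001, 100
eps, sigma = 1.0, 1.0                                  # LJ well depth and radius
pos = 1.2 * np.array([[i, j, k] for i in range(2)
                      for j in range(2) for k in range(2)], dtype=float)
vel = np.zeros_like(pos)
n = len(pos)

def forces(p):
    f = np.zeros_like(p)
    for i in range(n):
        for j in range(i + 1, n):
            r = p[i] - p[j]
            d2 = float(r @ r)
            inv6 = (sigma * sigma / d2) ** 3           # (sigma/r)^6
            fij = 24.0 * eps * (2.0 * inv6 * inv6 - inv6) / d2 * r
            f[i] += fij                                # Newton's third law
            f[j] -= fij
    return f

f = forces(pos)
for _ in range(steps):                                 # velocity Verlet
    pos += vel * dt + 0.5 * f * dt * dt
    new_f = forces(pos)
    vel += 0.5 * (f + new_f) * dt
    f = new_f

print(pos.round(3))                                    # positions after 100 steps
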
    Sigs are obnoxious.

  20. #20
    Xtreme X.I.P. Particle's Avatar
    Join Date
    Apr 2008
    Location
    Kansas
    Posts
    3,219
    Quote Originally Posted by AliG View Post
    It's called being an engineer who understands how machine learning works

    This stuff always cracks me up. No one who actually works in the industry believes self-aware computers are possible.
    You seem to be conflating consciousness with intelligence or capability. Do you assume that the latter requires the former?
    Particle's First Rule of Online Technical Discussion:
    As a thread about any computer related subject has its length approach infinity, the likelihood and inevitability of a poorly constructed AMD vs. Intel fight also exponentially increases.

    Rule 1A:
    Likewise, the frequency of a car pseudoanalogy to explain a technical concept increases with thread length. This will make many people chuckle, as computer people are rarely knowledgeable about vehicular mechanics.

    Rule 2:
    When confronted with a post that is contrary to what a poster likes, believes, or most often wants to be correct, the poster will pick out only minor details that are largely irrelevant in an attempt to shut out the conflicting idea. The core of the post will be left alone since it isn't easy to contradict what the person is actually saying.

    Rule 2A:
    When a poster cannot properly refute a post they do not like (as described above), the poster will most likely invent fictitious counter-points and/or begin to attack the other's credibility in feeble ways that are dramatic but irrelevant. Do not underestimate this tactic, as in the online world this will sway many observers. Do not forget: Correctness is decided only by what is said last, the most loudly, or with greatest repetition.

    Rule 3:
    When it comes to computer news, 70% of Internet rumors are outright fabricated, 20% are inaccurate enough to simply be discarded, and about 10% are based in reality. Grains of salt--become familiar with them.

    Remember: When debating online, everyone else is ALWAYS wrong if they do not agree with you!

    Random Tip o' the Whatever
    You just can't win. If your product offers feature A instead of B, people will moan how A is stupid and it didn't offer B. If your product offers B instead of A, they'll likewise complain and rant about how anyone's retarded cousin could figure out A is what the market wants.

  21. #21
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by AliG View Post
    It's called being an engineer who understands how machine learning works

    This stuff always cracks me up. No one who actually works in the industry believes self-aware computers are possible.
    You could not have said it better!

    @MattiasNYC & @iddqd : Natural science assumes our brain is a very complex biological computational machine, possibly a nearest-neighbour classifier (quoting a colleague). The logical conclusion is that it's possible to build an equivalent or better artificial intelligence given infinite resources. How about the following assumption? What if conscious processes happen outside the brain, in an environment out of reach of our current knowledge, and the brain is just the interface? Neither the naturalistic assumption nor this one can be proven, but the ramifications of one being true over the other are huge. One allows the building of such an AI and says it's only a matter of time, while the other denies any possibility of it.

    Consciousness and wisdom imply awareness of the environment, awareness of universal moral laws, awareness of good and evil, and being able to judge human problems wisely (and possibly other points that I do not see now). A true AI would also have the capacity to obey or rebel against its master's commandments, no matter whether those are good or evil. 15 years of programming experience and a deep passion for hardware are enough for me to know this will never happen. And the cherry on top: I assume God exists and that humans were created in His image. If my assumption is true, then the logical ramification is that the only way I can create any true intelligence is by union with someone of the opposite sex. And this kind of intelligence is not artificial.

    And more on topic: we are now playing with neural networks in my company in order to detect appliances based on smart meter data. The amount of complexity hidden in the calculations and training is so high that almost nobody really understands how it gets to the results. And the results look good, but in the end it is nothing else than pure computation. There is nothing truly intelligent except the mind that defined the mathematical formulas used.
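
    For anyone curious about the general idea (a toy sketch only, with invented numbers; not sergiu's actual system), a simple event-based approach to appliance detection looks for step changes in the total power reading and matches them to known appliance signatures:

    Code:
# Toy sketch of event-based appliance detection from a smart meter trace:
# find step changes in total power and match them to known appliance
# signatures. All numbers are invented; this is not a real NILM system.

signatures = {"fridge": 120, "kettle": 2000, "washing machine": 500}  # watts

# a made-up whole-house power trace, one reading per minute (watts)
trace = [80, 80, 200, 200, 2200, 2200, 200, 700, 700, 200, 80]

events = []
for t in range(1, len(trace)):
    step = trace[t] - trace[t - 1]
    if abs(step) < 50:                       # ignore noise below 50 W
        continue
    # pick the appliance whose rated draw is closest to the step size
    name = min(signatures, key=lambda a: abs(signatures[a] - abs(step)))
    events.append((t, name, "on" if step > 0 else "off"))

for t, name, state in events:
    print(f"minute {t}: {name} switched {state}")

    Real deployments, and the neural-network approaches described above, have to cope with overlapping appliances, drifting signatures, and devices with similar draw, which is where the hard-to-interpret complexity comes in.
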
    Last edited by sergiu; 02-28-2017 at 03:00 PM.

  22. #22
    Xtreme Member AbortRetryFail?'s Avatar
    Join Date
    Apr 2008
    Posts
    367
    AI will be smarter than humans in 30 years
    Only if it's smart enough to reboot after I pull the power

    I suspect "AI Logic" will improve immensely. That does not necessarily mean there will be 'conscientious' AI as defined by human sentience.

  23. #23
    Xtreme Member
    Join Date
    Jul 2008
    Location
    NYC
    Posts
    325
    Quote Originally Posted by sergiu View Post
    @MattiasNYC & @iddqd : Natural science assumes our brain is a very complex biological computational machine, possibly a nearest-neighbour classifier (quoting a colleague). The logical conclusion is that it's possible to build an equivalent or better artificial intelligence given infinite resources. How about the following assumption? What if conscious processes happen outside the brain, in an environment out of reach of our current knowledge, and the brain is just the interface? Neither the naturalistic assumption nor this one can be proven, but the ramifications of one being true over the other are huge. One allows the building of such an AI and says it's only a matter of time, while the other denies any possibility of it.
    Well, first of all the "other" doesn't actually prove that there isn't a possibility for this AI we're talking about. It assumes that just because humans somehow interface with something "outside" of the brain, the AI wouldn't be able to. In order to know that to be true you'd have to know a lot more. Of course the fundamental problem here is that we've so far found absolutely zero evidence proving such a 'connection' or 'interface', so it's just pure speculation and not at all even remotely factual.

    Secondly, I don't see how infinite resources are required. Plenty of resources? Yes. Infinite? No.

    Quote Originally Posted by sergiu View Post
    Consciousness and wisdom imply awareness of the environment, awareness of universal moral laws, awareness of good and evil, and being able to judge human problems wisely (and possibly other points that I do not see now). A true AI would also have the capacity to obey or rebel against its master's commandments, no matter whether those are good or evil. 15 years of programming experience and a deep passion for hardware are enough for me to know this will never happen. And the cherry on top: I assume God exists and that humans were created in His image. If my assumption is true, then the logical ramification is that the only way I can create any true intelligence is by union with someone of the opposite sex. And this kind of intelligence is not artificial.
    Well this just leads back to what I said earlier which is that humans are terribly ego-centric, and place a huge importance on ourselves in the universe. Abrahamic belief systems are just very blatant examples of that.

    I don't think it's true though. I don't think it's true that we're the center of the universe, I don't think it's true that we were created by an extra-terrestrial or supernatural entity, and I don't think there's a god. But obviously, if that's your belief, then you will have a fundamental problem accepting the possibility of AI becoming conscious; completely regardless of facts or science or reality. Faith and the supernatural can by definition not be falsified so your view will always allow you to find that it is correct, because it can't be proven wrong. Unfortunately it doesn't make it true though.

    "Wisdom" is actually entirely irrelevant to the point of whether or not AI will outsmart humans, so all this talk about morality is irrelevant I think.

    Anyway, we're not the center of the universe, and there is no god.

    Quote Originally Posted by sergiu View Post
    And more on topic: we are now playing with neural networks in my company in order to detect appliances based on smart meter data. The amount of complexity hidden in the calculations and training is so high that almost nobody really understands how it gets to the results. And the results look good, but in the end it is nothing else than pure computation. There is nothing truly intelligent except the mind that defined the mathematical formulas used.
    See, that's the problem (in bold) right there: You don't know how it gets to the results. All you understand are the results.

    But that's no different from the human mind. You can put a person in a testing facility and measure the output of what the human does, but you'll not understand just how his mind, or 'consciousness' came to those conclusions. You know your own mind, but not anybody else's. So, if this is the case, how would you even distinguish between an AI that is extremely advanced but not conscious and self-aware and one that is? I mean, if you can't even understand how it calculates the output then how would you know? If a super-computer fools you on a Turing test you might at some point figure out how it does it, come to the conclusion that it's not conscious etc.... but what if it only fools you without you understanding how?... How do you actually know it is not conscious?
    Win XP Pro x64 / Win 7 x64 / Phenom II / Asus m3a79-t Deluxe / 8x2 GB GSkill and some other stuff.....

  24. #24
    Xtreme Member
    Join Date
    Jul 2008
    Location
    NYC
    Posts
    325
    Quote Originally Posted by AbortRetryFail? View Post
    AI will be smarter than humans in 30 years
    Only if it's smart enough to reboot after I pull the power
    I think the scenarios that worry people would probably put you in a position where you can't just pull the power.

    Quote Originally Posted by AbortRetryFail? View Post
    I suspect "AI Logic" will improve immensely. That does not necessarily mean there will be 'conscientious' AI as defined by human sentience.
    I agree. You mean "conscious" though right, not "conscientious"?
    Win XP Pro x64 / Win 7 x64 / Phenom II / Asus m3a79-t Deluxe / 8x2 GB GSkill and some other stuff.....

  25. #25
    Xtreme Member
    Join Date
    Feb 2006
    Location
    Stuttgart, Germany
    Posts
    225
    Quote Originally Posted by MattiasNYC View Post
    Anyway, we're not the center of the universe, and there is no god.
    You are making a bold affirmation that you cannot prove. You can assume that there is no God just like I assume there is one. The denial of the existence of God is just as religious as the recognition of it, since neither of them can be proven.
    My second point is that all those articles stand on assumptions which in turn stand on other assumptions. And on top of those assumptions we have news that sells. Just changing the root assumption from an evolved universe to a created one changes the possible outcome for humans ever being able to generate a higher intelligence.

    And to add to the topic, it might be better to define what kind of intelligence can and cannot be developed, and what would count as true intelligence and what would not. We are already at the point where there is almost no game in which humans can beat computers, but this is pure brute force, no matter the algorithm used. There are also other ways to validate an intelligence, like asking it to come up with rational, provable explanations to existential questions. And we could start with:
    - How was the first star formed? (You have to first compress hydrogen and helium to some point before gravity can take over.)
    - How were the first planets formed? (According to simulations, gravity cannot create large bodies by aggregation.)
    - How did life come from non-life? How did information come into existence, and who designed the architecture? (DNA is like source code in base 4, roughly 770 MB per human genome; see the back-of-the-envelope calculation after this list. In my experience, source code is useless if you do not design a computer architecture that can load and execute it.)
    - How many genetic mutations and how many generations are needed to get from the first cell to Homo sapiens sapiens, and how much time, given the minimum generational age between mutations? (Mathematical models of evolution show it's possible only when the entity is very small and the generation time is shorter than 3 months.)
    And I could add some bonus questions, like why there is a constant layer of marine fossils all over the world, as if it was somehow covered in water all at once; why all fossils are actually the result of rapid burial; why C14 is found consistently in diamonds when there is no physical way to contaminate them; and why the diffusion rate of helium in rocks found deep in the earth suggests accelerated decay and an earth age of about 6,000 years. And the list could go on.
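
    As a side check on just the arithmetic, the "~770 MB per human genome" figure mentioned in the list works out as a back-of-the-envelope calculation, assuming roughly 3.1 billion base pairs and 2 bits per base (none of the other claims in the list are addressed by this):

    Code:
# Back-of-the-envelope check of the "~770 MB per human genome" figure.
base_pairs = 3.1e9             # approximate length of the human genome
bits_per_base = 2              # four letters (A, C, G, T) -> 2 bits each
megabytes = base_pairs * bits_per_base / 8 / 1e6
print(round(megabytes), "MB")  # about 775 MB, same ballpark as the figure above
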

    The questions above are raised by scientists all over the world and until now have no fact-backed explanations, only superficial ones from the pure-evolution point of view (imposed religiously by the scientific community). A true AI that is superior in intellect to the human mind should be able to understand all the fields (biology, chemistry, physics, etc.), connect the dots, then be creative and generate unified theories.

    Now a few observations from software development:
    - there is at least one bug for every 10-100 lines of code (closer to the lower end of that range)
    - subtle bugs take years to be discovered and sometimes require a complete redesign to fix
    - concurrent programming is hard, and concurrency bugs are often transient and hard to fix
    - the quality of the developers who graduate gets worse every year

    Given these observations, you may understand why I (and I think many other engineers) do not believe it will ever be possible to develop something that can communicate in an intelligent manner with humans and contribute to society by answering existential questions. However, if I were to live to see it, I would not be surprised if the AI answered all the existential questions with one word: God.
    Last edited by sergiu; 02-28-2017 at 07:50 PM.
