Quote:
Originally Posted by Wizzo
Google is simply a search engine company, Alphabet on the other hand! 
A rose, or in this case machine overlords, by any other name...
Have any of you been watching this AMC series "Humans"?
Pure fantasy or not, having lifelike androids for maids, nannies, and yes, fuck toys, is something that a lot of people would want once it becomes available.
But is artificial sentience possible? For every so-called 'reliable source' claiming it isn't, there are at least as many links out there claiming it is...
Artificial Sentience -- is it possible? The answer is yes.
Talking to Robots: Artificial Intelligence Is Possible
Are We Smart Enough to Control Artificial Intelligence? --
Even if the odds of a superintelligence arising are very long, perhaps it's irresponsible to take the chance. One person who shares Bostrom's concerns is Stuart J. Russell, a professor of computer science at the University of California, Berkeley. Russell is the author, with Peter Norvig (a peer of Kurzweil's at Google), of Artificial Intelligence: A Modern Approach, which has been the standard AI textbook for two decades.
"There are a lot of supposedly smart public intellectuals who just haven't a clue," Russell told me. He pointed out that AI has advanced tremendously in the last decade, and that while the public might understand progress in terms of Moore's Law (faster computers are doing more), in fact recent AI work has been fundamental, with techniques like deep learning laying the groundwork for computers that can automatically increase their understanding of the world around them.
Because Google, Facebook, and other companies are actively looking to create an intelligent, "learning" machine, he reasons, "I would say that one of the things we ought not to do is to press full steam ahead on building superintelligence without giving thought to the potential risks. It just seems a bit daft." Russell made an analogy: "It's like fusion research. If you ask a fusion researcher what they do, they say they work on containment. If you want unlimited energy you'd better contain the fusion reaction." Similarly, he says, if you want unlimited intelligence, you'd better figure out how to align computers with human needs.
Bostrom's book is a research proposal for doing so. A superintelligence would be godlike, but would it be animated by wrath or by love? It's up to us (that is, the engineers). Like any parent, we must give our child a set of values. And not just any values, but those that are in the best interest of humanity. We're basically telling a god how we'd like to be treated. How to proceed?
If a serious discussion is truly desired here, the least everyone can do is quit with the "this one guy says it's not possible, so it's not possible" line and admit that other qualified experts on the same subject are saying it just might be at some point. Is the "Skynet" analogy accurate? Probably not, but that part really should obviously be taken as the comedic element here. It's a loose analogy, as I've already admitted, but it's about the closest one we have that most people 'get'.
And the fact is we as a species have made more technological leaps and bounds in the past 50 years than in all the rest of human history combined. So I find it odd that anyone, but especially highly educated people (like Deutche, for example), presumes to declare what is and isn't possible, and to say it in absolutes. We're witnessing only the infancy of robotics, and of programming for that matter. I can only wonder at the ego one must possess to loudly proclaim something, anything, in this insanely fast-advancing industry "impossible".