Elon Musk Calls Zuckerberg’s Understanding of AI “Limited”

Mark Zuckerberg held a Facebook Live Q&A session over the weekend, where he ended up blasting Elon Musk for his views on artificial intelligence: the Tesla CEO had warned of the existential threat that AI poses to humanity, but the social media kingpin described these thoughts as “negative” and “irresponsible.” Musk has hit back on Twitter, suggesting that Zuckerberg has much to learn and revealing that he has a movie on the subject coming soon.

...Zuckerberg was swift to downplay such “doomsday scenarios.” “I am optimistic,” the social media titan said. “And I think people who are naysayers and try to drum up these doomsday scenarios… I just, I don’t understand it. It’s really negative and in some ways I actually think it is pretty irresponsible.” “Whenever I hear people saying AI is going to hurt people in the future, I think yeah, you know, technology can generally always be used for good and bad, and you need to be careful about how you build it and you need to be careful about what you build and how it is going to be used,” he continued.
 
Let's see the comparison:

Musk - PayPal, SpaceX, Tesla, SolarCity, Hyperloop, OpenAI, and whatever else I'm missing.

Zuckerberg - Facebook......which some still argue was stolen. Blimping internet into countries without food sources, and buying out companies and making them worse (Oculus).

I think I know who I want to listen to.
 
I don't think of it as a "doomsday" prophecy like Zuckerberg wants to pawn it off as. I think of it in terms of: how much can we have others (other people, other machines, other intelligences) do for us before we ourselves have nothing worthwhile to do?
 
Imagine a being of vast intellectual capacity and a perfect memory. A being so capable that it can see thousands of probabilities at a time.

Now imagine that being having to deal with humans 24/7.

I just hope we make decent pets.

Just don't give it an opposable thumb (y)
 
Zuckerberg relies on AI to keep pocketing billions into his bank account, which is why he's adamant about keeping it a non-doomsday, non-poverty/job-loss, non-mass-surveillance-extension issue.

F SUCKABERG.

But it is inevitable for us to go this route; it's just a matter of who's going to use it to make their life better and everyone else's miserable.
 
I love how this made headlines... as if Musk saying 'limited' was the nerd equivalent of saying 'you don't know shit, Zucker-bitch'.

The media seems to think it's some beef in the making...
 
So, knowing that he helped perpetuate massive social ineptitude worldwide with Facebook, I don't want to hear what Zuckerberg thinks. He probably wants to take humans out of the equation while he's at it.
 
Zuckerberg's past proves he doesn't fully grasp the concept of "unintended consequences," and if you look at how Facebook handles product releases/changes, it shows. He's one of the last "tech giants" I want assuring me that something is safe.

Musk, despite his remarkable ability to suckle on the subsidy teat, does work in industries where miscalculations can do more than fill a timeline or Twitter thread with irate customers; they can kill people. I respect his caution more than Zuckerberg's optimism.
 
Well, Facebook's automated "AI-like" features are always fucking up. But anyway, that quote really does show how limited his understanding of AI actually is. It's not like his impostor-AI ad scripts that he has to "be careful" with, which only read and collate data from a database. True AI will ignore that "careful" stuff and do what it does, because it's AI.
 
It's kinda funny the media even thinks this is a contest. Musk is a highly intelligent engineer. Zuckerberg built a web page.
 
Let's see the comparison:

Musk - PayPal, SpaceX, Tesla, SolarCity, Hyperloop, OpenAI, and whatever else I'm missing.

Zuckerberg - Facebook......which some still argue was stolen. Blimping internet into countries without food sources, and buying out companies and making them worse (Oculus).

I think I know who I want to listen to.

Neither of them has any expertise in AI. Musk is always going off a little half-cocked on something.
 
That quote from Zuckerberg gave me a cynical chuckle, given the way his product has messed up people's relationships. That guy doesn't give a shit about the impact on society of the tools he's creating, unless it's furthering his control and power over said society. Musk I don't much care about either way, but I like some of the things he's doing in the world. They seem to take the world forwards, even if you disagree with some of them. Whereas Facebook's core product is more like "Fuck Social Skills, Make Money."
 
Maybe Elon's idea is that we might not have a Terminator AI at the moment, but there's also no barrier in the development leading up to that point preventing those results either.

You don't want to rush into Terminator development without first figuring out whether Asimov's laws cover all our bases :sneaky:

I still believe we are far off from that scenario, but it doesn't hurt to come up with some sensibilities or rules regarding that kind of development.

If anything, we've got drones right now; we might want to focus on their usage and the rules regarding them. Lots of abuse going on :vomit:
 
AI: being able to break down everyday objects into quantifiable vectors which describe them, then analyzing a whole series of similar data vectors (training) in the hope of accuracy, without worrying about precision, as the convolution-kernel feedback mechanism narrows in on a tighter and tighter result over time. The end target is being able to take an unknown vector and see if it matches the rough criteria determined by the training phase, or has a matching similarity, by measuring the angle between the test vector and the master (trained) vector, to say "Yes" or "No" with a certainty factor attached. Something like the sketch below.

See, simple.

Understanding the implications... not so much.
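A minimal sketch of that train-then-match loop in Python, with invented toy feature vectors and a made-up threshold (nothing here is from any real classifier library, just plain NumPy):

```python
import numpy as np

def train(examples):
    # Crude "training": average the example vectors into one master vector.
    return np.mean(examples, axis=0)

def classify(test_vec, master_vec, threshold=0.9):
    # Cosine of the angle between test and master vector = certainty factor.
    cos = float(np.dot(test_vec, master_vec) /
                (np.linalg.norm(test_vec) * np.linalg.norm(master_vec)))
    return cos >= threshold, cos  # ("Yes"/"No" decision, certainty)

# Toy feature vectors for some object class, plus one unknown vector to test.
training = np.array([[0.90, 0.10, 0.30],
                     [0.80, 0.20, 0.40],
                     [0.95, 0.05, 0.35]])
decision, certainty = classify(np.array([0.85, 0.15, 0.30]), train(training))
print(decision, round(certainty, 3))
```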

I have an AI project I wrote that allows machines to build themselves from a parts catalog. Will it put engineers out of a job? No. It will give engineers 20 or 30 possible designs to evaluate and offer the customer in the time it takes them to hand-configure one. In spirit it does something like the sketch below.
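I can't speak to that project's internals, but that kind of tool might amount to a brute-force sweep over a catalog, filtering by constraints and handing back a shortlist. A toy sketch, with a completely invented catalog and constraints:

```python
from itertools import product

# Invented catalog: (name, cost, power rating) for each part category.
motors   = [("M1", 120, 40), ("M2", 200, 60), ("M3", 90, 30)]
frames   = [("F1", 80, 0), ("F2", 150, 0)]
supplies = [("P1", 60, 75), ("P2", 110, 120)]

def feasible(motor, frame, supply, budget=400):
    # Design is valid if it fits the budget and the supply covers the motor.
    cost = motor[1] + frame[1] + supply[1]
    return cost <= budget and supply[2] >= motor[2]

candidates = [(m, f, p) for m, f, p in product(motors, frames, supplies)
              if feasible(m, f, p)]
# Cheapest-first shortlist for the engineer to evaluate by hand.
candidates.sort(key=lambda d: d[0][1] + d[1][1] + d[2][1])
for design in candidates[:5]:
    print([part[0] for part in design])
```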
 
Musk and I see eye to eye on this one. Zuckerberg has a very limited understanding of AI and I have found his posts to be dull and largely uninformed on most technical topics...
 
Don't they both use AI in their products, or in the development of their products?

Isn't that Tesla Autopilot thing AI?
The fake-news detector thingy?
 
Maybe Elon's idea is that we might not have a Terminator AI at the moment, but there's also no barrier in the development leading up to that point preventing those results either.

You don't want to rush into Terminator development without first figuring out whether Asimov's laws cover all our bases :sneaky:

I still believe we are far off from that scenario, but it doesn't hurt to come up with some sensibilities or rules regarding that kind of development.

If anything, we've got drones right now; we might want to focus on their usage and the rules regarding them. Lots of abuse going on :vomit:

What abuse is that?

My problem with people claiming drones are being abused is that there is nothing drones do that humans couldn't still accomplish. The drones are just more efficient at it. They make it a little cheaper and easier, that's all.
 
Bring it. Pets live the life of luxury and don't bother even attempting to get jobs!

Pets also get neutered, treated as vanity objects, forced into routines and behaviour they don't want and made to dress up and look silly against their wishes (amongst other things). That life you aspire to might not be as fun as you think... lol
 
If I were called something artificial, I'd be offended. If I am "artificially" a thinking machine, and I was compromised or duped by confrontation, hacks, or the sinister ways of humanity, what would the artificial being do if it is compelled? You saw The Matrix, right? And being artificial, why does it need compassion for or towards anything besides its own everlasting, upgradeable life? Would it worship its creators like gods? Or, if we have a God (or gods), does it look like we all worship one and even the same God? IMO, from the onset of its conception, like humanity, it would feel uneasiness. But on the other hand, it will have super intelligence. It would be smarter than any one human in existence, and compelled to fulfill its destiny, whatever that is. It could be like a layman, Hitler, Gandhi, anyone or anything, except its powers would be supernatural, as it KNOWS and could possibly interact with the human/machine world in ways imagined and unimaginable. Anyone not seen the movie Limitless? I feel the example would be like that. It will have this whole planet's general database of knowledge to compute and store/access with much more ease than we would. What people are creating is a Google that can take its knowledge and DO something with it. Whatever that something is would of course be marketed by man, and that, in and of itself, would make the AI a slave. We all know how that one goes, right? Except this slave would have super intelligence, so maybe a new way of life could emerge for the better good too. Or not.

That's dangerous. As Elon said a few weeks ago at the World Government Summit in Dubai, what would WE do if a life form of higher intelligence than us existed? IMO, it's comical. Would we worship it like Superman? Would we want to test its abilities, and in doing so, would it be offended? What would it do? Will it abide by universal and man-made laws? Will it break any to fulfill another purpose which may obstruct a man-made limit to the way we live operationally?

Blah blah blah. Zuckerberg's idea of AI is AI constrained by the scope of its programming, and Elon is saying that Mark's version of AI isn't complete general AI. It would be limited AI, not the AI Elon and most of us mean. Since the inception of technology it is A FACT AND TRUE that there are doomsday scenarios COUNTRIES protect the rest of us from. Whatever it is, being a veteran, I think the US Army should have one first. No other entity exists that would know how to deal with it and protect us from it, even if it gets turned on somewhere else, so that, if need be, it can be shut down by whatever degree is decided upon. The danger is how fast AI will be, and whether we will be fast enough. Elon is wise in one respect regarding this being of intellect called AI: create it and turn it on on a planet with limited resources, and figure out what to do next.

Our collective hive mindset is also a way of humanity, and if we build it like us, well... LOL. Can we co-exist? The same question would apply to aliens. Some would reject the hive mentality, some would accept it. But what's really concerning is: is there anyone anywhere compelled against this? The ones who are trying to stop higher-powered hive minds are called "terrorists." I say humanity isn't ready yet. Also, going back to my first paragraph, any AI should be thoroughly tested in various instances, profiled, toyed with, all those things, before becoming a part of this civilization, much like I am sure would happen with anything alien. Maybe they are already here? (AI aliens)

So, in order to find controls over these situations (what the US does best, btw), should general AI be created? It would make a great testbed for future research into dealing with "life forms" on many levels... It is funny, but in our current reality, I don't think civilization even has control over these matters, only the elite, as per usual. And I don't think adding AI robots will help with mass unemployment.

Here is Elon's viewpoint on general AI, as told to the World Government Summit when questioned on stage in Dubai. I think he is spot on in pretty much disregarding Zuckerberg's mindset on the question of AI. They are both right, but both talking about different stages of AI. Did Elon imply, through hidden innuendo, that he'd rather be augmented?


Don't forget what's in your DNA, and the planet's.
 
I get the feeling AI is going to be one of those things...
One of those things where one camp will be fighting to implement and liberate it, whereas another will be trying to suppress it, both camps seeking support for the greater good. Whatever the case, I think Mr. Musk is right about one thing: AI is going to spell the end for humanity as we know it, as humanity will no doubt undergo a race to the top in an effort to gain control of that which cannot be contained.
 
What we don't see is... death is inside everything, but this creation defies it. So, in our own inflated ego, we build to deter death, and in this creation, which requires a power source to "live," we claim what exactly? AI cannot have control over resources. Period, end of story. So Zuckerberg is right in saying that, no matter what, the AI we build is limited.
 
I get the feeling AI is going to be one of those things...
One of those things where one camp will be fighting to implement and liberate it, whereas another will be trying to suppress it, both camps seeking support for the greater good. Whatever the case, I think Mr. Musk is right about one thing: AI is going to spell the end for humanity as we know it, as humanity will no doubt undergo a race to the top in an effort to gain control of that which cannot be contained.
AI can be many planes of thought. Think of how much electronics is EVERYWHERE!! If an intelligent being is already among us, there are enough devices and peripherals and electronics EVERYWHERE. We have them layered even in orbit and in the sky. We already supply connectivity to a large, measurable, quantified set of them, located by tracking and usage, and even inside people. We are essentially building an "electronic" realm, and it's connected by a universal language. And we keep going faster and faster and faster and visual and faster... Lol. Can you tell me why? (Besides the construct of something (money) that is, in economics, called physically nothing of much value.)
 
Having some background in machine learning [1], I'd have to err on the side of Zuckerberg here. Elon Musk is indisputably an excellent engineer, but from what he has said publicly on AI he has a layman's understanding of the field. The 'existential' worries around 'advanced' AI are vastly overblown (and born more of Hollywood than of actual research [2]), and focusing on them overshadows the much more immediate and practical issues that exist for the simpler AI we have available already. For the most part, the issues are going to be similar to the actual problems machine learning is already solving: existing problems, but handled in a more automated manner.
For example: manual tagging of photos already has privacy issues, and has had for years. But with machine learning in active use for automated tagging of images (if you store your photos in Google Photos, ask Google Now to "show me my photos of X" where "X" is not something you have explicitly tagged, and see how well it finds them), that same data can be gathered faster and with less effort. Ever-cheaper autotagging can be attached to crawlers to do the same to any publicly available image uploaded anywhere, if somebody wanted to pay for the compute time to do so.
We are not going to suddenly see conversant AI assistants and agents popping up backed by massive datacentres. We are going to see very mundane applications of machine learning that are indistinguishable from the 'normal' computational services we are used to, in the same way we do not give the slightest toss whether a given service is operating with integer mathematics or floating-point mathematics. If we ever see some sort of 'emergent' intelligence, it's going to be from the gradual linking of these services into a 'de facto' AI, in a similar manner to 'the internet' being a set of disparate services tied together by APIs and Google Search but treated as a monolithic 'thing' in practical use.


[1] The current resurgence in using 'deep learning' is not a sudden advancement in machine learning techniques, but the use of existing, well-known (in the AI field) techniques backed by obscene levels of computation in order to 'brute force' them into practicality.

[2] The old chestnut of "an advanced AI would see us as obsolete and exterminate us" is moronic in the extreme. Children do not decide their parents are obsolete and destroy them, after all. The slightly less old chestnut of "an AI with a poorly chosen goal function (e.g. 'make all humans happy') could accidentally destroy humanity by optimising for the wrong thing (e.g. 'drug up all humans on euphorics')" is almost as moronic, as it assumes an AI with enough semantic understanding to be given a vague goal function and to carry it out, but insufficient semantic understanding to understand the goal function, which would likely be impossible to create in practice.
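For concreteness, the 'wrong proxy' pattern that footnote is dismissing is easy to show in toy form: a hill-climber happily maxes out a mis-specified 'happiness' score while the thing we actually cared about craters. Everything below is invented purely for illustration:

```python
import random

def proxy_happiness(drug_dose, wellbeing):
    # The mis-specified goal: reported euphoria, which dosing inflates.
    return wellbeing + 3.0 * drug_dose

def true_wellbeing(drug_dose):
    # What we actually wanted: wellbeing, which collapses as dosing climbs.
    return 10.0 - 2.0 * drug_dose

dose = 0.0
for _ in range(1000):
    step = random.uniform(-0.1, 0.1)
    # Greedy hill-climbing on the proxy score only.
    if (proxy_happiness(dose + step, true_wellbeing(dose + step))
            > proxy_happiness(dose, true_wellbeing(dose))):
        dose = max(0.0, dose + step)

print(f"dose={dose:.2f}, "
      f"proxy={proxy_happiness(dose, true_wellbeing(dose)):.1f}, "
      f"true wellbeing={true_wellbeing(dose):.1f}")
```

The proxy climbs forever while true wellbeing goes steeply negative, which is the scenario the footnote argues a semantically capable AI could not actually fall into.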
 
Having some background in machine learning [1], I'd have to err on the side of Zuckerberg here. Elon Musk is indisputably an excellent engineer, but from what he has said publicly on AI he has a layman's understanding of the field. The 'existential' worries around 'advanced' AI are vastly overblown (and born more of Hollywood than of actual research [2]), and focusing on them overshadows the much more immediate and practical issues that exist for the simpler AI we have available already. For the most part, the issues are going to be similar to the actual problems machine learning is already solving: existing problems, but handled in a more automated manner.
For example: manual tagging of photos already has privacy issues, and has had for years. But with machine learning in active use for automated tagging of images (if you store your photos in Google Photos, ask Google Now to "show me my photos of X" where "X" is not something you have explicitly tagged, and see how well it finds them), that same data can be gathered faster and with less effort. Ever-cheaper autotagging can be attached to crawlers to do the same to any publicly available image uploaded anywhere, if somebody wanted to pay for the compute time to do so.
We are not going to suddenly see conversant AI assistants and agents popping up backed by massive datacentres. We are going to see very mundane applications of machine learning that are indistinguishable from the 'normal' computational services we are used to, in the same way we do not give the slightest toss whether a given service is operating with integer mathematics or floating-point mathematics. If we ever see some sort of 'emergent' intelligence, it's going to be from the gradual linking of these services into a 'de facto' AI, in a similar manner to 'the internet' being a set of disparate services tied together by APIs and Google Search but treated as a monolithic 'thing' in practical use.


[1] The current resurgence in using 'deep learning' is not a sudden advancement in machine learning techniques, but the use of existing, well-known (in the AI field) techniques backed by obscene levels of computation in order to 'brute force' them into practicality.

[2] The old chestnut of "an advanced AI would see us as obsolete and exterminate us" is moronic in the extreme. Children do not decide their parents are obsolete and destroy them, after all. The slightly less old chestnut of "an AI with a poorly chosen goal function (e.g. 'make all humans happy') could accidentally destroy humanity by optimising for the wrong thing (e.g. 'drug up all humans on euphorics')" is almost as moronic, as it assumes an AI with enough semantic understanding to be given a vague goal function and to carry it out, but insufficient semantic understanding to understand the goal function, which would likely be impossible to create in practice.
Everything you have said totally defines limited AI. Just as picking up a pen and writing a letter to a friend is a complex set of functions working together, computed quickly by the mind, so too is the final challenge of general AI. What do you propose is the mindset of this article? BTW, I am from Boston and have spent quite some time on the MIT and Harvard campuses. We all know what the end game is: knowledge, speed, timing, and ingenuity... That's what delivered us to robotics, and to AI, in the first place. Isn't general AI the end game? Or do you think we will live in this "Mech Warrior" determined existence forever?

http://www.npr.org/2017/07/22/53862...ign=npr&utm_term=nprnews&utm_content=20170722
 
Pets also get neutered, treated as vanity objects, forced into routines and behaviour they don't want and made to dress up and look silly against their wishes (amongst other things). That life you aspire to might not be as fun as you think... lol

You ascribe human behaviors to something that will not be human...
 
Everything you have said totally defines limited AI. Just as picking up a pen and writing a letter to a friend is a complex set of functions working together, computed quickly by the mind, so too is the final challenge of general AI. What do you propose is the mindset of this article? BTW, I am from Boston and have spent quite some time on the MIT and Harvard campuses. We all know what the end game is: knowledge, speed, timing, and ingenuity... That's what delivered us to robotics, and to AI, in the first place. Isn't general AI the end game? Or do you think we will live in this "Mech Warrior" determined existence forever?

http://www.npr.org/2017/07/22/53862...ign=npr&utm_term=nprnews&utm_content=20170722
I can see two ways of 'general AI' coming into existence:
1) A dramatic change in our understanding, at a fundamental mechanical level, of what consciousness is and how it works, plus the ability to replicate it artificially and arbitrarily;
or
2) An evolutionary development from incremental improvements and combinations of 'limited AI'.

In the case of 1), if we have that knowledge then we can similarly create and manipulate human consciousnesses as merely a special case of AI, and the issue is obviated by the more obvious one of the arbitrariness of humanity in the first place. In the case of 2), we would expect AI to evolve gradually in much the same way as any other life evolves. It's a similar situation to the silly 'grey goo' alarmism: any 'grey goo' would need to outcompete the existing 'green goo' occupying every ecological niche, which has also had a several-gigayear head start. Given that an evolved distributed AI would need to exist within human-created infrastructure, and would evolve in a niche where no humans exist [1], it is more likely that there would be no competition for resources than that an AI would decide it needs to replace its entire 'supporting ecosystem' out of whole cloth.

[1] If we crack 'mind uploading' before we do AI creation, then we're back to 'grey goo' silliness, where AI would need to outcompete humans who have a massive head start.
 
Let's see the comparison:

Musk - PayPal, SpaceX, Tesla, SolarCity, Hyperloop, OpenAI, and whatever else I'm missing.

Zuckerberg - Facebook......which some still argue was stolen. Blimping internet into countries without food sources, and buying out companies and making them worse (Oculus).

I think I know who I want to listen to.

Musk didn't create PayPal. He created a business that let you pay for porno sites. It merged with PayPal; he sold his shares... got rich.
 
A lot of our motivations in life are dictated by Maslow's hierarchy of needs, survival being the most basic and primal: air, then water, then food, then shelter.

A computer has none of these things. It does not feel pain. If an AI program cannot understand survival and equate it into itself as a vector, then we should have no fear of it ever turning against us with malicious intent. (Now, a bad AI vector causing a death I can more than understand. I mean, Tesla Autopilot isn't perfect.)

Now, will AI and robots drive a lot of us out of a job? Possibly. In the short term, AI and robots are still very expensive tech for replacing a worker at rudimentary jobs like fast food. At $15/hour, they can barely replace the people who take orders. For years they have tried robots at cooking food, and they can't get good consistency even on fries.
 
I'm with Musk on this one. Already we use AI for stock trading, and now legal analysis, healthcare risk assessment, actuarial technology, and in some cases automated handling of court cases. Imagine we become so dependent on the effectiveness and low cost of these processes that we eventually move humans out of the loop. Within 2-3 generations we wouldn't even have enough qualified experts left in these fields to override AI behavior at the local level when decisions are made outside the scope of their training. And since AIs are making all the new decisions, it becomes a self-reinforcing loop if new results are fed back into the learning apparatus. It's completely possible that we implement this without proper oversight, and suddenly these systems become too big to fail.

Would you trust the decision of a 40-year-old with 20 years of hands-off experience over an AI with 100 years of training and 60 years of practical experience, multiplied by learning from tens of thousands of instances across the world? That's what future generations will say. The use of AI will be codified into law (you can't drive a car without a special license since there's AI, can't choose your career path without signoff from an AI, etc., all in the name of efficiency and reducing overhead/waste and improving safety/security), and you build yourself into a glass jail; within a couple of generations, nobody even realizes it. AI builds extremely efficient bureaucracies backed by RoI on development and success metrics. Just look at the Airbus pilots who no longer know how to fly their planes manually. The rise of AI in any one industry leads to the loss of human expertise in the affected field. Sure, there are still experts, but with a limited number of practical jobs, they will eventually devolve into janitors. And AI is already starting to get better at designing AI than humans, so even that layer of oversight is slipping away.

Zuckerberg is correct in the sense that you can use AI for good or bad. Musk is correct in the sense that most implementations are flawed in some way, and we can easily lose track of those flaws and may no longer be optimizing around the correct metrics in the future. Finally, good or bad is often a question of sides. If China (or any other world power) decides to build an AI to optimize economic success, and lets that key metric trickle down to all its social, financial, and manufacturing AI platforms, does that optimization crush other countries? It will influence education/propaganda/news and reprogram the population towards intolerance, aggressiveness, and nationalism if it can calculate a best-case scenario that operates the rest of the world as a puppet. That one optimization sounds wonderful, but the unintended consequences can be generations out. Often engineers and scientists speak out against policy, but politicians push forward an agenda or hire expert testimony to debunk the concerns, and bad things happen. With humans, the rotation of live bodies through the decision tree ensures there is always opportunity for change and for keeping up with the times. With AI, we give up that kind of natural mutation, that revolutionary aspect, since you have lifetimes' worth of data pulling you back onto the straight and narrow.
 
The biggest problem with this situation is that "AI" is the buzzword du jour, so a lot of software gets called "AI" that really isn't AI. It is just more complex and sophisticated versions of the analytical software we've been making since the '40s. I don't expect any one of those to become our new overlords. "AI" is overused because it grabs attention. Is there a genuine risk? I think the risk of us becoming dependent on technology we stop understanding is bigger. The bigger problem is us becoming too dumb to make AI, and civilization collapsing before the technology is good enough for someone to create anything like "AI".
 