Elon Musk Calls Zuckerberg’s Understanding of AI “Limited”

I can see two ways of 'general AI' coming into existence:
1) A dramatic change in our level of understanding at a fundamental mechanical level of what consciousness is and how it works, and the ability to replicate it artificially and arbitrarily
or
2) An evolutionary development from incremental improvements and combinations of 'limited AI'.

In the case of 1), if we have that knowledge, then we can similarly create and manipulate human consciousnesses as merely a special case of AI, and the issue is obviated by the more obvious one of the arbitrariness of humanity in the first place. In the case of 2), we would expect AI to evolve gradually in much the same way as any other life evolves. It's a similar situation to the silly 'grey goo' alarmists: any 'grey goo' would need to outcompete the existing 'green goo' occupying every ecological niche, which has also had a several-gigayear head start. Given that an evolved distributed AI would need to exist within human-created infrastructure, and would evolve in a niche where no humans exist[1], it is more likely that there would be no competition for resources than that an AI would decide it needs to replace its entire 'supporting ecosystem' out of whole cloth.

[1] If we crack 'mind uploading' before we crack AI creation, then we're back to 'grey goo' silliness, where the AI would need to outcompete uploaded humans who have a massive head start.

I think you need to take a trip to Cambridge, Massachusetts. These machines were doing just what you have described back in the early '90s.

https://en.wikipedia.org/wiki/Thinking_Machines_Corporation

https://en.wikipedia.org/wiki/FROSTBURG
The system had a total of 500 billion 32-bit words (≈2 terabytes) of storage, 2.5 billion words (≈10 gigabytes) of memory, and could perform at a theoretical maximum of 65.5 gigaFLOPS. The operating system, CMost, was based on Unix but optimized for parallel processing.

And you can't forget about good ol' Watson. https://en.wikipedia.org/wiki/Watson_(computer)

The point is, I don't think becoming "self-aware" will ever happen unless Musk pushes his research in that direction, and no one has been quite as resourceful as he has lately.
 
You ascribe human behaviors to something that will not be human...

Errr - no, I didn't actually. You seem to have taken my comments literally rather than as an analogy.
My comments weren't meant to say an AI would treat a human pet in the exact same manner - rather to point out that there are potential negative aspects to something that may sound positive on the surface.
...apologies if it came off unclear :)

I do think in the context of the conversation, there is a distinction between the AI that is being referred to in the original article and truly sentient, conscious AI.
 
We're back to square one. If humanity dies by AI it's not because AI is evil. It's because humans have used the technology irresponsibly.

I think if anything we should accelerate AI research and create benevolent AIs that can fight rogue and malicious AI programs set free on the net. The only thing capable of countering an AI is an even better AI.

Banning or restricting AI research could only leave us exposed to intentionally malicious AI that no regular antivirus software can take on.
 
I think of it like nuclear research or bioweapon research. On the one hand, we need to keep up with the Joneses and learn what we can from it. Apply good security practices and that's fine, but to say there is no significant danger, or no need for very strict protocols and oversight, is a gross misstatement of the risks.

When I read about AI, I see the use of machine learning to solve problems currently defined as relevant, and the tendency to leave such systems on autopilot well past the point of relevance, leaving people ill-equipped to solve future problems. We see weird versions of this happening today, such as the inability to upgrade and replace the switching systems in old metro trains: the old automation worked so well that we lost the skills necessary to fix and improve it after a few generations, and instead have to redesign it from first principles.

Machine learning has the side effect that many of its solutions are foreign and novel, but when they work there is a strong incentive to adopt them on the strength of their results without understanding the method, along with whatever bias is inherent in the data fed into the system during training.
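The training-data bias mentioned above can be shown with a deliberately extreme toy sketch (the data, labels, and function names here are all hypothetical, invented purely for illustration): a trivial "learner" that fits a skewed dataset ends up encoding the skew itself, and its outputs look confident even though it never understood the features at all.

```python
from collections import Counter

def train_majority_classifier(labeled_examples):
    """Toy 'learner': memorize the most frequent label in the
    training set and predict it for every input, ignoring features."""
    counts = Counter(label for _, label in labeled_examples)
    majority_label, _ = counts.most_common(1)[0]
    return lambda features: majority_label

# Hypothetical training data, skewed 9:1 toward "approve"
training = [({"score": s}, "approve") for s in range(9)]
training.append(({"score": 0}, "deny"))

model = train_majority_classifier(training)

# The model now answers "approve" for anything, including inputs
# nothing like the training data - the bias came in with the data.
print(model({"score": -100}))  # approve
```

Real models are far more sophisticated, but the failure mode is the same in kind: whatever imbalance is in the training data is what the system learns to reproduce.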
 
I think you need to take a trip to Cambridge, Massachusetts. These machines were doing just what you have described back in the early '90s.
The Connection Machines devices had a novel system architecture, but were not magically AIs because of it, general or otherwise. There is a lot more to it than mere architecture.
 
Way, WAY back in the 1960s, when I was working on a computer science degree, we studied something called "heuristics". This is a term that has largely been lost in computer science. AI is a relatively newer term, and means that a machine can do only certain tasks - play chess, run the computers in airplanes, maybe guide a "self"-guided missile, etc. If we had a machine that was truly heuristic, then that might be the term that most of you posters are looking for. Being heuristic and being artificially intelligent aren't even close in meaning. Being heuristic would mean taking many, MANY AIs, melding them together, and starting to learn (by itself). IMHO, that won't be happening for quite a while.
I would propose that neither Musk nor Zuckerberg totally understands even what they are talking about! If it's just the term "AI", then have them (or anybody) correctly define just what AI is. They cannot do it. To stop their buffoonery, I would simply ask them, "How does AI differ from heuristics?"
Let me give you a simple example - let's take your index finger. It's a "simple" form of AI, isn't it? It is made up of four bones - three that you can readily see, and a fourth that is embedded in your palm. It is supplied blood from veins and arteries for the life that it lives. Without that, your index finger is a goner. Around the bones are muscles that work with the help of tendons.

You can use your finger to point, type, and curl up (with the other fingers) to form a fist. You can turn your palm over and curl up your index finger to make a "come here" motion. And wait: there's a [H]ard thingy on the end of your finger - some of us may call that a fingernail. You can use your index finger with your thumb to open bottles and jars. A beer can, perhaps.

All of those things put together are just little AIs. They are all controlled by our brain - which contains all of the heuristics of learning how to use the index finger: what it can and cannot do. So, heuristics is the lifelong learning that your brain put together just in the use of your index finger. How many different AIs are in the human body? More than we can count. How many more things can our brain perform than that "simple" computer in Cambridge? More than we can count.
So, you've all learned a new computer science term - one that's been hidden for over 50 years. Go home and have a beer! Watch your index finger for a while!!!
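For readers who haven't met the term before, here is a minimal sketch of a heuristic in the classical computer-science sense the post is describing: a rule of thumb that finds a good-enough answer fast, with no guarantee of optimality. The greedy nearest-neighbor routing below is a standard textbook example; the function name and sample points are my own invention, not anything from the thread.

```python
import math

def nearest_neighbor_route(points):
    """Greedy heuristic for ordering a route: always visit the closest
    unvisited point next. Fast and usually decent, but not guaranteed
    optimal - which is exactly what separates a heuristic from an
    exact algorithm."""
    route = [points[0]]
    remaining = list(points[1:])
    while remaining:
        last = route[-1]
        nxt = min(remaining, key=lambda p: math.dist(last, p))
        remaining.remove(nxt)
        route.append(nxt)
    return route

print(nearest_neighbor_route([(0, 0), (5, 5), (1, 0), (6, 5)]))
# [(0, 0), (1, 0), (5, 5), (6, 5)]
```

On contrived inputs this rule can produce a noticeably longer route than the true optimum, but it runs in a fraction of the time an exhaustive search would take - the classic heuristic trade-off.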
 
Neither of them has any expertise in AI. Musk is always going off a little half-cocked on something.

The self driving car has to have some form of artificial intelligence.
The rocket that lands itself does so autonomously right? Gotta have some sort of Intelligence there, too.
 
The Connection Machines devices had a novel system architecture, but were not magically AIs because of it, general or otherwise. There is a lot more to it than mere architecture.
They had quite elegant, proprietary learning software, and were considered quite capable and fast at the time. 10 GB of memory... lol, this was in the '90s.
 