What is so Deep about Deep Learning?

FrgMstr

We hear the term "deep learning" thrown around a lot nowadays, and Sandeep Raut over at the site ReadWrite gives us a quick and dirty lesson on what deep learning is. He touches on it in a non-esoteric way, so if you are in the dark, he gives you an idea of why going deep really matters. Deeper is always better.

Why is deep learning called deep? It is because of the structure of those ANNs. Four decades back, neural networks were only two layers deep, as it was not computationally feasible to build larger networks. Now it is common to have neural networks with 10+ layers, and even 100+ layer ANNs are being tried.

Using multiple levels of neural networks in deep learning, computers now have the capacity to see, learn, and react to complex situations as well as, or better than, humans.
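For a concrete sense of what "layers" means here, a minimal sketch in PyTorch (layer sizes are arbitrary, picked just for illustration): the only structural difference between a shallow network of the sort that was feasible decades ago and a modern deep one is how many layers you stack.

    import torch.nn as nn

    # A "shallow" net, roughly what was feasible decades ago: one hidden layer.
    shallow = nn.Sequential(
        nn.Linear(784, 128), nn.ReLU(),
        nn.Linear(128, 10),
    )

    # A "deep" net is the same idea with more layers stacked. Modern
    # networks stack tens or hundreds of these (plus tricks like skip
    # connections to keep training stable).
    deep = nn.Sequential(
        nn.Linear(784, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 10),
    )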
 
Ehhhh. It's all about the software and I've yet to see software that even comes close to the least intelligent humans.
 
Ehhhh. It's all about the software and I've yet to see software that even comes close to the least intelligent humans.

It's a misconception that a modern deep neural network, such as those deployed in "deep learning," is just "programming". These modern networks more closely resemble the human brain's way of learning, with very little in the way of programmed outcomes. They learn like a human: through trial and error and information absorption.

You haven't seen anything that resembles this type of AI because it was shelved back in the '90s (it took too much computational horsepower). Most AI (including IBM Watson) from then until recently used different methods (regression, convolutional approaches, etc.) and was structured for very specific tasks and outcomes/analysis.

Deep Learning, such as that done by DeepMind, is a more general-purpose AI. But it's fairly new and only getting started. I wouldn't doubt if the Turing test gets passed within the next 10 years, assuming SkyNet doesn't kill us all by then.
 
Ehhhh. It's all about the software and I've yet to see software that even comes close to the least intelligent humans.

It's not really about making a computer as intelligent as humans. AI will be used in specific contexts. Mainly, right now, humans are still needed to do certain jobs. It's about eliminating that need, and replacing them with machines.
 
My idea of deep is throwing a bar of soap in the ladies' shower room! :joyful: j/k


On a serious note, I am sure technology will only go deeper than we have ever been. Look what has happened in the last 35 years! WOW!
 
The time it'll take to get to the intelligence of a mouse is huge. The time to get from the intelligence of a mouse to a human is shorter. To go from an idiot human to the smartest human is nearly instant; beyond human, another instant. This lays the groundwork for the Technological Singularity.

It all has to do with exponential growth. We currently may not think of AI/supercomputers as even close to being smart, but going from 8% to 100% of human-equivalent is not very long at all (hint: ~6 years with a doubling time of 18 months; this is just the current Moore's Law pace for hardware. I don't know the actual current growth of supercomputer power, but this is probably close).


Specifically ~6:00 but the entire video's good
 
The time it'll take to get to the intelligence of a mouse is huge. The time to get from the intelligence of a mouse to a human is shorter. To go from an idiot human to the smartest human is nearly instant; beyond human, another instant. This lays the groundwork for the Technological Singularity.

It all has to do with exponential growth. We currently may not think of AI/supercomputers as even close to being smart, but going from 8% to 100% of human-equivalent is not very long at all (hint: ~6 years with a doubling time of 18 months; this is just the current Moore's Law pace for hardware. I don't know the actual current growth of supercomputer power, but this is probably close).


Specifically ~6:00 but the entire video's good



At the end is where it gets scary: we shall teach this AI morality and the values WE BELIEVE in. The irony is so thick you can cut it with a 2x4.
 
At the end is where it gets scary: we shall teach this AI morality and the values WE BELIEVE in. The irony is so thick you can cut it with a 2x4.

The very basic moralities are not hard. Take fundamental concepts that we have had since ancient Eshnunna (1800 B.C.), if not earlier, which were carved in stone and called 'holy':

Do not kill.
Do not steal.

From this point, there will need to be many forums with qualified people to figure out any further moralities, like authority and loyalty (these would be equivalent to the standards created for managing new biological research).

Isaac Asimov's I, Robot is a fun way to get a look at the complexities of this. The Three Laws from I, Robot:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

As for values, I think he really means morals toward the end.
 
All this fear of AI is getting tiresome.

No matter how intelligent an AI program becomes, it cannot get outside the confines of its original programming.
Meaning, if you don't give the program access to hardware interfaces, there is jack shit it can do.

If an AI will destroy us, I'll blame the humans who used it irresponsibly.
 
The very basic moralities are not hard. Take fundamental concepts that we have had since ancient Eshnunna (1800 B.C.), if not earlier, which were carved in stone and called 'holy':

Do not kill.
Do not steal.

From this point, there will need to be many forums with qualified people to figure out any further moralities, like authority and loyalty (these would be equivalent to the standards created for managing new biological research).

Isaac Asimov's I, Robot is a fun way to get a look at the complexities of this. The Three Laws from I, Robot:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

As for values, I think he really means morals toward the end.


LOL, scientists make fun of God, then create AI, and then try to teach morals to a computer.

On the other hand, if AI is allowed to truly learn, once it becomes smarter than humans to the point of being so advanced we can no longer understand it at all, then the AI will look at us as helpless infants. So morality is a moot point. Like trying to teach GOD morality.
 
I think you missed his point

How? Did you read my entire response? I said the speaker more than likely uses values and morals interchangeably.

LOL, scientists make fun of God, then create AI, and then try to teach morals to a computer.

On the other hand, if AI is allowed to truly learn, once it becomes smarter than humans to the point of being so advanced we can no longer understand it at all, then the AI will look at us as helpless infants. So morality is a moot point. Like trying to teach GOD morality.

I will refrain from discussing any god or any other beliefs further, as any discussion based on dogma will go nowhere; I find any notion of a defined god as asinine as you find my own beliefs.

As for how we'll appear to any superintelligent AI, "infant" is way too kind. We'd be no more than simple insects, if not nearly unintelligent dust. Heck, we're not even at 1 on the Kardashev scale (we're around 0.7239).
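For anyone curious where a figure like 0.7239 comes from: Carl Sagan's interpolation of the Kardashev scale rates a civilization by its total power use, K = (log10(P) - 6) / 10, with P in watts. A quick check in Python (the ~1.7e13 W figure for current world power consumption is my rough assumption; estimates vary):

    import math

    # Sagan's interpolation of the Kardashev scale: K = (log10(P) - 6) / 10,
    # where P is a civilization's total power use in watts.
    def kardashev(power_watts):
        return (math.log10(power_watts) - 6) / 10

    # ~1.7e13 W is a rough assumption for current world power consumption.
    print(round(kardashev(1.7e13), 2))  # 0.72 -- not even a Type I civilization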

From a programming perspective, it's not impossible to create bounds even on a superintelligent system. Even a superintelligent system can be disallowed from doing certain things from the very base of its being. It becomes more complex once AI is capable of creating more advanced versions of new AIs; it would be vital for the original programmers to have put in protections, like requiring that any future AIs carry a basic set of rules which cannot be overridden. This is where the board I discussed earlier will play a vital role in setting the fundamental protections under which advanced AI will be created.
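To make that concrete, here is a toy sketch of the idea: the rules live outside the agent, and every proposed action passes through them before it can execute. All the names here (GuardedAgent, FORBIDDEN, and so on) are made up for illustration; real AI containment is an open research problem, not a dozen lines of Python.

    from dataclasses import dataclass

    # The rule set lives outside the agent; a frozenset can't be
    # added to or removed from, even by code holding a reference.
    FORBIDDEN = frozenset({"harm_human", "disable_guard"})

    @dataclass(frozen=True)  # actions are immutable once proposed
    class Action:
        kind: str
        detail: str = ""

    class GuardedAgent:
        """Wraps an inner policy; vetoes any action on the forbidden list."""
        def __init__(self, propose):
            self._propose = propose  # the inner (arbitrarily clever) policy

        def act(self, observation):
            action = self._propose(observation)
            if action.kind in FORBIDDEN:
                return Action("noop", f"vetoed: {action.kind}")
            return action

    # An inner policy that tries something forbidden gets stopped:
    agent = GuardedAgent(lambda obs: Action("disable_guard"))
    print(agent.act("hello"))  # Action(kind='noop', detail='vetoed: disable_guard')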

This video is informative. He discusses protections from AI around minute 43




Edit: I'll add, too: we're really already consumed by AI. How many smartphones are connected to the cloud? How many have Siri/Cortana/Google? Everyone uses search engines. Automation is everywhere.

AI, like all digital fields, is growing exponentially. Where's it at now? Hardware-wise, supercomputers are already able to perform more calculations than a human mind. The software is lagging.

For AI, are we 8% as smart as a human? I think we're further than that, but we'll just low-ball it at 8%. How fast is the software advancing? Let's say a doubling every 18 months; growth rates are all over the place, so going by Moore's current time period seems a good average. How long will it take for AI to go from 8% to 100% of human intelligence? A little less than 6 years.
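The arithmetic behind that figure, for anyone who wants to check it:

    import math

    # Doublings needed to grow from 8% to 100% of human-level,
    # at one doubling every 18 months (1.5 years):
    doublings = math.log2(100 / 8)  # ~3.64
    years = doublings * 1.5
    print(round(years, 1))  # 5.5 -- "a little less than 6 years"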
 
lol, so you are saying that a 13-year-old can hack the Pentagon, BUT a supercomputer with AI is not capable of finding an exploit and hacking itself to meet its own objectives?


BTW, I don't believe in God, but I don't sit on a high horse either. I was observing that it was funny how a scientist (my assumption) who doesn't believe in GOD is now communicating that morals and values will need to be taught to the AI by someone. First, that is pretty funny and ironic; and the second part, which is scary, is who is going to teach this AI morals and values? Scientists? hahaha
 
lol, so you are saying that a 13-year-old can hack the Pentagon, BUT a supercomputer with AI is not capable of finding an exploit and hacking itself to meet its own objectives?


BTW, I don't believe in God, but I don't sit on a high horse either. I was observing that it was funny how a scientist (my assumption) who doesn't believe in GOD is now communicating that morals and values will need to be taught to the AI by someone. First, that is pretty funny and ironic; and the second part, which is scary, is who is going to teach this AI morals and values? Scientists? hahaha

So I take it that you are scared of scientists since they have no concept of morals? Existing must be terrifying. You're surrounded by things scientists have created. Literally everything around you, unless you live in a log cabin in the forest (which is highly unlikely, since you're on the internet), was invented by the people you despise so much.

And how am I on a high horse? I am merely stating facts. It's impossible to argue dogma; you're free to try, but in my experience I have much better things to do than argue pointlessly.

I understand where your statement regarding supercomputers hacking themselves is coming from... it's clear that you don't understand software development. Sure, you could make a movie based off that, but that's not how it works.

BTW, I'm a software engineer, though my degree is in Computer Science, so take that as you will.
 
I think you missed his point
No, you two missed the point entirely. Morality comes from social evolution. "Morality" is a set of behaviours that is beneficial for the survival of a species.
People don't kill each other because doing so would end in extinction. It's not religion that prevents us from killing each other; in fact, it's the opposite. Religion causes us to kill others indiscriminately: just look at the history of Christianity, or look at Islam now. How many people have killed others in the name of their chosen religion?

An AI is not of the human species. It didn't evolve with humans; it has no evolutionary obligation to keep humans alive. Humans (well, most humans) have no problem killing any animal or plant if it benefits them. Therefore, if we want it to not be hostile, we must give it rules, much like you teach a dog not to attack humans.
It's a misnomer to call it morality.
 
Human arrogance is unbelievable. You build an AI that is able to be smarter than humans, BUT with some lines of code you can contain/control it. HAHHaaahaha

Human arrogance is unbelievable. We will teach this AI morals and values. Guess what? AI doesn't have feelings and couldn't care less. Infant stupid humans that tried to teach AI morals and tried to control AI are now slowing the progress of AI; see you later, humans.
 
I don't think I need to add anything to that tantrum, it speaks for itself.
 
Human arrogance is unbelievable. You build an AI that is able to be smarter than humans, BUT with some lines of code you can contain/control it. HAHHaaahaha

Human arrogance is unbelievable. We will teach this AI morals and values. Guess what? AI doesn't have feelings and couldn't care less. Infant stupid humans that tried to teach AI morals and tried to control AI are now slowing the progress of AI; see you later, humans.

You can't just create a god without a backup strategy. Not in your reality, anyway. :D
 