GTC November 2021 Keynote with NVIDIA CEO Jensen Huang

Comixbooks
Think fast. Technology is changing the world more quickly than ever. And NVIDIA GTC brings together many of the people who are working to accelerate it.

The biggest highlight is NVIDIA CEO Jensen Huang’s agenda-setting keynote on Tuesday, Nov. 9, at 9 a.m. Central European Time.

Huang will describe how the company is advancing AI for a variety of industries. He’ll reveal the latest technology for enterprise and data center AI, conversational AI and natural-language processing, and AI at the edge in everything from robotics to healthcare to autonomous vehicles. He’ll also explore new applications for virtual worlds and how NVIDIA is working with partners to build digital twins of factories, cities and entire regions.

Headline-grabbing speakers include Epic Games founder and CEO Tim Sweeney, OpenAI co-founder and Chief Scientist Ilya Sutskever, Stanford professor and deep learning pioneer Fei-Fei Li, VMware CTO Kit Colbert, Walmart Director of Personalization and Recommendations Kannan Achan, and many more.

There will be sessions focused on 13 different industries, ranging from energy and retail to manufacturing, finance and healthcare.

And the free, virtual conference is a better experience than ever, with simple one-step registration, an integrated high-definition video player, playlists with meaningful content and a more visual experience.

We’ve packed GTC with interactive events. They include training sessions with our Deep Learning Institute, our AI Art Gallery and networking options for a growing array of communities.

Braindates will help attendees find and meet others who share their interests. The intimate one-on-one or group chats connect those interested in a broad range of topics for video calls throughout the event.

And the conference’s moderated Q&A capability lets attendees post live questions to moderators and speakers, giving them more opportunities to interact with industry leaders than ever.

Everything at next week’s GTC — from dazzling demos to hands-on training to insights from industry leaders — is designed to help participants move faster. So get a move on. GTC registration is free and open to all. And there’s no need to register for GTC to watch the keynote.

Mark the date on your calendar: Nov. 9 at 9 a.m. Central European Time, with a rebroadcast at 8 a.m. Pacific Time. Just point your browser to the GTC website to watch. See you there.


 
I am pretty impressed by the numbers posted on their networking hardware there. I mean, that is way, way, way beyond what I could ever think of using, but sweet Jebus that is dope.
 
As somebody who is already using Palo Alto NGN networking hardware and is due for a hardware refresh there in the next 3 years, I look forward to hopefully getting upgraded to NVIDIA BlueField.
 
Great. More of this AI and machine learning nonsense.

I honestly feel that society would be better off if we were to just scrap any effort at AI/Machine learning.

These stupid black box implementations that guess at solutions without being able to explain how they arrived at them are a detriment to society.

Unless they can be programmed to explain their reasoning and how they arrived at a conclusion, they can't be trusted and should all just be scrapped.

Black box solutions are never OK for any purpose. You absolutely have to understand and be able to explain the logic.

The world would be a better place if they just quit it!
 
Great. More of this AI and machine learning nonsense. ...
The AI stuff I’ve dealt with has been used to generate the algorithms; the algorithms are then implemented by the developers, so it's not much more of a black box than any of the other big software sets.
 
Great. More of this AI and machine learning nonsense. ...
As far as I'm aware, most AI algorithms are still simple decision trees; they have just gotten much more complex over time. That is why INT8 speed is a big thing now in GPGPU. The secret sauce probably has to do with the data crunching needed to resolve to better and preferred outcomes for the decision tree.
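For anyone curious what the INT8 point means in practice: inference stacks commonly quantize 32-bit float weights down to 8-bit integers plus a scale factor, trading a sliver of precision for much higher throughput. A minimal Python/NumPy sketch (the weight values are made up purely for illustration):

```
import numpy as np

# Toy float32 "weights"; arbitrary values, just for illustration.
weights = np.array([0.82, -1.75, 0.03, 1.20, -0.41], dtype=np.float32)

# Symmetric quantization: map the largest magnitude onto the int8 limit (127).
scale = np.abs(weights).max() / 127.0
q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)

# Dequantize to see how much precision the round trip cost.
dq = q.astype(np.float32) * scale

print("int8 values:", q)
print("max round-trip error:", np.abs(weights - dq).max())
```

The int8 math is what the hardware runs fast; the scale factor turns results back into real units at the end.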
 
As far as I'm aware, most AI algorithms are still simple decision trees ...
Not so simple anymore, but essentially yes: big branching decision trees, run in parallel. As more data is analyzed in parallel, it allows for more granular decisions.
 
As far as I'm aware, most AI algorithms are still simple decision trees ...

Hmm. I always figured they were statistical matching algorithms. "This one is 85% like examples of this which the system has been trained on, with a confidence level of 95%," and then pick the closest match.
 
Hmm. I always figured they were statistical matching algorithms. ...
A little of column A, a little of column B. There isn't just one way to do these things; it's sort of what makes AI so interesting.
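To put the "column A, column B" point in concrete terms, here's a rough sketch (scikit-learn on a stock toy dataset, nothing from the thread) running both flavors side by side: an explicit branching decision tree, and a nearest-neighbor matcher that reports how closely a query resembles the stored training examples:

```
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Column A: an explicit branching decision tree.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)

# Column B: statistical matching against stored examples.
knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)

query = X[:1]  # one sample to classify
print("tree says:", tree.predict(query))
print("knn says:", knn.predict(query),
      "class probabilities:", knn.predict_proba(query))
```

Modern systems are far more elaborate than either of these, but both basic flavors are still in use.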
 
These stupid black box implementations that guess at solutions without being able to explain how they arrived at them are a detriment to society.
You have basically just described all humans (they are black box implementations that guess at solutions without being able to explain how they arrived at them).

How do you identify Jim vs. John? Why did you design a race car in a particular way given the near infinite number of variables and designs that could have also worked? Why did you paint a scene of a beach when there is a near infinite number of other environments you could have painted to portray the same theme?

How do you code logic for something that you yourself can't explain?

Even simple things become difficult (not impossible): how do you recognize the letter 'a'? How do you recognize the letter 'a' in cursive? How do you recognize the letter 'a' in a bunch of noise?

edit: This is also assuming you are trying to get the system to learn something we humans already know. How do you get a system to learn something that we possibly haven't even studied, or where there is too much data for us to try to predict the outputs from the inputs?
 
You have basically just described all humans (they are black box implementations that guess at solutions without being able to explain how they arrived at them). ...

Nonsense.

The entire scientific method is based upon forming a hypothesis and then proving it.

AI is more like:

Data -> Black hole -> Answer?

How do you even trust that answer if you have no idea how it was arrived at?

It's dangerously irresponsible to rely on this nonsense for anything that matters. It simply cannot be trusted.

An answer - without fully understanding that answer - is dangerous, as it winds up being misused, misunderstood and applied incorrectly. It would be better in that case to have no answer at all.
 
Nonsense. The entire scientific method is based upon forming a hypothesis and then proving it. ...
I think the most exciting uses will be in medicine. Being able to feed an enormous amount of test results (CT scans, blood tests, x-rays, DNA, etc.) into a machine learning program and discovering a new link that would otherwise have been missed is going to provide medical breakthroughs where we previously had literally no idea what to do for treatments. We've created all this data and machine learning/AI is what will let us harness its power.
 
Nonsense. The entire scientific method is based upon forming a hypothesis and then proving it. ...
You trust it the same way you trust all the black boxes walking around you. Statistics. Is it getting the job done accurately 99.99% of the time? OK, good. Is 99.99% of the time not good enough? Then test it till you get 99.999%.
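As a rough worked example of what that testing burden looks like: if you run n trials and observe zero failures, the "rule of three" says you can claim a failure rate below about 3/n at 95% confidence. A quick sketch (the function name here is mine):

```
import math

def trials_needed(max_failure_rate, confidence=0.95):
    """Zero-failure trials needed to show P(failure) < max_failure_rate:
    n >= log(1 - confidence) / log(1 - max_failure_rate)."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_failure_rate))

print(trials_needed(1e-4))  # 99.99% reliable: ~30,000 clean trials
print(trials_needed(1e-5))  # 99.999% reliable: ~300,000 clean trials
```

So each extra nine multiplies the required evidence by ten, which is why "just test it more" gets expensive fast.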
 
I think the most exciting uses will be in medicine. ...

As a tool for early stage research to try to find ideas, maybe, but for direct development of a product, I think you'll have a really difficult time getting anything past FDA with mere AI-based correlation data.

And to change that would probably require an act of Congress to change 21 CFR Part 820 and similar regulations.

The regulations include stringent requirements on what it takes to validate medical products, and understanding the link will be required for that. Anyone who tries without meeting those requirements will be shut down by FDA rather quickly, like 23andMe was when they decided to offer medical diagnoses without the proper validation.

The medical sector is definitely not one where you can apply the tech mentality of it being better to ask for forgiveness than for permission, and "fail fast and fix later". That's a good way to land yourself in jail.

And this is for good reason. I remember reading an article a while back that relates exactly to AI-based diagnosis and the dangers of trusting black box algorithms too much.

In one study they were trying to determine if they could use AI to read chest X-rays to look for tuberculosis. All seemed good at first until someone quite randomly figured out that the AI system was actually using the brand of X-ray machine as part of the diagnosis. You see, tuberculosis is more common in poor countries, and poor countries are more likely to have cheaper and older X-ray machines. The algorithm was not only picking up on the machine brand from the identifying marks, but also using artifacts present in the image quality and other aspects of machine-to-machine variation to diagnose the patient.

This is the type of problem you have with black box algorithms. It is very difficult to know exactly what is being used to diagnose a patient, and it is very easy for inadvertent variables that are completely inappropriate to wind up being used.

There are ways around this problem, though. I can envision getting the best of both worlds by relying on AI to help figure out which variables are important: an AI-based screening DoE of sorts. Have it report out each of the variables it deems significant to the diagnosis, then have a human specialist in the particular medical field review all of those variables manually. Cull those that are plain wrong (like MRI machine brand or image contrast), do more research into those that are not well understood to determine whether they are truly significant, and then use the final massaged list of variables to create a static algorithm that is used for the actual diagnosis. That algorithm would, of course, need a proper validation.
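A loose sketch of that screen-then-cull-then-validate idea, assuming scikit-learn and entirely hypothetical feature names (an illustration of the workflow, not a real diagnostic pipeline):

```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for imaging/lab variables; the names are invented.
feature_names = ["lesion_density", "machine_brand", "image_contrast",
                 "lesion_area", "patient_age"]
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Step 1: a black-box screening model ranks the candidate variables.
screen = RandomForestClassifier(random_state=0).fit(X, y)
imp = permutation_importance(screen, X, y, random_state=0)
for name, score in sorted(zip(feature_names, imp.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")

# Step 2: a human expert culls confounders like machine brand.
approved = [i for i, name in enumerate(feature_names)
            if name not in {"machine_brand", "image_contrast"}]

# Step 3: fit a static, inspectable model on the approved variables only.
# This is the piece that would then go through formal validation.
final = LogisticRegression(max_iter=1000).fit(X[:, approved], y)
print("coefficients:", final.coef_)
```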

Use of a black box method directly on patients would be a very, very bad idea and probably completely illegal. There would be a very high risk of misdiagnosis based on false correlations.

Right now, too many people who seem to think AI will solve everything have that "eyes open just a little bit too wide and wanting to tell you about Jesus" look. They are so amazed with the technology that they are completely blinded to its pitfalls. Some sober thinking on the subject and taking a few steps back would probably be healthy.
 
You trust it the same way you trust all the black boxes walking around you. Statistics. ...

The problem is the confounding factors that won't be completely teased out in that type of analysis.
 
As early stage research to try to find ideas, maybe, but for direct development of a product, I think you'll have a really difficult time getting anything past FDA with mere AI based correlation data.
You seem to know much, but wasn't it once fairly common for things to be discovered to work, not quite understood, validated through study, and then massively used? Did we wait until we understood why methylphenidate (and other stimulants) worked on children with ADHD before using it?
See: https://www.sciencedaily.com/releases/2020/01/200117100257.htm (that's more than 60 years after Ritalin started to be prescribed).

I could imagine the scientific process testing a lot of things, observing the effects, and if one seems to have good effects, running a vast study to confirm the positive effects and note any negative ones, then using it without fully understanding why it works.
 
You seem to know much, but wasn't it once fairly common for things to be discovered to work, not quite understood, validated through study, and then massively used? ...

There are some examples of that in the pharma world, particularly for treatments for mental disorders.

Honestly, I do not know how they got those approved past regulators. It shouldn't have worked, based on current regs.

I'm wondering if it just got to the point where so many physicians were prescribing these things off-label that FDA eventually relented and made some sort of exception.
 
There are ways around this problem, though. I can envision getting the best of both worlds by relying on AI to help figure out which variables are important. ...
Well, in your first comment you said "I honestly feel that society would be better off if we were to just scrap any effort at AI/Machine learning," yet here you can envision how it could be useful. Why would you expect machine learning to be perfect when it's still early in development and the potential hasn't been fully realized? I see no reason why machine learning shouldn't be used in conjunction with traditional methods to further research; it's a tool just like anything else is a tool.
 
Well, in your first comment you said "I honestly feel that society would be better off if we were to just scrap any effort at AI/Machine learning," yet here you can envision how it could be useful. ...

Yeah, I guess I should clarify that I think AI could be a useful tool if applied correctly, as a tool to aid in providing inputs to traditional methods.

That - however - is not what I see the evangelists of AI doing. They seem to believe that AI should just be let loose on the public without secondary human review and validation.

Sometimes it is in relatively harmless stuff that is only annoying if it goes wrong (Google Assistant).

Sometimes it is in absolutely frightening applications like self driving cars, or the aforementioned medical diagnoses.

I think AI is best suited on the front end that helps find patterns that are later researched, verified and validated by human experts using traditional methods.

I think AI black box models deployed directly have absolutely no merits at all, and we would all be better off if they were to just disappear.
 
Looking at the crazy number of projects that NVIDIA alone is running alongside GPUs, and who knows how many other projects AMD and Intel are running, I'm not surprised we're in a huge chip shortage situation.

What surprises me more is how comfortable Jensen is talking about anything and everything.
 
Nonsense. The entire scientific method is based upon forming a hypothesis and then proving it. ...

But they do know; that's how they're able to improve the AI algorithms. Improvements such as learning time, processing speed, and error rates all come from being able to access the classification data and analyze it to improve the algorithm.
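For what it's worth, that kind of inspection is routine. A small sketch (scikit-learn, stock digits dataset) of how a confusion matrix shows exactly which classes a model gets wrong, which is what guides the next round of improvements:

```
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

# Rows are true digits, columns are predictions: the off-diagonal
# cells show exactly which digits get confused with which.
print(confusion_matrix(y_te, pred))
print(classification_report(y_te, pred))
```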
 
Nonsense. The entire scientific method is based upon forming a hypothesis and then proving it. ...
Einstein didn't understand why quantum mechanics works the way it does, so I certainly do not expect you to.
Though AI is just deep learning, or finding the common patterns in a large dataset. Pretty simple, really.
 
Einstein didn't understand why quantum mechanics works the way it does, so I certainly do not expect you to. ...

When you are working at the edge of what is new in science, you have theories that you are developing.

The thing with quantum mechanics is that we are still a long way off from practical applications. Before you start selling a product, especially one that has risks involved with it (misdiagnosis, accidents, etc.), there is a whole different burden of proof associated with testing than there is with early theories.

If you have a solid theory as to how/why something works, then you have a basis on which to test it and prove that it will work that way every time, beyond just a binomial sample size. If you don't understand how or why something works, you also don't know enough to say it will be long-term stable. A simple current-state, large-sample-size test will not negate that.
 