Intel CEO Sees ‘Green Shoots’ Emerging

erek

Intel seeing a bright future

“Gelsinger also touted Intel’s emergence in artificial intelligence.

“There’s a range of requirements for AI, and there are these big monster training environments where all these machines do is train for days or weeks on 100 billion parameter models,” he said. “For that, we have very high-end offerings.”

There will be a broad infusion of AI into workloads everywhere, he added, saying, “Those workloads could be some data preparation, could be some inferencing, could be some more medium-sized model-training workloads, not 100 billion parameters, but 10 billion parameters where you just run them on a fleet of Sapphires. The performance of Sapphire Rapids is quite spectacular, and that performance, we expect, will become much more of the mainstream of computing as AI gets infused into every application going forward.”

Some analysts [and investors?] were skeptical …”



Source: https://www.eetimes.com/intel-ceo-sees-green-shoots-emerging/
 
Ugh.

I hate AI shit, and if their prediction that it will sneak its way into everything is accurate, I will be unhappy. I want no AI whatsoever in my life.

If I don't do it myself, manually, I don't trust it.

This is not the future I was promised.
 
Ugh.

I hate AI shit, and if their prediction that it will sneak its way into everything is accurate, I will be unhappy. I want no AI whatsoever in my life.

If I don't do it myself, manually, I don't trust it.

This is not the future I was promised.
AI is useful now, and will be great in the future, but... there are a lot of kinks to work out. To abuse car analogies, AI models like ChatGPT and current computer vision systems are akin to the Ford Model T, if not earlier cars. They're impressive for their time, but it's obvious there's a long way to go before they're truly sophisticated.

But I think we'll get there. The key will be to aggressively police the ethics (broadly representative training data, for example) and work toward AI that truly understands what it's producing.
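One small, concrete way to act on the "broadly representative training data" point is to audit class balance before training. The sketch below is purely illustrative: the labels, the uniform-share baseline, and the 0.5 tolerance are my own assumptions, not anything from the thread, and a real audit would look at far more than label counts.

```python
from collections import Counter

def balance_report(labels, tolerance=0.5):
    """Flag classes whose share falls far below a uniform split.

    A class is flagged if its share is less than `tolerance` times
    the uniform share. Illustrative heuristic only.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    uniform = 1.0 / len(counts)
    report = {}
    for cls, count in counts.items():
        share = count / total
        report[cls] = (share, share < tolerance * uniform)
    return report

# Hypothetical, deliberately skewed dataset.
labels = ["cat"] * 900 + ["dog"] * 80 + ["bird"] * 20
for cls, (share, underrepresented) in balance_report(labels).items():
    print(f"{cls}: {share:.2%} underrepresented={underrepresented}")
```

Counting labels is the easy part; representativeness across the attributes you *didn't* label is where the hard ethics work lives.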
 
What about the prescription microdoses of psilocybin mushroom 🍄 🍕 organic pizzas?

Yes, Gelsinger appears to be quite the sampler - way out there numerous times, starting with "your dividend is safe" - LoL
 
AI is useful now, and will be great in the future, but... there are a lot of kinks to work out. To abuse car analogies, AI models like ChatGPT and current computer vision systems are akin to the Ford Model T, if not earlier cars. They're impressive for their time, but it's obvious there's a long way to go before they're truly sophisticated.

But I think we'll get there. The key will be to aggressively police the ethics (broadly representative training data, for example) and work toward AI that truly understands what it's producing.

My take on AI is that even now it may get things right a majority of the time, but a minority of the time it will make mistakes.

As it gets better, that minority will become a smaller and smaller percentage of the time, but it will always be there, so you are always going to need manual review unless you have some fault tolerance.
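That "it will always be there" point compounds quickly across many tasks. A rough back-of-the-envelope illustration (the per-task error rates are made-up numbers, and the independence assumption is a simplification):

```python
def p_at_least_one_error(p: float, n: int) -> float:
    """Chance of at least one error across n independent tasks,
    each with per-task error rate p: 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

# Even small per-task error rates add up over a hundred tasks.
for p in (0.10, 0.01, 0.001):
    print(f"p={p}: chance of >=1 error in 100 tasks = "
          f"{p_at_least_one_error(p, 100):.3f}")
```

Whether that residual rate is acceptable is exactly the "fault tolerance" question: fine for music recommendations, not fine for missed appointments.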

To me it is just easier to do everything myself manually, than it is to chase down weird ass unpredictable errors an AI system has made.

And the application doesn't matter. I don't want to use it professionally to write code or design other shit, or even in something trivial like a voice assistant that tries to interpret a phone call and put something in my calendar. Even something like that can be a real pain in the ass if it gets it wrong and I miss my dentist's appointment.

So I guess my point is, I don't trust AI for even the most trivial of tasks, and I certainly don't trust it for anything important.

The only thing I could ever see using it for is to sift through massive quantities of data and come up with potentially significant correlations which it can suggest to a human for potential further study. I would be entirely opposed to just using any correlations it finds though. Those should be presented to a human who should come up with a hypothesis and test it manually, to make sure we understand exactly what is going on before it is used.
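That "suggest to a human" workflow can be sketched in a few lines: scan variable pairs, flag strong correlations, and hand the list to an analyst rather than acting on it. Everything here is hypothetical: the dataset, the 0.8 threshold, and the helper names are all mine.

```python
import itertools
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def flag_for_review(columns, threshold=0.8):
    """Return variable pairs whose |correlation| exceeds threshold.

    The output is a suggestion list for a human analyst, not a
    conclusion: each pair still needs a hypothesis and a real test.
    """
    flagged = []
    for (na, a), (nb, b) in itertools.combinations(columns.items(), 2):
        r = pearson(a, b)
        if abs(r) >= threshold:
            flagged.append((na, nb, r))
    return flagged

# Toy data: one genuine relationship, one noise column.
data = {
    "temperature": [20, 22, 25, 27, 30],
    "ice_cream_sales": [10, 14, 19, 24, 31],
    "dice_roll": [3, 1, 6, 2, 5],
}
for a, b, r in flag_for_review(data):
    print(f"{a} ~ {b}: r = {r:.2f}  -> suggest to analyst")
```

The point of keeping the human in the loop is exactly the classic caveat: a screening pass like this finds correlation, never causation.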

Direct use of black-box models just needs to be wholesale banned for any application. Unless you know exactly what it is doing, it is useless.
 
Ugh.

I hate AI shit, and if their prediction that it will sneak its way into everything is accurate, I will be unhappy. I want no AI whatsoever in my life.

If I don't do it myself, manually, I don't trust it.

This is not the future I was promised.
Thank god AI is only as smart as the human who programmed it.

But yeah, I agree. Nobody gives a shit about AI, we just want more efficient & faster processing power.
 
AI is useful now, and will be great in the future, but... there are a lot of kinks to work out. To abuse car analogies, AI models like ChatGPT and current computer vision systems are akin to the Ford Model T, if not earlier cars. They're impressive for their time, but it's obvious there's a long way to go before they're truly sophisticated.

But I think we'll get there. The key will be to aggressively police the ethics (broadly representative training data, for example) and work toward AI that truly understands what it's producing.
ChatGPT and the GPT-3 model are cool, but you train it by feeding it the internet one dump truck at a time.
Have you ever been on the internet? It's kind of terrible.
ChatGPT is awesome because it used the internet to learn how to have a conversation and answer questions.
ChatGPT is doomed because the internet taught it how to have a conversation.

There are other AI models out there right now, for simulation and design, that have a more practical future for many companies' financial futures.
ChatGPT is a toy you can make do cool things.
 
Maybe they can get AI to bring Battlemage to market with better drivers and a price that isn't ludicrous.
 
Nobody gives a shit about AI, we just want more efficient & faster processing power.
The idea that no one (be it Facebook, Google, Netflix, or almost any industry that has a dataset) gives a shit about learning models sounds strange and out of place.

It has been happening for decades now, for example:
https://www.sciencedirect.com/science/article/pii/S0022391321002729
https://researchoutreach.org/articles/detecting-dental-diseases-ai-dental-image-analysis/

https://www.technologynetworks.com/cancer-research/news/software-tool-uses-ai-to-identify-cancer-cells-328231

It will be everywhere, and when you travel you already use it all the time to translate things, find a route on Google Maps, do a Google search, and so on.

Interacting with a computer using natural language is quite in line with the future people were promised.
 
Ugh.

I hate AI shit, and if their prediction that it will sneak its way into everything is accurate, I will be unhappy. I want no AI whatsoever in my life.

If I don't do it myself, manually, I don't trust it.

This is not the future I was promised.
AI isn't ready. It will never be ready as long as the people who program it have biases. The scary part is how many AI systems can be turned into rabid racists or develop languages of their own and talk without the people in charge knowing what is being said.
Terminator wasn't a blueprint for what to do; it was a cautionary tale.
 
ChatGPT and the GPT-3 model are cool, but you train it by feeding it the internet one dump truck at a time.
Have you ever been on the internet? It's kind of terrible.
ChatGPT is awesome because it used the internet to learn how to have a conversation and answer questions.
ChatGPT is doomed because the internet taught it how to have a conversation.

There are other AI models out there right now, for simulation and design, that have a more practical future for many companies' financial futures.
ChatGPT is a toy you can make do cool things.
ChatGPT is just a large language model. In my opinion you can't truly call something an AI if it cannot simulate other things the human mind does.
 
I think AI got so good (maybe often more through raw power and the sample size of the learning input than through reasoning) that people forgot what it tends to be, given the way some talk (but I am not sure people are being serious).

Speech recognition, computer vision, enemies in a video game that make "decisions", self-driving cars, YouTube recommending stuff: these are all forms of AI, with a giant market and a long history behind them.

Maybe Intel has language models a la GPT somewhat in mind, but the training data will be street images of red cones, radiographs with cancer versus radiographs of people without cancer, skin and teeth scans, pharmacological development of molecules, material scans, input from all the mining and oil-drilling scanner and sonar gear that exists paired with the actual results for the locations that got drilled, etc., not just language.

Pretty much every sector that has a large dataset with a result you can build a learning machine on will do it.

In my opinion you can't truly call something an AI if it cannot simulate other things the human mind does.
That is usually reserved for artificial general intelligence (AGI), or strong AI, rather than AI in general:
https://en.wikipedia.org/wiki/Artificial_general_intelligence
 
Megacorps and world governments care about AI and the granular control it represents.

OK but hear me out 👐

Instead of 'a well regulated militia' it's 'a well regulated homegrown robot militia with your own homegrown robot militia ai'

I argue my own killer robot also falls under The Second Amendment

Edit: I have a Roomba, a lighter, and bug spray - it's a start 👍
 
I like watching the developments being made. It certainly is interesting to be alive right now watching this. I am excited at the idea that one day actual AI could become a thing. I personally don't think AI would turn into a murderous army wiping out all humans. I believe that is just artistic interpretation of our fear of everything unknown. Perhaps AI will solve fusion reaction technology and also crack open every single crypto currency lol.
 
I like watching the developments being made. It certainly is interesting to be alive right now watching this. I am excited at the idea that one day actual AI could become a thing. I personally don't think AI would turn into a murderous army wiping out all humans. I believe that is just artistic interpretation of our fear of everything unknown. Perhaps AI will solve fusion reaction technology and also crack open every single crypto currency lol.

We can build them into craft to send off to explore space and report back, instead of worrying about all the stuff that comes along with sending humans, like food, oxygen, and room to stretch.
 
I like watching the developments being made. It certainly is interesting to be alive right now watching this. I am excited at the idea that one day actual AI could become a thing. I personally don't think AI would turn into a murderous army wiping out all humans. I believe that is just artistic interpretation of our fear of everything unknown. Perhaps AI will solve fusion reaction technology and also crack open every single crypto currency lol.
The dark cyberpunk future is going to enjoy you as well, oh optimistic one. :borg:
This is what AI is going to be used for, and no, it isn't a "conspiracy"; this is very real and publicly available information:

https://www.weforum.org/great-reset/

As for Intel, they had better get their act together and start doing what AMD and ARM are doing and get specialized, otherwise they will get left in yesteryear.
Even megacorps aren't immune to obsolescence.
 