Intel to Use Integrated GPU to Detect Malware Attacks

DooKey

[H]F Junkie
Joined
Apr 25, 2001
Messages
13,551
Intel has come up with two new efforts to combat malware. The first is called Accelerated Memory Scanning and it will use the integrated GPU of its CPUs to scan memory for malware. Scanning intensity will be adjusted based upon GPU load and it can even be turned off while playing a game. An Intel driver runs in Ring 3 and can even be expanded to Ring 0. Microsoft has endorsed this technology and it's going to be added to Windows Defender. The second effort is called Advanced Platform Telemetry and is a cloud-based machine learning system that monitors telemetry from the system. Intel claims this will increase the accuracy of detecting advanced threats. Cisco is going to be the first partner to utilize this technology. With both of these efforts and the redesign of new CPUs to combat Meltdown and Spectre, it appears Intel is taking security seriously. That's good news for all of us.

Intel is planning to combine the above-mentioned security solutions under the umbrella of its Intel Security Essentials toolkit. It will also include the AES-NI and SGX instruction sets. The former handles accelerated encryption, while the latter lets applications set aside private regions of code and data. The company's firmware protection technology is going to be part of Intel Security Essentials as well. The kit will be supported by Core, Xeon and Atom processors.
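Intel hasn't published the internals of Accelerated Memory Scanning, but the basic idea of pattern-matching memory on a GPU is easy enough to sketch. Here's a toy CUDA example of a signature scan, purely to show the shape of the work being offloaded; the kernel, buffer names and single fixed signature are all my own assumptions, not anything from Intel's driver or Microsoft's Defender integration:

```cuda
// Toy sketch of GPU-side signature scanning -- NOT Intel's AMS implementation.
// Each thread checks whether a short byte signature starts at its own offset.
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

__constant__ unsigned char d_sig[16];   // signature to look for (hypothetical)

__global__ void scan_for_signature(const unsigned char *buf, size_t len,
                                    int sig_len, unsigned int *hit_count)
{
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i + sig_len > len) return;

    for (int k = 0; k < sig_len; ++k)
        if (buf[i + k] != d_sig[k]) return;

    atomicAdd(hit_count, 1u);           // pattern found at offset i
}

int main()
{
    const size_t len = 64 << 20;                      // 64 MiB test buffer
    unsigned char sig[4] = {0xDE, 0xAD, 0xBE, 0xEF};  // made-up "malware" bytes

    unsigned char *h_buf = new unsigned char[len]();
    memcpy(h_buf + 1234567, sig, sizeof(sig));        // plant one hit

    unsigned char *d_buf; unsigned int *d_hits;
    cudaMalloc(&d_buf, len);
    cudaMalloc(&d_hits, sizeof(unsigned int));
    cudaMemcpy(d_buf, h_buf, len, cudaMemcpyHostToDevice);
    cudaMemcpyToSymbol(d_sig, sig, sizeof(sig));
    cudaMemset(d_hits, 0, sizeof(unsigned int));

    scan_for_signature<<<(unsigned)((len + 255) / 256), 256>>>(
        d_buf, len, (int)sizeof(sig), d_hits);

    unsigned int hits = 0;
    cudaMemcpy(&hits, d_hits, sizeof(hits), cudaMemcpyDeviceToHost);
    printf("signature hits: %u\n", hits);

    cudaFree(d_buf); cudaFree(d_hits); delete[] h_buf;
    return 0;
}
```

Real scanners obviously do far more than match one fixed pattern, but the parallel sweep over memory is the part being handed to the GPU.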
 
Unless I missed it, the article didn't say when they project this will be a thing. Probably when it doesn't matter anymore...
 
So does this mean those who have a dedicated video card can use the CPU's integrated GPU as a full-time, hardware-accelerated malware memory scanner?
 
So does this mean those who have a dedicated video card can use the CPU's integrated GPU as a full-time, hardware-accelerated malware memory scanner?

I would say that's probably a true statement. If the integrated GPU is idle then the AMS can go full tilt.
 
So does this mean those who have a dedicated video card can use the CPU's integrated GPU as a full-time, hardware-accelerated malware memory scanner?

There would still be some CPU usage moving all that stuff to/from RAM over another bus. I don't think the goal is any kind of performance boost, but a security one from using a discrete, "safe" processor and memory pool.
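For what it's worth, how much host-side copying is involved depends on how the memory gets mapped. With a discrete card you'd typically stage buffers across the bus explicitly; an iGPU shares physical RAM with the CPU, so a zero-copy mapping is at least possible. In CUDA terms (Intel's driver obviously isn't CUDA, and the names here are hypothetical) the two options look roughly like this:

```cuda
// Two ways a GPU can see a host buffer -- illustrating the "bus traffic" point.
#include <cuda_runtime.h>
#include <cstdlib>

int main()
{
    const size_t len = 16 << 20;                 // 16 MiB of host memory to scan
    unsigned char *h_buf = (unsigned char *)malloc(len);

    // Option 1: explicit staging copy (typical for a discrete GPU).
    // This memcpy is exactly the CPU/bus cost being discussed.
    unsigned char *d_buf;
    cudaMalloc(&d_buf, len);
    cudaMemcpy(d_buf, h_buf, len, cudaMemcpyHostToDevice);

    // Option 2: zero-copy mapping (closer to what a shared-memory iGPU can do).
    // The GPU reads the host pages directly; no bulk copy up front, but every
    // access still crosses the memory bus.
    unsigned char *d_mapped;
    cudaHostRegister(h_buf, len, cudaHostRegisterMapped);
    cudaHostGetDevicePointer((void **)&d_mapped, h_buf, 0);

    // ... a scanning kernel would be launched against d_buf or d_mapped here ...

    cudaHostUnregister(h_buf);
    cudaFree(d_buf);
    free(h_buf);
    return 0;
}
```

Either way, something on the CPU side still has to set the scan up and the data still crosses the memory bus, so "free" isn't quite the right word - just cheaper and out of the CPU's way.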


Why don't they use the iGPU to offset the performance losses created by their Spectre and Meltdown flaws?
Asking for a friend.
Errr, because an iGPU doesn't do what a CPU does.
 
What kind of power usage would this entail? Would it be a constant scan using iGPU idle time? That would raise power costs a bit if so.
 
Hopefully this is some kind of hook and Microsoft does all the heavy lifting. Granted, the Intel GPU driver hasn't crashed in a long time for me. If this is some horrible push from the remnants of McAfee...

Wouldn't the better solution be to sell us CPUs/boards without Spectre/Meltdown/IME faults in the first place, so malware doesn't have an easy hook?
 
There would still be some CPU usage moving all that stuff to/from RAM over another bus. I don't think the goal is any kind of performance boost, but a security one from using a discrete, "safe" processor and memory pool.



Errr, because an iGPU doesn't do what a CPU does.

They could use a translation layer. Inefficient, but hey, if you've got an iGPU sitting idle because you have a discrete graphics card, it may be able to accelerate at least a percentage, and partially mitigate the loss. On the other hand, maybe not, but I wouldn't mind hearing that from an engineer one way or the other. I'm sure it wouldn't work for all instructions and functions, but maybe something along the lines of hyperthreading or MMX / SSE types of things. Just speculating.

That's beside the point though. I actually like this anti-malware idea, as long as John McAfee isn't involved :p I don't think I'd rely on it solely, but if it had close to zero overall system performance hit, I couldn't see it hurting.
 
They could use a translation layer. Inefficient, but hey, if you've got an iGPU sitting idle because you have a discrete graphics card, it may be able to accelerate at least a percentage, and partially mitigate the loss. On the other hand, maybe not, but I wouldn't mind hearing that from an engineer one way or the other. I'm sure it wouldn't work for all instructions and functions, but maybe something along the lines of hyperthreading or MMX / SSE types of things. Just speculating.

I can see what you're imagining, but I don't think it would be practical. Any kind of software solution that diverts vector instructions to the GPU would likely incur so much overhead that it would be slower than just leaving them on the CPU. I'm not a hardware engineer, but my experience in software tells me you'd need some kind of watchdog to inspect programs as they run, catch and redirect the instructions you're looking for, and also keep all referenced data and results coherent between the CPU and GPU as the program's instructions jump between them. Not to mention that CPU vector instructions are extremely low latency compared to a GPU; the CPU thread would probably spend more time stalled than it saves. If the CPU's instruction decoders were wired to GPU execution resources, this could be a different scenario, but I don't know if that kind of architecture is in development or whether it would even be practical.
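To put a rough shape on that overhead argument: the fixed cost of a copy-launch-copy round trip is typically tens of microseconds, which dwarfs a handful of SSE/AVX instructions, so small latency-sensitive vector work doesn't survive the trip. A toy CUDA timing sketch of the comparison, with a made-up workload and names of my own:

```cuda
// Toy timing sketch: CPU loop vs. GPU round trip for a small vector op.
// The point is the fixed launch + transfer overhead, not the math itself.
#include <chrono>
#include <cstdio>
#include <cuda_runtime.h>

__global__ void add_one(float *v, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] += 1.0f;
}

int main()
{
    const int n = 4096;                 // small: the kind of work SSE/AVX eats
    float h_v[4096];
    for (int i = 0; i < n; ++i) h_v[i] = (float)i;

    // CPU version: a compiler will typically auto-vectorize this loop.
    auto t0 = std::chrono::high_resolution_clock::now();
    for (int i = 0; i < n; ++i) h_v[i] += 1.0f;
    auto t1 = std::chrono::high_resolution_clock::now();

    // GPU version: copy over, launch, copy back.
    float *d_v;
    cudaMalloc(&d_v, n * sizeof(float));
    auto t2 = std::chrono::high_resolution_clock::now();
    cudaMemcpy(d_v, h_v, n * sizeof(float), cudaMemcpyHostToDevice);
    add_one<<<(n + 255) / 256, 256>>>(d_v, n);
    cudaMemcpy(h_v, d_v, n * sizeof(float), cudaMemcpyDeviceToHost);
    auto t3 = std::chrono::high_resolution_clock::now();

    printf("CPU loop : %lld ns\n", (long long)
        std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count());
    printf("GPU trip : %lld ns\n", (long long)
        std::chrono::duration_cast<std::chrono::nanoseconds>(t3 - t2).count());

    cudaFree(d_v);
    return 0;
}
```

On buffers this small the GPU path loses before the math even starts; offloading only pays when the work per transfer is large, which is the opposite of how scattered vector instructions inside a running program behave.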
 
Why don't they use the iGPU to offset the performance losses created by their Spectre and Meltdown flaws?
Asking for a friend.

Because architecture doesn't work that way. If you really wanna dive into superscalar processors, speculative execution, and this class of covert channel attacks, I really encourage it. If you do, you'll discover that this is NOT a simple thing (nor solely an Intel thing). If not, that's ok too, but then you should consider not just spouting off about it and hating on Intel for it.
 
It's all a rat race at this point. Actually, it always has been. Around the time of the McAfee acquisition, Intel spoke of security and how it was one of the "three pillars" of computing. They had every opportunity in the world to build security into their _hardware_. It never happened, and it looks like it never will.
 
....They had every opportunity in the world to build security into their _hardware_. It never happened, and it looks like it never will.

That's not entirely true; at least we have the NX bit! And, IIRC, we have AMD to thank for pushing Intel on that one too.

Also, it's arguable how much security is practical in a CPU - I think full-blown antivirus is far beyond the scope of what the hardware is meant to provide. The current scheme works well aside from a few bugs - kernel mode vs. user mode separation works as long as the important stuff behaves itself. And I don't think it's Intel's responsibility to design their CPUs to babysit a badly written kernel-mode Bluetooth driver.
 
I remember back around 2006 and 2007, when GPGPU and the unified shader architectures from both NVIDIA and AMD/ATI (at the time) were new, there was talk that GPUs were going to be able to drive anti-virus and anti-malware software.
It seemed so new and neat at the time, and given how much more powerful GPUs were at raw compute than CPUs during that era (quad-core CPUs were very new then), GPUs seemed like the logical choice to drive the software - but nothing ever came of it.

Crazy that it took over a decade to see the first public implementation of this finally come about, at least on Intel iGPUs.
 
I've read a number of posts around the 'net regarding Moore's law and what can still be done with the current fab processes for both CPUs and GPUs. I don't know enough about it, but it really seems like CPUs are reaching that end quicker and should simply be replaced by GPUs at this point, at least until they take a more radically different direction. Odds are someone will burst this bubble, but I haven't really heard of Meltdown/Spectre affecting AMD/NV yet either.
 
I've read a number of posts around the 'net regarding Moore's law and what can still be done with the current fab processes for both CPUs and GPUs. I don't know enough about it, but it really seems like CPUs are reaching that end quicker and should simply be replaced by GPUs at this point, at least until they take a more radically different direction. Odds are someone will burst this bubble, but I haven't really heard of Meltdown/Spectre affecting AMD/NV yet either.


GPUs are great at accelerating easily threaded code, but they do that by sacrificing single thread IPC. CPUs will never be replaced, but we will see an evolution of the HSA standards to further unify both.
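To make that concrete: work where every element is independent spreads across thousands of comparatively slow GPU threads beautifully, while a serial dependency chain gets nothing from extra threads and lives or dies on single-thread IPC and clocks. A toy sketch of the two extremes (my own example, nothing HSA-specific):

```cuda
// GPU-friendly vs. GPU-hostile work, illustrating the single-thread IPC trade-off.
#include <cstdio>
#include <cuda_runtime.h>

// Embarrassingly parallel: every element is independent, so thousands of
// relatively slow GPU threads can chew through it at once.
__global__ void scale(float *v, int n, float s)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] *= s;
}

// Serial recurrence: each iteration needs the previous result, so extra
// threads are useless and per-thread speed (the CPU's strength) is all that counts.
float serial_accumulate(const float *v, int n)
{
    float acc = 0.0f;
    for (int i = 0; i < n; ++i)
        acc = acc * 0.99f + v[i];   // depends on the previous iteration
    return acc;
}

int main()
{
    const int n = 1 << 20;
    float *h = new float[n];
    for (int i = 0; i < n; ++i) h[i] = 1.0f;

    float *d;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
    scale<<<(n + 255) / 256, 256>>>(d, n, 2.0f);              // GPU shines here
    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);

    printf("serial result: %f\n", serial_accumulate(h, n));   // GPU can't help here
    cudaFree(d);
    delete[] h;
    return 0;
}
```

HSA-style unification helps with sharing memory and dispatching work between the two, but it doesn't change which of those two shapes a given piece of code is.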
 