Intel Launches Neural Compute Stick 2

AlphaAtlas

As one of the first announcements at its artificial intelligence conference in Beijing, Intel unveiled the Neural Compute Stick 2. Like its predecessor, the USB stick accelerates AI workloads while drawing all the power it needs from a USB 3.0 port. The new stick is built around the Intel Movidius Myriad X VPU, and Intel claims it delivers up to 8 times the performance of the original compute stick.

What looks like a standard USB thumb drive hides much more inside. The Intel NCS 2 is powered by the latest generation of Intel VPU – the Intel Movidius Myriad X VPU. This is the first to feature a neural compute engine – a dedicated hardware neural network inference accelerator delivering additional performance. Combined with the Intel Distribution of the OpenVINO toolkit supporting more networks, the Intel NCS 2 offers developers greater prototyping flexibility. Additionally, thanks to the Intel AI: In Production ecosystem, developers can now port their Intel NCS 2 prototypes to other form factors and productize their designs.
 
Considering the size of the chips and the limited amount of cooling, how much acceleration can this stick really offer compared to a proper HEDT CPU or a workstation-level GPU?
 
A portable AI? Was this thing's development code name "Dixie Flatline"?
 
Considering the size of the chips and the limited amount of cooling, how much acceleration can this stick really offer compared to a proper HEDT CPU or a workstation-level GPU?

More than you might think. We have been using specialized hardware acceleration for quite a while for our RT controls, and it makes a massive difference in overall performance, especially in terms of dollars versus time. For example, the PLECS (Plexim) RT Box cut our simulation time from days to minutes in some cases. Worth every penny when you consider that an engineer's time is measured in the $100-$200/hour range. It doesn't take long to make that money back and benefit from the increased productivity.

https://www.plexim.com/products/rt_box
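The payback argument above is easy to sanity-check with back-of-envelope arithmetic. The numbers below (device cost, engineer rate, hours saved per run) are illustrative assumptions, not figures from the post:

```python
# Rough payback estimate for a hardware accelerator.
# All inputs are illustrative assumptions, not real quotes.

def payback_runs(device_cost, rate_per_hour, hours_saved_per_run):
    """Number of runs before the accelerator pays for itself in engineer time."""
    savings_per_run = rate_per_hour * hours_saved_per_run
    return device_cost / savings_per_run

# Assume: a $10,000 box, a $150/hour engineer, and a simulation that
# drops from roughly 2 days (48 h) to minutes, i.e. ~48 h saved per run.
runs = payback_runs(device_cost=10_000, rate_per_hour=150, hours_saved_per_run=48)
print(f"Pays for itself after ~{runs:.1f} runs")  # ~1.4 runs
```

Under those assumed numbers, the box recovers its cost in under two simulation runs, which is the "$ vs time" point made above.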
 
A portable AI? Was this thing's development code name "Dixie Flatline"?

That was just a copy of a console cowboy

More than you might think. We have been using specialized hardware acceleration for quite a while for our RT controls, and it makes a massive difference in overall performance, especially in terms of dollars versus time. For example, the PLECS (Plexim) RT Box cut our simulation time from days to minutes in some cases. Worth every penny when you consider that an engineer's time is measured in the $100-$200/hour range. It doesn't take long to make that money back and benefit from the increased productivity.

https://www.plexim.com/products/rt_box

Except your sample hardware accelerator is a fairly large piece of equipment compared to a thumb drive.
 
That was just a copy of a console cowboy

Except your sample hardware accelerator is a fairly large piece of equipment compared to a thumb drive.

We use thumb-drive-sized tech as well, and it also saves massive time for software development. Your argument was that these small things can't be powerful compared to a big CPU/GPU. For what we do, that RT Box is more powerful than a 42U rack full of the biggest, baddest CPUs on the market, at a fraction of the price. So there is no doubt in my mind that this one little thumb stick can easily beat any single desktop setup handily, for a fraction of the cost, considering the application.
 
I am aware that dedicated devices like ASICs are normally much more efficient than completely general-purpose CPUs. That's why people used to point out that the old Colossus was better at code breaking than an original Pentium.

Intel's new toy still looks to me like a low-end cell phone processor in a small box being sold for about $100, and for serious AI applications the dedicated hardware would be full-scale chips with appropriate supporting hardware. Otherwise, why aren't deep learning and other pseudo-AI rigs built from hundreds of these instead of thousands of Xeon Phis or equivalently sized FPGAs? Even the box you linked to has a whole bunch of specialized hardware to make its smaller dedicated dual core do its thing.
 
...
Intel's new toy still looks to me like a low-end cell phone processor in a small box being sold for about $100, and for serious AI applications the dedicated hardware would be full-scale chips with appropriate supporting hardware. Otherwise, why aren't deep learning and other pseudo-AI rigs built from hundreds of these instead of thousands of Xeon Phis or equivalently sized FPGAs? Even the box you linked to has a whole bunch of specialized hardware to make its smaller dedicated dual core do its thing.

Because they're used for shallow learning. It's all about selecting the right tool for the task. ;-)
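One crude way to make the "right tool for the task" point concrete is throughput per watt. Every figure below is a ballpark assumption for illustration, not an official spec for either the stick or any GPU:

```python
# Ballpark efficiency comparison: edge inference stick vs. workstation GPU.
# All numbers are assumptions for illustration only, not vendor specs.

def tops_per_watt(tops, watts):
    return tops / watts

stick = tops_per_watt(tops=1.0, watts=2.0)      # assume a ~1 TOPS, ~2 W edge VPU
gpu   = tops_per_watt(tops=100.0, watts=300.0)  # assume a ~100 TOPS, ~300 W GPU

print(f"stick: {stick:.2f} TOPS/W, gpu: {gpu:.2f} TOPS/W")
# The GPU wins on raw throughput by ~100x, so it owns the training rack;
# but per watt the small part holds its own under these assumptions,
# which is why it is the right tool for inference at the edge.
```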
 
It still seems more like Intel's version of those gadgets you plug into your car's 12V cigarette lighter port that promise 50 HP and 20 more mpg.
 
Ima gonna buy a bunch of USB hubs and jack every open hole with one of these! What's the USB daisy-chain limit, 127? Yeah, baby. I'll turn my laptop into a super-computer.
 
If you look at the specs, the Movidius X is pretty fucking impressive for such a small, low-powered chip.
It can run neural networks that can see or recognize things, e.g. object tracking on a 720p stereo feed. It appears it's been upgraded to handle 4K, or up to 8 HD feeds (720p, presumably). It's the sort of thing they'd use in Magic Leap etc., or in some netbook that wanted a cheap, low-power neural net. I already have an application for it a generation or two out; sadly, these aren't fast enough yet (I need at least 4K/120).
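To put that 4K/120 requirement in perspective, here is the raw pixel-rate arithmetic. Frame sizes are the standard 1280x720 and 3840x2160; the 30 fps figure for the stereo feed is an assumption, since the post doesn't give one:

```python
# Raw pixel throughput for the video workloads discussed above.

def pixels_per_second(width, height, fps, streams=1):
    return width * height * fps * streams

# A 720p stereo feed, assuming 30 fps (fps not stated in the post):
stereo_720p = pixels_per_second(1280, 720, fps=30, streams=2)

# The target workload: a single 4K feed at 120 fps:
target_4k120 = pixels_per_second(3840, 2160, fps=120)

print(f"720p stereo @30: {stereo_720p / 1e6:.1f} MP/s")   # 55.3 MP/s
print(f"4K @120:         {target_4k120 / 1e6:.1f} MP/s")  # 995.3 MP/s
print(f"ratio: {target_4k120 / stereo_720p:.0f}x")        # 18x
```

So the 4K/120 target is roughly 18 times the pixel rate of a 30 fps 720p stereo feed, which is why a generation or two of hardware headroom is needed.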
 