How to Bypass Matlab’s ‘Cripple AMD CPU’ Function

erek

"I will say one more thing on this issue. As far as I’m personally concerned, any piece of software that claims to support AVX, AVX2, SSE, or any other SIMD code should prominently state whether that code executes solely on supported Intel microprocessors. Failing to inform customers that your software won’t execute ideal code on their platform due to hard-coded limits in your application ought to constitute false advertising. AMD advertises its CPUs based on factors like AVX2 support, but software vendors are under no obligation to inform you whether you’ll be able to use features you literally paid for. This should not be the case. Multi-million dollar software developers are capable of performing the due diligence required to be honest about their optimization practices."

https://www.extremetech.com/computi...ss-matlab-cripple-amd-ryzen-threadripper-cpus
 
LOL! Use this one weird trick to turn the tables.

For those who didn't click through, setting an environment variable does this: "AMD’s performance improves by 1.32x – 1.37x overall. Individual test gains are sometimes much larger. Obviously these results are much worse for Intel, changing what looked like a narrow victory over the 3960X and a good showing against the 3970X into an all-out loss."
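For reference, the trick the article describes is reportedly just setting MKL_DEBUG_CPU_TYPE=5 before MATLAB starts, so MKL takes its AVX2 path regardless of the vendor string. A minimal sketch as a Windows batch launcher (the matlab.exe path will vary by install, and newer MKL builds may ignore the variable):

@echo off
rem Force MKL's debug CPU-type override to 5 (reportedly the AVX2 code path)
set MKL_DEBUG_CPU_TYPE=5
rem Launch MATLAB with the variable in its environment
matlab.exe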
 
So, Intel's flagship "extreme edition" is now, officially, Threadripper's bitch. Good.

It already was from the start. One or two victories in Intel's favor don't change the fact that the Threadripper 3960X and 3970X CPUs are flat-out better. They are substantially more expensive, so no one should expect them to be on the same playing field anyway.
 
This sounds like lazy coding more than anything nefarious, but, hey, I'm a big fan of Hanlon's Razor. Hopefully MathWorks will fix their code base.

No, if you follow the various posts (not all of which are linked here), you'll see that the MKL library (which comes from Intel) checks the CPU capabilities via the CPUID instruction. If the CPU is an Intel one, it sets flags that tell the library which version(s) of AVX and SSE the CPU reports; if it's not an Intel CPU, it says "don't use SSE or AVX at all; fall back on 80387 floating point." (That is, if the CPU says "GenuineIntel" you get modern math; if it says "AuthenticAMD" you don't.) This was discovered all the way back in 2004.
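For anyone curious what that vendor check looks like at the instruction level, here's a minimal C sketch using GCC/Clang's <cpuid.h> (the dispatch messages are illustrative, not MKL's actual internals). CPUID leaf 0 returns the vendor string in EBX, EDX, ECX, and a vendor-gated dispatcher keys on that exact string rather than on the feature bits:

/* Minimal sketch of a vendor-string check like the one described above.
   Requires GCC or Clang on an x86 target; not MKL's actual code. */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    /* CPUID leaf 0: the 12-byte vendor string comes back in EBX, EDX, ECX. */
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    /* A vendor-gated dispatcher keys on this exact string. */
    if (strcmp(vendor, "GenuineIntel") == 0)
        puts("Intel CPU: fast SSE/AVX paths selected");
    else
        puts("Non-Intel CPU: generic fallback path selected");
    return 0;
}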
 
It already was from the start. One or two victories in Intel's favor don't change the fact that the Threadripper 3960X and 3970X CPUs are flat-out better. They are substantially more expensive, so no one should expect them to be on the same playing field anyway.

Bingo! AMD could play the performance-per-dollar card before; now, hopefully, they can charge a premium for being better (stockholder :p). But can they change the mindset of high-end buyers? The other question is whether Intel will be able to play the value card now.
 
Bingo! AMD could play the performance-per-dollar card before; now, hopefully, they can charge a premium for being better (stockholder :p). But can they change the mindset of high-end buyers? The other question is whether Intel will be able to play the value card now.

I think Intel is trying to play the value card with their HEDT products. They know their platform is aging. They know the chips are only marginally better than the ones they are replacing. They also know there is a small market for people who only need 9900K-type performance but want HEDT platform features. They do have competitors to AMD in the form of the 10940X and the 10980XE, which, at least when overclocked, compete well with the 3900X and the 3950X. Unfortunately, I think they are priced too high. Intel is still of the mind that it's the premium option where they do compete. They have the same attitude with the 9900K and KS, which they charge a premium for as "the best gaming CPUs on the planet." Intel isn't wrong, but the small improvement over a 3700X / 3900X in gaming doesn't override the fact that in every other metric, these CPUs get trashed by anything from AMD with a higher core count.

The 10980XE needs to be no more than $800, and they should drop the prices on X299 chipsets ever so slightly to bring the motherboards down a bit. That would be a way to secure some favor in the market and compete. Unless you need the extra PCIe lanes or memory bandwidth, the victories the 10980XE can achieve over a 3950X don't justify the price increase over it. Worse yet, you can't buy these CPUs as an alternative to the 3950X anyway; people are at least getting their 3950Xs as they trickle out, while I haven't even heard of anyone getting their hands on a 10980XE.

BTW, I have reached out to Intel for information on Cascade Lake-X availability and I've not heard anything back. This tells me that it's as much of a paper launch as AMD's TR3 chips are. The only processor I think actually launched on Monday was the 3950X.
 
No, if you follow the various posts (not all of which are linked here), you'll see that the MKL library (which comes from Intel) checks the CPU capabilities via the CPUID instruction. If the CPU is an Intel one, it sets flags that tell the library which version(s) of AVX and SSE the CPU reports; if it's not an Intel CPU, it says "don't use SSE or AVX at all; fall back on 80387 floating point." (That is, if the CPU says "GenuineIntel" you get modern math; if it says "AuthenticAMD" you don't.) This was discovered all the way back in 2004.

I was more going off of MathWorks' lazy programming: blindly using the Intel libraries without even doing a proper check for AMD processors to provide an alternate math library. I believe this part of the Matlab code isn't exactly new, so it was done ages ago and never re-examined. Like it or not (and I don't), Intel's libraries are designed to gimp non-Intel parts, so with that out in the open, due diligence is the responsibility of the software developer. Hopefully this raises enough of a controversy for MathWorks to get off their rear and fix the problem on their end.
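A proper check would key on the CPUID feature bits instead of the vendor string. A hedged sketch of what that vendor-neutral dispatch could look like, using GCC/Clang's __builtin_cpu_supports on an x86 target (the kernel functions here are hypothetical placeholders, not anyone's real library):

#include <stdio.h>

/* Hypothetical compute kernels standing in for real SIMD implementations. */
static void dot_avx2(void)    { puts("AVX2 kernel"); }
static void dot_sse2(void)    { puts("SSE2 kernel"); }
static void dot_generic(void) { puts("generic kernel"); }

int main(void) {
    /* Initialize the CPU feature cache; harmless to call explicitly. */
    __builtin_cpu_init();

    /* Dispatch on what the CPU reports, not on who made it. */
    if (__builtin_cpu_supports("avx2"))
        dot_avx2();        /* any CPU reporting AVX2 gets the AVX2 path */
    else if (__builtin_cpu_supports("sse2"))
        dot_sse2();
    else
        dot_generic();
    return 0;
}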
 
Frankly, I'm surprised the run-time distribution of the math library still processes debug environment variables like MKL_DEBUG_CPU_TYPE. I would have figured Intel would disable them.
 