AMD vs. Intel Contributions To The Linux Kernel Over The Past Decade

Man, Intel is dominating in ways that go unnoticed due to AMD's other successes!

"When it comes to a total commit count by domain, on the Intel side it peaked in 2016 when they also had their record number of kernel developers contributing. Intel overall though remains one of the biggest companies contributing to the upstream Linux kernel. AMD's commit count has been roughly the same for the past three years, again largely driven by their AMDGPU graphics work. It will be interesting to see how 2020 plays out thanks to all of the work Intel is doing on their Xe Graphics as well as enabling other new hardware platforms that are coming up. On the AMD side, their developers remain very busy as well and hopefully will be a record-setting year for them."


https://www.phoronix.com/scan.php?page=news_item&px=AMD-Intel-2010s-Kernel-Contrib
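If anyone wants to reproduce rough numbers like these, here's a minimal sketch (my own, not Phoronix's actual methodology) that tallies kernel commits by author email domain against a local clone of the kernel tree; the checkout path is a placeholder:

```python
#!/usr/bin/env python3
"""Rough sketch: tally Linux kernel commits by author email domain,
in the spirit of the Phoronix stats. Assumes git is installed and
KERNEL_DIR points at a local kernel clone (placeholder path)."""
import subprocess
from collections import Counter

KERNEL_DIR = "/path/to/linux"  # placeholder: your kernel checkout

# Author emails for every commit in the 2010s.
emails = subprocess.run(
    ["git", "-C", KERNEL_DIR, "log",
     "--since=2010-01-01", "--until=2020-01-01", "--format=%ae"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

# Reduce each address to its domain and count.
domains = Counter(e.rsplit("@", 1)[-1].lower() for e in emails if "@" in e)

for domain, count in domains.most_common(20):
    print(f"{count:7d}  {domain}")
```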
 
Linux might be free, but work is work; even if people aren't getting paid for all of it, those folks still have bills to pay. I'm glad AMD is doing well enough that it can greatly increase its contributions, and when you consider Intel's weight versus AMD's, the ratio of effort to money as of 2019/2020 heavily favors AMD.
 
Considering Intel has its own distro and commands so much of the server space, which runs Linux, their dominance here isn't really a surprise.

Ironically, their distro runs surprisingly well on Zen.
 
Linux might be free, but work is work; even if people aren't getting paid for all of it, those folks still have bills to pay. I'm glad AMD is doing well enough that it can greatly increase its contributions, and when you consider Intel's weight versus AMD's, the ratio of effort to money as of 2019/2020 heavily favors AMD.

Free is relative. Linux is the biggest revenue-generating OS in the world. It powers all the infrastructure between me typing this and you reading it. ;) Intel supports Linux as well as they do for the simple reason that 90% of all servers in the world run Linux. Over the last few years Intel has also been ramping up their support of FreeBSD as well. Unix servers have been pretty much all proprietary stuff from the HPs and IBMs of the world for years... but Intel recognizes that has changed a lot as well. To be fair to Intel, the work they have put into Linux the last 10 years or so has really helped the Linux server space grow to what it has become. The old Unix-powered big iron server world has really changed. Intel is the main reason for that. As an example... no way does IBM buy Red Hat 4 or 5 years ago; now they pay $34 billion because they know if they don't, Red Hat would have replaced them, or they would have had to pay even more to buy their way out later.

Intel has always been a good open source supporter... and it's really helped shape the server world. Software is the key and Intel knows it. There is a reason Intel's stock is up 7% today and they blasted market expectations. Software >.< Around here we see what AMD is cooking and we say... who buys Intel today? Security holes... and AMD is killing them with Zen 2 Epyc. Thing is, Intel has poured a ton of resources into making sure their Linux server platforms are insanely rock solid on the software side. Sure, AMD might have the faster chips... and more features... and dominate on hardware costs. Still, Intel has been putting in the work for years. AMD is going to need 5-6 years of beat-the-crap-out-of-Intel server chips before they even start to dent Intel's server market dominance.

I am a big AMD booster. Software, though, is what holds AMD back in the server market, no doubt. They made a very wise decision a couple years ago now, moving their Linux graphics software development to a 100% open source stack. They really do need to step up their platform work. As Michael points out, most of AMD's uptick in commits comes from the graphics side, which is great. But if AMD really wants to take Intel on for real in the server market... they need to basically double or triple their work on Linux CPU and platform stacks. It's been a bit sad the last few years, to be honest... seeing AMD release killer server hardware and then really not push on the software side of things like they should be. The people buying big iron don't care if AMD wins compression and database access benchmarks by 20%... they care that the entire stack running it all was pored over by an AMD software group that has ensured the system is military-grade rock solid.
 
Free is relative. If you count support costs, Linux is a major revenue-generating OS in the world. It may power a lot of the infrastructure between me typing this and you reading it. ;) Intel supports Linux as well as they do for the simple reason that 30% of all servers in the world run Linux. Over the last few years Intel has also been ramping up their support of FreeBSD as well. Unix servers, which run Intel hardware, have been pretty much all proprietary stuff from the HPs and IBMs of the world for years... but Intel recognizes that has changed a lot as well. To be fair to Intel, the work they have put into Linux the last 10 years or so has really helped the Linux server space grow to what it has become. The old Intel-powered big iron server world has really changed. Intel is the main reason for that. As an example... no way does IBM buy Red Hat 4 or 5 years ago; now they pay $34 billion because they know if they don't, Red Hat would have replaced them, or they would have had to pay even more to buy their way out later.

FTFY
 

lol

Fair, yes... the world is less proprietary. I simply give Intel credit for realizing that and putting the work in on the software side to stay relevant. AMD's biggest missed opportunity the last 10 years has been on the software side. They make great server hardware, but hardware is only half the solution in that market. As you point out, support is where the money is... and reducing that cost for end users means you move more hardware. Intel does understand this... their sales decks highlight total cost, not just up-front hardware costs. They do so much shit I can't stand... but they understand their core business better.
 
AMD's biggest missed opportunity the last 10 years has been on the software side.

While I doubt any amount of software could have saved Bulldozer... it would have fucking helped.


My biggest complaint concerning AMD is their software support, be that drivers, getting developers on board for features, BIOS / UEFI, shoring up board partner releases, and so on -- both Intel in the CPU space and Nvidia in the GPU space have made significant industry contributions to ensure that workloads run well on their hardware, usually before said hardware releases.


AMD has stepped up significantly over the past few years in both markets and I do hope they keep growing and improving their developer support as their hardware finds more homes.
 
While I doubt any amount of software could have saved Bulldozer... it would have fucking helped.


My biggest complaint concerning AMD is their software support, be that drivers, getting developers on board for features, BIOS / UEFI, shoring up board partner releases, and so on -- both Intel in the CPU space and Nvidia in the GPU space have made significant industry contributions to ensure that workloads run well on their hardware, usually before said hardware releases.


AMD has stepped up significantly over the past few years in both markets and I do hope they keep growing and improving their developer support as their hardware finds more homes.

I agree, but it's funny now after all the countless Intel security patches... maybe not Bulldozer, but Piledriver isn't really holding up that bad, considering. I was gaming on an FX-8370/980 Ti setup up until a couple months ago and never had any problems. Yeah, I'm getting better performance now with a 3800X, but hey, got to remember both Intel and Nvidia have 5x(?) the amount of employees/resources to make said things happen, develop new tech, and find time to work on Linux support at the same time. But AMD has also given the tech world many contributions that both Intel and Nvidia would have tried to make proprietary and keep for themselves to try and milk the consumer and line their pocketbooks. It's not like AMD's just sitting around doing nothing.

just my 2c
 
But AMD has also given the tech world many contributions that both Intel and Nvidia would have tried to make proprietary and keep for themselves to try and milk the consumer and line their pocketbooks.

AMD got their start producing copies of Intel CPUs for IBM -- they didn't really come into their own in the CPU market until the Athlon was launched. The CPUs themselves were solid but man were the platforms -- everything but the CPUs -- an outright shitshow. I lived through it, and even bought a few Pentium IVs before AMD brought out the Athlon64 as a result. I ran Athlon64s and then X2s until Intel released Core 2 -- a design tracing back to the Pentium Pro -- and proceeded to resume burying AMD in performance right up until the Zen 2 / Ryzen 3000 series became available.

Now, Intel's mistakes are legion, but so are their innovations. As risk-averse as they appear to be at times, they've taken some wild risks, with more than a few leaving openings for their competition:
  • AMD's opening for the first Athlon was Intel working on Netburst for the Pentium IV: Intel bet poorly both on the direction of consumer software and on the limits of semiconductor technology, and a hungry AMD picked up right where they left off after gobbling up assets and engineers from DEC's Alpha; note that the Athlon looked and behaved much like a slightly beefier Pentium III, as opposed to the departure that Netburst was
  • With the Athlon64, AMD did two things:
    • They brought the memory controller on-die (something they just undid for the first time with Zen 2 / Ryzen 3000!), which was novel only in that it provided a one-time speed bump; Intel didn't do this until after Core 2, yet Core 2 outperformed AMD's offering across the board; still, this move provided a clear lead for AMD for some time
    • AMD extended x86 to 64 bits -- something that wasn't put to use until years later, but also something that Intel was trying to avoid given their investment in IA64 and Itanium, which, while novel, ended in tears. Extending x86 was trivial -- AMD's real coup here was getting support from Microsoft, which essentially killed whatever momentum Intel had built up toward pushing Itanium into the consumer space
  • With the AthlonX2, AMD had a dual-core CPU that worked well and kept them generally in the lead until Core 2 hit
  • Everything after the AthlonX2 was a wash and sold at a discount versus Core 2 and its successors, especially Bulldozer, which went backward in performance
  • Zen and Zen+ weren't particularly inspiring outside of price, and AMD's traditionally poor platform support really hurt adoption
  • Zen 2 seems to have shored most of that up while providing per-core parity at lower cost and, strikingly, more cores on the consumer platform, while the enterprise Epyc (and prosumer Threadripper derivatives) reached per-core parity with their consumer cousins while providing significantly more cores
The challenge here is that AMD hasn't really 'innovated' so much as 'picked up the slack' when Intel has made exceptionally poor bets. Netburst and Intel's recent fab issues have provided the rare openings that AMD has had a product ready to exploit, and for the size of their operation that is commendable, but it doesn't represent real innovation relative to, say, Intel and Nvidia, which are both much larger companies.

I agree, but it's funny now after all the countless Intel security patches... maybe not Bulldozer, but Piledriver isn't really holding up that bad, considering.

What I find particularly funny is that the vulnerabilities being discovered have been trivial to fix in hardware -- Intel even understood that their CPUs could be exploited, but given how obscure the vulnerabilities are, never expected anyone to try.

Now, that's a bad security practice on the face of it, but realistically the realm of potential security weaknesses in a particular design is so broad that at some point the design team has to prioritize or a product will never get shipped. Further, the Skylake architecture has been on the market longer than most, and far longer than Intel ever planned, to the point that basically every server and most workstations, desktops, and laptops in the world are using it. That unprecedented number of parts fielded means that it's also going to be the #1 targeted architecture by security researchers of all persuasions -- a position AMD could only hope to be in one day, and one I certainly do hope AMD is preparing for, from a security standpoint.
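As an aside, on any reasonably current kernel you can check which of these vulnerabilities your own box knows about and how they're mitigated; a quick sketch reading the standard sysfs interface (added in kernel 4.15 and extended since):

```python
#!/usr/bin/env python3
"""Minimal sketch: print the kernel's view of known CPU vulnerabilities
and their mitigation status. Uses the sysfs interface introduced in
kernel 4.15 (Meltdown/Spectre) and extended since; Linux-only."""
from pathlib import Path

VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULN_DIR.iterdir()):
    # Each file is named for a vulnerability (e.g. spectre_v2) and holds a
    # one-line status such as "Mitigation: ..." or "Not affected".
    print(f"{entry.name:30s} {entry.read_text().strip()}")
```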

I was gaming on an FX-8370/980 Ti setup up until a couple months ago and never had any problems.

Let me say this: there is nothing inherently wrong with Bulldozer. It didn't come close to meeting expectations at the top end, but it was still a decent budget buy for many workloads. I built one for a sibling (FX-8350, I believe), and had no concerns in terms of day-to-day usability.
 
They brought the memory controller on-die (something they just undid for the first time with Zen 2 / Ryzen 3000!), which was novel only in that it provided a one-time speed bump; Intel didn't do this until after Core 2, yet Core 2 outperformed AMD's offering across the board; still, this move provided a clear lead for AMD for some time.

I would hardly call that comparable. You could say they moved the memory controller to the CPU, and with Ryzen 3000, that memory controller is still in the CPU package. They found an innovative way to decouple the CPU cores and the memory controller with minimal impact to latency. As far as designing uarchs, AMD is definitely more bold and innovative, but higher risk (Bulldozer).

You also forgot how AMD influenced Vulkan/DX12 with Mantle by splitting work across as many threads as possible (some workloads can't be split). I doubt MS was going to do that; they are only focused on new features, and they had no vested interest until Mantle came along.
We can thank Freesync for G-Sync now being able to work with "non-certified" monitors.
Open solutions usually win out in the long run. (PhysX, anyone?)
 
As far as designing uarchs, AMD is definitely more bold and innovative, but higher risk (Bulldozer).

Pretty sure Itanium trumps any risk AMD has ever taken. Bulldozer wasn't a risk as much as just an engineering dead end, and one that mirrored Netburst in more than a few ways, which makes the decision to pursue Bulldozer after Intel had shown those avenues to be less fruitful fairly baffling.

You also forgot how AMD influenced Vulkan/DX12 with Mantle

DX12 is a desktop port of the API used on Xbox consoles and had been in development longer than Mantle. Mantle was AMD jumping the gun before the market was ready, and like the many times AMD has done that before, was abandoned for more universal solutions.

We can thank Freesync for G-Sync now being able to work with "non-certified" monitors.

We can thank Nvidia for setting a very high standard for VRR right out of the gate -- still the highest, actually -- and for bringing order to the shitshow that Freesync has been.

Open solutions usually win out in the long run. (PhysX, anyone?)

PhysX is open and quite broadly used today. So is CUDA.

About the only thing Nvidia doesn't open up is the source code for their drivers, which is completely understandable given their significant market lead in innovation.
 
The history of AMD CPU innovation...

K5 - AMD's first in-house x86 design, rather than a copy of an Intel part. It was clock for clock FASTER than the Pentium chips and slotted into the same boards. (Which is why Intel made the Pentium II a slot chip... so they could claim AMD didn't have a license for making slot stuff and kill them off.)

K6 - Actually designed by NexGen, which AMD bought. Performance-wise it beat the Pentium clock for clock... Intel's fab process meant Intel could still make higher-clocked parts. But Intel took a sales shellacking in the low and mid range.

K6-2 - Dynamic instruction reordering (one of the greatest innovations in CPUs in the last 20 years that no one talks about). The K6-2 was faster than the Pentium II most of the time... the PII only won in floating point math, and it wasn't a big win. AMD also introduced the 3DNow! SIMD instructions with the K6-2.
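To illustrate why reordering matters, here's a toy model (entirely my own, nothing like the K6-2's real scheduler, with made-up latencies) showing independent instructions finishing under a slow load instead of stalling behind it:

```python
#!/usr/bin/env python3
"""Toy model (illustration only) of why dynamic instruction reordering
helps: independent work can start while a long-latency load is still in
flight. Single-issue, made-up latencies."""

# (name, dependencies, latency in cycles)
PROGRAM = [
    ("load", [],       10),  # slow memory load
    ("add1", ["load"],  1),  # needs the load result
    ("mul",  [],        3),  # independent of the load
    ("add2", ["mul"],   1),  # needs only the mul
]

def total_cycles(in_order):
    done, prev_issue = {}, 0
    for name, deps, lat in PROGRAM:
        ready = max((done[d] for d in deps), default=0)
        # In-order: an instruction can't issue before its predecessor issued,
        # so a stalled instruction blocks everything behind it.
        issue = max(ready, prev_issue) if in_order else ready
        done[name] = issue + lat
        if in_order:
            prev_issue = issue + 1
    return max(done.values())

print("in-order total cycles:    ", total_cycles(True))   # stalls on the load
print("out-of-order total cycles:", total_cycles(False))  # mul runs under the load
```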

K7 (Athlon classic) - First x86 CPU to hit 1 GHz. The Athlon was also the first commercial CPU to use copper interconnects. The Athlon was designed by Dirk Meyer (AMD's first version of Lisa Su... he was CEO for the "good" years of Athlon - A64); he came from Alpha... with pretty much the entire Alpha design team. Athlon classic was basically designed by the Alpha processor team with a few of the ex-NexGen employees that created the K6. This is why the Athlon used an Alpha EV6 bus, which meant the CPU bus ran at double data rate. The Athlon classic was also one of the first CPUs with a true modern branch prediction unit. They also overhauled the FPU... and K7 is where AMD's FPU designs started beating Intel's.

K7 (Athlon Thunderbird) - AMD introduced (the first I'm aware of) an exclusive cache design. This basically means things in the L1 cache don't have to also be in the L2 cache (which is how the P II / P III / P IV worked). The CPU treated the cache system as one large memory space. Without getting super technical, it was a solution that made a TON of sense for the manufacturing of the time. Exclusive designs have drawbacks... but those are not evident with the small amounts of cache any of those chips used. The main knock on an inclusive design like the one Intel used, where cache data was written back and forth, was that it increased misses. It's one of the reasons the Athlon Thunderbird generation performed better clock for clock. It was an elegant solution for a simpler time.
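To make the exclusive vs. inclusive distinction concrete, here's a toy model (my own illustration with made-up tiny sizes, not Thunderbird's real organization) where a line lives in L1 or L2 but never both, and L1 victims drop into L2, so the effective capacity is L1 + L2:

```python
#!/usr/bin/env python3
"""Toy model of an exclusive L1/L2 pair: a line lives in L1 *or* L2,
never both. On an L2 hit the line moves up to L1 and the L1 victim
drops down to L2. Tiny, fully associative, LRU; illustration only."""
from collections import OrderedDict

L1_SIZE, L2_SIZE = 2, 4                # tiny sizes so the behavior is visible
l1, l2 = OrderedDict(), OrderedDict()  # insertion order doubles as LRU order

def access(addr):
    if addr in l1:                 # L1 hit: just refresh LRU position
        l1.move_to_end(addr)
        return "L1 hit"
    if addr in l2:                 # L2 hit: line moves up, leaving L2
        del l2[addr]
        result = "L2 hit"
    else:
        result = "miss"
    l1[addr] = True
    if len(l1) > L1_SIZE:          # evicted L1 victim drops into L2...
        victim, _ = l1.popitem(last=False)
        l2[victim] = True
        if len(l2) > L2_SIZE:      # ...and L2's own victim leaves entirely
            l2.popitem(last=False)
    return result

for a in [1, 2, 3, 1, 4, 5, 6, 2]:
    print(f"access {a}: {access(a):6s}  L1={list(l1)}  L2={list(l2)}")
```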

K7 (Athlon Palomino) - Hardware data prefetch (gave this gen a 10-15% bump in performance over Thunderbird). They also rearranged the core layout and found a 20% reduction in power consumption... even though it was on the exact same fab process as Thunderbird.

K8 (Athlon64) - x86_64... not sure we really need to say more. AMD invented the 64-bit x86 architecture. On-die memory controller (although, to give credit where it is due, Transmeta was actually the first x86 company to do this).

K10 (Phenom) - The first true quad-core chip, with all four cores on one die.

K10 (Fusion / Llano) - First APU... the Fusion project started 5 years prior, when AMD bought ATI. First gen released in 2011... the third gen is what's in the PS4 / Xbox One.

Bulldozer - Perhaps mistakenly, it was a completely new design... and didn't incorporate any of AMD's previous work. Yes, shades of Netburst. For better or worse it introduced clustered multithreading. CMT is very much like SMT with some advantages... but yeah, we all know AMD's marketing dept dropped the ball, and things didn't perform as people might have expected based on what they were saying. Also like Intel's Netburst bust, AMD attempted to lengthen the pipeline and ran into the same issues with branch prediction. The bottom line with BD... AMD made some bad bets. They bet that software would really start taking advantage of multi-core chips. Their CMT implementation is far superior to SMT... and they were not 100% wrong; CMT does act a lot more like 2 real cores... but in the end it isn't really 2 cores, it's basically a much better hardware version of SMT. If software had been multithreaded better (faster), these would have done better at launch and AMD would be a hero, giving us many more cores than Intel for the same cost. THIS is why BD has actually aged pretty well. On release, with pretty much ALL software being single-threaded or dual-threaded at best... it didn't shine. Today, with much more software able to use 4 cores, BD does A LOT better than any Intel CPU from that time. BD was too early.

Zen - Zen is another completely new arch. They made so many changes I won't even try to list them all... bottom line, it is 50% faster than AMD's previous Excavator chips. They abandoned their CMT design in favor of a traditional SMT implementation (one that is better than Intel's). Zen is clock for clock faster than its Intel counterparts.
Zen+ - Not a lot of real performance gains... but some really innovative features like Precision Boost have been refined. AMD at this point has basically made overclocking pointless.
Zen 2 - The first chiplet-based part ever shipped to consumers. Double-digit IPC gains over Zen+... and 16-core consumer parts. It is by far the most innovative CPU on the market at the moment. Perhaps Intel responds a year or two from now with some type of 3D-stacked chiplet... but until then it doesn't look like Intel has anything but refresh parts. Nothing groundbreaking.

Didn't mean to go crazy on the post... but the bottom line is AMD has been the hungrier company for 25 years and it has shown. They have solved a ton of computing world issues with novel ideas for over 20 years now. Most of their bets have paid off. They found ways to make their K5s and K6s faster at the things regular people use CPUs for versus the Pentiums of the day; the trade-off was FPU. It was a smart move and they sold a TON of K6 parts (it made them a viable company). With K7 they saw an opportunity when Alpha was closing... snapped up that design team and wooed Alpha's head developer, who became their CEO from 2008-2011. That got AMD the Athlon chip, led to HyperTransport, and got AMD into the server business. (The rumor is Meyer left because he wanted to push hard into the server space and the AMD board didn't want to take the risk... which is logical when you see how strong Opteron came out of the gate and then just sat there for years after Meyer was forced out.) K8 was a reaction to Itanium and Intel's goal of eventually blocking AMD from the server market by introducing a server-only ISA they didn't have a license to. Their response of x86_64 was genius, and that Intel didn't see it coming is telling.

Even Bulldozer... we all see it as a failure, and yeah, OK, it wasn't a success. But you can see that AMD saw where things were headed and took a risk. Would Intel ever decide to throw in on a technology like CMT? Hell no... but AMD did take the swing. They bet on the software industry, and well, I guess we all lost. Still, it's easy to go hit YouTube and find tons of videos of people running modern games on FX-8350 chips with modern GPUs. So AMD bet that software would become multithreaded a lot faster than it did and lost. I find it funny that in tests today... AMD's chip is in fact faster than the Intel Ivy Bridge it got killed by. Zen and Zen 2 have both been technical home runs.
 
See who's missing here? Nvidia, because they are proprietary and contribute next to nothing to the Linux kernel.
 
Itanium wasn't Intel's baby, it was from HP.

That's not correct. The Itanic was Intel's baby 100%. HP was allowed in on it for what amounted to a temporary exclusivity agreement on the use of the Itanic, to get a huge jump on other system builders. The point of the Itanic was twofold. The first was to use Intel's stranglehold on the market to force PCs off of x86 in order for Intel to be exclusive again. This would have killed AMD in the long run, as AMD did not have a license to use the Itanic instruction set. There would be no more competition for Intel in the PC space because Intel didn't have to license the Itanic instruction set for anyone else to use. The other reason was to move away from the old x86 instruction set and all the problems that it brought along. The Itanic would allow Intel to basically throw all of that away and start fresh. Intel had planned on starting in the server space with Itanic and, once that was locked up, moving it to the desktop realm as well.

Obviously, this didn't work as planned. AMD came out with the x86_64 instruction set, extending x86's life, and everything else is history, including Intel's adoption of AMD's extensions. Despite AMD's efforts, Intel still could have eventually won with Itanic except for one major flaw: Itanic's x86 translation layer, needed for compatibility for a time, was abysmally slow, and the market decided x86_64 was the better option.

Intel's loss with the Itanic was a huge boon for compatibility down the line and quite a setback for Intel at the time. In a way it was a pity, as with certain workloads the Itanic was a performance monster, which is one of the reasons it was kept around so long even after Intel had killed off any real development.
 
See who's missing here? Nvidia, because they are proprietary and contribute next to nothing to the Linux kernel.

They don't contribute directly to the kernel, but they do contribute indirectly and their drivers are solid with performance on par with their Windows counterparts.

The world's supercomputers aren't complaining about Nvidia drivers.
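For what it's worth, loading Nvidia's out-of-tree driver is exactly what flips the kernel's "proprietary module" taint bit; a quick sketch to check on a Linux box:

```python
#!/usr/bin/env python3
"""Quick sketch: check whether the running kernel has been tainted by a
proprietary module (taint bit 0, flag 'P'), which loading nvidia.ko sets.
Linux-only; reads the standard procfs taint bitmask."""

# Bit 0 = 'P' per Documentation/admin-guide/tainted-kernels in the kernel tree.
TAINT_PROPRIETARY_MODULE = 1 << 0

with open("/proc/sys/kernel/tainted") as f:
    taint = int(f.read())

if taint & TAINT_PROPRIETARY_MODULE:
    print("Kernel is tainted by a proprietary module (e.g. nvidia.ko).")
else:
    print("No proprietary-module taint set.")
```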
 
FIFY
And thanks for proving my point

Nope. Freesync has only been noticeably cheaper in noticeably inferior implementations, and even the very best Freesync implementations fall short of G-Sync. I don't even know if it's possible to build a Freesync monitor that is in every way indistinguishable from G-Sync.

Granted, the best Freesync implementations are pretty good, and Nvidia has put together a great certification program to sort the wheat from the ass, and any VRR is usually better than no VRR, but let's not pretend that they're equivalent nor that AMD's involvement wasn't a shitshow.

Itanium wasn't Intel's baby, it was from HP.

SmokeRngs covered it pretty well.

I'm not as pessimistic about IA64; it's the kind of technology that was, and still largely is, ahead of its time. As to Intel's motivations, there were certainly performance objectives there too -- and I'm sure Intel was still a bit miffed at the time that AMD became their chief competitor in the desktop space due to IBM forcing a dual source for Intel's early x86 products.

Still, given how regulators tend to look at monopolies, if Intel had pushed AMD out of the market, even fairly, they'd likely be staring down a breakup. Their US$1BN settlement over AMD's accusations makes that pretty clear, as AMD would have died without it.

See who's missing here? Nvidia, because they are proprietary and contribute next to nothing to the Linux kernel.

The only thing missing is the comprehension of the situation by more biased members and their need to make fanboy quips to support their financial decisions and religious affiliations.
 