AMD Confirms Stadia Will Run on Intel CPUs

AlphaAtlas

[H]ard|Gawd
Staff member
Joined
Mar 3, 2018
Messages
1,713
As one of the world's most pervasive cloud service providers, Google is in a better position to launch a successful game streaming platform than almost anyone. The hardware they choose for the launch of their "Stadia" streaming service will undoubtedly influence future game streaming efforts, which is why AMD's stock price shot through the roof when Google announced they were using AMD GPUs.

However, PCGamesN writer Dave James noticed that Google was conspicuously silent when it came to Stadia's CPUs. They were happy to share clock speeds, cache numbers, and the fact that they're using "custom" x86 chips, but they refused to confirm the vendor of the platform's CPU. Eventually, AMD reached out and said that "the Stadia platform is using custom AMD Radeon datacentre GPUs, not AMD CPUs." Barring any surprise announcements from VIA, that more or less confirms that Stadia will run on some sort of Intel CPU platform, but just why Google refused to mention Chipzilla by name remains a mystery. The author suggested that Intel might not want to associate itself with what might be a "doomed" venture. Maybe Google plans to switch to EPYC CPUs or an unannounced Intel server platform sometime in the future, or maybe they just don't think it's particularly relevant. Whatever the reason may be, I also find the omission curious, and look forward to seeing what happens with Stadia's hardware in the future.

A switch to AMD's EPYC processors has been mooted as a potential future step for Stadia, and Google's Phil Harrison told us himself that "we're just talking about Gen 1 at the moment, but there will be iterations on that technology over time," so there is some potential for a changing of the processor guard either before or after launch. Whatever the truth of the matter is, I still find it beyond strange that no one involved is talking about the Intel CPUs being used for Google Stadia, even if they're not necessarily doing anything that special with regard to the innovative streaming service. Certainly the multi-GPU features on offer with the Radeon graphics cards warranted mention, but even a note on the specs slide alone could have done good things for Intel.
 
Considering that when it was announced they said the CPUs will have hyper-threading, this is pretty obvious.
 
Considering that when it was announced they said the CPUs will have hyper-threading, this is pretty obvious.

They could've (erroneously) been talking about AMD's SMT implementation. However, I admit that's pretty unlikely, seeing as AMD wouldn't want their branding mixed up with Intel's as part of the announcement, for many reasons, so they would have been careful about the wording.
 
Considering that when it was announced they said the CPUs will have hyper-threading, this is pretty obvious.
Link? People often mistakenly use this trademarked term as a general term to describe SMT, like when people use "Google" as a verb to describe a web search.
 
Well, we'll see if the service really takes off once they get things going. On the hardware end, things won't get interesting until there are two real GPU options in a year or two, when Intel is shipping their own GPUs and Google can weigh Intel/Intel against AMD/AMD. Nvidia never was and never will be an option for these types of services. Both Intel and AMD are good open-source citizens, and on the software end they should be basically interchangeable.
 
Isn't ARM working on an x86 execution engine?

There are always Cyrix and others that could be resurrected.
 
Isn't ARM working on an x86 execution engine?

Which costs performance, making it a non-starter. Never mind that ARM isn't high-performance enough to match x86 yet.

There are always Cyrix and others that could be resurrected.

No, because there's no way for them to obtain an x86/x86-64 license.
 
Link? People often mistakenly use this trademarked term as a general term to describe SMT, like when people use "Google" as a verb to describe a web search.

[Screenshot: Stadia specs slide, Screen-Shot-2019-03-19-at-1.28.19-PM.png]


https://www.tomshardware.com/news/google-amd-custom-gpu-stadia-gaming,38865.html
 
They could've (erroneously) been talking about AMD's SMT implementation. However, I admit that's pretty unlikely, seeing as AMD wouldn't want their branding mixed up with Intel's as part of the announcement, for many reasons, so they would have been careful about the wording.

  • "Custom CPU" is Intel parlance (AMD uses "semi-custom")
  • "Hyper-Threading" is Intel parlance (AMD says SMT)
  • 9.5 MB of L2+L3 is the Skylake server cache configuration: 1 MB L2 + 1.375 MB L3 per core.
  • Lisa Su tweet mentioned "AMD GPU"
So everyone except Barron's, PCGamesN, and some others believed it was an Intel CPU.
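The cache math in that list is easy to check: at 1 MB of private L2 plus a 1.375 MB L3 slice per Skylake-SP core, the announced 9.5 MB falls out of a 4-core part. The core count itself is an inference, not something Google stated:

```python
# Sanity-check the Skylake-SP cache arithmetic behind the 9.5 MB figure.
# Per-core numbers are public Skylake-SP specs; the 4-core count is inferred.
L2_PER_CORE_MB = 1.0     # private L2 per core
L3_PER_CORE_MB = 1.375   # shared L3 slice per core
cores = 4
total_mb = cores * (L2_PER_CORE_MB + L3_PER_CORE_MB)
print(total_mb)  # 9.5, matching the "9.5 MB L2+L3" on the Stadia spec slide
```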
 
Link? People often mistakenly use this trademarked term as a general term to describe SMT, like when people use "Google" as a verb to describe a web search.


kleenex
xerox
walkman
ipod
nintendo

Just to name a few cases
 
They are custom Skylake-X CPUs, 4C/8T - the cache gives it away. They use existing infrastructure on which they have been deploying customer VMs for a while now.
So either they built new racks with existing stock of CPUs, ordered more under their master agreement with Intel, or used existing VM racks and added in the GPUs.
 
Barring any surprise announcements from VIA, that more or less confirms that Stadia will run on some sort of Intel CPU platform
Afaik, VIA doesn't have any sort of hyper-threading-like SMT implementation, so I'd count them out here. That leaves Intel or an ARM CPU, but it'd have to be a pretty beefy ARM chip unless they compiled the binaries into native machine code. And I don't know of any ARM SMT implementations either.
 
Yeah... it probably is Intel, but AMD saying "it isn't us" is definitely not the same thing as AMD confirming that Google is using Intel.
 
No, because there's no way for them to obtain an x86/x86-64 license.

Well, they can't, because there is no Cyrix anymore; they're defunct. If they weren't, then sure, they could have; they had an x86 licence... they won all their lawsuits with Intel, and it was proven their x86 implementation had nothing to do with Intel's. In fact, Intel lost lawsuits and was paying Cyrix for their superior floating point implementation, which Intel cribbed.

Anyway, to make the point even more pointless: the Cyrix assets and patents ended up with AMD. lol
 
"Custom x86" could be made by anyone. Not likely AMD, since they denied it was theirs. Probably Intel, but it could be ARM, or a new Google-made CPU. Pretty sure Google has already been making/using custom x86 CPUs for their cloud servers... (might be remembering that wrong, though)
 
"Custom x86" could be made by anyone. Not likely AMD, since they denied it was theirs. Probably Intel, but it could be ARM, or a new Google-made CPU. Pretty sure Google has already been making/using custom x86 CPUs for their cloud servers... (might be remembering that wrong, though)

Well, not anyone can just make x86. It's not open source. In order to make an x86 chip you have to have a licence. Same goes for ARM: to make one you need a licence. The difference is ARM will sell anyone a licence; Intel will not. Intel used to sell licences... and then at some point they decided that was a bad idea. So AMD and VIA, who had licences prior, continued to make x86... and after lots of lawsuits and cross-patents, things have mostly settled out.

Google makes custom server chips, sure, but they are not x86. Google has lots of research projects... and is behind things like TensorFlow.

There is Chengdu Haiguang IC Design Co., which produces x86 chips in China based on AMD's Ryzen. But I doubt that is the answer; I'm sure Occam's razor applies here. :)
 
Well, not anyone can just make x86. It's not open source. In order to make an x86 chip you have to have a licence. Same goes for ARM: to make one you need a licence. The difference is ARM will sell anyone a licence; Intel will not. Intel used to sell licences... and then at some point they decided that was a bad idea. So AMD and VIA, who had licences prior, continued to make x86... and after lots of lawsuits and cross-patents, things have mostly settled out.

Google makes custom server chips, sure, but they are not x86. Google has lots of research projects... and is behind things like TensorFlow.

There is Chengdu Haiguang IC Design Co., which produces x86 chips in China based on AMD's Ryzen. But I doubt that is the answer; I'm sure Occam's razor applies here. :)
Wait a tick. Does Intel own the x86 license, or AMD? While we know you hate Intel, I'm pretty sure AMD owns the licensing rights... I could be wrong, but last I heard Intel didn't ever hold the rights for x86 nor sell licensing. It's been a while since I looked into it, but AMD had always controlled the licensing.
 
Wait a tick. Does Intel own the x86 license, or AMD? While we know you hate Intel, I'm pretty sure AMD owns the licensing rights... I could be wrong, but last I heard Intel didn't ever hold the rights for x86 nor sell licensing. It's been a while since I looked into it, but AMD had always controlled the licensing.

AMD owns the x64 extension; Intel owns x86.

Just as Brian says, they both own parts. x86 was Intel's spec and they own it. AMD created the 64-bit extensions. Intel also owns other spin-offs like SSE, and AMD has their own such extensions that go way back, like 3DNow! (man, that brings back memories). AMD also picked up the patents from Cyrix. Cyrix white-roomed x86 and created their own compatible but different version of x86. Intel sued them more than once over that and lost... a court decided that Cyrix was legally in the clear. Not long after, Cyrix counter-sued Intel, because in Cyrix's work they had used hardware math multipliers instead of the simplified version of Volder's algorithm Intel used to do pseudo multiplication and division. (You may remember early Pentiums having division issues etc.... this was part of Intel's problem as their chips got more complicated.) Cyrix x86 was actually much more accurate; they solved the speed issues that led Intel to go the route they did. It's also part of the same issue GPUs have today, not being able to comply completely with the IEEE 754 floating point spec because they don't use proper registers. Cyrix accomplished their upgrade using some clever register renaming tricks. (GPUs have the same issue today; even the latest Vegas and Turings don't have float overflow registers and the associated triple-check registers for perfect accuracy.)

Anyway, Cyrix sued because when Intel created the Pentium Pro and the PII (and everything since), they adopted the same clever register tricks that allowed for proper math multipliers... they also cribbed some specific power management tricks Cyrix developed. Intel settled that suit.

So, to make a long story short: Intel owns the base spec and many of the multimedia extensions like MMX, SSE, SSE2, etc. AMD owns x86_64, and for the last number of years they have held the Cyrix patents that Intel has a very long-term cross-licence on after their lawsuit settlement with the now-defunct Cyrix. Cyrix dropped their suit against Intel, with Intel paying them for a long-term licence for the tech they stole... and Intel gave Cyrix full access to Intel's patents, which would have allowed them to build chips using things like SSE and anything else Intel had in its portfolio. Intel in no way wanted the Cyrix v. Intel case to actually go through the courts. By the time it would have wrapped up, if Intel had lost (which I think everyone that knew the tech expected), they could have been on to P4 or even Core 2 by then, and could possibly have ended up owing royalties going back years. Who knows: if Cyrix had just hung in for a few more years and taken that all to court, they might have ended up with a massive chunk of change and may well still be making chips today.

Anyway, rambling... the history of x86 is interesting. For the record, I don't hate Intel... they do some stuff that drives me nuts, but they do other things I am a huge fan of, one of which is their top-notch support of open source. With a company the size of Intel, it's possible to love and hate them all at once.
 
Link? People often mistakenly use this trademarked term as a general term to describe SMT, like when people use "Google" as a verb to describe a web search.

Actually, here you go:

"google [...] search for information about (someone or something) on the Internet using the search engine Google [...]"

Just as Brian says, they both own parts. x86 was Intel's spec and they own it. AMD created the 64-bit extensions. Intel also owns other spin-offs like SSE, and AMD has their own such extensions that go way back, like 3DNow! (man, that brings back memories). AMD also picked up the patents from Cyrix. Cyrix white-roomed x86 and created their own compatible but different version of x86. Intel sued them more than once over that and lost... a court decided that Cyrix was legally in the clear. Not long after, Cyrix counter-sued Intel, because in Cyrix's work they had used hardware math multipliers instead of the simplified version of Volder's algorithm Intel used to do pseudo multiplication and division. (You may remember early Pentiums having division issues etc.... this was part of Intel's problem as their chips got more complicated.) Cyrix x86 was actually much more accurate; they solved the speed issues that led Intel to go the route they did. It's also part of the same issue GPUs have today, not being able to comply completely with the IEEE 754 floating point spec because they don't use proper registers. Cyrix accomplished their upgrade using some clever register renaming tricks. (GPUs have the same issue today; even the latest Vegas and Turings don't have float overflow registers and the associated triple-check registers for perfect accuracy.)

Anyway, Cyrix sued because when Intel created the Pentium Pro and the PII (and everything since), they adopted the same clever register tricks that allowed for proper math multipliers... they also cribbed some specific power management tricks Cyrix developed. Intel settled that suit.

So, to make a long story short: Intel owns the base spec and many of the multimedia extensions like MMX, SSE, SSE2, etc. AMD owns x86_64, and for the last number of years they have held the Cyrix patents that Intel has a very long-term cross-licence on after their lawsuit settlement with the now-defunct Cyrix. Cyrix dropped their suit against Intel, with Intel paying them for a long-term licence for the tech they stole... and Intel gave Cyrix full access to Intel's patents, which would have allowed them to build chips using things like SSE and anything else Intel had in its portfolio. Intel in no way wanted the Cyrix v. Intel case to actually go through the courts. By the time it would have wrapped up, if Intel had lost (which I think everyone that knew the tech expected), they could have been on to P4 or even Core 2 by then, and could possibly have ended up owing royalties going back years. Who knows: if Cyrix had just hung in for a few more years and taken that all to court, they might have ended up with a massive chunk of change and may well still be making chips today.

Anyway, rambling... the history of x86 is interesting. For the record, I don't hate Intel... they do some stuff that drives me nuts, but they do other things I am a huge fan of, one of which is their top-notch support of open source. With a company the size of Intel, it's possible to love and hate them all at once.

Then why were Cyrix CPUs so shitty?
 
I wonder if Google is running multiple instances per machine, and that's why they're being vague about the CPU details: because each box has a number of HCC- or XCC-die Skylake-SP chips divided up into a number of 4C/8T or 6C/12T virtual machines, with a dedicated GPU for each via PCIe passthrough.

My only experience with such a config is a single VM with a dedicated GPU on a 6M/12T Opteron / dual-GPU system, so I don't know how well it would scale up. Could they be using something like quad-socket boxes with eight GPUs for density?
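To put rough numbers on that density question, here's a hypothetical back-of-envelope calculation; the socket count, core count, and GPU count are all guesses from the post above, not confirmed specs:

```python
# Hypothetical density math for carving a multi-socket Skylake-SP box into
# 4C/8T game instances, each with one passthrough GPU. All inputs are guesses.
sockets = 4               # the quad-socket guess
cores_per_socket = 28     # a full XCC die (assumption)
vm_cores = 4              # 4C/8T instance, per the announced specs
gpus_per_box = 8          # the eight-GPU guess

cpu_limited_vms = (sockets * cores_per_socket) // vm_cores
vms_per_box = min(cpu_limited_vms, gpus_per_box)
print(cpu_limited_vms, vms_per_box)  # 28 8
```

If those guesses were anywhere near right, each box would run out of GPUs long before it ran out of cores, leaving plenty of CPU headroom for other tenants on the same racks.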
 
i wonder if Google is running multiple instances per machine and this is why they're being vague about the CPU details- because each box has a number of HCC or XCC die Skylake-SP chips divided up into a number of 4C/8T or 6C/12T virtual machines with dedicated GPUs for each via PCIe passthrough.

my only experience with such a config is with a single vm with dedicated GPU on a 6M/12T Opteron / dual GPU system so I don't know how well it would scale up- could they be using something like quad-socket boxes with eight GPUs for density?

I just couldn't care less about Stadia, heck, I think that even AMD doesn't care beyond the point of delivering the hardware they've contracted for.

Two things to keep in mind about Stadia:

1) Google will never solve the latency between the time you press a button and something happens on your screen, because of physics. And because of that, Stadia won't be used by anyone who enjoys high-performance gaming with all the settings cranked to Ultra at 1440p or 4K, with all the eye candy turned on.

2) Stadia is about the future of YouTube. User-generated content is stagnating as everyone copies everyone, production costs are skyrocketing because it's expensive to make quality videos, and there is barely anything original on YouTube anymore other than stuff that just happens and you were lucky enough to film and upload. Stadia will infuse YouTube with new blood, at least in theory.
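The physics floor behind point 1 is easy to estimate. Assuming light in fiber covers roughly 200 km per millisecond, a hypothetical 1000 km path to the data center, and an illustrative 20 ms budget for encode, decode, and display (all three numbers are assumptions, not Stadia measurements):

```python
# Rough lower bound on button-to-photon latency for game streaming over fiber.
# 200 km/ms is the standard ~2/3-of-c approximation for light in glass;
# the distance and the processing budget are illustrative assumptions.
FIBER_KM_PER_MS = 200.0
distance_km = 1000
processing_ms = 20            # assumed encode + decode + display overhead

rtt_ms = 2 * distance_km / FIBER_KM_PER_MS
total_ms = rtt_ms + processing_ms
print(rtt_ms, total_ms)  # 10.0 30.0
```

So even with generous assumptions, you're looking at tens of milliseconds of added input lag before the game even renders a frame, and no data center buildout can get under the propagation term.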
 
Then why were Cyrix CPUs so shitty?

They were not. :) Their 486-instruction chip was pinned for 386 boards. If you upgraded a 386 machine with a Cyrix 486 chip, you were not unhappy. Was it as fast as a fully 486 system? No. And sadly, the cheapo makers loved to make "486" Cyrix systems that were really 386 machines with a Cyrix 486. So consumers got much cheaper machines... and their kids remember them as sucking, not that their parents in some cases paid a grand instead of two. :)

They did the same later with the 586, with chips somewhere in between the 486 and Pentiums that slotted into 486 boards.

Their 686 chips were hands down better than the Pentiums of the day. The main issue there was software optimization. It's not that Cyrix chips weren't equal, and in many ways superior; it's that they never really got any major OEM wins, and seeing as they were almost always in low-ball cheapo OEM machines such as eMachines etc., the major software companies didn't go out of their way optimizing pro software; even games were badly optimized. AMD had the same issue trying to get software companies to optimize for or use their 3DNow! stuff instead of MMX etc. (Intel also spent a lot of money in the shadows making sure the major software makers of the day compiled for their FPU and not for Cyrix.) Not such a small thing either: Intel forced them to waste millions on BS lawsuits, ALL of which Intel lost; regardless, a decade in court vs Intel will cut into your software developer outreach budgets. :)

Their never-released M3 chip would have been one of the first real APUs... it's fun to think about what could have been with Cyrix if they hadn't been burning so much money on legal BS, and had had money to market, support, and continue developing some of the most advanced stuff around. Their MediaGX chips, believe it or not, are still around in updated AMD form. lol
https://en.wikipedia.org/wiki/Geode_(processor)

Who knows where we would be if there had been a legit three-way competition in x86 all these years.
 
"Custom x86" could be made by anyone. Not likely AMD, since they denied it was theirs. Probably Intel, but it could be ARM, or a new Google-made CPU. Pretty sure Google has already been making/using custom x86 CPUs for their cloud servers... (might be remembering that wrong, though)

The instruction set can be re-implemented, but the instruction set itself is patent-protected. And re-implementation costs performance.
 
This is interesting. It would seem an odd time to select Intel as a vendor for a new platform considering all of their supply issues.

They must have had a good reason. I wonder what it is?
 