ARM server status update / reality check

I'm tempted to get one just to... play with it.

Have a few 'junk' monitors and assorted peripherals that would make for a straightforward "1x computer in corner" machine.
 
Yeah, I guess I stand corrected, and the Pi has enough market-share for Broadcom to justify a complete redesign. I have no complaints about this!

That, and the Nvidia Jetson Nano is going to need a $50 price cut (this does everything the Nano does at half the price).
 
Yeah, SQ1 looks like the first serious effort to get ARM on Windows. No surprise it's left to Microsoft to make this happen.

Of course, they announced the Snapdragon 8CX in May, and the Galaxy Book S in early August, and you still can't buy it? The clock is ticking on this thing's marketability :rolleyes:

RISC-V will always be that technology that is "just around the corner," just like Linux on the desktop. It will find niches that give it some momentum (WD using RISC-V in disk controllers, for example), but ARM already has too much market share to give it any easy win.

The people who think RISC-V will be in servers anytime soon make me laugh. These people have not been watching what AMD did with Rome. It's enough of a performance/watt and perf/dollar improvement over Intel to gain x86 some unexpected server momentum, and to completely kill any growth potential for alternative architectures for at least the next couple of years. ARM has first-mover rights, and RISC-V is going to have a hard time getting any traction.
 
The people who think RISC-V will be in servers anytime soon make me laugh.

I think we'll get there, but I really don't think it will be soon. Like any hardware, it'll take a 'killer application' to drive demand, and making the ISA 'free' doesn't help that much given the capital investment that is needed to get a wafer run from a fab in the first place.
 
I think we'll get there, but I really don't think it will be soon. Like any hardware, it'll take a 'killer application' to drive demand, and making the ISA 'free' doesn't help that much given the capital investment that is needed to get a wafer run from a fab in the first place.


There is no "killer app" advantage of RISC-V vs ARM. They're both RISC architectures with similar theoretical perf/watt. As of a recent ARM announcement, both let you add custom instructions to your design, and you have the freedom to build your own custom cores for either architecture.

The only remaining difference is licensing costs, but considering how many ARM OEMs are taking the easy way out (tweaking existing ARM cores rather than building their own from scratch), the license seems well worth the cost of entry.

I don't see that changing anytime soon on the consumer side (there's a complete lack of powerful RISC-V cores for OEMs to choose from and tweak). And as for server, it isn't five years ago: today you have almost a dozen powerful ARM CPUs to choose from, leaving zero opportunities for another server architecture.

And once RISC-V vendors build up a valuable portfolio of powerful and efficient IP cores to compete directly with ARM and win those chip-OEM designs, they're not exactly going to give them away for free. So the cost benefits of an open instruction set are still up in the air. And thanks to ARM letting you customize their cores (for a higher fee), they've been able to essentially centralize CPU development that used to be wasteful and fragmented! As a result, their A7x-series cores are more powerful than they've ever been!

RISC-V is much nicer for students, but I don't see those benefits mattering much once these things are mass-produced and powerful.
 
Amazon has announced new ARM-based servers and AWS services: custom 7nm silicon built on Neoverse cores. Amazon customers can already test them:

https://aws.amazon.com/es/about-aws...eneration-arm-based-aws-graviton2-processors/

Performance and price compared to Skylake and Cascade Lake Xeons

https://aws.amazon.com/es/blogs/aws...ute-optimized-memory-optimized-ec2-instances/

ARM-based AWS instances are coming before Rome-based instances. Amazon will use those ARM servers for its own internal workloads and will also sell them to customers:

https://www.bloomberg.com/news/arti...cloud-hardware-to-outrun-microsoft-and-google
 
There is a reason ARM is being used internally only at first. Any off-the-shelf software will need to be rebuilt for ARM, and x86 emulation on ARM is painfully slow. Microsoft provides it on their ARM version of Windows 10, but only for 32-bit binaries, and performance is poor. Very few vendors are going to want to port their software to ARM; x86 is too common.
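One nuance to the rebuild-everything point: interpreted and JIT-compiled code generally moves to ARM untouched; it's native binaries and compiled extensions that are tied to an instruction set. A minimal sketch of checking which one you're on (stdlib only; the tuple of x86 names is my assumption):

```python
import platform

# Pure-Python code runs unchanged on aarch64; only native binaries
# and compiled extensions are bound to a specific instruction set.
arch = platform.machine()  # e.g. 'x86_64' on a Xeon, 'aarch64' on Graviton
print(f"Interpreter architecture: {arch}")
print("x86 build" if arch in ("x86_64", "amd64", "AMD64") else "non-x86 build")
```
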
 
https://aws.amazon.com/ec2/graviton/

AWS Graviton processors are custom-built by Amazon Web Services using 64-bit Arm Neoverse cores to deliver the best price performance for your cloud workloads running in Amazon EC2. Amazon EC2 provides the broadest and deepest portfolio of compute instances, including many that are powered by latest-generation Intel and AMD processors. AWS Graviton processors add even more choice to help customers optimize performance and cost for their workloads.

The first-generation AWS Graviton processors power Amazon EC2 A1 instances, the first ever Arm-based instances on AWS. These instances deliver significant cost savings over other general-purpose instances for scale-out applications such as web servers, containerized microservices, data/log processing, and other workloads that can run on smaller cores and fit within the available memory footprint.

AWS Graviton2 processors deliver a major leap in performance and capabilities over first-generation AWS Graviton processors. They power Amazon EC2 M6g, C6g, and R6g instances that provide up to 40% better price performance over comparable current generation instances for a wide variety of workloads, including application servers, micro-services, high-performance computing, electronic design automation, gaming, open-source databases, and in-memory caches. The AWS Graviton2 processors also provide enhanced performance for video encoding workloads, hardware acceleration for compression workloads, and support for CPU-based machine learning inference. They deliver 7x performance, 4x the number of compute cores, 2x larger caches, and 5x faster memory compared to the first-generation Graviton processors.
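Taking those multipliers at face value, a per-core figure can be backed out too; this is my arithmetic on the quoted numbers, not an AWS claim:

```python
# Graviton2 vs Graviton1, per the quoted AWS figures above
total_perf_gain = 7.0  # "7x performance"
core_count_gain = 4.0  # "4x the number of compute cores" (64 vs 16)

# If total throughput is 7x on 4x the cores, each core got ~1.75x faster
per_core_gain = total_perf_gain / core_count_gain
print(f"Implied per-core uplift: {per_core_gain:.2f}x")  # 1.75x
```
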
 
There is a reason ARM is being used internally only at first. Any off-the-shelf software will need to be rebuilt for ARM, and x86 emulation on ARM is painfully slow. Microsoft provides it on their ARM version of Windows 10, but only for 32-bit binaries, and performance is poor. Very few vendors are going to want to port their software to ARM; x86 is too common.

PayPal and other companies have been using ARM servers for a while. AWS customers like SmugMug have been on ARM since the first-gen Graviton.

Amazon-Arm-SmugMug-Large.jpg


AWS customers can already test second-gen Graviton


and the datacenter link given above reports some users who have been running first-gen Ampere servers.
 

Amazon chose Neoverse (not Rome) for its internal workloads, and ARM-based AWS instances are available to customers, whereas Rome-based instances aren't deployed yet.

https://www.forbes.com/sites/moorin...viton2-processors-with-ec2-6th-gen-instances/
 
Amazon chose Neoverse (not Rome) for its internal workloads, and ARM-based AWS instances are available to customers, whereas Rome-based instances aren't deployed yet.

Nobody knows what Amazon uses for its internal workloads. I highly doubt they limit themselves to just ARM. If you can prove this, please do so, but don't infer something without a source.

Disjointed metrics and false equivalences. Whatever share ARM has now or grows into doesn't compare to the existing Epyc install base, which in turn doesn't compare to the Intel install base. These things don't change overnight. Both Rome and Neoverse are going to exist in AWS. They are different segments with different market turnover. Starting a pissing contest now will only lead to stage fright.
 
Nobody knows what amazon uses for its internal workloads. I highly doubt they limit themselves to just ARM.

"We're going big for our customers and our internal workloads"

"AWS' initial strategy is to move its internal services to Graviton2-based infrastructure. Graviton2 required significant investment, but AWS can garner returns and improve its operating margins due to the ability to cut out middlemen involved with procuring processors, power savings due to Arm and efficiency gains from optimizing its own infrastructure."

https://www.zdnet.com/article/aws-g...-arm-in-the-data-center-cloud-enterprise-aws/

Both Rome and Neoverse are going to exist in AWS.

Don't mix present tense with future tense. Graviton2-based instances already exist and can be tested by customers now. Rome-based instances aren't deployed yet.
 
"We're going big for our customers and our internal workloads"

"AWS' initial strategy is to move its internal services to Graviton2-based infrastructure. Graviton2 required significant investment, but AWS can garner returns and improve its operating margins due to the ability to cut out middlemen involved with procuring processors, power savings due to Arm and efficiency gains from optimizing its own infrastructure."

https://www.zdnet.com/article/aws-g...-arm-in-the-data-center-cloud-enterprise-aws/

Those quotes are out of context and disjointed. It is Larry Dignan who adds the context afterwards, not Amazon.

AWS is launching new Arm-based versions of Amazon EC2 M, R, and C instance families.

This is at best dishonest. If Amazon does make this move, which I do not discount, it would be big news and probably all over the place. The fact that one blogger picks and chooses a quote does not make this so.

Don't mix present tense with future tense. Graviton2-based instances already exist and can be tested by customers now. Rome-based instances aren't deployed yet.

I quote:
Amazon EC2 M6g instances are currently in preview and will be generally available soon.

IMO preview != deployment, which is why I used the future tense, so as not to mislead.

Edit: To make things clear.

Amazon chose Neoverse (not Rome) for its internal workloads, and ARM-based AWS instances are available to customers, whereas Rome-based instances aren't deployed yet.

You used different terms and generalities to express your bias.

ARM-based != Graviton2
Rome-based not deployed.

This implies deployment of Graviton2, but they are just in preview.
 
Those quotes are out of context and disjointed. It is Larry Dignan who adds the context afterwards, not Amazon.

He interviewed Amazon people and is reporting Amazon plans. Its internal workloads will be handled by ARM servers.

This implies deployment of Graviton2, but they are just in preview.

Graviton2 instances are ready for testing. Rome instances aren't.
 
ThunderX3 will come out in the early part of 2020, with a greater than 2X generational performance improvement compared to TX2.

ThunderX3 will be a monolithic design, not chiplet.

The IPC jump combined with the clock speed jump is expected to be around 50%.

"We have about a 20 percent die area advantage over Naples, and we have a similar power advantage."

"And when we move to 7 nanometers with ThunderX3, we see that our area and power advantage actually gets better. Our area compared to AMD Rome and Intel Ice Lake is better, and our power efficiency will be significantly better."

https://www.nextplatform.com/2019/12/10/looking-ahead-to-marvells-future-thunderx-processors/
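Reading those two claims together (a greater-than-2x generational gain, with roughly 50% of it from IPC and clocks), the remainder has to come from core/thread scaling. Rough arithmetic; this is my inference, not Marvell's statement:

```python
# Quoted ThunderX3 claims, combined
total_gain = 2.0       # ">2X generational performance improvement" vs TX2
per_thread_gain = 1.5  # "IPC jump combined with the clock speed jump ... around 50%"

# Whatever isn't per-thread gain must come from more cores/threads
scaling_gain = total_gain / per_thread_gain
print(f"Implied core/thread scaling: >= {scaling_gain:.2f}x")  # >= 1.33x
```
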
 
He interviewed Amazon people and is reporting Amazon plans.

Yet not a single quote from Amazon in the article says what you are saying. Thus this is his interpretation, not Amazon's.

You choose to state it as fact before it is confirmed by any direct source. At least Jim from AdoredTV understands the difference between fact, leak, and inference. Someday you may reach his level.

Its internal workloads will be handled by ARM servers.

Now this may be true. I personally believe that Graviton2 will be used internally by AWS.

IMO it will not be ubiquitous. The needs of such a large company will require a myriad of compute systems.

Graviton2 instances are ready for testing. Rome instances aren't.

That is a proper statement. See, it's not that hard.
 
EPI uses RISC-V. The first processor will be available next year.

https://www.european-processor-initiative.eu/


And this is supposed to somehow get me worried about someone actually paying money for a RISC-V server chip?

ARM server processors have a five-year head start on RISC-V, and there are almost a dozen different brands established now. That's a very crowded market to make inroads into.

And thanks to Fujitsu's ARM SVE-powered Post-K supercomputer, RISC-V's HPC vector advancements look like child's play by comparison:

https://www.eetimes.com/andes-core-has-risc-v-vector-instruction-extension/#

And even "expensive" ARM is winning distributed cloud compute by default, thanks to Amazon building its own with Graviton2!

Since RISC-V doesn't have a perf/watt advantage, there's not much chance of these server makers completely changing architectures AGAIN! The RISC re-invasion already happened, and developers universally decided the cost of a custom ARM license was worth it to beat x86!

Just because someone gets together to develop a processor doesn't mean it will ever get used. RISC-V is not too late for lower-performance/embedded usage, but server carries a certain level of performance expectations today. You can't hope to make inroads with a ThunderX1-level product when all your competitors are bringing ThunderX3 or better to the table.

RISC-V has a much better chance of appearing in phones than it ever has of making it into a server. Once you add in the cost of making a complex server chip, the added overhead of buying an ARM license is lost in the noise; it's much easier to make a high-performing, fully compatible next-generation ARM core.
 
Now this may be true. I personally believe that Graviton2 will be used internally by AWS.

IMO it will not be ubiquitous. The needs of such a large company will require a myriad of compute systems.

It is mentioned in the Zdnet article why Amazon will use Graviton2 for its internal workloads.

This is easy to check. 64 N1 cores at 2.6GHz achieve about 1310 SPECint2006 and EPYC 7742 does about 1481, but one is a 105W chip and the other is a 245W chip. So Amazon can get similar throughput but at half the power.

EPYC2 is totally destroyed.
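Spelling out the perf-per-watt arithmetic behind "similar throughput at half the power" (scores and TDPs as quoted above; these are SPEC rate estimates and rated TDPs, not measured wall power):

```python
# Figures as quoted in the thread (SPECint2006 rate estimates, TDP watts)
n1_score, n1_tdp = 1310, 105      # 64x Neoverse N1 @ 2.6 GHz reference
epyc_score, epyc_tdp = 1481, 245  # EPYC 7742, 64C Zen2

n1_ppw = n1_score / n1_tdp        # ~12.5 points per watt
epyc_ppw = epyc_score / epyc_tdp  # ~6.0 points per watt
print(f"Throughput ratio (N1 / EPYC): {n1_score / epyc_score:.2f}")  # ~0.88
print(f"Perf/W advantage for N1: {n1_ppw / epyc_ppw:.2f}x")          # ~2.06x
```
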
 
It is mentioned in the Zdnet article why Amazon will use Graviton2 for its internal workloads.

This is easy to check. 64 N1 cores at 2.6GHz achieve about 1310 SPECint2006 and EPYC 7742 does about 1481, but one is a 105W chip and the other is a 245W chip. So Amazon can get similar throughput but at half the power.

EPYC2 is totally destroyed.

For the 100th time: source your scores.

We all know the 1310 is an estimate from AnandTech. The 1481 is from where?

It's freaking hard to source SPECint2006 because it is freaking useless. It was retired in 2017:

https://www.spec.org/cpu2006/results/res2017q2/cpu2006-20170529-47127.html

2360

Yeah it's dual, but it's also a 7601.

Shenanigans
 
For the 100th time: source your scores.

We all know the 1310 is an estimate from AnandTech. The 1481 is from where?

It's freaking hard to source SPECint2006 because it is freaking useless. It was retired in 2017:

https://www.spec.org/cpu2006/results/res2017q2/cpu2006-20170529-47127.html

2360

Yeah it's dual, but it's also a 7601.

1310 is the score given by ARM during the N1 presentation (it is also on the Hot Chips slides), not an estimate from AT. The score of 1481 for 64C Zen2 follows from the AT review of the EPYC 7742.

The spec.org result that you give uses the AMD compiler, which is tuned to inflate scores by breaking subtests such as libquantum (as one can see in the graph). The same EPYC chip using a genuine compiler such as GCC scores about 300 points less.

An EPYC 7601 with GCC8 does 691 points. The Neoverse reference chip does 1310 points with GCC8. Graviton2 would probably do more, and Ampere's chip will do a lot more.
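To put numbers on the compiler effect, here are the quoted figures side by side (halving the dual-socket spec.org run to get a per-socket number is my crude approximation):

```python
# Scores as quoted in the thread
epyc_7601_gcc8 = 691       # EPYC 7601 (32C Zen1), GCC8
epyc_7601_aocc = 2360 / 2  # spec.org dual-socket AOCC result, halved per socket
n1_ref_gcc8 = 1310         # 64C Neoverse N1 reference, GCC8

print(f"Compiler inflation on the same chip: {epyc_7601_aocc / epyc_7601_gcc8:.2f}x")  # ~1.71x
print(f"N1 reference vs 7601, both GCC8: {n1_ref_gcc8 / epyc_7601_gcc8:.2f}x")         # ~1.90x
```
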
 
Those quotes are out of context and disjointed. It is Larry Dignan who adds the context afterwards. Not amazon.

I watched the event, and they said they are moving to ARM for internal workloads.

AWS is launching new Arm-based versions of Amazon EC2 M, R, and C instance families.
This is at best dishonest. If Amazon does make this move, which I do not discount, it would be big news and probably all over the place. The fact that one blogger picks and chooses a quote does not make this so.

Whatever you say

AWS-ECS2-Graviton-2-Instances-Large.jpg
 
Can you link to it? This isn't about which is better. It's about the proper flow of information.

https://www.zdnet.com/article/aws-g...-arm-in-the-data-center-cloud-enterprise-aws/

And AWS' initial strategy is to move its internal services to Graviton2-based infrastructure. Graviton2 required significant investment, but AWS can garner returns and improve its operating margins due to the ability to cut out middlemen involved with procuring processors, power savings due to Arm and efficiency gains from optimizing its own infrastructure.

It was like the third result in a Google search, so it wasn't exactly buried.
 
https://www.zdnet.com/article/aws-g...-arm-in-the-data-center-cloud-enterprise-aws/



It was like the third result in a Google search, so it wasn't exactly buried.

That's the previous link from post 100.

What juanrga's image refers to is this:



It says nothing close to

Amazon chose Neoverse (not Rome) for its internal workloads,

In fact, they state a bit later (around 18 min) that partners are leading the way with it.

Look, I have no doubt it is making inroads into both internal and external workloads, but until I hear it from Amazon, the above statement just doesn't square with the available information.
 
That's the previous link from post 100.

What juanrga's image refers to is this:



It says nothing close to



In fact, they state a bit later (around 18 min) that partners are leading the way with it.

Look, I have no doubt it is making inroads into both internal and external workloads, but until I hear it from Amazon, the above statement just doesn't square with the available information.


The image I posted was to prove that Graviton2 is coming to "M, R, and C instance families," and that your accusation that the journalist Larry Dignan was being "dishonest" when he wrote "AWS is launching new Arm-based versions of Amazon EC2 M, R, and C instance families" was both ridiculous and unfair.

The video you posted isn't the talk I referred to. In fact, in the clip you give us, Andy Jassy is talking about customers and partners of the A1, i.e. the original Graviton chip made of sixteen A72 cores.
 
The image I posted was to prove that Graviton2 is coming to "M, R, and C instance families," and that your accusation that the journalist Larry Dignan was being "dishonest" when he wrote "AWS is launching new Arm-based versions of Amazon EC2 M, R, and C instance families" was both ridiculous and unfair.
The exact context in the article is this:
"We're going big for our customers and our internal workloads," said Raj Pai, vice president of AWS EC2. AWS is launching new Arm-based versions of Amazon EC2 M, R, and C instance families.

I stand by my disjointed comment. These are two separate statements, one by Raj Pai and the other by Larry Dignan. It is bad practice to take a quote as such without context and add another fact. Where did Raj Pai say this? Why is there only this 10-word quote from Raj? If there is a direct source from Raj to Larry, it is in a vacuum.

Moreover, you added this quote

AWS' initial strategy is to move its internal services to Graviton2-based infrastructure. Graviton2 required significant investment, but AWS can garner returns and improve its operating margins due to the ability to cut out middlemen involved with procuring processors, power savings due to Arm and efficiency gains from optimizing its own infrastructure.

without context. They are two separate quotes from two different people, AKA disjointed.

The video you posted isn't the talk I referred to. In fact, in the clip you give us, Andy Jassy is talking about customers and partners of the A1, i.e. the original Graviton chip made of sixteen A72 cores.

So post the link to the talk, or some semblance of what you are referencing.
 
Another rung on the ladder and a bit of a sanity check on this thread

https://www.anandtech.com/show/1557...n1-soc-for-hyperscalers-against-rome-and-xeon

Glancing through, integer throughput looks like 75%+ of x86 per core (about 104% at the chip level, comparing 64-core Zen2 to the 80-core Ampere).

Single-threaded per core; 128-bit single-lane SIMD with fp16.

Could be very good at video, as anyone who's worked with NEON and its cousins understands from color-channel muxing.
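For anyone who hasn't touched NEON: its structured loads (vld4 and relatives) split packed pixel data into per-channel vectors in a single instruction, which is a big part of why these cores do well on video and image work. A rough scalar Python analogue of that deinterleave (the function name is mine, purely illustrative):

```python
# Split packed RGBA bytes into separate channel planes: the scalar
# equivalent of what NEON's vld4 structured load does in hardware.
def deinterleave_rgba(pixels: bytes):
    r, g, b, a = pixels[0::4], pixels[1::4], pixels[2::4], pixels[3::4]
    return r, g, b, a

packed = bytes([10, 20, 30, 255, 11, 21, 31, 255])  # two RGBA pixels
r, g, b, a = deinterleave_rgba(packed)
print(list(r), list(g), list(a))  # [10, 11] [20, 21] [255, 255]
```
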
 
Not sure if these have been shared, but it looks fairly interesting and easy to use; I might pick one up myself.
14″ PINEBOOK Pro LINUX LAPTOP

Pinebook_Pro-photo-1.png

CPU: 64-Bit Dual-Core ARM 1.8GHz Cortex A72 and Quad-Core ARM 1.4GHz Cortex A53
GPU: Quad-Core MALI T-860
RAM: 4 GB LPDDR4 Dual Channel System DRAM Memory
Flash: 64 GB eMMC 5.0
Wireless: WiFi 802.11AC + Bluetooth 5.0
One USB 3.0 and one USB 2.0 Type-A Host Ports
One USB 3.0 Type-C port with alt-mode display out (DP 1.2) and 15W (5V 3A) charging
MicroSD Card Slot: 1
Headphone Jack: 1
Microphone: Built-in
Keyboard: Full Size ANSI(US) type Keyboard
Touch-pad: Large Multi-Touch Touchpad
Power: Input 100~240V, Output 5V 3A
Battery: Lithium Polymer (10,000 mAh)
Display: 14.1″ IPS LCD (1920 x 1080)
Front Camera: 2.0 Megapixels



 