Sun or Oracle SPARC processors

I asked a former instructor about the SPARC processor and he said the following:

"The Sparc processors are able to due a high-level of multi-threaded tasks simultaneously. If you have an application that requires this such as a web server or database server, the ability of running a 64 threads simultaneously is awesome. However, if you do not have the need to process that many threads simultaneously, the cost of the Sparc system is not worth it. Intel and AMD have made huge strides in the past 10 years and their processors do an amazing job. For most applications, one of the AMD or Intel multi-core systems will do the job. If you are using one server to do e-mail, web, database, file-sharing, etc., then an Intel or AMD system is probably the better choice since the required code is so different. I believe Sparc processors are typically socket type processors, but usually you cannot upgrade them like a PC. The motherboard is only designed to handle a certain chip or maybe a few chips of very similar capabilities."

It's still not clear to me whether the SPARC is socket-based or embedded, though, or what they mean by the processors not being upgradeable like Intel and AMD processors. The part about 8 threads per core for a total of 64 or more simultaneous threads does make sense, though, or at least it should, and compared to Intel's 2 threads per core that would be awesome if I had a need for it, but I probably don't. Also, how does this compare to AMD's HyperTransport, which, if I'm not mistaken, is the equivalent of Intel's HyperThreading? What do the SPARC processors look like, especially the current ones, and if they are available separately, how much do they cost and where can I purchase them besides from Oracle?
I currently have a dual-processor Xeon E5-2600 v2 series system as my server for live testing of network configurations, so why would I need a system that uses SPARC processor(s)? At most, even a Xeon E3 system might suit my needs, but I chose the E5 because I had other plans as well, such as folding at least, if not OpenCL or DirectCompute or anything in that neighborhood.
 
You wouldn't need SPARC. It's a RISC processor that thrives on Solaris. Some were socket CPUs, some were embedded on thin clients and servers, and some of the higher-end stuff ran on-board CPUs.
 
You again! ;)
Older SPARC processors were more akin to how most x86 CPUs were from the 2000s to 2010s, but the most recent SPARC processors have changed their designs quite a bit.

Intel HyperThreading, as you stated, allows two threads to run simultaneously per core, whereas on modern SPARC processors, I believe 8+ threads can be run per core, but these cores are much "weaker" in terms of processing power.
Not all cores are equal, and in all honesty, in raw processing power, x86 (Intel) cores are much more powerful, but they do not have the threading capabilities of SPARC, so each has its pros and cons.

Your 2P Xeon E5 system should be more than enough for everything you are running, at least from what you stated.
One thing that really benefits from SPARC processors/systems is massive databases, especially ones coded to take advantage of massive parallel threading, where powerful cores/threads aren't really needed, but a lot of threads are, to handle all of the individual requests (quantity vs. quality, so to speak).
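To make the quantity-vs-quality point concrete, here's a minimal Java sketch of that kind of workload: lots of independent requests that each spend most of their time waiting rather than computing. Everything in it (the fake handleRequest work, the sleep, the request count) is invented for illustration; nothing here is SPARC-specific.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ManyThreadsDemo {
    // Hypothetical stand-in for a database/web request: a short burst of CPU
    // work surrounded by waiting (I/O, locks, memory stalls). Workloads like
    // this keep each core busy only a fraction of the time, so a chip with
    // many hardware threads per core wins on throughput, not on
    // single-thread speed.
    static void handleRequest(int id) throws InterruptedException {
        Thread.sleep(5);                                      // simulate waiting on disk/network
        long x = 0;
        for (int i = 0; i < 10_000; i++) x += (long) i * id;  // small burst of CPU work
        if (x == 42) System.out.println("unlikely; keeps the JIT from removing the loop");
    }

    public static void main(String[] args) throws InterruptedException {
        // Size the pool to the hardware threads the OS reports: 64 on a
        // SPARC T2, but only 4 on a 2-core/4-thread desktop chip.
        int hwThreads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(hwThreads);
        long t0 = System.nanoTime();
        for (int i = 0; i < 10_000; i++) {
            final int id = i;
            pool.submit(() -> {
                try { handleRequest(id); } catch (InterruptedException ignored) { }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.MINUTES);
        System.out.printf("10,000 requests on %d threads: %.0f ms%n",
                          hwThreads, (System.nanoTime() - t0) / 1e6);
    }
}
```

Run it on boxes with different hardware thread counts and the wall-clock time scales with the thread count, not with single-core speed; that's the effect the many-threaded SPARC designs are chasing.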

I believe that almost all SPARC CPUs are socket-based, unless they are included on a daughterboard or module with external cache, in which case the CPU itself would most likely be BGA.
These CPUs would be far beyond the cost of Intel E5 v3 CPUs, and possibly beyond the cost of Intel E7 CPUs as well (multiple thousands of dollars per CPU/module).

As for HyperTransport, that is a system/CPU/controller bus and has nothing to do with threading, and is definitely not similar to HyperThreading in any way, shape, or form.
 
You wouldn't need SPARC. It's a RISC processor that thrives on Solaris. Some were socket CPUs, some were embedded on thin clients and servers, and some of the higher-end stuff ran on-board CPUs.

So what if it is RISC?
Is there something wrong with that? :confused:

SPARC can run a multitude of OS environments, not just Solaris, so that's a bit of an incomplete statement.
Other UNIX and GNU/Linux OSes are well optimized for it in this day and age.
 
You wouldn't need SPARC. It's a RISC processor that thrives on Solaris. Some were socket CPUs, some were embedded on thin clients and servers, and some of the higher-end stuff ran on-board CPUs.

What do they look like, though? Are there any good pictures showcasing them?
 
So what if it is RISC?
Is there something wrong with that? :confused:

SPARC can run a multitude of OS environments, not just Solaris, so that's a bit of an incomplete statement.
Other UNIX and GNU/Linux OSes are well optimized for it in this day and age.

RedFalcon is right. I remember seeing many GNU/Linux OSes that support SPARC, so if I had a need for a SPARC system I would have a wide range of options as far as operating systems go.
 
You don't already have stuff married to SPARC, so you don't need a SPARC processor. Even when you need one, you don't want one.

> socket based or embedded
Today: Socket based.

> processors are still not upgradeable
Most current SPARC CPUs are not customer-replaceable units; you would need an Oracle FSE (field service engineer) to install a brand-new processor. For instance, on the Fujitsu M4000/M5000s, the CPUs are sold together on a daughterboard, and yes, they are expensive. We did one upgrade, and I think four SPARC64 VII+s on two M4000 CPU daughterboards ran $employer ~$30,000.

> 8 threads per core for a total of 64 threads simultaneously
Well, it's not exactly simultaneous. On the T2 and T3 you're running a max of 8 simultaneously (one per core). On the T4, T5, M5, and M6 you get to run 2 threads per core.
This is beneficial when you're doing threaded applications where getting data to the processor is the slowest piece of execution, but it means you're not as hot on single-threaded apps like data analytics.

> where can I purchase them besides from Oracle?
Brand new: from Oracle or an Oracle reseller... but you'll pretty much be buying a whole server in the process. They're not really DIYer territory. You can always eBay stuff, of course.

Basically the modern SPARC CPUs are designed to be just good enough to retain the customers who hopped on the SPARC bandwagon 1995-2005 and haven't converted to Intel yet. There's little compelling reason to buy a SPARC system today unless you're running a life cycle refresh on legacy SPARC gear.
 
> 8 threads per core for a total of 64 threads simultaneously
Well, it's not exactly simultaneous. On the T2 and T3 you're running a max of 8 simultaneously (one per core). On the T4, T5, M5, and M6 you get to run 2 threads per core.
This is beneficial when you're doing threaded applications where getting data to the processor is the slowest piece of execution, but it means you're not as hot on single-threaded apps like data analytics.

Actually, it is 8 threads per core simultaneously, and that is on a T4 from 2011.
Here is info from the wiki article:

An eight core, eight thread per core chip built in a 40 nm process and running at 2.5 GHz was described in Sun Microsystems' processor roadmap of 2009.

That means eight threads simultaneously.
Intel CPUs with HyperThreading only allow two threads per core.

Saying that one does not "need" SPARC CPUs is not true at all, especially in large enterprise environments.
There are large database applications and web hosting applications which can absolutely choke 30-core Intel E7 CPU-based systems, not because they don't have the processing power, but because they don't have the threading capabilities.


Also, if one is still running single to dual-core SPARC CPUs from the mid-90s to 2005, then it is more than time to upgrade.
Even Atom and ARM processors can outperform any of those CPUs in this era.

The dual-core SPARC CPUs of 2005 competed against AMD Athlon 64 X2 (Socket 939) and Intel Pentium D dual-core CPUs.
If one is still running those systems, it is probably due to Oracle licensing and the massive cost of upgrading, but again, those systems are a decade old, and if they are still running, they are probably not worth the cost, or needed, at this point in time.
 
Actually, it is 8 threads per core simultaneously, and that is on a T4 from 2011.
Here is info from the wiki article:

That means eight threads simultaneously.
Actually, the threads are not all executing simultaneously. It's a barrel processor. There's only one thread executing at a time per core on the T1-T3, and the new ones run two at a time.

https://en.wikipedia.org/wiki/Simultaneous_multithreading

Intel CPUs with HyperThreading only allow two threads per core.
Intel HyperThreading, in addition to presenting two threads per core, also executes two threads per core, and that execution is similar to the T4/T5. The T1-T3 only execute one thread per core at a time, even if they have 4, 6, or 8 threads per core.
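A quick way to see why the marketing numbers can mislead: the OS counts hardware thread contexts, not simultaneous execution slots. Here is a trivial check using the standard Java Runtime API (nothing vendor-specific is assumed):

```java
public class HwThreads {
    public static void main(String[] args) {
        // Reports the hardware thread contexts visible to the OS scheduler.
        // Two chips can print similar numbers here while executing very
        // different numbers of those threads per cycle, which is exactly
        // the barrel-vs-SMT distinction discussed above.
        System.out.println("Hardware threads: "
                + Runtime.getRuntime().availableProcessors());
    }
}
```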

Saying that one does not "need" SPARC CPUs is not true at all, especially in large enterprise environments.
There are large database applications and web hosting applications which can absolutely choke 30-core Intel E7 CPU-based systems, not because they don't have the processing power, but because they don't have the threading capabilities.
It's perfectly true. Everybody talks about these phantom SPARC-only scenarios, but nobody provides evidence. Not to mention that the IBM POWER gear can handle anything that SPARC can. But we're bringing up edge cases that only apply to $10-20M machines to discuss the values of slow $50-100K servers that get stomped by Intel-based solutions. If you really truly need a single domain with 32TB of RAM, you quickly know what you need and you aren't asking on the [H] forums.


Also, if one is still running single to dual-core SPARC CPUs from the mid-90s to 2005, then it is more than time to upgrade.
Even Atom and ARM processors can outperform any of those CPUs in this era.

The dual-core SPARC CPUs of 2005 competed against AMD Athlon 64 X2 (Socket 939) and Intel Pentium D dual-core CPUs.
If one is still running those systems, it is probably due to Oracle licensing and the massive cost of upgrading, but again, those systems are a decade old, and if they are still running, they are probably not worth the cost, or needed, at this point in time.

I didn't imply that people were running '95-'05 gear. I only meant that improvements in the SPARC lineup are geared towards retaining existing customers.
SPARC gear around 2003, like the V440, lagged a little behind Sun's V40z (e.g. 4P Opteron 850). This disparity has only gotten worse over time, and people are leaving. Oracle tried to compensate by adjusting the core factor in their favor, but then that left all of the other non-Oracle products billing us 8x per core. There were ways to soften that blow, but ugh. Why bother?
 
Actually, the threads are not all executing simultaneously. It's a barrel processor. There's only one thread executing at a time per core on the T1-T3, and the new ones run two at a time.

Very interesting, I did not know this.
I stand corrected, thanks for the info!
 
Do SPARC processors outperform general x86 processors significantly in Java operations per second (jOPS)? I just wonder whether having very high jOPS performance translates into better application-server JVM performance.
 
Do SPARC processors outperform general x86 processors significantly in Java operations per second (jOPS)? I just wonder whether having very high jOPS performance translates into better application-server JVM performance.

It depends on the system and the CPU(s) in use, but one-on-one, I don't think SPARC would have a clear advantage unless the Java process(es) were optimized for that specific processor.
x86 CPUs have come a long way, and the E7 series is very powerful, with dozens of logical cores once HT is counted.
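If you want to sanity-check this yourself rather than trust vendor numbers, you can approximate the idea crudely: run a fixed unit of Java work on every hardware thread and count completions per second. To be clear, this is a rough homemade sketch, not SPECjbb; the "operation" below is an arbitrary placeholder, and real jOPS figures come from SPECjbb's defined transaction mix.

```java
import java.util.Arrays;
import java.util.Random;
import java.util.concurrent.atomic.LongAdder;

public class MiniJops {
    public static void main(String[] args) throws InterruptedException {
        int threads = Runtime.getRuntime().availableProcessors();
        LongAdder ops = new LongAdder();           // low-contention counter
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                Random rnd = new Random();
                while (!Thread.currentThread().isInterrupted()) {
                    // Arbitrary placeholder "operation": fill and hash a small
                    // array. Swap in something closer to your application
                    // server's real unit of work.
                    int[] a = new int[64];
                    for (int i = 0; i < a.length; i++) a[i] = rnd.nextInt();
                    if (Arrays.hashCode(a) == 0) System.out.print("");
                    ops.increment();
                }
            });
            workers[t].start();
        }
        Thread.sleep(10_000);                      // 10-second measurement window
        for (Thread w : workers) w.interrupt();
        for (Thread w : workers) w.join();
        System.out.printf("%d threads: ~%,d ops/sec%n", threads, ops.sum() / 10);
    }
}
```

Whether a high score here translates into application-server performance still depends on how well your actual JVM workload parallelizes.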
 
You don't already have stuff married to SPARC, so you don't need a SPARC processor. Even when you need one, you don't want one.

> socket based or embedded
Today: Socket based.

> processors are still not upgradeable
Most current SPARC CPUs are not customer-replaceable units; you would need an Oracle FSE (field service engineer) to install a brand-new processor. For instance, on the Fujitsu M4000/M5000s, the CPUs are sold together on a daughterboard, and yes, they are expensive. We did one upgrade, and I think four SPARC64 VII+s on two M4000 CPU daughterboards ran $employer ~$30,000.

> 8 threads per core for a total of 64 threads simultaneously
Well, it's not exactly simultaneous. On the T2 and T3 you're running a max of 8 simultaneously (one per core). On the T4, T5, M5, and M6 you get to run 2 threads per core.
This is beneficial when you're doing threaded applications where getting data to the processor is the slowest piece of execution, but it means you're not as hot on single-threaded apps like data analytics.

> where can I purchase them besides from Oracle?
Brand new: from Oracle or an Oracle reseller... but you'll pretty much be buying a whole server in the process. They're not really DIYer territory. You can always eBay stuff, of course.

Basically the modern SPARC CPUs are designed to be just good enough to retain the customers who hopped on the SPARC bandwagon 1995-2005 and haven't converted to Intel yet. There's little compelling reason to buy a SPARC system today unless you're running a life cycle refresh on legacy SPARC gear.

At that price they're well out of my budget at the moment, and I have no way of financing something that expensive currently, especially if you're talking about $30,000 just for the processors. Thank you for sharing this information, though. Aren't the T5s the most current ones, though, even if they cost just as much if not more?
 
At that price they're well out of my budget at the moment, and I have no way of financing something that expensive currently, especially if you're talking about $30,000 just for the processors. Thank you for sharing this information, though. Aren't the T5s the most current ones, though, even if they cost just as much if not more?

After looking, they really do cost that much, and that is just for the bare hardware, not even counting additional licensing for anything in production.
POWER CPUs from IBM are just like this; both are $$$ and one has to pay to play, sadly.

However, this is where developer boards and/or single-board computers (SBCs) are great.
Sadly, again, there really isn't anything in the area of POWER or SPARC, at least for consumers to use.

I have seen one POWER SBC called P-Cubed, but the listing hasn't been updated since 2012 and one of the developers last reported it was "being worked on" back in 2013, with nothing since.
However, if anyone reads this sometime in the future, please update us as I myself would be very interested in a SPARC or POWER SBC or dev board. :)
 
At that price they're well out of my budget at the moment, and I have no way of financing something that expensive currently, especially if you're talking about $30,000 just for the processors. Thank you for sharing this information, though. Aren't the T5s the most current ones, though, even if they cost just as much if not more?

Well that was 4 procs for $30,000, with the daughterboards.

Without an existing chassis you'll pretty much be buying the whole server as one piece. You can get a Fujitsu M10-1 down to US$16,323.00 MSRP. Push for a discount.

Not much of a DIYer item really; you'd be stuck with old gear.

If your employer is interested, Oracle might have a demo pool unit you can mess with. You'd have to reach out to them to set up details.
 
http://www.tyan.com/campaign/openpower/

Not truly SPARC, but still RISC; based on the IBM PPC stuff. The OpenPOWER campaign is going to get us some good lower-cost (still not cheap) PPC gear.

This is using POWER8, not PowerPC, just FYI. ;)
It is definitely an interesting platform, but at almost $3k, that's a bit out of most individuals' "fun/experiment" budget.

POWER, SPARC, Itanium, and to a lesser extent, MIPS and PA-RISC, are all very, very expensive platforms.
PowerPC, as much as I love it, is far more affordable, but is also incredibly aged and is virtually a dead CPU architecture outside of supercomputers, sadly.

Even the dual "G5" IBM 970MP 2.5GHz CPUs found in the Apple G5 Quad are barely able to keep up with the quad-core A9 ARM CPU in the ODROID-U3 system in my sig.
Not to mention they consume around 500+ watts of power under load, compared to a few watts from the A9; we won't even go there with the heat-output difference! :eek:

It would seem that for most, even on this forum, x86 and ARM are going to be the two primary architectures for years to come.
Anything else just isn't practical for the cost, is obsolete, or is a dead architecture with little to no support, at least for an average user.

For a [H] user however... :cool:
 
For an end-user, sure. Companies are still buying those IBM POWER and Oracle SPARC systems, though. These days you just end up virtualizing it unless you've got a highly specialized workload that needs a large system.

As to the OP, I don't think you're going to need a SPARC-based system for anything. These days they're mostly centered around running Oracle databases and Java processes in large quantities.
 
Where can one find motherboards for such chips? I knew IBM makes chips, but I thought they were mainly for consoles such as the Xbox and PS. I didn't know they made chips suitable for a desktop.
 
The last ones available were the Apple G5 and G5 "Quad" with IBM 970FXs and 970MPs, respectively.
Yes, the XB360 and PS3 use IBM PowerPC CPUs, but both are specialized, and running anything on them will be very difficult, almost impossible on the 360.

It is possible to run GNU/Linux on the Gamecube (I've done this).
However, the main issue with all of these is limited system memory.

As for anything else PowerPC, most systems will be Apple/Motorola systems, and while they are fun, they are horribly dated for anything other than hobbyist tasks.
 
Lots of outdated knowledge here. SPARC is the fastest-evolving CPU today, and the fastest servers on the planet are SPARC. Oracle has released five generations of SPARC in four years. Every generation has doubled the performance, or even more. For instance, the step from the T4 to the T5 servers was more like 4x the performance. That is why they are the fastest on the planet. For instance, look at the SAP enterprise business benchmarks: it is all SPARC at the very top.

SPARC and POWER differ from x86 in that they can be built into large scale-up servers with 32 sockets, 64 sockets, or more. The largest x86 server until last year was 8 sockets, and just recently a 16-socket x86 server was released. SPARC and POWER have had 32 sockets for decades, so their operating systems are very optimized for large socket counts. Linux, Windows, and the other x86 operating systems just recently ventured into the 16-socket arena, so they are not mature there and their scaling sucks badly. So if you need to tackle large business workloads, you need SPARC or POWER, because they offer the largest business servers. The largest POWER8 server, called the E880, has 16 sockets and 16 TB RAM; it must be considered midsize, so IBM has no large business servers anymore. The largest SPARC server from Oracle is the M6-32, with 32 sockets and 32 TB RAM. Fujitsu has the M10-4S with 64 sockets and 32 TB RAM. Both run Solaris. The Fujitsu server has a SPARC CPU with a few strong threads; it is a variant of the supercomputer "K" CPU.

The SGI UV2000 is a Linux cluster, and clusters cannot run business workloads; they can only run scientific parallel computations. It has 10,000s of cores and 64 TB RAM, i.e., it is a small cluster. Business workloads are not parallel, so you cannot use a cluster; you need a large single scale-up server, not a scale-out cluster.

Actually, the threads are not all executing simultaneously. It's a barrel processor. There's only one thread executing at a time per core on the T1-T3, and the new ones run two at a time.

Intel HyperThreading, in addition to presenting two threads per core, also executes two threads per core, and that execution is similar to the T4/T5. The T1-T3 only execute one thread per core at a time, even if they have 4, 6, or 8 threads per core.
Studies from Intel show that an x86 server CPU under full load idles 50% of the time, waiting for data from RAM, because of constant cache misses. All CPUs have always been plagued by this "secret". CPUs have gotten faster, but RAM has not. The speed gap between CPU and RAM is huge, so CPUs spend a lot of time waiting; this is why they have huge caches, complex prefetch logic, etc.

And when an x86 server CPU switches threads, it takes hundreds (or even thousands) of clock cycles before it can continue running the other thread.

Contrast this with the SPARC barrel CPUs. Under max load, SPARC idles 5-10% of the time. This is unique and unheard of. The secret is that as soon as one thread stalls and needs to wait for data, the core switches immediately (in 1 clock cycle) to another thread and continues processing while the data arrives. So there is no more waiting for data with the SPARC CPUs. And that is why a 1.2 GHz SPARC T2+ CPU could easily beat IBM POWER6 CPUs at 5 GHz, and easily beat x86 CPUs at 3 GHz. They are unique. For instance, in the SIEBEL v8 benchmarks, you need 14 (fourteen) IBM POWER6 CPUs at 5 GHz to match four (4) SPARC T2+ CPUs, each running at 1.2 GHz.

So, when Intel is running two threads at a time, shifts between threads take ages, and it waits for data, whereas SPARC never idles and always processes data; there is a huge difference in design and performance.
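The latency-hiding argument can be put into a back-of-the-envelope model. In the sketch below, each thread computes for C cycles and then stalls for S cycles; with N thread contexts and a free switch, core utilization is roughly min(1, N*C/(C+S)). All the numbers are invented for illustration, not measured SPARC or Xeon figures:

```java
public class BarrelModel {
    // Toy model of latency hiding: a thread computes for `compute` cycles,
    // then stalls for `stall` cycles waiting on memory. With n contexts and
    // a 1-cycle switch, the core stays busy as long as the other n-1 threads
    // have enough work to cover one thread's stall.
    static double utilization(int n, double compute, double stall) {
        return Math.min(1.0, n * compute / (compute + stall));
    }

    public static void main(String[] args) {
        double c = 20, s = 180;  // hypothetical: 20 cycles of work, 180 stalled
        for (int n : new int[] { 1, 2, 4, 8 }) {
            System.out.printf("%d thread contexts -> %.0f%% busy%n",
                              n, 100 * utilization(n, c, s));
        }
        // Prints 10%, 20%, 40%, 80%: more contexts hide more stall time,
        // which is the effect being argued over in this thread.
    }
}
```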

SPARC is designed for massive throughput, serving 1000s of client PCs. Intel cannot handle that workload, as it has to wait every time it switches to serve another client. Intel makes a desktop CPU: it can run a single thread quite fast, but cannot process many threads. This suits a single user running a few programs. It is not fit for serving 1000s of client PCs as a server. Look at the SAP benchmark: x86 has very low scores and cannot serve many users.

It's perfectly true. Everybody talks about these phantom SPARC-only scenarios, but nobody provides evidence. Not to mention that the IBM POWER gear can handle anything that SPARC can. But we're bringing up edge cases that only apply to $10-20M machines to discuss the values of slow $50-100K servers that get stomped by Intel-based solutions. If you really truly need a single domain with 32TB of RAM, you quickly know what you need and you aren't asking on the [H] forums.
IBM POWER cannot handle the largest SPARC workloads, as the largest POWER8 server only has 16 sockets and 16 TB RAM.


I didn't imply that people were running '95-'05 gear. I only meant that improvements in the SPARC lineup are geared towards retaining existing customers.
SPARC gear around 2003, like the V440, lagged a little behind Sun's V40z (e.g. 4P Opteron 850). This disparity has only gotten worse over time, and people are leaving.
This is outdated. SPARC servers are the fastest now; the 64-socket Fujitsu SPARC M10-4S beats the 16-socket POWER8 E880.

This year the SPARC M7 CPU will arrive. It has 32 cores and 256 threads, and addresses 2 TB RAM. The SPARC M7 CPU will do 120 GB/sec SQL queries, whereas an x86 does... 5 GB/sec(?) SQL queries. The M7 server will have 32 sockets, 1,024 cores, 8,192 threads, and 64 TB RAM. If you run large Enterprise workloads such as databases from RAM only, it will be very, very quick. SAP HANA is a clustered database running on a cluster, and a single large 64 TB RAM server is faster than many smaller x86 nodes totaling 64 TB RAM. So this SPARC M7 server will beat a SAP HANA cluster easily. The SPARC M7 CPU is also invulnerable to Heartbleed attacks: each process gets a unique ID, so it can only access RAM bytes with the same ID, and a process cannot go out of array bounds (like Heartbleed does), etc. The SPARC M7 CPU is 4x faster than today's SPARC M6 CPU, which holds several world records today:
http://www.enterprisetech.com/2014/08/13/oracle-cranks-cores-32-sparc-m7-chip/

The upcoming Fujitsu SPARC XIfx CPU, with 32 cores and two strong threads per core, will do 1,100 GFLOPS, whereas the fastest x86 and POWER8 both do 400 GFLOPS. Here I have excluded x86 CPUs with an integrated graphics compute unit; remove the GPU unit and the performance of the x86 goes down the drain. The SPARC XIfx, by contrast, is a very strong general-purpose CPU without a separate specialized compute unit.

Generally, SPARC and POWER enterprise business servers are very, very expensive. For instance, the POWER6 server used for the old TPC-C record had 32 sockets and cost $35 million. No typo. Whereas a cluster with 10,000s of cores and 64 TB RAM is very cheap (it is just a bunch of PCs on a fast switch, and PCs are cheap). However, Oracle will soon release a cheap SPARC CPU called Sonoma, which will have the Heartbleed immunity from the SPARC M7 CPU, etc. It will be quite powerful and compete with x86 Xeons.
 
^ Do you have any links or references to back any of this???

Here I have excluded x86 CPUs with an integrated graphics compute unit; remove the GPU unit and the performance of the x86 goes down the drain.
That makes absolutely no sense whatsoever.
Unless a program is taking advantage of said iGPU, that would not affect the floating point performance of an x86 CPU at all, and in fact, would decrease the power and heat, if anything.

Just because SPARC is specialized at doing a lot of little tasks doesn't make it any better for single large tasks, which the newer SPARC CPUs fail at miserably, not because they are bad, but because they weren't designed for it.
Each CPU has its own workload and functionality which it can take advantage of.

Studies from Intel show that an x86 server CPU under full load idles 50% of the time, waiting for data from RAM, because of constant cache misses. All CPUs have always been plagued by this "secret". CPUs have gotten faster, but RAM has not. The speed gap between CPU and RAM is huge, so CPUs spend a lot of time waiting; this is why they have huge caches, complex prefetch logic, etc.

As for the x86 memory issue you listed, you really need to show documentation or proof of this before I, or anyone else, believes a word of that.
I'm not saying x86 is perfect, far from it, but these are some pretty wild accusations toward things I've never seen in real-world or synthetic scenarios.

IBM POWER cannot handle the largest SPARC workloads, as the largest POWER8 server only has 16 sockets and 16 TB RAM.
Did you forget about IBM's mainframes?
Also, x86 has more than 16-socket systems, or did you forget about blade servers, too?

I'm not saying you are wrong, but a lot of this just screams complete bull, and I hope you can provide some documentation for your claims.
 
^ Do you have any links or references to back any of this???
"You know me, you got an OPP?" :)

Sure, what links do you want? Everything I say I have read in articles. Mathematicians don't lie.


That makes absolutely no sense whatsoever.
Unless a program is taking advantage of said iGPU, that would not affect the floating point performance of an x86 CPU at all, and in fact, would decrease the power and heat, if anything.
To take advantage of the GPU built into an Intel CPU, you need to program your software that way: use SIMD instructions, etc.

Whereas on the SPARC Fujitsu XIfx, you just run your normal code, but much faster.

Just because SPARC is specialized at doing a lot of little tasks doesn't make it any better for single large tasks, which the newer SPARC CPUs fail at miserably, not because they are bad, but because they weren't designed for it.
The new SPARC CPUs are designed for large tasks. The old SPARC T1, T2, and T3 were designed for many, many small tasks, true. But not the newer Oracle M5, M6, M7, etc. Oracle's big business is doing large business tasks. It wouldn't make sense if Oracle created SPARC servers that could not drive the Oracle database or business Enterprise workloads. In fact, Oracle SPARC servers are very cheap (compared to IBM) and very fast at databases, so the SPARC servers are a cheap vehicle for driving database sales (which is what Oracle earns money on). Oracle doesn't earn much money on hardware, but on database licenses.

In fact, Oracle controls the whole stack: CPU, motherboard hardware, operating system, middleware (Java), the database layer, and some of the business enterprise layer such as SIEBEL, etc. This means that Oracle can integrate everything very tightly so it runs much faster than if you assembled the parts yourself. For instance, Solaris now takes advantage of new CPU instructions designed specifically for the Oracle database, so some database operations are now many times faster than competitors' when running on Solaris/SPARC. For instance, there are hardware database accelerators in the SPARC M7 CPU, boosting SQL queries to 120 GB/sec, whereas an x86/POWER8 probably manages 5 GB/sec or so. There are numerous other improvements specifically to boost the Oracle software stack:
http://www.theregister.co.uk/2014/09/29/ellison_sparc_m7/
http://www.theinquirer.net/inquirer...s-sparc-m7-chip-will-put-an-end-to-heartbleed
http://www.enterprisetech.com/2014/08/13/oracle-cranks-cores-32-sparc-m7-chip/

As for the x86 memory issue you listed, you really need to show documentation or proof of this before I, or anyone else, believes a word of that.
I'm not saying x86 is perfect, far from it, but these are some pretty wild accusations toward things I've never seen in real-world or synthetic scenarios.
Actually, every time I mention this, people ask for links, so you are not the first. But I cannot find that Intel report again. To me it sounded obvious; let me explain:

The thing is, desktop workloads, such as converting MP3s, are fine. That whole small workload fits in the small 16 MB CPU cache, so the CPU does not wait for data and runs the computations very efficiently. It runs the calculations in a small, tight for loop over the same computations over and over again. Just like the nodes in a cluster: they all run scientific computations on the same small grid, solving the same partial differential equation numerically with 100x100 data points or so. This is very fast, as no data needs to be fetched from memory.

Server workloads, on the other hand, serve 1000s of clients at the same time. Every client is doing something else and has different data: one is doing GUI, one is doing business calculations, one is doing database work, or the kernel is invoked, etc. 1000s or 10,000s of clients are doing different stuff all the time. Now, how on earth could a server CPU fit all this workload into a small 16 MB CPU cache? It must fit the kernel, the database, the business software, and the data and actions of 10,000s of clients into 16 MB. That is impossible, so you need to go out to RAM every time. And then the CPU does thread switching, jumping from client to client, so the pipeline will stall and the CPU needs to get new data from RAM every time it serves a new client. So the pipeline will be flushed lots of times, and there will be lots of waiting for data. This is obvious, right?

Let us imagine you need to go out to RAM every time you serve a new client. Let us say that it takes 15 ns (in reality it might be 100 ns):
http://norvig.com/21-days.html#answers
At one memory access per 15 ns, you get about 66 million accesses per second, i.e., the equivalent of one access per cycle on a 66 MHz CPU. Remember those? So your Intel x86 Xeon E7 v3 with 18 cores running at 2.8 GHz drops down to 66 MHz. Game programmers that use C++ usually say that when they reach for data in RAM, performance drops 40x. That is why they avoid virtual functions: they thrash the cache because the C++ code jumps around in memory all the time.
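The latency figure is easy to measure yourself. Below is a minimal Java pointer-chase sketch: it walks a random cycle through an array far too big for any cache, so every step is a dependent load from RAM that out-of-order tricks cannot overlap. The array size and step count are arbitrary choices; give the JVM enough heap (e.g. -Xmx512m or more):

```java
import java.util.Random;

public class PointerChase {
    public static void main(String[] args) {
        final int N = 1 << 25;              // 32M ints = 128 MB, far bigger than L3
        int[] next = new int[N];
        for (int i = 0; i < N; i++) next[i] = i;
        // Sattolo's algorithm: shuffle into one big random cycle so the walk
        // visits every slot and the prefetcher cannot predict the next address.
        Random rnd = new Random(42);
        for (int i = N - 1; i > 0; i--) {
            int j = rnd.nextInt(i);         // note: exclusive of i
            int tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }
        // Each iteration loads an address that depends on the previous load,
        // so the CPU cannot overlap the misses.
        int idx = 0;
        final long STEPS = 20_000_000L;
        long t0 = System.nanoTime();
        for (long s = 0; s < STEPS; s++) idx = next[idx];
        long t1 = System.nanoTime();
        System.out.printf("~%.1f ns per dependent load (sink=%d)%n",
                          (t1 - t0) / (double) STEPS, idx);
    }
}
```

On typical hardware of this era it prints a figure in the tens of nanoseconds, which is the gap the barrel design is built to hide.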

If you consider a 2.8 GHz CPU going down to 66 MHz, or C++ game programmers claiming a 40x performance decrease when you leave the CPU cache and go out to RAM, it might be true that server workloads that exclusively jump around in RAM only get 50% CPU utilization under maximal load. What surprises me is that the number is not much lower than 50%. In retrospect, I cannot remember the exact numbers in the Intel study, but it was something like 40% or 60% lower performance; I think I interpolated to 50% and just remembered that number, forgetting all the details.

But on the other hand, SPARC does not take this performance drop, as it just switches threads and continues to work immediately while waiting for data. That is why a 1.2 GHz SPARC T2+ CPU beat four POWER6 CPUs running at 5 GHz on server workloads in the official SIEBEL v8.0 benchmarks.

Did you forget about IBM's mainframes?
IBM mainframes have very slow CPUs; typically a high-end Xeon is more than twice as fast. Haven't I explained this and shown links in the POWER8 thread?

Also, x86 has more than 16-socket systems, or did you forget about blade servers, too?
x86 does not have more than 16-socket systems when we talk about a single scale-up server. If we talk about clusters, sure, there are large clusters consisting of many blades. But there was no x86 server larger than 16 sockets released last year. You are free to link to one if you find any. You will not. x86 cannot do large scale-up single business servers; it can only do large clusters, lots of smaller nodes connected together.

I'm not saying you are wrong, but a lot of this just screams complete bull, and I hope you can provide some documentation for your claims.
I'm not wrong. What links do you want? Tell me and I will show you. Except for that Intel study; but by explaining it, I made my claim of a 50% performance drop when serving many clients credible.



EDIT: Since some time back, Oracle's SPARC barrel CPUs can merge several threads into one if a strong thread is preferred. This can be done on the fly. So SPARC can switch between massive throughput with many threads per core, or one single strong thread per core. The SPARC M7 has a new generation of cores, the S4; the previous CPUs used S3 cores. Here is information about the coming Sonoma SPARC chip. It will be a scaled-down version of the SPARC M7 CPU, with only 8 cores:
http://www.theregister.co.uk/2015/08/24/oracle_sonoma_processor_sparc/
 
Nobody needs a SPARC, because Oracle is a terrible company that wants you to pay thousands of dollars for a BIOS update, and artificially gimps their existing servers so you have to buy the new ones when they come out....

That, and they won't stop emailing you for years after your support ends.

Can you tell I hate Oracle?

Such a shame, as Sun was a great company.
 
Server workloads, on the other hand, serve 1000s of clients at the same time. Every client is doing something else and has different data: one is doing GUI, one is doing business calculations, one is doing database work, or the kernel is invoked, etc. 1000s or 10,000s of clients are doing different stuff all the time. Now, how on earth could a server CPU fit all this workload into a small 16 MB CPU cache? It must fit the kernel, the database, the business software, and the data and actions of 10,000s of clients into 16 MB. That is impossible, so you need to go out to RAM every time. And then the CPU does thread switching, jumping from client to client, so the pipeline will stall and the CPU needs to get new data from RAM every time it serves a new client. So the pipeline will be flushed lots of times, and there will be lots of waiting for data. This is obvious, right?

Thanks for the info, it was informative, but all CPUs, regardless of architecture, experience cache misses, so your point is kind of moot.
It isn't like SPARC is that amazingly special that it can avoid this from happening.
 
Nobody needs a SPARC, because Oracle is a terrible company that wants you to pay thousands of dollars for a BIOS update, and artificially gimps their existing servers so you have to buy the new ones when they come out....

That, and they won't stop emailing you for years after your support ends.

Can you tell I hate Oracle?

Such a shame, as Sun was a great company.
I am not going to say you are wrong; I have never worked with/at Oracle, so I don't know. But the point was that the SPARC servers are the largest business servers on earth right now, with up to 64 sockets, while the competitors stay at 16 sockets (POWER8 and x86).
 
Be that as it may, I will never recommend anyone deal with Oracle, ever. It will not be a pleasant experience unless money means absolutely nothing, I promise.
 
brutalizer, what you've described is a type of simultaneous multithreading, which Intel has also supported in some manner since the P4 under the name of Hyperthreading.

Checking the Intel website just now, it appears you can get an 18-core Xeon that supports 36 threads in simultaneous execution. Granted, this is substantially less than what the latest SPARC processors can do in a single package, but avoiding context switching is not a unique feature of SPARC as you suggest.

Building in the parallelism to support a high level of SMT is a trade-off against single (and fewer) threaded performance. I'm 99% sure that the fact that Intel's SMT has stayed at 2 threads/core is because of a conscious and market driven cost-benefit analysis by their CPU architects - it's not for lack of engineering ability. Performance is one reason that Intel dominates the server market.

Oracle (and IBM) are targeting a niche segment of the server market that Intel is not optimizing for. I don't doubt that there are certain applications where they do (substantially) better, but they don't seem to be offering a broadly superior product in terms of current market usage.

Let us imagine you need to go out to RAM every time you serve a new client. Let us say that it takes 15 ns (in reality it might be 100 ns):
http://norvig.com/21-days.html#answers
At one memory access per 15 ns, you get about 66 million accesses per second, i.e., the equivalent of one access per cycle on a 66 MHz CPU. Remember those? So your Intel x86 Xeon E7 v3 with 18 cores running at 2.8 GHz drops down to 66 MHz.
Both Xeons and SPARCs are going to suffer from cache misses. As I mentioned, the Xeon can also schedule instructions from another thread to cover this delay. Also, basically every desktop and server processor since the late '90s has been pipelined and supported out-of-order execution, both of which may cover the delay as well. Also note that if the workload is substantially memory-bound, then the number of simultaneous threads supported becomes less and less important.

It's difficult to speculate on the performance impact of these type of micro-architectural features, which is why CPU designers rely on simulator models running on actual workloads to determine how to architect a microprocessor. Or, in the consumer world when we can test the actual silicon.
 
Very interesting, @brutalizer, thanks. Are you in this high-scale field, or is it just a knowledgeable hobby?
 
Very interesting, @brutalizer, thanks. Are you in this high-scale field, or is it just a knowledgeable hobby?
Just a hobby.

Checking the Intel website just now, it appears you can get an 18-core Xeon that supports 36 threads in simultaneous execution. Granted, this is substantially less than what the latest SPARC processors can do in a single package, but avoiding context switching is not a unique feature of SPARC as you suggest.
Sure, but a Xeon can only switch between two threads, so it cannot hide latency due to cache misses that often. This is proven in studies by Intel, where a server Xeon CPU idled 50% under max load. But a SPARC barrel CPU can hide the latency much better by switching among 8 threads, which is shown by SPARC having 90% CPU utilization under full load, and by the superior benchmarks where a 1.2 GHz SPARC CPU was many times faster than a Xeon at 2.8 GHz.

If you were correct that Intel can hide latency as well as SPARC, there would be no such studies from Intel, and no superior SPARC benchmarks.

Building in the parallelism to support a high level of SMT is a trade-off against single (and fewer) threaded performance. I'm 99% sure that the fact that Intel's SMT has stayed at 2 threads/core is because of a conscious and market driven cost-benefit analysis by their CPU architects - it's not for lack of engineering ability.
Well, I am 99% sure that Intel doesn't have barrel CPUs because they did not think of such a revolutionary design. If Intel wanted to win the benchmarks, Intel would have copied SPARC, just like IBM POWER has done. The POWER8 is quite similar to the SPARC CPUs now; the POWER6 was a totally different beast than SPARC.

Performance is one reason that Intel dominates the server market.
I thought I just proved that SPARC servers have superior performance? The largest SPARC servers have 64 sockets, versus 16 sockets for x86. And the Oracle SPARC M7 and Fujitsu SPARC XIfx are many times faster than x86. And back in the day, a 1.2 GHz SPARC CPU was many times faster than an x86 Xeon at 2.8 GHz. Not 40% faster, but more like 10x faster, on some benchmarks.

Oracle (and IBM) are targeting a niche segment of the server market that Intel is not optimizing for. I don't doubt that there are certain applications where they do (substantially) better, but they don't seem to be offering a broadly superior product in terms of current market usage.
I thought I just showed that SPARC is superior on server workloads? For instance, don't you call a SPARC M7 CPU doing 120 GB/sec SQL queries superior to an x86 doing 5 GB/sec(?) SQL queries?

Both Xeons and SPARCs are going to suffer from cache misses. As I mentioned, the Xeon can also schedule instructions from another thread to cover this delay.
Of course, but the Xeon can only switch between two threads, so that doesn't hide much latency. That is the reason the Xeon gets 50% CPU utilization.

It's difficult to speculate on the performance impact of these type of micro-architectural features, which is why CPU designers rely on simulator models running on actual workloads to determine how to architect a microprocessor. Or, in the consumer world when we can test the actual silicon.
Just look at the TPC-C or SAP benchmarks. x86 gets 320,000 SAPS and SPARC gets 840,000 SAPS. I would definitely call the SPARC result very much superior to the x86 one. YMMV.
 
Be that as it may, I will never recommend anyone deal with Oracle, ever. It will not be a pleasant experience unless money means absolutely nothing, I promise.
I understand. But if you have the most extreme database workloads, then you have no choice other than Oracle. The largest investment banks running huge database workloads can afford to pay and only care about the performance.

But I would go for a smaller and cheaper database, because I don't have extreme demands.
 
I used to have access to some mid-2000s Sun equipment; I don't recall which CPU, probably too old anyhow. Might have to see if I can find some T5/M5 stuff on eBay for a reasonable price.
 
Sure, but a Xeon can only switch between two threads, so it cannot hide latency due to cache misses that often. This is proven in studies by Intel, where a server Xeon CPU idled 50% under max load.
Looking at idle times of different architectures under different workloads is essentially a meaningless comparison. Even if we assume the workloads of those idle numbers are identical (and they aren't), if the Xeon has 2x the IPC*frequency of the SPARC (and it very well might) then it still has higher computational throughput.
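That point can be reduced to one formula: sustained throughput is roughly IPC x clock x fraction-of-time-busy, so a higher idle fraction can be bought back by a wider, faster core. Plugging in purely hypothetical numbers (none of these are measured values for any real chip):

```java
public class ThroughputCompare {
    public static void main(String[] args) {
        // throughput ≈ instructions/cycle × cycles/second × fraction busy
        // All figures below are invented for illustration only.
        double wideCore   = 2.0 * 2.8e9 * 0.50;  // wide 2.8 GHz core, 50% busy
        double narrowCore = 1.0 * 1.2e9 * 0.90;  // narrow 1.2 GHz core, 90% busy
        System.out.printf("hypothetical wide core:   %.2f G inst/sec%n", wideCore / 1e9);
        System.out.printf("hypothetical narrow core: %.2f G inst/sec%n", narrowCore / 1e9);
        // 2.80 vs 1.08: even at half utilization the wider, faster core comes
        // out ahead, which is why idle time alone proves nothing.
    }
}
```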

But a SPARC barrel CPU can hide the latency much better by switching among 8 threads, which is shown by SPARC having 90% CPU utilization under full load, and by the superior benchmarks where a 1.2 GHz SPARC CPU was many times faster than a Xeon at 2.8 GHz.
This is total speculation on your part. You've apparently found a benchmark where the SPARC does better and are hypothesizing that it's because it does a better job hiding pipeline stalls from cache misses. It's conjecture.

Simultaneous threading is just one method of hiding cache misses, but pipelining and OOE also do this among other things. Almost certainly the Xeon does a better job with OOE and branch prediction.

If you were correct that Intel can hide latency as well as SPARC, there would be no such studies from Intel, and no superior SPARC benchmarks.
Again, it's not a valid comparison. You're quoting an unrelated 90% utilization figure for the SPARC processor. I can show you server benchmarks where the Xeon utilization is close to 100%, and likewise I'm sure there are pathological cases where the SPARC has crappy utilization; it's totally workload dependent.

And again, idle times (especially under different workloads) cannot be used by themselves to compare performance of different architectures.

Well, I am 99% sure that Intel doesn't have barrel CPUs because they did not think of such a revolutionary design.
Barrel processors have been around since the 1960s; they haven't been revolutionary for several decades. If Intel wanted to build one, they could have.

If Intel wanted to win the benchmarks, Intel would have copied SPARC.
Intel does win in "the benchmarks". A quick google search will show you this. I found this in about 30 seconds:
http://www.enterprisetech.com/2014/02/21/stacking-xeon-e7-v2-chips-competition/

Since you mentioned TPC-H, it's worth noting that a Xeon system sits at the top of TPC-E, TPC-H, TPC-VMS. Those are all server workloads. Based on this naive analysis it sure seems like the Xeon is faster.

In any case, I question the utility of these results given the limited number of results that have been submitted. From this it sure looks like Xeon systems generally perform better.

The largest SPARC servers have 64 sockets, versus 16 sockets for x86?
It's not a race to fit as many sockets on a board as possible. For applications like rendering farms and some data centers you can afford to go with loosely coupled cores over a network. If you need something faster, then there are quite a few companies, such as Cray, QLogic, and HP, doing high-speed fabrics with <100 ns interconnect latencies.


I thought I just showed that SPARC is superior on server workloads?
No. And again, "server workloads" are a broad class. It covers a range of software, performance requirements, concurrency, etc. The fact that SPARC does better on a particular class of server workloads does not mean they are broadly superior.
 
Looking at idle times of different architectures under different workloads is essentially a meaningless comparison. Even if we assume the workloads of those idle numbers are identical (and they aren't), if the Xeon has 2x the IPC*frequency of the SPARC (and it very well might) then it still has higher computational throughput.
Not unless it idles 50% of the time. :)

It does not matter how high a CPU's IPC is if it loses the benchmarks, right?

Barrel processors have been around since the 1960s; they haven't been revolutionary for several decades. If Intel wanted to build one, they could have.
Back then, a decade ago, the SPARC Niagara T1 with 8 cores was crazy. Intel and the POWER6 had two cores. They focused on getting as high a clock as possible; remember the bold words from Intel about the Pentium 4? Aiming for sky-high GHz. Just like IBM.

Only later did it turn out that high GHz is not a valid way forward, and that more cores are the future. After the SPARC Niagara T1, everybody went for more cores instead of more GHz. And the SPARC T1 at 1 GHz was like 10x faster than x86 at 2.5 GHz on server workloads, serving many clients. Not 10% faster, but 1,000% faster.

Intel does win in "the benchmarks". A quick google search will show you this. I found this in about 30 seconds:
http://www.enterprisetech.com/2014/02/21/stacking-xeon-e7-v2-chips-competition/
Intel is benchmarking their latest Xeon CPU against several-years-old POWER and SPARC CPUs. Not really fair. Why didn't Intel benchmark against the newer SPARC M6 CPU from 2013? It is like when IBM benchmarked their POWER7 against x86 servers and concluded that POWER7 servers can replace hundreds of x86 servers. I dug a bit, and it turned out that IBM compared the POWER7 against old, idling Pentium 3 servers with 256 MB RAM; no wonder the POWER7 could replace hundreds of old x86 servers. If IBM had compared against the latest x86, it would not have worked.

Since you mentioned TPC-H, it's worth noting that a Xeon system sits at the top of TPC-E, TPC-H, TPC-VMS. Those are all server workloads. Based on this naive analysis it sure seems like the Xeon is faster.
I did not mention TPC-H. I mentioned TPC-C.

Anyway, if you look at all the benchmarks you talk about, all the benchmarked servers are tiny, with at most 8 sockets. Those are not really interesting Enterprise workloads. If you instead look at extreme workloads, you will never see any x86 anywhere; it is SPARC or POWER all the way. For instance, look at TPC-C, which can scale up to large workloads. SPARC has the TPC-C record with 40 million tpmC. In the rest of the benchmarks you mention, the top spot is fought over between 2- and 4-socket servers. That is not really Enterprise. Look at extreme SAP workloads: it is Unix all the way. x86 is at the very bottom, because it only scales to 8 sockets. So, no, x86 does not do for large Enterprise workloads.

I agree that Intel's newest Xeon might sometimes have higher scores than Unix when you look at tiny servers. But that is not really what Unix aims for; it aims for the scalability to tackle the largest workloads.

In any case, I question the utility of these results given the limited number of results that have been submitted. From this it sure looks like Xeon systems generally perform better.
Those workloads are tiny and not really Enterprise. If they started to benchmark huge workloads, x86 would not stand a chance.

It's not a race to fit as many sockets on a board as possible.
No, but the Enterprise business server market is the most lucrative, with lots of money. For instance, one single IBM POWER6 server used for the old TPC-C record cost $35 million (no typo). And large SAP installations can cost $100 million. IBM sells just a couple of hundred mainframes each year, and mainframes account for something like ~12% of IBM's total huge revenue. Enterprise is the sh-t and everybody wants to get in there.

Nobody cares about the HPC market or render farms; there is not much money there. SGI is desperate to fight its way into the hugely lucrative Enterprise market. He who can build the largest and most scalable servers that can tackle the largest Enterprise workloads can price their servers how they want. The prices are ridiculously high. Compare that to an HPC cluster, which is just the price of X number of PCs plus a fast switch, and PCs are cheap. So a large HPC cluster is much cheaper than a single Unix Enterprise server, because it is much more difficult to scale a single scale-up server to tackle large Enterprise workloads than to build a scale-out cluster.

For applications like rendering farms and some data centers you can afford to go with loosely coupled cores over a network. If you need something faster, then there are quite a few companies, such as Cray, QLogic, and HP, doing high-speed fabrics with <100 ns interconnect latencies.
Sure, but such a Cray cluster cannot handle Enterprise business workloads. You must go to a single Unix server with 16/32 sockets, or even 64 sockets, and pay nose-bleed prices for one single server.

No. And again, "server workloads" are a broad class. It covers a range of software, performance requirements, concurrency, etc. The fact that SPARC does better on a particular class of server workloads does not mean they are broadly superior.
I am talking about Enterprise business workloads; those are the server workloads I mean. Not number crunching or anything that can be run on a cluster, but Enterprise workloads that run exclusively on a single large scale-up server with 16/32 sockets. It is impossible to run such business workloads on cheap clusters.
 
I am talking about Enterprise business workloads; those are the server workloads I mean. Not number crunching or anything that can be run on a cluster, but Enterprise workloads that run exclusively on a single large scale-up server with 16/32 sockets. It is impossible to run such business workloads on cheap clusters.
Ok, so you're talking about a small fraction of the server market, which accounts for less than 20% of total revenue and is shrinking. That's less than $10 billion annually.


In the rest of the benchmarks you mention, the top spot is fought over between 2- and 4-socket servers. That is not really Enterprise.
...

Those workloads are tiny and not really Enterprise. If they started to benchmark huge workloads, x86 would not stand a chance.
There are lots of massive enterprise deployments using what you'd consider "small" systems.

The data centers at Google, Facebook, etc. use largely off-the-shelf Xeon (and at times Opterons) clusters. The data sizes in these applications are essentially the largest in the world at over a petabyte in a single building.

Rendering farms have absolutely massive computational requirements. Again, better suited to Xeons.

HPC applications, also dominated by Xeons.

Yes certain workloads will perform better on SPARC; presumably those that benefit from fine-grained parallelism. But it represents a small segment of the overall market.


Enterprise is the sh-t and everybody wants to get in there ...
Nobody cares about the HPC market or render farms; there is not much money there. SGI is desperate to fight its way into the hugely lucrative Enterprise market.
Nope. The HPC market is slightly over $10 billion in size and is expected to grow. That's slightly more than the entire non-x86 server market combined.

Additionally, the vast majority of server revenue goes towards what you'd call "small" systems.


So a large HPC cluster is much cheaper than a single Unix Enterprise server, because it is much more difficult to scale a single scale-up server to tackle large Enterprise workloads than to build a scale-out cluster.
Again, enterprise workloads are more than just SAP. There's data centers, virtualization services, etc. You're focusing on such a niche segment.


Sure, but such a Cray cluster cannot handle Enterprise business workloads.
Intel, AMD, HP, QLogic, and Cray are all investing in low-latency fabrics for scaling computational resources. This is apparently where the majority of the industry thinks things are going.

In AMD's case, they're planning on using SeaMicro's network with ARM processors. I can't really comment on whether that strategy will work out, but apparently somebody there thinks they don't need an SMT-capable high-IPC core to scale for server markets.

You must go to a single Unix server with 16/32 sockets, or even 64 sockets, and pay nose-bleed prices for one single server.
I don't accept that most enterprise applications require 16+ sockets on a single board. The market data suggests otherwise. In any case, the latest Xeons apparently support 32-socket systems, as soon as someone builds one.
 
Ok, so you're talking about a small fraction of the server market, which accounts for less than 20% of total revenue and is shrinking. That's less than $10 billion annually.
Yes, I should be clearer, I admit. I am talking about the _high margin_ server market. Larry Ellison said that he did not care if Oracle's x86 division shrank to zero, because it is a low-margin market. IBM recently sold off its x86 division to low-cost Lenovo because x86 is a low-margin market. HP did something similar because x86 is low margin. These largest server vendors don't earn any money on x86; x86 is not lucrative. You work hard and sell lots of servers, but the effort is not worth it. They care about the Enterprise business server market; that is where all the money is. That is the sh-t.

There are lots of massive enterprise deployments using what you'd consider "small" systems.

The data centers at Google, Facebook, etc. use largely off-the-shelf Xeon (and at times Opterons) clusters. The data sizes in these applications are essentially the largest in the world at over a petabyte in a single building.

Rendering farms have absolutely massive computational requirements. Again, better suited to Xeons.
All of these you mentioned are clusters. And sure, for clusters x86 is fine, because you only need up to a few sockets per node, but lots of these compute nodes. They need to be cheap. Quantity before quality. And that is exactly x86.

OTOH, the Enterprise business is high-margin, and that is where the big money is. All the server vendors are trying desperately to get into the high-margin business market. And that market is dominated by 16/32-socket Unix servers and mainframes. There are no clusters here.

HPC applications, also dominated by Xeons.
Nobody would want to do scientific calculations on expensive Unix servers when you can get cheap x86 clusters. The point is that scientific calculations are embarrassingly parallel workloads, so they run fine on clusters. That is why you use clusters. OTOH, business workloads cannot run on clusters; they can only run on large servers (see the sketch below). Of course, you could use large Unix servers to do HPC calculations as well, but that would not be economical. You could probably buy hundreds of compute nodes for the price of one 16-socket Unix server.
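
To make the scale-up vs scale-out distinction concrete, here is a minimal Python sketch (the workload and numbers are illustrative assumptions, not any vendor's code). The first function splits independent chunks across worker processes the way a cluster job would; the second models a transactional update that must serialize on shared state - the kind of coordination that is cheap inside one coherent-memory box but expensive across a network.

Code:
from multiprocessing import Pool
from threading import Lock

# Embarrassingly parallel (HPC-style): chunks are independent, so the
# work maps cleanly onto N cheap workers with no shared state at all.
def simulate_chunk(chunk):
    return sum(x * x for x in chunk)   # stand-in for a numeric kernel

def hpc_style(data, workers=4):
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(simulate_chunk, chunks))

# Business-style OLTP: every transaction may touch the same shared
# state, so updates must serialize. In one scale-up box this happens
# in coherent shared memory; on a cluster it crosses the network.
balances = {"a": 100, "b": 100}
lock = Lock()

def transfer(src, dst, amount):
    with lock:                          # the serialization point
        if balances[src] >= amount:
            balances[src] -= amount
            balances[dst] += amount

if __name__ == "__main__":
    print(hpc_style(list(range(1000))))   # splits freely across workers
    transfer("a", "b", 25)
    print(balances)                       # {'a': 75, 'b': 125}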

Yes, certain workloads will perform better on SPARC; presumably those that benefit from fine-grained parallelism. But it represents a small segment of the overall market.
It represents the high-margin market. That is what counts. Several (all?) large server vendors are trying to get rid of their x86 divisions and exit x86. More pain than gain. Not worth the effort.

Nope. The HPC market is slightly over $10 billion, and it is expected to grow. That's slightly more than the entire non-x86 server market combined.
Not worth the effort. Sun Microsystems chose to exit the HPC market. It was too volatile. They sold a few tailored supercomputers (large clusters) per year, and if one of those customers could not buy, Sun was toast. The same goes for Oracle. There are too few customers in the HPC market.

- x86: mass market, very low margin.
- Enterprise business: high margin, easy money. Sweet spot.
- HPC: very few customers, and high effort to create a tailor-made HPC cluster for one customer. Nobody wants to become too dependent on a single customer.

Additionally, the vast majority of server revenue goes towards what you'd call "small" systems.
True. But that is the low-margin market. You need to sell maybe 100x more x86 servers to earn the same profit as selling a few Unix servers.

Again, enterprise workloads are more than just SAP. There are data centers, virtualization services, etc. You're focusing on such a niche segment.
"Enterprise workloads" has traditionally meant business servers such as SAP and large business server installations. No clusters.

Intel, AMD, HP, QLogic, and Cray are all investing in low-latency fabrics for scaling computational resources. This is apparently where the majority of the industry thinks things are going.
Yes, but a cluster can never replace one scale-up server when we talk about business workloads.

I don't accept that most enterprise applications require 16+ sockets on a single board. The market data suggests otherwise.
You don't need to accept it. Technology dictates that business workloads can only be run on single scale-up servers with 16 or 32 sockets - if you need extreme performance.

What market data do you speak of?

In any case, the latest Xeons apparently support 32-socket systems, as soon as someone builds one.
Eh... vendors have tried to build large scale-up x86 servers for many, many years. SGI has tried for decades. It seems you believe building a large scale-up x86 server with 16 or 32 sockets is a simple matter? Nobody succeeded until a few months ago. And I bet that the SGI UV300H server with 16 sockets scales awfully badly. It is a first-generation scale-up x86 server. SGI will probably compete on price, because Unix will scale the sh-t out of any x86 server.
 
Nobody cares about the HPC market or render farms; there is not much money there. SGI is desperately trying to fight its way into the hugely lucrative Enterprise market. He who can build the largest and most scalable servers that can tackle the largest Enterprise workloads can price those servers however he wants. The prices are ridiculously high. Compare that to an HPC cluster, which is just the price of some number X of PCs plus a fast switch. And a couple of PCs are very cheap.

Um, no.
I agree with some of what you said, but this section is just laughable.

No, everyone but IBM and Oracle cares about the HPC market; this is why NVIDIA is doing so well.
If no one cared about this, then why are the world's top supercomputers all x86 with CUDA GPUs for HPC?
Shouldn't they all be SPARC or POWER/PowerPC? ;)

So a large HPC cluster is much cheaper than a single Unix Enterprise server, because it is much more difficult to scale a single scale-up server to tackle large Enterprise workloads than to build a scale-out cluster.
I agree with this, again, to a point, but that doesn't mean that no one cares about HPC systems or clusters.
IBM and Oracle may not, but that is because they have no way to apply their licensing/money/lawyers/copyright/suck-your-money-dry tactics to those systems.

IBM and Oracle have become 10% technology and 90% lawyer companies and you know it to be true.
Their processors, under certain specific workloads, are faster than x86 processors, but x86 is very cheap and versatile, and I don't remember anyone needing any licenses to run them, unlike POWER and SPARC systems, which apparently are struggling to stay relevant because cluster systems (aka modern supercomputers) are becoming more cost-effective and are reliable without all of the licensing nonsense from these companies.

As much as I would love to see the death of x86, it won't be at the hands of POWER or SPARC, I can tell you that much.
jimmyb hit the nail on the head with his last post.
 
Um, no.
I agree with some of what you said, but this section is just laughable.

No, everyone but IBM and Oracle cares about the HPC market; this is why NVIDIA is doing so well.
If no one cared about this, then why are the world's top supercomputers all x86 with CUDA GPUs for HPC?
Shouldn't they all be SPARC or POWER/PowerPC? ;)
So you mean that NVIDIA is doing well because the HPC market saved them? That does not sound reasonable. Do you have links?

Top supercomputers are not SPARC/POWER because those chips are much more expensive. Of course you could build a large supercomputer out of SPARC/POWER, but it would be too expensive. Supercomputers are a very niche market, very, very small. Only a few customers buy supercomputers, and you don't want to depend on a few customers. Your business would be very vulnerable.
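
As a back-of-the-envelope illustration of that price gap (every figure here is a hypothetical assumption, not a quoted price):

Code:
# All prices are made-up round numbers for illustration only.
node_price     = 5_000       # one cheap 2-socket x86 compute node
switch_price   = 50_000      # one fast cluster interconnect switch
nodes          = 200

cluster_cost   = nodes * node_price + switch_price   # $1,050,000
unix_16s_price = 2_000_000   # assumed price of one 16-socket Unix box

print(f"cluster:  ${cluster_cost:,}")
print(f"scale-up: ${unix_16s_price:,} ({unix_16s_price / cluster_cost:.1f}x)")

Under those assumptions you get 200 nodes for roughly half the price of the single scale-up machine; change the node count or prices however you like, and the shape of the comparison stays the same.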

The big Unix/Mainframe vendors care about business servers; that is where the lucrative high margins are. HPC is a very, very small niche market that they don't care much about.

IBM and Oracle have become 10% technology and 90% lawyer companies and you know it to be true.
This is true for IBM, as they are exiting the hardware business altogether. They have sold off most of their hardware; I saw a list of the divested tech and it was quite long (I don't remember exactly, but it was something like hard disks, printers, x86 servers, etc.). All that is left is Unix/Mainframes - and as they are very high margin, IBM is not selling them any time soon. IBM does only high-margin business and walks away from low margin. IBM is transforming into a consulting/services company.

IBM earns a lot of money from its patent trolling. At one point, the company was receiving $2 billion annually from it. Twitter paid IBM the handsome sum of $36 million.
http://arstechnica.com/business/2014/03/twitter-paid-36-million-over-ibm-patent-threat/

Here is a telling story about IBM patent trolling:
http://www.techdirt.com/articles/20071021/141623.shtml
"IBM accused Sun of patent infringement, but when Sun engineers and lawyers pointed out how they didn't infringe on the patents in question, IBM's lawyers responded: "OK, maybe you don't infringe these seven patents. But we have 10,000 U.S. patents. Do you really want us to go back to Amarok [IBM headquarters in New York] and find seven patents you do infringe? Or do you want to make this easy and just pay us $20 million?" - Sun payed IBM.

Another patent-trolling story about IBM, told by Gosling, the father of Java:
http://nighthacks.com/roller/jag/entry/quite_the_firestorm
"...In Sun's early history, we didn't think much of patents. While there's a kernel of good sense in the reasoning for patents, the system itself has gotten goofy. Sun didn't file many patents initially. But then we got sued by IBM for violating the "RISC patent" - a patent that essentially said "if you make something simpler, it'll go faster". Seemed like a blindingly obvious notion that shouldn't have been patentable, but we got sued, and lost. The penalty was huge. Nearly put us out of business. We survived, but to help protect us from future suits we went on a patenting binge. Even though we had a basic distaste for patents, the game is what it is, and patents are essential in modern corporations, if only as a defensive measure...."

In fact, the nickname Big Blue comes from their many lawyers in blue suits. At one point they had more lawyers than engineers.

OTOH, Oracle is investing heavily in technology; they have transformed from a software company into a software/hardware company (IBM is exiting hardware, Oracle is entering it). Oracle is investing more money in R&D than Sun ever did. Oracle released five generations of SPARC CPUs in four years. No one can say that Oracle rests on its laurels. They have the best database too. They own the whole stack: hardware, OS, database, middleware/Java, and also some business software. Oracle is aiming to be the best in each layer. It is obvious Oracle is investing very heavily in tech, while IBM is selling off its tech and investing in lawyers instead. So, no, your claim is only true for IBM, not for Oracle. Oracle has a lot of innovative tech. Or do you think anyone can match a SPARC M7 server with 32 sockets, 64TB RAM, 1,024 cores and 8,192 threads? That is a monster!
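
Those totals are just the per-chip specs multiplied out (the M7 is a 32-core chip with 8 hardware threads per core, which is consistent with the numbers above):

Code:
sockets          = 32
cores_per_socket = 32   # SPARC M7: 32 cores per chip
threads_per_core = 8    # 8 hardware threads per core

cores   = sockets * cores_per_socket   # 1,024
threads = cores * threads_per_core     # 8,192
print(cores, threads)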

Oracle's SPARC servers have been more than doubling performance every generation. Is that not dedication to tech? That is why Oracle will soon have the fastest servers on the market. IBM has stopped at 16 sockets with its largest POWER8 server. The only 32- or 64-socket servers on the market now are SPARC.
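
Taken at face value, that compounds quickly; reading ">100% every generation" as a lower bound of 2x across the five generations mentioned above:

Code:
gain_per_gen = 2.0   # ">100% per generation" read as at least 2x
generations  = 5     # the five SPARC generations in four years

print(gain_per_gen ** generations)   # 32.0, i.e. at least 32x overall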

Their processors, under certain specific workloads, are faster than x86 processors, but x86 is very cheap and versatile, and I don't remember anyone needing any licenses to run them, unlike POWER and SPARC systems, which apparently are struggling to stay relevant because cluster systems (aka modern supercomputers) are becoming more cost-effective and are reliable without all of the licensing nonsense from these companies.
It is true that clusters are becoming cost-effective; however, they cannot be used for business workloads. So the high-margin business market belongs to Unix servers with 16/32 sockets. Clusters cannot handle such workloads, so clusters are not taking over Unix. It can't happen.

jimmyb hit the nail on the head with his last post.
I consider that post wrong in many respects.
 