Downfall of UltraSparcs

Long story short, I was digging through clutter and stumbled upon my old Sun machine (I will get the model number later) with an UltraSparc in it. I remember that this thing would destroy the Intel/AMD machines back in the day, but it never took off in the end-user world, and I have to wonder why. To this day it still is not supported by Windows (to my knowledge). For such an old machine, it still runs Solaris/Linux like an older dual core.
 
My friend has one; we tried to figure out how to make it fold, but we gave up.
 
Never took off for end users because they weren't for end users; UltraSparcs went into servers and high-end Unix workstations (and I think there were some UltraSparc laptops too). No Windows because neither Microsoft nor Sun had any motivation to make Windows run on a Sun CPU. NT 3 and 4 did run on some non-x86 processors, but Windows 2000 dropped them all (and later added Itanium), and now NT 6.2 has added support for ARM. What's old is new again.
 
Sometimes marketing, sales promotion, or even silly reasons affect buyers. No buyers, or a poorly run company, and the product goes away.
 
I didn't really think about the whole end-user thing. It makes sense, as the Sun machines I have seen are scarce. It is a very well-built machine with solid parts, which also explains why hardly anybody could afford such a machine. I know Sun/Oracle are big players in the software world, but do they still manufacture any goods? And if not, is it reasonable to assume that they just did not have the market to keep going?
 
For one thing, Unix boxes were always super expensive. Sun did offer some cheapies (Ultra 5/10, Blade 150), but they were cut down at the knees with terrible IDE. During their heyday most people used computers for word processing and games, neither of which required or even ran on Unix. Limited software availability plus high cost is what killed them. Apple would have died too, but they had a much lower price point, which was the only thing that saved them (along with the "omg this is cute" late-90s iMac, etc.).

To put it into perspective, a Sun box would have easily cost 10 grand and an SGI double that or more, and those are just single-CPU desktops.
 
Sun had no chance of convincing regular desktop users to switch from Windows. Even if they could reduce costs, they offered zero backward compatibility with x86 DOS/Windows.

Also, I wouldn't call their GUI at the time "user friendly." Windows 95 and Mac OS had a lot more going for them.
 
Backwards compatibility holds back PCs. Same reason PPC, Itanium, *insert new arch here* didn't take off.
 
Backwards compatibility holds back PCs. Same reason PPC, Itanium, *insert new arch here* didn't take off.

PPC wasn't bad on Macs. The problem there was that IBM couldn't produce the chips Apple needed for their Powerbooks. Weren't the G5's desktop-only due to heat issues?

Itanium was a good cpu in need of a better compiler, IIRC. That and the first round of Itaniums weren't exactly thrilling people. Now Intel's stuck with them thanks to their deal with HP, since HP-UX, OpenVMS, and NonStop all run on Itanium.
 
PPC wasn't bad on Macs. The problem there was that IBM couldn't produce the chips Apple needed for their Powerbooks. Weren't the G5's desktop-only due to heat issues?

Itanium was a good cpu in need of a better compiler, IIRC. That and the first round of Itaniums weren't exactly thrilling people. Now Intel's stuck with them thanks to their deal with HP, since HP-UX, OpenVMS, and NonStop all run on Itanium.

IBM didn't want to go the direction Apple wanted them to. Lowering heat output and power consumption while maintaining or increasing performance was what Intel touted... Intel's "tick tock" scheme was exactly what Apple wanted.

Their quad 2.5GHz Power Mac G5, for example, had to use a closed-loop water cooling system just to keep cool... much too costly, and it broke over time. That configuration wasn't even the "3GHz quad" they really wanted, either. Apple had to dump IBM because IBM wasn't delivering. IBM has since done better, but their newer solutions aren't for the desktop; they're high-density, high-performance servers.

They did make a very low-clocked (and very slow) G5 chip for the iMac too, but Intel could do MUCH better. And if Apple couldn't get the iMac + G5 situation figured out, they sure as hell weren't going to get it into a laptop form factor.

I wish Intel could truly forget about Itanium, but sadly they can't. They don't even seem to care much about it now (and by now, I mean since about 2006), but any time and effort they currently put toward it would be better spent on their other projects.
 
Sun sold a TON of Sparc computers in the 80's and 90's. Mostly to researchers, engineers, networking companies, and the government.

They were very expensive compared to PCs, but remember, back then you couldn't really do any of that on a PC. This was before Linux, Windows NT, and the hardware we have today. You could get a quad-processor Sun desktop with 512MB of RAM back when your average Dell PC was a 486/66 with 4MB. They all had networking, SCSI storage, tons of graphics options... They rendered Toy Story on a farm of Sun workstations.

And the servers were another story. They sold so many during the dot-com boom. Today if you're starting a website you just fire up some cloud instances; back then you dropped a hundred grand on a Sun Enterprise server. Sure, a Pentium III would be faster one-on-one, but you could stick 64 of them in an E10000. Even in the 90s, when a beefy Windows server was a dual Pentium Pro, you could get a Sun with 6 CPUs and over 10 disks.

Plus, they had Solaris as the OS to back it all up. Their memory buses and everything else were faster as well, to handle the work thrown at them. And you could even hot-swap memory and CPUs that failed without taking the machine down.

But eventually, as PCs got faster and more reliable, acquired the fancy features from their big brothers, and, most importantly, Linux and Windows started to mature, all the big UNIX vendors started going away: SGI, Sun, HP, DEC.
 
Sparc is good.

Sparc snaps together like Lego, which is one reason Oracle runs their cloud on Sparc hardware, builds some of the fastest supercomputers on thousands of Sparc processors, and sells the hell out of Sun Sparc servers in their own right, not just as part of vertical solutions.

Hardware Systems Business

Our hardware systems business consists of two operating segments: (1) hardware systems products and (2) hardware systems support. Our hardware business represented 17%, 19% and 9% of our total revenues in fiscal 2012, 2011 and 2010, respectively. We expect our hardware business to have lower operating margins as a percentage of revenues than our software business due to the incremental costs we incur to produce and distribute these products and to provide support services, including direct materials and labor costs. We expect to make investments in research and development to improve existing hardware products and services and to develop new hardware products and services.

Hardware Systems Products: We provide a complete selection of hardware systems and related services including servers, storage, networking, virtualization software, operating systems, and management software to support diverse IT environments, including public and private cloud computing environments. We engineer our hardware systems with virtualization and management capabilities to enable the rapid deployment and efficient management of cloud infrastructures. Our hardware systems products consist primarily of computer server, storage and hardware-related software, including our Oracle Solaris operating system. Our hardware systems component products are designed to be “open,” or to work in customer environments that may include other Oracle or non-Oracle hardware or software components. We have also engineered our hardware systems products to create performance and operational cost advantages for customers when our hardware and software products are combined as Oracle Engineered Systems.


Our Oracle Engineered Systems include Oracle Exadata Database Machine, Oracle Exalogic Elastic Cloud, Oracle Exalytics In-Memory Machine, SPARC SuperCluster, Oracle Database Appliance and the Oracle Big Data Appliance. By combining our server and storage hardware with our software, our open, integrated products better address customer on-premise and cloud computing requirements for performance, scalability, reliability, security, ease of management and lower total cost of ownership.

We offer a wide range of server systems using our SPARC microprocessor. Our SPARC servers are differentiated by their reliability, security, scalability and customer environments that they target (general purpose or specialized systems). Our midsize and large servers are designed to offer greater performance and lower total cost of ownership than mainframe systems for business critical applications and for customers having more computationally intensive needs. Our SPARC servers run the Oracle Solaris operating system and are designed for the most demanding mission critical enterprise environments at any scale.

Oracle FY2012 Annual Report (SEC)
 
It's competition. Sparc existed because Sun could do it cheaper than buying from Intel. It performs differently because, if you're going to go all in, you might as well tune it for your workload, etc. But the reason it continued to exist was competition. Even compatibility isn't enough to carry the business for long...
 
In short, it didn't take off on the consumer front because it wasn't x86.
In fact, ARM processors are only now starting to really take off in this area because of smartphones, tablets, and now Windows RT.

The biggest reason for SPARC not being supported is due to the lack of software compatibility.
x86 software will not run on and is not compiled for the SPARC architecture.


To this day it still is not supported by Windows (to my knowledge).
Windows blows for processor architecture support.
In the 90s they did support other RISC-based processor architectures, and now, with Windows RT, Windows finally supports the ARM architecture.

If it's not x86 or x86_64, then Windows most likely won't support it.
If you want or need processor architecture support for non-x86 processors, then look to Linux and a few UNIX distros.


Never took off for end users because they weren't for end users; UltraSparcs went into servers and high-end Unix workstations (and I think there were some UltraSparc laptops too). No Windows because neither Microsoft nor Sun had any motivation to make Windows run on a Sun CPU. NT 3 and 4 did run on some non-x86 processors, but Windows 2000 dropped them all (and later added Itanium), and now NT 6.2 has added support for ARM. What's old is new again.
I agree.
 
I've got the UltraSparc love!

I bought this guy a few years ago to play with Solaris 10 on.

It's got 5GB of RAM and (6) 400MHz UltraSparc II procs.

It makes the lights dim when you turn the KEY to power it on.

They are ROCK solid; I have literally hot-removed 4 processors at once and it keeps on ticking.

I'd be happy to take more pics if anyone has an interest in these old boxes.

 
Ask and you shall receive!

First, here it is with the front door open. You can see the (8) SCSI hard drive slots; mine only has (6) 72GB drives. The backplane is pretty interesting: it uses what's called FC-AL, Fibre Channel Arbitrated Loop. They actually have a fiber connection on the rear of the unit from the backplane to the IO cards. Also note the KEY to turn it on; the same key also locks the door :)



Around the back, you can start to see how the modular cards work in this thing. Starting from the left side is the PSU. The next card is from the SCSI backplane and has SC fiber connections that go to the IO cards.

The 3 blank cards are actually the CPU & Memory cards.

The next 2 beside those are the IO cards. Each IO card apparently comes with (1) Fast Ethernet port. This machine was also outfitted with 2x QFE cards and a framebuffer card. Interesting side note: framebuffer (video) cards were an OPTION on these servers! They were primarily meant to be set up over a serial console port. I believe this framebuffer is only 8-bit and has a proprietary connector.

The last card on the right appears to have (2) parallel ports and also the port for the keyboard/mouse.





The next pictures are of the CPU & memory cards. Each of these cards has (2) 400MHz UltraSparc II processors. Two of the cards are fully populated with 2GB of RAM each, and the third card only has 1GB, for a system total of 5GB.





These are the two IO cards that also have the QFE cards and framebuffer.



Lastly, here is a shot of the rear with the IO & CPU cards out. You can see they are all modular and slide in.

 
Another interesting side note that nobody told me when I bought this thing: the system self-test literally takes 10 minutes before it begins to boot the OS. From what I've read, this is pretty normal for a fully populated system like this.
 
Wow, I never knew what awesome systems engineering the sparc boxes had.

Most of the old RISC machines were super robust, and will be running years after even today's highest quality systems are finally run down.
 
x86 has just been borrowing/implementing technologies that have been around in SPARC and other high-end systems for years. Multi-channel memory, multiple sockets, integrated memory controllers, etc. are all things those boxes had back in the 90s. Heck, even this whole x86 VM trend has been around in the form of LPARs and LDOMs on the other architectures too. You don't need a machine that starts at $20k+ to mess with that stuff anymore. SPARC had its setbacks over the years, but IMO it'll be around for a while.
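
For anyone curious what that looks like on the SPARC side, LDOMs are carved out with the ldm command from the Logical Domains Manager. A rough sketch from memory; the domain and service names (ldg1, primary-vsw0, vol0@primary-vds0) are made up, and it assumes the control domain's virtual switch and disk services already exist:

ldm add-domain ldg1                            # create an empty logical domain
ldm add-vcpu 8 ldg1                            # hand it 8 hardware threads
ldm add-memory 8G ldg1                         # and 8GB of RAM
ldm add-vnet vnet0 primary-vsw0 ldg1           # virtual NIC tied to the control domain's switch
ldm add-vdisk vdisk0 vol0@primary-vds0 ldg1    # virtual disk served by the control domain
ldm bind ldg1                                  # bind the resources to the domain
ldm start ldg1                                 # and power it on

Same basic idea as spinning up a VM on x86, just done in the hypervisor that ships with the box.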
 
The frame-buffer video connector is a 13W3 style connector. Very popular among workstation class machines because it carried RGB on coax cables versus pins, very little crosstalk and highly color accurate. There's adapters out there to make them VGA as most of them are fixed-frequency, fixed-res cards. And yes, they're marginally low bit by today's standards, but will do 16-bit color quite easily.

Also, if you have the sun keyboard that goes with these things, the "BIOS" on them is an OS in itself. From the OS, you can hit STOP-A to be dropped back into the BIOS shell and configure the hardware on the fly. These machines were designed to never be shut off.
 
The frame-buffer video connector is a 13W3 style connector. Very popular among workstation class machines because it carried RGB on coax cables versus pins, very little crosstalk and highly color accurate. There's adapters out there to make them VGA as most of them are fixed-frequency, fixed-res cards. And yes, they're marginally low bit by today's standards, but will do 16-bit color quite easily.

Also, if you have the sun keyboard that goes with these things, the "BIOS" on them is an OS in itself. From the OS, you can hit STOP-A to be dropped back into the BIOS shell and configure the hardware on the fly. These machines were designed to never be shut off.

Then type "go" at the ok prompt to be returned to the OS.

Just hope you didn't do too much or you wouldn't get back to the OS and would need to reboot.
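
For anyone who hasn't played with one, the firmware is OpenBoot and the ok prompt is basically a little Forth environment. A typical detour looks something like this (from memory, so treat it as a sketch; \ is Forth's comment character):

<STOP-A>                     \ OS is suspended, you land at the ok prompt
ok printenv                  \ list the NVRAM configuration variables
ok setenv auto-boot? false   \ don't boot automatically on the next reset
ok devalias                  \ show device aliases (disk, net, cdrom...)
ok go                        \ resume the suspended OS where it left off
ok boot disk                 \ ...or give up on resuming and boot from disk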
 
My place of employment built several generations of the SPARC CPU (Hummingbird, Cheetah, Panther, Niagara, Niagara 2). I was amazed at how many cores these things had back in the day. The computers themselves were built like tanks. I would put the old non-blade "towers" at 60 pounds, all steel case. The blades were lighter but still well built.

We were paying about $10K per computer, and around 2004-2005 we finally switched to Linux just due to the massive cost reduction. I had heard through the grapevine we were selling each die (compute chip) to Sun for $1K. And the yield was terrible; when we first started we were getting like 5 die per 8" (200mm) wafer.

But in 10 years of dealing with about 250 of these things, I only remember replacing about 2. And as someone mentioned, I had to look up how to reboot and shut down because you honestly never have to turn them off.

My personal opinion from dealing with them was that I did not like Solaris. Some of the syntax is just not as intuitive as Linux. Particularly vi: vi on (our) Solaris did not like the arrow keys and would go haywire if you used them.

When we started replacing the SPARCs we could not reuse the older keyboards/mice that had the somewhat PS/2-style interface, but we did go ahead and recycle the USB ones from the blades because they were quite robust hardware. We slowly began to realize the USB Sun mice didn't quite adhere to the USB standard and would occasionally just drop off. Unplugging and replugging them worked, but sometimes you would have to go to another port.

They were impressive for the specs, but ultimately I hated Solaris itself.
 
^ It's the reason I prefer Linux over UNIX.
UNIX is completely rock-solid, but it is also very rigid, and sometimes getting it to do what you want can be a colossal PIA compared to Linux.

While the hardware was solid, the software, as you stated, did have some issues.
 
Your encounter with Solaris pretty much sums up why I lost interest in it. I bought this box (E3500) to play with Solaris and try to gain some file structure and shell experience. I found it very counter-intuitive compared to what I've learned using Linux for 10+ years. It really feels like you have to reach around your elbow for a lot of simple tasks. Heck, network interfaces, for instance.
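
For anyone who hasn't hit that particular wall, here's roughly what bringing up an interface looked like on Solaris of that era. A sketch only; it assumes the onboard Fast Ethernet shows up as hme0 and the address is just an example:

ifconfig hme0 plumb                                   # Solaris makes you "plumb" the interface before it exists
ifconfig hme0 192.168.1.50 netmask 255.255.255.0 up   # then assign the address and bring it up
# to survive a reboot: address in /etc/hostname.hme0, netmask in /etc/netmasks,
# default gateway in /etc/defaultrouter

Compared to the Linux of the day, where a single ifconfig eth0 192.168.1.50 up did the job, the extra plumbing step is exactly the reach-around-your-elbow feeling you're describing.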
 
I have several UltraSparcs. I like to collect non-common CPUs.
The biggest reason they didn't take off in home computers is because they are RISC, not CISC.
 
I have several UltraSparcs. I like to collect non-common CPUs.
The biggest reason they didn't take off in home computers is because they are RISC, not CISC.

It has nothing to do with being RISC, they were very costly and only supported limited software/OSes and enterprise applications.

Also, "non-common" CPUs? You mean non-x86 CPUs? :rolleyes:
 
Right, but Apple procs had an OS specifically designed for them and for the home user.
 
It has nothing to do with being RISC, they were very costly and only supported limited software/OSes and enterprise applications.

Also, "non-common" CPUs? You mean non-x86 CPUs? :rolleyes:

No, I meant non-common as in CPUs you don't come across very often. I have some x86 CPUs; it was the only way I could think of to describe my collection.
 
Right, but Apple procs had an OS specifically designed for them and for the home user.

Well, you could say it's Sun's fault for not doing the same thing... but they didn't care about that part of the market segment, hence they didn't cater to it.
 
Recently SPARC has been pretty crappy on single-threaded performance, but very good at multi-threaded server loads (surprise?). Additionally, a SPARC M-class server is like $60k+.

'Tis nuts, I say :)
 
Your encounter with Solaris pretty much sums up why I lost interest in it. I bought this box (E3500) to play with Solaris and try to gain some file structure and shell experience. I found it very counter-intuitive compared to what I've learned using Linux for 10+ years. It really feels like you have to reach around your elbow for a lot of simple tasks. Heck, network interfaces, for instance.

I'm sure you know, but Linux by definition is just the kernel. Everything outside of that is the distro, and distros can vary wildly in how they are set up, i.e. Ubuntu vs RHEL/CentOS. If someone wanted to be a complete jackhole they could make a really unfriendly distro. It probably wouldn't gain much traction, but I'm just saying it's possible to make a Linux distro that's more of a PITA to work on than Solaris.

Granted, some things aren't as easy to do off the cuff, and there is some elbow grease involved, but it's that way for a reason. Some of it is based on the mentality of when the OS was designed, and some of it is just being SysV. RH is a weird hybrid between SysV and BSD. After using RH at first, I hated Solaris and SysV, but after using FreeBSD/OpenBSD exclusively and working on Solaris for a few years I came to appreciate it. It helps to understand the mentality/philosophy to get a better grip on it. It also helps if you have some old-school Solaris admins around to explain shit. RH seems to try to take the "best of both" and combine them. The problem is, that tends to create problems when you go to a big-boy UNIX and don't understand the difference between the two.
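
To make the SysV-vs-BSD split concrete, here's a sketch of the two init styles (the service name "myservice" is made up):

# SysV style (Solaris pre-SMF, old Red Hat): one script per service,
# symlinked into run-level directories
/etc/init.d/myservice        # the actual start/stop script
/etc/rc3.d/S90myservice      # S = start when entering run level 3, 90 = ordering
/etc/rc0.d/K10myservice      # K = kill when dropping to run level 0

# BSD style (FreeBSD/OpenBSD): a few monolithic rc scripts,
# toggled by variables in /etc/rc.conf, e.g.
sshd_enable="YES"

Once you know which style a box follows, a lot of the "where the hell is this configured" confusion goes away.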

IMO one of the benefits of SPARC has always been the tools to troubleshoot hardware and software issues. dtrace is an awesome tool, and if you've ever been in the position of having to figure out which process/app/system call is eating up system resources, dtrace can be a huge lifesaver if you know how to use it. They've ported dtrace to Linux, thankfully, but there are still other reasons to run Solaris.
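
For example, a couple of the classic DTrace one-liners for the "what is eating this box" question (from memory, so treat them as a sketch):

dtrace -n 'syscall:::entry { @[execname] = count(); }'   # count system calls per process name; Ctrl-C prints the summary
dtrace -n 'profile-997 { @[execname] = count(); }'       # sample what's on-CPU ~997 times a second per CPU

Run one, let it soak while the problem is happening, hit Ctrl-C, and the aggregation tells you who the culprit is.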

From what I've seen, you're going to spend money on hardware regardless. x86 seems pretty cheap until you gotta scale and realize that you need 25+ boxes just to run some app, plus your DB cluster, web servers, and caching tier. Then you gotta deal with power, security/patches on X number of boxes, yadda yadda. IMO with SPARC boxes you will end up paying the same hardware cost, just up front instead of over time. It's just a matter of picking your poison. Granted, for certain workloads SPARC doesn't make sense but that's a different topic.
 