I won the bid, now to start building!

I've ordered the parts for the first of two V-Hosts. This will be my first production VMware or XenServer box; I've got a couple of Hyper-V hosts. Really, Hyper-V would work in this situation as well, I just like the bare-bones hypervisor.

It looks like I'll get the parts Wednesday, which is a couple of weeks early, so I plan to install some OSes and benchmark native speeds before I install the hypervisor.

I'm also hoping the Elysium case can fit the board...


1x Antec CP-1000 1000W Continuous
16x Crucial Ballistix Sport 8GB (2 x 4GB) 240-pin DDR3-1333 desktop kits
1x TYAN S8812WGM3NR quad Socket G34, AMD SR5690
4x AMD Opteron 6128 Magny-Cours 2.0GHz Socket G34, 8 cores
4x OCZ Vertex 3 120GB
 
I've ordered the parts for the first of two V-Hosts. This will be my first production VMware or XenServer box; I've got a couple of Hyper-V hosts. Really, Hyper-V would work in this situation as well, I just like the bare-bones hypervisor.

It looks like I'll get the parts Wednesday, which is a couple of weeks early, so I plan to install some OSes and benchmark native speeds before I install the hypervisor.

I'm also hoping the Elysium case can fit the board...


1x Antec CP-1000 1000W Continuous
16x Crucial Ballistix Sport 8GB (2 x 4GB) 240-pin DDR3-1333 desktop kits
1x TYAN S8812WGM3NR quad Socket G34, AMD SR5690
4x AMD Opteron 6128 Magny-Cours 2.0GHz Socket G34, 8 cores
4x OCZ Vertex 3 120GB

1 SATA port? WTF...

[Image: Tyan S8812 board]
 
lol yea, and it's SATA 2.
It does support SAS 6.0Gb/s, but I'm using a couple of expansion cards: an IBM M1015 for the RAID 10 SSD drives and a Dell PERC 6i for the storage drives.
 
Is this quad setup overkill? So you are going to run ESXi off the SSDs and then use the PERC 6i for the datastore?
 
Is this quad setup overkill? So you are going to run ESXi off the SSDs and then use the PERC 6i for the datastore?

32 cores total; for only 3 Windows VMs it is for now. Later, when they're ready to expand, we'll be happy it's there.

The SSDs are for the TS & DB server. The PERC 6i will house the network share files & database replication.
 
32 cores total; for only 3 Windows VMs it is for now. Later, when they're ready to expand, we'll be happy it's there.

The SSDs are for the TS & DB server. The PERC 6i will house the network share files & database replication.

Or you could order the additional CPUs and RAM when you need them and the price comes down...
 
So all this is for 3 production server VMs, but there is no power supply redundancy or ECC memory? Seems like there's more power & risk than there needs to be.
 
Or you could order the additional CPUs and RAM when you need them and the price comes down...
I'm not counting on RAM coming down in price from $50 per 8GB.

So all this is for 3 production server VMs, but there is no power supply redundancy or ECC memory? Seems like there's more power & risk than there needs to be.

If HA or FT were an issue we'd spend the extra cash on it. However, being offline for a couple of hours while I swap a $100 PSU isn't going to be a problem. I could explain why this customer doesn't need ECC, but I'd be wasting my keystrokes.
 
It's almost always a good idea to go with ECC for 4GB and higher-density RAM, just for good measure; it would suck for the whole hypervisor to lock up and take all the VMs with it. :(
 
It's almost always a good idea to go with ECC for 4GB and higher-density RAM, just for good measure; it would suck for the whole hypervisor to lock up and take all the VMs with it. :(

This happened to me, BTW. I got an insane deal on cheap RAM, but it didn't even last a year, and the failure necessitated a complete teardown and rebuild. We would have saved a lot of money in the long run by just buying better hardware.
 
This happened to me, BTW. I got an insane deal on cheap RAM, but it didn't even last a year, and the failure necessitated a complete teardown and rebuild. We would have saved a lot of money in the long run by just buying better hardware.

Several times over the years, I have saved a few bucks and locked myself into a solution with no expandability, feature- or size-wise. :( Frankly, I'd rather see you get fewer, faster cores (like a quad-core 1230; with HT, you get 8 threads...)
 
This happened to me, BTW. I got an insane deal on cheap RAM, but it didn't even last a year, and the failure necessitated a complete teardown and rebuild. We would have saved a lot of money in the long run by just buying better hardware.

With a lifetime warranty on your RAM, I don't know why you'd need to do a complete rebuild... Sounds a bit strange to me. Three years on everything else minus the processors. Sounds like you got a bum piece of hardware.

This will be the biggest and fastest single piece of hardware I'll be managing. But across my 30+ servers (only 2 with ECC registered RAM, which happen to be the slowest), I've had NO problems with name-brand hardware failing. Sure, a few drives once in a while, and the occasional dud board.

The more I think about it, the more interesting your 'event' sounds.
 
It's almost always a good idea to go with ECC for 4GB and higher-density RAM, just for good measure; it would suck for the whole hypervisor to lock up and take all the VMs with it. :(

I've always wondered about this. Due to the amount of RAM this beast supports, I decided to do some research. IBM has a few nice articles that matched everything else I found on the net, basically saying ECC is great for the enterprise and for workloads doing an insane number of memory operations. But even then they would only see something silly like 7 crashes per 100 servers per 3 years.

Based on that alone, ECC just doesn't make sense at double the cost per GB.

So look at it like this. If it takes 10 minutes for the server to reboot, and another 10 minutes to re-enter the data (schedules & basic medical information) for a business that makes $10,000 a week, it will cost them $116.66 every 3 years to re-input the data lost in the crashes. ECC memory is right at double the cost of non-ECC, so it would take 6.85 years to make up the difference. That is IF they had 100 servers, but they only have 2, so multiply all that by 0.14 and you'll see why I'm not using ECC.
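
For anyone who wants to poke at the break-even math, here is a rough sketch of that calculation as a script. The crash rate, downtime per crash, weekly revenue, and kit pricing are the figures quoted above; the 40-hour work week used to turn weekly revenue into an hourly rate is my own assumption, so the exact dollar outputs depend on it.

[code]
# Back-of-the-envelope ECC break-even sketch. Figures marked "from the
# post" come from the discussion above; the 40-hour week is an assumption.

crashes_per_100_servers_per_3yr = 7        # from the post (IBM figure as quoted)
servers = 2                                 # from the post
downtime_per_crash_hr = (10 + 10) / 60      # reboot + re-entering data, from the post

revenue_per_week = 10_000                   # from the post
business_hours_per_week = 40                # assumption
revenue_per_hour = revenue_per_week / business_hours_per_week

expected_crashes_per_3yr = crashes_per_100_servers_per_3yr * (servers / 100)
expected_downtime_cost_per_3yr = (expected_crashes_per_3yr
                                  * downtime_per_crash_hr
                                  * revenue_per_hour)

# "Right at double the cost of non-ECC": 16 kits at ~$50 per 8GB kit.
non_ecc_cost = 16 * 50
ecc_premium = non_ecc_cost                  # the extra spend to go ECC

print(f"Expected crashes over 3 years:   {expected_crashes_per_3yr:.2f}")
print(f"Expected downtime cost / 3 yrs:  ${expected_downtime_cost_per_3yr:,.2f}")
print(f"ECC premium for this build:      ${ecc_premium:,}")
if expected_downtime_cost_per_3yr > 0:
    years = ecc_premium / (expected_downtime_cost_per_3yr / 3)
    print(f"Years for ECC to pay for itself: {years:,.1f}")
[/code]

The inputs the later replies push back on (downtime per crash, and what an hour of lost work actually costs) are the ones that move the result the most.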
 
So imagine that one of those 7 crashes per 100 servers per 3 years now costs you, say, an hour of downtime in a business where that could cost thousands of $$$ or more. I know you did some math, but unless you've seen their books, you don't know what the downtime would cost them once you factor in employees unable to work because data is not available.

Personally, it sounds like overkill to me and money not being spent well.

You're putting your database replication on the same physical server as the database? So what happens if the mobo goes out because you got a cheap $100 PSU to power all of that? (Not cheap in absolute terms, but compared to what you should be using... yeah, cheap.)

Sounds half-assed to me personally. I would rather ditch 2 of the CPUs and some RAM, get a redundant PSU, and consider a better backup option for the database.


It's great you've had luck in the past with your setups, but one day it is going to bite you in the butt and cost some company a lot of money.
 
You need to count more than lost income; you need to count lost employee productivity. You need to count possible overtime for employees staying late to catch up after a day where things were lost.

I'm going to be honest and say that you're not all that familiar with how a business works. It is many times more complicated than you're factoring in.

No matter what you're doing, the goal should be five nines. Now don't misunderstand what five nines means. It does not necessarily mean that you need 99.999% uptime of your software/hardware every year. What it means is that you should strive for 99.999% uptime when it is needed most: during normal business operations, i.e. not counting scheduled maintenance, weekends if you're closed, after hours, etc.
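
To put numbers on that, here is a quick illustration of how much downtime each availability target actually allows; the 40-hour, 52-week business year is just an assumed schedule, so substitute the customer's real operating hours.

[code]
# Allowed downtime per year for common availability targets, measured
# against business hours only (as described above) and around the clock.
# The 40 h/week x 52 week schedule is an assumption.

business_hours_per_year = 40 * 52      # assumed operating schedule
calendar_hours_per_year = 24 * 365

targets = [("three nines", 0.999),
           ("four nines", 0.9999),
           ("five nines", 0.99999)]

for name, availability in targets:
    biz_min = business_hours_per_year * (1 - availability) * 60
    cal_min = calendar_hours_per_year * (1 - availability) * 60
    print(f"{name:>11}: {biz_min:6.1f} min/yr of business hours, "
          f"{cal_min:6.1f} min/yr around the clock")
[/code]

Five nines over a 40-hour business week works out to a little over a minute of allowed downtime per year.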

It's also fuzzy math, because you might not realize that it crashed right away... Well, you might, because you're building it expecting it to, but that is beside the point. It will likely be more than the hour you suggest. In IBM's tests, what type of hardware was being used, exactly? I'm not one to assume that all things are created equal. Also, what if a crash corrupts your data and you need to restore from backups? Yes, that's also more time consumed and wasted.

I never would have spec'd a box out this way at all, but to be fair, I really hope it works out for your situation. It just reminds me of something I'd need to yell at my brothers for doing... because they "know computers."

Again, I'm off topic and these are personal opinions; not to mention all over the spectrum.

It's still a beast of a box.
 
I'm going to take this further, like it really hasn't been said before, but I would never custom-build a server for a business. For the reasons mentioned above, and for the pure support factor, etc. On top of that, you can buy from Dell, HP, etc., where they have fully tested the components and have certified their hardware to work with VMware, etc., where applicable.

They can provide rapid response with support/parts if you have an outage, unless you plan on having spare parts hanging around.
 
I'm going to take this further, like it really hasn't been said before, but I would never custom-build a server for a business. For the reasons mentioned above, and for the pure support factor, etc. On top of that, you can buy from Dell, HP, etc., where they have fully tested the components and have certified their hardware to work with VMware, etc., where applicable.

They can provide rapid response with support/parts if you have an outage, unless you plan on having spare parts hanging around.

This can be a gray area at times; usually, for the cost of said Dell or HP, you can buy two entire Supermicro boxes. You might as well have an entire spare lying around.

That being said, I've had to call HP and Dell for those boxes several times, but have had almost no issues with our Supermicros while in production. I've had the Dells the longest, though, Supermicro coming in second, with the HPs being our newest. Actually, for what I paid on the two newest HPs, I could have got four similarly spec'd Supermicros and had two spares sitting around to grab parts from (or stick right into production) while waiting for an RMA.

If there is one thing I've learned, it's that Supermicro is rock solid. If you want better support while still saving money vs. HP or Dell, go through a Supermicro VAR like Rackmounts Etc.
 
BTW, are you the in-house tech? If the PSU goes down in the middle of the day, can you guarantee you'll be there within the hour to swap it out, and will you have another spare on hand whenever that happens? Same goes for the memory... If it were my business, your theory about the unlikelihood of failure would be the first red flag. The second would be that there are multiple single points of failure that could stop business for the day (or longer) dead in its tracks. I could see it if the business was really trying to save every last penny and couldn't afford the redundancies, but the over-spec on the processors and memory could easily cover the cost of proper redundant hardware. I don't care how many servers you've built or how many years they ran in production; random failures can and do happen, and increasing the probability of having one for the sake of memory/core bragging rights makes no business sense at all. In reality, the majority of businesses couldn't care less what's in the box; they just want the one that will work as reliably as possible for as long as possible.
 
BTW, are you the in-house tech? If the PSU goes down in the middle of the day, can you guarantee you'll be there within the hour to swap it out, and will you have another spare on hand whenever that happens? Same goes for the memory... If it were my business, your theory about the unlikelihood of failure would be the first red flag. The second would be that there are multiple single points of failure that could stop business for the day (or longer) dead in its tracks. I could see it if the business was really trying to save every last penny and couldn't afford the redundancies, but the over-spec on the processors and memory could easily cover the cost of proper redundant hardware. I don't care how many servers you've built or how many years they ran in production; random failures can and do happen, and increasing the probability of having one for the sake of memory/core bragging rights makes no business sense at all. In reality, the majority of businesses couldn't care less what's in the box; they just want the one that will work as reliably as possible for as long as possible.

This.

I'm slightly confused by what the goal of the OP is and why guaranteeing something will be online 99.9% of the time is not at the top of the list. Maybe being a sysadmin in an enterprise environment clouds my judgment. I see this box as a pretty cool home box or non-production box, but I wouldn't put this config in my rack.
 
I also have to question all you guys whiteboxing: do none of you bother becoming hardware partners? Seriously, I partner with Dell and EMC right now (used to be with HP) and can tell you that once you start selling as a partner (and making their goals) you get some pretty sweet discounts (not talking 3% off the sticker; we are talking more like cost +3% type deals, which means that I can sell at a discount and still make money off hardware).

Personally, other than a storage box (NFS/CIFS type NAS), I would never put a whitebox into a prod environment that I wouldn't be more than 5 minutes away from. Yes, I have whiteboxes at home that have been running for years, and my lab consists of nothing but whitebox and barebones builds. However, for a customer, if something goes down I want to be able to have the vendor on the phone and a new part in hand in 4 hours or less. I also don't want to deal with the hassle of taking the defective part, RMAing it, possibly having to buy a new one anyway if it is out of warranty, or the RMA taking too long and needing a new spare in house right now. Too many "what if" factors. When I buy OEM I am not buying because I am lazy; I am buying for 99.9% uptime, a 3-6 year maintenance contract with 4-hour restore, and peace of mind that I can go out of town and, if the customer's server goes down, they can call the OEM and get it fixed instead of having to call me.
 
A quad Opteron setup with no ECC/registered RAM is asking for trouble.

Aren't you also limiting your memory density to 64GB (for a VM server with possibly 48 cores?) by using desktop RAM?

IMHO you should see if it supports quad-rank and grab the DDR3-1333 ECC registered quad-rank DIMMs at ~$100 at Newegg or $80 elsewhere. (Rough capacity math below.)
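
As a rough illustration of the density point: the ceiling is just populated slots times DIMM size, though unbuffered DIMMs usually come with tighter per-channel rank limits, so check the board's memory population rules rather than trusting the naive multiplication. The slot count and DIMM sizes below are taken from this thread (32 slots, 4GB desktop sticks in the build, and the 512GB board maximum implying 16GB registered DIMMs); the rest is only a sketch.

[code]
# Naive memory-ceiling sketch: populated slots x DIMM size. Real boards
# add per-channel rank limits for unbuffered DIMMs, so treat this as an
# upper bound and check the motherboard's population rules.

dimm_slots = 32                      # S8812 slot count per the spec quoted later

configs = {
    "4GB desktop UDIMMs (this build)": 4,
    "8GB registered DIMMs": 8,
    "16GB registered DIMMs (512GB board max / 32 slots)": 16,
}

for name, dimm_gb in configs.items():
    print(f"{name}: up to {dimm_slots * dimm_gb} GB")
[/code]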

Edit: Do you have a UPS budgeted for this? Dual or triple power supplies?

You do realize you can get stripped-down DL360/380s at geeks.com for $500-$700, right? These include remote OOB management, ECC registered RAM, and (probably) dual power supplies.

Edit: According to Kingston and Tyan, ECC is required in all cases. The only QUALIFIED RAM on the Tyan page for that board is ECC/registered.
 
This can be a gray area at times; usually, for the cost of said Dell or HP, you can buy two entire Supermicro boxes. You might as well have an entire spare lying around.

That being said, I've had to call HP and Dell for those boxes several times, but have had almost no issues with our Supermicros while in production. I've had the Dells the longest, though, Supermicro coming in second, with the HPs being our newest. Actually, for what I paid on the two newest HPs, I could have got four similarly spec'd Supermicros and had two spares sitting around to grab parts from (or stick right into production) while waiting for an RMA.

If there is one thing I've learned, it's that Supermicro is rock solid. If you want better support while still saving money vs. HP or Dell, go through a Supermicro VAR like Rackmounts Etc.

This is not a gray area at all. If you want support, you pay. If you want to maintain some sort of service level, you get equipment, and support with that equipment, that has been certified and tested in a lab with the OS you're using.
I can build a server or PC, like most here, and do it extremely well; I would never do it for a business.
 
This is not a gray area at all. If you want support, you pay. If you want to maintain some sort of service level, you get equipment, and support with that equipment, that has been certified and tested in a lab with the OS you're using.
I can build a server or PC, like most here, and do it extremely well; I would never do it for a business.

I just stated that I have had more problems with Dell and HP than with "properly" spec'ing my own Supermicro gear to meet our needs. Over the course of six years, that pretty much speaks for itself. Do what works for you; I fucking hate calling Dell or HP. If I need to call all the time, it's because they made and sold me a piece of shit. Then I gladly accepted said piece of shit because of the super fantastic warranty they included to cover their piece of shit, which I also have to pay for on top of the premium of using their brand. As noted, you can also use a VAR for proper support options on "whiteboxes."



Don't get me wrong, I do like my Dell gear, but it's overpriced and underpowered because of that.

I even made the call to go with HP on our latest virtualization project. I built a 2nd cluster using Supermicro just in case. Guess what I had to do because the HPs were a pile of random-crashing shit for three months while they put out a reasonably solid BIOS for them. That's right: not use them in production, because they were unstable and HP was slow as fuck to get them into working order. Random reboots are not cool, just because your BIOS fucked up handling the power settings on the CPU.

Here's one of the Supermicros that was powered on at the same time. I've never had to power it down. Granted, we just took it out of primary production now that the HPs are solid; they also have more RAM and CPU power.
[uptime screenshot]


The HPs were also about $12,000 a piece... the Supermicros, less than $2,000 a piece.
 
Grrrr...lol.:p

I certainly see where you are coming from... but you are the minority, my friend. When you start looking at DCs that contain hundreds of servers, like mine, then you may think otherwise.

There is no doubt HP and Dell certainly make some quality mistakes; I've been down the same road with the BIOS updates where the iLO port would cause issues with the latest Westmere processors, etc., so I hear you loud and clear there.

Having said that, you are saying you buy whitebox hardware from a VAR with support... that makes sense. I never said you had to go Dell/HP; I just listed them as being well known.

My case is very simple. I'm not going to go out and pick parts; while they may be on the VMware certified list, you still do not have a clue about stability when you combine said parts into a complete system. VMware didn't take all those parts and put them together in all the possible combinations to certify them for their software.

Now, let's move on to the big-name manufacturers. They have R&D, they have quality control, and they are given strict guidelines from the OS companies (MS, VMware, AIX, etc.) to certify that what they are creating works and is TESTED working with said OSes.

Sorry, I'll take my chances with the latter. And let me just say this: I've also been on the flip side of your coin, and I've seen it time and time again, users thinking they can kludge something together and that it will be 100% stable... well, cheers to ya if it works out. ;)
 
This just has to be asked. How much is this going to cost the client? What amount was your bid for?
 
To throw my two cents in: done properly, I fully agree with MikeTrike. There are lots of SuperMicro shops out there, just as there are Dell shops, HP shops, and IBM shops. Bought through a VAR, they are often just as supported, with similar features (BMC, etc.) and on the VMware HCL as well, as the highly priced IBMs, HPs, and Dells.

Having said that, I agree with Vader and others that dashpuppy is making a mistake in how he's designing this system for this client. Go whitebox, I love whitebox, but if you do, make sure to take proper precautions.

One last thing...
I certainly see where you are coming from... but you are the minority, my friend. When you start looking at DCs that contain hundreds of servers, like mine, then you may think otherwise.
Google certainly doesn't think your way, and I'm sure I could name others if I tried. They use all custom builds, supported in-house. Just because you're using hundreds or even thousands of servers doesn't make a path such as a SuperMicro VAR any less valid. Sorry, I just had to get that out. I do agree with a lot of your other points, as I noted above, just not all. :p
 
Do you think Google throws servers together and puts them into production? Come on, let's be realistic. You don't think they test, etc.?

Also, there weren't really any arguments around using a VAR, as long as it's supported. That's not what the original poster is doing here.


Google or no Google, still the minority. :D
 
Just wanted to make a couple of notes on the hardware choices here.

1: According to TYAN's site for that motherboard, you can only use ECC RAM. Now, I personally have used non-ECC RAM on Supermicro builds for NAS boxes before and it worked just fine, but for something like this I wouldn't try to use anything that isn't specifically listed as being 100% supported.

2: Unless you are running one serious number-crunching app, you will never come remotely close to needing 32 cores. For instance, my production hosted Exchange environment (2x SMTP, 2x CAS, 2x Mailbox, 2x SPAM, encryption, routing, archiving, 2x AD, 2x web servers, 2x SQL 2010 servers), which currently has about 1,000 mailboxes on it, barely uses any of the dual E5640 systems it is hosted on (8 cores, 16 threads). We are talking maybe 20% of our CPU capacity during our highest usage periods. My production VDI box is the only one that has come close to needing all of its CPU (and even then we are talking 2 Westmeres serving 45 Win 7 x64 boxes), and that is during the morning rush as everyone logs in and starts up all their apps for the day.

3: Storage. Why would you use SSDs in a server build like this? My 600GB SAS 6Gb/s 15k drives in RAID 5 murder the speeds (and cost per GB) provided by any SSD (regardless of generation), not to mention IOPS performance. I have 8 600GB SAS 6Gb/s drives in RAID 5 (7 in the array, 1 hot spare) for a total usable space of 2.18TB on a single card with 512MB of BBWC (which is important for something like this). In RAID 5 they scream, and in RAID 10 (if I really needed the extra performance) they would just get better.
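
For anyone comparing array layouts, here is a quick usable-capacity sketch. The drive size and count are the ones mentioned just above; the formulas are the standard raw ones and ignore controller metadata, carving, and filesystem overhead, so what a RAID card actually reports will come in lower, and the decimal-GB vs. binary-TiB difference accounts for another chunk.

[code]
# Raw usable capacity for common RAID levels, ignoring controller and
# filesystem overhead. Drive count/size follow the array described above
# (8 x 600 GB SAS, 7 in the array plus one hot spare).

GB = 10**9          # vendors quote decimal gigabytes
TIB = 2**40         # OSes usually report binary tebibytes

drives_in_array = 7
drive_bytes = 600 * GB

def usable_bytes(n_drives, drive_size, level):
    data_drives = {
        "raid0": n_drives,
        "raid5": n_drives - 1,
        "raid6": n_drives - 2,
        "raid10": n_drives // 2,
    }[level]
    return data_drives * drive_size

for level in ("raid5", "raid6", "raid10"):
    cap = usable_bytes(drives_in_array, drive_bytes, level)
    print(f"{level}: {cap / GB:,.0f} GB decimal  ({cap / TIB:.2f} TiB)")
[/code]

None of this says anything about IOPS, which is the other axis of the SSD-vs-15k argument above.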

I would second the desire to know how much this server is. I think you are overbuilding some resources (CPU), skimping on others (RAM, storage, redundancy), and could have built a much higher-performance server for the same cost if you paid more attention to the actual performance bottlenecks instead of building a machine that would crush F@H.
 
Do you think Google throws servers together and puts them into production? Come on, let's be realistic. You don't think they test, etc.?

Also, there weren't really any arguments around using a VAR, as long as it's supported. That's not what the original poster is doing here.


Google or no Google, still the minority. :D

Of course they do; I just wanted to throw in some proof that being at the hundreds (or thousands) of servers level doesn't mean you have to go with a solution that involves a big-name vendor like Dell or HP, as long as you have the support implications covered and understand what that involves. It's not a solution for everyone, and going with a Dell- or HP-based solution is a good safe bet for most.

Just because you and I might not do it doesn't make it any less valid of a choice, though perhaps, yes, the minority. Still, I fully agree that what the OP is doing is not really the correct way to go about things, and I hope for his sake that this "server" solution has a horseshoe up its ass.

I would second the desire to know how much this server is. I think you are overbuilding some resources (CPU), skimping on others (RAM, storage, redundancy), and could have built a much higher-performance server for the same cost if you paid more attention to the actual performance bottlenecks instead of building a machine that would crush F@H.
Agreed. It would make for a killer F@H rig though... :p
 
Just because you and I might not do it doesn't make it any less valid of a choice, though perhaps, yes, the minority. Still, I fully agree that what the OP is doing is not really the correct way to go about things, and I hope for his sake that this "server" solution has a horseshoe up its ass.

^This

It would make for a killer F@H rig though... :p

^And this for the [H]orde!
 
Of course they do; I just wanted to throw in some proof that being at the hundreds (or thousands) of servers level doesn't mean you have to go with a solution that involves a big-name vendor like Dell or HP, as long as you have the support implications covered and understand what that involves. It's not a solution for everyone, and going with a Dell- or HP-based solution is a good safe bet for most.

Google and Backblaze are both excellent examples of how to do whitebox right. However, both have their whiteboxes in monitored datacenters and have staff on site who do nothing other than maintain the hardware. They also have a great deal of redundancy built in, not so much in the onboard-hardware sense, but in a hot/cold cluster sense, and in Google's case a multiple-datacenter hot site/cold site approach.

Again, whitebox can be an excellent way to save money if done properly and if you understand and accept the risks that come with a whitebox solution. For instance, the "SAN" that hosts all my backups, as well as the storage for my company and hosted email archive, is an i5-based Openfiler box. We stuck 12 2TB WD RE4 drives into it and 8GB of RAM, added an LSI Perc 16i card, and used a Supermicro board that has ILO. We also used a Supermicro case that has redundant 700W PSUs (space for 3, but we are only using 2). We boot off a USB drive (something the OP should consider if this is VMware) and the thing works amazingly well. I also have a whitebox for my VDI server, as it was the only way I could get 4 GTX 560s into a server chassis; again, this works amazingly well. HOWEVER, I have spares on site, and my session broker takes care of all the dropped sessions and re-routes them to a second "cold" box in the event we have a motherboard, CPU, RAM, or RAID card failure (everything else is redundant, and even the RAM could lose a stick and be OK, though it would take a serious performance hit).

Minimizing single points of failure is key in any production situation.
 
I'm not going to bother with responses to everyone's thoughts, but I will share this bit. Frankly, I'm wondering if any of you who say Tyan does not support non-ECC RAM bothered reading the specifications.

While Tyan did not list ANY non-ECC memory as working, the board specifications clearly state:
Supported DIMM Qty: (32) DIMM slots
DIMM Type / Speed: U/RDDR3 & LV RDDR3, 800/1066/1333 MHz
Capacity: Up to 512GB
Memory channels: 4 channels per CPU
Memory voltage: 1.5V or 1.35V

Hello, you have asked about the S8812 board. Non registered RAM will work on this board. Please review the manual to make sure you purchase the correct configuration due to the size of the board.

Manual: http://www.tyan.com/support_download_manuals.aspx?model=S.S8812

Author: chuck wurx UPDATE
Time: 8/1/2011 5:47:11 PM
Can i use standard desktop NON ECC NON Registered ram in this board? An example is this link, http://www.newegg.com/Product/Product.aspx?Item=N82E16820148420

Thanks, Chuck

Regards,
TYAN Technical Support
 
I'm not going to bother with responses to everyone's thoughts, but I will share this bit. Frankly, I'm wondering if any of you who say Tyan does not support non-ECC RAM bothered reading the specifications.

While Tyan did not list ANY non-ECC memory as working, the board specifications clearly state:
Supported DIMM Qty: (32) DIMM slots
DIMM Type / Speed: U/RDDR3 & LV RDDR3, 800/1066/1333 MHz
Capacity: Up to 512GB
Memory channels: 4 channels per CPU
Memory voltage: 1.5V or 1.35V

To be fair, I'd go back and ask them to clarify this part: "Non registered RAM will work on this board."

That just says non-registered RAM, which I'd take as ECC non-registered at first glance. They could have just been answering quickly, but yeah, that's just me.
 
To be fair, I'd go back and ask them to clarify this part: "Non registered RAM will work on this board."

That just says non-registered RAM, which I'd take as ECC non-registered at first glance. They could have just been answering quickly, but yeah, that's just me.

Completely possible. I've ordered 3 types of RAM. Luckily, I burn through about 20 8GB kits a month, so if I'm stuck with the non-ECC I'll be rid of it pretty quickly.
 
To be fair, I'd go back and ask them to clarify this part: "Non registered RAM will work on this board."

That just says non-registered RAM, which I'd take as ECC non-registered at first glance. They could have just been answering quickly, but yeah, that's just me.

+1 to this.

I have never seen standard desktop DIMMs referred to as UDIMMs or as non-registered.
 
I'm not going to bother with responses to everyone's thoughts, but I will share this bit. Frankly, I'm wondering if any of you who say Tyan does not support non-ECC RAM bothered reading the specifications.

While Tyan did not list ANY non-ECC memory as working, the board specifications clearly state:
Supported DIMM Qty: (32) DIMM slots
DIMM Type / Speed: U/RDDR3 & LV RDDR3, 800/1066/1333 MHz
Capacity: Up to 512GB
Memory channels: 4 channels per CPU
Memory voltage: 1.5V or 1.35V

U/R refers to unbuffered/registered and has nothing to do with ECC. There are plenty of options for unbuffered ECC RAM. In any case, you can't get (that I can find... if you find one, please post) 2 x 4GB ECC unbuffered for $50; it's more like $80-$100.

DDR3 ECC+Unbuffered @ Newegg.
 