New MSSQL box - no recent comparisons of CPUs?

MrGuvernment

Hey all,

I am putting together a new MSSQL box for work. We do not yet know exactly the type of data going through this box, let alone how much, but we were told that a quad-core system would be required.

Now, I did some looking, but I don't see any recent comparisons of Intel vs. AMD for SQL Server 2008 R2 running on Server 2008 R2 x64.

Any pros out there who could enlighten me?

I have spec'd out systems with quad-core Xeons with HT, and some 12-core AMD setups using 2x 6-core chips, and so on, and I can get the price about the same, with AMD giving me more cores.

I know most MSSQL systems use a lot of I/O, so for now I am looking at 4x Intel 160GB M series in RAID 10. This system will be mostly reporting, but it will also be dumped to via replication about every 15 minutes from another backup server attached to a main server.
 
When you say "putting together," do you actually mean you're building it, or just speccing it out from some OEMs? I'd say the first rule of server club is "don't build it," and the second rule of server club is "for the love of god, listen to rule #1."

That aside, I'd go with Intel, just because committing to AMD will cut your OEM choices by a large margin.

Also, even with a screaming, perfectly configured 15K RAID array, the chances of the CPU being the bottleneck are fairly low.
 
Building it. It isn't an all-out production box, just something for internal reporting and fooling around.

Also, 6 of our servers here were custom built by me and are all running strong, some going on 6 years now. I know the first rule, and I do have some Dells running our more important stuff, but being in Costa Rica, once you factor in the duty and taxes on bringing in a full-out 2U-4U Dell server, your price just doubled. I do keep spare servers and other backups around, though, just in case. Knock on wood, it has worked out so far.

Good to know. All of my reading seems to point more to I/O, so memory and HDs are the #1 things to worry about.
 
Thanks for the link, useful.

http://www.anandtech.com/show/4193/cheap-and-low-power-server-cpus-compared/4


Support is usually why people go OEM, as many OEMs will come out the next day and replace anything, but in CR that won't happen; I can get parts here faster than any store or Dell does, so for me it makes sense to build them all, and to date I have not had one die on me. I just took one offline that had been running for almost 8 years, an old Socket 604 dual Xeon 2.4GHz with 2GB of RAM; it was a MySQL box with 15K SCSI drives, and I'm turning it into a dev test bed.

Dell boards, PSUs, and RAM are the same things you can buy from providers, just with custom BIOSes and form factors. They just put it in a purdy box and offer support services :)

But when the budget allows, I don't mind ordering a nice Dell; our roaming profile server is a nice PowerVault box with 6x 76GB 15K SAS drives, a quad Xeon, and only 3GB of RAM that cost nearly $5k.

Anyways, this is what I have put together so far:

  • Adaptec RAID 5805 PCI Express x8 SATA/SAS Controller Card, 8 Internal Ports, Low Profile, 512MB DDR2 Cache (includes the larger bracket)
  • 4x Intel X25-M 2.5in 80GB SATA 3.0Gb/s Solid State Drive, MLC, Model: SSDSA2MH080G2K5
  • Norco RPC-450 4U Heavy Duty Industrial Standard Rackmount Chassis, 3x 5.25-inch Drive Bays, 11x 3.5-inch Drive Bays
  • SilverStone 550W + 550W PS/2 Redundant Power Supply ST55GF, Active PFC, Supports SATA, Dual 40mm Fans
  • Intel S5520HCR Dual LGA1366 SSI EEB Server Motherboard
  • Intel Xeon E5620 Westmere-EP 2.4GHz 5.86GT/s Socket 1366 Quad-Core 32nm
  • 2x Crucial 6GB (2GBx3) DDR3 1333 (PC3-10600) 240-Pin Triple Channel Server Memory, CL9, ECC, Registered

Total: $2,960.87
 
For shipping and whatnot, have you tried JetBox (www.jetbox.com)? It's a logistics carrier between the US and Costa Rica. That might make shipping cheaper for the Dells; however, they may or may not honor your warranty if you ship to a US address and then JetBox ships to you in CR.

I sent you a private message about something as well; I want to talk about ICE down there if you're up for it.
 
Yes, JetBox is flaky; I use Aerocasillas.com for all of my orders, having heard too many horror stories about JetBox. I am opening a DHL account, since Aerocasillas annoyed me this holiday season, and since DHL has their own planes, it should work out well for more expensive rush orders; Aerocasillas claims they can't guarantee anything because they are simply a delivery service and do not own any planes.

Will check my PM.
 
Wrong Intel drives for SQL; you need X25-Es with SLC memory.
 
Doh!

The M series won't hold up, I assume? Too many reads and writes?

[EDIT] Wow, those E drives are pricey; perhaps I will be using SAS drives.
 
SAS will be cheaper, although you'll need more drives for the same I/O. You'll want 4 arrays anyway: OS, DB, log, tempdb.
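
For what it's worth, once the arrays exist, pointing SQL Server at them is straightforward. A rough sketch, assuming hypothetical drive letters (D: = DB array, L: = log array, T: = tempdb array) and a made-up database name:

    -- Hypothetical layout: D: = DB array, L: = log array, T: = tempdb array
    -- (the target folders must already exist)
    CREATE DATABASE ReportingDB
    ON PRIMARY (NAME = ReportingDB_data, FILENAME = 'D:\SQLData\ReportingDB.mdf')
    LOG ON     (NAME = ReportingDB_log,  FILENAME = 'L:\SQLLogs\ReportingDB.ldf');

    -- Relocate tempdb to its own array; the move takes effect
    -- the next time the SQL Server service restarts
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');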
 
I am running MSSQL 2008 R2 in a VM on shared SAN storage and it works just fine. Determine the need and size for it, rather than going by some arbitrary "you absolutely need this for performance," especially since you are only using it for internal reporting and fooling around. Your above spec is total overkill, imho.
 
Agreed. Your build could be close or completely overkill, though I'm leaning toward the latter.

Can you provide details on the application(s) being run on this? Would this box also serve other roles concurrently? If it is a vendor product, what are the minimum and/or recommended hardware specs? As it stands, it's difficult to suggest an accurate hardware baseline without context on the kinds of I/O happening.
 
I am waiting for some info back from the provider as to the amount of data we can expect. I picked the Xeon as there will initially be upwards of 5 people using this box for various reporting needs (our developers and business intelligence guys), and then possibly 15+ people accessing reports that pull data from this server, some very complex, like KPIs and such.

We also expect growth from this new change, and I want a good system so as not to be nailed with performance issues in, say, 6 months. Upgrading the I/O back end is easier than redoing an entire system, and I can also toss in another CPU and more memory if needed; I am thinking ahead to leave myself an easy upgrade path.

I thought I'd start off with 4x drives in a RAID 10, then add more over the coming months as needed to expand the RAID array for the main DB, along with more RAM to try to keep as much of the database in RAM as possible for better performance.
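
From what I have read, you can also check how much of each database actually ends up cached in RAM with a DMV query, something like this rough sketch (assuming SQL Server's standard 8KB pages):

    -- Approximate buffer pool usage per database (8KB pages -> MB)
    SELECT DB_NAME(database_id) AS database_name,
           COUNT(*) * 8 / 1024  AS cached_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY cached_mb DESC;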


The OS will be on its own RAID 1 with 2x SATA drives.

The log file I was going to dump to our NAS box, or if that is not a good idea, I can factor that in.

I didn't think about the tempdb, thanks for mentioning that!

I will ask the providers, who currently run the system for others, if they can give me an idea of the I/O numbers to expect. So far they have not gotten back to me even about database size, but we do know it is not too large, as the other Dell server we set up (we had it from our old systems we took offline) has 4x 50GB SSDs in RAID 10 and they said that would be fine for the size.
 
Good choice on the case, I have one and love it.


[Photos of the case: IMG_0327.JPG, IMG_0328.JPG]
 

IMO, you need some more specifications before you can put together a server.

For a processor, I would recommend a Xeon, X5500/X7500 or newer. See this blog post or this one as places to start; clock-for-clock, the 5500/7500 series processors handle many types of SQL operations substantially faster due to improved branch prediction, improved cache infrastructure, etc. (and newer generations, 5600, etc., continue to improve: more cores, etc.)

Are you purchasing socket licenses? At a typical price of $9k/socket for SQL Server Standard or $20k+ per socket for SQL Server Enterprise, you definitely need to decide whether you're going to be looking at a 1-, 2-, or 4-socket server (or whether you'll need that capability in the future); some Xeon processors don't support 4-way configurations.

TPC-E and TPC-C cover synthetic OLTP-type SQL loads; TPC-H covers more ad hoc, reporting-style loads.

You mentioned disk. This is definitely an important aspect of your build to consider! How many spindles do you need? If you have no idea, you should really do additional research before you shop. A common configuration is a 2-spindle RAID-1 for the OS, a 2-spindle RAID-1 for the .ldf, a 2-6 spindle RAID-1 (or RAID-10) for the tempdb (depending on load), and multiple 6-drive RAID-10 arrays for different logical databases, tables, indexes, etc. Some implementations use the simple recovery model and barely touch the tempdb; in that case, you could easily put both on the same array.
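
If this box really is just a reporting copy that gets refreshed from replication, switching it to the simple recovery model is a one-liner (a sketch with a placeholder database name); just understand you're giving up point-in-time restores on that copy:

    -- Simple recovery: the log truncates at checkpoints, so the .ldf stays
    -- small, at the cost of point-in-time restore capability
    ALTER DATABASE YourReportingDB SET RECOVERY SIMPLE;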

For spinning media, you'll pay nearly twice as much for 15K RPM drives... they'll provide more IOPS, especially in OLTP environments. In some data warehouse/ad-hoc environments, you'd be better off with larger 10K RPM drives (and multiple arrays).

For SSDs: MLC components (especially on the most recent manufacturing processes) provide around 3,000 write cycles (more with advanced ECC algorithms and reserve blocks); SLC will provide around 100k. SLC is going to cost you... check out Fusion-io; they have good monitoring tools to keep an eye on reserve blocks. Good data is hard to come by; check out this paper from Western Digital. A lot of tech sites quote the numbers (expected reads/writes) from this paper.

An inexpensive way to add more spindles is to use direct-attached storage; iSCSI/Fibre Channel storage is more expensive, but provides more options: sharing storage arrays between multiple servers, SQL Server clustering, snapshots...

Don't use an inexpensive NAS box for the .ldf.

If you don't know how to analyze your proposed workload, I'd be happy to point you in the right direction to start generating estimates.
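
As one starting point, SQL Server keeps cumulative per-file I/O counters you can sample on any instance you already run; a sketch using the standard DMVs (the counters reset when the service restarts):

    -- Reads/writes and I/O stall time per database file since the last restart
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.num_of_writes,
           vfs.io_stall_read_ms,
           vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    JOIN sys.master_files AS mf
      ON mf.database_id = vfs.database_id
     AND mf.file_id     = vfs.file_id
    ORDER BY vfs.num_of_reads + vfs.num_of_writes DESC;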


Other posters had good feedback on your component choices. Don't forget +$9k for a 1-socket SQL Server license. (As soon as you increase the total build cost by 4x due to software licensing, that extra $1k to get a server from an OEM with a 3-year, 5x10, 4-hour on-site warranty doesn't look so bad!)
 
calebb, I can't thank you enough for the info you provided; I've got some reading to do! The license is being provided by our mother company, SQL 2008 R2 Enterprise, since they are handling the other servers in the main host for the main servers. I am starting to wonder if the new provider even knows what you know about MSSQL setups!

dashpuppy, I love the case; I have 6 of them already with other custom servers built in them.

MSSQL seems to be a big step up from our previous MySQL boxes, and we also didn't, and won't, have the load we could have with this new system coming online. As noted, this won't be a main production box, just one we can use to do reporting without worrying about affecting the main site; it will be replicated to every 15 minutes.

Once we move to this new system, I will be turning another DB server I have right now (2x quad-core Opterons, 10GB of RAM, and a RAID 10 of SATA drives on an Adaptec 3805 controller) into another replication box, solely to be an offsite backup for the main servers.
 

The only thing I hate is that you have to take one of the 2 cages out to change hard drives, as they only seat in from the rear, unless you buy the pull-out caddy units.
 

I still say that you should just take one of your existing servers, run a virtualization hypervisor on it to handle whatever that server is doing now, and then put MSSQL into a VM as well. I seriously can't imagine that this wouldn't work based on your expected load.

However, as a gearhead I can see the appeal of buying new stuff, unboxing it, taking in the smell of shiny new PCBs, etc., so it kind of depends on what your personal objective is. ;)
 
And why not? I've built out plenty of barebones Tyan servers without any problems.
Mostly replacement parts availability. Unless you're going to order 3 motherboards for each server you build, I'd rather go with an OEM with a huge supply chain and guaranteed replacement stock for 5 years after manufacture.

In the past 3 weeks I've had to deal with hardware failures on 3 servers (probably related to some cooling issues from 3 weeks ago that are finally taking a toll): 2 motherboards and 1 power supply. HP had the replacement mobo to me in 45 minutes (yes, 45 minutes) on a 6-year-old box. Dell had a full replacement server [sans hard drives] delivered in about 6.5 hours because it had to come 1,200 miles. Total downtime: less than 8 hours between the 2 of them.

Now, if I had, say, an Intel 3000-based board in some beige box, what do you think the chances of finding that exact motherboard in stock locally would be? How long would that take? If it's not local, what's the fastest shipping option from somewhere that stocks them? Being in Canada, most American sites ship old stock from the States, if they'll even sell it to us, so it has to clear customs. And if I can't find the exact model, I have to find a similar model with the same chipset... and so on.

Overall, the cost of going with an OEM with decent support is absolutely peanuts compared to the potential downtime you risk by saving maybe $1,000 on hardware upfront, especially on a production SQL server, where you're looking at a minimum of $10k in licensing alone.

On non-critical servers: sure, have fun. Hell, I still have some desktops with Windows 98 licenses on the side being used as servers on the archive/testing side. But the archive/testing side never requires brand-new hardware; if you say it does, you're wasting someone's money. And if it's only in testing until it goes into production, go back to rule #1.
 
Certainly makes sense. I guess for our office it hasn't been an issue, as I keep on top of new hardware and OSes: we don't run any 2003 servers, they are all 2008, and new servers we build run Server 2008 R2. All desktops run Windows 7, all of our systems are web based or use web services on the back end, our database systems are MySQL and now MSSQL, and I just upgraded our Exchange 2007 to Exchange 2010 for the new features.

We are small enough right now that I can implement new systems without breaking anything; all systems are built in house, and our dev team loves new systems so they can use new features.

I do agree that if you can get good support, do it, but as I said, in Costa Rica it can be flaky enough that I won't risk semi-okay OEM support from people who have no clue, versus just having an extra server lying around to switch over to if needed.
 