[H]OT Deal? DELL PowerEdge R610/R710 Servers sub $200? ($149-199)

BTW, some of the pictures of the used Dell servers on Amazon appear to be very deceptive. They take a picture of 6 servers in a rack when you are purchasing a single server.
 
what'd you use the fusion drive for? Like a db cache?
It's basically a big SSD that plugs into an x8 PCIe slot, except you can't boot from it. Even though it's 7 years old, it's roughly as fast as a modern NVMe drive, with about twice the IOPS. I use it to hold the DB, as well as to store ramdrive images (the ramdrive holds the tempdb files and the IIS directory for fast .NET execution).
 
I have an R710, it's "ok". The stock PERC is shit, like 40 MB/sec. An H710 upgrade will more than double that, but last I saw an upgrade kit, it was over $100. You need a 56xx processor to run ESXi 6.5, or I think you have to run 6.0 or lower with an older processor. 6.0 is fine really, though. I only use mine now when I vMotion vCenter over to it to patch my T630. I personally wouldn't buy anything less than an R720 now.
 
Anyone who owns one, how much power do they actually use? How much $ per month would you say it costs to run one of these?

I've got one with a pair of L5640s, 72GB RAM, and six 3.5" SATA disks, running ESXi 6.5 with an average of nine VMs running. It pulls 189W and is quieter than my desktop. It does push out some heat, but it's in the basement so whatever.
 
I am new to this stuff. If I am running ESXi and have a VM with FreeNAS, how many cores should I assign, and how much memory? It will be on an R720 with dual 8-core CPUs, 128GB RAM, and 24TB of HDDs (8 drives on hardware RAID). This is for home use. Thanks!
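For a rough starting point, here's a back-of-envelope sketch using the commonly cited FreeNAS rule of thumb (8 GB base RAM plus ~1 GB per TB of raw storage); `freenas_ram_gb` is a made-up helper for illustration, not anything official:

```python
# Back-of-envelope FreeNAS RAM sizing, using the oft-quoted rule of
# thumb of 8 GB base plus ~1 GB of RAM per TB of raw storage.
# (freenas_ram_gb is a hypothetical helper, not a FreeNAS API.)

def freenas_ram_gb(raw_storage_tb, base_gb=8, gb_per_tb=1):
    """Suggested RAM to assign a FreeNAS VM for a given raw pool size."""
    return base_gb + gb_per_tb * raw_storage_tb

# The R720 in the question: 24 TB raw
print(freenas_ram_gb(24))  # 32 -> 32 GB is a comfortable allocation
```

Cores matter much less for a home NAS workload; 2 vCPUs is a common choice.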
 
Curious, what are most people using these for at home besides a storage server? You say you could run 4-5 VMs... what do people run on them exactly?

So if I wanted to set up one of these to be used primarily for storage (8-12TB total, RAID 5), what all would you need to get one up and going?
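As a quick sanity check on what "8-12TB in RAID 5" takes in terms of drives (illustrative numbers only; `raid5_usable` is a made-up helper):

```python
def raid5_usable(num_drives, drive_tb):
    """RAID 5 usable capacity: one drive's worth of space goes to parity."""
    assert num_drives >= 3, "RAID 5 needs at least 3 drives"
    return (num_drives - 1) * drive_tb

# Example configs: six 2 TB drives land inside the 8-12 TB target,
# eight 2 TB drives overshoot it a bit.
print(raid5_usable(6, 2))  # 10 (TB usable)
print(raid5_usable(8, 2))  # 14 (TB usable)
```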

It's just for poops and giggles for the most part, though there may be a handful of people who have legitimate uses for it. These servers are all over the place on eBay. I bought an R610 with 48GB RAM, 2x 6-core/12-thread Xeon CPUs, and a pair of 10k RPM SAS drives for right around $200 about a year ago. With some clever riser card/cable configuration I was able to connect a modern GPU to it and used it as a 4K video editing box. Back then my main machine was an i7 3770K, which worked fine for the editing, but the processing took hours, which meant no gaming while the processing took place. Moving that workload over to the server let me play while my videos were being transcoded.

It sees little use now that my main machine is a 3900X. Ultimately these processors are just too old to be of any REAL value IMO. I also have an R7 2700 that blows away the dual-Xeon server in both multi-threaded and single-threaded workloads while being whisper quiet. These 1U servers are LOUD.
 
These are definitely an edge use case. My usage is for a SQL database that's highly RAM and iops dependent, so it's a good fit.
Pros:
Cheap
Built like a tank, great reliability and redundancy (dual PSUs, 4 ethernet ports), enterprise level build quality
Ability to put in lots o' memory (I have 144GB in mine that I got for $100 on ebay)
CPU power is OK, not great, but good enough for most things.
Well documented and patched.

Cons:
Most are limited to SATA II
PERC cards are limited to 2TB drives and have proprietary cables - upgrading is expensive
Only PCIe x8 slots
Big, heavy, inefficient compared with more modern hardware.

RamonGTP: Did you do a write-up on how to mod the PCIe slot for video cards? If that was you, thank you! I did it and now have a GT730 running in mine. Nice for higher resolutions and digital output.
 
RamonGTP: Did you do a write-up on how to mod the PCIe slot for video cards? If that was you, thank you! I did it and now have a GT730 running in mine. Nice for higher resolutions and digital output.

Sadly, I can't take credit for that. My way involved a Dremel cutting tool and carefully opening up the back end of one of the x8 slots so I could fit the riser cable in there. Obviously you don't get all 16 lanes this way, but I don't think it's affected performance much at all, especially considering I only have an RX480 on it.
 
As someone who knows nothing about servers and is finally going to dive in once the right hardware is found, would this HP ProLiant be as good a value as the Dells?
It's a little pricier than the Dell, but as far as performance goes, it's the HP twin of the Dell. Keep in mind both companies do things differently in terms of their BIOS, system layout, compatibility, and parts, so you might find you like one brand better than the other. I like them both, as they're built like tanks. :)
 
As an Amazon Associate, HardForum may earn from qualifying purchases.
Curious, what are most people using these for at home besides a storage server? You say you could run 4-5 VMs... what do people run on them exactly?

So if I wanted to set up one of these to be used primarily for storage (8-12TB total, RAID 5), what all would you need to get one up and going?
If you're using it for storage, you have a ton of options: you can run FreeNAS on it bare metal or in a VM, or you could literally just install Win10, make a giant share on the RAID controller, and share it out. The controllers typically have a battery-backed cache, so you'll get 512MB-1GB+ of NVMe-level performance before writes hit the drives, but even then SAS drives are fast, and you can replace them with SSDs if you want.

The only drawback is the noise of the fans, but these are industrial machines so it comes with the territory.
 
So when they say run VMs, are they referring to something like a Plex server? (VM is virtual machine, right?) So running that in its own virtual server/OS?
I had these questions too when starting out. A 'hypervisor' is the 'OS' that runs the virtual machines. A hypervisor can be installed 'bare metal' (directly on the hardware like any other OS), or can run inside a host operating system like Windows (Microsoft's Hyper-V can do this). Some of the more popular ones are ESXi (VMware) and Hyper-V (Microsoft), and there are even free, open-source ones like Proxmox.

I haven't played with any of these yet, but I decided to go with Proxmox since it doesn't tie me into a paid platform down the road.
 
Pretty solid deals here. I am still running an R710 as a file server that chugs right along.
Reliability on these things is awesome--it's why you still see servers for sale that are well beyond their prime in terms of specs. Like my dual LGA771 HP DL380 Gen 5: it runs great and is super reliable, and the original MSRP of over $7000 makes you quickly realize why--they put the money into them for sure.
 
Install ESXi on it. It's easy and effective. Then make VMs with 2-4 cores and test on those VMs. Break 'em, build 'em, destroy 'em, do it all over again. Install an SSD for the ones that you are going to destroy often.
Yep, or install Proxmox and do the same. And push it to see how many VMs you can put on there. A lot of people keep Linux ISOs handy to spin up a VM whenever they want one. You can also run Windows inside VMs so you don't have to worry about updates breaking things. Snapshot the VM and just make a new VM off the snapshot every time. :)
 
I'd love one of these old rack-mount servers, but the problem is the noise.
The good thing about these is that they're designed to be remote-controlled and headless, so you can put them pretty much anywhere. I have some in the attic and garage at my parents' house, and in other places they can't even hear them. I've even heard of people putting them under the bed. :D
 
Even still, man these things are whiny.

What I really want is one of those tiny Dell tower servers that I can stuff an HBA card in and run FreeNAS on. Maybe even one of those HP ProLiant MicroServers.
The 1U ones can be, because their fans have to spin so much faster than 2U fans. But the 2U units, when not loaded, can be quite quiet. Not desktop-PC silent, but definitely not datacenter jet-howling.

Those Dell and HP towers are just tower versions of these servers. The layout will be a bit different, but the noise profile will be similar--they're still not a desktop. That being said, if you find one on sale or used, they can be a serious bargain--so much so that you may be tempted to turn one into a gaming rig. :eek:
 
Power usage is no picnic either. We have a bunch of the R series servers and I hate getting on calls with field techs, can never hear them.
That must be a bad environment then. I was talking to my wife yesterday over FaceTime while working on the R410, and the only time I couldn't hear her perfectly was when I initially booted it and the fans went to 100%, and that was for only 5 seconds.

The fans will rightfully get loud if the ambient temps are 80°F+.
 
Can I stuff noctua fans in it? :p
If you get a Supermicro one, you typically can modify certain things like the fans. Those are more like the server analog of regular computers, where you can pick the parts and build them how you want.
 
That too and the heat.

But hey, if the goal is to get a small VM lab going to practice Xen or VMware, it's a great way to get into some hardware for it.

These are ridiculous for the purposes of a NAS. In fact, even the Dell T30 that was so popular a few years back is overkill. A Synology with 5 bays is like 600 bucks. After you're done putting a system together, or buying the T30 and adding an HBA card to it, the Synology becomes cheaper. What a world.
The heat can be great in the winter. :D I actually power down some of the servers in the summer.

You can do even more than a VM lab--people build out whole home data centers: dedicated connections shared with family, servers of all sorts, pfSense, NAS, and more. There are whole communities dedicated to this, like ServeTheHome and reddit's homelab. These are also great places to find deals on equipment, sometimes even for just the price of shipping. ;)

And if you get the equipment used, you'll generally have more capability than just a Synology, which starts getting long in the tooth once you're really putting a lot of VMs on it.
 
Anyone who owns one, how much power do they actually use? How much $ per month would you say it costs to run one of these?
So I have several of these and use them alongside desktops. The UPSs all have a power-draw indicator, so I can tell how much more power the servers draw than the desktops. Honestly it's not that much more--50-100W at peak. When idling or at low usage they draw more than a desktop does, but at peak they're actually about the same. Think about it this way: servers generally come with 500-700W power supplies, which is pretty much standard for desktops today, so they're in the same ballpark except at idle, since they're not designed to idle as low.

Cost will depend on your utilities. We have a nuke plant in the back yard, so the water bill is higher than keeping a whole house of computers on 24x7, lol.
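For anyone pricing this out, the monthly cost is simple arithmetic. A sketch: the 189W figure comes from the earlier post, while the $0.13/kWh rate and the `monthly_cost_usd` helper are assumptions for illustration:

```python
def monthly_cost_usd(watts, cents_per_kwh=13, hours=24 * 30):
    """Electricity cost of a constant draw running 24/7 for a 30-day month."""
    kwh = watts / 1000 * hours
    return kwh * cents_per_kwh / 100

# The ~189 W R710 mentioned above, at an assumed $0.13/kWh:
print(round(monthly_cost_usd(189), 2))  # 17.69 -> roughly $18/month
```

Plug in your own utility rate; at $0.20/kWh the same box is closer to $27/month.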
 
Just listened to the sound of an R720 on YT. In one video someone was comparing a 2950 to an R710 (not sure how different a 710 is from a 720), but the 2950 was WAY louder than the 710. The other video I watched was an R720, and overall I don't think it was too bad--nothing like the 2950.
I love my 2950 startup noise. :D It's like having a turbine engine start up in front of you. :) But that's me. :oops: The R710 isn't much quieter upon startup, but it is quieter running--probably because it's more powerful than the 2950 and doesn't have to work as hard for what I throw at it. I haven't been able to hear a 720 in person--too expensive. :eek:
 
Keep in mind VMware is part of the reason these servers are being phased out of production. VMware's HCL support is incredibly poor, and they like to phase things out before the warranty even expires. x20-series Dells are not supported for ESXi 6.7, so the last version you can load is 6.5. I don't believe the x10 series supports 6.5 at all, and 6.0 goes EOL in less than a month. That said, if you want to run Windows there shouldn't be any issues, nor should there be any issues using Linux. If you're having hypervisor compatibility issues, I'd suggest either Hyper-V on Windows or Proxmox for a Linux-based hypervisor. Both can support just about anything you come up with, and you won't have to spend a bunch of time trying to sideload your Intel NIC drivers. *Glares at VMware*

As for noise, it's been so long since I've heard a 2950 that I wouldn't want to comment on it. In general, any 2U system will be slightly quieter and less whiny than a 1U. More modern systems have better power management and try to slow the fans down more than older systems would. But most of these servers still pull around 200W from the wall if they're filled with disks, so it's $$$ to keep them on 24/7. In a datacenter no one really cares, but in your house you'll notice a $20-a-month jump in your power bill. With LED lighting now, one server probably consumes more power each month than all of the lights in your home.

Pros:

Cheap to purchase
ECC memory
hot swap drive support / hardware RAID
lots of HDD slots
Dual PSU
Well built with quality components (For the most part)
Way more RAM slots
Out of band management (iDrac, etc)

Cons:

loud
power hungry
incredibly long, might not fit into your rack
old / possible hardware issues
software issues with vmware

For the power costs alone it's certainly possible to just build a new desktop and you'll probably break even around the 3 year mark. If you don't need tons of disks or memory, it might be a better option. If you're buying an 8 year old server, you'll probably either spend half your time on ebay finding replacement disks or end up having to buy new ones anyway, so keep that in mind in the costs.
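The 3-year break-even above checks out as rough arithmetic. A sketch with assumed numbers (a $700 desktop build replacing a 200W server with a 60W box, at an assumed $0.20/kWh rate; `breakeven_months` is a made-up helper):

```python
def breakeven_months(new_build_cost, old_watts, new_watts, cents_per_kwh=20):
    """Months until power savings pay off a more efficient replacement box."""
    saved_kwh = (old_watts - new_watts) / 1000 * 24 * 30  # per 30-day month
    saved_usd = saved_kwh * cents_per_kwh / 100
    return new_build_cost / saved_usd

# $700 build replacing a 200 W server with a 60 W desktop at $0.20/kWh:
print(round(breakeven_months(700, 200, 60), 1))  # 34.7 -> about 3 years
```

At cheaper power (say $0.13/kWh) the break-even stretches past 4 years, which is why the answer depends so heavily on your utility rate.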
Great summary! The software support issue is one of the reasons I plan to use Proxmox, as even the newest version will support my old Dell 2950s, and apparently I could even create a cluster with all the hardware I have. :eek: That would definitely be interesting, but it would start to burn some serious power.

The good thing about costs is that so many of these were produced, and at such high quality, that parts are common and cheap. I recently bought a load of drives for the price of what new caddies alone would have been. And I'm talking stupid cheap--I actually bought one of my 2950s bare, with CPUs only, for $30 a few years back, and now that I think about it, I haven't even paid $100 for any of my servers. :) If you look for them, the deals are out there on a level you won't find with desktop hardware. (y)
 
I mean, if someone is trying to get into the home server lab game, I have an X8DTI-LN4F with two E5620s and some RAM that I'd sell pretty cheap.
Yep, that's a turnkey that you can put in any case and be up and running. :)
 
Yep, that's a turnkey that you can put in any case and be up and running. :)
It's a fine setup. I just swapped it out of the chassis for a Threadripper. I'll probably post it up in FS/FT if no one wants it locally. My buddy's kid might want to play with it.
 
Yeah, after reconsidering and reading all the feedback, I too would pass on these. For what they're capable of, I don't think they're worth the cost in power, really. Like you said, unless you need to run something like a database that requires lots of RAM and storage, what the hell are you really going to do with one?
People talk about the power usage of these as if they need 1.21 gigawatts. :ROFLMAO: They're not bad at all, especially when you've only got one. I typically have two of them powered on and can't tell when I've turned them off from looking at the power bill--the house AC units use much more power. (y)

They're ideal for virtual machines, which opens the door to a lot of applications. :) Without virtualization, I don't think they have much use beyond being a highly reliable machine or file server.
 
BTW, some of the pictures of the used Dell servers on Amazon appear to be very deceptive. They take a picture of 6 servers in a rack when you are purchasing a single server.
Amazon?!! Deceptive?!? Say it isn't so?!? :ROFLMAO: :ROFLMAO:

One of the issues with buying used equipment is 'those types' of sellers. Just look carefully (like with anything else used) and you'll be fine. (y)
 
Nice! How much ram did you have and what processors?
I had the older E5-2670 8-core CPUs and 128GB of RAM.
[Attached image: intel_dual_xeon-Windows10.jpg]
 
I love my 2950 startup noise. :D It's like having a turbine engine start up in front of you. :) But that's me. :oops: The R710 isn't much quieter upon startup, but it is quieter running--probably because it's more powerful than the 2950 and doesn't have to work as hard for what I throw at it. I haven't been able to hear a 720 in person--too expensive. :eek:

I have my R720 all set up now and it's not loud at all. On startup it winds up pretty good, but during normal use with ESXi 6.7, a Windows Server 2019 VM, and a few Linux VMs, it's not loud at all. Still trying to get 10G Ethernet working on my 3705-E switch and X520 NIC--can't seem to get a link.
 
I have my R720 all set up now and it's not loud at all. On startup it winds up pretty good, but during normal use with ESXi 6.7, a Windows Server 2019 VM, and a few Linux VMs, it's not loud at all. Still trying to get 10G Ethernet working on my 3705-E switch and X520 NIC--can't seem to get a link.
Awesome. :) Are you using sfp modules, dac cable or rj45?
 
Awesome. :) Are you using sfp modules, dac cable or rj45?

I have these parts to make it work, but I'm guessing I need something else. I had a link light on the switch for a minute once, but never got a network connection. I can also see a red light coming out of the switch on port A only.

https://www.ebay.com/itm/Genuine-Intel-10Gbe-FTLX8571D3BCV-IT-E10GSFPSR-E65689-001-for-Adapter-X520-X710/153647489509?ssPageName=STRK:MEBIDX:IT&_trksid=p2057872.m2749.l264

https://www.ebay.com/itm/Cisco-X2-10GB-SR-X2-10GBase-SR-Module-Cisco-Genuine-Transceiver-1-YR-Warranty/273941713502?ssPageName=STRK:MEBIDX:IT&_trksid=p2057872.m2749.l2649

https://www.amazon.com/gp/product/B01CDDSYSQ/ref=ppx_yo_dt_b_asin_title_o06_s00?ie=UTF8&psc=1

Any ideas what I need to change/try?

*** Update: rebooted the server and the switch, and 10Gb started working.
 
As an eBay Associate, HardForum may earn from qualifying purchases.