12th Generation Dell Servers

Seems like I have used other drives in my Dell servers but I cannot recall right now if they were pulls from other Dells or not.
 
Got the T320 in, and unfortunately, like last time, it throws a drive fault. In the LSI config (from the Ctrl-R prompt) it shows the CacheCade as OK, but it throws the amber LED after a minute or so, and in VMware the health status shows the drive faulted. Tried with an Intel 320 and a Crucial M4. About the only other thing I could try is putting two of them in there; perhaps it wants to see a RAID 1 with two SSDs, but I don't have another sled and don't feel like popping a drive out and using that one, as I'm 99% sure it won't work anyway. Maybe they will release a firmware update down the road that allows it, once it's not the latest and greatest controller.

Vendor lock-in pisses me off. At least I put in my own RAM: $360 vs. $960 for 48GB.
 

ram & cpu :)

It's weird that you are getting errors; I have regular 1TB drives on my R415 with a PERC card and no issues at all.
 
Which controller? This is the H710P, and it was still throwing errors even after deleting the CacheCade, just leaving the SSD in there unconfigured.
 
OK, I tried a few different things, and now it seems to be working without throwing an error. I'm using the 80GB Intel 320 again. Previously I had nuked the drives completely; this time I wiped the drive, then re-initialized it in Windows (giving it an MBR, but I didn't create a partition). Also, before, I was putting it in the last of the drive bays (#7), leaving #6 empty (0-5 are in use); this time I put it in #6. And I created the CacheCade in the System Setup GUI (POST -> F11 System Setup -> Device Setup), rather than the very basic LSI Ctrl-R utility.

I'm not sure which of these mattered, but it's a step in the right direction. I'll have to do some testing once the rest of the server is up and running to see if it's actually working.
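If it starts acting up again, rather than rebooting into Ctrl-R every time, I'll probably just poll the controller from the OS with MegaCli and watch the SSD's state. Rough sketch of what I have in mind; the MegaCli path and adapter number are assumptions, adjust for your box:

#!/usr/bin/env python3
"""Dump physical-drive states from a PERC/LSI controller via MegaCli.

Sketch only: the MegaCli install path and adapter number below are
assumptions, not something specific to the T320 in this thread.
"""
import subprocess

MEGACLI = "/opt/MegaRAID/MegaCli/MegaCli64"  # adjust to wherever MegaCli lives
ADAPTER = "0"                                # adjust if the H710P isn't adapter 0

def drive_states():
    """Parse `MegaCli -PDList` output into slot / media type / firmware state records."""
    out = subprocess.run([MEGACLI, "-PDList", "-a" + ADAPTER],
                         capture_output=True, text=True, check=True).stdout
    drives, cur = [], {}
    for line in out.splitlines():
        line = line.strip()
        if line.startswith("Slot Number:"):
            if cur:
                drives.append(cur)
            cur = {"slot": line.split(":", 1)[1].strip()}
        elif line.startswith("Firmware state:"):
            cur["state"] = line.split(":", 1)[1].strip()
        elif line.startswith("Media Type:"):
            cur["media"] = line.split(":", 1)[1].strip()
    if cur:
        drives.append(cur)
    return drives

if __name__ == "__main__":
    for d in drive_states():
        mark = "  <-- look here" if "Failed" in d.get("state", "") else ""
        print(f"slot {d.get('slot'):>2}  {d.get('media', '?'):<22} {d.get('state', '?')}{mark}")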
 

Probably because the RAID configuration is saved on the card and on the drives :) doing a fresh RAID config probably removed the error...
 
I didn't nuke the array drives, just the SSD. If I get time I might try it again to see what step actually fixed it.
 

Interesting, I'll have to try that on the next server I order. Got a few SSDs lying around.
 
Is CacheCade something you need to purchase a license code for, or is it a standard feature of the H700 and H710?
 
Why do your servers have drives in them? Get a SAN, VMware.

Because not all small shops need SANs; they add a lot of extra expense and can complicate small designs.

What is the point of having redundant nodes/switches/NICs but not a redundant SAN? You would ideally need two in order to reduce risk, as you know.

Yes, the "better" SANs have dual controllers, dual everything, but you get what you pay for, and they could still go down, which would affect all your VMs.

If you use local storage, replicate to a storage device/NAS, then replicate off-site/online, you not only reduce your costs but also cut down on complexity.

Say you have 3 nodes running 20 VMs each on a SAN and it dies: you are potentially down 60 VMs. If you had local storage instead and one node goes down, you lose 20, and you could boot those 20 VMs from your backup device on the other nodes and be up and running much quicker. If you had dual SANs this would not be an issue.
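To put rough numbers on that failure-domain point (same figures as above):

# Back-of-the-envelope version of the failure-domain maths above
# (3 nodes x 20 VMs each, single shared SAN vs. local storage).
nodes = 3
vms_per_node = 20

# Single shared SAN dies: every VM on every node loses its storage.
down_if_san_dies = nodes * vms_per_node            # 60 VMs down

# Local storage, one node dies: only that node's VMs are affected...
down_if_node_dies = vms_per_node                   # 20 VMs down
# ...and the surviving nodes have slots free to boot them from backup.
spare_slots = (nodes - 1) * vms_per_node           # 40 VM slots remaining

print(down_if_san_dies, down_if_node_dies, spare_slots)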

Local storage tends to be suited more to smaller environments or businesses that don't need vMotion; however, with 5.1 you can now do vMotion without the need for shared storage (awesome).

It all depends on your business requirements and how you manage risk, or, more realistically in some SMEs, what you can afford.

Please don't think that just because you use VMware for virtualisation you must need a SAN, i.e. no local disks.

Another good option (not suitable for everyone) is actually utilising the local disks on all the nodes and then using VSA or similar to create a SAN/clustered SAN.

Many options are available, and VMware are really helping our recovery times for small businesses; their Essentials packages are excellent value for money.

Sorry for long post, hopefully you can see where I am coming from.
 
It's not a security douchebag, it's me or another competent admin. You wouldn't be the first tech I got fired.

And no, I don't stand in your bubble; I'm down at the end of the aisle working and just monitoring.

Get off your high horse.

I always tell them NO, just dispatch the drive; I don't need a tech to swap a caddy onto a drive.


I store snapshots on the local servers, in case the SAN takes a dump, so I can bring the VMs back immediately...

And it's Hyper-V...
 
Dell's so cheap they don't even ship a caddy with the replacement drive? LOL!
 

And since when has a caddy ever failed? Like never. And really, how hard is it to pull the four screws out and swap the drive?
 
As a side note, I recently found out that you can get the equivalent of the R720xd in a T620 if you configure the chassis for 32 2.5" drive bays.

The trade-off is 2U vs. 5U, but you gain 6 extra drive slots and also have room for GPU computing in the T620. They really need to include the T620 in the list of rackable servers...
 

I've had the "help" at a datacenter swap a drive but not put any screws on the caddy.
 

How do you think they keep prices down? By not handing out caddies.

Any Admin that is too good to move a drive to a caddy will soon find themselves obsolete. ;)
 
Tell that to the guys in China who sell the ones they pull off drives people send back on eBay, lol.
 

I likewise always told them not to dispatch a tech. I rarely had problems with hard drives on PowerEdge 1950/2950 models; it was always the bloody memory. Seems like I was constantly getting new RAM in from Dell (and by new I mean refurbished no-name brand RAM - for RMA orders :confused:).
 
Seeing that you have quite a bit of experience with these Dell servers, I was wondering if you would know whether it is possible to use an Intel 520 SSD or similar as a boot drive. I've ordered the T320 with the H710 and 8x 3.5" hot-plug bays, but would like to know if the SSD could be attached to the onboard SATA. If not, is it possible to get a caddy with a 2.5" adapter and do it via the H710? Thanks.
 
You can definitely hook SATA drives up to the mainboard; you'll need an SFF-8087 breakout cable, as they aren't individual SATA ports. You should be able to boot from it, though I can't say I've tried it myself.
 

Thanks, Dave. Just checked the manual. It has only a single diagram of the board layout and connections, and the labeling is pretty vague, but I can confirm the mini-SAS connection. I'll check it out when it arrives. Cheers.
 