New Server - Burn In Suggestions?

fiyawerx

Hey guys... first let me say this forum has been immensely helpful in the research I did trying to come up with a new home VM / NAS solution. I've played around with napp-it in the past, but on some old dying hardware, so I wanted to build something new. After all my research I decided on the all-in-one route, with the following hardware (some chosen due to sales/deals, like the 4TB HGST drives). I wanted 6 drives, but NewEgg had a limit of 5 during the sale, and now that I actually HAVE them I might stick with 5. I've read that I may take a slight performance hit running a 5-disk raidz2, but the majority of my storage needs will be Plex / photos / home theater stuff and other system backups.

Here's what I got - hopefully I didn't miss anything major. So far ESXi 5.5u2 is running out of the box (including network drivers), and I'm getting ready to set up the napp-it appliance to test that out, just to see that everything is baseline functional. My main question is what sort of testing I should do to burn the system in before giving it the all clear. Any specific benchmarks? Good software tests? I don't mind reformatting or anything if I need to, as I won't be moving any real data over until I'm comfortable.

I appreciate any advice you guys might have, and thanks again, looking forward to this thing!

Specs:
CASE: FRACTAL DESIGN R4|FD-CA-DEF-R4-BL
MOBO: SuperMicro X10SL7-F
PSU: ROSEWILL| CAPSTONE-550-M
CPU: INTEL|XEON E3-1230V3 3.3GHz
RAM: MEM 8Gx2 ECC|CRUCIAL CT2KIT102472BD160B (waiting for the price to come down before getting a second 16GB kit)
HDD1: SSD 256G|CRUCIAL CT256MX100SSD1 - Using this as my primary ESXi datastore for now, possibly a mirror in the future, but my actual VMs are not critical, and I will try to have them backing up to the storage pool.
HDD2 (x5): 4TB|HGST H3IKNAS40003272SN

And I'm not usually one for cable management, but I felt inspired. Hopefully I didn't make any critical errors here. Thanks in advance!

<snipped image: photo of the finished build>
 
I set my server up a few weeks ago with a similar setup and hardware. I ran MemTest86+ for 4 days straight to make sure the RAM cleared. I did some base benchmarking on the HDDs, then tested them in the ZFS RAID setup I was going to use. Once I had done a bit of tweaking to get more performance and was satisfied with the hardware stability, I started putting my data onto the server.
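Roughly, the ZFS side of that testing boils down to: build the pool you actually plan to keep, fill it with throwaway data, then scrub and check for checksum errors. A minimal sketch - the pool name and disk names are just placeholders for whatever your drives show up as:

Code:
# create the raidz2 pool you plan to use (disk names are examples)
zpool create tank raidz2 disk1 disk2 disk3 disk4 disk5
# write ~100GB of incompressible test data onto it
dd if=/dev/urandom of=/tank/burnin.bin bs=1M count=100000
# verify every block end to end and look for READ/WRITE/CKSUM errors
zpool scrub tank
zpool status -v tank

If the scrub comes back clean with zero checksum errors, you can be fairly confident in the drives and the controller path.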
 
Thanks. I ran memtest overnight so far with no problems; right now I'm running HD Tune's long scan on the drives one by one to check out the sectors. I wasn't too sure what the other standards were for checking. What did you use to check the HDDs?
 

I always run multiple passes of badblocks in destructive read/write mode on my hard drives. That will help you verify all the sectors are good.

Code:
# badblocks -wsv /dev/<device>
Checking for bad blocks in read-write mode
From block 0 to 488386583
Testing with pattern 0xaa: done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and comparing: done
Testing with pattern 0xff: 22.93% done, 4:09:55 elapsed. (0/0/0 errors)
[...]
Testing with pattern 0x00: done
Reading and comparing: done
Pass completed, 0 bad blocks found. (0/0/0 errors)

Make sure you ONLY use the -w flag on a drive without critical data, as it is destructive. I always use this to burn in my hard drives before I ever write a bit of data to them.
 

No problem there since these are new, I'll give that a try. Is there any issue with running it on multiple drives simultaneously, or is that even possible?
 

No issues, I always ran it on multiple drives at once, otherwise it would have taken weeks.... lol
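If it helps, here's roughly how I kick them all off at once - just a sketch, where /dev/sdb through /dev/sdf and the log paths are placeholders for whatever your five drives actually are, and -b 4096 simply matches the 4K sectors on these drives:

Code:
# run a destructive badblocks pass on each drive in the background,
# with a per-drive log so the results can be checked later
for d in sdb sdc sdd sde sdf; do
    nohup badblocks -b 4096 -wsv -o /root/bb-$d.log /dev/$d > /root/bb-$d.out 2>&1 &
done
# check progress and results later
tail /root/bb-*.out
cat /root/bb-*.log   # an empty log = no bad blocks found on that drive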
 

Sorry for the off-topic question, but can you tell me what temps you are seeing at idle and what RPM your stock fan runs at (a ballpark estimate at idle)?

I am testing a very similar setup (E3-1220V3 with the stock fan on an X10SL7) and my CPU temp sits at 55C idle, which seems high to me. I was expecting something more like the low 40s.

Oh, BTW, I'm testing out the poor man's version of the FD R4, aka the NZXT Source 210. I'll likely be getting one of each in the near future.

TIA
 

Currently it's about 78°F in my office, and according to the IPMI system management:

FANA: Normal, 900 RPM (set to "Standard" speed in the Supermicro fan config)

CPU Temp: Normal, 42°C (only one VM running ATM, scanning all 5 HDDs with HD Tune Pro - not very intensive, but I'm using the stock cooler with the stock goop as well)

System Temp: Normal, 49°C
Peripheral Temp: Normal, 39°C
PCH Temp: Normal, 46°C
VRM Temp: Normal, 41°C
DIMMA1 Temp: Normal, 33°C
DIMMA2 Temp: N/A (not present)
DIMMB1 Temp: Normal, 31°C
DIMMB2 Temp: N/A (not present)

Setting the fan speed to "Full Speed" changes it to 2k RPM (I can still barely hear it through the case) and drops the temps to:

CPU Temp: Normal, 36°C
System Temp: Normal, 43°C
Peripheral Temp: Normal, 40°C
PCH Temp: Normal, 44°C
VRM Temp: Normal, 35°C
DIMMA1 Temp: Normal, 30°C
DIMMA2 Temp: N/A (not present)
DIMMB1 Temp: Normal, 30°C
DIMMB2 Temp: N/A (not present)

A reading from HD Tune (these drives have been scanning solidly for about 6 hours now, between 120MB/s and 160MB/s):

<snipped image: HD Tune screenshot>


I picked up a box of "Silent" 120mm Cooler Masters for a few bucks with this too; I might put one on the case door facing the mobo and see if that does anything. I expect some more noise that way, so maybe not worth the trade-off for my particular load. I'll bump up the VM CPUs and stress them a bit later and see how hot the temps get. I don't really have anything to compare this to, so I'm not sure if these temps are considered OK or not.
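If it's handy, the same readings can be pulled from a shell with ipmitool during the stress runs instead of refreshing the web UI - a rough sketch, with the BMC address and login as placeholders:

Code:
# poll the board's temperature and fan sensors once a minute over IPMI
while true; do
    ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN sdr type Temperature
    ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P ADMIN sdr type Fan
    sleep 60
done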
 
Bummed :(

Ok, actually... confused :confused:

The first scan showed a section of consistently bad sectors on this drive:
<snipped image: HD Tune error scan showing a run of bad blocks>


I redid the scan, selecting to start just before the bad section reported above, and now everything is clear?
<snipped image: HD Tune error scan of the same area, all clear>


Current SMART status:
<snipped image: SMART status screenshot>
 

Thank you for the temp info. In reviewing the E3-1200 v3 Family Datasheet, it looks like the temps are fine - 40C to 73C should be okay.
 
On another scan (all 5 drives) the same drive is showing up with different damaged spots now? SMART info is still showing good... could this be a cable issue, or does it seem like this would indicate a bad drive?

<snipped image: HD Tune error scan with damaged spots in different places>
 

According to the SMART data that drive is fine; I wouldn't worry about it anyway.
 

Thanks. Something definitely seems off now though; I restarted all the tests after a reboot, it was green for a bit, and now I'm seeing it like this:

<snipped image: HD Tune error scan, same drive showing errors again>


Same drive. I'm just worried because if there IS a problem, I want to take care of it now, before I start actually loading data on it and have ZFS pull the drive because it hit some errors. SMART still SEEMS OK. Going to replace the cable (these were pretty cheap ones - I ordered extras) and see if it comes back any cleaner. Once I get a good HD Tune scan I'm going to run badblocks on them all as well.
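If the command line is handier than screenshots, smartctl will show the attributes that matter for the cable-vs-drive question - a quick sketch, assuming the disk shows up as /dev/sdb wherever you run it:

Code:
# media problems show up as reallocated / pending / uncorrectable sectors;
# cable or link problems show up as UDMA CRC errors
smartctl -A /dev/sdb | egrep 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable|UDMA_CRC'
# run the drive's own long self-test, then check the result when it finishes
smartctl -t long /dev/sdb
smartctl -l selftest /dev/sdb

If the CRC count climbs while the sector counts stay at zero, that points at the cable or port rather than the drive itself.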
 
I've never used HD Tune's error scan before, so I can't attest to how reliable it is. But badblocks will for sure let you know if the sectors are good, if you still have any doubts.
 

Going to give that a test now. I just ordered a 6th drive too, so I'll be running more tests once that comes in.
 
Before I add any new drive to a RAID array I always dd the entire drive. First I dd /dev/zero to the drive, then dd the drive to /dev/null. This tests both write and read for every sector.

So basically:

Code:
# write pass: fill the whole drive with zeros (destructive - wipes the drive)
dd if=/dev/zero of=/dev/[drive] bs=1M
# read pass: read every sector back and discard it
dd if=/dev/[drive] of=/dev/null bs=1M

I obviously don't want to test that code right now so I may possibly have that wrong but pretty sure that's the syntax.

I keep an eye on dmesg for any oddities while I do this. At the first sign of any kind of I/O error, the drive gets RMAed.

I imagine the tools mentioned here probably do the same thing so that works too.
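For the dmesg part, something like this works while the dd passes run - just a sketch; adjust the match pattern for whatever your controller actually logs:

Code:
# re-check the kernel log every 30 seconds for ATA / I/O errors
watch -n 30 "dmesg | egrep -i 'ata|i/o error|failed' | tail -20"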
 