I want like 24 of those in raid 10. Would come up to around 110TB of usable space. I honestly don't even know what I'd do with all that, I have 19TB total between my 3 arrays and it's more than I need... for now.
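Back-of-the-envelope on that figure (guessing roughly 10 TB drives purely to match the ~110TB number): RAID 10 mirrors everything, so usable space is half the raw capacity of the 24 drives.

echo $(( 24 / 2 * 10 ))   # 12 mirrored pairs' worth of space: 120 TB, i.e. about 109 TiB, in line with the ~110TB quoted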
Download all the porn!
NAME
zfshome
  raidz2-0
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
  raidz2-1
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
    Western Digital RED 4TB
logs
  mirror-2
    Intel S3700 100GB (underprovisioned to 15GB)
    Intel S3700 100GB (underprovisioned to 15GB)
cache
  Samsung 850 Pro 128GB
  Samsung 850 Pro 128GB
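For anyone curious how a layout like that is assembled, it boils down to a single zpool create along these lines (just a sketch; the sdX device names below are placeholders, and in practice you'd point at /dev/disk/by-id paths instead):

zpool create zfshome \
  raidz2 sda sdb sdc sdd sde sdf \
  raidz2 sdg sdh sdi sdj sdk sdl \
  log mirror sdm sdn \
  cache sdo sdp

The two raidz2 vdevs get striped together, the mirrored SSD pair becomes the SLOG, and the two 850 Pros act as independent L2ARC devices.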
stupid
what's stupid about 10 4TB WD Red's?
That packaging is odd, but that's about it.
You need a hug today EnderW?
Zarathustra[H];1041110029 said: I like teasers and all, but I think his point is, we want to see cool and interesting server solutions, not really unboxings. I don't feel as strongly about it as EnderW does, but I see where he is coming from.
I understand the point too, but I'll play the other side of the card (just for argument's sake).
Getting all butt-hurt about someone posting 10 HD's in the packaging is just funny. They were just showing off. Someone jealous much?
my comment was not directed to Wibla's post; it was the fact that I had to scrap the new thread I was working on because someone posted a reply too soon
I understand the point too, but I'll play the other side of the card (just for argument's sake). This entire thread is a "showoff" thread, that's the whole point of the entire thread. Getting all butt-hurt about someone posting 10 HD's in the packaging is just funny. They were just showing off. Someone jealous much?
wow what the fuck are you talking about?
There needs to be a new saying around here. From this moment forward we shall refer to what EnderW was doing as [E]mo, or simply [E].
Some examples:
- That guy was totally being an [E]-kid.
- [E]mo people always act like anyone else cares.
- [E] jeans look funny.
- That kid is so [E] I can't tell what sex it is.
wow what the fuck are you talking about?
http://en.wikipedia.org/wiki/Jumping_to_conclusions
Zarathustra[H]: how much did you pay for that chassis? How do you like the hotswap bays and sled construction? I have been wary of Norco in the past because of their history of questionable backplanes; got any closeups of the trays and backplane?
Edit: what PSU is that?
snip.
Wow that is impressive! But Windows 7? At least use a real server operating system.
With a max of 4 users there is no need for a real server OS. The only non-server parts in my setup that have caused problems are the Norco backplanes; one of them actually caught fire.
Zarathustra[H];1041129522 said: Yikes! What happened?
Apparently the PSU had shut down. Not aware of the root cause, I switched it back on, then smelled something burning and actually saw a small fire on the Norco backplane. Result,
*snip*
The supplier had never heard of problems with Norco backplanes, but has started selling his own brand of cases, claiming these are equipped with reliable backplanes.
Failing backplanes are another reason why I run 6-disk RAID arrays: with the 4-disk backplanes I can thus avoid having more than one disk of any given array on each backplane. Not too long ago another backplane failed (not so dramatically), degrading three RAID-5 arrays. I shut down the server, replaced the faulty backplane, and recovered all three arrays. Even with the burned backplane, no data was lost.
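To make that placement concrete (a purely hypothetical layout, not his actual wiring), with four hypothetical 6-disk arrays A through D spread across six 4-slot backplanes, each backplane carries one disk per array:

backplane 1: A1 B1 C1 D1
backplane 2: A2 B2 C2 D2
backplane 3: A3 B3 C3 D3
backplane 4: A4 B4 C4 D4
backplane 5: A5 B5 C5 D5
backplane 6: A6 B6 C6 D6

If any one backplane dies, each array loses only a single disk, so every array stays up degraded and can be rebuilt once the backplane is swapped.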
What are the 4TB and 3TB drives you are running?
Seagate consumer drives, ST3000DM001 and ST4000DM000, and the drives themselves have been performing reliably so far.
Ouch! Did that take out anything with it too like drives?
Norco seems to have a really bad track record for backplanes. First time I hear of one crapping out THAT badly though.
[root@nas ~]# zpool status pool
pool: pool
state: ONLINE
status: The pool is formatted using a legacy on-disk format. The pool can
still be used, but some features are unavailable.
action: Upgrade the pool using 'zpool upgrade'. Once this is done, the
pool will no longer be accessible on software that does not support
feature flags.
scan: scrub repaired 0 in 5h7m with 0 errors on Sat Sep 27 15:07:01 2014
config:
NAME STATE READ WRITE CKSUM
pool ONLINE 0 0 0
raidz2-0 ONLINE 0 0 0
scsi-35000c50034f36cff ONLINE 0 0 0
scsi-35000c50034eb58bb ONLINE 0 0 0
scsi-35000c50034f44577 ONLINE 0 0 0
scsi-35000c50034e85e4b ONLINE 0 0 0
scsi-35000c50034f422b7 ONLINE 0 0 0
scsi-35000c50034e85c3f ONLINE 0 0 0
scsi-35000c50040cf0c4f ONLINE 0 0 0
scsi-35000c500409ae567 ONLINE 0 0 0
scsi-35000c500409946ff ONLINE 0 0 0
scsi-35000c5003c95a907 ONLINE 0 0 0
scsi-35000c50034fbe17b ONLINE 0 0 0
scsi-35000c50034f3dfc7 ONLINE 0 0 0
raidz2-1 ONLINE 0 0 0
scsi-35000c50034f3cc5f ONLINE 0 0 0
scsi-35000c50034f3e81f ONLINE 0 0 0
scsi-35000c50034ea0857 ONLINE 0 0 0
scsi-35000c50034ff6167 ONLINE 0 0 0
scsi-35000c50034f3decf ONLINE 0 0 0
scsi-35000c50034f421c7 ONLINE 0 0 0
scsi-35000c50034f3daeb ONLINE 0 0 0
scsi-35000c50034ff1b8b ONLINE 0 0 0
scsi-35000c50034f42db7 ONLINE 0 0 0
scsi-35000c50034f3d3ab ONLINE 0 0 0
scsi-35000c50034e011d3 ONLINE 0 0 0
scsi-35000c5003c95abdf ONLINE 0 0 0
errors: No known data errors
[root@nas ~]# df -h|grep -v tmpfs|grep -v oot
Filesystem Size Used Avail Use% Mounted on
pool 36T 14T 22T 39% /pool
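If the owner ever wants the newer feature flags, the fix the status message points at is a one-liner, with the caveat it already spells out (a sketch; once upgraded, ZFS implementations without feature-flag support can no longer import the pool):

[root@nas ~]# zpool upgrade         # lists pools still on a legacy format or missing features
[root@nas ~]# zpool upgrade pool    # upgrades this pool; this is one-way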