Yeah, the 2TB drive limit on these controllers is exactly the size I'm moving up to: the current 1TB drives are being replaced with 2TB drives. One of the pitfalls of using older hardware.
There shouldn't be a lot of load on this box, about 7 workstations accessing it for data storage/retrieval...
It does seem like it. Everything on the pool is currently replicated to a backup solution, so I could play around with it... However, after thinking about it, for a savings of sub-500GB I don't really care to bother with it, or the potential hassle that comes with it.
When the DDT becomes too big...
I was thinking it would be good for the VM backup dataset, as the core of pretty much all the machines is the same. However, if the gain is not that great, compression may give me a pretty similar result without the performance penalty. I did know that once the DDT no longer fit in RAM it would...
If memory serves, dedupe can be enabled at the dataset level... or is it pool-wide?
I remember reading somewhere that it was 10GB of RAM per 1TB deduplicated, or somewhere along those lines.
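The back-of-the-envelope version of that rule of thumb goes something like this (just a sketch; the 320 bytes per in-core DDT entry and the 128K recordsize are assumed figures, not anything official):

    bytes_per_ddt_entry = 320            # commonly quoted in-core cost per unique block (assumption)
    recordsize = 128 * 1024              # default 128K recordsize (assumption)
    deduped_data = 1 * 1024**4           # 1 TiB of deduplicated data
    unique_blocks = deduped_data // recordsize
    ddt_ram = unique_blocks * bytes_per_ddt_entry
    print("~%.1f GiB of DDT per TiB at 128K records" % (ddt_ram / 1024.0**3))

That works out to roughly 2.5GiB per TiB at 128K records; smaller records push the number up fast, which is where the 5-10GB (or worse) per TB figures come from.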
It will have 4x trunked GigE for data transfer, and it may get a fibre card for faster ESXi replication...
I am looking for a bit of feedback on what CPU to put in my soon to be rebuilt ZFS storage server.
Old spec was a Q6600 with 8GB RAM, 21 spindles in 3x 7-drive raidz60.
I recycled the CPU/mobo/RAM out of it into another project, so it's basically a ground-up rebuild.
I plan on moving it up to 2x...
I have a basically new Intel SBCE blade center with 4 blades; it was used for about a week for a proof of concept that ended up not going through. Configuration as follows:
Intel SBCE Blade Center:
3x IXM5414E Blade Server Ethernet Switch
Brocade FCSW4BR 4GB SAN Switch Module
SBCECMM2...
So I'm not sure if anyone would be interested in this, but I figured I would post it.
I have a basically new Intel Blade Center server. This unit has less than 10 days of run time on it; it was purchased for a project and we decided to go another route.
This is an Intel Blade Center SBCE...
Where in Canada is it?
I just quoted FedEx to ship an SBCE full of blades on a skid for $350, and that's including my discount... so you are probably looking at $500 or so...
Yup, loaded with 250GB drives, and less than 100 hours on them :)
The blades are all Intels, dual 5160s, 2GB RAM, Brocade switch module, console module, 4GB mezzanines.
Can't really complain... I'm just dreading how loud the blades are going to be.
Here comes your tutorial:
1) Acquire a properly sized USB key
2) Connect the USB key to the computer
3) Go to Start > Run, type in "CMD" (no quotes), and press Enter
4) In the new CMD prompt window, type "diskpart" (again, no quotes)
5) In the new diskpart window, type the following commands to format and make...
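The usual sequence looks something like this (sketch only; "disk 1" is just a placeholder for whatever number your USB key shows up as under list disk, so double-check it before you run clean):

    list disk
    select disk 1
    clean
    create partition primary
    select partition 1
    active
    format fs=fat32 quick
    assign
    exit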
You're lucky... here it's not uncommon to see 35°C in the summer... most of the time for days at a time... If ever there was a time I wanted to die... it's then.
Math looks good to me... Hmm, maybe I don't remember as well as I used to :(
Your power cost is about the same as mine; mine...
Is hydro expensive for you? $20 for the 2950 seems a bit high to me...
Last time I metered my draw it was a loaded PE 1850, an HP ML110 G5 with 12 spindles, a 48-port ProCurve, and an old 15" CRT, and it worked out to ~$18 a month... I have a hard time seeing that 2950 pulling more than all...
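For anyone wanting to sanity-check their own bill, the math is just average draw x hours x rate; here is a rough sketch with round numbers (the 250W and $0.10/kWh are assumptions for illustration, not my metered figures):

    avg_draw_watts = 250                             # assumed average draw for the whole stack
    rate_per_kwh = 0.10                              # assumed electricity rate in $/kWh
    kwh_per_month = avg_draw_watts * 24 * 30 / 1000.0
    print("%.0f kWh/month -> $%.2f/month" % (kwh_per_month, kwh_per_month * rate_per_kwh))

250W around the clock at $0.10/kWh lands right around $18 a month, which is why the 2950 figure looks high to me.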
Depending on the throughput you are expecting, you may run out of bandwidth. If the integrated NIC is PCI and you add another PCI NIC, you will be limited to ~100MB/s between the two, and probably some nasty collisions.
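To put a number on that, the ceiling for plain 32-bit/33MHz PCI works out as below (standard bus figures only; protocol overhead is why I say ~100MB/s in practice):

    bus_width_bytes = 4                              # 32-bit PCI
    clock_hz = 33.33e6                               # 33MHz bus clock
    theoretical_mb_s = bus_width_bytes * clock_hz / 1e6
    print("Shared PCI bus ceiling: ~%.0f MB/s" % theoretical_mb_s)

That ~133MB/s is shared by every device on the bus, so two NICs (plus anything else hanging off it) end up fighting over roughly 100MB/s of real throughput.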
Are you able to test the controller in another machine? It may be DA giving you grief.
Also try it with just a single SATA 3Gb/s drive connected to the controller... It may not be negotiating down to SATA II properly.
The drives are NOT capable of 6Gb/s. The interconnect "pipe", if you will, between the drive and the controller is capable of that speed, not the drive itself. You will be hard-pressed to saturate even a 1.5Gb/s connection, except under burst conditions, with most conventional HDDs.
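To put numbers on it (the 120MB/s sustained figure is just my ballpark assumption for a 7200rpm drive of that era):

    # SATA uses 8b/10b encoding, so only 80% of the line rate carries payload.
    def sata_payload_mb_s(link_gbps):
        return link_gbps * 1e9 * 0.8 / 8 / 1e6       # line rate -> payload bytes/s -> MB/s

    for link in (1.5, 3.0, 6.0):
        print("SATA %.1fGb/s link -> ~%.0f MB/s usable" % (link, sata_payload_mb_s(link)))

    typical_hdd_sustained = 120                      # MB/s, assumed ballpark for a conventional HDD

Even the 1.5Gb/s link (~150MB/s usable) sits above what the platters can sustain outside of cache bursts.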
What firmware...
PowerEdge 1950/2950s can be had for ~$400-600 on eBay, and they will do 64-bit guests. 1850s will run 4.1 just fine (I currently have 3 at home); they are 64-bit capable, though only at the host level, so they will only do 32-bit guests. The 1950/2950, being Socket 771 based, will do 64-bit guests.
Shoot me a...
I have the UD5 version of that board on my test bench; it runs fine with 24GB of Kingston 1600 HyperX, 1600 Vengeance, and XMS3 CL7 and CL9. The only issues that I have seen are mostly on the Sabertooth (a very picky board, because of the cheap LGA sockets), and mostly with Patriot memory. Aside...
I have a couple dozen machines all running 24GB on X58 without issue. Most of them are even running 1600MHz ;)
You shouldn't have any problems, unless it's a Sabertooth with Patriot memory >.> they seem to be very picky.
You are correct, you will need to carve LUNs out of that 4TB pool. Remember to make them 2TB minus 512 bytes (or smaller), then just present the LUNs to ESXi.
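If you want the exact ceiling, it falls out of a 32-bit sector count; quick sketch of the arithmetic:

    TiB = 1024**4
    max_lun_bytes = 2 * TiB - 512
    sectors = max_lun_bytes // 512
    print("%d bytes = %d sectors (0x%X)" % (max_lun_bytes, sectors, sectors))

That is 2,199,023,255,040 bytes, or 4,294,967,295 sectors (0xFFFFFFFF), the most a 32-bit sector count can address, which is where the "2TB minus 512 bytes" figure comes from.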
I doubt that you will need a dual-CPU system for a home lab; save the money and pick up better storage and RAM first. Lynnfields are great for home labs, and hell, even most production systems.
I would stick with Intel... We do here. When the new AMD G34 platform launched a while ago we had...
It's actually pretty awful up here >.>
Nope, I'm west/central, in Alberta, 3 hours north of the Montana border in Calgary. We service all of Alberta, and some of BC.
You're right about the cost of living, though; in Toronto it's through the roof... though it is relatively high here as well (costs...
The thing that most people here seem to be missing, or not aware of, since it seems a lot of people posting have no idea how the pirate 'scene' works outside of torrenting...
Everything is automated... once a release group preps something and gives it the OK for release (maybe a dozen people...
Sounds like the IT market is oversaturated down there...
Up here in Canada (at least my section of it) IT jobs are relatively plentiful... now we don't have a lot of them open, but there are enough that if someone was looking for one he could find one.
I'm not sure if the people discussing...
I meant to say "I Might Offer"
We have had claims rejected from WD and Seagate for writing on the outside of the drives... and we send hundreds of drives back to them.
Worth it to go spend the $40 and buy a label maker; then you can label your other gear as well: switch ports, cables...
Lol, needed a bunch, so why not... we use them up pretty quick around here, so it makes more sense to order an entire case instead of 5-15 at a time... costs a bit less too.
Ordered them directly from Scythe :cool:
Never knew that about sleeve bearings... I will need to keep that in mind.
I have been out of the loop for a bit with cooling. Are YateLoons still considered the best fan out there (120mm)?
If so, what model are the popular ones... I am having a tough time finding them listed anywhere... If I can track down a model I was planning to have my purchaser bring me a...
It's in an HP ML110 G5, 3x PCI-E slots in x8 mechanical... not sure what the electrical is... but that shouldn't be the cap, as even a single PCI-E 2.0 lane is capable of 500MB/s, so even assuming it's a x4 slot, that's 2GB/s I can push through that slot... But I will check when I get the chance...
6x 1TB 7200.12 Seagates on a 1068e, dual-core Xeon 3075, 8GB RAM...
ZFSGURU-benchmark, version 1
Test size: 32.000 gigabytes (GiB)
Test rounds: 3
Cooldown period: 2 seconds
Sector size override: default (no override)
Number of disks: 6 disks
disk 1: gpt/da0
disk 2: gpt/da1
disk 3...
I reread through the thread and saw that a couple of reboots solved the issue; I will try that here shortly after the benchmarks are done... you're going to be scratching your head when you see these...
I will be leaving it up on my EU seedbox indefinitely.
I tossed an install of this on my spare storage dev box that I have been playing with... getting some odd benchmark results... I am rerunning the tests now and will post the results when it finishes.
Configuration is dualcore...
Both machines really... I am wondering why I cannot max out GigE between a 6 spindle array and another machine.
The storage machine with 6 spindles is acting as a ZFS backend for the ESX machine with guest OSes running on it (the screenshots are from those guest OSes testing disk performance...
I figured this would be the right place to share some of my findings and recent tests involving ZFS, since it seems a lot of people have picked up interest here in the past months.
I have a fairly good-sized ZFS storage server currently in production (15TB, 21 spindles) that has been...
Looking to sell this leftover i7 970, we picked up two for a project and only ended up needing one.
Would like to have it sold before the end of the week, so it's priced to move!
Would be interested in trading for 1156 Xeon gear (mobo, CPU, 4GB 1333MHz RAM modules)
$550 shipped in north...