No need for anything fancy like blades.
Anything labeled LSI 9211-8i on eBay should work with a 24-port SAS expander. Some are feature-locked by the vendor (IBM, HP, etc.), but can be flashed to a plain 9211-8i.
You can daisy chain them, but you have to remember, the SAS controller has a set...
1: It's driverless; your computer has no idea it is there. It needs to be connected to a SAS card, which then presents the disks connected to it to the PC as its own.
2: The card isn't detected by the mainboard; it's plugged into a PCIe slot simply to get power.
3: Going to leave...
Depends really.
First, pfSense, Trixbox, DNS Master/Slave and ADC Pri/Backup can be run on ultra-low-power boxes and still have performance to spare, with certain of those systems being perfectly suitable to run on the same hardware and OS.
Second, I'm wondering on how many ESXi boxes he...
Hi guys, for any of you using this mainboard, last week I reported some bugs with the board:
- CPU Fan 1 and Chassis Fan 1 headers always run at 100% speed.
- USB keyboard hangs in the BIOS if not cold-booted.
- USB mouse only moves vertically in the BIOS.
- Clock settings for my DDR3-2000 RAM...
I've had an Eee T91 for a few years now and I barely ever use the touchscreen, except maybe to pause a YT video.
For actual laptops, I'm thinking it's the same thing: you'll use touch maybe 2-3 times a week (if you use the laptop daily), but outside that, moving from keyboard to stretching...
I'm thinking Optiplex 760's or somewhere in those parts.
Even the positioning of the OEM sticker and the latch next to it match that model.
EDIT: that pallet is a mix of 2 size models though.
But I'm fairly certain it's the Optiplex 760 series or a slightly older model.
With the 2nd and 3rd smallest...
How exactly do you mean?
You can't get Apache to run on port 500 in 1 VM? Or are you having trouble assigning port 500 because it conflicts with other VMs?
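If it's the first case, binding Apache to port 500 is just a listener change; a minimal httpd.conf fragment (hostname and paths are placeholders):

```apache
# Make Apache listen on port 500 instead of (or in addition to) 80
Listen 500

<VirtualHost *:500>
    ServerName vm1.example.internal
    DocumentRoot /var/www/html
</VirtualHost>
```

If it's the second case: each VM has its own network stack, so port 500 only "conflicts" if two VMs sit behind the same NAT or port-forward rule on the host.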
+1 for the M1015; I'm using 2 of them, both paired with an HP SAS Expander.
Had no trouble flashing it from stock M1015 to IT on a P35 mobo, and recently reflashing it from IT to IR on a 990FX mobo (ASRock Fatal1ty).
The card is a beast. And DIRT cheap at that.
The original firmware I had on the 60s made the disks hard-lock until a power cycle after 45xx hours of operation (can't remember the exact number). It was a bug in the firmware code: it tried to update the SMART counter for hours run, and at that specific value the disk would lock up...
If I'm not mistaken, ZIL removal works like this: you start the removal procedure, ZFS empties the ZIL cache to disk, then tells you it's OK to remove.
If the ZIL disk dies in the middle of writes, you're still in a world of hurt and left with incomplete writes.
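The graceful path looks roughly like this, assuming a pool named `tank` with a dedicated log device (pool and device names are examples):

```shell
# Ask ZFS to retire the separate log device; ZFS flushes any
# outstanding intent-log records to the main pool first.
zpool remove tank sdb1

# Confirm the log vdev is gone before physically pulling the disk.
zpool status tank
```

This is the sanctioned removal; a log device dying uncleanly mid-write is the scenario where you can lose the most recent sync writes.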
dsumike: did you do firmware updates for that M4?
When I bought my 30 and 60GB ones, I had 450/250 R/W, with firmware updates that became 550/480.
Crucial doesn't use SandForce, and because of that they have to write their own firmware code. It took them a while to get the hang of it, but now these...
I use Dxtory (paid for it though) and mostly record RAW to my SAN or lossless, depending on the framerate I'm getting, if it pegs at 60fps recording lossless, I'm happy, if it dips, I go RAW.
Raw 1080p@30fps is about 200GB per 30 minutes; double that for 60fps.
Lossless 1080p@30fps I think to remember...
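Those figures line up with simple arithmetic; a quick sanity check, assuming uncompressed 1080p at 2 bytes per pixel (YUV 4:2:2-style packing — the exact pixel format Dxtory writes may differ):

```shell
# Bytes for 30 minutes of uncompressed 1080p at 30 fps, 2 bytes/pixel
WIDTH=1920; HEIGHT=1080; BYTES_PER_PIXEL=2; FPS=30; DURATION=1800

TOTAL=$((WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS * DURATION))
echo "$((TOTAL / 1000000000)) GB per 30 minutes"   # prints "223 GB per 30 minutes"
```

223 GB is the same ballpark as the 200 GB quoted above; a slightly tighter pixel format lands you right around 200.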
2 reasons: 1, because I had the RAM, and 2, because I'm using dedup and compression on several ZFS volumes; ZFS just noms away at any RAM you give it.
And I'm moving tons of data back and forth, so huge caches are very handy.
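To illustrate why dedup eats RAM: the dedup table wants to stay in memory, at a commonly cited rough cost of ~320 bytes per unique block. A back-of-envelope estimate (data size and block size are assumptions):

```shell
# Rough dedup-table RAM estimate: ~320 bytes of RAM per unique block.
# Example: 1 TiB of data stored in 128 KiB blocks.
DATA_BYTES=$((1024 * 1024 * 1024 * 1024))
BLOCK=131072
ENTRIES=$((DATA_BYTES / BLOCK))          # 8388608 unique blocks
DDT_BYTES=$((ENTRIES * 320))
echo "$((DDT_BYTES / 1024 / 1024)) MiB of RAM for the dedup table"   # prints "2560 MiB ..."
```

So every TiB of deduped data wants a couple of GiB of RAM before ARC even starts caching file data.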
Hoping that someone is mad enough to bring a eeebox "price and scale" thing on the market with the upcoming Intel "server" Atom. Hoping even further they go mad and do a dual socket version!
It's already been announced that it supports ECC RAM and has virtualisation support (current Atoms already do...
When the vendor says "no", don't accept it; keep pushing for a refund.
Google around and you'll find plenty of people who did it, and some who were funny enough to then give their refund to their favorite Linux distro as a donation.
Bwah, after some consideration about all the hardware I have here, I think I'm gonna rework it all into a more conventional setup with ESXi, with a central dedicated hardware SAN. (Right now I have one, but the ESXi boxes also have local SANs.)
One note about what I did now, ESXi on USB and SD, is...
I'm thinking it means that there is a software based cap set at 32GB in the allocation limit.
So even if you have 64G, the "Unlimited" cap you can set is actually 32G and it won't go over that.
For CPUs I'm unclear, but reflecting on VMware's overall behavior, I'd think that you can drop ESXi...
Nvidia Kernel Modules and Nvidia X driver are 2 different things.
In your case, it seems the kernel module is loading fine, but the X driver isn't (or is missing).
I would change your system so that it can load the vmware graphics driver for X and then go ahead and try to get the nvidia driver...
Yes, and if you're unlucky enough to use something like the VMware proprietary virtual storage controller instead of the emulated LSI, Windows might not start. It can handle driver installs live, but not if the storage controller it boots from is unsupported; then you have to do recovery console...
On the boards I have that do Onboard + Addon, the bios is only shown on the card I select as primary.
Correct me if I'm wrong, but is your problem with those live CDs? Or did you do an actual install and configure it yourself?
Because I could imagine that those live CD's might try to...
Just wanted to add: a large chunk of the small to medium sized companies (especially the older ones) are very alien to work in as an ITer who has lived his professional life at a corporate level. I've worked at Wang (when it was still called Wang :p), Dell, Dolmen, Staples, before I was fed up with...
Yes, but if you have 2 disks you want to join together, RAID0 makes more sense any day. You'll be fucked on data security either way, but for volatile data, at least with RAID0 you get a bonus in performance, which is possibly the only thing you ever need from volatile storage.
I completely...
Extents aren't RAID0 though, and 100% agreed about its use (can't think of a single one :@)
For what you are saying about stateless boot images, I can understand and in an enterprise environment, yes, it's a marginal effort to do everything from bootp.
But for not so large companies and home...
Since the card is an addon card and you have the Intel 2000, you should treat it that way. Use the onboard vga for esxi console and vm consoles (aka the card that is used as the vmware virtual device on the machines). Most mainboards let you configure to use onboard video as the main display...
I didn't mean to imply it was Linux, but since you port other stuff from Linux (and people are still porting drivers from Linux), the md code should be viable for the same treatment, you already support sticking drives together in volumes, so it shouldn't be that hard to do mirroring.
On the...
Nope, no BBWC and the dd for the local datastore is the dd inside the vm root, which is on the datastore.
The OS Image is a 4GB vmdk disk located on the local datastore, the ZFS store isn't (those are RDM'd).
And the ZFS pool dd is with the ZIL completely disabled to be sure that there's...
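For reference, the kind of dd invocation behind numbers like these; a sketch run inside the VM (file path and sizes are arbitrary examples):

```shell
# Sequential write test; conv=fsync forces the data to disk before dd
# reports its rate, so the guest page cache doesn't inflate the number.
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=256 conv=fsync

# Sequential read back (drop caches first on Linux for an honest read figure)
dd if=/tmp/ddtest.bin of=/dev/null bs=1M

rm -f /tmp/ddtest.bin
```

Without `conv=fsync` (or `oflag=direct`) a write test this small mostly measures RAM, not the datastore.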
It is 1 single setting to flag in Advanced Settings.
Yeah, hence why I commented on what ends up on disk rather than the details of the transport, which you can explain better as a VMware insider :p
The question was about possibility and viability, the OP already acknowledged that...
You can RDM local disks and it works nicely for me.
As for your confusion: raw access gives the VM complete access to the disk. SMART can be read, partitions made, and ESXi doesn't change a thing written to disk; the data the VM puts on the disk is exactly the same as when the disks were used on...
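Creating a local-disk RDM is one vmkfstools call per disk, roughly like this from the ESXi shell (device ID, datastore, and paths are examples):

```shell
# Physical-compatibility RDM: -z passes the raw device through, so SMART
# and the exact on-disk format stay visible to the guest.
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID \
  /vmfs/volumes/datastore1/myvm/disk1-rdm.vmdk
```

The generated .vmdk is just a mapping file; attach it to the VM like any other virtual disk.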
The thing with WHS storage vs a ZFS system is that WHS, if you want redundancy, can waste space rather badly, giving you no more redundancy than RAID 5 or 6 while not having all that great multi-disk performance either.
Using a setup like "Any Linux or Solaris Version/Offspring" with or...
Since ESXi isn't loving onboard RAID controllers all too much (ICH etc.), I made a mirror of my boot disks on my SAS controller.
Hardware Specs:
MSI P35 Platinum (Onboard ICH9R, Marvel SATA, Realtek NIC, all disabled)
Intel Core2Quad Q6600 2.4GHz @ 3.6GHz (400FSB)
8GB DDR2-800
IBM M1015 w/ 9211IR...
How can you know the difference?
I ordered them as Mini-SAS(Host) to SATA(Backplane), the label on the packaging states the same.
Any physical way I can verify they are the correct ones?
So, I have everything I ordered now.
Plugged everything in, IBM card shows up at boot, 0 drives detected.
I go into the CTRL+H interface and see the HP Expander there. So that's good: the IBM card is working and it's perfectly detecting the HP card when using 1 or 2 cables to link the cards...