Solaris COMSTAR disk export

simplesam · Limp Gawd · Joined: Jul 1, 2009 · Messages: 136
Has anyone here used Solaris Express and COMSTAR to export raw disks, skipping the ZFS file system entirely?

Has anyone tried the COMSTAR mptt target-mode driver to export disks over LSI 1068 HBAs? Did it work?

How about exporting via COMSTAR over InfiniBand? Did that work? Does it work with older Mellanox cards off eBay, or only with the ConnectX line?

What I'd like to do, if mptt works and works well:
Connect dozens of disks to a Solaris RAID card via SAS expanders and create multiple RAID volumes.
Add two dual-port SAS HBAs in target mode.
Export the hardware RAID volumes as raw disks to 2-4 Windows machines that have SAS HBAs in them.

Not clustered storage, just centralized RAID storage.

I'd like to experiment with it, but I can't see spending the cash on cards just to test something that may not work at all. I'm hoping someone here has already tried something like this.
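
For concreteness, here's roughly what the target side would have to look like if mptt actually attaches as a COMSTAR port provider. This is only a sketch of the generic sbdadm/stmfadm steps wrapped in a little Python; the device paths are placeholders for whatever volumes the RAID card presents, and whether the SAS ports actually show up as targets is exactly what I'm asking about.

```python
#!/usr/bin/env python
# Rough sketch of the COMSTAR side, run on the Solaris Express box.
# Device paths below are placeholders for whatever the hardware RAID
# controller presents; whether the mptt SAS target ports register at
# all is the open question in this thread.
import subprocess

RAW_VOLUMES = [
    "/dev/rdsk/c2t0d0s2",   # placeholder: first hardware RAID volume
    "/dev/rdsk/c2t1d0s2",   # placeholder: second hardware RAID volume
]

def run(cmd):
    """Run an admin command, echo it, and return its output."""
    print("# " + " ".join(cmd))
    return subprocess.check_output(cmd).decode()

# 1. Make sure the STMF framework is running.
run(["svcadm", "enable", "stmf"])

# 2. Wrap each raw RAID volume in a block-device logical unit.
#    sbdadm prints the new LU's GUID as the first field of its
#    last output line.
for dev in RAW_VOLUMES:
    out = run(["sbdadm", "create-lu", dev])
    guid = out.strip().splitlines()[-1].split()[0]

    # 3. Expose the LU to all initiators on all target ports; a real
    #    setup would restrict this with host and target groups.
    run(["stmfadm", "add-view", guid])

# 4. If the mptt driver attached, the SAS HBA ports should show up
#    in the target list alongside any other port providers.
print(run(["stmfadm", "list-target", "-v"]))
```

If that last target listing never shows the SAS ports, the whole plan is dead before I buy any cables, which is why I'm asking before spending anything.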
 
What's your application for this? iSCSI over 10GbE is another thing to consider: cheap HBAs, standardized switches, fewer worries. Higher latency, sure, but unless your application depends on low latency, I'd go with CIFS shares or iSCSI.

I do have a 1068 card I could test with if you really want. I don't have the right kind of cables to do it the right way, but I could rig something if you're interested.
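
For comparison, switching the transport is mostly a matter of enabling a different port provider. Something like this (again just a sketch, assuming the logical units and default views from the setup above already exist) would publish the same LUs over iSCSI instead:

```python
#!/usr/bin/env python
# Sketch of the iSCSI alternative: publish the same STMF logical units
# over the COMSTAR iSCSI port provider instead of SAS target mode.
# Assumes the LUs and all-hosts views already exist.
import subprocess

def run(cmd):
    print("# " + " ".join(cmd))
    return subprocess.check_output(cmd).decode()

# Enable the COMSTAR iSCSI target service (with its dependencies).
run(["svcadm", "enable", "-r", "svc:/network/iscsi/target:default"])

# Create an iSCSI target node; itadm generates the IQN automatically.
run(["itadm", "create-target"])

# Any LU with a default (all hosts, all targets) view is now reachable
# over whatever NICs the box has -- GigE today, 10GbE later.
print(run(["itadm", "list-target", "-v"]))
```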
 
I was thinking that Dell SAS 5/E HBAs can be had cheap on eBay, and that I could just cable the ports point to point for now.

With SAS wide ports at 12 Gb/s now (4 lanes x 3 Gb/s), and 24 Gb/s SAS 2 HBAs (4 x 6 Gb/s) available soon, I figure it could be cheap, fast, direct-attached RAID storage for multiple machines, with lower CPU usage than 10 Gb iSCSI.
 
They're cheap, but once you fill all the PCIe slots you're out of expansion capability. SAS switches seem to run $9000 for a 9-port switch (!), compared to $4500 for a 12-port switch. 10 gigabit Ethernet might have higher CPU utilization, but if all the box is doing is storage, who cares whether the CPU sits at 40% or 60%?

Also, with iSCSI you can use a mixed-speed network. A Dell 6248 switch that takes four 10G CX4 connections and 48 (!) GigE connections is under $3000. Then you probably don't even need new NICs for the storage client machines; just use the existing ones.

I repeat: what application will you be running? Using SAS rather than Ethernet might be a completely reasonable thing to do... but depending on your initial and final scale, your budget, and your demands, you might be barking up the wrong tree.
 
Also, if you wanted to do IB, you can get some decently priced SDR switches, but then again it's all going to come down to what you're actually trying to accomplish and your budget.
 
Let's not get carried away with building a config. Right now, I'm just talking about testing... and wondering if anyone has tried the mptt driver.

If testing works, it could be mixed storage for:
16,000 active email accounts (not Exchange),
a couple of web servers hosting several thousand small sites,
the OS drives of actively running virtual machines,
VM data drives, and
backup images of the VM OS and data drives.
Different RAID types would be configured for the various needs.

But it would need a lot of testing before I moved anything but backup jobs onto it.

If mptt works, scaling to 2-4 hosts via point-to-point SAS is as far as I would take the testing for now.

Reasonable scaling of disks shouldn't be a problem. One or two decent RAID controllers can drive a lot of disks via SAS expanders and be carved into many RAID groups.

One other thing I'd want to test: I know ZFS can use host RAM as a disk cache, but ZFS wouldn't be in the picture here, so I want to find out whether host RAM can still act as a cache for the exported raw disks.
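
A crude way to probe that from a test initiator would be to read the same region of the exported disk twice and compare throughput. This is only a sketch with a placeholder device path, and it only says something about the target if the initiator's own cache is taken out of the picture (dropped between passes, or a test region bigger than the initiator's RAM):

```python
#!/usr/bin/env python
# Crude probe: read the same region of the exported LU twice and
# compare throughput.  If the warm pass is far faster than the
# spindles could deliver, some layer along the path is caching.
# DEVICE is a placeholder for the LU as seen by the test initiator.
import time

DEVICE = "/dev/sdb"             # placeholder path on the test initiator
CHUNK = 1024 * 1024             # 1 MiB per read
TOTAL = 512 * 1024 * 1024       # probe the first 512 MiB of the device

def pass_mb_per_s():
    start = time.time()
    done = 0
    with open(DEVICE, "rb", buffering=0) as dev:
        while done < TOTAL:
            buf = dev.read(min(CHUNK, TOTAL - done))
            if not buf:
                break
            done += len(buf)
    return done / (time.time() - start) / 1e6

cold = pass_mb_per_s()   # the target has not seen these blocks yet
warm = pass_mb_per_s()   # faster only if something cached them
print("cold pass: %.0f MB/s   warm pass: %.0f MB/s" % (cold, warm))
```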

If someone says that mptt works well, I should be able to pick up a few SAS HBAs and cables to test with for under $300.
 
And you want to do this on Solaris?

Abso-fucking-lutely not. It's too unstable and unpredictable under load. Not COMSTAR; Solaris itself.

I've been working with Sun for nearly 20 years. Even attempting what you're talking about is data suicide.
 
OK. That's the answer I need.

That's the impression I got from ZFS. It may be great in theory, but there are too many threads in the Solaris ZFS forum saying "help, my data is destroyed", and they just sit there unanswered.

I was hoping that raw disks would be ok.

--------
As far as I know, no other OS has a target-mode driver for 1068 controllers. If there's a package that does this on another OS, please let me know.

Volumes would be created on the hardware RAID card. So as long as OS ????? doesn't wreck the data, the disks and RAID config could always be moved directly to an initiator system and imported on another RAID card.

But if an OS is unstable and wrecks the disk structure... well, that would be bad.
 
Oh goddess no. Solaris 10 is so horrifyingly unstable that I get a lot of calls from shops going "HELP. OUR PRODUCTION ENVIRONMENT IS GONE!" These are shops running UFS or VxFS, too. Containers are even worse: I get one or two emails a week asking if I can restore a system with a base install corrupted by container glitches, and I can only give them the same "advice" Sun gives them: reboot, restore from tape, reinstall. I wish I were joking. Twenty years, and you could not pay me to advocate their stuff now.

FreeBSD supports a few target modes, but I'm not sure exactly what it is you're trying to do. Honestly, it sounds like you shouldn't be doing this and should instead be looking at SAN systems. Very few RAID cards today can import arrays created on another card without major, major migraines. Even the LSI stuff is a nightmare.
 