
I think Gigabyte spent a lot of money developing the i-RAM and it didn't give the results they expected. So there's no newer version of the i-RAM.
 
If they were smart, they would have developed it for PCIe (x1, x2, x4, x8, x16).

Even x1 is what, 250 MB/sec in each direction?
 
SATA2 would be fine, but make it DDR2 and able to support at least 16 GB.
 
I'd be happy with SATA speeds going over a ribbon. 1 GB RAM sticks are cheap, but anything holding more than 4 at a time is priced astronomically high. I've got a lot of 1 GB DDR sticks, as I'm getting ready to retire a few Socket 939 boards. I might have done a pair of i-RAMs just to increase capacity to an 8 GB volume.
 
As cheap as DDR2 1 GB DIMMs are right now, they should put out an 8-, 12-, or even 16-DIMM version; you can get memory so cheaply now that these things would be really tempting as super-fast drives for gaming, the OS, or whatever.
 
There's always the HyperDrive4 if you're desperate for 16 gigs... even if it is a few dozen times more expensive than the i-RAM. No SATA2 on it though, which, for a relatively new product, makes virtually no sense.
 
There's always the HyperDrive4 if you're desperate for 16 gigs... even if it is a few dozen times more expensive than the i-RAM. No SATA2 on it though, which, for a relatively new product, makes virtually no sense.

That device is meant to be used where the bottleneck is IO rate (read: servers). Where IO rate rather than throughput is the bottleneck, SATA300 vs. SATA150 doesn't matter. So how does this not make sense?
 
It would matter if the device could transfer more than 150 MB/sec, and for a RAM-based drive, I don't understand why it wouldn't be able to. RAM transfer rates are measured in GB/sec.
 
No. That Hyperdrive SSD is too expensive to use where transfer rates are a real concern. Where transfer rates matter, the volume of data is usually very large, and it would be far more cost-effective to RAID a few hard drives to achieve the necessary transfer rate (and there the interface speed doesn't matter).

I don't think the i-RAM is practical for desktop use because its capacity is too small and the price is too high. For a workstation or server, the price is less of an issue, but the volatility of the RAM (battery backup or not) makes it far less attractive than flash-based drives, and the capacity is still too small to be very useful.

Flash-based SSDs aren't quite ready for mass consumption yet, and the i-RAM is an evolutionary dead end.
 
Am I reading the price right on the HyperDrive4? Are they asking $2400 for the board? Stupidly expensive.
 
There is another company making one. They are called ACard, and I took some pics of the DDR2 and SODIMM DDR2 versions.



[photo: ACard boards at Computex, day 1]




No word on price other than that it will be 'comparable to the i-RAM'. They are sending me one in September or early August.
 
Reading from the ACard:

"Automatic data backup/restore to/from 2.5" IDE Disk for preventing data loss in DDR"

So, aside from the broken English, does this mean it also has a 2.5" drive INSIDE the case? That would be awesome. If it can keep up with the RAM while it's not in use and keep an image of what's in the RAM, then the thing is almost foolproof.

Even barring things like a prolonged power outage, or an APC running out of juice in said power outage and then the internal backup battery running out: if the 2.5" drive has a copy of the data and you can easily restore it, then fantastic. This makes even more sense for the desktop, since if someone is going to move or go to a LAN, they don't have to stress out and watch the clock for fear of losing the OS drive.
 
Vista pretty much killed the i-RAM, TBH.

The i-RAM was great for XP, but the space requirements of Vista, newer operating systems, and programs really made it a thing of the past.


I have no idea why they scrapped the i-RAM 2; they could have just tweaked it a bit, added some more memory banks, allowed SATA2, and called it a day.
 
Sounds interesting. I'd like to know the price on it; the web site didn't have any information. Keep us posted.
 
My system partition is only ten gigs, and I could pretty easily squish everything down into six or eight gigs with a little creativity. I'm extremely interested in that ACard unit, depending on price.
 
If they were smart, they would have developed it for PCIe (x1, x2, x4, x8, x16).

Even x1 is what, 250 MB/sec in each direction?

Doesn't matter, because unless you're RAIDing somehow, you aren't going to exceed the limits of the SATA interface anyhow.



edit: I'm leaving my original statement so people understand why my statement was misunderstood.




But, here's the edit

Doesn't matter, because unless you are RAIDing somehow, you aren't going to be able to overcome the limits of the SATA interface anyhow, because it will be your bottleneck.
 
Doesn't matter, because unless you're RAIDing somehow, you aren't going to exceed the limits of the SATA interface anyhow.

Sorry... but that's not making much sense.
That's like saying I'm going to hook this 5000 MB/s drive to a 300 MB/s SATA cable and not exceed the limit of that interface? Hmm.

Anyway, the OP was saying that they should eliminate internal DISK interfaces (cables)
and go straight to the PCI/PCI-E bus. This should be easy; it's generally what a RAID or SCSI controller card is doing... right?
 
Doesn't matter, because unless you're RAIDing somehow, you aren't going to exceed the limits of the SATA interface anyhow.
What? RAM exceeds the transfer rates of SATA by a good deal. Suppose you had built the i-RAM on old, slow PC-133 SDRAM. SDRAM reads or writes 8 bytes at a time, 133 million times a second. That's about 1 GB/s. Serial ATA is only 150 or 300 MB/s so far. There are proposals to extend that to 600 and 1200 MB/s, but they're not expected for a few years. Only then will we exceed the transfer rate of single-channel memory that's been obsolete since 1999. If you build it from, say, dual-channel DDR2-800, that's got a theoretical bandwidth of 12.8 GB/s. x16 PCI Express is only 4 GB/s. You'd need 52 lanes to keep up with that device.
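Here's that arithmetic as a quick Python sketch (theoretical peaks only; assumes PCIe 1.x at 250 MB/s per lane, per direction):

```python
# Theoretical peak bandwidths from the numbers above, in MB/s.
PC133_SDRAM = 8 * 133        # 8 bytes x 133 MT/s ~= 1064, about 1 GB/s
DDR2_800_DUAL = 8 * 800 * 2  # dual-channel DDR2-800 = 12800, i.e. 12.8 GB/s
PCIE_LANE = 250              # PCIe 1.x, per lane, per direction

print(PC133_SDRAM, DDR2_800_DUAL)
print(DDR2_800_DUAL / PCIE_LANE)  # 51.2 -> you'd need 52 lanes
```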
I have no idea why they scrapped the i-RAM 2; they could have just tweaked it a bit, added some more memory banks, allowed SATA2, and called it a day.
And added SMART, please. Then reasonable devices could interact with it.
 
Sorry... but that's not making much sense.
That's like saying I'm going to hook this 5000 MB/s drive to a 300 MB/s SATA cable and not exceed the limit of that interface? Hmm.

Anyway, the OP was saying that they should eliminate internal DISK interfaces (cables)
and go straight to the PCI/PCI-E bus. This should be easy; it's generally what a RAID or SCSI controller card is doing... right?

You and unhappy_mage both misunderstand me.


No, I haven't seen a single RAID or SCSI controller that didn't have cables going to the hard drives :) so I don't know what the hell you're talking about. Please find me a RAID or SCSI controller that has no DATA CABLE ports on it.

The purpose of the I/F bus (PCI, PCI-E, PCI-X, etc.) is to do that data transfer, yes, but in the instance of the i-RAM, its only purpose for being in a slot is *POWER*. It's a "hard drive in a PCI slot", but it still needs to be connected to a port on your motherboard or a separate controller.

It would take more tech (no, it's not impossible to do) to make the thing you throw in the slot act as a drive without needing an interface cable, but I don't think that's what you were saying.

My experience with the i-RAM was that it NEVER came close to moving data ANYWHERE near the specification limits of the STORAGE PROTOCOL (i.e. there are no i-RAMs doing 150 or 300 megabytes a second, no matter how fast the RAM you put on them is). The numbers I've seen have also been SLOWER than the port (i.e. PCI) interface's capabilities.


The actual data transfers on the i-RAM occur over the INTERFACE CABLE, which is still capped at SATA2 speeds. It doesn't matter if you plug it into a PCI-E x1024 port that can do 4 terabytes a second of data, because the device doesn't output data faster than the storage protocol (SATA) specs, because the port (no matter if it were PCI, ISA, PCI-X, PCI-E x1, x4, x8, or x16) does not DO the data transfers. The transfers still occur over that little red plug.



Or didn't you know that?

Now, if they design an i-RAM or any other device that transfers the data over a PCI-E port (i.e., it wouldn't have, need, or use "SATA" conventions), then it would be faster, but only if they coupled it with a PROTOCOL that's faster.


That's why, even though the previous i-RAMs had RAM that could do 3.2 gigaBYTES a second of transfer, they only got 45-115 megaBYTES a second, and could even be OUTPERFORMED by RAIDed physical hard drives, which, by your reasoning, shouldn't be possible if the drive's total throughput is only limited by its internal throughput. The i-RAM's internal throughput could be 3.2 gigabytes a second, but it just doesn't do that. And why? Because SATA CAN'T do that.

http://techreport.com/reviews/2006q1/gigabyte-iram/index.x?pg=4


unhappy_mage said:
What? RAM exceeds the transfer rates of SATA by a good deal. Suppose you had built the i-RAM on old, slow PC-133 SDRAM. SDRAM reads or writes 8 bytes at a time, 133 million times a second. That's about 1 GB/s. Serial ATA is only 150 or 300 MB/s so far. There are proposals to extend that to 600 and 1200 MB/s, but they're not expected for a few years.

See, you misunderstand me as well. I wasn't saying RAM was too slow. I was saying that no matter how fast you make the RAM, it won't matter; until you make the weakest link faster (in this case, SATA2), you're still screwed. The only reason a card like, say, the Areca x8s can do more than what SATA2 can do is because they have a FASTER interface internally (by RAIDing multiple SATA channels and SATA devices together). But a single SATA2 port CANNOT go faster than 300 megs a second, and, really, will never get anywhere near there anyhow.

If you build a RAM-based device that can transfer 40 bajillion terabytes of data a second, AND then you plug it into a SATA2 port, you're STILL only going to get 300 MB/s.

Until they figure out how to make a "storage protocol" that can actually SEND data faster than 300 MB/s (although there are ones that do that), it makes NO SENSE to plug a device into a port that can do 12.8 gigs a second if the PROTOCOL CAN'T HANDLE THAT SPEED.

SATA2 is capped at 300 megs a second. Whatever protocol you decide on will have to be BUILT INTO THE DEVICE. So, let's say there's some protocol that can transfer "storage" data at 3.2 gigabytes a second. You're going to have to put a chip on the controller card that does that protocol.

Understand?

The internal transfer speed of the drive system is STILL limited by the bus it's plugged into, as well as by the capabilities of the storage protocol in use.

So if the device is faster than the protocol, the PROTOCOL will limit the speed. If the device and protocol are faster than the port, the PORT will limit the speed.

So, a 3.2-gigabytes-a-second device (PC3200 RAM) + a 300-megabytes-a-second protocol (SATA2) + a 4-gigabytes-a-second port (PCI-E x16) = 300 megabytes a second of throughput.

Only by bringing up the protocol will you get more out of the device. So you'd need "SATA version 5" or "SCSI 1280" or "PATA-2000" to get your data OUT of the RAM and OUT of the PCI-E port any faster.
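Here's that weakest-link rule as a minimal sketch (theoretical peaks, ignoring protocol overhead):

```python
# End-to-end throughput is capped by the slowest link in the chain.
def throughput(device_mb_s, protocol_mb_s, port_mb_s):
    return min(device_mb_s, protocol_mb_s, port_mb_s)

# PC3200 RAM (3200 MB/s) behind SATA2 (300 MB/s) in a PCI-E x16 slot (4000 MB/s):
print(throughput(3200, 300, 4000))  # -> 300
```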
 
No, I haven't seen a single RAID or SCSI controller that didn't have cables going to the hard drives :) so I don't know what the hell you're talking about. Please find me a RAID or SCSI controller that has no DATA CABLE ports on it.
Here you go.
It would take more tech (no, it's not impossible to do) to make the thing you throw in the slot act as a drive without needing an interface cable, but I don't think that's what you were saying.
It's not what I was saying, but one could easily make a SATA controller part of the gadget. PCI or PCI Express SATA controller chips abound, and all you'd have to do is slap one on and hardwire the existing card to it.
My experience with the i-RAM was that it NEVER came close to moving data ANYWHERE near the specification limits of the STORAGE PROTOCOL (i.e. there are no i-RAMs doing 150 or 300 megabytes a second, no matter how fast the RAM you put on them is). The numbers I've seen have also been SLOWER than the port (i.e. PCI) interface's capabilities.
131.9 MB/s is better than PCI can do. 133 MB/s is the theoretical limit, but 120 MB/s is about all you can hope for in real-life usage. This is pushing the limits of the 150 MB/s bus the I-Ram is on.
The actual data transfers on the i-RAM occur over the INTERFACE CABLE, which is still capped at SATA2 speeds. It doesn't matter if you plug it into a PCI-E x1024 port that can do 4 terabytes a second of data, because the device doesn't output data faster than the storage protocol (SATA) specs, because the port (no matter if it were PCI, ISA, PCI-X, PCI-E x1, x4, x8, or x16) does not DO the data transfers. The transfers still occur over that little red plug.
SATA 1, actually - 1.5 Gbps. And I wasn't suggesting that the i-RAM would be faster if it were PCI Express, just that it would be faster if it had a faster interface. PCI Express x4 is faster than any SATA interface, so it makes a good example of a fast bus.
Now, if they design an i-RAM or any other device that transfers the data over a PCI-E port (i.e., it wouldn't have, need, or use "SATA" conventions), then it would be faster, but only if they coupled it with a PROTOCOL that's faster.
Sure, no argument there.
If you build a RAM-based device that can transfer 40 bajillion terabytes of data a second, AND then you plug it into a SATA2 port, you're STILL only going to get 300 MB/s.

Until they figure out how to make a "storage protocol" that can actually SEND data faster than 300 MB/s (although there are ones that do that), it makes NO SENSE to plug a device into a port that can do 12.8 gigs a second if the PROTOCOL CAN'T HANDLE THAT SPEED.
me said:
RAM exceeds the transfer rates of SATA by a good deal. Suppose you had built the i-RAM on old, slow PC-133 SDRAM. SDRAM reads or writes 8 bytes at a time, 133 million times a second. That's about 1 GB/s. Serial ATA is only 150 or 300 MB/s so far.
Well, I'm glad you agree, but I'd appreciate it if you'd agree a little less aggressively.
Understand?
Yes. There's no need to be condescending. Your previous post made it sound like you don't, so I pointed out some flaws in that reasoning.
Only by bringing up the protocol will you get more out of the device. So you'd need "SATA version 5" or "SCSI 1280" or "PATA-2000" to get your data OUT of the RAM and OUT of the PCI-E port any faster.
If Gigabyte built a device like µmem has, they would indeed do full bus transfer rates.
 

Technically, it doesn't have ports, but... you're still "wrong" in that case... it's a Zero Channel SCSI controller.
ME said:
No, I haven't seen a single RAID or SCSI controller that didn't have cables going to the hard drives :)
You still have to plug the drives into your motherboard. This device co-opts the onboard SCSI ports and they become pass-through. By your own statement, the other poster is looking for a "cable free" design. There are still cables with Zero Channel.



unhappy_mage said:
It's not what I was saying, but one could easily make a SATA controller part of the gadget. PCI or PCI Express SATA controller chips abound, and all you'd have to do is slap one on and hardwire the existing card to it.
You'd HAVE to do so, or it wouldn't work.

unhappy_mage said:
133 MB/s is the theoretical limit, but 120 MB/s is about all you can hope for in real-life usage. This is pushing the limits of the 150 MB/s bus the I-Ram is on.

PCI is saturable at 100% efficiency, as has been shown with SCSI cards that hit precisely 133.3 MB/s (although I've personally gotten a benchmark of 136 megabytes a second out of a SCSI array, but I chalked that up to margin of error; that same array, when moved to PCI-X, pulled over 250 MB/s, so I know where the wall can be hit). I'd like to see what would happen if it were a SATA2 connection on PCI... would it stay at that rate? We also don't know what the testbed of the test machine was: was the SATA controller attached to the PCI bus or the PCI-E bus? (I don't mean what the card was plugged into; I mean what the onboard SATA port on the motherboard was attached to.)

unhappy_mage said:
SATA 1, actually - 1.5 Gbps. And I wasn't suggesting that the i-RAM would be faster if it were PCI Express, just that it would be faster if it had a faster interface. PCI Express x4 is faster than any SATA interface, so it makes a good example of a fast bus.

What I'm saying is, until you put something on the i-RAM that's faster than SATA2, it won't surpass THAT speed, no matter what slot type you plug it into.

It *may* be limited by the PCI bus, true. It -DEFINITELY- would be limited by the PCI bus if it had an onboard SATA controller and no need for external cabling to the motherboard's built-in ports. -IF- they do make a "RAM drive in a card", they'll need PCI-E x4 so that it has the headroom; but as long as they have a cable coming from the card and going to the motherboard, it won't matter.



unhappy_mage said:
If Gigabyte built a device like µmem has, they would indeed do full bus transfer rates.

I'm unfamiliar with this product, so I can't comment on it. My preliminary look into it makes me think that it won't get a drive letter and only works as a very large drive cache, basically extending itself as an upgrade to, say, the 16 MB cache on a Seagate 7200.10... but you won't get a 16-gigabyte C:\ out of this thing.


The point is (and I did not get this from the other guy's comments) that if you stick a card in a slot, then run a SATA cable from it to the motherboard, you'll be limited by what that SATA2 port on the motherboard can do. It's the weakest link, assuming that the motherboard was designed properly and the SATA2 port is on a full-speed PCI-E bus.

If you build a card that has an integrated SATA2 controller (300 megabytes a second) and you stick it in an ISA slot, you'll only get the speed of the ISA slot, because it's much slower than the protocol (SATA2). But if you put that same controller in a PCI-E x16 slot, your weak link will again be the SATA2 protocol, so no matter what you do, the MOST you can ever hope for (assuming 100% efficiency, no overhead) is 300 megs a second (which is nice, but not amazing). What they should/could do is design the i-RAM 2 (or i-RAM 3) to be a RAID0 device (optional), and have each memory slot get its own SATA2 channel, so you'd effectively have a "4 drive" RAID0 and your total throughput would be increased. That is something I'd like to see.
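Rough numbers for that four-channel idea (a sketch only; assumes SATA2's 300 MB/s per channel and a hypothetical PCIe 1.x x4 slot at 1000 MB/s):

```python
# Four DIMM banks, each behind its own SATA2 channel, striped RAID0-style.
SATA2_MB_S = 300
CHANNELS = 4
PCIE_X4_MB_S = 1000                  # PCIe 1.x x4, per direction

aggregate = CHANNELS * SATA2_MB_S    # 1200 MB/s across the stripe
print(min(aggregate, PCIE_X4_MB_S))  # -> 1000: now the slot becomes the cap
```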


I apologize if my tone was aggressive. Sometimes I mean it to be, sometimes I do not. In this case, I did NOT mean to be aggressive or condescending.



I just want one of these. However, I have a sneaking suspicion that the price of this is in the seven-figure range. There are only a few of us on this board who could swing it (ockie, I think, could as well), but I don't want to postpone my retirement :) I asked for a quote; I could be pleasantly surprised and find it's only in the six-figure range :) because the RAM itself would be, oh, $200K (assuming they are using 4 GB sticks, they'd need 256 of them).
 
PCI is saturable at 100% efficiency, as has been shown with SCSI cards that hit precisely 133.3 MB/s (although I've personally gotten a benchmark of 136 megabytes a second out of a SCSI array, but I chalked that up to margin of error; that same array, when moved to PCI-X, pulled over 250 MB/s, so I know where the wall can be hit).
Really? Do you have a link that shows what you're talking about happening? I've played with a lot of different machines and a lot of PCI buses, and never got sustained transfer rates over 120 MB/s. Could the 133 MB/s figure be margin of error? I suppose it could also be 133 decimal megabytes, as that'd be 126 MB/s and change.
I'm unfamiliar with this product, so I can't comment on it. My preliminary look into it makes me think that it won't get a drive letter and only works as a very large drive cache, basically extending itself as an upgrade to, say, the 16 MB cache on a Seagate 7200.10... but you won't get a 16-gigabyte C:\ out of this thing.
Nope, you get an 8gb thing that looks like a disk to the OS. I don't know what OSes they have drivers for, but it came up in a Solaris discussion forum, so that's one of them. But it appears as a normal block device to the OS in that case.
What they should/could do is design the i-RAM 2 (or i-RAM 3) to be a RAID0 device (optional), and have each memory slot get its own SATA2 channel, so you'd effectively have a "4 drive" RAID0 and your total throughput would be increased. That is something I'd like to see.
But then you need more sata ports to use the thing. And making it optional would drive costs up even more. Not many people bought the I-Ram, or they'd've made the I-Ram 2 actually come out.
I apologize if my tone was aggressive. Sometimes I mean it to be, sometimes I do not. In this case, I did NOT mean to be aggressive or condescending.
No harm done, no offense meant and none taken.
I asked for a quote; I could be pleasantly surprised and find it's only in the six-figure range :)
I don't think so... but good luck, and let me know what you find out ;) Also, they don't use "sticks" of memory - take a look at the user manual for the ram-san 400, which is all the tera ram-san really is - 8 of those suckers stacked. Anyways, page 110 shows the boards they really do use. They look like PCI-X cards, with enough chips for 8 sticks of ram on them (assuming they're double-sided). Making their own circuit boards probably drives costs up a bit, but it's probably a drop in the bucket ;)

Edit: This press release cites an entry price of $28k for the RamSan 300. That's probably the 16GB version, or about $1750 a gigabyte. Ouch.
 
Really? Do you have a link that shows what you're talking about happening? I've played with a lot of different machines and a lot of PCI buses, and never got sustained transfer rates over 120 MB/s.
Personal experience. I don't have a website link handy, but I'm sure one could be found :)

unhappy_mage said:
Nope, you get an 8gb thing that looks like a disk to the OS. I don't know what OSes they have drivers for, but it came up in a Solaris discussion forum, so that's one of them. But it appears as a normal block device to the OS in that case.
If true, then that's interesting. I could not find anything on the site you gave that confirmed that.

unhappy_mage said:
But then you need more sata ports to use the thing. And making it optional would drive costs up even more. Not many people bought the I-Ram, or they'd've made the I-Ram 2 actually come out.
Yes, I realize that. But if it's an "all internal" card and all the SATA2 negotiation is happening internally, then there'd be no extra cabling, just 4 SATA channels in a chip (or multiple chips, if needed) that then get pushed down that PCI-E x4 slot.

unhappy_mage said:
No harm done, no offense meant and none taken.
Good
unhappy_mage said:
I don't think so... but good luck, and let me know what you find out ;) Also, they don't use "sticks" of memory - take a look at the
I was just gauging pricing based on what a 4 GB stick of RAM costs. I am sure they have some proprietary storage medium that just uses the actual ICs, socketed in or whatever.

Enough ICs to make a terabyte of data are on 256 4-gigabyte sticks.

unhappy_mage said:
user manual for the ram-san 400, which is all the tera ram-san really is - 8 of those suckers stacked. Anyways, page 110 shows the boards they really do use. They look like PCI-X cards, with enough chips for 8 sticks of ram on them (assuming they're double-sided). Making their own circuit boards probably drives costs up a bit, but it's probably a drop in the bucket ;)

By making their own PCBs, it's actually probably cheaper for them (but not for us), because otherwise they'd have to have a PCB that would accept 32 sticks of "conventional" RAM (per san-400), and then they'd also need THOSE sticks. If you can make a card that holds the ICs from those 32 sticks of RAM but doesn't have the sticks' PCBs, gold, lead, etc., then you drop the price some.


As the san-400 comes in 32-128 GB denominations and has 16 memory board slots (as seen on page 12), I am going to assume that each one of those blades is 8 gigs (and their design must require 4 modules minimum, or maybe just their sales department does).

They COULD have made a motherboard with 32 DDR2 slots, but they opted for a proprietary design instead, which means you have to buy "their RAM".

unhappy_mage said:
Edit: This press release cites an entry price of $28k for the RamSan 300. That's probably the 16GB version, or about $1750 a gigabyte. Ouch.

That's not that bad when you consider its value and speed. You couldn't get that sort of speed with a physical disk array without considerably increasing the space and power requirements. But the san300 is half as effective as the 400.

No, this won't be something the end user buys... :)
Yes, I realized the terasan was 8 x san400s.


Assuming no overhead loss, with 3000 megabytes per second of sustained throughput you'd need 60 drives in a speed-only RAID to get that. Yes, I realize that it's a lot cheaper, but (as you've discussed in another thread ---!) the power requirements (and heat dissipation) for 60 drives are something to be concerned with. [I'm making this statement assuming a 50-megabyte-per-second sustained throughput per drive.] Yes, it's nowhere near $1750 a gig, even if you had 200 drives dedicated to the task of making one super-high-speed array, but taking into consideration the heat, noise, and power byproduct of 60 drives, you can see where some might prefer a RAM-only solution.

The teraSan, with its 8 devices, requires only 2500 watts and can output 24 GIGABYTES a second.

It'd take 480 hard drives to do that, working at 100% efficiency with 50 megabytes per second of sustained throughput per drive. Think of the physical space for 480 hard drives, the power consumption thereof, and the task of wiring 480 drives and cooling them.

I know I am using an arbitrary number of 50 megabytes a second, but I don't think it's unreasonable to consider that a per-drive average for 24/7 sustained speed.
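The arithmetic, spelled out (using my arbitrary 50 MB/s per drive, plus an assumed ~9 W per active drive):

```python
# Drives needed to match a given sustained throughput, at ~50 MB/s per drive.
PER_DRIVE_MB_S = 50      # arbitrary per-drive sustained figure
PER_DRIVE_WATTS = 9      # assumed active power per drive

for target_mb_s in (3000, 24000):   # one ramsan400 vs. the whole teraSan
    drives = target_mb_s // PER_DRIVE_MB_S
    print(drives, "drives,", drives * PER_DRIVE_WATTS, "watts")
# -> 60 drives (540 W) and 480 drives (4320 W)
```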


The fact is, we have nothing that could make use of the 24-gigabytes-a-second throughput capability of the terasan; even in our data center, we're not that fast. I think our entire infrastructure caps out around 3 gigs a second, so we'd be more appropriately targeted for a single ramsan400, to bring the storage medium up to the level the rest of the infrastructure supports. What I mean by that is, no matter how fast a device we have, I don't think we can transfer more than 3 gigs a second across our storage network. That's plenty fast when you have just a few thousand hosting clients. The internet backbone interface is slower than that, anyhow.

But it might be nice for application servers, etc..

Maybe that lady in Sweden could use one of these, though.
 
The cluster that runs EVE-Online is about to get a third RamSan-400 added for the database. But then again, that is 30k+ users connected to it.

Right, 30K users with a bunch of traffic. We've got a few thousand hosting clients, and almost all of them have basic websites. I'm sure EVE-Online's monthly bandwidth is far higher than ours.

Besides, a 30K-user database with constant updates is a LOT more access-intensive, even, than a website with 30K concurrent users downloading data simultaneously.
 
If true, then that's interesting. I could not find anything on the site you gave that confirmed that.
Their "device drivers" page mentions that they're block device drivers.
Yes, I realize that. But if it's an "all internal" card and all the SATA2 negotiation is happening internally, then there'd be no extra cabling, just 4 SATA channels in a chip (or multiple chips, if needed) that then get pushed down that PCI-E x4 slot.
That'd work. But it'd probably be simpler to leave SATA and RAID out of it altogether. Some emulation in software could probably make it look like you had a real disk behind a real controller, but having a whole internal SATA bus is probably more work than is necessary.
As the san-400 comes in 32-128 GB denominations and has 16 memory board slots (as seen on page 12), I am going to assume that each one of those blades is 8 gigs (and their design must require 4 modules minimum, or maybe just their sales department does).
Or perhaps they sell it full of old low-density memory to save on costs.
That's not that bad when you consider its value and speed. You couldn't get that sort of speed with a physical disk array without considerably increasing the space and power requirements. But the san300 is half as effective as the 400.
"Half" is good enough for me, thanks ;) The big brother starts at $65k and goes to $220k.
The teraSan, with its 8 devices, requires only 2500 watts and can output 24 GIGABYTES a second.

It'd take 480 hard drives to do that, working at 100% efficiency with 50 megabytes per second of sustained throughput per drive. Think of the physical space for 480 hard drives, the power consumption thereof, and the task of wiring 480 drives and cooling them.
Yeah, 480 drives at, say, 9 watts active would be 4320 watts. But 480 drives would also be 35 TB. It wouldn't fit into 3U, but it'd only cost $86,400... You can fit 32 of those drives into this (using these), which means you need 15 of them. That's more than one rack. It's an interesting thought, anyways. One chassis full of 10k.1s costs $180*32 = $5760 for disks, $250*8 = $2000 for cages, and around $500 for case and power supply. You're at $8260 per box already, without accounting for controllers or cables. That'd probably push it to $10k per box. But being the first one on your block with a fully populated SAS domain would be pretty cool ;)
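Spelling that out (prices as quoted above; the 9 W per drive is an estimate):

```python
# Cost per 32-drive chassis, using the prices quoted above.
disks = 32 * 180       # 32 10k.1 drives at $180 each
cages = 8 * 250        # 8 drive cages
case_psu = 500         # case and power supply, roughly
print(disks + cages + case_psu)  # -> 8260, before controllers and cables

# And the wider build: 480 drives to hit the 24 GB/s target.
print(480 // 32)       # -> 15 chassis
print(480 * 9)         # -> 4320 watts at ~9 W per active drive
```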
 
... snip ... 480 drives ... snip ...

I'm not saying that it's not possible to accomplish what these devices can do with a moving-parts solution, but some just don't want a moving-parts solution. (That 2500 W is, I'm sure, at full utilization, btw.) Sometimes a static device is preferable, and you'll pay a premium for that. Look at the cost of those 32 GB SSD drives that aren't anywhere near as fast as this device (like, 0.2% of the speed): $20+ a gigabyte, and ONLY 32 gigs. No 24 gigabytes a second of throughput, etc., etc. If speed in a static device is important to you, take that $20 a gig, scale the speed up by 400-500x, and you'll see the value of something that's $1750 a gig (however, I doubt the terasan is $1.75 million).

Yet a 36 GB Raptor can be found for under $50 if you know where to look, so why would you pay $700 for 32 gigs? There must be a reason.

I would tend to think that SSDs are exponentially more reliable than 7200 RPM drives :)

Oh, and with those 480 drives in RAID0, with 35 TB (I'm assuming you mean 750-gig drives), you have no recourse if drives fail. The teraSans and san400s have hard drive backup. To get redundancy, you've got to have even MORE drives :) - the 480 drives just get you to the throughput.
 
I just ordered 4GB of RAM for my system to see if I can set up a Gentoo filesystem in a 3.25GB RAMdisk, then chroot to it and use it as a normal system. I'll have to work out something that will sync changes between the ramdisk and my hard disk, maybe using rsync or something. There's no real purpose here - RAM is cheap and playing around is fun.
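If it helps, here's a rough sketch of the sync loop I have in mind (paths and the five-minute interval are made up):

```python
#!/usr/bin/env python3
# Rough sketch: periodically mirror the ramdisk back to the hard disk with
# rsync so the RAM-resident system can be restored after a reboot.
import subprocess
import time

RAMDISK = "/mnt/ramdisk/"         # the 3.25 GB ramdisk mount (trailing slash)
BACKING = "/var/ramdisk-backup/"  # on-disk copy to restore from at boot

while True:
    # -a preserves permissions/ownership/times; --delete mirrors removals.
    subprocess.run(["rsync", "-a", "--delete", RAMDISK, BACKING], check=True)
    time.sleep(300)               # sync every five minutes
```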
 
I just ordered 4GB of RAM for my system to see if I can set up a Gentoo filesystem in a 3.25GB RAMdisk, then chroot to it and use it as a normal system. I'll have to work out something that will sync changes between the ramdisk and my hard disk, maybe using rsync or something. There's no real purpose here - RAM is cheap and playing around is fun.

If you want to really go crazy, check out "drbd"; it's a real-time block-device replication solution.
 
One last benchmark image. Here I am comparing my:
RAID 0 - 2 x Seagate Barracuda 160 GB with 8 MB cache on SATA2,
running on the onboard nForce4 SATA RAID controller

with the i-RAM in this configuration:
SATA controller 1, secondary channel
512 MB PC3200 Mushkin, double-sided
512 MB PC3200 Mushkin, double-sided
512 MB PC3200 Mushkin, single-sided
512 MB PC3200 Corsair, single-sided

[benchmark comparison graph]


These tests were performed and exported using Passmark Performance Test 6.1 Pro
http://passmark.com
 
Wow that is crazy fast.

Couldn't Gigabyte release a new i-RAM with a 3 Gb/s interface and multiple SATA ports on the card? I haven't looked deeply into the i-RAM (since I can't afford one right now), but I would assume that it just connects to the IDE mobo connector.

It'd be awesome if the next-gen i-RAM had multiple SATA connectors, with a group of RAM slots dedicated to each connector, and each group registering as a separate drive... Run four i-RAM banks in RAID0...
 
Wow that is crazy fast.

Couldn't Gigabyte release a new i-RAM with a 3 Gb/s interface and multiple SATA ports on the card?
Of course they could, but:
since I can't afford one right now
that is a problem, and most likely the reason why they don't. I doubt that the i-RAM was selling like hotcakes, and surely Gigabyte is in the market to make money. If their market analysis says that creating a new version of the i-RAM is not a sound business idea, why should they make it?
 
If they made a version that was SATA2, or better yet pure PCIe (at least x4), with at least 8 GB of DDR2, then I'm pretty sure there would be a market for them. Heck, I know I would be in for at least two.
 
If they made a version that was SATA2, or better yet pure PCIe (at least x4), with at least 8 GB of DDR2, then I'm pretty sure there would be a market for them. Heck, I know I would be in for at least two.

QFT. I'd want two also.
 