we've been using these guys with more or less success
Icy Dock are the best quality imho
https://www.startech.com/HDD/Brackets/2-5-inch-SATA-SAS-HDD-to-3-5-inch-SATA-Adapter~25SATSAS35
https://www.newegg.com/Product/Product.aspx?Item=N82E16817994064...
try Backblaze - they're extremely inexpensive and getting to a "good enough" point in terms of service
(yeah, i know about the days-long outages, but hell, S3 was dead in the water twice this year alone!)
we have to deal with their network switches and servers for some of our non-U.S. customers (big telecom if you care)
they're the most horrible hardware vendor we've ever seen :(
any reasonably priced AWS backup that's aware of SQL Server transactions? we'd like to use Veeam but they have issues on AWS with no ETA to resolve ;( thanks!
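in the meantime, native SQL backups are transaction-consistent by definition - you can script a full + log chain and ship the .bak/.trn files to S3 yourself. rough python sketch; pyodbc plus every database/path name in it is a placeholder:

    # sketch: native full + log backup via pyodbc - transaction-consistent by definition
    # assumes pyodbc + "ODBC Driver 17 for SQL Server"; [MyDb] and the paths are placeholders
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;Trusted_Connection=yes;",
        autocommit=True,  # BACKUP can't run inside a user transaction
    )
    cur = conn.cursor()
    # full backup first; subsequent log backups capture the transaction tail
    for stmt in (
        "BACKUP DATABASE [MyDb] TO DISK = 'D:\\backup\\MyDb_full.bak' WITH INIT",
        "BACKUP LOG [MyDb] TO DISK = 'D:\\backup\\MyDb_log.trn' WITH INIT",
    ):
        cur.execute(stmt)
        while cur.nextset():  # drain informational messages so the backup completes
            pass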
haters gonna hate ;)
your cell phone operator 100% runs EMC VNX for billing processing, and this little baby runs Windows ;)
back to StarWind - they have a Linux version as well afaik
we used one to demo iSCSI back in 2011-2012 or so
guys, thanks for your suggestions!!
temperature is ok (they run a little hot, but opening the case and pointing a huge fan at them just to cool them down didn't change anything)
it turns out the write performance drop is by design - the cache gets filled and that's it ;(...
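easy to see for yourself: hammer the drive with sustained sequential writes and watch throughput fall off a cliff once the cache fills. quick python sketch - path and sizes are placeholders, pick a total big enough to blow past the cache:

    # sustained sequential write, throughput printed per GiB written
    import os, time

    PATH = "/mnt/evo/testfile.bin"   # placeholder - put it on the drive under test
    CHUNK = 64 * 1024 * 1024         # 64 MiB per write
    TOTAL = 50 * 1024**3             # 50 GiB total, well past any write cache

    buf = os.urandom(CHUNK)
    fd = os.open(PATH, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    written, t0 = 0, time.time()
    try:
        while written < TOTAL:
            os.write(fd, buf)
            os.fsync(fd)             # force to media - don't measure the page cache
            written += CHUNK
            if written % 1024**3 == 0:
                print(f"{written // 1024**3} GiB: "
                      f"{1024**3 / (time.time() - t0) / 1024**2:.0f} MB/s")
                t0 = time.time()
    finally:
        os.close(fd)
        os.remove(PATH)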
you'll end up with something like a Samsung 960 EVO (or Pro?) mounted in a PCIe bracket because you don't really have an M.2 slot
check this out
https://www.starwindsoftware.com/blog/benchmarking-samsung-nvme-ssd-960-evo-m-2
they benchmarked M.2 against an ASUS bracket and it's a 0% difference between the two...
hi
so i have an issue: my current set of Samsung 960 EVOs (got 4 of them) behaves in a weird way
initially write performance is kind of ok and in line with the specs
after some time write performance drops to 1/3-1/4 of what it should be
initially i thought it was my RAID done the wrong way
but...
virtualize your setup
get something like Disk2vhd or StarWind V2V Converter and build a VHD(X) out of your disk
create a VM with a bootable SCSI disk and you're golden
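if the box is linux (or you just have a raw image), qemu-img gets you the same result - minimal sketch, assumes qemu-img is installed and the paths are placeholders:

    # convert a raw disk/image to VHD ("vpc" is qemu-img's name for the format)
    import subprocess

    subprocess.run(
        ["qemu-img", "convert",
         "-f", "raw",         # source format
         "-O", "vpc",         # output format: VHD
         "/dev/sdb",          # source disk (placeholder)
         "/tmp/disk.vhd"],    # output image (placeholder)
        check=True,
    )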
many SSDs put into an array are actually less prone to failure for a reason
load is split between all the drives in the array => each drive gets less work to do => fewer burnt cells to replace
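back-of-the-envelope math (all numbers made up - plug in your own TBW rating; ignores parity write amplification, so think stripes/mirrors):

    tbw = 300     # rated endurance per drive, TB written (placeholder spec)
    load = 100    # total TB/year the workload writes (placeholder)

    for n in (1, 4):
        per_drive = load / n
        print(f"{n} drive(s): {per_drive:.0f} TB/yr each -> "
              f"~{tbw / per_drive:.0f} years to rated TBW")
    # 1 drive: 100 TB/yr -> ~3 years; 4 drives: 25 TB/yr -> ~12 years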
Scale definitely has I/O performance issues
their software-defined storage stack is pants ;(
just upgrading HDDs -> SSDs won't help much
they don't have all-flash configs for a good reason
https://www.scalecomputing.com/products/hardware-platforms/
we have a customer who wiped their KVM...
there's an excellent write-up about SOHO builds like yours
https://www.starwindsoftware.com/blog/choosing-ideal-mini-server-for-a-home-lab
i'd also suggest checking xByte periodically
they now have a new R620 with 2x300GB SAS and dual CPUs for $800 (it was $600 only a few days ago)...
you never team iSCSI connections - go with MPIO
to check network performance, start with a RAM disk, and don't move on to "real" storage until the RAM-based storage gives you wire speed
ntttcp & iperf are your friends here
followed by DiskSpd
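if you can't get iperf or ntttcp onto a box, even a dumb python loop will tell you whether you're anywhere near wire speed - toy sketch, port and sizes are placeholders, needs python 3.8+:

    # run "server" on one side, "client <host>" on the other
    import socket, sys, time

    PORT, CHUNK, TOTAL = 5001, 1 << 20, 10 * 1024**3   # 1 MiB sends, 10 GiB total

    if sys.argv[1] == "server":
        conn, _ = socket.create_server(("", PORT)).accept()
        got, t0 = 0, time.time()
        while data := conn.recv(CHUNK):
            got += len(data)
        print(f"{got / (time.time() - t0) / 1024**2:.0f} MB/s received")
    else:  # client <host>
        c = socket.create_connection((sys.argv[2], PORT))
        buf, sent = b"\0" * CHUNK, 0
        while sent < TOTAL:
            c.sendall(buf)
            sent += CHUNK
        c.close()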
These are ConnectX-2 below, so rather slow, and the HP badge will complicate firmware upgrades. I'd go for ConnectX-3 (Pro); you can find used / refurb on eBay for cheap.
--
NIC: HP MELLANOX (2) $34.60
Nutanix has a KVM-based Community Edition which is 100% free
StarWind has a free version as well - no capacity limitations, production use is ok
StorMagic is a performance hog :( 50K IOPS per node all-flash is ridiculous in 2016, my MacBook literally runs circles around their "storage"!!
i'm not...
deduplication has nothing to do with data loss here
there's silent data corruption, and if it hits the dedupe chunk database
the whole collection of backups will go south
Microsoft maintains multiple copies of a chunk only once a block reaches the 100-references threshold
if you have two separate...
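to make it concrete, here's a toy dedupe store in python - corrupt one shared chunk and every backup that references it restores garbage (all data here is made up):

    import hashlib

    chunk_store = {}   # hash -> chunk bytes: dedupe keeps a single copy
    backups = []       # each backup is just a list of chunk hashes

    def add_backup(data, size=4):
        hashes = []
        for i in range(0, len(data), size):
            piece = data[i:i + size]
            h = hashlib.sha256(piece).hexdigest()
            chunk_store.setdefault(h, piece)   # store each unique chunk once
            hashes.append(h)
        backups.append(hashes)

    add_backup(b"ABCDABCDXXXX")   # backup 1
    add_backup(b"ABCDYYYYABCD")   # backup 2 shares the "ABCD" chunk

    # silent corruption of the one shared chunk...
    chunk_store[hashlib.sha256(b"ABCD").hexdigest()] = b"????"

    for i, hashes in enumerate(backups, 1):
        print(f"backup {i}:", b"".join(chunk_store[h] for h in hashes))
    # ...and both restores come back damaged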
nah, Windows Server 2016 TP5 ;)
3 nodes yes, but you can lose only 1 disk
unless you keep 3 copies of the data, of course
which is expensive
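quick math (capacities are placeholders):

    # 3-way mirror: every byte is stored three times
    nodes, tb_per_node = 3, 10
    raw = nodes * tb_per_node
    print(f"raw: {raw} TB, usable with 3 copies: {raw / 3:.0f} TB (~33% efficiency)")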
What’s new in Storage Spaces Direct Technical Preview 5
Deployments with 3 Servers
Starting Windows Server 2016 Technical Preview 5, Storage Spaces Direct...
+1 to helium
they all fail these days, so you have to pick between shit and shittier ;)
some stats:
What We've Learned from Running 61,590 Hard Drives in Our Data Center