Anyone using Storage Spaces Direct (S2D)?

natelabo
Just wondering if anyone has played with S2D in Server 2016 TP5? I want to test but haven't had a chance to. I'd like some real-world information on performance. Can it even run if I'm limited to 1GbE links?
 
What's the difference with Direct? I've just started experimenting with Storage Spaces under 10 Pro.
 
we've been playing with S2D since early TP4

it works rather well but has numerous drawbacks: 4 nodes to start from and a Datacenter license everywhere :(

1GbE is nothing for this ..

you need not only 10GbE but also RDMA-capable hardware to put S2D into production

Hardware options for evaluating Storage Spaces Direct in Technical Preview 4

"Network interface cards
Storage Spaces Direct requires a minimum of one 10GbE network interface card (NIC) per server.

Most configurations, like a general purpose hyper-converged configuration will perform most efficiently and reliably using 10+ GbE NIC with Remote Direct Memory Access (RDMA) capability. RDMA should be either RoCE (RDMA over Converged Ethernet) or iWARP (Internet Wide Area RDMA).

If the configuration is primarily for backup or archive like workloads (sequential large IO) it can be a 10GbE network interface card (NIC) without Remote Direct Memory Access (RDMA) capability."
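
To put rough numbers on why 1GbE falls short, here's a quick back-of-envelope sketch in Python. Every figure in it (link throughput, SSD speed, resync size) is an illustrative assumption, not a benchmark:

# Back-of-envelope: why 1GbE chokes under S2D traffic.
# All figures are illustrative assumptions, not benchmarks.

GBE_1  = 125e6    # ~1GbE theoretical max, bytes/s
GBE_10 = 1.25e9   # ~10GbE theoretical max, bytes/s

SATA_SSD = 500e6  # one SATA SSD streaming ~500 MB/s locally
RESYNC   = 1e12   # 1 TB of mirrored data to resync after a node reboot

def hours(nbytes, link_bps):
    """Time to push nbytes over a link, in hours."""
    return nbytes / link_bps / 3600

print(f"1 TB resync over 1GbE:  {hours(RESYNC, GBE_1):.1f} h")
print(f"1 TB resync over 10GbE: {hours(RESYNC, GBE_10):.1f} h")

# Mirrored writes cross the wire once per remote copy, so even a
# single local SATA SSD outruns a 1GbE link by about 4x:
print(f"SSD-to-link ratio on 1GbE: {SATA_SSD / GBE_1:.0f}x")

That's roughly 2 hours vs a quarter hour just to resync a single terabyte, and that's before RDMA's latency benefits even enter the picture.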

Storage Spaces is software RAID

think of Storage Spaces + ReFS as a poor man's ZFS ;)

Storage Spaces Direct is basically vSAN from Microsoft

but an expensive vSAN, really ;)

Olga-SAN, 4 nodes? I guess you're playing with an earlier WS 2016 TP.
The latest version does support a 3-node S2D scenario, so that's already 25% fewer nodes.

However, no optimization from Microsoft comes for free: in a 3-node S2D cluster you can only lose 1 drive before the system goes offline.
Properly designed replicated local storage systems like StarWind VSAN can lose up to 4 drives in a 2-node config and still be operational. That's what you get when you do RAID 1 over local RAID 5/6/10. BTW, another vendor with that level of resiliency is SimpliVity.
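
If you want to sanity-check that "up to 4 drives" arithmetic, here's a toy Python model. The layout (a hypothetical 3-drive RAID 5 array per node, mirrored node-to-node) and the failure rules are simplified assumptions to illustrate the logic, not StarWind's actual design:

from itertools import combinations

# Toy model: 2 nodes, each with a hypothetical 3-drive local RAID 5
# array, mirrored node-to-node (RAID 1 over RAID 5). A RAID 5 array
# survives at most 1 failed drive; the mirror stays online as long
# as at least one node's array is intact.

DRIVES = [(node, disk) for node in (0, 1) for disk in range(3)]

def node_ok(failed, node):
    # RAID 5 tolerates a single drive loss within a node
    return sum(1 for n, _ in failed if n == node) <= 1

def system_ok(failed):
    # RAID 1 across nodes: one surviving array keeps us online
    return node_ok(failed, 0) or node_ok(failed, 1)

for k in range(1, len(DRIVES) + 1):
    outcomes = [system_ok(set(c)) for c in combinations(DRIVES, k)]
    print(f"lose {k} drives: survivable in the best case: {any(outcomes)}, "
          f"in every case: {all(outcomes)}")

In this toy layout the best case is indeed 4 lost drives (one whole node plus one more), while the guaranteed floor is 3.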

Still scratching my head wondering why MS is trying to build its own storage stack instead of just buying out StarWind...
 
nah, Windows Server 2016 TP5 ;)

3 nodes, yes, but you can lose only 1 disk

if you don't keep 3 copies of the data, of course

which is expensive

What’s new in Storage Spaces Direct Technical Preview 5

Deployments with 3 Servers
Starting with Windows Server 2016 Technical Preview 5, Storage Spaces Direct can be used in smaller deployments with only 3 servers.

Deployments with fewer than four servers support only mirrored resiliency. Parity resiliency and multi-resiliency are not possible, since these resiliency types require a minimum of four servers. With a 2-copy mirror the deployment is resilient to 1 node or 1 disk failure, and with a 3-copy mirror the deployment is resilient to 1 node or 2 disk failures.
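
The 2-copy vs 3-copy trade-off in that excerpt is easy to put in numbers. A quick Python sketch (the 10 TB of raw capacity per node is just an assumed figure for illustration):

# Capacity vs resiliency for mirrored S2D on 3 nodes, per the TP5
# notes above. Raw capacity per node is an assumed figure.

RAW_PER_NODE_TB = 10
NODES = 3
raw_tb = RAW_PER_NODE_TB * NODES

# (copies, disk failures tolerated) straight from the TP5 excerpt
for copies, disk_failures in [(2, 1), (3, 2)]:
    usable_tb = raw_tb / copies
    print(f"{copies}-copy mirror: {usable_tb:.0f} TB usable of {raw_tb} TB raw "
          f"({100 / copies:.0f}% efficient), survives {disk_failures} disk failure(s)")

Which is the "expensive" part mentioned above: the 3-copy mirror that buys you the second disk failure also drops usable capacity from half of raw to a third.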
 