Thanks to everybody for joining #VSANchat today! If you have any questions around VSAN, please feel free to tweet to us @vmwarevsan! You can also check the Virtual Blocks blog for more information at blogs.vmware.com/vir...!
@CormacJHogan We are using two 10GbE NICs per host in a single vSwitch in active/passive. Works for us, but only provides fault tolerance. There are VMware docs out there, but a simple, concise best practices doc would be great.
@CormacJHogan Are dual NICs in an active/passive config on a single vmk/vSwitch the best way to utilize 2 dedicated VSAN NICs per server? Can active/active provide the fault tolerance & improve total available throughput?
One doc mentioned VSAN networking has the same requirements as vMotion. We use 2 vmnics per host for vMotion. That's not possible with VSAN. Failover is good, but does not require 2 VLANs.
@num1k There's no guarantee that any one network configuration will improve performance, so really there are a couple of different ways to configure VSAN networking.
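One common way to set this up from the command line is to tag a dedicated vmkernel port for VSAN traffic. A minimal sketch with esxcli — the portgroup name, vmk number, VLAN ID, and addresses are placeholders for your environment, not values from the chat:

```shell
# Create a portgroup for VSAN traffic on an existing standard vSwitch
# (vSwitch0, VLAN 80, vmk2, and the IP are example values).
esxcli network vswitch standard portgroup add --portgroup-name=VSAN --vswitch-name=vSwitch0
esxcli network vswitch standard portgroup set --portgroup-name=VSAN --vlan-id=80

# Add a vmkernel interface on that portgroup and give it a static IP
esxcli network ip interface add --interface-name=vmk2 --portgroup-name=VSAN
esxcli network ip interface ipv4 set --interface-name=vmk2 --ipv4=192.168.80.11 \
    --netmask=255.255.255.0 --type=static

# Tag the interface for VSAN traffic
esxcli vsan network ipv4 add --interface-name=vmk2
```

For the active/passive question above, NIC teaming order (active vs. standby uplinks) is then set on the vSwitch/portgroup failover policy rather than in these commands.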
@CormacJHogan We are finding individual spindle speeds are our performance limiter. But turn up the stripe width and the VM can push more IOPS. #itgoesto11
SRM can be used to protect the Stretched Cluster to a 3rd site, but not between the 2 sites. Add vSphere Replication from the Stretched Cluster to the 3rd site (with VSAN on the backend) and you can get a 5 min RPO.
Health Check enhancements are nice to see. Not having to deploy advanced monitoring products right away speeds up deployment while still having insight into storage. The better overall performance is great too.
I missed VMworld this year, but I gather that stretched clusters will be (or might now be) supported. True multi-site capabilities in VSAN would be fantastic.
Will we ever be able to offload vSAN traffic to 10GigE interconnects, without a switch? I had this huge issue before when I was trying to do it, and found that it wasn't possible. We wanted it for our 1GigE robo sites.
Not at present. There needs to be VSAN connectivity between three hosts (witness too), and you need a switch to do that. However, this isn't the first time we've heard this, especially for ROBO, so we're looking into ways to do it.
@VmAlchemist VDI is definitely a popular use case - the cost, predictable scaling and performance make that a great fit ... are you deploying on all-flash or hybrid?
Replaced aging iSCSI/FC SANs at multiple sites with VSAN clusters. Better performance, fault tolerance, cheaper, more flexible, more GB/dollar, easier to deal with.