One of the key things that #ServerSAN gives you is elasticity: the ability to easily grow the "SAN" by simply adding more compute and/or storage on the fly.
The idea of data migration is often overlooked. Customers are telling me they spend 3+ months rolling in new storage and another 3 rolling it out. Scale out / scale in without a manual migration is a BIG deal
At today's level of sophistication I'm kinda dubious that I really want 100 servers running storage software across them rather than 2-3 Tintris. What's going to be easier to manage?
Yes...but do not forget the advanced software algorithms and tricky metadata manipulations, etc. We've had CPU power and 10GbE (and InfiniBand) for some time. Flash must be considered too, as you say.
Server SAN has an additional benefit over traditional SAN - processing power can be moved to the data as well as the traditional model of moving the data to the processors. This gives potential flexibility of application design and fungibility of resources
I think as we start talking about application integration into the software storage layer we're getting into that hyperscale model of one architect across the entire infrastructure. Mortals buy apps that need standard interfaces.
David Floyer Howard - True, it starts as standard interfaces. However, I would expect database systems (including Hadoop) to be able to take advantage of Server SAN architecture early on. The vendors will need to develop the capabilities for hyperscale.
In terms of interfaces, I'm most looking forward to T10 standardizing atomic writes and having that pass on to the storage system. Full VAAI/ODX support for per-VM snapshots too.
Perhaps the Server SAN term implies/assumes or requires some degree of 'intelligent software management' of the stack. That's the secret sauce and key enabler...regardless of whether the software is tied to an appliance or server or whatever.
ServerSAN is disruptive. VSAs (and server-side caching) can really do the hard part of the job, leaving the hardware only the physical layer to manage (storing data and RAID).
It's not so much commoditization as general advancement. x86 servers have been essentially interchangeable for a decade. Today there's excess performance in processor, SSD and 10Gbps network, so let's use it for storage.
The problem becomes the user thinking they can build servers from scratch. You're still going to need a source of procurement for warranty and interoperability.
There have been two big reasons software storage was slow: 1) not enough disks per controller, and 2) synchronous mirroring across slow Ethernet. SSD fixes the first; 10Gbps Ethernet fixes the second.
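A rough back-of-envelope on that second point (my own numbers, assuming a ~500 MB/s SATA SSD and line-rate Ethernet, not figures from the thread): a 1GbE link tops out around 125 MB/s, so it can't synchronously mirror even one SSD's worth of writes, while 10GbE gives you real headroom.

```python
# Back-of-envelope: why 1GbE was the bottleneck for synchronous mirroring,
# and why 10GbE (plus SSD) removes it.
# All throughput numbers below are rough assumptions, not measurements.

GBE_1_MBPS = 125       # ~1 Gbps line rate expressed in MB/s
GBE_10_MBPS = 1250     # ~10 Gbps line rate expressed in MB/s
SSD_MBPS = 500         # sequential throughput of one SATA SSD, MB/s (assumed)
HDD_MBPS = 150         # sequential throughput of one 7.2K HDD, MB/s (assumed)

def mirroring_headroom(link_mbps: float, device_mbps: float) -> float:
    """How many devices' worth of writes the link can mirror synchronously."""
    return link_mbps / device_mbps

print(f"1GbE  vs one SSD: {mirroring_headroom(GBE_1_MBPS, SSD_MBPS):.2f}x")   # ~0.25 -> the network is the bottleneck
print(f"10GbE vs one SSD: {mirroring_headroom(GBE_10_MBPS, SSD_MBPS):.2f}x")  # ~2.50 -> headroom for a few SSDs
print(f"1GbE  vs one HDD: {mirroring_headroom(GBE_1_MBPS, HDD_MBPS):.2f}x")   # ~0.83 -> marginal even for spinning disk
```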
Data redundancy is now being delivered in software; the next question is how to deliver high-performance data efficiency (dedupe, compression, optimization). We always want something more than what software offers.
With so many cores to play with, there's no need for dedicated ASICs like 3PAR's. New platforms can handle RAID and data optimization like compression and dedupe in software, aided by SSD.
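To make the "dedupe in software" point concrete, here's a minimal sketch of block-level dedupe plus compression running on plain CPU cores. It's my own illustration (hypothetical `write`/`read` helpers, fixed 4 KB blocks), not any vendor's implementation.

```python
# Minimal sketch of inline block-level dedupe + compression done purely in software.
# Illustrative only: real Server SAN stacks add reference counting, variable-length
# chunking, persistence, and failure handling on top of this idea.
import hashlib
import zlib

BLOCK_SIZE = 4096
store = {}  # fingerprint (hex SHA-256) -> compressed unique block

def write(data: bytes) -> list:
    """Split data into fixed blocks, dedupe by fingerprint, compress only new blocks."""
    recipe = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        if fp not in store:              # pay the compression/storage cost only once
            store[fp] = zlib.compress(block)
        recipe.append(fp)
    return recipe

def read(recipe: list) -> bytes:
    """Reassemble the original data from the deduplicated store."""
    return b"".join(zlib.decompress(store[fp]) for fp in recipe)

# 16 KB of data made of two repeated 4 KB patterns stores only two unique blocks.
data = b"A" * 8192 + b"B" * 8192
recipe = write(data)
assert read(recipe) == data
print(f"logical blocks: {len(recipe)}, unique blocks stored: {len(store)}")
```

The point is that fingerprinting and compression are ordinary CPU work, which is exactly what the excess x86 cores (and SSDs to absorb the I/O) now make affordable without a dedicated ASIC.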
Storage Godfather (HPEStorageGuy) David Scott has said that he has the engineering team justify keeping the ASIC with each new HP 3PAR release. They're a smart team and will make the right choices.