
Open storage is disrupting the enterprise market


Virtually every organization in the world relies on digital automation, making data more valuable today than ever before. Preventing data loss is thus a paramount concern, and storage system reliability an absolute imperative for organizations of all sizes. Delivering on this requirement, enterprise-class open source software is emerging as the key to resilient storage that is affordable for all.

Any storage system today must offer a comprehensive array of features and capabilities. Some of the most desirable functions, such as silent corruption detection, data reduction, and write cache resilience in the face of power outages, were traditionally restricted to proprietary storage solutions targeting the largest enterprises. Open storage is changing this industry dynamic by dramatically lowering the true cost of these important enterprise-level features.

The cost of storage impacts today’s organization in a number of ways, some more subtle than others. Large enterprises are often more agile because their IT is more capable. They can afford to pay for the full suite of features offered by traditional storage vendors, unlocking the complete range of capabilities in the products they consume, while smaller competitors often struggle to work around those same vendors’ artificially imposed limitations.

About the author

Morgan Littlewood is SVP, TrueNAS Product Management at iXsystems, Inc.

Making enterprise storage features affordable to all is disruptive, not only to the storage industry but also to any industry where larger competitors are more agile because they can afford more comprehensive and capable IT. Open storage democratizes enterprise storage and levels the playing field.

Artificial limitations

Consider a non-storage example in the form of VMware’s vSphere virtualization platform. In the 6.x line, vSphere Standard licenses do not allow virtual machines (VMs) to be migrated between vCenters or across high-latency, long-distance connections. These capabilities require the significantly costlier Enterprise Plus license. 

The result is that vSphere Enterprise Plus license holders have more options for load-balancing their workloads and performing maintenance or migration activities. Organizations using Enterprise Plus can evacuate clusters, or even entire data centers, for planned maintenance or as part of a data center refresh. Natural disasters are a prime example of when this capability makes a demonstrable difference. Some events, such as hurricanes, can be seen coming. Ahead of such an event, an Enterprise Plus customer could migrate all workloads from their primary data center to a secondary site out of the hurricane's path with a single script and without interrupting any of the workloads in question: zero downtime.

In contrast, vSphere Standard customers have to rely on solutions that either require downtime or risk data loss during the failover. vSphere Standard and vSphere Enterprise Plus are identical in terms of the product installed by the customer; only the license is different. This is a classic example of the artificial limitation of a product.

A decade of disruption

The storage world is no different. Artificial limitations are frequently placed on storage solutions, especially those aimed at enterprises. This practice, combined with high initial capital costs and exorbitant support costs designed to force a 3-year refresh cadence, ultimately led to the storage wars. The high cost of storage has forced users to consider alternative approaches that fit their budgets and keep pace with exponential data growth.

For the past decade, the storage wars have seen a parade of storage startups aiming to displace the traditional storage vendors with new approaches to storage, such as software-defined storage (SDS) and hyper-converged infrastructure (HCI).

Open source storage solutions have embraced SDS and HCI, positioning them as natural long-term winners of the storage wars. Their capabilities and maturity have advanced quickly as many contributors have devoted resources to their advancement. Features previously available only at a premium, such as snapshots and replication, have become standard items for projects like OpenZFS. Some proprietary storage solutions still charge a premium to allow frequent snapshots or replications, gate access to automation APIs, or otherwise create friction for customers using a lower-cost license.
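To make the point concrete, the snapshot and replication features mentioned above are exposed in OpenZFS as ordinary administrative commands, with no license key gating them. A minimal sketch (the pool and dataset names `tank/projects`, `backup/projects`, and the snapshot labels are illustrative, not from any particular deployment):

```shell
# Take a read-only, point-in-time snapshot of a dataset (near-instant,
# consumes space only as data diverges from the snapshot)
zfs snapshot tank/projects@nightly-2024-01-15

# Replicate that snapshot to another pool as a serialized stream
zfs send tank/projects@nightly-2024-01-15 | zfs receive backup/projects

# Later runs can send only the blocks changed since the previous
# snapshot (-i = incremental), here piped over ssh to a remote host
zfs send -i tank/projects@nightly-2024-01-15 \
    tank/projects@nightly-2024-01-16 \
  | ssh backuphost zfs receive backup/projects
```

Because `zfs send` emits a plain stream on standard output, the same mechanism works locally, over ssh, or through any transport that can carry a pipe, which is what makes scripted, frequent replication routine rather than a premium add-on.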

Open enterprise storage, however, dispenses with the coin-operated feature approach. These storage solutions ship with a full suite of features and have no artificial capacity limits. This means organizations can deploy storage solutions with the same set of features across their entire enterprise, including development, test, backup, and production. Support costs can be optimized for the role of each deployment. Neither manual workload placement nor convoluted migration strategies are required to work around licensing restrictions.

Open enterprise storage offers organizations the flexibility, agility, and security to take full control over both the architecture and the destiny of their data storage. Open source enables the addition of compelling new features, driven by community demand or the needs of individual organizations. This flexibility reduces the business risk of stranded investments as requirements change.

Open source software enables open enterprise storage and makes all of the above possible without sacrificing features or compromising resiliency. It also allows enterprise storage to be offered at prices that let organizations keep increasing the variety and volume of data they retain. Computing costs have benefited enormously from open source and Linux; storage costs are now doing the same.

Morgan Littlewood serves as SVP TrueNAS Product Management for iXsystems. Prior to joining iXsystems, Morgan held executive posts with Cisco and Violin Memory in support of enterprise storage and networking.