This decade is shaping up to be the Golden Twenties of unstructured data. According to Gartner, unstructured data growth rates have hit 30% per year, which means total volumes will almost quadruple by 2027. Such growth is a challenge in itself. But unstructured data also comes in a variety of sizes and can be stored as files or objects, with increasingly demanding storage performance needs. The result has been the emergence of a new category of storage that provides unified fast file and object storage.
Object storage, driven by the web and the rise of the cloud, is becoming increasingly important and well-established. The historic view of object storage as the least performant tier has changed. Customers increasingly need to interrogate large amounts of unstructured data, which can be held as objects as well as files.
Need for file data speed – and object storage
Object storage has traditionally been a convenient way to archive large amounts of data. But the growth in cloud-native apps, which have demanding workloads and use object storage as their default, has created a new market for high-performing object storage.
Applications and use cases are evolving from file to object access. That means organisations require a platform that supports both access methods and ensures investment protection. All these factors have led to the emergence of high-performance storage solutions that combine access to files and objects.
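To make the distinction concrete, here is a minimal sketch in Python of the two access methods. The mount path, bucket and key names are hypothetical, and the object side assumes the widely used boto3 S3 client.

```python
import boto3

# File access: the application reads through the filesystem (for example
# an NFS or SMB mount), addressing the data by path.
with open("/mnt/nfs_share/samples/tissue_001.tiff", "rb") as f:
    file_bytes = f.read()

# Object access: the same data addressed by bucket and key over the
# S3 API, as cloud-native applications expect.
s3 = boto3.client("s3")
response = s3.get_object(Bucket="pathology-images", Key="samples/tissue_001.tiff")
object_bytes = response["Body"].read()
```

A unified fast file and object platform aims to serve both of these paths to the same data without the application having to care which one is used.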
Fast file and object: I/O performance and throughput
There’s been an explosion in analytics and machine learning, driven by the need to distil value from enormous amounts of raw data. An example is US-based Paige’s pioneering use of machine learning for cancer diagnosis. The company needs petabyte-scale storage capacity with rapid access and high throughput to allow machine recognition across millions of images of patient tissue samples. This demands high-performance access to file and object data.
Whether it is to analyse very large datasets or to perform a massive restore operation after a ransomware attack, unstructured data can require very high access performance, where low latency needs to be coupled with high throughput. For data analytics, this means speeds measured in tens of gigabytes per second.
When restoring systems following an outage or ransomware attack, enterprise customers should look for throughput numbers that get close to 300TB per hour. That will limit downtime and the financial and reputational damage that comes with it.
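As a back-of-envelope illustration of why that figure matters, the following Python sketch computes restore times for a hypothetical dataset at several sustained throughputs; the 500TB estate size is an assumption for illustration only.

```python
def restore_hours(dataset_tb: float, throughput_tb_per_hour: float) -> float:
    """Hours needed to restore a dataset at a sustained throughput."""
    return dataset_tb / throughput_tb_per_hour

dataset_tb = 500  # hypothetical half-petabyte estate
for tput_tb_per_hour in (50, 150, 300):
    hours = restore_hours(dataset_tb, tput_tb_per_hour)
    print(f"{tput_tb_per_hour:>3} TB/h -> {hours:4.1f} hours of downtime")

# 300TB/hour works out to roughly 83GB/s of sustained throughput:
print(f"{300 * 1000 / 3600:.0f} GB/s")
```

At 300TB per hour, a half-petabyte estate comes back in under two hours; at a sixth of that speed, the outage stretches to more than ten.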
Additionally, the platform must deliver high performance, in both latency and throughput terms, automatically and without tuning. The world of unstructured data and modern analytics is evolving quickly, which makes it difficult to predict what tools, file formats, dataset sizes or access methods will be required tomorrow. Any storage solution that requires manual configuration or tuning to deliver high performance for a given use case will stifle innovation and delay projects.
Fast file and object driving unparalleled benefits
Unlike traditional structured data, such as a database supporting an ERP system, which tends to be fairly static, unstructured data can span many locations and access methods during its lifecycle. Today’s emerging fast file and object storage products support network file system (NFS) and server message block (SMB) file protocols. These are compatible with the way many existing enterprise applications are written.
Fast file and object solutions can also handle unstructured data in object-access formats that reflect its cloud origins. That makes them ideal for hybrid clouds, where unstructured data can transition between on-site and cloud locations.
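A hedged sketch of that hybrid pattern in Python: data written on-site as files is placed into a cloud object store under the same logical layout. The mount path, bucket name and use of the boto3 client are assumptions for illustration.

```python
from pathlib import Path

import boto3

s3 = boto3.client("s3")  # credentials and endpoint come from the environment

local_dir = Path("/mnt/nfs_share/exports")
for path in local_dir.glob("*.parquet"):
    # The filename becomes part of the object key, so the dataset keeps
    # the same logical layout on both sides of the hybrid cloud.
    s3.upload_file(str(path), "analytics-tier", f"exports/{path.name}")
```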
Customers today need to look for capacity, first and foremost, in a fast file and object storage product. The platform needs to scale to their needs, which for many enterprises could mean petabytes. Unstructured data can grow quickly, so scaling the solution needs to be easy and should not involve complex network configuration or manual data rebalancing tasks.
Secondly, it must offer both file and object access through the key protocols: NFS and SMB for file, and S3 for object. Thirdly, it must be built for fast access and high throughput. Low latency, especially for read and metadata operations, is required to unlock the potential of AI/ML and many modern analytics frameworks. All-flash storage offers this fast access, thanks to its solid-state nature.
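One way to sanity-check that latency claim is a simple probe comparing a filesystem metadata call with an S3 HEAD request, sketched below. The path, bucket and key are hypothetical, and a real benchmark would average many samples rather than timing a single call.

```python
import os
import time

import boto3

s3 = boto3.client("s3")

t0 = time.perf_counter()
os.stat("/mnt/nfs_share/samples/tissue_001.tiff")  # file metadata lookup
file_ms = (time.perf_counter() - t0) * 1000

t0 = time.perf_counter()
s3.head_object(Bucket="pathology-images", Key="samples/tissue_001.tiff")
object_ms = (time.perf_counter() - t0) * 1000

print(f"stat: {file_ms:.2f} ms, S3 HEAD: {object_ms:.2f} ms")
```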
Fast file and object storage going for the ‘gold’
The world of data storage is truly embarking on the Golden Twenties, underlined by the explosive growth of modern analytics and machine learning, as well as of ransomware attacks. These trends will require storage solutions built for large volumes of unstructured data, with roaring performance levels and flexibility in access methods.
Organisations need a more efficient way to manage and unlock the value of unstructured data without adding more data storage complexity. Fast file and object storage platforms are the answer to the data challenges of both today and tomorrow. They can help enterprises win the race for true data value. These platforms are designed to help organisations simplify management complexity, reduce data silos, and answer the demands of modern data applications – while driving efficiencies and reducing costs. Will your company answer the file and object storage call?