IBM Spectrum Scale 4.2 and the IBM Elastic Storage Server 3.5:

It is now possible to create a common pool of storage using unified file and object storage. Companies moving to object storage, especially those looking to take advantage of OpenStack Swift for cloud storage, will be able to convert files to objects. Applications will be able to address the same data using either file or object requests, which means customers will not need to pay to upgrade their software.

For customers doing complex analytics, there will now be native HDFS support. Data will be seen as a single data lake rather than a set of discrete storage systems. The big advantages are that data transfer between storage systems is eliminated and analytics results are available to all applications immediately.

Other performance enhancements include new Quality of Service (QoS) features that will manage rebuild times and deliver greater asynchronous replication support. These come with a new policy manager that allows date- and time-based policies to be created. There will also be the ability to lift data to a higher storage tier based on a file’s “heat”, or access demand. It will be interesting to see how this latter feature works in practice. For example, accessing one of a set of linked files should only change the heat of the first file, not the rest, and it is unclear whether this will also employ a degree of predictive access rating.
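Spectrum Scale already exposes a FILE_HEAT attribute in its information lifecycle management (ILM) policy language, and it seems likely the new tiering capability builds on it. A sketch of what such a rule might look like, assuming that is the mechanism; the pool names 'silver' and 'gold' are illustrative, not from the announcement:

```
/* Illustrative ILM rule, not confirmed against the 4.2 release:
   when the 'silver' pool reaches 90% utilisation, migrate the
   hottest files (highest FILE_HEAT) to the faster 'gold' pool,
   draining 'silver' back down to 70%. */
RULE 'promote-hot' MIGRATE FROM POOL 'silver'
  THRESHOLD(90,70) WEIGHT(FILE_HEAT)
  TO POOL 'gold'
```

Heat tracking has to be switched on first (the mmchconfig fileHeatPeriodMinutes setting controls the decay window), and rules like this are run with mmapplypolicy. Whether the announced feature adds anything beyond this existing mechanism, such as the predictive ratings speculated about above, remains to be seen.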

Continuing the improvement in user interfaces announced with Spectrum Protect, IBM has announced a new GUI for Spectrum Scale. It will be interesting to know whether this is part of a single UI across all Spectrum Storage product families. That was the implication when the Spectrum Protect announcement was made, and IBM needs to be clearer about it.

The most exciting part of this announcement is the delivery of a turnkey Spectrum Scale Virtual Machine (VM) that customers will be able to download. It will enable them to try the new Spectrum Scale enhancements and can be run on anything from laptops to servers accessing external storage.

IBM also announced what it is terming its “Big Storage” technology preview. This is described as an enterprise-class, cloud-based storage service tier. The target market is very large archives with limited retrieval requirements. For industries such as oil and gas, which need to retain details of rigs for 25 years after decommissioning, and aerospace companies, which may need access to the design documents of older aircraft, this has an immediate appeal. It will also appeal to governments, which are increasingly placing large amounts of historical data online, where interest surges on first release but access drops considerably after a fairly short period.

This new Big Storage approach will enable local and cloud storage to be integrated through several approaches using OpenStack Swift. It means that data in archive systems, OpenStack cloud-based storage from multiple vendors, local data storage and IBM SoftLayer will all be able to become part of the same storage solution.

IBM Spectrum Control 5.2.8:

There is to be a new product, IBM Spectrum Control Advanced Edition, which will deal with non-SAN storage. It will include a wide range of predictive analytics and snapshot management features that were previously only available in IBM Virtual Storage Center.

As part of bringing all the products in the IBM Spectrum Storage brand closer together, there will be integrated support for the Spectrum Scale object storage announcement. At the same time there will be increased management and visibility of flash storage. One key feature is the addition of capacity management on all IBM FlashSystem models. This will ship on December 11, 2015.

Storage administrators will now be able to compare performance across multiple clusters. This will include the ability to see exactly what workloads are running on any given cluster and which clusters have the highest workloads. This will enable administrators to better balance the use of their storage clusters to get the best performance.

There is also a new customisable alerting system covering storage health, capacity, performance and configuration. Users can decide what information they want from the events system, and administrators looking to give business units greater visibility into their storage usage can redirect alerts to those users.

Conclusion

This is a major set of updates for the IBM Spectrum Storage family, and there are likely to be even more early next year. IBM has already signalled changes to the way the software will be licensed, with the goal of making it easy to compare on-premises and cloud-based storage costs. It is also extending the licence across all types of storage to prevent customers having to buy multiple licences, which means additional cost. This will all happen over the next 15 months as IBM works through a series of pilot programs.

Storage is one of the new business areas that IBM is concentrating on and investing heavily in. With $1 billion to spend on revamping and updating the entire Spectrum Storage brand, it seems that the storage team under Jamie Thomas is keen to show customers that the money is already being put towards product improvements.

The big question is whether this will translate into a higher market share for IBM. There are no guarantees in storage, with new entrants appearing all the time. Over the last few months, however, there has been a slowdown in new entrants but an increase in money being invested in existing storage start-ups seeking series B and C funding. IBM knows it will have to fight to retain its position, let alone increase market share, but if it keeps delivering regular updates like these, it can expect to see solid growth in storage revenues.
