OpenStack Interop Challenge

It’s now almost 18 months since the OpenStack Foundation addressed customer concerns over interoperability. At the OpenStack Summit in Vancouver, it told OpenStack distributors to remove proprietary code from their products. It also introduced the OpenStack Powered logo. Since then, the OpenStack community has worked hard to deliver interoperability.

At the OpenStack Summit in Austin in May 2016, IBM issued an interoperability challenge. Don Rippert, general manager for IBM Cloud Strategy, Business Development and Technology, called for OpenStack vendors to prove interoperability in Barcelona. This week sixteen OpenStack vendors stood on stage, although eighteen are credited with passing the interoperability tests.

What interoperability have vendors demonstrated?

Before getting to this point, vendors first have to meet the OpenStack Foundation “Powered By” criteria. That requires them to support a given set of APIs and sections of code that are defined on the OpenStack website.

The Community Interop Challenge was fairly simple. All the participants were asked to run the same workload and deployment tools across the OpenStack products they offer. This took the form of a three-tier LAMP stack enterprise application deployed using Ansible and the OpenStack Shade library. They then had to execute a second workload using Docker Swarm scripts and Terraform as part of the collaborative effort.
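
To give a sense of what this involves, here is a minimal sketch of the kind of call those Ansible playbooks make under the hood. Shade is a Python library that presents a single interface across OpenStack clouds; the cloud name, server name, image and flavor below are hypothetical placeholders, not the challenge’s actual workload scripts.

    import shade

    # Credentials are read from a clouds.yaml file; "my-openstack" is a
    # placeholder for whichever vendor's cloud is being targeted.
    cloud = shade.openstack_cloud(cloud='my-openstack')

    # Boot one node of the LAMP stack. Shade resolves the image and
    # flavor names and hides vendor-specific API differences, so the
    # same call works unchanged against any compliant OpenStack cloud.
    server = cloud.create_server(
        name='lamp-web-1',
        image='ubuntu-16.04',   # hypothetical image name
        flavor='m1.small',      # hypothetical flavor name
        wait=True,
        auto_ip=True,
    )

    print(server.status, server.public_v4)

Moving the workload to another vendor’s cloud means changing only the cloud entry, which is exactly the portability the challenge set out to prove.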

Don Rippert, general manager for IBM Cloud Strategy, Business Development and Technology

To prove to the community that this was real, the challenge was completed live on stage. Sixteen of the eighteen vendors credited with passing the tests sent teams to Barcelona, and all stood on stage and delivered on the challenge. The eighteen vendors are: AT&T, Canonical, Cisco, DreamHost, Deutsche Telekom, Fujitsu, HPE, Huawei, IBM, Intel, Linaro, Mirantis, OSIC, OVH, Rackspace, Red Hat, SUSE and VMware.

In an IBM press release, Rippert said: “What customers want from open source projects is innovation, integration and interoperability. Nobody has doubted the innovation and integration capabilities within the OpenStack projects, however some doubted whether the vendors supporting OpenStack would work together to achieve interoperability. Today with this significant milestone, we are proving to the world that cross-vendor OpenStack interoperability is a reality. When it comes to OpenStack, our hope is that this demonstration of working interoperability will reduce customer fears of vendor lock-in. We at IBM look forward to continued work with the community and fellow OpenStack vendors to continually improve interoperability to meet the goals of our customer base.”

Why does this matter?

Vendor lock-in is a real thing. With OpenStack prior to the Vancouver summit, it was becoming harder to deploy the same workload to multiple clouds in one go. This runs counter to the whole idea of an open source cloud. With so much emphasis on hybrid cloud, end users need to know they can deploy their workload wherever they want. More importantly, it has to be quick and easy, and it must not require customised deployment scripts.

While the use case for the on-stage test is limited, it has shown that this is now possible. It has substantially lowered the barrier to a multi-cloud world for end-user customers. It also benefits cloud providers who are deploying OpenStack in their data centres. They no longer have to deploy multiple OpenStack clouds in order to provide hybrid cloud support for customers.

Is interoperability now a done deal?

No! OpenStack interoperability is a long way from completion. The proof on stage at the OpenStack Summit Barcelona is a good start, but we need to see more complex applications, written to take advantage of a wider range of API calls, deployed just as easily. Such testing will still need to be bounded as a defined set of tests whose calls go only to the core OpenStack projects. But as OpenStack vendors support more and more of the OpenStack projects, we need a test regime that shows interoperability is maintained.

The next challenge to address is portability. There is a valid use case that sees a customer moving between OpenStack clouds from different vendors. Customers’ reasons for this include cost, a lack of support from a preferred cloud provider, or simply choice. Taking an established workload and moving it takes time. Rackspace engineers in London admitted recently that it takes around a week to migrate a customer workload. The problem is not the core OpenStack components but all the little things, such as API calls.

Solving the API problem

Mark Baker, Ubuntu Server and Cloud Product Manager, Canonical

This is where the community is still split on how to resolve the problem. Mark Baker, Ubuntu Server and Cloud Product Manager at Canonical, sees several issues. The first is the challenge of APIs as part of the migration. He points to the work that Eucalyptus did around AWS before it was acquired by Hewlett Packard Enterprise (HPE).

Eucalyptus supported the AWS APIs to allow customers to develop on-premises and deploy to the cloud. It also allowed customers to pull AWS workloads back to their on-premises cloud. The problem was choosing which APIs to support, how much of each API to support, and what to do when the customer used something else. This is something HPE is still working through.

Baker believes that the current orchestration tools within OpenStack are not capable of dealing with this level of complexity. He is quick to point to Ubuntu’s Juju project, saying: “Customers can use Juju charms rather than the underlying APIs. We take care of the mapping to the APIs making it easy for them to move between clouds.”

Baker is not just talking about OpenStack clouds here. Ubuntu sits on multiple cloud providers’ platforms. Baker says that Juju’s orchestration engine means customers can deploy a workload to OpenStack, AWS, Azure, Google Compute and any other cloud platform that emerges.
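
Juju’s actual charm mechanism is beyond the scope of this article, but a toy sketch of the pattern Baker describes, where one declarative request is mapped onto each cloud’s own provisioning call, might look like the following. Everything here, including the class names and the provider table, is hypothetical and purely illustrative, not Juju’s real implementation.

    # Hypothetical illustration of the orchestration pattern: the user
    # expresses intent once, and a mapping layer translates it into
    # each cloud's own provisioning call.

    class OpenStackProvider:
        def create_instance(self, name, size):
            # Stand-in for a real OpenStack API call
            print("openstack: boot {} with flavor {}".format(name, size))

    class AWSProvider:
        def create_instance(self, name, size):
            # Stand-in for a real EC2 API call
            print("aws: run instance {} of type {}".format(name, size))

    PROVIDERS = {"openstack": OpenStackProvider(), "aws": AWSProvider()}

    def deploy(workload_name, size, cloud):
        # The caller never touches the underlying API; moving between
        # clouds is a one-word change to the `cloud` argument.
        PROVIDERS[cloud].create_instance(workload_name, size)

    deploy("wordpress", "small", cloud="openstack")
    deploy("wordpress", "small", cloud="aws")

The design point is that portability lives in the mapping layer, not in the workload definition, which is why Baker argues customers should target charms rather than the underlying APIs.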

The hidden challenge of certification and control

Baker also talks about a more serious and hidden issue – certification. He said: “Take a workload in one version of Linux. The technology may have certifications and control that are required by the customer for regulatory or security needs. There is no guarantee that when they move to a different version of Linux that certifications and controls will be there.”

This is not something that OpenStack is currently addressing with its interoperability testing. Baker points out that there is no check in the orchestration engine for certifications. As a result: “[customers] cannot just drop workloads to a new Linux or cloud. Customers are not aware of this problem and this is something that must be addressed.”

Conclusion

Interoperability is a big issue for everyone. The OpenStack Foundation has achieved a major milestone with the tests that took place on stage at the OpenStack Summit in Barcelona. It now needs to lay out a process to bring vendors such as Oracle into the main interoperability group. It also has to say how it will deal with OpenStack distributions that do not demonstrate they are willing to play well with others.

There is a lot that needs doing to solve the migration interoperability challenge. This is complex and will require some serious thinking by the community. It may look to improve its current orchestration tooling. Alternatively, it could consider approaching Ubuntu and cutting a deal around Juju. Of course, Ubuntu could force the issue and make Juju an OpenStack project. That would really create a stir, and it may be just what is needed to move forward.

For now, customers can begin to deploy some of their hybrid cloud workloads to over eighteen OpenStack clouds.
