Is OpenStack ready for prime time?

David Auslander is a Principal Program Manager in Microsoft's Azure Engineering Customer Advisory Team, providing the bridge between Azure Engineering and Microsoft's customers. David has over 30 years of experience in technology and has held technology leadership positions at Cognizant Technology Solutions, CSC, EMC, ADP and Sun Microsystems.

OpenStack has become a force to be reckoned with in the cloud computing market. The 451 Research Group forecasts OpenStack-related sales of $3.3 billion (£2.13bn) by 2018 – pretty good for an open source development project. But is it truly ready for the big time?

A potted history

OpenStack was introduced in 2010 as a joint project of NASA and Rackspace (NASA withdrew in 2013). In 2011 Ubuntu adopted OpenStack and became the first “vendor” to integrate with the platform. In 2012 Red Hat began a project to integrate with OpenStack and introduced commercial support by July 2013. Over time many other organisations have joined the foundation as sponsors and contributors. The recently released OpenStack Kilo (version 11) has approximately 400 new features and was the product of almost 1,500 contributors.

There is a downside to the open source model: lots of developers with lots of ideas about what should be included breeds complexity

The intent of the project was to create an open source cloud platform that enabled organisations to build their own cloud environments and have them communicate with other open cloud environments. The intended benefits of the OpenStack project were:

  • An open source support model, with developers all over the world ready to help.
  • An essentially free open source pricing model.
  • Seamless workload migration between OpenStack clouds, without worrying about the underlying virtualisation technologies – intended to give rise to a true hybrid cloud paradigm.
  • Do-it-yourself cloud environment setup, creation and management.

As time went on and the OpenStack foundation grew, projects were added and the core began to take shape. A six-month release cycle was put in place and OpenStack seemed headed for greatness. But there is a downside to the open source model: lots of developers with lots of ideas about what should be included breeds complexity.

A sprawling ecosystem

Over time, what started out as a platform for creating cloud compute, network and storage instances has grown projects and components covering many ancillary functions.

The core components generally available in OpenStack are:

  • Nova – cloud computing fabric controller
  • Glance – discovery, registration and delivery services for disk and server images
  • Swift – a scalable, redundant object storage system
  • Horizon – graphical dashboard services; compatible with third-party products
  • Keystone – centralised, directory-based identity services
  • Neutron (formerly Quantum) – system for managing networks and IP addresses
  • Cinder – block storage services, including support for Ceph, CloudByte, EMC, IBM, NetApp, 3PAR and many others
  • Heat – orchestration services for multiple cloud components utilising templates
  • Ceilometer – telemetry service providing a single point of contact for billing systems
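
Heat's template-driven orchestration can be illustrated with a minimal sketch. The following is a hypothetical Heat Orchestration Template (HOT) that ties several core components together: Nova boots the server, Neutron supplies the port, and Glance provides the image. The image, network and flavour names are assumptions for illustration, not part of any real deployment:

```yaml
# Hypothetical HOT template: one Nova server attached to a Neutron port.
heat_template_version: 2015-04-30   # the HOT version shipped with Kilo

description: Minimal sketch of a single-server stack

parameters:
  image:
    type: string
    description: Glance image name or ID (assumed to exist)
  network:
    type: string
    description: Neutron network name or ID (assumed to exist)
  flavor:
    type: string
    default: m1.small

resources:
  app_port:
    type: OS::Neutron::Port
    properties:
      network: { get_param: network }

  app_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks:
        - port: { get_resource: app_port }

outputs:
  server_ip:
    description: First IP address assigned to the server
    value: { get_attr: [app_server, first_address] }
```

A stack of this shape would typically be launched with `heat stack-create` (or, in later releases, `openstack stack create`), with the parameters supplied at creation time.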

Example components that have been added or are in the works are:

  • Trove – database-as-a-service provisioning
  • Sahara – Elastic MapReduce service
  • Ironic – bare-metal provisioning
  • Zaqar – multi-tenant cloud messaging
  • Manila – shared file system service
  • Designate – DNS as a service
  • Barbican – security API

Once the core components were developed by the OpenStack foundation, the ancillary projects were layered into the infrastructure. These are all great ideas, but all contributing to a larger and more complex stack. This complexity shows itself mostly in the configuration and deployment of the infrastructure.

A typical OpenStack deployment looks approximately like the following:

[Diagram: a highly simplified OpenStack deployment, based on the VMware Integrated OpenStack (VIO) product. Note the inclusion of the VMware Services Processor (VSP) virtual machine.]

All of the components of an OpenStack infrastructure are deployed as virtual machines, whose numbers can be increased or decreased easily according to demand. If a multi-segment cloud is necessary (for example, geographically distributed), multiple replicas of the above can be created and associated as regions within the same cloud. Even with the use of a vendor-integrated OpenStack product (as above), deploying an infrastructure of this type requires considerable planning, integration and configuration of the various components.
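
From the client side, region selection can be sketched with a `clouds.yaml` file, the configuration format read by the OpenStack client tools. The cloud names, endpoint and credentials below are hypothetical:

```yaml
# Hypothetical clouds.yaml: one cloud exposed as two regions.
clouds:
  mycloud-east:
    auth: &auth
      auth_url: https://keystone.example.com:5000/v3
      project_name: demo
      username: demo
      password: secret
      user_domain_name: Default
      project_domain_name: Default
    region_name: RegionOne
  mycloud-west:
    auth: *auth              # same Keystone endpoint, different region
    region_name: RegionTwo
```

With a file like this in place, `openstack --os-cloud mycloud-west server list` targets RegionTwo while sharing the same identity endpoint as RegionOne.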

Another consequence of this deployment complexity is that organisations increasingly turn to a vendor for help in deploying OpenStack. A not entirely unforeseen result is that many vendors have developed their own integrated versions of OpenStack – this is where the big number at the beginning of the article comes from.

The best practice is to roll out core services, and then only add the ancillary services that are necessary

Some of the vendors who market OpenStack integrations are VMware, Red Hat, IBM, HP, Cisco, Ubuntu and F5. Even my own company, CSC, when developing its hyper-converged private cloud offering set, chose to use integrated versions from VMware and Red Hat for two of the three offerings.

So, is OpenStack ready for prime time? I believe with the right planning, and the support of an integrated vendor, yes, OpenStack is a viable paradigm for creation of private and hybrid cloud environments. Let us look at the original list of intended benefits:

  • Open source support model – still holds true, but you can (and should) add vendor support to better protect your investment of time and money.
  • Open source pricing model – vendor-integrated OpenStack solutions are still considerably less expensive than the vendors’ non-open-source solutions.
  • Seamless workload migration – if configured properly, all of the benefits of workload migration can be realised.
  • DIY cloud – this requires either staff who are well trained in OpenStack deployment or help from a vendor. Once your staff are trained and have lived through a deployment or two (in the lab), it is all DIY from there. Our staff at CSC, having worked with both vendors, are now capable of delivering a private cloud environment to a customer in a matter of days.

On the issue of complexity, a great deal of attention has been paid in recent OpenStack releases to the seamless integration of the core components.

At its heart OpenStack is a pluggable, modular architecture where new components can be spun up easily. The best practice here is to roll out the core services and then only add the ancillary services that are necessary.
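
As one illustration of that practice, deployment tools typically expose per-service toggles. The snippet below follows the style of an RDO packstack answer file (treat the exact option set as an assumption for your packstack version), enabling the core services while deferring the ancillary ones until they are needed:

```ini
# Illustrative packstack-style answer file fragment:
# core services on, ancillary services deferred.
CONFIG_NOVA_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_SWIFT_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_CEILOMETER_INSTALL=y

# Add these later, only if the use case calls for them.
CONFIG_HEAT_INSTALL=n
CONFIG_TROVE_INSTALL=n
CONFIG_SAHARA_INSTALL=n
```

Flipping a toggle and re-running the installer layers the new service onto the existing deployment, which is exactly the incremental rollout the best practice describes.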
