OpenStack For VMware Administrators and Architects
In my previous role as the OpenStack and VMware Evangelist for Rackspace, I had the opportunity to speak with numerous companies about their OpenStack implementations or proofs-of-concept. These interactions included customer meetings, user group meetups, and conference sessions. A trend I noticed was the increasing number of VMware administrators and architects becoming involved in their company’s OpenStack initiatives. For example, Scott Lowe and I gave an “OpenStack For VMware Administrators” talk at the previous two OpenStack Summits, in Hong Kong and in Atlanta, and at each talk I asked how many attendees also had responsibility for their company’s VMware infrastructure. In Hong Kong, roughly 25% of the room raised their hands; six months later in Atlanta, roughly 50% of the attendees did. The common theme I heard then, and still hear today, is that as enterprises increasingly look at OpenStack, they typically assign the responsibility for doing so to their VMware SMEs, who are already familiar with virtualization technologies.
As I continue to speak with these “VMware turned OpenStack” administrators, several common themes have emerged around challenges they are seeing in trying to deploy OpenStack:
- Making OpenStack Enterprise-grade – OpenStack is a rapidly maturing cloud platform that is being used in a number of enterprises today. However, hardening the platform and making it resilient requires considerable engineering and development skills. As a large customer told me recently, “we did not realize how much money, manpower, and time would be required to make our OpenStack deployment production-ready.”
- Learning New Skills – VMware administrators are being asked to become OpenStack experts in a very short amount of time. That is a challenge when you are unable to fully leverage your existing knowledge base and experience, and have to operate a new cloud platform using a completely different set of tools.
- Gaps in Operational Capabilities – As OpenStack continues to mature, one of the big gaps that needs to be filled is in the area of operational capabilities, particularly around monitoring and logging. Today, users typically have to cobble together various tools, such as Nagios and Logstash, and customize them in order to have visibility into their cloud.
- Vendor Support – Most vendor solutions today address some, but not all, of the elements required to run an OpenStack cloud. This typically means that users have to source their solution from multiple vendors and manage multiple touch-points to receive support from those vendors.
Introducing VMware Integrated OpenStack (VIO)
This brings us to the announcement that VMware is making today about their new OpenStack solution – VMware Integrated OpenStack (VIO). As some of you are aware, VMware has been a top contributor to the OpenStack project since joining the OpenStack Foundation in 2012. These contributions have included integration of vSphere and NSX technologies into the OpenStack platform so they can be managed via OpenStack’s open APIs while continuing to provide differentiating value. With VIO, VMware has taken the next step to create a solution that makes it easier than ever for current VMware administrators to deploy and manage OpenStack.
In a nutshell, VIO packages the stable release of OpenStack into an OVA and delivers it as a vApp, so that OpenStack can be deployed as an “over-cloud” onto a VMware-based “under-cloud.” The OpenStack-based “over-cloud” consists of the control plane services that allow users and administrators to launch resources using the integrated OpenStack projects and various plugins, such as third-party networking and storage technologies. This “over-cloud” also exposes the OpenStack APIs so that developers can provision their own cloud resources without requiring infrastructure admin assistance. Meanwhile, the VMware-based “under-cloud” allows the OpenStack control services to leverage underlying vSphere capabilities such as HA, vMotion, and DRS.
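To make that developer-facing API access a bit more concrete, here is a minimal sketch of the JSON body a developer might POST to the Nova compute API’s `/servers` endpoint to launch an instance. The image and flavor identifiers below are placeholders, not values from any real deployment; in practice they would come from the Glance image catalog and the Nova flavor list.

```python
import json

# Placeholder identifiers -- these are illustrative only.
IMAGE_REF = "ubuntu-14.04-image-id"
FLAVOR_REF = "m1.small-flavor-id"

def build_server_request(name, image_ref, flavor_ref):
    """Build the JSON body for a Nova v2 'create server' API call."""
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
        }
    }

# A developer would POST this body to the Nova endpoint exposed
# by the "over-cloud", with an auth token from Keystone.
body = build_server_request("dev-vm-01", IMAGE_REF, FLAVOR_REF)
print(json.dumps(body, indent=2))
```

The point is that developers interact only with these open APIs; the vSphere machinery underneath is invisible to them.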
Inside VMware Integrated OpenStack (VIO)
Prior to VIO, a typical OpenStack deployment with vSphere integration consisted of two or more Controller nodes (assuming an HA configuration) that ran the required services to manage your cloud resources, plus some number of Nova Compute nodes that communicated with vCenter servers and served as the integration points to vSphere clusters. You can find details on how that integration works in a series of blog posts I previously wrote.
With the introduction of VIO, the Controller and Nova Compute nodes are deployed as virtual machines in a vSphere cluster, leveraging the HA capabilities of the underlying cluster and simplifying the work it would normally take to make the OpenStack control plane services highly available. The Compute nodes will each manage some set of vSphere clusters, allowing developers to create and launch VMware-backed cloud resources via the OpenStack APIs. Note that the beta and initial GA releases of VIO will only support vSphere as a hypervisor, with NSX or a specially modified Virtual Distributed Switch (vDS) for Neutron Networking-as-a-Service. While it is possible to deploy the OpenStack control plane services on the same vSphere cluster as your VMware-backed cloud resources, I would recommend using a separate management cluster for your Controller and Compute nodes.
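For readers who want a feel for how a Compute node points at vCenter, here is a rough sketch of the relevant `nova.conf` settings from the upstream vSphere integration. The hostname, credentials, cluster name, and datastore pattern are placeholders, and VIO’s installer would generate equivalents for you rather than having you hand-edit them:

```ini
[DEFAULT]
# Use the vCenter driver instead of the default libvirt driver.
compute_driver = vmwareapi.VMwareVCDriver

[vmware]
# Placeholder connection details for the vCenter server.
host_ip = vcenter.example.com
host_username = openstack-svc
host_password = secret
# The vSphere cluster this Nova Compute node will manage.
cluster_name = OpenStack-Cluster
# Only datastores matching this pattern are used for instances.
datastore_regex = openstack-.*
```

With this in place, each Nova Compute node acts as a proxy to a vSphere cluster rather than to a single hypervisor host, which is what lets DRS and vMotion keep working underneath OpenStack.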
It is also worth noting that since VIO leverages all the integration work and contributions that VMware has made to the OpenStack project, users will be able to take advantage of the Block-Storage-as-a-Service capabilities and third-party storage plugins available via vSphere’s integration with the OpenStack Cinder project. You can find details on how that integration works in a blog post I previously wrote.
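As a rough sketch of that Cinder integration, the VMDK driver that ships with the Cinder project is enabled in `cinder.conf` along the following lines (the vCenter address and credentials are placeholders for illustration):

```ini
[DEFAULT]
enabled_backends = vmdk

[vmdk]
# VMware VMDK driver shipped with the Cinder project.
volume_driver = cinder.volume.drivers.vmware.vmdk.VMwareVcVmdkDriver
# Placeholder vCenter connection details.
vmware_host_ip = vcenter.example.com
vmware_host_username = openstack-svc
vmware_host_password = secret
```

Volumes created through the Cinder API then land as VMDKs on vSphere datastores, so the same storage policies and tooling your vSphere environment already uses continue to apply.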
To address the challenges that exist with managing an OpenStack cloud, the “under-cloud” provides “VMware turned OpenStack” administrators with a familiar set of tools for deploying and managing their OpenStack infrastructure. These OpenStack-integrated tools include components from the vCloud Suite, such as vCenter Operations Manager (vCOps) and Log Insight.
Most readers will recognize that VMware previously rolled out a precursor to VIO in the form of the vSphere OpenStack Virtual Appliance (VOVA). While VOVA is targeted at those who want to experiment with OpenStack in their labs, VIO is much more robust and is intended to provide a cloud with feature-rich APIs to developers. In particular, VIO is designed to address some of the challenges, outlined earlier, that enterprises face as they look to deploy OpenStack.
Learning More About VIO
So how can someone learn more about and get their hands on VIO? As was announced today, VIO is now in private beta, with GA targeted for the first half of 2015. To request access to the beta program and/or to get more information on VIO, go to www.vmware.com/products/openstack. To read more about VIO, see the official VMware blog and Chris Wahl’s excellent blog post on the subject.
VIO in the EMC Federation
VIO will no doubt be a valuable solution to help drive the adoption of OpenStack, particularly in the enterprise, where there is so much existing VMware infrastructure and expertise. In particular, I see VIO as a great solution for VMware administrators who have been tasked to provide developers with a cloud that is more agile and has more robust developer-friendly APIs. With VIO being one of the EMC Federation’s OpenStack options, EMC II is committed to making sure that we will have the best storage technologies for VIO. Along those lines, expect that ViPR and its rich set of data services capabilities will become a compelling choice for customers who want to run best-of-breed object storage with VIO.
However, we will also have customers who may, for various reasons, choose not to go down the VIO route for their OpenStack initiative. Since the Federation is committed to providing customer choice, EMC will also be doing what we can to help those customers succeed with their OpenStack deployments. Chad Sakac talks more about this over at his Virtual Geek blog.
Stay tuned for more details on how we plan to provide solutions that will also address the challenges of deploying OpenStack that I outlined in this post. Meanwhile, contact me if you want to work on building some exciting solutions as a part of Team OpenStack @EMC.