My Rackspace colleague, James Thorne, has a post on his extremely useful blog that uses the 4.2.1 release of the Rackspace Private Cloud (RPC) to install the Havana release of OpenStack on a laptop using Vagrant and Chef. Since James’ post uses the GA version of RPC and includes Neutron networking, I highly recommend that readers follow the instructions in his post instead.
Not long ago, I posted an article outlining how you could install an HA OpenStack environment on your laptop or workstation, using Vagrant with a virtualization tool such as VirtualBox, VMware Workstation, or VMware Fusion. That post borrowed heavily from another post by my Rackspace colleague, James Thorne, and was designed to help someone get up and running quickly on OpenStack for testing and for demonstrating high availability. Since then, a new release of OpenStack, code-named Havana, has shipped with some important new features.
In this post, I’ll walk through installing OpenStack (Havana) on a laptop using Vagrant with VirtualBox, Workstation, or Fusion; this deployment should be suitable for testing and demos. However, instead of deploying a pair of HA Controllers, I’ll provide instructions on deploying a single controller, and throw in a Cinder Volume node as a bonus. Also, instead of having you flip back and forth between my blog and James’, I’ll put the bulk of the instructions and commands in this post; that said, I am borrowing heavily from James and the great work he has put into his blog.
Before we begin the installation, here are some notes and caveats you should be aware of:
- We will be using Opscode’s Chef to deploy an early access release of the Rackspace Private Cloud (RPC), version 4.2. RPC 4.2 is a beta product that is based on the downstream Havana trunk. So although Rackspace has changed the Horizon dashboard skin to show the RPC logo, the underlying code is 100% OpenStack trunk.
- Because Rackspace is not currently supporting the Heat project in its early access release of RPC, Heat is not included in the generic install. However, in this post I will walk you through how to add Heat to your deployment.
- Swift object storage is not part of this install. I’ll update this post or write up a new one on how to install Swift when I’ve had more time; I may turn that into an entire blog series on the Swift project.
- While Neutron networking is part of RPC 4.2 and is the recommended networking project to use in production, I am going to walk through the old-style Nova networking in this post. I am primarily doing this because of some issues I am having getting Neutron to work with my particular setup; I expect to update this post as soon as I have those issues worked out. My thought is to get this post out there so folks can start playing with other aspects of OpenStack, like the new dashboard in Havana, Heat, Ceilometer, Cinder, etc.
- Since RPC 4.2 is an early access release, Rackspace does not provide support for the product; again, think of it as a beta product. Full support will be available next month when RPC 4.2.1 is generally available.
Setting Up Your Lab/Demo Environment
So what are we building on your laptop or PC to test OpenStack? The sample Rackspace Private Cloud reference architecture below shows the 5 node environment we will be creating, with 1 Chef server node, 1 Controller node, 2 Nova Compute nodes, and 1 Cinder node. Note that most OpenStack services will run on our Controller node.
Installing And Configuring Vagrant
- The first step is to install Vagrant; I recommend using version 1.3.4 or later.
- Then install whichever virtualization software (VirtualBox, VMware Fusion, VMware Workstation) you intend to use as your Vagrant provider, including any Vagrant plugin that is required.
[You can get more details on the above steps at James’ blog]
- Download the Vagrant box appropriate for the virtualization software you have chosen. In this case, we will be using the precise64 Vagrant box to install Ubuntu 12.04 LTS on all our nodes:
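The box-add commands looked roughly like the following at the time (the box URLs are the ones Vagrant published back then and may have moved since; add only the one that matches your provider):

```shell
# VirtualBox users
vagrant box add precise64 http://files.vagrantup.com/precise64.box

# VMware Fusion/Workstation users (a separate VMware-format box)
vagrant box add precise64 http://files.vagrantup.com/precise64_vmware.box
```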
- Create a directory for this environment (Below is an example) and create the initial Vagrantfile:
mkdir -p ~/vagrant/havana
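With the directory in place, change into it and generate the initial Vagrantfile:

```shell
cd ~/vagrant/havana
vagrant init
```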
- Configure your favorite editor (Mine is vim, which is what I use for my example below):
apt-get install vim -y
export EDITOR=$(which vim)
- Open the newly created Vagrantfile and REPLACE the contents with the following entries:
[You can also access the Vagrantfile here.]
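Since the original file isn’t reproduced here, below is a minimal sketch of the kind of multi-machine Vagrantfile this setup calls for. The IP addresses, memory sizes, and box name are my assumptions (the addresses simply stay on the 192.168.236.0/24 network used later in this post); adjust them to match your environment:

```ruby
# Hypothetical sketch only -- node IPs, memory sizes, and the box
# name are assumptions; substitute your own values.
nodes = {
  "chef"       => "192.168.236.5",
  "controller" => "192.168.236.10",
  "compute1"   => "192.168.236.11",
  "compute2"   => "192.168.236.12",
  "cinder"     => "192.168.236.13",
}

Vagrant.configure("2") do |config|
  config.vm.box = "precise64"

  nodes.each do |name, ip|
    config.vm.define name do |node|
      node.vm.hostname = name
      # Host-only network that the OpenStack services will use
      node.vm.network "private_network", ip: ip

      node.vm.provider "virtualbox" do |vb|
        vb.customize ["modifyvm", :id, "--memory",
                      name == "chef" ? "1024" : "2048"]
      end
      node.vm.provider "vmware_fusion" do |v|
        v.vmx["memsize"] = (name == "chef" ? "1024" : "2048")
      end
    end
  end
end
```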
- Now it’s time to start-up your Vagrant environment:
[For VirtualBox]: vagrant up
[For VMware Fusion]: vagrant up --provider vmware_fusion
[For VMware Workstation]: vagrant up --provider vmware_workstation
vagrant status (to confirm all nodes are running)
Setting Up Chef Server
- ssh to your chef node and log on as root (password is vagrant):
vagrant ssh chef
- Then install Chef Server:
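The install script itself isn’t linked above; at the time it was typically fetched from the rcbops support-tools repository (the URL is an assumption on my part, from the RPC documentation of that era):

```shell
curl -O https://raw.github.com/rcbops/support-tools/master/chef-install/install-chef-server.sh
```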
chmod +x install-chef-server.sh
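Then run it:

```shell
./install-chef-server.sh
```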
[Not sure why, but I’ve had to run the installer twice each time to get everything installed correctly; I suggest you do the same]
- After install, re-source your environment so you can use knife commands:
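The exact file depends on where the install script wrote its exports; re-sourcing root’s profile is usually enough:

```shell
source ~/.bash_profile
```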
- Install the RPC 4.2 Cookbooks:
apt-get install git -y
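The checkout and submodule commands that follow assume you have already cloned the rcbops cookbook repository and changed into it:

```shell
git clone https://github.com/rcbops/chef-cookbooks.git
cd chef-cookbooks
```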
git checkout v4.2.0
git submodule init
git submodule sync
git submodule update
knife cookbook upload -a -o cookbooks
knife role from file roles/*.rb
- Since RPC 4.2 does not currently include the Heat project, we will need to manually add the Heat role to the “run_list” for our “single-controller” role:
- Add the following line to the run_list:
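The rcbops cookbooks ship a heat-all role for this purpose. One way to do the edit (the exact editing command is my assumption) is `knife role edit single-controller`, appending this entry to the run_list array:

```
"role[heat-all]",
```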
- Next, create the RPC 4.2 Chef Environment:
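The environment name rpcv420 is used in the bootstrap commands later in this post; a sketch of the creation command (knife will open the editor you configured earlier):

```shell
knife environment create rpcv420 -d "RPC 4.2 Environment"
```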
- Open the newly created Environment file and REPLACE the contents with the following entries:
[You can also access the Environment file here.]
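Since the original file isn’t reproduced here, below is an illustrative sketch of the shape of such an environment. The attribute names follow the rcbops cookbook conventions of that era, but every value here is a placeholder assumption; use the real environment file linked above if you can:

```json
{
  "name": "rpcv420",
  "description": "RPC 4.2 Environment",
  "json_class": "Chef::Environment",
  "chef_type": "environment",
  "cookbook_versions": {},
  "default_attributes": {},
  "override_attributes": {
    "nova": {
      "network": { "public_interface": "eth1" }
    },
    "osops_networks": {
      "nova": "192.168.236.0/24",
      "public": "192.168.236.0/24",
      "management": "192.168.236.0/24"
    }
  }
}
```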
Setting Up The OpenStack Nodes
- Create a password-less SSH Public/Private key, hitting enter to accept all defaults.
- Copy the SSH Public key to all the OpenStack nodes:
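A non-interactive sketch of both steps (the node hostnames assume name resolution was set up for the Vagrant VMs, e.g. via /etc/hosts entries):

```shell
# Equivalent of running ssh-keygen and accepting all defaults
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Push the public key to each OpenStack node
for node in controller compute1 compute2 cinder; do
  ssh-copy-id root@$node
done
```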
- Setup your CHEF_SERVER_URL Environment variable:
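For example (the chef node’s IP address here is an assumption; substitute the address of your own chef server VM):

```shell
# IP address is an assumption -- use your chef node's address
export CHEF_SERVER_URL=https://192.168.236.5:443
```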
- Now install and register the Chef client on each node and also set the RPC 4.2 Chef Environment on each node:
knife bootstrap controller --environment rpcv420 --server-url $CHEF_SERVER_URL
knife bootstrap compute1 --environment rpcv420 --server-url $CHEF_SERVER_URL
knife bootstrap compute2 --environment rpcv420 --server-url $CHEF_SERVER_URL
knife bootstrap cinder --environment rpcv420 --server-url $CHEF_SERVER_URL
- Then add the appropriate OpenStack roles to each node:
knife node run_list add controller 'role[single-controller]'
knife node run_list add compute1 'role[single-compute]'
knife node run_list add compute2 'role[single-compute]'
knife node run_list add cinder 'role[cinder-volume]'
- ssh to each of your OpenStack nodes and log on as root (I prefer doing so from the chef node since that logs me on as root automatically).
[I recommend performing the install in the following order: controller, compute1, compute2, cinder]
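On each node, the install itself is just a chef-client run; for example:

```shell
ssh controller    # repeat for compute1, compute2, and cinder
chef-client
```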
Once you’ve gone through all the nodes, RPC 4.2 (powered by OpenStack Havana) should be up and running on your laptop/workstation.
Setting Up the Cinder-volume Node
The following steps will configure a 4 GB test loopfile that you can use for Cinder block storage services. You can find more details in the OpenStack Block Storage Service Administration Guide.
- ssh to your cinder node and log on as root (If you are not already there from the install):
- Create a 4 GB test loopfile:
- Mount the test loopfile:
- Initialize it as a lvm ‘physical volume’:
- Create the lvm ‘volume group’:
- Confirm the cinder-volume has been created (you should see a 4 GB cinder volume):
- Restart the cinder-volume service on the cinder node
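The individual commands for the steps above, sketched under the assumptions that /dev/loop2 is a free loop device and that the file lives under /opt (the volume group name cinder-volumes is the Cinder default):

```shell
# Create a 4 GB sparse backing file
dd if=/dev/zero of=/opt/cinder-volumes.img bs=1 count=0 seek=4G

# Attach it to a free loop device (loop2 is an assumption)
losetup /dev/loop2 /opt/cinder-volumes.img

# Initialize it for LVM and build the volume group Cinder expects
pvcreate /dev/loop2
vgcreate cinder-volumes /dev/loop2

# Confirm the 4 GB volume group exists
vgdisplay cinder-volumes

# Restart the volume service so it picks up the new volume group
service cinder-volume restart
```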
- ssh to your controller node and log on as root to restart the other cinder services:
service cinder-api restart
service cinder-scheduler restart
- Now we are going to create a test volume, but first you may have to source the “openrc” file to give yourself sufficient credentials to use the OpenStack APIs:
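The cookbooks drop a credentials file in root’s home directory (the exact path is an assumption):

```shell
source ~/openrc
```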
- Create a 1 GB Cinder volume and confirm it was created:
cinder create --display_name testvol 1
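To confirm the volume was created:

```shell
cinder list
```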
- At this point, you can conserve some resources on your laptop by logging off all the OpenStack nodes, returning to your laptop’s or workstation’s shell, and shutting down the chef node:
exit (repeat until you are logged off all OpenStack nodes)
vagrant halt chef
Before logging on to the Horizon dashboard to play around or to perform a demo, let’s do some initial configuration.
- If you are not currently logged into one of the OpenStack nodes as root, do so now (I usually log on to the controller node); you may have to source the “openrc” file to give yourself sufficient credentials to use the OpenStack APIs:
- Upload an image to Glance and confirm upload (The example below will upload a small Linux image called “cirros”):
glance image-create --name cirros-0.3.1-x86_64 --is-public true --container-format bare --disk-format qcow2 --copy-from http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
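To confirm the upload (the image status should eventually show as active):

```shell
glance image-list
```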
- Now we’ll add some rules to our default security group to allow “ping” and “ssh” to the Cloud instances we’ll be launching:
nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0
nova secgroup-add-rule default tcp 22 22 0.0.0.0/0
- Next we’ll create a floating IP pool, called “public,” with IP addresses that can be assigned to your instances to allow external (from your laptop or workstation) access (The example below creates a pool of 16 contiguous addresses, called “public,” in the 192.168.236.0/24 network):
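For example, a /28 range carves out 16 contiguous addresses (the specific range is my assumption; pick addresses on your 192.168.236.0/24 network that nothing else is using):

```shell
nova floating-ip-bulk-create --pool public 192.168.236.64/28
```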
Taking The Dashboard For A Spin
Time to spin up your first Cloud instance and attach a persistent block volume to it.
- From your web browser, launch the Horizon dashboard via the Controller node IP address:
[Note again that the dashboard has the Rackspace Private Cloud skin; however, it is still essentially the Horizon dashboard.]
- Log in as the Cloud Administrator using User Name: admin and Password: secrete.
- Navigate to the “Project” tab on the left-hand side of the screen and you’ll see an overview of this project/tenant:
Spinning Up And Configuring Instances
- Go to “Instances” and click “Launch Instance.” Choose the cirros image you just uploaded to Glance to launch your first Cloud instance, and then watch the dashboard as the instance goes from “Spawning” to “Running”:
- Assign a floating IP address to your instance by going to “Access & Security” and choosing the “Floating IPs” section. Click on the “Allocate IP To Project” button and from the dialog box, allocate an IP from the public floating pool you created earlier.
- Once a floating IP address has been allocated, assign it to the Cloud instance by clicking on the “Associate” button and choosing the new IP address and the instance you created earlier:
- Go back to the “Instances” page and note that your instance now has 2 IP addresses assigned to it, including the floating IP address:
- From your workstation, ssh to the instance using the floating IP address and log on as the user “cirros.” (The example below uses 192.168.236.65 as the floating IP address):
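Using the example address, and assuming the stock cirros 0.3.1 credentials:

```shell
# Default cirros 0.3.1 password is "cubswin:)"
ssh cirros@192.168.236.65
```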
- Launch a second Cloud instance using the cirros image or some other image you may have uploaded:
- Navigate back to the “Admin” tab on the left-hand side of the screen and move to the “Hypervisors” page. You should see that the 2 instances you created are distributed across the 2 Compute nodes as would be expected based on the behavior of the Nova-scheduler:
- Go to the “Volumes” page and note the Cinder volume we created earlier:
- Create a new volume by clicking on the “Create Volume” button and filling out the “Volume Name” and “Size (GB)” fields in the dialog box (See example below):
- Attach 1 of the 2 volumes to an instance by clicking on the “Edit Attachments” button for the chosen volume; choose an instance in the “Attach to Instance” field and fill out the “Device Name” field before clicking on the “Attach Volume” button. (See example below):
- When you are done, you can shut down all OpenStack nodes with a single command:
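From the Vagrant directory on your laptop or workstation:

```shell
vagrant halt
```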
- Alternatively, you can suspend all OpenStack nodes with a single command:
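Again from the Vagrant directory:

```shell
vagrant suspend
```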
You now have an environment to play with and to demo OpenStack on your laptop or workstation. For more information on how to configure and to use OpenStack, I recommend looking at the Documentation sections of the OpenStack Foundation website and the Rackspace Private Cloud Knowledge Center. To dive deeper into configuring OpenStack with Chef and vagrant, I recommend perusing the Professional OpenStack website and also purchasing the “OpenStack Cloud Computing Cookbook” by my colleagues, Cody Bunch and Kevin Jackson.