Running OpenStack under VirtualBox – A Complete Guide (Part 2)

In the previous post, we got a basic OpenStack environment running under VirtualBox. This part takes it further by installing multiple compute nodes to spread the load of your instances. It also paves the way for Part 3, where we will start to look at Swift, OpenStack's object storage implementation (analogous to Amazon S3).

Part 2 – OpenStack on multiple VirtualBox VMs, with OpenStack instances accessible from the host

We will be using the same setup as in Part 1.

The proposed environment

  • OpenStack “Public” Network: 172.241.0.0/25
  • OpenStack “Private” Network: 10.0.0.0/8
  • Host has access to its own LAN on 192.168.0.0/16, separate from the above and not used for this guide
  • One VirtualBox VM running the software needed for the controller
  • One VirtualBox VM running the software needed for a compute node

This guide will assume you have followed Part 1. We are simply adding a compute node now, so the virtual machine you created in Part 1 will become the Cloud Controller (CC). OpenStack has been designed so that any part of the environment can run on a separate server. For this guide we will have the following:

  • Cloud Controller running: MySQL, RabbitMQ, nova-network, nova-scheduler, nova-objectstore, nova-api, nova-compute
  • Compute node running: nova-compute

The Guide

  • Add a new VirtualBox Guest
    • Name: cloud2
      • OS Type: Linux
      • Version: Ubuntu (64-Bit)
    • 2048MB RAM
    • Boot Hard Disk
      • Dynamically Expanding Storage
      • 8.0GB
    • After this initial set up, continue to configure the guest
      • Storage:
        • Edit the CD-ROM so that it boots the Ubuntu 10.10 Live or Server ISO
        • Ensure that the SATA controller has Host I/O Cache Enabled (recommended by VirtualBox for EXT4 filesystems)
      • Network:
        • Adapter 1
          • Host-only Adapter
          • Name: vboxnet0
        • Adapter 2
          • NAT
          • This will provide the default route to allow the VM to access the internet to get the updates, OpenStack scripts and software
      • Audio:
        • Disable (not required)
    • Boot the guest and install Ubuntu as normal (a VBoxManage sketch for scripting this guest creation appears after this list)
  • Assign static IPs to the cloud controller
    • Ensure that the Cloud Controller you created in Part 1 has a static address on eth0 (an example /etc/network/interfaces is sketched after this list).
      • For the sake of this guide, I’m assuming you have assigned the following
        • eth0: 172.241.0.101/255.255.255.128.  This address is your Cloud Controller address (CC_ADDR) and will be the API interface address you will be communicating on from your host.
        • eth1: stays as dhcp as it is only used for NAT’d access to the real world
    • Your compute nodes don’t need to be set statically, but for the rest of this guide it is assumed the addresses are as follows
      • Cloud2
        • eth0: 172.241.0.102/255.255.255.128
        • eth1: stays as dhcp as it is only used for NAT’d access to the real world
  • Grab this script to install OpenStack
    • This is the same script used in Part 1. It sets up a repository (ppa:nova/trunk) and installs the OpenStack components appropriate to the node type you specify
    • The options specified on the command line match the environment described above
    • wget --no-check-certificate \
      https://github.com/uksysadmin/OpenStackInstaller/raw/master/OSinstall.sh
  • Run the script (as root, or through sudo), specifying that you want a compute node and giving the IP address of the Cloud Controller
    • sudo bash ./OSinstall.sh -A $(whoami) -T compute -C 172.241.0.101
  • No further configuration is required.
  • Once completed, ensure that the Cloud Controller knows about this new compute node
    • On the Cloud Controller run the following
      • mysql -uroot -pnova nova -e 'select * from services;'
      • sudo nova-manage service list
      • You should see your new compute node listed under hosts
      • If you don't have a DNS server running that resolves these hosts, add your new compute node to /etc/hosts
        • 172.241.0.102 cloud2
  • On the new compute node run the following
    • sudo nova-manage db sync
  • As you copied your credentials from the cloud controller created in Part 1, you should be able to carry on using them from your host – but this time you can spin up more guests (an example euca2ools invocation appears after this list).
    • If you changed the eth0 address of your cloud controller, ensure your cloud/creds/novarc environment file has the correct IP (a quick grep check is sketched after this list).
  • Repeat the steps above to create further compute nodes in your environment, scaling seamlessly
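
If you would rather script the guest creation than click through the VirtualBox GUI, the steps above can be approximated with VBoxManage. This is only a sketch – the disk filename and ISO path are assumptions, and option names can differ slightly between VirtualBox versions:

    # Create and register the VM
    VBoxManage createvm --name cloud2 --ostype Ubuntu_64 --register

    # 2048MB RAM, audio disabled, NIC1 host-only on vboxnet0, NIC2 NAT
    VBoxManage modifyvm cloud2 --memory 2048 --audio none \
      --nic1 hostonly --hostonlyadapter1 vboxnet0 --nic2 nat

    # 8GB dynamically expanding disk on a SATA controller with host I/O cache enabled
    VBoxManage createhd --filename cloud2.vdi --size 8192
    VBoxManage storagectl cloud2 --name "SATA Controller" --add sata --hostiocache on
    VBoxManage storageattach cloud2 --storagectl "SATA Controller" \
      --port 0 --device 0 --type hdd --medium cloud2.vdi

    # Attach the Ubuntu 10.10 ISO (path is an assumption) so the guest can boot the installer
    VBoxManage storagectl cloud2 --name "IDE Controller" --add ide
    VBoxManage storageattach cloud2 --storagectl "IDE Controller" \
      --port 0 --device 0 --type dvddrive --medium ~/isos/ubuntu-10.10-server-amd64.iso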
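
For the static addressing described above, /etc/network/interfaces on the Cloud Controller would look something like the following on Ubuntu 10.10 (the compute node is identical apart from using 172.241.0.102):

    # /etc/network/interfaces on the Cloud Controller (cloud1)
    auto lo
    iface lo inet loopback

    # eth0 – OpenStack "Public" network, used for the API and access from the host
    auto eth0
    iface eth0 inet static
        address 172.241.0.101
        netmask 255.255.255.128

    # eth1 – stays on DHCP, only used for NAT'd access to the outside world
    auto eth1
    iface eth1 inet dhcp

Apply the change with sudo /etc/init.d/networking restart, or simply reboot the guest.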
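
If you did change the controller's eth0 address, a quick way to check your credentials is to grep the novarc file for the URLs it exports. The variable names and ports shown in the comments below are from memory of the novarc generated at the time and may differ in your environment, but each URL should point at 172.241.0.101:

    grep URL cloud/creds/novarc
    # expect output along these lines:
    # export EC2_URL="http://172.241.0.101:8773/services/Cloud"
    # export S3_URL="http://172.241.0.101:3333"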
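
With the credentials sourced, launching instances from the host works exactly as in Part 1 – a minimal euca2ools example (the image ID and keypair name are assumptions; use the ones you registered in Part 1):

    . cloud/creds/novarc                              # load the credentials from Part 1
    euca-describe-images                              # note the ID of your registered image
    euca-run-instances -k openstack -t m1.tiny ami-00000003
    euca-describe-instances                           # new instances may now be scheduled on cloud2 as well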

14 comments on “Running OpenStack under VirtualBox – A Complete Guide (Part 2)”

  1. Pingback: Running OpenStack under VirtualBox – A Complete Guide (Part 1) « System Administration and Architecture Blog

  2. Pingback: Installazione di OpenStack Nova (2) » azns.it

    • Yes, I want to do one covering Glance and the Dashboard, which really rounds out the feature set when running it under VirtualBox, and maybe something along the lines of Crowbar or Orchestra for deployment – though that is overkill for VirtualBox. Vagrant is better suited here, but in the real world you'd want an enterprise deployment mechanism. Either way, I'll put some new guides up soon.

      • So are you actually going to write the tutorial? And if you do, could you also specify your host system’s specs in more detail (GHz, etc.)?

        Thanks,
        Haneef Mubarak

      • Hi,
        I have other conflicting priorities at the moment, so continuing this tutorial will have to wait, but I'll get around to updating the methods in line with fixes seen in Diablo and the forthcoming Essex release.

        Regards,
        Kev

  3. Hi there, I'm not able to fully run the image on node2. I can see this in the console:
    Sending discover…
    wget: can’t connect to remote host (169.254.169.254): Network is unreachable

    how can i fix it?

  4. @fabio – I had the wget: can’t connect to remote host (169.254.169.254) error too. The guest on the compute node was not getting the DHCP offer back from the network node. I changed the VirtualBox NICs for the eth0s to allow promiscuous mode and then it got the IP.

    Before that I had to change from using
    --glance_host={controller_ip}
    --glance_port=9292
    to just
    --glance_api_servers={controller_ip}:9292

  5. @dara thank you very much for your reply.
    I still cannot get it to work. Could you please clarify a couple more doubts I have?
    1) On the compute node, did you assign an IP to the bridge? Some manuals say I also have to bind br100 to the eth card (http://docs.openstack.org/bexar/openstack-compute/admin/content/ch03s03.html#d5e257 under Configuring Multiple Compute Nodes). Did you follow all those steps or not?
    2) When you say to change the eth0 NIC to allow promiscuous mode (ifconfig eth0 promisc): do you mean on the compute node, on the controller, or maybe on my machine? (I actually turned all the interfaces on the node to promisc, but with no success.)
    3) Elsewhere people suggest loading an iptables rule (iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp -m tcp --dport 80 -j DNAT --to-destination $nova_api:8774). Did you also issue that command on the node, or was turning eth0 to promisc enough?
    Thanks again; I'm not far from having a fully working cloud, but I've already spent a week struggling with this issue.

    • 1. I did not assign an IP to the bridge on the compute node.

      2. No, it is a setting on the VirtualBox NIC “hardware”. Shut down the VirtualBox VMs, select the VM, go to Settings, Network, select the adapter that corresponds to the guest's eth0, and under Advanced you should see a setting called “Promiscuous Mode” that defaults to Deny – change this to “Allow All” (the equivalent VBoxManage command is shown at the end of this reply).

      I did this on both the controller and compute VMs. I didn't need to do ‘ifconfig eth0 promisc’.

      I used tcpdump on the various interfaces and found the DHCP offers were not getting back up into the compute node – for no good reason. This could be something very specific to virtual NICs and NIC drivers, i.e. the VirtualBox version and the Linux guest version. What kind of VBox NICs are you using? I'm using virtio-net.

      3. I didn't have to do that. If the guest on the compute node gets an IP on 10.0.0.0, it will get a default gateway of 10.0.0.1, which is the IP of the bridge on the controller/network node, and that should forward it on. Can your network node get to this address?
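
      For reference, the same NIC setting can be flipped from the command line while the VMs are powered off – a sketch, assuming the guests' eth0 is adapter 1 and the VMs are named as in the guide:

      VBoxManage modifyvm cloud1 --nicpromisc1 allow-all
      VBoxManage modifyvm cloud2 --nicpromisc1 allow-all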

  6. I can run a new instance on the compute node (cloud2), however I cannot get access to that instance.

    clouduser@cloud1:~$ euca-describe-instances
    RESERVATION r-j0r8006n myproject default
    INSTANCE i-00000007 ami-00000003 10.0.0.7 10.0.0.7 running openstack (myproject, cloud2) 0 m1.tiny 2012-01-02T17:00:48Z nova aki-00000001 ari-00000002

    Here is my attempt:

    clouduser@cloud1:~$ ssh -i cloud/creds/openstack.pem root@10.0.0.7
    ssh: connect to host 10.0.0.7 port 22: No route to host

    Thanks for your help.

  7. Hi, I'm also having an issue with the DHCP offers not making it back to the guest VM. The offers are seen on eth0 (using tcpdump) but don't seem to be forwarded to the bridge, so the guest VM can't see them (i.e. they are not seen on vlan100, vnet0 and the other bridge interfaces). The promiscuous setting change on the eth0s didn't help (I'm using virtio-net for eth0). Any tips would be great, thanks!
