Easy OpenStack Folsom with VirtualBox and Vagrant

Testing OpenStack is now easy thanks to VirtualBox and Vagrant. To run a mini test environment with Compute, Cinder, Keystone and Horizon you just need the following tools:

  • VirtualBox
  • Vagrant
  • Git client

Getting Ready

To set up a sandbox environment within VirtualBox to run OpenStack Folsom you will need to download the tools listed above: VirtualBox, Vagrant and a Git client for your platform.

Installation of these tools is simple – follow the on-screen prompts.

When ready, we need to configure VirtualBox “Host-Only” networking. This networking mode allows the VirtualBox guest and the underlying host to communicate with each other.
We will set up the following:

  • Host-Only Network: IP 172.16.0.254; Network 172.16.0.0/255.255.0.0; Disable DHCP
  • Host-Only Network #2: IP 10.0.0.254; Network 10.0.0.0/255.0.0.0; Disable DHCP

(Hint: there is a bash script @ https://raw.github.com/uksysadmin/OpenStackInstaller/folsom/virtualbox/vbox-create-networks.sh to create these for you).
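
If you prefer to create these networks by hand instead of using the script, VBoxManage can do it from the command line. A minimal sketch, assuming these are the first host-only interfaces on your machine (so VirtualBox names them vboxnet0 and vboxnet1):

# Create the first host-only interface (becomes vboxnet0)
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 172.16.0.254 --netmask 255.255.0.0
# Create the second host-only interface (becomes vboxnet1)
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet1 --ip 10.0.0.254 --netmask 255.0.0.0
# Remove any DHCP servers VirtualBox attached (harmless error if none exist)
VBoxManage dhcpserver remove --ifname vboxnet0
VBoxManage dhcpserver remove --ifname vboxnet1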

How To Do It

To create a VirtualBox VM running Ubuntu 12.04 with OpenStack Folsom from Ubuntu’s Cloud Archive, carry out the following steps:

1. Clone the GitHub OpenStackInstaller scripts

git clone https://github.com/uksysadmin/OpenStackInstaller.git

2. Check out the ‘folsom’ branch of the scripts

cd OpenStackInstaller
git checkout folsom

3. Run ‘vagrant’ to launch your OpenStack instance, which will come up with the IP 172.16.0.201

cd virtualbox
vagrant up

4. After a short while your instance will be ready. Note that on the first run, Vagrant will download a 384MB Precise64 “box”; subsequent launches skip this step.
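
To verify that the OpenStack services came up, you can SSH into the box via Vagrant and list the Nova services – a quick sanity check, assuming the bootstrap completed (each service should show a smiley :-) state). From the same virtualbox directory:

vagrant ssh -c "sudo nova-manage service list"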

Launch a web browser at http://172.16.0.201/horizon and log in with:

Username: admin
Password: openstack

(Note: to change the IP it is assigned, modify virtualbox/vagrant-openstack-bootstrap.sh – warning, it’s a bit of a sed hack!)

Ubuntu 12.04 Alpha + Beta Kernel Panic Fix

If you are getting a Kernel Panic accompanied by text such as

init: log.c:786: Assertion failed in log_clear_unflushed:
 log->remote_closed

Then see this thread: https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/935585 regarding a bug introduced in a recent upstart package.

The fix is simple:

  1. apt-get install python-software-properties
  2. add-apt-repository ppa:jamesodhunt/bug-935585
  3. apt-get update
  4. apt-get upgrade
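
Once the upgrade finishes, you can confirm that the fixed upstart package from the PPA is the one installed – a quick check before rebooting:

apt-cache policy upstart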

After you reboot all should be well again, thanks to James Hunt.

Upgrade to Ubuntu 11.10 problem: “Waiting for network configuration” then black screen – solution

Have you just upgraded to Ubuntu 11.10 Oneiric Ocelot and are now getting the “Waiting for network configuration” message followed by “Waiting up to 60 seconds more for network”? This might then be accompanied by a blank black screen.

[update] I’ve updated this post to reflect that the copy step mentioned in the bug post below is surplus, as /run is mounted tmpfs – the refined steps are below. The fix is removing the old /var/run and /var/lock and then pointing those old locations at /run and /run/lock respectively. I suspect this bug only comes about after an upgrade from within your existing session (e.g. apt-get dist-upgrade), where it must have trouble removing these directories because running services still have open files in there.

[update 8th March 2012] Ubuntu 12.04 is just around the corner. I strongly advise you to resist upgrading to 11.10 at this stage, as 12.04 is due for release next month.

The bug is here (https://bugs.launchpad.net/ubuntu/+source/sysvinit/+bug/858122) and the fix is based on this: https://bugs.launchpad.net/ubuntu/+source/dbus/+bug/811441/comments/24 :

  1. Hit Ctrl+Alt+F1 at the blank screen to get to a non-X terminal (tty1)
  2. Log in with your username and password
  3. Change to root with: sudo -i and enter your password
  4. mkdir -p /run /run/lock
  5. rm -rf /var/run /var/lock
  6. ln -s /run /var
  7. ln -s /run/lock /var
  8. reboot

You should have 11.10 back again.
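
If you want to double-check the fix took hold, verify that the old locations are now symlinks:

ls -ld /var/run /var/lock
# expect /var/run -> /run and /var/lock -> /run/lock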

OpenStack Diablo, updates and work in progress!

It has been a while since I blogged, and in that time OpenStack has come on in leaps and bounds, with Diablo being the latest official release. This will change as I work pretty much full-time on testing OpenStack as an end-user (with a day job as an architect) based on Diablo. This will also help with some book projects in the pipeline, about which I’m very humbled and excited. I’ll blog my experiences as I go along – after all, it’s the reason you’ve stumbled upon this corner of the internet in the first place: to learn from my experiences in using OpenStack.
The project I’m working on will be based on Ubuntu running the latest release of OpenStack, Diablo (2011.3). I’ll be investigating Crowbar from Dell to see how remote bare-metal provisioning of OpenStack is coming along – a crucial element for adoption in established enterprises, where rolling out enterprise-class software this way is the norm. I’ll try to squeeze in Juju too. Most important, though, is playing catch-up on the raft of projects flowing through OpenStack, from Keystone for authentication to Quantum (although that is probably more relevant to Essex as it develops), as well as catching up on where Swift, Glance and the Dashboard are.

Protecting SSH against brute force attacks

Running a public AWS instance is an open invitation to unexpected trouble from script kiddies and bots trying to find a vector in to compromise your server.
Sshguard (www.sshguard.net) monitors your logs and updates your iptables firewall accordingly, helping to keep persistent brute-force attackers at bay.

1. Download the latest version from http://www.sshguard.net @ http://freshmeat.net/urls/6ff38f7dc039f95efec2859eefe17d3a

wget -O sshguard-1.5.tar.bz2 http://freshmeat.net/urls/6ff38f7dc039f95efec2859eefe17d3a

2. Unpack

tar jxvf sshguard-1.5.tar.bz2

3. Configure + Make

cd sshguard-1.5
./configure --with-firewall=iptables
make

4. Install (to /usr/local/sbin/sshguard)

sudo make install

5. /etc/init.d/sshguard (chmod 0755)

#!/bin/sh
# this is a concept, elaborate to your taste
case $1 in
start)
    /usr/local/sbin/sshguard -a 4 -b 5:/var/sshguard/blacklist.db \
        -l /var/log/auth.log &
    ;;
stop)
    killall sshguard
    ;;
*)
    echo "Use start or stop"
    exit 1
    ;;
esac

6. /etc/iptables.up.rules

# Firewall
*filter
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:INPUT DROP [0:0]
-N sshguard
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp --dport http -j ACCEPT
-A INPUT -p tcp --dport ftp-data -j ACCEPT
-A INPUT -p tcp --dport ftp -j ACCEPT
-A INPUT -p tcp --dport ssh -j sshguard
-A INPUT -p udp --source-port 53 -d 0/0 -j ACCEPT
-A OUTPUT -j ACCEPT
-A INPUT -j DROP
COMMIT
# Completed

7. Read in the IPtables rules

iptables-restore < /etc/iptables.up.rules

8. Start Sshguard

mkdir /var/sshguard && /etc/init.d/sshguard start

Verification

tail -f /var/log/auth.log
iptables -L -n
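
To have sshguard start automatically at boot, you can register the init script with the usual Debian/Ubuntu mechanism (assuming you saved it as /etc/init.d/sshguard as above):

sudo update-rc.d sshguard defaults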

OpenStack Nova CentOS Instance

I’ve been working on tweaking a CentOS 5.3 image that you can download from http://open.eucalyptus.com/wiki/EucalyptusUserImageCreatorGuide_v1.6, as there seems to be a big bias towards running Ubuntu under OpenStack. That is great for getting OpenStack up and running, but for those of us evangelising in a RHEL-family house, it’s crucial to be able to demonstrate like-for-like offerings against what you currently run to help promote its use.

This guide should get you to a point where you have a usable, useful CentOS image for your environment. When I get around to it I’ll upload my version, with the modifications laid out in this blog post, for use in your environment.

The Guide

  • Start off by downloading a compatible image from Eucalyptus: http://open.eucalyptus.com/wiki/EucalyptusUserImageCreatorGuide_v1.6. I’ll work on the 64-Bit CentOS 5.3 image for this guide.
  • mkdir cloud/images and unpack the tarball here
    • mkdir -p cloud/images
    • cd cloud/images
    • tar zxvf <path_to_tarball>/euca-centos-5.3-x86_64.tar.gz
    • cd euca-centos-5.3-x86_64
    • At this stage we’d normally upload the image to OpenStack, but some modifications are needed first, such as increasing the size of the image to accommodate some new packages. So we must first mount the image (read-only, because we don’t need to make edits to it yet) as follows
      • mkdir image
      • sudo mount centos.5-3.x86-64.img image -o loop,ro
    • Increase the size of the image as follows and copy the contents
      • dd if=/dev/zero of=newcentos.img bs=1M count=2048
      • mkfs.ext3 newcentos.img
      • mkdir newcentos
      • sudo mount newcentos.img newcentos -o loop,rw
      • sudo cp -pR image/* newcentos/
      • sudo umount image
  • Modify the image as follows
  • IMPORTANT! Ensure you’re chrooted, as described below, into your mounted image and have verified that you’re not modifying your running environment – I accept no responsibility if you can’t read!
    • sudo su -
    • chroot ~/cloud/images/euca-centos-5.3-x86_64/newcentos
    • mount proc -t proc /proc
  • Now to modify the image and install some new packages…
    • yum update
    • yum install redhat-lsb sudo vim-enhanced
    • Remove /etc/udev/rules.d/* to stop the lengthy wait on boot
    • Edit /etc/sysconfig/network and disable ZEROCONF (otherwise your instance will fail to download metadata from the OpenStack nova-api)
      • NOZEROCONF=yes
    • Edit /etc/profile.d/vim.sh
      • if [ -n "$BASH_VERSION" -o -n "$KSH_VERSION" -o -n "$ZSH_VERSION" ]; then
          [ -x /usr/bin/id ] || return
          tmpid=$(/usr/bin/id -u)
          [ "$tmpid" = "" ] && tmpid=0
          # for bash and zsh, only if no alias is already set
          alias vi >/dev/null 2>&1 || alias vi=vim
          alias view >/dev/null 2>&1 || alias view='vim -R'
        fi
    • Ensure /dev/null is writeable by all
      • chmod 777 /dev/null
  • That’s the modifications done, but feel free to add your own to suit your environment. To wrap it up:
    • umount /proc
    • logout
    • logout
    • sudo umount newcentos
    • To keep things neat, rename it appropriately
      • mv newcentos.img centos-5.5-x86_64.img
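
Before uploading it, you can give the finished image a quick read-only sanity check – a sketch, assuming you’re back in ~/cloud/images/euca-centos-5.3-x86_64:

mkdir -p check
sudo mount centos-5.5-x86_64.img check -o loop,ro
grep NOZEROCONF check/etc/sysconfig/network   # expect NOZEROCONF=yes
ls check/etc/udev/rules.d                     # expect no rules files
sudo umount check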

Upload CentOS image to OpenStack

  • Now that you have a CentOS image suitable for OpenStack, you need to upload it to OpenStack.
  • The tarball ships with two sets of kernels and ramdisks. I’ll assume you’ll be using KVM, but change the instructions to suit a Xen hypervisor.
    • Upload the kernel and make a note of the aki
      • euca-bundle-image -i kvm-kernel/vmlinuz-2.6.28-11-generic --kernel true
      • euca-upload-bundle -b mybucket -m /tmp/vmlinuz-2.6.28-11-generic.manifest.xml
      • euca-register mybucket/vmlinuz-2.6.28-11-generic.manifest.xml
    • Upload the ramdisk and make a note of the ari
      • euca-bundle-image -i kvm-kernel/initrd.img-2.6.28-11-generic --ramdisk true
      • euca-upload-bundle -b mybucket -m /tmp/initrd.img-2.6.28-11-generic.manifest.xml
      • euca-register mybucket/initrd.img-2.6.28-11-generic.manifest.xml
    • Upload the machine image you modified above, specifying the aki and ari values from the steps above so that the correct kernel and ramdisk load with it
      • euca-bundle-image -i centos-5.5-x86_64.img --kernel aki-XXXXXXXX --ramdisk ari-XXXXXXXX
      • euca-upload-bundle -b mybucket -m /tmp/centos-5.5-x86_64.img.manifest.xml
      • euca-register mybucket/centos-5.5-x86_64.img.manifest.xml
  • That’s it done (you may have to wait a short while whilst it uploads to the nova-objectstore server) – you should now see your new AMI available:
    • euca-describe-images
      • IMAGE    ami-reey5wk5    mybucket/centos.5-5.x86-64.img.manifest.xml   
        myproject    available    private        x86_64    machine    ami-f4ks8moj   
        ami-jqxvgtmd
  • You can now use this to launch an instance
    • euca-run-instances ami-reey5wk5 -k openstack -t m1.tiny
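
It can take a minute or two for the instance to move from pending to running; you can keep an eye on it with something like:

watch -n 5 euca-describe-instances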

Running OpenStack under VirtualBox – A Complete Guide (Part 1)

UPDATE: I’ve been working on a new version of the script, which can be used to create an OpenStack host running Ubuntu 12.04 Precise Pangolin and the Essex release.
I’ve now got a video to accompany this, which is recommended over this guide.
Head over to http://uksysadmin.wordpress.com/2012/03/28/screencast-video-of-an-install-of-openstack-essex-on-ubuntu-12-04-under-virtualbox/

Running OpenStack under VirtualBox allows you to have a complete multi-node cluster that you can access and manage from the computer running VirtualBox as if you’re accessing a region on Amazon.
This is a complete guide to setting up a VirtualBox VM running Ubuntu, with OpenStack running on this guest and an OpenStack instance running, accessible from your host.

Part 1 – OpenStack on a single VirtualBox VM with OpenStack instances accessible from host

The environment used for this guide

  • A 64-bit Intel Core i7 laptop, 8GB RAM
  • Ubuntu 10.10 Maverick AMD64 (The “host”)
  • VirtualBox 4
  • Access from host running VirtualBox only (so useful for development/proof of concept)

The proposed environment

  • OpenStack “Public” Network: 172.241.0.0/25
  • OpenStack “Private” Network: 10.0.0.0/8
  • Host has access to its own LAN, separate to this on 192.168.0.0/16 and not used for this guide

The Guide

  • Download and install VirtualBox from http://www.virtualbox.org/
  • Under Preferences… Network…
  • Add/Edit Host-only network so you have vboxnet0. This will serve as the “Public interface” to your cloud environment
    • Configure this as follows
      • Adapter
        • IPv4 Address: 172.241.0.100
        • IPv4 Network Mask: 255.255.255.128
      • DHCP Server
        • Disable Server
    • On your Linux host running VirtualBox, you will see an interface created called ‘vboxnet0’ with the address specified as 172.241.0.100. This will be the IP address your OpenStack instances will see when you access them.
    • Create a new Guest
      • Name: Cloud1
        • OS Type: Linux
        • Version: Ubuntu (64-Bit)
      • 1024MB RAM
      • Boot Hard Disk
        • Dynamically Expanding Storage
        • 8.0GB
      • After this initial set up, continue to configure the guest
        • Storage:
          • Edit the CD-ROM so that it boots Ubuntu 10.10 Live or Server ISO
          • Ensure that the SATA controller has Host I/O Cache Enabled (recommended by VirtualBox for EXT4 filesystems)
        • Network:
          • Adapter 1
            • Host-only Adapter
            • Name: vboxnet0
          • Adapter 2
            • NAT
            • This will provide the default route to allow the VM to access the internet to get the updates, OpenStack scripts and software
        • Audio:
          • Disable (just not required)
    • Power the guest on and install Ubuntu
    • For this guide I’ve statically assigned the guest with the IP: 172.241.0.101 for eth0 and netmask 255.255.255.128.  This will be the IP address that you will use to access the guest from your host box, as well as the IP address you can use to SSH/SCP files around.
    • Once installed, run an update (sudo apt-get update && sudo apt-get upgrade) then reboot
    • If you’re running a desktop, install the Guest Additions (Device… Install Guest Additions, then click on Places and select the VBoxGuestAdditions CD and follow the Autorun script), then Reboot
    • Install openssh-server
      • sudo apt-get -y install openssh-server
    • Grab this script to install OpenStack
      • This will set up a repository (ppa:nova/trunk) and install MySQL server where the information regarding your cloud will be stored
      • The options specified on the command line match the environment described above
      • wget https://github.com/uksysadmin/OpenStackInstaller/raw/master/OSinstall.sh
    • Run the script (as root/through sudo)
      • sudo bash ./OSinstall.sh -A $(whoami)
    • Run the post-configuration steps
      • ADMIN=$(whoami)
        sudo nova-manage user admin ${ADMIN}
        sudo nova-manage role add ${ADMIN} cloudadmin
        sudo nova-manage project create myproject ${ADMIN}
        sudo nova-manage project zipfile myproject ${ADMIN}
        mkdir -p cloud/creds
        cd cloud/creds
        unzip ~/nova.zip
        . novarc
        cd
        euca-add-keypair openstack > ~/cloud/creds/openstack.pem
        chmod 0600 cloud/creds/*

    Congratulations, you now have a working cloud environment waiting for its first image and instances to run, with the user you specified on the command line (yourusername), the credentials to access the cloud, and a project called ‘myproject’ to host the instances.
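
    As a quick sanity check that the cloud is alive, you can ask it to describe itself (assuming the novarc credentials are sourced as above); each nova service should be listed against the availability zone:

    euca-describe-availability-zones verbose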

    • You now need to ensure that you can access any instances that you launch via SSH as a minimum (as well as being able to ping them) – but I add in access to a web service and port 8080 too for this environment in my “default” security group.
      • euca-authorize default -P tcp -p 22 -s 0.0.0.0/0
        euca-authorize default -P tcp -p 80 -s 0.0.0.0/0
        euca-authorize default -P tcp -p 8080 -s 0.0.0.0/0
        euca-authorize default -P icmp -t -1:-1
    • Next you need to load a UEC image into your cloud so that instances can be launched from it
      • image="ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz"
        wget http://smoser.brickies.net/ubuntu/ttylinux-uec/$image
        uec-publish-tarball $image mybucket
    • Once the uec-publish-tarball command has run, it will present you with a line containing emi=, eri= and eki= values specifying the Image, Ramdisk and Kernel, as shown below. Highlight this line, copy it and paste it back into your shell
      Thu Feb 24 09:55:19 GMT 2011: ====== extracting image ======
      kernel : ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz
      ramdisk: ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd
      image  : ttylinux-uec-amd64-12.1_2.6.35-22_1.img
      Thu Feb 24 09:55:19 GMT 2011: ====== bundle/upload kernel ======
      Thu Feb 24 09:55:21 GMT 2011: ====== bundle/upload ramdisk ======
      Thu Feb 24 09:55:22 GMT 2011: ====== bundle/upload image ======
      Thu Feb 24 09:55:25 GMT 2011: ====== done ======
      emi="ami-fnlidlmq"; eri="ami-dqliu15n"; eki="ami-66rz6vbs";
    • To launch an instance
      • euca-run-instances $emi -k openstack -t m1.tiny
    • To check its running
      • euca-describe-instances
      • You will see the Private IP that has been assigned to this instance, for example 10.0.0.3
    • To access this via SSH
      • ssh -i cloud/creds/openstack.pem root@10.0.0.3
      • (To log out of ttylinux, type: logout)
    • Congratulations, you now have an OpenStack instance running under OpenStack Nova, running under a VirtualBox VM!
    • To access this outside of the VirtualBox environment (i.e. back on your real computer, the host) you need to assign it a “public” IP
      • Allocate an address, then associate it to the instance id (get this from euca-describe-instances; it will be of the format i-00000000)
        • euca-allocate-address
        • This will return an IP address that has been assigned to your project, which you can now associate with your instance, e.g. 172.241.0.3
        • euca-associate-address -i i-00000001 172.241.0.3
      • Now back on your host (so outside of VirtualBox), grab a copy of cloud/creds directory
        • scp -r user@172.241.0.101:cloud/creds .
      • You can now access that host using the Public address you associated to it above
        • ssh -i cloud/creds/openstack.pem root@172.241.0.3

    CONGRATULATIONS! You have now created a complete cloud environment under VirtualBox that you can manage from your computer (host) as if you’re managing services on Amazon. To demonstrate this, you can terminate the instance you created from your computer (host):

    • sudo apt-get install euca2ools
      . cloud/creds/novarc
      euca-describe-instances
      euca-terminate-instances i-00000001
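
    If you want to tidy up completely, you can also hand the public address back to the pool (assuming the same credentials are still sourced):

    euca-disassociate-address 172.241.0.3
    euca-release-address 172.241.0.3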

    Credits

    This guide is based on Thierry Carrez’s blog @ http://fnords.wordpress.com/2010/12/02/bleeding-edge-openstack-nova-on-maverick/

  • Next: Part 2 – OpenStack on multiple VirtualBox VMs with OpenStack instances accessible from the host

Installing OpenSUSE 11.3 under VirtualBox 3.2

You will need

  1. VirtualBox
  2. OpenSUSE Live CD

Instructions

Guest Additions

  1. Update the packages
    zypper up
  2. Reboot
  3. Install Kernel Development Packages
    sudo zypper in -t pattern devel_kernel
  4. Mount the VirtualBox Guest Additions CD: [VirtualBox Menu] Devices… Install Guest Additions
  5. Run the installer
    sudo /media/VBOXADDITIONS_3.2.10_66523/VBoxLinuxAdditions-x86.run
  6. Reboot
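
To confirm the Guest Additions kernel modules loaded after the reboot, a quick check:

lsmod | grep vbox
# expect vboxguest (and typically vboxsf and vboxvideo) in the list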