CloudCamp Warrington, February 23rd hosted by @appsense #aws #openstack

I’ll be doing a short (well, I can get carried away but the plan is short) talk on what Autotrader.co.uk is doing in terms of cloud computing which includes efforts and plans around OpenStack and AWS.

You will find some great people at this informal event – from people who are actively involved in cloud technology to those who are just starting their cloud journey.

If you’re in the North West of the UK and fancy coming along, the event is being kindly hosted at the Warrington offices of AppSense Ltd. The evening is turning out to be a great forum and the hope is that more events in the area will follow. If you have an interest in the cloud bubble, please come along for a chat over some drinks and food.

Full details here: http://cloudcamp.org/warrington

Mod_rpaf: Extract real-IP from behind reverse proxy/load balancer

There are many reasons why you’d want to front your Apache-based site with a reverse proxy, but one of the sacrifices you make for the gains in speed and security is that, by default, you lose a sensible way to extract the real IP of the client making the request. This is because in many set-ups the reverse proxy or load balancer becomes the client as far as Apache is concerned, so you will only ever see its IP (an internal IP if it is local to your Apache server farm). That becomes useless if you are doing analytics on the Apache logs or making decisions about the request based on the client IP (for example Geo lookups, or ModSecurity).
To circumvent this, you’d want to use mod_rpaf (http://stderr.net/apache/rpaf/). This module is easily integrated into your Apache environment using the following method (Apache 2.x):

sudo apxs -i -c -n mod_rpaf-2.0.so mod_rpaf-2.0.c

And then configuring Apache to pick up this module:

# if DSO load module first:
LoadModule rpaf_module modules/mod_rpaf-2.0.so
RPAFenable On
RPAFsethostname Off
RPAFproxy_ips 10.10.10.1
RPAFheader X-Forwarded-For

This says: if the request arrives from one of the IPs listed on the RPAFproxy_ips line, look in the X-Forwarded-For header and pick up the external IP from it.

Unfortunately in practice this doesn’t work so well for a couple of reasons:

1. When auto-scaling an environment such as Amazon’s, and you’re using something like ELB, you won’t know in advance which internal IP the ELB will talk to Apache on – so you can’t auto-configure mod_rpaf at install/run-time.
2. It makes an assumption about the external IP: it simply takes the last IP seen on the X-Forwarded-For line.

Number 2 caused me issues because it is possible for an internal IP address to be the last entry on the X-Forwarded-For line – and if the line were inspected properly, the external IP would be found somewhere earlier along it. To get around this, and to gain more flexibility in the mod_rpaf config so that the code still exposes the real IP when an internal address is seen last, I created the following patch:

--- mod_rpaf-2.0.c	2011-06-23 13:51:53.000000000 +0100
+++ mod_rpaf-2.0.c.new	2011-06-24 16:08:18.000000000 +0100
@@ -71,6 +71,7 @@
 #include "http_protocol.h"
 #include "http_vhost.h"
 #include "apr_strings.h"
+#include "string.h"
 
 module AP_MODULE_DECLARE_DATA rpaf_module;
@@ -136,10 +137,14 @@
 }
 
 static int is_in_array(const char *remote_ip, apr_array_header_t *proxy_ips) {
-    int i;
+    int i, len;
+    char tmp[16];
     char **list = (char**)proxy_ips->elts;
     for (i = 0; i < proxy_ips->nelts; i++) {
-        if (strcmp(remote_ip, list[i]) == 0)
+        len = strlen(list[i]);
+        strncpy(tmp, remote_ip, len);
+        tmp[len] = '\0';
+        if (strcmp(tmp, list[i]) == 0)
             return 1;
     }
     return 0;
@@ -155,6 +160,7 @@
 static int change_remote_ip(request_rec *r) {
     const char *fwdvalue;
     char *val;
+    int i;
     rpaf_server_cfg *cfg = (rpaf_server_cfg *)ap_get_module_config(r->server->module_config,
                                                                    &rpaf_module);
@@ -183,7 +189,10 @@
         rcr->old_ip = apr_pstrdup(r->connection->pool, r->connection->remote_ip);
         rcr->r = r;
         apr_pool_cleanup_register(r->pool, (void *)rcr, rpaf_cleanup, apr_pool_cleanup_null);
-        r->connection->remote_ip = apr_pstrdup(r->connection->pool, ((char **)arr->elts)[((arr->nelts)-1)]);
+        for (i = arr->nelts - 1; i >= 0; i--) {
+            if (is_in_array(apr_pstrdup(r->connection->pool, ((char **)arr->elts)[i]), cfg->proxy_ips) == 0)
+                r->connection->remote_ip = apr_pstrdup(r->connection->pool, ((char **)arr->elts)[i]);
+        }
         r->connection->remote_addr->sa.sin.sin_addr.s_addr = apr_inet_addr(r->connection->remote_ip);
         if (cfg->sethostname) {
             const char *hostvalue;

Patch and compile/install as follows:

patch -p1 < patch_mod_rpaf
sudo apxs -i -c -n mod_rpaf-2.0.so mod_rpaf-2.0.c

This allows you to use a modified config file so that you can run Apache behind, say, an ELB and have it extract the last seen external IP (i.e. one not starting 10., 172. or 192.168. – note that “172.” is broader than the private 172.16.0.0/12 range, so tighten this if you expect legitimate public 172.x clients). Edit to suit your particular environment:

# if DSO load module first:
LoadModule rpaf_module modules/mod_rpaf-2.0.so
RPAFenable On
RPAFsethostname Off
RPAFproxy_ips 10. 172. 192.168.
RPAFheader X-Forwarded-For

When it sees a line like

X-Forwarded-For: 192.168.100.227, 209.88.21.195, 10.58.59.219, 192.168.123.123

It will pick out 209.88.21.195 as the real IP.
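You can sanity-check this selection logic outside Apache before blaming the module. The following shell/awk sketch is my own, not part of mod_rpaf – it mirrors the patched right-to-left scan, using the same prefixes as the RPAFproxy_ips line above:

```shell
# Return the rightmost IP in an X-Forwarded-For value that does not
# match the internal prefixes 10., 172. or 192.168.
real_ip() {
  echo "$1" | awk -F', *' '{
    for (i = NF; i >= 1; i--)
      if ($i !~ /^(10\.|172\.|192\.168\.)/) { print $i; exit }
  }'
}

real_ip "192.168.100.227, 209.88.21.195, 10.58.59.219, 192.168.123.123"
# prints 209.88.21.195
```

Feeding it the example header above prints 209.88.21.195, matching what the patched module selects.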

Protecting SSH against brute force attacks

Running a public AWS instance is always asking for unexpected trouble, with script kiddies and bots probing for a vector in to compromise your server.
Sshguard (www.sshguard.net) monitors your logs and alters your iptables firewall accordingly, to help keep persistent brute-force attackers at bay.

1. Download the latest version (linked from http://www.sshguard.net) via http://freshmeat.net/urls/6ff38f7dc039f95efec2859eefe17d3a

wget -O sshguard-1.5.tar.bz2 \
    http://freshmeat.net/urls/6ff38f7dc039f95efec2859eefe17d3a

2. Unpack

tar jxvf sshguard-1.5.tar.bz2

3. Configure + Make

cd sshguard-1.5
./configure --with-firewall=iptables
make

4. Install (to /usr/local/sbin/sshguard)

sudo make install

5. /etc/init.d/sshguard (chmod 0755)

#!/bin/sh
# this is a concept, elaborate to your taste
case $1 in
start)
/usr/local/sbin/sshguard -a 4 -b 5:/var/sshguard/blacklist.db \
    -l /var/log/auth.log &
;;
stop)
killall sshguard
;;
*)
echo "Use start or stop"
exit 1
;;
esac

6. /etc/iptables.up.rules

# Firewall
*filter
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:INPUT DROP [0:0]
-N sshguard
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp --dport http -j ACCEPT
-A INPUT -p tcp --dport ftp-data -j ACCEPT
-A INPUT -p tcp --dport ftp -j ACCEPT
-A INPUT -p tcp --dport ssh -j sshguard
-A INPUT -p udp --source-port 53 -d 0/0 -j ACCEPT
-A OUTPUT -j ACCEPT
-A INPUT -j DROP
COMMIT
# Completed

7. Read in the IPtables rules

iptables-restore < /etc/iptables.up.rules
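To have these rules survive a reboot on Ubuntu, one common approach is a small if-pre-up.d hook so the firewall is restored before any interface comes up (a sketch – the path and filename are just convention, adjust to taste):

```shell
#!/bin/sh
# Save as /etc/network/if-pre-up.d/iptables and chmod 0755 it:
# restores the firewall rules before any interface is brought up.
/sbin/iptables-restore < /etc/iptables.up.rules
```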

8. Start Sshguard

sudo mkdir -p /var/sshguard && sudo /etc/init.d/sshguard start

Verification

tail -f /var/log/auth.log
iptables -L -n
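To get a feel for what sshguard is reacting to, you can also summarise failed password attempts per source IP from the same log. A small sketch of mine, assuming the stock Ubuntu sshd log format (“Failed password for ... from <IP> port ...”):

```shell
# Count failed SSH password attempts per source IP in an auth log,
# busiest attackers first. Usage: failed_by_ip /var/log/auth.log
failed_by_ip() {
  awk '/sshd.*Failed password/ {
    for (i = 1; i <= NF; i++)
      if ($i == "from") count[$(i+1)]++
  }
  END { for (ip in count) print count[ip], ip }' "$1" | sort -rn
}
```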

Running OpenStack under VirtualBox – A Complete Guide (Part 2)

In the previous post, we very simply got OpenStack running under VirtualBox. This next part takes things further by adding multiple compute nodes to spread the load of your instances. It also paves the way for Part 3, where we will start to look at Swift, OpenStack’s S3-like object storage service.

Part 2 – OpenStack on a multiple VirtualBox VMs with OpenStack instances accessible from host

We will be using the same set up as for Part 1.

The proposed environment

  • OpenStack “Public” Network: 172.241.0.0/25
  • OpenStack “Private” Network: 10.0.0.0/8
  • Host has access to its own LAN, separate to this on 192.168.0.0/16 and not used for this guide
  • One VirtualBox VM running the software needed for the controller
  • One VirtualBox VM running the software needed for a compute node

This guide will assume you have followed Part 1. We are simply adding a compute node now, so the virtual machine you created in Part 1 becomes the Cloud Controller (CC). OpenStack has been designed so that any part of the environment can run on a separate server. For this guide we will have the following:

  • Cloud Controller running MySQL, RabbitMQ, nova-network, nova-scheduler, nova-objectstore, nova-api, nova-compute
  • Compute node running: nova-compute

The Guide

  • Add a new VirtualBox Guest
    • Name: cloud2
      • OS Type: Linux
      • Version: Ubuntu (64-Bit)
    • 2048Mb Ram
    • Boot Hard Disk
      • Dynamically Expanding Storage
      • 8.0Gb
    • After this initial set up, continue to configure the guest
      • Storage:
        • Edit the CD-ROM so that it boots Ubuntu 10.10 Live or Server ISO
        • Ensure that the SATA controller has Host I/O Cache Enabled (recommended by VirtualBox for EXT4 filesystems)
      • Network:
        • Adapter 1
          • Host-only Adapter
          • Name: vboxnet0
        • Adapter 2
          • NAT
          • This will provide the default route to allow the VM to access the internet to get the updates, OpenStack scripts and software
      • Audio:
        • Disable (just not required)
    • Boot the guest and install Ubuntu as per normal
  • Assign static IPs to the cloud controller
    • Ensure that the Cloud Controller you created in Part 1 has static addresses for eth0 and eth1.
      • For the sake of this guide, I’m assuming you have assigned the following
        • eth0: 172.241.0.101/255.255.255.128.  This address is your Cloud Controller address (CC_ADDR) and will be the API interface address you will be communicating on from your host.
        • eth1: stays as dhcp as it is only used for NAT’d access to the real world
    • Your compute nodes don’t need to be set statically, but for the rest of this guide it is assumed the addresses are as follows
      • Cloud2
        • eth0: 172.241.0.102/255.255.255.128
        • eth1: stays as dhcp as it is only used for NAT’d access to the real world
  • Grab this script to install OpenStack
    • This will set up a repository (ppa:nova/trunk) and install MySQL server where the information regarding your cloud will be stored
    • The options specified on the command line match the environment described above
    • wget --no-check-certificate \
      https://github.com/uksysadmin/OpenStackInstaller/raw/master/OSinstall.sh
  • Run the script (as root/through sudo) specifying that you want a compute node and that you’ve specified the IP address of the Cloud Controller
    • sudo bash ./OSinstall.sh -A $(whoami) -T compute -C 172.241.0.101
  • No further configuration is required.
  • Once completed, ensure that the Cloud Controller knows about this new compute node
    • On the Cloud Controller run the following
      • mysql -uroot -pnova nova -e 'select * from services;'
      • sudo nova-manage service list
      • You should see your new compute node listed under hosts
      • If you don’t have a DNS server running that resolves these hosts add your new compute node to /etc/hosts
        • 172.241.0.102 cloud2
  • On the new compute node run the following
    • sudo nova-manage db sync
As you copied your credentials from the cloud controller created in Part 1, you should just be able to continue to use these from your host – but this time you can spin up more guests.
    • If you changed the eth0 address of your cloud controller, ensure your cloud/creds/novarc environment file has the correct IP.
  • Repeat the steps above to create further compute nodes in your environment, scaling seamlessly
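With more than a couple of nodes the repetition lends itself to a loop. A sketch (the node names cloud3–cloud5 are hypothetical, and it is written as a dry run that only prints the command each node would run over SSH – remove the echo once you are happy with it):

```shell
CC_ADDR=172.241.0.101   # Cloud Controller address from Part 1

# Hypothetical extra compute nodes - adjust names to your environment.
for node in cloud3 cloud4 cloud5; do
  # Dry run: print the install command rather than executing it.
  echo "ssh $node 'sudo bash ./OSinstall.sh -A \$(whoami) -T compute -C $CC_ADDR'"
done
```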

Running OpenStack under VirtualBox – A Complete Guide (Part 1)

UPDATE: I’ve been working on a new version of the script which can be used to create an OpenStack host running on Ubuntu 12.04 Precise Pangolin and the Essex release.
I’ve now got a video to accompany this, which is recommended over this guide.
Head over to https://uksysadmin.wordpress.com/2012/03/28/screencast-video-of-an-install-of-openstack-essex-on-ubuntu-12-04-under-virtualbox/

Running OpenStack under VirtualBox allows you to have a complete multi-node cluster that you can access and manage from the computer running VirtualBox as if you’re accessing a region on Amazon.
This is a complete guide to setting up a VirtualBox VM running Ubuntu, with OpenStack running on this guest and an OpenStack instance running, accessible from your host.

Part 1 – OpenStack on a single VirtualBox VM with OpenStack instances accessible from host

The environment used for this guide

  • A 64-Bit Intel Core i7 Laptop, 8Gb Ram.
  • Ubuntu 10.10 Maverick AMD64 (The “host”)
  • VirtualBox 4
  • Access from host running VirtualBox only (so useful for development/proof of concept)

The proposed environment

  • OpenStack “Public” Network: 172.241.0.0/25
  • OpenStack “Private” Network: 10.0.0.0/8
  • Host has access to its own LAN, separate to this on 192.168.0.0/16 and not used for this guide

The Guide

  • Download and install VirtualBox from http://www.virtualbox.org/
  • Under Preferences… Network…
  • Add/Edit Host-only network so you have vboxnet0. This will serve as the “Public interface” to your cloud environment
    • Configure this as follows
      • Adapter
        • IPv4 Address: 172.241.0.100
        • IPv4 Network Mask: 255.255.255.128
      • DHCP Server
        • Disable Server
    • On your Linux host running VirtualBox, you will see an interface created called ‘vboxnet0’ with the address specified as 172.241.0.100. This will be the IP address your OpenStack instances will see when you access them.
    • Create a new Guest
      • Name: Cloud1
        • OS Type: Linux
        • Version: Ubuntu (64-Bit)
      • 1024Mb Ram
      • Boot Hard Disk
        • Dynamically Expanding Storage
        • 8.0Gb
      • After this initial set up, continue to configure the guest
        • Storage:
          • Edit the CD-ROM so that it boots Ubuntu 10.10 Live or Server ISO
          • Ensure that the SATA controller has Host I/O Cache Enabled (recommended by VirtualBox for EXT4 filesystems)
        • Network:
          • Adapter 1
            • Host-only Adapter
            • Name: vboxnet0
          • Adapter 2
            • NAT
            • This will provide the default route to allow the VM to access the internet to get the updates, OpenStack scripts and software
        • Audio:
          • Disable (just not required)
    • Power the guest on and install Ubuntu
    • For this guide I’ve statically assigned the guest with the IP: 172.241.0.101 for eth0 and netmask 255.255.255.128.  This will be the IP address that you will use to access the guest from your host box, as well as the IP address you can use to SSH/SCP files around.
    • Once installed, run an update (sudo apt-get update && sudo apt-get upgrade) then reboot
    • If you’re running a desktop, install the Guest Additions (Device… Install Guest Additions, then click on Places and select the VBoxGuestAdditions CD and follow the Autorun script), then Reboot
    • Install openssh-server
      • sudo apt-get -y install openssh-server
    • Grab this script to install OpenStack
      • This will set up a repository (ppa:nova/trunk) and install MySQL server where the information regarding your cloud will be stored
      • The options specified on the command line match the environment described above
      • wget https://github.com/uksysadmin/OpenStackInstaller/raw/master/OSinstall.sh
    • Run the script (as root/through sudo)
      • sudo bash ./OSinstall.sh -A $(whoami)
    • Run the post-configuration steps
      • ADMIN=$(whoami)
        sudo nova-manage user admin ${ADMIN}
        sudo nova-manage role add ${ADMIN} cloudadmin
        sudo nova-manage project create myproject ${ADMIN}
        sudo nova-manage project zipfile myproject ${ADMIN}
        mkdir -p cloud/creds
        cd cloud/creds
        unzip ~/nova.zip
        . novarc
        cd
        euca-add-keypair openstack > ~/cloud/creds/openstack.pem
        chmod 0600 cloud/creds/*

    Congratulations, you now have a working Cloud environment waiting for its first image and instances to run, with a user you specified on the command line (yourusername), the credentials to access the cloud and a project called ‘myproject’ to host the instances.

    • You now need to ensure that you can, as a minimum, access any instances you launch via SSH (as well as being able to ping them) – but I also add in access to a web service and port 8080 for this environment as my “default” security group.
      • euca-authorize default -P tcp -p 22 -s 0.0.0.0/0
        euca-authorize default -P tcp -p 80 -s 0.0.0.0/0
        euca-authorize default -P tcp -p 8080 -s 0.0.0.0/0
        euca-authorize default -P icmp -t -1:-1
    • Next you need to load a UEC image into your cloud so that instances can be launched from it
      • image="ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz"
        wget http://smoser.brickies.net/ubuntu/ttylinux-uec/$image
        uec-publish-tarball $image mybucket
    • Once the uec-publish-tarball command has run, it will present you with a line containing emi=, eri= and eki= entries specifying the Image, Ramdisk and Kernel, as shown below. Highlight this line, copy it and paste it back into your shell
      Thu Feb 24 09:55:19 GMT 2011: ====== extracting image ======
      kernel : ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz
      ramdisk: ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd
      image  : ttylinux-uec-amd64-12.1_2.6.35-22_1.img
      Thu Feb 24 09:55:19 GMT 2011: ====== bundle/upload kernel ======
      Thu Feb 24 09:55:21 GMT 2011: ====== bundle/upload ramdisk ======
      Thu Feb 24 09:55:22 GMT 2011: ====== bundle/upload image ======
      Thu Feb 24 09:55:25 GMT 2011: ====== done ======
      emi="ami-fnlidlmq"; eri="ami-dqliu15n"; eki="ami-66rz6vbs";
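If you’d rather not copy the line by hand, you can capture the command’s output and eval the assignments out of it (a sketch – publish.log is just a hypothetical file name, and the IDs are the example values above):

```shell
# Capture the publish output, then pick up the emi=/eri=/eki= line
# and load those variables into the current shell.
uec-publish-tarball $image mybucket | tee publish.log
eval "$(grep -o 'emi=.*' publish.log)"
echo $emi $eri $eki
```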
    • To launch an instance
      • euca-run-instances $emi -k openstack -t m1.tiny
    • To check its running
      • euca-describe-instances
      • You will see the Private IP that has been assigned to this instance, for example 10.0.0.3
    • To access this via SSH
      • ssh -i cloud/creds/openstack.pem root@10.0.0.3
      • (To log out of ttylinux, type: logout)
    • Congratulations, you now have an OpenStack instance running under OpenStack Nova, running under a VirtualBox VM!
    • To access this outside of the VirtualBox environment (i.e. back on your real computer, the host) you need to assign it a “public” IP
      • Associate this to the instance id (get from euca-describe-instances and will be of the format i-00000000)
        • euca-allocate-address
        • This will return an IP address that has been assigned to your project so that you can now associate to your instance, e.g. 172.241.0.3
        • euca-associate-address -i i-00000001 172.241.0.3
      • Now back on your host (so outside of VirtualBox), grab a copy of cloud/creds directory
        • scp -r user@172.241.0.101:cloud/creds .
      • You can now access that host using the Public address you associated to it above
        • ssh -i cloud/creds/openstack.pem root@172.241.0.3

    CONGRATULATIONS! You have now created a complete cloud environment under VirtualBox that you can manage from your computer (host) as if you’re managing services on Amazon. To demonstrate this you can terminate that instance you created from your computer (host)

    • sudo apt-get install euca2ools
      . cloud/creds/novarc
      euca-describe-instances
      euca-terminate-instances i-00000001

    Credits

    This guide is based on Thierry Carrez’ blog @ http://fnords.wordpress.com/2010/12/02/bleeding-edge-openstack-nova-on-maverick/

  • Next: Part 2 – OpenStack on a multiple VirtualBox VMs with OpenStack instances accessible from host

Amazon EC2 – Ubuntu Quickstart Guide

You will need

  1. A web browser
  2. An Amazon AWS Account
  3. Download PuTTY

Instructions

  1. Create a new key pair in AWS
    https://console.aws.amazon.com/ec2/home#c=EC2&s=KeyPairs

    It will automatically download the key for you – go put it somewhere safe (c:\amazon\keys\your-key.pem)

  2. Load up puttygen.exe
  3. Conversions… Import Key
  4. Import c:\amazon\keys\your-key.pem
  5. Save Public Key:
    c:\amazon\keys\PublicKey.ppk
  6. Save Private Key
    c:\amazon\keys\PrivateKey.ppk

    Ideally set a passphrase, though it’s not required – this means that when you connect using this key, PuTTY will ask for the passphrase

  7. Launch an instance from AWS
    https://console.aws.amazon.com/ec2/home#s=Home
  8. Choose Community AMI

    Search for  ami-480df921 (It may take a while – be patient)
    This is Canonical’s 32-Bit Ubuntu 10.04

  9. Click Select then choose a t1.micro (or relevant size) instance
  10. Keep the rest of the defaults but when it asks for keypair – use the one you created in step 1 from the drop-down
  11. Go to the end and it will launch…
  12. In your Instances
    https://console.aws.amazon.com/ec2/home#s=Instances

    Select your instance then Instance Actions then Connect.
    Copy the hostname

  13. In PuTTY paste the hostname into the Hostname or IP box
  14. Under SSH… Auth browse to c:\amazon\keys\PrivateKey.ppk
  15. Then back under Session click Open
  16. When prompted log in as “ubuntu”