Upgrading Ubuntu 12.04 Precise to 12.10 Quantal causes a black screen

Disappointingly, I’ve just upgraded my 12.04 desktop to the Ubuntu 12.10 Quantal release and the upgrade was not smooth. X failed to run. Given I use ATI’s proprietary driver, I assumed this was the issue. Unfortunately that didn’t seem to be the case, as running “Failsafe X” also resulted in a black screen.

Recalling the pain of upgrading from 11.04 to 11.10 (https://uksysadmin.wordpress.com/2011/10/14/upgrade-to-ubuntu-11-10-problem-waiting-for-network-configuration-then-black-screen-solution/) and seeing similar symptoms (although those issues related to networking), I ran through the same steps and had X back again. The steps were:

  1. Boot into a safe session by selecting Advanced Ubuntu Options from the Grub menu, then choosing recovery mode
  2. Drop to a root shell
  3. rm -rf /var/run /var/lock
  4. ln -s /run /var
  5. ln -s /run/lock /var
  6. reboot
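
If you want to confirm the symlinks took effect before rebooting (a quick sanity check of my own, not part of the original fix), run this from the root shell; both entries should point into /run:

ls -ld /var/run /var/lock
# expect something like:
# lrwxrwxrwx 1 root root 4 ... /var/run -> /run
# lrwxrwxrwx 1 root root 9 ... /var/lock -> /run/lock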

That got X and lightdm working again. When I went to log in, however, Unity didn’t seem to work (I still need to troubleshoot this and will update here). My current workaround was to install Gnome:

  1. Get to a console (try Ctrl+Alt+F1; if that doesn’t work, reboot into a root shell again)
  2. (If you don’t have networking, as the root user run the following: dhclient eth0)
  3. apt-get update
  4. apt-get install gnome
  5. reboot or restart lightdm (service lightdm restart)
  6. Under your username, click the Ubuntu symbol above the > symbol to choose your session, and select Gnome
  7. Log in

To get Unity back:

  1. Fire up a console
  2. sudo apt-get remove --purge unity
  3. sudo apt-get install unity ubuntu-desktop

(thanks @EwanToo!)

The option to install proprietary hardware drivers is gone now, so I’ll see how my ATI FireGL card performs under the GPL driver…
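
If you’re curious which driver X ends up using once fglrx is out of the picture (a quick check of my own, nothing authoritative), the loaded kernel module and the Xorg log will tell you:

lsmod | grep radeon
grep -i driver /var/log/Xorg.0.log | head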


Screencast / Video of an Install of OpenStack Essex on Ubuntu 12.04 under VirtualBox

A 12-minute screencast showing an installation of OpenStack ‘Essex’ on Ubuntu 12.04 running under VirtualBox.

OpenStack Essex Installation Screencast

Note that this screencast has no sound.

  1. Configure VirtualBox with the following (a VBoxManage equivalent is sketched after this list):
    Network Interfaces:
    eth0 (nat)
    eth1 host-only: 172.16.0.0/16
    eth2 host-only: 10.0.0.0/8
    Memory: 1536Mb
    Hard Disk: 20Gb
    System Processor (optional but recommended): Increase CPU from 1
  2. Install Ubuntu 12.04, specifying eth0 as your default interface
  3. Configure networking:
    eth1 is your public network set to be 172.16.0.0/16
    eth2 is your private VLAN
  4. Run an update on the machine, and reboot
  5. Install Git which allows you to pull down a script to perform the installation of OpenStack
  6. Grab the script using the following:
    git clone https://github.com/uksysadmin/OpenStackInstaller.git
  7. Ensure you’re running the ‘essex’ version of the script by running: git checkout essex
  8. Run the script with the following:
    ./OSinstall.sh -F 172.16.1.0/24 -f 10.1.0.0/16 -s 512 -t demo -v qemu
  9. Upload an image using the supplied test script:
    ./upload_ubuntu.sh -a admin -p openstack -t demo -C 172.16.0.1
  10. Log into the Dashboard: http://172.16.0.1/ with ‘demo/openstack’
  11. Create your access keys
  12. Edit the default security group (add in SSH access and ability to ping)
  13. Launch your instance
  14. Log into your new instance!
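
If you’d rather script the VirtualBox configuration from step 1 than click through the GUI, something along these lines should do it. This is only a sketch: the VM name “Essex12.04” is my own choice, and it assumes the two host-only networks (vboxnet0 and vboxnet1) already exist with the addressing above:

VBoxManage createvm --name "Essex12.04" --ostype Ubuntu_64 --register
VBoxManage modifyvm "Essex12.04" --memory 1536 --cpus 2
VBoxManage modifyvm "Essex12.04" --nic1 nat \
  --nic2 hostonly --hostonlyadapter2 vboxnet0 \
  --nic3 hostonly --hostonlyadapter3 vboxnet1
VBoxManage createhd --filename Essex12.04.vdi --size 20480
VBoxManage storagectl "Essex12.04" --name SATA --add sata
VBoxManage storageattach "Essex12.04" --storagectl SATA --port 0 --device 0 \
  --type hdd --medium Essex12.04.vdi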

Ubuntu 12.04 Alpha + Beta Kernel Panic Fix

If you are getting a Kernel Panic accompanied by text such as

init: log.c:786: Assertion failed in log_clear_unflushed:
 log->remote_closed

Then see this thread: https://bugs.launchpad.net/ubuntu/+source/upstart/+bug/935585 regarding a bug introduced in a recent upstart package.

The fix is simple:

  1. apt-get install python-software-properties
  2. add-apt-repository ppa:jamesodhunt/bug-935585
  3. apt-get update
  4. apt-get upgrade

When you reboot, all should be well again, thanks to James Hunt.
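
If you want to confirm the patched package was actually pulled in from the PPA (a quick check of my own), compare the installed and candidate versions:

apt-cache policy upstart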

Updated OpenStackInstaller script for Precise and Essex installs

I’ve updated the OpenStackInstaller script, which now gives you a development (trunk) OpenStack Essex installation on Ubuntu Precise (currently Alpha 2) with the following:

Nova Compute (and associated services)
Keystone
Glance

This set-up allows you to use the nova client tools to launch instances (see the short example after the install steps below).

Install Ubuntu Precise
apt-get update
apt-get dist-upgrade
reboot

(as root)

  1. git clone https://github.com/uksysadmin/OpenStackInstaller.git
  2. cd OpenStackInstaller
  3. git checkout essex
  4. ./OSinstall.sh
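
From there you can drive it with the nova client tools. The credentials and endpoint below are assumptions on my part (use whatever admin password and tenant OSinstall.sh set up on your box):

export OS_USERNAME=admin
export OS_PASSWORD=openstack
export OS_TENANT_NAME=demo
export OS_AUTH_URL=http://127.0.0.1:5000/v2.0/
nova image-list                # list the images Glance knows about
nova flavor-list               # pick a flavor, e.g. 1 = m1.tiny
nova boot --image <image-id> --flavor 1 test1   # substitute an id from nova image-list
nova list                      # watch the instance go ACTIVE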

A lot of this wouldn’t be possible without the help of people in #openstack on freenode.org.
For an equally awesome installation from scripts for a Diablo release view these scripts: https://github.com/managedit/openstack-setup

Ubuntu 11.10 Oneiric Ocelot on the desktop – my thoughts (and it’s not good)

So Ubuntu 11.10, aka Oneiric Ocelot, has been out for a short while now, and so far it has been nothing but pain for people upgrading from an earlier release. Not only are the bugs racking up (and some are showstoppers, as my post regarding “Waiting for network configuration” shows), but the move to Unity seems disastrous and is losing people’s allegiance to what was once the admired desktop Linux of choice for many.
Has Ubuntu lost its way here? Ubuntu’s corporate backer, Canonical, is concentrating its efforts on the Ubuntu Cloud Infrastructure project, and Ubuntu 11.10 on a server is great, even bringing with it an easier way to get OpenStack installed.
For me, Unity is a mistake. It made sense on my netbook, and maybe it makes sense on a touchscreen, but it doesn’t make sense on my desktop. Integration with even the most basic apps is causing problems (Gvim anyone? Empathy?), and it’s sluggish (Gwibber status updates take..an..age..to..input..).

Overall I’ve lost my faith in Ubuntu on the desktop, which is a shame, as it was well on the way to making adoption of an open source desktop possible.

Upgrade to Ubuntu 11.10 problem: Waiting for network configuration then black screen solution

Have you just upgraded to Ubuntu 11.10 Oneiric Ocelot and are now getting the “Waiting for network configuration” message followed by “Waiting up to 60 seconds more for network”? This might then be accompanied by a blank, black screen.

[update] I’ve updated this post to reflect that the copy step mentioned in the bug report below is surplus, as /run is mounted as tmpfs; the refined steps are below. The fix is removing the old /var/run and /var/lock, then pointing those old locations at /run and /run/lock respectively. I suspect this bug only comes about after an upgrade run from your existing session (e.g. apt-get dist-upgrade), where it must have trouble removing these directories because existing services still have files open in there.
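
You can confirm /run really is tmpfs before removing anything (a quick check, not part of the fix itself):

mount | grep ' /run '
# should show something like: tmpfs on /run type tmpfs (rw,...)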

[update 8th March 2012] Ubuntu 12.04 is just around the corner. I strongly advise you to resist upgrading to 11.10 at this stage, when 12.04 is due to be released next month.

The bug is here (https://bugs.launchpad.net/ubuntu/+source/sysvinit/+bug/858122) and the fix is based on this: https://bugs.launchpad.net/ubuntu/+source/dbus/+bug/811441/comments/24 :

  1. Hit Ctrl+Alt+F1 at the blank screen to get to a non-X terminal (tty1)
  2. Log in with your username and password
  3. Change to root with: sudo -i and enter your password
  4. mkdir -p /run /run/lock
  5. rm -rf /var/run /var/lock
  6. ln -s /run /var
  7. ln -s /run/lock /var
  8. reboot

You should have 11.10 back again.

Protecting SSH against brute force attacks

Running a public AWS instance is an open invitation to script kiddies and bots trying to find a way in to compromise your server.
Sshguard (www.sshguard.net) monitors your logs and alters your IPtables firewall accordingly, helping to keep persistent brute-force attackers at bay.

1. Download the latest version from http://www.sshguard.net @ http://freshmeat.net/urls/6ff38f7dc039f95efec2859eefe17d3a

wget -O sshguard-1.5.tar.bz2 http://freshmeat.net/urls/6ff38f7dc039f95efec2859eefe17d3a

2. Unpack

tar jxvf sshguard-1.5.tar.bz2

3. Configure + Make

cd sshguard-1.5
./configure --with-firewall=iptables
make

4. Install (to /usr/local/sbin/sshguard)

sudo make install

5. /etc/init.d/sshguard (chmod 0755)

#!/bin/sh
# this is a concept, elaborate to your taste
case "$1" in
start)
/usr/local/sbin/sshguard -a 4 -b 5:/var/sshguard/blacklist.db -l /var/log/auth.log &
;;
stop)
killall sshguard
;;
*)
echo "Use start or stop"
exit 1
;;
esac

6. /etc/iptables.up.rules

# Firewall
*filter
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:INPUT DROP [0:0]
-N sshguard
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p tcp --dport http -j ACCEPT
-A INPUT -p tcp --dport ftp-data -j ACCEPT
-A INPUT -p tcp --dport ftp -j ACCEPT
# pass new ssh connections through the sshguard chain, then accept what it has not blocked
-A INPUT -p tcp --dport ssh -j sshguard
-A INPUT -p tcp --dport ssh -j ACCEPT
-A INPUT -p udp --source-port 53 -d 0/0 -j ACCEPT
-A OUTPUT -j ACCEPT
-A INPUT -j DROP
COMMIT
# Completed

7. Read in the IPtables rules

iptables-restore < /etc/iptables.up.rules

8. Start Sshguard

mkdir /var/sshguard && /etc/init.d/sshguard start

Verification

tail -f /var/log/auth.log
iptables -L -n
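
One addition of my own: the rules read in at step 7 won’t survive a reboot by themselves. A minimal way to reload them at boot on Ubuntu, assuming you’re using ifupdown, is a small if-pre-up.d hook:

cat > /etc/network/if-pre-up.d/iptables <<'EOF'
#!/bin/sh
# restore the saved firewall rules before interfaces come up
iptables-restore < /etc/iptables.up.rules
EOF
chmod 0755 /etc/network/if-pre-up.d/iptables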

Running OpenStack under VirtualBox – A Complete Guide (Part 1)

UPDATE: I’ve been working on a new version of the script which can be used to create an OpenStack host running on Ubuntu 12.04 Precise Pangolin and the Essex release.
I’ve now got a video to accompany this, which is recommended over this guide.
Head over to https://uksysadmin.wordpress.com/2012/03/28/screencast-video-of-an-install-of-openstack-essex-on-ubuntu-12-04-under-virtualbox/

Running OpenStack under VirtualBox allows you to have a complete multi-node cluster that you can access and manage from the computer running VirtualBox as if you’re accessing a region on Amazon.
This is a complete guide to setting up a VirtualBox VM running Ubuntu, with OpenStack running on this guest and an OpenStack instance running, accessible from your host.

Part 1 – OpenStack on a single VirtualBox VM with OpenStack instances accessible from host

The environment used for this guide

  • A 64-Bit Intel Core i7 Laptop, 8Gb Ram.
  • Ubuntu 10.10 Maverick AMD64 (The “host”)
  • VirtualBox 4
  • Access from host running VirtualBox only (so useful for development/proof of concept)

The proposed environment

  • OpenStack “Public” Network: 172.241.0.0/25
  • OpenStack “Private” Network: 10.0.0.0/8
  • The host has access to its own LAN on 192.168.0.0/16, separate from the above and not used for this guide

The Guide

  • Download and install VirtualBox from http://www.virtualbox.org/
  • Under Preferences… Network…
  • Add/Edit Host-only network so you have vboxnet0. This will serve as the “Public interface” to your cloud environment
    • Configure this as follows
      • Adapter
        • IPv4 Address: 172.241.0.100
        • IPv4 Network Mask: 255.255.255.128
      • DHCP Server
        • Disable Server
    • On your Linux host running VirtualBox, you will see an interface created called ‘vboxnet0’ with the address specified as 172.241.0.100. This will be the IP address your OpenStack instances will see when you access them.
    • Create a new Guest
      • Name: Cloud1
        • OS Type: Linux
        • Version: Ubuntu (64-Bit)
      • 1024Mb Ram
      • Boot Hard Disk
        • Dynamically Expanding Storage
        • 8.0Gb
      • After this initial set up, continue to configure the guest
        • Storage:
          • Edit the CD-ROM so that it boots Ubuntu 10.10 Live or Server ISO
          • Ensure that the SATA controller has Host I/O Cache Enabled (recommended by VirtualBox for EXT4 filesystems)
        • Network:
          • Adapter 1
            • Host-only Adapter
            • Name: vboxnet0
          • Adapter 2
            • NAT
            • This will provide the default route to allow the VM to access the internet to get the updates, OpenStack scripts and software
        • Audio:
          • Disable (just not required)
    • Power the guest on and install Ubuntu
    • For this guide I’ve statically assigned the guest the IP 172.241.0.101 for eth0, with netmask 255.255.255.128. This will be the IP address you use to access the guest from your host box, as well as the address you can use to SSH/SCP files around.
    • Once installed, run an update (sudo apt-get update && sudo apt-get upgrade) then reboot
    • If you’re running a desktop, install the Guest Additions (Device… Install Guest Additions, then click on Places and select the VBoxGuestAdditions CD and follow the Autorun script), then Reboot
    • Install openssh-server
      • sudo apt-get -y install openssh-server
    • Grab this script to install OpenStack
      • This will set up a repository (ppa:nova/trunk) and install MySQL server where the information regarding your cloud will be stored
      • The options specified on the command line match the environment described above
      • wget https://github.com/uksysadmin/OpenStackInstaller/raw/master/OSinstall.sh
    • Run the script (as root/through sudo)
      • sudo bash ./OSinstall.sh -A $(whoami)
    • Run the post-configuration steps
      • ADMIN=$(whoami)
        sudo nova-manage user admin ${ADMIN}
        sudo nova-manage role add ${ADMIN} cloudadmin
        sudo nova-manage project create myproject ${ADMIN}
        sudo nova-manage project zipfile myproject ${ADMIN}
        mkdir -p cloud/creds
        cd cloud/creds
        unzip ~/nova.zip
        . novarc
        cd
        euca-add-keypair openstack > ~/cloud/creds/openstack.pem
        chmod 0600 cloud/creds/*

    Congratulations, you now have a working Cloud environment waiting for its first image and instances to run, with a user you specified on the command line (yourusername), the credentials to access the cloud and a project called ‘myproject’ to host the instances.
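
    Before moving on, a quick way to check that the nova services have registered and are running (assuming you’ve sourced novarc as above):

      euca-describe-availability-zones verbose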

    • You now need to ensure that you can access any instances that you launch via SSH as a minimum (as well as being able to ping) – but I add in access to a web service and port 8080 too for this environment as my “default” security group.
      • euca-authorize default -P tcp -p 22 -s 0.0.0.0/0
        euca-authorize default -P tcp -p 80 -s 0.0.0.0/0
        euca-authorize default -P tcp -p 8080 -s 0.0.0.0/0
        euca-authorize default -P icmp -t -1:-1
    • Next you need to load a UEC image into your cloud so that instances can be launched from it
      • image="ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz"
        wget http://smoser.brickies.net/ubuntu/ttylinux-uec/$image
        uec-publish-tarball $image mybucket
    • Once the uec-publish-tarball command has been run, it will present you with a line with emi=, eri= and eki= specifying the Image, Ramdisk and Kernel, as shown below. Highlight this line, copy it and paste it back into your shell
      Thu Feb 24 09:55:19 GMT 2011: ====== extracting image ======
      kernel : ttylinux-uec-amd64-12.1_2.6.35-22_1-vmlinuz
      ramdisk: ttylinux-uec-amd64-12.1_2.6.35-22_1-initrd
      image  : ttylinux-uec-amd64-12.1_2.6.35-22_1.img
      Thu Feb 24 09:55:19 GMT 2011: ====== bundle/upload kernel ======
      Thu Feb 24 09:55:21 GMT 2011: ====== bundle/upload ramdisk ======
      Thu Feb 24 09:55:22 GMT 2011: ====== bundle/upload image ======
      Thu Feb 24 09:55:25 GMT 2011: ====== done ======
      emi="ami-fnlidlmq"; eri="ami-dqliu15n"; eki="ami-66rz6vbs";
    • To launch an instance
      • euca-run-instances $emi -k openstack -t m1.tiny
    • To check it’s running
      • euca-describe-instances
      • You will see the Private IP that has been assigned to this instance, for example 10.0.0.3
    • To access this via SSH
      • ssh -i cloud/creds/openstack.pem root@10.0.0.3
      • (To log out of ttylinux, type: logout)
    • Congratulations, you now have an OpenStack instance running under OpenStack Nova, running under a VirtualBox VM!
    • To access this outside of the VirtualBox environment (i.e. back on your real computer, the host) you need to assign it a “public” IP
      • Associate this with the instance ID (get it from euca-describe-instances; it will be of the format i-00000000)
        • euca-allocate-address
        • This will return an IP address that has been assigned to your project, which you can now associate with your instance, e.g. 172.241.0.3
        • euca-associate-address -i i-00000001 172.241.0.3
      • Now back on your host (so outside of VirtualBox), grab a copy of cloud/creds directory
        • scp -r user@172.241.0.101:cloud/creds .
      • You can now access that host using the Public address you associated to it above
        • ssh -i cloud/creds/openstack.pem root@172.241.0.3

    CONGRATULATIONS! You have now created a complete cloud environment under VirtualBox that you can manage from your computer (host) as if you’re managing services on Amazon. To demonstrate this you can terminate that instance you created from your computer (host)

    • sudo apt-get install euca2ools
      . cloud/creds/novarc
      euca-describe-instances
      euca-terminate-instances i-00000001

    Credits

    This guide is based on Thierry Carrez’ blog @ http://fnords.wordpress.com/2010/12/02/bleeding-edge-openstack-nova-on-maverick/

  • Next: Part 2 – OpenStack on multiple VirtualBox VMs with OpenStack instances accessible from the host

Running OpenStack under VirtualBox

This page has been superseded by Running OpenStack under VirtualBox – A Complete Guide.

(There are still some good things on here though.)

Running OpenStack under VirtualBox is detailed on many pages on the internet. The wiki at OpenStack.org has an intro on getting this going and why you would want to do this. One big reason is that getting this great cloud software running under virtual hardware means you can set up multi-node clusters without a big outlay on hardware, allowing you to develop your cloud environment in the safety and convenience of your own machine.

The steps below are a mixture of instructions from http://fnords.wordpress.com/2010/12/02/bleeding-edge-openstack-nova-on-maverick/ and http://wiki.openstack.org/NovaInstall/.

What you will be setting up

Instructions

  1. Install VirtualBox
  2. Create an Ubuntu 64-bit guest
    • 1vCPU
    • 1024Mb Ram
    • 8Gb Disk
    • Enable Hardware VT-x/AMD-V if available
    • Add in an extra NIC, Host-only Adapter
  3. Once installed, run the updates (and optionally install the Guest Additions if you’re running the desktop version) and reboot.
  4. Assign a static IP to your eth1 interface (Host-only) – you will use this to access the guest from your host.
  5. To install OpenStack follow the instructions http://fnords.wordpress.com/2010/12/02/bleeding-edge-openstack-nova-on-maverick/

Warning for nested virtualization

Since you are running virtualization software under virtualization software (nested virtualization), some words of warning:

Intel VT-x: KVM does NOT currently support nested virtualization

To run instances under OpenStack under VirtualBox, you must specify that software emulation be used

sudo apt-get install qemu

Edit /etc/nova/nova.conf to enable qemu (software virtualization) support

--libvirt_type=qemu

AMD-V: Enable nested KVM virtualization support

To enable nested KVM support on AMD, create a file /etc/modprobe.d/kvm_amd.conf containing the following:

options kvm_amd nested=1

Then restart libvirt and all the OpenStack services:

service libvirt-bin restart; service nova-network restart; service nova-compute restart; service nova-api restart; service nova-objectstore restart; service nova-scheduler restart
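
To check the nested flag actually took effect once the module has been reloaded (or after a reboot), a quick sanity check of my own:

modprobe -r kvm_amd && modprobe kvm_amd    # will fail if guests are still running
cat /sys/module/kvm_amd/parameters/nested  # should print 1 (or Y)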

Known Issues

At the time of writing there is a bug in some of the Python scripts used to launch instances that may cause the following error to be thrown when creating an instance with EC2/S3: KeyError: ‘imageId’. A patch is available:

wget http://launchpadlibrarian.net/64364074/x.patch -O /tmp/KeyError_imageId.patch
cd /usr/lib/pymodules/python2.6
sudo patch -p0 < /tmp/KeyError_imageId.patch
service libvirt-bin restart; service nova-network restart; service nova-compute restart; service nova-api restart; service nova-objectstore restart; service nova-scheduler restart

Amazon EC2 – Ubuntu Quickstart Guide

You will need

  1. A web browser
  2. An Amazon AWS Account
  3. Download PuTTY

Instructions

  1. Create a new key pair in AWS
    https://console.aws.amazon.com/ec2/home#c=EC2&s=KeyPairs

    It will automatically download the key for you – go put it somewhere safe (c:\amazon\keys\your-key.pem)

  2. Load up puttygen.exe
  3. Conversions… Import Key
  4. Import c:\amazon\keys\your-key.pem
  5. Save Public Key:
    c:\amazon\keys\PublicKey.ppk
  6. Save Private Key
    c:\amazon\keys\PrivateKey.ppk

    Ideally set a passphrase, though it’s not required. This means that when you connect and PuTTY uses your key, it will ask for this passphrase.

  7. Launch an instance from AWS
    https://console.aws.amazon.com/ec2/home#s=Home
  8. Choose Community AMI

    Search for ami-480df921 (it may take a while; be patient)
    This is Canonical’s 32-Bit Ubuntu 10.04

  9. Click Select then choose a t1.micro (or relevant size) instance
  10. Keep the rest of the defaults, but when it asks for a keypair, use the one you created in step 1 from the drop-down
  11. Go to the end and it will launch…
  12. In your Instances
    https://console.aws.amazon.com/ec2/home#s=Instances

    Select your instance then Instance Actions then Connect.
    Copy the hostname

  13. In PuTTY paste the hostname into the Hostname or IP box
  14. Under SSH… Auth browse to c:\amazon\keys\PrivateKey.ppk
  15. Then back under Session, click Open
  16. When prompted, log in as “ubuntu”
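
If you’re connecting from a Linux or Mac host rather than Windows, you can skip the PuTTY key conversion and use the downloaded .pem directly (substitute the hostname you copied in step 12):

chmod 400 your-key.pem
ssh -i your-key.pem ubuntu@<hostname-from-step-12>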