Building a plivo AMI

I’ve been experimenting with Amazon’s web services recently. I’ve also been playing with voice apps on both Twilio and Tropo. Then I found plivo. Happy days.

Plivo is an open source framework that offers functionality comparable to those hosted services. The authors have also made it outrageously easy to install by packaging the whole thing into two easy-to-run scripts. There was no EC2 AMI, so I set out to create one. It turned out to be fairly straightforward, and all possible through the web console.

Choose a base AMI

The first step is to choose a base AMI. I used the Ubuntu 10.04 amd64 standard AMI in the eu-west-1 region (ami-4290a636). Then I logged in, ran the plivo install commands, waited, waited some more, waited a little longer, and all was done.

Now, to secure the AMI before publishing it, I removed the SSH host keys, my authorized_keys, and the bash history. This is not as simple as it sounds (more on that below). I also made sure my final login came from a host I didn’t mind showing up in the “last logged in from” records.

I logged in and ran the following commands:

sudo shred -u /etc/ssh/ssh_host_*
shred -u ~/.ssh/authorized_keys
shred -u ~/.bash_history

Now I went into the web console, selected the instance, chose the Instance Actions menu and selected Create Image (EBS AMI). Then under AMIs, I selected my new image, and changed the permissions to public.
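If you prefer the command line, the same image can be created with Amazon’s EC2 API tools; from memory the call is roughly this (the instance ID is a placeholder):

ec2-create-image i-xxxxxxxx -n "plivo"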

Note that in order to take the snapshot, the instance pauses for a second. During that pause I lose my SSH connection, and having just destroyed SSH on the machine, I cannot get back in. So I have to terminate (kill) the instance and boot a fresh one from the new AMI. That creates new SSH host keys and puts my SSH key back into authorized_keys.

Elegance

I’m sure there’s a more elegant (and potentially elaborate) way of doing this. But it worked for me. It was quick and painless. Now there’s a public plivo AMI in the eu-west-1 region. I’ll look into how I get it into other regions, and if I need to pay for the storage to have it publicly available.

Result

The result is the new public ami-acd0e1d8 in the eu-west-1 region. If you choose to test the AMI, please let me know how you get on in the comments here.

Fixing NTFS on Ubuntu

James had a hard drive problem. He pulled the disk out of his laptop and brought it to me. Firstly I created a full image of the broken partition like so:

sudo dd if=/dev/sdb2 bs=1k conv=sync,noerror of=/path/to/image

Then I tried TestDisk. It worked like a charm and fixed the apparently broken NTFS boot sector. I thought that when James put the drive back in the laptop, it might “just work”, but apparently it didn’t. I had saved some of the most important files, but not all. James then wiped the drive to get a working machine again.

So now I had to restore files from an image of a broken partition. Turns out to be dead easy. The key ingredients were loopback and TestDisk.

sudo losetup /dev/loop0 /path/to/image
sudo testdisk /dev/loop0

It took me a while to figure out that I needed to choose partition table type none. I was dealing with an image of a single partition, so there was no partition table. After that, TestDisk behaved just like normal. I rebuilt the NTFS boot sector and then mounted the image like so:

sudo mkdir /mnt
sudo mount /dev/loop0 /mnt

This warned about the disk not having been shut down properly, ran something or other to clean it up, and then bingo, all the files were mounted and visible. I copied all the data from /mnt to an external drive, and will give that to James to restore from. Too easy!
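For completeness, the copy to the external drive needs nothing more exotic than something like this (the destination path is illustrative):

sudo rsync -a /mnt/ /media/external/james-files/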

My experience at CHS11

Friday night and yesterday I was at Culture Hack Scotland 2011 (#chs11). It was a 24 hour hackday. A hackday is a session where designers, developers and other hackers get together and create stuff. Typically there’s a theme, and this theme was culture with specific focus on the Edinburgh Festivals. The event was put on by the Festivals Lab.

In this post I’ll share my experience, talk about the future of what I started, and offer some suggestions for future events.

My experience

My interest at the beginning of the session was the people in the room. There seemed to be a lot of outward focus. People were building stuff for other people, for the public, for some sort of audience. I wanted to do something for the people physically present. I wanted to make some kind of contribution to the shared social experience of the event.

I started out thinking about photo and video. Taking portraits of participants maybe, or creating a video diary corner. After a couple of hours I hit on an idea. I wanted to do something with qrcodes and people. (A qrcode is a square barcode that many smartphones can scan, more on wikipedia.) Tagging physical people with qrcodes so they could be scanned by other participants in the event.

I had a vision of people complimenting each other, providing encouragement, and so on by scanning each other. So a person might be sitting working furiously and another participant walks past, scans them, and shares a positive message.

The execution seemed simple enough. I’d generate qrcodes that linked to twitter with the tweet message pre-filled, including the person’s twitter username. So I could scan Jill and immediately have a pre-written tweet addressed to @jill.
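As a sketch of the mechanics (qrencode and Twitter’s web-intent URL are my choices here, any QR generator that encodes a URL would do), a tag for Jill could be produced with:

qrencode -o jill.png "http://twitter.com/intent/tweet?text=%40jill%20is%20hacking%20hard%20%23tagrrd"

Scanning that opens a tweet box pre-filled with “@jill is hacking hard #tagrrd”, ready to send.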

The whole thing was incredibly simple. I wanted to launch in the morning, but spent ages getting the qrcodes printed. Tom Inglis helped a great deal here. He physically purchased and printed the stickers! I arrived just before 9am fired up and ready to go, but it was well after 1pm before the stickers were printed and ready to hand out. That could have been a lot faster; it was an easy problem to solve.

The next step was to tag people, which went fairly well. Most people were receptive. In total I linked 82 qrcodes to twitter accounts, and I count 21 tweets with the #tagrrd hashtag. So there were 4 qrcodes for every actual tweet. Those 21 tweets were produced by 16 authors over about 9 hours. That’s about one scan every 30 minutes.

I had hoped for much higher participation, but I think my execution let me down in the morning. I think if I had gotten the tags out earlier in the day, I could have spent more time encouraging people to use them.

Future of tagrr

I can see some interesting potential in the concept. For example, I like the idea of creating a simple brand around a qrcode, surrounding it in a red box for example. Then potentially sticking qrcodes around the city, maybe along the lines of geocaching. I also think the same idea could be done at other events with the tags handed out as people arrive. People might use them more if they were part of the event experience from the beginning.

I’ll keep my eyes peeled for other events where I could try to test the idea further. If you’re reading this and hosting an event in Edinburgh, let’s discuss the possibilities.

Suggestions

At future hack days, I’d love to see more tech oriented communication during the event. The #chs11 hashtag let people communicate via twitter, which was ok, but I think there’s room for easy improvement here.

A screen dedicated to showing a specific hashtag for developers would have been good. Somewhere developers could post questions, receive answers, and so on. Maybe that happened around the #chs11 tag and I missed it all, but a screen in the room would be good I reckon. Another nice option is the WordPress P2 theme, a sort of semi-private mini-twitter. It can all be done for free on wordpress.com.

Personally, and this is totally personal feedback, I would provide less food, more often. The catering appeared to be quite expensive, which is a nice touch, but I reckon the experience could be improved slightly by having cheaper food always available. For example, a fridge with sandwiches and snacks in it. It would be cool to have them freshly delivered at breakfast and lunch, but ultimately that’s probably not essential. Likewise with drinks: a coffee machine in the room, always on, continuously refilled, for the whole 24 hours would be awesome.

A halfway demo might work well. Give people the option, not the obligation, to present their project after breakfast, for example. Let the folks who worked overnight show off what they’ve done, and maybe bring new people into their work. Likewise, people could pitch tough problems they’ve hit and see if others in the room have solutions to offer.

Overall, the event was awesome. Personally, I had a great time. The highlight for me was the sense of cooperation between the participants. There was a great spirit of collaboration, people sharing problems, bouncing ideas between different teams, and so on. There was amazing talent in the room.

Audio CDs on Ubuntu on Lenovo X301

Quick geektastic post. Under Ubuntu 10.04 lucid lynx I can’t play audio CDs. When I put them into the drive, an error pops up every few seconds saying:

Unable to mount Audio Disc
DBus error org.freedesktop.DBus.Error.NoReply: Message did not receive a reply (timeout by message bus)

Eventually I stumbled upon this bug and found a solution: in Nautilus, go to Edit > Preferences > Media and tick “Never prompt or start programs on media insertion”. Bingo, now I can insert a CD and it will play. I don’t think it’ll work in Rhythmbox because that’s so tightly integrated with Gnome, but I was able to play the CD in VLC, and presumably I could rip it in something equally unconnected to Gnome.

Glorious, now I can rip some of my 6 year old CDs I just found. Happy days. :-)

Here’s a random picture from flickr for the non techy readers to enjoy…

Missing com.google.android.maps

Thanks to this post, and a whole heap of other stuff, I finally sorted out a Google Maps problem on my Nexus One.

When trying to install some apps, I would see this error message in logcat:

requires unavailable shared library com.google.android.maps

I had the Google Maps app installed and working, but that didn’t fix the issue. It turns out I had to add two other files and restart the phone. I found those files in the Google Apps zip from CyanogenMod. It took a little fiddling, but I was able to use these instructions to remount /system in write mode. The first step was to take the following two files from the Google Apps zip (gapps-hdpi-20101114-signed.zip) and put them onto my SD card.

/system/etc/permissions/com.google.android.maps.xml
/system/framework/com.google.android.maps.jar

Then to load them onto the phone, I opened the terminal emulator and ran:

su
mount -o rw,remount -t yaffs2 /dev/block/mtdblock3 /system
cp /mnt/sdcard/com.google.android.maps.xml /system/etc/permissions/
cp /mnt/sdcard/com.google.android.maps.jar /system/framework/

Then after I rebooted the phone, I was able to install apps that depend on Google Maps. I can now check bus and train times, and do all kinds of other cool stuff with maps! :-)
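If you’d rather do this from a computer, a rough adb equivalent would be the following (assuming your ROM lets adbd remount /system, and with the two files sitting in the current directory):

adb remount
adb push com.google.android.maps.xml /system/etc/permissions/
adb push com.google.android.maps.jar /system/framework/
adb reboot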

Note, this is only relevant if you do not want all Google Apps installed. I only have the map application installed as I don’t sync my phone with any Google services. If you’re using all the Google Apps, I suggest reinstalling as these steps should not be necessary.

To add a little colour, here’s an unrelated picture from flickr, courtesy of epSos.de.

Ripping MP3s from YouTube on Ubuntu

This is the first post in a new category called notes. Things I want to remember and don’t have anywhere else to write down.

Install Medibuntu, WinFF, Firefox with Greasemonkey and the “YouTube without Flash Auto” script, probably libavcodec-extra-52 from Medibuntu, and EasyTAG. Download the video within Firefox and save it to disk. Open WinFF, open the video, choose Audio as the output format, set your options, and click convert. Then open EasyTAG to tag the file if appropriate. Easy peasy. :-)
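WinFF is just a graphical front end for ffmpeg, so the conversion step can also be done straight from a terminal, something like this (the filenames are illustrative):

ffmpeg -i video.flv -vn -acodec libmp3lame -ab 192k audio.mp3

The -vn flag drops the video stream and the rest writes a 192kbps MP3.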

My very own OpenID server

I just installed SimpleID. Now I have my very own OpenID server. I no longer need to subject myself to the pain of myopenid.com. After they consistently ignored all my requests to fix a major bug in their system, I’ve gone elsewhere. Happy to be running my own server and away from JanRain and their abysmal non-support.

Install was painless. It took me maybe 30 minutes because my internet connection was running so slowly. On a fast line, it would have been a 5 minute install. Very cool. I did make one change to SimpleID, as per this ticket, to make it a little more secure.

My next project is to install Prosody so I have my own jabber/XMPP server as well. :-)

A WordPress hosting cooperative

Maybe you make WordPress sites for cash. Maybe you design themes or write plugins. Then, after your work is done, your clients (or friends, lovers, etc) need to be supported. Somebody needs to keep WordPress and her plugins up to date, secure, and backed up.

Would you like to share that load with some co-cooperators in a WordPress hosting cooperative? Imagine a small group of developers collectively managing 50 or 100 WordPress sites instead of individually managing 10 or 20.

Logistics

Ok, you’re sold on the vision, what about the details?

Initially, a loose association of a few individuals, no legal structure. I’m willing to act as the banker for the startup period. I’ll register a domain name and pay for a few servers. I promise to transfer ownership of the domain and any other assets when (or if) a legal organisation is created at any point in the future. Or, if I choose to move away, to transfer the domain and other assets to another person in the group.

My suggestion is that we adopt a split pricing model. We set a fair market price for customers. In the beginning, it’s probably simpler to charge per blog irrespective of traffic, disk or cpu usage. We can change this policy as soon as we need to.

Members then pay a pro-rated share of costs based on their number of sites. For example: 10 customers paying $10 a month brings in $100, while expenses are $150 a month, leaving a $50 shortfall. With 5 members hosting 4 sites each, that’s $50 spread over 20 member sites, so each member pays $2.50 per month per site.

To distinguish between customer and member sites, we can say if money changes hands, it’s a customer site. So a member might pay for 8 of their client’s sites at customer rates, and 3 for their family at member rates. The distinction is whether or not the member receives cash from somebody for that site. We trust each member to be honest.

Payment optional

It’s not as crazy as it sounds, honest! I suggest we adopt a post-paid, payment optional policy. At the end of each month, we send invoices marked payment optional. Customers can choose not to pay, and their sites will be taken offline within a reasonable time period.

The advantage of this model is that we never have to deal with refunds, price disputes or otherwise. If the client is happy with the service they already received, great; if they’re not, they don’t have to pay and we part ways amicably.

Principles

  • Transparency: All financials are publicly visible.
  • Profits: Until we have a legal organisation, any profits are kept in the group to pay for expenses. No payouts to members until the legal structure is sound.
  • Do-ocracy: Until we decide to change it, we each contribute what we can and what’s needed to keep the system online.
  • Respect: Inspired by the Ubuntu project, in joining the group, we each commit to treat other members and customers with the utmost of respect at absolutely all times.

Next step

These are my initial thoughts as I wrote this post in half an hour. If you’d like to join the discussion, become a member or a customer, post a comment below, shoot me a message, or otherwise open the communication lines. :-)

A new short url

I’ve just acquired two short domains. They are cal.io and chm.ac. The first is cool, the second is a short version of my standard username, chmac (and formed from my initials, how original!).

I’m thinking I want to move this site, my email, and my other services over to one of the two. I can’t decide which. I think I like cal.io better, but calio.com is taken and I’m already invested in chmac. It’s my username across most sites. I account for 12 of the top 20 results in a search for chmac. I own that space.

Calio is a surname, so there’s all kinds of other stuff going on in that space. It’s an exciting new space, but it might also mean walking away from chmac, which I’m already on top of.

What do you think? cal.io or chm.ac?

Update: In the 3 days since I wrote this, while I waited for the domains to be registered, I think I’m decided on cal.io as my new domain. But, I’m still interested to hear feedback.

Links in twitter feeds in Liferea

I use Liferea to consume feeds. In turn, I consume twitter by RSS. However, twitter’s RSS feeds suck. URLs aren’t clickable, usernames aren’t links, nothing. It’s flat text.

Using Liferea’s ability to locally parse feeds and a little inspiration, I hacked up a sed script to make my twitter feed all pretty. It works great for me, YMMV.

I published the script here, under the GPL. To use it, save the source into a file somewhere, make that file executable, then choose “Use conversion filter” in Liferea and select the file you just created. If you have problems, you could try leaving a comment here, I might be able to help.
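For readers who can’t get at the published script, here’s a rough approximation of the idea, not the script itself. Liferea pipes the feed XML through the filter on stdin; since the description text is XML-escaped, the filter emits escaped <a> tags that Liferea unescapes and renders as links. It naively touches every URL in the feed, which is good enough for twitter’s simple feeds:

#!/bin/sh
# Sketch only: linkify bare URLs and @usernames in a twitter RSS feed.
sed -e 's|\(https\?://[^ &<"]*\)|\&lt;a href="\1"\&gt;\1\&lt;/a\&gt;|g' \
    -e 's|@\([A-Za-z0-9_]\+\)|\&lt;a href="http://twitter.com/\1"\&gt;@\1\&lt;/a\&gt;|g'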

Insufficient boot space on Ubuntu

When installing the latest batch of updates to Ubuntu 10.04 I hit a problem: I ran out of space on my /boot partition. A dialog popped up warning of low space on /boot, and then the updates failed because the new kernel package couldn’t be installed.

The solution was remarkably simple; I post it here in the hope it might help others. Firstly, I removed the oldest kernel I had installed. I opened Synaptic (System > Administration) and searched for my current kernel version (2.6.32). I saw I had 4 kernels installed. I then searched for 2.6.32-21, the oldest kernel, and marked these packages for complete removal:

  • linux-headers-2.6.32-21
  • linux-headers-2.6.32-21-generic
  • linux-image-2.6.32-21-generic

Then I applied the removal and, to finish, marked the -24 versions of the same packages (the latest kernel, whose install had failed) for re-installation. Now all is happy once again. :-)
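The same clean-up also works from a terminal; just be sure the kernel you remove isn’t the one you’re currently running (check with uname -r). Roughly:

sudo apt-get remove --purge linux-image-2.6.32-21-generic linux-headers-2.6.32-21-generic linux-headers-2.6.32-21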

Switching to twentyten

I’ve just upgraded to WordPress 3.0 and switched to the brand new default theme called twentyten. If you’re reading this in your feed reader, come by and check out the new look.

I’ll update my picture (people seem shocked when they see it after meeting me in person!), and modify the menu using the new menu editor. I’ll try to make navigating a little easier. If you haven’t already tried it, I recommend the new version of WordPress.

Regenerating nautilus thumbnails

Sometimes nautilus will try to generate a thumbnail for a video file while it’s still downloading. Nautilus then remembers that it tried, and failed, to generate a thumbnail for that file, so once the download finishes the thumbnail remains broken. I’ve had this issue for a while; today I decided to find a solution.

I found this post by Barak Korren. Barak wrote a short nautilus script in python to allow the easy deletion of a thumbnail in Nautilus. Here’s a step by step guide to getting it working.

Download this file and put it into your ~/.gnome2/nautilus-scripts directory. The script is by Barak; I uploaded a plain text version here to make it easier to download. Make the script executable; you can run chmod +x ~/.gnome2/nautilus-scripts/delete_thumbnails.py in a terminal to do this. Now browse to a folder with broken thumbnails in Nautilus, and you’re in business.

To test, right click on a file with a thumbnail. You should see a new menu, Scripts, under which you’ll see “delete_thumbnails.py”. Click that option and the thumbnail will be deleted. Press F5 to reload the folder in nautilus, and you should see a new thumbnail generated.
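For the curious, the mechanism is simple: GNOME caches thumbnails under ~/.thumbnails in files named after the MD5 hash of the file’s URI, so deleting the cached file is all it takes to force a fresh thumbnail. A rough shell equivalent of what the script does (Barak’s Python version handles the corner cases properly):

#!/bin/sh
# Sketch only: delete cached thumbnails for each selected file.
# Assumes plain paths with no characters that need URI escaping.
for f in "$@"; do
  uri="file://$(readlink -f "$f")"
  hash=$(printf '%s' "$uri" | md5sum | cut -d' ' -f1)
  rm -f ~/.thumbnails/normal/"$hash.png" ~/.thumbnails/large/"$hash.png" ~/.thumbnails/fail/*/"$hash.png"
done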

Thanks for such a handy script, Barak.

Installing lyx without the bloat

For a few months now I’ve been researching programs to write in. I have OpenOffice, I tried AbiWord, I use gedit for text files. They’re all good programs, but they’re not what I want to write in. I want something ultra simple. Very basic formatting, spellcheck, light quick load time. The best option I found was Tomboy, a sticky note application. It supports very simple formatting, has a spellcheck, and is dead simple. It loads almost instantly. But, it saves notes automatically in its own format. There’s no way for me to save different versions, choose a file name or location, etc. It’s fine for the writing, but I need to go elsewhere to save.

In the last couple of days, I discovered gwrite. It’s a very simple WYSIWYG HTML editor. It has the potential to be exactly what I want, but it’s very young software and still has a few usability bugs. I’ve reported them to the program’s author, so maybe it’ll improve in time. I might even look at the source code and see if I can provide some patches myself.

But, that’s not the point of this post. This post is about lyx, which is a seriously cool application I’ve just discovered. It’s a “writing tool”, not a word processor. It’s a tool designed for scientific and other authors to simply write text. It’s based on an underlying technology called LaTeX. As I understand it, and I’m completely new to this whole thing, LaTeX allows an author to just write. The layout of chapters, titles, indentations, bullet points, and all that jazz, is handled by LaTeX macros. What does that mean? Well, I think it means I just write, then lyx, LaTeX and TeX make it look beautiful.

So, all excited, I decided to install lyx. This is where I hit a problem. I was prompted to download 438MB of data and use 745MB of disk space. That’s outrageously huge for a single program. I was blown away; it makes installing lyx many times larger than OpenOffice. I was intrigued by what took up so much space, so I had a little sniff around. It turns out that more than 70% of the download size and almost 60% of the disk space is used by documentation. Mostly it’s documentation for underlying packages which I didn’t specifically choose to install; they’re pulled in to make lyx work.

Being on a slow internet connection, I decided waiting the day or two for 438MB to download was just too much. There must be another way. A little research later, I found my solution in a program called equivs. Equivs is a pair of tools to create shadow or dummy debs. In my case, this meant creating a dummy package to make apt think that I had already installed the massive collection of documentation that was necessary to install lyx. Thus I was able to install lyx by downloading only 117MB of data and using only 302MB of disk space. Still astronomically huge, but less than half of what I was originally facing.

And so, onto the point of this post. How does one do that? If you want a simple answer, here it is. Step 1, install this file. Step 2, install lyx as normal. Bingo, jobsagoodun. :-)

For those who are interested, I’ll explain the process on Ubuntu 10.04. Install equivs in the usual way (sudo apt-get install equivs will do the trick). Now create a new directory; I called it equivs-texlive-dummy-docs. In that directory, run equivs-control texlive-dummy-docs.ctl. Now edit the newly created file. Mine looked like this. Next run equivs-build texlive-dummy-docs.ctl. This command creates a new file called texlive-dummy-docs_1.0_all.deb. That file can be installed with sudo dpkg -i texlive-dummy-docs_1.0_all.deb.
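To give a sense of the shape of that control file (the package names on the Provides line are illustrative; list whichever -doc packages apt is trying to pull in):

Section: misc
Priority: optional

Package: texlive-dummy-docs
Version: 1.0
Architecture: all
Provides: texlive-latex-base-doc, texlive-latex-recommended-doc, texlive-fonts-recommended-doc
Description: Dummy package pretending the TeX Live documentation is installed
 Satisfies the -doc dependencies so lyx can be installed without them.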

It took me a few hours to put all this together. Hopefully if you’re facing the same challenge, you can install one file and be done. :-)

Update: I discovered that all these packages are installed because apt is configured to install recommended packages by default. I tried installing lyx without any of the recommended packages using sudo apt-get install --no-install-recommends lyx, but previewing documents from lyx didn’t work. Instead I reverted to my equivs texlive-dummy-docs package. If you feel passionately about this topic, as I do, please chime in on this bug.

Prepaid ICE sim card in Costa Rica

Yesterday I bought a prepaid ICE sim card in Costa Rica. Last time I was in Costa Rica they didn’t exist; more recently I’d read that they were available but very hard to find. I walked into a shop called abCelular in San Isidro, and after a bit of confusion, walked out with a 2500 colones sim. Yay. :-)

They asked for an ID card. I offered my UK driving license. That was fine, except their computer would only accept numbers in the “identification document number” spot. So instead, I gave them my passport, where the document number is only digits. I probably could have pushed the issue with my driving license, maybe they would have just entered the numerical part or something, but the passport seemed easier, and I was in a hurry.

The transaction was painless. I didn’t have to show a phone, and they only looked at the photo page of my passport (they wouldn’t have seen even that if I’d pushed the driving license). It seems that prepaid SIMs are finally available in Costa Rica… :-)

Now if only I could get service inside my house…

CS Greasy on Launchpad

I created a project on Launchpad for the first time today. It’s called CS Greasy, a collection (or soon to be a collection) of Greasemonkey scripts related to CouchSurfing. It took a little time to figure it out, but thanks to Kasper’s help, I think we’ve got it working now.

I created a new team called ~csgreasy. The tilde (~) distinguishes teams and users from projects. So the project name is csgreasy, and the team that owns the project is ~csgreasy. The team is open, so anyone can join. Upon joining, new members can commit code immediately. Once you’ve joined the team, the commands to check out and commit code are:

bzr branch lp:csgreasy/trunk
# make some changes
bzr push lp:csgreasy/trunk

After the first push, subsequent changes can be pushed with just `bzr push`; the location is remembered from last time. Now anyone with bazaar and some javascript skills can contribute. To get started, install bazaar, register on launchpad, join the team, branch, start hacking, and push back your changes. Happy hacking… :-)

Creating Launchpad projects

Creating the project was relatively simple. There were a couple of steps I didn’t fully understand at first, but it was simple once I got it.

Firstly, I registered a project. Second, I created a team. Then, instead of pushing branches to lp:~user-name/project-name/branch-name, I can push to lp:~team-name/project-name/branch-name. Using the team name instead of my own username means that the code is owned by the team and can be edited by anyone else in the team. A team on launchpad is essentially a regular user that consists of multiple other users. Very handy. That’s the whole process. :-)
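In concrete terms, pushing a team-owned branch for this project looks something like:

bzr push lp:~csgreasy/csgreasy/trunk

The lp:csgreasy/trunk form used earlier is just a shorthand that Launchpad resolves to the branch linked to the project’s trunk series.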

Proposing WP Flavours

Instigated in part by this discussion, I think the time has come to start forking WordPress. I think there is space for a few different forks, or flavours, of WordPress. I can imagine flavours focused on security, privacy and probably others. For example, a flavour that disables all the post versioning, or a flavour that strips out other parts of the code to suit a specific need.

To serve these aims, I propose to create wpflavours (or wpflavors). I imagine a site where flavours can be downloaded, an svn repo where patch sets can be maintained (maybe using quilt), and potentially a mailing list for group communication. Maybe we could host the whole thing on google code or some other public code / svn service. I suspect we’ll need server space to automate the patching and packaging process though.
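To give a flavour of the quilt idea, maintaining a patch set looks roughly like this (the patch and file names are purely illustrative, not actual WordPress patches):

quilt new disable-revisions.patch
quilt add wp-settings.php
# edit wp-settings.php to strip out the revisions code
quilt refresh
quilt series    # lists the patches that make up the flavour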

If there’s sufficient interest (say anyone else interested in writing patches), I’ll register the domains, setup a simple WordPress site, figure out svn, setup a mailing list, and we’ll see what happens. If you’re interested comment publicly below or get in touch privately.

My first greasemonkey script

On several occasions I’ve looked for an animated weather map where I can see the predicted weather for a region. After some struggling, I found maps on weather underground that were close to what I wanted.

However, when I changed the date of the map, it loaded a whole new page with the new map for the new date. It was cumbersome to cycle through the dates. I figured I could write a little Greasemonkey script to make life easier. Some 6 or so hours of hacking later, it’s done. It was much more finicky than I anticipated, but it’s done, and it works. I present Wunderground AJAXifier.

What is Greasemonkey?

Greasemonkey is a plugin for Firefox that allows you to use custom scripts on various web sites. For example, I use the YouTube without flash auto script. When I load a YouTube video, the script removes the flash player and replaces it with an embed code that fires up my default browser plugin (VLC, xine, mplayer, etc). The script also creates a few links to download videos directly to my computer. Easy peasy.

There are thousands of scripts on userscripts.org. Mine is here. Be warned, the first few scripts I installed were malicious: they redirected me to the author’s web site. I recommend you check the reviews and read the source code before installing any scripts.

Philosophy

I think greasemonkey is a really big development for browsers. It provides an easy way for users to customise and control their web experience. For example, it’s now relatively easy to reorganise your favourite web site to improve the layout, add a WYSIWYG editor, and more. It’s a significant step for users to regain control of their web experience from site publishers. Power to the people! :-)

Removing onclick or onchange with Greasemonkey

It took me quite a while to figure out how to remove the page’s default onchange event. I found the solution thanks to joeytwiddle on #greasemonkey. The trick is to go through wrappedJSObject. Here’s a quick example:

var myel = document.getElementById('callum');
console.log(myel.onchange); // null, see XPCNativeWrapper
console.log(myel.wrappedJSObject.onchange); // works
myel.wrappedJSObject.onchange = null; // unsets the onchange handler

It took me a while to figure this out, hopefully this post helps somebody else.

Here’s a completely unrelated image from a flickr search for greasemonkey to brighten the post.

VirtualBox host to guest networking

Update 2013: This article is out of date. VirtualBox now includes a host-only network type. On my laptop I create 2 networks, one NAT to provide the VM with internet, and one host-only to provide the laptop access to the VM, even if the laptop is not on the internet.

Update: I just repeated this process with Ubuntu 11.04 host, 10.04 guest. It worked as described here. I also automated the setup on the host, and added a note at the bottom of the post explaining how I did that.

I’m creating a new development server on VirtualBox. I was using VMWare until recently, but since upgrading to Ubuntu 9.04 64bit, I’ve decided to try VirtualBox instead. I also recommended VirtualBox to my brother, so by using it myself I’ll be better able to support him if he has any issues.

Installing a new virtual machine was a breeze. After I activated hardware virtualisation in my bios, I installed a 64bit version of Ubuntu server 8.04 LTS. The install failed a couple of times, not sure why, but third time lucky.

My first major stumbling block was connecting to the virtual machine from the host machine. By default VirtualBox gives the guest (virtual machine) a NAT ethernet connection. So the guest can connect to the network, including the internet, but the host can’t connect to the guest. I’m creating a development server, so that’s precisely what I want to do, connect from the host to the guest. With a little research, it turns out there’s an easy solution (on Linux hosts).

The VirtualBox article on Advanced Networking in Linux was my guide. I’ll document all the steps I took here.

Install bridge-utils, vtun and uml-utilities:

sudo apt-get install bridge-utils vtun uml-utilities

Create the bridge:

sudo brctl addbr br0
sudo ip link set up dev br0
sudo ip addr add 10.9.0.1/24 dev br0

Create a tap device for the guest to use, put your username in place of USER:

sudo tunctl -t tap0 -u USER
sudo ip link set up dev tap0
sudo brctl addif br0 tap0

If you need multiple guests connected, repeat this step replacing tap0 with tap1, tap2 and so on. Always use br0.
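For example, a second guest would get its own tap1 attached to the same bridge:

sudo tunctl -t tap1 -u USER
sudo ip link set up dev tap1
sudo brctl addif br0 tap1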

Now modify the virtual machine settings and map one of the network adapters (probably the second one) to the device tap0. Choose Attached To Host Interface and select the device tap0. I left the first network adapter as a NAT adapter so the virtual machine has internet access. In this configuration, I can disconnect the guest from the internet and / or the host separately.

When the virtual machine has started, set up the network. Assuming the guest is an Ubuntu machine, run these commands on the guest. If you linked the first network adapter to tap0 then use eth0 on the guest; if you chose the second network adapter use eth1, the third eth2, the fourth eth3, and so on.

sudo ip link set up dev eth1
sudo ip addr add 10.9.0.2/24 dev eth1

Now test it all works. On the host machine try ping -c4 10.9.0.2 and on the guest try ping -c4 10.9.0.1. Assuming both machines are set to respond to pings (default in Ubuntu), you should see 4 successful pings.

If this works, you can set the address permanently by editing /etc/network/interfaces and adding this text.

# Host only network
auto eth1
iface eth1 inet static
    address 10.9.0.2
    netmask 255.255.255.0
    network 10.9.0.0
    broadcast 10.9.0.255

I’ve used the 10.9.*.* addresses as an example. You can use any private network address (10.*.*.*, 192.168.*.* or 172.16.*.*-172.31.*.*). The most commonly used addresses are 192.168.*.* and 10.0.*.* or 10.1.*.* so I recommend staying away from them. You want to choose addresses that won’t clash with anything else on your network.

Edit: Finally, I added a script to automate the setup on the host machine. I created a script called /etc/init.d/virtualbox-bridgenetwork with the following contents:

#!/bin/bash
# Create the br0 interface
brctl addbr br0
ip link set up dev br0
ip addr add 10.9.0.1/24 dev br0
# Create tap0 for the vm to connect to
tunctl -t tap0 -u USER
ip link set up dev tap0
brctl addif br0 tap0

You need to change USER to your own username and modify the IP to whatever you were using. Make the script executable (sudo chmod +x /etc/init.d/virtualbox-bridgenetwork), then to make it run automatically at boot time, run:

sudo update-rc.d virtualbox-bridgenetwork defaults

Now the br0 and tap0 interfaces should be automatically created at boot time.

Mapping plans

Since receiving my spot messenger I’ve been looking at mapping. I’d like to create a live route map that documents my travels. I’ve taken inspiration from Mark Beaumont’s map. Mark is cycling from Anchorage, Alaska to Ushuaia, Argentina; you can read more on his site.

The spot will update my location every 10 minutes. For $50 a year Spot will give me a map that shows my current location plus, I believe, up to 30 days of history. Example. I’d like to show my whole trip, more than 30 days of history. A little hacking turned up this. It’s the feed that drives spot’s map. Hopefully with a little jiggery-pokery, I can use the feed to create my own map with pieces added.

I’d like to show the following:

  • My route and current location, up to the last update
  • Points of interest that I highlighted along the way
  • Pictures or video showing the location where they were taken
  • Status updates plotted on the map at the place they were sent from
  • Blog posts linked to the relevant points on the map

Is there anything else you would like to see on the map? Are you familiar with open source mapping? Do you want to help me put this together? Do you know of any mapping services I could use? Thanks in advance for your help.

Here’s a random picture from a flickr search for maps to brighten your screen.

Mounting LVM vmware disks

I’ve spent a couple of weeks trying to recover some data from an old vmware machine. I didn’t want to install vmware on my new OS, so I looked into the vmware-mount program. The documentation refers to vmware-mount.pl, but I couldn’t find that file at first. It looks like since VMWare 2.0, vmware-mount.pl and vmware-loop have been replaced by a single vmware-mount binary, which behaves slightly differently.

I initially had problems with vmware-mount from VMware-server-2.0.0-122956.i386.tar.gz. I was getting this error:

vmware-mount: error while loading shared libraries: libfuse.so.2: cannot open shared object file: No such file or directory

I saw other reports of the same error, but no solution. I was using vmware-mount from a 32bit build on a 64bit OS, so instead I tried the vmware-mount from VMware-server-2.0.1-156745.x86_64.tar.gz. Then I got an error along the lines of:

SSLLoadSharedLibrary: Failed to load library libcrypto.so.0.9.8:/usr/local/bin/libdir/lib/libcrypto.so.0.9.8/libcrypto.so.0.9.8: cannot open shared object file: No such file or directory
Core dump limit is 0 KB.
Child process 26541 failed to dump core (status 0x6).

VMware Server Error:
VMware Server unrecoverable error: (app)
SSLLoadSharedLibrary: Failed to load library libcrypto.so.0.9.8:/usr/local/bin/libdir/lib/libcrypto.so.0.9.8/libcrypto.so.0.9.8: cannot open shared object file: No such file or directory
Please request support.
To collect data to submit to VMware support, select Help > About and click “Collect Support Data”. You can also run the “vm-support” script in the Workstation folder directly.
We will respond on the basis of your support entitlement.

Press “Enter” to continue…

I read this. With a little guesswork this command seemed to do the trick:

sudo ln -s /lib /usr/lib/vmware

Now vmware-mount would list my partitions. First major breakthrough.
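If I remember the flag correctly, the listing call is along the lines of:

sudo vmware-mount -p pathToVMDK.vmdk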

Mounting suspended disks

It complained that my virtual machine was in a suspended state, and so it wasn’t safe to mount the disk. I found I could bypass this problem by moving all the vmdk files into a separate directory and then running vmware-mount from that directory. That way it ignored all the other virtual machine files and used only the hard disk files.

mkdir vmdks
sudo mv *.vmdk vmdks/
cd vmdks

Now I could mount my /boot partition within the VM, but not the second partition, because it was an LVM container. A whole new problem to solve.

Mounting LVM volumes with vmware-mount

I stumbled on my buddy John’s post. That and this helped me figure out what was required.

My first step was to mount the disk flat, using a command like:

sudo vmware-mount -f pathToVMDK.vmdk /path/to/mount

That worked, sort of. With fdisk -l /path/to/mount I could see the two partitions. But sudo vgscan couldn’t find the lvm partition. I tried sudo losetup /dev/loop0 /path/to/mount/flat, but that didn’t work either.

I figured I needed vmware-loop to mount the partition as a loop device. I searched the VMware-server-2.0.1-156745.x86_64.tar.gz file for vmware-loop, but it was nowhere to be found. That’s when I started investigating previous versions of VMWare. It looks like the 1.0.* releases included vmware-mount.pl and vmware-loop while the 2.0.* releases only include the new vmware-mount binary.

I downloaded VMware-server-1.0.9-156507.tar.gz. From that tar file I extracted bin/vmware-mount.pl and bin/vmware-loop. These were the files I needed. I skipped vmware-mount.pl and went straight to vmware-loop. I was able to mount the second partition directly onto a network block device (/dev/nbd0) with:

sudo vmware-loop pathToVMDK.vmdk 2 /dev/nbd0

Now I could use the lvm commands to activate and mount my lvm. Note that vmware-loop is running the whole time, so I left it in a separate terminal. I closed it with CTRL-C at the very end of the process.

sudo vgscan
sudo vgchange -ay VolGroup00
sudo mount /dev/VolGroup00/LogVol00 /mnt/

Finally, I was able to copy the files from my virtual hard drive. I made a full backup with tar and then grabbed some other specific files and unmounted the whole thing.

sudo umount /mnt/
sudo vgchange -an VolGroup00

If you’re struggling with vmware-mount and LVM or suspended disks, I hope this helps. Comments welcome.

Ubuntu Jaunty and pidgin-facebookchat 1.61

I was able to install pidgin-facebookchat 1.61 on Ubuntu Jaunty Jackalope (9.04) by first installing the relevant libjson-glib-1.0-0 package from Karmic. To find the correct deb, look at the different builds on the right hand side of the page. In my case on 64-bit Ubuntu the relevant deb was this one; the 32-bit version is here.

I had to install the libjson-glib deb first. Otherwise the pidgin-facebookchat deb warned of an unsatisfiable dependency.
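In practice that’s just two dpkg calls in the right order (the exact filenames will vary by build):

sudo dpkg -i libjson-glib-1.0-0_*.deb
sudo dpkg -i pidgin-facebookchat*.deb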

It looks like pidgin-facebookchat 1.6x is being included in Ubuntu Karmic but I’d guess it won’t be backported to Jaunty.

Google Wave

This might be the most exciting technological development since email. I’m truly impressed at Google’s approach to this project. It gives me newfound faith in Google.

The guys behind Google Maps set out to answer the question “What would email look like if it were invented today?”

Their answer is truly outstanding. Wave is a collaborative communication tool. Something like email crossed with a wiki, instant messaging client, and much, much more. As I watched the video I was thinking, all well and good, but when I got to around 1 hour 8 minutes, I got really excited. In a truly genius move, Google has made the whole protocol behind this new platform open source. That allows independent organisations to build their own Wave servers, and privacy is tightly coded within the system. No Google snooping. Wow.

If you’re technically minded, watch the video here. I’m not embedding the video because it’s 1 hour 20 minutes long and you probably want to watch it in high def on YouTube directly. See more info and sign up for a demo account on Google Wave here.

I was clapping along with the audience as the video ended. Truly amazing. Thanks to Pete Mall for the tip. :-)

An Ubuntu Kindle outside the US

I just bought a Kindle and successfully loaded my first book onto it in Canada, using only Ubuntu. The process I used should work anywhere outside the United States. Here’s a quick summary for overseas, would-be Kindle owners.

1] Buy the Kindle. You need a shipping address in the United States where a friend or forwarding service will receive the Kindle and send it on to you. You can use a credit card from any country to actually purchase the Kindle, but not the books.

2] Deregister the Kindle from your Amazon account.

3] Buy yourself an Amazon gift voucher (I started with $20). Just buy a gift card and have it sent to your own email address.

4] Create a new Amazon account with a new email address.

5] Register the Kindle onto your new Amazon account. The Kindle serial number is in tiny letters on the back of the device.

6] Load your gift voucher onto your new Amazon account.

7] Browse the Kindle book store, purchase a book. You’ll need to add a shipping address to your account, use a US shipping address.

8] Go to Your Account > Manage My Kindle and scroll down. You’ll see a list of your purchases; choose Download to My Computer, then save the file.

9] Plug your Kindle into your computer (Linux, Mac or Windows all work) and drop the file into the documents folder on the Kindle.

Voila, you have a Kindle outside of the USA.

Do not add a non-US credit card to your Amazon kindle account. Use the account only for your Kindle and only put money on the account via gift vouchers. Any non-US credit card will stop Amazon sending books to you on that account. You could repeat the process to register the Kindle to a new account, but you might run out of email addresses!

I’ll post some thoughts on the Kindle once I’ve had a chance to try it out. Right now it’s charging via USB. :-)

For those of you still wondering what a Kindle is, go here. Think iPod for books. Here’s a picture to help you visualise: