Zim on Mac OSX

It took me a bit of time and experimentation, but now I have Zim running on OSX, and in the end it was pretty painless. These are the steps I took, as best I remember them.

First, install MacPorts. I downloaded the dmg then installed it. MacPorts requires XCode, but I had already installed the XCode command line tools from here (apple ID required). Every time I run anything with port, it warns about missing XCode; as I understand it, this warning can safely be ignored.

Then I found details somewhere on which packages to install. I ran this command:

sudo port install python26 py26-pygtk py26-simplejson py26-xdg

I also downloaded the zim source code package. Then I had some issues where zim wouldn’t start. Turns out I was using the default Apple version of python instead of the MacPorts version. I extracted the zim source, cd’d into the directory, and then I was able to start zim with the following command:

/opt/local/bin/python2.6 zim.py

It took a few tries of closing and re-opening it before it started properly, but eventually I got it working. I also had to have XQuartz installed, which I think is an X11 implementation for OSX. I don’t know for sure, it was already installed on my system.

If you try to follow these steps and find anything missing, let me know what to add and I’ll update this post. Hopefully this saves somebody else a bit of hassle.

Unity drove me mac

This post has been a while in the making. Bottom line, I bought a mac. I’ve had it for about three weeks now, and so far I haven’t looked back.

new macbook pro 13 inch retina

There’s some link between Bangkok and operating system choices for me. I first converted to Linux in Bangkok, and I decided to buy a mac in Bangkok some 7 years later.

The decision was a long time brewing. It all started 5 odd years ago with John Berns. He converted from Ubuntu, and told me something that I shot down at the time (sorry John! 😉) but have long since remembered. He said that since he got a mac, he doesn’t spend time fixing his computer; he spends his time working.

As I come fully into my fourth decade, I’m entering a new phase. My late teens and twenties were defined by idealism and militancy, and fuelled by an unbridled fiery energy. As I settle into my thirties, about to turn 31, my focus has slowly shifted. My idealism has faded somewhat, along with some of my zest, and I’m more focused on productivity.

The decision to buy a mac was a decision to get stuff done. I chose a 13″ MacBook Pro with a Retina display. It’s a phenomenal piece of hardware, and the software is close enough to *nix that I thought I’d survive the transition. My precious terminal is still just a hotkey away (thanks iTerm2). My expectation was that the machine would just work, and it has almost completely lived up to that: great hardware, an acceptable user experience, and very, very, very productive. It simply works.

Leaving Ubuntu

Leaving Ubuntu was a hard decision. I’d been a dedicated convert and evangelist for some time. I’m still running Ubuntu on all of our servers. In general, I’m a big fan of what Ubuntu has done for the Free Software space.

However, the stumbling block for me was Unity, Ubuntu’s new desktop interface. Mark has spoken about his vision for Unity, and about not just competing in the operating system space, but leading, setting a new standard. I think that’s a noble cause. But all the chaos and confusion that comes with such rapid change has a cost, and an impact.

Personally, I reached a point where my laptop was a constant struggle to use. I didn’t want to use Unity, and I specifically didn’t want fancy 3d effects at the cost of speed. I wanted a fast, simple desktop. Ubuntu had not been that for some time. I considered Mint, which is apparently the second most popular distro behind Ubuntu. But in the end I was burned out. I’d had enough of fighting with my operating system.

I think the popularity of Mint is testament to the damage that Unity and the broader Ubuntu direction are doing to the Ubuntu userbase. I think a quiet exodus of long standing, core users is underway. I think the effects will be easier to see a few years from now. Maybe it’s a good thing, maybe Ubuntu will truly compete in the desktop space. I hope so.

Personally, I’d like to see a simpler version of Ubuntu LTS with much better backports. I don’t want to upgrade every 6 months, nor do I want 2 years to get the latest and greatest software. For now, I’m sticking to Ubuntu on servers, and if I had a desktop, I’d probably install Mint. But day to day, I’m now officially a Mac user.

Ctrl-left and ctrl-right on iTerm2

I couldn’t jump a word at a time in iTerm2. Eventually, I found a beautifully elegant solution using BetterTouchTool (an absolute must-have, couldn’t-live-without, fantastic piece of software). I set up a keyboard shortcut, scoped to iTerm2 only, that maps alt-left to send a keyboard shortcut to a specific application: the shortcut is ctrl-left and the application is iTerm2. Then the same for right. Works like a charm. Elegance. 🙂

It works because when I’m in iTerm2, pressing alt-left sends ctrl-left to iTerm2, while actually pressing ctrl-left still switches desktops. Normally ctrl-left never reaches iTerm2 because OSX picks it up first, but with the BetterTouchTool workaround I can use ctrl-left as normal, and use alt-left to jump by word on local and remote terminals. This had been bugging me for a couple of weeks. Solved. Fantastic.

Post on multiple social networks

This is my WordPress site. I also use Facebook, Twitter, Identica, Plurk, LinkedIn, Tumblr, and others. To be honest, I’m not a huge social network user. I don’t quite get it when people describe Facebook as an addiction; that ain’t me. I actually avoid the news feed. I was using Twitter more, but then the connection got wiped on my phone, so it’s been a while.

But I do publish on these social networks, and when I do, I want to post on multiple social networks. There are multiple reasons behind this. First and foremost, I want to own my own data. That means, although I want to post on multiple social networks, I absolutely want to post everything to this site. If I post something on Twitter, I want it on my WordPress site as well. I want to have my own record, on my own server, of what I posted when.

Back in the day, I used ping.fm to post to multiple social networks. It worked well enough, I could post by SMS or from an Android app, on their web site, even by email. But then Seesmic acquired ping.fm, and after a while, they shut the service down. I missed it. I mourned it. I stopped posting.

Bottom line, if I can’t post on multiple social networks, I don’t post. I wouldn’t send messages to twitter without also copying them here, and it was manual and laborious. So I decided to build Composer. It builds on the foundation of ping.fm, but takes on a bigger goal. I set out with the vision to create the best place on the internet to write. Full stop.

Composer is in alpha (that comes before beta!) testing right now. It’s live, it’s online, and you can sign up today. Right now you can post to WordPress, Facebook, Twitter, Tumblr, LinkedIn, Identica, and Plurk, with more in the works.

If you have any feedback at all, just let me know, in the comments below, by email, or through the help pages on Composer.

TrueMove Thailand default PIN

My AIS sim card came with the PUK printed on it. My TrueMove sim card did not, and I couldn’t track down the default PIN code anywhere online or in the manual. I tried guessing and it didn’t work; eventually I went to a True shop and after some time they told me the default PIN is 0000. Hopefully that saves somebody else some hassle. 🙂

MySQL and SSL on Ubuntu Precise 12.04

I’ve had a nightmare the last couple of days getting mysql replication set up over SSL. It turns out some things have changed since upgrading to Ubuntu 12.04 Precise. In the end, the solutions were simple.

First, the server-key.pem file needs to have RSA in the header. I manually edited the keys and added the RSA part, and it worked.


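The edit amounted to rewriting the PEM begin/end markers into the RSA form that mysql’s bundled SSL library expects. As a sketch (fix_pem_header is my name for it here, and it assumes, as in my case, that the key material itself was fine and only the markers needed changing):

```shell
# Rewrite the generic PEM markers to the RSA form mysql expects.
# Assumption: the key body is already plain RSA key material.
fix_pem_header() {
  sed -i -e 's/BEGIN PRIVATE KEY/BEGIN RSA PRIVATE KEY/' \
         -e 's/END PRIVATE KEY/END RSA PRIVATE KEY/' "$1"
}

# e.g. fix_pem_header /etc/mysql/server-key.pem
```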
Second, I learned that certificates generated by openssl on Precise do not work with mysql on Precise. To get around that issue, I generated my certs on an old 10.04 box and it worked fine. Prior to that, when trying to connect, I got this error:

ERROR 2026 (HY000): SSL connection error: SSL certificate validation failure

Finally, after two days of messing about, replication is once again running, with SSL enabled.
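For reference, generating the certs on the 10.04 box was roughly the standard self-signed routine below. This is a sketch with example subject names and lifetimes, not my exact commands:

```shell
# Create a self-signed CA, then a server key + cert signed by that CA.
openssl genrsa -out ca-key.pem 2048
openssl req -new -x509 -nodes -days 3650 -key ca-key.pem \
    -out ca-cert.pem -subj "/CN=mysql-ca"
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem \
    -out server-req.pem -subj "/CN=mysql-server"
openssl x509 -req -in server-req.pem -days 3650 -CA ca-cert.pem \
    -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
```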

Turning off an Ubuntu webcam

I was a little horrified to see the webcam light illuminated as I opened my laptop this morning. Who was watching, I wondered! It took me a while to figure out what was using the webcam, then I found a simple solution. The command lsof | grep video0 told me skype was the culprit. Restart skype, problem solved, webcam off. Happy morning. 🙂

Along the way, I found this, which suggested using modprobe -r to disable the kernel module. I tried modprobe -r uvcvideo but got “FATAL: Module uvcvideo is in use.” Then I figured out that lsof would tell me what was going on.

The power of data!

We want internet access to share in the house. Currently we’re all using my phone as a wifi hotspot. When I’m out, no internet. If we go over the 400Mb/day, slow internet. So last night we bought a Meditel 3G USB modem for 229 DH (~€23).

Got it home, popped the sim card into the X301, and the connection was terrible. Unusably slow. I was seeing regular >10s ping times. Tested again this morning, slightly better, but still awful. I kicked off a long-running ping, copied the results into gedit, stripped out the text, saved it as a CSV, opened it in LibreOffice and produced this.

It’s a graph showing ping times, on a logarithmic scale no less!
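If I ever do this again, the gedit-and-spreadsheet step could be collapsed into an awk one-liner. This is a sketch of the same idea, not what I actually ran:

```shell
# Pull the "time=NN ms" value out of each ping reply line, emitting
# sample-number,milliseconds rows that open straight into a spreadsheet.
ping_to_csv() {
  awk -F'time=' '/time=/ {n++; split($2, a, " "); print n "," a[1]}'
}

# Against a live connection: ping -c 100 some.host | ping_to_csv > pings.csv
```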

This morning I took the unit back to the shop and asked for my money back. The manager took the modem out to test it. I cut him off. As best I could in French, I explained that it works fine, but that the service in our apartment is terrible. Then I opened the laptop and presented the graph. BOOM BABY. Full refund, thank you very much. The power of data. Oh, and I did manage to pull ~700MiB overnight, a small recompense!

PHP 5.2 on Ubuntu 11.10 Oneiric

For the time being, we need PHP 5.2 on some of our servers. From 10.04 Lucid Lynx onwards, Ubuntu ships PHP 5.3. I found info on how to install PHP 5.2 on Lucid by using the packages from the previous version of Ubuntu, 9.10 Karmic Koala.

However, those packages are no longer online; Karmic is long past end of life. The only currently supported release that still has PHP 5.2 is 8.04 Hardy. I was able to get PHP 5.2.4 installed (eventually) using the packages from Hardy.

I created two files /etc/apt/sources.list.d/hardy.list and /etc/apt/preferences.d/pin-php52. Then the following met my needs:

sudo apt-get install apache2 apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common libapache2-mod-php5 php5-cli php5-common php5-curl php5-gd php5-mcrypt php5-mhash php5-mysql

I constructed the two files mostly from things I read about how to get packages from Karmic to install on Lucid. YMMV.
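For what it’s worth, here’s a rough reconstruction of those two files. Treat the mirror lines, the package named in the pin, and the priority as assumptions rather than my exact originals:

```shell
# Sketch of /etc/apt/sources.list.d/hardy.list
cat > /etc/apt/sources.list.d/hardy.list <<'EOF'
deb http://archive.ubuntu.com/ubuntu/ hardy main restricted universe multiverse
deb http://archive.ubuntu.com/ubuntu/ hardy-updates main restricted universe multiverse
deb http://security.ubuntu.com/ubuntu/ hardy-security main restricted universe multiverse
EOF

# Sketch of /etc/apt/preferences.d/pin-php52 -- one stanza like this
# per php/apache package family you want held at the Hardy version.
cat > /etc/apt/preferences.d/pin-php52 <<'EOF'
Explanation: prefer the Hardy builds of php5 over the newer 5.3 packages
Package: php5-common
Pin: release a=hardy
Pin-Priority: 991
EOF
```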

Skype in Ubuntu Precise alpha

Last night I installed Ubuntu Precise Pangolin 12.04. I was inspired by Mark Shuttleworth’s post and figured because this is an LTS release, it might be ok to upgrade this early, instead of going to 11.10 first.

My biggest hassle was getting skype working. I documented the steps in the hope that it might save somebody else some hassle. This worked for me today, 8 Jan 2012, it might get out of date fast, and should be totally obsolete soon.

First, I installed a bunch of packages from Oneiric which I don’t think are available in Precise yet.

They can be downloaded in one go with a single command that makes a new directory, “skype-downloads”, and downloads all the packages into it (I believe; I haven’t actually tested that, as I downloaded them one by one). From there, I ran:

sudo dpkg -i *.deb
sudo apt-get -f install
sudo apt-get upgrade
sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libasound2:i386 libxv1:i386 libsm6:i386 libxi6:i386 libxrender1:i386 libxrandr2:i386 libfreetype6:i386 libfontconfig1:i386 libqtgui4:i386

This installs all the downloaded files, upgrades / fixes some of them, then installs a whole load more i386 dependencies. Finally, after all that, I was able to install the skype .deb I downloaded from skype.com. After downloading the file, I’d suggest using dpkg to install it like this:

dpkg -i /path/to/downloaded/skype-ubuntu_2.2.0.35-1_amd64.deb

Then I was able to start skype. However, it wouldn’t show in the systray. To solve that, I used this command:

gsettings set com.canonical.Unity.Panel systray-whitelist "['Skype']"

This command sets the value found in dconf-editor at desktop > unity > panel, called systray-whitelist. This value sets which programs can appear in the old fashioned system tray (now that we’re onto bigger and better things with unity and indicators). There’s a bug which means setting “all” in this value doesn’t work. So you need to add each program you want, in single quotes, separated by commas. See this for more.

I don’t really understand what all of this does; I copied various bits and pieces from a few places and pieced it all together through trial and error. This forum post talks about installing from Oneiric, and this blog post listed the extra requirements.

I’m on skype video now, so this all worked! 🙂

WordPress patches

The upgrade mechanism in recent WordPress versions downloads an incremental upgrade zip file. I couldn’t find anything online about how to get the URLs, so I dug into the code and I’m posting what I found here.

The upgrade from 3.3.0 to 3.3.1 was in the file wordpress-3.3.1-partial-0.zip. I’ll try to remember to update this post as future versions come out. I’m not sure of the exact file format. I think it’s -partial-0/1/2 to mark how many minor versions back the upgrade goes.

Personally, I use these files for minor upgrades because it’s faster, but I use the full download file for a full version upgrade.

Update for 3.3.2: It seems that wordpress-3.3.2-partial-0.zip contains the changes from 3.3 to 3.3.2 and wordpress-3.3.2-partial-1.zip contains the changes from 3.3.1 to 3.3.2.

Processing incoming emails

I’ve been looking into how to programmatically process incoming emails. For example, to create an email address where somebody can send a CSV file and then have that data parsed and automatically inserted into a database.

There are some interesting tools in this space. The easiest, at least in principle, appears to be Email Yak. They expose a JSON API which will trigger a POST or GET request for incoming messages, and can send messages likewise. However, upon signup I got a 500 error, and likewise after logging in, so I can’t currently test the service. In principle though, it looks interesting: 500 emails a month on their domain for free, 1000 on any domain for $5/month, then it kicks up to $40 or $150 for 20k or 100k emails.

Another interesting tool is Context.IO. This is essentially a web friendly API in front of IMAP mailboxes. Their pricing model is also interesting, starting at $1.50 per mailbox per month, with a $15 minimum. Currently the service is read-only, but the option to move messages around is coming in api v2. They also have a free account which includes 3 mailboxes, and they charge 85c/GB for attachment transfer over 100MB.

This is really about extracting knowledge from email inboxes, and it focuses on Google’s mail hosting (gmail/google apps), but will work with others. There seems to be a strong focus on attachments and conversations. It could be a useful component in quickly building another service, but I’d guess I’d want to build my own version eventually.

Google App Engine provides a mechanism to handle incoming emails and pipe them to a script. Sounds very sensible, and it would probably be possible to build a mail routing system on top of this by having the python script send the mail onwards via an API call or POST request.

I also read a few articles about having postfix send mail to a script. This one is useful. This article talks about configuring custom reply-to addresses to know which emails bounce, something called VERP apparently.
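As a sketch of that postfix-to-script approach (the alias name and script path below are made up for illustration):

```shell
# Hypothetical /etc/aliases entry: pipe mail for csvimport@yourdomain
# straight into a processing script instead of a mailbox.
#
#   csvimport: "|/usr/local/bin/process-csv"
#
# After editing /etc/aliases, run newaliases as root so postfix
# rebuilds the alias database and picks up the change.
```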


It seems to me like Email Yak (I’ve had a reply from them about the 500 error while typing this blog post, impressed!) and Context.IO are useful pieces. Google’s mail API is also smart, and Amazon will probably add something similar to AWS before too long. If this were core to an application, I’d probably set up postfix to forward mail to a script eventually. But in the early days, I think Email Yak is probably the way to go.


In my roundup, I appear to have completely missed the best offering of all: mailgun. The free tier includes 200 messages a day on a shared IP; the pro tier is $19 minimum per month on a shared IP, $59 minimum on a dedicated IP. APIs to send and receive mail, mailboxes accessible via IMAP and POP3, and charges only for storage and message counts, not mailboxes. Plus open source helper libraries are available. Looks like the slickest of the lot.

Force SSH password on lftp

When connecting to an sftp server with lftp, it automatically tries public key authentication first. In my case, because I have so many keys, I usually get “too many authentication failures” before it gets around to trying the password. It took me a while to find the solution, but it turns out to be fairly simple.

Once lftp is running, simply issue this command:

set sftp:connect-program "ssh -a -x -o PubkeyAuthentication=false"

This causes ssh to be run with public key authentication disabled, so it tries the password immediately, and succeeds. Yay. 🙂 Posting here for future reference.
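To avoid re-typing it each session, the same line can go into lftp’s per-user rc file (assuming the default ~/.lftp location):

```shell
# Persist the setting so every lftp session disables pubkey auth for sftp.
mkdir -p ~/.lftp
echo 'set sftp:connect-program "ssh -a -x -o PubkeyAuthentication=false"' >> ~/.lftp/rc
```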

Building a plivo AMI

I’ve been experimenting with Amazon’s web services recently. I’ve also been playing with voice apps on both Twilio and Tropo. Then I found plivo. Happy days.

Plivo is an open source service that offers functionality comparable to those hosted services. The authors have also made it outrageously easy to install by packaging the whole thing into two easy-to-run scripts. There was no EC2 AMI, so I set out to create one. It turned out to be fairly straightforward, and all possible through the web console.

Choose a base AMI

The first step is to choose a base AMI. I used the Ubuntu 10.04 amd64 standard AMI in the eu-west-1 region (ami-4290a636). Then I logged in, ran the plivo install commands, waited, waited some more, waited a little longer, and all was done.

Now, to secure the AMI before publishing it, I removed the ssh keys, authorized_keys, and the bash history. This is not as simple as it sounds. I also logged in from a host that I knew would show up in the “last logged in from” section.

I logged in and ran the following commands:

sudo shred -u /etc/ssh/ssh_host_*
shred -u ~/.ssh/authorized_keys
shred -u ~/.bash_history

Now I went into the web console, selected the instance, chose the Instance Actions menu and selected Create Image (EBS AMI). Then under AMIs, I selected my new image, and changed the permissions to public.

Note that in order to take a snapshot, the instance pauses for a second. During that pause, I lose my SSH connection, and having just destroyed SSH on the machine, I cannot get back in. So I have to terminate (kill) the instance and boot it afresh from the new AMI. This creates new SSH host keys and puts my SSH key back into authorized_keys.


I’m sure there’s a more elegant (and potentially elaborate) way of doing this. But it worked for me. It was quick and painless. Now there’s a public plivo AMI in the eu-west-1 region. I’ll look into how I get it into other regions, and if I need to pay for the storage to have it publicly available.


The result is the new public ami-acd0e1d8 in the eu-west-1 region. If you choose to test the AMI, please let me know how you get on in the comments here.

Fixing NTFS on Ubuntu

James had a hard drive problem. He pulled the disk out of his laptop and brought it to me. First, I created a full image of the broken partition like so:

sudo dd if=/dev/sdb2 bs=1k conv=sync,noerror of=/path/to/image

Then I tried TestDisk. It worked like a charm and fixed the apparently broken NTFS boot sector. I thought that when James put the drive back in the laptop, it might “just work”, but apparently it didn’t. I had saved some of the most important files, but not all. James then wiped the drive to get a working machine again.

So now I had to restore files from an image of a broken partition. Turns out to be dead easy. The key ingredients were loopback and TestDisk.

sudo losetup /dev/loop0 /path/to/image
sudo testdisk /dev/loop0

It took me a while to figure out that I needed to choose partition table type none. I was dealing with an image of a single partition, so there was no partition table. After that, TestDisk behaved just like normal. I rebuilt the NTFS boot sector and then mounted the image like so:

sudo mkdir -p /mnt
sudo mount /dev/loop0 /mnt

This warned about the disk not having been shut down properly, ran something or other to clean it up, and then bingo, all the files were mounted and visible. I copied all the data from /mnt to an external drive, and will give that to James to restore from. Too easy!

My experience at CHS11

Friday night and yesterday I was at Culture Hack Scotland 2011 (#chs11). It was a 24 hour hackday. A hackday is a session where designers, developers and other hackers get together and create stuff. Typically there’s a theme, and this theme was culture with specific focus on the Edinburgh Festivals. The event was put on by the Festivals Lab.

In this post I’ll share my experience, talk about the future of what I started, and offer some suggestions for future events.

My experience

My interest at the beginning of the session was the people in the room. There seemed to be a lot of outward focus. People were building stuff for other people, for the public, for some sort of audience. I wanted to do something for the people physically present. I wanted to make some kind of contribution to the shared social experience of the event.

I started out thinking about photo and video. Taking portraits of participants maybe, or creating a video diary corner. After a couple of hours I hit on an idea. I wanted to do something with qrcodes and people. (A qrcode is a square barcode that many smartphones can scan, more on wikipedia.) Tagging physical people with qrcodes so they could be scanned by other participants in the event.

I had a vision of people complimenting each other, providing encouragement, and so on by scanning each other. So a person might be sitting working furiously and another participant walks past, scans them, and shares a positive message.

The execution seemed simple enough. I’d generate qrcodes that linked to twitter with the tweet message pre-filled, including the person’s twitter username. So I can scan Jill and immediately have a pre-written tweet saying something to @jill.
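As a sketch of the generation step: build a tweet link with the message pre-filled, then feed it to any qrcode tool (e.g. qrencode). The username, hashtag, and crude encoding here are illustrative, not my actual script:

```shell
# Build a pre-filled tweet link; the trailing space leaves room for the
# scanner to type their own words. Feed $url to a qrcode generator,
# e.g.: qrencode -o jill.png "$url"
user="jill"
text="@${user} #tagrrd "
# crude percent-encoding of just the characters used here
enc=$(printf '%s' "$text" | sed 's/@/%40/g; s/#/%23/g; s/ /%20/g')
url="https://twitter.com/intent/tweet?text=${enc}"
echo "$url"
```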

The whole thing was incredibly simple. I wanted to launch in the morning, but spent ages getting the qrcodes printed. Tom Inglis helped a great deal here; he physically purchased and printed the stickers! I arrived just before 9am fired up and ready to go, but it was well after 1pm before I had the stickers ready to hand out. That could have been a lot faster; it was an easy problem to solve.

The next step was to tag people, which went fairly well. Most people were receptive. In total I linked 82 qrcodes to twitter accounts, and I count 21 tweets with the #tagrrd hashtag. So there were roughly 4 qrcodes for every one actual tweet. Those 21 tweets were produced by 16 authors over about 9 hours. That’s about one scan every 30 minutes.

I had hoped for much higher participation, but I think my execution let me down in the morning. I think if I had gotten the tags out earlier in the day, I could have spent more time encouraging people to use them.

Future of tagrr

I can see some interesting potential in the concept. For example, I like the idea of creating a simple brand around a qrcode, surrounding it in a red box for example. Then potentially sticking qrcodes around the city, maybe along the lines of geocaching. I also think the same idea could be done at other events with the tags handed out as people arrive. People might use them more if they were part of the event experience from the beginning.

I’ll keep my eyes peeled for other events where I could try to test the idea further. If you’re reading this and hosting an event in Edinburgh, let’s discuss the possibilities.


At future hack days, I’d love to see more tech oriented communication during the event. The #chs11 hashtag allowed people to communicate around twitter, which was ok. I think there’s room for easy improvement here.

A screen dedicated to showing a specific hashtag for developers would have been good. Somewhere developers could post questions, receive answers, and so on. Maybe that happened around the #chs11 tag and I missed it all, but a screen in the room would be good I reckon. Another nice option is the WordPress p2 theme. It’s a sort of vaguely private mini-twitter. Can all be done for free on wordpress.com.

Personally, and this is totally personal feedback, I would provide less food more often. The food appeared to be quite expensive, which is a nice touch, but I reckon the experience could be improved slightly by having less expensive food always available. For example, a fridge with sandwiches and snacks in it. It would be cool to have them freshly delivered at breakfast and lunch, but ultimately that’s probably not essential. Likewise with drinks. Having a coffee machine in the room, always on, continuously filled, for the whole 24 hours would be awesome.

A halfway demo might work well: giving people the option, not the obligation, to present their project after breakfast, for example. Let the folks who worked overnight show off what they’ve done, and maybe bring new people into their work. Likewise, people could pitch tough problems they’ve hit and see if others in the room have solutions to offer.

Overall, the event was awesome. Personally, I had a great time. The highlight for me was the sense of cooperation between the participants. There was a great spirit of collaboration, people sharing problems, bouncing ideas between different teams, and so on. There was amazing talent in the room.

Audio CDs on Ubuntu on Lenovo X301

Quick geektastic post. Under Ubuntu 10.04 Lucid Lynx I can’t play audio CDs. When I put one into the drive, an error pops up every few seconds saying:

Unable to mount Audio Disc
DBus error org.freedesktop.DBus.Error.NoReply: Message did not receive a reply (timeout by message bus)

Eventually I stumbled upon this bug and found a solution. In Nautilus, go to Edit > Preferences > Media and tick “Never prompt or start programs on media insertion”. Bingo, now I can insert a CD and it will play. I don’t think it’ll work in Rhythmbox, because that’s so tightly integrated with Gnome, but I was able to play the CD in VLC, and presumably I’d be able to rip it in something equally unconnected to Gnome.

Glorious, now I can rip some of my 6-year-old CDs I just found. Happy days. 🙂

Here’s a random picture from flickr for the non techy readers to enjoy…

Missing com.google.android.maps

Thanks to this post, and a whole heap of other stuff, I finally sorted out a Google Maps problem on my Nexus One.

When trying to install some apps, I would see this error message in logcat:

requires unavailable shared library com.google.android.maps

I had the Google Maps app installed and working, but that didn’t fix the issue. It turns out I had to add two other files and restart the phone. I found those files in the google apps zip from Cyanogenmod (gapps-hdpi-20101114-signed.zip): com.google.android.maps.xml and com.google.android.maps.jar. It took a little fiddling, but I was able to use these instructions to remount /system in write mode. The first step was to copy those two files onto my sd card.


Then to load them onto the phone, I opened the terminal emulator and ran:

mount -o rw,remount -t yaffs2 /dev/block/mtdblock3 /system
cp /mnt/sdcard/com.google.android.maps.xml /system/etc/permissions/
cp /mnt/sdcard/com.google.android.maps.jar /system/framework/

Then after I rebooted the phone, I was able to install apps that depend on Google Maps. I can now check bus and train times, and do all kinds of other cool stuff with maps! 🙂

Note, this is only relevant if you do not want all Google Apps installed. I only have the map application installed as I don’t sync my phone with any Google services. If you’re using all the Google Apps, I suggest reinstalling as these steps should not be necessary.

To add a little colour, here’s an unrelated picture from flickr, courtesy of epSos.de.

Ripping MP3s from YouTube on Ubuntu

This is the first post in a new category called notes. Things I want to remember and don’t have anywhere else to write down.

Install Medibuntu, WinFF, Firefox + Greasemonkey + the “YouTube without Flash Auto” script, and probably libavcodec-extra-52 from Medibuntu and easytag. Download the video within Firefox and save it to disk. Open WinFF, open the video, choose Audio as the output format, set your options, and click convert. Then open EasyTAG if appropriate. Easy peasy. 🙂

My very own OpenID server

I just installed SimpleID, so now I have my very own OpenID server. I no longer need to subject myself to the pain of myopenid.com. After they consistently ignored all my requests to fix a major bug in their system, I went elsewhere. Happy to be running my own server, away from JanRain and their abysmal non-support.

Install was painless. It took me maybe 30 minutes because my internet connection was running so slowly. On a fast line, it would have been a 5 minute install. Very cool. I did make one change to SimpleID, as per this ticket, to make it a little more secure.

My next project is to install Prosody so I have my own jabber/XMPP server as well. 🙂

A WordPress hosting cooperative

Maybe you make WordPress sites for cash. Maybe you design themes or write plugins. Then, after your work is done, your clients (or friends, lovers, etc) need to be supported. Somebody needs to keep WordPress and her plugins up to date, secure, and backed up.

Would you like to share that load with some fellow co-operators in a WordPress hosting cooperative? Imagine a small group of developers collectively managing 50 or 100 WordPress sites instead of individually managing 10 or 20.


Ok, you’re sold on the vision, what about the details?

Initially, a loose association of a few individuals, no legal structure. I’m willing to act as the banker for the startup period. I’ll register a domain name and pay for a few servers. I promise to transfer ownership of the domain and any other assets when (or if) a legal organisation is created at any point in the future. Or, if I choose to move away, to transfer the domain and other assets to another person in the group.

My suggestion is that we adopt a split pricing model. We set a fair market price for customers. In the beginning, it’s probably simpler to charge per blog irrespective of traffic, disk or cpu usage. We can change this policy as soon as we need to.

Members then pay a pro-rated share of costs based on their number of sites. For example, say we have 10 customers paying $10 a month, so $100 of income, while expenses are $150 a month, leaving $50 to cover. If we have 5 members with 4 sites each, that’s $50 over 20 sites, so each member pays $2.50 per month per site.

To distinguish between customer and member sites, we can say if money changes hands, it’s a customer site. So a member might pay for 8 of their client’s sites at customer rates, and 3 for their family at member rates. The distinction is whether or not the member receives cash from somebody for that site. We trust each member to be honest.

Payment optional

It’s not as crazy as it sounds, honest! I suggest we adopt a post-paid, payment-optional policy. At the end of each month, we send invoices marked payment optional. Customers can choose not to pay, and their sites will be taken offline in a reasonable time period.

The advantage of this model is we don’t ever have to deal with refunds, price disputes or otherwise. If the client is happy with the service they already received, great, if they’re not, they don’t have to pay and we part ways amicably.


  • Transparency: All financials are publicly visible.
  • Profits: Until we have a legal organisation, any profits are kept in the group to pay for expenses. No payouts to members until the legal structure is sound.
  • Do-ocracy: Until we decide to change it, we each contribute what we can and what’s needed to keep the system online.
  • Respect: Inspired by the Ubuntu project, in joining the group, we each commit to treating other members and customers with the utmost respect at absolutely all times.

Next step

These are my initial thoughts, written down in half an hour. If you’d like to join the discussion, become a member or a customer, post a comment below, shoot me a message, or otherwise open the communication lines. 🙂

A new short url

I’ve just acquired two short domains. They are cal.io and chm.ac. The first is cool, the second is a short version of my standard username, chmac (and formed from my initials, how original!).

I’m thinking I want to move this site, my email, and my other services over to one of the two; I can’t decide which. I think I like cal.io better, but calio.com is taken and I’m already invested in chmac. It’s my username across most sites. I account for 12 of the top 20 results in a search for chmac. I own that space.

Calio is a surname, there’s all kinds of stuff going on there. It’s an exciting new space, but it might also mean walking away from chmac, which I’m already on top of.

What do you think? cal.io or chm.ac?

Update: In the 3 days since I wrote this, while I waited for the domains to be registered, I think I’m decided on cal.io as my new domain. But, I’m still interested to hear feedback.

Links in twitter feeds in Liferea

I use Liferea to consume feeds, and in turn, I consume twitter by RSS. However, twitter’s RSS feeds suck. URLs are not clickable, user names are not links, nothing. It’s flat text.

Using Liferea’s ability to locally parse feeds and a little inspiration, I hacked up a sed script to make my twitter feed all pretty. It works great for me, YMMV.

I published the script here, under the GPL. To use it, save the source into a file somewhere, make that file executable, then choose “Use conversion filter” in Liferea and select the file you just created. If you have problems, you could try leaving a comment here, I might be able to help.
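The published script essentially rewrites bare URLs and @usernames into HTML links. As a stripped-down illustration of the idea (not the published GPL script itself):

```shell
# Wrap bare URLs and @usernames from the feed text in HTML anchor tags.
linkify() {
  sed -E \
    -e 's|(https?://[^ <]+)|<a href="\1">\1</a>|g' \
    -e 's|@([A-Za-z0-9_]+)|<a href="http://twitter.com/\1">@\1</a>|g'
}

echo 'ping http://example.com from @bob' | linkify
```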