My AIS SIM card came with the PUK printed on it. My TrueMove SIM card did not, and I couldn’t track down the default PIN code anywhere online or in the manual. I tried guessing and it didn’t work; eventually I went to a True shop and after some time they told me the default PIN was 0000. Hopefully that saves somebody else some hassle. 🙂
I just wrote up an article explaining what an SMTP relay is. I’m also moving the WP Mail SMTP plugin to a new home; the process has begun. 🙂
I’ve had a nightmare the last couple of days getting MySQL replication set up over SSL. It turns out some things have changed since upgrading to Ubuntu 12.04 Precise. In the end, the solutions were simple.
First, the server-key.pem file needs to have RSA in the header. I manually edited the key and added the RSA part, like so, and it worked:
-----BEGIN RSA PRIVATE KEY-----
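Rather than hand-editing, I believe openssl can do the same conversion properly (untested on my setup; this assumes the key was generated in the newer PKCS#8 format, and it writes a traditional RSA-format key to a new file):
openssl rsa -in server-key.pem -out server-key-rsa.pem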
Second, I learned that certificates generated by OpenSSL on Precise do not work with MySQL on Precise. To get around that issue, I generated my certs on an old 10.04 box and it worked fine. Prior to that, when trying to connect, I got this error:
ERROR 2026 (HY000): SSL connection error: SSL certificate validation failure
Finally, after two days of messing about, replication is once again running, with SSL enabled.
I was a little horrified to see the webcam light illuminated as I opened my laptop this morning. Who was watching, I wondered! It took me a while to figure out what was using the webcam, then I found a simple solution. The command
lsof | grep video0
told me Skype was the culprit. Restart Skype, problem solved, webcam off. Happy morning. 🙂
On the way, I found this, which suggested using
modprobe -r
to disable the kernel module. I tried
modprobe -r uvcvideo
but got “FATAL: Module uvcvideo is in use.” Then I figured out that lsof would tell me what was going on.
We want shared internet access in the house. Currently we’re all using my phone as a wifi hotspot. When I’m out, no internet. If we go over the 400Mb/day, slow internet. So last night we bought a Meditel 3G USB modem for 229 DH (~ €23).
Got it home, popped the SIM card into the X301, and the connection was terrible. Unusably slow; I was seeing regular >10s ping times. Tested again this morning, slightly better, but still awful. I kicked off a long-running ping, copied the results into gedit, stripped out the text, saved it as a CSV, opened it in LibreOffice and produced this.
It’s a graph showing ping times, on a logarithmic scale no less!
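For anyone repeating this at home, the gedit step can probably be skipped; a one-liner along these lines (untested, and it assumes GNU grep) should write the ping times straight to a file ready for a spreadsheet:
ping -w 3600 8.8.8.8 | grep -oP 'time=\K[0-9.]+' > pings.csv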
This morning I took the unit back to the shop and asked for my money back. The manager took the modem out to test it. I cut him off. As best I could in French, I explained that it works fine, but that the service in our apartment is terrible. Then I opened the laptop and presented the graph. BOOM BABY. Full refund, thank you very much. The power of data. Oh, and I did manage to pull ~700MiB overnight, a small recompense!
For the time being, we need PHP 5.2 on some of our servers. From version 10.04 Lucid Lynx onwards, Ubuntu ships PHP 5.3. I found info on how to install PHP 5.2 on Lucid by using the packages from the previous version of Ubuntu, 9.10 Karmic Koala.
However, those packages are no longer online; Karmic is long past end of life. The only currently supported release that still has PHP 5.2 is 8.04 Hardy. I was able to (eventually) get PHP 5.2.4 installed using the packages from Hardy.
sudo apt-get install apache2 apache2-mpm-prefork apache2-utils apache2.2-bin apache2.2-common libapache2-mod-php5 php5-cli php5-common php5-curl php5-gd php5-mcrypt php5-mhash php5-mysql
I constructed the two files mostly from things I read about how to get packages from Karmic to install on Lucid. YMMV.
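For anyone reconstructing them, the two files were along these lines; treat this as a sketch, since the archive URL and pin priority may need adjusting:
# /etc/apt/sources.list.d/hardy.list
deb http://archive.ubuntu.com/ubuntu hardy main restricted universe
# /etc/apt/preferences
Package: *
Pin: release a=hardy
Pin-Priority: 600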
Last night I installed Ubuntu Precise Pangolin 12.04. I was inspired by Mark Shuttleworth’s post and figured that because this is an LTS release, it might be OK to upgrade this early, instead of going to 11.10 first.
My biggest hassle was getting Skype working. I documented the steps in the hope that it might save somebody else some hassle. This worked for me today, 8 Jan 2012; it might get out of date fast, and should be totally obsolete soon.
First, I installed a bunch of packages from oneiric; I don’t think these are available in precise yet.
They can be downloaded in one go with this command (I believe; I haven’t actually tested it, I downloaded them one by one). That command makes a new directory “skype-downloads”, then downloads all the packages into it. From there, I ran:
sudo dpkg -i *.deb
sudo apt-get -f install
sudo apt-get upgrade
sudo apt-get install libxss1:i386 libqtcore4:i386 libqt4-dbus:i386 libasound2:i386 libxv1:i386 libsm6:i386 libxi6:i386 libxrender1:i386 libxrandr2:i386 libfreetype6:i386 libfontconfig1:i386 libqtgui4:i386
This installs all the downloaded files, upgrades/fixes some of them, then installs a whole load more i386 dependencies. Finally, after all that, I was able to install the Skype .deb I downloaded from skype.com. After downloading the file, I’d suggest using dpkg to install it like this:
sudo dpkg -i /path/to/downloaded/skype-ubuntu_2.2.0.35-1_amd64.deb
Then I was able to start Skype. However, it wouldn’t show in the systray. To solve that, I used this command:
gsettings set com.canonical.Unity.Panel systray-whitelist "['Skype']"
This command sets the value found in dconf-editor at desktop > unity > panel, called systray-whitelist. This value sets which programs can appear in the old-fashioned system tray (now that we’re onto bigger and better things with Unity and indicators). There’s a bug which means setting “all” in this value doesn’t work, so you need to add each program you want, in single quotes, separated by commas. See this for more.
I don’t really understand what all of this does; I copied various bits and pieces from a few places and pieced it all together through trial and error. This forum post talks about installing from oneiric, and this blog post listed the extra requirements.
I’m on Skype video now, so this all worked! 🙂
The upgrade mechanism in recent WordPress versions downloads an incremental upgrade zip file. I couldn’t find anything online listing the URLs, so I dug into the code, and I’m posting what I found here.
The upgrade from 3.3.0 to 3.3.1 was in the file wordpress-3.3.1-partial-0.zip. I’ll try to remember to update this post as future versions come out. I’m not sure of the exact naming scheme; I think it’s -partial-0/1/2 to mark how many minor versions back the upgrade goes.
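If the naming scheme holds, the download URLs presumably follow this pattern (an assumption based on the 3.3.1 file, not verified against other versions):
http://wordpress.org/wordpress-3.3.1-partial-0.zip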
Personally, I use these files for minor upgrades because it’s faster, but I use the full download file for a full version upgrade.
I’ve been looking into how to programmatically process incoming emails. For example, to create an email address where somebody can send a CSV file and then have that data parsed and automatically inserted into a database.
There are some interesting tools in this space. The easiest, at least in principle, appears to be Email Yak. They expose a JSON API which will trigger a POST or GET request for incoming messages, and can send messages likewise. However, upon signup I got a 500 error, and likewise after logging in, so I can’t currently test the service. In principle though, it looks interesting: 500 emails a month on their domain for free, 1,000 on any domain for $5/month. Then it kicks up to $40 or $150 for 20k or 100k emails.
Another interesting tool is Context.IO, which is essentially a web-friendly API in front of IMAP mailboxes. Their pricing model is also interesting, starting at $1.50 per mailbox per month, with a $15 minimum. Currently the service is read-only, but the option to move messages around is coming in API v2. They also have a free account which includes 3 mailboxes, and they charge 85c/GB for attachment transfer over 100MB.
This is really about extracting knowledge from email inboxes and focuses on Google’s mail hosting (Gmail/Google Apps), but will work with others. There seems to be a strong focus on attachments and conversations. It could be a useful component for quickly building another service, but I’d guess I’d want to build my own version eventually.
Google App Engine provides a mechanism to handle incoming emails and pipe them to a script. Sounds very sensible, and it would probably be possible to build a mail routing system on top of this by having the Python script send the mail onwards via an API call or POST request.
I also read a few articles about having Postfix send mail to a script. This one is useful. This article talks about configuring custom reply-to addresses to know which emails bounce, something called VERP apparently.
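For reference, the basic Postfix-to-script plumbing is just an alias that pipes mail to a command. A minimal sketch (the address and script path are made up):
# /etc/aliases
csv-import: "|/usr/local/bin/process-csv.sh"
Then run newaliases to rebuild the alias database.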
It seems to me like Email Yak (I’ve had a reply from them about the 500 error while typing this blog post, impressed!) and Context.IO are useful pieces. Google’s mail API is also smart, and Amazon will probably add something similar to AWS before too long. If this were core to an application, I’d probably set up Postfix to forward mail to a script eventually. But in the early days, I think Email Yak is probably the way to go.
In my roundup, I appear to have completely missed the best offering of all: Mailgun. The free tier includes 200 messages a day on a shared IP; the pro tier is $19 minimum per month on a shared IP, $59 minimum on a dedicated IP. APIs to send and receive mail, mailboxes accessible via IMAP and POP3, and charges based only on storage and message counts, not mailboxes. Plus open source helper libraries are available. Looks like the slickest of the lot.
When trying to connect to an SFTP server, lftp automatically tries public key authentication first. In my case, because I have so many keys, I usually get “too many authentication failures” before it gets around to trying the password. It took me a while to find the solution, but it turns out to be fairly simple.
Once lftp is running, simply issue this command:
set sftp:connect-program "ssh -a -x -o PubkeyAuthentication=false"
This causes ssh to be run with public key authentication disabled, so it tries the password immediately, and succeeds. Yay. 🙂 Posting here for future reference.
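To make this the default for every session, I believe the same line can go in ~/.lftprc (untested):
set sftp:connect-program "ssh -a -x -o PubkeyAuthentication=false"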
Plivo is an open source service that offers functionality comparable to the hosted telephony services. The authors have also made it outrageously easy to install by packaging the whole thing into two easy-to-run scripts. There was no EC2 AMI, so I set out to create one. It turns out to be fairly straightforward, and all possible through the web console.
Choose a base AMI
The first step is to choose a base AMI. I used the Ubuntu 10.04 amd64 standard AMI in the eu-west-1 region (ami-4290a636). Then I logged in, ran the plivo install commands, waited, waited some more, waited a little longer, and all was done.
Now, to secure the AMI before publishing it, I removed the SSH host keys, the authorized_keys file, and the bash history. This is not as simple as it sounds. I also logged in from a host that I knew would show up in the “last logged in from” section.
I logged in and ran the following commands:
sudo shred -u /etc/ssh/ssh_host_*
shred -u ~/.ssh/authorized_keys
shred -u ~/.bash_history
Now I went into the web console, selected the instance, chose the Instance Actions menu and selected Create Image (EBS AMI). Then under AMIs, I selected my new image, and changed the permissions to public.
Note that in order to take a snapshot, the instance pauses for a second. During that pause, I lose my SSH connection, and having just destroyed SSH on the machine, I cannot get back in. So I have to terminate (kill) the instance and boot it afresh from the new AMI. This creates new SSH host keys and puts my SSH key back into authorized_keys.
I’m sure there’s a more elegant (and potentially elaborate) way of doing this. But it worked for me. It was quick and painless. Now there’s a public plivo AMI in the eu-west-1 region. I’ll look into how I get it into other regions, and if I need to pay for the storage to have it publicly available.
The result is the new public ami-acd0e1d8 in the eu-west-1 region. If you choose to test the AMI, please let me know how you get on in the comments here.
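If you prefer the command line to the web console, launching an instance should look something like this with the ec2-api-tools (a sketch; the key pair name is a placeholder):
ec2-run-instances ami-acd0e1d8 --region eu-west-1 -t m1.small -k your-keypair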
James had a hard drive problem. He pulled the disk out of his laptop and brought it to me. First, I created a full image of the broken partition like so:
sudo dd if=/dev/sdb2 bs=1k conv=sync,noerror of=/path/to/image
Then I tried TestDisk. It worked like a charm and fixed the apparently broken NTFS boot sector. I thought that when James put the drive back in the laptop, it might “just work”, but apparently it didn’t. I had saved some of the most important files, but not all. James then wiped the drive to get a working machine again.
So now I had to restore files from an image of a broken partition. Turns out to be dead easy. The key ingredients were loopback and TestDisk.
sudo losetup /dev/loop0 /path/to/image
sudo testdisk /dev/loop0
It took me a while to figure out that I needed to choose partition table type None. I was dealing with an image of a single partition, so there was no partition table. After that, TestDisk behaved just like normal. I rebuilt the NTFS boot sector and then mounted the image like so:
sudo mkdir /mnt
sudo mount /dev/loop0 /mnt
This warned about the disk not having been shut down properly, ran something or other to clean it up, and then bingo, all the files were mounted and visible. I copied all the data from /mnt to an external drive, and will give that to James to restore from. Too easy!
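One thought in hindsight: for recovery work it’s probably safer to mount the image read-only, so nothing can accidentally write to it:
sudo mount -o ro /dev/loop0 /mnt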
Friday night and yesterday I was at Culture Hack Scotland 2011 (#chs11), a 24 hour hackday. A hackday is a session where designers, developers and other hackers get together and create stuff. Typically there’s a theme; this time it was culture, with a specific focus on the Edinburgh Festivals. The event was put on by the Festivals Lab.
My interest at the beginning of the session was the people in the room. There seemed to be a lot of outward focus. People were building stuff for other people, for the public, for some sort of audience. I wanted to do something for the people physically present. I wanted to make some kind of contribution to the shared social experience of the event.
I started out thinking about photo and video. Taking portraits of participants maybe, or creating a video diary corner. After a couple of hours I hit on an idea. I wanted to do something with qrcodes and people. (A qrcode is a square barcode that many smartphones can scan, more on wikipedia.) Tagging physical people with qrcodes so they could be scanned by other participants in the event.
I had a vision of people complimenting each other, providing encouragement, and so on by scanning each other. So a person might be sitting working furiously and another participant walks past, scans them, and shares a positive message.
The execution seemed simple enough. I’d generate qrcodes that linked to twitter with the tweet message pre-filled, including the person’s twitter username. So I could scan Jill and immediately have a pre-written tweet mentioning @jill.
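The URLs I encoded were along these lines, using twitter’s tweet web intent (I won’t swear to the exact parameters), and qrencode turns one into an image:
https://twitter.com/intent/tweet?text=Keep%20going%20@jill%20%23tagrrd
qrencode -o jill.png 'https://twitter.com/intent/tweet?text=Keep%20going%20@jill%20%23tagrrd'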
The whole thing was incredibly simple. I wanted to launch in the morning, but spent ages getting the qrcodes printed. Tom Inglis helped a great deal here; he physically purchased and printed the stickers! I arrived just before 9am fired up and ready to go, but it was well after 1pm before the stickers were ready. That could have been a lot faster; it was an easy problem to solve.
The next step was to tag people, which went fairly well. Most people were receptive. In total I linked 82 qrcodes to twitter accounts, and I count 21 tweets with the #tagrrd hashtag. So there were 4 qrcodes for every one actual tweet. Those 21 tweets were produced by 16 authors over about 9 hours. That’s about one scan every 30 minutes.
I had hoped for much higher participation, but I think my execution let me down in the morning. I think if I had gotten the tags out earlier in the day, I could have spent more time encouraging people to use them.
I can see some interesting potential in the concept. For example, I like the idea of creating a simple brand around a qrcode, surrounding it in a red box for example. Then potentially sticking qrcodes around the city, maybe along the lines of geocaching. I also think the same idea could be done at other events with the tags handed out as people arrive. People might use them more if they were part of the event experience from the beginning.
I’ll keep my eyes peeled for other events where I could try to test the idea further. If you’re reading this and hosting an event in Edinburgh, let’s discuss the possibilities.
At future hack days, I’d love to see more tech-oriented communication during the event. The #chs11 hashtag allowed people to communicate on twitter, which was OK. I think there’s room for easy improvement here.
A screen dedicated to showing a specific hashtag for developers would have been good: somewhere developers could post questions, receive answers, and so on. Maybe that happened around the #chs11 tag and I missed it all, but a screen in the room would be good, I reckon. Another nice option is the WordPress P2 theme. It’s a sort of vaguely private mini-twitter, and it can all be done for free on wordpress.com.
Personally, and this is totally personal feedback, I would provide less food more often. The food appeared to be quite expensive, which is a nice gesture, but I reckon the experience could be improved slightly by having less expensive food always available. For example, a fridge with sandwiches and snacks in it. It would be cool to have them freshly delivered at breakfast and lunch, but ultimately, probably not essential. Likewise with drinks: having a coffee machine in the room, always on, continuously filled, for the whole 24 hours would be awesome.
A halfway demo might work well: giving people the option, not the obligation, to present their project after breakfast, for example. Let the folks who worked overnight show off what they’ve done, and maybe bring new people into their work. Likewise, people could pitch tough problems they’ve hit and see if others in the room have solutions to offer.
Overall, the event was awesome. Personally, I had a great time. The highlight for me was the sense of cooperation between the participants. There was a great spirit of collaboration, people sharing problems, bouncing ideas between different teams, and so on. There was amazing talent in the room.
Quick geektastic post. Under Ubuntu 10.04 Lucid Lynx I can’t play audio CDs. When I put one in the drive, an error pops up every few seconds saying:
Unable to mount Audio Disc
DBus error org.freedesktop.DBus.Error.NoReply: Message did not receive a reply (timeout by message bus)
Eventually I stumbled upon this bug and found a solution. I opened Nautilus, then Edit > Preferences > Media > Never prompt or start programs on media insertion. Bingo, now I can insert a CD and it will play. I don’t think it’ll work in Rhythmbox because that’s so tightly integrated with GNOME, but I was able to play the CD in VLC, and presumably I’d be able to rip it in something equally unconnected to GNOME.
Glorious, now I can rip some of my 6 year old CDs I just found. Happy days. 🙂
Here’s a random picture from flickr for the non techy readers to enjoy…
Thanks to this post, and a whole heap of other stuff, I finally sorted out a Google Maps problem on my Nexus One.
When trying to install some apps, I would see this error message in logcat:
requires unavailable shared library com.google.android.maps
I had the Google Maps app installed and working, but that didn’t fix the issue. It turns out I had to add two other files and restart the phone. I found those files in the Google apps zip from CyanogenMod. It took a little fiddling, but I was able to use these instructions to remount /system in write mode. The first step was to take the two files from the Google apps zip (gapps-hdpi-20101114-signed.zip) and put them onto my SD card.
Then to load them onto the phone, I opened the terminal emulator and ran:
mount -o rw,remount -t yaffs2 /dev/block/mtdblock3 /system
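The copy commands themselves were something like this (the usual locations for the Maps framework files; double-check the paths against your gapps zip):
cp /sdcard/com.google.android.maps.xml /system/etc/permissions/
cp /sdcard/com.google.android.maps.jar /system/framework/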
Then after I rebooted the phone, I was able to install apps that depend on Google Maps. I can now check bus and train times, and do all kinds of other cool stuff with maps! 🙂
Note: this is only relevant if you do not want all the Google Apps installed. I only have the Maps application installed, as I don’t sync my phone with any Google services. If you’re using all the Google Apps, I suggest reinstalling them, as these steps should not be necessary.
To add a little colour, here’s an unrelated picture from flickr, courtesy of epSos.de.
This is the first post in a new category called notes. Things I want to remember and don’t have anywhere else to write down.
Install Medibuntu, WinFF, Firefox + Greasemonkey + “YouTube without Flash Auto”, and probably libavcodec-extra-52 from Medibuntu, plus EasyTAG. Download the video within Firefox and save it to disk. Open WinFF, open the video, choose Audio as the output format, set your options, and click Convert. Then open EasyTAG if appropriate. Easy peasy. 🙂
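WinFF is just a front-end for ffmpeg, so the equivalent command line is roughly this (a sketch; pick codec and bitrate to taste):
ffmpeg -i input.flv -vn -acodec libmp3lame -ab 192k output.mp3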
I just installed SimpleID, and now I have my very own OpenID server. I no longer need to subject myself to the pain of myopenid.com. After they consistently ignored all my requests to fix a major bug in their system, I’ve gone elsewhere. Happy to be running my own server, away from JanRain and their abysmal non-support.
Install was painless. It took me maybe 30 minutes because my internet connection was running so slowly. On a fast line, it would have been a 5 minute install. Very cool. I did make one change to SimpleID, as per this ticket, to make it a little more secure.
My next project is to install Prosody so I have my own jabber/XMPP server as well. 🙂
Maybe you make WordPress sites for cash. Maybe you design themes or write plugins. Then, after your work is done, your clients (or friends, lovers, etc) need to be supported. Somebody needs to keep WordPress and her plugins up to date, secure, and backed up.
Would you like to share that load with some co-operators in a WordPress hosting cooperative? Imagine a small group of developers collectively managing 50 or 100 WordPress sites instead of individually managing 10 or 20.
Ok, you’re sold on the vision, what about the details?
Initially, a loose association of a few individuals, no legal structure. I’m willing to act as the banker for the startup period. I’ll register a domain name and pay for a few servers. I promise to transfer ownership of the domain and any other assets when (or if) a legal organisation is created at any point in the future. Or, if I choose to move away, to transfer the domain and other assets to another person in the group.
My suggestion is that we adopt a split pricing model. We set a fair market price for customers. In the beginning, it’s probably simpler to charge per blog irrespective of traffic, disk or CPU usage. We can change this policy as soon as we need to.
Members then pay a pro-rated share of costs based on their number of sites. For example: we have 10 customers paying $10 a month, so $100 of income. Expenses are $150 a month, and we have 5 members with 4 sites each, so the $50 shortfall spread over 20 sites means each member pays $2.50 per month per site.
To distinguish between customer and member sites, we can say that if money changes hands, it’s a customer site. So a member might pay for 8 of their clients’ sites at customer rates, and 3 for their family at member rates. The distinction is whether or not the member receives cash from somebody for that site. We trust each member to be honest.
It’s not as crazy as it sounds, honest! I suggest we adopt a post-paid, payment-optional policy. At the end of each month, we send invoices marked payment optional. Customers can choose not to pay, and their sites will be taken offline in a reasonable time period.
The advantage of this model is that we never have to deal with refunds, price disputes or otherwise. If the client is happy with the service they already received, great; if they’re not, they don’t have to pay and we part ways amicably.
- Transparency: All financials are publicly visible.
- Profits: Until we have a legal organisation, any profits are kept in the group to pay for expenses. No payouts to members until the legal structure is sound.
- Do-ocracy: Until we decide to change it, we each contribute what we can and what’s needed to keep the system online.
- Respect: Inspired by the Ubuntu project, in joining the group, we each commit to treat other members and customers with the utmost of respect at absolutely all times.
These are my initial thoughts as I wrote this post in half an hour. If you’d like to join the discussion, become a member or a customer, post a comment below, shoot me a message, or otherwise open the communication lines. 🙂
I’ve just acquired two short domains: cal.io and chm.ac. The first is cool; the second is a short version of my standard username, chmac (which is formed from my initials, how original!).
I’m thinking I want to move this site, my email, and my other services over to one of the two. I can’t decide which. I think I like cal.io better, but calio.com is taken and I’m already invested in chmac. It’s my username across most sites. I’m 12 of the top 20 results in a search for chmac. I own that space.
Calio is a surname, there’s all kinds of stuff going on there. It’s an exciting new space, but it might also mean walking away from chmac, which I’m already on top of.
What do you think? cal.io or chm.ac?
Update: In the 3 days since I wrote this, while I waited for the domains to be registered, I think I’m decided on cal.io as my new domain. But, I’m still interested to hear feedback.
Using Liferea’s ability to locally parse feeds and a little inspiration, I hacked up a sed script to make my twitter feed all pretty. It works great for me, YMMV.
I published the script here, under the GPL. To use it, save the source into a file somewhere, make that file executable, then choose “Use conversion filter” in Liferea and select the file you just created. If you have problems, you could try leaving a comment here, I might be able to help.
When installing the latest batch of updates to Ubuntu 10.04, I hit a problem: I ran out of space on my /boot partition. A dialog popped up warning of low space on /boot, and then the install of updates failed because the new kernel install couldn’t be completed.
The solution was remarkably simple; I post it here in the hope it might help others. First, I removed the oldest kernel I had installed. I opened Synaptic (System > Administration) and searched for my current kernel version (2.6.32). I saw I had 4 kernels installed. I then searched for 2.6.32-21, the oldest kernel, and marked these packages for complete removal:
Then I removed those, and to finish, I marked the same packages for re-installation, but in the -24 version (the latest kernel, which had failed to install). Now all is happy once again. 🙂
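If you prefer the command line to Synaptic, the equivalent is probably something like this (package names assumed from the version numbers above):
sudo apt-get remove --purge linux-image-2.6.32-21-generic linux-headers-2.6.32-21 linux-headers-2.6.32-21-generic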
I’ve just upgraded to WordPress 3.0 and switched to the brand new default theme called twentyten. If you’re reading this in your feed reader, come by and check out the new look.
I’ll update my picture (people seem shocked when they see it after meeting me in person!), and modify the menu using the new menu editor. I’ll try to make navigating a little easier. If you haven’t already tried it, I recommend the new version of WordPress.
Sometimes Nautilus will try to generate a thumbnail for a video file while it’s still downloading. Nautilus then remembers that it tried, and failed, to generate a thumbnail for that file, so once the file has finished downloading, the thumbnail remains broken. I’d had this issue for a while; today I chose to find a solution.
Download this file and put it into your ~/.gnome2/nautilus-scripts directory. The script is by Barak; I uploaded a plain text version here to make it easier to download. Make the script executable; you can run
chmod +x ~/.gnome2/nautilus-scripts/delete_thumbnails.py
in a terminal to do this. Now go to that directory in Nautilus, and you’re in business.
To test, right-click on a file with a thumbnail. You should see a new menu, Scripts, under which you’ll see “delete_thumbnails.py”. Click that option and the thumbnail will be deleted. Press F5 to reload the folder in Nautilus, and you should see a new thumbnail generated.
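If you’d rather skip the script entirely, I believe the failed-thumbnail records live under ~/.thumbnails, so removing the fail directory should have the same effect (untested):
rm -r ~/.thumbnails/fail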
Thanks for such a handy script, Barak.
For a few months now I’ve been researching programs to write in. I have OpenOffice, I’ve tried AbiWord, and I use gedit for text files. They’re all good programs, but they’re not what I want to write in. I want something ultra simple: very basic formatting, spellcheck, and a light, quick load time. The best option I found was Tomboy, a sticky note application. It supports very simple formatting, has a spellcheck, and is dead simple. It loads almost instantly. But it saves notes automatically in its own format; there’s no way for me to save different versions, choose a file name or location, etc. It’s fine for the writing, but I need to go elsewhere to save.
In the last couple of days, I discovered gwrite. It’s a very simple WYSIWYG HTML editor. It has the potential to be exactly what I want, but it’s very young software and still has a few usability bugs. I’ve reported them to the program’s author, so maybe it’ll improve in time. I might even look at the source code and see if I can provide some patches myself.
But that’s not the point of this post. This post is about LyX, which is a seriously cool application I’ve just discovered. It’s a “writing tool”, not a word processor: a tool designed for scientific and other authors to simply write text. It’s based on an underlying technology called LaTeX. As I understand it, and I’m completely new to this whole thing, LaTeX allows an author to just write. The layout of chapters, titles, indentations, bullet points, and all that jazz is handled by LaTeX macros. What does that mean? Well, I think it means I just write, then LyX, LaTeX and TeX make it look beautiful.
So, all excited, I decided to install LyX. This is where I hit a problem: I was prompted to download 438MB of data and use 745MB of disk space. That’s outrageously huge for a single program. I was blown away; it makes installing LyX many times larger than OpenOffice. I was intrigued by what took up so much space, so I had a little sniff. It turns out that more than 70% of the download size and almost 60% of the disk space is used by documentation: mostly documentation for underlying packages which I didn’t specifically choose to install, but which are required to make LyX work.
Being on a slow internet connection, I decided that waiting a day or two for 438MB to download was just too much. There must be another way. A little research later, I found my solution in a program called equivs. Equivs is a pair of tools to create shadow or dummy debs. In my case, this meant creating a dummy package to make apt think I had already installed the massive collection of documentation needed to install LyX. Thus I was able to install LyX by downloading only 117MB of data and using only 302MB of disk space. Still astronomically huge, but less than half of what I was originally facing.
And so, onto the point of this post: how does one do that? If you want the simple answer, here it is. Step 1, install this file. Step 2, install LyX as normal. Bingo, jobsagoodun. 🙂
For those who are interested, I’ll explain the process on Ubuntu 10.04. Install equivs in the usual way:
sudo apt-get install equivs
Now create a new directory; I called it equivs-texlive-dummy-docs. In that directory, run:
equivs-control texlive-dummy-docs.ctl
Now edit the newly created file. Mine looked like this. Next run:
equivs-build texlive-dummy-docs.ctl
This command creates a new file called texlive-dummy-docs_1.0_all.deb. That file can be installed with:
sudo dpkg -i texlive-dummy-docs_1.0_all.deb
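For reference, the interesting part of that control file is the Provides: line, which names the documentation packages to fake. A sketch (the exact list depends on what LyX tries to pull in on your system):
Section: misc
Priority: optional
Standards-Version: 3.6.2
Package: texlive-dummy-docs
Version: 1.0
Provides: texlive-latex-base-doc, texlive-fonts-recommended-doc
Description: dummy package satisfying texlive documentation dependencies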
It took me a few hours to put all this together. Hopefully if you’re facing the same challenge, you can install one file and be done. 🙂
Update: I discovered that all these packages are installed because apt is configured to install recommended packages by default. I tried installing LyX without any of the recommended packages using
sudo apt-get install --no-install-recommends lyx
but previewing documents from LyX didn’t work, so I reverted to my equivs texlive-dummy-docs package. If you feel passionately about this topic, as I do, please chime in on this bug.
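For the adventurous, I believe the default can also be flipped system-wide by putting this line in a file under /etc/apt/apt.conf.d/ (it affects every future install, so use with care):
APT::Install-Recommends "false";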