Happy 25th birthday, Perl!

Software

I use half a dozen languages for work and fun, but when I need to quickly get stuff done, Perl still does the job. I couldn't imagine my life without it.

#!/usr/bin/env perl
use strict;
use warnings;
birthday("25th");
sub birthday {
    print("Happy $_[0] birthday!\n");
}

A huge thank you to Peter Whiting who showed me Perl when I was 19, and to Larry Wall, Tom Christiansen and Jon Orwant for the language and the Camel Book :).


Last chance to vote on @Siracusa’s shirt design

Media

If you're as much a fan of the soon-to-be-ending Hypercritical podcast as I am, put your vote in for the shirt design you'll end up buying. Or shirts, as the case may be.

In an entirely unrelated question, how much does it cost to frame a shirt? Asking for a friend.

UPDATE: The poll is now closed.


Soon we can download an archive of our tweets

Internet

Morning coffee, free wifi, Twitter

This is huge news. Monstrously huge news. Eliza Kern at GigaOm:

Several Twitter users began noticing Sunday that they now have the ability to download their full archive of tweets, a capability that users have asked for since Twitter’s early days, but which looks like is rolling out to users now.

Photo by me in 2007, at the Coffee Bean and Tea Leaf in Tanglin Mall. Cue nostalgia!

Only took them half a decade

Back when we all started using Twitter, we just assumed our archive of tweets would be accessible, just as our blog posts are. After we hit the invisible ~3000 tweet ceiling, we realised some of them weren't easily accessible unless we had the original URL. As far as I know, they're not accessible in the API for third party clients, either. If you use Twitter as an IM client that's probably acceptable, but for those of us who tried using it as a microblog, it was a horribly limiting factor.

Prior to their recent policy changes for third party clients, I kept a backup of all my tweets in a pseudo-hidden category here. Still, there are a lot of gaps, and I have almost none of my tweets from 2007 to 2009. I'd been considering going through various caches in home folder backups to see if I could reconstruct a timeline that way, but given the sheer number of clients I've used over the years (Twitterrific, Snitter, Twhirl, Tweetie, TweetDeck, Tweetbot, Polly… and that's just on the desktop) I knew I had my work cut out for me.

It’ll be like downloading six years of my life

ASAP, I'll be downloading my own archive. First thing I'll do: import it right here into the same pseudo-hidden category. Second thing: figure out how to back up tweets more reliably going forward.

I wonder what format it'll be in. JSON? CSV? An SQLite database? Hey, that'd be schweet.


ISDN faster than ADSL!

Internet

As an early Christmas present, our Optus home ADSL connection has decided to throttle us for no good reason. After cruising at a paltry 3012kbps for a few weeks, we're now hovering between 100 and 320kbps. Rebooting the modem does nothing other than make us wait one to two hours for it to re-establish a connection.

I looked up some old bills, and confirmed we got faster speeds from our SingTel ISDN connection in 1997, when I was 11. Hollow jokes aside, methinks I'll be on hold with Optus for much of tomorrow to establish what's going on.

Maybe this is just a ploy to make their Optus 3G network seem better around Earlwood. Sneaky.


I didn’t pay @CaramelJune for this one

Internet

I'm in good company, and am flattered!

#FunFact, I only started twitter for updates on @Rubenerd’s blog, Raemz’s art and Dannychoo site updates.

You can follow @Rubenerd_Blog too if you'd like to be updated of posts.


My first Perl CGI script website nostalgia thing!

Internet

Happiness is a stack of new interesting computer books!

If you've been on the net for a while, chances are your development history roughly follows mine! I've had people asking about my Perl script that I've lovingly referred to here over the years, so here's something :).

Ah the innocence of youth…

Before we go any further, yes that was my bedroom back in Singapore. I still have that lava lamp, that MacBook Pro and my beloved O'Reilly books :)

After graduating from GeoCities and the like to my own web server, I wrote a fairly simple Perl CGI script to run my site before finally moving over to dedicated CMSs including RapidWeaver, MediaWiki, WordPress and TextPattern. Since then, I've dabbled in Ruby/eRuby/eRubis, Ruby on Rails, Django and PHP, though admittedly my biggest strengths still lie in the back end with running and building VPSs or uni servers with FreeBSD, CentOS, Solaris, Apache, lighttpd, *SQL and the like.

While not the most high-performing or scalable of systems, my Perl CGI script was a lot of fun! In a nutshell, it worked like this:

  1. Each of my blog posts was stored as a text file, with the filename corresponding to its stub, such as “shimapan.txt”. The files themselves were simply formatted as post name, date, category and content, with newlines delineating each. No fancy Markdown, just HTML sans paragraph tags!

  2. When a page was requested, the script would attempt to open its corresponding file. If this operation failed, a 404.txt file was returned in its place.

  3. The script would pull the text, wrap each paragraph in <p> tags, open a rudimentary “template” text file and insert it between two HTML comments.

  4. For example, when using the teal Hatsune Miku theme, going to //rubenerd.com/?get=shimapan would put shimapan.txt into the mikutheme.txt file, then display a result any otaku would be proud of.
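The four steps above could be sketched roughly like this; the helper names, the template markers and the simplified handling of the metadata lines are my guesses, not the original code:

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Wrap each blank-line-separated paragraph in <p> tags (step 3).
sub wrap_paragraphs {
    my ($text) = @_;
    return join "\n", map { "<p>$_</p>" } grep { /\S/ } split /\n\n+/, $text;
}

# Splice the rendered body into the theme between two HTML
# comments (steps 3-4). The marker names are illustrative.
sub apply_theme {
    my ($theme, $body) = @_;
    $theme =~ s/<!-- content -->.*?<!-- \/content -->/$body/s;
    return $theme;
}

# Fetch a post by stub, falling back to 404.txt (steps 1-2).
sub fetch_post {
    my ($stub) = @_;
    my $fh;
    open $fh, '<', "$stub.txt"
        or open $fh, '<', '404.txt'
        or die "can't even open 404.txt: $!";
    local $/;    # slurp mode
    return scalar <$fh>;
}
```

Putting it together, a request for ?get=shimapan would be served with something like print apply_theme($theme, wrap_paragraphs(fetch_post('shimapan'))).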

It just occurred to me this was also coded before Miku's time. Or even Akiyama Mio. I should have made a joke about Haruhi Suzumiya and bunny girls instead. You live and learn.

Processing and such fun

I eschewed (gesundheit) the CGI.pm module, mostly because I found it easy enough to create the standard "Content-type: text/html" and other headers myself. I was (and still am) a fan of minimalism, and my C training at the time made me averse to using third party modules and APIs, especially when I could just do something myself.
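Rolling your own headers really is just a couple of prints; a minimal sketch (the helper name is mine):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Build a minimal CGI header block by hand. The header section
# ends with a blank line; CRLF is what the spec expects, though
# most servers are lenient about a bare "\n".
sub http_header {
    my ($type) = @_;
    return "Content-type: $type\r\n\r\n";
}

print http_header('text/html');
print "<html><body><p>No CGI.pm required!</p></body></html>\n";
```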

The theme files and the posts themselves were uploaded to the server using SVN; they were "committed". To this day, I still think that was a really nifty way to do it. I've seen some people doing that with Git now.

Not showing any more than this, sheesh!

Learning from bad example?

I learned a lot from that site. Shortly after going live, I noticed thousands of hits to the 404.txt file. Obviously, bots had found my URL, and were filling random junk into it. I wrote in some rules that a stub could only be up to 24 characters long, and only contain alphanumeric characters. Any requests that didn't meet that criteria were automatically dropped.
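That filtering rule boils down to a single pattern match, using the 24-character alphanumeric limit described above (the sub name is my invention):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Accept only stubs of 1-24 alphanumeric characters; anything
# else (path traversal attempts, random junk from bots) is
# rejected before the script goes near the filesystem.
sub valid_stub {
    my ($stub) = @_;
    return defined $stub && $stub =~ /\A[A-Za-z0-9]{1,24}\z/;
}
```

Requests failing the check can then be dropped outright, or sent straight to the 404 page.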

I also learned about FastCGI. Unbeknownst to me in my fresh-out-of-high-school innocence, each time someone accessed my poor web server, Apache was spawning an entirely new CGI process to handle their request. Admittedly my site only started to get traction long after I ditched the script and moved on to dedicated CMSs, but had I kept using it I'm sure I would have been in for a nasty surprise!

There were also other performance issues. I never quantified how much of a hit my server was taking by having to open and close theme and data files each time, though I suppose I could have written a simple SQLite backend and compared it.

I also wasn't doing any caching; ideally once a post was completed and the theme set in stone, the script could have generated pages upon the post being uploaded, and served those instead. IIRC, this is how MovableType operated back in the mean old days. There weren't any dynamic parts to the site such as comments systems, tags and so on, so they would only need to be updated if the theme changed.
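The pre-generation idea could have been as simple as a loop run at upload time; a sketch under those assumptions (the paths and the trivial render step are illustrative):

```perl
#!/usr/bin/env perl
use strict;
use warnings;

# Map a post's text file to its static output name.
sub static_name {
    my ($post_file) = @_;
    (my $out = $post_file) =~ s/\.txt\z/.html/;
    return $out;
}

# Render every post once, so Apache can serve plain .html files
# instead of spawning a CGI process per request.
for my $post (glob 'posts/*.txt') {
    open my $in, '<', $post or die "read $post: $!";
    my $body = do { local $/; <$in> };

    open my $out, '>', static_name($post) or die "write: $!";
    print {$out} "<html><body>$body</body></html>";
}
```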

Justification!

Why am I posting about this, years later? Because after setting up a fairly basic Ruby on Rails install, playing with Django for a university assignment last semester, and my obsessive Wikipedia editing, I've decided to trial running Rubenerd.com off TextPattern! But as all infuriating entries end: that's for another post.


Dave Brubeck, 1920 – 2012

Media


NoScript Christmas hat!

Software

Via NoScript.net. I am in approvance! Is that a word?


Instapaper links for November 2012

Annexe

Links I saved onto Instapaper this month:


FreeBSD pkg bootstrap is being restored

Software

Icon from the Crystal Clear Project

Good news from the pkgbeta.FreeBSD.org server:

pkg bootstrap is being restored. Stay tuned.
Updated: 2012-11-28 10:20 UTC.

The site went down shortly after I wrote my pkgng FreeBSD Handbook post, ironically enough. I was going to post in the newsgroups asking what was up, but now we have news.

UPDATE: We now have a new message.

Currently this site only contains pkg bootstrap files!
Updated: 2012-11-28 11:30 UTC.