by Joshua Wood

Rails error tracking gets an upgrade

I meant to blog more this year, but have a few good reasons for the silence between February and now:

  1. I've been overloaded with Ruby on Rails contracting/consulting work. Yay!
  2. I've been building a Rails error tracking service with a couple of really talented developers: Starr Horne and Ben Curtis.

Without further ado, meet Honeybadger, the modern error management service for Rails:

We officially launched in September after a lengthy private beta, and have since been hard at work making a great service even better. Our customers tell us that we're already more stable than the competition, and while I may be a bit biased, I dare say that the 'badger is a pleasure to work with.

I also had the opportunity to produce our awesome demo video, which was engineered and narrated by my brother, Ben Wood.

If you build Ruby on Rails web applications and are still using the exception_notification gem - or nothing at all - you need to check out Honeybadger. For those of you who are already using an error tracking service, I invite you to take a moment to re-evaluate the current options and see if Honeybadger is a better fit; we think it will be, but that's for you to decide!

Try Honeybadger free for 30 days

Happy Valentine's Day

When my wife and I were dating, we communicated a lot via text messages because she lived in CA while I was in WA. The year we were married, I wrote a little PHP script to take my iPhone's sms.db file and turn it into a searchable archive of our conversations, as a gift for Valentine's Day. I'm sure she didn't fully understand the level of nerd it takes to code a Valentine's Day present, but in any case it was a hit.

I have since moved to programming in Ruby (and haven't looked back), so this year I thought it would be fun to upgrade the "codebase". I ended up using Sinatra for the server; anything heavier than a bare Rack app would have been excessive. Including a HAML template with inline CSS, it's 42 lines of code. Without the template, it's 11.
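The full app is too long to reproduce here, but the core idea is small enough to sketch in plain Ruby. Note that the Message structure and method names below are my assumptions for illustration, not the actual code:

```ruby
# A sketch of the archive's search core. The real app wraps something like
# this in a Sinatra route and renders the matches with a HAML template.
Message = Struct.new(:sender, :text, :sent_at)

# Case-insensitive substring search over the message history.
def search_messages(messages, query)
  messages.select { |m| m.text.downcase.include?(query.downcase) }
end
```

In the Sinatra version, a single `get '/'` route would call something like this with the search parameter and hand the results to the template.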

Happy Valentine's Day, Kay. Looking forward to 60+ years of archives :).

View the code at Github

Hearsay - An Active Record extension for tracking mentions

Last year I created a small project manager in Ruby on Rails that, among other things, lets users reference (or mention) tickets by number when posting messages and comments. Each ticket is assigned a unique number, beginning with 1. So when I reference "ticket #1" in a comment on an otherwise unrelated message, the system is smart enough to know that the ticket exists, and will automatically link to it in my comment.

Instead of scanning the body of the comment on the fly for any pattern matching "#n" and assuming the ticket exists, I decided it would be better to scan the body once when the comment is created and explicitly check each match to make sure that a ticket with that number exists. If it does, an association between the comment and the ticket is created using a join table.

Using a join table for this purpose has a few benefits. For instance, I can easily search for all comments referencing a specific ticket without using a regular expression in the query. It also ensures that the reference was intentional, or at least the ticket existed at the time the comment was created. If I wanted to get really fancy, I could allow the user to remove a reference that incidentally happens to match the search pattern.
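The scan-and-verify step is easy to sketch in plain Ruby; the constant and method names below are illustrative, not the gem's actual API:

```ruby
require 'set'

# Pattern for ticket references like "#42" in a comment body.
TICKET_PATTERN = /#(\d+)/

# Scan the body once at creation time, keep only the numbers that correspond
# to existing tickets, and return them (these become rows in the join table).
def referenced_ticket_numbers(body, existing_numbers)
  existing = Set.new(existing_numbers)
  body.scan(TICKET_PATTERN).flatten.map(&:to_i).uniq.select { |n| existing.include?(n) }
end
```

Storing only the verified matches is what makes the join-table queries cheap later on.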

I had originally built this as a simple Rails plugin in vendor/plugins, but since those plugins are now deprecated in Rails 3.2, it was a good opportunity to package it as a gem and release it into the wild.

So, I'm introducing a little gem called "hearsay" that aids in creating associations between model attributes and other models:


I've included basic usage instructions in the readme, but the code is documented with all the available options if you want to dig deeper.

Because the regular expression and finder method are configurable, this could be used in any situation where you want to match some text and use it to associate other objects. One alternative example is Twitter-style mentions, where the regular expression is /@(\w+)/i and the finder might be find_by_username.
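The extraction step for that configuration boils down to something like this (find_by_username being the hypothetical finder mentioned above, which would be called on each result before creating an association):

```ruby
# Twitter-style mention pattern, as described above.
MENTION_PATTERN = /@(\w+)/i

# Extract unique usernames from the text; each would then be looked up
# with a finder such as find_by_username before an association is created.
def mentioned_usernames(text)
  text.scan(MENTION_PATTERN).flatten.uniq
end
```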

I'm considering this beta until I can get around to writing some tests. Contributions and suggestions are welcome.

Twitter Bootstrap is ruining the Internet

I'm really, really tired of seeing apps and blogs using Twitter's Bootstrap toolkit. I completely understand the draw; it "bootstraps" your idea/design/what-have-you. Why pay a designer (or try to design it yourself) when Twitter and others have already done the work for you? For one, your site will look like Twitter. But you lose more than just originality. The design process is a big part of discovering exactly how your interface functions best for its intended purpose, and when you outsource that process to a toolkit like Bootstrap, you're left in a cozy little box that is hard to break out of.

Truth be told, I'm one of the worst offenders when it comes to taking the easy way out, especially regarding design (take this blog, for example; I designed it myself... :)). It doesn't have to be this way, though, and I am going to do my best to change.

How to Deploy Jekyll/Octopress to Heroku

I recently migrated my WordPress blog to Octopress, which is a blogging framework for Jekyll, the static site generator.

When I was researching my options for moving my blog to Jekyll, I had some reservations about using Octopress versus just rolling my own layout for Jekyll. I wanted to deploy to Github Pages, but I really hated the deployment strategy that comes with Octopress. Since Github Pages actually runs Jekyll, I didn't like the thought of having to keep my Jekyll source on a 'source' branch and deploying the generated static site to my master branch just to get it to play nice with Github. This was before I realized that I'd probably need at least a few plugins if I wanted to mimic the behavior of my WordPress site. The more I looked at Github Pages, the more I began to think it wasn't for me...

So then I started looking at Heroku, and was confronted with an even uglier (albeit simpler) deployment strategy: check the generated /public directory into source control. With a populated public directory, Octopress is a fully functional Rack application, and Heroku has no problem running it just like any other app. But I didn't want to clutter up my master branch with a public directory where the majority of the files are likely to change on almost every commit. My solution was to merge the Github strategy with the existing Heroku strategy, adding a little extra Git flavor.

First, I created a _heroku directory, and copied the Octopress config.ru and Gemfile. On the Gemfile, I ditched everything except the Sinatra dependency:

source "http://rubygems.org"
gem 'sinatra', '1.2.6'
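For reference, the config.ru that ships with Octopress looks roughly like this (reproduced from memory, so treat it as a sketch): a tiny Sinatra app that serves whatever is in public/, appending index.html for directory-style paths.

```ruby
require 'bundler/setup'
require 'sinatra/base'

# Serve the static site out of the public/ directory.
class SinatraStaticServer < Sinatra::Base
  get(/.+/) do
    send_sinatra_file(request.path) { halt 404 }
  end

  def send_sinatra_file(path, &missing_file_block)
    file_path = File.join(File.dirname(__FILE__), 'public', path)
    # Directory-style URLs get index.html appended.
    file_path = File.join(file_path, 'index.html') unless file_path =~ /\.[a-z]+$/i
    File.exist?(file_path) ? send_file(file_path) : missing_file_block.call
  end
end

run SinatraStaticServer
```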

I added a public folder and created a "hello world" index.html file in it, just to have something to push up to Heroku before figuring out the actual deploy. Heroku doesn't need anything else to run the static site, so now all I needed was to populate the public folder with my static site output from Jekyll, and push the entire thing to Heroku.

I decided to write a Rake task similar to the :push deploy task for Github Pages that comes with Octopress, but first I needed to create my Heroku application and push up an initial deploy:

gem install heroku
cd _heroku
git init .
git add .
git commit -am "initial commit"
heroku create
git push heroku master

With that completed, I was able to launch my fresh Heroku app in my browser and see "Hello World" from the index.html file I had created. Note that instead of cloning an existing repository or creating a separate branch, I simply initialized a new git repository. This repository will be automatically picked up by my parent "source" repository and committed as a sub-repository which is then tracked by the most recent commit. I feel this is a lot cleaner than committing the static output to my Octopress repository, and is the point of this entire post.

Finally, I created a Rake task to copy my Jekyll /public directory (where the static files are generated) to _heroku/public, commit the result, and then push the sub-repository to Heroku. The code is pretty similar to the Github push method:

desc "deploy basic rack app to heroku"
multitask :heroku do
  puts "## Deploying to Heroku"
  (Dir["#{deploy_dir}/public/*"]).each { |f| rm_rf(f) }
  puts "\n## Copying #{public_dir} to #{deploy_dir}/public"
  system "cp -R #{public_dir}/* #{deploy_dir}/public"
  cd "#{deploy_dir}" do
    system "git add ."
    system "git add -u"
    message = "Site updated at #{Time.now.utc}"
    puts "\n## Committing: #{message}"
    system "git commit -m '#{message}'"
    puts "\n## Pushing generated #{deploy_dir} website"
    system "git push heroku #{deploy_branch}"
    puts "\n## Heroku deploy complete"
  end
end
To make this the default deploy method, I changed a few config settings at the top of the Rakefile:

deploy_default = "heroku"
deploy_branch = "master"
deploy_dir = "_heroku" # deploy directory (location of the Heroku sub-repository)

And that's it! I gave the new code a try:

rake generate
rake deploy

Everything seemed to go alright, so I fired up my browser and there was my shiny new Octopress blog, with free hosting, and a deployment strategy that doesn't suck.

Know a better way? I'd welcome the input :).

Validating URL/URI in Ruby on Rails 3

I ran into an issue on a client project this week where we needed to validate a URL in our Ruby on Rails application, but wanted to check that it actually existed in addition to validating the format with a regular expression. After some minor searching, I ran across Ilya Grigorik's blog (that's been happening a lot lately for some reason.) He provided a nice little ActiveRecord validator that uses Net::HTTP to ping a domain and validate that it returns a 200 response (HTTPSuccess).

As it turns out, his post (and thereby his method) was a bit outdated, so I put together an updated validator that takes advantage of the new "sexy validations" provided in Rails 3. And here it is:

To get up and running, create a new file in your Rails lib directory called "uri_validator.rb", and copy/paste the above code. If you have added the lib directory to your autoload paths, then you're done! Otherwise, you'll want to include the file in your environment.rb or application.rb files (in your config directory) like so:
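For example, in config/application.rb (the module name is whatever your application is called):

```ruby
# config/application.rb
module MyApp
  class Application < Rails::Application
    # Make lib/uri_validator.rb autoloadable
    config.autoload_paths += %W(#{config.root}/lib)
  end
end
```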

Finally, to use the new sexy validator, simply add the :uri option to any existing validates call:
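A hypothetical model using it (the model and attribute names here are mine):

```ruby
class User < ActiveRecord::Base
  validates :website, :uri => true

  # Or, with a custom format:
  # validates :website, :uri => { :format => URI::regexp(%w(https)) }
end
```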

This allows you to specify a custom format (must be a valid regular expression); otherwise, the format will default to URI::regexp(%w(http https)).

There it is! If you come up with your own variations of the validator or regex, don't forget to tell me about them!

By the way, I just wanted to give a personal shout out to Ilya for his fantastic support of Goliath, his new non-blocking, EventMachine-based Ruby web server. Hey man, you might want to change the name to David: I was going to make some crack here about his sling (not to be - or maybe? - confused with David's Sling) being asynchronous, but you get the idea :). Anyway, rock on.

Internet-induced stress and its effect on the mind

For the past couple of years, a theory has been growing vaguely in the back of my mind, but recently I've finally been able to shed some light on it. In a nutshell: as the Web becomes more ubiquitous in my everyday life, I've felt a ramping level of stress and mental fatigue - and I don't think it's a coincidence.

It's hard to imagine life before the smartphone, much less life before the Internet. In the 15 or so years since the Web's world debut, it has evolved to a point that few of even the most clairvoyant minds could have predicted. The Web has become so essential in day-to-day life that it would be hard to withdraw even if we wanted to; that is, there is definitely an addictive quality in the fiber optic lines that feed our ever-increasing demand for current, relevant information.

I am part of the last few generations that can even remember a time before the Web, and part of the first few that do not remember a time before PCs. In fact, my father was writing software in the 70s - the decade before I was born. So, the birth of the Web is almost contemporaneous with my own. I grew up learning to read, write, and type; and while I (thankfully) took to reading as a hobby, I was highly enamored by the computer and the (seemingly) limitless possibilities it offered. Like it or not, the fact of the matter is that much of my childhood was spent behind a computer screen, and that trend has steadily increased into adulthood.

Now it's 2010, and the Web is basically a mainstay of society. Google, whose stated mission is to "organize the world's information and make it universally accessible and useful," has already succeeded with a large portion of the world's information, and adds to its index every day. Not only has the Web taken over the way we share information, but also the way we share our lives: it has moved beyond the informative and into the social realm. Services such as Facebook, MySpace, and Twitter have already woven themselves into the fabric of our social world in a way that makes them almost indispensable. And it doesn't stop there.

The Web is unique as a technology in the sense that it is assimilating other technologies. We are spending less time watching our TVs and more time watching the same programs online. Do we even listen to the radio anymore? Not usually by choice - and if we do, half the time that's online too. When's the last time you wrote a letter? For that matter, when's the last time you wrote with a pen? (Rent checks and grocery lists don't count.) Now books are going online, and even elementary schools are starting to make the switch from paper to internet-enabled digital reading devices. Not to mention clocks, maps, telephones... the list goes on.

I love the conveniences of the Internet as much as (and maybe more than...) the next person, and maybe that's why I've also begun to worry about the effect it is having on my mind. For the past couple of years I've been having increasing trouble concentrating, formulating complex logic, comprehending what I read, and even simply thinking, be it creatively or contemplatively. I've noticed a decrease in patience (not that I had much to begin with) and an increase in scattered, disorganized thinking. I used to love to write creatively, but lately I haven't had the attention span or the imagination for it. Even my sleep has suffered: I rarely dream anymore, and fantasy is all but lost from my dreams. I often wake with the feeling that I spent the night in routine (but mentally taxing) thought patterns; sometimes my dreams are just continuations of the inner monologue that narrates my data-collecting during the day.

I know I'm not the only one experiencing this. It has been a somewhat discussed issue in the media as of late. Microsoft even lampooned it in their advertising campaign for their Bing search engine, depicting victims of "Search overload" and touting Bing as the cure. But is the cure to "Search overload" really smarter search?

This is basically what had been running through my mind (nebulously) for the past year when I came across an article about an upcoming book titled "The Shallows: What the Internet Is Doing to Our Brains" by Nicholas Carr. While I am admittedly part of the dwindling few who still go out of their way to read books on paper, this was the first time in a while that I can remember being excited about buying a new release. Carr expertly describes the phenomenon of the Information Age without resorting to whistle-blowing or finger-pointing. Beginning with a history of information technology, he goes on to show scientifically how our brains adapt to the tools that we use, and while this isn't necessarily a bad thing (without it we wouldn't be able to evolve), there are always pros and cons. As our brains adapt to perform new tasks, old abilities are inevitably lost.

Carr's overall premise and final warning is that while computers and the Internet have given us many advantages, including improved productivity and resourcefulness, they also have changed the way we absorb information, and in turn are literally changing the way our brains behave on the neurological level. We may be gaining a wealth of knowledge and the skills to access it, but in turn we're losing our ability for deep, contemplative thinking (among other things). In effect, computers are making us more like themselves.

I also tend to wonder how valuable this "wealth" of online information really is when I consider the complex makeup of the human brain. We have designed our technological systems to treat our brains and memory as if they were also digital, when in reality nothing could be further from the truth. With all the knowledge in the world at our fingertips, is it possible that we are simply diluting the knowledge that we actually possess? After a heavy session of online searching, I sometimes feel more disconnected from the subject than when I started.

All that being said, I love the Internet. I think it's great to have so much information available in one place. I don't think it would be possible, or even desirable, to abandon the technology or boycott it... After all, many of us make our living directly or indirectly online, myself included (maybe I should title this "Bitchings of a computer programmer"). I do, however, think that we should regularly evaluate the tools that we use and be aware of their effects on our lives. With so many different information sources, services, and devices, I would propose that we are entering an age of "Distraction Management", where our productivity and well-being largely depend on whether we can harness the power of these tools without them driving us mad.

Backup SMS (sms.db) on iOS/iPhone4

Here is a quick little how-to on backing up your SMS database on your iPhone4, or on a 3G/3GS running the 4.0 firmware.

First of all, why back up sms.db by itself when you can back up your entire iPhone? There are many reasons, but the biggest one will probably be so that you can restore your text messages without moving over other potentially unwanted data that comes with restoring your iPhone from a full backup. Personally, I wanted a fresh start when moving from my 3G to my iPhone4 - except I had 2 years of text messages stored on my 3G that I'd rather not lose. Now that the iPhone4 jailbreak has been released, it should be no problem to pull the sms.db from the 3G filesystem and copy it over to the iPhone4.

The short-version of the process is as follows:

  1. Upgrade to the 4.0 (iOS) firmware first, if you're on a 3G/3GS.
  2. For 3GS and earlier models, jailbreak with redsn0w. You can get the latest here. For iPhone4, simply open jailbreakme.com on your device and follow the instructions.
  3. Install OpenSSH in Cydia.
  4. SSH or SFTP into your phone using the IP address listed in your Wi-Fi settings. (Port: 22, username: root, password: alpine)
  5. cd to the directory /private/var/mobile/Library/SMS
  6. Copy sms.db to your local computer

If you have any questions, feel free to use the comments below. If there is enough interest, I'll further explain any of the above steps in case they aren't clear enough.

This is a technique that has been covered by a few other sites for the 3G, and hasn't changed very much with the iPhone4. (Really the only difference I found is that the path /var changed to /private/var.) I'll have to do another post on some more creative reasons to get your hands on your sms.db. If you know your way around SQLite (it's easy to learn), there's pretty much no limit to how you can use your iPhone's SMS database!

Update 06/30/10: This post may have been a bit premature, since the iPhone4 has not been (officially?) jailbroken yet. I have not gotten around to attempting to restore sms.db to my iPhone4 (just don't have the time for it at the moment). If someone can post a solution, that would be great; otherwise I will post my findings here once I get around to trying it.

Update 08/04/10: Now that the jailbreak for the iPhone4 has been officially released, you should be good to go!

civicrm_contact_type doesn't exist, 1146

If you've tried upgrading to CiviCRM 3.1.3 in Joomla, you may have run into the following error:

DB Error: no such table
Database Error Code: Table 'your_database.civicrm_contact_type' doesn't exist, 1146

The solution is really simple, just follow these instructions, expertly provided by Deepak Srivastava over at the CiviCRM community forums:

Workaround - 1
As soon as you see the error, a page reload should bring the upgrade screen back, and hitting the upgrade button should work normally.

- OR -

Workaround - 2
Before installing the new codebase, increase the session lifetime so that the session doesn't expire between installing the new codebase and hitting the upgrade button.
Note: The session lifetime can be increased from Global Configuration >> Session Settings >> Session Lifetime. Change the session timeout back to the previous value once you're done with the upgrade.

Other than that, follow the official instructions, and you should be home-free!

Tracking copied text using JavaScript, jQuery, and PHP

Everyone knows that most web site usage statistics (user locale, operating system, page views, unique visits, etc.) are tracked by web servers. However, if you're really serious about tracking your users' activity, you'll use an analytics solution such as WebTrends, which is used by the New York Times to log on-screen actions that cannot be tracked by traditional means. Using the WebTrends dcsMultiTrack function, it's possible to capture virtually any event that can trigger a JavaScript function, whether it's on a static HTML page or inside a Flash application.

One of the more creative pieces of tracking code that I ran across while browsing the New York Times source code (yeah, most people read the news... so?) was when a user copies text from an article. It's really such a simple concept, but one that never occurred to me: when the user selects text on screen, use JavaScript to capture it. Then set up a trigger to submit the selected text to the server via AJAX when the copy command is detected. Imagine the analytics that could be created based on popular locations within individual articles!

So I decided to work up a little demonstration using jQuery to log the event and PHP to handle the request server-side. This is very basic, but should be enough to get started toward your own super-analytics. You will need to include jQuery 1.3.2.

The JavaScript

/*
 * Copyright (c) 2009 Joshua Wood
 * http://joshuawood.net/
 * Based on research by Mark S. Kolich and The New York Times WordReference function
 * http://mark.kolich.com/2009/09/use-javascript-and-jquery-to-get-user-selected-text.html
 * Copyright (c) 2009 Mark S. Kolich
 * http://mark.kolich.com
 * Permission is hereby granted, free of charge, to any person
 * obtaining a copy of this software and associated documentation
 * files (the "Software"), to deal in the Software without
 * restriction, including without limitation the rights to use,
 * copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following
 * conditions:
 * The above copyright notice and this permission notice shall be
 * included in all copies or substantial portions of the Software.
 */

// Create a new Object for our code to reside in (makes it pretty and manageable)
var Example = Example || {};
Example = (function(){

  var selection, selectionText, responseText; // Global object variables

  // Get currently selected text
  // Method based on getSelected() from CodeToad at
  // http://www.codetoad.com/javascript_get_selected_text.asp
  function getSelection() {
    var t = '';
    if(window.getSelection){ return window.getSelection(); }
    else if(document.getSelection){ return document.getSelection(); }
    else if(document.selection){
      var selection = document.selection && document.selection.createRange();
      selection.toString = function() { return this.text; };
      return selection;
    }
    return t;
  }

  // This is the callback function for the mouseup event
  function handleClick(event) {
    selection = getSelection();
    selectionText = selection && selection.toString();
  }

  // This is the callback function for the oncopy event
  function handleCopy(event) {
    var wc = wordCount(selectionText);
    if(wc) {
      // Do something with the copied text (send it to the server via ajax)
      responseText = $.ajax({
        url: "log.php",
        global: false,
        type: "POST",
        data: ({text : selectionText}),
        dataType: "text",
        success: function(msg){
          $("#request").html(msg);
        }
      });
    }
  }

  // A simple function to count the words in a string, copied directly from nytimes.com
  function wordCount(inStr) {
    var wc;
    wc = inStr && inStr.replace(/[^\s\w]+/g, ""); // sans-punctuation
    wc = wc && wc.replace(/^\s*/, "").replace(/\s*$/, ""); // trim
    wc = wc && wc.length && wc.split(/\s+/).length; // count words
    return Number(wc);
  }

  return {
    initialize: function() {
      $(document).bind("mouseup", handleClick);
      document.getElementsByTagName("html")[0].oncopy = handleCopy;
    }
  };
})();

// Initialize our little program and wait for copied text!
Example.initialize();


The PHP (log.php)

<?php
// Log some text from the request and spit it back out
// (This is where you would do something with it)
$text = $_REQUEST['text'];
echo $text;

This example waits for the user to copy text on screen, then submits it to the server-side file log.php, which can process it, save it, etc. (in this case it sends the text back to the browser, which puts it in the DIV with the id "request").

View Example | Download

What other tracking applications can you see for JavaScript and Ajax?