
Mastodon

I’ve been hanging out on Mastodon lately. Mastodon is kind of like Twitter, but without the politicians, the outrage (🤞), or the algorithms. It’s also open source and decentralized, meaning that one person or company doesn’t control it. Instead, Mastodon is made up of many different servers (called “instances”) which are managed by different people; this is sometimes called the “Fediverse”.

Since anyone can create their own Mastodon instance, there are many different instances to choose from. Many people join a general-purpose instance such as mastodon.social or mastodon.xyz. Others join one or more topical instances, such as ruby.social (an instance for Ruby developers!). There’s a full instance list here.

Oh yeah, and carrying on the whole proboscidea theme, instead of “tweets”, status updates on Mastodon are called “toots” (yeah it’s cutesy, deal with it).


Mastodon for Developers

If you’re a developer, check out Mastodon’s source code: it’s open source and built with Ruby on Rails and Sidekiq (with a React front-end).

Mastodon is an implementation of a new web protocol called ActivityPub (which is itself inspired by an older protocol called OStatus). The W3C Recommendation for ActivityPub is here (read it—it’s super interesting to learn how federation works).
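
A quick way to see what federation looks like on the wire is to ask a Mastodon server for an account's ActivityPub actor document (the handle here is just the one recommended below; any public account works):

curl -H 'Accept: application/activity+json' https://mastodon.social/users/Gargron

The JSON that comes back is an ActivityStreams "Person" describing the account, including the inbox/outbox endpoints and public key that other servers use to federate with it.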

If you’re into Elixir, there is also an ActivityPub implementation built in Elixir and Phoenix; it’s called Pleroma.

In fact, since ActivityPub is an open standard, there can be many implementations. Keep up to date with the ecosystem that is developing around ActivityPub here.

Recommended follows: @cwebber@octodon.social (one of the co-editors of ActivityPub), @Gargron@mastodon.social (the creator of Mastodon).


Mastodon for Ruby Developers

Imagine a place where people politely discuss — yeah, that rules Reddit out — nothing but Ruby all day. James Adam runs ruby.social, a Mastodon instance and community for Ruby developers. If you’re interested in Ruby, I highly recommend joining.

I had some fun giving out around 75 Honeybadger t-shirts to promote the instance when it first launched.


OSS Contributions

My first pull request to Mastodon was merged in November 2017! I added a keyboard shortcut legend, which you can check out in the web interface by typing ?.



'Badger Life

A few years ago, someone wanted to buy Honeybadger.

As part of the deal, we would continue to work on the app within this larger company. It was a good offer; it really made us think about what we want out of life and why we do what we do.

We turned it down.

Why on earth would we do that? The answer is one word that is so important to us that the money didn’t matter:

Freedom.

It’s at the heart of everything we do at Honeybadger, and we didn’t want to lose that.

I wanted to tell that story in order to illustrate a point: Honeybadger isn’t just a business, it’s a way of life. That life centers around a few core values:

  • Family is more important than work.
  • Health is more important than success.
  • Financial independence is more important than being rich.

…but how do you have freedom and run a successful business?

Honeybadger is 100% remote, with no office, no employees, and absolutely no meetings.

OK, we have a few meetings, but mainly just to talk about the things I want to discuss in the rest of this article.

We’re focused on a healthy remote culture

Working remotely lets us design our unique environments, schedules, and habits in a way that is impossible in an office setting:

  • It gives us the freedom to work (and play) from anywhere.
  • It keeps us healthier because we can make self-care an integrated part of our work days (we all have daily exercise routines, and we also like to take long walks).
  • It’s more efficient: we don’t spend hours per day in traffic, and it lets us structure our days in the way that is most productive.
  • Our families love it because our schedules work around the family rather than working the family around a work schedule.

The result is a work culture which doesn’t dominate our lives.

We work on what motivates us

We usually work on Honeybadger. I say usually because we also each have side-projects. Some are money-making ventures, others are hobbies. Side-projects are encouraged; sometimes we even collaborate with each other on them.

Honeybadger happens to be what we all want to work on most of the time–going on 6 years!

When choosing what to work on, nothing is off limits. While we each tend to specialize in one or two areas, if we get bored, there’s always something new to explore.

We prefer asynchronous communication

When we work together, we don’t pair program; we like closed doors and headphones too much (it’s also difficult when you aren’t online at the same time).

We used team chat every day before it was cool (IRC, Campfire, now Slack), and it’s still our favorite place to hang out. We use it for impromptu conversations and to stay in touch throughout the day.

While we love chat, we always prefer asynchronous forms of communication–this allows us to avoid having the same schedules. Email, Basecamp, and GitHub are our favorite places to have long-form discussions. Even on Slack, it’s expected that replying immediately is optional.

When we do need to communicate face-to-face, we’re quick to jump on appear.in.

We document everything

We used to have a bunch of undocumented processes that lived inside our heads. That’s a problem when you want to be free to move between different areas of the business, or take some time off and let someone else step in and do your job.

Lately we’ve been taking the time to document the processes (we call them systems–we’re engineers, after all) for literally everything, and it’s made a huge difference in our ability to delegate to each other.

We store our systems documentation as markdown files in a GitHub project called (surprise) “systems”.

Here’s an example of the types of systems we’re documenting:

Company
    Strategic Process
    Compliance
Product
    Features
    Support
Marketing
    Promoting Things to Developers
    The Blog
    Email Marketing
    Social Media
    Search Engine Optimization
    Exceptional Creatures

We also focus a lot of time on our customer documentation, which not only cuts down on the number of customer support requests we have to handle, but is another great internal resource.

We automate everything

Automation is central to everything we do. It’s the key to our business plan: the silver bullet that allows us to run a complex, ops-heavy startup with a scrappy team of 3 people.

Documentation is the first step of automation. If nothing else, it automates the process of researching or remembering how something works. It’s just the beginning, though. Most problems have an automated solution, and our primary responsibility is to find it.

Here are some examples:

  • We used to store boxes of t-shirts in our closets, haul them to conferences (and through airports), and drive to the post office to mail them to people.

    Solution: Printfection now manages our inventory and does all of this for us. We just copy and paste a link.

  • We used to get paged a lot when customers would send us torrents of data which our infrastructure couldn’t handle. The fix often required manual diagnosis.

    Solution: Our auto-scaling infrastructure at AWS now handles automatic provisioning and de-provisioning of servers on demand.

  • We used to get a lot of repetitive customer service requests which required administrative knowledge.

    Solution: We built self-service tools into the product, eliminating these requests entirely.

We plan strategically

Strategic planning is essential to make forward progress in the right areas of the business. We struggled with this in the early years; planning requires meetings, but we’re engineers—we hate meetings, and would rather be building. It’s hard.

We finally read some books and found a process that works for us. Each quarter, we have a 1-day retreat where we get together in person to plan and make big decisions. The result is a 1-page quarterly action plan which includes some goals which — if achieved — will move the business forward.

We use a Basecamp project to create the agenda for our retreats and track our progress throughout each quarter. We also have a weekly standup (30 minutes or less) via appear.in to discuss our progress (we regularly skip this meeting when we’re already on the same page).

Since we don’t see each other a lot in person, we really look forward to hanging out at our retreats, and usually plan something fun for after we get the work out of the way.

We hire entrepreneurs like us

We are all independent entrepreneurs who have Honeybadger in common, and we like to work with people like us. We hire experienced consultants who share our vision and want to make a big difference in the business.

Honeybadger is also a fun environment for developers who don’t want to be stuck in one language or technology (we don’t).

While we do write a lot of back-end Ruby code, we have projects which use many different languages, including JavaScript, Elixir, Python, Go, PHP, and even Java. We work with many platforms and frameworks like Rails, Phoenix, Laravel, and Node. Then there’s the ops side, which offers us numerous opportunities to dive into AWS, Ansible, Docker, etc.

We’re currently trying a “many hands make light work” approach. This is something we’re still experimenting with, and I’m really excited about it. If you’re a consultant and want to work/hang with us, shoot me an email and tell me what you love about Honeybadger.

What I Do

I often find myself struggling to explain what I do to people, and how it all fits together. Here’s the full story.

The Short Version

I have 2 companies. Honeybadger is a product company which I cofounded with Ben Curtis and Starr Horne in 2012. Hint is a software consultancy that I cofounded with my brother, Ben Wood, in 2014.

Hint and Honeybadger have nothing to do with each other, except that they are both awesome companies owned by me and partners named Ben (duh).

When I’m not at home (or at the gym), I like to hang out at the Hint office in downtown Vancouver, WA. I’m “employed” by Honeybadger, though, which does not have its own office (we are 100% virtual/remote, based in Seattle).

In total, I try to work 20-30 hours/week, mostly on Honeybadger, but I also spend a small amount of time on business strategy at Hint. The rest of my time is for family and personal development.

Many activities consume my days; here is a non-exhaustive list:

  • Writing documentation and creating processes for common tasks
  • Other types of writing (sales copy, articles, emails, etc.)
  • Reviewing pull requests and managing contractors
  • Experimenting with new marketing techniques
  • Product management and planning new features
  • Customer support
  • 1 hour of exercise minimum
  • Reading (books, articles, papers, proposals, magazines, etc.)
  • Programming/software development

Adding it all up, if I’m one thing, I’m an entrepreneur. I often wear many hats, but at the end of the day my job is to ensure that my businesses are on track to meet the unique goals that my partners and I set for them. If you’re wondering how I got here, read on.


The year was 2011.

I had been working as a freelance web developer for about 10 years, building everything from design/interactive projects to regular old websites, to full-blown web applications.

For the past few years I’d collaborated with two online friends of mine: Ben Curtis and Starr Horne. We actually wouldn’t all meet in person until after starting a company together, but that’s a different story.

The three of us mostly developed Software as a Service (SaaS)-style web applications (the business model popularized by Salesforce) using a popular programming framework called Ruby on Rails.

While building apps for clients was fun, we were all tired of the hourly grind, and really wanted to create our own apps. So we started looking for an opportunity.

That opportunity arrived the next year, in 2012. An app that we all used and loved started to have some problems, and frustrated by it, we saw our chance: we could make something better. So we did. We called it Honeybadger, and believe it or not, people loved it. It took off. (You can read more about our story here).

While we had paying customers from day one, it was slow going at first, and we all had families to support. We kept consulting on the side to pay the bills, while spending the rest of our time working on Honeybadger. Gradually, Honeybadger made more money, which allowed us to pay ourselves more, which allowed us to spend less time consulting (this is called “bootstrapping”).

During this time, my consulting business, Hint Media, was really booming. I had more work than I could handle, and began to hire subcontractors to help manage it all.

One of the subcontractors I hired was my brother, Ben Wood. Ben had a background in audio engineering, but was interested in web development, and we ended up working together a lot.

The timing was perfect: I wanted to spend less time working on client stuff, and Ben wanted to spend more time on it. As Honeybadger began to consume more of my time, Ben and I agreed to partner on a new company, which Ben would run.

In 2014 we changed the name from “Hint Media” (wtf does “media” have to do with software?) to just “Hint”, got a tiny office in downtown Vancouver, WA, and hired our first employee.

Fast forward 4 years, to 2018.

Honeybadger will be 6 years old, and has thousands of happy customers. Ben C., Starr and I have really built something special. We’re a bit unconventional: we have no investors, no offices, no employees, and we’ve optimized the company for profit—or as we like to call it, “making it rain”. 🤑

Honeybadger allows us to do things like take a month off for our families, or for that matter take the afternoon off if we feel like it. Its purpose has always been to give us the freedom to pursue our dreams.

Hint, meanwhile, has grown to a team of 7, and we have a much larger office now. We’ve laser-focused the company on helping medium-sized software teams maintain and improve their software applications while forging long-term, trusting relationships with our clients.

At Hint we’re really excited to build a fun, healthy, and rewarding workplace which provides us the freedom and security to live extraordinary lives. Here are a few of us right before we took some time off for Christmas vacation, obviously working hard right up to the very end:

I love building businesses, and the unique challenges that they present. I’m hoping to share more about our approach as we continue to learn and grow.

My 2017 Arch Linux desktop setup

Update 2017-08-26: While the system was usable during heavy I/O, the lag was fairly annoying. I have since moved my system partitions back to the SSD (losing the raid1 redundancy). My two HDDs are still encrypted with LUKS, but I switched to btrfs’s built-in raid feature for raid1. So basically I’ve ditched mdadm and bcache.


Goals

My desktop is currently running a fairly custom configuration of Arch Linux which I’m going to attempt to explain in this post. First, I use my desktop as a multi-purpose system. My primary activities include:

  • Open source software development (Ruby, Node, Go, Elixir, etc.)
  • General desktop use (browser, email, chat, etc.)
  • Gaming (including some Windows-only games, like Overwatch)
  • Personal file server/cold storage (important files I would never want to lose)
  • Media server (Plex, etc.)
  • Kitchen sink (I run whatever appliances I happen to be using at the moment with docker)

I dual-boot Windows 10 so that I can play Overwatch and a few other games. I previously also used Windows as a media/file server, but I prefer to use Linux for my personal computing as there are many privacy concerns about Windows 10 and it’s kind of a black box to me. For example, I tried to use Storage Spaces and nearly lost some data when a drive failed and Windows wouldn’t mount my mirrored space (even though it was supposed to tolerate that failure transparently). I also wanted to encrypt the space, but after that near-miss I wasn’t confident letting Windows encrypt the same data it had almost eaten even before encryption was in the picture.

That said, I had some pretty lofty goals for my new desktop setup:

1. At-rest encryption for everything

I encrypt the entire 500GB disk on my laptop with dm-crypt/LUKS (standard full-disk encryption technique on most Linux distros). I use LVM on top of LUKS so that all my volumes are encrypted at-rest, including swap. I find it much better (for the sake of sanity) to encrypt everything rather than try to decide what deserves encryption and what doesn’t.

I wanted to use the same approach on my desktop, except my disk layout is more complex: I have multiple HDDs (for media/archive storage) plus a 128GB M.2 SSD.

2. One encryption password

It would be simple enough to install the OS on my SSD using the same LUKS on LVM approach as my laptop, but that would mean my HDDs must be encrypted as separate volumes. I might be able to store those extra keys on the encrypted SSD (either in a keychain or when mounting the volumes in /etc/fstab), but ideally I wanted all my data encrypted using the same key which is entered once, at boot.

The other issue with using the SSD exclusively for the OS is that there would be no redundancy in case of drive failure (meaning it would take longer to restore my OS volumes from a backup).

3. Full redundancy

I wanted to be able to tolerate at least a single drive failure for my storage as well as my OS and bootloader. That means I need to use a raid1 or raid5 configuration for my OS as well as HDD storage, and at least 2 drives for each volume group.

4. OS Speed

I wanted to leverage my SSD as much as possible when booting the OS and accessing commonly used programs and files. Normally I’d just install the OS on the SSD, but again that offered no redundancy, and files stored on the HDDs would not benefit from it at all.

5. Flexible (and automated) system snapshots

Arch is a rolling release, which means it gets the latest package updates (including the Linux kernel) immediately. I’m pretty sure Archers used “YOLO” before it was cool.

It’s important to have a way to roll back my system if it doesn’t like an update. The alternatives are to drop everything I’m doing to troubleshoot and fix the issue, or to live with whatever it is until I can get around to it.

With that in mind, I wanted to have regular system snapshots which I could restore effortlessly.

The setup

My available drives consist of 1x128GB SSD (M.2), 2x5TB HDD, and 4x2TB HDD. Because I don’t have more than 5TB of data atm, I opted to keep the 2TB drives (which I salvaged from a Seagate NAS) in reserve and go with the SSD and the two 5TB drives in raid1 (for 5TB of mirrored storage). The simplest thing I could think of was to install Arch on my raid1 array and use the SSD as a cache on top of it. I’d been curious for some time whether that was even possible, until I stumbled onto bcache, which does exactly what I wanted.

I’m using 5 different storage layers, each serving a single purpose:

  1. Software raid (via mdadm)
  2. SSD cache (via bcache)
  3. Encrypted file system (via LUKS)
  4. LVM
  5. btrfs

My bootloader (Syslinux atm) is configured on a second raid1 array (same disks, different partitions) so that it should also be able to tolerate a drive failure transparently.
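
For reference, assembling that stack from bare partitions looks roughly like this. It's a sketch rather than the exact commands I ran (flags and sizes are illustrative), with device names matching the lsblk output below:

# 1. Mirror the two HDD partitions with mdadm (raid1)
mdadm --create /dev/md126 --level=1 --raid-devices=2 /dev/sdb2 /dev/sdc2

# 2. Put the SSD in front of the array as a bcache cache; this yields /dev/bcache0
make-bcache -B /dev/md126 -C /dev/sda

# 3. Encrypt the whole bcache device with LUKS
cryptsetup luksFormat /dev/bcache0
cryptsetup open /dev/bcache0 luks

# 4. LVM inside the encrypted container: one volume for swap, the rest for root
pvcreate /dev/mapper/luks
vgcreate vg0 /dev/mapper/luks
lvcreate -L 16G -n swap vg0
lvcreate -l 100%FREE -n root vg0

# 5. btrfs on the root volume, swap on the swap volume
mkswap /dev/vg0/swap
mkfs.btrfs /dev/vg0/root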

Here’s what my lsblk looks like:

NAME                 SIZE TYPE  MOUNTPOINT
sda                119.2G disk
sdb                  4.6T disk
├─sdb1               260M part
│ └─md127            260M raid1 /boot
└─sdb2               4.6T part
  └─md126            4.6T raid1
    └─bcache0        4.6T disk
      └─luks         4.6T crypt
        ├─vg0-swap    16G lvm   [SWAP]
        └─vg0-root   4.5T lvm   /
sdc                  4.6T disk
├─sdc1               260M part
│ └─md127            260M raid1 /boot
└─sdc2               4.6T part
  └─md126            4.6T raid1
    └─bcache0        4.6T disk
      └─luks         4.6T crypt
        ├─vg0-swap    16G lvm   [SWAP]
        └─vg0-root   4.5T lvm   /

sda is my SSD and sdb/sdc are my HDDs.

LVM allows me to configure a volume for swap which lives inside the encrypted LUKS container as well as change my partitions more easily in the future.

The SSD cache sits below LUKS, so anything written to the cache is also encrypted at-rest.

The rest of the drive space is used for a root volume formatted with btrfs, a newer Linux file system that lets me create flexible subvolumes which can then be snapshotted using a copy-on-write strategy. I’ve configured my system to take hourly snapshots of the root subvolume (where the OS lives) so that if an Arch update hoses something I can just restore an earlier snapshot. For a really interesting talk about btrfs, check out Why you should consider using btrfs … like Google does.
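
The snapshots themselves are cheap btrfs operations; the gist of what runs every hour is something like this (paths are illustrative — I keep snapshots in their own directory on the same btrfs volume):

# Take a read-only, timestamped snapshot of the root subvolume
btrfs subvolume snapshot -r / /.snapshots/root-$(date +%Y%m%d-%H%M)

# List what's available to roll back to
btrfs subvolume list /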

Result

So far everything seems to be working great. Arch boots up quickly and everything feels snappy, about the same as on my laptop’s SSD, except I have 5TB available. Writing files to disk is really fast until (I assume) the HDDs get involved, at which point I do get some system lag if I’m transferring a massive amount of data. So far that hasn’t been a big problem for me.

The real test will come when something fails… I still need to set up a full remote system backup and then I plan to run some failure scenarios like unplugging one of my HDDs and trying to boot the system, etc. I’m also very new to btrfs, but I really like what I’ve seen.

If you have any questions or would like a more detailed technical guide on how to set up this system from scratch, hit me up on Twitter.

Zero to GIMP

I love package management on Linux. Installing Photoshop and getting to the point where I could do something simple (like resize an image) on macOS would take me minutes (not to mention the application startup time). Installing GIMP and opening it on Ubuntu takes 15 seconds.
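
For the record, the whole thing is a couple of commands on Ubuntu (GIMP is in the default repositories):

sudo apt-get install gimp
gimp &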

Is Apple forcing developers to switch to Linux?

I have been a developer using primarily open source technologies on Macs for over 12 years. I originally switched from Windows to Mac because I didn’t like how difficult open source development was on Windows, or Microsoft’s tight grip on the OS; plus, I just felt like Apple “got me” more at that time. By the time OS X came out I was hooked.

I love macOS for its Unix/BSD heritage and the power of the operating system. Ideological leanings aside, I still feel like it’s the best desktop operating system ever made. I spend a majority of my time in the terminal (tmux, vim, ssh, etc.). Apple’s developer ecosystem is excellent, and while Apple has always encouraged me to do things “their way”, I never felt forced.

Fast forward to today. Every Apple announcement has somehow bummed me out over the past few years. Every announcement seems to force me deeper into Apple’s product line. I can’t go a single day without being asked to log into something with my Apple ID. iTunes is basically ruined.

Then there’s the hardware. First they took away the ability to upgrade built-in storage and ram. I’m currently plagued by battery issues which I can’t fix without physically sending my laptop to Apple for an expensive replacement. In general the standard solution to broken Apple stuff seems to be “buy new stuff”.

So, I’ve been waiting patiently (and sometimes not so patiently) for months with a busted Macbook Pro from 2013 (not that old of a laptop for me) for Apple to announce the next generation so I could buy one. In the meantime, I bought my second Thinkpad and have been running Arch Linux as my primary operating system and loving it more every day.

The buzz today is that tomorrow Apple will announce a fundamental change to a classic computer interface which I happen to really like on their flagship laptop.

Removing part of the keyboard is a deal breaker in my opinion. I don’t care if there are workarounds, like remapping Caps Lock to Esc (mine is already remapped to Ctrl, by the way). To me, this shows that Apple doesn’t give a damn about the developers who rely on their hardware every day to build most of the apps that all of their customers use on that same hardware. At best, they’re so full of hubris that they assume their developer community will enthusiastically jump on-board and rewire their brains. Sadly, I’m sure many of them will, even if it’s for self-preservation.

Lucky for me, my livelihood is not tied to the App Store, and I can jump ship at any time. Most of my colleagues are in the same position.

Anyway, this entire rant was to set up a question: with this redesign, is Apple basically shutting out the computing purists and forcing them towards Linux? Let me explain:

  • If I want a modern laptop to run macOS, I must use a Macbook Pro. They don’t support any other hardware.
  • If I want a normal keyboard, I can’t use a Macbook Pro (assuming rumors are true) aside from buying the previous generation and using that for 4-5 years, at which point I’ll still be left with no options.
  • So basically, if I want normal hardware, I can’t use Apple anymore.
  • Windows is out of the question (although ironically, it may become a better alternative to macOS with Ubuntu on Windows and support for a variety of hardware vendors).
  • Linux/BSD are the only Unix-like alternatives to macOS.

So where does this leave all of us poor jaded Mac purists tomorrow?

Adding a signature to PDFs on Linux like Preview.app in macOS

tl;dr: xournal is the simplest application I’ve found that is able to accomplish this task (and relatively well).

Update: it looks like there is also some progress on adding support for electronically signing documents in Evince (Gnome 3’s default document viewer) as well as a request to add support for graphical signatures!

I recently needed to sign a contract and return it via email. On macOS I would use Preview.app to view the PDF, place my signature, and export the signed copy.

It took me a lot of searching before I landed on a few good Linux alternatives, but they do exist.

What worked

xournal

xournal was the simplest app which was able to do exactly what I wanted. With xournal I was able to open a PDF from Nautilus (Gnome’s file manager), place and resize my signature and export a PDF that looked just like the original. To demonstrate this, I used this PDF and Barack Obama’s public domain signature. The resulting PDF is here, and here’s a screenshot of what it looks like in Gnome (not bad!).

I’ve seen reports that the PDFs exported from xournal could not be viewed on non-Linux platforms, so I tested viewing the exported PDF in Preview.app on macOS and it displayed perfectly.

Master PDF Editor

Master PDF Editor seems like a nice, full-featured alternative to Acrobat. It can not only place your graphical signature on a document, but also sign it with an electronic signature. I’m probably going to pick up a copy of this one at some point; it’s free for personal use and $49.95 for the commercial version (which includes the signature feature).

What didn’t work

Here’s a list of the other applications I tried (nothing against them, they just didn’t do exactly what I wanted):

  • Okular is the most full-featured document editor for Linux. It’s possible to add a custom signature by creating a custom stamp from an SVG or PNG file (see the KDE Help Center for instructions, under “Annotations”). In my experiments this produced very distorted looking images, however, and I couldn’t find how to move or resize the stamp once it had been placed. Okular is also a KDE application, and using it in Gnome (or other desktop environments) means pulling in a lot of KDE dependencies which I don’t otherwise need. It’s a very nice and complete app, however (and free).
  • LibreOffice Draw did not render the PDF properly; it seemed to convert the PDF to editable text and the fonts would get rendered incorrectly, among other things. The font issue may have been that it was looking for the fonts on my system, which is a headache I want to avoid (since I don’t need to edit the content).
  • GIMP does import PDF files, but it can only do a page at a time, and the resulting export is an image file (so you lose your original embedded information).
  • Inkscape was similar to GIMP. It can open the file, but doesn’t seem to export the original format correctly.
  • Adobe Acrobat 11 in wine. Instructions here. I was able to get it running, and it displayed my PDF fine, but crashed when I attempted to use the “Place Signature” tool.
  • Adobe Acrobat 9, available for Arch here. This was the last (super outdated) version of Acrobat to support Linux natively. I didn’t even bother with this, because the package page states plainly that it’s old and crashes reproducibly.

Other applications for working with PDF files

There are definitely other apps which I haven’t tried yet. The Arch wiki has a comprehensive list of them.

Thinkpad T420 HD+ Screen Upgrade

This summer one of my projects was to upgrade the LCD panel in my Thinkpad T420. I had bought my refurbished T420 from Newegg earlier this year with an SSD and 16GB ram upgrade; I thought I’d ordered the HD+ display (which has a max resolution of 1600x900 and is brighter), but when it arrived I was disappointed to find it had the base 1366x768 HD display.

Luckily, Thinkpads are extremely upgradeable, which is a main reason I bought one. I ended up having a lot of fun upgrading to the HD+ display myself.

My initial research led me to ThinkWiki, which has all the information you need to track down the right parts.

To upgrade your display, you will need an HD+ LCD cable ($10 or less) and the new HD+ panel (around $50). I ordered mine from eBay from the following sellers (pay close attention to the model numbers and make sure the components are the HD+ versions):

  • Lenovo ThinkPad T420 LCD Display Screen Video Cable HD+ 04W1618/laptop-masters
  • 14.0” 93P5684 LED Screen for LENOVO 93P5692 93P5685 LCD LAPTOP B140RW02 V.1/browngranite

While ThinkWiki has a basic list of instructions for the parts of the T420 which must be removed to get to the screen, you’ll never figure it out without the manual, which provides detailed instructions, diagrams, and a parts list.

The full list of components which you must remove (in order) to replace the LCD panel is on page 127. I’ve included some pictures from my upgrade for some of the trickier steps:

By following the instructions in the manual (starting by removing the battery pack and moving forward) I was able to install the new HD+ panel in my T420 fairly easily.

The new HD+ Panel, Installed!

Happy modding!

Managing ENV configuration in development

There are a few tools I've come across over the past couple years for managing local configuration via ENV. One which comes to mind is dotenv. Since I use pow to serve my Rails applications in development, I've always just exported environment variables in a local .powenv file. It's simple, effective and doesn't require any extra tooling.

I recently deployed a client project with the talented folks over at Blue Box, and one of their ops guys introduced me to .rbenv-vars. While using .powenv was fine, .rbenv-vars has a distinct advantage: I no longer have to type source .powenv when running commands in my shell (or create separate configuration to automate it). Every time Ruby is executed, the project's configuration is magically available.

I hadn't tried this on my local machine until just now, when I got tired of manually including Code Climate's repo token when running tests:

CODECLIMATE_REPO_TOKEN=asdf rake

My first attempt to get .rbenv-vars up and running was to follow the instructions on their GitHub page. This didn't work for whatever reason, so I uninstalled the plugin (rm -rf ~/.rbenv/plugins/rbenv-vars) and ran:

brew install rbenv-vars && rbenv rehash

Boom. Next, I created a local .rbenv-vars file in my project directory:

echo "CODECLIMATE_REPO_TOKEN=asdf" > .rbenv-vars

Lastly, I gitignored this file globally so that I wouldn't accidentally commit it on future projects:

echo ".rbenv-vars" >> ~/.gitignore

Now, whenever I run rake, Code Climate gets updated with the latest test coverage. Easy!
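
If you want to sanity-check that the plugin is picking the file up, any Ruby process started from the project directory should see the variable (the project path here is made up; the value is the fake token from above):

cd ~/code/my-project
ruby -e 'puts ENV["CODECLIMATE_REPO_TOKEN"]'
# => asdf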

Recording inbound Rack requests for test fixtures

One of my open source projects over at Honeybadger.io is a Ruby gem which receives webhooks from a variety of inbound email services (works with any Rack-based application) and converts them into a standard Ruby Mail::Message.

The main benefit of this is that one really doesn't need to know anything about the format of the webhooks they are consuming -- they just pass a Rack::Request to Incoming! and get back a familiar Mail::Message. Incoming! also makes it a snap to switch email services.

Testing all of the various webhook payloads is not a snap, unfortunately; some post JSON, some post form parameters, and some post raw email messages as the request body. The other day I was trying to write some integration specs for the various strategies. I was looking for something which would basically record an inbound request and then allow me to replay it later. I found one option, but it wasn't quite what I was looking for -- what I wanted is basically VCR in reverse.

After a few failed approaches, it struck me that since a Rack::Request is the only dependency of Incoming!, that's really all I need to test it. And since a Rack::Request is basically just a wrapper around the Rack env Hash, I can dump that Hash to a file and reload it as a fixture in my specs. To accomplish the first half of this (creating the fixtures), I built a little Rack application which I named FixtureRecorder:

# spec/recorder.rb
require 'rack'

FIXTURES_DIR = File.expand_path('../../spec/fixtures', __FILE__)

class FixtureRecorder
  def initialize(app)
    @app = app
  end

  def call(env)
    env['fixture_file_path'] = file_path_from(env)
    begin
      @app.call(env)
    ensure
      File.open(env['fixture_file_path'], 'w') do |file|
        file.write(dump_env(env))
      end
    end
  end

  def file_path_from(env)
    file_path = env['PATH_INFO'].downcase.gsub('/', '_')[/[^_].+[^_]/]
    file_path = 'root' unless file_path =~ /\S/
    File.join(FIXTURES_DIR, [file_path, 'env'].join('.'))
  end

  def dump_env(env)
    safe_env = env.dup
    safe_env.merge!({ 'rack.input' => env['rack.input'].read })
    safe_env = safe_env.select { |_,v| Marshal.dump(v) rescue false }
    Marshal.dump(safe_env)
  end
end

app = Rack::Builder.new do
  use FixtureRecorder
  run Proc.new { |env|
    [200, {}, StringIO.new(env['fixture_file_path'])]
  }
end

Rack::Handler::WEBrick.run app, Port: 4567

The approach is similar to requestb.in, except that it runs locally and records the Rack env directly to a fixture file. The fixtures are named using the requested path name so that I can point multiple webhooks to the same server:

A local request to http://localhost:4567/sendgrid results in the fixture spec/fixtures/sendgrid.env
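
Before pointing real webhooks at it, you can smoke-test the recorder locally; the response body is the path of the fixture it just wrote (the form field here is made up):

curl -X POST -d 'subject=hello' http://localhost:4567/sendgrid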

In my specs, I have a little helper to load the file contents back into Ruby and instantiate the Rack::Request:

# spec/spec_helper.rb
require 'rspec'
require 'rack'

FIXTURE_DIR = File.expand_path('../../spec/fixtures', __FILE__)

RSpec.configure do |c|
  # ...
  module Helpers
    def recorded_request(name)
      fixture_file = File.join(FIXTURE_DIR, "#{name}.env")
      env = Marshal.load(File.read(fixture_file))
      env['rack.input'] = StringIO.new(env['rack.input'])
      Rack::Request.new(env)
    end
  end

  c.include Helpers
end

The rest is pretty simple. In order to record fixtures, I first start the Rack server:

ruby spec/recorder.rb
[...] INFO  WEBrick 1.3.1
[...] INFO  ruby 2.0.0 (2013-06-27) [x86_64-darwin12.4.0]
[...] INFO  WEBrick::HTTPServer#start: pid=85483 port=4567

Next, I launch ngrok to expose the Rack server publicly:

ngrok                                              (Ctrl+C to quit)
                                                                   
Tunnel Status    online                                            
Version          1.6/1.5                                           
Forwarding       http://56e10a0f.ngrok.com -> 127.0.0.1:4567       
Forwarding       https://56e10a0f.ngrok.com -> 127.0.0.1:4567      
Web Interface    127.0.0.1:4040                                    
# Conn           0                                                 
Avg Conn Time    0.00ms 

Lastly, I configure each webhook to post to http://56e10a0f.ngrok.com/strategy-name, and then send some emails. The requests are received, and the Rack env is dumped to spec/fixtures/strategy-name.env.

And that's it. Using the fixture in action looks like this:

# spec/integration_spec.rb
require 'spec_helper'

describe Incoming::Strategies::Mailgun do
  let(:receiver) { test_receiver(:api_key => 'asdf') }

  describe 'end-to-end' do
    let(:request) { recorded_request('mailgun') }
    before do
      OpenSSL::HMAC.stub(:hexdigest).
        and_return(request.params['signature'])
    end

    it 'converts the Rack::Request into a Mail::Message' do
      expect(receiver.receive(request)).to be_a Mail::Message
    end
  end
end