Data retention with the Serverless Framework, DynamoDB, and Go

At Honeybadger we offer standard data retention periods from which our customers can choose. Depending on their subscription plan, we’ll store their error data for up to 180 days. Some customers, though, need a custom retention period. Due to compliance or other reasons, they may want to enforce a data retention period of 30 days even though they subscribe to a plan that offers a longer one. We allow our customers to configure this custom retention period on a per-project basis, and we then delete each error notification based on the schedule that they have set. Since we store customer error data on S3, we need to keep track of every S3 object we create and when it should be expired so that we can delete it at the right time. This blog post describes how we use S3, DynamoDB, Lambda, and the Serverless Continue reading “Data retention with the Serverless Framework, DynamoDB, and Go”
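The gist of the bookkeeping: each S3 object gets a record keyed by its object key, with an expiration timestamp computed from the project's retention setting (DynamoDB's TTL feature expects epoch seconds). A rough sketch of that calculation in Ruby (the real implementation is in Go; the key and field names here are hypothetical):

```ruby
require 'time'

# Compute the epoch-seconds TTL value for an item tracking an S3 object
def expiration_epoch(created_at, retention_days)
  (created_at + retention_days * 24 * 60 * 60).to_i
end

# What we'd record for an object under a 30-day retention project
item = {
  s3_key:     "notices/2018/01/abc123.json",             # hypothetical key
  expires_at: expiration_epoch(Time.utc(2018, 1, 1), 30) # epoch seconds
}
```

DynamoDB then deletes (or lets you sweep) items whose TTL attribute has passed, which tells you which S3 objects to remove.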

2017 in review

I can’t recall having done a year-in-review type of blog post before,
but when Patrick suggested it recently, it seemed like a great idea, so
I thought I’d give it a shot.

In short, 2017 was a great year! 🙂 I moved all of our servers from a
colocation facility to AWS in January, which helped me sleep a lot
better at night. Over the course of the year I continued to improve our
infrastructure, and we now have a very reliable and self-healing system.
Nearly everything we do (application, search, and database servers) is
self-managed, so it’s been fun to level-up my distributed system skills.
In December I put my AWS skills (literally) to the test by passing the
AWS Certified Developer – Associate exam, my first AWS certification.

I also spent some time working on developing my marketing skills by
taking The Marketing Seminar by Seth Godin. The Continue reading “2017 in review”

Writing Again

A while back I changed the stack I used for publishing this blog with
the hope that I would write more because it would be easier to publish.
Looking back at how little I’ve written since I made that change, I
can see that it didn’t work out so well. 🙂 I’m not going to change
my stack again (just yet), but I am going to try to write a bit more.
Hopefully it won’t be another year before my next post.

One cause of my lack of writing is how busy I have been running my
error tracking service. It has been a lot
of fun, but it has also been a lot of work. The good news is that the
business continues to grow, and we just passed the five-year mark. My
co-founders and I are definitely living the bootstrapper’s dream, doing
work that we love, and Continue reading “Writing Again”

Solr Recovery

At Honeybadger this morning we had a
failure of our SolrCloud cluster (of three servers). Each of the three
servers has a replica of the eight shards of our notices collection.
Theoretically this means that two of the three servers can go away and
we can still serve search traffic. Sadly, reality doesn’t always match
the theory.

What happened to us this morning is that some of the shards became
leaderless when one of the servers ran out of disk space and started
throwing errors. In other words, we kept seeing this error in the logs:
No registered leader was found. As a result, the two remaining
servers refused to update the index, which brought a halt to
search-related operations. Since I’m relatively new to Solr, I had to
bang my head against the wall for a bit before I stumbled upon the

Simply bringing down one of the Continue reading “Solr Recovery”

Steppin’ up

Rob Walling wrote a great post yesterday
about building up your bootstrapped business over time by taking on
smaller projects before diving into big ones. His post reminded me of
Amy Hoy’s Stacking the Bricks philosophy, and I
think that taking the approach of learning to walk before learning to
run makes sense. Rob’s post made me reflect on my experience building
products that have gone from producing no income, to putting some change
in my pocket, to providing a nice income for my family, and I thought it
would be fun to share.

My day job has always been building web apps, so my first side projects
were also web apps: first, a community site, and later, a SaaS app for
managing test plan execution for software testers.
Those were fun, but never amounted to much.

The first side project I did with the goal of making money was a
self-published ebook about building e-commerce sites with Rails. This
was in 2006, when Rails was young, and that $12 ebook sold pretty well.

In 2007 I started freelancing full time, and I decided that I needed a
product with recurring revenue to help even out my cash flow, so I
started on Catch the Best, a SaaS app
that scratched my own itch. I launched that in October of 2007 (working
on it part-time while working on client projects), and it got some
paying customers from day one. The revenue from that app has never been
large on a MRR basis, but it has been consistent, so I’m pretty happy
having that as a cash machine.

In 2008 I was building a SaaS billing system in Rails for the third
time. The first time was for Catch the Best, and the other two were
for clients who had engaged me to build SaaS apps for them.
It occurred to me that other developers might be
interested in buying what I had built so that they could save
themselves the time of building their own. So I cleaned up the code I
had written and launched RailsKits to sell
that billing code to other Rails developers. I priced it starting at $250,
and it was a hit. It effectively replaced my freelance income for a
while, and while it doesn’t make as much as it used to (since other
options for implementing billing have become available), it still is a consistent revenue stream
for me.

After RailsKits, I knew I wanted to do another SaaS project, and in 2012
I found the right one: Honeybadger — an
application health monitoring service for Ruby developers. It has been
an incredibly fun project with awesome co-founders. Since its launch in
the fall of 2012 it has grown consistently, allowing me to cut back and
eventually eliminate my freelancing business.

I didn’t set out with a plan to start with an ebook, then move to a
larger product, then move to a recurring revenue product, but after
having considered Rob’s Stairstep Continue reading “Steppin’ up”

Inject Your App Data Into Help Scout

At Honeybadger we use Help Scout
to manage our customer support, and
that has worked out well for us. One thing I’ve wanted for quite a
while is more integration between Help Scout, our internal dashboard,
and the Stripe dashboard. After taking a mini-vacation to attend
MicroConf this week, I decided it was time to make my dreams come true.

Help Scout allows you to plug “apps” into their UI, and you can build
your own apps to populate the sidebar when looking at a help ticket.
All you have to do is provide a URL that Help Scout can hit which
returns a blob of HTML to be rendered on the page. Your app receives a
signed POST request where the payload is some information about the
support ticket you are viewing, which includes the email address of the
person who created the ticket. Here’s a Rails controller that receives
the request, verifies the signature, and returns some HTML for the user
found by email address:

require 'base64'
require 'hmac-sha1'

class HelpscoutController < ApplicationController
  skip_before_filter :verify_authenticity_token
  before_filter :verify_signature

  def user
    payload = JSON.parse(request.raw_post)
    if payload['customer'] && payload['customer']['email'] && @user = User.where(email: payload['customer']['email']).first
      render json: { html: render_to_string(action: :user, layout: false) }
    else
      render json: { html: "User not found" }
    end
  end

  private

  def verify_signature
    bail and return false unless (sig = request.headers['X-Helpscout-Signature']).present?

    (hmac = HMAC::SHA1.new("secret-that-you-enter-in-helpscout's-ui")).update(request.raw_post)

    bail and return false unless sig.strip == Base64.encode64(hmac.digest).strip
  end

  def bail
    render json: { html: "Bad signature" }, status: 403
  end
end

After fetching the user record, it returns a blob of HTML via a HAML view:

  %li Created on #{l(@user.created_at.to_date, format: :long)}

Then you’re done! Now when you view a ticket in Help Scout you’ll see
info from your database about that user in the sidebar.
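Outside of Rails, the signature scheme itself can be sketched with the standard library's OpenSSL::HMAC instead of the hmac-sha1 gem (assuming, as in the controller above, that Help Scout sends a Base64-encoded HMAC-SHA1 of the raw POST body; the secret and payload here are made up):

```ruby
require 'openssl'
require 'base64'

secret  = "secret-that-you-enter-in-helpscout's-ui"  # hypothetical secret
payload = '{"customer":{"email":"user@example.com"}}' # hypothetical raw POST body

# What would arrive in the X-Helpscout-Signature header:
# Base64 of the HMAC-SHA1 of the raw body
header_sig = Base64.encode64(OpenSSL::HMAC.digest(OpenSSL::Digest.new('SHA1'), secret, payload)).strip

# Verification: recompute from the body we received and compare
recomputed = Base64.encode64(OpenSSL::HMAC.digest(OpenSSL::Digest.new('SHA1'), secret, payload)).strip
valid = (header_sig == recomputed)

# A tampered body produces a different signature and fails the check
tampered_sig = Base64.encode64(OpenSSL::HMAC.digest(OpenSSL::Digest.new('SHA1'), secret, payload + 'x')).strip
```

Using OpenSSL from the standard library avoids the extra gem dependency while producing the same digest.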

Default a Postgres column to the current date in a Rails migration

If you want a Postgres column (aside from created_at) to be populated with the current date when no other date is specified, you may be tempted to create a migration like this:

add_column :invoices, :paid_on, :date, default: 'now()'

That will look like it works — you create a record, it gets populated with today’s date, and all is good. However, if you look at your schema, you will notice that the new field has a default of the date the migration ran instead of now(). Oops. 🙂

You might try to create the column with the recommendation from the Postgres documentation:

add_column :invoices, :paid_on, :date, default: 'CURRENT_DATE'

But that fails because Rails tries to quote that ‘CURRENT_DATE’ for you before it goes to Postgres, which blows up. Now what?

Here’s how to do what you want:

add_column :invoices, :paid_on, :date, default: { expr: "('now'::text)::date" }

This avoids the quoting problem (by using expr) and avoids the baked-in-migration-date problem (by using the default expression ('now'::text)::date, which is effectively the same as CURRENT_DATE).

And now when you insert a record without specifying a value for that field, you get the date of the insertion, rather than the date of the field being created. 🙂

Searchlight and CanCan

I’m currently working on a client project where site administrators use
the same UI that site users do, so there are permissions checks in the
views and controllers to ensure the current user has the right to do or
see certain things. CanCan provides the access control, which takes
care of most of the issues with a simple can? check or

In one case I wanted to provide search on a list of items (the index
action) to admins so they could search through all items in the database, but users
should only be able to search their own items. I’m using Searchlight
(highly recommended) for search, which returns results as an
ActiveRecord::Relation, so it’s easily chainable via CanCan, like so:

class InvoicesController < ApplicationController
  def index
    # InvoiceSearch is a Searchlight search class (name assumed)
    @search = InvoiceSearch.new(params[:search])
    @invoices = @search.results.accessible_by(current_ability, :index)
  end
end

Searchlight is also smart enough to return all results if there are no
search params provided, so this also works as a typical index action
that lists all items the user can see. If you’re curious about the
@search instance variable, that is used in the search form in the
index view.

So, if you need search with access control, use Searchlight and
CanCan… they are a great combo!

Installing Ruby 2.0

I had a bit of an adventure this morning getting Ruby 2.0 installed on
my mac with Mountain Lion, so I thought I’d share the tale with you in
case it can help save you some time on doing the same. Up until now
I’ve been developing my Rails apps with 1.9.3, but it was time to
upgrade and experience all the new hotness of the latest Ruby. I had
tried to install Ruby 2.0 before, but I had been stymied by an openssl
error when building. Today was the day to get that sorted.

I’m using rbenv to manage the different Ruby versions on my machine, so
the first step was to update ruby-build, which I have installed via
homebrew, so that I could fetch and build the latest Ruby. Sadly, I had
some weirdness with my homebrew installation that prevented me from
getting the latest homebrew, which prevented me from getting the latest
ruby-build, which prevented me from being able to install the latest
Ruby (2.0.0-p247):

$ brew update
error: Your local changes to the following files would be overwritten by merge:

I was pretty sure I hadn’t changed anything in homebrew myself, and I
found some guidance in the github issues list for homebrew that I should
just blow away my local changes with a git reset, which didn’t initially
work because apparently some permissions had changed in /usr/local:

$ cd /usr/local/Library
$ git reset --hard FETCH_HEAD
error: unable to unlink old '' (Permission denied)
error: unable to unlink old '' (Permission denied)
fatal: Could not reset index file to revision 'FETCH_HEAD'.

$ sudo chown -R `whoami` /usr/local
$ git stash && git clean -d -f
$ brew update

Now I was in business. Next up I upgraded ruby-build, and since I had
already installed openssl via homebrew previously, I could use that
while compiling:

$ brew upgrade ruby-build
$ env CONFIGURE_OPTS='--with-openssl-dir=/usr/local/opt/openssl' rbenv install 2.0.0-p247

Boom! Ruby 2.0 was finally installed. But then I hit a snag while
trying to install gems for one of my Rails projects:

$ bundle
Could not verify the SSL certificate for
There is a chance you are experiencing a man-in-the-middle attack, but most likely your system doesn't have the CA certificates needed for verification. For information about
OpenSSL certificates, see To connect without using SSL, edit your Gemfile sources and change 'https' to 'http'.

That was an especially useful error message, since that link provided a
tip on easily getting some updated ssl certificates:

$ brew install curl-ca-bundle

And that tells you to

$ export SSL_CERT_FILE=/usr/local/opt/curl-ca-bundle/share/ca-bundle.crt

And now everything works. Woohoo!

Proven: Customer interviews save you time and money

I’m on my way home from MicroConf 2013,
having learned a lot and having had a lot of fun. This was the third
year Rob and Mike have put on the conference, and the third year that it
has been an awesome experience. As
with the previous years, the speakers and the attendees were bright,
informative, friendly, motivated, and motivating. If you run a
bootstrapped biz, or are thinking about running one, MicroConf is the
place to be to really increase your motivation for and your knowledge
about running your business.

As an aside, I’m looking forward to
BaconBizConf at the end of
this month for the same reasons — I have no doubt it will also be a
great place to be for those who are interested in getting better at
making money. 🙂

I could write several blog posts about terrific takeaways from this
conference, but in this one I want to focus on the urging (especially by
Hiten Shah and Mike Taber)
to focus on customer development when starting and growing your business.
This topic had a particular impact on me at this time because it’s been
on my mind the last few weeks.

A couple of weeks ago I had the privilege of attending
the Switch Workshop put on by the Rewired Group and hosted by Jason
Fried of 37 Signals. That one-day workshop (which I also highly
recommend) was entirely focused on
how people make purchasing decisions, and how understanding that process
can help you find the right customers and give them the value for which
they are searching. Going through that workshop really got
me in the mindset of putting myself in my customer’s shoes (and head),
and letting that drive the decisions I make about my business. I had
been interested in customer development before, but that workshop really
sold me on just how effective it is to talk to customers about how and
why they’ve made the decisions they have made.

I work with entrepreneurs quite a bit in my consulting business (in
fact, I work almost exclusively with people who have an idea and are
wondering how they can convert that idea into a business), and all too
often I see a lack of diligent customer research cause time and money to
be wasted, building a product or service that not enough people really
want. That’s the bad news. But the good news is that this particular
problem is actually easy to overcome! And, even better, it doesn’t have
to take a lot of time or money! And, best of all, it’s even fun! That
workshop, and the emphasis that Hiten and Mike placed on really
understanding your customer, gave me the information and motivation
I needed to implement a process to solve this problem for my own
business and help others who face this very common problem.

The super-simplified, fun-to-follow and obscenely-effective process
can be broken down into two steps. First, follow Hiten’s advice, and
start with a hypothesis, which I’ll paraphrase a bit:

We believe that (some type of customer / customer segment) has a
problem (with something or doing something)

Once you fill in the blanks with a very specific description of what you
think your ideal customer is and with a description of what the job they
want done is, then you are ready to move on to step number two (and this
is where it gets fun): talking to those customers. But here’s the
trick: Don’t talk about you! Talk about them! Talk about who they are
and what they do (thus making sure they really are your target customer,
and/or helping you learn about who your target customer is). Then, talk
about what they are doing today to solve the problem that you are
thinking of addressing. Ask them how they came to the solution that
they are using, or how they confronted the problem on which you are
focused. If all goes well, you’ll end up talking very little about the
product or service idea in your head, and instead you’ll end up hearing
a lot about what they do and how they do it, which will give you such
good insights that you’ll find yourself smiling as they are talking.

Of course this process isn’t limited to just new ideas, products, or
services. You can use this same process when working on a new feature
for your current product, or when trying to find out why what you are
currently offering isn’t quite as successful as you’d like it to be.

Now, you might be wondering how I know this process is worthwhile and
fun. It’s because I’ve done it myself, of course! 🙂 And I can
certainly attest that it works wonders. In fact, I got to practice the
process while still at MicroConf, since some of the attendees are
ideal customers for my Rails application monitoring service,
Honeybadger. There’s a new feature I’ve
been thinking about the past few weeks, and I wanted to make sure I was
on the right track with it. I had a hypothesis that Rails developers
would like a better way to do activity X than the way they are currently
doing it. So I found a few Rails developers that I had already been
talking to during the conference, and I asked them one question: When
do you do activity X and why? From that one simple question, and a short
conversation that followed it, I realized that I was indeed on to
something with my planned feature. More importantly, what I heard from
my customers let me know that my planned approach was actually a bit
flawed. And better yet, I realized that the approach that would
better solve the problem (as they saw the problem) would actually be
easier for me to implement than my initial plan. I was thrilled!

I hope my small and simple experience of doing customer development —
and particularly interviewing customers — has helped persuade you to get
out there and do it yourself, if you aren’t already doing it.

If you think this sounds interesting, and you’d like some help saving time
and money while using this process to help build your product or your next
feature, do get in touch. I’ll be happy to
answer your questions or provide whatever help I can.

Rails Caching Strategies (a presentation)

Tonight I gave a presentation at the Rails Meetup in Seattle on caching strategies with Rails, and I had a great time. I’m convinced Rails developers don’t use caching enough, and we have so many good options for caching in our apps, we really should be avid practitioners of caching. 🙂

I had a request to put my slides up, so here they are. They are a bit more useful if you have the context of the words I said along with them, but that’s the way these things go, right?

For those of you who were there, I suppose I should apologize for mentioning (so many times) the Airbrake competitor that I recently launched. After the 2nd or so time I mentioned it, it became a bit of a joke to see how many times I could mention it, and I do hope I didn’t annoy you. 🙂

Looking for Ruby and Rails developers

I’m on the hunt for development help, so if you’ve been working with
Ruby and/or Rails for a while, and you’re looking for some contract
hours to fill, I’d love to hear from you.

A fellow entrepreneur here in Seattle and I both need someone part-time,
so between the two of us, we might be able to use all the hours you can
spare. We’re both open to intermediate through advanced Ruby/Rails
developers, with hourly prices from $60 to $120, with at least 10 to 20
hours available per week. Both of us are looking to work with a
developer for an extended period of time, so this could be the beginning
of a beautiful friendship. 🙂

My needs include getting help with RailsKits,
and Catch the Best. If you end up helping
me with RailsKits, that could lead to additional engagements, as I’m
often getting requests from people Continue reading “Looking for Ruby and Rails developers”

Missing attr_unsearchable in ransack?

If you are upgrading to ransack from meta_search, and you are missing
being able to use attr_unsearchable to hide various model methods from
search, you can add this to your model instead:

def self.ransackable_attributes(auth_object = nil)
  (column_names - ['company_name']) + _ransackers.keys
end

In this case, company_name will no longer be searchable with the
dynamic scopes that ransack creates.
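The effect of that override is just set arithmetic, which can be sketched in plain Ruby (the column names here are hypothetical; in the model, column_names and _ransackers come from ActiveRecord and ransack):

```ruby
# Stand-ins for what ActiveRecord and ransack provide on the model
column_names = %w[id name company_name created_at]
ransackers   = {}  # custom ransackers; none defined in this sketch

# Everything is searchable except the columns you subtract out,
# plus any custom ransackers you've defined
searchable = (column_names - ['company_name']) + ransackers.keys
```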

Easy mobile device login for Rails apps

I read an article this morning on TechCrunch about an upcoming service that makes
it easy for developers to add mobile logins to their web applications,
and I thought I’d try something similar for a new project I’m working on.

If you use the Devise gem for managing logins for your Rails app, and
you pass :token_authenticatable to the devise method, then your
users can log in with a link that includes an authentication token,
bypassing the email/username and password login. With that in place,
all you need to do is generate a QR code that encodes a link to your app
with this authentication token included. Here’s how you can do that:

Following the instructions of the rqrcode gem,
install the gem by adding gem 'rqrcode' to your Gemfile and running
bundle install. Generating the code can be done in a HAML view like so:

-  Continue reading "Easy mobile device login for Rails apps"
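Before generating the QR code, you need the URL it will encode: the sign-in path with the user's authentication token appended. A sketch with the standard library (the host, path, and token are hypothetical, and Devise's token param name depends on your configuration):

```ruby
require 'uri'

token = "abc123"  # e.g. user.authentication_token

# Build the token-login URL that the QR code will encode
url = URI::HTTPS.build(
  host:  "app.example.com",
  path:  "/users/sign_in",
  query: "auth_token=#{token}"
).to_s
```

That url string is what you would hand to the QR code generator in the view.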

Print Progress Bar Background Color in Chrome

I’m working on a project today that uses Twitter Bootstrap, displays
progress bars, and has a requirement to print those progress bars.
Sadly, Chrome doesn’t like to print background colors by default, so
printing the progress bars didn’t work so well. Here’s the trick to get
it to work: -webkit-print-color-adjust:exact. This instructs Chrome
to print the background color.

After that, adding a few more styles (in a print stylesheet) makes for nice-looking progress bars (SCSS):

.progress {
  background-image: none;
  -webkit-print-color-adjust: exact;
  box-shadow: inset 0 0;
  -webkit-box-shadow: inset 0 0;

  .bar {
    background-image: none;
    -webkit-print-color-adjust: exact;
    box-shadow: inset 0 0;
    -webkit-box-shadow: inset 0 0;
  }
}

Googlebot Gotcha

Did you build your site thinking that googlebot can’t understand your javascript? I did, and I was a bit surprised when I learned I was wrong…

Continue reading “Googlebot Gotcha”

Skipping Asset Compilation with Capistrano

Capistrano has a handy task that runs rake assets:precompile for you when you are deploying your Rails 3.1 application. This gives you an easy way to get the performance boosts of having only one css file and one javascript file to load per request. The price you pay for that benefit is the amount of time it takes to run that rake task when you are deploying. There is a way to get the benefit while reducing that cost, though.

Since capistrano creates a symlink for the assets that is moved across deploys, you really don’t need to compile those assets for any deploy where the assets didn’t change. Instead, all you need to do is move the symlink. However, the default capistrano for compiling the assets does compile them every time, regardless of whether any assets were changed in the set of commits that you are deploying. The Continue reading “Skipping Asset Compilation with Capistrano”
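The skip logic boils down to one question per deploy: did any asset-related files change between the previously deployed revision and the one being deployed? A sketch of that check (the list of paths to watch is an assumption; the changed-file list would come from git diff --name-only between the two revisions):

```ruby
# Paths that should trigger an assets:precompile when they change
ASSET_PATHS = %w[app/assets/ lib/assets/ vendor/assets/ Gemfile.lock]

# Given the filenames changed between two deploys, decide whether to compile
def assets_changed?(changed_files)
  changed_files.any? do |path|
    ASSET_PATHS.any? { |prefix| path.start_with?(prefix) }
  end
end
```

When this returns false, the deploy can just re-point the shared assets symlink instead of recompiling.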

Deploying New Relic Server Monitoring with Chef

This morning I deployed New Relic’s new Server Monitoring feature for the first time (I’ve used Scout previously). It’s cool to see your server vitals right next to all your app vitals, and their interface looks attractive, to boot.

Since I deploy everything with Chef, I threw together a quick Chef recipe to automate the installation of New Relic’s server monitoring agent. It has been tested with Ubuntu 10.04 LTS, and you can configure the license key in your Chef JSON config.

You can grab the recipe from my Chef recipe repository at Github.

Improvements to Bundle Watcher

I just released an update to Bundle Watcher this morning that may make it a little easier to get your Ruby gem updates tracked. Now you can specify a URL where your Gemfile.lock resides, rather than having to upload a file.

You can also now see a list of bundles that you’re tracking, once you’ve logged in via Github. This list shows you at a glance which gems have been updated for your bundles.

Faker 1.0 released

Earlier this week I released version 1.0 of the Faker gem. It’s been about 4 years since the initial release of the gem, and the API has been fairly stable for the last couple of years, so I figured it was a good time to make the jump to 1.0. 🙂

This release finishes the conversion to I18n. Just about everything is in the locale files now, including the ability to define custom formats for everything – company names, street addresses, etc. And, with the magic of method_missing, you can add new items to your locale file and have them show up as methods in the Faker classes.
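A toy version of that method_missing trick — locale entries showing up as methods — looks something like this (Faker's real implementation also handles formats and locale fallbacks; the class and entries here are made up):

```ruby
class LocaleBacked
  # Stand-in for entries loaded from a locale YAML file
  LOCALE = { 'buzzword' => ['synergy', 'pivot'] }

  # Any message matching a locale key returns a random entry for that key
  def self.method_missing(name, *args)
    key = name.to_s
    LOCALE.key?(key) ? LOCALE.fetch(key).sample : super
  end

  # Keep respond_to? honest about the dynamic methods
  def self.respond_to_missing?(name, include_private = false)
    LOCALE.key?(name.to_s) || super
  end
end
```

Adding a new key to the locale hash immediately makes a matching method available, with no other code changes.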

The 1.0 release also settles some long-standing issues people have had with bad interaction between Faker, Rails 2.3, and locales (especially fallbacks). Though I’m not actively seeking to support Rails 2.3, I at least don’t want it Continue reading “Faker 1.0 released”