Let Nunes Do It

In a moment of either genius or delirium I decided to name my newest project after myself. Why? Well, here is the story whether you want to know or not.

Why Nunes?

Naming is always the hardest part of a project. Originally, it was named Railsd. The idea of the gem is to automatically subscribe to all of the valuable Rails instrumentation events and send them to statsd in a sane way; thus Railsd was born.

After working on it a bit, I realized that the project was just an easy way to send Rails instrumentation events to any service that supports counters and timers. With a few tweaks, I made Railsd support InstrumentalApp, a favorite service of mine, in addition to Statsd.

Thus came the dilemma. No longer did the (already terrible) name Railsd make sense. As I sat and thought about what to name it, I remembered joking one time about naming a project after myself, so that every time anyone used it they had no choice but to think about me. Thus Nunes was born.

Lest you think that I just wanted to name it Nunes only so that you think of me, here is a bit more detail. Personally, I attempt to instrument everything I can. Be it code, the steps I take, or the calories I consume, I want to know what is going on. I have also noticed that which is automatically instrumented is the easiest to instrument.

I love tracking data so deeply that I want to instrument your code. Really, I do. I want to clone your repo, inject a whole bunch of instrumentation and deploy it to production, so you can know exactly what is going on. I want to sit over your shoulder and look at the graphs with you. Ooooooh, aren’t those some pretty graphs!

But I don’t work for you, or with you, so that would be weird.

Instead, I give you Nunes. I give you Nunes as a reminder that I want to instrument everything and you should too. I give you Nunes so that instrumenting is so easy you will feel foolish not using it, at least as a start. Go ahead, the first metric is free! Yep, I want you to have that first hit and get addicted, like me.

Using Nunes

I love instrumenting things. Nunes loves instrumenting things. To get started, just add Nunes to your Gemfile:

# be sure to think of me when you do :)
gem "nunes"

Once you have nunes in your bundle (be sure to think of bundling me up with a big hug), you just need to tell nunes to subscribe to all the fancy events and provide him with somewhere to send all the glorious metrics:

# yep, think of me here too
require 'nunes'

# for statsd
statsd = Statsd.new(...)
Nunes.subscribe(statsd) # ooh, ooh, think of me!

# for instrumental
I = Instrumental::Agent.new(...)
Nunes.subscribe(I) # one moooore tiiiime!

With just those couple of lines, you get a whole lot of goodness. Out of the box, Nunes will subscribe to the following Rails instrumentation events:

  • process_action.action_controller
  • render_template.action_view
  • render_partial.action_view
  • deliver.action_mailer
  • receive.action_mailer
  • sql.active_record
  • cache_read.active_support
  • cache_generate.active_support
  • cache_fetch_hit.active_support
  • cache_write.active_support
  • cache_delete.active_support
  • cache_exist?.active_support

Thanks to all the wonderful information those events provide, you will instantly get some of these counter metrics:

  • action_controller.status.200
  • action_controller.format.html
  • action_controller.exception.RuntimeError – where RuntimeError is the class of any exceptions that occur while processing a controller’s action.
  • active_support.cache_hit
  • active_support.cache_miss

And these timer metrics:

  • action_controller.runtime
  • action_controller.view_runtime
  • action_controller.db_runtime
  • action_controller.posts.index.runtime – where posts is the controller and index is the action
  • action_view.app.views.posts.index.html.erb – where app.views.posts.index.html.erb is the path of the view file
  • action_view.app.views.posts._post.html.erb – I can even do partials! woot woot!
  • action_mailer.deliver.post_mailer – where post_mailer is the name of the mailer
  • action_mailer.receive.post_mailer – where post_mailer is the name of the mailer
  • active_record.sql
  • active_record.sql.select – also supported are insert, update, delete, transaction_begin and transaction_commit
  • active_support.cache_read
  • active_support.cache_generate
  • active_support.cache_fetch
  • active_support.cache_fetch_hit
  • active_support.cache_write
  • active_support.cache_delete
  • active_support.cache_exist
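Under the hood this is plain event plumbing: subscribe to an instrumentation event, bump a counter, record a timing. Here is a rough, self-contained sketch of that idea; the tiny Notifier class below is a stand-in for ActiveSupport::Notifications, not part of Nunes, and only the event and metric names come from the lists above.

```ruby
# Stand-in for ActiveSupport::Notifications, just enough to show the idea.
class Notifier
  def initialize
    @subs = Hash.new { |h, k| h[k] = [] }
  end

  def subscribe(event, &block)
    @subs[event] << block
  end

  def instrument(event, payload = {})
    start = Time.now
    yield if block_given?
    @subs[event].each { |sub| sub.call(start, Time.now, payload) }
  end
end

counters = Hash.new(0)
timers   = {}

notifier = Notifier.new

# Roughly what Nunes does for process_action.action_controller:
notifier.subscribe('process_action.action_controller') do |start, finish, payload|
  counters["action_controller.status.#{payload[:status]}"] += 1
  counters["action_controller.format.#{payload[:format]}"] += 1
  timers['action_controller.runtime'] = (finish - start) * 1000
end

notifier.instrument('process_action.action_controller', :status => 200, :format => 'html') do
  # controller action runs here
end

counters['action_controller.status.200'] # => 1
```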

But Wait, There is More!

In addition to doing all that work for you out of the box, Nunes will also help you wrap your own code with instrumentation. I know, I know, sounds too good to be true.

class User < ActiveRecord::Base
  extend Nunes::Instrumentable # OH HAI IT IS ME, NUNES

  # wrap save and instrument the timing of it
  instrument_method_time :save
end

This will instrument the timing of the User instance method save. What that means is when you do this:

# the nerve of me to name a user nunes
user = User.new(name: "NUNES!")
user.save

An event named instrument_method_time.nunes will be generated, which in turn is subscribed to and sent to whatever you used to send instrumentation to (statsd, instrumental, etc.). The metric name will default to “class.method”. For the example above, the metric name would be user.save. No fear, you can customize this.

class User < ActiveRecord::Base
  extend Nunes::Instrumentable # never

  # wrap save and instrument the timing of it
  instrument_method_time :save, 'crazy_town.save'
end

Passing a string as the second argument sets the name of the metric. You can also customize the name using a Hash as the second argument.

class User < ActiveRecord::Base
  extend Nunes::Instrumentable # gonna

  # wrap save and instrument the timing of it
  instrument_method_time :save, name: 'crazy_town.save'
end

In addition to name, you can also pass a payload that will get sent along with the generated event.

class User < ActiveRecord::Base
  extend Nunes::Instrumentable # give nunes up

  # wrap save and instrument the timing of it
  instrument_method_time :save, payload: {pay: "loading"}
end

If you subscribe to the event on your own, say to log some things, you’ll get a key named :pay with a value of "loading" in the event’s payload. Pretty neat, eh?


I hope you find Nunes useful and that each time you use it, you think of me and how much I want to instrument your code for you, but am not able to. Go forth and instrument!

P.S. If you have ideas for Nunes, create an issue and start some chatter. Let’s make Nunes even better!

Stupid Simple Debugging

There are all kinds of fancy debugging tools out there, but personally, I get the most mileage out of good old puts statements.

When I started with Ruby, several years ago, I used puts like this to debug:

puts account.inspect

The problem with this is twofold. First, if you have a few puts statements, you don’t know which output came from which object. This always led me to doing something like this:

puts "account: #{account.inspect}"

Second, depending on whether you are just in Ruby or running an app through a web server, puts is sometimes swallowed. This often led me to do something like this when using Rails:

Rails.logger.debug "account: #{account.inspect}"

Now, not only do I have to think about which method to use to debug something, I also have to think about where the output will be sent so I can watch for it.

Enter Log Buddy

Then, one fateful afternoon, I stumbled across log buddy (gem install log_buddy). In every project, whether it be a library, Rails app, or Sinatra app, one of the first gems I throw in my Gemfile is log_buddy.

Once you have the gem installed, you can tell log buddy where your log file is and whether or not to actually log like so:

LogBuddy.init({
  :logger   => Gauges.logger,
  :disabled => Gauges.production?,
})

Simply provide log buddy with a logger and tell it if you want it to be silenced in a given situation or environment and you get some nice bang for your buck.

One Method, One Character

First, log buddy adds a nice and short method named d. d is 4X shorter than puts, so right off the bat you get some productivity gains. The d method takes any argument and calls inspect on it. Short and sweet.

d account # will puts account.inspect
d 'Some message' # will puts "Some message"

The cool part is that on top of printing the inspected object to stdout, it also logs it to the logger provided in LogBuddy.init. No more thinking about which method to use or where output will go. One method, output sent to multiple places.

This is nice, but it won’t win you any new friends. Where log buddy gets really cool, is when you pass it a block.

d { account } # puts and logs account = <Account ...>

Again, one method, output to stdout and your log file, but when you use a block, it does magic to print out the variable name and its inspected value. You can also pass in several objects, separating them with semi-colons.

d { account; account.creator; current_user }

This gives you each variable on its own line with the name and inspected value. Nothing fancy, but log buddy has saved me a lot of time over the past year. I figured it was time I send it some love.
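For the curious, the variable-name magic is possible because a Ruby block knows where it was defined. The following is not log buddy's actual implementation, just a stripped-down sketch of the trick; only the method name `d` matches the real gem.

```ruby
# Toy version of log_buddy's d { } helper: find the source line the block
# came from, pull out the expressions between the braces, and eval each one
# in the block's own binding so we can print "name = value".
def d(&block)
  file, line = block.source_location
  source = File.readlines(file)[line - 1]
  exprs = source[/d\s*\{(.+?)\}/, 1].to_s.split(';').map(&:strip)
  exprs.map do |expr|
    out = "#{expr} = #{eval(expr, block.binding).inspect}"
    puts out
    out
  end
end

account = "Acme"
d { account } # prints account = "Acme"
```

A real implementation needs to cope with multi-line blocks and `do ... end`, but the core idea is exactly this: parse the call site, then evaluate the parsed names in the caller's binding.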


Data Modeling in Performant Systems

I have been working on Words With Friends, a high traffic app, for over six months. Talk about trial by fire. I never knew what scale was. Suffice to say that I have learned a lot.

Keeping an application performant is all about finding bottlenecks and fixing them. The problem is each bottleneck you fix leads to more usage and a new bottleneck. It is a constant game of cat and mouse. Sometimes you are the cat and sometimes, well, you are not.

Most of the time, the removal of those bottlenecks is about moving hot data to places that can serve it faster. Disks are slow, memory is fast, enter more memcached.

Over time, you work and work to move hot data into memory and simplify your data access to fit into memory. Key here, value there. Eventually, you get to a place where you have simplified how you access your data into simple key/value lookups.

Games get marshaled into a key named "Game:#{id}". Joins are simplified to selecting ids and caching the array of ids into a key such as "User:#{id}:active_game_ids" or "User:#{id}:over_game_ids". In turn, those arrays are turned into objects by un-marshaling the contents of "Game:#{id}", etc.
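In code, the pattern reads something like this. A plain Hash stands in for memcached here, and the Game struct is made up for the example; only the key naming comes from the text above.

```ruby
# Plain Hash standing in for memcached; Marshal plays the serializer.
STORE = {}

Game = Struct.new(:id, :state)

def write_game(game)
  STORE["Game:#{game.id}"] = Marshal.dump(game)
end

def read_game(id)
  data = STORE["Game:#{id}"]
  data && Marshal.load(data)
end

# Joins collapse into cached arrays of ids...
STORE["User:42:active_game_ids"] = Marshal.dump([1, 7])

write_game(Game.new(1, "in_progress"))
write_game(Game.new(7, "your_move"))

# ...which get turned back into objects by un-marshaling each game key.
active_ids   = Marshal.load(STORE["User:42:active_game_ids"])
active_games = active_ids.map { |id| read_game(id) }
active_games.map(&:state) # => ["in_progress", "your_move"]
```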

Your data model morphs from highly relational to key/value because key/value is fast and memcached can withstand a bruising.

Do it once, and you know how to do it in the future. The problem is by the time you get to this data model, it is kind of bolted on/in to your app.

What if you could design it this way from the beginning? What if you had no option but to think through your data model in keys and values? Need your data in two different ways? Put it in two different places, etc, etc.

I have good news. Now you can.

A Little History

Not long into my tenure with WWF, we were hitting a lot of walls and there was a lot of talk about NoSQL. Mongo? Membase? Cassandra? Riak?

Which one will work best for the problem at hand? What if we could try them all really easily by just changing which place the data went to? What if we could try out more than one at once?

I sat down one weekend and started thinking about the app and realized what I just talked about above. Along the way, our data access changed from relational to key lookups. This made me think about a hash.

Hashes are so versatile, and yet, so constrained. Hashes are for reading, writing and deleting keys, just like key/value stores. I did a bit of GitHub searching and stumbled across moneta, by Yehuda Katz.

Moneta immediately struck me as brilliant. I was shocked there was no activity around it. If you only allow yourself to read, write and delete with the same API, you can make nearly any data store talk the correct language.

I fiddled with it and forked it, but in the end, it was not quite what I was looking for. I liken it to my first house. I like the house, but having lived in it for six years, I know exactly what I want out of my next house.

The folks at Newtoy (now Zynga with Friends) had mentioned that they wanted to build their own object mapper and name it ToyStore—such a great name.

In a fit of inspiration over the 4th of July weekend, I cranked out attributes and initialization, relying heavily on ActiveModel. It was really fun. I emailed the crew when the next work day came around and they were stoked.

It began to occupy some of my work-related time and Geoffrey Dagley started helping me with it. Over the next few weeks, Geof and I hammered out validations, serialization, callbacks, dirty tracking, and much more.

Everything was built on the premise that the only acceptable methods that could be used to read, write and delete data were read, write and delete.

Adapter: The Common Interface

Over time, Brandon Keepers got involved and ToyStore started looking pretty legit. We switched from using Moneta as the base to something I whipped together in a few hours: Adapter.

Defining an adapter is as simple as telling it how the client reads, writes and deletes data. You also have to define a clear method for convenience and to stick close to the Ruby hash API.

The client can be anything that you want to have a unified interface. For example, this is how you would create an adapter to store things in a ruby hash.

Adapter.define(:memory) do
  def read(key)
    decode(client[key_for(key)])
  end

  def write(key, value)
    client[key_for(key)] = encode(value)
  end

  def delete(key)
    client.delete(key_for(key))
  end

  def clear
    client.clear
  end
end

key_for ensures that most things can work as a key. encode and decode allow one to hook some kind of serialization in, whatever you fancy, be it Marshal, JSON, or whatever you can imagine.

By defining those methods, we can now get an instance of this adapter and connect it to a client. In the example above, the client is just a plain ruby hash, but in other adapters, it could be an instance of Redis (adapter), Memcached (adapter), or maybe a Riak bucket (adapter).

adapter = Adapter[:memory].new({}) # sets {} to client
adapter.write('foo', 'bar')
adapter.read('foo') # 'bar'
adapter.fetch('foo', 'bar') # returns bar and sets foo to bar

# [] and []= are aliased to read and write
adapter['foo'] = 'bar'
adapter['foo'] # 'bar'

Adapters can also be defined using a block (like above), a module, or both (module included first, then block so you can override module with block).

Adapters can also define atomic locking mechanisms, see the memcached and redis adapters for their locking implementations. The more opaque the object, the more you need to lock. Or, in the case of riak, the adapter can handle read conflicts.

ToyStore: The Mapper Fixings on top of Adapter

Once you have secured how your data layer speaks the adapter interface, you can use the real power: ToyStore.

Let's say you want to store your users in redis. Create your class, include Toy::Store, and set it to store in redis.

require 'toystore'
require 'adapter/redis'

class User
  include Toy::Store
  store :redis, Redis.new

  attribute :email, String
end

From there, you can go to town, defining attributes, validations, callbacks and more.

class User
  include Toy::Store
  store :redis, Redis.new

  attribute :email, String
  validates_presence_of :email
  before_save :lower_case_email

  def lower_case_email
    self.email = email.downcase if email
  end
end

user = User.new
pp user.valid? # false, email is missing

user.email = 'John'
pp user.save # true

pp user
pp User.get(user.id)

Change your mind? Decide that you do not want to use Redis? Fancy Riak? Simply change the store to use the riak adapter and you are rolling.

require 'toystore'
require 'adapter/riak'

class User
  include Toy::Store
  store :riak, Riak::Client.new['users']

  attribute :email, String
end

Boom. You just completely changed your data store in a couple lines of code. Practical? Yes and no. Cool? Heck yeah.

What all does Toy::Store come with out of the box? So glad you asked.

  • Attributes – attribute :name, String (or some other type). Can be virtual, which works just like attr_accessor but with all the power of dirty tracking, serialization, etc. Can also be abbreviated, which means :first_name could be the method you use, but in the data store the attribute is :fn. Save those bytes! Allows for default values, and defaults can be procs.
  • Typecasting – Same type system as MongoMapper. One day they will share the exact same type system in its own gem, for now duplicated.
  • Callbacks – all the usual suspects.
  • Dirty Tracking – save, create, update, destroy
  • Mass assignment security – attr_accessible and attr_protected
  • Proper cloning
  • Lists – arrays of ids. If user has many games, user would have list :games which stores in game_ids key on user and works just like an association.
  • Embedded Lists – array of hashes. More consistent than MongoMapper, which will soon reap the benefits of the work on Toy Store embedded lists.
  • References – think belongs_to by a different (better?) name. Post model could reference :creator, User to add creator_id key and relate creator to post.
  • Identity Map – On by default. Should be thread-safe.
  • Read/write through caching – If you specify a cache adapter (say memcached), ToyStore will write to memcached first and read from memcached first, populating the cache if it was not present.
  • Indexing – Need to do lookups by email? index :email and whenever a user is saved the user data is written to one key and the email is written as another key with a value of the user id.
  • Logging
  • Serialization (XML and JSON)
  • Validations
  • Primary key factories

It pretty much has you covered. Adapters for redis, memcached, riak, and cassandra already exist. Expect a Mongo one soon; I just have to make a few tweaks to Adapter first. Yep, even Mongo.

What are other adapters that could be created? Membase? Just start with the memcached adapter and override key_for. Git? File system? REST? MySQL?! I love it!

The Future

The future is not picking a database and forcing all your data into it. The future (heck, now even) is the right database for the job and your application may need several of them.

All this said, in no way do I think ToyStore is going to take the world by storm. It is a different way to build applications. This way comes with great power, but great confusion as well.

Currently, each model is serialized into one key in the store, based on how the adapter does encode/decode. Eventually, I would like to add the ability to store different attributes in different keys. For example, maybe you want active_game_ids to be stored in a key by itself so you don’t have to constantly save the entire user object.

I can also see a use for being able to store an attribute not just a different key, but a different store entirely. Store your user objects in Riak, but active_game_ids in a Redis set. This is where it would get really powerful.

At any rate, I am very excited about this project and I think it has a lot of potential. I would also like to add that MongoMapper is here to stay.

In fact, I learned from my mistakes on MongoMapper when building ToyStore and will be back-porting those learned experiences very soon. Expect a flurry of activity over the next little while.

Closing Thanks

Huge thanks to Newtoy (now Zynga with Friends) for allowing Geof and me to open source this. Several pieces of ToyStore were built on their dime and I really appreciate their contribution to the Ruby and Rails community!

As is typical with new projects, there are probably rough spots and good luck finding documentation. I have included a bevy of examples and the tests do a superb job at explaining the functionality of each method/feature.

Let me know what your thoughts are and be sure to kick the tires!

A Scam I Say!

Today, I repeated myself in a particular way for the last time. In at least four places in Harmony, I had faux classes that responded to all, find, create, etc. and initialized attributes from a hash. As I went to write another, I realized I had a problem and it was time for a new open source project.

A Little History

Harmony has a Site model, which has a site mode. The site mode is either ‘development’ or ‘live’. Back in the day, I would have created a SiteMode model that was backed by the database and hooked up all the relationships between Site and SiteMode.

Over the years, I have realized that is a waste. The information in that database table rarely changes and if it does, it is usually accompanied by other code changes and a deploy. This type of information is perfect for just storing in memory. If you need to change it, do so, commit, and deploy.

When I was originally working on site modes a few years back, I created a fake model that looked similar to this:

class SiteMode
  cattr_accessor :modes
  @@modes = [
    {:id => 1, :name => 'live'},
    {:id => 2, :name => 'development'},
  ]

  def self.[](id)
    new(id)
  end

  def self.all
    @@modes.map { |m| SiteMode.new(m[:id]) }
  end

  attr_accessor :id, :name

  def initialize(id)
    self.class.modes.detect { |m| m[:id] == id }.each_pair do |attr, value|
      self.send("#{attr}=", value)
    end
  end

  def password_required?
    id == 2
  end

  def display_name
    name # body elided in the original
  end
end

With just that code, Site could belong_to :site_mode and everything just worked. I got my same relationships and instead of querying a database (and having to keep that data in sync), everything was just stored in memory.

The Scam

Like I said, rather than create another fake model with all the same code and tests, I pulled out the shared pieces into a gem named scam. With scam in the mix, the SiteMode model now looks like this:

class SiteMode
  include Scam

  attr_accessor :name

  def password_required?
    id == 2
  end

  def item_cache?
    id == 1
  end

  def display_name
    name # body elided in the original
  end
end

SiteMode.create({
  :id   => 1,
  :name => 'live',
})

SiteMode.create({
  :id   => 2,
  :name => 'development',
})

By just including Scam, we get all the enumerable functionality and such (see the specs for more). Now the SiteMode class deals specifically with site mode related code instead of how to initialize, create, and enumerate site modes. I switched the other classes that used the same idiom and was left with less code on the other side.

A Few Notes

This is nothing earth shattering, but it saves queries and wraps up an idiom I was using into a separate, well-tested piece of functionality that I can share not only across Harmony, but other applications as well.

One other thing worth noting is my choice of using ids and having them be integers. The first advantage to using integers is size. The integer 2 is smaller to store than the string ‘development’. That might not seem like a big deal, but after working on the projects I have recently, every byte still counts.

The second reason is that integers are more flexible. Right now, 1 is live and 2 is development. If I decide I want another site mode to be the default instead of development, I can simply change development to a different id and create my new one as 2. The new one instantly becomes the new site mode for all sites with site_mode_id of 2.

If, on the other hand, I had used strings, I would have to map my new site mode to development and map development to something different. This would certainly lead to confusion in the code down the road. Strings have meaning because they are often words, whereas integers do not. Hope that makes sense.
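To make the remapping concrete, here is the swap in miniature; the beta mode is invented for the example, the rest follows the post.

```ruby
# Before: 2 is development, and existing sites default to site_mode_id 2.
modes = [
  {:id => 1, :name => 'live'},
  {:id => 2, :name => 'development'},
]

# After: move development to a fresh id and let a new mode take over 2.
# Every site with site_mode_id == 2 now points at the new mode, with no
# data migration required.
modes = [
  {:id => 1, :name => 'live'},
  {:id => 3, :name => 'development'},
  {:id => 2, :name => 'beta'},
]

modes.detect { |m| m[:id] == 2 }[:name] # => "beta"
```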

At any rate, you can gem install scam if you have a need. If not, I hope some of the ideas in this post inspire you in some way.

Hunt, An Experiment in Search

I already mentioned this on Twitter, but for those of you that do not follow me there I thought I would take some time out of my day for a quick write up.

Quite often for little pet projects, I want search. While I am whole-heartedly behind Sunspot and Solr for anything serious, most of the time I will gladly sacrifice accuracy for simplicity.

This weekend I spent a couple hours whipping together Hunt, a really basic MongoMapper plugin that makes it easy to add basic search to your application.


Install is obvious, but I will put it here so you can copy and paste.

gem install hunt

Once installed, you just have to declare the plugin in your model, and then tell it what you want to search on.

class Note
  include MongoMapper::Document

  key :title, String

  # Declare plugin and what to search
  plugin Hunt
  searches :title
end

With just those two lines, Hunt will start tracking terms and allow you to search through them. The handy thing is that the search method just returns a scope, allowing you to further filter it or do counting and pagination.

Note.create(:title => 'A different test')
Note.search('test').count    # 1
Note.search('testing').count # 1

Note.search('test').all
# [#<Note searches: {"default"=>["differ", "test"]}, title: "A different test", _id: BSON::ObjectId('...')>]

Note.search('test').paginate(:page => 1)
# [#<Note searches: {"default"=>["differ", "test"]}, title: "A different test", _id: BSON::ObjectId('...')>]

Behind the Scenes

So how does Hunt work? Before save, it gets the value of each key we want to search on, mashes them together into a big string and then does the following:

  • squeezes multiple spaces together
  • splits string on space into array
  • downcases all words
  • rejects all words shorter than 2 characters
  • rejects all words in an ignore list (do, don’t, you, yours, etc.)
  • strips punctuation from each word
  • rejects any blank words
  • stems each word
  • uniqs the remaining stems
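As a sketch, the steps above look roughly like this. The stemmer below is a toy suffix-stripper so the example stands alone; Hunt itself uses the fast-stemmer gem, and the ignore list here is illustrative.

```ruby
IGNORE = %w[a an the this is do don't you yours]

# Toy stemmer; Hunt really calls fast-stemmer's String#stem.
def stem(word)
  word.sub(/(ing|ed|s)\z/, '')
end

def extract_terms(text)
  text.squeeze(' ')                         # squeeze multiple spaces together
      .split(' ')                           # split on space into an array
      .map(&:downcase)                      # downcase all words
      .reject { |w| w.size < 2 }            # reject words shorter than 2 chars
      .reject { |w| IGNORE.include?(w) }    # reject words in the ignore list
      .map { |w| w.gsub(/[^a-z0-9']/, '') } # strip punctuation
      .reject(&:empty?)                     # reject any now-blank words
      .map { |w| stem(w) }                  # stem each word
      .uniq                                 # uniq the remaining stems
end

extract_terms('Testing,  tested, and tests!') # => ["test", "and"]
```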


Probably most of that seems logical, but stemming may be unfamiliar to you. I was unfamiliar too before doing a bit of research. Stemming is the process of reducing inflected words to their root (e.g. testing, tested, and tests all reduce to test).

I think of stemming as word normalization. As long as you normalize all the words you want to search on and normalize the search terms each time you query, your searches can be more intelligent than just exact matches.

I found several gems for stemming and after some cursory examination and benchmarking went with fast-stemmer.


Hunt automatically creates a key named searches. It then stores the array of stemmed words in a key named default inside searches.

Note.create(:title => 'This is a test')
#<Note searches: {"default"=>["test"]}, title: "This is a test", _id: BSON::ObjectId('...')>

Note.create(:title => 'A different test')
#<Note searches: {"default"=>["differ", "test"]}, title: "A different test", _id: BSON::ObjectId('...')>

I chose a hash instead of an array because it is more flexible going forward (think named searches of different key combinations) and affects nothing in the short term.

Knowing this, we can create a Mongo index on this key to ensure that queries stay snappy:

Note.ensure_index :'searches.default'

If you always search scoped to a user and you have a user_id key, you could do a compound index:

Note.ensure_index [[:user_id, Mongo::Ascending], [:'searches.default', Mongo::Ascending]]

Hunt makes no assumptions on how you actually want to index the key. It merely stores the stuff you want to index.

Also, like I said before, you can index as many or as few keys as you want. I put an example of multiple fields on Github for you to peruse.


So what is the value of all this? It gave me a chance to play with stemming and solves my short term issue of wanting basic search without any extra infrastructure.

Would I recommend that you use it? Probably not. 🙂 That said, feel free to hack around on it and see if you can add other interesting features, such as scoring.

If nothing else, I think it shows how flexible MongoDB can be.

Caching With Mongo

For those of you that do not follow me on Twitter or Github, a while back I released Bin, an ActiveSupport MongoDB cache store. Since I have been quiet here, I thought I would talk a bit about it to help get back in the swing of things.

Using Bin is just like using any other AS cache store.

connection = Mongo::Connection.new
db = connection['bin_cache']

Rails::Initializer.run do |config|
  config.cache_store = Bin::Store.new(db)
end

Once you have set things up, you can use all the typical Rails.cache methods.

Rails.cache.write('foo', 'bar')
Rails.cache.read('foo') # 'bar'

Rails.cache.fetch('foo') do
  # some expensive thing
end

The list goes on, but in the interest of brevity, I will just link you to the specs. The cool thing about Bin is that it supports both ActiveSupport 2 and 3 along with Ruby 1.8.7 and 1.9.1. Oh, and it supports expiration with the same API as the memcache store.

That pretty much covers the basics, feel free to go kick the tires or hang around here a bit to learn how I made Bin work with AS2 and AS3.

Supporting AS2/3

In both ActiveSupport 2 and 3, you inherit from ActiveSupport::Cache::Store to create a new store. The difference between the versions is quite subtle though. In AS3, you override the methods read, write, etc. as needed and use super with a block to get the inherited functionality. In AS2, you do the same thing, but super does not accept a block. Being that as a community we are now mugwumps (mug on Rails 2 and wumps on Rails 3), I thought it would be nice to support both.

In order to make this happen, I knew all I needed to do was shim compatibility for Rails 2. So I created a compatibility class that inherits from ActiveSupport::Cache::Store for AS3 and then, if ActiveSupport’s version is less than 3, I reopen the class and add in the compatibility stuff to make it work like 3. Here is the code in its entirety:

# encoding: UTF-8
module Bin
  class Compatibility < ActiveSupport::Cache::Store
    def increment(key, amount=1)
      # body elided in the original
    end

    def decrement(key, amount=1)
      # body elided in the original
    end
  end

  if ActiveSupport::VERSION::STRING < '3'
    class Compatibility
      def write(key, value, options=nil, &block)
        super(key, value, options)
      end

      def read(key, options=nil, &block)
        super(key, options)
      end

      def delete(key, options=nil, &block)
        super(key, options)
      end

      def delete_matched(matcher, options=nil, &block)
        super(matcher, options)
      end

      def exist?(key, options=nil, &block)
        super(key, options)
      end
    end
  end
end

So then Bin::Store just inherits from Compatibility:

module Bin
  class Store < Compatibility
    # ... stuff
  end
end

I cringe using a specific version string comparison like above, but it was simple and worked so I went with it. The last piece of the puzzle was setting up rake tasks to run the specs against different active support versions.

namespace :spec do
  Spec::Rake::SpecTask.new(:all) do |t|
    t.ruby_opts << '-rubygems'
    t.verbose = true
  end

  task :as2 do
    sh 'ACTIVE_SUPPORT_VERSION="<= 2.3.8" rake spec:all'
  end

  task :as3 do
    sh 'ACTIVE_SUPPORT_VERSION=">= 3.0.0.beta3" rake spec:all'
  end
end

desc 'Runs all specs against Active Support 2 and 3'
task :spec do
  # body elided in the original; per the desc, it runs both
  Rake::Task['spec:as2'].invoke
  Rake::Task['spec:as3'].invoke
end

Note that I make an all task to run the specs then two distinct tasks to run against AS2 and AS3. All those tasks do is set an environment variable that I use in the test to force a particular version.

gem 'activesupport', ENV['ACTIVE_SUPPORT_VERSION']

Now when I run rake, it runs the tests against a 2.3 and a 3.0+ version of ActiveSupport, so I know when something goes wrong with either. No flipping gem sets or other shenanigans. As always, if you have improvements or other ways of doing stuff like this, please let me know. I am here to learn people.

Using Bin on the last project I worked on to cache large fragments of the layout significantly reduced response times. Always fun to see numbers like that drop after a deploy!

MongoMapper 0.8: Goodies Galore

Let me tell you, this release has been a tough one. It is made up of 43 commits to Plucky and 92 commits to MongoMapper. Features added include a sexy query language, scopes, attr_accessible, a fancy cache key helper, a :typecast option for array/set keys, and a bajillion little improvements. Let’s run through each of them just for fun.

Sexy Query Language

This right here is all thanks to plucky. The goal for plucky is a fancy query language on top of MongoDB. It has been created in such a way that other MongoDB projects (Mongoid, Candy, MongoDoc, etc.) can benefit from it if they wish. It still has a long way to go in covering edge cases and deeply nested queries, but the majority of queries one will do are covered quite nicely.

User.where(:age.gt => 27).sort(:age).all
User.where(:age.gt => 27).sort(:age.desc).all
User.where(:age.gt => 27).sort(:age).limit(1).all
User.where(:age.gt => 27).sort(:age).skip(1).limit(1).all

All of the above are supported out of the box. Each query method (limit, reverse, update, skip, fields, sort, where) returns a Plucky::Query object, so they can be chained together until a kicker is hit, such as all, first, last, paginate, count, size, each, etc. It is fashioned in a similar form as ARel in this manner, but simpler, as ARel has to handle a lot more than just simple queries (joins, etc.).
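The chaining pattern itself is easy to sketch: each builder method merges criteria into a fresh query object and returns it, and only the kicker would touch the database. A stripped-down illustration in plain Ruby (my sketch, not Plucky's actual implementation):

```ruby
# Minimal chainable-query sketch: every builder method returns a NEW
# query object, so chains never mutate shared state.
class TinyQuery
  attr_reader :criteria, :options

  def initialize(criteria = {}, options = {})
    @criteria, @options = criteria, options
  end

  def where(conditions)
    TinyQuery.new(criteria.merge(conditions), options)
  end

  def sort(field)
    TinyQuery.new(criteria, options.merge(:sort => field))
  end

  def limit(count)
    TinyQuery.new(criteria, options.merge(:limit => count))
  end

  # The "kicker": a real library would hit the database here.
  def all
    [criteria, options]
  end
end

query = TinyQuery.new.where(:age => { '$gt' => 27 }).sort(:age).limit(1)
p query.all # criteria and options accumulated across the chain
```

Because every step returns a new object, a half-built query can be stored and reused safely, which is exactly what makes scopes cheap to layer on top.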


Scopes

The main thing I was waiting on before doing scopes was getting plucky to a point where scopes would be just a sprinkling of code to merge plucky queries. Thankfully, that day has finally arrived, and with this latest release you can now scope away. The code is so compact that I figured I would drop it in here for those who are curious:

module MongoMapper
  module Plugins
    module Scopes
      module ClassMethods
        def scope(name, scope_options={})
          scopes[name] = lambda do |*args|
            result = scope_options.is_a?(Proc) ? scope_options.call(*args) : scope_options
            result = self.query(result) if result.is_a?(Hash)
            result
          end
          singleton_class.send :define_method, name, &scopes[name]
        end

        def scopes
          read_inheritable_attribute(:scopes) || write_inheritable_attribute(:scopes, {})
        end
      end
    end
  end
end

Yep, that is it. With that bit of code, you can now do stuff like this:

class User
  include MongoMapper::Document

  # plain old vanilla scopes with fancy queries
  scope :johns,   where(:name => 'John')

  # plain old vanilla scopes with hashes
  scope :bills, :name => 'Bill'

  # dynamic scopes with parameters
  scope :by_name,  lambda { |name| where(:name => name) }
  scope :by_ages,  lambda { |low, high| where(:age.gte => low, :age.lte => high) }

  # Yep, even plain old methods work as long as they return a query
  def self.by_tag(tag)
    where(:tags => tag)
  end

  # You can even make a method that returns a scope
  def self.twenties; by_ages(20, 29) end

  key :name, String
  key :tags, Array
end

# simple scopes
pp User.johns.first
pp User.bills.first

# scope with arg
pp User.by_name('Frank').first

# scope with two args
pp User.by_ages(20, 29).all

# chaining class methods on scopes
pp User.by_ages(20, 40).by_tag('ruby').all

# scope made using method that returns scope
pp User.twenties.all

I am sure there are some edge cases, but I cannot wait to start swapping some of the code I have out for scopes. This is definitely one of the features I missed most from ActiveRecord.


attr_accessible

Previously, MongoMapper only supported attr_protected. The main reason was that I am lazy and someone from the community contributed the beginnings of the code. I spent some time today adding attr_accessible, so now you can really lock down your models if you want to.

class User
  include MongoMapper::Document

  attr_accessible :first_name, :last_name, :email

  key :first_name, String
  key :last_name,  String
  key :email,      String
  key :admin,      Boolean, :default => false
end
Based on the example above, only first_name, last_name and email can be assigned when using mass assignment, such as in .new or #update_attributes.
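Under the hood, this style of protection boils down to filtering the attributes hash before assignment. A toy version of the idea (names are mine, not MongoMapper's internals):

```ruby
# Toy sketch of attr_accessible-style filtering: only whitelisted
# keys survive mass assignment; everything else must be set explicitly.
ACCESSIBLE_ATTRIBUTES = [:first_name, :last_name, :email]

def filter_protected(attrs)
  attrs.select { |key, _| ACCESSIBLE_ATTRIBUTES.include?(key.to_sym) }
end

# :admin is silently dropped, so a forged form field can't flip it.
p filter_protected(:first_name => 'John', :admin => true)
```

The real implementation hangs the whitelist off the model class, but the filtering step is the whole trick.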

Cache Key

On a recent MongoMapper project, I had to do some caching. This led me to create bin, a MongoDB ActiveSupport cache store. The first thing you notice when you start to cache stuff is that you need a key to name the cached object or fragment. I dug around in AR and discovered the cache_key method. MongoMapper's cache_key works the same, with a little twist. You can pass arguments to it and they will become suffixes on the cache key. Let's look at an example:

class User
  include MongoMapper::Document
end

User.new.cache_key                # => "User/new"
User.create.cache_key             # => "User/:id"
User.create.cache_key(:foo, :bar) # => "User/:id/foo/bar"

It should also be noted that if the User model has an updated_at key, that will be appended after the id like so User/:id-:timestamp. This addition is definitely going to clean up some code on a project of mine.
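The key-building logic is easy to picture: new records get "new", persisted ones get their id (plus timestamp when available), and any arguments become suffixes. A rough free-function sketch of that shape (assumed for illustration, not MongoMapper's actual method):

```ruby
# Rough sketch of cache_key construction as described above; the real
# method lives on the model, this is a standalone function for clarity.
def build_cache_key(class_name, id = nil, timestamp = nil, *suffixes)
  key = id.nil? ? "#{class_name}/new" : "#{class_name}/#{id}"
  key += "-#{timestamp}" if id && timestamp        # updated_at, when present
  key += "/#{suffixes.join('/')}" unless suffixes.empty?
  key
end

puts build_cache_key('User')                            # => "User/new"
puts build_cache_key('User', 'abc123')                  # => "User/abc123"
puts build_cache_key('User', 'abc123', '20100518')      # => "User/abc123-20100518"
puts build_cache_key('User', 'abc123', nil, :foo, :bar) # => "User/abc123/foo/bar"
```

Folding the timestamp into the key is what makes cache expiry automatic: touch the record and every fragment keyed off it goes stale on its own.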

Typecasting Array/Set values

A common idiom in MongoDB modeling is to use Array keys for many to many type relationships. You have a User model and a Site model. Sites can have many Users and Users can have many Sites. Typically, I make a key :user_ids, Array and store the ids of each user that has access to the site.

When this is done from web forms, everything comes in as a string, so you have to typecast those strings to object ids. The new :typecast option wraps this up in a single key/value.

class Site
  include MongoMapper::Document
  key :user_ids, Array, :typecast => 'ObjectId'
end

Now, whenever user_ids is assigned, each value gets typecast to an ObjectId. This will work with any class that defines the to_mongo class method, which means you can use it with custom types as well.
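The mechanics are simple to sketch: on assignment, map every element through the target class's to_mongo. Something like this (illustrative only; an integer caster stands in for ObjectId so the sketch is self-contained):

```ruby
# Illustrative typecasting sketch: each assigned value is run through
# the caster's to_mongo class method, as the :typecast option does.
class IntegerCaster
  def self.to_mongo(value)
    value.to_i
  end
end

def typecast_array(values, caster)
  values.map { |value| caster.to_mongo(value) }
end

# Form params arrive as strings; the caster normalizes them.
p typecast_array(['1', '2', '3'], IntegerCaster) # => [1, 2, 3]
```

Any custom type works the same way: define to_mongo on the class and the option picks it up.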


I learned more about Ruby while working on this release of MongoMapper than probably any other period in my brief history. I really feel like this release brings MongoMapper to the forefront of MongoDB/Ruby object mappers.

All the typical dressings are now in place and with a few more tweaks, I can smell 1.0. Hope you all find this stuff useful and as always, if you don’t, that is ok because I am enjoying the heck out of working on this stuff. 🙂

Oh, and if all of this above did not excite you, know that the new MongoMapper site, including full documentation, is well underway and should be ready for consumption soon. Hooray!

May 17: The Happy Streets of Wilmette

Book Status

The Cucumber chapter is nearing final edit for beta. I cleared up a handful of errata, of which probably the most serious was a mistake on how to get the fixture data to pass the first test in the book. I’m hoping to get Beta 3 out later this week, and then I have to decide which direction for beta 4.

Oh, and the book: still on sale.

Agile Working

A few links about being a Rails Developer:

Jake Scruggs asks if you are really doing Agile development.

Mike Gunderloy makes a list of the services and tools that he finds useful in his development business. A great post if you are a small consulting firm.

Harri Kauhanen posts about how to sell Ruby on Rails projects. I’ve heard most of these in sales meetings over the last few years, though my impression was that it was getting better.

And Then…

The Rubinius Ruby interpreter reached a 1.0 milestone on Friday, as noted by nearly every Ruby person on Twitter. So far, I haven’t used it, and my RVM install of it failed.

In a less interesting story, Ruby Gems 1.3.7 was also released.

And Finally…

This Onion article about David Simon doing a series in Wilmette, IL, cracked me up, but mostly because I grew up there.

Filed under: Agile, Consulting, Gems, Onion, RailsRx, Rubinius

Annotate Your Models

One of the primary functions of any ORM (e.g. ActiveRecord) is to provide all those neat little “ghost” methods to manipulate your persistent data. The problem is that ghosts are invisible. It can be frustrating to open a model file and see nothing about the model’s attributes. For ActiveRecord the solution is the annotate gem.
sudo […]

Because Gem Names Are Like Domains in the 90’s

One of my favorite parts of every new gem is naming it. The other day, when I was trying to name joint, it occurred to me that I should always check if a gem name is available before I create my project. I did a quick search on RubyGems and discovered it was available.

Last night, I decided I should whip together a tiny gem that allows you to issue a whois command to see if a gem name is taken. Why leave the command line, eh?


gem install gemwhois

This adds the whois command to gem, which means usage is pretty fun.


$ gem whois httparty

   gem name: httparty
     owners: John Nunemaker, Sandro Turriate
       info: Makes http fun! Also, makes consuming restful web services dead easy.
    version: 0.5.2
  downloads: 40714
$ gem whois somenonexistantgem

  Gem not found. It will be mine. Oh yes. It will be mine. *sinister laugh*

If the gem is found, you will see some details about the project (maybe you can convince them to hand over rights if they are squatting). If the gem is not found, you will receive a creepy message in the same vein as the RubyGems 404 page.

The Fun Parts

The fun part of this gem came from something I noticed recently: other gems have been adding commands to the gem command. I thought that was interesting, so I did a bit of research. I knew that both gemedit and gemcutter added commands, so I downloaded both from GitHub and began to peruse the source. Turns out it is quite easy.

First, you have to have a rubygems_plugin.rb file in your gem’s lib directory. This is mostly ripped from gemcutter:

if Gem::Version.new(Gem::RubyGemsVersion) >= Gem::Version.new('1.3.6')
  require File.join(File.dirname(__FILE__), 'gemwhois')
end

Next, you have to create a command. At the time of this post, here is the entirety of the whois command:

require 'rubygems/gemcutter_utilities'
require 'crack' # for Crack::JSON

class Gem::Commands::WhoisCommand < Gem::Command
  include Gem::GemcutterUtilities

  def description
    'Perform a whois lookup based on a gem name so you can see if it is available or not'
  end

  def arguments
    "GEM       name of gem"
  end

  def usage
    "#{program_name} GEM"
  end

  def initialize
    super 'whois', description
  end

  def execute
    whois get_one_gem_name
  end

  def whois(gem_name)
    response = rubygems_api_request(:get, "api/v1/gems/#{gem_name}.json") do |request|
      request.set_form_data("gem_name" => gem_name)
    end

    with_response(response) do |resp|
      json = Crack::JSON.parse(resp.body)
      puts <<-STR.unindent

        gem name: #{json['name']}
          owners: #{json['authors']}
            info: #{json['info']}
         version: #{json['version']}
       downloads: #{json['downloads']}

      STR
    end
  end

  def with_response(resp)
    case resp
    when Net::HTTPSuccess
      block_given? ? yield(resp) : say(resp.body)
    else
      if resp.body == 'This rubygem could not be found.'
        puts '', 'Gem not found. It will be mine. Oh yes. It will be mine. *sinister laugh*', ''
      else
        say resp.body
      end
    end
  end
end

The important part is inheriting from Gem::Command. Be sure to require 'rubygems/command_manager' at some point as well. Once you have the rubygems_plugin file and a command created, you simply register the command:

Gem::CommandManager.instance.register_command(:whois)
The comments and code in RubyGems itself are pretty helpful if you are curious about what you can do.


Testing

The trickier part was testing the command. Obviously, building the gem from the gemspec and installing it over and over does not a happy tester make. I did a bit of research and found the following output-testing helpers, used with the unindent gem:

require 'stringio'

module Helpers
  module Output
    def assert_output(expected, &block)
      keep_stdout do |stdout|
        block.call
        if expected.is_a?(Regexp)
          assert_match expected, stdout.string
        else
          assert_equal expected.to_s, stdout.string
        end
      end
    end

    def keep_stdout(&block)
      begin
        orig_stream, $stdout = $stdout, StringIO.new
        block.call($stdout)
      ensure
        s, $stdout = $stdout.string, orig_stream
      end
    end
  end
end

With these little helpers, it was quite easy to set up the command and run it in an automated way:

require 'helper'

class TestGemwhois < Test::Unit::TestCase
  context 'Whois for found gem' do
    setup do
      @gem = 'httparty'
      @command = Gem::Commands::WhoisCommand.new
    end

    should "work" do
      output = <<-STR.unindent

        gem name: httparty
          owners: John Nunemaker, Sandro Turriate
            info: Makes http fun! Also, makes consuming restful web services dead easy.
         version: 0.5.2
       downloads: 40707

      STR
      assert_output(output) { @command.execute }
    end
  end

  context "Whois for missing gem" do
    setup do
      @gem = 'missing'
      stub_gem(@gem, :status => ["404", "Not Found"])
      @command = Gem::Commands::WhoisCommand.new
    end

    should "work" do
      output = <<-STR.unindent

        Gem not found. It will be mine. Oh yes. It will be mine. *sinister laugh*

      STR
      assert_output(output) { @command.execute }
    end
  end
end

The only other piece of the puzzle was using FakeWeb to stub the http responses for the found and missing gems. You can see more on that in the test helper file.


At any rate, the gem is pretty tiny and possibly useless to others, but it was fun. Gave me a chance to play around with testing STDOUT and creating RubyGem commands. Plus, now I know if the gem name I want is available in just a few keystrokes.

A Nunemaker Joint

Since The Changelog has already scooped me (darn, those guys are fast), I figured I should post something here as well. Last December I posted Getting a Grip on GridFS. Basically, I liked what Grip was doing, but made a few tweaks. Grip’s development has continued, but it is headed down a somewhat different path, so I thought it might be confusing to keep my fork named the same.

Today, I did some more work on the project and now that it is unrecognizable when compared to Grip, I renamed it to Joint. Yes, I realize that I now have gems named crack and joint. Joint is a tiny piece of code that joins MongoMapper and the new Ruby GridFS API.


What It Does

What I love about joint is its simplicity. Simply declare the attachment and you are good to go.

class Asset
  include MongoMapper::Document
  plugin Joint # add the plugin

  attachment :file # declare an attachment named file
end

With that simple declaration, you get #file and #file= instance methods and several keys (file_id, file_name, file_type, and file_size). The #file instance method returns a simple proxy to make the API a bit prettier and sends all other calls on the proxy to the GridIO instance.

asset = Asset.create(:file => params[:file])
asset.file.id   # GridFS Object Id
asset.file.name # file name
asset.file.type # mime type as determined by wand gem
asset.file.size # size in bytes

There is no limit to the number of attachments, but each attachment uses 4 keys so I would not add more than one or two (more is a sign you are doing something wrong). As mentioned in the comment above, it uses my wand project to determine the mime type.

For those who are not familiar with wand (as I have not posted here about it), it first attempts to determine the mime type using the mime-types gem. If that fails to return anything, it drops down to the unix file command.
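That two-step strategy is easy to sketch: try a table lookup first, and shell out to file only when the lookup comes up empty. A self-contained illustration (the lookup table and method name are mine; wand actually delegates the fast path to the mime-types gem):

```ruby
# Sketch of wand's fallback strategy as described above.
MIME_LOOKUP = {
  '.png' => 'image/png',
  '.jpg' => 'image/jpeg',
  '.txt' => 'text/plain'
} # illustrative subset

def mime_type_for(path)
  # Fast path: extension lookup.
  # Slow path: shell out to the unix file command, like wand does.
  MIME_LOOKUP[File.extname(path)] || `file -b --mime-type #{path}`.strip
end

puts mime_type_for('logo.png') # prints "image/png"
```

The nice property is that the slow path only ever runs for the files the fast path cannot identify.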

What It Does Not Do

Anything else. All joint handles is assigning a file and storing it in GridFS. It doesn’t do versions, resizing, etc. For that type of stuff, I would recommend imagery with some HTTP caching (varnish, et al.) sitting in front of it.

I can certainly see having some triggers (callbacks) at some point such as after jointed or something for when you do want to do post processing. I’ll leave that for a rainy day though.

Canable: The Flesh Eating Permission System

A while back I wrote about how to add simple permissions to your apps. Since then, I have worked on a few applications (Harmony among them) where I have taken that concept and expanded it. Yesterday, I decided that I had repeated myself enough times (3) and that I should abstract the shared functionality of those apps into a gem. Thus, Canable, the flesh eating permission system, was born.


Canable does not actually implement any permissions for you (or actually eat flesh). Instead, it provides you with all the helpers and then (gasp) you have to do the work. The idea centers around running all permissions through current_user. Anytime you check if a user can do something you use a can method:



Instead of having a big case statement in those can methods for each different type of object, I use the strategy pattern to just ask the object if the user has permission to do the action. This is done by having a matching “able” method to the “can” method, thus canable.

class Item
  def updatable_by?(user)
    creator == user
  end
end
The above code, for example, makes it so that only the creator of an item can update it. Obviously, you can get more in depth from there. By default, I add the following can and able methods:

:view => :viewable
:create => :creatable
:update => :updatable
:destroy => :destroyable
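The can and able halves wire together with a tiny bit of delegation. Here is a self-contained sketch of the pattern for one pair (my illustration of the idea, not Canable's actual source):

```ruby
# Sketch of the can -> able delegation: the user-side can method just
# asks the resource, which owns the actual permission logic.
class User
  def can_update?(resource)
    resource.updatable_by?(self)
  end
end

class Item
  attr_accessor :creator

  def updatable_by?(user)
    creator == user
  end
end

owner, stranger = User.new, User.new
item = Item.new
item.creator = owner

puts owner.can_update?(item)    # => true
puts stranger.can_update?(item) # => false
```

Because each resource class answers for itself, there is no central case statement to grow out of control as models are added.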

Custom Actions

If you need permissions for actions other than the defaults, you can add your own quite easily:

Canable.add(:publish, :publishable)

The readme over on Github has far more details, but I figured I would at least cover it here a bit. It might seem a bit weird at first, but once you start rolling with it, it makes for a pretty easy to implement and understand permission system.

The really funny part is that it is only like 80 lines of code, as most of the methods are dynamically generated. I am perfectly fine if I am the only one who uses this and finds it helpful, but you never know, so feel free to install it as a gem or fork it on github.
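That dynamic generation is the classic define_method trick. A compact sketch of how can/able pairs might be generated (my illustration of the technique, not Canable's 80 lines):

```ruby
# Sketch of generating can/able method pairs with define_method.
module Cans
  def self.add(can, able)
    define_method("can_#{can}?") do |resource|
      resource.respond_to?("#{able}_by?") && resource.send("#{able}_by?", self)
    end
  end

  add(:view, :viewable)
  add(:publish, :publishable)
end

class User
  include Cans
end

class Post
  def viewable_by?(user);    true  end
  def publishable_by?(user); false end
end

user = User.new
puts user.can_view?(Post.new)    # => true
puts user.can_publish?(Post.new) # => false
```

One call to add per action yields both sides of the contract, which is how the real gem stays so small.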

Note: No permissions were harmed in the making of this gem.

Quick Database Conversion Using taps

taps is a database-agnostic import/export gem that works with all the databases that sequel supports, including Amalgalite, ADO, DataObjects, DB2, DBI, Firebird, Informix, JDBC, MySQL, ODBC, OpenBase, Oracle, PostgreSQL and SQLite3. It is also used by Heroku to push and pull your apps’ databases.
~ % [sudo] gem install taps
Here is a quick snippet that shows how […]

Know When to Fold ‘Em

In which I relinquish the day to day maintenance of a few of my projects.

I have a lot of projects. Each time I feel pain or inspiration, I’ll whip together a new library and release it as a gem. It is fun and I love it. It is even more fun when people come along and use those projects to do cool stuff. This in turn, inspires me to write more code and release more projects. It is a vicious cycle.

A while back, I caught myself making jokes about how I don’t even use my projects. I can barely remember the last time I actually used HTTParty, HappyMapper, or the Twitter gem. Not too long ago, I came across Dr. Nic’s Future Ruby talk on Living with 1000 open source projects.


In the presentation, he says that you should maintain the projects you use every day and abandon the rest. Good advice. Over the past few months, I have been ceding maintenance and new features to other talented developers for several of my projects.


HTTParty

The first to go was HTTParty. I believe it was at the Ruby Hoedown that I ran into Sandro. He mentioned some HTTParty bugs, and I asked if he was interested in taking over. He accepted, and the last release (0.4.5) was all him.


HappyMapper

Brandon Keepers, a good friend of mine, has a client project that uses HappyMapper, so the fact that he actually uses it made him a logical choice to help with its maintenance. He did a bunch of namespace work for the 0.3 release and now has commit rights.

The Twitter Gem

The last gem that was beginning to feel like a burden was the Twitter gem. Wynn Netherland has built several apps that rely on the Twitter gem, so I gave him commit rights and he recently added lists to it.


I can’t say that I am abandoning these projects, as I am sure from time to time I’ll feel inspired and spend some time on them. I just know that I am no good for them if I am not using them. I can’t feel the pain or know what is needed if I am not using the code.

I’m posting about this for two reasons. First and foremost, to give some credit to the people who are doing the work now. Second, to set expectations that I probably won’t be snappy with responses for these projects, as I’m not actively working on them anymore.

Planet Argon Podcast, Episode 1: Shin Splints

We’re currently waiting to get our new podcast approved by Apple, but have uploaded episode 1 to tumblr in the meantime.

In this short episode, we cover the following topics:

Authlogic (http://github.com/binarylogic/authlogic)
Machinist (http://github.com/notahat/machinist/)
Faker (http://github.com/robbyrussell/faker)
LESS (http://lesscss.org/) (for CSS)
Textorize (http://textorize.org/)
The 8-Hour Rails Code Audits (http://planetargon.com/what-we-do/development/rails-code-audit)

We’re planning to keep this short and focused on a few topics. Once it’s posted on iTunes, we’ll let you know.

Please consider subscribing to the podcast (http://feeds.feedburner.com/PlanetArgonPodcast). Enjoy!