Testing Advice in Eleven Steps

This post is by noelrap from Rails Test Prescriptions Blog

Click here to view on the original site: Original Post

As it happens, my generic advice on Rails testing hasn’t changed substantially, even though the tools I use on a daily basis have.

  • Any testing tool is better than no testing. Okay, that’s glib. You can make an unholy mess in any tool. You can also write valuable tests in any tool. Focus on the valuable part.

  • If you’ve never tested a Rails application before, I still recommend you start with out of the box stuff: Test::Unit, even fixtures. Because it’s simpler and there’s a very good chance you will be able to get help if you need it.

  • That said, if you are coming into a team that already has a house style to use a different tool, use that one. Again, because you’ll be able to get support from those around you.

  • Whatever tool you choose, the important thing is to write a small test, make it pass with a small piece of code, and refactor. Let the code emerge from the tests. If you do that, you are ahead of the game, no matter what tool you are using.

  • At any given moment, the next test has some chance of costing you time in the short term. The problem is it’s nearly impossible to tell which tests will cost the time. Play the odds, write the test. Over the long haul, the chance that the tests are really the bottleneck is, in my experience, quite small.

  • If you start with the out of the box test experience, you will likely experience some pain points as you test more and more. That’s the time to add new tools, like a mock object package, a factory data package, or a context package. Do it when you have a clear sense that the new complexity will bring value.

  • Some people like the RSpec syntax and, for lack of a better word, culture. Others do not. If you are one of the people who doesn’t like it, don’t use it. Well, try it once. You never know.

  • I go back and forth on whether Test::Unit and RSpec are actually functionally equivalent, and eventually have decided it doesn’t matter. You can write a good test suite in either, and if there is a particular bell or whistle on one of them that attracts you or repels you, go that way.

  • You really should do some kind of full-stack testing, especially once you’ve gotten good at unit testing. But whether it’s the Test::Unit integration testing, the new Capybara syntax, or Steak, or Cucumber, is, again, less important than the idea that you are specifying behavior and automatically verifying that the code matches the specification. Most of what I said about RSpec above also applies to Cucumber.

  • This old joke that was repeated with relish on the XP mailing list circa 2000: “Doctor, it hurts when I do this”. “Then don’t do it”.

  • And last, but not least, buy my book. Or buy Dave’s book. Or Kent Beck’s book. Or hang out on mailing lists. Ask questions on Twitter. If you want to get better at testing, there are all kinds of resources available.

Filed under: testing, Uncategorized

Agile Web Development with Rails 4th ed now in print

This post is by Pragmatic Bookshelf from Pragmatic Bookshelf

Agile Web Development with Rails 4th ed now in print and shipping.

Cucumber Rails 0.4: The De-Web-Step-ining

This post is by noelrap from Rails Test Prescriptions Blog

Consider this part of an occasional series where I attempt to revisit tools discussed in Rails Test Prescriptions that have undergone some revision. (NOTE: Most of this was written before the DHH Twitter-storm about testing this week. For the purposes of this post, I’m choosing to pretend the whole thing didn’t happen.)

The cucumber-rails gem released version 0.4 last week, which had some significant changes and intensified what we might call the opinionated nature of Cucumber about what a Cucumber scenario should look like.

If you update cucumber-rails, you need to re-run rails generate cucumber:install to see the new stuff.

There are a couple of minor changes — the default env.rb file is much simpler, the capybara date selector steps now work with Rails 3, that kind of thing. The biggest change, though, is conceptual, and comes in two parts.

Part one is best laid out by the new first line of the web_steps.rb file:


The header goes on to say that if you make use of these steps you will end up with verbose and brittle cucumber features. Also, your hair will fall out, and you will have seven years bad luck. The last may be more implied than stated.

Why would they do such a thing? And what’s the “official” preferred way to use Cucumber now?

Well, it’s not like the Cucumber dev team has me on speed-dial or anything like that, but since they subtly included in the web_steps.rb file links to, count ‘em, three separate blog posts explaining how to best use Cucumber, I will follow that subtle, yet blazing, trail and try to put it together in some coherent way so that I can understand it.

(Note to Cucumber dev team: if you feel the need to link to this post in future versions of Cucumber, you should consider yourself as having permission to do so….)

Anyway, the Cucumber team is making a very opinionated statement about how to use Cucumber “with the grain”, and I actually don’t think that statement is “don’t use the web_steps file” — I think that some parts of the web_steps file have a place in the Cucumber world.

Here’s the statement as I see it:

  • A Cucumber scenario is an acceptance test.
  • As such, the scenario should completely be in the domain of the user.
  • A Cucumber scenario should not have any reference to implementation details.
  • Implementation details include, but are not limited to: CSS selectors, class names, attribute names, and HTML display text.

As a good rule of thumb, if you are putting something in your Cucumber steps in quotation marks, you should at least think about whether your Cucumber scenario is at a high enough level. In the Cucumber world, the place for implementation-specific details is in the step definition files. If the acceptance criteria changes, the scenario should change, but if the implementation changes, only the step definitions should change.

This sharp separation between the acceptance test and the implementation is a feature, not a bug, in Cucumber (By the way, you do not want bugs in your cucumbers. Yuck.) The separation is what makes Cucumber a true black-box test of your application, and not a black box riddled with holes.

That said, full-stack testing that is based on knowing implementation details — which is “integration testing” rather than “acceptance testing” — is a perfectly valid thing to do, especially in a case where there isn’t an external customer that needs or wants to see the acceptance testing. But, if you are actually doing integration testing, then you don’t need the extra level of indirection that Cucumber offers — you should drop down to Steak, or Rails integration tests, or the new Capybara acceptance test DSL or something.

Okay, so. Acceptance testing is not integration testing, and if you are trying to do integration testing via Cucumber, you will be frustrated, because that’s not what Cucumber is good at. To me, there’s a value in acceptance testing, or in this case, acceptance test driven development, because it’s helpful to try and describe the desired system behavior without any implementation details confusing the issue.

Which brings us back to the question of how you actually replace the web steps in your Cucumber scenarios. Essentially the idea is to replace implementation-based steps with steps that describe behavior more generically. You might have something like this:

Scenario: Updating a user profile
  Given a user named "Noel" with a preference for "Cool Stuff"
  When I go to the edit profile page
  And I fill in "bananas" for "Favorite Food"
  And I select "Comic Books" from "Preferences"
  And I press "Submit"
  Then I should see "Bananas"
  And I should see "Comic Books"

That’s not horrible, because it doesn’t have any explicit CSS or code in it, but it’s still very much about implementation details, such as the exact starting state of the user, the labels in the form, and the details of the output. On the plus side, the only step definition you’d need to write for this is for the first step; every other step is covered by an existing web step. But… I’ve written my share of Cucumber scenarios that look like this, and it’s not the best way to go. It’s hard to tell from this scenario what the most important parts are and what system behavior is actually being described.

The implicit version of the scenario looks more like this:

Scenario: Updating a user profile
  Given I am an existing user with a partially completed profile
  When I go to edit my profile
  And I fill in new preferences
  Then I see my new preferences on my profile page

Two questions to answer: why is this better, and how does it work?

The second question first. We need to write step definitions for all these steps. Normally, I write these in terms of the underlying Capybara or Webrat API rather than calling web steps. The second step doesn’t need a full definition; it just needs an entry for /edit my profile/ in the paths.rb file (right now, it seems like that’s about the only step in the web steps file that the Cucumber team is willing to use). The other three steps need definitions — here’s what they might look like (this might have a typo or syntax jumble; it’s just the basic idea):
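For context, that paths.rb entry is just another branch in the generated path_to case statement. Here is a sketch, with "/profile/edit" standing in for whatever your actual route helper would return:

```ruby
# features/support/paths.rb (sketch) -- maps a natural-language phrase
# from the scenario to a real path; the /profile/edit path is an assumption.
def path_to(page_name)
  case page_name
  when /^edit my profile$/
    "/profile/edit"  # in a real Rails app, edit_profile_path
  else
    raise "Can't find mapping from \"#{page_name}\" to a path."
  end
end
```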

Given /^I am an existing user with a partially completed profile$/ do
  @user = Factory(:user)
  @user.profile = Factory(:profile, :preference => "Cool Stuff",
     :favorite_food => nil)
end

When /^I fill in new preferences$/ do
  fill_in("Favorite Food", :with => "Bananas")
  select("Comic Books", :from => "Preferences")
end

Then /^I see my new preferences on my profile page$/ do
  with_scope("preference listing") do
    page.should have_selector(selector_for("bananas are my favorite food"))
    page.should have_selector(selector_for("comic books are my preference"))
  end
end

If you are used to Cucumber but haven’t used the 0.4 Rails version yet, the last step will look unfamiliar. Bear with me for a second.

Why is the second version better? It’s not because it’s shorter — it’s a bit longer, although only a bit (the first version would need a step definition for the user step as well). However, the length is split into more manageable chunks. The Cucumber scenario is shorter, and more to the point, each step is more descriptive in terms of what it does and how it fits into the overall scenario. The new step definitions you need to write add a little complexity, but not very much, and my Cucumber experience is that at this size, the complexity of the step definitions is rarely the bottleneck. (For the record, the bottleneck is usually getting the object environment set up, followed by the inevitable point of intersection with implementation details, which is why I’m so keen to try and minimize intersection with the implementation.)

Yes, the scenario is something you could show a non-developer member of the team, but I also think it’s easier for coders to comprehend, at least in terms of getting across the goals of the system. And this is supposed to be an acceptance test — making the goals of the system explicit is the whole point.

Okay, either you believe me at this point or you don’t. I suspect that some of you look at the step definitions and say “hey, I could string those seven lines of code together and call it a test all by itself”. Again, if that’s what works for you, fine. Any full-stack testing is probably better than no full-stack testing. Try it once, though. For me.

Back to the step definitions, the last one uses the selector_for method — and I hope I’m using it right here because I haven’t gotten a chance to work with it yet, and the docs aren’t totally clear to me. The idea behind selector_for is to be analogous to the path_to method, but instead of being a big long case statement that turns a natural language phrase into a path, it’s a big long case statement that turns a natural language phrase into a CSS selector. The big long case statement is in the support folder in a selectors.rb file. The with_scope method uses the same big case statement to narrow the statements inside the block to DOM elements within the matched element.
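To make that concrete, here is a hypothetical selectors.rb; the phrase-to-selector mappings below are invented for this post’s profile example, not part of the generated file:

```ruby
# features/support/selectors.rb (sketch) -- like paths.rb, but the case
# statement maps natural-language phrases to CSS selectors instead of paths.
module HtmlSelectorsHelpers
  def selector_for(locator)
    case locator
    when "preference listing"
      "div#preferences"
    when "bananas are my favorite food"
      "li.favorite-food"
    when "comic books are my preference"
      "li.preference"
    else
      raise "Can't find mapping from \"#{locator}\" to a selector."
    end
  end
end

# Cucumber mixes this into the step definition context:
World(HtmlSelectorsHelpers) if respond_to?(:World)
```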

As with the paths, the idea is to take information that is implementation specific and likely to be duplicated and quarantine it in one particular location. As I said, I haven’t really incorporated this into my Cucumber routine yet; my first thought is that it’ll be nice to hide some of the complex CSS selectors I use in view testing, but I worry that the selectors.rb file will become a mess and that, unlike paths, a given selector is less likely to actually be duplicated.

I sure wish I had a rousing conclusion to reward you as this post nears the 1750 word mark. I like the direction that these changes are taking Cucumber; they are in line with what I’ve found to be the best use of the tool. Take a chance and try writing tests as implicitly as you can, as an exercise, and see if it works for you.

Filed under: Cucumber, Rails, testing, Uncategorized

DHH Offended By RSpec, Says Test::Unit Is Just Great

This post is by Peter Cooper from Ruby Inside

As an outspoken and opinionated guy, David Heinemeier Hansson (a.k.a. DHH), creator of Rails, is no stranger to a little bit of controversy. He frequently sets off interesting debates on Twitter from his @dhh account. The latest is, perhaps, the most involved yet and has been rattling on for a couple of hours today.

So what’s the beef? RSpec and Cucumber versus.. Test::Unit. It’s no secret that DHH is a happy Test::Unit (and fixtures) user. Last October he tweeted:

But here’s what kicked off today’s debate:

Naturally, this brought a plethora of heckles, support, snark, and questions from DHH’s followers, including:

More specifically, though, DHH noted that this gist comparing some Test::Unit tests to RSpec triggered his statements.

But back to the debate. JD Skinner asked: What testing tools do you use/recommend? DHH replied:

Jeremy Welland asked: So why do you think RSpec/shoulda’s approach is so popular? DHH’s response:

The debate rattled on for a few hours amongst various Ruby developers on Twitter (including Dave Thomas and Bryan TATFT Liles) and if you want to really dig into it, check out DHH’s Twitter account and follow through some of the responses.

I agree with DHH for the most part, though I use a mixture of RSpec, Test::Unit, and Minitest just to “keep my hand in.” RSpec (and BDD) did seem to become “the way” to do testing in many circles, though, even when it didn’t present any significant benefits. Cucumber, following on from RSpec, had a similar experience.

This debate is important, though, not because DHH is right or wrong or because one testing system is better than another, but because it wipes away some of the lines in the sand and assumed “correct way” attitudes that exist. If you want to use Test::Unit, it’s not “old and uncool” (yes, I’ve heard this) and you should get on with it (DHH is using it fer chrissakes.) Likewise, RSpec and Cucumber are not panaceas or catch-all solutions but may have significant benefits you can take advantage of.

Research your options and pick the tool that makes sense for you and your team. DHH has and he’s sticking with Test::Unit. What’s your take on it?

Update: DHH recommends this post about testing which suggests you check out Test::Unit first before considering more extensive frameworks. A good read!

Watchr – More Than An Automated Test Runner

This post is by Joe Fiorini from Ruby Inside

Watchr is a development tool that monitors a directory tree and triggers a user defined action (in Ruby) whenever an observed file is modified. Its most typical use is continuous testing, and as such it is a more flexible alternative to autotest. It is maintained by Martin Aumont and available on GitHub.

Watchr works by allowing you to specify the path to the file or files you want to monitor. When the file is changed it executes whatever block of Ruby code you give it. As the README states, its most common use case is as a replacement for autotest. After using Watchr for a couple of years now, I have learned that it’s much more than that. For example, it has helped me automatically copy a setup script to a virtual machine while building it and to update large blocks of content in a database.
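As a sketch of the autotest-style use (the lib/ and test/ layout and the _test.rb naming convention here are assumptions about your project, not something Watchr mandates), a .watchr script is just Ruby that calls watch with a path pattern and a block:

```ruby
# example .watchr script (sketch) -- run with: watchr this_file.watchr
# Re-runs the matching test whenever a source or test file changes.
watch('lib/(.*)\.rb')     { |m| system("ruby test/#{m[1]}_test.rb") }
watch('test/.*_test\.rb') { |m| system("ruby #{m[0]}") }
```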

How I Used Watchr to Automate Shopify Theme Development

I recently started a Shopify project with a colleague (Shopify is a hosted platform for building ecommerce sites). Users can edit their store’s look and feel through a web-based admin interface. This may work for most shop owners, but I’ve grown accustomed to editing text in Vim and working at the command line. Fortunately, Shopify has a very nice API for uploading templates to your store. Watchr to the rescue! I fired up my editor and not long after I had a very helpful script:

require 'shopify_api'

watch('templates/.*\.liquid') do |match|
  puts "Updating #{match[0].inspect}..."
  upload_template(match[0])
end

def upload_template(file)
  ShopifyAPI::Base.site = "http://{key}:{secret}@{domain}.myshopify.com/admin/"
  asset = ShopifyAPI::Asset.find(file)
  asset.value = File.read(file)
  asset.save
end

With this script all I have to do is save the file I’m working on and Watchr uploads it to Shopify for me. How does it work? The directory structure on my file system mirrors Shopify’s template structure. All my script has to do is read the file and send the path and contents to the server. Watchr uses OSX’s native File System Events API to listen for changes to files matching the path string I pass into watch. When a matching file changes, it executes the block and hands in the path to the changed file (or files).

These simple Watchr scripts have saved me time and, more importantly, frustration. With just a few lines of code I can automate away tedious parts of my day while following my preferred workflow and not compromise efficiency. Watchr is not just about running my tests, it’s about improving my workflow as much as possible.

More Info

Want some more information? Check out Watchr on Github, check out the docs, or read the wiki for some more examples. When you’re ready to get started, run gem install watchr and write your first script.

Editor’s note: I just noticed that Ric Roberts wrote Watchr: A Flexible, Generic Alternative to AutoTest for Ruby Inside back in 2009. You might find that post useful too.

Joe is a Software Craftsman at LeanDog Software where he helps run their Ruby delivery practice on a boat in Cleveland, OH. He blogs about software-related topics at http://blog.densitypop.com and tweets as
The End of Monkeypatching

This post is by Xavier Shay from Ruby Inside

Monkey-patching is so 2010. We’re in the future, and with Github and Bundler there is now rarely a need to monkey-patch Ruby code in your applications.

Monkey-patching is the dangerous-yet-frequently-useful technique of re-opening existing classes to change or add to their behavior. For example, if I have always felt that Array should implement the sum method, I can add it in my codebase:

class Array
  def sum
    inject {|sum, x| sum + x }
  end
end

That is a monkey-patch. Of course, when I require activesupport it also adds a sum method to Array though its version has an arity of one and takes a block. This conflict can cause hard to track down errors and is why monkey-patching is to be used with caution.
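The load-order hazard is easy to demonstrate in plain Ruby. This is a contrived sketch — neither definition below is ActiveSupport’s actual implementation:

```ruby
# Two libraries each reopen Array; whichever loads last silently wins.
class Array
  def sum                # "library A": takes no arguments
    inject { |acc, x| acc + x }
  end
end

class Array
  def sum(initial = 0)   # "library B": different signature, same name
    inject(initial) { |acc, x| acc + x }
  end
end

[1, 2, 3].sum      # still works, but only because B's signature allows no args
[1, 2, 3].sum(10)  # would raise ArgumentError if A had happened to load last
```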

Thankfully, the spread of this abuse is minimal and most developers understand the risks. More frequently, monkey-patching is used to quickly fix bugs in existing libraries by reopening a class and replacing an existing method with an implementation that works correctly. This is often a fragile solution, relying on sometimes complex techniques to override exactly the right bit of code, and also on the underlying library not being refactored.

In the dark ages when it was troublesome to own or release your own versions of gems, this was the only cheap solution available. Nowadays though, a modern Ruby application has access to a far easier and more robust solution: fork the offending code, fix it at the source, and set up your application dependencies to use your new code directly. All without having to package a new gem!

The first few steps of this process are mostly self-explanatory, but I have documented them below anyway. If you are already old hat at this stuff, feel free to skip directly to step 4.

Step 1: Fork the Library

It is rare to find a Ruby gem or library that isn’t on Github these days. After locating the library, always check the network graph to try and find other popular forks. Often the problem you are trying to solve has already been fixed by another developer. In that case, you can skip straight to step 3. Otherwise, fork the code base to your own GitHub account.

Step 2: Make Your Changes

Clone your fork and make whatever changes you need. If you are feeling generous, add an appropriate test to the code base as well so it can be contributed back to the original fork, but as long as you have a test somewhere (such as in your main app) for the desired behavior you will be fine.

Step 3: Change Your Gemfile

Point your Gemfile at the new code:

# Gemfile
# From this
gem 'rails'

# To this
gem 'rails', :git => 'git://github.com/xaviershay/rails', :branch => 'important-fix'

And reinstall your gems by running bundle.

Step 4: Document

This step is important. There is no excuse for skipping it. You need documentation in three places:

  1. A note at the top of the README in your fork, documenting the changes. Any developer can stumble across a public fork, and there is nothing more frustrating than trying to figure out whether a fork already solves your problem. At the very least, a “here be dragons” note will be appreciated.

  2. The place in your code base that depends on the fork. You can expect other developers to be familiar with rails and the standard gems. They won’t be familiar with the behavior of your changes.

  3. The Gemfile. Make a note above your gem line as to why a fork is required. Provide enough information that a future developer will know when or if it would be appropriate to upgrade or switch back to the main gem. Here are some real examples from some of my projects:

# An experimental fix for memory bloat issues in development, if it works
# I will be patching to core.
gem ...

# 1.1 requires rubygems > 1.4, so won't install on heroku. This fork removes
# that dependency, since it is actually only required for the development
# dependencies.
gem ...

# Need e86f5f23f5ed15d2e9f2 in master and us to upgrade to dm-core 1.1
# before we switch back. Should be in 1.1.1 release.
gem ...

Bonus Step: Upgrading

Six months down the track, how will you know whether your monkey-patched fixes have been solved elsewhere? Sure, your tests should cover it, but it is nice to have some more confidence. We can use some git tricks to get some intel. Add the master fork as a remote to your project, and you can get a log of the differences between them. Here is an example of a fork of dm-rails I have:

$ git clone git://github.com/xaviershay/dm-rails
$ cd dm-rails
$ git remote add datamapper git://github.com/datamapper/dm-rails
$ git fetch datamapper
$ git log --format=oneline --graph v1.1.0..origin/dev-fix-3.0.x
* e9a2b623aea6c87675247230acce81b031163719 Need to .dup this array because otherwise deleting from it causes undefined iteration
* 0736617a1a97862ab249e6388a3c87df4d9d3231 Remove duplicate dependencies from gemspec now that Jeweler reads the Gemfile
* 0265016cdf4528a922e1db32ae922924465f095f Revert "Merge branch 'master' into mine"
* f054c803baf41fabe0ac443bc8d205f867100a9c Merge branch 'master' into mine
* a969fd1ac2066e4b4bc785a0e9a7d904309ca64f Regenerate gemspec
* 373073444acae97b2a9ad9e511e16f44a46a73ed Clear out descendants on preloadmodels to prevent memory bloat in development

Ignoring the merge and gemspec commits, you can see that my commits did not make it into the 1.1.0 release. This does not mean I should not upgrade – it is quite possible that my problem was solved in another way – but it lets me know what I am looking for.

Parting Words

For a project using Bundler, there is now rarely ever a need to monkey patch anything. Any bugs or enhancements can be fixed properly at the source, resulting in happier code and happier developers.

#259 Decent Exposure

This post is by RailsCasts from RailsCasts

The decent_exposure gem makes it convenient to share controller data with the view through methods instead of instance variables.

Why The Lucky Stiff’s Delightful Foreword for Beginning Ruby

This post is by Peter Cooper from Ruby Inside

I started Ruby Inside in May 2006 as a promotional vehicle for my then in-progress book, Beginning Ruby. It eventually went on to be published by Apress and is now on its second edition having sold quite a few copies.

It’s typical to choose someone who’s better known than you in the field to write a foreword for you to lend some legitimacy to your book and I only had one choice: Why The Lucky Stiff. As with many Rubyists, Why was a hero of mine and I wanted to go with the unusual route of an illustrated foreword. Surprisingly, Why readily accepted the challenge.

The foreword turned out great. Why disappeared for a few months after submitting his first drafts so we stuck with them but he had more planned. Many readers of Beginning Ruby have commented on how much they love his work, but today I discovered that several people I know well had never seen this work (because, of course, competent Rubyists don’t need an introductory book). So here’s a copy and paste of Why’s foreword for Beginning Ruby for all to enjoy:

If you want to check out Beginning Ruby, give this PDF a look. The 2nd edition is a couple of years old now so ignore the Rails bits. Oh, and shh…

JRuby 1.6.0, Ruby stats, memes, and more

This post is by Jason Seifer and Dan Benjamin from The Ruby Show

In this episode, Peter and Jason cover the latest release of JRuby, John Nunemaker and stats, tuning ruby 1.9.2, and more.

MacRuby 0.10 Released: XCode 4 Support and App Store Submissions

This post is by Peter Cooper from Ruby Inside

MacRuby’s lead developer and Apple employee Laurent Sansonetti has today released MacRuby 0.10 (yep, that’s ten), the latest version of the Mac OS X-focused Ruby implementation. 0.10 is the latest stepping stone on the way to a forthcoming 1.0 release.

Getting MacRuby

You can grab MacRuby 0.10 from the downloads page or directly at http://www.macruby.org/files/MacRuby 0.10.zip. Beware, though, that the binary installer download will only work on 64 bit Intel-powered machines running OS X 10.6 or higher.

New Features, New Possibilities

0.10 is not a major release but a few things stand out in the release notes for 0.10 amongst all the usual performance tweaks and bug fixes:

  • Support for the new MacBook Pro hardware (SandyBridge processors).
  • Fixes in macruby_deploy for App Store submissions.
  • Xcode4 support.

This is the first time I’ve seen an obvious reference to the App Store in the MacRuby release notes and it’s an exciting development. Back in October 2010, I wrote MacRuby + Mac App Store == Low Hanging Fruit for Rubyists where I riffed on the possibilities that MacRuby could offer to Rubyists looking to make a splash with Ruby powered desktop apps. It looks like the door might be opening a little on this.

Going Further

If MacRuby interests you and you want to ‘book up’ check out MacRuby: The Definitive Guide by Matt Aimonetti and MacRuby in Action by Brendan Lim. Both are still in pre-release stages but beta copies are available.

I also recommend reading MacRuby for the Desktop: Seven Reasons by Andre Lewis of Scout. He’s in the process of building a desktop Mac app with MacRuby and shares some general insights into how he’s finding MacRuby.

Programming Concurrency on the JVM: Mastering Synchronization, STM, and Actors Now in Beta

This post is by Pragmatic Bookshelf from Pragmatic Bookshelf

Programming Concurrency on the JVM: Mastering Synchronization, STM, and Actors now in beta

Hi My Name is John…

This post is by John Nunemaker from RailsTips by John Nunemaker

…and I am addicted to analytics. It all started when I was a wee lad. I quite enjoyed playing Tecmo NBA Basketball, among other games. One day, while rocking the house with Shawn Kemp and the Seattle Supersonics, I noticed that Tecmo NBA Basketball did not seem to be correctly recording rebounds.

Obviously, this kind of egregious error was unacceptable. With pad and paper, I began to keep track of rebounds on my own. After each rebound, I would record the stat for the player grabbing it. Yes, I actually paused game play so that I could have correct analytics on rebounds.

The Joys of Blogging

Anyway, fast forward to 2011 where I now operate as a programmer. I could tell you that I grew out of that phase in my life, but alas I have not. From Shortstat, to Mint, and now on to Gaug.es, I have maintained quite a fascination with analytics.

If I am being completely honest, one of the main reasons I blog is to see the views come in after a new post. And oh the joys when it lands on Reddit or HN and brings me people in excess (and lame comments covering how stupid I am).

Graphite and Statsd

The great thing is that on top of websites, I now help maintain several applications. Applications are a fun and tricky beast full of opportunities to record metrics. Most of the time though, these metrics go unrecorded because it is too much work to store and maintain them.

After reading measuring anything and everything by the fine folks at Etsy, I decided it was time to get dirty. I spent a few hours this weekend setting up Graphite and statsd on a small VPS.

Graphite is “enterprise scalable realtime graphing” and statsd, built by Etsy, is a “network daemon for aggregating statistics, rolling them up, then sending them to Graphite”.

Stealing pieces of a gist, I fumbled my way through, and with a little help from Kastner, I was good to go.


Once I was past the “I feel stupid because I have never really set up Python or Node.js apps before” phase, it was time to start sending my setup some data. statsd speaks UDP, which I had certainly heard about, but never before actually looked into.

UDP is an unreliable, unordered, lightweight protocol for slinging messages around the interwebs. The best way to think of it for those that are unfamiliar is fire and forget. The huge upside of UDP for analytics is that the effect of sprinkling it all over your app is minimal.

You lose a millisecond constructing and sending the message, but if statsd ever goes down, your app does not. You simply lose statistics until it comes back up. Let’s look at a simple example.

require 'socket'
socket = UDPSocket.new
socket.send('some message', 0, 'localhost', 33333)

Go ahead and run that. Notice how it doesn’t error? No, it does not magically spin up something in the background. It is fire and forget. The message is sent, but whether or not it makes it to its destination does not matter. Most of the time it will, sometimes it won’t.

I read somewhere that TCP is like a phone call and UDP is like a letter in the mail. Good analogy.
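For the unconvinced, here is a minimal self-contained sketch that binds a local listener before firing a datagram at it (port 33333 just matches the example above; nothing here is statsd-specific):

```ruby
require 'socket'

# Bind a local UDP listener first, so the datagram has somewhere to land.
receiver = UDPSocket.new
receiver.bind('127.0.0.1', 33333)

# Fire and forget: send never waits for an acknowledgement.
sender = UDPSocket.new
sender.send('some message', 0, '127.0.0.1', 33333)

# Receive the datagram on the other end.
message, _addr = receiver.recvfrom(1024)
puts message
```

Run it and the message comes out the other end; kill the receiver and the sender still returns happily, which is the whole point.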

Statsd from Ruby

I started to work on a UDP client for statsd and then realized I should probably check Github before getting too far in. Thankfully, Rein already had a nice little statsd library created.

I felt like it was missing a few things, so I forked it and added a time method that works with blocks, plus namespacing (so I could track multiple apps from the same graphite/statsd install). I have already talked with him and he plans on pulling both. Until then, you can check out the mine branch on my fork.

Now that I had the server side set up and was armed with a client library, I started to think about what kind of stats I would like to add to Gaug.es. The first thing I could think of was recording each track. I already store an all-time number in Mongo, but minute/hour/day data could not hurt.

I created a tiny wrapper around Rein’s library so things would only be tracked in staging and production. I certainly could do this in other ways, and probably will, but it worked well enough to get things out the door.

class Stats
  cattr_accessor :client

  def self.record_stats?
    Gauges.environment == 'staging' || Gauges.environment == 'production'
  end

  def self.increment(*args)
    client.increment(*args) if record_stats?
  end

  def self.decrement(*args)
    client.decrement(*args) if record_stats?
  end

  def self.timing(*args)
    client.timing(*args) if record_stats?
  end
end

Stats.client = Statsd.new(ipaddr, port)
Stats.client.namespace = 'gauges'

Using this, I added an increment to the track route (Stats.increment('routes.track')), deployed, and instantly had graphs to play with. Below is tracks per second since last night when I first added the tracking.

Fun Use Case

In Gaug.es, about 75% of the storage is in the contents collection. This collection tracks the views, titles and paths for each site. I was curious what was taking up more space, titles or paths.

Abusing the timing method in statsd, I was able to send the length of the path and title for each piece of content as it was tracked and then get a nice graph of the lower, upper, mean, and 90th-percentile values.
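On the wire, a statsd timing is just a “bucket:value|ms” UDP datagram, so the length trick boils down to sending string lengths as if they were durations. A hypothetical sketch (the host, port, and bucket names here are assumptions, not Gaug.es’ actual configuration):

```ruby
require 'socket'

# Assumed location of the statsd daemon; 8125 is statsd's default port.
STATSD_HOST = '127.0.0.1'
STATSD_PORT = 8125

# Hand-rolled timing call: builds the "<namespace>.<bucket>:<value>|ms"
# payload and fires it off. Returns the payload so we can inspect it.
def timing(bucket, value, socket = UDPSocket.new)
  payload = "gauges.#{bucket}:#{value}|ms"
  socket.send(payload, 0, STATSD_HOST, STATSD_PORT) # fire and forget
  payload
end

title = 'Some Hypothetical Post Title'
path  = '/some/hypothetical/path'

puts timing('content.title.length', title.length)
puts timing('content.path.length',  path.length)
```

Because it is UDP, sprinkling these calls through the tracking path costs almost nothing even when no statsd is listening.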

I noticed right away that some pieces of content were over 600 characters long. This seemed odd, so I started logging the offending pieces of content. I tailed the log for a while and saw that it was Facebook’s fault. 🙂

For some reason sites using Facebook’s “like” tools end up getting a query string parameter named fbc_channel, which has a value that is hundreds of characters of JSON. Awesome.

I created a test case out of the misbehaving content, stripped the fbc_channel param, and deployed a fix. Based on the graph below it is obvious when I pushed out the change.

From adding the analytics, to detection, to deploying a fix, only a few minutes flew by. Note that previously I would not have even tracked content path length. I would have never discovered the issue and the sites that had this going on would have continued to have jacked up stats, probably never mentioning it to me.

You have no excuse

I spent a few hours getting things running, but oh the joy I have now. Set up a small VPS or an EC2 micro instance. Install Graphite and statsd. Never again wonder. Graph all your theories and improve your apps. That is all for now, I have more metrics to track!

#258 Token Fields

This post is by RailsCasts from RailsCasts

Click here to view on the original site: Original Post

With the jQuery Tokeninput plugin it is easy to add an autocompleting list of entries for a many-to-many association.

Maze Generation: More weave mazes

This post is by Jamis from the { buckblogs :here } - Home

Click here to view on the original site: Original Post

My previous post showed one way to create “weave mazes”, or mazes in which the passages weave over and under one another. The technique I showed was flexible, and could be implemented using several different maze algorithms, but it has (at least) one big drawback: it’s not predictable. You don’t know how many over/under crossings you’ll get until the maze has been generated, and for small mazes (5×5, for instance) you may very often get no crossings at all.

In the comment thread for that post, Robin Houston described an alternate algorithm. Instead of building the crossings as you generate the maze, you do it as a preprocessing step. You scatter crossings (either at random, or deliberately) across the maze, and then generate the maze so that they connect correctly.

Sounds like it could be tricky, yeah? You’ve got all these independent connections, like little graphs floating around, all disjoint… If only there were a way to generate a maze by connecting disjoint trees into a single graph…

What’s that? Oh, right! Kruskal’s algorithm does just that! Let’s take a look at how easy Kruskal’s makes this.

Weave mazes via preprocessing

I’m not going to review Kruskal’s algorithm here. If you don’t remember the details, I strongly recommend you read (or re-read) my article on Kruskal’s algorithm. Don’t worry, I’ll wait.

Seriously, read it. The rest of this post won’t make much sense unless you understand how Kruskal’s works.

Got it? Alright, let’s proceed.

So, let’s assume you’ve got your blank grid, and you’ve set it up (just like Kruskal’s wants) so that each cell is the root of its own (one-node) tree. You’re about to generate the maze…

First, though, we do the preprocessing step: let’s scatter over/under crossings about the grid. We have a few constraints:

  1. no crossing can be placed on the edge of the grid (because that would imply a passage moving outside the grid).
  2. no crossing can be placed immediately adjacent to another crossing (this just simplifies things—allowing adjacent crossings appears to introduce a surprising amount of complexity).
  3. if the y-1 and y+1 (north and south) cells are already connected (in the same set), the crossing must not be allowed (because it would create a loop).
  4. similarly, if the x-1 and x+1 (west and east) cells are already connected, the crossing must not be allowed.

So, we place a crossing. We randomly decide whether the over-passage is north/south or east/west, and then carve the appropriate values into the grid.

However, since we’re dealing with Kruskal’s algorithm, we also need to update the connection sets, and then remove the relevant edges from the edge set. Because we don’t allow adjacent crossings, we don’t ever have to worry about things connecting directly to the cross-over cells (this is why allowing adjacent crossings gets complicated). So, to update the sets, we just join the north/south cells, and the east/west cells. And then we remove the edges connecting the cross-over cell, and its adjacent cells.

A lot of words, but not so much work, in practice!

Once you’ve set all the over/under crossings, you’re ready to actually generate the maze. And the ahem amazing thing about Kruskal’s algorithm is, if you’ve correctly handled the edges and connections in the preprocessing step, you can run it without modification at this stage. The result will be a perfect, weave maze!

For your enjoyment, here are some demos to play with. Try the different settings to see how the output changes: particularly, notice how Kruskal’s using the naive “weave as you go” approach generates far fewer crossings (on average) than the approach described here.

Demos from the original post: Recursive Backtracker (in-process weave), Kruskal’s (in-process weave), and Kruskal’s (pre-processed weave, with a selectable crossing density).


Since the only thing that changes between this weave technique and the non-weave Kruskal’s algorithm is the pre-processing step, I’m just going to go over the pre-processing step here.

You will probably also want to use a rendering method that allows you to unambiguously show the over/under crossings, such as the method using unicode characters that I described in my previous article.

Now, there are a lot of ways you could go about scattering crossings across the grid. For simplicity, I’m going to just iterate over each cell and randomly decide whether a crossing should be placed there. (This makes it easier to parameterize the process, allowing you to provide a “density” parameter to control how many crossings get placed.)

So, the first thing I do is just iterate over the possible cells:

1.upto(height-2) do |cy|
  1.upto(width-2) do |cx|
    # ...
  end
end

We start at 1 and go to n-2, because of the first constraint: crossings cannot be placed on the grid boundary.

Within the loop, I then use the “density” parameter (an integer from 0 to 100) to determine whether to skip this cell or not:

next unless rand(100) < density

Next, I compute the coordinates of the adjacent cells, and then check to make sure the cell really is eligible for a crossing:

nx, ny = cx, cy-1
wx, wy = cx-1, cy
ex, ey = cx+1, cy
sx, sy = cx, cy+1

next if grid[cy][cx] != 0 ||
  sets[ny][nx].connected?(sets[sy][sx]) ||
  sets[wy][wx].connected?(sets[ey][ex])

(Remember that sets is a two-dimensional array of Tree objects that allow us to quickly query and join sets.)

If the grid at the chosen point is non-zero, then (by implication) we are adjacent to another crossing, and that isn’t allowed. And if the north and south sets are already connected, or the east and west sets, then we can’t allow a crossing here either (lest we introduce a loop into the graph).
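The set-join step itself is elided here; assuming the Tree API from the Kruskal’s article (a connected? query and a connect join), it amounts to two calls. A minimal self-contained union-find sketch of that behavior, with a hypothetical stand-in for the Tree class:

```ruby
# Stand-in for the Tree (union-find) class from the Kruskal's article:
# connected? asks whether two cells share a root; connect joins their trees.
class Tree
  attr_accessor :parent

  def root
    parent ? parent.root : self
  end

  def connected?(other)
    root == other.root
  end

  def connect(other)
    other.root.parent = self
  end
end

north, south, west, east = Array.new(4) { Tree.new }

# Placing a crossing joins the north/south cells and the west/east cells:
north.connect(south)
west.connect(east)

p north.connected?(south)  # joined by the crossing
p north.connected?(east)   # still in separate trees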

Now that the sets have been updated, we can carve the new passages into the grid:

if rand(2) == 0
  grid[cy][cx] = E|W|U
else
  grid[cy][cx] = N|S|U
end

grid[ny][nx] |= S if (grid[ny][nx] & U) == 0
grid[wy][wx] |= E if (grid[wy][wx] & U) == 0
grid[ey][ex] |= W if (grid[ey][ex] & U) == 0
grid[sy][sx] |= N if (grid[sy][sx] & U) == 0

Recall that the U constant is just used to indicate the presence of an “under” passage. Thus, “E|W|U” means an east/west passage with an implied north/south passage beneath it, and “N|S|U” means a north/south passage with an implied east/west passage beneath it.

Further, we only carve on an adjacent passage if it doesn’t already have the “under” bit set. (If it does, then it is already a four-way crossing, and the passage we’re trying to add is already present, either explicitly or implicitly.)
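The direction constants in these snippets are bit flags; the exact values are not shown here, so the ones below are a hypothetical assignment (any distinct powers of two would do):

```ruby
# Hypothetical bit values for the direction constants; U marks an "under"
# passage running beneath the explicit one.
N, S, E, W, U = 1, 2, 4, 8, 16

cell = E | W | U   # east/west passage over an implied north/south passage

p (cell & U) != 0  # the cell carries an "under" passage
p (cell & N) != 0  # but no explicit north exit
```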

Finally, we just need to update the edge list, to account for the edges we’ve now implicitly processed by adding this crossing:

edges.delete_if do |x, y, dir|
  (x == cx && y == cy) ||
  (x == ex && y == ey && dir == W) ||
  (x == sx && y == sy && dir == N)
end

Remember that the edge list is just a collection of 3-tuples, with each tuple consisting of the x and y coordinates of a cell, and the direction that the edge exits that cell. It only contains edges that exit cells to the west, or the north (otherwise, we’d get duplicate edges: west from cell x is the same edge as east from cell x+1).

Thus, we’re deleting any edge connecting to the crossing cell itself, the west-facing edge from the eastern cell, and the north-facing edge from the southern cell.
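Building the initial edge list under that west/north-only convention can be sketched in a few lines (the N and W bit values are hypothetical, as above):

```ruby
# Sketch: initial Kruskal's edge list for a small grid. Each edge is a
# 3-tuple [x, y, direction], and only north- and west-facing edges are
# stored so no edge appears twice.
N, W = 1, 8
width, height = 4, 3

edges = []
height.times do |y|
  width.times do |x|
    edges << [x, y, N] if y > 0  # edge to the cell above
    edges << [x, y, W] if x > 0  # edge to the cell to the left
  end
end

p edges.length
```

For a width x height grid this yields width*(height-1) + (width-1)*height edges, here 4*2 + 3*3 = 17.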

When these loops finish, you’ll have a grid containing randomly-scattered crossings, ready for Kruskal’s algorithm to finish up.


This approach allows you to have weave mazes with a much higher density of crossings than the in-process approach. However, while the in-process approach can generate weave mazes with adjacent crossings, the algorithm described here cannot. I’ve spent some time trying to fix this flaw, but quickly found myself mired in complexity: I’m hoping brighter minds than mine can find an elegant solution to this!

Still, the biggest lesson I took away from this experience is this: it pays to know multiple different ways to solve a problem. I’m not a big fan of Kruskal’s algorithm in general (I don’t like the aesthetic of the mazes it generates), but if I had never bothered to learn how Kruskal’s operates, I would never have recognized its suitability for connecting the pre-processed crossings.

Stated more generally: the key to appearing smart is not knowing everything, but rather knowing the right thing.

Just because an algorithm may not be your favorite way of solving a problem most of the time does not mean it won’t eventually be the best way to solve one. You just wait: one of these days I’ll encounter a problem while writing a web app that one of these maze algorithms will be perfect for!

So, give this algorithm a try, if for no other reason than it’s good exercise! And we all know exercise is good for you. 😉 My own implementation is given below:

kruskals-weave.rb on gist.github.com


Leaving a Legacy…System

This post is by Chad Fowler from The Passionate Programmer

Click here to view on the original site: Original Post

Ever since reading David Heinemeier Hansson’s post Enterprise Is the New Legacy over five years ago, I’ve been chewing on something. The gist of the post was that “enterprise” is and should be a bad word, just like “legacy”.

But why is “legacy” a bad word to begin with? The word makes most software developers and IT people ill.

In other fields and in life in general, the word “legacy” isn’t thusly encumbered. It refers to an inheritance left to those behind you. Your life’s work. Your essential story.

In software, that story is assumed to be a tragedy.

But even in the case of software, “legacy” is an indication of success. Sure, old software was written with old technology. And most software (or indeed most things created by humans) has its share of warts and dark corners. But the fact that we refer to a piece of software as “legacy” indicates that it was successful enough to have been deployed and to have been used for enough good that it is now something we’re “left with” and that we must either maintain or replace.

That isn’t so bad, is it? In an industry where more software projects fail or are ‘challenged’ than succeed, getting to legacy status is cause for celebration!

Here’s a sad idea: as developers, even when we do succeed, we tend to create things that are abandoned at great cost only a few years after we pour our hearts and souls into them. As rough as your last project might have been and as hard as the deadlines were, chances are your project will be disparaged and terminated within 10 years of its birth.

So, what do we developers leave as a legacy? In most cases, we don’t leave much of anything.

At a previous job, a mission-critical core system ran on an ancient, customized mainframe with a custom TCP/IP stack and a custom relational database system. At the time, the system was over 25 years old. It performed well. It survived Y2K. It was well understood. It reliably ran our (big) business.

That was ten years ago. I’d be willing to bet it’s still running. If not the whole system, at least a subset.

That would make it 35.

Sure, it had its ugly parts. And most of us were terrified of it. But, hey! Still running and doing its business after 35 years. I hope I someday create something that successful.

How would I create something that had that kind of longevity? How different would my designs be if I believed I was creating software to last 40 years?

It’s daunting, isn’t it? My knee-jerk reaction might be to do a Big Design Up Front. But how could I possibly design an entire system with 40 years of future knowledge in mind? I couldn’t. Even predicting next year is hard.

So maybe I’d need to design something that was ultimately flexible. A framework of frameworks where everything is pluggable.

Any software developer who lived through parts of the 90s knows these systems buckle under their own weight.

I don’t know how to design a system that could live a long and healthy life. I don’t know because I haven’t done it yet. Have you?

Note: This wasn’t a rhetorical question.

Ward Cunningham interview on the Pragmatic Podcast

This post is by Pragmatic Bookshelf from Pragmatic Bookshelf

Click here to view on the original site: Original Post

Ward Cunningham on the Pragmatic Podcast!

Passenger 3.0.5, jQuery, RSpec, and more

This post is by Jason Seifer and Dan Benjamin from The Ruby Show

Click here to view on the original site: Original Post

In this episode, Peter and Jason cover Phusion Passenger 3.0.5, jQuery on Rails, The RSpec Book, and just about every ruby gem on GitHub.

JRuby 1.6 Released: Ruby 1.9.2 Support and More

This post is by Peter Cooper from Ruby Inside

Click here to view on the original site: Original Post

It’s a newsflash! JRuby 1.6.0 has been released today. Congratulations to the JRuby team. 1.6 is a significant and much-awaited release and comes after a nine-month push of over 2500 commits.

Hit up the official release post for the full run-through but here are some of the highlights of the release:

  • Windows has been added to the JRuby team’s continuous integration system meaning that Windows support is only going to get better
  • Ruby 1.9.2 language and API support (with the exception of Encoding::Converter and ripper)
  • Built in profiler
  • General performance improvements
  • Experimental support for C extensions (with provisos)
  • RSpec is no longer included (worth mentioning in case it catches you out..)

You can download binary and source releases direct from JRuby.org if you want to get up to date, or update RVM with rvm get head and rvm reload before running rvm install jruby-1.6.0 🙂

Fingers crossed for some great JRuby tutorials and guides coming along in the next couple of months.

[me!] My Ruby Weekly e-mail newsletter is 7 months old and going great! For the best Ruby news of the week, check it out. However, you might also like JavaScript Weekly, a newer newsletter of mine dedicated to.. yep, JavaScript, node.js, CoffeeScript, etc. 😉