Rails, Objects, Tests, and Other Useful Things

For the first time in quite a while, I’ve been able to spend time working on a brand-new Rails application that’s actually a business thing and not a side project. It’s small. Okay, it’s really small. But at least for the moment it’s mine, mine, mine. (What was that about collective code ownership? I can’t hear you…)

This seemed like a good time to reflect on the various Object-Oriented Rails discussions, including Avdi’s book, DCI in Rails, fast Rails tests, Objectify, DHH’s skepticism about the whole enterprise, and even my little contribution to the debate. And while we’re at it, we can throw in things like Steve Klabnik’s post on test design and Factory Girl.

I’m not sure I have any wildly bold conclusion to make here, but a few things struck me as I went through my newest coding enterprise with all this stuff rattling around in my head.

A little background — I’ve actually done quite a bit of more formal Object-Oriented stuff, though it’s more academic than corporate enterprise. My grad research involved teaching object-oriented design, so I was pretty heavily immersed in the OO documents circa the mid-to-late 90s. So, it’s not like I woke up last May and suddenly realized that objects existed. That said, I’m as guilty as any Rails programmer at taking advantage of the framework’s ability to write big balls of mud.

Much of this discussion is effectively about how to manage complexity in an application. The thing about complexity is that while you can always add complexity to a system, you can’t always remove it. At some point, your code has to do what it has to do, and that puts a floor on how simple your system can be. You can move the complexity around, and you can arguably make it easier to deal with. But “easier to deal with” is subjective to some extent, and all of these techniques have trade-offs. Smaller classes mean more classes; adding structure to make dependencies flexible often increases immediate cost. Adding abstraction simplifies individual parts of the system at the cost of making it harder to reason about the system as a whole. There are some sweet spots, I think, but a lot of this is a question of picking the Kool-Aid flavor you like best.

Personally, I like to start with simple and evolve to complex. That means small methods, small classes, and limited interaction between classes. In other words, I’m willing to accept a little bit of structural overhead in order to keep each individual piece of the code simple. Then the idea is to refactor aggressively, making techniques like DCI more a tool I reach for when I see complexity than a place I start from. Premature abstraction is in the same realm as premature optimization. (In particular, I find a lot of forms of Dependency Injection really don’t fit in my head; it takes a lot for me to feel like that flavor of flexible dependency is the solution to my problem.)

I can never remember where I saw this, but it was an early XP maxim that you should keep the 90% of your system that is simple as simple as possible, so that you can bring maximum resources to bear on the 10% that is really hard.

To make this style work, you need good tests and you need fast tests — TDD is a critical part of building code this way. You need to be confident that you can refactor, and you need to be able to refactor in small steps and rerun tests. That’s why, while I think I get what Gregory Moeck is saying here, I can’t agree with his conclusion. I think “more testable” is just as valid an engineering goal as “fast” or “uses minimal memory”. I think if your abstraction doesn’t allow you to test, then you have the wrong abstraction. (Though I still think the example he uses is overbuilt…)

Fast tests are most valuable as a means to an end, the end being understandable and easily changeable code. Fast tests help you get there because you can run them more often; ideally, they run fast enough that you don’t break focus going back and forth between tests and code, so the transition is seamless. Also, an inability to write fast tests easily often means there’s a flaw in your design: too much interaction between multiple parts of your program, such that it’s impossible to test a single part in isolation.

One of the reasons that TDD works is that the tests become kind of a universal client of your code, forcing your code to have a lot of surface area, so to speak, and not a lot of hidden depth or interactions. Again, this is valuable because code without hidden depth is easier to understand and easier to change. If writing tests becomes hard or slow, the tests are trying to tell you that your code is building up interior space where logic is hiding — you need to break the code apart to expose the logic to a unit test.

The metric that matters here is how easily you can change your code. A quick guide to this is the kinds of bugs you get. A well-built system won’t necessarily have fewer bugs, but it will have shallower bugs that take less time to fix.

Isolation helps, and the Single Responsibility Principle helps. Both are good rules of thumb for keeping the simple parts of your code simple. But it also helps to understand that “single responsibility” is a matter of perspective. (I like the guideline in GOOS that you should be able to describe what a class does without using “and” or “or”.)

Another good rule of thumb is that objects that are always used together should be split out into their own abstraction. Or, from the other direction, data that changes on different time scales should be in different abstractions.

In Rails, remember that “models” is not the same as “ActiveRecord models”. Business logic that does not depend on persistence is best kept in classes that aren’t also managing persistence. Fast tests are one side effect here, but keeping classes focused has other benefits in terms of making the code easier to understand and easier to change.

Actual minor Rails example — pulling logic related to start and end dates into a DateRange class. (Actually, in building this, I started with the code in the actual model, then refactored to a HasDateRange service module that was mixed in to the ActiveRecord model, then refactored to a DateRange class when it became clear that a single model might need multiple date ranges.) The DateRange class can be reused, and that’s great, but the reuse is a side effect of the isolation. The main effect is that it’s easier to understand where the date range logic is.
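A minimal sketch of what such a class might look like — the method names here are invented for illustration, not taken from the actual app:

```ruby
require "date"

# Hypothetical sketch of the extracted DateRange class. It owns all the
# start/end date logic and knows nothing about persistence.
class DateRange
  attr_reader :start_date, :end_date

  def initialize(start_date, end_date)
    @start_date = start_date
    @end_date = end_date
  end

  def include?(date)
    (start_date..end_date).cover?(date)
  end

  def duration_in_days
    (end_date - start_date).to_i + 1
  end

  def overlaps?(other)
    start_date <= other.end_date && other.start_date <= end_date
  end
end
```

An ActiveRecord model can then expose one of these (or several) by wrapping its own date columns, say `DateRange.new(start_date, end_date)`, keeping the model itself focused on persistence.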

I’ve been finding myself doing similar things with Rails associations, pulling methods related to the list of associated objects into a HasThings style module, then refactoring to a ThingCollection class.
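In the same spirit, a collection class might look something like this — Ticket and TicketCollection are invented names, standing in for whatever association you’re wrapping:

```ruby
# Hypothetical sketch of the HasThings-to-ThingCollection refactoring.
# The collection-level logic gets a home of its own instead of living
# as methods mixed into the parent model.
class TicketCollection
  def initialize(tickets)
    @tickets = tickets
  end

  def total_price
    @tickets.sum { |ticket| ticket.price }
  end

  def unsold
    @tickets.reject { |ticket| ticket.sold }
  end
end
```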

You need to stay alert for abstractions trying to show up in your code. Passing arguments, especially the same argument sets to multiple methods, often means there’s a class waiting to be born. A lot of if logic or case logic often means there’s a set of objects with polymorphic behavior, especially if you are using the same logical test multiple times. Passing around nil often means you are doing something sub-optimally.
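For instance — and this is an invented illustration, not code from the app — repeating the same case statement on a status field usually means the statuses want to be objects:

```ruby
# Before: the same test scattered across several methods.
#
#   case trip.status
#   when :draft     then "Draft"
#   when :purchased then "Purchased"
#   end
#
# After: one small object per status, each answering the same messages,
# so the conditionals disappear at the call sites.
class DraftStatus
  def label
    "Draft"
  end

  def purchasable?
    true
  end
end

class PurchasedStatus
  def label
    "Purchased"
  end

  def purchasable?
    false
  end
end
```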

Another semi-practical Rails example: I have no problem with an ActiveRecord model having class methods that create new objects of that model as long as the methods are simple. As soon as the methods get complex, I’ve been pulling them into a factory class, where they become instance methods. (I always have the factory be a class that is instantiated rather than having it be a set of class methods or a singleton — I find the code breaks much more cleanly as regular instance methods.) At that point, you can usually break the complicated factory method into a bunch of smaller methods with semantically meaningful names. These classes wind up being very similar to a DCI context class.
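A sketch of that factory refactoring — TripFactory and its helper methods are invented for illustration, and Trip here is just an OpenStruct stand-in for the real ActiveRecord model:

```ruby
require "ostruct"

# Stand-in for the ActiveRecord model, for the sake of a runnable sketch.
Trip = Class.new(OpenStruct)

class TripFactory
  def initialize(user)
    @user = user
  end

  def build(attributes = {})
    trip = Trip.new(attributes)
    apply_default_price(trip)
    apply_frequent_traveler_discount(trip)
    trip
  end

  private

  # Each chunk of the formerly complicated class method becomes a small
  # instance method with a semantically meaningful name.
  def apply_default_price(trip)
    trip.price ||= 100
  end

  def apply_frequent_traveler_discount(trip)
    trip.price -= 10 if @user.frequent_traveler
  end
end
```

Because the factory is an instantiated object rather than a bag of class methods, its steps share state through ordinary instance variables and break apart cleanly.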

Which reminds me — if you are wondering whether the Extract Method refactoring is needed in a particular case, the answer is yes. Move the code to a method with a semantically meaningful name. Somebody will be thankful for it, probably you in a month.

Some of this is genuinely subjective — I never in a million years would have generated this solution — I’d be more likely to have a Null Object for Post if this started to bother me, because event systems don’t seem like simplifications to me.

I do worry how this kind of aggressive refactoring style, or any kind of structured style, plays out in a large team or even just a team with varying degrees of skill, or even just a team where people have different styles. It’s hard to aggressively refactor when three-dozen coders are dependent on something (though, granted, if you’ve isolated well you have a better shot). And it’s hard to overstate the damage that one team member who isn’t down with the program can do to your exquisite object model. I don’t have an answer to this, and I think it’s a really complicated problem.

You don’t know the future. Speculation about reuse gains and maintenance costs is just speculation. Reuse and maintenance are side effects of good coding practices, but trying to build them in explicitly by starting with complexity has the same problems as any up-front design, namely that you are making the most important decisions about your system at the point when you know the least about it. The TDD process can help you here.

Setting Up Fast No-Rails Tests

The key to fast tests is simple: don’t do slow things.

Warning: this post is a kind of long examination of a problem, namely, how to integrate fast non-Rails tests and slow Rails tests in the same test suite. This may be a problem nobody is having. But having seen a sample of how this might work, I was compelled to try and make it work in my toy app. You’ve been warned, hope you like it.

In a Rails app, “don’t do slow things” largely means “don’t load Rails”. Which means that the application logic that you are testing should be separable from Rails implementation details like, say, ActiveRecord. One way to do that is to start putting application logic in domain objects that use ActiveRecord as an implementation detail for persistence.

By one of those coincidences that aren’t really coincidences, not only does separating logic from persistence give you fast tests, it also gives you more modular, easier to maintain code.

To put that another way, in a truly test-driven process, if the tests are hard to write, that is assumed to be evidence that the code design is flawed. For years, most of the Rails testing community, myself included, have been ignoring the advice of people like Jay Fields and Michael Feathers, who told us that true unit tests don’t touch the database, and we said, “but it is so easy to write a model test in Rails that hits the database, we are sure it will be fine.” And we’ve all, myself included, been stuck with test suites that take way too long to run, wondering how we got there.

Well, if the tests get hard to write or run, we’re supposed to consider the possibility that the code is the issue. In this case, that our code is too entangled with ActiveRecord. Hence, fast tests. And better code.

Anyway, I built a toy app placing logic in domain objects for the Mountain West workshop. In building this, I wanted to try a whole bunch of domain patterns at once, fast tests, DCI, presenters, dependency injection. There are a lot of things that I have to say about messing around with some of the domain object patterns floating around, but first…

Oh. My. God. It is great to be back in a code base where the tests ran so fast that I didn’t have time to lose focus while the tests ran. It occurred to me that it is really impossible to truly do TDD if the tests don’t run fast, and that means we probably have a whole generation of Rails programmers who have never done TDD, who only know tests as the multi-minute slog they need to get through to check in their code, and don’t know how much fun fast TDD is.

Okay, at some unspecified future point, I’ll talk about some of the other patterns. Right now, I want to talk about fast tests, and some ideas about how to make them run. While the basic idea of “don’t do slow things” is not hard, there are some logistical issues about managing Rails-stack and non-Rails-stack tests in the same code base that are non-obvious. Or at least they weren’t obvious to me.

One issue is file logistics. Basically, in order to run tests without Rails, you just don’t load Rails. In a typical Rails/RSpec setup, that means not requiring spec_helper into the test file. However, even without spec_helper, you still need some of the same functionality.

For instance, you still need to load code into your tests. This is easy enough: where spec_helper loaded Rails and triggered Rails autoloading, you just explicitly require the files you need in each spec file. If your classes are really distributing responsibility, you should only need to require the actual class under test and maybe one or two others. I also create a fast_spec_helper.rb file, which starts like this:

$: << File.expand_path("app")
require 'pry'
require 'awesome_print'

Pry and Awesome Print are there because they are useful in troubleshooting; the addition to the load path is purely a convenience when requiring my domain classes.

There is another problem, which is that your domain classes still need to reference Rails and ActiveRecord classes. This is a little messier.
I hope it’s clear why this is a problem – even if you are separating domain logic from Rails, the two layers still need to interact, even if it’s just of the load/save variety. So your non-Rails tests and the code they call may still reference ActiveRecord objects, and you need to not have your tests blow up when that happens. Ideally, you also don’t want the tests to load Rails, either, since that defeats the purpose of the fast test.

Okay, so you need a structure for fast tests that allows you to load the code you need, and reference the names of ActiveRecord objects without loading Rails itself.

Very broadly speaking, there are two strategies for structuring fast tests. You can put your domain tests in a new top-level directory – Corey Haines used spec-no-rails in his shopping cart reference application. Alternately, you can put domain tests with everything else in the spec directory, with subdirectories like spec/presenters and the like, just have those files load your fast_spec_helper. About a month ago, Corey mentioned on Twitter and GitHub that he had moved his code in this direction.

There are tradeoffs. The separate top-level approach enforces a much stricter split between Rails tests and domain tests – in particular, it makes it easier to run just the domain tests without loading Rails. On the other hand, the directory structure is non-standard, and there is a whole ecosystem of testing tools that basically assumes you have one spec directory.
It’s not hard to support multiple spec directories with a few custom rake tasks, though it is a little awkward. Since your Rails objects are never loaded in the domain object test suite, though, it’s very easy to stub them out with dummy classes that are only used by the domain object tests.
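Those custom rake tasks might look something like the following — a sketch only; the directory name spec_fast and the task names are assumptions about your layout:

```ruby
# Rakefile fragment (sketch). Assumes RSpec is available and that fast
# tests live in a separate spec_fast directory alongside spec.
require "rspec/core/rake_task"

# Run only the fast, no-Rails specs.
RSpec::Core::RakeTask.new(:spec_fast) do |t|
  t.pattern = "spec_fast/**/*_spec.rb"
end

# Run both suites together.
RSpec::Core::RakeTask.new(:spec_all) do |t|
  t.pattern = "{spec,spec_fast}/**/*_spec.rb"
end
```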

As I mentioned, Corey has also shown an example with all the tests under a single directory plus some namespacing magic. I’m not 100% sure I like the single-directory version better, but I can explain how he got it to work.

With everything being under the same top level directory, it’s easier to run the whole suite, but harder to just run the fast tests (not very hard, just harder). Where it gets weird is when your domain objects reference Rails objects. As mentioned before, even though your domain objects shouldn’t need ActiveRecord features, they may need to reference the name of an ActiveRecord class, often just to call find or save methods. Often, “fast” tests get around this by creating a dummy class with the same name as the ActiveRecord class.

Anyway, if you are running your fast and slow tests together, you’re not really controlling the order of test runs. Specifically, you don’t know if the ActiveRecord version of your class is available when your fast test just wants the dummy version. So you need dummy versions of your ActiveRecord classes that are only available from the fast tests, while the real ActiveRecord objects are always visible from the rest of the test suite.

I think I’m not explaining this well. Let’s say I have an ActiveRecord object called Trip. I’ve taken the logic for purchasing a trip and placed it in a domain object, called PurchaseTripContext. All that’s fine, and I can test PurchaseTripContext in a domain object test without Rails right up until the point where it actually needs to reference the Trip class because it needs to create one.

The thing is, you don’t actually need the entire Trip class to test the PurchaseTripContext logic; you just need something named Trip that you can create, set some attributes on, and save. It’s kind of a fancy mock. And if you just require the existing Trip, you load ActiveRecord, and with it Rails, which is what we are trying to avoid.

There are a few ways to solve this access problem:

If you have a separate spec_fast directory that only runs on its own, then this is easy. You can create just a dummy class called Trip – I make the dummy class a subclass of OpenStruct, which works tolerably well. class Trip < OpenStruct; end.

You could also use a regular stub, but there are, I think, two reasons why I found that less helpful. First, the stubs kind of need to be recreated for each test, whereas a dummy class basically gets declared once. Second, OpenStruct lets you hold on to a little state, which – for me – makes these tests easier to write.
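A quick illustration of that state-holding point:

```ruby
require "ostruct"

# The OpenStruct-based dummy accepts arbitrary attributes and remembers
# them, so a test can set state in one step and assert on it in another.
class Trip < OpenStruct; end

trip = Trip.new(name: "Spring Trip")
trip.purchased = true  # no stubbing needed; the dummy just holds state
```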

Anyway, if your domain logic tests are mixed into the single spec directory, then the completely separate dummy class doesn’t work – the ActiveRecord class might already be loaded. Worse, you can’t depend on the ActiveRecord class being there, because you’d like to be able to run your domain test standalone without loading Rails. You can still create your own dummy Trip class, but it requires a little bit of Ruby module munging; more on that in a second.

If you want to get fancy, you can use some form of dependency injection to make the relationship between PurchaseTripContext and Trip dynamic, and use any old dummy class you want. One warning – it’s common when using low-ceremony dependency injection to make the injected class a parameter of the constructor with a default, as in def initialize(user, trip_class = Trip). That’s fine, but it doesn’t completely solve our testing problem: the default is evaluated whenever the constructor is called without an explicit argument, so any code path that relies on the default still needs the constant Trip to resolve to some value.
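Here’s a sketch of that injection style, with invented names. Note that a fast test which always passes the dummy explicitly never touches the default, so the real constant never has to resolve in that test:

```ruby
require "ostruct"

# Low-ceremony constructor injection. Ruby evaluates the default
# (Trip) only when the constructor is called without an explicit
# trip_class, so injecting a dummy sidesteps the real constant.
class PurchaseTripContext
  def initialize(user, trip_class = Trip)
    @user = user
    @trip_class = trip_class
  end

  def purchase
    @trip_class.new(purchaser: @user)
  end
end

# Dummy stand-in used by the fast test.
DummyTrip = Class.new(OpenStruct)
```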

Or, you could bite the bullet and bring the Rails stack in to test because of the dependency. For the moment, I reject this out of hand.

This isn’t an exhaustive list, there are any number of increasingly insane inheritance or metaprogramming things on the table. Or under the table.

So, if we choose a more complicated test setup with multiple directories, we get an easy way to specify these dummy classes. If we want the easier single-directory test setup, then we need to do something fancier to make the dummy classes work for the fast tests but be ignored by the Rails-specific tests.

At this point, I’m hoping this makes sense. Okay, the problem is that we want a class to basically have selective visibility. Here’s the solution I’m trying – this is based on a gist that Corey Haines posted a while back. I think I’m filling in the gaps to make this a full solution.

For this to work, we take advantage of a quirk in the way Ruby looks up class and module names. Ruby class and module names are just like any other Ruby constant. When you refer to a constant without any scope information – say, the class name Trip – Ruby first looks in the current module, and if the current module doesn’t contain the constant, Ruby then looks in the global scope. (That’s why you sometimes see a constant prefixed with ::, as in ::Trip; the :: forces the lookup to start at the global scope.)

That’s perfect for us, as it allows us to put a Trip class in a module and have it shadow the ActiveRecord Trip class in the global scope. There’s one catch, though – the spec, the domain class, and the dummy object all have to be part of the same local module for them all to use the same dummy class.
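The lookup rule is easy to demonstrate in isolation:

```ruby
# Tiny demonstration of the lookup quirk. Inside the module, the nested
# Trip shadows the top-level Trip; ::Trip skips straight to the top level.
class Trip
  def self.source
    "global"
  end
end

module Contexts
  class Trip
    def self.source
      "nested"
    end
  end

  def self.lookup
    Trip.source    # resolves to Contexts::Trip
  end

  def self.global_lookup
    ::Trip.source  # resolves to the top-level Trip
  end
end
```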

After some trial and error (lots of error, actually), here’s a way that I found which works with both the fast tests and the naming conventions of Rails autoload. I’m not convinced this is the best way, so I’m open to suggestions.

So, after 2000 words of prologue, here is a way to make fast tests run in the same spec directory in the same spec run as your Rails tests.

Step 1: Place all your domain-specific logic classes in sub modules.

I have subdirectories app/travel/presenters, app/travel/roles, and the like, where travel is the name of the Rails application. I’m not in love with the convention of putting all the domain-specific directories at a separate level, but it’s what you need to do in Rails to allow autoloaded classes to be inside a module.

So, my PurchaseTripContext class, for example, lives at app/travel/contexts/purchase_trip_context.rb, and starts out:

module Contexts
  class PurchaseTripContext
    # stuff
  end
end

Step 2: Place your specs in the same module

The spec for this lives at spec/contexts/purchase_trip_context_spec.rb (yes, that’s an inconsistency in the directory structure between the spec and app directories.) The spec also goes inside the module:

module Contexts
  describe PurchaseTripContext do
    it "creates a purchase" do
      #stuff
    end
  end
end

Step 3: Dummy objects

The domain objects are in a module, the specs are in a module, now for the dummy classes. Basically, I just put something like this in my fast_spec_helper.rb file:

module Contexts
  class Trip < OpenStruct; end
  class User < OpenStruct; end
end

This solves the problem, for some definition of “solves” and “problem”. The fast tests see the dummy class, the Rails tests see the Rails class. The tests can be run all together or in any smaller combination. The cost is a little module overhead that’s only slightly off-putting in terms of finding classes. I’m willing to pay that for fast tests. One place this falls down, though, is if more than one of my sub-modules need dummy classes – each sub-module then needs its own set, which does get a little ugly. I suspect there’s a way to clean that up that I haven’t found yet.

In fact, I wonder if there’s a way to clean up the whole thing. I half expect to post this and have somebody smart come along and tell me I’m over complicating everything – wouldn’t be the first time.

Next up, I’ll talk a little bit about how some of the OO patterns for domain objects work, and how they interact with testing.

Filed under: Rails, testing

Setting Up Fast No-Rails Tests

The key to fast tests is simple: don’t do slow things.

Warning: this post is a kind of long examination of a problem, namely, how to integrate fast non-Rails tests and slow Rails tests in the same test suite. This may be a problem nobody is having. But having seen a sample of how this might work, I was compelled to try and make it work in my toy app. You’ve been warned, hope you like it.

In a Rails app, “don’t do slow things” largely means “don’t load Rails”. Which means that the application logic that you are testing should be separable from Rails implementation details like, say, ActiveRecord. One way to do that is to start putting application logic in domain objects that use ActiveRecord as an implementation detail for persistence.

By one of those coincidences that aren’t really coincidences, not only does separating logic from persistence give you fast tests, it also gives you more modular, easier to maintain code.

To put that another way, in a truly test-driven process, if the tests are hard to write, that is assumed to be evidence that the code design is flawed. For years, most of the Rails testing community, myself included, have been ignoring the advice of people like Jay Fields and Michael Feathers, who told us that true unit tests don’t touch the database, and we said, “but it is so easy to write a model test in Rails that hits the database, we are sure it will be fine.” And we’ve all, myself included, been stuck with test suites that take way too long to run, wondering how we got there.

Well, if the tests get hard to write or run, we’re supposed to consider the possibility that the code is the issue. In this case, that our code is too entangled with ActiveRecord. Hence, fast tests. And better code.

Anyway, I built a toy app placing logic in domain objects for the Mountain West workshop. In building this, I wanted to try a whole bunch of domain patterns at once, fast tests, DCI, presenters, dependency injection. There are a lot of things that I have to say about messing around with some of the domain object patterns floating around, but first…

Oh. My. God. It is great to be back in a code base where the tests ran so fast that I didn’t have time to lose focus while the tests ran. It occurred to me that it is really impossible to truly do TDD if the tests don’t run fast, and that means we probably have a whole generation of Rails programmers who have never done TDD, who only know tests as the multi-minute slog they need to get through to check in their code, and don’t know how much fun fast TDD is.

Okay, at some unspecified future point, I’ll talk about some of the other patterns. Right now, I want to talk about fast tests, and some ideas about how to make them run. While the basic idea of “don’t do slow things” is not hard, there are some logistical issues about managing Rails-stack and non-Rails stack tests in the same code base that are non obvious. Or at least they weren’t obvious to me.

One issue is file logistics. Basically, in order to run tests without Rails, you just don’t load Rails. In a typical Rails/RSpec setup, that means not requiring spec_helper into the test file. However, even without spec_helper, you still need some of the same functionality.

For instance, you still need to load code into your tests. This is easy enough, where spec_helper loaded Rails and triggered the Rails auto load, you just need to explicitly require the files that you need for each spec file. If your classes are really distributing responsibility, you should only need to require the actual class under test and maybe one or two others. I also create a fast_spec_helper.rb file, which starts like this:

$: << File.expand_path("app")
require 'pry'
require 'awesome_print'

Pry and Awesome Print are there because they are useful in troubleshooting, the addition to the load path is purely a convenience when requiring my domain classes.

There is another problem, which is that your domain classes still need to reference Rails and ActiveRecord classes. This is a little messier.
I hope it’s clear why this is a problem – even if you are separating domain logic from Rails, the two layers still need to interact, even if it’s just of the load/save variety. So your non-Rails tests and the code they call may still reference ActiveRecord objects, and you need to not have your tests blow up when that happens. Ideally, you also don’t want the tests to load Rails, either, since that defeats the purpose of the fast test.

Okay, so you need a structure for fast tests that allows you to load the code you need, and reference the names of ActiveRecord objects without loading Rails itself.

Very broadly speaking, there are two strategies for structuring fast tests. You can put your domain tests in a new top-level directory – Corey Haines used spec-no-rails in his shopping cart reference application. Alternately, you can put domain tests with everything else in the spec directory, with subdirectories like spec/presenters and the like, just have those files load your fast_spec_helper. About a month ago, Corey mentioned on Twitter and GitHub that he had moved his code in this direction.

There are tradeoffs. The separate top-level approach enforces a much stricter split between Rails tests and domain tests – in particular, it makes it easier to run just the domain tests without loading Rails. On the other hand, the directory structure is non-standard, there is a whole ecosystem of testing tools that basically assumes that you have one test directory.
It’s not hard to support multiple spec directories with a few custom rake tasks, though it is a little awkward. Since your Rails objects are never loaded in the domain object test suite, though, it’s very easy to stub them out with dummy classes that are only used by the domain object tests.

As I mentioned, Corey has also shown an example with all the tests under single directory and some namespacing magic. I’m not 100% sure if I like the single top-level better. But I can explain how he got it to work.

With everything being under the same top level directory, it’s easier to run the whole suite, but harder to just run the fast tests (not very hard, just harder). Where it gets weird is when your domain objects reference Rails objects. As mentioned before, even though your domain objects shouldn’t need ActiveRecord features, they may need to reference the name of an ActiveRecord class, often just to call find or save methods. Often, “fast” tests get around this by creating a dummy class with the same name as the ActiveRecord class.

Anyway, if you are running your fast and slow tests together, you’re not really controlling the order of test runs. Specifically, you don’t know if the ActiveRecord version of your class is available when your fast test just wants the dummy version. So you need dummy versions of your ActiveRecord classes that are only available from the fast tests, while the real ActiveRecord objects are always visible from the rest of the test suite.

I think I’m not explaining this well. Let’s say I have an ActiveRecord object called Trip. I’ve taken the logic for purchasing a trip and placed it in a domain object, called PurchaseTripContext. All that’s fine, and I can test PurchaseTripContext in a domain object test without Rails right up until the point where it actually needs to reference the Trip class because it needs to create one.

The thing is, you don’t actually need the entire Trip class to test the PurchaseTripContext logic, you just need something named Trip that you can create, set some attributes on, and save. It’s kind of a fancy mock. And if you just require the existing Trip, then ActiveRecord loads Rails, which is what we are trying to avoid.

There are a few ways to solve this access problem:

If you have a separate spec_fast directory that only runs on its own, then this is easy. You can create just a dummy class called Trip – I make the dummy class a subclass of OpenStruct, which works tolerably well. class Trip < OpenStruct; end.

You could also use regular stub, but there are, I think, two reasons why I found that less helpful. First is that the stubs kind of need to be recreated for each test, whereas a dummy class basically gets declared once. Second, OpenStruct lets you hold on to a little state, which – for me – makes these tests easier to write.

Anyway, if your domain logic tests are mixed into the single spec directory, then the completely separate dummy class doesn’t work – the ActiveRecord class might already be loaded. Worse, you you can’t depend on the ActiveRecord class being there because you’d like to run your domain test standalone without running Rails. You can still create your own dummy Trip class, but it requires a little bit of Ruby module munging, more on that in a second.

If you want to get fancy, you can use some form of dependency injection to make the relationship between TripPurchaseContext and Trip dynamic, and use any old dummy class you want. One warning – it’s common when using low-ceremony dependency injection to make the injected class a parameter of the constructor with a default, as in def initialize(user, trip_class = Trip). That’s fine, but it doesn’t completely solve our testing problem because the use of Trip in the parameter list needs to be resolved at load time, so the constant Trip still needs some value.

Or, you could bite the bullet and bring the Rails stack in to test because of the dependency. For the moment, I reject this out of hand.

This isn’t an exhaustive list; there are any number of increasingly insane inheritance or metaprogramming things on the table. Or under the table.

So, if we choose a more complicated test setup with multiple directories, we get an easy way to specify these dummy classes. If we want the easier single-directory test setup, then we need to do something fancier to make the dummy classes work for the fast tests but be ignored by the Rails-specific tests.

At this point, I’m hoping this makes sense. Okay, the problem is that we want a class to basically have selective visibility. Here’s the solution I’m trying – this is based on a gist that Corey Haines posted a while back. I think I’m filling in the gaps to make this a full solution.

For this to work, we take advantage of the way Ruby looks up class and module names. Ruby class and module names are just like any other Ruby constant. When you refer to a constant that does not have any scope information – like, say, the class name Trip – Ruby first looks in the current module, and only if the current module doesn’t contain the constant does Ruby look in the global scope. (That’s why you sometimes see a constant prefixed with ::, as in ::Trip – the :: forces the lookup to start at the global scope.)

That’s perfect for us, as it allows us to put a Trip class in a module and have it shadow the ActiveRecord Trip class in the global scope. There’s one catch, though – the spec, the domain class, and the dummy object all have to be part of the same local module for them all to use the same dummy class.
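The lookup rule is easy to demonstrate; here’s a toy version (the kind method exists only to show which class wins the lookup):

```ruby
# Top-level class -- in the app, this is the ActiveRecord model.
class Trip
  def self.kind; :active_record; end
end

module Contexts
  # Shadows ::Trip for any code inside this module.
  class Trip
    def self.kind; :dummy; end
  end

  class PurchaseTripContext
    def self.trip_kind
      Trip.kind    # resolves to Contexts::Trip first
    end

    def self.global_trip_kind
      ::Trip.kind  # the :: forces the top-level lookup
    end
  end
end

Contexts::PurchaseTripContext.trip_kind        # => :dummy
Contexts::PurchaseTripContext.global_trip_kind # => :active_record
```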

After some trial and error (lots of error, actually), here’s a way I found that works with both the fast tests and the naming conventions of Rails autoload. I’m not convinced this is the best way, so I’m open to suggestions.

So, after 2000 words of prologue, here is a way to make fast tests run in the same spec directory in the same spec run as your Rails tests.

Step 1: Place all your domain-specific logic classes in sub modules.

I have subdirectories app/travel/presenters, app/travel/roles, and the like, where travel is the name of the Rails application. I’m not in love with the convention of putting all the domain-specific directories at a separate level, but it’s what you need to do in Rails to allow autoloaded classes to be inside a module.

So, my PurchaseTripContext class, for example, lives at app/travel/contexts/purchase_trip_context.rb, and starts out:

module Contexts
  class PurchaseTripContext
    # stuff
  end
end

Step 2: Place your specs in the same module

The spec for this lives at spec/contexts/purchase_trip_context_spec.rb (yes, that’s an inconsistency in the directory structure between the spec and app directories.) The spec also goes inside the module:

module Contexts
  describe PurchaseTripContext do
    it "creates a purchase" do
      #stuff
    end
  end
end

Step 3: Dummy objects

The domain objects are in a module, the specs are in a module, now for the dummy classes. Basically, I just put something like this in my fast_spec_helper.rb file:

module Contexts
  class Trip < OpenStruct; end
  class User < OpenStruct; end
end

This solves the problem, for some definition of “solves” and “problem”. The fast tests see the dummy class, the Rails tests see the Rails class. The tests can be run all together or in any smaller combination. The cost is a little module overhead that’s only slightly off-putting in terms of finding classes. I’m willing to pay that for fast tests. One place this falls down, though, is if more than one of my sub-modules needs dummy classes – each sub-module then needs its own set, which does get a little ugly. I suspect there’s a way to clean that up that I haven’t found yet.

In fact, I wonder if there’s a way to clean up the whole thing. I half expect to post this and have somebody smart come along and tell me I’m over complicating everything – wouldn’t be the first time.

Next up, I’ll talk a little bit about how some of the OO patterns for domain objects work, and how they interact with testing.

Filed under: Rails, testing

Faker 1.0 released

Earlier this week I released version 1.0 of the Faker gem. It’s been about 4 years since the initial release of the gem, and the API has been fairly stable for the last couple of years, so I figured it was a good time to make the jump to 1.0. 🙂

This release finishes the conversion to I18n. Just about everything is in the locale files now, including the ability to define custom formats for everything — company names, street addresses, etc. And, with the magic of method_missing, you can add new items to your locale file and have them show up as methods in the Faker classes.

The 1.0 release also settles some long-standing issues people have had with bad interaction between Faker, Rails 2.3, and locales (especially fallbacks). Though I’m not actively seeking to support Rails 2.3, I at least don’t want it to be broken, so this release should cover that. Both Ruby 1.9.2 and 1.8.7 are fully supported.

Finally, I want to send out a big “thank you” to everyone (and there are a lot of them) who contributed code and ideas to this release. I really appreciate the interest shown and the work done by so many people who use and love Faker. According to rubygems.org, it has been installed over 400,000 times — over 1,000 times in the past few days!

Of course, I’m not done yet… next on the feature list is Faker::Image, which will provide an interface to all those cool fake image generator services out there. 🙂

Testing Advice in Eleven Steps

As it happens, my generic advice on Rails testing hasn’t changed substantially, even though the tools I use on a daily basis have.

  • Any testing tool is better than no testing. Okay, that’s glib. You can make an unholy mess in any tool. You can also write valuable tests in any tool. Focus on the valuable part.

  • If you’ve never tested a Rails application before, I still recommend you start with out of the box stuff: Test::Unit, even fixtures. Because it’s simpler and there’s a very good chance you will be able to get help if you need it.

  • That said, if you are coming into a team that already has a house style to use a different tool, use that one. Again, because you’ll be able to get support from those around you.

  • Whatever tool you choose, the important thing is to write a small test, make it pass with a small piece of code, and refactor. Let the code emerge from the tests. If you do that, you are ahead of the game, no matter what tool you are using.

  • At any given moment, the next test has some chance of costing you time in the short term. The problem is it’s nearly impossible to tell which tests will cost the time. Play the odds, write the test. Over the long haul, the chance that the tests are really the bottleneck is, in my experience, quite small.

  • If you start with the out of the box test experience, you will likely experience some pain points as you test more and more. That’s the time to add new tools, like a mock object package, a factory data package, or a context package. Do it when you have a clear sense that the new complexity will bring value.

  • Some people like the RSpec syntax and, for lack of a better word, culture. Others do not. If you are one of the people who doesn’t like it, don’t use it. Well, try it once. You never know.

  • I go back and forth on whether Test::Unit and RSpec are actually functionally equivalent, and eventually have decided it doesn’t matter. You can write a good test suite in either, and if there is a particular bell or whistle on one of them that attracts you or repels you, go that way.

  • You really should do some kind of full-stack testing, especially once you’ve gotten good at unit testing. But whether it’s the Test::Unit integration testing, the new Capybara syntax, or Steak, or Cucumber, is, again, less important than the idea that you are specifying behavior and automatically verifying that the code matches the specification. Most of what I said about RSpec above also applies to Cucumber.

  • This old joke that was repeated with relish on the XP mailing list circa 2000: “Doctor, it hurts when I do this”. “Then don’t do it”.

  • And last, but not least, buy my book. Or buy Dave’s book. Or Kent Beck’s book. Or hang out on mailing lists. Ask questions on Twitter. If you want to get better at testing, there are all kinds of resources available.

Filed under: testing, Uncategorized

Cucumber Rails 0.4: The De-Web-Step-ining

Consider this part of an occasional series where I attempt to revisit tools discussed in Rails Test Prescriptions that have undergone some revision. (NOTE: Most of this was written before the DHH Twitter-storm about testing this week. For the purposes of this post, I’m choosing to pretend the whole thing didn’t happen.)

The cucumber-rails gem released version 0.4 last week, which had some significant changes, and intensified what we might call the opinionated nature of Cucumber about what a Cucumber scenario should look like.

If you update cucumber-rails, you need to re-run rails generate cucumber:install to see the new stuff.

There are a couple of minor changes — the default env.rb file is much simpler, the capybara date selector steps now work with Rails 3, that kind of thing. The biggest change, though, is conceptual, and comes in two parts.

Part one is best laid out by the new first line of the web_steps.rb file:

# TL;DR: YOU SHOULD DELETE THIS FILE

The header goes on to say that if you make use of these steps you will end up with verbose and brittle cucumber features. Also, your hair will fall out, and you will have seven years bad luck. The last may be more implied than stated.

Why would they do such a thing? And what’s the “official” preferred way to use Cucumber now?

Well, it’s not like the Cucumber dev team has me on speed-dial or anything like that, but since they subtly included in the web_steps.rb file links to, count ‘em, three separate blog posts explaining how to best use Cucumber, I will follow that subtle, yet blazing, trail and try to put it together in some coherent way so that I can understand it.

(Note to Cucumber dev team: if you feel the need to link to this post in future versions of Cucumber, you should consider yourself as having permission to do so….)

Anyway, the Cucumber team is making a very opinionated statement about how to use Cucumber “with the grain”, and I actually don’t think that statement is “don’t use the web_steps” file — I think that some parts of the web_steps file have a place in the Cucumber world.

Here’s the statement as I see it:

  • A Cucumber scenario is an acceptance test.
  • As such, the scenario should completely be in the domain of the user.
  • A Cucumber scenario should not have any reference to implementation details.
  • Implementation details include, but are not limited to: CSS selectors, class names, attribute names, and HTML display text.

As a good rule of thumb, if you are putting something in your Cucumber steps in quotation marks, you should at least think about whether your Cucumber scenario is at a high enough level. In the Cucumber world, the place for implementation-specific details is in the step definition files. If the acceptance criteria changes, the scenario should change, but if the implementation changes, only the step definitions should change.

This sharp separation between the acceptance test and the implementation is a feature, not a bug, in Cucumber (By the way, you do not want bugs in your cucumbers. Yuck.) The separation is what makes Cucumber a true black-box test of your application, and not a black box riddled with holes.

That said, full-stack testing that is based on knowing implementation details — which is “integration testing” rather than “acceptance testing” — is a perfectly valid thing to do, especially in a case where there isn’t an external customer that needs or wants to see the acceptance testing. But, if you are actually doing integration testing, then you don’t need the extra level of indirection that Cucumber offers — you should drop down to Steak, or Rails integration tests, or the new Capybara acceptance test DSL or something.

Okay, so. Acceptance testing is not integration testing, and if you are trying to do integration testing via Cucumber, you will be frustrated, because that’s not what Cucumber is good at. To me, there’s a value in acceptance testing, or in this case, acceptance test driven development, because it’s helpful to try and describe the desired system behavior without any implementation details confusing the issue.

Which brings us back to the question of how you actually replace the web steps in your Cucumber scenarios. Essentially the idea is to replace implementation-based steps with steps that describe behavior more generically. You might have something like this:

Scenario: Updating a user profile
  Given a user named "Noel" with a preference for "Cool Stuff"
  When I go to the edit profile page
  And I fill in "bananas" for "Favorite Food"
  And I select "Comic Books" from "Preferences"
  And I press "Submit"
  Then I should see "Bananas"
  And I should see "Comic Books"

That’s not horrible, because it doesn’t have any explicit CSS or code in it, but it’s still very much about implementation details, such as the exact starting state of the user, the labels in the form, and the details of the output. On the plus side, the only step definition you’d need to write for this is for the first step; every other step is covered by an existing web step. But… I’ve written my share of Cucumber scenarios that look like this, and it’s not the best way to go. It’s hard to tell from this scenario what the most important parts are and what system behavior is actually being described.

The implicit version of the scenario looks more like this:

Scenario: Updating a user profile
  Given I am an existing user with a partially completed profile
  When I go to edit my profile
  And I fill in new preferences
  Then I see my new preferences on my profile page

Two questions to answer: why is this better, and how does it work?

The second question first. We need to write step definitions for all these steps. Normally, I write these in terms of the underlying Capybara or Webrat API rather than calling web steps. The second step doesn’t need a full definition; it just needs an entry for /edit my profile/ in the paths.rb file (right now, it seems like that’s about the only part of the web steps file that the Cucumber team is willing to use). The other three steps need definitions — here’s what they might look like. This might have a typo or syntax jumble; it’s just the basic idea.


Given /^I am an existing user with a partially completed profile$/ do
  @user = Factory(:user)
  @user.profile = Factory(:profile, :preference => "Cool Stuff",
     :favorite_food => nil)
end 

When /^I fill in new preferences$/ do
  fill_in("Favorite Food", :with => "Bananas")
  select("Comic Books", :from => "Preferences")
  click_button("Submit")
end

Then /^I see my new preferences on my profile page$/ do
  with_scope("preference listing") do
    page.should have_selector(selector_for("bananas are my favorite food"))
    page.should have_selector(selector_for("comic books are my preference"))
  end
end

If you are used to Cucumber but haven’t used the 0.4 Rails version yet, the last step will look unfamiliar. Bear with me for a second.

Why is the second version better? It’s not because it’s shorter — it’s a bit longer, although only a bit (the first version would need a step definition for the user step as well). However, the length is split into more manageable chunks. The Cucumber scenario is shorter, and more to the point, each step is more descriptive in terms of what it does and how it fits into the overall scenario. The new step definitions you need to write add a little complexity, but not very much, and my Cucumber experience is that at this size, the complexity of the step definitions is rarely the bottleneck. (For the record, the bottleneck is usually getting the object environment set up, followed by the inevitable point of intersection with implementation details, which is why I’m so keen to minimize intersection with the implementation.)

Yes, the scenario is something you could show a non-developer member of the team, but I also think it’s easier for coders to comprehend, at least in terms of getting across the goals of the system. And this is supposed to be an acceptance test — making the goals of the system explicit is the whole point.

Okay, either you believe me at this point or you don’t. I suspect that some of you look at the step definitions and say “hey, I could string those seven lines of code together and call it a test all by itself”. Again, if that’s what works for you, fine. Any full-stack testing is probably better than no full-stack testing. Try it once, though. For me.

Back to the step definitions: the last one uses the selector_for method — and I hope I’m using it right here, because I haven’t gotten a chance to work with it yet and the docs aren’t totally clear to me. The idea behind selector_for is to be analogous to the path_to method, but instead of being a big long case statement that turns a natural language phrase into a path, it’s a big long case statement that turns a natural language phrase into a CSS selector. The big long case statement lives in a selectors.rb file in the support folder. The with_scope method uses the same big case statement to narrow the statements inside the block to DOM elements within the matched selector.
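For reference, the selectors.rb file is roughly shaped like this – the mappings below are invented for this post, and you should double-check the generated file in your own project for the exact method and module names:

```ruby
# features/support/selectors.rb (sketch)
module HtmlSelectorsHelpers
  # Maps a natural-language phrase to a CSS selector.
  def selector_for(locator)
    case locator
    when "the page"
      "html > body"
    when "preference listing"
      "#preferences"  # hypothetical id on the profile page
    else
      raise "Can't find mapping from \"#{locator}\" to a selector"
    end
  end
end

# In the generated file, Cucumber mixes the module into the step
# context with: World(HtmlSelectorsHelpers)
```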

As with the paths, the idea is to take information that is implementation specific and likely to be duplicated and quarantine it into one particular location. As I said, I haven’t really incorporated this into my Cucumber routine yet, my first thought is that it’ll be nice to hide some of the complex CSS selectors I use in view testing, but I worry that the selectors.rb file will become a mess and that there’s less probability of duplicating a snippet.

I sure wish I had a rousing conclusion to reward you as this post nears the 1750-word mark. I like the direction that these changes are taking Cucumber; they are in line with what I’ve found to be the best use of the tool. Take a chance and try writing tests as implicitly as you can, as an exercise, and see if it works for you.

Filed under: Cucumber, Rails, testing, Uncategorized

Improving Your Methods

I am always looking for ways to improve my efficiency while coding. One of the things that has been bothering me lately is how I run tests. Back in the day, I used autotest. Of late, I have been using watchr. Finally, this week I worked out something that does exactly what I want.

Watchr

Watchr is great at running arbitrary code when files change, but it cannot read my mind. When working on libraries, such as Hunt, Joint, MongoMapper, etc., running tests/specs/features every time a file changes is fine. Heck, MongoMapper’s whole suite runs in like 10 seconds.

You can take it a step further and make watchr even more responsive by having it run only related files. For example, whenever lib/mongo_mapper/plugins/accessible.rb is saved, most likely I want to run the test/functional/test_accessible.rb test (check out MM’s specs.watchr file on GitHub).
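A watchr rule for that kind of mapping might look something like this – the paths follow MongoMapper’s layout, but the actual rules in its specs.watchr file may differ:

```ruby
# specs.watchr -- watchr evaluates this file and calls each block
# with the MatchData when a matching file changes.

# When a plugin file is saved, run its matching functional test.
watch('lib/mongo_mapper/plugins/(.*)\.rb') do |match|
  system "ruby -Itest test/functional/test_#{match[1]}.rb"
end

# When a test file itself is saved, run just that file.
watch('test/.*/test_.*\.rb') do |match|
  system "ruby -Itest #{match[0]}"
end
```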

The Problem

The issue I have had with watchr comes up more when working on big applications, such as Harmony or Words with Friends. Bigger applications have bigger test suites, and the process of automatically determining which tests I want to run when a file changes ranges from difficult to impossible, and changes from moment to moment.

After about a month of the pain that is manually running specific test files, I finally decided to think out what I wanted. The conclusion I came to was that I do not want tests to automatically run when I save a file. Instead, I want it to be easier to run specific tests based on keywords.

The Solution

Since I was working on Harmony at the time and it uses test/unit, I decided to write a script that would grep through the tests to match arguments I pass in and run all of the matches. Below is the script I came up with:

#!/usr/bin/env ruby
require File.dirname(__FILE__) + '/../config/boot'

if ARGV.empty?
  puts <<-EOH
Usage:
  script/test [test match string]

Example:
  script/test site
  script/test site_test
  script/test site_test account_test

  Above would run all tests matching site
  (ie: test/unit/site_test.rb, test/functional/admin/sites_controller_test.rb, etc.)

EOH
  exit
end

tests = Dir.glob(File.dirname(__FILE__) + '/../test/**/*').map do |file|
  # skip non test files
  next unless file.include?('_test')

  # check if any of the inputs match the file
  if ARGV.any? { |match| file =~ /#{match}/ }
    File.expand_path(file).gsub(Rails.root.join('test').to_s, 'test')
  end
end.compact.join(' ')

test_loader = File.expand_path('../test_loader', __FILE__)

$stdout.sync = true
command = "bundle exec ruby -Itest #{test_loader} #{tests}"

puts(command)

IO.popen(command) { |com| com.each_char { |c| print(c) } }

Note that it uses this test_loader, stolen directly from rake.

#!/usr/bin/env ruby

ARGV.each { |f| load f unless f =~ /^-/  }

I dropped both of these files in the script/ directory of Harmony. The cool thing is with not very much code, I can now do the following:

# run all of harmony's tests for liquid filters
# filters are all in a folder test/unit/filters
script/test filters

# same thing but for drops
script/test drops

# run all the item related tests (from controllers, models, filters and drops)
script/test item

# run unit tests for feed and feed template
script/test unit/feed

# run unit test for feed and feed drop test
script/test unit/feed feed_drop

Conclusion

In general I am working on a very specific thing. All I want is to be able to test that very specific thing, on the fly, and very easily. script/test allows me to do that. Oh, and of course I aliased script/test to st, as I hate typing more than 2 characters. 😉

Not sure that this is valuable to anyone else, as it is so specific to how I work, but it is the thought that counts. Also, it could easily be adapted to rspec or cucumber. I can say that it has drastically helped me.

I think the lesson to grep from this post is always be conscious of how you work and when you see a chance for improvement, spend a few minutes to make it happen.

July 26, 2010: A Hammer, A Nail, and A Giant Squid

Book Status

Beta 5 should be out early this week, featuring a mostly new chapter on testing legacy projects, and also updating the code setup and the initial walkthrough chapters to Rails 3. Over the next couple betas any remaining Rails 3 incompatibilities will also be fixed.

Book Reviews

Something new for you on a Monday, a couple of novels that I liked in the last couple of weeks.

Kraken, by China Mieville. I’m a huge Mieville fan, so I was excited for this one.

The story starts when the preserved remains of a giant squid are stolen from a London museum, and the curator of the museum is dragged into a world where an apocalyptic squid cult is one of the least weird things going on. It’s much more loose and jokey than Mieville’s other stuff, something like a cross between Mieville’s (outstanding) YA novel Un Lun Dun, Gaiman’s Neverwhere, with the magical sub-world, and a Tim Powers novel a la Last Call or Expiration Date, filled with supernatural creatures who obey obscure and convoluted supernatural rules.

Overall the book is a lot of fun — Mieville freely calls it a “shaggy god” story, which should give you an idea of the tone. It’s not perfect; it takes forever to get started, and the flip side of loose and jokey is that sometimes Mieville spends a lot of time on characters or conceits that don’t tie in. But there are at least five really outstanding, audacious ideas or moments in the book, and for all that it’s loose verbally, the plot comes together nicely at the end. Plus Mieville unironically uses the phrase “squid pro quo”. So how can you go wrong?

Go Mutants!, by Larry Doyle. Doyle is a former Simpsons writer who has written a satirical mash-up of pretty much every 50’s B movie. I mean all of them. You’ve got your alien wanting to take over the world, a radioactive ape, a woman whose head is on a pan, atomic cars, teenage angst, flying saucers. There’s a lot going on, and if you don’t like one of the jokes, wait about a paragraph and there’s another one coming.

Most of the jokes land, and I liked the book a lot more than I thought I would once it became clear where it was going. It’s got some heart, and some satirical bite. It’s a little too willing to compromise tone for a quick joke to be truly great, but it’s fun, and if you’ve sat through enough MST3K to see a lot of the referenced movies, you’ll probably like it. It’s got some clever alternate-history bits too; for example, the initial alien contact takes place at the Polo Grounds, interrupting the famous “The Giants Win The Pennant” moment.

Links

The Rails Best Practices web site opened up, which is affiliated with the rails-bestpractices gem. Vote for best practices, and they might wind up incorporated in the gem. As I write this, leading the pack is “N+1 Queries” — presumably avoiding them. Don’t see testing yet, though, he said, banging that thing that looks like a nail with that hammer…

Speaking of tools, the Software Craftsmanship North America 2010 conference is October 15 and 16 at the Marriott Chicago O’Hare. Speakers include Dave Astels, Michael Feathers, and “Uncle Bob” Martin.

Aaron Sumner at Everyday Rails has a meta-tutorial of resources for various Rails command line tools, including the Rails command, generators, Rake, and the Unix command line. Take a look if you’d like to get better at navigating from the console.

Tim Bray from OSCON has a true essay on what all of us owe to Perl and to Desperate Perl Hackers. He’s right about what Perl has brought, although it’d still be about my eleventeenth choice of language to use.

I have kind of mixed feelings about this Emma Lindsay post at Thoughtbot’s Giant Robots blog. It’s about taking time in a TDD process to consider the overall design of the code. Which you should do, but which I think is already part of the TDD process. I completely agree with the workflow laid out in the post.

Two quibbles. I disagree with the sentence “Test Driven Development rests on the assumption that you basically know the optimal way to make your tests pass in advance.” — I’ve used TDD many times when I had no idea how the tests were going to pass, it works great in that case. Where TDD falls down is when you don’t know the output of the code — if you know the output, but don’t know the algorithm then TDD works fine. Also, the list of TDD steps doesn’t include a refactoring step, which is where, in practice, most of these design decisions would take place. All that said, writing a quick spike is sometimes needed to figure out the problem you want to solve and the tests you need to write.

Filed under: Perl, Rails, testing

July 19, 2010: Building a Legacy

And Now A Word

The schedule for WindyCityRails 2010 just came out. WindyCityRails is Saturday, Sept, 11 at the Westin Chicago River North.

I will be running the PM tutorial session on “Testing in a Legacy Environment”. I am frequently asked how to start testing on a pre-existing code base with no tests. In this session, we’ll start with a made-up “legacy” code base and discuss techniques for adding tests, fixing bugs, and adding new features in a test-driven way.

I’m excited, and I think it’s going to be a fun and useful session. WindyCityRails is an extremely well done conference, and you all should check it out. There’s an early bird registration price, which is good until August 1st. You can register here.

I hope to see you there.

Book Status

The legacy chapter draft is heading to the editor today. Next up is probably the Rails 3/Devise instructions and tutorial updates. The book is still available for purchase in beta, and for pre-order on Amazon.

Links

Via Corey Haines, here’s an interesting mini-essay on refactoring and cleanup from J. B. Rainsberger on the TDD mailing list.

I’m putting this link here so that I never have to do a Google search for HTML entity definitions ever again. (Via larkware).

Another “Why I Like Ruby” essay, this one from Rob Conery.

Alex Chaffee from Pivotal Labs has an RSpec add-on that shows you the exact location where an expected string differs from the returned value. If you have ever tried to track down string issues in a long string where the spacing turns out to be different 150 characters down the line, this will be a good thing to have around.

James Golick released two gems that help in production deployment of new features. The rollout gem helps limit a particular feature to a subset of users, even allowing for quick de-activation of a feature if needed. As a companion, the degrade gem allows you to automatically remove a feature (or trigger other behavior) when a certain number of errors are triggered.

Filed under: RSpec, Ruby, testing

June 14, 2010, Practice makes less imperfect

Still catching up on links. The PeepOpen review has morphed into a larger IDE/TextMate piece, hoping to finish that today.

Book Status

Still working on the renovated Style chapter, which will probably combine the chapters that are in the current Table of Contents as “Testing Style and Structure”, “Fix Slow Tests”, “Rcov”, and “Help! My Test Is Failing”. The chapter on Legacy testing will remain a separate chapter — I get asked about how to test legacy projects all the time.

What happens at that point kind of depends on where we are on page count — there are two chapters left that are basically unwritten (Selenium, performance testing), and two chapters that are written but need to be brought up to date (Shoulda, RSpec). Probably more information on this line later this week.

Today In Links

Liked this article from Naresh Jain about deliberately practicing TDD on sample problems to get better. Not sure if I’ve mentioned it here, but Project Euler is a great source of sample problems if you are mathematically inclined.

I suppose it was inevitable that somebody would write about Steve Jobs’ presentation style in the wake of the network issues during the iPhone keynote last week. Still, good advice, even if they handwave over the most helpful bit — “an adoring crowd”.

Yehuda posted a short gist about implementing the “acts_as” pattern more simply than is usually done.

Thoughtbot posted a list of the Rails 3 compatibility status of all their open projects. Yay! Most relevant for my immediate purposes, Shoulda has a new release with Rails 3 support and “some dramatic changes”. Though I couldn’t quite see from the history what they meant. More details coming.

In other Rails 3 news, Jhimy Villar has a workaround for a Rails 3 issue affecting Authlogic. I’m seriously considering moving the Rails Test Prescription examples to Devise on the grounds that a) it’s already Rails 3 compatible, b) it seems to have fewer setup steps and c) it seems to stay out of the way a bit more, which is a big plus for my purpose.

Did not mention this last week, but RubyConf X will be Nov 11 – 13 in New Orleans. Never made it to a RubyConf.

Zed Shaw has announced the Mongrel2 project, which is a complete redesign of Mongrel. Not much there yet, but watch this space.

Finally

In an interview with Think Geek (via GeekDad), Jonathan Coulton says that the new album he’s been teasing for a bit will be produced by John Flansburgh of They Might Be Giants. That should be fun.

Filed under: Authlogic, Coulton, Mongrel, Rails 3, Shoulda, Steve Jobs, testing, Yehuda

May 20, 2010: Fontastic

Book Status

Starting to sound repetitive. Still working on the Cuke chapter, this time focusing on cleaning up the parts where I recommend ways to use Cucumber. Still hoping for a beta early next week.

Other things

This week in Yehuda, there’s a very long article about text encodings and what problems they have, and in particular how Ruby’s implementation is shaped by the complicated relationship between Unicode and Japanese.

I’m not completely sure I endorse this mechanism for using models in migrations, but I’ll mention it in case it solves a problem for you.

Jake Scruggs, blogging up a storm, today on using code in interviews. This is something that seems to have come on very quickly as a best practice. In my (admittedly quick) job search in 2007, I was never asked to do this. By 2009, it was pretty common, and I had to do several code samples, either before or during interviews. (For Obtiva, I had to pair program with Dave Hoover. In Python. Which I hadn’t used seriously for about three years.)

Quick note on how to stub paperclip during testing to avoid dependencies on ImageMagick, which seems a noble goal.

Thoughtbot has a very nice article showing the implementation of search functionality.

Last but not least

Google WebFonts. Which seems to be a new, free set of fonts that you can link to from your app and just use. Not a huge selection at the moment, hopefully more coming.

Filed under: ActiveRecord, Font, Google, Rails, Ruby, testing, Unicode, Yehuda

May 3, 2010: Hi, I’m Back

Hey, where were you?

Sorry about that, I spent most of last week running the Obtiva Ruby/Rails/TDD 4-day boot camp training, and I didn’t have time to do this daily catchup. Hey, if you think you need me or somebody like me to come to your company and blather about Ruby and Rails for a few days, contact us at http://www.obtiva.com. It’s fun.

Book Status

Rails test prescriptions: still on sale. Please do go to the forum to talk about what’s there and what’s not there.

Lulu raffle: still open, I think for another day or two.

Meantime, I’ve been working through the Cucumber chapter, and also proofing the mock article that will be in the May Pragazine.

Tab Dump

Several days worth of stuff.

Cucumber 0.7 is out of beta and in the wild. I’m hoping this doesn’t mean too much updating of the chapter I’m in the middle of editing. The big change is a new parser advertised as 50-100 times faster. Which sounds like an outstanding change.

This week in Rails Dispatch, an article outlining the new ActiveRelation/Arel implementation of ActiveRecord for Rails 3.

Thinking in Rails has a nice list of Ruby and Rails podcasts.

This is exactly what I want from a Rails plugin: short, sweet, and solves a problem. In this case, from Ryan Bigg, finding database records by partial date.

I think I’ll probably use this one: a detailed cheat sheet for all things Rails Migration.

A very detailed article on unobtrusive JavaScript that I really need to read more carefully.

The Thoughtbot team shows a nice design retrospective, walking through their process.

A couple of test links:

José Valim gives out some awards for best test suite features.

Will Leinweber tells you what the winning integration test stack looks like.

Bryan Liles at the Smarticus blog also responds to the question of whether you need unit tests and provides a good overview of the TDD process. I think he’s got this right.

Finally

Apparently the Peanuts brand is still worth something, even without daily content, as an 80% stake in the brand rights for Peanuts just sold for $175 million. And if you want a sense of exactly where the pecking order is here, the article casually mentions in the next-to-last paragraph that the rights to Dilbert are also included…

Filed under: cheat sheets, Cucumber, JavaScript, Obtiva, Peanuts, Podcasts, Rails 3, RailsRx, Teaching, testing

April 27, 2010, Now Writing About Cucumbers

Top Story

For me, the top story is still Rails Test Prescriptions on sale, and my discussion yesterday of the raffle for the old Lulu customers.

Book Status

Now re-doing the Cucumber chapter, which was written long enough ago that it didn’t consider tags. Cucumber has had approximately seventy-million releases in the interim, so there’s some writing to do. This is the first chapter where I’m adding Rails 3 setup instructions, which will eventually go everywhere in the book, of course.

Tab Dump

Have to say, RVM support in RubyMine is potentially really cool.

Kent Beck would like to analogize goat farming and software development. I’ve heard worse.

I know you all have been following this story closely, so you’ll be pleased to know that you can now bring your iPad into Israel with impunity. Again, carrying two of them with the roman numerals I to X as wallpaper.

Macworld has released an epub-formatted, iBooks compatible, user guide to the iPad.

Webrat bumped its version to 0.7.1.

I frequently complain that there’s no good visualizer for git repositories. This fork of GitX looks like it comes pretty close, though.

Finally

I’m pretty sure I disagree with some of this article by Josh Clayton talking about integration tests being more useful than unit tests. He’s probably right about integration tests being more useful for ultimate correctness, but that’s not everything that TDD is about. Unit tests are critical for the development process, and writing great code in the moment of development, and for supporting design changes and refactoring. Unit and integration tests have two complementary functions, just because they cover the same code doesn’t mean they are redundant.

Filed under: Cucumber, Git, iPad, Kent Beck, RailsRx, RubyMine, testing, Webrat

April 23, 2010: Still Alive

Top Story

If you think the top story is going to be anything other than the continued launch of Rails Test Prescriptions, well, you probably don’t know me very well. I may not be a marketing genius, but I do know the value of repetition. I mean, if there’s one thing I know, it’s the value of repetition.

Thanks to everybody who made yesterday fun: those of you who bought the book, those of you who blogged or tweeted about the announcement, and anybody who read this. And if you haven’t bought the book yet, well, I’ll repeat myself.

Tab Dump

A couple of quick ones here:

A Ruby Mandelbrot set generator short enough to fit in a tweet.
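Nothing tweet-sized here, but for flavor, a plain-Ruby ASCII sketch of the same idea — iterate z = z² + c and mark the points that stay bounded (dimensions and iteration count are arbitrary choices for the demo):

```ruby
# ASCII Mandelbrot sketch: iterate z = z*z + c for a grid of complex
# points c, and print '*' for points that stay bounded.
def mandelbrot_rows(width: 60, height: 21, max_iter: 30)
  (0...height).map do |row|
    (0...width).map do |col|
      # Map the grid onto roughly real [-2.0, 0.8], imaginary [-1.2, 1.2]
      c = Complex(-2.0 + 2.8 * col / (width - 1),
                  -1.2 + 2.4 * row / (height - 1))
      z = Complex(0, 0)
      bounded = true
      max_iter.times do
        z = z * z + c
        if z.abs > 2     # escaped: this point is not in the set
          bounded = false
          break
        end
      end
      bounded ? '*' : ' '
    end.join
  end
end

puts mandelbrot_rows
```

Golfing that down to 140 characters is left as an exercise.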

Here’s a Ruby library for the TextCaptcha humane and accessible Captcha service. I really hate twisted-image Captchas — the Wrox book even has a minimal implementation of this kind of problem-solving Captcha idea.
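The core of a logic Captcha really is minimal. Here’s a toy sketch in plain Ruby — not the TextCaptcha API, and the question list is made up — just to show the shape of the idea:

```ruby
# Toy logic-captcha sketch: ask a human-solvable question instead of
# serving a twisted image. Not the TextCaptcha library, just the idea.
class LogicCaptcha
  QUESTIONS = [
    ["What is two plus three?",          "5"],
    ["Which is larger, 7 or 12?",        "12"],
    ["Type the first letter of 'ruby'.", "r"]
  ]

  attr_reader :question

  def initialize
    @question, @answer = QUESTIONS.sample
  end

  # Forgiving comparison: ignore case and surrounding whitespace
  def correct?(guess)
    guess.to_s.strip.downcase == @answer
  end
end

captcha = LogicCaptcha.new
puts captcha.question
```

A real implementation would also want to sign or store the expected answer server-side so the form can be verified on submit.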

Git bisect is one of those things you’ll use about once every six months, but when you do, it’ll be totally amazing.

Sarah Allen has some comments on Shannon JJ Behrens’ testing talk. JJ and I worked together about — oy — ten years ago now, where he tried (and temporarily failed) to talk me into switching from Java to Python. I find the idea that both of us are now talking about Ruby testing to be wildly funny.

Also, nobody seems to know exactly why Israel has banned the iPad, but Time magazine sees corruption.

Finally

Things that make me happy: Noted character actor William Atherton is interviewed in the Onion AV club, and had two great things to say about one of my favorite movies, Real Genius.

Anywhere I go in the world now, that movie is as popular most anywhere as Ghostbusters or the Die Hards. It’s amazing, and it has a constant following in college kids. It isn’t something that seems to age.

And:

They popped the popcorn for three months. There was a machine in the studio that did nothing all day long but pop popcorn…Then they took it way out to canyon country and a subdivision that was just being built, and they threw it into this house that they pulled down. It was real old-fashioned stuff. Now they’d do it digitally, I guess, but in those days, you had to pop the dang popcorn and put it in a truck and schlep it out to the valley.

And now I’m smiling.

Filed under: Fractal, Git, Lame Repetition Jokes, RailsRx, Real Genius, Ruby, testing

Rails Rx Standup: April 12, 2010

Top Story

For a while, it looked like the top story was going to be Apple’s new developer Rule 3.3.1, described here by John Gruber. More on that in a second.

But the real top story is the news that Twitter has bought Tweetie, intending to rebrand it as Twitter for iPhone, and dropping the price to a low, low, free. Eventually, it will be the core of Twitter for iPad. Wow.

Tweetie is probably the only case where I actually prefer the iPhone experience to the desktop experience, but I’d also be very sad if Tweetie for Mac was orphaned. (Not least because I just bought the MacHeist bundle in part as a way to get the Tweetie Mac beta sooner…). Later update: Tweetie developer Loren Brichter said on the MacHeist forum that the next Tweetie/Mac beta will come out.

I actually suspect that at least some of the existing iPhone Twitter clients will be able to continue — there’s clearly room in the ecosystem for apps that have much different opinions than Tweetie. It depends on how aggressive Twitter is planning to be. Dropping Tweetie’s price to free strikes me as aggressive, although it may just be that the Twitter team is averse to direct ways of making money.

As for the Apple story, it’s a familiar space. Apple does something — in this case, blocking apps not originally written in C, C++, or Objective-C — that might have a reasonable user or branding component (keeping the iPhone platform free of least-common-denominator cross-platform apps) and takes it just too far for users or developers to be comfortable with it. That’s, of course, an understatement, as a lot of developers are really angry. Gruber’s point about the Kindle apps is good (and was later cited by Steve Jobs), but on the whole, I think this is a bit too far for Apple, or maybe I’m just upset that the door seems to have been slammed on MacRuby apps for iPhone ever being feasible.

Book Update

Still working on the Webrat/Capybara chapter. Describing two tools that are so similar is a real challenge — when there’s a difference, it’s hard to keep clear which tool is under discussion.

Also, it looks likely that I’ll have an article in an upcoming issue of the Pragmatic Magazine. This will probably be based on material from the book, but edited to fit the magazine article format. Probably either factory tools or mocks. Or maybe Ajax testing. Haven’t decided yet.

Tab Dump

Don’t think I’ve mentioned this yet, but here is a cool presentation of RSpec tricks. Some of these don’t work in RSpec 2, though.

While we’re on the presentation kick, here’s a nice intro to Git from James Edward Gray.

If you’ve ever tried to deploy Agile in a hostile environment, then the recent This American Life episode about the General Motors/Toyota NUMMI plant will resonate for you.

And Finally

A comparison of a boatload of Ruby test frameworks, being used in IronRuby to test some .NET code. I admit that I was not familiar with all the frameworks used here.

Filed under: Agile, Apple, Git, RSpec, standup, testing, This American Life, Twitter

Because Gem Names Are Like Domains in the 90’s

One of my favorite parts of every new gem is naming it. The other day, when I was trying to name joint, it occurred to me that I should always check if a gem name is available before I create my project. I did a quick search on RubyGems and discovered it was available.

Last night, I decided I should whip together a tiny gem that allows you to issue a whois command to see if a gem name is taken. Why leave the command line, eh?

Installation

gem install gemwhois

This adds the whois command to gem. Which means usage is pretty fun.

Usage

$ gem whois httparty

   gem name: httparty
     owners: John Nunemaker, Sandro Turriate
       info: Makes http fun! Also, makes consuming restful web services dead easy.
    version: 0.5.2
  downloads: 40714
  
$ gem whois somenonexistantgem

  Gem not found. It will be mine. Oh yes. It will be mine. *sinister laugh*

If the gem is found, you will see some details about the project (maybe you can convince them to hand over rights if they are squatting). If the gem is not found, you will receive a creepy message in the same vein as the RubyGems 404 page.

The Fun Parts

The fun part of this gem came from something I noticed recently: other gems have been adding commands to the gem command. I thought that was interesting, so I did a bit of research. I knew that both gemedit and gemcutter added commands, so I downloaded both from GitHub and began to peruse the source. Turns out it is quite easy.

First, you have to have a rubygems_plugin.rb file in your gem’s lib directory. This is mostly ripped from gemcutter:

if Gem::Version.new(Gem::RubyGemsVersion) >= Gem::Version.new('1.3.6')
  require File.join(File.dirname(__FILE__), 'gemwhois')
end
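That guard works because Gem::Version compares version segments numerically rather than as strings, which matters once a segment hits two digits:

```ruby
require 'rubygems'

# Gem::Version compares each dot-separated segment numerically, so
# 1.3.10 correctly sorts after 1.3.6. A plain string comparison,
# which goes character by character, gets this wrong.
newer = Gem::Version.new('1.3.10')
older = Gem::Version.new('1.3.6')

puts newer >= older        # segment-wise compare: true
puts '1.3.10' >= '1.3.6'   # string compare: false ('1' < '6' at index 4)
```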

Next, you have to create a command. At the time of this post, here is the entirety of the whois command:

require 'rubygems/gemcutter_utilities'

class Gem::Commands::WhoisCommand < Gem::Command
  include Gem::GemcutterUtilities

  def description
    'Perform a whois lookup based on a gem name so you can see if it is available or not'
  end

  def arguments
    "GEM       name of gem"
  end

  def usage
    "#{program_name} GEM"
  end

  def initialize
    super 'whois', description
  end

  def execute
    whois get_one_gem_name
  end

  def whois(gem_name)
    response = rubygems_api_request(:get, "api/v1/gems/#{gem_name}.json") do |request|
      request.set_form_data("gem_name" => gem_name)
    end

    with_response(response) do |resp|
      json = Crack::JSON.parse(resp.body)
      puts <<-STR.unindent

        gem name: #{json['name']}
          owners: #{json['authors']}
            info: #{json['info']}
         version: #{json['version']}
       downloads: #{json['downloads']}

      STR
    end
  end

  def with_response(resp)
    case resp
    when Net::HTTPSuccess
      block_given? ? yield(resp) : say(resp.body)
    else
      if resp.body == 'This rubygem could not be found.'
        puts '','Gem not found. It will be mine. Oh yes. It will be mine. *sinister laugh*',''
      else
        say resp.body
      end
    end
  end
end

The important part is inheriting from Gem::Command. Be sure to require 'rubygems/command_manager' at some point as well. Once you have the rubygems_plugin file and a command created, you simply register the command:

Gem::CommandManager.instance.register_command(:whois)
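For context, here’s the same pattern boiled down to a minimal sketch with a hypothetical `hello` command (not part of gemwhois); `puts` stands in for the `say` helper to keep it self-contained:

```ruby
require 'rubygems/command'
require 'rubygems/command_manager'

# Hypothetical demo command -- not part of gemwhois -- showing the
# minimum needed: subclass Gem::Command, then register the symbol.
class Gem::Commands::HelloCommand < Gem::Command
  def initialize
    # super takes the command name and a one-line summary
    super 'hello', 'Print a greeting (demo command)'
  end

  def execute
    puts 'hello from a custom gem command'
  end
end

Gem::CommandManager.instance.register_command(:hello)
```

Packaged in a gem whose rubygems_plugin.rb requires this file, `gem hello` would then work from the command line.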

The comments and code in RubyGems itself are pretty helpful if you are curious about what you can do.

Testing

The trickier part was testing the command. Obviously, building the gem from gemspec and installing over and over does not a happy tester make. I did a bit of research and found the following testing output helpers and the unindent gem:

module Helpers
  module Output
    def assert_output(expected, &block)
      keep_stdout do |stdout|
        block.call
        if expected.is_a?(Regexp)
          assert_match expected, stdout.string
        else
          assert_equal expected.to_s, stdout.string
        end
      end
    end

    def keep_stdout(&block)
      begin
        orig_stream, $stdout = $stdout, StringIO.new
        block.call($stdout)
      ensure
        s, $stdout = $stdout.string, orig_stream
        s
      end
    end
  end
end

With these little helpers, it was quite easy to set up the command and run it in an automated way:

require 'helper'

class TestGemwhois < Test::Unit::TestCase
  context 'Whois for found gem' do
    setup do
      @gem = 'httparty'
      stub_gem(@gem)
      @command = Gem::Commands::WhoisCommand.new
      @command.handle_options([@gem])
    end

    should "work" do
      output = <<-STR.unindent

        gem name: httparty
          owners: John Nunemaker, Sandro Turriate
            info: Makes http fun! Also, makes consuming restful web services dead easy.
         version: 0.5.2
       downloads: 40707

      STR
      assert_output(output) { @command.execute }
    end
  end
  
  context "Whois for missing gem" do
    setup do
      @gem = 'missing'
      stub_gem(@gem, :status => ["404", "Not Found"])
      @command = Gem::Commands::WhoisCommand.new
      @command.handle_options([@gem])
    end

    should "work" do
      output = <<-STR.unindent

        Gem not found. It will be mine. Oh yes. It will be mine. *sinister laugh*

      STR
      assert_output(output) { @command.execute }
    end
  end
end

The only other piece of the puzzle was using FakeWeb to stub the http responses for the found and missing gems. You can see more on that in the test helper file.

Conclusion

At any rate, the gem is pretty tiny and possibly useless to others, but it was fun. Gave me a chance to play around with testing STDOUT and creating RubyGem commands. Plus, now I know if the gem name I want is available in just a few keystrokes.

Riot: for fast, expressive and focused unit tests

Riot is a new Ruby test framework by Justin Knowlden that focuses on faster testing. Justin was frustrated with his slow-running test suites, despite employing techniques such as using factories and mocks and avoiding database access. He realized that a slow-running suite makes one reluctant to run it or expand it – not good.

With Riot, each test consists of a block that forms a single assertion on the topic of the test, keeping tests focused. Tests run in a specific context, and the setup code runs only once per context, which further speeds up your suite. Unlike some Ruby test frameworks, such as Shoulda, that rely on or are based on Test::Unit, Riot has taken a new approach for the sake of speed. In Justin’s own comparisons, Riot comes out about twice as fast as Test::Unit.

Here’s an example Riot test (from the README):

context "a new user" do
  setup { User.new(:email => 'foo@bar.com') }
  asserts("email address") { topic.email }.equals('foo@bar.com')
end
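To see why one-setup-per-context saves time, here’s a toy illustration in plain Ruby — emphatically not Riot’s actual implementation, just the shape of the idea: the setup block builds the topic once, and every assertion block reuses it.

```ruby
# Toy context runner: setup runs once per context, and each assertion
# is a single block against the shared topic. Illustrative only --
# this is not how Riot is implemented.
class ToyContext
  def initialize(name, &setup)
    @name    = name
    @setup   = setup
    @asserts = []
  end

  def asserts(label, &block)
    @asserts << [label, block]
  end

  def run
    topic = @setup.call                       # one setup per context
    @asserts.map { |label, blk| [label, !!blk.call(topic)] }
  end
end

calls = 0
ctx = ToyContext.new('a new user') { calls += 1; { :email => 'foo@bar.com' } }
ctx.asserts('email address') { |u| u[:email] == 'foo@bar.com' }
ctx.asserts('has an email')  { |u| !u[:email].nil? }

results = ctx.run
results.each { |label, ok| puts "#{label}: #{ok ? 'pass' : 'fail'}" }
puts "setup ran #{calls} time(s)"
```

With a Test::Unit-style runner, setup would have run once per assertion instead.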

Riot’s comprehensive README also includes lots of examples and details on how to modify your Rakefile to run your Riot test suite in different frameworks. The full documentation is available online here.

You can install Riot as a gem from Gemcutter:

sudo gem sources -a http://gemcutter.org
sudo gem install riot

Justin also has a spin-off project called Riot Rails, which includes some Rails-related macros for testing your Ruby on Rails code, and Alex Young has written a JavaScript port of Riot which is worth checking out too. Alex also has his own look at Riot and demonstrates how Riot can reduce redundancy in tests.


Let a human test your app, not (just) unit tests


I’m a big believer in unit testing. We unit test our Rails apps extensively, and we’ve done so for years. On some projects, we do both unit testing and integration testing using Cucumber. I preach unit testing to everyone I can. I’d probably turn down a project if the client wouldn’t let us write tests (though this has never come up, and I don’t think it would be a hard sell).

But for a long time, that’s all I did on my projects. Our clients and users would find the bugs that got past the developers. They were, in effect, our QA testers. (I think a lot of small/agile teams are the same way; based on my experience, I’d be surprised if more than 20% of Rails projects were comprehensively tested by a human.)

This is not right. A good QA tester is worth the surprisingly modest expense.

If I unit test, do I really need to hire a QA tester?

Keep on writing unit tests. But unit tests and human testing are two completely different things. They both aim to increase code quality and decrease bugs, but they do this in different ways.

Developer (unit) testing has three benefits. It:

  • Makes refactoring possible. Don’t even try to refactor a large app without a test suite.
  • Speeds up development. I know there are some haters who deny this, but they’ve either never really given unit testing a chance, or their experience has been 180º different than mine.
  • Eliminates some bugs. Not all, but some.

Human testing has related, but somewhat different, benefits. It:

  • Eliminates other bugs. Unit tests are great for certain categories of bugs, but not for others. When a human walks through an application with the express purpose of making things break, they’re going to find things that developer-written unit tests won’t find.
  • Acts as a “practice run”. Before letting a client, boss, or user see a change, let a QA tester see it. You’d be surprised how many 500 errors and IE incompatibilities you can avoid.
  • Gives you confidence before you deploy. After working with good QA testers, I can’t imagine deploying an app to production without having a QA tester walk through it.
  • Saves you time. If you don’t have a QA role on your project, your developers will be de facto testers. They probably won’t do a good job at this, since they’ll be hoping things succeed (rather than making them fail). And their time is probably more expensive than a good tester’s time.

How to use a QA tester in an agile project

Agile testers should do four things.

First, they should verify or reject each story that is completed. Every time a developer indicates that a feature or bug is completed, whether you use a story tracker or index cards, a QA tester should verify this. Don’t deploy to production until the tester gives it a thumbs-up.

Second, they should do exploratory testing after every deploy. A few minutes clicking around in production can sniff out a lot of potential errors.

Third, they should test edge cases. What happens if a user types in a username that is 300 characters long? What if they try to delete an item that is still processing? What if they upload a PDF file as an avatar? Testers are great at this sort of thing.

Fourth, they should test integrations. Unit tests can’t (and shouldn’t) test multi-step processes. Integration testing tools like Cucumber are OK, but don’t catch everything. Identify the main multi-step processes on your site, and have a human verify them every time they change.

Expect a tester to increase your development costs by 5%-10%. We find that 1 hour of testing for every 6 hours of developer time is a reasonable estimate. Our testers cost about 40% less than our developers. So on a typical invoice, testing services are about 10% of development services.
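The arithmetic behind that 10% figure checks out. As a quick sanity check (the rates here are made up; only the ratios come from the post):

```ruby
# Back-of-envelope check of the "testing is ~10% of development" claim.
# The $100/hour rate is hypothetical; the ratios are from the post.
dev_rate   = 100.0                  # $/hour for a developer (made up)
test_rate  = dev_rate * (1 - 0.40)  # testers cost about 40% less
dev_hours  = 6.0                    # per the 1-to-6 estimate
test_hours = 1.0

ratio = (test_rate * test_hours) / (dev_rate * dev_hours)
puts "testing is #{(ratio * 100).round}% of development cost"
```

Any hourly rate gives the same ratio, since it cancels out: 0.6 × 1 / (1 × 6) = 10%.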

Bill separately for testing. Don’t just roll it into your developer rate. Clients are more likely to object to a 10% increase in your main hourly rate than a separate, lower testing line item.

Finding a good tester

There are two main ways to find a tester.

First, you can train one. Tech-savvy folks who aren’t programmers are a good option. They understand enough to fit in with your development process, but are happy testing and not coding. If you find the right person, they can be testing in no time, and won’t cost a ton of money.

Second, find one that understands agile development. There are plenty of professional testers out there, but most of them do waterfall testing: spend 3 weeks writing test cases, get release from developers, and spend 3 weeks testing. I can say, without hyperbole, that this is how exactly 0% of Rails development projects work. Look for the small number of testers that actually have experience with iterative development, flexible scope, and rapid turnaround. You can sometimes find these people at agile events (conferences or user groups). Otherwise, ask other developers. I found one via referral, and I’ve since referred him to others. This second category will probably be more expensive than the first, but if you want to ship the best code you can, go with this route. Just make sure you avoid a Zompire Dracularius.
