Setting Up Fast No-Rails Tests

The key to fast tests is simple: don’t do slow things.

Warning: this post is a kind of long examination of a problem, namely, how to integrate fast non-Rails tests and slow Rails tests in the same test suite. This may be a problem nobody is having. But having seen a sample of how this might work, I was compelled to try and make it work in my toy app. You’ve been warned, hope you like it.

In a Rails app, “don’t do slow things” largely means “don’t load Rails”. Which means that the application logic that you are testing should be separable from Rails implementation details like, say, ActiveRecord. One way to do that is to start putting application logic in domain objects that use ActiveRecord as an implementation detail for persistence.

By one of those coincidences that aren’t really coincidences, not only does separating logic from persistence give you fast tests, it also gives you more modular, easier to maintain code.

To put that another way, in a truly test-driven process, if the tests are hard to write, that is assumed to be evidence that the code design is flawed. For years, most of the Rails testing community, myself included, have been ignoring the advice of people like Jay Fields and Michael Feathers, who told us that true unit tests don’t touch the database, and we said, “but it is so easy to write a model test in Rails that hits the database, we are sure it will be fine.” And we’ve all, myself included, been stuck with test suites that take way too long to run, wondering how we got there.

Well, if the tests get hard to write or run, we’re supposed to consider the possibility that the code is the issue. In this case, that our code is too entangled with ActiveRecord. Hence, fast tests. And better code.

Anyway, I built a toy app placing logic in domain objects for the Mountain West workshop. In building this, I wanted to try a whole bunch of domain patterns at once: fast tests, DCI, presenters, dependency injection. There are a lot of things that I have to say about messing around with some of the domain object patterns floating around, but first…

Oh. My. God. It is great to be back in a code base where the tests ran so fast that I didn’t have time to lose focus while they ran. It occurred to me that it is really impossible to truly do TDD if the tests don’t run fast, and that means we probably have a whole generation of Rails programmers who have never done TDD, who only know tests as the multi-minute slog they need to get through to check in their code, and who don’t know how much fun fast TDD is.

Okay, at some unspecified future point, I’ll talk about some of the other patterns. Right now, I want to talk about fast tests, and some ideas about how to make them run. While the basic idea of “don’t do slow things” is not hard, there are some logistical issues about managing Rails-stack and non-Rails-stack tests in the same code base that are non-obvious. Or at least they weren’t obvious to me.

One issue is file logistics. Basically, in order to run tests without Rails, you just don’t load Rails. In a typical Rails/RSpec setup, that means not requiring spec_helper into the test file. However, even without spec_helper, you still need some of the same functionality.

For instance, you still need to load code into your tests. This is easy enough: where spec_helper loaded Rails and triggered Rails autoloading, you now just explicitly require the files that you need for each spec file. If your classes are really distributing responsibility, you should only need to require the actual class under test and maybe one or two others. I also create a fast_spec_helper.rb file, which starts like this:

$: << File.expand_path("app")
require 'pry'
require 'awesome_print'

Pry and Awesome Print are there because they are useful in troubleshooting; the addition to the load path is purely a convenience when requiring my domain classes.

There is another problem, which is that your domain classes still need to reference Rails and ActiveRecord classes. This is a little messier.
I hope it’s clear why this is a problem – even if you are separating domain logic from Rails, the two layers still need to interact, even if only at the load/save level. So your non-Rails tests and the code they call may still reference ActiveRecord objects, and you need your tests not to blow up when that happens. Ideally, you don’t want the tests to load Rails either, since that defeats the purpose of a fast test.

Okay, so you need a structure for fast tests that allows you to load the code you need, and reference the names of ActiveRecord objects without loading Rails itself.

Very broadly speaking, there are two strategies for structuring fast tests. You can put your domain tests in a new top-level directory – Corey Haines used spec-no-rails in his shopping cart reference application. Alternatively, you can put domain tests in the spec directory with everything else, in subdirectories like spec/presenters and the like, and just have those files load your fast_spec_helper. About a month ago, Corey mentioned on Twitter and GitHub that he had moved his code in this direction.

There are tradeoffs. The separate top-level approach enforces a much stricter split between Rails tests and domain tests – in particular, it makes it easier to run just the domain tests without loading Rails. On the other hand, the directory structure is non-standard, and there is a whole ecosystem of testing tools that basically assumes you have one test directory.
It’s not hard to support multiple spec directories with a few custom rake tasks, though it is a little awkward. And since your Rails objects are never loaded in the domain object test suite, it’s very easy to stub them out with dummy classes that are only used by the domain object tests.
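
For what it’s worth, here’s a sketch of what those custom rake tasks might look like under RSpec 2 – the task names and the spec_fast directory name are my invention, not Corey’s:

require 'rspec/core/rake_task'

RSpec::Core::RakeTask.new(:fast) do |t|
  t.pattern = 'spec_fast/**/*_spec.rb'   # the no-Rails specs
end

# Assumes the standard rspec-rails :spec task is also defined.
task :all_specs => [:fast, :spec]

Since each task shells out to its own rspec process, the fast suite never loads Rails even when you run everything together.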

As I mentioned, Corey has also shown an example with all the tests under a single directory and some namespacing magic. I’m not 100% sure if I like the single top-level directory better. But I can explain how he got it to work.

With everything under the same top-level directory, it’s easier to run the whole suite, but harder to run just the fast tests (not very hard, just harder). Where it gets weird is when your domain objects reference Rails objects. As mentioned before, even though your domain objects shouldn’t need ActiveRecord features, they may need to reference the name of an ActiveRecord class, often just to call find or save methods. Often, “fast” tests get around this by creating a dummy class with the same name as the ActiveRecord class.

Anyway, if you are running your fast and slow tests together, you’re not really controlling the order of test runs. Specifically, you don’t know if the ActiveRecord version of your class is available when your fast test just wants the dummy version. So you need dummy versions of your ActiveRecord classes that are only available from the fast tests, while the real ActiveRecord objects are always visible from the rest of the test suite.

I think I’m not explaining this well. Let’s say I have an ActiveRecord object called Trip. I’ve taken the logic for purchasing a trip and placed it in a domain object, called PurchaseTripContext. All that’s fine, and I can test PurchaseTripContext in a domain object test without Rails right up until the point where it actually needs to reference the Trip class because it needs to create one.

The thing is, you don’t actually need the entire Trip class to test the PurchaseTripContext logic; you just need something named Trip that you can create, set some attributes on, and save. It’s kind of a fancy mock. And if you just require the existing Trip, you pull in ActiveRecord, and with it Rails, which is what we are trying to avoid.

There are a few ways to solve this access problem:

If you have a separate spec_fast directory that only runs on its own, then this is easy. You can create just a dummy class called Trip – I make the dummy class a subclass of OpenStruct, which works tolerably well:

class Trip < OpenStruct; end

You could also use regular stubs, but there are, I think, two reasons why I found that less helpful. First, the stubs kind of need to be recreated for each test, whereas a dummy class basically gets declared once. Second, OpenStruct lets you hold on to a little state, which – for me – makes these tests easier to write.
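
To make that concrete, here is roughly what such a fast spec can look like. The purchase behavior and the require path are invented for the example:

require 'ostruct'
require 'purchase_trip_context'    # path illustrative

class Trip < OpenStruct; end       # the dummy, standing in for the ActiveRecord model

describe PurchaseTripContext do
  it "marks the trip as purchased" do
    user = OpenStruct.new(name: "Pat")
    trip = PurchaseTripContext.new(user).purchase
    trip.status.should == "purchased"   # the OpenStruct dummy happily holds state
  end
end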

Anyway, if your domain logic tests are mixed into the single spec directory, then the completely separate dummy class doesn’t work – the ActiveRecord class might already be loaded. Worse, you can’t depend on the ActiveRecord class being there, because you’d like to be able to run your domain test standalone without loading Rails. You can still create your own dummy Trip class, but it requires a little bit of Ruby module munging – more on that in a second.

If you want to get fancy, you can use some form of dependency injection to make the relationship between PurchaseTripContext and Trip dynamic, and use any old dummy class you want. One warning – it’s common when using low-ceremony dependency injection to make the injected class a parameter of the constructor with a default, as in def initialize(user, trip_class = Trip). That’s fine, but it doesn’t make the Trip constant completely optional – Ruby evaluates the default at call time, so any spec or code path that omits the argument still needs the constant Trip to resolve to something.
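
Here is a sketch of that injection style – the purchase method and FakeTrip are invented for illustration:

class PurchaseTripContext
  def initialize(user, trip_class = Trip)
    @user = user
    @trip_class = trip_class   # the default only resolves if the caller omits the argument
  end

  def purchase
    trip = @trip_class.new
    trip.purchaser = @user
    trip.status = "purchased"
    trip.save
    trip
  end
end

# In a fast spec, inject the dummy and the real Trip is never looked up:
#   PurchaseTripContext.new(user, FakeTrip).purchase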

Or, you could bite the bullet and bring the Rails stack in to test because of the dependency. For the moment, I reject this out of hand.

This isn’t an exhaustive list, there are any number of increasingly insane inheritance or metaprogramming things on the table. Or under the table.

So, if we choose a more complicated test setup with multiple directories, we get an easy way to specify these dummy classes. If we want the easier single-directory test setup, then we need to do something fancier to make the dummy classes work for the fast tests but be ignored by the Rails-specific tests.

At this point, I’m hoping this makes sense. Okay, the problem is that we want a class to basically have selective visibility. Here’s the solution I’m trying – this is based on a gist that Corey Haines posted a while back. I think I’m filling in the gaps to make this a full solution.

For this to work, we take advantage of a quirk in the way Ruby looks up class and module names. Ruby class and module names are just like any other Ruby constant. When you refer to a constant that does not have any scope information, like, say, the class name Trip, Ruby first looks in the current module, but if the current module doesn’t contain the class, then Ruby looks in the global scope. (That’s why you sometimes see a constant prefixed with ::, as in ::Trip – the :: forces a global lookup first.)

That’s perfect for us, as it allows us to put a Trip class in a module and have it shadow the ActiveRecord Trip class in the global scope. There’s one catch, though – the spec, the domain class, and the dummy object all have to be part of the same local module for them all to use the same dummy class.
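
Here is that lookup rule in miniature – none of this is app code, it is just the behavior we are leaning on:

class Trip
  def self.origin; "global"; end
end

module Contexts
  class Trip
    def self.origin; "module-local"; end
  end

  Trip.origin    # => "module-local" – the module-local Trip shadows the global one
  ::Trip.origin  # => "global" – the :: prefix skips straight to the top level
end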

After some trial and error (lots of error, actually), here’s a way that I found which works with both the fast tests and the naming conventions of Rails autoload. I’m not convinced this is the best way, so I’m open to suggestions.

So, after 2000 words of prologue, here is a way to make fast tests run in the same spec directory, and in the same spec run, as your Rails tests.

Step 1: Place all your domain-specific logic classes in sub-modules.

I have subdirectories app/travel/presenters, app/travel/roles, and the like, where travel is the name of the Rails application. I’m not in love with the convention of putting all the domain-specific directories at a separate level, but it’s what you need to do in Rails to allow autoloaded classes to be inside a module.

So, my PurchaseTripContext class, for example, lives at app/travel/contexts/purchase_trip_context.rb, and starts out:

module Contexts
  class PurchaseTripContext
    # stuff
  end
end

Step 2: Place your specs in the same module

The spec for this lives at spec/contexts/purchase_trip_context_spec.rb (yes, that’s an inconsistency in the directory structure between the spec and app directories.) The spec also goes inside the module:

module Contexts
  describe PurchaseTripContext do
    it "creates a purchase" do
      #stuff
    end
  end
end

Step 3: Dummy objects

The domain objects are in a module, the specs are in a module, now for the dummy classes. Basically, I just put something like this in my fast_spec_helper.rb file:

require 'ostruct'

module Contexts
  class Trip < OpenStruct; end
  class User < OpenStruct; end
end

This solves the problem, for some definition of “solves” and “problem”. The fast tests see the dummy class, the Rails tests see the Rails class. The tests can be run all together or in any smaller combination. The cost is a little module overhead that’s only slightly off-putting in terms of finding classes. I’m willing to pay that for fast tests. One place this falls down, though, is if more than one of my sub-modules needs dummy classes – each sub-module then needs its own set, which does get a little ugly, as the sketch below shows. I suspect there’s a way to clean that up that I haven’t found yet.
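
To make the ugliness concrete – Presenters here is just a hypothetical second sub-module – fast_spec_helper ends up with parallel blocks like:

module Contexts
  class Trip < OpenStruct; end
  class User < OpenStruct; end
end

module Presenters
  class Trip < OpenStruct; end   # the same dummies, declared again for this namespace
  class User < OpenStruct; end
end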

In fact, I wonder if there’s a way to clean up the whole thing. I half expect to post this and have somebody smart come along and tell me I’m over complicating everything – wouldn’t be the first time.

Next up, I’ll talk a little bit about how some of the OO patterns for domain objects work, and how they interact with testing.

Filed under: Rails, testing

Self-assessment

Here’s what I’ve got.

2 chapters introducing jQuery and Jasmine via a walkthrough of a simple piece of JavaScript functionality.

1 need to convert all my text from its current proprietary format to something more Markdown-based.

1 genuinely silly conceit tying together the application that gets built in the book. And I mean that in the best way. It should be silly, there’s no reason not to be bold. There is even a twist ending. I think.

1 slightly dusty self-publishing tool chain that converts a directory of Markdown files into HTML, with syntax-colored code. It’s possible that there’s a better library for some of the features these days.

1 chapter on converting that simple piece of jQuery into various patterns of JavaScript object. I quite like this one, actually.

1 website, which is currently hosted by WordPress – at one point, I had to abandon the railsrx.com site that actually sold stuff, and WordPress was easy. I think I’ll need to upgrade that a bit.

1 Intro chapter covering JavaScript basics and the Chrome developer tools. Not sure if this is at the right level for the audience I expect.

1 Prince XML license for converting said HTML files into PDF. No idea if that’s still the best tool for the job. Or even if my license is current.

1 chapter on building a marginally complex auto complete widget in jQuery and Jasmine. I like this example.

1 copy of most of the book’s JavaScript code in CoffeeScript. Not sure when I thought this was the right idea for the book, beyond an excuse to use CoffeeScript.

1 chapter on jQuery and Ajax.

0 toolchains for generating epub and mobi files. I know I can find this.

1 case of impostor’s syndrome, not helped by rereading the harsh review of Rails Test Prescriptions on Amazon. That was dumb, why would I do that?

1 chapter on using JSON. As far as I can remember, this chapter never went to edit.

3 people who mentioned on Twitter that they’d buy a self-published book. Don’t worry, I won’t hold you to it.

1 plan for writing two or three chapters on Backbone.js.

5 people who reviewed the last version who I feel should get free copies when this comes out. It’s not their fault.

4 viewings of Ze Frank’s “Invocation for Beginnings”.

So. Ready to go. Watch this space.

Filed under: JavaScript, Me, self promotion

A Brief Announcement About A Book

So… The JavaScript book that I had contracted to do with Pragmatic will no longer be published by them.

I need to be careful as I write about this. I don’t want to be defensive – I’m proud of the work I did, and I like the book I was working on. But I don’t want to be negative either. Everybody that I worked with at Pragmatic was generous with their time and sincere in their enthusiasm for the project. Sometimes it doesn’t work out, despite the best intentions.

I haven’t spoken about this project publicly in a while because it was so up in the air. And also because I’m not sure what to say about it without sounding whiny or mean. And also because I was afraid of jinxing things, which is obviously less of an issue now.

Since November, the book has been in review and I’ve gone through a few cycles with Pragmatic trying to get things just right. The issues had more to do with the structure and presentation of the material than with the content or writing itself. I’m not completely sure what happened, but I think it’s fair to say that the book I was writing did not match the book they wanted in some way or another.

Anyway, that’s all water under the bridge. I have full rights to the text I’ve already produced. Self-publishing is clearly an option, though the phrase “sunk costs” keeps echoing in my head. It’s hard to resist the irony of starting with a Pragmatic contract and moving to self-publishing after having done it the other way around with Rails Test Prescriptions. I’m hoping to blog more – in addition to being a time sink, not being able to write about this book was kind of getting in my head about any blogging.

Thanks for listening, and watch this space for further announcements. I was excited about this project, and while this is disappointing, I’ll be excited about it again in a few days. Thanks to the people I worked with at Pragmatic for the shot, and thanks to all the people who have been supportive of this project.

Filed under: JavaScript, Me, self promotion

Control Your Development Environment And Never Burn Another Hamburger

Everything I know about the world of fine dining I know from watching Top Chef and from eating at Five Guys. But I do know this: chefs have the concept of mise en place (which does not mean Mice In Place), which is the idea that everything the chef is going to need to prepare the food is done ahead of time and laid out for easy access. The advantages of having a good mise en place include time savings from getting common prep done together, ease of putting together meals once the prep is done, ease of cleanup. Again, I have no idea what I’m talking about, but it seems like the quality and care that a chef puts into their mise en place has a direct and significant benefit on how well they are able to do their job.

You probably know where I’m going with this, but I’m becoming increasingly convinced that one of the biggest differences between expert and novice developers is control over the environment and tools they use. I came to this the hard way – for a long time I was horrible about command lines; it was actually something that kept me away from Rails for a little bit. It’s not like I’m Mr. Showoff Bash Script guy these days, but I know what I need to know to make my life easier.

Let me put it another way. Once upon a time I read a lot of human factors kind of materials. I don’t remember where I read it, but I read once about an expert short-order cook at a burger joint. His trick was that he would continually shift the burgers to the right as he cooked them, such that by the time they got to the end of the griddle, they were done. Great trick (though admittedly you need to be a bit of an expert to pull it off). Not only did the cook know when things were done without continually checking, but it was easy to put on a burger that needed a different amount of doneness simply by starting it farther left along the griddle.

What does that mean?

The name of the game in being an expert developer is reducing cognitive load.

Try and list all the things you need to keep in your head to get through your day. Think about all the parts of that you could offload onto your environment, so that you can see them at a glance, like the short order cook, and not have to check. What are the repetitive tasks that you do that can be made easier by your environment? What can you do so that your attention and short-term memory, which are precious and limited, are focused on the important parts of your problem, and not on remembering the exact syntax of some git command?

This is not about which editor you use, but it is about picking an editor because you like it and understand how to customize it, not because all the cool kids use it. If you are going to use Vim, really learn how to use Vim – it’s not that hard. But if you use Vim, and don’t learn the Vim features that actually make it useful, then it’s not helping you, and it’s probably hurting you. Is Vim (or TextMate, or whatever) making your life easier or not? Vim drives me nuts, but I’ve seen people fly supersonically on it.

I’m getting a little cranky about seeing people’s environments – I’m not normally a big You’re Doing It Wrong kind of guy, but, well, I’m getting a little bit cranky.

If you’re doing web development, there are probably three things that you care about at any given time: your text editor, a command line, and a web browser. Every man, woman, and child developer at my fine corporation has a laptop along with a second monitor that is larger than a decent surfboard. If you can’t see at least two of those three things at all times, try and figure out how much time you spend flipping between windows that you could be seeing together. If you are running Vim in a terminal in such a way that you can never see an editor and a command line at the same time, I think you can probably do better.

If you use Git, for the love of God, please put your git branch and status in your shell prompt. RVM status, too. And it’s not hard to add autocompletion of Git branches. Or, go all the way to zsh, which has fantastic autocompletion. Again, reducing cognitive load – you can see your environment without checking, you can type complex things without knowing all the syntax.

Use a clipboard history tool. I use Alfred, which is generally awesome as a launcher and stuff, but it’s not the only one.

Use some tool that converts frequently used text to shorter, easier-to-remember text. Shell aliases do this; I also use TextExpander, which has the advantage that TextExpander shortcuts are usable in SSH sessions.

The thing is, I don’t know what the most important advice is for you. You need to be aware of what you are doing, and strangely, lower your tolerance to your own pain. What do you do all the time that is harder than it needs to be? Is there a tool that makes it easier or more visible? How hard would it be to create or customize something? Are you the cook who can look at the griddle and know exactly when everything will be done, or are you the guy constantly flipping burgers to check?

Filed under: Uncategorized

iaWriter and iCloud, You Know, In The Cloud

If I don’t write about iOS editors every few months, then it’s harder for me to justify continuing to mess around with them…

The thing that’s changed my editor use in the last couple of months is iaWriter Mac and iOS adding iCloud support, even more deeply integrated than Apple’s own applications. iaWriter is the first writing program I use to have moved to the iCloud future (though there are some games and other programs that also sync via iCloud already).

At a technical level, the integration is fantastic. In iaWriter, iCloud shows up as a storage location, on par with internal iPad and Dropbox storage. If you are just using the iPad version then there is not much difference between iCloud and Dropbox. iCloud saves automatically, but Dropbox lets you use subfolders. (As a side note, iaWriter has improved its Dropbox sync from “show-stoppingly bad” to “works with a couple of annoyances”, the main annoyance being that it doesn’t remember your place in the Dropbox file hierarchy.)

Where the iCloud thing gets really cool is if you are running iaWriter on both iPad and Mac. On iaWriter Mac, you get a command in the file menu for iCloud, which has a sublisting of all the files iaWriter is managing in iCloud, along with commands to move the current file to or from iCloud.

When you make a change to an iCloud file (on the Mac side, an explicit save; on the iPad side, an automatic local save), it is automatically sent to iCloud and pushed to the other side. No different from Dropbox, you say. True, except that the iCloud sync behaves much better if a file is simultaneously open in both apps. The changes just appear in the other app. You can put the iPad and Mac next to each other and go back and forth between the two with only a very slight pause while they sync up.

I haven’t quite gotten that level of integration from Dropbox. In particular, if a Lion-aware app has Dropbox change the file behind its back, the original Mac file continues to be displayed with a filename indicating that it is a deleted version. You then need to close the Mac file and reopen it. I’m not sure I’ve seen an iOS editor that polls Dropbox for changes, though one of the auto-sync ones (Elements, WriteRoom) might.

This may seem esoteric, but since I tend to have several blog posts in progress in open windows on my laptop, I do wind up regularly using the iPad to edit an open file. The iaWriter iCloud sync is noticeably less annoying.

It’s not all sweetness and light, especially if you are a really heavy creator of text files. There is no such thing as a folder in iCloud land, which will eventually become an organizational problem. Worse, there’s an implied lock-in to using iCloud that seems to miss the point of using text files in the first place.

When you move a file to iCloud from the Mac, it moves the file to the iCloud hidden directory, which I think is somewhere in the library directory. Although it doesn’t technically vanish from your hard drive – if you can find the file, you can open it in another application (for what it’s worth, the Alfred launcher can find the files), the clear intent is that the file is in a place not to be touched by applications other than iaWriter.

On the iPad side, the situation is worse. If a file is in iaWriter’s iCloud storage, then no other iPad app can see it. (To be fair, it is relatively easy for iaWriter to move a file from iCloud to Dropbox from either device.) I don’t know if sharing files between applications will be possible when more applications support iCloud, or whether iCloud is strictly sandboxed.

And hey, for a lot of things, this limitation isn’t an issue. If you are using a tool with its own format, then it is less of an issue that other applications can’t see it. Even with something like text, if you aren’t the kind of crazy that needs to open a file in a gajillion different editors, you are probably okay. If you are using text specifically because it’s easy to move around between different programs, and you have a workflow where a file will commonly be touched by different apps, then iCloud is going to get in your way a little.

As for me, the iCloud support has made me use iaWriter more often for blogs and short notes. (Though I still use Textastic for more structured stuff on iPad.) I always liked iaWriter, but for a while it was just really bad at sync compared to other iOS editors. So, despite some quibbles about what happens in iCloud when I have dozens of files that I want to share among different apps, right now, the sync is good enough to make it valuable.

Filed under: Uncategorized

Getting Back to Smalltalk

This week I wound up trying to put together a sample “real-world” problem suitable for use in an automated web thing aimed at potential new hires. Of course, any actual “real-world” problem would have too many extraneous details to make it suitable given the constraints of the web thing, but trying to create the illusion of real-world-ness was kind of fun.

Since this seemed like sort of a weird way to spend a day or two, I decided to make it weirder by implementing both my solution to the problem and the code to generate test data in a language I don’t use every day. Just for the heck of it, I picked Smalltalk (specifically, Pharo), the first time I’ve written Smalltalk code in anger for… hmm. Let’s see… I wrote the intro chapter to this in about 2000, and even then it had been a couple of years since I really wrote something in Smalltalk. (For the record, my other choice was Clojure, but I seem to have some weird mental block about getting started on a Clojure project. Someday soon, though.)

Anyway, Smalltalk was interesting because the environment is so different from anything else going, and because I really haven’t used it in so long, I was curious to see whether my reactions to it would be any different after all this time. For example, the last time I worked in Smalltalk, SUnit and TDD weren’t a thing.

Overall, successful experience. I was able to actually write the program. It probably took me a little longer than it would have in Ruby, just because I was a bit rusty and the Pharo UI was a little unfamiliar. But it works, and most of the code isn’t bad – it got a little ragged toward the end. Here are a few random notes about the experience:

Language Structure

Whatever the line between a “scripting” language and a “normal” language is, Smalltalk is on the “not a scripting language” side of it. Smalltalk is very much a purist language with a small, consistent syntax, and not a lot of the sugar that you get used to in the Ruby/Python/Perl neighborhoods. Method names tend to be longish, and although the environment encourages short methods, basically there’s more typing in Smalltalk than in the equivalent Ruby.

Smalltalk was created almost completely outside the C/C++/Java family of programming languages, which combined with its pure structure makes it feel somewhat alien. And I say that as somebody who likes Smalltalk and genuinely enjoyed working in the environment.

For example:
* The first element of an array is index 1, not index 0. Which, honestly, makes more sense.
* Smalltalk doesn’t have operator precedence and evaluates strictly left-to-right, so a mathematical expression like 2 + 3 * 4 gives you 20 rather than the 14 you would get in, say, every other programming language ever.
* Smalltalk doesn’t use dot-notation for method calls; instead, the dot is a statement separator. It has keyword arguments, as later borrowed by Objective-C. Strings use single quotes; double-quoted strings are comments.

Then you get other things like all the boolean and loop logic being managed by the library rather than the syntax. Really, the only thing the syntax manages in Smalltalk is order of operations and a very small amount of literal syntax. There’s a rich set of collection classes, and the names and classes are just slightly off from what you are used to. Takes some immersion to get the feel.

Workflow

The other really unfamiliar thing about Smalltalk is the environment and workflow. Smalltalk source isn’t normally kept in files. Instead, you run a binary image of all the code in the Smalltalk environment. If I’m remembering correctly, a Smalltalker working on multiple projects would have a separate image for each project. (There are mechanisms for teams working together, but I don’t think they are very standard, and I’m not all that familiar with them.)

The thing about Smalltalk is that you are always inside the environment. The idea, for example, that Ruby is super-flexible in what it allows you to do at run-time is a trivial point in Smalltalk. Everything is at run-time, because the phone call is coming from inside the house, so to speak. Of course you can create classes and methods at run-time. If you couldn’t, the whole system would be unworkable.

The image contains your code, the entire Smalltalk system, and the state of all the objects therein. Smalltalk typically also has mechanisms for export/import of source, and for internal version control. I think it’s fair to say that dealing with the image is at best counterintuitive when you are used to dealing with text editors and files.

Your main interface with the Smalltalk system is the code browser, which allows you to view any class and method in the system. One method at a time. Quick Quiz – what’s the Smalltalk keyword to define a method? Trick question. Smalltalk doesn’t have one. Smalltalk just knows that the things you type into the code browser are methods, and the first line of a method is its name. Code browsers are very neat, but again, using them effectively is a skill – a big step is realizing you can have more than one of them open at a time.

What’s interesting to me are the places where the Smalltalk environment is still ahead of my regular code environments, and where it’s behind. On the plus side:

  • When you save a Smalltalk method, the code browser verifies that it’s syntactically correct before allowing the save. If you are calling a method or using a variable that doesn’t exist, you have to okay it – in some circumstances, you even have the ability to create the method or variable as part of the save process.
  • The browser has easy access to all old versions of a method that you are looking at.
  • It’s very easy to see places that might call the method you are writing.
  • The browser also has some powerful refactoring functionality built in.
  • Integration is amazing, since everything is running. To give one example, the code browser knows which methods are SUnit tests, and they display in the browser with a red/yellow/green/gray indicator of their current state (error/failure/success/not run). Individual tests can be run separately easily in the browser, and since they environment is already loaded, the tests run in an eye blink.
  • On a related note, the debugger is famously awesome. From a breakpoint, you can see the state of all variables, edit values, edit code, all nice and easy.
  • You also can create workspaces, which are just blank code windows that you can execute arbitrary snippets in, like a iRb loop, but a bit more flexible.

But there are some missing pieces:

  • The code editor itself isn’t as powerful as you are used to if you’ve been using Vim or TextMate or, hell, anything. I’m a big “typing is not the bottleneck” kind of guy, but still, I would have loved some macros or snippets or something.
  • The one-method on display per code browser limitation is, well, limiting. You can have multiple code browsers open, but that takes up a lot of real estate. Tabbed browsers, split windows, that kind of thing would be kind of nice.
  • Intellectually I know that it’s really hard to lose changes in a Smalltalk image, but it still feels fragile to me not to have files readable by an external tool.

One interesting feature of the browser based editing environment is that it’s very easy to create a new method that is based on an existing method. You don’t even have to cut and paste, just browse to the existing method, change its name and save. Smalltalk will create a new method based on the new name.

This is very nice from a getting code in the system standpoint, but it seems to have the side effect of encouraging more duplication in the system than I might be otherwise comfortable with. The effect is confounded by the fact that you can only see one method at a time in the browser, making it difficult to scan for duplication.

What’s not getting across here is how smooth the workflow in the small is when you really get the rhythm going. It’s an environment that really suits a “make a small change, evaluate the change” process. In some ways, it’s not surprising that the test-first movement would have gotten serious momentum in Smalltalk, test-first is just a formalization of the way a good Smalltalk programmer interacts with the system. Plus, I’m a purist in so many ways, and even though the code takes a little getting used to, I like the way it turns out.

So, successful test. Would try again. I still want to try something with Seaside one of these days just to see what that’s like.

Filed under: Smalltalk

Getting Back to Smalltalk

This week I wound up trying to put together a sample “real-world” problem suitable for use in an automated web thing aimed at potential new hires. Of course, any actual “real-world” problem would have too many extraneous details to make it suitable given the constraints of the web thing, but trying to create the illusion of real-world-ness was kind of fun.

Since this seemed like sort of a weird way to spend a day or two, I decided to make it weirder by implementing both my solution to the problem and the code to generate test data in a language I don’t use every day. Just for the heck of it, I picked Smalltalk (Specifically, Pharo), the first time I’ve written Smalltalk code in anger for… hmm. Let’s see… I wrote the intro chapter to this in about 2000, and even then it had been a couple of years since I really wrote something in Smalltalk. (For the record, my other choice was Clojure, but I seem to have some weird mental block about getting started on a Clojure project. Someday soon, though.)

Anyway, Smalltalk was interesting both because the environment is so different from anything else going and because I really haven’t used it in so long. I was curious to see whether my reactions to the Smalltalk environment would be any different given how long it’s been. For example, the last time I worked in Smalltalk, SUnit and TDD weren’t a thing.

Overall, successful experience. I was able to actually write the program. It probably took me a little longer than it would have in Ruby, just because I was a bit rusty and the Pharo UI was a little unfamiliar. But it works, and most of the code isn’t bad – it got a little ragged toward the end. Here are a few random notes about the experience:

Language Structure

Whatever the line between a “scripting” language and a “normal” language is, Smalltalk is on the “not a scripting language” side of it. Smalltalk is very much a purist language with a small, consistent syntax, and not a lot of the sugar that you get used to in the Ruby/Python/Perl neighborhoods. Method names tend to be longish, and although the environment encourages short methods, basically there’s more typing in Smalltalk than in the equivalent Ruby.

Smalltalk was created almost completely outside the C/C++/Java family of programming languages, which combined with its pure structure makes it feel somewhat alien. And I say that as somebody who likes Smalltalk and genuinely enjoyed working in the environment.

For example:
* The first element of an array is index 1, not index 0. Which, honestly, makes more sense.
* Smalltalk doesn’t have operator precedence, and is evaluated strictly left-to-right, so a mathematical expression like 2 + 3 * 4 will not give you the result you would get in, say, every other programming language ever.
* Smalltalk doesn’t use dot notation for method calls; instead, the dot is a statement separator. It has keyword arguments, later borrowed by Objective-C. Strings use single quotes; double-quoted text is a comment.

Then you get other things like all the boolean and loop logic being managed by the library rather than the syntax. Really, the only thing the syntax manages in Smalltalk is order of operations and a very small amount of literal syntax. There’s a rich set of collection classes, and the names and classes are just slightly off from what you are used to. Takes some immersion to get the feel.
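To give you a taste, here are a few lines of Pharo-flavored Smalltalk – a sketch from memory, so treat the details gently:

    "Double-quoted text is a comment; strings use single quotes."
    | total |
    total := 2 + 3 * 4.    "20, not 14: strict left-to-right evaluation"
    Transcript show: 'total is ', total printString; cr.

    "Boolean and loop logic are messages to objects, not syntax."
    total > 10
        ifTrue: [Transcript show: 'big'; cr]
        ifFalse: [Transcript show: 'small'; cr].
    #(1 2 3) do: [:each | Transcript show: each printString; cr]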

Workflow

The other really unfamiliar thing about Smalltalk is the environment and workflow. Smalltalk source isn’t normally kept in files. Instead, you run a binary image of all the code in the Smalltalk environment. If I’m remembering correctly, a Smalltalker working on multiple projects would have a separate image for each project. (There are mechanisms for teams working together, but I don’t think they are very standard, and I’m not all that familiar with them.)

The thing about Smalltalk is that you are always inside the environment. The idea, for example, that Ruby is super-flexible in what it allows you to do at run-time is a trivial point in Smalltalk. Everything is at run-time, because the phone call is coming from inside the house, so to speak. Of course you can create classes and methods at run-time. If you couldn’t, the whole system would be unworkable.

The image contains your code, the entire Smalltalk system, and the state of all the objects therein. Smalltalk typically also has mechanisms for export/import of source, and for internal version control. I think it’s fair to say that dealing with the workspace is at best counterintuitive when you are used to dealing with text editors and files.

Your main interface with the Smalltalk system is the code browser, which allows you to view any class and method in the system. One method at a time. Quick Quiz – what’s the Smalltalk keyword to define a method? Trick question. Smalltalk doesn’t have one. Smalltalk just knows that the things you type into the code browser are methods, and the first line of a method is its name. Code browsers are very neat, but again, using them effectively is a skill – a big step is realizing you can have more than one of them open at a time.
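To make that concrete, here’s roughly what a method looks like as typed into a browser pane – a made-up example, but the shape is right. The first line is the message pattern, which serves as the method’s name, and the caret is the return:

    doubled: aNumber
        "Answer twice aNumber."
        ^ aNumber * 2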

What’s interesting to me are the places where the Smalltalk environment is still ahead of my regular code environments, and where it’s behind. On the plus side:

  • When you save a Smalltalk method, the code browser verifies that it’s syntactically correct before allowing the save. If you are calling a method or using a variable that doesn’t exist, you have to okay it – in some circumstances, you even have the ability to create the method or variable as part of the save process.
  • The browser has easy access to all old versions of a method that you are looking at.
  • It’s very easy to see places that might call the method you are writing.
  • The browser also has some powerful refactoring functionality built in.
  • Integration is amazing, since everything is running. To give one example, the code browser knows which methods are SUnit tests, and they display in the browser with a red/yellow/green/gray indicator of their current state (error/failure/success/not run). Individual tests can easily be run separately in the browser, and since the environment is already loaded, the tests run in an eye blink.
  • On a related note, the debugger is famously awesome. From a breakpoint, you can see the state of all variables, edit values, edit code, all nice and easy.
  • You also can create workspaces, which are just blank code windows where you can execute arbitrary snippets, like an IRB session, but a bit more flexible.

But there are some missing pieces:

  • The code editor itself isn’t as powerful as you are used to if you’ve been using Vim or TextMate or, hell, anything. I’m a big “typing is not the bottleneck” kind of guy, but still, I would have loved some macros or snippets or something.
  • The one-method-on-display-per-code-browser limitation is, well, limiting. You can have multiple code browsers open, but that takes up a lot of real estate. Tabbed browsers, split windows, that kind of thing would be kind of nice.
  • Intellectually I know that it’s really hard to lose changes in a Smalltalk image, but it still feels fragile to me not to have files readable by an external tool. UPDATE: I could have been clearer about this. Smalltalk doesn’t save changes in the image as such; instead it records all your changes and lets you play them back. I was trivially able to recover about 30 minutes of work after a Pharo crash, simply because all the changes were recorded and easily placed back in the image. The Smalltalk setup is effective, but it feels different from the way I tend to think about my projects.

One interesting feature of the browser-based editing environment is that it’s very easy to create a new method that is based on an existing method. You don’t even have to cut and paste; just browse to the existing method, change its name, and save. Smalltalk will create a new method based on the new name.

This is very nice from a getting-code-into-the-system standpoint, but it seems to have the side effect of encouraging more duplication in the system than I might otherwise be comfortable with. The effect is compounded by the fact that you can only see one method at a time in the browser, making it difficult to scan for duplication.

What’s not getting across here is how smooth the workflow in the small is when you really get the rhythm going. It’s an environment that really suits a “make a small change, evaluate the change” process. In some ways, it’s not surprising that the test-first movement got serious momentum in Smalltalk; test-first is just a formalization of the way a good Smalltalk programmer interacts with the system. Plus, I’m a purist in so many ways, and even though the code takes a little getting used to, I like the way it turns out.

So, successful test. Would try again. I still want to try something with Seaside one of these days just to see what that’s like.

Filed under: Smalltalk

Things that Should Be Metaphors, Part 1

I present to you two things that sound like they should be metaphors for project issues. Except for two things:

  1. They are real
  2. I have no idea what they might be metaphors for

The Library Book Priority Conundrum

I read a lot. In general, I purchase books I’m very excited about reading, and books that I’m less sure about I check out from my local library. The problem is that I always have more books than time to read, and the library books have the nasty property that the library expects them back sooner or later.

The practical effect is that aside from a few books that are “drop everything” when they come out, I’m forever reading library books first, before the books I’ve purchased, because they will go back home, whereas the books I own I get to keep.

Since I’ve already said that I’m usually more excited about the books I’ve purchased and get to keep, this is clearly not optimal. But even knowing this problem, a book I actually purchased (Ganymede by Cherie Priest) has been waiting behind a bunch of library books (That is All by John Hodgman, Kingdom of the Gods by N. K. Jemisin, among others). Okay, those books are great too. But still…

The Last Clementine Problem

I go to the grocery store and buy a bag of clementines. Because they are yummy and easier to peel than grapefruits. Every bag has one or two clementines that are clearly a little less good than the others, so naturally I wait to eat those last. However, since as the clementines get yuckier they become less desirable than anything else in my kitchen, I don’t eat them at all. The result is that I don’t eat the last few clementines, because the pretzels are more enticing. I don’t get new clementines, because I already have clementines, and I don’t throw out the bad clementines until they are well and truly no longer food. The upshot is that I can go weeks without clementines, even though I like them.

Filed under: Uncategorized

Ten Things That Drive Me Crazy About Conference Talks, And How To Avoid Them

If I’m counting right, Ruby Midwest is my fifth conference this year, which is I guess one of the perks of having an employer that likes being involved in the community. It’s great – I like meeting all the smart people involved with Ruby, I learn things, and sometimes people ask me to sign my book. (Okay, that happened once. Still, that’s more than zero.)

At the risk of shooting myself in the foot forty-eight hours before I go give my own talk, here’s a list I compiled for a lightning talk at a recent Groupon Geekfest, entitled 10 Things That Drive Me Crazy About Conference Presentations. I offer it with love and respect. I also note that I’ve committed at least five of these, and will probably commit at least one of them on Friday when I go on at Ruby Midwest.

Here’s the big takeaway: if you are going to the trouble of delivering a live presentation, you need to think of it as a performance, and you need to offer something to the audience that they wouldn’t be able to get from watching the slides.

Coincidentally, Merlin Mann covers similar points from a different perspective on a recent Back to Work Podcast. A lot of good advice there.

Hey, I’m not perfect at this, and posting this pretty much guarantees an epic failure on Friday, but maybe this will help you.

Here’s my list:

1. Underprepared

Do I need to explain this one? Generally my marker here is if the speaker seems surprised by the content of their slides, they are probably underprepared. This isn’t just stumbling around, though. To me it also applies to things like not seeming to know why you are giving the talk, or what you want me, the viewer, to understand when you are done.

If you can’t answer the questions “Who am I aiming this talk at?”, “What is the one thing those people should leave this talk thinking?”, and “Why is this story going to be compelling to those people?”, you are underprepared.

Being nervous is a separate, but related issue. The best fix for presentation nerves that I know of is practice.

2. Overprepared

Yes, it is possible to be overprepared for a presentation. The symptom of this is usually a word-for-word script that the presenter is reading from, while not paying attention to the audience. The problem is that a scripted talk is brittle, and you won’t be able to adjust to an unexpected audience or environment.

3. Hiding

Most of these conferences are in rooms that are not designed for a performance by a single speaker. Think about a comedy club. The stage is small, not very high, and it’s in the middle. The audience is close together and close to the stage. This is a good environment for a performance.

Picture a hotel ballroom. The speaker is often set up in a corner, frequently behind a podium. The audience is often ten or fifteen feet away, and there is often a lot more space than people. That’s a hard way to go.

Don’t hide behind the podium. Move around. My biggest regret of the whole conference year was not insisting on a body mike at a conference, and having to give a talk from behind a podium. Own the space. You want people focusing on you first and the slides second.

4. Not interacting

A related point. Pay attention to the audience. In a tech conference situation, you are competing with everybody’s laptop and phone, which is to say you are competing with the entire Internet. Can you be more interesting than the entire Internet? At least for a few minutes?

People will pay attention to you as a speaker if they think you are paying attention to them. “[Insert Town] audiences are the greatest audiences in the world” is a huge performing cliché. You know why? Because it works. Anything that gets people involved gives them a little investment in paying attention.

When I was in college, I’d start stand-up sets by coming out with a tape recorder holding it up and explaining that I was recording the performance for my mother, and I’d really appreciate it if the audience could give me one big laugh. Cheesy? Practically coated in cheddar. Effective? Yes.

5. “My Tool Is Great”

This is a type of talk that bugs me. I gave one like this once, and it bugged me when I gave it. Define a problem. Don’t talk about any general part of the problem, just talk about your awesome tool that nobody else is using and why its awesomeness is overwhelming. Frequently, this involves ignoring any other way of solving the problem.

6. “My Company Is Great”

Oy. This one usually goes like this:

“At my company we have the perfect process. We never have meetings. In fact, if more than two people show up in the same room, we beat them with sticks. We deploy once every microsecond, and everybody can ask for a deploy, including the people at the company across the hall. We exist in a state of perfect harmony. If you are not like us, your company is substandard. Is that why you are sad? Join us.”

It’s a bit much, and I say that speaking as somebody who has generally been proud to identify with the companies I’ve worked for. What’s usually missing is a) the context of why good processes have taken hold at your company and b) what those of us who are not blessed can do to make progress toward Nirvana.

7. Trendy Slides

A personal pet peeve, and I don’t expect this to bother you as much as it bothers me. In a way, this is another standup thing. It’s easy to get a laugh from an audience without actually being clever. You can shock them, you can flatter them.

For me – and I think I may be alone in this – putting the latest meme in your presentation just because it’s there is in the same category. It’s quick, easy, the audience will react, but ultimately it’s a distraction from what you are trying to do. Remember, you want the audience to focus on you, not the slides.

That said, it is possible to add a meme or something cleverly. It’s just that the bar is pretty high.

8. Illegible slides

Let’s make this simple: every word on your slides that is communicating information needs to be clearly legible from 50 feet away, period. The body text on the presentation I’m giving Friday is 96 points. With the exception of a few code slides, there are fewer than 15 words on a slide – this is a performance, not an eye exam.

Dark text on light background is usually more visible in a projector context than light text on a dark background.

9. Not knowing the audience

This comes in a couple of forms.

  • Not knowing that your audience may have a different expertise or knowledge level than you expect.
  • Not realizing that people in the audience may be different in general, and have different reference points, or different standards of what is uncomfortable or offensive. They may have genders or nationalities different from your own. That doesn’t mean you have to be bland, but it does mean you need to be aware that there’s a wider world out there.

10. Incomplete lists of ten

Filed under: Uncategorized

Stevenotes

I wasn’t going to write this, because it’s not like the Internet has a dearth of people writing about Steve Jobs who never knew him or interacted with him in any way. I wrote it anyway.

I’ve watched the first few minutes of the keynote introducing the iPhone several times. It amazes me – both as a seminal moment of technology and as a presentation. Jobs starts off by saying he’s been looking forward to this day for two-and-a-half years, and that he’s been fortunate to work on multiple “revolutionary” products in his career.

He says that “Today, we’re introducing three revolutionary products” – remember this part?:

  • “A widescreen iPod with touch controls” – he says, and the crowd goes nuts.
  • “A revolutionary mobile phone” – the crowd goes berserk, this is what they came for.
  • “A breakthrough internet communications device” – polite applause, but really, nobody knows what the hell he’s talking about.

He repeats the phrases, makes it clear that he’s talking about one device, “and we are calling it… iPhone”. Then he shows a gag picture of an old iPod with a rotary dial, and it’s funny – not a phrase normally associated with Jobs’ keynotes.

It’s a fantastic performance. Here’s the thing… People went crazy for the iPod and phone parts. Almost five years later, there’s no doubt that the part that nobody got at first – the internet communicator – is far and away the most revolutionary part of the iPhone experience. I had a phone in my pocket before 2007. I had an iPod in my pocket before 2007. The biggest change to my behavior from the iPhone is the way I can access the Internet from my pocket. (Yes, I know there were mobile phones that could connect to the Internet before the iPhone. I even used some. It wasn’t the same thing in any way.)

Which makes me wonder, why describe the device as an “internet communicator” at all, let alone have it as the last part of the description? I mean, it’s nice to have three beats in the description, but the build from iPod (yay!) to phone (YAY!) to “Internet communicator” (huh?) is certainly weird. I assume that the “Internet communicator” part is there because somebody – presumably Jobs – felt that it was going to be important, and placed it last because it was the most important. Even if nobody else realized that yet.

It’s been interesting to see how Jobs has been described in media reports, and how none of the descriptions quite fit. “Inventor”, “pioneer”, “visionary”, “technology titan”. They all have a grain of truth, but it’s almost impossible to imagine a short phrase that could encapsulate exactly what Jobs’ role was in all of these revolutionary products. At this point, the glib, writerly thing to do would be to claim that “Internet communicator” is the phrase that fits. It’s tempting, but kind of forced.

I used to say this: “Apple is a long-range experiment in how much people will pay for good design. And the answer is not much”. I said this for years, probably first around 1994. It’s become less and less true over the past few years, not because Apple is any less a design experiment, but because the audience has finally caught up with them and more and more people have been willing to pay for devices that are well designed. Jobs was the curator of that design sense, and that design sense has shaped the tools I use and the way I live and work.

Filed under: Apple

Coming Soon: Getting Things Done In JavaScript

Okay, the blog has been very quiet for the last month or so. Please be polite and pretend you noticed. I’ve alluded online to a new book one or two places and now I think it’s far enough along that I can mention it in public without being too scared.

Let’s do this Q&A style, call it an infrequently asked question list…

Q: What’s the new book?

A: Great question. The working title is Getting Things Done In JavaScript. That may not be the final title. My proposed titles, Enough JavaScript to Get By and JavaScript for People who Hate JavaScript were (rightly) deemed unsuitable.

Q: Okay. That’s the title. What’s the book?

A: Here are some excerpts from my proposal:

  • The intended audience are developers used to doing back-end development, probably but not necessarily in Rails, who are increasingly asked to move functionality to the client, and are not familiar with the best JavaScript tools available for the job.

  • This book is aimed at developers who are explicitly working on front ends for web applications and looking for guidance on how to approach the simple parts first and the complex parts as needed. In my head, this is triangulated with three non-JavaScript books that I think are around that level: Beck’s Smalltalk Best Practice Patterns, Olsen’s Eloquent Ruby, and Valim’s Crafting Rails Applications.

  • Everything is test-driven. This book will contain more Jasmine than a Disney princess convention.

Does that help? Put another way, it’s the JavaScript book I wanted to hand our last apprentice when he asked for a good guide to JavaScript. Another way I’ve described the audience is people who have poked at JavaScript a few years ago, just got back into it, and aren’t quite sure why everything is an anonymous function these days. I’ve also called it JavaScript: An Idiosyncratic Guide, as in the thing you use after you have the information in the definitive guide.

Q: Why a book on JavaScript?

A: Because I was a guy who poked at JavaScript a few years ago, just got back into it, and wasn’t quite sure why everything is an anonymous function these days…

Well, that’s part of it. I felt like it was an area where I had something to offer, and where I could leverage the time that I had spent getting back into the latest and greatest JavaScript tools and make it easier for others to do the same.

Q: When can I buy it?

A: The initial draft is maybe 1/3 done. Ish. The hope is to get it available in beta sometime in November, and given the schedule that Pragmatic likes for books these days, to have the final version come out around January. That’s still an aggressive schedule, and I’m probably just a smidge behind, but I have a decent idea where I want it to go, and I’m making steady progress.

Q: What’s actually in the book?

A: The outline is still a bit in flux, but the basic idea is to take a pure server-side application and bit-by-bit move features to the client-side in a slow and not-even-a-single-tiny-bit-contrived way. The JavaScript topics are largely focused on creating what passes for objects, so there’s discussion of the object model, functions and scope, and the like – it’s not (at least at the moment) a tutorial on the basics of JavaScript. There’s a lot of jQuery, and a lot of Jasmine, and there will also be jQuery UI, jQuery Mobile, and Backbone.

That’s the story. Coming soon to a theater near you. Hope you like it.

Filed under: Books, Jasmine, JavaScript, Me

What I Learned

As you may have heard, Obtiva got bought by Groupon. I’ve been traveling a bunch, so this coming week is my first full week in the Groupon office post-transition. And, well, someday maybe I’ll write retrospectively about Obtiva, but today isn’t that day. I’ll probably write about what I’m going to be doing at Groupon, but today isn’t that day, either.

Instead, I realized that this month marks four years since I left Motorola and became a Rails consultant. In that time, I worked, for some definition of worked, on over a dozen projects and probably watched at least another dozen from down the hall.

I must have learned something, right?

I finished out this post, so I guess that means that yes, I do think I learned something. Here are a dozen or so oversimplified, fortune cookie-esque things that I think I learned in the last four years. Some of these are probably blog posts in their own right, which I may get to one of these days.

  • It’s a hallmark of successful engineering teams that they understand that if you do not need to make a decision, then you need to not make a decision. That’s sometimes called “preserving ambiguity”, and I remember from back in my days studying engineering education that it holds for successful teams across engineering disciplines.

  • One reason why preserving ambiguity is necessary is that if you get too concrete too soon, then early decisions have a kind of gravity that makes it hard to escape them even when it’s best to explore the problem space more fully. A couple of times in the last four years, clients have come in with polished visual designs, and even if the layout and structure doesn’t work at all from a functionality or usability standpoint, it’s hard to escape the concreteness of the design to find a better design or a better definition of the project.

  • There’s very little that damages a team dynamic more quickly than trying to measure and compare individual contributions. (Technically, I learned this one at Motorola, but it still applies…) On a larger project, measuring subteam contributions also qualifies as problematic.

  • One thing you absolutely must have when coming out of your initial project meetings is a list of things that are not part of your project. Again, a common client pattern was trying to be the “YouTube of X” and the “Facebook of X” and the “Twitter of X”. Pick something to do well and do it well.

  • You can’t escape the software triangle of scope, budget, and schedule. Keeping a project on track means saying “this is out of scope”, or “okay, we can do this, but that means something else needs to move out of scope”.

  • Hiring is so, so important. I once heard it said that the secret weakness of Agile was that one sociopath could ruin an entire project. That doesn’t mean “just hire your friends”, but it does mean that being able to work as part of a team is important.

  • If the business/management team and the development team trust each other, then almost any process works. If they don’t, almost nothing can fix it. One advantage of Agile methods is they provide a lot of quick, easy, and early opportunities for each side to show trustworthiness. (There’s definitely a longer post in this one…)

  • Because, ultimately, for a lot of our clients, working with developers is like going to the mechanic. When the mechanics say that the fitzelgurbber has been gazorgenplatzed, and it’s 500 bucks, do you trust them? Why or why not? How do you apply that back to your day job?

  • The development team’s job in an Agile project is to honestly estimate the cost of things, it’s the business team’s job to honestly estimate the value. Don’t cross those streams. It’s bad.

  • Agile is about managing change, not continuous change. If anybody on your project tries to justify a change with something like “we’re agile, we can change anything whenever we want”, run for the hills.

  • I didn’t say this, but I remember hearing it a few years ago. The right amount of process is just a little bit less process than you need. In other words, the slight chaos from too little process is much preferable to the overhead of too much process.

  • Look, I’ll admit that those of us that identify as software craftsmen sometimes get overly precious with our naming conventions and of course delivering business value is the number one priority. That said, if you are on a project and are being told to do less than your best work for the good of the project (by not testing, or by incurring too much technical debt), that should at the very least be alarming.

  • Pair programming has more overhead than is often acknowledged in Agile literature. I find that, especially on a small team, keeping a pair in sync time-wise is very hard. That said, I don’t have a better solution for code review yet.

Can’t wait to see what I learn this year. A lot of new stuff for me, should be interesting.

Filed under: Me, Projects

Bill James, Sabermetrics, and You, or At Least Me

I was a nerdy kid.

I suppose that isn’t much of a surprise, given how I turned out. But in those pre-computer days, I was nerdy about math and baseball. I was the kind of kid that kept a daily log of my batting statistics in the recess kickball games.

So you can imagine my surprise and happiness when this image appeared in Sports Illustrated, in May 1981. I was ten:

Bill James in Sports Illustrated

The man in the foreground is Bill James, who would soon go on to be one of the most influential baseball writers of the last thirty years. At the time, though, he was self-publishing his baseball book to a small but fiercely loyal group of fans, one of whom actually wrote the SI article.

In the background, on the scoreboard, was one of James’ inventions – a formula called Runs Created that purported to be the most accurate way to measure a baseball player’s contributions.
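For the curious, the basic version of the formula is simple enough to sketch in a few lines of Ruby – the stat line below is invented, not any real player’s:

    # Basic Runs Created: (hits + walks) * total bases,
    # divided by (at bats + walks).
    def runs_created(hits, walks, total_bases, at_bats)
      (hits + walks) * total_bases / (at_bats + walks).to_f
    end

    runs_created(190, 60, 320, 600)  # => about 121.2 runs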

Okay, I’m getting carried away. The relevant point is that I was dazzled enough by the original article to start looking for James’ annual book once he started getting published and distributed nationally. As I said, I was young, and the books cost like seven dollars of my own money, so this was kind of a big deal.

I got lucky in my choice of baseball writers. Not only was James iconoclastic and funny, but he was very good at explaining his methods. And I don’t mean that he was good at explaining the math – James is the first to admit that he is no mathematician (admittedly joking, he once described the “standard deviation” of batting average as “about what your standard deviant would hit”). James’ skill was in explaining why he did his experiments in a particular way. In a very real way, the most important things I learned about how science works were from reading Bill James.

In, I think, the first book of James’ that I read, he responds to criticism of earlier work:

“Journalists start with the answer… [Sabermetrics] starts with the question”

Sabermetrics, by the way, was the word that James coined for the search for objective knowledge about baseball – “saber” from SABR, the Society for American Baseball Research.

In other words, where a journalist would start with “Derek Jeter should be in the Hall of Fame”, James would start with “Should Derek Jeter be in the Hall of Fame?”, and not in the lazy-local-news-headline way where you know from the question what the answer is. More likely, James would start with “What kind of player is in the Hall of Fame? Does Derek Jeter meet that standard?”

I suspect that this distinction is obvious to most of you reading this (though it’s easy to find places in our public discourse where nobody seems to understand it.) But it was a big deal to 12-year-old me.

Later, I remember an epic dismantling of the phrase “Pitching is 75% of Baseball”, starting with wondering what that even meant, then going one by one through the things that would be implied if that statement were true, determining that none of them actually were, and eventually concluding that even the baseball traditionalists who were fondest of the claim didn’t act in any way consistent with actually believing it. I can still quote large chunks of that one.

Some quotes didn’t really become meaningful to me until I started writing myself:

“One of the operating assumptions of this book is that you either own McMillan’s Baseball Encyclopedia or don’t care what it has to say. In either case, you don’t need me to tell you what an outfielder’s assist totals are”

I’ve used some variant of that comment for every book I’ve written (though it didn’t always make it into the final version). I’ve also used it when reviewing books. It’s a really useful way to think about your audience, to realize that you can assume knowledge of or disinterest in certain information.

Another quote was big with me when I used to read academic papers all the time, and I still keep it in mind when I write:

“This isn’t a bull session, this is science. I only write like it’s a bull session because I don’t like how scientists write”

James has always been a little cranky on the subjects of professionalism and expertise, which he sees as often being used as nothing but barriers to keep out the riff-raff.

“When you write something it is either true or false and being an expert or not being an expert has nothing to do with it”

What’s really stuck with me, though, is the way James went about seeking more objective knowledge. The process was simple.

  • Ask a question.
  • Determine something that is observable that would be true if the answer to the question is true.
  • Use small, empirical measurements. James is the king of quick-and-dirty measurements that favor ease of calculation and understanding over multi-digit precision.
  • Compare similar items that differ in one aspect. In the mid-80s James was more excited about a method to measure how similar two players were than almost anything else, because it allowed him to create controlled studies.
  • Follow the data. You probably won’t learn what you expect. Respect the data and respect its limitations.

If you are interested in James’ baseball work, the best introduction right now is probably the Historical Baseball Abstract, which is an overview of both his statistical methods and his historical interests. A more biographical look at his influence can be found in Moneyball, by Michael Lewis, which is about how the Oakland A’s applied James-style methods to actually win games. It’s a great non-fiction narrative, and Lewis is, as far as I can tell, unusually factually accurate.

James’ most recent book is Popular Crime, which is not about baseball, but rather a historical overview of crime stories that become pop-culture touchstones, and also the books that have been written about them. It’s cranky, scattershot, obsessive, and hard to put down.

Filed under: Books, Me

Red Buttons, The Uncanny Valley, And BDD Workshops

I want to tell you about LoneStar RubyConf and how my session went and all that, but first I want to tell you this seemingly unrelated story.

Once upon a time, my senior undergraduate project was an educational software tool I built to teach fractions to elementary school students. (To give you the time frame here, it was written in Visual Basic 1 on Windows 3, and I had to ship my own desktop to the school to ensure that they’d have something I could run it on. Also, I travelled to the school via dinosaur.) Anyway, the metaphor for the program was slices of pizza, and at the beginning of the game, the student saw an uncut pizza on the screen. Times being what they were, I was using the VB graph library to draw the “pizzas”, so a new pizza was represented by a red 3D pie chart, which means it basically looked like a red cylinder.

The first student sits down and starts clicking on the pizza like there’s no tomorrow. I immediately realize that the red cylinder that I had set up to be a pizza looked like a huge red button right in the middle of the screen. I couldn’t possibly have given it more focus – honestly, it looked like it might launch missiles or something. And I know that, even though I had never thought of it in that way, every student is going to do the same thing, and that I’m going to spend my day telling people not to click on the pizza. Which is what happened. And I felt like an idiot – how could I not have seen it before?

Which brings us to my workshop at LSRC, “Correctness is only a side effect: Improving your life with BDD”. Here was the plan:

  • I wanted the workshop to have kind of a code-retreat flavor, where the attendees would be able to focus on the pure process without regard for real-world constraints.

  • The idea was that there would be a hands-on code problem, some discussion of how that was done via BDD based on the student’s concerns, then some prepared material based on the topic.

  • The extended example was some kind of restaurant recommendation system, where each restaurant would have a score based on, at first, just ratings, then cuisine, price, location, and so on. The plan was to build up an increasingly complicated score function, then show the process of using BDD to refactor it into its own scoring engine.

Well, that didn’t work out. Some of it did. The prepared material was decent – I’ve used a lot of it before. A couple of other things happened, of which the most interesting was how the attendees interacted with the problem.

I started by presenting the problem and mentioning that we were going to be focusing on testing a score method, because that was where the interesting logic to test was going to be. We walked through the failing test for initial conditions, did the method returning a literal to make the test pass, forcing a dynamic method with another failing test. So far, it seemed to be going smoothly from my point of view. This was quite an advanced group of students – often when I do these workshops at regional conferences, the attendees are on the newer side, but this group was experienced, knowledgeable, and already largely bought into the idea of BDD. (Based on earlier experiences, I had pitched the material maybe a little too beginnery for the crowd, causing me to worry whether I was presenting any useful information to them at all.)
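If you’ve never seen that rhythm, the first cycle looked something like this – reconstructed for illustration with made-up names, not the actual workshop code:

    # The specs were written one at a time; the class below
    # is what they forced into existence (illustrative only).
    class Restaurant
      def initialize(ratings = [])
        @ratings = ratings
      end

      def score
        # This started life as a literal 0; the second spec
        # forced the dynamic version.
        return 0 if @ratings.empty?
        @ratings.inject(:+) / @ratings.size
      end
    end

    describe Restaurant do
      it "scores a restaurant with no ratings as zero" do
        expect(Restaurant.new.score).to eq(0)
      end

      it "bases the score on the average rating" do
        expect(Restaurant.new([3, 5]).score).to eq(4)
      end
    end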

Soon it’s time to break out for the first hands-on example, which is adding some new features to the score function. And the conversation went something like this: (I’m compressing this a bit for dramatic effect…)

Me: And the task is to make recommendations based on cuisine and price. You can do this by filtering a list of restaurants, or, I’d recommend trying to do it by modifying the score function.

Clever Attendee: How do we store the list of restaurants?

Me: Oh. Um. I don’t really care. In a real system the framework would manage that. You can explicitly pass an array of restaurants if you want, or just test the score function for a single restaurant.

Clever Attendee: Okay. But how should I store the list of restaurants?

Which was my red button moment. Of course somebody coming to this problem for the first time would want to know how things were stored. Of course a person who didn’t know what I had planned for the rest of the day would worry about that.

And I realized that I had landed in the uncanny valley of example problems – close enough to realistic for people to care about practical concerns, but not specified enough for those concerns to be nicely handled. And I missed it because I was fixated on how I would test the score method.

We worked through it, though, and even did another round of examples using mock objects to fake out a location service. Still, we weren’t able to get anywhere near my plan of building up a complex system via BDD. Again – my fault for not getting the example right – the attendees were all great.

Ironically, I think I could have gotten away with it if the example were in print, where I would have been able to just work through the changes in the score example.

Toward the end of the day, I kind of got the feeling that everybody, including me, was a little frustrated, so I just came out and said it: “I think the prepared material has been fine, but I don’t think the example has really worked. Would anybody mind trying a more abstract, code-kata-like problem via BDD as the last example?” Everybody pretty much unanimously picked the abstract example, and so we pivoted to the abstract problem that I wrote for Ruby Learning a little bit ago. People seemed to work on that enthusiastically.

In the end, I think I provided value for everybody that was there, many attendees said nice things. But I’m annoyed with myself that I wasn’t able to deliver everything I wanted to.

Lessons for next time:

  • For this workshop, it seems likely that a well-specified but abstract problem is going to be better than a less-specified but nominally real-world problem. I think if I did this next time, we’d build up a simple game.
  • The content of any workshop like this is always dependent on the attendees’ needs. As the presenter, I have to be paying attention to how people seem to be doing. As the attendee, you should feel free to let the presenter know what you are hoping to get out of the session and whether it’s working for you.

So, thanks to those of you that came out – you were great, I hope you learned something, and I appreciate the way you helped me improve my teaching skills. But I really hope you learned useful stuff, too.

Filed under: Me

In The Jungle, The Mighty Jungle

A few quick notes about Lion, mostly first impressions and things I haven’t necessarily seen a ton of coverage on.

The install itself

This is, I think, my third major OS upgrade since I started using OS X all the time. It’s by far the easiest install. The biggest problem was the time it took to download the installer. I also had a problem where the XCode installer finished but didn’t register itself as finished, such that it appeared to hang. That took a little effort to track down, but wasn’t actually damaging.

Beyond that, though, nearly everything just worked. I had to reinstall exactly one Unix-y binary (ImageMagick, ever the outlier when dealing with annoying installations). I was afraid that I’d need to fuss with things like MySQL or my Ruby install, but by and large, I didn’t.

That Darn Scrolling

The most obvious change as you go to Lion is the new scrolling – it’s such a big deal, that Apple even gives you a dialog box on your first Lion boot reminding you that things have changed.

So, is the change irritating or merely annoying? Okay, for the first hour or so, it was completely unmanageable, to the point where I could feel the tension in my wrists from straining to remember which way to push the trackpad.

At the risk of being obvious, what’s happening here is a metaphor shift. Rather than imagining that scrolling moves a window over your document – moving the pointer down moves the window down and shows you a lower part of the document – you now imagine that you are moving the document itself, so moving the pointer down drags the document down and shows you a higher part of the document. As pretty much everybody has noted, this is exactly how scrolling works on iOS devices, where the metaphor of dragging the document itself is much more concrete. (Weirdly, I used iOS devices for quite a while before I consciously started to think about how iOS was backward relative to the Mac.)

After a few days with the new scrolling I’ve basically got it. I find that if I don’t look at the scrollbars when I scroll, it’s much easier to imagine that I’m dragging the document. Also, for some reason, I took to the new scrolling most quickly in minimal apps or apps that are very similar to their iOS versions, such as Reeder. And I can’t seem to get it right in iTunes for some reason.

Is this change a good thing? Dunno. It’s clearly a thing. It’s a little weird to have something as fundamental as scroll direction be subject to user whim – I expect it’ll make pair programming interesting if a significant minority of users doesn’t switch to the new version. There’s one problem with the new system that I think is unambiguously bad – since the scroll bars fade into the background when they aren’t used, it’s much harder to see from a quick glance how large a document is and where you are in it. That’s a loss of information that doesn’t seem to be counterbalanced by anything. It’s also weird that you can still grab the scroller itself and move it in the traditional direction (although since the scroller is now on top of the view in many apps, it’s sometimes hard to actually grab something at the edge of a document). My overall feeling is that this would make total sense if we had been doing it for fifteen years, but right now it’s going to feel weird for a while.

Another thing is that if the document is scrollable in two directions, it seems to be much harder to keep a pure vertical scroll without it drifting into a slight horizontal scroll. Also, I can’t imagine this working if you were using a mouse instead of a trackpad.

Overall look and feel

Broadly, it seems like there are three overlapping mandates for the look and feel changes in Lion – make interface elements less prominent (with the glaring exception of iCal), incorporate successful features from iOS, and animate anything that’s not nailed down. So scrollbars and other basic interface elements have become more muted across the board. Those changes are not dramatic, but I like them; they do tend to keep focus where it belongs.

I really like Mission Control as a re-imagining of Spaces/Dashboard/Expose. The Mission Control screen is very nice, easy to see, and it’s very easy to manipulate spaces – this is one case where the gestures really work. (I never was able to stick to using spaces before, but I have been using them a bit in Lion). The new full-screen mode doesn’t work for me, mostly because I’m often in a dual monitor situation, and the second monitor is ignored in full-screen, which seems a waste, but I can see how making each full-screen app its own space makes dealing with a bunch of full-screen apps much easier. Launchpad seems to be something that I don’t need, and it feels like it would be hard to manage.

The animations don’t bother me as much as they seem to bother other people – though ask me again about Mail.app in a few weeks. I’ve seen some complaints about the speed of the animation between spaces, but it seems reasonable to me.

That said, the iCal redesign doesn’t do much for me, but I’m not a heavy iCal user. Address book I like better, though I still think it’s a little hard to use. One feature that I do like is that, if you use iPhoto’s faces feature, Address Book can easily search iPhoto for pictures matching the name of the contact to use as the avatar for that contact.

Auto-Save and Versions

One of the biggest functional features of Lion is the auto-save and versioning. Lion-native apps auto-save when idle or at a timed interval, and automatically save when the app is closed. They also automatically restore state when the app reopens. Apps have a Time Machine-like interface for viewing old versions of the same document. Points:

  • Basically, this is awesome.
  • I think it’s going to be much harder to break my typing tic of pressing command-s at the end of every sentence than it is to adjust to the scrolling thing.
  • I also think it’s going to take some time to get used to the new “Save a Version/Duplicate/Revert To Saved” wording in the file menu.

One thing I haven’t seen commented on is that there seem to be two different kinds of version support in Lion. (Which may mean that I’m getting this wrong.) But it appears that there is a subtle difference between apps like Pages and Keynote and other applications. In Pages and Keynote, you have access to every save point over the history of the document.

For other applications, if you are connected to your Time Machine drive, you have all the Time Machine snapshots. If you aren’t, you seem to have access to only the most recent Time Machine snapshot. I’m not 100% sure exactly what’s going on, and I’m not sure yet if it’s an app thing or a document-type thing – for example, it seems like Apple’s TextEdit can create multiple versions of a text document, but Byword can’t, even though Byword is Lion-compatible, in that it has the new-style File menu. Ultimately, as cool as this is in theory, it’s a little confusing in implementation.

New Apps

I’ve started playing with Mail.app, which I stopped using about two years ago on the grounds that it was really irritating. It’s a lot better now, with a more useful three-column layout (that can become a two-column layout), conversation threading, a really, really nice search feature, and a bunch of animations that straddle the line between charming and annoying. For the record, I find the popout animation for replying a bit much, but the way sent messages fly up off the top of the screen kind of makes me smile. (And if you liked the animation from the App Store where the icon flies into the dock, note that Safari uses something similar for downloads, and Mail uses it for replies.)

One nice touch that I haven’t seen called out much is that in Mail and iChat, and I’m not sure where else, inline URLs have a little arrow after them, which triggers a Quick Look preview of the web page, similar to the way Google’s quick preview works. That’s nice.

Anything else I can think of

Lion has also added a system-wide autocorrect clearly based on the iOS version. I thought this was going to bug me, but actually I kind of like it. It appears to work a little better than the iOS version at identifying and correcting actual typos; the UI is a nice combination of being unobtrusive while still making you aware that a change has been made; and it’s much more responsive than TextExpander (which I love for deliberate macros, but which has always felt a little sluggish when correcting typos). Also, the autocorrect has fixed like five typos just in this paragraph, and only got one of them wrong. I’ll actually take those odds.

So

Too many words to say this: I like Lion so far, although some of the specific choices puzzle me cough iCal cough. It’s taken me less time to get used to the changes than I thought, and I’m finding some of the changes making definite improvements in my normal workflow.

Filed under: Mac

In Which I Blather About Self-Publishing

So I tend to keep an eye on interesting things in the Ruby self-published technical book space.

This isn’t exactly recent, but I did want to mention and endorse Avdi Grimm’s Exceptional Ruby. This is exactly the kind of thing that should be happening in the self-publishing space. It’s a brief, thorough exploration of a very specific topic, in this case error and exception handling in Ruby. You may think you understand Ruby’s error mechanisms, but I’m pretty sure that unless you are actually Avdi, you will learn something both about the mechanics of Ruby’s exception handling and about how best to robustly integrate error management into your code.

The book is clear and authoritative, not least because Avdi is up front about certain code patterns that he is presenting but has not had wild amounts of experience with. I like when a technical author is comfortable enough to admit less than perfect omnipotence.

Anyway, it’s $15 at both exceptionalruby.com and at Pragmatic – Avdi worked out a deal where Pragmatic is co-distributing the book. Unlike what I did with Rails Test Prescriptions, Exceptional Ruby is just being distributed by Pragmatic – it didn’t go through Prag design or edits, and Avdi also sells it on his own. This strikes me as a very interesting experiment, and I hope it works out for both sides.

Which brings me to the latest from Thoughtbot, namely a new ebook on Backbone.js and Rails.

I need to start with a disclaimer – there’s a sense in which this book partially competes with my Not Completely Announced JavaScript book. Also, I haven’t read what they have yet, although quibbles aside, there’s a good chance that I’ll buy it. Also, I use all kinds of Thoughtbot tools and I have a lot of respect for them as a team.

Okay, we clear?

Thoughtbot is producing a new ebook – apparently as a group, because they aren’t listing specific authors. (As a matter of marketing, and consumer confidence, I really recommend that they list the authors…) They have an extensive outline on the site, although they don’t estimate the final page count. The distribution model is interesting – when you pay, you get access to a GitHub repo containing up-to-the-moment source code that can be converted to a variety of formats. (Though it is unclear if any format other than HTML is supported immediately.) The outline appears to be solid; I can’t quibble with any of that yet.

Now how much would you pay?

It’s $39. Until August 1, when it goes up to $49.

That is a bold pricing strategy.

Which doesn’t mean it won’t work out fine for Thoughtbot.

To put that $49 in context, that’s more than double most Pragmatic ebooks. It’s almost triple what O’Reilly sells their new jQuery Mobile book for, which would seem to be a reasonable comparison, topic size wise. And, of course, it’s five times what I self-published for a few years ago. More on that in a second.

Offhand, I can think of three reasons why they might try that pricing strategy:

  • Value pricing: if this book saves you an hour of research or coding, then that easily covers the price for most purchasers. Which is true, if not exactly how the market prices technical books. But if you look at it as a fraction of what a typical training course costs, then it makes more sense.

  • Profit pricing: Thoughtbot may have decided they need to price at this level to make the cost in time spent on the book worthwhile to them.

  • Boutique pricing: Thoughtbot may want to place a marker down as experts in this space, but limit their customers to people who are seriously interested in both the subject and the somewhat interactive model they have chosen for creating the book.

All these arguments seem valid, or at least defensible, and I have no idea what Thoughtbot’s strategy actually is. I’m really curious to see what happens, though. I’ll report back after I buy it and they get some time to put content in place.

It seems to me like part of the marketing side of self-publishing a book is giving potential readers as few reasons as possible not to buy the book. Right now, there are a lot of reasons here – the price, the lack of a sample, the lack of an author or publisher with a track record (although obviously Thoughtbot itself has a great reputation, and this will be more meaningful to some people than others). Some of this will correct itself over time – if things are good, they’ll get some word of mouth – and testimonials would be a great thing on their website.

Which is a roundabout way to say a couple of things about self-publishing based on my own experience.

First is that it’s wildly clear that I chickened out on the pricing. (Rails Test Prescriptions was $9 when I self-published it.) I could make up something about how I was pursuing a mass-market strategy and blah, blah, but the simple fact was I was petrified that nobody would see value in it if I went above $9. In retrospect, I probably could have gotten away with a higher price.

The other mistake I made – or what could potentially have been a mistake – was to not set a finite goal. Avdi, I think, has this exactly right. He took a specified chunk, wrote about it, and called it a day. When I first started with RTP, I said a lot of things about how I would keep the book updated. It didn’t occur to me until after I started selling that I was setting myself up for an infinitely long task. Again, though, I was afraid that there would be a lack of perceived value in a book that ended, and obviously one of the potential advantages of an ebook is ease of updating.

One thing the Thoughtbot team has right, though, is an easy way of updating – their book will be a git repo. I hope, though, that they’ll separate in-progress from edited and reviewed by putting them on separate branches. I messed this up a bit by assuming that my distribution channel could handle it and then having to come up with a stopgap fairly quickly.
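
As a sketch of what that separation might look like – and to be clear, the branch names, file names, and workflow here are my own assumption, not anything Thoughtbot has described – the git side of it could be as simple as:

    # Hypothetical branch layout: "edge" holds in-progress writing,
    # "master" holds only chapters that have been edited and reviewed.
    git checkout -b edge

    # Drafts get committed to edge as they are written...
    git add chapters/views.md               # hypothetical chapter file
    git commit -m "First draft of the views chapter"

    # ...and get merged down to master only after review.
    git checkout master
    git merge edge
    git tag 2011-07-snapshot                # a known-good point for readers to pull

Readers who pull master would always get the reviewed version, while anybody curious could watch the drafts accumulate on edge.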

I will say, though, that publishers and editors are somewhat like the government, in that when they do their jobs well, you don’t realize how much value they add. There’s a whole process to producing a book that the author might not want to be involved in – design, indexing, distribution.

I don’t have a grand conclusion here – I’m very interested to see all kinds of experiments with self-publishing, and I don’t think anybody really knows yet how things will play out.

Filed under: eBooks