Ryan Bates of RailsCasts – Ruby on Rails Podcast

Ryan Bates of RailsCasts on his day job, bowling, and the Rails Activism team.

iPhone Mockups

The next version of Balsamiq, a really cool tool for creating low-fidelity mockups, will support mocking up iPhone apps. That’s gonna be the quickest way to visualize and experiment with your next killer iPhone app idea.

Balsamiq lets you drag and drop components onto a canvas and then add content as plain text, which gets rendered as an image. So the text below creates the interface shown in the screenshot. That’s too cool.

A Simple Label
- Delete
+ Add and sub-menu, >
Two Labels, yup
v A Checkmark, (>)
* A Bullet, Read >
_ Space for an icon
__ Space for a big icon
On button, ON
Off button, OFF
 
v And empty row, (above)
(screenshot: Balsamiq iPhone mockup)

Corey Haines and RMM at Hashrocket


Corey Haines at Hashrocket / RMM from Hashrocket on Vimeo.

Join us in welcoming Corey Haines (aka The Software Journeyman) to Hashrocket for some pair programming with Stephen Caudill. Corey Haines travels around the country pair programming with whoever will feed him a scrumptious vegetarian meal and give him a spot to put his air mattress.

In the video, Corey arrives at Hashrocket (accompanied by Cory Foy) and introduces himself to the team. Then Obie, Corey and crew proceed to discuss software craftsmanship and the project that Corey and Stephen will be working on for the duration of the week: RMM – the Rails Maturity Model application. A great conversation ensues that really hits home on what RMM is all about.

Rails Envy Podcast – Episode #068: 02/25/2009

Episode 068. Nothing like starting out an episode with an awkward, poorly received joke.

Subscribe via iTunes – iTunes only link.
Download the podcast ~16 mins MP3.
Subscribe to feed via RSS by copying the link to your RSS Reader


Sponsored by New Relic
The Rails Envy podcast is brought to you this week by NewRelic. NewRelic provides RPM which is a plugin for rails that allows you to monitor and quickly diagnose problems with your Rails application in real time. Debuting this week is the Rails Lab which gives you expert advice on tuning and optimizing your Rails app. Check it out at RailsLab.NewRelic.com.

Announcing the Helpdesk Rails Kit

A new Rails Kit is now available: the Helpdesk Rails Kit makes it easy to add a support center to your existing Rails app, or even to run a standalone support site alongside it.

You get an admin interface for you and your team to see, route, and handle tickets from your customers and to update your knowledge base, while your customers get a web interface to keep track of their tickets and view your support articles. The Helpdesk Kit also provides email integration: your customers can send email to your existing support address and get responses back from that same address, while you and your team can still view, tag, and assign the tickets that come in via email using the web interface (which also has an Atom feed, to keep you on top of everything).

If you’ve been wanting a support site for your Rails app, check out the Helpdesk Rails Kit. And if you need some recurring billing for your Rails app, don’t forget about the Software as a Service Rails Kit. With these two Kits, it’s almost SaaS-in-a-box. 🙂

Net::SSH, Capistrano, and Saying Goodbye

It is with mixed emotions that I announce two things this evening.

First, I’m announcing the final release of both Net::SSH (2.0.11) and Capistrano (2.5.5). Both are minor changes: Net::SSH 2.0.11 adds support for a :key_data option, so you can supply raw PEM-formatted key data. Capistrano 2.5.5 enhances the role() method so you can now declare empty roles. Either way, not much to get excited about, but the changes were pending and deserved releasing.
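
For reference, here’s roughly what the two additions look like in use. This is a minimal sketch; the host, user, and key path are placeholders:

  require 'net/ssh'

  # Minimal sketch of the new :key_data option -- the host, user, and key
  # path here are placeholders, not values from the announcement.
  Net::SSH.start('example.com', 'deploy',
                 :key_data => [File.read('/path/to/deploy_key.pem')]) do |ssh|
    puts ssh.exec!('hostname')
  end

And in a Capistrano configuration, a role can now be declared with no servers attached:

  role :app, 'app1.example.com'
  role :backup   # an empty role -- no servers yet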

Secondly: I’m ceasing development on SQLite/Ruby, SQLite3/Ruby, Net::SSH (and related libs, Net::SFTP, Net::SCP, etc.) and Capistrano. I will no longer be accepting patches, bug reports, support requests, feature requests, or general emails related to any of these projects. For Capistrano, I will continue to follow the mailing list, and might appear in the #capistrano irc channel from time to time, but I am no longer the maintainer of these projects. I will continue to host the capify.org site and wiki for as long as they are of use to people.

This was a very hard decision, and one that has taken me months to come to grips with. I cannot express how much I appreciate the huge support from everyone that has found value in Capistrano, in particular. Your kind words and encouragement have meant a lot to me. But I’m burning out, and I have to drop these before things get worse. Maybe after some period of time I’ll come back to them—I don’t know. But I’m not planning on it.

So where do these projects go from here? That’s entirely up to the community. If you have a neat idea for any of these, please feel free to fork the project on GitHub (see my profile page for the links to the individual projects) and release updates on your own schedule. If no one steps forward, that’s fine—I’m not asking for volunteers. But if someone feels passionately that any of these are not “finished”, and has ideas for how they could be further improved, I will not stand in the way.

However, please know that I am not available for questions about the code, or for advice on how to implement changes. I’m trying to cut as cleanly as I can. Any emails I get asking about the code will likely be ignored. I’m not trying to be rude; I’m just setting expectations.

I won’t disappear, though. These libraries were just becoming millstones around my neck; without their weight dragging me down, I look forward to being able to experiment and play with new projects and new ideas. We’ll see what the future holds!

So, thanks all for a fantastic couple of years.

Another look at integrating Scribd with your Rails application

Almost a year ago I wrote about integrating Scribd into your Rails application, and since then that feature has been working well in my applicant tracking system. Today, though, I got a request for information on how I display the documents that I send to Scribd, so I thought I’d share that, too.

In the previous post I showed how to send the document to Scribd using the API, and in this post I’ll show how I take the results from the API and use them in my application. First, here’s a snippet from my Attachment model that shows how I create Scribd URLs based on the info that comes back from the API. The scribd_access_key and scribd_doc_id are database-backed methods whose values get populated from the API response. The scribd_url method is defined like so, simply building a link to Scribd using the document ID and access key:


  def scribd_url
    "http://www.scribd.com/word/full/#{self.scribd_doc_id}?access_key=#{self.scribd_access_key}"
  end
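
For context, those two values get stored when the document is first pushed to Scribd. A hypothetical helper for that save step might look like this; the method name and hash-style response are illustrative, not code from the earlier post, though doc_id and access_key are the fields Scribd’s upload response returns:

  # Hypothetical helper (name and response format are illustrative): copy the
  # identifiers from Scribd's upload response into the model so that
  # scribd_doc_id and scribd_access_key have values to work with.
  def store_scribd_info(response)
    update_attributes(:scribd_doc_id     => response['doc_id'],
                      :scribd_access_key => response['access_key'])
  end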

Here’s a snippet from my view that shows the attachments related to a job candidate’s submission. The preview link is conditional on the Scribd document ID being set, since not all documents that could be attached (e.g., resumes in WordPerfect format) will be successfully converted by Scribd. The user still gets a download link though.



<% if attachments.any? %>
  <ul class="attachments">
    <% attachments.each do |att| %>
      <li id="attachment_<%= att.id %>">
        <%= link_to att.filename, attachment_url(att), :title => 'Download' %>
        <%= link_to(image_tag('icons/page_white_magnify.png'), att.scribd_url,
                    :id => att.scribd_doc_id, :rel => att.scribd_access_key,
                    :class => 'scribd', :title => 'Preview') unless att.scribd_doc_id.blank? %>
      </li>
    <% end %>
  </ul>

  <div id="scribd" style="display:none; margin-top: 1em">
    <a onclick="$('scribd').hide()">Close preview</a>
    <div id="scribd_preview" style="margin-top: 2px"></div>
  </div>

  <% content_for :head do %>
    <script type="text/javascript" src='http://www.scribd.com/javascripts/view.js'></script>
    <%= javascript_include_tag 'scribd.js' %>
  <% end %>
<% end %>

And here’s my javascript that I inject into the head of the document via that call to content_for. It takes the info from the preview link and passes that along to the javascript provided by Scribd for doing in-page embedding.



var scribd_doc;

function showScribd(link) {
  scribd_doc = new scribd.Document(link.id, link.rel);
  scribd_doc.addParam('height', 400);
  scribd_doc.addParam('width', 620);
  scribd_doc.addParam('page', 1);
  scribd_doc.write('scribd_preview');
  $('scribd').show();
}

Event.observe(window, 'load', function() {
  $$('a.scribd').each(function(link) {
    link.observe('click', function(event) {
      showScribd(link);
      Event.stop(event);
    });
  });
});


So there you go, a complete Scribd integration for Rails, both server-side and client-side. Now it should be as easy as pie for you to bake Scribd into your own application.

What’s New in Edge Rails: Batched Find


This feature is scheduled for: Rails v2.3


ActiveRecord got a little batch-help today with the addition of ActiveRecord::Base#find_each and ActiveRecord::Base#find_in_batches. The former lets you iterate over all the records in cursor-like fashion (only retrieving a set number of records at a time to avoid cramming too much into memory):

Article.find_each { |a| ... } # => iterate over all articles, in chunks of 1000 (the default)
Article.find_each(:conditions => { :published => true }, :batch_size => 100 ) { |a| ... }
  # iterate over published articles in chunks of 100

You’re not exposed to any of the chunking logic – all you need to do is iterate over each record and just trust that they’re only being retrieved in manageable groups.

find_in_batches performs a similar function, except that it hands back each chunk array directly instead of just a stream of individual records:

Article.find_in_batches { |articles| articles.each { |a| ... } }
  # => articles is array of size 1000
Article.find_in_batches(:batch_size => 100) { |articles| articles.each { |a| ... } }
  # iterate over all articles in chunks of 100

find_in_batches is also kind enough to observe good scoping practices:

class Article < ActiveRecord::Base
  named_scope :published, :conditions => { :published => true }
end

Article.published.find_in_batches(:batch_size => 100 ) { |articles| ... }
  # iterate over published articles in chunks of 100

One quick caveat exists: you can’t specify :order or :limit in the options to find_each or find_in_batches as those values are used in the internal looping logic.
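
That restriction makes more sense once you see roughly what the batching does under the hood. This is a simplified sketch, not the actual Rails implementation:

  # Simplified sketch (not the actual Rails source): rows are always ordered
  # by primary key, and each pass fetches only ids greater than the last one
  # seen, limited to the batch size -- which is why :order and :limit are
  # reserved for the internal loop.
  batch = Article.all(:order => 'articles.id ASC', :limit => batch_size)
  while batch.any?
    yield batch
    batch = Article.all(:conditions => ["articles.id > ?", batch.last.id],
                        :order => 'articles.id ASC', :limit => batch_size)
  end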

Batched finds are best used when you have a potentially large dataset and need to iterate through all rows. If you do this with a normal find, the full result set is loaded into memory and can cause problems. With batched finds you can be sure that only one batch’s worth of result objects (1000 by default) is held in memory at a time.




Why Instant Deployment Matters

How much better are two steps than three? Does it matter if something takes five minutes instead of twenty? When it comes to software deployment and provisioning, does instant really matter?

Recently, I was ranting on this subject to a user who had the misfortune of asking me about it in person.

“Truly instant provisioning and deployment is the ultimate goal,” I said. “10 seconds isn’t good enough. We have to –”

“Look,” he interrupted, “I love what you guys are doing and don’t want you to stop, but why are you so obsessed with this?”

My immediate answer: because we’re obsessive people. A couple years ago we stumbled across what we view as a glaring disconnect between the way software is developed and the way it’s provisioned and deployed. Now, like a person who’s noticed a crooked picture on the wall, we are totally fixated on setting it straight.

This was a shallow answer though, and he wasn’t convinced:

“I mean it’s not that bad as is, is it?” he said. “It’s been improving steadily for years.”

And that’s when it hit me. While everyone is adversely affected by this growing problem, most people don’t actually see it. It has crept up on us gradually.

1996: A development team of perhaps 10 people (toting advanced computer science degrees) spends 6 months building software to laboriously defined specifications, writing their own framework, and using limited libraries and no testing harness. It then takes an IT team of say 3 people a couple of weeks to provision server resources, configure and install the OS and software stack, and deploy the software.

2000: A more ambitious team of 6 people (toting half-finished computer science degrees) spends 3 months building a web application to satisfy a PRD, using primitive frameworks and some integration testing. It then takes 3 people about a week (optimistically) to provision servers from IT, install the web stack, and deploy the app.

2004: A 4-person team (half of whom went to art school) spends 4 weeks writing a web app to some short and loose specs, using decent frameworks, unit and integration testing, and lots of user feedback. It then takes just 2 people about a week to provision virtual servers, install a complete web stack, and deploy the app.

2008: An agile team of 4 people (plus perhaps a scrum master) spends a week building the first complete version of their web app from just a rough user story, using advanced web frameworks, fully featured libraries, test-driven development, and sharp agile practices. It then takes just one person a few days to provision new resources from IT or a fast-moving hosting company, install the default web stack, and do the initial deploy.

Let’s look at these data points:

The bottom row is the percentage of the total project/iteration time spent on provisioning and deployment. Look at it this way:

This is shocking. Provisioning and deployment has gotten 10x faster during this period, but development has gotten 130x faster. Development teams are getting smaller and more agile, doing shorter iterations (deploying more often), and scaling their apps more quickly (more frequent provisioning). This results in a dramatic increase in the portion of time spent provisioning and deploying.

At this rate, in less than 3 years we’ll be spending as much time deploying and provisioning as we spend developing. These numbers are based on our direct experience with medium/large company software projects. You can play around with the scenarios; even with widely different numbers the curve is about the same.

The reason most people don’t see this growing problem is because it’s masked by the gradual improvement of the deployment and provisioning process.

Capistrano, for example, is an awesome deployment tool, which makes us feel great about the improving state of deployment tools. But these incremental improvements aren’t keeping up with agile development; they’re an investment in a race that can’t be won.

We see this playing out often now. We’ve been contacted by quite a few Fortune 500 companies lately who, after a massive agile restructuring of their software development organization, discovered they are now spending as much time on provisioning as development. All the economic benefit of agile development is consumed by provisioning – this has enormous fiscal impact.

How do we solve this problem? It doesn’t seem possible to both make provisioning/deployment faster than development, and also keep it there by continuously improving at a higher rate. How do we get off this treadmill?

What if we could provision and deploy instantly? This is where the difference between “a little” and “none” comes into play. If it’s instant, the portion of time spent on it goes to zero. The development process can then improve at any speed, and deployment/provisioning will never become a barrier. Problem solved.

This is, by the numbers, why instant deployment matters.

How are we actually achieving instant deployment? Over the next two weeks we’ll be posting more information on the challenges involved, and how we’ve designed Heroku’s architecture to meet them.

#150 Rails Metal

Rails Metal is a way to bypass the standard Rails request process for a performance boost. In this episode you will learn how to shave off a few milliseconds using Metal.
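
As a quick taste of what that looks like, a metal endpoint is just a Rack-style class dropped into app/metal. This is a bare-bones sketch, not code from the episode:

  # app/metal/poller.rb -- bare-bones sketch, not code from the episode
  class Poller
    def self.call(env)
      if env["PATH_INFO"] =~ /^\/poller/
        [200, { "Content-Type" => "text/html" }, ["Hello from Metal"]]
      else
        # a 404 here tells Rails to fall through to the regular request stack
        [404, { "Content-Type" => "text/html" }, ["Not Found"]]
      end
    end
  end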

File Downloads Done Right

Getting your file downloads right is one of the most important parts of your File Management functionality. A poorly implemented download function can make your application painful to use, not just for downloaders, but for everyone else too.

Thankfully it’s also one of the easiest things to get right.

The simple version

For the purposes of this article let’s assume that your application needs to provide access to a large zip file, but that access should be restricted to logged in users.

The first choice we have to make is where to store this file. In this case there’s really only one wrong answer, and that’s to store it in the public folder of your rails application. Every file stored in public will be served by our webserver without the involvement of our rails application. This makes it impossible for us to check that the user has logged in. Unless your files are completely public, you shouldn’t go anywhere near the public folder.

So let’s assume we’ve stored the zip file in:

/home/railsway/downloads/huge.zip

Next we need a simple download action to send the file to the user, thankfully rails has this built right in:

  before_filter :login_required
  def download
    send_file '/home/railsway/downloads/huge.zip', :type=>"application/zip" 
  end

Now when our users click the download link, they’ll be asked to choose a location and then be able to view the file. The bad news is, there’s a catch here. The good news is it’s easy to fix.

What’s the catch?

The problem here is one of scarce resources, and that resource is your rails processes. Whether you’re using mongrel, fastcgi or passenger you have a limited number of rails processes available to handle application requests. When one of your users makes a request, you want to know that you either have a process free to handle the request, or that one will become free in short order. If you don’t, users will face an agonizing wait for pages to load, or see their browser sessions timeout entirely.

When you use the default behaviour of send_file to send the file out to the user, your rails process will read through the entire file, copying the contents of the file to the output stream as it goes. For small files like images this probably isn’t that big of a deal, but for something enormous like a 200M zip file, using send_file will tie up a process for a long time. Users on slow connections will soak up a rails process for correspondingly longer.

If you get a large number of downloads running, you may find all your rails processes taken up by downloaders, with none left to serve any other users. For all intents and purposes your site is down: you’ve instituted a denial of service attack against yourself.

What about threads?

Unfortunately threads in ruby won’t save us. The combination of blocking IO and green threads mean that even though you’re doing the work in a thread, it’s blocking the entire process most of the time anyway. JRuby users may get a performance improvement, but it’s still going to be a noticeable consumption of resources when compared to letting a web server stream the file.

Don’t believe everything you read on the internet, threads and ruby just won’t help you with most of this stuff.

So What’s the Solution?

Thankfully this problem was solved a long time ago by the guys at LiveJournal. They used perl instead of ruby, but had the same problems. Downloading files would block their application processes for too long, and cause other users to have to wait. Their solution was elegant and simple. Instead of making the application processes stream the file to the user, they simply tell the webserver what file to send, and let the web server bother with the details of streaming the file out to the client.

Their particular solution is quite cumbersome to set up and use, but there’s a very similar solution available called X-Sendfile. It’s supported out of the box with later versions of lighttpd, and available as a module for apache (mod_xsendfile: http://tn123.ath.cx/mod_xsendfile/).

The way it works is that instead of sending the file to our users, our rails application simply checks they’re allowed to download it (using our login_required filter), writes the name of the file into a special response header, and then renders an empty response. Once apache sees that response it will read the file from disk and stream it out to the user. So your headers will look something like:

X-Sendfile: /home/railsway/downloads/huge.zip

The apache module has a slightly annoying default setting that prevents it from sending files outside the public folder, so you’ll need to add the following configuration option:

XSendFileAllowAbove on

Thankfully for rails users x-sendfile support is built right in to rails, allowing us to make a few minor changes and we’re done.

  
  before_filter :login_required
  def download
    send_file '/home/railsway/downloads/huge.zip', :type=>"application/zip", :x_sendfile=>true
  end

With that, we’re done. Our rails process just makes a quick authorization check and renders a short response, and apache uses its own optimised file streaming code to send the file down to our users. Meanwhile, the rails process is free to go on to the next request.

Nginx users can use a similar header called X-Accel-Redirect. This is a little more fiddly to set up, and requires your application to write a special internal URL into the http response rather than the full path, but in terms of scalability and resource contention, it’s just as great. There’s an intro to the nginx module at http://blog.kovyrin.net/2006/11/01/nginx-x-accel-redirect-php-rails/ if you’re an nginx user. If only uploads were this easy!
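
For what it’s worth, the nginx flavour of the download action might look roughly like this. It’s a sketch that assumes an internal /downloads location in your nginx config mapping to the directory on disk; the paths and names are placeholders:

  # Sketch only: assumes nginx is configured with something like
  #   location /downloads/ { internal; alias /home/railsway/downloads/; }
  before_filter :login_required
  def download
    response.headers['X-Accel-Redirect'] = '/downloads/huge.zip'
    response.headers['Content-Type'] = 'application/zip'
    render :nothing => true
  end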

Up Next

The next article in the series will cover my experiences when dealing with the storage of your files. Should you use S3? What about blobs, NFS, GFS or MogileFS?

Most Bugs Fall to the Second Pair of Eyes

While recently discussing if a power law applies to bugs at the ongoing seminar I sentimentally call work, I noted a corollary to Linus’s Law.

Linus’s Law states:

Given enough eyeballs, all bugs are shallow

The corollary is:

Most bugs fall to the second pair of eyes

That is, just having one other developer look at a bug will likely resolve it. Of course, developers who are pair programming already know this and are ahead of the game.

On a related note, pretty much every developer has been in the following situation. You are called over by a colleague to look at a bug, often accompanied by the statement, ‘This should work.’ You give it a cursory examination and resolve the problem with a quick change, often one that is obvious (after you point it out!).

If you come away from that experience thinking ‘Fresh pair of eyes’ or some such, good for you, you passed the test.

If you come away from that experience thinking ‘I am so awesome!’ or some such, not so good for you, you are probably not even competent.

Rails Envy Podcast – Episode #067: 02/18/2009

Episode 067. Awkward secrets.

Subscribe via iTunes – iTunes only link.
Download the podcast ~24 mins MP3.
Subscribe to feed via RSS by copying the link to your RSS Reader


Sponsored by New Relic
The Rails Envy podcast is brought to you this week by NewRelic. NewRelic provides RPM which is a plugin for rails that allows you to monitor and quickly diagnose problems with your Rails application in real time. Debuting this week is the Rails Lab which gives you expert advice on tuning and optimizing your Rails app. Check it out at RailsLab.NewRelic.com.

BONUS

Ryan Tomayko – Ruby on Rails Podcast

Rack and Sinatra committer Ryan Tomayko talks about the benefits of Rails, the value of middleware, and the future of deployment.


Git commit-msg for Lighthouse tickets

A quick follow-up to a post from a few months ago on how our team has a naming convention for git branches when we’re working on Lighthouse tickets (read previous post).

I’ve just put together a quick git hook for commit-msg, which will automatically amend the commit message with the current ticket number when you’re following the branch naming conventions described here.

Just toss this gist into .git/hooks/commit-msg.


  #!/bin/bash
  # (bash, not sh: the script relies on [[ =~ ]] and BASH_REMATCH)

  #
  # Will append the current Lighthouse ticket number to the commit message automatically
  # when you use the LH_* branch naming convention.
  #
  # Drop into .git/hooks/commit-msg
  # chmod +x .git/hooks/commit-msg

  # read the state selection from the terminal rather than stdin
  exec < /dev/tty

  commit_message=$1
  ref=$(git symbolic-ref HEAD 2> /dev/null) || exit 0
  branch=${ref#refs/heads/}

  if [[ $branch =~ LH_(.*) ]]
  then
    lighthouse_ticket=${BASH_REMATCH[1]}

    echo "What is the state of ticket #${lighthouse_ticket}?"
    echo "(o)pen"
    echo "(h)old"
    echo "(r)esolved"
    echo "Enter the current state for #${lighthouse_ticket}: (o)"

    state="open"

    read state_selection

    case $state_selection in
      "o" )
        state="open"
        ;;
      "h" )
        state="hold"
        ;;
      "r" )
        state="resolved"
        ;;
    esac

    # append the ticket reference to the commit message file
    echo "[#${lighthouse_ticket} state:${state}]" >> "$commit_message"
    exit 0
  fi

Then a quick example of how this works…


  ➜  bin git:(LH_9912 ♻ ) git ci -m "another test" 
  What is the state of this ticket? 
  (o)pen 
  (h)old
  (r)esolved
  Enter the current state: (o)
  h
  Created commit 1ed2713: another test
   1 files changed, 3 insertions(+), 1 deletions(-)

Now to see this in action… (screenshot: git message hook)

Then we’ll check out the git log really quick.


➜  bin git:(LH_9912) git log
commit 1ed271323c4a054fe56e76bddc9ac81d241a1032
Author: Robby Russell <robby@planetargon.com>
Date:   Mon Feb 16 12:06:33 2009 -0800

    another test
    [#9912 state:hold]

Thanks to Andy for helping me figure out how to read user input during a git hook.

#149 Rails Engines

Rails 2.3 brings us much of the same functionality as the Rails Engines plugin. Learn how to embed one application into another in this episode.
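
In practical terms, an engine in Rails 2.3 is a plugin that ships its own app/ directory and routes, which Rails loads alongside the host application. A rough sketch of the layout, with the plugin name and resources made up for illustration:

  # vendor/plugins/blog_engine/           (illustrative name)
  #   app/controllers/posts_controller.rb
  #   app/models/post.rb
  #   app/views/posts/index.html.erb
  #   config/routes.rb
  #
  # vendor/plugins/blog_engine/config/routes.rb
  ActionController::Routing::Routes.draw do |map|
    map.resources :posts
  end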