June 30, 2011: Among Other Things, Me In Texas

This post is by noelrap from Rails Test Prescriptions Blog

Click here to view on the original site: Original Post

So, as threatened on Twitter, I decided to overreact to Vim users by trying out BBEdit for my Rails development. Expect a write up soon, but the first pass is that it’s clearly a very powerful program, but it also clearly was developed in response to a set of needs that are not completely congruent with my needs.

1. Contains Me

I’m very excited to mention that I’ll be doing a day-long training session at Lone Star Ruby Conf. The session is entitled Being Correct is Only a Side Benefit: Improve Your Life with BDD. Here’s how I’m describing the session:

In this workshop, attendees will build a complex program using the strict BDD process. Along the way, they will see how BDD improves developer speed and code quality, learn the five habits of highly successful tests, and discover how to best leverage existing tools to improve their coding life.

It’s going to focus on the BDD process itself, rather than on specific tools, and on the benefits to your code from writing tests first.

You can sign up for the session at the Lone Star page. Please sign up; it's going to be a lot of fun, and we'll all get better at testing together.

2. Ruby Readability Links

Here are a few recent links on readability and Ruby code:

Josh Susser would like to tell you that a meaningful symbol is a better argument than a boolean, while Martin Fowler suggests that it’s even better to have separate methods for each flag value.
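To make the argument concrete, here is a small sketch of the three styles; the method names are invented for illustration, not taken from either post.

```ruby
# Boolean flag: opaque at the call site.
def export(pretty)
  pretty ? 'pretty output' : 'compact output'
end
export(true)  # what does true mean here?

# A meaningful symbol: the call site documents itself.
def export_as(style)
  style == :pretty ? 'pretty output' : 'compact output'
end
export_as(:pretty)

# Fowler's suggestion: a separate method per flag value.
def export_pretty;  'pretty output';  end
def export_compact; 'compact output'; end
export_pretty
```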

Meanwhile, Phil Nash wants to be very clear that he doesn’t like the alternative Ruby 1.9 hash syntax y: 3 as a replacement for :y => 3. I have to admit, I’m not sold on it yet either; it still looks weird to me.
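For anyone who hasn't seen the new syntax side by side with the old, a quick sketch: the two forms build identical hashes, and the 1.9 shorthand only applies to symbol keys.

```ruby
# The classic hash-rocket syntax and the Ruby 1.9 shorthand are equivalent.
old_style = { :y => 3 }
new_style = { y: 3 }    # Ruby 1.9+, symbol keys only
old_style == new_style  # => true
```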

3. Postgres

One of my takeaways from RailsConf this year was the idea that the Ruby community is starting to shift toward PostgreSQL as a database, citing PostgreSQL’s superior performance and stability.

Ruby Inside presents a tutorial from Will Jessop about how to install PostgreSQL on OS X systems. It’s nice, though I’d really like to see the grab-bag of Postgres tools alluded to in the post.

4. Verbosity

Meanwhile, there was another debate going on about Ruby’s verbose mode, including Mislav Marohnic describing how Ruby’s verbose mode is broken, essentially because verbose mode is both verbose and a code lint, when it might be more useful to separate those features.

For his part, Avdi Grimm extends the post and defends verbose mode, describing how it can be helpful and providing a different perspective on some of the warnings.
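To see what the fuss is about, here's a small sketch (my own, not from either post) of one thing verbose mode catches: redefining a method only produces a warning when $VERBOSE is on.

```ruby
require 'stringio'

# Run a block with $VERBOSE enabled and capture anything Ruby warns about.
def capture_warnings
  old_stderr, $stderr = $stderr, StringIO.new
  old_verbose, $VERBOSE = $VERBOSE, true
  yield
  $stderr.string
ensure
  $stderr, $VERBOSE = old_stderr, old_verbose
end

output = capture_warnings do
  def greet; "hi"; end
  def greet; "hello"; end  # silently allowed without -w / $VERBOSE
end
```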

5. Drumkit

Just yesterday, Obtiva’s own Chris Powers presented his nascent JavaScript web framework called DrumKit.js. The goal of DrumKit.js is to provide a framework where as much code as possible can be shared between the server (in this case running Node.js) and the client (running in the browser). So, once the page goes to the client, subsequent requests are handled via Ajax, but the application code remains the same.

It’s a super-cool idea, and I’m looking forward to it getting fleshed out and expanded over time.

Filed under: Uncategorized

Release Machine, linguist, coffeescript, vim, and more

This post is by Jason Seifer and Dan Benjamin from The Ruby Show


In this episode, Peter and Jason take you on a magical journey of new releases, new gems, development best practices, and more.

Check out Peter’s new Ruby training course: Ruby Reloaded. Use the coupon code rubyshow for $50 off.

Continuous Testing with Ruby, Rails, and JavaScript now in print

This post is by Pragmatic Bookshelf from Pragmatic Bookshelf


Continuous Testing with Ruby, Rails, and JavaScript now in print and shipping.

The New Heroku (Part 4 of 4): Erosion-resistance & Explicit Contracts

This post is by Adam from Heroku


In 2006, I wrote Catapult: a Quicksilver-inspired command-line for the web. I deployed it to a VPS (Slicehost), then gave the URL out to a few friends. At some point I stopped using it, but some of my friends remained heavy users. Two years later, I got an email: the site was down.

Logging into the server with ssh, I discovered many small bits of breakage:

  • The app’s Mongrel process had crashed and not restarted.
  • Disk usage was at 100%, due to growth of logfiles and temporary session data.
  • The kernel, ssh, OpenSSL, and Apache needed critical security updates.

The Linux distro had just reached end-of-life, so the security fixes were not available via apt-get. I tried to migrate to a new VPS instance with an updated operating system, but this produced a great deal more breakage: missing Ruby gems, hardcoded filesystem paths in the app which had changed in the new OS, changes in some external tools (like ImageMagick). In short, the app had decayed to a broken state, despite my not having made any changes to the app’s code. What happened?

I had just experienced a powerful and subtle force known as software erosion.

Software Erosion is a Heavy Cost

Wikipedia says software erosion is "slow deterioration of software over time that will eventually lead to it becoming faulty [or] unusable" and, importantly, that "the software does not actually decay, but rather suffers from a lack of being updated with respect to the changing environment in which it resides." (Emphasis added.)

If you’re a developer, you’ve probably built hobby apps, or done small consulting projects, that resulted in apps like Catapult. And you’ve probably experienced the pain of minor upkeep costs over time, or eventual breakage when you stop paying those upkeep costs.

But why does it matter if hobby apps break?

Hobby apps are a microcosm which illustrate the erosion that affects all types of apps. The cost of fighting erosion is highest on production apps — much higher than most developers realize or admit. In startups, where developers tend to handle systems administration, anti-erosion work is a tax on their time that could be spent building features. On more mature projects, dedicated sysadmins spend a huge portion of their time fighting erosion: everything from failed hardware to patching kernels to updating entire OS/distro versions.

Reducing or eliminating the cost of fighting software erosion is of huge value, to both small hobby or prototype apps, and large production apps.

Heroku, the Erosion-resistant Platform

Heroku’s new runtime stack, Celadon Cedar, makes erosion-resistance a first-class concern.

This is not precisely a new feature. Rather, it is a culmination of what we’ve learned over the course of three years of being responsible for the ongoing upkeep of infrastructure supporting 150k apps. While all of our runtime stacks offer erosion-resistance to some degree, Cedar takes it to a new level.

The evidence that Heroku is erosion-resistant can be found in your own Heroku account. If you’re a longtime Heroku user, type heroku apps, find your oldest app, and try visiting it on the web. Even if you haven’t touched it in years, you’ll find that (after a brief warm-up time) it comes up looking exactly as it did the last time you accessed it. Unlike an app running on a VPS or other server-based deploy, the infrastructure on which your app is running has been updated with everything from kernel security updates to major overhauls in the routing and logging infrastructure. The underlying server instances have been destroyed many times over while your app’s processes have been seamlessly moved to new and better homes.

How Does Erosion-resistance Work?

Erosion-resistance is an outcome of strong separation between the app and the infrastructure on which it runs.

In traditional server-based deployments, the app’s sourcecode, config, processes, and logs are deeply entangled with the underlying server setup. The app touches the OS and network infrastructure in a hundred implicit places, from system library versions to hardcoded IP addresses and hostnames. This makes anti-erosion tasks like moving the app to a new cluster of servers a highly manual, time-consuming, and error-prone procedure.

On Heroku, the app and the platform it runs on are strongly separated. Unlike a Linux or BSD distribution, which gets major revisions every six, twelve, or eighteen months, Heroku’s infrastructure is improving continuously. We’re making things faster, more secure, more robust against failure. We make these changes on nearly a daily basis, and we can do so with the confidence that this will not disturb running apps. Developers on those apps need not know or care about the infrastructure changes happening beneath their feet.

How do we achieve strong separation of app and infrastructure? This leads us to the core principle that underlies erosion-resistance and much of the value of the platform deployment model: explicit contracts.

Explicit Contract Between the App and the Platform

Preventing breakage isn’t a matter of never changing anything, but of changing in ways that don’t break explicit contracts between the application and the platform. Explicit contracts are how we can achieve almost 100% orthogonality between the app (allowing developers to change their apps with complete freedom) and the platform (allowing Heroku to change the infrastructure with almost complete freedom). As long as both parties adhere to the contract, both have complete autonomy within their respective realms.

Here are some of the contracts between your app running on the Cedar stack and the Heroku platform infrastructure:

  • Dependency management – You declare the libraries your app depends on, completely and exactly, as part of your codebase. The platform can then install these libraries at build time. In Ruby, this is accomplished with Gem Bundler and Gemfile. In Node.js, this is accomplished with NPM and package.json.
  • Procfile – You declare how your app is run with Procfile, and run it locally with Foreman. The platform can then determine how to run your app and how to scale out when you request it.
  • Web process binds to $PORT – Your web process binds to the port supplied in the environment and waits for HTTP requests. The platform thus knows where to send HTTP requests bound for your app’s hostname.
  • stdout for logs – Your app prints log messages to standard output, rather than to framework-specific or app-specific log paths which would be difficult or impossible for the platform to guess reliably. The platform can then route those log streams through to a central, aggregated location with Logplex.
  • Resource handles in the environment – Your app reads config for backing services such as the database, memcached, or the outgoing SMTP server from environment variables (e.g. DATABASE_URL), rather than hardcoded constants or config files. This allows the platform to easily connect add-on resources (when you run heroku addons:add) without needing to touch your code.
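As a concrete sketch of the last contract, here is how an app might read its database location from the environment. `database_config` is a hypothetical helper, not a Heroku API: it turns a DATABASE_URL-style handle into connection parameters instead of hardcoding them in a checked-in config file.

```ruby
require 'uri'

# Parse a DATABASE_URL-style resource handle into connection parameters.
def database_config(url)
  uri = URI.parse(url)
  {
    :adapter  => uri.scheme == 'postgres' ? 'postgresql' : uri.scheme,
    :host     => uri.host,
    :port     => uri.port,
    :username => uri.user,
    :password => uri.password,
    :database => uri.path.sub(%r{\A/}, '')
  }
end

# In a deployed app you would pass ENV['DATABASE_URL']; the URL below is a
# made-up example for illustration.
config = database_config('postgres://user:secret@db.example.com:5432/myapp')
```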

These contracts are not only explicit, but designed in such a way that they shouldn’t have to change very often.

Furthermore, these contracts are based on language-specific standards (e.g., Bundler/NPM) or time-proven unix standards (e.g., port binding, environment variables) whenever possible. Well-written apps are likely already using these contracts or some minor variation on them.

An additional concern when designing contracts is avoiding designs that are Heroku-specific in any way, as that would result in vendor lock-in. We invest heavily in ensuring portability for your apps and data, as it’s one of our core principles.

Properly designed contracts offer not only strong separation between app and platform, but also easy portability between platforms, or even between a platform and a server-based deployment.


Erosion is a problem; erosion-resistance is the solution. Explicit contracts are the way to get there.

Heroku is committed to keeping apps deployed to our platform running, which means we’re fighting erosion on your behalf. This saves you and your development team from the substantial costs of the anti-erosion tax. Cedar is our most erosion-resistant stack yet, and we look forward to seeing it stand the test of time.

Other Posts From This Series

Counters Everywhere

This post is by John Nunemaker from RailsTips by John Nunemaker


Last week, coming off hernia surgery number two of the year (and hopefully the last for a while), I eased back into development by working on Gaug.es.

In three days, I cranked out tracking of three new features. The only reason this was possible is because I have tried, failed, and succeeded on repeat at storing various stats efficiently in Mongo.

While I will be using Mongo as the examples for this article, most of it could very easily be applied to any data store that supports incrementing numbers.

How are you going to use the data?

The great thing about the boom of new data stores is the flexibility that most provide regarding storage models. Whereas SQL is about normalizing the storage of data and then flexibly querying it, NoSQL is about thinking how you will query data and then flexibly storing it.

This flexibility is great, but it means if you do not fully understand how you will be accessing data, you can really muck things up. If, on the other hand, you do understand your data and how it is accessed, you can do some really fun stuff.

So how do we access data on Gaug.es? Depends on the feature (views, browsers, platforms, screen resolutions, content, referrers, etc.), but it can mostly be broken down into these points:

  • Time frame resolution. What resolution is needed? To the month? Day? Hour? Which piece of content was viewed the most matters on a per day basis, but which browser is winning the war only matters per month, or maybe even over several months.
  • Number of variations. Browsers are a finite set of variations (Chrome, Firefox, Safari, IE, Opera, Other). Content is completely the opposite, as it varies drastically from website to website.

Knowing that resolution and variation drive how we need to present data is really important.

One document to rule them all

Due to the amount of data a hosted stats service has to deal with, most store each hit and then process them into reports at intervals. This leads to delays between something happening on your site and you finding out about it, as reports can be hours or even a day behind. This always bothered me and is why I am working really hard at making Gaug.es completely live.

Ideally, you should be able to check stats anytime and know exactly what just happened. Email newsletter? Watch the traffic pour in a few minutes after you hit send. Post to your blog? See how quickly people pick it up on Twitter and in feed readers.

In order to provide access to data in real-time, we have to store and retrieve our data differently. Instead of storing every hit and all the details and then processing those hits, we make decisions and build reports as each hit comes in.

Resolution and Variations

What kind of decisions? Exactly what I mentioned above.

First, we determine what resolution a feature needs. Top content and referrers need to be stored per day for at least a month. After that, per-month resolution is probably good enough.

Browsers and screen sizes are far less interesting on a per day basis. Typically, these are only used a few times a year to make decisions such as dropping IE 6 support or deciding to target 1024×768 instead of 800×600 (remember that back in the day?).

Second, we determine the variations. Content and referrers vary greatly from site to site, but we can choose which browsers and screen dimensions to track. For example, with browsers, we picked Chrome, Safari, Firefox, Opera, and IE, and then we lump the rest of the browsers into Other. Do I really care how many people visit RailsTips in Konqueror? Nope, so why even show it?

The same goes for platforms. We track Mac, Windows, Linux, iPhone, iPad, iPod, Android, Blackberry, and Other.

Document Model

Knowing that we only have 6 variations of browsers and 9 variations of platforms to track, and that the list is not likely to grow much, I store all of them in one document per month per site. This means showing someone browser and/or platform data for an entire month is one query for a very tiny document that looks like this:

  {
    '_id' => 'site_id:month',
    'browsers' => {
      'safari' => {
        '5-0' => 5,
        '4-1' => 2,
      },
      'ie' => {
        '9-0' => 5,
        '8-0' => 2,
        '7-0' => 1,
        '6-0' => 1,
      },
    },
    'platforms' => {
      'macintosh' => 10,
      'windows'   => 5,
      'linux'     => 2,
    },
  }
When a track request comes in, I parse the user agent to get the browser, version, and platform. We only store the major and minor parts of the version. Who cares about the full build number? What matters is 12.0. This means we end up with 5-10 versions per month per browser instead of 50 or 100. Also, note that Mongo does not allow dots in key names, so I store the dot as a hyphen, thus 12-0.
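That version handling might look something like the sketch below; `version_key` is my own name for it, not code from Gaug.es.

```ruby
# Keep only the major and minor parts of a version string, and store the
# dot as a hyphen, since Mongo keys cannot contain dots.
def version_key(full_version)
  major, minor = full_version.to_s.split('.')
  "#{major}-#{minor || 0}"
end

version_key('12.0.742.112')  # => "12-0"
```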

I then do a single query on that document to increment the platform and browser/version.

query  = {'_id' => "#{hit.site_id}:#{hit.month}"}
update = {'$inc' => {
  "b.#{browser_name}.#{browser_version}" => 1,
  "p.#{platform}" => 1,
}}
collection(hit.created_on).update(query, update, :upsert => true)

b and p are short for browser and platform. No need to waste space. The dot syntax in the strings in the update hash tells Mongo to reach into the document and increment a value for a key inside of a hash.

Also, the _id (or primary key) of the document is the site id and the month since the two together are always unique. There is no need to store a BSON ObjectId or incrementing number, as the data is always accessed for a given site and month. _id is automatically indexed in Mongo and it is the only thing that we query on, so there is no need for secondary indexes.

Range based partitioning

I also do a bit of range based partitioning at the collection level (i.e. technology.2011, technology.2012). That is why I pass the date of the hit to the collection method. The collection that stores the browser and platform information is split by year. Looking back, that may be unnecessary, but it hurts nothing. It means that a given collection stores at most (number of sites × 12) documents.

Mongo creates collections on the fly, so when a new year comes along, the new collection will be created automatically. As years go by, we can create smaller summary documents and drop the old collections or move them to another physical server (which is often easier and more performant than removing old data from an active collection).
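The year-based split can be sketched like this; the method name is my own invention, but it shows how the hit's date picks the collection so a new year automatically lands in a fresh collection that Mongo creates on the fly.

```ruby
require 'date'

# Derive the partitioned collection name from the hit's date.
def technology_collection_name(date)
  "technology.#{date.year}"
end

technology_collection_name(Date.new(2011, 6, 30))  # => "technology.2011"
```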

Because I know that the number of variations is small (< 100-ish), I know that the overall document size is not really going to grow and that it will always efficiently fly across the wire. When you have relatively controllable data like browsers/platforms, storing it all in one document works great.

Closing Thoughts

As I said before, this article uses Mongo as an example. If you wanted to use Redis, Membase, or something else with atomic incrementing, you could just have one key per month per site per browser.
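A hypothetical key layout for that Redis-style variant might look like this; the naming scheme is my own invention, not from Gaug.es.

```ruby
# One atomic counter per site, month, and browser/version.
def browser_counter_key(site_id, month, browser, version)
  "sites:#{site_id}:#{month}:browsers:#{browser}:#{version}"
end

# With a real Redis client, a hit would then be recorded with:
#   redis.incr(browser_counter_key(hit.site_id, hit.month, browser, version))
browser_counter_key(42, '2011-06', 'safari', '5-0')
# => "sites:42:2011-06:browsers:safari:5-0"
```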

Building reports on the fly through incrementing counters means:

  • less storage, as you do not need the raw data
  • less RAM, as there are fewer secondary indexes
  • real-time querying is no problem, as you do not need to generate reports: the data is the report

It definitely involves more thought up front, but several areas of Gaug.es use this pattern and it is working great. I should also note that it increases the number of writes. Creating the reports on the fly means 7 or 8 writes for each “view” instead of 1.

The trade off is that reading the data is faster and avoids the lag caused by having to post-process it. I can see a day in the future where having all these writes will force me to find a different solution, but that is a ways off.

What do you do when you cannot limit the number of variations? I’ll leave that for next time.

Oh, and if you have not signed up for Gaug.es yet, what are you waiting on? Do it!

iaWriter for Mac

This post is by noelrap from Rails Test Prescriptions Blog


I’ll say up front that I’m skeptical of claims of “distraction-free” writing environments, especially the super-precious, over-the-top ones, for all the reasons that Merlin Mann has laid out in various easily findable places on the internet.

That said, I think I really like iaWriter for Mac. And I say that even though I’ve basically abandoned iaWriter/iPad in favor of somewhat more functional options, like Nebulous and Textastic. But on the Mac, I already have more functional options, and although I was originally skeptical of when I might use iaWriter, I think I have found a niche for it in quick writing tasks like blogging and taking notes.

Having used iaWriter for about a week (this post, plus the last few daily posts, plus some other note kind of things), I think that the “distraction free” frame is a little misleading. iaWriter is not distraction free in the sense that a white room is distraction free. It’s distraction free in the way that a race car is.

Forgive me for this metaphor; I’m not a big car guy. It feels like every part of iaWriter has been precisely engineered to reduce friction and become the fastest possible path between your brain and a text file. And if that means that some luxuries have been jettisoned, like cup holders, or comfortable seats, or the ability to change fonts… well, nobody expects a race car to take them to the grocery store.

Visual impressions

It really is a beautiful app. I still really love the Nitti Light font, and although it’s minimalist, the type is crystal clear and easy to read. The app loads almost astonishingly fast; the background is subtly textured so you know it’s there, but doesn’t call attention to itself. The cursor is a big thick blue bar, easily visible but not garish. The window drag bar is small, and fades away when you start to type. There are no preferences, and only a few toggleable menu items.

The width of the page is set to 64 characters, and you can’t change it. There’s a nice margin around the text, but if you go to full-screen mode, it looks goofy — a huge margin around a strip of text. (Actually, I think one of the reasons it feels like I can get text down on the page so quickly is that the 64-character limit is smaller than my normal default of 80, which means that I run through lines faster than I otherwise would.) The font is big — I normally type with a big font, and this feels natural to me. Again, easy to read. When you hit the bottom of the screen, the cursor and the writing space snap back up a bit (in focus mode, they snap back to the middle of the page, which I really wish it would do in regular mode as well).

There are keyboard shortcuts to move forward and back one sentence, which I guess is the Mac analogue of the iPad version’s special keyboard navigation. And there is focus mode, which fades out everything other than the current sentence. I don’t know about that one; I think it pushes things too far, but if it works for you, don’t listen to me.

iaWriter has a kind of semi-WYSIWYG mode based on Markdown, where things that are marked as bold or italic in Markdown will have that font treatment applied, which is nice. Also, when iaWriter sees a line that starts with a Markdown decoration like # for headers or * for bullet lists, it will put the decoration in the left margin so that the text remains lined up with other text. The effect is interesting, making it easy to scan the document for headers, but the fact that all header levels get the same font treatment is a little odd.

What I can say is that using it for five minutes made me want to find something to write so that I could use it some more, and even try to figure out a way to work it into my blogging workflow.

Some oddities

New files open in a window that is only a few lines high, which I think is a little strange.

It remains irritating that you can only open files of type .txt or .markdown. I’d love to try some others.

It’s a major irritation that it does not remember what files were open when you reopen the app. Nor, somewhat astonishingly, does it remember the settings for spell and grammar check. This will apparently be fixed in a future release.

A couple of spacing glitches: it fades the last line to grey at the bottom of the screen, which is kind of cool, except when you are actually typing on that line, in which case it still fades the line and it’s a pain. The docs claim that it maintains indentation if you indent a little, as for a Markdown code block, but I’ve found that intermittent in practice.

Filed under: Typing

#272 Markdown with Redcarpet

This post is by Ryan Bates from RailsCasts


Redcarpet is an easy-to-use gem which interprets Markdown. Here I show how to customize it and add syntax highlighting through Pygments and Albino.

The New Heroku (Part 3 of 4): Visibility & Introspection

This post is by Heroku from Heroku


Visibility and introspection capabilities are critical for managing and debugging real-world applications. But cloud platforms are often lacking when it comes to visibility. The magical black box is great when it "just works," but not so great when your app breaks and you can’t look inside the box.

Standard introspection tools used in server-based deployments — such as ssh, ps aux, top, tail -f logfile, iostat — aren’t valid in a multi-tenant cloud environment. We need new tools, new approaches. Heroku’s new runtime stack, Celadon Cedar, includes three powerful tools for visibility and introspection: heroku ps, heroku logs, and heroku releases. This post will examine each.

But first, a story.

The Deploy Gone Wrong (A Parable)

The FooBaz Co development team practices continuous deployment, pushing new code to their production app running on Heroku several times a day. Today, happy-go-lucky coder Ned has done a small refactoring which pulls some duplicated code out into a new library, called Utils. The unit tests are all green, and the app looks good running under Foreman on his local workstation; so Ned pushes his changes up to production.

$ git push heroku
-----> Heroku receiving push
-----> Launching... done, v74
       http://foobaz-production.herokuapp.com deployed to Heroku

Ned likes to use the introspection tools at his disposal to check the health of the app following any deploy. He starts by examining the app’s running processes with heroku ps:

$ heroku ps
Process       State               Command
------------  ------------------  ------------------------------
web.1         crashed for 4s      bundle exec thin start -p $PORT -e..
web.2         crashed for 2s      bundle exec thin start -p $PORT -e..
worker.1      crashed for 3s      bundle exec rake jobs:work
worker.2      crashed for 2s      bundle exec rake jobs:work

Disaster! All the processes in the process formation have crashed. Ned has a red alert on his hands — the production site is down.

Recovering Quickly

Adrenaline pumping, Ned now uses the second introspection tool, heroku logs:

$ heroku logs --ps web.1
2011-06-19T08:35:19+00:00 heroku[web.1]: Starting process with command: `bundle exec thin start -p 38180 -e production`
2011-06-19T08:35:21+00:00 app[web.1]: /app/config/initializers/load_system.rb:12:in `<top (required)>': uninitialized constant Utils (NameError)
2011-06-19T08:35:21+00:00 app[web.1]:   from /app/vendor/bundle/ruby/1.9.1/gems/railties-3.0.7.rc1/lib/rails/engine.rb:201:in `block (2 levels) in <class:Engine>'
2011-06-19T08:35:21+00:00 app[web.1]:   from /app/vendor/bundle/ruby/1.9.1/gems/railties-3.0.7.rc1/lib/rails/engine.rb:200:in `each'
2011-06-19T08:35:21+00:00 app[web.1]:   from /app/vendor/bundle/ruby/1.9.1/gems/railties-3.0.7.rc1/lib/rails/engine.rb:200:in `block in <class:Engine>'

All processes crashed trying to start up, because the Utils module is missing. Ned forgot to add lib/utils.rb to Git.

Ned could press forward by adding the file and pushing again. But the wise move here is to roll back to a known good state, and then think about the fix.

Heroku’s third visibility tool, releases, tracks deploys and other changes. It includes a powerful undo capability: rollback. So Ned takes a deep breath, then runs:

$ heroku rollback
-----> Rolling back to v73... done, v75

Ned checks heroku ps:

$ heroku ps
Process       State               Command
------------  ------------------  ------------------------------
web.1         up for 1s           bundle exec thin start -p $PORT -e..
web.2         up for 1s           bundle exec thin start -p $PORT -e..
worker.1      starting for 3s     bundle exec rake jobs:work
worker.2      starting for 2s     bundle exec rake jobs:work

The app’s processes are successfully restarting in their previous state. Crisis averted, Ned can now take his time examining what went wrong, and how to fix it, before deploying again.

The Fix

Ned investigates locally and confirms with git status that he forgot to add the file. He adds the file to Git and commits, this time double-checking his work.

He pushes to Heroku again, then uses all three introspection techniques to confirm the newly-deployed app is healthy:

$ heroku ps
Process       State               Command
------------  ------------------  ------------------------------
web.1         up for 2s           bundle exec thin start -p $PORT -e..
web.2         up for 2s           bundle exec thin start -p $PORT -e..
worker.1      up for 1s           bundle exec rake jobs:work
worker.2      up for 2s           bundle exec rake jobs:work

$ heroku logs
2011-06-19T08:39:17+00:00 heroku[web.1]: Starting process with command: `bundle exec thin start -p 56320 -e production`
2011-06-19T08:39:19+00:00 app[web.1]: >> Using rack adapter
2011-06-19T08:39:19+00:00 app[web.1]: >> Thin web server (v1.2.11 codename Bat-Shit Crazy)
2011-06-19T08:39:19+00:00 app[web.1]: >> Maximum connections set to 1024
2011-06-19T08:39:19+00:00 app[web.1]: >> Listening on 0.0.0.0:56320, CTRL+C to stop

$ heroku releases
Rel   Change                          By                    When
----  ----------------------          ----------            ----------
v76   Deploy d706b4a                  ned@foobazco.com      1 minute ago
v75   Rollback to v73                 ned@foobazco.com      14 minutes ago
v74   Deploy 20a5742                  ned@foobazco.com      15 minutes ago
v73   Deploy df7bb82                  rick@foobazco.com     2 hours ago

Golden. Ned breathes a sigh of relief, and starts composing an email to his team about how they should really think about using a staging app as protection against this kind of problem in the future.

Now that we’ve seen each of these three visibility tools in action, let’s look at each in more depth.

Visibility Tool #1: ps

heroku ps is a spiritual sister to the unix ps command, and a natural extension of the process model approach to running apps. But where unix’s ps is for a single machine, heroku ps spans all of the app’s processes on the distributed execution environment of the dyno manifold.

A clever trick here is to use the watch command for a realtime display of your app’s process status:

$ watch heroku ps
Every 2.0s: heroku ps                          Sun Jun 19 01:44:55 2011

Process       State               Command
------------  ------------------  ------------------------------
web.1         up for 16h          bundle exec rackup -p $PORT -E $RA..

Leave this running in a terminal as you push code, scale the process formation, change config vars, or add add-ons, and you’ll get a first-hand look at how the Heroku process manager handles your processes.

Dev Center: ps command

Visibility Tool #2: Logs

In server-based deploys, logs often exist as files on disk, which can lead to us thinking of logs as files (hence "logfiles"). A better conceptual model is:

Logs are a stream of time-ordered events aggregated from the output streams of all the app’s running processes, system components, and backing services.

Heroku’s Logplex routes log streams from all of these diverse sources into a single channel, providing the foundation for truly comprehensive logging. Heroku aggregates three categories of logs for your app:

  • App logs – Output from your app, such as: everything you’d normally expect to see in Rails’ production.log, output from Thin, output from Delayed Job.
  • System logs – Messages about actions taken by the Heroku platform infrastructure on behalf of your app, such as: restarting a crashed process, idling or unidling a web dyno, or serving an error page due to a problem in your app.
  • API logs – Messages about administrative actions taken by you and other developers working on your app, such as: deploying new code, scaling the process formation, or toggling maintenance mode.
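Conceptually, Logplex’s job is a time-ordered merge of those three source streams into one channel. A stdlib-only Ruby sketch of that merge (illustrative only; the event data and structure here are invented, and Logplex itself is a far more involved routing system):

```ruby
require "time"

# Hypothetical events from the three sources. Each event is
# [timestamp, source, message]; the data below is invented.
app_events = [
  ["2011-06-18T08:21:40+00:00", "app[web.1]", "Started GET \"/\""]
]
system_events = [
  ["2011-06-18T08:21:38+00:00", "heroku[web.1]", "State changed from starting to up"]
]
api_events = [
  ["2011-06-18T08:21:37+00:00", "heroku[api]", "Set maintenance mode on"]
]

# The aggregated channel is just the union of every stream,
# ordered by event time.
channel = (app_events + system_events + api_events)
  .sort_by { |timestamp, _source, _message| Time.parse(timestamp) }

channel.each { |timestamp, source, message| puts "#{timestamp} #{source}: #{message}" }
```

Filtering, in this picture, is nothing more than selecting a subset of sources from the merged channel.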

Filtering down to just API logs is a great way to see what you and other collaborators on the app have been up to:

$ heroku logs --source heroku --ps api
2011-06-18T08:21:37+00:00 heroku[api]: Set maintenance mode on by kate@foobazco.com
2011-06-18T08:21:39+00:00 heroku[api]: Config add ADMIN_PASSWORD by kate@foobazco.com
2011-06-18T08:21:39+00:00 heroku[api]: Release v4 created by kate@foobazco.com
2011-06-18T08:21:43+00:00 heroku[api]: Scale to web=4 by kate@foobazco.com
2011-06-18T08:22:01+00:00 heroku[api]: Set maintenance mode off by kate@foobazco.com

If your teammates are actively performing administrative actions on the app, running the above command with the additional argument of --tail will let you watch the events unfold as they happen. Add watch heroku ps and watch heroku releases in terminals arranged so all three are visible at once, and you’ve got a mini mission control for your app.

Dev Center: Logging

Visibility Tool #3: Releases

Releases are a history of changes to your app. New releases are created whenever you deploy new code, change config vars, or change add-ons. Each release stores the full state of the code and config, in addition to audit information such as who made the change and a timestamp.

This is the release history following Ned’s bad deploy and subsequent recovery:

$ heroku releases
Rel   Change                          By                    When
----  ----------------------          ----------            ----------
v76   Deploy d706b4a                  ned@foobazco.com      1 minute ago
v75   Rollback to v73                 ned@foobazco.com      14 minutes ago
v74   Deploy 20a5742                  ned@foobazco.com      15 minutes ago
v73   Deploy df7bb82                  rick@foobazco.com     2 hours ago

Starting from the oldest release at the bottom:

  • v73 was a code deploy by a developer named Rick from earlier today. This one ran without problems for about two hours.
  • v74 was Ned’s deploy which was missing a file.
  • v75 was the rollback, which Ned ran just moments after the bad deploy. This is an exact copy of v73 (more on this in a moment).
  • v76 was a deploy of the fixed code. Ned was able to spend some time (about ten minutes) double-checking it; the rollback saved him from needing to rush out a fix under pressure.

The top release is always the currently running one, so in this case, we see that v76 is current and was deployed just one minute ago.

More Than Just An Audit Trail

Releases aren’t just a history of what happened; they are a fundamental unit in the machinery of the Heroku platform. The release history is the system of record: each release is the canonical source of information about the code and config used to run your app’s processes. This is one way in which the Heroku platform ensures that every process in the app’s process formation runs the same version of code and config.

Furthermore, releases are an append-only ledger, a concept taken from financial accounting and applied to transactional systems. A mistake in the ledger can’t be erased; it can only be corrected by a new entry. This is why heroku rollback creates a new release, which is an exact copy of the code and config of a previous release.
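The ledger behavior is easy to model. Here’s a small, hypothetical Ruby sketch (class and method names are invented, not platform internals) showing why a rollback grows the history instead of rewriting it:

```ruby
# A minimal append-only release ledger. Entries are never mutated or
# removed; a rollback appends a copy of an earlier release's code and
# config as a brand-new release.
class ReleaseLedger
  Release = Struct.new(:version, :code, :config, :note)

  def initialize
    @releases = []
  end

  def deploy(code, config, note)
    @releases << Release.new("v#{@releases.size + 1}", code, config, note)
    current
  end

  def rollback_to(version)
    target = @releases.find { |r| r.version == version }
    deploy(target.code, target.config, "Rollback to #{version}")
  end

  def current
    @releases.last
  end

  def history
    @releases.reverse # newest first, like `heroku releases`
  end
end

ledger = ReleaseLedger.new
ledger.deploy("df7bb82", { "ADMIN" => "rick" }, "Deploy df7bb82")
ledger.deploy("20a5742", { "ADMIN" => "rick" }, "Deploy 20a5742") # the bad deploy
ledger.rollback_to("v1")

ledger.current.code  # => "df7bb82" (back on the good code)
ledger.history.size  # => 3 (nothing was erased)
```

The rollback is just another deploy whose contents happen to come from an earlier entry, which is exactly why Ned’s v75 shows up in the history alongside the bad v74.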

We’re striving to make releases front-and-center in the output from any command that creates a new release. A release sequence number, such as v10, appears any time you create a new release.

Deploying new code:

$ git push heroku master
-----> Launching... done, v10

Setting a config var:

$ heroku config:add FOO=baz
Adding config vars:
  FOO => baz
Restarting app... done, v11

Rolling back:

$ heroku rollback
-----> Rolling back to v10... done, v12

Adding an add-on:

$ heroku addons:add memcache
-----> Adding memcache to strong-fire-938... done, v13 (free)

And as you’d expect, each of the releases above are now listed in heroku releases:

$ heroku releases
Rel   Change                          By                    When
----  ----------------------          ----------            ----------
v13   Add-on add memcache:5mb         kate@foobazco.com     16 seconds ago
v12   Rollback to v10                 kate@foobazco.com     1 minute ago
v11   Config add FOO                  kate@foobazco.com     1 minute ago
v10   Deploy 20a5742                  kate@foobazco.com     11 minutes ago

Releases and rollback are powerful tools for introspection and management. We encourage all developers using Heroku to get familiar with them.

Dev Center: Releases

The Future

Logging, releases, and ps provide a new level of visibility and introspection on the Heroku cloud application platform. But we’re not done yet.

We know, for example, that long-term archival, search, and analytics capability for logs is crucial for production apps, particularly at large scale. We’re working on improvements to Logplex’s storage and retrieval, as well as capabilities for add-on providers to register themselves as log drains, opening up your app to the world of syslog-as-a-service providers and log analytics tools.

More generally, we’re striving to increase visibility across the board, both by improving our current introspection tools and by creating new ones. Visibility may initially seem like a minor concern, but as we’ve seen in this post, it has deep implications for the Heroku cloud application platform, and for you as a developer managing your production apps.

June 23, 2011: Distributed Magic Control

This post is by noelrap from Rails Test Prescriptions Blog

Click here to view on the original site: Original Post

1. Today’s News: Github for Mac

Odds are you heard this one already, but the fine folks at GitHub announced a Mac desktop client. It differs from, say, GitX in that it attempts to be a front end to your entire GitHub account rather than one particular repo.

I haven’t used it a ton yet, but a couple of quick impressions:

  • I think we can now definitively say that Tweetie and Loren Brichter are to the current set of Mac applications what Delicious Library was to the batch a few years ago: the source of a widely used design aesthetic.
  • It’s got a nice set of branching features. The one thing I’m really missing is a way to browse the actual current state of the files in the repo, though I guess you can always go to GitHub itself for that information. It feels a bit feature-light overall.
  • I’m guessing the main users of this initially will be team members who aren’t commonly on the command line, but who need current code, like designers. (Though I do use GitX a fair amount to visualize history, and might use this in its place for some things). The merge tools are interesting, I’ll probably try them once to see what they are like.

2. JavaScript Gripes

If you think the main problem with this blog is that I don’t link to enough cranky rants about JavaScript, here’s one by Fredrik Holmström, of the IronJS project. The strong claim is this:

my point of view after having developed IronJS is that there are a couple of critical problems with JavaScript that prevents it from ever being a viable alternative as development platform for server application development.

I suspect the gripes themselves will be broadly familiar to JavaScript fans: lack of namespace support, crazy language design choices, lots of runtimes. It’s nicely ranted, though points off for using the comparison between JavaScript: The Definitive Guide and JavaScript: The Good Parts; that’s kind of hacky.

3. jQuery Mobile Goes Beta

jQuery Mobile came to my attention via Obtiva apprentice Carl Thuringer. It’s a cross-platform framework intended to simplify web applications targeted at mobile browsers using HTML5 and JavaScript. It looks really nice, and they just announced beta 1.

4. Pottermore

Also nearly breaking news about another RailsRx obsession, ebooks. According to multiple sources, J. K. Rowling’s new Pottermore site will be the curated official fan site she’s always wanted. Also, Rowling will apparently self-publish cross-platform ebooks of the Potter series.

This is interesting for a couple of reasons, not least of which is that it’s another blow to the long-standing model where publishers and labels used bestsellers to subsidize everybody else. As far as I can tell, nobody has mentioned what she’s going to price the books at, but it seems like her overhead costs per-book at this point are rather low. I doubt she will, but it’d be interesting if she tried to break the current price structure by hitting a $4.99 point. I suspect she’s more likely to do a middle ground of $9.99.

5. Soccer Stats

And, as a longtime baseball stat nut, I found this article about new statistics taking over soccer interesting. One big flaw in the new soccer stats is obviously that it’s nearly impossible for the casual watcher to track them, since they measure things like how much distance each player runs at top speed. Still, I like the look at how you even begin to measure a complicated system like this, and how you determine what’s important to look at.

Filed under: Git, Harry Potter, JQuery

Designed for Use: Create Usable Interfaces for Applications and the Web now in print

This post is by Pragmatic Bookshelf from Pragmatic Bookshelf

Click here to view on the original site: Original Post

Designed for Use: Create Usable Interfaces for Applications and the Web now in print

The New Heroku (Part 2 of 4): Node.js & New HTTP Capabilities

This post is by Heroku from Heroku

Click here to view on the original site: Original Post

Node.js has gotten its share of press in the past year, achieving a level of attention some might call hype. Among its touted benefits are performance, high concurrency via a single-threaded event loop, and a parity between client-side and server-side programming languages which offers the Promethean opportunity of making server-side programming accessible to front-end developers.

But what is Node.js, exactly? It’s not a programming language – it’s simply Javascript. It’s not a VM: it uses the V8 Javascript engine, the same one used by the Chrome web browser. Nor is it simply a set of libraries for Javascript. Nor is it a web framework, though the way it is sweeping the early-adopter developer community is reminiscent of Rails circa 2005.

Most would cite Node’s asynchronous programming model, driven by the reactor pattern, as its key differentiator from other ways of writing server-side code for the web. But even this is not new or unique: Twisted for Python has been around for a decade.

So what makes Node special?

The Right Technology at the Right Time

One part of the reason Node.js has captured the hearts and minds of alpha geeks in web development is because it offers all the things described above – a language, a fast and robust VM, a set of compatible and well-documented libraries, and an async framework – all bundled together into a single, coherent package.

The charismatic leadership of Ryan Dahl may be another part of the answer: strong figureheads are often key elements in driving explosive developer community growth (see: Linux/Linus Torvalds, Rails/David Heinemeier Hansson, Ruby/Matz, Python/Guido van Rossum, Redis/Salvatore Sanfilippo, CouchDB/Damien Katz).

But the primary reason is even simpler: Node.js is the right technology at the right time. Demand for asynchronous capabilities in web apps is being driven by what has become known as "the realtime web." Another label for "realtime" (which has a conflicting definition from computer science) is "evented."

Events, Not Polling

The evented web means users getting updates pushed to them as new data becomes available, rather than having to poll for it. Polling is the software equivalent of nagging – not fun or efficient for either end of the transaction.
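The contrast can be sketched in plain Ruby (a contrived, stdlib-only illustration; real evented servers run callbacks from an event loop rather than invoking them directly): the polling consumer asks repeatedly and usually gets nothing back, while the evented consumer registers a callback once and only runs when data actually arrives.

```ruby
# Polling: the consumer asks over and over, even when nothing is new.
class PollingFeed
  def initialize
    @items = []
  end

  def publish(item)
    @items << item
  end

  def poll
    @items.shift # usually nil: most polls come back empty
  end
end

# Evented: the consumer registers interest once; the source pushes.
class EventedFeed
  def initialize
    @callbacks = []
  end

  def on_item(&block)
    @callbacks << block
  end

  def publish(item)
    @callbacks.each { |callback| callback.call(item) }
  end
end

received = []
feed = EventedFeed.new
feed.on_item { |item| received << item } # register once
feed.publish("goal scored")              # pushed the moment it happens
received # => ["goal scored"]
```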

The rise of an event-driven web makes the appearance of Node.js timely, because Node.js has evented baked in at a level that no other programming language or framework can match. And Node.js being Javascript – a language born of the web – closes the deal.

Unix daemons (such as the famously-fast Nginx) often use evented programming to maximize concurrency and robustness. The seminal C10K article explores the use of system calls such as kqueue and epoll for evented programming. In scripting languages, evented frameworks such as Twisted for Python and EventMachine for Ruby enjoy popularity.

EventMachine is a fascinating case study to compare against Node.js. Because EventMachine’s reactor takes over the entire execution thread (via EM.run), most vanilla Ruby programs require a rewrite to use it, and the developer is cut off from the vast collection of synchronous libraries found in the Ruby ecosystem. A whole slew of em-* libraries have appeared, effectively creating a sub-language of Ruby. The partial incompatibility between Ruby and em-Ruby is a source of confusion for developers new to evented programming.

This is where Node.js succeeds: pulling together the language, the async framework, and a whole universe of libraries guaranteed to work in the evented model.

Node.js on Heroku/Cedar

Heroku’s new runtime stack, Celadon Cedar, includes first-class support for Node.js apps. A quick taste:


var app = require('express').createServer();

app.get('/', function(request, response) {
  response.send('Hello World!');
});

app.listen(process.env.PORT || 3000);

NPM has become the community standard for dependency management in Node.js. Writing a package.json declares your app’s dependencies and tells Heroku your app is a Node.js app:


  "name": "node-example",
  "version": "0.0.1",
  "dependencies": {
    "express": "2.2.0"

Read the full Node.js on Heroku/Cedar quickstart guide to dive in and try it for yourself.

New HTTP Routing Stack: herokuapp.com

Because the fate of Node.js is entwined so directly with that of the next-generation HTTP techniques (such as chunked responses and long polling), Cedar comes bundled with a new HTTP stack. The new HTTP stack, which runs on the herokuapp.com domain, provides a more direct connection between the end user’s browser (or other HTTP client) and the app dyno which is serving the request.

Here are two examples of apps which use advanced HTTP features for evented apps:

One caveat: we’ve made the difficult decision to hold off on supporting WebSockets, as the protocol is still in flux and not uniformly supported in modern standards-compliant browsers. Once the WebSockets protocol matures further, we’ll be delighted to support it in the herokuapp.com HTTP stack. In the meantime, use the Socket.IO library, which falls back to the best available transport mechanism given the HTTP stack and browser capabilities for a given HTTP request.

The Future

Node.js is still under rapid development, and staying abreast of new releases can be a challenge. Heroku must balance the goals of being a curated, erosion-resistant platform against keeping pace with extremely active developer communities such as Ruby, Rails, and Node.js. We look forward to applying all we’ve learned so far to the highly dynamic world of Node.js.

Cucumber 1.0 and lots of GitHub projects

This post is by Jason Seifer and Dan Benjamin from The Ruby Show

Click here to view on the original site: Original Post

In this episode, Peter and Jason bring you the latest news with updates to your favorite gems including Nokogiri and Cucumber. They also cover a ton of new and interesting projects.

June 21, 2011: In Brightest Day

This post is by noelrap from Rails Test Prescriptions Blog

Click here to view on the original site: Original Post

I’d like to pretend there was some thread connecting these things, but you and I both know there just isn’t…

1. Actual News: Cucumber 1.0

Starting with something approaching a real news story, Cucumber 1.0 was released today. According to that post from Aslak Hellesøy, the project has had nearly 750,000 downloads. Oh, and there’s a native JavaScript port in progress. I didn’t know that.

Anyway, Cucumber 1.0 adds Ruby 1.9.2 support. Recent changes you may not know about include -l and --lines command-line switches, as in RSpec, and a new transform syntax that allows you to factor out duplicated parts of step definitions. I haven’t seen official docs on this, but it looks like it allows you to capture bits of a step definition and run a block against it. The code example within the Cucumber tests looks like this:

Transform(/a Person aged (\d+)/) do |age|
  Person.new(age.to_i) # return the transformed value
end

Given /^(a Person aged \d+) with blonde hair$/ do |person|
  puts "#{person} and I have blonde hair"
end

In other words, the snippet a Person aged \d+ is captured and transformed and the result of that transform block is what is passed to the step definition block.
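Mechanically, a transform is just a regex paired with a conversion block, consulted before the step definition receives its argument. A stdlib-only sketch of that dispatch (hypothetical; this is not Cucumber’s actual implementation):

```ruby
# Minimal sketch of Cucumber-style transforms: each transform pairs a
# regex with a block; matching step arguments are converted before the
# step definition receives them.
Person = Struct.new(:age)

TRANSFORMS = []

def transform(regex, &block)
  TRANSFORMS << [regex, block]
end

def apply_transforms(arg)
  TRANSFORMS.each do |regex, block|
    if (m = regex.match(arg))
      return block.call(*m.captures)
    end
  end
  arg # no transform matched; pass the raw string through
end

transform(/a Person aged (\d+)/) { |age| Person.new(age.to_i) }

person = apply_transforms("a Person aged 15")
person.class # => Person
person.age   # => 15
```

The step definition then gets a Person object instead of the raw matched string, which is the whole point of the feature.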

Interesting. I wonder if people will use it?

2. The End of the World As We Know It

This post from the Armed and Dangerous blog tries to imagine a world without the web. The general idea is that if Congress had understood what DARPA was up to in the early ’80s, then funding would have been cut, and TCP/IP would not have been developed and popularized.

It’s an interesting argument, and as much as I’d like to believe it’s too dark, the examples of the cable and cell phone industries are eloquent. (I’ll grant that the author is probably trying to make a libertarian point I wouldn’t agree with in general…)

3. Books: Fuzzy Nation

Continuing playing catch-up with brief book reviews, Fuzzy Nation by John Scalzi. Fuzzy Nation is something odd — a genuine remake of a beloved SF classic (well, beloved by some, I’ve never read it), namely Little Fuzzy by H. Beam Piper. Scalzi has taken the basic elements — a guy who encounters small, sentient aliens who are, wait for it, Fuzzy — and wound his own story around them.

Fuzzy Nation is pretty much purely entertaining, fun, well structured, fast paced. It’s not as much interested in the existential questions around alien intelligence as the practical question of protecting them from a corporation that wants to strip-mine their planet. (Subtle, it’s not.) It’s one of those books that isn’t interested in re-defining the genre as much as telling a good story inside the existing boundaries.

4. Moving Beyond Thin Controllers To The Downright Emaciated

Gary Bernhardt over at the Destroy All Software blog posts some suggestions about using routing or routing-like structures to effectively remove controllers from the system. The theory is that if controllers just exist to dispatch to a specific method someplace else in the system, and Rails manages all the other connections, then why not route directly to that method with some declarative or rules-based logic to handle things like security logic, exceptional conditions, or other high-level logic.

It’s interesting, and probably could be built within Rails 3. I suspect most systems aren’t pure enough in the controllers to take advantage of it, which I guess is the point, and I wonder if the gain is worth breaking the default linkage between URL and controller/action pairs, but I’d be curious to try it.

5. In Brightest Day, In Blackest Night

Finally, I haven’t seen the Green Lantern movie yet, but I’ve been telling anybody who will listen that I’ve been waiting 30 years to be disappointed by it. Thanks to io9 for reminding me why, by recapping an awesomely over-the-top Green Lantern comic from 1980 that I owned, loved, and could still quote alongside the recap. Watch GL stagger through the Arctic wilderness without his ring, set upon by polar bears and wolves.

Filed under: Alternate History, Books, Comics, Cucumber, Rails 3

The New Heroku (Part 1 of 4): The Process Model & Procfile

This post is by Adam from Heroku

Click here to view on the original site: Original Post

In the beginning was the command line. The command line is a direct and immediate channel for communicating with and controlling a computer. GUIs and menus are like pointing and gesturing to communicate; whereas the command line is akin to having a written conversation, with all the nuance and expressiveness of language.

This is not lost on developers, for whom the command prompt and blinking cursor represents the potential to run anything, to do anything. Developers use the command line for everything from starting a new project (rails new) to managing revisions (git commit) to launching secondary, more specialized command lines (psql, mysql, irb, node).

With Celadon Cedar, Heroku’s new runtime stack, the power and expressiveness of the command line can be scaled out across a vast execution environment. Heroku provides access to this environment through an abstraction called the process model. The process model maps command-line commands to app code, creating a collection of processes which together form the running app.

But what does this mean for you, as an app developer? Let’s dive into the details of the process model, and see how it offers a new way of thinking about how to build and run applications.

The Foundation: Running a One-Off Process

The simplest manifestation of the process model is running a one-off process. On your local machine, you can cd into a directory with your app, then type a command to run a process. On Heroku’s Cedar stack, you can use heroku run to launch a process against your deployed app’s code on Heroku’s execution environment (known as the dyno manifold).

A few examples:

$ heroku run date
$ heroku run curl http://www.google.com/
$ heroku run rails console
$ heroku run rake -T
$ heroku run rails server

At first glance, heroku run may seem similar to ssh, but the only resemblance is that the specified command is run remotely. In contrast to ssh, each of these commands runs on a fresh, stand-alone dyno, potentially in a different physical location. Each dyno is fully isolated, starts up with a pristine copy of the app’s compiled filesystem, and the entire dyno (process, memory, and filesystem) is discarded when the process launched by the command exits or is terminated.

The command heroku run rails server launches a webserver process for your Rails app. Running a webserver in the foreground as a one-off process is not terribly useful: for general operation, you want a long-lived process that exists as a part of a fleet of such processes which do the app’s business. To achieve this, we’ll need another layer on top of the single-run process: process types, and a process formation.

Defining an App: Process Types via Procfile

A running app is a collection of processes. This is true whether you are running it on your local workstation, or as a production deploy spread out across many physical machines. Historically, there has been no single, language-agnostic, app-centric method for defining the processes that make up an app. To solve this, we introduce Procfile.

Procfile is a format for declaring the process types that describe how your app will run. A process type declares its name and a command-line command: this is a prototype which can be instantiated into one or more running processes.

Here’s a sample Procfile for a Node.js app with two process types: web (for HTTP requests), and worker (for background jobs).


web:     node web.js
worker:  node worker.js

In a local development environment, you can run a small-scale version of the app by launching one process for each of the two process types with Foreman:

$ gem install foreman
$ foreman start
10:14:40 web.1     | started with pid 13998
10:14:40 worker.1  | started with pid 13999
10:14:41 web.1     | Listening on port 5000
10:14:41 worker.1  | Worker ready to do work

The Heroku Cedar stack has baked-in support for Procfile-backed apps:

$ heroku create --stack cedar
$ git push heroku master
-----> Heroku receiving push
-----> Node.js app detected
-----> Discovering process types
       Procfile declares types -> web, worker

This Procfile-backed app is deployed to Heroku. Now you’re ready to scale out.

Scaling Out: The Process Formation

Running locally with Foreman, you only need one process for each process type. In production, you want to scale out to much greater capacity. Thanks to a share-nothing architecture, each process type can be instantiated into any number of running processes. Processes of the same type share the same command and purpose, but run as separate, isolated processes in different physical locations.

Cedar provides the heroku scale command to make this happen:

$ heroku scale web=10 worker=50
Scaling web processes... done, now running 10
Scaling worker processes... done, now running 50

Like heroku run, heroku scale launches processes. But instead of asking for a single, one-shot process attached to the terminal, it launches a whole group of them, starting from the prototypes defined in your Procfile. The shape of this group of running processes is known as the process formation.
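The bookkeeping behind a formation is simple to picture: a Procfile maps process-type names to command prototypes, and the formation instantiates each prototype some number of times. A hypothetical Ruby sketch of that expansion (not Heroku’s code):

```ruby
# Parse a Procfile into { type => command }, then expand a requested
# formation into named processes like "web.1".."web.10".
procfile = <<~PROCFILE
  web:     node web.js
  worker:  node worker.js
PROCFILE

types = procfile.lines.each_with_object({}) do |line, h|
  name, command = line.split(":", 2)
  h[name.strip] = command.strip
end

formation = { "web" => 10, "worker" => 50 }

processes = formation.flat_map do |type, count|
  (1..count).map { |i| ["#{type}.#{i}", types[type]] }
end

processes.size  # => 60
processes.first # => ["web.1", "node web.js"]
```

Every process of a given type starts from the same prototype command, which is why heroku ps can summarize sixty processes with just two commands.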

In the example above, the process formation is ten web processes and fifty worker processes. After scaling out, you can see the status of your new formation with the heroku ps command:

$ heroku ps
Process       State               Command
------------  ------------------  ------------------------------
web.1         up for 2s           node web.js
web.2         up for 1s           node web.js
worker.1      starting for 3s     node worker.js
worker.2      up for 1s           node worker.js

The dyno manifold will keep these processes up and running in this exact formation, until you request a change with another heroku scale command. Keeping your processes running indefinitely in the formation you’ve requested is part of Heroku’s erosion-resistance.

Run Anything

The process model, heroku run, and heroku scale open up a whole new world of possibilities for developers like you working on the Heroku platform.

A simple example: swap out the webserver and worker system used for your Rails app (Heroku defaults to Thin and Delayed Job), and use Unicorn and Resque instead:


gem 'unicorn'
gem 'resque'
gem 'resque-scheduler'


web:     bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker:  bundle exec rake resque:work QUEUE=*

For background work, you can run different types of workers consuming from different queues. Add a clock process as a flexible replacement for cron using resque-scheduler:


worker:    bundle exec rake resque:work QUEUE=*
urgworker: bundle exec rake resque:work QUEUE=urgent
clock:     bundle exec resque-scheduler

Goliath is an innovative new EventMachine-based evented webserver. Write a Goliath-based app and you’ll be able to run it from your Procfile:


gem 'goliath'


web: bundle exec ruby hello_goliath.rb -sv -e prod -p $PORT

Or how about a Node.js push-based pubsub system like Juggernaut or Faye?


  "name": "myapp",
  "version": "0.0.1",
  "dependencies": {
    "juggernaut": "2.0.5"


web: node_modules/.bin/juggernaut

This is just a taste of what you can do with Procfile. The possibilities are nearly limitless.


The command line is a simple, powerful, and time-honored abstraction. Procfile is a layer on top of the command line for declaring how your app gets run. With Cedar, heroku scale becomes your distributed process manager, and heroku run becomes your distributed command line.

We’ve only just seen the beginning of what the process model can do: over the next year, Heroku will be adding new language runtimes, new routing capabilities, and new types of add-on resources. The sky’s the limit, and we can’t wait to see what inventive new kinds of apps developers like you will be building.

Other Posts From This Series


BundleWatcher: Watching Your Gems

This post is by Ben from BenCurtis.com

Click here to view on the original site: Original Post

My weekend project this weekend was BundleWatcher, a tool that does just one thing: watches the gems in your Gemfile for updates. Once you upload your Gemfile.lock file, BundleWatcher will keep track of updates to the gems upon which your project depends, and you can use the atom feed for your bundle to know when updates have happened.

I built this to scratch my own itch. While rubygems.org provides a way to subscribe to gem updates and an RSS feed to track those updates, I wanted a way to track updates for each of my projects, project by project. Now instead of just knowing that the inherited_resources gem got updated, I can see which of my projects might need an update because that gem got updated.

BundleWatcher uses the (fantastic) API provided by rubygems.org to keep track of gem updates, so as long as you depend on gems that are listed there, you’ll be good to go.

#271 Resque

This post is by Ryan Bates from RailsCasts

Click here to view on the original site: Original Post

Resque creates background jobs using Redis. It supports multiple queues and comes with an administration interface for monitoring and managing them.

Slightly more readable Ruby

This post is by Josh Susser from has_many :through

Click here to view on the original site: Original Post

A simple coding style for slightly more readable Ruby: symbols as flag arguments.

Use a symbol with a meaningful name instead of true. This makes it clear what you’re doing and is just as terse. For example:

def run(log_it = false)
  log_action if log_it
end

command.run(true)           # mysterious use of `true`
command.run(:with_logging)  # obvious

The second call to #run is functionally equivalent to the first, because every symbol is a truthy value. But it’s a lot easier to read the code with the symbol and understand what the argument means.

The other common pattern I see in Rails is to use an options hash. That call would look like

command.run(:with_logging => true)

If you have a bunch of options, that’s fine. When it’s just a single optional flag, I prefer passing a symbol with a meaningful name.

Rails 3.0.9, Capybara 1.0, and more

This post is by Jason Seifer and Dan Benjamin from The Ruby Show

Click here to view on the original site: Original Post

In this episode, Peter and Jason talk to you about the latest Rails release and Capybara 1.0, some new gems, as well as new tutorials.

We’d like to thank the Rails core team for finally releasing an update to the framework in time for us to cover it for the week!

[ANN] Rails 3.0.9 has been released!

This post is by aaronp from Riding Rails - home

Click here to view on the original site: Original Post

Hi everybody!

Rails 3.0.9 has been released! Since I am at Nordic Ruby, I will deem this Nordic Ruby Edition. 😉

The main boogs fixed in this release are problems dealing with modifications of SafeBuffers.

gem install rails or update your Gemfile and bundle update while it’s hot!


The major changes in this release of Rails are bug fixes surrounding modifications to SafeBuffer strings. We had places that were modifying SafeBuffers and those places raised exceptions after the security fixes in the 3.0.8 release.

We’ve since updated those code paths, and now we have this nice release for you today!

Please check the CHANGELOG files in each section on github for more details.

For an exhaustive list of the commits in this release, please see github.

Gem checksums


  • fb8f3c0b6c866dbad05ec33baf2af7e851f9d745 actionmailer-3.0.9.gem
  • 9bc2c05463962320d0497bb2e30f4ffa66ed4f79 actionpack-3.0.9.gem
  • 2c1004747a22f756722cf95605398bf9ba6244ed activemodel-3.0.9.gem
  • 285759d41c79460a3f49d26d8a0b3f8c9279e868 activerecord-3.0.9.gem
  • 28f2b296525caeca1341467b5f1bbb90de88aaa7 activesupport-3.0.9.gem
  • 09d52fdcbeefba31dd267d3d7484332ec30f7539 rails-3.0.9.gem
  • 8b46dbeddb56e2e4b4ebfb5312fe81eb865a47e7 railties-3.0.9.gem

Please enjoy this release of Rails!

<3 <3 <3