Design Forces in Ekawada, Part 2

In my previous post, I talked a bit about one of the “forces” that influenced the design of Ekawada, my as-yet-unreleased string figure catalog app for iOS. In this post, I want to talk about another of the forces that affected the app: “What are some easy figures to do?”

It’s not hard to imagine someone with little-to-no string figure experience being curious about the app, especially since it is free, and downloading it just to see what it’s all about. They might want to start with the easiest figure, just to see if it is something they could really learn. How should the application show this to them?

The figure list, sorted by complexity

From the UI point of view, I took the easy way out: in the upper-left corner of the index is a toggle that lets you switch between “ABC” (alphabetical) and “123” (difficulty) orderings. I’m not best pleased with that solution, though: it’s not very self-explanatory. Expect that to change in a future version of Ekawada, after I have more opportunity to think about how best to present that.

(Ignore for now the blue button at the bottom—that’s there because the view is being filtered to show only the “starter set” of figures. Tapping that button would return the view to the default of showing all installed figures.)

At any rate, toggling the view to “123” gives you the figure list, sorted by difficulty (actually complexity—more on that shortly). The first figure in the list is the least complex, and the one at the bottom is the most complex.

Here’s where things get sticky, though. There is no formal, standardized way to evaluate a string figure and say how difficult it is. If you look at the various string figure web sites that try to categorize figures by difficulty, they all do it in a very subjective way. Sure, you may agree that the “easy” ones are easier than the “hard” ones, but you may also disagree with specific categorizations.


I wanted a way in Ekawada for people to order figures by difficulty, and thus I needed a way to objectively rate figures. After a bit of experimentation and observation, I came up with a system that considers what types of maneuvers are used in each step, and assigns a number to each type of maneuver. Then, the various maneuvers used in a figure are summed, and the figure is given a rating. “Opening A”, for instance, is rated a 10, “An Apache Door” is a 61, and “Cat’s Cradle” was originally a whopping 196, now a 42. (edit: I updated the system to handle figure sequences better, so Cat’s Cradle has a much more sane complexity rating.)
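The scoring scheme can be sketched in a few lines of Ruby. To be clear, the maneuver names and weights below are invented for illustration; they are not Ekawada’s actual tables.

```ruby
# Hypothetical maneuver weights -- invented for illustration only.
MANEUVER_WEIGHTS = {
  :pick_up  => 1,  # picking up a string with a finger
  :drop     => 1,  # releasing a loop
  :transfer => 2,  # moving a loop from one finger to another
  :navaho   => 3   # a trickier two-loop maneuver
}

# A figure is a list of steps; each step is a list of maneuvers.
# The rating is simply the sum of the weights of every maneuver used.
def complexity(steps)
  steps.sum do |maneuvers|
    maneuvers.sum { |m| MANEUVER_WEIGHTS.fetch(m, 2) }
  end
end

opening_a = [[:pick_up], [:pick_up], [:pick_up]]
complexity(opening_a)  # => 3 with these illustrative weights
```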

This gets to the point of “complexity” versus “difficulty”. Cat’s Cradle is rated at 196, but that’s because it is a complex figure (actually, series of figures), and is not actually very difficult. Thus, the rating still doesn’t necessarily tell you what is easy and what is hard, only what is simple and what is complicated.

Often, that’s good enough. I do plan to tweak the algorithm for computing the complexity after Ekawada is released, based on feedback from users. However, I think it’s good enough for version one as it is.

Next week, I’ll talk about how Ekawada balances another of the design forces on my list: “I can’t hold a figure on my hands and turn pages at the same time.”

HUG Update

A few weeks ago, we announced our first-ever Heroku Users Group (known henceforth and forever more as a HUG, showing just how much we love our developers!) meetup. We’re now a week away, and we thought it’d be a good time to go into a little more detail about the plan.

On November 3rd at 7pm, we’re opening the doors to anyone who uses Heroku – developers who deploy to us, businesses built on our platform, add-on providers who create and manage services for our users, and more. We’re eager to get everyone in the same room, and we’re looking forward to seeing the new ideas and developments that will come out of all of us talking at this (and future) meetups. To kick things off at this first event, though, we’re going to provide the content for you.

Adam‘s going to kick things off with a quick peek at several features that are in development. He’ll be talking about the things many of you have been asking for, and they’ll make deploying complex applications much more pleasant. After that, we’ll have a panel of representatives from each of our internal teams – the people who make Heroku run – and they’ll be ready and able to answer any questions you have. Curious about the routing mesh? They wrote it! Wondering about the future of Postgres at Heroku? They know! You bring the questions, they’ll bring the answers.

And after all of that, the fun doesn’t stop – it just moves over a few blocks. At 9pm, we’re cosponsoring a drinkup at Bloodhound with our friends from Basho, so be sure to plan to stay out!

As you might be able to tell, we’re getting more and more excited as we close in on the 3rd, and we’re looking forward to seeing you there. Don’t forget to let us know that you’re coming, and we’ll see you next week!

Tuesday Postmortem

Tuesday was not a good day for Heroku and as a result it was not a good day for our customers. I want to take the time to explain what happened, how we addressed the problem, and what we’re doing in the future to keep it from happening again.

Over the past few weeks we have seen unprecedented growth in the rate of new applications being added to the platform. This growth has exacerbated a problem with our internal messaging systems that we’ve known about and been working to address. Unfortunately, the projects that we have underway to address the problem were planned based on previous growth rates and are not yet complete.

A slowdown in our internal messaging systems caused a previously unknown bug in our distributed routing mesh to be triggered. This bug caused the routing mesh to fail. After isolating the bug, we attempted to roll back to a previous version of the routing mesh code. While the rollback solved the initial problem, there was an unexpected incompatibility between the routing mesh and our caching service. This incompatibility forced us to move back to the newer routing mesh code, which required us to perform a “hot patch” of the production system to fix the initial bug. This patch was successful and all applications were returned to service.

As a result of the problems we have seen over the past couple of weeks, culminating with yesterday’s outage, we have reprioritized our ongoing projects. Several engineers have been dedicated to making short-term changes to the platform with an eye toward incrementally improving the stability of our messaging systems as quickly as possible.

The first of these projects was deployed to our production systems last night and is already making an impact. One of our operations engineers, Ricardo Chimal, Jr., has been working for some time on improving the way we target messages between components of our platform. We completed internal testing of these changes yesterday and they were deployed to our production cloud last night at 19:00 PDT (02:00 UTC).

After these changes were deployed, we immediately saw a dramatic improvement in the CPU utilization of our messaging system. The graph above was generated by Cloudkick, one of the tools that we use to manage our infrastructure, which shows a roughly 5x improvement from this first batch of changes on one of our messaging servers. Ricardo’s excellent work is already making a big impact and we expect this progress to continue as additional improvements are rolled out over the coming days.

View our official reason for outage statement here:

We know that you rely on Heroku to run your businesses and that we let you down yesterday. We’re sorry for the trouble these problems caused you. We appreciate your faith in us; we’re not going to rest until we’ve lived up to it.

A look inside Ekawada’s design

Around the time I was thinking of doing a native iOS app for cataloging string figures, my co-worker (and master designer) Ryan Singer posted this article and video about “designing with forces”. My take-away from it was that you don’t design with a list of desired features in mind, but instead with a list of focused scenarios that describe barriers that you want your application to overcome. These barriers are “forces” that are acting on the space your solution ought to fit into.

Taking one concrete example in Ekawada, I could imagine someone downloading the app and thinking, “I want to learn a figure that looks cool.” I wanted Ekawada to be able to address that user’s goal, and so this goal became one of the “forces” that the app needed to interact with.

My next step was to consider ways that Ekawada could “balance” that force, pushing back usefully on the “what looks cool” question. It was obvious that in order for a user to know if a figure looked cool, they had to be able to see it, so I knew right away that I needed the illustrations of each figure to feature prominently in the figure list.

Ekawada index screen

The result is the index screen. The thumbnails are small, but I’ve found that they are sufficiently large to give a good impression of what the final figure will look like. I originally had a second view, with images that were 4x as large, which was intended to let you browse the figures by their illustrations. I took a couple of weeks to build it out, and learned a bunch about subclassing and customizing UITableView. Alas, it was a darling I had to kill, though I may bring it back eventually (once in git, always in git, eh?). It turned out that the view was not really much more useful than the index itself, and removing it helped to simplify the UI a bit.

All told, my original list of “forces” included nine different questions and scenarios that I wanted Ekawada to be able to answer, and for this first version I believe I’ve been able to address at least eight of them. I may do some more posts about those other design forces and how I tackled them, so watch this space.

Ekawada approaches—last night I was able to knock off a few more items from my to-do list (and, well, add one or two more items to it), and I’m really excited about how it is coming. I don’t want to give a release date or anything yet, but my list of remaining issues is only about seven items long, none of which should take more than 30-60 minutes to complete. Almost done!

November Writing Month; Pragmatic Guide to Git in print

Pragmatic Guide to Git is now in print and shipping, and November means it’s time for PragProWriMo again.

#138: Unprofessional

Peter Cooper and Jason Seifer go over the latest Ruby and Rails news. They touch on speeding up Rails, Microsoft dropping IronRuby, and more. Peter also busts out a rap.

Links for this episode:

Chronic 0.3.0 Released: Improved Natural Language Date/Time Parsing

Tom Preston-Werner has pushed out version 0.3.0 of Chronic, the popular natural language date and time parsing library for Ruby. It’s a significant release because the last was 0.2.3 back in July 2007! Grab it now with gem install chronic.

Despite the long time between releases, Chronic hasn’t gone without attention. It’s been sitting on GitHub and attracting patches for years, but Tom (who’s already pretty busy, y’know, running GitHub) has now decided to bundle it up and push it live.

What does 0.3.0 get you?

  • Improved time-zone support
  • Handles “on” in phrases like “10am on Saturday”
  • Now ignores commas (which could throw it off before)
  • Supports “weekend” and “weekday”
  • Allows numeric timezone offsets (e.g. -0500)
  • Support for seasons
  • “a”, “p”, “am”, and “pm” parsing
  • The typical bugfixes and low level improvements

Chronic is basically a Ruby institution by now (I first posted about it in September 2006!) so check it out. But if you’re still itching for other ways to work with dates and times, check out 3 New Date and Time Libraries for Rubyists from May 2010. Tickle is particularly interesting as it allows you to parse natural language requests for recurring events rather than single times.



Extending a Model from an Engine in your Rails App

While working on the Rails 3 upgrade of the SaaS Rails Kit (which is available for customers, BTW), I moved most of the guts of the Kit into a plugin (engine) to make it easier to integrate into pre-existing apps. Then, as I was working on integrating the Kit into a client app, I ran into a situation where I wanted to extend one of the models provided by the plugin from within the app, since this was a project-specific tweak. I mixed together a couple of suggestions that I found out there on the internets to come up with this:

Rails.application.config.to_prepare do
  SubscriptionPlan.class_eval do
    scope :active, where(:active => true)
    scope :by_price, order(:amount)
  end
end

That is added to config/initializers, and it simply adds a couple of scopes to the model that this project needed (but that I’m not sure other RailsKit customers would care about).

Does that look like the best approach? Or is there a better way to extend models that are defined in a plugin?

Do you know what’s new in Ruby 1.9?

This guest post is by Carlo Pecchia, an IT engineer mainly interested in agile methodologies and “good practices” for developing large and complex systems. He is also interested in web architectures and emerging programming languages.

With major version 1.9, the Ruby language received a series of improvements devoted to rationalizing and better organizing the internal structure of the language itself.

The language “core” grew from around 3 MB in version 1.0 to 30 MB in version 1.9: clearly, both internal and external (interface) refactoring was needed. Let me show you the main improvements introduced.

Smart things

A list of general – smart – improvements: RubyGems is now officially a part of Ruby, and so is Rake. Some poorly used libraries were removed from the core – but are still available as gems: soap, jcode, etc.

Object hierarchy

A new root in the class hierarchy was introduced: BasicObject.

1.8 => [Class, Module, Object, Kernel]
1.9 => [Class, Module, Object, Kernel, BasicObject]

BasicObject.instance_methods
# => [:==, :equal?, :!, :!=, :instance_eval, :instance_exec, :__send__]

This serves to better organize things internally. The class is so “basic” that it doesn’t even provide object_id; in fact, the last statement below generates an error (‘undefined method…’):

foo = BasicObject.new
foo.object_id
# => NoMethodError: undefined method `object_id'
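A common use for such a minimal root class is as a “blank slate” for proxies. Here is a small sketch (LoggingProxy is an invented name, not part of the standard library):

```ruby
# Because BasicObject defines almost no methods, nearly every call on a
# subclass falls through to method_missing -- ideal for proxy objects.
class LoggingProxy < BasicObject
  def initialize(target)
    @target = target
  end

  def method_missing(name, *args, &block)
    ::Kernel.puts "calling #{name}"        # must qualify Kernel explicitly
    @target.__send__(name, *args, &block)  # forward the call to the target
  end
end

list = LoggingProxy.new([1, 2, 3])
list.size  # prints "calling size" and returns 3
```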

Loving chain methods

Method chaining is used more often in Ruby than in many other languages. In order to improve this “technique”, the new method tap was introduced (and backported into 1.8 too):

>puts "Hello".reverse
            .tap{ |o| puts "reversed: #{o}" }

# => reversed: olleH
# => OLLEH

Basically, we can now also “do something else too” with the object in the chain.

Main changes

Let us now see the main areas where changes were introduced: symbols, arrays, hashes, and blocks. And finally, an interesting improvement toward parallelism: fibers.


Symbols

Symbols now behave like strings with respect to regular expressions:

a = [:windows, :mac, :amiga]
puts a.grep(/ac/)

# 1.8 => []
# 1.9 => mac

We can also get a Proc from a symbol:

u = :upcase.to_proc
u.call('lorem ipsum...')  # => "LOREM IPSUM..."


Arrays

Arrays, a fundamental data structure in any modern language, have been deeply revisited:

  • the method to_a no longer belongs to the Object class.
  • arrays can be easily created with the homonymous Kernel method (e.g. a = Array('apple')) for improved code readability.
  • in such a creation the implicit separator – “\n” by default in 1.8 – is no longer considered, so Array("apple\nbananas") returns ["apple\nbananas"] rather than ["apple", "bananas"].
  • the splat operator (*a = some_array_here) has a more consistent behaviour.


Hashes

Now hashes (finally!) are stable data structures: elements keep their insertion order. Even though a Hash is not – by definition – an ordered data abstraction, that feature can be very handy.

It’s now possible to declare hashes differently (using name: value pairs when the keys are symbols):

data = { jan: 201234, feb: 234234, mar: 234345 }

And that, obviously, makes some older forms obsolete – most notably the comma-separated hash literal (e.g. h = {"jan", 201234, "feb", 234234, "mar", 234345}), which is gone in 1.9 – and it can clash with an if-then-else ternary (a ? b : c) written without spaces around the colon.

Easy access within a string:

puts "First two months of this year was: %{jan} and %{feb}" % data

Finally, to make things more consistent, we have two major differences:

  • the select method on a hash – which aims to act like a “filter” – returns a hash (in 1.8 it returned an array of arrays…)
  • the to_s method returns an inspect-like representation of the hash, rather than joining keys and values together.
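Concretely, on 1.9:

```ruby
h = { :jan => 201234, :feb => 234234 }

h.select { |k, v| v > 210000 }
# 1.9 => {:feb=>234234}      (1.8 returned [[:feb, 234234]])

h.to_s
# 1.9 => "{:jan=>201234, :feb=>234234}"   (1.8 joined keys and values)
```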


Blocks

Still considering the mantra word “rationalization”, blocks and methods now share the same syntax for parameters:

def my_method(foo, bar = 10, *baz)
  # ...
end

lambda { |foo, bar = 10, *baz| ... }

An interesting “correction” was made to block parameters: they are now local variables and don’t collide with external references:

foo = 'this is an external variable'
bar = 23

10.times do |foo|
  bar = foo + 1
  # foo here is unrelated to the external name
end

This was a major issue in language version 1.8.

Of course, if within a block (outside the parameter list) we reference an external variable, that one will be modified. With 1.9 we can “protect” such variables by declaring an internal homonym:

foo = 'this is an external variable'
bar = 23

10.times do |i; foo|
  bar = bar + 1
  foo = bar % 2
end

# foo untouched, but bar modified


Fibers

We will see increasing usage of parallelism techniques in programming, and the spread of (really interesting) languages like Clojure and Erlang is a clear sign. Ruby too – with 1.9 – offers a nice tool for programmers: fibers.

They are lightweight processes – the memory footprint is only 4 KB – conceived for cooperative concurrency. Basically, we can think of them as Procs that maintain state across calls:

fb = Fiber.new do |val|
  Fiber.yield "That's all... (#{val})"
  Fiber.yield "folks! (#{val})"
end

puts fb.resume 10
puts fb.resume 20

A final note

The transition – fairly smooth in my opinion – towards version 1.9 is still in progress. I hope this post helps you understand the major differences and act accordingly when coding, both in your own code and in someone else’s.

A final tip: pay attention when using a gem – the interesting project can help you see if a gem has already been ported to Ruby 1.9.

I hope you found this article valuable. Feel free to ask questions and give feedback in the comments section of this post. Thanks!

Do read these awesome Guest Posts:


The value of a personal bug log

This guest post is by Brian Tarbox, a Distinguished Member of Technical Staff at Motorola, where he works on Video On Demand systems. He also blogs about applying a Wabi Sabi approach to software, cognition, and philosophy. He is a regular contributor to the Pragmatic Programmer magazine. His open source project for converting computer log files to music just won an Oracle Duke’s Choice award.

Although our field has huge amounts of diversity (languages, platforms, team maturity), we all share the need to keep up and enhance our skills. The person nipping at our heels may live in India or China, or may have just graduated from a local college. The point is that sharks have to keep swimming or they die, and software engineers have to keep growing or they become expendable.

Many of the things we do to keep up our skills need the good graces of others. Attending conferences, going to classes, and getting to code a module in a new language are all things that need permission. Some just need time, while others need your company to spend real money. Some companies have policies about continuing education while others do not. In either case there is a fair chance that your request will get a “no” (though that shouldn’t stop you from asking). Things like attending a Users Group meeting don’t tend to cost anything, but those of you with families are likely familiar with the process of negotiating evenings off. Your mileage may vary.

While I have a very understanding wife and an encouraging boss I don’t like to rely on others for my career. So I look for no-cost, no-permission-required things I can do to keep up and enhance my skills. Maintaining a personal bug diary is a great way to do that.

At this point allow me a segue to illustrate a point. I’m a private pilot, so I’ve spent a lot of time learning to fly and reading up about the process of learning to fly. A key milestone in this learning is getting to your first solo flight. In the private sector there is an enormous range in the amount of training time required for the first solo: some take as little as nine hours of instruction while others take over a hundred. I was somewhere in the middle. The military has a much more standardized approach, which results in new pilots getting to solo in a remarkably small number of training hours. The secret is that after each flight they do a post-flight review. They go over every aspect of the flight, highlighting the good and bad decisions that the student pilot made. This anchors and solidifies the training so it has greater impact.

Maintaining a personal bug log serves the same purpose: creating a little learning experience from each bug.

The conventional wisdom says that the process of dealing with a bug is:

  • create a test case that demonstrates the bug
  • fix the bug, i.e. code until the test case passes
  • check in the code, close the bug in your bug system
  • never think about the bug again

It’s that last point that we want to change. I’m suggesting we change that last step to “think about why the bug got past the unit and system tests”. The answer might be that a requirement changed, that a user tried something you hadn’t thought of, or that a unit test was less thorough than you thought. The reasons you discover for your bugs will vary, both with your environment and with your level of honesty. Be open to the bug reason “because I messed up”. Remember, this is not a log that you have to share with anyone, it’s the log you keep as a way to get better.

One of the nice things about learning to fly was that I knew I didn’t know how. So I was able to leave my ego on the ground and acknowledge that on such and such a flight my landing was bad because I let myself get distracted, or I forgot to factor in the crosswind or whatever. As programmers we often let our pride get in the way of acknowledging our mistakes. This of course leads us to make the mistake again.

As an experiment, try maintaining a bug log for a month. At the end of the month look back and see if you notice any patterns. Those are areas where you need to change your behavior, and that’s a fair definition of learning.

I hope you found this article valuable. Feel free to ask questions and give feedback in the comments section of this post. Thanks!

Do read this awesome Guest Post:


#237 Dynamic attr_accessible

It is important to use attr_accessible for security with mass assignment, but what if you need it to be dynamic based on user permissions? See how in this episode.


Still here…

Yeah, I’m still around. Just haven’t had a whole lot to say!

I’ve been working, on the side, on an iOS app. It’s been my sole side-project pretty much since March, and in that time I’ve gone from practically zero Objective-C to proud-of-myself. My project is nearly ready to launch.

I’ve talked about it some on Twitter. It started as a web app for collecting string figures (yeah, yeah, I’m still a string figure nut). It’s now a native (and universal) iOS app. It will be called Ekawada (the Nauruan term for string figures).

As I said, I’ve learned a ton. In particular, I’ve realized how much I’ve missed compiled languages and memory management. (I’m not being sarcastic there—I’m serious. Done right, explicit memory management can be extremely satisfying.) I’d like to get back into C/ObjC more.

I’ve learned a ton about iOS, too. It really has a beautiful UI framework, even in spite of the horrendously long method names. I’m still trying to learn “best practices” (I hate that term, but you all know what I mean), and my code base is one long trail of blood and tears, but I think I’m figuring it out.

Ekawada will be available “soon”—I’m pretty much down to word-smithing and setting up infrastructure (web site, FAQ, etc). It will be a free app, featuring 8 string figures of varying difficulties, as well as 8 tutorials to get you up to speed on the notation I’m using. (It’s the ISFA standard notation, if you’re curious.) If the 8 figures are enough to whet your interest, there is a store in-app that lets you purchase “packs” of additional figures. Initially, there will be 5 packs available, each with 19 or 20 figures, and each available for $0.99.

I hope to add additional packs after launch, though they will cost more. The initial set of figures uses the (public domain) illustrations from Caroline Furness Jayne’s String Figures and How to Make Them, but figures I add from here on out will have to be illustrated by yours truly, and that takes a lot of work for an artistically challenged left-brainer like myself!

I’m really pleased by how Ekawada is turning out, and I hope by making the core application free that more people can be introduced to string figures. If you’ve got an iOS device, keep your eyes peeled for the announcement!

The Chain Gang

Chain-able interfaces are all the rage — jQuery, ARel, etc. What a lot of people do not realize is how easy they are to create. Let’s say we want to make the following work:

User.where(:first_name => 'John')
User.where(:first_name => 'John').sort(:age)
User.sort(:age).where(:first_name => 'John')

First, we need to have a class method for both where and sort because we want to allow either one of them to be called and chained on each other.

class User
  def self.where(hash)
  end

  def self.sort(field)
  end
end

User.where(:first_name => 'John')

Now we can call where or sort and we do not get errors, but we still cannot chain. In order to make this magic happen, let’s make a query class. The query needs to know which model, which where conditions, and which field to sort by.

class Query
  def initialize(model)
    @model = model
  end

  def where(hash)
    @where = hash
  end

  def sort(field)
    @sort = field
  end
end

Now that we have this, let’s create new query objects when the User class methods are called and pass the arguments through.

class User
  def self.where(hash)
    Query.new(self).where(hash)
  end

  def self.sort(field)
    Query.new(self).sort(field)
  end
end

We might think we are done at this point, but the sauce that makes this all work is still missing. If you try our initial example, you end up with a cryptic error message.

ArgumentError: wrong number of arguments (1 for 0)

The reason is that Query#where currently returns the hash it just assigned, and Hash#sort does not accept a field argument. In order for this to be chainable, we have to return self in Query#where and Query#sort.

class Query
  def initialize(model)
    @model = model
  end

  def where(hash)
    @where = hash
    self
  end

  def sort(field)
    @sort = field
    self
  end
end

Now, if we put it all together, you can see that this is the basics of creating a chain-able interface: simply do what you need to do and return self.

class Query
  def initialize(model)
    @model = model
  end

  def where(hash)
    @where = hash
    self
  end

  def sort(field)
    @sort = field
    self
  end
end

class User
  def self.where(hash)
    Query.new(self).where(hash)
  end

  def self.sort(field)
    Query.new(self).sort(field)
  end
end

puts User.where(:first_name => 'John').inspect
puts User.sort(:age).inspect
puts User.where(:first_name => 'John').sort(:age).inspect
puts User.sort(:age).where(:first_name => 'John').inspect

# #<Query:0x101020268 @model=User, @where={:first_name=>"John"}>
# #<Query:0x101020060 @model=User, @sort=:age>
# #<Query:0x10101fe30 @model=User, @where={:first_name=>"John"}, @sort=:age>
# #<Query:0x10101fbb0 @model=User, @where={:first_name=>"John"}, @sort=:age>


From here, all we need to do is define kickers, such as all, first, and last, that actually assemble and perform the query and return results. Hope this adds a little something to your repertoire next time you are building an interface. It does not work in every situation, but when applied correctly it can improve the usability of a library.
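As an illustration, here is one way an all kicker could look in this style. The records source below is invented for the sketch; a real library would assemble a database query here instead of filtering an in-memory array:

```ruby
# Same chainable Query as above, plus an `all` kicker. The kicker is the
# method that finally executes the accumulated conditions.
class Query
  def initialize(model)
    @model = model
    @where = {}
    @sort = nil
  end

  def where(hash)
    @where.merge!(hash)
    self
  end

  def sort(field)
    @sort = field
    self
  end

  # The kicker: assemble and perform the query, returning results.
  def all
    results = @model.records.select do |record|
      @where.all? { |key, value| record[key] == value }
    end
    @sort ? results.sort_by { |record| record[@sort] } : results
  end
end

class User
  # Hypothetical in-memory data source standing in for a database.
  def self.records
    [{ :first_name => 'John', :age => 30 },
     { :first_name => 'Jane', :age => 25 },
     { :first_name => 'John', :age => 22 }]
  end

  def self.where(hash)
    Query.new(self).where(hash)
  end

  def self.sort(field)
    Query.new(self).sort(field)
  end
end

User.where(:first_name => 'John').sort(:age).all
# => [{:first_name=>"John", :age=>22}, {:first_name=>"John", :age=>30}]
```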

If you are interested in more on this, feel free to peek at the innards of Plucky, which provides a chain-able interface for querying MongoDB.

Bag O’ Links – 24/10/2010 (back!)

Yeah, I know I said Bag O’ Links would go out of service a few months ago and that I was going to move it to the Nautilus6 website, but there were a few other things I had to do with Nautilus6 before that. I’ll get to it when I can, but that doesn’t mean I can’t re-post links, does it?

Links and fun

Favorite RailsRumble Apps

Last weekend this year’s “RailsRumble” took place. Of course I missed it, but I did have a chance to peek in and choose some favorites:

  • RailsWizard – generate your own Rails template with a simple click/choose interface. I like it so far and am anxious to see how far it goes.
  • Caviar – charge your clients by results.
  • Empower – build a development environment around your app, nice.
  • FontStacks – generate CSS that contains awesome fonts (font-face).
  • GitWrite – blogging for nerds; looks like a web-supported Jekyll. Nice, I like it and am waiting for some improvements and additions.


Last sad note

About 10 days ago we lost a friend. Fares Yussuf Donaldson (a.k.a invalidrecord on IRC) died.
We will miss you buddy.

Writing Parsers in Ruby using Treetop

Treetop is one of the most underrated, yet powerful, Ruby libraries out there. If you want to write a parser, it kicks ass. The only problem is that unless you’re into reading up about and playing with parsers, it’s not always obvious how to get going with them, or with Treetop in particular. Luckily Aaron Gough, a Toronto-based Ruby developer, comes to our rescue with some great blog posts.

Aaron, who has a passion for messing around with parsers and language implementations, recently released Koi – a pure Ruby implementation of a language parser, compiler, and virtual machine. If you’re ready to dive in at the deep end, the code for Koi makes for good reading.

Starting more simply, though, is Aaron’s latest blog post: A quick intro to writing a parser with Treetop. In the post, he covers building a “parsing expression grammar” (PEG) for a basic Lisp-like language from start to finish – from installing the gem, through to building up a desired set of results. It’s a great walkthrough and unless you’re already au fait with parsers, you’ll pick something up.

If thinking of “grammars” and Treetop is enough to make your ears itch, though, check out Aaron’s sister article: Writing an S-Expression parser in Ruby. On the surface, this sounds like the same thing as the other one, except that this is written in pure Ruby with no Treetop involvement. But while pure Ruby is always nice to see, it’s a stark reminder of how much a library like Treetop offers us.

If you’re interested in parsing merely as a road to creating your own programming language, though, check out Create Your Own Programming Language by Marc Andre Cournoyer. It’s a good read and even inspired CoffeeScript!


Microsoft Jettisons IronRuby Into The Open Source Community

Back in August, Microsoft seemed to get tired of IronRuby, so its project leader Jimmy Schementi jumped ship while asking the Ruby community to step up and get involved in its future. Today, Microsoft has announced new leadership for IronRuby (and IronPython) and has effectively jettisoned it into the community as a true, fully open source project.

So who’s in charge of IronRuby now? Jimmy Schementi, naturally, and Miguel de Icaza, founder of the Mono and GNOME projects and generally an all-round super famous open source dude.


Schementi has written about what the leadership changes and Microsoft’s announcements mean in the greater scheme of IronRuby’s development. In short, Microsoft is no longer directly funding the projects but isn’t restricting contributions or keeping code hidden behind the scenes anymore either (e.g. the IronRuby tools for Visual Studio).

Want to give IronRuby a try or get the source code? Head over to IronRuby’s official homepage.


MacRuby + Mac App Store == Low Hanging Fruit for Rubyists

At its “Back To Mac” presentation yesterday, Apple unveiled the Mac App Store, an equivalent of the iOS App Store for the Mac. Given the relentless development and improvement of MacRuby, and the power it brings Rubyists in developing complete OS X applications, I’m convinced that the time is right for Ruby to make a big splash on the OS X GUI app development front.

When I mentioned the above observation on Twitter, Geoffrey Grosenbach of PeepCode pointed out:


He’s right, but things like app stores have a funny way of acting as catalysts for developers to come out of the woodwork and try new things. Even taking the “evils” of app stores into account (and Apple has performed more than its fair share of evil incantations on iPhone developers), the ease with which you can put apps up for sale and take money from customers (albeit for a 70% share) is appealing. The iPhone App Store almost reinvented casual gaming, for instance, and people I’d never have expected to develop a mobile app have been lured into the iPhone fold.

Given all of this, I think that if you want to develop OS X apps without moving away from Ruby, and you want to make proper money from your apps without setting up a Web site, building up traffic, and whatnot, now’s the perfect time to look into MacRuby and Apple’s Mac Developer Program. (But if you want to work on open source or sell your own stuff, do that too!)

Will Apple even allow Ruby built apps on to the App Store?

Keeping in mind the now semi-resolved Section 3.3.1 imbroglio, it’s worth maintaining some healthy skepticism about Apple’s intentions and future actions until they say something official.

Someone’s already leaked the Mac Developer Program terms onto a pastebin site, and nothing stands out to me as blocking the use of MacRuby for building App Store-deployed apps. Point 2.14 notes that apps must be packaged and submitted using Apple’s own packaging tools, as included with Xcode, but since MacRuby is being developed at Apple, one hopes this will be easily done. Apple even has a guide called Developing Cocoa Applications Using MacRuby that digs into using Xcode.

Other points note that you can’t install kexts (kernel extensions), have your own licensing system, offer “trials”, download other apps from within your own, or use setuid/root privileges, but these affect the behavior of your program rather than its underlying formation.

John E Vincent suggests that point 2.24 (“Apps that use deprecated or optionally installed technologies (e.g., Java, Rosetta) will be rejected”) would rule out the use of MacRuby. I disagree. In OS X, Java is a giant collection of frameworks maintained and updated by Apple as part of the OS, whereas with MacRuby you can build fully packaged, standalone OS X .app files with the MacRuby framework tightly bundled inside. I could be wrong, though – what say you?

Where next?

First, you’re going to need to test the waters of building OS X GUI apps with MacRuby, so head to the official site to download and install it. Be aware that you need to be running OS X 10.6 (Snow Leopard) or higher.
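Once it’s installed, a bare-bones MacRuby app is only a handful of lines. This is my own illustrative sketch (not from Apple’s docs), and note it must be run with the macruby binary, not plain ruby:

```ruby
# hello.rb — run with `macruby hello.rb` (requires the MacRuby runtime)
framework 'Cocoa'  # MacRuby's mechanism for loading an Objective-C framework

class AppDelegate
  # Called by Cocoa once the app has finished launching
  def applicationDidFinishLaunching(notification)
    alert = NSAlert.alloc.init
    alert.messageText = "Hello from MacRuby!"  # maps to setMessageText:
    alert.runModal
    NSApp.terminate(nil)
  end
end

app = NSApplication.sharedApplication
app.delegate = AppDelegate.new
app.run
```

The point to notice is that you’re calling the real Cocoa classes directly – MacRuby maps Objective-C setters and selectors onto ordinary-looking Ruby method calls.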

Next, read up on how to build a basic app. Choice reads include:

Not a fan of reading? Alex Vollmer and Geoffrey Grosenbach have put together a Meet MacRuby screencast. It costs $12, but you’re going to be raking in the millions with your new Mac app on the App Store, right?

Finally, once you’re happy with the idea of developing Mac software in Ruby, you’ll need to become a member of the Mac Developer Program. This is not the same as the iOS Developer Program and you’ll need to pay another $99 per year fee to get into it. What do you get? Private discussion forums, technical support, pre-release software (including OS X builds), and the ability to sign and submit Mac apps to Apple for inclusion in the Mac App Store.

Disclaimer: If Apple comes out and says you have to use Objective-C for your Mac App Store apps, don’t blame me!

The Companies Making Ruby Inside Possible in October 2010

It’s time for us to thank the companies who help keep Ruby Inside (and often other Ruby sites) going by sponsoring our work. Luckily, they’re all pretty interesting in their own right and have some worthwhile products and services to check out.

Joyent — Public Cloud Hosting for Rails

Joyent is a leading infrastructure provider to some of the fastest growing businesses on the web, including those in the social gaming, digital agency, publishing, eCommerce, and iOS industries. Joyent helped customer AKQA, an agency for many of the world’s leading brands, scale on demand to meet wildly successful online campaigns. Joyent helped another customer, Context Optional, a leading provider of social marketing software and services, scale at Facebook levels and support millions of users within the first months of launch.

New Relic — On-demand Application Management

New Relic is a Java and Ruby application performance and reliability monitoring and management service that started life as a Rails-only service (and it’s still great for that!). It’s truly enterprise-grade software but with the flexibility and accessibility of annual, monthly, or “on demand” pricing, catering to nearly all types of customer. With New Relic you can monitor your apps, find slow transactions, see specific SQL queries, and even run a code-level thread profile. Trivia: New Relic is an anagram of founder Lew Cirne’s name!

Linode — Xen VPS Hosting

Linode is a Xen-based VPS (virtual private server) hosting service that’s now in its 8th year. Plans start at $19.95 per month for a plan with 512MB of RAM, 16GB of storage, and 200GB of transfer bandwidth. Want 1GB of RAM, 32GB of storage, and 400GB of bandwidth? That’s $39.95 per month. Linode’s major advantages over the competition are reliability and performance (as shown in these performance tests by Eivind Uggedal) and I’ve even “downgraded” from dedicated servers to using Linode because they’re almost as good but for a fraction of the cost!

A non-disclaimer: Ruby Inside is hosted with Linode but the hosting is all paid for separately and is not related to their sponsorship. I’ll be sticking with them – they provide an amazing service.

Recurly — Subscription Billing In 3 Easy Steps

Recurly is a recurring billing service, ideal for webapps and other subscription-based systems. Recurly’s goal is to help you boost your monthly subscription revenue without getting in the way. From their Web site you can sign up for a free trial and get playing in minutes. The customer experience is fully customizable and there’s an “Advanced Subscription Billing” API you can use directly from your app(s).

Want to join them?

If you’re interested in sponsoring Ruby Inside, get in touch with our advertising guru James Avery using this form. We have a great opportunity for any companies interested in being seen in the Ruby and Rails worlds. On a monthly basis (or just a 2-week run, if you prefer) you can take a spot in the “Web Publishers Room” (~75k impressions a month across 15 different Ruby-related sites), Ruby Inside (170–200k impressions per month), RubyFlow (~70k), and Rails Inside (~25k), as well as a mention in a post like this. So that’s about 350k impressions over 18 well-known Ruby and Rails sites!