More Lessons Learned

Last year, when Obtiva was purchased by Groupon, I wrote a “what I learned” post talking about things I thought I came to understand about software projects after working on a bunch of them. Now that I’ve moved on from Groupon, I started to think about what, if anything, I learned while I was there.

I keep coming back to three different things — this is a more personal set of lessons than the last batch, so maybe they’ll be less generally useful. Maybe not, though. Think of it as a little long-term career management advice.

These are all things that I think I kind of knew going in, but there’s knowing something and knowing something, know what I mean?

Also — nothing here is meant to say anything against Groupon Engineering — I freely admit that what I’m talking about are issues of me managing my life and time.

Coding is the Input to Everything I Do

I like coding. (I thought I’d start with something controversial…) I like it a lot. That said, I’ve never been the guy who needed hours more coding in his week above and beyond work. For most of my professional career, my personal projects have been writing projects. And while I love to continually learn new things, it’s been a long time since I felt the need to forgo sleep to do so.

Groupon was the first time in my professional career where coding was not a primary responsibility — my primary responsibility was organizing training. And I’ll stipulate that I went into it eyes open. It sounded pretty good. No coding project deadlines, and I love to teach. A room of students who have to listen to me sounds good.

At first, I was actively building the curriculum, and that was pretty good. Then there was less curriculum work to do, and for reasons that I don’t really need to detail, the things asked of me became a lot more administrative. The point is that as I got further away from everyday coding, I felt like I got less good at all kinds of things that I want to be good at. I didn’t have interesting problems to blog about, and I wasn’t learning new things as much. I felt I had less credibility as a teacher and speaker as I got a little removed from practice. Combine this with my occasional tendency toward impostor syndrome, and things got less fun quickly.

There were options available to me at Groupon that I chose not to take for reasons good and bad. The main point for me is that by the time I realized what I was losing, it was hard for me to feel like I could get it back. The point for you, I guess, is to be confident in knowing what you like to do and what parts of your work are satisfying. Know what you need to have as your inputs to be successful over the long term.

Big Companies are not Small Companies

I know, duh.

I’ve joked for years about “big company problems” vs. “small company problems”. In a small company you have to maintain the CI server yourself. In a big company, there’s a whole IT department, but it takes you six months to requisition a CI server. (True story, at Motorola. Also they told me the CI server was causing too much network traffic, and did I have to run it after every checkin.)

There are two structural issues at big companies that have tended to drive me batty. The first — that people are often making decisions about other people who they don’t know and only have a dim idea of what they do — was not a major issue for me at Groupon. The second — that big companies tend to encourage people to specialize — decidedly was.

I want this post to be about me, not about them. So here’s a related story: when I first entered the job market, I wound up with two serious offers, salary identical. One of them was at Nokia, for an R&D department in Boston where I would have done usability research tangentially related to what I had been doing as a grad student. The other was what we’d now call a small web consulting shop, 12 people or so split between two cities. I’d never done any web programming. They had done only a little bit more (they thought of themselves as documentary filmmakers by trade). When I went for my interview, the CEO of the company was vacuuming the floor of their 2-person Boston office.

Obviously I went with the tiny company, which even as I type this sounds kind of insane. And while I’d love to say something self-serving about how I picked the job that scared me, I don’t think that’s true (both jobs scared me). I do think, though, that I was excited by the prospect of doing a lot of different things. Which I did; the job turned out to be a kind of immersion in the entire lifecycle of software projects, in the way that pouring ice water on somebody’s head is kind of a way to get their attention.

Ever since, I’ve been happiest when I’ve been able to do all kinds of different things on a regular basis. Big companies, of course, tend to specialize because they need to, and because they can. Once you have 200 developers, suddenly you can spare somebody to be full-time in charge of improving training. (Well, almost full-time…) Which sounded great, for a while, but then, see point #1. I flatter myself that I do a lot of things well, and whether that’s true or not, I still want to try to do a lot of things.

Introversion and local maxima

This one is a little tricky and it’s really a personal anti-pattern.

Look, I’m an introvert in the classic definition. Every time I’ve taken a Myers-Briggs test, I bury the needle for I and N. On a day-to-day basis, one thing this means is that, while I usually like my co-workers (and my Obtiva/Groupon colleagues are an outstanding group of people), I’ll often choose, say, eating at my desk over joining a group of people in the lunchroom.

On a related note, for most of my last six to nine months at Groupon I was a team of one. Then the person I reported to left, and I wasn’t even reporting to anybody else in Chicago. I swear this is true — I was literally sitting in a corner with nobody on my two orthogonal sides. I’m saying I was a little isolated. I’m also saying that of course I could have handled it better. That’s part of my point — one thing I learned is that what seemed like the best thing to do on a day-to-day basis ended up being isolating in the long term. I could go large chunks of a day without interacting with co-workers. Even a staggering introvert like myself has limits.

What does it mean?

Dunno. Just being a self-indulgent blogger. I expect most of you to read this, roll your eyes and say “Duh.”

I do know that I’ve spent most of my six weeks at Table XI coding and helping run a small web project. I know it feels great even when it gets weird. It feels like I’m using muscles that got a little rusty. (I’d have some technical blog posts for you, but I’m backed up with the book. Coming, I promise.)

Table XI provides lunch in house every day, which makes it a lot easier to actually talk to co-workers. Which is good. (I realize this lunch thing sounds totally insane to a significant percentage of you. I’m okay with that.) During my first week, I had a meeting where we planned out what kinds of things would happen as part of my first few months. One of my cards was “do something new”. I don’t know exactly what yet, but it’s important to me to keep moving forward.

Thanks for listening. I hope you will, to paraphrase Tom Lehrer, find this valuable in some bizarre set of circumstances some day.

Ruby Programming 39th Batch: Registrations now open

Registrations are now open for RubyLearning’s popular Ruby programming course. This is an intensive, online course for beginners that helps you get started with Ruby programming.

Here is what Demetris Demetriou, a participant who just graduated, has to say – “When I joined this course I was sceptical about how useful it would be for me, compared to reading material and watching videos on YouTube and thus saving money. After the course started I realised how valuable it was. In the past I had read many Ruby books over and over, but never got really practical with it and never had confidence in it. Lots of theory, but I couldn’t use it. I feel that the exercises in this course, and the support and monitoring from our mentor Victor, made the huge difference that all the books in the past didn’t. It wasn’t about reading lots of books, but about taking a few things, getting practical, and understanding them well. I feel I learnt a lot and I’m coming back for more to rubylearning.org. Thanks a lot Victor and Satish and all the other Rubyists who gave us today’s Ruby.”

What’s Ruby?

Ruby

According to http://www.ruby-lang.org/en/ – “Ruby is a dynamic, open source programming language with a focus on simplicity and productivity. Ruby’s elegant syntax is natural to read and easy to write.”

Yukihiro Matsumoto, the creator of Ruby, in an interview says –

I believe people want to express themselves when they program. They don’t want to fight with the language. Programming languages must feel natural to programmers. I tried to make people enjoy programming and concentrate on the fun and creative part of programming when they use Ruby.

What Will I Learn?

In the Ruby programming course, you will learn the essential features of Ruby that you will end up using every day. You will also be introduced to Git, GitHub, HTTP concepts, RubyGems, Rack and Heroku.

Depending on participation levels, we throw a Ruby coding challenge in the mix, right for the level we are at. We have been known to give out a prize or two for the ‘best’ solution.

Who’s It For?

A beginner with some knowledge of programming.

You can read what past participants have to say about the course.

Mentors

Satish Talim, Michael Kohl, Satoshi Asakawa, Victor Goff III and others from the RubyLearning team.

Dates

The course starts on Saturday, 19th Jan. 2013 and runs for seven weeks.

RubyLearning’s IRC Channel

Most of the mentors and students hang out in RubyLearning’s IRC channel (#rubylearning.org on irc.freenode.net) for both technical and non-technical discussions. Everyone benefits from the active discussions on Ruby with the mentors.

How do I register and pay the course fees?

  • The course is based on The Ultimate Guide to Ruby Programming eBook. This book is normally priced at US$ 19.95, and we discount it by US$ 10.00 when you choose the Course+eBook option below.
  • You can pay either by PayPal, by cash via Western Union Money Transfer, or by bank transfer (if you are in India). The fees collected help RubyLearning maintain the site, this Ruby course, and the Ruby eBook, and provide quality content to you.
  • Once you pay the fees below, create an account at RubyLearning.org and send your name and the email id you registered with to satish [at] rubylearning [dot] com. We will enrol you into the course within 48 hours.
  • If you have purchased the eBook at the time of registration, we will personally email you the eBook within 48 hours.

You can pay the Course Fees by selecting one of the three options from the drop-down menu below. Please select your option and then click on the “Pay Now” button.

Options

At the end of this course you should have all the knowledge to explore the wonderful world of Ruby on your own.


Download ‘Advice for Ruby Beginners’ as a .zip file.


Here are some details on how the course works:

Important:

Once the course starts, you can login and start with the lessons any day and time and post your queries in the forum under the relevant lessons. Someone shall always be there to answer them. Just to set the expectations correctly, there is no real-time ‘webcasting’.

Methodology:

  • The mentors shall give you URLs of pages, and sometimes some extra notes, that you need to read through. Read the pre-class reading material at a convenient time of your choice – the dates mentioned are just a guideline. While reading, please make a note of all your doubts, queries, questions, clarifications, and comments about the lesson, and after you have completed all the pages, post these on the forum under the relevant lesson. There could be some questions that relate to something that has not been mentioned or discussed by the mentors thus far; you could post those too. With every post, please mention the operating system of your computer.
  • The mentor shall highlight the important points that you need to remember for that day’s session.
  • There could be exercises every day. Please do them.
  • Participate in the forum by asking and answering questions or starting discussions. Share knowledge and exchange ideas among yourselves during the course period. Participants are strongly encouraged to post technical questions, interesting articles, tools, sample programs or anything that is relevant to the class / lesson. Please do not post a simple "Thank you" note or "Hello" message to the forum. Please be aware that these messages are considered noise by people subscribed to the forum.

Outline of Work Expectations:

  1. Most of the days, you will have exercises to solve. These are there to help you assimilate whatever you have learned till then.
  2. Some days may have some extra assignments / food for thought articles / programs
  3. Above all, do take part in the relevant forums. Past participants will confirm that they learned the best by active participation.

Some Commonly Asked Questions

  • Qs. Is there any specific time when I need to be online?
    Ans. No. You need not be online at a specific time of the day.
  • Qs. Is it important for me to take part in the course forums?
    Ans. YES. You must participate in the forum(s) by asking and answering questions or starting discussions. Share knowledge and exchange ideas among yourselves (participants) during the course period. Participants are strongly encouraged to post technical questions, interesting articles, tools, sample programs or anything that is relevant to the class / lesson. Past participants will confirm that they learned best by active participation.
  • Qs. How much time do I need to spend online for a course, in a day?
    Ans. This will vary from person to person. All depends upon your comfort level and the amount of time you want to spend on a particular lesson or task.
  • Qs. Is there any specific set time for feedback (e.g., any mentor responds to me within 24 hours?)
    Ans. Normally somebody should answer your query / question within 24 hours.
  • Qs. What happens if nobody answers my questions / queries?
    Ans. Normally, that will not happen. In case you feel that your question / query is not answered, then please post the same in the thread – “Any UnAnswered Questions / Queries”.
  • Qs. What happens to the class (or forums) after a course is over? Can you keep it open for a few more days so that students can complete and discuss too?
    Ans. The course and its forum are open for a month after the last day of the course.

Remember, the idea is to have fun learning Ruby.





Run JRuby on Heroku Right Now

Over a year ago Heroku launched the Cedar stack and the ability to run Java on our platform. Java is known as a powerful language – capable of performing at large scale. Much of this potential comes from the JVM that Java runs on. The JVM is the stable, optimized, cross-platform virtual machine that also powers other languages including Scala and Clojure. Starting today you can leverage the power of the JVM in your Ruby applications without learning a new language, by using JRuby on Heroku.

After a beta process with several large production applications, we are pleased to move JRuby support into general availability immediately. One of those companies, Travis CI, which provides free CI testing for open source repositories and a pro plan for private projects, was a JRuby beta tester. Josh Kalderimis of the Travis team had this to say about using JRuby on Heroku:

We love JRuby, everything from the threading support to having the power of the JVM at our fingertips. But what we love most is that we can set up a JRuby app in seconds, the same way as all of our other Heroku apps. Git push and it’s live, no matter what the language.

We’ve been working with the JRuby team to make sure that the experience using the language on Heroku is going to be everything you’ve come to expect from using our platform. So why should you be interested in running JRuby?

Why JRuby

If you’re coming from a Java background and want to use a more dynamic language, JRuby allows you to leverage the syntax of Ruby with the ability to run JVM-based libraries. If you’re a Ruby developer already on Heroku, the JRuby implementation has several unique features that you can leverage. The most prevalent difference between running code on JRuby and on MRI (cRuby) is JRuby’s lack of a Global VM Lock. This means you can run multiple threads of Ruby code in parallel within the same process. While cRuby does allow you to perform IO and other non-Ruby operations in parallel threads, running Ruby code in parallel can only be done in multiple processes. The second difference is the JVM ecosystem. JRuby can use Java libraries such as JDBC-based database drivers. Many of these libraries have been heavily optimized and can offer speed upgrades.
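To make the threading difference concrete, here is a small sketch (the prime-counting workload and the two-way split are invented for illustration). The same code runs on both implementations; on JRuby the threads execute Ruby code in parallel across cores, while on MRI only one thread runs Ruby at a time:

```ruby
# CPU-bound work split across two threads. On JRuby these threads run
# Ruby code in parallel; on MRI the GVL serializes them, so you'd reach
# for multiple processes to get the same effect.
def count_primes(range)
  range.count do |n|
    n > 1 && (2..Math.sqrt(n)).none? { |d| (n % d).zero? }
  end
end

threads = [(2..5_000), (5_001..10_000)].map do |range|
  Thread.new { count_primes(range) }
end

# Thread#value joins the thread and returns its block's result
total = threads.sum(&:value)
puts total  # => 1229, the number of primes below 10,000
```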

JRuby on Heroku

JRuby on Heroku lowers the barrier of entry to both learning and running a new language in production. The interface of JRuby with the Heroku platform is the same as our other languages: you push your code to us and we do the rest. You don’t need to think about all of the details of running a new language. The result is you get to focus on adding features, not on how to deploy and keep your systems up.

We have been working together with the JRuby community to make sure the experience is a good one. Charles Nutter, the co-lead of JRuby, is excited about the future of running JRuby on Heroku:

One of the most frequently-requested features for JRuby isn’t a JRuby feature at all…it’s support for JRuby on Heroku. We’re very excited that Heroku now officially supports JRuby, and we’re looking forward to working with and supporting Heroku users trying out JRuby on their cloud of choice.

By normalizing the deployment interface across implementations, we hope to ease the process of trying new interpreters within the Ruby community. We are excited to see a new class of applications, running Ruby on the JVM, deployed and supported on Heroku. With all these options, which Ruby should you use in production?

Which Ruby to Use?

Heroku supports many languages, and we have a long and happy history of supporting Ruby. While we continue to invest in the exciting future of MRI, we are also excited about the ability for you to run your code on the interpreter of your choice. This can open up new possibilities, such as taking advantage of different VM optimizations or concurrent Ruby processing.

As you’re trying JRuby, remember that it may behave slightly differently than you’re used to with MRI. If you’re interested in trying JRuby out on an existing Heroku app, you can read more about converting an existing Rails app to use JRuby.

Every app is different and every project has different requirements. Having the ability to quickly and easily run your app in production on multiple Ruby VMs gives you the power to choose.

Try it Today

If you have an existing Rails app you can deploy your existing app on JRuby. If you’re just starting from scratch, running JRuby on Heroku is a breeze. All you need to do is specify the version of Ruby you want to run, the engine, and the engine version in your Gemfile:

ruby '1.9.3', engine: 'jruby', engine_version: '1.7.1'

You’ll need to run bundle install with JRuby locally, then commit the results and push to Heroku:

$ git add .
$ git commit -m "trying JRuby on Heroku"
$ git push heroku master
Counting objects: 692, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (662/662), done.
Writing objects: 100% (692/692), 141.01 KiB, done.
Total 692 (delta 379), reused 0 (delta 0)

-----> Heroku receiving push
-----> Ruby/Rails app detected
-----> Using Ruby version: ruby-1.9.3-jruby-1.7.1
-----> Installing JVM: openjdk7-latest
-----> Installing dependencies using Bundler version 1.2.1
# ...

That should be all you need to do to run JRuby on Heroku. If you’re converting an existing Rails application, please read moving an existing Rails app to run on JRuby.

While you’re trying JRuby out on Heroku you can also try out Ruby 2.0.0 preview, before it is released in February.

Conclusion

With the release of JRuby on Heroku you can now run your code on multiple VMs and leverage concurrent Ruby code in production. You’ve got a standard interface to deployments and the power to choose the right tool for the right job. Special thanks to all of the customers who tried the JRuby beta, and to the JRuby team for being available for technical support. Give JRuby a try and let us know what you think.

The New Gist: What It Is and What It Could Be

Gist is an incredible tool by Github for quickly sharing code, text and files. It has syntax highlighting and rendering for a huge number of programming languages including Markdown for text. For many techies, including myself, Gist is an indispensable tool for quickly sharing code and content with coworkers.

Gist has been around for several years now and, when compared with the pace of development on the main Github.com property, has been relatively neglected. Thankfully, Github recently updated Gist with a fresh new codebase and UI. As a heavy user of Gist I have some thoughts on this update, where it hits the mark and where it’s still lacking.

Search has long been sorely needed in Gist. It is not uncommon for power users to have several hundred to thousands of gists and the previous linear list-view based on creation date was inadequate. Immediate recall of a gist based on a search query was the primary use-case I had in mind when creating Gisted – a tool to quickly search and access all your gists. So, seeing a native Gist search feature was very welcome for me.

Unfortunately, it leaves a bit to be desired. First, the search is case-sensitive, so searching for proposal is not the same as searching for Proposal. When searching my gists I never care about case; I just want to quickly find the most relevant gist containing that term. Fortunately, I imagine this is a very easy fix on Github’s end and expect it will be remedied shortly (based on nothing but my intuition).

Indexing

However, more fundamentally, search seems to only apply to the description of your gist (gists don’t have titles – the closest thing is what is labeled as the description). While I try to be very conscious of creating meaningful descriptions, when I search for them I often use some distinct term from within the gist content itself. Searching only based on description is like searching Google only based on the title of the web page.

Consider the results from the new gist search for a term I know exists: dev

Only one result? I think not. Now against descriptions and file contents:

There are lots of relevant results missed by Gist’s search mostly due to the lack of content indexing. Search can be a powerful utility for Gists but it needs some indexing refinement yet.
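The difference indexing makes is easy to sketch in Ruby. The gist records and field names below are hypothetical, not the Gist API; the point is that description-only search misses gists whose file contents match:

```ruby
# Hypothetical gist records; real ones would come from the Gist API.
gists = [
  { description: "Deploy notes", files: { "deploy.sh" => "cap dev deploy" } },
  { description: "dev box setup", files: { "setup.md" => "install rbenv" } }
]

term = "dev"

# Description-only matching, roughly what the new Gist search does today
by_description = gists.select { |g| g[:description].downcase.include?(term) }

# Matching against descriptions plus file contents, case-insensitively
by_content = gists.select do |g|
  g[:description].downcase.include?(term) ||
    g[:files].values.any? { |body| body.downcase.include?(term) }
end

puts by_description.size  # 1 result
puts by_content.size      # 2 results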

Advanced operators

Relevant but basic search is a must-have for most users. Search with filtering and other operators is a must-have for power users. For instance, filtering by owner is a great way to quickly list the gists you’ve starred by others. I like this implemented with the @ prefix notation and use it frequently:

The new Gist doesn’t seem to have filtering or any advanced features like phrases ("exactly this") or operators (AND, -). These tend to be built-in features of any search index so I imagine Github is easing into search and will turn these on once they feel comfortable with the infrastructure.

Lists

In the old Gist you really only had one way to view your gists: a list of your created gists or your starred gists, ordered by when they were created. This was incredibly limiting. The new Gist ushers in several refinements, including:

  • The ability to sort gists by their updated date. However, this value is not sticky so I find myself always having to select it when I just want it as my default.
  • A much better partial rendering of each gist in the list, allowing you to see more of the gist and know which one you’re looking for.

These are just a few of the things that make using Gist a little more enjoyable than the old interface.

Revisions

Although gists have always been backed by a full git repo you didn’t see much of that benefit in the web UI. You had to clone the repo locally to see version diffs and manually fetch other remotes to compare and merge forked versions.

The new Gist takes a small step to solving this puzzle, allowing you to view diffs between your gist’s revisions.

However, it still doesn’t provide an easy way to view diffs or perform merges across forks. These are key collaboration features that would remove significant friction from using gists now. I can only hope Github is seriously thinking about enhancing this aspect of the product.

Content focus

The new Gist has a somewhat awkward focus on the gist files rather than the descriptions. I say awkward because, while I appreciate the direction of putting the content front and center, there are some artifacts that betray the intent.

For instance, when viewing a gist in a list it’s the filename of the gist that gets top billing:

However, gists can have multiple files making it an odd decision to choose only one (the first) to key off of.

Additionally, if files/content are the focus, they should be a first-class citizen in search but are instead ignored (as previously discussed).

Still missing

Some miscellaneous features I was hoping would be added in the new Gist include:

  • A resurrection of comment notifications! Gisted will do this for you, but it really should be a natively supported feature.
  • Markdown editing and rendering parity with Github proper. If you look at your standard project README on Github you’ll notice you can edit the Markdown inline with decent highlighting, preview your content and, when rendered, sections are automatically anchored. Markdown is such a core feature of text-based collaboration that parity here is essential.
  • A de-emphasis of the public gist-stream (now called “Discover Gists”). I just don’t see the value of randomly browsing new gists and think that real estate could be better used.

Summary

The new Gist is definitely an improvement over the old one. However, I find it mostly just polishes existing features and doesn’t directly address some of the larger issues.

I would encourage Github to focus on the main uses of Gist. From my perspective, gists are used mainly as a collaboration tool. While they’re backed by a full git repo, that is mostly an implementation detail. Commenting, managing collaborator modifications, and finding gists across several sources should be well supported use-cases.

I suspect we’ll see the pace of development on Gist quicken now that a new codebase is in place. Removing technical debt often removes roadblocks that may have prevented a product from evolving. I can only hope if I revisit this post several months from now I’ll have to significantly edit some of my more critical points.

Github has been incredibly supportive in my use of the Gist API and my work with them on this front only reinforces their developer-focused reputation. They’ll get Gist right, it’s just a matter of time.

Given my dependence on Gist for work I have a vested interest in its success. Any critical points made here were done so only in hopes of seeing it evolve into a better product.

RubyMotion and our first Audio Book!

RubyMotion and our first audio book, Pomodoro Technique Illustrated, are now available.

#396 Importing CSV and Excel

Allow users to import records into the database by uploading a CSV or Excel document. Here I show how to use Roo to parse these files and present a solution for validations.
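The core of such an import is mapping rows to attribute hashes. Here is a minimal sketch using Ruby’s stdlib CSV rather than Roo (Roo wraps Excel files in a similar row-based API); the product attributes are invented for illustration:

```ruby
require "csv"

data = <<~ROWS
  name,price
  Acoustic Guitar,1025.00
  Agile Web Development with Rails,43.75
ROWS

# Parse with headers so each row acts like a hash, then map each row
# to an attributes hash you could hand to a model's create method.
rows = CSV.parse(data, headers: true)
products = rows.map do |row|
  { name: row["name"], price: row["price"].to_f }
end

products.each { |p| puts "#{p[:name]}: #{p[:price]}" }
```

Validation then falls out naturally: build a record from each attributes hash and report the rows whose records fail to validate.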

I’m Running to Reform the W3C’s TAG

Elections for the W3C’s Technical Architecture Group are underway, and I’m running!

There are nine candidates for four open seats. Among the nine candidates, Alex Russell, Anne van Kesteren, Peter Linss, and Marcos Cáceres are running on a reform platform. What is the TAG, and what do I mean by reform?

What is the TAG?

According to the TAG’s charter, it has several roles:

  • to document and build consensus around principles of Web architecture
  • to interpret and clarify these principles when necessary
  • to resolve issues involving general Web architecture brought to the TAG
  • to help coordinate cross-technology architecture developments inside and outside W3C

As Alex has said before, the existing web architecture needs reform that would make it more layered. We should be able to explain the declarative parts of the spec (like markup) in terms of lower level primitives that compose well and that developers can use for other purposes.

And the W3C must coordinate much more closely with TC39, the (very active) committee that is designing the future of JavaScript. As a member of both TC39 and the W3C, I believe that it is vital that as we build the future of the web platform, both organizations work closely together to ensure that the future is both structurally coherent and pleasant for developers of the web platform to use.

Developers

I am running as a full-time developer on the web platform to bring that perspective to the TAG.

For the past several years, I have lobbied for more developer involvement in the standards process through the jQuery organization. This year, the jQuery Foundation joined both the W3C and ECMA, giving web developers direct representatives in the consensus-building process of building the future.

Many web developers take a very cynical attitude towards the standards process, still burned from the flames of the first browser wars. As a group, web developers also have a very pragmatic perspective: because we can’t use new features in the short-term, it’s very costly to take an early interest in standards that aren’t even done yet.

Of course, as a group, we developers don’t hesitate to complain about standards that didn’t turn out the way we would like.

(The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHOULD”, “SHOULD NOT”, “MAY”, and “OPTIONAL” in the normative parts of this document are to be interpreted as described in RFC2119.)

The W3C and its working groups MUST continue to evangelize to developers about the importance of participating early and often. We MUST help more developers understand the virtues of broad solutions and looking beyond specific present-day scenarios. And we MUST evolve to think of web developers not simply as “authors” of content, but as sophisticated developers on the most popular software development platform ever conceived.

Layering

When working with Tom Dale on Ember.js, we often joke that our APIs are layered, like a delicious cake.

What we mean by layering is that our high-level features are built upon publicly exposed lower-level primitives. This gives us the freedom to experiment with easy-to-use concise APIs, while making it possible for people with slightly different needs to still make use of our hard implementation work. In many cases, such as in our data abstraction, we have multiple layers, making it possible for people to implement their requirements at the appropriate level of abstraction.
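The idea translates to any language. A tiny Ruby sketch of the pattern (class and method names invented for illustration): the high-level convenience method is built only from public lower-level primitives, so users with different needs can drop down a layer:

```ruby
class Store
  # Low-level primitives: public, small, composable
  def write(key, value)
    (@data ||= {})[key] = value
  end

  def read(key)
    (@data || {})[key]
  end

  # High-level convenience, built entirely from the public primitives
  # above: return a cached value, or compute, cache, and return it.
  def fetch(key)
    read(key) || yield.tap { |value| write(key, value) }
  end
end

store = Store.new
store.fetch("a") { 1 }       # computes and caches 1
puts store.fetch("a") { 2 }  # prints 1, served from the lower layer
```

Because `read` and `write` are public, a caller whose needs don’t match `fetch` can still reuse the underlying work, which is the property App Cache lacks and IndexedDB-style specs provide.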

It can be tempting to build primitives and leave it up to third parties to build the higher level APIs. It can also be tempting to build higher level APIs only for particular scenarios, to quickly solve a problem.

Both approaches are prevalent on the web platform. Specs like IndexedDB are built at a very low level of abstraction, leaving it up to library authors to build a higher level of abstraction. In contrast, features like App Cache are built at a high level of abstraction, for a particular use-case, with no lower level primitives to use if a user’s exact requirements do not match the assumptions of the specification.

Alex’s effort on this topic is focused on Web Components and Shadow DOM, an effort to explain the semantics of existing tags in terms of lower-level primitives. These primitives allow web developers to create new kinds of elements that can have a similar level of sophistication to the built-in elements. Eventually, it should be possible to describe how existing elements work in terms of these new primitives.

Here’s another example a layer deeper: many parts of the DOM API have magic behavior that is extremely difficult to explain in terms of the exposed API of ECMAScript 3. For example, the innerHTML property has side-effects, and ES3 does not provide a mechanism for declaring setters. The ECMAScript 5 specification provides some additional primitives that make it possible to explain more of the existing DOM behavior in terms of JavaScript. While designing ECMAScript 6, the committee has repeatedly discussed how certain new features could help explain more of the DOM API.
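For illustration, here is a minimal sketch of the ES5 primitive in question: Object.defineProperty can attach a setter that runs arbitrary code on assignment, which is roughly the shape of innerHTML’s magic. (The `FakeElement` class and its toy "parser" are stand-ins of my own, not real DOM internals.)

```javascript
function FakeElement() {
  this._children = [];
}

// An accessor property with a side-effecting setter — something
// ES3 could not express, but ES5 can.
Object.defineProperty(FakeElement.prototype, "innerHTML", {
  get: function () {
    return this._children.join("");
  },
  set: function (html) {
    // Side effect: assigning replaces the element's parsed children.
    // (A real HTML parser is elided; a crude regex stands in for it.)
    this._children = html.match(/<[^>]+>[^<]*<\/[^>]+>/g) || [html];
  }
});

var el = new FakeElement();
el.innerHTML = "<p>hi</p><p>bye</p>";
console.log(el._children.length); // 2
```

With this primitive available, more of the DOM’s behavior becomes describable in plain JavaScript rather than hand-waved as browser magic.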

Today, the web platform inherits a large number of existing specifications designed at one of the ends of the layering spectrum. I would like to see the TAG make an explicit effort to describe how the working groups can reform existing APIs to have better layering semantics, and to encourage them to build new specifications with layering in mind.

TC39 and JavaScript

Today, developers of the web platform increasingly use JavaScript to develop full-blown applications that compete with their native counterparts.

This has led to a renaissance in JavaScript implementations, and more focus on the ECMAScript specification itself by TC39. It is important that the evolution of JavaScript and the DOM APIs take one another into consideration, so that developers perceive them as harmonious, rather than awkward and ungainly.

Any developer who has worked with NodeList and related APIs knows that the discrepancies between DOM Array-likes and JavaScript Arrays cause pain. Alex has talked before about how standardizing subclassing of built-in objects would improve this situation. This would allow the W3C to explicitly subclass Array for its Array-like constructs in a well-understood, compatible way. That proposal will be strongest if it is championed by active members of both TC39 and the HTML working group.
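To make the pain concrete, here is a small sketch (no DOM required — the `fakeNodeList` object below is a stand-in for a real NodeList) of the conversion dance developers do today because Array-likes don’t inherit Array’s methods:

```javascript
// The standard workaround: copy an array-like into a real Array.
function toArray(arrayLike) {
  return Array.prototype.slice.call(arrayLike);
}

// A stand-in for a NodeList: indexed, with a length, but not an Array.
var fakeNodeList = { 0: "a", 1: "b", 2: "c", length: 3 };

// fakeNodeList.map does not exist; after conversion, Array methods work.
var upper = toArray(fakeNodeList).map(function (s) {
  return s.toUpperCase();
});
console.log(upper); // [ 'A', 'B', 'C' ]
```

If the platform’s Array-likes were real Array subclasses, this boilerplate would disappear.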

Similarly, TC39 has worked tirelessly on a proposal for loading JavaScript in an environment-agnostic way (the “modules” proposal). That proposal, especially the aspects that could impact the network stack, would be stronger with the direct involvement of an interested member of a relevant W3C working group.

As the web’s development picks up pace, the W3C cannot see itself as an organization that interacts with ECMA at the periphery. It must see itself as a close partner with TC39 in the development and evolution of the web platform.

Progress

If that (and Alex’s similar post) sounds like progress to you, I’d appreciate your organization’s vote. My fellow reformers Alex Russell, Anne van Kesteren, Peter Linss and Marcos Cáceres are also running for reform.

AC reps for each organization can vote here and have 4 votes to allocate in this election. Voting closes near the end of the month, and it’s also holiday season, so if you work at a member organization and aren’t the AC rep, please, find out who that person in your organization is and make sure they vote.

As Alex said:

The TAG can’t fix the web or the W3C, but I believe that with the right people involved it can do a lot more to help the well-intentioned people who are hard at work in the WGs to build in smarter ways that pay all of us back in the long run.

I’m Running to Reform the W3C’s TAG

Elections for the W3C’s Technical Architecture Group are underway, and I’m running!

There are nine candidates for four open seats. Among the nine candidates, Alex Russell, Anne van Kesteren, Peter Linss, and Marcos Cáceres are running on a reform platform. What is the TAG, and what do I mean by reform?

What is the TAG?

According to the TAG’s charter, it has several roles:

  • to document and build consensus around principles of Web architecture
  • to interpret and clarify these principles when necessary
  • to resolve issues involving general Web architecture brought to the TAG
  • to help coordinate cross-technology architecture developments inside and outside W3C

As Alex has said before, the existing web architecture needs reform that would make it more layered. We should be able to explain the declarative parts of the spec (like markup) in terms of lower level primitives that compose well and that developers can use.

Postgres 9.2 – The Database You Helped Build

Hosting your data on one of the largest fleets of databases in the world comes with certain advantages. One of those benefits is that we can aggregate the collective pain points that face our users and work within the Postgres community to help find solutions to them.

In the previous year we worked very closely with the broader Postgres community to build features, fix bugs, and resolve pain points. You’ve already seen some of the results of that work in the form of extension support on Heroku and query cancellation. With the 9.2 release we’re delighted to say that with your help, we’ve been able to bring you a whole host of new power and simplicity in your database.

Effective immediately, we’re moving Postgres 9.2 into general availability (GA), and it will become the new default shortly thereafter. Postgres 9.2 is full of simplifications and new features that will make your life better, including:

  • Expressive new datatypes
  • New tools for getting deep insights into your database’s performance
  • User interface improvements

You can request a version 9.2 database from the command line like this:

heroku addons:add heroku-postgresql:dev --version=9.2

Get started by provisioning one today, or read more about the many great features now available in Postgres 9.2 over on the Heroku Postgres blog.

Rails Caching Strategies (a presentation)

Tonight I gave a presentation at the Rails Meetup in Seattle on caching strategies with Rails, and I had a great time. I’m convinced Rails developers don’t use caching enough, and we have so many good options for caching in our apps, we really should be avid practitioners of caching. 🙂

I had a request to put my slides up, so here they are. They are a bit more useful if you have the context of the words I said along with them, but that’s the way these things go, right?

For those of you who were there, I suppose I should apologize for mentioning (so many times) the Airbrake competitor that I recently launched. After the 2nd or so time I mentioned it, it became a bit of a joke to see how many times I could mention it, and I do hope I didn’t annoy you. 🙂

PragPub for December, upcoming Holiday Treats

The December issue of PragPub magazine is now available, along with details of upcoming Holiday Treats.

Leprechauns and Unicorns of Software

Something like 25 years ago, Bill James wrote an essay asserting that one difference between smart and dumb baseball organizations was that dumb organizations behaved as though Major League talent was normally distributed on a standard bell curve, and smart organizations knew that talent is not (it’s the far end of a bell curve).

Hold that thought.

So I just finished reading Laurent Bossavit’s self-published book The Leprechauns of Software Engineering, which I recommend quite highly. Bossavit looks at some of Software Engineering’s core pieces of received wisdom: that there is a 10x productivity difference between developers, or that the cost of change rises exponentially over the course of a project. The kind of thing that all of us have heard and said dozens of times.

Bossavit does an interesting thing. He goes back to the original papers, sometimes wading through a maze of citations, to find the original source of these claims and to find out what empirical backing they may have. Turns out the answer is “basically none”. One by one, these claims are shown to be the result of small sample sizes, or no sample, or other methodological problems. They’ve become authoritative by force of repetition. (Which doesn’t mean they are wrong, just that we don’t know as much as we think we do.)

If you are like me, which is to say you are fascinated by the idea of empirical studies of software, but deeply skeptical of the practice, this book will take your head to some fun places.

The Bill James analogy, for example. What James is talking about is that in order to accurately value what you have, you need some idea of the context in which it occurs. In the baseball example, to know how to value a player who hits ten home runs (which we’ll pretend is average, for the sake of oversimplification), it’s helpful to have a good sense of how many players are out there who are capable of hitting twelve, or eight. If you erroneously assume that there are fewer players capable of hitting eight home runs than ten home runs, then some bad management decisions are in your future. Specifically, you’ll overvalue your ten home run player (or more likely, overpay for somebody else’s ten home run player, when your own eight home run player is available at a fraction of the cost.)

I’m wary of taking this analogy too far, not least because it doesn’t necessarily reflect well on my overeducated typing fingers. There are all kinds of reasons to think that the curve for web developers is different from the one for baseball players. We don’t have a good idea of what the distribution curve of productivity is for developers, even if we had a good idea of what productivity is (we don’t) or a way of measuring it (ditto) or any idea of how individuals improve or decline based on teams (guess what). That said, I do not think that I have been on an actual team where people were genuinely 10x better than other people. (Total newbies notwithstanding, I guess.) Ten times is a lot of times; that’s one person’s week being another person’s half-day. Sustainably.

But see what I did there? I palmed a card. I said that newbies don’t count in my personal recollection of my teams’ productivity. Why not? For a good reason — I don’t think the productivity of somebody in intense learning mode has a lot to tell me about how software teams work. But that’s my decision, and it’s subjective, and suddenly I’m deciding which of my hypothetical data “really counts” and which doesn’t. That’s a normal process, of course, but it’s not How Science Is Supposed To Work. In reality, I’m already skeptical of the 10x finding, and pulling newbies out moves the data in a way I’m comfortable with, so I’m not likely to examine that decision too closely. (See Bias, Confirmation.)

I spent about five years reading and writing social science academic work, and if there’s one thing I learned it’s to always be skeptical of any finding that confirms the author’s preconceptions. (See also: Stephen Jay Gould’s The Mismeasure of Man — well worth your time if you deal with data.) Data is complicated; any real study is going to generate a ton of it, and seemingly trivial decisions about how to manage it can have dramatic effects on the perceived results.

I spent a lot of time researching education, which shares with software engineering the idea that individual performance is much harder to measure, or even to define, than you might assume at first glance. Empirical education studies tend to fall into one of two groups:

  1. A study under very controlled lab conditions, where the researcher is claiming that a clear data result is applicable to the larger world.
  2. A study in the real world, with messier data, where the researcher is claiming that there is an effect and that it was because of some specific set of causes.

Both kinds of study are problematic — the first often has small or non-representative subject groups, while the second is often a long-term study of one group with real questions as to whether the result is at all reproducible. On top of which you have the Hawthorne Effect (any group that knows it is being observed tends to increase performance no matter what the intervention is) and the effect whose name I can never remember (Goodhart’s Law, it turns out), where the more attention is paid to a specific metric, the less reliable that metric becomes as a proxy for overall performance.

Or, looking at this another way… I got in a conversation at SCNA this year about why SCNA talks so rarely have empirical results about the value of the software techniques discussed. My kind of glib answer was that we’re all a little afraid that empirical results wouldn’t support the value we perceive in what we might call the “SCNA way”. By which I partially mean that we’re afraid that a badly-designed study might suggest that, say, Test-Driven Development had little or no value, and we’d all have to expend energy dealing with it. (But of course, I’d say that any such study is badly-designed, because of confirmation bias.)

But I also mean, I think, that I’m not interested in that kind of empirical testing, and as interesting as I find the pursuit, I have little confidence that it will have much relevance to my day-to-day work. Agile methods, TDD, and what we call “good” coding practices make my job easier and more sane. I have my own experience to draw on for that — which I realize is not science, but it’s working for me. Asking them to be proven the most efficient way to design software seems like impossible icing on the cake. For me, it’s enough that the methods I favor seem to result in saner, more pleasant work environments. It’s weird to simultaneously be interested in empirical results in my field and yet feel they are utterly separated from what I do, but that’s where I am.

Presenting the New Add-ons Site

Heroku Add-ons make it easy for developers to extend their applications with new features and functionality. The Add-on Provider Program has enabled cloud service providers with key business tools, including billing, single sign-on, and an integrated end-user support experience. Since the launch of the Heroku Add-ons site over two years ago, the marketplace has grown to nearly 100 add-ons. As the add-ons ecosystem has grown, we’ve learned a lot about how cloud service providers structure their businesses and how users interact with them.

Today we’re happy to announce the launch of the updated Heroku Add-ons site.

The goal of the new site is to make it even easier to find, compare, purchase, and use add-ons. In addition to categorization, tagging, search, and an add-on showcase, we’ve made it easier to understand the benefits of each add-on, distinguish between plans, access documentation, and provision add-ons from the web or the command line. Here are some highlights of the new design:

Showcase

We’re now featuring add-ons on the homepage in an active rotation based on three criteria: newness, popularity, and staff picks.


Add-ons Showcase

Categories

We’ve introduced categories to help you make more informed decisions about which add-ons are right for your use case, like which database to use.


Add-ons Categories

Search

The home page now features a lightning-fast search field. Each search result includes the CLI command to install the add-on, so if you know the add-on you’re looking for you can be on and off the site in a matter of seconds.


Add-ons Search

The search tool also has some handy vim-inspired keyboard shortcuts:

  • / focuses the search field.
  • esc clears the search field.
  • j (or down arrow) moves you down in the results.
  • k (or up arrow) moves you up in the results.
  • o (or enter) opens the currently selected search result.
  • y selects the CLI command so you can copy it.

Emphasis on Productivity

In the new marketplace, we’ve encouraged add-on providers to highlight the ways in which their add-on will improve developers’ lives. Rather than emphasizing technical commodities like megabytes of cache or number of allowed requests, benefits highlight the high-level value of each service, such as ease of integration, time saved, and higher productivity.


Add-on Benefits: CloudAMQP

Clear differentiation of Plans

The new plan interface makes it easier to distinguish how an add-on’s offerings change across plans.


Add-on Plans

Dev Center Documentation

We’ve added tighter integration with Dev Center for easy access to each add-on’s documentation.


Add-on Documentation

Looking Forward

As of today, the new Add-ons Marketplace is the default for everyone on the platform. Watch closely for updates and new features. To stay up to date as new add-ons enter the marketplace, check out the new add-ons changelog and subscribe to the feed, or follow our new Twitter account, @HerokuAddons.


Heroku is hiring

A Simple Tour of the Ruby MRI Source Code with Pat Shaughnessy

I’m not in Ruby core or, well, even a confident C coder anymore, but I’ve long enjoyed digging in the Ruby MRI source code to understand weird behavior and to pick up stuff for my Ruby course.

Pat Shaughnessy is also a fan of digging around in Ruby’s internals and has written some great posts like How Ruby Executes Your Code, Objects, Classes and Modules, and Exploring Ruby’s Regular Expression Algorithm.

When Pat released his Ruby Under a Microscope book, I knew it would be right up my street! He digs into how objects are represented internally, why MRI, Rubinius and JRuby act in certain ways and, of course, “lots more.”

I invited Pat to take a very high level cruise through the MRI codebase with me so we could share that knowledge with Ruby programmers who haven’t dared take a look ‘under the hood’ and to show it’s not as scary or pointless as it may seem.

It’s 100% free so enjoy it above or on YouTube in 720p HD.

P.S. Pat is happy to do another video digging deeper into how Ruby actually takes your code and executes it and he’s able to walk through the actual virtual machine for us. If the reaction to this video is good, we’ll sit down again and see if we can do it! 🙂

The Last Week in Ruby: A Great Ruby Shirt, RSpec Team Changes and a Sneaky Segfault Trick

Welcome to this week’s roundup of Ruby news cobbled together from my e-mail newsletter, Ruby Weekly.

Highlights include: A time-limited Ruby shirt you can order, a major change in the RSpec project, how to make Ruby 1.9.3 a lot faster with a patch and compiler flags, a sneaky segmentation fault trick, several videos, and a few great jobs.

Featured

The ‘Ruby Guy’ T-Shirt
Grab a t-shirt with a cute ‘Ruby Guy’ mascot on the front in time for Christmas. Comes in both male and female styles in varying sizes. Only available until Thursday, December 6, though, as it’s part of a temporary Teespring campaign. (Note: I have no connection to this; it just looks cool.)

David Chelimsky Hands Over RSpec to New Project Leads
After several years at the helm, David Chelimsky is handing over the reins to Myron Marston and Andy Lindeman for RSpec and rspec-rails respectively. Thanks for all your hard work, David.

Upgrading to Rails 4: A Forthcoming Book (in Beta)
Andy Lindeman of the RSpec core team is working on a new book designed to bring you up to speed with Rails 4. It’s in beta so you can support him now, if you like.

Reading

Making Your Ruby Fly
Andrei Lisnic demonstrates a few compile-time ‘tricks’ you can use to make your MRI Ruby 1.9.3 faster. The benchmark results are compelling.

Avoiding the Tar Pits of Localization
Jeff Casimir gave a talk on the ‘Ruby Hangout’ about the trickiness of handling internationalization and localization and some tools and libraries you can use to help. Lots of notes here or you can watch the video.

Recovering From Segfaults in Ruby, The Sneaky Way
We’ve probably all seen the dreaded ‘segmentation fault’ from Ruby before. Charlie Somerville demonstrates a rather clever but sneaky way you can ‘recover’ from them in plain Ruby. As he says, you probably don’t want to use this trick seriously.

Use Rails Until It Hurts
Evan Light pushes back a little against the recent wave of OO purity and, as DHH calls it, ‘pattern vision.’

Speeding Things Up With JRuby
MRI’s global interpreter lock prevents running code in parallel without forking the Ruby process. That’s where JRuby can help.

Try RubyGems 2.0
Michal Papis demonstrates how you can give the forthcoming RubyGems 2.0 a spin using RVM.

Watching and Listening

Rapid Programming Language Prototypes with Ruby and Racc
At RubyConf 2012, Tom Lee demonstrated how you can use Racc, a LALR(1) parser generator that emits Ruby code from a grammar file, in the process of creating a simple programming language of your own.

A Tour Into An Oddity With Ruby’s Struct Class
In which I look into why Struct.new(:foo?).new(true).foo? doesn’t work, even though the Struct-produced class and its instances are valid. I dive into the MRI source code a bit to get to the bottom of things. 12 minutes in all.

RubyTapas 027: Macros and Modules
Avdi Grimm’s latest Ruby screencast for non-subscribers to his Ruby video site.

A Rails 4.0 Roundup in 3 Videos
A summary of and links to three Rails 4 related videos (all linked in RW before) by Marco Campana. A handy catch-up if you didn’t see them already.

Libraries and Code

Introducing the Rails API Project: Rails for API-only Applications
A set of tools for using Rails to build APIs for both heavy JavaScript applications and non-Web API clients. This isn’t entirely new, but the project has now become more formally established.

Zuck: A Little Helper to Access Facebook’s Advertising API
An early, prototype-stage gem but you may still find it useful.

Jobs

Blazing Cloud is looking for software artisans to join us in handcrafting beautiful mobile experiences. We are looking for people who believe in a whole-product approach and agile development practices, and have a strong sense of quality.

Last but not least..

Come Speak at O’Reilly Fluent 2013
OK, it’s slightly off-topic, but I’m the co-chair for O’Reilly’s JavaScript, HTML5 and browser technology event, and I know many Rubyists are also involved in these areas. Our CFP is open until December 10 and we have lots of awesome stuff lined up.