Lazy blogging with Twitter and Tumblr

Recently, I got tired of manually following links from Twitter Trends to find videos and images. So I quickly put this script together to search for new Twitter results and post any tweets containing links to Tumblr, where the images and videos display inline.

Setup

You’ll need to have a few gems:

  • hpricot
  • twitter
  • ruby-tumblr

sudo gem install hpricot twitter ruby-tumblr

The Script

This script searches for ‘#ELC http’, which should return any tweets containing links. It then rips out each link and processes it (following redirects, extracting images from TwitPic, etc.), and posts the result to Tumblr as a video, image, or link, depending on the final URL. Right now I’m using username/password auth for Twitter since I don’t have an API key set up. The script creates a pid file to ensure that only one of these processes can run at a time. It also uses code from a past post about scraping TwitPics.

#!/usr/local/bin/ruby
require 'rubygems'
gem 'twitter'
gem 'ruby-tumblr'
require 'twitter'
require 'net/http'
require 'tumblr'
require 'hpricot'
require 'fileutils' # needed for FileUtils.rm at the end

exit if File.exists?('scraper.pid')
File.open('scraper.pid','w') { |f| f.write('hey') } # primitive lock file to prevent multiple processes from spawning

TUMB_EMAIL    = 'myemail'
TUMB_PASS     = 'mypass'
TWIT_USERNAME = 'username'
TWIT_PASS     = 'pass'

def rip_twitpic(url)
  begin
    code = url.match(/[\w]+$/).to_s
    unless code.empty? # String#blank? needs ActiveSupport; plain Ruby has #empty?
      uri=URI.parse(url)
      resp=Net::HTTP.get_response(uri)
      html=Hpricot(resp.body)
      html.at("#photo-display")['src']
    end
  rescue Exception => e
    puts "Error extracting twitpic: #{e}"
    url
  end
end


def follow_link(link)
  uri=URI.parse(link)
  begin
    resp = Net::HTTP.get_response(uri)
    # Net::HTTPRedirection is the common parent of MovedPermanently, Found, etc.,
    # so a single is_a? check covers every redirect type
    return follow_link(resp['location']) if resp.is_a?(Net::HTTPRedirection)
    link
  rescue Exception => e
    puts "Error getting #{link}"
    nil
  end
end

def upload_to_tumblr(url, desc=nil)
  Tumblr::API::write(TUMB_EMAIL, TUMB_PASS)  do
    if url.match(/\.(jpe?g|png|gif)$/i) # anchor applies to all extensions, not just .gif
      photo(url, desc||"Photo from Twitter")
    elsif url.match(/^http:\/\/www\.youtube\.com\/watch/)
      video(url, desc||"Video From Twitter")
    elsif url.match(/^http:\/\/twitpic\.com\/[\w]+/)
      photo(rip_twitpic(url), desc||"Photo From Twitter")
    else
      link(url, desc||url)
    end
  end
end

httpauth = Twitter::HTTPAuth.new(TWIT_USERNAME, TWIT_PASS)
twit = Twitter::Base.new(httpauth)
results = Twitter::Search.new('#elc http')

links = []
results.each do |tweet|
  text = tweet.text.match(/http:\/\/\S+/).to_s
  links << text unless text.empty?
end

converted_links = []
links.each do |link|
  final = follow_link(link)
  converted_links << final unless final.nil?
end

converted_links.uniq.each do |link|
  upload_to_tumblr(link)
end

FileUtils.rm('scraper.pid')

Just run the script as a scheduled task (via cron, for example) and it should scrape the links and post them to your Tumblr. Fair warning: this code was written in a couple of hours, so there is definitely room for improvement.
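
For example, a crontab entry along these lines (the paths are placeholders, not from the original setup) would run it every 15 minutes:

*/15 * * * * cd /path/to/script && ./scraper.rb >> scraper.log 2>&1

Since the script exits early when scraper.pid exists, overlapping runs are harmless, though a crashed run will leave a stale pid file that you’ll need to delete by hand.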

Europe Here We Come

Since the beginning of our private beta, Heroku has been used by developers all over the world. Recently, we’ve been delighted to see particularly strong interest from Rubyists in Europe looking to take advantage of the deployment and scalability benefits of our platform.

On their trips to Erlang Factory in London and Kings of Code in Amsterdam, Blake and Orion saw immense interest from both individual hackers and established companies.

In August I’ll be making the trip to several European Ruby user group meetings to catch up with even more users, and hopefully gain a better understanding of what they’d like to see from Heroku in the future.

If you’re in or around any of the listed cities on the meetup dates, please consider stopping by. I’ll likely have some time around each of these events, so feel free to contact me if you have a particular project you’d like to discuss.

Schedule:

8/3 – Aarhus, Denmark
8/4 – Dublin, Ireland
8/5 – Copenhagen, Denmark
8/6 – Berlin, Germany
8/10 – Amsterdam, Netherlands

The Rails Underground 2009 Keynotes: Fred George and Yehuda Katz

I attended the Rails Underground conference in London at the weekend (July 24-25, 2009). As always seems to be the case at these events, I got the most value out of the more theoretical and opinion-based talks rather than ‘how-to’ style presentations. Having said that, Pat Allan and George Palmer gave great talks on their respective thinking_sphinx and couch_foo plugins.

I’m going to concentrate on the keynotes from the two days, which give quite differing perspectives.

Fred George – Rails is a hammer. Not everything is a nail.

In the keynote (video link) for the first day, former ThoughtWorker Fred George warned us against using Rails for the wrong kinds of projects. He started off by discussing how frameworks like Rails let you get started quickly but become harder to manage as the complexity of your problem increases.

The rest of the talk consisted of Fred explaining the architecture of a project for which he decided to roll his own framework with only the parts he needed. Fred explained that he wasn’t a fan of using traditional SQL-based relational databases, as they can force your object model into an unnatural form that might not suit your problem domain. Fred’s project consisted of a HAML/SASS front-end with Sinatra and pure-Ruby models persisting data to YAML files. The problem I have with this kind of approach is that by relying on a disparate set of technologies you run the risk of one or more components becoming obsolete over time.
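
To give a flavour of that stack, here’s a minimal sketch of the approach (the Note model and routes are my own invention for illustration, not Fred’s actual code): a Sinatra app with a pure-Ruby model that persists itself to a YAML file.

require 'rubygems'
require 'sinatra'
require 'yaml'

# A plain-Ruby model that persists to a YAML file -- no database, no ORM
class Note
  STORE = 'notes.yml'
  attr_accessor :title, :body

  def initialize(title, body)
    @title, @body = title, body
  end

  def self.all
    File.exist?(STORE) ? YAML.load_file(STORE) : []
  end

  def save
    notes = Note.all << self
    File.open(STORE, 'w') { |f| f.write(notes.to_yaml) }
  end
end

get '/notes' do
  Note.all.map { |n| n.title }.join("\n") # would render a HAML template in a real app
end

post '/notes' do
  Note.new(params[:title], params[:body]).save
  redirect '/notes'
end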

As a couple of people in the audience mentioned afterwards, Fred didn’t really give an exhaustive comparison of Rails versus the alternatives; he was essentially describing one particular application that he’d written. The basic message I took from the talk was that if your domain model fits well with a relational database structure, then traditional MVC with Rails is a good fit (e.g. administering users). However, a service that just exposes the DB structure and its contents isn’t adding much value (i.e. REST/CRUD on a set of tables). To add value you need to design the models intelligently.

Yehuda Katz – the future is granular

Yehuda Katz gave the keynote (video link) for day two, about how Rails is evolving. Yehuda explained how the Merb and Rails core teams have come together to work on Rails 3.0 – a fusion of the best parts of each framework, but with re-imagined internals. Rails 3.0 isn’t going to remove any of what makes Rails great, but it will hopefully be better for ‘power users’, i.e. those developers who care about how the internals work, and take advantage of the concepts therein.

A large portion of the keynote involved an explanation of why interfaces are a good abstraction (not the Java kind of interface, but the hypothetical kind, i.e. contracts between components). Interfaces give you a calling convention, allowing you to change internal implementations without affecting the calling code. Classes and modules take this to the next level: mixing in modules is more powerful than inheritance, as it allows your classes to learn new things. As Yehuda put it, “Your parents didn’t define all that you can do when you were born. You can learn new things.” With Ruby modules you can swap out small sections of implementation as and when you need them at runtime.
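
As a rough illustration of that last point (my own example, not one from the talk), extending an object with a module teaches it new behaviour at runtime:

module Searchable
  def search(term)
    select { |item| item.include?(term) }
  end
end

list = %w[apple banana cherry]
list.extend(Searchable)  # this one object just learned a new skill
list.search('an')        # => ["banana"]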

This concept is used to allow the different components of Rails 3.0 to be decoupled. For example, you will no longer be forced to use ActionView – you’ll just need something that is “ActionView compliant”. The same applies to ActiveRecord – your models will just need to comply with the ActionModel contract.

Despite what you might think, the contracts for ActionView and ActionModel compliance are actually really simple, requiring just a few methods. If you want the default behaviour, all you need to do is include an existing module. By implementing these interfaces you will end up with something that ‘just works’ with ActionPack, providing you with all the usual form and error helpers. Furthermore, ActionController::Base will essentially just be ActionController::Metal with a bunch of extra modules included… but you can still use a stripped-down version of Metal if you don’t need all that extra functionality.
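
To give a sense of how small such a contract can be, here’s a speculative sketch of a plain Ruby class made friendly to the form helpers. It’s based on the Rails 3 pre-release code at the time of writing, so treat the module names (ActiveModel::Naming, ActiveModel::Conversion) as subject to change:

class Message
  extend  ActiveModel::Naming      # provides model_name for routes and helpers
  include ActiveModel::Conversion  # provides to_model, to_key, to_param

  attr_accessor :subject, :body

  def persisted?
    false # never saved, so form_for treats it as a new record
  end
end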

So, all of this will result in a much more granular Rails, which will allow you to opt in or out of each part if that’s what you want. By pulling in just the parts of Rails you need, you can reduce memory usage and complexity in your apps. This goes some way to answering Fred George’s criticisms of Rails, regarding not being able to select just the parts you need for the job. And maybe it will unite the Ruby community by allowing people to focus on just one de-facto implementation of each component.

Links to slides and videos for each presentation made at the conference can be found on the schedule page of the Rails Underground site.

Also.. Jumpstart Lab is offering workshops teaching Ruby for beginning female programmers (Ruby Jumpstart) on August 1st and 2nd, then beginning Rails (Rails Jumpstart) for everyone on August 15 & 16. Save 10% with code “rubyinside”!

RailsLab: Scaling Your Database – Part 2

In the first Scaling Your Database screencast we learned how to scale our database if our website is read-heavy, but how do we scale if our website is write-heavy? Also, if you’re running MySQL, do you know which database engine your website is using, and why? If you want the answers to these questions, or you just want to learn more about database scaling, it’s time to watch the 18th episode of the Scaling Rails screencast series.
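
(Not sure which engine your tables use? MySQL will tell you; swap in your own schema name for myapp_production:)

mysql> SELECT table_name, engine FROM information_schema.tables
    -> WHERE table_schema = 'myapp_production';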

Summary

In this screencast we first learn the difference between the MyISAM and InnoDB storage engines. We then take a look at two strategies for scaling your database writes: master-master replication and sharding. Along the way we’ll learn about some useful tools for scaling your database, and look at how some big websites like eBay and New Relic shard their databases.

Don’t forget to subscribe to the screencast RSS feed or grab it on iTunes to avoid missing any of these episodes. FYI, these videos look great on an iPhone / iPod if you want something to watch on the go.

Community Highlights: Yehuda Katz

Over the past few months, Rails core team member Yehuda Katz has posted a series of great blog articles describing some of the processes and techniques he’s used while coding Rails 3 with Carl Lerche. In case you haven’t followed his blog posts, I thought I’d repost them here for your educational reading.

Rails 3: The Great Decoupling
is about decoupling components like ActionController and ActionView.

New Rails Isolation Testing
is about the creation of a new test mixin that runs each test case in its own process.

6 Steps to Refactoring Rails
is about the refactoring philosophy he’s using when coding Rails 3.

Rails Edge Architecture
is about the Rails 3 architecture, including AbstractController::Base and ActionController::Http.

Better Module Organization
is about cleaning up the way modules are included.

alias_method_chain in models
is about alternatives to using alias_method_chain, some of which made it into Rails 3 refactorings.

Rails 3 Extension API
is where Yehuda started documenting the new extension APIs that are being added for Rails 3. There’s not a whole lot there yet, but be sure to watch this space in the coming weeks.

Rails 2.3.3 upgrade notes: rack, mocha, and _ids

        I upgraded two apps to <a href="http://weblog.rubyonrails.org/2009/7/20/rails-2-3-3-touching-faster-json-bug-fixes">Rails 2.3.3</a> today. It’s a minor release, and there’s not much to report. But I did run into three minor problems.


<h4>Mocha</h4>


Mocha 0.9.5 started throwing an exception:


<code>NameError: uninitialized constant Mocha::Mockery::ImpersonatingAnyInstanceName</code>


A quick update to Mocha 0.9.7 cleared this up.


<h4>Array parameters in tests</h4>


In functional tests with Test::Unit, passing an array to a parameter stopped working. Previously, I had something like this:

post :create, :user => {:role_ids => [1,2,3]}
This would post the following parameters:

"role_ids"=>["1", "2", "3"]
But after the 2.3.3 update, I started seeing an error:


<code>NoMethodError: undefined method `each' for 1:Fixnum</code>


I’m not sure why this stopped working. (Anyone know?) Changing the integers to strings clears up the error:

post 
Continue reading "Rails 2.3.3 upgrade notes: rack, mocha, and _ids"

Rails 2.3.3 upgrade notes: rack, mocha, and _ids

I upgraded two apps to Rails 2.3.3 today. It’s a minor release, and there’s not much to report. But I did run into three minor problems.

Mocha

Mocha 0.9.5 started throwing an exception:

NameError: uninitialized constant Mocha::Mockery::ImpersonatingAnyInstanceName

A quick update to Mocha 0.9.7 cleared this up.

Array parameters in tests

In functional tests with Test::Unit, passing an array to a parameter stopped working. Previously, I had something like this:


post :create, :user => {:role_ids => [1,2,3]}

This would post the following parameters:


"role_ids"=>["1", "2", "3"]

But after the 2.3.3 update, I started seeing an error:

NoMethodError: undefined method `each' for 1:Fixnum

I’m not sure why this stopped working. (Anyone know?) Changing the integers to strings clears up the error:


post :create, :user => {:role_ids => ["1","2","3"]}

Or


post :create, :user => {:role_ids => [1.to_s,2.to_s,3.to_s]}

Rack

Rack apparently no longer comes bundled with Rails. Or at least, deployment failed on cap deploy with: RubyGem version error: rack(0.4.0 not ~> 1.0.0).

The solution was simple: install (or vendor) Rack 1.0.0.


config.gem 'rack', :version => '>= 1.0.0'

[ActionMailer] Multipart emails with attachments

Ran into a gotcha the other day that I thought was worth a note. I had a multipart email template that was sending invoice receipts in both plain text and HTML format. This all worked fine by just defining the views as order_placed.text.plain.erb and order_placed.text.html.erb; ActionMailer took care of the nitty-gritty.

However, when I wanted to add a PDF attachment to the email with the standard:

attachment :content_type => "application/pdf", :body => file, :filename => "file.pdf"

my emails started coming through with only the attachment, no text.

After some light research, I discovered that

Once you use the attachment method, ActionMailer will no longer automagically use the correct template based on the filename, nor will it properly order the alternative parts. You must declare which template you are using for each content type via the part method. And you must declare these templates in the proper order.

Love, Rails

Turns out the proper way to mash all of these different MIME types together is something like the following:

  def order_placed(purchase)
    setup_email
    recipients purchase.email
    subject "Your #{APP_NAME} receipt"
    content_type "multipart/mixed"
    
    part :content_type => "multipart/alternative" do |a|
      a.part "text/plain" do |p|
        p.body = render_message 'order_placed.text.plain.erb', :purchase => purchase
      end

      a.part "text/html" do |p|
        p.body = render_message 'order_placed.text.html.erb', :purchase => purchase
      end
    end
    
    attachment :content_type => "application/pdf", :body => the_pdf_file, :filename => "some_filename.pdf"
  end

And just like magic, my emails are coming through fine, attachment and all. Hope this saves somebody a headache.

Turbocharge Your Ruby Testing with Parallel Specs

In Make Your Test Suite UNCOMFORTABLY FAST! (called “the best blog post ever written” by one commenter) Jason Morrison of Thoughtbot demonstrates how to use Michael Grosser’s Parallel Specs project to speed up your Ruby tests.

Parallel Specs provides a set of Rake tasks to run specs and tests in parallel, thereby using multiple CPUs (or cores) to multiply your testing power. It does not yet work with Cucumber features, but Jason recommends testjour for that purpose – though testjour is designed to work across multiple machines, so it isn’t quite the same thing.
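
Getting started is essentially two Rake tasks; this sketch uses the task names from the parallel_specs README at the time, so double-check them against the version you install:

rake spec:parallel:prepare[4]   # set up 4 copies of the test database
rake spec:parallel[4]           # split the spec suite across 4 processes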

Thoughtbot has found Parallel Specs typically provides a 30-40% speedup for their projects out of the box, so if you’re doing a lot of testing (and the best developers seem to say you should be) check it out for some instant satisfaction.

Also.. Got a slow Test::Unit or RSpec suite? Run them up to three times faster on Devver’s cloud! Setup is simple and requires no code changes. Request a beta invite today!

Rails BugMash

Some of you may remember the Rails Hackfests that were conducted in 2007 and 2008. Well, with some help from the RailsBridge folks, we’re bringing back something similar:

The First Rails and RailsBridge BugMash

The idea is simple: RailsBridge has a lot of energy. The Rails Lighthouse has a lot of open tickets. By combining the RailsBridge enthusiasm with guidance from some Rails Core team members, we’re going to see what we can do to cut down the number of open tickets, encourage more people to get involved with the Rails source, and have some fun.

Here’s how it will work: the BugMash will run over the weekend of August 8-9. The Rails team will identify open issues that need some help and tag them in Lighthouse. Participants will draw from this pool with four goals:

  1. Confirm that the bug can be reproduced
  2. If it can’t be reproduced, try to figure out what information would make it possible to reproduce
  3. If it can be reproduced, add the missing pieces: better repro instructions, a failing patch, and/or a patch that applies cleanly to the current Rails source
  4. Bring promising tickets to the attention of the Core team

RailsBridge is organizing both face-to-face and online support for BugMash participants. The plan is to do everything possible to make it easy to start contributing to Rails, and to increase even further the substantial pool of developers who have helped make Rails what it is.

For more details, including a checklist of what you can do to get ready to work in the Rails source and details on a scoring system and rewards for the most active participants, keep an eye on the RailsBridge Wiki (a work in progress). For now, though, there are two things for you to do:

  1. Reserve at least a chunk of that weekend to roll up your sleeves and work on the BugMash
  2. Speak up by updating the wiki or posting on the mailing lists (rubyonrails-core or railsbridge) if you can contribute prizes, familiarity with the Rails source, or other help to the project.

acts_as_taggable_on_steroids was REALLY slow

I got a text message about some timeouts on a client website over the weekend. I looked into it and couldn’t really find anything particularly wrong, but the page load times for this one page were definitely much longer than they used to be. Digging deeper and talking with another dev on the team, we thought we might have to look at caching this section or rewriting some of our tagging. We use acts_as_taggable_on_steroids on this project, and I’ve seen some suspect queries come up before when dealing with large datasets. Well, it turns out it was a MySQL issue, and it was easily fixed.

The problem:

mysqlslow.log is reporting this:

# Query_time: 58  Lock_time: 0  Rows_sent: 1  Rows_examined: 18461772

Well, that’s a problem. Let’s run an EXPLAIN on the query:

| rows |
+------+
| 3957 |
|    2 |
|    1 |
|    1 |
|    1 |
+------+

Hmm, that’s strange. The EXPLAIN is saying that it should look at about 4000 records, but the slow query log is saying that it actually examined 18 million.

The solution:

WARNING: Depending on the storage engine, this will READ or WRITE LOCK the TABLE.

mysql> ANALYZE TABLE widgets;
+-------------------------------+---------+----------+----------+
| Table                         | Op      | Msg_type | Msg_text |
+-------------------------------+---------+----------+----------+
| widgets                       | analyze | status   | OK       | 
+-------------------------------+---------+----------+----------+
1 row in set (0.00 sec)

mysql> ANALYZE TABLE tags;
+------------------------------+---------+----------+----------+
| Table                        | Op      | Msg_type | Msg_text |
+------------------------------+---------+----------+----------+
| tags                         | analyze | status   | OK       | 
+------------------------------+---------+----------+----------+
1 row in set (0.10 sec)

mysql> ANALYZE TABLE taggings;
+----------------------------------+---------+----------+----------+
| Table                            | Op      | Msg_type | Msg_text |
+----------------------------------+---------+----------+----------+
| taggings                         | analyze | status   | OK       | 
+----------------------------------+---------+----------+----------+
1 row in set (0.06 sec)

What’s going on here?

The MySQL optimizer uses table and index statistics to determine join order and index selection. In this case, the difference between Rows_examined in the slow log and rows in the EXPLAIN was the first clue that something was off. Second, the index selection in the EXPLAIN was off as well. Running ANALYZE TABLE on the involved tables updated the table statistics to properly reflect key cardinality. I put a warning about table locks before those statements because running this will lock for writes and (for InnoDB) reads. I was only dealing with a total of 100K records here, and my experience has been that even for millions of rows this doesn’t take more than a few seconds, but there are certainly scenarios where that may not be the case. In addition, the OPTIMIZE TABLE command takes significantly longer on large tables, as it completely rebuilds the table and indexes.
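
If you’re curious what statistics the optimizer is working from, you can inspect the estimated key cardinality directly (taggings being one of the tables from above):

mysql> SHOW INDEX FROM taggings;

The Cardinality column is the optimizer’s estimate of the number of unique values in each index; when it drifts far from reality, ANALYZE TABLE is the fix.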

The results:

The new rows from the explain plan:

| rows |
+------+
| 1034 |
|    1 |
|    1 |
|    1 |
|    1 |
+------+

Now the query runs in less than 0.05 seconds consistently.

Feedback welcome!

Build your live video apps with Justin.tv and Heroku

You probably already know all about our friends and fellow Y Combinator alumni at Justin.tv. For the last couple of years, they’ve been driving an explosion of live video content on the web, streaming thousands of channels featuring events and people from all over the world.

Today, things are about to get even more interesting as Justin.tv launches an extensive API that allows you to build your own live video apps using Justin.tv’s existing content and their technology platform. Whether you’re looking to enhance your own lifecasting project, or add video-based customer service to your company’s website, the Justin.tv API enables a whole new generation of exciting mashups blending live video with other content sources.

An exciting new API needs useful examples, of course, and not only have the guys over at Justin.tv built some really cool ones, they’ve also chosen to host some of them – like Hot Or Not Live – on Heroku. We’re psyched to see this, because we think that the combination of instant, provisionless deployment and easy scalability makes Heroku the ideal spot to launch your Justin.tv app and watch it take off.

To get started, check out the sample code for Hot or Not Live as well as the official API docs wiki.

We hope you’ll have lots of fun with the API, and please remember to tell us all about the cool stuff you’re building with Justin.tv and Heroku. We’d love to hear from you.

Introducing the Purple Blog

Readers of this blog may also be interested in the series of articles I post at the PurpleBlog. The latest entry is called How to Spot a Good Instructor and, if you read between the lines a bit, offers insight into why I started Purple Workshops in the first place. Longtime readers already know that from time to time, Brian and I offered public 1- or 2-day workshops. Like this blog, the workshops reflected our intent to empower those new to Ruby and to Rails. I hope you’ll peruse the public workshops we have coming this fall and join us for one of them!

We’ll continue to post here on Softies all of the same kinds of articles, tips, and opinions that we’ve always written here. The PurpleBlog is for topics that wouldn’t necessarily fit here, and reflects the work I’m doing over there.

By the way, comments are not enabled on the PurpleBlog; use Twitter to ask questions or provide feedback instead. It’s another one of my experiments as I try to find better ways to answer questions and receive constructive feedback.


Upcoming workshops:

#172 Touch and Cache

Rails 2.3.3 brings us a new feature called “touch”. See how to use this to auto-expire associated caches in this episode.
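
The short version, for the impatient (a minimal sketch; Comment and Article stand in for your own models):

class Comment < ActiveRecord::Base
  # Saving or destroying a comment now also updates the parent
  # article's updated_at, expiring any cache keyed on that timestamp
  belongs_to :article, :touch => true
end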

Automating ec2 deployments with Ruby

Recently I’ve had a couple of clients choose Amazon EC2 for their deployment environments, so I’ve been spending more time playing with EC2 lately than ever before. I set out to create a repeatable and time-efficient deployment process, and the result of my work is an easy-to-use Ruby script, detailed below.

First, I have to tip my hat to the creators of chef: with chef it was insanely easy to get an instance configured just the way I like it, without any intervention. Being able to quickly spin up new instances on EC2 made it easy to test my chef scripts, and before long I had an image I was ready to bundle.

Now, some people don’t like having custom images; they’d rather use chef (or something similar) to configure a bare-bones image every time. I have nothing against that approach, I just happen to prefer having my own image with all the packages, etc. that I want ready to go.

Once I had my custom image ready, I wanted a way to deploy it quickly. This involves a few moving parts: the ID of the AMI, the ID of the EBS volume that should be attached (for persistent storage of the database files), and the public IP that should be associated with the running instance. Combining these three elements gives a production-ready, single-instance deployment, and here is a Ruby script that makes that deployment a one-command affair:


#!/usr/bin/env ruby
require 'rubygems'
require 'right_aws'
require 'net/ssh'
require 'open-uri'

# Get rid of the ssl verification warning
class Net::HTTP
  alias_method :old_initialize, :initialize
  def initialize(*args)
    old_initialize(*args)
    @ssl_context = OpenSSL::SSL::SSLContext.new
    @ssl_context.verify_mode = OpenSSL::SSL::VERIFY_NONE
  end
end

AWS_ACCESS_KEY_ID     = "access key here"
AWS_SECRET_ACCESS_KEY = "secret access key here"

@ec2 = RightAws::Ec2.new(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)

@image  = @ec2.describe_images_by_owner.detect { |i| i[:aws_location] == 'bucketname/imagename.manifest.xml' }
@volume = @ec2.describe_volumes.first

# Launch the instance in the same availability zone as the EBS volume,
# then poll until it shows up as running
@instance = @ec2.launch_instances(@image[:aws_id], :key_name => 'your-ssh-key-name', :availability_zone => @volume[:zone]).first
sleep(2) until (@instance = @ec2.describe_instances(@instance[:aws_instance_id]).first)[:aws_state] == 'running'

puts "Attaching volume..."
@ec2.attach_volume(@volume[:aws_id], @instance[:aws_instance_id], '/dev/sdh')
sleep(2) until @ec2.describe_volumes.detect { |v| v[:aws_id] == @volume[:aws_id] }[:aws_attachment_status] == 'attached'

# These commented lines open and close firewall access to the instance from the current IP
# ip = open('http://checkip.dyndns.org').read.match(/(\d+\.?)+/)[0]
# @ec2.authorize_security_group_IP_ingress('default', 22, 22, 'tcp', "#{ip}/32")

sleep(2) # Sleep a bit more to give sshd a chance to wake up

Net::SSH.start(@instance[:dns_name], 'root', :keys => [File.join(File.dirname(__FILE__), 'id_rsa-your-ssh-key-name')]) do |ssh|
  ssh.exec!("mount /srv")
  ssh.exec!("/etc/init.d/postgresql-8.3 start")
end

# @ec2.revoke_security_group_IP_ingress('default', 22, 22, 'tcp', "#{ip}/32")

puts "Associating IP..."
@address = @ec2.describe_addresses.first
@ec2.associate_address(@instance[:aws_instance_id], @address[:public_ip])

Using the excellent RightAws gem, I first grab the ID of the custom AMI I want to use, then launch a new instance based on that AMI, making sure the instance is launched in the same availability zone as the EBS volume. The script then does a simple sleep loop while waiting for the instance to show up as “running”. After the EBS volume is attached, we wait a bit before connecting via SSH to mount the EBS volume (it is set up in /etc/fstab in my custom image) and run any setup commands, which in this case means starting Postgres (since it can’t be started before the volume with the database files is mounted). After that’s all done, the elastic IP is associated with the instance, and we’re ready to roll.

The commented-out lines assume you want maximum security for your instance and have global SSH access turned off in your security group. In that case, to be able to connect to SSH we’d need to fetch the public IP of the machine running this script, change the security group to allow SSH access from that IP, then undo that change after we make the SSH connection to close the firewall again.

I’ve gotten a lot of mileage out of this script so far. 🙂 One day I’ll get around to handling SSH errors smartly (so it can attempt connecting right away and retry a few times in case the instance isn’t ready yet), and to adding some configurability for the AMI ID, EBS ID, and EIP. Until then, this works well for someone just getting started with EC2.
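
If you want to sketch that retry handling yourself, something along these lines should do it (a rough, hypothetical helper; the rescued exception list is a guess at what a still-booting instance raises):

def ssh_with_retries(host, user, options, attempts = 5)
  Net::SSH.start(host, user, options) { |ssh| yield ssh }
rescue Errno::ECONNREFUSED, Errno::ETIMEDOUT
  attempts -= 1
  raise if attempts.zero?
  sleep(10) # give sshd a bit longer to come up
  retry     # re-runs the method body; attempts keeps its decremented value
end

The Net::SSH.start call in the script would then become ssh_with_retries(@instance[:dns_name], 'root', :keys => [...]) with the same block.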

Slides from my Rails Underground 2009 talk

Hello from London!

I’m currently enjoying the talks at Rails Underground 2009 in London, and had the pleasure of being one of the first speakers at the conference. My talk covered a collection of what our team considers best practices: practices that aid in the successful launch of a web application, along with a few Rails-specific topics as well.

I’ll be sharing some posts in the coming week(s) that’ll expand on some of these topics as promised to the audience.

Since I covered a wide range of topics, I decided to share my slides online. They won’t provide as much context (since I won’t be speaking as you look at them), but they might hint at some of the topics that I covered. There was a guy videotaping the talks… so I assume that a video of my talk will be posted online in the near future.

Until then… here are the slides