Code Digest #2

When you program for a living, you write lots of code, and there is often some code you are fond of. We started the Code Digest series to present such code written by the RHG developers. We encourage other teams and individual developers to share similar snippets on their blogs so we can all learn from each other and become better Rails developers.

Mai Nguyen

Simple AJAX error messaging

When simple JavaScript validation is not enough and your product managers insist on AJAX for error messaging, guess what: you have to implement AJAX error messaging. One such case is ‘username availability’. This simple example displays an error message on blur of the username field. It doesn’t hit the server unless the value of the field is well-formed (at least that will save you *some* network traffic …)

The controller would look something like:

class FooController < ::ApplicationController

  def is_username_taken
    unless Person.find_by_username(params[:username])
      render :nothing => true
      return
    end

    message_html_id = params[:message_html_id]
    render :update do |page|
      page.replace_html message_html_id, "Username is not available, please choose another."
      # if you want error styling associated with the failed state
      page << "document.getElementById('" + message_html_id + "').className = 'failed'"
    end
  end

end

The view would look something like:

<dl>
  <dt><label>Username: </label></dt>
  <dd id="username_input">
    <%= text_field :person, :username, :class => "input", :type => 'text', :id => 'person_username_input' %>
  </dd>
  <dd id="username_messaging"><%= @person.errors.on(:username) %></dd>
</dl>

Your JavaScript would look something like:

var UserNameUnique = Class.create();
UserNameUnique.prototype = {
  initialize: function(field_id, message_id) {
    this.message_id = message_id;
    this.field = document.getElementById(field_id);
    // getElementById returns null when the field is missing
    if (!this.field) { return; }
    // Observe blur on the field
    Event.observe(this.field, 'blur', this.checkName.bindAsEventListener(this));
  },
  checkName: function() {
    if (!this.field) { return; }
    var name = this.field.value;
    // don't hit the server unless the username is well-formed
    var re = /^[A-Za-z][a-zA-Z0-9]{2,25}$/;
    if (name.match(re)) {
      new Ajax.Request('/foo/is_username_taken', {
        asynchronous: true,
        parameters: 'username=' + name + '&message_html_id=' + this.message_id
      });
    }
  }
};

new UserNameUnique('person_username_input', 'username_messaging');

You could also use the Rails helper observe_field instead of writing your own JavaScript, but rolling your own gives you more flexibility (such as checking for well-formed values before hitting the server).
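For comparison, here is a minimal observe_field sketch, assuming a Rails version whose observe_field supports the :on option (the URL and element ids match the example above). Note that it loses the client-side well-formedness check, which is the main reason for the hand-rolled version:

<%= observe_field 'person_username_input',
      :url => { :controller => 'foo', :action => 'is_username_taken' },
      :on => 'blur',
      :with => "'username=' + encodeURIComponent(value) + '&message_html_id=username_messaging'" %>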

Mark Brooks

Creating the options list for select tags can be annoying, especially when the current “option” must be displayed as the default. With JavaScript it isn’t such a big deal, but one must take care of the degraded (no-JavaScript) case as well.

In any event, the base object is an array of two-item hashes, each holding the name of a video channel and the number of videos in that channel. By way of example:

@channelinfo = [
  {"name" => "Fitness", "count" => 4},
  {"name" => "Diabetes", "count" => 1},
  {"name" => "Pregnancy", "count" => 11}
]

The option values are the name fields of each hash. The key requirement is that the current option value come first in the option list, with the rest in alphabetical order by name.

So let’s create an option builder from the bottom up.

First, we only need the channel names for the select tag. This gives us what we need:

options = @channelinfo.collect { |channel| channel['name'] }

That gives us a list of channel names. Using the list above, it would be [“Fitness”, “Diabetes”, “Pregnancy”].

However, we need to make sure that the current channel is first in the list. Let’s say that current channel is Diabetes. Since we already have that value in @current_channel, we can exclude it from our options list:

options = @channelinfo.collect do |channel|
  channel['name']
end.reject do |channelname|
  channelname == @current_channel
end

Now the resulting list will look like [“Fitness”, “Pregnancy”]. However, we still need the current channel at the front of the list, so we add it back as the first element:

options = @channelinfo.collect do |channel|
  channel['name']
end.reject do |channelname|
  channelname == @current_channel
end.unshift(@current_channel)

Now our options list looks like [“Diabetes”, “Fitness”, “Pregnancy”].

Two points. First, while the current channel sits at the head of the list, we want the remaining items in alphabetical order, so we add a sort call in the appropriate place. Second, it is probably a good idea to exclude any nils that might pop up in the collection, since the original data object comes from a service and it is possible, however unlikely, that a hash might arrive without a ‘name’ property. The more rigorous code now looks like this:

options = @channelinfo.collect do |channel|
  channel['name']
end.compact.reject do |channelname|
  channelname == @current_channel
end.sort.unshift(@current_channel)

Now we can generate our options list using:

options.collect do |channelname|
  "<option>" + channelname + "</option>"
end.join("\n")

although if you want to, you can simply combine the whole thing as follows:

@channelinfo.collect do |channel|
  channel['name']
end.compact.reject do |channelname|
  channelname == @current_channel
end.sort.unshift(@current_channel).collect do |channelname|
  "<option>" + channelname + "</option>"
end.join("\n")

to get the same result, and eliminate the unnecessary options binding.
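If this chain shows up in more than one view, it can be wrapped in a helper. A minimal sketch, assuming the helper name and the selected attribute are our own additions:

def channel_options(channelinfo, current_channel)
  channelinfo.collect { |channel| channel['name'] }.compact.
    reject { |name| name == current_channel }.
    sort.
    unshift(current_channel).
    collect do |name|
      # mark the current channel explicitly, even though it is already first
      selected = (name == current_channel) ? ' selected="selected"' : ''
      "<option#{ selected }>#{ name }</option>"
    end.join("\n")
end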

Todd Fisher

Here is a Kernel extension that tries to execute a block multiple times before giving up. Handy for network operations and the like.

module Kernel
  def could_fail(retries = 3, &block)
    tries = 0
    begin
      yield
    rescue Exception
      tries += 1
      if tries < retries
        retry
      else
        raise
      end
    end
  end
end

Call it as result = could_fail { some_operation_that_might_fail_first_time }
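For instance, a flaky network fetch might be wrapped like this (the URL is hypothetical):

require 'net/http'
require 'uri'

# retried up to 3 times before the exception propagates
body = could_fail(3) do
  Net::HTTP.get(URI.parse('http://example.com/status'))
end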

Todd Fisher

When you are writing a script that needs to auto-install gems, you are likely to run into a problem: it stops because there are multiple platform versions available (jruby, win32, etc.) and the gem command expects you to pick one that matches your platform. This patch forces a specific platform so no user interaction is needed. Original idea from Warren, updated to support RubyGems 0.9.4.

module GemTasks
  def setup
    return if $gems_initialized
    $gems_initialized = true
    Gem.manage_gems

    # see => http://svn.bountysource.com/fishplate/scripts/debian_install.pl
    Gem::RemoteInstaller.class_eval do

      alias_method :find_gem_to_install_without_ruby_only_platform, :find_gem_to_install

      def find_gem_to_install(gem_name, version_requirement, caches = nil)
        if caches # old versions of rubygems used to pass a caches object
          caches.each do |k, cache|
            cache.each { |name, spec| cache.remove_spec(name) unless spec.platform == 'ruby' }
          end
          find_gem_to_install_without_ruby_only_platform(gem_name, version_requirement, caches)
        else
          Gem::StreamUI.class_eval do

            alias_method :choose_from_list_without_choosing_ruby_only, :choose_from_list

            def choose_from_list(question, list)
              result = nil
              result_index = -1
              list.each_with_index do |item, index|
                if item.match(/\(ruby\)/)
                  result_index = index
                  result = item
                  break
                end
              end
              return [result, result_index]
            end

          end

          find_gem_to_install_without_ruby_only_platform(gem_name, version_requirement)
        end
      end

    end
  end
end

Val Aleksenko

Since class-level instance variables are not inherited by subclasses, you need to take some extra steps when writing a plugin that uses them. Depending on the number of such variables, I have been either defining a method instead of a class-level instance variable or forwarding them to subclasses.
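A quick illustration of the underlying problem (the class names are made up):

class Base
  @setting = 'from Base'
  def self.setting
    @setting
  end
end

class Child < Base; end

Base.setting  # => "from Base"
Child.setting # => nil -- the class-level instance variable was not inherited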

Example #1. acts_as_readonlyable needs to provide a single class-level instance variable, so we define a method instead:

def acts_as_readonlyable(*readonly_models)
  define_readonly_model_method(readonly_models)
end

def define_readonly_model_method(readonly_models)
  (class << self; self; end).class_eval do
    define_method(:readonly_model) { readonly_models[rand(readonly_models.size)] }
  end
end

Example #2. acts_as_secure uses a bunch of variables, so we forward them to subclasses:

def acts_as_secure(options = {})
  @secure_except = unsecure_columns(options.delete(:except))
  @secure_storage_type = options.delete(:storage_type) || :binary
  @secure_class_crypto_provider = options.delete(:crypto_provider)
  @secure_crypto_providers = {}
  extend(ActsAsSecureClassMethods)
end

module ActsAsSecureClassMethods
  def inherited(sub)
    [:secure_except, :secure_storage_type, :secure_class_crypto_provider,
     :secure_crypto_providers].each do |p|
      sub.instance_variable_set("@#{ p }", instance_variable_get("@#{ p }"))
    end
    super
  end
end
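With the inherited hook in place, subclasses pick up the parent’s settings. A hypothetical example (MyCryptoProvider is made up):

class Document < ActiveRecord::Base
  acts_as_secure :storage_type => :text, :crypto_provider => MyCryptoProvider
end

class LegalDocument < Document
end

# Without the hook this would return nil:
LegalDocument.instance_variable_get('@secure_storage_type') # => :text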

Our Contribution to Advanced Rails Recipes

We submitted a few recipes, answering the call for sharing what is happening in the rails community, and half a dozen of them were accepted as entries in the upcoming book. If you have enjoyed reading this blog, you will soon have a chance to see some of it on paper.

Acts As Fast But Very Inaccurate Counter

Introduction

If you have chosen the InnoDB MySQL engine over MyISAM for its support of transactions, foreign keys and other niceties, you might be aware of its limitations, like a much slower count(*). Our DBAs are on a constant lookout for slow queries in production and ways to keep the DBs happy, so they recommended we try to fix count(). They suggested checking SHOW TABLE STATUS for an approximate count of rows in a table. This morning I wrote acts_as_fast_counter, which proved that the speed is indeed improved but the accuracy might not be acceptable. The rest of the post records the details of the exercise.

The approach

I created a model per engine and seeded each with 100K records. Then I ran count on each model a thousand times and measured the results.

The code:

module ActiveRecord; module Acts; end; end

module ActiveRecord::Acts::ActsAsFastCounter

  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods

    def acts_as_fast_counter
      self.extend(FastCounterOverrides)
    end

    module FastCounterOverrides

      def count(*args)
        if args.empty?
          connection.select_one("SHOW TABLE STATUS LIKE '#{ table_name }'")['Rows'].to_i
        else
          super(*args)
        end
      end

    end

  end

end

ActiveRecord::Base.send(:include, ActiveRecord::Acts::ActsAsFastCounter)
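Hypothetical usage (the Innodb model matches the benchmark below; a call with arguments simply bypasses the override):

class Innodb < ActiveRecord::Base
  acts_as_fast_counter
end

Innodb.count                                    # approximate, via SHOW TABLE STATUS
Innodb.count(:conditions => "name IS NOT NULL") # args present, so the normal count runs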

# create_table :myisams, :options => 'ENGINE=MyISAM' do |t|
#   t.column :name, :string
# end
# 100_000.times { Myisam.create(:name => Time.now.to_s) }
#
# create_table :innodbs, :options => 'ENGINE=InnoDB' do |t|
#   t.column :name, :string
# end
# 100_000.times { Innodb.create(:name => Time.now.to_s) }

class Bench

  require 'benchmark'
  require 'acts_as_fast_counter'

  def self.run
    measure
    show_count
    convert_to_fast_counter
    show_count
    add_records
    show_count
    destroy_records
    show_count
    measure
  end

  def self.measure
    puts "* Benchmarks:"
    n = 1_000
    Benchmark.bm(12) do |x|
      x.report('MyISAM') { n.times { Myisam.count } }
      x.report('InnoDB') { n.times { Innodb.count } }
    end
  end

  def self.convert_to_fast_counter
    Innodb.send(:acts_as_fast_counter)
    puts "* Converted Innodb to fast counter"
  end

  def self.add_records
    @myisam = Myisam.create(:name => 'One more')
    @innodb = Innodb.create(:name => 'One more')
    puts "* Added records"
  end

  def self.destroy_records
    @myisam.destroy
    @innodb.destroy
    puts "* Destroyed records"
  end

  def self.show_count
    puts "* Record count:"
    puts "  MyISAM: #{ Myisam.count }"
    puts "  InnoDB: #{ Innodb.count }"
  end

end

The results:

* Benchmarks:
                  user     system      total        real
MyISAM        0.180000   0.040000   0.220000 (  0.289983)
InnoDB        0.430000   0.070000   0.500000 ( 35.102496)
* Record count:
  MyISAM: 100000
  InnoDB: 100000
* Converted Innodb to fast counter
* Record count:
  MyISAM: 100000
  InnoDB: 100345
* Added records
* Record count:
  MyISAM: 100001
  InnoDB: 100345
* Destroyed records
* Record count:
  MyISAM: 100000
  InnoDB: 100345
* Benchmarks:
                  user     system      total        real
MyISAM        0.250000   0.030000   0.280000 (  0.350673)
InnoDB        0.250000   0.040000   0.290000 (  0.977711)

Final thoughts

The MySQL manual has a clear warning about the inaccuracy of the row count in the SHOW TABLE STATUS results:

Rows – The number of rows. Some storage engines, such as MyISAM, store the exact count. For other storage engines, such as InnoDB, this value is an approximation, and may vary from the actual value by as much as 40 to 50%. In such cases, use SELECT COUNT(*) to obtain an accurate count.

The test confirms it, showing 345 more records than expected, thus making the approach not very useful except for some edge cases. If you know a way to improve the speed of count() on InnoDB with some other approach, beyond using a counter table, please share.

DRYing Up Polymorphic Controllers

Polymorphic routes allow DRYing up controller implementations when the functionality is identical regardless of entry point. A good example is comments for articles and blogs. The challenge is to implement the comments controller so it cleanly reflects the multiple incoming routes. Let’s look at the way it could be written.

Routing is straightforward, with the article and blog models acting as commentables and both the comment model and the comments controller being polymorphic:

ActionController::Routing::Routes.draw do |map|
  map.resources :articles, :has_many => [ :comments ]
  map.resources :blogs, :has_many => [ :comments ]
end

This means that a comment can be created via a POST to either /articles/1/comments or /blogs/1/comments. The comments controller can be implemented to handle both:

class CommentsController < ApplicationController

  def new
    @parent = parent_object
    @comment = Comment.new
  end

  def create
    @parent = parent_object
    @comment = @parent.comments.build(params[:comment])

    if @comment.valid? and @comment.save
      redirect_to parent_url(@parent)
    else
      render :action => 'new'
    end
  end

  private

  def parent_object
    case
    when params[:article_id] then Article.find_by_id(params[:article_id])
    when params[:blog_id] then Blog.find_by_id(params[:blog_id])
    end
  end

  def parent_url(parent)
    case
    when params[:article_id] then article_url(parent)
    when params[:blog_id] then blog_url(parent)
    end
  end

end

This works fine, and there is not much drive to refactor it right away. That changes, though, when there is a need to add another commentable or to allow some other polymorphic route. Instead of adding more ‘when’ clauses, the whole functionality can be extracted and abstracted, based on the idea that fixed naming conventions for resources let us move from a parameter name to a model. The refactored example has the parent functionality extracted to the application controller so it can be shared as-is with other polymorphic routes:

class ApplicationController < ActionController::Base

  protected

  class << self

    attr_reader :parents

    def parent_resources(*parents)
      @parents = parents
    end

  end

  def parent_id(parent)
    request.path_parameters["#{ parent }_id"]
  end

  def parent_type
    self.class.parents.detect { |parent| parent_id(parent) }
  end

  def parent_class
    parent_type && parent_type.to_s.classify.constantize
  end

  def parent_object
    parent_class && parent_class.find_by_id(parent_id(parent_type))
  end

end

class CommentsController < ApplicationController

  parent_resources :article, :blog

  def new
    @parent = parent_object
    @comment = Comment.new
  end

  def create
    @parent = parent_object
    @comment = @parent.comments.build(params[:comment])

    if @comment.valid? and @comment.save
      redirect_to send("#{ parent_type }_url", @parent)
    else
      render :action => 'new'
    end
  end

end

The parent_resources call declares the resources that are parents of the current controller. An alternative approach is to infer the parent resources from the request URI and routes. Aaron is currently working on an Edge patch implementing that; we’ll update this post later.

If you currently use multiple polymorphic resources and have if clauses in the controller code, you might want to rethink how they could be DRYed up using this approach. In some cases views are very parent-type specific; then it might be better to have different templates and partials rendered via render :template => "/controller/#{ parent_type }_action".
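A hypothetical sketch of that variant (the template names are made up):

def new
  @parent = parent_object
  @comment = Comment.new
  # renders e.g. comments/article_new or comments/blog_new
  render :template => "comments/#{ parent_type }_new"
end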

Capistrano Off the Beaten Path

Introduction

If you use Capistrano, most likely you use it to deploy Rails applications by running it from the project directory. The plugem management tool piggy-backs on Capistrano to execute some recipes without depending on the current path, because the recipes operate on gems. The recipes do not completely ignore the current directory, however; they use it for optional customization. And customization is not limited to a deployment recipe in the current directory: it can be environment-specific across multiple sites if each has a special extension gem installed.

Background

We use a bunch of ruby scripts at RHG for deployment and delivery of our numerous applications and shared components. When we started preparing some of that functionality for a public release, we needed to provide a way to customize the scripts. Warren Konkel suggested migrating them to Capistrano, since it is the tool most familiar to rails developers and it has a recipes hierarchy that can be used for customization. So we wrote the plugem command tool, which feeds its parameters to Capistrano via Capistrano::CLI.new(plugem_converted_arguments).execute! after loading its own recipes.

The Code

 1  module Capistrano
 2    class Configuration
 3
 4      alias :standard_cap_load :load
 5
 6      def load(*args, &block)
 7
 8        standard_cap_load(*args, &block)
 9
10        if args == ["standard"]
11
12          load_plugem_deploy_recipes(File.dirname(__FILE__) + '/../..')
13
14          begin
15            require 'plugems_deploy_ext'
16            load_plugem_deploy_recipes(PLUGEMS_DEPLOY_EXT_DIR) # Overriding from extensions
17          rescue Exception
18            # No extension is loaded
19          end
20
21        end
22
23      end
24
25      def load_plugem_deploy_recipes(dir)
26        Dir[File.join(dir, 'recipes', '*')].each { |f| standard_cap_load(f) }
27      end
28
29    end
30  end

Ignoring lines 14-19 for now, all this does is inject the plugem deployment gem recipes into Capistrano, making them available for execution. So when I call plugem update my_app, it is translated to cap plugem_update -s plugem_name=my_app (we use the plugem_ namespace for tasks and variables to avoid clashing with standard recipes). Since the loading of plugem recipes is purposely not handled via Capistrano’s -f flag, standard deployment recipes like config/deploy.rb are still loaded.

Customization

The package extends Capistrano, but we wanted it to be customizable too. For example, you might want to define your own list of gem servers to download gems from. Capistrano allows you to define recipes per project, user, or host. plugems_deploy takes it a step further and provides a way to customize per deployment environment, which might contain many hosts. It does this by looking for an optional plugems_deploy_ext gem installed on the system (lines 14-19). If it finds the extension gem, it loads recipes from there, overriding the default ones.
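Such an extension recipe can be as small as a single variable override. A hypothetical recipes/gem_servers.rb inside a plugems_deploy_ext gem (the server URL is made up):

# loaded after the default recipes, so this setting wins
set :gem_servers, [ 'http://gems.internal.example.com:8808' ]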

Conclusion

Capistrano is not only a great deployment tool, it can be used as a base of highly-customizable general purpose tools.

Plugin I Cannot Live Without

The Enhanced Rails Migrations plugin was written to end the constant battle we had with clashing names in db migrations within our large development team. We tried everything: special commit policies, rake tasks, even claiming the next migration number in subversion. Nothing worked, and the CI server was sending ‘broken build due to conflicting migration number’ messages almost daily. Since the plugin was introduced to all our rails applications around six months ago, I have not heard of a single case of conflicting migrations. The goal, it seems, was accomplished.

What I found over time is that the plugin is not only useful for large projects. Any rails development effort with more than one programmer benefits from using it. If you have ever had to renumber your new migration after doing svn up, you know what I am talking about. It makes sense to install this plugin as the very first one in your project, since the number of migrations tends to grow much faster at the beginning than later in the game.

The plugin works for rails versions 1.1.6 up to the latest edge. When you start your next project with multiple developers, use it and you should be able to forget that you ever had problems with clashing migrations.

Moving models to a different database

There are many reasons to use multiple databases (DBs), and when you do, a model often needs to be moved from one DB to another. The impetus could be that part of the data is referential, and this is reflected by moving it to a read-only DB. Another possibility is that we want to protect some data with an additional layer of security, so we extract it to a secure DB. In all cases, the challenge is migrating the existing data. When the amount of data is considerably large, there is no choice but to do it via SQL data loaders or similar techniques. On the other hand, if it is acceptable to leverage rails db migrations and you prefer to do data manipulation through them, there are some challenges to face.

Often, you need access to both the old and new models during data migration. One solution is to move or copy the existing model to a separate namespace and put the new model in its place. Let’s look at a couple of examples:

Extracting referential data

I have a model Fruit in our main DB that gets its data from an external source, so we only access it read-only. We want to enforce this by moving the data to a DB we access with a read-only account. First, I create a referential_db entry in database.yml:

dbs:
  database: main

referential_db:
  database: referential

Then, I copy the original model, Fruit, to a dedicated namespace, so the model becomes RetiredModels::Fruit. I add establish_connection to the original namespace model:

# create_table :fruits do |t|
#   t.column :name, :string
# end
class Fruit < ActiveRecord::Base
  establish_connection configurations[RAILS_ENV]['referential_db']
end

Everything is set for the migration. Since it is referential data, the migration needs to preserve data integrity so that models belonging to Fruit can still reference it by the old id:

def self.up
  RetiredModels::Fruit.find(:all).each do |old_record|
    Fruit.new(old_record.attributes) { |new_record| new_record.id = old_record.id }.save!
  end
end

After a successful migration run, all data is replicated to the new DB. The retired model can be removed during the next deployment, and the original table dropped.

There is one caveat for development and test modes. If you don’t want to bother with multiple databases in those modes, you need to avoid table-name clashes, so the new model would have to use a different table name via set_table_name.
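A minimal sketch of that caveat (the table name is made up):

class Fruit < ActiveRecord::Base
  # in development and test both models share one database,
  # so the new model gets its own table name
  set_table_name 'referential_fruits'
end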

Securing sensitive data

One of the models belonging to Fruit is SecretFruit. It contains a secret name for every fruit out there. Our legal department asked the development team to protect that data in case our DB is stolen. We decided to migrate the existing SecretFruit data to a protected DB and keep sensitive data encrypted with help from Acts As Secure. First, I create a secure_db entry in database.yml:

dbs:
  database: main

secure_db:
  database: secure
  host: protected_host

Then, I copy the original model, SecretFruit, to a dedicated namespace, so the model becomes RetiredModels::SecretFruit. I modify the model in the original namespace to reflect the new requirements:

# create_table :secret_fruits do |t|
#   t.column :name, :binary
#   t.column :fruit_id, :integer
# end
class SecretFruit < ActiveRecord::Base
  establish_connection configurations[RAILS_ENV]['secure_db']
  acts_as_secure :crypto_provider => MasterKeyProvider
  belongs_to :fruit
end

Since data encryption is done on-the-fly and there are no data integrity requirements, the migration is straightforward:

def self.up
  RetiredModels::SecretFruit.find(:all).each { |old| SecretFruit.create!(old.attributes) }
end

I can now safely delete RetiredModels::SecretFruit and associated data.

DRYing Models via Acts As

ActsAs is an idiom familiar to every Rails developer, which makes it a good candidate for sharing functionality between models. Using it as early in the game as possible lets you work on the shared functionality without touching code in multiple models. Let’s look at a couple of examples.

Acts As Unique

I have some models whose instances I want to be uniquely identifiable across my application. I use a UUID mechanism (initially, a db call) to set a field (:token) after creation. Since I have multiple models, I decided to extract the uniqueness code into acts_as_unique. After refactoring, my model Fruit looks like:

# create_table :fruits do |t|
#   t.column :name, :string
#   t.column :token, :string
# end
class Fruit < ActiveRecord::Base
  acts_as_unique
end

My acts_as_unique might look like:

module ActiveRecord; module Acts; end; end
module ActiveRecord::Acts::ActsAsUnique

  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def acts_as_unique(field = :token)
      validates_uniqueness_of field
      before_validation_on_create do |o|
        o.send("#{ field }=", connection.select_one('SELECT UUID() AS UUID', "#{ name } UUID generated")['UUID'])
      end
    end
  end
end

ActiveRecord::Base.send(:include, ActiveRecord::Acts::ActsAsUnique)

Let’s try it:
>> f = Fruit.create(:name => 'apple')
>> p f.token
"0a4d7c46-4df0-102a-a4b9-59b995bffdb7"

Now I can work on acts_as_unique to replace the DB call with a UUID gem or some other implementation without affecting the rest of the code.
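For instance, a sketch of swapping in the uuidtools gem (an assumption on my part; only the callback body inside acts_as_unique changes):

require 'uuidtools'

before_validation_on_create do |o|
  # generate the UUID in Ruby instead of a round-trip to the DB
  o.send("#{ field }=", UUID.timestamp_create.to_s)
end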

Acts As Trackable

I have some models for which I want to keep track of when instances are created or updated. I have a polymorphic Event model to store such events. Since there are multiple models I want to track, I extract the functionality into acts_as_trackable. After refactoring, my models look like:

# create_table :fruits do |t|
#   t.column :name, :string
# end
class Fruit < ActiveRecord::Base
  acts_as_trackable
end

# create_table :events do |t|
#   t.column "action", :string
#   t.column "created_at", :datetime, :null => false
#   t.column "trackable_type", :string
#   t.column "trackable_id", :integer
# end
class Event < ActiveRecord::Base
  belongs_to :trackable, :polymorphic => true
end

module ActiveRecord; module Acts; end; end
module ActiveRecord::Acts::ActsAsTrackable

  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    def acts_as_trackable
      has_many :events, :as => :trackable, :dependent => :destroy
      after_update { |o| o.events.create(:action => 'updated') }
      after_create { |o| o.events.create(:action => 'created') }
    end
  end

end

ActiveRecord::Base.send(:include, ActiveRecord::Acts::ActsAsTrackable)

Let’s see what we got:
>> f = Fruit.create(:name => 'apple')
>> p f.events.collect(&:action)
["created"]
>> f.name = 'passionfruit'
>> f.save!
>> p f.events.collect(&:action)
["created", "updated"]

The Event model is likely to evolve, but it will be easy to support since the only place that needs to reflect the changes is acts_as_trackable. The goal is achieved.

[RELEASE] Plugems packaging

The Plugems runtime is enough to justify their existence, but it does not stop there. Since we have already defined our dependencies in a gem-like fashion, it is only a small step to start using them for packaging as well. All that is needed is the plugems_deploy gem, which can be installed from RubyForge. After the gem is installed, a new plugem command becomes available. It piggy-backs on Capistrano to provide some plugems-related recipes. The recipe packaged with the initial version is build, which lets you package your project as a gem.

Let’s take our sample application’s config as an example:

:version: [1, 0]
:name: "cool_application"
:description: "My First Plugemified Application"
:dependencies:
- ['some_gem', '~> 1.0']
- ['other_gem', '> 2.0']
- ['one_more', '2.0.1']

When we run plugem build at the top of the application, we get a gem built:

$ plugem build
* executing task plugem_build
rm -rf pkg
mkdir -p pkg
Successfully built RubyGem
Name: cool_application
Version: 1.0.0
File: cool_application-1.0.0.gem
mv cool_application-1.0.0.gem pkg/cool_application-1.0.0.gem

If you looked inside, you would find that the attributes and dependencies from the manifest file were translated to corresponding gem attributes and dependencies.

Where did the build (micro) revision come from?

You might have noticed that the manifest defined only major and minor revisions, yet a micro revision appeared at packaging time. The plugem packaging process follows RubyGems’ rational versioning policy, giving you full control over the build revision. The full version can be defined in the manifest file à la Rakefile (i.e. :version: [1, 0, 1]), or you can derive it dynamically from a source like the svn revision by setting the Capistrano gem_micro_revision variable in a deployment recipe (like config/deploy.rb). An svn-based example:
set :gem_micro_revision, `svn info`.grep(/^Revision:/).first[/\d+/]

And you can always overwrite the full version via plugem’s --version flag: plugem build --version 3.2.1

The choice is yours.

You might wonder why you would package your rails application as a gem. The rationale is that it allows you to utilize the only Ruby-native packaging and distribution system to distribute and deploy your application. The next plugem_deploy releases and the next installments of this series will provide the tools and guidance for doing that.

But it is not just for packaging…

Since you have your dependencies clearly defined, it is really easy to update them to the latest versions. Just run plugem update from the project directory:

$ plugem up
* executing task plugem_update
Updating some_gem (~> 1.0)
* executing task plugem_install
Bulk updating Gem source index for: http://gems.rubyforge.org
Installing [ some_gem, 1.2.2 ]

You also have full control over which gem servers to use. Set the variable in your deployment recipe:
set :gem_servers, [ 'http://gems.mycompany.com:8808', 'http://gems.rubyforge.org' ]

and see the difference:

$ plugem up
* executing task plugem_update
Updating some_gem (~> 1.0)
* executing task plugem_install
Bulk updating Gem source index for: http://gems.mycompany.com:8808
Bulk updating Gem source index for: http://gems.rubyforge.org

To Be Continued …
