Amazon Reduced Redundancy Storage Released

Amazon recently announced a new storage option, Reduced Redundancy Storage (RRS). From their announcement:
We are pleased to introduce a new storage option for Amazon S3 called Reduced Redundancy Storage (RRS) that enables customers to reduce their costs by storing non-critical, reproducible data at lower levels of redundancy than the standard storage of Amazon S3. It provides a […]

Rack::Bug setup

Lately, I’ve been using Rack::Bug to help me find some slow queries. Here’s a quick six-step tutorial on setting it up.
1. Install the plugin:

script/plugin install git://github.com/brynary/rack-bug.git

2. Add this to development.rb:

config.middleware.use "Rack::Bug",
  :secret_key => "epT5uCIchlsHCeR9dloOeAPG66PtHd9K8l0q9avitiaA/KUrY7DE52hD4yWY+8z1",
  :password   => "some_pass"

If you want to restrict access by IP, you can add this to the options hash:

:ip_masks => [IPAddr.new("127.0.0.1")]
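
Putting the two snippets together, the development.rb block with the IP restriction might look like this (IPAddr comes from Ruby’s standard ipaddr library):

# config/environments/development.rb
require 'ipaddr'

config.middleware.use "Rack::Bug",
  :secret_key => "epT5uCIchlsHCeR9dloOeAPG66PtHd9K8l0q9avitiaA/KUrY7DE52hD4yWY+8z1",
  :password   => "some_pass",
  :ip_masks   => [IPAddr.new("127.0.0.1")]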

3. […]

Code Readability vs Optimization

There are times when I debate whether to use one-liners or break them out into more readable blocks. Recently I had a situation where I needed to check certain fields on an object depending on the status of other fields.
Let’s say I have a Book object and I wanted to see if its condition […]
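
To illustrate the kind of trade-off being described, here’s a made-up Book example (the method and attribute names are hypothetical, not from the original post): the same check written as a dense one-liner, then broken out so each rule reads on its own line.

# Hypothetical one-liner version:
def sellable?(book)
  book.condition == 'new' || (book.condition == 'used' && book.pages_intact? && !book.water_damaged?)
end

# The same logic broken out for readability:
def sellable_readable?(book)
  return true  if book.condition == 'new'
  return false unless book.condition == 'used'
  book.pages_intact? && !book.water_damaged?
end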

Exporting MySQL database

Exporting MySQL dumps can sometimes be tricky. Some sites suggest exporting a dump like so:

mysqldump database_name > dump.sql

However, the problem with this method is that the stream redirect might not handle UTF-8 encoding correctly on certain OSes. I recently had a project that uses exotic characters, and some of them would appear garbled in the database because the dump had been exported this way.

The way to export it with the original characters intact is to let mysqldump write to disk itself, using the -r flag:

mysqldump database_name -r dump.sql

OK, I lied. Exporting a MySQL dump is easy.

Executing SQL commands in Rails

Most of the time, ActiveRecord’s helpers are all you need to access database info; sometimes, though, you want to do some hacky stuff.

For example, I had to figure out a database’s timezone and schema, but I had no shell access to the server. So I used ActiveRecord::Base.connection.execute and fetch_row to get the result:

>> q = ActiveRecord::Base.connection.execute 'SELECT NOW();'
=> #<Mysql::Result:0x2b783735c4f8>
>> q.fetch_row
=> ["2009-10-20 17:30:49"]
>> Time.now
=> Tue Oct 20 10:32:02 -0700 2009
>> q.free

You can basically run any SQL query you want with these two methods. Now go crash some servers!

Note: You should also free your result to free up memory. Thanks Emmanuel.
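
As an aside, here’s a minimal sketch (untested, assuming the old mysql adapter whose results respond to fetch_row and free) that wraps the two calls so the result always gets freed:

# Minimal sketch: run a raw query, collect the rows, and always free the result.
def run_sql(sql)
  result = ActiveRecord::Base.connection.execute(sql)
  rows = []
  while row = result.fetch_row
    rows << row
  end
  rows
ensure
  result.free if result
end

run_sql('SELECT NOW();') # => [["2009-10-20 17:30:49"]]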

Copying files between S3 accounts

Recently, I had to transfer all the files from one S3 account to another. Since I didn’t want to bother Amazon with my petty problems, I decided to use a Ruby script to do it. Here’s the script and the steps I took to put it together.

Setup

The first thing we need to do is get a list of the buckets we’re using. Since each bucket has to be uniquely named, we need to figure out what to name the corresponding buckets in the new S3 account.

Start up irb. Time for some digging.

>> require 'rubygems'
=> true
>> require 'right_aws'
=> true
>> s3 = RightAws::S3Interface.new(old_aws_id, old_aws_key)
=> #<RightAws::S3Interface:0x1a56490 ...stuff...>
>> buckets = s3.list_all_my_buckets.collect{|b| b[:name]}
=> ["old_bucket1", "old_bucket2"]

Looks like we have 2 buckets to copy: old_bucket1 and old_bucket2. Let’s make new buckets for our new account: new_bucket1 and new_bucket2.

Back in console:

>> s3_new = RightAws::S3Interface.new(new_aws_id, new_aws_key)
=> #<RightAws::S3Interface:0x1a5649a ...stuff...>
>> s3_new.create_bucket('new_bucket1', :location => :us)
=> true
>> s3_new.create_bucket('new_bucket2', :location => :us)
=> true

Now, we need to peek at the ACL settings for the files so we’ll know how to modify the permissions for the new bucket.

In console:

>> key=s3.list_all_my_buckets.first
=> {:owner_display_name=>"my.old.name", :creation_date=>"2009-08-06T23:32:38.000Z", :name=>"old_bucket1", :owner_id=>"123abcdefghijklmnoprxyz45600000000000000000000000000000000000001"}

Make note of the old owner_display_name and owner_id: my.old.name, 123abcdefghijklmnoprxyz45600000000000000000000000000000000000001.
We will need to replace these with our new ones during the process.

Now we get our new owner_id and owner_display_name the same way:

>> s3_new.list_all_my_buckets.first
=> {:owner_display_name=>"my.new.name", :creation_date=>"2009-08-06T23:32:38.000Z", :name=>"new_bucket1", :owner_id=>"123abcdefghijklmnoprxyz45600000000000000000000000000000000000002"}

Here’s our new owner_display_name and owner_id: my.new.name, 123abcdefghijklmnoprxyz45600000000000000000000000000000000000002.

The Code

Now here’s the script to run. I recommend doing this on an EC2 machine, because Amazon doesn’t charge for bandwidth between EC2 and S3. I also suggest running it in a screen session so you can leave the script alone on the server.

  • The script loops through each item in your buckets.
  • It copies each item over to the corresponding new bucket.
  • It strips the ACL from the old item and reformats it for the new owner.
  • It updates the ACL permissions on the new file.

#!/usr/bin/ruby
require 'rubygems'
require 'right_aws'

oldAWS = RightAws::S3Interface.new(old_aws_id, old_aws_key)
newAWS = RightAws::S3Interface.new(new_aws_id, new_aws_key)
newS3=RightAws::S3.new(new_aws_id, new_aws_key)

bucket_mapping={"old_bucket1" => "new_bucket1",
"old_bucket2" => "new_bucket2"
}

# ACL property differences
old_owner_id='123abcdefghijklmnoprxyz45600000000000000000000000000000000000001'
new_owner_id='123abcdefghijklmnoprxyz45600000000000000000000000000000000000002'

old_disp_name='my.old.name'
new_disp_name='my.new.name'

bucket_mapping.each do |old_bucket, new_bucket|
  # get all keys for old bucket by looping through sets of max keys (1000) amazon gives
  newS3Bucket=newS3.bucket(new_bucket)
  oldAWS.incrementally_list_bucket(old_bucket) do |key_set|
    # loop through content of key_set which contains keys
    key_set[:contents].each do |key|
      # if key already exists, don't copy over
      if newS3Bucket.key(key[:key]).exists?
        puts "#{new_bucket} #{key[:key]} already exists. Skipping..."
      else
        # download data and header from old bucket
        puts "Copying #{old_bucket} #{key[:key]}"
        retries=0
        begin
          data=oldAWS.get_object(old_bucket,key[:key])
        rescue Exception => e
          puts "cannot download, #{e.inspect}\nretrying #{retries} out of 10 times..."
          retries += 1
          retry if retries <= 10
        end
     
        retries=0
        begin
          headers=oldAWS.head(old_bucket,key[:key])
        rescue Exception => e
          puts "cannot get header, #{e.inspect}\nretrying #{retries} out of 10 times..."
          retries += 1
          retry if retries <= 10
        end

        # upload key to bucket
        puts "Putting to #{new_bucket} #{key[:key]}"
        retries=0
        begin
          newAWS.put(new_bucket, key[:key], data, headers)
        rescue Exception => e
          puts "cannot put object, #{e.inspect}\nretrying #{retries} out of 10 times..."
          retries += 1
          retry if retries <= 10
        end

        # copy permissions to new Bucket
        retries=0
        begin
          acl_prop=oldAWS.get_acl(old_bucket, key[:key])
        rescue Exception => e
          puts "cannot get ACL, #{e.inspect}\nretrying #{retries} out of 10 times..."
          retries += 1
          retry if retries <= 10
        end
     
        # Replace Owner ID and Display name for new bucket
        puts "old ACL #{acl_prop[:object]}"
        acl_prop[:object].gsub!(old_owner_id,new_owner_id)
        acl_prop[:object].gsub!(old_disp_name,new_disp_name)
        puts "new ACL #{acl_prop[:object]}"
   
        puts "changing ACL"
     
        retries=0
        begin
          newAWS.put_acl(new_bucket, key[:key], acl_prop[:object])
        rescue Exception => e
          puts "cannot update ACL, #{e.inspect}\nretrying #{retries} out of 10 times..."
          retries += 1
          retry if retries <= 10
        end
      end
     
    end
  end
end

That’s basically all there is to it. If you want to check that all the files made it over, use this script. Now I’m going to explain each part of the script, so you probably don’t want to stay for this part.

Breakdown

#!/usr/bin/ruby
require 'rubygems'
require 'right_aws'

oldAWS = RightAws::S3Interface.new(old_aws_id, old_aws_key)
newAWS = RightAws::S3Interface.new(new_aws_id, new_aws_key)
newS3=RightAws::S3.new(new_aws_id, new_aws_key)

Initialize the connections to S3. I used the RightAws::S3 object to be able to access the buckets individually.

bucket_mapping={"old_bucket1" => "new_bucket1",
"old_bucket2" => "new_bucket2"
}

Map the old buckets to the new corresponding ones.

old_owner_id='123abcdefghijklmnoprxyz45600000000000000000000000000000000000001'
new_owner_id='123abcdefghijklmnoprxyz45600000000000000000000000000000000000002'

old_disp_name='my.old.name'
new_disp_name='my.new.name'

Store the owner IDs and display names that appear in the ACLs for the old and new accounts.

bucket_mapping.each do |old_bucket, new_bucket|
  # get all keys for old bucket by looping through sets of max keys (1000) amazon gives
  newS3Bucket=newS3.bucket(new_bucket)
	...

We loop through each of the buckets in our hash and grab the corresponding new bucket.

  oldAWS.incrementally_list_bucket(old_bucket) do |key_set|
    key_set[:contents].each do |key|
	...

Since Amazon only lets you fetch a maximum of 1000 keys at a time, we use right_aws’s incrementally_list_bucket method to eventually loop through all the keys in that bucket. Then we loop through each set of keys.

	  if newS3Bucket.key(key[:key]).exists?
	    puts "#{new_bucket} #{key[:key]} already exists. Skipping..."
	  else
	    # download data and header from old bucket
	    puts "Copying #{old_bucket} #{key[:key]}"
	    retries=0
	    begin
	      data=oldAWS.get_object(old_bucket,key[:key])
	    rescue Exception => e
	      puts "cannot download, #{e.inspect}\nretrying #{retries} out of 10 times..."
	      retries += 1
	      retry if retries <= 10
	    end

	    retries=0
	    begin
	      headers=oldAWS.head(old_bucket,key[:key])
	    rescue Exception => e
	      puts "cannot get header, #{e.inspect}\nretrying #{retries} out of 10 times..."
	      retries += 1
	      retry if retries <= 10
	    end

	    # upload key to bucket
	    puts "Putting to #{new_bucket} #{key[:key]}"
	    retries=0
	    begin
	      newAWS.put(new_bucket, key[:key], data, headers)
	    rescue Exception => e
	      puts "cannot put object, #{e.inspect}\nretrying #{retries} out of 10 times..."
	      retries += 1
	      retry if retries <= 10
	    end
	...

Now, we only copy over the file if it’s not there, and we copy over the header from the old file to the new one.

       retries=0
       begin
         acl_prop=oldAWS.get_acl(old_bucket, key[:key])
       rescue Exception => e
         puts "cannot get ACL, #{e.inspect}\nretrying #{retries} out of 10 times..."
         retries += 1
         retry if retries <= 10
       end

       # Replace Owner ID and Display name for new bucket
       puts "old ACL #{acl_prop[:object]}"
       acl_prop[:object].gsub!(old_owner_id,new_owner_id)
       acl_prop[:object].gsub!(old_disp_name,new_disp_name)
       puts "new ACL #{acl_prop[:object]}"

       puts "changing ACL"

       retries=0
       begin
         newAWS.put_acl(new_bucket, key[:key], acl_prop[:object])
       rescue Exception => e
         puts "cannot update ACL, #{e.inspect}\nretrying #{retries} out of 10 times..."
         retries += 1
         retry if retries <= 10
       end
...

Finally, we get the old ACL, which is an XML document, gsub the old owner values with the new ones, and set the result as the new file’s ACL.

Wash, rinse, repeat.
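
If you want a quick sanity check that everything copied over, a rough sketch along these lines (untested, reusing the same right_aws calls as above) lists any keys that didn’t make it into the new buckets:

#!/usr/bin/ruby
# Rough, untested sketch: list every key in each old bucket and report any
# that are missing from the corresponding new bucket.
require 'rubygems'
require 'right_aws'

oldAWS = RightAws::S3Interface.new(old_aws_id, old_aws_key)
newS3  = RightAws::S3.new(new_aws_id, new_aws_key)

bucket_mapping = { "old_bucket1" => "new_bucket1",
                   "old_bucket2" => "new_bucket2" }

bucket_mapping.each do |old_bucket, new_bucket|
  newS3Bucket = newS3.bucket(new_bucket)
  missing = []
  oldAWS.incrementally_list_bucket(old_bucket) do |key_set|
    key_set[:contents].each do |key|
      missing << key[:key] unless newS3Bucket.key(key[:key]).exists?
    end
  end
  puts "#{old_bucket} -> #{new_bucket}: #{missing.size} missing"
  missing.each { |k| puts "  #{k}" }
end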

Counting total number of objects in S3

Sometimes it comes in handy to get the total number of objects you have in S3, but it’s not as straightforward as you’d expect. Here’s a snippet I use to get the total number of objects using the right_aws gem.

require 'rubygems' 
require 'right_aws' 

AWS_ID='id'
AWS_KEY='mykey'

s3=RightAws::S3Interface.new(AWS_ID, AWS_KEY) 
count=0

buckets=s3.list_all_my_buckets.collect{|b| b[:name]}
buckets.each do |bucket_name|
    s3.incrementally_list_bucket(bucket_name) { |k| 
        count += k[:contents].size
    }
end
puts count

The script gets all your buckets and counts their contents incrementally, because AWS only allows a maximum of 1000 items to be returned per request.

Get total size of data being used

You can also modify this script to sum up the amount of space you’re using:

require 'rubygems' 
require 'right_aws' 

AWS_ID='id'
AWS_KEY='mykey'

s3=RightAws::S3Interface.new(AWS_ID, AWS_KEY) 
size=0

buckets=s3.list_all_my_buckets.collect{|b| b[:name]}
buckets.each do |bucket_name|
    s3.incrementally_list_bucket(bucket_name) { |k| 
        k[:contents].each do |content|
            size += content[:size]
        end
    }
end
puts size

Note: I just made up this script right now and haven’t tested it out. Please tell me if it works.

Lazy blogging with Twitter and Tumblr

Recently, I got bored with trying to follow links from Twitter Trends for videos and images, so I quickly put this script together to search for new Twitter results and then post the tweets with links to Tumblr, where the images and videos will actually show up.

Setup

You’ll need to have a few gems:

  • hpricot
  • twitter
  • ruby-tumblr

sudo gem install hpricot twitter ruby-tumblr

The Script

This script searches for ‘#ELC http’, which should return any tweets with links in them. It then rips out each link and processes it (follows redirects, rips images out of Twitpic, etc.), then posts it to Tumblr as a video, image, or link, depending on how the final link ends. Right now, I’m using username/password auth for Twitter since I don’t have an API key set up. The script creates a pid file to make sure that only one of these processes can run at a time. It also uses code from a past post about scraping Twitpics.

#!/usr/local/bin/ruby
require 'rubygems'
gem 'twitter'
gem 'ruby-tumblr'
require 'twitter'
require 'net/http'
require 'tumblr'
require 'hpricot'
require 'fileutils'

exit if File.exists?('scraper.pid')
File.open('scraper.pid','w') {|f| f.write('hey')} # primitive lock to prevent multiple processes from spawning

TUMB_EMAIL    = 'myemail'
TUMB_PASS     = 'mypass'
TWIT_USERNAME = 'username'
TWIT_PASS     = 'pass'

def rip_twitpic(url)
  begin
    code=url.match(/[\w]+$/).to_s
    unless code.blank?
      uri=URI.parse(url)
      resp=Net::HTTP.get_response(uri)
      html=Hpricot(resp.body)
      html.at("#photo-display")['src']
    end
  rescue Exception => e
    puts "Error extracting twitpic: #{e}"
    url
  end
end


def follow_link(link)
  uri=URI.parse(link)
  begin
   resp=Net::HTTP.get_response(uri)
   return follow_link(resp['location']) if resp.class.eql?(Net::HTTPMovedPermanently) || resp.class.eql?(Net::HTTPRedirection) || resp.class.eql?(Net::HTTPFound)
   link
  rescue Exception => e
    puts "Error getting #{link}"
    nil
  end
end

def upload_to_tumblr(url, desc=nil)
  Tumblr::API::write(TUMB_EMAIL, TUMB_PASS)  do
    if url.match(/\.jpg|\.jpeg|\.png|\.gif$/)
      photo(url, desc||"Photo from Twitter")
    elsif url.match(/^http:\/\/www\.youtube\.com\/watch/)
      video(url, desc||"Video From Twitter")
    elsif url.match(/^http:\/\/twitpic\.com\/[\w]+/)
      photo(rip_twitpic(url), desc||"Photo From Twitter")
    else
      link(url, desc||url)
    end
  end
end

httpauth = Twitter::HTTPAuth.new(TWIT_USERNAME,TWIT_PASS)
twit= Twitter::Base.new(httpauth)
results=Twitter::Search.new('#elc http')
converted_links = []
links = []

results.each do |tweet|
  text = tweet.text.match(/http:\/\/([\S])+/).to_s
  links << text unless text.blank?
end

links.each do |link|
  final = follow_link(link)
  converted_links << final unless final.nil?
end

converted_links.uniq.each do |link|
  upload_to_tumblr(link)
end

FileUtils.rm('scraper.pid')

Just run the script as a scheduled task, like a cron job, and it should scrape the links and post them to your Tumblr. I must warn you that this code was written in a couple of hours, so there is definitely room for improvement.

Scraping Images from Twitpics

Recently, I’ve been scraping images and videos from Twitter, and one site that has not been easy to grab pics from is Twitpic. Here’s a snippet of code that I’ve been using to grab the image from Twitpic with Hpricot:

require 'rubygems'
require 'net/http'
require 'hpricot'

def rip_twitpic(url)
  begin
    code=url.match(/[\w]+$/).to_s
    unless code.blank?
      uri=URI.parse(url)
      resp=Net::HTTP.get_response(uri)
      html=Hpricot(resp.body)
      html.at("#photo-display")['src']
    end
  rescue Exception => e
    puts "Error extracting twitpic: #{e}"
    url
  end
end

Note: Thanks to Stephen Boisvert for pointing out my typo. That’s what I get for rushing something out and drinking too many cups of coffee.

Finding Memory Leaks with Bleak House

Sometimes a project grows to many lines of code, and some of those parts may have memory leaks. These leaks might not be bad at first, but they can eventually eat up all the memory on a server and cause it to slow down. Then the thins/mongrels have to be restarted to free up the memory, and a bad memory leak will require restarting them frequently.

Sometimes the cause is a gem that uses a C library with memory leaks, or maybe there’s a problem with the way the code uses a particular method. This is where Bleak House comes in handy. It’s an instrumented build of Ruby that tracks objects in the Ruby heap so you can analyze them.

Here are the steps I took to track down the cause of a memory leak on a project.

Setting up the Server

Depending on how you installed Ruby, installing the gem can be a breeze or hell. For users who compiled Ruby from source, all you need to do is:

sudo gem install bleak_house

If you’re on a packaged version of Ruby, like the one that comes with Leopard, I recommend compiling Ruby from source and then installing the gem; installing it alongside the packaged version requires a lot of patching of the gem files.

For Leopard, here’s how I compiled Ruby. WARNING: this will basically break all your gems, so you will have to reinstall them.

cd /usr/local/src
wget ftp://ftp.ruby-lang.org/pub/ruby/1.8/ruby-1.8.6-p369.tar.gz
tar xzf ruby-1.8.6-p369.tar.gz
cd ruby-1.8.6-p369
./configure --enable-shared --enable-pthread CFLAGS=-D_XOPEN_SOURCE=1
make
sudo make install

Add your new Ruby path towards the end of your ~/.bash_profile, then reload it:

export PATH="/usr/local/bin:/usr/local/sbin:/usr/local/mysql/bin:$PATH"
source ~/.bash_profile

Finally, install the gem:

gem install bleak_house

After installing the gem, you need to add this line to your environment.rb:

require 'bleak_house'

Start the server using ruby-bleak-house:

BLEAK_HOUSE=1 ruby-bleak-house ./script/server

Find a way to send POST and GET requests to your server to get the problem to resurface.

For GETs, I would use httperf and pound the server incrementally.

httperf --hog --server www.awesomeness.site --rate=8 --num-con=800 --uri=/leaky

POSTs were tricky for this particular project: authentication was done through a centralized server that does some cookie magic, so I used curl with a cookie file.

The cookie file uses Netscape’s cookie format (tab separated). Below is an example:

.netscape.com TRUE / FALSE 946684799 NETSCAPE_ID 100103

Here’s the definition of each of those parameters:
domain – The domain that created AND that can read the variable.
flag – A TRUE/FALSE value indicating if all machines within a given domain can access the variable. This value is set automatically by the browser, depending on the value you set for domain.
path – The path within the domain that the variable is valid for.
secure – A TRUE/FALSE value indicating if a secure connection with the domain is needed to access the variable.
expiration – The UNIX time that the variable will expire on. UNIX time is defined as the number of seconds since Jan 1, 1970 00:00:00 GMT.
name – The name of the variable.
value – The value of the variable.

Here’s my cookie.txt:

# Netscape HTTP Cookie File
# http://www.netscape.com/newsref/std/cookie_spec.html
# This file was generated by libcurl! Edit at your own risk.

www.awesomeness.site	TRUE	/	FALSE	0	_my_session	BAh7BjoPc2Vzc2lvbl9pZCIlZGRjMWEyODdhMGYyYTUzOTRmNDRjMTkzNjExMWYyMDQ%3D--e853584782ddb6561c88459997ff00e6de76e292
www.awesomeness.site	TRUE	/	FALSE	1	SERVERID	
.awesomeness.site	TRUE	/	FALSE	0	user_auth	%20424%3D12dsff32dasd45097374221%7C%3B

Here’s the curl command to POST 1,000 times using my cookies:

for i in {1..1000}; do curl www.awesomeness.site/leak_upload/create -X POST -H "Content-Length:19" -d 'text%5Bbody%5D=abc' -b cookies.txt; done

Analyzing the Results

Stop the server and it’ll start dumping some data to your /tmp/ directory:

  ** BleakHouse: working...
  ** BleakHouse: complete
  ** Bleakhouse: run 'bleak /tmp/bleak.22930.0.dump' to analyze.

Do exactly what it tells you to do:

bleak /tmp/bleak.22930.0.dump

My output:

Displaying top 100 most common line/class pairs
4297007 total objects
4297007 filled heap slots
3724030 free heap slots
2150566 (eval):3:String
1162682 __null__:__null__:__node__
  85544 /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:85:Hash
  85543 /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:1655:Hash
  64865 __null__:__null__:String
  64017 /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:1651:Role
  58974 /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/core_ext/blank.rb:50:String
  31429 /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/attribute_methods.rb:102:String
  26674 /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.2/lib/active_support/callbacks.rb:180:Class
  26673 /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/validations.rb:221:Hash
  26673 /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/validations.rb:1050:ActiveRecord::Errors
  21345 /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/associations/association_collection.rb:387:Array
  21342 /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/dirty.rb:102:Hash
  21338 /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:3008:Hash
  21338 /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:2436:Hash
  16094 /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/base.rb:1651:User
  16010 /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/associations.rb:1277:ActiveRecord::Associations::HasManyAssociation
  16004 /usr/lib/ruby/gems/1.8/gems/activerecord-2.3.2/lib/active_record/timestamp.rb:22:Time
  11033 (eval):1:__node__

So, the top item is (eval):3:String. This project does a lot of string manipulation, but it shouldn’t grow that large. I searched good old Google and found this blog post. It turns out the version of Ruby on our server was 1.8.6 p114, which did have a crazy memory leak with Strings. So, I updated Ruby to 1.8.7, and String’s position in the bleak dump dropped down towards the bottom of the list.

Copy S3 assets with right_aws

Lately, I’ve been using right_aws to interact with S3. One thing I found helpful was copying assets between buckets while keeping the same permissions on them. However, it’s not as simple as just copying the assets over: you need to get the Access Control Policy from the source and put it on the copied asset.

Here’s a snippet of code that does the magic.

require 'rubygems' 
require 'right_aws' 

s3=RightAws::S3Interface.new(S3_KEY, S3_SECRET)
s3.copy(SOURCE_BUCKET, SOURCE_PATH, DESTINATION_BUCKET, DESTINATION_PATH, :copy, {"Cache-Control" => 'max-age=315360000', "Expires" => '315360000'})
acl_prop=s3.get_acl(SOURCE_BUCKET, SOURCE_PATH)
s3.put_acl(DESTINATION_BUCKET, DESTINATION_PATH, acl_prop[:object])