2X Dynos Enter General Availability

2X Dynos

Thousands of Heroku customers have already updated their apps to utilize 2X dynos since they entered public beta on April 5. By providing twice the memory and CPU share, 2X dynos help to improve app performance and efficiency.

2X Dynos enter General Availability today. Starting tomorrow, June 1, 2013, 2X dynos will be billed at the full $0.10 per hour rate.

Heroku customers have used 2X dynos to solve a number of problems:

1. Concurrency for Rails with Unicorn – Rails apps see significant performance improvements using Unicorn. In-dyno queuing allows requests to be served by any available worker. 2X dynos allow more workers per dyno, yielding better-than-linear performance improvements.

2. JVM Languages – The JVM has an explicit memory model designed for multi-threaded concurrency, and has many frameworks explicitly designed to take advantage of this property. Utilizing more threads requires more memory for both the thread stacks and for objects created by these threads. The JVM is fully capable of taking advantage of the vertical scale in a 2X dyno.

3. Memory-intensive background jobs – image processing, big-data crunching, and geospatial processing often need larger dynos. If your app experiences R14 out-of-memory errors, 2X dynos will provide increased headroom.

You can upgrade your app to 2X dynos via the Heroku Toolbelt:

$ heroku ps:resize web=2X worker=1X
Resizing dynos and restarting specified processes... done
web dynos now 2X ($0.10/dyno-hour)
worker dynos now 1X ($0.05/dyno-hour)

… or via the Dashboard on the app’s resources page. For full instructions, see the Dev Center article.

Summary

If you’re looking to improve concurrency in Rails apps, run JVM-based languages, or handle other memory-hungry workloads, 2X dynos can make your app faster. Give them a try.

Heroku Platform API, Now Available in Public Beta

Today, we are excited to release our new platform API into public beta, turning Heroku into an extensible platform for building new and exciting services. Our platform API derives from the same command-and-control API we use internally, giving entrepreneurs and innovators unprecedented power to integrate and extend our platform. Some of the uses we’ve imagined include:

  • Building mobile apps that control Heroku from smartphones and tablets
  • Combining Heroku with other services and integrating with developer tools
  • Automating custom workflows with programmatic integration to Heroku

Platform API

The platform API empowers developers to automate, extend, and combine Heroku with other services. You can use the platform API to programmatically create apps, provision add-ons, and perform other tasks that could previously only be accomplished with the Heroku Toolbelt or Dashboard.

Getting Started

The Heroku platform API uses HTTP and JSON to transfer data and is simple enough to experiment with using cURL. The examples below use the -n switch to read credentials from the ~/.netrc file, which is created automatically if you are using the Toolbelt.
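For reference, a Toolbelt-managed ~/.netrc entry looks roughly like this (the login and password shown are placeholders for your Heroku email and API key):

```
machine api.heroku.com
  login user@example.com
  password 0123456789abcdef0123456789abcdef
```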

First, create a new Heroku app by sending a POST request to the /apps endpoint:

$ curl -n -X POST https://api.heroku.com/apps \
-H "Accept: application/vnd.heroku+json; version=3"
{
  …
  "name":"mighty-cove-7151",
  …
}

This is equivalent to running $ heroku create or creating an app in the Dashboard. The platform API uses the MIME type provided in the Accept header to determine which version to serve. Version 3 is the first publicly available version. Prior versions were internal only.

Now provision the Postgres add-on for use with the created app:

$ curl -n -X POST https://api.heroku.com/apps/mighty-cove-7151/addons \
-H "Accept: application/vnd.heroku+json; version=3" \
-d "{\"plan\":{\"name\":\"heroku-postgresql:dev\"}}"

This is the same as running $ heroku addons:add heroku-postgresql:dev -a mighty-cove-7151 or adding an add-on in the Dashboard. You can use the API to add any available add-ons and to perform other app setup tasks such as scaling and adding config vars. Using the API, you can automate the process of going from a new to a fully configured app with ease.
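Outside of cURL, any HTTP client can issue the same calls. A rough Node.js sketch of building the add-on request above (the helper function is hypothetical; the Authorization header reproduces the basic auth that curl's -n switch reads from ~/.netrc, with a blank user and your API key):

```javascript
// Build the options for a platform API call that provisions an add-on.
// Mirrors the curl example: versioned Accept header, JSON body, basic
// auth. The app name, plan, and key below are all placeholders.
function addonRequest(app, plan, apiKey) {
  return {
    url: 'https://api.heroku.com/apps/' + app + '/addons',
    method: 'POST',
    headers: {
      'Accept': 'application/vnd.heroku+json; version=3',
      'Content-Type': 'application/json',
      'Authorization': 'Basic ' + Buffer.from(':' + apiKey).toString('base64')
    },
    body: JSON.stringify({ plan: { name: plan } })
  };
}

// Hand the result to the HTTP client of your choice.
var req = addonRequest('mighty-cove-7151', 'heroku-postgresql:dev', 'my-api-key');
```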

These two examples barely scratch the surface of what’s possible. For more details on using the platform API, refer to the quickstart and reference documentation.

The Future

Over the coming weeks and months, while the platform API is in public beta, we will collect and incorporate feedback. During the public beta, we may introduce breaking changes to the API. All changes will be posted on the changelog.

Providing Feedback

The goal of the public beta is to collect and incorporate feedback. Please send feedback to api-feedback@heroku.com. We’re especially interested in your thoughts on the following:

  • What do you think of the overall design?
  • Is there anything missing for your use case?
  • How can we make the API easier to use?

If you have questions about using the new API, please post them on Stack Overflow with the heroku tag.

We hope you like what you find and look forward to your exploration and innovation on top of the Heroku platform.


An Extensible Approach to Browser Security Policy

Alex Russell posted some thoughts today about how he wishes the W3C would architect the next version of the Content Security Policy.

I agree with Alex that designing CSP as a “library” that uses other browser primitives would increase its long-term utility and make it compose better with other platform features.

Alex is advocating the use of extensible web principles in the design of this API, and I wholeheartedly support his approach.

Background

You can skip this section if you already understand CSP.

For the uninitiated, Content Security Policy is a feature that allows web sites to opt into stricter security than what the web platform offers by default. For example, it can restrict which domains to execute scripts from, prevent inline scripts from running altogether, and control which domains the network stack is allowed to make HTTP requests to.

To opt into stricter security using the current version of CSP, a website includes a new header (Content-Security-Policy) which can contain a number of directives.

For example, in order to prevent the browser from making any network requests to cross-domain resources, a server can return this header:

Content-Security-Policy: default-src 'self'

This instructs the browser to restrict all network requests to the current domain. This includes images, stylesheets, and fonts. Essentially, this means that scripts run on your page will be unable to send data to third-party domains, which is a common source of security vulnerabilities.

If you want to allow the browser to make requests to its own domain, plus the Google Ajax CDN, your server can do this:

Content-Security-Policy: default-src 'self' ajax.googleapis.com
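Mechanically, a directive value like this is just a whitelist of sources matched against each request. A toy sketch of that matching in plain JavaScript (simplified: it ignores schemes, ports, wildcards, and the many other source expressions real CSP supports):

```javascript
// Split a default-src directive value into its source list.
function parseSources(directiveValue) {
  return directiveValue.trim().split(/\s+/);
}

// A request is allowed if any source matches: 'self' matches the
// page's own host, anything else is compared as a literal host name.
function isAllowed(sources, requestHost, pageHost) {
  return sources.some(function (source) {
    if (source === "'self'") return requestHost === pageHost;
    return requestHost === source;
  });
}

var sources = parseSources("'self' ajax.googleapis.com");
isAllowed(sources, 'ajax.googleapis.com', 'example.com'); // allowed
isAllowed(sources, 'evil.example.net', 'example.com');    // blocked
```

Real implementations are considerably more involved, but this is the essential shape of the check the browser performs for every request.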

Factoring the Network Layer

If you look at what CSP is doing, it’s essentially a syntax for controlling what the network stack is allowed to do.

There are other parts of the web platform that likewise control the network stack, and more all the time. What you’d like is for all of these features to be defined in terms of some lower-level primitive—ideally, one that was also exposed to JavaScript itself for more fine-grained, programmatic tweaks.

Imagine that you had the ability to intercept network requests programmatically, and decide whether to allow the request to continue. You might have an API something like this:

var origin = window.location.origin;

page.addEventListener('fetch', function(e) {
  // parse the request URL so we can compare origins
  var url = new URL(e.request.url);
  if (origin !== url.origin) {
    // block the network request
    e.preventDefault();
  }

  // otherwise, allow the network request through
});

You would then be able to describe how the browser interprets CSP in terms of this primitive API.

You could even imagine writing a CSP library purely in JavaScript!

page.addEventListener('fetch', function(e) {
  if (e.type === 'navigate') {
    e.respondWith(networkFetch(e.request.url).then(function(response) {
      // extract CSP headers and associate them with e.window.id
      // this is a pseudo-API to keep the implementation simple
      CSP.setup(e.window.id, response);

      return response;
    }));
  } else {
    if (!CSP.isAllowed(e.window.id, e.request)) {
      e.preventDefault();
    }
  }
});

The semantics of CSP itself can be expressed in pure JavaScript, so these primitives are enough to build the entire system ourselves!

I have to confess, I’ve been hiding something from you. There is already a proposal to provide exactly these network layer hooks. It even has exactly the API I showed above.

The Extensible Web

Extensible web principles give us a very simple path forward.

Continue iterating on the declarative form of the Content Security Policy, but describe it in terms of the same primitives that power the Navigation Controller proposal.

When web developers want to tweak or extend the built-in security features, they can write a library that intercepts requests and applies tweaks to the policy by extending the existing header syntax.

If all goes well, those extensions will feed into the next iteration of CSP, giving us a clean way to let platform users inform the next generation of the platform.

This approach also improves the likelihood that other features that involve the network stack will compose well with CSP, since they will also be written in terms of this lower level primitive.

Many of the benefits that Dave Herman outlined in the closing of my last post are brought into concrete terms in this example.

I hope to write more posts that explore how extensible web principles apply to platform APIs, both new and old.


Fellow web developers, let’s persuade Adam Barth, Dan Veditz, and Mike West (the CSP specification editors) to factor the next version of CSP in terms of the new Navigation Controller specification.

Then, we will have the tools we need to extend the web’s security model forward.


7 secrets every developer should know before getting into a manager or lead role


This guest post is contributed by Pramod Paranjape, who until recently ran a diverse delivery team of IT engineers and managers. He writes articles for new managers at ConverSight.com, actively contributes on Quora on topics like team management and IT outsourcing, and releases slide decks based on real-life management case studies on SlideShare.

At some point in your career, you have to decide whether you want to continue on a technical path or take up a management role.

Imagine that you have taken up a management role; what would your life look like?

The foundations remain the same for both technical and management tracks. Here is what will not change:

  1. Sound technical background: Many successful project managers have been excellent technical developers earlier in their careers. Strong technical skills go a long way in identifying technology risks in projects. If you have a sound foundation of technical skills, you have equal chances of taking up either of these career paths.
  2. Using software engineering techniques in daily life: Delivering good code in a timely manner requires understanding of standard coding practices, defect management system, version control system and timesheet systems. It may sound obvious, but using the basic software engineering techniques ensures predictable delivery. Whatever path you choose, make sure you have an in-depth knowledge of software engineering techniques.

What will change when you get into manager or lead role?

7 secrets nobody told you:

  1. A developer has to focus on his/her own tasks. When you become a manager, you will need to get the tasks done by the team members. You will need to allocate work to your team members based on their abilities. You will have to identify strengths and weaknesses of each team member. You will also give due consideration to their aspirations.
    As a manager, you will need to allocate tasks according to team members’ strengths to maximize output.
  2. While completing the assigned work, a team member may be stuck. A manager listens to him/her and analyses the situation. The team member may have adopted an unconventional approach to complete the task. This approach may be vastly different from the approach you would have taken.
    In a manager’s role, you will need to analyse from the team member’s perspective.
  3. A manager plans the work based on an overall strategy of solving a problem. Based on the strategy, he/she sets priorities. Prioritizing is deciding what is important over what is less important.
    A manager decides the strategy to obtain a solution because a developer focuses on completing the work assigned to him/her. Be ready to take the bigger picture into account in a manager’s role.
  4. Team members may need protection from conflicting power centers within the organization. Managers who can provide ‘air cover’ get their team’s respect.
    A manager defends his/her team members, so that they can focus on their work. This is a critical leadership trait to succeed as a manager.
  5. Team members like to work with a manager from whom they can learn. A conscious effort to share knowledge motivates the team.
    As a manager, you will have to share your knowledge and let the team learn from you.
  6. A manager conducts meetings to communicate various messages. He/She writes to different stakeholders to communicate the task status. Speaking and writing may seem basic skills, but using these skills effectively is very important for a manager.
    You have to hone your communication skills to become an effective manager.
  7. A manager does not develop code or test it. In some cases, a manager may take up some part of a team member’s work. Ultimately, a manager’s success depends on his team members completing their work. Highly motivated and happy team members complete their work in time.
    You will need to motivate team members to complete their assigned work.

To summarize the seven secrets

To be an effective manager, you must:

  1. Allocate work based on the abilities of a team member.
  2. Analyse issues from a team member’s perspective.
  3. Be ready to take the bigger picture into account in a manager’s role.
  4. Defend your team as your team’s leader.
  5. Let the team learn from you.
  6. Communicate with stakeholders effectively.
  7. Motivate the team to get the best out of them.

Feel free to ask questions and give feedback in the comments section of this post. Thanks!


Programming Elixir: Functional |> Concurrent |> Pragmatic |> Fun

Programming Elixir: Functional |> Concurrent |> Pragmatic |> Fun now in beta

Extend the Web Forward

If we want to move the web forward, we must increase our ability as web developers to extend it with new features.

For years, we’ve grabbed the browser’s extension points with two hands, not waiting for the browser vendors to gift us with new features. We built selector engines, a better DOM API, cross-domain requests, and cross-frame APIs.

When the browser has good extension points (or any extension points, really), we live in a virtuous cycle:

  • Web developers build new APIs ourselves, based on use-cases we have
  • We compete with each other, refining our libraries to meet use cases we didn’t think of
  • The process of competition makes the libraries converge towards each other, focusing the competition on sharp use-case distinctions
  • Common primitives emerge, which browser vendors can implement. This improves performance and shrinks the amount of library code necessary.
  • Rinse, repeat.

We’ve seen this time and time again. When it works, it brings us querySelectorAll, the template element, and Object.observe.

The Sad Truth

The sad truth is that while some areas of the browser are extremely extensible, other areas are nearly impossible to extend.

Some examples include the behavior and lifecycle of custom elements in HTML, the CSS syntax, and the way that the browser loads an HTML document in the first place. This makes it hard to extend HTML, CSS, or build libraries that support interesting offline capabilities.

And even in some places that support extensibility, library developers have to completely rewrite systems that already exist. For example, John Resig had to rewrite the selector engine from scratch just to add a few additional pseudo-properties, and there is still no way to add custom pseudo-properties to querySelectorAll.

Declarative vs. Imperative

A lot of people see this as a desire to write everything using low-level JavaScript APIs, forever.

No.

If things are working well, JavaScript library authors write new declarative APIs that the browser can roll in. Nobody wants to write everything using low-level calls to canvas, but we’re happy that canvas lets us express low-level things that we can evolve and refine.

The alternative, that web authors are stuck with only the declarative APIs that standards bodies have invented, is too limiting, and breaks the virtuous cycle that allows web developers to invent and iterate on new high-level features for the browser.

In short, we want to extend the web forward with new high-level APIs, but that means we need extension points we can use.

Explaining the Magic

If we want to let web authors extend the web forward, the best way to do that is to explain existing and new high-level forms in terms of low-level APIs.

A good example of in-progress work along these lines is Web Components, which explains how elements work in terms of APIs that are exposed to JavaScript. This means that if a new custom element becomes popular, it’s a short hop to implementing it natively, because the JavaScript implementation is not a parallel universe; it’s implemented in terms of the same concepts as native elements.

That doesn’t necessarily mean that browsers will simply rubber-stamp popular components, but by giving library authors the tools to make components with native-like interfaces, it will be easy for vendors to synthesize web developer zeitgeist into something standard.

Another example is offline support. Right now, we have the much-derided AppCache, which is a declarative-only API that makes it possible to display an HTML page, along with its assets, even if the browser is offline.

AppCache is not implemented in terms of a low-level JavaScript API, so when web developers discovered problems with it, we had no way to extend or modify it to meet our needs. This also meant that we had no way to show the browser vendors what kinds of solutions would work for us.

Instead, we ended up with years of stagnation, philosophical disagreements and deadlock between web developers and specification editors, and no way to move forward.

What we need instead is something like Alex Russell’s proposal that allows applications to install JavaScript code in the cache that intercepts HTTP requests from the page and can fulfill them, even when the app is offline. With an API like this, the current AppCache could be written as a library!
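The core of such a library is small: a map of cached responses, consulted whenever a request is intercepted. A sketch of that lookup logic in plain JavaScript (the 'fetch' event and respondWith in the comment follow the proposal's hypothetical API; only the cache itself is shown as runnable code):

```javascript
// A toy offline cache: serve a stored response when one exists for a
// URL, and signal a fallback to the network otherwise.
function OfflineCache() {
  this.entries = {};
}

// Store a response body for a URL.
OfflineCache.prototype.put = function (url, response) {
  this.entries[url] = response;
};

// Return the stored response, or null to indicate a cache miss.
OfflineCache.prototype.match = function (url) {
  return this.entries.hasOwnProperty(url) ? this.entries[url] : null;
};

// Usage inside the proposed event handler (hypothetical API, not
// runnable in today's browsers):
// page.addEventListener('fetch', function (e) {
//   var cached = cache.match(e.request.url);
//   if (cached) e.respondWith(cached);
// });
```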

Something like Jonas Sicking’s app cache manifest is a great companion proposal, giving us a nice starting point for a high-level API. But this time if the high-level API doesn’t work, we can fix it by using the low-level API to tweak and improve the manifest.

We can extend the web forward.

Extensions != Rewriting

It’s important to note that web developers don’t want a high level API and then a cliff into the low-level API.

Today, while you can implement custom elements or extend the selector engine, you can only do this by rewriting large chunks of the stack alongside the feature you want.

Real extensibility means an architecture that lets you tweak, not rewrite. For example, it would be possible to add custom rules to CSS by writing a full selector engine and application engine, and apply rules via .style as the DOM changes. With mutation observers, this might even be feasible. In fact, this is how some of the most devious hacks in the platform today (like the Polymer Shadow DOM polyfill) actually work.

That kind of “extensibility” doesn’t fit the bill. It doesn’t compose well with other extensions, defeats the browser’s ability to do performance work on unrelated parts of the stack (because the entire stack had to be rewritten), and makes meaningful iteration too hard.

Browser implementers are often wary of providing extension points that can be performance footguns. The biggest footgun is using libraries that rewrite the entire stack in JavaScript, and whole-stack-rewriting strategies are the tactic du jour today. For performance, we have little to lose and much to gain by making extensions more granular.

Extend the Web Forward

So what do we gain from a more extensible web? I’ll let Dave Herman, a member of TC39, answer that for me.

  • When you design new APIs, you are forced to think about how the existing system can express most of the semantics. This cleanly separates what new power is genuinely needed and what isn’t. This prevents cluttering the semantics with unnecessary new magic.
  • Avoiding new magic avoids new security surface area.
  • Avoiding new magic avoids new complexity (and therefore bugs) in implementation.
  • Avoiding new magic makes more of the new APIs polyfillable.
  • Being more polyfillable means people can ramp up faster, leading to faster adoption and evolution of the platform.
  • Avoiding new magic means that optimizations in the engines can focus on the stable core, which affects more of the new APIs as they are added. This leads to better performance with less implementation effort.
  • Avoiding new magic means less developer education required; people can understand new APIs more easily when they come out, because they build off of known concepts.
  • This means that the underlying platform gets fleshed out to be expressive enough to prototype new ideas. Library authors can experiment with new features and create more cowpaths to fill the Web API pipeline.

All this, and more! There’s something for everybody!

Implementors and web developers: let’s work together to extend the web forward!


A “FREE” Online Course: Programming the Web with Ruby – 4th batch


Programming the Web with Ruby

Registrations are now open for RubyLearning’s “Pay if you Like” online course on “Programming the Web with Ruby”. The first batch had over 2,000 participants. Web-based applications offer many advantages, such as instant access, automatic upgrades, and opportunities for collaboration on a massive scale. However, creating Web applications requires different approaches than traditional applications and involves the integration of numerous technologies. The course topics should help those who have some knowledge of Ruby programming to get started with web programming (this does not cover Ruby on Rails).

Who’s It For?

Anyone with some knowledge of Ruby programming.

Dates

The course starts on Saturday, 29th June 2013 and runs for 2 weeks.

Is the course really free?

A lot of effort and time goes into building such a course and we would really love that you pay at least US$ 15 for the course. Since this is a “Pay if you Like” course, you are under no obligation to pay and hence the course would be free for you.

For those who contribute US$ 15, we shall email them a copy of the book (.pdf) “Programming the Web with Ruby” – the course is based on this book.

How do I register and pay the course fees?

  • First, create an account on the site and then pay the fees of US$ 15 by clicking on the PayPal button.
  • After payment of the fees, please send us your name at satish [at] rubylearning [dot] org so that we can send you the eBook, which normally happens within 48 hours.
  • If you want to take the course for free, please just create an account and send us your name (as mentioned above).

Course Contents

  • Using Git
  • Using GitHub
  • Using RVM (for *nix)
  • Using pik (for Windows)
  • Using bundler
  • Using Heroku
  • Creating a simple webpage using HTML5, CSS and JavaScript
  • Store your webpage files on GitHub
  • Understanding HTTP concepts
  • Using cURL
  • net/http library
  • Using URI
  • Using open-uri
  • Using Nokogiri
  • Creating one’s own Ruby Gem
  • Learning Rack
  • Deploying Pure Rack Apps to Heroku
  • Deploying a static webpage to Heroku
  • What’s JSON?
  • Using MongoDB with Ruby Mongo driver
  • MongoHQ the hosted database
  • Using Sinatra
  • Deploying Sinatra apps to Heroku
  • Sinatra and SQLite3 interaction

The course contents are subject to change.

Mentors

Satish Talim, Victor Goff III, Michele Garoche and others from the RubyLearning team.

RubyLearning’s IRC Channel

Mentors and students hang out at RubyLearning’s IRC (irc.freenode.net) channel (#RubyLearning.org) for both technical and non-technical discussions. Everyone benefits with the active discussions on Ruby with the mentors.

Here are some details on how the course works:

Important:

Once the course starts, you can login and start with the lessons any day and time and post your queries in the forum under the relevant lessons. Just to set the expectations correctly, there is no real-time ‘webcasting’.

Methodology:

  • The Mentors shall give you URL’s of pages and sometimes some extra notes; you need to read through. Read the pre-class reading material at a convenient time of your choice – the dates mentioned are just for your guideline. While reading, please make a note of all your doubts, queries, questions, clarifications, comments about the lesson and after you have completed all the pages, post these on the forum under the relevant lesson. There could be some questions that relate to something that has not been mentioned or discussed by the mentors thus far; you could post the same too. Please remember that with every post, do mention the operating system of your computer.
  • The mentor shall highlight the important points that you need to remember for that day’s session.
  • There could be exercises every day. Please do them.
  • Participate in the forum for asking and answering questions or starting discussions. Share knowledge, and exchange ideas among yourselves during the course period. Participants are strongly encouraged to post technical questions, interesting articles, tools, sample programs or anything that is relevant to the class / lesson. Please do not post a simple "Thank you" note or "Hello" message to the forum. Please be aware that these messages are considered noise by people subscribed to the forum.

Outline of Work Expectations:

  1. Most of the days, you will have exercises to solve. These are there to help reinforce what you have just learned.
  2. Some days may have some extra assignments / food for thought articles / programs.
  3. Above all, do take part in the relevant forums. Past participants have confirmed that they learned the best by active participation.

Some Commonly Asked Questions

  • Qs. Is there any specific time when I need to be online?
    Ans. No. You need not be online at a specific time of the day.
  • Qs. Is it important for me to take part in the course forums?
    Ans. YES. You must Participate in the forum(s) for asking and answering questions or starting discussions. Share knowledge, and exchange ideas among yourselves (participants) during the course period. Participants are strongly encouraged to post technical questions, interesting articles, tools, sample programs or anything that is relevant to the class / lesson. Past participants will confirm that they learned the best by active participation.
  • Qs. How much time do I need to spend online for a course, in a day?
    Ans. This will vary from person to person. All depends upon your comfort level and the amount of time you want to spend on a particular lesson or task.
  • Qs. Is there any specific set time for feedback (e.g., any mentor responds to me within 24 hours?)
    Ans. Normally somebody should answer your query / question within 24 hours.
  • Qs. What happens if nobody answers my questions / queries?
    Ans. Normally, that will not happen. In case you feel that your question / query is not answered, then please post the same in the thread – “Any UnAnswered Questions / Queries”.
  • Qs. What happens to the class (or forums) after a course is over? Can you keep it open for a few more days so that students can complete and discuss too?
    Ans. The course and its forum remain open for a month after the last day of the course.

Remember, the idea is to have fun learning Ruby.

Acknowledgments

About RubyLearning.org

RubyLearning.org, since 2005, has been helping Ruby Newbies go from zero to awesome!




Online Ruby on Rails Programming Course

Online Ruby on Rails Programming Course at the Pragmatic Studio

London Fork-a-thon

On 15 May, join Heroku for the London Fork-a-thon, a hack-a-thon-like event (hands-on and live coding) where Heroku engineers will be available to answer any questions you might have regarding Heroku in Europe and to help you fork your app to the Europe region.

Space is limited, so register now.

Heroku just announced the release of the Heroku Europe region in public beta. The Europe region runs apps from datacenters located in Europe and offers increased performance for customers located in that region.

We're calling this a "fork-a-thon" because fork is the fastest way to move your app to the Europe region. Heroku fork lets you copy an existing application, including its add-ons, config vars, and Heroku Postgres data. Using fork, you can deploy a copy of your app to the Europe region to give your end users increased app performance.

Come fork your app and see the kind of gains you can get for your users.

Date:

15 May 2013

Location:

LBi

146 Brick Lane

E1 6RU London

Agenda:

5:00 – 5:45 p.m. Heroku Presentation

5:45 – 6:00 p.m. Q&A

6:00 – 8:00 p.m. Engineer-assisted working session with food and drinks

Register Now:

This is a free hands-on event, so please bring a laptop. Space is limited, so reserve your spot today.

Regards,

The Heroku Team

#415 Upgrading to Rails 4

With the release of Rails 4.0.0.rc1 it’s time to try it out and report any bugs. Here I walk you through the steps to upgrade a Rails 3.2 application to Rails 4.

New Dyno Networking Model

Today we're announcing a change to how networking on Heroku works. Dynos now get a dedicated virtual networking interface instead of sharing a network interface with other dynos. This makes dynos behave more like standard Unix containers, resulting in better compatibility with application frameworks and better parity between development and production environments.

Background

Previously, network interfaces were shared between multiple dynos. This weakened the abstraction of a dyno as a standard Unix-style container with a network interface of its own and full use of the entire TCP port range.

The shared network interface also resulted in a low-grade information leak: one dyno could obtain some information about connections made by other dynos. This information did not include any customer data or other customer-identifying information, but it broke the core principle of tenant isolation. With the new networking model, dynos now have fully isolated network configurations. We’d like to thank John Leach for working with us to analyze this aspect of our old networking model and point out the weakness.

Networking improvements

We want the Heroku dyno to resemble a standard OS environment as much as possible, except that you don't have to manage anything yourself. A dyno should let you instantly run any application that you can run on another reasonable Unix-style system.

The new dyno networking model brings us closer to that goal: the dyno no longer imposes any restrictions on which ports an application can listen on. This improves out-of-the-box compatibility with application frameworks that listen on multiple ports (for whatever reason). You can still only connect from the outside world to the port specified in the $PORT environment variable, but you no longer have to painfully reconfigure your web stack to stop it from listening on other ports. In other words, if it worked in your local environment, there is now one less reason it might break on Heroku.
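To make the contract concrete: a web process must bind to the port named in $PORT to receive routed traffic, but under the new model it is free to open additional ports for internal use. A minimal sketch in Ruby (the fallback of 5000 for local development is our own assumption, not a Heroku default):

```ruby
require "socket"

# Heroku routes external requests only to the port named in $PORT.
# Locally, $PORT is usually unset, so fall back to an arbitrary
# development port (5000 is our own choice here).
port = Integer(ENV.fetch("PORT", "5000"))

# Bind on all interfaces; only this port is reachable from outside the dyno.
server = TCPServer.new("0.0.0.0", port)
puts "Listening on port #{port}"

# The process may also listen on other ports (say, for an internal
# metrics endpoint); they simply are not routable from the outside.
server.close
```

A real web server would run an accept loop here; the point is only that the port comes from the environment rather than from your config.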

Challenges

You are not required to make any changes to your applications for them to work with the new configuration. We have gradually rolled the networking update out over the last month, and at this point it is the default for all new and existing applications.

Along the way we ran into some interesting problems in the underlying stack. Some library code behaves in unexpected ways when running on an OS with a high number of virtual network interfaces.

For example, listing network interfaces in Java using OpenJDK fails if any one network interface's index is greater than 255. We identified this problem during the gradual rollout and updated the buildpacks with a custom patched OpenJDK build so your applications would not be affected.
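The Java failure above came from enumerating interfaces; other runtimes do the same enumeration through getifaddrs(3). As a point of comparison (not the patched code itself), Ruby's standard library exposes that call directly, which is a quick way to see how many interfaces a host presents:

```ruby
require "socket"

# Socket.getifaddrs wraps getifaddrs(3) and returns one Ifaddr
# object per address on each network interface.
ifaddrs = Socket.getifaddrs

# Collect the distinct interface names (e.g. "lo", "eth0").
names = ifaddrs.map(&:name).uniq
puts "#{names.length} interface(s): #{names.join(', ')}"
```

On a host with many virtual interfaces, a list like this gets long, which is exactly the situation that exposed the OpenJDK limit.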

One problem we're still working to resolve is that Raindrops is not functional with the new networking stack. This is due to a Linux kernel bug. The bug has been addressed in the upstream kernel sources, but we’re still waiting for it to show up in the branch that Heroku runs.

Check the Dev Center article on dynos for additional details.

Proven: Customer interviews save you time and money

I’m on my way home from MicroConf 2013,
having learned a lot and having had a lot of fun. This was the third
year Rob and Mike have put on the conference, and the third year that it
has been an awesome experience. As
with the previous years, the speakers and the attendees were bright,
informative, friendly, motivated, and motivating. If you run a
bootstrapped biz, or are thinking about running one, MicroConf is the
place to be to really increase your motivation for and your knowledge
about running your business.

As an aside, I’m looking forward to
BaconBizConf at the end of
this month for the same reasons — I have no doubt it will also be a
great place to be for those who are interested in getting better at
making money. 🙂

I could write several blog posts about terrific takeaways from this
conference, but in this one I want to focus on the urging (especially by
Hiten Shah and Mike Taber)
to focus on customer development when starting and growing your business.
This topic had a particular impact on me at this time because it’s been
on my mind the last few weeks.

A couple of weeks ago I had the privilege of attending
the Switch Workshop put on by the Rewired Group and hosted by Jason
Fried of 37signals. That one-day workshop (which I also highly
recommend) was entirely focused on
how people make purchasing decisions, and how understanding that process
can help you find the right customers and give them the value for which
they are searching. Going through that workshop really got
me in the mindset of putting myself in my customer’s shoes (and head),
and letting that drive the decisions I make about my business. I had
been interested in customer development before, but that workshop really
sold me on just how effective it is to talk to customers about how and
why they’ve made the decisions they have made.

I work with entrepreneurs quite a bit in my consulting business (in
fact, I work almost exclusively with people who have an idea and are
wondering how they can convert that idea into a business), and all too
often I see a lack of diligent customer research cause time and money to
be wasted, building a product or service that not enough people really
want. That’s the bad news. But the good news is that this particular
problem is actually easy to overcome! And, even better, it doesn’t have
to take a lot of time or money! And, best of all, it’s even fun! That
workshop, and the emphasis that Hiten and Mike placed on really
understanding your customer, gave me the information and motivation
I needed to implement a process to solve this problem for my own
business and help others who face this very common problem.

The super-simplified, fun-to-follow and obscenely-effective process
can be broken down into two steps. First, follow Hiten’s advice, and
start with a hypothesis, which I’ll paraphrase a bit:

We believe that (some type of customer / customer segment) has a
problem (with something or doing something)

Once you fill in the blanks with a very specific description of who you
think your ideal customer is and of the job they want done, you are
ready to move on to step number two (and this
is where it gets fun): talking to those customers. But here’s the
trick: Don’t talk about you! Talk about them! Talk about who they are
and what they do (thus making sure they really are your target customer,
and/or helping you learn about who your target customer is). Then, talk
about what they are doing today to solve the problem that you are
thinking of addressing. Ask them how they came to the solution that
they are using, or how they confronted the problem on which you are
focused. If all goes well, you’ll end up talking very little about the
product or service idea in your head, and instead you’ll end up hearing
a lot about what they do and how they do it, which will give you such
good insights that you’ll find yourself smiling as they are talking.

Of course this process isn’t limited to just new ideas, products, or
services. You can use this same process when working on a new feature
for your current product, or when trying to find out why what you are
currently offering isn’t quite as successful as you’d like it to be.

Now, you might be wondering how I know this process is worthwhile and
fun. It’s because I’ve done it myself, of course! 🙂 And I can
certainly attest that it works wonders. In fact, I got to practice the
process while still at MicroConf, since some of the attendees are
ideal customers for my Rails application monitoring service,
Honeybadger. There’s a new feature I’ve
been thinking about the past few weeks, and I wanted to make sure I was
on the right track with it. I had a hypothesis that Rails developers
would like a better way to do activity X than the way they are currently
doing it. So I found a few Rails developers that I had already been
talking to during the conference, and I asked them one question: When
do you do activity X and why? From that one simple question, and a short
conversation that followed it, I realized that I was indeed on to
something with my planned feature. More importantly, what I heard from
my customers let me know that my planned approach was actually a bit
flawed. And better yet, I realized that the approach that would
better solve the problem (as they saw the problem) would actually be
easier for me to implement than my initial plan. I was thrilled!

I hope my small and simple experience of doing customer development —
and particularly interviewing customers — has helped persuade you to get
out there and do it yourself, if you aren’t already doing it.

If you think this sounds interesting, and you’d like some help saving time
and money while using this process to help build your product or your next
feature, do get in touch. I’ll be happy to
answer your questions or provide whatever help I can.

Crafting Rails Applications AND PragPub Magazine

Crafting Rails Applications in beta AND PragPub Magazine