When investing in cryptocurrency, security is a huge concern. In 2014, Mt. Gox, then a famous online bitcoin exchange, was the target of a hack in which 850,000 bitcoins were stolen. For many investors, this was a wake-up call: keeping your money on an online exchange is not a smart move, and the best way to protect your investment is to be proactive. There are many strategies for storing cryptocurrency, and one of the most secure is cold storage. Cold storage involves generating and storing your coins' private keys offline. While it might not be ideal to keep all of your cryptocurrency offline, this option ultimately provides the best security.
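To make the idea concrete, here is a minimal sketch of the "generate offline" half of cold storage, using only Python's stdlib `secrets` module. This is an illustration of the concept, not MyEtherWallet's method: a real Ethereum wallet would additionally derive a public address from this key via secp256k1 and Keccak-256, which is not shown.

```python
import secrets

# Generate a 256-bit private key entirely offline -- no network calls,
# nothing ever touches a server. Run on an air-gapped machine, print or
# write down the result, and you have the essence of a paper wallet.
private_key = secrets.token_hex(32)  # 32 random bytes -> 64 hex characters

print(private_key)
```

Because `secrets` draws from the operating system's CSPRNG, the key is suitable for cryptographic use, unlike `random`.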
Like many others, I too had kept my investments on online exchanges (mainly for convenience). After a few months of investing and seeing some positive gains, I decided it was finally time to move my assets off of
Continue reading “Creating A Paper Wallet for Your CryptoCurrency with MyEtherWallet”
2017 Books A Plenty
At long last, the 2017 books that made me happy/recommendations post. Did you miss me?
This year, I’m doing it all in one post, because if you are going to write 4,000 words, it’s best to get it all in at once; that’s just science.
The rules are:
- These are all books I read in 2017
- That I liked
- The books are organized into arbitrary groups, thanks to some weird coincidences: I read a number of, say, unusual time-travel books this year.
- Within each category, books are alphabetical by title.
- The order of the categories is arbitrary
- Links go to the Amazon Kindle versions.
For each book this year, I tried to add a Recommended If You Like.
Weird Portal Fantasies
For some reason, I read a lot of revisionist
Continue reading “Books I Liked in 2017, All In One Part”
A view is a stored query whose results can be treated like a table. Note that it is the query that is saved, not the results of the query. Each time you use a view, its associated query is executed.
A related concept is that of the common table expression or CTE. A CTE can be thought of as a short-lived view; you can only use it within the query in which it appears (you can refer to it multiple times, however).
Let's say that you're the SQL-savvy owner of a store that sells kits for building robots. On your site, customers are guided through the process of selecting all of the components needed to build a quadrupedal robot, i.e. getting the right motors, microcontrollers, sensors, batteries, grippers, etc.
Robots have lots of component parts and you have to make sure that the correct components for
Continue reading “Fun with Views and CTEs”
Binary search might be the most well-known search algorithm out there. If you went to school for computer science, you've probably heard the fundamentals of binary search hundreds of times. Chances are, you've been asked about binary search while interviewing for a development job – and for good reason. Binary search is a powerful algorithm: compared to a linear search it performs very well, and the performance benefit grows with the size of the data set. A few weeks ago, I got the chance to use it in a live application, but it had been a while since I'd worked with the algorithm, so I had to brush up on the implementation.
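As a refresher, here is the classic algorithm sketched in Python (the post itself is about Ruby's `bsearch`, but the logic is identical in any language): each iteration halves the search window, giving O(log n) comparisons versus O(n) for a linear scan.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or None if absent.

    Requires sorted_items to already be sorted -- binary search's speed
    comes entirely from exploiting that ordering.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1  # target can only be in the upper half
        else:
            hi = mid - 1  # target can only be in the lower half
    return None

print(binary_search([2, 5, 8, 13, 21, 34], 13))  # 3
print(binary_search([2, 5, 8, 13, 21, 34], 4))   # None
```

In Ruby you would typically reach for the built-in `Array#bsearch` rather than hand-rolling this, which is exactly what the post goes on to discuss.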
My pair and I were working on a class in one of our client's applications. This class used a lot of
finds and other linear search-esque code to check for items in sorted collections. Being the
Continue reading “Binary Searching and Ruby’s bsearch Method”
The following is the story of how Randall Degges created a simple API to solve the common problem of external IP address lookup, and how he scaled it from zero to over 10,000 requests per second (30B/month!) using Node.js and Go on Heroku.
Several years ago I created a free web service, ipify. It is a highly scalable IP address lookup service. When you make a GET request against it, it returns your public-facing IP address. Try it out yourself!
I created ipify because, at the time, I was building complex infrastructure management software and needed to dynamically discover the public IP address of some cloud instances without using any management APIs.
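The service's contract is simple enough to sketch in a few lines. The following is a hypothetical minimal version written as a WSGI app for illustration only; it is not ipify's actual code, which (per the article) is written in Node.js and Go.

```python
def app(environ, start_response):
    """A toy ipify-style endpoint: GET it, receive your public IP as text."""
    # Behind a load balancer, the original client IP arrives in the
    # X-Forwarded-For header; otherwise fall back to the socket peer.
    forwarded = environ.get("HTTP_X_FORWARDED_FOR", "")
    ip = forwarded.split(",")[0].strip() or environ.get("REMOTE_ADDR", "")
    body = ip.encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# To serve it locally (hypothetical):
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, app).serve_forever()
```

The real engineering story, of course, is not the handler but making something this small survive 10,000 requests per second, which is what the article digs into.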
When I searched online for freely
Continue reading “Scaling ipify to 30 Billion Requests and Beyond on Heroku”
Today, we’re excited to announce a major update to Heroku Postgres with a new lineup of production plans. These plans are the first component of Heroku Postgres PGX, the next generation of our managed Postgres solution.
PGX Plans introduce larger database sizes, more generous resource allocations, and a broader set of options to suit your needs and to help your applications scale more smoothly. PGX Plans are generally available as of today, and all new Postgres databases will be created on our latest generation of Postgres infrastructure. Under the hood, we've upgraded the CPU, memory, storage, and networking aspects to ensure your Postgres database is running smoothly at scale.
To take a look at which of your Heroku Postgres databases can take advantage of PGX Plans now and how, go to data.heroku.com.
Our new lineup of 8 plan levels offers gradual
Continue reading “Heroku Postgres PGX: Bigger Databases, Improved Infrastructure, Same Price”
2017 was a great year for Heroku and our users. We want to thank each of you for your feedback, beta participation, and spirit of innovation, which inspires how we think about our products and evolve the platform.
In the past year, we released a range of new features to make the developer experience even more elegant. We bolstered our existing lineup of data services while providing security controls for building high compliance applications on the platform.
With that, we’d like to take a moment and share some of the highlights from 2017. We hope you enjoy it, and we look forward to an even more exciting 2018!
Run tests with zero queue time on every push to GitHub using a low-setup visual test runner that’s integrated with Heroku Pipelines for strong dev/prod parity.
Heroku Automated Certificate Management handles
Continue reading “The 2017 Heroku Retrospective: Advancing Developer Experience, Data, and Trust”
We are excited to announce that the new Heroku Partner Portal for Add-ons is now generally available.
The new portal offers an improved partner experience for building, managing, and updating Heroku add-ons. Our goal is to create a workflow that will give you more freedom and enable you to bring your add-ons to market more easily.
The new portal has been organized into a simple, elegant interface that is similar to the rest of Heroku's products. In each section, we've made more functionality available via the portal interface, where in the past emails or support tickets might have been necessary. This release brings a more visual approach as well as greater focus to creating and managing key aspects of your add-on offerings such as Marketplace Listing, Feature Plans, and Reports.
The marketplace listing section of the portal is where you create or edit content for your add-on’s listing
Continue reading “Announcing the New Heroku Partner Portal for Add-ons”
I can’t recall having done a year-in-review type of blog post before,
but when Patrick suggested it recently, it seemed like a great idea, so
I thought I’d give it a shot.
In short, 2017 was a great year! 🙂 I moved all of our servers from a
colocation facility to AWS in January, which helped me sleep a lot
better at night. Over the course of the year I continued to improve our
infrastructure, and we now have a very reliable and self-healing system.
Nearly everything we do (application, search, and database servers) is
self-managed, so it’s been fun to level-up my distributed system skills.
In December I put my AWS skills (literally) to the test by earning my
first AWS certification: AWS Certified Developer – Associate.
I also spent some time working on developing my marketing skills by
taking The Marketing Seminar by Seth Godin. The Continue reading “2017 in review”
Yesterday, researchers disclosed security vulnerabilities that use side-channel analysis of speculative execution on modern computer processors (CVE-2017-5715, CVE-2017-5753, and CVE-2017-5754).
Heroku’s Product Security team follows emerging trends, and partners closely with the research community. We invest heavily in facilitating conversations regarding vulnerabilities and keeping our customers safe via community partnerships.
In the case of emerging and recently-announced vulnerabilities (including those embargoed or leaked to the press), we have a proven methodology for ingesting, processing, and prioritizing mitigation work. Our team utilizes these methods to address these vulnerabilities as material or actionable information is made available.
Our Security and Platform teams are working closely with AWS and Canonical (makers of the Ubuntu Linux operating system) to investigate and patch any affected systems related to the Meltdown and Spectre announcements. If customer impact or coordination is required, we will post additional information via Heroku Status, DevCenter ChangeLog, or provide instructions
Continue reading “Meltdown and Spectre Security Update”
Designing scalable, fault-tolerant, and maintainable stream processing systems is not trivial. The Kafka Streams Java library, paired with an Apache Kafka cluster, reduces the amount and complexity of the code you have to write for your stream processing system.
Unlike other stream processing systems, Kafka Streams frees you from having to worry about building and maintaining separate infrastructural dependencies alongside your Kafka clusters. However, you still need to worry about provisioning, orchestrating, and monitoring infrastructure for your Kafka Streams applications.
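On Heroku, much of that operational surface shrinks to a process declaration. As a sketch (the jar name here is invented; your build tool's output will differ), a Kafka Streams application packaged as an uber-jar could be declared in a Procfile as a worker process:

```
worker: java -jar target/streams-app-all.jar
```

Heroku then handles running, restarting, and scaling that process, while Apache Kafka on Heroku provides the cluster the topology connects to.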
Heroku makes it easy to deploy, run, and scale your Kafka Streams applications by supporting buildpacks for a variety of Java implementations and by offering a fully-managed Kafka solution for handling event streams. That way, you can use the Heroku Runtime alongside Apache Kafka on Heroku to manage your Kafka Streams applications and focus on building them. Kafka Streams is supported on Heroku with
Continue reading “Kafka Streams on Heroku”
Today, we're happy to announce full support for PostgreSQL 10, bringing the full slate of PostgreSQL 10 features to our managed Postgres solution after a successful two-month beta period. PostgreSQL 10 is now the default version for all newly provisioned Heroku Postgres databases. All Postgres extensions, tooling, and integration with the Heroku developer experience are ready to use, giving you the power of PostgreSQL 10 with the ease and usability of Heroku for building data-centric applications.
We'd like to re-emphasize a few features – among the many released in Postgres 10 – that we are particularly excited about.
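One of those headline features is native declarative partitioning. A minimal sketch in PostgreSQL 10 syntax (the table and column names here are hypothetical, chosen for illustration):

```sql
-- The parent table declares the partitioning scheme; it stores no rows itself.
CREATE TABLE events (
    id         bigserial,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

-- Each partition covers a range of values; inserts into the parent are
-- routed to the matching partition automatically.
CREATE TABLE events_2018_01 PARTITION OF events
    FOR VALUES FROM ('2018-01-01') TO ('2018-02-01');
```

Before PostgreSQL 10, achieving this required table inheritance plus hand-written triggers or rules; the declarative form moves that routing into the database engine itself.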
A pattern we often see in databases in our fleet is one or two tables growing much larger and faster than the rest of the tables in the database. Query times within the application start to rise, bulk loads take longer, and creating indexes
Continue reading “PostgreSQL 10 Generally Available on Heroku”
Jekyll, the static website generator written in Ruby and popularized by GitHub, is a great candidate for being run on Heroku. Originally built to run on GitHub Pages, running Jekyll on Heroku allows you to take advantage of Jekyll’s powerful plugin system to do more than convert Markdown to HTML. On my blog, I have plugins to download my Goodreads current and recently read books and to generate Open Graph images for posts. That said, it’s not straightforward to get up and running on Heroku without using
jekyll serve to do the heavy lifting.
jekyll serve uses WEBrick, Ruby's built-in, single-threaded web server, but a public site should use a web server better suited for production, like nginx.
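One common approach (a sketch of the general idea, not necessarily the exact setup this article lands on) is to run `jekyll build` at deploy time and then serve the generated `_site` directory with a Rack static file server via a `config.ru`:

```ruby
# config.ru -- serve the Jekyll-built _site directory with Rack's static
# file middleware instead of `jekyll serve`.
require "rack"

use Rack::Static,
  urls: [""],          # match every path
  root: "_site",       # Jekyll's build output directory
  index: "index.html"  # serve index.html for directory requests

# Fallback for paths with no matching file.
run lambda { |env|
  [404, { "Content-Type" => "text/plain" }, ["Not Found"]]
}
```

Pointing a production-grade server (or Heroku's routing layer plus a Rack-compatible server like Puma) at this keeps WEBrick out of the serving path entirely.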
We’ll start from the very beginning. You’ll need Ruby and Bundler installed.
I like ruby-install and chruby as my Ruby installer and switcher.
This is the platform-agnostic way
Continue reading “Jekyll on Heroku”