Sunday, September 9, 2012

EC2 Reserved Instance "Break Even" Points

Updated July 21, 2013 with the latest EC2 and RDS prices:

While trying to figure out the best deal for some EC2 instances we have running continuously (and inspired by Jordan Sissel's EC2 Reserved vs. On-Demand Chart), I charted the EC2 pricing curves for the various reserve options (light/medium/heavy utilization and 1- or 3-year terms). With reserve pricing, you're basically buying the right to run a specific instance type at a reduced hourly rate for a 1-year or 3-year term. For example, if you buy one "light-utilization" 1-year term for a small EC2 instance in your us-east-1a availability-zone, you pay $61 up front; and whenever you run at least one small instance in us-east-1a during the next 365 days, Amazon will automatically apply the reduced hourly rate to your usage for one of those instances (billing you at $0.034 an hour instead of $0.060 an hour for the instance).

Note that the "heavy" plans are a little bit different from the "light" and "medium" plans — Amazon will bill you each month for the full complement of hours under the "heavy" plan, regardless of whether or not you actually use all the hours. In contrast, with the "light" and "medium" plans, Amazon will bill you only for the hours during which you had an applicable instance running. For example, if you've bought a "light" 1-year plan, and run an applicable instance for 100 hours in a 30-day month, Amazon will bill you $3.40 ($0.034 x 100 hours). But if you've bought a "heavy" 1-year plan, and run the same instance the same 100 hours, Amazon will bill you $10.08 ($0.014 x 24 hours x 30 days) instead.
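As a quick sanity check, the arithmetic from that example can be written out (rates and hours taken from the small-instance example above):

```python
light_hourly = 0.034  # "light" 1yr reduced rate, small instance
heavy_hourly = 0.014  # "heavy" 1yr reduced rate, small instance

hours_used = 100          # hours the instance actually ran this month
hours_in_month = 24 * 30  # a "heavy" plan bills every hour of the month

light_bill = light_hourly * hours_used      # billed only for hours used
heavy_bill = heavy_hourly * hours_in_month  # billed whether used or not

print(round(light_bill, 2))  # 3.4
print(round(heavy_bill, 2))  # 10.08
```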

Amazon's characterization of their reserve options as "light utilization", "medium utilization", and "heavy utilization" is also confusing, since you can use a "light utilization" plan heavily, and a "heavy utilization" plan lightly — you just have to think of the names as suggestions for how the plans might be most useful to you, rather than as restrictions on their use. And when you're using a number of instances of a certain type all the time, what really matters is how long you expect to continue to use that same number of instances in the future: one month, two months, a year, etc.

The Chart

So this is how I use the chart I made (although it shows the "small-instance" costs for the us-east-1 zone, all the EC2 reserve pricing follows the same basic curves): I figure out how likely we are to continue using an instance type (either in its current role, or in some future role) for which we're currently paying on-demand prices, try to put that likelihood into months (like "there's a good chance I'll still be using it in 2 months, and probably will be in 6 months, but I might not be in a year"), and then take a look at where those months fall on the chart:

EC2 price points

The X axis is months, and the Y axis is dollars. Here's the legend:

Option Name   Term Years   $ Upfront   $ Hourly
On Demand     0              0.00      0.060
Light 1yr     1             61.00      0.034
Light 3yr     3             96.00      0.027
Medium 1yr    1            139.00      0.021
Medium 3yr    3            215.00      0.017
Heavy 1yr     1            169.00      0.014
Heavy 3yr     3            257.00      0.012

You can view the full chart separately (it's a javascript chart with fancy tooltips built using elycharts). The chart shows the cumulative cost after each month; for example, the first light-green dot shows that after one month of running a small instance 24/7, you'll have spent a total of $85.82 under the light-utilization 1-year term plan (from the upfront cost of $61, plus the hourly cost of $0.034 for one month). I modeled the "heavy" options as having a fixed price over their term — while technically their price isn't all upfront (Amazon bills you for the hourly component of those plans month-by-month), you'll eventually have to pay for all their hours, regardless of whether you use them or not (so your total cost is the same whether you use 1 hour or 10,000).

The Bottom Line

The chart shows that your best deal for an instance that you use 24/7 is "on demand" if you use it less than three months — but if you use it any longer, you would have been better off buying a "light utilization" plan. Here's a table of the best deal for each length of time:

Months    Best Option
00 - 03   On Demand
03 - 07   Light 1yr
07 - 10   Light 3yr
10 - 12   Heavy 1yr
12 - 16   Light 3yr
16 - 29   Medium 3yr
29 - 36   Heavy 3yr

As an even rougher rule of thumb: less than half-a-year, "on demand"; half-a-year to a year-and-a-half, "light 3yr"; and over a year-and-a-half, "medium 3yr".
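The break-even table can also be reproduced with a short script. This is a rough sketch, not Amazon's actual billing logic: it assumes roughly 730 billable hours per month of 24/7 use (which matches the $85.82 first-month figure above), models the "heavy" plans as a fixed full-term cost, and buys a new reservation whenever the previous term expires:

```python
import math

HOURS_PER_MONTH = 730  # ~24/7 usage per month

# (name, term in months, upfront $, hourly $, fixed?) -- small instance,
# us-east-1 prices from the legend above; "heavy" plans bill every hour
# of the term whether used or not, so they're modeled as a fixed cost
PLANS = [
    ("On Demand",   1,   0.00, 0.060, False),
    ("Light 1yr",  12,  61.00, 0.034, False),
    ("Light 3yr",  36,  96.00, 0.027, False),
    ("Medium 1yr", 12, 139.00, 0.021, False),
    ("Medium 3yr", 36, 215.00, 0.017, False),
    ("Heavy 1yr",  12, 169.00, 0.014, True),
    ("Heavy 3yr",  36, 257.00, 0.012, True),
]

def cost(plan, months):
    """Cumulative cost of one instance running 24/7 for `months` months,
    buying a new reservation whenever the previous term expires."""
    name, term, upfront, hourly, fixed = plan
    terms = math.ceil(months / term)
    if fixed:
        # heavy plans: pay for every hour in the term, used or not
        return terms * (upfront + hourly * HOURS_PER_MONTH * term)
    return terms * upfront + hourly * HOURS_PER_MONTH * months

def best(months):
    """Name of the cheapest plan for the given number of months."""
    return min(PLANS, key=lambda p: cost(p, months))[0]

for m in (2, 5, 8, 11, 14, 20, 30):
    print(m, best(m))
```

Evaluating `best()` at points inside each range reproduces the table above.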

If you do actually use the heavy-utilization 3-year term plan for the full three years, it's a nice deal that can save you almost two thirds of what you'd pay for "on demand" over that time period (and not much more than what you'd pay a low-end VPS-hosting provider for a VPS with similar specs over the same time period) — but if you use it for less than half that time, you'll wish you had chosen any other option. Also, keep in mind that each reserved instance you buy is applicable only to a specific instance-type in a specific availability-zone — so if halfway through your term you stop using so many instances of that type, or move your usage of those instances to a different region, you'll be stuck with reserved instances you never use.

Other AWS Reserved Instances

I also included a chart for RDS instances on the same page as the full chart of EC2 instance pricing (below the EC2 chart). It looks quite similar to the EC2 chart, but if you look closely you'll see that the "break-even" points are slightly different: "light 3yr" becomes a better option than "on demand" around 3 months instead of 4, and as good as "light 1yr" in just 4 months; and "medium 3yr" becomes the best deal at just 14 months. I imagine the reserve options for other AWS services will be similar — same curves, slightly different deals.

Sunday, June 3, 2012

Using /etc/mime.types for Grails Files Downloads

I find it sorely disappointing that ServletContext#getMimeType() doesn't just use apache's standard /etc/mime.types automatically. Instead, it seems to use some smaller subset of extension mappings included in some servlet jar somewhere. Fortunately, it's easy enough to use spring's ConfigurableMimeFileTypeMap bean to get the mappings from the mime.types files of your choosing.

The way I like to set it up is to add a custom configuration property in my grails-app/conf/Config.groovy, and specify a comma-separated list of file paths to use (that way I can use the standard /etc/mime.types file, but also add in my own custom mime.types file if I need some mappings not in the standard one):

mime.types = '/etc/mime.types'

Then, in grails-app/conf/spring/resources.groovy, I define a ConfigurableMimeFileTypeMap bean (I call it fileTypeMap, but you can call it whatever):

fileTypeMap(org.springframework.mail.javamail.ConfigurableMimeFileTypeMap) {
    // build a Resource for each configured mime.types path
    mappingLocations = application.config.mime.types.split(/,/).collect {
        new org.springframework.core.io.FileSystemResource(it.trim())
    } as org.springframework.core.io.Resource[]
}

Finally, in the controller actions where I want to stream out a file, I use the ConfigurableMimeFileTypeMap bean's getContentType() method to calculate the file's content type from its name:

def fileTypeMap

def download = {
    try {
        // some logic here to locate the appropriate file to stream
        def file = new File('/tmp/foo.txt')

        // use /etc/mime.types to determine the file's content type from its extension
        def contentType = fileTypeMap.getContentType(file.name)

        // must special-case text/html
        if (contentType == 'text/html')
            return render(contentType: contentType, text: file.text)

        // set up standard (inline) file-download headers
        response.contentType = contentType
        response.contentLength = file.length() as Integer
        response.setHeader 'Content-Disposition', "inline; filename=\"${file.name}\""

        // stream the file
        file.inputStream.withStream { response.outputStream << it }

    } catch (FileNotFoundException e) {
        response.sendError 404
    }
}

Sunday, April 15, 2012

Fastest Ext4 Options

When I got a new external hard drive for storing mp3s, recorded TV shows, etc, I wanted to know how to set it up most optimally (using the ext4 filesystem). I ended up following Luca Spiller's Ext4 Options for a Media Drive for the most part, but with a few tweaks:

Formatting the Drive

After plugging in the hard drive and running sudo fdisk -l to check what name the OS had assigned it (/dev/sdb1), I formatted the drive with the following options: -m 0 to create no extra room for root (I don't intend to use it as a boot drive); and -L bb to assign it a label of bb (so I can reference it by label in my /etc/fstab):

sudo mkfs.ext4 -m 0 -L bb /dev/sdb1
Configuring the Drive Options

Then I updated my /etc/fstab configuration with an entry for the new drive. Beyond the filesystem permission options of user, rw, exec, and suid (the order of which is significant), and the noauto option (to ignore the drive when booting), I added some options that make writing data less safe — but faster. Use man mount to get a brief description of these and all other possible options:

LABEL=bb /mnt/bb ext4 user,rw,exec,suid,noauto,noatime,nobh,nobarrier,commit=60,data=writeback,journal_async_commit 0 0

If this were an internal hard drive, or one that I intended to have connected at all times, I would have skipped the permissions and noauto options (so as to use the default permissions and allow it to auto-mount at boot time), and would have just specified the performance options:

LABEL=bb /mnt/bb ext4 noatime,nobh,nobarrier,commit=60,data=writeback,journal_async_commit 0 2
Creating the Drive Mount Point

In /etc/fstab I had configured the drive's mount point as /mnt/bb, so I created it and set its owner to myself (so I could mount the drive as a regular user, since I had included the user option in the /etc/fstab config):

sudo mkdir /mnt/bb && sudo chown justin:justin /mnt/bb
Mounting the Drive

Now whenever I plug in the drive, I can mount it as a regular user; either via its label:

mount -L bb

Or via its mount point:

mount /mnt/bb

Jetty 6 HTTPS Redirects

Another Jetty 6 trick I have had to use several times is patching it to enable redirects with the correct URL scheme when proxied via HTTP behind a webserver using SSL. Because the connection between the webserver and jetty is just plain HTTP (without SSL), jetty will send the redirect with a plain http scheme, like (with example.com standing in for your host):

Location: http://example.com/my/page

But when the webserver is using SSL, what I really want is for it to send the redirect with an https scheme:

Location: https://example.com/my/page

X-Forwarded-* Headers

As described in jetty's reverse proxy docs, by setting the forwarded property in jetty's connector configuration, you can get jetty to use the server name/port and remote client IP-address from the X-Forwarded-Host and X-Forwarded-For headers that apache's mod_proxy includes automatically.

One header that mod_proxy does not include automatically, however, is X-Forwarded-Proto. You have to add that manually, via the RequestHeader directive in your apache config (wherever you included the ProxyPass directive that forwards requests to jetty):

ProxyPass http://localhost:8080/
ProxyPassReverse http://localhost:8080/
RequestHeader set X-Forwarded-Proto "https"

Patching Jetty 6

For jetty 7/8, that would be sufficient. With the above configuration, they'll send the correct redirects. But jetty 6 doesn't use the X-Forwarded-Proto header, so you have to create your own connector class to handle it. This is what I've done (for the nio connectors):

package com.pitchstone.lib.jetty;

import java.io.IOException;
import org.mortbay.io.EndPoint;
import org.mortbay.jetty.HttpFields;
import org.mortbay.jetty.Request;
import org.mortbay.jetty.nio.SelectChannelConnector;

/**
 * Jetty nio connector.
 * Adds the ability to use the 'X-Forwarded-Proto' header
 * to set the request 'scheme' property.
 */
public class NioConnector extends SelectChannelConnector {

    private String _forwardedProtoHeader = "X-Forwarded-Proto";

    public NioConnector() {
        super();
    }

    // AbstractConnector

    protected void checkForwardedHeaders(EndPoint endpoint, Request request)
            throws IOException {
        super.checkForwardedHeaders(endpoint, request);

        HttpFields httpFields = request.getConnection().getRequestFields();
        String forwardedProto = httpFields.getStringField(getForwardedProtoHeader());
        forwardedProto = getLeftMostValue(forwardedProto);

        if ("http".equals(forwardedProto) || "https".equals(forwardedProto))
            request.setScheme(forwardedProto);
    }

    // impl

    public String getForwardedProtoHeader() {
        return _forwardedProtoHeader;
    }

    public void setForwardedProtoHeader(String x) {
        _forwardedProtoHeader = x;
    }
}

To use it, compile it, jar it up, and add the jar to jetty's lib/ext directory (/usr/share/jetty/lib/ext by default under ubuntu/debian). Then configure jetty.xml to use it, replacing org.mortbay.jetty.nio.SelectChannelConnector with this custom version:

<Call name="addConnector">
    <Arg>
        <New class="com.pitchstone.lib.jetty.NioConnector">
            <Set name="host"><SystemProperty name="jetty.host" /></Set>
            ...
            <Set name="forwarded">true</Set>
        </New>
    </Arg>
</Call>

With that configuration and patch in place, jetty will now send redirects with the https scheme. It also will map these X-Forwarded-* headers to the ServletRequest API like so:

Header              ServletRequest Method
X-Forwarded-Host    getServerName(), getServerPort()
X-Forwarded-For     getRemoteAddr(), getRemoteHost()
X-Forwarded-Proto   getScheme()

Thursday, January 5, 2012

Log4j + Jetty 6 + Fedora

Since this is the second time I've had to set up a (Grails) app that uses Log4j with the version of Jetty 6 that comes with Fedora (specifically Fedora 14), I figured I'd document how I had to change the jetty configuration to get the app working right:

Add jsp 2.1 to jetty's lib

Download the latest Jetty 6 release (directly from the Codehaus), and copy its lib/jsp-2.1 directory into the /usr/share/jetty/lib directory you get with Fedora (so that you have a new /usr/share/jetty/lib/jsp-2.1 directory to go along with the existing /usr/share/jetty/lib/jsp-2.0 directory). This should solve your SLF4J problems.

Remove commons-logging.jar from jetty's classpath

The Fedora jetty daemon ultimately runs the /usr/bin/djetty script to start jetty. Edit it to remove the Commons Logging jar from the classpath, and to allow additional java options:

#!/bin/bash

if [ -z "$JAVA_OPTIONS" ]
then
    export JAVA_OPTIONS="-Xmx1500m -XX:MaxPermSize=500m"
fi

if [ -z "$JETTY_CLASSPATH" ]
then
    export JETTY_CLASSPATH=""
fi

if [ -z "$JETTY_PID" ]
then
    export JETTY_PID=/dev/null
fi

if [ -z "$JETTY_PORT" ]
then
    export JETTY_PORT=8088
fi

export JETTY_HOME=/usr/share/jetty

if [ -z "$JETTY_HOME" ]
then
    JETTY_HOME_1=`dirname "$0"`
    JETTY_HOME_1=`dirname "$JETTY_HOME_1"`
    JETTY_HOME=${JETTY_HOME_1}
fi

cd $JETTY_HOME

#exec /usr/bin/java -Djetty.class.path=/usr/share/java/commons-logging.jar -Djetty.port=$JETTY_PORT -jar start.jar etc/jetty-logging.xml etc/jetty.xml 2>/dev/null &
exec /usr/bin/java -Djetty.class.path="$JETTY_CLASSPATH" -Djetty.port=$JETTY_PORT $JAVA_OPTIONS -jar start.jar etc/jetty-logging.xml etc/jetty.xml 2>/dev/null &

echo $! >$JETTY_PID

Now grails' Log4j configuration should take effect.

Remove jetty's javamail.jar

Since the grails apps I've deployed included javamail jars in their own war, I get rid of the /usr/share/jetty/lib/naming/[javamail].jar (symlink) — otherwise sending mail via SMTP just fails silently.

Remove jetty's sample apps

Delete the sample apps in /usr/share/jetty/contexts and /usr/share/jetty/webapps. There's no reason to keep them, and I usually want my apps to use the root context path in their place anyway.