Friday, December 10, 2010

Long Live XML

Reading Norman Walsh's Deprecating XML article (and others, like James Clark's XML vs the Web), on the topic of web apps removing XML from their APIs, made me reflect a little about what's happened to XML over the past decade. I began the decade working for a dot-bomb that was way into XML, envisioning a world-wide "XML backbone" of deep data-integration.

While that has not come to pass, I still see XML as having changed the world. Before XML, data exchange was almost always done either via bare-bones CSV (comma-separated values) or inscrutable binary formats. Even though CSV is at least human-readable, and binary formats can be carefully structured, there was just nothing (popular) in the 90's with more than a couple of these attributes:

  • human-readable
  • unicode built-in
  • platform/vendor-agnostic
  • structured
  • extensible

XML was never great for raw data — its real strength is structuring and annotating text — but each of the above was a big win for developers trying to share data. Having a default data-format that just worked everywhere for everything freed the developers of the 00's to think about what to do with the data, not how to send it on the wire.
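To make that contrast concrete, here's a small python sketch (with made-up sample data) of one record in each of the three formats:

```python
import csv, io, json, xml.etree.ElementTree as ET

# one record, three formats (sample data is made up)
record = {'name': 'Ada', 'language': 'français'}

# CSV: human-readable, but flat, schema-less, and mute about encoding
buf = io.StringIO()
csv.writer(buf).writerow(record.values())
assert buf.getvalue().strip() == 'Ada,français'

# XML: structured, extensible, unicode-aware
root = ET.Element('person')
for key, value in record.items():
    ET.SubElement(root, key).text = value
xml_text = ET.tostring(root, encoding='unicode')
assert xml_text == '<person><name>Ada</name><language>français</language></person>'

# JSON: structured and unicode-aware too, with less ceremony for raw data
assert json.loads(json.dumps(record)) == record
```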

In the 10's JSON will be used for data, and XML can go back to what it does best: text. So, okay, XML is dead; long live XML!

Sunday, June 20, 2010

Problem Accessing Grails Layouts

Just a quick note for posterity: If you're running a grails app (or probably any webapp) on jetty, and you configure the app to use the root (/) contextPath, and you leave jetty's default root webapp (/usr/share/jetty/webapps/root) around, you're in for a world of hurt. With grails, you'll get an obtuse 404 error page with the following message:

Problem accessing /WEB-INF/grails-app/views/layouts/my-custom-layout.gsp

(And no info in the logs.) The fix is of course to delete that vestigial root webapp.

Saturday, June 19, 2010

Cherokee Django

The Cherokee Cookbook and Yann Malet's Blog Roll post have pretty helpful directions for setting up the Cherokee webserver with Django. I found it a little tricky to set it up on a virgin Ubuntu Lucid Lynx system, though, so I thought I'd share my steps to get it going:

Install Cherokee and Django

You can get a fairly recent version of each from the Lucid repositories. Because you will compile uWSGI in a sec, you will also need to get python-dev and libxml2-dev:

$ sudo apt-get install cherokee libcherokee-mod-libssl python-django python-dev libxml2-dev

Install uWSGI

According to the cherokee docs, the best way to run django is via uWSGI. On the uwsgi home page, they do have a link to an ubuntu ppa for uwsgi. It looks pretty well maintained, and it's nice of Steve Crozz to maintain it, but I don't feel great about using someone's personal-package archive in a production environment. So I figured I would just get the code and build it myself.

Unfortunately uwsgi doesn't quite have the "forehead install" kind of make file. As I mentioned before, on ubuntu you need the python-dev and libxml2-dev debs, and you also need to specify the version of python you will use it with. These are the exact steps I took:

$ wget -nd -O /tmp/uwsgi.tar.gz
$ tar xz -C /tmp -f /tmp/uwsgi.tar.gz
$ cd /tmp/uwsgi-
$ VERSION_TO_MAKE=`python --version 2>&1 | perl -ne '/(\d)\.(\d)/; print "$1$2"'`
$ make -f Makefile.Py$VERSION_TO_MAKE
$ sudo mv uwsgi$VERSION_TO_MAKE /usr/local/bin/uwsgi
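If you'd rather not shell out to perl for that, the same two-digit version string can be computed with python itself; a hypothetical equivalent:

```python
import sys

# same idea as the perl one-liner: turn e.g. "2.6.5" into "26"
# (major and minor version digits, no dot)
version_to_make = '%d%d' % sys.version_info[:2]
assert version_to_make.isdigit()
assert version_to_make.startswith(str(sys.version_info[0]))
```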

Create the django_wsgi File

You need to have a wsgi script just like Yann described in his Blog Roll post; since I'm not all that hip to django, though, I didn't know exactly what to call it or where to put it — this is how I eventually ended up organizing my django project:

/srv
    sites
        MyProject

That is, I put the wsgi script in the root of my django project, at the same level as the settings file. The script contains exactly what Yann described (where MyProject is the name of your project):

import os
import django.core.handlers.wsgi

# init django settings
os.environ['DJANGO_SETTINGS_MODULE'] = 'MyProject.settings'

# define wsgi app
application = django.core.handlers.wsgi.WSGIHandler()

# mount this application at the webroot
applications = { '/': 'application' }
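If you want to sanity-check the wsgi plumbing without django in the mix, remember that a wsgi application is just a callable taking (environ, start_response). This toy handler (not django's) can be driven by hand, the way uwsgi would:

```python
# a toy wsgi app (not django's handler): wsgi is just a callable taking
# (environ, start_response) and returning an iterable of byte strings
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from WSGI']

# drive it by hand, capturing what the server would see
captured = {}
def start_response(status, headers):
    captured['status'] = status
    captured['headers'] = headers

result = b''.join(application({'REQUEST_METHOD': 'GET'}, start_response))
assert result == b'Hello from WSGI'
assert captured['status'] == '200 OK'
```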

Configure Cherokee

Since I'm a complete cherokee noob, the cherokee docs were a little bit thin on this point (the screencast included in the docs might have been exactly what I needed, but I can't stand to sit around and watch screencasts, so I skipped it); this is what you do:

  1. Run the cherokee admin server: $ sudo cherokee-admin
  2. Open up your web browser to the admin URL that cherokee-admin prints on startup (by default it listens on localhost port 9090), and enter the credentials that cherokee-admin spit out.
  3. Click the Virtual Servers navigation link.
  4. Click the Wizards button at the bottom of the virtual servers list.
  5. Select Platforms from the Category list, and click the Run Wizard link next to uWSGI.
  6. On the first page of the wizard, enter the full domain name of your server in the New Host Name (once you get everything working with the new virtual server added by the wizard, you may want to delete it and just re-add the uwsgi handler manually in the default virtual server).
  7. The Document Root field is irrelevant for now; I just set it to my django project's static directory.
  8. For the Configuration File field, enter the path to your wsgi script (you'll change this setting in a sec anyway, so don't sweat it too much), like /srv/sites/MyProject/
  9. For the Same logs as vserver field, I have no idea what this does, so I chose the default.
  10. Click the Submit button to finish the wizard, then go back and click the Information Sources navigation link, and select the new uWSGI 1 source you just created.
  11. The Interpreter field will contain something like this:
    /usr/local/bin/uwsgi -s -t 10 -M -p 1 -C -w /srv/sites/MyProject/
    — change it to this:
    /usr/local/bin/uwsgi -s -t 10 -M -p 1 -C --pythonpath /srv/sites -w MyProject.django_wsgi
    (where /srv/sites is the directory which contains your django project-directory (MyProject), and MyProject is the name of your project).
  12. Save it, and try navigating to your server in a browser.


Getting the right Interpreter field setting in cherokee took me a while, and the logs were of no help. What you need to do is just run the uwsgi command (/usr/local/bin/uwsgi ...) yourself in a shell (killing any existing uwsgi instances if necessary); as long as you run it with the same socket as specified in cherokee's Connection field, cherokee will try to use the uwsgi instance you spun up, instead of spinning up its own instance. When you run uwsgi from a shell, you do get all the regular logging and error messages you would expect to see from django (on the shell's stdout/stderr).

The other thing I found (which is probably pretty obvious if you know what you're doing) is that I had to give the uwsgi user (by default the same as the cherokee user, www-data) write rights on the django project. So I ended up creating a new copy of the django source from my working copy to /srv/sites/MyProject, and then making www-data the owner of that copy (sudo chown -R www-data:www-data /srv/sites/MyProject). There's probably a better way to do that, but this way worked.

Sunday, June 13, 2010

Mocking Taglibs with Grails Units

Grails taglibs are a million times better than JSP taglibs (requiring way less busywork), but they still can be tricky to unit test. Here are some tricks I've learned so far:

Basic Unit Testing

The first trick is that there's basically no official documentation for how to write even basic taglib unit tests. There is, however, a nice TagLibUnitTestCase class provided with grails that's easy to use (once you figure it out). For a simple taglib like this:

package myapp

class MyTagLib {
    // optionally use <my:tag> namespace instead of <g:tag>
    static namespace = 'my'

    def heading = { attrs, body ->
        def level = attrs.level ?: 1
        out << "<h$level>" << body() << "</h$level>"
    }
}

You can write a TagLibUnitTestCase like this:

package myapp

import grails.test.TagLibUnitTestCase

class MyTagLibUnitTests extends TagLibUnitTestCase {

    void testHeadingWithNoLevelAndNoContent() {
        tagLib.heading([:], { '' })
        assertEquals '<h1></h1>', tagLib.out.toString()
    }

    void testHeadingWithLevelAndSimpleContent() {
        tagLib.heading([level: '2'], { 'simple' })
        assertEquals '<h2>simple</h2>', tagLib.out.toString()
    }
}

The TagLibUnitTestCase class automatically determines which taglib you're testing based on the name of your test class, and sets up a stub taglib as the tagLib member of the test class. Content written to the out variable in your taglib is written to the out (StringWriter) member of this stubbed taglib.
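If the convention-over-configuration part seems magic, the name derivation is simple enough to sketch; this is an illustrative reimplementation in python, not the actual grails logic:

```python
import re

# illustrative reimplementation (not the actual grails source): derive the
# class under test from the test class name by stripping the test suffix
def class_under_test(test_class_name):
    return re.sub(r'(UnitTests|Tests|Test)$', '', test_class_name)

assert class_under_test('MyTagLibUnitTests') == 'MyTagLib'
assert class_under_test('MyTagLibTests') == 'MyTagLib'
```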

Using Codecs

If you use one of grails' encodeAs string methods (like encodeAsHTML()) in your taglib, you need to explicitly set up that codec in your unit test, using the loadCodec helper method, like this:

package myapp

import grails.test.TagLibUnitTestCase
import org.codehaus.groovy.grails.plugins.codecs.HTMLCodec

class MyTagLibUnitTests extends TagLibUnitTestCase {

    protected void setUp() {
        super.setUp()
        loadCodec(HTMLCodec)
    }
}

Another thing to note specifically about the HTMLCodec is that encodeAsHTML() escapes double-quotes, but not single-quotes — so when you output html from a taglib, make sure you always use double-quotes for attributes in your html. If you print out a variable in a single-quoted attribute (or you change your code later to print out a variable), and that variable's value came from content that your users can manipulate, you're vulnerable to xss — even though you used encodeAsHTML() to escape that variable's content. Here's a quick example of what not to do:

class MyTagLib {
    def foo = { attrs, body ->
        def bar = "' onmouseover='alert(\"xss\")'"
        out << "<a href='" << bar.encodeAsHTML() << "'>Bar</a>"
    }
}

When the taglib is used, a single-quote in the variable content "breaks-out" of the attribute, giving the attacker full access to act as the user (presumably to do something nastier than simply display an alert):

<a href='' onmouseover='alert("xss")'>Bar</a>
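You can demonstrate the double-versus-single-quote behavior outside grails, too. This python sketch mimics the codec with xml.sax.saxutils.escape (an illustrative stand-in, not the actual HTMLCodec implementation):

```python
from xml.sax.saxutils import escape

# mimic the codec's behavior: escape &, <, > and double-quote,
# but not single-quote (illustrative stand-in, not the real HTMLCodec)
def encode_as_html(s):
    return escape(s, {'"': '&quot;'})

payload = "' onmouseover='alert(\"xss\")'"
escaped = encode_as_html(payload)

assert '"' not in escaped   # double quotes are neutralized
assert "'" in escaped       # single quotes pass through untouched

# in a single-quoted attribute, the payload's quotes still close the attribute:
assert "<a href='%s'>" % escaped == (
    "<a href='' onmouseover='alert(&quot;xss&quot;)''>")
# in a double-quoted attribute, the value cannot break out
assert ('<a href="%s">' % escaped).count('"') == 2
```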

Calling Taglibs from Taglibs

The first trick for calling taglibs from other taglibs is to make sure always to return an empty string from all your taglibs:

package myapp

class MyTagLib {
    // optionally use <my:tag> namespace instead of <g:tag>
    static namespace = 'my'

    def linkToWikipedia = { attrs, body ->
        out << "<a href=\"${body().encodeAsHTML()}\">${body().encodeAsHTML()}</a>"
        ''
    }
}

Otherwise you get duplicated text or other weird values in the stubbed output-stream when you try to unit-test the taglib. When you actually call one taglib from another taglib, the second trick is to append the result from the called taglib to the output of the caller:

package myapp

class MyTagLib {
    // optionally use <my:tag> namespace instead of <g:tag>
    static namespace = 'my'

    def heading = { attrs, body ->
        def level = attrs.level ?: 1
        out << "<h$level>" << (attrs.wikipedia ? linkToWikipedia([:], body) : body()) << "</h$level>"
        ''
    }

    def linkToWikipedia = { attrs, body ->
        out << "<a href=\"${body().encodeAsHTML()}\">${body().encodeAsHTML()}</a>"
        ''
    }
}

Otherwise the output of the called taglib is discarded when you run your app in the full grails environment.

And the third trick is that when you're trying to simulate nested taglibs in a unit test, just call the nested taglibs in the container's closure, and don't do anything else — don't try to append their output to the stubbed output-stream or anything fancy like that; for example, when you're trying to simulate this GSP code:

<my:table in="mydata" var="data">
    <my:tr><my:td>${data.id}</my:td><my:td>${data.name}</my:td></my:tr>
</my:table>

Your unit test should look like this:

package myapp

import grails.test.TagLibUnitTestCase

class MyTagLibUnitTests extends TagLibUnitTestCase {

    void testTableWithDataAndTwoCells() {
        def mydata = [[id:'123', name:'Foo'], [id:'456', name:'Bar']]
        tagLib.table([ 'in': mydata, 'var': 'data' ], { m ->
            tagLib.tr([:], {
                tagLib.td([:], { "${m.data.id}" })
                tagLib.td([:], { "${m.data.name}" })
                ''
            })
            ''
        })
        assertEquals '<div class="data"><table><tbody><tr class="odd">' +
            '<td>123</td>' +
            '<td>Foo</td>' +
            '</tr><tr class="even">' +
            '<td>456</td>' +
            '<td>Bar</td>' +
            '</tr></tbody></table></div>', tagLib.out.toString()
    }
}

Note that in the example test, the closures simulating the content of the my:table and my:tr taglibs both return an empty string (so you don't get extra junk in the output, as mentioned above). Also, although when you use (this made-up and hypothetical) my:table in GSP code you can reference the data variable that my:table sets directly (like ${data.id}), in the test you have to reference it as it is actually passed — as an entry in the map passed to the tag's body closure (like ${m.data.id}, where m is the first argument of the table body closure in the example test code).

Beyond Grails Stubs

There are some things that aren't covered by the stub tagLib object provided by the TagLibUnitTestCase. For example, you can't stub the request params map, and accessing the taglib's controllerName or actionName properties fails with a giant flaming fireball.

The simplest way to deal with that is with some groovy metaprogramming. For example, say you want to test a taglib that grabs some data from the request params map:

package myapp

class MyTagLib {
    static namespace = 'my'

    def seeAlsoWikipedia = { attrs, body ->
        if (params.q)
            out << "See also <a href=\"${params.q.encodeAsHTML()}\">${params.q.encodeAsHTML()}</a> in wikipedia"
    }
}

You can use the tagLib instance's metaClass property to stub out the params map:

package myapp

import grails.test.TagLibUnitTestCase

class MyTagLibUnitTests extends TagLibUnitTestCase {

    void testSeeAlsoWikipediaWithAQuery() {
        tagLib.metaClass.params = [q: 'foo']
        tagLib.seeAlsoWikipedia([:], { '' })
        assertEquals 'See also <a href="">foo</a> in wikipedia', tagLib.out.toString()
    }
}

GMock and Expectations

There's a good deal of overlap between the GrailsUnitTestCase class (superclass of TagLibUnitTestCase) and the functionality provided by GMock (in the form of the GMockTestCase class), but GMock still can be useful when you need to stub functionality not already stubbed by TagLibUnitTestCase — and you'd like to do it just by setting up some simple expectations (ie "mocks" in TDD-correct terminology).

GMock has some pretty good documentation about how to include the GMock jar into Grails; to integrate GMock into a TagLibUnitTestCase, use the @WithGMock annotation:

package myapp

import grails.test.TagLibUnitTestCase
import org.gmock.WithGMock

@WithGMock
class MyTagLibUnitTests extends TagLibUnitTestCase {

    void testSeeAlsoWikipediaWithAQuery() {
        mock(tagLib) {
            params.returns([q: 'foo'])
        }
        play {
            tagLib.seeAlsoWikipedia([:], { '' })
            assertEquals 'See also <a href="">foo</a> in wikipedia', tagLib.out.toString()
        }
    }
}

Use GMock's mock() method to mock an object instance (in the above example, the tagLib object already stubbed initially by TagLibUnitTestCase); in the closure passed to the mock() method, set up your expectations. In the above example, the one and only expectation for the tagLib object is for its params property to be accessed; when accessed, GMock will make it return [q: 'foo']. After you've set up the mocks and expectations, everything in the play {} closure will be executed with those mocks.

The example above performs the same exact test as the example before — the only functional difference is that GMock also validates your expectation that params will be accessed.

GMock with Other Taglibs

One thing GMock is especially good for is when a taglib you want to test calls some other taglib. When unit testing, you don't really want to test that other taglib; GMock can help by replacing that other taglib with a mock which validates your expectation that the other taglib was called with the right attributes — but without doing anything other than returning some fixed content.

For example, say you've got a taglib that prints the label, content, and errors for a given property, rendering the errors with grails' standard g:hasErrors and g:renderErrors tags:

package myapp

class MyTagLib {
    static namespace = 'my'

    def propertyLine = { attrs, body ->
        def beanAndField = attrs.findAll { ['bean','field'].contains(it.key) }
        out << '<div class="property">'
        out << "<label>${attrs.label}</label>"
        out << '<div class="content">' << body() << '</div>'
        out << g.hasErrors(beanAndField, {
            out << '<div class="errors">' << g.renderErrors(beanAndField, {''}) << '</div>'
            ''
        })
        out << '</div>'
        ''
    }
}

You can pretty much assume that g:hasErrors and g:renderErrors have already been tested thoroughly and are working — you just need to test that you're correctly passing the bean and field parameters to them. With the help of GMock, you can test it like this:

package myapp

import grails.test.TagLibUnitTestCase
import org.gmock.WithGMock

@WithGMock
class MyTagLibUnitTests extends TagLibUnitTestCase {

    void testPropertyLineWithBeanAndField() {
        def bean = new Expando()
        mock(tagLib.g) {
            hasErrors([bean:bean, field:'foo'], match{ it instanceof Closure }).returns('_errors_')
        }
        play {
            tagLib.propertyLine([bean:bean, field:'foo'], { '' })
            assertEquals '<div class="property">' +
                '<label>null</label>' +
                '<div class="content"></div>' +
                '_errors_' +
                '</div>', tagLib.out.toString()
        }
    }
}

The GMock expectation in the above example validates that g:hasErrors is called with the right bean and field properties, and is passed a closure. It also makes g:hasErrors return '_errors_', so you can validate that the content g:hasErrors would normally generate is in the right place in the context of all the other output generated by your taglib.

Built-in Groovy Mocking

Groovy also has built-in stubbing and mocking functionality, via its StubFor and MockFor classes. These are most useful when you want to stub an object that your taglib creates itself, like the TokenGenerator class in this example:

package myapp

class MyTagLib {
    static namespace = 'my'

    def onetimeToken = { attrs, body ->
        def onetime = new TokenGenerator()
        out << '<input type="hidden" value="' << onetime.generateToken() << '">'
        ''
    }
}

You can mock the TokenGenerator class like this:

package myapp

import grails.test.TagLibUnitTestCase
import groovy.mock.interceptor.MockFor

class MyTagLibUnitTests extends TagLibUnitTestCase {

    void testOnetimeToken() {
        def mockTokenGenerator = new MockFor(TokenGenerator)
        mockTokenGenerator.demand.generateToken { '123' }
        mockTokenGenerator.use {
            tagLib.onetimeToken([:], { '' })
            assertEquals '<input type="hidden" value="123">', tagLib.out.toString()
        }
    }
}

The code inside the use closure will use your mocked TokenGenerator behavior in place of the regular TokenGenerator behavior (ie it will return '123' for the generateToken() method), and it will raise an exception if generateToken() wasn't called within the use closure.
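The demand/use pattern has a close analogue in python's unittest.mock.patch, if that helps triangulate what MockFor is doing: replace the class the code under test instantiates internally, dictate the return value, and verify the call happened. A made-up sketch:

```python
import sys
from unittest.mock import patch

# made-up stand-ins for the grails classes in the example above
class TokenGenerator:
    def generate_token(self):
        raise RuntimeError('should not run in the test')

def onetime_token():
    # the code under test creates its collaborator itself...
    onetime = TokenGenerator()
    return '<input type="hidden" value="%s">' % onetime.generate_token()

# ...so the test patches the class in this module, MockFor/use style
with patch.object(sys.modules[__name__], 'TokenGenerator') as MockGen:
    MockGen.return_value.generate_token.return_value = '123'
    html = onetime_token()
    MockGen.return_value.generate_token.assert_called_once_with()

assert html == '<input type="hidden" value="123">'
```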

Sunday, June 6, 2010

Grails Passwords, Salted

Getting authentication with Spring Security (s2) set up on Grails is nice and easy; getting your s2 passwords salted with a unique value, not so much. There's a really nice grails s2 plugin that Burt Beckwith maintains. It's actually quite well documented, but there are an awful lot of places where the code is still a ways ahead of the documentation.

So here's a quick tutorial to get your grails passwords salted:

1 Install S2

First install the spring-security plugin:

$ grails install-plugin spring-security-core

Then create your "user" and "role" domain objects. You can call the "user" and "role" classes whatever you want, and put them in whatever package you choose. I called mine cq.User and cq.Role:

$ grails s2-quickstart cq User Role

2 Basic Configuration

The s2-quickstart script will automatically add the following to your grails-app/conf/Config.groovy config file:

// Added by the Spring Security Core plugin:
grails.plugins.springsecurity.userLookup.userDomainClassName = 'cq.User'
grails.plugins.springsecurity.userLookup.authorityJoinClassName = 'cq.UserRole'
grails.plugins.springsecurity.authority.className = 'cq.Role'

If you know that your usernames won't change, you can use them to salt the password. While not ideal as salts (an attacker can still build out rainbow tables of common username/password combinations pretty easily), they're a lot better than no salt at all.

To use the username as a salt, all you need to do is add one config setting to your Config.groovy. Unfortunately, the s2 manual had the wrong name for this setting; this is the right setting to add to Config.groovy:

grails.plugins.springsecurity.dao.reflectionSaltSourceProperty = 'username'

I'd also recommend turning on the setting that encodes the hashed passwords as base-64 strings (instead of strings of hex digits; for a 256-bit hash that's 44 base-64 characters instead of 64 hex digits, shaving 20 characters off the size of the hashed password):

grails.plugins.springsecurity.password.encodeHashAsBase64 = true

3 Basic Password Hashing

The other thing you need to do is make sure you hash the password with the salt whenever the password is saved. The s2 quickstart tutorial directs you to do this in your user controller. Don't; you should do this in the domain models, so you don't repeat yourself.

So update your "user" class to look like this:

package cq

class User {

    def springSecurityService

    String username
    String password
    boolean enabled
    boolean accountExpired
    boolean accountLocked
    boolean passwordExpired

    static mapping = {
        // password is a keyword in some sql dialects, so quote with backticks
        // password is stored as 44-char base64 hashed value
        password column: '`password`', length: 44
    }

    static constraints = {
        username blank: false, size: 1..50, unique: true
        password blank: false, size: 8..100
    }

    def beforeInsert() {
        encodePassword()
    }

    def beforeUpdate() {
        if (isDirty('password'))
            encodePassword()
    }

    Set getAuthorities() {
        UserRole.findAllByUser(this).collect { it.role } as Set
    }

    protected encodePassword() {
        password = springSecurityService.encodePassword(password, username)
    }
}

The main difference between the above and what s2-quickstart generates is the internal encodePassword() method. When a new user is saved, the beforeInsert() method will be called by the gorm framework, and our user class will hash the password, with the username as a salt. When an existing user is updated, the beforeUpdate() method will be called; it will check if the password has changed, and if it has, it will also hash the new password the same way.

This way you never have to hash a user's password in a controller or other code; just pass it on through to the domain model like any other property.

4 A Quick Test

At this point you've done enough to store passwords hashed with the username as a salt. Test it out by adding some test users in your bootstrap code, and a check for authenticated users on your home page.

In grails-app/conf/BootStrap.groovy, create and save a new test user:

class BootStrap {

    def init = { servletContext ->
        new cq.User(username: 'test', enabled: true, password: 'password').save(flush: true)
    }

    def destroy = { }
}

And in grails-app/views/index.gsp, add this to the top of the body:

...
<body>
    <sec:ifLoggedIn><h1>Hey, I know you; you're <sec:username/>!</h1></sec:ifLoggedIn>
    <sec:ifNotLoggedIn><h1>Who are you?</h1></sec:ifNotLoggedIn>
...

Now run your app (with clean, just to make sure everything gets rebuilt properly):

$ grails clean && grails run-app

Navigate to http://localhost:8080/cq/login (where cq is the name of your app), and login with a username of test and a password of password. Pretty sweet what you get (just about) out of the box, huh?

5 Adding a Unique Salt

Let's kick it up a notch. To create a unique salt for each user (making it impractical for an attacker to use rainbow tables to crack the hashed passwords), add a salt field to your "user" class, and override the getter for this field to initialize it with a unique salt:

package cq

import java.security.SecureRandom // add

class User {

    def springSecurityService

    String username
    String password
    String salt // add
    boolean enabled
    boolean accountExpired
    boolean accountLocked
    boolean passwordExpired

    static mapping = {
        // password is a keyword in some sql dialects, so quote with backticks
        // password is stored as 44-char base64 hashed value
        password column: '`password`', length: 44
    }

    static constraints = {
        username blank: false, size: 1..50, unique: true
        password blank: false, size: 8..100
        // salt is stored as 64-char base64 value
        salt maxSize: 64 // add
    }

    def beforeInsert() {
        encodePassword()
    }

    def beforeUpdate() {
        if (isDirty('password'))
            encodePassword()
    }

    // add:
    String getSalt() {
        if (!this.salt) {
            def rnd = new byte[48]
            new SecureRandom().nextBytes(rnd)
            this.salt = rnd.encodeBase64()
        }
        this.salt
    }

    Set getAuthorities() {
        UserRole.findAllByUser(this).collect { it.role } as Set
    }

    protected encodePassword() {
        password = springSecurityService.encodePassword(password, salt) // update
    }
}

Don't forget to also update the encodePassword() method to hash the password with the salt field, instead of the username field.
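As a sanity check on the sizes used above (48 random bytes for the salt, a 256-bit digest for the hash), here's a rough python sketch of the scheme. The exact digest and concatenation order are whatever the s2 plugin's encodePassword() uses, so treat encode_password here as illustrative only:

```python
import base64, hashlib, os

def make_salt():
    # 48 random bytes encode to exactly 64 base-64 characters
    return base64.b64encode(os.urandom(48)).decode()

def encode_password(password, salt):
    # illustrative only: the real digest and concatenation order are
    # whatever the s2 plugin's encodePassword() actually does
    digest = hashlib.sha256((password + salt).encode()).digest()
    return base64.b64encode(digest).decode()   # 32 bytes -> 44 characters

salt = make_salt()
hashed = encode_password('password', salt)
assert len(salt) == 64
assert len(hashed) == 44
assert encode_password('password', salt) == hashed  # repeatable given the salt
assert make_salt() != salt                          # fresh salt every time
```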

6 Adding Custom UserDetails

Here's where it gets tricky. S2 maintains a user class for authentication called UserDetails that's completely separate from your "user" domain model. So you have to provide a custom UserDetailsService class that creates a custom UserDetails object given a username, as well as a custom SaltSource helper-class to extract the salt value from the custom UserDetails.

The good news is that you don't have to write a whole lot of code to do this. Create the following class as src/groovy/cq/MyUserDetailsService.groovy (or with whatever namespace and classname you like):

package cq

import org.codehaus.groovy.grails.plugins.springsecurity.GormUserDetailsService
import org.codehaus.groovy.grails.plugins.springsecurity.GrailsUser
import org.springframework.security.core.userdetails.UserDetails

class MyUserDetailsService extends GormUserDetailsService {

    protected UserDetails createUserDetails(user, Collection authorities) {
        new MyUserDetails((GrailsUser) super.createUserDetails(user, authorities), user.salt)
    }
}

And create the following as src/groovy/cq/MyUserDetails.groovy:

package cq

import org.codehaus.groovy.grails.plugins.springsecurity.GrailsUser

class MyUserDetails extends GrailsUser {

    public final String salt

    MyUserDetails(GrailsUser base, String salt) {
        super(base.username, base.password, base.enabled,
            base.accountNonExpired, base.credentialsNonExpired,
            base.accountNonLocked, base.authorities, base.id)
        this.salt = salt
    }
}

And create the following as src/groovy/cq/MySaltSource.groovy:

package cq

import org.springframework.security.authentication.dao.ReflectionSaltSource
import org.springframework.security.core.userdetails.UserDetails

class MySaltSource extends ReflectionSaltSource {

    Object getSalt(UserDetails user) {
        user[userPropertyToUse]
    }
}

Note that if you use a java UserDetails implementation, instead of a groovy implementation, you can just use ReflectionSaltSource directly — you need to customize it only to do groovy "reflection" (it does java reflection just fine).
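The "reflection" involved is nothing exotic; stripped of the Spring machinery, a reflection salt-source just reads a configured property off the user-details object by name. A python sketch of the idea (the class and function names here are made up):

```python
# stripped of the Spring machinery, a reflection salt-source just reads a
# configured property off the user-details object by name (names made up)
class MyUserDetails:
    def __init__(self, username, salt):
        self.username = username
        self.salt = salt

def get_salt(user, user_property_to_use):
    return getattr(user, user_property_to_use)

user = MyUserDetails('test', 'abc123')
assert get_salt(user, 'username') == 'test'   # reflectionSaltSourceProperty = 'username'
assert get_salt(user, 'salt') == 'abc123'     # reflectionSaltSourceProperty = 'salt'
```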

7 Configuring UserDetails

Finally, you can configure s2 to use your custom UserDetails class by adding the following to your grails-app/conf/spring/resources.groovy:

import org.codehaus.groovy.grails.commons.ConfigurationHolder as CH beans = { userDetailsService(cq.MyUserDetailsService) { sessionFactory = ref('sessionFactory') transactionManager = ref('transactionManager') } saltSource(cq.MySaltSource) { userPropertyToUse = CH.config.grails.plugins.springsecurity.dao.reflectionSaltSourceProperty } }

If you were to implement your UserDetails class in java you could omit the saltSource bean (since it comes configured out-of-the-box to do reflection on java classes). Otherwise, the one last piece of the puzzle is to go back and change the dao.reflectionSaltSourceProperty setting in your grails-app/conf/Config.groovy to your new salt field:

grails.plugins.springsecurity.dao.reflectionSaltSourceProperty = 'salt'

8 A Real Test

Now to verify that all this stuff is working (and will still work when you mess around with your user domain model in the future), you need some integration tests. First, let's tackle the password-hashing scheme; create a test/integration/cq/UserTests.groovy class, and dump this in it:

package cq

class UserTests extends GroovyTestCase {

    def springSecurityService

    void testPasswordIsEncodedWhenUserIsCreated() {
        def user = new User(username: 'testuser1', password: 'password').save(flush: true)
        assertEquals springSecurityService.encodePassword('password', user.salt), user.password
    }

    void testPasswordIsReEncodedWhenUserIsUpdatedWithNewPassword() {
        def user = new User(username: 'testuser1', password: 'password').save(flush: true)
        // update password
        user.password = 'password1'
        user.save(flush: true)
        assertEquals springSecurityService.encodePassword('password1', user.salt), user.password
    }

    void testPasswordIsNotReEncodedWhenUserIsUpdatedWithoutNewPassword() {
        def user = new User(username: 'testuser1', password: 'password').save(flush: true)
        // update user, but not password
        user.enabled = true
        user.save(flush: true)
        assertEquals springSecurityService.encodePassword('password', user.salt), user.password
    }

    void testPasswordIsNotReEncodedWhenUserIsReloaded() {
        new User(username: 'testuser1', password: 'password').save(flush: true)
        // reload user
        def user = User.findByUsername('testuser1')
        assertNotNull user
        assertEquals springSecurityService.encodePassword('password', user.salt), user.password
    }
}

Now the authentication part. This is a bit awkward, because what we're really testing is that your custom UserDetails is implemented and configured correctly, but let's pretend that it's testing your login controller and stick it in your LoginControllerTests anyway. Create a test/integration/cq/LoginControllerTests.groovy class, and put this in it:

package cq

import java.security.Principal

import org.springframework.security.authentication.BadCredentialsException
import org.springframework.security.authentication.UsernamePasswordAuthenticationToken

class LoginControllerTests extends GroovyTestCase {

    def daoAuthenticationProvider

    void testAuthenticationFailsWithIncorrectPassword() {
        def user = new User(
            username: 'testuser1', password: 'password', enabled: true
        ).save(flush: true)
        def token = new UsernamePasswordAuthenticationToken(
            new TestPrincipal('testuser1'), 'password1'
        )
        shouldFail(BadCredentialsException) {
            daoAuthenticationProvider.authenticate(token)
        }
    }

    void testAuthenticationSucceedsWithCorrectPassword() {
        def user = new User(
            username: 'testuser1', password: 'password', enabled: true
        ).save(flush: true)
        def token = new UsernamePasswordAuthenticationToken(
            new TestPrincipal('testuser1'), 'password'
        )
        def result = daoAuthenticationProvider.authenticate(token)
        assertTrue result.authenticated
    }

    class TestPrincipal implements Principal {
        String name

        TestPrincipal(def name) {
            this.name = name
        }

        boolean equals(Object o) {
            if (name == null) return o == null
            return name.equals(o)
        }

        int hashCode() {
            return toString().hashCode()
        }

        String toString() {
            return String.valueOf(name)
        }
    }
}

And now run your integration tests:

$ grails test-app integration:

The console outputs only a brief overview of the results. If something went wrong, you can find the details in the target/test-reports folder; enter the following in your browser address bar for the html version of the report (where $PROJECT_HOME is the full path to your project):


And hey presto, you've got salt.

Saturday, May 29, 2010

Free Web Hosting on App Engine

I don't have much going on at my personal site; just a few static mockups and other documents (this blog is hosted separately). So I finally got around to transferring it from a paid host (lylix; no complaints other than that they have the gall to charge money for their services) to a free one: Google's App Engine (GAE).

I'm not sure why there isn't already a cookbook for transferring a static site to GAE, since it's pretty easy and painless — so here's my recipe.


But first, note that your lunch is not completely free — Google will 503 your ass if you exceed its (extremely generous) daily quotas for bandwidth and processing power.

Also, there are a few other GAE gotchas which apply to static content:

  1. Can't host a "naked" domain (ie a domain without the www prefix).
  2. Limit of 3000 files per app.
  3. No automatic directory listings.
  4. No automatic directory redirect (ie redirect from /foo to /foo/).
  5. No custom 404 page.

You can get around #3 and #4, however (as I describe below).


The first thing you need to do is get the GAE SDK. I'm using the version for python on linux.

With the SDK installed, you can start developing right away on your local dev machine. To deploy an app, of course, you need to sign up for a GAE account. They make you verify your account with an SMS message, so have your cell phone ready.

Hello World

Building a static app is really simple:

  1. Create a directory for your project (ie myapp).
  2. Create a sub-directory named static (or really, whatever you want to name it) inside the project directory. This will be the web root.
  3. Copy your static files into static.
  4. Add a boilerplate app.yaml to the project directory.

This is the boilerplate app.yaml:

application: myapp
version: 1
runtime: python
api_version: 1
default_expiration: "1d"

handlers:

# show index.html for directories
- url: (.*/)
  static_files: static\1index.html
  upload: static(.*)index.html

# all other files
- url: /
  static_dir: static

The first line (application: myapp) specifies the name of your app. This name doesn't matter for development on your local machine; but when you use the GAE dashboard to create a new app, you'll be prompted for a name. The name must be unique globally on GAE (ie it has to be a name no other GAE user has claimed for his or her app), and GAE uses it in the default url for your application (ie Once you've created the name and set up the app in the GAE dashboard, go back and change it here in app.yaml.

The default_expiration setting is the default http cache-age for your static files; "1d" = one day, "4h" = 4 hours, etc. You can configure a separate expiration time for each url handler, but "1d" is probably good for most static content.
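To make the expiration format concrete, here's a small Python sketch of how those strings break down into seconds. The helper is hypothetical (GAE parses these values itself); it just illustrates the "1d"/"4h" arithmetic:

```python
import re

# unit multipliers for GAE-style expiration strings ("1d", "4h", "30m", "15s")
UNITS = {'d': 86400, 'h': 3600, 'm': 60, 's': 1}

def expiration_seconds(spec):
    """Convert an expiration spec like "1d 12h" to a number of seconds."""
    total = 0
    for amount, unit in re.findall(r'(\d+)([dhms])', spec):
        total += int(amount) * UNITS[unit]
    return total

print(expiration_seconds('1d'))     # 86400
print(expiration_seconds('4h'))     # 14400
print(expiration_seconds('1d 12h')) # 129600
```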

The first url in the handlers section captures all urls which end with a trailing-slash. These are directories; with static content you usually either want to display the index.html file of the requested directory, or, if there's no index.html, just the bare directory listing. Unfortunately GAE doesn't support listing static files, so this handler always just tries to display the index.html file in the requested directory.

The second url in the handlers section captures all other urls, and simply serves the requested static file.

Test It Out

At this point, you've already built a fully-functioning GAE app. You can test it out by running the test appserver (where $APPENGINE_HOME is the path to the GAE SDK on your local box, and $MYAPP_HOME is the path to your project directory):

$ $APPENGINE_HOME/ $MYAPP_HOME
This boots up a test GAE appserver on port 8080. Enter http://localhost:8080/ into your browser address bar; it should load up the index.html from the root of your static directory.

Directory Woeage

But as I pointed out above, if you navigate to a sub-directory, and omit the trailing slash (ie http://localhost:8080/foo), you won't see the index.html for that sub-directory — you'll just get a blank page (and Google's generic 404 page when deployed to GAE).

This we can fix, however, by implementing a trivial RequestHandler in python. Create a file called, say, (or whatever the heck you want to call it) in your project directory, and dump this into it:

from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class RedirectToDirectory(webapp.RequestHandler):
    def get(self):
        self.redirect(self.request.path + "/", permanent=True)

application = webapp.WSGIApplication([('/.*', RedirectToDirectory)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == "__main__":
    main()

This creates a RequestHandler called RedirectToDirectory; this class simply appends a trailing slash to the current url and redirects. The other non-boilerplate line is just below, where RedirectToDirectory is registered to be used for all urls (/.*) handled by this script.

Next, drop in a url entry for the script into the handlers section of your app.yaml (and yes, the order of the url entries is important):

application: myapp
version: 1
runtime: python
api_version: 1
default_expiration: "1d"

handlers:

# show index.html for directories
- url: (.*/)
  static_files: static\1index.html
  upload: static(.*)index.html

# redirect to directories (/foo to /foo/)
- url: .*/[^.]+

# all other files
- url: /
  static_dir: static

This url entry will capture all the urls which don't end in a trailing slash and don't have a file extension. It will handle requests for these urls by sending them to your script, which in turn will redirect them back to your app — but with a trailing slash this time.
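A quick way to sanity-check that url pattern (again just Python for illustration, using the same regex; note that trailing-slash urls never reach this handler, since the first handler captures them):

```python
import re

# the redirect handler's url pattern: no file extension in the last segment
pat = re.compile(r'.*/[^.]+')

# extensionless urls match, and get redirected to url + "/"
print(bool(pat.fullmatch('/foo')))        # True
print(bool(pat.fullmatch('/foo/bar')))    # True

# urls with a file extension don't match, so they fall through
# to the static-file handler instead
print(bool(pat.fullmatch('/style.css')))  # False
print(bool(pat.fullmatch('/js/app.js')))  # False
```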

Directory Listings

But if your directory doesn't have a index.html file, you're still SOL — unlike a normal webserver, you can't configure GAE to just display the directory listing. And you can't just implement a handler to do this — GAE apps don't have access to their static filesystem.

One possible workaround to this would be to store all your files as entries in the Big Table DB, and then serve them (and the directory listings) dynamically. This would require writing a bunch of code, however, when all you really want is just to serve some freaking static files already.

So I compromised and wrote a simple perl script which automatically creates static index.html files for a configurable list of static directories. To make it work, create a directories sub-directory in your project, and add to it four files:
#!/usr/bin/perl -w

open LIST, "directories/list.txt" or die $!;
while (<LIST>) {
    s/\n//; # strip newline
    $title = $dir = $_;
    $title =~ s!.*/!!; # strip path from directory name

    # open directory index.html for writing
    open INDEX, ">$dir/index.html" or die $!;

    # dump header template into index.html
    # replacing %title% with directory name
    open HEAD, "directories/head.html" or die $!;
    while (<HEAD>) {
        s/%title%/$title/g;
        print INDEX;
    }
    close HEAD;

    # dump directory listing into index.html
    open DIR, "ls -lh $dir |" or die $!;
    while (<DIR>) {
        s/\n//; # strip newline

        # parse fields listed for each file
        @fields = split / +/, $_, 8;

        # skip lines that aren't file listings
        # and also skip this index.html
        next if ($#fields < 7 || $fields[7] eq 'index.html');

        # print a table row for this file
        print INDEX '<tr><td class="name"><a href="' . $fields[7] . '">' .
            $fields[7] . '</a></td><td class="size">' .
            format_size($fields[4], $fields[0]) .
            '</td><td class="modified">' . $fields[5] . ' ' . $fields[6] .
            '</td></tr>' . "\n";
    }
    close DIR;
    close INDEX;

    # dump footer into index.html
    `cat directories/foot.html >> $dir/index.html`;
}
close LIST;

# format file size a little nicer than ls
sub format_size {
    my($size, $perm) = @_;

    # skip for subdirectories
    return '-' if $perm =~ /^d/;

    # add bytes abbr
    $size .= 'B' if $size =~ /\d$/;
    return $size;
}

This is the perl script (call it, say, directories/ When you create it, make sure you make it executable:

$ chmod +x directories/

When you run it (after you've created the other three files), make sure you run it from your project directory:

$ cd myapp
$ directories/
static/foo
static/foo/bar
static/baz

This is the list of directories for which to auto-generate index.html files. Please note that the script will delete the existing index.html files in these directories. So make sure that you list only directories which don't have a custom index.html. (Plus this is another good reason to be using version control on your project.)

<html>
<head>
<title>%title% - My App</title>
</head>
<body>

<h1>%title%</h1>

<table>
<thead>
<tr><th>Name</th><th>Size</th><th>Modified</th></tr>
</thead>
<tbody>

This is the first part of the template for the auto-generated index.html files. The perl script will replace the instances of %title% in this file with the directory name.

</tbody>
</table>

</body>
</html>

This is the second part of the template.

So add to directories/list.txt the paths of the directories you want to have listings, customize directories/head.html and directories/foot.html to your liking, and run directories/ This will create your directory-listing index.html files.


And now you're ready to deploy. Make sure you've created the app in the GAE dashboard and updated your app.yaml with its appspot name; then upload it:

$ $APPENGINE_HOME/ update $MYAPP_HOME
You'll have to enter your GAE credentials, and wait for a minute or two while your app boots up on GAE; when your app is deployed and ready to use, the script will let you know. If you screw up your credentials, delete your ~/.appcfg_cookies file (which caches them), and try again.

If you've also signed up for Google Apps, you can use your (separate) Google Apps dashboard to configure a sub-domain of a domain you own to point to your deployed app (ie Otherwise, you have to access it via its sub-domain of the appspot domain (ie

Bon appétit!

Thursday, May 27, 2010

Lucid Windows with Xmonad

I just got a new "Bonobo" laptop from System 76 (which comes with Ubuntu 10.04 Lucid Lynx installed); and I set it up with xmonad, a nice clean, tiling window-manager. But instead of messing around with .xsession, I'm launching it via the "hybrid" method detailed on the xmonad haskell wiki. With this mechanism, I can still login to x with the default gnome desktop — but now I can also log in with xmonad; plus I can keep the gnome-panel etc that comes with the default ubuntu install. Also, this is pretty much the simplest way to set up xmonad on Lucid.

1. Install Xmonad

So the first thing is to install xmonad:

$ sudo apt-get install xmonad

2. Create an Xmonad.start Script

Now create a shell script to execute when you log in with an xmonad session. It will simply set the WINDOW_MANAGER environment variable to xmonad, and then start up gnome in the regular fashion:

$ sudo tee /usr/local/bin/xmonad.start > /dev/null <<'EOF'
#!/bin/sh
export WINDOW_MANAGER=xmonad
gnome-session
EOF
$ sudo chmod +x /usr/local/bin/xmonad.start

3. Update Xmonad.desktop

Lucid has already created an xmonad.desktop entry in /usr/share/applications and /usr/share/xsessions for you; you just need to update them to run your xmonad.start script (instead of running xmonad directly):

$ sudo perl -pi -e 's/^Exec=xmonad$/Exec=xmonad.start/' /usr/share/{applications,xsessions}/xmonad.desktop

4. Set up Xmonad.hs

If you want some sort of dock displayed (like gnome-panel, dzen, xmobar, etc.) you need to set up a custom .xmonad/xmonad.hs configuration file; and make sure you include manageDocks in the manageHook, and avoidStruts in the layoutHook (and you need to import XMonad.Hooks.ManageDocks for this).

You can try out my xmonad.hs, or this simplified xmonad.hs:

import XMonad
import XMonad.Hooks.ManageDocks
import XMonad.Layout.Grid
import XMonad.Layout.Master

-- automatically manage docks (gnome-panel/dzen/xmobar/etc)
myManageHook = manageDocks

-- each layout is separated by |||
-- layout 1: grid w/ master (expand/contract by 3/100; master takes up 1/2 of screen)
-- layout 2: standard full (master window fullscreen, hides dock)
myLayoutHook = (avoidStruts $ mastered (3/100) (1/2) $ Grid) ||| Full

main = xmonad $ defaultConfig {
    manageHook = myManageHook <+> manageHook defaultConfig,
    layoutHook = myLayoutHook
}

After editing xmonad.hs, validate it with the following command:

$ xmonad --recompile

If you're already running xmonad, you can reload it with the new config by pressing <mod>-q.

5. Try it Out

You're all ready to try it out! Log out of your current x session (ie the Log Out... menu item in the gnome-panel), and select XMonad from the Sessions menu on the login screen's bottom panel:

If you're new to xmonad, follow the rest of the xmonad guided tour. If you screwed up something and xmonad won't start or appears to hang, hit <ctrl>-<alt>-<f1> to get to a terminal, log in, fix your xmonad.hs (running xmonad --recompile to check that it compiles), and then restart x with sudo service gdm restart.

Here's the finished product on my laptop:

Sunday, May 9, 2010

Firing Up My Jetpack

Jetpack is the new style (still in early development) of firefox extensions. Mozilla developers have made their in-progress work on this available for some time, however, and I finally took a look at the "jetpacks" available at the Jetpack Gallery. I was pretty impressed with what I found: there are already jetpack versions of some of my favorite existing firefox extensions, like for flash blocking (ClickToFlash), capturing the colors used on a web page ("eyedropper" style: JetColorPicker), and taking screen-grabs (JetShot).

While I think I'll stick with the conventional firefox-extension versions of each of these tools for now, I was really impressed with how short and sweet the code to many of these jetpacks is. I can see why mozilla is planning to make this the future of extension building: there are tons and tons of web developers who are familiar enough with javascript and dom-manipulation to be able to whip these things out, and the jetpack-management UI has that same "view source" concept built into it that helped fuel the original web 1.0 explosion in the 90s.

But anyway, while I was looking at some of the jetpacks, I got the itch to build a few of my own. And the experience was both glorious and frustrating. Glorious because jetpacks are so wickedly easy to build and install; frustrating because there isn't yet much documentation for building them.

The "retired" jetpack documentation at the Mozilla Developer Center (and API Reference on your own installed jetpack page) are the only docs relevant to the current jetpack extension. The in-progress Jetpack SDK is the second cut at the "jetpack" methodology, and is designed to work with a forthcoming version of firefox — and not the current version of the jetpack extension (the distinction between these two different forms of "jetpack" is not called out clearly anywhere, which makes it confusing to figure out where to get started).

But enough ranting for now. If you want to learn how to build jetpacks, you pretty much just have to do it the same way you learned to build web pages — by viewing the source of other jetpacks. That's what I did, and here are the first few jetpacks I built:

Lookup in Wikipedia

I actually built the Lookup in Wikipedia jetpack second, but it's even simpler than the first, so I'll show it off first. It adds a "Lookup in Wikipedia" menu item to the right-click popup menu, so that when you select some text and then right-click and select this menu item, it'll open up a new tab with the wikipedia page for the text you selected:

// ==UserScript==
// @name Lookup in Wikipedia
// @description Adds a context menu item to lookup the selected text in wikipedia.
// @copyright 2010 Justin Ludwig
// @license The MIT License
// @version 1.0
// ==/UserScript==

jetpack.future.import("menu");
jetpack.future.import("selection");

// add page context-menu item{
    label: "Lookup in Wikipedia",
    icon: "",
    command: function() {
        // lookup the current document's language
        var docLang = jetpack.tabs.focused.contentDocument.getElementsByTagName("html")[0].lang;
        // lookup the browser's preferred language
        var userLang = jetpack.tabs.focused.contentWindow.navigator.language;
        // ignore everything in the lang tag except for the general language
        var lang = (docLang || userLang).toLowerCase().replace(/([a-z]+).*/, "$1");
        // when the user selects this menu-item, open a new tab with the
        // wikipedia page for the text on the page that the user had selected"http://" + lang + "" +
            encodeURIComponent(jetpack.selection.text));
    }
});

If it didn't have a little fancy code for trying to figure out what language to use, it would be a one-liner. For the language, you can see I just used the standard browser dom of the current tab (jetpack.tabs.focused.contentDocument for what we normally refer to just as document and jetpack.tabs.focused.contentWindow for window) to get the current document language, or the browser's preferred language. Opening a new tab with the selected text is a breeze with the jetpack api ( to open the tab, and jetpack.selection.text to get the currently selected text). The jetpack.tabs object is part of the core jetpack api, but accessing and jetpack.selection requires the jetpack.future.import() commands at the top of the script.
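The language-normalization step is easy to test in isolation; here's the same regex logic rewritten in Python, purely for illustration (the jetpack itself does this in javascript with .toLowerCase().replace()):

```python
import re

# reduce a lang tag like "en-US" to just the general language,
# mirroring the jetpack's .toLowerCase().replace(/([a-z]+).*/, "$1")
def general_lang(tag):
    return re.sub(r'^([a-z]+).*$', r'\1', tag.lower())

print(general_lang('en-US'))  # en
print(general_lang('pt-BR'))  # pt
print(general_lang('de'))     # de
```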

My Last Modified

I totally ripped off the idea for My Last Modified jetpack from azu_re's lastModified jetpack, but I made the display a little more customizable so it plays nice with Vimperator (Vimperator unfortunately seems to break the display of most of the jetpacks which show stuff in the status bar — I wish in fact Vimperator would just leave the status bar alone entirely, because I can see with time I'm going to want to give it over entirely to jetpacks).

Both mine and azu_re's display the last-modified date for the current page in a little block on the status bar. The difference with mine is that the date format and css style of the block in the status bar are fully customizable:

// ==UserScript==
// @name My Last Modified
// @description Displays page's last-modified date.
// @copyright 2010 Justin Ludwig
// @license The MIT License
// @attribution azu_re
// @version 1.0
// ==/UserScript==

var manifest = {
    settings: [
        // date/time format (using strftime)
        {
            name: "dateFormat",
            type: "text",
            label: "Date Format",
            "default": "%m/%d/%y %H:%M"
        },
        // css style for status bar display
        {
            name: "statusStyle",
            type: "text",
            label: "Status Bar CSS Style",
            "default": "color:white; background-color:black; font-size:9px; width:75px; padding:2px;"
        }
    ]
};

jetpack.future.import("storage.settings");

/**
 * Formats a date using the specified format.
 * @param format Format string.
 * @param date JS date object.
 * @return Formatted string.
 */
function strftime(format, date) { ... }

jetpack.statusBar.append({
    // add display div to status bar
    html: "<div id='my-last-modified' style='" + + "'></div>",
    onReady: function(widget) {
        // update display when tab focused or loaded
        function update() {
            var currentDocument = jetpack.tabs.focused.contentDocument;
            // get last-modified date of current tab
            var date = new Date(currentDocument.lastModified);
            // format date per user preference
            var status = strftime(, date);
            // update display
            $(widget).find('#my-last-modified').text(status);
        };
        jetpack.tabs.onFocus(update);
        jetpack.tabs.onReady(update);
    }
});

(I omitted the source to the strftime function from this listing, just because it's lengthy and has nothing jetpack-specific in it. You can get the full code for it from the My Last Modified jetpack's page in the jetpack gallery, though.)

The jetpack.statusBar.append() function is a super-easy way of adding some content to the browser's status bar — all you have to do is specify the initial html container, via the html option, and a function that's called when the html has been added to the status bar, via the onReady option. Since in this case the content changes whenever the active tab changes, my onReady function has an update() function in it that it registers to be called when a different tab is focused (via jetpack.tabs.onFocus()) or created (via jetpack.tabs.onReady()). The last line of the update() function uses jquery, conveniently included by default into each jetpack, to lookup the html container and update its text.

This jetpack also makes use of jetpack's super-easy way of exposing and persisting per-jetpack preferences: all you have to do is add a manifest global with a settings array property in it; each item in that array represents a separate preference. Users can access the preferences of any jetpack by typing "about:jetpack" in firefox's address bar, clicking the "Installed Features" link on the resulting page, and then clicking the "settings" link next to the listing for the jetpack. In the source code, you can access each setting as a property on the object (ie for a custom preference named dateFormat). You just have to remember to import the storage.settings "package" at the top of your jetpack to make this all work.

Log Open Ajax Events

The Log Open Ajax Events jetpack is the only really original jetpack I've built so far. It's a little more complicated (and took some trial and error to figure out which APIs to use), but still really simple. If the current page has included the Open Ajax Hub (as the product I work on at my job does), this jetpack logs any events published to the hub to the Firebug console.

This is something for which I had already built a bookmarklet; but in jetpack form I don't have to turn it on every time I reload or view a different page — I can just turn it on when I want to debug something, and continue getting the events until I'm completely finished with the problem.

// ==UserScript==
// @name Log OpenAjax Events
// @description Logs configured openajax events to firebug.
// @copyright 2010 Justin Ludwig
// @license The MIT License
// @version 1.0
// ==/UserScript==

var manifest = {
    settings: [
        // comma-separated list of events to which to subscribe
        // (* and ** wildcards allowed per OpenAjax spec)
        // clear this value completely to disable logging of any events
        {
            name: "eventName",
            type: "text",
            label: "Event Name",
            "default": "**"
        }
    ]
};

jetpack.future.import("storage.settings");

function subscribe() {
    // "window" context object of the current page
    var w = jetpack.tabs.focused.contentWindow.wrappedJSObject;
    // no firebug console or no openajax on this page
    if (!w.console || !w.OpenAjax) return;

    // lazy init our custom data object on each page
    var name = || "";
    var data = w._logOpenAjaxEventsData;
    if (!data) data = w._logOpenAjaxEventsData = { name: "", subs: [] };

    // already subscribed to this page (or no events to subscribe for)
    if (name == return;

    // remove existing subscriptions
    for (var a = data.subs, i = a.length - 1; i >= 0; i--)
        w.OpenAjax.hub.unsubscribe(a[i]);
    data.subs = [];

    // add new subscriptions
    if (name)
        for (var a = name.split(","), i = a.length - 1; i >= 0; i--)
            data.subs.push(w.OpenAjax.hub.subscribe(a[i], function(name, pubData, subData) {
                // log event (in a form that's collapsable so as not to pollute the log too badly)
                var o = {};
                o["OpenAjax event: " + name] = pubData;
                w.console.dir(o);
            }));

    // remember event names we subscribed to (to check if they changed later) = name;
}

// re-subscribe as necessary on every page focus or load
jetpack.tabs.onFocus(subscribe);
jetpack.tabs.onReady(subscribe);

I have it try to re-subscribe to the Open Ajax Hub every time a tab is focused or created (jetpack.tabs.onFocus() and jetpack.tabs.onReady() again). It stores its own custom _logOpenAjaxEventsData in each tab to remember if it's already subscribed, though, so it only actually re-subscribes if the user has updated the jetpack's eventName preference in the interim. The trick with this jetpack (that I had to find by combing through the source of a bunch of other jetpacks) was how to access the javascript "context" object of the current tab. Usually when scripting an html page, we access both the browser dom and the javascript context of the current page (the thing that holds the "global" objects) via the window property. In jetpack, you have to access these things through two distinct objects: the browser dom through jetpack.tabs.focused.contentWindow, and the javascript context via jetpack.tabs.focused.contentWindow.wrappedJSObject.

The only other tricky thing in this jetpack is just the way the Open Ajax events are logged; to make them display nicely in the firebug console (so that each event is displayed initially on a single line, but you can still drill into all the event properties) I created a new simple object for each event, and set the only property of that object to the event data, dynamically creating this one property's name with the event name in it as a friendly label for the event data object (ie "OpenAjax event: CAF.Update" in the screenshot above).

Log Window Events

I also created a Log Window Events jetpack that works pretty much the same as the Log Open Ajax Events jetpack. The difference is that it listens for standard window events (like onclick, onkeypress, etc.) instead of Open Ajax events. I won't bother including the source to it here, but like all jetpacks in the jetpack gallery, you can view the full code from its jetpack page (which, like I mentioned at the top, is probably the best innovation of this innovative extension style).

Let 'Er Rip!

So I love what the jetpack team has done so far, and I'm looking forward to the day when jetpack is included as part of the default firefox install. And hopefully as work on the new jetpack SDK progresses, many of the documentation holes will be filled in. But even if not, as long as this thing has "view source", I predict it will be a smashing success.

Saturday, April 17, 2010

1 Item Remaining Problem In IE

There are a bunch of different bugs in IE which manifest themselves as a page which doesn't finish loading and displays "X items remaining" (or "1 item remaining") in the status bar. C.M. Palmer on Stack Overflow has a pretty good answer for one of these bugs. I recently found an answer to another.

One of the symptoms of this particular bug (and probably others) is that it only happens in certain environments. I finally got access to one client's environment where this happened, and was able to track it down. Intermittently, and for no apparent reason, IE would fail to load the following script:

<script defer src=//:></script>

Yes, that's a script element with a src of "//:". This ugly hack is actually the best way to simulate the DOMContentLoaded event in IE (the event which signals that the HTML page has been parsed and its document object-model is fully built), as Dean Edwards and friends have figured out (they also tried src values of javascript:void(0), javascript:false, and //0 before settling on //:).

Fortunately, there's another trick that works for this purpose, as Diego Perini discovered: skip this ugly script element, and poll to check if document.body.doScroll() is working yet. The downside is that because it requires a timer to call doScroll() every X milliseconds, you won't be notified of the DOM's readiness quite as quickly as when using the "//:" hack (whereby you listen for onreadystatechange events on the script element — so you're notified of the readiness immediately, rather than having to poll for it).

The upside, of course, is that the doScroll() method works more reliably, with no "1 item remaining" for which to wait indefinitely. Another problem I had noticed in the past with the script element method was that IE sometimes would fire the readystatechange event with a "complete" state before the DOM was loaded (and I had worked around that by checking if the page's footer element existed, and if not, waiting until it did). I haven't found any holes with this doScroll() technique yet, though, so I'm happy so far.
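The general shape of Perini's approach (retry an operation that throws until it succeeds, on a timer, then fire your handler) is worth seeing in isolation. Here's a generic Python sketch of that polling pattern, not browser code; the names are mine, just for illustration:

```python
import time

def poll_until_ready(check, on_ready, interval=0.01, timeout=5.0):
    """Call check() repeatedly; when it stops raising, fire on_ready().

    This mirrors the doScroll() trick: doScroll('left') throws until the
    DOM is ready, so you retry on a timer and fire your handler on success.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            check()
        except Exception:
            time.sleep(interval)
            continue
        on_ready()
        return True
    return False  # timed out before the "document" became ready

# simulate a document that becomes ready on the third poll
state = {'polls': 0, 'fired': False}

def check():
    state['polls'] += 1
    if state['polls'] < 3:
        raise RuntimeError('not ready yet')

poll_until_ready(check, lambda: state.update(fired=True))
print(state['fired'], state['polls'])  # True 3
```

As the post notes, the tradeoff versus the script-element hack is latency: polling only notices readiness on the next timer tick.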

The 1.6.x version of the Prototype JavaScript Framework used the script technique (and I'm sure other libraries have, or still do); the 1.7 version has been updated with the doScroll() method. Here's the specific code it uses for the doScroll() technique (I've added comments where it uses Prototype-specific code, in case someone wants to adapt it to a non-Prototype environment):

(function() {
    /* Support for the DOMContentLoaded event is based on work by Dan Webb,
       Matthias Miller, Dean Edwards, John Resig, and Diego Perini. */
    var timer;

    function fireContentLoadedEvent() {
        if (document.loaded) return;
        if (timer) window.clearTimeout(timer);
        document.loaded = true;
        // fire the Prototype dom:loaded event
        // (your custom ondomcontentloaded code goes here)'dom:loaded');
    }

    function checkReadyState() {
        if (document.readyState === 'complete') {
            // Prototype for removeEventListener()/detachEvent()
            document.stopObserving('readystatechange', checkReadyState);
            fireContentLoadedEvent();
        }
    }

    function pollDoScroll() {
        try {
            document.documentElement.doScroll('left');
        } catch (e) {
            // Prototype for setTimeout()
            timer = pollDoScroll.defer();
            return;
        }
        fireContentLoadedEvent();
    }

    if (document.addEventListener) {
        document.addEventListener('DOMContentLoaded', fireContentLoadedEvent, false);
    } else {
        // Prototype for addEventListener()/attachEvent()
        document.observe('readystatechange', checkReadyState);
        if (window == top)
            // Prototype for setTimeout()
            timer = pollDoScroll.defer();
    }

    // Prototype for addEventListener()/attachEvent()
    Event.observe(window, 'load', fireContentLoadedEvent);
})();

Saturday, April 3, 2010

These Aren't the Droids You're Looking For

So a few weeks ago Google contacted me about the Device Seeding Program for Top Android Market Developers — they wanted to give me (and a bunch of other developers) a free phone for having a popular app on the Android Market (mine is Ring Toggle). I thought "Awesome!", and signed up.

Throughout most of the world, Google is sending developers the Nexus One. In the US, they're sending half the developers (by random distribution) Nexus Ones, and half Motorola Droids. They shipped me my phone yesterday, and when I opened the box, it was a Droid.

I thought "Awesome!", and I shut down my old HTC G1, pulled out the SIM card and popped open the Droid. I hunted around inside the Droid's battery compartment, but I couldn't figure out where to stick the SIM. So I googled it — and it turns out the Droid is hardwired to Verizon's network. Mother fucker!

Right now I've got six months left on my plan with T-Mobile, where I'm paying $30 for like 300 minutes and $25 for the data plan (plus 400 text messages). The cheapest equivalent plan with Verizon is $60 for 450 minutes and unlimited text messages, and $30 for the data plan. So to actually use this new Droid, I have to pay $90 a month on top of my existing T-mobile plan for the next six months. Thanks Google, thanks a bunch.

I know I shouldn't be angry at Google — giving away free android phones is really cool of them — but I can't help feeling that instead of a pat on the back, I got a kick in the nuts. And the thing is, after a few minutes of awkwardness when I turned on the Droid and followed the prompts to "activate" it by calling Verizon (the call-center guy was really nice, but even more confused than I was — turns out there's no point in calling Verizon because they have no clue what's going on), I found that the Droid is a really awesome android device. It's so much more responsive than my G1, and it's got the slicker 2.0 android OS (I'm stuck on 1.6).

So I'm completely sick because I want so badly to use this new Droid instead of my old G1, but I have a hard time justifying adding a second phone plan on top of the one I already have, especially since I don't really use my phone for phone purposes all that much. But at the same time, I can't justify lugging around two phones in my pocket, one to use as a phone and the other as an ipod. And I know I'm eventually going to give in and pay both T-Mobile and Verizon a bunch of money for the privilege of having a second-rate cellular-data plan and occasional phone service, just so I can carry around my super sweet new Droid in my pocket.

So I think a couple of big evil companies just played a Jedi mind trick on me.

Sunday, March 28, 2010

On Donner and Blitzen

The first company I worked for was pretty much a Windows-only shop. They used Exchange for email, VSS for version control, and Visual Studio as an IDE. This was the first time I had ever used Windows, and I disliked it (although not as badly as I thought I would) — but I didn't really have a choice. Since 2004, though, I've been fortunate enough to be able to use Linux on my primary dev box at work. My current company (and all the ones I've ever worked for) still uses Exchange, so I've struggled with how to deal with this over the years.

Now Dasher! Now Dancer!

When I first started using Windows, I didn't know the difference between Outlook and Outlook Express (apparently the former is an Exchange client and the other is a generic SMTP/POP client), and so I used Outlook Express (because I wanted my mail faster, of course). After a few months I finally figured out why I couldn't "accept" meeting requests: I needed to use Outlook for that. So I switched to Outlook proper; and even after I moved my primary dev box to Linux, I still kept a Windows machine on the local Windows domain to keep using Outlook (and also to test IE, of course).

Now Prancer and Vixen!

But switching back and forth between my Windows and Linux boxes got old pretty quick. First I tried Eudora as an SMTP/POP client (which I had used on the old Mac OS before OS X). That was okay, but POP is pretty primitive and usually doesn't work well (it's hard to keep the local read/deleted states in sync with the server, so you usually just end up downloading everything locally and deleting it on the server — which means you'd better have a good local backup story).

On Comet! On Cupid!

Then I tried Evolution as an IMAP client, which works (at least with Exchange) much better than POP. You can keep mail in sync on the server, and even use Exchange's server-side folder structure. The two big things still missing, though, are contacts and meetings. Evolution is supposed to be able to integrate more tightly with Exchange (possibly including these two features) via Exchange's web interface, but I've never been able to get that to work. So I just used Exchange's web interface whenever I wanted to look up a person in Exchange, or to schedule/respond-to a meeting.

On Donner

Eventually I got restless, and decided to try Thunderbird. I can't say that it's really any better or worse than Evolution, just somewhat different. It's what I use now, but it still doesn't have contacts and meetings (although for a while I got a little bit of contact integration by syncing it with the corporate LDAP server). But then one day I found Lightning (and you're welcome for taking so long to get to the point of this post).

And Blitzen! (aka Finally, the Point of this Post)

Lightning, an extension for Thunderbird, allows just enough integration with Exchange's meeting functionality to be really really useful to me. Lightning is mainly a calendaring extension, but it's also able to parse Exchange's meeting-request emails — it displays a button right in the email to add the meeting-request to your Lightning calendar. So now I finally have a way to organize my meeting schedule (other than to highlight the meeting-request emails in my inbox — plus in the past Exchange unhelpfully moved these emails to a separate Calendar folder if I used the Exchange web interface to accept the meeting).

Here are some pics of the Lightning "Today Pane" (displayed on the right side of my normal email listings), and the full Lightning "Calendar" interface (which is pretty much exactly what you'd expect and need from a calendaring UI). My apologies for the garish Thunderbird persona (I can't seem to log in to the Personas website in Thunderbird, so for now I'm stuck with picking one of the "popular" personas).

Lightning "Today Pane"
Lightning "Calendar"

On 64-Bit Linux

The one gotcha with this (and kind of my motivation for writing this post) is that the stock version of Lightning doesn't work on 64-bit Linux. Fortunately, Ubuntu has this documented and taken care of on their ThunderbirdLightning Community Documentation page. To make it work, I just had to download the 64-bit lightning.xpi, and install it manually (via the Install... button in the Extensions tab of the Add-ons dialog, which you can get at via the Tools > Add-ons menu item). Now dash away, dash away, dash away all!

Saturday, February 20, 2010

Private Branches in Subversion

For anyone still learning svn, this is what I've been doing when I create and use private branches (aka "feature" branches).

For me, the main motivation for creating a private branch is to stash some code on which I've been working when I have to switch over to something else before I've finished (or can't check it in yet for some other reason). You could also use a private branch to collaborate on (or just share) some experimental feature with another developer, or to submit a bugfix that needs to be reviewed before checking it into the mainline (and of course you can use the same switching/merging techniques with release branches as you do with private branches).

There are other tutorials for using branches out there as well (although some of Evan Weaver's "best practices," like deleting the trunk of the repository, are probably not actually best practices in practice).

1. Decide where to put the private branch in source control.

Some projects might just use the standard branches directory for this. Other projects have a sibling of the trunk called experimental or exp or just ex specifically for these kinds of experimental branches. I'll use exp in this writeup, and I'll put it in a repository named my-project — although it can be anywhere (outside of the trunk or whatever part of the tree from which you're branching, obviously).

So, assuming you already have the my-project repo set up, first create a temporary working copy for exp in a temp directory (checkout with --depth empty to avoid pulling down the whole repo):

$ mkdir /tmp/my-project
$ cd /tmp/my-project
$ svn checkout --depth empty http://svn.example.com/my-project .

Then create the exp folder, and check it in:

$ svn mkdir exp
$ svn commit -m 'created an experimental section for my-project'

2. Create the private branch.

Ideally, you've already got the working copy of the trunk locally, and updated it to the latest revision. In that case, you can just branch from the head revision of the trunk. You use the svn copy command to do this, specifying the source url (the trunk) and the destination url (the new branch root: my-new-branch). This is completely a server-side operation, so it doesn't matter from where you invoke it. Also, copying is very cheap in subversion, so it should execute quickly (if you ignore the usual network latency):

$ cd ~/projects/my-project
$ svn update
$ svn copy ^/my-project/trunk ^/my-project/exp/my-new-branch \
      -m 'branch from trunk at 9269'

In practice, your working copy may be a few days out of date; in that case you should branch from the trunk revision to which you last updated (or update your working copy to the head):

$ svn copy -r 8189 ^/my-project/trunk ^/my-project/exp/my-new-branch \
      -m 'branch from trunk at 8189'

3. Switch your working copy to the private branch.

Now that the branch has been created, you can switch the working copy of the tree you currently have checked out from the trunk to the new branch. If you want to switch your entire working copy, just use the switch command from the working copy root, and specify the url of the branch to which to switch:

$ cd ~/projects/my-project
$ svn switch ^/my-project/exp/my-new-branch

You can also switch just part of the tree, like just one specific component on which you want to work:

$ cd ~/projects/my-project/foo/bar
$ svn switch ^/my-project/exp/my-new-branch/foo/bar

Note that switch is very similar to update — it will update your working copy (at least the files you haven't modified locally) to match the revision of the branch to which you switched. If I just created my-new-branch from the head revision of the trunk, but I haven't updated my working copy in a while, when I switch from the trunk to my-new-branch, it's going to pull down all the files that have been updated since I last sync'd. If your working copy is messy or out of sync, you're probably better off switching just specific projects/components/folders/files within the branch, rather than the whole thing.

4. Commit to the private branch.

You can commit as often as you want to the private branch. Descendants of switched directories are automatically committed to the branch to which their ancestors have been switched. For example, in my local my-project folder, the root of my working copy points to the trunk, but the bar directory is switched to my-second-branch, and a.html is switched to my-first-branch:

projects
  my-project: ^/my-project/trunk
    foo
      bar: ^/my-project/exp/my-second-branch/foo/bar
        baz
          a.html: ^/my-project/exp/my-first-branch/foo/bar/baz/a.html
          b.html

If I commit a.html, it will be committed to my-first-branch; and if I commit b.html, it will be committed to my-second-branch. I haven't actually tried this out, but if I try to commit both at the same time (i.e. by committing from any ancestor folder), I believe each will be committed to the correct branch (a.html to my-first-branch and b.html to my-second-branch) in the same atomic checkin. The same holds for other svn commands, like update, diff, etc.: files will be compared to the branch/revision to which you've switched their nearest ancestor.
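That guess is easy to check against a throwaway local repository. The sketch below (scratch paths and file:// URLs, not from any real project) builds the exact layout shown above, switches a directory and a single file, and commits both changes from the working-copy root:

```shell
#!/bin/sh
# Verify that one commit from the root lands each switched file on its
# own branch. Everything here is a scratch setup, not a real project.
set -e
tmp=$(mktemp -d); cd "$tmp"
svnadmin create repo
URL="file://$tmp/repo"

# Build trunk/foo/bar/baz with the two files, plus an exp area.
svn mkdir -q -m layout "$URL/trunk" "$URL/exp"
svn checkout -q "$URL/trunk" wc
cd wc
mkdir -p foo/bar/baz
echo a > foo/bar/baz/a.html
echo b > foo/bar/baz/b.html
svn add -q foo
svn commit -q -m 'initial files'
svn update -q

# Branch twice, then switch a directory and a single file.
svn copy -q -m b1 "$URL/trunk" "$URL/exp/my-first-branch"
svn copy -q -m b2 "$URL/trunk" "$URL/exp/my-second-branch"
svn switch -q "^/exp/my-second-branch/foo/bar" foo/bar
svn switch -q "^/exp/my-first-branch/foo/bar/baz/a.html" foo/bar/baz/a.html

# One commit from the working-copy root...
echo a2 >> foo/bar/baz/a.html
echo b2 >> foo/bar/baz/b.html
svn commit -q -m 'mixed commit from the root'

# ...and each change landed on its own branch.
svn cat "$URL/exp/my-first-branch/foo/bar/baz/a.html" | grep -q a2
svn cat "$URL/exp/my-second-branch/foo/bar/baz/b.html" | grep -q b2
echo 'both changes landed on their own branches'
```

The two `svn cat` checks at the end read each file straight from its branch URL, so if the mixed commit had gone to the wrong place the script would fail.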

You can always tell to which branch (and revision) you have switched a given file with the svn info command. Among other things, it will print out the URL of the corresponding file in source control, and the revision (of the branch) to which you last updated:

$ cd ~/projects/my-project
$ svn info
...
URL: ...
Revision: 9269
...
$ svn info foo/bar | grep URL

5. Merge updates from the trunk to the private branch.

If you've been working on your branch for a while, you may want to get the latest updates from the trunk to see how they work with your branch (and then continue working on your branch). You use the merge command for this. If you want to merge the entire trunk to your switched branch, it's pretty straightforward. You used to have to specify the revisions from which you originally branched (or the last revision you merged), but newer versions of svn (1.5 and later) have merge tracking and can figure it out for you automatically — so you only have to specify the url of the branch from which to merge:

$ cd ~/projects/my-project
$ svn merge ^/my-project/trunk

You can also use the merge command to merge just specific revisions or specific files from the trunk:

$ svn merge -r 9269:10035 ^/my-project/trunk/foo/bar/baz/a.html foo/bar/baz/a.html

You can always skip this step if you don't care to get the latest from the trunk while you're working — you can always check that your new code works with the trunk when you merge back from the private branch to the trunk (in the next step), anyway.
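If you're curious what svn's merge tracking actually records, it lives in the svn:mergeinfo property on the branch root. The sketch below sets up a scratch repository (all paths are throwaway names, not from a real project), merges the trunk into a branch, and then inspects that property:

```shell
#!/bin/sh
# Sketch: after merging the trunk into a branch, svn records the merged
# revisions in the svn:mergeinfo property on the branch root.
set -e
tmp=$(mktemp -d); cd "$tmp"
svnadmin create repo
URL="file://$tmp/repo"
svn mkdir -q -m layout "$URL/trunk" "$URL/exp"
svn copy -q -m branch "$URL/trunk" "$URL/exp/my-new-branch"

# Land a change on the trunk after branching.
svn checkout -q "$URL/trunk" wc-trunk
echo change > wc-trunk/c.txt
svn add -q wc-trunk/c.txt
svn commit -q -m 'trunk change' wc-trunk

# Merge the trunk into the branch, then inspect the bookkeeping.
svn checkout -q "$URL/exp/my-new-branch" wc-branch
cd wc-branch
svn merge -q "^/trunk"
svn propget svn:mergeinfo .    # prints something like /trunk:2-3
```

This is the bookkeeping that lets newer svn versions pick the merge revisions for you, and what --reintegrate consults in step 6.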

6. Merge the private branch back into the trunk.

Finally, when you're ready to merge back to the trunk, make sure you've committed everything you're going to commit to the private branch. Then switch your working copy back to the trunk:

$ cd ~/projects/my-project
$ svn switch ^/my-project/trunk

If you've switched individual subdirectories in your working copy, you have to switch them back individually:

$ svn switch ^/my-project/trunk/foo/bar foo/bar
$ svn switch ^/my-project/trunk/foo/bar/baz/a.html foo/bar/baz/a.html

Now merge in the changes from your private branch:

$ svn merge ^/my-project/exp/my-new-branch

If you've done any merging from the trunk to the private branch (step 5), you should also use the --reintegrate flag, so svn knows it can avoid merging back changes from the branch to the trunk that it had previously merged from the trunk to the branch. Finally, commit the merged changes to the trunk:

$ svn commit -m 'merged my-new-branch into trunk'
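Put together, a reintegrate run of step 6 looks something like the sketch below, again against a scratch repository (throwaway names, file:// URLs). Note that in svn 1.8 and later a plain merge of the branch figures this out by itself, but --reintegrate is still accepted:

```shell
#!/bin/sh
# Sketch of step 6 with --reintegrate, against a scratch repository.
set -e
tmp=$(mktemp -d); cd "$tmp"
svnadmin create repo
URL="file://$tmp/repo"
svn mkdir -q -m layout "$URL/trunk" "$URL/exp"
svn copy -q -m branch "$URL/trunk" "$URL/exp/my-new-branch"

# Work happens on the branch...
svn checkout -q "$URL/exp/my-new-branch" wc-branch
echo feature > wc-branch/feature.txt
svn add -q wc-branch/feature.txt
svn commit -q -m 'branch work' wc-branch

# ...then the branch is reintegrated into the trunk.
svn checkout -q "$URL/trunk" wc-trunk
cd wc-trunk
svn merge --reintegrate -q "^/exp/my-new-branch"
svn commit -q -m 'merged my-new-branch into trunk'
```

After the final commit, the branch's feature.txt is part of the trunk.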

7. Delete the private branch.

You don't have to delete the branch, but once svn has merged from the trunk to the branch and back again, it may have problems tracking further merges between the two branches. You can continue developing with the private branch if you really want to — you'll just have to be much more careful when you merge. The easiest thing is simply to create a new private branch.

If you do delete the branch and ever need, for some reason, to get back to it, it will of course still be there — you can just switch back to it with the svn switch command (or check it out as a separate working copy), as long as you know the revision range in which it existed (which you can always dig out of the logs):

$ svn delete ^/my-project/exp/my-new-branch -m 'deleted my-new-branch'
$ mkdir ~/projects/my-project-old
$ cd ~/projects/my-project-old
$ svn checkout http://svn.example.com/my-project/exp/my-new-branch@10036 .
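The delete-then-resurrect dance can also be rehearsed end-to-end against a scratch repository. The sketch below (throwaway names, revision numbers deterministic for this run only) deletes a branch and then checks it out again at the peg revision where it still existed:

```shell
#!/bin/sh
# Sketch: delete a branch, then check it out again at a peg revision.
set -e
tmp=$(mktemp -d); cd "$tmp"
svnadmin create repo
URL="file://$tmp/repo"
svn mkdir -q -m layout "$URL/trunk" "$URL/exp"                      # r1
svn copy -q -m branch "$URL/trunk" "$URL/exp/my-new-branch"        # r2
svn delete -q -m 'deleted my-new-branch' "$URL/exp/my-new-branch"  # r3

# HEAD no longer has the branch, but revision 2 still does:
svn checkout -q "$URL/exp/my-new-branch@2" wc-old
svn info wc-old | grep '^Revision'
```

The @2 peg syntax is what makes this work: it tells svn to resolve the URL as of revision 2, before the delete, instead of at HEAD (where the path no longer exists).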