Sunday, July 10, 2016

JPGPJ: A new Java GPG Library

The Bouncy Castle PGP implementation is the "standard" GPG/PGP library in Java, and it's quite solid — but it's cumbersome to use directly, since it pretty much forces you to learn and use the raw primitives of the OpenPGP spec (RFC 4880). Also, while there is some helpful example code in the Bouncy Castle examples package (and snippets from the same examples have been copied and pasted into a bunch of Stack Overflow answers), the example code is (appropriately?) cryptic, and covers only a limited subset of functionality in each example.

Encrypting with JPGPJ

So I wrote a small library, JPGPJ, to wrap the Bouncy Castle PGP implementation with a simple API for encrypting and decrypting files. It makes interoperating with the standard gpg command-line client (GnuPG) a breeze. This is all you need to do to encrypt a file with Bob's public key, and sign it with Alice's private key:

new Encryptor(
    new Key(new File("path/to/my/keys/alice-sec.gpg"), "password123"),
    new Key(new File("path/to/my/keys/bob-pub.gpg"))
).encrypt(
    new File("path/to/plaintext.txt"),
    new File("path/to/ciphertext.txt.gpg")
);

The above Java code does the same thing as the following gpg command (where Alice has an `alice` secret key and a `bob` public key on her keyring, and enters "password123" when prompted for her passphrase):

gpg --sign --encrypt --local-user alice --recipient alice --recipient bob \
    --output path/to/ciphertext.txt.gpg path/to/plaintext.txt

JPGPJ is set up to do the right thing by default — sign and encrypt — but if you just want to encrypt without signing, that's easy, too — just set the encryptor's signingAlgorithm property to Unsigned:

Encryptor encryptor = new Encryptor(
    new Key(new File("path/to/my/keys/bob-pub.gpg"))
);
encryptor.setSigningAlgorithm(HashingAlgorithm.Unsigned);
encryptor.encrypt(
    new File("path/to/plaintext.txt"),
    new File("path/to/ciphertext.txt.gpg")
);

To encode with ASCII Armor (ie produce Base64-encoded content, instead of binary content), just turn on the encryptor's asciiArmored flag:

Encryptor encryptor = new Encryptor(
    new Key(new File("path/to/my/keys/bob-pub.gpg"))
);
encryptor.setSigningAlgorithm(HashingAlgorithm.Unsigned);
encryptor.setAsciiArmored(true);
encryptor.encrypt(
    new File("path/to/plaintext.txt"),
    new File("path/to/ciphertext.txt.asc")
);

Decrypting with JPGPJ

Decrypting is just as easy. JPGPJ handles signed or unsigned, encrypted or unencrypted, compressed or uncompressed, ascii-armored or binary messages all the same way. Its default setting is to require messages to be signed by a known key; so for example, to decrypt a message signed by Alice's private key and encrypted with Bob's public key (requiring Alice's public key to verify and Bob's private key to decrypt), this is all the Java you need:

new Decryptor(
    new Key(new File("path/to/my/keys/alice-pub.gpg")),
    new Key(new File("path/to/my/keys/bob-sec.gpg"), "b0bru1z!")
).decrypt(
    new File("path/to/ciphertext.txt.gpg"),
    new File("path/back-to/plaintext.txt")
);

The above Java code does the same thing as the following gpg command (where Bob has a `bob` secret key and an `alice` public key on his keyring, and enters "b0bru1z!" when prompted for his passphrase):

gpg --decrypt --output path/back-to/plaintext.txt path/to/ciphertext.txt.gpg

If the message can't be verified by any known key (that is, any key with which the decryptor had been configured), JPGPJ will raise a VerificationException. If the message can't be decrypted by any known private key (that is, any private key with which the decryptor had been configured), JPGPJ will raise a DecryptionException.
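
For example, catching those exceptions around a decrypt call might look like this (just a sketch; I'm assuming that VerificationException and DecryptionException live in the same org.c02e.jpgpj package as Decryptor and Key, so check the JPGPJ docs for the exact exception hierarchy):

Decryptor decryptor = new Decryptor(
    new Key(new File("path/to/my/keys/alice-pub.gpg")),
    new Key(new File("path/to/my/keys/bob-sec.gpg"), "b0bru1z!")
);
try {
    decryptor.decrypt(
        new File("path/to/ciphertext.txt.gpg"),
        new File("path/back-to/plaintext.txt")
    );
} catch (VerificationException e) {
    // the message wasn't signed by any key the decryptor was configured with
    System.err.println("can't verify signature: " + e.getMessage());
} catch (DecryptionException e) {
    // the message wasn't encrypted to any private key the decryptor was configured with
    System.err.println("can't decrypt: " + e.getMessage());
}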

To ignore signatures (in other words, decrypt a message successfully regardless of whether it was signed or not), simply turn off the decryptor's verificationRequired flag:

Decryptor decryptor = new Decryptor(
    new Key(new File("path/to/my/keys/bob-sec.gpg"), "b0bru1z!")
);
decryptor.setVerificationRequired(false);
decryptor.decrypt(
    new File("path/to/ciphertext.txt.gpg"),
    new File("path/back-to/plaintext.txt")
);

Keys in JPGPJ

The key data used by JPGPJ is simply what you get when you export a key from GnuPG, like with the following gpg command for a public key:

gpg --export alice > path/to/my/keys/alice-pub.gpg

Or this gpg command to export a private key (which exports both the public and private parts of the key, encrypted with the same password that the key has on your GnuPG keyring):

gpg --export-secret-keys bob > path/to/my/keys/bob-sec.gpg

If you encode keys with ASCII Armor when you export them (via the GnuPG --armor flag), you can load them the same way in JPGPJ; and you can also embed ascii-armored keys as strings in your source code, if you find that more convenient than using external files (see the Key Rings wiki page for more details on loading and using keys in JPGPJ).
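
For example, you could export Bob's public key with gpg --armor --export bob > bob-pub.asc, and then paste the armored text into your code as a string. Here's a rough sketch (the key content below is just a placeholder, and I'm assuming a Key constructor that accepts armored text as a String, per the wiki):

Key bobPublicKey = new Key(
    "-----BEGIN PGP PUBLIC KEY BLOCK-----\n" +
    "Version: GnuPG v1\n" +
    "\n" +
    // paste the real armored key lines here in place of this placeholder
    "mQENBF...\n" +
    "-----END PGP PUBLIC KEY BLOCK-----\n"
);

Encryptor encryptor = new Encryptor(bobPublicKey);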

Sunday, May 22, 2016

Powerline Subversion Status

For a while I've been using Jasper N. Brouwer's Powerline Gitstatus plugin for the nifty Powerline shell prompt, and it's become a pretty much indispensable part of my Git workflow. I haven't been able to find something comparable for Subversion, so I created something myself: Powerline SVN Status. Here's a screenshot of it in action:

Powerline SVN Status Screenshot

I don't use Subversion all that much any more — mainly for a few older projects — but when I do, having a plugin like this makes my life easier, reminding me when I have some changes I need to check in, or if I have some untracked files I need to add. It's also really helpful to have a branch indicator when you've switched to a branch, reminding you that you're not looking at the trunk anymore.

This plugin is just a simple Python class that calls svn info and svn status when in a Subversion working directory. With Powerline all set up, you can simply install it via Pip:

pip install powerline-svnstatus

And then add the following block to your Powerline segment configuration to activate it (I put this in my ~/.config/powerline/themes/shell/default_leftonly.json file):

{
    "function": "powerline_svnstatus.svnstatus",
    "priority": 40
},
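
If you run the Powerline daemon, restart it afterwards so the new segment gets picked up:

powerline-daemon --replace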

Saturday, May 14, 2016

Ubuntu 16.04 with Java 7 Timezone Data

As I found when upgrading to Ubuntu 16.04, Java 7 is no longer in the main Ubuntu repository — you have to install it via the OpenJDK PPA. That works nicely, but unfortunately this PPA doesn't include any timezone data.

In previous releases of Ubuntu, Java 6 and 7 timezone data were included via the tzdata-java package; but this package isn't available for Ubuntu 16.04. So I created a new tzdata-java PPA just for Ubuntu 16.04 (Xenial). You can install it like this:

sudo apt-add-repository ppa:justinludwig/tzdata
sudo apt-get install tzdata-java
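
Once that's installed, a quick way to verify the JVM actually sees the timezone data is a throwaway check like the following (without tzdata-java, Java 7's TimeZone.getTimeZone() silently falls back to GMT for zone IDs it doesn't know):

import java.util.TimeZone;

public class TzCheck {
    public static void main(String[] args) {
        TimeZone tz = TimeZone.getTimeZone("America/Los_Angeles");
        // prints "America/Los_Angeles -28800000" when the timezone data is available,
        // or "GMT 0" when the JVM has fallen back to GMT
        System.out.println(tz.getID() + " " + tz.getRawOffset());
    }
}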

To update my previous blog post: as a set of Ansible tasks, installing Java 7 on Ubuntu 16.04 now just works like this:

- name: register java 7 ppas
  become: yes
  apt_repository: repo={{ item }}
  with_items:
  - 'ppa:openjdk-r/ppa'
  - 'ppa:justinludwig/tzdata'

- name: install java 7 packages
  become: yes
  apt: pkg={{ item }}
  with_items:
  - openjdk-7-jdk
  - tzdata-java

Sunday, May 1, 2016

Xenial Ansible

I started building out some Ubuntu 16.04 (Xenial Xerus) servers this weekend with Ansible, and was impressed by how smoothly it went. The only major issue I encountered was that Ansible requires Python 2.x, whereas Ubuntu 16.04 ships Python 3.5 by default. Fortunately, it's not too hard to work around; here's how I fixed that — and a couple of other issues specific to the servers I was building out:

Python 2

Since Ansible doesn't work with Python 3, and that's what Ubuntu 16.04 provides by default, this is the error I got when I tried running Ansible against a newly-booted server:

/usr/bin/python: not found

So I had to make this the very first Ansible play (bootstrapping the ability of Ansible to use Python 2 for the rest of its tasks, as well as for its other record keeping — like gathering facts about the server):

- name: bootstrap python 2
  hosts: all # or whatever group of hosts you're building out
  gather_facts: no
  tasks:
  - raw: sudo apt-get update -qq && sudo apt-get install -qq python2.7

And in the inventory variables (or group variables) for the server, I had to add this line (directing it to use Python 2 instead of the server's default Python):

ansible_python_interpreter: /usr/bin/python2.7
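
For example, in an INI-style inventory it might look like this (the host name is just a placeholder):

[xenial-servers]
server1.example.com ansible_python_interpreter=/usr/bin/python2.7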

Aptitude

The next hiccup I ran into was using the Ansible apt module with the upgrade=full option. This option is implemented by using the aptitude program — which Ubuntu no longer installs by default. I was getting this error trying to use that option:

Could not find aptitude. Please ensure it is installed.

So I just tweaked my playbook to install the aptitude package first before running apt: upgrade=full:

- name: install apt requirements
  become: yes
  apt: pkg=aptitude

- name: update pre-installed packages
  become: yes
  apt: upgrade=full update_cache=yes

Mount with nobootwait

Then I started running into some minor issues that were completely unrelated to Ansible — simply changes Ubuntu had picked up between 15.10 and 16.04. The first of these was the nobootwait option for mountall (eg in /etc/fstab mountpoints). This option seems to be no longer supported — the server hung after rebooting, with this message in the syslog:

Unrecognized mount option "nobootwait" or missing value

Maybe this is just an issue with AWS EC2 instance-store volumes, but I had to change the /etc/fstab definition for the server's instance-store volume from this:

/dev/xvdb /mnt auto defaults,noatime,data=writeback,nobootwait,comment=cloudconfig 0 2

To this:

/dev/xvdb /mnt auto defaults,noatime,data=writeback,comment=cloudconfig 0 2

Java 7

The Ubuntu 16.04 repo no longer includes Java 6 or 7 — only Java 8 and 9. I got this error message trying to install Java 7:

No package matching 'openjdk-7-jdk' is available

So I first had to add a PPA for OpenJDK 7, and then could install it:

- name: register java 7 ppa
  become: yes
  apt_repository: repo=ppa:openjdk-r/ppa

- name: install java 7
  become: yes
  apt: pkg=openjdk-7-jdk

But the PPA doesn't include the timezone database, so the time formatting in our app was restricted to GMT. So I had to "borrow" the timezone data from Ubuntu 14.04:

- name: download tzdata-java
  get_url:
    url: http://mirrors.kernel.org/ubuntu/pool/main/t/tzdata/tzdata-java_2016d-0ubuntu0.14.04_all.deb
    dest: ~/tzdata-java.deb
    checksum: sha256:5131aa5219739ac58c00e18e8c9d8c5d6c63fc87236c9b5f314f7d06b46b79fb

- name: install tzdata-java
  become: yes
  command: dpkg --ignore-depends=tzdata -i tzdata-java.deb

That's probably going to be broken in a month or two after the next tzdata update, but it's good enough for now. Later I'll put together some Ansible tasks to build the data from source (although obviously the long-term solution is to upgrade to Java 8).

Update 5/14/2016: I created a Xenial tzdata-java package to simplify the installation of Java 7 timezone data.

MySQL root

The final issue I hit was with the MySQL root user. Unlike previous versions of Ubuntu, which by default would come with 3 or 4 MySQL root users (covering all the different variations for naming your local host), all with empty passwords, Ubuntu 16.04 comes with just a single root user — with which you can log in only via MySQL's auth_socket plugin. So I was getting this error trying to log in to MySQL as root the old way:

unable to connect to database, check login_user and login_password are correct or /home/me/.my.cnf has the credentials

The new way is simply to log in to MySQL using the system's root user (ie sudo mysql). In Ansible tasks (such as those using the mysql_db module), this just means using the become (aka sudo) directive, and omitting the login_user and login_password options:

- name: create new database
  become: yes
  mysql_db: name=thedata
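
If you want to see the new setup for yourself, you can check the plugin column of the mysql.user table (again via sudo, since that's how root logs in now):

sudo mysql -e "SELECT user, host, plugin FROM mysql.user"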

Sunday, March 6, 2016

Grails 2.5.4 Controller Action Arguments

After upgrading to Grails 2.5.4, one of our projects failed with an exception in the 'canonicalization' phase of the build, due to a NullPointerException in Grails' ControllerActionTransformer:

Caused by: java.lang.NullPointerException
    at org.codehaus.groovy.grails.compiler.web.ControllerActionTransformer.getCodeToInitializeCommandObjects(ControllerActionTransformer.java:424)
    at org.codehaus.groovy.grails.compiler.web.ControllerActionTransformer.addMethodToInvokeClosure(ControllerActionTransformer.java:493)
    at org.codehaus.groovy.grails.compiler.web.ControllerActionTransformer.processClosures(ControllerActionTransformer.java:479)
    at org.codehaus.groovy.grails.compiler.web.ControllerActionTransformer.performInjectionOnAnnotatedClass(ControllerActionTransformer.java:206)
    at org.codehaus.groovy.grails.compiler.web.ControllerActionTransformer.performInjection(ControllerActionTransformer.java:197)
    at org.codehaus.groovy.grails.compiler.injection.GrailsAwareInjectionOperation.call(GrailsAwareInjectionOperation.java:154)
    at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1055)
    ... 517 more
| Error Fatal error during compilation org.apache.tools.ant.BuildException: BUG! exception in phase 'canonicalization' in source unit '/home/justin/projects/apps/grails-app/controllers/com/pitchstone/apps/controller/LandingController.groovy' unexpected NullpointerException

Turns out that was because one of our controller actions, which used the old-style "closure" action definitions, had an empty argument list, like this:

def someAction = { ->
    forward action: 'anotherAction'
}

I think once upon a time that action had a command object, and when the action was updated to just forward somewhere else, the command object was deleted from the action's argument list, leaving it with no arguments (instead of the usual single implicit argument, it). At one time it probably looked something like this:

def someAction = { SomeCommand cmd ->
    [cmd: cmd]
}

Previous versions of Grails were fine with closure-style actions that had no arguments, but not Grails 2.5.4. Fortunately, simply removing the errant arrow (->) fixed the exception (and everything else with Grails 2.5.4 has gone smoothly):

def someAction = {
    forward action: 'anotherAction'
}

Saturday, February 27, 2016

Better Git Diff Word Colorization

I recently spent some time dialing in the colorization of my git diffs. Modern versions of the git command-line client will colorize diffs by default — but they still don't highlight the specific changed words or characters within the changed lines, like some of the fancier diff tools do (vimdiff, GitHub, etc).

The diff-highlight script, a Perl script from the "contrib" subtree of the git source repo, highlights the changed characters nicely. Several other blog posts (such as at The Unix Toolbox and at Viget) do a good job of describing how to install this script, and what it gets you. Here's the TLDR:

Installing Diff-Highlight

Download the diff-highlight script, make it executable, and put it on your path:

wget https://raw.githubusercontent.com/git/git/master/contrib/diff-highlight/diff-highlight
chmod +x diff-highlight
sudo mv diff-highlight /usr/local/bin/.

Update your git configuration to use it, adding the following to your ~/.gitconfig file:

[pager]
    diff = diff-highlight | less
    log = diff-highlight | less
    show = diff-highlight | less

By default now, git diff will highlight the changed characters within modified lines by reversing their background and foreground colors (ie colorizing the removed characters with a red background, and the added characters with a green background). That's okay, but with the light color-scheme I like to use for my terminals, it still looks a little... ugly.

The Pretty Stick

Adding the following settings to your ~/.gitconfig will make it look much prettier:

[color "diff-highlight"]
    oldNormal = red
    oldHighlight = 88 224
    newNormal = green
    newHighlight = 28 193

The numbers are XTerm 256 color numbers (foreground first, and then the optional background color second). For reference, Wikipedia has a nice XTerm color chart; and there's a really cool VIM plugin you can use to explore custom xterm colors, if you have a different aesthetic.
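
If you just want a quick look at the whole 256-color palette right in your terminal, a shell loop like this will print each color number in its own color:

for i in $(seq 0 255); do printf '\e[38;5;%dm%3d ' "$i" "$i"; done; printf '\e[0m\n'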

Let's Encrypt DNS Validation with Lego

Let's Encrypt recently enabled support for DNS challenges, but only a few clients support it yet. Lego is one of these clients, and already features integration with a number of popular DNS management APIs, including AWS Route 53, CloudFlare, DigitalOcean, and DNSimple. Lego also makes it really easy to use DNS challenges even without a supported API — if you run it in "manual" DNS challenge mode, it will print out the TXT record you need to add to your zone file, wait for you to add it, and then continue on to complete the challenge.

Lego is a neat Go project that can also be used as a library by other Go projects. However, it's quite easy to simply install and run Lego as a command-line tool. These are the steps I took to install it from scratch on Ubuntu 15.10, and use it with Route 53 to generate a SAN SSL certificate (a single certificate covering multiple domain names):

0. Install Go

If you don't already have Go installed on your system, you need to install it first. I don't know exactly what version Lego requires — but something newer than 1.2.1 (the version packaged in Ubuntu 14.04). It does work with version 1.5.1, the version packaged in Ubuntu 15.10, so if you have Ubuntu 15.10 or newer, you can simply install Go via apt-get:

sudo apt-get install golang

Otherwise, the Go Version Manager (GVM) provides a convenient command-line installer, allowing you to install multiple different versions of Go on the same machine, and switch between them as necessary.

1. Install Lego

You can install Lego with the go get command — but if you're not a Go developer, you probably don't have your GOPATH environment variable set, and you need to have it set first. I'd suggest just creating a lego directory somewhere convenient (like in your home folder), and using it for your GOPATH:

mkdir $HOME/lego
export GOPATH=$HOME/lego
go get -u github.com/xenolf/lego

The above will create a lego directory in your home folder, and install the lego executable in its bin subdirectory.

2. Run Lego

Now you can run the lego executable. Specify each domain you want the cert to cover via a separate --domains argument (the example below covers mail.example.com, www.example.com, and example.com). Specify your email address (for renewal reminders) via the --email flag. To use Route 53 DNS validation, include the --dns=route53 flag (for "manual" DNS validation, where you create the challenge DNS records manually, specify --dns=manual instead).

If you use Route 53, first specify your API key and secret as environment variables, like this:

export AWS_ACCESS_KEY_ID=ABC123
export AWS_SECRET_ACCESS_KEY=ABC+def/123

Alternatively, Lego supports the same credential/configuration files that the standard AWS command-line tools support (and uses the Golang Amazon Library specifically, so check out its aws.GetAuth function for the details of exactly what environment variables and configuration files are supported, and in what order they're checked). So if you've got those files set up with your AWS API credentials already, you don't need to mess around with any additional environment variables.
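
For reference, the standard shared credentials file that the AWS command-line tools use looks something like this (with placeholder values):

# ~/.aws/credentials
[default]
aws_access_key_id = ABC123
aws_secret_access_key = ABC+def/123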

I'd also suggest explicitly specifying the path to the certificates/account info that Lego will generate, via the --path flag; if you created a lego directory in your home folder, just use that:

$HOME/lego/bin/lego \
    --accept-tos \
    --dns=route53 \
    --path=$HOME/lego \
    --email=me@example.com \
    --domains=mail.example.com \
    --domains=www.example.com \
    --domains=example.com \
    run

3. Check Out the Results

Once run successfully, Lego will output two sets of files into the directory you specified with the --path flag: your Let's Encrypt account info in the accounts subdirectory, and your new SSL cert in the certificates directory:

/home/me/lego/accounts/acme-v01.api.letsencrypt.org/me@example.com/account.json
/home/me/lego/accounts/acme-v01.api.letsencrypt.org/me@example.com/keys/me@example.com.key
/home/me/lego/certificates/mail.example.com.crt
/home/me/lego/certificates/mail.example.com.json
/home/me/lego/certificates/mail.example.com.key

In the accounts hierarchy, the account.json file contains your Let's Encrypt account info (which you can use later to renew or revoke the certificate). The me@example.com.key file contains the secret key (ie password) for that account.

In the certificates subdirectory, your new certificate file will be named with the first domain you specified (in the above example, mail.example.com), with a crt extension. This file also includes the intermediate certificate for Let's Encrypt, so it's equivalent to the "fullchain" file that the reference Let's Encrypt client generates (the so-called "all-in-one" file that you'd use with Apache's SSLCertificateFile directive or Nginx's ssl_certificate directive). The corresponding secret key for the certificate will have the same name, but with a key extension. The file with the json extension contains some (non-secret) metadata for the cert that Let's Encrypt will need later if you renew or revoke it.

If you have OpenSSL installed, you can check out the cert details with this command (which will print out the details to stdout):

openssl x509 -text -noout -in $HOME/lego/certificates/mail.example.com.crt

The different domain names the cert covers will be listed in the X509v3 Subject Alternative Name field of the output.
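
In that output, the SAN field will look something like this:

X509v3 Subject Alternative Name:
    DNS:mail.example.com, DNS:www.example.com, DNS:example.com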

Saturday, January 16, 2016

Filtering Apache Logs by HTTP Status Code

Apparently there was once a Stack Overflow question of the same name, but it was deleted. So when I wanted to add a log file just for redirects to an Apache httpd server, I had to dig a little through the mod_log_config and expressions documentation.

Starting in version 2.4, Apache supports an expr argument to the CustomLog directive to specify an expression that must evaluate to true in order for the request to be logged to the custom log file. So to capture only redirects (responses with a status code in the 300s) for GET requests, I set up the following directive:

CustomLog /var/log/apache2/redirect.log redirect "expr=%{REQUEST_STATUS} -ge 300 && %{REQUEST_STATUS} -lt 400 && %{REQUEST_METHOD} == 'GET'"

The second argument to the directive — redirect — is just the name of the custom log format I defined. It's not a special keyword or anything — it could have been foo or bar or whatever. This is the format I ended up using, capturing the host (%V:%p) and path (%r) requested, the location (%{Location}o) to which the server redirected the request, and the referrer (%{Referer}i) of the original request:

LogFormat "%V:%p %a %t \"%r\" %>s Location: %{Location}o Referer: %{Referer}i" redirect