Sunday, May 20, 2018

OpenDKIM Key Retrieval Failed

I set up OpenDKIM on my mailserver years ago, and while I got it signing outgoing mail just fine, I could never get it to verify incoming mail. Whenever I'd receive a message signed with DKIM, I'd see an error message like this in my mailserver logs:

May  4 01:20:20 mail opendkim[24874]: 7EE5982132: key retrieval failed (s=20161025, d=gmail.com): '20161025._domainkey.gmail.com' query timed out

When I tried dig on the DNS record listed in the logs, however, I was able to retrieve it just fine:

dig TXT 20161025._domainkey.gmail.com

Recently I set aside some time to "dig" into it further. I found I could use the opendkim-testkey command to at least reproduce the issue (instead of having to keep sending test emails from other accounts to myself). For example, the following command tries to retrieve the DKIM key from the 20161025._domainkey.gmail.com DNS TXT record (20161025 is the DKIM selector and gmail.com is the signing domain):

opendkim-testkey -s 20161025 -d gmail.com

This command gave me the same "query timed out" error message that I saw in my logs. Through a lot of trial and error, I figured out that I could avoid the error by setting the Nameservers property in my /etc/opendkim.conf file to an external DNS server (any external one will do), and restarting the OpenDKIM daemon. I've been running my mailserver on an Ubuntu EC2 instance, and apparently OpenDKIM does not like something about the combination of the Ubuntu DNS resolver and the internal EC2 DNS servers.

So, I added this line to my /etc/opendkim.conf and restarted the OpenDKIM daemon, and now I no longer see the "key retrieval failed" error in my logs (instead, OpenDKIM adds a nice Authentication-Results header to my incoming mail):

Nameservers 1.1.1.1
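To confirm the fix after editing the config, you can restart the daemon and re-run the same opendkim-testkey check (assuming a systemd-based Ubuntu; the -vvv flag just asks opendkim-testkey for verbose output):

sudo systemctl restart opendkim
opendkim-testkey -s 20161025 -d gmail.com -vvv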

1.1.1.1 is one of Cloudflare's public DNS resolvers, but any external DNS server should work. One thing to keep in mind when using external DNS servers is that if your network has a stateless firewall, you need to allow inbound UDP traffic on the "ephemeral" port range, so the DNS responses can get back in. If you're using EC2's Network ACLs (and not just the defaults), this means adding a rule like the following to the ACL for the subnet in which your mailserver lives (32768-61000 is Ubuntu's ephemeral port range):

Rule #: [any number lower than your DENY rules]
Type: Custom UDP Rule
Protocol: UDP (17)
Port Range: 32768-61000
Source: 1.1.1.1/32
Allow/Deny: ALLOW
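If you manage your Network ACLs with the AWS CLI instead of the console, the same rule can be added with something like the following (a sketch only; the ACL ID and rule number are placeholders to substitute with your own values):

aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress \
    --rule-number 90 \
    --protocol 17 \
    --port-range From=32768,To=61000 \
    --cidr-block 1.1.1.1/32 \
    --rule-action allow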

Sunday, April 30, 2017

Jenkins Install-Plugin Remoting Deprecated

Installing and setting up Jenkins through an automated process can be tricky. The new, safer CLI (Command Line Interface) implemented for Jenkins 2.54 adds another twist to the process: the "remoting" mechanism for using the CLI has been deprecated and turned off by default — but it's still the only way to install plugins via the CLI.

So now, to install plugins through automation, you first have to turn remoting back on. You can do that by changing the enabled element in the jenkins.CLI.xml config file (located in the root of your Jenkins home directory) to true, like so:

<?xml version='1.0' encoding='UTF-8'?>
<jenkins.CLI>
  <enabled>true</enabled>
</jenkins.CLI>

And then restart Jenkins. Now you can use the remoting protocol with the CLI — but it's no longer the default protocol, so you have to specify it explicitly via the -remoting flag, like so (for example to install the ant plugin):

java -jar jenkins-cli.jar -remoting -s http://localhost:8080 \
    install-plugin ant \
    --username admin \
    --password-file secrets/initialAdminPassword

If you don't enable remoting, or don't specify the -remoting flag, you'll get an error like this from the CLI:

ERROR: Bad Credentials. Search the server log for 18058afb-86ed-4cc8-856f-b128918cbe8b for more details.

And you'll see this in the Jenkins server log:

INFO: CLI login attempt failed: 18058afb-86ed-4cc8-856f-b128918cbe8b
org.acegisecurity.BadCredentialsException: Failed to read secrets/initialAdminPassword; nested exception is hudson.AbortException: This command is requesting the deprecated -remoting mode. See https://jenkins.io/redirect/cli-command-requires-channel
        at hudson.security.AbstractPasswordBasedSecurityRealm$1.authenticate(AbstractPasswordBasedSecurityRealm.java:74)
        at hudson.cli.CLICommand.main(CLICommand.java:268)
        at hudson.cli.CLIAction$PlainCliEndpointResponse$1.run(CLIAction.java:221)

Once you've got all your plugins installed, you'll probably want to go back and disable remoting (by changing the enabled element in the jenkins.CLI.xml config file back to false and restarting Jenkins; or manually, via the "Enable CLI over Remoting" checkbox on Jenkins' "Manage Jenkins > Configure Global Security" page).
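If you're scripting the whole setup, that re-disable step can be automated, too; for example, a rough sketch like this (assuming $JENKINS_HOME points at your Jenkins home directory, and that Jenkins runs as a systemd service named jenkins):

sudo sed -i 's|<enabled>true</enabled>|<enabled>false</enabled>|' "$JENKINS_HOME/jenkins.CLI.xml"
sudo systemctl restart jenkins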

Sunday, July 10, 2016

JPGPJ: A new Java GPG Library

The Bouncy Castle PGP implementation is the "standard" GPG/PGP library in Java, and it's quite solid — but it's cumbersome to use directly, since it pretty much forces you to learn and use the raw primitives of the OpenPGP spec (RFC 4880). Also, while there is some helpful example code in the Bouncy Castle examples package (and snippets from the same examples have been copied and pasted into a bunch of Stack Overflow answers), the example code is (appropriately?) cryptic, and covers only a limited subset of functionality in each example.

Encrypting with JPGPJ

So I wrote a small library, JPGPJ, to wrap the Bouncy Castle PGP implementation with a simple API for encrypting and decrypting files. It makes interoperating with the standard gpg command-line client (GnuPG) a breeze. This is all you need to do to encrypt a file with Bob's public key, and sign it with Alice's private key:

new Encryptor(
    new Key(new File("path/to/my/keys/alice-sec.gpg"), "password123"),
    new Key(new File("path/to/my/keys/bob-pub.gpg"))
).encrypt(
    new File("path/to/plaintext.txt"),
    new File("path/to/ciphertext.txt.gpg")
);

The above Java code does the same thing as the following gpg command (where Alice has an `alice` secret key and a `bob` public key on her keyring, and enters "password123" when prompted for her passphrase):

gpg --sign --encrypt --local-user alice --recipient alice --recipient bob \
    --output path/to/ciphertext.txt.gpg path/to/plaintext.txt

JPGPJ is set up to do the right thing by default — sign and encrypt — but if you just want to encrypt without signing, that's easy, too: just set the encryptor's signingAlgorithm property to Unsigned:

Encryptor encryptor = new Encryptor(
    new Key(new File("path/to/my/keys/bob-pub.gpg"))
);
encryptor.setSigningAlgorithm(HashingAlgorithm.Unsigned);
encryptor.encrypt(
    new File("path/to/plaintext.txt"),
    new File("path/to/ciphertext.txt.gpg")
);

To encode with ASCII Armor (ie produce Base64-encoded content, instead of binary content), just turn on the encryptor's asciiArmored flag:

Encryptor encryptor = new Encryptor(
    new Key(new File("path/to/my/keys/bob-pub.gpg"))
);
encryptor.setSigningAlgorithm(HashingAlgorithm.Unsigned);
encryptor.setAsciiArmored(true);
encryptor.encrypt(
    new File("path/to/plaintext.txt"),
    new File("path/to/ciphertext.txt.asc")
);

Decrypting with JPGPJ

Decrypting is just as easy. JPGPJ handles signed or unsigned, encrypted or unencrypted, compressed or uncompressed, ascii-armored or binary messages all the same way. Its default setting is to require messages to be signed by a known key; so for example, to decrypt a message signed by Alice's private key and encrypted with Bob's public key (requiring Alice's public key to verify and Bob's private key to decrypt), this is all the Java you need:

new Decryptor(
    new Key(new File("path/to/my/keys/alice-pub.gpg")),
    new Key(new File("path/to/my/keys/bob-sec.gpg"), "b0bru1z!")
).decrypt(
    new File("path/to/ciphertext.txt.gpg"),
    new File("path/back-to/plaintext.txt")
);

The above Java code does the same thing as the following gpg command (where Bob has a `bob` secret key and an `alice` public key on his keyring, and enters "b0bru1z!" when prompted for his passphrase):

gpg --decrypt --output path/back-to/plaintext.txt path/to/ciphertext.txt.gpg

If the message can't be verified by any known key (that is, any key with which the decryptor had been configured), JPGPJ will raise a VerificationException. If the message can't be decrypted by any known private key (that is, any private key with which the decryptor had been configured), JPGPJ will raise a DecryptionException.
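For example, here's a minimal sketch of handling those two exceptions (assuming the classes are imported from the library's org.c02e.jpgpj package):

// assumes: import java.io.File; plus Decryptor, Key, VerificationException,
// and DecryptionException imported from org.c02e.jpgpj
Decryptor decryptor = new Decryptor(
    new Key(new File("path/to/my/keys/alice-pub.gpg")),
    new Key(new File("path/to/my/keys/bob-sec.gpg"), "b0bru1z!")
);
try {
    decryptor.decrypt(
        new File("path/to/ciphertext.txt.gpg"),
        new File("path/back-to/plaintext.txt")
    );
} catch (VerificationException e) {
    // the message wasn't signed by any key the decryptor knows about
    System.err.println("could not verify signature: " + e.getMessage());
} catch (DecryptionException e) {
    // the message wasn't encrypted to any of the decryptor's private keys
    System.err.println("could not decrypt message: " + e.getMessage());
}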

To ignore signatures (in other words, decrypt a message successfully regardless of whether it was signed or not), simply turn off the decryptor's verificationRequired flag:

Decryptor decryptor = new Decryptor(
    new Key(new File("path/to/my/keys/bob-sec.gpg"), "b0bru1z!")
);
decryptor.setVerificationRequired(false);
decryptor.decrypt(
    new File("path/to/ciphertext.txt.gpg"),
    new File("path/back-to/plaintext.txt")
);

Keys in JPGPJ

The key data used by JPGPJ is simply what you get when you export a key from GnuPG, like with the following gpg command for a public key:

gpg --export alice > path/to/my/keys/alice-pub.gpg

Or this gpg command to export a private key (which exports both the public and private parts of the key, encrypted with the same password that the key has on your GnuPG keyring):

gpg --export-secret-keys bob > path/to/my/keys/bob-sec.gpg

If you encode keys with ASCII Armor when you export them (via the GnuPG --armor flag), you can load them the same way in JPGPJ; and you can also embed ascii-armored keys as strings in your source code, if you find that more convenient than using external files (see the Key Rings wiki page for more details on loading and using keys in JPGPJ).
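For example, a public key embedded as an ascii-armored string could be loaded with something like this (a sketch; it assumes the Key constructor that accepts armored text, as described on that wiki page):

Encryptor encryptor = new Encryptor(new Key(
    "-----BEGIN PGP PUBLIC KEY BLOCK-----\n" +
    "...base64-encoded key data...\n" +
    "-----END PGP PUBLIC KEY BLOCK-----\n"
));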

Sunday, May 22, 2016

Powerline Subversion Status

For a while I've been using Jasper N. Brouwer's Powerline Gitstatus plugin for the nifty Powerline shell prompt, and it's become a pretty much indispensable part of my Git workflow. I haven't been able to find something comparable for Subversion, so I created something myself: Powerline SVN Status. Here's a screenshot of it in action:

Powerline SVN Status Screenshot

I don't use Subversion all that much any more — mainly for a few older projects — but when I do, having a plugin like this makes my life easier, reminding me when I have changes I need to check in, or untracked files I need to add. The branch indicator is also really helpful when you've switched to a branch, reminding you that you're not looking at the trunk anymore.

This plugin is just a simple Python class that calls svn info and svn status when in a Subversion working directory. With Powerline all set up, you can simply install it via pip:

pip install powerline-svnstatus

And then add the following block to your Powerline segment configuration to activate it (I put this in my ~/.config/powerline/themes/shell/default_leftonly.json file):

{
    "function": "powerline_svnstatus.svnstatus",
    "priority": 40
},

Saturday, May 14, 2016

Ubuntu 16.04 with Java 7 Timezone Data

As I found when upgrading to Ubuntu 16.04, Java 7 is no longer in the main Ubuntu repository — you have to install it via the OpenJDK PPA. That works nicely, but unfortunately this PPA doesn't include any timezone data.

In previous releases of Ubuntu, Java 6 and 7 timezone data were included via the tzdata-java package; but this package isn't available for Ubuntu 16.04. So I created a new tzdata-java PPA just for Ubuntu 16.04 (Xenial). You can install it like this:

sudo apt-add-repository ppa:justinludwig/tzdata
sudo apt-get install tzdata-java

To update my previous blog post: as a set of Ansible tasks, installing Java 7 on Ubuntu 16.04 now just works like this:

- name: register java 7 ppas
  become: yes
  apt_repository: repo={{ item }}
  with_items:
  - 'ppa:openjdk-r/ppa'
  - 'ppa:justinludwig/tzdata'

- name: install java 7 packages
  become: yes
  apt: pkg={{ item }}
  with_items:
  - openjdk-7-jdk
  - tzdata-java

Sunday, May 1, 2016

Xenial Ansible

I started building out some Ubuntu 16.04 (Xenial Xerus) servers this weekend with Ansible, and was impressed by how smoothly it went. The only major issue I encountered was that Ansible requires Python 2.x, whereas Ubuntu 16.04 ships Python 3.5 by default. Fortunately, it's not too hard to work around; here's how I fixed that — and a couple of other issues specific to the servers I was building out:

Python 2

Since Ansible doesn't work with Python 3, and that's what Ubuntu 16.04 provides by default, this is the error I got when I tried running Ansible against a newly-booted server:

/usr/bin/python: not found

So I had to make this the very first Ansible play, bootstrapping Ansible's ability to use Python 2 for the rest of its tasks, as well as for its other record keeping (like gathering facts about the server):

- name: bootstrap python 2
  hosts: all  # or whichever group you're targeting
  gather_facts: no
  tasks:
  - raw: sudo apt-get update -qq && sudo apt-get install -qq python2.7

And in the inventory variables (or group variables) for the server, I had to add this line (directing it to use Python 2 instead of the server's default Python):

ansible_python_interpreter: /usr/bin/python2.7

Aptitude

The next hiccup I ran into was using the Ansible apt module with the upgrade=full option. This option is implemented by using the aptitude program — which Ubuntu no longer installs by default. I was getting this error trying to use that option:

Could not find aptitude. Please ensure it is installed.

So I just tweaked my playbook to install the aptitude package first before running apt: upgrade=full:

- name: install apt requirements
  become: yes
  apt: pkg=aptitude

- name: update pre-installed packages
  become: yes
  apt: upgrade=full update_cache=yes

Mount with nobootwait

Then I started running into some minor issues that were completely unrelated to Ansible — simply changes Ubuntu had picked up between 15.10 and 16.04. The first of these was the nobootwait option for mountall (eg in /etc/fstab mountpoints). This Upstart-era option seems to be no longer supported (the systemd equivalent is nofail) — the server hung after rebooting, with this message in the syslog:

Unrecognized mount option "nobootwait" or missing value

Maybe this is just an issue with AWS EC2 instance-store volumes, but I had to change the /etc/fstab definition for the server's instance-store volume from this:

/dev/xvdb /mnt auto defaults,noatime,data=writeback,nobootwait,comment=cloudconfig 0 2

To this:

/dev/xvdb /mnt auto defaults,noatime,data=writeback,comment=cloudconfig 0 2

Java 7

The Ubuntu 16.04 repo no longer includes Java 6 or 7 — only Java 8 and 9. I got this error message trying to install Java 7:

No package matching 'openjdk-7-jdk' is available

So I first had to add a PPA for OpenJDK 7, and then could install it:

- name: register java 7 ppa
  become: yes
  apt_repository: repo=ppa:openjdk-r/ppa

- name: install java 7
  become: yes
  apt: pkg=openjdk-7-jdk

But the PPA doesn't include the timezone database, which restricted the time formatting in our app to GMT. So I had to "borrow" the timezone data from Ubuntu 14.04:

- name: download tzdata-java
  get_url:
    url: http://mirrors.kernel.org/ubuntu/pool/main/t/tzdata/tzdata-java_2016d-0ubuntu0.14.04_all.deb
    dest: ~/tzdata-java.deb
    checksum: sha256:5131aa5219739ac58c00e18e8c9d8c5d6c63fc87236c9b5f314f7d06b46b79fb

- name: install tzdata-java
  become: yes
  command: dpkg --ignore-depends=tzdata -i tzdata-java.deb

That's probably going to be broken in a month or two after the next tzdata update, but it's good enough for now. Later I'll put together some Ansible tasks to build the data from source (although obviously the long-term solution is to upgrade to Java 8).

Update 5/14/2016: I created a Xenial tzdata-java package to simplify the installation of Java 7 timezone data.

MySQL root

The final issue I hit was with the MySQL root user. Unlike previous versions of Ubuntu, which by default came with 3 or 4 MySQL root users (covering all the different variations for naming your local host), all with empty passwords, Ubuntu 16.04 comes with just a single root user — with which you can log in only via MySQL's auth_socket plugin. So I was getting this error trying to log in to MySQL as root the old way:

unable to connect to database, check login_user and login_password are correct or /home/me/.my.cnf has the credentials

The new way is simply to log in to MySQL as the system's root user (ie sudo mysql). In Ansible tasks (such as those using the mysql_db module), this just means using the become (aka sudo) directive, and omitting the login_user and login_password options:

- name: create new database
  become: yes
  mysql_db: name=thedata

Sunday, March 6, 2016

Grails 2.5.4 Controller Action Arguments

After upgrading to Grails 2.5.4, one of our projects failed with an exception in the 'canonicalization' phase of the build, due to a NullPointerException in Grails' ControllerActionTransformer:

Caused by: java.lang.NullPointerException
    at org.codehaus.groovy.grails.compiler.web.ControllerActionTransformer.getCodeToInitializeCommandObjects(ControllerActionTransformer.java:424)
    at org.codehaus.groovy.grails.compiler.web.ControllerActionTransformer.addMethodToInvokeClosure(ControllerActionTransformer.java:493)
    at org.codehaus.groovy.grails.compiler.web.ControllerActionTransformer.processClosures(ControllerActionTransformer.java:479)
    at org.codehaus.groovy.grails.compiler.web.ControllerActionTransformer.performInjectionOnAnnotatedClass(ControllerActionTransformer.java:206)
    at org.codehaus.groovy.grails.compiler.web.ControllerActionTransformer.performInjection(ControllerActionTransformer.java:197)
    at org.codehaus.groovy.grails.compiler.injection.GrailsAwareInjectionOperation.call(GrailsAwareInjectionOperation.java:154)
    at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1055)
    ... 517 more
| Error Fatal error during compilation org.apache.tools.ant.BuildException: BUG! exception in phase 'canonicalization' in source unit '/home/justin/projects/apps/grails-app/controllers/com/pitchstone/apps/controller/LandingController.groovy' unexpected NullpointerException

It turns out that was because one of our controller actions, which used the old-style "closure" action definition, had an explicitly empty argument list, like this:

def someAction = { ->
    forward action: 'anotherAction'
}

I think once upon a time that action had a command object, and when the action was updated to just forward somewhere else, the command object was deleted from the action's argument list, leaving it with no arguments (instead of the usual single implicit argument, it). At one time it probably looked like this:

def someAction = { SomeCommand cmd ->
    [cmd: cmd]
}

Previous versions of Grails were fine with closure-style actions that had no arguments, but not Grails 2.5.4. Fortunately, simply removing the errant arrow (->) fixed the exception (and everything else with Grails 2.5.4 has gone smoothly):

def someAction = {
    forward action: 'anotherAction'
}
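
Alternatively, converting the action to the newer method style (available since Grails 2.0) avoids closure-argument handling entirely; an equivalent method-style action would look something like this:

def someAction() {
    forward action: 'anotherAction'
}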