Sunday, May 1, 2016

Xenial Ansible

I started building out some Ubuntu 16.04 (Xenial Xerus) servers this weekend with Ansible, and was impressed by how smoothly it went. The only major issue I encountered was that Ansible requires Python 2.x, whereas Ubuntu 16.04 ships Python 3.5 by default. Fortunately, it's not too hard to work around; here's how I fixed that — and a couple of other issues specific to the servers I was building out:

Python 2

Since Ansible doesn't work with Python 3, and that's what Ubuntu 16.04 provides by default, this is the error I got when I tried running Ansible against a newly-booted server:

/usr/bin/python: not found

So I had to make this the very first Ansible play, bootstrapping Python 2 so that Ansible can use it for the rest of its tasks (as well as for its other record keeping, like gathering facts about the server):

- name: bootstrap python 2
  hosts: all
  gather_facts: no
  tasks:
  - raw: sudo apt-get update -qq && sudo apt-get install -qq python2.7

And in the inventory variables (or group variables) for the server, I had to add this line (directing it to use Python 2 instead of the server's default Python):

ansible_python_interpreter: /usr/bin/python2.7
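
For example, the same variable can live directly in an INI-style inventory as a host variable; a hypothetical entry (the host and group names here are placeholders) might look like this:

[xenial]
web1.example.com ansible_python_interpreter=/usr/bin/python2.7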

Aptitude

The next hiccup I ran into was using the Ansible apt module with the upgrade=full option. This option is implemented by using the aptitude program — which Ubuntu no longer installs by default. I was getting this error trying to use that option:

Could not find aptitude. Please ensure it is installed.

So I just tweaked my playbook to install the aptitude package first before running apt: upgrade=full:

- name: install apt requirements
  become: yes
  apt: pkg=aptitude

- name: update pre-installed packages
  become: yes
  apt: upgrade=full update_cache=yes

Mount with nobootwait

Then I started running into some minor issues that were completely unrelated to Ansible, just changes Ubuntu had picked up between 15.10 and 16.04. The first of these was the nobootwait option for mountall (e.g. in /etc/fstab mount points). This option is no longer supported (Ubuntu 16.04 uses systemd rather than mountall to process /etc/fstab, and systemd doesn't recognize it); the server hung after rebooting, with this message in the syslog:

Unrecognized mount option "nobootwait" or missing value

Maybe this is just an issue with AWS EC2 instance-store volumes, but I had to change the /etc/fstab definition for the server's instance-store volume from this:

/dev/xvdb /mnt auto defaults,noatime,data=writeback,nobootwait,comment=cloudconfig 0 2

To this:

/dev/xvdb /mnt auto defaults,noatime,data=writeback,comment=cloudconfig 0 2
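
If you're managing these servers with Ansible anyway, a task along these lines could strip the option for you (a minimal sketch using the replace module; the regexp assumes the option appears mid-line with a leading comma, as above):

- name: remove nobootwait from fstab
  become: yes
  replace:
    dest: /etc/fstab
    regexp: ',nobootwait'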

Java 7

The Ubuntu 16.04 repo no longer includes Java 6 or 7 — only Java 8 and 9. I got this error message trying to install Java 7:

No package matching 'openjdk-7-jdk' is available

So I first had to add a PPA for OpenJDK 7, and then could install it:

- name: register java 7 ppa
  become: yes
  apt_repository: repo=ppa:openjdk-r/ppa

- name: install java 7
  become: yes
  apt: pkg=openjdk-7-jdk

But the PPA doesn't include the Java timezone database, so time formatting in our app was stuck at GMT. I had to "borrow" the timezone data from Ubuntu 14.04:

- name: download tzdata-java
  get_url:
    url: http://mirrors.kernel.org/ubuntu/pool/main/t/tzdata/tzdata-java_2016d-0ubuntu0.14.04_all.deb
    dest: ~/tzdata-java.deb
    checksum: sha256:5131aa5219739ac58c00e18e8c9d8c5d6c63fc87236c9b5f314f7d06b46b79fb

- name: install tzdata-java
  become: yes
  command: dpkg --ignore-depends=tzdata -i tzdata-java.deb

That's probably going to be broken in a month or two after the next tzdata update, but it's good enough for now. Later I'll put together some Ansible tasks to build the data from source (although obviously the long-term solution is to upgrade to Java 8).

Update 5/14/2016: I created a Xenial tzdata-java package to simplify the installation of Java 7 timezone data.

MySQL root

The final issue I hit was with the MySQL root user. Unlike previous versions of Ubuntu, which by default came with three or four MySQL root users (covering all the different variations for naming your local host), all with empty passwords, Ubuntu 16.04 comes with just a single root user, which can log in only via MySQL's auth_socket plugin. So I was getting this error trying to log in to MySQL as root the old way:

unable to connect to database, check login_user and login_password are correct or /home/me/.my.cnf has the credentials

The new way is simply to log in to MySQL as the system's root user (i.e. sudo mysql). In Ansible tasks (such as those using the mysql_db module), this just means using the become (aka sudo) directive and omitting the login_user and login_password options:

- name: create new database
  become: yes
  mysql_db: name=thedata
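
The same applies to Ansible's other MySQL modules; for example, a hypothetical task like this one creates an application user with nothing but the become directive (the user name, password, and privileges are placeholders):

- name: create app database user
  become: yes
  mysql_user: name=appuser password=secret priv=thedata.*:ALL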

Sunday, March 6, 2016

Grails 2.5.4 Controller Action Arguments

After upgrading to Grails 2.5.4, one of our projects failed with an exception in the 'canonicalization' phase of the build, due to a NullPointerException in Grails' ControllerActionTransformer:

Caused by: java.lang.NullPointerException
    at org.codehaus.groovy.grails.compiler.web.ControllerActionTransformer.getCodeToInitializeCommandObjects(ControllerActionTransformer.java:424)
    at org.codehaus.groovy.grails.compiler.web.ControllerActionTransformer.addMethodToInvokeClosure(ControllerActionTransformer.java:493)
    at org.codehaus.groovy.grails.compiler.web.ControllerActionTransformer.processClosures(ControllerActionTransformer.java:479)
    at org.codehaus.groovy.grails.compiler.web.ControllerActionTransformer.performInjectionOnAnnotatedClass(ControllerActionTransformer.java:206)
    at org.codehaus.groovy.grails.compiler.web.ControllerActionTransformer.performInjection(ControllerActionTransformer.java:197)
    at org.codehaus.groovy.grails.compiler.injection.GrailsAwareInjectionOperation.call(GrailsAwareInjectionOperation.java:154)
    at org.codehaus.groovy.control.CompilationUnit.applyToPrimaryClassNodes(CompilationUnit.java:1055)
    ... 517 more
| Error Fatal error during compilation org.apache.tools.ant.BuildException: BUG! exception in phase 'canonicalization' in source unit '/home/justin/projects/apps/grails-app/controllers/com/pitchstone/apps/controller/LandingController.groovy' unexpected NullpointerException

Turns out that was because one of our controller actions, which used the old-style "closure" action definitions, had an empty argument list, like this:

def someAction = { ->
    forward action: 'anotherAction'
}

I think once upon a time that action had a command object, and when the action was updated to just forward somewhere else, the command object was deleted from the action's argument list, leaving it with no arguments (instead of the usual single implicit argument, it). Like at one time it probably looked like this:

def someAction = { SomeCommand cmd ->
    [cmd: cmd]
}

Previous versions of Grails were fine with closure-style actions that had no arguments, but not Grails 2.5.4. Fortunately, simply removing the errant arrow (->) fixed the exception (and everything else with Grails 2.5.4 has gone smoothly):

def someAction = {
    forward action: 'anotherAction'
}

Saturday, February 27, 2016

Better Git Diff Word Colorization

I recently spent some time dialing in the colorization of my git diffs. Modern versions of the git command-line client will colorize diffs by default, but it still doesn't highlight the specific changed words or characters within the changed lines, like some of the fancier diff tools do (vimdiff, GitHub, etc).

The diff-highlight script, a Perl script from the "contrib" subtree of the git source repo, highlights the changed characters nicely. Several other blog posts (such as at The Unix Toolbox and at Viget) do a good job of describing how to install this script and what it gets you. Here's the TL;DR:

Installing Diff-Highlight

Download the diff-highlight script, make it executable, and put it on your path:

wget https://raw.githubusercontent.com/git/git/master/contrib/diff-highlight/diff-highlight
chmod +x diff-highlight
sudo mv diff-highlight /usr/local/bin/.
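
You can sanity-check the script before touching your git config by piping a colorized diff through it manually (this assumes you're in a repo with some uncommitted changes):

git diff --color=always | diff-highlight | less -R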

Update your git configuration to use it, adding the following to your ~/.gitconfig file:

[pager]
    diff = diff-highlight | less
    log = diff-highlight | less
    show = diff-highlight | less
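
If you'd rather not edit the file by hand, the equivalent git config commands should produce the same result:

git config --global pager.diff 'diff-highlight | less'
git config --global pager.log 'diff-highlight | less'
git config --global pager.show 'diff-highlight | less'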

Now, by default, git diff will highlight the changed characters within modified lines by reversing their foreground and background colors (i.e. removed characters get a red background, and added characters get a green background). That's okay, but with the light color scheme I like to use for my terminals, it still looks a little... ugly.

The Pretty Stick

Adding the following settings to your ~/.gitconfig will make it look much prettier:

[color "diff-highlight"]
    oldNormal = red
    oldHighlight = 88 224
    newNormal = green
    newHighlight = 28 193

The numbers are XTerm 256 color numbers (foreground first, and then the optional background color second). For reference, Wikipedia has a nice XTerm color chart; and there's a really cool VIM plugin you can use to explore custom xterm colors, if you have a different aesthetic.

Let's Encrypt DNS Validation with Lego

Let's Encrypt recently enabled support for DNS challenges, but so far only a few clients support it. Lego is one of these clients, and it already features integration with a number of popular DNS management APIs, including AWS Route 53, CloudFlare, DigitalOcean, and DNSimple. Lego also makes it really easy to use DNS challenges even without a supported API: if you run it in "manual" DNS challenge mode, it will print out the TXT record you need to add to your zone file, wait for you to add it, and then continue on to complete the challenge.

Lego is a neat Go project that can also be used as a library by other Go projects. However, it's quite easy to simply install and run Lego as a command-line tool. These are the steps I took to install it from scratch on Ubuntu 15.10, and use it with Route 53 to generate a SAN SSL certificate (a single certificate covering multiple domain names):

0. Install Go

If you don't already have Go installed on your system, you need to install it first. I don't know exactly what version Lego requires — but something newer than 1.2.1 (the version packaged in Ubuntu 14.04). It does work with version 1.5.1, the version packaged in Ubuntu 15.10, so if you have Ubuntu 15.10 or newer, you can simply install Go via apt-get:

sudo apt-get install golang

Otherwise, the Go Version Manager (GVM) provides a convenient command-line installer, allowing you to install multiple different versions of Go on the same machine, and switch between them as necessary.
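
For reference, installing a newer Go with GVM might look something like this (a rough sketch; check the GVM README for the current installer URL and flags before running it):

# install GVM (check the GVM README for the canonical installer URL)
bash < <(curl -sSL https://raw.githubusercontent.com/moovweb/gvm/master/binscripts/gvm-installer)
source ~/.gvm/scripts/gvm

# install a recent Go from binary (avoids needing an existing Go to bootstrap the build)
gvm install go1.5.1 --binary
gvm use go1.5.1 --default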

1. Install Lego

You can install Lego with the go get command — but if you're not a Go developer, you probably don't have your GOPATH environment variable set, and you need to have it set first. I'd suggest just creating a lego directory somewhere convenient (like in your home folder), and using it for your GOPATH:

mkdir $HOME/lego
export GOPATH=$HOME/lego
go get -u github.com/xenolf/lego

The above will create a lego directory in your home folder, and install the lego executable in its bin subdirectory.

2. Run Lego

Now you can run the lego executable. Specify each domain you want the cert to cover via a separate --domain argument (the example below covers mail.example.com, www.example.com, and example.com). Specify your email address (for renewal reminders) via the --email flag. To use Route 53 DNS validation, include the --dns=route53 flag (for "manual" DNS validation, where you create the challenge DNS records manually, specify --dns=manual instead).

If you use Route 53, first specify your API key and secret as environment variables, like this:

export AWS_ACCESS_KEY_ID=ABC123
export AWS_SECRET_ACCESS_KEY=ABC+def/123

Alternatively, Lego supports the same credential/configuration files that the standard AWS command-line tools support (and uses the Golang Amazon Library specifically, so check out its aws.GetAuth function for the details of exactly what environment variables and configuration files are supported, and in what order they're checked). So if you've got those files set up with your AWS API credentials already, you don't need to mess around with any additional environment variables.
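
For reference, the shared credentials file the AWS tools use lives at ~/.aws/credentials and looks something like this (the values below are placeholders):

[default]
aws_access_key_id = ABC123
aws_secret_access_key = ABC+def/123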

I'd also suggest explicitly specifying the path to the certificates/account info that Lego will generate, via the --path flag; if you created a lego directory in your home folder, just use that:

$HOME/lego/bin/lego \
    --accept-tos \
    --dns=route53 \
    --path=$HOME/lego \
    --email=me@example.com \
    --domains=mail.example.com \
    --domains=www.example.com \
    --domains=example.com \
    run

3. Check Out the Results

Once it has run successfully, Lego will output two sets of files into the directory you specified with the --path flag: your Let's Encrypt account info in the accounts subdirectory, and your new SSL cert in the certificates subdirectory:

/home/me/lego/accounts/acme-v01.api.letsencrypt.org/me@example.com/account.json
/home/me/lego/accounts/acme-v01.api.letsencrypt.org/me@example.com/keys/me@example.com.key
/home/me/lego/certificates/mail.example.com.crt
/home/me/lego/certificates/mail.example.com.json
/home/me/lego/certificates/mail.example.com.key

In the accounts hierarchy, the account.json file contains your Let's Encrypt account info (which you can use later to renew or revoke the certificate). The me@example.com.key file contains the private key for that account.

In the certificates subdirectory, your new certificate file will be named with the first domain you specified (in the above example, mail.example.com), with a crt extension. This file also includes the intermediate certificate for Let's Encrypt, so it's equivalent to the "fullchain" file that the reference Let's Encrypt client generates (the so-called "all-in-one" file that you'd use with Apache's SSLCertificateFile directive or Nginx's ssl_certificate directive). The corresponding secret key for the certificate will have the same name, but with a key extension. The file with the json extension contains some (non-secret) metadata for the cert that Let's Encrypt will need later if you renew or revoke it.
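
For example, a hypothetical Nginx server block using the generated files might look like this (only the SSL-related directives are shown; adjust the names and paths to match your own setup):

server {
    listen 443 ssl;
    server_name www.example.com;

    # the "all-in-one" cert (leaf plus intermediate) and its private key, as generated by Lego
    ssl_certificate     /home/me/lego/certificates/mail.example.com.crt;
    ssl_certificate_key /home/me/lego/certificates/mail.example.com.key;
}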

If you have OpenSSL installed, you can check out the cert details with this command (which will print out the details to stdout):

openssl x509 -text -noout -in $HOME/lego/certificates/mail.example.com.crt

The different domain names the cert covers will be listed in the X509v3 Subject Alternative Name field of the output.

Saturday, January 16, 2016

Filtering Apache Logs by HTTP Status Code

Apparently there was once a Stack Overflow question of the same name, but it was deleted. So when I wanted to add a log file just for redirects to an Apache httpd server, I had to dig a little through the mod_log_config and expressions documentation.

Starting in version 2.4, Apache supports an expr argument to the CustomLog directive to specify an expression that must evaluate to true in order for the request to be logged to the custom log file. So to capture only redirects (responses with a status code in the 300s) for GET requests, I set up the following directive:

CustomLog /var/log/apache2/redirect.log redirect "expr=%{REQUEST_STATUS} -ge 300 && %{REQUEST_STATUS} -lt 400 && %{REQUEST_METHOD} == 'GET'"

The second argument to the directive (redirect) is just the name of the custom log format I defined. It's not a special keyword or anything; it could have been foo or bar or whatever. This is the format I ended up using, capturing the host (%V:%p) and path (%r) requested, the location (%{Location}o) to which the server redirected the request, and the referrer (%{Referer}i) of the original request:

LogFormat "%V:%p %a %t \"%r\" %>s Location: %{Location}o Referer: %{Referer}i" redirect

Sunday, November 1, 2015

Sandboxing Firefox with Firejail

I typically run about five or six different instances of Firefox with different profiles for different tasks or groups of websites (like one for dev work, one for my "daily driver", one for financial accounts, one for each organization I work for, etc). I've started using Firejail to better isolate each instance — not only from each other, but from the rest of my system.

Separate Home Directories

To run each profile with a separate home directory, I first created a new ~/fj dir, with a separate directory in it for each Firefox profile (like ~/fj/ff-dev, ~/fj/ff-company-x, etc). Then I moved the existing profile for each into its own .mozilla/firefox sub-directory (like ~/fj/ff-dev/.mozilla/firefox/abc123.dev), and added a single-profile profiles.ini into the same dir, containing just the entry for the single profile:

[General]
StartWithLastProfile=1

[Profile0]
Name=dev
IsRelative=1
Path=abc123.dev

Then I adjusted my startup script for each profile to use Firejail with the separate home dir:

#!/bin/sh
firejail --private=~/fj/ff-dev firefox

Minimum Filesystem Access

I've also been experimenting with custom Firejail profiles to give Firefox just the minimum access to the filesystem it needs to work. I've found that the following profile (saved as ~/.config/firejail/firefox.profile) on Ubuntu 15.04 enables Firefox to use the system's fonts (private-etc fonts), timezone settings (private-etc localtime), and DNS (private-etc resolv.conf and noblacklist /run/resolvconf); my custom DNS overrides from /etc/hosts (private-etc hosts,nsswitch.conf); and DRM'd Flash videos from Hulu, Amazon, etc (private-etc alternatives, plus noblacklist /run/dbus and /var/cache/hald):

noblacklist /run/dbus
noblacklist /run/resolvconf
noblacklist /run/user
noblacklist /var/cache/hald
noblacklist /var/run
blacklist /boot
blacklist /cdrom
blacklist /lost+found
blacklist /media
blacklist /mnt
blacklist /opt
blacklist /proc
blacklist /run/*
blacklist /sbin
blacklist /srv
blacklist /sys
blacklist /usr/sbin
blacklist /var/*
private-dev
private-etc alternatives,firefox,fonts,hosts,localtime,nsswitch.conf,resolv.conf
read-only /bin
read-only /lib
read-only /lib64
read-only /usr
tmpfs /tmp
caps.drop all
seccomp
netfilter
noroot

I've also built a similar custom Firejail profile for Chrome (saved as ~/.config/firejail/google-chrome.profile) which allows the same (Chrome needs a little less access, since it runs its own built-in version of Flash):

noblacklist /opt/google
noblacklist /run/resolvconf
noblacklist /run/user
blacklist /boot
blacklist /cdrom
blacklist /lost+found
blacklist /media
blacklist /mnt
blacklist /opt/*
blacklist /proc
blacklist /run/*
blacklist /sbin
blacklist /srv
blacklist /sys
blacklist /usr/sbin
blacklist /var
private-dev
private-etc alternatives,chromium-browser,fonts,hosts,localtime,nsswitch.conf,resolv.conf
read-only /bin
read-only /lib
read-only /lib64
read-only /usr
tmpfs /tmp
caps.drop all
seccomp
netfilter
noroot

Sunday, May 3, 2015

Blocking Attacks on Apache Behind a Load Balancer

OSSEC, a host-based intrusion detection system, comes with a lot of defenses set up and running right out of the box, including detecting malicious web requests and blocking them with firewall rules. This works really well for web servers not running behind load balancers, or when running OSSEC on the load balancers themselves, but doesn't work to actually block the attacks when your web servers are behind load balancers that aren't running OSSEC (for example, when using AWS ELB).

Fortunately, OSSEC has all the infrastructure needed to make it work — you just need to create a custom "active response" that updates your web server's configuration with the IP addresses to block. This is what we've done to make our Apache 2.4 servers block IP addresses as directed by OSSEC:

1. Log client IPs, not LB IPs

If you're running Apache 2.4 behind a load balancer, you should install mod_remoteip (there's also a backport of mod_remoteip for Apache 2.2 available that you can compile yourself). Make sure you register the header used by your load balancer to indicate the originating client IP address via the RemoteIPHeader directive, and the IP addresses (or blocks) used by your load balancer via the RemoteIPInternalProxy directive. Put these directives either in your base Apache config, or in your <VirtualHost> blocks. You'll also want to adjust any of the standard log formats you use (as well as your own custom log formats) to replace %h with %a:

# name of the header from your load balancer
# that contains the originating client IP address
RemoteIPHeader X-Forwarded-For
# IP address or block of your load balancer
# (from the perspective of your back-end servers)
RemoteIPInternalProxy 10.0.0.1/24

# standard log file formats rewritten with %a in place of %h
LogFormat "%v:%p %a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined
LogFormat "%a %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\"" combined
LogFormat "%a %l %u %t \"%r\" %>s %O" common

After installing mod_remoteip and updating your Apache config, restart Apache and try accessing your server through the load balancer. You should now see your client IP address in the Apache access logs where your load balancer IP address used to be. This is a prerequisite for blocking web attacks with OSSEC — if your load balancer IP is in the logs instead of the originating client's IP, you'll end up blocking all access from your load balancer instead of just from the attacker!
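
On Debian or Ubuntu, enabling the module and restarting might look like this (assuming the stock Apache packaging; adjust the commands for your distro):

# enable mod_remoteip and restart Apache (Debian/Ubuntu packaging assumed)
sudo a2enmod remoteip
sudo service apache2 restart

# then watch the access log while hitting the site through the load balancer
sudo tail -f /var/log/apache2/access.log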

2. Configure Apache to use a block list

Next, make sure you have mod_authz_core and mod_authz_host installed (which are usually installed by default) so that you can use them to block access to specific IP addresses in your Apache config. What we're going to do is have OSSEC manage a list of blocked IP addresses by writing them to a specific directory, one IP address to a file, in a format mod_authz_host interprets as denying access to the IP address.

If you've used earlier versions of Apache, you're probably used to access-control settings like this:

Order deny,allow
Deny from 1.2.3.4

In Apache 2.4, however, you instead use the Require directive for access control (optionally nested inside <RequireAll>, <RequireAny>, and <RequireNone> blocks to handle AND, OR, and NOT logic). So after upgrading Apache, you'd replace the above with this:

Require not ip 1.2.3.4

So for each IP address OSSEC wants to block, we'll have it write out Require not ip [the IP address] to a file, and keep all those block files in a specific directory. I chose /etc/apache2/authz for that directory, but it can be anywhere you want. We can source the files from that directory into our Apache config via the IncludeOptional directive.

So create that directory (as /etc/apache2/authz or your own preferred location), and then source it in beside your other auth directives in your Apache config. Since we're using negated auth directives in the sourced files (Require not ip 1.2.3.4), make sure you wrap all auth directives in your Apache config with a <RequireAll> block — outside of <RequireAll>, <RequireAny>, and <RequireNone> blocks, auth directives are combined using <RequireAny> semantics, rendering any negated auth directives moot.

Here is a simple example you might use in a <VirtualHost> block, allowing all GET, HEAD, and POST requests except those from the IP addresses included in our sourced directory:

DocumentRoot /srv/www
<Directory />
    # all included directives must pass
    <RequireAll>
        # allow only GET (and HEAD) and POST requests
        Require method GET POST
        # source additional directives from /etc/apache2/authz directory
        IncludeOptional authz/*.conf
    </RequireAll>
</Directory>

If you update your config with something like the above, and restart Apache, you should still be able to access your site as normal. But if you add a test.conf file to your /etc/apache2/authz directory with the following (replacing 1.2.3.4 with your own IP address), and restart Apache, Apache should now respond to you with a 403:

Require not ip 1.2.3.4

3. Create an OSSEC "active response" script

OSSEC comes with a few built-in active-response scripts, including the ability to add an offending IP address to the /etc/hosts.deny file, and to add the offending IP address to a firewall drop rule. It's easy to create custom active-response scripts, however; add this custom script as /var/ossec/active-response/bin/apache-deny.sh (or in the active-response/bin directory of wherever you installed OSSEC):

#!/bin/sh -e
# add/delete ip to/from apache authz deny list
# apache-deny.sh add - 1.2.3.4 1234567890.123456 1234

SCRIPT=$0
ACTION=$1
USER=$2
IP=$3
TIMESTAMP=$4
RULE=$5

# directory in which to add/delete authz entries
AUTHZ_DIR=/etc/apache2/authz
# assume /var/ossec/active-response/bin/apache-deny.sh
OSSEC_DIR=$(dirname $SCRIPT)/../..
# file to log activity
LOG_FILE=$OSSEC_DIR/logs/active-responses.log

log() {
    MSG="$(date) $SCRIPT $1"
    if [ "$ACTION" = "test" ]; then
        echo "$MSG"
    else
        echo "$MSG" >> $LOG_FILE
    fi
}

# log invocation of this script
log "$1 $2 $3 $4 $5"

# use python to validate legal ip address
IP_TYPE=$(cat <<EOF | python -
import socket
try:
    socket.inet_aton('$IP')
    print 'ipv4'
except socket.error:
    try:
        socket.inet_pton(socket.AF_INET6, '$IP')
        print 'ipv6'
    except socket.error:
        print ''
EOF
)

if [ "$IP_TYPE" = "" ]; then
    log "invalid ip $IP"
    exit 1
fi

# create safe name for authz deny file
AUTHZ_FILE="deny-$(echo -n "$IP" | tr -c '[:alnum:]' '-').conf"

case "$ACTION" in
    # add authz deny file
    add)
        echo "Require not ip $IP" > $AUTHZ_DIR/$AUTHZ_FILE
        service apache2 reload
        ;;
    # delete authz deny file
    delete)
        rm -f $AUTHZ_DIR/$AUTHZ_FILE
        service apache2 reload
        ;;
    # display test string
    test)
        echo "Require not ip $IP > $AUTHZ_DIR/$AUTHZ_FILE"
        ;;
    *)
        log "unknown action $ACTION"
        exit 1
esac

If you're using a directory other than /etc/apache2/authz for your block lists, replace the AUTHZ_DIR variable's value in the script with your custom directory path. Try it out by running the following command:

sudo /var/ossec/active-response/bin/apache-deny.sh add - 1.2.3.4

This should add a file named deny-1-2-3-4.conf to your /etc/apache2/authz directory with the following content:

Require not ip 1.2.3.4

The command also reloads your Apache config, so Apache will start acting on the newly added directive right away (you will need to adjust the script if your system uses a command other than service apache2 reload to reload Apache). And if you run the command with the delete action, it will delete the same file:

sudo /var/ossec/active-response/bin/apache-deny.sh delete - 1.2.3.4

4. Configure OSSEC to invoke an active response

Now that Apache is configured to use the block list, and the apache-deny.sh script is in place to add IP addresses to and remove them from the list programmatically, we can configure OSSEC to trigger the apache-deny.sh script automatically whenever an existing OSSEC rule fires an alert at or above a certain level of importance. To do so, add two entries to your ossec.conf configuration file (typically located at /var/ossec/etc/ossec.conf).

The first entry registers our apache-deny.sh script as a command called apache-deny (and indicates it applies only to alerts that include a source IP). Insert it after the other <command> entries in your ossec.conf file:

  <command>
    <name>apache-deny</name>
    <executable>apache-deny.sh</executable>
    <expect>srcip</expect>
    <timeout_allowed>yes</timeout_allowed>
  </command>

The second entry configures OSSEC to execute the apache-deny command for all alerts at or above alert level 6. OSSEC has a bunch of built-in rules for general web attacks and general webapp security (as well as for common servers like Apache and Nginx, and common platforms like WordPress, etc), so while you can tune the level setting to your own preferences, setting it to 6 will block anyone trying out common web-app vulnerabilities on your server. Insert it after the other <active-response> entries in your ossec.conf file:

  <active-response>
    <command>apache-deny</command>
    <location>local</location>
    <level>6</level>
    <timeout>600</timeout>
  </active-response>

The <timeout>600</timeout> setting above will direct OSSEC to use the same apache-deny command to remove an IP address from the block list after 600 seconds (10 minutes) have elapsed since adding the IP to the block list. You may also want to add a repeated_offenders entry to your ossec.conf, to extend the timeout for repeat offenders. Insert the following after the other <active-response> entries to extend the timeouts to 30 minutes, then 60 minutes, and finally 120 minutes:

<active-response><repeated_offenders>30,60,120</repeated_offenders></active-response>

One other thing you probably should configure in your ossec.conf, just to be safe, is to whitelist your load balancer IPs. Make sure you have a <white_list> entry in the <global> section of your ossec.conf that matches the RemoteIPInternalProxy setting you added to your Apache config in step 1:

<white_list>10.0.0.1/24</white_list>

Restart OSSEC, and then test it out by trying a URL for which OSSEC alerts automatically. Make sure you test from a box whose IP address is not in your OSSEC whitelist — otherwise you won't see OSSEC do anything. Here's an easy URL to test (replacing www.example.com with your load balancer's public-facing name):

curl -I 'https://www.example.com/?cmd.exe'

After trying that once (and giving OSSEC a second or two to do its thing), any further HTTP requests you make from that box for the next 10 minutes should result in a 403 error page from Apache. If that doesn't happen, look in OSSEC's main log file (/var/ossec/logs/ossec.log) and active-response log (/var/ossec/logs/active-responses.log) to check for errors executing the apache-deny active response. If everything's working, there shouldn't be any messages about the apache-deny script in the main OSSEC log; but there should be an entry in the active-response log that looks like the following:

Thu Jan  1 01:23:45 UTC 2015 /var/ossec/active-response/bin/apache-deny.sh - 1.2.3.4 1234567890.1234 31104

The last three fields in the log will be the IP address blocked, the unix timestamp, and the OSSEC rule that triggered the block.

5. Optionally add your own OSSEC rules

Simply using the out-of-the-box OSSEC rules is perfect for blocking all those run-of-the-mill bot drive-bys you see all the time from scanning your web site for common vulnerabilities. However, you can also add custom rules to direct OSSEC to detect potential attacks that are specific to your site, and either alert you, or block them automatically (or both).

One thing in particular you might do is add some rules that block simple or unintentional DOS attacks (especially if your site has certain pages that are particularly vulnerable to them, like pages backed by slow queries). Here is an example of a set of rules you might add to your /var/ossec/rules/local_rules.xml file to detect and block simple DOS attacks (in the form of an unusually large number of web requests from the same IP address in a short amount of time) site-wide, with more sensitive rules for more-sensitive URL paths:

  <rule id="900000" level="1">
    <if_sid>31100,31108,31101,31120</if_sid>
    <description>Any web request.</description>
  </rule>

  <rule id="900001" level="0">
    <if_sid>900000</if_sid>
    <url>^/css|^/img|^/js</url>
    <description>Static css/img/js request.</description>
  </rule>

  <rule id="900002" level="1">
    <if_sid>900000</if_sid>
    <url>^/foo/special|^/baz</url>
    <description>Special foo or baz request.</description>
  </rule>

  <rule id="900003" level="1">
    <if_sid>900000</if_sid>
    <url>^/foo|^/bar</url>
    <description>General foo or bar request.</description>
  </rule>

  <rule id="900010" level="6" frequency="10" timeframe="3">
    <if_matched_sid>900000</if_matched_sid>
    <same_source_ip />
    <options>alert_by_email</options>
    <description>Alert if more than 12 (10+2) requests from same IP in 3 seconds.</description>
  </rule>

  <rule id="900011" level="6" frequency="50" timeframe="30">
    <if_matched_sid>900000</if_matched_sid>
    <same_source_ip />
    <options>alert_by_email</options>
    <description>Alert if more than 52 (50+2) requests from same IP in 30 seconds.</description>
  </rule>

  <rule id="900012" level="6" frequency="5" timeframe="30">
    <if_matched_sid>900002</if_matched_sid>
    <same_source_ip />
    <options>alert_by_email</options>
    <description>Alert if more than 7 (5+2) special foo or baz requests from same IP in 30 seconds.</description>
  </rule>

  <rule id="900013" level="6" frequency="10" timeframe="10">
    <if_matched_sid>900003</if_matched_sid>
    <same_source_ip />
    <options>alert_by_email</options>
    <description>Alert if more than 12 (10+2) general foo or bar requests from same IP in 10 seconds.</description>
  </rule>

You'll have to customize the URL paths, frequencies, and timeframes (and add or subtract rules as necessary) if you want to use something similar on your web site. Here's a rule-by-rule explanation of the above (keeping in mind that OSSEC rules work like a pipeline, where you have to connect later rules to earlier ones if you want the later ones to be able to operate on the events that earlier rules have already matched):

Rule 900000 identifies general web requests that haven't otherwise been handled by the default OSSEC rules (gathering the output of the 31100, 31108, 31101, and 31120 rules from the built-in /var/ossec/rules/web_rules.xml file that haven't been matched by any other built-in rules). The rules following 900000 will filter through all the requests that 900000 has collected:

  <rule id="900000" level="1">
    <if_sid>31100,31108,31101,31120</if_sid>
    <description>Any web request.</description>
  </rule>

Rule 900001 filters out any requests for URLs starting with /css, /js, and /img (note the <url> element does not handle full perl-style regexes — it instead uses a very limited string-matching syntax that recognizes only ^, $, and | as special characters). This will prevent any requests for our app's static CSS, JavaScript, or image files from being counted by the other rules that rely on rule 900000:

  <rule id="900001" level="0">
    <if_sid>900000</if_sid>
    <url>^/css|^/img|^/js</url>
    <description>Static css/img/js request.</description>
  </rule>

Rule 900002 identifies requests for our most-sensitive URLs (URLs starting with /foo/special or /baz). Rule 900012 (later on in the pipeline) will raise a higher-level alert if we get too many of these too quickly:

  <rule id="900002" level="1">
    <if_sid>900000</if_sid>
    <url>^/foo/special|^/baz</url>
    <description>Special foo or baz request.</description>
  </rule>

Rule 900003 identifies requests for our moderately-sensitive URLs (URLs starting with /foo or /bar). Since Rule 900002 was specified first, URLs starting with /foo/special will be filtered out by 900002; Rule 900003 won't match them, but will match all other URLs starting with /foo. Rule 900013 (later on in the pipeline) will raise a higher-level alert if we get too many of these too quickly:

  <rule id="900003" level="1">
    <if_sid>900000</if_sid>
    <url>^/foo|^/bar</url>
    <description>General foo or bar request.</description>
  </rule>

Rule 900010 raises a level-6 alert if we get 12 or more requests that have been matched by rule 900000 (not including those matched by rule 900001, 900002, or 900003) from the same IP address in under 3 seconds. Note that one of the eccentricities of OSSEC is that rules configured with the frequency attribute are fired only after the rule has been matched twice more than the configured value — so defining a rule with frequency="10" actually means that the rule must be matched 12 times before it fires.

Also, whenever you add new rules that will result in active responses (like this one will, assuming your apache-deny active response is also set at level 6), it's usually a good idea to include the alert_by_email option until you're sure it's working smoothly; that way OSSEC will always send you an email whenever the rule fires, allowing you to check that it's firing only under the circumstances you want it to (and that it's triggered about as often as you'd expect). You can remove the alert_by_email option once you're satisfied it's working as planned:

  <rule id="900010" level="6" frequency="10" timeframe="3">
    <if_matched_sid>900000</if_matched_sid>
    <same_source_ip />
    <options>alert_by_email</options>
    <description>Alert if more than 12 (10+2) requests from same IP in 3 seconds.</description>
  </rule>

Rule 900011 raises a level-6 alert if we get 52 or more requests that have been matched by rule 900000 from the same IP in under 30 seconds. So 900010 will result in blocking an IP address if we get a quick burst of requests from it (12 in 3 seconds), whereas 900011 will fire if we get a steady stream (52 in 30 seconds):

  <rule id="900011" level="6" frequency="50" timeframe="30">
    <if_matched_sid>900000</if_matched_sid>
    <same_source_ip />
    <options>alert_by_email</options>
    <description>Alert if more than 52 (50+2) requests from same IP in 30 seconds.</description>
  </rule>

Rule 900012 raises a level-6 alert if we get 7 or more requests matched by rule 900002 (our most-sensitive URLs) from the same IP in under 30 seconds. So rule 900012 has a much lower tolerance for requests than our general rules 900010 and 900011, blocking IPs after just a few requests:

  <rule id="900012" level="6" frequency="5" timeframe="30">
    <if_matched_sid>900002</if_matched_sid>
    <same_source_ip />
    <options>alert_by_email</options>
    <description>Alert if more than 7 (5+2) special foo or baz requests from same IP in 30 seconds.</description>
  </rule>

Rule 900013 raises a level-6 alert if we get 12 or more requests matched by rule 900003 — our moderately-sensitive URLs — from the same IP in under 10 seconds. It straddles the gap between the very-sensitive rule 900012, and the more tolerant rules 900010 and 900011:

  <rule id="900013" level="6" frequency="10" timeframe="10">
    <if_matched_sid>900003</if_matched_sid>
    <same_source_ip />
    <options>alert_by_email</options>
    <description>Alert if more than 12 (10+2) general foo or bar requests from same IP in 10 seconds.</description>
  </rule>

Keep in mind that whenever you do IP-based blocking like this, multiple users behind the same NAT (i.e. in the same office or using the same Internet connection) will all appear to your servers to have the same IP address, so be sure to allow for that when deciding how many requests from the same IP should trigger alerts for your site.