Friday, July 31, 2020

WeeChat Light Theme

When I started using WeeChat as my IRC client, I searched around for a light theme to use with it (since I use a light theme for my terminal, and the default WeeChat colors make several UI bits difficult to read with a light terminal theme). I couldn't find much in the way of WeeChat light themes, however, so I set up my own — I went through all the color settings listed in the WeeChat User's Guide, and replaced defaults that showed up as dark-on-dark or light-on-light in my terminal with colors that instead would be dark-on-light or light-on-dark.

These are the settings I ended up modifying to build the light theme I now use (you can look up each setting in the User's Guide for a brief description, if you're curious).

/set weechat.bar.status.color_bg gray
/set weechat.bar.title.color_bg gray
/set weechat.color.chat_buffer black
/set weechat.color.chat_channel black
/set weechat.color.chat_nick_offline_highlight_bg gray
/set weechat.color.chat_nick_self darkgray
/set weechat.color.chat_prefix_action darkgray
/set weechat.color.status_data_msg lightblue
/set weechat.color.status_more lightblue
/set weechat.color.status_name darkgray
/set weechat.color.status_number lightblue
/set buflist.format.buffer_current "${color:,yellow}${format_buffer}"
/set buflist.format.hotlist_low "${color:cyan}"
/set irc.color.topic_new red
/set relay.color.text_selected lightblue

There are a lot more color settings, but these were the only ones I needed to change to fix the dark-on-dark and light-on-light issues (I left the other settings alone). I use InWee to manage my custom WeeChat settings; I store these color settings in a file called colors.txt, and then start up WeeChat and run a command like inwee colors.txt whenever I want to make changes.
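
If you want to see which color settings your own setup uses, WeeChat's /set command accepts wildcards, so from within WeeChat you can list everything under weechat.color like this:

/set weechat.color.*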

Here's a screenshot of what it looks like in my terminal's color scheme:

Sunday, February 23, 2020

IPv6 for Private AWS Subnets

Instead of setting up "private" subnets in AWS with the use of moderately-expensive AWS NAT Gateways, I've been experimenting with free Egress-Only Internet Gateways (EIGW). The downside with EIGW is that it's IPv6 only — so you can communicate to the outside world from instances in your private subnet only through IPv6.

In theory, that shouldn't be a problem — it's 2020 and IPv6 just works everywhere, right? Well, actually, in practice it does work pretty well — but there are still a few gotchas. Here are a few hurdles I had to work around when setting up a fleet of Ubuntu Linux EC2 instances with Ansible:

APT Repos

The base AMIs that Ubuntu provides are configured to use the Ubuntu APT repos hosted by AWS (like us-east-1.ec2.archive.ubuntu.com); however, these repos only support IPv4. So the first thing you need to do is change the repos listed in /etc/apt/sources.list to use external repos that support IPv6 (like us.archive.ubuntu.com, or other Ubuntu mirrors you might find).
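
For example, on an instance launched from a us-east-1 Ubuntu AMI, a sed one-liner along these lines should do the swap (adjust the hostnames to match whatever your AMI's sources.list actually contains):

sudo sed -i 's/us-east-1\.ec2\.archive\.ubuntu\.com/us.archive.ubuntu.com/g' /etc/apt/sources.list
sudo apt-get update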

And since you won't be able to use IPv4 to access the repos, you can speed up APT updates by configuring APT to try only IPv6. To do so, add a file in your /etc/apt/apt.conf.d/ directory (call it something like 99force-ipv6) with the following content:

Acquire::ForceIPv6 "true";
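
One way to drop that file in place from a provisioning script (a minimal sketch, using the filename suggested above):

echo 'Acquire::ForceIPv6 "true";' | sudo tee /etc/apt/apt.conf.d/99force-ipv6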

Also don't forget that if you do set up a restrictive Network ACL for your private subnet, you'll need to allow inbound TCP access to the standard Linux ephemeral port range (32768-61000) from whatever APT repos you use.
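
If you manage your Network ACLs with the AWS CLI, an ingress rule along these lines would cover it (just a sketch: the ACL ID and rule number are placeholders, and you may want to restrict the source to your mirrors' addresses rather than ::/0):

aws ec2 create-network-acl-entry \
    --network-acl-id acl-0123456789abcdef0 \
    --ingress \
    --rule-number 120 \
    --protocol tcp \
    --port-range From=32768,To=61000 \
    --ipv6-cidr-block ::/0 \
    --rule-action allow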

NTP Pools

The NTP pools used by the base AMIs also don't support IPv6. I use the traditional NTP daemon provided by the Ubuntu ntp package, rather than the default systemd-timesyncd service. To configure the NTP daemon, I remove all the default pools from the /etc/ntp.conf file, and instead just use the 2.us.pool.ntp.org pool (the convention with NTP is that for domains that have numbered pools, like 0.us.pool.ntp.org, 1.us.pool.ntp.org, 2.us.pool.ntp.org, etc, the pool numbered 2 is the one that supports IPv6).

Specifically, this is how I configure the 2.us.pool.ntp.org pool in /etc/ntp.conf:

pool -6 2.us.pool.ntp.org iburst minpoll 10 maxpoll 12

The -6 flag means to use IPv6; the iburst part is supposed to help speed up initial synchronization; the minpoll 10 part means to poll no more often than every 2^10 seconds (around 17 minutes); and the maxpoll 12 part means to poll no less often than every 2^12 seconds (around 68 minutes).
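
After restarting the NTP daemon, you can check that it has resolved IPv6 peers from the pool with ntpq (also part of the ntp package):

sudo service ntp restart
ntpq -p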

Also, if you set up a restrictive Network ACL for your private subnet, you'll need to allow inbound access to UDP port 123.

AWS APIs

If you are planning to call AWS APIs directly (either through the various per-language SDKs, or the CLI), a huge gotcha is that very few AWS services provide IPv6 endpoints yet. This means that you won't be able to use most AWS services at all from within your private IPv6 subnet (the exceptions are services like RDS, whose instances reside within your VPCs, as opposed to services like DynamoDB, whose endpoints are hosted outside your VPCs).

The only major AWS service I've tried that does support IPv6 through its APIs is S3. When connecting to it via CLI, you can get it to use IPv6 by explicitly specifying the "dualstack" endpoint via command-line flag, like this:

aws --endpoint-url https://s3.dualstack.us-east-1.amazonaws.com --region us-east-1 s3 ls

Or, alternately, you can enable IPv6 usage via the AWS config file (~/.aws/config), like this:

[default]
region = us-east-1
s3 =
    use_dualstack_endpoint = true
    addressing_style = virtual

Ansible Inventory

To access EC2 instances in a private subnet, typically you'd use a VPN running in a public subnet of the same (or a bridged) VPC, with the VPN client set to route the VPC's private IPv4 block through the VPN. For IPv6, I have my VPN set to route the VPC's IPv6 block through the VPN as well.

Using Ansible through a VPN with IPv4 is pretty much as simple as configuring Ansible's ec2.ini file to set its destination_variable and vpc_destination_variable settings to private_ip_address. But since I decided to disallow any IPv4 access to my private subnets (even from other subnets within the same VPC), I had to jump through a few extra hoops:

1. Custom Internal Domain Names

I use a custom internal domain name for all my servers (I'll use example.net as the custom domain in the following examples), and assign each server its own domain name (like db1.example.net or mail2.example.net, etc). When I launch a new EC2 server, I create a DNS AAAA record for it (via Route53), pointing it to the IPv6 address of the newly-launched server. In this way I can use the DNS name to refer to the same server throughout its lifetime.
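
As a sketch of that Route53 step (the hosted zone ID and IPv6 address below are placeholders), the AAAA record can be created or updated with a single CLI call:

aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789EXAMPLE \
    --change-batch '{
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "db1.example.net",
                "Type": "AAAA",
                "TTL": 300,
                "ResourceRecords": [{"Value": "2600:1f18:1234:5678::1"}]
            }
        }]
    }'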

I also tag my EC2 instances as soon as I launch them with tags from which the DNS name can be constructed. For example, I'd assign the server with the DNS name of fe3.example.net a "node" tag of fe and a "number" tag of 3.
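
For example, tagging the instance that will become fe3.example.net might look like this (the instance ID is a placeholder):

aws ec2 create-tags \
    --resources i-0123456789abcdef0 \
    --tags Key=node,Value=fe Key=number,Value=3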

2. SSH Config

In my SSH config file (~/.ssh/config), I have an entry like the following, to make sure SSH (and Ansible) only tries to access my EC2 instances through IPv6:

Host *.example.net
    AddressFamily inet6

3. Ansible EC2 Config

With the above two elements in place, I can then enable the destination_format (and destination_format_tags) settings in the Ansible ec2.ini configuration file to direct Ansible to use DNS names instead of IP addresses for EC2 inventory. With the "node" and "number" tags described above, I can use the following configuration in my ec2.ini file:

destination_format = {0}{1}.example.net
destination_format_tags = node,number

When the above is set up correctly, you can run the ec2.py script (eg as ./ec2.py), and see your DNS names in its output (like db1.example.net or mail2.example.net, etc), instead of IPv4 addresses. And when you run an ad-hoc Ansible module (like ansible fe3.example.net -i ec2.py -m setup), everything should "just work".

Wednesday, January 8, 2020

Testing Systemd Services with Docker-Compose

I've been using Docker containers to test out the install process for a project I've been working on, and have found it can be a little tricky to get systemd booted up and running in Docker. Normally running a service manager like systemd within a container would be redundant and unnecessary, but in this case I'm specifically trying to test out systemd service files (and directory paths, and user permissions, etc) that have been set up by my install process.

With the base 18.04 Ubuntu image, there are 4 key steps to getting systemd running and testable in a Docker container:

  1. Install systemd
  2. Map a few key system directories
  3. Start up with /lib/systemd/systemd
  4. Use docker exec to test

1. Install systemd

With the base Ubuntu image, it's as simple as installing the systemd package with apt-get — like this Dockerfile:

FROM ubuntu:18.04
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y systemd
CMD ["/lib/systemd/systemd"]

2. Map a few key system directories

I found I had to mount /run and /run/lock as tmpfs directories and map /sys/fs/cgroup to my local /sys/fs/cgroup directory. You can do that with this docker-compose.yml file:

version: '3'
services:
  my_test_container:
    build: .
    image: my_test_image
    tmpfs:
      - /run
      - /run/lock
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro

Or, alternately, when using the docker run command, specify the --tmpfs /run --tmpfs /run/lock --volume /sys/fs/cgroup:/sys/fs/cgroup:ro flags.

3. Start up with /lib/systemd/systemd

I added CMD ["/lib/systemd/systemd"] to my Dockerfile to start the container with /lib/systemd/systemd by default; but you can instead add command: /lib/systemd/systemd to a service in your docker-compose.yml file, or just run /lib/systemd/systemd directly with the docker run command.

4. Use docker exec to test

With the above docker-compose.yml and Dockerfile, you can start up the test container with one command:

docker-compose up -d my_test_container

And then, with systemd running, use a second command to execute a shell on the container to test it out:

docker-compose exec my_test_container bash

Or use exec to run whatever other commands you need to test systemd:

docker-compose exec my_test_container systemctl list-units
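
For instance, if your install process created a (hypothetical) myapp.service unit, you could check its status and logs directly:

docker-compose exec my_test_container systemctl status myapp.service
docker-compose exec my_test_container journalctl -u myapp.service --no-pager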

Alternately, if you built and ran the above Dockerfile with docker commands instead of docker-compose, you'd use the following command to test the container out:

docker exec -it my_test_container bash

Cleaning up

To clean everything up, stop the container with docker-compose:

docker-compose stop my_test_container

Then remove the container:

docker-compose rm my_test_container

And finally, remove the image:

docker image rm my_test_image

Or execute all 3 clean-up steps (as well as removing all other containers/images referenced by your docker-compose.yml file) with a single command:

docker-compose down --rmi all

Without docker-compose

The following docker commands would allow you to build, run, and clean up the above Dockerfile without using docker-compose at all:

docker build --tag my_test_image .
docker run \
    --tmpfs /run --tmpfs /run/lock \
    --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
    --detach --rm \
    --name my_test_container my_test_image
docker exec --interactive --tty my_test_container bash
docker stop my_test_container
docker image rm my_test_image

Friday, October 25, 2019

Adapting PostgreSQL Timestamps To Arrow With Psycopg2

I did some digging the other day to try to figure out how to use the excellent Python datetime library Arrow with the workhorse psycopg2 Python-PostgreSQL database adapter (plus the nifty Peewee ORM on top of psycopg2). I was pleasantly surprised how easy and painless it was to implement, with help from a blog post by Omar Rayward, and the psycopg2 docs (and source code) as a guide.

There are 5 core PostgreSQL date/time types that Arrow can handle, which psycopg2 maps to the 3 core Python date/time classes — by default through 4 core psycopg2 datatypes:

PostgreSQL Type                Example Output            Psycopg2 Type   Python Type
timestamp [without time zone]  2001-02-03 04:05:06       PYDATETIME      datetime
timestamp with time zone       2001-02-03 04:05:06-07    PYDATETIMETZ    datetime
date                           2001-02-03                PYDATE          date
time [without time zone]       04:05:06                  PYTIME          time
time with time zone            04:05:06-07               PYTIME          time

Arrow can be used to handle each of these 5 types, via its single Arrow class. Here's how you set up the mappings:

import arrow
import psycopg2.extensions


def adapt_arrow_to_psql(value):
    """Formats an Arrow object as a quoted string for use in a SQL statement."""
    # assume Arrow object is being used for TIME datatype if date is 1900 or earlier
    if value.year <= 1900:
        value = value.format("HH:mm:ss.SZ")
    elif value == arrow.Arrow.max:
        value = "infinity"
    elif value == arrow.Arrow.min:
        value = "-infinity"
    return psycopg2.extensions.AsIs("'{}'".format(value))


# register adapter to format Arrow objects when passed as parameters to SQL statements
psycopg2.extensions.register_adapter(arrow.Arrow, adapt_arrow_to_psql)


def cast_psql_date_to_arrow(value, conn):
    """Parses a SQL timestamp or date string to an Arrow object."""
    # handle NULL and special "infinity"/"-infinity" values
    if not value:
        return None
    elif value == "infinity":
        return arrow.Arrow.max
    elif value == "-infinity":
        return arrow.Arrow.min
    return arrow.get(value)


def cast_psql_time_to_arrow(value, conn):
    """Parses a SQL time string to an Arrow object."""
    # handle NULL
    if not value:
        return None
    # handle TIME, TIME with fractional seconds (.S), and TIME WITH TIME ZONE (Z)
    return arrow.get(value, ["HH:mm:ss", "HH:mm:ss.S", "HH:mm:ssZ", "HH:mm:ss.SZ"])


# override default timestamp/date converters
# to convert from SQL timestamp/date results to Arrow objects
psycopg2.extensions.register_type(psycopg2.extensions.new_type(
    (
        psycopg2.extensions.PYDATETIME.values +
        psycopg2.extensions.PYDATETIMETZ.values +
        psycopg2.extensions.PYDATE.values
    ),
    "ARROW",
    cast_psql_date_to_arrow,
))

# override default time converter to convert from SQL time results to Arrow objects
psycopg2.extensions.register_type(psycopg2.extensions.new_type(
    psycopg2.extensions.PYTIME.values,
    "ARROW_TIME",
    cast_psql_time_to_arrow,
))

The 3 slightly tricky bits are:

  1. Deciding whether to format an Arrow object as a date or a time (in adapt_arrow_to_psql()) — you may want to handle it differently, but since Arrow will parse times without dates as occurring on "0001-01-01", the simplest thing to do is assume a date with an early year (like 1900 or earlier) represents a time instead of a date (which allows round-tripping of times from PostgreSQL to Arrow and back).
  2. Handling PostgreSQL's special "-infinity" and "infinity" values when converting between PostgreSQL and Arrow dates (in adapt_arrow_to_psql() and cast_psql_date_to_arrow()) — Arrow.min and Arrow.max are the closest equivalents.
  3. Handling the 4 different time variants that PostgreSQL emits (in cast_psql_time_to_arrow()):
    • "12:34:56" (no fractional seconds or time zone)
    • "12:34:56.123456" (fractional seconds but no time zone)
    • "12:34:56-07" (no fractional seconds but time zone)
    • "12:34:56.123456-07" (fractional seconds and time zone)

With those mappings in place, you can now use Arrow objects natively with psycopg2:

import arrow
import psycopg2


def test_datetimes():
    conn = psycopg2.connect(dbname="mydbname", user="myuser")
    try:
        cur = conn.cursor()
        cur.execute("""
            CREATE TABLE foo (
                id SERIAL PRIMARY KEY,
                dt TIMESTAMP,
                dtz TIMESTAMP WITH TIME ZONE,
                d DATE,
                t TIME,
                twtz TIME WITH TIME ZONE
            )
        """)
        cur.execute(
            "INSERT INTO foo (dt, dtz, d, t, twtz) VALUES (%s, %s, %s, %s, %s)",
            (
                arrow.get("2001-02-03 04:05:06"),
                arrow.get("2001-02-03 04:05:06-07"),
                arrow.get("2001-02-03"),
                arrow.get("04:05:06", "HH:mm:ss"),
                arrow.get("04:05:06-07", "HH:mm:ssZ"),
            ),
        )
        cur.execute("SELECT * FROM foo")
        result = cur.fetchone()
        assert result[1] == arrow.get("2001-02-03 04:05:06")
        assert result[2] == arrow.get("2001-02-03 04:05:06-07")
        assert result[3] == arrow.get("2001-02-03")
        assert result[4] == arrow.get("04:05:06", "HH:mm:ss")
        assert result[5] == arrow.get("04:05:06-07", "HH:mm:ssZ")
    finally:
        conn.rollback()

Or with the Peewee ORM, you can use Peewee's built-in date/time fields, and pass and receive Arrow objects to/from those fields:

import arrow
import peewee
import playhouse.postgres_ext

db = playhouse.postgres_ext.PostgresqlExtDatabase("mydbname", user="myuser")


class Foo(peewee.Model):
    dt = peewee.DateTimeField(
        default=arrow.utcnow,
        constraints=[peewee.SQL("DEFAULT (CURRENT_TIMESTAMP AT TIME ZONE 'UTC')")],
    )
    dtz = playhouse.postgres_ext.DateTimeTZField(
        default=arrow.utcnow,
        constraints=[peewee.SQL("DEFAULT (CURRENT_TIMESTAMP AT TIME ZONE 'UTC')")],
    )
    d = peewee.DateField(
        default=arrow.utcnow,
        constraints=[peewee.SQL("DEFAULT (CURRENT_DATE AT TIME ZONE 'UTC')")],
    )
    t = peewee.TimeField(
        default=lambda: arrow.utcnow().time(),
        constraints=[peewee.SQL("DEFAULT (CURRENT_TIME AT TIME ZONE 'UTC')")],
    )

    class Meta:
        database = db


def test_datetimes():
    with db.transaction() as tx:
        try:
            Foo.create_table()
            result = Foo.get_by_id(
                Foo.create(
                    dt=arrow.get("2001-02-03 04:05:06"),
                    dtz=arrow.get("2001-02-03 04:05:06-07"),
                    d=arrow.get("2001-02-03"),
                    t=arrow.get("04:05:06", "HH:mm:ss"),
                ).id
            )
            assert result.dt == arrow.get("2001-02-03 04:05:06")
            assert result.dtz == arrow.get("2001-02-03 04:05:06-07")
            assert result.d == arrow.get("2001-02-03")
            assert result.t == arrow.get("04:05:06", "HH:mm:ss")
        finally:
            tx.rollback()

Sunday, October 7, 2018

Unbricking My TRENDnet TEW-812DRU Wireless Router

Upgrading my TRENDnet TEW-812DRU v2 router with DD-WRT firmware sometimes goes smoothly, and sometimes not. Usually if the upgrade fails on the first try, I can just unplug the router, wait 10 seconds, plug it in again, wait for the web UI to come up again, re-upload the firmware (and wait), and the upgrade will work on the second try.

But sometimes the router won't boot up correctly. All the blinking lights come on as normal, but it doesn't do any actual routing — or provide any DHCP services, which makes the router look bricked, even for devices connected to it physically with an ethernet cord.

But fortunately, it's not actually bricked. The router still grabs its usual local address (192.168.1.1, if you haven't configured it to be something else), and runs its nifty "TRENDnet - Emergency miniWeb Server" on port 80. The emergency page it serves up allows you to upload a new firmware image — and every time (so far) that I've gotten to that page, I've simply been able to upload the firmware image I'd been trying to install (ie the latest trendnet-812dru-webflash.bin file from DD-WRT); the router accepts it, installs it, reboots itself, and everything is back to normal and happy in a few minutes.

The trick to accessing the router when its usual networking services are down is to 1) connect a computer to the router via a wired ethernet connection (if you don't have one set up that way already), and 2) configure that computer with a static IP on the router's local subnet.

Since I'm running my router at 192.168.1.1, I just set the computer's static IP address to 192.168.1.10, and point its browser to http://192.168.1.1. The emergency web server seems to listen only for a minute or two after booting, though, and then goes away; so if the emergency page won't load, I unplug the router, wait 10 seconds, and plug it in again.
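
A quick way to tell whether the emergency web server is currently listening (assuming curl is available on the wired computer) is to check for an HTTP response:

curl -s -o /dev/null -w '%{http_code}\n' http://192.168.1.1/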

And since that wired computer is running Ubuntu 16.04 (with a wired interface named enp1s2f3 — look it up via a command like ifconfig or ip address etc), I set its static IP address by adding the following to my /etc/network/interfaces:

iface enp1s2f3 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 192.168.1.1

And then run sudo service network-manager stop to make NetworkManager cool its butt, and sudo service networking restart to use the static IP.
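
Alternately, if you only need the static address temporarily (it won't survive a reboot), something like this with the iproute2 tools should work on the same interface:

sudo ip address add 192.168.1.10/24 dev enp1s2f3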

Saturday, July 21, 2018

DD-WRT Firmware for TRENDnet TEW-812DRU Wireless Router

I'm lucky enough to get gigabit internet access at home, from Wave Broadband, with which I'm quite happy. And I don't even need a modem — with my apartment building, I can just jack directly into the Ethernet port in my living room wall. I originally got a Kasada router, but its firmware hasn't been updated in a couple years, so I decided to get a new router that I knew would be updateable with the Free Software DD-WRT firmware.

I got a TRENDnet TEW-812DRU v2, which, although it's like 5 years old, is as fast as I need (supporting gigabit ethernet, plus 1.3 Gbps 802.11ac and 450 Mbps 802.11n wireless on separate 5 GHz and 2.4 GHz channels) — and a quarter of the price of comparable new routers. And, importantly, it looked like it was well supported by DD-WRT.

And it did turn out to be well supported by DD-WRT. After first reading through all the forum posts about the TEW-812DRU, I found that, unlike with some other routers, you don't need anything special to use DD-WRT on the TEW-812DRU — just upload the new firmware to the router through its web UI and let it do its thing, no tricks needed.

So the first thing I did when I plugged in the router was to log in to its web UI and flash it with the "Open Source" firmware I downloaded from the TRENDnet TEW-812DRU downloads page. That turned out to be DD-WRT v24-sp2 r23194, compiled on 12/21/2013. I was happy it worked, but that firmware was just way too old.

So next I looked up the TEW-812DRU in the DD-WRT router database, and that prompted me to download something labeled DD-WRT v24-sp2 r23804 (but turned out actually to be r23808), compiled on 3/27/2014. I flashed that firmware through the web UI — but when the router rebooted, it presented me with an "Emergency Web Server" page. I went to look up what that meant via a working internet connection, and when I checked on the router again, the Emergency Web Server page had been replaced with the working DD-WRT web UI. I figure it must have just taken a little extra while for the router to boot everything up, no big deal.

But that firmware was also way older than I was hoping for, so I went searching through the downloads directory of the DD-WRT site — and finally found the latest version of the TEW-812DRU firmware here:

https://download1.dd-wrt.com/dd-wrtv2/downloads/betas/2018/07-16-2018-r36330/trendnet-812DRUv2/

I flashed that firmware through the router's web UI, and was very pleased to see the router reboot with no issues at all, happily running DD-WRT v3.0-r36330 mini — compiled just a few days ago on 7/16/2018. Finally, peace of mind that no bears, pandas, or kittens will be making themselves at home inside my router!

Sunday, June 24, 2018

Skip the Pre-Commit Hook on Git Rebase or Merge

When you want to skip the git pre-commit hook for a single commit, it's easy — you just add the --no-verify flag (or -n for short) to the git commit command:

git commit --no-verify

But to skip multiple commits executed by another git command, like rebase or merge, the --no-verify flag doesn't work. The best way I've found to skip the pre-commit hook in that case is to code the hook to check for a custom environment variable (I like to use NO_VERIFY), and skip the pre-commit logic if it's not empty. For example, the pre-commit.sh script in my Google Java Format Pre-Commit Hook has a block of code like this at the top of the file, which skips the main functionality of the pre-commit hook if the NO_VERIFY environment variable has been set to anything other than an empty string:

if [ "$NO_VERIFY" ]; then
    echo 'pre-commit hook skipped' 1>&2
    exit 0
fi

So when I want to skip that pre-commit hook when doing a complicated rebase or merge, I simply run the following commands in the same shell:

export NO_VERIFY=1
git rebase -i master # or `git merge some-branch` or whatever
export NO_VERIFY=
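
Alternately, for a single uninterrupted rebase or merge you can set the variable for just that one command (assuming a POSIX-style shell); if the rebase stops for conflicts, though, you'll need to set it again before running git rebase --continue:

NO_VERIFY=1 git rebase -i master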