Sunday, February 23, 2020

IPv6 for Private AWS Subnets

Instead of setting up "private" subnets in AWS with moderately-expensive AWS NAT Gateways, I've been experimenting with free Egress-Only Internet Gateways (EIGW). The downside of EIGW is that it's IPv6-only, so instances in your private subnet can communicate with the outside world only through IPv6.

In theory, that shouldn't be a problem — it's 2020 and IPv6 just works everywhere, right? Well, actually, in practice it does work pretty well — but there are still a few gotchas. Here are a few hurdles I had to work around when setting up a fleet of Ubuntu Linux EC2 instances with Ansible:

APT Repos

The base AMIs that Ubuntu provides are configured to use the Ubuntu APT repos hosted by AWS; however, these repos only support IPv4. So the first thing you need to do is change the repos listed in /etc/apt/sources.list to external repos that support IPv6 (other Ubuntu mirrors can work, too).
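As a concrete sketch of the mirror swap (the hostnames and the bionic release below are my assumptions, not something from this post): the AWS-hosted regional mirrors follow a <region>.ec2.archive naming pattern and resolve only to IPv4, while the main archive also publishes AAAA records (verify that yourself, e.g. with host -t AAAA, before relying on it). Working on a local copy of sources.list here; on an instance you'd edit /etc/apt/sources.list directly.

```shell
# Build a sample sources.list using an AWS regional mirror (IPv4-only).
cat > sources.list <<'EOF'
deb bionic main restricted universe
deb bionic-updates main restricted universe
EOF

# Rewrite any <region>.ec2.archive mirror to the main
# archive, which (as of this writing) is reachable over IPv6.
sed -i.bak 's|[a-z0-9-]*\.ec2\.archive\.ubuntu\.com|' sources.list
cat sources.list
```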

And since you won't be able to use IPv4 to access the repos, you can speed up APT updates by configuring APT to try only IPv6. To do so, add a file in your /etc/apt/apt.conf.d/ directory (call it something like 99force-ipv6) with the following content:

Acquire::ForceIPv6 "true";
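Creating that drop-in file is a one-liner; here's a sketch that writes it into a scratch apt.conf.d directory (on a real instance the destination is /etc/apt/apt.conf.d/99force-ipv6, and you'd need root):

```shell
# Write the IPv6-only preference into a scratch directory for illustration.
mkdir -p apt.conf.d
printf 'Acquire::ForceIPv6 "true";\n' > apt.conf.d/99force-ipv6
cat apt.conf.d/99force-ipv6
```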

Also, don't forget that if you do set up a restrictive Network ACL for your private subnet, you'll need to allow inbound TCP access to the standard Linux ephemeral port range (32768-61000) from whatever APT repos you use.

NTP Pools

The NTP pools used by the base AMIs also don't support IPv6. I use the traditional NTP daemon provided by the Ubuntu ntp package, rather than the default systemd-timesyncd service. To configure the NTP daemon, I remove all the default pools from the /etc/ntp.conf file, and instead use a single IPv6-capable pool (the convention with NTP pool domains that have numbered pools, 0, 1, 2, and so on, is that the pool numbered 2 is the one that supports IPv6).

Specifically, this is how I configure the pool in /etc/ntp.conf:

pool -6 iburst minpoll 10 maxpoll 12

The -6 flag means to use IPv6; the iburst part is supposed to help speed up initial synchronization; the minpoll 10 part means to poll no more often than every 2^10 seconds (around 17 minutes); and the maxpoll 12 part means to poll no less often than every 2^12 seconds (around 68 minutes).
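The powers of two work out as claimed; a quick sanity check of the minpoll/maxpoll arithmetic:

```shell
# minpoll/maxpoll values are exponents of 2, giving intervals in seconds.
echo "minpoll 10 -> $((1 << 10)) seconds (~$(( (1 << 10) / 60 )) minutes)"
echo "maxpoll 12 -> $((1 << 12)) seconds (~$(( (1 << 12) / 60 )) minutes)"
```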

Also, if you set up a restrictive Network ACL for your private subnet, you'll need to allow inbound access to UDP port 123.


AWS APIs

If you are planning to call AWS APIs directly (either through the various per-language SDKs, or the CLI), a huge gotcha is that, as of yet, very few AWS services provide IPv6 endpoints. This means that you won't be able to use most AWS services at all from within your private IPv6 subnet. The exceptions are services that consist of instances residing within your VPCs (like RDS), rather than endpoints hosted outside of your VPCs (like DynamoDB).

The only major AWS service I've tried that does support IPv6 through its APIs is S3. When connecting to it via the CLI, you can get it to use IPv6 by explicitly specifying the "dualstack" endpoint via a command-line flag, like this:

aws --endpoint-url --region us-east-1 s3 ls

Alternatively, you can enable IPv6 usage via the AWS config file (~/.aws/config), like this:

[default]
region = us-east-1
s3 =
    use_dualstack_endpoint = true
    addressing_style = virtual

Ansible Inventory

To access EC2 instances in a private subnet, typically you'd use a VPN running in a public subnet of the same (or bridged) VPC, with the VPN client set to route the VPC's private IPv4 block through the VPN. For IPv6, I have my VPN route the VPC's IPv6 block through the VPN as well.

Using Ansible through a VPN with IPv4 is pretty much as simple as configuring Ansible's ec2.ini file to set its destination_variable and vpc_destination_variable settings to private_ip_address. But since I decided to disallow any IPv4 access to my private subnets (even from other subnets within the same VPC), I had to jump through a few extra hoops:

1. Custom Internal Domain Names

I use a custom internal domain name for all my servers, and assign each server its own domain name within it. When I launch a new EC2 server, I create a DNS AAAA record for it (via Route53), pointing the DNS record to the IPv6 address of the newly-launched server. In this way I can use the DNS name to refer to the same server throughout its lifetime.

I also tag my EC2 instances as soon as I launch them, with tags from which the DNS name can be constructed. For example, a server whose DNS name is built from fe and 3 would get a "node" tag of fe and a "number" tag of 3.
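The launch-time steps above can be sketched with the AWS CLI. Everything specific below is a hypothetical placeholder: the instance ID, the hosted zone ID, the domain, and the 2001:db8:: address (the RFC 3849 documentation prefix). The echo prefixes keep the sketch runnable without AWS credentials; remove them to execute the real calls.

```shell
# Placeholder identifiers for a freshly launched instance.
instance_id="i-0123456789abcdef0"
node="fe"; number="3"
fqdn="${node}${number}"
ipv6_addr="2001:db8::3"

# Tag the instance so the DNS name can later be reconstructed from tags.
echo aws ec2 create-tags --resources "$instance_id" \
  --tags "Key=node,Value=$node" "Key=number,Value=$number"

# Upsert the AAAA record pointing the name at the instance's IPv6 address.
change_batch='{"Changes":[{"Action":"UPSERT","ResourceRecordSet":{"Name":"'"$fqdn"'","Type":"AAAA","TTL":300,"ResourceRecords":[{"Value":"'"$ipv6_addr"'"}]}}]}'
echo aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000000000 --change-batch "$change_batch"
```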

2. SSH Config

In my SSH config file (~/.ssh/config), I have an entry like the following, to make sure SSH (and Ansible) only tries to access my EC2 instances through IPv6:

Host *
    AddressFamily inet6

3. Ansible EC2 Config

With the above two elements in place, I can then enable the destination_format (and destination_format_tags) settings in the Ansible ec2.ini configuration file to direct Ansible to use DNS names instead of IP addresses for its EC2 inventory. With the "node" and "number" tags described above, I can use the following configuration in my ec2.ini file:

destination_format = {0}{1}
destination_format_tags = node,number

When the above is set up correctly, you can run the dynamic inventory script (e.g. as ./) and see your DNS names in its output, instead of IPv4 addresses. And when you run an ad-hoc Ansible module (like ansible -i -m setup), everything should "just work".