Wednesday, December 30, 2020

Ecto RDS SSL Connection with Certificate Verification

It's nice and easy to connect to an AWS RDS instance with Elixir Ecto over SSL/TLS, as long as you're not worried about verifying the database server's certificate. You just add an ssl: true setting when you configure the Ecto Repo, like this snippet from a config/releases.exs file for a hypothetical "myapp":

# config/releases.exs
config :myapp, MyApp.Repo,
  hostname: System.get_env("DB_HOSTNAME"),
  database: System.get_env("DB_DATABASE"),
  username: System.get_env("DB_USERNAME"),
  password: System.get_env("DB_PASSWORD"),
  ssl: true

That's probably good enough for most cloud environments; but if you want to defend against a sophisticated attacker eavesdropping on or manipulating the SSL connections between your DB client and the RDS server, you also need to configure your Ecto Repo's ssl_opts setting to verify the server's certificate.

Unfortunately, this is not so straightforward. You need to either write your own certificate verification function (not trivial), or use one supplied by another library — like the ssl_verify_fun.erl library.

To use the :ssl_verify_hostname verification function from the ssl_verify_fun.erl library, first add the library as a dependency to your mix.exs file:

# mix.exs
defp deps do
  [
    {:ecto_sql, "~> 3.5"},
    {:ssl_verify_fun, ">= 0.0.0"}
  ]
end

Then add the following ssl_opts setting to your Ecto Repo config:

# config/releases.exs
check_hostname = String.to_charlist(System.get_env("DB_HOSTNAME"))

config :myapp, MyApp.Repo,
  hostname: System.get_env("DB_HOSTNAME"),
  database: System.get_env("DB_DATABASE"),
  username: System.get_env("DB_USERNAME"),
  password: System.get_env("DB_PASSWORD"),
  ssl: true,
  ssl_opts: [
    cacertfile: "/etc/ssl/certs/rds-ca-2019-root.pem",
    server_name_indication: check_hostname,
    verify: :verify_peer,
    verify_fun: {&:ssl_verify_hostname.verify_fun/3, [check_hostname: check_hostname]}
  ]

Note that the RDS server hostname (which would be something like my-rds-cluster.cluster-abcd1234efgh.us-east-1.rds.amazonaws.com) needs to be passed to the server_name_indication and check_hostname options as a charlist. The above example also assumes that you have downloaded the root RDS SSL certificate to /etc/ssl/certs/rds-ca-2019-root.pem on your DB client hosts.
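If you haven't already downloaded it, you can fetch that root certificate with curl; the URL below is the one AWS documented for the 2019 root CA at the time of writing (check the current RDS docs if it has moved):

# download the RDS 2019 root CA certificate to the path used above
sudo curl -sSo /etc/ssl/certs/rds-ca-2019-root.pem \
    https://s3.amazonaws.com/rds-downloads/rds-ca-2019-root.pem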

I'd also suggest pulling out the generation of ssl_opts into a function, to make it easy to set up multiple repos. This is the way I'd do it with our hypothetical "myapp" repo: I'd add one environment variable (DB_SSL) to trigger the Ecto ssl setting (with or without verifying the server cert), and another environment variable (DB_SSL_CA_CERT) to specify the path for the cacertfile option (triggering cert verification):

# config/releases.exs
make_ssl_opts = fn
  "", _hostname ->
    []

  cacertfile, hostname ->
    check_hostname = String.to_charlist(hostname)

    [
      cacertfile: cacertfile,
      server_name_indication: check_hostname,
      verify: :verify_peer,
      verify_fun: {&:ssl_verify_hostname.verify_fun/3, [check_hostname: check_hostname]}
    ]
end

db_ssl_ca_cert = System.get_env("DB_SSL_CA_CERT", "")
db_ssl = db_ssl_ca_cert != "" or System.get_env("DB_SSL", "") != ""
db_hostname = System.get_env("DB_HOSTNAME")

config :myapp, MyApp.Repo,
  hostname: db_hostname,
  database: System.get_env("DB_DATABASE"),
  username: System.get_env("DB_USERNAME"),
  password: System.get_env("DB_PASSWORD"),
  ssl: db_ssl,
  ssl_opts: make_ssl_opts.(db_ssl_ca_cert, db_hostname)

With this verification in place, you'd see an error like the following if your DB client tries to connect to a server with a SSL certificate signed by a CA other than the one you configured:

{:tls_alert, {:unknown_ca, 'TLS client: In state certify at ssl_handshake.erl:1950 generated CLIENT ALERT: Fatal - Unknown CA\n'}}

And you'd see an error like the following if the certificate was signed by the expected CA, but for a different hostname:

{bad_cert,unable_to_match_altnames} - {:tls_alert, {:handshake_failure, 'TLS client: In state certify at ssl_handshake.erl:1952 generated CLIENT ALERT: Fatal - Handshake Failure\n {bad_cert,unable_to_match_altnames}'}}

Wednesday, December 16, 2020

Using Logstash to Ingest CloudFront Logs Into Elasticsearch

Elasticsearch can be a good way of monitoring usage of your AWS CloudFront websites. There are some fairly straightforward paths to shipping CloudFront logs to hosted Elasticsearch services like Logz.io or Amazon Elasticsearch. Here's how to do it with your own self-hosted Elasticsearch and Logstash instances:

  1. Set up CloudFront logging
  2. Set up SQS notifications
  3. Set up test Logstash pipeline
  4. Set up main Logstash pipeline
  5. View logs in Kibana

Set up CloudFront logging

First, you need an S3 bucket to store your CloudFront logs. You can use an existing bucket, or create a new one. You don't need to set up any special permissions for the bucket — but you probably will want to make sure the bucket denies public access to its content by default. In this example, we'll use an S3 bucket for logs called my-log-bucket, and we'll store our CloudFront logs under a directory of the bucket called my-cloudfront-logs. Also, we'll store each CloudFront distribution's logs in their own subdirectory of that directory; so for the distribution serving the www.example.com domain, we'll store the distribution's logs under the my-cloudfront-logs/www.example.com subdirectory.

With the S3 logging bucket created and available, update each of your CloudFront distributions to log to it. You can do this via the AWS console by editing the distribution, turning the "Standard Logging" setting on, setting the "S3 Bucket for Logs" to your S3 logging bucket (my-log-bucket.s3.amazonaws.com), and setting the "Log Prefix" to the directory path of the subdirectory of the S3 bucket under which you'll store the logs (my-cloudfront-logs/www.example.com/). Save your changes, and every few minutes CloudFront will save a new .gz file to the my-cloudfront-logs/www.example.com/ subdirectory of the my-log-bucket (see the CloudFront access logs docs for details).

Set up SQS notifications

Next, create a new SQS queue. We'll call ours my-cloudfront-log-notifications, and we'll create it in the us-east-1 AWS region. When you create the queue, configure its "Receive message wait time" setting to 10 seconds or so; this will ensure the SQS client doesn't make way more SQS requests than needed (a setting of 10 seconds should keep the cost of this queue down to less than $1/month).
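If you prefer the command line, a queue with that wait time can be created with something like the following sketch (adjust the queue name and region to match yours):

aws sqs create-queue \
    --queue-name my-cloudfront-log-notifications \
    --attributes ReceiveMessageWaitTimeSeconds=10 \
    --region us-east-1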

The only other special thing you need to do when you create the queue is add an access policy to it that allows S3 to send messages to it. The policy should look like this (replace my-cloudfront-log-notifications with the name of your queue, us-east-1 with your queue's region, my-log-bucket with the name of your log bucket, and 123456789012 with your AWS account ID):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:my-cloudfront-log-notifications",
      "Condition": {
        "StringEquals": { "aws:SourceAccount": "123456789012" },
        "ArnLike": { "aws:SourceArn": "arn:aws:s3:*:*:my-log-bucket" }
      }
    }
  ]
}

With the SQS queue created, update the S3 bucket to send all object-create events to the queue. You can do this via the AWS console by selecting the bucket and opening the "Events" block in the "Advanced Settings" section of the "Properties" tab of the bucket. There you can add a notification; name it my-cloudfront-log-configuration, check the "All object create events" checkbox, set the "Prefix" to my-cloudfront-logs/, and send it to your SQS queue my-cloudfront-log-notifications.

Alternatively, you can add a notification with the same settings as above via the put-bucket-notification-configuration command of the s3api CLI, using a notification-configuration JSON file like the following:

{
  "QueueConfigurations": [
    {
      "Id": "my-cloudfront-log-configuration",
      "QueueArn": "arn:aws:sqs:us-east-1:123456789012:my-cloudfront-log-notifications",
      "Events": [ "s3:ObjectCreated:*" ],
      "Filter": {
        "Key": {
          "FilterRules": [
            { "Name": "prefix", "Value": "my-cloudfront-logs/" }
          ]
        }
      }
    }
  ]
}
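For example, assuming you've saved the above JSON to a file named notification.json (a name chosen just for this example), you'd apply it like so:

aws s3api put-bucket-notification-configuration \
    --bucket my-log-bucket \
    --notification-configuration file://notification.json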

Now that you've hooked up S3 bucket notifications to the SQS queue, if you look in the AWS console for the SQS queue, under the Monitoring tab's charts you'll start to see messages received every few minutes.
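You can also spot-check the queue from the command line; once logs start flowing, something like this should report a growing message count (the queue URL below follows the standard SQS format for the example account and region above):

aws sqs get-queue-attributes \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-cloudfront-log-notifications \
    --attribute-names ApproximateNumberOfMessages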

Set up test Logstash pipeline

Download a sample .gz log file from your S3 logging bucket, and copy it over to the machine you have Logstash running on. Move the file to a directory that Logstash can access, and make sure it has read permissions on the file. Our sample file will live at /var/log/my-cloudfront-logs/www.example.com/E123456789ABCD.2020-01-02-03.abcd1234.gz.
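For example, you might stage a sample file with the aws CLI like this (using the bucket and paths from the earlier examples; adjust ownership and permissions however fits your setup):

# download the sample log file, then move it where Logstash can read it
aws s3 cp \
    s3://my-log-bucket/my-cloudfront-logs/www.example.com/E123456789ABCD.2020-01-02-03.abcd1234.gz .
sudo mkdir -p /var/log/my-cloudfront-logs/www.example.com
sudo mv E123456789ABCD.2020-01-02-03.abcd1234.gz /var/log/my-cloudfront-logs/www.example.com/
sudo chmod a+r /var/log/my-cloudfront-logs/www.example.com/*.gz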

Copy the following my-cloudfront-pipeline.conf file into the /etc/logstash/conf.d directory on your Logstash machine (replacing the input path with your sample .gz log file), tail the Logstash logs (journalctl -u logstash -f if managed with systemd), and restart the Logstash service (sudo systemctl restart logstash):

# /etc/logstash/conf.d/my-cloudfront-pipeline.conf
input {
  file {
    file_completed_action => "log"
    file_completed_log_path => "/var/lib/logstash/cloudfront-completed.log"
    mode => "read"
    path => "/var/log/my-cloudfront-logs/www.example.com/E123456789ABCD.2020-01-02-03.abcd1234.gz"
    sincedb_path => "/var/lib/logstash/cloudfront-since.db"
    type => "cloudfront"
  }
}

filter {
  if [type] == "cloudfront" {
    if (("#Version: 1.0" in [message]) or ("#Fields: date" in [message])) {
      drop {}
    }

    mutate {
      rename => { "type" => "[@metadata][type]" }
      # strip dashes that indicate empty fields
      gsub => ["message", "\t-(?=\t)", "	"] # literal tab
    }

    #Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type cs-protocol-version fle-status fle-encrypted-fields c-port time-to-first-byte x-edge-detailed-result-type sc-content-type sc-content-len sc-range-start sc-range-end
    csv {
      separator => "	" # literal tab
      columns => [
        "date", "time", "x_edge_location", "sc_bytes", "c_ip", "cs_method",
        "cs_host", "cs_uri_stem", "sc_status", "cs_referer", "cs_user_agent",
        "cs_uri_query", "cs_cookie", "x_edge_result_type", "x_edge_request_id",
        "x_host_header", "cs_protocol", "cs_bytes", "time_taken", "x_forwarded_for",
        "ssl_protocol", "ssl_cipher", "x_edge_response_result_type",
        "cs_protocol_version", "fle_status", "fle_encrypted_fields", "c_port",
        "time_to_first_byte", "x_edge_detailed_result_type", "sc_content_type",
        "sc_content_len", "sc_range_start", "sc_range_end"
      ]
      convert => {
        "c_port" => "integer"
        "cs_bytes" => "integer"
        "sc_bytes" => "integer"
        "sc_content_len" => "integer"
        "sc_range_end" => "integer"
        "sc_range_start" => "integer"
        "sc_status" => "integer"
        "time_taken" => "float"
        "time_to_first_byte" => "float"
      }
      add_field => {
        "datetime" => "%{date} %{time}"
        "[@metadata][document_id]" => "%{x_edge_request_id}"
      }
      remove_field => ["cloudfront_fields", "cloudfront_version", "message"]
    }

    # parse datetime
    date {
      match => ["datetime", "yy-MM-dd HH:mm:ss"]
      remove_field => ["datetime", "date", "time"]
    }

    # lookup geolocation of client ip address
    geoip {
      source => "c_ip"
      target => "geo"
    }

    # parse user-agent into subfields
    urldecode {
      field => "cs_user_agent"
    }
    useragent {
      source => "cs_user_agent"
      target => "ua"
      add_field => {
        "user_agent.name" => "%{[ua][name]}"
        "user_agent.version" => "%{[ua][major]}"
        "user_agent.device.name" => "%{[ua][device]}"
        "user_agent.os.name" => "%{[ua][os_name]}"
        "user_agent.os.version" => "%{[ua][os_major]}"
      }
      remove_field => ["cs_user_agent", "ua"]
    }

    # pull logfile path from s3 metadata, if present
    if [@metadata][s3][object_key] {
      mutate {
        add_field => { "path" => "%{[@metadata][s3][object_key]}" }
      }
    }

    # strip directory path from logfile path, and canonicalize field name
    mutate {
      rename => { "path" => "log.file.path" }
      gsub => ["log.file.path", ".*/", ""]
      remove_field => "host"
    }

    # canonicalize field names, and drop unwanted fields
    mutate {
      rename => {
        "c_ip" => "client.ip"
        "cs_bytes" => "http.request.bytes"
        "sc_content_len" => "http.response.body.bytes"
        "sc_content_type" => "http.response.body.type"
        "cs_method" => "http.request.method"
        "cs_protocol" => "url.scheme"
        "cs_protocol_version" => "http.version"
        "cs_referer" => "http.request.referrer"
        "cs_uri_query" => "url.query"
        "cs_uri_stem" => "url.path"
        "sc_bytes" => "http.response.bytes"
        "sc_status" => "http.response.status_code"
        "ssl_cipher" => "tls.cipher"
        "ssl_protocol" => "tls.protocol_version"
        "x_host_header" => "url.domain"
      }
      gsub => [
        "http.version", "HTTP/", "",
        "tls.protocol_version", "TLSv", ""
      ]
      remove_field => [
        "c_port",
        "cs_cookie",
        "cs_host",
        "fle_encrypted_fields",
        "fle_status",
        "sc_range_end",
        "sc_range_start",
        "x_forwarded_for"
      ]
    }
  }
}

output {
  stdout { codec => "rubydebug" }
}

You should see a bunch of entries in the Logstash logs like the following, one for each entry from your sample log file (note the fields will appear in a different order every time you run this):

Jan 02 03:04:05 logs1 logstash[12345]: {
Jan 02 03:04:05 logs1 logstash[12345]:     "x_edge_detailed_result_type" => "Hit",
Jan 02 03:04:05 logs1 logstash[12345]:     "@timestamp" => 2020-01-02T03:01:02.000Z,
Jan 02 03:04:05 logs1 logstash[12345]:     "user_agent.device.name" => "EML-AL00",
Jan 02 03:04:05 logs1 logstash[12345]:     "time_taken" => 0.001,
Jan 02 03:04:05 logs1 logstash[12345]:     "http.version" => "2.0",
Jan 02 03:04:05 logs1 logstash[12345]:     "user_agent.os.version" => "8",
Jan 02 03:04:05 logs1 logstash[12345]:     "http.response.body.bytes" => nil,
Jan 02 03:04:05 logs1 logstash[12345]:     "tls.cipher" => "ECDHE-RSA-AES128-GCM-SHA256",
Jan 02 03:04:05 logs1 logstash[12345]:     "http.response.bytes" => 2318,
Jan 02 03:04:05 logs1 logstash[12345]:     "@version" => "1",
Jan 02 03:04:05 logs1 logstash[12345]:     "time_to_first_byte" => 0.001,
Jan 02 03:04:05 logs1 logstash[12345]:     "http.request.method" => "GET",
Jan 02 03:04:05 logs1 logstash[12345]:     "x_edge_request_id" => "s7lmJasUXiAm7w2oR34Gfg5zTgeQSTkYwiYV1pnz5Hzv8mRmBzyGrw==",
Jan 02 03:04:05 logs1 logstash[12345]:     "log.file.path" => "EML9FBPJY2494.2020-01-02-03.abcd1234.gz",
Jan 02 03:04:05 logs1 logstash[12345]:     "x_edge_result_type" => "Hit",
Jan 02 03:04:05 logs1 logstash[12345]:     "http.request.bytes" => 388,
Jan 02 03:04:05 logs1 logstash[12345]:     "http.request.referrer" => "http://baidu.com/",
Jan 02 03:04:05 logs1 logstash[12345]:     "client.ip" => "192.0.2.0",
Jan 02 03:04:05 logs1 logstash[12345]:     "user_agent.name" => "UC Browser",
Jan 02 03:04:05 logs1 logstash[12345]:     "user_agent.version" => "11",
Jan 02 03:04:05 logs1 logstash[12345]:     "url.query" => nil,
Jan 02 03:04:05 logs1 logstash[12345]:     "http.response.body.type" => "text/html",
Jan 02 03:04:05 logs1 logstash[12345]:     "url.domain" => "www.example.com",
Jan 02 03:04:05 logs1 logstash[12345]:     "x_edge_location" => "LAX50-C3",
Jan 02 03:04:05 logs1 logstash[12345]:     "http.response.status_code" => 200,
Jan 02 03:04:05 logs1 logstash[12345]:     "geo" => {
Jan 02 03:04:05 logs1 logstash[12345]:         "ip" => "192.0.2.0",
Jan 02 03:04:05 logs1 logstash[12345]:         "region_name" => "Shanghai",
Jan 02 03:04:05 logs1 logstash[12345]:         "country_name" => "China",
Jan 02 03:04:05 logs1 logstash[12345]:         "timezone" => "Asia/Shanghai",
Jan 02 03:04:05 logs1 logstash[12345]:         "longitude" => 121.4012,
Jan 02 03:04:05 logs1 logstash[12345]:         "country_code3" => "CN",
Jan 02 03:04:05 logs1 logstash[12345]:         "location" => {
Jan 02 03:04:05 logs1 logstash[12345]:             "lon" => 121.4012,
Jan 02 03:04:05 logs1 logstash[12345]:             "lat" => 31.0449
Jan 02 03:04:05 logs1 logstash[12345]:         },
Jan 02 03:04:05 logs1 logstash[12345]:         "region_code" => "SH",
Jan 02 03:04:05 logs1 logstash[12345]:         "country_code2" => "CN",
Jan 02 03:04:05 logs1 logstash[12345]:         "continent_code" => "AS",
Jan 02 03:04:05 logs1 logstash[12345]:         "latitude" => 31.0449
Jan 02 03:04:05 logs1 logstash[12345]:     },
Jan 02 03:04:05 logs1 logstash[12345]:     "url.scheme" => "https",
Jan 02 03:04:05 logs1 logstash[12345]:     "tls.protocol_version" => "1.2",
Jan 02 03:04:05 logs1 logstash[12345]:     "user_agent.os.name" => "Android",
Jan 02 03:04:05 logs1 logstash[12345]:     "x_edge_response_result_type" => "Hit",
Jan 02 03:04:05 logs1 logstash[12345]:     "url.path" => "/"
Jan 02 03:04:05 logs1 logstash[12345]: }

These entries show you what Logstash will push to Elasticsearch, once you hook it up. You can adjust this my-cloudfront-pipeline.conf file and restart Logstash again and again until you get the exact field names and values that you want to push to Elasticsearch.

Let's look at each part of the pipeline individually.

In the input section, we're using the file input to read just our one sample file:

input {
  file {
    file_completed_action => "log"
    file_completed_log_path => "/var/lib/logstash/cloudfront-completed.log"
    mode => "read"
    path => "/var/log/my-cloudfront-logs/www.example.com/E123456789ABCD.2020-01-02-03.abcd1234.gz"
    sincedb_path => "/var/lib/logstash/cloudfront-since.db"
    type => "cloudfront"
  }
}

The key bit here is that we set the type field to cloudfront, which we'll use in the filter section below to apply our filtering logic only to entries of this type. If you're only going to process CloudFront log files in this pipeline, you can omit all the bits of the pipeline that deal with "type", which would simplify it some.

In the filter section, the first step is to check if the type field was set to "cloudfront", and only execute the rest of the filter block if so:

filter {
  if [type] == "cloudfront" {

The next step in the filter section is to drop the two header lines in each CloudFront log file, the first beginning with #Version, and the second beginning with #Fields:

if (("#Version: 1.0" in [message]) or ("#Fields: date" in [message])) {
  drop {}
}

After that, the next step renames the type field to [@metadata][type], so that it won't be pushed to the Elasticsearch index. I've opted to use Elasticsearch indexes that are for my CloudFront logs only; however, if you want to push your CloudFront logs into indexes that are shared with other data, you may want to keep the type field.

mutate {
  rename => { "type" => "[@metadata][type]" }

The second half of this mutate filter strips out the - characters that indicate empty field values from all the columns in the log entry. Note that the last argument of this gsub function is a literal tab character — make sure your text editor does not convert it to spaces!

  # strip dashes that indicate empty fields
  gsub => ["message", "\t-(?=\t)", "	"] # literal tab
}

For example, it will convert an entry like this:

2020-01-02 03:03:03 HIO50-C1 6564 192.0.2.0 GET d2c4n4ttot8c65.cloudfront.net / 200 - Mozilla/5.0%20(Windows%20NT%206.1;%20WOW64;%20rv:40.0)%20Gecko/20100101%20Firefox/40.1 - - Miss nY0knXse4vDxS5uOBe3YAhDpH809bqhsILUUFAtE_4ZLlfXCiYcD0A== www.example.com https 170 0.164 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Miss HTTP/1.1 - - 62684 0.164 Miss text/html 6111 - -

Into this (removing the dashes that indicate empty values, but not the dashes in non-empty values like the date or ciphersuite):

2020-01-02 03:03:03 HIO50-C1 6564 192.0.2.0 GET d2c4n4ttot8c65.cloudfront.net / 200 Mozilla/5.0%20(Windows%20NT%206.1;%20WOW64;%20rv:40.0)%20Gecko/20100101%20Firefox/40.1 Miss nY0knXse4vDxS5uOBe3YAhDpH809bqhsILUUFAtE_4ZLlfXCiYcD0A== www.example.com https 170 0.164 TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Miss HTTP/1.1 62684 0.164 Miss text/html 6111

The next step is the meat of the process, using the csv filter to convert each tab-separated log line into named fields. Note that the separator property value is also a literal tab character:

#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type cs-protocol-version fle-status fle-encrypted-fields c-port time-to-first-byte x-edge-detailed-result-type sc-content-type sc-content-len sc-range-start sc-range-end
csv {
  separator => "	" # literal tab
  columns => [
    "date", "time", "x_edge_location", "sc_bytes", "c_ip", "cs_method",
    "cs_host", "cs_uri_stem", "sc_status", "cs_referer", "cs_user_agent",
    "cs_uri_query", "cs_cookie", "x_edge_result_type", "x_edge_request_id",
    "x_host_header", "cs_protocol", "cs_bytes", "time_taken", "x_forwarded_for",
    "ssl_protocol", "ssl_cipher", "x_edge_response_result_type",
    "cs_protocol_version", "fle_status", "fle_encrypted_fields", "c_port",
    "time_to_first_byte", "x_edge_detailed_result_type", "sc_content_type",
    "sc_content_len", "sc_range_start", "sc_range_end"
  ]
}

The columns property lists out each field name, in order. Later on in this pipeline, we'll rename many of these fields to use the ECS nomenclature, but this step uses the field names as defined by CloudFront, for clarity.

The middle part of the csv filter converts the numeric fields to actual numbers, via the convert property mapping:

convert => {
  "c_port" => "integer"
  "cs_bytes" => "integer"
  "sc_bytes" => "integer"
  "sc_content_len" => "integer"
  "sc_range_end" => "integer"
  "sc_range_start" => "integer"
  "sc_status" => "integer"
  "time_taken" => "float"
  "time_to_first_byte" => "float"
}

The add_field part of the csv filter combines the individual date and time fields into a combined datetime field (to be converted to a timestamp object later); and also copies the x_edge_request_id field value as the [@metadata][document_id] field:

add_field => {
  "datetime" => "%{date} %{time}"
  "[@metadata][document_id]" => "%{x_edge_request_id}"
}

The [@metadata][document_id] field will be used later on when we push the record to Elasticsearch (to be used as the record's ID). Like with the [@metadata][type] field, this is another case where if you're only going to process CloudFront log files in this pipeline, you could omit this extra metadata field, and just use the x_edge_request_id directly when configuring the Elasticsearch record ID.

The final part of the csv filter removes some fields that are redundant once the log entry has been parsed: message (the full log entry text itself), and cloudfront_fields and cloudfront_version (which the s3snssqs input we'll add later automatically includes):

  remove_field => ["cloudfront_fields", "cloudfront_version", "message"]
}

The next filter step is to convert the datetime field (created from the date and time fields above) into a proper datetime object:

# parse datetime
date {
  match => ["datetime", "yy-MM-dd HH:mm:ss"]
  remove_field => ["datetime", "date", "time"]
}

This sets the datetime as the value of the @timestamp field. We'll also remove the datetime, date, and time fields, since we won't need them now that we have the parsed datetime in the @timestamp field.

The next filter uses the client IP address to look up a probable physical location for the client:

# lookup geolocation of client ip address
geoip {
  source => "c_ip"
  target => "geo"
}

This creates a geo field with a bunch of subfields (like [geo][country_name], [geo][city_name], etc) containing the probable location details. Note that many IP addresses won't have a mapped value for many of the subfields; see the Geoip filter docs for more details.

The next filter decodes the user-agent field, and the filter after that parses it. The useragent filter parses the cs_user_agent field into the ua field, which, like the geo field, will contain a bunch of subfields. We'll pull out a few of those subfields, and add fields with ECS names for them:

# parse user-agent into subfields
urldecode {
  field => "cs_user_agent"
}
useragent {
  source => "cs_user_agent"
  target => "ua"
  add_field => {
    "user_agent.name" => "%{[ua][name]}"
    "user_agent.version" => "%{[ua][major]}"
    "user_agent.device.name" => "%{[ua][device]}"
    "user_agent.os.name" => "%{[ua][os_name]}"
    "user_agent.os.version" => "%{[ua][os_major]}"
  }
  remove_field => ["cs_user_agent", "ua"]
}

Since the user-agent info we want is now in those newly added user_agent.* fields, the last part of the useragent filter removes the cs_user_agent field and the intermediate ua field.

When using the file input, as we are while testing this pipeline, the input adds a path field to each record, containing the path to the file it's reading. Later on, when we switch to the s3snssqs input, that input will instead pass the same path as the [@metadata][s3][object_key] field. So that we can access this value uniformly, regardless of which input we used, the next filter step sets the path field to the value of [@metadata][s3][object_key] whenever that field is present:

# pull logfile path from s3 metadata, if present
if [@metadata][s3][object_key] {
  mutate {
    add_field => { "path" => "%{[@metadata][s3][object_key]}" }
  }
}

With the path field now containing the file path, regardless of input, we use the next filter to chop the path down to just the log file name (like E123456789ABCD.2020-01-02-03.abcd1234.gz):

# strip directory path from logfile path, and canonicalize field name
mutate {
  rename => { "path" => "log.file.path" }
  gsub => ["log.file.path", ".*/", ""]
  remove_field => "host"
}

We also have the filter rename the path field to log.file.path (the canonical ECS name for it); and have the filter remove the host field (added by the file input along with the path field, based on the host Logstash is running on — which we don't really care to have as part of our log record in Elasticsearch).

The last filter in our pipeline renames all CloudFront fields that have equivalent ECS (Elastic Common Schema) field names:

# canonicalize field names, and drop unwanted fields
mutate {
  rename => {
    "c_ip" => "client.ip"
    "cs_bytes" => "http.request.bytes"
    "sc_content_len" => "http.response.body.bytes"
    "sc_content_type" => "http.response.body.type"
    "cs_method" => "http.request.method"
    "cs_protocol" => "url.scheme"
    "cs_protocol_version" => "http.version"
    "cs_referer" => "http.request.referrer"
    "cs_uri_query" => "url.query"
    "cs_uri_stem" => "url.path"
    "sc_bytes" => "http.response.bytes"
    "sc_status" => "http.response.status_code"
    "ssl_cipher" => "tls.cipher"
    "ssl_protocol" => "tls.protocol_version"
    "x_host_header" => "url.domain"
  }

To match the ECS field specs, the middle part of the filter removes the HTTP/ prefix from the http.version field values (converting values like HTTP/2.0 to just 2.0); and removes the TLSv prefix from the tls.protocol_version field values (converting values like TLSv1.2 to just 1.2):

gsub => [
  "http.version", "HTTP/", "",
  "tls.protocol_version", "TLSv", ""
]

And finally, the last part of the filter removes miscellaneous CloudFront fields that we don't care about:

      remove_field => [
        "c_port",
        "cs_cookie",
        "cs_host",
        "fle_encrypted_fields",
        "fle_status",
        "sc_range_end",
        "sc_range_start",
        "x_forwarded_for"
      ]
    }
  }
}

The output section of the pipeline simply outputs each log record to Logstash's own log output — which is what you see when you tail Logstash's logs:

output {
  stdout { codec => "rubydebug" }
}

Set up main Logstash pipeline

Once you have this test pipeline working to your satisfaction, it's time to change the output section of the pipeline to push the output to Elasticsearch. Replace the output block of the /etc/logstash/conf.d/my-cloudfront-pipeline.conf file with this block (substituting your own host, user, and password settings, as well as any custom SSL settings you need — see the Elasticsearch output plugin docs for details):

output {
  # don't try to index anything that didn't get a document_id
  if [@metadata][document_id] {
    elasticsearch {
      hosts => ["https://elasticsearch.example.com:9243"]
      user => "elastic"
      password => "password123"
      document_id => "%{[@metadata][document_id]}"
      ecs_compatibility => "v1"
      index => "ecs-logstash-%{[@metadata][type]}-%{+YYYY.MM.dd}"
    }
  }
}

The following line in this block serves as one more guard to avoid indexing anything that didn't get parsed properly (you may want to send such log entries to a dedicated errors index, to keep an eye on entries that failed to parse):

if [@metadata][document_id] {

And this line uses the [@metadata][document_id] field to set the record ID for each entry (recall in the pipeline filters, we copied the value of the CloudFront x_edge_request_id, which should be unique for each request, to the [@metadata][document_id] field):

document_id => "%{[@metadata][document_id]}"

And since our output block includes setting ecs_compatibility to v1, which directs Logstash to use ECS-compatible index templates, this line directs Logstash to create a separate index for each day and type of log entry we process:

index => "ecs-logstash-%{[@metadata][type]}-%{+YYYY.MM.dd}"

For example, Logstash will create an index named ecs-logstash-cloudfront-2020.01.02 if we process a CloudFront log entry for January 2, 2020 (or use the existing index with that name, if it already exists).

Restart Logstash once you change the output block. In Logstash's own log output, you should see entries indicating successful connections to your Elasticsearch host, as well as a ginormous entry for the index template it installs in Elasticsearch. Once you see that, check your Elasticsearch instance — you should see a new ecs-logstash-cloudfront-YYYY.MM.DD index created, with entries from your sample CloudFront log file.
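One quick way to verify is Elasticsearch's _cat/indices API, using the same host and credentials as in the output block above:

curl -s -u elastic:password123 \
    'https://elasticsearch.example.com:9243/_cat/indices/ecs-logstash-cloudfront-*?v'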

You can use this same mechanism to backfill your existing CloudFront log files to Elasticsearch — manually download the log files to backfill to your Logstash machine (like via the sync command of the s3 CLI), and customize the file input block's path property (with wildcards) to direct Logstash to read them in.
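For example, a backfill download for our hypothetical distribution might look like this (after which you'd point the file input's path at /var/log/my-cloudfront-logs/www.example.com/*.gz):

aws s3 sync \
    s3://my-log-bucket/my-cloudfront-logs/www.example.com/ \
    /var/log/my-cloudfront-logs/www.example.com/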

For future CloudFront log files, however, we're going to make one more change to our pipeline, and use the S3 via SNS/SQS input (aka s3snssqs) to pull CloudFront log files from S3 as soon as CloudFront publishes them.

First, create a new IAM policy for your Logstash machine to use that will allow it to both read from your logging bucket, and to read and delete items from the SQS queue we set up above. The policy should look like this (change the Resource elements to point to your own S3 log bucket and SQS log queue, set up in the first two sections of this article):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-log-bucket"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-log-bucket/my-cloudfront-logs/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "sqs:Get*",
        "sqs:List*",
        "sqs:ReceiveMessage",
        "sqs:ChangeMessageVisibility",
        "sqs:DeleteMessage"
      ],
      "Resource": [
        "arn:aws:sqs:us-east-1:123456789012:my-cloudfront-log-notifications"
      ]
    }
  ]
}

Then install the logstash-input-s3-sns-sqs plugin on your Logstash machine:

cd /usr/share/logstash
sudo -u logstash bin/logstash-plugin install logstash-input-s3-sns-sqs

Then update the input section of your pipeline to be the following (substituting your own SQS queue name and its AWS region):

input {
  # pull new logfiles from s3 when notified
  s3snssqs {
    region => "us-east-1"
    queue => "my-cloudfront-log-notifications"
    from_sns => false
    type => "cloudfront"
  }
}

If you're running the Logstash machine in AWS, you can use the usual EC2 instance profiles or IAM roles for tasks to grant the machine access to the policy you created above. Otherwise, you'll need to add some AWS credential settings to the s3snssqs input as well; consult the S3 input plugins docs for options (the s3snssqs input allows for the same AWS credential options as the s3 input does, but the s3 input has better documentation for them).

Now restart Logstash. You should see the same output in Logstash's own log as before; but if you check Elasticsearch, you should see new records being added.

View logs in Kibana

Eventually you'll want to create fancy dashboards in Kibana for your new CloudFront data; but for now we'll just get started by setting up a listing where you can view them in the "Discover" section of Kibana.

First log into Kibana, and navigate to the "Management" > "Stack Management" section of Kibana. Within the "Stack Management" section, if you navigate to the "Data" > "Index management" subsection, you should see a bunch of new indexes named in the form of ecs-logstash-cloudfront-YYYY.MM.DD (like ecs-logstash-cloudfront-2020.01.01 and so on).

Once you've verified Kibana is seeing the indexes, navigate to the "Kibana" > "Index Patterns" subsection, and click the "Create index pattern" button. Specify ecs-logstash-cloudfront-* as the pattern, and select @timestamp as the time field.

With the new index pattern created, navigate out of the "Stack Management" section of Kibana into the main "Kibana" > "Discover" section. This will show your most recent "Discover" search. On the left side of the page, change the selected index pattern to the pattern you just created (ecs-logstash-cloudfront-*). You should now see your most recent CloudFront entries listed (if not, use the time window selector in the top right of the page to expand the time window to include a range you know should include some entries). You can use this page to create a list with custom columns and custom filter settings for your CloudFront logs.

Friday, December 4, 2020

Building a Logstash Offline Plugin Pack with Docker

If you run a Logstash node in an environment where it doesn't have access to the public Internet, and need to install some extra plugins, you have to build an "offline plugin pack" (a zip containing the plugins and their dependencies) on a machine that does have public Internet access. You can then copy the pack to your Logstash node, and install the plugins from it directly.

Here's a quick little script I whipped up to build the offline plugin pack using the official Logstash docker container:

#!/bin/sh -e
logstash_version=7.10.0
logstash_plugins=$(echo '
    logstash-codec-cloudfront
    logstash-input-s3-sns-sqs
' | xargs)

echo "
    bin/logstash-plugin install $logstash_plugins
    bin/logstash-plugin prepare-offline-pack \
        --output /srv/logstash/logstash-plugins.zip \
        $logstash_plugins
" | docker run -i -u $(id -u) -v $(pwd):/srv/logstash --rm \
    docker.elastic.co/logstash/logstash:$logstash_version /bin/sh

Set the script's logstash_version variable to the version of Logstash you're using, and set the (whitespace-separated) list of plugins in the logstash_plugins variable to the plugins you need. Run the script, and it will output a logstash-plugins.zip into your working directory.

You can then copy the logstash-plugins.zip file to your Logstash node (for example, to the /usr/share/logstash directory of the machine), and install the contained plugins like this:

cd /usr/share/logstash
sudo -u logstash bin/logstash-plugin install file://logstash-plugins.zip

Make sure you run the logstash-plugin command as the same user you use to run Logstash itself (typically the logstash user) — otherwise the plugins will be installed with the wrong filesystem permissions (and you'll see errors about it when you run the main Logstash process).
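To double-check, you can list the installed plugins as that same user; for example:

cd /usr/share/logstash
sudo -u logstash bin/logstash-plugin list | grep -E 'cloudfront|s3-sns-sqs'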

Wednesday, November 18, 2020

Antora Deploy to S3 and CloudFront

Even though there aren't any dedicated Antora components for deploying to AWS CloudFront or S3, it's still really easy to do — most of the Antora settings you'd use for a generic web hosting site work perfectly for S3 + CloudFront. Here's how:

  1. Playbook Settings
  2. S3 Settings
  3. CloudFront Settings
  4. Upload Script
  5. Redirects Script

Playbook Settings

Here's an example playbook file that you'd use to build your documentation as part of your production deploy process:

# antora-playbook.yml
site:
  robots: allow
  start_page: example-user-guide::index.adoc
  title: Example Documentation
  url: https://docs.example.com
content:
  sources:
  - url: https://git.example.com/my-account/my-docs.git
    branches: master
    start_path: content/*
output:
  clean: true
runtime:
  fetch: true
ui:
  bundle:
    url: https://ci.example.com/my-account/my-docs-ui/builds/latest/ui-bundle.zip
    snapshot: true
urls:
  html_extension_style: indexify

If you run Antora in the same directory as this playbook, with a command like the following, Antora will generate your site to the build/site sub-directory:

antora generate antora-playbook.yml

These are the key playbook settings for S3/CloudFront (including some settings omitted from the above playbook, because the default value is perfect already):

site.robots: Set this to allow (or disallow if you want to forbid search-engines from crawling your docs), so that Antora will generate a robots.txt file for you.

site.url: Make sure you set this to an absolute URL — doing so will trigger Antora to build out a bunch of desirable files, like a 404.html and sitemap.xml. If your documentation has its own dedicated domain name, like docs.example.com, set site.url to https://docs.example.com; if instead your documentation can be found at a sub-directory of your main website, like under the docs directory of www.example.com, set site.url to https://www.example.com/docs. In either case, omit the trailing slash (eg don't set it to https://www.example.com/docs/; do set it to https://www.example.com/docs).

output.dir: By default, Antora will generate your site to the build/site sub-directory of whatever directory you ran the antora command from. If this is good for you, you can omit the output.dir setting; otherwise you can set output.dir to some other local filesystem path.

urls.html_extension_style: Set this to indexify, which directs Antora to a) build out each documentation page to an index.html file in a sub-directory named for the path of the page, and b) to build links to each page via the path to the page with a trailing slash. For example, for a page named how-it-works.adoc in the ROOT module of the example-ui-guide component, with indexify Antora will build the page out as a file named example-ui-guide/how-it-works/index.html (within its build/site output directory), and build links to the page as /example-ui-guide/how-it-works/. This is exactly what you want when your site is served by S3.

urls.redirect_facility: The default setting, static, is what you want for S3, so you can omit this setting from your playbook (or set it explicitly to static if you like).

S3 Settings

When hosting Antora-generated sites on S3, you don't need to do anything different than you would for any other statically-generated website, so you can follow any of the dozens of online guides for S3 website hosting, like Amazon's own S3 static website hosting guide. The key things you need to set are:

  1. Turn on static website hosting for the S3 bucket.
  2. Set the "index document" to index.html (the default for S3 website hosting).
  3. Set the "error document" to 404.html (Antora generates this file for you).
  4. Either configure the permissions of the S3 bucket to explicitly allow public access to read all objects in the bucket; or when you upload files to the bucket, explicitly upload them with a canned ACL setting that allows public read access (as the scripts covered later in this article will).

You need to make the files in your S3 bucket publicly-accessible (point #4 above) so that CloudFront can access them. While there technically is a way to configure S3 and CloudFront so that the files are not publicly-accessible in S3 but CloudFront can still access them (via an Origin Access Identity), it's kind of a pain. Since these files are ultimately meant to be served to the public through CloudFront anyway, it's simpler just to make them publicly-accessible in S3.
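As an aside, points #1 through #3 above can also be done from the command line; here's a minimal sketch with the aws CLI (using the example-bucket name from the examples later in this article):

aws s3 website s3://example-bucket \
    --index-document index.html --error-document 404.html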

CloudFront Settings

There's also nothing special you need to do for Antora-generated sites with CloudFront — any of the dozens of online guides for S3 + CloudFront hosting will work to set it up. Just make sure that when you set the origin for your CloudFront distribution, you use the "website endpoint" of your S3 bucket, and not the standard endpoint.

For example, if your S3 bucket is named "example-bucket" and it's located in the us-west-2 region, don't use example-bucket.s3.us-west-2.amazonaws.com as your CloudFront origin — instead do use example-bucket.s3-website-us-west-2.amazonaws.com. Using the website endpoint will ensure that CloudFront serves the Antora-generated 404.html page for pages that don't exist, and that it also serves a 301 redirect for pages for which you've configured S3 to redirect (as the scripts covered later in this article will).

Upload Script

Once you've set up your Antora playbook, S3 bucket, and CloudFront distribution, you're ready to deploy your site. If you've set up your antora-playbook.yml as above, you can build your documentation, upload it to S3, and clear the CloudFront caches of the old version of your docs with the following simple script:

#!/bin/sh -e
build_dir=build/site
cf_distro=E1234567890ABC
s3_bucket=example-bucket

antora generate antora-playbook.yml
aws s3 sync $build_dir s3://$s3_bucket --acl public-read --delete
aws cloudfront create-invalidation --distribution-id $cf_distro --paths '/*'

The antora command generates your documentation to the build/site directory. The aws s3 sync command replaces the existing content of example-bucket with the content of the build/site directory (granting public read-access to each individual file uploaded). The aws cloudfront create-invalidation command clears the CloudFront caches for all the content of your CloudFront distribution.

If your documentation is part of a larger site (eg hosted as https://www.example.com/docs/) instead of being hosted as its own site (eg https://docs.example.com/), add the sub-directory under which your documentation is hosted (eg /docs) to the last two lines of the above script; for example, like the following:

aws s3 sync $build_dir s3://$s3_bucket/docs --acl public-read --delete
aws cloudfront create-invalidation --distribution-id $cf_distro --paths '/docs/*'

Redirects Script

The redirect pages that Antora will generate when you set the Antora urls.redirect_facility setting to static will work fine for your website users as is. But search engines will like it better if you serve real HTTP redirect responses (with the redirect information embedded in HTTP header fields) instead of just HTML pages that indicate that the client browser should redirect to a different location once parsed. You can get S3 + CloudFront to serve 301 Moved Permanently redirects in place of all the redirect pages Antora generates by uploading them separately to S3 with a special x-amz-website-redirect-location header.

To do so, insert the following block into your upload script between the aws s3 sync and aws cloudfront create-invalidation commands:

#!/bin/sh
build_dir=build/site
cf_distro=E1234567890ABC
s3_bucket=docs.example.com

antora generate antora-playbook.yml

aws s3 sync $build_dir s3://$s3_bucket --acl public-read --delete

grep -lR 'http-equiv="refresh"' $build_dir | while read file; do
    redirect_url=$(awk -F'"' '/rel="canonical"/ { print $4 }' $file)
    aws s3 cp $file s3://$s3_bucket/${file##$build_dir/} \
        --website-redirect $redirect_url --acl public-read
done

aws cloudfront create-invalidation --distribution-id $cf_distro --paths '/*'

The above script block will search the Antora build dir for all redirect pages (with the grep command), and loop over each (with the while command, reading the local filepath to each into the file variable). It will pull out the canonical URL of the page to redirect to from the redirect page (via the awk command, into the redirect_url variable), and re-upload the file using the --website-redirect flag of the aws s3 cp command to indicate that S3 should serve a 301 redirect to the specified URL instead of the file content itself (when accessed through the S3 website endpoint).

As a concrete example of this redirect capability, say you had a page named how-it-works.adoc in the ROOT module of your example-ui-guide component. If you added metadata to that how-it-works.adoc page to add a redirect to it from the non-existent inner-workings.adoc page (eg via a page-aliases header attribute value of inner-workings.adoc), Antora would generate the following redirect page for you at build/site/example-user-guide/inner-workings/index.html:

<!DOCTYPE html>
<meta charset="utf-8">
<link rel="canonical" href="https://docs.example.com/example-user-guide/how-it-works/">
<script>location="../how-it-works/"</script>
<meta http-equiv="refresh" content="0; url=../how-it-works/">
<meta name="robots" content="noindex">
<title>Redirect Notice</title>
<h1>Redirect Notice</h1>
<p>The page you requested has been relocated to <a href="../how-it-works/">https://docs.example.com/example-user-guide/how-it-works/</a>.</p>

The above script would re-upload this file to S3 like so (with all variables expanded, and some additional line-wrapping for legibility):

aws s3 cp build/site/example-user-guide/inner-workings/index.html \
    s3://example-bucket/example-user-guide/inner-workings/index.html \
    --website-redirect https://docs.example.com/example-user-guide/how-it-works/ --acl public-read

If a user (or search engine) then navigates to https://docs.example.com/example-user-guide/inner-workings/, S3 + CloudFront will send this response back:

HTTP/2 301
location: https://docs.example.com/example-user-guide/how-it-works/
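If you want to check this yourself against your own deployed site, a quick curl will show the status and location headers:

curl -sI https://docs.example.com/example-user-guide/inner-workings/ | grep -iE '^(HTTP|location)'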

Friday, October 30, 2020

Antora Quick Start Tutorial

Antora is a sophisticated documentation "static site generator" (SSG) for AsciiDoc/Asciidoctor. It's built for big projects with multiple sets of documentation, so just getting started can be a little intimidating. Here's a quick guide for how to get up and running with Antora, using Docker and some Makefiles on your local dev machine.

At minimum, you really should use at least three separate git repos with Antora:

  1. Your documentation content (AsciiDoc source files).
  2. Your customized UI theme/style (JavaScript, CSS, and Handlebars templates).
  3. Your Antora build and deployment configuration (antora-playbook.yml, and whatever additional scripts/config files you use to deploy to production).

In practice, you may in fact have several different repos for #1, like the docs directory from several different software products, or simply different repos for different pieces of documentation (like an Install Guide, Product Manual, API Reference, etc). And #3 might be part of other repos you use for devops configuration/deployment/infrastructure/etc.

But you're really going to want to have a new dedicated repo for #2 — this is where you customize your page header and footer content with links to your own websites, with your own logos, colors, and general look-and-feel.

So this guide is just going to focus on #2, with development-focused Antora config (#3) and initial documentation content (#1) snuck into the same repo. Before you begin, make sure you have Git, Make, and Docker installed on your local dev machine.

  1. UI Set Up
  2. UI Customization
  3. Documentation Set Up
  4. Documentation Customization
  5. Development Workflow

UI Set Up

Clone Antora Default UI

To start off, clone the Antora Default UI repo. We're going to save all our UI customizations, our initial doc content, and our dev Antora config in this new repo, which we'll call docs-ui. Run these commands in a terminal:

$ git clone https://gitlab.com/antora/antora-ui-default.git docs-ui
$ cd docs-ui
$ git remote rename origin antora-ui-default
$ git branch --move antora

This will keep antora-ui-default as a remote source for your git repository, so you can easily diff and pull in core fixes to the UI from the Antora Default UI repo. We'll keep the local copy of the Antora Default UI in a branch called antora.

But you'll want to set the main remote source of your customized repo (called origin by convention) to a new remote source. For example, if you have a github account named my-account, create a new repo named docs-ui in it, and push the local content of this new repo to it. We'll call the local branch of the main source main:

$ git checkout -b main
$ git remote add origin https://github.com/my-account/docs-ui.git
$ git push -u origin main

Dockerized UI Build

Now we're ready to start making changes to the repo. To make it easy to preview and build those changes, we'll add three files to the root of the repo: ui.dockerfile, docker-compose.yml, and Makefile. Create these files:

# ui.dockerfile
FROM node:12-buster
WORKDIR /srv/docs-ui

# docker-compose.yml
version: '3'
services:
  ui:
    build:
      context: .
      dockerfile: ui.dockerfile
    ports:
      - 8052:5252
    volumes:
      - .:/srv/docs-ui

# Makefile

# help: @ Lists available make tasks
help:
	@egrep -oh '[0-9a-zA-Z_\.\-]+:.*?@ .*' $(MAKEFILE_LIST) | \
	awk 'BEGIN {FS = ":.*?@ "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' | sort

# node_modules: @ Runs initial ui npm install
node_modules:
	docker-compose run ui npm install

# ui.build: @ Builds ui production output (to build/ui-bundle.zip)
ui.build: node_modules
	docker-compose run -u $$(id -u) ui node_modules/.bin/gulp bundle

# ui.lint: @ Runs ui linting
ui.lint: node_modules
	docker-compose run -u $$(id -u) ui node_modules/.bin/gulp lint

# ui.run: @ Runs ui server in preview mode (on port 8052)
ui.run: node_modules
	docker-compose run -u $$(id -u) --service-ports ui node_modules/.bin/gulp preview

# ui.shell: @ Opens bash shell in ui container
ui.shell: CMD ?= /bin/bash
ui.shell:
	docker-compose run -u $$(id -u) ui $(CMD)

Note that Makefiles require you to use tabs for indentation, instead of spaces, so make sure your editor has preserved the indentation in the above Makefile with tabs.

Run UI Preview

With those three files in place, run this command from the repo root:

make ui.run

This will build the Docker image defined by the ui.dockerfile file, launch it as a container, and run the Antora UI preview server in it, exposed on port 8052. Open up a browser, and navigate to:

http://localhost:8052/

You should see a test page for the Antora UI (titled "Hardware and Software Requirements") with a bunch of sample content.

UI Customization

Customize Header Content

Now open up the src/partials/header-content.hbs file from the repo. This is the template for the page header content. Change the line with the "Home" link to this:

<a class="navbar-item" href="https://example.com/">Example Co Home</a>

Save the file, go back to your browser, and refresh the page. The "Home" link in the page header should now read "Example Co Home" (and link to https://example.com/).

Customize Header Color

Next open up the src/css/vars.css file. This file defines (via CSS variables, which begin with the -- prefix) the basic colors and sizes of the UI. Change the --navbar-background variable to this:

--navbar-background: #39f;

Save the file, go back to your browser, and refresh the page. The background of the page header should now be a medium blue.

Build UI Bundle

Kill the make ui.run task (by pressing control-C in the terminal it's running in), and run this command in its place:

make ui.build

This will build an Antora UI bundle as a zip file called ui-bundle.zip in the repo's build directory. We'll use this bundle in the next step, when we set up our basic Antora documentation build.

Documentation Set Up

Now we're ready to actually build some documentation. The Antora Default UI repo includes an Antora module in its docs directory, so we'll start by building it.

Create Antora Playbook

The first thing to do is create an antora-playbook.yml file. This file defines the configuration for the Antora build process. Add the following antora-playbook.yml to the root of the repo:

# antora-playbook.yml
site:
  robots: allow
  start_page: antora-ui-default::index.adoc
  title: Example Documentation
  url: https://docs.example.com
content:
  sources:
  - url: ./
    branches: HEAD
    start_path: docs
runtime:
  cache_dir: ./build/cache
ui:
  bundle:
    url: ./build/ui-bundle.zip
urls:
  html_extension_style: indexify

This will configure Antora to use "author mode", where it pulls its documentation content from local files instead of remote git repos. Your production build scripts will probably configure Antora to pull documentation content from specific release branches of various product or documentation repos, but while you're developing the documentation, you'll just want to build the latest docs from your working copy.

The Antora documentation covers all playbook settings thoroughly in its Antora Playbook section, but let's briefly touch on each setting of this antora-playbook.yml:

site.robots: Setting this to allow directs Antora to generate a robots.txt file which allows bots to spider your site; setting it to disallow generates a robots.txt that disallows bots. While you can omit this setting in your dev build (and not generate a robots.txt file), it's nice to see what Antora will generate.

site.start_page: This is the page ID of the page to redirect to when a visitor navigates to the root of your site (eg https://docs.example.com/). A value of antora-ui-default::index.adoc sets it to redirect to the index.adoc page of the ROOT module in the antora-ui-default component (https://docs.example.com/antora-ui-default/ under this configuration). The source file for this page is located at docs/modules/ROOT/pages/index.adoc in our repo.

site.title: This is the site-wide title that will be displayed in the header of every page, as well as the browser titlebar.

site.url: This is the base URL of the site, when deployed, minus the trailing slash. For deployments where the documentation site has its own domain name (eg docs.example.com), the value of this setting should just be the URL scheme plus domain name (eg https://docs.example.com); for deployments where the documentation lives as a sub-directory of a larger site, the value should also include the sub-directory under which the documentation will live (eg https://www.example.com/docs). While you can safely omit this setting for dev builds, it's nice to include just so you can see how this URL will be used in production.

content.sources: This is a list of sources (each separate item in a YAML list is denoted by a - sign); initially, we'll just have one source: the docs directory that came with the Antora Default UI repo.

content.sources[0].url: With your production configuration, you'd usually specify a full URL to a git repo with this setting; but with our dev config, we'll just use this local repo, indicated by ./. Note that this setting has to be the path to the root of a git repo — it can't specify a sub-directory of the repo itself (like say ./docs).

content.sources[0].branches: This specifies which branch of the repo to use. With your production config, you'd probably want to specify specific branches that correspond to releases of your product (eg [1.0, 1.1, 1.2]); but for development, we just want whatever your current working branch is (HEAD in git parlance).

content.sources[0].start_path: This specifies the path to the content source root of each Antora component in the repo you want to include. We just have one to start with, located in the docs directory. Each component must have an antora.yml file in its source root; this file defines its component name, display title, version string, and navigation structure (and it can also include AsciiDoc setting customizations that apply to just the particular component).

runtime.cache_dir: This specifies the location of Antora's internal cache of files. The default location (~/.cache/antora) would put it inside our Docker container, which is fine except that it means Antora has to rebuild the cache every time we run one of our Makefile commands. So for a moderately improved experience, we'll move it to the build/cache directory inside the repo itself (the build directory is already conveniently listed in the .gitignore file we cloned from the Antora Default UI).

ui.bundle.url: This specifies the location where Antora should fetch your customized UI files from. We want to use the UI built by this very repo, so we'll specify the path to the UI bundle built by our make ui.build command, ./build/ui-bundle.zip. We prefix the path with ./ to indicate to Antora that this is a local file path — usually this would be the full URL to the location where your build system has stored the latest build of your customized UI; the build system for the Antora project saves the latest stable build of the Antora Default UI to https://gitlab.com/antora/antora-ui-default/-/jobs/artifacts/master/raw/build/ui-bundle.zip?job=bundle-stable.

urls.html_extension_style: Setting this to indexify directs Antora to generate each page as an index.html file within a directory named for the page — eg https://docs.example.com/antora-ui-default/index.html for our start page — and to drop index.html from the page URL when linking to it — eg https://docs.example.com/antora-ui-default/. This is the style you'd usually use when hosting static content with NGINX, Apache, and many static hosting services. Antora allows for several different options, however, and while you don't need to set this in your dev config, it's nice to just so you can see what your URLs will look like in production.

Dockerized Docs Build

To make it easy to run the build, we'll add another Dockerfile, and add to our existing docker-compose configuration and Makefile. Add this antora.dockerfile (substituting 2.3.4 in the file for whatever Antora's latest stable version number is):

# antora.dockerfile
FROM antora/antora:2.3.4
RUN yarn global add http-server onchange
WORKDIR /srv/docs

Then add an antora service to our existing docker-compose.yml file:

# docker-compose.yml
version: '3'
services:
  antora:
    build:
      context: .
      dockerfile: antora.dockerfile
    environment:
      CI: 'true'
    ports:
      - 8051:8080
    volumes:
      - .:/srv/docs
  ui:
    build:
      context: .
      dockerfile: ui.dockerfile
    ports:
      - 8052:5252
    volumes:
      - .:/srv/docs-ui

Note that the CI: 'true' environment variable will suppress the "Edit this Page" link that otherwise would be displayed in the top right of each page (making the output generated by Antora in our Docker containers more similar to what we'd see in production).

Finally, add some antora.* tasks to our Makefile:

# Makefile

# help: @ Lists available make tasks
help:
	@egrep -oh '[0-9a-zA-Z_\.\-]+:.*?@ .*' $(MAKEFILE_LIST) | \
		awk 'BEGIN {FS = ":.*?@ "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}' | sort

# antora.build: @ Builds documentation production output (to build/site)
antora.build:
	docker-compose run -u $$(id -u) antora antora generate --clean antora-playbook.yml

# antora.run: @ Serves documentation output (on port 8051)
antora.run:
	docker-compose run --service-ports antora http-server build/site -c-1

# antora.watch: @ Watches for documentation changes and rebuilds (to build/site)
antora.watch:
	docker-compose run -u $$(id -u) -T antora onchange \
		-i antora-playbook.yml 'docs/**' \
		-- antora generate antora-playbook.yml

# antora.shell: @ Opens bash shell in antora container
antora.shell: CMD ?= /bin/sh
antora.shell:
	docker-compose run -u $$(id -u) antora $(CMD)

# node_modules: @ Runs initial ui npm install
node_modules:
	docker-compose run ui npm install

# ui.build: @ Builds ui production output (to build/ui-bundle.zip)
ui.build: node_modules
	docker-compose run -u $$(id -u) ui node_modules/.bin/gulp bundle

# ui.lint: @ Runs ui linting
ui.lint: node_modules
	docker-compose run -u $$(id -u) ui node_modules/.bin/gulp lint

# ui.run: @ Runs ui server in preview mode (port 8052)
ui.run: node_modules
	docker-compose run -u $$(id -u) --service-ports ui node_modules/.bin/gulp preview

# ui.shell: @ Opens bash shell in ui container
ui.shell: CMD ?= /bin/bash
ui.shell:
	docker-compose run -u $$(id -u) ui $(CMD)

This antora.dockerfile takes the base Antora image produced by the Antora project, and adds the http-server and onchange Node.js modules to it. Our Makefile's antora.run task uses the http-server module to serve the content Antora generates, and our antora.watch task uses the onchange module to automatically rebuild that content whenever we change a documentation source file.

Build Documentation

With those file changes in place, run this command from the repo root:

make antora.build

This will build the Docker image defined by the antora.dockerfile file, launch it as a container, and run Antora using the antora-playbook.yml playbook we just wrote. Antora will generate the built documentation files to the build/site directory in our repo.
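Under the hood, that make task just wraps a single docker-compose invocation (per the Makefile above), so you can also run it directly, without make:

docker-compose run -u $(id -u) antora antora generate --clean antora-playbook.yml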

Run Documentation Preview

Now let's take a look at that documentation — run this command from the repo root:

make antora.run

This will serve the built files via the http-server Node.js module on port 8051. Open up a browser, and navigate to:

http://localhost:8051/

You should be redirected to http://localhost:8051/antora-ui-default/, and see the start page of the Antora Default UI documentation.

Rebuild on Documentation Changes

Finally, in another terminal (while the make antora.run command is still running in the first terminal), run this command:

make antora.watch

This will use the onchange Node.js module to watch for changes to the documentation source, and automatically trigger Antora to rebuild whenever you make a change. We'll rely on this functionality in the next step.

Documentation Customization

Create New Component

Now, finally, we're ready to start writing our own documentation! We'll start with a "User Guide" component, initially consisting of a "root" module with one page. Create a content directory, and a user-guide directory within it — this is where our "User Guide" component will live.
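From the repo root, that's just one command (assuming a POSIX shell):

mkdir -p content/user-guide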

Within the user-guide directory, create a new antora.yml file. This file will contain the basic metadata about our "User Guide" component. Create it with this content:

# content/user-guide/antora.yml
name: example-user-guide
title: Example User Guide
version: master

Note that master is a special keyword in Antora that means a component with no version. Usually Antora includes the version number of a component in all the URLs it generates for that component, but it will omit the version number if it is master.
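To illustrate the difference (the 2.0 version string below is hypothetical; we're using master):

# content/user-guide/antora.yml (illustration only)
# version: '2.0'  -> pages published under /example-user-guide/2.0/
# version: master -> pages published under /example-user-guide/ (no version segment)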

Create First Page

Also within the user-guide directory, create a modules directory; within the modules directory, create a ROOT directory; and within the ROOT directory, create a pages directory. Within the pages directory, create our first page, index.adoc:

# content/user-guide/modules/ROOT/pages/index.adoc
= User Guide

== Welcome

Welcome to our product! This is the user guide.
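At this point, the component's directory layout should look like this (only the files we've created so far):

content/
└── user-guide/
    ├── antora.yml
    └── modules/
        └── ROOT/
            └── pages/
                └── index.adoc
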
Add Watch Path

Update the antora.watch task in our Makefile to the following, adding content/** as a path to watch for changes:

# Makefile
antora.watch:
	docker-compose run -u $$(id -u) -T antora onchange \
		-i antora-playbook.yml 'content/**' 'docs/**' \
		-- antora generate antora-playbook.yml

If you still have the make antora.watch command running in a terminal, kill it (by pressing control-C in the terminal where it's running), and then run make antora.watch again.

Register Component

Now we'll register the component in the antora-playbook.yml at the root of our repo, so that we can have Antora build it. We could add it as a second item to our sources list, but since we already have the current repo listed as a source, we can just change the existing single start_path setting to be a multiple start_paths setting (note the "s" on the end of the setting), and direct Antora to include any sub-directory of the content directory in our repo as a component:

# antora-playbook.yml
site:
  robots: allow
  start_page: antora-ui-default::index.adoc
  title: Example Documentation
  url: https://docs.example.com
content:
  sources:
    - url: ./
      branches: HEAD
      # start_path: docs   <- the old single-path setting, replaced by start_paths:
      start_paths:
        - docs
        - content/*
runtime:
  cache_dir: ./build/cache
ui:
  bundle:
    url: ./build/ui-bundle.zip
urls:
  html_extension_style: indexify

With our make antora.watch command running, as soon as you save this change to antora-playbook.yml, Antora will rebuild all its content. So make the change, and refresh the browser window you have opened at http://localhost:8051.

The page will look exactly the same as it did before — but click the "Antora Default UI" label in the bottom-left corner of the page. The bottom of the left navigation should expand to show two items: "Antora Default UI" and "Example User Guide". Click the "master" label directly below "Example User Guide", and you will navigate to your new "Example User Guide" index page at http://localhost:8051/example-user-guide/.

The component is listed as "Example User Guide", because that is the title we gave it in the content/user-guide/antora.yml file. And its URL path is /example-user-guide/ because we set its name to example-user-guide in that same file.

The title of the page is "User Guide", as displayed in the browser titlebar and the main title of the page's body, because that's the title we gave the page in its AsciiDoc source at content/user-guide/modules/ROOT/pages/index.adoc. And because we added a "Welcome" section to that page, we see "Welcome" as a section title in the page's body, as well as its right navigation.

You'll also want to change your antora-playbook.yml to make the new component the start page for the site (so that whenever someone navigates to https://docs.example.com/, they'll be redirected to https://docs.example.com/example-user-guide/ instead of https://docs.example.com/antora-ui-default/). Just replace antora-ui-default in the start_page setting of your antora-playbook.yml with example-user-guide:

# antora-playbook.yml
# start_page: antora-ui-default::index.adoc   <- replace this line...
start_page: example-user-guide::index.adoc    # ...with this one

Create Navigation

We don't have any left navigation for our new component, however, so let's fix that. Create a navigation file at content/user-guide/modules/ROOT/nav.adoc with this content:

# content/user-guide/modules/ROOT/nav.adoc
* xref:index.adoc#_welcome[Welcome!]

Then configure Antora to use this navigation file by updating the content/user-guide/antora.yml file like so:

# content/user-guide/antora.yml
name: example-user-guide
title: Example User Guide
version: master
nav:
  - modules/ROOT/nav.adoc

Go back to your browser and refresh the page, and you will see that Antora added a navigation list for the user-guide component to the left navigation, with one item beneath the root. This item links to the "Welcome" section of the "root" module's index page, displaying the link text as "Welcome!".

Create Another Page

Now create a second page for our component at content/user-guide/modules/ROOT/pages/how-it-works.adoc with this content:

# content/user-guide/modules/ROOT/pages/how-it-works.adoc
= This Is How It Works
:navtitle: But How Does It Work?

Well, to be honest, we're not exactly sure how any of this works.

Update the navigation file at content/user-guide/modules/ROOT/nav.adoc to add an item for our new page:

# content/user-guide/modules/ROOT/nav.adoc
* xref:index.adoc#_welcome[Welcome!]
* xref:how-it-works.adoc[]

Go back to your browser and refresh the page again, and you will see that Antora added another item to the left navigation. The item label is "But How Does It Work?", matching the navtitle attribute set under the title of the how-it-works.adoc page. The URL it links to is http://localhost:8051/example-user-guide/how-it-works/ — the first path segment comes from the component name defined in the component's antora.yml file, and the last path segment comes from the file name of the page itself. If the file were part of any module other than the "root" module of the component, its URL would include another path segment, for the module name (taken from the module's directory name), between the component name and the page name.
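For example, if we later added a hypothetical module named install with a quick-start.adoc page in it, the source path would map to a URL roughly like this:

content/user-guide/modules/install/pages/quick-start.adoc
  -> http://localhost:8051/example-user-guide/install/quick-start/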

When you click the link, you'll see that the page title is "This Is How It Works", as displayed in the browser titlebar and the main title of the page's body, matching the page title from the how-it-works.adoc file.

Development Workflow

Work on the UI

With your initial UI and documentation content now set up, whenever you want to make changes to the look-and-feel of your documentation, you would follow these steps:

  1. Fire up the Antora UI preview server with make ui.run
  2. View the Antora UI preview in your web browser at http://localhost:8052/
  3. Iteratively make changes to the CSS, Handlebars, and other UI files in the src directory of your project
Build the UI

Once your changes look good with the Antora UI preview content, follow these steps to incorporate them into the local version of your documentation:

  1. Build the UI bundle with make ui.build
  2. Pull the new UI bundle into your local doc build with make antora.build
  3. Fire up a web server for your local docs with make antora.run
  4. View your UI changes with your local copy of the docs at http://localhost:8051/
Write Documentation

And whenever you get the hankering to write some documentation, follow these steps:

  1. Fire up a web server for your local docs with make antora.run
  2. View your local copy of the docs at http://localhost:8051/
  3. Fire up the onchange watcher for the docs in a separate terminal with make antora.watch (to automatically re-build the docs whenever you make a change)
  4. Iteratively make changes to AsciiDoc files in the content directory of your project

Tuesday, October 13, 2020

How To Test OpenRC Services with Docker-Compose

Similar to how I abused Docker conceptually to test systemd services with docker-compose, I spent some time recently trying to do the same thing with OpenRC for Alpine Linux.

It basically requires the same steps as with systemd. With the base 3.12 Alpine image, it's a matter of:

  1. Install OpenRC
  2. Optionally map /sys/fs/cgroup
  3. Start up with /sbin/init
  4. Run tests via docker exec

1. Install OpenRC

The base Alpine images don't include OpenRC, so you have to install it with apk. I do this in my Dockerfile:

FROM alpine:3.12
RUN apk add openrc
CMD ["/sbin/init"]

2. Optionally map /sys/fs/cgroup

Unlike with systemd, I didn't have to set up any tmpfs mounts to get OpenRC services running. I also didn't have to map the /sys/fs/cgroup directory; but if I didn't, I'd get a bunch of cgroup-related error messages when starting and stopping services (although the services themselves still seemed to work fine). So I just went ahead and mapped the directory in my docker-compose.yml to avoid those error messages:

version: '3'
services:
  my_test_container:
    build: .
    image: my_test_image
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro

3. Start up with /sbin/init

With the Alpine openrc package, the traditional /sbin/init startup command works to start OpenRC. I added CMD ["/sbin/init"] to my Dockerfile to start up with it, but you could instead add command: /sbin/init to the service in your docker-compose.yml file.
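For example, here's a sketch of that compose-file alternative, using the same service as above (with command set here, the CMD line in the Dockerfile isn't needed):

# docker-compose.yml
version: '3'
services:
  my_test_container:
    build: .
    image: my_test_image
    command: /sbin/init
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro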

4. Run tests via docker exec

The above docker-compose.yml and Dockerfile will allow you to start up OpenRC in my_test_container with one command:

docker-compose up -d my_test_container

With OpenRC up and running, you can use a second command to execute a shell on the very same container to test it out:

docker-compose exec my_test_container /bin/sh

Or use exec to run other commands to test the services managed by OpenRC:

docker-compose exec my_test_container rc-status --servicelist
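Or, assuming you've added and enabled some hypothetical service named myservice in the image, you can poke at it directly with OpenRC's rc-service command:

docker-compose exec my_test_container rc-service myservice status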

Cleaning up

The cleanup steps with OpenRC are also basically the same as with systemd (or see the one-command shortcut after the list):

  1. Stop the running container: docker-compose stop my_test_container
  2. Remove the saved container state: docker-compose rm my_test_container
  3. Remove the built image: docker image rm my_test_image
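The one-command shortcut is docker-compose down with its --rmi flag, which stops and removes the container and also removes the images the compose file built:

docker-compose down --rmi all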

Friday, October 2, 2020

Testing Systemd Services on Arch, Fedora, and Friends

Following up on a previous post about how to test systemd services with docker-compose on Ubuntu, I spent some time recently trying to do the same thing with a few other Linux distributions. I was able to get the same tricks to work on these other distributions:

  • Amazon Linux
  • Arch
  • CentOS
  • Debian
  • Fedora
  • openSUSE
  • RHEL

A few of those distros required an additional tweak, however.

One more tmpfs directory for Arch and Fedora

For Arch and Fedora, I had to do one more thing: add /tmp as a tmpfs mount.

So the docker-compose.yml file for those distros should look like this:

version: '3'
services:
  my_test_container:
    build: .
    image: my_test_image
    tmpfs:
      - /run
      - /run/lock
      - /tmp
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro

Or when running as a regular docker command, start the container like this:

docker run \
    --tmpfs /run --tmpfs /run/lock --tmpfs /tmp \
    --volume /sys/fs/cgroup:/sys/fs/cgroup:ro \
    --detach --rm \
    --name my_test_container my_test_image

Different init script location for openSUSE

For openSUSE, the systemd init program is located at /usr/lib/systemd/systemd instead of /lib/systemd/systemd. So the Dockerfile I used for it looks like this:

FROM opensuse/leap:15
RUN zypper install -y systemd
CMD ["/usr/lib/systemd/systemd"]

Monday, September 14, 2020

Elixir Ed25519 Signatures With Enacl

The most-actively supported library for using ed25519 with Elixir currently looks to be enacl. It provides straightforward, idiomatic Erlang bindings for libsodium.

Installing

Installing enacl for a Mix project requires first installing your operating system's libsodium-dev package on your dev & build machines (as well as the regular libsodium package anywhere else you run your project binaries). Then add {:enacl, "~> 1.0.0"} to the deps section of your project's mix.exs file, and run mix deps.get to download the enacl package from Hex.
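For example, the deps section of mix.exs would end up with an entry like this (a minimal sketch, showing only the enacl dependency):

# mix.exs
defp deps do
  [
    {:enacl, "~> 1.0.0"}
  ]
end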

Keys

In the parlance of libsodium, the "secret key" is the full keypair, the "public key" is the public part of the keypair (the public curve point), and the "seed" is the private part of the keypair (the 256-bit secret). The seed is represented in enacl as a 32-byte binary string, as is the public key; and the secret key is the 64-byte binary concatenation of the seed plus the public key.

You can generate a brand new ed25519 keypair with enacl via the sign_keypair/0 function. After generating, usually you'd want to save the keypair somewhere as a base64- or hex-encoded string:

iex> keypair = :enacl.sign_keypair()
%{
  public: <<215, 90, 152, 1, 130, 177, 10, 183, 213, 75, 254, 211, 201, 100, 7, 58, 14, 225, 114, 243, 218, 166, 35, 37, 175, 2, 26, 104, 247, 7, 81, 26>>,
  secret: <<157, 97, 177, 157, 239, 253, 90, 96, 186, 132, 74, 244, 146, 236, 44, 196, 68, 73, 197, 105, 123, 50, 105, 25, 112, 59, 172, 3, 28, 174, 127, 96, 215, 90, 152, 1, 130, 177, 10, 183, 213, 75, 254, 211, 201, 100, 7, 58, ...>>
}
iex> <<seed::binary-size(32), public_key::binary>> = keypair.secret
<<157, 97, 177, 157, 239, 253, 90, 96, 186, 132, 74, 244, 146, 236, 44, 196, 68, 73, 197, 105, 123, 50, 105, 25, 112, 59, 172, 3, 28, 174, 127, 96, 215, 90, 152, 1, 130, 177, 10, 183, 213, 75, 254, 211, 201, 100, 7, 58, 14, 225, ...>>
iex> public_key == keypair.public
true
iex> seed <> public_key == keypair.secret
true
iex> public_key_base64 = public_key |> Base.encode64()
"11qYAYKxCrfVS/7TyWQHOg7hcvPapiMlrwIaaPcHURo="
iex> public_key_hex = public_key |> Base.encode16(case: :lower)
"d75a980182b10ab7d54bfed3c964073a0ee172f3daa62325af021a68f707511a"
iex> private_key_base64 = seed |> Base.encode64()
"nWGxne/9WmC6hEr0kuwsxERJxWl7MmkZcDusAxyuf2A="
iex> private_key_hex = seed |> Base.encode16(case: :lower)
"9d61b19deffd5a60ba844af492ec2cc44449c5697b326919703bac031cae7f60"

You can also reconstitute a keypair from just the private part (the "seed") with the enacl sign_seed_keypair/1 function:

iex> reloaded_keypair = (
...>   "9d61b19deffd5a60ba844af492ec2cc44449c5697b326919703bac031cae7f60"
...>   |> Base.decode16!(case: :lower)
...>   |> :enacl.sign_seed_keypair()
...> )
%{
  public: <<215, 90, 152, 1, 130, 177, 10, 183, 213, 75, 254, 211, 201, 100, 7, 58, 14, 225, 114, 243, 218, 166, 35, 37, 175, 2, 26, 104, 247, 7, 81, 26>>,
  secret: <<157, 97, 177, 157, 239, 253, 90, 96, 186, 132, 74, 244, 146, 236, 44, 196, 68, 73, 197, 105, 123, 50, 105, 25, 112, 59, 172, 3, 28, 174, 127, 96, 215, 90, 152, 1, 130, 177, 10, 183, 213, 75, 254, 211, 201, 100, 7, 58, ...>>
}
iex> reloaded_keypair == keypair
true

Signing

Libsodium has a series of functions for signing large documents that won't fit into memory or otherwise have to be split into chunks — but for most cases, the simpler enacl sign/2 or sign_detached/2 functions are what you want to use.

The enacl sign/2 function produces a binary string that combines the original message with the message signature, which the sign_open/2 function can later unpack and verify. This is ideal for preventing misuse, since it makes it harder to just use the message without verifying the signature first.

The enacl sign_detached/2 function produces the message signature as a stand-alone 64-byte binary string — if you need to store or send the signature separately from the message itself, this is the function you'd use. And often when using detached signatures, you will also base64- or hex-encode the resulting signature:

iex> message = "test"
"test"
iex> signed_message = :enacl.sign(message, keypair.secret)
<<143, 152, 176, 38, 66, 39, 246, 31, 9, 107, 120, 221, 227, 176, 240, 13, 25, 1, 236, 254, 16, 80, 94, 65, 71, 57, 6, 144, 122, 82, 53, 107, 233, 83, 26, 215, 109, 77, 1, 219, 7, 67, 77, 72, 147, 94, 245, 81, 222, 80, ...>>
iex> signature = :enacl.sign_detached(message, keypair.secret)
<<143, 152, 176, 38, 66, 39, 246, 31, 9, 107, 120, 221, 227, 176, 240, 13, 25, 1, 236, 254, 16, 80, 94, 65, 71, 57, 6, 144, 122, 82, 53, 107, 233, 83, 26, 215, 109, 77, 1, 219, 7, 67, 77, 72, 147, 94, 245, 81, 222, 80, ...>>
iex> signature <> message == signed_message
true
iex> signature |> Base.encode64()
"j5iwJkIn9h8Ja3jd47DwDRkB7P4QUF5BRzkGkHpSNWvpUxrXbU0B2wdDTUiTXvVR3lBULDNm0/t1DY8GBoxfCA=="
iex> signature |> Base.encode16(case: :lower)
"8f98b0264227f61f096b78dde3b0f00d1901ecfe10505e41473906907a52356be9531ad76d4d01db07434d48935ef551de50542c3366d3fb750d8f06068c5f08"

Verifying

To verify a signed message (the message combined with the signature), and then access the message itself, you'd use the enacl sign_open/2 function:

iex> unpacked_message = :enacl.sign_open(signed_message, public_key)
{:ok, "test"}

If you try to verify the signed message with a different public key (or if the message is otherwise improperly signed or not signed at all), you'll get an error result from the sign_open/2 function:

iex> wrong_public_key = (
...>   "3d4017c3e843895a92b70aa74d1b7ebc9c982ccf2ec4968cc0cd55f12af4660c"
...>   |> Base.decode16!(case: :lower)
...> )
<<61, 64, 23, 195, 232, 67, 137, 90, 146, 183, 10, 167, 77, 27, 126, 188, 156, 152, 44, 207, 46, 196, 150, 140, 192, 205, 85, 241, 42, 244, 102, 12>>
iex> error_result = :enacl.sign_open(signed_message, wrong_public_key)
{:error, :failed_verification}

To verify a message with a detached signature, you need the original message itself (in the same binary form with which it was signed), and the signature (in binary form as well). You pass them both, plus the public key, to the sign_verify_detached/3 function; sign_verify_detached/3 returns true if the signature is legit, and false otherwise:

iex> :enacl.sign_verify_detached(signature, message, public_key)
true
iex> :enacl.sign_verify_detached(signature, "wrong message", public_key)
false
iex> :enacl.sign_verify_detached(signature, message, wrong_public_key)
false

Full Example

To put it all together, if you have an ed25519 private key, like "nWGxne/9WmC6hEr0kuwsxERJxWl7MmkZcDusAxyuf2A=", and you want to sign a message ("test") that someone else already has in their possession, you'd do the following to produce a stand-alone signature that you can send them:

iex> secret_key = (
...>   "nWGxne/9WmC6hEr0kuwsxERJxWl7MmkZcDusAxyuf2A="
...>   |> Base.decode64!()
...>   |> :enacl.sign_seed_keypair()
...>   |> Map.get(:secret)
...> )
<<157, 97, 177, 157, 239, 253, 90, 96, 186, 132, 74, 244, 146, 236, 44, 196, 68, 73, 197, 105, 123, 50, 105, 25, 112, 59, 172, 3, 28, 174, 127, 96, 215, 90, 152, 1, 130, 177, 10, 183, 213, 75, 254, 211, 201, 100, 7, 58, 14, 225, ...>>
iex> signature_base64 = (
...>   "test"
...>   |> :enacl.sign_detached(secret_key)
...>   |> Base.encode64()
...> )
"j5iwJkIn9h8Ja3jd47DwDRkB7P4QUF5BRzkGkHpSNWvpUxrXbU0B2wdDTUiTXvVR3lBULDNm0/t1DY8GBoxfCA=="

And if you're the one given an ed25519 public key ("11qYAYKxCrfVS/7TyWQHOg7hcvPapiMlrwIaaPcHURo=") and signature ("j5iwJkIn9h8Ja3jd47DwDRkB7P4QUF5BRzkGkHpSNWvpUxrXbU0B2wdDTUiTXvVR3lBULDNm0/t1DY8GBoxfCA=="), with the original message ("test") in hand you can verify the signature like the following:

iex> public_key = (
...>   "11qYAYKxCrfVS/7TyWQHOg7hcvPapiMlrwIaaPcHURo="
...>   |> Base.decode64!()
...> )
<<215, 90, 152, 1, 130, 177, 10, 183, 213, 75, 254, 211, 201, 100, 7, 58, 14, 225, 114, 243, 218, 166, 35, 37, 175, 2, 26, 104, 247, 7, 81, 26>>
iex> signature_legitimate? = (
...>   "j5iwJkIn9h8Ja3jd47DwDRkB7P4QUF5BRzkGkHpSNWvpUxrXbU0B2wdDTUiTXvVR3lBULDNm0/t1DY8GBoxfCA=="
...>   |> Base.decode64!()
...>   |> :enacl.sign_verify_detached("test", public_key)
...> )
true