GitHub Actions

For work we use Jenkins as our CI/CD solution. No choice there. It works though, so no complaints either.

On the Internet there are plenty of possible solutions.

Which one to use? While I did not dig too deep into each one, they are all basically very similar, simply because the problem is well understood and essentially solved. And yes, I'm simplifying a lot here:

  • You do good CI if you merge often. The common solution is to use git, create short-lived branches and merge them frequently (depending on your development strategy); a minimal sketch follows after this list
  • Run linters, formatters and tests on every commit (and verify again when pushing upstream)
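
A minimal sketch of such a short-lived branch workflow (branch name and commit message are just examples):

git checkout -b fix/small-cleanup        # create a short-lived branch
git commit -am "Clean up logging"        # one or a few small commits
git push -u origin fix/small-cleanup     # push; CI runs linters, formatters and tests
# merge (e.g. via a pull request) once CI is green, then delete the branch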

The CD part is different and depends a lot on the back-end: how and where does your application run? Once you have an application, a zip artifact, a container image etc., deploying it is not a technical problem. The strategy for moving from QA to PROD is an entirely different problem, but it very much depends on your back-end. Kubernetes has ArgoCD/FluxCD, while in a non-container environment you have to use other solutions.

Back to GitHub Actions

I never needed it: as the only developer in my little software world, I use a locally hosted git repo. Some items I put on GitHub, but only when I think someone can benefit from them. One of those is the hackathon-starter template from here (quite nice), a NodeJS app with plenty of dependencies, and it served as my test sample to play with GitHub Actions.

And this is the result:

name: NodeJS Steps
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node: [ '14' ]
    name: My Build
    steps:
      - uses: actions/checkout@v2
      - name: Setup node
        uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node }}
      - run: npm install
      - run: npm test
  docker:
    needs: build
    runs-on: ubuntu-latest
    name: Docker Build and Push
    steps:
      - uses: actions/checkout@v2
      - name: Build and push
        id: docker_build
        uses: mr-smithers-excellent/docker-build-push@v5
        with:
          image: hkubota/hackathon
          registry: docker.io
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

Certain things I like a lot:

  • The matrix option is great for testing several versions of (in this case) NodeJS; see the example right after this list
  • Jobs are easy to define, including dependencies between them
  • Accessing secrets and environment variables from GitHub is straightforward
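
For example, letting the matrix list several versions makes GitHub Actions run the build job once per entry (the version list below is just an illustration):

    strategy:
      matrix:
        node: [ '12', '14', '16' ]   # one build job per listed NodeJS version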

Basically it's like Jenkins, which is nice. I do prefer GitHub Actions though, as it's one thing less for me to maintain.

Minio S3 and Events

One of the great points of AWS’s S3 is that you can get notifications about objects being uploaded/deleted/changed/etc.

Turns out Minio can do that too! See https://docs.min.io/docs/minio-bucket-notification-guide.html for details. A quick test using nsq as queue manager worked.

Start nsq as a Docker container (on host t620.lan):

$ docker run --rm -p 4150-4151:4150-4151 nsqio/nsq /nsqd
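
To see what actually arrives on the topic, nsq_tail can be used; a quick sketch, assuming the nsqio/nsq image ships nsq_tail next to nsqd:

$ docker run --rm nsqio/nsq /nsq_tail --topic=minio --nsqd-tcp-address=t620.lan:4150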

Configure Minio:

$ mc admin config set cubie notify_nsq:1 nsqd_address="t620.lan:4150" queue_dir="" queue_limit="0" tls="off" tls_skip_verify="on" topic="minio"

Restart the minio server. It’ll now show one extra line when starting:

Dec 08 22:37:37 cubie minio[4502]: SQS ARNs:  arn:minio:sqs::1:nsq

Now configure the actual events (any changes in the bucket “Downloads”):

$ mc event add cubie/Downloads arn:minio:sqs::1:nsq
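
To double-check, the registered notifications can be listed with mc:

$ mc event list cubie/Downloads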

After uploading a file I get an event like this:

{ 
  "EventName": "s3:ObjectCreated:Put", 
  "Key": "Downloads/to-kuro.sh", 
  "Records": [ 
    { 
      "eventVersion": "2.0", 
      "eventSource": "minio:s3", 
      "awsRegion": "", 
      "eventTime": "2020-12-08T13:40:56.970Z", 
      "eventName": "s3:ObjectCreated:Put", 
      "userIdentity": { 
        "principalId": "minio" 
      }, 
      "requestParameters": { 
        "accessKey": "minio", 
        "region": "", 
        "sourceIPAddress": "192.168.1.134" 
      }, 
      "responseElements": { 
        "content-length": "0", 
        "x-amz-request-id": "164EC17C5E9BEB3E", 
        "x-minio-deployment-id": "d3d81f71-a06c-451e-89be-b1dc4e891054", 
        "x-minio-origin-endpoint": "https://192.168.1.36:9000" 
      }, 
      "s3": { 
        "s3SchemaVersion": "1.0", 
        "configurationId": "Config", 
        "bucket": { 
          "name": "Downloads", 
          "ownerIdentity": { 
            "principalId": "minio" 
          }, 
          "arn": "arn:aws:s3:::Downloads" 
        }, 
        "object": { 
          "key": "testscript.sh", 
          "size": 337, 
          "eTag": "5f604e1b35b1ca405b35503b86b56d51", 
          "contentType": "application/x-sh", 
          "userMetadata": { 
            "content-type": "application/x-sh" 
          }, 
          "sequencer": "164EC17C6153CB1E" 
        } 
      }, 
      "source": { 
        "host": "192.168.1.134", 
        "port": "", 
        "userAgent": "MinIO (linux; amd64) minio-go/v7.0.6 mc/2020-11-25T23:04:07Z" 
      } 
    } 
  ] 
}

Neat!

Testing Nextcloud

I like Dropbox: it's convenient and works on all my devices (Linux, Windows, Android). Except that now it only works on 3 devices. Time to look for alternatives: Nextcloud.

Runs on Linux (ARM and Intel), runs in containers or Kubernetes, and has clients for anything I use.

First install: on my old and unused Cubietruck (2 cores, 1 GHz ARM Cortex-A7, 2 GB RAM, SATA disk). That should be more than capable. Tested and installed with this docker-compose.yaml file:

version: '3' 
 
services: 
  db: 
    image: linuxserver/mariadb 
    restart: always 
    volumes: 
      - "./data/mariadb:/config" 
    environment: 
      - PUID=2000 
      - PGID=100 
      - TZ=Asia/Tokyo 
      - REMOTE_SQL=http://URL1/your.sql  #optional
      - MYSQL_ROOT_PASSWORD=somethingrootpw 
      - MYSQL_PASSWORD=somethingpw 
      - MYSQL_DATABASE=nextcloud 
      - MYSQL_USER=nextcloud 
 
  app: 
    image: nextcloud 
    depends_on: 
      - db 
    ports: 
      - 8080:80 
    links: 
      - db 
    volumes: 
      - "./data/var/www/html:/var/www/html" 
    environment: 
      - MYSQL_DATABASE=nextcloud 
      - MYSQL_USER=nextcloud 
      - MYSQL_PASSWORD=somethingpw
      - MYSQL_HOST=db 
    restart: always

Start it with the usual

$ docker-compose up -d

and that’s about it. If you want to use cron from the Docker host, then do

$ docker exec -it nextcloud_app_1 /bin/bash   
# apt update
# apt install -y sudo
^D

and add a cron job on the Docker host:

*/5 * * * * docker exec nextcloud_app_1 /bin/bash -c "sudo -u www-data php -f /var/www/html/cron.php" >/tmp/docker.log 2>&1

Test it once manually (see below). If it works, Nextcloud notices and from then on expects cron to kick in every 5 minutes.
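
The manual test is simply the command from the cron line above:

$ docker exec nextcloud_app_1 /bin/bash -c "sudo -u www-data php -f /var/www/html/cron.php"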

Now log in to the web interface (http://dockerhost:8080) and follow the normal Nextcloud setup procedure.

Nice! I didn't get previews of photos though. I can live with that.

Not so nice is the performance, or rather the lack thereof: telegraf shows both cores quite busy whenever I load any page, even under light use on the Cubietruck. My AMD-based fanless mini-PC handles the same pages with far less effort.

Basically the Cubietruck works, but it's slow. Both systems have SATA disks and are fanless, so I have no reason to use the Cubietruck for this purpose.
And I have to say: Nextcloud is neat. It synchronizes data nicely with Linux and Windows. Android works too, but to make that meaningful, I first have to make my Nextcloud instance reachable from the Internet.

My git Server

For the longest time I've used my Synology NAS as my git server. Not only does it "just work", it also has 2 disks in a RAID1 setup and I back it up regularly. The only problem is that it's a bit noisy: 1 fan and 2 spinning 3.5″ disks create constant background noise. Next to my gaming PC it's the noisiest thing I have, and I don't use my gaming PC much.

So I wanted to try a git server on a solid-state (SSD, USB stick etc.) based system. And I happen to have one already: my fanless PC which runs Docker!

Here’s the docker-compose.yml file:

version: '3'
services:
  gitserver:
    restart: always
    container_name: gitserver
    image: ensignprojects/gitserver
    ports:
            - "2222:22"
    volumes:
      - "./opt_git:/opt/git"
      - "./dot.ssh/authorized_keys:/home/git/.ssh/authorized_keys"
      - "./etc_ssh/ssh_host_ed25519_key:/etc/ssh/ssh_host_ed25519_key"
      - "./etc_ssh/ssh_host_ed25519_key.pub:/etc/ssh/ssh_host_ed25519_key.pub"
      - "./etc_ssh/ssh_host_rsa_key:/etc/ssh/ssh_host_rsa_key"
      - "./etc_ssh/ssh_host_rsa_key.pub:/etc/ssh/ssh_host_rsa_key.pub"

./opt_git/ is where the repos will be stored.

./etc_ssh/ contains the ssh host keys for the git server. If you skip these, the container's ssh host key will change whenever the container is recreated. You don't want that.

./dot.ssh/authorized_keys contains the public ssh key I use for this git server. Create one via

ssh-keygen -t ed25519 -f ~/.ssh/gitserver

and then add ~/.ssh/gitserver.pub into ./dot.ssh/authorized_keys
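
For example, append it like this:

cat ~/.ssh/gitserver.pub >> ./dot.ssh/authorized_keys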

To run the git server container:

docker-compose up -d
docker-compose start
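
Once the container runs, creating and cloning a repository could look roughly like this; a sketch only, assuming git is available inside the image and the git user owns /opt/git (myrepo.git and dockerhost are placeholders):

docker exec gitserver git init --bare /opt/git/myrepo.git
docker exec gitserver chown -R git:git /opt/git/myrepo.git
git clone ssh://git@dockerhost:2222/opt/git/myrepo.git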

Docker Volume Backup

Doing a backup of volumes is simple if you know the command:

docker run --volumes-from influxdb \
  -v $(pwd)/backup:/backup -i -t ubuntu \
  tar cfvj /backup/influxdb.tar.bz2 /var/lib/influxdb

influxdb is the (stopped) container, /var/lib/influxdb is where its (only) volume is mounted, and ./backup/ on the host is where the compressed tar backup file ends up.
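
The restore goes the same way in reverse; a sketch, assuming the container is stopped and its volume may be overwritten:

docker run --volumes-from influxdb \
  -v $(pwd)/backup:/backup -i -t ubuntu \
  bash -c "cd / && tar xfvj /backup/influxdb.tar.bz2"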

Grafana Alerts

So far I haven't had to bother with alerts: InfluxDB collects stats (via API or telegraf) and I watch them in Grafana. Today I wanted alerts.

First you have to create notification channels in Grafana: Alerting/Notification Channels.

Email

Since I use Gmail, this section in grafana.ini works:

##### SMTP / Emailing #####
[smtp]
enabled = true
host = smtp.gmail.com:587
user = MY_EMAIL_ADDRESS@gmail.com
password = MY_16_CHARACTER_PASSWORD
from_address = MY_EMAIL_ADDRESS@gmail.com
from_name = GrafanaAlerts

The only tricky part is the password, which is an app-specific password you create at https://myaccount.google.com/security under "App Passwords".

Line

If you use Line, Grafana also ships a Line notification channel type; it just needs a Line Notify token, configured in the same Alerting/Notification Channels screen.

ELK with HTTPS

The previous blog entry did not use HTTPS, so all communication happened in plain text, which makes using passwords less than ideal. This entry fixes that.

The full source code for the docker-compose.yml and haproxy.cfg files is available here.

docker-compose.yml

version: '3'
services:
  elk:
    restart: always
    container_name: ek
    image: sebp/elk
    environment:
      - LOGSTASH_START=0
      - TZ=Asia/Tokyo
    expose:
      - 9200
      - 5601
      - 5044
    volumes:
      - "./data:/var/lib/elasticsearch"
  haproxy:
    restart: always
    container_name: haproxy2
    image: haproxy:2.1
    ports:
      - 9100:9100
      - 9990:9990
      - 5601:5601
      - 6080:6080
    volumes:
      - "./haproxy:/usr/local/etc/haproxy"

What’s this docker-compose file doing?

It starts 2 containers: the ELK container and the HAProxy container. ELK receives traffic on ports 9200 (ElasticSearch) and 5601 (Kibana). HAProxy connects to the outside world via ports 9100, 9990, 5601 and 6080.

The data directory for ElasticSearch is ./data. The HAProxy configuration, including the TLS certificate, lives in ./haproxy/.

What’s HAProxy doing?

HAProxy is the TLS terminator and port forwarder for the ELK container. Here's part of the HAProxy config file (in ./haproxy/haproxy.cfg):

frontend kibana-http-in
    bind *:6080
    default_backend kibana

frontend kibana-https-in
    bind *:5601 ssl crt /usr/local/etc/haproxy/ssl/private/
    default_backend kibana

backend kibana
    balance roundrobin
    option httpclose
    option forwardfor
    server kibana elk:5601 maxconn 32

  • HAProxy listens on port 6080 for HTTP traffic and forwards it to the backend.
  • HAProxy listens on port 5601 for HTTPS traffic. The TLS connection is terminated here, and the unencrypted traffic is forwarded to the backend.
  • The backend is port 5601 on the elk container.
  • Not shown in the excerpt above, but the same happens for ElasticSearch traffic; a sketch of that part follows right after this list.
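
Following the same pattern and the ports mentioned in the next paragraph, the ElasticSearch part would look roughly like this (a sketch, not a copy of my actual config):

frontend elasticsearch-http-in
    bind *:9990
    default_backend elasticsearch

frontend elasticsearch-https-in
    bind *:9100 ssl crt /usr/local/etc/haproxy/ssl/private/
    default_backend elasticsearch

backend elasticsearch
    balance roundrobin
    option httpclose
    option forwardfor
    server elasticsearch elk:9200 maxconn 32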

Thus you would connect via HTTPS to port 5601 for Kibana and port 9100 for ElasticSearch. You could use HTTP on port 6080 for Kibana and port 9990 for ElasticSearch.

Steps to get ELK with HTTPS running

While I like automation, I doubt I'll configure this more than twice before the automation breaks due to new ELK versions, so it's only "half-automated".

Pre-requisites:

  • A Linux server with at least 2 GB RAM and Docker plus docker-compose installed. I prefer Debian; both versions 9 and 10 work.
  • A TLS certificate (e.g. via Let’s Encrypt) in ./haproxy/ssl/private/DOMAIN.pem (full cert chain + private key, no passphrase)
  • Define some variables which we'll refer to later:

DOCKER_HOST=elastic.my.domain.org
ES_HTTP_PORT=9990

PW_BOOTSTRAP="top.Secret"
PW_KIBANA="My-Kibana-PW"
PW_APM="My-APM-PW"
PW_LOGSTASH="My-LogStash-PW"
PW_BEATS="My-Beats-PW"
PW_MONITORING="My-Monitoring-PW"
PW_ELASTIC="My-Elastic-PW"

Prepare your Docker host:

# ES refuses to start if vm.max_map_count is too low
echo "vm.max_map_count = 262144" | sudo tee /etc/sysctl.d/10es.conf
sudo sysctl --system

# verify:
sysctl vm.max_map_count
# Output should be "vm.max_map_count = 262144"

# Directory for docker-compose files
mkdir elk
cd elk

# Data directory for ES
mkdir data
sudo chown 991:991 data

# Start docker-compose
docker-compose up -d

It takes about 1-2 minutes; you'll see the CPU suddenly becoming less busy. Now you can connect to http://DOCKER_HOST:6080. There is no check for accounts or passwords yet.

Note: If you delete the container (e.g. via “docker-compose down”), you have to do the following steps again!

# Enter the ELK container
docker exec -it ek /bin/bash

Inside the container modify some settings (replace the variables PW_BOOTSTRAP and PW_KIBANA with the actual passwords):

cd /opt/elasticsearch
mkdir /etc/elasticsearch/certs
bin/elasticsearch-certutil cert -out /etc/elasticsearch/certs/elastic-certificates.p12 -pass ""

cat >> /etc/elasticsearch/elasticsearch.yml <<_EOF_
xpack.security.enabled: true

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

_EOF_

cd /opt/elasticsearch
echo "${PW_BOOTSTRAP}" | bin/elasticsearch-keystore add bootstrap.password

chown -R elasticsearch:elasticsearch /etc/elasticsearch

# edit /opt/kibana/config/kibana.yml with Kibana user name and password
sed -e 's/.*elasticsearch.username:.*/elasticsearch.username: "kibana"/' \
    -e 's/.*elasticsearch.password:.*/elasticsearch.password: "${PW_KIBANA}"/' \
    < /opt/kibana/config/kibana.yml > /tmp/kibana.yml \
  && cp /tmp/kibana.yml /opt/kibana/config/kibana.yml

cat >> /opt/kibana/config/kibana.yml <<_EOF_
xpack.security.encryptionKey: "Secret 32 char long string of chars"
xpack.security.secureCookies: true
_EOF_

chown kibana:kibana /opt/kibana/config/kibana.yml

exit

Restart the containers:

docker-compose restart

Again wait about a minute. Then execute the following from anywhere (it does not have to be the Docker host):

function setPW() {
  curl -uelastic:${PW_BOOTSTRAP} -XPUT -H "Content-Type:application/json" \
    "http://${DOCKER_HOST}:${ES_HTTP_PORT}/_xpack/security/user/$1/_password" \
    -d "{ \"password\":\"$2\" }"
}

# Since ES is running by now, set the passwords
setPW "kibana" "${PW_KIBANA}"
setPW "apm_system" "${PW_APM}"
setPW "logstash_system" "${PW_LOGSTASH}"
setPW "beats_system" "${PW_BEATS}"
setPW "remote_monitoring_user" "${PW_MONITORING}"
setPW "elastic" "${PW_ELASTIC}"

Now you should be able to log in to Kibana at https://DOCKER_HOST:5601 with the account "elastic" and its password. And you can connect to ElasticSearch via https://DOCKER_HOST:9100.
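
A quick way to verify the HTTPS endpoint from the command line (assuming the certificate matches DOCKER_HOST):

curl -u elastic:${PW_ELASTIC} "https://${DOCKER_HOST}:9100/_cluster/health?pretty"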

Done!

For convenience I create an index, plus a role and a user which can only write to that index:

# Create index logs
curl -uelastic:${PW_ELASTIC} -XPUT -H "Content-Type:application/json" \
  "http://${DOCKER_HOST}:${ES_HTTP_PORT}/logs" -d '{
  "settings": { "number_of_shards": 1 },
  "mappings": { "properties": {
    "timestamp": { "type": "date" },
    "status":    { "type": "integer" },
    "channel":   { "type": "text" },
    "msg":       { "type": "text" }
  }}
}'

# Create a data item
curl -uelastic:${PW_ELASTIC} -XPOST -H "Content-Type:application/json" \
  "http://${DOCKER_HOST}:${ES_HTTP_PORT}/logs/_doc" \
  -d '{ "timestamp": "'$(date --iso-8601=seconds -u)'",
        "status": 200, "channel": "curl", "msg": "Initial test via curl"}'

# Just for verification: find your previous entry via this
curl -uelastic:${PW_ELASTIC} -XGET -H "Content-Type:application/json" \
  "http://${DOCKER_HOST}:${ES_HTTP_PORT}/logs/_search"

# Create a role and a user to index data directly into ElasticSearch

curl -uelastic:${PW_ELASTIC} -XPOST -H "Content-Type:application/json" \
  "http://${DOCKER_HOST}:${ES_HTTP_PORT}/_security/role/logwriter" \
  -d '{"cluster":[],
       "indices":[{"names":["logs*"],"privileges":["create_doc"],"allow_restricted_indices":false}],
       "applications":[],"run_as":[],"metadata":{},"transient_metadata":{"enabled":true}}'

curl -uelastic:${PW_ELASTIC} -XPOST -H "Content-Type:application/json" \
  "http://${DOCKER_HOST}:${ES_HTTP_PORT}/_security/user/logger" \
  -d '{ "password": "PASSWORD", "roles": [ "logwriter" ],
        "full_name": "Logging user", "email": "my.email@gmail.com" }'

And finally I can log from Cloudflare Workers into ELK via HTTPS:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function esLog(status, msg) {
  const esUrl = 'https://DOCKER_HOST:9100/logs/_doc';
  const headers = new Headers();
  const now = new Date();
  const body = {
    timestamp: now.toISOString(),
    status: status,
    channel: "log-to-es",
    msg: msg
  };
  headers.append('Authorization', 'Basic ' + btoa('logger:PASSWORD'));
  headers.append('Content-Type', 'application/json');
  try {
    // send one log document to ElasticSearch and report success via the HTTP status
    const res = await fetch(esUrl, {
      method: 'POST',
      body: JSON.stringify(body),
      headers: headers,
    });
    return res.ok;
  } catch (err) {
    return false;
  }
}

/**
 * Respond to the request
 * @param {Request} request
 */
async function handleRequest(request) {

  let now1 = new Date();
  await esLog(201, "now1 is set...");
  let now2 = new Date();
  await esLog(202, `now2-now1=${now2-now1} ms`);

  return new Response("Check Kibana for 2 log entries", {status: 200})
}

When you run this once, you'll get 2 log entries in ES.