GitHub Actions

For work we use Jenkins as our CI/CD solution. No choice there. It works though, so no complaints either.

On the Internet there are many possible solutions.

Which one to use? While I did not dig too deep into each one, they are all basically very similar, simply because the problem is well understood and essentially a “solved problem”. And yes, I am simplifying things a lot here:

  • You do good CI if you merge often. The common solution is to use git, create short-lived branches and merge them (depending on your development strategy)
  • Run linters, formatters and tests upon a code commit (and verify during a push upstream)
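The second bullet can also be wired up locally with a git hook. A minimal sketch for a NodeJS project; the `npm run lint` and `npm test` commands are assumptions about the project’s package.json:

```shell
# Install a pre-commit hook that blocks the commit if lint or tests fail.
mkdir -p .git/hooks   # already exists in a real repository
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
set -e          # abort the commit on the first failing step
npm run lint    # linter
npm test        # tests
EOF
chmod +x .git/hooks/pre-commit
```

The same checks then run again in CI during the push, so a bypassed hook (git commit --no-verify) still gets caught upstream.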

The CD part is different and depends a lot on the back-end: how and where your application runs. Once you have an application, a zip artifact, a container image etc., deploying it is not a technical problem. The strategy for moving from QA to PROD is an entirely different problem, but it too depends very much on your back-end: Kubernetes has its ArgoCD/FluxCD, while in a non-container environment you have to use other solutions.

Back to GitHub Actions

I never needed it: as the only developer in my little software world, I use a locally hosted git repo. Some items I put on GitHub, but only when I think someone can benefit from them. One of those is the hackathon-starter template from here (quite nice), a NodeJS app with plenty of dependencies, which served as my test sample to play with GitHub Actions.

And this is the result:

name: NodeJS Steps
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node: [ '14' ]
    name: My Build
    steps:
      - uses: actions/checkout@v2
      - name: Setup node
        uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node }}
      - run: npm install
      - run: npm test
  docker:
    needs: build
    runs-on: ubuntu-latest
    name: Docker Build and Push
    steps:
      - uses: actions/checkout@v2
      - name: Build and push
        id: docker_build
        uses: mr-smithers-excellent/docker-build-push@v5
        with:
          image: hkubota/hackathon
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

Certain things I like a lot:

  • The matrix option is great to test various versions of (in this case) NodeJS
  • Jobs are easy to define incl. dependencies
  • Accessing secrets and env variables from GitHub is straightforward

Basically it’s like Jenkins. Which is nice. I prefer GitHub Actions though, as it’s one thing less for me to maintain.

DataDog on ARMv7

Playing with DataDog. There are amd64 and arm64 (ARMv8) agents available. Since I have several ARMv7 machines, time to compile it!

Instructions here worked; just make sure you have a Python 3 virtualenv set up. Some dependencies I had to install:

apt install python-dev cmake
pip install wheel
pip install -r requirements.txt

I already had Go installed. Make sure $GOROOT and $GOPATH are correct: my $GOROOT points to /usr/local/go while $GOPATH points to ~/go. Until I made sure they were correct, I got all kinds of odd error messages when trying to use the invoke script.
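In case it helps, this is the kind of environment setup I mean (paths as on my machine; adjust to yours):

```shell
# Go environment before running the invoke-based build
export GOROOT=/usr/local/go
export GOPATH="$HOME/go"
export PATH="$GOPATH/bin:$GOROOT/bin:$PATH"
```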

The whole compilation on an AllWinner A20 @ 1GHz takes about 2h. Also, ~/go/pkg swelled up to 4GB and filled my disk, which cost me another hour.

For the config file datadog.yaml I copied the corresponding file from a “normal” amd64 agent (from /etc/datadog-agent/). To start the agent:

(venv) harald@cubie:~/go/src/$ ./bin/agent/agent run -c bin/agent/dist/datadog.yaml

And it shows up nicely in the DataDog UI:

CubieTruck on DataDog

It’s not perfect, as there are those 3 integration issues which I have to track down. But I got the Docker/containerd integration and basic CPU/RAM/disk statistics, which is mainly what I needed from my ARMv7 machines.

Integration issues

Kustomize vs Helm

While both Helm and Kustomize are often used as if they are alternatives, I do not see them this way:


Kustomize allows your K8S YAML files to have “variations”: you define a base configuration (YAML files), and on top of that you add modifications. For example, you create a DEV and a QA variation: DEV uses a different image, a different port and a different namespace, but otherwise it matches QA. And variations can have further variations too. Nice. Elegant. To install the DEV variation:

kubectl apply -k dev

What I like about it is that it is very close to how we did OS configuration at work about 15 years ago: a base (global) configuration, then a directory for the region, then one per country, and one per data center. Differences between DEV and QA were handled in those scripts too, at the earliest possible layer. It worked, was immediately understood by everyone (KISS), and was robust and extensible.

It’s also built into kubectl.
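A minimal sketch of what such a layout can look like (file names, namespace and image name are made up):

```yaml
# base/kustomization.yaml: the shared configuration
resources:
  - deployment.yaml
  - service.yaml
```

```yaml
# dev/kustomization.yaml: the DEV variation on top of the base
namespace: myapp-dev
resources:
  - ../base
images:
  - name: myapp
    newTag: dev
```

kubectl apply -k dev then renders the base with the DEV modifications applied.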


Helm is more similar to a package manager like apt or yum: you just say “I’d like WordPress to be installed” and it fetches all its dependencies and installs them.

Creating a Helm chart for your own purposes is not easy though, and it seems overkill for simple projects unless you plan to distribute your software as a simple-to-install package. Microservices are my main target, and they tend to stay simple.

Which one to use?

If you are a developer, Kustomize makes much more sense to me. If you want to distribute a non-trivial K8S application which possibly has dependencies, then Helm does that well.

That said, both still force you to understand and spell out all those K8S resources which the application needs. One can hope that OAM will address this.

Paul Graham’s Articles

By pure chance, via the books section at the bottom of Matthew Ruffell’s blog, I found a link to articles written by Paul Graham. Well worth reading. Very insightful. In particular the article about Good and Bad Procrastination.

Here an excerpt from How to Do What You Love about choosing a job when you are young:

Don’t decide too soon. Kids who know early what they want to do seem impressive, as if they got the answer to some math question before the other kids. They have an answer, certainly, but odds are it’s wrong.

A friend of mine who is a quite successful doctor complains constantly about her job. When people applying to medical school ask her for advice, she wants to shake them and yell “Don’t do it!” (But she never does.) How did she get into this fix? In high school she already wanted to be a doctor. And she is so ambitious and determined that she overcame every obstacle along the way—including, unfortunately, not liking it.

Now she has a life chosen for her by a high-school kid.

Time Lapse Videos

I got a TP-Link KC120. Looks good. Good lens. Very nice magnetic stand. But the firmware… I should have checked whether there is an API to get a single frame or a video stream out of it, because there is no such thing on this camera. That means it has to be used via the TP-Link software, which is not to my liking.

Since there seems to be no alternative firmware and TP-Link does not seem to develop this camera further, I am looking for alternatives. There’s good info available for exactly that. The Android IP WebCam app caught my eye, as I already use an Android app to add a camera to OBS Studio: DroidCam OBS, which uses the NDI protocol to turn an Android phone into an NDI camera. IP WebCam is the same idea (and yes, I probably could have used the DroidCam OBS software too).

So I tried it, and it’s everything I ever wanted from a security web cam: I can take single-shot pictures (great for time lapse), it can record continuously, and it can detect motion or things happening on the screen. Here’s a screenshot of the configuration screen:

Android IP WebCam Control Screen

The only drawback is that the lens of the phone is not wide-angle. Next Android phone will have a wider angle lens.

But besides this, I can finally do time lapse videos. Here is the recording part. Note the attempt to make sure I get a frame every X seconds: WiFi being as unreliable as it is, wget sometimes hangs for 15 min (its default timeout), which initially gave me random 15 min gaps.

#!/bin/bash
set -euo pipefail

# Record single pictures
# Maybe add time stamp to each

# Snapshot URL of the IP WebCam app; placeholder, adjust IP/port to your phone
url="http://PHONE_IP:8080/shot.jpg"
prefix="shot"

export cnt=0
export pic_every_sec=30
export timeout_in_sec=25

rm -f shot.jpg

while true ; do
  sleep $pic_every_sec &
  date +%T.%6N
  t=$(printf "%05d" $cnt)
  if wget -q --timeout=$timeout_in_sec --user=USERNAME --password="PASSWORD" \
       -O shot.jpg "$url" ; then
    exiv2 -M"set Exif.Photo.DateTimeOriginal $(date +'%Y:%m:%d %H:%M:%S')" shot.jpg
    mv shot.jpg ${prefix}-$t.jpg
  fi
  rm -f shot.jpg   # clean up a possibly partial download
  let cnt=cnt+1
  wait             # the background sleep sets the pace: one frame every $pic_every_sec seconds
done

And here is the making of the movie:

#!/bin/bash
set -euo pipefail

prefix="shot"   # same prefix as in the recording script

# Add timestamp
for i in ${prefix}-* ; do
  timestamp=$(exiv2 -g Exif.Photo.DateTimeOriginal -Pv $i | sed 's/:/-/1;s/:/-/1')
  echo $timestamp >&2
  echo -n $timestamp >/tmp/timestamp.txt
  ffmpeg -v quiet -i $i -vf 'drawtext=textfile=/tmp/timestamp.txt:x=(w-tw)-10:y=h-(2*lh):fontcolor=white:box=0:boxcolor=0x00000000@1:borderw=4:fontsize=40' -f mjpeg -
done | ffmpeg -v quiet -r:v 30 -i - -codec:v libx265 -threads 2 -preset fast -r:v 30 -x265-params crf=28:pools=2 -f mp4 -an the_movie.mp4

I was worried that the phone gets hot, but that turned out to be a total non-issue.

Doxygen, Sphinx and RTD

TIL about Sphinx and that it can create neat documentation by extracting the relevant information from Python code, with the help of comments that describe what is defined in the lines that follow. Doxygen can extract a lot of information from C++ code, but the output is… visually lacking.

So Exhale was created which builds on Breathe (Anyone else see a pattern here?) to make the output of Doxygen into something Sphinx can process.

And the result is neat:

Example of Exhale HTML Output

I’m using those Read-The-Docs pages a lot, but I never knew how they are created. Now I know. Not that I care about C++ in particular; I just found it neat to use a Python documentation tool to document C++ code.


Kubernetes Controllers

Controllers are great when they work and when it’s easy enough to create them. But they are not trivial at all, although several attempts are in progress to simplify this:

I tried the latter, as it allows you to create controllers in your language of choice, and its example list looks sufficient to handle plenty of typical use-cases.

And it works. Two weeks ago it did not; no idea if it was my K8S cluster or something else, but now it works out-of-the-box. kubebuilder and the Operator Framework need way more Go skills and time.

Deleting stuck namespaces in K8S

Somehow I get namespaces which are in “Terminating” state forever:

❯ kubectl get ns
NAME                        STATUS        AGE
default                     Active        22d
ingress-nginx               Active        11d
kube-node-lease             Active        22d
kube-public                 Active        22d
kube-system                 Active        22d
memcached-operator-system   Terminating   37m
olm                         Terminating   69m
operators                   Active        69m

Root cause is the finalizers which…don’t finalize. No idea yet why. Until then, this is how to delete those never-terminating namespaces:

❯ NS="olm"
❯ kubectl get namespace $NS -o json > $NS.json
# Edit $NS.json and delete items in spec.finalizers
❯ more $NS.json
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "creationTimestamp": "2021-06-21T09:08:48Z",
        "deletionTimestamp": "2021-06-21T09:28:32Z",
        "name": "olm",
        "resourceVersion": "2779704",
        "uid": "6bf2112a-85bc-42c0-b17f-cf9010f7dab7"
    "spec": {
        "finalizers": []
    "status": {
❯ kubectl replace --raw "/api/v1/namespaces/$NS/finalize" -f ./$NS.json
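The manual editing step can be skipped if jq is available. A sketch of the same procedure as a single pipe (assuming the namespace is otherwise empty):

```shell
NS="olm"
# Fetch the namespace, drop the finalizers, and send it to the finalize endpoint
kubectl get namespace "$NS" -o json \
  | jq '.spec.finalizers = []' \
  | kubectl replace --raw "/api/v1/namespaces/$NS/finalize" -f -
```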

jq and jc: JSON in the shell

Everyone knows jq: it’s the JSON parser which can do nifty things like picking out data from a JSON document.

The problem is that many commands do not output JSON. E.g. the df command, or pretty much any normal shell command. kubectl is one of the exceptions.

So what can we do about this? Re-implement all tools! But that’s a lot of work…

Or we use jc! Then you can do things like:

(venv) harald@r2s1:~$ ping -c 4 | jc --ping | jq
  "destination_ip": "",
  "data_bytes": 56,
  "pattern": null,
  "destination": "",
  "packets_transmitted": 4,
  "packets_received": 4,
  "packet_loss_percent": 0,
  "duplicates": 0,
  "time_ms": 3005,
  "round_trip_ms_min": 1,
  "round_trip_ms_avg": 1.337,
  "round_trip_ms_max": 1.663,
  "round_trip_ms_stddev": 0.241,
  "responses": [
      "type": "reply",
      "timestamp": null,
      "bytes": 64,
      "response_ip": "",
      "icmp_seq": 1,
      "ttl": 64,
      "time_ms": 1.26,
      "duplicate": false
      "type": "reply",
      "timestamp": null,
      "bytes": 64,
      "response_ip": "",
      "icmp_seq": 2,
      "ttl": 64,
      "time_ms": 1.43,
      "duplicate": false
      "type": "reply",
      "timestamp": null,
      "bytes": 64,
      "response_ip": "",
      "icmp_seq": 3,
      "ttl": 64,
      "time_ms": 1,
      "duplicate": false
      "type": "reply",
      "timestamp": null,
      "bytes": 64,
      "response_ip": "",
      "icmp_seq": 4,
      "ttl": 64,
      "time_ms": 1.66,
      "duplicate": false

PowerShell has something similar: input/output are objects rather than a byte stream as in Unix. Now I have both: byte stream by default, and JSON-structured data when I want it! It’s like eating a cake and keeping it too!
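Since jc emits plain JSON, the usual jq tricks apply. A small sketch, using a trimmed copy of the ping result above saved to a file:

```shell
# Trimmed copy of the jc --ping output from above
cat > ping.json <<'EOF'
{"packets_transmitted": 4, "packets_received": 4, "packet_loss_percent": 0,
 "round_trip_ms_avg": 1.337,
 "responses": [{"icmp_seq": 1, "time_ms": 1.26}, {"icmp_seq": 2, "time_ms": 1.43}]}
EOF
jq '.round_trip_ms_avg' ping.json     # prints 1.337
jq '.responses[].time_ms' ping.json   # one line per packet
```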

Battery on my Surface Pro 3

Devices which use batteries should not be stored at full charge. Unfortunately, that’s effectively what happens when you keep them constantly connected to power, like my Surface Pro 3.

My old Dell had a feature to stop charging at 80%, and my Samsung tablet can do something similar. Turns out my old Surface Pro 3 can do the same: it’s just labeled “Kiosk Mode” and it’s in the UEFI (enter via Power+Vol Up). It stops charging at 50%.

You can also check the battery health:

How to create a battery report on a Surface

And this is what the report looks like:

My battery report