Creating TLS Certificates for Home Use – Part 2

Part 1 was technically correct, but it turned out to be too manual for me to actually use:

  • you only do it once in a while (about once a year, since certs might have a 1-year validity period)
  • you don’t do it if it’s a lot of extra manual work

So here is Part 2 because I found something easier: Step CLI and Step CA.

The main difference to the openssl method (which continues to work): this CA runs as a service. So on the client side you only need to connect once, and from then on you can get certificates from a single place.

Get the releases and install them, either as a Debian package or a tar file.
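
For example, on Debian amd64 both parts install as .deb packages. The version numbers below are just examples; pick whatever is current on the releases pages:

wget https://github.com/smallstep/cli/releases/download/v0.15.3/step-cli_0.15.3_amd64.deb
wget https://github.com/smallstep/certificates/releases/download/v0.15.6/step-ca_0.15.6_amd64.deb
sudo dpkg -i step-cli_0.15.3_amd64.deb step-ca_0.15.6_amd64.deb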

Extra step on ARMv7 (and possibly all 32-bit architectures): replace “badger” with “badgerv2” in .step/config/ca.json. Also add this “claims” section under the “authority” key unless you like the defaults:

    "authority": {
        "claims": {
            "minTLSCertDuration": "5m",
            "maxTLSCertDuration": "168h",
            "defaultTLSCertDuration": "24h",
            "disableRenewal": false,
            "minHostSSHCertDuration": "5m",
            "maxHostSSHCertDuration": "168h",
            "minUserSSHCertDuration": "5m",
            "maxUserSSHCertDuration": "24h"
        },
        "provisioners": [

Then run your CA:

harald@opz3:~$ step-ca .step/config/ca.json --password-file step-ca-pw.txt
 2020/11/19 18:57:55 Serving HTTPS on :8443 …

To set up the client side, install step-cli and then do:

step ca bootstrap --ca-url https://opz3.lan:8443 \
                  --fingerprint 8d9345ed9c8f84729fb82005cf36b9e595c7a40efe2db52bee114c2dbdabd63d
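
To check that the bootstrap worked, ask the CA for its status:

❯ step ca health
ok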

You got the fingerprint during CA initialization; alternatively, get the root certificate and compute its fingerprint:

# on CA server
> step ca root root.crt
> step certificate fingerprint root.crt

Once done, back on the client, getting a certificate is simple:

❯ step ca certificate m75q.lan m75q.crt m75q.key --provisioner-password-file ./pass.txt

and renew like this (--force to overwrite without asking):

❯ step ca renew --force m75q.crt m75q.key
# Or automated and restarting nginx:
❯ step ca renew --daemon --exec "nginx -s reload" m75q.crt m75q.key
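
To see what you actually got (validity window, SANs, issuer), inspect the certificate:

❯ step certificate inspect --short m75q.crt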

Proper docs exist for step CLI and step CA. There are also nice examples for mTLS.

Update: See https://github.com/smallstep/certificates/discussions/427, which gave me some hints on how to decouple the key encryption passphrases for the 3 different keys:

  1. the root CA key
  2. the intermediate CA key
  3. the provisioner key

To change the passphrases:

cd $(step path)
cd secret/
cp intermediate_ca_key intermediate_ca_key.original
openssl ec -in intermediate_ca_key.original | openssl ec -out intermediate_ca_key -aes256
read EC key 
read EC key 
Enter PEM pass phrase: <OLD PASSPHRASE>
writing EC key 
writing EC key 
Enter PEM pass phrase: <NEW_PASSPHRASE>
Verifying - Enter PEM pass phrase: <NEW_PASSPHRASE>

Now when you start your CA via

step-ca .step/config/ca.json

it’ll ask for the passphrase of the intermediate key. Repeat the procedure for the root CA key. You can even move the root key to another place, as it’s not used during normal operation.

The JWK provisioner’s encryptedKey (see $(step path)/config/ca.json) is still encrypted with the original passphrase from the initial setup. Let’s fix this:

❯ cd $(step path)/config
❯ jq -r '.authority.provisioners[] | select(.type=="JWK") | .encryptedKey' ca.json | step crypto jwe decrypt > k.json
Please enter the password to decrypt the content encryption key: 
{"use":"sig","kty":"EC","kid":"cnUEPau6NgopLjZHrsL3v3PXvF14EIAe-PIHmjA5fQQ","crv":"P-256","alg":"ES256","x":"xxxxxxxxxx","y":"yyyyyyyyyy","d":"dddddddddd"}

On the client side you can now use the decrypted k.json like this:

❯ TOKEN=$(step ca token foo --provisioner "provisioner@name" --key k.json)
✔ Provisioner: provisioner@name (JWK) [kid: -6pq-22r6yaQg]
❯ step ca certificate foo foo.crt foo.key --token $TOKEN 

Update 2020-11-21: This is working so well, I made an Ansible Playbook to make (re-)installs much easier for me: https://github.com/haraldkubota/step-ca

Renewing certificates is quite simple too (more details here):

step ca renew --daemon --exec "nginx -s reload" internal.crt internal.key

Testing Nextcloud

I like Dropbox: it’s convenient and works on all my devices (Linux, Windows, Android). Except that its free plan now only works on 3 devices. Time to look for alternatives: Nextcloud.

Runs on Linux (ARM and Intel), runs in containers or Kubernetes, and has clients for anything I use.

First install: on my old and unused Cubietruck: 2 core, 1 GHz ARM Cortex-A7, 2 GB RAM, SATA disk. Should be more than capable. Tested and installed with this docker-compose.yaml file:

version: '3' 
 
services: 
  db: 
    image: linuxserver/mariadb 
    restart: always 
    volumes: 
      - "./data/mariadb:/config" 
    environment: 
      - PUID=2000 
      - PGID=100 
      - TZ=Asia/Tokyo 
      - REMOTE_SQL=http://URL1/your.sql  #optional
      - MYSQL_ROOT_PASSWORD=somethingrootpw 
      - MYSQL_PASSWORD=somethingpw 
      - MYSQL_DATABASE=nextcloud 
      - MYSQL_USER=nextcloud 
 
  app: 
    image: nextcloud 
    depends_on: 
      - db 
    ports: 
      - 8080:80 
    links: 
      - db 
    volumes: 
      - "./data/var/www/html:/var/www/html" 
    environment: 
      - MYSQL_DATABASE=nextcloud 
      - MYSQL_USER=nextcloud 
      - MYSQL_PASSWORD=somethingpw
      - MYSQL_HOST=db 
    restart: always

start with the usual

$ docker-compose up -d

and that’s about it. If you want to use cron from the Docker host, then do

$ docker exec -it nextcloud_app_1 /bin/bash   
# apt update
# apt install -y sudo
^D

and add a cron job on the Docker host:

*/5 * * * * docker exec nextcloud_app_1 /bin/bash -c "sudo -u www-data php -f /var/www/html/cron.php" >/tmp/docker.log 2>&1

Test it once manually (see below). If it worked, Nextcloud is aware of it and from then on expects cron to kick in every 5 minutes.
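
The manual test is the same command the cron job runs (container name nextcloud_app_1 as above):

$ docker exec nextcloud_app_1 /bin/bash -c "sudo -u www-data php -f /var/www/html/cron.php"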

Now log in to the web interface (http://dockerhost:8080) and follow the normal Nextcloud setup procedure.

Nice! I didn’t get photo previews though. I can live with that.

Not so nice is the performance, or rather the lack thereof: telegraf shows the 2 cores quite busy whenever I load any page. Here’s light use on the Cubietruck:

And this is on my AMD-based fanless mini-PC:

Basically the Cubietruck works, but it’s slow. Both systems have SATA disks and are fanless, so I have no reason to use the Cubietruck for this purpose.
And I have to say: it’s neat. It synchronizes data nicely with Linux and Windows. Android works too, but to make that meaningful, I first have to make my Nextcloud instance reachable from the Internet.

CGroups V2 on Debian

CGroups V2 is not enabled by default on Debian, but it can be enabled easily:

# echo 'GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT} systemd.unified_cgroup_hierarchy=1"' >> /etc/default/grub
# grub-mkconfig -o /boot/grub/grub.cfg
# reboot

To find out which cgroup version you are using, look at /sys/fs/cgroup. This is v1:

❯ ls /sys/fs/cgroup
 blkio  cpuacct      cpuset   freezer  memory   net_cls,net_prio  perf_event  rdma     unified
 cpu    cpu,cpuacct  devices  hugetlb  net_cls  net_prio          pids        systemd

and this is v2:

❯ ls /sys/fs/cgroup
cgroup.controllers  cgroup.max.descendants  cgroup.stat             cgroup.threads  system.slice 
cgroup.max.depth    cgroup.procs            cgroup.subtree_control  init.scope      user.slice
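
A quicker check than eyeballing the directory: stat reports the filesystem type of /sys/fs/cgroup, which is tmpfs on v1 and cgroup2fs on v2:

❯ stat -fc %T /sys/fs/cgroup
cgroup2fs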

Google Play Services Sucking Battery Dry

Yesterday my phone’s battery went completely empty. That’s unheard of and rather odd. Turns out that Google Play Services, which usually uses only a small amount of power, had used an enormous amount. The projected time for a full battery to last was 4 hours. The phone was slightly warm too. Odd. A reboot did not help.

This post has a working fix: clear the Play Services data. I added a reboot for good measure; since then battery drain is back to normal.

Update: Happened again. Turned off Backup. Since then the battery sucking problem is gone. Of course now I have no backup…

Edit InfluxDB Data

If you have outliers in InfluxDB, you might want to get rid of them. I had 2 temperature readings of 0°C out of about 3M data points. They make graphs go from 0 to 40 when they could go from 20 to 40 instead.

So I had to get rid of those data points. And it was quite simple (see here):

bash-5.0# influx
Connected to http://localhost:8086 version 1.8.2
InfluxDB shell version: 1.8.2
> use telegraf.autogen
Using database telegraf
Using retention policy autogen
> select * from room1 where temp < 5
name: room1
time                host sensor              temp
----                ---- ------              ----
1587770762000000000 opz2 w1_slave_temp_input 0
1587770793000000000 opz2 w1_slave_temp_input 0
> delete from room1 where time = 1587770762000000000
> delete from room1 where time = 1587770793000000000
^D
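
Note that InfluxQL’s DELETE only accepts conditions on time and tags, not on field values like temp, which is why the SELECT to find the exact timestamps comes first. If the outliers fall into a known period, a time-range delete works too (the dates here are made up):

> delete from room1 where time >= '2020-04-24T00:00:00Z' and time <= '2020-04-25T00:00:00Z'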

My git Server

For the longest time I have used my Synology NAS as my git server. Not only does it “just work”, it also has 2 disks in a RAID1 setup and I back it up regularly. The only problem is that it’s a bit noisy: 1 fan and 2 spinning 3.5″ disks make background noise. Besides my gaming PC, it’s the noisiest thing I have. And I don’t use my gaming PC much.

So I wanted to test a git server on a solid-state storage (SSD, USB stick, etc.) based system. And I happen to have one already: my fan-less PC which runs Docker!

Here’s the docker-compose.yml file:

version: '3'
services:
  gitserver:
    restart: always
    container_name: gitserver
    image: ensignprojects/gitserver
    ports:
      - "2222:22"
    volumes:
      - "./opt_git:/opt/git"
      - "./dot.ssh/authorized_keys:/home/git/.ssh/authorized_keys"
      - "./etc_ssh/ssh_host_ed25519_key:/etc/ssh/ssh_host_ed25519_key"
      - "./etc_ssh/ssh_host_ed25519_key.pub:/etc/ssh/ssh_host_ed25519_key.pub"
      - "./etc_ssh/ssh_host_rsa_key:/etc/ssh/ssh_host_rsa_key"
      - "./etc_ssh/ssh_host_rsa_key.pub:/etc/ssh/ssh_host_rsa_key.pub"

./opt_git/ is where the repos will be stored.

etc_ssh contains the host keys for the git server. If you skip those SSH keys, the container’s host SSH key will change every time the container is recreated. You don’t want that.

dot.ssh/authorized_keys contains the public SSH key I use for this git server. Create a key pair via

ssh-keygen -t ed25519 -f gitserver

and then add the resulting gitserver.pub to ./dot.ssh/authorized_keys.

To run the git server container:

docker-compose up -d
docker-compose start
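
From a client it’s normal git over SSH, except that the non-standard port requires the ssh:// URL form (the scp-style git@host:path syntax can’t carry a port). Host and repo names below are placeholders:

# on the Docker host: create a bare repo inside the mounted volume
# (make sure the container’s git user can write to it)
git init --bare ./opt_git/myrepo.git

# on a client:
GIT_SSH_COMMAND="ssh -i ~/.ssh/gitserver" git clone ssh://git@dockerhost:2222/opt/git/myrepo.git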

Hiding the Mouse Cursor in X

Konsole hides the mouse cursor when typing. This is generally good, but it’s Konsole’s behavior, not a global one, and I generally don’t want to see the mouse cursor while typing. So how do I remove it everywhere?

Help comes from an old utility: xbanish. Clone the repo, and assuming you have the needed build dependencies, 0.2 s later you have a 27 kB binary. Refreshing.

❯ time make
cc -O2 -Wall -Wunused -Wmissing-prototypes -Wstrict-prototypes -Wunused -I/usr/X11R6/include -c xbanish.c -o xbanish.o
cc xbanish.o -L/usr/X11R6/lib -lX11 -lXfixes -lXi -o xbanish
make  0.18s user 0.04s system 84% cpu 0.265 total

❯ ls -la xbanish
-rwxr-xr-x 1 harald users 27288 Sep 22 09:05 xbanish

Put it in /usr/local/bin and make it autostart when KDE starts. Now the mouse cursor disappears in any X11 program while typing.
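
A sketch of those two steps, assuming KDE Plasma’s usual autostart directory ~/.config/autostart:

sudo install -m 755 xbanish /usr/local/bin/
cat > ~/.config/autostart/xbanish.desktop <<'EOF'
[Desktop Entry]
Type=Application
Name=xbanish
Exec=/usr/local/bin/xbanish
EOF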

k3s – local persistent storage

When using k3s with the built-in local persistent storage provider, once in a while you have to edit the stored files. That usually works from inside the container, but sometimes you have to replace a 150 kB binary file, and since containers usually don’t have scp installed, there’s a problem…

The fix is to modify the storage from outside the container. How depends on the persistent storage provider: if it’s NFS, mount it via NFS from another machine; if it’s an S3 bucket, edit it directly; etc.

k3s has a local persistent storage driver called “local-path”. But where are those files, so I can replace one of them? Turns out they are in /var/lib/rancher/k3s/storage/ on a node. Which node, and which directory inside storage/?

Finding Your PVC

To find the PVC named “grafana-lib”, do

❯ kubectl describe persistentVolumeClaim grafana-lib
Name:          grafana-lib
Namespace:     default
StorageClass:  local-path
Status:        Bound
Volume:        pvc-a89cee51-0000-47d7-a095-2d48400768e3
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
               volume.kubernetes.io/selected-node: knode5
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      3Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    grafana-deployment-669fc6d658-l78z7
Events:        <none>

and the Volume name plus the selected-node annotation show where it is: knode5:/var/lib/rancher/k3s/storage/pvc-a89cee51…
A bit of jq magic gets you a complete list of all PVCs:

❯ kubectl get persistentVolumeClaims -o json | jq '[.items[] | { "Name": .metadata.name, "Volume": .spec.volumeName, "Node": .metadata.annotations."volume.kubernetes.io/selected-node" }]'
[
  {
    "Name": "influxdb-data",
    "Volume": "pvc-bbac2312-0000-450e-aee1-41a0d5517adb",
    "Node": "knode6"
  },
  {
    "Name": "grafana-log",
    "Volume": "pvc-22814c7b-0000-4b8b-99b6-ab4a4ca6c65c",
    "Node": "knode5"
  },
  {
    "Name": "grafana-lib",
    "Volume": "pvc-a89cee51-0000-47d7-a095-2d48400768e3",
    "Node": "knode5"
  }
]
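
With node and directory known, replacing that 150 kB binary is a plain copy to the node, ideally with the workload scaled down so nothing holds the file open. A sketch with a made-up file name (the target directory is the volume name from above; depending on the local-path provisioner version it may carry a namespace/PVC-name suffix):

❯ kubectl scale deployment grafana-deployment --replicas=0
❯ scp grafana.db knode5:/var/lib/rancher/k3s/storage/pvc-a89cee51-0000-47d7-a095-2d48400768e3/
❯ kubectl scale deployment grafana-deployment --replicas=1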

Video Editing on Linux – OpenShot

Video editing is fun, but I suck at it, so I keep it simple. For a long time I used Kino, but it’s no longer developed. It had 3 main things going for it:

  • It ran on Linux
  • It was simple
  • It worked with my DV camera (FireWire AKA IEEE-1394, remember that?)

Since I had a small video to “edit” (mainly add a title and cut a few seconds off the start and end), I looked around and found a Kino replacement: OpenShot. It’s simple enough to use almost immediately, without a steep learning curve. I like it.
