Minio S3 and Events

One of the great features of AWS’s S3 is that you can get notifications about objects being uploaded, deleted, changed, etc.

Turns out Minio can do that too! See https://docs.min.io/docs/minio-bucket-notification-guide.html for details. A quick test using nsq as the queue manager worked.

Start nsq as a Docker container (on host t620.lan):

$ docker run --rm -p 4150-4151:4150-4151 nsqio/nsq /nsqd

Configure Minio:

$ mc admin config set cubie notify_nsq:1 nsqd_address="t620.lan:4150" queue_dir="" queue_limit="0" tls="off" tls_skip_verify="on" topic="minio"

Restart the minio server. It’ll now show one extra line when starting:

Dec 08 22:37:37 cubie minio[4502]: SQS ARNs:  arn:minio:sqs::1:nsq

Now configure the actual events (any changes in the bucket “Downloads”):

$ mc event add cubie/Downloads arn:minio:sqs::1:nsq
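
To double-check that the notification target got attached to the bucket, mc can list the configured events (a quick sanity check; depending on the mc version the subcommand is mc event list or mc event ls):

$ mc event list cubie/Downloads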

After uploading a file I get an event like this:

{ 
  "EventName": "s3:ObjectCreated:Put", 
  "Key": "Downloads/to-kuro.sh", 
  "Records": [ 
    { 
      "eventVersion": "2.0", 
      "eventSource": "minio:s3", 
      "awsRegion": "", 
      "eventTime": "2020-12-08T13:40:56.970Z", 
      "eventName": "s3:ObjectCreated:Put", 
      "userIdentity": { 
        "principalId": "minio" 
      }, 
      "requestParameters": { 
        "accessKey": "minio", 
        "region": "", 
        "sourceIPAddress": "192.168.1.134" 
      }, 
      "responseElements": { 
        "content-length": "0", 
        "x-amz-request-id": "164EC17C5E9BEB3E", 
        "x-minio-deployment-id": "d3d81f71-a06c-451e-89be-b1dc4e891054", 
        "x-minio-origin-endpoint": "https://192.168.1.36:9000" 
      }, 
      "s3": { 
        "s3SchemaVersion": "1.0", 
        "configurationId": "Config", 
        "bucket": { 
          "name": "Downloads", 
          "ownerIdentity": { 
            "principalId": "minio" 
          }, 
          "arn": "arn:aws:s3:::Downloads" 
        }, 
        "object": { 
          "key": "testscript.sh", 
          "size": 337, 
          "eTag": "5f604e1b35b1ca405b35503b86b56d51", 
          "contentType": "application/x-sh", 
          "userMetadata": { 
            "content-type": "application/x-sh" 
          }, 
          "sequencer": "164EC17C6153CB1E" 
        } 
      }, 
      "source": { 
        "host": "192.168.1.134", 
        "port": "", 
        "userAgent": "MinIO (linux; amd64) minio-go/v7.0.6 mc/2020-11-25T23:04:07Z" 
      } 
    } 
  ] 
}
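
To actually see these events on the nsq side, nsq_tail from the same nsq image can subscribe to the topic; a minimal sketch, assuming the nsqd container from above is reachable at t620.lan:4150 and the topic is “minio” as configured:

$ docker run --rm nsqio/nsq /nsq_tail --nsqd-tcp-address=t620.lan:4150 --topic=minio

Each uploaded/deleted object then shows up as one JSON message like the one above.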

Neat!

Wireguard VPN

Tested between my home machine and my server-on-the-Internet, and…it just worked once I stopped using my usual Debian 10 server and “upgraded” to Ubuntu 20.04. A really good description is here.

Significantly easier to configure than SoftEther, which is what I used before. SoftEther can do much more, but if all you want is a VPN tunnel, Wireguard it is.
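
For reference, this is roughly the shape of the config involved; a minimal sketch with placeholder keys and addresses, not my actual setup (the linked description covers generating the keys with wg genkey and wg pubkey):

# /etc/wireguard/wg0.conf on the home machine
[Interface]
PrivateKey = <home machine private key>
Address = 10.0.0.2/24

[Peer]
PublicKey = <server public key>
Endpoint = server.example.com:51820
AllowedIPs = 10.0.0.0/24
PersistentKeepalive = 25

# bring the tunnel up
$ sudo wg-quick up wg0

The server side mirrors this with a ListenPort = 51820 in its [Interface] section and the home machine as its [Peer].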

Synology’s DSM and Minio’s S3

It’s not straightforward to use Minio’s S3 server as a back-end for DSM’s Cloud Sync. Here is how to make it work:

Enable minio with TLS

# Create a certificate for Minio
step ca certificate cubie.lan ~/.minio/certs/public.crt ~/.minio/certs/private.key --provisioner-password-file=$HOME/.step/pass/provisioner_pass.txt

export MINIO_ACCESS_KEY=access_key
export MINIO_SECRET_KEY=secret_key_very_secret
export MINIO_DOMAIN=cubie.lan
 
minio server /s3

You can now access this storage via https://cubie.lan:9000. Note MINIO_DOMAIN, which enables access to buckets via BUCKET.cubie.lan instead of cubie.lan/BUCKET.
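
If you drive this server with mc (as in the events section above), the alias needs to be set up once; a sketch, reusing the access/secret key from the environment variables above:

$ mc alias set cubie https://cubie.lan:9000 access_key secret_key_very_secret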

Cloud Sync and Minio’s S3

In DSM open Cloud Sync, create a new connection, select S3 Storage:

Now the trick: DSM does not use https://cubie.lan:9000/BUCKET/ to access your bucket, but instead uses https://BUCKET.cubie.lan:9000/, so you need a DNS entry for each bucket you use in DSM (a CNAME will do).
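
With dnsmasq as the local DNS server (an assumption on my part; any DNS server works), that is one line per bucket; the bucket name “backup” is only an example:

# dnsmasq resolves the CNAME as long as it knows the target itself (e.g. via /etc/hosts)
cname=backup.cubie.lan,cubie.lan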

Click Next in the above DSM screen and leave the Remote Path empty (Root folder). Changing this will break the replication.

That’s it. Three points really:

  • Must use https
  • Uses DNS names to access buckets
  • Don’t use sub-directories inside a bucket

HTTPS on Synology’s DSM

My NAS is a Synology DS212 and it can do https. But to make it use my own CA’s certificate, a bit of extra work is needed:

Add my own root CA’s Certificate

# Copy to the default folder for CA Root Certs of DSM 
cp root_ca.crt /usr/share/ca-certificates/mozilla/myCA.crt

# Linking to the system folder
ln -s /usr/share/ca-certificates/mozilla/myCA.crt /etc/ssl/certs/myCA.pem 

# Create hashed link
cd /etc/ssl/certs
ln -s myCA.pem `openssl x509 -hash -noout -in myCA.pem`.0

cat myCA.pem >> /etc/ssl/certs/ca-certificates.crt

# Testing
openssl verify -CApath /etc/ssl/certs myCA.pem

Use our own TLS Certificate

Create certificate

step ca certificate ds.lan ds.crt ds.key --kty RSA --size 2048 --not-after=8760h

DSM → Control Panel → Security → Certificate → Add. Then Configure and use the new one as system default.

Now https://ds.lan:5001 will use the new certificate. Repeat in 1 year. Since the default maximum lifetime of certificates was 720h, I had to change this to 1 year (8760h) on the step CA server:

    "minTLSCertDuration": "5m", 
    "maxTLSCertDuration": "8760h",
    "defaultTLSCertDuration": "24h",

USB Checker QWAY U2P

USB QC and PD can use 5V, 9V, 12V, 15V, or 20V at currents from 0.5A to 5A, but when you want to know which charger offers what and how much a device actually uses, you are mostly hoping for the best. A fix is a measurement device like the QWAY U2P (AKA WITRN U2).

Here is a great and very detailed wiki page including an analysis of the HID and BT protocol.

To understand the different modes (4 buttons, some with short/long press behaviors), it’s best to watch some YouTube videos (this, this and this).

Creating TLS Certificates for Home Use – Part 2

Part 1 was technically correct, but it turns out it’s too manual for me to actually use:

  • you only do it once in a while (once a year, because certs might have a one-year validity)
  • you don’t do it if it’s a lot of extra manual work

So here is Part 2 because I found something easier: Step CLI and Step CA.

The main difference to the openssl method (which continues to work): this CA runs as a service. So on the client side, you only need to connect once and then you can get certificates from a single place.

Get the releases and install, either as a Debian package or a tar file.

Extra step on ARMv7 (and possibly all 32 bit architectures): replace “badger” with “badgerv2” in .step/config/ca.json. Also add this “claims” section under the “authority” key unless you like the defaults:

    "authority": {
        "claims": {
            "minTLSCertDuration": "5m",
            "maxTLSCertDuration": "168h",
            "defaultTLSCertDuration": "24h",
            "disableRenewal": false,
            "minHostSSHCertDuration": "5m",
            "maxHostSSHCertDuration": "168h",
            "minUserSSHCertDuration": "5m",
            "maxUserSSHCertDuration": "24h"
        },
        "provisioners": [

Then run your CA:

harald@opz3:~$ step-ca .step/config/ca.json --password-file step-ca-pw.txt
 2020/11/19 18:57:55 Serving HTTPS on :8443 …
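
To have the CA come back after a reboot, a small systemd unit works; a minimal sketch assuming user harald, the paths from above, and a step-ca binary in /usr/bin (drop it into /etc/systemd/system/step-ca.service and run systemctl enable --now step-ca):

[Unit]
Description=step-ca
Wants=network-online.target
After=network-online.target

[Service]
User=harald
WorkingDirectory=/home/harald
ExecStart=/usr/bin/step-ca /home/harald/.step/config/ca.json --password-file /home/harald/step-ca-pw.txt
Restart=on-failure

[Install]
WantedBy=multi-user.target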

To set up the client side, install step-cli and then do:

step ca bootstrap --ca-url https://opz3.lan:8443 \ 
                    --fingerprint 8d9345ed9c8f84729fb82005cf36b9e595c7a40efe2db52bee114c2dbdabd63d

You got the fingerprint during CA initialization; alternatively, get the root certificate and compute its fingerprint:

# on CA server
> step ca root root.crt
> step certificate fingerprint root.crt

Once done, back on the client, getting a certificate is simple:

❯ step ca certificate m75q.lan m75q.crt m75q.key --provisioner-password-file ./pass.txt

and renew like this (--force to overwrite without asking):

❯ step ca renew --force m75q.crt m75q.key
# Or automated and restarting nginx:
❯ step ca renew --daemon --exec "nginx -s reload" m75q.crt m75q.key

Proper docs for step CLI and step CA. There are also nice examples for mTLS.

Update: See https://github.com/smallstep/certificates/discussions/427 which gave me some hints on how to decouple the key encryption passwords for the 3 different keys:

  1. the root CA key
  2. the intermediate CA key
  3. the provisioner key

To change the passphrases:

cd $(step path)
cd secret/
cp intermediate_ca_key intermediate_ca_key.original
openssl ec -in intermediate_ca_key.original | openssl ec -out intermediate_ca_key -aes256
read EC key 
read EC key 
Enter PEM pass phrase: <OLD PASSPHRASE>
writing EC key 
writing EC key 
Enter PEM pass phrase: <NEW_PASSPHRASE>
Verifying - Enter PEM pass phrase: <NEW_PASSPHRASE>

Now when you start your CA via

step-ca .step/config/ca.json

it’ll ask for the passphrase of the intermediate key. Repeat for the root key, which you can move somewhere else as it’s not used during normal operation.

The JWK provisioner’s encryptedKey (see $(step path)/config/ca.json) still has the original passphrase from the initial setup. Let’s fix this:

❯ cd $(step path)/config
jq -r '.authority.provisioners[] | select(.type=="JWK") | .encryptedKey' ca.json | step crypto jwe decrypt > k.json
Please enter the password to decrypt the content encryption key: 
{"use":"sig","kty":"EC","kid":"cnUEPau6NgopLjZHrsL3v3PXvF14EIAe-PIHmjA5fQQ","crv":"P-256","alg":"ES256","x":"xxxxxxxxxx","y":"yyyyyyyyyy","d":"dddddddddd"}

On the client side you can now use the decrypted k.json like this:

❯ TOKEN=$(step ca token foo --provisioner "provisioner@name" --key k.json)
✔ Provisioner: provisioner@name (JWK) [kid: -6pq-22r6yaQg]
❯ step ca certificate foo foo.crt foo.key --token $TOKEN 

Update 2020-11-21: This is working so well, I made an Ansible Playbook to make (re-)installs much easier for me: https://github.com/haraldkubota/step-ca

Renewing certificates is quite simple too (more details here):

step ca renew --daemon --exec "nginx -s reload" internal.crt internal.key
 

Testing Nextcloud

I like DropBox: it’s convenient and works on all my devices (Linux, Windows, Android). Except now it only works on 3 devices. Time to look for alternatives: Nextcloud.

Runs on Linux (ARM and Intel), runs in containers or Kubernetes, and has clients for anything I use.

First install: on my old and unused Cubietruck: 2 cores, 1 GHz ARM Cortex-A7, 2 GB RAM, SATA disk. Should be more than capable. Tested and installed with this docker-compose.yaml file:

version: '3' 
 
services: 
  db: 
    image: linuxserver/mariadb 
    restart: always 
    volumes: 
      - "./data/mariadb:/config" 
    environment: 
      - PUID=2000 
      - PGID=100 
      - TZ=Asia/Tokyo 
      - REMOTE_SQL=http://URL1/your.sql  #optional
      - MYSQL_ROOT_PASSWORD=somethingrootpw 
      - MYSQL_PASSWORD=somethingpw 
      - MYSQL_DATABASE=nextcloud 
      - MYSQL_USER=nextcloud 
 
  app: 
    image: nextcloud 
    depends_on: 
      - db 
    ports: 
      - 8080:80 
    links: 
      - db 
    volumes: 
      - "./data/var/www/html:/var/www/html" 
    environment: 
      - MYSQL_DATABASE=nextcloud 
      - MYSQL_USER=nextcloud 
      - MYSQL_PASSWORD=somethingpw
      - MYSQL_HOST=db 
    restart: always

start with the usual

$ docker-compose up -d

and that’s about it. If you want to use cron from the Docker host, then do

$ docker exec -it nextcloud_app_1 /bin/bash   
# apt update
# apt install -y sudo
^D

and add a cron job on the Docker host:

*/5 * * * * docker exec nextcloud_app_1 /bin/bash -c "sudo -u www-data php -f /var/www/html/cron.php" >/tmp/docker.log 2>&1

Test once manually. If it works, Nextcloud notices and from then on expects cron to kick in every 5 minutes.
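
That manual test is just the cron command without the redirection:

$ docker exec nextcloud_app_1 /bin/bash -c "sudo -u www-data php -f /var/www/html/cron.php"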

Now log in to the web interface (http://dockerhost:8080) and follow the normal procedure to set up Nextcloud.

Nice! I didn’t get a preview of Photos though. I can live with that.

Not so nice is the performance. Or the lack thereof. telegraf shows the 2 cores to be quite busy when I load any page. Here’s light use on the Cubietruck:

And this is on my AMD-based fanless mini-PC:

Basically the Cubietruck works, but it’s slow. Both systems have SATA disks and are fanless, so I have no reason to use the Cubietruck for this purpose.
And I have to say: it’s neat. It synchronizes data nicely with Linux and Windows. Android works too, but to make it meaningful, I have to make my Nextcloud instance reachable from the Internet first.

CGroups V2 on Debian

CGroups V2 is not enabled by default on Debian, but it can be enabled easily:

# echo 'GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_DEFAULT} systemd.unified_cgroup_hierarchy=1"' >> /etc/default/grub
# grub-mkconfig -o /boot/grub/grub.cfg
# reboot

To find out which cgroups version you use, this is v1:

❯ ls /sys/fs/cgroup
 blkio  cpuacct      cpuset   freezer  memory   net_cls,net_prio  perf_event  rdma     unified
 cpu    cpu,cpuacct  devices  hugetlb  net_cls  net_prio          pids        systemd

and this is v2:

❯ ls /sys/fs/cgroup
cgroup.controllers  cgroup.max.descendants  cgroup.stat             cgroup.threads  system.slice 
cgroup.max.depth    cgroup.procs            cgroup.subtree_control  init.scope      user.slice
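
Another quick check, independent of the directory contents, is the filesystem type of /sys/fs/cgroup:

❯ stat -fc %T /sys/fs/cgroup
# prints cgroup2fs for v2 and tmpfs for v1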

Google Play Services Sucking Battery Dry

Yesterday my phone’s battery went completely empty. That’s unheard of and rather odd. Turns out that Google Play Services, which usually uses only a small amount of power, used an enormous amount. The projected time for a full battery to last was 4h. The phone was slightly warm too. Odd. A reboot did not help.

This post has a working fix: clear the Play Services data. I added a reboot for good measure; since then battery drain is back to normal.

Update: Happened again. Turned off Backup. Since then the battery-sucking problem is gone. Of course now I have no backup…
