Google Play Services Sucking Battery Dry

Yesterday my phone’s battery drained completely. That’s unheard of and rather odd. It turns out that Google Play Services, which usually uses only a small amount of power, had consumed an enormous amount. The projected remaining battery life was 4 h. The phone was slightly warm too. Odd. A reboot did not help.

This post has a working fix: clear the Play Services data. I added a reboot for good measure, and since then battery drain is back to normal.

Edit InfluxDB Data

If you have outliers in InfluxDB, you might want to get rid of them. I had 2 temperature readings of 0°C out of about 3M data points. They make graphs span 0 to 40 when they could span 20 to 40 instead.

So I had to get rid of those data points. And it was quite simple (see here):

bash-5.0# influx
Connected to http://localhost:8086 version 1.8.2
InfluxDB shell version: 1.8.2
> use telegraf.autogen
Using database telegraf
Using retention policy autogen
> select * from room1 where temp < 5
name: room1
time                host sensor              temp
----                ---- ------              ----
1587770762000000000 opz2 w1_slave_temp_input 0
1587770793000000000 opz2 w1_slave_temp_input 0
> delete from room1 where time = 1587770762000000000
> delete from room1 where time = 1587770793000000000
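Before deleting, it helps to double-check what those nanosecond timestamps actually are; a quick conversion (GNU date assumed):

```shell
# Sanity check which point you are about to delete: convert InfluxDB's
# nanosecond epoch timestamp to something readable.
ns=1587770762000000000
date -u -d "@$((ns / 1000000000))" +'%Y-%m-%d %H:%M:%S'
# → 2020-04-24 23:26:02
```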

My git Server

For the longest time I used my Synology NAS as my git server. Not only does it “just work”, but it has 2 disks in a RAID1 setup and I back it up regularly. The only problem is that it’s a bit noisy: 1 fan and 2 spinning 3.5″ disks make background noise. Next to my gaming PC, it’s the noisiest thing I have. And I don’t use my gaming PC much.

So I wanted to test a git server on a solid-state (SSD, USB stick etc.) based system. And I happen to have one already: my fan-less PC which runs Docker!

Here’s the docker-compose.yml file:

version: '3'
services:
  gitserver:
    restart: always
    container_name: gitserver
    image: ensignprojects/gitserver
    ports:
      - "2222:22"
    volumes:
      - "./opt_git:/opt/git"
      - "./dot.ssh/authorized_keys:/home/git/.ssh/authorized_keys"
      - "./etc_ssh/ssh_host_ed25519_key:/etc/ssh/ssh_host_ed25519_key"
      - "./etc_ssh/"
      - "./etc_ssh/ssh_host_rsa_key:/etc/ssh/ssh_host_rsa_key"
      - "./etc_ssh/"

./opt_git/ is where the repos will be stored.

etc_ssh contains the host keys for the gitserver. If you skip those, the container’s host SSH key will change whenever the container is rebuilt. You don’t want that.
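If you don’t have host keys yet, you can generate them up front on the Docker host; a sketch assuming the directory layout from the compose file above:

```shell
# Generate the container's SSH host keys once, so they survive rebuilds.
# The paths match the volume mounts in docker-compose.yml.
mkdir -p etc_ssh
ssh-keygen -q -t ed25519 -N '' -f etc_ssh/ssh_host_ed25519_key
ssh-keygen -q -t rsa -b 4096 -N '' -f etc_ssh/ssh_host_rsa_key
ls etc_ssh
```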

dot.ssh/authorized_keys contains the public SSH key I use for this git server. Create a key pair via

ssh-keygen -t ed25519 -f gitserver

and then add ~/.ssh/ into ./dot.ssh/authorized_keys
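To make cloning convenient, an alias in ~/.ssh/config helps. The host name dockerhost and the repo name myrepo.git below are placeholders for your setup:

```
# ~/.ssh/config on the client machine
Host gitserver
    HostName dockerhost
    Port 2222
    User git
    IdentityFile ~/.ssh/gitserver
```

After that, `git clone gitserver:/opt/git/myrepo.git` works (assuming a bare repository myrepo.git exists under /opt/git).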

To run the git server container:

docker-compose up -d
docker-compose start

Hiding the Mouse Cursor in X

Konsole hides the mouse cursor when typing. This is generally good. But this is Konsole’s behavior, not a general one. I generally don’t want the mouse cursor visible while typing. So how to remove it?

Help comes from an old utility: xbanish. Clone the repo, and assuming you have the needed build dependencies, 0.2 s later you have a 27 kB binary. Refreshing.

❯ time make
cc -O2 -Wall -Wunused -Wmissing-prototypes -Wstrict-prototypes -Wunused -I/usr/X11R6/include -c xbanish.c -o xbanish.o
cc xbanish.o -L/usr/X11R6/lib -lX11 -lXfixes -lXi -o xbanish
make  0.18s user 0.04s system 84% cpu 0.265 total

❯ ls -la xbanish
-rwxr-xr-x 1 harald users 27288 Sep 22 09:05 xbanish

Put it in /usr/local/bin and make it autostart when KDE starts. And now the mouse cursor disappears while typing in any X11 program.
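In KDE, autostart entries live in ~/.config/autostart; a minimal one (the filename is my choice) could look like this:

```
# ~/.config/autostart/xbanish.desktop
[Desktop Entry]
Type=Application
Name=xbanish
Exec=/usr/local/bin/xbanish
```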

k3s – local persistent storage

When using k3s and the built-in local persistent storage provider, once in a while you have to edit those files. That usually works, but sometimes you have to replace a 150 kB binary file, and since containers usually don’t have scp installed, there’s a problem…

The fix is to modify the storage from outside the container. That depends on the persistent storage provider. If it’s NFS, mount by NFS from another machine. If it’s an S3 bucket, edit it directly etc.

k3s has a local persistent storage driver called “local-path”. But where are those files, so I can replace one of them? Turns out they are in /var/lib/rancher/k3s/storage/ on a node. Which node, and which directory inside storage/?

Finding Your PVC

To find the PVC named “grafana-lib”, do

❯ kubectl describe persistentVolumeClaim grafana-lib
Name:          grafana-lib
Namespace:     default
StorageClass:  local-path
Status:        Bound
Volume:        pvc-a89cee51-0000-47d7-a095-2d48400768e3
Labels:        <none>
Annotations:   volume.kubernetes.io/selected-node: knode5
Finalizers:    []
Capacity:      3Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    grafana-deployment-669fc6d658-l78z7
Events:        <none>

and the volume shows where it is: knode5:/var/lib/rancher/k3s/storage/pvc-a89cee51…
A bit of jq magic and you get a complete list of all PVCs:

❯ kubectl get persistentVolumeClaims -o json | jq '[.items[] | { "Name":, "Volume": .spec.volumeName, "Node": .metadata.annotations."volume.kubernetes.io/selected-node" }]'
[
  {
    "Name": "influxdb-data",
    "Volume": "pvc-bbac2312-0000-450e-aee1-41a0d5517adb",
    "Node": "knode6"
  },
  {
    "Name": "grafana-log",
    "Volume": "pvc-22814c7b-0000-4b8b-99b6-ab4a4ca6c65c",
    "Node": "knode5"
  },
  {
    "Name": "grafana-lib",
    "Volume": "pvc-a89cee51-0000-47d7-a095-2d48400768e3",
    "Node": "knode5"
  }
]

Video Editing on Linux – OpenShot

Video editing is fun, but I suck at it. So I keep it simple. For a long time I used Kino, but it’s no longer developed. It had 3 main things going for it:

  • It ran on Linux
  • It was simple
  • It worked with my DV camera (FireWire AKA IEEE-1394, remember that?)

Since I had a small video to “edit” (mainly add a title and cut a few seconds off the start and end), I looked around and found a Kino replacement: OpenShot. It’s simple enough to use almost immediately, without a steep learning curve. I like it.

Line Notify – Useful!

Since I use Line, using Line Notify is an easy solution to send myself messages. Here’s how it works:

  1. Go to the Line Notify site and generate a token.
  2. Use cURL to send a message:
curl -X POST -H 'Authorization: Bearer YOUR_TOKEN' \
-F 'message=A message from myself to myself.' \
-F 'stickerPackageId=1' \
-F 'stickerId=15' \
-F 'imageFile=@ttgo-eight.jpg' \

See the API for Line Notify for more details incl. the list of stickers.

Here an example using Node.js:

const fs = require('fs');
const fetch = require("node-fetch");
const FormData = require('form-data');

const params = new FormData();
params.append('message', 'Test via Node.js');
params.append('stickerPackageId', '1');
params.append('stickerId', '15');
params.append('imageFile', fs.createReadStream('ttgo-eight.jpg'));

try {
    fetch('', {
        method: 'POST',
        headers: {
            'Authorization': 'Bearer YOUR_TOKEN'
        },
        redirect: 'follow',
        body: params
    })
        .then(res => res.json())
        .then(json => console.log(json))
        // try/catch does not catch async failures, so handle them here too
        .catch(e => console.error(`Error: ${e}`));
} catch (e) {
    console.error(`Error: ${e}`);
}
Grafana Alerts

So far I did not have to bother with alerts. InfluxDB collects stats (via API or telegraf) and I can watch them via Grafana. Today I wanted alerts.

First you have to create notification channels in Grafana: Alerting/Notification Channels.


Since I use Gmail, this section in grafana.ini works:

##### SMTP / Emailing #####
[smtp]
enabled = true
host =
user =
password =
from_address =
from_name = GrafanaAlerts

The only tricky part is the password, which is an application-specific password you create in your Google account under “App Passwords”.


If you use Line, a Line Notify notification channel works as well.

Installing HashiCorp’s Vault


I’m trying to use Vault at work to keep secrets in. However, not knowing much about it makes me want to test it at home first.

Installing on k8s seemed most sensible since I already have a 3 node k3s cluster. Since k3s supports Helm v3 and Vault can be installed via Helm v3 charts, that’s what I did.


See the official install guide, but in short:

$ helm repo add hashicorp
"hashicorp" has been added to your repositories
$ helm install vault hashicorp/vault

Initialize and Unseal

Initializing is a one-time action. Unsealing is always needed whenever you restart Vault. See the Vault docs. Short summary:

# Initialize and get the keys
$ kubectl exec vault-0 -- vault operator init -key-shares=1 -key-threshold=1 -format=json > cluster-keys.json

# Unseal
$ VAULT_UNSEAL_KEY=$(cat cluster-keys.json | jq -r ".unseal_keys_b64[]")
$ kubectl exec vault-0 -- vault operator unseal $VAULT_UNSEAL_KEY

# Show pod status
$ kubectl get pods -l
NAME      READY   STATUS    RESTARTS   AGE
vault-0   1/1     Running   2          16h

# Show Vault status
$ kubectl exec vault-0 -- vault status
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    1
Threshold       1
Version         1.5.2
Cluster Name    vault-cluster-f6c361da
Cluster ID      a757fd57-3032-59ec-d03a-4ad0556536ea
HA Enabled      false

The important part: the Vault pod(s) are running, and Sealed is false.
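Besides the unseal key, cluster-keys.json also holds the initial root token, which you need to log in. An offline sketch with made-up values (a real file contains your actual keys, so guard it):

```shell
# Hypothetical cluster-keys.json; the real one comes from `vault operator init`.
cat > /tmp/cluster-keys.json <<'EOF'
{"unseal_keys_b64":["2B64exampleKeyOnly="],"root_token":"s.exampletoken"}
EOF
VAULT_ROOT_TOKEN=$(jq -r '.root_token' /tmp/cluster-keys.json)
echo "$VAULT_ROOT_TOKEN"
# → s.exampletoken
# Then e.g.: kubectl exec -it vault-0 -- vault login "$VAULT_ROOT_TOKEN"
```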
