Which one to use? While I did not dig too deeply into each one, they are all very similar, simply because the problem is well understood and essentially a "solved problem". And yes, I am simplifying a lot here:
You do good CI if you merge often. The common approach is to use git, create short-lived branches, and merge them (depending on your development strategy)
Run linters, formatters and tests upon a code commit (and verify during a push upstream)
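As a sketch of that commit-time CI step, a minimal GitHub Actions workflow for a NodeJS project could look like the following (the file name and the npm scripts are my assumptions, not from any specific repo):

```yaml
# .github/workflows/ci.yml -- runs lint and tests on every push and pull request
name: CI
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm run lint   # assumes a "lint" script exists in package.json
      - run: npm test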
The CD part is different: it depends a lot on the back-end, i.e. how and where your application runs. Once you have an application, a zip artifact, a container image etc., deploying it is not a technical problem. The strategy for moving from QA to PROD is an entirely different problem, and it very much depends on your back-end: Kubernetes has its ArgoCD/FluxCD, while in a non-container environment you have to use other solutions.
Back to GitHub Actions
I never needed it: as the only developer in my little software world, I use a locally hosted git repo. Some items I put on GitHub, but only when I think someone can benefit from them. One of those is a hackathon-starter template from here (quite nice), a NodeJS app with plenty of dependencies, which served as my test sample to play with GitHub Actions.
I like Dropbox: it's convenient and works on all my devices (Linux, Windows, Android). Except now it only works on 3 devices. Time to look for alternatives: Nextcloud.
Runs on Linux (ARM and Intel), runs in containers or Kubernetes, and has clients for anything I use.
First install: on my old and unused Cubietruck (2-core 1 GHz ARM Cortex-A7, 2 GB RAM, SATA disk). It should be more than capable. Tested and installed with this docker-compose.yaml file:
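The original compose file isn't reproduced here; a minimal sketch of what such a Nextcloud docker-compose.yaml could look like (image tag, port mapping, volume path and the cron sidecar are my assumptions, based on the official Nextcloud image):

```yaml
# Hypothetical minimal Nextcloud setup with a cron sidecar; adjust to taste
version: "2"
services:
  nextcloud:
    image: nextcloud:latest
    container_name: nextcloud
    ports:
      - "8080:80"
    volumes:
      - ./nextcloud:/var/www/html
    restart: unless-stopped
  cron:
    image: nextcloud:latest
    volumes:
      - ./nextcloud:/var/www/html
    entrypoint: /cron.sh
    restart: unless-stopped
```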
Run the cron job once manually. If it worked, Nextcloud is aware of it and now expects cron to kick off every 5 minutes.
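That manual run can be done via docker (a sketch; the container name "nextcloud" is an assumption):

```shell
# Run the Nextcloud background job once by hand, as the web server user
docker exec -u www-data nextcloud php -f /var/www/html/cron.php
```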
Now log in to the web interface (http://dockerhost:8080) and follow the normal Nextcloud setup procedure.
Nice! I didn’t get a preview of Photos though. I can live with that.
Not so nice is the performance, or rather the lack thereof. telegraf shows both cores quite busy whenever I load any page. Here's light use on the Cubietruck:
And this is on my AMD-based fanless mini-PC:
Basically the Cubietruck works, but it's slow. Both systems have SATA disks and are fanless, so I have no reason to use the Cubietruck for this purpose. And I have to say: Nextcloud is neat. It synchronizes data nicely between Linux and Windows. Android works too, but to make that meaningful, I first have to make my Nextcloud instance reachable from the Internet.
For the longest time I have used my Synology NAS as my git server. Not only does it "just work", it has 2 disks in a RAID1 setup and I back it up regularly. The only problem is that it's a bit noisy: 1 fan and 2 spinning 3.5″ disks create background noise. Besides my gaming PC, it's the noisiest thing I have. And I don't use my gaming PC much.
So I wanted to test a git server on a solid-state (SSD, USB stick etc.) based system. And I happen to have one already: my fanless PC which runs Docker!
Doing a backup of volumes is simple if you know the command:
docker run --volumes-from influxdb \
-v $(pwd)/backup:/backup -i -t ubuntu \
tar cfvj /backup/influxdb.tar.bz2 /var/lib/influxdb
influxdb is the stopped container. /var/lib/influxdb is where its (only) volume is mounted. ./backup/ (relative to the current directory) is where I'd like the compressed tar backup file.
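Restoring works the same way in reverse (a sketch, assuming a fresh influxdb container whose volume should be populated from the backup):

```shell
# Unpack the backup into the container's volume; overwrites existing files
docker run --volumes-from influxdb \
  -v $(pwd)/backup:/backup -i -t ubuntu \
  bash -c "cd / && tar xfvj /backup/influxdb.tar.bz2"
```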
The only tricky part is the password: it's an application-specific password you create at https://myaccount.google.com/security under "App Passwords".
The previous blog entry lacked HTTPS, so all communication was in plain text, which makes using passwords less than ideal. This blog entry fixes that.
The full source code for the docker-compose.yml and haproxy.cfg file is available here.
It starts 2 containers: the ELK container and the HAProxy container. ELK receives traffic via ports 9200 (Elasticsearch) and 5601 (Kibana). HAProxy connects to the outside world via ports 9100, 9990, 5601 and 6080.
The data directory for Elasticsearch is ./data. The HAProxy configuration, incl. the TLS certificate, is in ./haproxy/
What’s HAProxy doing?
HAProxy is the TLS terminator and port forwarder for the ELK container. Here's part of the HAProxy config file (in ./haproxy/haproxy.cfg):
HAProxy listens on port 6080 for HTTP traffic. It forwards traffic to the backend.
HAProxy listens on port 5601 for HTTPS traffic. TLS connection is terminated here. It then forwards the unencrypted traffic to the backend.
The backend is on port 5601 on the elk container
Not displayed above, but the same happens for ElasticSearch traffic.
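A sketch of what the relevant haproxy.cfg sections could look like (the section names and the in-container certificate path are my assumptions):

```
frontend kibana-http
    bind :6080
    mode http
    default_backend kibana

frontend kibana-https
    bind :5601 ssl crt /etc/ssl/private/DOMAIN.pem
    mode http
    default_backend kibana

backend kibana
    mode http
    server elk elk:5601
```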
Thus you would connect via HTTPS to port 5601 for Kibana and port 9100 for ElasticSearch. You could use HTTP on port 6080 for Kibana and port 9990 for ElasticSearch.
Steps to get ELK with HTTPS running
While I like automation, I doubt I'll configure this more than twice before the automation breaks due to new ELK versions. So it's "half-automated".
Pre-requisites:
A Linux server with min. 2 GB RAM, Docker and docker-compose installed. I prefer Debian; both versions 9 and 10 work.
A TLS certificate (e.g. via Let’s Encrypt) in ./haproxy/ssl/private/DOMAIN.pem (full cert chain + private key, no passphrase)
# ES refuses to start if vm.max_map_count is less
# Note: "sudo echo ... > file" would fail, as the redirect runs as the normal user
echo "vm.max_map_count = 262144" | sudo tee /etc/sysctl.d/10es.conf
sudo sysctl --system
# verify:
sysctl vm.max_map_count
# Output should be "vm.max_map_count = 262144"
# Directory for docker-compose files
mkdir elk
cd elk
# Data directory for ES
mkdir data
sudo chown 991:991 data
# Start docker-compose
docker-compose up -d
It takes about 1-2 minutes; you'll see the CPU suddenly become less busy. Now you can connect to http://DOCKER_HOST:6080. No accounts or passwords are checked yet.
Note: If you delete the container (e.g. via “docker-compose down”), you have to do the following steps again!
# Enter the ELK container
docker exec -it elk /bin/bash
Inside the container modify some settings (replace the variables PW_BOOTSTRAP and PW_KIBANA with the actual passwords):
cd /opt/elasticsearch
mkdir /etc/elasticsearch/certs
bin/elasticsearch-certutil cert -out /etc/elasticsearch/certs/elastic-certificates.p12 -pass ""
cat >> /etc/elasticsearch/elasticsearch.yml <<_EOF_
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
_EOF_
cd /opt/elasticsearch
echo "${PW_BOOTSTRAP}" | bin/elasticsearch-keystore add bootstrap.password
chown -R elasticsearch:elasticsearch /etc/elasticsearch
# edit /opt/kibana/config/kibana.yml with Kibana user name and password
sed "s/.*elasticsearch.username:.*/elasticsearch.username: \"kibana\"/;s/.*elasticsearch.password:.*/elasticsearch.password: \"${PW_KIBANA}\"/" < /opt/kibana/config/kibana.yml > /tmp/kibana.yml && cp /tmp/kibana.yml /opt/kibana/config/kibana.yml
cat >> /opt/kibana/config/kibana.yml <<_EOF_
xpack.security.encryptionKey: "Secret 32 char long string of chars"
xpack.security.secureCookies: true
_EOF_
chown kibana:kibana /opt/kibana/config/kibana.yml
exit
Restart the containers:
docker-compose restart
Again wait about 1 minute. Then execute the following from anywhere (it does not have to be the docker host):
function setPW() {
  curl -uelastic:${PW_BOOTSTRAP} -XPUT -H "Content-Type:application/json" "http://${DOCKER_HOST}:${ES_HTTP_PORT}/_xpack/security/user/$1/_password" -d "{ \"password\":\"$2\" }"
}
# Since ES is running by now, set the passwords
setPW "kibana" "${PW_KIBANA}"
setPW "apm_system" "${PW_APM}"
setPW "logstash_system" "${PW_LOGSTASH}"
setPW "beats_system" "${PW_BEATS}"
setPW "remote_monitoring_user" "${PW_MONITORING}"
setPW "elastic" "${PW_ELASTIC}"
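To verify the new passwords work, a quick check against the cluster through the HTTPS port (a sketch; add -k only if the certificate is self-signed):

```shell
# Should return cluster health JSON when authentication succeeds
curl -u elastic:${PW_ELASTIC} "https://DOCKER_HOST:9100/_cluster/health?pretty"
```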