Installing the Citrix Workspace (formerly Receiver) on a Linux machine should be simple. After all, installing it on a Chromebook worked just fine (after enabling High DPI since my Chromebook has such a display).
But on Linux all I got was: You have not chosen to trust “Entrust Root Certification Authority – G2”, the issuer of the server’s security certificate.
Well, I was not aware of having chosen anything, and both Chrome and Firefox had no issues with https://validg2.entrust.net/
Get the missing Entrust G2 root certificate from here
Copy it to /opt/Citrix/ICAClient/keystore/cacerts/ (this is where the Debian package installs to)
Rehash the certificates
sudo /opt/Citrix/ICAClient/util/ctx_rehash
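Put together, and assuming the downloaded file is called entrust_g2_ca.cer (the name is just an example for whatever you downloaded):
# copy the downloaded root certificate into Citrix's own store, then rehash it
sudo cp entrust_g2_ca.cer /opt/Citrix/ICAClient/keystore/cacerts/
sudo /opt/Citrix/ICAClient/util/ctx_rehash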
Et voilà: it works. See also this entry in the Citrix forum. Different root certificate, but same problem.
It seems the Citrix Workspace app has its own certificate store. Why is beyond me, as the system usually already has a certificate store. Why not use that one?
Note: The Entrust certificate seems to be needed because the endpoint I connect to uses a certificate signed by Entrust. Thus not everyone will need this particular root certificate. But whichever one you need, it has to be in /opt/Citrix/ICAClient/keystore/cacerts/
Yesterday I ordered a Garmin vívoactive 3. At ¥15200 it was hard to not get one. As Garmin has an app store for their watches and there’s also a developer forum and of course an SDK, the next obvious step was to install that SDK.
Installing the ConnectIQ SDK
Get it from here. It seems to be slightly buggy, but only in the details. On a recent Ubuntu build you run into several issues, but they are easy to fix (this was a great start).
Also make sure OpenJDK 8 is installed; version 11 does not work and is not supported by Garmin.
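On Ubuntu that is typically just (assuming the openjdk-8-jdk package is available in your release):
sudo apt install openjdk-8-jdk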
Now waiting for the watch to arrive…
Creating a Developer Key
The first problem trying out the samples in the SDK was that a developer key was required. While the instructions are here, it took me way longer than needed to find them:
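From what I remember of the linked instructions, it boils down to generating an RSA key with openssl and converting it to DER format, roughly like this (the file names are my choice):
# generate a 4096-bit RSA developer key and convert it into the DER format the Connect IQ tools expect
openssl genrsa -out developer_key.pem 4096
openssl pkcs8 -topk8 -inform PEM -outform DER -in developer_key.pem -out developer_key.der -nocrypt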
The previous blog entry did not use HTTPS, so all communication was in plain text, which makes using passwords less than ideal. This blog entry fixes that.
The full source code for the docker-compose.yml and haproxy.cfg file is available here.
It starts 2 containers: the ELK container and the HAProxy container. ELK can receive traffic via port 9200 (ES) and 5601 (Kibana). HAProxy connects to the outside world via ports 9100, 9990, 5601 and 6080.
The data directory for ElasticSearch is in ./data. The HAProxy configuration, including the TLS certificate, is in ./haproxy/
What’s HAProxy doing?
HAProxy is the TLS terminator and port forwarder to the ELK container. Here's the relevant part of the HAProxy config file (in ./haproxy/haproxy.cfg):
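In essence the Kibana part looks like this (a simplified sketch: the backend name, the elk hostname and the certificate path inside the container may differ, the real file is in the linked repo):
# sketch of the Kibana part of haproxy.cfg
frontend kibana_http
    bind *:6080
    mode http
    default_backend kibana

frontend kibana_https
    bind *:5601 ssl crt /usr/local/etc/haproxy/ssl/private/DOMAIN.pem
    mode http
    default_backend kibana

backend kibana
    mode http
    server elk elk:5601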
HAProxy listens on port 6080 for HTTP traffic. It forwards traffic to the backend.
HAProxy listens on port 5601 for HTTPS traffic. The TLS connection is terminated here, and the now unencrypted traffic is forwarded to the backend.
The backend is port 5601 on the elk container.
Not displayed above, but the same happens for ElasticSearch traffic.
Thus you would connect via HTTPS to port 5601 for Kibana and port 9100 for ElasticSearch. You could use HTTP on port 6080 for Kibana and port 9990 for ElasticSearch.
Steps to get ELK with HTTPS running
While I like automation, I doubt I'll set this up more than twice before the automation breaks due to new ELK versions. So it's “half-automated”.
Pre-requisites:
A Linux server with at least 2 GB RAM and Docker plus docker-compose installed. I prefer Debian; both version 9 and 10 work.
A TLS certificate (e.g. via Let’s Encrypt) in ./haproxy/ssl/private/DOMAIN.pem (full cert chain + private key in one file, no passphrase; see the snippet below for how to assemble it)
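HAProxy wants the chain and the key concatenated into a single PEM file. If you have the usual separate files from Let’s Encrypt, assembling it looks roughly like this (file names are examples):
# concatenate the full certificate chain and the private key into the one PEM file HAProxy expects
cat fullchain.pem privkey.pem > ./haproxy/ssl/private/DOMAIN.pem
chmod 600 ./haproxy/ssl/private/DOMAIN.pem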
# ES refuses to start if vm.max_map_count is less
sudo echo "vm.max_map_count = 262144" > /etc/sysctl.d/10es.conf
sudo sysctl -f --system
# verify:
sysctl vm.max_map_count
# Output should be "vm.max_map_count = 262144"
# Directory for docker-compose files
mkdir elk
cd elk
# Data directory for ES (991 is the UID ElasticSearch runs as inside the container)
mkdir data
sudo chown 991:991 data
# Start docker-compose
# (docker-compose.yml and the haproxy/ directory from the linked repo need to be in this directory)
docker-compose up -d
It takes about 1-2 minutes; you’ll notice it’s done when the CPU suddenly gets less busy. Now you can connect to http://DOCKER_HOST:6080. No check for accounts or passwords yet.
Note: If you delete the container (e.g. via “docker-compose down”), you have to do the following steps again!
# Enter the ELK container
docker exec -it ek /bin/bash
Inside the container modify some settings (replace the variables PW_BOOTSTRAP and PW_KIBANA with the actual passwords):
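If you prefer, set them as shell variables inside the container first, so the commands below pick them up (the values here are obviously just placeholders):
# placeholders - use your own strong passwords
PW_BOOTSTRAP='change-me-bootstrap'
PW_KIBANA='change-me-kibana'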
cd /opt/elasticsearch
mkdir /etc/elasticsearch/certs
bin/elasticsearch-certutil cert -out /etc/elasticsearch/certs/elastic-certificates.p12 -pass ""
cat >> /etc/elasticsearch/elasticsearch.yml <<_EOF_
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12
_EOF_
cd /opt/elasticsearch
echo "${PW_BOOTSTRAP}" | bin/elasticsearch-keystore add bootstrap.password
chown -R elasticsearch:elasticsearch /etc/elasticsearch
# edit /opt/kibana/config/kibana.yml with the Kibana user name and password
# (double quotes around the sed expression so ${PW_KIBANA} gets expanded)
sed "s/.*elasticsearch.username:.*/elasticsearch.username: \"kibana\"/;s/.*elasticsearch.password:.*/elasticsearch.password: \"${PW_KIBANA}\"/" < /opt/kibana/config/kibana.yml > /tmp/kibana.yml && cp /tmp/kibana.yml /opt/kibana/config/kibana.yml
cat >> /opt/kibana/config/kibana.yml <<_EOF_
xpack.security.encryptionKey: "Secret 32 char long string of chars"
xpack.security.secureCookies: true
_EOF_
chown kibana:kibana /opt/kibana/config/kibana.yml
exit
Restart the containers:
docker-compose restart
Again wait about a minute. Then execute the following from anywhere (it does not have to be the Docker host); DOCKER_HOST and ES_HTTP_PORT are your Docker host’s name and the plain-HTTP ElasticSearch port:
function setPW() {
curl -uelastic:${PW_BOOTSTRAP} -XPUT -H "Content-Type:application/json" "http://${DOCKER_HOST}:${ES_HTTP_PORT}/_xpack/security/user/$1/_password" -d "{ \"password\":\"$2\" }"
}
# Since ES is running by now, set the passwords
setPW "kibana" "${PW_KIBANA}"
setPW "apm_system" "${PW_APM}"
setPW "logstash_system" "${PW_LOGSTASH}"
setPW "beats_system" "${PW_BEATS}"
setPW "remote_monitoring_user" "${PW_MONITORING}"
setPW "elastic" "${PW_ELASTIC}"
The Elastic Stack is a simple way to log “things” into ElasticSearch and make them nicely visible via Kibana. Since ELK can handle logs as well as time series data, I’ll use it for my own logging incl. performance logging.
For pure time series data I’d use the TIG stack: Telegraf, InfluxDB and Grafana.
Data in the ‘logs’ index can be queried via ElasticSearch’s SQL interface, e.g. from the Kibana Dev Tools console:
POST /_sql?format=txt
{
  "query": """
  SELECT * FROM logs WHERE timestamp IS NOT NULL ORDER BY timestamp DESC LIMIT 5
  """
}
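The same query also works via curl against ElasticSearch directly (using the same host placeholder as in the curl example below):
curl -H "Content-Type: application/json" -X POST "http://the.docker.host:9200/_sql?format=txt" -d '{"query": "SELECT * FROM logs WHERE timestamp IS NOT NULL ORDER BY timestamp DESC LIMIT 5"}'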
Sending Logs
Via curl:
curl -H "Content-Type: application/json" -X POST "http://the.docker.host:9200/logs/_doc" -d '{ "timestamp": "'$(date --iso-8601="sec")'", "status": 200, "msg": "Testing from curl" }'
Via Node.js:
const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: 'http://your.docker.host:9200' });

async function run () {
  let now = new Date();
  try {
    await client.index({
      index: 'logs',
      body: {
        timestamp: now.toISOString(),
        status: 200,
        msg: 'Testing from Node.js'
      }
    });
  } catch (err) {
    console.error("Error: ", err);
  }
}

run().catch(console.log);
Security?
Until here there’s no security. Nothing at all. Anyone can connect to ElasticSearch and execute commands to add data or delete indices. Anyone can login to Kibana and look at data. Not good.
This describes how to set up security, and here is the explanation of the permissions you can give to roles.
I created a role ‘logswriter’ which has the permissions for ‘create_doc’ for the index ‘logs*’. Then a user ‘logger’ with above role. Now I can log via that user. In Node.js this looks like:
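A sketch, assuming the credentials come from environment variables (ES_USER and ES_PASSWORD are just names I picked) and using the client’s auth option:
const { Client } = require('@elastic/elasticsearch');
const client = new Client({
  node: 'http://your.docker.host:9200',
  auth: {
    username: process.env.ES_USER,      // e.g. the 'logger' user created above
    password: process.env.ES_PASSWORD
  }
});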
Note that while the username and password can come from environment variables, so they are not in the source code, the transport protocol is still unencrypted HTTP.
To be continued. To make the use of HTTPS necessary, I have to deploy this on the Internet first.
Google changed the Address bar in Chrome to remove https:// and the leading www. I am fine with the closed lock for https:// and an open lock for http://, but I do not like the mangled DNS name.
So per the new default it shows just the bare domain, without the protocol and without the leading www.
If you are like me and prefer to see the protocol and the complete DNS name, this default can be changed.
It goes quite deep, e.g. it explains the internal structure of flash memory as well as how the Flash Translation Layer (FTL) works. I have not seen such a detailed description anywhere else yet.
A Bit Related
hdparm works well for SCSI disks (or disks which behave like SCSI disks), but it does not work with NVMe disks. The nvme tool (from nvme-cli) does, though. See here for how to erase NVMe SSDs: https://tinyapps.org/docs/nvme-sanitize.html
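As an example, a full secure erase with nvme-cli looks roughly like this (destructive, and drives differ in what they support, so read the linked page and the man page first; the device name is simply the usual first NVMe drive):
# --ses=1 requests a user-data secure erase of the whole device - this wipes everything on it
sudo nvme format /dev/nvme0n1 --ses=1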
I bought my domains quite a while ago via GoDaddy, but I had my DNS servers at Linode, as that’s where the VMs I used were. Linode has an easy-to-use GUI and a well-working API (important for Let’s Encrypt certificates via acme.sh). Life was good. When renewal time came and GoDaddy’s prices had increased quite a bit, I moved the registrar from GoDaddy to AWS: it was actually cheaper than GoDaddy, and there are zero attempts to sell me more things I don’t need.
Recently I started to look at Cloudflare Workers when I saw that CF is also a registrar. And it’s a rather special one:
Cloudflare Registrar securely registers and manages your domain names with transparent, no-markup pricing that eliminates surprise renewal fees and hidden add-on charges. You pay what we pay — you won’t find better value.
Well, that sold me. So I moved from AWS to CF for one of my 2 domains, and it was done in about 30min. Most of that time I was waiting for an email from AWS on my wrong email account. It probably could have been done within 5 minutes.
Moral of the story: Don’t be afraid of moving your DNS registrar.
Part 2: Moving DNS Server
While moving the registrar is a non-event with no outages whatsoever, moving the DNS servers for a zone is more worrisome, as there can be outages for your domain. Cloudflare makes it quite easy to move a zone to them: add your zone to CF, and they’ll give you 2 DNS servers which you need to enter in your registrar’s zone info. Shortly afterwards CF copies the zone information over (mostly).
A few rarely used entries were not copied for some reason. Add the missing ones manually, and keep the old zone active for a while (look at the TTL in the SOA record).
And that’s it.
The most interesting part of Cloudflare is that they (usually) proxy traffic through their servers before it reaches my server. And they cache the data. So theoretically that might make my web page faster. I also get some DDoS protection, I can turn on firewalls, do HTTP/3, and learn where traffic comes from. Like this:
I can understand that plenty of requests come from the US. That’s where Google & friends have their crawlers, and my blog is in English, but why France?
The main reason I moved the DNS servers is that now I can use CF Workers with my domains. And that works just fine. An unintended side effect of the DNS server move is that Let’s Encrypt certificate renewals via acme.sh now take seconds instead of about 15 minutes. That’s 2 wins for me!
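For the record, issuing a certificate via acme.sh’s Cloudflare DNS integration (dns_cf) looks roughly like this; example.com is obviously a placeholder, and depending on your acme.sh version and token scope the exact environment variables may differ:
# a Cloudflare API token with DNS edit rights for the zone (variable name used by acme.sh's dns_cf hook)
export CF_Token="your-cloudflare-api-token"
acme.sh --issue --dns dns_cf -d example.com -d '*.example.com'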
Now let me see what shenanigans I can do with those CF Workers ☺
Objective – Key Results (OKR) is a way to align teams to move towards a common goal. OKRs are result-oriented: It’s not prescriptive how to do something as that’s left to the implementing team. There’s a clear connection between objective and key results.
KPIs (Key Performance Indicators) and goals which come down from management, on the other hand, give you numbers you should reach. The “why” is not relevant.
It might seem that “result” and “KPI” are the same: if you reach your KPI, you get the intended result. However, that is more of a hypothesis, and it’s often wrong. If you have a metric for what you want to get done, you can try several approaches to achieve it. One of those might match what the KPI would have prescribed, but you can change your approach if you see that it does not work. In a KPI world you simply continue down the (wrong) path: the KPI is the goal.
A quick summary of OKRs as I understand them
Have a vision: define where you want to go. The boss usually does that.
Define key results and metrics which show that you move in the right direction. Done by boss and employees. This is where the alignment happens: Everyone must agree here that this is what matters and this is how we measure it.
Define tasks which will likely move the key results in the right direction. Done by whoever is the expert in this area.
Check once in a while (daily, weekly) that the metrics look good, or at least better than before.
Common Problem: Defining Key Results
Regarding the key results, the main problem I see is that instead of actual key results, often tasks are simply listed, with the implicit assumption that doing the task will help the objective. And if it can be counted, it must be a great key result. Well, it ain’t like that.
The Awesome Notebook-Shop
You own a small computer service shop which among other things builds custom notebooks for your clients. This is your main income generator. Your average is 10 per week. You’d like to afford a cruise holiday next year. It’s quite expensive.
Objective: Go to a cruise next year
What’s the key result? One answer could be:
Key result: Build 20 notebooks per week.
Easy to measure. Nice metric. Or is it?
The fallacy is the unspoken assumption that building more notebooks means more can be sold, and thus you can go on the cruise next year. But do you have the staff to build that many notebooks? Maybe you have to hire people. Or pay overtime. And can you even sell 20 per week? How about:
Key result: Sell 20 notebooks per week.
This includes building them. But if you have to lower the price in order to sell 20 per week, you’d still not be able to go on your cruise.
Why don’t we instead measure directly what matters? This seems better:
Key result: Increase profit by 20%. Key metric: Net money earned
You now have more options available: you can do advertising, or you can offer trade-ins. Selling 20 instead of 10 notebooks per week might be an option too, as would be buying from a cheaper vendor or charging higher prices by adding some extra services. By measuring what actually matters, you always know whether you are on track. No assumptions needed.
Side note: You might want to add some key results to ensure that shortcuts are discouraged.
Otherwise you get a short-term increase in profit, but if quality goes down significantly or you overcharge your customers, you will hurt your long-term objective. If you own the shop, you might watch for this automatically, but if you don’t measure it, how do you know your customers are happy with your work?
Why I like OKRs
It’s a chance to focus on something (the handful of objectives) and improve by everyone marching in the same direction. Teams which usually don’t care what other teams do suddenly work together because they have a common objective and a common metric. Employees understand why they do something. OKRs are transparent, which removes potentially duplicate work done by different teams. Metrics show where you are, instead of you making up status reports whose main purpose is to make you look good. Results are rewarded; effort is not. (Those who put in more effort usually get better results anyway.)
I have hosted my own WordPress blog since January 2008: first as a stand-alone installation on a cloud server, then as containers on a cloud server.
In December 2017 I moved it to an AWS-hosted serverless blog (see here for the source code). Technically interesting, but unfortunately harder to use.
Now I’m back on WordPress. I think it’s easier to use and looks better. Posting blog entries is just more fun. The difference compared to before is that now it’s hosted on wordpress.com instead of being self-hosted.
The Old Blog Entries
My old blog items are copied and are still available here: https://blog-static.harald.workers.dev/. It’s missing all dynamic features as it’s a static copy.