Making JetBrains’ IDEs less sluggish

WebStorm works on my Chromebook in Crostini. I’m happy it works. It’s a bit sluggish, but I attributed this to the CPU (m3), the many pixels on the screen (2400×1600) and/or Crostini itself. It’s fine as I won’t run thousands of lines of code here.

On my home Linux desktop (i7-4510U) it’s much better. Not speedy, but ok. Sublime Text is way snappier though.

Turns out you can improve both significantly by changing the Java runtime environment to the one from JetBrains. Here are the steps:

  1. Go to File->Settings->Plugins.
  2. Click Marketplace and search for “Choose Runtime”.
  3. Install the official Choose Runtime plugin from JetBrains.
  4. Wait for the install and click to restart the IDE.
  5. Once back in the project, press Shift twice to open the search window.
  6. Search for Runtime and select “Choose Runtime”.
  7. Change to “jbrsdk-8u-232-linux-x64-b1638.6.tar.gz”, which should be the very last one at the bottom of the list.
  8. Click install, restart the IDE, enjoy!

Obviously pick the latest jbrsdk version offered in that list. Why this is not the default, I cannot say. It should be.

Source: https://www.reddit.com/r/Crostini/comments/e67tij/pycharmwebstormjetbrains_ide_fix/

Citrix Workspace and Linux

Installing the Citrix Workspace (formerly Receiver) on a Linux machine should be simple. After all, installing it on a Chromebook worked just fine (after enabling High DPI since my Chromebook has such a display).

But on Linux all I got was: You have not chosen to trust “Entrust Root Certification Authority – G2”, the issuer of the server’s security certificate.

Well, I was not aware of it and Chrome itself as well as Firefox had no issues with https://validg2.entrust.net/

So what’s throwing that error message?

Seems it’s the Citrix Workspace application itself which uses its own certificate store and it needs some more certificates (see https://support.citrix.com/article/CTX231524):

  1. Get the missing Entrust G2 root certificate from here
  2. Copy it to /opt/Citrix/ICAClient/keystore/cacerts/ (this is where the Debian package installs to)
  3. Rehash the certificates
sudo /opt/Citrix/ICAClient/util/ctx_rehash

Et voilà: it works. See also this entry in the Citrix forum. Different root certificate, but same problem.

Why the Citrix Workspace app maintains its own certificate store is beyond me, as the system usually has a certificate store already. Why not use that one?

Note: The need for the Entrust certificate seems to be because the endpoint I connect to uses a certificate signed by Entrust. Thus not everyone will need this particular root certificate. But whichever one you need, it has to go into /opt/Citrix/ICAClient/keystore/cacerts/
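
For reference, a minimal sketch of steps 2 and 3, assuming the downloaded certificate file is called entrust_g2_ca.cer (adjust the name to whatever you actually downloaded):

# Copy the root certificate into Citrix's own certificate store
sudo cp ~/Downloads/entrust_g2_ca.cer /opt/Citrix/ICAClient/keystore/cacerts/
# Rebuild the certificate hash links so Citrix Workspace picks it up
sudo /opt/Citrix/ICAClient/util/ctx_rehash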

Installing the Garmin ConnectIQ SDK

Yesterday I ordered a Garmin vívoactive 3. At ¥15200 it was hard to not get one. As Garmin has an app store for their watches and there’s also a developer forum and of course an SDK, the next obvious step was to install that SDK.

Installing the ConnectIQ SDK

Get it from here. It’s slightly buggy, but only in the details. On a recent Ubuntu release you run into several issues, but they are easy to fix (this was a great start):

$ cd ~
$ mkdir connectiq
$ cd connectiq/
$ unzip ../Downloads/connectiq-sdk-lin-3.1.7-2020-01-23-a3869d977.zip

$ cd /var/tmp/
$ # libpng12 is no longer available in recent Ubuntu releases, so fetch it directly
$ wget https://launchpad.net/~ubuntu-security/+archive/ubuntu/ppa/+build/15108504/+files/libpng12-0_1.2.54-1ubuntu1.1_amd64.deb
$ sudo dpkg -i libpng12-0_1.2.54-1ubuntu1.1_amd64.deb
$ sudo apt-get install libwebkitgtk-1.0

$ # monkeygraph ships with DOS line endings, which break its shebang line
$ cd ~/connectiq
$ tr -d '\r' < bin/monkeygraph >/tmp/monkeygraph && cp /tmp/monkeygraph bin/monkeygraph

Also make sure OpenJDK 8 is installed: Java 11 does not work and is not supported by Garmin.
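
If several Java versions are installed, a quick way to check and switch on Debian/Ubuntu is update-alternatives (a sketch; the exact package names may differ between releases):

$ sudo apt-get install openjdk-8-jdk
$ # Pick the Java 8 entry from the presented list
$ sudo update-alternatives --config java
$ java -version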

Now waiting for the watch to arrive…

Creating a Developer Key

The first problem trying out the samples in the SDK was that a developer key was required. While the instructions are here, it took me way longer than needed to find them:

$ openssl genrsa -out developer_key.pem 4096
$ openssl pkcs8 -topk8 -inform PEM -outform DER -in developer_key.pem -out developer_key.der -nocrypt

Testing the Examples

Make sure you use Java 8 (not Java 11). Start the simulator, compile a program and run it in the simulator:

$ simulator &
$ cd samples/Timer
$ monkeyc -f monkey.jungle -y ~/connectiq/developer_key.der -o Timer.prg -d vivoactive3
$ monkeydo Timer.prg vivoactive3

Not sure how realistic the simulation is, but it’s certainly good enough to check that a program might work in real life.

Links to Working Examples

ELK with HTTPS

The previous blog entry did not use HTTPS, so all communication was in plain text, which makes using passwords less than ideal. This blog entry fixes that.

The full source code for the docker-compose.yml and haproxy.cfg file is available here.

docker-compose.yml

version: '3'
services:
  elk:
    restart: always
    container_name: ek
    image: sebp/elk
    environment:
      - LOGSTASH_START=0
      - TZ=Asia/Tokyo
    expose:
      - 9200
      - 5601
      - 5044
    volumes:
      - "./data:/var/lib/elasticsearch"
  haproxy:
    restart: always
    container_name: haproxy2
    image: haproxy:2.1
    ports:
      - 9100:9100
      - 9990:9990
      - 5601:5601
      - 6080:6080
    volumes:
      - "./haproxy:/usr/local/etc/haproxy"

What’s this docker-compose file doing?

It starts 2 containers: the ELK container and the HAProxy container. ELK can receive traffic via ports 9200 (ES) and 5601 (Kibana). HAProxy connects to the outside world via ports 9100, 9990, 5601 and 6080.

The data directory for ElasticSearch is in ./data. The HAProxy configuration incl. TLS certificate is in ./haproxy/

What’s HAProxy doing?

HAProxy is the TLS terminator and port forwarder to the ELK container. Here’s part of the HAProxy config file (in ./haproxy/haproxy.cfg):

frontend kibana-http-in
    bind *:6080
    default_backend kibana

frontend kibana-https-in
    bind *:5601 ssl crt /usr/local/etc/haproxy/ssl/private/
    default_backend kibana

backend kibana
    balance roundrobin
    option httpclose
    option forwardfor
    server kibana elk:5601 maxconn 32

  • HAProxy listens on port 6080 for HTTP traffic. It forwards traffic to the backend.
  • HAProxy listens on port 5601 for HTTPS traffic. TLS connection is terminated here. It then forwards the unencrypted traffic to the backend.
  • The backend is on port 5601 on the elk container
  • Not displayed above, but the same happens for ElasticSearch traffic.

Thus you would connect via HTTPS to port 5601 for Kibana and port 9100 for ElasticSearch. You could use HTTP on port 6080 for Kibana and port 9990 for ElasticSearch.
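
The ElasticSearch part of the config follows the same pattern. A sketch of what it might look like (the complete file is in the linked repository, so treat this as illustration only):

frontend es-https-in
    bind *:9100 ssl crt /usr/local/etc/haproxy/ssl/private/
    default_backend elasticsearch

frontend es-http-in
    bind *:9990
    default_backend elasticsearch

backend elasticsearch
    balance roundrobin
    option httpclose
    option forwardfor
    server elasticsearch elk:9200 maxconn 32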

Steps to get ELK with HTTPS running

While I like automation, I doubt I’ll configure this more than twice until the automation breaks due to new versions of ELK. So it’s “half-automated”.

Pre-requisites:

  • A Linux server with min. 2 GB RAM, Docker and docker-compose installed. I prefer Debian; both versions 9 and 10 work.
  • A TLS certificate (e.g. via Let’s Encrypt) in ./haproxy/ssl/private/DOMAIN.pem (full cert chain + private key, no passphrase; see the sketch after this list)
  • Define some variables which we’ll refer to later:
DOCKER_HOST=elastic.my.domain.org
ES_HTTP_PORT=9990

PW_BOOTSTRAP="top.Secret"
PW_KIBANA="My-Kibana-PW"
PW_APM="My-APM-PW"
PW_LOGSTASH="My-LogStash-PW"
PW_BEATS="My-Beats-PW"
PW_MONITORING="My-Monitoring-PW"
PW_ELASTIC="My-Elastic-PW"
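
The TLS certificate for HAProxy has to be a single PEM file containing the full chain plus the private key. A sketch of assembling it (the source paths depend on your ACME client and are placeholders here):

mkdir -p haproxy/ssl/private
# Concatenate the full chain and the private key into one file for HAProxy
cat /path/to/fullchain.pem /path/to/privkey.pem > haproxy/ssl/private/${DOCKER_HOST}.pem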

Prepare your Docker host:

# ES refuses to start if vm.max_map_count is less
# (a plain "sudo echo ... > file" would fail: the redirect runs as the normal user, hence tee)
echo "vm.max_map_count = 262144" | sudo tee /etc/sysctl.d/10es.conf
sudo sysctl --system

# verify:
sysctl vm.max_map_count
# Output should be "vm.max_map_count = 262144"

# Directory for docker-compose files
mkdir elk
cd elk

# Data directory for ES
mkdir data
sudo chown 991:991 data

# Start docker-compose
docker-compose up -d

Starting takes about 1-2 minutes; you’ll notice when the CPU is suddenly less busy. Now you can connect to http://DOCKER_HOST:6080. There is no check for accounts or passwords yet.

Note: If you delete the container (e.g. via “docker-compose down”), you have to do the following steps again!

# Enter the ELK container
docker exec -it ek /bin/bash

Inside the container modify some settings (replace the variables PW_BOOTSTRAP and PW_KIBANA with the actual passwords):

cd /opt/elasticsearch
mkdir /etc/elasticsearch/certs
bin/elasticsearch-certutil cert -out /etc/elasticsearch/certs/elastic-certificates.p12 -pass ""

cat >> /etc/elasticsearch/elasticsearch.yml <<_EOF_
xpack.security.enabled: true

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

_EOF_

cd /opt/elasticsearch
echo "${PW_BOOTSTRAP}" | bin/elasticsearch-keystore add bootstrap.password

chown -R elasticsearch:elasticsearch /etc/elasticsearch

# edit /opt/kibana/config/kibana.yml with Kibana user name and password
sed 's/.*elasticsearch.username:.*/elasticsearch.username: "kibana"/;s/.*elasticsearch.password:.*/elasticsearch.password: "${PW_KIBANA}"/' < /opt/kibana/config/kibana.yml > /tmp/kibana.yml && cp /tmp/kibana.yml /opt/kibana/config/kibana.yml

cat >> /opt/kibana/config/kibana.yml <<_EOF_
xpack.security.encryptionKey: "Secret 32 char long string of chars"
xpack.security.secureCookies: true
_EOF_

chown kibana:kibana /opt/kibana/config/kibana.yml

exit

Restart the containers:

docker-compose restart

Again wait about 1 minute. Then execute the following from anywhere (it does not have to be on the Docker host):

function setPW() {
curl -uelastic:${PW_BOOTSTRAP} -XPUT -H "Content-Type:application/json" "http://${DOCKER_HOST}:${ES_HTTP_PORT}/_xpack/security/user/$1/_password" -d "{ \"password\":\"$2\" }"
}

# Since ES is running by now, set the passwords
setPW "kibana" "${PW_KIBANA}"
setPW "apm_system" "${PW_APM}"
setPW "logstash_system" "${PW_LOGSTASH}"
setPW "beats_system" "${PW_BEATS}"
setPW "remote_monitoring_user" "${PW_MONITORING}"
setPW "elastic" "${PW_ELASTIC}"

Now you should be able to log in to Kibana at https://DOCKER_HOST:5601; the account is “elastic” with the password you set above. And you can connect to ES via https://DOCKER_HOST:9100
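
To verify that authentication is now enforced, a quick check against the cluster health endpoint (a sketch; adjust host and password):

# Without credentials this should now return a 401
curl -i "https://${DOCKER_HOST}:9100/_cluster/health"
# With the elastic user it should report the cluster status
curl -u elastic:${PW_ELASTIC} "https://${DOCKER_HOST}:9100/_cluster/health?pretty"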

Done!

For convenience I create an index, plus a role and a user which can only access that index:

# Create index logs
curl -uelastic:${PW_ELASTIC} -XPUT -H "Content-Type:application/json" \
"http://${DOCKER_HOST}:${ES_HTTP_PORT}/logs" -d '{"settings": { "number_of_shards": 1 },
"mappings": { "properties": { "timestamp": { "type": "date" }, "status": { "type": "integer" },
 "channel": { "type": "text" }, "msg": { "type": "text" }}}}'

# Create a data item
curl -uelastic:${PW_ELASTIC} -XPOST -H "Content-Type:application/json" \
"http://${DOCKER_HOST}:${ES_HTTP_PORT}/logs/_doc" \
-d '{ "timestamp": "'$(date --iso-8601=seconds -u)'",
"status": 200, "channel": "curl", "msg": "Initial test via curl"}'

# Just for verification: find your previous entry via this

curl -uelastic:${PW_ELASTIC} -XGET -H "Content-Type:application/json" \
"http://${DOCKER_HOST}:${ES_HTTP_PORT}/logs/_search" 

# Create role and user to index data directly into ElasticSearch

curl -uelastic:${PW_ELASTIC} -XPOST -H "Content-Type:application/json" \
"http://${DOCKER_HOST}:${ES_HTTP_PORT}/_security/role/logwriter" \
-d '{"cluster":[],"indices":[{"names":["logs*"],"privileges":["create_doc"],
"allow_restricted_indices":false}],"applications":[],"run_as":[],
"metadata":{},"transient_metadata":{"enabled":true}}'

curl -uelastic:${PW_ELASTIC} -XPOST -H "Content-Type:application/json" \
"http://${DOCKER_HOST}:${ES_HTTP_PORT}/_security/user/logger" -d '{ "password": "PASSWORD",
"roles": [ "logwriter" ], "full_name": "Logging user", "email": "my.email@gmail.com" }'

And finally I can log from Cloudflare Workers into ELK via HTTPS:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function esLog(status, msg) {
  const esUrl='https://DOCKER_HOST:9100/logs/_doc';
  let headers = new Headers;
  let now = new Date();
  const body = {
    timestamp: now.toISOString(),
    status: status,
    channel: "log-to-es",
    msg: msg
  };
  headers.append('Authorization', 'Basic ' + btoa('logger:PASSWORD'));
  headers.append('Content-Type', 'application/json');
  try {
    const res = await fetch(esUrl, {
      method: 'POST',
      body: JSON.stringify(body),
      headers: headers,
    });
    return true;
  } catch(err) {
    return false;
  }
}

/**
 * Respond to the request
 * @param {Request} request
 */
async function handleRequest(request) {

  let now1 = new Date();
  await esLog(201, "now1 is set...");
  let now2 = new Date();
  await esLog(202, `now2-now1=${now2-now1} ms`);

  return new Response("Check Kibana for 2 log entries", {status: 200})
}

If you run this once, you’ll get 2 log entries in ES.

Logging via ElasticSearch

The Elastic Stack is a simple way to log “things” into ElasticSearch and make them nicely visible via Kibana. Since ELK can handle logs as well as time series data, I’ll use it for my own logging incl. performance logging.

For pure time series data I’d use the TIG stack: Telegraf, InfluxDB and Grafana.

Installing

sudo sysctl vm.max_map_count=262144
mkdir elk
cd elk
cat >docker-compose.yaml <<_EOF_
version: '3'
services:
  elk:
    restart: always
    container_name: elk
    image: sebp/elk
    environment:
      - LOGSTASH_START=0
      - TZ=Asia/Tokyo
    ports:
      - "5601:5601"
      - "9200:9200"
      - "5044:5044"
    volumes:
      - "./data:/var/lib/elasticsearch"
_EOF_
mkdir -m 0755 data
sudo chown 991:991 data
docker-compose up -d

Starting takes a minute. It’s up when the Kibana web interface is reachable.
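
A simple way to notice when it is up (a sketch; the host name is a placeholder):

# Poll until the Kibana web interface answers
until curl -s -o /dev/null http://the.docker.host:5601/; do
  sleep 5
done
echo "Kibana is up"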

Reference: elk-docker

Using Kibana

Connect to http://the.docker.host:5601/ to get the Kibana interface. Click on the Dev Tools icon to create a new index:

PUT /logs
{
  "settings": {
  "number_of_shards": 1
  },
  "mappings": {
    "properties": {
      "timestamp": { "type": "date" },
      "status": { "type": "integer" },
      "msg": { "type": "text" }
    }
  }
}

To see that there’s something inside the index:

GET /logs/_search

To put something into the index:

POST /logs/_doc
{
  "timestamp": "2020-02-02T15:15:18+09:00",
  "status": 201,
  "msg": "Not so difficult, is it?"
}

And did you know you can use SQL statements for ElasticSearch too?

POST /_sql?format=txt
{
  "query": """
    SELECT * FROM logs WHERE timestamp IS NOT NULL ORDER BY timestamp DESC LIMIT 5
  """
}

Sending Logs

Via curl:

curl -H "Content-Type: application/json" -X POST "http://the.docker.host:9200/logs/_doc" -d '{ "timestamp": "'$(date --iso-8601="sec")'", "status": 200, "msg": "Testing from curl" }'

Via Node.js:

const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: 'http://your.docker.host:9200' });

async function run () {
  let now = new Date();
  try {
    await client.index({
      index: 'logs',
      body: {
        timestamp: now.toISOString(),
        status: 200,
        msg: 'Testing from Node.js'
      }
    })
  } catch (err) {
    console.error("Error: ", err);
  }
}

run().catch(console.log)

Security?

Until here there’s no security. Nothing at all. Anyone can connect to ElasticSearch and execute commands to add data or delete indices. Anyone can login to Kibana and look at data. Not good.

This describes how to set up security. Here is the explanation of the permissions you can give to roles.

I created a role ‘logswriter’ which has the ‘create_doc’ permission for the index ‘logs*’, and then a user ‘logger’ with that role. Now I can log via that user.
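
A sketch of how creating that role and user could look via curl, run as the elastic superuser (host name and passwords are placeholders):

# Role which may only add documents to indices matching logs*
curl -u elastic:ELASTIC-PASSWORD -X POST -H "Content-Type: application/json" \
"http://the.docker.host:9200/_security/role/logswriter" \
-d '{ "indices": [ { "names": [ "logs*" ], "privileges": [ "create_doc" ] } ] }'

# User 'logger' with that role
curl -u elastic:ELASTIC-PASSWORD -X POST -H "Content-Type: application/json" \
"http://the.docker.host:9200/_security/user/logger" \
-d '{ "password": "PASSWORD", "roles": [ "logswriter" ], "full_name": "Logging user" }'

In Node.js this looks like: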

const client = new Client({
  node: 'http://the.docker.host:9200',
  auth: {
    username: 'logger',
    password: 'PASSWORD'
  }
});

The rest is unchanged. Via the fetch API it’s simple too:

try {
  fetch('http://logger:PASSWORD@the.docker.host:9200/logs/_doc', {
    method: 'post',
    body: JSON.stringify({
      timestamp: now.toISOString(),
      status: 206,
      msg: 'Fetch did this'
    }),
    headers: { 'Content-Type': 'application/json' },
  })
} catch(err) {
  console.error('Error: ', err);
}

Note that while the username and password can come from an environment variable so they’re not in the source code, the transport protocol is still unencrypted HTTP.

To be continued. To make HTTPS actually necessary, I have to deploy this on the Internet first.

Get your https:// back in Chrome

Google changed the Address bar in Chrome to remove https:// and the leading www. I am fine with the closed lock for https:// and an open lock for http://, but I do not like the mangled DNS name.

With the new default the address bar shows only the shortened host name. If you are like me and prefer to see the protocol and the complete DNS name, then install the Suspicious Site Reporter extension from Google and you get back your https://www.

Source: this post in reddit.

There used to be a Chrome flag to enable this (chrome://flags/#omnibox-ui-hide-steady-state-url-scheme-and-subdomains), but it’s gone. Or use Firefox.

The Inner Workings Of SSDs

Found a most interesting article series about how SSD internally work: http://codecapsule.com/2014/02/12/coding-for-ssds-part-1-introduction-and-table-of-contents/

It goes quite deep, e.g. it explains the internal structure of flash memory as well as how the Flash Translation Layer (FTL) works. I have not seen any comparably detailed description yet.

A Bit Related

hdparm works well for SCSI disks (or disks which behave like SCSI disks), but it does not work with NVMe disks. The nvme tool does though. See https://tinyapps.org/docs/nvme-sanitize.html for how to erase NVMe SSDs.
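
A sketch with nvme-cli of what an erase looks like (destructive, obviously; the device name is a placeholder and the linked page has the details and caveats):

# List NVMe devices
sudo nvme list
# Erase the whole drive: --ses=1 is a user data erase, --ses=2 a cryptographic erase
sudo nvme format /dev/nvme0n1 --ses=1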

Moving DNS

Part 1: Moving DNS Registrar

I bought my domains quite a while ago via GoDaddy, but I had my DNS servers at Linode as that’s where the VMs I used were. Linode has an easy-to-use GUI and a well-working API (important for Let’s Encrypt certificates via acme.sh). Life was good. When renewal time came and GoDaddy’s prices had increased quite a bit, I moved the registrar from GoDaddy to AWS: it was actually cheaper than GoDaddy, and there are zero attempts to sell me things I don’t need.

Recently I started to look at Cloudflare Workers when I saw that CF is also a registrar. And it’s a rather special one:

Cloudflare Registrar securely registers and manages your domain names with transparent, no-markup pricing that eliminates surprise renewal fees and hidden add-on charges. You pay what we pay — you won’t find better value.

See https://www.cloudflare.com/products/registrar/

Well, that sold me. So I moved one of my 2 domains from AWS to CF, and it was done in about 30 minutes. Most of that time I was waiting for an email from AWS that went to the wrong email account. It probably could have been done within 5 minutes.

Moral of the story: Don’t be afraid of moving your DNS registrar.

Part 2: Moving DNS Server

While moving the registrar is a non-event with no outages whatsoever, moving the DNS servers for a zone is more worrisome as there can be outages for your domain. Cloudflare makes it quite easy to move a zone to them: add your zone to CF, and they’ll give you 2 DNS servers which you need to enter in your registrar’s zone info. Shortly afterwards CF copies the zone information over (mostly).

A few rarely used entries were not copied for some reason. Add the missing ones manually. Keep the old zone active for a while (look at the TTL in your SOA record).

And that’s it.

The most interesting part of Cloudflare is that their DNS (usually) proxies traffic through their servers before it reaches my server. And they cache the data, so theoretically that might make my web page faster. My site now gets some DDoS protection, I can turn on firewalls, do HTTP/3, and learn where traffic comes from.

I can understand that plenty of requests come from the US: that’s where Google & friends run their crawlers and my blog is in English. But why France?

The main reason I moved DNS servers is that I can now use CF Workers with my domains. And that works just fine. An unintended side effect of the DNS server move is that Let’s Encrypt certificate renewals via acme.sh now take seconds instead of (15) minutes. That’s 2 wins for me!
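
For reference, a sketch of issuing a certificate via acme.sh with the Cloudflare DNS API (domain and token are placeholders; acme.sh expects the CF credentials in the environment):

# API token with DNS edit permission for the zone
export CF_Token="YOUR-CF-API-TOKEN"
export CF_Account_ID="YOUR-ACCOUNT-ID"
# Issue (and later renew) a certificate using DNS-01 validation via Cloudflare
acme.sh --issue --dns dns_cf -d my.domain.org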

Now let me see what shenanigans I can do with those CF Workers ☺

OKR – I like it!

Objective – Key Results (OKR) is a way to align teams to move towards a common goal. OKRs are result-oriented: It’s not prescriptive how to do something as that’s left to the implementing team. There’s a clear connection between objective and key results.

KPIs (Key Performance Indicators) and goals which come down from management, on the other hand, give you numbers you should reach. The “why” is not relevant.

It might seem that “result” and “KPI” are the same: if you reach your KPI, you get the intended result. However, that is more of a hypothesis, and it is often wrong. If you have a metric for what you want to get done, you can try several approaches to get there. One approach might match what the KPI prescribes, but you can change course if you see that it does not work. In a KPI world you simply continue down the (wrong) path: the KPI is the goal.

A quick summary of OKRs as I understand them

  1. Have a vision: define where you want to go. The boss usually does that.
  2. Define key results and metrics which show that you move in the right direction. Done by boss and employees. This is where the alignment happens: Everyone must agree here that this is what matters and this is how we measure it.
  3. Define tasks which will likely make the key results move into the right direction. Done by whoever is the expert in this area.
  4. Confirm once in a while (daily, weekly) that the metrics look good, or at least better than before.

Common Problem: Defining Key Results

Regarding the key results, the main problem I see is that instead of actual key results, often simply tasks are listed, with the implicit assumption that doing the task will help the objective, and that if you can count it, it makes a great key result. Well, it ain’t like that.

The Awesome Notebook-Shop

You own a small computer service shop which among other things builds custom notebooks for your clients. This is your main income generator. Your average is 10 per week. You’d like to afford a cruise holiday next year. It’s quite expensive.

Objective: Go to a cruise next year

What’s the key result? One answer could be:

Key result: Build 20 notebooks per week.

Easy to measure. Nice metric. Or is it?

The fallacy lies in the unspoken assumption that building more notebooks means more can be sold, and thus you can go on the cruise next year. But do you have the staff to build that many notebooks? Maybe you have to hire people. Or pay overtime. And can you even sell 20 per week? How about:

Key result: Sell 20 notebooks per week.

This includes building them. But if you have to lower the price in order to sell 20 per week, you’d not be able to go to your cruise.

Why don’t we instead measure directly what matters? This seems to be better:

Key result: Increase profit by 20%.
Key metric: Net money earned

You now have more options available: you can do advertising or you can offer trade-ins. Selling 20 instead of 10 notebooks per week might be an option too, as would be buying from a cheaper vendor or selling at higher prices by adding some extra services. By measuring what actually matters, you always know whether you are on track. No assumptions needed.

Side note: You might want to add some key results to ensure that shortcuts are discouraged.

Constraint key result: Keep customers happy
Key metrics: User retention rate, Customer satisfaction

Otherwise you get a short-term increase in profit, but if quality goes down significantly or you overcharge your customers, you will hurt your long-term objective. If you own the shop, you might do this automatically, but if you don’t measure it, how do you know your customers are happy with your work?

Why I like OKRs

It’s a chance to focus on something (the handful of objectives) and improve it with everyone marching in the same direction. Teams which don’t usually care what other teams do suddenly work together because they have a common objective and a common metric. Employees understand why they do something. OKRs are transparent, removing potentially duplicate work done by different teams. Metrics show where you are, instead of you making up status reports whose main purpose is to make you look good. Results are rewarded. Effort is not. (Those who put in more effort usually get better results anyway.)

More Info About OKRs

Some good links in no particular order:

Moving Blog – Again

I have hosted my own WordPress blog since January 2008: first as a stand-alone installation on a cloud server, then as containers on a cloud server.

In December 2017 I moved this to an AWS-hosted serverless blog (see here for the source code). Technically interesting, but unfortunately harder to use.

Now I’m back on WordPress. I think it’s easier to use and looks better. Posting blog entries is just more fun. The difference compared to before is that now it’s hosted on wordpress.com instead of being self-hosted.

The Old Blog Entries

My old blog entries were copied and are still available here: https://blog-static.harald.workers.dev/. As a static copy it’s missing all dynamic features.

For those who are curious how this works: it’s a copy of the previous blog made via httrack, stored on Backblaze B2 object storage (copied there via MinIO) and served via Cloudflare Workers.
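
A sketch of the copy steps, with placeholder URL and bucket names, and assuming an mc alias named “b2” has already been configured for B2’s S3-compatible endpoint:

# Crawl the old blog into a local directory
httrack "https://blog.example.org/" -O ./blog-static
# Mirror the static copy into the B2 bucket
mc mirror ./blog-static b2/BUCKET/blog/static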

Here’s the rather simple code. Replace B2BaseUrl and downloadKey with your own values.

// Hosting static web files on BackBlaze's B2

const B2BaseUrl = 'https://f000.backblazeb2.com/file/BUCKET/blog/static/';
const downloadKey = '3_20200112......33_0005_dnld';

addEventListener('fetch', event => {
    event.respondWith(handleRequest(event.request))
})

async function handleRequest(request) {
  let response;

  if (request.method !== 'GET') {
    response = new Response('Expect only GET requests', { status: 400 })
  } else {
    let b2Url = request.url.split('/').slice(3).join('/');
    if (b2Url == "" || b2Url == "/") {
            b2Url = "index.html";
    }
    b2Url = B2BaseUrl + b2Url;

    let b2Headers = new Headers(request.headers);
    b2Headers.append("Authorization", downloadKey);
    const modRequest = new Request(b2Url, {
        method: request.method,
        headers: b2Headers
    });
    response = await fetch(modRequest);
    // Make the headers mutable by re-constructing the Response.
    response = new Response(response.body, response);
    response.headers.set('Cache-Control', 'max-age=7200')
  }
  return response;
}