ELK with HTTPS

The previous blog entry did not use HTTPS, so all communication was in plain text, which makes using passwords less than ideal. This blog entry fixes that.

The full source code for the docker-compose.yml and haproxy.cfg files is available here.

docker-compose.yml

version: '3'
services:
  elk:
    restart: always
    container_name: ek
    image: sebp/elk
    environment:
      - LOGSTASH_START=0
      - TZ=Asia/Tokyo
    expose:
      - 9200
      - 5601
      - 5044
    volumes:
      - "./data:/var/lib/elasticsearch"
  haproxy:
    restart: always
    container_name: haproxy2
    image: haproxy:2.1
    ports:
      - 9100:9100
      - 9990:9990
      - 5601:5601
      - 6080:6080
    volumes:
      - "./haproxy:/usr/local/etc/haproxy"

What’s this docker-compose file doing?

It starts two containers: the ELK container and the HAProxy container. The ELK container accepts traffic on ports 9200 (ElasticSearch) and 5601 (Kibana), but only from other containers (those ports are exposed, not published). HAProxy is published to the outside world on ports 9100, 9990, 5601 and 6080.

The data directory for ElasticSearch is ./data. The HAProxy configuration, including the TLS certificate, is in ./haproxy/.
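
For orientation, the directory layout on the Docker host ends up roughly like this (DOMAIN.pem stands for whatever your certificate file is called):

elk/
├── docker-compose.yml
├── data/                    # ElasticSearch data, owned by UID/GID 991
└── haproxy/
    ├── haproxy.cfg
    └── ssl/
        └── private/
            └── DOMAIN.pem   # full certificate chain + private key, no passphrase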

What’s HAProxy doing?

HAProxy is the TLS terminator and port forwarder to the ELK container. Here’s part of the HAProxy config file (in ./haproxy/haproxy.cfg):

frontend kibana-http-in
    bind *:6080
    default_backend kibana

frontend kibana-https-in
    bind *:5601 ssl crt /usr/local/etc/haproxy/ssl/private/
    default_backend kibana

backend kibana
    balance roundrobin
    option httpclose
    option forwardfor
    server kibana elk:5601 maxconn 32

  • HAProxy listens on port 6080 for HTTP traffic and forwards it to the backend.
  • HAProxy listens on port 5601 for HTTPS traffic. The TLS connection is terminated here; the now unencrypted traffic is forwarded to the backend.
  • The backend is port 5601 on the elk container.
  • Not shown above, but the same happens for ElasticSearch traffic (see the sketch below).

Thus you connect via HTTPS on port 5601 for Kibana and on port 9100 for ElasticSearch. Plain HTTP is available on port 6080 for Kibana and on port 9990 for ElasticSearch.
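
The ElasticSearch part of haproxy.cfg isn’t shown above; it looks essentially the same. A sketch (the frontend/backend names here are illustrative; the backend is ElasticSearch on port 9200 inside the elk container, as in the docker-compose file):

frontend es-http-in
    bind *:9990
    default_backend elasticsearch

frontend es-https-in
    bind *:9100 ssl crt /usr/local/etc/haproxy/ssl/private/
    default_backend elasticsearch

backend elasticsearch
    balance roundrobin
    option httpclose
    option forwardfor
    server elasticsearch elk:9200 maxconn 32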

Steps to get ELK with HTTPS running

While I like automation, I doubt I’ll set this up more than twice before the automation breaks due to new versions of ELK. So it’s only “half-automated”.

Pre-requisites:

  • A Linux server with at least 2 GB RAM and Docker plus docker-compose installed. I prefer Debian; both versions 9 and 10 work.
  • A TLS certificate (e.g. via Let’s Encrypt) in ./haproxy/ssl/private/DOMAIN.pem (full certificate chain plus private key, no passphrase); see the sketch after this list.
  • Define some variables which we’ll refer to later:
DOCKER_HOST=elastic.my.domain.org
ES_HTTP_PORT=9990

PW_BOOTSTRAP="top.Secret"
PW_KIBANA="My-Kibana-PW"
PW_APM="My-APM-PW"
PW_LOGSTASH="My-LogStash-PW"
PW_BEATS="My-Beats-PW"
PW_MONITORING="My-Monitoring-PW"
PW_ELASTIC="My-Elastic-PW"
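
If the certificate comes from Let’s Encrypt via certbot, the PEM file HAProxy expects is simply the full chain concatenated with the private key. A sketch, assuming the usual certbot paths and that the certificate was issued for the host name in DOCKER_HOST:

# Assumption: certbot has already issued a certificate for this domain.
# Run this inside the "elk" directory created below.
mkdir -p haproxy/ssl/private
sudo cat /etc/letsencrypt/live/${DOCKER_HOST}/fullchain.pem \
         /etc/letsencrypt/live/${DOCKER_HOST}/privkey.pem \
  > haproxy/ssl/private/${DOCKER_HOST}.pem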

Prepare your Docker host:

# ES refuses to start if vm.max_map_count is lower than this
echo "vm.max_map_count = 262144" | sudo tee /etc/sysctl.d/10es.conf
sudo sysctl --system

# verify:
sysctl vm.max_map_count
# Output should be "vm.max_map_count = 262144"

# Directory for docker-compose files
mkdir elk
cd elk

# Data directory for ES
mkdir data
sudo chown 991:991 data

# Start docker-compose
docker-compose up -d

It takes about 1-2 minutes; you’ll notice when the CPU suddenly becomes less busy. Now you can connect to http://DOCKER_HOST:6080. There is no check for accounts or passwords yet.
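
A quick sanity check from the command line, e.g. against the plain-HTTP ElasticSearch port (at this point ES still answers without any credentials):

curl "http://${DOCKER_HOST}:${ES_HTTP_PORT}/"
# should return the ES JSON banner ("You Know, for Search")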

Note: If you delete the containers (e.g. via “docker-compose down”), you have to do the following steps again!

# Enter the ELK container
docker exec -it ek /bin/bash

Inside the container, modify some settings (replace the variables PW_BOOTSTRAP and PW_KIBANA below with the actual passwords, since the shell variables defined earlier are not set inside the container):

cd /opt/elasticsearch
mkdir /etc/elasticsearch/certs
bin/elasticsearch-certutil cert -out /etc/elasticsearch/certs/elastic-certificates.p12 -pass ""

cat >> /etc/elasticsearch/elasticsearch.yml <<_EOF_
xpack.security.enabled: true

xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12

_EOF_

cd /opt/elasticsearch
echo "${PW_BOOTSTRAP}" | bin/elasticsearch-keystore add bootstrap.password

chown -R elasticsearch:elasticsearch /etc/elasticsearch

# edit /opt/kibana/config/kibana.yml with Kibana user name and password
sed 's/.*elasticsearch.username:.*/elasticsearch.username: "kibana"/;s/.*elasticsearch.password:.*/elasticsearch.password: "${PW_KIBANA}"/' \
  < /opt/kibana/config/kibana.yml > /tmp/kibana.yml \
  && cp /tmp/kibana.yml /opt/kibana/config/kibana.yml

cat >> /opt/kibana/config/kibana.yml <<_EOF_
xpack.security.encryptionKey: "Secret 32 char long string of chars"
xpack.security.secureCookies: true
_EOF_

chown kibana:kibana /opt/kibana/config/kibana.yml

exit

Restart the containers:

docker-compose restart

Again wait about a minute. Then execute the following from anywhere (it does not have to be the Docker host):

function setPW() {
  curl -uelastic:${PW_BOOTSTRAP} -XPUT -H "Content-Type:application/json" \
    "http://${DOCKER_HOST}:${ES_HTTP_PORT}/_xpack/security/user/$1/_password" \
    -d "{ \"password\":\"$2\" }"
}

# Since ES is running by now, set the passwords
setPW "kibana" "${PW_KIBANA}"
setPW "apm_system" "${PW_APM}"
setPW "logstash_system" "${PW_LOGSTASH}"
setPW "beats_system" "${PW_BEATS}"
setPW "remote_monitoring_user" "${PW_MONITORING}"
setPW "elastic" "${PW_ELASTIC}"

Now you should be able to log in to Kibana at https://DOCKER_HOST:5601 with the “elastic” account and its password. And you can connect to ElasticSearch via https://DOCKER_HOST:9100.
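
To verify the TLS setup and the new passwords from the command line, something like this should work (assuming the certificate matches DOCKER_HOST):

curl -uelastic:${PW_ELASTIC} "https://${DOCKER_HOST}:9100/_cluster/health?pretty"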

Done!

For convenience I create an index, plus a role and a user that may only write to that index:

# Create index logs
curl -uelastic:${PW_ELASTIC} -XPUT -H "Content-Type:application/json" \
  "http://${DOCKER_HOST}:${ES_HTTP_PORT}/logs" -d '{
  "settings": { "number_of_shards": 1 },
  "mappings": { "properties": {
    "timestamp": { "type": "date" },
    "status":    { "type": "integer" },
    "channel":   { "type": "text" },
    "msg":       { "type": "text" }
  } } }'

# Create a data item
curl -uelastic:${PW_ELASTIC} -XPOST -H "Content-Type:application/json" \
  "http://${DOCKER_HOST}:${ES_HTTP_PORT}/logs/_doc" \
  -d '{ "timestamp": "'$(date --iso-8601=seconds -u)'",
  "status": 200, "channel": "curl", "msg": "Initial test via curl" }'

# Just for verification: find your previous entry via this
curl -uelastic:${PW_ELASTIC} -XGET -H "Content-Type:application/json" \
  "http://${DOCKER_HOST}:${ES_HTTP_PORT}/logs/_search"

# Create a role and a user to index data directly into ElasticSearch
curl -uelastic:${PW_ELASTIC} -XPOST -H "Content-Type:application/json" \
  "http://${DOCKER_HOST}:${ES_HTTP_PORT}/_security/role/logwriter" \
  -d '{ "cluster": [],
  "indices": [ { "names": [ "logs*" ], "privileges": [ "create_doc" ],
    "allow_restricted_indices": false } ],
  "applications": [], "run_as": [],
  "metadata": {}, "transient_metadata": { "enabled": true } }'

curl -uelastic:${PW_ELASTIC} -XPOST -H "Content-Type:application/json" \
  "http://${DOCKER_HOST}:${ES_HTTP_PORT}/_security/user/logger" \
  -d '{ "password": "PASSWORD",
  "roles": [ "logwriter" ], "full_name": "Logging user", "email": "my.email@gmail.com" }'

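To check that the new account behaves as intended, you can index a test document as the logger user; since the logwriter role only grants create_doc on logs*, a search with the same account should be rejected:

# should succeed
curl -ulogger:PASSWORD -XPOST -H "Content-Type:application/json" \
  "http://${DOCKER_HOST}:${ES_HTTP_PORT}/logs/_doc" \
  -d '{ "timestamp": "'$(date --iso-8601=seconds -u)'",
  "status": 200, "channel": "curl", "msg": "Test as logger user" }'

# should fail with a security error (no read privilege)
curl -ulogger:PASSWORD -XGET "http://${DOCKER_HOST}:${ES_HTTP_PORT}/logs/_search"
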
And finally I can log from Cloudflare Workers into ELK via HTTPS:

addEventListener('fetch', event => {
  event.respondWith(handleRequest(event.request))
})

async function esLog(status, msg) {
  // Index one log document into ES (HAProxy terminates TLS in front of it)
  const esUrl = 'https://DOCKER_HOST:9100/logs/_doc';
  const headers = new Headers();
  const now = new Date();
  const body = {
    timestamp: now.toISOString(),
    status: status,
    channel: "log-to-es",
    msg: msg
  };
  // Basic auth with the restricted "logger" account created earlier
  headers.append('Authorization', 'Basic ' + btoa('logger:PASSWORD'));
  headers.append('Content-Type', 'application/json');
  try {
    const res = await fetch(esUrl, {
      method: 'POST',
      body: JSON.stringify(body),
      headers: headers,
    });
    // Treat HTTP errors (e.g. 401) as failure, too
    return res.ok;
  } catch (err) {
    return false;
  }
}

/**
 * Respond to the request
 * @param {Request} request
 */
async function handleRequest(request) {

  let now1 = new Date();
  await esLog(201, "now1 is set...");
  let now2 = new Date();
  await esLog(202, `now2-now1=${now2-now1} ms`);

  return new Response("Check Kibana for 2 log entries", {status: 200})
}

And when you run this once, you’ll get two log entries in ES.
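
You can either look at them in Kibana or ask ES directly for the newest documents, e.g.:

curl -uelastic:${PW_ELASTIC} -XGET -H "Content-Type:application/json" \
  "http://${DOCKER_HOST}:${ES_HTTP_PORT}/logs/_search?sort=timestamp:desc&size=5&pretty"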
