DataDog on ARMv7

Playing with DataDog. There are amd64 and arm64 (ARMv8) agents available. Since I have several ARMv7 machines, time to compile the agent myself!

The instructions here worked, although make sure you have a Python 3 virtualenv set up first. Some dependencies I had to install:

apt install python-dev cmake
pip install wheel
pip install -r requirements.txt

I already had go installed. Make sure $GOROOT and $GOPATH are correct. My $GOROOT points to /usr/local/go while $GOPATH points to ~/go. Until I fixed them, I got all kinds of odd error messages when trying to use the invoke script.
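
For reference, this is roughly what that environment plus the build invocation look like (a sketch; the paths are from my setup, and inv agent.build is the repo’s standard build task):

export GOROOT=/usr/local/go
export GOPATH=~/go
export PATH=$GOROOT/bin:$GOPATH/bin:$PATH

# from the datadog-agent checkout, inside the Python 3 virtualenv
inv agent.build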

The whole compilation on an AllWinner A20 @ 1 GHz takes about 2 hours. Also, ~/go/pkg swelled up to 4 GB, which filled my disk and cost me another hour.
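
If you hit the same disk problem, the go tool can reclaim that space afterwards (standard go commands, nothing specific to this build):

go clean -cache      # drops the build cache
go clean -modcache   # drops the downloaded modules under ~/go/pkg/mod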

For the config file datadog.yaml I copied the corresponding file from a “normal” amd64 agent (from /etc/datadog-agent/).
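
A stripped-down datadog.yaml needs surprisingly little (a minimal sketch; api_key is the one mandatory field, the rest are assumptions about my setup):

api_key: <YOUR_API_KEY>
site: datadoghq.com      # adjust if your account lives on another site, e.g. datadoghq.eu
hostname: cubie          # optional, defaults to the system hostname

To start the agent: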

(venv) harald@cubie:~/go/src/github.com/DataDog/datadog-agent$ ./bin/agent/agent run -c bin/agent/dist/datadog.yaml

And it shows up nicely in the DataDog UI:

CubieTruck on DataDog

It’s not perfect, as there are those 3 integration issues I still have to track down. But I got the Docker/containerd integration and basic CPU/RAM/disk statistics, which is mainly what I needed from my ARMv7 machines.

Integration issues

Google Cloud Platform

My AWS Certified Solutions Architect – Professional certification is expiring in June! Since renewing it is a bit boring, it’s a great reason to get to know GCP better. I generally like their way of thinking more, and today I understood why:

  • AWS has DevOps as their focus point for many products
  • GCP has the developer as the focus point for many products

Of course there’s plenty of overlap, but the philosophy is fundamentally different. That might just be my opinion, but it would explain why I am more comfortable with AWS given my sysadmin background, yet more curious about GCP (as a wannabe small-scale developer).

Pub/Sub

Besides creating VMs, one of the traditionally easiest ways to interact with a cloud environment is message queues. In GCP this is Pub/Sub. And it’s easy.

  1. Create a Topic. With a schema, to keep yourself sane (the matching gcloud commands are sketched after the schema below).

Schema (AVRO):

{
  "type": "record",
  "name": "Avro",
  "fields": [
    {
      "name": "Sensor",
      "type": "string"
    },
    {
      "name": "Temp",
      "type": "int"
    }
  ]
}
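
To set this up from the CLI instead of the console, the gcloud commands look like this (a sketch; TempSchema and schema.avsc are names I picked, the flags are standard gcloud pubsub ones):

# register the AVRO schema (definition saved as schema.avsc)
❯ gcloud pubsub schemas create TempSchema --type=avro --definition-file=schema.avsc

# create the topic with the schema attached, messages encoded as JSON
❯ gcloud pubsub topics create Temp --schema=TempSchema --message-encoding=json

# and a subscription to read from
❯ gcloud pubsub subscriptions create Temp2-sub --topic=Temp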

Then you can publish via gcloud (thanks to Pavan for providing a working example):

❯ gcloud pubsub topics publish Temp --message='{"Sensor":"Storage","Temp":9}'

And in Node.js:

const {PubSub} = require('@google-cloud/pubsub');

function main(
  topicName = 'Temp',
  data = JSON.stringify({Sensor: 'Living room', Temp: 22})
) {
  // the client picks up credentials and project from the environment
  const pubSubClient = new PubSub();

  async function publishMessage() {
    // Pub/Sub expects the payload as a Buffer
    const dataBuffer = Buffer.from(data);

    try {
      const messageId = await pubSubClient.topic(topicName).publish(dataBuffer);
      console.log(`Message ${messageId} published.`);
    } catch (error) {
      console.error(`Received error while publishing: ${error.message}`);
      process.exitCode = 1;
    }
  }

  publishMessage();
}

// surface stray promise rejections instead of failing silently
process.on('unhandledRejection', err => {
  console.error(err.message);
  process.exitCode = 1;
});

main(...process.argv.slice(2));

And with plumber:

# Subscribe
❯ plumber read gcp-pubsub --project-id=training-307604 --sub-id=Temp2-sub -f

# Publish
❯ plumber write gcp-pubsub --topic-id=Temp --project-id=training-376841 --input-data='{"Sensor":"Kitchen","Temp":19}'
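
For completeness, reading those messages back in Node.js is just as short (a sketch using the same @google-cloud/pubsub client, assuming the Temp2-sub subscription from above):

const {PubSub} = require('@google-cloud/pubsub');

const subscription = new PubSub().subscription('Temp2-sub');

// messages are pushed to this handler as they arrive
subscription.on('message', message => {
  console.log(`Received: ${message.data.toString()}`);
  message.ack(); // acknowledge so the message is not redelivered
});

subscription.on('error', err => {
  console.error(err.message);
  process.exitCode = 1;
});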