Minio S3 and Events

One of the great features of AWS’s S3 is that you can get notifications when objects are uploaded, deleted, changed, etc.

Turns out Minio can do that too! See https://docs.min.io/docs/minio-bucket-notification-guide.html for details. A quick test using NSQ as the queue manager worked.

Start NSQ as a Docker container (on host t620.lan):

$ docker run --rm -p 4150-4151:4150-4151 nsqio/nsq /nsqd

Configure Minio:

$ mc admin config set cubie notify_nsq:1 nsqd_address="t620.lan:4150" queue_dir="" queue_limit="0" tls="off" tls_skip_verify="on" topic="minio"

Restart the Minio server. It’ll now print one extra line on startup:

Dec 08 22:37:37 cubie minio[4502]: SQS ARNs:  arn:minio:sqs::1:nsq

Now configure the actual events (any changes in the bucket “Downloads”):

$ mc event add cubie/Downloads arn:minio:sqs::1:nsq

After uploading a file I get an event like this:

{ 
  "EventName": "s3:ObjectCreated:Put", 
  "Key": "Downloads/to-kuro.sh", 
  "Records": [ 
    { 
      "eventVersion": "2.0", 
      "eventSource": "minio:s3", 
      "awsRegion": "", 
      "eventTime": "2020-12-08T13:40:56.970Z", 
      "eventName": "s3:ObjectCreated:Put", 
      "userIdentity": { 
        "principalId": "minio" 
      }, 
      "requestParameters": { 
        "accessKey": "minio", 
        "region": "", 
        "sourceIPAddress": "192.168.1.134" 
      }, 
      "responseElements": { 
        "content-length": "0", 
        "x-amz-request-id": "164EC17C5E9BEB3E", 
        "x-minio-deployment-id": "d3d81f71-a06c-451e-89be-b1dc4e891054", 
        "x-minio-origin-endpoint": "https://192.168.1.36:9000" 
      }, 
      "s3": { 
        "s3SchemaVersion": "1.0", 
        "configurationId": "Config", 
        "bucket": { 
          "name": "Downloads", 
          "ownerIdentity": { 
            "principalId": "minio" 
          }, 
          "arn": "arn:aws:s3:::Downloads" 
        }, 
        "object": { 
          "key": "testscript.sh", 
          "size": 337, 
          "eTag": "5f604e1b35b1ca405b35503b86b56d51", 
          "contentType": "application/x-sh", 
          "userMetadata": { 
            "content-type": "application/x-sh" 
          }, 
          "sequencer": "164EC17C6153CB1E" 
        } 
      }, 
      "source": { 
        "host": "192.168.1.134", 
        "port": "", 
        "userAgent": "MinIO (linux; amd64) minio-go/v7.0.6 mc/2020-11-25T23:04:07Z" 
      } 
    } 
  ] 
}
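Whatever consumes these messages just needs to pick the interesting fields out of the JSON. A minimal sketch in Python (field names taken from the event above; `parse_minio_event` is a hypothetical helper, not part of any Minio SDK):

```python
import json

def parse_minio_event(payload: str):
    """Extract (bucket, key, size, event name) tuples from a Minio event payload."""
    event = json.loads(payload)
    return [
        (
            r["s3"]["bucket"]["name"],
            r["s3"]["object"]["key"],
            r["s3"]["object"].get("size"),
            r["eventName"],
        )
        for r in event.get("Records", [])
    ]

# Trimmed-down version of the event shown above
sample = '''{
  "EventName": "s3:ObjectCreated:Put",
  "Records": [{
    "eventName": "s3:ObjectCreated:Put",
    "s3": {
      "bucket": {"name": "Downloads"},
      "object": {"key": "testscript.sh", "size": 337}
    }
  }]
}'''

print(parse_minio_event(sample))
# [('Downloads', 'testscript.sh', 337, 's3:ObjectCreated:Put')]
```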

Neat!

Synology’s DSM and Minio’s S3

It’s not straightforward to use Minio’s S3 server as a back-end for DSM’s Cloud Sync. Here’s how to make it work:

Enable Minio with TLS

# Create a certificate for Minio
step ca certificate cubie.lan ~/.minio/certs/public.crt ~/.minio/certs/private.key --provisioner-password-file=$HOME/.step/pass/provisioner_pass.txt

export MINIO_ACCESS_KEY=access_key
export MINIO_SECRET_KEY=secret_key_very_secret
export MINIO_DOMAIN=cubie.lan
 
minio server /s3

You can now access this storage via https://cubie.lan:9000. Note the MINIO_DOMAIN setting, which enables access to buckets via BUCKET.cubie.lan instead of cubie.lan/BUCKET.
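The difference between the two addressing styles can be sketched with plain string construction (illustrative helper names, not a Minio API):

```python
def path_style_url(endpoint: str, bucket: str, key: str) -> str:
    # Path-style: the bucket is part of the URL path
    return f"https://{endpoint}/{bucket}/{key}"

def virtual_host_url(domain: str, bucket: str, key: str, port: int = 9000) -> str:
    # Virtual-hosted-style: the bucket becomes a subdomain
    # (this is what MINIO_DOMAIN plus a matching DNS entry enables)
    return f"https://{bucket}.{domain}:{port}/{key}"

print(path_style_url("cubie.lan:9000", "Downloads", "file.txt"))
# https://cubie.lan:9000/Downloads/file.txt
print(virtual_host_url("cubie.lan", "Downloads", "file.txt"))
# https://Downloads.cubie.lan:9000/file.txt
```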

Cloud Sync and Minio’s S3

In DSM, open Cloud Sync, create a new connection, and select S3 Storage.

Now the trick: DSM does not use https://cubie.lan:9000/BUCKET/ to access your bucket; instead it uses https://BUCKET.cubie.lan:9000/, so you need a DNS entry for each bucket you use in DSM (a CNAME will do).
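Assuming buckets named Downloads and Backup, the required records in a BIND-style zone file would look like this (adapt to whatever DNS server serves your LAN):

```
Downloads.cubie.lan.  IN  CNAME  cubie.lan.
Backup.cubie.lan.     IN  CNAME  cubie.lan.
```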

Click Next in the DSM screen and leave the Remote Path empty (Root folder). Changing this will break the replication.

That’s it. Three points, really:

  • Must use https
  • Uses DNS names to access buckets
  • Don’t use sub-directories inside a bucket

HTTPS on Synology’s DSM

My NAS is a Synology DS212 and it can do https. But to make it use my own CA’s certificate, a bit of extra work is needed:

Add my own root CA’s Certificate

# Copy to the default folder for CA Root Certs of DSM 
cp root_ca.crt /usr/share/ca-certificates/mozilla/myCA.crt

# Link it into the system folder
ln -s /usr/share/ca-certificates/mozilla/myCA.crt /etc/ssl/certs/myCA.pem

# Create the hashed symlink that OpenSSL looks up
cd /etc/ssl/certs
ln -s myCA.pem `openssl x509 -hash -noout -in myCA.pem`.0

# Append to the bundled CA file
cat myCA.pem >> /etc/ssl/certs/ca-certificates.crt

# Testing
openssl verify -CApath /etc/ssl/certs myCA.pem

Use our own TLS Certificate

Create certificate

step ca certificate ds.lan ds.crt ds.key --kty RSA --size 2048 --not-after=8760h

DSM → Control Panel → Security → Certificate → Add. Then Configure and use the new one as system default.

Now https://ds.lan:5001 will use the new certificate. Repeat in a year. Since the default maximum lifetime of certificates was 720h, I had to raise it to 1 year (8760h) in the claims section of the step CA server’s configuration:

    "minTLSCertDuration": "5m", 
    "maxTLSCertDuration": "8760h",
    "defaultTLSCertDuration": "24h",