About a year ago I found out that the Yubikey Neo can be used as a SmartCard which can keep a secret key on-board. You can also use an actual SmartCard if you have one. But the setup procedure is quite involved and you need gpg.
Yubikey to the rescue! Or maybe OpenSSH in this case: As this explains, most Yubikeys, including the cheap blue ones which can only do U2F or FIDO2, can work with OpenSSH 8.2 or newer to provide the private key without storing the secret key unencrypted on disk. Similar to using a SmartCard, but much easier.
More importantly, those keys have been supported by GitHub since May 2021 and by GitLab (14.8+) since March 2022.
Create a key:
❯ ssh-keygen -t ecdsa-sk
Generating public/private ecdsa-sk key pair.
You may need to touch your authenticator to authorize key generation.
Enter PIN for authenticator:
You may need to touch your authenticator (again) to authorize key generation.
Enter file in which to save the key (/home/harald/.ssh/id_ecdsa_sk):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/harald/.ssh/id_ecdsa_sk
Your public key has been saved in /home/harald/.ssh/id_ecdsa_sk.pub
The key fingerprint is:
SHA256:2xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx8 harald@m75q
The key's randomart image is:
+-[ECDSA-SK 256]--+
| .ooo |
[...]
| .*+ . . |
+----[SHA256]-----+
❯ ls -la .ssh/id_ecdsa_sk*
-rw------- 1 harald users 626 Apr 11 20:35 .ssh/id_ecdsa_sk
-rw-r--r-- 1 harald users 224 Apr 11 20:35 .ssh/id_ecdsa_sk.pub
Once the public key part is added on the target system in its ~/.ssh/authorized_keys file, you can connect to it like this:
❯ ssh -i .ssh/id_ecdsa_sk t621.lan
Confirm user presence for key ECDSA-SK SHA256:2xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx8
Welcome to Ubuntu 21.10 (GNU/Linux 5.13.0-39-generic x86_64)
[...]
❯
Note that the private key ~/.ssh/id_ecdsa_sk is encrypted by the Yubikey, so this is complete 2-factor authentication, plus it checks for user presence. And maybe the best part: this works on old U2F-only keys as well as on new FIDO2 security keys. Love it!
Some FIDO2 keys can store the private key directly on the key, which is convenient but unfortunately less secure: unlike SmartCards, which have a limit on unsuccessful attempts, Yubikeys lack that feature for the U2F/FIDO2 part.
In case of errors…
I found two potential problems:
ssh-keygen fails with "Key enrollment failed: invalid format". If you run ssh-keygen with -vvv, you'll see a line "debug1: sk_probe: 0 device(s) detected". This means /dev/hidrawX either does not exist or has the wrong permissions. The default is mode 0600 with owner root:root.
Quick fix: sudo chmod a+rw /dev/hidrawX
Better fix: edit /lib/udev/rules.d/60-fido-id.rules and add ', MODE="0666"' to the line which starts with SUBSYSTEM=="hidraw", then run sudo udevadm control --reload. When you plug in the Yubikey again, /dev/hidrawX will have 0666 permissions. You might need to install libyubikey-udev.
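The edited rule could look like this (a sketch; the exact contents of 60-fido-id.rules vary between distributions, and only the appended MODE part is the change):

```
# /lib/udev/rules.d/60-fido-id.rules (the MODE="0666" at the end is the addition)
SUBSYSTEM=="hidraw", IMPORT{builtin}="fido_id", MODE="0666"
```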
The above problem is likely only an issue when you do not use a graphical UI; if you log in via a graphical UI, all input devices should be owned by the logged-in user.
The second potential problem: your OpenSSH version is older than 8.2. Check with "ssh -V".
U2F works well and easily via a web browser, but you can also use it directly on the command line. You “just” have to implement the USB protocol part of U2F, namely talk to /dev/hidrawX.
harald@r2s2:~/git$ git clone git@github.com:mdp/u2fcli.git
Cloning into 'u2fcli'...
remote: Enumerating objects: 57, done.
remote: Total 57 (delta 0), reused 0 (delta 0), pack-reused 57
Receiving objects: 100% (57/57), 19.26 KiB | 1.20 MiB/s, done.
Resolving deltas: 100% (21/21), done.
harald@r2s2:~/git$ cd u2fcli
harald@r2s2:~/git/u2fcli$ go mod init u2fcli
go: creating new go.mod: module u2fcli
go: to add module requirements and sums:
go mod tidy
harald@r2s2:~/git/u2fcli$ go mod tidy
go: finding module for package github.com/flynn/u2f/u2ftoken
go: finding module for package github.com/flynn/hid
go: finding module for package github.com/mdp/u2fcli/cmd
go: finding module for package github.com/flynn/u2f/u2fhid
go: finding module for package github.com/spf13/cobra
go: found github.com/mdp/u2fcli/cmd in github.com/mdp/u2fcli v0.0.0-20180327171945-2b7ae3bbca08
go: found github.com/flynn/hid in github.com/flynn/hid v0.0.0-20190502022136-f1b9b6cc019a
go: found github.com/flynn/u2f/u2fhid in github.com/flynn/u2f v0.0.0-20180613185708-15554eb68e5d
go: found github.com/flynn/u2f/u2ftoken in github.com/flynn/u2f v0.0.0-20180613185708-15554eb68e5d
go: found github.com/spf13/cobra in github.com/spf13/cobra v1.2.1
harald@r2s2:~/git/u2fcli$ go build
harald@r2s2:~/git/u2fcli$ ls
cmd go.mod go.sum LICENSE main.go README.md u2fcli
For my home Kubernetes installation I guess it’s time to enable TLS. I can’t use Let’s Encrypt for this as my internal network is not reachable from the outside, and while I have a workaround for that problem, I’d rather use my internal Certificate Authority via step-ca.
It’s actually simpler than I thought, mainly because the documentation I found first included options which were not explained at all. Turns out they are indeed fully optional… Thus on the CA server do:
step ca provisioner add acme --type ACME
This adds the ACME provisioner to the ~/.step/config/ca.json file:
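For reference, an ACME provisioner entry looks roughly like this (a sketch; "forceCN" and "claims" are examples of the optional fields, and the exact field names depend on your step-ca version):

```json
{
  "type": "ACME",
  "name": "acme",
  "forceCN": true,
  "claims": {
    "maxTLSCertDuration": "8760h",
    "defaultTLSCertDuration": "2160h"
  }
}
```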
The first two items were added by the above command; the next ones I added myself, and they are optional. Restart step-ca:
harald@r2s1:~$ sudo systemctl restart step-ca
harald@r2s1:~$ systemctl status step-ca
● step-ca.service - Step Certificates
Loaded: loaded (/etc/systemd/system/step-ca.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2021-05-11 18:27:39 JST; 15s ago
Main PID: 547880 (step-ca)
Tasks: 8 (limit: 998)
Memory: 10.5M
CGroup: /system.slice/step-ca.service
└─547880 /usr/local/bin/step-ca /home/harald/.step/config/ca.json --password-file /home/harald/.step/pass/key_pass.txt
To create a new certificate on a different machine which runs no HTTP server on port 80:
❯ sudo REQUESTS_CA_BUNDLE=$(step path)/certs/root_ca.crt \
certbot certonly --standalone \
--server https://ca.lan:8443/acme/acme/directory
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Enter email address (used for urgent renewal and security notices)
(Enter 'c' to cancel): my.mail@some.mail.server
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at None. You must agree in order to register
with the ACME server. Do you agree?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: y
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing, once your first certificate is successfully issued, to
share your email address with the Electronic Frontier Foundation, a founding
partner of the Let's Encrypt project and the non-profit organization that
develops Certbot? We'd like to send you email about our work encrypting the web,
EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: n
Account registered.
Please enter in your domain name(s) (comma and/or space separated) (Enter 'c'
to cancel): m75q.lan
Requesting a certificate for m75q.lan
Performing the following challenges:
http-01 challenge for m75q.lan
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/m75q.lan/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/m75q.lan/privkey.pem
Your certificate will expire on 2021-05-11. To obtain a new or
tweaked version of this certificate in the future, simply run
certbot again. To non-interactively renew *all* of your
certificates, run "certbot renew"
- If you like Certbot, please consider supporting our work by:
Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
Donating to EFF: https://eff.org/donate-le
And now let’s renew:
❯ sudo openssl x509 -in /etc/letsencrypt/live/m75q.lan/fullchain.pem -noout -text | grep After
Not After : May 11 11:16:01 2021 GMT
❯ sudo REQUESTS_CA_BUNDLE=$(step path)/certs/root_ca.crt \
certbot renew --server https://ca.lan:8443/acme/acme/directory
[...]
❯ sudo openssl x509 -in /etc/letsencrypt/live/m75q.lan/fullchain.pem -noout -text | grep After
Not After : May 11 11:17:04 2021 GMT
This is documented here. Note that the certificate ends up in /etc/letsencrypt/live/, and because root ran the command, you need root to get it out. Not the way it should be, but this was more a test of the ACME provisioner in step-ca.
Turns out that my My Number card (マイナンバーカード) is not the only thing which can do things like signing files, and PIV is the official(?) standard for this. My old Yubikey 3 Neo can do that too, thanks to the yubico-piv-tool.
And it’s basically the same as the My Number card, except I have to set up everything myself.
As on my My Number card, there are two slots for two different keys and certificates:
Slot 9a for identification
Slot 9c for signing
Creating them is simple. I just show the ones for the signing slot 9c:
❯ yubico-piv-tool -s9c -AECCP256 -agenerate -o f2-9c.pub
❯ yubico-piv-tool -s9c -S'/CN=Harald Kubota/OU=Home/O=lan/' -averify -arequest -i f2-9c.pub -o f2-9c.csr
Enter PIN:
Successfully verified PIN.
Successfully generated a certificate request.
# I need a DNS name. And 8760h is 1 year.
❯ step ca sign --set=dnsNames='["test5.lan"]' --not-after=8760h f2-9c.csr f2-9c.crt
✔ Provisioner: myCA@home (JWK) [kid: IFXxmmZDCX76WMNbFfUoBOBZdubx0SG45Jsd0VGxaz1]
✔ Please enter the password to decrypt the provisioner key:
✔ CA: https://ca.lan:8443
✔ Certificate: f2-9c.crt
❯ yubico-piv-tool -s9c -aimport-certificate -i f2-9c.crt
Successfully imported a new certificate.
And here is how to sign a file and verify the signature:
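The transcript for this step is missing from the post. With OpenSC’s pkcs11-tool the flow might look roughly like this (a sketch; the module path, the PKCS#11 object id for slot 9c, and the file names are assumptions that vary by system):

```shell
# Hash the file, then sign the digest with the key in slot 9c.
# Assumption: slot 9c shows up as object id 02 via Yubico's libykcs11 module.
openssl dgst -sha256 -binary document.txt > document.sha256
pkcs11-tool --module /usr/lib/x86_64-linux-gnu/libykcs11.so --login \
  --sign --mechanism ECDSA --signature-format openssl --id 02 \
  --input-file document.sha256 --output-file document.sig

# Verify: extract the public key from the certificate created earlier,
# then check the signature against the digest.
openssl x509 -in f2-9c.crt -pubkey -noout > f2-9c-pub.pem
openssl pkeyutl -verify -pubin -inkey f2-9c-pub.pem \
  -sigfile document.sig -in document.sha256
```

This needs the Yubikey plugged in; the verification part runs anywhere since it only uses the certificate.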
For anyone outside Japan this is probably not of any interest. Please pass. Nothing to see here.
For me it was interesting: this is a smart card which can also do NFC, which raises a lot of questions: How does it work? What data is inside? Can I look at it? Can other people look at it (without the PIN)? Why does it have 2 different PINs?
Things I learned in half a day since I got my MyNumber card:
It can do NFC too.
This app works to read data off the card. Including pictures, certificates and other stuff. Interesting.
On this site it explains the file system structure and other internals of the card. Very interesting.
That unrelated app works great to read my Suica/PASMO card. And both apps work and they figure out which card is for which app. Neat.
Here is a 5 year old article about how to use the data via its PKCS#11 API. And how to use this for ssh with OpenSC. I did that a while ago with a YubiKey. I prefer the YubiKey form factor a lot.
I got so many gadgets at home, but no Smart Card reader/writer. I should get this fixed so I can read the certificate with my PC. Makes paying tax via e-Tax much easier too.
When programming in Node.js, a huge problem is that “npm install” downloads libraries you did not specify: it downloads all dependencies listed in package.json, but also their dependencies, and the dependencies of those dependencies, and so on, which is code you did not explicitly ask for. While you can point your direct dependencies to trustworthy sources, you have no control over anything further down the line. In short: this is a (known) security hazard. A recent example is here. Auditing code in npm helps, but the whole concept is a fundamental problem.
Dart and Deno reduce the problem significantly since you have to name all dependencies, but that does not necessarily help if a dependency itself is compromised.
The Deno runtime, as well as wasmtime, uses a sandbox approach to mitigate this: you have to explicitly enable access to anything; otherwise a Deno program has very few permissions. From a security point of view, this is much better.
Neither Node.js nor Python has a sandbox model, and when loading libraries from the Internet, which both do a lot, do you always know what you get? So I’m looking for ways to retrofit some containment around programs with potentially questionable code.
My requirements:
Possible to use ad-hoc: I want to run a program with limited access (e.g. no root and no ability to become root, network access only when I allow it, no access to files it does not need)
Protect my files from programs which run as me and thus with my normal privileges (e.g. very few programs need access to my ssh keys)
Test case:
Run a Node.js program which wants to read ~/.test_me and access http://www.google.com. It should not be able to do either unless it’s enabled.
❯ node index.js
File: test 1
[...many more lines from .test_me...]
[
`<!doctype html><html itemscope="" itemtype="http://schema.org/WebPage"
[...some HTML code from www.google.com...]
When it comes to security and sandboxing, those choices came up after a quick check with Google:
SELinux needs special policies/contexts set up for the whole system. While this is great, it’s something root does; I see no simple way to do an ad-hoc configuration to run a single command with the “correct” permissions. Plus the policy files are neither easy to read nor to write.
AppArmor is similar. A bit easier to read policy files, but they are all root owned, so not suitable for ad-hoc commands.
Both aim to secure the complete system, with the user explicitly not allowed to change the policies. Their purpose is not to protect the user from hurting themselves.
Sandbox Tools
Docker, or containers in general, provide good isolation from the rest of the system, and via bind mounts you can easily allow access to files or directories. But you have to create a container image first, upload it to a container registry, and download and run it (Update: turns out this is not required; a locally created image can be executed without problems). While it has its uses, creating containers is significant overhead if it’s needed for every program you are suspicious about.
minijail from Google looks good:
Minijail […] provides an executable that can be used to launch and sandbox other programs, […]
Installing on Debian was straightforward (needs kernel-headers and libcap-dev). Running a command with a specific user-definable policy is possible:
# minijail0 -S /usr/share/minijail0/$(uname -m)/cat.policy -- /bin/cat /proc/self/seccomp_filter
but the examples/ directory was a small shock to me: a single example, and not a well-explained one.
❯ cat examples/cat.policy
# In this directory, test with:
# make LIBDIR=.
# ./minijail0 -n -S examples/cat.policy -- /bin/cat /proc/self/status
# This policy only works on x86_64.
read: 1
write: 1
restart_syscall: 1
rt_sigreturn: 1
exit_group: 1
open: 1
openat: 1
close: 1
fstat: 1
# Enforce W^X.
mmap: arg2 in ~PROT_EXEC || arg2 in ~PROT_WRITE
fadvise64: 1
While there is a tool to record the used system calls (via strace) to create a policy (similar to SELinux’s audit2allow tool), that means running a potentially harmful program once without restrictions. Plus the policy file is not exactly easy to understand, and the documentation does not help.
This is a dead end for my purpose.
bubblewrap was the security layer for Flatpak before it was spun out as a standalone tool. Using it is very command-line-option intensive, but a wrapper script can handle this. A test run:
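The wrapper script itself is not shown in the post. A sketch of what it might look like (the script name bwrap.test matches the transcript below; the bind paths and merged-/usr symlinks are assumptions about the host system):

```shell
#!/bin/sh
# bwrap.test -- sketch of a bubblewrap wrapper for the Node.js test program.
# bwrap denies everything not explicitly granted:
#   --unshare-all          cuts off network, IPC, pids, ...
#   --share-net            re-enables the network; must come AFTER --unshare-all
#                          since the last option wins
#   resolv.conf bind       needed so DNS lookups work inside the sandbox
# Remove --share-net and the .test_me bind to test the failure case.
exec bwrap \
  --unshare-all \
  --share-net \
  --ro-bind /usr /usr \
  --symlink usr/lib /lib \
  --symlink usr/lib64 /lib64 \
  --symlink usr/bin /bin \
  --ro-bind /etc/resolv.conf /etc/resolv.conf \
  --ro-bind "$HOME/.test_me" "$HOME/.test_me" \
  --ro-bind "$HOME/js" "$HOME/js" \
  --proc /proc --dev /dev \
  --chdir "$HOME/js" \
  node index.js
```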
Removing the "--share-net" option and the "--ro-bind" for .test_me stops both from being accessible:
❯ ./bwrap.test
Error while accessing .test_me: Error: ENOENT: no such file or directory, open '/home/harald/.test_me'
Error: FetchError: request to http://www.google.com/ failed, reason: getaddrinfo ENOTFOUND www.google.com
Note that you also need to bind /etc/resolv.conf to allow resolving DNS names. And the order of "--unshare-all" and "--share-net" is important, as the last one wins.
firejail is conceptually similar to bubblewrap, but beside having a large list of command line options, it also has configuration files in /etc/firejail/ and it also allows user-owned configurations (default in ~/.config/firejail):
❯ cat ~/.config/firejail/nodejs.profile
whitelist /home/harald/js
#whitelist /home/harald/.test_me
net none
#quiet
include /usr/local/etc/firejail/whitelist-common.inc
include /usr/local/etc/firejail/default.profile
❯ firejail --profile=~/.config/firejail/nodejs.profile node index.js
Reading profile /home/harald/.config/firejail/nodejs.profile
Reading profile /usr/local/etc/firejail/whitelist-common.inc
Reading profile /usr/local/etc/firejail/default.profile
Reading profile /usr/local/etc/firejail/disable-common.inc
Reading profile /usr/local/etc/firejail/disable-passwdmgr.inc
Reading profile /usr/local/etc/firejail/disable-programs.inc
Parent pid 231521, child pid 231522
Warning: cleaning all supplementary groups
Warning: cleaning all supplementary groups
Warning: cleaning all supplementary groups
Warning: cleaning all supplementary groups
Warning: cleaning all supplementary groups
Child process initialized in 94.84 ms
Error while accessing .test_me: Error: ENOENT: no such file or directory, open '/home/harald/.test_me'
Error: FetchError: request to http://www.google.com/ failed, reason: getaddrinfo ENOTFOUND www.google.com
Parent is shutting down, bye...
❯ firejail --quiet --net=none node index.js
Error while accessing .test_me: Error: EACCES: permission denied, open '/home/harald/.test_me'
Error: FetchError: request to http://www.google.com/ failed, reason: getaddrinfo ENOTFOUND www.google.com
The last sample shows that you don’t need to create a separate profile but similar to bwrap you can use command line options for most settings.
Uncommenting the "whitelist /home/harald/.test_me" line allows access to that file. Commenting out the "net none" line allows network access. By default network access is granted, but you can change this in /etc/firejail/default.profile. Once disabled in a profile, it cannot be re-enabled though. (Update: an "--ignore=net" option will ignore the "net none" in a profile.)
After the above tests I found out you can skip the "--profile=~/.config/firejail/PROFILENAME" option if PROFILENAME is the binary name plus ".profile", as firejail will pick this up automatically. Very neat!
And you can make it less verbose too. With its sensible defaults you often don’t even need to create a profile; e.g. shell history files are inaccessible by default:
❯ firejail --quiet bash
$ cd
$ cat .bash_history
cat: .bash_history: Permission denied
$ ls -la .bash_history
-r-------- 1 nobody nogroup 0 Dec 30 23:39 .bash_history
My Conclusion
SELinux and AppArmor are not something users can manage by themselves. Different scope than what I am looking for.
Using containers, especially running as non-root, works as long as you want to use containers anyway. Otherwise it’s a huge overhead: create a container image, store it in a registry, then run it; any code change needs a new container image. Good for certain workloads, especially those which will run as containers anyway later on. While I use containers extensively, a lot of programs I run are not containers.
bubblewrap works. It needs an extensive list of options to be useful, but that’s not hard to put into a script. Since you have to add every option yourself and there are no defaults, it’s very explicit about permissions, which makes debugging easier: everything is configured right where you run your suspicious program. As the order of options is important, I can see this getting complicated quickly for non-trivial programs. Here is an example. Luckily most programs are trivial: few file accesses are needed, plus some capabilities like network access.
firejail got the spot between security and ease-of-use right in my opinion: sensible defaults (e.g. disabling the at and crontab commands) with profiles for many programs. You can also have user-configurable profiles, and they are not hard to create. The amount of extra work when using firejail is low: just adding “firejail” before the command already helps a lot out-of-the-box by hiding sensitive files and disabling misusable commands. Creating a specific profile makes this very configurable. And if you name the profile after the binary you plan to use, it’s both simple to use and still configurable.
Note that no solution is 100% secure. There’s always a trade-off between convenience and security. Unless you enforce it, if it’s inconvenient, it won’t be done.
PS: While testing I found that firejail cannot run programs which have capabilities set if you use “caps.drop all”, which is included in the default profile. See bug report. I can’t say yet if it’s a bug, a badly worded option, a lack of documentation, or just unexpected behavior.
DSM → Control Panel → Security → Certificate → Add. Then Configure and use the new one as system default.
Now https://ds.lan:5001 will use the new certificate. Repeat in 1 year. Since the default maximum lifetime of certificates was 720h, I had to change this to 1 year (8760h) on the step CA server:
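The lifetime limits live in the "claims" section of ~/.step/config/ca.json on the CA server; roughly like this (field names per the step-ca docs, the values are my choice):

```json
"authority": {
  "claims": {
    "maxTLSCertDuration": "8760h",
    "defaultTLSCertDuration": "8760h"
  }
}
```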
Part 1 was technically correct, but it turns out it’s too manual for me to actually use:
you only do it once in a while (once a year, because certs might have a 1-year validity)
you don’t do it if it’s a lot of extra manual work
So here is Part 2 because I found something easier: Step CLI and Step CA.
The main difference to the openssl method (which continues to work): this CA runs as a service. So on the client side you only need to connect once to set things up, and then you can get certificates from a single place.
Get the releases and install, either as Debian package or tar file.
Extra step on ARMv7 (and possibly all 32-bit architectures): replace “badger” with “badgerv2” in .step/config/ca.json. Also add a “claims” section under the “authority” key unless you like the defaults:
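As a sketch, the relevant parts of ca.json then look like this (the dataSource path and the claim values are examples):

```json
"db": {
  "type": "badgerv2",
  "dataSource": "/home/harald/.step/db"
},
"authority": {
  "claims": {
    "maxTLSCertDuration": "8760h",
    "defaultTLSCertDuration": "2160h"
  }
}
```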