Tony Norlin
@tnorlin.se
1.3K followers 360 following 310 posts
Homelabber (BSD/illumos/Linux/Kubernetes), Interests span across tech, music, photo, food (pasta & pizza napoletana), coffee and my family
I bought the Pro Max 16 PoE a couple of months ago, but now they finally have a bunch of 10GbE switches, such as the Pro XG 10 PoE. That price though..
ssh -o CertificateFile=ed25519-${DATASET}-cert.pub -i ed25519-sk-${DATASET} zfskey@${luks_vm} | zfs load-key -n zones/${DATASET}

It might not be as transparent as the curl one, but with LUKS, pass and an SSH certificate (backed by a YubiKey) I may have moved the weakest link to another layer instead..
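(The -n in the command above only dry-runs the key check; a minimal sketch of the actual unlock would be the same pipeline without -n, plus a mount:)

ssh -o CertificateFile=ed25519-${DATASET}-cert.pub -i ed25519-sk-${DATASET} zfskey@${luks_vm} | zfs load-key zones/${DATASET}
zfs mount zones/${DATASET}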
Instead, I've now created a user with SSH certificates for each corresponding dataset, with a forced command that extracts the key (via pass on that LUKS VM), so each certificate can only reach its corresponding key and not any other...
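(A minimal sketch of signing such a certificate; the CA path and pass entry names are illustrative, not the exact ones:)

# -O clear drops the default permissions; -O force-command pins this
# certificate to extracting exactly one entry from pass
ssh-keygen -s ca/zfskey_ca -I zfskey-${DATASET} -n zfskey \
  -O clear -O force-command="pass show zfs/${DATASET}" \
  ed25519-sk-${DATASET}.pub
# and on the LUKS VM, sshd_config trusts the CA:
#   TrustedUserCAKeys /etc/ssh/zfskey_ca.pub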
I ended up replacing HTTPS with SSH (something I'd meant to do years ago).. I thought about using pass or Bitwarden, but those solutions seemed suboptimal to distribute to each physical server..
I've felt a bit puzzled about this "issue". It's still way more convenient than loop-AES (which I still believe is one of the better solutions, albeit not that smooth and transparent), and I prefer native solutions over keeping my data in a VM. Though I'd prefer not to have the keys in a VM either..
curl -s --key /root/.zfsencryption/user.key --cert /root/.zfsencryption/user.crt:${SSL_KEY_PASSWORD} -k ${luks_vm}/${DATASET}... | zfs load-key zones/${DATASET}
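(For context, the client pair that curl presents could be minted with a private CA, roughly like this; file names and subject are illustrative:)

# password-protected client key, matching --cert user.crt:${SSL_KEY_PASSWORD}
openssl genrsa -aes256 -passout pass:${SSL_KEY_PASSWORD} -out user.key 4096
openssl req -new -key user.key -passin pass:${SSL_KEY_PASSWORD} -subj "/CN=zfskey-client" -out user.csr
openssl x509 -req -in user.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out user.crt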
I had a "temporary" solution: I created a LUKS-encrypted VM on which I stored the corresponding keys, and whenever I needed to unlock a dataset (during a reboot) I just booted the VM, entered the LUKS passphrase into the console and then..
Tricks & treats on Halloween evening.. some 5-6 years ago I switched from loop-AES to OpenZFS encryption for my main storage...
Reposted by Tony Norlin
The FreeBSD extensions got merged into the OCI Runtime Spec today! This is another step on the way to bringing the container ecosystem to FreeBSD. runj will add support for this soon (most of the work is already done): github.com/opencontaine...
Add FreeBSD as a platform by dfr · Pull Request #1286 · opencontainers/runtime-spec
This uses FreeBSD jails to implement container isolation.
github.com
It can't sell well; it sounds really strange with "pumpkin spice".. I know pumpkin seeds on bread, but...
luberneters, folks
Reposted by Tony Norlin
One day the industry will recognize the drawbacks of AI agents and nondeterministic automation, and rediscover the UNIX philosophy of chaining together small, purpose-built tools in a low-cost and predictable way, otherwise known as shell scripts.
Let's bring back POSIX as the golden standard.
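(In that spirit, a toy sketch of the kind of pipeline being praised; the log file name is made up:)

# top five error sources in a log, using nothing but standard tools
# (field 3 assumed to hold the component name)
grep -F 'ERROR' app.log | awk '{print $3}' | sort | uniq -c | sort -rn | head -5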
Kubernetes v1.34.0 - Of Wind & Will (O' WaW) was released today and I've updated my port for illumos, FreeBSD and OpenBSD (with corresponding binaries).

github.com/tnorlin/kube...

#kubernetes #homelab #illumos #freebsd #openbsd
Releases · tnorlin/kubernetes
Production-Grade Container Scheduling and Management - tnorlin/kubernetes
github.com
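(A sketch of fetching one of the binaries; the tag follows the gitVersion below, but the asset name is a guess, so check the releases page:)

curl -LO https://github.com/tnorlin/kubernetes/releases/download/v1.34.0-3/kubectl-illumos-amd64
chmod +x kubectl-illumos-amd64
./kubectl-illumos-amd64 version --client -oyaml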
kubectl version -oyaml
clientVersion:
buildDate: "2025-08-27T19:09:43Z"
compiler: gc
gitCommit: d736e489c368c26e7782fbe9559ebcce7adbf7b9
gitTreeState: clean
gitVersion: v1.34.0-3+d736e489c368c2
goVersion: go1.24.6
major: "1"
minor: 34+
platform: illumos/amd64
[..]
Kubernetes v1.34 - Sneak Peek is about to be released!

This time it breaks for me:
+++ [0827 17:47:48] Building go targets for illumos/amd64
k8s.io/kubernetes/cmd/kube-apiserver (static)
# k8s.io/kubernetes/pkg/securitycontext
pkg/securitycontext/util.go:212:23: undefined: possibleCPUs
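(That log line comes from the tree's own build scripts, invoked roughly like this; since the same file compiles on Linux, possibleCPUs presumably lives in a Linux-only source file and needs a stub for other platforms:)

# cross-build just the apiserver; KUBE_BUILD_PLATFORMS is read by
# the k8s build scripts (hack/lib/golang.sh)
KUBE_BUILD_PLATFORMS=illumos/amd64 make all WHAT=cmd/kube-apiserver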
No, I think the storage layer was probably a bad match for Ceph (a dataset from a ZFS pool), but I'm a bit torn about whether I should keep it like that, as I want to have some Ceph around for learning
This is back to the days of ATA33 or UDMA66
Still a tad speedier than Longhorn.. but compared with OpenEBS on an SFF Lenovo with a consumer NVMe...
grep IOPS openebs-zfspv
IOPS=10044.806641 BW(KiB/s)=40196
IOPS=6487.756348 BW(KiB/s)=25967
IOPS=7215.422363 BW(KiB/s)=924111
IOPS=7382.805664 BW(KiB/s)=945536
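(Those four lines are grepped from a saved kubestr run, presumably the 4K random read/write IOPS jobs followed by the 128K read/write bandwidth jobs; storage class name assumed from the file name:)

./kubestr fio -s openebs-zfspv | tee openebs-zfspv
grep IOPS openebs-zfspv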

I'm back to #0 😅
./kubestr fio -s longhorn
[..]
Elapsed time- 2m26.593707708s
FIO test results:
[..]
read:
IOPS=154.668747 BW(KiB/s)=635
[..]
write:
IOPS=59.918293 BW(KiB/s)=256
[..]
read:
IOPS=171.361801 BW(KiB/s)=22457
[..]
write:
IOPS=99.210823 BW(KiB/s)=13233
[..]
./kubestr fio -s ceph-block
[..]
Elapsed time- 1m48.072393042s
FIO test results:
[...]
read:
IOPS=318.433960 BW(KiB/s)=1289
[...]
write:
IOPS=166.506378 BW(KiB/s)=682
[..]
read:
IOPS=305.519897 BW(KiB/s)=39624
[..]
write:
IOPS=170.620377 BW(KiB/s)=22357
Decided to give Rook Ceph a try (again) and it made no big difference whether the raw disk was set as async or sync (the physical flash is a Samsung PM9A3) - it felt underperforming in my specific setup.
For the k8s part I have swapped around the worker nodes (quite easy when I have an external control plane) and I'm currently running stuff on 3 virtual (bhyve) worker nodes w/ a dedicated NIC (only a 1G link currently, due to the switch) and an NVMe backend (sharing the same physical NVMe)
That's a wrap on summer 2025, and it's about time to get that homelab in shape again (it hasn't really been in shape since I moved into a house last year).