God, writing Helm charts is mind-numbingly tedious, while also being frustratingly finicky.
Trying to work out if it's possible to configure Containerd to set the `--init` flag (or its equivalent) on container run, so `tini` hosts PID 1 and signal handling works properly.
It looks like `--init` does some magic - I can see runc has some PID/FD parameters, but it looks like CRI doesn't have an escape hatch for, say, arbitrary arguments - even if I did work out the magic.
Containerd's config docs are even sparser on the CRI side (or at least, the man page is):
https://github.com/containerd/containerd/blob/main/docs/man/containerd-config.toml.5.md
I really don't want to have to insist that every damn Dockerfile bundles `tini` and sets `ENTRYPOINT`.
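For the record, that pattern looks something like this - a minimal sketch, with a made-up base image and app path:

# The tini-in-every-image workaround: tini becomes PID 1, reaps zombies,
# and forwards signals to the app. Image and app path are illustrative.
FROM alpine:3.20
RUN apk add --no-cache tini
ENTRYPOINT ["/sbin/tini", "--"]
CMD ["/usr/local/bin/myapp"]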
Ok rough todo when I get back
* Get Netbird functional
* Finish setting up firewalls (there is a very stupid issue with this)
* Write IPv6 blog post (I wrote an outline)
* Fix blog (related to above lmao)
* WireGuard IPv6
* BGP IPv6 LB?
* Set up Proxmox Talos outpost node (requires learning a new Terraform module)
* Set up Hetzner Talos outpost (idk the issue with this yet)
* Decide on a new GitHub username instead of deadnamelastname
* Learn kustomize
* Start on network policies in k8s
* Decide how to run Home Assistant on my parents' server (remote standalone k8s? Podman?)
* Learn Alertmanager for some key metrics
* Dashboards?
* Fix VPN issues on android (timeouts??)
* Will probably add to this before I return
#Homelab #Kubernetes #SelfHosted
Running your own #Matrix homeserver is still a bit of a "pain in the *ss" (umpteen options, but none that is really 100% the standard) - but there is now a full stack that builds on #Kubernetes (#k3s) and really brings everything along: PostgreSQL, #Synapse, MAS, Element Web, and #Element Call.
https://element.io/server-suite/community
It seems heavy at first, but it runs fine on two cores and 4 GB of RAM. Could this finally be it?
This is so incredibly cursed. I'm probably gonna have nightmares about it.
This allows you to query YAML files using SQL. But... But... What sane person would want to do that?
I'm looking for work!
I'm a high-level infra and devops engineer and team lead.
I've run my own team, and previously worked at Mozilla and Facebook. I'm looking for infra/devops lead or senior infra/devops engineer positions.
I'm not looking for pure development positions, but writing scripts, glue, and things like CI - as demanded by infra/devops - are totally fine. I just don't want to be developing the product.
The one thing I can't budge on is that I am exclusively looking for 100% remote positions, due to physical disability. I am based in the UK but I'm happy working with companies anywhere in the world, and capable of shifting my circadian rhythm around to match yours.
My CV is available at cv.dave.io. I am available to start immediately. The CV is a Notion page and can be cloned directly into your workspace if you use Notion. I'm more than happy to answer any questions, and all leads are greatly appreciated. Drop me a public or private mention, or use the other contact details listed on my CV.
My email address is gated behind a humanity check (don't worry, it's automated) at https://dave.io.
You are very welcome to boost this post, with my thanks.
Obligatory hashtags: #GetFediHired #FediJobs #Kubernetes #DevOps #Infra #Infrastructure #Engineer #Engineering #TeamLead
Cat tax supplied.
Considering switching the #MinIO backend of #mstdndk to #Garage by #deuxfleurs. 3 replicas on #Kubernetes. Anyone with real-life experience and/or tips? :-)
I bought tons of memory and maxed out all the workstations in my homelab, back when memory was really cheap.
But now that I'm losing my job… time to shut down the big iron.
Going from a Proxmox cluster with 1.5TB of RAM, to a Proxmox standalone with 96GB of RAM.
Also… time to focus on learning containers and container orchestration.
Got #kubernetes anyone??
What kinds of things do you need to learn about #k8s to be passable for a job that uses it?
I've been stuck in VM land for aaaaaages!
Most of the orgs I've been with have deliberately *avoided* ever touching anything that came remotely close to Kubernetes… so it's hurting me in my search.
Would love to get #Fedihired.
"Man gewinnt keine Zeit mit Kubernetes" - hörte ich früher oft. Jetzt frage ich Claude: "Check den Namespace, schlage Network Policies vor" - und Sekunden später kommen durchdachte, sichere Manifeste.
Von ResourceQuota-Kalkulationen bis Default-Deny-Policies: Was früher Stunden dauerte, funktioniert jetzt in Minuten?!
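A minimal sketch of the kind of default-deny policy I mean, assuming a hypothetical namespace called "demo":

# Deny all ingress and egress for every pod in the namespace;
# any allowed traffic then has to be opened up by additional policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: demo
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress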
Time for some community discourse!
New Course Release! The Kubernetes Course by Nigel Poulton
This course is based on the 2025 edition of the best-selling Kubernetes book, which has been fully updated for the latest versions of Kubernetes and the latest industry trends. You won't find a better or more up-to-date book-based course on Kubernetes. Hand-crafted over the past 8 years by best-selling author Nigel Poulton.
Find it on Leanpub!
Uff, that one hurts: https://www.theregister.com/2025/10/01/critical_red_hat_openshift_ai_bug/
#OpenShift AI installed a role that allowed creating arbitrary Jobs in arbitrary namespaces, granting full access to the cluster's control plane.
https://bugzilla.redhat.com/show_bug.cgi?id=2396641
Time to check your OpenShift Clusters...
And if you want to check your clusters in general for similar issues:
# List ClusterRoleBindings that grant rights to every authenticated user.
kubectl get clusterrolebindings.rbac.authorization.k8s.io -o json | jq '.items[] | select(.subjects[]?.name == "system:authenticated")'
You're welcome :)
Whelp, back I go down into the IPv6 mines. Wish me luck.
(I need to get Cilium/Talos running in dual stack before I get these remote nodes working)
The nodes are getting IPv6 addresses, but they aren't propagating into k8s/Cilium for some reason...
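For future reference, this is roughly the Talos machine config shape I'm aiming at for dual stack - a sketch only, with made-up CIDRs, and Cilium additionally needs ipv6.enabled=true in its Helm values:

# cluster.network needs both an IPv4 and an IPv6 subnet for pods and
# services; the values below are illustrative, not my actual ranges.
cluster:
  network:
    podSubnets:
      - 10.244.0.0/16
      - fd00:10:244::/48
    serviceSubnets:
      - 10.96.0.0/12
      - fd00:10:96::/108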
#SelfHosted #Homelab #IPv6 #Networking #Talos #Kubernetes
Intelligent Kubernetes Load Balancing at Databricks
Link: https://www.databricks.com/blog/intelligent-kubernetes-load-balancing-databricks
Discussion: https://news.ycombinator.com/item?id=45434417
hey fedi I’m Jason Hill (he/him). Ex-911 dispatcher & Army medic → cybersecurity student. Accidentally nuked my old instance. Working on my first AWS app, tending a noisy homelab, and tinkering with K8s/Docker/forensics. Into OSS, automation, leatherworking, and games. Down to collaborate on cyber/AI tools. More: linkedin.com/in/jasondenson
#Intro #Cybersecurity #Homelab #AWS #Kubernetes #Docker #OpenSource #Infosec #Automation #AI #Law #Gaming #Leatherworking #Texas #Reintroduction #Tech
Any thoughts on #ZFS as the filesystem for a #Kubernetes node? My very unscientific initial sandbox impression is that it really squeezes the maximum performance out of the #Hetzner auction servers, which usually feature a lot of slow rotating storage with a couple of #NVMe sticks thrown in to sweeten the deal. When the latter are used for caching, the throughput seems almost bearable. I'll do some more practical testing over the next weeks, but I'm quite optimistic. Maybe this will solve the disk saturation issues we're seeing, without breaking the budget.
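For anyone curious about the layout, it's roughly this - a sketch with made-up pool and device names, so adapt before running anything:

# Rotating disks form the pool; the NVMe sticks are attached as L2ARC
# read cache and show up under a separate "cache" section in the status.
zpool create tank raidz2 sda sdb sdc sdd
zpool add tank cache nvme0n1 nvme1n1
zpool status tank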
One interesting thing I didn't realize until today, because I never had to do this before: a two-node Kubernetes cluster is "not reliable" (I've only done one or three so far). If I reboot the node that is holding the API VIP, I lose access to the working node, because etcd refuses to assume it can "steal" it - presumably because etcd needs a quorum, and a majority of two members is two, so losing either node stalls the cluster.
Unsure if it's a general Kubernetes problem or a specific Talos quirk, but the only way to recover is to drop the etcd ephemeral partition and reboot the node so it "rebuilds itself".
Most likely I will drop the older node from the cluster. Very unlikely I will bring up a third one just to deal with this kind of issue. Both nodes are very underutilized, no need for a third one wasting power.
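For the record, dropping the older member would look roughly like this on Talos - a sketch with a made-up node IP and hostname, so double-check against the talosctl docs first:

# List the current etcd members, then remove the departing node by hostname.
talosctl -n 10.0.0.11 etcd members
talosctl -n 10.0.0.11 etcd remove-member node2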
Kubernetes v1.34: Pod Level Resources Graduated to Beta | Kubernetes
https://kubernetes.io/blog/2025/09/22/kubernetes-v1-34-pod-level-resources/