Kubernetes & Container Orchestration
Cluster architecture, multi-tenancy, autoscaling, GitOps, security policy, upgrade strategy, and the operational maturity to run it without drama.
Kubernetes works. What doesn't work is treating it as a destination instead of a substrate. Most "Kubernetes problems" turn out to be "we never decided what good looks like" problems.
I take you from 'we have a cluster running' to 'we have a platform engineers can ship on safely'.
What I cover
Cluster topology — How many clusters, why, how they're segregated by team / environment / blast radius. Documented decisions.
GitOps deploy pipeline — ArgoCD or Flux, app-of-apps, environment promotion via PR. No more kubectl apply from laptops, no more 'who changed prod last Tuesday'.
Autoscaling that actually scales — HPA + VPA configured per workload, Karpenter or Cluster Autoscaler tuned, stop-the-bleed budgets, bin-packing optimisations. Capacity that follows traffic without drama.
Security baseline — Pod Security Standards, network policy with Cilium or Calico, image signing with Cosign, runtime monitoring, secrets via External Secrets Operator. Supply chain hardened.
Upgrade muscle — Tested upgrade procedure, deprecated-API scanner, blue/green node group strategy. Going from 1.27 to 1.30 is a Tuesday afternoon, not a quarterly project.
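The GitOps bullet above can be sketched as an ArgoCD "app-of-apps" root Application. This is a minimal illustration, not a client deliverable: the repo URL, path, and names are placeholders, and the sync policy shown is the common automated-with-self-heal setup.

```yaml
# Hypothetical app-of-apps root: ArgoCD watches one Git path whose
# contents are themselves Application manifests, one per workload.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: root
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/platform-config.git  # placeholder repo
    targetRevision: main
    path: apps/production            # each child Application lives here
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd
  syncPolicy:
    automated:
      prune: true                    # delete resources removed from Git
      selfHeal: true                 # revert manual kubectl changes to Git state
```

With `selfHeal` on, a `kubectl apply` from a laptop gets reverted within minutes, and environment promotion becomes a PR against `apps/production`.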
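For the autoscaling bullet, a per-workload HPA is the starting point. A hedged sketch with placeholder names and thresholds; real targets depend on the workload's latency profile, and VPA (not shown) should manage a different resource dimension than the HPA to avoid the two fighting each other.

```yaml
# Illustrative HPA: names, replica bounds, and the 70% CPU target
# are example values, not recommendations.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
  namespace: prod
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api
  minReplicas: 3
  maxReplicas: 30
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300   # wait 5 min before scaling in, avoids flapping
```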
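Two pieces of the security baseline are pure Kubernetes config: Pod Security Standards enforced via namespace labels, and a default-deny NetworkPolicy. The namespace name is a placeholder; Cilium and Calico both honour the standard `networking.k8s.io/v1` policy shown here.

```yaml
# Enforce the "restricted" Pod Security Standard on a namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: payments                     # example namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
---
# Default-deny ingress: pods in the namespace accept no traffic
# unless a more specific NetworkPolicy allows it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments
spec:
  podSelector: {}                    # selects every pod in the namespace
  policyTypes:
    - Ingress
```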
Where I help most
Teams running 1–20 clusters, somewhere on the spectrum from "we deployed it last year and haven't touched it since" to "we want to consolidate seven snowflakes into a managed platform".
Adjacent services
Cloud & DevOps Engineering
Production cloud environments designed deliberately — resilient, cost-aware, and ready for the day you actually need them.
Platform Engineering
Self-service platforms that turn 'open a ticket and wait three days' into 'open a PR and ship in fifteen minutes'.
CI/CD Pipeline Engineering (GitHub Actions · GitLab · Buildkite)
Pipelines that are fast, deterministic, and trustworthy. Merging to main should be a non-event.