How I use Deno and cdk8s to deploy my homelab

This post details the setup of my homelab. It goes into the technical details of how I set up a single-node k3s Kubernetes cluster, using cdk8s and Deno to generate all of the required YAML manifests. This is a practical deployment with:

  • Automated backups
  • Monitoring and alerting
  • Automated deployments
  • Automatic image/chart upgrades
  • Support for GPU acceleration
  • Secure secrets with 1Password
  • Secure remote access through Tailscale
  • Direct access for game servers and certain protocols like mDNS

tl;dr: the repository is on GitHub. The /cdk8s directory contains the TypeScript code that generates the Kubernetes manifests in /cdk8s/dist.

Table of Contents


I’ve had a homelab for around a decade. The hardware itself has gone from repurposed parts in college (a Core Duo served me very well from 2017-2022) to a very beefy server today:

(The full build is on PCPartPicker)

TODO: insert pictures

Over the years I’ve tried quite a few ways to manage it:

  • Manually installing everything without any automation
  • Artisanal, hand-written bash scripts
  • Ansible
  • Docker Compose

Those methods were mostly informed by what I wanted to learn at the time. That trend hasn’t changed with my move to Kubernetes. I had no experience with K8s back in December, and today I use it to manage my homelab quite successfully. Kubernetes is overkill for a homelab, but it does provide a great learning environment for me where the consequences are relatively low (as long as my backups keep working).

I name each iteration of my server so that I can disambiguate references to older installations. Previously I named my servers after Greek/Roman gods, but now I’m using the names of famous computer scientists. The name of the latest iteration is “lamport”, named after Leslie Lamport, who is known for his work in distributed systems.


cdk and cdk8s

If you’ve used CloudFormation, then you know how much it sucks. You use a weird dialect of YAML to define your AWS resources. Back in 2018 AWS introduced the cdk library. It allows you to generate your CloudFormation YAML using a real language like Go, Python, Java, or TypeScript.

This idea turned out to be excellent, so they did the same thing for Kubernetes with cdk8s. cdk8s seems to be abandoned, but it still works quite well, and the TypeScript definitions are generated from Kubernetes’ resources (including third-party custom resource definitions!), so the library should continue to work for quite a while longer.

Here’s a “hello world” program from cdk8s’ documentation:

import { Construct } from "constructs";
import { App, Chart } from "cdk8s";
import { KubeDeployment } from "./imports/k8s";

class MyChart extends Chart {
  constructor(scope: Construct, ns: string, appLabel: string) {
    super(scope, ns);

    // Define a Kubernetes Deployment with three nginx replicas
    new KubeDeployment(this, "my-deployment", {
      spec: {
        replicas: 3,
        selector: { matchLabels: { app: appLabel } },
        template: {
          metadata: { labels: { app: appLabel } },
          spec: {
            containers: [
              {
                name: "app-container",
                image: "nginx:1.19.10",
                ports: [{ containerPort: 80 }],
              },
            ],
          },
        },
      },
    });
  }
}

const app = new App();
new MyChart(app, "getting-started", "my-app");
app.synth();


The result of running this program is a Kubernetes YAML file that you can deploy using kubectl apply:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: getting-started-my-deployment-c85252a6
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - image: nginx:1.19.10
          name: app-container
          ports:
            - containerPort: 80

Why is this useful? Static typing! cdk8s can guide you as you write your Kubernetes resources. For example, it can tell you which properties are valid when you’re creating a resource, or let you know when you’ve specified an invalid property.

Inspired by Xe's blog

cdk8s has support for all of Kubernetes’ resources. These definitions are generated by the cdk8s import command, which generates types for every Kubernetes resource on your server, including CRDs (custom resource definitions). Here’s an example of a generated definition for 1Password, which I use to handle all of the secrets in my Kubernetes cluster:

export class OnePasswordItem extends ApiObject {
  public constructor(scope: Construct, id: string, props: OnePasswordItemProps = {}) {
    super(scope, id, {
      ...OnePasswordItem.GVK,
      ...props,
    });
  }
}

export interface OnePasswordItemProps {
  readonly metadata?: ApiObjectMetadata;
  readonly spec?: OnePasswordItemSpec;
  readonly type?: string;
}

export interface OnePasswordItemSpec {
  readonly itemPath?: string;
}

Here’s how I use it to store my Tailscale key:

new OnePasswordItem(chart, "tailscale-operator-oauth-onepassword", {
  spec: {
    itemPath: "vaults/v64ocnykdqju4ui6j6pua56xw4/items/mboftvs4fyptyqvg3anrfjy6vu",
  metadata: {
    name: "operator-oauth",
    namespace: "tailscale",

Takeaway: cdk8s supports all Kubernetes resources, including third-party resources from 1Password, Tailscale, Traefik, etc.


TODO: write about Deno

So, with cdk8s I have an excellent way to author my Kubernetes manifests. How do I deploy them?


The workflow is actually quite simple. I store my Kubernetes manifests in a GitHub repo and I point ArgoCD to it.

How do I configure ArgoCD? With cdk8s, of course:

import { Chart } from "npm:cdk8s";
import { Application } from "../../imports/";

export function createLamportApp(chart: Chart) {
  return new Application(chart, "lamport-app", {
    metadata: {
      name: "lamport",
    },
    spec: {
      project: "default",
      source: {
        repoUrl: "",
        path: "cdk8s/dist/",
        targetRevision: "main",
      },
      destination: {
        server: "https://kubernetes.default.svc",
        namespace: "lamport",
      },
      syncPolicy: {
        automated: {},
        syncOptions: ["CreateNamespace=true"],
      },
    },
  });
}

Ingress and HTTPS with Tailscale

Direct connections and local networks

Persistent volumes




Helm, Kustomize, and operators


This does require a small amount of bootstrapping, which I describe in my repository README. Whenever I set up a new cluster/node, I need to:

  • Install K3s: curl -sfL | sh -
  • Install ArgoCD: kubectl apply -n argocd -f
  • Create a secret to access my 1Password vaults
  • Deploy the manifests in this repo: kubectl apply -f cdk8s/dist/apps.k8s.yaml

That’s it!

Keeping things up-to-date