Our Infrastructure: How We Self-Host at SAI Technology
Infrastructure diagram showing our self-hosted setup

At SAI Technology, we believe in building systems that are not only powerful and reliable but also sovereign and cost-effective. That's why we've chosen to run the majority of our infrastructure in-house using a self-hosted model. Here's a behind-the-scenes look at how our in-house servers power everything from production apps to media services, all optimized for performance, control, and scalability.

The Core of Our Setup: In-House Cluster + Cloud VPS

Our infrastructure is centered around a custom-built in-house cluster and two Cloud VPSs. The cluster consists of a Geekom A8 running Proxmox as our hypervisor, two Raspberry Pi 5s, and one Raspberry Pi 4. Each node in the cluster serves a unique purpose and runs lightweight, containerized services using Docker and Docker Compose.

Our Cluster at a Glance

  • Nodes: We operate an 8-node cluster, 3 physical and 5 virtual, all running 64-bit Linux.

  • Storage: A dedicated network-attached storage (NAS) device is our primary data store, holding everything from app data and media to backups.

  • Backups: Regular snapshots are taken and stored on the NAS; a simplified version of this job is sketched below.
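
The backup job itself is nothing exotic. As a rough sketch (the paths and retention count below are placeholders, not our production values), a nightly run looks something like this:

```python
#!/usr/bin/env python3
"""Nightly backup sketch: archive app data to the NAS and prune old copies.

The paths and retention count are illustrative placeholders, not our
production values. Schedule it with cron or a systemd timer.
"""
import subprocess
import time
from pathlib import Path

DATA_DIR = Path("/srv/appdata")          # hypothetical data directory
NAS_BACKUPS = Path("/mnt/nas/backups")   # hypothetical NAS mount point
KEEP = 14                                # number of archives to retain


def snapshot() -> Path:
    """Create a timestamped tar.gz of the data directory on the NAS."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    target = NAS_BACKUPS / f"appdata-{stamp}.tar.gz"
    subprocess.run(
        ["tar", "-czf", str(target), "-C", str(DATA_DIR.parent), DATA_DIR.name],
        check=True,
    )
    return target


def prune() -> None:
    """Delete the oldest archives beyond the retention window."""
    archives = sorted(NAS_BACKUPS.glob("appdata-*.tar.gz"))
    for old in archives[:-KEEP]:
        old.unlink()


if __name__ == "__main__":
    snapshot()
    prune()
```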

Dockerized Services & Application Hosting

We run a variety of services in Docker containers, previously managed with Docker Compose and now deployed through Coolify:

Media & Entertainment

  • Prowlarr, Radarr, Sonarr, Transmission
  • Plex Media Server

Content Management

DevOps & Monitoring

Business Tools

  • Chatwoot customer support
  • n8n automation platform
  • Custom APIs and LangChain-powered AI agents
  • Client portals and admin tools

Public and Local Access: Network Separation

Public Access

Services like our custom APIs, client APIs, LangChain agents, and client CMS dashboards use Cloudflare tunnels for secure, password-protected external access.
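
Cloudflare does the heavy lifting here, but we like being able to confirm from outside the network that each tunneled endpoint is answering and still demanding authentication. A minimal probe along those lines (the hostnames are placeholders, not our real URLs):

```python
#!/usr/bin/env python3
"""Sketch: verify tunneled endpoints respond and still require auth.

The hostnames below are placeholders, not our real public URLs.
"""
import urllib.error
import urllib.request

PUBLIC_ENDPOINTS = [
    "https://api.example.com/health",
    "https://cms.example.com/login",
]


def check(url: str) -> str:
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return f"{url}: reachable (HTTP {resp.status})"
    except urllib.error.HTTPError as err:
        # A 401/403 means the tunnel is up and access control is doing its job.
        if err.code in (401, 403):
            return f"{url}: reachable, auth required (HTTP {err.code})"
        return f"{url}: unexpected HTTP {err.code}"
    except urllib.error.URLError as err:
        return f"{url}: unreachable ({err.reason})"


if __name__ == "__main__":
    for endpoint in PUBLIC_ENDPOINTS:
        print(check(endpoint))
```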

Local Access

Media services like Plex and Home Assistant are only available on our local network for privacy and performance.

Monitoring & Observability

We use Checkmate to monitor system performance across all nodes, including CPU, memory, and disk I/O. This ensures we can act quickly if a container crashes or if resources are maxing out. Checkmate also acts as an uptime monitor, so if anything we host goes down, we're notified immediately.
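
Checkmate covers this out of the box, but conceptually a per-node resource check boils down to something like the sketch below (the thresholds are illustrative, and it relies on the third-party psutil package):

```python
#!/usr/bin/env python3
"""Conceptual per-node resource check (Checkmate does this for us in practice).

Thresholds are illustrative; requires the third-party `psutil` package.
"""
import psutil

THRESHOLDS = {"cpu": 90.0, "memory": 90.0, "disk": 85.0}  # percent


def collect() -> dict:
    """Sample current CPU, memory, and root-disk utilisation."""
    return {
        "cpu": psutil.cpu_percent(interval=1),
        "memory": psutil.virtual_memory().percent,
        "disk": psutil.disk_usage("/").percent,
    }


def alerts(metrics: dict) -> list[str]:
    """Return a message for every metric that crosses its threshold."""
    return [
        f"{name} at {value:.0f}% (limit {THRESHOLDS[name]:.0f}%)"
        for name, value in metrics.items()
        if value >= THRESHOLDS[name]
    ]


if __name__ == "__main__":
    problems = alerts(collect())
    if problems:
        print("ALERT:", "; ".join(problems))
    else:
        print("All resources within limits.")
```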

Resilience & Automation

We've built automation and redundancy into every layer of our infrastructure to ensure uptime, stability, and peace of mind:

Mounting & Reconnect Logic

If Samba or NFS shares go down (e.g., due to a network hiccup or power event), our scripts automatically detect the issue and remount them. This keeps critical services and backups running without manual intervention.
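
In spirit, that watchdog is very simple. A trimmed-down sketch, with a placeholder share and mount point, run as a small always-on service:

```python
#!/usr/bin/env python3
"""Watchdog sketch: remount an NFS share if it drops.

The share and mount point are placeholders for our real ones; run this as a
small always-on service (e.g., under systemd). Mounting requires root or a
matching fstab entry.
"""
import os
import subprocess
import time

SHARE = "nas.local:/export/data"   # hypothetical NFS export
MOUNT_POINT = "/mnt/nas"


def is_mounted(path: str) -> bool:
    """True if something is mounted at `path`."""
    return os.path.ismount(path)


def remount() -> None:
    """Try to (re)mount the share."""
    subprocess.run(["mount", "-t", "nfs", SHARE, MOUNT_POINT], check=True)


if __name__ == "__main__":
    while True:
        if not is_mounted(MOUNT_POINT):
            print(f"{MOUNT_POINT} is not mounted, remounting {SHARE}...")
            try:
                remount()
            except subprocess.CalledProcessError as err:
                print(f"remount failed: {err}")
        time.sleep(60)  # check once a minute
```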

Startup Scripts & Health Checks

All Docker containers are configured with automatic restart policies and custom health checks. This means services restart on boot or failure and are continually monitored for liveness and readiness.
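
The restart policy is a one-line setting in Docker or Coolify; the health check is usually just a small script that the container's HEALTHCHECK instruction runs periodically. A minimal example, assuming a hypothetical /health endpoint on localhost:

```python
#!/usr/bin/env python3
"""Container health check sketch: exit 0 if the app answers, 1 otherwise.

Intended to be invoked by Docker's HEALTHCHECK instruction; the URL is a
placeholder for whatever the service actually exposes.
"""
import sys
import urllib.request

HEALTH_URL = "http://localhost:8080/health"


def main() -> int:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return 0 if resp.status == 200 else 1
    except Exception:
        return 1


if __name__ == "__main__":
    sys.exit(main())
```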

CI/CD Pipelines

Deployments are automated using Coolify, giving us push-to-deploy workflows for everything from our internal tools and Telegram bots to client-facing CMS and websites.
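
Exact endpoints vary by platform and version, so treat this as a sketch rather than Coolify's actual API: the last step of a pipeline generally just POSTs to the deploy webhook your platform hands you, roughly like this (the environment variable names are placeholders):

```python
#!/usr/bin/env python3
"""Sketch: trigger a deployment from a CI step via a deploy webhook.

DEPLOY_WEBHOOK_URL and DEPLOY_TOKEN are placeholders for whatever your
deployment platform (Coolify, in our case) provides; check its docs for
the exact endpoint and authentication scheme.
"""
import os
import urllib.request

WEBHOOK_URL = os.environ["DEPLOY_WEBHOOK_URL"]
TOKEN = os.environ["DEPLOY_TOKEN"]


def trigger_deploy() -> int:
    """POST to the deploy webhook and return the HTTP status code."""
    request = urllib.request.Request(
        WEBHOOK_URL,
        method="POST",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(request, timeout=30) as resp:
        return resp.status


if __name__ == "__main__":
    print(f"Deploy triggered, HTTP {trigger_deploy()}")
```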

ZFS Snapshots

While our main NAS uses ext4, we're actively integrating ZFS on additional nodes for snapshotting, deduplication, and bit-rot protection.
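
Once a dataset lives on ZFS, the snapshot job is a short scheduled script. A sketch with a hypothetical dataset name and retention count:

```python
#!/usr/bin/env python3
"""ZFS snapshot sketch: create a timestamped snapshot and prune old ones.

The dataset name and retention count are placeholders; run from cron on a
node with ZFS installed (needs the relevant zfs permissions or root).
"""
import subprocess
import time

DATASET = "tank/appdata"   # hypothetical ZFS dataset
KEEP = 30                  # snapshots to retain


def create_snapshot() -> str:
    """Take a snapshot named after the current timestamp."""
    name = f"{DATASET}@auto-{time.strftime('%Y%m%d-%H%M%S')}"
    subprocess.run(["zfs", "snapshot", name], check=True)
    return name


def prune_snapshots() -> None:
    """Destroy the oldest auto-snapshots beyond the retention window."""
    out = subprocess.run(
        ["zfs", "list", "-H", "-t", "snapshot", "-o", "name",
         "-s", "creation", "-r", DATASET],
        check=True, capture_output=True, text=True,
    ).stdout.split()
    ours = [s for s in out if s.startswith(f"{DATASET}@auto-")]
    for old in ours[:-KEEP]:
        subprocess.run(["zfs", "destroy", old], check=True)


if __name__ == "__main__":
    print("created", create_snapshot())
    prune_snapshots()
```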

Power Redundancy via UPS & Solar

Our cluster setup is backed by a reliable Uninterruptible Power Supply (UPS) to handle short outages, and solar panels provide clean, consistent power during the day.
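
If the UPS is exposed through NUT (Network UPS Tools), a small poller can trigger a graceful shutdown before the battery runs flat. A sketch, with a placeholder UPS name and threshold:

```python
#!/usr/bin/env python3
"""Sketch: poll the UPS via NUT's `upsc` and shut down gracefully on low battery.

Assumes the UPS is exposed through Network UPS Tools; the UPS name and
threshold are placeholders for our actual setup.
"""
import subprocess

UPS_NAME = "apc@localhost"     # hypothetical NUT UPS identifier
MIN_CHARGE = 20                # percent; shut down below this while on battery


def ups_status() -> dict:
    """Parse `upsc` output ("key: value" per line) into a dict."""
    out = subprocess.run(
        ["upsc", UPS_NAME], check=True, capture_output=True, text=True
    ).stdout
    return dict(line.split(": ", 1) for line in out.splitlines() if ": " in line)


if __name__ == "__main__":
    status = ups_status()
    on_battery = "OB" in status.get("ups.status", "")
    charge = float(status.get("battery.charge", "100"))
    if on_battery and charge < MIN_CHARGE:
        print("On battery and below threshold, shutting down...")
        subprocess.run(["shutdown", "-h", "+1"], check=True)
    else:
        print(f"UPS OK: status={status.get('ups.status')} charge={charge:.0f}%")
```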

Dual ISP Redundancy

We use both MTN and Starlink connections, with automatic failover configured via our router. This ensures continued access to public-facing services, even if one ISP goes down.
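
The failover itself happens on the router, so nothing on the servers has to change; still, it's handy to log which uplink traffic is currently leaving through. A rough sketch that simply records the public IP (the address prefixes are placeholders, not real ISP ranges):

```python
#!/usr/bin/env python3
"""Sketch: log which uplink traffic is currently leaving through.

The router does the actual failover; this just records the public IP so we
can see when a switch happened. The prefixes below are placeholders, not
our real ISP address ranges.
"""
import time
import urllib.request

ISP_PREFIXES = {           # hypothetical address prefixes per uplink
    "MTN": ("102.",),
    "Starlink": ("98.",),
}


def public_ip() -> str:
    """Ask an external service which IP our traffic appears from."""
    with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
        return resp.read().decode().strip()


def identify(ip: str) -> str:
    """Map the public IP onto a known uplink, if possible."""
    for isp, prefixes in ISP_PREFIXES.items():
        if ip.startswith(prefixes):
            return isp
    return "unknown"


if __name__ == "__main__":
    ip = public_ip()
    print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} public IP {ip} via {identify(ip)}")
```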

Why We Self-Host

Privacy & Control

Having no third-party dependencies means no lock-in and full control over our data.

Cost-Effectiveness

We avoid expensive cloud bills and make use of hardware we already own.

Performance

Local hosting reduces latency for internal tools and services.

What's Next

We're currently expanding our infrastructure with support for:

  • Load balancing between our Cloud VPSs and the in-house cluster
  • Distributed backups to remote storage

We treat our cluster as a production-grade platform because it is our production environment — powering everything from our internal bots to client-facing services.

Want to learn more or set up a similar stack?

Get in touch — we love talking infrastructure.

Contact Us
