<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Tom Seidel – Articles</title>
  <subtitle>Freelance Java consultant with 20+ years of experience in cloud-native, microservices, and DevOps.</subtitle>
  <link href="https://remus-software.org/feed.xml" rel="self" type="application/atom+xml"/>
  <link href="https://remus-software.org/" rel="alternate" type="text/html"/>
  <id>https://remus-software.org/</id>
  <author>
    <name>Tom Seidel</name>
    <email>tom.seidel@remus-software.org</email>
  </author>
  <updated>2026-04-04T00:00:00.000Z</updated>
  <entry>
    <title>Restic Explorer 1.0 — A Lightweight Monitoring Dashboard for Restic Backups</title>
    <link href="https://remus-software.org/articles/rest-explorer-1-0-released/" rel="alternate" type="text/html"/>
    <id>https://remus-software.org/articles/rest-explorer-1-0-released/</id>
    <published>2026-04-04T00:00:00.000Z</published>
    <updated>2026-04-04T00:00:00.000Z</updated>
    <summary>Restic Explorer 1.0 is out — a lightweight, self-hosted web dashboard that monitors all restic backup repositories across S3, Azure, SFTP, REST, and Rclone from a single UI with automated scans, integrity checks, and retention policy tracking.</summary>
    <content type="html">&amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Backups are only as good as the confidence that they actually work.&amp;lt;/strong&amp;gt; Restic Explorer 1.0 is now available — a focused, self-hosted web dashboard that provides exactly that confidence for all &amp;lt;a href=&amp;quot;https://restic.net/&amp;quot;&amp;gt;restic&amp;lt;/a&amp;gt; repositories in one place.&amp;lt;/p&amp;gt;
&amp;lt;p&amp;gt;&amp;lt;img src=&amp;quot;https://raw.githubusercontent.com/tmseidel/restic-explorer/main/docs/screenshot_dashboard.png&amp;quot; alt=&amp;quot;Restic Explorer Dashboard&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;h2 id=&amp;quot;the-problem&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#the-problem&amp;quot;&amp;gt;The Problem&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;Restic is an outstanding backup tool. Fast, encrypted, deduplicated — it has become the go-to choice for backing up servers, NAS devices, and cloud workloads. But restic is a CLI tool by design. When you run multiple repositories across different backends — S3 buckets, Azure Blob, SFTP servers — keeping track of &amp;lt;em&amp;gt;“is everything still running?”&amp;lt;/em&amp;gt; becomes a chore. It often means writing shell scripts, parsing JSON output, wiring up cron jobs, and hoping someone notices when something breaks.&amp;lt;/p&amp;gt;
&amp;lt;p&amp;gt;Existing monitoring solutions are excellent pieces of software, but they tend to bring far more complexity than many use cases require: agent-based architectures, extensive plugin systems, or dashboards designed for hundreds of repositories across large teams. Operators who simply need a single pane of glass that answers &amp;lt;strong&amp;gt;are the backups running, are they healthy, and do they meet retention requirements?&amp;lt;/strong&amp;gt; are better served by a lighter approach.&amp;lt;/p&amp;gt;
&amp;lt;h2 id=&amp;quot;the-solution&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#the-solution&amp;quot;&amp;gt;The Solution&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;Restic Explorer is that single pane of glass. It connects directly to restic repositories — wherever they live — and provides:&amp;lt;/p&amp;gt;
&amp;lt;ul&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Multi-Repository Dashboard&amp;lt;/strong&amp;gt; — status of all repos at a glance with color-coded badges (green/red/amber)&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Automated Scanning&amp;lt;/strong&amp;gt; — scheduled &amp;lt;code&amp;gt;restic snapshots&amp;lt;/code&amp;gt; calls cache metadata for fast browsing without CLI round-trips&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Integrity Checks&amp;lt;/strong&amp;gt; — scheduled &amp;lt;code&amp;gt;restic check --read-data&amp;lt;/code&amp;gt; runs with configurable intervals per repository&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Retention Policy Monitoring&amp;lt;/strong&amp;gt; — daily/weekly/monthly/yearly rules with soft warnings when snapshots fall short&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Health Endpoint&amp;lt;/strong&amp;gt; — &amp;lt;code&amp;gt;/actuator/health&amp;lt;/code&amp;gt; JSON endpoint reporting per-repo status, ready for Uptime Kuma, Prometheus, or any HTTP health checker&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Snapshot Browser&amp;lt;/strong&amp;gt; — paginated, sortable snapshot list with a dedicated detail page showing paths, tags, hostname, and size&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Lock Detection&amp;lt;/strong&amp;gt; — automatic stale lock detection with one-click unlock&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Encrypted Credentials&amp;lt;/strong&amp;gt; — AES-256-GCM encryption at rest for repository passwords and backend keys&amp;lt;/li&amp;gt;
&amp;lt;/ul&amp;gt;
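&amp;lt;p&amp;gt;As a quick illustration, the health endpoint can be polled with plain &amp;lt;code&amp;gt;curl&amp;lt;/code&amp;gt; — a sketch assuming the standard Spring Boot actuator response shape:&amp;lt;/p&amp;gt;
&amp;lt;pre&amp;gt;&amp;lt;code class=&amp;quot;language-bash&amp;quot;&amp;gt;# Query the aggregate health status (field names follow actuator conventions)
curl -s http://localhost:8080/actuator/health | jq &amp;#039;.status&amp;#039;
&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;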
&amp;lt;h3 id=&amp;quot;five-backends%2C-one-ui&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#five-backends%2C-one-ui&amp;quot;&amp;gt;Five Backends, One UI&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;table&amp;gt;
&amp;lt;thead&amp;gt;
&amp;lt;tr&amp;gt;
&amp;lt;th&amp;gt;Backend&amp;lt;/th&amp;gt;
&amp;lt;th&amp;gt;What it covers&amp;lt;/th&amp;gt;
&amp;lt;/tr&amp;gt;
&amp;lt;/thead&amp;gt;
&amp;lt;tbody&amp;gt;
&amp;lt;tr&amp;gt;
&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;S3 / S3-Compatible&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;AWS S3, MinIO, Wasabi, Backblaze B2 (S3 API)&amp;lt;/td&amp;gt;
&amp;lt;/tr&amp;gt;
&amp;lt;tr&amp;gt;
&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Azure Blob Storage&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;Native Azure integration&amp;lt;/td&amp;gt;
&amp;lt;/tr&amp;gt;
&amp;lt;tr&amp;gt;
&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;SFTP&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;Any SSH-accessible server, key-based auth&amp;lt;/td&amp;gt;
&amp;lt;/tr&amp;gt;
&amp;lt;tr&amp;gt;
&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;REST Server&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;Restic’s own REST backend with optional HTTP auth&amp;lt;/td&amp;gt;
&amp;lt;/tr&amp;gt;
&amp;lt;tr&amp;gt;
&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Rclone&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;Google Drive, Dropbox, OneDrive, B2, and 40+ more via rclone&amp;lt;/td&amp;gt;
&amp;lt;/tr&amp;gt;
&amp;lt;/tbody&amp;gt;
&amp;lt;/table&amp;gt;
&amp;lt;h2 id=&amp;quot;getting-started-in-60-seconds&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#getting-started-in-60-seconds&amp;quot;&amp;gt;Getting Started in 60 Seconds&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;The fastest way to get running is Docker Compose:&amp;lt;/p&amp;gt;
&amp;lt;pre&amp;gt;&amp;lt;code class=&amp;quot;language-yaml&amp;quot;&amp;gt;services:
  app:
    image: tmseidel/restic-explorer:latest
    ports:
      - &amp;amp;quot;8080:8080&amp;amp;quot;
    environment:
      SPRING_PROFILES_ACTIVE: docker
      DB_HOST: db
      DB_PORT: 5432
      DB_NAME: resticexplorer
      DB_USER: resticexplorer
      DB_PASSWORD: resticexplorer
    depends_on:
      db:
        condition: service_healthy
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: resticexplorer
      POSTGRES_USER: resticexplorer
      POSTGRES_PASSWORD: resticexplorer
    volumes:
      - db-data:/var/lib/postgresql/data
    healthcheck:
      test: [&amp;amp;quot;CMD-SHELL&amp;amp;quot;, &amp;amp;quot;pg_isready -U resticexplorer&amp;amp;quot;]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

volumes:
  db-data:
&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;
&amp;lt;pre&amp;gt;&amp;lt;code class=&amp;quot;language-bash&amp;quot;&amp;gt;docker compose up -d
&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;
&amp;lt;p&amp;gt;Open &amp;lt;code&amp;gt;http://localhost:8080&amp;lt;/code&amp;gt;, create the admin account, and start adding repositories. That’s it.&amp;lt;/p&amp;gt;
&amp;lt;p&amp;gt;The image ships with restic, rclone, and openssh-client pre-installed — no additional setup required for any backend type.&amp;lt;/p&amp;gt;
&amp;lt;h2 id=&amp;quot;why-restic-is-a-great-fit-for-cloud-%26-infrastructure-as-code&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#why-restic-is-a-great-fit-for-cloud-%26-infrastructure-as-code&amp;quot;&amp;gt;Why Restic is a Great Fit for Cloud &amp;amp;amp; Infrastructure-as-Code&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;For teams managing cloud infrastructure through Terraform, Ansible, Pulumi, or similar tools, restic fits naturally into the workflow:&amp;lt;/p&amp;gt;
&amp;lt;h3 id=&amp;quot;stateless-by-design&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#stateless-by-design&amp;quot;&amp;gt;Stateless by Design&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;Restic repositories are self-contained. There is no central server, no daemon, no database to maintain. A repository is just a structured set of encrypted blobs in any storage backend. This makes restic trivially reproducible — IaC can provision the storage bucket and the backup job in the same run.&amp;lt;/p&amp;gt;
&amp;lt;h3 id=&amp;quot;backend-agnostic&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#backend-agnostic&amp;quot;&amp;gt;Backend Agnostic&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;Moving from AWS to Azure? Migrating from on-prem to cloud? Restic’s backend abstraction means the backup strategy isn’t tied to a vendor. A Terraform module provisions an S3 bucket today; tomorrow it provisions Azure Blob Storage. The restic commands stay the same.&amp;lt;/p&amp;gt;
&amp;lt;h3 id=&amp;quot;encryption-without-infrastructure&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#encryption-without-infrastructure&amp;quot;&amp;gt;Encryption Without Infrastructure&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;Restic encrypts everything client-side. There is no need for a KMS, a Vault instance, or an HSM for backup encryption. One password, stored in the secrets manager of choice, and data is encrypted at rest regardless of the storage backend’s capabilities.&amp;lt;/p&amp;gt;
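&amp;lt;p&amp;gt;For example, the repository password can be pulled from a secrets manager at runtime via restic’s &amp;lt;code&amp;gt;RESTIC_PASSWORD_COMMAND&amp;lt;/code&amp;gt; environment variable — the &amp;lt;code&amp;gt;pass&amp;lt;/code&amp;gt; entry name below is a placeholder:&amp;lt;/p&amp;gt;
&amp;lt;pre&amp;gt;&amp;lt;code class=&amp;quot;language-bash&amp;quot;&amp;gt;# restic runs this command and uses its stdout as the repository password
export RESTIC_PASSWORD_COMMAND=&amp;amp;quot;pass show backup/myrepo&amp;amp;quot;
restic -r s3:s3.amazonaws.com/my-bucket snapshots
&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;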
&amp;lt;h3 id=&amp;quot;deduplication-saves-cloud-storage-costs&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#deduplication-saves-cloud-storage-costs&amp;quot;&amp;gt;Deduplication Saves Cloud Storage Costs&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;Restic’s content-defined chunking and deduplication means incremental backups are genuinely incremental — even across different source machines backing up to the same repository. In cloud environments where storage is metered, this translates directly to lower costs.&amp;lt;/p&amp;gt;
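&amp;lt;p&amp;gt;The actual savings are easy to inspect: &amp;lt;code&amp;gt;restic stats&amp;lt;/code&amp;gt; can report both the logical size of the data and the deduplicated size actually stored in the repository:&amp;lt;/p&amp;gt;
&amp;lt;pre&amp;gt;&amp;lt;code class=&amp;quot;language-bash&amp;quot;&amp;gt;restic stats                  # logical restore size of the snapshots
restic stats --mode raw-data  # deduplicated storage actually used
&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;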
&amp;lt;h3 id=&amp;quot;scriptable-and-composable&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#scriptable-and-composable&amp;quot;&amp;gt;Scriptable and Composable&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;Restic is a CLI tool that outputs JSON. It composes perfectly with cron, systemd timers, CI/CD pipelines, and container sidecars. No agents to install, no ports to open, no protocols to configure — just a binary and a repository URL.&amp;lt;/p&amp;gt;
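&amp;lt;p&amp;gt;A small example of that composability — listing snapshot times and hosts by piping the JSON output through &amp;lt;code&amp;gt;jq&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;
&amp;lt;pre&amp;gt;&amp;lt;code class=&amp;quot;language-bash&amp;quot;&amp;gt;restic snapshots --json | jq -r &amp;#039;.[] | &amp;amp;quot;\(.time)  \(.hostname)&amp;amp;quot;&amp;#039;
&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;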
&amp;lt;p&amp;gt;Restic Explorer adds the monitoring layer on top: existing restic workflows remain untouched, and Restic Explorer watches the repositories and surfaces issues when they need attention.&amp;lt;/p&amp;gt;
&amp;lt;h2 id=&amp;quot;what%E2%80%99s-in-1.0&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#what%E2%80%99s-in-1.0&amp;quot;&amp;gt;What’s in 1.0&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;This release marks the point where the feature set is stable, tested, and production-ready:&amp;lt;/p&amp;gt;
&amp;lt;ul&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Five backend types&amp;lt;/strong&amp;gt; — S3, Azure, SFTP, REST, Rclone&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Repository groups&amp;lt;/strong&amp;gt; — organize repos by team, environment, or purpose&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Configurable scan and check intervals&amp;lt;/strong&amp;gt; per repository&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Retention policy monitoring&amp;lt;/strong&amp;gt; with violation warnings&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Error log&amp;lt;/strong&amp;gt; with date filtering and auto-cleanup&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Dark mode&amp;lt;/strong&amp;gt; with automatic theme detection&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Health &amp;amp;amp; info endpoints&amp;lt;/strong&amp;gt; for external monitoring integration&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Admin-only download&amp;lt;/strong&amp;gt; of snapshots as &amp;lt;code&amp;gt;.tar&amp;lt;/code&amp;gt; archives&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Encrypted credential storage&amp;lt;/strong&amp;gt; (AES-256-GCM)&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Docker image&amp;lt;/strong&amp;gt; running as non-root user with built-in healthcheck&amp;lt;/li&amp;gt;
&amp;lt;/ul&amp;gt;
&amp;lt;table&amp;gt;
&amp;lt;thead&amp;gt;
&amp;lt;tr&amp;gt;
&amp;lt;th&amp;gt;Snapshots&amp;lt;/th&amp;gt;
&amp;lt;th&amp;gt;Snapshot Detail&amp;lt;/th&amp;gt;
&amp;lt;/tr&amp;gt;
&amp;lt;/thead&amp;gt;
&amp;lt;tbody&amp;gt;
&amp;lt;tr&amp;gt;
&amp;lt;td&amp;gt;&amp;lt;img src=&amp;quot;https://raw.githubusercontent.com/tmseidel/restic-explorer/main/docs/screenshot_snapshots.png&amp;quot; alt=&amp;quot;Snapshots&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;&amp;lt;img src=&amp;quot;https://raw.githubusercontent.com/tmseidel/restic-explorer/main/docs/screenshot_snapshot.png&amp;quot; alt=&amp;quot;Detail&amp;quot;&amp;gt;&amp;lt;/td&amp;gt;
&amp;lt;/tr&amp;gt;
&amp;lt;/tbody&amp;gt;
&amp;lt;/table&amp;gt;
&amp;lt;h2 id=&amp;quot;get-it&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#get-it&amp;quot;&amp;gt;Get It&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;ul&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Docker Hub&amp;lt;/strong&amp;gt;: &amp;lt;a href=&amp;quot;https://hub.docker.com/r/tmseidel/restic-explorer&amp;quot;&amp;gt;&amp;lt;code&amp;gt;tmseidel/restic-explorer:latest&amp;lt;/code&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;GitHub&amp;lt;/strong&amp;gt;: &amp;lt;a href=&amp;quot;https://github.com/tmseidel/restic-explorer&amp;quot;&amp;gt;tmseidel/restic-explorer&amp;lt;/a&amp;gt;&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Documentation&amp;lt;/strong&amp;gt;: &amp;lt;a href=&amp;quot;https://github.com/tmseidel/restic-explorer/blob/main/docs/USER_GUIDE.md&amp;quot;&amp;gt;User Guide&amp;lt;/a&amp;gt; · &amp;lt;a href=&amp;quot;https://github.com/tmseidel/restic-explorer/blob/main/docs/CONFIGURATION.md&amp;quot;&amp;gt;Configuration&amp;lt;/a&amp;gt; · &amp;lt;a href=&amp;quot;https://github.com/tmseidel/restic-explorer/blob/main/docs/ARCHITECTURE.md&amp;quot;&amp;gt;Architecture&amp;lt;/a&amp;gt;&amp;lt;/li&amp;gt;
&amp;lt;/ul&amp;gt;
&amp;lt;p&amp;gt;Licensed under MIT. Contributions, issues, and feedback welcome.&amp;lt;/p&amp;gt;
&amp;lt;hr&amp;gt;
&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;Restic Explorer is built with Spring Boot 4, Thymeleaf, and Bootstrap 5. It runs as a single container alongside PostgreSQL and requires no additional infrastructure beyond what is already in place.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
</content>
    <author>
      <name>Tom Seidel</name>
    </author>
    <category term="backup"/>
    <category term="Self-Hosting"/>
    <category term="news"/>
    <category term="Restic"/>
  </entry>
  <entry>
    <title>From Legacy to Lean: Rethinking Your Backup Strategy</title>
    <link href="https://remus-software.org/articles/replacing-veeam-with-restic/" rel="alternate" type="text/html"/>
    <id>https://remus-software.org/articles/replacing-veeam-with-restic/</id>
    <published>2026-03-29T00:00:00.000Z</published>
    <updated>2026-03-29T00:00:00.000Z</updated>
    <summary>How we replaced a costly, complex backup system with a simple shell script and S3 storage — and the key questions to ask before you do the same.</summary>
    <content type="html">&amp;lt;h1 id=&amp;quot;&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#&amp;quot;&amp;gt;&amp;lt;/a&amp;gt;&amp;lt;/h1&amp;gt;
&amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;We ditched our expensive, bloated backup platform for a shell script and S3. Here’s how — and what to think about before you do the same.&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;hr&amp;gt;
&amp;lt;h2 id=&amp;quot;the-problem-nobody-wants-to-touch&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#the-problem-nobody-wants-to-touch&amp;quot;&amp;gt;The Problem Nobody Wants to Touch&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;Let’s be honest: most backup systems are set up once and then nobody looks at them again. They just… run. Hopefully.&amp;lt;/p&amp;gt;
&amp;lt;p&amp;gt;We were in that exact spot. A centralized commercial backup server on Windows, proprietary agents on every machine, enterprise licenses, the whole deal. It worked — until it didn’t:&amp;lt;/p&amp;gt;
&amp;lt;ul&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;The config kept breaking.&amp;lt;/strong&amp;gt; More than once, the backup server’s internal state got corrupted. Trying to add a new backup job? Error dialog. Can’t configure anything until someone fixes it manually.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Way too much overhead.&amp;lt;/strong&amp;gt; Each server needed a proprietary agent, a service user, SSH access, firewall rules — all for what’s basically “copy some files somewhere safe.”&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;We used 5% of the features.&amp;lt;/strong&amp;gt; Bare-metal recovery? Granular restore? Application-aware snapshots? We never used any of that. Our servers are provisioned with automation — we can rebuild them from scratch. We just needed the &amp;lt;em&amp;gt;data&amp;lt;/em&amp;gt;.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;It cost real money.&amp;lt;/strong&amp;gt; A Windows Server with commercial licenses, just to store backups. For a team that runs Linux everywhere else, that’s an expensive oddball.&amp;lt;/li&amp;gt;
&amp;lt;/ul&amp;gt;
&amp;lt;hr&amp;gt;
&amp;lt;h2 id=&amp;quot;before-you-migrate%3A-ask-yourself-these-questions&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#before-you-migrate%3A-ask-yourself-these-questions&amp;quot;&amp;gt;Before You Migrate: Ask Yourself These Questions&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;Don’t jump to a new tool just because the old one annoys you. Think it through first:&amp;lt;/p&amp;gt;
&amp;lt;ul&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;What are you actually backing up?&amp;lt;/strong&amp;gt; If your servers can be rebuilt from code, you probably just need data-level backups (database dumps, config files), not full disk images.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Have you ever restored from backup?&amp;lt;/strong&amp;gt; If the answer is “uh, I think so?” — that’s your real problem, regardless of the tool.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;What’s the total cost?&amp;lt;/strong&amp;gt; Licenses + the server it runs on + agent maintenance + engineer time spent debugging weird issues.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Do you get alerts when a backup fails?&amp;lt;/strong&amp;gt; A backup that silently breaks is worse than no backup at all.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Is backup part of your provisioning?&amp;lt;/strong&amp;gt; If setting up backup for a new server is a separate manual process, it &amp;lt;em&amp;gt;will&amp;lt;/em&amp;gt; get skipped eventually.&amp;lt;/li&amp;gt;
&amp;lt;/ul&amp;gt;
&amp;lt;hr&amp;gt;
&amp;lt;h2 id=&amp;quot;what-we-switched-to&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#what-we-switched-to&amp;quot;&amp;gt;What We Switched To&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;We landed on &amp;lt;a href=&amp;quot;https://restic.net/&amp;quot;&amp;gt;Restic&amp;lt;/a&amp;gt; — open-source, encrypts everything, deduplicates, compresses, and stores to any S3-compatible backend. It’s in the default Debian repos. Install is literally &amp;lt;code&amp;gt;apt install restic&amp;lt;/code&amp;gt;.&amp;lt;/p&amp;gt;
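&amp;lt;p&amp;gt;Pointing restic at an S3 bucket takes four environment variables — the bucket name and endpoint below are placeholders:&amp;lt;/p&amp;gt;
&amp;lt;pre&amp;gt;&amp;lt;code class=&amp;quot;language-bash&amp;quot;&amp;gt;export AWS_ACCESS_KEY_ID=&amp;amp;quot;...&amp;amp;quot;
export AWS_SECRET_ACCESS_KEY=&amp;amp;quot;...&amp;amp;quot;
export RESTIC_REPOSITORY=&amp;amp;quot;s3:https://s3.example.com/my-backup-bucket&amp;amp;quot;
export RESTIC_PASSWORD=&amp;amp;quot;...&amp;amp;quot;
restic init   # once per repository
&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;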
&amp;lt;table&amp;gt;
&amp;lt;thead&amp;gt;
&amp;lt;tr&amp;gt;
&amp;lt;th&amp;gt;&amp;lt;/th&amp;gt;
&amp;lt;th&amp;gt;Old System&amp;lt;/th&amp;gt;
&amp;lt;th&amp;gt;Restic&amp;lt;/th&amp;gt;
&amp;lt;/tr&amp;gt;
&amp;lt;/thead&amp;gt;
&amp;lt;tbody&amp;gt;
&amp;lt;tr&amp;gt;
&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Install&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;Proprietary repo + agent + service user + firewall rules&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;&amp;lt;code&amp;gt;apt install restic&amp;lt;/code&amp;gt;&amp;lt;/td&amp;gt;
&amp;lt;/tr&amp;gt;
&amp;lt;tr&amp;gt;
&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Storage&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;Dedicated Windows backup server&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;Any S3-compatible object storage&amp;lt;/td&amp;gt;
&amp;lt;/tr&amp;gt;
&amp;lt;tr&amp;gt;
&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Config&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;GUI on backup server&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;Environment variables + shell script&amp;lt;/td&amp;gt;
&amp;lt;/tr&amp;gt;
&amp;lt;tr&amp;gt;
&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Licensing&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;Per-server commercial license&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;Free&amp;lt;/td&amp;gt;
&amp;lt;/tr&amp;gt;
&amp;lt;tr&amp;gt;
&amp;lt;td&amp;gt;&amp;lt;strong&amp;gt;Restore&amp;lt;/strong&amp;gt;&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;Through backup server UI&amp;lt;/td&amp;gt;
&amp;lt;td&amp;gt;&amp;lt;code&amp;gt;restic restore&amp;lt;/code&amp;gt; from anywhere&amp;lt;/td&amp;gt;
&amp;lt;/tr&amp;gt;
&amp;lt;/tbody&amp;gt;
&amp;lt;/table&amp;gt;
&amp;lt;p&amp;gt;When picking any replacement tool, look for: simple deployment, storage flexibility (don’t get locked in), full CLI scriptability, client-side encryption, active community, and built-in retention management.&amp;lt;/p&amp;gt;
&amp;lt;hr&amp;gt;
&amp;lt;h2 id=&amp;quot;the-architecture&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#the-architecture&amp;quot;&amp;gt;The Architecture&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;Here’s what we ended up with — three layers:&amp;lt;/p&amp;gt;
&amp;lt;div class=&amp;quot;mermaid&amp;quot;&amp;gt;graph TB
    subgraph servers[&amp;quot;Servers&amp;quot;]
        native[&amp;quot;&amp;lt;b&amp;gt;Native App&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;pg_dumpall → gzip&amp;lt;br/&amp;gt;→ restic backup&amp;quot;]
        docker[&amp;quot;&amp;lt;b&amp;gt;Docker App&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;docker exec → pg_dump&amp;lt;br/&amp;gt;→ gzip → restic backup&amp;quot;]
        legacy[&amp;quot;&amp;lt;b&amp;gt;Legacy App&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;mysqldump&amp;lt;br/&amp;gt;→ legacy agent&amp;quot;]
    end

    subgraph storage[&amp;quot;Storage Layer&amp;quot;]
        s3[&amp;quot;&amp;lt;b&amp;gt;S3-Compatible Object Store&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;One bucket per project&amp;quot;]
        legacysrv[&amp;quot;&amp;lt;b&amp;gt;Legacy Backup Server&amp;lt;/b&amp;gt;&amp;quot;]
    end

    subgraph monitoring[&amp;quot;Monitoring Layer&amp;quot;]
        explorer[&amp;quot;&amp;lt;b&amp;gt;Backup Explorer&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;Browse repos,&amp;lt;br/&amp;gt;check health&amp;quot;]
        heartbeat[&amp;quot;&amp;lt;b&amp;gt;Heartbeat Monitor&amp;lt;/b&amp;gt;&amp;lt;br/&amp;gt;Push-based alerts on&amp;lt;br/&amp;gt;success / failure&amp;quot;]
    end

    native -- &amp;quot;Restic + S3&amp;quot; --&amp;gt; s3
    docker -- &amp;quot;Restic + S3&amp;quot; --&amp;gt; s3
    legacy -- &amp;quot;Legacy Agent&amp;quot; --&amp;gt; legacysrv
    s3 --&amp;gt; explorer
    s3 --&amp;gt; heartbeat
&amp;lt;/div&amp;gt;&amp;lt;p&amp;gt;A few rules we learned the hard way:&amp;lt;/p&amp;gt;
&amp;lt;ul&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;One bucket per project.&amp;lt;/strong&amp;gt; Never mix backups from different apps in the same bucket. Isolation, access control, cost tracking — all easier this way.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Every backup is individual.&amp;lt;/strong&amp;gt; A Postgres DB needs &amp;lt;code&amp;gt;pg_dumpall&amp;lt;/code&amp;gt;. A Docker service needs &amp;lt;code&amp;gt;docker compose exec&amp;lt;/code&amp;gt;. A VPN server needs its config files. There’s no universal “back up everything” script. Write one per app.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Credentials go in a team vault.&amp;lt;/strong&amp;gt; If the person who set up the backup leaves, you don’t want the passwords leaving with them.&amp;lt;/li&amp;gt;
&amp;lt;/ul&amp;gt;
&amp;lt;hr&amp;gt;
&amp;lt;h2 id=&amp;quot;the-script-pattern&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#the-script-pattern&amp;quot;&amp;gt;The Script Pattern&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;After iterating across a bunch of projects, we settled on a template every backup job follows:&amp;lt;/p&amp;gt;
&amp;lt;pre&amp;gt;&amp;lt;code class=&amp;quot;language-bash&amp;quot;&amp;gt;#!/usr/bin/env bash
set -euo pipefail

source /opt/app/.restic-env   # provides repo credentials, DUMP_FILE, notify_monitor

# Error trap — always notify on failure
trap &amp;#039;notify_monitor &amp;amp;quot;down&amp;amp;quot; &amp;amp;quot;Backup failed&amp;amp;quot;; rm -f &amp;amp;quot;${DUMP_FILE}&amp;amp;quot;; exit 1&amp;#039; ERR

# Init repo if first run
restic snapshots &amp;amp;gt; /dev/null 2&amp;amp;gt;&amp;amp;amp;1 || restic init

# Create the dump (customize this per app)
pg_dumpall | gzip &amp;amp;gt; &amp;amp;quot;${DUMP_FILE}&amp;amp;quot;

# Don&amp;#039;t upload empty dumps
[[ -s &amp;amp;quot;${DUMP_FILE}&amp;amp;quot; ]] || { notify_monitor &amp;amp;quot;down&amp;amp;quot; &amp;amp;quot;Empty dump&amp;amp;quot;; exit 1; }

# Upload, clean up, prune old snapshots
restic backup &amp;amp;quot;${DUMP_FILE}&amp;amp;quot; --tag app-name
rm -f &amp;amp;quot;${DUMP_FILE}&amp;amp;quot;
restic forget --keep-daily 30 --keep-weekly 8 --keep-monthly 12 --prune

# All good
notify_monitor &amp;amp;quot;up&amp;amp;quot; &amp;amp;quot;OK&amp;amp;quot;
&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;
&amp;lt;p&amp;gt;The important bits: the &amp;lt;strong&amp;gt;error trap&amp;lt;/strong&amp;gt; makes sure you hear about failures. The &amp;lt;strong&amp;gt;empty-dump check&amp;lt;/strong&amp;gt; catches silent breakage (like a database dump that exits 0 but produces nothing). &amp;lt;strong&amp;gt;Retention runs on every backup&amp;lt;/strong&amp;gt;, not as a separate task. And &amp;lt;strong&amp;gt;tags&amp;lt;/strong&amp;gt; let you filter snapshots later.&amp;lt;/p&amp;gt;
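&amp;lt;p&amp;gt;One piece lives in the sourced env file: the &amp;lt;code&amp;gt;notify_monitor&amp;lt;/code&amp;gt; helper. A minimal sketch, assuming a push-based monitor URL in &amp;lt;code&amp;gt;MONITOR_URL&amp;lt;/code&amp;gt;:&amp;lt;/p&amp;gt;
&amp;lt;pre&amp;gt;&amp;lt;code class=&amp;quot;language-bash&amp;quot;&amp;gt;# in /opt/app/.restic-env (sketch)
notify_monitor() {
    local status=&amp;amp;quot;$1&amp;amp;quot; msg=&amp;amp;quot;$2&amp;amp;quot;
    # --max-time keeps a dead monitor from hanging the backup job;
    # || true so a failed ping never masks the backup&amp;#039;s own exit code
    curl -sf --max-time 10 &amp;amp;quot;${MONITOR_URL}?status=${status}&amp;amp;amp;msg=${msg}&amp;amp;quot; || true
}
&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;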
&amp;lt;p&amp;gt;With default retention (30 daily, 8 weekly, 12 monthly) you end up with about 44 snapshots at any given time — good granularity without blowing up storage.&amp;lt;/p&amp;gt;
&amp;lt;hr&amp;gt;
&amp;lt;h2 id=&amp;quot;monitoring%3A-don%E2%80%99t-skip-this&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#monitoring%3A-don%E2%80%99t-skip-this&amp;quot;&amp;gt;Monitoring: Don’t Skip This&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;Two layers — you need both:&amp;lt;/p&amp;gt;
&amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Heartbeat monitoring:&amp;lt;/strong&amp;gt; Every backup script pings a monitor on success or failure (we use &amp;lt;a href=&amp;quot;https://github.com/louislam/uptime-kuma&amp;quot;&amp;gt;Uptime Kuma&amp;lt;/a&amp;gt;, but anything push-based works). If no ping arrives within 26 hours → alert. This catches script failures, cron being broken, and servers being down.&amp;lt;/p&amp;gt;
&amp;lt;pre&amp;gt;&amp;lt;code class=&amp;quot;language-bash&amp;quot;&amp;gt;curl -sf &amp;amp;quot;${MONITOR_URL}?status=up&amp;amp;amp;msg=OK&amp;amp;quot;       # on success
curl -sf &amp;amp;quot;${MONITOR_URL}?status=down&amp;amp;amp;msg=Failed&amp;amp;quot;  # in error trap
&amp;lt;/code&amp;gt;&amp;lt;/pre&amp;gt;
&amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Repository browser:&amp;lt;/strong&amp;gt; A heartbeat tells you &amp;lt;em&amp;gt;if&amp;lt;/em&amp;gt; the backup ran. A browser tells you &amp;lt;em&amp;gt;what’s in it&amp;lt;/em&amp;gt; — snapshot counts, sizes, retention compliance, integrity checks. This catches things like backups that “succeed” but are suspiciously small.&amp;lt;/p&amp;gt;
&amp;lt;hr&amp;gt;
&amp;lt;h2 id=&amp;quot;how-to-actually-migrate&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#how-to-actually-migrate&amp;quot;&amp;gt;How to Actually Migrate&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;Don’t flip the switch overnight. We did it in phases:&amp;lt;/p&amp;gt;
&amp;lt;ol&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;New servers get the new tool from day one.&amp;lt;/strong&amp;gt; Zero risk, no migration needed.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Old servers run both systems in parallel.&amp;lt;/strong&amp;gt; Set up the new backup alongside the legacy one.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Test restores from the new backup.&amp;lt;/strong&amp;gt; Actually restore on a test environment. Verify the data.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Remove the legacy agent per server&amp;lt;/strong&amp;gt; after the new backup has been solid for a couple of months.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Kill the legacy server last&amp;lt;/strong&amp;gt; — only after every server is migrated and validated.&amp;lt;/li&amp;gt;
&amp;lt;/ol&amp;gt;
&amp;lt;p&amp;gt;Don’t rush step 4. Storage is cheap. Lost data is not.&amp;lt;/p&amp;gt;
&amp;lt;hr&amp;gt;
&amp;lt;h2 id=&amp;quot;tl%3Bdr&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#tl%3Bdr&amp;quot;&amp;gt;TL;DR&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;ul&amp;gt;
&amp;lt;li&amp;gt;If your servers are provisioned from code, you don’t need image-level backups. Just back up the data.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;Write a backup script per application — there is no one-size-fits-all.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;Monitor everything. Heartbeats for “did it run?”, a browser for “what’s in it?”&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;Bake backup into your provisioning. If it’s manual, it’ll get skipped.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;Test your restores. A backup you’ve never restored from is a hope, not a strategy.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;Migrate gradually. Parallel-run, validate, then decommission.&amp;lt;/li&amp;gt;
&amp;lt;/ul&amp;gt;
&amp;lt;p&amp;gt;A shell script, a cron job, encrypted uploads to S3, and a heartbeat ping. That’s the whole system. No servers, no GUI, no licenses.&amp;lt;/p&amp;gt;
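&amp;lt;p&amp;gt;As an illustrative sketch (the bucket, paths, and ping URL are placeholders), the whole per-server setup can be as small as one script driven by cron:&amp;lt;/p&amp;gt;

```
#!/bin/sh
# /usr/local/bin/backup.sh: illustrative sketch. Bucket, paths, and ping
# URL are placeholders; credentials (RESTIC_PASSWORD, AWS keys) come from
# the environment. Installed via cron, e.g.:
#   30 2 * * * root /usr/local/bin/backup.sh
if restic -r s3:s3.amazonaws.com/backup-bucket backup /var/www /etc; then
    # Ping the heartbeat endpoint only when the backup succeeded.
    curl -fsS https://heartbeat.example.org/ping/web01
fi
```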
&amp;lt;hr&amp;gt;
&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;The best backup system is the one your team actually understands, maintains, and tests.&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
</content>
    <author>
      <name>Tom Seidel</name>
    </author>
    <category term="backup"/>
    <category term="Self-Hosting"/>
    <category term="DevOps"/>
    <category term="Restic"/>
  </entry>
  <entry>
    <title>Evaluating Self-Hosted AI Services: A Translation Service Case Study</title>
    <link href="https://remus-software.org/articles/self-hosted-ai-translation-service/" rel="alternate" type="text/html"/>
    <id>https://remus-software.org/articles/self-hosted-ai-translation-service/</id>
    <published>2026-02-02T00:00:00.000Z</published>
    <updated>2026-02-02T00:00:00.000Z</updated>
    <summary>A practical evaluation of replacing DeepL with a self-hosted translation service using open-source LLMs — comparing quality, performance, and cost.</summary>
    <content type="html">&amp;lt;p&amp;gt;With freely available large language models now widely accessible, it has become straightforward to self-host software that was previously only available through commercial providers. The key question always comes down to the resulting costs and the effort involved.&amp;lt;/p&amp;gt;
&amp;lt;p&amp;gt;In this case study, I examined whether the translation service DeepL can be replaced by a self-hosted solution. The goal was to provide a DeepL-compatible REST API that:&amp;lt;/p&amp;gt;
&amp;lt;ul&amp;gt;
&amp;lt;li&amp;gt;achieves comparable translation quality,&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;offers similar performance, and&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;implements the same REST API specification&amp;lt;sup class=&amp;quot;footnote-ref&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#fn1&amp;quot; id=&amp;quot;fnref1&amp;quot;&amp;gt;[1]&amp;lt;/a&amp;gt;&amp;lt;/sup&amp;gt;,&amp;lt;/li&amp;gt;
&amp;lt;/ul&amp;gt;
&amp;lt;p&amp;gt;so that the one-time and ongoing costs of both approaches could then be compared. Using the DeepL API requires a paid subscription; while the pay-as-you-go model is transparent, it can become very expensive with heavy usage. Additionally, data leaves the corporate network, and the API’s behaviour under heavy load is not fully transparent.&amp;lt;/p&amp;gt;
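&amp;lt;p&amp;gt;For orientation, the wire format of DeepL’s &amp;lt;code&amp;gt;/v2/translate&amp;lt;/code&amp;gt; endpoint can be sketched as follows. This is a minimal Python sketch: the field names follow the public DeepL API documentation, while the helper function names are my own.&amp;lt;/p&amp;gt;

```python
import json

# Illustrative helpers mirroring the DeepL v2 wire format. The function
# names are hypothetical; the JSON field names follow the DeepL API docs.

def build_translate_request(texts, target_lang, source_lang=None):
    """Build the JSON body of a DeepL-style POST /v2/translate request."""
    body = {"text": list(texts), "target_lang": target_lang}
    if source_lang is not None:
        # Optional: the service may auto-detect the source language.
        body["source_lang"] = source_lang
    return body

def extract_translations(response_body):
    """Pull the translated strings out of a DeepL-style response."""
    return [t["text"] for t in response_body["translations"]]

request = build_translate_request(["Hallo Welt"], "EN", source_lang="DE")
print(json.dumps(request))
```

&amp;lt;p&amp;gt;A self-hosted replacement only needs to accept and produce these shapes; existing DeepL clients can then be pointed at it unchanged.&amp;lt;/p&amp;gt;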
&amp;lt;h2 id=&amp;quot;choosing-a-suitable-local-model&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#choosing-a-suitable-local-model&amp;quot;&amp;gt;Choosing a Suitable Local Model&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;The first question is which freely available models are suitable for translation tasks. Hugging Face offers a large selection of models that can be easily integrated into custom software&amp;lt;sup class=&amp;quot;footnote-ref&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#fn2&amp;quot; id=&amp;quot;fnref2&amp;quot;&amp;gt;[2]&amp;lt;/a&amp;gt;&amp;lt;/sup&amp;gt;. For this evaluation, Meta’s &amp;lt;strong&amp;gt;nllb-200-distilled&amp;lt;/strong&amp;gt; model was chosen, as it is widely used, easy to deploy, and available in three sizes (600M, 1.3B, and 3.3B parameters).&amp;lt;/p&amp;gt;
&amp;lt;h2 id=&amp;quot;implementing-the-deepl-compatible-rest-api&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#implementing-the-deepl-compatible-rest-api&amp;quot;&amp;gt;Implementing the DeepL-Compatible REST API&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;A pragmatic approach was taken for the implementation: a Spring Boot application serves as the API frontend and delegates the actual translation request to a Python Flask component that controls the LLM.&amp;lt;/p&amp;gt;
&amp;lt;p&amp;gt;For easy deployment, the system can be run either:&amp;lt;/p&amp;gt;
&amp;lt;ul&amp;gt;
&amp;lt;li&amp;gt;in Docker containers, or&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;natively on a Debian/Ubuntu server.&amp;lt;/li&amp;gt;
&amp;lt;/ul&amp;gt;
&amp;lt;p&amp;gt;The goal was a straightforward deployment on various cloud hardware platforms to test quality and performance there. The complete implementation is available on GitHub&amp;lt;sup class=&amp;quot;footnote-ref&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#fn3&amp;quot; id=&amp;quot;fnref3&amp;quot;&amp;gt;[3]&amp;lt;/a&amp;gt;&amp;lt;/sup&amp;gt;. Ansible was used for automated native deployment.&amp;lt;/p&amp;gt;
&amp;lt;h2 id=&amp;quot;test-%E2%80%94-translation-quality&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#test-%E2%80%94-translation-quality&amp;quot;&amp;gt;Test — Translation Quality&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;The following German reference sentence was used to evaluate translation quality:&amp;lt;/p&amp;gt;
&amp;lt;blockquote&amp;gt;
&amp;lt;p&amp;gt;&amp;lt;em&amp;gt;“Sobald der Glasfaser-Ausbau abgeschlossen ist, erhalten Sie eine Mitteilung zum Schaltungstermin und eine Schnell-Start-Anleitung für die Einrichtung des Glasfaser-Anschlusses.”&amp;lt;/em&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;/blockquote&amp;gt;
&amp;lt;p&amp;gt;DeepL produces the following translation:&amp;lt;/p&amp;gt;
&amp;lt;blockquote&amp;gt;
&amp;lt;p&amp;gt;“Once the fiber optic expansion is complete, you will receive a notification of the activation date and a quick start guide for setting up your fiber optic connection.”&amp;lt;/p&amp;gt;
&amp;lt;/blockquote&amp;gt;
&amp;lt;p&amp;gt;This translation serves as the reference.&amp;lt;/p&amp;gt;
&amp;lt;h3 id=&amp;quot;test-with-nllb-200-distilled-600m&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#test-with-nllb-200-distilled-600m&amp;quot;&amp;gt;Test with nllb-200-distilled-600M&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;The smallest model was first run on a development machine via Docker. Performance was not a concern at this stage. The generated translation was:&amp;lt;/p&amp;gt;
&amp;lt;blockquote&amp;gt;
&amp;lt;p&amp;gt;“Once the glass-faser-Ausbau is closed, you receive a Mitteilung zum Schaltungstermin und eine Schnell-Start-Anleitung für die Einrichtung der Glasfaser-Anschlusses.”&amp;lt;/p&amp;gt;
&amp;lt;/blockquote&amp;gt;
&amp;lt;p&amp;gt;&amp;lt;img src=&amp;quot;nllb-200-distilled-600M.png&amp;quot; alt=&amp;quot;Response of the small model&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;h3 id=&amp;quot;test-with-nllb-200-distilled-1.3b&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#test-with-nllb-200-distilled-1.3b&amp;quot;&amp;gt;Test with nllb-200-distilled-1.3B&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;The medium model produced the following output:&amp;lt;/p&amp;gt;
&amp;lt;blockquote&amp;gt;
&amp;lt;p&amp;gt;“The Commission shall inform the Member States of the date of the entry into force of this Regulation.”&amp;lt;/p&amp;gt;
&amp;lt;/blockquote&amp;gt;
&amp;lt;h3 id=&amp;quot;test-with-nllb-200-distilled-3.3b&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#test-with-nllb-200-distilled-3.3b&amp;quot;&amp;gt;Test with nllb-200-distilled-3.3B&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;The largest model generated the following translation:&amp;lt;/p&amp;gt;
&amp;lt;blockquote&amp;gt;
&amp;lt;p&amp;gt;“Once the glass fibre installation is completed, you will receive a notice on the date of installation and a quick start guide for the installation of the glass fibre connections.”&amp;lt;/p&amp;gt;
&amp;lt;/blockquote&amp;gt;
&amp;lt;p&amp;gt;&amp;lt;img src=&amp;quot;nllb-200-distilled-3.3B.png&amp;quot; alt=&amp;quot;Response of the large model&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;h3 id=&amp;quot;translation-quality-conclusion&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#translation-quality-conclusion&amp;quot;&amp;gt;Translation Quality Conclusion&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;A comprehensive assessment is difficult after just a few tests. Nevertheless, it became clear that only the largest model is viable for production use. It was also notable that the models performed significantly more reliably when the source language was English. If translation is exclusively from English, the medium model might therefore be sufficient.&amp;lt;/p&amp;gt;
&amp;lt;h2 id=&amp;quot;test-%E2%80%94-performance&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#test-%E2%80%94-performance&amp;quot;&amp;gt;Test — Performance&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;Once the suitable model was identified, the next step was to determine under which hardware conditions productive operation is feasible. As a benchmark, it was assumed that translating the reference sentence should take no longer than two seconds. Additionally, the difference between a traditional CPU-based server and a GPU-based system was to be determined.&amp;lt;/p&amp;gt;
&amp;lt;h3 id=&amp;quot;test%3A-traditional-server&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#test%3A-traditional-server&amp;quot;&amp;gt;Test: Traditional Server&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;A Hetzner CX53 with 16 vCPUs and 32 GB RAM was used as the CPU server (cost: €17 per month).&amp;lt;/p&amp;gt;
&amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Response time: 12.93 seconds&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;h3 id=&amp;quot;test%3A-gpu-server&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#test%3A-gpu-server&amp;quot;&amp;gt;Test: GPU Server&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;An Amazon EC2 g4dn.xlarge with an NVIDIA GPU and 16 GB of GPU memory was used as the GPU server. At €0.67 per hour, this comes to roughly €500 per month.&amp;lt;/p&amp;gt;
&amp;lt;p&amp;gt;&amp;lt;strong&amp;gt;Response time: 1.31 seconds&amp;lt;/strong&amp;gt; — GPU memory usage: approx. 13 GB&amp;lt;/p&amp;gt;
&amp;lt;h3 id=&amp;quot;performance-conclusion&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#performance-conclusion&amp;quot;&amp;gt;Performance Conclusion&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;The difference between the two systems was significantly larger than expected. Even without deep knowledge of the internal workings of LLMs, it is clear that productive operation is practically only feasible with GPU-based hardware. Costs on AWS are currently high, but cheaper alternatives exist — for example at Hetzner&amp;lt;sup class=&amp;quot;footnote-ref&amp;quot;&amp;gt;&amp;lt;a href=&amp;quot;#fn4&amp;quot; id=&amp;quot;fnref4&amp;quot;&amp;gt;[4]&amp;lt;/a&amp;gt;&amp;lt;/sup&amp;gt;. The achieved response time is fundamentally suitable for production use. Parallel requests had no significant impact on latency in the tests.&amp;lt;/p&amp;gt;
&amp;lt;p&amp;gt;&amp;lt;img src=&amp;quot;nvidia-smi.png&amp;quot; alt=&amp;quot;nvidia-smi output showing GPU memory usage&amp;quot;&amp;gt;&amp;lt;/p&amp;gt;
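&amp;lt;p&amp;gt;The figures above allow a simple back-of-the-envelope comparison (illustrative arithmetic only, using the prices and latencies as quoted, and assuming a fully utilised server as an upper bound on throughput):&amp;lt;/p&amp;gt;

```python
# Back-of-the-envelope cost comparison using the figures quoted above.
GPU_EUR_PER_HOUR = 0.67       # Amazon g4dn instance, as measured
CPU_EUR_PER_MONTH = 17.0      # Hetzner CPU server
HOURS_PER_MONTH = 24 * 30

gpu_eur_per_month = GPU_EUR_PER_HOUR * HOURS_PER_MONTH   # roughly 482

# Translations per month at the measured latencies, assuming the server
# is kept fully busy (an upper bound on throughput).
gpu_translations = HOURS_PER_MONTH * 3600 / 1.31
cpu_translations = HOURS_PER_MONTH * 3600 / 12.93

gpu_cost_per_1k = 1000 * gpu_eur_per_month / gpu_translations
cpu_cost_per_1k = 1000 * CPU_EUR_PER_MONTH / cpu_translations

print(f"GPU: {gpu_eur_per_month:.0f} EUR/month, {gpu_cost_per_1k:.3f} EUR per 1k sentences")
print(f"CPU: {CPU_EUR_PER_MONTH:.0f} EUR/month, {cpu_cost_per_1k:.3f} EUR per 1k sentences")
```

&amp;lt;p&amp;gt;Note that the CPU server is cheaper per translation at full utilisation; the GPU’s premium buys the latency that makes interactive use possible at all.&amp;lt;/p&amp;gt;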
&amp;lt;h2 id=&amp;quot;overall-conclusion&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#overall-conclusion&amp;quot;&amp;gt;Overall Conclusion&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;This evaluation clearly demonstrates that it is possible to self-host AI-based services like machine translation using freely available models and modern hardware — with reasonable effort and competitive quality. While the ongoing costs for GPU-based systems are still relatively high, falling prices and increasing efficiency can be expected as adoption grows and technology advances. Moreover, more affordable hosting alternatives beyond the major cloud providers already exist today.&amp;lt;/p&amp;gt;
&amp;lt;p&amp;gt;Especially in heavily regulated industries — such as finance, healthcare, or the public sector — a self-hosted AI service can offer significant advantages:&amp;lt;/p&amp;gt;
&amp;lt;ul&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Data sovereignty&amp;lt;/strong&amp;gt; is fully preserved, as no sensitive information is handed to external systems.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Compliance requirements&amp;lt;/strong&amp;gt; are easier to meet, since infrastructure and data flows are fully controllable.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Performance and scalability&amp;lt;/strong&amp;gt; can be precisely tailored to your own needs.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Competitive advantages&amp;lt;/strong&amp;gt; emerge when you can offer services that are not only cheaper but also more secure and flexible than commercial alternatives.&amp;lt;/li&amp;gt;
&amp;lt;/ul&amp;gt;
&amp;lt;h2 id=&amp;quot;references&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#references&amp;quot;&amp;gt;References&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;hr class=&amp;quot;footnotes-sep&amp;quot;&amp;gt;
&amp;lt;section class=&amp;quot;footnotes&amp;quot;&amp;gt;
&amp;lt;ol class=&amp;quot;footnotes-list&amp;quot;&amp;gt;
&amp;lt;li id=&amp;quot;fn1&amp;quot; class=&amp;quot;footnote-item&amp;quot;&amp;gt;&amp;lt;p&amp;gt;&amp;lt;a href=&amp;quot;https://developers.deepl.com/docs/getting-started/intro&amp;quot;&amp;gt;DeepL API Documentation&amp;lt;/a&amp;gt; &amp;lt;a href=&amp;quot;#fnref1&amp;quot; class=&amp;quot;footnote-backref&amp;quot;&amp;gt;↩︎&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;/li&amp;gt;
&amp;lt;li id=&amp;quot;fn2&amp;quot; class=&amp;quot;footnote-item&amp;quot;&amp;gt;&amp;lt;p&amp;gt;&amp;lt;a href=&amp;quot;https://huggingface.co/models?pipeline_tag=translation&amp;quot;&amp;gt;Hugging Face Translation Models&amp;lt;/a&amp;gt; &amp;lt;a href=&amp;quot;#fnref2&amp;quot; class=&amp;quot;footnote-backref&amp;quot;&amp;gt;↩︎&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;/li&amp;gt;
&amp;lt;li id=&amp;quot;fn3&amp;quot; class=&amp;quot;footnote-item&amp;quot;&amp;gt;&amp;lt;p&amp;gt;&amp;lt;a href=&amp;quot;https://github.com/tmseidel/simple_ai_translation_service&amp;quot;&amp;gt;simple_ai_translation_service on GitHub&amp;lt;/a&amp;gt; &amp;lt;a href=&amp;quot;#fnref3&amp;quot; class=&amp;quot;footnote-backref&amp;quot;&amp;gt;↩︎&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;/li&amp;gt;
&amp;lt;li id=&amp;quot;fn4&amp;quot; class=&amp;quot;footnote-item&amp;quot;&amp;gt;&amp;lt;p&amp;gt;&amp;lt;a href=&amp;quot;https://www.hetzner.com/dedicated-rootserver/matrix-gpu/&amp;quot;&amp;gt;Hetzner GPU Dedicated Servers&amp;lt;/a&amp;gt; &amp;lt;a href=&amp;quot;#fnref4&amp;quot; class=&amp;quot;footnote-backref&amp;quot;&amp;gt;↩︎&amp;lt;/a&amp;gt;&amp;lt;/p&amp;gt;
&amp;lt;/li&amp;gt;
&amp;lt;/ol&amp;gt;
&amp;lt;/section&amp;gt;
</content>
    <author>
      <name>Tom Seidel</name>
    </author>
    <category term="AI"/>
    <category term="LLM"/>
    <category term="Self-Hosting"/>
    <category term="DevOps"/>
    <category term="Spring Boot"/>
    <category term="Python"/>
  </entry>
  <entry>
    <title>Migrating a Monolith to Microservices: A Practical Guide</title>
    <link href="https://remus-software.org/articles/monolith-to-microservices/" rel="alternate" type="text/html"/>
    <id>https://remus-software.org/articles/monolith-to-microservices/</id>
    <published>2024-03-15T00:00:00.000Z</published>
    <updated>2024-03-15T00:00:00.000Z</updated>
    <summary>A hands-on walkthrough of the architectural decisions and patterns I use when migrating Java monoliths to cloud-native microservices.</summary>
    <content type="html">&amp;lt;p&amp;gt;Migrating a monolithic Java application to microservices is one of the most impactful — and challenging — transformations you can undertake. This article shares the practical approach I’ve refined over multiple engagements.&amp;lt;/p&amp;gt;
&amp;lt;h2 id=&amp;quot;why-migrate%3F&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#why-migrate%3F&amp;quot;&amp;gt;Why Migrate?&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;Before touching a single line of code, ask: &amp;lt;em&amp;gt;why are we doing this?&amp;lt;/em&amp;gt; The most common drivers I encounter are:&amp;lt;/p&amp;gt;
&amp;lt;ul&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Deployment bottlenecks&amp;lt;/strong&amp;gt;: A single deployable artifact blocks independent team delivery.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Scalability constraints&amp;lt;/strong&amp;gt;: You need to scale a specific module, not the entire application.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Technology modernisation&amp;lt;/strong&amp;gt;: Teams want to adopt newer frameworks or languages for specific domains.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Organisational growth&amp;lt;/strong&amp;gt;: Conway’s Law — architecture tends to mirror team structure.&amp;lt;/li&amp;gt;
&amp;lt;/ul&amp;gt;
&amp;lt;blockquote&amp;gt;
&amp;lt;p&amp;gt;“Never migrate for migration’s sake. Identify the concrete pain point and validate that microservices solve it.”&amp;lt;/p&amp;gt;
&amp;lt;/blockquote&amp;gt;
&amp;lt;h2 id=&amp;quot;the-strangler-fig-pattern&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#the-strangler-fig-pattern&amp;quot;&amp;gt;The Strangler Fig Pattern&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;My go-to approach is the &amp;lt;strong&amp;gt;Strangler Fig Pattern&amp;lt;/strong&amp;gt;: incrementally replace monolith functionality behind a facade, leaving the monolith running until it’s fully strangled.&amp;lt;/p&amp;gt;
&amp;lt;div class=&amp;quot;mermaid&amp;quot;&amp;gt;graph LR
    Client --&amp;gt;|All traffic| Facade[API Gateway / Facade]
    Facade --&amp;gt;|Legacy routes| Monolith[(Monolith)]
    Facade --&amp;gt;|New routes| SvcA[User Service]
    Facade --&amp;gt;|New routes| SvcB[Order Service]
    Monolith -.-&amp;gt;|Shared DB - phase 1| DB[(Database)]
    SvcA --&amp;gt;|Own DB - phase 2| DBA[(Users DB)]
    SvcB --&amp;gt;|Own DB - phase 2| DBB[(Orders DB)]
&amp;lt;/div&amp;gt;&amp;lt;p&amp;gt;This lets you:&amp;lt;/p&amp;gt;
&amp;lt;ol&amp;gt;
&amp;lt;li&amp;gt;Ship value incrementally&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;Reduce risk by keeping the fallback running&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;Validate each new service before extracting the next&amp;lt;/li&amp;gt;
&amp;lt;/ol&amp;gt;
&amp;lt;h2 id=&amp;quot;identifying-service-boundaries&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#identifying-service-boundaries&amp;quot;&amp;gt;Identifying Service Boundaries&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;p&amp;gt;Domain-Driven Design (DDD) gives us the best tools for finding service boundaries. I use &amp;lt;strong&amp;gt;Event Storming&amp;lt;/strong&amp;gt; workshops to:&amp;lt;/p&amp;gt;
&amp;lt;ol&amp;gt;
&amp;lt;li&amp;gt;Map all domain events with the business team&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;Identify &amp;lt;strong&amp;gt;bounded contexts&amp;lt;/strong&amp;gt; — areas with consistent language and ownership&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;Use bounded contexts as candidate service boundaries&amp;lt;/li&amp;gt;
&amp;lt;/ol&amp;gt;
&amp;lt;div class=&amp;quot;mermaid&amp;quot;&amp;gt;graph TD
    subgraph &amp;quot;Order Context&amp;quot;
        OE1[OrderPlaced]
        OE2[OrderConfirmed]
        OE3[OrderShipped]
    end
    subgraph &amp;quot;Inventory Context&amp;quot;
        IE1[StockReserved]
        IE2[StockReleased]
    end
    subgraph &amp;quot;Notification Context&amp;quot;
        NE1[EmailSent]
        NE2[SMSSent]
    end
    OE2 --&amp;gt; IE1
    OE3 --&amp;gt; NE1
    IE2 --&amp;gt; NE2
&amp;lt;/div&amp;gt;&amp;lt;h2 id=&amp;quot;practical-steps&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#practical-steps&amp;quot;&amp;gt;Practical Steps&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;h3 id=&amp;quot;1.-start-with-the-api-layer&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#1.-start-with-the-api-layer&amp;quot;&amp;gt;1. Start with the API Layer&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;Deploy an &amp;lt;strong&amp;gt;API Gateway&amp;lt;/strong&amp;gt; (AWS API Gateway, Kong, or a simple Spring Cloud Gateway) in front of the monolith. This gives you:&amp;lt;/p&amp;gt;
&amp;lt;ul&amp;gt;
&amp;lt;li&amp;gt;A single entry point for traffic&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;The ability to route selectively to new services&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;A foundation for cross-cutting concerns (auth, rate limiting, logging)&amp;lt;/li&amp;gt;
&amp;lt;/ul&amp;gt;
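&amp;lt;p&amp;gt;To make the facade concrete: with Spring Cloud Gateway, selective routing can be expressed declaratively. An illustrative &amp;lt;code&amp;gt;application.yml&amp;lt;/code&amp;gt; fragment (service names and paths are placeholders):&amp;lt;/p&amp;gt;

```
spring:
  cloud:
    gateway:
      routes:
        # Extracted functionality goes to the new service ...
        - id: user-service
          uri: http://user-service:8080
          predicates:
            - Path=/api/users/**
        # ... everything else still hits the monolith. The catch-all
        # route is declared last so the specific route wins.
        - id: monolith-fallback
          uri: http://monolith:8080
          predicates:
            - Path=/**
```

&amp;lt;p&amp;gt;Extracting the next service then becomes a pure routing change: add a route, no client is touched.&amp;lt;/p&amp;gt;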
&amp;lt;h3 id=&amp;quot;2.-extract-stateless-services-first&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#2.-extract-stateless-services-first&amp;quot;&amp;gt;2. Extract Stateless Services First&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;Pick a bounded context that:&amp;lt;/p&amp;gt;
&amp;lt;ul&amp;gt;
&amp;lt;li&amp;gt;Has clear, stable APIs&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;Is relatively self-contained&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;Has low coupling to the rest of the monolith&amp;lt;/li&amp;gt;
&amp;lt;/ul&amp;gt;
&amp;lt;p&amp;gt;Notification services, reporting modules, and authentication are often good first targets.&amp;lt;/p&amp;gt;
&amp;lt;h3 id=&amp;quot;3.-database-decomposition&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#3.-database-decomposition&amp;quot;&amp;gt;3. Database Decomposition&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;The hardest part. Never share a database between the monolith and a new service in the long run. The interim approach:&amp;lt;/p&amp;gt;
&amp;lt;div class=&amp;quot;mermaid&amp;quot;&amp;gt;sequenceDiagram
    participant New Service
    participant Monolith
    participant Shared DB
    participant New DB

    Note over New Service, Shared DB: Phase 1 – Dual Write
    New Service-&amp;gt;&amp;gt;Shared DB: Write (compatibility)
    New Service-&amp;gt;&amp;gt;New DB: Write (new schema)
    Monolith-&amp;gt;&amp;gt;Shared DB: Read/Write

    Note over New Service, New DB: Phase 2 – Cutover
    New Service-&amp;gt;&amp;gt;New DB: Write only
    Monolith-&amp;gt;&amp;gt;Shared DB: Read/Write (deprecated path)
&amp;lt;/div&amp;gt;&amp;lt;h3 id=&amp;quot;4.-embrace-eventual-consistency&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#4.-embrace-eventual-consistency&amp;quot;&amp;gt;4. Embrace Eventual Consistency&amp;lt;/a&amp;gt;&amp;lt;/h3&amp;gt;
&amp;lt;p&amp;gt;With separate services comes eventual consistency. Use &amp;lt;strong&amp;gt;domain events&amp;lt;/strong&amp;gt; over synchronous REST calls wherever possible:&amp;lt;/p&amp;gt;
&amp;lt;ul&amp;gt;
&amp;lt;li&amp;gt;Publish events to a message broker (Kafka, RabbitMQ)&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;Services subscribe to relevant events&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;Saga pattern for distributed transactions&amp;lt;/li&amp;gt;
&amp;lt;/ul&amp;gt;
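&amp;lt;p&amp;gt;The decoupling this buys can be sketched independently of any broker. A minimal in-memory stand-in (illustrative Python; in production this role is played by Kafka or RabbitMQ):&amp;lt;/p&amp;gt;

```python
from collections import defaultdict

class EventBus:
    """Minimal in-memory publish/subscribe bus standing in for a broker."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Each subscriber reacts independently; the publisher does not
        # wait for, or even know about, its consumers. Consistency across
        # services is therefore eventual, not immediate.
        for handler in self._subscribers[event_type]:
            handler(payload)

bus = EventBus()
reserved_orders = []
# The inventory service reserves stock when an order is confirmed.
bus.subscribe("OrderConfirmed", lambda event: reserved_orders.append(event["order_id"]))
bus.publish("OrderConfirmed", {"order_id": 42})
```

&amp;lt;p&amp;gt;The key property is that publishing &amp;lt;code&amp;gt;OrderConfirmed&amp;lt;/code&amp;gt; requires no knowledge of who consumes it, which is exactly what lets services be extracted one at a time.&amp;lt;/p&amp;gt;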
&amp;lt;h2 id=&amp;quot;key-takeaways&amp;quot; tabindex=&amp;quot;-1&amp;quot;&amp;gt;&amp;lt;a class=&amp;quot;header-anchor&amp;quot; href=&amp;quot;#key-takeaways&amp;quot;&amp;gt;Key Takeaways&amp;lt;/a&amp;gt;&amp;lt;/h2&amp;gt;
&amp;lt;ul&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Migrate iteratively&amp;lt;/strong&amp;gt; — the Strangler Fig pattern is your friend.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Define clear boundaries&amp;lt;/strong&amp;gt; using DDD bounded contexts.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Decouple the database&amp;lt;/strong&amp;gt; as a separate, explicit step.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Invest in observability&amp;lt;/strong&amp;gt; early — distributed tracing (Jaeger, Zipkin) and centralised logging (ELK stack) become essential.&amp;lt;/li&amp;gt;
&amp;lt;li&amp;gt;&amp;lt;strong&amp;gt;Automate everything&amp;lt;/strong&amp;gt; — CI/CD per service, infrastructure as code, automated testing.&amp;lt;/li&amp;gt;
&amp;lt;/ul&amp;gt;
&amp;lt;p&amp;gt;The migration journey is long, but each extracted service pays dividends in team autonomy and deployment velocity. Start small, validate, and build momentum.&amp;lt;/p&amp;gt;
</content>
    <author>
      <name>Tom Seidel</name>
    </author>
    <category term="Java"/>
    <category term="Microservices"/>
    <category term="Cloud"/>
    <category term="Architecture"/>
  </entry>
</feed>

