November 28, 2025

From Single‑Node RDS to Distributed SQL with YugabyteDB Aeon

At Tofu, we’re building a fast‑growing B2B SaaS platform that automates bookkeeping with AI. Tofu’s platform connects to accounting systems, financial data sources, and enterprise workflows—processing large volumes of structured and semi‑structured financial data in near real time.

As our customer base and integration complexity grew, scalability, reliability, and zero‑downtime operations became mission‑critical. This post is the story of how Tofu evolved its database architecture—from a single‑node AWS RDS (PostgreSQL) instance to YugabyteDB Aeon, a fully managed, PostgreSQL‑compatible distributed SQL database that now powers production.

Why We Moved: When Single‑Node PostgreSQL Hit Its Ceiling

When Tofu launched, simplicity was the right call. AWS RDS (PostgreSQL) gave us reliability, predictable pricing, and minimal operations—ideal for an early product.

As Tofu onboarded more organizations and automated more bookkeeping workflows, the workload shape changed:

  • Integration jobs and background workers caused write bursts.
  • Real‑time analytics increased concurrent reads.
  • Schema changes required maintenance windows.
  • Vertical scaling delivered diminishing returns.

We needed a distributed, fault‑tolerant, PostgreSQL‑compatible database—one that could scale reads and writes without forcing application rewrites.

Evaluating the Options

We looked at managed and open‑source paths that would keep PostgreSQL semantics while giving us true horizontal scale. Here’s how our shortlist stacked up.

Scope: we compared YugabyteDB Aeon, Amazon Aurora PostgreSQL (classic), CockroachDB, and Citus (a PostgreSQL extension).
Note: AWS also offers Aurora PostgreSQL Limitless Database, a separate sharded, multi‑writer architecture; our evaluation focused on classic Aurora.

Distributed Postgres options—side‑by‑side

| Criterion | YugabyteDB Aeon (managed) | Amazon Aurora PostgreSQL (classic) | CockroachDB | Citus (PostgreSQL extension) |
| --- | --- | --- | --- | --- |
| Architecture (write model) | Distributed SQL (tablet‑sharded with Raft replication) | PostgreSQL with decoupled storage; single writer per cluster | Distributed SQL (multi‑node, Raft) | PostgreSQL + Citus extension (coordinator + workers) |
| Write scale‑out | Yes—multi‑shard; tablets auto‑split and rebalance | No—single writer (Limitless is a separate, sharded product) | Yes—multi‑writer across nodes | Yes—via sharding; depends on good distribution keys |
| Read scale‑out | Yes—leaders/followers spread across nodes | Yes—read replicas (up to 15) and Global Database | Yes | Yes—coordinator parallelizes across workers |
| Schema changes to distribute | Typically none for many apps (optional colocation/partitioning controls) | N/A (not a write‑sharded model) | Typically none | Yes—choose a distribution key; align DDL/constraints |
| Maintenance model | Rolling upgrades on fault‑tolerant clusters (service remains available) | Managed patching; ZDP/blue‑green tools minimize downtime | Rolling, version‑gated upgrades | Depends on Postgres/cluster setup and host (self‑managed/Azure) |
| PostgreSQL compatibility (client/SQL) | PostgreSQL wire & SQL (YSQL) | Native PostgreSQL | PostgreSQL‑compatible wire/protocol (behavioral differences) | Native PostgreSQL + Citus features |

Why YugabyteDB Aeon fit Tofu’s needs
Tofu’s engineering team prioritized: PostgreSQL wire compatibility (no ORM rewrites), horizontal read/write scale, rolling maintenance with automatic failover, backups + PITR, private networking (VPC + PrivateLink), and a Datadog integration that plugged into existing dashboards.

Migrating from RDS to YugabyteDB Aeon with YugabyteDB Voyager

We used YugabyteDB Voyager to keep the migration controlled and low‑risk:

  1. Assessment & schema
    YugabyteDB Voyager analyzed our RDS schema, produced YSQL‑compatible DDL, and highlighted areas worth tweaking for a distributed layout.
  2. Parallel data movement
    YugabyteDB Voyager orchestrated parallel export/import across tables; for the final stretch, it supported a catch‑up phase to close the gap before cutover.
  3. Validation
    YugabyteDB Voyager provided verification helpers. We still ran our own row‑count checks and spot validations to be safe.
  4. Cutover
    We updated connection strings. YugabyteDB Aeon automatically rebalanced tablets post‑switch, and Tofu’s application stack kept humming.
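The row‑count checks in the validation step boil down to a diff over per‑table counts from source and target. The helper below is an illustrative sketch, not our exact tooling: table names and counts are placeholders, and in practice the maps were filled from `SELECT count(*)` queries against each database.

```rust
use std::collections::HashMap;

/// Compare per-table row counts captured on the source (RDS) and the
/// target (YugabyteDB Aeon) and return the tables that disagree.
fn mismatched_tables(
    source: &HashMap<String, u64>,
    target: &HashMap<String, u64>,
) -> Vec<String> {
    let mut bad: Vec<String> = source
        .iter()
        .filter(|(table, count)| target.get(*table) != Some(*count))
        .map(|(table, _)| table.to_string())
        .collect();
    // Tables present on the target but missing on the source also count.
    for table in target.keys() {
        if !source.contains_key(table) {
            bad.push(table.clone());
        }
    }
    bad.sort();
    bad.dedup();
    bad
}
```

A non‑empty result blocks cutover until the catch‑up phase closes the gap.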

From the application’s perspective, it was still PostgreSQL—only now distributed, resilient, and cloud‑native.

Operational Benefits After the Migration

Zero‑downtime rolling maintenance

 YugabyteDB Aeon performs rolling upgrades/maintenance on fault‑tolerant clusters, so the service stays available as nodes are patched one at a time. Connections to the node being updated can drop; we rely on pool/driver retries and schedule a maintenance window during off‑peak hours. 
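The pool/driver retry behavior we rely on reduces to a retry‑with‑backoff loop around a fallible operation. Here is a minimal, self‑contained sketch; the operation, attempt limit, and delays are placeholders rather than our production settings.

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry a fallible operation with exponential backoff. A connection
/// dropped while a node is being patched is retried rather than
/// surfaced to the caller.
fn retry_with_backoff<T, E>(
    mut op: impl FnMut() -> Result<T, E>,
    max_attempts: u32,
    base_delay: Duration,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => {
                attempt += 1;
                if attempt >= max_attempts {
                    return Err(e);
                }
                // Exponential backoff: base * 2^(attempt - 1).
                sleep(base_delay * 2u32.pow(attempt - 1));
            }
        }
    }
}
```

In production the same idea lives in the pool configuration; this sketch just makes the control flow explicit.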

High availability and self‑healing

 Data is sharded into tablets and synchronously replicated via Raft; leaders are elected automatically, and failovers happen without manual promotion. 

Scale out, not just up

 Adding nodes increases capacity; the cluster rebalances data across nodes, so we can scale reads and writes without app rewrites (performance tracks node count on well‑distributed workloads). 

Backups and point‑in‑time recovery (PITR)

 YugabyteDB Aeon supports scheduled full/incremental backups and PITR. Restores/clones pick the closest snapshot and use Flashback to reach your chosen timestamp within the retention window. 
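Before requesting a restore it is worth sanity‑checking that the target timestamp actually falls inside the retention window. A minimal sketch, assuming a configurable retention length (the seven‑day figure in the test is a placeholder, not Aeon's default):

```rust
use std::time::{Duration, SystemTime};

/// Check a requested point-in-time-recovery target: it must lie in the
/// past and within the backup retention window.
fn pitr_target_is_valid(
    target: SystemTime,
    now: SystemTime,
    retention: Duration,
) -> bool {
    match now.duration_since(target) {
        Ok(age) => age <= retention, // in the past, within retention
        Err(_) => false,             // target is in the future
    }
}
```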

Observability that fits our stack

 The Datadog integration streams cluster metrics to our existing dashboards with an out‑of‑the‑box view of health and performance. No custom exporters required.

Private networking on AWS

 We use YugabyteDB Aeon VPCs and AWS PrivateLink (Private Service Endpoints) to keep traffic private, granting access to specific AWS principals (ARNs). Database auth uses PostgreSQL mechanisms (SCRAM/LDAP).

Developer simplicity

 YSQL is built from PostgreSQL code (v15 lineage) and is wire‑compatible, so our Rust + SQLx code worked unchanged.

Lessons Learned: DDL & Query Behavior in a Distributed Postgres

Transaction isolation & retries
In YSQL, the effective default is Snapshot (Repeatable Read)—PostgreSQL’s READ COMMITTED maps to Snapshot unless you enable a server flag. Under concurrency (and occasionally due to clock skew), you may see read restart/serialization errors; lightweight, idempotent retries are the right fix. For long‑running read‑only jobs, SERIALIZABLE READ ONLY DEFERRABLE avoids restarts by waiting for a safe snapshot. If your logic allows, enabling true READ COMMITTED can reduce visible restarts.
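The idempotent‑retry pattern above can be sketched as follows. `TxError` is a stand‑in for a real driver error exposing a SQLSTATE, and the transient set (40001 = serialization failure / read restart, 40P01 = deadlock) reflects the errors we observed rather than an exhaustive list.

```rust
/// Stand-in for a database error that carries a SQLSTATE code.
#[derive(Debug, PartialEq)]
struct TxError {
    sqlstate: String,
}

/// Errors we treat as transient and safe to retry.
fn is_transient(e: &TxError) -> bool {
    matches!(e.sqlstate.as_str(), "40001" | "40P01")
}

/// Run an idempotent transaction closure, retrying only transient errors.
fn run_idempotent_tx<T>(
    mut tx: impl FnMut() -> Result<T, TxError>,
    max_attempts: u32,
) -> Result<T, TxError> {
    let mut attempts = 0;
    loop {
        match tx() {
            Ok(v) => return Ok(v),
            Err(e) if is_transient(&e) && attempts + 1 < max_attempts => {
                attempts += 1; // retry only read-restart/deadlock errors
            }
            Err(e) => return Err(e), // non-transient: fail fast
        }
    }
}
```

The key property is that the closure must be idempotent: a retried transaction re‑executes from the top.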

Query planning & statistics
YugabyteDB ships a Cost‑Based Optimizer (CBO) for YSQL that accounts for topology and network costs. Keep stats fresh: YugabyteDB Aeon provides Auto Analyze, and we still run ANALYZE after large imports or schema changes. Some plans (join order, index selection) will differ from single‑node Postgres—expected in a distributed layout.

Indexes are online by default
Index creation uses online backfill by default, so routine index adds don’t block writes. That made iterative schema changes safer during releases.

Extensions
Extensions we rely on—uuid-ossp, pgcrypto, pg_trgm—work as expected; you can also use gen_random_uuid() for UUIDs. (As with Postgres, low‑level C extensions that assume specific storage internals aren’t applicable.)

Results

  • Throughput headroom for mixed OLTP + background processing.
  • No more planned downtime for patching/upgrades; maintenance is rolling.
  • Simpler operations: PITR, private networking, and observability integrated with our existing workflows.
  • No code rewrite: PostgreSQL wire compatibility made the move nearly invisible to application code.

Conclusion

Moving from a single‑node RDS instance to YugabyteDB Aeon has been one of the most impactful infrastructure upgrades in Tofu’s journey. We gained horizontal scale, fault tolerance, and operational simplicity—without sacrificing PostgreSQL familiarity or developer velocity.

If you’re hitting RDS/Aurora limits and need elastic, zero‑downtime operations on PostgreSQL semantics, YugabyteDB Aeon is worth a close look.

Read the Full Case Study

A detailed white paper about Tofu’s use of YugabyteDB Aeon will soon be available on the YugabyteDB website.

Appendix (for the curious)

  • Aurora “Limitless Database” exists as a separate, sharded deployment to scale writes; our evaluation targeted classic Aurora.
  • Gotcha to avoid: Don’t describe YugabyteDB Aeon logins as IAM DB authentication—use SCRAM/LDAP for DB users; use PrivateLink for private connectivity and OIDC/SSO for the console.

Written by: Ken Kanai, VP of Engineering at Tofu
Tofu is a fast‑growing B2B SaaS platform transforming the bookkeeping industry with AI‑driven automation.
Learn more at www.gotofu.com.