The collective delusion starts innocently enough. You’re three sprints into building a CRUD app for internal reporting. The conversation drifts from “how do we ship this quarter” to “what happens when we have a million users?” Suddenly, someone suggests Cassandra. Or maybe CockroachDB. Definitely something with “eventual consistency” in the docs. Before you know it, you’re architecting a multi-region, sharded system for a product that currently has twelve beta testers and a database smaller than a Netflix download.
This isn’t hypothetical. It’s happening right now, in startups and enterprises alike: teams building distributed systems for problems that could be solved with a single PostgreSQL instance running on a modest VPS. The research is clear: premature distribution is a productivity killer, and the industry is finally waking up to how much momentum it destroys.
The Psychology of Premature Distribution
The trap isn’t technical. It’s psychological. Developers aren’t choosing distributed databases because they’ve benchmarked their workload and proven it’s necessary. They’re choosing them because of three powerful forces: fear, ego, and cargo-cult engineering.
Fear manifests as architectural anxiety. What if we grow 1000x overnight? What if our single database becomes a bottleneck? What if we look foolish for not planning ahead? This fear drives teams to solve problems they don’t have yet, burning weeks on infrastructure that won’t be relevant for years, if ever.
Ego is sneakier. Building a simple, boring solution doesn’t feel impressive. But architecting a sophisticated distributed system? That feels like senior-level work. It signals expertise. The problem is that the expertise is misplaced when your entire dataset fits in RAM. As one developer bluntly put it in a recent forum discussion, they’re “building a rocket ship to go to the grocery store.”
Cargo-cult engineering is the most insidious. Teams read about how Netflix or Uber scaled and assume those patterns apply universally. They forget that Netflix didn’t start with microservices; it evolved into them after hitting actual scaling walls. Copying the architectures of companies at massive scale, without having massive-scale problems, isn’t learning from success; it’s ignoring the journey that got them there.
The True Cost of Over-Engineering
The operational overhead of distributed databases is staggering for early-stage systems. Let’s count the ways:
- Cognitive load: Every developer must understand consistency models, partition tolerance, and distributed transactions. Simple queries become distributed joins. Debugging requires tracing requests across nodes.
- Operational complexity: Backups, monitoring, and migrations multiply. A single PostgreSQL instance can be backed up with a simple pg_dump. A distributed cluster requires orchestrated snapshots, node coordination, and complex restore procedures.
- Development velocity: Features that took hours now take days. Adding a column requires migration scripts across shards (a one-line contrast follows this list). Testing requires standing up multiple nodes. The feedback loop stretches from seconds to minutes, or hours.
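To make the velocity point concrete, here is what a routine schema change looks like on a single, unsharded PostgreSQL instance. The table and column names are hypothetical; the shape of the change is the point:

```sql
-- Hypothetical single-instance migration: one transactional statement, live in seconds.
-- On a sharded cluster, the same change becomes a per-shard migration script plus a coordinated rollout.
ALTER TABLE users ADD COLUMN display_name text;
```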
The data bears this out. Companies adopting distributed databases prematurely report 3-5x slower feature development. One engineering manager noted that their “future-proofed” system became so complex that adding a simple user profile field required changes across seven services and two weeks of coordination.
Meanwhile, a “boring” PostgreSQL setup on a single server could handle their entire workload with 90% less stress and a much smaller bill. The math is stark: a managed PostgreSQL instance on RDS can handle millions of rows and thousands of concurrent connections, and it scales vertically to impressive heights before horizontal sharding becomes necessary.
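Before assuming you are anywhere near those limits, measure. A couple of stock PostgreSQL queries (standard system views, no extensions required) show how much headroom you actually have:

```sql
-- How big is the database, really? Compare against your instance's storage ceiling.
SELECT pg_size_pretty(pg_database_size(current_database())) AS db_size;

-- How many connections are open right now, versus the configured limit?
SELECT count(*) AS open_connections FROM pg_stat_activity;
SHOW max_connections;
```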
Boring Tech Can Scale Farther Than You Think
The evidence that traditional databases can handle serious scale is overwhelming, and growing.
Amazon RDS for PostgreSQL now supports instances with up to 64TB of storage and hundreds of gigabytes of RAM. With read replicas, you can scale reads horizontally to handle read-heavy workloads. With Multi-AZ deployments, you get high availability without managing failover yourself. For the vast majority of applications, this is more than sufficient.
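If you do lean on read replicas, replication lag is essentially the only distributed-systems concept you inherit, and checking it is a one-liner on the replica (this uses a standard PostgreSQL function, nothing RDS-specific):

```sql
-- Run on a read replica: how far behind the primary is this standby?
SELECT now() - pg_last_xact_replay_timestamp() AS replication_lag;
```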
But what about truly large scale? Look at Bitso, a Latin American digital financial services company processing millions of transactions with hundreds of engineers. They built their platform on PostgreSQL, not a distributed NoSQL store. Their challenge wasn’t that PostgreSQL couldn’t handle the load; it was that their development workflow couldn’t handle hundreds of engineers sharing the same staging environment.
Their solution? They used Neon’s branching to give every engineer isolated PostgreSQL instances for testing. Each sandbox gets its own production-like database branch, created instantly via copy-on-write. This approach eliminated the staging bottleneck while keeping the simplicity and reliability of PostgreSQL. The key insight: they scaled their development process, not their database architecture.
Bitso’s 250+ engineers now work with isolated, production-like databases without the operational nightmare of managing hundreds of database clusters. The system scales automatically, incurs no cost when idle, and maintains the familiar PostgreSQL interface. This is what modern scaling looks like: smart tooling around proven technology, not premature architectural complexity.
The Momentum Killer
The most devastating cost of premature distribution is what it does to product velocity. When you’re building something new, momentum is everything. The faster you ship, the faster you learn. The faster you learn, the faster you find product-market fit.
Over-engineering kills momentum dead. A team that spends three months building a distributed data layer has three months less learning about their users. While they’re configuring consensus algorithms, competitors are shipping features and capturing market share.
The pattern is predictable. A team starts with enthusiasm and clear vision. Then someone introduces complexity “for scale.” Development slows. Bugs increase. The team spends more time debugging distributed edge cases than building user-facing features. Morale drops. The product ships late, if at all.
This is how startups die. Not because their architecture couldn’t handle scale, but because they never shipped fast enough to get the users that would require scaling.
The Rule of Three: A Practical Framework
So when is distribution justified? Use the Rule of Three:
- First instance: Build it simple. Use PostgreSQL. Don’t abstract.
- Second instance: Watch carefully. Is it really the same problem? Or just similar?
- Third instance: Now you have a pattern. Consider extraction, but only if the duplication is actual duplication of knowledge, not just code.
This applies to services, databases, and abstractions. Don’t build a microservice until you have three separate services that would genuinely benefit from independence. Don’t shard your database until you’ve proven a single instance can’t handle the load.
The corollary: Build for today’s problems, not tomorrow’s fantasies. You cannot predict the future. Every hour spent adding flexibility for hypothetical requirements is an hour not spent on real ones. And when those future requirements arrive, if they ever do, they’re never quite what you imagined.
Signals You Actually Need Distribution
How do you know when you’ve truly outgrown a single database? Look for these signals:
- Measurable performance degradation: Your queries are optimized, indexes are tuned, and you’re still hitting 90%+ CPU utilization during normal operations.
- Verifiable data size limits: You’re approaching the storage limits of your largest instance type (64TB on RDS).
- True fault isolation needs: A database outage must not take down your entire business, and you have the engineering maturity to handle multi-master complexity.
- Geographic latency requirements: Users on different continents experience unacceptable latency from a single region.
Notice what’s not on this list: “We might get a lot of users someday.” That’s not a signal. That’s a hypothesis. Test it by shipping.
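For the signals that are measurable, the database itself will tell you where you stand. A rough sketch using PostgreSQL’s built-in statistics views (the 0.99 threshold is illustrative, not gospel):

```sql
-- Buffer cache hit ratio: if this sits near 0.99 under normal load, memory is not yet your bottleneck.
SELECT sum(blks_hit)::float / nullif(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_ratio
FROM pg_stat_database;

-- Total on-disk footprint across all databases: compare against your instance's storage ceiling.
SELECT pg_size_pretty(sum(pg_database_size(datname))::bigint) AS total_size
FROM pg_database;
```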
Refactor, Don’t Prematurely Architect
The alternative to premature distribution is simple: start boring, refactor when needed. This approach has several advantages:
- You have real data: You know exactly which parts of the system need scaling because you have production metrics.
- You have revenue: You’re not burning runway on infrastructure; you’re shipping features that generate income.
- You have focus: Your team concentrates on user value, not theoretical architecture.
When you do need to refactor, you’re doing it with maximum information. You know the actual query patterns, the real bottlenecks, the true data relationships. Your abstractions will be correct because they’re based on reality, not imagination.
This is how successful companies actually scale. They start with a monolith. They extract services when specific components need independent scaling. They shard databases when specific tables grow too large. Each step is a response to a real problem, not a prediction of a future one.
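And when that moment arrives, the evidence is already sitting in the catalog. Finding which tables have actually grown large enough to worry about is a single query against PostgreSQL’s built-in statistics views:

```sql
-- The ten largest tables by total size (heap + indexes + TOAST):
-- shard or partition in response to this, not to a guess.
SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS total_size
FROM pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;
```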
The Courage to Stay Boring
The hardest part of avoiding the infinite scale trap is psychological. It takes courage to ship something simple when your peers are building sophisticated distributed systems. It takes discipline to say “we’ll solve that when it’s a problem.” It takes confidence to trust that you’ll be able to refactor when the time comes.
But the results speak for themselves. Teams that stay boring ship faster, learn faster, and ultimately scale more successfully. They don’t waste time on problems they don’t have. They build expertise in their actual domain, not in distributed systems theory.
The next time someone suggests a distributed database for your early-stage product, ask: “What specific problem will this solve today?” If the answer is about future scale, smile, nod, and suggest PostgreSQL. Your future self, who’s actually shipping code, will thank you.
The infinite scale trap is seductive because it feels like responsible engineering. In reality, it’s the opposite. Responsible engineering means solving today’s problems with tomorrow’s flexibility in mind, not tomorrow’s problems with today’s resources. Stay boring. Ship fast. Scale when you must, not when you imagine.
Further Reading
- Stop Overengineering: How to Write Clean Code That Actually Ships – A deep dive into the momentum-killing effects of premature architecture
- Amazon RDS for PostgreSQL – Managed PostgreSQL that scales to impressive heights
- Inside Bitso’s Branch-Based Workflow – How a 250+ engineer team scales development with PostgreSQL
- Neon Database Branching – Modern tooling for scaling PostgreSQL development workflows
What scaling challenges have you faced with “boring” technology? Share your experiences in the comments.




