The confusion starts innocently enough. You’re designing a new system, someone mentions “events”, and suddenly you’re in a rabbit hole of Kafka topics, event stores, and heated debates about immutability. Both event sourcing and event streaming promise audit trails, replay capabilities, and decoupled architectures. But here’s what the architecture diagrams won’t tell you: conflating these two patterns is the fastest path to a distributed system that works perfectly in development and catastrophically in production.

This isn’t just semantic nitpicking. The Reddit thread that sparked this discussion reveals a widespread industry malaise: engineers recognize the surface-level similarities but miss the structural differences that determine whether your system scales or collapses under concurrency. Let’s cut through the noise.
Event Sourcing: Your Database Is a Lie (And That’s the Point)
Event sourcing, born from Domain-Driven Design (DDD), fundamentally reimagines state persistence. Instead of storing the current state of an entity, you store the sequence of state-changing events. The current state becomes a left-fold over that sequence, a projection you can rebuild at will.
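To make that fold concrete, here is a minimal sketch with a hypothetical bank-account aggregate; the event classes and the integer-cents representation are invented for illustration, not taken from any particular framework:

```python
from dataclasses import dataclass

# Hypothetical event shapes; real systems would also carry IDs, timestamps, versions.
@dataclass(frozen=True)
class Deposited:
    amount: int  # money as integer cents keeps replay deterministic

@dataclass(frozen=True)
class Withdrew:
    amount: int

def apply(state: int, event) -> int:
    """Pure transition function: (state, event) -> new state."""
    if isinstance(event, Deposited):
        return state + event.amount
    if isinstance(event, Withdrew):
        return state - event.amount
    return state  # unknown events are ignored, never mutated

def rebuild(events) -> int:
    """Current state is nothing more than a left-fold over the event sequence."""
    balance = 0
    for event in events:
        balance = apply(balance, event)
    return balance

history = [Deposited(10_000), Withdrew(2_500), Deposited(100)]
assert rebuild(history) == 7_600
```

Replay the same history and you get the same balance every time; that determinism is the whole point.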
The GitScrum documentation illustrates this pattern clearly: events append to an immutable store, and aggregates reconstruct their state by replaying those events. This isn’t just a different data model, it’s a different philosophy of truth. Your relational database’s “current state” is merely a cached interpretation of the event log.
The critical, and often missed, requirement is deterministic, per-aggregate ordering with optimistic concurrency control. As one technical commenter pointed out, this is the litmus test: if you can’t guarantee that events for a specific aggregate (say, Account:123) are ordered and can detect concurrent modifications, you don’t have event sourcing. You have a fancy audit log.
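Here is what that litmus test looks like as a sketch, assuming an in-memory store with invented names (InMemoryEventStore, ConcurrencyError); any real event store exposes an equivalent expected-version check on append:

```python
class ConcurrencyError(Exception):
    """Raised when someone else appended to the aggregate since we read it."""

class InMemoryEventStore:
    def __init__(self):
        self._streams: dict[str, list] = {}  # one ordered stream per aggregate

    def load(self, stream_id: str) -> tuple[list, int]:
        events = self._streams.get(stream_id, [])
        return events, len(events)  # events plus the version we read them at

    def append(self, stream_id: str, events: list, expected_version: int) -> None:
        stream = self._streams.setdefault(stream_id, [])
        if len(stream) != expected_version:
            # Another writer got there first: reject instead of silently losing an update.
            raise ConcurrencyError(
                f"{stream_id}: expected version {expected_version}, found {len(stream)}"
            )
        stream.extend(events)

store = InMemoryEventStore()
_, version = store.load("Account:123")
store.append("Account:123", ["Deposited(100)"], expected_version=version)
try:
    store.append("Account:123", ["Withdrew(50)"], expected_version=version)  # stale version
except ConcurrencyError as err:
    print(err)
```

Per-aggregate ordering plus that version check is the difference between an event store and a fancy audit log.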
This matters because event sourcing’s value proposition hinges on state reconstruction accuracy. When you replay 10,000 events to rebuild an account balance, you must get the exact same result every time, down to the penny. This deterministic guarantee enables powerful capabilities:
- Temporal queries: “What was the portfolio value last Tuesday at 3 PM?”
- Debugging by replay: Clone production events into a test environment
- Compliance audit trails: Regulators don’t trust your “current state” table
The challenges and real-world trade-offs of implementing event sourcing in financial systems are well documented; the pattern shines when auditability trumps query simplicity.
Event Streaming: It’s About Transport, Not Truth
Event streaming, by contrast, is a distribution mechanism. When Kafka or Kinesis enters the conversation, we’re discussing how events move between systems, not how they’re canonically stored. The event stream is a river; event sourcing is the reservoir you draw from to guarantee water purity.
Redis Streams documentation explicitly lists “event sourcing” as a use case, which ironically perpetuates the confusion. Yes, you can build an event-sourced system on top of Redis Streams, but the stream itself is just the transport layer. It doesn’t give you per-aggregate ordering or optimistic concurrency guarantees out of the box.
Event streaming’s superpower is parallel processing and decoupling. A single order placement event can fan out to inventory, shipping, analytics, and notification systems simultaneously. Each consumer processes at its own pace, scales independently, and failures in one pipeline don’t block others.
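As a rough sketch of that fan-out, here is how it might look with Redis Streams (mentioned above) via the redis-py client; the stream and group names are invented, and the key point is that each downstream system gets its own consumer group, i.e. its own independent cursor over the same entries:

```python
import json
import redis

r = redis.Redis()

# One consumer group per downstream system: inventory and analytics each keep
# their own cursor, process at their own pace, and fail independently.
for group in ("inventory", "analytics"):
    try:
        r.xgroup_create("orders", group, id="0", mkstream=True)
    except redis.exceptions.ResponseError:
        pass  # group already exists

# Producer: append an order-placed event to the stream (transport, not truth).
r.xadd("orders", {"type": "OrderPlaced", "payload": json.dumps({"order_id": "o-42"})})

# Consumer: read new entries for one group, process, then acknowledge.
for stream, entries in r.xreadgroup("inventory", "worker-1", {"orders": ">"}, count=10):
    for entry_id, fields in entries:
        # ... reserve stock here ...
        r.xack("orders", "inventory", entry_id)
```

Nothing in that sketch knows or cares what the aggregate’s current state is; it only moves events from producer to consumers.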
The key distinction: streaming doesn’t care about state reconstruction. It cares about moving data from A to B reliably and enabling real-time processing. You can stream events that are ephemeral, derived, or eventually consistent. The stream itself has no opinion on whether those events represent a source of truth.
The Four Similarities That Fool Smart Engineers
The Reddit confusion is understandable. Both patterns share:
- Immutability: Events don’t change once recorded
- Replay capability: You can reprocess historical events
- Audit trails: Every state change is captured
- Projection flexibility: Consumers build their own views
These surface-level parallels mask three fundamental chasms:
1. Ordering Guarantees vs. Partitioning Freedom
Event sourcing demands strict, per-aggregate ordering. Event streaming optimizes for throughput via partitioning. Unless you key every message by aggregate ID, Kafka will happily spread Account:123 events across partitions for parallel consumption, breaking the deterministic ordering event sourcing requires; even with keys, ordering holds only within a partition, and there is still no concurrency check. This is why the KAFKA-2260 issue, which would add optimistic concurrency control, has languished for years: it’s not a streaming concern.
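The usual partial mitigation is exactly that keying discipline. A hedged sketch using the kafka-python client with an invented topic name; note it buys you ordering within one partition, not an expected-version check, and repartitioning the topic re-maps keys:

```python
import json
from kafka import KafkaProducer  # kafka-python client, assumed installed

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    key_serializer=str.encode,
    value_serializer=lambda v: json.dumps(v).encode(),
)

# Keying by aggregate ID pins all Account:123 events to one partition, which
# preserves their relative order *within* that partition. It does nothing for
# optimistic concurrency, and changing the partition count re-maps keys.
producer.send("account-events", key="Account:123", value={"type": "Deposited", "amount": 100})
producer.send("account-events", key="Account:123", value={"type": "Withdrew", "amount": 50})
producer.flush()
```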
2. Consistency Models
Event sourcing uses optimistic concurrency: “I’ll write this event, but fail if someone else modified the aggregate since I read it.” This prevents lost updates. Event streaming defaults to eventual consistency: “I’ll deliver this event, and consumers will figure it out.” The outbox pattern for reliable event publishing exists precisely because streaming infrastructure can’t guarantee transactional consistency with your database.
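A minimal sketch of that outbox idea, using sqlite3 so it runs standalone; the table and column names are invented, and in production a separate relay (a poller or CDC) would move unpublished rows onto the stream:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT);
    CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT,
                         topic TEXT, payload TEXT, published INTEGER DEFAULT 0);
""")

def place_order(order_id: str) -> None:
    # The state change and the event describing it commit in ONE local transaction,
    # so you never get a row without its event or an event without its row.
    with conn:
        conn.execute("INSERT INTO orders (id, status) VALUES (?, 'placed')", (order_id,))
        conn.execute(
            "INSERT INTO outbox (topic, payload) VALUES (?, ?)",
            ("orders", json.dumps({"type": "OrderPlaced", "order_id": order_id})),
        )

place_order("o-42")
# A separate relay process would now read unpublished outbox rows,
# push them to Kafka/Redis Streams, and mark them published.
print(conn.execute("SELECT topic, payload FROM outbox WHERE published = 0").fetchall())
```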
3. Purpose-Driven Design
Event sourcing answers: “How do we store state immutably?” Event streaming answers: “How do we distribute state changes at scale?” One is a persistence pattern, the other is a communication pattern.
The Dangerous Convergence: When Your Event Stream Tries to Be a Source of Truth
The real architectural accident happens when teams build “event-sourced” systems on pure streaming infrastructure. They dump events into Kafka, treat the topic as their source of truth, and discover too late that:
- Replaying from Kafka is non-deterministic if partitions have been rebalanced
- No built-in concurrency control means lost updates under load
- Schema evolution breaks state reconstruction without versioning discipline (see the upcaster sketch after this list)
- Retention policies delete “historical truth” to save disk space
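On the schema-evolution point, one common discipline is to version every stored event and upcast old versions at read time; a minimal sketch, with the event shape and field names invented for illustration:

```python
def upcast(event: dict) -> dict:
    """Bring older stored event versions up to the current shape before replay."""
    version = event.get("version", 1)
    if version == 1:
        # v1 stored a single 'name'; v2 split it. Old events stay untouched on disk;
        # the upcaster only changes what the replay code sees.
        first, _, last = event["name"].partition(" ")
        event = {**event, "version": 2, "first_name": first, "last_name": last}
        event.pop("name", None)
    return event

old = {"version": 1, "type": "UserRegistered", "name": "Ada Lovelace"}
assert upcast(old)["last_name"] == "Lovelace"
```

Without something like this, last year’s events silently stop folding into this year’s state.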
As one commenter noted, Kafka could be an event store if it implemented per-key optimistic concurrency and you never rebalanced partitions. But that’s like saying your Toyota Corolla could be a Formula 1 car if you replaced the engine, suspension, and entire drivetrain. Technically true, practically absurd.
This convergence attempt creates systems that are too complex for simple streaming needs and too weak for true event sourcing requirements. You get the worst of both worlds: operational complexity of Kafka with the query limitations of an event store.
Practical Guidance: Choose Your Pattern, Accept the Trade-offs
Use Event Sourcing When:
- Auditability is non-negotiable (finance, healthcare, compliance)
- Temporal queries are frequent business requirements
- Debugging via replay will save weeks of production firefighting
- Your domain model benefits from DDD aggregates
But accept that you’ll need event replay and snapshot strategies for bootstrapping read models, and that read models are eventually consistent.
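What a snapshot strategy amounts to, roughly: cache (version, state) every N events and replay only the tail on load. A toy, self-contained sketch with invented names and deliberately tiny numbers:

```python
SNAPSHOT_EVERY = 3  # absurdly low so the example is visible; real systems use hundreds or more

def apply(state: int, event: int) -> int:
    return state + event  # stand-in transition: events are just balance deltas

class SnapshottingRepo:
    def __init__(self):
        self.events: list[int] = []   # the full, append-only history
        self.snapshot = (0, 0)        # (version, state) cached periodically

    def append(self, event: int) -> None:
        self.events.append(event)
        version = len(self.events)
        if version % SNAPSHOT_EVERY == 0:
            # Persist a snapshot so future loads replay only the tail.
            self.snapshot = (version, self.load())

    def load(self) -> int:
        version, state = self.snapshot
        for event in self.events[version:]:  # replay only events after the snapshot
            state = apply(state, event)
        return state

repo = SnapshottingRepo()
for delta in (100, -25, 40, 7):
    repo.append(delta)
assert repo.load() == 122
```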
Use Event Streaming When:
- System integration requires decoupled, asynchronous communication
- Real-time analytics need parallel processing
- Event-driven microservices need reliable inter-service messaging
- Throughput matters more than per-entity consistency
But accept that you’ll need the outbox pattern for reliable publishing, and that consumers must handle duplicate and out-of-order events.
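One way to honor that last constraint is an idempotent consumer that remembers processed event IDs; a minimal in-memory sketch (a production version would persist the seen set in the same transaction as its side effects):

```python
class IdempotentConsumer:
    def __init__(self):
        self.seen: set[str] = set()    # real systems persist this alongside their state
        self.shipped: list[str] = []

    def handle(self, event: dict) -> None:
        event_id = event["id"]
        if event_id in self.seen:
            return  # duplicate delivery: at-least-once transport doing exactly what it promised
        self.shipped.append(event["order_id"])
        self.seen.add(event_id)

consumer = IdempotentConsumer()
for event in [
    {"id": "e-1", "order_id": "o-42"},
    {"id": "e-1", "order_id": "o-42"},  # redelivered; processed exactly once
]:
    consumer.handle(event)
assert consumer.shipped == ["o-42"]
```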
Use Both (Correctly) When:
You’re GitScrum: tracking user actions immutably (event sourcing) while syndicating those events to analytics pipelines (streaming). The event store is your source of truth; Redis Streams is your distribution mechanism. They serve different purposes in the same architecture.
The Verdict: Fundamentally Different, Occasionally Complementary
Event sourcing and event streaming aren’t competing patterns; they’re orthogonal concerns that happen to share a vocabulary. The controversy isn’t which is better; it’s why we keep forcing them into the same architectural box.
The teams that succeed are those that respect each pattern’s core purpose: event sourcing for state truth, event streaming for state distribution. They don’t try to make Kafka into a database, and they don’t try to query their event store like a stream.
The next time someone proposes “event sourcing on Kafka”, ask them: “How do we guarantee per-aggregate ordering under partition rebalancing?” If they can’t answer, you’re not architecting; you’re experimenting with someone else’s production system. And that experiment has already failed for enough teams that we should know better.
The operational challenges in event-driven systems multiply when patterns are misapplied. Architecture isn’t about using the shiniest tools; it’s about choosing the right constraints for your problem. Event sourcing and event streaming each impose different constraints. Choose wisely, or pay the price in production outages and compliance failures.




