# Event Store Comparison (choosing your storage)
Not all event stores are created equal. Your choice affects performance, operations, and what's possible.
## The options landscape
| Category | Examples | Best for |
|---|---|---|
| Purpose-built | KurrentDB/EventStoreDB, Axon Server | Teams committed to ES |
| Document DB | Marten (PostgreSQL), CosmosDB | .NET shops, existing infra |
| Relational | DIY on PostgreSQL/SQL Server | Full control, existing skills |
| Cloud-native | AWS EventBridge, Azure Event Hubs | Serverless, managed |
| Actor-based | Orleans, Akka.NET | Actor model systems |
## KurrentDB / EventStoreDB

The original purpose-built event store, now called KurrentDB.

### Strengths

- Native stream semantics: Streams, subscriptions, projections built in
- Global ordering: The `$all` stream provides a total order across all events
- Category streams: `$ce-{category}` streams for cross-aggregate queries
- Persistent subscriptions: Competing consumers with server-side checkpointing
- Projections: Server-side JavaScript projections

### Weaknesses

- Operational complexity: Another database to manage
- Learning curve: New concepts (projections, subscriptions)
- Limited cloud options: Self-hosted or managed (limited regions)

### When to use

- You're building a serious ES system
- You need server-side projections
- Global ordering matters
### .NET example

```csharp
using System.Text.Json;
using KurrentDB.Client;

// Connection
var settings = KurrentDBClientSettings.Create("kurrentdb://localhost:2113?tls=false");
var client = new KurrentDBClient(settings);

// Append with optimistic concurrency
var streamName = "order-12345";
var eventData = new EventData(
    Uuid.NewUuid(),
    "OrderPlaced",
    JsonSerializer.SerializeToUtf8Bytes(new { OrderId = "12345", Amount = 99.99m }));

// First event - expect that the stream doesn't exist yet
await client.AppendToStreamAsync(
    streamName,
    StreamState.NoStream,
    new[] { eventData });

// Subsequent events - expect a specific revision
var readResult = client.ReadStreamAsync(Direction.Backwards, streamName, StreamPosition.End, 1);
var lastEvent = await readResult.FirstAsync();

var newEventData = new EventData(
    Uuid.NewUuid(),
    "OrderShipped",
    JsonSerializer.SerializeToUtf8Bytes(new { TrackingNumber = "TRACK-123" }));

await client.AppendToStreamAsync(
    streamName,
    lastEvent.Event.EventNumber, // expected revision: the last event we saw
    new[] { newEventData });

// Subscribe to a single stream
await using var subscription = client.SubscribeToStream(streamName, FromStream.Start);
await foreach (var message in subscription.Messages)
{
    if (message is StreamMessage.Event(var resolvedEvent))
    {
        Console.WriteLine($"Received: {resolvedEvent.Event.EventType}");
    }
}

// Subscribe to all events (for projections)
await using var allSub = client.SubscribeToAll(FromAll.Start);
await foreach (var message in allSub.Messages)
{
    // Process all events across all streams
}
```
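The catch-up subscriptions above replay from a position the client tracks itself. The persistent subscriptions listed under strengths instead keep the checkpoint on the server and load-balance events across competing consumers in a named group. A minimal sketch; the client type and method names here are assumed to carry over from the pre-rename EventStoreDB .NET client and should be checked against the current package:

```csharp
using KurrentDB.Client;

// Persistent subscriptions are server-side: the server owns the checkpoint
// and distributes events across all consumers subscribed to the same group.
// Client/method names below are assumptions based on the EventStoreDB client.
var subsClient = new KurrentDBPersistentSubscriptionsClient(settings);

// Create the consumer group once (an idempotent setup step)
await subsClient.CreateToStreamAsync(
    "order-12345",
    groupName: "order-processors",
    settings: new PersistentSubscriptionSettings());

// Each consumer instance joins the same group; the server load-balances
await subsClient.SubscribeToStreamAsync(
    "order-12345",
    "order-processors",
    eventAppeared: async (sub, resolvedEvent, retryCount, ct) =>
    {
        Console.WriteLine($"Handling: {resolvedEvent.Event.EventType}");
        await sub.Ack(resolvedEvent); // server advances the group checkpoint
    });
```

Acking explicitly is what makes competing consumers safe: an unacked event is redelivered (to this or another consumer) after a timeout.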
### Resources

- https://github.com/kurrent-io/kurrentdb-client-dotnet
- https://developers.eventstore.com/
## Marten (PostgreSQL)

Document database and event store built on PostgreSQL.

### Strengths

- PostgreSQL: Use existing infrastructure and skills
- Document store + ES: Both in one library
- Strong .NET integration: LINQ, async streams
- Projections: Inline and async projections built in
- Multi-tenancy: Built-in support

### Weaknesses

- PostgreSQL dependency: Must use PostgreSQL
- No global ordering: Per-stream ordering only
- Scale limits: PostgreSQL's limits apply

### When to use

- You already use PostgreSQL
- You want a document store and event store in one
- .NET is your primary platform
### .NET example

```csharp
using Marten;
using Marten.Events;
using Marten.Events.Aggregation;
using Marten.Events.Projections;

// Configuration
var store = DocumentStore.For(opts =>
{
    opts.Connection("Host=localhost;Database=myapp;Username=postgres;Password=postgres");

    // Register events
    opts.Events.AddEventType<OrderPlaced>();
    opts.Events.AddEventType<OrderItemAdded>();
    opts.Events.AddEventType<OrderShipped>();

    // Register projections
    opts.Projections.Add<OrderSummaryProjection>(ProjectionLifecycle.Inline);
});

// Append events
await using var session = store.LightweightSession();

var orderId = Guid.NewGuid();
session.Events.StartStream<Order>(
    orderId,
    new OrderPlaced(orderId, "customer-123", 99.99m),
    new OrderItemAdded(orderId, "SKU-001", 2));
await session.SaveChangesAsync();

// Append to existing stream with optimistic concurrency.
// Marten's expectedVersion is the stream version *after* the append:
// 2 existing events + 1 new event = 3.
session.Events.Append(
    orderId,
    expectedVersion: 3,
    new OrderShipped(orderId, "TRACK-123"));
await session.SaveChangesAsync();

// Read stream
var events = await session.Events.FetchStreamAsync(orderId);
foreach (var @event in events)
{
    Console.WriteLine($"{@event.EventType}: {System.Text.Json.JsonSerializer.Serialize(@event.Data)}");
}

// Aggregate on-the-fly
var order = await session.Events.AggregateStreamAsync<Order>(orderId);

// Projection
public sealed class OrderSummaryProjection : SingleStreamProjection<OrderSummary>
{
    public OrderSummary Create(OrderPlaced @event) =>
        new(
            @event.OrderId,
            @event.CustomerId,
            @event.Amount,
            OrderStatus.Placed,
            null);

    public OrderSummary Apply(OrderShipped @event, OrderSummary current) =>
        current with { Status = OrderStatus.Shipped, TrackingNumber = @event.TrackingNumber };
}

public sealed record OrderSummary(
    Guid OrderId,
    string CustomerId,
    decimal Amount,
    OrderStatus Status,
    string? TrackingNumber);
```
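The `OrderSummaryProjection` above runs `Inline`, inside the same transaction as the append. Marten can instead run projections in the background via its async projection daemon, which keeps writes fast at the cost of eventually consistent read models. A sketch of the host wiring, assuming Marten's standard `AddMarten`/`AddAsyncDaemon` integration:

```csharp
using Marten;
using Marten.Events.Daemon.Resiliency;
using Marten.Events.Projections;
using Microsoft.Extensions.Hosting;

var builder = Host.CreateApplicationBuilder(args);

builder.Services.AddMarten(opts =>
{
    opts.Connection("Host=localhost;Database=myapp;Username=postgres;Password=postgres");

    // Async instead of Inline: the daemon applies events in the background,
    // so appends don't pay the projection cost inside the write transaction
    opts.Projections.Add<OrderSummaryProjection>(ProjectionLifecycle.Async);
})
// HotCold: one node runs the daemon; another takes over if it fails
.AddAsyncDaemon(DaemonMode.HotCold);

await builder.Build().RunAsync();
```

The daemon tracks its own progress per projection, so a new async projection can be added later and rebuilt from event zero without touching the write path.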
### Resources

- https://martendb.io/
- https://github.com/JasperFx/marten
## Azure Cosmos DB

Microsoft's globally distributed NoSQL database with change feed.

### Strengths

- Global distribution: Multi-region, low latency
- Fully managed: No infrastructure to manage
- Change feed: Built-in event streaming
- Serverless option: Pay per request

### Weaknesses

- No native streams: Must model them yourself
- Cost: Can be expensive at scale
- Eventual consistency: Strong consistency costs more
- Vendor lock-in: Azure-specific

### When to use

- You're on Azure
- Global distribution is critical
- You want a fully managed service
### .NET example

```csharp
using System.Runtime.CompilerServices;
using System.Text.Json;
using Microsoft.Azure.Cosmos;

// Event document structure
public sealed record EventDocument(
    string id,            // "{streamId}:{streamVersion}" - deterministic, see below
    string streamId,      // Partition key
    long streamVersion,
    string eventType,
    DateTimeOffset occurredAt,
    JsonElement data);

// Repository
public sealed class CosmosEventStore
{
    private readonly Container _container;

    public CosmosEventStore(CosmosClient client, string databaseId, string containerId)
    {
        _container = client.GetContainer(databaseId, containerId);
    }

    public async Task AppendToStream(
        string streamId,
        long expectedVersion,
        IReadOnlyList<IDomainEvent> events,
        CancellationToken ct)
    {
        // Use a transactional batch for atomicity within the partition
        var batch = _container.CreateTransactionalBatch(new PartitionKey(streamId));
        var version = expectedVersion;
        foreach (var @event in events)
        {
            version++;
            var doc = new EventDocument(
                id: $"{streamId}:{version}", // deterministic id per (stream, version)
                streamId: streamId,
                streamVersion: version,
                eventType: @event.GetType().Name,
                occurredAt: DateTimeOffset.UtcNow,
                data: JsonSerializer.SerializeToElement(@event));
            batch.CreateItem(doc);
        }

        // Optimistic concurrency: if a concurrent writer already appended the
        // same version, CreateItem collides with the existing id and the whole
        // batch fails with 409 Conflict
        var response = await batch.ExecuteAsync(ct);
        if (!response.IsSuccessStatusCode)
        {
            throw new WrongExpectedVersionException(streamId, expectedVersion, version);
        }
    }

    public async IAsyncEnumerable<EventDocument> ReadStream(
        string streamId,
        long fromVersion,
        [EnumeratorCancellation] CancellationToken ct)
    {
        var query = new QueryDefinition(
            "SELECT * FROM c WHERE c.streamId = @streamId AND c.streamVersion >= @fromVersion ORDER BY c.streamVersion")
            .WithParameter("@streamId", streamId)
            .WithParameter("@fromVersion", fromVersion);

        using var iterator = _container.GetItemQueryIterator<EventDocument>(
            query,
            requestOptions: new QueryRequestOptions { PartitionKey = new PartitionKey(streamId) });

        while (iterator.HasMoreResults)
        {
            var response = await iterator.ReadNextAsync(ct);
            foreach (var doc in response)
            {
                yield return doc;
            }
        }
    }
}

// Change feed for projections
public sealed class CosmosChangeFeedProcessor
{
    public async Task StartAsync(
        Container eventsContainer,
        Container leaseContainer,
        Func<IReadOnlyCollection<EventDocument>, CancellationToken, Task> handler,
        CancellationToken ct)
    {
        var processor = eventsContainer
            .GetChangeFeedProcessorBuilder<EventDocument>("projections",
                async (changes, token) => await handler(changes, token))
            .WithInstanceName("instance-1")
            .WithLeaseContainer(leaseContainer)
            .WithStartTime(DateTime.MinValue.ToUniversalTime())
            .Build();

        await processor.StartAsync();
    }
}
```
### Resources

- https://learn.microsoft.com/en-us/azure/cosmos-db/
- https://learn.microsoft.com/en-us/azure/architecture/databases/guide/transactional-outbox-cosmos
## PostgreSQL (DIY)

Roll your own event store on PostgreSQL.

### Strengths

- Full control: Design exactly what you need
- Existing skills: SQL is well-known
- Mature tooling: Backups, monitoring, etc.
- Cost effective: No additional licensing

### Weaknesses

- Build everything: Subscriptions, projections, etc.
- Performance tuning: Must optimize yourself
- No global ordering: Unless you design for it

### When to use

- You want full control
- Budget is constrained
- Team knows PostgreSQL well
### Schema

```sql
-- Events table
CREATE TABLE events (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    global_position BIGSERIAL,  -- monotonic sequence for global ordering
    stream_id VARCHAR(255) NOT NULL,
    stream_version BIGINT NOT NULL,
    event_type VARCHAR(255) NOT NULL,
    schema_version INT NOT NULL DEFAULT 1,
    occurred_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    data JSONB NOT NULL,
    metadata JSONB,
    UNIQUE (stream_id, stream_version)  -- also serves as the stream-read index
);

-- Index for global ordering (UUIDv4 ids are random, so a sequence is needed)
CREATE UNIQUE INDEX idx_events_global ON events (global_position);

-- Optimistic concurrency function
CREATE OR REPLACE FUNCTION append_events(
    p_stream_id VARCHAR(255),
    p_expected_version BIGINT,
    p_events JSONB
) RETURNS BIGINT AS $$
DECLARE
    v_current_version BIGINT;
    v_event JSONB;
    v_new_version BIGINT;
BEGIN
    -- Serialize writers per stream. (FOR UPDATE cannot be combined with an
    -- aggregate, and would lock nothing for an empty stream; the UNIQUE
    -- constraint remains the backstop either way.)
    PERFORM pg_advisory_xact_lock(hashtext(p_stream_id));

    -- Get current version
    SELECT COALESCE(MAX(stream_version), 0) INTO v_current_version
    FROM events
    WHERE stream_id = p_stream_id;

    -- Check expected version
    IF v_current_version != p_expected_version THEN
        RAISE EXCEPTION 'WrongExpectedVersion: expected %, actual %',
            p_expected_version, v_current_version;
    END IF;

    -- Insert events
    v_new_version := v_current_version;
    FOR v_event IN SELECT * FROM jsonb_array_elements(p_events)
    LOOP
        v_new_version := v_new_version + 1;
        INSERT INTO events (stream_id, stream_version, event_type, data, metadata)
        VALUES (
            p_stream_id,
            v_new_version,
            v_event->>'eventType',
            v_event->'data',
            v_event->'metadata'
        );
    END LOOP;

    RETURN v_new_version;
END;
$$ LANGUAGE plpgsql;

-- Checkpoints for projections (track the orderable global position)
CREATE TABLE projection_checkpoints (
    projection_name VARCHAR(255) PRIMARY KEY,
    last_processed_position BIGINT NOT NULL,
    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
```
### .NET implementation

```csharp
using System.Linq;
using System.Runtime.CompilerServices;
using System.Text.Json;
using Npgsql;

public sealed class PostgresEventStore : IEventStore
{
    private readonly NpgsqlDataSource _dataSource;

    public PostgresEventStore(NpgsqlDataSource dataSource)
    {
        _dataSource = dataSource;
    }

    public async Task<AppendResult> AppendToStream(
        string streamId,
        long expectedVersion,
        IReadOnlyList<StoredEvent> events,
        CancellationToken ct)
    {
        var eventsJson = JsonSerializer.Serialize(events.Select(e => new
        {
            eventType = e.EventType,
            data = JsonDocument.Parse(e.Data).RootElement,
            metadata = e.Metadata is not null
                ? JsonDocument.Parse(e.Metadata).RootElement
                : (JsonElement?)null
        }));

        await using var conn = await _dataSource.OpenConnectionAsync(ct);
        await using var cmd = new NpgsqlCommand(
            "SELECT append_events(@streamId, @expectedVersion, @events::jsonb)",
            conn);
        cmd.Parameters.AddWithValue("streamId", streamId);
        cmd.Parameters.AddWithValue("expectedVersion", expectedVersion);
        cmd.Parameters.AddWithValue("events", eventsJson);

        try
        {
            var newVersion = (long)(await cmd.ExecuteScalarAsync(ct))!;
            return new AppendResult(newVersion);
        }
        catch (PostgresException ex) when (ex.MessageText.Contains("WrongExpectedVersion"))
        {
            throw new WrongExpectedVersionException(streamId, expectedVersion, -1);
        }
    }

    public async IAsyncEnumerable<StoredEvent> ReadStream(
        string streamId,
        long fromVersionInclusive,
        [EnumeratorCancellation] CancellationToken ct)
    {
        await using var conn = await _dataSource.OpenConnectionAsync(ct);
        await using var cmd = new NpgsqlCommand(
            @"SELECT id, event_type, schema_version, occurred_at, data, metadata
              FROM events
              WHERE stream_id = @streamId AND stream_version >= @fromVersion
              ORDER BY stream_version",
            conn);
        cmd.Parameters.AddWithValue("streamId", streamId);
        cmd.Parameters.AddWithValue("fromVersion", fromVersionInclusive);

        await using var reader = await cmd.ExecuteReaderAsync(ct);
        while (await reader.ReadAsync(ct))
        {
            yield return new StoredEvent(
                EventId: reader.GetGuid(0),
                EventType: reader.GetString(1),
                SchemaVersion: reader.GetInt32(2),
                OccurredAt: reader.GetFieldValue<DateTimeOffset>(3),
                Data: JsonSerializer.SerializeToUtf8Bytes(reader.GetFieldValue<JsonElement>(4)),
                Metadata: reader.IsDBNull(5)
                    ? null
                    : JsonSerializer.SerializeToUtf8Bytes(reader.GetFieldValue<JsonElement>(5)));
        }
    }
}
```
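Subscriptions are the main piece you must build yourself on PostgreSQL. A common approach is a polling reader over a global sequence. This sketch assumes the events table carries a `global_position BIGSERIAL` column (a UUIDv4 primary key alone gives no global order) and that `projection_checkpoints` stores a `last_processed_position BIGINT`:

```csharp
using Npgsql;

// Polls for events past the projection's checkpoint, hands them to a handler
// in global order, then advances the checkpoint.
public sealed class PollingSubscription(NpgsqlDataSource dataSource, string projectionName)
{
    public async Task RunAsync(Func<string, string, Task> handle, CancellationToken ct)
    {
        while (!ct.IsCancellationRequested)
        {
            await using var conn = await dataSource.OpenConnectionAsync(ct);

            await using var cmd = new NpgsqlCommand(
                @"SELECT e.global_position, e.event_type, e.data::text
                  FROM events e
                  WHERE e.global_position > COALESCE(
                      (SELECT last_processed_position FROM projection_checkpoints
                       WHERE projection_name = @name), 0)
                  ORDER BY e.global_position
                  LIMIT 100", conn);
            cmd.Parameters.AddWithValue("name", projectionName);

            var lastPosition = 0L;
            await using (var reader = await cmd.ExecuteReaderAsync(ct))
            {
                while (await reader.ReadAsync(ct))
                {
                    await handle(reader.GetString(1), reader.GetString(2));
                    lastPosition = reader.GetInt64(0);
                }
            }

            if (lastPosition > 0)
            {
                await using var upsert = new NpgsqlCommand(
                    @"INSERT INTO projection_checkpoints (projection_name, last_processed_position)
                      VALUES (@name, @pos)
                      ON CONFLICT (projection_name)
                      DO UPDATE SET last_processed_position = @pos, updated_at = NOW()", conn);
                upsert.Parameters.AddWithValue("name", projectionName);
                upsert.Parameters.AddWithValue("pos", lastPosition);
                await upsert.ExecuteNonQueryAsync(ct);
            }
            else
            {
                await Task.Delay(TimeSpan.FromMilliseconds(500), ct); // idle backoff
            }
        }
    }
}
```

One caveat: a plain `BIGSERIAL` can commit out of order under concurrent writers, so a naive `> checkpoint` poll can skip rows that are still in flight. Production builds typically add a small lag window, advisory locking around appends, or a LISTEN/NOTIFY wake-up on top of this loop.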
## Orleans

Actor-based event sourcing with JournaledGrain.

### Strengths

- Actor model: Natural fit for aggregates
- Built-in clustering: Distributed by design
- Multiple storage providers: Azure, AWS, PostgreSQL, etc.
- Virtual actors: No lifecycle management

### Weaknesses

- All-in on Orleans: Big commitment
- Learning curve: Actor model concepts
- Limited querying: No ad-hoc queries across grains

### When to use

- You're building with actors
- High throughput, low latency required
- Distributed system from the start
### .NET example

```csharp
using Microsoft.Extensions.Hosting;
using Orleans;
using Orleans.EventSourcing;
using Orleans.Providers;

// Grain state
[GenerateSerializer]
public sealed class BankAccountState
{
    [Id(0)] public Guid AccountId { get; set; }
    [Id(1)] public string OwnerName { get; set; } = "";
    [Id(2)] public decimal Balance { get; set; }
    [Id(3)] public bool IsClosed { get; set; }

    public void Apply(AccountOpened @event)
    {
        AccountId = @event.AccountId;
        OwnerName = @event.OwnerName;
        Balance = 0;
        IsClosed = false;
    }

    public void Apply(MoneyDeposited @event) => Balance += @event.Amount;
    public void Apply(MoneyWithdrawn @event) => Balance -= @event.Amount;
    public void Apply(AccountClosed @event) => IsClosed = true;
}

// Events
[GenerateSerializer]
public sealed record AccountOpened(
    [property: Id(0)] Guid AccountId,
    [property: Id(1)] string OwnerName);

[GenerateSerializer]
public sealed record MoneyDeposited([property: Id(0)] decimal Amount);

[GenerateSerializer]
public sealed record MoneyWithdrawn([property: Id(0)] decimal Amount);

[GenerateSerializer]
public sealed record AccountClosed();

// Grain interface
public interface IBankAccountGrain : IGrainWithGuidKey
{
    Task Open(string ownerName);
    Task Deposit(decimal amount);
    Task Withdraw(decimal amount);
    Task Close();
    Task<decimal> GetBalance();
}

// Journaled grain implementation
[LogConsistencyProvider(ProviderName = "LogStorage")]
public sealed class BankAccountGrain : JournaledGrain<BankAccountState>, IBankAccountGrain
{
    public async Task Open(string ownerName)
    {
        if (State.AccountId != Guid.Empty)
            throw new InvalidOperationException("Account already opened");
        RaiseEvent(new AccountOpened(this.GetPrimaryKey(), ownerName));
        await ConfirmEvents();
    }

    public async Task Deposit(decimal amount)
    {
        EnsureOpen();
        if (amount <= 0)
            throw new ArgumentException("Amount must be positive");
        RaiseEvent(new MoneyDeposited(amount));
        await ConfirmEvents();
    }

    public async Task Withdraw(decimal amount)
    {
        EnsureOpen();
        if (amount <= 0)
            throw new ArgumentException("Amount must be positive");
        if (State.Balance < amount)
            throw new InvalidOperationException("Insufficient funds");
        RaiseEvent(new MoneyWithdrawn(amount));
        await ConfirmEvents();
    }

    public async Task Close()
    {
        EnsureOpen();
        RaiseEvent(new AccountClosed());
        await ConfirmEvents();
    }

    public Task<decimal> GetBalance()
    {
        EnsureOpen();
        return Task.FromResult(State.Balance);
    }

    private void EnsureOpen()
    {
        if (State.AccountId == Guid.Empty)
            throw new InvalidOperationException("Account not opened");
        if (State.IsClosed)
            throw new InvalidOperationException("Account is closed");
    }
}

// Silo configuration
var builder = Host.CreateDefaultBuilder(args)
    .UseOrleans(siloBuilder =>
    {
        siloBuilder
            .UseLocalhostClustering()
            .AddLogStorageBasedLogConsistencyProvider("LogStorage")
            .AddMemoryGrainStorage("LogStorage"); // Use Azure/PostgreSQL in production
    });
```
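Using the grain from calling code is a plain interface call; Orleans activates the grain on demand and replays its journal before the first call executes. Building on the host configured above:

```csharp
using Microsoft.Extensions.DependencyInjection;

// Build and start the host configured above (client co-hosted with the silo)
var host = builder.Build();
await host.StartAsync();

// Grains are addressed purely by identity; there is no explicit creation step
var grainFactory = host.Services.GetRequiredService<IGrainFactory>();
var account = grainFactory.GetGrain<IBankAccountGrain>(Guid.NewGuid());

await account.Open("Alice");
await account.Deposit(100m);
await account.Withdraw(30m);

Console.WriteLine(await account.GetBalance()); // 70 - state rebuilt from the journal
```

If the grain is later deactivated and reactivated (even on a different silo), the next `GetBalance()` still returns the same value, because the state is recomputed from the persisted event log.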
### Resources

- https://learn.microsoft.com/en-us/dotnet/orleans/grains/event-sourcing/
- https://github.com/dotnet/orleans
## Decision matrix
| Requirement | Best Choice |
|---|---|
| Purpose-built ES, self-hosted | KurrentDB |
| PostgreSQL shop, .NET | Marten |
| Azure, global distribution | Cosmos DB |
| Full control, existing SQL skills | DIY PostgreSQL |
| Actor model, distributed | Orleans |
| Serverless, event-driven | AWS EventBridge / Azure Event Grid |
## Feature comparison
| Feature | KurrentDB | Marten | Cosmos DB | PostgreSQL DIY | Orleans |
|---|---|---|---|---|---|
| Native streams | ✅ | ✅ | ❌ | ❌ | ✅ |
| Global ordering | ✅ | ❌ | ❌ | ⚠️ | ❌ |
| Subscriptions | ✅ | ✅ | ✅ (Change Feed) | ❌ (Build) | ❌ |
| Server projections | ✅ | ❌ | ❌ | ❌ | ❌ |
| Managed option | ⚠️ | ❌ | ✅ | ✅ (RDS, etc.) | ❌ |
| Multi-tenancy | ❌ | ✅ | ✅ | ⚠️ | ⚠️ |
| .NET integration | ✅ | ✅✅ | ✅ | ✅ | ✅✅ |
Legend: ✅ = Yes, ❌ = No, ⚠️ = Partial/Manual
## Next

Advanced conflict resolution strategies.
## Sources

- https://developers.eventstore.com/
- https://martendb.io/
- https://learn.microsoft.com/en-us/azure/cosmos-db/
- https://learn.microsoft.com/en-us/dotnet/orleans/