Snapshots (when replay is too slow)

Replaying 50 events is fine. Replaying 50,000 events for every command is not.

Snapshots are a performance optimization:

  • you still store events as source of truth
  • you occasionally store a materialized aggregate state at version N
  • on load, you start from the snapshot + replay events after N

When to use snapshots

Use snapshots when:

  • streams grow large (thousands+ events per aggregate)
  • load latency is too high
  • your aggregate state is expensive to compute

Don’t snapshot “just because”.

Snapshot design rules

  • Snapshot must be reconstructable (it’s a cache).
  • Snapshot is tied to a stream version.
  • You can delete all snapshots and still be correct.
  • If you encrypt events, consider encrypting snapshots too (same sensitivity).

Minimal snapshot types

public sealed record Snapshot(
    string StreamId,
    long StreamVersion,
    byte[] State
);

public interface ISnapshotStore
{
    Task<Snapshot?> GetLatest(string streamId, CancellationToken ct);
    Task Put(Snapshot snapshot, CancellationToken ct);
}
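For tests or docs demos, a minimal in-memory implementation might look like the sketch below. `InMemorySnapshotStore` is a hypothetical name, and it assumes the `Snapshot` record and `ISnapshotStore` interface above are in scope:

```csharp
using System.Collections.Concurrent;

// Hypothetical in-memory implementation, useful for tests.
// Keeps only the latest snapshot per stream; a real store would persist it.
public sealed class InMemorySnapshotStore : ISnapshotStore
{
    private readonly ConcurrentDictionary<string, Snapshot> _latest = new();

    public Task<Snapshot?> GetLatest(string streamId, CancellationToken ct)
        => Task.FromResult(_latest.TryGetValue(streamId, out var snap) ? snap : null);

    public Task Put(Snapshot snapshot, CancellationToken ct)
    {
        // Only replace an existing snapshot if the new one is at a higher version,
        // so a late-arriving stale snapshot can't win.
        _latest.AddOrUpdate(
            snapshot.StreamId,
            snapshot,
            (_, existing) => snapshot.StreamVersion > existing.StreamVersion ? snapshot : existing);
        return Task.CompletedTask;
    }
}
```

Because snapshots are just a cache, losing this dictionary on restart is harmless: loads fall back to full replay.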

Applying snapshots to our aggregate load

Conceptually:

  1. load snapshot at version N (optional)
  2. hydrate aggregate state from snapshot
  3. read events from version N (or N+1 depending on convention)
  4. apply remaining events

Example sketch:

public static async Task<BankAccount> LoadWithSnapshot(
    IEventStore store,
    ISnapshotStore snapshots,
    Guid accountId,
    CancellationToken ct)
{
    var streamId = $"bank-account-{accountId}";
    BankAccount acc;

    var snap = await snapshots.GetLatest(streamId, ct);
    var from = 0L;

    if (snap is not null)
    {
        // Deserialize snapshot state into acc (your choice of format)
        var state = System.Text.Json.JsonSerializer.Deserialize<BankAccountState>(snap.State)
            ?? throw new InvalidOperationException("Invalid snapshot.");

        acc = BankAccount.FromSnapshot(state, snap.StreamVersion);
        from = snap.StreamVersion; // adjust to your version convention (inclusive vs. exclusive reads)
    }
    else
    {
        acc = new BankAccount();
    }

    var events = new List<IDomainEvent>();
    await foreach (var se in store.ReadStream(streamId, from, ct))
        events.Add(DeserializeDomainEvent(se));

    acc.LoadFromHistory(events);
    return acc;
}

public sealed record BankAccountState(Guid Id, string OwnerName, decimal Balance, bool IsClosed);

public sealed partial class BankAccount
{
    public static BankAccount FromSnapshot(BankAccountState state, long version)
    {
        var acc = new BankAccount
        {
            Id = state.Id,
            OwnerName = state.OwnerName,
            Balance = state.Balance,
            IsClosed = state.IsClosed
        };

        // In real code, expose a protected setter or internal method on AggregateRoot.
        // This is a docs-only illustration of the concept.
        var versionProp = typeof(AggregateRoot).GetProperty("Version");
        versionProp?.SetValue(acc, version);
        return acc;
    }
}

private static IDomainEvent DeserializeDomainEvent(StoredEvent e)
{
    var type = e.EventType switch
    {
        nameof(AccountOpened) => typeof(AccountOpened),
        nameof(MoneyDeposited) => typeof(MoneyDeposited),
        nameof(MoneyWithdrawn) => typeof(MoneyWithdrawn),
        nameof(AccountClosed) => typeof(AccountClosed),
        _ => throw new NotSupportedException($"Unknown event type: {e.EventType}")
    };

    return (IDomainEvent)(System.Text.Json.JsonSerializer.Deserialize(e.Data, type)
        ?? throw new InvalidOperationException("Failed to deserialize event."));
}

The reflection trick above exists only to keep the docs compact; don't do that in real code. Instead, provide a protected setter for Version or a dedicated rehydration API.
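The rehydration API could look like the sketch below. This is a simplified, standalone version of the types above (only `Id` and `Balance` kept, plus a repeated `BankAccountState` so it compiles on its own); the base-class design is an assumption, not a prescribed shape:

```csharp
// Sketch: a base class whose Version has a protected setter, so derived
// aggregates can restore it during rehydration without reflection.
public abstract class AggregateRoot
{
    public long Version { get; protected set; }
}

// Simplified snapshot state, repeated here so the sketch stands alone.
public sealed record BankAccountState(Guid Id, decimal Balance);

public sealed class BankAccount : AggregateRoot
{
    public Guid Id { get; private set; }
    public decimal Balance { get; private set; }

    public static BankAccount FromSnapshot(BankAccountState state, long version)
    {
        // Inside the class, the private and protected setters are accessible,
        // so no reflection is needed.
        return new BankAccount { Id = state.Id, Balance = state.Balance, Version = version };
    }
}
```

External callers still can't mutate `Version`; only the aggregate's own factory can set it.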

Snapshot frequency

Common strategies:

  • every N events (e.g., every 100)
  • time-based (e.g., once per day per hot stream)
  • heuristic-based (only snapshot large streams)
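The every-N-events strategy reduces to a one-line policy check after each save. A minimal sketch (the `SnapshotPolicy` name and default interval of 100 are illustrative choices, not from any library):

```csharp
// Sketch: decide when to take a snapshot using an every-N-events policy.
public static class SnapshotPolicy
{
    // True when the stream version lands exactly on an interval boundary,
    // e.g. at versions 100, 200, 300 with the default interval.
    public static bool ShouldSnapshot(long version, long interval = 100)
        => version > 0 && version % interval == 0;
}
```

After appending events, check `SnapshotPolicy.ShouldSnapshot(acc.Version)` and, if it returns true, serialize the aggregate state and call `Put` on your `ISnapshotStore`. Taking the snapshot outside the command's critical path (e.g. fire-and-forget or in a background worker) keeps writes fast; since snapshots are a cache, a missed one costs nothing but replay time.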

Next

Event sourcing becomes “real” when it crosses service boundaries. That’s where the outbox pattern helps.

Next: Integration: outbox + sagas