
Event design + schema (don't skip)

Your biggest "future tax" in event sourcing is event evolution.

Events are contracts that you'll replay in 6–24 months when:

  • you rebuild projections
  • you add new read models
  • you fix a bug and need to replay history

Rule 1: Events are facts, not requests

Bad (request-ish): WithdrawMoney { amount: 50 }
Good (fact-ish): MoneyWithdrawn { amount: 50 }

Why: requests can be rejected; facts can't.
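To make the distinction concrete, here's a minimal sketch (the BankAccount class, its Handle method, and the opening-balance constructor are illustrative assumptions, not from the text): the command handler may reject the request, while the event is applied unconditionally.

```csharp
using System;

public sealed record WithdrawMoney(decimal Amount);   // a request: may be rejected
public sealed record MoneyWithdrawn(decimal Amount);  // a fact: already happened

public sealed class BankAccount
{
    public decimal Balance { get; private set; }
    public BankAccount(decimal openingBalance) => Balance = openingBalance;

    // Commands are validated; the request can fail.
    public MoneyWithdrawn Handle(WithdrawMoney cmd)
    {
        if (cmd.Amount <= 0 || cmd.Amount > Balance)
            throw new InvalidOperationException("Withdrawal rejected.");
        var evt = new MoneyWithdrawn(cmd.Amount);
        Apply(evt);
        return evt;
    }

    // Events are applied unconditionally; a recorded fact cannot be rejected.
    public void Apply(MoneyWithdrawn evt) => Balance -= evt.Amount;
}
```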

Rule 2: Put identity in the stream, not every event

Stream ID: bank-account-{accountId}
Events in that stream don't need AccountId repeated in every payload.

Exception: when you process events outside their stream context (e.g. fan-out projections that merge many streams).
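A sketch of the convention (the StreamName helper is an assumption): identity lives in the stream name, so payloads stay lean, and a fan-out consumer recovers it from the stream ID instead.

```csharp
using System;

public static class StreamName
{
    public static string ForAccount(Guid accountId) => $"bank-account-{accountId}";

    // A fan-out projection reading from many streams can recover identity
    // from the stream name rather than requiring it in every payload.
    public static Guid AccountIdFrom(string streamId)
        => Guid.Parse(streamId["bank-account-".Length..]);
}

// No AccountId in the payload -- the stream carries it.
public sealed record MoneyWithdrawn(decimal Amount);
```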

Rule 3: Separate payload vs metadata (envelope)

public sealed record EventEnvelope(
    Guid EventId,
    string EventType,
    int SchemaVersion,
    DateTimeOffset OccurredAt,
    Guid? CorrelationId,
    Guid? CausationId,
    string StreamId,
    long StreamVersion,
    IDomainEvent Payload
);
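A minimal usage sketch, assuming IDomainEvent is a marker interface and MoneyWithdrawn is one of your domain events (the envelope record is repeated so the snippet compiles standalone):

```csharp
using System;

var streamId = $"bank-account-{Guid.NewGuid()}";
var envelope = new EventEnvelope(
    EventId: Guid.NewGuid(),
    EventType: nameof(MoneyWithdrawn),
    SchemaVersion: 1,
    OccurredAt: DateTimeOffset.UtcNow,
    CorrelationId: Guid.NewGuid(),   // ties together one logical operation
    CausationId: null,               // the command/event that caused this one
    StreamId: streamId,
    StreamVersion: 0,
    Payload: new MoneyWithdrawn(50m));

Console.WriteLine(envelope.EventType); // prints "MoneyWithdrawn"

public interface IDomainEvent { }
public sealed record MoneyWithdrawn(decimal Amount) : IDomainEvent;

// Repeated from the rule above so this snippet compiles on its own.
public sealed record EventEnvelope(
    Guid EventId, string EventType, int SchemaVersion,
    DateTimeOffset OccurredAt, Guid? CorrelationId, Guid? CausationId,
    string StreamId, long StreamVersion, IDomainEvent Payload);
```

Note the payload stays pure domain data; everything operational (correlation, causation, position) lives in the envelope.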

Rule 4: Version events intentionally

Option A: New event type (preferred)

AccountOpenedV1(AccountId, OwnerName)
AccountOpenedV2(AccountId, OwnerName, Email)
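A sketch of how an aggregate copes with both versions on replay (the Account class and the null default for Email are assumptions): each historical version keeps its own Apply overload, and the field V1 lacks gets an explicit, documented default.

```csharp
using System;

public sealed record AccountOpenedV1(Guid AccountId, string OwnerName);
public sealed record AccountOpenedV2(Guid AccountId, string OwnerName, string Email);

public sealed class Account
{
    public string OwnerName { get; private set; } = "";
    public string? Email { get; private set; }

    // V1 events predate Email; the default is deliberate and documented.
    public void Apply(AccountOpenedV1 e) { OwnerName = e.OwnerName; Email = null; }
    public void Apply(AccountOpenedV2 e) { OwnerName = e.OwnerName; Email = e.Email; }
}
```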

Option B: Upcast old events on read

public interface IEventUpcaster
{
    bool CanUpcast(string eventType, int schemaVersion);
    (string eventType, int schemaVersion, byte[] json) Upcast(...);
}
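One possible upcaster for the AccountOpened V1→V2 change; since the Upcast parameter list is elided above, this sketch assumes it mirrors the returned tuple (eventType, schemaVersion, json):

```csharp
using System.Text;
using System.Text.Json.Nodes;

public sealed class AccountOpenedV1ToV2Upcaster
{
    public bool CanUpcast(string eventType, int schemaVersion)
        => eventType == "AccountOpened" && schemaVersion == 1;

    public (string eventType, int schemaVersion, byte[] json) Upcast(
        string eventType, int schemaVersion, byte[] json)
    {
        // V2 added Email; give replayed V1 events an explicit default.
        var node = JsonNode.Parse(json)!.AsObject();
        node["Email"] = null;
        return (eventType, 2, Encoding.UTF8.GetBytes(node.ToJsonString()));
    }
}
```

Upcasting happens on read, so the stored history is never rewritten; the rest of the system only ever sees the latest schema.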

Rule 5: Make events small but sufficient

An event should carry enough data to make sense on its own when replayed months later, without forcing consumers to look up other state; anything beyond that is payload you'll have to version forever.
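An illustration of the trade-off (the event names here are hypothetical): "sufficient" means a consumer can interpret the fact without querying current state.

```csharp
// Too small -- a projection that needs the previous address must
// query some other store, which may have moved on by replay time:
public sealed record EmailChangedTooSmall(string NewEmail);

// Small but sufficient -- the fact is self-describing:
public sealed record EmailChanged(string OldEmail, string NewEmail);
```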

Rule 6: Don't leak infrastructure into domain events

Avoid: Kafka partition keys, SQL row ids, HTTP status codes.

That belongs in integration events or metadata.
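A sketch of the split (the type names are assumptions): the domain event stays transport-free, while infrastructure concerns ride on an integration-event wrapper or in envelope metadata.

```csharp
using System;

// Domain event: pure business fact, no transport details.
public sealed record OrderShipped(Guid OrderId, DateTimeOffset ShippedAt);

// Integration event: what crosses the service boundary, carrying
// infrastructure concerns alongside (not inside) the domain fact.
public sealed record OrderShippedIntegrationEvent(
    OrderShipped Payload,
    string PartitionKey,   // Kafka concern, kept out of the domain event
    string TraceId);       // observability concern, likewise metadata
```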