Andrew Cairns
@acairns.co.uk
2.1K followers 2.1K following 300 posts
Using metaphors and analogies to explain Software Engineering in fun ways: https://youtube.com/@metaphoricallyspeaking Staff Software Engineer. Passionate about DDD, CQRS, Event Sourcing and Distributed Systems. Kayaking, too.
acairns.co.uk
I like the idea, but I've never practiced it in production.

I have a local app that displays some metrics and is aggregate-less, but its complexity is low. I don't know how I'd enforce a constraint required by multiple commands without duplicating its source of truth.

wdyd?
acairns.co.uk
Common Event Sourcing mistake: multiple aggregates in one stream.

One stream = One aggregate

Customer[123] stream → CustomerRegistered, EmailChanged, AddressUpdated
Order[456] stream → OrderPlaced, ItemAdded, OrderShipped

Different streams. Different things.
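A minimal sketch of the idea, assuming an illustrative in-memory store (the `EventStore` class and stream IDs like "Customer-123" are hypothetical, not any specific product's API):

```python
from collections import defaultdict

# One stream per aggregate: the stream ID combines the aggregate
# type and its identity, e.g. "Customer-123" or "Order-456".
class EventStore:
    def __init__(self):
        self._streams = defaultdict(list)

    def append(self, stream_id, event):
        self._streams[stream_id].append(event)

    def read(self, stream_id):
        return list(self._streams[stream_id])

store = EventStore()
store.append("Customer-123", {"type": "CustomerRegistered"})
store.append("Customer-123", {"type": "EmailChanged"})
store.append("Order-456", {"type": "OrderPlaced"})

# Each stream holds only its own aggregate's history.
```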
Reposted by Andrew Cairns
duncanjonesmerrion.bsky.social
You can have the event chain organised as a workflow (based on an event stream) identified by a unique identifier, pass that identifier to the individual processes as a correlation ID, and have them write back their doings to that event stream.

Thus your workflow is its own log.
acairns.co.uk
Correlation IDs allow you to:
- Trace an entire business flow across services
- Subscribe to "everything about order ABC123"
- Debug distributed systems without questioning your life choices

It is a massive debuggability improvement.
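A toy sketch of the pattern (the `handle_command` function and event shapes are illustrative, not a real framework): every event copies the correlation ID from the command that caused it, so the whole flow can be filtered in one query.

```python
# Hypothetical command handler: each produced event inherits the
# correlation ID from the command that caused it.
def handle_command(command, event_types):
    return [{"type": t, "correlation_id": command["correlation_id"]}
            for t in event_types]

command = {"type": "PlaceOrder", "correlation_id": "ABC123"}
events = handle_command(
    command, ["OrderAccepted", "OrderPaid", "OrderShipped"])

# "Everything about order ABC123" is now a simple filter:
flow = [e for e in events if e["correlation_id"] == "ABC123"]
```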
acairns.co.uk
I'm an advocate for giving every event a Correlation ID copied from the command that caused it.

OrderAccepted → OrderPaid → OrderShipped
All share CorrelationId: "ABC123"

Now you can... 👇
acairns.co.uk
I know these sound like negatives, but in many ways, these are the positives!

I’ll expand more, soon :)
acairns.co.uk
It can be designed FOR their queries and everything can be denormalised.

It's read-only and can be optimised for the specific use case.

This way, you can give them EXACTLY what they need!
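A small sketch of what that projection could look like, using SQLite as a stand-in reporting database (the table and event shapes are made up for illustration):

```python
import sqlite3

# Denormalised, read-only reporting table the data team can query
# with plain SQL - rebuilt entirely from the event log.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE order_report "
    "(order_id TEXT PRIMARY KEY, status TEXT, item_count INTEGER)")

def project(event):
    if event["type"] == "OrderPlaced":
        db.execute("INSERT INTO order_report VALUES (?, 'placed', 0)",
                   (event["order_id"],))
    elif event["type"] == "ItemAdded":
        db.execute("UPDATE order_report SET item_count = item_count + 1 "
                   "WHERE order_id = ?", (event["order_id"],))
    elif event["type"] == "OrderShipped":
        db.execute("UPDATE order_report SET status = 'shipped' "
                   "WHERE order_id = ?", (event["order_id"],))

for e in [{"type": "OrderPlaced", "order_id": "456"},
          {"type": "ItemAdded", "order_id": "456"},
          {"type": "OrderShipped", "order_id": "456"}]:
    project(e)

row = db.execute("SELECT status, item_count FROM order_report "
                 "WHERE order_id = '456'").fetchone()
```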
acairns.co.uk
Your data team wants to build reports and dashboards but your events live in an append-only log...

They want a relational schema.

Solution: project events into a dedicated reporting database!

Why? 👇
acairns.co.uk
The event store checks: "Is current version still 67?"
- Yes: Write succeeds
- No: Reject with a `ConcurrencyException`

Retrying the command here is quite common.

The check happens atomically during the write with no separate locking step needed.
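The load/process/write cycle above can be sketched like this (a toy `Stream` class, not a real event store; real stores do the version check atomically inside the write):

```python
class ConcurrencyException(Exception):
    pass

# Optimistic concurrency sketch: the append is rejected unless the
# stream is still at the version the caller loaded.
class Stream:
    def __init__(self):
        self.events = []

    @property
    def version(self):
        return len(self.events)

    def append(self, events, expected_version):
        if self.version != expected_version:
            raise ConcurrencyException(
                f"expected {expected_version}, stream is at {self.version}")
        self.events.extend(events)

stream = Stream()
stream.append(["OrderPlaced"] * 67, expected_version=0)  # now at version 67

stream.append(["ItemAdded"], expected_version=67)        # write succeeds

conflict = False
try:
    stream.append(["ItemRemoved"], expected_version=67)  # stale: now at 68
except ConcurrencyException:
    conflict = True  # typical reaction: reload the stream and retry
```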
acairns.co.uk
How do you handle two processes acting on the same thing in Event Sourcing?

Optimistic concurrency with expected version:

1. Load events from stream (version 67)
2. Process command, generate new events
3. Write with expectedVersion=67

And then... 👇
acairns.co.uk
Really does. Currently advocating for a shift towards workflows, too! 🤞
acairns.co.uk
"First Attempt In Learning"

That's incredible 👏
acairns.co.uk
Ah sorry - this particular service isn’t part of our banking context. Thank goodness! 😅
acairns.co.uk
I’m working on this just now! Original state-based service wasn’t designed to be bi-temporal, but has upcoming requirements.

Have extracted the service, now figuring out how to migrate to event sourcing.

Has been fun 😅
acairns.co.uk
I totally get it. Working in fintech for a while now - no idea how things could function without ledgers.
acairns.co.uk
There are many ways to be GDPR compliant with Event Sourcing. Crypto-shredding, for example. And obfuscation goes a long way when you need to remove PII but retain things like metrics.
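A toy sketch of crypto-shredding (XOR stands in for real encryption here; in practice you'd use a proper cipher): PII in events is encrypted with a per-customer key, and deleting the key makes the PII unrecoverable while the event itself is retained.

```python
import secrets

# XOR is a stand-in for real encryption - illustration only.
def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# One key per customer, stored separately from the event log.
keys = {"customer-123": secrets.token_bytes(16)}

event = {
    "type": "CustomerRegistered",
    "email": xor(b"alice@example.com", keys["customer-123"]),
}

# GDPR erasure request: shred the key, keep the event for metrics.
del keys["customer-123"]

# The ciphertext remains, but without the key it cannot be read.
```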
acairns.co.uk
Event Sourcing requires a mental model shift.

"But I need to DELETE something!"

Here's the reality: you don't delete events.

You record a NEW event that something was deleted/cancelled/revoked.

OrderPlaced → OrderCancelled

Both facts are true. History doesn't have an undo button.
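A tiny sketch of the mental model: current state is a fold over the full history, so "deleting" is just another recorded fact, never an erasure (the `order_status` function is illustrative):

```python
# State is derived by replaying every event, in order.
def order_status(events):
    status = None
    for event in events:
        if event == "OrderPlaced":
            status = "placed"
        elif event == "OrderCancelled":
            status = "cancelled"
    return status

history = ["OrderPlaced", "OrderCancelled"]

# The cancellation wins for current state...
current = order_status(history)

# ...but the original fact is still on record.
placed_still_recorded = "OrderPlaced" in history
```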
acairns.co.uk
Used to do something similar.

Must admit - not quite there with my current team and project… but we’ll get there!
acairns.co.uk
Event Sourcing tests double as documentation.

Given: [OrderPlaced, ItemAdded]
When: RemoveItem
Then: [ItemRemoved]

This reads like a spec. It IS a spec.
Your tests become the living documentation of your business rules.
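A sketch of that test shape, assuming a hypothetical pure decision function `decide(history, command) -> new events`:

```python
# Hypothetical pure command handler: past events in, new events out.
def decide(history, command):
    items = sum(1 for e in history if e == "ItemAdded")
    if command == "RemoveItem" and items > 0:
        return ["ItemRemoved"]
    return []

def given_when_then(given, when, then):
    assert decide(given, when) == then

# Reads like the spec it is:
given_when_then(
    given=["OrderPlaced", "ItemAdded"],
    when="RemoveItem",
    then=["ItemRemoved"],
)
```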
acairns.co.uk
A Projection is something that converts your event stream into state for reading.

Kinda like a database view, but better:
- You can have dozens of them
- Optimised for specific queries
- Rebuild anytime from events
- Different databases even
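A sketch of "dozens of them": two independent projections rebuilt from the same stream by replaying it (event shapes are made up for illustration):

```python
events = [
    {"type": "OrderPlaced", "order_id": "456"},
    {"type": "ItemAdded", "order_id": "456", "price": 10},
    {"type": "ItemAdded", "order_id": "456", "price": 5},
]

# Projection 1: total value per order.
def order_totals(events):
    totals = {}
    for e in events:
        if e["type"] == "ItemAdded":
            totals[e["order_id"]] = totals.get(e["order_id"], 0) + e["price"]
    return totals

# Projection 2: item count per order - same events, different shape.
def item_counts(events):
    counts = {}
    for e in events:
        if e["type"] == "ItemAdded":
            counts[e["order_id"]] = counts.get(e["order_id"], 0) + 1
    return counts

# Rebuild anytime by replaying from the start:
totals = order_totals(events)
counts = item_counts(events)
```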
Reposted by Andrew Cairns
ted.dev
Just gave a talk about this at #dev2next and it’s incredibly powerful yet not very complicated once you get in the right mindset.

ted.dev/talks

#Java #EventSourcing #cqrs
acairns.co.uk
Absolutely!

I regularly write .json files into S3 buckets and trigger CDN invalidations. You can achieve super high availability with very little complexity.
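A minimal sketch of the read-model half of that, under stated assumptions (the event shapes and `project_order` function are illustrative; the S3 upload and CDN invalidation steps are left as comments):

```python
import json

# Project events into a static JSON read model. In production this
# payload would be uploaded to an S3 bucket and the CDN path
# invalidated (e.g. via boto3); here we just build the payload.
def project_order(events):
    state = {"items": []}
    for e in events:
        if e["type"] == "OrderPlaced":
            state["order_id"] = e["order_id"]
        elif e["type"] == "ItemAdded":
            state["items"].append(e["item"])
    return state

payload = json.dumps(project_order([
    {"type": "OrderPlaced", "order_id": "456"},
    {"type": "ItemAdded", "order_id": "456", "item": "book"},
]))
```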