Event Store in AWS DynamoDB

Update 05/27/2019: "How to build an event store masterclass" is now available. Learn how to build an event store using C# .NET Core, DynamoDB, MySQL for read models, and more.

In the past few weeks, I have been working on creating an event store in AWS DynamoDB and AWS S3. The event store supports the domain-driven design (DDD) concepts in that system. Specifically, one of my systems uses CQRS and Event Sourcing (which is awesome, btw).

The idea for the event store started when I wanted to create my own event store for several reasons. I had played with Greg Young's EventStore, available at https://eventstore.org/, but I encountered errors when I tested it, and I expected 100% functionality without issues. Besides, I did not want to babysit another persistence mechanism. I just do not have time for that.

I have created an event store before that was based entirely on Redis. It worked great and was super-fast. I used the https://redislabs.com/ service to get a zero-maintenance, fully clustered Redis solution, and it has been working for some time now. The only problem was that it could have inconsistencies when many application nodes use the Redis event store. Long story short, this has to do with complex timing, concurrency violations, etc.

So, I thought there must be a better way. Ideally, I want zero administration headaches, but I also want to take advantage of the consistency capabilities of the underlying storage mechanism. I want the event store to be a service to the application itself, running within the same process as the application, so there is no need to babysit a separate event store cluster. I want the event store to conceptually live side by side with the application.

If I can just take advantage of the consistency of the persistence mechanism, the application can then handle error conditions accordingly, based on its use cases.
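The consistency lever DynamoDB offers for this is the conditional write: an append to an aggregate record only succeeds if the aggregate is still at the version the writer last read, and a concurrent writer gets a condition failure instead of silently clobbering data. The post doesn't show the exact mechanism, so here is a minimal, hypothetical sketch (table name, attribute names, and the helper function are my own) of how an appender could build such a request in the shape DynamoDB's `put_item` API expects; no AWS call is made here:

```python
# Hypothetical sketch of optimistic concurrency for an event-store append.
# Attribute names (AGGREGATE_ID, AGGREGATE_VERSION, CHANGESET_ID) mirror the
# schema discussed later in the comments; this only builds the request
# parameters, it does not talk to AWS.

def build_append_request(aggregate_id, expected_version, changeset_id):
    """Build the parameters for a conditional put_item call."""
    return {
        "TableName": "aggregates",
        "Item": {
            "AGGREGATE_ID": {"S": aggregate_id},
            "AGGREGATE_VERSION": {"N": str(expected_version + 1)},
            "CHANGESET_ID": {"S": changeset_id},
        },
        # Succeed only if the aggregate is new, or nobody bumped the
        # version since we read it; otherwise DynamoDB rejects the write.
        "ConditionExpression":
            "attribute_not_exists(AGGREGATE_ID) OR AGGREGATE_VERSION = :v",
        "ExpressionAttributeValues": {":v": {"N": str(expected_version)}},
    }

req = build_append_request("user-124304", expected_version=3, changeset_id="cs-42")
print(req["Item"]["AGGREGATE_VERSION"])  # {'N': '4'}
```

On a `ConditionalCheckFailedException`, the application can re-read the aggregate and retry or report a conflict, which is exactly the "handle error conditions based on the use case" idea above.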

After some experimenting and doing quite a few load tests using http://loader.io, I can now say that the Event Store in DynamoDB has been born. My first 10,000 users / min load test passed with flying colors and reached API response times of less than 10 ms at sustained rates of 170 clients / second. All this was running on a single cheap t2.medium instance. DynamoDB automatically scaled up its throughput to handle this kind of load. This is one of those cool features, btw.

The event store I created in DynamoDB uses two concepts: Aggregates and Change Sets. Aggregates are the aggregates in your DDD system, but stored with only a handful of metadata attributes. Change sets are one or more domain events that are created when your DDD system processes a command.
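To make the two concepts concrete, here is a small, hypothetical Python sketch (the class and field names are mine, not from the post): the aggregate record carries only metadata, while a change set bundles the domain events produced by handling one command:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DomainEvent:
    name: str      # e.g. "user-updated"
    payload: dict

@dataclass
class ChangeSet:
    """One or more domain events produced by handling a single command."""
    changeset_id: str
    aggregate_id: str
    events: List[DomainEvent] = field(default_factory=list)

@dataclass
class AggregateRecord:
    """Only a handful of metadata; the events live in the change sets."""
    aggregate_id: str
    aggregate_type: str
    version: int
    latest_changeset_id: str

# Handling one command yields one change set, possibly with several events.
cs = ChangeSet("cs-1", "user-124304",
               [DomainEvent("user-created", {"userId": "124304"}),
                DomainEvent("user-updated", {"name": "Robert Larsson"})])
record = AggregateRecord("user-124304", "user",
                         version=1, latest_changeset_id="cs-1")
```

The split matters for storage: the small aggregate record is what gets written conditionally, while the (potentially large) change sets can be written without contention.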

I know I can reach much higher numbers by adding additional nodes and doing further tweaking. I would, of course, have to do much more load testing and tuning, but so far this has been a tremendous success, and I'm super excited to take advantage of DynamoDB.

Anyways, I wanted to share this information. Maybe in the future I can go into much more detail. Time is the only problem I have, really.

3 thoughts on "Event Store in AWS DynamoDB"

  1. My team and I are also working on an event-sourced system using DynamoDB. So far it works great, but we only have the writing part in place and are now going to start building the reading part. This is the first time I'm building an event store. We use the CQRS and ES patterns. Can you show me how you structure your data in DynamoDB? Maybe I've left something important out.

    streamId and sortKey (a timestamp) together form the composite primary key (partition key + sort key).
    streamId is a combination of aggregate and userId in this case. Maybe streamId should be called aggregateId?

    "streamId": "user-124304",
    "sortKey": "1537634094123",
    "aggregate": "user",
    "event": "user-updated",
    "timestamp": "2018-09-22T16:34:54.123Z",
    "userId": "124304",
    "payload": {
      "schema": "user/updated/1-0-0",
    • Hi Robert,

      There are two parts to the event store in my system. The first part is a DynamoDB table for the aggregate streams, plus another DynamoDB table for the change set lookup information. The second part is the change sets themselves, each of which is a list of domain events that have occurred to an aggregate.

      The aggregate table consists of:

      AGGREGATE_ID String Hash

      No need for secondary indexes, since the AGGREGATE_ID is used to look up the aggregate. The change set table consists of:

      CHANGESET_ID String Hash

      The above are the minimum when you create the tables. When the first aggregate is inserted, the following attributes are used in the aggregate table:

      CHANGESET_ID String

      AGGREGATE_VERSION comes from the change set, and AGGREGATE_TYPE is the class name / qualifier, included just for informational purposes but not needed by the event store itself.
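      Putting the two tables together, loading an aggregate is a hash-key lookup in the aggregate table followed by fetching its change sets from the second table. A hedged in-memory sketch (plain dicts stand in for the DynamoDB tables; the backward PREVIOUS_CHANGESET_ID link is my own assumption about how change sets could be chained, the reply above doesn't specify it):

```python
# In-memory stand-ins for the two DynamoDB tables described above.
aggregates = {
    "user-124304": {"AGGREGATE_ID": "user-124304",
                    "AGGREGATE_TYPE": "user",
                    "AGGREGATE_VERSION": 2,
                    "CHANGESET_ID": "cs-2"},
}
changesets = {
    "cs-1": {"CHANGESET_ID": "cs-1", "PREVIOUS_CHANGESET_ID": None,
             "EVENTS": ["user-created"]},
    "cs-2": {"CHANGESET_ID": "cs-2", "PREVIOUS_CHANGESET_ID": "cs-1",
             "EVENTS": ["user-updated"]},
}

def load_events(aggregate_id):
    """Look up the aggregate, then walk the change set chain oldest-first."""
    record = aggregates[aggregate_id]      # hash-key lookup, no index needed
    chain, cs_id = [], record["CHANGESET_ID"]
    while cs_id is not None:
        cs = changesets[cs_id]             # hash-key lookup in second table
        chain.append(cs)
        cs_id = cs["PREVIOUS_CHANGESET_ID"]
    chain.reverse()
    return [e for cs in chain for e in cs["EVENTS"]]

print(load_events("user-124304"))  # ['user-created', 'user-updated']
```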

      Hope this helps!

      • Ok, thanks for the help! We're having a problem with our projections and how to make sure events come in the right order. As we might want to project information from different aggregates into one object, we need to make sure that, for instance, the "user-created" event has occurred before "user-clap". We want to store these two events in our read model like this:
        userId: 123,
        name: "Robert Larsson",
        claps: 127

        But if the user is not created yet and we try to insert claps into the user object on the read side (CQRS), it will be empty; or, if the events occur at pretty much the same time, there will be a race condition. How are people solving these problems?
