Serverless Event Sourcing & CQRS (Part 1)

An example of event sourcing and CQRS in serverless, with code examples in TypeScript and the AWS CDK. In Part 1 we cover Event Sourcing.

Serverless Advocate
18 min read · Apr 24, 2024

Preface (Part 1)

✔️ We talk through what event sourcing is.
✔️ We move on to discussing CQRS as a related pattern.
✔️ We talk through an architecture example of event sourcing.
✔️ We talk through the associated code examples in TypeScript.

Part 2 of the series can be found here:

Introduction 👋🏽

In this article, we are going to cover two related architectural patterns that most people have heard of, and that many assume are the same thing: event sourcing and CQRS (Command Query Responsibility Segregation).

The reason for writing the article and building out the code repo is that I have never seen a full end-to-end example built and shared with the wider community.

To make the content easier to consume we will talk through the architecture and requirements of a fictitious company called ‘Gilmore HR’ which builds employee-related cloud services (think booking your vacation/leave).

Our example is going to focus on an employee requesting leave (vacation) and this having an impact on their overall balance for the year. The employee API allows users to:

  • Create a new employee.
  • Update an existing employee.
  • Request leave.
  • Cancel leave.
  • Delete an employee.

The full architecture for part 1 and part 2 of the series can be seen here:

What are we building with full code examples?

In Part 1 we are going to focus on the event-sourcing side of the architecture as shown below:

The code for the article can be found here:

The code for part 2 of the article can be found here:

https://www.serverlessadvocate.com/patterns

💡 Note: This is not a production version of the code, and I have used a functional style over an object-oriented one to make it easier to discuss in the article. I’m also not an authoritative expert on this matter, but wanted to share my own implementation.

What is Event Sourcing?

“We may not want to just know the current state, but what the events were to get there”

Let’s start with what event sourcing is. This software design pattern captures changes to an application’s state as immutable, historical events in a sequential event log. These events act as the ultimate reference point and can be replayed to rebuild the application’s current state at any given moment. Projections are then employed to derive the system’s current state from the event log for querying purposes (i.e. we use the history of events to project the current state).

“Instead of storing just the current state of the data in a domain, use an append-only store to record the full series of actions taken on that data. The store acts as the system of record and can be used to materialise the domain objects.” — https://learn.microsoft.com/en-us/azure/architecture/patterns/event-sourcing

This is very different from how non-event-sourced systems (typically CRUD) work, where we don’t store the events as a log of historic activity, but instead continuously update the state of individual records in the database.
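To make this concrete, here is a minimal, hypothetical sketch (not code from the repo we cover later) of an append-only event log being replayed to project an employee’s leave balance:

// a minimal, illustrative sketch: an append-only event log and a replay
// (fold) that projects the current state from the history of events
type LeaveEvent =
  | { type: 'EMPLOYEE_CREATED'; amount: number }
  | { type: 'LEAVE_REQUESTED'; amount: number }
  | { type: 'LEAVE_CANCELLED'; amount: number };

const eventLog: LeaveEvent[] = [
  { type: 'EMPLOYEE_CREATED', amount: 25 },
  { type: 'LEAVE_REQUESTED', amount: 5 },
  { type: 'LEAVE_CANCELLED', amount: 2 },
];

// replaying the log projects the current balance (25 - 5 + 2 = 22), rather
// than reading a single mutable 'balance' column as a CRUD system would
const balance = eventLog.reduce((total, event) => {
  switch (event.type) {
    case 'EMPLOYEE_CREATED':
      return event.amount; // starting entitlement
    case 'LEAVE_REQUESTED':
      return total - event.amount;
    case 'LEAVE_CANCELLED':
      return total + event.amount;
  }
}, 0);

console.log(balance); // 22

The log is the system of record; the balance is always derived, never stored as a mutable field.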

What problems does it solve?

Event sourcing typically solves the following issues compared to CRUD-based solutions:

✔️ Auditing: In event sourcing, each change to the system’s state is represented as a distinct event, providing a comprehensive history of operations (audit), which is often lacking in CRUD systems.

✔️ Replay & Time Travel: Event sourcing captures changes to the system’s state as immutable events in a sequential log, allowing for efficient replay-ability and audit-ability, unlike traditional CRUD systems (this also includes time travel when debugging issues).

✔️ Performance: Unlike CRUD systems, event sourcing separates write operations from read operations, facilitating scalability and performance optimisation by enabling tailored querying mechanisms based on the event log.

✔️ Atomic: Event sourcing can help prevent conflicts caused by concurrent updates to the same records, because it avoids the need to update objects directly in the data store.

Now let’s cover CQRS as a pattern in the next section.

What is CQRS?

CQRS, which divides the responsibilities for handling commands and queries, naturally aligns with event sourcing, giving us a clear separation of concerns for efficiently deriving queryable state from the event history. In this regard, we can create eventually consistent projections of our command (event) history, which allows us to create queryable read-only views specific to the needs of consumers.

In conventional architectures, a singular data model handles both querying and updating of a database, which suits straightforward CRUD operations effectively. However, for intricate applications, this method can become cumbersome and impractical; especially as writing can contain complex business logic, and querying on the other hand may need specialised views of the data.

Ultimately, read and write workloads are often opposing, with very different performance and scale requirements.

“CQRS separates reads and writes into different models, using commands to update data, and queries to read data.” — https://learn.microsoft.com/en-us/azure/architecture/patterns/cqrs

What problems does it solve?

CQRS typically solves the following issues:

✔️ Optimised Operations: CQRS typically resolves the issues where there is often a mismatch between the read and write representations of the data.

✔️ Performance: CQRS can eliminate the performance issues seen when using one data store for both querying and writing of data.

👇 Before we go any further — please connect with me on LinkedIn for future blog posts and Serverless news https://www.linkedin.com/in/lee-james-gilmore/

Why use Event Sourcing + CQRS together? 💜

Why do event sourcing and CQRS naturally fit together like toast and butter? Because a sequential event log through event sourcing on its own doesn’t make for great queries, yet combining it with CQRS to build one or more materialised views for efficient queries works well!

Before we get started specifically on event sourcing, let’s first cover some key terms which will be mentioned below:

✔️ Aggregates — aggregates are a consistency boundary around related entities. They are generated by replaying a single event stream in order, and during this operation the current (valid) state of the aggregate is calculated so that it can be used to handle a command. In our example this will be the employee events in order, starting with an employee being created, followed by subsequent events such as requesting and cancelling leave.

✔️ Command — a command is an intent which generates a new event which we add to the event stream and store in our database. An example could be ‘REQUEST_LEAVE’ which ultimately creates a new event called ‘LEAVE_REQUESTED’. In our example, our aggregate receives a command of ‘REQUEST_LEAVE’, we then read all events to generate the current view of the employee, and at that point we decide whether we accept the command or not (i.e. we apply business rules in our aggregate, or invariants as we call them). If we do, we generate the ‘LEAVE_REQUESTED’ event.

✔️ Invariants — an invariant is a business rule within our aggregate which always needs to be true to ensure that our aggregate is in a valid state. In our example, we don’t want to allow an employee to request leave when their balance is 0.
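To tie these three terms together before we look at the repo’s code, here is an illustrative sketch only (the repo’s real type definitions differ): a command expresses intent, the aggregate checks its invariants against the current state, and a new event is produced only if the command is valid.

// illustrative types and handler - not the repo's actual definitions
type RequestLeaveCommand = { type: 'REQUEST_LEAVE'; id: string; amount: number };
type LeaveRequestedEvent = {
  type: 'LEAVE_REQUESTED';
  id: string;
  amount: number;
  version: number;
};

function handleRequestLeave(
  currentBalance: number, // derived by replaying the aggregate's event stream
  currentVersion: number,
  command: RequestLeaveCommand
): LeaveRequestedEvent {
  // invariant: an employee cannot request more leave than they have remaining
  if (currentBalance - command.amount < 0) {
    throw new Error('Employee does not have enough remaining leave');
  }

  // the accepted command produces a new event to append to the stream
  return {
    type: 'LEAVE_REQUESTED',
    id: command.id,
    amount: command.amount,
    version: currentVersion + 1,
  };
}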

OK, now let’s talk through some key code!

Talking through some key code 👨‍💻

OK, so let’s talk through some of the key code starting with creating an employee!

Create Employee

A person adding a new employee to the Gilmore HR system

We can see in the code below that we have a Lambda primary adapter for creating a new employee (we won’t look at the AWS CDK code but you can see it in the stateless.ts stack file):

import { MetricUnit, Metrics } from '@aws-lambda-powertools/metrics';
import { errorHandler, logger, schemaValidator } from '@shared';
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

import { injectLambdaContext } from '@aws-lambda-powertools/logger/middleware';
import { logMetrics } from '@aws-lambda-powertools/metrics/middleware';
import { Tracer } from '@aws-lambda-powertools/tracer';
import { captureLambdaHandler } from '@aws-lambda-powertools/tracer/middleware';
import { CreateEmployeeCommand } from '@dto/create-employee';
import { Employee } from '@dto/employee';
import { ValidationError } from '@errors/validation-error';
import middy from '@middy/core';
import { createEmployeeUseCase } from '@use-cases/create-employee';
import { schema } from './create-employee.schema';

const tracer = new Tracer();
const metrics = new Metrics();

export const createEmployeeAdapter = async ({
  body,
}: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  try {
    if (!body) throw new ValidationError('no payload body');

    const employee = JSON.parse(body) as CreateEmployeeCommand;

    schemaValidator(schema, employee);

    const created: Employee = await createEmployeeUseCase(employee);

    metrics.addMetric('SuccessfulEmployeeCreated', MetricUnit.Count, 1);

    return {
      statusCode: 200,
      body: JSON.stringify(created),
    };
  } catch (error) {
    let errorMessage = 'Unknown error';
    if (error instanceof Error) errorMessage = error.message;
    logger.error(errorMessage);

    metrics.addMetric('CreateEmployeeError', MetricUnit.Count, 1);

    return errorHandler(error);
  }
};

export const handler = middy(createEmployeeAdapter)
  .use(injectLambdaContext(logger))
  .use(captureLambdaHandler(tracer))
  .use(logMetrics(metrics));

It calls through to our use case (business logic) which does the main work as shown below:

import {
  Event,
  createEmployee,
  eventTypes,
  getCurrentEmployeeView,
} from '@aggregates/employee-aggregate';

import { CreateEmployeeCommand } from '@dto/create-employee';
import { Employee } from '@dto/employee';
import { save } from '@repositories/employee-repository';
import { schema } from '@schemas/employee';
import { schemaValidator } from '@shared';

export async function createEmployeeUseCase(
  createEmployeeCommand: CreateEmployeeCommand
): Promise<Employee> {
  createEmployeeCommand.type = 'CREATE_EMPLOYEE';

  // create the command which produces a new event
  const newEvent: Event = createEmployee(createEmployeeCommand);

  // get the current state using the event produced
  const newState = getCurrentEmployeeView([newEvent], newEvent.id);

  // ensure the event is valid
  schemaValidator(schema, newState);

  // save our new employee created event (we don't need a snapshot)
  await save(
    newEvent,
    {
      ...newEvent,
      version: newEvent.version + 1,
      type: eventTypes.SNAPSHOT,
      amount: newState.amount,
      firstName: newState.firstName,
      surname: newState.surname,
    },
    false
  );

  return newState;
}

We can see that it:

  1. Creates a ‘CREATE_EMPLOYEE’ command from the user’s request, which our aggregate turns into an ‘EMPLOYEE_CREATED’ event.
  2. Gets the current state of the employee, which is derived from just this one event, as the employee has only just been created (there are no other events in the stream).
  3. Saves the event to the stream without creating a snapshot (more on snapshots later).

Let’s now look at the aggregate where we called the createEmployee function.

export function createEmployee(command: CreateEmployeeCommand): Event {
  logger.info(`createEmployee: ${JSON.stringify(command)}`);

  if (command.type !== 'CREATE_EMPLOYEE')
    throw new ValidationError('Invalid operation');

  // business logic
  if (command.amount <= 0)
    throw new ValidationError('Leave entitlement should be 1 or more');

  // create new event based on the command
  return {
    type: 'EMPLOYEE_CREATED',
    id: uuid(),
    firstName: command.firstName,
    surname: command.surname,
    datetime: getISOString(),
    amount: 25,
    version: 1,
  };
}

The createEmployee function on our employee aggregate has our business logic (invariants), such as ensuring that the base holiday amount total is 1 or more in our example. It then returns a new ‘EMPLOYEE_CREATED’ event which we need to persist to the database.

To create the event record we call the save function on the employee repository as shown below:

export async function save(
  event: Event,
  snapshot: Event,
  createSnapshot: boolean = false
): Promise<void> {
  // persist the event on its own or with a snapshot
  if (!createSnapshot) {
    await create(event);
  } else {
    await createWithSnapshot(event, snapshot);
  }
}

We can see that it creates the new event record, and optionally creates a snapshot record in the database too.

💡 Note: On this occasion, with it being the first event, we naturally elect not to create a snapshot record.

Now let’s actually create an employee using Postman:
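As a hypothetical equivalent of the Postman call (the endpoint path is an assumption based on the CDK stack; the payload fields follow the CreateEmployeeCommand usage above), the request might look like this:

// hypothetical endpoint - the real path comes from the CDK stateless stack
const response = await fetch(
  'https://<api-id>.execute-api.<region>.amazonaws.com/prod/employees/',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // the use case sets type: 'CREATE_EMPLOYEE' server-side, so the client
    // only supplies the employee details
    body: JSON.stringify({
      firstName: 'Lee',
      surname: 'Gilmore',
      amount: 25, // base leave entitlement; must be 1 or more
    }),
  }
);

console.log(response.status); // 200, with the new employee view in the body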

If we now look in DynamoDB we will see our employee record as shown below:

Our one record for the employee we have just created

We can use the API to return the employee as shown below (although at this point there is only one event):
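For illustration, the returned employee view would look something like this (values hypothetical; the shape matches the Employee DTO returned by the use case):

// a hypothetical response body at this point in time - a single
// EMPLOYEE_CREATED event means the full entitlement is still available
const employeeView = {
  id: 'ca8c41b7-f8e2-4c9e-8e96-e3523f58d53c',
  firstName: 'Lee',
  surname: 'Gilmore',
  amount: 25,
  version: 1,
  lastUpdated: '2024-04-24T09:00:00.000Z',
};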

Request Leave

A person requesting some leave on a rainy day

Now we do something fun, like requesting some leave! We do this via a PATCH on the employee by ID where we stipulate whether the command is a ‘REQUEST_LEAVE’ or ‘CANCEL_LEAVE’.

import {
  Event,
  Events,
  cancelLeave,
  createSnapshot,
  eventTypes,
  getCurrentEmployeeView,
  requestLeave,
} from '@aggregates/employee-aggregate';
import { getEvents, save } from '@repositories/employee-repository';
import { logger, schemaValidator } from '@shared';

import { Employee } from '@dto/employee';
import { UpdateLeaveCommand } from '@dto/update-leave';
import { ValidationError } from '@errors/validation-error';
import { schema as employeeSchema } from '@schemas/employee';

export async function updateLeaveUseCase(
  updateLeaveCommand: UpdateLeaveCommand
): Promise<Employee> {
  let currentState: Employee;
  let newEvent: Event;

  logger.info(
    `command: ${updateLeaveCommand.type} for id: ${updateLeaveCommand.id} for amount ${updateLeaveCommand.amount}`
  );

  // get the records to build the aggregate for a specific id
  const events: Events = await getEvents(updateLeaveCommand.id);

  // read all past events for the aggregate to reconstitute the current state
  currentState = getCurrentEmployeeView(events, updateLeaveCommand.id);

  // use the correct command which uses our aggregate logic which will throw
  // if not valid (invariants)
  switch (updateLeaveCommand.type) {
    case 'REQUEST_LEAVE':
      newEvent = requestLeave(currentState, updateLeaveCommand);
      break;
    case 'CANCEL_LEAVE':
      newEvent = cancelLeave(currentState, updateLeaveCommand);
      break;
    default:
      throw new ValidationError('Event type error');
  }

  // recreate the new state with the new command to return to the user,
  // which we can also use as a snapshot
  const newState = getCurrentEmployeeView(
    [...events, newEvent],
    updateLeaveCommand.id
  );

  // validate the new state
  schemaValidator(employeeSchema, newState);

  // save the new event & snapshot in a transaction
  // (we don't save the current state, just the event itself)
  await save(
    newEvent,
    {
      ...newEvent,
      version: newEvent.version + 1,
      type: eventTypes.SNAPSHOT,
      amount: newState.amount,
      firstName: newState.firstName,
      surname: newState.surname,
    },
    createSnapshot(events)
  );

  return newState;
}

We can see from the code above that we first grab the last ten events from the database for this employee ID. This uses the secondary adapter (database-adapter.ts), which sets a ‘Limit’ of ten and ‘ScanIndexForward’ to false so that the query returns the most recent ten events.

“What if there were 2000 events?”

Why do we limit the events to the last ten if we need to build the full view of the employee? More on snapshotting in a little bit… I’ve got you covered!

export async function list(id: string): Promise<any[] | undefined> {
  try {
    // get the last ten records with consistent read which could include a snapshot
    // and return them in reverse order i.e. the most recent ten records
    const params: QueryCommandInput = {
      TableName: tableName,
      Limit: 10,
      ScanIndexForward: false,
      ConsistentRead: true,
      KeyConditionExpression: '#id = :id',
      ExpressionAttributeNames: {
        '#id': 'id',
      },
      ExpressionAttributeValues: {
        ':id': { S: id },
      },
    };
    const command = new QueryCommand(params);
    const response = await dynamoDBClient.send(command);
    if (response.Items) {
      return response.Items.map((item) => unmarshall(item));
    } else {
      throw new ValidationError('items not found');
    }
  } catch (error) {
    logger.error(`error: ${JSON.stringify(error)}`);
    throw error;
  }
}

We then call the getCurrentEmployeeView function on the employee aggregate to get the reconstituted, up-to-date view of the employee. This is the build-up of the current employee state by applying all historic events in order.

Employee Aggregate

To get the up-to-date view of the employee we replay all of the events in order by applying each version over the top of the next one. This is shown in the code below from the ‘employee-aggregate.ts’ file in the getCurrentEmployeeView function:

export function getCurrentEmployeeView(
  requests: Events,
  id: string,
  currentLeaveAmount: number = 25
): Employee {
  logger.info(
    `getCurrentEmployeeView- id: ${id}, currentLeaveAmount: ${currentLeaveAmount}`
  );
  let currentEmployee: EmployeeDetails = { firstName: '', surname: '' };

  const filteredRequests = filterEvents(requests);

  const sortedRequests = filteredRequests
    .slice()
    .sort((a, b) => b.version - a.version)
    .reverse();

  const lastSnapshot = sortedRequests.find(
    (request) => request.type === eventTypes.SNAPSHOT
  );

  if (lastSnapshot && lastSnapshot.type === eventTypes.SNAPSHOT) {
    // if there is a snapshot, start from its values
    currentLeaveAmount = lastSnapshot.amount;
    currentEmployee = {
      firstName: lastSnapshot.firstName,
      surname: lastSnapshot.surname,
    };

    const lastSnapshotIndex = sortedRequests.indexOf(lastSnapshot);

    // and only apply the events that came after it
    for (let i = lastSnapshotIndex + 1; i < sortedRequests.length; i++) {
      const request = sortedRequests[i];
      if (request.type === eventTypes.LEAVE_REQUESTED) {
        currentLeaveAmount -= request.amount;
      } else if (request.type === eventTypes.LEAVE_CANCELLED) {
        currentLeaveAmount += request.amount;
      } else if (
        request.type === eventTypes.EMPLOYEE_CREATED ||
        request.type === eventTypes.EMPLOYEE_UPDATED
      ) {
        currentEmployee = {
          firstName: request.firstName,
          surname: request.surname,
        };
      } else if (request.type === eventTypes.EMPLOYEE_DELETED) {
        currentLeaveAmount = request.amount;
      }
    }
  } else {
    // no snapshot, so replay every event in order
    for (const request of sortedRequests) {
      if (
        request.type === eventTypes.EMPLOYEE_CREATED ||
        request.type === eventTypes.EMPLOYEE_UPDATED
      ) {
        currentEmployee = {
          firstName: request.firstName,
          surname: request.surname,
        };
      } else if (request.type === eventTypes.LEAVE_REQUESTED) {
        currentLeaveAmount -= request.amount;
      } else if (request.type === eventTypes.LEAVE_CANCELLED) {
        currentLeaveAmount += request.amount;
      } else if (request.type === eventTypes.EMPLOYEE_DELETED) {
        currentLeaveAmount = request.amount;
      }
    }
  }

  return {
    id,
    firstName: currentEmployee.firstName,
    surname: currentEmployee.surname,
    amount: currentLeaveAmount,
    version: getCurrentVersion(requests),
    lastUpdated: getLastUpdatedDate(requests),
  };
}

From the code above you can see that we pass in the events as an argument, and apply them one by one over the top of the previous state. When leave is requested we reduce the running balance by the event’s amount, and when leave is cancelled we increase it.

We also update the firstName or surname properties when an employee updated event is processed.
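As a worked example (values hypothetical), replaying four events in version order would evolve the view like this:

// v1 EMPLOYEE_CREATED { firstName: 'Lee', surname: 'Gilmore', amount: 25 } -> balance 25
// v2 LEAVE_REQUESTED  { amount: 5 }                                        -> balance 20
// v3 EMPLOYEE_UPDATED { firstName: 'Lee', surname: 'James Gilmore' }       -> names overwritten
// v4 LEAVE_CANCELLED  { amount: 2 }                                        -> balance 22
//
// resulting view:
// { id, firstName: 'Lee', surname: 'James Gilmore', amount: 22, version: 4 }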

💡 Note: For the eagle-eyed, you will have noticed that we also first check whether we have any snapshots in the events, which we will discuss in detail further in the article.

Back to requesting leave

We then issue the correct command via the payload, for example, ‘REQUEST_LEAVE’ as shown below, which after going through our business logic, generates a new ‘LEAVE_REQUESTED’ event:

export function requestLeave(
  employee: Employee,
  command: UpdateLeaveCommand
): Event {
  if (command.type !== 'REQUEST_LEAVE')
    throw new ValidationError('Invalid operation');

  // business logic
  if (employee.amount === 0)
    throw new ValidationError('Employee has no remaining leave');

  if (employee.amount - command.amount < 0)
    throw new ValidationError(
      'Employee does not have enough remaining leave for request'
    );

  // create new event based on the command
  return {
    type: 'LEAVE_REQUESTED',
    amount: command.amount,
    id: employee.id,
    datetime: getISOString(),
    version: employee.version + 1,
  };
}

We then generate the new up-to-date view of the employee which incorporates the latest event for ‘LEAVE_REQUESTED’ too, and we save the event with an optional snapshot depending on how many items are stored in DynamoDB against the ID.

We can request leave through the API using the POST request on /requests for a given employee ID, in this example requesting 5 days leave:
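As a hypothetical equivalent of the Postman call (the exact path and method are assumptions based on the description above; the body follows the UpdateLeaveCommand shape):

// hypothetical endpoint - the employee ID is taken from the path
const response = await fetch(
  'https://<api-id>.execute-api.<region>.amazonaws.com/prod/employees/ca8c41b7-f8e2-4c9e-8e96-e3523f58d53c/requests',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      type: 'REQUEST_LEAVE', // or 'CANCEL_LEAVE' to add the days back
      amount: 5, // the number of days of leave being requested
    }),
  }
);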

If we now look in the DynamoDB table we will see the following:

We can see the employee created event, as well as the leave request, and our first snapshot

If we now get the current view of the employee we will see the following:

This is the current view of the employee based on replaying the event history

Snapshots

What we are essentially doing, rather than updating one specific row in DynamoDB as we typically would with CRUD, is storing every change to the entity as an immutable event. When we do this, we can end up having to read a lot of events just to get the current view of the employee. What if there were 2000 events? Yikes…

This is why we use an approach called ‘snapshots’, where roughly every tenth event generates a snapshot view of the employee. We always start by checking whether we need to generate a snapshot when working with our events:

export function createSnapshot(events: Events): boolean {
  // with fewer than 9 events, create a snapshot only if one doesn't exist yet
  if (events.length < 9) {
    const hasSnapshot = events.some(
      (event) => event.type === eventTypes.SNAPSHOT
    );
    return !hasSnapshot;
  }

  // check if the 9 most recent events contain a snapshot, and if not create one
  const firstNineEvents = events.slice(0, 9);
  const hasSnapshotInFirstNine = firstNineEvents.some(
    (event) => event.type === eventTypes.SNAPSHOT
  );
  return !hasSnapshotInFirstNine;
}

We can see that the function above creates a snapshot when we have fewer than 9 events and none of them is already a snapshot; following this, going forward, if the nine most recent events in the batch don’t contain a snapshot, we create one.
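To illustrate the behaviour, the calls below use hypothetical, stripped-down events (real events carry more fields, hence the casts, and I’m assuming eventTypes.SNAPSHOT resolves to the string ‘SNAPSHOT’):

import { Events, createSnapshot } from '@aggregates/employee-aggregate';

// hypothetical minimal events; the query returns the most recent first
const noSnapshotYet = [
  { type: 'LEAVE_REQUESTED' },
  { type: 'EMPLOYEE_CREATED' },
] as unknown as Events;
createSnapshot(noSnapshotYet); // true: fewer than 9 events and none is a snapshot

const snapshotAlreadyPresent = [
  { type: 'LEAVE_REQUESTED' },
  { type: 'SNAPSHOT' },
  { type: 'EMPLOYEE_CREATED' },
] as unknown as Events;
createSnapshot(snapshotAlreadyPresent); // false: a snapshot already exists in the batch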

The createSnapshot function is then used when saving the newly created event as shown in the snippet below:

...
// save the new event & snapshot in a transaction
// (we don't save the current state, just the event itself)
await save(
  newEvent,
  {
    ...newEvent,
    version: newEvent.version + 1,
    type: eventTypes.SNAPSHOT,
    amount: newState.amount,
    firstName: newState.firstName,
    surname: newState.surname,
  },
  createSnapshot(events)
);
...

If we look at the save function below we can see that it calls our secondary adapter to actually persist the events using the AWS SDK v3:

...
export async function save(
  event: Event,
  snapshot: Event,
  createSnapshot: boolean = false
): Promise<void> {
  // persist the event on its own or with a snapshot
  if (!createSnapshot) {
    await create(event);
  } else {
    await createWithSnapshot(event, snapshot);
  }
}
...

It’s obviously important that we save the event and the snapshot at the same time, so for consistency we use a DynamoDB transaction as shown below:

export async function createWithSnapshot(
  item: any,
  snapshot: any
): Promise<void> {
  const transactionRequests: TransactWriteItem[] = [];

  try {
    // add the event to the transaction
    const itemParams: PutItemCommandInput = {
      TableName: tableName,
      ConditionExpression: 'attribute_not_exists(version)', // ensure we don't have a conflict
      Item: marshall({
        ...item,
        id: item.id,
        version: item.version,
      }),
    };
    transactionRequests.push({ Put: itemParams });

    // create a snapshot then add it to the transaction
    const snapshotParams: PutItemCommandInput = {
      TableName: tableName,
      ConditionExpression: 'attribute_not_exists(version)', // ensure we don't have a conflict
      Item: marshall({
        ...snapshot,
        id: snapshot.id,
        version: snapshot.version,
      }),
    };
    transactionRequests.push({ Put: snapshotParams });

    // execute the transaction
    await dynamoDBClient.send(
      new TransactWriteItemsCommand({
        TransactItems: transactionRequests,
      })
    );
  } catch (error) {
    logger.error(`error: ${JSON.stringify(error)}`);
    throw error;
  }
}

You can also see from the code above that we use a guard to ensure that the version doesn't already exist using a DynamoDB condition expression:

ConditionExpression: 'attribute_not_exists(version)'
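If a concurrent writer does persist the same version first, DynamoDB rejects the write. A caller could surface this as a conflict along these lines (a hypothetical sketch; the repo may handle it differently):

import {
  ConditionalCheckFailedException,
  TransactionCanceledException,
} from '@aws-sdk/client-dynamodb';
import { ValidationError } from '@errors/validation-error';

...
try {
  await save(newEvent, snapshot, createSnapshot(events));
} catch (error) {
  // a failed condition surfaces as ConditionalCheckFailedException for a
  // single put, or TransactionCanceledException for the transactional write
  if (
    error instanceof ConditionalCheckFailedException ||
    error instanceof TransactionCanceledException
  ) {
    // another writer persisted this version first; the client should
    // re-read the aggregate and retry the command
    throw new ValidationError('version conflict, please retry the request');
  }
  throw error;
}
...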

Now when we generate the current view of the employee in the employee-aggregate getCurrentEmployeeView function we exercise the code below:

...
const lastSnapshot = sortedRequests.find(
  (request) => request.type === eventTypes.SNAPSHOT
);

// if there is a snapshot
if (lastSnapshot && lastSnapshot.type === eventTypes.SNAPSHOT) {
  currentLeaveAmount = lastSnapshot.amount;
  currentEmployee = {
    firstName: lastSnapshot.firstName,
    surname: lastSnapshot.surname,
  };

  const lastSnapshotIndex = sortedRequests.indexOf(lastSnapshot);

  for (let i = lastSnapshotIndex + 1; i < sortedRequests.length; i++) {
    const request = sortedRequests[i];
    if (request.type === eventTypes.LEAVE_REQUESTED) {
      currentLeaveAmount -= request.amount;
    } else if (request.type === eventTypes.LEAVE_CANCELLED) {
      currentLeaveAmount += request.amount;
    } else if (
      request.type === eventTypes.EMPLOYEE_CREATED ||
      request.type === eventTypes.EMPLOYEE_UPDATED
    ) {
      currentEmployee = {
        firstName: request.firstName,
        surname: request.surname,
      };
    } else if (request.type === eventTypes.EMPLOYEE_DELETED) {
      currentLeaveAmount = request.amount;
    }
  }
}
...

This now means that if we pass in the last ten events each time we will always have a snapshot in there, so we only apply events from the last snapshot onwards, making it quicker and more cost-effective.

If we use our API to create many more events, we will see the following in DynamoDB:

All of the employee events for a specific ID

We can see that we have all of the events stored for the employee with ID ca8c41b7-f8e2-4c9e-8e96-e3523f58d53c including our snapshots.

If we now get the latest view of the employee we will see the correctly reconstituted view from replaying the events:

The current view of the employee when we replay all of the events

What are the advantages and disadvantages?

So, before we jump in with both feet, what are the disadvantages of event sourcing?

  • Complexity: Event sourcing can introduce additional complexity to the system, especially in terms of implementation and understanding.
  • Event Versioning: As the domain evolves, events might need to be versioned to accommodate changes in business requirements. Managing backward and forward compatibility of events can become complex, especially in systems with long event histories.
  • Read Performance: Reconstructing the current state of an entity from its events can be computationally expensive, especially for large event stores or complex event processing.
  • Data Storage: Storing every state change as an event can lead to a large volume of data, which can increase storage costs and complexity, especially for systems with high throughput.

What I would say here is: only use this pattern in the right scenarios, and don’t go wild with it!

Conclusion

Thanks for reading through part 1 of this series, and as a final recap we have covered:

✔️ We talked about what event sourcing is.
✔️ We moved on to discussing CQRS as a related pattern.
✔️ We talked through an architecture example for event sourcing.
✔️ We talked through the associated code examples in TypeScript.

In the next part of the series we will cover:

✔️ The architecture example for CQRS.
✔️ The associated code examples in TypeScript.

Wrapping up 👋🏽

I hope you enjoyed this short article, and if you did then please feel free to share and feedback!

Please go and subscribe to my YouTube channel for similar content!

I would love to connect with you also on any of the following:

https://www.linkedin.com/in/lee-james-gilmore/
https://twitter.com/LeeJamesGilmore

If you enjoyed the posts please follow my profile Lee James Gilmore for further posts/series, and don’t forget to connect and say Hi 👋

Please also use the ‘clap’ feature at the bottom of the post if you enjoyed it! (You can clap more than once!!)

About me

Hi, I’m Lee, an AWS Community Builder, Blogger, AWS certified cloud architect, and Global Head of Technology & Architecture based in the UK; currently working for City Electrical Factors (UK) & City Electric Supply (US), having worked primarily in full-stack JavaScript on AWS for the past 6 years.

I consider myself a serverless advocate with a love of all things AWS, innovation, software architecture, and technology.

*** The information provided are my own personal views and I accept no responsibility for the use of the information. ***
