
Serverless Lightweight Clean Code Approach

An opinionated example of a lightweight ‘clean code’ Lambda function architecture, with code examples written in the AWS CDK and TypeScript.


Preface

✔️ We discuss what evolutionary or hexagonal architecture is — and why it has been discussed a lot in the past week.

✔️ We cover a lightweight ‘clean code’ hexagonal architecture for Lambda functions that don’t contain complex domain logic.

The full code repo for the article can be found here: https://github.com/leegilmorecode/serverless-clean-code-experience

Introduction 👋🏽

This article is split over the following sections:

✔️ What are we building? 🔩
✔️ What is Clean Hexagonal code? 🧑🏽‍💻
✔️ Talking through key code 🚀

“I always urge builders to consider the evolution of their systems over time and make sure the foundation is such that you can change and expand them with the minimum number of dependencies.” — Dr Werner Vogels, CTO - Amazon.com.

Over the past week there has been a flurry of articles and hot takes on hexagonal architecture, or ‘clean code’, in Serverless, and the advantages and disadvantages of this approach (i.e. is the perceived added complexity up front worth it to make your code and services adaptable to change?).

The conversation started with this great article by my friend Allen Helton, who made the statement “We’re making Lambda too hard”.

https://www.readysetcloud.io/blog/allen.helton/are-we-making-lambda-too-hard/

Taking a piece of the article:

“I’m beginning to see a trend that I’m not sure I like… yet. There have been quite a few posts lately about how people recommend structuring Lambda-heavy projects. They talk about the different layers of code, responsibility separation, and code reusability. While these things might sound like no-brainers, it’s beginning to make Lambda development feel a bit complex and bring in some habits that seem to fight against the modularity and simplicity the service provides.”

Allen goes on to say:

“Function code should be explicit. I should be able to look at the code and know exactly what it does. I don’t want to wade through nested method calls and multiple files to figure out that a function is saving an entity to DynamoDB. That’s unnecessary complexity that distracts programmers from figuring out exactly how it solves the business problem.”

I will let you consume the great article in its entirety in your own time; in it he details his personal approach, where the handler, business logic, data access and so on all live in one handler file for simplicity and ease of reading the code (nothing is shared across handlers, and that is a deliberate design decision).
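To make the contrast concrete, below is a minimal sketch of that single-file style (my own illustration under stated assumptions, not Allen's actual code), assuming the aws-sdk v2 DocumentClient and a TABLE_NAME environment variable:

import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import * as AWS from 'aws-sdk';

const dynamoDb = new AWS.DynamoDB.DocumentClient();

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const id = event.pathParameters?.id;
  if (!id) return { statusCode: 400, body: 'no id in path parameters' };

  // business logic and persistence inline - explicit and easy to read,
  // but nothing here can be reused by another handler
  await dynamoDb
    .put({
      TableName: process.env.TABLE_NAME as string,
      Item: { id, subscriptionType: 'Upgraded' },
    })
    .promise();

  return { statusCode: 200, body: JSON.stringify({ id }) };
};

Everything is visible in one place, at the cost of repeating the persistence code in every handler that needs it.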

The same week there was a really interesting article about the Amazon Prime Video engineering team moving a key part of their service to what they describe as a monolith running in a container, away from the original Serverless code and implementation:

“Conceptually, the high-level architecture remained the same. We still have exactly the same components as we had in the initial design (media conversion, detectors, or orchestration). This allowed us to reuse a lot of code and quickly migrate to a new architecture.” — Prime Video Tech

And James Lewis, Rebecca Parsons and Neal Ford released a great discussion the same week on Building Evolutionary Architectures which I have shared below:

Building Evolutionary Architectures • Rebecca Parsons, Neal Ford & James Lewis • GOTO 2023

This has started a myriad of great discussions, articles, tweets and hot takes on areas such as:

  • Evolutionary architecture — what is evolvable architecture, and how do you build it? How do you prevent brittle code and architecture that is hard to adapt? And do you even need to?
  • Future proofing your organisation — should you be able to switch between serverless functions and containers (and vice versa) when needed with minimal effort? Or perhaps switch from a synchronous process through an API Gateway to an asynchronous Storage First pattern, with ease and without large re-writes of code and refactoring of tests?
  • Business logic vs infra — should your business/domain/processing logic be separate from your infrastructure, or are they one and the same thing in Serverless? What about direct integrations on AWS?

An example of these great discussions online would be:

https://twitter.com/thdxr/status/1653817160941051908?s=20

There were also some key takes the same week from this great article by Dr Werner Vogels (CTO, Amazon.com), written as a reply to the initial Amazon Prime article:

“Software is quite different, once we are running our software, we may get insights about our workloads that we did not have when it was designed. And, if we had realised this at the start, and we chose an evolvable architecture, we could change components without impacting the customer experience.”

“So, monoliths aren’t dead (quite the contrary), but evolvable architectures are playing an increasingly important role in a changing technology landscape, and it’s possible because of cloud technologies.”

There are always trade-offs on a scale of pure simplicity to possibly over-engineered for the future, which we cover in this article in detail.


What will we be covering in this article?

In this article we are going to discuss an opinionated, lightweight structure and style for clean code evolutionary architectures with Lambda functions, backed by an example code repo; this is aimed specifically at smaller BFFs or integration services that typically live in the Experience Layer. We also discuss the differences between this approach and a more structured, DDD-focused approach for Domain Services.

👇 Before we go any further — please connect with me on LinkedIn for future blog posts and Serverless news https://www.linkedin.com/in/lee-james-gilmore/

What are we building? 🔩

This is what we will be building at a high level to talk through the lighter implementation of clean code (certain services left off for brevity):

We can see that:

  1. We use Amazon API Gateway as our interface with the outside world.
  2. We have five AWS Lambda functions that contain our business logic.
  3. The functions persist and retrieve the records in Amazon DynamoDB.
  4. We raise events off the back of certain business logic, which are published to Amazon EventBridge (a minimal CDK sketch of this stack follows the list).
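To ground this, here is a minimal CDK sketch of what such a stack could look like; the construct names, file path and HTTP method are illustrative assumptions rather than lifted from the repo:

import * as cdk from 'aws-cdk-lib';
import * as apigw from 'aws-cdk-lib/aws-apigateway';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as events from 'aws-cdk-lib/aws-events';
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';
import { Construct } from 'constructs';

export class CustomerAccountsStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // table that the secondary database adapter reads from and writes to
    const table = new dynamodb.Table(this, 'CustomerAccountsTable', {
      partitionKey: { name: 'id', type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });

    // custom bus that the secondary event adapter publishes to
    const bus = new events.EventBus(this, 'CustomerAccountsBus');

    // one function per use case; the entry file is the primary adapter
    const upgradeAccountFn = new NodejsFunction(this, 'UpgradeCustomerAccount', {
      entry: 'src/adapters/primary/upgrade-customer-account.adapter.ts',
      handler: 'handler',
      environment: {
        TABLE_NAME: table.tableName,
        EVENT_BUS: bus.eventBusName,
      },
    });
    table.grantReadWriteData(upgradeAccountFn);
    bus.grantPutEventsTo(upgradeAccountFn);

    // our interface with the outside world
    const api = new apigw.RestApi(this, 'CustomerAccountsApi');
    api.root
      .addResource('customer-accounts')
      .addResource('{id}')
      .addMethod('PATCH', new apigw.LambdaIntegration(upgradeAccountFn));
  }
}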

Let’s now talk through what clean hexagonal architecture is in the next section.

What is Clean Hexagonal code? 🧑🏽‍💻

I previously wrote a two-part series on DDD and Clean Code (with an example code repo) that focused very much on complex domain services, and how you could approach them using the AWS CDK and TypeScript.

The article covers the history and origins of hexagonal architectures, the parts that are in play (Aggregates, Value Objects, Domain Events etc), and how we may look to apply this in the Serverless World for domain services. A high level diagram showing this is below:

A more structured approach to functions with DDD when we have high complexity and domain logic

Would we use this approach for a smaller simple service or CRUD app? Certainly not. Would we use this with a domain service with complex business logic and rules when an organisation is following DDD? I would say potentially yes.


The following diagram shows at a high level how you might consider complexity vs refactor risk & cost, and where evolutionary architecture can potentially help:

Potential complexity and refactor cost risk

We can see from the diagram that:

  1. CRUD Apps. We have CRUD style apps that are essentially low complexity and small in size. Here it probably doesn’t make sense to look at full hexagonal architectures, as the cost implications of a refactor are probably relatively low. It may even make sense to look at direct integrations.
  2. Integrations/Channels. We have Integrations or Channels, such as integration services between services, or potentially BFFs for Web or Mobile. Here, the complexity ramps up somewhat, as does typically the size of that service. In my opinion, here we should look at a light version of clean code and evolutionary architecture (which we discuss in this article).
  3. Domain Services. Finally, we have larger domain services (DDD) that contain more complex business logic, where the potential refactor costs and risks are higher (and this is the core IP of your business and what differentiates you from competitors). In this example, I would potentially look at a more structured DDD approach to hexagonal architecture as discussed here.

So, what might an alternative lightweight version look like for an Integration or Channel?

The diagram below shows a lighter version of this pattern, which focuses on decoupling a use case (business logic) from the technical implementation details, through the same use of primary and secondary adapters as before:

A lightweight version of the hexagonal architecture pattern, which still decouples the business logic (use case) from the services.

The main differences here are that:

  1. The business logic now all resides in the Use Case, with no separate implementation of entities and domain models.
  2. We have no repository pattern, i.e. the Use Case interacts with services directly through secondary adapters.
  3. We remove the notion of entities, aggregates and value objects at a code level — but still apply the principles.

How does this look as a slice?

If we look at this for our example code repo, you can see that the use cases are separate from the adapters, and that the secondary adapters are reused across the use cases:

An example of how our business logic is accessed

We can see that in this scenario our use cases (business logic) are interfaced with using their own Lambda function handlers (primary adapters), and that they use the same database and event modules for interacting with Amazon DynamoDB and Amazon EventBridge. The Lambda functions are obviously the unit of deployment here.

Note: As the structure and setup are essentially the same between the two example code repos (just with folders removed), teams can easily move between both approaches with low cognitive load.

It is also important to note that evolutionary architectures are not just about functions; it is how you define your bounded contexts, reduce coupling of services, and look at an event-driven async first approach between services (as well as other factors).


Now let’s walk through the code in the following section to show how simple this is.

Talking through key code 🚀

So we have now talked through what hexagonal architecture is, and why we need a light version of it for non-domain services, so let's talk through the key code!

We can start by looking at the folder structure below:

https://github.com/leegilmorecode/serverless-clean-code-experience

The key folders are shown in light blue, which are the adapters and the use cases. The middle darker box is essentially everything a production-style application should have regardless (schemas, events, types etc.).
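As the repo image may not render here, the following is a rough sketch of that structure, inferred from the import paths used in the code below (indicative rather than exact):

src/
├── adapters/
│   ├── primary/      <-- Lambda handlers (inputs)
│   └── secondary/    <-- database-adapter.ts, event-adapter.ts (outputs)
├── use-cases/        <-- business logic
├── dto/              <-- data transfer objects
├── errors/           <-- custom errors
├── events/           <-- event definitions
├── schemas/          <-- validation schemas
├── shared/           <-- utilities e.g. date-utils
└── config/           <-- environment config
packages/             <-- shared packages e.g. logger, schema-validator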

If we now take a look at an example primary adapter (inputs) in upgrade-customer-account.adapter.ts, we can see that it contains no business logic at all; it simply calls the use case and handles the inputs and outputs specific to the Lambda integration.

Note: We could create an adapter for an S3 file upload event, or AppSync Lambda resolver — essentially this is just the technical mechanics of interfacing with the logic in the use case. This is what makes it evolvable — we simply create a new file and everything else remains the same. An example could be swapping the functions for containers — and simply creating a new adapter for Express/NextJS/Fastify.

import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import {
  MetricUnits,
  Metrics,
  logMetrics,
} from '@aws-lambda-powertools/metrics';
import { Tracer, captureLambdaHandler } from '@aws-lambda-powertools/tracer';

import { CustomerAccountDto } from '@dto/customer-account';
import { ValidationError } from '@errors/validation-error';
import { errorHandler } from '@packages/apigw-error-handler';
import { injectLambdaContext } from '@aws-lambda-powertools/logger';
import { logger } from '@packages/logger';
import middy from '@middy/core';
import { upgradeCustomerAccountUseCase } from '@use-cases/upgrade-customer-account';

const tracer = new Tracer();
const metrics = new Metrics();

// (primary adapter) --> use case --> secondary adapter(s)
export const upgradeCustomerAccountAdapter = async ({
  pathParameters,
}: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  try {
    if (!pathParameters || !pathParameters?.id)
      throw new ValidationError('no id in the path parameters of the event');

    const { id } = pathParameters;

    logger.info(`customer account id: ${id}`);

    const customerAccount: CustomerAccountDto =
      await upgradeCustomerAccountUseCase(id);

    logger.info(
      `upgraded customer account: ${JSON.stringify(customerAccount)}`
    );

    metrics.addMetric(
      'SuccessfulCustomerAccountUpgraded',
      MetricUnits.Count,
      1
    );
    metrics.addMetadata('CustomerAccountId', id);

    return {
      statusCode: 200,
      body: JSON.stringify(customerAccount),
    };
  } catch (error) {
    return errorHandler(error);
  }
};

export const handler = middy(upgradeCustomerAccountAdapter)
  .use(injectLambdaContext(logger))
  .use(captureLambdaHandler(tracer))
  .use(logMetrics(metrics));
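And as the note above suggests, an alternative primary adapter, for example one driven by S3 upload events, could invoke the exact same use case. A hedged sketch (the mapping of the object key to the account id is purely illustrative):

import { S3Event } from 'aws-lambda';

import { logger } from '@packages/logger';
import { upgradeCustomerAccountUseCase } from '@use-cases/upgrade-customer-account';

// an alternative primary adapter: the same use case driven by S3 events
// (assumes the object key is the customer account id - illustration only)
export const handler = async (event: S3Event): Promise<void> => {
  for (const record of event.Records) {
    const id = record.s3.object.key;
    logger.info(`upgrading customer account ${id} from s3 event`);
    await upgradeCustomerAccountUseCase(id);
  }
};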

If we now move onto the use case in the file upgrade-customer-account.ts, we can see that it has no notion of what is invoking or interfacing with it, and it is simply all about the business logic.

import * as customerAccountUpgradedEvent from '@events/customer-account-upgraded';

import {
  CustomerAccountDto,
  PaymentStatus,
  SubscriptionType,
} from '@dto/customer-account';
import {
  retrieveAccount,
  updateAccount,
} from '@adapters/secondary/database-adapter';

import { PaymentInvalidError } from '@errors/payment-invalid-error';
import { SubscriptionAlreadyUpgradedError } from '@errors/subscription-already-upgraded-error';
import { getISOString } from '@shared/date-utils';
import { logger } from '@packages/logger';
import { publishEvent } from '@adapters/secondary/event-adapter';
import { schema } from '@schemas/customer-account.schema';
import { schemaValidator } from '@packages/schema-validator';

// primary adapter --> (use case) --> secondary adapter(s)

/**
 * Upgrade an existing Customer Account
 * Input: Customer account ID
 * Output: CustomerAccountDto
 *
 * Primary course:
 *
 * 1. Retrieve the customer account based on ID
 * 2. Upgrade and validate the customer account
 * 3. Publish a CustomerAccountUpdated event.
 */
export async function upgradeCustomerAccountUseCase(
  id: string
): Promise<CustomerAccountDto> {
  const updatedDate = getISOString();

  const customerAccount: CustomerAccountDto = await retrieveAccount(id);

  if (customerAccount.paymentStatus === PaymentStatus.Invalid) {
    throw new PaymentInvalidError('Payment is invalid - unable to upgrade');
  }

  // we can not upgrade an account which is already upgraded
  if (customerAccount.subscriptionType === SubscriptionType.Upgraded) {
    throw new SubscriptionAlreadyUpgradedError(
      'Subscription is already upgraded - unable to upgrade'
    );
  }

  // upgrade the account
  customerAccount.subscriptionType = SubscriptionType.Upgraded;
  customerAccount.updated = updatedDate;

  // validate the account before saving it so it is always valid
  schemaValidator(schema, customerAccount);
  logger.debug(`customer account validated for ${customerAccount.id}`);

  await updateAccount(customerAccount);
  logger.info(`customer account ${id} upgraded`);

  await publishEvent(
    customerAccount,
    customerAccountUpgradedEvent.eventName,
    customerAccountUpgradedEvent.eventSource,
    customerAccountUpgradedEvent.eventVersion,
    updatedDate
  );
  logger.info(
    `customer account upgraded event published for ${customerAccount.id}`
  );

  return customerAccount;
}

Unlike the heavier approach for domain services, the use case now has the business logic that the domain entities would typically have — and everything is now in one file (including the validation of the records etc).
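As an aside, the schemaValidator call above is just a thin wrapper. A minimal sketch of what such a package could look like, assuming JSON Schema validation with ajv (the actual repo implementation may differ):

import Ajv from 'ajv';

import { ValidationError } from '@errors/validation-error';

const ajv = new Ajv({ allErrors: true });

// compile the supplied JSON schema and throw a ValidationError
// containing the ajv error details if the body does not conform
export function schemaValidator(
  schema: Record<string, any>,
  body: Record<string, any>
): void {
  const validate = ajv.compile(schema);
  if (!validate(body)) {
    throw new ValidationError(JSON.stringify(validate.errors));
  }
}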

We also no longer have the notion of repositories, so the use case talks through secondary adapters to interface with DynamoDB and EventBridge (essentially abstractions).

If we look at the file database-adapter.ts it is an interface to Amazon DynamoDB that performs the retrieval, creation and updating of accounts:

import * as AWS from 'aws-sdk';

import { CustomerAccountDto } from '@dto/customer-account';
import { config } from '@config/config';
import { logger } from '@packages/logger';

const dynamoDb = new AWS.DynamoDB.DocumentClient();

// this is the secondary adapter which creates the account in the db
// Note: you would typically use a module or package here to interact
// with the database technology - for example dynamoose

// primary adapter --> use case --> (secondary adapter)
export async function createAccount(
  customerAccount: CustomerAccountDto
): Promise<CustomerAccountDto> {
  const tableName = config.get('tableName');

  const params: AWS.DynamoDB.DocumentClient.PutItemInput = {
    TableName: tableName,
    Item: customerAccount,
  };

  await dynamoDb.put(params).promise();
  logger.info(`Customer account ${customerAccount.id} stored in ${tableName}`);

  return customerAccount;
}

// this is the secondary adapter which updates the account in the db
// primary adapter --> use case --> (secondary adapter)
export async function updateAccount(
  customerAccount: CustomerAccountDto
): Promise<CustomerAccountDto> {
  const tableName = config.get('tableName');

  const params: AWS.DynamoDB.DocumentClient.PutItemInput = {
    TableName: tableName,
    Item: customerAccount,
  };

  await dynamoDb.put(params).promise();
  logger.info(`Customer account ${customerAccount.id} updated in ${tableName}`);

  return customerAccount;
}

// this is the secondary adapter which retrieves the account from the db
// primary adapter --> use case --> (secondary adapter)
export async function retrieveAccount(id: string): Promise<CustomerAccountDto> {
  const tableName = config.get('tableName');

  const params: AWS.DynamoDB.DocumentClient.GetItemInput = {
    TableName: tableName,
    Key: {
      id,
    },
  };

  const { Item: item } = await dynamoDb.get(params).promise();

  // guard against a missing record rather than returning an empty object
  if (!item) {
    throw new Error(`Customer account ${id} not found in ${tableName}`);
  }

  const customer: CustomerAccountDto = item as CustomerAccountDto;

  logger.info(`Customer account ${customer.id} retrieved from ${tableName}`);

  return customer;
}

We have a similar secondary adapter in the file event-adapter.ts, with a single method for publishing events to Amazon EventBridge that all use cases can utilise:

import * as AWS from 'aws-sdk';

import { PutEventsRequestEntry } from 'aws-sdk/clients/eventbridge';
import { config } from '@config/config';
import { logger } from '@packages/logger';

class NoEventBodyError extends Error {
  constructor(message: string) {
    super(message);
    this.name = 'NoEventBodyError';
  }
}

const eventBridge = new AWS.EventBridge();

// this is a secondary adapter which will publish the event to eventbridge
// primary adapter --> use case --> (secondary adapter)
export async function publishEvent(
  event: Record<string, any>,
  detailType: string,
  source: string,
  eventVersion: string,
  eventDateTime: string
): Promise<void> {
  const eventBus = config.get('eventBus');

  if (Object.keys(event).length === 0) {
    throw new NoEventBodyError('There is no body on the event');
  }

  const createEvent: PutEventsRequestEntry = {
    Detail: JSON.stringify({
      metadata: {
        eventDateTime: eventDateTime,
        eventVersion: eventVersion,
      },
      data: {
        ...event,
      },
    }),
    DetailType: detailType,
    EventBusName: eventBus,
    Source: source,
  };

  const subscriptionEvent: AWS.EventBridge.PutEventsRequest = {
    Entries: [createEvent],
  };

  await eventBridge.putEvents(subscriptionEvent).promise();

  logger.info(
    `event ${detailType} published for ${event.id} to bus ${eventBus} with source ${source}`
  );
}

This means that we are not duplicating that code across all use cases.
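For completeness, the eventName, eventSource and eventVersion values that the use case passes to publishEvent come from small event definition modules such as @events/customer-account-upgraded. A sketch of what one might contain (the values shown here are my assumptions):

// @events/customer-account-upgraded - event metadata only, no logic
// (values are illustrative assumptions)
export const eventName = 'CustomerAccountUpgraded';
export const eventSource = 'com.customer-account';
export const eventVersion = '1';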

As a summary of the implementation discussed above, we are now dealing with three folders: Primary Adapters, Use Cases and Secondary Adapters. This keeps the implementation light, while still providing the loose coupling and adaptability that we need:

Diagram showing our three main folders in use

Moving the application above to containers would simply be a matter of creating a new primary adapter, perhaps using Express, as shown below:

Diagram showing moving from functions to container
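As a hedged sketch of that move (the route and port are my assumptions), the container version could be as small as the following, with the use case and secondary adapters left completely untouched:

import express, { Request, Response } from 'express';

import { CustomerAccountDto } from '@dto/customer-account';
import { logger } from '@packages/logger';
import { upgradeCustomerAccountUseCase } from '@use-cases/upgrade-customer-account';

const app = express();

// new primary adapter (Express) --> same use case --> same secondary adapters
app.patch(
  '/customer-accounts/:id/upgrade',
  async (req: Request, res: Response) => {
    try {
      // the use case has no idea it is now being driven by Express
      const customerAccount: CustomerAccountDto =
        await upgradeCustomerAccountUseCase(req.params.id);
      res.status(200).json(customerAccount);
    } catch (error) {
      logger.error(`error upgrading customer account: ${error}`);
      res.status(500).json({ message: 'internal server error' });
    }
  }
);

app.listen(3000, () => logger.info('container adapter listening on 3000'));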

Conclusion

I think it is important to understand where you can benefit from some up-front structure and abstraction in your code, so below is a real-life example of where building in the right way can save you massive headaches in the future.

Real World Example

Let's take a look at a real-world example. In a prior role we had multiple teams working on the replacement of a monolith which had been built up over five years by previous teams, and we moved this to fully serverless over 12 months. We decided upon DynamoDB as our main database of choice, and carefully and meticulously considered all database access patterns based on 'what we knew' at the time, including support from the AWS DynamoDB team over calls.

When the project was around three-quarters of the way through, we acquired an HR SaaS product which needed to be fully integrated, and we also needed to integrate with new desktop applications. We therefore had many new access pattern requirements, which required aggregation pipelines, dynamic counts and more. This affected all of our domain services that contained a significant amount of business logic.

Luckily we had created a data access layer (secondary adapters) for our database services and decoupled our business logic (use cases), which our 200+ functions used. We were therefore able to replace a few files, fully unit and integration test them, and swap out for Amazon DocumentDB; and as we conformed to the same interfaces (DTOs), it all just worked within a week's worth of effort.
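As an illustrative sketch of that kind of swap (not our actual code, and the connection details are assumptions), a DocumentDB version of the retrieveAccount secondary adapter could keep exactly the same signature:

import { MongoClient } from 'mongodb';

import { CustomerAccountDto } from '@dto/customer-account';
import { config } from '@config/config';
import { logger } from '@packages/logger';

// the connection string config key is an assumption for illustration
const client = new MongoClient(config.get('documentDbUri'));

// same function name and signature as the DynamoDB version, so the
// use cases (and their tests) don't change at all
export async function retrieveAccount(id: string): Promise<CustomerAccountDto> {
  const collection = client
    .db('customer-accounts')
    .collection<CustomerAccountDto>('accounts');

  const customer = await collection.findOne({ id });
  if (!customer) throw new Error(`Customer account ${id} not found`);

  logger.info(`Customer account ${customer.id} retrieved`);
  return customer;
}

Because the use cases only ever import the function, nothing above this layer needs to know the database changed.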

We also decoupled the interface from the business logic (primary adapters), which again paid off massively when the desktop applications needed to use REST rather than GraphQL: we reused the same business logic with different primary adapters (this time specific to API Gateway rather than AppSync), simply by creating some new files.

The lesson learned here was that we took the up-front decision to add some level of abstraction and structure to make our code adaptable to change, using pretty much the same pattern described in this article, and it paid off massively. Would I have used this same approach for a new startup versus a cloud platform with millions of users? Perhaps not.

It's all about context, and no approach is necessarily 'right'; there are many, many factors at play when you make that decision across a sliding scale, and you have to make the right one for your organisation at that time.

The key quote to leave on:

“Mono or Micro — we will all be dinosaurs one day” — Werner Vogels

Wrapping up 👋

Please go and subscribe to my YouTube channel for similar content!

I would love to connect with you also on any of the following:

https://www.linkedin.com/in/lee-james-gilmore/
https://twitter.com/LeeJamesGilmore

If you enjoyed the posts please follow my profile Lee James Gilmore for further posts/series, and don’t forget to connect and say Hi 👋

Please also use the ‘clap’ feature at the bottom of the post if you enjoyed it! (You can clap more than once!!)

About me

Hi, I’m Lee, an AWS Community Builder, Blogger, AWS certified cloud architect and Global Serverless Architect based in the UK; currently working for City Electrical Factors (UK) & City Electric Supply (US), having worked primarily in full-stack JavaScript on AWS for the past 6 years.

I consider myself a serverless advocate with a love of all things AWS, innovation, software architecture and technology.

*** The information provided represents my own personal views, and I accept no responsibility for the use of this information. ***
