
Lightweight Clean Code: Containers to Lambda Functions (and back again)

This tutorial explores hexagonal architecture in TypeScript, leveraging the AWS CDK for seamless transitions between containers and functions on AWS using a Serverless approach. Code examples are written in TypeScript and the AWS CDK.


Preface

✔️ We cover using evolutionary architecture to swap between containers and functions in serverless.

✔️ We talk through Amazon API Gateway and AWS Lambda integrations vs AWS App Runner and containers.

✔️ We talk through a full solution code example using the AWS CDK and TypeScript.

This was covered recently at a high level at AWS re:Invent 2023.

Introduction 👋🏽

In this article we cover evolutionary architecture and ‘clean code’, showing you how you can future-proof your serverless solutions with a lightweight hexagonal architecture approach (rather than monolithic lambda handlers).

We will show how you can write your code in a way that lets you seamlessly go from containers to Lambda functions and vice versa. If our code lived in large monolithic handler files, we wouldn’t be able to swap compute like this without a full rewrite.

The full code repository for the article can be found here.

For a deep dive into the lightweight clean code approach please see the following previous article:

👇 Before we go any further — please connect with me on LinkedIn for future blog posts and Serverless news https://www.linkedin.com/in/lee-james-gilmore/

What are we building?

OK, so let’s first of all take a look at what we will be building in this article:

We can see from the diagram above that we are either allowing a customer to use our API Gateway with Lambda function integrations (blue circles) or using App Runner and containers (green circles). We can switch these out very easily due to how we have structured our code.

“We can switch these out very easily due to how we have structured our code.”

The business logic and all of the code for interacting with DynamoDB etc. is the same (denoted by the code block on the diagram); all we are doing here is swapping out the primary adapter, i.e. the service code that the customer interacts with directly.

Note: We would never have both running at the same time, and we would utilise custom domain names so switching out is seamless for our customers.

Let’s talk through the numbered coloured circles from the diagram:

API Gateway (Functions)

First of all, we will cover customers interacting with API Gateway and Lambda functions as shown below:

The service code being consumed through Amazon API Gateway and Lambda functions
  1. The customers interact with our orders API using Amazon API Gateway using our primary adapter.
  2. We have a direct integration with a Lambda function which contains our business logic (use case) and code to interact with DynamoDB (secondary adapter).
  3. We use secondary adapters to persist the data into a DynamoDB table.

App Runner (Containers)

Now let’s look at the exact same code being consumed by customers through AWS App Runner:

The same service code is being consumed through AWS App Runner
  1. The customers interact with our orders API using AWS App Runner (primary adapter).
  2. This container contains our business logic (use case) and code to interact with DynamoDB (secondary adapter).
  3. We use secondary adapters to persist the data into a DynamoDB table (same as above).

OK, so what is the benefit of this design and approach?

With this approach, we can simply change the entry point i.e. primary adapter and no other code needs to change if we want to swap from containers to functions, or vice versa.

“With this approach, we can simply change the entry point i.e. primary adapter and no other code needs to change if we want to swap from containers to functions, or vice versa.”

It is important to understand that we have the notion of primary adapters, use cases and secondary adapters as detailed below:

✔️ Primary Adapters — This is how our customers or other services interact with us on the driving side. This could be Amazon API Gateway, an SQS queue, AppSync GraphQL API, or any other interaction point through an AWS service. This is totally devoid of business logic and is very thin.

✔️ Use Case — This is where our main business logic resides and contains no framework or service-related code. It uses secondary adapters to communicate with AWS services on the driven side, such as persisting items and retrieving them from a DynamoDB table or perhaps publishing an event to Amazon EventBridge.

✔️ Secondary Adapters — These are used to communicate on the driven side with AWS or other external services and contain no business logic. They are only used via a Use Case to deal with side effects.

This is shown in the diagram below for our example:

We can then easily swap out the primary adapter (entry point) so our code now utilises AWS App Runner and containers instead of functions, as shown below:

Let’s now look at this at a code level to make this more clear.

Talking through the code

Now that we have an idea of how a lightweight version of clean code and evolutionary architecture works, let’s look at this at the code level.

Use Case & Secondary Adapter

Let’s start by looking at the use case, which is devoid of any frameworks, is purely business logic, and utilises secondary adapters for any interactions with other services:

import { getISOString, logger, schemaValidator } from '@shared/index';

import { Order } from '@dto/create-order';
import { saveOrder } from '@adapters/secondary/database-adapter';
import { schema } from '@schemas/order';
import { v4 as uuid } from 'uuid';

export async function createOrderUseCase(order: Order): Promise<Order> {
  const created = getISOString();

  // create the order object and add the id and created date
  const orderDto: Order = {
    id: uuid(),
    created,
    ...order,
  };

  // Note - you would do your super duper business logic here

  logger.info(`order: ${JSON.stringify(order)}`);

  schemaValidator(schema, orderDto);

  // save the created order using the secondary adapter
  await saveOrder(orderDto);

  logger.info(`order with ${orderDto.id} saved`);

  return orderDto;
}
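
The Order DTO and the order schema themselves aren’t shown in this article, so purely as an assumption (the productId and quantity fields below are hypothetical and will differ from the real repo), they might look something like this:

// @dto/create-order - illustrative shape only
export interface Order {
  id?: string; // added by the use case
  created?: string; // added by the use case
  productId: string; // hypothetical field
  quantity: number; // hypothetical field
}

// @schemas/order - illustrative JSON schema for the full order
export const schema = {
  type: 'object',
  required: ['id', 'created', 'productId', 'quantity'],
  properties: {
    id: { type: 'string' },
    created: { type: 'string' },
    productId: { type: 'string' },
    quantity: { type: 'number' },
  },
  additionalProperties: false,
};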

As you can see, the use case above would remain the same regardless of whether we are using containers or functions. It uses a secondary adapter to write the order item to a DynamoDB table, as shown below:

import { DynamoDBClient, PutItemCommand } from '@aws-sdk/client-dynamodb';

import { Order } from '@dto/create-order';
import { config } from '@config/config';
import { logger } from '@shared/index';
import { marshall } from '@aws-sdk/util-dynamodb';

const dynamoDb = new DynamoDBClient({});

export async function saveOrder(createOrderDto: Order): Promise<Order> {
  const tableName = config.get('tableName');

  const params = {
    TableName: tableName,
    Item: marshall(createOrderDto),
  };

  try {
    await dynamoDb.send(new PutItemCommand(params));

    logger.info(`order created with ${createOrderDto.id} into ${tableName}`);

    return createOrderDto;
  } catch (error) {
    console.error('error creating order:', error);
    throw error;
  }
}

The code above is completely devoid of any business logic and is simply used to interact with external services (in this case Amazon DynamoDB). This again does not change regardless of running as functions or containers.
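
The shared schemaValidator helper isn’t shown above either; a minimal sketch, assuming JSON schema validation with ajv and re-using the ValidationError class referenced by the primary adapter, could look like this:

import Ajv from 'ajv';

import { ValidationError } from '@errors/validation-error';

const ajv = new Ajv({ allErrors: true });

// validates the payload against the supplied JSON schema and throws a
// ValidationError (picked up by our error handlers) when it does not match
export function schemaValidator(schema: object, payload: unknown): void {
  const validate = ajv.compile(schema);

  if (!validate(payload)) {
    throw new ValidationError(JSON.stringify(validate.errors));
  }
}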

Primary Adapters

OK, so now let’s look at how we can create two different primary adapters (one for App Runner and one for API Gateway + Lambda functions), both of which use the same ‘use case’. Let’s start by looking at the primary adapter for the Lambda function:

import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import {
  MetricUnits,
  Metrics,
  logMetrics,
} from '@aws-lambda-powertools/metrics';
import { Tracer, captureLambdaHandler } from '@aws-lambda-powertools/tracer';
import { errorHandler, logger, schemaValidator } from '@shared/index';

import { Order } from '@dto/create-order';
import { ValidationError } from '@errors/validation-error';
import { createOrderUseCase } from '@use-cases/create-order';
import { injectLambdaContext } from '@aws-lambda-powertools/logger';
import middy from '@middy/core';
import { schema } from '@schemas/create-order';

const tracer = new Tracer();
const metrics = new Metrics();

export const createOrderAdapter = async ({
  body,
}: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  try {
    if (!body) throw new ValidationError('no payload body');

    const order = JSON.parse(body) as Order;

    logger.info(`order: ${JSON.stringify(order)}`);

    // validate the order payload for create order
    schemaValidator(schema, order);

    // use the use case for the actual business logic and secondary adapters
    const created: Order = await createOrderUseCase(order);

    metrics.addMetric('SuccessfulCreateOrderCreated', MetricUnits.Count, 1);

    return {
      statusCode: 201,
      body: JSON.stringify(created),
    };
  } catch (error) {
    let errorMessage = 'Unknown error';
    if (error instanceof Error) errorMessage = error.message;
    logger.error(errorMessage);

    metrics.addMetric('CreateOrderCreatedError', MetricUnits.Count, 1);

    return errorHandler(error);
  }
};

export const handler = middy(createOrderAdapter)
  .use(injectLambdaContext(logger))
  .use(captureLambdaHandler(tracer))
  .use(logMetrics(metrics));

We can see from the code above that this primary adapter is specific to API Gateway and Lambda, and conforms to the interface expected by the use case. We can quite easily swap this out for a container service, as shown below, by using a different primary adapter for the customer to interact with:

import { logger, schemaValidator, serviceErrorHandler } from '@shared/index';

import { Order } from '@dto/create-order';
import { config } from '@config/index';
import { createOrderUseCase } from '@use-cases/create-order';
import fastify from 'fastify';
import { schema } from '@schemas/create-order';

const server = fastify({
  logger: false,
});

const address = config.get('address');
const port = config.get('port');

server.get('/', async () => {
  return { version: 'v1' };
});

server.get('/health-check', async () => {
  logger.debug('health check successful');
  return { status: 'healthy' };
});

// this is the same as our api gateway and lambda integration for creating orders
server.post('/orders', async (request, reply) => {
  try {
    const orderPayload = request.body as Order;
    logger.info(`order payload: ${JSON.stringify(orderPayload)}`);

    // validate the order payload for create order
    schemaValidator(schema, orderPayload);

    // use the use case for the actual business logic and secondary adapters
    const createdOrder: Order = await createOrderUseCase(orderPayload);

    return reply.status(201).send(createdOrder);
  } catch (error) {
    return serviceErrorHandler(error, reply);
  }
});

server.listen({ host: address, port: parseInt(port, 10) }, (err, address) => {
  if (err) {
    logger.error(err.message);
    process.exit(1);
  }
  logger.info(`server listening at ${address}`);
});

We can see from the code above that we are using a Fastify primary adapter, i.e. a new API service run as a container, which again conforms to the same interface and use case.

“In this manner, swapping out the primary adapter i.e. the entry point for the customer, is all we do, and everything else remains the same.”

In this manner, swapping out the primary adapter, i.e. the entry point for the customer, is all we do, and everything else remains the same. This allows us to seamlessly move to containers (or back to Lambda functions) by changing the primary adapter alone.

Note — we could write this using separate route files to split this out even further if we wanted to.
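
A nice side effect of keeping the business logic in the use case is that we can unit test it without standing up either primary adapter. Here is a rough sketch only, assuming Jest and mocking the secondary adapter (the order fields are illustrative and would need to satisfy the repo’s real order schema):

import { createOrderUseCase } from '@use-cases/create-order';
import { saveOrder } from '@adapters/secondary/database-adapter';

// mock the secondary adapter so no call is ever made to DynamoDB
jest.mock('@adapters/secondary/database-adapter', () => ({
  saveOrder: jest.fn().mockImplementation((order) => Promise.resolve(order)),
}));

describe('create-order use case', () => {
  it('adds an id and created date and saves the order', async () => {
    // hypothetical payload - the real fields depend on the repo's order schema
    const order = { productId: 'PROD-123', quantity: 2 } as any;

    const created = await createOrderUseCase(order);

    expect(created.id).toBeDefined();
    expect(created.created).toBeDefined();
    expect(saveOrder).toHaveBeenCalledWith(expect.objectContaining(order));
  });
});

The same test covers us whether the use case is ultimately fronted by API Gateway and Lambda or by App Runner.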

What about the AWS CDK changes?

OK, all we have done so far is at the service code level, so now let’s look at the differences between API Gateway and App Runner.

API Gateway and Lambda

As a primary adapter entry point, we need to create the Lambda function itself as shown below:

...
// create the lambda for create order
const createOrderLambda: nodeLambda.NodejsFunction =
  new nodeLambda.NodejsFunction(this, 'CreateOrder', {
    runtime: lambda.Runtime.NODEJS_20_X,
    entry: path.join(
      __dirname,
      'src/adapters/primary/create-order/create-order.adapter.ts'
    ),
    memorySize: 1024,
    timeout: cdk.Duration.seconds(5),
    tracing: Tracing.ACTIVE,
    handler: 'handler',
    bundling: {
      minify: true,
      externalModules: [],
    },
    environment: {
      TABLE_NAME: this.table.tableName,
      ...lambdaPowerToolsConfig,
    },
  });
...

We then hook this up to our Amazon API Gateway as shown below:

...
// create the api gateway for orders
const api: apigw.RestApi = new apigw.RestApi(this, 'OrdersApi', {
  description: 'Orders API',
  endpointTypes: [apigw.EndpointType.REGIONAL],
  deploy: true,
  deployOptions: {
    stageName: 'prod',
    dataTraceEnabled: true,
    loggingLevel: apigw.MethodLoggingLevel.INFO,
    tracingEnabled: true,
    metricsEnabled: true,
  },
});
api.applyRemovalPolicy(cdk.RemovalPolicy.DESTROY);

// create the orders resource and add a post endpoint for our function
const orders: apigw.Resource = api.root.addResource('orders');
orders.addMethod(
  'POST',
  new apigw.LambdaIntegration(createOrderLambda, {
    proxy: true,
  })
);
...

Now our entry point for the service is the orders API which targets the Lambda function code here in the primary adapter: src/adapters/primary/create-order/create-order.adapter.ts

At this point, once deployed, our customers are interacting with our code through an Amazon API Gateway REST API which targets a Lambda function.
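
From the customer’s point of view the call itself looks the same whichever compute sits behind it; for example (the URL and order fields below are purely illustrative):

// the URL below is illustrative - it would be the API Gateway invoke URL (or our
// custom domain), and later the App Runner URL, without the client code changing
const createOrder = async (): Promise<void> => {
  const response = await fetch('https://orders.example.com/orders', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // hypothetical order fields for illustration only
    body: JSON.stringify({ productId: 'PROD-123', quantity: 2 }),
  });

  const order = await response.json();
  console.log(order); // the created order with its generated id and created date
};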

AWS App Runner and Containers

OK, so now let’s look at the different entry point of AWS App Runner as we want this to run as a container now:

// allows building the docker image and uploading to ecr
const imageAsset = new DockerImageAsset(this, 'ImageAssets', {
  directory: path.join(__dirname, '../'),
});

// create the apprunner service
const service: apprunner.Service = new apprunner.Service(this, 'Service', {
  accessRole,
  instanceRole,
  serviceName: 'orders-service-app-runner',
  cpu: apprunner.Cpu.QUARTER_VCPU,
  memory: apprunner.Memory.HALF_GB,
  healthCheck: apprunner.HealthCheck.http({
    path: '/health-check',
    interval: cdk.Duration.seconds(20),
    timeout: cdk.Duration.seconds(5),
    healthyThreshold: 3,
    unhealthyThreshold: 3,
  }),
  source: apprunner.Source.fromAsset({
    imageConfiguration: { port: 80 },
    asset: imageAsset,
  }),
  autoDeploymentsEnabled: true,
});
service.applyRemovalPolicy(cdk.RemovalPolicy.DESTROY);

We can see from the code above that we point our Docker (container) image asset at the directory containing the Dockerfile, and then create an App Runner service which utilises the built image.
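
The accessRole and instanceRole referenced above aren’t shown in the snippet; as a rough sketch only (assuming aws-cdk-lib/aws-iam imported as iam, and the table exposed on the stack as in the Lambda example), they might be defined along these lines:

// role App Runner uses to pull the image from ECR
const accessRole = new iam.Role(this, 'AppRunnerAccessRole', {
  assumedBy: new iam.ServicePrincipal('build.apprunner.amazonaws.com'),
});

// role the running container assumes at runtime
const instanceRole = new iam.Role(this, 'AppRunnerInstanceRole', {
  assumedBy: new iam.ServicePrincipal('tasks.apprunner.amazonaws.com'),
});

// the container needs to write orders to the DynamoDB table
this.table.grantWriteData(instanceRole);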

The Docker file (Dockerfile) looks like this:

# We use Node 20, the same as the Lambda functions
FROM node:20

# Create app directory
WORKDIR /app

# Install app dependencies
# (A wildcard is used to ensure both package.json and package-lock.json are copied)
COPY package*.json ./

RUN npm install

# Bundle app source i.e. the dist folder to the app folder in the container
COPY dist ./

ENV ADDRESS=0.0.0.0 PORT=80

# We expose port 80 and run the primary adapter using node
EXPOSE 80
CMD ["node", "./adapters/primary/orders-service/orders-service.adapter.js"]

We can see that the entry point for the code is our orders service primary adapter here: ./adapters/primary/orders-service/orders-service.adapter.js which takes it from the ‘dist’ folder.

As part of our CDK deployment, we run the following NPM script (npm run deploy):

"build": "tsc -p tsconfig.service.json && tsc-alias -p tsconfig.service.json",
"deploy": "npm run build && cdk deploy --outputs-file ./cdk-outputs.json --all",

We have a build step which uses a service-specific tsconfig file to compile the output into the ‘dist’ folder, and then ‘tsc-alias’ to rewrite the path aliases as relative paths, since the transpiled Node.js code running in the container can’t resolve them. We then deploy the stack, which pushes the container image to Amazon ECR and deploys the App Runner service.
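
As a rough illustration only (the exact aliases and compiler options in the repo may differ), tsconfig.service.json might look something like the following, with tsc-alias using the same paths to rewrite the aliases in the emitted JavaScript under ‘dist’:

{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "outDir": "dist",
    "baseUrl": "src",
    "paths": {
      "@adapters/*": ["adapters/*"],
      "@use-cases/*": ["use-cases/*"],
      "@dto/*": ["dto/*"],
      "@schemas/*": ["schemas/*"],
      "@shared/*": ["shared/*"],
      "@config/*": ["config/*"],
      "@errors/*": ["errors/*"]
    }
  },
  "include": ["src/**/*.ts"]
}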

We obviously wouldn’t have both primary adapters defined at the same time in our AWS CDK code, but I wanted to show how easy this is to swap out from containers and functions (and back again), whilst only changing the entry point (i.e. the primary adapter).
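
As mentioned earlier, a custom domain name is what makes the switch invisible to customers. As an illustration only (none of this is in the article’s repo, and the domain, zone and construct names are hypothetical), fronting the REST API with a custom domain in the CDK might look roughly like this, assuming the usual aws-cdk-lib imports for aws-apigateway (apigw), aws-certificatemanager (acm), aws-route53 (route53) and aws-route53-targets (targets):

...
// hypothetical domain for illustration only
const hostedZone = route53.HostedZone.fromLookup(this, 'Zone', {
  domainName: 'example.com',
});

const certificate = new acm.Certificate(this, 'OrdersCertificate', {
  domainName: 'orders.example.com',
  validation: acm.CertificateValidation.fromDns(hostedZone),
});

// the custom domain sits in front of the REST API stage
const apiDomain = new apigw.DomainName(this, 'OrdersDomain', {
  domainName: 'orders.example.com',
  certificate,
});

new apigw.BasePathMapping(this, 'OrdersMapping', {
  domainName: apiDomain,
  restApi: api, // the RestApi from the snippet above
});

// point the friendly domain at the API
new route53.ARecord(this, 'OrdersAliasRecord', {
  zone: hostedZone,
  recordName: 'orders',
  target: route53.RecordTarget.fromAlias(
    new targets.ApiGatewayDomain(apiDomain)
  ),
});
...

App Runner has its own custom domain association, so when swapping to the container the same domain can be re-pointed at the App Runner service and the customer-facing URL never changes.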

Wrapping up 👋🏽

I hope you enjoyed this quick article showing you how to utilise evolutionary architecture to future-proof your serverless products. I am always eager to understand your own approach, so please feel free to add comments to the article, and all feedback is always welcome.

About me

Hi, I’m Lee, an AWS Community Builder, Blogger, AWS certified cloud architect and Global Head of Technology & Architecture based in the UK; currently working for City Electrical Factors (UK) & City Electric Supply (US), having worked primarily in full-stack JavaScript on AWS for the past 6 years.

I consider myself a serverless advocate with a love of all things AWS, innovation, software architecture and technology.

*** The information provided reflects my own personal views and I accept no responsibility for the use of this information. ***
