Serverless Fitness Functions: What they are, and how to use them in the AWS CDK

We discuss the need for fitness functions in serverless cloud workloads and give examples of how we can automate this using ‘Fitness function-driven development’ with the AWS CDK and TypeScript

Serverless Advocate
17 min read · Jan 22, 2024


Preface

✔️ We discuss the need for fitness functions in our evolutionary architecture.
✔️ We discuss ‘Fitness function-driven development’ in serverless solutions.
✔️ We show how you can automate this using the AWS CDK.

Introduction 👋🏽

Picture your serverless system as a living entity, constantly adjusting and growing. In the world of evolutionary architecture, think of a fitness function as the vital signs for this service, ensuring it works as you designed it and, as it changes, that it doesn’t degrade. As an industry, we are used to testing our code, but certainly not our architecture, or its non-functional requirements.

“Architects can no longer rely on static upfront design to meet the change rate required to be successful in such an environment.” — AWS

As the surroundings shift — whether due to business requirements, user expectations, or solution changes — the fitness function serves as the array of assessments we rely on to guarantee our architecture stays well-suited for its intended purpose and meets our architectural requirements. If it is not well suited, alarms should be raised, progressive deployments stopped, or pipelines failed.

“As these architectures evolve over time, it’s crucial to have mechanisms in place for assessing how alterations affect the vital traits of the architecture and for preventing any degradation in these characteristics over time.”

We need to ensure that as the system evolves over time it remains healthy and fit

Just like in nature where species adapt to survive, in the serverless world, your architecture needs to adapt continuously to changes in requirements, workload, and technology — yet still conform to quality gates that you have as an organisation. Although engineers often care more about things like unit testing (among others), architects typically think holistically about the ‘illities’ that are often more difficult to test.

“The fitness function becomes your set of checks and balances, ensuring that your architecture is not only surviving but thriving in this ever-changing ecosystem”

The fitness function becomes your set of checks and balances, ensuring that your architecture is not only surviving but thriving in this ever-changing ecosystem, and meeting your agreed metric standards.

We always need to check that the health of our systems remains fit as they change and evolve

In this article, we will discuss what fitness functions are, how they fit into our serverless evolutionary architecture practices, and how we can automate them through ‘Fitness function-driven development’ using the AWS CDK.

The code for the repo can be found here:

What are we building? 🛠️

As we progress through the article we will build an example to talk through using TypeScript and the AWS CDK for a fictitious company called ‘Lee EV Charging’.

Our fictitious EV Charging company, Lee.

👇 Before we go any further — please connect with me on LinkedIn for future blog posts and Serverless news https://www.linkedin.com/in/lee-james-gilmore/

What is a Fitness Function? 👨‍💻

Let’s start by looking at three separate definitions:

“An architectural fitness function provides an objective integrity assessment of some architectural characteristic(s).” — https://www.oreilly.com/library/view/building-evolutionary-architectures/9781491986356/

and

“a fitness function is used to summarize how close a given design solution is to achieving the set aims. … may encompass existing verification criteria, such as unit testing, metrics, monitors, and so on. We believe architects can communicate, validate and preserve architectural characteristics in an automated, continual manner, which is the key to building evolutionary architectures.”
https://www.thoughtworks.com/en-gb/radar/techniques/architectural-fitness-function

and finally

“Any mechanism that provides an objective integrity assessment of some architecture characteristic or combination of architecture characteristics”
https://fundamentalsofsoftwarearchitecture.com/

If we break down these three statements, we can see that:

✔️ It is an architecture design assessment that checks evolving software.
✔️ It is based on measurable characteristics (metrics).
✔️ It validates if a design achieves a set of aims as it evolves (or if it degrades).
✔️ They should be automated and run continually (not manually).

“They support the practice of iterative change and offer confidence that the architecture is not degrading.”

What are some examples?

Now let’s look at some tangible examples of fitness functions that we typically need in our serverless solutions:

✔️ Cost — does the design meet our cost controls as it changes? An enabler is ensuring all resources are tagged correctly.

✔️ API Response Times — we have a requirement that all requests come back within 200ms, and we need to validate if the design meets this as it evolves.

✔️ Error Rates — we want to validate that downtime or error rates are within our acceptable range as we change the solution (which may be minimal to none).

✔️ Security Compliance — the design meets our security and solution standards as it changes.

✔️ Code Quality — does the code meet our quality gates as it evolves?

✔️ Load Testing — can the design handle the load that we expect as it changes over time? Or does this degrade?

✔️ Tagging — are all of our resources tagged to allow for cost tracking?

✔️ Fault tolerance — can the design meet our availability range whilst injecting faults using chaos engineering?

Fitness functions are guardrails for architecture design. They support the practice of iterative change and offer confidence that the architecture is not degrading, allowing the business to trust the architecture still caters to the main concerns as it adapts.

Pat Kua talking about Fitness Functions with Evolutionary Architecture

Types of Fitness Functions in our Serverless Solutions

If we look at the examples above, they fall into four different categories as shown below:

✔️ Atomic + Triggered
Examples: Unit testing and code quality checks, which run in our pipeline, as these values are atomic. We fail the pipeline if we degrade the quality compared to our gates (atomic values).

✔️ Atomic + Continual
Examples: This could be alerting on uncaught 500 errors, which is an atomic metric. Netflix’s Chaos Monkey (chaos engineering) is a great example: it continually injects random faults, allowing us to check atomic values continually (as well as any actual user-generated errors).

✔️ Holistic + Triggered
Examples: Load testing in our pipeline, which is triggered on a code change, and is holistic as it exercises the full solution through the API. If we fail the load test then we fail the pipeline (triggered). Other great examples are security scanning and e2e tests (Cypress, for example, generates errors and fails the pipeline if the expected user experience degrades).

✔️ Holistic + Continual
Examples: This could be CloudWatch alarms based on response time metrics for an API whilst the solution is continually used (i.e. tested in the wild outside of the pipeline, or through synthetics). Another great example is cost metrics, which are holistic (covering the full solution) and continual in nature as we track them.
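To make these categories concrete, here is a minimal sketch of the simplest kind, an atomic + triggered gate, as a pipeline step might run it. The coverage numbers and function here are illustrative only, not from the article’s repo:

```typescript
// Hypothetical atomic + triggered fitness function: an atomic value
// (line coverage percentage) is compared against an agreed gate, and the
// pipeline step throws (fails) when the quality degrades below it.
interface CoverageReport {
  linePercent: number;
}

export function assertCoverageFitness(
  report: CoverageReport,
  gatePercent = 80
): void {
  if (report.linePercent < gatePercent) {
    throw new Error(
      `Line coverage ${report.linePercent}% is below the ${gatePercent}% gate`
    );
  }
}
```

The key property is that the check is binary and objective: the pipeline either passes the gate or fails, with no human judgement in the loop.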

These examples lead to ‘Fitness function-driven development’ in the next section, and how we can automate this through AWS CDK L3 constructs.

Note: Many of these can be covered using the AWS DPRA (Deployment Pipeline Reference Architecture) as shown below:

Fitness function-driven development (FFDD) 🧪

We discussed earlier that fitness functions should be automated; so why not run these architecture validations in an automated fashion, just as we do with TDD? And how can we reduce the cognitive load on teams so they can do this with minimal effort, while getting agreement across an organisation?

This is where we look at ‘Fitness function-driven development’ where we can automatically run these checks through our AWS CDK code (L3 constructs); whether that be in our pipelines, or continually when the service is being used.

During test-driven development, we write tests to verify that features conform to desired business outcomes; with fitness function-driven development we can also write tests that measure a system’s alignment to architectural goals.
- https://www.thoughtworks.com/en-gb/insights/articles/fitness-function-driven-development

“As these architectures evolve over time, it’s crucial to have mechanisms in place for assessing how alterations affect the vital traits of the architecture and for preventing any degradation in these characteristics over time.

With fitness functions, we can write tests that measure a system’s alignment with architectural goals (attributes such as security, resilience, or stability) in a similar fashion to TDD.” — https://www.oreilly.com/library/view/building-evolutionary-architectures/9781491986356/

When outlining an architectural feature, the objective is to safeguard or improve on it as the architecture evolves. Fitness functions serve as a valuable tool for precisely this purpose.

Let’s look at how we can do this using the AWS CDK in the next section.

What are we building? 🏗️

In this example we want to check the following fitness functions through our AWS CDK code to ensure our architectural design meets our fitness tests:

  • The API responses are expected to be equal to or less than 200ms. Holistic + Continual.
  • There are no unhandled errors affecting our customers (500 error responses). Atomic + Continual.
  • All of our deployments of Lambda functions are safe (progressive with rollbacks). Atomic + Triggered.
  • Ensure that our solution meets non-functional requirements, such as security and solution best practices. Holistic + Triggered.
  • Ensure that all of our resources are tagged correctly to allow for cost controls. Holistic + Triggered.

These fitness functions allow us to:

  • Add caching, amend the deployment size, and change the function memory size if responses don’t meet expected levels — all of which can affect the outcome of the API response time as the design evolves (it’s holistic as it affects many services).
  • We can proactively be alerted to any unhandled exceptions (500 error codes) and resolve these for our customers to ensure the numbers are minimal to none (these are atomic values).
  • We can prevent the use of non-progressive deployments in our CDK applications using custom Aspects which can have adverse effects on our customers (the possibility to rollback if fitness is not improving). These are atomic values and binary.
  • We can use cdk-nag to ensure the compliance of our solution against best practices and compliance packs such as HIPAA, NIST, PCI DSS, etc. This is holistic in nature as it affects all of our services that make up the solution architecture.
  • We can create a custom aspect which will ensure that our Stack is tagged with all of the required tags needed for cost control and that all resources are tagged.

To allow this we are going to build the following architecture for ‘Lee EV Charging’:

We can see that this is a super basic app:

  • We have an API Gateway REST API where cars can start and stop charging. The API has a CloudWatch Alarm based on request latency, where it is triggered when the average latency is over 200ms.
  • The two Lambda functions that are integrated with the API have progressive deployments using AWS Code Deploy, and both have alarms based on 500 errors breaching the threshold.
  • Charging sessions are stored in a DynamoDB table.
  • Any alerts trigger an SNS Topic which sends an email to the engineering team.

Now let’s talk through the key code.

Talking through key code 👨‍💻

Now let’s talk through some of the key code, starting with API responses.

The API responses are <= 200 ms (Holistic & Continual)

Let’s start by amending our custom L3 rest-api construct to automatically add an alarm based on latency metrics:

import * as cdk from 'aws-cdk-lib';
import * as apigw from 'aws-cdk-lib/aws-apigateway';
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import * as logs from 'aws-cdk-lib/aws-logs';

import { Construct } from 'constructs';

interface ApiProps extends Pick<apigw.RestApiProps, 'description' | 'deploy'> {
  /**
   * The stage name which the api is being used with
   */
  stageName: string;
  /**
   * The api description
   */
  description: string;
  /**
   * Whether or not to deploy the api
   */
  deploy: boolean;
  /**
   * The latency threshold
   */
  latencyThreshold?: number;
}

type FixedApiProps = Omit<apigw.RestApiProps, 'description' | 'deploy'>;

export class Api extends Construct {
  public readonly api: apigw.RestApi;
  public readonly alarm: cloudwatch.Alarm;

  constructor(scope: Construct, id: string, props: ApiProps) {
    super(scope, id);

    const latencyThreshold = props.latencyThreshold ?? 200;

    const fixedProps: FixedApiProps = {
      defaultCorsPreflightOptions: {
        allowOrigins: apigw.Cors.ALL_ORIGINS,
        allowCredentials: true,
        allowMethods: ['OPTIONS', 'POST', 'GET', 'PUT', 'DELETE', 'PATCH'],
        allowHeaders: ['*'],
      },
      endpointTypes: [apigw.EndpointType.REGIONAL],
      cloudWatchRole: true,
      retainDeployments: false,
      restApiName: `api-${props.stageName}`,
      disableExecuteApiEndpoint: false,
      deployOptions: {
        stageName: 'api',
        loggingLevel: apigw.MethodLoggingLevel.INFO,
        tracingEnabled: true,
        metricsEnabled: true,
        accessLogDestination: new apigw.LogGroupLogDestination(
          new logs.LogGroup(this, 'Logs' + id, {
            logGroupName: `ev-charging-api-logs-${props.stageName}`,
            removalPolicy: cdk.RemovalPolicy.DESTROY,
            retention: logs.RetentionDays.ONE_DAY,
          })
        ),
      },
    };

    this.api = new apigw.RestApi(this, id, {
      // fixed props
      ...fixedProps,
      // custom props
      description: props.description
        ? props.description
        : `API ${props.stageName}`,
      deploy: props.deploy ?? true,
    });

    // create a cloudwatch alarm for the Latency metric
    const latencyMetric = this.api.metricLatency({
      statistic: 'Average',
    });

    // create the alarm
    this.alarm = new cloudwatch.Alarm(this, id + 'LatencyAlarm', {
      alarmName: id + 'LatencyAlarm',
      alarmDescription: `Latency over ${latencyThreshold} ms limit alarm`,
      metric: latencyMetric,
      threshold: latencyThreshold,
      evaluationPeriods: 1,
      treatMissingData: cloudwatch.TreatMissingData.NOT_BREACHING,
      comparisonOperator:
        cloudwatch.ComparisonOperator.GREATER_THAN_OR_EQUAL_TO_THRESHOLD,
    });
  }
}

You can see from the code above that we create an alarm based on the Rest API ‘metricLatency’ property automatically when somebody uses our rest-api construct, which is triggered when the average latency is over 200 ms in a five-minute period.
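As a plain-TypeScript illustration of what that alarm evaluates (a sketch only; the real evaluation happens inside CloudWatch, not in our code), the check amounts to:

```typescript
// Sketch of the alarm's semantics: the average latency over an evaluation
// period is compared >= the threshold for a single period. An empty period
// mirrors TreatMissingData.NOT_BREACHING by not alarming.
export function latencyAlarmBreaches(
  periodSamplesMs: number[],
  thresholdMs = 200
): boolean {
  if (periodSamplesMs.length === 0) return false; // missing data: not breaching
  const average =
    periodSamplesMs.reduce((sum, ms) => sum + ms, 0) / periodSamplesMs.length;
  return average >= thresholdMs;
}
```

Note that because the statistic is an average, a handful of slow requests can be masked by many fast ones; a percentile statistic (p99, for example) is a common alternative if that matters to you.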

We have added the following function, which is called from the use cases and allows us to inject some artificial latency when we set the LATENCY environment variable to ‘true’:

import { config } from '@config';

const latencyBool = config.get('createLatencyBool') as string;

function stringToBool(value: string): boolean {
  return value.toLowerCase() === 'true';
}

export async function createLatency(ms: number): Promise<void> {
  if (stringToBool(latencyBool)) {
    await new Promise((resolve) => setTimeout(resolve, ms));
  }
}

This is a great example of reducing the cognitive load on teams and ensuring that our latency does not degrade as we make changes to our solutions; and that the fitness function is embedded in our solution architecture.
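As an example of where this is called, a use case might look something like the following. This is a hypothetical handler, with createLatency stubbed locally (the real one reads its flag from '@config') so the sketch runs standalone:

```typescript
// Stand-in for the configuration value; the real code reads this from config.
const latencyEnabled = 'false';

// Local stub of the article's createLatency helper.
async function createLatency(ms: number): Promise<void> {
  if (latencyEnabled.toLowerCase() === 'true') {
    await new Promise((resolve) => setTimeout(resolve, ms));
  }
}

// Hypothetical use case: inject optional artificial latency before the real work.
export async function startChargingUseCase(
  sessionId: string
): Promise<{ sessionId: string; started: boolean }> {
  await createLatency(250);
  return { sessionId, started: true };
}
```

With the flag off this is a no-op, so the artificial latency can be toggled purely through configuration to exercise the alarm.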

If we now look at the alarm and latency after setting the environment variable we can see the alarm triggered below.

An image showing the alarm being triggered when average latency is over 200ms

There are no unhandled errors affecting our customers (Atomic & Continual)

The next fitness function is around unhandled errors, ensuring these are minimal to none. This means we proactively alert if our solution degrades when we make changes.

We do this by updating our custom L3 construct, ProgressiveLambda, to include a CloudWatch alarm based on custom metrics, i.e. the fitness function is embedded into all of our Lambda functions by default.

import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';
import * as actions from 'aws-cdk-lib/aws-cloudwatch-actions';
import * as codeDeploy from 'aws-cdk-lib/aws-codedeploy';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as nodeLambda from 'aws-cdk-lib/aws-lambda-nodejs';
import * as logs from 'aws-cdk-lib/aws-logs';
import * as sns from 'aws-cdk-lib/aws-sns';

import { Duration, RemovalPolicy } from 'aws-cdk-lib';

import { Construct } from 'constructs';

interface ProgressiveLambdaProps extends nodeLambda.NodejsFunctionProps {
  /**
   * The stage name which the lambda is being used with
   */
  stageName: string;
  /**
   * The code deploy application which this lambda is part of
   */
  application: codeDeploy.LambdaApplication;
  /**
   * The code deploy lambda deployment config
   */
  deploymentConfig: codeDeploy.ILambdaDeploymentConfig;
  /**
   * Whether or not the alarm is enabled
   */
  alarmEnabled: boolean;
  /**
   * A reference to the sns topic which the alarm will use
   */
  snsTopic: sns.Topic;
  /**
   * The metric name for our alarm
   */
  metricName: string;
  /**
   * The namespace for our alarm
   */
  namespace: string;
  /**
   * The service name for our alarm
   */
  serviceName: string;
}

export class ProgressiveLambda extends Construct {
  public readonly lambda: nodeLambda.NodejsFunction;
  public readonly alias: lambda.Alias;
  public readonly alarm: cloudwatch.Alarm;
  public readonly uncaughtErrorsAlarm: cloudwatch.Alarm;

  public readonly deploymentGroup: codeDeploy.LambdaDeploymentGroup;
  private readonly application: codeDeploy.LambdaApplication;
  private readonly deploymentConfig: codeDeploy.ILambdaDeploymentConfig;

  constructor(scope: Construct, id: string, props: ProgressiveLambdaProps) {
    super(scope, id);

    this.application = props.application;
    this.deploymentConfig = props.deploymentConfig;

    // creation of the lambda passing through the props
    this.lambda = new nodeLambda.NodejsFunction(this, id, {
      ...props,
    });

    // create a metric filter for 500 errors which we create an alarm for
    const uncaughtErrorMetricFilter = this.lambda.logGroup.addMetricFilter(
      'UncaughtErrorsFilter',
      {
        filterPattern: logs.FilterPattern.literal('{ $.statusCode = 500 }'),
        metricName: id + 'UncaughtErrors',
        metricNamespace: props.namespace,
      }
    );

    this.uncaughtErrorsAlarm = new cloudwatch.Alarm(
      this,
      'UncaughtErrorsAlarm',
      {
        alarmName: id + 'UncaughtErrorsAlarm',
        alarmDescription: 'Error 500 over limit',
        metric: uncaughtErrorMetricFilter.metric(),
        threshold: 1,
        comparisonOperator:
          cloudwatch.ComparisonOperator.GREATER_THAN_OR_EQUAL_TO_THRESHOLD,
        evaluationPeriods: 1,
        treatMissingData: cloudwatch.TreatMissingData.NOT_BREACHING,
      }
    );

    // the lambda alias
    this.alias = new lambda.Alias(this, id + 'Alias', {
      aliasName: props.stageName,
      version: this.lambda.currentVersion,
    });

    // a fixed prop cloudwatch alarm for errors on deployment
    this.alarm = new cloudwatch.Alarm(this, id + 'Failure', {
      alarmDescription: `${props.namespace}/${props.metricName} deployment errors > 0 for ${id}`,
      actionsEnabled: props.alarmEnabled,
      treatMissingData: cloudwatch.TreatMissingData.NOT_BREACHING, // ensure the alarm is only triggered for a period
      metric: new cloudwatch.Metric({
        metricName: props.metricName,
        namespace: props.namespace,
        statistic: cloudwatch.Stats.SUM,
        dimensionsMap: {
          service: props.serviceName,
        },
        period: Duration.minutes(1),
      }),
      threshold: 1,
      comparisonOperator:
        cloudwatch.ComparisonOperator.GREATER_THAN_OR_EQUAL_TO_THRESHOLD,
      evaluationPeriods: 1,
    });

    // add the alarm for the progressive deployment errors
    this.alarm.addAlarmAction(new actions.SnsAction(props.snsTopic));
    this.alarm.applyRemovalPolicy(RemovalPolicy.DESTROY);

    // add the alarm for the 500 errors filter
    this.uncaughtErrorsAlarm.addAlarmAction(
      new actions.SnsAction(props.snsTopic)
    );
    this.uncaughtErrorsAlarm.applyRemovalPolicy(RemovalPolicy.DESTROY);

    // the code deploy deployment group
    this.deploymentGroup = new codeDeploy.LambdaDeploymentGroup(
      this,
      id + 'CanaryDeployment',
      {
        alias: this.alias,
        deploymentConfig: this.deploymentConfig,
        alarms: [this.alarm],
        application: this.application,
      }
    );
  }
}

You can see from the code above that the metric filter for any of our functions has a filter pattern matching 500 status code errors, and if one or more occur in a given period, we alarm and send an email to the team.
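Note that the filter pattern `{ $.statusCode = 500 }` only matches structured JSON log lines with a `statusCode` field. A self-contained sketch (a local predicate, not the CloudWatch implementation) of what the filter keys on:

```typescript
// Mimics the metric filter locally: returns true for log lines that the
// '{ $.statusCode = 500 }' pattern would count towards the metric.
export function matchesUncaughtErrorFilter(rawLogLine: string): boolean {
  try {
    const parsed = JSON.parse(rawLogLine) as { statusCode?: number };
    return parsed.statusCode === 500;
  } catch {
    return false; // non-JSON lines never match a JSON filter pattern
  }
}
```

This is why structured (JSON) logging in the handlers matters: plain-text error logs would silently never increment the metric.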

To test this we can update the environment variable RANDOM_ERROR on the Lambda function to ‘true’ as shown below, which will start randomly throwing 500 errors:

This is implemented in the function in random-errors.ts:

import { config } from '@config';

const randomErrorBool = config.get('randomErrorBool') as string;

function stringToBool(value: string): boolean {
  return value.toLowerCase() === 'true';
}

export function randomError(): void {
  const randomCondition = Math.random() < 0.8;

  if (stringToBool(randomErrorBool) && randomCondition) {
    throw new Error('Random error occurred!');
  }
}

Now when we hit the endpoints using the Postman collection in the repo we can see the alarm trigger, and an email is sent to notify us of the degradation.

Example of our alarm when we have one or more 500 status code errors in a one-minute period

All of our deployments of Lambda functions are safe (Atomic & Triggered)

We can easily add a custom Aspect that ensures that all of our Lambda functions have been created using our ProgressiveLambda custom construct which allows for canary deployments, as opposed to using the NodeJsFunction construct directly (which by default is an all-at-once deployment without a safe rollback). This fitness function is now embedded in code into our solutions.

import { Annotations, IAspect } from 'aws-cdk-lib';

import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';
import { IConstruct } from 'constructs';
import { ProgressiveLambda } from '../../shared-constructs';

export class ProgressiveLambdaRule implements IAspect {
  // ensure that we don't use the NodejsFunction construct directly, so if we
  // find one on the tree...
  public visit(node: IConstruct): void {
    if (node instanceof NodejsFunction) {
      // ...ensure that the NodejsFunction was created within a ProgressiveLambda construct
      if (!(node.node.scope instanceof ProgressiveLambda)) {
        Annotations.of(node).addError(
          'NodeJsFunction used directly. Please use ProgressiveLambda construct.'
        );
      }
    }
  }
}

You can see from the code above that we visit the app’s construct tree looking for any instances of a NodeJsFunction, and we check that its parent is an instance of a ProgressiveLambda.
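An Aspect is essentially a visitor over the construct tree. To show the idea outside the CDK, here is a self-contained sketch over a plain tree (the node shape and names are hypothetical, not CDK types):

```typescript
// A plain-tree version of the rule: flag any 'NodejsFunction' node whose
// direct parent is not a 'ProgressiveLambda' node.
export interface TreeNode {
  id: string;
  kind: 'NodejsFunction' | 'ProgressiveLambda' | 'Other';
  children: TreeNode[];
}

export function findUnsafeFunctions(
  node: TreeNode,
  parent?: TreeNode
): string[] {
  const errors: string[] = [];
  if (node.kind === 'NodejsFunction' && parent?.kind !== 'ProgressiveLambda') {
    errors.push(`${node.id}: NodejsFunction used directly`);
  }
  for (const child of node.children) {
    errors.push(...findUnsafeFunctions(child, node));
  }
  return errors;
}
```

The CDK version works the same way, except `Aspects.of(stack)` drives the traversal for us and `Annotations.addError` is what turns a finding into a failed synth.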

We can then use this rule by adding it to our Stateless stack as shown below:

// ensure that we only use our progressive lambda function and not NodeJsFunction
cdk.Aspects.of(this).add(new ProgressiveLambdaRule());

If we were to add a NodeJsFunction directly to our code we would see that the synth of the stack fails with the following error message:

Ensure that our solution meets security and solution best practices (Holistic & Triggered)

In our example we have added cdk-nag to our application which checks for solution best practices and security standards; and our synth will fail if we have any issues (whether that be from the outset or if the application degrades with changes). This fitness function is therefore embedded into our solution code.

We add the following to both our ‘Stateful’ and ‘Stateless’ stacks so we run cdk-nag during the synth process:

// cdk nag check and suppressions to ensure compliance to best practices
cdk.Aspects.of(this).add(new AwsSolutionsChecks({ verbose: false }));
NagSuppressions.addStackSuppressions(this, [...supressions], true);

And similar to above we will see error messages for any lack of compliance.
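For reference, a suppressions entry pairs a rule id with a justification. A hedged example of the shape (the rule id and reason below are illustrative, not the repo’s actual suppressions):

```typescript
// Illustrative cdk-nag suppressions: each entry names a finding id and the
// reason we accept it. Keeping reasons explicit turns suppressions into an
// auditable record rather than a silent bypass.
export const supressions = [
  {
    id: 'AwsSolutions-APIG2', // example rule id only
    reason: 'Request validation is performed in the Lambda handlers',
  },
];
```

Suppressions should be the exception; each one is a deliberate, documented deviation from the fitness function rather than a way to quieten it.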

Ensure that all of our resources are tagged correctly to allow for cost controls (Holistic & Triggered)

Finally, we want to fail the synth if we find that our Stack is not tagged with our required tags.

import { Annotations, IAspect, Stack } from 'aws-cdk-lib';

import { IConstruct } from 'constructs';

export class RequiredTagsChecker implements IAspect {
  constructor(private readonly requiredTags: string[]) {}

  // ensure that our stacks are tagged correctly
  public visit(node: IConstruct): void {
    if (!(node instanceof Stack)) return;

    if (!node.tags.hasTags()) {
      Annotations.of(node).addError(`There are no tags on "${node.stackName}"`);
    }

    this.requiredTags.forEach((tag) => {
      if (!Object.keys(node.tags.tagValues()).includes(tag)) {
        Annotations.of(node).addError(
          `"${tag}" is missing from stack with id "${node.stackName}"`
        );
      }
    });
  }
}

We can see from the code above that we use a custom Aspect which checks the stack to see if it has any tags added, and if not, it fails the synth straight away and throws an error.

We then also check that it has the required tags that we use in our application, which are pulled in from a separate file as shown below:

// for compliance ensure we have the required tags added to the stack
cdk.Aspects.of(this).add(new RequiredTagsChecker(requiredTags));

The required tags come from the following file:

import * as cdk from 'aws-cdk-lib';

export const requiredTags = [
  'ev:operations:StackId',
  'ev:operations:ServiceId',
  'ev:operations:ApplicationId',
  'ev:cost-allocation:Owner',
  'ev:cost-allocation:ApplicationId',
];

export type Tags = Record<string, string>;

export function addTagsToStack(stack: cdk.Stack, tags: Tags): void {
  Object.entries(tags).forEach((tag) => {
    cdk.Tags.of(stack).add(...tag);
  });
}

We also have the function addTagsToStack which we can use in our applications to ensure that we add the required tags to our stacks (these filter down to all taggable child constructs):

...
// we add the tags for the stack
const tags: Tags = {
  'ev:operations:StackId': 'Stateless',
  'ev:operations:ServiceId': 'EV',
  'ev:operations:ApplicationId': 'Api',
  'ev:cost-allocation:Owner': 'Lee',
  'ev:cost-allocation:ApplicationId': 'Api',
};
...

// add the tags to all constructs in the stack
// note: stack level tags apply to all supported resources by default
addTagsToStack(this, tags);

Our Aspect check is in essence a check that we have used the addTagsToStack function. This again means our fitness functions are embedded into our code.
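Stripped of the CDK types, the check the Aspect performs boils down to a set difference, sketched here using the article’s requiredTags list:

```typescript
// Given the tags actually applied to a stack, return the required tags that
// are missing. A non-empty result is what fails the synth in the Aspect.
const requiredTags = [
  'ev:operations:StackId',
  'ev:operations:ServiceId',
  'ev:operations:ApplicationId',
  'ev:cost-allocation:Owner',
  'ev:cost-allocation:ApplicationId',
];

export function missingRequiredTags(
  appliedTags: Record<string, string>
): string[] {
  return requiredTags.filter((tag) => !(tag in appliedTags));
}
```

Keeping the required list in one shared file means the fitness function and the helper that satisfies it can never drift apart.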

Conclusion

As we come to the end of this article, it is important to note that there is definitely some cross-over with non-functional requirements; fitness functions give us an automated, coded way to test these requirements against our architectural design, and, more importantly, as it evolves.

The value comes from constantly checking the ‘fitness’ of our designs automatically as they evolve in line with changes in business requirements, user trends, and design changes; and taking this cognitive load off engineering teams by automating it.

Wrapping up 👋🏽

I hope you enjoyed this article, and if you did then please feel free to share it and leave feedback!

Please go and subscribe to my YouTube channel for similar content!

I would love to connect with you also on any of the following:

https://www.linkedin.com/in/lee-james-gilmore/
https://twitter.com/LeeJamesGilmore

If you enjoyed the posts please follow my profile Lee James Gilmore for further posts/series, and don’t forget to connect and say Hi 👋

Please also use the ‘clap’ feature at the bottom of the post if you enjoyed it! (You can clap more than once!!)

About me

Hi, I’m Lee, an AWS Community Builder, Blogger, AWS certified cloud architect, and Global Head of Technology & Architecture based in the UK; currently working for City Electrical Factors (UK) & City Electric Supply (US), having worked primarily in full-stack JavaScript on AWS for the past 6 years.

I consider myself a serverless advocate with a love of all things AWS, innovation, software architecture, and technology.

*** The information provided are my own personal views and I accept no responsibility for the use of the information. ***
