Photo by David von Diemar on Unsplash

Serverless OpenAPI & Amazon API Gateway with the AWS CDK — Part 1

How we can utilise the “OpenAPI first approach” alongside Amazon API Gateway in our AWS CDK TypeScript solutions.

Serverless Advocate
16 min read · May 28, 2023


Preface

✔️ We discuss using the OpenAPI first approach for our CDK apps. 🔩
✔️ We discuss publishing/hosting our API spec on Amazon CloudFront. ☁️
✔️ We create a mock version of the service for other teams. 🧑🏿‍🤝‍🧑🏿
✔️ We demo this and also walk through the code. 🧑🏽‍💻

Part 1 — OpenAPI specification first approach (JSON).
Part 2 — OpenAPI code first approach.

Introduction

This series discusses the various ways we can create our API Gateways on AWS with the AWS CDK and TypeScript, and the different approaches to importing and exporting our OpenAPI specifications. We will cover the advantages and disadvantages of each approach throughout the series.

Part 1 of the series (this article) covers an “OpenAPI first approach” to development, whereby our Amazon API Gateway and its associated integrations (to Lambda functions and so on) are all generated from the uploaded spec.

Our example for Part 1 in the series

The diagram above shows:

  1. The engineering team create the OpenAPI 3.0 spec up front, for both a mock version of the service, and the actual production service itself.
  2. Within the CDK stack code we use the SpecRestApi construct to generate our Amazon API Gateway and the associated integrations with the backing Lambda functions using the OpenAPI spec.
  3. The CDK stack is deployed via the cdk deploy command which uses CloudFormation under the hood.
  4. The OpenAPI spec through CloudFormation generates the API Gateway(s) and Lambda functions.
  5. We also deploy the OpenAPI spec and the Swagger UI assets to a private S3 bucket using the s3deploy.BucketDeployment construct.
  6. We put Amazon CloudFront in front of the private S3 bucket and use origin access identity to allow the two services to work in tandem. This allows other teams to access our API spec for the Movies domain service.

👇 Before we go any further — please connect with me on LinkedIn for future blog posts and Serverless news https://www.linkedin.com/in/lee-james-gilmore/

Let’s talk through the code 🎤

Let’s now walk through the code and setup, starting with the mock service.

Our Mock API Service ✔️

OK, so let’s start with our mock OpenAPI spec in the movies-mock-api.ts file, which means that we can quickly make this accessible to other teams:

Note - some of the endpoints have been removed for brevity.
import { movies } from '../data';
import { version } from '../shared/open-api-info/schema-version';

export const getMockApiJson = () => {
return {
openapi: '3.0.3',
'x-amazon-apigateway-request-validators': {
validation: {
validateRequestBody: true,
validateRequestParameters: true,
},
},
info: {
title: 'Movie Mock API Example',
version,
description: 'A Mock API for movies',
},
paths: {
[`/${version}/movies`]: {
get: {
'x-amazon-apigateway-request-validator': 'validation',
summary: 'Get all movies',
responses: {
'200': {
description: 'Successful response',
content: {
'application/json': {
schema: {
type: 'array',
items: { $ref: '#/components/schemas/Movie' },
},
},
},
},
'400': {
description: 'Bad Request',
content: {
'application/json': {
schema: { $ref: '#/components/schemas/ErrorResponse' },
},
},
},
'404': {
description: 'Not Found',
content: {
'application/json': {
schema: { $ref: '#/components/schemas/ErrorResponse' },
},
},
},
},
'x-amazon-apigateway-integration': {
type: 'mock',
requestTemplates: {
'application/json': '{"statusCode": 200}',
},
responses: {
default: {
statusCode: '200',
responseTemplates: {
'application/json': JSON.stringify(movies),
},
},
},
},
},
post: {
'x-amazon-apigateway-request-validator': 'validation',
summary: 'Create a new movie',
requestBody: {
required: true,
content: {
'application/json': {
schema: { $ref: '#/components/schemas/NewMovie' },
},
},
},
responses: {
'200': {
description: 'Successful response',
content: {
'application/json': {
schema: { $ref: '#/components/schemas/Movie' },
},
},
},
'400': {
description: 'Bad Request',
content: {
'application/json': {
schema: { $ref: '#/components/schemas/ErrorResponse' },
},
},
},
'404': {
description: 'Not Found',
content: {
'application/json': {
schema: { $ref: '#/components/schemas/ErrorResponse' },
},
},
},
},
'x-amazon-apigateway-integration': {
type: 'mock',
requestTemplates: {
'application/json':
'#set($context.requestOverride.path.body = $input.body)\n{\n "statusCode": 200,\n}',
},
responses: {
default: {
statusCode: '200',
responseTemplates: {
'application/json':
'#set($body = $util.parseJson($context.requestOverride.path.body))\n{"id": "c4887ba4-0782-471c-bddc-af50265c96b9",\n "title": "$body.title",\n "rating": "$body.rating",\n "year": "$body.year"\n}',
},
},
},
},
},
},
},
components: {
schemas: {
ErrorResponse: {
type: 'object',
required: ['message'],
properties: { message: { type: 'string' } },
},
NewMovie: {
type: 'object',
required: ['title', 'year', 'rating'],
properties: {
title: {
type: 'string',
pattern: '^[a-zA-Z0-9 ]*$',
minLength: 1,
maxLength: 100,
description:
'The movie title (alphanumeric characters and spaces only)',
},
year: {
type: 'string',
pattern: '^\\d{4}$',
description: 'The release year of the movie',
},
rating: {
type: 'string',
enum: ['U', 'PG', '12', '15', '18'],
pattern: '^[UPG]|1[258]$',
description: 'The rating of the movie',
},
},
},
Movie: {
type: 'object',
required: ['id', 'title', 'year', 'rating'],
properties: {
id: {
type: 'string',
pattern:
'^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-4[0-9a-fA-F]{3}-[89aAbB][0-9a-fA-F]{3}-[0-9a-fA-F]{12}$',
description: 'The movie ID (numeric characters only)',
},
title: {
type: 'string',
pattern: '^[a-zA-Z0-9 ]*$',
minLength: 1,
maxLength: 100,
description:
'The movie title (alphanumeric characters and spaces only)',
},
year: {
type: 'string',
pattern: '^\\d{4}$',
description: 'The release year of the movie',
},
rating: {
type: 'string',
enum: ['U', 'PG', '12', '15', '18'],
pattern: '^[UPG]|1[258]$',
description: 'The rating of the movie',
},
},
},
},
},
};
};

We can see from the code above that we have a TypeScript function which returns the JSON, which allows us to manipulate it dynamically where we need to.

The first instance of this is the version of the API, as we want this to be consistent across all files in the solution:

import { version } from '../shared/open-api-info/schema-version';
...
info: {
title: 'Movie Mock API Example',
version,
description: 'A Mock API for movies',
},
paths: {
[`/${version}/movies`]: {
...
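For illustration, the shared schema-version file could be as simple as the following (a sketch; the actual file in the repo may differ):

// shared/open-api-info/schema-version.ts (illustrative)
// a single source of truth for the API version used across the solution
export const version = 'v1';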

We also return hard coded data for the mock API, which we pull in via the following import:

import { movies } from '../data';

This data can then be returned in the mocked responses (in this example, for listing all movies):

...
'x-amazon-apigateway-integration': {
type: 'mock',
requestTemplates: {
'application/json': '{"statusCode": 200}',
},
responses: {
default: {
statusCode: '200',
responseTemplates: {
'application/json': JSON.stringify(movies),
},
},
},
},
},
...

The three mocked endpoints also have the type in the ‘x-amazon-apigateway-integration’ set to ‘mock’, which means API Gateway will set these up as mock integrations with no backing service.

“This feature enables API developers to generate API responses from API Gateway directly, without the need for an integration backend. As an API developer, you can use this feature to unblock dependent teams that need to work with an API before the project development is complete” — https://docs.aws.amazon.com/apigateway/latest/developerguide/how-to-mock-integration.html

Things are a little more interesting for the POST request for a new movie, where our mock uses VTL to echo the request payload (i.e. the movie being created) back in the response to the user, along with a hard coded ID:

'x-amazon-apigateway-integration': {
type: 'mock',
requestTemplates: {
'application/json':
'#set($context.requestOverride.path.body = $input.body)\n{\n "statusCode": 200,\n}',
},
responses: {
default: {
statusCode: '200',
responseTemplates: {
'application/json':
'#set($body = $util.parseJson($context.requestOverride.path.body))\n{"id": "c4887ba4-0782-471c-bddc-af50265c96b9",\n "title": "$body.title",\n "rating": "$body.rating",\n "year": "$body.year"\n}',
},
},
},
},
},
Note - you could go as far as generating a new UUID each time using VTL if you wanted to.
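For example, one option would be to lean on the request ID that API Gateway exposes on the mapping template context, which is a unique, UUID-formatted value per request. A sketch of what the response template could look like (untested; the repo keeps the hard coded ID above):

responses: {
  default: {
    statusCode: '200',
    responseTemplates: {
      // $context.requestId is unique per request and formatted as a UUID
      'application/json':
        '#set($body = $util.parseJson($context.requestOverride.path.body))\n{"id": "$context.requestId",\n "title": "$body.title",\n "rating": "$body.rating",\n "year": "$body.year"\n}',
    },
  },
},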

We then use the SpecRestApi construct to generate a version of our mock API based on our mock OpenAPI spec:

// create the mock api for development teams to consume
// (the getMockApiJson function returns our mock open api json)
const mockApi: apigw.SpecRestApi = new apigw.SpecRestApi(this, 'MockApi', {
  apiDefinition: apigw.ApiDefinition.fromInline(getMockApiJson()),
  deploy: true,
  deployOptions: {
    stageName: 'api',
    loggingLevel: apigw.MethodLoggingLevel.INFO,
  },
  endpointTypes: [apigw.EndpointType.REGIONAL],
  description: `Movies Mock API ${stageName}`,
});
mockApi.applyRemovalPolicy(RemovalPolicy.DESTROY);

If we now deploy this and look in the console we will see our mocked API:

We can now allow other teams to utilise our mocked version of the API, which is useful in e2e and integration test situations. It is also useful for frontend engineers looking to get a head start on development with an actual API to work against.
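As a quick illustration, consuming the mock is as simple as calling the deployed stage URL (the URL below is purely illustrative, and this assumes Node 18+ where fetch is available globally):

// smoke test against the deployed mock API (illustrative URL)
const mockApiUrl = 'https://abc123.execute-api.eu-west-1.amazonaws.com/api';

const response = await fetch(`${mockApiUrl}/v1/movies`);
const movies = await response.json();
console.log(movies); // the hard coded movies from the data file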

Our Prod API Service ✔️

OK, now let’s move onto the actual main production service in our example. We start by creating a role for the ‘apigateway.amazonaws.com’ service to assume:

// we create an api service role for api gateway
const apiRole: Role = new Role(this, 'apiRole', {
  roleName: 'apiRole',
  assumedBy: new ServicePrincipal('apigateway.amazonaws.com'),
});

And we create our three Lambda functions for list-movies, get-movie-by-id and create-movie (list movies code shown below):

// list movies lambda handler
const listMoviesLambda: nodeLambda.NodejsFunction =
  new nodeLambda.NodejsFunction(this, 'ListMoviesLambda', {
    runtime: lambda.Runtime.NODEJS_18_X,
    entry: path.join(
      __dirname,
      '../src/handlers/list-movies/list-movies.handler.ts'
    ),
    memorySize: 1024,
    handler: 'handler',
    bundling: {
      minify: true,
    },
  });
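The handler itself is not the focus of this article, but for context it might look something like the following minimal sketch (the import path is an assumption; see the repo for the real handler):

import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

import { movies } from '../../data'; // hypothetical relative path to the shared data file

export const handler = async (
  _event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // return the hard coded movies - we don't use a datastore in this example
  return {
    statusCode: 200,
    body: JSON.stringify(movies),
  };
};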

We now override the logical ID of the function as this needs to be deterministic and match our OpenAPI spec (more on this later):

const forceListMoviesLambdaId = listMoviesLambda.node
  .defaultChild as lambda.CfnFunction;
forceListMoviesLambdaId.overrideLogicalId(functionNames.listMovies);

As the SpecRestApi will create our integrations for us between API Gateway and the three Lambda functions, we need to allow this role to invoke our functions:

// Grant the apiRole permissions to invoke this Lambda
// through a lambda integration
listMoviesLambda.grantInvoke(apiRole);
getMovieLambda.grantInvoke(apiRole);
createMovieLambda.grantInvoke(apiRole);

We can now use the SpecRestApi to generate our API using the getApiJson function which we will cover next:

// we create the specRestApi which builds all of our
// lambda integrations for us, as well as adding the basic
// schema validation etc
const api: apigw.SpecRestApi = new apigw.SpecRestApi(this, 'Api', {
  // the getApiJson function returns our open api json
  apiDefinition: apigw.ApiDefinition.fromInline(getApiJson(accountId)),
  deploy: false,
  endpointTypes: [apigw.EndpointType.REGIONAL],
  description: `Movies API ${stageName}`,
});

The getApiJson function returns the OpenAPI spec with the required ‘x-amazon-apigateway-integration’ extensions, which point each REST endpoint to the given Lambda function, as well as setting up all of the basic validation:

// get the raw api json for our integrations
export const getApiJson = (accountId: string): Record<string, any> => {
return {
openapi: '3.0.3',
'x-amazon-apigateway-request-validators': {
validation: {
validateRequestBody: true,
validateRequestParameters: true,
},
},
info: {
title: 'Movie API Example',
version,
description: 'An API for movies',
},
paths: {
[`/${version}/movies`]: {
get: {
'x-amazon-apigateway-request-validator': 'validation',
'x-amazon-apigateway-integration': {
type: 'aws_proxy',
httpMethod: 'POST',
uri: {
'Fn::Sub': `arn:aws:apigateway:\${AWS::Region}:lambda:path/2015-03-31/functions/\${${functionNames.listMovies}.Arn}/invocations`,
},
passthroughBehavior: 'when_no_match',
credentials: `arn:aws:iam::${accountId}:role/apiRole`,
responses: {
default: {
statusCode: '200',
responseTemplates: {
'application/json': '',
},
},
},
},
summary: 'Get all movies',
responses: {
'200': {
description: 'Successful response',
content: {
'application/json': {
schema: {
type: 'array',
items: {
$ref: '#/components/schemas/Movie',
},
},
},
},
},
'400': {
description: 'Bad Request',
content: {
'application/json': {
schema: {
$ref: '#/components/schemas/ErrorResponse',
},
},
},
},
'404': {
description: 'Not Found',
content: {
'application/json': {
schema: {
$ref: '#/components/schemas/ErrorResponse',
},
},
},
},
},
},
...

The first part to note is the following, which ensures that we have API Gateway request validators set up based on our models:

...
'x-amazon-apigateway-request-validators': {
validation: {
validateRequestBody: true,
validateRequestParameters: true,
},
},
...

You will also notice that we set up the integration for the Lambda functions using the following example:

...
get: {
'x-amazon-apigateway-request-validator': 'validation',
'x-amazon-apigateway-integration': {
type: 'aws_proxy',
httpMethod: 'POST',
uri: {
'Fn::Sub': `arn:aws:apigateway:\${AWS::Region}:lambda:path/2015-03-31/functions/\${${functionNames.listMovies}.Arn}/invocations`,
},
passthroughBehavior: 'when_no_match',
credentials: `arn:aws:iam::${accountId}:role/apiRole`,
responses: {
default: {
statusCode: '200',
responseTemplates: {
'application/json': '',
},
},
},
},
...

This shows that:

  • We use the validators that we created further up for request validation.
  • We set the type as ‘aws_proxy’ as this is the type of integration we need.
  • We give this endpoint the assumed API Role we created.
  • We set the URI, i.e. the Lambda function which will be invoked, to the ARN of the function.

If we look at this URI line of code:

'Fn::Sub': `arn:aws:apigateway:\${AWS::Region}:lambda:path/2015-03-31/functions/\${${functionNames.listMovies}.Arn}/invocations`,

We can see that we use the CloudFormation ‘Sub’ function to replace the region at deploy time with the region from the stack, and we set the function name from a separate functions file which is also being used in overriding the logical ID of the function.

This means that they are deterministic, and we won’t deploy an OpenAPI spec with the wrong function name for the ARN lookup:

// we want to ensure that we don't have any mistakes made
// between the open-api spec and the functions in cdk code
export enum functionNames {
  createMovie = 'CreateMovieLambda',
  getMovieById = 'GetMovieByIdLambda',
  listMovies = 'ListMoviesLambda',
}

If we didn’t use this approach we could end up with a different logical ID between the JSON and the Lambda function for the integration(s) which would fail at deployment time (as shown below)

An example error when the logical ID is different in the JSON compared to the function

We also create a new API Gateway documentation version for our API, with a new construct ID each time and a RETAIN removal policy (this ensures that we get a new version of the documentation each time).

As we said above, we are using the native request validation in API Gateway, which means we can define the schemas in the OpenAPI spec; note that we are actually pulling these in from another file:

...
components: {
schemas: {
ErrorResponse: errorResponse,
NewMovie: createMovieSchema,
Movie: movieSchema,
},
},
...
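A sketch of the errorResponse schema, based on the ErrorResponse definition embedded in the mock spec (shown here for completeness; the repo file may differ slightly):

// shared schema for error responses (mirroring the mock spec definition)
export const errorResponse = {
  type: 'object',
  required: ['message'],
  properties: { message: { type: 'string' } },
};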

If we look at the createMovieSchema we will see this is defined and tested in its own file, which essentially means API Gateway will apply this validation for us on incoming request payloads for new movies:

export const createMovieSchema = {
  type: 'object',
  required: ['title', 'year', 'rating'],
  properties: {
    title: {
      type: 'string',
      pattern: '^[a-zA-Z0-9 ]*$',
      minLength: 1,
      maxLength: 100,
      description: 'The movie title (alphanumeric characters and spaces only)',
    },
    year: {
      type: 'string',
      pattern: '^\\d{4}$',
      description: 'The release year of the movie',
    },
    rating: {
      type: 'string',
      enum: ['U', 'PG', '12', '15', '18'],
      pattern: '^[UPG]|1[258]$',
      description: 'The rating of the movie',
    },
  },
};
An example of our request validator being set and the request body having to conform to the ‘NewMovie’ schema

This means we can also utilise this validation elsewhere, for example in the Lambda function, if we so wished:

import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

import { Movie } from '../../data/movies';
import { createMovieSchema } from './create-movie.schema';
import { schemaValidator } from '../../shared/utils/schema-validator';
import { v4 as uuid } from 'uuid';

export const handler = async ({
  body,
}: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  try {
    if (!body) throw new Error('No movie');

    // let's return the same movie in this example and
    // we won't commit to a datastore
    const movie: Movie = JSON.parse(body) as Movie;

    // we could optionally perform the validation in code
    // which is the same one as the basic validation on apigw
    schemaValidator(createMovieSchema, movie);

    return {
      statusCode: 201,
      body: JSON.stringify({
        ...movie,
        id: uuid(),
      }),
    };
  } catch (error) {
    let errorMessage = 'Unknown error';
    if (error instanceof Error) errorMessage = error.message;
    console.error(errorMessage);

    return {
      statusCode: 400,
      body: JSON.stringify({ message: errorMessage }),
    };
  }
};
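The schemaValidator utility itself isn’t shown in this article; a minimal version built on Ajv might look like the following (an assumption on my part; the repo version may differ):

import Ajv from 'ajv';

// validate a payload against a JSON schema, throwing if it is invalid
export const schemaValidator = (
  schema: Record<string, any>,
  payload: unknown
): void => {
  const ajv = new Ajv({ allErrors: true });
  const validate = ajv.compile(schema);

  if (!validate(payload)) {
    throw new Error(ajv.errorsText(validate.errors));
  }
};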

Documentation Versions ✔️

We also create documentation versions of our OpenAPI spec per stage.

You can see we are utilising the same version from a separate file, as we do across all of the files, to ensure this remains consistent:

// we create some documentation which will be pushed to apigw
const documentation = new apigw.CfnDocumentationVersion(
  this,
  'ApiDocumentation' + version, // this ensures we get a new version each time
  {
    restApiId: api.restApiId,
    documentationVersion: version,
    description: 'api schema',
  }
);
documentation.applyRemovalPolicy(RemovalPolicy.RETAIN);

When we create the stage for our API deployment we point to this documentation version as shown below:

// create stage of api with documentation version
const stage = new apigw.Stage(this, 'ApiStage', {
  deployment: deployment,
  documentationVersion: version,
  stageName: 'api',
  loggingLevel: apigw.MethodLoggingLevel.INFO,
});
stage.applyRemovalPolicy(RemovalPolicy.DESTROY);
stage.node.addDependency(documentation);
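Note that the deployment referenced above is one we create ourselves, since we set deploy: false on the SpecRestApi; a minimal sketch (the construct ID is an assumption):

// as deploy: false is set on the SpecRestApi, we own the deployment ourselves
const deployment: apigw.Deployment = new apigw.Deployment(this, 'Deployment', {
  api,
});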

If we look in the console we will see the following as a history of our API documentation:

An example of our documentation history of OpenAPI changes

Deploying the OpenAPI spec for other teams ✔️

OK, so we now have a mock API deployed, and we have our production API deployed, but how can we push this to a place for other teams to consume and use from an OpenAPI spec perspective?

We start by creating an S3 bucket to house the OpenAPI spec as shown below:

// the bucket which will house swagger ui and our openapi spec
const bucket: s3.Bucket = new s3.Bucket(this, 'AssetsBucket', {
  bucketName: 'serverless-pro-openapi-bucket-json',
  removalPolicy: RemovalPolicy.DESTROY,
  autoDeleteObjects: true,
  websiteIndexDocument: 'index.html', // this is our swagger ui main file
  websiteErrorDocument: 'index.html',
  publicReadAccess: false,
});

We then set up a CloudFront distribution to point to the S3 bucket (the origin access identity is removed for brevity, but you can see this in the code repo):

// cloudfront distribution for the openapi spec which is public i.e. no auth for this example
const cloudFrontDistribution = new cloudFront.CloudFrontWebDistribution(
  this,
  'Distribution',
  {
    originConfigs: [
      {
        s3OriginSource: {
          s3BucketSource: bucket,
          originAccessIdentity,
        },
        behaviors: [
          {
            isDefaultBehavior: true,
            defaultTtl: Duration.minutes(0), // we set the cache values so there is no caching
            minTtl: Duration.minutes(0),
            maxTtl: Duration.minutes(0),
          },
        ],
      },
    ],
    comment: `${props.stageName} client web distribution`,
    defaultRootObject: 'index.html',
    priceClass: cloudFront.PriceClass.PRICE_CLASS_100,
    enabled: true,
  }
);
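For completeness, the origin access identity referenced above would be created along these lines (a sketch; the repo may use a different construct ID or additional settings):

// the origin access identity allows cloudfront to read from the private bucket
const originAccessIdentity = new cloudFront.OriginAccessIdentity(
  this,
  'OriginAccessIdentity',
  {
    comment: 'Allows CloudFront access to the private OpenAPI spec bucket',
  }
);
bucket.grantRead(originAccessIdentity);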

We then utilise the s3deploy.BucketDeployment construct to deploy a redacted version of the OpenAPI spec to our S3 bucket using the getApiJsonRedacted function:

new s3deploy.BucketDeployment(this, 'ClientBucketDeployment', {
  sources: [
    // deploy our open api json data
    s3deploy.Source.jsonData(
      'movies-openapi.json',
      getApiJsonRedacted(accountId) // this removes our integration content
    ),
    // deploy the raw swagger-ui setup
    s3deploy.Source.asset('src/swagger-ui/dist/'),
  ],
  destinationBucket: bucket,
});

You may be asking why we have redacted certain information from the JSON spec. Well, if we have external consumers using the OpenAPI spec then I don’t want them seeing the underlying integrations and Lambda function ARNs, from a security point of view.

We can see the code for removing the ‘x-amazon-apigateway-integration’ parts of the OpenAPI spec before deploying it here:

// redact properties from a dynamic json schema i.e. the 'x-amazon-apigateway-integration' values
// as we don't want to leak this information to people viewing our openapi spec online
export const getApiJsonRedacted = (accountId: string): Record<string, any> => {
  const json: Record<string, any> = getApiJson(accountId);
  return removeDynamicObjects(json, 'x-amazon-apigateway-integration');
};
Note - this uses a custom function called removeDynamicObjects (the full version lives in the repo).
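A compact sketch of how such a recursive key-removal helper could look (the real implementation may differ):

// recursively remove every occurrence of a given key from a nested object
export const removeDynamicObjects = (
  obj: Record<string, any>,
  keyToRemove: string
): Record<string, any> => {
  return Object.entries(obj).reduce((acc, [key, value]) => {
    if (key === keyToRemove) return acc; // drop the unwanted key entirely

    acc[key] =
      value && typeof value === 'object' && !Array.isArray(value)
        ? removeDynamicObjects(value, keyToRemove)
        : value;

    return acc;
  }, {} as Record<string, any>);
};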

The final piece of the puzzle, as you will also notice, is that we are deploying the ‘src/swagger-ui/dist/’ assets in the same way. These are essentially the distribution build of the Swagger UI files, pointed at our uploaded OpenAPI spec, which results in the following:

An example of our OpenAPI spec hosted for other teams
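Under the hood, the Swagger UI dist only needs its initializer pointed at the uploaded spec file, something along these lines (an assumption about how the repo wires it up):

// src/swagger-ui/dist/swagger-initializer.js (sketch)
window.onload = () => {
  window.ui = SwaggerUIBundle({
    url: './movies-openapi.json', // the spec file deployed alongside the ui
    dom_id: '#swagger-ui',
    presets: [SwaggerUIBundle.presets.apis],
  });
};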

Advantages and Disadvantages

OK, so before moving on to the next part of the series with a different approach, let’s discuss the advantages and disadvantages of this one:

Advantages ✔️

  1. We go with an OpenAPI definition first approach, which means QA team members and security can start building out their integration, security and load tests, exercising the proposed patterns (regexes), min/max values, under/over posting, load, etc.
  2. It means that we can generate our mock API from day one based on our definitions, so client (front end) team members can start to work.
  3. It means other teams working in parallel can potentially start using this mock API or agreed definition.
  4. We don’t have to write all of the wiring up in CDK code between the Lambda functions and the API methods. This reduces the stack code massively.
  5. We can easily deploy the agreed OpenAPI definition to S3/CloudFront, or a DevX platform like Backstage, from day one. This allows other teams to constantly get access to our latest versioned API schemas.

Disadvantages ❌

  1. We need to write the definition up front, with many teams preferring this to be generated for them based on CDK code instead.
  2. We need to wire up the Lambda function integration in JSON rather than in code, which you could argue is more prone to mistakes (as this is deterministic, we could likely auto-generate a lot of this boilerplate if needed).
  3. Some teams may feel that working with JSON over TypeScript CDK objects is potentially cumbersome.
  4. There is no easy way to generate a mock service (for non-production stages only) based on the same schemas. This feels fairly cumbersome with an OpenAPI spec first approach, as we need to create two separate specs (one for our prod API and one for the mock service).

The ask! 💙

From an #AWSWishlist perspective, I would love to see the SpecRestApi construct in the ‘aws-cdk-lib/aws-apigateway’ module take a set of placeholders which can be replaced in the ApiDefinition dynamically, for example key/value pairs where we could pass in references to the Lambda functions (or any other replacements, such as a version). This would perform the CloudFormation Sub function for us under the hood, without the workarounds in this article around logical ID overrides and lookup files for consistency:

const api: apigw.SpecRestApi = new apigw.SpecRestApi(this, 'Api', {
  apiDefinition: apigw.ApiDefinition.fromInline(openApiFile, {
    // optional key value replacements
    version: 'v1',
    createMovieLambda: createMovieLambda.functionArn,
  }),
  deploy: false,
  endpointTypes: [apigw.EndpointType.REGIONAL],
  description: `Movies API ${stageName}`,
});

Closing

In the next part of the series (Part 2) we will look at building out the specification in CDK code instead.

Part 1 — OpenAPI specification first approach (JSON).
Part 2 — OpenAPI code first approach.

Wrapping up 👋

Please go and subscribe on my YouTube channel for similar content!

I would love to connect with you also on any of the following:

https://www.linkedin.com/in/lee-james-gilmore/
https://twitter.com/LeeJamesGilmore

If you enjoyed the posts please follow my profile Lee James Gilmore for further posts/series, and don’t forget to connect and say Hi 👋

Please also use the ‘clap’ feature at the bottom of the post if you enjoyed it! (You can clap more than once!!)

About me

Hi, I’m Lee, an AWS Community Builder, Blogger, AWS certified cloud architect and Global Serverless Architect based in the UK; currently working for City Electrical Factors (UK) & City Electric Supply (US), having worked primarily in full-stack JavaScript on AWS for the past 6 years.

I consider myself a serverless advocate with a love of all things AWS, innovation, software architecture and technology.

*** The information provided represents my own personal views and I accept no responsibility for any use made of this information. ***
