Photo by julien Tromeur on Unsplash

AI Clean Code Generation through a Plop CLI using TypeScript & Amazon Bedrock

Using generative AI to enhance our clean code generation Plop CLI so it can generate the complex parts, built with TypeScript and the AWS CDK.

Serverless Advocate
15 min read · Oct 16, 2023

--

Preface

✔️ We create a CLI to autogenerate our serverless clean code app. 🔩
✔️ We add AI using Amazon Bedrock to auto generate complex code. 🧑🏽‍💻

Introduction 👋🏽

As we discussed in the previous article, there are two key areas that I am personally passionate about when it comes to Serverless:

Developer Experience + Clean Code

You can see the original article here, which covered using Plop to create a basic Node CLI which will autogenerate our AWS CDK app and code based on the lightweight clean code approach:

In this article we will take this approach a step further and integrate AI code generation through Amazon Bedrock; further enhancing the lightweight clean code example:

The GitHub code repository for this article can be found here:

👇 Before we go any further — please connect with me on LinkedIn for future blog posts and Serverless news https://www.linkedin.com/in/lee-james-gilmore/

What is Amazon Bedrock? 🤖

Before we delve into the article and see the CLI in action, let’s cover what Amazon Bedrock is, how we can utilise generative AI, and how it can further enhance our CLI.

“Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon with a single API, along with a broad set of capabilities you need to build generative AI applications, simplifying development while maintaining privacy and security” — https://aws.amazon.com/bedrock/

We can therefore complement our static, deterministic Plop CLI code generation with generative AI using Amazon Bedrock, to generate additional dynamic code flexibly based on user input into the CLI!

💡 Note: This article will not cover provisioning Amazon Bedrock on your AWS account, as this process currently requires requesting access to the base models as shown below.

Example screen in the console to request access to specific AI models

Let’s run the CLI 👨‍🚀

Note: This is a very basic example and is not production
ready at all! It is simply to show a serverless CLI in action
using Plop to autogenerate our code in the correct structure
using basic templating! Don't @ me!

Let’s run the CLI to see it in action, so we can see how AI can complement our developer experience with dynamic, complex code:

A YouTube video showing the CLI and AI in action

Covering some of the new terms 🤖

Let’s now cover some of the new terms used when making an SDK call to Amazon Bedrock:

  • Temperature — Tunes the degree of randomness in generation; lower temperatures mean less random generations. (default is 1)
  • Top P — If set to a float less than 1, only the smallest set of most probable tokens whose probabilities add up to top_p or higher is kept for generation. (default is 0.999)
  • Top K — Limits sampling to the K most likely candidate tokens at each step, which can be used to cut off low-probability tokens. (default is 250)
  • Maximum Length — The maximum number of tokens to generate; responses are not guaranteed to fill up to the maximum desired length.
  • Stop sequences — Up to four sequences where the API will stop generating further tokens; the returned text will not contain the stop sequence.
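As a sketch, the parameters above can be captured as a TypeScript type matching the Claude v2 body shape that Bedrock expects (the `withDefaults` helper and its defaults are illustrative, taken from the values listed above):

```typescript
// Shape of the Claude v2 body for Bedrock's InvokeModel call
type ClaudeInvokeBody = {
  prompt: string; // must be wrapped as "\n\nHuman:<prompt>\n\nAssistant:"
  temperature?: number; // randomness; lower = more deterministic (default 1)
  top_p?: number; // nucleus sampling cut-off (default 0.999)
  top_k?: number; // number of candidate tokens considered (default 250)
  max_tokens_to_sample: number; // hard cap on generated tokens
  stop_sequences?: string[]; // up to four sequences that halt generation
};

// Illustrative helper that applies the defaults listed above
function withDefaults(body: ClaudeInvokeBody): Required<ClaudeInvokeBody> {
  return {
    temperature: 1,
    top_p: 0.999,
    top_k: 250,
    stop_sequences: ['\n\nHuman:'],
    ...body, // caller-supplied values override the defaults
  };
}

const body = withDefaults({
  prompt: '\n\nHuman: Say hello\n\nAssistant:',
  max_tokens_to_sample: 300,
});
console.log(body.temperature); // 1, since it was not overridden
```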


Let’s look at a basic example of a request we can make through the console playground:

Example: “Human: Please return me a TypeScript JSON schema only without any code explanations which includes the relevant patterns for a Customer who has an id property which is a UUID, a firstname property which is a string, a surname property which is a string, and a status value which can either be 'VALID' or 'INVALID'”

This gives us the following response:

Here is the JSON schema without explanations:

```json
{
  "type": "object",
  "properties": {
    "id": {
      "type": "string",
      "pattern": "^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$"
    },
    "firstname": {
      "type": "string"
    },
    "surname": {
      "type": "string"
    },
    "status": {
      "type": "string",
      "enum": ["VALID", "INVALID"]
    }
  },
  "required": ["id", "firstname", "surname", "status"]
}
```

An SDK call, generated by the console for the given prompt, looks like this:

{
  "modelId": "anthropic.claude-v2",
  "contentType": "application/json",
  "accept": "*/*",
  "body": "{\"prompt\":\"Human: \\n\\nHuman: Please return me a TypeScript JSON schema only without any code explanations which includes the relevant patterns for a Customer who has an id property which is a UUID, a firstname property which is a string, a surname property which is a string, and a status value which can either be 'VALID' or 'INVALID'\\n\\nAssistant:\",\"max_tokens_to_sample\":300,\"temperature\":0.2,\"top_k\":250,\"top_p\":0.999,\"stop_sequences\":[\"\\n\\nHuman:\"],\"anthropic_version\":\"bedrock-2023-05-31\"}"
}

With the body property being stringified in the following shape:

{
  "prompt": "\n\nHuman:<prompt>\n\nAssistant:",
  "temperature": float,
  "top_p": float,
  "top_k": int,
  "max_tokens_to_sample": int,
  "stop_sequences": ["\n\nHuman:"]
}

Now that we have an idea of the terminology, the models, and the shape of an SDK call, let’s look at using the AWS SDK v3 via an API as part of our CLI to autogenerate the more dynamic parts of the code using Amazon Bedrock:

Let’s walk through the architecture 📐

OK, now the fun part, let’s walk through the architecture for the following:

1. The user uses the CLI to autogenerate the AWS CDK app with the lightweight deterministic hexagonal architecture design.

2. As part of the CLI prompts, the user is asked what the relevant use case DTO looks like, which subsequently calls our AI Prompt API Gateway. An example of the dynamic input is:

“id as uuid, orderNo as string, status as ‘Valid’ or ‘Invalid’”

3. A Lambda function is invoked for our AI business logic from our Amazon API Gateway API, which calls our use case.

4. It first checks whether we have seen that prompt before, and if so it returns its previous result from our DynamoDB cache.

5. If the prompt result is not cached, it will call Amazon Bedrock to generate the result using AI, and subsequently cache it for future calls.
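The cache-aside flow in steps 4 and 5 can be sketched independently of AWS; here the `Map` and `generate` callback are stand-ins for the DynamoDB table and Bedrock call used in the real code:

```typescript
// Generic cache-aside helper: return the cached value if present,
// otherwise generate it and cache it for subsequent calls.
async function cacheAside<T>(
  key: string,
  cache: Map<string, T>, // stand-in for the DynamoDB table
  generate: () => Promise<T> // stand-in for the Bedrock call
): Promise<T> {
  const cached = cache.get(key);
  if (cached !== undefined) return cached; // step 4: cache hit

  const fresh = await generate(); // step 5: call the model...
  cache.set(key, fresh); // ...and cache the result for next time
  return fresh;
}
```

The payoff is that identical prompts only ever cost one model invocation; every repeat is served from the cache.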

What would the result of this be?

Based on the dynamic input above of “id as uuid, orderNo as string, status as ‘Valid’ or ‘Invalid’” we get the following exported TypeScript JSON schema:

export const schema = {
  $schema: 'http://json-schema.org/draft-07/schema#',
  type: 'object',
  properties: {
    id: {
      type: 'string',
      pattern:
        '^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$',
    },
    orderNo: { type: 'string' },
    status: { type: 'string', enum: ['Valid', 'Invalid'] },
  },
  required: ['id', 'orderNo', 'status'],
  additionalProperties: false,
};

and the following typed and exported TypeScript DTO:

export type CreateOrderDto = {
  id: string;
  orderNo: string;
  status: 'Valid' | 'Invalid';
};
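As a quick sanity check, the `id` pattern in the generated schema compiles to a RegExp we can exercise directly (plain TypeScript here; the repo itself validates via its `schemaValidator` helper):

```typescript
// The UUID pattern emitted by the model, compiled and tested directly
const uuidPattern =
  '^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$';

const isUuid = (value: string): boolean => new RegExp(uuidPattern).test(value);

console.log(isUuid('3f2504e0-4f89-11d3-9a0c-0305e82c3301')); // true
console.log(isUuid('not-a-uuid')); // false
```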

This means that we have fully generated a CDK application in TypeScript with the more complex schemas and types autogenerated through AI and our CLI!

Let’s walk through the code 🧑🏽‍💻

OK, now let’s walk through the code, starting with the AI Prompt API.

AI Prompt API 🤖

Firstly we create our stateful stack which contains our DynamoDB cache table as shown below:

import * as cdk from 'aws-cdk-lib';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';

import { Construct } from 'constructs';
import { RemovalPolicy } from 'aws-cdk-lib';

export class AiServiceStatefulStack extends cdk.Stack {
  public readonly table: dynamodb.Table;

  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // create the dynamodb table for storing cached prompts
    this.table = new dynamodb.Table(this, 'PromptsTable', {
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
      encryption: dynamodb.TableEncryption.AWS_MANAGED,
      pointInTimeRecovery: false,
      contributorInsightsEnabled: true,
      removalPolicy: RemovalPolicy.DESTROY,
      partitionKey: {
        name: 'id',
        type: dynamodb.AttributeType.STRING,
      },
    });
  }
}

We then create our stateless stack for our API Gateway, Lambda Function, and IAM policy for Amazon Bedrock:

import * as apigw from 'aws-cdk-lib/aws-apigateway';
import * as cdk from 'aws-cdk-lib';
import * as dynamodb from 'aws-cdk-lib/aws-dynamodb';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as nodeLambda from 'aws-cdk-lib/aws-lambda-nodejs';
import * as path from 'path';

import { Construct } from 'constructs';
import { Tracing } from 'aws-cdk-lib/aws-lambda';

export interface StatelessStackProps extends cdk.StackProps {
  table: dynamodb.Table;
}

export class AiServiceStatelessStack extends cdk.Stack {
  public readonly table: dynamodb.Table;

  constructor(scope: Construct, id: string, props: StatelessStackProps) {
    super(scope, id, props);

    this.table = props.table;

    const lambdaPowerToolsConfig = {
      LOG_LEVEL: 'DEBUG',
      POWERTOOLS_LOGGER_LOG_EVENT: 'true',
      POWERTOOLS_LOGGER_SAMPLE_RATE: '1',
      POWERTOOLS_TRACE_ENABLED: 'enabled',
      POWERTOOLS_TRACER_CAPTURE_HTTPS_REQUESTS: 'captureHTTPsRequests',
      POWERTOOLS_SERVICE_NAME: 'AiService',
      POWERTOOLS_TRACER_CAPTURE_RESPONSE: 'captureResult',
      POWERTOOLS_METRICS_NAMESPACE: 'Advocate',
    };

    const actionPromptLambda: nodeLambda.NodejsFunction =
      new nodeLambda.NodejsFunction(this, 'ActionPromptLambda', {
        runtime: lambda.Runtime.NODEJS_18_X,
        entry: path.join(
          __dirname,
          'src/adapters/primary/action-prompt/action-prompt.adapter.ts'
        ),
        memorySize: 1024,
        functionName: 'action-prompt-lambda',
        timeout: cdk.Duration.seconds(29),
        tracing: Tracing.ACTIVE,
        handler: 'handler',
        bundling: {
          minify: true,
          externalModules: [],
        },
        environment: {
          TABLE_NAME: this.table.tableName,
          ...lambdaPowerToolsConfig,
        },
      });

    // give the lambda access to the database table
    this.table.grantReadWriteData(actionPromptLambda);

    // allow the lambda function to use amazon bedrock
    actionPromptLambda.addToRolePolicy(
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ['bedrock:InvokeModel'],
        resources: [
          'arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2',
        ],
      })
    );
    actionPromptLambda.addToRolePolicy(
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: ['bedrock:ListFoundationModels'],
        resources: ['*'],
      })
    );

    // create the api gateway prompt api
    const api: apigw.RestApi = new apigw.RestApi(this, 'AiApi', {
      description: 'Advocate AI API',
      endpointTypes: [apigw.EndpointType.EDGE],
      deploy: true,
      deployOptions: {
        stageName: 'prod',
        dataTraceEnabled: true,
        loggingLevel: apigw.MethodLoggingLevel.INFO,
        tracingEnabled: true,
        metricsEnabled: true,
      },
    });

    const prompts: apigw.Resource = api.root.addResource('prompts');

    // note: this is not production ready and you would use the most appropriate authentication for you
    const apiKey = api.addApiKey('AiApiKey', {
      apiKeyName: 'ai-api-key',
      value: 'ce6fd602-7923-4230-9517-f3a4cb8a25a4',
      description: 'The AI API Key',
    });
    apiKey.applyRemovalPolicy(cdk.RemovalPolicy.DESTROY);

    // hook up the post method to the lambda
    prompts.addMethod(
      'POST',
      new apigw.LambdaIntegration(actionPromptLambda, {
        proxy: true,
      }),
      {
        apiKeyRequired: true, // ensure that the consumer needs to send the api key
      }
    );

    // create a usage plan for the api with the relevant key
    const usagePlan = api.addUsagePlan('APIUsagePlan', {
      apiStages: [{ stage: api.deploymentStage }],
      name: 'AI-API-Usage-Plan-With-Key',
      description: 'The AI API Usage Plan',
      throttle: {
        rateLimit: 10,
        burstLimit: 2,
      },
    });
    usagePlan.addApiKey(apiKey);
    usagePlan.applyRemovalPolicy(cdk.RemovalPolicy.DESTROY);
  }
}

Now that we have the serverless architecture set up, we create our primary adapter for the Lambda function being invoked via API Gateway:

import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import {
  ActionPromptUseCaseResult,
  actionPromptUseCase,
} from '@use-cases/action-prompt';
import {
  MetricUnits,
  Metrics,
  logMetrics,
} from '@aws-lambda-powertools/metrics';
import { Tracer, captureLambdaHandler } from '@aws-lambda-powertools/tracer';
import { errorHandler, logger, schemaValidator } from '@shared/index';

import { ActionPromptDto } from '@dto/action-prompt';
import { ValidationError } from '@errors/validation-error';
import { injectLambdaContext } from '@aws-lambda-powertools/logger';
import middy from '@middy/core';
import { schema } from './action-prompt.schema';

const tracer = new Tracer();
const metrics = new Metrics();

export const actionPromptAdapter = async ({
  body,
}: APIGatewayProxyEvent): Promise<APIGatewayProxyResult> => {
  try {
    if (!body) throw new ValidationError('no payload body');

    const prompt = JSON.parse(body) as ActionPromptDto;

    // validate the input prompt from api gateway
    schemaValidator(schema, prompt);

    // call the use case to return the prompt (either new or cached)
    const created: ActionPromptUseCaseResult = await actionPromptUseCase(
      prompt
    );

    metrics.addMetric('SuccessfulActionPromptCreated', MetricUnits.Count, 1);

    return {
      statusCode: 200,
      body: JSON.stringify(created),
    };
  } catch (error) {
    let errorMessage = 'Unknown error';
    if (error instanceof Error) errorMessage = error.message;
    logger.error(errorMessage);

    metrics.addMetric('ActionPromptCreatedError', MetricUnits.Count, 1);

    return errorHandler(error);
  }
};

export const handler = middy(actionPromptAdapter)
  .use(injectLambdaContext(logger))
  .use(captureLambdaHandler(tracer))
  .use(logMetrics(metrics));

This calls our use case (business logic), which retrieves the cached prompt if it exists; otherwise it calls Amazon Bedrock to dynamically generate the result, and then caches it for subsequent calls:

import { getISOString, logger, schemaValidator } from '@shared/index';
import {
  retrievePrompt,
  savePrompt,
} from '@adapters/secondary/database-adapter';

import { ActionPromptDto } from '@dto/action-prompt';
import { config } from '@config/config';
import { createPrompt } from '@adapters/secondary/prompt-adapter';
import { schema } from '@schemas/prompt';
import { v5 as uuid } from 'uuid';

const namespace = config.get('namespace');

export type ActionPromptUseCaseResult = {
  type: string;
  result: string;
};

// generate or return a cached response from amazon bedrock
export async function actionPromptUseCase(
  prompt: ActionPromptDto
): Promise<ActionPromptUseCaseResult> {
  const createdDate = getISOString();
  const id = uuid(JSON.stringify(prompt), namespace);

  // validate the prompt
  schemaValidator(schema, prompt);

  const cachedPromptResult = await retrievePrompt(id);

  // if we already have a cached prompt result then return it
  if (cachedPromptResult) {
    logger.info(`Returning cached response for id: ${id}`);

    return {
      type: prompt.type,
      result: cachedPromptResult.result,
    };
  }

  const promptResult = await createPrompt(prompt);

  logger.info(`New prompt result: ${promptResult}`);

  // we use uuid v5 for caching as it is deterministic given the same payload
  await savePrompt({
    id,
    createdDate: createdDate,
    prompt: JSON.stringify(prompt),
    result: JSON.stringify(promptResult),
  });

  logger.info(`Cached response for id: ${id}`);

  return {
    type: prompt.type,
    result: JSON.stringify(promptResult),
  };
}
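The uuid v5 id above is what makes the cache work: it hashes the stringified payload, so the same prompt always yields the same id. A stdlib-only sketch of the same idea (not the article’s actual helper) using node:crypto:

```typescript
import { createHash } from 'node:crypto';

// Deterministic cache key: identical prompt payloads always hash to the
// same id, so a repeated prompt hits the DynamoDB cache instead of Bedrock.
// (uuid v5 works similarly under the hood: a SHA-1 of namespace + name.)
function cacheKeyFor(payload: unknown): string {
  return createHash('sha1').update(JSON.stringify(payload)).digest('hex');
}
```

Two calls with the same payload return the same key; any change to the payload produces a different key and therefore a fresh Bedrock call.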

It uses the secondary database adapter to retrieve and cache results in the DynamoDB table:

import {
  DynamoDBClient,
  GetItemCommand,
  PutItemCommand,
} from '@aws-sdk/client-dynamodb';
import { marshall, unmarshall } from '@aws-sdk/util-dynamodb';

import { PromptDto } from '@dto/action-prompt';
import { config } from '@config/config';
import { logger } from '@shared/index';

const dynamoDb = new DynamoDBClient({});

// cache a prompt in the dynamodb table
export async function savePrompt(
  actionPromptDto: PromptDto
): Promise<PromptDto> {
  const tableName = config.get('tableName');

  const params = {
    TableName: tableName,
    Item: marshall(actionPromptDto),
  };

  try {
    await dynamoDb.send(new PutItemCommand(params));

    logger.info(`Prompt created with ${actionPromptDto.id} into ${tableName}`);

    return actionPromptDto;
  } catch (error) {
    logger.error(`Error creating prompt: ${error}`);
    throw error;
  }
}

// retrieve a cached prompt based on the deterministic v5 uuid id
export async function retrievePrompt(id: string): Promise<PromptDto | null> {
  const tableName = config.get('tableName');

  const params = {
    TableName: tableName,
    Key: marshall({ id }),
  };

  try {
    const { Item } = await dynamoDb.send(new GetItemCommand(params));

    if (Item) {
      const prompt = unmarshall(Item) as PromptDto;
      logger.info(`Prompt retrieved with ${prompt.id} from ${tableName}`);
      return prompt;
    } else {
      return null;
    }
  } catch (error) {
    logger.error(`Error retrieving prompt: ${error}`);
    throw error;
  }
}

And more interestingly, generates the code dynamically using Amazon Bedrock through our prompt adapter:

import {
  BedrockRuntimeClient,
  InvokeModelCommand,
} from '@aws-sdk/client-bedrock-runtime';
import { extractPromptResult, logger } from '@shared/index';

import { ActionPromptDto } from '@dto/action-prompt';
import { config } from '@config/config';

const client = new BedrockRuntimeClient({
  region: 'us-east-1',
  apiVersion: '2023-09-30',
});
const modelId = config.get('modelId');

export async function createPrompt(
  actionPromptDto: ActionPromptDto
): Promise<Record<string, any>> {
  const {
    accept,
    contentType,
    max_tokens_to_sample,
    top_p,
    top_k,
    prompt,
    stop_sequences,
    temperature,
  } = actionPromptDto;

  // claude requires the '\n\nHuman: ... \n\nAssistant:' prompt format
  const body = JSON.stringify({
    prompt: `\n\nHuman:${prompt}\n\nAssistant:`,
    temperature,
    top_k,
    top_p,
    max_tokens_to_sample,
    stop_sequences,
  });

  logger.info(`Prompt body: ${body}`);

  const input = {
    body,
    contentType,
    accept,
    modelId,
  };
  const command = new InvokeModelCommand(input);
  const { body: promptResponse } = await client.send(command);

  const promptResponseJson = JSON.parse(
    new TextDecoder().decode(promptResponse)
  );

  const result = promptResponseJson.completion;

  logger.info(`Full prompt response: ${result}`);

  // extract the json specifically from the response
  return extractPromptResult(result, actionPromptDto.type);
}
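The `extractPromptResult` helper itself isn’t shown in the article; since Claude often wraps the payload in prose or a fenced code block (as in the playground example earlier), a hypothetical minimal sketch might pull out the first fenced block, falling back to the first balanced brace span:

```typescript
// Hypothetical sketch of extracting the JSON body from a Claude completion
// that may include surrounding prose or a fenced code block.
const FENCE = new RegExp('`{3}(?:json|typescript)?\\s*([\\s\\S]*?)`{3}');

function extractJsonFromCompletion(completion: string): string {
  // prefer a fenced code block if the model wrapped the payload in one
  const fenced = completion.match(FENCE);
  if (fenced) return fenced[1].trim();

  // otherwise fall back to the span from the first '{' to the last '}'
  const start = completion.indexOf('{');
  const end = completion.lastIndexOf('}');
  if (start !== -1 && end > start) return completion.slice(start, end + 1);

  return completion.trim();
}
```

This is exactly why the prompts later ask for "only" the schema or type: the less prose the model adds, the less extraction is needed.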

OK, so now we have covered our AI Prompt API, which allows consumers to generate prompt results using Amazon Bedrock; let’s look at how the CLI calls this API.

AI CLI 🤖

We start by updating the CLI code to ask the following additional question:

{
  type: 'input',
  name: 'useCaseSchemaValues',
  message:
    'Please provide schema props\n\n (e.g. firstName as string, surname as string, status as "VALID" or "INVALID")\n\n:',
  validate: (value: string) => value.length > 0,
  when(context: any) {
    return context.useCaseRequired === 'Y';
  },
},

Next, we give our existing CLI code the ability to create AI-generated dynamic schemas using the following code:

import { NodePlopAPI } from 'plop';
import { config } from '../../config/config.js';
import { httpsRequest } from '../../utils/https-request.js';
import { toKebabCase } from '../../utils/to-kebab-case.js';
import { writeDataToFile } from '../../utils/write-data-to-file.js';

const api = config.get('api');
const apiKey = config.get('apiKey');

export const createAiSchemaViaApi = (plop: NodePlopAPI) => {
  return plop.setActionType('create ai schema', async (answers, config) => {
    const { useCaseSchemaValues, useCaseSchemaName, cdkFolderPath, name } =
      config.data as any;

    // create the prompt specific to schemas using the dynamic input from the user
    const prompt = {
      type: 'json',
      contentType: 'application/json',
      accept: 'application/json',
      prompt: `Please return a json schema with only the following properties: ${useCaseSchemaValues}, with all properties having relevant regex patterns`,
      temperature: 0,
      top_p: 0.8,
      top_k: 5,
      max_tokens_to_sample: 400,
      stop_sequences: [],
    };

    console.log(`🤖 dynamically generating the schema using ai...`);

    // make the request and get the autogenerated schema for the file contents
    const response = await httpsRequest(prompt, apiKey, api);

    // use our helper methods to convert values to kebab case for file names
    const nameKebabCase = toKebabCase(name);
    const useCaseNameKebabCase = toKebabCase(useCaseSchemaName);

    // create the primary adapter schema values
    const adapterFolderPath = `../${cdkFolderPath}/stateless/src/adapters/primary/${nameKebabCase}`;
    const adapterFileName = `${nameKebabCase}.schema.ts`;
    const data = `export const schema = ${response.result};`;

    // create the use case schema values
    const useCaseFolderPath = `../${cdkFolderPath}/stateless/src/schemas/${useCaseNameKebabCase}`;
    const useCaseFileName = `${useCaseNameKebabCase}.schema.ts`;

    // write the files to the correct folder
    writeDataToFile(data, adapterFolderPath, adapterFileName);
    writeDataToFile(data, useCaseFolderPath, useCaseFileName);

    return '🤖 generated the dynamic schema using ai';
  });
};

Using the code above, we:

  • Create the correct prompt which is taken from the CLI
  • Make a subsequent call to our AI API to generate or retrieve the cached prompt result. (This includes passing our API Key with the request).
  • Generate the correct files containing the auto-generated code.
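The `httpsRequest` helper used for that second step isn’t shown in the article; a minimal sketch using Node 18’s global fetch might look like the following (the `x-api-key` header is what API Gateway checks for usage-plan keys; the response shape mirrors the use case result shown earlier):

```typescript
// Hypothetical sketch of the httpsRequest helper: POST the prompt payload
// to the AI API, sending the usage-plan key in the 'x-api-key' header.
type PromptApiResponse = { type: string; result: string };

function buildRequestInit(prompt: object, apiKey: string) {
  return {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-api-key': apiKey, // api gateway usage-plan key
    },
    body: JSON.stringify(prompt),
  };
}

// Node 18+ provides a global fetch, so no extra dependency is needed
async function httpsRequest(
  prompt: object,
  apiKey: string,
  api: string
): Promise<PromptApiResponse> {
  const response = await fetch(`${api}/prompts`, buildRequestInit(prompt, apiKey));
  if (!response.ok) throw new Error(`AI API returned status ${response.status}`);
  return (await response.json()) as PromptApiResponse;
}
```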

And we can create our TypeScript DTO file dynamically in the same way, using the following similar code:

import { NodePlopAPI } from 'plop';
import { config } from '../../config/config.js';
import { httpsRequest } from '../../utils/https-request.js';
import { toKebabCase } from '../../utils/to-kebab-case.js';
import { writeDataToFile } from '../../utils/write-data-to-file.js';

const api = config.get('api');
const apiKey = config.get('apiKey');

export const createAiTypeViaApi = (plop: NodePlopAPI) => {
  return plop.setActionType('create ai type', async (answers, config) => {
    const { useCaseSchemaValues, cdkFolderPath, name } = config.data as any;

    // create the prompt specific to types using the dynamic input from the user
    const prompt = {
      type: 'typescript',
      contentType: 'application/json',
      accept: 'application/json',
      prompt: `Please return a typescript type with only the following properties: ${useCaseSchemaValues}, where the typescript type is named ${name}Dto in pascal case`,
      temperature: 0,
      top_p: 0.8,
      top_k: 5,
      max_tokens_to_sample: 400,
      stop_sequences: [],
    };

    console.log(`🤖 dynamically generating the type using ai...`);

    // make the request and get the autogenerated type for the file contents
    const response = await httpsRequest(prompt, apiKey, api);

    // use our helper methods to convert values to kebab case for file names
    const nameKebabCase = toKebabCase(name);

    // create the dto type values
    const folderPath = `../${cdkFolderPath}/stateless/src/dto/${nameKebabCase}`;
    const fileName = `${nameKebabCase}.ts`;
    const data = `${JSON.parse(response.result)}`;

    // write the file to the correct folder
    writeDataToFile(data, folderPath, fileName);

    return '🤖 generated the dynamic type using ai';
  });
};
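Both actions lean on the `toKebabCase` helper for file and folder names; it isn’t shown in the article, but a plausible minimal implementation (the real one in the repo may differ) is:

```typescript
// Hypothetical sketch of the toKebabCase helper used for file and folder
// names, e.g. 'CreateOrder' -> 'create-order'.
function toKebabCase(value: string): string {
  return value
    .replace(/([a-z0-9])([A-Z])/g, '$1-$2') // split camel/pascal boundaries
    .replace(/[\s_]+/g, '-') // spaces and underscores become hyphens
    .toLowerCase();
}

console.log(toKebabCase('CreateOrder')); // create-order
```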

This means we have used our Plop CLI to autogenerate the clean hexagonal architecture code for our CDK application, and utilised generative AI to dynamically generate the schemas and types from prompts.

Wrapping up

I hope you enjoyed this article; if you did, then please feel free to share it and leave feedback!

Please go and subscribe on my YouTube channel for similar content!

I would love to connect with you also on any of the following:

https://www.linkedin.com/in/lee-james-gilmore/
https://twitter.com/LeeJamesGilmore

If you enjoyed the posts please follow my profile Lee James Gilmore for further posts/series, and don’t forget to connect and say Hi 👋

Please also use the ‘clap’ feature at the bottom of the post if you enjoyed it! (You can clap more than once!!)

About me

Hi, I’m Lee, an AWS Community Builder, blogger, AWS-certified cloud architect and Global Head of Technology & Architecture based in the UK; currently working for City Electrical Factors (UK) & City Electric Supply (US), having worked primarily in full-stack JavaScript on AWS for the past 6 years.

I consider myself a serverless advocate with a love of all things AWS, innovation, software architecture and technology.

*** The information provided are my own personal views and I accept no responsibility on the use of the information. ***
