AWS CDK Pipeline

This page presents a reference implementation of the Application Pipeline reference architecture. The pipeline is built with AWS CodePipeline and uses AWS CodeBuild for building the software and performing testing tasks. All the infrastructure for this reference implementation is defined with the AWS Cloud Development Kit (CDK). The pipelines are defined using the CDK Pipelines L3 constructs. The source code for this reference implementation is available on GitHub for running in your own account.

Architecture

Disclaimer

This reference implementation is intended to serve as an example of how to implement the guidance in the reference architecture using CDK Pipelines. The reference implementation has intentionally bypassed the following AWS Well-Architected best practices to make it accessible to a wider range of customers. Be sure to address these before using parts of this code for any workloads in your own environment:

  • TLS on HTTP endpoint - the listener for the sample application uses HTTP instead of HTTPS to avoid having to create new ACM certificates and Route53 hosted zones. This should be replaced in your account with an HTTPS listener, as sketched below.
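
A minimal sketch of that replacement, assuming an existing Route53 hosted zone and using placeholder domain names (this is not part of the reference implementation), is to provision an ACM certificate with CDK and pass it to the load-balanced service so the listener terminates TLS:

import * as acm from 'aws-cdk-lib/aws-certificatemanager';
import * as route53 from 'aws-cdk-lib/aws-route53';

// Placeholder domain and hosted zone - replace with values from your own account.
const zone = route53.HostedZone.fromLookup(this, 'Zone', { domainName: 'example.com' });
const certificate = new acm.Certificate(this, 'ApiCertificate', {
  domainName: 'api.example.com',
  validation: acm.CertificateValidation.fromDns(zone),
});
// The certificate can then be supplied to the service construct (for example via its
// certificate/protocol options) so the listener serves HTTPS instead of plain HTTP.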

Pipeline

Local Development

Developers need fast feedback on potential issues with their code. Automation should run in their developer workspace to give them feedback before the deployment pipeline runs.

Pre-Commit Hooks

Pre-commit hooks are scripts that are executed on the developer's workstation when they try to create a new commit. These hooks have an opportunity to inspect the state of the code before the commit occurs and abort the commit if tests fail. Git hooks are an example of pre-commit hooks. Examples of tools to configure and store pre-commit hooks as code include, but are not limited to, husky and pre-commit.

The following .pre-commit-config.yaml is added to the repository to build the code with Maven, run unit tests with JUnit, check code quality with Checkstyle, run static application security testing with PMD, and check for secrets in the code with gitleaks.

repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
  rev: v2.3.0
  hooks:
  -   id: check-yaml
  -   id: check-json
  -   id: trailing-whitespace
- repo: https://github.com/pre-commit/mirrors-eslint
  rev: v8.23.0
  hooks:
  -   id: eslint
- repo: https://github.com/ejba/pre-commit-maven
  rev: v0.3.3
  hooks:
  -   id: maven-test
- repo: https://github.com/zricethezav/gitleaks
  rev: v8.12.0
  hooks:
    - id: gitleaks

Source

Application Source Code

The application source code can be found in the src/main/java directory. It is intended to serve only as a reference and should be replaced by your own application source code.

This reference implementation includes a Spring Boot application that exposes a REST API and uses a database for persistence. The API is implemented in FruitController.java:

public class FruitController {
    /**
     * JPA repository for fruits.
     */
    private final FruitRepository repository;

    /**
     * Logic to map between entities and DTOs
     */
    private final FruitMapper mapper;

    FruitController(final FruitRepository r, final FruitMapper m) {
        this.repository = r;
        this.mapper = m;
    }

    @GetMapping("/api/fruits")
    List<FruitDTO> all() {
        return repository.findAll()
                .stream()
                .map(mapper::toDto)
                .collect(Collectors.toList());
    }

    @PostMapping("/api/fruits")
    FruitDTO newFruit(@RequestBody final FruitDTO fruit) {
        return mapper.toDto(repository.save(mapper.toEntity(fruit)));
    }

    @GetMapping("/api/fruits/{id}")
    FruitDTO one(@PathVariable final Long id) {
        return repository.findById(id)
                .map(mapper::toDto)
                .orElseThrow(() -> new FruitNotFoundException(id));
    }

    @PutMapping("/api/fruits/{id}")
    FruitDTO replaceFruit(
            @RequestBody final FruitDTO newFruit,
            @PathVariable final Long id) {
        newFruit.setId(id);
        return mapper.toDto(repository.save(mapper.toEntity(newFruit)));
    }

    @DeleteMapping("/api/fruits/{id}")
    void deleteFruit(@PathVariable final Long id) {
        repository.deleteById(id);
    }
}

The application source code is stored in an AWS CodeCommit repository that is created and initialized from the CDK application in the CodeCommitSource construct:

super(scope, id);
this.trunkBranchName = props?.trunkBranchName || 'main';
let gitignore = fs.readFileSync('.gitignore').toString().split(/\r?\n/);
gitignore.push('.git/');

// Allow canary code to package properly
// see: https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch_Synthetics_Canaries_WritingCanary_Nodejs.html#CloudWatch_Synthetics_Canaries_package
gitignore = gitignore.filter(g => g != 'node_modules/');
gitignore.push('/node_modules/');

const codeAsset = new Asset(this, 'SourceAsset', {
  path: '.',
  ignoreMode: IgnoreMode.GIT,
  exclude: gitignore,
});
this.repository = new Repository(this, 'CodeCommitRepo', {
  repositoryName: props.repositoryName,
  code: Code.fromAsset(codeAsset, this.trunkBranchName),
});

if (props.associateCodeGuru !== false) {
  new CfnRepositoryAssociation(this, 'CfnRepositoryAssociation', {
    name: this.repository.repositoryName,
    type: 'CodeCommit',
  });
}
this.codePipelineSource = CodePipelineSource.codeCommit(this.repository, this.trunkBranchName);

Test Source Code

The test source code can be found in the src/test/java directory. It is intended to serve only as a reference and should be replaced by your own test source code.

The reference implementation includes source code for unit, integration and end-to-end testing. Unit and integration tests can be found in src/test/java. For example, FruitControllerWithoutClassificationTest.java performs unit tests of each API path with the JUnit testing library:

public void shouldReturnList() throws Exception {
  when(repository.findAll()).thenReturn(Arrays.asList(new Fruit("Mango", FruitClassification.pome), new Fruit("Dragonfruit", FruitClassification.berry)));

  this.mockMvc.perform(get("/api/fruits")).andDo(print()).andExpect(status().isOk())
      .andExpect(content().json("[{\"name\": \"Mango\"}, {\"name\": \"Dragonfruit\"}]"));
}

Acceptance tests are performed with SoapUI and are defined in fruit-api-soapui-project.xml. They are executed by Maven using plugins in pom.xml.

Infrastructure Source Code

The infrastructure source code can be found in the infrastructure directory. It is intended to serve as a reference but much of the code can also be reused in your own CDK applications.

The infrastructure source code, which defines both the deployment of the pipeline and the deployment of the application, is stored in the infrastructure/ folder and uses the AWS Cloud Development Kit.

super(scope, id, props);

const image = new AssetImage('.', { target: 'build' });

const appName = Stack.of(this).stackName.toLowerCase().replace(`-${Stack.of(this).region}-`, '-');
const vpc = new ec2.Vpc(this, 'Vpc', {
  maxAzs: 3,
  natGateways: props?.natGateways,
});
new FlowLog(this, 'VpcFlowLog', { resourceType: FlowLogResourceType.fromVpc(vpc) });

const dbName = 'fruits';
const dbSecret = new DatabaseSecret(this, 'AuroraSecret', {
  username: 'fruitapi',
  secretName: `${appName}-DB`,
});
const db = new ServerlessCluster(this, 'AuroraCluster', {
  engine: DatabaseClusterEngine.AURORA_MYSQL,
  vpc,
  credentials: Credentials.fromSecret(dbSecret),
  defaultDatabaseName: dbName,
  deletionProtection: false,
  clusterIdentifier: appName,
});

const cluster = new ecs.Cluster(this, 'Cluster', {
  vpc,
  containerInsights: true,
  clusterName: appName,
});
const appLogGroup = new LogGroup(this, 'AppLogGroup', {
  retention: RetentionDays.ONE_WEEK,
  logGroupName: `/aws/ecs/service/${appName}`,
  removalPolicy: RemovalPolicy.DESTROY,
});
let deploymentConfig: IEcsDeploymentConfig | undefined = undefined;
if (props?.deploymentConfigName) {
  deploymentConfig = EcsDeploymentConfig.fromEcsDeploymentConfigName(this, 'DeploymentConfig', props.deploymentConfigName);
}
const appConfigEnabled = props?.appConfigRoleArn !== undefined && props.appConfigRoleArn.length > 0;
const service = new ApplicationLoadBalancedCodeDeployedFargateService(this, 'Api', {
  cluster,
  capacityProviderStrategies: [
    {
      capacityProvider: 'FARGATE_SPOT',
      weight: 1,
    },
  ],
  minHealthyPercent: 50,
  maxHealthyPercent: 200,
  desiredCount: 3,
  cpu: 512,
  memoryLimitMiB: 1024,
  taskImageOptions: {
    image,
    containerName: 'api',
    containerPort: 8080,
    family: appName,
    logDriver: AwsLogDriver.awsLogs({
      logGroup: appLogGroup,
      streamPrefix: 'service',
    }),
    secrets: {
      SPRING_DATASOURCE_USERNAME: Secret.fromSecretsManager( dbSecret, 'username' ),
      SPRING_DATASOURCE_PASSWORD: Secret.fromSecretsManager( dbSecret, 'password' ),
    },
    environment: {
      SPRING_DATASOURCE_URL: `jdbc:mysql://${db.clusterEndpoint.hostname}:${db.clusterEndpoint.port}/${dbName}`,
      APPCONFIG_AGENT_APPLICATION: this.node.tryGetContext('workloadName'),
      APPCONFIG_AGENT_ENVIRONMENT: this.node.tryGetContext('environmentName'),
      APPCONFIG_AGENT_ENABLED: appConfigEnabled.toString(),
    },
  },
  deregistrationDelay: Duration.seconds(5),
  responseTimeAlarmThreshold: Duration.seconds(3),
  targetHealthCheck: {
    healthyThresholdCount: 2,
    unhealthyThresholdCount: 2,
    interval: Duration.seconds(60),
    path: '/actuator/health',
  },
  deploymentConfig,
  terminationWaitTime: Duration.minutes(5),
  apiCanaryTimeout: Duration.seconds(5),
  apiTestSteps: [{
    name: 'getAll',
    path: '/api/fruits',
    jmesPath: 'length(@)',
    expectedValue: 5,
  }],
});

if (appConfigEnabled) {
  service.taskDefinition.addContainer('appconfig-agent', {
    image: ecs.ContainerImage.fromRegistry('public.ecr.aws/aws-appconfig/aws-appconfig-agent:2.x'),
    essential: false,
    logging: AwsLogDriver.awsLogs({
      logGroup: appLogGroup,
      streamPrefix: 'service',
    }),
    environment: {
      SERVICE_REGION: this.region,
      ROLE_ARN: props!.appConfigRoleArn!,
      ROLE_SESSION_NAME: appName,
      LOG_LEVEL: 'info',
    },
    portMappings: [{ containerPort: 2772 }],
  });

  service.taskDefinition.addToTaskRolePolicy(new PolicyStatement({
    actions: ['sts:AssumeRole'],
    resources: [props!.appConfigRoleArn!],
  }));
}

service.service.connections.allowTo(db, Port.tcp(db.clusterEndpoint.port));

this.apiUrl = new CfnOutput(this, 'endpointUrl', {
  value: `http://${service.listener.loadBalancer.loadBalancerDnsName}`,
});

Notice that the infrastructure code is written in TypeScript, which is different from the application source code (Java). This was done intentionally to demonstrate that CDK allows infrastructure code to be defined in whichever language is most appropriate for the team that owns the use of CDK in the organization.

Static Assets

There are no static assets used by the sample application.

Dependency Manifests

All third-party dependencies used by the sample application are defined in the pom.xml:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-actuator</artifactId>
    </dependency>
    <dependency>
        <groupId>org.liquibase</groupId>
        <artifactId>liquibase-core</artifactId>
    </dependency>
</dependencies>

Static Configuration

Static configuration for the application is defined in src/main/resources/application.yml:

spring:
  application:
    name: fruit-api
  main:
    banner-mode: "off"
  jackson:
    default-property-inclusion: non_null


springdoc:
  swagger-ui:
    path: /swagger-ui

appconfig-agent:
  environment: alpha
  log-level-from:
    configuration: operations

Database Source Code

The database source code can be found in the src/main/resources/db directory. It is intended to serve only as a reference and should be replaced by your own database source code.

The code that manages the schema and initial data for the application is defined using Liquibase in src/main/resources/db/changelog/db.changelog-master.yml:

databaseChangeLog:
   - changeSet:
       id: "1"
       author: AWS
       changes:
       - createTable:
           tableName: fruit
           columns:
           - column:
               name: id
               type: bigint
               autoIncrement: true
               constraints:
                   primaryKey:  true
                   nullable:  false
           - column:
               name: name
               type: varchar(250)

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Apple

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Orange

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Banana

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Cherry

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Grape

   - changeSet:
       id: "2"
       author: AWS
       changes:
       - addColumn:
           tableName: fruit
           columns:
           - column:
               name: classification
               type: varchar(250)
               constraints:
                 nullable: true

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: pome
           where: name='Apple'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: berry
           where: name='Orange'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: berry
           where: name='Banana'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: drupe
           where: name='Cherry'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: berry
           where: name='Grape'

Build

Actions in this stage all run in less than 10 minutes so that developers can act on fast feedback before moving on to their next task. Each of the actions below is defined as code with the AWS Cloud Development Kit.

Build Code

The Java source code is compiled, unit tested and packaged by Maven. A step is added to the pipeline through a CDK construct called MavenBuild:

const stepProps = {
  input: props.source,
  commands: [],
  buildEnvironment: {
    buildImage: LinuxBuildImage.STANDARD_6_0,
  },
  partialBuildSpec: BuildSpec.fromObject({
    env: {
      variables: {
        MAVEN_OPTS: props.mavenOpts || '-XX:+TieredCompilation -XX:TieredStopAtLevel=1',
        MAVEN_ARGS: props.mavenArgs || '--batch-mode --no-transfer-progress',
      },
    },
    phases: {
      install: {
        'runtime-versions': {
          java: (props.javaRuntime || 'corretto17'),
        },
      },
      build: {
        commands: [`mvn \${MAVEN_ARGS} clean ${props.mavenGoal || 'verify'}`],
      },
    },
    cache: props.cacheBucket ? {
      paths: ['/root/.m2/**/*'],
    } : undefined,
    reports: {
      unit: {
        'files': ['target/surefire-reports/*.xml'],
        'file-format': 'JUNITXML',
      },
      integration: {
        'files': ['target/soapui-reports/*.xml'],
        'file-format': 'JUNITXML',
      },
    },
    version: '0.2',
  }),
  cache: props.cacheBucket ? Cache.bucket(props.cacheBucket) : undefined,
  primaryOutputDirectory: '.',
};
super(id, stepProps);

Unit Tests

The unit tests are run by Maven as part of the Build Code action. The results of the unit tests are uploaded to AWS CodeBuild Test Reports to track over time.

Code Quality

A CDK construct was created to require that Amazon CodeGuru has reviewed the most recent changes and that the recommendations don't exceed the severity thresholds. If no review is found or the severity thresholds are exceeded, the pipeline fails. The construct is added to the pipeline with:

import { CodeGuruReviewCheck, CodeGuruReviewFilter } from './codeguru-review-check';



    const codeGuruSecurity = new CodeGuruReviewCheck('CodeGuruSecurity', {
      source: source.codePipelineSource,
      reviewRequired: false,
      filter: CodeGuruReviewFilter.defaultCodeSecurityFilter(),
    });
    const codeGuruQuality = new CodeGuruReviewCheck('CodeGuruQuality', {
      source: source.codePipelineSource,
      reviewRequired: false,
      filter: CodeGuruReviewFilter.defaultCodeQualityFilter(),
    });

The Filter attribute can be customized to control what categories of recommendations are considered and what the thresholds are:

export enum CodeGuruReviewRecommendationCategory {
    AWS_BEST_PRACTICES = 'AWSBestPractices',
    AWS_CLOUDFORMATION_ISSUES = 'AWSCloudFormationIssues',
    CODE_INCONSISTENCIES = 'CodeInconsistencies',
    CODE_MAINTENANCE_ISSUES = 'CodeMaintenanceIssues',
    CONCURRENCY_ISSUES = 'ConcurrencyIssues',
    DUPLICATE_CODE = 'DuplicateCode',
    INPUT_VALIDATIONS = 'InputValidations',
    JAVA_BEST_PRACTICES = 'JavaBestPractices',
    PYTHON_BEST_PRACTICES = 'PythonBestPractices',
    RESOURCE_LEAKS = 'ResourceLeaks',
    SECURITY_ISSUES = 'SecurityIssues',
}
export class CodeGuruReviewFilter {
    // Limit which recommendation categories to include
    recommendationCategories!: CodeGuruReviewRecommendationCategory[];

    // Fail if more than this # of lines of code were suppressed via aws-codeguru-reviewer.yml
    maxSuppressedLinesOfCodeCount?: number;

    // Fail if more than this # of CRITICAL recommendations were found
    maxCriticalRecommendations?: number;

    // Fail if more than this # of HIGH recommendations were found
    maxHighRecommendations?: number;

    // Fail if more than this # of MEDIUM recommendations were found
    maxMediumRecommendations?: number;

    // Fail if more than this # of INFO recommendations were found
    maxInfoRecommendations?: number;

    // Fail if more than this # of LOW recommendations were found
    maxLowRecommendations?: number;
}

Additionally, cdk-nag is run against both the pipeline stack and the deployment stack to identify any security issues with the resources being created. The pipeline will fail if any are detected. The following code demonstrates how cdk-nag is called as a part of the build stage. The code also demonstrates how to suppress findings.

import { App, Aspects } from 'aws-cdk-lib';
import { Annotations, Match, Template } from 'aws-cdk-lib/assertions';
import { SynthesisMessage } from 'aws-cdk-lib/cx-api';
import { AwsSolutionsChecks, NagSuppressions } from 'cdk-nag';
import { DeploymentStack } from '../src/deployment';


function synthesisMessageToString(sm: SynthesisMessage): string {
  return `${sm.entry.data} [${sm.id}]`;
}
expect.addSnapshotSerializer({
  test: (val) => typeof val === 'string' && val.match(/^dummy.dkr.ecr.us-east.1/) !== null,
  serialize: () => '"dummy-ecr-image"',
});
expect.addSnapshotSerializer({
  test: (val) => typeof val === 'string' && val.match(/^[a-f0-9]+\.zip$/) !== null,
  serialize: () => '"code.zip"',
});

describe('cdk-nag', () => {
  let stack: DeploymentStack;
  let app: App;

  beforeAll(() => {
    const appName = 'fruit-api';
    const workloadName = 'food';
    const environmentName = 'unit-test';
    app = new App({ context: { appName, environmentName, workloadName } });
    stack = new DeploymentStack(app, 'TestStack', {
      env: {
        account: 'dummy',
        region: 'us-east-1',
      },
    });
    Aspects.of(stack).add(new AwsSolutionsChecks());

    // Suppress CDK-NAG for TaskDefinition role and ecr:GetAuthorizationToken permission
    NagSuppressions.addResourceSuppressionsByPath(
      stack,
      `/${stack.stackName}/Api/TaskDef/ExecutionRole/DefaultPolicy/Resource`,
      [{ id: 'AwsSolutions-IAM5', reason: 'Allow ecr:GetAuthorizationToken', appliesTo: ['Resource::*'] }],
    );

    // Suppress CDK-NAG for secret rotation
    NagSuppressions.addResourceSuppressionsByPath(
      stack,
      `/${stack.stackName}/AuroraSecret/Resource`,
      [{ id: 'AwsSolutions-SMG4', reason: 'Dont require secret rotation' }],
    );

    // Suppress CDK-NAG for RDS Serverless
    NagSuppressions.addResourceSuppressionsByPath(
      stack,
      `/${stack.stackName}/AuroraCluster/Resource`,
      [
        { id: 'AwsSolutions-RDS6', reason: 'IAM authentication not supported on Serverless v1' },
        { id: 'AwsSolutions-RDS10', reason: 'Disable delete protection to simplify cleanup of Reference Implementation' },
        { id: 'AwsSolutions-RDS11', reason: 'Custom port not supported on Serverless v1' },
        { id: 'AwsSolutions-RDS14', reason: 'Backtrack not supported on Serverless v1' },
        { id: 'AwsSolutions-RDS16', reason: 'CloudWatch Log Export not supported on Serverless v1' },
      ],
    );

    NagSuppressions.addResourceSuppressionsByPath(stack, [
      `/${stack.stackName}/Api/DeploymentGroup/Deployment/DeploymentProvider/framework-onEvent`,
      `/${stack.stackName}/Api/DeploymentGroup/Deployment/DeploymentProvider/framework-isComplete`,
      `/${stack.stackName}/Api/DeploymentGroup/Deployment/DeploymentProvider/framework-onTimeout`,
      `/${stack.stackName}/Api/DeploymentGroup/Deployment/DeploymentProvider/waiter-state-machine`,
    ], [
      { id: 'AwsSolutions-IAM5', reason: 'Unrelated to construct under test' },
      { id: 'AwsSolutions-L1', reason: 'Unrelated to construct under test' },
      { id: 'AwsSolutions-SF1', reason: 'Unrelated to construct under test' },
      { id: 'AwsSolutions-SF2', reason: 'Unrelated to construct under test' },
    ], true);

    // Ignore findings from access log bucket
    NagSuppressions.addResourceSuppressionsByPath(stack, [
      `/${stack.stackName}/Api/AccessLogBucket`,
    ], [
      { id: 'AwsSolutions-S1', reason: 'Dont need access logs for access log bucket' },
      { id: 'AwsSolutions-IAM5', reason: 'Allow resource:*', appliesTo: ['Resource::*'] },
    ]);

    NagSuppressions.addResourceSuppressionsByPath(stack, [
      `/${stack.stackName}/Api/Canary/ServiceRole`,
    ], [{ id: 'AwsSolutions-IAM5', reason: 'Allow resource:*' }]);

    NagSuppressions.addResourceSuppressionsByPath(stack, [
      `/${stack.stackName}/Api/CanaryArtifactsBucket`,
    ], [{ id: 'AwsSolutions-S1', reason: 'Dont need access logs for canary bucket' }]);

    NagSuppressions.addResourceSuppressionsByPath(stack, [
      `/${stack.stackName}/Api/DeploymentGroup/ServiceRole`,
    ], [
      { id: 'AwsSolutions-IAM4', reason: 'Allow AWSCodeDeployRoleForECS policy', appliesTo: ['Policy::arn:<AWS::Partition>:iam::aws:policy/AWSCodeDeployRoleForECS'] },
    ]);

    NagSuppressions.addResourceSuppressionsByPath(stack, [
      `/${stack.stackName}/Api/DeploymentGroup/Deployment`,
    ], [
      {
        id: 'AwsSolutions-IAM4',
        reason: 'Allow AWSLambdaBasicExecutionRole policy',
        appliesTo: ['Policy::arn:<AWS::Partition>:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'],
      },
    ], true);

    NagSuppressions.addResourceSuppressionsByPath(stack, [
      `/${stack.stackName}/Api/TaskDef`,
    ], [
      {
        id: 'AwsSolutions-ECS2',
        reason: 'Allow environment variables for configuration of values that are not confidential',
      },
    ]);

    NagSuppressions.addResourceSuppressionsByPath(stack, [
      `/${stack.stackName}/Api/LB/SecurityGroup`,
    ], [
      {
        id: 'AwsSolutions-EC23',
        reason: 'Allow public inbound access on ELB',
      },
    ]);
  });

  test('Snapshot', () => {
    const template = Template.fromStack(stack);
    expect(template.toJSON()).toMatchSnapshot();
  });

  test('cdk-nag AwsSolutions Pack errors', () => {
    const errors = Annotations.fromStack(stack).findError(
      '*',
      Match.stringLikeRegexp('AwsSolutions-.*'),
    ).map(synthesisMessageToString);
    expect(errors).toHaveLength(0);
  });

  test('cdk-nag AwsSolutions Pack warnings', () => {
    const warnings = Annotations.fromStack(stack).findWarning(
      '*',
      Match.stringLikeRegexp('AwsSolutions-.*'),
    ).map(synthesisMessageToString);
    expect(warnings).toHaveLength(0);
  });
});

describe('Deployment without AppConfig', () => {
  let stack: DeploymentStack;
  let app: App;

  beforeAll(() => {
    const appName = 'fruit-api';
    const environmentName = 'unit-test';
    app = new App({ context: { appName, environmentName } });
    stack = new DeploymentStack(app, 'TestStack', {
      env: {
        account: 'dummy',
        region: 'us-east-1',
      },
    });
  });

  test('Snapshot', () => {
    const template = Template.fromStack(stack);
    expect(template.toJSON()).toMatchSnapshot();
  });
  test('taskdef', () => {
    const template = Template.fromStack(stack);
    template.hasResourceProperties('AWS::ECS::TaskDefinition', {
      ContainerDefinitions: [
        {
          Environment: [{
            Name: 'SPRING_DATASOURCE_URL',
          }, {
            Name: 'APPCONFIG_AGENT_APPLICATION',
          }, {
            Name: 'APPCONFIG_AGENT_ENVIRONMENT',
            Value: 'unit-test',
          }, {
            Name: 'APPCONFIG_AGENT_ENABLED',
            Value: 'false',
          }],
        },
      ],
    });
  });
});

describe('Deployment with AppConfig', () => {
  let stack: DeploymentStack;
  let app: App;

  beforeAll(() => {
    const appName = 'fruit-api';
    const workloadName = 'food';
    const environmentName = 'unit-test';
    app = new App({ context: { appName, environmentName, workloadName } });
    stack = new DeploymentStack(app, 'TestStack', {
      appConfigRoleArn: 'dummy-role-arn',
      env: {
        account: 'dummy',
        region: 'us-east-1',
      },
    });
  });

  test('Snapshot', () => {
    const template = Template.fromStack(stack);
    expect(template.toJSON()).toMatchSnapshot();
  });
  test('taskdef', () => {
    const template = Template.fromStack(stack);
    template.hasResourceProperties('AWS::ECS::TaskDefinition', {
      ContainerDefinitions: [
        {
          Environment: [{
            Name: 'SPRING_DATASOURCE_URL',
          }, {
            Name: 'APPCONFIG_AGENT_APPLICATION',
            Value: 'food',
          }, {
            Name: 'APPCONFIG_AGENT_ENVIRONMENT',
            Value: 'unit-test',
          }, {
            Name: 'APPCONFIG_AGENT_ENABLED',
            Value: 'true',
          }],
        },
        {
          Environment: [{
            Name: 'SERVICE_REGION',
            Value: 'us-east-1',
          }, {
            Name: 'ROLE_ARN',
            Value: 'dummy-role-arn',
          }, {
            Name: 'ROLE_SESSION_NAME',
          }, {
            Name: 'LOG_LEVEL',
            Value: 'info',
          }],
        },
      ],
    });
  });
});

Secrets Detection

The same CDK construct that was created for Code Quality above is also used for secrets detection with Amazon CodeGuru.

Static Application Security Testing (SAST)

The same CDK construct that was created for Code Quality above is also used for SAST with Amazon CodeGuru.

Package and Store Artifact(s)

AWS Cloud Development Kit handles the packaging and storing of assets during the Synth action and Assets stage. The Synth action generates the CloudFormation templates to be deployed into the subsequent environments, along with staging the files necessary to create a Docker image. The Assets stage then performs the docker build step to create a new image and pushes the image to Amazon ECR repositories in each environment account.
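
For orientation, a CDK Pipelines synth step typically looks like the following minimal sketch; the construct names and shell commands are illustrative and may differ from the reference implementation:

import { CodePipeline, ShellStep } from 'aws-cdk-lib/pipelines';

// The Synth step produces the cloud assembly (CloudFormation templates plus Docker and
// file assets). The generated Assets stage then builds and publishes those assets,
// including pushing container images to Amazon ECR in each environment account.
const pipeline = new CodePipeline(this, 'Pipeline', {
  synth: new ShellStep('Synth', {
    input: source.codePipelineSource,
    commands: ['yarn install --frozen-lockfile', 'npx cdk synth'],
  }),
  crossAccountKeys: true, // required for deployments to other accounts
});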

Software Composition Analysis (SCA)

Trivy is used to scan the source for vulnerabilities in its dependencies. The pom.xml and Dockerfile files are scanned for configuration issues or vulnerabilities in any dependencies. The scanning is accomplished by a CDK construct that creates a CodeBuild job to run Trivy:

import { TrivyScan } from './trivy-scan';



    const trivyScan = new TrivyScan('TrivyScan', {
      source: source.codePipelineSource,
      severity: ['CRITICAL', 'HIGH'],
      checks: ['vuln', 'config', 'secret'],
    });

Trivy is also used within the Dockerfile to scan the image after it is built. The docker build will fail if Trivy finds any vulnerabilities in the final image:

FROM public.ecr.aws/amazoncorretto/amazoncorretto:17-al2022-jdk as build
USER nobody
WORKDIR /app
COPY target/fruit-api.jar /app
HEALTHCHECK --interval=30s --timeout=5s --start-period=30s --retries=3 CMD /bin/curl --fail --silent localhost:8080/actuator/health | grep UP || exit 1
ENTRYPOINT ["java","-jar","/app/fruit-api.jar"]

# Use multi-stage builds to scan newly created image with Trivy. This second stage 'vulnscan'
# isn't published to Amazon ECR and is never run. It is only used to run the Trivy scan
# against the newly created image in the 'build' stage.
#
# This stage must run as root so Trivy can scan all files in the image, not just
# those accessible by the nobody user. The user is switched back to 'nobody' at
# the end to ensure that even if this image is used for something it is done
# without the 'root' user.

FROM build AS vulnscan
USER root
COPY --from=aquasec/trivy:latest /usr/local/bin/trivy /usr/local/bin/trivy
RUN trivy filesystem --exit-code 1 --no-progress --ignore-unfixed -s CRITICAL /
USER nobody

Software Bill of Materials (SBOM)

Trivy generates an SBOM in the form of a CycloneDX JSON report. The SBOM is saved as a CodePipeline asset. Trivy supports additional SBOM formats such as SPDX and SARIF.
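
As a hedged illustration of how such a report could be produced (this is not the repository's actual TrivyScan construct, and it assumes Trivy is available on the build image), a CodeBuild step that emits a CycloneDX SBOM and keeps it as a pipeline artifact might look like:

import { CodeBuildStep } from 'aws-cdk-lib/pipelines';

// Generate a CycloneDX SBOM for the source tree and retain it as the step's output
// so it can be archived by the pipeline or published elsewhere.
const sbomStep = new CodeBuildStep('GenerateSBOM', {
  input: source.codePipelineSource,
  commands: ['trivy filesystem --format cyclonedx --output sbom.cdx.json .'],
  primaryOutputDirectory: '.',
});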

Test (Beta)

Launch Environment

Deployment

The infrastructure for each environment is defined in AWS Cloud Development Kit:

super(scope, id, props);

const image = new AssetImage('.', { target: 'build' });

const appName = Stack.of(this).stackName.toLowerCase().replace(`-${Stack.of(this).region}-`, '-');
const vpc = new ec2.Vpc(this, 'Vpc', {
  maxAzs: 3,
  natGateways: props?.natGateways,
});
new FlowLog(this, 'VpcFlowLog', { resourceType: FlowLogResourceType.fromVpc(vpc) });

const dbName = 'fruits';
const dbSecret = new DatabaseSecret(this, 'AuroraSecret', {
  username: 'fruitapi',
  secretName: `${appName}-DB`,
});
const db = new ServerlessCluster(this, 'AuroraCluster', {
  engine: DatabaseClusterEngine.AURORA_MYSQL,
  vpc,
  credentials: Credentials.fromSecret(dbSecret),
  defaultDatabaseName: dbName,
  deletionProtection: false,
  clusterIdentifier: appName,
});

const cluster = new ecs.Cluster(this, 'Cluster', {
  vpc,
  containerInsights: true,
  clusterName: appName,
});
const appLogGroup = new LogGroup(this, 'AppLogGroup', {
  retention: RetentionDays.ONE_WEEK,
  logGroupName: `/aws/ecs/service/${appName}`,
  removalPolicy: RemovalPolicy.DESTROY,
});
let deploymentConfig: IEcsDeploymentConfig | undefined = undefined;
if (props?.deploymentConfigName) {
  deploymentConfig = EcsDeploymentConfig.fromEcsDeploymentConfigName(this, 'DeploymentConfig', props.deploymentConfigName);
}
const appConfigEnabled = props?.appConfigRoleArn !== undefined && props.appConfigRoleArn.length > 0;
const service = new ApplicationLoadBalancedCodeDeployedFargateService(this, 'Api', {
  cluster,
  capacityProviderStrategies: [
    {
      capacityProvider: 'FARGATE_SPOT',
      weight: 1,
    },
  ],
  minHealthyPercent: 50,
  maxHealthyPercent: 200,
  desiredCount: 3,
  cpu: 512,
  memoryLimitMiB: 1024,
  taskImageOptions: {
    image,
    containerName: 'api',
    containerPort: 8080,
    family: appName,
    logDriver: AwsLogDriver.awsLogs({
      logGroup: appLogGroup,
      streamPrefix: 'service',
    }),
    secrets: {
      SPRING_DATASOURCE_USERNAME: Secret.fromSecretsManager( dbSecret, 'username' ),
      SPRING_DATASOURCE_PASSWORD: Secret.fromSecretsManager( dbSecret, 'password' ),
    },
    environment: {
      SPRING_DATASOURCE_URL: `jdbc:mysql://${db.clusterEndpoint.hostname}:${db.clusterEndpoint.port}/${dbName}`,
      APPCONFIG_AGENT_APPLICATION: this.node.tryGetContext('workloadName'),
      APPCONFIG_AGENT_ENVIRONMENT: this.node.tryGetContext('environmentName'),
      APPCONFIG_AGENT_ENABLED: appConfigEnabled.toString(),
    },
  },
  deregistrationDelay: Duration.seconds(5),
  responseTimeAlarmThreshold: Duration.seconds(3),
  targetHealthCheck: {
    healthyThresholdCount: 2,
    unhealthyThresholdCount: 2,
    interval: Duration.seconds(60),
    path: '/actuator/health',
  },
  deploymentConfig,
  terminationWaitTime: Duration.minutes(5),
  apiCanaryTimeout: Duration.seconds(5),
  apiTestSteps: [{
    name: 'getAll',
    path: '/api/fruits',
    jmesPath: 'length(@)',
    expectedValue: 5,
  }],
});

if (appConfigEnabled) {
  service.taskDefinition.addContainer('appconfig-agent', {
    image: ecs.ContainerImage.fromRegistry('public.ecr.aws/aws-appconfig/aws-appconfig-agent:2.x'),
    essential: false,
    logging: AwsLogDriver.awsLogs({
      logGroup: appLogGroup,
      streamPrefix: 'service',
    }),
    environment: {
      SERVICE_REGION: this.region,
      ROLE_ARN: props!.appConfigRoleArn!,
      ROLE_SESSION_NAME: appName,
      LOG_LEVEL: 'info',
    },
    portMappings: [{ containerPort: 2772 }],
  });

  service.taskDefinition.addToTaskRolePolicy(new PolicyStatement({
    actions: ['sts:AssumeRole'],
    resources: [props!.appConfigRoleArn!],
  }));
}

service.service.connections.allowTo(db, Port.tcp(db.clusterEndpoint.port));

this.apiUrl = new CfnOutput(this, 'endpointUrl', {
  value: `http://${service.listener.loadBalancer.loadBalancerDnsName}`,
});

The DeploymentStack construct is then instantiated for each environment:

export const Beta: EnvironmentConfig = {
  name: 'Beta',
  account: accounts.beta,
  waves: [
    ['us-west-2'],
  ],
};



    new PipelineEnvironment(pipeline, Beta, (deployment, stage) => {
      stage.addPost(
        new SoapUITest('E2E Test', {
          source: source.codePipelineSource,
          endpoint: deployment.apiUrl,
          cacheBucket,
        }),
      );
    });

Database Deploy

Spring Boot is configured to run Liquibase on startup. This reads the configuration in src/main/resources/db/changelog/db.changelog-master.yml to define the tables and initial data for the database:

databaseChangeLog:
   - changeSet:
       id: "1"
       author: AWS
       changes:
       - createTable:
           tableName: fruit
           columns:
           - column:
               name: id
               type: bigint
               autoIncrement: true
               constraints:
                   primaryKey:  true
                   nullable:  false
           - column:
               name: name
               type: varchar(250)

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Apple

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Orange

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Banana

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Cherry

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Grape

   - changeSet:
       id: "2"
       author: AWS
       changes:
       - addColumn:
           tableName: fruit
           columns:
           - column:
               name: classification
               type: varchar(250)
               constraints:
                 nullable: true

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: pome
           where: name='Apple'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: berry
           where: name='Orange'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: berry
           where: name='Banana'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: drupe
           where: name='Cherry'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: berry
           where: name='Grape'

Deploy Software

The Launch Environment action above creates a new Amazon ECS Task Definition for the new docker image and then updates the Amazon ECS Service to use the new Task Definition.

Integration Tests

Integration tests are performed during the Build Source action. They are defined with SoapUI in fruit-api-soapui-project.xml. They are executed by Maven in the integration-test phase using plugins in pom.xml. Spring Boot is configured to start a local instance of the application with an H2 database during the pre-integration-test phase and then to terminate it in the post-integration-test phase. The results of the integration tests are uploaded to AWS CodeBuild Test Reports to track over time.

<plugins>
    <plugin>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-maven-plugin</artifactId>
        <executions>
            <execution>
                <id>pre-integration-test</id>
                <goals>
                    <goal>start</goal>
                </goals>
            </execution>
            <execution>
                <id>post-integration-test</id>
                <goals>
                    <goal>stop</goal>
                </goals>
            </execution>
        </executions>
    </plugin>
    <plugin>
        <groupId>com.smartbear.soapui</groupId>
        <artifactId>soapui-maven-plugin</artifactId>
        <version>5.7.0</version>
        <configuration>
            <junitReport>true</junitReport>
            <outputFolder>target/soapui-reports</outputFolder>
            <endpoint>${soapui.endpoint}</endpoint>
        </configuration>
        <executions>
            <execution>
                <phase>integration-test</phase>
                <goals>
                    <goal>test</goal>
                </goals>
            </execution>
        </executions>
    </plugin>
</plugins>

Acceptance Tests

Acceptance tests are performed after the Launch Environment and Deploy Software actions.

The tests are defined with SoapUI in fruit-api-soapui-project.xml. They are executed by Maven with the endpoint overridden to the URL from the CloudFormation output. A CDK construct called SoapUITest creates the CodeBuild project that runs SoapUI:

const stepProps = {
  envFromCfnOutputs: {
    ENDPOINT: props.endpoint,
  },
  input: props.source,
  commands: [],
  buildEnvironment: {
    buildImage: LinuxBuildImage.STANDARD_6_0,
  },
  partialBuildSpec: BuildSpec.fromObject({
    env: {
      variables: {
        MAVEN_OPTS: props.mavenOpts || '-XX:+TieredCompilation -XX:TieredStopAtLevel=1',
        MAVEN_ARGS: props.mavenArgs || '--batch-mode --no-transfer-progress',
      },
    },
    phases: {
      install: {
        'runtime-versions': {
          java: (props.javaRuntime || 'corretto17'),
        },
      },
      build: {
        commands: ['mvn ${MAVEN_ARGS} soapui:test -Dsoapui.endpoint=${ENDPOINT}'],
      },
    },
    cache: props.cacheBucket ? {
      paths: ['/root/.m2/**/*'],
    } : undefined,
    reports: {
      e2e: {
        'files': ['target/soapui-reports/*.xml'],
        'file-format': 'JUNITXML',
      },
    },
    version: '0.2',
  }),
  cache: props.cacheBucket ? Cache.bucket(props.cacheBucket) : undefined,
};
super(id, stepProps);

The results of the acceptance tests are uploaded to AWS CodeBuild Test Reports to track over time.

Test (Gamma)

Launch Environment

Deployment

The infrastructure for each environment is defined in AWS Cloud Development Kit:

super(scope, id, props);

const image = new AssetImage('.', { target: 'build' });

const appName = Stack.of(this).stackName.toLowerCase().replace(`-${Stack.of(this).region}-`, '-');
const vpc = new ec2.Vpc(this, 'Vpc', {
  maxAzs: 3,
  natGateways: props?.natGateways,
});
new FlowLog(this, 'VpcFlowLog', { resourceType: FlowLogResourceType.fromVpc(vpc) });

const dbName = 'fruits';
const dbSecret = new DatabaseSecret(this, 'AuroraSecret', {
  username: 'fruitapi',
  secretName: `${appName}-DB`,
});
const db = new ServerlessCluster(this, 'AuroraCluster', {
  engine: DatabaseClusterEngine.AURORA_MYSQL,
  vpc,
  credentials: Credentials.fromSecret(dbSecret),
  defaultDatabaseName: dbName,
  deletionProtection: false,
  clusterIdentifier: appName,
});

const cluster = new ecs.Cluster(this, 'Cluster', {
  vpc,
  containerInsights: true,
  clusterName: appName,
});
const appLogGroup = new LogGroup(this, 'AppLogGroup', {
  retention: RetentionDays.ONE_WEEK,
  logGroupName: `/aws/ecs/service/${appName}`,
  removalPolicy: RemovalPolicy.DESTROY,
});
let deploymentConfig: IEcsDeploymentConfig | undefined = undefined;
if (props?.deploymentConfigName) {
  deploymentConfig = EcsDeploymentConfig.fromEcsDeploymentConfigName(this, 'DeploymentConfig', props.deploymentConfigName);
}
const appConfigEnabled = props?.appConfigRoleArn !== undefined && props.appConfigRoleArn.length > 0;
const service = new ApplicationLoadBalancedCodeDeployedFargateService(this, 'Api', {
  cluster,
  capacityProviderStrategies: [
    {
      capacityProvider: 'FARGATE_SPOT',
      weight: 1,
    },
  ],
  minHealthyPercent: 50,
  maxHealthyPercent: 200,
  desiredCount: 3,
  cpu: 512,
  memoryLimitMiB: 1024,
  taskImageOptions: {
    image,
    containerName: 'api',
    containerPort: 8080,
    family: appName,
    logDriver: AwsLogDriver.awsLogs({
      logGroup: appLogGroup,
      streamPrefix: 'service',
    }),
    secrets: {
      SPRING_DATASOURCE_USERNAME: Secret.fromSecretsManager( dbSecret, 'username' ),
      SPRING_DATASOURCE_PASSWORD: Secret.fromSecretsManager( dbSecret, 'password' ),
    },
    environment: {
      SPRING_DATASOURCE_URL: `jdbc:mysql://${db.clusterEndpoint.hostname}:${db.clusterEndpoint.port}/${dbName}`,
      APPCONFIG_AGENT_APPLICATION: this.node.tryGetContext('workloadName'),
      APPCONFIG_AGENT_ENVIRONMENT: this.node.tryGetContext('environmentName'),
      APPCONFIG_AGENT_ENABLED: appConfigEnabled.toString(),
    },
  },
  deregistrationDelay: Duration.seconds(5),
  responseTimeAlarmThreshold: Duration.seconds(3),
  targetHealthCheck: {
    healthyThresholdCount: 2,
    unhealthyThresholdCount: 2,
    interval: Duration.seconds(60),
    path: '/actuator/health',
  },
  deploymentConfig,
  terminationWaitTime: Duration.minutes(5),
  apiCanaryTimeout: Duration.seconds(5),
  apiTestSteps: [{
    name: 'getAll',
    path: '/api/fruits',
    jmesPath: 'length(@)',
    expectedValue: 5,
  }],
});

if (appConfigEnabled) {
  service.taskDefinition.addContainer('appconfig-agent', {
    image: ecs.ContainerImage.fromRegistry('public.ecr.aws/aws-appconfig/aws-appconfig-agent:2.x'),
    essential: false,
    logging: AwsLogDriver.awsLogs({
      logGroup: appLogGroup,
      streamPrefix: 'service',
    }),
    environment: {
      SERVICE_REGION: this.region,
      ROLE_ARN: props!.appConfigRoleArn!,
      ROLE_SESSION_NAME: appName,
      LOG_LEVEL: 'info',
    },
    portMappings: [{ containerPort: 2772 }],
  });

  service.taskDefinition.addToTaskRolePolicy(new PolicyStatement({
    actions: ['sts:AssumeRole'],
    resources: [props!.appConfigRoleArn!],
  }));
}

service.service.connections.allowTo(db, Port.tcp(db.clusterEndpoint.port));

this.apiUrl = new CfnOutput(this, 'endpointUrl', {
  value: `http://${service.listener.loadBalancer.loadBalancerDnsName}`,
});

The DeploymentStack construct is then instantiated for each environment:

export const Gamma: EnvironmentConfig = {
  name: 'Gamma',
  account: accounts.gamma,
  waves: [
    ['us-west-2', 'us-east-1'],
  ],
};



    new PipelineEnvironment(pipeline, Gamma, (deployment, stage) => {
      stage.addPost(
        new JMeterTest('Performance Test', {
          source: source.codePipelineSource,
          endpoint: deployment.apiUrl,
          threads: 300,
          duration: 300,
          throughput: 6000,
          cacheBucket,
        }),
      );
    }, wave => {
      wave.addPost(
        new ManualApprovalStep('PromoteToProd'),
      );
    });

Database Deploy

Spring Boot is configured to run Liquibase on startup. This reads the configuration in src/main/resources/db/changelog/db.changelog-master.yml to define the tables and initial data for the database:

databaseChangeLog:
   - changeSet:
       id: "1"
       author: AWS
       changes:
       - createTable:
           tableName: fruit
           columns:
           - column:
               name: id
               type: bigint
               autoIncrement: true
               constraints:
                   primaryKey:  true
                   nullable:  false
           - column:
               name: name
               type: varchar(250)

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Apple

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Orange

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Banana

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Cherry

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Grape

   - changeSet:
       id: "2"
       author: AWS
       changes:
       - addColumn:
           tableName: fruit
           columns:
           - column:
               name: classification
               type: varchar(250)
               constraints:
                 nullable: true

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: pome
           where: name='Apple'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: berry
           where: name='Orange'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: berry
           where: name='Banana'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: drupe
           where: name='Cherry'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: berry
           where: name='Grape'

Deploy Software

The Launch Environment action above creates a new Amazon ECS Task Definition for the new docker image and then updates the Amazon ECS Service to use the new Task Definition.

Application Monitoring & Logging

Amazon ECS uses Amazon CloudWatch Metrics and Amazon CloudWatch Logs for observability by default.
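
Additional alarms can be layered on top of these defaults with CDK. The following is a minimal sketch (not part of the reference implementation) that alarms on sustained high CPU utilization for the Fargate service created above:

import { Duration } from 'aws-cdk-lib';
import * as cloudwatch from 'aws-cdk-lib/aws-cloudwatch';

// Alarm when average CPU utilization for the ECS service stays above 80%
// for three consecutive one-minute periods.
new cloudwatch.Alarm(this, 'HighCpuAlarm', {
  metric: service.service.metricCpuUtilization({ period: Duration.minutes(1) }),
  threshold: 80,
  evaluationPeriods: 3,
});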

Synthetic Tests

Amazon CloudWatch Synthetics is used to continuously deliver traffic to the application and assert that requests are successful and responses are received within a given threshold. The canary is defined via CDK using the @cdklabs/cdk-ecs-codedeploy construct:

const service = new ApplicationLoadBalancedCodeDeployedFargateService(this, 'Api', {
  ...

  apiCanaryTimeout: Duration.seconds(5),
  apiTestSteps: [{
    name: 'getAll',
    path: '/api/fruits',
    jmesPath: 'length(@)',
    expectedValue: 5,
  }],

Performance Tests

Apache JMeter is used to run performance tests against the deployed application. The tests are stored in src/test/jmeter and added to the pipeline via CDK:

import { JMeterTest } from './jmeter-test';



        new JMeterTest('Performance Test', {
          source: source.codePipelineSource,
          endpoint: deployment.apiUrl,
          threads: 300,
          duration: 300,
          throughput: 6000,
          cacheBucket,
        }),

Prod

Manual Approval

A manual approval step is added to the end of the Gamma stage. The step is added at the end to keep the environment in a stable state while manual testing is performed. Once the step is approved, the pipeline continues execution to the next stage.

    new PipelineEnvironment(pipeline, Gamma, (deployment, stage) => {
        stage.addPost(
            new JMeterTest('Performance Test', {
            source: source.codePipelineSource,
            endpoint: deployment.apiUrl,
            threads: 300,
            duration: 300,
            throughput: 6000,
            cacheBucket,
            }),
            new ManualApprovalStep('PromoteFromGamma'),
        );
    });

When a manual approval step is used, IAM permissions should be used to restrict which principals can approve actions and stages to enforce least privilege.

    {
        "Effect": "Allow",
        "Action": [
            "codepipeline:PutApprovalResult"
        ],
        "Resource": "arn:aws:codepipeline:us-east-2:80398EXAMPLE:MyFirstPipeline/MyApprovalStage/MyApprovalAction"
    }

Database Deploy

Spring Boot is configured to run Liquibase on startup. This reads the configuration in src/main/resources/db/changelog/db.changelog-master.yml to define the tables and initial data for the database:

databaseChangeLog:
   - changeSet:
       id: "1"
       author: AWS
       changes:
       - createTable:
           tableName: fruit
           columns:
           - column:
               name: id
               type: bigint
               autoIncrement: true
               constraints:
                   primaryKey:  true
                   nullable:  false
           - column:
               name: name
               type: varchar(250)

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Apple

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Orange

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Banana

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Cherry

       - insert:
           tableName: fruit
           columns:
           - column:
               name: name
               value: Grape

   - changeSet:
       id: "2"
       author: AWS
       changes:
       - addColumn:
           tableName: fruit
           columns:
           - column:
               name: classification
               type: varchar(250)
               constraints:
                 nullable: true

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: pome
           where: name='Apple'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: berry
           where: name='Orange'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: berry
           where: name='Banana'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: drupe
           where: name='Cherry'

       - update:
           tableName: fruit
           columns:
           - column:
               name: classification
               value: berry
           where: name='Grape'

Progressive Deployment

Progressive deployment is implemented with AWS CodeDeploy for ECS. CodeDeploy performs a linear blue/green deployment by deploying the new task definition as a new task set with a separate target group and then shifting 10% of traffic every minute until all traffic is shifted. A CloudWatch alarm is monitored by CodeDeploy and an automatic rollback is triggered if the alarm exceeds its threshold.

Implementing this type of deployment presents challenges due to the following limitations:

  • aws/aws-cdk #19163 - CDK Pipelines aren't intended to be used with CodeDeploy actions.
  • AWS CloudFormation User Guide - The use of AWS::CodeDeploy::BlueGreen hooks and AWS::CodeDeployBlueGreen restricts the types of changes that can be made. Additionally, you can't use auto-rollback capabilities of CodeDeploy.
  • aws/aws-cdk #5170 - CDK doesn't support defining CloudFormation rollback triggers. This rules out CloudFormation based blue/green deployments.

The solution was to use the @cdklabs/cdk-ecs-codedeploy construct from the Construct Hub, which addresses aws/aws-cdk #1559 - Lack of support for Blue/Green ECS Deployment in CDK.

const service = new ApplicationLoadBalancedCodeDeployedFargateService(this, 'Api', {
  cluster,
  capacityProviderStrategies: [
    {
      capacityProvider: 'FARGATE_SPOT',
      weight: 1,
    },
  ],
  minHealthyPercent: 50,
  maxHealthyPercent: 200,
  desiredCount: 3,
  cpu: 512,
  memoryLimitMiB: 1024,
  taskImageOptions: {
    image,
    containerName: 'api',
    containerPort: 8080,
    family: appName,
    logDriver: AwsLogDriver.awsLogs({
      logGroup: appLogGroup,
      streamPrefix: 'service',
    }),
    secrets: {
      SPRING_DATASOURCE_USERNAME: Secret.fromSecretsManager( dbSecret, 'username' ),
      SPRING_DATASOURCE_PASSWORD: Secret.fromSecretsManager( dbSecret, 'password' ),
    },
    environment: {
      SPRING_DATASOURCE_URL: `jdbc:mysql://${db.clusterEndpoint.hostname}:${db.clusterEndpoint.port}/${dbName}`,
    },
  },
  deregistrationDelay: Duration.seconds(5),
  responseTimeAlarmThreshold: Duration.seconds(3),
  healthCheck: {
    healthyThresholdCount: 2,
    unhealthyThresholdCount: 2,
    interval: Duration.seconds(60),
    path: '/actuator/health',
  },
  deploymentConfig,
  terminationWaitTime: Duration.minutes(5),
  apiCanaryTimeout: Duration.seconds(5),
  apiTestSteps: [{
    name: 'getAll',
    path: '/api/fruits',
    jmesPath: 'length(@)',
    expectedValue: 5,
  }],
});

this.apiUrl = new CfnOutput(this, 'endpointUrl', {
  value: `http://${service.listener.loadBalancer.loadBalancerDnsName}`,
});
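
The deploymentConfig referenced above controls how traffic is shifted. As a minimal sketch of the 10%-per-minute linear behavior described earlier (the reference implementation instead resolves its configuration from the deploymentConfigName prop), a configuration could be defined with the aws-codedeploy module:

import { Duration } from 'aws-cdk-lib';
import * as codedeploy from 'aws-cdk-lib/aws-codedeploy';

// Shift 10% of traffic every minute until all traffic is on the new task set.
// The predefined EcsDeploymentConfig.LINEAR_10PERCENT_EVERY_1MINUTES constant is equivalent.
const linearConfig = new codedeploy.EcsDeploymentConfig(this, 'Linear10PercentEvery1Minute', {
  trafficRouting: codedeploy.TrafficRouting.timeBasedLinear({
    interval: Duration.minutes(1),
    percentage: 10,
  }),
});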

Deployments are made incrementally across regions using the CDK Pipelines Wave construct. Each wave contains a list of regions to deploy to in parallel; one wave must fully complete before the next wave starts. In the production configuration below, each wave deploys to 2 regions at a time.

Environments are configured in CDK with the list of waves:

// BETA environment is 1 wave with 1 region
export const Beta: EnvironmentConfig = {
    name: 'Beta',
    account: accounts.beta,
    waves: [
        ['us-west-2'],
    ],
};

// GAMMA environment is 1 wave with 2 regions
export const Gamma: EnvironmentConfig = {
    name: 'Gamma',
    account: accounts.gamma,
    waves: [
        ['us-west-2', 'us-east-1'],
    ],
};

// PROD environment is 3 waves with 2 regions each
export const Prod: EnvironmentConfig = {
    name: 'Prod',
    account: accounts.production,
    waves: [
        ['us-west-2', 'us-east-1'],
        ['eu-central-1', 'eu-west-1'],
        ['ap-south-1', 'ap-southeast-2'],
    ],
};

A PipelineEnvironment class is responsible for loading the EnvironmentConfig into CodePipeline stages:

    new PipelineEnvironment(pipeline, Beta, (deployment, stage) => {
      stage.addPost(
        new SoapUITest('E2E Test', {
          source: source.codePipelineSource,
          endpoint: deployment.apiUrl,
          cacheBucket,
        }),
      );
    });

    

    new PipelineEnvironment(pipeline, Gamma, (deployment, stage) => {
      stage.addPost(
        new JMeterTest('Performance Test', {
          source: source.codePipelineSource,
          endpoint: deployment.apiUrl,
          threads: 300,
          duration: 300,
          throughput: 6000,
          cacheBucket,
        }),
      );
    }, wave => {
      wave.addPost(
        new ManualApprovalStep('PromoteToProd'),
      );
    });

    

class PipelineEnvironment {
  constructor(
    pipeline: CodePipeline,
    environment: EnvironmentConfig,
    stagePostProcessor?: PipelineEnvironmentStageProcessor,
    wavePostProcessor?: PipelineEnvironmentWaveProcessor) {
    if (!environment.account?.accountId) {
      throw new Error(`Missing accountId for environment '${environment.name}'. Do you need to update '.accounts.env'?`);
    }
    for (const [i, regions] of environment.waves.entries()) {
      const wave = pipeline.addWave(`${environment.name}-${i}`);
      for (const region of regions) {
        const deployment = new Deployment(pipeline, environment.name, {
          account: environment.account!.accountId!,
          region,
        });
        const stage = wave.addStage(deployment);
        if (stagePostProcessor) {
          stagePostProcessor(deployment, stage);
        }
      }
      if (wavePostProcessor) {
        wavePostProcessor(wave);
      }
    }
  }
}

Synthetic Tests

Amazon CloudWatch Synthetics is used to continuously deliver traffic to the application and assert that requests are successful and responses are received within a given threshold. The canary is defined via CDK using the @cdklabs/cdk-ecs-codedeploy construct:

const service = new ApplicationLoadBalancedCodeDeployedFargateService(this, 'Api', {
  ...

  apiCanaryTimeout: Duration.seconds(5),
  apiTestSteps: [{
    name: 'getAll',
    path: '/api/fruits',
    jmesPath: 'length(@)',
    expectedValue: 5,
  }],

Frequently Asked Questions

What operating models does this reference implementation support?

This reference implementation can accommodate any operating model with minor updates:

  • Fully Separated - Restrict the role that CDK uses for CloudFormation execution to only create resources from approved product portfolios in AWS Service Catalog. The Platform Engineering team owns creating the products in Service Catalog, and the Platform Operations team owns operational support of Service Catalog. The Platform Engineering team should publish CDK constructs internally that provision AWS resources through Service Catalog. Update the CDK app in the infrastructure/ directory to use CDK constructs provided by the Platform Engineering team. Use a CODEOWNERS file to require that all changes to the infrastructure/ directory be approved by the Application Operations team. Additionally, restrict permissions on the Manual Approval action to only allow members of the Application Operations team to approve.
  • Separated AEO and IEO with Centralized Governance - Restrict the role that CDK uses for CloudFormation execution to only create resources from approved product portfolios in AWS Service Catalog. The Platform Engineering team owns both creating the products in Service Catalog and their operational support. The Platform Engineering team should publish CDK constructs internally that provision AWS resources through Service Catalog. Update the CDK app in the infrastructure/ directory to use CDK constructs provided by the Platform Engineering team.
  • Separated AEO and IEO with Decentralized Governance - The Platform Engineering team should publish CDK constructs internally that provision AWS resources in a manner that achieves organizational compliance. Update the CDK app in the infrastructure/ directory to use CDK constructs provided by the Platform Engineering team.

Where is manual testing performed in this pipeline?

Ideally, all testing is accomplished through automation in the Integration Tests and Acceptance Tests actions. If an organization relies on people manually executing tests, then those tests would be performed in the Gamma stage. The Manual Approval action would be required, and approval would be granted by a Quality Assurance team member once the manual testing completes successfully.