AWS CI/CD + Java + SpringBoot

Java 8 + SpringBoot + Microservices Continuous Integration + Continuous Delivery @ AWS

Author: Daniel Gomes <daniel@danielcg.net>

Project Overview

Deliver a Java 8 microservice built on the SpringBoot framework through an automated build and delivery pipeline.

The architected solution can be reused for any microservice based on Java + SpringBoot on AWS and serve as baseline template for a quick, automated deployment of new services.

The solution relies entirely on managed AWS offerings such as CodeCommit, CodeBuild and CodePipeline to achieve CI/CD. The advantage is avoiding the installation and management of tools such as Jenkins, which accelerates deployment. The disadvantage is vendor lock-in to AWS, although other cloud SaaS providers offer the same features with minimal differences.

Architectural Diagram

Product Technical Specifications

The solution relies on services offered by Amazon Web Services (AWS):

- Elastic Beanstalk: Provides virtual machines through EC2 and Application Load Balancer

- CodeCommit: Provides a Git repository to host the code

- CodeBuild: Provides a code builder and unit tester to generate the application JAR through Maven

- CodePipeline: Creates a deployment pipeline to trigger the build through CodeBuild and deployment by Elastic Beanstalk

- S3: Provides storage for the build application to be retrieved by Elastic Beanstalk

- Java 8: Development platform

- SpringBoot: Application framework used to build and run the microservice

- NAT Gateway: Provides internet access so the private subnet instances can download and install the software required to run the Java app

This solution was created based on the reference paper by AWS “Scenario B: Use AWS CodePipeline to Run AWS CodeBuild and Deploy to Elastic Beanstalk”.

Additional enhancements were made to the network architecture to provide a more production-like environment.

AWS and IAM users

This project requires an AWS account. The Application Load Balancer cannot be deployed in a free tier account; therefore, a paid account must be used.

A non-root user with programmatic access must be created in IAM and granted administrative policies for the services above, avoiding the security risks of using the root account.

For a quick guide:

https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html
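As a sketch, the same setup can be done with the AWS CLI from an existing admin session; the user name pipeline-admin and the broad AdministratorAccess policy below are illustrative choices, not requirements of this project:

```shell
# Create a non-root IAM user (user name is illustrative)
aws iam create-user --user-name pipeline-admin

# Attach a policy granting access to the services used by the pipeline.
# AdministratorAccess is the broadest option; scope it down for production.
aws iam attach-user-policy \
  --user-name pipeline-admin \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Generate the access key/secret pair for programmatic (CLI/SDK) access
aws iam create-access-key --user-name pipeline-admin
```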

AWS Key Pairs

Once the user is created, a private/public key pair must be generated so the EC2 instances are accessible through SSH.

Refer to the guide below and store the keys securely on the local machines of the privileged operators.

https://docs.aws.amazon.com/cli/latest/userguide/cli-ec2-keypairs.html
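A minimal CLI sketch, assuming a hypothetical key name springsampleapp-key:

```shell
# Generate the key pair and save the private key locally
aws ec2 create-key-pair \
  --key-name springsampleapp-key \
  --query 'KeyMaterial' \
  --output text > springsampleapp-key.pem

# Restrict file permissions so SSH accepts the key file
chmod 400 springsampleapp-key.pem
```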

AWS VPC and Subnets

A custom VPC must be created with two private subnets in two different availability zones within the same region, for example, sa-east-1.

This will provide single region redundancy:

Create an Internet gateway and attach it to this new VPC:

After the VPC is created, create two private subnets, one in each AZ:

10.0.0.0/24 for sa-east-1a

10.0.1.0/24 for sa-east-1c

And a public subnet to host the NAT Gateway and the exposed load balancers:

10.0.2.0/24 (sa-east-1a)

10.0.3.0/24 (sa-east-1c)

The public subnets must have Auto-assign public IPv4 turned on so their instances can reach the Internet.
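For reference, the VPC and subnet layout above can be sketched with the AWS CLI; the 10.0.0.0/16 parent CIDR is an assumption consistent with the /24 subnets, and the vpc-xxxx / igw-xxxx / subnet IDs are placeholders for the values each command returns:

```shell
# Create the custom VPC (assumed parent CIDR for the /24 subnets above)
aws ec2 create-vpc --cidr-block 10.0.0.0/16

# Create the Internet gateway and attach it to the new VPC
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-xxxx --vpc-id vpc-xxxx

# Two private subnets, one per availability zone
aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.0.0/24 --availability-zone sa-east-1a
aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.1.0/24 --availability-zone sa-east-1c

# Two public subnets, with auto-assign public IPv4 turned on
aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.2.0/24 --availability-zone sa-east-1a
aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.3.0/24 --availability-zone sa-east-1c
aws ec2 modify-subnet-attribute --subnet-id subnet-public-a --map-public-ip-on-launch
aws ec2 modify-subnet-attribute --subnet-id subnet-public-c --map-public-ip-on-launch
```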

Create a new route table for internet access, separate from the main route table. If the default route to the internet were added to the main route table, every newly created subnet would get undesired external exposure.

Then, assign each subnet to its respective route table.

Private subnets are associated with the main route table and public subnets get associated with the Internet routed route table.

Above, the private subnets. Below, the public subnets.

Notice that all subnets are assigned to a routing table.
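The route table wiring can be sketched with the AWS CLI as follows (all IDs are placeholders):

```shell
# New route table dedicated to Internet-bound traffic
aws ec2 create-route-table --vpc-id vpc-xxxx

# Default route pointing to the Internet gateway
aws ec2 create-route --route-table-id rtb-public \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-xxxx

# Associate only the public subnets with the Internet-routed table;
# the private subnets remain on the main route table
aws ec2 associate-route-table --route-table-id rtb-public --subnet-id subnet-public-a
aws ec2 associate-route-table --route-table-id rtb-public --subnet-id subnet-public-c
```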

NAT Gateway

For security reasons, our backends will be allocated in private subnets without direct access to the Internet. Whenever software needs to be installed or updated on the EC2 virtual machines, outbound traffic must go through a NAT Gateway. Elastic Beanstalk also needs Internet connectivity to report health statuses, or it would fail to launch the instances.

Create the NAT Gateway in only one AZ; there is no critical need for redundancy for now.

Then, go to the VPC subnet settings and configure a 0.0.0.0/0 route in the routing table pointing to the NAT Gateway. Just remember to create the NAT Gateway in a public subnet, which has an Internet gateway and a valid public IPv4 address, or it will fail to create.

Once the NAT Gateway is ready, configure a default route through the NAT Gateway in the route tables of the VPC private subnets, because the Elastic Beanstalk instances need to talk to AWS endpoints. If this is not done, Elastic Beanstalk will fail to launch.

Create a new route table and associate it to the private subnets:
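The same NAT Gateway setup, sketched with the AWS CLI (all IDs are placeholders):

```shell
# Elastic IP for the NAT Gateway (it needs a public IPv4 address)
aws ec2 allocate-address --domain vpc

# The NAT Gateway must live in a public subnet
aws ec2 create-nat-gateway --subnet-id subnet-public-a --allocation-id eipalloc-xxxx

# Route table for the private subnets: default route through the NAT Gateway
aws ec2 create-route-table --vpc-id vpc-xxxx
aws ec2 create-route --route-table-id rtb-private \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id nat-xxxx
aws ec2 associate-route-table --route-table-id rtb-private --subnet-id subnet-private-a
aws ec2 associate-route-table --route-table-id rtb-private --subnet-id subnet-private-c
```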

Java Source and CodeCommit – Hello World REST service

A sample SpringBoot hello world application from the internet will be used as a deployment target.

The goal of this project is to demonstrate the continuous integration and continuous delivery capabilities of AWS for Java / SpringBoot applications. Therefore, the following Git repository will be cloned:

git clone https://github.com/spring-guides/gs-rest-service.git

Inside the structure of this cloned project, there is a folder called “complete” which will serve as our master Git repository in AWS CodeCommit.

Before copying it into a new Git repository folder, create a CodeCommit repository in AWS and initialize it locally on your machine.
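Creating the repository can also be sketched with the AWS CLI; the description text is illustrative:

```shell
# Create the (still empty) CodeCommit repository in the chosen region
aws codecommit create-repository \
  --repository-name springsampleapp \
  --repository-description "SpringBoot CI/CD sample" \
  --region sa-east-1
```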

To be able to access the CodeCommit Git repository, go to IAM in the AWS Management Console, Security Credentials, HTTPS Git credentials for AWS CodeCommit, and generate credentials. Save them, as this is your only chance to see the password:

Go to the root directory of your local machine projects and clone the created (still empty) repository to initialize it locally – notice the region where it was created – here it’s named springsampleapp:

git clone https://git-codecommit.sa-east-1.amazonaws.com/v1/repos/springsampleapp

Now, change directory to the springsampleapp directory and copy all the contents from our template project into our local CodeCommit GIT repository:
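Assuming both clones sit side by side under the projects root, the copy can be sketched as:

```shell
# Copy the "complete" sample into the local CodeCommit working copy
# (paths assume both repositories were cloned into the same parent directory)
cp -r gs-rest-service/complete/* springsampleapp/
cd springsampleapp
```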

At this point, maven can be executed to build and test the app locally:

If it’s the first time Maven is being run, expect a lot of artifact downloads.

Maven will build the app, and the tests will call the service locally. Expect a success like this:

Once Maven is done:

Code works!
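The local build-and-test cycle above can be sketched as follows; the /greeting endpoint and its JSON response come from the gs-rest-service guide itself:

```shell
# Build and unit test the project (use ./mvnw if the project ships the Maven wrapper)
mvn clean package

# Run the service locally on the default port 8080
java -jar target/gs-rest-service-0.1.0.jar

# In another terminal, call the sample endpoint exposed by gs-rest-service
curl http://localhost:8080/greeting
# The guide's documented response: {"id":1,"content":"Hello, World!"}
```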

Create a buildspec.yml file like the one below in the root directory of the springsampleapp project:

version: 0.2
phases:
  post_build:
    commands:
      # Build and unit test the application; the JAR lands under target/
      - mvn package
      # Move the JAR to the repository root so the artifact path below is flat
      - mv target/gs-rest-service-0.1.0.jar gs-rest-service-0.1.0.jar
artifacts:
  files:
    - gs-rest-service-0.1.0.jar

This will be used by CodeBuild / CodePipeline so it knows how to package the application with Maven.

Add the files to CodeCommit GIT:

git add -A

(Authenticate with the HTTPS GIT Credentials for CodeCommit obtained in IAM)

git commit -m "Added sample application files"

git push

You should be able to see the committed code to the CodeCommit repository:

S3 Bucket and Artifacts storage

It’s necessary to have an S3 bucket where CodeBuild will place the zip file with the artifacts, so create one. Remember to create it in the same region used by CodeCommit:
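A one-line CLI sketch, with an illustrative (globally unique) bucket name:

```shell
# Create the artifact bucket in the same region as CodeCommit
aws s3 mb s3://springsampleapp-artifacts --region sa-east-1
```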

Configuring Automated Code Retrieval and Code Build

Create a new project in CodePipeline to access the source code repository, build and deploy to Elastic Beanstalk instances.

Go to the management console and create a new project in CodePipeline as follows.

Pick a Pipeline name and select the CodeCommit as source location:

For the build provider, select AWS CodeBuild. Configure the environment for Ubuntu and Java 8, sourcing the buildspec.yml created earlier.

Save the build project, go to CodeBuild, and run a manual build. This is relevant because it tests the build process and the file delivery to the S3 bucket:
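The manual build can also be triggered from the CLI; the project name springsampleapp-build is an assumed choice:

```shell
# Trigger a manual build to validate the buildspec and the S3 artifact upload
aws codebuild start-build --project-name springsampleapp-build

# Inspect the build status (the build ID comes from the previous command's output)
aws codebuild batch-get-builds --ids springsampleapp-build:xxxx
```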

Important settings for the build are the Artifacts staging location (S3) and packaging type (zip):

A successful build will yield a file in S3 bucket and this output:

Go to the S3 bucket and get the link for the zipped artifacts file as it will be needed to create the Elastic Beanstalk instance:

Save the project; the next step is to create the AWS Elastic Beanstalk app/environment. Select AWS Elastic Beanstalk as the Deployment provider and create a new application in AWS Elastic Beanstalk.

Configuring and Provisioning the Elastic Beanstalk Instances

At this point, the code is in CodeCommit, the builder is configured in CodeBuild and we have a partially created CodePipeline project. To wrap up the automated build and deployment process, since our deployment provider will be the AWS Elastic Beanstalk, we need to create an application and environment for it.

Start by creating a new Web server environment. As the pre-configured platform, select “Java”.

In Upload your code, point it to the S3 bucket where the builder dropped the zip file:

To deploy a high-availability setup, click to configure more options and add the Application Load Balancer.

In the Network configuration, place the environment in the custom VPC created before.

The load balancer instances go in the public subnets, while the app instances go in the private subnets:

Go to the Load Balancer settings and change it from Classic to Application Load Balancer.

The application load balancer must be configured to route incoming HTTP requests at port 80 to the backend TCP port 8080:

It is also a good opportunity to configure the machine’s key pairs for remote access, if needed.

Security Groups to control access to the instances can be assigned here, and it is a good idea to assign one. They can be reconfigured later without having to re-provision the instances:

Elastic Beanstalk supports auto-scaling and load balancing in the High Availability profile, which is great because it saves the time of deploying these services manually.

Below is the summary of the Elastic Beanstalk deployment configuration:

By clicking “Create App” the provisioning of the environment is triggered.
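For reference, the same application and environment can be sketched with the AWS CLI; the names, S3 key, and solution-stack string below are illustrative (list the valid stacks with aws elasticbeanstalk list-available-solution-stacks):

```shell
# Create the Elastic Beanstalk application
aws elasticbeanstalk create-application --application-name springsampleapp

# Register the build artifact from S3 as an application version (key is a placeholder)
aws elasticbeanstalk create-application-version \
  --application-name springsampleapp \
  --version-label v1 \
  --source-bundle S3Bucket=springsampleapp-artifacts,S3Key=springsampleapp-build.zip

# Create the environment on a Java platform (check the exact stack name first)
aws elasticbeanstalk create-environment \
  --application-name springsampleapp \
  --environment-name springsampleapp-env \
  --version-label v1 \
  --solution-stack-name "64bit Amazon Linux 2018.03 v2.7.6 running Java 8"
```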

Since we don’t have any app deployed, health checks will fail.

It is necessary to go back and wrap up the CodePipeline project so that the deployment of the application happens.

Configuring Automated Code Deployment

Finally, once Elastic Beanstalk is done deploying the load balancer and one instance (it might not instantiate the second instance until it is needed), configure the Deploy section of CodePipeline:

CodePipeline requires a role with permissions to access AWS services. More details about the policies are described in the base article.

Finally, create the pipeline!

Any change pushed to the CodeCommit repository will be automatically picked up and published to Elastic Beanstalk through this pipeline:

The setup can be configured to do rolling deployments across the instances to avoid service outages.

Post-commit and deployment checks

The build and deployment of code changes to the instances can be verified from the AWS console or CLI, as in the screenshot above.

If everything goes well, the Elastic Beanstalk console will show the app in green:

And detailed information about the event:

The final, and most important, test is to access the Load Balancer URL and request the service.

Grab the URL from EC2 console, Load Balancer section and access it through a browser:

And this is what we lived for:
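A CLI sketch of the same check; the DNS name placeholder must be replaced with the value returned by the first command:

```shell
# Look up the Application Load Balancer's DNS name
aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[0].DNSName' --output text

# Request the service through the load balancer
curl http://<alb-dns-name>/greeting
```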

Final considerations

The default Elastic Beanstalk configuration starts only one instance at a time and auto-scales as needed. This can be changed in the Elastic Beanstalk capacity settings:
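This can also be sketched via the CLI using the aws:autoscaling:asg option namespace (the environment name and sizes are illustrative):

```shell
# Raise the minimum capacity so two instances run from the start
aws elasticbeanstalk update-environment \
  --environment-name springsampleapp-env \
  --option-settings \
    Namespace=aws:autoscaling:asg,OptionName=MinSize,Value=2 \
    Namespace=aws:autoscaling:asg,OptionName=MaxSize,Value=4
```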

Much can be done to tweak this setup for production environments to enhance availability, resiliency, security and performance.

From the architectural point of view, alternative approaches can be considered, such as serverless Lambda deployments or Docker containers orchestrated by Kubernetes. It would all depend on a high-level view of the system and on business drivers, such as the number of involved microservices, legacy platforms, databases and data sources.

Multiple stages could be added to the pipeline to take care of multiple environments, depending on the organization’s deployment strategies.