
Introduction to DevOps on AWS - Part 2

Continuous Deployment

Continuous deployment is another core concept in a DevOps strategy. Its primary goal is
to enable the automated deployment of production-ready application code.
Continuous deployment is sometimes referred to as continuous delivery; the main
difference is that continuous deployment usually refers to automated deployments all
the way into production. By using continuous delivery practices and tools, software can
be deployed rapidly, repeatedly, and reliably. If a deployment fails, it can be
automatically rolled back to the previous version.

AWS CodeDeploy

A prime example of this principle in AWS is the code deployment service AWS
CodeDeploy. Its core features provide the ability to deploy applications across an
Amazon EC2 fleet with minimal downtime, centralizing control and integrating with your
existing software release or continuous delivery process.


Here's how it works:

1. Application content is packaged and deployed to Amazon S3 along with an
application specification (AppSpec) file that defines a series of deployment steps that
AWS CodeDeploy needs to execute. The package is called a CodeDeploy “revision.”

2. You create an application in AWS CodeDeploy and define the instances to which the
application should be deployed (DeploymentGroup). The application also defines the
Amazon S3 bucket where the deployment package resides.

3. An AWS CodeDeploy agent is deployed on each participating Amazon EC2 instance.
The agent polls AWS CodeDeploy to determine what to pull from the specified
Amazon S3 bucket, and when to pull it.

4. The AWS CodeDeploy agent pulls the packaged application code and deploys it on
the instance. The AppSpec file containing the deployment instructions is also
downloaded.
In this way, AWS CodeDeploy exemplifies the continuous automated deployment that is
central to DevOps.
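Step 1 above references the AppSpec file that drives the deployment. A minimal sketch of one might look like the following; the file paths and script names here are placeholders, not part of any real application:

```yaml
# appspec.yml -- packaged at the root of the CodeDeploy revision
# (source/destination paths and hook scripts below are illustrative)
version: 0.0
os: linux
files:
  - source: /app                 # path inside the revision archive
    destination: /var/www/myapp  # path on the EC2 instance
hooks:
  ApplicationStop:               # run before the new revision is installed
    - location: scripts/stop_server.sh
      timeout: 60
  AfterInstall:                  # run after files are copied into place
    - location: scripts/install_dependencies.sh
      timeout: 300
  ApplicationStart:              # run to bring the new revision into service
    - location: scripts/start_server.sh
      timeout: 60
```

The CodeDeploy agent on each instance executes these lifecycle hooks in order, which is how the "series of deployment steps" in step 1 is expressed.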

AWS CodePipeline

Like AWS CodeDeploy, AWS CodePipeline is a continuous delivery
and release automation service that aids smooth deployments. You can design your
development workflow for checking in code, building the code, deploying your
application into staging, testing it, and releasing it to production. You can integrate
third-party tools into any step of your release process, or you can use AWS CodePipeline as
an end-to-end solution. With AWS CodePipeline, you can rapidly deliver features and
updates with high quality through the automation of your build, test, and release process.
AWS CodePipeline has several benefits that align with the DevOps principle of
continuous deployment:

• Rapid delivery
• Improved quality
• Configurable workflow
• Easy to integrate

AWS CodeCommit

AWS CodeCommit is a secure, highly scalable, managed source
control service that hosts private Git repositories. CodeCommit eliminates the need for
you to operate your own source control system or worry about scaling its infrastructure.
You can use CodeCommit to store anything from code to binaries, and it supports the
standard functionality of Git, allowing it to work seamlessly with your existing Git-based
tools. Your team can also use CodeCommit’s online code tools to browse, edit, and
collaborate on projects.

AWS CodeCommit has several benefits:

• Fully managed
• Able to store anything
• Highly available
• Offers faster development lifecycles
• Works with your existing tools
• Secure

AWS Elastic Beanstalk and AWS OpsWorks

Both AWS Elastic Beanstalk and AWS OpsWorks support continuous deployment of
application code changes and infrastructure modifications. In AWS Elastic Beanstalk,
code change deployments are stored as “application versions,” and infrastructure
changes are deployed as “saved configurations.” AWS OpsWorks has its own process for
deploying applications and can define additional run-time launch commands and Chef
recipes.

An example of an application version would be a new Java application that you upload
as a .zip or .war file. An example of a saved configuration would be an AWS Elastic
Beanstalk configuration that uses Elastic Load Balancing and Auto Scaling rather than a
single instance. When you finish making changes, you can save your new configuration.
AWS Elastic Beanstalk supports the DevOps practice called “rolling deployments.” When
enabled, your configuration deployments work hand in hand with Auto Scaling to ensure
there are always a defined number of instances available as configuration changes are
made. This gives you control as Amazon EC2 instances are updated. For example, if the
EC2 instance type is being changed, you can determine whether AWS Elastic Beanstalk
updates all instances concurrently or keeps some instances running to serve requests
as other instances are being updated.
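The rolling-deployment behavior described above is controlled through Elastic Beanstalk configuration options. A minimal sketch as an .ebextensions fragment, where the batch values are illustrative assumptions:

```yaml
# .ebextensions/rolling.config -- illustrative batch values
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Rolling  # update instances in batches, not all at once
    BatchSizeType: Fixed       # batch size counted in instances
    BatchSize: 2               # instances taken out of service per batch
```

With a fixed batch size of 2, Elastic Beanstalk updates two instances at a time while the rest of the Auto Scaling group continues serving requests.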

Similarly, AWS OpsWorks gives you the option of defining which instances in which
layers should be updated when deployments are made.
Additional features of AWS Elastic Beanstalk and AWS OpsWorks are described in the
Automation section.

Blue–Green Deployment

Blue–green deployment is a DevOps deployment practice that uses the Domain Name
System (DNS) to cut over application deployments. The strategy involves keeping an
existing (blue) environment live while testing a new (green) one. When the new environment
has passed all the necessary tests and is ready to go live, you simply redirect traffic from
the old environment to the new one via DNS.

AWS offers all the tools that you need for implementing a blue–green deployment
strategy. You configure your ideal new infrastructure environment by using a service like
AWS CloudFormation or AWS Elastic Beanstalk. With AWS CloudFormation templates,
you can easily create a new environment identical to the existing production
environment.

If you use the AWS DNS service Amazon Route 53, you can direct the traffic flow by
means of weighted resource record sets. By using these record sets, you can associate
multiple endpoints, such as services or load balancers, with a single DNS name.

The DNS service resolution (converting a domain name to an IP address) is weighted,
meaning you can define how much traffic is directed to your newly deployed production
environment. By using this feature, you can test the environment and, when you are
confident that the deployment is good, increase the weighting. When the old production
environment is receiving 0% traffic, you can either keep it for backup purposes or
decommission it. As the amount of traffic to the new environment increases, you can
use Auto Scaling to launch additional Amazon EC2 instances.
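The weighted record sets described above might be expressed as a Route 53 change batch like the following sketch; the domain and load balancer names are placeholders. Shifting the Weight values in later changes gradually moves traffic from blue to green:

```json
{
  "Comment": "Blue-green cutover: 90% blue, 10% green (illustrative)",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "CNAME",
        "SetIdentifier": "blue",
        "Weight": 90,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "blue-lb.us-east-1.elb.amazonaws.com" }]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "www.example.com",
        "Type": "CNAME",
        "SetIdentifier": "green",
        "Weight": 10,
        "TTL": 60,
        "ResourceRecords": [{ "Value": "green-lb.us-east-1.elb.amazonaws.com" }]
      }
    }
  ]
}
```

Route 53 resolves each query to one of the record sets with probability proportional to its weight, so raising green's weight to 100 and blue's to 0 completes the cutover.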

This ability to create and dispose of identical environments easily in the AWS cloud
makes DevOps practices like blue–green deployment feasible.
You can also use blue–green deployment for back-end services like database
deployment and failover.

Automation

Another core philosophy and practice of DevOps is automation. Automation focuses on
the setup, configuration, deployment, and support of infrastructure and the applications
that run on it. By using automation, you can set up environments more rapidly in a
standardized and repeatable manner. The removal of manual processes is key to a
successful DevOps strategy. Historically, server configuration and application
deployment have been predominantly manual processes. Environments become
nonstandard, and reproducing an environment when issues arise is difficult.
The use of automation is critical to realizing the full benefits of the cloud. Internally, AWS
relies heavily on automation to provide the core features of elasticity and scalability.
Manual processes are error prone, unreliable, and inadequate to support an agile
business. Frequently, an organization ties up highly skilled resources to perform
manual configuration when that time could be better spent supporting other, more
critical and higher-value activities within the business.

Modern operating environments commonly rely on full automation to eliminate manual
intervention or access to production environments. This includes all software releases,
machine configuration, operating system patching, troubleshooting, and bug fixing. Many
levels of automation practices can be used together to provide a higher-level, end-to-end
automated process.

Automation has many benefits:

• Rapid changes
• Improved productivity
• Repeatable configurations
• Reproducible environments
• Leveraged elasticity
• Leveraged auto scaling
• Automated testing

Automation is a cornerstone of AWS and is internally supported in all
services, features, and offerings.

AWS Elastic Beanstalk

For an example of automation in AWS, one need look no further than AWS Elastic
Beanstalk. AWS Elastic Beanstalk is a service that makes it easy and productive for
developers to deploy applications into commonly used technology stacks. Its simple-to-use
interface helps developers deploy multitiered applications quickly and easily. AWS
Elastic Beanstalk supports automation and numerous other DevOps best practices
including automated application deployment, monitoring, infrastructure configuration,
and version management. Application and infrastructure changes can be easily rolled
back as well as forward.

Creating environments provides a good example of AWS Elastic Beanstalk automation.
You simply specify the details for your environment, and AWS Elastic Beanstalk does all
the configuration and provisioning work on its own. For example, here are just some of the
options you can specify in the create application wizard:

• Whether you want a web server tier (which contains a web server and an application
server) or a worker tier (which utilizes the Amazon Simple Queue Service).

• What platform to use for your application. Choices include IIS, Node.js, PHP, Python,
Ruby, Tomcat, or Docker.

• Whether to launch a single instance or a load-balancing, autoscaling
environment.

• What URL to automatically assign to your environment.
• Whether the environment includes an Amazon Relational Database Service (Amazon RDS) instance.
• Whether to create your environment inside an Amazon Virtual Private Cloud.
• What URL (if any) to use for automatic health checks of your application.
• What tags (if any) to apply to identify your environment.

AWS Elastic Beanstalk also uses automation to deploy applications. Depending on the
platform, all you need to do to deploy applications is to upload packages in the form of
.war or .zip files directly from your computer or from Amazon S3.

As the environment is being created, AWS Elastic Beanstalk automatically logs events
on the management console providing feedback on the progress and status of the
launch. Once complete, you can access your application by using the defined URL.
AWS Elastic Beanstalk can be customized should you want to take control over certain
aspects of the application and technology stack.

AWS OpsWorks

AWS OpsWorks takes the principles of DevOps even further than AWS Elastic Beanstalk.
AWS OpsWorks provides even more levels of automation with additional features like
integration with configuration management software (Chef) and application lifecycle
management. You can use application lifecycle management to define when resources
are set up, configured, deployed, un-deployed, or terminated.

For added flexibility AWS OpsWorks has you define your application in configurable
stacks. You can also select predefined application stacks. Application stacks contain all
the provisioning for AWS resources that your application requires, including application
servers, web servers, databases, and load balancers.


Application stacks are organized into architectural layers so that stacks can be
maintained independently. Example layers could include web tier, application tier, and
database tier. Out of the box, AWS OpsWorks also simplifies setting up Auto Scaling
groups and Elastic Load Balancing load balancers, further illustrating the DevOps
principle of automation. Just like AWS Elastic Beanstalk, AWS OpsWorks supports
application versioning, continuous deployment, and infrastructure configuration
management.

AWS OpsWorks also supports the DevOps practices of monitoring and logging (covered
in the next section). Monitoring support is provided by Amazon CloudWatch. All lifecycle
events are logged, and a separate Chef log documents any Chef recipes that are run,
along with any exceptions.

Monitoring

Communication and collaboration are fundamental in a DevOps strategy. To facilitate this,
feedback is critical. In AWS feedback is provided by two core services: Amazon
CloudWatch and AWS CloudTrail. Together they provide a robust monitoring, alerting,
and auditing infrastructure so developers and operations teams can work together
closely and transparently.

Amazon CloudWatch

Amazon CloudWatch monitors your AWS resources, and the applications you run on
them, in real time. Resources and applications can produce metrics that Amazon
CloudWatch collates and tracks. You can configure alarms to send notifications when
events occur. You can configure notifications in numerous formats, including email,
Amazon SNS, and Amazon Simple Queue Service. The notifications can be delivered to
individuals, teams, or other AWS resources.

As well as providing feedback, Amazon CloudWatch also supports the DevOps concept
of automation. AWS services such as Auto Scaling rely on CloudWatch notifications
to trigger appropriate automated actions, such as scaling Amazon EC2 capacity up or
down as load increases or decreases.


Consider an example in which Amazon CloudWatch monitors latency metrics from Elastic Load
Balancing and average CPU metrics from the running Amazon EC2 instances. Latency
metrics measure how long it takes for replies to be returned after requests are made to
the Amazon EC2 instances. You can create scaling policies to act upon alarms that are
triggered when defined thresholds are broken. Such a policy can result in an increase or
decrease in the number of Amazon EC2 instances, depending upon the situation. You
can define additional notifications to deliver messages through Amazon SNS. This can
be useful to notify interested parties such as support teams that events have occurred.
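The alarm-driven policy described above can be sketched as a toy model. This is illustrative Python, not the CloudWatch or Auto Scaling API, and the thresholds are made-up values:

```python
def scaling_decision(avg_cpu_percent, scale_out_at=75.0, scale_in_at=25.0):
    """Toy model of a CloudWatch alarm driving an Auto Scaling policy.

    Returns the action a scaling policy might take when the average CPU
    metric breaches a threshold (both thresholds here are illustrative).
    """
    if avg_cpu_percent > scale_out_at:
        return "scale_out"   # alarm fires: add Amazon EC2 instances
    if avg_cpu_percent < scale_in_at:
        return "scale_in"    # alarm fires: remove Amazon EC2 instances
    return "no_change"       # metric within bounds: no alarm

print(scaling_decision(90.0))  # → scale_out
print(scaling_decision(50.0))  # → no_change
```

In the real services, the alarm evaluation and the resulting instance launches or terminations are fully automated; only the thresholds and policies are defined by you.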

The Auto Scaling example illustrates how AWS services work together to provide
automated, transparent services that are key to embracing a DevOps strategy. System
administrators and support teams can focus on other value-added business needs,
resting assured that the AWS infrastructure is taking care of the application's scaling
requirements. Note that this scenario assumes that the application in question is cloud
optimized and designed in a horizontally scalable way to leverage the benefits of Auto
Scaling.

AWS CloudTrail

In order to embrace the DevOps principles of collaboration, communication, and
transparency, it’s important to understand who is making modifications to your
infrastructure. In AWS this transparency is provided by the AWS CloudTrail service.

All AWS interactions are handled through AWS API calls that are monitored and logged by
AWS CloudTrail. All generated log files are stored in an Amazon S3 bucket that you
define. Log files are encrypted using Amazon S3 server-side encryption (SSE). All API
calls are logged whether they come directly from a user or on behalf of a user by an
AWS service. Numerous groups can benefit from CloudTrail logs, including operations
teams for support, security teams for governance, and finance teams for billing.

Security

In a DevOps-enabled environment, focus on security is still of paramount importance.
Infrastructure and company assets need to be protected, and when issues arise they
need to be rapidly and effectively addressed.

Identity and Access Management (IAM)

The AWS Identity and Access Management service (IAM) is one component of the AWS
security infrastructure. With IAM, you can centrally manage users and security
credentials such as passwords, access keys, and permissions policies that control which
AWS services and resources users can access. You can also use IAM to create roles
that are used widely within a DevOps strategy. With an IAM role you can define a set of
permissions to access the resources that a user or service needs. But instead of
attaching the permissions to a specific user or group, you attach them to a named role.
Resources can be associated with roles, and services can then programmatically
assume a role.
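As a sketch, the trust policy that lets Amazon EC2 instances assume such a role looks like the following; this is the standard IAM policy document format, and a separate permissions policy attached to the same role would then grant access to specific resources:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Applications running on an instance launched with this role receive temporary credentials automatically, which avoids embedding long-lived access keys in code or configuration.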

Security requirements and controls must be adhered to during any automation
process, and great care should be taken when working with passwords and keys.
Security best practices should be followed at all times. For details about the importance
of security to AWS, visit the AWS Security Center.

Conclusion

In order to make the journey to the cloud smooth, efficient and effective, technology
companies can embrace DevOps principles and practices. These principles are
embedded in the AWS platform. Indeed, they form the cornerstone of numerous AWS
services, especially those in the deployment and monitoring offerings.

Begin by defining your infrastructure as code using the service AWS CloudFormation.
Next, define the way in which your applications are going to use continuous deployment
with the help of services like AWS CodeDeploy, AWS CodePipeline, and AWS
CodeCommit. At the application level, use services like AWS Elastic Beanstalk and AWS
OpsWorks to simplify the configuration of common architectures. Using these services
also makes it easy to include other important services like Auto Scaling and Elastic Load
Balancing. Finally, use the DevOps strategy of monitoring (Amazon CloudWatch) and solid
security practices (AWS IAM).

With AWS as your partner, your DevOps principles will bring agility to your business and
IT organization and accelerate your journey to the cloud.
