Unleashing the Full Potential of Containerization for DevOps, and Avoiding First-Time Pitfalls

A powerful tool for simplifying DevOps is containerization, which delivers a convenient form of application packaging combined with the opportunity to automate certain IT provisioning processes. With containerization, DevOps teams can focus on their respective priorities: the Ops team prepares containers with all the needed dependencies and configurations, while the Dev team focuses on writing an application that can be easily deployed.

This automation can be achieved through PaaS or CaaS solutions, which offer additional benefits including eliminating human error, accelerating time to market and making resource utilization more efficient. Other important benefits of containerization are:

  • Container-based virtualization delivers higher application density and better utilization of server resources than virtual machines.
  • Thanks to the advanced isolation of system containers, different types of applications can run on the same hardware node, which reduces total cost of ownership (TCO).
  • Resources that are not consumed within container boundaries are automatically shared with other containers running on the same hardware node.
  • Automatic vertical scaling of containers adjusts memory and CPU allocation to the current load, and unlike VM scaling, resource limits can be changed without a restart (see the sketch after this list).

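A minimal sketch of that last point, assuming the Docker SDK for Python (`pip install docker`) and an already running container named "web" (a hypothetical name): the resource limits of the live container are raised in place, with no restart.

    import docker

    client = docker.from_env()
    web = client.containers.get("web")   # hypothetical container name

    # Raise the memory ceiling and CPU share of the running container in place.
    web.update(mem_limit="1g", memswap_limit="1g", cpu_shares=1024)

    print(web.name, "limits updated without a restart")
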
Unleashing the potential of containerization for DevOps requires careful attention to several challenges, however, especially for first-time adopters.

Realizing Project Needs 

At the early stages, DevOps teams must analyze the current state of their projects and decide what is required to move to containers, in order to realize long-term, ongoing benefits.

For an optimal architecture, the right type of container must be selected. There are two types:

  • an application container (e.g. a Docker container) runs as little as a single process
  • a system container (e.g. LXC, OpenVZ) behaves like a full OS and can run a full-featured init system such as systemd, SysVinit or OpenRC, which allows it to spawn multiple processes (openssh, crond, syslogd and so on) inside a single container

For new projects, application containers are typically more appropriate, as it is relatively easy to create the necessary images from publicly available Docker templates while taking into account the specific requirements of microservice patterns and modern immutable infrastructure design.
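
As a small illustration of how little setup an application container needs, here is a sketch assuming the Docker SDK for Python and the public nginx image; the container name and port mapping are arbitrary examples rather than part of any particular platform.

    import docker

    client = docker.from_env()

    # Start a single-process application container from a public image,
    # mapping container port 80 to port 8080 on the host.
    app = client.containers.run(
        "nginx:alpine",
        detach=True,
        ports={"80/tcp": 8080},
        name="demo-app",   # hypothetical name
    )

    print(app.short_id, app.status)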

It is a common misconception that containers are good only for greenfield applications (microservices and cloud-native). They can indeed breathe new life into legacy applications, with just a bit of extra work at the initial phase while migrating from VMs.

For monolithic and legacy applications it is preferable to use system containers, so organizations can reuse architecture and configurations that were implemented in the original VM-based design.

Future-Proofing Containerization Strategy

After determining what the project requires today, it is best to think about the future and understand where technology is heading. With project growth, complexity will increase, so a platform for orchestration and automation of the main processes will most likely be needed.

Managing containerized environments is complex, and PaaS solutions help developers concentrate on coding rather than operations. There are many options when it comes to container orchestration platforms and services. Figuring out which one is best for a particular organization’s needs and applications can be a challenge, especially when needs are frequently changing.

Here are several points that should be considered when choosing a platform for containerization:

  • Flexibility. It is paramount to have a platform with a sufficient level of automation, which can be easily adjusted depending on variable requirements.
  • Level of Lock-In. PaaS solutions are often proprietary and therefore can lock you into one vendor or infrastructure provider.
  • Freedom to Innovate. The platform should offer a wide set of built-in tools, as well as possibilities to integrate third-party technologies in order not to constrain developers’ ability to innovate.
  • Supported Cloud Options. When using containerization in the cloud, it is also important that your strategy supports public, private and hybrid cloud deployments, as needs can change over time.
  • Pricing Model. Choosing a specific platform is typically a long-term commitment, so it is important to consider what pricing model is offered. Many public cloud platforms offer VM-based licensing, which may not be cost-efficient once you have migrated to containers, since containers can be charged for actual usage rather than for reserved limits.

Which platform you choose can significantly influence your business success, so the selection process should be carefully considered.

Expertise

Successful adoption of containers is not a trivial task. Managing them requires a different process and knowledge base, compared with virtual machines. The difference is significant, and many tricks and best practices with VM lifecycle management cannot be applied to containers. Ops teams need to educate themselves on this to avoid costly missteps.

The traditional operations skill set is no longer sufficient for efficient containerization in the cloud. Cloud providers now mainly deliver management of infrastructure hardware and networks, while Ops teams are expected to automate software deployment through scripting and container-oriented tools.

Systems integrators and consulting companies can provide their expertise and maximize the benefits of containers. But if you want an in-house team to manage the whole process, it’s time to start building your own expertise: hire experienced DevOps professionals, learn best practices, and create a new knowledge base.

Investing Time and Effort

Don’t expect to get a containerized infrastructure instantly. Some up-front time must be invested, especially if your architecture needs to be restructured to run microservices. To migrate from VMs, for example, monolithic applications should be decomposed into small logical pieces distributed across a set of interconnected containers. This requires specific knowledge to accomplish successfully.
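
A simplified sketch of what "a set of interconnected containers" can look like in practice, assuming the Docker SDK for Python; the image names, the network name and the password are placeholders, not a recommended production setup. The web tier and the database each run as their own container, joined by a private network.

    import docker

    client = docker.from_env()

    # One private network for the decomposed application.
    net = client.networks.create("shop-net", driver="bridge")   # hypothetical name

    # Each logical piece of the former monolith becomes its own container.
    db = client.containers.run(
        "postgres:16-alpine", detach=True, name="shop-db",
        network="shop-net",
        environment={"POSTGRES_PASSWORD": "example"},   # placeholder secret
    )
    web = client.containers.run(
        "nginx:alpine", detach=True, name="shop-web",
        network="shop-net",   # can reach "shop-db" by its container name
        ports={"80/tcp": 8080},
    )

    print(db.name, web.name, "attached to", net.name)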

In addition, for large organizations, it can be vital to select a solution that handles heterogeneous types of workloads using VMs and containers within one platform, because enterprise-wide container adoption can be a gradual process.

Security Concerns

Containerized environments are extremely dynamic, with the ability to change much more quickly than environments in VMs. This agility is a huge container benefit, but it can also be a challenge to achieve the appropriate level of security, while simultaneously enabling the required quick and easy access for developers.

A set of security risks should be considered with containerization:

  • Basic container technology doesn’t easily deal with interservice authentication, network configurations, partitions, and other concerns regarding network security when calling internal components inside a microservice application.
  • Using publicly available container templates packaged by untrusted or unknown third parties is risky. Vulnerabilities can be intentionally or unintentionally added to this type of container (one mitigation is sketched below).

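One simple mitigation for the second risk is to treat public images as untrusted until their content digest matches one you have already vetted. A minimal sketch, assuming the Docker SDK for Python; the image tag and the expected digest are placeholders you would replace with your own reviewed values.

    import docker

    client = docker.from_env()

    EXPECTED_DIGEST = "nginx@sha256:<vetted-digest-goes-here>"   # placeholder value

    image = client.images.pull("nginx", tag="1.25-alpine")

    # RepoDigests lists the content-addressable digests reported by the registry.
    if EXPECTED_DIGEST not in image.attrs.get("RepoDigests", []):
        raise RuntimeError("Image digest does not match the vetted digest; refusing to run it")

    client.containers.run("nginx:1.25-alpine", detach=True, name="trusted-app")
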
Traditional security approaches should be complemented with continuously evolving strategies to keep pace with today’s dynamic IT environment. A key point here is that a wide range of tools and orchestration platforms continues to evolve, offering certified, proven templates that help secure containers and ease the configuration process.

The IT market now offers a wide choice of solutions for container orchestration, making adoption easier, but skilled hands are required so the benefits can be fully leveraged and unexpected consequences avoided.

*****

This article was originally published at DEVOPSdigest.

Now that you have a closer insight into why containerization is crucial for DevOps, what challenges can be faced, and how to overcome them, it is time to take a closer look at the Jelastic PaaS, which can be a helping hand during this evolutionary shift.

Introducing Continuous Integration into Your Organization

Continuous Integration is not an all-or-nothing affair. In fact, introducing CI into an organization takes you on a path that progresses through several distinct phases. Each of these phases involves incremental improvements to the technical infrastructure as well as, perhaps more importantly, improvements in the practices and culture of the development team itself. In the following paragraphs, I have tried to paint an approximate picture of each phase.

Phase 1—No Build Server

Initially, the team has no central build server of any kind. Software is built manually on a developer’s machine, though it may use an Ant script or similar to do so. Source code may be stored in a central source code repository, but developers do not necessarily commit their changes on a regular basis. Some time before a release is scheduled, a developer manually integrates the changes, a process which is generally associated with pain and suffering.

Phase 2—Nightly Builds

In this phase, the team has a build server, and automated builds are scheduled on a regular (typically nightly) basis. This build simply compiles the code, as there are no reliable or repeatable unit tests. Indeed, automated tests, if they are written, are not a mandatory part of the build process, and may well not run correctly at all. However, developers now commit their changes regularly, at least at the end of every day. If a developer commits code changes that conflict with another developer’s work, the build server alerts the team via email the following morning. Nevertheless, the team still tends to use the build server for information purposes only; they feel little obligation to fix a broken build immediately, and builds may stay broken on the build server for some time.
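
At this stage a nightly build often amounts to little more than a scheduled script. The sketch below is a simplified illustration in Python; the build command, the email addresses and the SMTP host are placeholders. It compiles the project and mails the team only when the build breaks.

    import smtplib
    import subprocess
    from email.message import EmailMessage

    # Compile the project; at this phase there are no reliable automated tests yet.
    result = subprocess.run(["ant", "compile"], capture_output=True, text=True)

    if result.returncode != 0:
        msg = EmailMessage()
        msg["Subject"] = "Nightly build FAILED"
        msg["From"] = "build@example.com"        # placeholder addresses
        msg["To"] = "dev-team@example.com"
        msg.set_content(result.stdout + "\n" + result.stderr)
        with smtplib.SMTP("mail.example.com") as smtp:   # placeholder SMTP host
            smtp.send_message(msg)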

Phase 3—Nightly Builds and Basic Automated Tests

The team is now starting to take Continuous Integration and automated testing more seriously. The build server is configured to kick off a build whenever new code is committed to the version control system, and team members are able to easily see what changes in the source code triggered a particular build, and what issues these changes address. The build script now not only compiles the application but also runs a set of automated unit and/or integration tests. In addition to email, the build server alerts team members of integration issues using more proactive channels such as instant messaging. Broken builds are now generally fixed quickly.

Phase 4—Enter the Metrics

Automated code quality and code coverage metrics are now run to help evaluate the quality of the code base and (to some extent, at least) the relevance and effectiveness of the tests. The code quality build also automatically generates API documentation for the application. All this helps teams keep the quality of the code base high, alerting team members if good testing practices are slipping. The team has also set up a “build radiator,” a dashboard view of the project status that is displayed on a prominent screen visible to all team members.
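
On a Python project, for example, the same idea can be expressed with coverage.py: run the test suite under coverage measurement and fail the build when coverage drops below an agreed threshold. This is a sketch rather than a prescribed setup, and the 80% figure is an arbitrary example.

    import subprocess
    import sys

    # Run the unit test suite under coverage measurement.
    tests = subprocess.run(["coverage", "run", "-m", "pytest"])
    if tests.returncode != 0:
        sys.exit("Tests failed; the build is broken")

    # Fail the build if total coverage drops below the agreed threshold (80% here).
    report = subprocess.run(["coverage", "report", "--fail-under=80"])
    sys.exit(report.returncode)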

Phase 5—Getting More Serious About Testing

The benefits of Continuous Integration are closely related to solid testing practices. Now, practices like Test-Driven Development are more widely adopted, resulting in growing confidence in the results of the automated builds. The application is no longer simply compiled and tested, but if the tests pass, it is automatically deployed to an application server for more comprehensive end-to-end tests and performance tests.
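
The pipeline at this phase can be summarised as a simple gate: only if the fast tests pass is the application deployed and exercised end-to-end. A sketch in Python; the test directories and the deployment script are hypothetical names, not part of any particular tool.

    import subprocess
    import sys

    def step(*cmd):
        """Run one pipeline step and report whether it succeeded."""
        return subprocess.run(cmd).returncode == 0

    # 1. Fast unit/integration tests gate everything else.
    if not step("pytest", "tests/unit"):
        sys.exit("Unit tests failed; stopping the pipeline")

    # 2. Deploy the build to a test application server (hypothetical script).
    if not step("./deploy.sh", "test-environment"):
        sys.exit("Deployment to the test server failed")

    # 3. Slower end-to-end and performance tests run against the live deployment.
    if not step("pytest", "tests/end_to_end"):
        sys.exit("End-to-end tests failed")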

Phase 6—Automated Acceptance Tests and More Automated Deployment

Acceptance-Test Driven Development is practiced, guiding development efforts and providing high-level reporting on the state of the project. These automated tests use Behavior-Driven Development and Acceptance-Test Driven Development tools that act as communication and documentation channels as much as testing tools, publishing reports on test results in business terms that non-developers can understand. Since these high-level tests are automated at an early stage in the development process, they also provide a clear idea of what features have been implemented and which remain to be done. The application is automatically deployed into test environments for testing by the QA team, either as changes are committed or on a nightly basis; a version can be deployed (or “promoted”) to UAT and possibly production environments using a manually triggered build when testers consider it ready. The team is also capable of using the build server to back out a release, rolling back to a previous version, if something goes horribly wrong.

Phase 7—Continuous Deployment

Confidence in the automated unit, integration and acceptance tests is now such that teams can apply the automated deployment techniques developed in the previous phase to push out new changes directly into production.

The progression between levels here is of course somewhat approximate, and may not always match real-world situations. For example, you may well introduce automated web tests before integrating code quality and code coverage reporting. However, it should give a general idea of how implementing a Continuous Integration strategy in a real-world organization generally works.