Docker drew the attention of developer communities to containers. With Docker you can create and run a new container within seconds. This containerized approach is not new: the idea of containers has been around since the early days of Unix with the chroot command, and FreeBSD jails and Solaris Zones address similar concerns. Because all applications in such containers rely on a common OS kernel, the approach works only for applications built for that same OS. Docker addressed the usability limitations of this technology by packaging each application together with its own userland in an image, all behind an integrated user interface. It provides a far greater level of simplicity: you don't have to be a Linux kernel expert to use Linux container-based technology with Docker.
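To illustrate the point about speed, here is a minimal console sketch, assuming a host with the Docker CLI installed and access to the public ubuntu image; exact timings will vary:

    # Pull and start an interactive container from the public ubuntu image.
    # After the one-time image download, starting it typically takes
    # seconds, because no operating system has to boot.
    docker run -it ubuntu /bin/bash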
Both hypervisor-based virtualization and containers enable isolation. With hypervisor-based virtualization, a software layer (the hypervisor) abstracts the underlying physical hardware of a server, allowing for the creation of virtual machines on which an operating system and applications can be installed. Unlike hypervisor-based virtual machines, containers do not emulate physical servers. Instead, all containerized applications share a common operating system kernel on a host. This eliminates the resources needed to run a separate operating system and can significantly reduce overhead. A containerized application can be deployed in a matter of seconds, using fewer resources than with hypervisor-based virtualization. Containers are also leaner than VMs: where a VM is measured in gigabytes and boots in minutes, a container is measured in megabytes and boots in seconds.
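As a rough illustration of that size and speed difference (the image name is a typical example; the sizes and timings are indicative, not guarantees):

    # A minimal Linux userland image is only a few megabytes
    docker pull alpine
    docker images alpine        # SIZE column shows roughly 5 MB

    # Time a full create-run-destroy cycle of a container
    time docker run --rm alpine echo "hello"   # typically well under a second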
When you develop an application on your own laptop, everything works, but it may not work in other environments. As a developer you use the stack and the language you like, with the versions of the libraries and tools you prefer. Unfortunately, as soon as you try to push that code to a server environment, it might not work, because that environment differs from your local machine. For instance, you might have used the newest version of a certain library, but then Ops tells you that you can't use this library in production because it would break all the other applications running on the server. So there can be a lot of back-and-forth communication between Ops and the developers before everything gets set up successfully. Docker supports a level of portability that allows a developer to write an application in any language and then easily move it from their laptop to a test or production server, regardless of the underlying Linux distribution. It's this portability that attracted the interest of developers and systems administrators alike. When you develop with Docker, you package everything inside containers that can talk to one another, which pushes you towards a microservices-based architecture. When you push a container to another environment, Ops doesn't have to care about what's inside the container or how it was developed. That helps accelerate the development cycle and lets you move containers around very easily. As a result, the developers are happy: they no longer have to care what's in the remote operating environment, because their application will work anywhere. And the Ops guys are happy too, because they don't have to care about what the crazy developer is using for their back end.
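As a minimal sketch of that packaging idea (the base image, file names, and application here are made up for illustration):

    # Dockerfile: the app ships with the exact library versions the
    # developer chose, independent of what is installed on the server
    FROM python:2.7
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt    # pinned versions travel with the app
    COPY . .
    CMD ["python", "app.py"]

Built once with docker build, the resulting image runs unchanged on any host with a Docker daemon.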
Docker Limitations

Container technology is simpler; however, it has limitations:

There is a risk of widespread workload disruption if the host hardware fails, since all containers on that host go down together. This risk is just as pronounced in hypervisor-based virtualization, where many VMs share the same physical server.
Because all containers share the host's kernel, a single kernel exploit could affect every container on the host.
Orchestration tools and advanced management features available for VMs are, so far, largely missing for containers. As a result, orchestration needs to be handled in the software application itself, which turns Docker into an intrusive technology. In a greenfield project this is still workable, because you can design your architecture with Docker in mind; for existing applications, however, introducing Docker is intrusive and requires changes to the application architecture. And because orchestration currently has to be handled programmatically against a non-standard, Docker-specific interface, moving from Docker to another container-based approach later will not be straightforward and will require code changes.
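As a hedged sketch of what such hand-rolled orchestration looks like in practice (the container and image names are invented, and the --link flag shown here was the common approach at the time):

    # start-app.sh: orchestration handled by hand in a shell script.
    # Start a database container, then an application container linked to it.
    docker run -d --name db postgres
    docker run -d --name web --link db:db my-web-app

    # Tearing down has to be scripted by hand as well
    docker stop web db && docker rm web db

Every one of these calls is specific to the Docker CLI, which is exactly the lock-in described above.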
Docker Hub is a central repository for publishing and downloading container images. Detailed studies have been performed on those images to understand how vulnerable they are to security threats; as of May 2015, more than 30% of the images were susceptible to security attacks. Although the study was performed on the public registry, we can expect private organizations to inherit the same vulnerabilities. Rigorous operations management practices with real-time monitoring are required to ensure security.
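One basic mitigation, sketched here, is to pin deployments to immutable image digests rather than mutable tags (the digest below is a placeholder, not a real value):

    # Show the content digest of each local image
    docker images --digests

    # Pull by digest so the image cannot be silently swapped under a tag
    docker pull debian@sha256:<digest>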
Docker restricts a container to a single process (the last command of your Dockerfile, i.e. its CMD or ENTRYPOINT). That means the container runs without an init system, in contrast to the way applications and services are designed to operate in a normal multi-purpose OS environment. To run multiple processes you need a shell script or a separate process manager, but that is considered an anti-pattern. The result of this design decision is a proliferation of Docker containers, which in turn forces the adoption of a microservices-based architecture.
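A minimal sketch of the one-process model, using the public nginx image as an assumed example:

    # Dockerfile: the container's single foreground process becomes PID 1
    FROM nginx:1.9
    # nginx must run in the foreground; if it daemonized, the container
    # would consider its only process finished and exit immediately
    CMD ["nginx", "-g", "daemon off;"]

A second process, say a log shipper, would need its own container rather than a process manager inside this one.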
Container technologies enable very interesting architectural solutions, and they will continue to pose challenges at the same time. We are at a point where it is quite simple to try them out, especially for development purposes. Thanks to the fact that several cloud providers have adopted them or are developing their own versions, a lot of how-tos have been written on setting up containers locally. Do you have your own container story to share with us?
Squads is a community of distributed development teams of freelance developers. We specialize in startups and lean innovation. Get in touch if you want to know more about us and how we work, or if you need any help with Continuous Delivery (CD), DevOps, or our open-source CD toolkit Prudentia.