What is a container? Containers are a form of operating system virtualization: conceptually similar to the virtual machines that preceded them, but virtualizing at the operating system level rather than at the hardware level. A container is a standard unit of software that packages up code and all its dependencies so that the application runs quickly and reliably from one computing environment to another. While virtualization concepts date back to the 1960s, Docker's encapsulation of the container paradigm represents a modern implementation of resource isolation, one that uses built-in Linux kernel features such as chroot, control groups (cgroups), UnionFS, and namespaces to provide fully isolated resource control at the process level. Containers use these technologies to create lightweight images that act as standalone, fully encapsulated pieces of software carrying everything they need inside the box: application binaries, any system tools or libraries, environment-based configuration, and a runtime. This special property of isolation is what allows a containerized application to run reliably across many different computing environments.
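To make one of these kernel features concrete, here is a minimal sketch in Go (the language Docker itself is written in) that starts a shell inside new UTS, PID, and mount namespaces. This is an illustrative toy under stated assumptions, not Docker's actual implementation: it assumes a Linux host and root privileges, and it omits the cgroups and union filesystem layers a real container runtime would add on top.

```go
// namespaces_demo.go — a toy illustration of Linux namespaces, one of the
// kernel features described above. Assumes a Linux host; run as root:
//   go run namespaces_demo.go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"syscall"
)

func main() {
	// Launch a shell whose process tree lives in fresh UTS (hostname),
	// PID, and mount namespaces, isolated from the host's.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
}
```

Inside the spawned shell, running `hostname container-demo` (an arbitrary example name) changes the hostname only within the new UTS namespace; the host's own hostname is unaffected. A full runtime such as Docker builds on this same primitive, adding cgroups for resource limits and a union filesystem for image layers.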
A brief overview of containers: Believe it or not, containers and their precursors have been around for decades in the Linux and Unix operating systems. If you look deeper into the fundamentals of how containers operate, you can see their roots in the chroot system call, which was introduced all the way back in 1979. Since the early 2000s, FreeBSD, Linux, Solaris, OpenVZ, Warden, and finally, Docker have all made significant attempts at encapsulating containerization technology for the end user. While the Linux-VServer project's first release (running several general-purpose Linux servers on a single box with a high degree of independence and security; see https://ieeexplore.ieee.org/document/1430092?reload=true ) may have been one of the most interesting junctures in container history, it's clear that Docker set the container ecosystem on fire in late 2013, when it went all in on containers and decided to rebrand from dotCloud to Docker.