
What's Inside a Docker Container

What is a Container

A container is a standard unit of software that packages an application together with its dependencies and runs it in an isolated environment. This is sometimes called operating-system-level virtualization: the host OS runs a specific, isolated task, which we call a container.

A container uses the kernel, CPU, and RAM of the host machine, because a container only carries the application layer of the OS; there is no custom or additional kernel inside a container. All containers share the host machine's OS kernel.
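
A quick way to see this for yourself (a minimal sketch, assuming Docker is installed and using the node:alpine image from the build example later in this post) is to compare the kernel release reported on the host with the one reported inside a container:

# uname -r
# docker run --rm node:alpine uname -r

Both commands print the same kernel release, because the container has no kernel of its own.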

Containers are designed to run as ephemeral processes, so any changes you make inside a container, including software updates and installed tools, live only in its writable layer and are lost once the container is removed.
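
For example (a sketch only; the container name "demo" is just an illustration), create a file inside a container, remove that container, and start a fresh one from the same image:

# docker run --name demo node:alpine touch /tmp/hello
# docker rm demo
# docker run --rm node:alpine ls /tmp/hello

The last command reports that /tmp/hello does not exist, because the file lived only in the removed container's writable layer.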

What is an Image

A container image is a read-only template, or blueprint, that contains a set of instructions for building a container.

An image is made up of a collection of files that bundle together all the essentials (such as installations, application code, and dependencies) required to configure a fully operational container environment.

An image contains the application code, libraries, tools, dependencies, and other files needed to make an application run. Each set of files that makes up an image is known as a layer; images have multiple layers, and each layer originates from the previous one but differs from it.
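
You can list those layers directly (a sketch using the node:alpine image from the build example later in this post); the output is a JSON array of sha256 digests, one per layer:

# docker pull node:alpine
# docker image inspect --format '{{json .RootFS.Layers}}' node:alpine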

What's Inside a Container

Containers do not contain an operating system; they contain all the necessary executables, binary code, libraries, and configuration files required to run the application deployed inside them.

Containers contain the same directory structure as a Linux system.

/
|- /bin     The /bin directory contains system commands and other executable programs.
|- /boot    The /boot directory contains the files needed to boot the system. In a container it is empty, because a container does not have any boot loader.
|- /dev     The /dev directory contains special, virtual files representing hardware components.
|- /etc     The /etc directory contains vital system configuration files.
|- /home    The /home directory stores each individual user's home directory.
|- /lib     The /lib directory contains libraries needed by the essential binaries in the /bin and /sbin folders.
|- /media   The /media directory is for removable media devices. It appears here because the directory structure is a replica of the base OS.
|- /mnt     The /mnt directory is used to mount storage devices in the system temporarily.
|- /opt     The /opt directory contains optional software packages to facilitate better compatibility of certain applications.
|- /proc    The /proc directory contains information about processes and kernel parameters.
|- /root    The /root directory is the home folder of the root user.
|- /run     The /run directory is a temporary filesystem that contains volatile runtime data generated since the system was booted.
|- /sbin    The /sbin directory is similar to /bin; the only difference is that it contains binaries that can only be run by root or a sudo user.
|- /srv     The /srv directory is the service directory.
|- /tmp     The /tmp directory is used by the system and its applications to store temporary files.
|- /usr     The /usr directory contains application files, libraries, programs, and system utilities.
|- /var     The /var directory is the storage space for system-generated variable files, including logs, caches, and spool files.
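
You can verify this layout yourself by listing the root directory of a throwaway container (a sketch using the node:alpine image from the build example below):

# docker run --rm node:alpine ls /

The listing shows the familiar top-level directories (bin, etc, home, usr, var, and so on) even though no full operating system is installed.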

Docker Images and Layers

A Docker image consists of a series of layers. That is because an image is read-only: if you want to add new instructions or applications, you have to add a new layer on top of the existing layers. Each layer represents an instruction in the Dockerfile, and each container is an image with a readable/writable layer on top of a stack of read-only layers.
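
You can observe that writable layer with the docker diff command, which lists the files a container has added (A), changed (C), or deleted (D) relative to its read-only image layers (a sketch only; the container name "demo" is just an illustration):

# docker run -d --name demo node:alpine sleep 300
# docker exec demo touch /tmp/newfile
# docker diff demo
# docker rm -f demo

Here docker diff reports /tmp/newfile as an added file; the underlying image layers remain untouched.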

These layers are also called intermediate images; they are generated as the commands in the Dockerfile are executed during the Docker image build.
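
With the classic builder that produces the build output shown below, those intermediate images can be listed after a build with the -a flag (a sketch; BuildKit-based builds manage their cache differently and may not show them this way):

# docker images -a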

Consider the example below of a basic Dockerfile that we can use to create a Node.js image:

FROM node:alpine

# Create app directory
WORKDIR /usr/src/app

# Install app dependencies
COPY package*.json ./
RUN npm install

# Bundle app source
COPY . .

EXPOSE 8080

# Command run within the container
CMD ["node", "app.js"]

When you run the docker build command, Docker first tries to find the base image in the local filesystem, and pulls it from the registry if it is not there. Docker then executes the instructions one at a time, in order.

# docker build -t myapp .
Sending build context to Docker daemon   5.12kB
Step 1/7 : FROM node:alpine
alpine: Pulling from library/node
f56be85fc22e: Pull complete
3f026796e5ad: Pull complete
08556a236d1c: Pull complete
06e990ca428a: Pull complete
Digest: sha256:6e56967f8a4032f084856bad4185088711d25b2c2c705af84f57a522c84d123b
Status: Downloaded newer image for node:alpine
 ---> 8e7579c71aa8
Step 2/7 : WORKDIR /usr/src/app
 ---> Running in 1d01e6465ffe
Removing intermediate container 1d01e6465ffe
 ---> 787263f998d3
Step 3/7 : COPY package*.json ./
 ---> 09892be22f81
Step 4/7 : RUN npm install
 ---> Running in b87813677232
Removing intermediate container b87813677232
 ---> e54749551f3f
Step 5/7 : COPY . .
 ---> 55d86212581f
Step 6/7 : EXPOSE 8080
 ---> Running in 81b8faeae075
Removing intermediate container 81b8faeae075
 ---> 7fe6bf266f7a
Step 7/7 : CMD ["node", "app.js"]
 ---> Running in dc414aa7b82e
Removing intermediate container dc414aa7b82e
 ---> d23afdb575fc
Successfully built d23afdb575fc
Successfully tagged myapp:latest

When Docker builds the image from the Dockerfile, each time it executes a new instruction it creates a new layer with the result of executing that instruction. It then adds that layer to the image, and it also keeps track of all the individual layers as a cache.
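
As a quick illustration of that cache (a sketch, not part of the build output above): re-running the same build without changing any files finishes almost instantly, because the classic builder reports each unchanged step as cached instead of executing it again.

# docker build -t myapp .
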
# docker images
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
myapp        latest    d23afdb575fc   59 seconds ago   186MB
node         alpine    8e7579c71aa8   6 days ago       180MB

Once the image is built, you can view all the layers that make up the image with the docker history command. The "IMAGE" column shows the intermediate image ID that correlates to each layer. The rows marked "<missing>" in the IMAGE column are the base layers of the alpine image referenced in the Dockerfile; this only means that those layers were built on a different system and are not available locally.

# docker history d23afdb575fc
IMAGE          CREATED         CREATED BY                                      SIZE      COMMENT
d23afdb575fc   4 minutes ago   /bin/sh -c #(nop) CMD ["node" "app.js"]         0B
7fe6bf266f7a   4 minutes ago   /bin/sh -c #(nop) EXPOSE 8080                   0B
55d86212581f   4 minutes ago   /bin/sh -c #(nop) COPY dir:7d3d561da2d4ca087…   699B
e54749551f3f   4 minutes ago   /bin/sh -c npm install                          6.31MB
09892be22f81   4 minutes ago   /bin/sh -c #(nop) COPY file:3a0b3e0c02585c8c…   265B
787263f998d3   4 minutes ago   /bin/sh -c #(nop) WORKDIR /usr/src/app          0B
8e7579c71aa8   6 days ago      /bin/sh -c #(nop) CMD ["node"]                  0B
<missing>      6 days ago      /bin/sh -c #(nop) ENTRYPOINT ["docker-entry…    0B
<missing>      6 days ago      /bin/sh -c #(nop) COPY file:4d192565a7220e13…   388B
<missing>      6 days ago      /bin/sh -c apk add --no-cache --virtual .bui…   7.77MB
<missing>      6 days ago      /bin/sh -c #(nop) ENV YARN_VERSION=1.22.19      0B
<missing>      6 days ago      /bin/sh -c addgroup -g 1000 node && addu…       165MB
<missing>      6 days ago      /bin/sh -c #(nop) ENV NODE_VERSION=20.1.0       0B
<missing>      6 weeks ago     /bin/sh -c #(nop) CMD ["/bin/sh"]               0B
<missing>      6 weeks ago     /bin/sh -c #(nop) ADD file:9a4f77dfaba7fd2aa…   7.05MB

The myapp image (d23afdb575fc) is built from the layers shown above.

Each layer is stored separately in the local storage area, which is usually /var/lib/docker/ on Linux hosts. Use the following command to see the storage location for the image.

# docker inspect d23afdb575fc
...
"GraphDriver": {
    "Data": {
        "LowerDir": "/var/lib/docker/overlay2/12e0adc7ae55f486c6eaa3818644cd115fbc06f7660f4c71db8e76a693554b4a/diff:/var/lib/docker/overlay2/96341dd0fa5b0d69e3bcaaf8bcf14414ccc3ad0c5412cb6d69180e9d7c525191/diff:/var/lib/docker/overlay2/998b44aac41b8560e983ecd35a8fd200fd375aa8e34fc7d43923029bff05ce16/diff:/var/lib/docker/overlay2/794fb75775a391fa2031bf21af875815f29d7a7a3ac9cec5fb8a0e472664b2fc/diff:/var/lib/docker/overlay2/971d50494e58dcc8a8f13e275312755f3ce1ba4604ba5eac1dbbfe5871a44df2/diff:/var/lib/docker/overlay2/80d06486a3fdd2533f6f7a20b59a027f6241407b47e652b95866288568b73dbe/diff:/var/lib/docker/overlay2/55089dbb833681721eca43b9e7dd13e78566fa81f1190966dbb1089a08bf47e5/diff",
        "MergedDir": "/var/lib/docker/overlay2/61e64aefa18562d66fe6662320ce5c7154a2e3822cf38fd690b356f8ffb4133b/merged",
        "UpperDir": "/var/lib/docker/overlay2/61e64aefa18562d66fe6662320ce5c7154a2e3822cf38fd690b356f8ffb4133b/diff",
        "WorkDir": "/var/lib/docker/overlay2/61e64aefa18562d66fe6662320ce5c7154a2e3822cf38fd690b356f8ffb4133b/work"
    },
    "Name": "overlay2"
...
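
Each of those diff directories holds only the files introduced by one layer. As a rough illustration (the path is taken from the inspect output above; the exact contents depend on your build), listing the UpperDir of this image should show just the files added by its topmost filesystem layer, here the application source copied in by the final COPY instruction:

# ls /var/lib/docker/overlay2/61e64aefa18562d66fe6662320ce5c7154a2e3822cf38fd690b356f8ffb4133b/diff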
