Tuesday, 22 May 2018

How to Install and Use Docker on Ubuntu



In the last few years, server virtualisation has become very popular; we cannot imagine cloud computing without it. But the explosive growth in computing demands more efficient virtualisation solutions, and this is where containers come into play. Docker is one of the most popular container solutions available today. In this article, we will get hands-on with Docker.
Containers are lightweight virtualisation solutions. They provide OS-level virtualisation without any special support from the underlying hardware. Namespaces and control groups in the GNU/Linux kernel form the backbone of containers, and container solutions are built on top of these features. Namespaces provide isolation for processes, the network, mount points and so on, while control groups limit access to the available hardware resources.
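To get a feel for these kernel features themselves, the unshare and lsns utilities from util-linux can be used (this is only an illustration of the mechanism, not Docker). The first command below starts a shell in a new PID namespace, inside which ps shows only the processes of that namespace; lsns, run on the host, lists the namespaces the kernel is tracking:
# unshare --fork --pid --mount-proc sh
# lsns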
Sometimes, newbies get confused and think that server virtualisation and containerisation are the same thing. In fact, they are significantly different from each other. In server virtualisation, the OS is not shared; each VM instance has its own OS, whereas containers share the underlying OS. Each approach has advantages as well as disadvantages: a VM provides better isolation and security but compromises on performance, whereas a container compromises on isolation but delivers performance that is close to that of bare hardware.
Containers have been around for quite a long time; their roots can be found in UNIX’s chroot program. Since then, many UNIX flavours have implemented their own container variants, such as BSD jails and Solaris Zones. On the GNU/Linux platform, LXD, OpenVZ and LXC are alternatives to Docker. However, Docker is much more mature and provides many advanced features, a few of which we will discuss in the later sections of this article.
Setting up the environment
In this section, let’s discuss how to install Docker on an Ubuntu distribution, a task that is as simple as installing other software on GNU/Linux. To install Docker and its components, execute the following commands in a terminal:
sudo apt-get update
 
sudo apt-get install docker docker.io docker-compose
That’s it! The installation can be done by executing just two commands.
Now, let us verify the installation by printing the Docker version. If everything is fine, then it should display the installed Docker version. In my case, it was 1.13.1, as shown below:
$ docker --version
 
Docker version 1.13.1, build 092cba3
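Note that the Docker daemon runs as root, so the docker commands shown in the rest of this article use a root prompt (#). If you prefer to run them as a normal user, you can optionally add your user to the docker group and log in again (a common convenience step, not strictly required):
$ sudo usermod -aG docker $USER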
Now that we are done with the installation, let us briefly discuss a few important Docker components.
Docker Engine: This is the core of Docker. It runs as a daemon process and serves requests made by the client. It is responsible for creating and managing containers.
Docker Hub: This is the online public repository where Docker images are published. You can download images from it, as well as upload your own custom images.
Docker Compose: This is one of the most useful components of Docker. It allows us to define the configuration in a YAML file; once the configuration is defined, we can use it to perform deployments in an automated and repeatable manner.
We will come back to these components in the later sections of this tutorial.
Getting hands-on with Docker
Now, that’s enough of theory! Let’s get started with the practical aspects. In this section, we’ll learn about containers by creating them and performing various operations on them, like starting, stopping, listing and finally destroying them.
Creating a Docker container: A container is a running instance of a Docker image. Wait, but what is a Docker image? It is a bundled package that contains the application and its runtime. To create a ‘busybox’ container, execute the following command in a terminal:
# docker run busybox

Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
d070b8ef96fc: Pull complete
Digest: sha256:2107a35b58593c58ec5f4e8f2c4a70d195321078aebfadfbfb223a2ff4a4ed21
Status: Downloaded newer image for busybox:latest
Let us understand what happens behind the scenes. In the above example, we are creating a ‘busybox’ container. First, Docker searches for the image locally; if it is present, it is used, otherwise the image is pulled and the container is created from it. And where does it pull the image from? From Docker Hub, of course.
Listing Docker containers: To list all Docker containers, we can use the ps command, as follows:
# docker ps -a
The -a switch lists all containers, including running ones as well as those that have exited. This command shows various important attributes of each container, like its ID, image name, creation time, running status and so on.
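The listing can also be narrowed down. For instance, the following illustrative invocations show only exited containers, or print just the container IDs (handy in scripts):
# docker ps -a -f status=exited
# docker ps -aq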
Running a Docker container: Now, let us run the ‘busybox’ container. We can use Docker’s ‘run’ command to do this.
# docker run busybox
As expected, this time the Docker image is not downloaded; instead, the local image is reused.
Docker detached mode: By default, a Docker container runs in the foreground. This is useful for debugging purposes but sometimes it is inconvenient. Docker provides a detached mode, in which we can run the container as follows:
# docker run -d busybox
 
240eb2570c9def655bcb94c489435137057729c4bad0e61034f5f9c6fb0f8428
In the above command, the -d switch indicates detached mode. This command prints the container ID on stdout for further use.
Attaching to a running container: Once the container is started in the detached mode, we can attach to it by using the attach command. We have to provide the container ID as an argument to this command. For instance, the command below attaches to a running container.
# docker attach 240eb2570c9def655bcb94c489435137057729c4bad0e61034f5f9c6fb0f8428
Note: We can obtain the container ID by using the docker ps -a command.
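In scripts, it is convenient to capture this ID directly from docker run. A small sketch (the sh argument and the -it flags just keep busybox running so that there is something to attach to):
# CID=$(docker run -dit busybox sh)
# docker attach $CID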
Accessing containers: If you observe carefully, the docker run command starts and stops the container immediately. This is not at all useful. We can go inside the container environment using the following command:
# docker run -it busybox sh
In the above command, we have used the -it option and sh as an additional argument. The -i flag keeps STDIN open and -t allocates a pseudo-terminal, which gives us an interactive shell inside the container. This behaves like a normal terminal, where you can execute all the supported commands. To exit from it, type ‘exit’ or press Ctrl+D.
Display information about a container: By using the inspect command, we can obtain useful information about a container, like its ID, running state, creation date, resource consumption, networking information and much more. To inspect a container, execute the following command:
# docker inspect <container-ID>
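If only a specific field is required, inspect also accepts a Go template through its --format option. The field paths below are just examples:
# docker inspect --format '{{.State.Status}}' <container-ID>
# docker inspect --format '{{.NetworkSettings.IPAddress}}' <container-ID>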
Destroying a container: Once we are done with a container, we should clear it off the system; otherwise, it will keep consuming disk space. We can destroy containers using the rm command, as follows:
# docker rm <container-ID-1> <container-ID-2> ... <container-ID-N>
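A handy pattern (a sketch using command substitution) is to remove all exited containers in one go:
# docker rm $(docker ps -aq -f status=exited)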
Working with Docker images
A Docker image is a blueprint for the container. It contains the application and its runtime. We can pull images from a remote repository and spawn containers using them. In this section, we will discuss various operations related to it.
To browse Docker images, visit the official repository located at https://hub.docker.com. It hosts many images and provides detailed information about them, like their description, supported tags, Dockerfiles and much more.
Listing images: To list all downloaded images, use the following command:
# docker images
Pull image: As the name suggests, this command downloads an image from a remote repository and stores it on the local disk. To download an image, we have to provide its name as an argument. For instance, the command given below pulls the busybox image:
# docker pull busybox

Using default tag: latest
latest: Pulling from library/busybox
d070b8ef96fc: Pull complete
Digest: sha256:2107a35b58593c58ec5f4e8f2c4a70d195321078aebfadfbfb223a2ff4a4ed21
Status: Downloaded newer image for busybox:latest
Using tags: If we don’t provide any additional option, then the pull command downloads an image tagged with the latest tag. We can see it in the previous command’s output, where it has printed Using default tag: latest. To pull an image with a specific tag, we can provide the tag name with the pull command. For instance, the command given below pulls an image with the tag 1.28.1-uclibc:
# docker pull busybox:1.28.1-uclibc
1.28.1-uclibc: Pulling from library/busybox
Digest: sha256:2c3a381fd538dd732f20d824f87fac1e300a9ef56eb4006816fa0cd992e85ce5
Status: Downloaded newer image for busybox:1.28.1-uclibc
We can get image tags from the Docker Hub located at https://hub.docker.com.
Getting the history of an image: Using the history command, we can see how an image was built, layer by layer, including each layer’s ID, creation date, size and the command that created it. For instance, the following command shows the history of the ‘busybox’ image:
# docker history busybox
Deleting an image: Like containers, we can also delete Docker images. Docker provides the ‘rmi’ command for this purpose. In this command, ‘i’ stands for image. For instance, to delete a ‘busybox’ image, execute the following command:
# docker rmi f6e427c148a7
Note: We have to provide an image ID to it, which we can obtain using the docker images command.
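On Docker 1.13 and later, unused images can also be cleaned up in bulk; for example, either of the following removes dangling (untagged) images:
# docker image prune
# docker rmi $(docker images -q -f dangling=true)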
Advanced Docker topics
So far we have explored only the basics of Docker. This can be a good start for beginners. However, the discussion does not end here. Docker is a feature-rich application. So let’s now briefly discuss some advanced Docker concepts.
Docker Compose: Docker Compose can be used to deploy and configure an entire software stack in an automated way, rather than running docker run and then configuring everything manually. We define the configuration in a YAML file and use it to perform the deployment. Shown below is a simple example of such a configuration:
version: '2.0'

services:
  database:
    image: "mysql"
  web:
    image: "jarvis/acme-web-app"
In the above ‘docker-compose.yaml’ file, we have defined the configuration under the ‘services’ dictionary. We have also provided the images that should be used for deployment.
To deploy the above configuration, execute the following command in a terminal:
# docker-compose up
To stop and destroy a deployed configuration, execute the following commands in the terminal:
# docker-compose stop

# docker-compose down
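Like docker run, docker-compose up stays attached to the terminal by default; passing -d starts the whole stack in the background:
# docker-compose up -d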
Mapping ports: Like any other application, we can run Web applications inside a container. But the challenge is how to allow access to outside users. For this purpose, we can provide port mapping using the ‘-p’ option, as follows:
# docker run -p 80:5000 jarvis/acme-web-app
In the above example, port 80 on the host machine is mapped to port 5000 inside the ‘acme-web-app’ container. Users can now access this Web application on port 80 of the host machine’s IP address.
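To double-check the mapping, the docker port command prints the published ports of a container; with the mapping above it should report something like 5000/tcp -> 0.0.0.0:80:
# docker port <container-ID>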
Mapping storage: The application’s data is stored inside a container and, hence, when we destroy the container, data is also deleted. To avoid this, we can map volumes from the container to a local directory on a host machine. We can achieve this by using the -v option, as follows:
# docker run -v /opt/mysql-data:/var/lib/mysql mysql
In the above example, we have mapped the local directory /opt/mysql-data on the host to the container’s /var/lib/mysql directory. Because of this, the data persists even when the container is destroyed.
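Alternatively, Docker can manage the storage itself through a named volume instead of a host directory; a minimal sketch (mysql-data is an arbitrary volume name):
# docker volume create mysql-data
# docker run -v mysql-data:/var/lib/mysql mysql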
Mapping ports and volumes with Docker Compose: To map ports and volumes as part of a Docker Compose deployment, add ports and volumes attributes to the ‘docker-compose.yaml’ file. After doing this, our modified file looks like the following:
version: '2.0'

services:
  database:
    image: "mysql"
    volumes:
      - "/opt/mysql-data:/var/lib/mysql"
  web:
    image: "jarvis/acme-web-app"
    ports:
      - "80:5000"
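Before deploying, the file can be sanity-checked, and the running services can be listed afterwards:
# docker-compose config
# docker-compose ps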
Docker cluster: So far, we have worked with a single Docker host. This is a bare-minimum setup and is good enough for development and testing purposes. However, it is not enough for production, because if the Docker host goes down, the entire application goes offline. To overcome this single point of failure, we can provide high availability for containers by using a Swarm cluster.
A Swarm cluster is made up of multiple Docker hosts. In this cluster, we designate one of the nodes as the master (manager) and the remaining nodes as workers. The master is responsible for load distribution and for providing high availability within the cluster, whereas the workers host the Docker containers, coordinating with the master.
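As a rough sketch of Docker’s built-in swarm mode (one of several ways to build such a cluster; the manager IP and the join token are placeholders that the init command prints for you):
# docker swarm init --advertise-addr <manager-IP>
# docker swarm join --token <worker-token> <manager-IP>:2377
# docker node ls
# docker service create --name web --replicas 3 -p 80:5000 jarvis/acme-web-app
The first command runs on the designated manager, the second on each worker, and the last two back on the manager to verify the nodes and start a replicated service.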
In this article, we have discussed the basics of Docker and touched upon a few advanced concepts. It is a good starting point for absolute beginners; once you build a strong foundation in Docker, you can delve deeper into the individual topics that interest you. By: efy.in

Thursday, 3 September 2015

How to update Ubuntu server from 9.04 to 10.04

Step 1: Install update-manager-core
$ sudo apt-get update
$ sudo apt-get upgrade
$ sudo apt-get install update-manager-core

Step 2: Update Jaunty 9.04 to Karmic 9.10

Edit /etc/apt/sources.list to replace "jaunty" with "karmic" (a sed one-liner for this is sketched after this step)
$ sudo apt-get update
$ sudo do-release-upgrade
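One way to do that replacement in a single command (a sketch, assuming the stock sources.list layout):
$ sudo sed -i 's/jaunty/karmic/g' /etc/apt/sources.list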

Step 3: Update Karmic 9.10 to Lucid 10.04 

Edit /etc/apt/sources.list to replace "karmic" with "lucid" (the same sed approach works here)
$ sudo apt-get update
$ sudo do-release-upgrade
To check your server version:
$ lsb_release -a