diff --git a/.markdownlint.json b/.markdownlint.json
new file mode 100644
index 0000000..753ebc3
--- /dev/null
+++ b/.markdownlint.json
@@ -0,0 +1,14 @@
+{
+  "default": true,
+  "line-length": {
+    "line_length": 100,
+    "code_blocks": false,
+    "tables": false
+  },
+  "no-inline-html": {
+    "allowed_elements": [
+      "details",
+      "summary"
+    ]
+  }
+}
diff --git a/CHEATSHEET.md b/CHEATSHEET.md
index c848e74..0aa71fd 100644
--- a/CHEATSHEET.md
+++ b/CHEATSHEET.md
@@ -1,17 +1,21 @@
-Source: https://docs.docker.com/get-started/docker_cheatsheet.pdf
+# Source: <https://docs.docker.com/get-started/docker_cheatsheet.pdf>

## General

-- [Docker cli](https://docs.docker.com/engine/reference/commandline/cli/) - **Docker CLI** is the command line interface for Docker
-- [Docker Desktop](https://docs.docker.com/desktop) - **Docker Desktop** is available for Mac, Linux, and Windows
+- [Docker CLI](https://docs.docker.com/engine/reference/commandline/cli/) - **Docker CLI** is the
+  command line interface for Docker
+- [Docker Desktop](https://docs.docker.com/desktop) - **Docker Desktop** is available for Mac,
+  Linux, and Windows
- `docker --help` - Get help with Docker.
You can use `--help` on all subcommands
- `docker info` - Display system-wide information
- [Docker Docs](https://docs.docker.com) - Check out our docs for information on using Docker

## Containers

-- `docker run --name <container_name> <image_name>` - Create and run a container from an image, with a custom name
-- `docker run -p <host_port>:<container_port> <image_name>` - Run a container and publish a container’s port(s) to the host
+- `docker run --name <container_name> <image_name>` - Create and run a container from an image,
+  with a custom name
+- `docker run -p <host_port>:<container_port> <image_name>` - Run a container and publish a
+  container’s port(s) to the host
- `docker run -d <image_name>` - Run a container in the background
- `docker start|stop <container_name> (or <container_id>)` - Start or stop an existing container
- `docker rm <container_name>` - Remove a stopped container
@@ -30,14 +34,12 @@
- `docker rmi <image_name>` - Delete an Image
- `docker image prune` - Remove all unused images
-
## Docker Registries

The default registry is [Docker Hub](https://hub.docker.com), but you can add more registries.
-
- `docker login -u <your-user-name>` - Log in to Docker
- `docker push <username>/<image_name>` - Publish an image to Docker Hub
- `docker search <image_name>` - Search Hub for an image
- `docker pull <image_name>` - Pull an image from Docker Hub
-- `docker tag <image_name>:<tag> <username>/<image_name>:<tag>` - Tag an image for a registry
\ No newline at end of file
+- `docker tag <image_name>:<tag> <username>/<image_name>:<tag>` - Tag an image for a registry
diff --git a/README.md b/README.md
index 3a5d02f..47d3357 100644
--- a/README.md
+++ b/README.md
@@ -6,18 +6,17 @@ This workshop will take you from "Hello Docker" to deploying a containerized web
It's going to be a lot of fun!

-
## Prerequisites

You need to have access to a machine with docker installed. There are many ways of getting that:
-* Click the link above to get a Cloud shell from Google (require login)
-* Docker installed on a linux box.
+
+* Click the link above to get a Cloud shell from Google (requires login).
+* Docker installed on a Linux box.
* Docker desktop installed on a Mac or Windows machine.
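Whichever route you choose, it can help to sanity-check the installation before starting the labs. A quick sketch (exact version strings will differ on your machine, and the last command needs network access to pull the image):

```shell
# Confirm the Docker client is installed and can reach a running daemon.
docker --version
docker info

# One-shot end-to-end check: pulls and runs a tiny test image.
docker run --rm hello-world
```

If `docker info` fails with a permission or connection error, the daemon is either not running or your user is not allowed to talk to it.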
## Philosophy

-
There are a few things you should know about this tutorial before we begin.

This tutorial is designed to be **self-paced** to make the most of your time.
@@ -26,13 +25,12 @@
The exercises won't always tell you exactly what you need to do.

Instead, it will point you to the right resources (like documentation and blog
posts) to find the answer.

-Ready to begin?
----------------
+## Ready to begin?
+
+---

Head over to [the first lab](labs/00-getting-started.md) to begin.

## Cheat sheet

For a quick reference of the most common docker commands, see [CHEATSHEET.md](CHEATSHEET.md).
-
-
diff --git a/labs/00-getting-started.md b/labs/00-getting-started.md
index f4330fc..7c64bff 100644
--- a/labs/00-getting-started.md
+++ b/labs/00-getting-started.md
@@ -4,14 +4,21 @@
In this section you will install docker.

## Terminology

-Throughout these labs, you will see a lot of Docker-specific jargon which might be confusing to some. So before you go further, let's clarify some terminology that is used frequently in the Docker ecosystem.
+Throughout these labs, you will see a lot of Docker-specific jargon which might be confusing to some.
+So before you go further, let's clarify some terminology that is used frequently in the Docker ecosystem.

*Docker Container*: An isolated, runnable environment that holds everything needed to run an application.

-*Docker Image*: A lightweight, standalone package that contains all necessary code, libraries, and dependencies to run an application.
+*Docker Image*: A lightweight, standalone package that contains all necessary code, libraries, and
+dependencies to run an application.
+
+- *Docker daemon* - The background service running on the host that manages building, running and
+  distributing Docker containers.
- *Docker client* - The command line tool that allows the user to interact with the Docker daemon. -- *Docker Hub* - A [docker registry](https://hub.docker.com/explore/) of Docker images. You can think of the registry as a directory of all available Docker images. You'll be using this later in this tutorial. +- *Docker Hub* - A [docker registry](https://hub.docker.com/explore/) of Docker images. You can + think of the registry as a directory of all available Docker images. You'll be using this later + in this tutorial. ## Installing Docker -Depending on what OS you are running, installation is different, but head over to the [Get started](https://www.docker.com/get-started/) website and follow the instructions there. +Depending on what OS you are running, installation is different, but head over to the +[Get started](https://www.docker.com/get-started/) website and follow the instructions there. diff --git a/labs/01-hello-world.md b/labs/01-hello-world.md index 35f271f..bf489b4 100644 --- a/labs/01-hello-world.md +++ b/labs/01-hello-world.md @@ -7,19 +7,20 @@ The goal of this scenario is to make you run your first Docker container. ## Terminology *Docker Container*: An isolated, runnable environment that holds everything needed to run an application. -*Docker Image*: A lightweight, standalone package that contains all necessary code, libraries, and dependencies to run an application. +*Docker Image*: A lightweight, standalone package that contains all necessary code, libraries, and +dependencies to run an application. ## Exercise Try running a command with Docker: -``` +```bash docker run hello-world ``` Your terminal output should look like this: -``` +```bash Unable to find image 'hello-world:latest' locally latest: Pulling from library/hello-world 78445dd45222: Pull complete @@ -49,8 +50,9 @@ For more examples and ideas, visit: This message shows that your installation appears to be working correctly. 
-_*Q: So what did this do?*_ +**Q: So what did this do?** Try to run `docker run hello-world` again. -Docker has now already downloaded the image locally, and can therefore execute the container straight away. +Docker has now already downloaded the image locally, and can therefore execute the container +straight away. diff --git a/labs/02-running-images.md b/labs/02-running-images.md index a7a77e8..2272126 100644 --- a/labs/02-running-images.md +++ b/labs/02-running-images.md @@ -2,12 +2,13 @@ ## Learning Goals - -- Run an [Alpine Linux](http://www.alpinelinux.org/) container (a lightweight linux distribution) on your system and get a taste of the `docker run` command. +- Run an [Alpine Linux](http://www.alpinelinux.org/) container (a lightweight linux distribution) on + your system and get a taste of the `docker run` command. ## Introduction -To get started with running your first container from an image, you'll first pull the Alpine Linux image, a lightweight Linux distribution, and then explore various commands to interact with it. +To get started with running your first container from an image, you'll first pull the Alpine Linux +image, a lightweight Linux distribution, and then explore various commands to interact with it. ## Exercise @@ -21,14 +22,14 @@ To get started with running your first container from an image, you'll first pul ### Step by step instructions - To get started, let's run the following in our terminal: -* `docker pull alpine` +- `docker pull alpine` -The `pull` command fetches the alpine **image** from the **Docker registry** and saves it in your system. You can use the `docker image ls` command to see a list of all images on your system. +The `pull` command fetches the alpine **image** from the **Docker registry** and saves it in your +system. You can use the `docker image ls` command to see a list of all images on your system. 
-* `docker image ls` +- `docker image ls` Expected output (your list of images will look different): @@ -42,7 +43,7 @@ hello-world latest 690ed74de00f 5 months ago Let's run a Docker **container** based on this image. -* `docker run alpine ls -l` +- `docker run alpine ls -l` Expected output: @@ -57,11 +58,12 @@ drwxr-xr-x 5 root root 4096 Mar 2 16:20 lib ...... ``` -When you run `docker run alpine`, you provided a command (`ls -l`), so Docker started the command specified and you saw the listing. +When you run `docker run alpine`, you provided a command (`ls -l`), so Docker started the command +specified and you saw the listing. Try run the following: -* `docker run alpine echo "hello from alpine"` +- `docker run alpine echo "hello from alpine"` Expected output: @@ -71,47 +73,53 @@ hello from alpine
More Details -In this case, the Docker client ran the `echo` command in our alpine container and then exited it. If you've noticed, all of that happened pretty quickly. Imagine booting up a virtual machine, running a command and then killing it. Now you know why they say containers are fast! +In this case, the Docker client ran the `echo` command in our alpine container and then exited it. +If you've noticed, all of that happened pretty quickly. Imagine booting up a virtual machine, +running a command and then killing it. Now you know why they say containers are fast!
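If you want to see this speed for yourself, one rough, environment-dependent way is to time the whole create, run, and destroy cycle once the image is cached locally (a sketch; assumes the `alpine` image has already been pulled):

```shell
# Times the full container lifecycle: create, start, run `echo`,
# exit, and remove (`--rm`). On most machines this takes well
# under a second once the image is in the local cache.
time docker run --rm alpine echo "hello from alpine"
```

Compare that with the minutes a full virtual machine would need to boot for the same one-line command.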
Try another command: -* `docker run alpine /bin/sh` +- `docker run alpine /bin/sh` -Wait, nothing happened! Is that a bug? +Wait, nothing happened! Is that a bug? -Well, no. +Well, no. -These interactive shells will exit after running any scripted commands, unless they are run in an interactive terminal - so for this example to not exit, you need to add the parameters `i` and `t`. +These interactive shells will exit after running any scripted commands, unless they are run in +an interactive terminal - so for this example to not exit, you need to add the parameters `i` and `t`. -> :bulb: The flags `-it` are short for `-i -t` which again are the short forms of `--interactive` (Keep STDIN open) and `--tty` (Allocate a terminal). +> :bulb: The flags `-it` are short for `-i -t` which again are the short forms of `--interactive` +> (Keep STDIN open) and `--tty` (Allocate a terminal). -* `docker run -it alpine /bin/sh` +- `docker run -it alpine /bin/sh` -You are inside the container shell and you can try out a few commands like `ls -l`, `uname -a` and others. +You are inside the container shell and you can try out a few commands like `ls -l`, `uname -a` and others. -* Exit out of the container by giving the `exit` command. +- Exit out of the container by giving the `exit` command. -Ok, now it's time to list our containers. +Ok, now it's time to list our containers. The `docker ps` command shows you all containers that are currently running. -* `docker ps` +- `docker ps` Expected output: -``` +```bash CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ``` -Notice that you have no running containers. When you wrote `exit` in the shell, the primary process (`/bin/sh`) stopped. No containers are running, you see a blank line. Let's try a more useful variant, listing all containers, both stopped and started. +Notice that you have no running containers. When you wrote `exit` in the shell, the primary +process (`/bin/sh`) stopped. 
No containers are running, you see a blank line. Let's try a more +useful variant, listing all containers, both stopped and started. -* `docker ps -a` +- `docker ps -a` Expected output: -``` +```bash CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 36171a5da744 alpine "/bin/sh" 5 minutes ago Exited (0) 2 minutes ago fervent_newton a6a9d46d0b2f alpine "echo 'hello from alp" 6 minutes ago Exited (0) 6 minutes ago lonely_kilby @@ -119,17 +127,18 @@ ff0a5c3750b9 alpine "ls -l" 8 minutes ago c317d0a9e3d2 hello-world "/hello" 34 seconds ago Exited (0) 12 minutes ago stupefied_mcclintock ``` -What you see above is a list of all containers that you ran. Notice that the `STATUS` column shows that these containers exited a few minutes ago. +What you see above is a list of all containers that you ran. Notice that the `STATUS` column shows +that these containers exited a few minutes ago. ## Naming your container Take a look again at the output of the `docker ps -a`: -* `docker ps -a` +- `docker ps -a` Expected output: -``` +```bash CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 36171a5da744 alpine "/bin/sh" 5 minutes ago Exited (0) 2 minutes ago fervent_newton a6a9d46d0b2f alpine "echo 'hello from alp" 6 minutes ago Exited (0) 6 minutes ago lonely_kilby @@ -137,12 +146,16 @@ ff0a5c3750b9 alpine "ls -l" 8 minutes ago c317d0a9e3d2 hello-world "/hello" 34 seconds ago Exited (0) 12 minutes ago stupefied_mcclintock ``` -All containers have an **ID** and a **name**. +All containers have an **ID** and a **name**. Both the ID and name is generated every time a new container spins up with a random seed for uniqueness. -> :bulb: Tip: If you want to assign a specific name to a container then you can use the `--name` option. That can make it easier for you to reference the container going forward. +> :bulb: Tip: If you want to assign a specific name to a container then you can use the `--name` +> option. That can make it easier for you to reference the container going forward. 
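The `--name` tip above can be sketched end to end like this (assumes a local daemon with the `alpine` image available):

```shell
# Give the container a fixed name instead of a generated one.
docker run --name my-alpine alpine echo "named container"

# The NAMES column now shows "my-alpine" rather than a random name.
docker ps -a --filter name=my-alpine

# Clean up by name, without ever copying a CONTAINER ID.
docker rm my-alpine
```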
## Summary -That concludes a whirlwind tour of the `docker run` command which would most likely be the command you'll use most often. It makes sense to spend some time getting comfortable with it. To find out more about `run`, use `docker run --help` to see a list of all flags it supports. As you proceed further, we'll see a few more variants of `docker run`. +That concludes a whirlwind tour of the `docker run` command which would most likely be the +command you'll use most often. It makes sense to spend some time getting comfortable with it. To +find out more about `run`, use `docker run --help` to see a list of all flags it supports. +As you proceed further, we'll see a few more variants of `docker run`. diff --git a/labs/03-deletion.md b/labs/03-deletion.md index c78ee82..e623681 100644 --- a/labs/03-deletion.md +++ b/labs/03-deletion.md @@ -1,6 +1,7 @@ # Throw your container away -As containers are just a thin base layer on top of the host kernel, it is really fast to spin up a new instance if you crashed your old one. +As containers are just a thin base layer on top of the host kernel, it is really fast to spin up a +new instance if you crashed your old one. Let's try to run an alpine container and delete the file system. @@ -8,44 +9,47 @@ Spin up the container with `docker run -ti alpine` and then list all the folders on the root level to see the whole distribution: -``` +``` bash ls / ``` Expected output: -``` +``` bash bin etc lib mnt root sbin sys usr dev home media proc run srv tmp var ``` List the current user: + ``` bash whoami ``` + Expected output: -``` +``` bash root ``` List the current date: + ``` bash date ``` Expected output: +``` bash +Wed Aug 20 02:10:47 PM CEST 2025 ``` -Wed Nov -``` - -> **Warning:** Make sure that you are inside your container. most likely you can see that by your command promt showing `/ #` instead of `ubuntu@inst1:~/docker-katas/labs$` +> **Warning:** Make sure that you are inside your container. 
Most likely you can see that by your
+> command prompt showing `/ #` instead of `ubuntu@inst1:~/docker-katas/labs$`

Now, delete the binaries that the system is built up of with:

-```
+``` bash
rm -rf /bin
```

@@ -54,19 +58,19 @@
They should all echo back that the binary is not found.

Exit out by pressing `Ctrl+d` and create a new instance of the Alpine image and look a bit around:

-```
+``` bash
docker run -it alpine
```

In the container run:

-```
+``` bash
ls /
```

Expected output:

-```
+``` bash
bin    etc    lib    mnt    root   sbin   sys    usr
dev    home   media  proc   run    srv    tmp    var
```

@@ -83,9 +87,11 @@ CONTAINER ID        IMAGE               COMMAND             CREATED
4b09b2fe1d8c        alpine              "/bin/sh"           7 seconds ago       Exited (1) 1 second ago                       silly_jones         0B (virtual 3.97MB)
```

-Here you can see that the alpine image itself takes 3.97MB, and the container itself takes 0B. When you begin to manipulate files in your container, the size of the container will rise.
+Here you can see that the alpine image itself takes 3.97MB, and the container itself takes 0B. When
+you begin to manipulate files in your container, the size of the container will rise.

-If you are creating a lot of new containers eg. to test something, you can tell the Docker daemon to remove the container once stopped with the `--rm` option:
+If you are creating a lot of new containers, e.g. to test something, you can tell the Docker daemon
+to remove the container once stopped with the `--rm` option:
`docker run --rm -it alpine`
This will remove the container immediately after it is stopped.

@@ -94,30 +100,30 @@
Containers are still persisted, even though they are stopped.
If you want to delete them from your server you can use the `docker rm` command.

-`docker rm` can take either the `CONTAINER ID` or `NAME` as seen above.
+`docker rm` can take either the `CONTAINER ID` or `NAME` as seen above.
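Since `docker rm` accepts several arguments at once, stopped containers can also be removed in bulk. A sketch (`-a` includes stopped containers, `-q` prints only the IDs):

```shell
# List only the IDs of containers that have exited...
docker ps -aq --filter status=exited

# ...and hand them all to `docker rm` in a single call.
docker rm $(docker ps -aq --filter status=exited)
```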
Try to remove the `hello-world` container: -``` +``` bash docker container ls -a ``` Expected output: -``` +``` bash CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 6a9246ff53cb hello-world "/hello" 18 seconds ago Exited (0) 16 seconds ago ecstatic_cray ``` Delete the container: -``` +``` bash docker container rm ecstatic_cray ``` The name or ID specified is echoed back: -``` +``` bash ecstatic_cray ``` @@ -127,56 +133,66 @@ The container is now gone when you execute a `ls -a` command. ### Deleting images -You deleted the container instance above, but not the image of hello-world itself. And as you are now on the verge to become a docker expert, you do not need the hello-world image anymore so let us delete it. +You deleted the container instance above, but not the image of hello-world itself. And as you are +now on the verge to become a docker expert, you do not need the hello-world image anymore so let us +delete it. First off, list all the images you have downloaded to your computer: -``` +``` bash docker image ls ``` Expected output: -``` +``` bash REPOSITORY TAG IMAGE ID CREATED SIZE alpine latest 053cde6e8953 9 days ago 3.97MB hello-world latest 48b5124b2768 10 months ago 1.84kB ``` Here you can see the images downloaded as well as their size. -To remove the hello-world image use the `docker image rm` command together with the id of the docker image. +To remove the hello-world image use the `docker image rm` command together with the id of the docker +image. -``` +``` bash docker image rm 48b5124b2768 ``` Expected output: -``` +``` bash Untagged: hello-world:latest Untagged: hello-world@sha256:c5515758d4c5e1e838e9cd307f6c6a0d620b5e07e6f927b07d05f6d12a1ac8d7 Deleted: sha256:48b5124b2768d2b917edcb640435044a97967015485e812545546cbed5cf0233 Deleted: sha256:98c944e98de8d35097100ff70a31083ec57704be0991a92c51700465e4544d08 ``` -What docker did here was to `untag` the image removing the references to the sha of the image. 
After the image has no references, it deletes the two layers the image itself is comprised of. +What docker did here was to `untag` the image removing the references to the sha of the image. +After the image has no references, it deletes the two layers the image itself is comprised of. ### Cleaning up -When building, running and rebuilding images, you download and store a lot of layers. These layers will not be deleted, as docker takes a very conservative approach to clean up. +When building, running and rebuilding images, you download and store a lot of layers. These layers +will not be deleted, as docker takes a very conservative approach to clean up. -Docker provides a `prune` command, taking all dangling containers/images/networks/volumes. +Docker provides a `prune` command to clean up all dangling containers, images, networks, and volumes. - `docker container prune` - `docker image prune` - `docker network prune` - `docker volume prune` -The docker image prune command allows you to clean up unused images. By default, docker image prune only cleans up dangling images. A dangling image is one that is not tagged and is not referenced by any container. To remove all _unused_ resources, resources that are not directly used by any existing containers, use the `-a` switch as well. +The docker image prune command allows you to clean up unused images. By default, docker image prune +only cleans up dangling images. A dangling image is one that is not tagged and is not referenced by +any container. To remove all _unused_ resources, resources that are not directly used by any +existing containers, use the `-a` switch as well. If you want a general cleanup, then `docker system prune` is your friend. ## Summary -You have now seen the swiftness of creating a new container from an image, trash it, and create a new one on top of it. 
-You have learned to use `container rm` for deleting containers, `image rm` for images, `image ls` for listing the images and `container ls -a` to look at all the containers on your host. +You have now seen the swiftness of creating a new container from an image, trash it, and create a +new one on top of it. +You have learned to use `container rm` for deleting containers, `image rm` for images, `image ls` +for listing the images and `container ls -a` to look at all the containers on your host. diff --git a/labs/04-port-forward.md b/labs/04-port-forward.md index 52fdfcd..08323ec 100644 --- a/labs/04-port-forward.md +++ b/labs/04-port-forward.md @@ -2,14 +2,16 @@ Running arbitrary Linux commands inside a Docker container is fun, but let's do something more useful. -Pull down the `nginx` Docker image from the Docker Hub. This Docker image uses the [Nginx](http://nginx.org/) webserver to serve a static HTML website. +Pull down the `nginx` Docker image from the Docker Hub. This Docker image uses the +[Nginx](http://nginx.org/) webserver to serve a static HTML website. -Start a new container from the `nginx` image that exposes port 80 from the container to port 8080 on your host. You will need to use the `-p` flag with the docker container run command. +Start a new container from the `nginx` image that exposes port 80 from the container to port 8080 +on your host. You will need to use the `-p` flag with the docker container run command. > :bulb: Mapping ports between your host machine and your containers can get confusing. > Here is the syntax you will use: > -> ``` +> ``` bash > docker run -p 8080:80 nginx > ``` > @@ -17,25 +19,28 @@ Start a new container from the `nginx` image that exposes port 80 from the conta > and **the container port always goes to the right.** > Remember it as traffic coming _from_ the host, _to_ the container. -Open a web browser and go to port 8080 on your host. 
The exact address will depend on how you're running Docker today: +Open a web browser and go to port 8080 on your host. The exact address will depend on how you're +running Docker today: - **Native Linux** - [http://localhost:8080](http://localhost:8080) -- **Cloud server** - Make sure firewall rules are configured to allow traffic on port 8080. Open browser and use the hostname (or IP) for your server. +- **Cloud server** - Make sure firewall rules are configured to allow traffic on port 8080. Open a + browser and use the hostname (or IP) for your server. Ex: [http://inst1.prefix.eficode.academy:8080](http://inst1.prefix.eficode.academy:8080) - Alternatively open a new shell and issue `curl localhost:8080` - **Google Cloud Shell** - Open Web Preview (upper right corner) If you see a webpage saying "Welcome to nginx!" then you're done! -If you look at the console output from docker, you see nginx producing a line of text for each time a browser hits the webpage: +If you look at the console output from docker, you see nginx producing a line of text for each time +a browser hits the webpage: -``` +``` bash docker run -p 8080:80 nginx ``` Expected output: -``` +``` bash 172.17.0.1 - - [31/May/2017:11:52:48 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:53.0) Gecko/20100101 Firefox/53.0" "- ``` @@ -43,12 +48,13 @@ Press **control + c** in your terminal window to stop your container. ## Running the webserver container in the background -When running a webserver like nginx, it is very useful to not run the container in the foreground of our terminal. +When running a webserver like nginx, it is very useful to not run the container in the foreground +of our terminal. Instead we should make it run in the background, freeing up our terminal for other things. Docker enables this with the `-d` parameter for the `run` command. 
For example: `docker run -d -p 8080:80 nginx`

-```
+``` bash
docker run -p 8080:80 -d nginx
```

@@ -61,11 +67,12 @@
Congratulations! You have just started a container in the background. :tada:

Stop the container you just started. Remember that your container ID is different from the one in the example.

-```bash
+``` bash
docker stop 78c943461b49584ebdf841f36d113567540ae460387bbd7b2f885343e7ad7554
```
+
Docker prints out the ID of the stopped container.

-```
+``` bash
78c943461b49584ebdf841f36d113567540ae460387bbd7b2f885343e7ad7554
```
diff --git a/labs/05-executing.md b/labs/05-executing.md
index abde86c..d8958b4 100644
--- a/labs/05-executing.md
+++ b/labs/05-executing.md
@@ -1,12 +1,15 @@
# Executing processes in your container

-It you want to examine a running container, but do not want to disturb the running process you can execute another process inside the container with `exec`.
+If you want to examine a running container, but do not want to disturb the running process, you can
+execute another process inside the container with `exec`.

-This could be a shell, or a script of some sort. In that way you can debug an existing environment before starting a new up.
+This could be a shell, or a script of some sort. In that way you can debug an existing environment
+before starting a new one up.

## Exercise

-In this exercise, we want to change a file in an already running container, by executing a secondary process.
+In this exercise, we want to change a file in an already running container, by executing a secondary
+process.

### Step by step

@@ -15,16 +18,18 @@ In this exercise, we want to change a file in an already running container, by e
Step into a new container by executing a bash shell inside the container:

-```
+``` bash
docker exec -it CONTAINERNAME bash
```

> :bulb: note that the CONTAINERNAME is the name of the NGINX container you just started.

Inside, we want to edit the `index.html` page, with a cli text editor called [nano](https://www.nano-editor.org/).
-Because containers only have the bare minimum installed, we need to first install nano, and then use it:
+Because containers only have the bare minimum installed, we need to first install nano, and then
+use it:

-> :bulb: From the [DockerHub description](https://hub.docker.com/_/nginx) we know that the standard place for HTML pages NGINX serves is in /usr/share/nginx/html
+> :bulb: From the [DockerHub description](https://hub.docker.com/_/nginx) we know that the standard
+> place for HTML pages NGINX serves is in /usr/share/nginx/html

- install nano on the container: `apt-get update && apt-get install -y nano`
- Edit the index html page: `nano /usr/share/nginx/html/index.html`
@@ -33,5 +38,7 @@

## Summary

-You have tried to start a new process by the `exec` command in order to look around in a container, or to edit something.
-You have also seen that terminating any of the the processes created with `docker exec` will not make the container stop.
+You have tried to start a new process by the `exec` command in order to look around in a container,
+or to edit something.
+You have also seen that terminating any of the processes created with `docker exec` will not
+make the container stop.
diff --git a/labs/06-volumes.md b/labs/06-volumes.md
index e569b13..88bf3c3 100644
--- a/labs/06-volumes.md
+++ b/labs/06-volumes.md
@@ -1,51 +1,49 @@
# Docker volumes

-> _Hint: This lab only covers volumes on Docker for Linux. If you are on windows or mac, things can look different._
+> _Hint: This lab only covers volumes on Docker for Linux. If you are on Windows or Mac, things can
+> look different._

Containers should be ephemeral.

- The whole idea is that you can start, stop and delete the containers without losing data.
- But how can we do that for persistent workloads like databases?
- We need a way to store data outside of the containers.
- You have two different ways of mounting data from your container: `bind mounts` and `volumes`.

-### Bind mount
-
-Is the simpler one to understand. It takes a host path, like `/data`, and mounts it inside your container eg. `/opt/app/data`.
-
-The good thing about bind mount is that they are easy and allow you to connect directly to the host filesystem.
-
-The downside is that you need to specify it at runtime, and path to mount might vary from host to host, which can be confusing when you want to run your containers on different hosts.
-
-With bind mount you will also need to deal with backup, migration etc. in an tool outside the Docker ecosystem.
+## Bind mounts
+
+A bind mount is the simpler of the two to understand. It takes a host path, like `/data`, and
+mounts it inside your container, e.g. `/opt/app/data`.
+
+The good thing about bind mounts is that they are easy and allow you to connect directly to the host
+filesystem. The downside is that you need to specify them at runtime, and the path to mount might
+vary from host to host, which can be confusing when you want to run your containers on different hosts.
+
+With bind mounts you will also need to deal with backup, migration etc. with a tool outside the
+Docker ecosystem.

As an example, let's look at the [Nginx](https://hub.docker.com/_/nginx/) container.

- The server itself is of little use, if it cannot access our web content on the host.

- We need to create a mapping between the host system, and the container with the `-v` command:

```bash
docker run --name some-nginx -v /some/content:/usr/share/nginx/html:ro -d nginx
```

-That will map whatever files are in the `/some/content` folder on the host to `/usr/share/nginx/html` in the container.
-
-> The `:ro` attribute is making the host volume read-only, making sure the container can not edit the files on the host.
+That will map whatever files are in the `/some/content` folder on the host to
+`/usr/share/nginx/html` in the container.
-### A docker Volume +> The `:ro` attribute is making the host volume read-only, making sure the container can not edit +> the files on the host. -This is where you can use a `named` or `unnamed` volume to store the external data. The data will still be stored locally unless you have configured a storage driver for your system (Ops things, not covered here). +## Volumes +This is where you can use a `named` or `unnamed` volume to store the external data. The data will +still be stored locally unless you have configured a storage driver for your system (Ops things, +not covered here). Volumes are entities inside docker, and can be created in three different ways. - By explicitly creating it with the `docker volume create ` command. -- By creating a named volume at container creation time with `docker container run -d -v DataVolume:/opt/app/data nginx` -- By creating an anonymous volume at container creation time with `docker container run -d -v /opt/app/data nginx` - +- By creating a named volume at container creation time with `docker container run -d -v + DataVolume:/opt/app/data nginx` +- By creating an anonymous volume at container creation time with `docker container run -d -v + /opt/app/data nginx` In the next section, you will get to try all of them. ## Step-by-Step Instructions @@ -54,9 +52,12 @@ In the next section, you will get to try all of them. Try to do the following: -- `git clone` this repository down. :bulb: If you are at training the repository is already cloned on your training workstation. +- `git clone` this repository down. :bulb: If you are at training the repository is already cloned + on your training workstation. - Navigate to the `labs/volumes/` directory, which contains a file we can try to serve: `index.html`. -- We need change `/some/content` to the right path, it must be an absolute path, starting from the root of the filesystem, (which in linux is `/`). 
You can use the command `pwd` (Print working directory) to display the path to where you are.
+- We need to change `/some/content` to the right path. It must be an absolute path, starting from the
+  root of the filesystem (which in Linux is `/`). You can use the command `pwd` (Print working
+  directory) to display the path to where you are.
- Now try to run the container with the `labs/volumes` directory bind mounted. This will give you a nginx server running, serving your static files... _But on which port?_

@@ -77,7 +78,7 @@ The parameter `-p 8080:80` will map port 80 in the container to port 8080 on the

- Check that it is running by navigating to the hostname or IP with your browser, and on port 8080.
- Stop the container with `docker stop `.

-### Volumes
+### Volume

First off, lets try to make a data volume called `data`:

@@ -96,7 +97,8 @@
local data

Unlike the bind mount, you do not specify where the data is stored on the host.

-In the volume API, like for almost all other of Docker’s APIs, there is an `inspect` command giving you low level details.
+In the volume API, like for almost all other of Docker’s APIs, there is an `inspect` command giving
+you low-level details.

- run `docker volume inspect data` to see where the data is stored on the host.

@@ -115,21 +117,24 @@ In the volume API, like for almost all other of Docker’s APIs, there is an `in

You can see that the `data` volumes is mounted at `/var/lib/docker/volumes/data/_data` on the host.

-> **Note** we will not go through the different drivers. For more info look at Dockers own [example](https://docs.docker.com/engine/admin/volumes/volumes/#use-a-volume-driver).
+> **Note** we will not go through the different drivers. For more info look at Docker's own
+> [example](https://docs.docker.com/engine/admin/volumes/volumes/#use-a-volume-driver).

-You can now use this data volume in all containers.
Try to mount it to an nginx server with the `docker container run --rm --name www -d -p 8080:80 -v data:/usr/share/nginx/html nginx` command.
+You can now use this data volume in all containers. Try to mount it to an nginx server with the
+`docker container run --rm --name www -d -p 8080:80 -v data:/usr/share/nginx/html nginx` command.

-> **Note:** If the volume refer to is empty and we provide the path to a directory that contains data in the base image, that data will be copied into the volume.
+> **Note:** If the volume referred to is empty and we provide the path to a directory that contains
+> data in the base image, that data will be copied into the volume.

Try now to look at the data stored in `/var/lib/docker/volumes/data/_data` on the host:

-```
+``` bash
sudo ls /var/lib/docker/volumes/data/_data/
```

Expected output:

-```
+``` bash
50x.html  index.html
```

@@ -137,41 +142,41 @@ Those two files comes from the Nginx image and is the standard files the webserv

### Attaching multiple containers to a volume

-Multiple containers can attach to the same volume with data.
+Multiple containers can attach to the same volume with data.

Docker doesn't handle any file locking, so applications must account for the file locking themselves.

+Let's try to go in and make a new html page for nginx to serve.
-Let's try to go in and make a new html page for nginx to serve.
-
-We do this by making a new ubuntu container that has the `data` volume attached to `/tmp`, and thereafter create a new html file with the `echo` command:
+We do this by making a new ubuntu container that has the `data` volume attached to `/tmp`, and
+thereafter create a new html file with the `echo` command:

Start the container:

-```
+``` bash
docker run -it --rm -v data:/tmp ubuntu bash
```

In the container run:

-```
+``` bash
echo "

<h1>hello world</h1>

" > /tmp/hello.html ``` Verify the file was created by running in the container: -``` +``` bash ls /tmp ``` Expected output: -``` +``` bash hello.html 50x.html index.html ``` -Head over to your newly created webpage at: `http://workstation-X.Y.eficode.academy:8080/hello.html` where X is your workstation number and Y is the prefix. It should be the same URL as your Workstation. - +Head over to your newly created webpage at: `http://workstation-X.Y.eficode.academy:8080/hello.html` +where X is your workstation number and Y is the prefix. It should be the same URL as your Workstation. ## cleanup @@ -179,42 +184,54 @@ Exit out of your ubuntu server and execute a `docker stop www` to stop the nginx Run a `docker ps` to make sure that no other containers are running. -``` +``` bash docker ps ``` Expected output: -``` +``` bash CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES ``` -The data volume is still present, and will be there until you remove it with a `docker volume rm data` or make a general cleanup of all the unused volumes by running `docker volume prune`. +The data volume is still present, and will be there until you remove it with a `docker volume rm data` +or make a general cleanup of all the unused volumes by running `docker volume prune`. ## Tips and tricks -As you have seen, the `-v` flag can both create a bind mount or name a volume depending on the syntax. If the first argument begins with a / or ~/ you're creating a bind mount. Remove that, and you're naming the volume. For example: +As you have seen, the `-v` flag can both create a bind mount or name a volume depending on the syntax. +If the first argument begins with a / or ~/ you're creating a bind mount. Remove that, and you're +naming the volume. For example: - `-v /path:/path/in/container` mounts the host directory, `/path` at the `/path/in/container` - `-v path:/path/in/container` creates a volume named path with no relationship to the host. 
### Sharing data

-If you want to share volumes or bind mount between two containers, then use the `--volumes-from` option for the second container. The parameter maps the mapped volumes from the source container to the container being launched.
+If you want to share volumes or bind mounts between two containers, then use the `--volumes-from`
+option for the second container. The option maps the volumes from the source container
+to the container being launched.

## More advanced docker commands

-Before you go on, use the [Docker command line interface](https://docs.docker.com/engine/reference/commandline/cli/) documentation to try a few more commands:
+Before you go on, use the [Docker command line interface](https://docs.docker.com/engine/reference/commandline/cli/)
+documentation to try a few more commands:

-- While your detached container is running, use the `docker ps` command to see what silly name Docker gave your container. **This is one command you're going to use often!**
-- While your detached container is still running, look at its logs. Try following its logs and refreshing your browser.
+- While your detached container is running, use the `docker ps` command to see what silly name
+  Docker gave your container. **This is one command you're going to use often!**
+- While your detached container is still running, look at its logs. Try following its logs and
+  refreshing your browser.
- Stop your detached container, and confirm that it is stopped with the `ps` command.
- Start it again, wait 10 seconds for it to fire up, and stop it again.
- Then delete that container from your system.

-> **NOTE:** When running most docker commands, you only need to specify the first few characters of a container's ID. For example, if a container has the ID `df4fd19392ba`, you can stop it with `docker stop df4`. You can also use the silly names Docker provides containers by default, such as `boring_bardeen`.
+> **NOTE:** When running most docker commands, you only need to specify the first few characters
+> of a container's ID. For example, if a container has the ID `df4fd19392ba`, you can stop it with
+> `docker stop df4`. You can also use the silly names Docker provides containers by default,
+> such as `boring_bardeen`.

-If you want to read more, I recommend [Digital Oceans](https://www.digitalocean.com/community/tutorials/how-to-share-data-between-docker-containers) guides to sharing data through containers, as well as Dockers own article about [volumes](https://docs.docker.com/engine/admin/volumes).
+If you want to read more, I recommend [DigitalOcean's](https://www.digitalocean.com/community/tutorials/how-to-share-data-between-docker-containers)
+guide to sharing data through containers, as well as Docker's own article about [volumes](https://docs.docker.com/engine/admin/volumes).

## summary

diff --git a/labs/07-building-an-image.md b/labs/07-building-an-image.md
index 3d8fd16..90ee76f 100644
--- a/labs/07-building-an-image.md
+++ b/labs/07-building-an-image.md
@@ -1,10 +1,17 @@
# Constructing a docker image

-Running images others made is useful, but if you want to use docker for your own application, chances are you want to construct an image on your own.
+Running images others made is useful, but if you want to use docker for your own application,
+chances are you want to construct an image on your own.

-A [Dockerfile](https://docs.docker.com/reference/dockerfile/) is a text file containing a list of commands that the Docker daemon calls while creating an image. The Dockerfile contains all the information that Docker needs to know to run the app; a base Docker image to run from, location of your project code, any dependencies it has, and what commands to run at start-up.
+A [Dockerfile](https://docs.docker.com/reference/dockerfile/) is a text file containing a list of
+commands that the Docker daemon calls while creating an image.
The Dockerfile contains all the
+information that Docker needs to know to run the app: a base Docker image to run from, the location of
+your project code, any dependencies it has, and what commands to run at start-up.

-It is a simple way to automate the image creation process. The best part is that the [commands](https://docs.docker.com/reference/dockerfile/) you write in a Dockerfile are _almost_ identical to their equivalent Linux commands. This means you don't really have to learn new syntax to create your own Dockerfiles.
+It is a simple way to automate the image creation process. The best part is that the
+[commands](https://docs.docker.com/reference/dockerfile/) you write in a Dockerfile are _almost_
+identical to their equivalent Linux commands. This means you don't really have to learn new syntax
+to create your own Dockerfiles.

## Dockerfile commands summary

@@ -24,15 +31,31 @@ Here's a quick summary of some basic commands we will use in our Dockerfile.

Details:

-- `FROM` is always the first item in the Dockerfile. It is a requirement that the Dockerfile starts with the `FROM` command. Images are created in layers, which means you can use another image as the base image for your own. The `FROM` command defines your base layer. As argument, it takes the name of the image. Optionally, you can add the Docker Hub username of the maintainer and image version, in the format `username/imagename:version`.
+- `FROM` is always the first item in the Dockerfile. It is a requirement that the Dockerfile starts
+  with the `FROM` command. Images are created in layers, which means you can use another image as
+  the base image for your own. The `FROM` command defines your base layer. As an argument, it takes
+  the name of the image. Optionally, you can add the Docker Hub username of the maintainer and
+  image version, in the format `username/imagename:version`.

-- `RUN` is used to build up the image you're creating.
For each `RUN` command, Docker will run the command then create a new layer of the image. This way you can roll back your image to previous states easily. The syntax for a `RUN` instruction is to place the full text of the shell command after the `RUN` (e.g., `RUN mkdir /user/local/foo`). This will automatically run in a `/bin/sh` shell. You can define a different shell like this: `RUN /bin/bash -c 'mkdir /user/local/foo'`
+- `RUN` is used to build up the image you're creating. For each `RUN` command, Docker will run the
+  command, then create a new layer of the image. This way you can roll back your image to previous
+  states easily. The syntax for a `RUN` instruction is to place the full text of the shell command
+  after the `RUN` (e.g., `RUN mkdir /usr/local/foo`). This will automatically run in a `/bin/sh`
+  shell. You can define a different shell like this: `RUN /bin/bash -c 'mkdir /usr/local/foo'`

-- `COPY` copies local files into the container. The files need to be in the same folder (or a sub folder) as the Dockerfile itself. An example is copying the requirements for a python app into the container: `COPY requirements.txt /usr/src/app/`.
+- `COPY` copies local files into the container. The files need to be in the same folder (or a sub
+  folder) as the Dockerfile itself. An example is copying the requirements for a python app into
+  the container: `COPY requirements.txt /usr/src/app/`.

-- `ADD` should only be used if you want to copy and unpack a tar file into the image. In any other case, use `COPY`. `ADD` can also be used to add a file directly from an URL; consider whether this is good practice.
+- `ADD` should only be used if you want to copy and unpack a tar file into the image. In any other
+  case, use `COPY`. `ADD` can also be used to add a file directly from a URL; consider whether
+  this is good practice.

-- `CMD` defines the commands that will run on the image at start-up.
Unlike a `RUN`, this does not create a new layer for the image, but simply runs the command. There can only be one `CMD` in a Dockerfile. If you need to run multiple commands, the best way to do that is to have the `CMD` run a script. `CMD` requires that you tell it where to run the command, unlike `RUN`. So example `CMD` commands would be: +- `CMD` defines the commands that will run on the image at start-up. Unlike a `RUN`, this does not + create a new layer for the image, but simply runs the command. There can only be one `CMD` in a + Dockerfile. If you need to run multiple commands, the best way to do that is to have the `CMD` + run a script. `CMD` requires that you tell it where to run the command, unlike `RUN`. So example + `CMD` commands would be: ```dockerfile CMD ["python3", "./app.py"] @@ -40,14 +63,16 @@ Details: CMD ["/bin/bash", "echo", "Hello World"] ``` -- `EXPOSE` creates a hint for users of an image that provides services on ports. It is included in the information which can be retrieved via `$ docker inspect `. +- `EXPOSE` creates a hint for users of an image that provides services on ports. It is included in + the information which can be retrieved via `$ docker inspect `. -> **Note:** The `EXPOSE` command does not actually make any ports accessible to the host! Instead, this requires -> publishing ports by means of the `-p` or `-P` flag when using `$ docker run`. +> **Note:** The `EXPOSE` command does not actually make any ports accessible to the host! Instead, +> this requires publishing ports by means of the `-p` or `-P` flag when using `$ docker run`. - `ENTRYPOINT` configures a command that will run no matter what the user specifies at runtime. -> :bulb: this is not the full list of commands, but the ones you will be using in the exercise. For a full list, see the [Dockerfile reference](https://docs.docker.com/engine/reference/builder/). +> :bulb: this is not the full list of commands, but the ones you will be using in the exercise. 
For +> a full list, see the [Dockerfile reference](https://docs.docker.com/engine/reference/builder/). @@ -55,11 +80,14 @@ Details: We want to create a Docker image with a Python web app. -As mentioned above, all user images are based on a _base image_. We will build our own Python image based on [Ubuntu](https://hub.docker.com/_/ubuntu/). We'll do that using a **Dockerfile**. +As mentioned above, all user images are based on a _base image_. We will build our own Python image +based on [Ubuntu](https://hub.docker.com/_/ubuntu/). We'll do that using a **Dockerfile**. -> :bulb: If you want to learn more about Dockerfiles, check out [Best practices for writing Dockerfiles](https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/). +> :bulb: If you want to learn more about Dockerfiles, check out [Best practices for writing +> Dockerfiles](https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/). -1. We have made a small boilerplate file and app for you in the [/building-an-image](./building-an-image/) folder, so head over there and add content to the Dockerfile as described below +1. We have made a small boilerplate file and app for you in the [/building-an-image](./building-an-image/) + folder, so head over there and add content to the Dockerfile as described below We'll start by specifying our base image, using the `FROM` keyword: @@ -67,7 +95,11 @@ As mentioned above, all user images are based on a _base image_. We will build o FROM ubuntu:22.04 ``` -1. The next step is usually to write the commands of copying the files and installing the dependencies. But first we will install the Python pip package to the ubuntu linux distribution. This will not just install the pip package but any other dependencies too, which includes the python interpreter. Add the following [RUN](https://docs.docker.com/engine/reference/builder/#run) commands next: +1. 
The next step is usually to write the commands for copying the files and installing the
+   dependencies. But first we will install the Python pip package on the Ubuntu Linux distribution.
+   This will install not just the pip package but any other dependencies too, which includes the
+   Python interpreter. Add the following [RUN](https://docs.docker.com/engine/reference/builder/#run)
+   commands next:

   ```docker
   RUN apt-get update -y
@@ -83,14 +115,16 @@ As mentioned above, all user images are based on a _base image_. We will build o
   RUN pip3 install --no-cache-dir -r /usr/src/app/requirements.txt
   ```

-   Copy the application app.py into our image by using the [COPY](https://docs.docker.com/engine/reference/builder/#copy) command.
+   Copy the application app.py into our image by using the
+   [COPY](https://docs.docker.com/engine/reference/builder/#copy) command.

   ```docker
   COPY app.py /usr/src/app/
   ```

-1. Specify the port number which needs to be exposed. Since our flask app is running on `5000` that's what we'll expose.
+1. Specify the port number which needs to be exposed. Since our flask app is running on `5000`
+   that's what we'll expose.

   ```docker
   EXPOSE 5000
   ```

@@ -102,13 +136,15 @@ As mentioned above, all user images are based on a _base image_. We will build o
   > about which ports are intended to be published.
   > You need the `-p`/`-P` command to actually open the host ports.

-1. The last step is the command for running the application which is simply - `python3 ./app.py`. Use the [CMD](https://docs.docker.com/engine/reference/builder/#cmd) command to do that:
+1. The last step is the command for running the application, which is simply `python3 ./app.py`.
+   Use the [CMD](https://docs.docker.com/engine/reference/builder/#cmd) command to do that:

   ```docker
   CMD ["python3", "/usr/src/app/app.py"]
   ```

-   The primary purpose of `CMD` is to tell the container which command it should run by default when it is started.
+ The primary purpose of `CMD` is to tell the container which command it should run by default + when it is started. 1. Verify your Dockerfile. @@ -138,17 +174,19 @@ As mentioned above, all user images are based on a _base image_. We will build o ### Build the image -Now that you have your `Dockerfile`, you can build your image. The `docker build` command does the heavy-lifting of creating a docker image from a `Dockerfile`. +Now that you have your `Dockerfile`, you can build your image. The `docker build` command does the +heavy-lifting of creating a docker image from a `Dockerfile`. -The `docker build` command is quite simple - it takes an optional tag name with the `-t` flag, and the location of the directory containing the `Dockerfile` - the `.` indicates the current directory: +The `docker build` command is quite simple - it takes an optional tag name with the `-t` flag, and +the location of the directory containing the `Dockerfile` - the `.` indicates the current directory: -``` +``` bash docker build -t myfirstapp . ``` Expected output (at the end of the run): -``` +``` bash [+] Building 79.5s (11/11) FINISHED docker:default => [internal] load build definition from Dockerfile 0.1s => => transferring dockerfile: 583B 0.0s @@ -176,104 +214,54 @@ Expected output (at the end of the run): ``` -If you don't have the `ubuntu:22.04` image, the client will first pull the image and then create your image. If you do have it, your output on running the command will look different from mine. - -If everything went well, your image should be ready! Run `docker image ls` and see if your image (`myfirstapp`) shows. - -### Run your image - -The next step in this section is to run the image and see if it actually works. 
- -``` -docker run -p 8080:5000 --name myfirstapp myfirstapp -``` - -Expected output: - -``` - * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit) -``` +If you don't have the `ubuntu:22.04` image, the client will first pull the image and then create +your image. If you do have it, your output on running the command will look different from mine. -> :bulb: remember that the application is listening on port 5000 on the Docker virtual network, but on the host it is listening on port 8080. +If everything went well, your image should be ready! Run `docker image ls` and see if your image +(`myfirstapp`) shows. -Head over to `http://:8080` or your server's URL and your app should be live. +> :bulb: remember that the application is listening on port 5000 on the Docker virtual network, but +> on the host it is listening on port 8080. -## EXTRA Images and layers +When dealing with docker images, a layer, or image layer, is a change on an image. Every time you +run one of the commands RUN, COPY or ADD in your Dockerfile it adds a new layer, causes the image +to change, and makes it possible to roll back to previous states. -When dealing with docker images, a layer, or image layer, is a change on an image. Every time you run one of the commands RUN, COPY or ADD in your Dockerfile it adds a new layer, causes the image to change to the new layer. You can think of it as staging changes when you're using Git: You add a file's change, then another one, then another one... +We add another layer on top of our starting image, running an update on the system. After that yet +another for installing the python ecosystem. -Consider the following Dockerfile: +The concept of layers comes in handy at the time of building images. Because layers are +intermediate images, if you make a change to your Dockerfile, docker will build only the layer that +was changed and reuse the rest. 
-```dockerfile
- FROM ubuntu:22.04
- RUN apt-get update -y
- RUN apt-get install -y python3 python3-pip python3-dev build-essential
- COPY requirements.txt /usr/src/app/
- RUN pip3 install --no-cache-dir -r /usr/src/app/requirements.txt
- COPY app.py /usr/src/app/
- EXPOSE 5000
- CMD ["python3", "/usr/src/app/app.py"]
-```
+Each layer is built on top of its parent layer, meaning if the parent layer changes, the next layer
+does as well.

-First, we choose a starting image: `ubuntu:22.04`, which in turn has many layers.
-We add another layer on top of our starting image, running an update on the system. After that yet another for installing the python ecosystem.
-Then, we tell docker to copy the requirements to the container. That's another layer.
+If you want to concatenate two layers (e.g. the update and install [which is a good
+idea](https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#run)), then do
+them in the same RUN command.

-The concept of layers comes in handy at the time of building images. Because layers are intermediate images, if you make a change to your Dockerfile, docker will build only the layer that was changed and the ones after that. This is called layer caching.
+If you want to be able to use any cached layers from last time, they need to be run _before the
+update command_.

-Each layer is build on top of it's parent layer, meaning if the parent layer changes, the next layer does as well.
+> Once we build the layers, Docker will reuse them for new builds. This makes the builds much
+> faster. This is great for continuous integration, where we want to build an image at the end of
+> each successful build.

-If you want to concatenate two layers (e.g. 
the update and install [which is a good idea](https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#run)), then do them in the same RUN command:
+Try to move the two `COPY` commands before the `RUN` and build again to see it taking the
+cached layers instead of making new ones.

-```dockerfile
-FROM ubuntu:22.04
-RUN apt-get update && apt-get install -y \
-    python3 \
-    python3-pip \
-    python3-dev \
-    build-essential
-COPY requirements.txt /usr/src/app/
-RUN pip3 install --no-cache-dir -r /usr/src/app/requirements.txt
-COPY app.py /usr/src/app/
-EXPOSE 5000
-CMD ["python3", "/usr/src/app/app.py"]
-```
+If you make a `docker ps -a` command, you can now see a container with the name _myfirstapp_ from
+the image named _myfirstapp_.

-If you want to be able to use any cached layers from last time, they need to be run _before the update command_.
+You learned how to write your own docker images in a `Dockerfile` with the use of the `FROM`
+command to choose base-images like Alpine or Ubuntu and keywords like `RUN` for executing
+commands, `COPY` to copy files, and `CMD` to define the default command.

-> NOTE:
-> Once we build the layers, Docker will reuse them for new builds. This makes the builds much faster. This is great for continuous integration, where we want to build an image at the end of each successful build (e.g. in Jenkins). But the build is not only faster, the new image layers are also smaller, since intermediate images are shared between images.
-
-Try to move the two `COPY` commands before for the `RUN` and build again to see it taking the cached layers instead of making new ones.
-
-### Delete your image
-
-If you make a `docker ps -a` command, you can now see a container with the name _myfirstapp_ from the image named _myfirstapp_.
-
-```
-docker ps -a
-```
-
-Expected output (you might have more containers):
-
-```
-CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                     PORTS               NAMES
-fcfba2dfb8ee        myfirstapp          "python3 /usr/src/a..." 
About a minute ago   Exited (0) 28 seconds ago                       myfirstapp
-```
-
-Make a `docker image ls` command to see that you have a docker image with the name `myfirstapp`
-
-Try now to first:
-
-- remove the container
-- remove the image file as well with the `image rm` [command](https://docs.docker.com/engine/reference/commandline/image_rm/).
-- make `docker image ls` again to see that it's gone.
+You also learned that each of the keywords generates an image layer on top of the previous, and
+that every one of the layers can be converted to a running container.

## Instructions

-There are constantly getting added new keywords to the Dockerfile. You can find a list of all the keywords [here](https://docs.docker.com/engine/reference/builder/).
-
-## Summary
-
-You learned how to write your own docker images in a `Dockerfile` with the use of the `FROM` command to choose base-images like Alpine or Ubuntu and keywords like `RUN` for executing commands,`COPY` to add resources to the container, and `CMD` to indicate what to run when starting the container.
-You also learned that each of the keywords generates an image layer on top of the previous, and that everyone of the layers can be converted to a running container.
+New keywords are constantly being added to the Dockerfile syntax. You can find a complete list of
+all the keywords in the [Dockerfile reference](https://docs.docker.com/engine/reference/builder/).

diff --git a/labs/08-multi-stage-builds.md b/labs/08-multi-stage-builds.md
index 931d003..3040253 100644
--- a/labs/08-multi-stage-builds.md
+++ b/labs/08-multi-stage-builds.md
@@ -2,7 +2,8 @@

## Task: create a tiny go-application container

-In [multi-stage-build/hello.go](multi-stage-build/hello.go) we have created a small go application that prints `hello world` to your terminal.
+In [multi-stage-build/hello.go](multi-stage-build/hello.go) we have created a small go application
+that prints `hello world` to your terminal.

You want to containerize it - that's easy!
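A straightforward single-stage version of such a build might look like the sketch below. This is only an illustration — the lab ships its own Dockerfile in the folder, and `golang:1.19` is assumed here because it matches the builder image used later in this lab:

```dockerfile
# Sketch of a single-stage build (the lab's actual Dockerfile is in
# multi-stage-build/; golang:1.19 matches the builder used later).
FROM golang:1.19
WORKDIR /app
COPY hello.go .
RUN go build -o goapp hello.go
ENTRYPOINT ./goapp
```

Note how the final image here still carries the whole Go toolchain — exactly the problem the multi-stage section below addresses.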
@@ -10,7 +11,7 @@ You don't even have to have go installed, because you can just use a `base image The [Dockerfile](multi-stage-build/Dockerfile) is already created for you in the same folder. -## Exercise +## Docker build exercise - Try building the image with `docker build` - Try to run it with `docker run`. @@ -18,17 +19,21 @@ The [Dockerfile](multi-stage-build/Dockerfile) is already created for you in the ## Using Multi Stage Builds -The image we built has both the compiler and the compiled binary - which is too much: we only need the binary to run our application. +The image we built has both the compiler and the compiled binary - which is too much: we only need +the binary to run our application. -By utilizing multi-stage builds, we can separate the build stage (compiling) from the image we actually want to ship. +By utilizing multi-stage builds, we can separate the build stage (compiling) from the image we +actually want to ship. -## Exercise +## Multi Stage exercise - try `docker image ls`. -- Could we make it smaller? We only need the compiler on build-time, since go is a statically compiled language. +- Could we make it smaller? We only need the compiler on build-time, since go is a statically + compiled language. -- See the `Dockerfile` below, it has two `build stages`, wherein the latter stage is using the compiled artifact (the binary) from the first: +- See the `Dockerfile` below, it has two `build stages`, wherein the latter stage is using the + compiled artifact (the binary) from the first: ```Dockerfile # build stage @@ -51,13 +56,15 @@ ENTRYPOINT ./goapp - Try inspecting the size with `docker image ls`. -- Compare the size of the two images. The latter image should be much smaller, since it's just containing the go-application using `alpine` as the `base image`, and not the entire `golang`-suite of tools. +- Compare the size of the two images. 
The latter image should be much smaller, since it just
+  contains the go-application using `alpine` as the `base image`, and not the entire
+  `golang` suite of tools.

You can read more about this on: [Use multi-stage builds - docs.docker.com](https://docs.docker.com/develop/develop-images/multistage-build/)

You should see a great reduction in size, like in the example below:

-```
+``` bash
REPOSITORY   TAG           IMAGE ID       CREATED          SIZE
hello        golang        5311178b692a   23 seconds ago   805MB
hello        multi-stage   ba46dc3143ca   2 minutes ago    7.53MB
@@ -68,13 +75,13 @@ hello        multi-stage   ba46dc3143ca   2 minutes ago    7.53MB

Since go is a statically compiled language, we can actually use `scratch` as the `base image`.
The `scratch` image is just an empty file system.

-Hint: the "scratch" image has no shell, so in the Dockerfile you _also_ need to change the `ENTRYPOINT` from `shell form` to `exec form`.
+Hint: the "scratch" image has no shell, so in the Dockerfile you _also_ need to change the
+`ENTRYPOINT` from `shell form` to `exec form`.

See: `ENTRYPOINT` under [Dockerfile reference](https://docs.docker.com/engine/reference/builder/).

After building with your new Dockerfile, inspect the size of the images.
Your new image should be even smaller than the alpine-based image!
-
```Dockerfile
FROM golang:1.19 AS builder
WORKDIR /app

diff --git a/labs/09-multi-container.md b/labs/09-multi-container.md
index 3635b82..d6bada4 100644
--- a/labs/09-multi-container.md
+++ b/labs/09-multi-container.md
@@ -4,20 +4,23 @@

In this scenario, we are going to deploy the CMS system called Wordpress.

-> WordPress is a free and open source blogging tool and a content management system (CMS) based on PHP and MySQL, which runs on a web hosting service.
+> WordPress is a free and open source blogging tool and a content management system (CMS) based on PHP
+> and MySQL, which runs on a web hosting service.
So we need two containers:

- One container that can serve the Wordpress PHP files
- One container that can serve as a MySQL database for Wordpress.

-Both containers already exists on the dockerhub: [Wordpress](https://hub.docker.com/_/wordpress/) and [Mysql](https://hub.docker.com/_/mysql/).
+Both containers already exist on Docker Hub: [Wordpress](https://hub.docker.com/_/wordpress/)
+and [Mysql](https://hub.docker.com/_/mysql/).

## Separate containers

> [!NOTE]
-> The following section is only an example to show how you can run multi-container setups with `docker run`.
-> Please skip ahead to [Making a container network](#making-a-container-network) to continue with the exercise.
+> The following section is only an example to show how you can run multi-container setups with
+> `docker run`. Please skip ahead to [Making a container network](#making-a-container-network) to
+> continue with the exercise.

To start a mysql container, issue the following command

@@ -32,15 +35,19 @@

Let's recap what this command does:

- `--name mysql-container` gives the new container a name for better referencing
- `--rm` tells docker to remove the container after it is stopped
- `-p 3306:3306` maps host port 3306 to the container's port 3306.
-- `-e MYSQL_ROOT_PASSWORD=wordpress` The `-e` option is used to inject environment variables into the container.
+- `-e MYSQL_ROOT_PASSWORD=wordpress` The `-e` option is used to inject environment variables into
+  the container.
- `-e MYSQL_DATABASE=wordpressdb` denotes the name of the database created when mysql starts up.
- `-d` runs the container detached, in the background.
-- `mysql` tells what container to actually run, here mysql:latest (:latest is the default if nothing else is specified)
+- `mysql` tells which image to actually run, here `mysql:latest` (`:latest` is the default tag if
+  nothing else is specified)

-MySQL is now exposing it's port 3306 on the host, and everybody can attach to it **so do not do this in production without proper security settings**.
+MySQL now exposes its port 3306 on the host, and anyone can connect to it, **so do not do this
+in production without proper security settings**.

-We need to connect our wordpress container to the host's IP address.
-You can either use the external IP address of your server, or the DNS name if you are at a training, e.g. `workstation-..eficode.academy`.
+We need to connect our wordpress container to the host's IP address. You can either use the external
+IP address of your server, or the DNS name if you are at a training, e.g.
+`workstation-..eficode.academy`.

After you have noted down the IP, spin up the wordpress container with the host IP as a variable:

@@ -48,26 +55,31 @@ After you have noted down the IP, spin up the wordpress container with the host
docker run --name wordpress-container --rm -e WORDPRESS_DB_HOST=172.17.0.1 -e WORDPRESS_DB_PASSWORD=wordpress -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_NAME=wordpressdb -p 8080:80 -d wordpress:5.7.2-apache
```

-You can now browse to the IP:8080 and have your very own wordpress server running. Since port 3306 is the default MySQL port, wordpress will try to connect on that port by itself.
+You can now browse to the IP:8080 and have your very own wordpress server running. Since port 3306
+is the default MySQL port, wordpress will try to connect on that port by itself.
- Stop the two containers again `docker stop wordpress-container mysql-container`

## Making a container network

-Even though we in two commands made the setup running in the above scenario, there are some problems here we can fix:
+Even though we got the setup running with just two commands in the scenario above, there are some
+problems we can fix:

- We need to know the host IP to get them to talk to each other.
- And we have exposed the database to the outside world.

-In order to connect multiple docker containers without binding them to the hosts network interface we need to create a docker network.
+In order to connect multiple docker containers without binding them to the host's network interface,
+we need to create a docker network.

-The `docker network` command securely connect and provide a channel to transfer information from one container to another.
+The `docker network` command securely connects containers and provides a channel to transfer
+information from one container to another.

First off, make a new network for the containers to communicate through:

`docker network create if_wordpress`

-Docker will return the `networkID` for the newly created network. You can reference it by name as well as the ID.
+Docker will return the `networkID` for the newly created network. You can reference it by name as
+well as the ID.

Now you need to connect the two containers to the network, by adding the `--network` option:

@@ -79,9 +91,14 @@ docker run --name wordpress-container --rm --network if_wordpress -e WORDPRESS_D
fd4fd096c064094d7758cefce41d0f1124e78b86623160466973007cf0af8556
```

-Notice the `WORDPRESS_DB_HOST` env variable. When you make a container join a network, it automatically gets the container name as DNS name as well, making it super easy to make containers discover each other. The DNS name is only visible inside the Docker network, which is also true for the `IP` address (usually an address starting with `172`) that is assigned to them.
If you do not expose a port for a container, the container is only visible to Docker.
+Notice the `WORDPRESS_DB_HOST` env variable. When you make a container join a network, it
+automatically gets the container name as DNS name as well, making it super easy to make containers
+discover each other. The DNS name is only visible inside the Docker network, which is also true for
+the `IP` address (usually an address starting with `172`) that is assigned to them. If you do not
+expose a port for a container, the container is only visible to Docker.

-You have now deployed both containers into the network. Take a deeper look into the container network by issuing: `docker network inspect if_wordpress`.
+You have now deployed both containers into the network. Take a deeper look into the container
+network by issuing: `docker network inspect if_wordpress`.

```bash
docker network inspect if_wordpress
@@ -133,7 +150,9 @@ Expected output:
]
```

-As, we have linked both the container now wordpress container can be accessed from browser using the address [http://localhost:8080](http://localhost:8080) and setup of wordpress can be done easily. MySQL is not accessible from the outside so security is much better than before.
+Now that we have linked both containers, the wordpress container can be accessed from the browser
+at [http://localhost:8080](http://localhost:8080), and wordpress can be set up easily.
+MySQL is not accessible from the outside, so security is much better than before.

### Cleanup

@@ -145,21 +164,29 @@
docker stop wordpress-container mysql-container
```

## Using Docker compose

-If you have started working with Docker and are building container images for your application services, you most likely have noticed that after a while you may end up writing long `docker container run` commands.
-These commands, while very intuitive, can become cumbersome to write, especially if you are developing a multi-container applications and spinning up containers quickly.
+If you have started working with Docker and are building container images for your application
+services, you most likely have noticed that after a while you may end up writing long `docker
+container run` commands. These commands, while very intuitive, can become cumbersome to write,
+especially if you are developing a multi-container application and spinning up containers quickly.

-[Docker Compose](https://docs.docker.com/compose/install/) is a “_tool for defining and running your multi-container Docker applications_”.
+[Docker Compose](https://docs.docker.com/compose/install/) is a “_tool for defining and running your
+multi-container Docker applications_”.

-Your applications can be defined in a YAML file where all the options that you used in `docker run` are defined.
+Your applications can be defined in a YAML file where all the options that you used in `docker run`
+are defined.

-Compose also allows you to manage your application as a single entity rather than dealing with individual containers.
+Compose also allows you to manage your application as a single entity rather than dealing with
+individual containers.

-This file defines all of the containers and settings you need to launch your set of clusters. The properties map onto how you use the docker run commands, however, are now stored in source control and shared along with your code.
+This file defines all of the containers and settings you need to launch your application stack. The
+properties map onto the options you use in the docker run commands, but are now stored in source
+control and shared along with your code.

## Terminology

- `docker-compose.yml` The YAML file where all the configuration of your docker containers goes.
-- `docker compose` The cli tool that enables you to define and run multi-container applications with Docker
+- `docker compose` The CLI tool that enables you to define and run multi-container applications with
+  Docker

- `up` : creates and starts the services stated in the compose file
- `down` : stops and removes containers, networks, images, and volumes

@@ -170,10 +197,11 @@ This file defines all of the containers and settings you need to launch your set

- `start` : starts the services
- `stop` : stops the services

-The docker cli is used when managing individual containers on a docker engine.
-It is the client command line to access the docker daemon api.
+The docker CLI is used when managing individual containers on a docker engine. It is the client
+command line to access the docker daemon API.

-The docker compose cli together with the yaml files can be used to manage a multi-container application.
+The docker compose CLI together with the yaml files can be used to manage a multi-container
+application.

## Compose-erizing your wordpress

@@ -213,15 +241,22 @@ This is the template we are building our compose file upon so let's drill this o

- `wordpress-container` is the section where we define our wordpress container
- `mysql-container` is the ditto of MySQL.

-> For more information on docker compose yaml files, head over to the [documentation](https://docs.docker.com/compose/overview/).
+> For more information on docker compose yaml files, head over to the
+> [documentation](https://docs.docker.com/compose/overview/).

-The `services` part is equivalent to our `docker container run` command. Likewise there is a `network` and `volumes` section for those as well corresponding to `docker network create` and `docker volume create`.
+The `services` part is equivalent to our `docker container run` command. Likewise, there are
+`networks` and `volumes` sections, corresponding to `docker network create` and
+`docker volume create`.
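As a reminder of the overall shape, a minimal compose skeleton showing all three top-level sections could look like the sketch below (the service, network, and volume names here are placeholders, not part of the exercise):

```yaml
services:
  some-service:
    image: nginx:alpine      # any image; equivalent to the image in `docker run <image>`
    networks:
      - some-network         # like `docker run --network some-network`
    volumes:
      - some-volume:/data    # like `docker run -v some-volume:/data`

networks:
  some-network:              # like `docker network create`

volumes:
  some-volume:               # like `docker volume create`
```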
-Let's look the mysql-container part together, making you able to create the other container yourself. Look at the original command we made to spin up the container:
+Let's look at the mysql-container part together, so that you are able to create the other container
+yourself. Look at the original command we made to spin up the container:

-`docker container run --name mysql-container --rm -p 3306:3306 -e MYSQL_ROOT_PASSWORD=wordpress -e MYSQL_DATABASE=wordpressdb -d mysql:5.7.36`
+```bash
+docker container run --name mysql-container --rm -p 3306:3306 -e MYSQL_ROOT_PASSWORD=wordpress -e MYSQL_DATABASE=wordpressdb -d mysql:5.7.36
+```

-The command gives out following information: a `name`, a `port` mapping, two `environment` variables and the `image` we want to run.
+The command gives the following information: a `name`, a `port` mapping, two `environment` variables
+and the `image` we want to run.

Now look at the docker compose example again:

@@ -230,7 +265,10 @@ Now look at the docker compose example again:

- `ports` defines a list of port mappings from host to container
- `environment` describes the `-e` variables made before in a yaml list

-Instead of keeping sensitive information in the `docker-compose.yml` file, you can also use an [`.env`](https://docs.docker.com/compose/env-file/) file to keep all the environment variables. That way, it's easier to make a development environment and a production environment with the same `docker-compose.yml`.
+Instead of keeping sensitive information in the `docker-compose.yml` file, you can also use an
+[`.env`](https://docs.docker.com/compose/env-file/) file to keep all the environment variables. That
+way, it's easier to make a development environment and a production environment with the same
+`docker-compose.yml`.

```conf
MYSQL_ROOT_PASSWORD=wordpress
@@ -245,9 +283,11 @@ Creating multicontainer_mysql-container_1 ...
Creating multicontainer_mysql-container_1 ...
done
```

-Looking at the output you can see that it made a `docker network` named `multicontainer_default` as well as the MySQL container named `multicontainer_mysql-container_1`.
+Looking at the output you can see that it made a `docker network` named `multicontainer_default` as
+well as the MySQL container named `multicontainer_mysql-container_1`.

-Issue a `docker container ls` as well as `docker network ls` to see that both the container and network are listed.
+Issue a `docker container ls` as well as `docker network ls` to see that both the container and
+network are listed.

To shut down the container and network, issue a `docker compose down`

@@ -255,7 +295,8 @@ To shut down the container and network, issue a `docker compose down`

### Creating the wordpress container

-You now have all the pieces of information to make the Wordpress container. We've copied the run command from before if you can't remember it by heart:
+You now have all the pieces of information to make the Wordpress container. We've copied the run
+command from before if you can't remember it by heart:

```bash
docker run --name wordpress-container --rm --network if_wordpress -e WORDPRESS_DB_HOST=mysql-container -e WORDPRESS_DB_PASSWORD=wordpress -e WORDPRESS_DB_USER=root -e WORDPRESS_DB_NAME=wordpressdb -p 8080:80 -d wordpress:5.7.2-apache
@@ -267,6 +308,7 @@ You must

- map the pieces of information from the docker container run command to the yaml format.
- remove the MySQL port mapping to close it off from outside reach.

-When you made that, run `docker compose up -d` and access your wordpress site from [http://IP:8080](http://IP:8080)
+When you have done that, run `docker compose up -d` and access your wordpress site at
+[http://IP:8080](http://IP:8080)

> **Hint**: If you are stuck, look at the file docker-compose_final.yaml in the same folder.
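If you want to check your reasoning (spoiler warning), a compose file matching the two run commands above could be sketched roughly like this; the values are taken directly from those commands, and the authoritative solution is the docker-compose_final.yaml mentioned in the hint:

```yaml
services:
  wordpress-container:
    image: wordpress:5.7.2-apache
    ports:
      - 8080:80
    environment:
      - WORDPRESS_DB_HOST=mysql-container
      - WORDPRESS_DB_USER=root
      - WORDPRESS_DB_PASSWORD=wordpress
      - WORDPRESS_DB_NAME=wordpressdb

  mysql-container:
    image: mysql:5.7.36
    # no ports section: the database is only reachable inside the compose network
    environment:
      - MYSQL_ROOT_PASSWORD=wordpress
      - MYSQL_DATABASE=wordpressdb
```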
diff --git a/labs/README.md b/labs/README.md
index ed90943..de3b9fd 100644
--- a/labs/README.md
+++ b/labs/README.md
@@ -1,8 +1,9 @@
# Docker labs

-In this folder are a lot of exercises. They are numbered in the way we think makes sence to introduce the concepts.
+In this folder are a lot of exercises. They are numbered in the way we think makes sense to
+introduce the concepts.

-Below is a cheatsheet for many of the commands we will touch uppon in the lab.
+Below is a cheatsheet for many of the commands we will touch upon in the lab.

```bash
docker build -t friendlyname . # Create image using this directory's Dockerfile
diff --git a/labs/advanced/containers-on-default-bridge.md b/labs/advanced/containers-on-default-bridge.md
index 9376f98..6b662e2 100644
--- a/labs/advanced/containers-on-default-bridge.md
+++ b/labs/advanced/containers-on-default-bridge.md
@@ -1,18 +1,24 @@
# Exploration - Containers on default bridge network

-In this exercise, you will explore various characterstics of the default network bridge, and the containers running in that network.
+In this exercise, you will explore various characteristics of the default network bridge, and the
+containers running in that network.
+
+## Default bridge network investigations

-## You should investigate:
* What docker networks exist on the host network? (`docker network ls`)
* See what network interfaces exist on the host network? Do you see docker0?
* Inspect the docker0 bridge.
-* What does docker0 network interface on the host look like? (IP/network address, NetMask, MAC address, etc?)
+* What does the docker0 network interface on the host look like? (IP/network address, NetMask, MAC
+  address, etc?)

## Run a couple of docker containers on the bridge network

-Note: `praqma/network-multitool` is a small image with lots of network troubleshooting tools installed in it. It also runs a nginx web server on port 80 by default. Most of our examples will use this image in the *web* role.
Just think of it as nginx image, with some extra bells and whistles.
+Note: `praqma/network-multitool` is a small image with lots of network troubleshooting tools
+installed in it. It also runs an nginx web server on port 80 by default. Most of our examples will
+use this image in the *web* role. Just think of it as an nginx image, with some extra bells and
+whistles.

-```
+```bash
$ docker run --name web \
-d praqma/network-multitool

@@ -21,18 +27,20 @@ $ docker run --name db \
-d mysql
```

-### You should investigate:
+## You should investigate
+
* What are the IP addresses of the containers?
-* What DNS resolver do these containers use? 
+* What DNS resolver do these containers use?
* Can the containers on the default bridge network access each other by their names? IP addresses?
* Do you see any **veth** interfaces on the host?
-* Compare MAC addresses of veth interfaces on the host and the **eth0** interfaces on each container.
+* Compare MAC addresses of veth interfaces on the host and the **eth0** interfaces on each
+  container.
* Inspect containers for their IP addresses, MAC addresses, etc.
* Explore what processes are listening on various network interfaces on the host.
* Explore what processes are listening on various network interfaces on the container.

+## Useful commands

-# Useful commands:
* docker ps
* docker ls
* docker network ls
diff --git a/labs/advanced/containers-on-docker-compose-network.md b/labs/advanced/containers-on-docker-compose-network.md
index 5c434e9..87b61d5 100644
--- a/labs/advanced/containers-on-docker-compose-network.md
+++ b/labs/advanced/containers-on-docker-compose-network.md
@@ -1,11 +1,18 @@
# Exploration - Containers on docker compose bridge network

-In this exercise, you will explore various characterstics of the docker compose network bridge, and the containers running in that network.
-Spoiler: Network created by `docker compose` are same as user-defined networks.
These networks and the containers running in these networks exihibit similar behavior.
+In this exercise, you will explore various characteristics of the docker compose network bridge, and
+the containers running in that network.

-## Create/run a multi-container `docker-compose.yml` application:
+Spoiler: Networks created by `docker compose` are the same as user-defined networks. These networks
+and the containers running in them exhibit similar behavior.
+
+## Create/run a multi-container `docker-compose.yml` application
+
+```bash
+vi docker-compose.yml
```
-$ vi docker-compose.yml
+
+```yaml
services:
  apache:
    image: httpd:alpine
@@ -13,34 +20,42 @@ services:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=secret
+```

-$ docker compose up -d
+```bash
+docker compose up -d

-$ docker compose ps
+docker compose ps
```

-Note: `praqma/network-multitool` is a small image with lots of network troubleshooting tools installed in it. It also runs a nginx web server on port 80 by default. Most of our examples will use this image in the *web* role. Just think of it as nginx image, with some extra bells and whistles. You can replace Apache `httpd:alpine` image with `praqma/network-multitool` image in the above example, if you want to.
+Note: `praqma/network-multitool` is a small image with lots of network troubleshooting tools
+installed in it. It also runs an nginx web server on port 80 by default. Most of our examples will
+use this image in the *web* role. Just think of it as an nginx image, with some extra bells and
+whistles. You can replace the Apache `httpd:alpine` image with the `praqma/network-multitool` image
+in the above example, if you want to.

+## You should investigate

-## You should investigate:
* What docker networks exist on the host network? (`docker network ls`)
* See what network interfaces exist on the host network? Do you see docker0?
* Do you see any other/additional network interface? (e.g. br-123zbc456xyz)
* Inspect the docker0 bridge.
* Inspect the newly created docker compose bridge (e.g. br-123zbc456xyz).
-* What does the docker compose network (bridge) interface on the host look like? (IP/network address, NetMask, MAC address, etc?)
+* What does the docker compose network (bridge) interface on the host look like? (IP/network
+  address, NetMask, MAC address, etc?)
* What are the IP addresses of the containers?
* What DNS resolver do these containers use?
-* Can the containers on the docker compose bridge network access each other by their names? IP addresses?
+* Can the containers on the docker compose bridge network access each other by their names? IP
+  addresses?
* Do you see any **veth** interfaces on the host?
-* Compare MAC addresses of veth interfaces on the host and the **eth0** interfaces on each container.
+* Compare MAC addresses of veth interfaces on the host and the **eth0** interfaces on each
+  container.
* Inspect containers for their IP addresses, MAC addresses, etc.
* Explore what processes are listening on various network interfaces on the host.
* Explore what processes are listening on various network interfaces on the container.

+## Useful commands

-
-# Useful commands:
* docker ps
* docker ls
* docker network ls
diff --git a/labs/advanced/containers-on-host-network.md b/labs/advanced/containers-on-host-network.md
index f5b66d1..005a875 100644
--- a/labs/advanced/containers-on-host-network.md
+++ b/labs/advanced/containers-on-host-network.md
@@ -1,35 +1,36 @@
# Exploration - Containers on host network

-In this exercise, you will explore various characterstics of the **host network**, and the containers running in that network.
-## Run a container in "host network" mode:
-```
+In this exercise, you will explore various characteristics of the **host network**, and the
+containers running in that network.
+
+## Run a container in "host network" mode
+
+```bash
docker run --name nginx --network host -d nginx
```

Optionally, run another container in the "host network" mode.
-```
+
+```bash
docker run --name mysql --network host -e MYSQL_ROOT_PASSWORD=secret -d mysql
```

-
+## You should investigate

-## You should investigate:
* What docker networks exist on the host? (`docker network ls`)
* See what network interfaces exist on the host? Do you see docker0?
* Do you see any new network interface(s)? (e.g. br-123zbc456xyz)
* Do you see any new **veth** interfaces on the host?
* Do you see an **eth0** interface inside the container?
* What is the IP address of the container(s)?
-* What DNS resolver do this container(s) use? 
+* What DNS resolver does the container(s) use?
* Can a container on this network reach/access any other containers by their names? IP addresses?
* Inspect container(s) for their IP addresses, MAC addresses, etc.
* Explore what processes are listening on various network interfaces on the host.
* Explore what processes are listening on various network interfaces on the container.

+### Useful commands

-
-# Useful commands:
* docker ps
* docker ls
* docker network ls
@@ -38,6 +39,6 @@ docker run --name mysql --network host -e MYSQL_ROOT_PASSWORD=secret -d mysql
* ip addr show
* netstat
* ps aux
-* iptables -L 
+* iptables -L
* iptables -t nat -L
* iptables-save (This will not *save* any rules. It will just list them on the screen.)
diff --git a/labs/advanced/containers-on-none-network.md b/labs/advanced/containers-on-none-network.md
index cf7dc0f..d00e326 100644
--- a/labs/advanced/containers-on-none-network.md
+++ b/labs/advanced/containers-on-none-network.md
@@ -1,35 +1,36 @@
# Exploration - Containers on none network

-In this exercise, you will explore various characterstics of the **none** network, and the containers running in that network.
-## Run a container in "none network" mode:
-```
+In this exercise, you will explore various characteristics of the **none** network, and the
+containers running in that network.
+
+## Run a container in "none network" mode
+
+```bash
docker run --name multitool --network none -p 80:80 -d praqma/network-multitool
```

Optionally, run another container in the "none network" mode.
-```
+
+```bash
docker run --name mysql --network none -e MYSQL_ROOT_PASSWORD=secret -d mysql
```

-
+## You should investigate

-## You should investigate:
* What docker networks exist on the host? (`docker network ls`)
* See what network interfaces exist on the host network? Do you see docker0?
* Do you see any new network interface(s)? (e.g. br-123zbc456xyz)
* Do you see any new **veth** interfaces on the host?
* Do you see an **eth0** interface inside the container?
* What is the IP address of the container(s)?
-* What DNS resolver do this container(s) use? 
+* What DNS resolver does the container(s) use?
* Can a container on this network reach/access any other containers by their names? IP addresses?
* Inspect container(s) for their IP addresses, MAC addresses, etc.
* Explore what processes are listening on various network interfaces on the host.
* Explore what processes are listening on various network interfaces on the container.

+### Useful commands

-
-# Useful commands:
* docker ps
* docker ls
* docker network ls
@@ -38,6 +39,6 @@ docker run --name mysql --network none -e MYSQL_ROOT_PASSWORD=secret -d mysql
* ip addr show
* netstat
* ps aux
-* iptables -L 
+* iptables -L
* iptables -t nat -L
* iptables-save (This will not *save* any rules. It will just list them on the screen.)
diff --git a/labs/advanced/containers-on-user-defined-bridge.md b/labs/advanced/containers-on-user-defined-bridge.md
index 35bb9b1..66a1167 100644
--- a/labs/advanced/containers-on-user-defined-bridge.md
+++ b/labs/advanced/containers-on-user-defined-bridge.md
@@ -1,61 +1,69 @@
# Exploration - Containers on user-defined bridge network

-In this exercise, you will explore various characterstics of the user-defined network bridge, and the containers running in that network.
-## Create a user-defined docker network bridge:
-```
+In this exercise, you will explore various characteristics of the user-defined network bridge, and
+the containers running in that network.
+
+## Create a user-defined docker network bridge
+
+```bash
docker network create mynet
```

-## You should investigate:
+## User-defined bridge network - You should investigate
+
* What docker networks exist on the host network? (`docker network ls`)
* See what network interfaces exist on the host network? Do you see docker0?
* Do you see any other/additional network interface? (e.g. br-123zbc456xyz)
* Inspect docker0 bridge.
* Inspect the newly created bridge `mynet`.
-* What does the new network (bridge) interface on the host look like? (IP/network address, NetMask, MAC address, etc?)
+* What does the new network (bridge) interface on the host look like? (IP/network address, NetMask,
+  MAC address, etc?)

## Run a couple of docker containers on the user-defined bridge network

-Note: `praqma/network-multitool` is a small image with lots of network troubleshooting tools installed in it. It also runs a nginx web server on port 80 by default. Most of our examples will use this image in the *web* role. Just think of it as nginx image, with some extra bells and whistles.
+Note: `praqma/network-multitool` is a small image with lots of network troubleshooting tools
+installed in it. It also runs an nginx web server on port 80 by default. Most of our examples will
+use this image in the *web* role. Just think of it as an nginx image, with some extra bells and
+whistles.

-Note: You may want to stop/remove container you created in the previous exercises before running the new ones shown below.
+Note: You may want to stop/remove the containers you created in the previous exercises before
+running the new ones shown below.
-```
-$ docker run --name=web --network=mynet \
--d praqma/network-multitool
+```bash
+docker run --name=web --network=mynet -d praqma/network-multitool

-$ docker run --name=db --network=mynet \
--e MYSQL_ROOT_PASSWORD=secret \
--d mysql
+docker run --name=db --network=mynet -e MYSQL_ROOT_PASSWORD=secret -d mysql
```

-### You should investigate:
+### You should investigate
+
* What are the IP addresses of the containers?
* What DNS resolver do these containers use?
-* Can the containers on the user-defined bridge network access each other by their names? IP addresses?
+* Can the containers on the user-defined bridge network access each other by their names? IP
+  addresses?
* Do you see any **veth** interfaces on the host?
-* Compare MAC addresses of veth interfaces on the host and the **eth0** interfaces on each container.
+* Compare MAC addresses of veth interfaces on the host and the **eth0** interfaces on each
+  container.
* Inspect containers for their IP addresses, MAC addresses, etc.
* Explore what processes are listening on various network interfaces on the host.
* Explore what processes are listening on various network interfaces on the container.

-## Explore IPTables magic happening inside the containers:
-* Run another container on the user-defined network, with special privileges, and use that to explore the IPTables rules setup on the container.
+## Explore IPTables magic happening inside the containers
+
+* Run another container on the user-defined network, with special privileges, and use that to
+  explore the iptables rules set up in the container.
* Optionally, explore the iptables rules on the host as well.
-```
-$ docker run --cap-add=NET_ADMIN --cap-add=NET_RAW \
- --name multitool --network mynet \
- -it praqma/network-multitool /bin/bash
+```bash
+docker run --cap-add=NET_ADMIN --cap-add=NET_RAW --name multitool --network mynet -it praqma/network-multitool /bin/bash

-$ iptables-save
+iptables-save

-$ netstat -ntlp
+netstat -ntlp
```

+### Useful commands

-
-# Useful commands:
* docker ps
* docker ls
* docker network ls
diff --git a/labs/advanced/joining-network-and-process-namespace-of-existing-containers.md b/labs/advanced/joining-network-and-process-namespace-of-existing-containers.md
index 08aca8d..b5dd69c 100644
--- a/labs/advanced/joining-network-and-process-namespace-of-existing-containers.md
+++ b/labs/advanced/joining-network-and-process-namespace-of-existing-containers.md
@@ -1,76 +1,89 @@
# Learn how to join containers to network and process namespace of other containers

-In this exercise, you will learn how to trouble-shoot and extract certain information from containers which are very limited in nature. Nginx and MySQL are good examples. They do not have much tools installed in them, and executing simple commands such as `ping`, `curl` or even `ps` takes a lot of effort.
-## Run a mysql container on default network:
-```
-$ docker run --name mysql -e MYSQL_ROOT_PASSWORD=secret -d mysql
+In this exercise, you will learn how to troubleshoot and extract certain information from containers
+which are very limited in nature. Nginx and MySQL are good examples. They do not have many tools
+installed in them, and executing simple commands such as `ping`, `curl` or even `ps` takes a lot of
+effort.
+
+## Run a mysql container on default network
+
+```bash
+docker run --name mysql -e MYSQL_ROOT_PASSWORD=secret -d mysql
```

-Note: This container runs on the default docker bridge network, so most of the things you already explored in previous exercises.
+Note: This container runs on the default docker bridge network, so most of what you already
+explored in previous exercises applies here as well.

-### Things you should explore:
-* Can you `exec` into the container, and run these commands? `ifconfig`, `ip`, `ping`, `curl`, `ps`, `netstat`, `top`
-* If not, what are your options? Would you choose to install these tools from various OS pacakges inside the container?
+### Things you should explore with a container on default network

+* Can you `exec` into the container, and run these commands? `ifconfig`, `ip`, `ping`, `curl`, `ps`,
+  `netstat`, `top`
+* If not, what are your options? Would you choose to install these tools from various OS packages
+  inside the container?

+## Run a **multitool** container and make it join the network of mysql container

-## Run a **multitool** container and make it join the network of mysql container:
-```
+```bash
$ docker run --name multitool --network container:mysql \
--rm -it praqma/network-multitool /bin/bash
```

+### Things you should explore with the multitool container

-### Things you should explore:
* Exec into the multitool container and see what is the IP address of this container?
* Exec into the multitool container and see what is the IP address of mysql container?
* Is there a corresponding veth interface on the host computer for this container?
* Is there a corresponding veth interface on the host computer for mysql container?
* Explore IP address, MAC, DNS settings for this container.
* Explore IP address, MAC, DNS settings for mysql container.
-* What are similarities and differences between the network settings of the mysql container and the multitool container?
+* What are similarities and differences between the network settings of the mysql container and the
+  multitool container?
* Inspect docker network and mysql container - from the host.
* Can you see what processes are running in multitool container?
* Can you see what processes are running in mysql container?
-## Run a multitool (busybox) container and make it join the process namespace of the mysql container:
-```
-$ docker run --name busybox --pid container:mysql \
-   --rm -it busybox /bin/sh
+## Run a multitool (busybox) container and make it join the process namespace of the mysql container
+
+```bash
+docker run --name busybox --pid container:mysql --rm -it busybox /bin/sh
```
-### Things you should explore:
+### Things you should explore
+
* Exec into the multitool container and see what is the IP address of this container?
* Exec into the multitool container and see what is the IP address of mysql container?
* Is there a corresponding veth interface on the host computer for this container?
* Is there a corresponding veth interface on the host computer for mysql container?
* Explore IP address, MAC, and DNS settings for this container.
* Explore IP address, MAC, and DNS settings for mysql container.
-* What are similarities and differences between the network settings of the mysql container and the multitool container?
+* What are similarities and differences between the network settings of the mysql container and the
+  multitool container?
* Inspect docker network and mysql container - from the host.
* Can you see what processes are running in multitool container?
* Can you see what processes are running in mysql container?

-## Run a multitool (busybox) container and make it join the network **and** process namespace of the mysql container:
-```
-$ docker run --name busybox \
-   --network container:mysql \
-   --pid container:mysql \
-   --rm -it busybox /bin/sh
+## Run a multitool (busybox) container and make it join the network **and** process namespace of the mysql container
+
+```bash
+docker run --name busybox --network container:mysql --pid container:mysql --rm -it busybox /bin/sh
```
-### Things you should explore:
+### Things you should explore in this scenario
+
* Exec into the multitool container and see what is the IP address of this container?
* Exec into the multitool container and see what is the IP address of mysql container?
* Is there a corresponding veth interface on the host computer for this container?
* Is there a corresponding veth interface on the host computer for mysql container?
* Explore IP address, MAC, and DNS settings for this container.
* Explore IP address, MAC, and DNS settings for mysql container.
-* What are similarities and differences between the network settings of the mysql container and the multitool container?
+* What are similarities and differences between the network settings of the mysql container and the
+  multitool container?
* Inspect docker network and mysql container - from the host.
* Can you see what processes are running in multitool container?
* Can you see what processes are running in mysql container?

-# Useful commands:
+### Useful commands
+
* docker ps
* docker container ls
* docker network ls
@@ -79,7 +92,6 @@ $ docker run --name busybox \
* ip addr show
* netstat
* ps aux
-* iptables -L
+* iptables -L
* iptables -t nat -L
* iptables-save (This will not *save* any rules. It will just list them on the screen.)
-
diff --git a/labs/advanced/systemd/README.md b/labs/advanced/systemd/README.md
index e8d816e..d9facd6 100644
--- a/labs/advanced/systemd/README.md
+++ b/labs/advanced/systemd/README.md
@@ -1,33 +1,32 @@
-# Build and run instructions for systemd based containers:
+# Build and run instructions for systemd based containers

-## Build:
-```
+## Build
+
+```bash
docker build -t local/centos7:mailserver .
```

-## Run:
-```
-docker run -p 80:80 -p 25:25 \
-  -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
-  --tmpfs /run \
-  -d local/centos7:mailserver
+## Run
+
+```bash
+docker run -p 80:80 -p 25:25 -v /sys/fs/cgroup:/sys/fs/cgroup:ro --tmpfs /run -d local/centos7:mailserver
```

-# Build and run using docker compose:
-There is a docker-compose.yml, which can be used to build and run this container image. Of-course a Dockerfile is a pre-requisite.
+## Build and run using docker compose

-```
+There is a docker-compose.yml, which can be used to build and run this container image. Of course, a
+Dockerfile is a prerequisite.
+
+```bash
docker compose build
docker compose up -d
```

-## Verify:
+## Verify

-```
+```bash
$ docker ps
CONTAINER ID        IMAGE                COMMAND            CREATED             STATUS              PORTS               NAMES
604a437cc577        systemd_mailserver   "/usr/sbin/init"   2 minutes ago       Up 2 minutes        0.0.0.0:25->25/tcp, 0.0.0.0:80->80/tcp, 0.0.0.0:110->110/tcp, 0.0.0.0:143->143/tcp, 0.0.0.0:443->443/tcp, 0.0.0.0:993->993/tcp, 0.0.0.0:995->995/tcp   systemd_mailserver_1
```
-
-
diff --git a/labs/advanced/traefik/example1/README.md b/labs/advanced/traefik/example1/README.md
index fc80602..0b5471c 100644
--- a/labs/advanced/traefik/example1/README.md
+++ b/labs/advanced/traefik/example1/README.md
@@ -1,11 +1,16 @@
-Make sure you set up some sort of DNS / name resolution from your computer to the IP address where reverse proxy is running.
+# Traefik example 1

-For example, your VM in the cloud has IP address `1.2.3.4` , and you have your reverse proxy running in that VM as a docker container. In that case, on your local / work computer, setup the following:
+Make sure you set up some sort of DNS / name resolution from your computer to the IP address where
+the reverse proxy is running.

-```
-$ cat /etc/hosts
+For example, your VM in the cloud has IP address `1.2.3.4`, and you have your reverse proxy
+running in that VM as a docker container. In that case, on your local / work computer, set up the following:
+
+```bash
+cat /etc/hosts
127.0.0.1   localhost
1.2.3.4     example.com www.example.com
```
-**Note:** For LetsEncrypt to be able to give you certs, your public IP must be reachable through the DNS name that you are using in your labels/traefik rules, .e.g `example.com`.
+**Note:** For LetsEncrypt to be able to give you certs, your public IP must be reachable through
+the DNS name that you are using in your labels/traefik rules, e.g. `example.com`.
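To experiment with such entries without touching the real `/etc/hosts`, you can stage them in a scratch file first; a small sketch using the illustrative IP and names from above:

```bash
# Write the candidate entries to a scratch file instead of editing /etc/hosts directly
scratch=$(mktemp)
printf '127.0.0.1 localhost\n1.2.3.4 example.com www.example.com\n' > "$scratch"

# Check that the file maps the names we care about
grep example.com "$scratch"
```

For a single one-off request, curl can bypass the hosts file entirely with `curl --resolve example.com:80:1.2.3.4 http://example.com/`.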
diff --git a/labs/advanced/traefik/example2/README.md b/labs/advanced/traefik/example2/README.md
index 50f196f..2be11d2 100644
--- a/labs/advanced/traefik/example2/README.md
+++ b/labs/advanced/traefik/example2/README.md
@@ -1,67 +1,85 @@
-# Traefik proxy running as independent docker compose app:
-In this example, Traefik runs as an independent docker compose application. For other applications requiring it's "proxy" services, they need to be on the same network as of the Traefik proxy. The best way is to make a docker bridge network ourselves, and connect all containers on that network.
+# Traefik proxy running as independent docker compose app

-## Create a docker bridge network - **services-network**:
-```
+In this example, Traefik runs as an independent docker compose application. Other applications
+requiring its "proxy" services need to be on the same network as the Traefik proxy. The
+best way is to make a docker bridge network ourselves, and connect all containers on that network.
+
+## Create a docker bridge network - **services-network**

+
+```bash
docker network create services-network
```

-## Start Traefik - frond-end / proxy:
-Start the Traefik proxy service by briging up it's docker compose stack - which is compose of just one container!
-```
+## Start Traefik - front-end / proxy
+
+Start the Traefik proxy service by bringing up its docker compose stack, which consists of just
+one container!
+
+```bash
cd proxy
docker compose up -d
cd ..
```

-Note: Please go through the actual `docker-compose.yml` and `traefik.toml` files under the `proxy` directory.
+Note: Please go through the actual `docker-compose.yml` and `traefik.toml` files under the `proxy` directory.
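As a rough, illustrative sketch only (the files in the repository's `proxy` directory are the reference, and the mounts and names below are assumptions), such a proxy stack typically boils down to one Traefik service attached to the shared network with the Docker socket mounted:

```yml
# Hypothetical sketch of a Traefik 1.7 proxy stack; see the repo's own
# docker-compose.yml and traefik.toml for the authoritative configuration.
services:
  traefik:
    image: traefik:1.7
    ports:
      - "80:80"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # lets Traefik discover containers
      - ./traefik.toml:/traefik.toml                # static Traefik configuration
    networks:
      - services-network

networks:
  services-network:
    external: true   # the network created with `docker network create services-network`
```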
Verify:

-```
-$ docker ps
+```bash
+docker ps
CONTAINER ID        IMAGE         COMMAND      CREATED             STATUS              PORTS                NAMES
6665c2a9afb2        traefik:1.7   "/traefik"   7 minutes ago       Up 7 minutes        0.0.0.0:80->80/tcp   proxy_traefik_1
```

-## Start the web server - back-end:
-```
+## Start the web server - back-end
+
+```bash
cd web
docker compose up -d
cd ..
```

-Note: Please go through the actual `docker-compose.yml` file under the `web` directory.
+Note: Please go through the actual `docker-compose.yml` file under the `web` directory.

Verify:
-```
+
+```bash
$ docker ps
CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS                NAMES
65b42f52017a        praqma/network-multitool   "/docker-entrypoint.…"   2 minutes ago       Up 2 minutes        80/tcp, 443/tcp      web_multitool_1
6665c2a9afb2        traefik:1.7                "/traefik"               7 minutes ago       Up 7 minutes        0.0.0.0:80->80/tcp   proxy_traefik_1
```

-## Ensure DNS/name-resolution is in place:
+## Ensure DNS/name-resolution is in place
+
For this example, you can set up the following in your `/etc/hosts` file:
-```
+
+```bash
$ cat /etc/hosts
-127.0.0.1   localhost localhost.localdomain
-. . . 
-127.0.0.1   example.com www.example.com
-```
+127.0.0.1   localhost localhost.localdomain
+127.0.0.1   example.com www.example.com

-## Check with curl:
```
-$ curl example.com
+
+## Check with curl
+
+```bash
+curl example.com
+
Praqma Network MultiTool (with NGINX) - 65b42f52017a - 172.20.0.3/16

$ curl www.example.com
+
Praqma Network MultiTool (with NGINX) - 65b42f52017a - 172.20.0.3/16
```

-Notice that `curl localhost` will not work, because the proxy is listening on localhost, on port 80, expecting only the DNS names/URLs, which are configured as it's front-ends. It will ignore all other urls , and will show a `404` .
+Notice that `curl localhost` will not work, because the proxy is listening on localhost, on port 80,
+expecting only the DNS names/URLs that are configured as its front-ends. It will ignore all other
+URLs and show a `404`.
+
+```bash
+curl localhost

-```
-$ curl localhost
404 page not found
```
diff --git a/labs/docker-compose/00-getting-started.md b/labs/docker-compose/00-getting-started.md
index b733aac..7ed86b8 100644
--- a/labs/docker-compose/00-getting-started.md
+++ b/labs/docker-compose/00-getting-started.md
@@ -4,4 +4,6 @@ In this section you will install Docker and run your container using compose.

## Installing Docker

-Depending on what OS you are running, installation is different, but head over to the [docker compose installation instructions](https://docs.docker.com/compose/install/) website and follow the instructions there.
+Depending on what OS you are running, installation is different, but head over to the
+[docker compose installation instructions](https://docs.docker.com/compose/install/) website
+and follow the instructions there.
diff --git a/labs/docker-compose/01-hello-world.md b/labs/docker-compose/01-hello-world.md
index ac0184f..1c73f15 100644
--- a/labs/docker-compose/01-hello-world.md
+++ b/labs/docker-compose/01-hello-world.md
@@ -50,14 +50,17 @@
hello-world_1 | 
docker-compose_hello-world_1 exited with code 0
```

-As we can see. The hello-world container (tagged with latest) was started using the the compose file and your docker compose installation is working!
+As we can see, the hello-world container (tagged with latest) was started using the compose file
+and your docker compose installation is working!

Compose file explained:

-`services:`: defines the configuration of the container
-`hello-world:`: is now the name of the service
-`image: hello-world:latest`: defines the container and its version that we're using for `hello-world` service.
+- `services:`: defines the configuration of the container
+- `hello-world:`: is now the name of the service
+- `image: hello-world:latest`: defines the container and its version that we're using
+  for the `hello-world` service.
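Assembled from exactly those keys, the whole compose file described by the bullets is just:

```yml
services:
  hello-world:
    image: hello-world:latest
```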
-Now re-create using hello-world service to use ubuntu container and also add `command:` for the service with value `echo "Hello from me"`
+Now re-create the hello-world service to use the ubuntu image, and also add `command:` for the
+service with value `echo "Hello from me"`

_*Q: So what did this do?*_
diff --git a/labs/docker-compose/02-port-forward.md b/labs/docker-compose/02-port-forward.md
index 0e79b0e..9bd15a7 100644
--- a/labs/docker-compose/02-port-forward.md
+++ b/labs/docker-compose/02-port-forward.md
@@ -1,8 +1,10 @@
# A basic webserver

-Running arbitrary Linux commands inside a Docker container with docker compose is quite overhead, but let's do something more useful.
+Running arbitrary Linux commands inside a Docker container with docker compose adds quite a bit of
+overhead, but let's do something more useful.

Create docker-compose.yml file with following content:
+
```yml
services:
  nginx:
@@ -10,47 +12,52 @@ services:
```

Now run the command `docker compose pull` and you should see:
+
```bash
Pulling nginx ... done
```
+
This Docker image uses the [Nginx](http://nginx.org/) webserver to serve a static HTML website.

-Configure you're nginx service to expose port 80 as 8080.
-With:
-```yml
+Configure your nginx service to expose port 80 as 8080. With:
+
+```yaml
ports:
  - "8080:80"
```

Where first one is HOST and second one is CONTAINER.
-
-Open a web browser and go to port 8080 on your host. The exact address will depend on how you're running Docker today:
+Open a web browser and go to port 8080 on your host. The exact address will depend on how you're
+running Docker today:

* **Native Linux** - [http://localhost:8080](http://localhost:8080)
-* **Cloud server** - Make sure firewall rules are configured to allow traffic on port 8080. Open browser and use the hostname (or IP) for your server.
-Ex: [http://ec2-54-69-126-146.us-west-2.compute.amazonaws.com:8080](http://ec2-54-69-126-146.us-west-2.compute.amazonaws.com:8080)
-
-Alternatively open a new shell and issue `curl localhost:8080`
+* **Cloud server** - Make sure firewall rules are configured to allow traffic on port 8080. Open a
+  browser and use the hostname (or IP) for your server. Ex:
+  [http://ec2-54-69-126-146.us-west-2.compute.amazonaws.com:8080](http://ec2-54-69-126-146.us-west-2.compute.amazonaws.com:8080)
+  Alternatively, open a new shell and issue `curl localhost:8080`

If you see a webpage saying "Welcome to nginx!" then you're done!

-If you look at the console output from docker, you see nginx producing a line of text for each time a browser hits the webpage:
+If you look at the console output from docker, you see nginx producing a line of text each time
+a browser hits the webpage:

```bash
$ docker compose up
Creating docker-compose_nginx_1 ... done
Attaching to docker-compose_nginx_1
-nginx_1  | 172.18.0.1 - - [07/Jan/2020:22:28:48 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.58.0" "-"
+nginx_1  | 172.18.0.1 - - [07/Jan/2020:22:28:48 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.58.0" "-"
```

Press **control + c** in your terminal window to stop your services.

## Working with your docker container

-When running a webserver like nginx, it's pretty useful that you do not have to have an open session into the server at all times to run it.
-We need to make it run in the background, freeing up our terminal for other things.
-Docker enables this with the `-d` parameter for run.
-`docker compose up -d`
+When running a webserver like nginx, it's pretty useful that you do not have to have an open session
+into the server at all times to run it. We need to make it run in the background, freeing up our
+terminal for other things. Docker enables this with the `-d` parameter for run.
+`docker compose up -d`

```bash
$ docker compose up -d
diff --git a/labs/docker-compose/03-volumes.md b/labs/docker-compose/03-volumes.md
index af0e606..cfb5fc9 100644
--- a/labs/docker-compose/03-volumes.md
+++ b/labs/docker-compose/03-volumes.md
@@ -2,7 +2,9 @@

In some cases you're going to need data outside of docker containers and to do that you can use volumes

-**A docker Volume** is where you can use a named or unnamed volume to store the external data. You would normally use a volume driver for this, but you can get a host mounted path using the default local volume driver.
+**A docker Volume** is where you can use a named or unnamed volume to store the external data. You
+would normally use a volume driver for this, but you can get a host mounted path using the default
+local volume driver.

So let's look at the [Nginx](https://hub.docker.com/_/nginx/) service from the port-forwarding exercise.
The server itself is of little use, if it cannot access our web content on the host.
@@ -28,8 +30,6 @@ Try to do the following:

_*Q: What do you see now in the browser?*_

-Now modify the index.html again and *do* not restart the container.
+Now modify the index.html again and _do_ not restart the container.

_*Q: What do you see after you refresh the browser page?*_
-
-
diff --git a/labs/docker-compose/04-build-image.md b/labs/docker-compose/04-build-image.md
index 06e7745..99bd7ea 100644
--- a/labs/docker-compose/04-build-image.md
+++ b/labs/docker-compose/04-build-image.md
@@ -1,8 +1,10 @@
# Building docker image using docker-compose

-In this excercise we're going to setup [bottlepy](https://bottlepy.org/docs/dev/) application running from our compose service.
+In this exercise we're going to set up a [bottlepy](https://bottlepy.org/docs/dev/) application
+running from our compose service.
Check out the bottle folder; you should see 3 files:
+
- app.py
- Dockerfile
- requirements.txt
@@ -13,9 +15,11 @@ requirements.txt has the needed requirements for python to be installed before r

Dockerfile is partially empty and now it's your job to fill it in in order to make the application run.

-Application can be started with command `python3 /path/to/app.py` and requirements can be installed with `pip3 install -r /path/to/requirements.txt`.
+The application can be started with `python3 /path/to/app.py` and requirements can be installed
+with `pip3 install -r /path/to/requirements.txt`.

Your `docker-compose.yml` can look something like this:
+
```yml
services:
  bottle:
@@ -25,8 +29,9 @@ services:

Where `build:` refers to the folder containing the Dockerfile we should build.

-To make docker compose to build you can use command `docker compose up --build` in order to execute the `build:` configuration.
+To make docker compose build, you can use the command `docker compose up --build` to execute
+the `build:` configuration.

Try to build your application container and open a browser to the correct port.

-_*Q: What do you see on :/hello/docker-is-awesome ?*_
\ No newline at end of file
+_*Q: What do you see on <domain>:<port>/hello/docker-is-awesome ?*_
diff --git a/labs/exercise-template.md b/labs/exercise-template.md
index 267dd06..f84258b 100644
--- a/labs/exercise-template.md
+++ b/labs/exercise-template.md
@@ -6,14 +6,15 @@

## Introduction

-Here you will provide the bare minimum of information people need to solve the exercise.
+Here, you will provide the bare minimum of information people need to solve the exercise.

## Subsections

You can have several subsections if needed.
-:bulb: If an explanaition becomes too long, the more detailed parts can be encapsulated in a drop down section
+:bulb: If an explanation becomes too long, the more detailed parts can be encapsulated in a
+drop-down section
## Exercise @@ -22,14 +23,15 @@ You can have several subsections if needed. - In bullets, what are you going to solve as a student -### Step by step instructions +### Step-by-step instructions
 More Details

-**take the same bullet names as above and put them in to illustrate how far the student have gone**
+> **NOTE**: Take **the same bullet names as above** and put them in to illustrate how far the
+> student has gone. Then **delete this line**.

-- all actions that you believe the student should do, should be in a bullet
+- All actions that you believe the student should do should be in a bullet point

> :bulb: Help can be illustrated with bulbs in order to make it easy to distinguish.
diff --git a/labs/extra-exercises/image-security.md b/labs/extra-exercises/image-security.md
index 23ba9a6..5c10b38 100644
--- a/labs/extra-exercises/image-security.md
+++ b/labs/extra-exercises/image-security.md
@@ -1,50 +1,61 @@
# Docker image security

-
## 1. Create a new image

Pull the latest alpine docker image as a base:

-    $ docker pull alpine:latest
+```bash
+docker pull alpine:latest
+```

You can find out the Repository Digest for the image with this command:

-    $ docker image ls --digests alpine
-    REPOSITORY TAG DIGEST IMAGE ID CREATED SIZE
-    alpine latest sha256:621c2f39f8133acb8e64023a94dbdf0d5ca81896102b9e57c0dc184cadaf5528 196d12cf6ab1 3 weeks ago 4.41MB
+```bash
+docker image ls --digests alpine
+REPOSITORY   TAG      DIGEST                                                                    IMAGE ID       CREATED       SIZE
+alpine       latest   sha256:621c2f39f8133acb8e64023a94dbdf0d5ca81896102b9e57c0dc184cadaf5528   196d12cf6ab1   3 weeks ago   4.41MB
+```

Create a simple Dockerfile image to build upon the fixed digest:

-    mkdir myalpine
-    cd myalpine
-    repodigest=$(docker image ls --digests alpine --format "{{.Digest}}")
-    cat < Dockerfile
-    FROM alpine@${repodigest}
-    MAINTAINER some maintainer

-    EXPOSE 443
-    EOF
+```bash
+mkdir myalpine
+cd myalpine
+repodigest=$(docker image ls --digests alpine --format "{{.Digest}}")
+cat << EOF > Dockerfile
+FROM alpine@${repodigest}
+MAINTAINER some maintainer

+EXPOSE 443
+EOF
+```

Perform a build on this image:

-    docker build -t myalpine:1.0 .
+```bash
+docker build -t myalpine:1.0 .
+``` ### Checking the digests At this stage you will note that the image does not have a digest: - docker image ls --digests myalpine - REPOSITORY TAG DIGEST IMAGE ID CREATED SIZE - myalpine 1.0 85aed7cdf75d 38 minutes ago 4.41MB +```bash +docker image ls --digests myalpine +REPOSITORY TAG DIGEST IMAGE ID CREATED SIZE +myalpine 1.0 85aed7cdf75d 38 minutes ago 4.41MB +``` -This is because a digest is a sha of the registry manifest and the layers. This does not exist until the image is pushed to a registry. +This is because a digest is a sha of the registry manifest and the layers. This does not exist +until the image is pushed to a registry. -Tag your image with a pushable-name (i.e. starting with your dockerhub username) and push it to docker hub. You should be able to then see the image has a digest: +Tag your image with a pushable-name (i.e. starting with your dockerhub username) and push it to +docker hub. You should be able to then see the image has a digest: - docker image ls --digests meekrosoft/myalpine - REPOSITORY TAG DIGEST IMAGE ID CREATED SIZE - meekrosoft/myalpine 1.0 sha256:b609be091e06834208b9d1d39e7e0fbfd60b550ea5d43a5476241d6218a8ee96 85aed7cdf75d 41 minutes ago 4.41MB +```bash +docker image ls --digests meekrosoft/myalpine +REPOSITORY TAG DIGEST IMAGE ID CREATED SIZE +meekrosoft/myalpine 1.0 sha256:b609be091e06834208b9d1d39e7e0fbfd60b550ea5d43a5476241d6218a8ee96 85aed7cdf75d 41 minutes ago 4.41MB +``` Ask your neighbour to pull your image and check the digest. - -Fint! diff --git a/labs/extra-exercises/nginx/network.md b/labs/extra-exercises/nginx/network.md index 54ebc60..5830e72 100644 --- a/labs/extra-exercises/nginx/network.md +++ b/labs/extra-exercises/nginx/network.md @@ -2,16 +2,44 @@ When you spin up several containers, they all share the same internal network. -In this exercise, we all use this for creating a `nginx` server that serves content from two other containers. 
For simplicity, the same `nginx` container wil be used, but each serving a page with a different background color, so it's easier to see that it behaves correctly.
+In this exercise, we will use this to create an `nginx` server that serves content from two other
+containers.
+
+For simplicity, the same `nginx` container will be used, but each serves a page with a
+different background color, so it's easier to see that it behaves correctly.

## Creating the containers

-There's a `docker-compose.yml` in this folder that contains the necessary configuration to spin up three nginx containers, configured to be a master nginx container. Then do a `docker compose up -d`. When you open a browser window, go to http://localhost:9090/ You'll see the default page from a new `nginx` installation. The configuration file here makes two entry points for the two containers. In the configuration, the host names are used, instead of IP addresses, Docker assigns IP addresses to containers, but the values can vary between runs. If you inspect the docker containers, you'll see a section with `Networks`, where the alias is listed . When docker containers are created, a host name is assigned to them, which is by default the same as the container name. In the `nginx` configuration file, the host names are used without the ports defined in the above example; this is because docker has an internal network, and the `nginx` image by default exposes port 80.
+There's a `docker-compose.yml` in this folder that contains the necessary configuration to spin up
+three nginx containers, one of which is configured as the master nginx container.
+
+Then do a `docker compose up -d`.
+
+When you open a browser window, go to <http://localhost:9090/>. You'll see the default page from a
+new `nginx` installation.
+
+The configuration file here makes two entry points for the two containers.
+
+In the configuration, host names are used instead of IP addresses. Docker assigns IP addresses
+to containers, but the values can vary between runs.
+ +If you inspect the docker containers, you'll +see a section with `Networks`, where the alias is listed. + +When docker containers are created, a host +name is assigned to them, which is by default the same as the container name. + +In the `nginx` +configuration file, the host names are used without the ports defined in the above example; this is +because docker has an internal network, and the `nginx` image by default exposes port 80. ## Running the containers Start the containers - $ docker compose up -d +```bash +docker compose up -d +``` -The port numbers that are exposed were chosen a bit arbitrarily. You can remove the port configuration from container `one` and `two`, and see if the setup still works. +The port numbers that are exposed were chosen a bit arbitrarily. You can remove the port +configuration from container `one` and `two`, and see if the setup still works. diff --git a/labs/extra-exercises/secrets.md b/labs/extra-exercises/secrets.md index bcfbfbd..53b0ecb 100644 --- a/labs/extra-exercises/secrets.md +++ b/labs/extra-exercises/secrets.md @@ -1,95 +1,120 @@ # Docker Secrets -Applications often require access to access tokens, passwords, and other sensitive information. Shipping this configuration in images poses security challenges, not to mention that with containers, applications are now dynamic and portable across multiple environments, making this a poor fit. +Applications often require access to access tokens, passwords, and other sensitive information. +Shipping this configuration in images poses security challenges, not to mention that with +containers, applications are now dynamic and portable across multiple environments, making this a +poor fit. -Docker secrets provide a means of managing sensitive information required at runtime independently of the build and run process. +Docker secrets provide a means of managing sensitive information required at runtime independently +of the build and run process. 
> ## Store config in the environment
-> An app’s config is everything that is likely to vary between deploys (staging, production, developer environments, etc). This includes:
+>
+> An app’s configuration includes everything likely to vary between deployments (e.g., staging,
+> production, or development environments). This includes:
>
> * Resource handles to the database, Memcached, and other backing services
> * Credentials to external services such as Amazon S3 or Twitter
-> * Per-deploy values such as the canonical hostname for the deploy
+> * Per-deployment values such as the canonical hostname for the deployment
> [https://12factor.net/config]

## Starting with Docker Secrets

-To get started, we need to be running docker in swarm mode. Swarm is the distributed orchestration tool that originally shipped with docker, which now is being replaces with kubernetes.
+To get started, we need to be running docker in swarm mode. Swarm is the distributed orchestration
+tool that originally shipped with docker, which is now being replaced by Kubernetes.
The first step is to set up swarm mode on your docker host:

-    docker swarm init
+```bash
+docker swarm init
+```

You can see initially that there are no secrets being managed with this command:

-    $ docker secret ls
+```bash
+docker secret ls
ID NAME DRIVER CREATED UPDATED
-    $
+
+```

We can create a secret like this:

-    $ printf "docker1234" | docker secret create db_pwd -
-    w3yszkcy3ip08cgnfbiggq5e6
-    $ docker secret ls
-    ID NAME DRIVER CREATED UPDATED
-    w3yszkcy3ip08cgnfbiggq5e6 db_pwd 6 seconds ago 6 seconds ago
-    $
+```bash
+printf "docker1234" | docker secret create db_pwd -
+w3yszkcy3ip08cgnfbiggq5e6
+docker secret ls
+ID                          NAME     DRIVER   CREATED         UPDATED
+w3yszkcy3ip08cgnfbiggq5e6   db_pwd            6 seconds ago   6 seconds ago
+```

Now we want to grant access to our secret:

-    $ docker service create --name redis --secret db_pwd redis:alpine
+```bash
+docker service create --name redis --secret db_pwd redis:alpine
+```

And we can verify that the secret is available like this:

-    $ docker ps --filter name=redis -q
-    5cb1c2348a59
-    $ docker container exec $(docker ps --filter name=redis -q) ls -l /run/secrets
-    $ docker container exec $(docker ps --filter name=redis -q) cat /run/secrets/db_pwd
+```bash
+docker ps --filter name=redis -q
+5cb1c2348a59
+docker container exec $(docker ps --filter name=redis -q) ls -l /run/secrets
+docker container exec $(docker ps --filter name=redis -q) cat /run/secrets/db_pwd
+```

Verify that the secret does not survive a `docker commit`:

-    $ docker commit $(docker ps --filter name=redis -q) committed_redis
-    $ docker run --rm -it committed_redis cat /run/secrets/my_secret_data
+```bash
+docker commit $(docker ps --filter name=redis -q) committed_redis
+docker run --rm -it committed_redis cat /run/secrets/db_pwd
+```

-Try removing the secret. The removal fails because the redis service is running and has access to the secret.
+Try removing the secret. The removal fails because the redis service is running and has access to
+the secret.
+```bash +docker secret ls - $ docker secret ls - ID NAME DRIVER CREATED UPDATED - w3yszkcy3ip08cgnfbiggq5e6 db_pwd 6 seconds ago 6 seconds ago - $ docker secret rm db_pwd +ID NAME DRIVER CREATED UPDATED +w3yszkcy3ip08cgnfbiggq5e6 db_pwd 6 seconds ago 6 seconds ago - Error response from daemon: rpc error: code = 3 desc = secret - 'db_pwd' is in use by the following service: redis +docker secret rm db_pwd +Error response from daemon: rpc error: code = 3 desc = secret +'db_pwd' is in use by the following service: redis +``` Remove access to the secret from the running redis service by updating the service. - $ docker service update --secret-rm db_pwd redis +```bash +docker service update --secret-rm db_pwd redis -Cleanup the service, secret and leave swarm mode: +``` - $ docker service rm redis - $ docker secret rm db_pwd - $ docker swarm leave --force +Cleanup the service, secret and leave swarm mode: +```bash +docker service rm redis +docker secret rm db_pwd +docker swarm leave --force +``` ## Cheatsheet -| Command | Usage | -| --- | --- | -| docker secret create | Create a secret from a file or STDIN as content | -| docker secret inspect |Display detailed information on one or more secrets | -| docker secret ls | List secrets | -| docker secret rm | Remove one or more secrets | - +| Command | Usage | +| --------------------- | --------------------------------------------------- | +| docker secret create | Create a secret from a file or STDIN as content | +| docker secret inspect | Display detailed information on one or more secrets | +| docker secret ls | List secrets | +| docker secret rm | Remove one or more secrets | ## Slides (later) -High level overview - why secrets, diagram of how +High-level overview: Why secrets are important and a diagram explaining their usage. 
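Inside a service, a secret is simply a file mounted under `/run/secrets/`. A minimal sketch of how an entrypoint script might pick it up, using a scratch directory as a stand-in so it runs outside a service too:

```bash
# Stand-in for /run/secrets; inside a real service the path is /run/secrets/db_pwd
secrets_dir=$(mktemp -d)
printf 'docker1234' > "$secrets_dir/db_pwd"

# Typical entrypoint pattern: read the secret into a variable at startup,
# instead of baking it into the image or passing it as an environment variable
DB_PWD=$(cat "$secrets_dir/db_pwd")
echo "read a ${#DB_PWD}-character password"
```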
-## Additonal exercises +## Additional exercises * [Docker lab on secrets](https://github.com/docker/labs/tree/master/security/secrets) diff --git a/labs/image-best-practices.md b/labs/image-best-practices.md index a2d7d72..5b11bb6 100644 --- a/labs/image-best-practices.md +++ b/labs/image-best-practices.md @@ -1,18 +1,28 @@ -# Best practises +# Best practices ## dockerignore -Before the docker CLI sends the context to the docker daemon, it looks for a file named `.dockerignore` in the root directory of the context. If this file exists, the CLI modifies the context to exclude files and directories that match patterns in it. This helps to avoid unnecessarily sending large or sensitive files and directories to the daemon and potentially adding them to images using ADD or COPY. +Before the docker CLI sends the context to the docker daemon, it looks for a file named +`.dockerignore` in the root directory of the context. If this file exists, the CLI modifies the +context to exclude files and directories that match patterns in it. This helps to avoid +unnecessarily sending large or sensitive files and directories to the daemon and potentially adding +them to images using ADD or COPY. -> For more info on dockerignore, head over to the [documentation](https://docs.docker.com/engine/reference/builder/#dockerignore-file). +> For more info on dockerignore, head over to the +> [documentation](https://docs.docker.com/engine/reference/builder/#dockerignore-file). ## Lint your Dockerfile -[Hadolint](https://hadolint.github.io/hadolint/) highlights dubious constraints in your `Dockerfile`. +[Hadolint](https://hadolint.github.io/hadolint/) highlights dubious constraints in your +`Dockerfile`. -The linter uses the principles described in [Docker's documentation on best practices](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/) as the basis for the suggestions. 
+The linter uses the principles described in
+[Docker's documentation on best practices](https://docs.docker.com/develop/develop-images/dockerfile_best-practices/)
+as the basis for the suggestions.

## Consider security when building images

-[Snyk](https://snyk.io/blog/10-docker-image-security-best-practices/) wrote a blog with 10 things that you should consider when building images. They consider adding a label for the security policy of
-the image, using a linter (as described above), and [signing docker images](https://docs.docker.com/notary/getting_started/).
+[Snyk](https://snyk.io/blog/10-docker-image-security-best-practices/) wrote a blog with 10 things
+that you should consider when building images. They suggest adding a label for the security policy
+of the image, using a linter (as described above), and
+[signing docker images](https://docs.docker.com/notary/getting_started/).
diff --git a/labs/sharing-images.md b/labs/sharing-images.md
index 320dc0e..a1f2f3c 100644
--- a/labs/sharing-images.md
+++ b/labs/sharing-images.md
@@ -1,36 +1,42 @@
# Sharing images

-Before we can take our dockerized Flask app to another computer, we need to push it up to Docker Hub so that publicly avaliable.
+Before we can take our dockerized Flask app to another computer, we need to push it up to Docker Hub
+so that it is publicly available.

-Docker Hub is like GitHub for Docker images. It’s the main place people store their Docker images in the cloud.
+Docker Hub is like GitHub for Docker images. It’s the main place people store their Docker images in
+the cloud.

-> :bulb: In order for you to do this exercise, you need to have an account. If not, create an account on the Docker Hub - it's free:
-> [https://hub.docker.com/signup](https://hub.docker.com/signup)
+> :bulb: In order for you to do this exercise, you need to have an account.
If not, create an account
+> on the Docker Hub - it's free: [https://hub.docker.com/signup](https://hub.docker.com/signup)

Then, login to that account by running the `docker login` command on your laptop.

-> :bulb: If you do not want your password to be stored on the workstation, create an access token instead: https://hub.docker.com/settings/security
+> :bulb: If you do not want your password to be stored on the workstation, create an access token
+> instead: <https://hub.docker.com/settings/security>

-We're almost ready to push our Flask image up to the Docker Hub. We just need to rename it to our namespace (which is the same as our docker username) first.
+We're almost ready to push our Flask image up to the Docker Hub. We just need to rename it to our
+namespace (which is the same as our docker username) first.

-Using the `docker tag` command, tag the image you created in the previous section to your namespace. For example, I would run:
+Using the `docker tag` command, tag the image you created in the previous section to your namespace.
+For example, I would run:

-```
+```bash
docker tag myfirstapp <username>/myfirstapp:latest
```

-`myfirstapp` is the tag I used in my `docker build` commands in the previous section, and `<username>/myfirstapp:latest` is the full name of the new Docker image I want to push to the Hub.
-The `:latest` is a versioning scheme you can append to.
+`myfirstapp` is the tag I used in my `docker build` commands in the previous section, and
+`<username>/myfirstapp:latest` is the full name of the new Docker image I want to push to
+the Hub. The `:latest` is a version tag you can append.
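To make the naming rule explicit: a Docker Hub reference has the shape `<namespace>/<repository>:<tag>`, where the namespace is your Docker Hub username and the tag defaults to `latest`. A tiny illustrative Python helper — the function and the example username `gordon` are ours, not part of any Docker tooling:

```python
def full_image_name(namespace, repository, tag="latest"):
    """Assemble a Docker Hub image reference: namespace/repository:tag.

    When no tag is given we fall back to "latest", mirroring the default
    that `docker tag` and `docker push` assume.
    """
    return f"{namespace}/{repository}:{tag}"


# Hypothetical username "gordon":
print(full_image_name("gordon", "myfirstapp"))         # gordon/myfirstapp:latest
print(full_image_name("gordon", "myfirstapp", "1.0"))  # gordon/myfirstapp:1.0
```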
All that's left to do is push up your image:

-```
+```bash
docker push <username>/myfirstapp
```

Expected output:

-```
+```bash
The push refers to a repository [docker.io/<username>/myfirstapp]
6daf7f1140cb: Pushed
7f74a217d86b: Pushed
@@ -46,11 +52,12 @@ latest: digest: sha256:e7016870c297b3c49996ee00972d8abe7f20b4cbe45089dc914193fa8
```

Go to your profile page on the Docker Hub and you should see your new repository listed:
-[https://hub.docker.com/repos/u/](https://hub.docker.com/repos/u/)
+[https://hub.docker.com/repos/u/<username>](https://hub.docker.com/repos/u/)

**Congrats!** You just made your first Docker image and shared it with the world!

Try to pull down the images that your fellows have pushed to see that it's really up there.

-For more info on the Docker Hub, and the cli integration,
-head over to [https://docs.docker.com/docker-hub/](https://docs.docker.com/docker-hub/) and read the guides there.
+For more info on the Docker Hub and its CLI integration, head over to
+[https://docs.docker.com/docker-hub/](https://docs.docker.com/docker-hub/) and read the guides
+there.
diff --git a/labs/windows-docker/win1-windows-on-linux.md b/labs/windows-docker/win1-windows-on-linux.md
index 8369f1d..69c2d75 100644
--- a/labs/windows-docker/win1-windows-on-linux.md
+++ b/labs/windows-docker/win1-windows-on-linux.md
@@ -1,43 +1,49 @@
# Dotnet core in Docker

-Before starting with .NET in docker, it is important to know the following:
-- The ASP framework is supported, but is done differently.
-- .NET core runs natively in linux docker (we start with this)
-- Microsoft is doing a lot of development on Docker for Windows, and are constantly improving the entire ecosystem. Things that did not work last week might work now.
+Before starting with .NET in docker, it is important to know the following:

-And since there is always people wondering if it is production ready - .NET core has been production capable since 2.0 according to microsoft, and Kubernetes has support for Windows nodes.
+- The ASP framework is supported, but is done differently.
+- .NET core runs natively in linux docker (we start with this)
+- Microsoft is doing a lot of development on Docker for Windows, and is constantly improving
+  the entire ecosystem. Things that did not work last week might work now.

-Anyway let's get started !

-```
+And for those wondering if it is production ready: .NET core has been production capable since
+2.0 according to Microsoft, and Kubernetes has support for Windows nodes.

+Anyway, let's get started!
+
+```bash
docker container run -it -p 5000:5000 microsoft/dotnet
```

-We expose port 5000 preemptively, since that is what our app will run on.
+We expose port 5000 preemptively, since that is what our app will run on.

-Inside the container, make a new directory (dotnet has issues running in root directory) and then make a new project:
+Inside the container, make a new directory (dotnet has issues running in the root directory) and
+then make a new project:

-```
+```bash
mkdir myapp
cd myapp && dotnet new razor
```

-The project should automatically restore nuget packages, but in the unlikely case it did not you can run :
+The project should automatically restore nuget packages, but in the unlikely case it did not, you
+can run:

-```
+```bash
dotnet restore
```

-.NET has a thing with containers, where it needs to expose an environmental variable telling the environment where kestrel (.NET webserver) is allowed to host solutions:
+.NET has a quirk in containers: it needs an environment variable telling kestrel (the .NET
+webserver) where it is allowed to host solutions:

-```
+```bash
export ASPNETCORE_URLS=http://*:5000
```

-And then just run the app:
+And then just run the app:

-```
+```bash
dotnet run
```

-Go to localhost:5000 on your machine, you should have a fresh web app running (razor pages).
\ No newline at end of file
+Go to localhost:5000 on your machine; you should have a fresh web app running (razor pages).
diff --git a/labs/windows-docker/win2-windows-on-windows.md b/labs/windows-docker/win2-windows-on-windows.md
index e9aac3e..e73d5da 100644
--- a/labs/windows-docker/win2-windows-on-windows.md
+++ b/labs/windows-docker/win2-windows-on-windows.md
@@ -1,36 +1,43 @@
# ASP framework in containers

-Before starting, it is important to be aware of something when working with containers and ASP framework.
+Before starting, it is important to be aware of something when working with containers and the ASP
+framework.

-Microsoft is pushing for a specific workflow with ASP framework and Docker, and making the best practice workflow (ie everything as code) is out of the scope of this workshop.
+Microsoft is pushing for a specific workflow with the ASP framework and Docker, and making the best
+practice workflow (i.e. everything as code) work is out of the scope of this workshop.

-The workflow proposed by Microsoft, is to run publish (ie build) from Visual Studio, and then package that output into a container.
+The workflow proposed by Microsoft is to run publish (i.e. build) from Visual Studio, and then
+package that output into a container.

-To make this correctly, would require us to get our hands real dirty with MSBuild and MSDeploy in powershell - but instead we will run various ready made containers to show off Windows containers as these workshop's focus is on docker.
+Doing this correctly would require us to get our hands real dirty with MSBuild and MSDeploy in
+powershell - but instead we will run various ready-made containers to show off Windows containers,
+as this workshop's focus is on docker.

To start off, remote desktop to the windows machine given and open powershell.

Run the familiar hello-world example:

-```
+```bash
docker run hello-world
```

-The interesting thing here, is that command is being executed in powershell. On windows kernel.
It is NOT the same hello-world we saw previously.
+The interesting thing here is that the command is being executed in powershell, on the Windows
+kernel. It is NOT the same hello-world we saw previously.

Let's ramp things up a bit:

-```
+```bash
docker run -it microsoft/nanoserver powershell
```

-If you want leave out the "powershell" in the end, it will automatically execute cmd which messes a bit with the powershell of your VM.
+If you leave out the "powershell" at the end, it will automatically execute cmd, which messes a
+bit with the powershell of your VM.

Exit the container by typing `exit` to exit the container.

If you run `docker image ls`, you'll note that hello-world is built on nanoserver by looking at the size:

-```
+```bash
REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE
microsoft/dotnet-framework-samples         latest              e5cc04acc880        13 hours ago        12.4GB
microsoft/mssql-server-windows-developer   latest              9e08a14c562e        3 days ago          11.6GB
@@ -39,52 +46,60 @@ microsoft/windowsservercore                latest              1fbef5019583
microsoft/nanoserver                       latest              edc711fca788        3 weeks ago         1.1GB
```

-It is a bit bigger than the linux equivalent.. but it does the same thing, and loads an entire windows OS while we are at it.
+It is a bit bigger than the linux equivalent... but it does the same thing, and loads an entire
+Windows OS while we are at it.

-The base image normally run in windows is microsoft/windowsservercore - and is what you should base your windows applications on.
+The base image normally used on Windows is microsoft/windowsservercore, and it is what you should
+base your Windows applications on.

-```
+```bash
docker run -it microsoft/windowsservercore powershell
```

-The biggest challenge working with Windows containers in my experience, has been adapting things that are natively provided to run in a container.
+The biggest challenge working with Windows containers, in my experience, has been adapting things
+that are natively provided to run in a container.
-Examples include how to build and deploy a project that normally Visual Studio and IIS took care of. This means the real tradeoff to using Windows containers is learning how Microsoft works under the hood. The gain is that a lot of the common problems with Windows go away. +Examples include how to build and deploy a project that normally Visual Studio and IIS took care of. +This means the real tradeoff to using Windows containers is learning how Microsoft works under the +hood. The gain is that a lot of the common problems with Windows go away. > remember to `exit` your container before moving to the next command- Containers will allow you to spin up things seamlessly, just like on Linux. For example: -``` +```bash docker run -d -p 1433:1433 -e sa_password=YOUR_PWD_HERE -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer ``` -Which spins up a development server for Microsoft SQL. Since the remote desktop machines are not set up with tools, you cannot access it - but a real development machine could just use SQL management tools. +Which spins up a development server for Microsoft SQL. Since the remote desktop machines are not set +up with tools, you cannot access it - but a real development machine could just use SQL management +tools. Or how about the entire azure powershell commandline interface? - -``` +```shell docker run -it azuresdk/azure-powershell powershell Get-Help Add-AzureRmAccount ``` -Microsoft did a pretty good job, making it feel and seem like native docker - because it is. They have an upstream docker fork, that they pull in for releasing docker on windows. - +Microsoft did a pretty good job, making it feel and seem like native docker - because it is. They +have an upstream docker fork, that they pull in for releasing docker on windows. 
Let's look at some examples to finish:
-```
+
+```bash
docker run microsoft/dotnet-framework-samples
```

or specify a specific ASP framework version:

-```
+```bash
docker run microsoft/dotnet-framework-samples:4.7.1
```
+
The docker file for the above example looks like this:

-```
+```dockerfile
FROM microsoft/dotnet-framework-build:4.7.1 AS builder

WORKDIR /app
@@ -98,8 +113,92 @@ COPY --from=builder /app/bin/Release .
ENTRYPOINT ["dotnetapp-4.7.1.exe"]
```

-The above dockerfile is also a way to get started with shipping apps natively on Windows, as you can publish your app and copy it in in a similar fashion. Just make use of the servercore image instead.
+The above dockerfile is also a way to get started with shipping apps natively on Windows, as you can
+publish your app and copy it in, in a similar fashion. Just make use of the servercore image instead
+of nanoserver.
-These are not supposed to be base images, but serve as an exellent demo that Windows is capable of running native containers.

-This concludes the ASP framework exercises.
+These are not supposed to be base images, but serve as an excellent demo that Windows is capable of +running native containers. diff --git a/labs/windows-docker/win3-common-commands.md b/labs/windows-docker/win3-common-commands.md index 97b19d9..afe7ad6 100644 --- a/labs/windows-docker/win3-common-commands.md +++ b/labs/windows-docker/win3-common-commands.md @@ -1,19 +1,20 @@ # The usual set of Docker commands on Windows -Now, time to show that Docker on Windows really just is Docker as you know it from Linux by now. +Now, time to show that Docker on Windows really just is Docker as you know it from Linux by now. ## Volumes Let's start with volume. Make a folder in your C drive, called data and run: -``` +```powershell mkdir \data docker container run -it -v C:\data:C:\data microsoft/nanoserver powershell ``` Make some files in the directory by your nomal explorer, and run the following in your container: -``` + +```powershell PS C:\> dir data @@ -27,10 +28,11 @@ Mode LastWriteTime Length Name -a---- 11/29/2017 12:54 PM 0 Windows.txt ``` -## Building +## Building Similarly, let's play with a Dockerfile like the one below: -``` + +```dockerfile FROM microsoft/windowsservercore ENV chocolateyUseWindowsCompression false @@ -38,33 +40,39 @@ ENV chocolateyUseWindowsCompression false RUN powershell -Command \ iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1')); ``` + `Build` that image, and run a container based on your image. -You now have chocolatey (a package manager for Windows powershell): -``` +You now have chocolatey (a package manager for Windows powershell): + +```powershell chocolatey -? -``` +``` + It will print out the avaliable commands on chocolatey. Now, exit the container again. 
## Port forwarding

-Let's look at an IIS Server:
-```
+Let's look at an IIS server:
+
+```powershell
docker run -d -p 80:80 --name iis microsoft/iis
```

-Which can be accessed via your windows machine on this ip:
-```
+Which can be accessed via your windows machine on this IP:
+
+```powershell
docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' iis
```
+
You see the welcome screen of the IIS, but that is not very usefull, so let's start making an application.

## Compiling and serving code

-This fresh installation does not have golang installed. We can just use a container to fix that.
-Create the following file called `webserver.go`:
+This fresh installation does not have golang installed. We can just use a container to fix that.
+Create the following file called `webserver.go`:

-```
+```go
package main

import (
@@ -82,20 +90,22 @@ func main() {
	panic(http.ListenAndServe(fmt.Sprintf(":%s", port), http.FileServer(http.Dir("."))))
}
```
-and run:
+and run:
+
+```powershell
docker run -it -v C:\data:C:\code -w C:\code golang:nanoserver powershell

go build webserver.go
```

-Voila. Webserver.exe has been put into the current directory.
+Voila. Webserver.exe has been put into the current directory.

Now we need to serve the application in a container.

-Make the following dockerfile in the same directory:
-```
+Make the following dockerfile in the same directory:
+
+```dockerfile
FROM microsoft/nanoserver

COPY webserver.exe /webserver.exe
@@ -104,17 +114,20 @@ EXPOSE 8080

CMD ["/webserver.exe"]
```
+
And build an image.

-IIS needs to be able to find it later, and it does not run on localhost. So we need to name our container:
-```
+IIS needs to be able to find it later, and it does not run on localhost. So we need to name our container:
+
+```powershell
docker run -d --rm --name=mysite -p 8080:8080 <image>
```

> the `--rm` part makes sure that if the container stops, it gets automatically deleted.
-You can access it by running:
-```
+You can access it by running:
+
+```powershell
start http://$(docker inspect -f '{{ .NetworkSettings.Networks.nat.IPAddress }}' mysite):8080
```

@@ -122,6 +135,8 @@ It will show you a folder view of the containers files, including the webserver.

## Summary

-This concludes the Windows bit of the workshop for now, but everything you worked with in regards to Docker works with Windows.
+This concludes the Windows bit of the workshop for now, but everything you worked with in regard
+to Docker works with Windows.

-However multi container builds require a newer version of Docker than the Virtual machines have, so this is something you'll have to try at home ;)
+However, multi-container builds require a newer version of Docker than the virtual machines have,
+so this is something you'll have to try at home ;)
diff --git a/trainer/README.md b/trainer/README.md
index c67e83a..e630a46 100644
--- a/trainer/README.md
+++ b/trainer/README.md
@@ -1,9 +1,14 @@
# trainer

-This is the how for additional examples and other trainer resources for the Praqma docker-slides and katas.
-# Examples:
+This is the home for additional examples and other trainer resources for the Praqma docker-slides
+and katas.
+
+## Examples
+
## Basic-compose
+
A simple compose example with only a single container.

## scratch
+
A simple example building a really empty image to demo that scratch is 0 bytes.
diff --git a/trainer/examples/basic-compose/README.md b/trainer/examples/basic-compose/README.md
index a6fabb9..218e7a1 100644
--- a/trainer/examples/basic-compose/README.md
+++ b/trainer/examples/basic-compose/README.md
@@ -2,13 +2,13 @@

## Build image

-```
+```bash
docker build -t basic-compose:latest .
```

## Run image manually

-```
+```bash
docker run --rm -p 8050:5000 -v $(pwd):/usr/src/app -d basic-compose:latest /usr/src/app/app.py
```
diff --git a/trainer/examples/building-flask-on-different-os/README.md b/trainer/examples/building-flask-on-different-os/README.md
index d6a0fa3..f05ac8b 100644
--- a/trainer/examples/building-flask-on-different-os/README.md
+++ b/trainer/examples/building-flask-on-different-os/README.md
@@ -1,6 +1,7 @@
# Building your flask application with three different from OS's

-Building our flask application with Ubuntu might seem like a bad idea, as Ubuntu is a general purpose OS.
+Building our flask application with Ubuntu might seem like a bad idea, as Ubuntu is a general
+purpose OS.

Looking into dockerhub we see that python already have an image we can use.

diff --git a/trainer/examples/ci-tools/README.md b/trainer/examples/ci-tools/README.md
index ad9f128..d735b30 100644
--- a/trainer/examples/ci-tools/README.md
+++ b/trainer/examples/ci-tools/README.md
@@ -1,17 +1,19 @@
-
# Docker CI tools
+
Note, you need to have run the "building flask on different OS" example first.

## Hadolint
-https://github.com/hadolint/hadolint
+
+<https://github.com/hadolint/hadolint>
+
+```bash
docker run --rm -i hadolint/hadolint < Dockerfile
docker run --rm -i hadolint/hadolint < Dockerfile-python
docker run --rm -i hadolint/hadolint < Dockerfile-python-alpine
```

## Trivy
+
See the different severities you get by the different OS.

```bash
@@ -19,4 +21,4 @@ docker run -v /var/run/docker.sock:/var/run/docker.sock -v aquacache:/root/.cach
docker run -v aquacache:/root/.cache aquasec/trivy image mypy:python
docker run -v aquacache:/root/.cache aquasec/trivy image mypy:python-alpine
-```
\ No newline at end of file
+```
diff --git a/trainer/examples/nextcloud/README.md b/trainer/examples/nextcloud/README.md
index 7057008..48cfb49 100644
--- a/trainer/examples/nextcloud/README.md
+++ b/trainer/examples/nextcloud/README.md
@@ -87,7 +87,7 @@ you might even still be logged in.

1. 
Run `docker compose up` to start the application. You can find the version under the `gear icon -> help`, -[link](http://localhost/settings/help). +[Version](http://localhost/settings/help). ## Downgrading Nextcloud @@ -115,6 +115,7 @@ app_1 | Can't start Nextcloud because the version of the data (12.0.13.2) is hi In the original example, the upgrade would fail without adding the `depends_on: db` to the docker compose file. -Nicolaj has been unable to reproduce this error on `Docker for Windows 19.03`, and thus the "example of using `depends_on`" is left out of the tutorial. +Nicolaj has been unable to reproduce this error on `Docker for Windows 19.03`, and thus the "example +of using `depends_on`" is left out of the tutorial. `depends_on` is however kept in the docker compose file, so as to not cause any unintentional errors. diff --git a/trainer/examples/scratch/README.md b/trainer/examples/scratch/README.md index 7e7ee31..6bb6c52 100644 --- a/trainer/examples/scratch/README.md +++ b/trainer/examples/scratch/README.md @@ -1,14 +1,15 @@ -# Empty image (useles, but really empty) +# Empty image (useless, but really empty) -In this example we show that Scratch is in fact empty. +In this example, we demonstrate that Scratch is, in fact, an empty image. -It is not possible to `docker pull scratch` as the image doesn't actualy exist. +It is not possible to `docker pull scratch` because the image does not actually exist as a +downloadable entity. -Neither is it possible ot just do a `FROM scratch` Dockerfile. +Neither is it possible to simply use `FROM scratch` in a Dockerfile without additional context. But we can do the following: -``` +```dockerfile FROM scratch COPY emptyfile . 
``` diff --git a/trainer/examples/security-run-as-root/README.md b/trainer/examples/security-run-as-root/README.md index e85892c..9f39f8c 100644 --- a/trainer/examples/security-run-as-root/README.md +++ b/trainer/examples/security-run-as-root/README.md @@ -124,4 +124,5 @@ Use great caution when mounting the filesystem into a container, and pay attention to the privileges that a container is running with. Greatly inspired by the -[Keynote: Running with Scissors - Liz Rice, Technology Evangelist, Aqua Security](https://www.youtube.com/watch?v=ltrV-Qmh3oY) at KubeCon 2018 +[Keynote: Running with Scissors - Liz Rice, Technology Evangelist, Aqua Security](https://www.youtube.com/watch?v=ltrV-Qmh3oY) +at KubeCon 2018 diff --git a/trainer/examples/volume-python/README.md b/trainer/examples/volume-python/README.md index d36818f..2e5a724 100644 --- a/trainer/examples/volume-python/README.md +++ b/trainer/examples/volume-python/README.md @@ -1,10 +1,14 @@ -The idea for this one is to make a small python application that prints "Hello world!" to your terminal, and volume mount in the code from the host. +# This one is incomplete + +The idea for this one is to make a small python application that prints "Hello world!" to your +terminal, and volume mount in the code from the host. It is a pre-example to building your own image. ## Exercise + You need a terminal in this folder. Run: -`docker run -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3 python main.py` \ No newline at end of file +`docker run -v "$PWD":/usr/src/myapp -w /usr/src/myapp python:3 python main.py` diff --git a/trainer/experiments/unicode/README.md b/trainer/experiments/unicode/README.md index 15a0b94..57d4827 100644 --- a/trainer/experiments/unicode/README.md +++ b/trainer/experiments/unicode/README.md @@ -1,8 +1,8 @@ # Testing unicode support -``` +```bash docker build -f Файлдокера -t unicode:latest . docker run -it unicode:latest ``` -Sadly, Docker doesn't support unicode in image names and tags. 
😢 \ No newline at end of file +Sadly, Docker doesn't support unicode in image names and tags. 😢