In this unit, we will get familiar with Containers and the podman CLI.
ℹ️ NOTE: You are not required to reference any additional resources for these exercises. This is informational only.
There is a dedicated VM we will use for the container exercises. From workstation.example.com, you should be able to ssh to node3.example.com as 'root' without any prompts for credentials.
ssh node3.example.com
Verify that you are on the right host for these exercises.
cheat-podman-checkhost.sh
You also need to load the local container registry with a few container images that are staged on core.example.com. There’s a convenient cheat script to take care of this.
cheat-podman-loadregistry.sh
ℹ️ NOTE: Native commands to load the local registry:
for IMAGENAME in rhel7.5 httpd-24-rhel7 ubi ; do
  IMAGEID=`wget -O - http://core.example.com/containers/${IMAGENAME}.tar | podman load | grep "Loaded image" | cut -f2 -d@`
  podman tag ${IMAGEID} core.example.com:5000/${IMAGENAME}
  podman push core.example.com:5000/${IMAGENAME}
done
Now you are ready to begin your exercises with containers.
Now have a look at the general container information.
podman info
host:
BuildahVersion: 1.6-dev
Conmon:
package: podman-1.0.0-2.git921f98f.module+el8+2785+ff8a053f.x86_64
path: /usr/libexec/podman/conmon
version: 'conmon version 1.14.0-dev, commit: be8255a19cda8a598d76dfa49e16e337769d4528-dirty'
Distribution:
distribution: '"rhel"'
version: "8.0"
MemFree: 3181297664
MemTotal: 3964538880
OCIRuntime:
package: runc-1.0.0-54.rc5.dev.git2abd837.module+el8+2769+577ad176.x86_64
path: /usr/bin/runc
version: 'runc version spec: 1.0.0'
SwapFree: 1073737728
SwapTotal: 1073737728
arch: amd64
cpus: 2
hostname: node3.example.com
kernel: 4.18.0-67.el8.x86_64
os: linux
rootless: false
uptime: 1h 12m 54.65s (Approximately 0.04 days)
insecure registries:
registries:
- core.example.com
registries:
registries:
- core.example.com
- quay.io
- docker.io
store:
ConfigFile: /etc/containers/storage.conf
ContainerStore:
number: 0
GraphDriverName: overlay
GraphOptions: null
GraphRoot: /var/lib/containers/storage
GraphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "true"
Supports d_type: "true"
ImageStore:
number: 0
RunRoot: /var/run/containers/storage
Now have a look at the container images stored locally.
podman images
Your results should have come back empty, and that's because we have not imported, loaded, or pulled any container images onto our platform.
Time to pull a container from our local repository.
podman pull core.example.com:5000/ubi
Have a look at the image list now.
podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
core.example.com:5000/ubi latest c096c0dc7247 2 weeks ago 214 MB
ℹ️ NOTE: If you are a subscriber to Red Hat Enterprise Linux, you can pull authentic Red Hat certified images directly from Red Hat's repository with: podman pull rhel7.5 --creds 'username:password'
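If your environment has access to Red Hat's registry, you can also authenticate once with 'podman login' instead of passing credentials on every pull. A minimal sketch, assuming the registry hostname registry.redhat.io and a valid subscription account:
podman login registry.redhat.io
podman pull registry.redhat.io/ubi8/ubi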
Pull a few more container images.
podman pull core.example.com:5000/rhel7.5
podman pull core.example.com:5000/httpd-24-rhel7
podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
core.example.com:5000/httpd-24-rhel7 latest 0f1cb8c3c29b 12 days ago 323 MB
core.example.com:5000/ubi latest c096c0dc7247 2 weeks ago 214 MB
core.example.com:5000/rhel7.5 latest 7b875638cfd8 7 months ago 211 MB
Container images can also be tagged with convenient (i.e., custom) names. This can make it more intuitive to understand what an image contains, especially after it has been customized.
podman tag core.example.com:5000/ubi myfavorite
podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
core.example.com:5000/httpd-24-rhel7 latest 0f1cb8c3c29b 12 days ago 323 MB
core.example.com:5000/ubi latest c096c0dc7247 2 weeks ago 214 MB
localhost/myfavorite latest c096c0dc7247 2 weeks ago 214 MB
core.example.com:5000/rhel7.5 latest 7b875638cfd8 7 months ago 211 MB
Notice how the image IDs for "ubi" and "myfavorite" are identical.
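You can verify this for yourself with 'podman inspect', which reports the full image ID for a given name. A quick check (both commands should print the same ID):
podman inspect --format '{{.Id}}' core.example.com:5000/ubi
podman inspect --format '{{.Id}}' localhost/myfavorite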
Later you will create a custom image based on an official Red Hat Enterprise Linux container image.
ℹ️ NOTE: The Red Hat Container Catalog (RHCC) provides a convenient service to locate certified container images built and supported by Red Hat. You can also view the "security evaluation" for each image.
Images can be removed just as easily as they are pulled. Remove the rhel7.5 image and list the images again to confirm it is gone.
podman images
podman rmi rhel7.5
podman images
REPOSITORY TAG IMAGE ID CREATED SIZE
core.example.com:5000/httpd-24-rhel7 latest 0f1cb8c3c29b 12 days ago 323 MB
core.example.com:5000/ubi latest c096c0dc7247 2 weeks ago 214 MB
localhost/myfavorite latest c096c0dc7247 2 weeks ago 214 MB
Here is a quick reference of common podman commands:
podman images - list images
podman ps - list running containers
podman pull - pull (copy) a container image from a repository (e.g., the Red Hat registry or Docker Hub)
podman run - run a container
podman logs - display logs of a container (can be used with --follow)
podman rm - remove one or more containers
podman rmi - remove one or more images
podman stop - stop one or more containers
podman kill $(podman ps -q) - kill all running containers
podman rm $(podman ps -a -q) - delete all stopped containers
Now run your first container.
podman run ubi echo "hello world"
hello world
Well that was really boring!! What did we learn from this? For starters, you should have noticed how fast the container launched and then concluded. Compare that with traditional virtualization where:
- you power up,
- wait for BIOS,
- wait for GRUB,
- wait for the kernel to boot and initialize resources,
- pivot root,
- launch all the services, and then finally
- run the application
Let us run a few more commands to see what else we can glean.
podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
249de20ebdb0 core.example.com:5000/ubi:latest echo hello world 18 seconds ago Exited (0) 17 seconds ago objective_kepler
Now let us run the exact same command as before to print "hello world".
podman run ubi echo "hello world"
hello world
Check out 'podman info' one more time and you should notice a few changes.
podman info
host:
BuildahVersion: 1.6-dev
Conmon:
package: podman-1.0.0-2.git921f98f.module+el8+2785+ff8a053f.x86_64
path: /usr/libexec/podman/conmon
version: 'conmon version 1.14.0-dev, commit: be8255a19cda8a598d76dfa49e16e337769d4528-dirty'
Distribution:
distribution: '"rhel"'
version: "8.0"
MemFree: 2508419072
MemTotal: 3964538880
OCIRuntime:
package: runc-1.0.0-54.rc5.dev.git2abd837.module+el8+2769+577ad176.x86_64
path: /usr/bin/runc
version: 'runc version spec: 1.0.0'
SwapFree: 1073737728
SwapTotal: 1073737728
arch: amd64
cpus: 2
hostname: node3.example.com
kernel: 4.18.0-67.el8.x86_64
os: linux
rootless: false
uptime: 1h 20m 7.85s (Approximately 0.04 days)
insecure registries:
registries:
- core.example.com
registries:
registries:
- core.example.com
- quay.io
- docker.io
store:
ConfigFile: /etc/containers/storage.conf
ContainerStore:
number: 2
GraphDriverName: overlay
GraphOptions: null
GraphRoot: /var/lib/containers/storage
GraphStatus:
Backing Filesystem: xfs
Native Overlay Diff: "true"
Supports d_type: "true"
ImageStore:
number: 2
RunRoot: /var/run/containers/storage
You should notice that the number of containers (ContainerStore) has incremented to 2, and that the ImageStore count has grown as well.
Run 'podman ps -a' to see the IDs of the exited containers.
podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e3f139ef0942 core.example.com:5000/ubi:latest echo hello world 35 seconds ago Exited (0) 34 seconds ago cocky_golick
249de20ebdb0 core.example.com:5000/ubi:latest echo hello world 2 minutes ago Exited (0) 2 minutes ago objective_kepler
Using the container IDs from the above output, you can now clean up the 'exited' containers.
podman rm <CONTAINER-ID> <CONTAINER-ID>
ℹ️ NOTE: If you are lazy, you can also clean up the containers with: podman rm --all
Now you should be able to run 'podman ps -a' again, and the results should come back empty.
podman ps -a
podman run ubi cat /proc/sys/kernel/hostname
5d6d58699069
So what we have learned here is that the hostname in the container’s namespace is NOT the same as the host platform (node3.example.com). It is unique and is by default identical to the container’s ID. You can verify this with 'podman ps -a'.
podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5d6d58699069 core.example.com:5000/ubi:latest cat /proc/sys/ker... 42 seconds ago Exited (0) 42 seconds ago sharp_swanson
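If you need a predictable hostname inside a container, podman can set one for you. A small example, assuming your podman build supports the '--hostname' flag; the command should print 'webtest' instead of a container ID:
podman run --hostname webtest ubi cat /proc/sys/kernel/hostname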
Let us have a look at the process table from within the container's namespace.
podman run ubi ps -ef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 19:53 ? 00:00:00 ps -ef
Notice that all the processes belonging to the host itself are absent. The programs running in the container's namespace are isolated from the rest of the host. From the container's perspective, the process in the container is the only process running.
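To see the contrast, you can deliberately share the host's PID namespace with the container. A sketch, assuming your podman build supports the '--pid=host' flag; this time 'ps -ef' should report the host's full process table:
podman run --pid=host ubi ps -ef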
Now let us run a command to report the network configuration from within the container's namespace.
podman run ubi ip addr show eth0
container create failed: container_linux.go:336: starting container process caused "exec: \"ip\": executable file not found in $PATH"
: internal libpod error
What just happened?
For the most part, containers are not meant for interactive (user) sessions. In this instance, the image that we are using (i.e., ubi) does not have the traditional command-line utilities a user might expect. Common tools to configure network interfaces, like 'ip', simply aren't there.
So for this exercise, we leverage something called a 'bind mount' to effectively mirror a portion of the host's filesystem into the container's namespace. Bind mounts are declared using the '-v' option. In the example below, /usr/sbin from the host will be exposed and accessible to the container's namespace, mounted at '/usr/sbin' (i.e., /usr/sbin:/usr/sbin).
ℹ️ NOTE: Using bind mounts is generally suitable for debugging, but not a good practice as a design decision for enterprise container strategies. After all, creating dependencies between applications and host operating systems is what we are trying to get away from.
podman run -v /usr/sbin:/usr/sbin -v /usr/lib64:/usr/lib64 ubi /usr/sbin/ip addr show eth0
3: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 8a:ce:7f:ea:c7:9a brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 10.88.0.8/16 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::88ce:7fff:feea:c79a/64 scope link tentative
valid_lft forever preferred_lft forever
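As an aside, a bind mount can be marked read-only by appending ':ro' to the volume specification, which is a sensible precaution when you only need to borrow tools from the host. For example:
podman run -v /usr/sbin:/usr/sbin:ro -v /usr/lib64:/usr/lib64:ro ubi /usr/sbin/ip addr show eth0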
A couple more commands to understand the network setup.
Let us begin by examining the '/etc/hosts' file.
ℹ️ NOTE: We introduce the '--rm' flag to our podman command here. This tells podman to automatically clean up after the container exits.
podman run --rm ubi cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.88.0.9 aa2204f3cd29
How does the container resolve hostnames (i.e., DNS)?
podman run --rm ubi cat /etc/resolv.conf
search example.com
nameserver 10.0.0.2
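The resolver configuration is inherited from the host, but podman can override it per container with the '--dns' flag. A sketch (the 10.0.0.53 address here is a made-up example, not part of the lab):
podman run --rm --dns 10.0.0.53 ubi cat /etc/resolv.conf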
Next, take a look at the routing table for the container's namespace. Pay attention now: the 'route' command lives in '/usr/sbin', so the bind mount is needed again.
podman run -v /usr/sbin:/usr/sbin --rm ubi route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.88.0.1 0.0.0.0 UG 0 0 0 eth0
10.88.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
Finally, look at the filesystem(s) in the container's namespace.
podman run ubi df -h
Filesystem Size Used Avail Use% Mounted on
overlay 8.0G 1.9G 6.2G 24% /
tmpfs 64M 0 64M 0% /dev
tmpfs 1.9G 8.6M 1.9G 1% /etc/hosts
shm 63M 0 63M 0% /dev/shm
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 1.9G 0 1.9G 0% /proc/acpi
tmpfs 1.9G 0 1.9G 0% /proc/scsi
tmpfs 1.9G 0 1.9G 0% /sys/firmware
You were introduced to Bind-Mounts in the previous section. Let us examine what the filesystem looks like with an active Bind-Mount.
podman run -v /usr/bin:/usr/bin ubi df -h
Filesystem Size Used Avail Use% Mounted on
overlay 8.0G 1.9G 6.2G 24% /
tmpfs 64M 0 64M 0% /dev
tmpfs 1.9G 8.6M 1.9G 1% /etc/hosts
/dev/mapper/rhel-root 8.0G 1.9G 6.2G 24% /usr/bin
shm 63M 0 63M 0% /dev/shm
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 1.9G 0 1.9G 0% /proc/acpi
tmpfs 1.9G 0 1.9G 0% /proc/scsi
tmpfs 1.9G 0 1.9G 0% /sys/firmware
Notice above how there is now a dedicated mount point for /usr/bin. Bind-Mounts can be a very powerful tool (primarily for diagnostics) to temporarily inject tools and files that are not normally part of a container image. Using bind mounts as a design decision for enterprise container strategies is folly. Creating direct dependencies between containerized applications and host operating systems is what we are trying to get away from.
Let us clean up your environment before proceeding.
podman kill --all
podman rm --all
Next, stage some content on the host for a containerized web server to serve.
mkdir -p /var/www/html
echo "Server up and running" > /var/www/html/test.txt
restorecon -Rv /var/www
Now launch the httpd-24-rhel7 image as a detached (-d) container named 'web_example', bind-mounting the content directory and publishing container port 8080 on host port 8080.
podman run --name "web_example" -v /var/www/html:/var/www/html -d -p 8080:8080 httpd-24-rhel7
pgrep -laf httpd
8662 httpd -D FOREGROUND
8703 httpd -D FOREGROUND
8704 httpd -D FOREGROUND
8705 httpd -D FOREGROUND
8711 httpd -D FOREGROUND
8717 httpd -D FOREGROUND
On the host, we can see the container's httpd processes. That's good!
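You can also ask podman directly which host ports are mapped into the container; 'podman port' should report something like '8080/tcp -> 0.0.0.0:8080':
podman port web_example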
Now let's introduce a command-line utility, 'lsns', to check out the namespaces.
lsns
NS TYPE NPROCS PID USER COMMAND
4026531835 cgroup 111 1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 18
4026531836 pid 101 1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 18
4026531837 user 111 1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 18
4026531838 uts 101 1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 18
4026531839 ipc 101 1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 18
4026531840 mnt 96 1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 18
4026531860 mnt 1 21 root kdevtmpfs
4026531992 net 101 1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 18
4026532117 mnt 1 598 root /usr/lib/systemd/systemd-udevd
4026532197 mnt 1 671 root /sbin/auditd
4026532198 mnt 1 700 chrony /usr/sbin/chronyd
4026532199 mnt 1 730 root /usr/sbin/NetworkManager --no-daemon
4026532201 net 10 8662 1001 httpd -D FOREGROUND
4026532272 mnt 10 8662 1001 httpd -D FOREGROUND
4026532273 uts 10 8662 1001 httpd -D FOREGROUND
4026532274 ipc 10 8662 1001 httpd -D FOREGROUND
4026532275 pid 10 8662 1001 httpd -D FOREGROUND
We see that the running httpd processes are using dedicated mnt, uts, ipc, pid, and net namespaces.
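If the full 'lsns' listing is too noisy, you can restrict it to a single process. For example, using the httpd PID from the output above (8662 here; yours will differ):
lsns -p 8662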
Since we explored namespaces, we may as well have a look at and discuss the control-groups aligned with our process.
systemd-cgls
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
<... SNIP ...>
└─machine.slice
├─libpod-c76b2199880cc1fb1318953be06be8b2c458cc7ebbd5bb4d74312e96e68c2011.scope
│ ├─8662 httpd -D FOREGROUND
│ ├─8699 /usr/bin/cat
│ ├─8700 /usr/bin/cat
│ ├─8701 /usr/bin/cat
│ ├─8702 /usr/bin/cat
│ ├─8703 httpd -D FOREGROUND
│ ├─8704 httpd -D FOREGROUND
│ ├─8705 httpd -D FOREGROUND
│ ├─8711 httpd -D FOREGROUND
│ └─8717 httpd -D FOREGROUND
└─libpod-conmon-c76b2199880cc1fb1318953be06be8b2c458cc7ebbd5bb4d74312e96e68c2011.scope
└─8651 /usr/libexec/podman/conmon -s -c c76b2199880cc1fb1318953be06be8b2c458cc7ebbd5bb4d74312e96e68c2011 -u c76b2199880cc1fb131>
What we can tell is that our container is bound to a cgroup under the 'machine.slice' hierarchy. Otherwise, there is nothing remarkable to discern here.
Take a quick look at the network on the host.
netstat -tulpn | grep 8080
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 8651/conmon
Just pointing out that there is now a process (conmon) listening on port 8080, proxying the network traffic to the container.
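As an aside, 'netstat' is considered legacy on RHEL 8; the same information is available from 'ss' (part of the iproute package):
ss -tulpn | grep 8080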
Now let us see if the simple web server is working.
curl localhost:8080/test.txt
Server up and running
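While the web server is up, this is also a good moment to try 'podman logs', which prints the container's captured output (add '--follow' to stream it):
podman logs web_example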
Now it is time to examine security. Start by re-launching the container from our last exercise.
podman run --name "web_example" -v /var/www/html:/var/www/html -v /usr/sbin:/usr/sbin -d -p 8080:8080 httpd-24-rhel7
Now you will start a shell that inherits the namespaces from 'web_example'.
podman exec -it web_example bash
echo "Hello From My Container" > /usr/sbin/tryme.txt
You should see:
bash: /usr/sbin/tryme.txt: Permission denied
Even though you are root inside the container, the write is denied: SELinux confines the container's processes so they cannot modify host content exposed through the bind mount.
Now run:
exit
podman stop web_example
podman rm web_example
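The next step builds a custom image from a build file named custom_image.OCIFile, which is provided in the lab environment. As a hypothetical sketch only (the real file may differ), such a build file could look like this, assuming the image serves content from /var/www/html as in the earlier exercise:
FROM core.example.com:5000/httpd-24-rhel7
RUN echo "Custom Server up and running" > /var/www/html/test.txt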
podman build -t custom_image --file custom_image.OCIFile
Once this completes, run:
podman images
and you should see something like:
REPOSITORY TAG IMAGE ID CREATED SIZE
localhost/custom_image latest 41af4d6affa6 26 minutes ago 323 MB
podman run -d --name="custom_server" -p 8080:8080 custom_image
curl localhost:8080/test.txt
This should return:
Custom Server up and running