Docker Command Explained: Architecture, Flags & Internals
Key Takeaways
- The `docker` command is a client that communicates with the Docker daemon (`dockerd`) via REST API.
- Every visibly simple command (e.g., `docker run`) expands into a combination of lower-level instructions executed by containerd.
- The `docker` CLI is structured into management commands (like `docker container`) and legacy top-level commands (like `docker ps`).
- Understanding flags like `--cgroupns`, `--pid`, `--privileged`, and storage/network drivers is essential for advanced container manipulation.
- Modern Docker uses BuildKit, containerd, and runc behind the scenes; the CLI is just the entry point.
- Debugging containers often requires hidden gems: `docker system events`, `docker inspect -f`, `docker exec --privileged`, and equivalent API calls.
What Is the docker Command and How Does It Work Internally?
The docker command is a CLI client that talks to dockerd through a REST API over a Unix socket (/var/run/docker.sock) or TCP. It sends high-level instructions (create, pull, run) that get translated into low-level container lifecycle operations via containerd and runc.
The architecture looks like this:
flowchart LR
A[docker CLI] -->|REST API| B[/var/run/docker.sock]
B --> C[dockerd]
C --> D[containerd]
D --> E[runc]
E --> F[Linux kernel namespaces & cgroups]
Core components behind every docker command
| Layer | Responsibility |
|---|---|
| docker CLI | User-facing interface |
| dockerd daemon | API server, images, networks, lifecycle mgmt |
| containerd | Runtime orchestration |
| runc | Creates actual Linux containers |
| Kernel | Namespaces, cgroups, networking, overlayfs |
Knowing this helps you debug Docker when the CLI “looks fine” but containers fail deeper in the runtime stack.
How Is the docker CLI Structured?
Docker uses a two-tier command model:
1. Management Commands (Preferred)
These are fully namespaced and self-explanatory:
- `docker container`
- `docker image`
- `docker volume`
- `docker network`
- `docker system`
- `docker compose`
Example:
docker container ls
docker container prune
docker network inspect bridge
2. Legacy Commands (Older but still common)
- `docker ps`
- `docker run`
- `docker exec`
- `docker logs`
- `docker rmi`
These map internally to the management commands.
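Because the legacy forms map onto the management commands, the two spellings are interchangeable; a quick illustration (assumes a running daemon):

```shell
# Legacy and management forms hit the same API endpoint:
docker ps -a              # legacy top-level command
docker container ls -a    # equivalent management command

# Same for image removal:
docker rmi nginx          # legacy
docker image rm nginx     # management equivalent
```

The management forms are preferred in scripts because the noun (`container`, `image`) makes intent explicit.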
What Are the Most Important docker Subcommands? (Deep Technical View)
Below is a breakdown of subcommands with real behavior, not surface-level descriptions.
docker run: What Actually Happens Under the Hood?
docker run is a wrapper that performs pull (if the image is absent locally) → create → start.
Detailed Breakdown
docker run nginx
Steps executed:
- docker pull nginx
- docker create
- generates container metadata
- allocates rootfs with overlayfs
- assigns networking, hostname, MAC, IP
- docker start
- Attach stdout/stderr (unless `-d` is used)
Important docker run flags:
- `--network` (bridge, host, none, custom driver)
- `--mount` vs `--volume`
- `--privileged` (full kernel access)
- `--pid=host` (share host process namespace)
- `--cgroupns=host`
- `--cap-add`, `--cap-drop`
- `--pids-limit` (prevent fork bombs)
- `--isolation` (Windows)
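Several of these flags compose naturally. A sketch of a locked-down container (the image and name are illustrative, and `--read-only` is an additional hardening flag not listed above):

```shell
# Drop all capabilities, re-add only what nginx needs, and cap resources:
docker run -d --name web \
  --memory 256m --cpus 0.5 \
  --pids-limit 100 \
  --cap-drop ALL --cap-add NET_BIND_SERVICE \
  --read-only \
  nginx
```

Starting from `--cap-drop ALL` and adding back capabilities one by one is the safer direction than subtracting from the default set.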
docker exec: What Does It Actually Do?
docker exec does not create a new container — it enters an existing namespace set.
docker exec -it myapp bash
It joins:
- IPC namespace
- PID namespace (unless `--pid=host`)
- mount namespace
- network namespace
This is why debugging a paused container behaves differently — PID ns isolation affects visibility.
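The namespace membership that `docker exec` joins is visible directly in `/proc`. This sketch lists the current process's namespaces (no daemon required); pointing the same listing at a container's PID, obtained from `docker inspect`, shows which namespaces differ from the host's:

```shell
# Each symlink is a namespace this process belongs to (net, pid, mnt, ipc, uts, ...):
ls -l /proc/self/ns

# For a container, substitute the PID reported by:
#   docker inspect -f '{{ .State.Pid }}' myapp
# then (as root): ls -l /proc/<pid>/ns
```

Two processes share a namespace exactly when the inode numbers in these symlinks match.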
docker images / docker image ls
Lists images stored locally, each assembled from on-disk layers.
Hidden but crucial flags:
docker image inspect --format '{{ .RootFS.Layers }}' nginx
Lists the content-addressable diff layers that make up the image's root filesystem.
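To see per-layer disk usage rather than digests, `docker history` breaks an image down layer by layer (assumes the image is present locally):

```shell
# SIZE shows how much each layer contributes on disk;
# CREATED BY shows the Dockerfile instruction that produced it:
docker history nginx --format 'table {{.CreatedBy}}\t{{.Size}}'
```

Large layers here are the first place to look when trimming image size.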
docker logs: How Logging Actually Works
Docker captures logs using:
- json-file driver (default)
- journald
- fluentd
- syslog
- local driver (block-based, fastest)
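The driver is chosen per container at run time; a sketch using the `local` driver with rotation (the size limits are illustrative):

```shell
# Block-based 'local' driver with log rotation to cap disk usage:
docker run -d \
  --log-driver local \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  nginx
```

Without rotation limits, the default `json-file` driver can silently fill the host disk on chatty containers.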
To check the active driver:
docker inspect <container> -f '{{ .HostConfig.LogConfig.Type }}'
docker inspect: The Power Tool for Debugging
Example:
docker inspect -f '{{ .State.Pid }}' mycontainer
Useful for entering a container's namespaces with nsenter.
docker system events: The Underrated Monitoring Tool
This streams real-time system-level events:
docker system events
Outputs:
- container start/stop
- network changes
- image pulls
- OOM kills
- cgroup throttling events
Indispensable for debugging.
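The stream can be narrowed with filters and emitted as JSON for tooling; a sketch watching only container deaths:

```shell
# One JSON object per event; --filter narrows by type and event name:
docker system events \
  --filter type=container \
  --filter event=die \
  --format '{{json .}}'
```

Piping this into `jq` gives a lightweight real-time monitor without any agent installed.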
How Does Docker Networking Actually Work?
Bridge Mode (default)
- Connects containers to a virtual bridge (`docker0`)
- IP addresses assigned by Docker's built-in IPAM driver (not DHCP)
- NAT to host via iptables
Host Mode
- Container shares host network stack
- No port mapping needed
- Used for performance-sensitive workloads (e.g., load balancers)
None
- Fully isolated network
- Ideal for security-restricted workloads
Custom Drivers
- macvlan
- ipvlan
- overlay (Swarm mode)
To inspect:
docker network inspect bridge
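User-defined bridge networks also provide automatic DNS-based discovery between containers, which the default bridge lacks; a minimal sketch (network and container names are illustrative):

```shell
# Create a user-defined bridge and attach two containers to it:
docker network create --driver bridge appnet
docker run -d --name api --network appnet nginx

# Containers on the same user-defined network resolve each other by name:
docker run --rm --network appnet alpine ping -c1 api
```

This name resolution is why Compose places services on a generated user-defined network by default.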
Understanding Docker Storage & Filesystems
Docker supports:
- OverlayFS (default)
- btrfs
- ZFS
- Device Mapper (legacy)
View driver:
docker info | grep Storage
Every container uses a union mount of:
- base layers
- container-specific writable layer
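The writable layer can be inspected directly: `docker diff` lists files added (A), changed (C), or deleted (D) relative to the image layers (assumes a container named `app` exists):

```shell
# What the container wrote on top of its image:
docker diff app

# Where the overlay layers live on the host (lower/upper/merged dirs):
docker inspect -f '{{ json .GraphDriver.Data }}' app
```

A large `docker diff` output is a hint that data belongs in a volume rather than the writable layer.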
Advanced Use Cases
1. Debugging Containers with nsenter
PID=$(docker inspect -f '{{ .State.Pid }}' app)
sudo nsenter --target $PID --mount --uts --ipc --net --pid bash
2. Throttling CPU & Memory
docker run --cpus=0.5 --memory=512m nginx
3. Creating an Ephemeral Container
docker run --rm -it ubuntu bash
Complete Docker Command Reference Table
| Category | Commands |
|---|---|
| Containers | run, create, start, stop, exec, logs, kill, inspect, rm |
| Images | build, pull, push, images, rmi, commit |
| Networks | network ls, network inspect, network create, network connect |
| Volumes | volume ls, volume prune, volume rm |
| System | system df, system prune, system events |
How the docker CLI Talks to the API (Deep Technical)
Running:
docker version
Shows:
- Client version (CLI binary)
- Server version (dockerd daemon)
The API endpoints live under:
/v1.xx/containers/create
/v1.xx/images/pull
/v1.xx/networks/create
To see raw API calls:
curl --unix-socket /var/run/docker.sock http://localhost/v1.41/containers/json
This reveals the underlying contract the CLI uses.
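The same contract can drive the full lifecycle with nothing but curl; a sketch creating and starting a container (API version and container name are illustrative):

```shell
# Create a container from the nginx image:
curl --unix-socket /var/run/docker.sock \
  -H 'Content-Type: application/json' \
  -d '{"Image": "nginx"}' \
  'http://localhost/v1.41/containers/create?name=api-demo'

# Start it by name (or by the Id returned above):
curl -X POST --unix-socket /var/run/docker.sock \
  http://localhost/v1.41/containers/api-demo/start
```

Every docker CLI command decomposes into calls like these, which is what SDKs (Go, Python) wrap programmatically.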
