What is Docker?
Quick Summary
Docker is a platform that lets developers package applications into standardized units called containers. It is useful for local testing as well as for deployment on cloud platforms.

OverlayFS is responsible for "mounting" the image. It traverses each layer from the most recent to the oldest, adding the files it finds. If a file already exists in a more recent layer, or is marked as a whiteout, it is ignored. The result is a unified, "phantom" view that serves as the final mounted filesystem. If I run an alpine/jre image on my Ubuntu machine, a process is created that "believes" it is running on an Alpine system with a JRE (thanks to this filesystem "phantom"). It then executes a Java command on a JAR file within what it perceives to be its own local filesystem.

The Dockerfile is the recipe Docker uses to build an image. Its main instructions are:
FROM: defines the base image.
WORKDIR: sets the working directory for all subsequent commands.
COPY: copies a file or folder from the local environment into the image.
RUN: executes a command during the build process.
ENTRYPOINT: the main command that always runs when the container starts.
CMD: provides default arguments for the ENTRYPOINT.
EXPOSE: documents the port used by the application.
FROM ... AS builder: names a build stage so that a later FROM can copy files from it; the stage itself is not kept in the final image.

To build an image:
docker build -t [REGISTRY_URL]/[PROJECT_OR_ORG]/[REPO_NAME]:[TAG] .
Note: if [REGISTRY_URL] is omitted, Docker Hub is used by default. In a local environment, this can be simplified to:
docker build -t [REPO_NAME]:[TAG] .
The latest rule:
Never use latest as a unique version. It's a pointer, not a release. Build with a real version (e.g. 1.0.0), then optionally re-tag it as latest if you want it to be the default.

To push to Docker Hub:
docker login
docker push [PROJECT_OR_ORG]/[REPO_NAME]:[TAG]

To push to the GitHub Container Registry:
echo $MY_GITHUB_TOKEN | docker login ghcr.io -u [USERNAME] --password-stdin
docker tag [LOCAL_IMAGE] ghcr.io/[PROJECT_OR_ORG]/[REPO_NAME]:[TAG]
docker push ghcr.io/[PROJECT_OR_ORG]/[REPO_NAME]:[TAG]

When you run a Docker image and want your application to be reachable from outside the container, you must map the ports. You choose a host port available on your machine and the internal port the application uses inside the container:
docker run -d -p <HOST_PORT>:<INTERNAL_PORT> --name <NAME_OF_APP> <IMAGE_NAME>:<TAG>
Docker bridges the host port and the internal port, so your application works perfectly, "thinking" it is the only app on the machine. On a VPS, your app will be reachable at <VPS_IP>:<HOST_PORT>.
Important: for this to work, the application inside the container must listen on 0.0.0.0, not 127.0.0.1.

If you want a container (like a database) to be isolated from the public network while remaining accessible to your API, use a Docker network:
docker network create my-net
docker run -d --name db --network my-net -e POSTGRES_PASSWORD=pw postgres:15-alpine
docker run -d --name app --network my-net -p 80:8080 my-app
The database is now isolated from the internet. Only the containers inside my-net (like app) can connect to it, using the container name db as the hostname (e.g., jdbc:postgresql://db:5432/mydb).

By default, all files created inside a container are lost when the container is deleted. If you want to keep data permanently with Docker, you must use volumes.
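The Dockerfile instructions listed above, including the builder stage, can be combined in a multi-stage build. Here is a minimal sketch for a Java application (the Maven base image, the app.jar name, and port 8080 are assumptions for illustration, not from the original project):

```dockerfile
# --- Build stage: named "builder", not kept in the final image ---
FROM maven:3.9-eclipse-temurin-17 AS builder
WORKDIR /build
COPY pom.xml .
COPY src ./src
RUN mvn package -DskipTests

# --- Runtime stage: only the built JAR is copied over ---
FROM eclipse-temurin:17-jre-alpine
WORKDIR /app
COPY --from=builder /build/target/app.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

You would then build and tag it with something like docker build -t my-app:1.0.0 . — the final image contains only the JRE stage, not Maven or the sources.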
When someone creates an image, such as a postgres or mysql image, they can add VOLUME /path/to/data to tell Docker that the data at this path should be preserved. When the image is launched in a container and no volume is given in the command, Docker automatically creates a volume for that path, called an anonymous volume. If you delete the container and recreate it with a command that names a volume, the path declared in the image's VOLUME line will point to the data stored in that volume. There are 2 types of volumes: Bind Mounts and Named Volumes.
Bind Mounts:
We link a folder from your host machine directly to a folder in the container.
Format: -v /path/to/host:/path/inside/container
Advantage: you see changes to your files on your machine in real time.
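For example, a bind mount can serve a local folder with nginx (the ./site path and the nginx image are hypothetical choices for illustration):

```shell
# Serve the local ./site folder; edits made on the host are visible
# in the container immediately. ":ro" mounts it read-only.
docker run -d --name web \
  -v "$(pwd)/site:/usr/share/nginx/html:ro" \
  -p 8080:80 nginx:alpine
```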
Named Volumes:
docker volume create volume_name
-v volume_name:/path/inside/container
Example: PostgreSQL
docker volume create pgdata
docker run -d --name app-db -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data postgres:17-alpine
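To convince yourself that the data really persists, you can destroy the container and recreate it with the same named volume (a sketch, assuming the pgdata volume created above):

```shell
# Remove the container; the named volume is NOT deleted with it
docker rm -f app-db

# Recreate it pointing at the same volume: the existing data is reused
docker run -d --name app-db -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data postgres:17-alpine

# The volume itself is only removed when you ask for it explicitly:
# docker volume rm pgdata
```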
Docker Compose is the tool provided by Docker to manage multiple containers with a single file. The main concepts:
docker-compose.yml: a single text file that describes how to launch one or more containers.
Service: a container defined in the file.
Network : Docker Compose automatically creates a network so that your containers talk to each other with their service names.
Volumes: named volumes are also defined in this file.
| Action | Command |
| --- | --- |
| start | docker-compose up -d |
| stop | docker-compose down |
| logs for all containers | docker-compose logs -f |
version: '3.8'

services:
  # 1. DB service
  app-db:
    image: postgres:17-alpine
    container_name: app-db
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: secret_password # <--- To be changed!
      POSTGRES_DB: appdb
    volumes:
      - pgdata:/var/lib/postgresql/data # <--- Named volume
    networks:
      - app-network
    restart: always

  # 2. Java API service
  app-api:
    image: your-java-api:latest
    container_name: app-api
    ports:
      - "8080:8080"
    environment:
      # The URL uses the service name "app-db" as the hostname
      DB_URL: jdbc:postgresql://app-db:5432/appdb
      DB_USER: user
      DB_PASSWORD: secret_password
    depends_on:
      - app-db # <--- Waits for app-db to start before starting the app
    networks:
      - app-network
    restart: always

# 3. Named volume definition
volumes:
  pgdata:

# 4. Network definition
networks:
  app-network:
    driver: bridge
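Note that depends_on in its simple form only waits for the app-db container to start, not for Postgres to actually accept connections. Recent Docker Compose versions support a healthcheck-based variant that closes this gap (a sketch; the pg_isready intervals are assumptions):

```yaml
services:
  app-db:
    # ... same as above, plus:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d appdb"]
      interval: 5s
      timeout: 3s
      retries: 5

  app-api:
    # ... same as above, but with the long form of depends_on:
    depends_on:
      app-db:
        condition: service_healthy
```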
Now that you have a working Docker environment, you should be able to debug a Docker container in case of failure:
| Action | Command |
| --- | --- |
| see the state of all containers | docker ps -a |
| get the logs of a container | docker logs <container_name> |
| clean unused containers, images and networks | docker system prune |
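Putting these together, a typical debugging session might look like the following (the container name app-api is an assumption taken from the compose example above):

```shell
docker ps -a                       # is the container up, restarting, or exited?
docker logs --tail 100 -f app-api  # last 100 log lines, then follow new output
docker exec -it app-api sh         # open a shell inside the running container
docker system prune                # asks for confirmation before deleting anything
```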