- Every service now runs inside a container. Containers do not communicate with each other via localhost; they use Docker service names instead. The bootstrap servers are now kafka1:9092, kafka2:9092, and kafka3:9092.
- The orders topic will have a replication factor of 3 and 3 partitions. You will observe messages landing on different partitions, and you can stop one broker and watch the system continue working.
- The producer now uses acks=all, meaning it waits for all in-sync replicas to confirm each message. Combined with min.insync.replicas=2, the cluster tolerates one broker failure without losing a single message.
- The Inventory Service now not only consumes messages but also writes every order to PostgreSQL via an Object-Relational Mapping (ORM) layer built with SQLAlchemy.
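To make the durability guarantee concrete, here is a minimal sketch of the producer settings (kafka-python naming is assumed; the lab's actual client code and option names may differ), together with a tiny helper that computes how many broker failures acks=all writes can survive:

```python
# Sketch of producer reliability settings (kafka-python naming assumed;
# the lab's actual client code may differ).
producer_config = {
    "bootstrap_servers": ["kafka1:9092", "kafka2:9092", "kafka3:9092"],
    "acks": "all",  # wait for every in-sync replica to confirm each message
    "retries": 5,
}

def tolerable_broker_failures(replication_factor: int, min_insync_replicas: int) -> int:
    """Brokers that can fail while acks=all writes still succeed."""
    return replication_factor - min_insync_replicas

# Replication factor 3 with min.insync.replicas=2 tolerates exactly one failure.
print(tolerable_broker_failures(3, 2))  # → 1
```

This is why stopping kafka3 later in the lab does not interrupt the producer: two in-sync replicas remain, which still satisfies min.insync.replicas=2.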
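The ORM layer might look roughly like the following SQLAlchemy sketch. Only the orders table and its received_at column come from this lab; the Order class, the other column names, and the in-memory SQLite engine are illustrative assumptions:

```python
from datetime import datetime, timezone

from sqlalchemy import Column, DateTime, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Order(Base):
    """Hypothetical ORM model; the lab's real columns may differ."""
    __tablename__ = "orders"
    id = Column(Integer, primary_key=True)
    order_id = Column(String, nullable=False)  # assumed column
    quantity = Column(Integer)                 # assumed column
    received_at = Column(DateTime, default=lambda: datetime.now(timezone.utc))

# In the lab this would point at the postgres container
# (a postgresql:// URL for lab_db); SQLite keeps the sketch self-contained.
engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

# What the inventory consumer would do for each message it receives:
with Session() as session:
    session.add(Order(order_id="ord-001", quantity=3))
    session.commit()
    print(session.query(Order).count())  # → 1
```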
Navigate into the Part 2 directory (2_containerized_microservices) first; all commands below assume you are inside that directory.
```bash
cd 2_containerized_microservices
```

Run the setup script to create the required volume directories:

```bash
chmod u+x project_setup.sh
sed -i 's/\r$//' project_setup.sh
./project_setup.sh
```

Bring up the stack:

```bash
docker compose -f docker-compose.yaml up --build \
  --scale producer=1 \
  --scale consumer-notification=1 \
  --scale consumer-inventory=1
```

Give the containers in the stack a few minutes to stabilize. Check Docker Desktop for any container that did not start and start it manually by running docker start <container-name> or clicking the "Start" button in the Docker Desktop UI.
Verify that orders are reaching PostgreSQL:

```bash
docker exec -it postgres psql -U lab_user -d lab_db -c "SELECT * FROM orders ORDER BY received_at DESC LIMIT 5;"
```

Describe the orders topic to see its partitions, leaders, and replicas:

```bash
docker exec -it kafka1 kafka-topics \
  --bootstrap-server localhost:9092 \
  --describe --topic orders
```

Stop one broker:

```bash
docker stop kafka3
```

Observe that the producer and consumers continue operating without interruption. The cluster still has 2 in-sync replicas, which satisfies min.insync.replicas=2.
Restart the broker:

```bash
docker start kafka3
```
Both inventory consumer containers join the same consumer group (order-inventories).
Kafka detects the new member, triggers a rebalance, and redistributes the 3
partitions between the 2 instances. Each instance owns a subset of partitions and
processes only the messages from those partitions. No message is processed twice.
Example:
Before scaling:
consumer-inventory-1 → partitions 0, 1, 2
After scaling to 2:
consumer-inventory-1 → partitions 0, 1
consumer-inventory-2 → partition 2
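The assignment pattern above can be reproduced in a few lines of Python. This sketch mimics Kafka's default range assignor for a single topic; it is a simplification of the real protocol, which also handles multiple topics and cooperative rebalancing:

```python
def range_assign(num_partitions: int, consumers: list[str]) -> dict[str, list[int]]:
    """Simplified single-topic range assignment: consumers are sorted, and
    earlier consumers receive one extra partition when the split is uneven."""
    members = sorted(consumers)
    quota, extra = divmod(num_partitions, len(members))
    assignment, start = {}, 0
    for i, member in enumerate(members):
        count = quota + (1 if i < extra else 0)
        assignment[member] = list(range(start, start + count))
        start += count
    return assignment

print(range_assign(3, ["consumer-inventory-1"]))
# → {'consumer-inventory-1': [0, 1, 2]}
print(range_assign(3, ["consumer-inventory-1", "consumer-inventory-2"]))
# → {'consumer-inventory-1': [0, 1], 'consumer-inventory-2': [2]}
```

Because every partition lands in exactly one member's list, no message is ever processed by two instances of the same group.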
Observe the partition assignment before the scaling:

```bash
docker exec kafka1 kafka-consumer-groups \
  --bootstrap-server localhost:9092 \
  --describe --group order-inventories
```

Scale the inventory consumer to 2 instances:

```bash
docker compose -f docker-compose.yaml up -d \
  --scale producer=1 \
  --scale consumer-notification=1 \
  --scale consumer-inventory=2
```

Observe the partition assignment after the scaling:
```bash
docker exec kafka1 kafka-consumer-groups \
  --bootstrap-server localhost:9092 \
  --describe --group order-inventories
```

Scale the inventory consumer to 3 instances:

```bash
docker compose -f docker-compose.yaml up -d \
  --scale producer=1 \
  --scale consumer-notification=1 \
  --scale consumer-inventory=3
```

Observe the partition assignment after the scaling:

```bash
docker exec kafka1 kafka-consumer-groups \
  --bootstrap-server localhost:9092 \
  --describe --group order-inventories
```

Observe that the orders table in the PostgreSQL container is still receiving data:

```bash
docker exec -it postgres psql -U lab_user -d lab_db -c "SELECT * FROM orders ORDER BY received_at DESC LIMIT 5;"
```

When you are done, tear everything down:

```bash
# Stop all services AND delete all stored data
docker compose -f docker-compose.yaml down -v

chmod u+x project_cleanup.sh
sed -i.bak 's/\r$//' project_cleanup.sh
./project_cleanup.sh
```