Step by Step Basic Microservices System (3 NodeJS + 1 Load Balancer containers) with Docker Compose

About this video

### Summary of the Text

1. **Objective**: The video demonstrates how to build a small microservices-based system consisting of:
   - Four containers in total: one load balancer and three identical Node.js applications (previously created on the channel).
2. **Audience Engagement**:
   - Encourages viewers to subscribe for more software engineering content.
   - Highlights that the channel covers various software engineering topics with practical examples.
3. **Application Overview**:
   - A simple Node.js Express application is used as the base.
   - The app reads an application ID from an environment variable and returns it from a REST endpoint.
   - The application listens on port 9999 and is containerized using Docker.
4. **Challenges with Direct Docker Exposure**:
   - Exposing multiple containers directly to the host is undesirable for efficiency and security reasons.
   - Instead, a load balancer (or proxy) distributes traffic among the microservices.
5. **Proxy Configuration**:
   - HAProxy is chosen as the load balancer for its simplicity.
   - A configuration file (`haproxy.cfg`) defines the frontend (port 8080) and backend settings.
   - Backend servers are assigned unique hostnames and ports for internal communication.
6. **Microservices Architecture**:
   - The entire system runs in an isolated environment with private IPs and hostnames.
   - Docker Compose manages the setup, defining services, networks, and volumes.
7. **Docker Compose Setup**:
   - Version 3 of the Compose file format is used for the latest features.
   - Services include `haproxy` (the load balancer) and three Node.js application instances (`node-app-1`, `node-app-2`, `node-app-3`).
   - An environment variable (`APP_ID`) is passed to each Node.js service to differentiate the instances.
8. **Load Balancing**:
   - HAProxy uses the round-robin algorithm by default to distribute requests among services.
   - Additional features such as IP hashing and sticky sessions are mentioned as options.
9. **Testing the System**:
   - Accessing `localhost` routes requests through HAProxy to the different Node.js instances.
   - Refreshing the page cycles through responses from the different services, demonstrating load balancing.
10. **Scaling the System**:
    - Adding a fourth service instance is as simple as updating the Docker Compose file.
    - Stopping and restarting the setup applies the changes.
11. **Conclusion**:
    - Microservices have pros and cons; they are not always the best choice.
    - Viewers are encouraged to understand the trade-offs before adopting microservices.
    - Code and resources are available in the video description.
12. **Call to Action**:
    - Subscribing and enabling notifications are encouraged for future content.
    - Thanks viewers and wishes them a great weekend.

This summary captures the key points of the video, focusing on the technical steps, tools used, and insights shared about microservices architecture.
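The `haproxy.cfg` described in the summary (a frontend on port 8080 and a backend listing the three services by hostname) might look like this minimal sketch; the section names and timeouts are assumptions, while the service hostnames, ports, and round-robin default come from the summary:

```
# Minimal sketch of the haproxy.cfg described in the summary (names
# and timeouts are assumptions, not the video's exact file).
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_front
    bind *:8080
    default_backend node_apps

backend node_apps
    # Round-robin is HAProxy's default balancing algorithm; the summary
    # also mentions IP hashing and sticky sessions as alternatives.
    balance roundrobin
    server app1 node-app-1:9999
    server app2 node-app-2:9999
    server app3 node-app-3:9999
```

The backend hostnames resolve because Docker Compose gives each service a DNS name matching its service name on the shared network.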
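Putting the Compose setup from the summary together, the `docker-compose.yml` could look roughly like the following; the service names, version 3, and `APP_ID` variable come from the summary, while the image, build context, and volume path are assumptions:

```yaml
version: "3"

services:
  haproxy:
    image: haproxy:latest
    ports:
      - "8080:8080"   # only the load balancer is exposed to the host
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    depends_on:
      - node-app-1
      - node-app-2
      - node-app-3

  node-app-1:
    build: .          # the Node.js app from the earlier video
    environment:
      - APP_ID=1

  node-app-2:
    build: .
    environment:
      - APP_ID=2

  node-app-3:
    build: .
    environment:
      - APP_ID=3
```

Scaling as described in item 10 amounts to copying one of the `node-app-*` blocks with a new name and `APP_ID`, adding a matching `server` line to `haproxy.cfg`, and restarting the stack.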


Course: Docker

### Course Description: Docker

This comprehensive course on Docker is designed to equip students with the knowledge and skills necessary to create, manage, and deploy containerized applications effectively. The course begins with an introduction to Docker, focusing on its importance in modern software development, particularly in continuous integration and continuous deployment (CI/CD) pipelines, Jenkins tasks, and Kubernetes clusters. Students will learn how to create lightweight containers that encapsulate their applications in an isolated environment, allowing for consistent execution across different platforms. This isolation ensures that applications run seamlessly regardless of the underlying infrastructure, making Docker a critical tool for developers.

The course delves into the practical aspects of Docker by guiding students through the process of creating a Docker image and running a container. Starting with setting up a Dockerfile, participants will learn how to define the environment and dependencies required for their application. Through hands-on examples using Node.js and Express, students will build a simple web application and containerize it using Docker. The course also covers essential commands such as `docker build` and `docker run`, demonstrating how to expose ports, install dependencies, and execute applications within containers. Additionally, students will explore how to scale their applications by running multiple containers and load-balancing them using tools like Nginx or HAProxy. By the end of this section, learners will have a solid understanding of how to leverage Docker for deploying stateless, self-contained applications.

Beyond the basics, the course introduces advanced topics such as microservices architecture and orchestration. Students will gain insights into how Docker facilitates the development of distributed systems by enabling the creation of modular, scalable services.
The course includes practical demonstrations of running multiple containers simultaneously, simulating real-world scenarios where applications are deployed across various environments. Furthermore, learners will be introduced to the integration of Docker with Kafka, a distributed streaming platform, to build robust data processing pipelines. By combining Docker with Kafka, students will understand how to handle high-throughput, fault-tolerant systems that are essential for modern applications. Overall, this course provides a thorough grounding in Docker, empowering students to harness its full potential in both development and production environments.
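The Dockerfile workflow described above could be sketched as follows for a small Node.js app; the base image, file layout, and entry point are assumptions rather than the course's exact files, and port 9999 matches the demo app from the video:

```dockerfile
# Sketch of a Dockerfile for a small Node.js/Express app (assumed layout).
FROM node:18-alpine

WORKDIR /app

# Copy the manifest and install dependencies first, so this layer is
# cached between builds when only the application code changes.
COPY package*.json ./
RUN npm install

COPY . .

# The demo app listens on port 9999.
EXPOSE 9999

CMD ["node", "index.js"]
```

With this in place, the `docker build` and `docker run` commands the course covers would be used along the lines of `docker build -t node-app .` followed by `docker run -p 9999:9999 node-app`.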
