Ever heard the words "monolithic" and "microservices" tossed around in tech conversations? Don't worry if they sound a bit confusing – you're not alone! These terms are hot topics in the world of software design, and for good reason.
In this article, we're going to break down what monolithic and microservices mean. We'll compare them in simple terms and talk about when you might want to use each one.
Monolithic and microservices are both types of software architecture. In simple terms, architecture is just a fancy word for the overall design of a computer program: the plan that lays out how all the pieces of the software will fit and work together. Monolithic and microservices are simply two different ways of organizing that design for web applications.
To understand each architecture, let's work through an example. Imagine we're designing an online travel booking web application that offers hotel reservations, flight bookings, and train ticket purchases, along with payment processing and user account management. We'll focus on the backend logic of this application.
In a monolithic architecture, we would design the application as a single, unified system. All the code for every feature would reside within one codebase (project): hotel reservations, flight bookings, train tickets, payments, and user account management. The entire codebase, encompassing all these functions, operates as one cohesive unit.
In a monolithic architecture, you build and deploy the entire application as one single unit. This means you have to use just one technology stack for the whole app, whether it's Java with Spring Boot, Python with Django, JavaScript with Node.js, or any other combination.
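To make this concrete, here is a minimal sketch of what the monolith might look like in code. The class and function names are hypothetical; the point is that every feature lives in one codebase and every feature talks to the others through direct in-process calls.

```python
# A minimal sketch of the monolith: hotels, payments, and accounts all
# live in one codebase and ship as one deployable unit.
class TravelApp:
    """Hypothetical all-in-one travel application."""

    def __init__(self):
        self.users = {}
        self.bookings = []

    def sign_up(self, email):
        # Account management is just another method on the same object.
        self.users[email] = {"bookings": []}

    def _charge(self, email, amount):
        # Payment logic is one internal function call away; a bug here
        # can crash the entire application, not just payments.
        return {"user": email, "amount": amount, "status": "paid"}

    def book_hotel(self, email, hotel, amount):
        receipt = self._charge(email, amount)  # direct in-process call
        booking = {"type": "hotel", "item": hotel, "receipt": receipt}
        self.bookings.append(booking)
        return booking

app = TravelApp()
app.sign_up("ada@example.com")
booking = app.book_hotel("ada@example.com", "Grand Plaza", 120)
```

Notice there is no network boundary anywhere: booking a hotel and charging a card are the same process, the same deployment, and the same failure domain.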
Monolithic architecture is often the go-to choice for small startups, personal projects, or companies that don't deal with a huge user base.
Advantages like a single codebase to develop, test, and deploy make monolithic architecture a practical choice for smaller projects or teams that want to get their application up and running quickly without dealing with the complexities of more distributed systems.
Some major issues that come with using a monolithic architecture include:
In a monolithic setup, all parts of the application are tightly interconnected. This means if something goes wrong in one area—like a bug or performance issue—it can affect the entire system. Imagine if a glitch in one feature causes the whole app to crash, leaving users unable to access anything. This dependency on a single codebase makes the system vulnerable to widespread failures, impacting reliability and user experience. It also limits flexibility in updating and scaling different parts of the app independently, which can lead to more downtime and operational headaches.
Imagine we need to make a minor update to improve the payment functionality in our application. Even though it's a small change, we have to redeploy the entire app. This can be incredibly frustrating. A simple tweak to one part of the system forces us to go through the whole deployment process again, involving all components of the app, even those not affected by the change. This not only wastes time and resources but also increases the risk of introducing new bugs and downtime, making the process unnecessarily cumbersome.
When user traffic spikes, a monolithic application can struggle to cope. Every part of the app—from hotel bookings to payments—feels the strain on the single server. This overload leads to slow responses, timeouts, and errors, frustrating users.
To manage increased traffic, you typically need to scale horizontally by deploying multiple copies of the entire app across many servers.
However, this method is inefficient because it requires replicating the entire application, even the parts that don't need more resources. And with that scaling comes a hefty bill.
Balancing performance needs with cost efficiency becomes challenging. Scaling down during quieter periods also poses challenges, including safely decommissioning excess servers without disrupting service and managing data consistency.
These challenges highlight why monolithic architectures struggle with fluctuating loads and why organizations often turn to more flexible solutions like microservices, which offer better scalability and cost management capabilities.
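A quick back-of-the-envelope calculation shows why replicating the whole monolith is wasteful. The numbers below are illustrative assumptions, not benchmarks: suppose at peak the payment feature needs capacity worth 8 instances, while each of the other four features only needs 2.

```python
# Illustrative peak capacity needs per feature (hypothetical numbers).
peak_need = {"hotel": 2, "flight": 2, "train": 2, "payment": 8, "auth": 2}

# Monolith: every copy bundles all five features, so we must run enough
# full copies to cover the hungriest feature: 8 copies x 5 features.
monolith_units = max(peak_need.values()) * len(peak_need)

# Microservices: each service scales to exactly its own need.
microservice_units = sum(peak_need.values())

print(monolith_units, microservice_units)  # 40 vs 16 feature-units provisioned
```

Under these assumptions the monolith provisions 40 feature-units of capacity to serve a demand of 16, because every full copy carries four features that didn't need scaling at all.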
If too many developers are working on the same codebase, even making a small change can become problematic. Each change can impact other parts of the application, leading to conflicts and dependencies that need to be resolved. This often results in longer development times, as developers must coordinate closely, perform extensive testing, and handle merge conflicts. The interdependence of various components means that a small tweak by one developer might require adjustments or fixes in seemingly unrelated parts of the code, complicating the development process and slowing down progress.
Enter microservices:
And one of the pioneers in adopting microservices was the renowned media streaming platform, Netflix. Originally built on a monolithic architecture, Netflix faced significant challenges as its user base grew exponentially. With millions of subscribers globally streaming movies and TV shows simultaneously, the monolithic approach struggled to handle the increasing traffic efficiently.
This led Netflix to innovate and transition towards a microservices architecture, marking a pivotal shift in how modern applications are designed and operated. Today, Netflix operates using hundreds of microservices to power their application. This architectural shift has become a standard for high-traffic applications.
Netflix’s adoption of microservices revolutionized the way large-scale applications handle scalability, flexibility, and user experience.
For more info, check out the engineering blogs published by Netflix.
Similarly, Atlassian noted in their monolithic vs microservices blog:
"In January 2016, we had about 15 total microservices. Now we have more than 1300. We moved 100K customers to the cloud, built a new platform along the way, transformed our culture, and ended up with new tools. We have happier, autonomous teams and a better DevOps culture. Microservices may not be for everyone. A legacy monolith may work perfectly well, and breaking it down may not be worth the trouble. But as organizations grow and the demands on their applications increase, microservices architecture can be worthwhile."
Reference: Microservices vs. monolithic architecture | Atlassian
To sum up microservices in one phrase: don't put all your eggs in one basket.
In a microservices setup, we take a big application and break it down into smaller, manageable pieces, each handling a specific function. These smaller pieces, or services, operate independently, each with its own codebase, and can be developed and deployed on their own. This makes them loosely coupled, allowing for greater flexibility and easier management.
Microservices architecture is generally not adopted by small teams or small organizations. It is more commonly implemented by large companies with millions of users and thousands of developers, such as Amazon, Uber, Netflix, Airbnb, etc.
When transitioning from a monolithic application to a microservices architecture, we need to divide the single, unified application into multiple smaller, independent services.
But the question arises: into how many independent services should a monolithic application be divided?
There's no exact answer or strict rule for this—it varies from company to company based on their specific business needs. For example, in our monolithic travel application, the company might choose to split it into smaller services like payments, hotel bookings, flight bookings, authentication, and train bookings. This way, the monolithic travel application becomes five separate microservices, each handling a specific part of the application and working together.
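One simple way to sanity-check such a split is to make sure every feature of the monolith lands in exactly one service. The feature and service names below are one hypothetical decomposition of our travel app, not a prescribed one:

```python
# Features the travel monolith currently provides (hypothetical names).
monolith_features = {
    "book_hotel", "book_flight", "book_train",
    "charge_card", "refund", "sign_up", "log_in",
}

# One possible way to carve them into five microservices.
microservices = {
    "hotel-booking":  {"book_hotel"},
    "flight-booking": {"book_flight"},
    "train-booking":  {"book_train"},
    "payment":        {"charge_card", "refund"},
    "auth":           {"sign_up", "log_in"},
}

# Check the split covers every feature with no overlap between services.
covered = set().union(*microservices.values())
total_assigned = sum(len(fs) for fs in microservices.values())
assert covered == monolith_features
assert total_assigned == len(monolith_features)  # no feature owned twice
```

Different companies would draw these boundaries differently; the useful habit is checking that the boundaries, wherever drawn, are complete and non-overlapping.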
One of the key advantages of a microservices architecture is the ability to deploy individual services independently. Independent deployment saves time and money and reduces downtime, since we don't have to redeploy the whole application. In this approach, each microservice has its own codebase, deployment pipeline, and release schedule.
Independent deployment means you can update, modify, or scale a single service without affecting the entire application. Changes to one service don't require redeploying the whole system.
For example, consider our travel application developed in a microservices style. It consists of five microservices: Hotel Booking, Flight Booking, Train Booking, Payment, and User Authentication and SignUp. If the Payment service needs an upgrade or a bug fix, developers can modify only the Payment service's code, then test and deploy those changes independently. The other four services remain untouched and continue to operate without interruption.
This approach offers significant flexibility and efficiency in development and deployment processes, allowing teams to work more autonomously and respond quickly to specific service needs without impacting the entire application.
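The idea can be sketched with a toy version registry. The service names and version numbers are hypothetical; the point is that rolling out a new version of one service leaves every other service's version untouched.

```python
# Hypothetical registry of independently deployable services and versions.
services = {
    "hotel-booking": "1.4.0",
    "flight-booking": "2.1.3",
    "train-booking": "1.0.7",
    "payment": "3.2.0",
    "auth": "1.9.1",
}

def deploy(registry, name, new_version):
    """Roll out a new version of one service; all others stay as they are."""
    updated = dict(registry)
    updated[name] = new_version
    return updated

# Ship a payment bug fix: only the payment entry changes.
after = deploy(services, "payment", "3.2.1")
```

Contrast this with the monolith, where the equivalent of this registry is a single version number covering everything, so any fix means re-releasing the whole application.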
In our travel application microservices, different services may experience varying levels of demand. For example, the Flight Booking service might see heavy traffic during holiday seasons while the Train Booking service stays relatively quiet. Because each service runs independently, we can scale only the busy ones, efficiently allocating resources where they're most needed and optimizing both performance and costs.
In a microservices architecture, each service has its independent codebase. This means we can write different services in different technologies based on which tech stack suits each service best.
For example, in our travel application, each microservice can use the most suitable technology stack for its specific requirements: one team might build its service in Java with Spring Boot while another uses Python with Django or JavaScript with Node.js.
Each service can also use the database that best fits its data model and query patterns. For instance, the Flight Booking Service might use MongoDB for flexible schema design, while the Payment Service could use PostgreSQL for its ACID compliance.
This technology flexibility allows each team to choose the best tools for their specific service, optimize performance where needed, and even adopt new technologies for individual services without affecting the entire application.
When we develop individual microservices, such as a hotel booking service, flight booking service, train booking service, payment service, and sign-in/sign-up service, the question arises: how do these microservices connect and interact with each other? For instance, if a user wants to book a hotel, the hotel booking service will need to communicate with the payment service to complete the booking. Thus, microservices need to interact with each other.
There are three major ways for microservices to communicate:
In synchronous communication, microservices interact with each other in real time through API calls. Each microservice exposes an API endpoint, which other microservices can access using standard protocols like HTTP or HTTPS. The caller sends a request and then waits, blocked, until the response arrives before it continues its own work.
In summary, synchronous communication in microservices uses real-time API calls to exchange information and perform actions, following a simple and direct request-response pattern.
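The request-response pattern can be demonstrated end to end with Python's standard library. This is a sketch, not production code: the Payment service below is a tiny HTTP server, and the Hotel Booking side is just a blocking HTTP client call (endpoint path and payload fields are made up for illustration).

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

# Hypothetical Payment service exposing one HTTP endpoint.
class PaymentHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"status": "paid", "amount": payload["amount"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example's output quiet

# Start the Payment service on an ephemeral port in a background thread.
server = HTTPServer(("127.0.0.1", 0), PaymentHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The Hotel Booking service makes a synchronous call: urlopen blocks
# until the Payment service replies.
req = Request(
    f"http://127.0.0.1:{port}/pay",
    data=json.dumps({"amount": 120}).encode(),
    headers={"Content-Type": "application/json"},
)
with urlopen(req) as resp:
    result = json.loads(resp.read())

print(result["status"])  # the caller was blocked until this reply arrived
server.shutdown()
```

The defining trait here is the `urlopen` call: the booking flow cannot proceed until the payment response comes back, which keeps the logic simple but couples the caller's latency and availability to the callee's.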
Asynchronous communication allows microservices to interact without waiting for an immediate response. This can be achieved using message brokers like Apache Kafka and RabbitMQ: instead of calling another service directly, a service publishes a message to the broker and moves on, and the receiving service picks the message up and processes it whenever it's ready.
In summary, asynchronous communication using message brokers like Apache Kafka and RabbitMQ allows microservices to talk to each other without waiting for immediate responses, making the system more scalable and reliable, though it adds some complexity and potential delays.
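The fire-and-forget pattern can be illustrated without a real broker by using an in-process queue as a stand-in for a Kafka topic or RabbitMQ queue (the event fields are hypothetical):

```python
import queue
import threading

broker = queue.Queue()   # in-process stand-in for a Kafka/RabbitMQ queue
processed = []

def payment_worker():
    """Consumer side: the Payment service drains events at its own pace."""
    while True:
        event = broker.get()
        if event is None:          # sentinel: shut the worker down
            break
        processed.append(f"charged {event['amount']} for {event['booking_id']}")
        broker.task_done()

worker = threading.Thread(target=payment_worker)
worker.start()

# Producer side: the Hotel Booking service publishes and immediately
# moves on; it never waits for the payment to be processed.
broker.put({"booking_id": "H-42", "amount": 120})

broker.join()      # only this demo waits, so we can inspect the result
broker.put(None)   # stop the worker
worker.join()

print(processed)   # ['charged 120 for H-42']
```

Compared with the synchronous version, the publisher here returns instantly and the two services are decoupled in time: if the Payment worker is slow or briefly down, bookings still get queued rather than failing, at the cost of not knowing the payment outcome right away.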
A service mesh is a system that helps microservices talk to each other more efficiently and securely. It uses small helper programs, called sidecar proxies, that sit next to each microservice. These proxies handle all the communication between services, making sure data is transferred safely and quickly. The service mesh can balance the load, ensure secure connections, and provide detailed monitoring and logging, all without requiring changes to the microservices themselves. This makes it easier to manage and scale large applications while keeping everything running smoothly and securely.
A service mesh, like Istio, integrates with Kubernetes to enhance microservice communication. In this setup, Istio uses sidecar proxies alongside each Kubernetes pod, managing how services talk to each other. These proxies handle tasks like load balancing, secure connections, and detailed monitoring. Istio’s control plane oversees these proxies, setting rules and policies for traffic flow. This combination allows Kubernetes to efficiently manage and scale services, while Istio ensures secure, reliable, and observable communication between them.
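The sidecar idea can be sketched in miniature: the service code stays unchanged while a proxy wrapped around it layers on retries and metrics. This is a toy in-process analogy of what a real mesh proxy like Envoy does over the network, with hypothetical names throughout.

```python
# Plain service code: knows nothing about retries, metrics, or security.
def payment_service(amount):
    return {"status": "paid", "amount": amount}

class SidecarProxy:
    """Toy sidecar: intercepts calls to add retries and basic metrics."""

    def __init__(self, target, retries=2):
        self.target = target
        self.retries = retries
        self.metrics = {"calls": 0, "failures": 0}

    def call(self, *args, **kwargs):
        for _attempt in range(self.retries + 1):
            self.metrics["calls"] += 1
            try:
                return self.target(*args, **kwargs)
            except ConnectionError:
                self.metrics["failures"] += 1
        raise ConnectionError("service unreachable after retries")

# All traffic goes through the proxy, not to the service directly.
proxy = SidecarProxy(payment_service)
result = proxy.call(120)
```

The key property mirrored here is that `payment_service` itself was never modified: retries, failure counting, and (in a real mesh) TLS and load balancing are added entirely from the outside, which is what lets a mesh manage hundreds of services uniformly.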
Microservices can be complex to develop because they require breaking down a large application into many smaller services. Each of these services needs to be independently designed, developed, and tested, which can be challenging. Developers need to ensure that each microservice can communicate effectively with others and handle failures gracefully. This added complexity can make the development process more time-consuming and require specialized skills and tools.
Managing microservices can be overwhelming due to the sheer number of services involved. Each service has its own codebase, database, and deployment pipeline, which means there are many moving parts to keep track of. Ensuring that all services are running smoothly, updating them, and troubleshooting issues can require significant effort and coordination. This management overhead can be a burden, especially for small teams.
Running multiple microservices can be more expensive than maintaining a single monolithic application. Each service may need its own server or container, and you might need more resources for networking, monitoring, and security. Additionally, the complexity of the infrastructure often requires more advanced and costly tools to manage everything effectively. This can lead to higher infrastructure costs, which can be a significant consideration for startups and small teams with limited budgets.
Because of these challenges, small teams or startups often prefer not to use microservice architecture. They may find it more practical to start with a simpler monolithic design, which is easier to develop, manage, and afford.
Cheers!
Happy Coding.