Monolithic vs Microservices
Monolithic vs Microservices
Gaurav Sharma
July 16, 2024

Intro

Ever heard the words "monolithic" and "microservices" tossed around in tech conversations? Don't worry if they sound a bit confusing – you're not alone! These terms are hot topics in the world of software design, and for good reason.

In this article, we're going to break down what monolithic and microservices mean. We'll compare them in simple terms and talk about when you might want to use each one.

Monolithic and microservices are both types of software architecture. In simple terms, architecture is just a fancy word for the overall design of a computer program - it's the plan that lays out how all the pieces of the software will fit and work together. These are just different ways to build web applications.

We'll explore each architecture with the help of an example.

To understand these architectures, we'll design an online travel booking web application that offers hotel reservations, flight bookings, and train ticket purchases, along with payment processing and user account management.

We'll focus on the backend logic of this application throughout.

Monolithic

In a monolithic architecture, we would design the application as a single, unified system. All the code for every feature would reside within one codebase (project). This includes:

  1. Hotel booking functionality
  2. Flight reservation system
  3. Train ticket booking
  4. Payment processing
  5. User authentication (sign-up and login)

The entire codebase, encompassing all these functions, operates as one cohesive unit.

In a monolithic architecture, you build and deploy the entire application as one single unit. This means you have to use just one technology stack for the whole app, whether it's Java with Spring Boot, Python with Django, JavaScript with Node.js, or any other combination.
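To make this concrete, here is a minimal Python sketch of what "everything in one codebase" looks like. All the names and amounts below are made up for illustration; the point is that every feature is just a function call away, in the same process and the same deployment.

```python
# A single codebase: every feature of the travel app lives in one project
# and ships as one deployable unit. All names here are illustrative.

def charge(user_id, amount):
    # Payment logic — just another function in the same process, no network hop.
    print(f"charged {user_id} ${amount}")

def book_hotel(user_id, hotel_id):
    charge(user_id, amount=120)   # calls the payment code directly
    return f"hotel {hotel_id} booked for {user_id}"

def book_flight(user_id, flight_id):
    charge(user_id, amount=300)
    return f"flight {flight_id} booked for {user_id}"

def book_train(user_id, train_id):
    charge(user_id, amount=45)
    return f"train {train_id} booked for {user_id}"

def sign_up(email, password):
    # User management lives in the same codebase too.
    return {"email": email, "active": True}

print(book_hotel("alice", "H-7"))
```

Because everything shares one process, a bug in `charge` would take hotel, flight, and train bookings down together — which is exactly the single-point-of-failure problem discussed next.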

Monolithic architecture is often the go-to choice for small startups, personal projects, or companies that don't deal with a huge user base.

Benefits of Monolithic Architecture

Easier to Develop:

  • You work with a single codebase, which means less complexity to manage.
  • All parts of your application are in one place, making it simpler to understand the overall structure.
  • When you need to add new features, you can easily integrate them into the existing codebase.
  • Testing is more straightforward because you can run all tests in one go.

Easier to Manage:

  • Deployment is simpler because you're dealing with just one application.
  • You only need to monitor and maintain a single system.

These advantages make monolithic architecture a practical choice for smaller projects or teams that want to get their application up and running quickly without dealing with the complexities of more distributed systems.

Challenges with Monolithic Architecture

Some major issues that come with using a monolithic architecture include:

Single Point of Failure

In a monolithic setup, all parts of the application are tightly interconnected. This means if something goes wrong in one area—like a bug or performance issue—it can affect the entire system. Imagine if a glitch in one feature causes the whole app to crash, leaving users unable to access anything. This dependency on a single codebase makes the system vulnerable to widespread failures, impacting reliability and user experience. It also limits flexibility in updating and scaling different parts of the app independently, which can lead to more downtime and operational headaches.

Redeployment

Imagine we need to make a minor update to improve the payment functionality in our application. Even though it's a small change, we have to redeploy the entire app. This can be incredibly frustrating. A simple tweak to one part of the system forces us to go through the whole deployment process again, involving all components of the app, even those not affected by the change. This not only wastes time and resources but also increases the risk of introducing new bugs and downtime, making the process unnecessarily cumbersome.

Scaling Issues

When user traffic spikes, a monolithic application can struggle to cope. Every part of the app—from hotel bookings to payments—feels the strain on the single server. This overload leads to slow responses, timeouts, and errors, frustrating users.

To manage increased traffic, you typically need to scale horizontally by deploying multiple copies of the entire app across many servers.

However, this method is inefficient because it requires replicating the entire application, even for parts that don't need more resources. And with scaling, comes a hefty bill to manage.

Balancing performance needs with cost efficiency becomes challenging. Scaling down during quieter periods also poses challenges, including safely decommissioning excess servers without disrupting service and managing data consistency.

These challenges highlight why monolithic architectures struggle with fluctuating loads and why organizations often turn to more flexible solutions like microservices, which offer better scalability and cost management capabilities.

Tight Interdependence (Rigidity)

If too many developers are working on the same codebase, even making a small change can become problematic. Each change can impact other parts of the application, leading to conflicts and dependencies that need to be resolved. This often results in longer development times, as developers must coordinate closely, perform extensive testing, and handle merge conflicts. The interdependence of various components means that a small tweak by one developer might require adjustments or fixes in seemingly unrelated parts of the code, complicating the development process and slowing down progress.

Solution

Enter microservices:

And one of the pioneers in adopting microservices was the renowned media streaming platform, Netflix. Originally built on a monolithic architecture, Netflix faced significant challenges as its user base grew exponentially. With millions of subscribers globally streaming movies and TV shows simultaneously, the monolithic approach struggled to handle the increasing traffic efficiently.

This led Netflix to innovate and transition towards a microservices architecture, marking a pivotal shift in how modern applications are designed and operated. Today, Netflix operates using hundreds of microservices to power their application. This architectural shift has become a standard for high-traffic applications.

Netflix’s adoption of microservices revolutionized the way large-scale applications handle scalability, flexibility, and user experience.

For more information, check out the blogs published by Netflix's engineering team.

Similarly, Atlassian noted in their monolithic vs microservices blog:

In January 2016, we had about 15 total microservices. Now we have more than 1300. We moved 100K customers to the cloud, built a new platform along the way, transformed our culture, and ended up with new tools. We have happier, autonomous teams and a better DevOps culture. Microservices may not be for everyone. A legacy monolith may work perfectly well, and breaking it down may not be worth the trouble. But as organizations grow and the demands on their applications increase, microservices architecture can be worthwhile.

Reference: Microservices vs. monolithic architecture | Atlassian

Microservices

To sum up microservices in one phrase: don't put all your eggs in one basket.

In a microservices setup, we take a big application and break it down into smaller, manageable pieces, each handling a specific function. These smaller pieces, or services, operate independently: each has its own codebase and can be developed and deployed on its own. This keeps them loosely coupled, allowing for greater flexibility and easier management.

Microservices architecture is generally not adopted by small teams or small organizations. It is more commonly implemented by large companies with millions of users and thousands of developers, such as Amazon, Uber, Netflix, Airbnb, etc.

Deciding the Number of Microservices

When transitioning from a monolithic application to a microservices architecture, we need to divide the single, unified application into multiple smaller, independent services.

But the question arises: into how many independent services should a monolithic application be divided?

There's no exact answer or strict rule for this—it varies from company to company based on their specific business needs. For example, in our monolithic travel application, the company might choose to split it into smaller services like payments, hotel bookings, flight bookings, authentication, and train bookings. This way, the monolithic travel application becomes five separate microservices, each handling a specific part of the application and working together.
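The split described above can be sketched in Python. Each entry below stands in for a separate codebase with its own team, database, and deployment pipeline; the class and service names are illustrative, not a real framework.

```python
# Illustrative split of the monolithic travel app into five services.
# Each Microservice object stands in for an independent codebase.

class Microservice:
    def __init__(self, name):
        self.name = name
        self.database = {}   # each service owns its own data store

services = {name: Microservice(name) for name in
            ["payments", "hotel-bookings", "flight-bookings",
             "authentication", "train-bookings"]}

# The services share nothing: hotel-bookings cannot reach into the
# payments database directly — it has to call the Payment service's API.
services["hotel-bookings"].database["H-7"] = {"guest": "alice"}
print(sorted(services))
```

The key property to notice is that writing to one service's database leaves every other service's state untouched.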

Benefits of Using Microservices

Independent Deployment (Isolated Development and Deployment)

One of the key advantages of a microservices architecture is the ability to deploy individual services independently. Because we don't have to redeploy the whole application for every change, this saves significant cost, time, and downtime. In this approach, each microservice has its own:

  • Codebase
  • Development team
  • Database
  • CI/CD pipeline

Independent deployment means you can update, modify, or scale a single service without affecting the entire application. Changes to one service don't require redeploying the whole system.

For example, consider our travel application developed in a microservices style. It consists of five microservices: Hotel Booking, Flight Booking, Train Booking, Payment, and User Authentication and Sign-Up. If the Payment service needs an upgrade or a bug is found in it, developers can modify only that service's code, then test and deploy the changes independently. The other four services remain untouched and continue to operate without interruption.

This approach offers significant flexibility and efficiency in development and deployment processes, allowing teams to work more autonomously and respond quickly to specific service needs without impacting the entire application.

Flexible Scaling (Granular Scaling)

In our travel application microservices, different services may experience varying levels of demand:

  • Flight Booking Service: During a flash sale on airline tickets, this service might experience a sudden surge in traffic. We can quickly scale up this specific service by adding more instances to handle the increased load without touching the other services.
  • Hotel Booking Service: If it's the off-season for hotel bookings, this service might not need extra resources. We can leave it at its current capacity, saving costs.
  • Payment Service: As more flight bookings are made, this service will see increased activity. We can allocate additional resources to ensure smooth transaction processing.
  • User Authentication and Signup Service: With more users accessing the platform for flight bookings, we might need to moderately scale this service to handle the increased logins and new account creation.
  • Train Booking Service: If train travel is less popular during this period, we can keep this service at its base capacity.

This approach allows us to efficiently allocate resources where they're most needed, optimizing performance and costs.
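A simple way to see granular scaling is to compute replica counts per service from current load. The scaling rule and the load numbers below are hypothetical, chosen only to mirror the scenario above; real systems delegate this to an autoscaler.

```python
import math

# Hypothetical load during a flight flash sale, in requests per second.
current_load = {"flight-booking": 950, "hotel-booking": 40,
                "payment": 430, "auth": 210, "train-booking": 15}

def desired_replicas(requests_per_sec, per_instance_capacity=100):
    """Toy rule: one instance per 100 req/s, with a minimum of one."""
    return max(1, math.ceil(requests_per_sec / per_instance_capacity))

plan = {svc: desired_replicas(load) for svc, load in current_load.items()}
print(plan)   # flight-booking scales out to 10 while train-booking stays at 1
```

Contrast this with the monolith, where the only option would be 10 copies of the entire application, train bookings and all.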

Technology Flexibility

In a microservices architecture, each service has its independent codebase. This means we can write different services in different technologies based on which tech stack suits each service best.

For example, in our travel application, each microservice can use the most suitable technology stack for its specific requirements:

  • Flight Booking Service: This could be built with Node.js and Express for fast, non-blocking I/O operations, ideal for handling numerous concurrent requests for flight searches.
  • Hotel Booking Service: Might use Java with Spring Boot, leveraging its robust ecosystem for complex business logic related to room availability and pricing.
  • Train Booking Service: This could be implemented in Python with Django, taking advantage of its rapid development capabilities and data analysis libraries for route optimization.
  • Payment Service: Might be developed in Go, utilizing its excellent performance for high-concurrency scenarios and its strong security features for handling sensitive financial data.
  • User Authentication and Signup Service: Could use .NET Core, benefiting from its built-in security features and easy integration with identity providers.

Each service can also use the database that best fits its data model and query patterns. For instance, the Flight Booking Service might use MongoDB for flexible schema design, while the Payment Service could use PostgreSQL for its ACID compliance.

This technology flexibility allows each team to choose the best tools for their specific service, optimize performance where needed, and even adopt new technologies for individual services without affecting the entire application.

But how do microservices interact with each other?

When we develop individual microservices, such as a hotel booking service, flight booking service, train booking service, payment service, and sign-in/sign-up service, the question arises: how do these microservices connect and interact with each other? For instance, if a user wants to book a hotel, the hotel booking service will need to communicate with the payment service to complete the booking. Thus, microservices need to interact with each other.

There are three major ways for microservices to communicate:

  • Synchronous Communication
  • Asynchronous Communication
  • Service Mesh

Synchronous Communication

In synchronous communication, microservices interact with each other in real time through API calls. Each microservice exposes an API endpoint, which other microservices can access using standard protocols like HTTP or HTTPS. Here's a detailed explanation of how it works:

  • API Endpoints: Each microservice has defined API endpoints that other services can call. These endpoints are like entry points that allow external access to the functionalities provided by the microservice.
  • Request-Response Mechanism: The interaction between microservices follows a request-response model. When a microservice needs information or action from another service, it sends an HTTP request to the specific API endpoint of the target service.
  • Real-Time Interaction: Upon receiving the request, the target microservice processes it and sends back an HTTP response. This response is immediate and contains the required data or confirmation of the action. The calling service waits (blocks) until it receives this response before proceeding further.
  • Example Scenario:
    • Hotel Booking Service: A user wants to book a hotel. The hotel booking service needs to charge the user's credit card.
    • Interaction with Payment Service: The hotel booking service sends an HTTP request to the payment service’s API endpoint with payment details.
    • Payment Service Response: The payment service processes the payment and sends back an HTTP response indicating success or failure.
    • Proceeding with Booking: Based on the response from the payment service, the hotel booking service proceeds with confirming the hotel booking.
  • Advantages:
    • Simplicity: The request-response model is straightforward to implement.
    • Immediate Feedback: Services get immediate feedback about the success or failure of their requests, allowing for quick error handling.
  • Disadvantages:
    • Tight Coupling: Services are tightly coupled during the interaction, which can lead to potential delays if the target service is slow or unresponsive.
    • Scalability Issues: Synchronous communication can become a bottleneck under high load, as services wait for responses from each other.

In summary, synchronous communication in microservices uses real-time API calls to exchange information and perform actions, following a simple and direct request-response pattern.
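The request-response flow above can be sketched with Python's standard library. The Payment service here is a stand-in HTTP server with a single endpoint; the URL path and JSON fields are made up for illustration.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

class PaymentHandler(BaseHTTPRequestHandler):
    """Stand-in for the Payment service: one POST endpoint that 'charges' a card."""
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"status": "success",
                           "amount": payload["amount"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def book_hotel(payment_url, amount):
    """Hotel Booking service: sends the request, then blocks until Payment replies."""
    req = Request(payment_url,
                  data=json.dumps({"amount": amount}).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:   # synchronous: execution waits right here
        result = json.loads(resp.read())
    return "confirmed" if result["status"] == "success" else "failed"

# Run the Payment service on a free local port and call it once.
server = HTTPServer(("127.0.0.1", 0), PaymentHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
result = book_hotel(f"http://127.0.0.1:{server.server_port}/payments", 120)
server.shutdown()
print(result)  # confirmed
```

The `urlopen` call is where the tight coupling lives: if the Payment service is slow or down, the Hotel Booking service sits blocked at that line.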

Asynchronous Communication

Using Message Brokers

Asynchronous communication allows microservices to interact without waiting for an immediate response. This can be achieved using message brokers like Apache Kafka and RabbitMQ. Here’s a simple explanation:

  • Message Brokers: These are systems that help microservices send and receive messages without needing to be directly connected. They act as a middleman.
  • How It Works:
    • Producer: A microservice that needs to send a message (for example, confirming a payment) sends it to the message broker.
    • Broker: The message broker holds onto the message and makes sure it gets to the right place.
    • Consumer: The microservice that needs to receive the message gets it from the message broker and processes it when it's ready.
  • Example Scenario:
    • Hotel Booking Service: A user books a hotel, and the hotel booking service needs to trigger a payment process.
    • Interaction with Payment Service: The hotel booking service sends a message to the message broker saying a payment is needed.
    • Payment Service Processing: The payment service gets the message from the broker and processes the payment. Once done, it can send another message back through the broker saying the payment was successful.
    • Hotel Booking Service Response: The hotel booking service then receives the payment success message and confirms the hotel booking.
  • Advantages:
    • Decoupling: Services don't need to wait for each other or even know about each other directly.
  • Scalability: Services can handle more work since they process messages as they come in, at their own pace.
    • Reliability: Message brokers can store messages and retry sending them if something goes wrong, making the system more reliable.
  • Disadvantages:
    • Complexity: Setting up and managing message brokers adds some complexity.
    • Latency: There might be small delays in processing messages since they aren't handled instantly.

In summary, asynchronous communication using message brokers like Apache Kafka and RabbitMQ allows microservices to talk to each other without waiting for immediate responses, making the system more scalable and reliable, though it adds some complexity and potential delays.
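The producer/broker/consumer pattern above can be simulated in plain Python. Here an in-memory queue per topic stands in for a real broker like Kafka or RabbitMQ, so this is a sketch of the message flow, not a broker integration.

```python
import queue
import threading

# Stand-in for the message broker: one in-memory queue per topic.
broker = {"payments.requested": queue.Queue(),
          "payments.completed": queue.Queue()}

def payment_service():
    """Consumer: pulls a payment request off the broker, processes it,
    and publishes a result message — never talks to the producer directly."""
    msg = broker["payments.requested"].get()   # blocks until a message arrives
    broker["payments.completed"].put(
        {"booking_id": msg["booking_id"], "status": "success"})

def book_hotel(booking_id, amount):
    """Producer: publishes a payment request and returns immediately."""
    broker["payments.requested"].put(
        {"booking_id": booking_id, "amount": amount})

threading.Thread(target=payment_service, daemon=True).start()
book_hotel("H-42", 120)                        # fire and forget — no waiting
confirmation = broker["payments.completed"].get(timeout=5)
print(confirmation["status"])
```

Note that `book_hotel` returns as soon as the message is queued; the confirmation arrives later on a separate topic, which is exactly the decoupling (and the extra latency) described above.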

Service Mesh: Brief overview

A service mesh is a system that helps microservices talk to each other more efficiently and securely. It uses small helper programs, called sidecar proxies, that sit next to each microservice. These proxies handle all the communication between services, making sure data is transferred safely and quickly. The service mesh can balance the load, ensure secure connections, and provide detailed monitoring and logging, all without requiring changes to the microservices themselves. This makes it easier to manage and scale large applications while keeping everything running smoothly and securely.

A service mesh, like Istio, integrates with Kubernetes to enhance microservice communication. In this setup, Istio uses sidecar proxies alongside each Kubernetes pod, managing how services talk to each other. These proxies handle tasks like load balancing, secure connections, and detailed monitoring. Istio’s control plane oversees these proxies, setting rules and policies for traffic flow. This combination allows Kubernetes to efficiently manage and scale services, while Istio ensures secure, reliable, and observable communication between them.

Issues with Microservices

Complex to Develop

Microservices can be complex to develop because they require breaking down a large application into many smaller services. Each of these services needs to be independently designed, developed, and tested, which can be challenging. Developers need to ensure that each microservice can communicate effectively with others and handle failures gracefully. This added complexity can make the development process more time-consuming and require specialized skills and tools.

Management Overhead

Managing microservices can be overwhelming due to the sheer number of services involved. Each service has its own codebase, database, and deployment pipeline, which means there are many moving parts to keep track of. Ensuring that all services are running smoothly, updating them, and troubleshooting issues can require significant effort and coordination. This management overhead can be a burden, especially for small teams.

High Infrastructure Cost

Running multiple microservices can be more expensive than maintaining a single monolithic application. Each service may need its own server or container, and you might need more resources for networking, monitoring, and security. Additionally, the complexity of the infrastructure often requires more advanced and costly tools to manage everything effectively. This can lead to higher infrastructure costs, which can be a significant consideration for startups and small teams with limited budgets.

Because of these challenges, small teams or startups often prefer not to use microservice architecture. They may find it more practical to start with a simpler monolithic design, which is easier to develop, manage, and afford.

Summary: Choosing Between Monolithic and Microservices

When to Use Monolithic Architecture:

  • Simple Apps: Best for small, straightforward applications.
  • Small Teams: It is easier for small teams like startups or new companies to manage and develop.
  • Lower Costs: Cheaper to start with since it’s less complex.
  • Quick Development: Faster to build and launch without dealing with multiple services.
  • Easy Management: Everything is in one place, making it simpler to manage.

When to Opt for Microservices:

  • Big and Complex Apps: Great for large applications with lots of different features.
  • Need to Scale: Perfect for scaling parts of your app independently.
  • Multiple Teams: Allows different teams to work on different parts of the app at the same time.
  • Frequent Updates: Ideal if you need to update parts of your app often without affecting the whole system.
  • High Reliability: If one part fails, it doesn’t take down the whole app.
  • Different Tech Needs: Let you use the best tools and technologies for each part of your app.

Cheers!

Happy Coding.
