Systems design is the process of defining the architecture, modules, interfaces, and data for a system to satisfy specified requirements. Systems design could be seen as the application of systems theory to product development.
I was recently working on an application that had a sharded MySQL database. While working on this application, I needed to generate unique IDs that could be used as the primary keys in the tables.
When you’re working with a single MySQL database, you can simply use an auto-increment ID as the primary key. But this won’t work in a sharded MySQL database.
So I looked at various existing solutions for this, and finally wrote a simple 64-bit unique ID generator that was inspired by a similar service by Twitter called Twitter snowflake.
In this article, I’ll share a simplified version of the unique ID generator that will work for any use-case of generating unique IDs in a distributed environment, not just sharded databases.
I’ll also outline other existing solutions and discuss their pros and cons.
Existing Solutions
UUID
UUIDs are 128-bit hexadecimal numbers that are globally unique. The chance of the same UUID being generated twice is negligible.
The problem with UUIDs is that they are very big in size and don’t index well. When your dataset increases, the index size increases as well and the query performance takes a hit.
MongoDB’s ObjectId
MongoDB’s ObjectIDs are 12-byte (96-bit) hexadecimal numbers that are made up of –
a 4-byte epoch timestamp in seconds,
a 3-byte machine identifier,
a 2-byte process id, and
a 3-byte counter, starting with a random value.
This is smaller than the earlier 128-bit UUID, but it is still larger than what we normally have in a single MySQL auto-increment field (a 64-bit bigint value).
Database Ticket Servers
This approach uses a centralized database server to generate unique incrementing IDs. It’s like a centralized auto-increment. This approach is used by Flickr.
The problem with this approach is that the ticket server can become a write bottleneck. Moreover, you introduce one more component in your infrastructure that you need to manage and scale.
Twitter Snowflake
Twitter snowflake is a dedicated network service for generating 64-bit unique IDs at high scale. The IDs generated by this service are roughly time sortable.
The IDs are made up of the following components:
Epoch timestamp in millisecond precision – 41 bits (gives us 69 years with a custom epoch)
Configured machine id – 10 bits (gives us up to 1024 machines)
Sequence number – 12 bits (A local counter per machine that rolls over every 4096)
The extra 1 bit is reserved for future purposes. Since the IDs use timestamp as the first component, they are time sortable.
The IDs generated by Twitter snowflake fit in 64 bits and are time sortable, which is great. That’s what we want.
But if we use Twitter snowflake, we’ll again be introducing another component into our infrastructure that we need to maintain.
Distributed 64-bit unique ID generator inspired by Twitter Snowflake
Finally, I wrote a simple sequence generator that generates 64-bit IDs based on the concepts outlined in the Twitter snowflake service.
The IDs generated by this sequence generator are composed of –
Epoch timestamp in millisecond precision – 41 bits. The maximum timestamp that can be represented using 41 bits is 2^41 – 1, or 2199023255551, which comes out to be Wednesday, September 7, 2039 3:47:35.551 PM. That gives us 69 years with respect to a custom epoch.
Node ID – 10 bits. This gives us 1024 nodes/machines.
Local counter per machine – 12 bits. The counter’s max value would be 4095.
The remaining 1 bit is the sign bit, and it is always set to 0.
Your microservices can use this Sequence Generator to generate IDs independently. This is efficient and fits in the size of a bigint.
Here is the complete program –
import java.net.NetworkInterface;
import java.security.SecureRandom;
import java.time.Instant;
import java.util.Enumeration;

/**
 * Distributed Sequence Generator.
 * Inspired by Twitter snowflake: https://github.com/twitter/snowflake/tree/snowflake-2010
 *
 * This class should be used as a Singleton.
 * Make sure that you create and reuse a single instance of SequenceGenerator per node in your distributed system cluster.
 */
public class SequenceGenerator {
    private static final int UNUSED_BITS = 1; // Sign bit, Unused (always set to 0)
    private static final int EPOCH_BITS = 41;
    private static final int NODE_ID_BITS = 10;
    private static final int SEQUENCE_BITS = 12;

    private static final int maxNodeId = (int) (Math.pow(2, NODE_ID_BITS) - 1);
    private static final int maxSequence = (int) (Math.pow(2, SEQUENCE_BITS) - 1);

    // Custom Epoch (January 1, 2015 Midnight UTC = 2015-01-01T00:00:00Z)
    private static final long CUSTOM_EPOCH = 1420070400000L;

    private final int nodeId;

    private volatile long lastTimestamp = -1L;
    private volatile long sequence = 0L;

    // Create SequenceGenerator with a nodeId
    public SequenceGenerator(int nodeId) {
        if (nodeId < 0 || nodeId > maxNodeId) {
            throw new IllegalArgumentException(String.format("NodeId must be between %d and %d", 0, maxNodeId));
        }
        this.nodeId = nodeId;
    }

    // Let SequenceGenerator generate a nodeId
    public SequenceGenerator() {
        this.nodeId = createNodeId();
    }

    public synchronized long nextId() {
        long currentTimestamp = timestamp();

        if (currentTimestamp < lastTimestamp) {
            throw new IllegalStateException("Invalid System Clock!");
        }

        if (currentTimestamp == lastTimestamp) {
            sequence = (sequence + 1) & maxSequence;
            if (sequence == 0) {
                // Sequence Exhausted, wait till next millisecond.
                currentTimestamp = waitNextMillis(currentTimestamp);
            }
        } else {
            // reset sequence to start with zero for the next millisecond
            sequence = 0;
        }

        lastTimestamp = currentTimestamp;

        long id = currentTimestamp << (NODE_ID_BITS + SEQUENCE_BITS);
        id |= (nodeId << SEQUENCE_BITS);
        id |= sequence;
        return id;
    }

    // Get current timestamp in milliseconds, adjust for the custom epoch.
    private static long timestamp() {
        return Instant.now().toEpochMilli() - CUSTOM_EPOCH;
    }

    // Block and wait till next millisecond
    private long waitNextMillis(long currentTimestamp) {
        while (currentTimestamp == lastTimestamp) {
            currentTimestamp = timestamp();
        }
        return currentTimestamp;
    }

    private int createNodeId() {
        int nodeId;
        try {
            StringBuilder sb = new StringBuilder();
            Enumeration<NetworkInterface> networkInterfaces = NetworkInterface.getNetworkInterfaces();
            while (networkInterfaces.hasMoreElements()) {
                NetworkInterface networkInterface = networkInterfaces.nextElement();
                byte[] mac = networkInterface.getHardwareAddress();
                if (mac != null) {
                    for (int i = 0; i < mac.length; i++) {
                        sb.append(String.format("%02X", mac[i]));
                    }
                }
            }
            nodeId = sb.toString().hashCode();
        } catch (Exception ex) {
            nodeId = (new SecureRandom().nextInt());
        }
        nodeId = nodeId & maxNodeId;
        return nodeId;
    }
}
The above generator uses the system’s MAC address to create a unique identifier for the node. You can also supply an explicit node ID to the sequence generator; that will guarantee uniqueness.
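For example, here is a minimal usage sketch (the node ID 123 and the loop are purely illustrative; as noted above, reuse a single generator instance per node):

public class SequenceGeneratorExample {
    public static void main(String[] args) {
        // One generator per node; 123 is an illustrative node ID within the 0–1023 range.
        SequenceGenerator generator = new SequenceGenerator(123);
        for (int i = 0; i < 3; i++) {
            System.out.println(generator.nextId());
        }
    }
}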
Let’s now understand how it works. Let’s say it’s June 9, 2018 10:00:00 AM GMT. The epoch timestamp for this particular time is 1528538400000.
First of all, we adjust our timestamp with respect to the custom epoch-
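currentTimestamp = 1528538400000 - 1420070400000 = 108468000000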
Now, the first 41 bits of the ID (after the signed bit) will be filled with the epoch timestamp. Let’s do that using a left-shift –
id = currentTimestamp << (10 + 12)
Next, we take the configured node ID and fill the next 10 bits with the node ID. Let’s say that the nodeId is 786 –
id |= nodeId << 12
Finally, we fill the last 12 bits with the local counter. Considering the counter’s next value is 3450, i.e. sequence = 3450, the final ID is obtained like so –
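id |= sequence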
CQRS has been around for a long time, but if you’re not familiar with it, it’s new to you. Take a look at a quick introduction to what it is and how it works.
CQRS, Command Query Responsibility Segregation, is a method for optimizing writes to databases (commands) and reads from them (queries). Nowadays, many companies work with one large database. But these databases weren’t originally built to scale. When they were planned in the 1990s, there wasn’t so much data that needed to be consumed quickly.
In the age of Big Data, many databases can’t handle the growing number of complex reads and writes, resulting in errors, bottlenecks, and slow customer service. This is something DevOps engineers need to find a solution for.
Take a ride-sharing service like Uber or Lyft (just as an example; this is not an explanation of how these companies work). Traditionally (before CQRS), the client app queries the service database for drivers, whose profiles they see on their screens. At the same time, drivers can send commands to the service database to update their profile. The service database needs to be able to crosscheck queries for drivers, user locations, and driver commands about their profile and send the cache to client apps and drivers. These kinds of queries can put a strain on the database.
CQRS to the rescue. CQRS dictates the segregation of complex queries and commands. Instead of all queries and commands going to the database through the same code path, the queries and data manipulation routines are simplified by adding a process in between for the data processing. This reduces stress on the database by enabling easier reading and writing of cache data from the database.
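As a rough sketch of what that segregation can look like in code (all class and method names here are hypothetical, just to show the shape of the split, not a production implementation):

// Write side: commands express intent to change state and can be pushed to a queue or event log.
class PlaceOrderCommand {
    final String orderId;
    final String productId;
    PlaceOrderCommand(String orderId, String productId) {
        this.orderId = orderId;
        this.productId = productId;
    }
}

interface OrderCommandHandler {
    // e.g. publish the command to a queue and store the raw command for later processing
    void handle(PlaceOrderCommand command);
}

// Read side: queries only read from a pre-aggregated view or cache and never mutate state.
class OrderSummary {
    final String orderId;
    final String status;
    OrderSummary(String orderId, String status) {
        this.orderId = orderId;
        this.status = status;
    }
}

interface OrderQueryService {
    OrderSummary findOrder(String orderId); // served from the aggregated cache table
}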
A manifestation of such a segregation would be:
Pushing the commands into a robust and scalable queue platform like Apache Kafka and storing the raw commands in a data warehouse software like AWS Redshift or HP Vertica.
Creating an aggregated cache table where data is queried from and displayed to the users.
The magic happens in the streaming part. Calculations based on Google’s MapReduce technology enable quick, advanced, distributed computation across many machines. As a result, large amounts of data are processed quickly, and the right data gets to the right users on time.
CQRS can be used by any service that is based on fast-growing data, whether user data (i.e. user profiles) or machine data (i.e. monitoring metrics). If you want to learn more about CQRS, check out this article.
We have all heard the term microservices and perhaps have worked with them in the backend world. Before microservices, there were monolith applications. Back then, team and application growth was leading monolithic applications into a non-scalable dead-end. The codebase was growing, and technologies were getting older. All of this has made migrations and upgrades incredibly painful and frustrating.
An increase in project growth meant that multiple teams would unwittingly create further bottlenecks and inter-team dependencies. Deployments and releases were being micro-managed, but the growing amount of unmanaged technical debt would cause problems further down the line.
The same problems were present in the front-end world too. Again, you can pick the fanciest framework/library of the time and build your application with the best intentions. However, several years later, your tiny little application will inevitably become huge and force you to increase the size of your team. Eventually, this work will be split between multiple teams that will find themselves spending hours defining a release management schedule.
It’s around this time that you will hear about another cool library trending in the industry. Exploring the complexities of how to migrate to a new library while maintaining the old one typically leads to paranoia and eventually makes you give up on the idea because it’s just too much hassle. But you are also faced with a shrinking number of engineers who are willing to work with deprecated technologies. The chosen path has led you to a non-scalable dead-end.
A path to scalability
Try to imagine what the outcome would have looked like had we bravely decided to choose a different path. First, we should know that everything comes with a price, and with great power comes great responsibility. That power is the implementation of micro-frontends, which is
an architectural pattern to develop and maintain a web application as a composition of small front-end applications
With back-end microservices, we had completely separate services with their separate DB and API. Microservices can be divided by business logic. For example, in an e-commerce app, we can have different microservices for a product catalog, user’s shopping cart, orders, etc. Without micro-frontend applications, it would have been a monolithic front-end application communicating with different back-end microservices (Fig. 1).
Figure 1. Application with monolith front-end and back-end microservices.
By splitting up the front-end into micro-frontends, we can align front-end and back-end architecture approaches and have a more robust and elegant full-stack architecture (Fig. 2).
Figure 2. End-to-end front-end and back-end microservices.
With this approach, we can reach several significant benefits:
Team and technology autonomy: each team would have its own mission on the product and could select its own technology stack;
Small, maintainable, and de-coupled codebases: each team’s codebase would remain small and isolated from others’ codebases;
Independent release management: each team would be able to independently release its own part of the application saving a lot of time on inter-team communication and release schedule;
Painless upgrades and migrations: having small codebases and being free of inter-team dependency would lead to painless and independent upgrades of application technologies as well as migrations from the old one to a new one;
With the mentioned benefits, there are also challenges attached to implementing the micro-frontends architecture. There will still remain some issues and concerns that will be shared between teams such as web performance, a common design system, etc.
Even though the idea behind micro-services and micro-frontends is very similar, the implementation challenges are slightly different, and we will discuss them in the next section.
Micro-frontends integration concepts
There are three main concepts and problems to be taken into consideration when implementing the micro-frontend architecture.
Routing and page transition
This is one of the most important concepts. Regardless of how many micro-frontends are implemented in our application, it is crucial to handle transitions between micro-frontends (Fig. 3) properly. That is, users must never notice a switch from one micro-frontend application to another when navigating the application.
Figure 3. The transition from one micro-frontend to another.
Page transition, which is also known as routing, can be either server or client-side. Let’s briefly go over each of them.
Server routing: on each transition (usually triggered by a click), there is a request to a server to obtain and serve the whole HTML document to the user. This method eventually leads to a full page reload, which is also known as a hard transition. It has several pros and cons and was a traditional way to handle transitions before the client-side routing evolution.
Client routing: on each transition, only some part of the application is reloaded and operated by JavaScript in the browser on the client-side. Compared to server routing, there is no full page reload; this is also known as a soft transition. It brings a much better user experience and is implemented in almost all modern web applications.
There is an excellent article that covers server and client routing in more detail. With micro-frontends, the problem has various solutions, ranging from simple link transitions (the way routing was handled in the ’90s and early 2000s) to multi-layered client-side routing (a very modern way of having one top-level routing for the whole application, and several low-level routings per micro-frontend). They differ by their implementation complexity, so the selection should be made based on the needs of our application.
Composition
It might be necessary to have a page owned by one team that includes fragments from other teams. This means we should consider the composition of multiple micro-frontends within each other (Fig. 4). For example, we could have an e-commerce web application with a product detail page owned by one team, which should also show the shopping cart content owned by another team.
Figure 4. Composition of multiple micro-frontends.
Composition is a rendering concern which, similarly to routing, can be client and/or server-side:
Server-side rendering (SSR): with SSR or Universal rendering, our page document is being constructed and rendered on the server and served to the user. It is beneficial on the initial page load of an application as it is much more performant than rendering it on the client-side.
Client-side rendering (CSR): With CSR, our application page content is being constructed, rendered, and loaded on the client-side, operated by JavaScript. This type of rendering is used with all modern libraries and frameworks.
There are many cases when having just SSR or CSR will not be enough to meet our app needs (for example, when we need good SEO, a smooth user experience, and a fast content load). In these cases, the application uses both server and client-side rendering.
In most cases, the most relevant and essential content is rendered server-side. It is served to the user first, and afterward the rest of the more dynamic content is handed over to JavaScript, which renders it on the client-side. There is much more to say about CSR and SSR, but it is beyond our topic for now. I highly recommend reading this article to learn more about these concepts and their differences.
Tip: Again, most of the applications use both client and server-side rendering. But there is an easy life-hack that can help you check which parts of an application are rendered on the server-side and which parts on the client-side. Simply go to your browser settings and turn off JavaScript. Then open your desired website, and you will see the content which is loaded on the server-side. Then by turning on JS again, reloading the page will allow JS to render the other parts handed over to the client-side.
As with routing, there are also many different techniques, both server-side (SSI, Zalando, Podium, etc.) and client-side (iframe, Ajax, Web Components, etc.), to handle micro-frontends composition. Which of these to go with depends on our application requirements.
Communication
The last concept of micro-frontend integration is the communication between our micro-frontends (Fig. 5).
Figure 5. Communication between micro-frontends.
Remember that we can have multiple micro-frontends on one page, and interaction with one can eventually lead to others’ changes. For example, let’s imagine we are on the product detail page of an e-commerce application, and we want to add a product to our shopping cart.
On the header (which is another micro-frontend fragment application), there is an icon of our cart, which also has an indicator: the number of added items. Adding the product on the details page should also update the number of items on our header’s shopping cart icon. There can be multiple communication scenarios, such as parent to fragment, fragment to parent, and fragment to fragment, and each kind of communication is handled differently.
Parent to fragment: This case is similar to handling communication between a parent component and its child components. It can be done by passing data to the children via props/attributes. So if our fragment/child is implemented with web components technology, it receives some attributes when it is loaded, and a data change in the parent micro-frontend can trigger passing the changed data down to the fragment/child micro-frontends;
Fragment to parent: In this case, we can use the browser’s native CustomEvents API. So, on the parent side, there can be a subscribed event listener, while inside a fragment, we emit/publish an event on data change. The event emitter will bubble up the data to the listener on the parent side, which will then be handled on the parent micro-frontend.
Fragment to fragment: In this case, we can use the combination of the previous two techniques by emitting events to the parent fragment, which will listen to the changes and pass them to another fragment via props/attributes.
Another way for micro-frontends to communicate is the Broadcast Channel API, which is an implementation of the Publisher/Subscriber design pattern, just like the Custom Events API.
What’s next?
So, we have learned how beneficial micro-frontends can be and what problems they can solve. We also learned what challenges and problems we might face when implementing the micro-frontend architecture. We saw that, based on the three concepts described, various technologies and patterns can be used to achieve full micro-frontend integration.
This should be enough to understand the main concepts and the big picture of micro-frontend architecture. This article is the first part of the series of 3 articles. The next article will be focused on high-level micro-frontend architecture types starting from the simplest and ending with the most complex one. We will also learn the concepts that will help us go with the architecture type that best fits our needs. So, stay tuned.
Ever wondered how large enterprise scale systems are designed? Before major software development starts, we have to choose a suitable architecture that will provide us with the desired functionality and quality attributes. Hence, we should understand different architectures, before applying them to our design.
What is an Architectural Pattern?
According to Wikipedia,
An architectural pattern is a general, reusable solution to a commonly occurring problem in software architecture within a given context. Architectural patterns are similar to software design patterns but have a broader scope.
In this article, I will be briefly explaining the following 10 common architectural patterns with their usage, pros and cons.
Layered pattern
Client-server pattern
Master-slave pattern
Pipe-filter pattern
Broker pattern
Peer-to-peer pattern
Event-bus pattern
Model-view-controller pattern
Blackboard pattern
Interpreter pattern
1. Layered pattern
This pattern can be used to structure programs that can be decomposed into groups of subtasks, each of which is at a particular level of abstraction. Each layer provides services to the next higher layer.
The most commonly found 4 layers of a general information system are as follows.
Presentation layer (also known as UI layer)
Application layer (also known as service layer)
Business logic layer (also known as domain layer)
Data access layer (also known as persistence layer)
Usage
General desktop applications.
E-commerce web applications.
Layered pattern
2. Client-server pattern
This pattern consists of two parties: a server and multiple clients. The server component will provide services to multiple client components. Clients request services from the server, and the server provides relevant services to those clients. Furthermore, the server continues to listen to client requests.
Usage
Online applications such as email, document sharing and banking.
Client-server pattern
3. Master-slave pattern
This pattern consists of two parties: master and slaves. The master component distributes the work among identical slave components, and computes a final result from the results which the slaves return.
Usage
In database replication, the master database is regarded as the authoritative source, and the slave databases are synchronized to it.
Peripherals connected to a bus in a computer system (master and slave drives).
Master-slave pattern
4. Pipe-filter pattern
This pattern can be used to structure systems which produce and process a stream of data. Each processing step is enclosed within a filter component. Data to be processed is passed through pipes. These pipes can be used for buffering or for synchronization purposes.
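A minimal, hypothetical Java sketch of the idea, modelling each filter as a function and the pipes as function composition (all names are illustrative):

import java.util.function.Function;

public class PipeFilterDemo {
    public static void main(String[] args) {
        // Each filter is one processing step; andThen() plays the role of the pipe connecting them.
        Function<String, String> trimFilter = String::trim;
        Function<String, String> upperCaseFilter = String::toUpperCase;
        Function<String, String> exclaimFilter = s -> s + "!";

        Function<String, String> pipeline = trimFilter.andThen(upperCaseFilter).andThen(exclaimFilter);

        System.out.println(pipeline.apply("  hello pipes and filters  ")); // prints: HELLO PIPES AND FILTERS!
    }
}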
Usage
Compilers. The consecutive filters perform lexical analysis, parsing, semantic analysis, and code generation.
Workflows in bioinformatics.
Pipe-filter pattern
5. Broker pattern
This pattern is used to structure distributed systems with decoupled components. These components can interact with each other by remote service invocations. A broker component is responsible for the coordination of communication among components.
Servers publish their capabilities (services and characteristics) to a broker. Clients request a service from the broker, and the broker then redirects the client to a suitable service from its registry.
6. Peer-to-peer pattern
In this pattern, individual components are known as peers. Peers may function both as a client, requesting services from other peers, and as a server, providing services to other peers. A peer may act as a client or as a server or as both, and it can change its role dynamically with time.
7. Event-bus pattern
This pattern primarily deals with events and has 4 major components: event source, event listener, channel and event bus. Sources publish messages to particular channels on an event bus. Listeners subscribe to particular channels. Listeners are notified of messages that are published to a channel to which they have subscribed before.
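A minimal, hypothetical Java sketch of these components (channel names and payload types are illustrative only):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// A toy event bus: sources publish messages to named channels, listeners subscribe to channels.
class EventBus {
    private final Map<String, List<Consumer<String>>> channels = new HashMap<>();

    void subscribe(String channel, Consumer<String> listener) {
        channels.computeIfAbsent(channel, c -> new ArrayList<>()).add(listener);
    }

    void publish(String channel, String message) {
        channels.getOrDefault(channel, List.of()).forEach(listener -> listener.accept(message));
    }
}

A source would call publish("notifications", "build finished"), and any listener that had previously subscribed to the "notifications" channel would be notified.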
Usage
Android development
Notification services
Event-bus pattern
8. Model-view-controller pattern
This pattern, also known as the MVC pattern, divides an interactive application into 3 parts:
model — contains the core functionality and data
view — displays the information to the user (more than one view may be defined)
controller — handles the input from the user
This is done to separate internal representations of information from the ways information is presented to, and accepted from, the user. It decouples components and allows efficient code reuse.
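A minimal, hypothetical Java sketch of the three parts (the counter example and all names are illustrative only):

// Model: contains the core functionality and data.
class CounterModel {
    private int count = 0;
    void increment() { count++; }
    int getCount() { return count; }
}

// View: displays the information to the user.
class CounterView {
    void render(int count) { System.out.println("Count: " + count); }
}

// Controller: handles the input from the user and coordinates model and view.
class CounterController {
    private final CounterModel model;
    private final CounterView view;

    CounterController(CounterModel model, CounterView view) {
        this.model = model;
        this.view = view;
    }

    void onUserClick() {
        model.increment();
        view.render(model.getCount());
    }
}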
Usage
Architecture for World Wide Web applications in major programming languages.
9. Blackboard pattern
This pattern is useful for problems for which no deterministic solution strategies are known. The blackboard pattern consists of 3 main components.
blackboard — a structured global memory containing objects from the solution space
knowledge source — specialized modules with their own representation
control component — selects, configures and executes modules.
All the components have access to the blackboard. Components may produce new data objects that are added to the blackboard. Components look for particular kinds of data on the blackboard, and may find these by pattern matching with the existing knowledge source.
Usage
Speech recognition
Vehicle identification and tracking
Protein structure identification
Sonar signals interpretation.
Blackboard pattern
10. Interpreter pattern
This pattern is used for designing a component that interprets programs written in a dedicated language. It mainly specifies how to evaluate lines of programs, known as sentences or expressions written in a particular language. The basic idea is to have a class for each symbol of the language.
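A minimal, hypothetical Java sketch of that idea for a toy language of additions (class names are illustrative):

// One class per symbol of the language.
interface Expression {
    int interpret();
}

class NumberLiteral implements Expression {
    private final int value;
    NumberLiteral(int value) { this.value = value; }
    public int interpret() { return value; }
}

class Plus implements Expression {
    private final Expression left;
    private final Expression right;
    Plus(Expression left, Expression right) { this.left = left; this.right = right; }
    public int interpret() { return left.interpret() + right.interpret(); }
}

Evaluating the sentence "1 + 2 + 3" then amounts to building the expression tree new Plus(new NumberLiteral(1), new Plus(new NumberLiteral(2), new NumberLiteral(3))) and calling interpret() on it, which returns 6.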
Usage
Database query languages such as SQL.
Languages used to describe communication protocols.
Interpreter pattern
Comparison of Architectural Patterns
The table given below summarizes the pros and cons of each architectural pattern.
Comparison of Architectural Patterns
Hope you found this article useful. I would love to hear your thoughts. 😇
Microservices architecture has become a hot topic in the software backend development world. The ecosystem carries a profound impact on not just the enterprises’ IT function but also in the digital transformation of an entire app business.
The debate of microservices vs monolithic architecture defines a revolutionary shift in how an IT team approaches their software development cycle: whether they go with the approach that brands like Google, Amazon, and Netflix chose, or with the simplicity that a startup at the development stage demands.
In this article, we are going to give startups an answer to which backend architecture they should choose when starting their journey.
Table Of Content:
What is Microservices Architecture?
What is Monolithic Architecture?
Microservices vs Monolithic Architecture: Advantages and Disadvantages
How to Choose Between Monolithic and Microservice Architecture?
Migrating from a Monolithic Architecture to a Microservice Ecosystem
Conclusion
What is Microservices Architecture?
Microservices architecture contains a mix of small and autonomous services, where every service is self-contained and implements a single business capability. It is a distinct approach to developing software systems that focuses on building several single-function modules with clearly defined operations and interfaces. The approach has become a popular trend in the past several years as more and more enterprises look to become Agile and make a shift towards DevOps.
Components of microservices architecture that make it one of the best enterprise architectures:
The services are independent, small, and loosely coupled
Encapsulates a business or customer scenario
Every service has its own codebase
Services can be independently deployed
Services interact with each other using APIs
With the question of what microservices architecture is now answered, let us move on to look into what monolithic architecture is.
What is Monolithic Architecture?
A monolithic application has a single codebase with multiple modules. The modules, in turn, are divided into either technical features or business features. The architecture comes with a single build system that builds the complete application. It also comes with a single deployable or executable binary.
Now that we have looked into what monolithic and microservices architectures are, let us look into the disadvantages and benefits that each backend system offers to understand what separates them from each other.
Microservices vs Monolithic Architecture: Advantages and Disadvantages
Advantages of Monolithic Architecture
A. Zero Deployment Dependencies
An organized and well-documented Monolith architecture makes it possible for Backend developers to not worry about which version would be compatible with which service, how to find which services are present and what they do, etc.
B. Error Tracing
One of the biggest benefits of a monolith is that all the transactions are logged in one place, making error tracing a breeze.
C. No Silos
The one factor that works in favour of monolithic in the microservices vs monolithic architecture debate is the absence of silos. It becomes very easy for developers to work on multiple parts of the app, for they are all structured similarly and use the same tools, which makes it okay to have no prior distributed computing knowledge.
D. Cross-cutting concerns:
The time you would otherwise spend defining services that do not bleed into each other is time you can actually spend developing things that help the customers.
E. Shared Code:
Code is shared directly within the application; there is no need for shared libraries where the complete scope needed for services to operate is sent along with each request.
Limitations of Monolithic Architecture
A. Lack of Flexibility:
Monolithic architectures are not flexible. You cannot use different technologies once you have committed to a monolith. The technology stack decided at the beginning has to be followed throughout the project, making upgrades a next to impossible task.
B. Development Speed:
The speed that microservices bring to the development process is famous when you compare microservices architecture vs monolithic architecture. Development is very slow in monolithic architecture. It can be very difficult for team members to understand and then modify the code of large monolithic applications. Additionally, as the size of the codebase increases, the IDE gets overloaded and slower. All of this results in slowed-down app development.
C. Difficult Scalability:
Scaling monolithic applications becomes difficult when the app becomes large. While developers can run new instances of the monolith behind a load balancer to distribute the traffic, monolithic architecture cannot scale individual components with the increasing load.
Benefits of Microservices Architecture
The biggest factor in favor of microservices in the difference between microservices and monolithic architecture is that it handles complexity issues by decomposing the app into a manageable set of services that are faster to develop and easier to maintain and understand.
It enables independent service development through a team focused on that particular service, which makes it the ideal choice for businesses that work with an Agile development approach.
It lowers the barrier to adopting newer technologies, as developers have the freedom to choose whatever technology makes sense for their project.
It makes it possible for every microservice to be deployed individually. As a result, continuous deployment of complex applications becomes possible.
Drawbacks of Microservices Architecture
Microservices add complexity to a project simply because a microservices application is a distributed system. To solve the complexities, developers have to select and implement inter-process communication based on either RPC or messaging.
They work with a partitioned database architecture. Business transactions that update multiple business entities inside the microservices application also have to update the different databases owned by multiple services.
It is a lot more difficult to implement changes that span multiple services. In the case of a monolithic architecture, an app development agency only has to change the corresponding modules, integrate all the changes, and then deploy them all in one go.
Deployment of a microservice application is very complex. It consists of a number of services, which individually have multiple runtime instances. In contrast, a monolithic application is deployed on a set of identical servers behind a load balancer.
The benefits and limitations are prevalent in both monolithic and microservices architecture. This makes it extremely difficult for a startup to gauge which backend architecture to incorporate in their journey.
Let us help you.
How to Choose Between Monolithic and Microservice Architecture?
The fact that both approaches come with their own set of pros and cons is a sign that there is no one-size-fits-all methodology when it comes to choosing a backend architecture. But there are a few questions that can help you decide which is the right direction to head in.
Are You Working in a Familiar Sector?
When you work in an industry where you know the veins of the sector, and you know the demands and the needs of the customers, it becomes easier to enter the system with a definite structure. The same, however, is not possible for a business that is very new to the industry, for the looming doubts are much greater.
So, the use of microservices architecture in app development is best suited to cases where you know the industry inside out. If that is not the case, go with the monolithic approach to develop your app.
How Prepared is Your Team?
Is your team familiar with the best practices for implementing microservices? Or are they more comfortable working with the simplicity of a monolith? Will your team and your business offering expand in the coming time? You will have to find answers to all these questions to gauge whether the people who have to work on the project are even ready to migrate.
What is Your Infrastructure Like?
Everything from the development to the deployment of a microservices-based web application requires cloud-based infrastructure. You will have to make use of Amazon AWS or Google Cloud for deploying even tiny elements. While cloud technologies make the process easier, the idea of setting up a database server for every other microservice and then scaling out is something that a startup entrepreneur might not be comfortable with.
Have you Evaluated the Business Risk?
More often than not, businesses take microservices’ side in the microservices vs monolithic architecture debate, thinking it is the right thing for their business. What they forget to factor in is the chance that their application might not become as scalable as they optimistically expect, and that they might have to suffer the risks of adding a highly scalable system to their process.
Here is a short list of pointers to help you decide between software development with microservices vs monolithic architecture:
When to Choose Monolithic Architecture?
When your team is at a founding stage
When you are developing a proof of concept
When you have no experience in microservices
When you have experience in development on solid frameworks, like Ruby on Rails, Laravel, etc.
When to Choose Microservices Architecture?
You need independent, quick delivery of services
You need to extend your team
Your platform needs to be extremely efficient
You don’t have a tight deadline to work with
Migrating from a Monolithic Architecture to a Microservice Ecosystem
The right approach for migrating from a monolithic architecture to a microservices ecosystem is to divide the monolith’s processes and turn them into microservices. The result of this is a two-part plan:
Identification of existing monolithic elements that can be decoupled
A validation that the new functionality can be developed as microservice
One of the main challenges that can emerge when initiating the migration from a monolithic architecture to a microservices architecture is designing and creating an integration between the existing system and a new microservice. A solution for this can be to add glue code that allows them to connect, something like an API.
An API gateway can also help in combining multiple individual service calls into one coarse-grained service, and this in turn helps reduce the integration cost with the monolithic system.
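As a rough illustration of that idea, here is a hypothetical coarse-grained gateway call that fans out to the monolith and to a newly extracted microservice (all interfaces and names below are assumptions made for the sketch, not a specific framework API):

// "Glue code": the existing monolith exposed behind an interface.
interface LegacyOrderSystem {
    String getOrderStatus(String orderId);
}

// A newly extracted microservice.
interface ShippingService {
    String getShippingEstimate(String orderId);
}

// The gateway combines both calls into one coarse-grained response for clients.
class OrderGateway {
    private final LegacyOrderSystem legacy;
    private final ShippingService shipping;

    OrderGateway(LegacyOrderSystem legacy, ShippingService shipping) {
        this.legacy = legacy;
        this.shipping = shipping;
    }

    String getOrderDetails(String orderId) {
        return legacy.getOrderStatus(orderId) + ", " + shipping.getShippingEstimate(orderId);
    }
}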
Conclusion
When you compare microservices architecture vs monolithic architecture, you will find the former to be a hot trend. Every entrepreneur wants to say that their app is based on this architecture. But the temptation to focus only on the problems of monolithic architecture and abandon it should be measured against the actual value of microservices architecture.
The right approach is to develop new apps using a monolithic approach and move to microservices only when the justification for the move is backed by proper metrics, like performance monitoring.
For established businesses, microservices tend to be avenues for continuous deployment, team based development, and an agility to shift to new technologies. But for startups, or companies that are just starting, adopting microservices can impact the software project success very negatively.
FAQs About Microservices vs Monolithic Architecture
Q. What is the Purpose of Microservices?
Microservice architecture allows you to divide the application into separate, independent services, each of which is managed by a different group in the software development agency. This way, the responsibility gets divided and the application is developed and deployed at a much faster rate.
Q. Does moving from a monolith to a Microservice architecture help with resilience?
Yes. Since microservices enable developers to handle multiple parts of the project at the same time in a streamlined manner, it becomes much easier to identify issues and solve them in time. This is next to impossible in the case of a monolithic architecture, where it is very hard to add new technologies or change the process mid-project.
Q. What is the difference between Microservices vs Monolithic approach?
The difference between microservices and monolithic architecture is a difference of approaches. While in the case of monolithic architecture there is a single build system, microservices come with multiple build systems, which makes development and deployment of an application faster.
Q. When To Choose Microservices Over Monolithic Architecture
The choice of going with microservices over monolithic architecture can be decided upon the factors listed above in the “When to Choose Microservices Architecture?” section.
The importance of being persistent in event driven architectures.
Enterprises have to constantly adapt and evolve their enterprise architecture strategies in order to deliver the desired business outcomes. The evolving architecture patterns may involve business processing of sales transactions with a human in the loop or they may involve machine to machine data processing using automation. Enterprises earlier adopted a request-driven model where microservices made a call to a service and the service responded to the request being made. In this request-driven model, you run into challenges around flexibility as you try to scale your global deployment footprint.
A new approach that is quickly gaining adoption in enterprises is called event-driven architecture. In this new approach, you are able to increase application agility and flexibility by allowing multiple data producers to coexist with multiple data consumers, and you process data only after an event or state change. In this enterprise architecture, the producers and consumers of data can be quickly extended to deliver better flexibility and agility as you scale your operation globally. Examples of event-driven solutions are available in a hosted, managed format from most cloud providers today. In this blog post, we will look at a Kafka solution running in a Kubernetes cluster and how you can make sure persistence is achieved for the solution. This approach is running at customers in production today, supported by Confluent and Portworx.
Why Use Event-Driven Architecture?
With the advent of 5G technology, a vast amount of data will be generated by sensors, devices, systems, and humans in the loop to track, manage and achieve business outcomes. The use cases for event-driven architectures may include the following:
Business Process state changes – You want to notify the change of state between a purchase order and accounts receivable with an event. The event-based approach allows a human or a decision engine to take the next appropriate step in the process.
Log and Metrics processing – The event-driven model allows for multiple actions to be triggered based on a single metric. The ability to send messages to multiple event handlers in different subsystems offers the scalability and resiliency required by certain business applications.
Why Use Kafka?
Apache Kafka is a scalable, fault-tolerant messaging system that enables you to build distributed real-time applications with an event-driven architecture. Kafka delivers events, with fast ingestion rates, and provides persistence and in-order guarantees. Kafka adoption in your solution will depend on your specific use-case. Below are some important concepts about Apache Kafka:
Kafka organizes messages into “topics”.
The process that does the work in Kafka is called the “broker”. A producer pushes data into a topic hosted on a broker and a consumer pulls messages from a topic via a broker.
Kafka topics can be divided into “partitions”. This allows for parallelizing a topic across multiple brokers and increasing message ingests and throughput.
Brokers can hold multiple partitions but, at any given time, only one broker acts as the leader of a given partition. The leader is responsible for updating any replicas with new data.
Brokers are responsible for storing messages to disk. The messages are stored with unique offsets. Messages in Kafka do not have a unique ID.
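To make these concepts concrete, here is a minimal Java producer sketch, assuming the standard kafka-clients library is on the classpath; the broker address and the "orders" topic are placeholder values:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address; point this at your own cluster.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The record key determines which partition of the "orders" topic the event lands on.
            producer.send(new ProducerRecord<>("orders", "order-42", "ORDER_CREATED"));
        }
    }
}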
Persistence With Portworx and Kafka on Kubernetes
Kafka needs Zookeeper, which must be deployed as a StatefulSet in Kubernetes. Kafka brokers, which maintain the state of the topics and partitions, also need to be deployed as StatefulSets backed by persistent volumes.
Kafka offers replication of topics between different brokers. In the case of a node failure, Kafka can recover from failure using the replicated topics. This recovery mechanism does create additional network calls in order to synchronize with the replica on a different broker. The recovery time of the failed node and its broker depend on the amount of data that needs to be rehydrated and network latencies in the cluster.
Portworx offers data replication using the replication parameter in the Kubernetes storage class. In this scenario, the storage system is responsible for maintaining copies of the topic on different nodes. In the case of a node failure, the Kafka broker is rescheduled on a node that already contains the replicated topic data. The rebuild time of the broker is reduced because it uses the storage system to rehydrate the topic data without any network latencies. Once the data is rehydrated using the storage system, the broker can quickly catch up on the topic offset from an existing broker, thus reducing the overall recovery time.
Let’s walk through a Kubernetes node failure scenario on a Kubernetes cluster where a Kafka application is running and is backed by Portworx volumes.
Figure 1: We will describe the deployment of Kafka and Portworx on a 5 node Kubernetes cluster. In the image below, you can see that the Kafka deployment has 3 Brokers, each with 2 partitions and a replication factor of 2. For the Portworx data platform, we have a volume replication factor set to 3 for each volume.
Figure 2: We will describe a node failure event. In the diagram below, we have simulated a node failure and identified that Kubernetes worker node 2 has been taken out of production. Kubernetes worker node 2 contains a Kafka broker with 2 partitions and 2 Portworx persistent volumes.
Figure 3: Once the Kubernetes node failure is detected by the Kubernetes API server, the Kafka broker is deployed to a node with available resources. The Portworx platform has the ability to influence the placement of the broker on a Kubernetes node. Since Kubernetes node 4 already has a copy of the Kafka broker’s persistent volumes, Portworx makes sure that the Kafka broker is deployed on Kubernetes node 4. On the Portworx platform’s recovery side, the volumes are also created on other nodes in order to maintain the defined replication factor of 3.
Figure 4: Once the failed node is back in production use, the Kafka application does not get scheduled on the recovered node since the application is already at the desired number of Kafka brokers. On the Portworx platform side, replicated volumes are placed on the recovered Kubernetes node to maintain the desired replication factor for the volumes.
I’m a big fan of Microservices. Does it sound weird, given the title? Well, it should not.
Every now and then, a new paradigm or technology arises. It brings hope and hype. This will save us all! Yeah, it’s true. Maybe it is ground-breaking. Extremely beneficial. However, like everything in life, every solution has its scope.
Have you ever tried to apply Newtonian laws to subatomic particles? Or, have you ever tried to squeeze a television into a sock? Pretty much the same here. Hype makes you forget this little clause at the end of the contract. And results are catastrophic.
In a normal web application, you have a single process serving all requests. Well, you can replicate it for balancing reasons, but in any case there’s a single package/software you have to run. And it bundles all the logic together.
When you use Microservices, instead, you have several, distinct processes, each one serving just a subset of requests. Each process runs a distinct package, with just the code it needs.
Is it really new?
No, it isn’t.
The Microservice pattern is nothing new, actually, like almost every emergent design (besides it has been around for some years). It is Object-Oriented Programming applied on a higher level: application architecture. No coincidence that it often makes use of HTTP APIs, particularly RESTful, which are in turn OOP principles applied to API design. P.S. I hate acronyms.
Let me explain.
When you create a program using Object-Oriented Programming, you split it into several pieces, Classes. Each Class represents a component of the solution and has a specific role. It also works just on its own subset of data. Does it sound familiar?
This is not the right place to discuss Object-Oriented Programming in general, but I have to point out some aspects. If you are not familiar with it, you can reference online documentation and, soon, my other article about it!
The Two Principles — The Good Tailor
When you design Classes, you must follow two main principles: loose coupling and high cohesion.
To be honest, these should be the guiding stars of each software project, not just OOP ones. But that’s another story. Or is it?
High cohesion is the situation where each component is strongly consistent. All its data and methods refer to a very specific area, or meaning, or role.
Loose coupling is the situation where each component has the least possible amount of interactions with other components. It can work on its own, hiding low-level details, and exposes the essential interfaces.
For Microservices it is the same. I can’t stress this enough. A good OOP design leads to an optimal Microservice design. There is no middle ground.
This should be almost enough to answer why and when to use Microservices. But since there is more, let’s break it down.
Be a Good Tailor. Loosely sew highly-entangled (coherent) fabrics. Photo by Jeff Wade on Unsplash
When to split?
This is exactly as it is for Object-Oriented Programming.
When you can respect high cohesion and loose coupling.
No differently.
You know how to split code into Classes in the right way. Why would you split your Microservices randomly? If a bad Class diagram is painful to maintain and manage because of the sparsity of code in different files, think of a randomly split Microservices application, where you have different processes altogether.
But why on Earth would I split my application into sub-processes?
Imagine how easy it is to make two Objects interact. You just have to call an Object’s method and you are good to go. All within a single program.
Microservices, instead, live in different processes. Possibly, on different machines. You have to communicate over a network, using APIs.
It adds complexity! You must have excellent reasons to do so. Not for trend, not for fun. Because, I assure you, if you use Microservices randomly, their management is not fun at all.
Actually, you must have the same reasons you’d have when you decide to use APIs. It’s the same.
APIs let you hide all the implementation details behind them. That’s essential in some cases, and it offers incredible benefits.
1. Scalability for non-uniform traffic
Scenario: you have a Supermarket application. You have a Stock Microservice, that just shows you the amounts of articles in stock, and a Vision Microservice, which uses GPUs to detect your articles given a picture of them.
If you receive thousands of calls for your Stock APIs and just a bunch of requests for your Vision APIs, you could just replicate your Stock Microservice.
No need to double your GPU resources!
Think of it. If you kept everything in a single machine or application, and you still wanted to scale to match all the incoming requests, you would have to replicate it entirely. A waste.
2. Error resilience
Suppose you have a Bank application. Your Transfer Microservice keeps crashing, maybe for a temporary error or, worse, for a newly introduced bug (damn untested releases…).
Why should the entire Bank application be down? Maybe some customers wouldn’t even notice, because they just want to see their movements or they want to use their cards.
3. Separate deployments
Similarly to the reason above, if you have a new feature to add or a bug to fix, you can just deploy the right Microservice. Your build and deployment times should be smaller.
Also, you could put your Microservices on different machines, in different locations, maybe.
4. Complete isolation
Maybe you need complete isolation between Microservices. In this way, you can use different DBs (SQL and noSQL) or different technologies altogether. This will give you complete freedom! And it is of course strictly related to the other points as well, about error resilience or requirements.
5. Different requirements
Do you have a heavy computation to perform? Maybe a Machine Learning one, where you need GPUs, their libraries… Maybe that part would be much better in Python, while the other one could benefit from Java. Or maybe libraries for the same language would create some conflicts, but you desperately need two specific versions for two distinct tasks. And finally, perhaps you want to use two different Cloud products to host your two distinct pieces.
There are plenty of reasons, some more common than others.
In those cases, split the application! Probably it’s easier. But remember to respect the Two Principles! Be a Good Tailor, or you would have hard times sharing data between your Microservices…
Even the highest of Monoliths is made by blocks. It might be atoms, but it is. Photo by Zoltan Tasi on Unsplash
Is it not your case? Go for a Monolith.
If your system is tightly coupled, use a single application. A Monolith.
You’ll have cleaner dependency management, and you won’t need complex orchestrations or distributed systems to trace errors, share data, collect logs, sync calls, secure network interactions… and I can go on.
Is Object-Oriented Programming not suitable for your application architecture? Or, better, is RESTful design not suitable? Microservices won’t be either!
If your application serves a long sequence of API calls, maybe because an API requires the result of the previous ones, this is not ideal. It would also introduce network delays. If your application is a single procedure that requires user interaction, well, the sentence is self-explaining. It is a Procedure! You have to share a state between calls, each component is not independent, and a single error in the sequence will block it entirely. There is hardly a benefit in splitting the application.
And remember, Monolith doesn’t mean Mess
Monolith.
Mess.
They are also spelled differently.
Someone sees Monolith as a bunch of unordered code. They claim that Microservice pattern is the only way to have splits, order and all.
Well, I’ll reveal something.
It’s — not — true.
Your code design does not depend on process divisions. Object-Oriented Programming doesn’t either. If you split your code right, if you manage your dependencies in a clear hierarchy, as well as your files, you can achieve the same level of cohesion and decoupling.
And another secret: if you divide your APIs into separate packages/modules/controllers/blueprints/whatever in your code, then you can decide to serve them in a single process or split them in Microservices at build-time, or at run-time too, for free.
Wow.
Conclusions
I’m a big fan of Microservices, as I said.
If used in the right context, for the right reasons, they solve issues and improve performances or costs.
But please, stop making them the one and only solution for all problems. Because they can also be the root of all evil.
Logging in Spring Boot can be confusing, and the wide range of tools and frameworks make it a challenge to even know where to start. This guide talks through the most common best practices for Spring Boot logging and gives five key suggestions to add to your logging tool kit.
What’s in the Spring Boot Box?
The Spring Boot Starters all depend on spring-boot-starter-logging. This is where the majority of the logging dependencies for your application come from. The dependencies involve a facade (SLF4J) and frameworks (Logback). It’s important to know what these are and how they fit together.
SLF4J is a simple front-facing facade supported by several logging frameworks. Its main advantage is that you can easily switch from one logging framework to another. In our case, we can easily switch our logging from Logback to Log4j, Log4j2 or JUL.
The dependencies we use will also write logs. For example, Hibernate uses SLF4J, which fits perfectly as we have that available. However, the AWS SDK for Java uses Apache Commons Logging (JCL). Spring-boot-starter-logging includes the necessary bridges to ensure those logs are delegated to our logging framework out of the box.
SLF4J usage:
At a high level, all the application code has to worry about is:
Getting an instance of an SLF4J logger (regardless of the underlying framework): private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);
Writing some logs: LOG.info("My message set at info level");
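Putting those two pieces together, a minimal sketch (MyClass is just a placeholder name):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class MyClass {
    private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);

    public void doWork() {
        LOG.info("My message set at info level");
    }
}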
Logback or Log4j2?
Spring Boot’s default logging framework is Logback. Your application code should interface only with the SLF4J facade so that it’s easy to switch to an alternative framework if necessary.
Log4j2 is newer and claims to improve on the performance of Logback. Log4j2 also supports a wide range of appenders so it can log to files, HTTP, databases, Cassandra, Kafka, as well as supporting asynchronous loggers. If logging performance is of high importance, switching to log4j2 may improve your metrics. Otherwise, for simplicity, you may want to stick with the default Logback implementation.
This guide will provide configuration examples for both frameworks.
5 Tips for Getting the Most Out of Your Spring Boot Logging
With your initial set up out of the way, here are 5 top tips for spring boot logging.
1. Configuring Your Log Format
Spring Boot Logging provides default configurations for logback and log4j2. These specify the logging level, the appenders (where to log) and the format of the log messages.
For all but a few specific packages, the default log level is set to INFO, and by default, the only appender used is the Console Appender, so logs will be directed only to the console.
The default format for the logs using logback looks like this:
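As an illustration, the default console pattern used by logback in Spring Boot is approximately:

%clr(%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}}){faint} %clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint} %clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint} %m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}

and the resulting console output looks roughly like this (the application and controller names below are illustrative):

2020-10-19 10:09:58.112  INFO 2052 --- [           main] com.example.demo.DemoApplication         : Started DemoApplication in 2.416 seconds (JVM running for 3.194)
2020-10-19 10:09:58.152  INFO 2052 --- [nio-8080-exec-1] com.example.demo.MyController            : My message set at info level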
Let’s take a look at that last line of log, which was a statement created from within a controller with the message “My message set at info level”.
It looks simple, yet the default log pattern for logback seems “off” at first glance. As much as it looks like it could be, it’s not regex, it doesn’t parse email addresses, and actually, when we break it down it’s not so bad.
The variables that are available for the log format allow you to create meaningful logs, so let's look a bit deeper at the ones in the default log pattern example.
Pattern Part – What it Means

%clr – Specifies a colour. By default, it is based on log levels, e.g. INFO is green. If you want to specify specific colours, you can do that too. The format is: %clr(Your message){your colour}. So, for example, if we wanted to add "Demo" to the start of every log message, in green, we would write: %clr(Demo){green}

%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}} – %d is the current date, and the part in curly braces is the format. ${VARIABLE:-default} is a way of specifying that we should use the $VARIABLE environment variable for the format, if it is available, and if not, fall back to default. This is handy if you want to override these values in your properties files, by providing arguments, or by setting environment variables. In this example, the default format is yyyy-MM-dd HH:mm:ss.SSS unless we specify a variable named LOG_DATEFORMAT_PATTERN. In the logs above, we can see 2020-10-19 10:09:58.152 matches the default pattern, meaning we did not specify a custom LOG_DATEFORMAT_PATTERN.

${LOG_LEVEL_PATTERN:-%5p} – Uses the LOG_LEVEL_PATTERN if it is defined, else prints the log level with right padding up to 5 characters (e.g. "INFO" becomes "INFO " but "TRACE" will not have the trailing space). This keeps the rest of the log aligned, as it'll always be 5 characters.

${PID:- } – The environment variable $PID, if it exists. If not, a space.

%t – The name of the thread triggering the log message.

%logger – The name of the logger (up to 39 characters); in our case this is the class name.

%m – The log message.

%n – The platform-specific line separator.

%wEx – If one exists, the stack trace of any exception, formatted using Spring Boot's ExtendedWhitespaceThrowableProxyConverter.
You can customise the ${} variables that are found in the logback-spring.xml by passing in properties or environment variables. For example, you may set logging.pattern.console to override the whole of the console log pattern.
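For example, a minimal application.properties sketch (the values here are illustrative, not recommendations):

# Override the date format that feeds LOG_DATEFORMAT_PATTERN
logging.pattern.dateformat=yyyy-MM-dd'T'HH:mm:ss.SSS
# Or replace the whole console pattern in one go
logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss.SSS} %5p [%t] %logger{39} : %m%n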
Armed with the ability to customise your logs, you should consider adding:
Application name.
A request ID.
The endpoint being requested (E.g /health).
There are a few items in the default log that I would remove unless you have a specific use case for them:
The ‘—’ separator.
The thread name.
The process ID.
With the ability to customise these through the use of the logback-spring.xml or log4j2-spring.xml, the format of your logs is fully within your control.
2. Configuring the Destination for Your Logs (Appenders and Loggers)
An appender is just a fancy name for the part of the logging framework that sends your logs to a particular target. Both frameworks can output to console, over HTTP, to databases, or over a TCP socket, as well as to many other targets. The way we configure the destination for the logs is by adding, removing and configuring these appenders.
You have more control over which appenders you use, and the configuration of them, if you create your own custom .xml configuration. However, the default logging configuration does make use of environment properties that allow you to override some parts of it, for example, the date format.
The official docs for logback appenders and log4j2 appenders detail the parameters required for each of the appenders available, and how to configure them in your XML file. One tip for choosing the destination for your logs is to have a plan for rotating them. Writing logs to a file always feels like a great idea, until the storage used for that file runs out and brings down the whole service.
Log4j and logback both have a RollingFileAppender which handles rotating these log files based on file size, or time, and it’s exactly that which Spring Boot Logging uses if you set the logging.file property.
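As a sketch, a rolling file appender in a custom logback-spring.xml might look like the following (the paths, sizes and retention period are illustrative):

<configuration>
  <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <file>logs/app.log</file>
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
      <!-- Roll daily, or sooner if a file reaches 10 MB; keep 7 days of history -->
      <fileNamePattern>logs/app-%d{yyyy-MM-dd}.%i.log</fileNamePattern>
      <maxFileSize>10MB</maxFileSize>
      <maxHistory>7</maxHistory>
    </rollingPolicy>
    <encoder>
      <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} %5p [%t] %logger{39} : %m%n</pattern>
    </encoder>
  </appender>
  <root level="INFO">
    <appender-ref ref="FILE"/>
  </root>
</configuration>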
3. Logging as a Cross-Cutting Concern to Keep Your Code Clean (Using Filters and Aspects)
You might want to log every HTTP request your API receives. That’s a fairly normal requirement, but putting a log statement into every controller is unnecessary duplication. It’s easy to forget and make mistakes. A requirement that you want to log every method within your packages that your application calls would be even more cumbersome.
I’ve seen developers use this style of logging at trace level so that they can turn it on to see exactly what is happening in a production environment. Adding log statements to the start and end of every method is messy, and there is a better way. This is where filters and aspects save the day and avoid the code duplication.
When to Use a Filter Vs When to Use Aspect-Oriented Programming
If you are looking to create log statements related to specific requests, you should opt for using filters, as they are part of the handling chain that your application already goes through for each request. They are easier to write, easier to test and usually more performant than using aspects. If you are considering more cross-cutting concerns, for example, audit logging, or logging every method that causes an exception to be thrown, use AOP.
Using a Filter to Log Every Request
Filters can be registered with your web container by creating a class implementing javax.servlet.Filter and annotating it with @Component, or adding it as an @Bean in one of your configuration classes. When your spring-boot-starter application starts up, it will create the Filter and register it with the container.
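As a minimal sketch (the class name and log format are illustrative), a filter that logs the method and path of every request could look like this:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Component
public class RequestLoggingFilter implements Filter {
    private static final Logger LOG = LoggerFactory.getLogger(RequestLoggingFilter.class);

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        // Log the incoming request, then continue down the filter chain
        LOG.info("{} {}", httpRequest.getMethod(), httpRequest.getRequestURI());
        chain.doFilter(request, response);
    }
}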
Aspect-oriented programming enables you to fulfill cross-cutting concerns, like logging for example, in one place. You can do this without your logging code needing to sprawl across every class.
This approach is great for use cases such as:
Logging any exceptions thrown from any method within your packages (See @AfterThrowing)
Logging performance metrics by timing before/after each method is run (See @Around)
Audit logging. You can log calls to methods that have a custom annotation on them, such as @Audit. You only need to create a pointcut matching calls to methods with that annotation
Let's start with a simple example – we want to log the name of every public method that we call within our package, com.example.demo. There are only a few steps to writing an Aspect that will run before every public method in a package that you specify; a sketch combining the steps follows the list below.
Add @EnableAspectJAutoProxy to one of your configuration classes. This line tells spring-boot that you want to enable AspectJ support.
Add your pointcut, which defines a pattern that is matched against method signatures as they run. You can find more about how to construct your matching pattern in the spring boot documentation for AOP. In our example, we match any method inside the com.example.demo package.
Add your Aspect. This defines when you want to run your code in relation to the pointcut (E.g, before, after or around the methods that it matches). In this example, the @Before annotation causes the method to be executed before any methods that match the pointcut.
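Putting the three steps together, a minimal sketch might look like this (it assumes the spring-boot-starter-aop dependency is on the classpath; the class name matches the MyAspect shown in the log output below):

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class MyAspect {
    private static final Logger LOG = LoggerFactory.getLogger(MyAspect.class);

    // Matches any public method in the com.example.demo package (and sub-packages)
    @Pointcut("execution(public * com.example.demo..*.*(..))")
    public void publicMethodsInDemoPackage() {}

    // Runs before every method matched by the pointcut
    @Before("publicMethodsInDemoPackage()")
    public void logMethodCall(JoinPoint joinPoint) {
        LOG.info("Called {}", joinPoint.getSignature().getName());
    }
}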
That’s all there is to logging every method call. The logs will appear as:
2020-10-27 19:26:33.269  INFO 2052 --- [nio-8080-exec-2] com.example.demo.MyAspect : Called checkHealth
By making changes to your pointcut, you can write logs for every method annotated with a specific annotation. For example, consider what you can do with:
@annotation(com.example.demo.Audit)
(This would run for every method annotated with a custom annotation, @Audit.)
4. Applying Context to Your Logs Using MDC
MDC (Mapped Diagnostic Context) is a complex-sounding name for a map of key-value pairs, associated with a single thread. Each thread has its own map. You can add keys/values to the map at runtime, and then reference the keys from that map in your logging pattern.
The approach comes with a warning that threads may be reused, and so you’ll need to make sure to clear your MDC after each request to avoid your context leaking from one request to the next.
MDC is accessible through SLF4J and supported by both Logback and Log4j2, so we don’t need to worry about the specifics of the underlying implementation.
Tracking Requests Through Your Application Using Filters and MDC
Want to be able to group logs for a specific request? The Mapped Diagnostic Context (MDC) will help.
The steps are:
Add a header to each request going to your API, for example, ‘tracking-id’. You can generate this on the fly (I suggest using a UUID) if your client cannot provide one.
Create a filter that runs once per request and stores that value in the MDC.
Update your logging pattern to reference the key in the MDC to retrieve the value.
After setting the value on your MDC, just add %X{tracking} to your logging pattern (Replacing the word “tracking” with the key you have put in MDC) and your logs will contain the value in every log message for that request.
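A minimal sketch of steps 1 and 2 (the header name and the MDC key are illustrative and should match whatever you reference in your pattern):

import java.io.IOException;
import java.util.UUID;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class TrackingIdFilter extends OncePerRequestFilter {
    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        String trackingId = request.getHeader("tracking-id");
        if (trackingId == null || trackingId.isEmpty()) {
            trackingId = UUID.randomUUID().toString(); // generate one when the client cannot provide it
        }
        MDC.put("tracking", trackingId);
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.clear(); // threads are reused, so clear the context after each request
        }
    }
}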
If a client reports a problem, as long as you can get a unique tracking-id from your client, then you’ll be able to search your logs and pull up every log statement generated from that specific request.
Other use cases that you may want to put into your MDC and include on every log message include:
The application version.
Details of the request, for example, the path.
Details of the logged-in user, for example, the username.
5. Unit Testing Your Log Statements
Why Test Your Logs?
You can unit test your logging code. Too often this is overlooked because the log statements return void. For example, logger.info("foo"); does not return a value that you can assert against.
It’s easy to make mistakes. Log statements usually involve parameters or formatted strings, and it’s easy to put log statements in the wrong place. Unit testing reassures you that your logs do what you expect and that you’re covered when refactoring to avoid any accidental modifications to your logging behaviour.
The Approach to Testing Your Logs
The Problem
SLF4J’s LoggerFactory.getLogger is static, making it difficult to mock. Searching through any outputted log files in our unit tests is error-prone (E.g we need to consider resetting the log files between each unit test). How do we assert against the logs?
The Solution
The trick is to add your own test appender to the logging framework (e.g Logback or Log4j2) that captures the logs from your application in memory, allowing us to assert against the output later. The steps are:
Before each test case, add an appender to your logger.
Within the test, call your application code that logs some output.
The logger will delegate to your test appender.
Assert that your expected logs have been received by your test appender.
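As a sketch, using Logback's in-memory ListAppender (MyService here is a hypothetical class under test that logs the message from the earlier example, and Logback is assumed to be the backend):

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.read.ListAppender;
import org.junit.jupiter.api.Test;
import org.slf4j.LoggerFactory;
import static org.junit.jupiter.api.Assertions.assertEquals;

class MyServiceLoggingTest {

    @Test
    void logsAtInfoLevel() {
        // Attach an in-memory appender to the logger under test
        Logger logger = (Logger) LoggerFactory.getLogger(MyService.class);
        ListAppender<ILoggingEvent> appender = new ListAppender<>();
        appender.start();
        logger.addAppender(appender);

        new MyService().doSomething(); // application code that logs

        // The logger delegated to our test appender, so we can assert on what was captured
        ILoggingEvent event = appender.list.get(0);
        assertEquals(Level.INFO, event.getLevel());
        assertEquals("My message set at info level", event.getFormattedMessage());

        logger.detachAppender(appender);
    }
}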
Each logging framework has suitable appenders, but referencing those concrete appenders in our tests means we need to depend on the specific framework rather than SLF4J. That's not ideal, but the alternatives of searching through logged output in files, or writing our own SLF4J implementation, are overkill, making this the pragmatic choice.
Here are a couple of tricks for unit testing using JUnit 4 rules or JUnit 5 extensions that will keep your test classes clean, and reduce the coupling with the logging framework.
Testing Log Statements Using JUnit 5 Extensions in Two Steps
JUnit 5 extensions help to avoid code duplication between your tests. Here's how to set up your logging tests in two steps:
Step 1: Create a JUnit 5 extension that attaches the test appender to your logger before each test and detaches it afterwards.
Step 2: Use that extension to assert against your log statements with logback or log4j2.
Testing Log Statements Using JUnit 4 Rules in Two Steps
JUnit 4 rules help to avoid code duplication by extracting the common test code away from the test classes. In our example, we don't want to duplicate the code for adding a test appender to our logger in every test class.
Step 1: Create a JUnit 4 rule that adds the test appender to your logger before each test.
Step 2: Use that rule to assert against your log statements using logback or log4j2.
With these approaches, you can assert that your log statements have been called with a message and level that you expect.
Conclusion
The Spring Boot Logging Starter provides everything you need to quickly get started, whilst allowing full control when you need it. We’ve looked at how most logging concerns (formatting, destinations, cross-cutting logging, context and unit tests) can be abstracted away from your core application code.
Any global changes to your logging can be done in one place, and the classes for the rest of your application don’t need to change. At the same time, unit tests for your log statements provide you with reassurance that your log statements are being fired after making any alterations to your business logic.
These are my top 5 tips for configuring Spring Boot Logging. However, when your logging configuration is set up, remember that your logs are only ever as good as the content you put in them. Be mindful of the content you are logging, and make sure you are using the right logging levels.
Microservices are defined as self-regulating, independent units of code that can be written and maintained even by a small team of developers. Microservices Architecture consists of such loosely coupled services, with each service responsible for the execution of its associated business logic.
The services are separated from each other based on the nature of their domains and belong to a mini-microservice pool. Enterprise mobile app developers leverage the capabilities of this architecture especially for complex applications.
Microservices Architecture allows developers to release new versions of software frequently, thanks to sophisticated automation of software building, testing, and deployment – something that acts as a prime differentiation point between Microservices and Monolithic architecture.
Benefits
Since the services are bifurcated into pools, the architecture design pattern makes the system highly fault-tolerant. In other words, the whole software won’t collapse on its head even if some microservices cease to function.
An enterprise mobile app development company working on such an architecture for clients can deploy multiple programming languages to build different microservices for their specific purpose. Therefore the technology stack can be kept updated with the latest upgrades in computing.
This architecture is a perfect fit for applications that need to scale. Since the services are already independent of each other, they can scale individually rather than overloading the entire system with the need to expand.
Services can be integrated into any application depending upon the scope of work.
Potential Drawbacks
Since each service is a distinct codebase in its own right, it can be challenging for an enterprise mobile application development company to interlink and operate so many distinctive services seamlessly.
Developers must define a standard protocol for all services to adhere to. It is important to do so, as the decentralized approach towards coding microservices in multiple languages can pose serious issues while debugging.
Each microservice, with its limited environment, is responsible for maintaining the integrity of the data. It is up to the architects of such a system to come up with a universally consistent data integrity protocol, wherever possible.
You definitely need the best of breed professionals to design such a system for you as the technology stack keeps changing.
Ideal For
Use Microservices Architecture for apps in which a specific segment will be used more heavily than the others and would need sporadic bursts of scaling. Instead of a standalone application, you may also deploy this for a service that provides functionality to other applications of the system.
Discover the deliberate design decisions that have made Kafka the performance powerhouse it is today.
The last few years have brought about immense changes in the software architecture landscape. The notion of a single monolithic application or even several coarse-grained services sharing a common data store has been all but erased from the hearts and minds of software practitioners world-wide. Autonomous microservices, event-driven architecture, and CQRS are the dominant tools in the construction of contemporary business-centric applications. To top it off, the proliferation of device connectivity — IoT, mobile, wearables — is creating an upward pressure on the number of events a system must handle in near-real-time.
Let’s start by acknowledging that the term ‘fast’ is multi-faceted, complex, and highly ambiguous. Latency, throughput, and jitter are metrics that shape and influence one’s interpretation of the term. It is also inherently contextual: the industry and application domains in themselves set the norms and expectations around performance. Whether or not something is fast depends largely on one’s frame of reference.
Apache Kafka is optimized for throughput at the expense of latency and jitter, while preserving other desirable qualities, such as durability, strict record order, and at-least-once delivery semantics. When someone says ‘Kafka is fast’, and assuming they are at least mildly competent, you can assume they are referring to Kafka’s ability to safely accumulate and distribute a very high number of records in a short amount of time.
Historically, Kafka was born out of LinkedIn’s need to move a very large number of messages efficiently, amounting to multiple terabytes of data on an hourly basis. The individual message propagation delay was deemed of secondary importance, as was the variability of that time. After all, LinkedIn is not a financial institution that engages in high-frequency trading, nor is it an industrial control system that operates within deterministic deadlines. Kafka can be used to implement near-real-time (otherwise known as soft real-time) systems.
Note: For those unfamiliar with the term, ‘real-time’ does not mean ‘fast’, it means ‘predictable’. Specifically, real-time implies a hard upper bound, otherwise known as a deadline, on the time taken to complete an action. If the system as a whole is unable to meet this deadline each and every time, it cannot be classed as real-time. Systems that are able to perform within a probabilistic tolerance, are labelled as ‘near-real-time’. In terms of sheer throughput, real-time systems are often slower than their near-real-time or non-real-time counterparts.
There are two significant areas that Kafka draws upon for its speed, and they need to be discussed separately. The first relates to the low-level efficiency of the client and broker implementations. The second derives from the opportunistic parallelism of stream processing.
Broker performance
Log-structured persistence
Kafka utilizes a segmented, append-only log, largely limiting itself to sequential I/O for both reads and writes, which is fast across a wide variety of storage media. There is a wide misconception that disks are slow; however, the performance of storage media (particularly rotating media) is greatly dependent on access patterns. The performance of random I/O on a typical 7,200 RPM SATA disk is between three and four orders of magnitude slower when compared to sequential I/O. Furthermore, a modern operating system provides read-ahead and write-behind techniques that prefetch data in large block multiples and group smaller logical writes into large physical writes. The difference between sequential I/O and random I/O is still evident in flash and other forms of solid-state non-volatile media, although it is far less dramatic compared to rotating media.
Record batching
Sequential I/O is blazingly fast on most media types, comparable to the peak performance of network I/O. In practice, this means that a well-designed log-structured persistence layer will keep up with the network traffic. In fact, quite often the bottleneck with Kafka isn’t the disk, but the network. So in addition to the low-level batching provided by the OS, Kafka clients and brokers will accumulate multiple records in a batch — for both reading and writing — before sending them over the network. Batching of records amortizes the overhead of the network round-trip, using larger packets and improving bandwidth efficiency.
Batch compression
The impact of batching is particularly obvious when compression is enabled, as compression becomes generally more effective as the data size increases. Especially when using text-based formats such as JSON, the effects of compression can be quite pronounced, with compression ratios typically ranging from 5x to 7x. Furthermore, record batching is largely done as a client-side operation, which transfers the load onto the client and has a positive effect not only on the network bandwidth but also on the brokers’ disk I/O utilization.
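To make this concrete, here is a brief sketch of the client-side knobs involved (the broker address and the exact values are illustrative, not tuning advice):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class BatchingProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Accumulate up to 64 KB per partition, waiting up to 20 ms, before sending a batch
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 20);
        // Compress whole batches on the client, saving network bandwidth and broker disk I/O
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // ... send records here
        }
    }
}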
Cheap consumers
Unlike traditional MQ-style brokers which remove messages at point of consumption (incurring the penalty of random I/O), Kafka doesn’t remove messages after they are consumed — instead, it independently tracks offsets at each consumer group level. The progression of offsets themselves is published on an internal Kafka topic __consumer_offsets. Again, being an append-only operation, this is fast. The contents of this topic are further reduced in the background (using Kafka’s compaction feature) to only retain the last known offsets for any given consumer group.
Compare this model with more traditional message brokers which typically offer several contrasting message distribution topologies. On one hand is the message queue — a durable transport for point-to-point messaging, with no point-to-multipoint ability. On the other hand, a pub-sub topic allows for point-to-multipoint messaging but does so at the expense of durability. Implementing a durable point-to-multipoint messaging model in a traditional MQ requires maintaining a dedicated message queue for each stateful consumer. This creates both read and write amplification. On one hand, the publisher is forced to write to multiple queues. Alternatively, a fan-out relay may consume records from one queue and write to several others, but this only defers the point of amplification. On the other hand, several consumers are generating load on the broker — being a mixture of read and write I/O, both sequential and random.
Consumers in Kafka are ‘cheap’, insofar as they don’t mutate the log files (only the producer or internal Kafka processes are permitted to do that). This means that a large number of consumers may concurrently read from the same topic without overwhelming the cluster. There is still some cost in adding a consumer, but it is mostly sequential reads with a low rate of sequential writes. So it’s fairly normal to see a single topic being shared across a diverse consumer ecosystem.
Unflushed buffered writes
Another fundamental reason for Kafka’s performance, and one that is worth exploring further: Kafka doesn’t actually call fsync when writing to the disk before acknowledging the write; the only requirement for an ACK is that the record has been written to the I/O buffer. This is a little known fact, but a crucial one: in fact, this is what actually makes Kafka perform as if it were an in-memory queue — because for all intents and purposes Kafka is a disk-backed in-memory queue (limited by the size of the buffer/pagecache).
On the flip side, this form of writing is unsafe, as the failure of a replica can lead to a data loss even though the record has seemingly been acknowledged. In other words, unlike say a relational database, acknowledging a write alone does not imply durability. What makes Kafka durable is running several in-sync replicas; even if one were to fail, the others (assuming there is more than one) will remain operational, providing that the failures are uncorrelated (that is, multiple replicas do not fail simultaneously because of a common upstream failure). So the combination of a non-blocking approach to I/O with no fsync and redundant in-sync replicas gives Kafka its combination of high throughput, durability, and availability.
Client-side optimisations
Most databases, queues, and other forms of persistent middleware are designed around the notion of an all-mighty server (or a cluster of servers), and fairly thin clients that communicate with the server(s) over a well-known wire protocol. Client implementations are generally considered to be significantly simpler than the server. As a result, the server will absorb the bulk of the load — the clients merely act as interfaces between the application code and the server.
Kafka takes a different approach to client design. A significant amount of work is performed on the client before records get to the server. This includes the staging of records in an accumulator, hashing the record keys to arrive at the correct partition index, checksumming the records and the compression of the record batch. The client is aware of the cluster metadata and periodically refreshes this metadata to keep abreast of any changes to the broker topology. This lets the client make low-level forwarding decisions; rather than sending a record blindly to the cluster and relying on the latter to forward it to the appropriate broker node, a producer client will forward writes directly to partition masters. Similarly, consumer clients are able to make intelligent decisions when sourcing records, potentially using replicas that are geographically closer to the client when issuing read queries. (This feature is a more recent addition to Kafka, available as of version 2.4.0.)
Zero-copy
One of the typical sources of inefficiencies is copying byte data between buffers. Kafka uses a binary message format that is shared by the producer, the broker, and the consumer parties so that data chunks can flow end-to-end without modification, even if it’s compressed. While eliminating structural differences between communicating parties is an important step, it doesn’t in itself avoid the copying of data.
Kafka solves this problem on Linux and UNIX systems by using Java’s NIO framework, specifically, the transferTo() method of a java.nio.channels.FileChannel. This method permits the transfer of bytes from a source channel to a sink channel without involving the application as a transfer intermediary. To appreciate the difference that NIO makes, consider the traditional approach where a source channel is read into a byte buffer, then written to a sink channel as two separate operations:
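In plain Java, the traditional approach might be sketched as follows (the path, socket and length are assumed to be supplied by the caller):

import java.io.FileInputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.net.Socket;

class TraditionalCopy {
    static void send(String path, Socket socket, int len) throws IOException {
        byte[] buffer = new byte[len];
        try (FileInputStream in = new FileInputStream(path)) {
            int read = in.read(buffer);            // disk -> kernel read buffer -> user-space buffer
            OutputStream out = socket.getOutputStream();
            out.write(buffer, 0, read);            // user-space buffer -> socket buffer -> NIC
        }
    }
}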
Diagrammatically, this can be represented using the following.
Although this looks simple enough, internally, the copy operation requires four context switches between user mode and kernel mode, and the data is copied four times before the operation is complete. The diagram below outlines the context switches at each step.
Looking at this in more detail —
The initial read() causes a context switch from user mode to kernel mode. The file is read, and its contents are copied to a buffer in the kernel address space by the DMA (Direct Memory Access) engine. This is not the same buffer that was used in the code snippet.
Prior to returning from read(), the kernel buffer is copied into the user-space buffer. At this point, our application can read the contents of the file.
The subsequent send() will switch back into kernel mode, copying the user-space buffer into the kernel address space — this time into a different buffer associated with the destination socket. Behind the scenes, the DMA engine takes over, asynchronously copying the data from the kernel buffer to the protocol stack. The send() method does not wait for this prior to returning.
The send() call returns, switching back to the user-space context.
In spite of its mode-switching inefficiencies and additional copying, the intermediate kernel buffer can actually improve performance in many cases. It can act as a read-ahead cache, asynchronously prefetching blocks, thereby front-running requests from the application. However, when the amount of requested data is significantly larger than the kernel buffer size, the kernel buffer becomes a performance bottleneck. Rather than copying the data directly, it forces the system to oscillate between user and kernel modes until all the data is transferred.
By contrast, the zero-copy approach is handled in a single operation. The snippet from the earlier example can be rewritten as a one-liner:
fileDesc.transferTo(offset, len, socket);
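Expressed with Java NIO types, the same idea might be sketched as follows (transferTo may move fewer bytes than requested, hence the loop):

import java.io.FileInputStream;
import java.io.IOException;
import java.net.Socket;
import java.nio.channels.Channels;
import java.nio.channels.FileChannel;
import java.nio.channels.WritableByteChannel;

class ZeroCopySend {
    static void send(String path, Socket socket) throws IOException {
        try (FileChannel file = new FileInputStream(path).getChannel();
             WritableByteChannel target = Channels.newChannel(socket.getOutputStream())) {
            long position = 0;
            long size = file.size();
            while (position < size) {
                // No user-space copy: bytes flow from the file channel to the socket channel
                position += file.transferTo(position, size - position, target);
            }
        }
    }
}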
The zero-copy approach is illustrated below.
Under this model, the number of context switches is reduced to one. Specifically, the transferTo() method instructs the block device to read data into a read buffer by the DMA engine. This buffer is then copied to another kernel buffer for staging to the socket. Finally, the socket buffer is copied to the NIC buffer by DMA.
As a result, we have reduced the number of copies from four to three, and only one of those copies involves the CPU. We have also reduced the number of context switches from four to two.
This is a massive improvement, but it's not quite zero-copy yet. The latter can be achieved as a further optimization when running Linux kernels 2.4 and later, and on network interface cards that support the gather operation. This is illustrated below.
Calling the transferTo() method causes the device to read data into a kernel read buffer by the DMA engine, as per the previous example. However, with the gather operation, there is no copying between the read buffer and the socket buffer. Instead, the NIC is given a pointer to the read buffer, along with the offset and the length, which is vacuumed up by DMA. At no point is the CPU involved in copying buffers.
Comparisons of traditional and zero-copy on file sizes ranging from a few megabytes to a gigabyte show performance gains by a factor of two to three in favor of zero-copy. But what’s more impressive, is that Kafka achieves this using a plain JVM with no native libraries or JNI code.
Avoiding the GC
The heavy use of channels, native buffers, and the page cache has one additional benefit — reducing the load on the garbage collector (GC). For example, running Kafka on a machine with 32 GB of RAM will result in 28–30 GB usable for the page cache, completely outside of the GC’s scope. The difference in throughput is minimal — in the region of several percentage points — as the throughput of a correctly-tuned GC can be quite high, especially when dealing with short-lived objects. The real gains are in the reduction of jitter; by avoiding the GC, the brokers are less likely to experience a pause that may impact the client, extending the end-to-end propagation delay of records.
To be fair, the avoidance of GC is less of a problem now, compared to what it used to be when Kafka was conceived. Modern GCs like Shenandoah and ZGC scale to huge, multi-terabyte heaps, and have tunable worst-case pause times, down to single-digit milliseconds. It is not uncommon these days to see JVM-based applications using large heap-based caches outperform off-heap designs.
Stream parallelism
The efficiency of log-structured I/O is one crucial aspect of performance, mostly affecting writes; Kafka’s treatment of parallelism in the topic structure and the consumer ecosystem is fundamental to its read performance. The combination produces an overall very high end-to-end messaging throughput. Concurrency is ingrained into its partitioning scheme and the operation of consumer groups, which is effectively a load-balancing mechanism within Kafka — distributing partition assignments approximately evenly among the individual consumer instances within the group. Compare this to a more traditional MQ: in an equivalent RabbitMQ setup, multiple concurrent consumers may read from a queue in a round-robin fashion, but in doing so they forfeit the notion of message ordering.
The partitioning mechanism also allows for the horizontal scalability of Kafka brokers. Every partition has a dedicated leader; any nontrivial topic (with multiple partitions) can, therefore, utilize the entire cluster of brokers for writes. This is yet another point of distinction between Kafka and a message queue; where the latter utilizes clustering for availability, Kafka will genuinely balance the load across the brokers for availability, durability, and throughput.
The producer specifies the partition when publishing a record, assuming that you are publishing to a topic with multiple partitions. (One may have a single-partition topic, in which case this is a non-issue.) This may be accomplished either directly — by specifying a partition index, or indirectly — by way of a record key, which deterministically hashes to a consistent (i.e. same every time) partition index. Records sharing the same hash are guaranteed to occupy the same partition. Assuming a topic with multiple partitions, records with a different key will likely end up in different partitions. However, due to hash collisions, records with different hashes may also end up in the same partition. Such is the nature of hashing. If you understand how a hash table works, this is no different.
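As a brief sketch (the topic and key are illustrative), the two ways of steering a record to a partition look like this:

import org.apache.kafka.clients.producer.ProducerRecord;

// Indirect: the key "customer-42" hashes to the same partition index every time
ProducerRecord<String, String> byKey = new ProducerRecord<>("orders", "customer-42", "order created");
// Direct: explicitly target partition 3 of the topic
ProducerRecord<String, String> byPartition = new ProducerRecord<>("orders", 3, "customer-42", "order created");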
The actual processing of records is done by consumers, operating within an (optional) consumer group. Kafka guarantees that a partition may only be assigned to at most one consumer within its consumer group. (We say ‘at most’ to cover the case when all consumers are offline.) When the first consumer in a group subscribes to the topic, it will receive all partitions on that topic. When a second consumer subsequently joins, it will get approximately half of the partitions, relieving the first consumer of half of its prior load. This enables you to process an event stream in parallel, adding consumers as necessary (ideally, using an auto-scaling mechanism), providing that you have adequately partitioned your event stream.
Control of record throughput is accomplished in two ways:
The topic partitioning scheme. Topics should be partitioned to maximize the number of independent event sub-streams. In other words, record order should only be preserved where it is absolutely necessary. If any two records are not legitimately related in a causal sense, they shouldn’t be bound to the same partition. This implies the use of different keys, as Kafka will use a record’s key as a hashing source to derive its consistent partition mapping.
The number of consumers in the group. You can increase the number of consumers to match the load of inbound records, up to the number of partitions in the topic. (You can have more consumers if you wish, but the partition count will place an upper bound on the number of active consumers which get at least one partition assignment; the remaining consumers will remain idle.) Note that a consumer could be a process or a thread. Depending on the type of workload that the consumer performs, you may be able to employ multiple individual consumer threads or process records in a thread pool.
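As an illustrative sketch (the topic and group names are assumptions), every instance started with the same group.id below joins the same consumer group and receives its share of the topic's partitions:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OrderConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors"); // all instances share this group
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}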
If you were wondering whether Kafka is fast, how it achieves its renowned performance characteristics, or if it can scale to your use cases, you should hopefully by now have all the answers you need.
To make things abundantly clear, Kafka is not the fastest (that is, most throughput-capable) messaging middleware — there are other platforms capable of greater throughput — some are software-based and some are implemented in hardware. Nor is it the best throughput-latency compromise — Apache Pulsar is a promising technology that is scalable and achieves a better throughput-latency profile while offering identical ordering and durability guarantees. The rationale for adopting Kafka is that as a complete ecosystem, it remains unmatched overall. It exhibits excellent performance while offering an environment that is abundant and mature, but also involving — in spite of its size, Kafka is still growing at an enviable pace.
The designers and maintainers of Kafka have done an amazing job at devising a solution that is performance-oriented at its core. Few of its design elements feel like an afterthought or a bolt-on. From offloading of work to clients to the log-structured persistence on the broker, batching, compression, zero-copy I/O, and stream-level parallelism — Kafka throws down the gauntlet to just about any other message-oriented middleware, commercial or open-source. And most impressively, it does so without compromising on qualities such as durability, record order, and at-least-once delivery semantics.
Kafka is not the simplest of messaging platforms, and there is a fair bit to learn. One must come to grips with the concepts of a total and partial order, topics, partitions, consumers and consumer groups, before comfortably designing and building high-performance event-driven systems. And while the knowledge curve is substantial, the results will certainly be worth your while. If you are keen on taking the proverbial ‘red pill’, read the Introduction to Event Streaming with Kafka and Kafdrop.