Category: System Design

Systems design is the process of defining the architecture, modules, interfaces, and data for a system to satisfy specified requirements. Systems design could be seen as the application of systems theory to product development.

What is a Microservices Architecture
08
Mar
2021

What is a Microservices Architecture in a Nutshell?

Robert C. Martin coined the term “single responsibility principle”, which states: “gather together those things that change for the same reason, and separate those things that change for different reasons.”

A microservices architecture takes this same approach and extends it to loosely coupled services that can be developed, deployed, and maintained independently. Each of these services is responsible for a discrete task and can communicate with other services through simple APIs to solve a larger, complex business problem.

When you are just starting to develop microservices, start modestly with just one or two services, learn from them, and with time and experience add more.

Why Kafka Is so Fast
26
Mar
2021

Why Kafka Is so Fast

Discover the deliberate design decisions that have made Kafka the performance powerhouse it is today.

The last few years have brought about immense changes in the software architecture landscape. The notion of a single monolithic application or even several coarse-grained services sharing a common data store has been all but erased from the hearts and minds of software practitioners world-wide. Autonomous microservices, event-driven architecture, and CQRS are the dominant tools in the construction of contemporary business-centric applications. To top it off, the proliferation of device connectivity — IoT, mobile, wearables — is creating an upward pressure on the number of events a system must handle in near-real-time.

Let’s start by acknowledging that the term ‘fast’ is multi-faceted, complex, and highly ambiguous. Latency, throughput, and jitter are metrics that shape and influence one’s interpretation of the term. It is also inherently contextual: the industry and application domains in themselves set the norms and expectations around performance. Whether or not something is fast depends largely on one’s frame of reference.

Apache Kafka is optimized for throughput at the expense of latency and jitter, while preserving other desirable qualities, such as durability, strict record order, and at-least-once delivery semantics. When someone says ‘Kafka is fast’, and assuming they are at least mildly competent, you can assume they are referring to Kafka’s ability to safely accumulate and distribute a very high number of records in a short amount of time.

Historically, Kafka was born out of LinkedIn’s need to move a very large number of messages efficiently, amounting to multiple terabytes of data on an hourly basis. The individual message propagation delay was deemed of secondary importance, as was the variability of that time. After all, LinkedIn is not a financial institution that engages in high-frequency trading, nor is it an industrial control system that operates within deterministic deadlines. Kafka can be used to implement near-real-time (otherwise known as soft real-time) systems.

Note: For those unfamiliar with the term, ‘real-time’ does not mean ‘fast’, it means ‘predictable’. Specifically, real-time implies a hard upper bound, otherwise known as a deadline, on the time taken to complete an action. If the system as a whole is unable to meet this deadline each and every time, it cannot be classed as real-time. Systems that are able to perform within a probabilistic tolerance are labelled as ‘near-real-time’. In terms of sheer throughput, real-time systems are often slower than their near-real-time or non-real-time counterparts.

There are two significant areas that Kafka draws upon for its speed, and they need to be discussed separately. The first relates to the low-level efficiency of the client and broker implementations. The second derives from the opportunistic parallelism of stream processing.

Broker performance

Log-structured persistence

Kafka utilizes a segmented, append-only log, largely limiting itself to sequential I/O for both reads and writes, which is fast across a wide variety of storage media. There is a wide misconception that disks are slow; however, the performance of storage media (particularly rotating media) is greatly dependent on access patterns. The performance of random I/O on a typical 7,200 RPM SATA disk is between three and four orders of magnitude slower when compared to sequential I/O. Furthermore, a modern operating system provides read-ahead and write-behind techniques that prefetch data in large block multiples and group smaller logical writes into large physical writes. Because of this, the difference between sequential I/O and random I/O is still evident in flash and other forms of solid-state non-volatile media, although it is far less dramatic compared to rotating media.
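
As a toy illustration (a minimal sketch in plain Java, not Kafka's actual storage code; the segment path is an assumption), an append-only segment only ever writes at the tail of the file, so the disk performs purely sequential I/O:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class AppendOnlyLog {
    private final FileChannel channel;

    public AppendOnlyLog(Path segment) throws IOException {
        // APPEND mode restricts all writes to the end of the file.
        this.channel = FileChannel.open(segment,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND);
    }

    public void append(String record) throws IOException {
        // Every record lands at the tail of the segment: pure sequential I/O.
        channel.write(ByteBuffer.wrap((record + "\n").getBytes(StandardCharsets.UTF_8)));
    }
}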

Record batching

Sequential I/O is blazingly fast on most media types, comparable to the peak performance of network I/O. In practice, this means that a well-designed log-structured persistence layer will keep up with the network traffic. In fact, quite often the bottleneck with Kafka isn’t the disk, but the network. So in addition to the low-level batching provided by the OS, Kafka clients and brokers will accumulate multiple records in a batch — for both reading and writing — before sending them over the network. Batching of records amortizes the overhead of the network round-trip, using larger packets and improving bandwidth efficiency.

Batch compression

The impact of batching is particularly obvious when compression is enabled, as compression becomes generally more effective as the data size increases. Especially when using text-based formats such as JSON, the effects of compression can be quite pronounced, with compression ratios typically ranging from 5x to 7x. Furthermore, record batching is largely done as a client-side operation, which transfers the load onto the client and has a positive effect not only on the network bandwidth but also on the brokers’ disk I/O utilization.
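
To make the batching and compression knobs concrete, here is a hedged sketch of a producer configuration. The keys batch.size, linger.ms, and compression.type are standard Kafka producer settings; the broker address and sizing values are illustrative assumptions:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class BatchingProducer {
    public static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Accumulate up to 64 KB per partition batch, waiting up to 20 ms for a batch to fill.
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 65536);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 20);
        // Compress each batch as a unit; larger batches compress better.
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4");
        return new KafkaProducer<>(props);
    }
}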

Cheap consumers

Unlike traditional MQ-style brokers which remove messages at point of consumption (incurring the penalty of random I/O), Kafka doesn’t remove messages after they are consumed — instead, it independently tracks offsets at each consumer group level. The progression of offsets themselves is published on an internal Kafka topic __consumer_offsets. Again, being an append-only operation, this is fast. The contents of this topic are further reduced in the background (using Kafka’s compaction feature) to only retain the last known offsets for any given consumer group.

Compare this model with more traditional message brokers which typically offer several contrasting message distribution topologies. On one hand is the message queue — a durable transport for point-to-point messaging, with no point-to-multipoint ability. On the other hand, a pub-sub topic allows for point-to-multipoint messaging but does so at the expense of durability. Implementing a durable point-to-multipoint messaging model in a traditional MQ requires maintaining a dedicated message queue for each stateful consumer. This creates both read and write amplification. On one hand, the publisher is forced to write to multiple queues. Alternatively, a fan-out relay may consume records from one queue and write to several others, but this only defers the point of amplification. On the other hand, several consumers are generating load on the broker — being a mixture of read and write I/O, both sequential and random.

Consumers in Kafka are ‘cheap’, insofar as they don’t mutate the log files (only the producer or internal Kafka processes are permitted to do that). This means that a large number of consumers may concurrently read from the same topic without overwhelming the cluster. There is still some cost in adding a consumer, but it is mostly sequential reads with a low rate of sequential writes. So it’s fairly normal to see a single topic being shared across a diverse consumer ecosystem.

Unflushed buffered writes

Another fundamental reason for Kafka’s performance, and one that is worth exploring further: Kafka doesn’t actually call fsync when writing to the disk before acknowledging the write; the only requirement for an ACK is that the record has been written to the I/O buffer. This is a little-known fact, but a crucial one: in fact, this is what actually makes Kafka perform as if it were an in-memory queue — because for all intents and purposes Kafka is a disk-backed in-memory queue (limited by the size of the buffer/pagecache).

On the flip side, this form of writing is unsafe, as the failure of a replica can lead to a data loss even though the record has seemingly been acknowledged. In other words, unlike say a relational database, acknowledging a write alone does not imply durability. What makes Kafka durable is running several in-sync replicas; even if one were to fail, the others (assuming there is more than one) will remain operational — providing that the failure is uncorrelated (i.e. multiple replicas failing simultaneously due to a common upstream failure). So the combination of a non-blocking approach to I/O with no fsync, and redundant in-sync replicas, gives Kafka its combination of high throughput, durability, and availability.
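
As a sketch of the durability knobs involved (the property below is a standard producer setting that could be added to the configuration sketched earlier; the kafka-topics flags in the comment are real, but the topic name and sizing are made up):

// Producer side: wait for acknowledgement from all in-sync replicas before a send succeeds.
props.put(ProducerConfig.ACKS_CONFIG, "all");

// Topic side: require at least 2 in-sync replicas for a write to be accepted at all,
// configured out-of-band when creating the topic, e.g.:
//   kafka-topics --create --topic payments --partitions 6 \
//     --replication-factor 3 --config min.insync.replicas=2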

Client-side optimisations

Most databases, queues, and other forms of persistent middleware are designed around the notion of an all-mighty server (or a cluster of servers), and fairly thin clients that communicate with the server(s) over a well-known wire protocol. Client implementations are generally considered to be significantly simpler than the server. As a result, the server will absorb the bulk of the load — the clients merely act as interfaces between the application code and the server.

Kafka takes a different approach to client design. A significant amount of work is performed on the client before records get to the server. This includes the staging of records in an accumulator, hashing the record keys to arrive at the correct partition index, checksumming the records, and compressing the record batch. The client is aware of the cluster metadata and periodically refreshes this metadata to keep abreast of any changes to the broker topology. This lets the client make low-level forwarding decisions; rather than sending a record blindly to the cluster and relying on the latter to forward it to the appropriate broker node, a producer client will forward writes directly to partition masters. Similarly, consumer clients are able to make intelligent decisions when sourcing records, potentially using replicas that are geographically closer to the client when issuing read queries. (This feature is a more recent addition to Kafka, available as of version 2.4.0.)
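
As a simplified illustration of the client-side partitioning step (a simplification: Kafka's real default partitioner hashes the serialized key bytes with murmur2; plain hashCode() is used here purely for brevity):

// Simplified sketch: map a record key to a partition index.
// Records with equal keys always land in the same partition.
static int partitionFor(String key, int numPartitions) {
    int hash = key.hashCode();                   // stand-in for Kafka's murmur2 hash
    return (hash & 0x7fffffff) % numPartitions;  // clear the sign bit, wrap into range
}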

Zero-copy

One of the typical sources of inefficiencies is copying byte data between buffers. Kafka uses a binary message format that is shared by the producer, the broker, and the consumer parties so that data chunks can flow end-to-end without modification, even if it’s compressed. While eliminating structural differences between communicating parties is an important step, it doesn’t in itself avoid the copying of data.

Kafka solves this problem on Linux and UNIX systems by using Java’s NIO framework, specifically, the transferTo() method of a java.nio.channels.FileChannel. This method permits the transfer of bytes from a source channel to a sink channel without involving the application as a transfer intermediary. To appreciate the difference that NIO makes, consider the traditional approach where a source channel is read into a byte buffer, then written to a sink channel as two separate operations:

File.read(fileDesc, buf, len);   // copy the file contents into a user-space buffer
Socket.send(socket, buf, len);   // copy the buffer back into kernel space for the socket

Diagrammatically, this can be represented using the following.

Although this looks simple enough, internally, the copy operation requires four context switches between user mode and kernel mode, and the data is copied four times before the operation is complete. The diagram below outlines the context switches at each step.

Looking at this in more detail —

  1. The initial read() causes a context switch from user mode to kernel mode. The file is read, and its contents are copied to a buffer in the kernel address space by the DMA (Direct Memory Access) engine. This is not the same buffer that was used in the code snippet.
  2. Prior to returning from read(), the kernel buffer is copied into the user-space buffer. At this point, our application can read the contents of the file.
  3. The subsequent send() will switch back into kernel mode, copying the user-space buffer into the kernel address space — this time into a different buffer associated with the destination socket. Behind the scenes, the DMA engine takes over, asynchronously copying the data from the kernel buffer to the protocol stack. The send() method does not wait for this prior to returning.
  4. The send() call returns, switching back to the user-space context.

In spite of its mode-switching inefficiencies and additional copying, the intermediate kernel buffer can actually improve performance in many cases. It can act as a read-ahead cache, asynchronously prefetching blocks, thereby front-running requests from the application. However, when the amount of requested data is significantly larger than the kernel buffer size, the kernel buffer becomes a performance bottleneck. Rather than copying the data directly, it forces the system to oscillate between user and kernel modes until all the data is transferred.

By contrast, the zero-copy approach is handled in a single operation. The snippet from the earlier example can be rewritten as a one-liner:

fileDesc.transferTo(offset, len, socket);

The zero-copy approach is illustrated below.

Under this model, the number of context switches is reduced to one. Specifically, the transferTo() method instructs the block device to read data into a read buffer by the DMA engine. This buffer is then copied to another kernel buffer for staging to the socket. Finally, the socket buffer is copied to the NIC buffer by DMA.

As a result, we have reduced the number of copies from four to three, and only one of those copies involves the CPU. We have also reduced the number of context switches from four to two.

This is a massive improvement, but it’s not quite zero-copy yet. The latter can be achieved as a further optimization when running Linux kernels 2.4 and later, and on network interface cards that support the gather operation. This is illustrated below.

Calling the transferTo() method causes the device to read data into a kernel read buffer by the DMA engine, as per the previous example. However, with the gather operation, there is no copying between the read buffer and the socket buffer. Instead, the NIC is given a pointer to the read buffer, along with the offset and the length, which is vacuumed up by DMA. At no point is the CPU involved in copying buffers.

Comparisons of traditional and zero-copy on file sizes ranging from a few megabytes to a gigabyte show performance gains by a factor of two to three in favor of zero-copy. But what’s more impressive is that Kafka achieves this using a plain JVM with no native libraries or JNI code.
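
For completeness, here is what a self-contained transferTo() transfer might look like in plain Java NIO; the file name, host, and port are assumptions, and production code would add error handling:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopySend {
    public static void main(String[] args) throws IOException {
        try (FileChannel file = FileChannel.open(Path.of("data.log"), StandardOpenOption.READ);
             SocketChannel socket = SocketChannel.open(new InetSocketAddress("localhost", 9000))) {
            long position = 0;
            long remaining = file.size();
            // transferTo may move fewer bytes than requested, so loop until the file is sent.
            while (remaining > 0) {
                long sent = file.transferTo(position, remaining, socket);
                position += sent;
                remaining -= sent;
            }
        }
    }
}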

Avoiding the GC

The heavy use of channels, native buffers, and the page cache has one additional benefit — reducing the load on the garbage collector (GC). For example, running Kafka on a machine with 32 GB of RAM will result in 28–30 GB usable for the page cache, completely outside of the GC’s scope. The difference in throughput is minimal — in the region of several percentage points — as the throughput of a correctly-tuned GC can be quite high, especially when dealing with short-lived objects. The real gains are in the reduction of jitter; by avoiding the GC, the brokers are less likely to experience a pause that may impact the client, extending the end-to-end propagation delay of records.

To be fair, the avoidance of GC is less of a problem now, compared to what it used to be when Kafka was conceived. Modern GCs like Shenandoah and ZGC scale to huge, multi-terabyte heaps, and have tunable worst-case pause times, down to single-digit milliseconds. It is not uncommon these days to see JVM-based applications using large heap-based caches outperform off-heap designs.

Stream parallelism

The efficiency of log-structured I/O is one crucial aspect of performance, mostly affecting writes; Kafka’s treatment of parallelism in the topic structure and the consumer ecosystem is fundamental to its read performance. The combination produces an overall very high end-to-end messaging throughput. Concurrency is ingrained into its partitioning scheme and the operation of consumer groups, which is effectively a load-balancing mechanism within Kafka — distributing partition assignments approximately evenly among the individual consumer instances within the group. Compare this to a more traditional MQ: in an equivalent RabbitMQ setup, multiple concurrent consumers may read from a queue in a round-robin fashion, but in doing so they forfeit the notion of message ordering.

The partitioning mechanism also allows for the horizontal scalability of Kafka brokers. Every partition has a dedicated leader; any nontrivial topic (with multiple partitions) can, therefore, utilize the entire cluster of brokers for writes. This is yet another point of distinction between Kafka and a message queue; where the latter utilizes clustering for availability, Kafka will genuinely balance the load across the brokers for availability, durability, and throughput.

The producer specifies the partition when publishing a record, assuming that you are publishing to a topic with multiple partitions. (One may have a single-partition topic, in which case this is a non-issue.) This may be accomplished either directly — by specifying a partition index, or indirectly — by way of a record key, which deterministically hashes to a consistent (i.e. same every time) partition index. Records sharing the same hash are guaranteed to occupy the same partition. Assuming a topic with multiple partitions, records with a different key will likely end up in different partitions. However, due to hash collisions, records with different hashes may also end up in the same partition. Such is the nature of hashing. If you understand how a hash table works, this is no different.
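
For illustration, publishing a keyed record might look like the following sketch (the topic, key, and value are made up; BatchingProducer.create() refers to the earlier configuration sketch):

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

KafkaProducer<String, String> producer = BatchingProducer.create();

// Records with the same key always hash to the same partition, preserving their relative order.
producer.send(new ProducerRecord<>("orders", "customer-42", "order-created"),
        (metadata, exception) -> {
            if (exception == null) {
                System.out.printf("partition=%d offset=%d%n", metadata.partition(), metadata.offset());
            }
        });
producer.flush();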

The actual processing of records is done by consumers, operating within an (optional) consumer group. Kafka guarantees that a partition may only be assigned to at most one consumer within its consumer group. (We say ‘at most’ to cover the case when all consumers are offline.) When the first consumer in a group subscribes to the topic, it will receive all partitions on that topic. When a second consumer subsequently joins, it will get approximately half of the partitions, relieving the first consumer of half of its prior load. This enables you to process an event stream in parallel, adding consumers as necessary (ideally, using an auto-scaling mechanism), providing that you have adequately partitioned your event stream.
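
A minimal consumer-group sketch might look like this (the broker address, group id, and topic name are assumptions); running a second copy of the same program would trigger a rebalance that splits the partitions between the two instances:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupedConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors"); // all members share the partitions
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("orders"));
            while (true) {
                // Each poll returns records only from the partitions assigned to this instance.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n", r.partition(), r.offset(), r.value());
                }
            }
        }
    }
}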

Control of record throughput is accomplished in two ways:

  1. The topic partitioning scheme. Topics should be partitioned to maximize the number of independent event sub-streams. In other words, record order should only be preserved where it is absolutely necessary. If any two records are not legitimately related in a causal sense, they shouldn’t be bound to the same partition. This implies the use of different keys, as Kafka will use a record’s key as a hashing source to derive its consistent partition mapping.
  2. The number of consumers in the group. You can increase the number of consumers to match the load of inbound records, up to the number of partitions in the topic. (You can have more consumers if you wish, but the partition count will place an upper bound on the number of active consumers which get at least one partition assignment; the remaining consumers will remain idle.) Note that a consumer could be a process or a thread. Depending on the type of workload that the consumer performs, you may be able to employ multiple individual consumer threads or process records in a thread pool.

If you were wondering whether Kafka is fast, how it achieves its renowned performance characteristics, or if it can scale to your use cases, you should hopefully by now have all the answers you need.

To make things abundantly clear, Kafka is not the fastest (that is, most throughput-capable) messaging middleware — there are other platforms capable of greater throughput — some are software-based and some are implemented in hardware. Nor is it the best throughput-latency compromise — Apache Pulsar is a promising technology that is scalable and achieves a better throughput-latency profile while offering identical ordering and durability guarantees. The rationale for adopting Kafka is that as a complete ecosystem, it remains unmatched overall. It exhibits excellent performance while offering an environment that is abundant and mature, but also involving — in spite of its size, Kafka is still growing at an enviable pace.

The designers and maintainers of Kafka have done an amazing job at devising a solution that is performance-oriented at its core. Few of its design elements feel like an afterthought or a bolt-on. From offloading of work to clients to the log-structured persistence on the broker, batching, compression, zero-copy I/O, and stream-level parallelism — Kafka throws down the gauntlet to just about any other message-oriented middleware, commercial or open-source. And most impressively, it does so without compromising on qualities such as durability, record order, and at-least-once delivery semantics.

Kafka is not the simplest of messaging platforms, and there is a fair bit to learn. One must come to grips with the concepts of a total and partial order, topics, partitions, consumers and consumer groups, before comfortably designing and building high-performance event-driven systems. And while the knowledge curve is substantial, the results will certainly be worth your while. If you are keen on taking the proverbial ‘red pill’, read the Introduction to Event Streaming with Kafka and Kafdrop.

Event-Driven Architecture
14
May
2021

What is an Event-Driven Architecture?

An event is described as a change in the state of hardware or software, and an Event-Driven Architecture has two parts to the working equation, i.e. an event producer and an event consumer. Let us understand how this application architecture works: 

It all begins with the event producer, which identifies the emergence of an event and labels it as a message. 

  • This event is then broadcast to an event consumer. 
  • The message travels through the respective channels and is interpreted by a centralized event processing platform. 
  • This platform is programmed to decide upon the follow-up action to be taken on the event. 
  • Once it matches the event to the corresponding response within its directory, it forwards the same to the respective consumer. 

This last step determines the final outcome of the event that has been generated. The brightest example of this pattern can be found on a web page. 

The moment you click a button, the browser interprets the event and surfaces the programmed action, such as video playback, matching the input with the right output. In contrast to the layered architecture, where the code must flow top-down and filter through all the layers, Event-Driven architectures deploy modules that are activated only when an event connected to them is generated.

Event-Driven Architecture

Benefits

  • Among the different types of architecture in software engineering, the Event-Driven Architecture is suited to applications that have a tendency to scale. It improves the response time of the architecture, eventually leading to better business outcomes. 
  • This system is very adaptable to real-time changes and is suited to asynchronous systems that run on asymmetric data flow. 
  • They play a huge role in defining how IoT works. They are widely applicable across networks and applications where devices that are part of the Internet of Things (IoT) must exchange information between event producers and consumers in real-time.

Potential Drawbacks 

  • Developers might face bottlenecks while managing error handling, particularly in cases where multiple modules are responsible for a single event. 
  • You must back up the central processing platform with a recommended software architecture tool, so that the failure of a single module does not result in system collapse. 
  • The operational speed of the entire system could be slowed down if the processing platform is programmed to buffer messages as and when they come. 

Ideal For

Event-Driven Architecture can be deployed for applications that need instant data communication and on-demand scaling, as in the case of website tracking or stream processing. 

Client-side vs. Server-side vs. Pre-rendering for Web Apps
26
Mar
2021

Client-side vs. Server-side vs. Pre-rendering for Web Apps

There is something going on within the front-end community recently. Server-side rendering is getting more and more traction thanks to React and its built-in server-side hydration feature. But it’s not the only solution to deliver a fast experience to the user with a super fast time-to-first-byte (TTFB) score: Pre-rendering is also a pretty good strategy. What’s the difference between these solutions and a fully client-rendered application?

Client-rendered Application

Since frameworks like Angular, Ember.js, and Backbone exist, front-end developers have tended to render everything client-side. Thanks to Google and its ability to “read” JavaScript, it works pretty well, and it’s even SEO friendly.

With a client-side rendering solution, you redirect the request to a single HTML file and the server will deliver it without any content (or with a loading screen) until you fetch all the JavaScript and let the browser compile everything before rendering the content.

Under a good and reliable internet connection, it’s pretty fast and works well. But it can be a lot better, and it doesn’t have to be difficult to make it that way. That’s what we will see in the following sections.

Server-side Rendering (SSR)

An SSR solution is something we used to do a lot, many years ago, but tend to forget in favor of a client-side rendering solution.

With old server-side rendering solutions, you built a web page—with PHP for example—the server compiled everything, included the data, and delivered a fully populated HTML page to the client. It was fast and effective.

But… every time you navigated to another route, the server had to do the work all over again: Get the PHP file, compile it, and deliver the HTML, with all the CSS and JS delaying the page load to a few hundred ms or even whole seconds.

What if you could do the first page load with the SSR solution, and then use a framework to do dynamic routing with AJAX, fetching only the necessary data?

This is why SSR is getting more and more traction within the community: React popularized this approach with an easy-to-use solution, the renderToString method.

This new kind of web application is called a universal app or an isomorphic app. There’s still some controversy over the exact meanings of these terms and the relationship between them, but many people use them interchangeably.

Anyway, the advantage of this solution is being able to develop an app server-side and client-side with the same code and deliver a really fast experience to the user with custom data. The disadvantage is that you need to run a server.

SSR is used to fetch data and pre-populate a page with custom content, leveraging the server’s reliable internet connection. That is, the server’s own internet connection is better than that of a user with lie-fi, so it’s able to prefetch and amalgamate data before delivering it to the user.

With the pre-populated data, using an SSR app can also fix an issue that client-rendered apps have with social sharing and the OpenGraph system. For example, if you have only one index.html file to deliver to the client, they will only have one type of metadata—most likely your homepage metadata. This won’t be contextualized when you want to share a different route, so none of your routes will be shown on other sites with their proper user content (description and preview picture) that users would want to share with the world.

Pre-rendering

The mandatory server for a universal app can be a deterrent for some and may be overkill for a small application. This is why pre-rendering can be a really nice alternative.

I discovered this solution with Preact and its own CLI that allows you to compile all pre-selected routes so you can store a fully populated HTML file to a static server. This lets you deliver a super-fast experience to the user, thanks to the Preact/React hydration function, without the need for Node.js.

The catch is, because this isn’t SSR, you don’t have user-specific data to show at this point—it’s just a static (and somewhat generic) file sent directly on the first request, as-is. So if you have user-specific data, here is where you can integrate a beautifully designed skeleton to show the user their data is coming, to avoid some frustration on their part:

Using a document skeleton as part of a loading indicator

There is another catch: In order for this technique to work, you still need to have a proxy or something to redirect the user to the right file.

Why?

With a single-page application, you need to redirect all requests to the root file, and then the framework redirects the user with its built-in routing system. So the first page load is always the same root file.

In order for a pre-rendering solution to work, you need to tell your proxy that some routes need specific files and not always the root index.html file.

For example, say you have four routes (/, /about, /jobs, and /blog) and all of them have different layouts. You need four different HTML files to deliver the skeleton to the user that will then let React/Preact/etc. rehydrate it with data. So if you redirect all those routes to the root index.html file, the page will have an unpleasant, glitchy feel during loading, whereby the user will see the skeleton of the wrong page until it finishes loading and replaces the layout. For example, the user might see a homepage skeleton with only one column, when they had asked for a different page with a Pinterest-like gallery.

The solution is to tell your proxy that each of those four routes needs a specific file:

  • https://my-website.com → Redirect to the root index.html file
  • https://my-website.com/about → Redirect to the /about/index.html file
  • https://my-website.com/jobs → Redirect to the /jobs/index.html file
  • https://my-website.com/blog → Redirect to the /blog/index.html file

This is why this solution can be useful for small applications—you can see how painful it would be if you had a few hundred pages.

Strictly speaking, it’s not mandatory to do it this way—you could just use a static file directly. For example, https://my-website.com/about/ will work without any redirection because it will automatically search for an index.html inside its directory. But you need this proxy if you have param urls—https://my-website.com/profile/guillaume will need to redirect the request to /profile/index.html with its own layout, because profile/guillaume/index.html doesn’t exist and will trigger a 404 error.

A flowchart showing how a proxy makes a difference in a pre-rendering solution, as described in the previous paragraph

In short, there are three basic views at play with the rendering strategies described above: A loading screen, a skeleton, and the full page once it’s finally rendered.

Comparing a loading screen, a skeleton, and a fully-rendered page

Depending on the strategy, sometimes we use all three of these views, and sometimes we jump straight to a fully-rendered page. Only in one use case are we forced to use a different approach:

Method                  | Landing (e.g. /) | Static (e.g. /about) | Fixed Dynamic (e.g. /news) | Parameterized Dynamic (e.g. /users/:user-id)
------------------------|------------------|----------------------|----------------------------|----------------------------------------------
Client-rendered         | Loading → Full   | Loading → Full       | Loading → Skeleton → Full  | Loading → Skeleton → Full
Pre-rendered            | Full             | Full                 | Skeleton → Full            | HTTP 404 (page not found)
Pre-rendered With Proxy | Full             | Full                 | Skeleton → Full            | Skeleton → Full
SSR                     | Full             | Full                 | Full                       | Full

Client-only Rendering is Often Not Enough

Client-rendered applications are something we should avoid now because we can do better for the user. And doing better, in this case, is as easy as the pre-rendering solution. It’s definitely an improvement over client-only rendering and easier to implement than a fully server-side-rendered application.

An SSR/universal application can be really powerful if you have a large application with a lot of different pages. It allows your content to be focused and relevant when talking to a social crawler. This is also true for search engine robots, which now take your site’s performance into account when ranking it.

Stay tuned for a follow-up tutorial, where I will walk through the transformation of an SPA into pre-rendered and SSR versions, and compare their performance.

Stop this Microservices Madness
26
Mar
2021

Stop this Microservices Madness

I’m a big fan of Microservices.
Does it sound weird, given the title? Well, it should not.

Every now and then, a new paradigm or technology arises. It brings hope and hype. This will save us all!
Yeah, it’s true. Maybe it is ground-breaking. Extremely beneficial. However, like everything in life, every solution has its scope.

Have you ever tried to apply Newtonian laws to subatomic particles? Or, have you ever tried to squeeze a television into a sock? Pretty much the same here.
Hype makes you forget this little clause at the end of the contract. And results are catastrophic.

Proper usage of socks and televisions. Photo by Mollie Sivaram on Unsplash

Splitting is the way

In a normal web application, you have a single process serving all requests. Well, you can replicate it for balancing reasons, but in any case there’s a single package/software you have to run. And it bundles all the logic together.

When you use Microservices, instead, you have several, distinct processes, each one serving just a subset of requests. Each process runs a distinct package, with just the code it needs.

Is it really new?

No, it isn’t.

The Microservice pattern is nothing new, actually, like almost every emergent design (besides, it has been around for some years).
It is Object-Oriented Programming applied on a higher level: application architecture.

No coincidence that it often makes use of HTTP APIs, particularly RESTful, which are in turn OOP principles applied to API design.
P.S. I hate acronyms.

Let me explain.

When you create a program using Object-Oriented Programming, you split it into several pieces, Classes. Each Class represents a component of the solution and has a specific role. It also works just on its own subset of data. Does it sound familiar?

This is not the right place to discuss Object-Oriented Programming in general, but I have to point out some aspects. If you are not familiar with it, you can reference online documentation and, soon, my other article about it!

The Two Principles — The Good Tailor

When you design Classes, you must follow two main principles: loose coupling and high cohesion.

To be honest, these should be the guiding stars of each software project, not just OOP ones. But that’s another story. Or is it?

High cohesion is the situation where each component is strongly consistent. All its data and methods refer to a very specific area, or meaning, or role.

Loose coupling is the situation where each component has the least possible amount of interactions with other components. It can work on its own, hiding low-level details, and exposes the essential interfaces.

For Microservices it is the same.
I can’t stress this enough. A good OOP design leads to an optimal Microservice design. There is no middle ground.

This should be almost enough to answer why and when to use Microservices. But since there is more, let’s break it down.

Be a Good Tailor. Loosely sew highly-entangled (coherent) fabrics. Photo by Jeff Wade on Unsplash

When to split?

This is exactly as it is for Object-Oriented Programming.

When you can respect high cohesion and loose coupling.

No differently.

You know how to split code into Classes in the right way. Why would you split your Microservices randomly?
If a bad Class diagram is painful to maintain and manage, because of the sparsity of code in different files, think of a randomly split Microservices application, where you have different processes altogether.

But why on Earth would I split my application into sub-processes?

Imagine how easy it is to make two Objects interact. You just have to call an Object’s method and you are good to go. All within a single program.

Microservices, instead, live in different processes. Possibly, on different machines. You have to communicate over a network, using APIs.

It adds complexity!
You must have excellent reasons to do so.
Not for trend, not for fun. Because, I assure you, if you use Microservices randomly, their management is not fun at all.

Actually, you must have the same reasons you’d have when you decide to use APIs. It’s the same.

APIs let you hide all the implementation details behind them. That’s essential in some cases, and it offers incredible benefits.

1. Scalability for non-uniform traffic

Scenario: you have a Supermarket application.
You have a Stock Microservice that just shows you the amounts of articles in stock, and a Vision Microservice, which uses GPUs to detect your articles given a picture of them.

If you receive thousands of calls for your Stock APIs and just a bunch of requests for your Vision APIs, you could just replicate your Stock Microservice.

No need to double your GPU resources!

Think of it. If you kept everything in a single machine or application, and you still wanted to scale to match all the incoming requests, you would have to replicate it entirely. A waste.

2. Error resilience

Suppose you have a Bank application. Your Transfer Microservice keeps crashing, maybe for a temporary error or, worse, for a newly introduced bug (damn untested releases…).

Why should the entire Bank application be down? Maybe some customers wouldn’t even notice, because they just want to see their movements or they want to use their cards.

3. Separate deployments

Similarly to the reason above, if you have a new feature to add or a bug to fix, you can just deploy the right Microservice. Your build and deployment times should be smaller.

Also, you could put your Microservices on different machines, in different locations, maybe.

4. Complete isolation

Maybe you need complete isolation between Microservices. In this way, you can use different DBs (SQL and noSQL) or different technologies altogether. This will give you complete freedom!
And it is of course strictly related to the other points as well, about error resilience or requirements.

5. Different requirements

Do you have a heavy computation to perform? Maybe a Machine Learning one, where you need GPUs, their libraries…
Maybe that part would be much better in Python, while the other one could benefit from Java.
Or maybe libraries for the same language would create some conflicts, but you desperately need two specific versions for two distinct tasks.
And finally, perhaps you want to use two different Cloud products to host your two distinct pieces.

There are plenty of reasons, some more common than others.

In those cases, split the application!
Probably it’s easier. But remember to respect the Two Principles! Be a Good Tailor, or you will have a hard time sharing data between your Microservices…

Even the highest of Monoliths is made by blocks. It might be atoms, but it is. Photo by Zoltan Tasi on Unsplash

Is it not your case? Go for a Monolith.

If your system is tightly coupled, use a single application. A Monolith.

You’ll have cleaner dependency management, and you won’t need complex orchestrations or distributed systems to trace errors, share data, collect logs, sync calls, secure network interactions… and I can go on.

Is Object-Oriented Programming not suitable for your application architecture? Or, better, is RESTful design not suitable? Microservices won’t be either!

If your application serves a long sequence of API calls, maybe because an API requires the result of the previous ones, this is not ideal. It would also introduce network delays.
If your application is a single procedure that requires user interaction, well, the sentence is self-explaining. It is a Procedure!
You have to share a state between calls, each component is not independent and a single error in the sequence will block it entirely. There is hardly a benefit in splitting the application.

And remember, Monolith doesn’t mean Mess

Monolith.

Mess.

They are also spelled differently.

Someone sees Monolith as a bunch of unordered code.
They claim that the Microservice pattern is the only way to have splits, order and all.

Well, I’ll reveal something.

It’s — not — true.

Your code design does not depend on process divisions. Object-Oriented Programming doesn’t either. If you split your code right, if you manage your dependencies in a clear hierarchy, as well as your files, you can achieve the same level of cohesion and decoupling.

And another secret: if you divide your APIs into separate packages/modules/controllers/blueprints/whatever in your code, then you can decide to serve them in a single process or split them in Microservices at build-time, or at run-time too, for free.

Wow.

Conclusions

I’m a big fan of Microservices, as I said.

If used in the right context, for the right reasons, they solve issues and improve performance or costs.

But please, stop making them the one and only solution for all problems.
Because they can also be the root of all evil.

Microservices Architecture
14
May
2021

Microservices Architecture

Microservices are defined as self-regulating, independent codebases, each of which can be written and maintained even by a small team of developers. Microservices Architecture consists of such loosely coupled services, with each service responsible for the execution of its associated business logic. 

The services are separated from each other based on the nature of their domains and belong to a mini-microservice pool. Enterprise mobile app developers leverage the capabilities of this architecture especially for complex applications. 

Microservices Architecture allows developers to release software versions frequently thanks to sophisticated automation of software building, testing, and deployment – something that acts as a prime differentiation point between Microservices and Monolithic architecture.

Microservices Architecture

Benefits

  • Since the services are bifurcated into pools, the architecture design pattern makes the system highly fault-tolerant. In other words, the whole software won’t collapse on its head even if some microservices cease to function. 
  • An enterprise mobile app development company working on such an architecture for clients can deploy multiple programming languages to build different microservices for their specific purpose. Therefore the technology stack can be kept updated with the latest upgrades in computing. 
  • This architecture is a perfect fit for applications that need to scale. Since the services are already independent of each other, they can scale individually rather than overloading the entire system with the need to expand. 
  • Services can be integrated into any application depending upon the scope of work. 

Potential Drawbacks 

  • Since each service is unique in its ability to contribute to the whole codebase, it could be challenging for an enterprise mobile application development company to interlink and operate so many distinct services seamlessly. 
  • Developers must define a standard protocol for all services to adhere to. It is important to do so, as the decentralized approach towards coding microservices in multiple languages can pose serious issues while debugging. 
  • Each microservice, with its limited environment, is responsible for maintaining the integrity of the data. It is up to the architects of such a system to come up with a universally consistent data integrity protocol, wherever possible. 
  • You definitely need the best of breed professionals to design such a system for you as the technology stack keeps changing. 

Ideal For

Use Microservices Architecture for apps in which a specific segment will be used more heavily than the others and would need sporadic bursts of scaling. Instead of a standalone application, you may also deploy this for a service that provides functionality to other applications of the system. 

Persistence in Event Driven Architectures
28
Mar
2021

Persistence in Event Driven Architectures

The importance of being persistent in event driven architectures.

Enterprises have to constantly adapt and evolve their enterprise architecture strategies in order to deliver the desired business outcomes. The evolving architecture patterns may involve business processing of sales transactions with a human in the loop or they may involve machine to machine data processing using automation. Enterprises earlier adopted a request-driven model where microservices made a call to a service and the service responded to the request being made. In this request-driven model, you run into challenges around flexibility as you try to scale your global deployment footprint.

A new approach that is quickly gaining adoption in enterprises is called event-driven architecture. In this new approach, you are able to increase application agility and flexibility by allowing multiple data producers to coexist with multiple data consumers, and you process data only after an event or state change. In this enterprise architecture, the producers and consumers of data can be quickly extended to deliver better flexibility and agility as you scale your operation globally. Examples of event-driven solutions are available in a hosted, managed format from most cloud providers today. In this blog post, we will look at a Kafka solution running in a Kubernetes cluster and how you can make sure persistence is achieved for the solution. This approach is running at customers in production today, supported by Confluent and Portworx.

Why Use Event-Driven Architecture?

With the advent of 5G technology, a vast amount of data will be generated by sensors, devices, systems, and humans in the loop to track, manage, and achieve business outcomes. Use cases for event-driven architectures include the following:

  • Business Process state changes – You want to notify the change of state between a purchase order and accounts receivable with an event. The event-based approach allows a human or a decision engine to take the next appropriate step in the process.
  • Log and Metrics processing – The event-driven model allows for multiple actions to be triggered based on a single metric. The ability to send messages to multiple event handlers in different subsystems offers the scalability and resiliency required by certain business applications.

Why Use Kafka?

Apache Kafka is a scalable, fault-tolerant messaging system that enables you to build distributed real-time applications with an event-driven architecture. Kafka delivers events, with fast ingestion rates, and provides persistence and in-order guarantees. Kafka adoption in your solution will depend on your specific use-case. Below are some important concepts about Apache Kafka:

  • Kafka organizes messages into “topics”.
  • The process that does the work in Kafka is called the “broker”. A producer pushes data into a topic hosted on a broker and a consumer pulls messages from a topic via a broker.
  • Kafka topics can be divided into “partitions”. This allows for parallelizing a topic across multiple brokers and increasing message ingestion and throughput.
  • Brokers can hold multiple partitions, but at any given time only one broker can act as the leader of a given partition. The leader is responsible for updating any replicas with new data.
  • Brokers are responsible for storing messages to disk. The messages are stored with unique offsets; messages in Kafka do not have a unique ID of their own, and are instead addressed by their offset within a partition.
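
To make the concepts above concrete, here is a sketch of creating a partitioned, replicated topic with Kafka's Admin API; the broker address, topic name, and sizing are assumptions:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed address

        try (Admin admin = Admin.create(props)) {
            // 6 partitions parallelize the topic across brokers; replication factor 3
            // keeps a copy of each partition on 3 different brokers.
            NewTopic topic = new NewTopic("sensor-events", 6, (short) 3);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}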

Persistence With Portworx and Kafka on Kubernetes

Kafka needs ZooKeeper to be deployed as a StatefulSet in Kubernetes. Kafka brokers, which maintain the state of the topics and partitions, also need to be deployed as StatefulSets backed by persistent volumes.

  • Kafka offers replication of topics between different brokers. In the case of a node failure, Kafka can recover from failure using the replicated topics. This recovery mechanism does create additional network calls in order to synchronize with the replica on a different broker. The recovery time of the failed node and its broker depend on the amount of data that needs to be rehydrated and network latencies in the cluster.
  • Portworx offers data replication using the replication parameter in the Kubernetes storage class. In this scenario, the storage system is responsible for maintaining copies of the topic on different nodes. In the case of a node failure, the Kafka broker is rescheduled on a node that already contains the replicated topic data. The rebuild time of the broker is reduced because it uses the storage system to rehydrate the topic data without any network latencies. Once the data is rehydrated using the storage system, the broker can quickly catch up on the topic offset from an existing broker, thus reducing the overall recovery time.

Let’s walk through a Kubernetes node failure scenario on a Kubernetes cluster where a Kafka application is running and is backed by Portworx volumes.

Figure 1: We will describe the deployment of Kafka and Portworx on a 5 node Kubernetes cluster. In the image below, you can see that the Kafka deployment has 3 Brokers, each with 2 partitions and a replication factor of 2. For the Portworx data platform, we have a volume replication factor set to 3 for each volume.

Figure 2: We will describe a node failure event. In the diagram below, we have simulated a node failure and identified the Kubernetes worker node 2 has been taken out of production. Kubernetes worker node 2 contains a Kafka broker with 2 partitions and 2 Portworx persistent volumes.
Figure 3: Once a Kubernetes node failure is detected by the Kubernetes API server, the Kafka broker is deployed to a node with available resources. The Portworx platform has the ability to influence the placement of the broker on a Kubernetes node. Since Kubernetes node 4 already has a copy of the Kafka broker’s persistent volumes, Portworx makes sure that the Kafka broker is deployed on Kubernetes node 4. On the Portworx platform’s recovery side, the volumes are also created on other nodes in order to maintain the defined replication factor of 3.

Figure 4: Once the failed node is back in production use, the Kafka application does not get scheduled on the recovered node since the application is already at the desired number of Kafka brokers. On the Portworx platform side, replicated volumes are placed on the recovered Kubernetes node to maintain the desired replication factor for the volumes.


10 Common Software Architectural Patterns in a nutshell
18
Mar
2021

10 Common Software Architectural Patterns in a nutshell

Ever wondered how large enterprise scale systems are designed? Before major software development starts, we have to choose a suitable architecture that will provide us with the desired functionality and quality attributes. Hence, we should understand different architectures, before applying them to our design.

What is an Architectural Pattern?

According to Wikipedia,

An architectural pattern is a general, reusable solution to a commonly occurring problem in software architecture within a given context. Architectural patterns are similar to software design patterns but have a broader scope.

In this article, I will be briefly explaining the following 10 common architectural patterns with their usage, pros and cons.

  1. Layered pattern
  2. Client-server pattern
  3. Master-slave pattern
  4. Pipe-filter pattern
  5. Broker pattern
  6. Peer-to-peer pattern
  7. Event-bus pattern
  8. Model-view-controller pattern
  9. Blackboard pattern
  10. Interpreter pattern

1. Layered pattern

This pattern can be used to structure programs that can be decomposed into groups of subtasks, each of which is at a particular level of abstraction. Each layer provides services to the next higher layer.

The most commonly found 4 layers of a general information system are as follows.

  • Presentation layer (also known as UI layer)
  • Application layer (also known as service layer)
  • Business logic layer (also known as domain layer)
  • Data access layer (also known as persistence layer)

Usage

  • General desktop applications.
  • E-commerce web applications.
Layered pattern

2. Client-server pattern

This pattern consists of two parties; a server and multiple clients. The server component will provide services to multiple client components. Clients request services from the server and the server provides relevant services to those clients. Furthermore, the server continues to listen to client requests.

Usage

  • Online applications such as email, document sharing and banking.
Client-server pattern

3. Master-slave pattern

This pattern consists of two parties; master and slaves. The master component distributes the work among identical slave components, and computes a final result from the results which the slaves return.

Usage

  • In database replication, the master database is regarded as the authoritative source, and the slave databases are synchronized to it.
  • Peripherals connected to a bus in a computer system (master and slave drives).
Master-slave pattern

4. Pipe-filter pattern

This pattern can be used to structure systems which produce and process a stream of data. Each processing step is enclosed within a filter component. Data to be processed is passed through pipes. These pipes can be used for buffering or for synchronization purposes.

Usage

  • Compilers. The consecutive filters perform lexical analysis, parsing, semantic analysis, and code generation.
  • Workflows in bioinformatics.
Pipe-filter pattern
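
As a toy illustration in Java (not tied to any particular framework), a Stream pipeline behaves like a pipe-and-filter chain, with each stage consuming the stream produced by the previous one:

import java.util.List;
import java.util.Locale;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Each stage acts as a filter; the stream flowing between stages acts as the pipe.
List<String> tokens = Stream.of("SELECT  Name FROM  Users ")
        .map(String::trim)                               // stage 1: trim the raw line
        .flatMap(line -> Stream.of(line.split("\\s+")))  // stage 2: lexical split into tokens
        .map(t -> t.toLowerCase(Locale.ROOT))            // stage 3: normalize case
        .collect(Collectors.toList());                   // sink: [select, name, from, users]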

5. Broker pattern

This pattern is used to structure distributed systems with decoupled components. These components can interact with each other by remote service invocations. A broker component is responsible for the coordination of communication among components.

Servers publish their capabilities (services and characteristics) to a broker. Clients request a service from the broker, and the broker then redirects the client to a suitable service from its registry.

Usage

  • Message broker software such as Apache ActiveMQ, Apache Kafka, RabbitMQ and JBoss Messaging.
Broker pattern

6. Peer-to-peer pattern

In this pattern, individual components are known as peers. Peers may function both as a client, requesting services from other peers, and as a server, providing services to other peers. A peer may act as a client or as a server or as both, and it can change its role dynamically with time.

Usage

  • File-sharing networks such as Gnutella and G2.
  • Multimedia protocols such as P2PTV and PDTP.
Peer-to-peer pattern

7. Event-bus pattern

This pattern primarily deals with events and has 4 major components: event source, event listener, channel, and event bus. Sources publish messages to particular channels on an event bus. Listeners subscribe to particular channels. Listeners are notified of messages that are published to a channel to which they have subscribed before.

Usage

  • Android development
  • Notification services
Event-bus pattern
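
A minimal in-memory sketch of these components might look like this in Java (the names and channel semantics are illustrative, not any particular library's API):

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Sources publish to named channels on the bus; every listener subscribed
// to a channel is notified of each message published there.
class EventBus {
    private final Map<String, List<Consumer<Object>>> channels = new ConcurrentHashMap<>();

    void subscribe(String channel, Consumer<Object> listener) {
        channels.computeIfAbsent(channel, c -> new CopyOnWriteArrayList<>()).add(listener);
    }

    void publish(String channel, Object event) {
        channels.getOrDefault(channel, List.of()).forEach(listener -> listener.accept(event));
    }
}

A source then calls bus.publish("orders", event), and every listener registered via bus.subscribe("orders", handler) receives it.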

8. Model-view-controller pattern

This pattern, also known as MVC pattern, divides an interactive application into 3 parts:

  1. model — contains the core functionality and data
  2. view — displays the information to the user (more than one view may be defined)
  3. controller — handles the input from the user

This is done to separate internal representations of information from the ways information is presented to, and accepted from, the user. It decouples components and allows efficient code reuse.

Usage

  • Architecture for World Wide Web applications in major programming languages.
  • Web frameworks such as Django and Rails.
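
A deliberately tiny Python sketch of the three parts and the direction of their dependencies; a real framework would add event wiring and multiple views.

```python
class Model:                                # core functionality and data
    def __init__(self):
        self.items = []

class View:                                 # displays information to the user
    def render(self, model):
        print("TODO:", ", ".join(model.items) or "(empty)")

class Controller:                           # handles input from the user
    def __init__(self, model, view):
        self.model, self.view = model, view

    def add_item(self, text):
        self.model.items.append(text)
        self.view.render(self.model)        # re-render the presentation

app = Controller(Model(), View())
app.add_item("write report")
app.add_item("review pull request")
```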

9. Blackboard pattern

This pattern is useful for problems for which no deterministic solution strategies are known. The blackboard pattern consists of three main components.

  • blackboard — a structured global memory containing objects from the solution space
  • knowledge source — specialized modules with their own representation
  • control component — selects, configures and executes modules.

All of the components have access to the blackboard. Components may produce new data objects that are added to the blackboard. Components look for particular kinds of data on the blackboard and may find them by pattern matching against the existing knowledge sources.

Usage

  • Speech recognition
  • Vehicle identification and tracking
  • Protein structure identification
  • Sonar signals interpretation.
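
A toy Python sketch loosely modeled on speech recognition: knowledge sources watch the shared blackboard and contribute partial results, and the control component keeps selecting and executing them until a solution appears. The "analysis" is faked for illustration.

```python
blackboard = {"raw_signal": [0.1, 0.9, 0.4], "phonemes": None, "words": None}

def detect_phonemes(bb):                    # knowledge source 1
    if bb["raw_signal"] and bb["phonemes"] is None:
        bb["phonemes"] = ["h", "ay"]        # pretend signal analysis

def assemble_words(bb):                     # knowledge source 2
    if bb["phonemes"] and bb["words"] is None:
        bb["words"] = ["hi"]

def control(bb, sources):                   # control component
    while bb["words"] is None:
        for source in sources:
            source(bb)

control(blackboard, [detect_phonemes, assemble_words])
print(blackboard["words"])                  # ['hi']
```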

10. Interpreter pattern

This pattern is used for designing a component that interprets programs written in a dedicated language. It mainly specifies how to evaluate lines of a program, known as sentences or expressions, written in a particular language. The basic idea is to have a class for each symbol of the language.

Usage

  • Database query languages such as SQL.
  • Languages used to describe communication protocols.
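
A minimal Python sketch with one class per symbol of a tiny arithmetic language; evaluating the syntax tree interprets the sentence "2 + 3 * 4".

```python
class Number:
    def __init__(self, value):
        self.value = value

    def interpret(self):
        return self.value

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right

    def interpret(self):
        return self.left.interpret() + self.right.interpret()

class Multiply:
    def __init__(self, left, right):
        self.left, self.right = left, right

    def interpret(self):
        return self.left.interpret() * self.right.interpret()

expr = Add(Number(2), Multiply(Number(3), Number(4)))
print(expr.interpret())                     # 14
```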

Comparison of Architectural Patterns

In brief, the trade-offs of each architectural pattern are roughly as follows.

  • Layered: easy standardization and localized changes within a layer, but not universally applicable, and layers are sometimes skipped in practice.
  • Client-server: services are easy to model and access, but the server can become a bottleneck and a single point of failure.
  • Master-slave: accuracy through replicated computation, but slaves are isolated and only cleanly decomposable problems fit.
  • Pipe-filter: natural concurrency and easy reuse of filters, but throughput is limited by the slowest filter and data transformation adds overhead.
  • Broker: services can be added, removed, and relocated transparently, but standardized service descriptions are required.
  • Peer-to-peer: decentralized and robust, but quality of service and security are hard to guarantee.
  • Event-bus: new publishers and subscribers are easy to add, but the shared bus can become a scalability bottleneck.
  • Model-view-controller: multiple views of the same model, but added complexity and potentially many update events.
  • Blackboard: easy to extend with new knowledge sources, but changing the blackboard structure affects all of them.
  • Interpreter: highly dynamic behavior and end-user programmability, but comparatively slow execution.

Hope you found this article useful. I would love to hear your thoughts. 😇

Thanks for reading. 😊

Cheers! 😃

Clarified CQRS
28
Mar
2021

Clarified CQRS

After listening to how the community has interpreted Command-Query Responsibility Segregation, I think the time has come for some clarification. Some have been tying it together with Event Sourcing. Most have been overlaying their previous layered-architecture assumptions on it. Here I hope to identify CQRS itself, and describe where it can connect to other patterns.
This is quite a long post.
Why CQRS

Before describing the details of CQRS we need to understand the two main driving forces behind it: collaboration and staleness.
Collaboration refers to circumstances under which multiple actors will be using or modifying the same set of data – whether or not the intention of the actors is actually to collaborate with each other. There are often rules which indicate which user can perform which kind of modification, and modifications that may have been acceptable in one case may not be acceptable in others. We'll give some examples shortly. Actors can be human, like normal users, or automated, like software.

Staleness refers to the fact that in a collaborative environment, once data has been shown to a user, that same data may have been changed by another actor – it is stale. Almost any system which makes use of a cache is serving stale data – often for performance reasons. What this means is that we cannot entirely trust our users' decisions, as they could have been made based on out-of-date information.

Standard layered architectures don't explicitly deal with either of these issues. While putting everything in the same database may be one step in the direction of handling collaboration, staleness is usually exacerbated in those architectures by the use of caches as a performance-improving afterthought.

A picture for reference

I've given some talks about CQRS using a diagram to explain it. The boxes named AC are Autonomous Components. We'll describe what makes them autonomous when discussing commands. But before we go into the complicated parts, let's start with queries.

Queries

If the data we're going to be showing users is stale anyway, is it really necessary to go to the master database and get it from there? Why transform those 3rd normal form structures to domain objects if we just want data – not any rule-preserving behaviors? Why transform those domain objects to DTOs to transfer them across a wire, and who said that wire has to be exactly there? Why transform those DTOs to view model objects?

In short, it looks like we're doing a heck of a lot of unnecessary work based on the assumption that reusing code that has already been written will be easier than just solving the problem at hand. Let's try a different approach: how about we create an additional data store whose data can be a bit out of sync with the master database? The data we're showing the user is stale anyway, so why not reflect that in the data store itself? We'll come up with an approach later to keep this data store more or less in sync.

Now, what would be the correct structure for this data store? How about just like the view model? One table for each view. Then our client could simply SELECT * FROM MyViewTable (or possibly pass in an ID in a where clause), and bind the result to the screen. That would be just as simple as can be. You could wrap that up with a thin facade if you feel the need, or with stored procedures, or using AutoMapper, which can simply map from a data reader to your view model class. The thing is that the view model structures are already wire-friendly, so you don't need to transform them to anything else.

You could even consider taking that data store and putting it in your web tier. It's just as secure as an in-memory cache in your web tier. Give your web servers SELECT-only permissions on those tables and you should be fine.
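As a rough sketch of how thin this query side can be, here is a Python example over an in-memory SQLite table shaped exactly like a hypothetical CustomerListView view model; the table and column names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row
conn.execute("CREATE TABLE CustomerListView (id INTEGER, name TEXT, status TEXT)")
conn.execute("INSERT INTO CustomerListView VALUES (1, 'Ada', 'preferred')")

def customer_list_view(customer_id=None):
    # one table per view: no joins, no domain objects, no DTO mapping
    if customer_id is None:
        rows = conn.execute("SELECT * FROM CustomerListView")
    else:
        rows = conn.execute(
            "SELECT * FROM CustomerListView WHERE id = ?", (customer_id,))
    return [dict(row) for row in rows]

print(customer_list_view(1))   # ready to bind straight to the screen
```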
Query Data Storage

While you can use a regular database as your query data store, it isn't the only option. Consider that the query schema is in essence identical to your view model. You don't have any relationships between your various view model classes, so you shouldn't need any relationships between the tables in the query data store.

So do you actually need a relational database? The answer is no, but for all practical purposes and due to organizational inertia, it is probably your best choice (for now).

Scaling Queries

Since your queries are now being performed off of a separate data store than your master database, and there is no assumption that the data that's being served is 100% up to date, you can easily add more instances of these stores without worrying that they don't contain the exact same data. The same mechanism that updates one instance can be used for many instances, as we'll see later. This gives you cheap horizontal scaling for your queries. Also, since you're not doing nearly as much transformation, the latency per query goes down as well. Simple code is fast code.

Data modifications

Since our users are making decisions based on stale data, we need to be more discerning about which things we let through. Here's a scenario explaining why: let's say we have a customer service representative who is on the phone with a customer. This user is looking at the customer's details on the screen and wants to make them a 'preferred' customer, as well as modifying their address, changing their title from Ms to Mrs, changing their last name, and indicating that they're now married. What the user doesn't know is that after opening the screen, an event arrived from the billing department indicating that this same customer doesn't pay their bills – they're delinquent. At this point, our user submits their changes.

Should we accept their changes? Well, we should accept some of them, but not the change to 'preferred', since the customer is delinquent. But writing those kinds of checks is a pain – we need to do a diff on the data, infer what the changes mean, which ones are related to each other (name change, title change) and which are separate, identify which data to check against – not just compared to the data the user retrieved, but compared to the current state in the database – and then reject or accept.

Unfortunately for our users, we tend to reject the whole thing if any part of it is off. At that point, our users have to refresh their screen to get the up-to-date data, and retype all the previous changes, hoping that this time we won't yell at them because of an optimistic concurrency conflict. As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts.

If only there was some way for our users to provide us with the right level of granularity and intent when modifying data. That's what commands are all about.

Commands

A core element of CQRS is rethinking the design of the user interface to enable us to capture our users' intent, such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they've gotten married. Using an Excel-like UI for data changes doesn't capture intent, as we saw above.
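A sketch of what intent-capturing commands might look like as plain Python dataclasses; the command names are illustrative, not prescribed by CQRS.

```python
from dataclasses import dataclass

@dataclass
class MakeCustomerPreferred:
    customer_id: int

@dataclass
class CustomerMoved:
    customer_id: int
    street_id: int            # an ID resolved from the query store

@dataclass
class CustomerMarried:
    customer_id: int
    new_last_name: str        # title and last name change as one intent
```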
We could even consider allowing our users to submit a new command before they've received confirmation on the previous one. We could have a little widget on the side showing the user their pending commands, checking them off asynchronously as we receive confirmation from the server, or marking them with an X if they fail. The user could then double-click a failed task to find information about what happened.

Note that the client sends commands to the server – it doesn't publish them. Publishing is reserved for events which state a fact: that something has happened, and that the publisher has no concern about what receivers of that event do with it.

Commands and Validation

In thinking through what could make a command fail, one topic that comes up is validation. Validation is different from business rules in that it states a context-independent fact about a command. Either a command is valid, or it isn't. Business rules, on the other hand, are context dependent. In the example we saw before, the data our customer service rep submitted was valid; it was only the billing event arriving earlier which required the command to be rejected. Had that billing event not arrived, the data would have been accepted. Even though a command may be valid, there still may be reasons to reject it.

As such, validation can be performed on the client, checking that all fields required for that command are there, that number and date ranges are OK, that kind of thing. The server would still validate all commands that arrive, not trusting clients to do the validation.

Rethinking UIs and commands in light of validation

The client can make use of the query data store when validating commands. For example, before submitting a command that the customer has moved, we can check that the street name exists in the query data store. At that point, we may rethink the UI and have an auto-completing text box for the street name, thus ensuring that the street name we'll pass in the command will be valid. But why not take things a step further? Why not pass in the street ID instead of its name? Have the command represent the street not as a string, but as an ID (int, GUID, whatever). On the server side, the only reason that such a command would fail would be due to concurrency – that someone had deleted that street and that this hadn't been reflected in the query store yet; a fairly exceptional set of circumstances.
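A sketch of context-independent validation for the hypothetical CustomerMoved command from above: it checks the shape of the command itself, on the client and again on the server, and deliberately knows nothing about the current state of the business.

```python
from dataclasses import dataclass

@dataclass
class CustomerMoved:          # the same illustrative command as above
    customer_id: int
    street_id: int

def validate_customer_moved(cmd):
    errors = []
    if cmd.customer_id <= 0:
        errors.append("customer_id is required")
    if cmd.street_id <= 0:
        errors.append("street_id must reference a known street")
    return errors             # an empty list means the command is valid

assert validate_customer_moved(CustomerMoved(1, 42)) == []
# whether the customer may be made preferred (a business rule) is decided
# on the server against current state, not here
```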
Reasons valid commands fail and what to do about it

So we've got a well-behaved client that is sending valid commands, yet the server still decides to reject them. Often the circumstances for the rejection are related to other actors changing state relevant to the processing of that command. In the CRM example above, it is only because the billing event arrived first. But "first" could be a millisecond before our command. What if our user pressed the button a millisecond earlier? Should that actually change the business outcome? Shouldn't we expect our system to behave the same when observed from the outside?

So, if the billing event arrived second, shouldn't that revert preferred customers to regular ones? Not only that, but shouldn't the customer be notified of this, say by sending them an email? In which case, why not have this be the behavior for the case where the billing event arrives first? And if we've already got a notification model set up, do we really need to return an error to the customer service rep? I mean, it's not like they can do anything about it other than notifying the customer.

So, if we're not returning errors to the client (who is already sending us valid commands), maybe all we need to do on the client when sending a command is to tell the user "thank you, you will receive confirmation via email shortly". We don't even need the UI widget showing pending commands.

Commands and Autonomy

What we see is that in this model, commands don't need to be processed immediately – they can be queued. How fast they get processed is a question of Service-Level Agreement (SLA) and not architecturally significant. This is one of the things that makes the node that processes commands autonomous from a runtime perspective – we don't require an always-on connection to the client. Also, we shouldn't need to access the query store to process commands – any state that is needed should be managed by the autonomous component; that's part of the meaning of autonomy.

Another part is the issue of failed message processing, due to the database being down or hitting a deadlock. There is no reason that such errors should be returned to the client – we can just roll back and try again. When an administrator brings the database back up, all the messages waiting in the queue will be processed successfully and our users will receive confirmation. The system as a whole is quite a bit more robust to any error conditions. Also, since we don't have queries going through this database any more, the database itself is able to keep more rows/pages in memory which serve commands, improving performance. When both commands and queries were being served off of the same tables, the database server was always juggling rows between the two.

Autonomous Components

While in the picture above we see all commands going to the same AC, we could logically have each command processed by a different AC, each with its own queue. That would give us visibility into which queue was the longest, letting us see very easily which part of the system was the bottleneck – interesting for developers, but critical for system administrators. Since commands wait in queues, we can now add more processing nodes behind those queues (using the distributor with NServiceBus), so that we're only scaling the part of the system that's slow. No need to waste servers on any other requests.
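A toy sketch of this queued, autonomous processing using Python's standard queue module; a real system would use a durable message queue with transactional retry rather than this in-memory loop.

```python
import queue

commands = queue.Queue()
commands.put(("MakeCustomerPreferred", 1))

def process(command):
    name, customer_id = command
    # ...load state owned by this component, apply the change, commit...
    print(f"processed {name} for customer {customer_id}")

while not commands.empty():
    cmd = commands.get()
    try:
        process(cmd)
    except Exception:
        commands.put(cmd)     # roll back and try again later
```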
Service Layers

Our command processing objects in the various autonomous components actually make up our service layer. The reason you don't see this layer explicitly represented in CQRS is that it isn't really there, at least not as an identifiable logical collection of related objects. Here's why: in the layered architecture (AKA 3-tier) approach, there is no statement about dependencies between objects within a layer – or rather, they are implicitly allowed. However, when taking a command-oriented view on the service layer, what we see are objects handling different types of commands. Each command is independent of the others, so why should we allow the objects which handle them to depend on each other? Dependencies are things which should be avoided, unless there is good reason for them.

Keeping the command handling objects independent of each other will allow us to more easily version our system, one command at a time, without even needing to bring down the entire system, given that the new version is backwards compatible with the previous one. Therefore, keep each command handler in its own VS project, or possibly even in its own solution, thus guiding developers away from introducing dependencies in the name of reuse (it's a fallacy). If you do decide, as a deployment concern, that you want to put them all in the same process feeding off of the same queue, you can ILMerge those assemblies and host them together, but understand that you will be undoing much of the benefit of your autonomous components.

Whither the domain model?

Although in the diagram above you can see the domain model beside the command-processing autonomous components, it's actually an implementation detail. There is nothing that states that all commands must be processed by the same domain model. Arguably, you could have some commands be processed by transaction script, others using table module (AKA active record), as well as those using the domain model. Event sourcing is another possible implementation.

Another thing to understand about the domain model is that it now isn't used to serve queries. So the question is: why do you need to have so many relationships between entities in your domain model? (You may want to take a second to let that sink in.) Do we really need a collection of orders on the customer entity? In what command would we need to navigate that collection? In fact, what kind of command would need any one-to-many relationship? And if that's the case for one-to-many, many-to-many would definitely be out as well. I mean, most commands only contain one or two IDs in them anyway.

Any aggregate operations that may have been calculated by looping over child entities could be pre-calculated and stored as properties on the parent entity. Following this process across all the entities in our domain would result in isolated entities needing nothing more than a couple of properties for the IDs of their related entities – "children" holding the parent ID, like in databases. In this form, commands could be entirely processed by a single entity – voilà, an aggregate root that is a consistency boundary.

Persistence for command processing

Given that the database used for command processing is not used for querying, and that most (if not all) commands contain the IDs of the rows they're going to affect, do we really need to have a column for every single domain object property? What if we just serialized the domain entity and put it into a single column, and had another column containing the ID? This sounds quite similar to the key-value storage that is available from the various cloud providers.
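As a concrete sketch of that serialize-the-entity idea, here is a Python example using SQLite as a stand-in key-value store: one column for the ID, one for the serialized body, nothing else. The schema and entity shape are illustrative.

```python
import json
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, body TEXT)")

def save(customer):
    db.execute("REPLACE INTO customers VALUES (?, ?)",
               (customer["id"], json.dumps(customer)))

def load(customer_id):
    row = db.execute("SELECT body FROM customers WHERE id = ?",
                     (customer_id,)).fetchone()
    return json.loads(row[0]) if row else None

save({"id": 1, "name": "Ada", "preferred": False, "parent_ids": []})
print(load(1))
```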
In which case, would you really need an object-relational mapper to persist to this kind of storage? You could also pull out an additional property per piece of data where you'd want the "database" to enforce uniqueness. I'm not suggesting that you do this in all cases – rather, I'm just trying to get you to rethink some basic assumptions.

Let me reiterate

How you process the commands is an implementation detail of CQRS.

Keeping the query store in sync

After the command-processing autonomous component has decided to accept a command, modifying its persistent store as needed, it publishes an event notifying the world about it. This event is often the "past tense" of the command submitted: MakeCustomerPreferredCommand -> CustomerHasBeenMadePreferredEvent.

The publishing of the event is done transactionally together with the processing of the command and the changes to its database. That way, any kind of failure on commit will result in the event not being sent. This is something that should be handled by default by your message bus, and if you're using MSMQ as your underlying transport, it requires the use of transactional queues.

The autonomous component which processes those events and updates the query data store is fairly simple, translating from the event structure to the persistent view model structure. I suggest having an event handler per view model class (AKA per table).
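A sketch of one such event handler, assuming the hypothetical CustomerListView table from earlier and the CustomerHasBeenMadePreferred event named above.

```python
import sqlite3

view_db = sqlite3.connect(":memory:")
view_db.execute(
    "CREATE TABLE CustomerListView (id INTEGER PRIMARY KEY, status TEXT)")
view_db.execute("INSERT INTO CustomerListView VALUES (1, 'regular')")

def on_customer_has_been_made_preferred(event):
    # translate the event structure into the persistent view model structure
    view_db.execute("UPDATE CustomerListView SET status = ? WHERE id = ?",
                    ("preferred", event["customer_id"]))

on_customer_has_been_made_preferred({"customer_id": 1})
print(view_db.execute("SELECT * FROM CustomerListView").fetchall())
```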
Bounded Contexts

While CQRS touches on many pieces of software architecture, it is still not at the top of the food chain. CQRS, if used, is employed within a bounded context (DDD) or a business component (SOA) – a cohesive piece of the problem domain. The events published by one BC are subscribed to by other BCs, each updating their query and command data stores as needed. The UIs from the CQRS-style BCs can be "mashed up" in a single application, providing users a single composite view on all parts of the problem domain. Composite UI frameworks are very useful for these cases.

Summary

CQRS is about coming up with an appropriate architecture for multi-user collaborative applications. It explicitly takes into account factors like data staleness and volatility, and exploits those characteristics to create simpler and more scalable constructs. One cannot truly enjoy the benefits of CQRS without considering the user interface and making it capture user intent explicitly. When taking client-side validation into account, command structures may be somewhat adjusted. Thinking through the order in which commands and events are processed can lead to notification patterns which make returning errors unnecessary.

While the result of applying CQRS to a given project is a more maintainable and performant code base, this simplicity and scalability require understanding the detailed business requirements and are not the result of any technical "best practice". If anything, we can see a plethora of approaches to apparently similar problems being used together – data readers and domain models, one-way messaging and synchronous calls.
Microservices vs Monolithic Architecture: Which is Right for Startups?
14
May
2021

Microservices vs Monolithic Architecture: Which is Right for Startups?

Microservices architecture has become a hot topic in the backend software development world. The ecosystem has a profound impact not just on an enterprise's IT function but also on the digital transformation of an entire app business.

The microservices vs monolithic architecture debate marks a revolutionary shift in how an IT team approaches its software development cycle: do they go with the approach that brands like Google, Amazon, and Netflix chose, or with the simplicity that a startup at the development stage demands?

In this article, we are going to help startups answer which backend architecture they should choose when starting their journey.

Table of Contents:

  1. What is Microservices Architecture?
  2. What is Monolithic Architecture?
  3. Microservices vs Monolithic Architecture: Advantages and Disadvantages
  4. How to Choose Between Monolithic and Microservice Architecture?
  5. Migrating from a Monolithic Architecture to a Microservice Ecosystem
  6. Conclusion

What is Microservices Architecture?

Microservices architecture is a mix of small, autonomous services, where every service is self-contained and implements a single business capability. It is a distinct approach to developing software systems that focuses on building single-function modules with clearly defined operations and interfaces. The approach has become a popular trend in the past several years as more and more enterprises look to become Agile and make a shift towards DevOps.

Characteristics of microservices architecture that make it one of the best enterprise architectures:

  • The services are independent, small, and loosely coupled
  • Encapsulates a business or customer scenario
  • Every service has its own codebase
  • Services can be independently deployed 
  • Services interact with each other using APIs

With the question of what microservices architecture is now answered, let us move on to look into what monolithic architecture is.

What is Monolithic Architecture? 

A monolithic application has a single codebase with multiple modules. The modules, in turn, are divided into either technical or business features. The architecture comes with a single build system that builds the complete application, and a single deployable or executable binary.

Now that we have looked into what monolithic and microservices architectures are, let us look into the advantages and disadvantages each backend system offers, to understand what separates them from each other.

Microservices vs Monolithic Architecture: Advantages and Disadvantages


Advantages of Monolithic Architecture

A. Zero Deployment Dependencies

An organized and well-documented monolith architecture means backend developers don't have to worry about which version would be compatible with which service, how to find which services are present and what they do, and so on.

B. Error Tracing

One of the biggest benefits of a monolith is that all transactions are logged in one place, making error tracing a breeze.

C. No Silos

One factor that works in favour of the monolith in the microservices vs monolithic architecture debate is the absence of silos. It is very easy for developers to work on multiple parts of the app, because all parts are structured similarly and use the same tools, so no prior distributed computing knowledge is needed.

D. Cross-Cutting Concerns:

A monolith handles cross-cutting concerns in one place; the time you would otherwise spend carving out services that don't bleed into each other's responsibilities can instead be spent developing things that help the customers.

E. Shared Code:

Code can be shared directly within the application; there are no shared libraries to maintain, and the complete scope a service needs to operate does not have to be sent along with each request.

Limitations of Monolithic Architecture

A. Lack of Flexibility:

Monolithic architectures are not flexible. You cannot use different technologies once you have committed to a monolith. The technology stack decided at the beginning has to be followed throughout the project, making upgrades next to impossible.

B. Development Speed:

Microservices are known for speeding up the development process when you compare microservices architecture vs monolithic architecture. Development is very slow in a monolithic architecture: it can be very difficult for team members to understand and modify the code of a large monolithic application, and as the size of the codebase increases, the IDE gets overloaded and slows down. All of this slows app development.

C. Difficult Scalability:

Scaling monolithic applications becomes difficult as they grow large. While developers can run new instances of the monolith behind a load balancer to distribute traffic, the monolithic architecture cannot scale individual components with increasing load.

Benefits of Microservices Architecture

  1. The biggest factor in favor of microservices in the difference between microservices and monolithic architecture is that microservices handle complexity by decomposing the app into a manageable set of services that are faster to develop and easier to understand and maintain.
  2. It enables independent development of each service by a team focused on that service, which makes it the ideal choice for businesses that work with an Agile development approach.
  3. It lowers the barrier to adopting newer technologies, as developers have the freedom to choose whatever technology makes sense for their project.
  4. It makes it possible for every microservice to be deployed individually, so continuous deployment of complex applications becomes possible.

Drawbacks of Microservices Architecture

  1. Microservices add complexity to a project simply because a microservices application is a distributed system. To handle this complexity, developers have to select and implement an inter-process communication mechanism based on either RPC or messaging.
  2. They work with a partitioned database architecture. Business transactions which update multiple business entities inside a microservices application also have to update the different databases owned by multiple services.
  3. It is a lot more difficult to implement changes which span multiple services. In a monolithic architecture, an app development agency only has to change the corresponding modules, integrate the changes, and deploy them all in one go.
  4. Deployment of a microservices application is very complex: it consists of a number of services, each of which has multiple runtime instances. In contrast, a monolithic application is deployed on a set of identical servers behind a load balancer.

Benefits and limitations are present in both monolithic and microservices architecture, which makes it extremely difficult for a startup to gauge which backend architecture to adopt on its journey.

Let us help you. 

How to Choose Between Monolithic and Microservice Architecture? 

The fact that both approaches come with their own sets of pros and cons is a sign that there is no one-size-fits-all methodology when it comes to choosing a backend architecture. But there are a few questions that can help you decide which direction to head in.

Are You Working in a Familiar Sector?

When you work in an industry where you know the sector inside out and you know the demands and needs of the customers, it is easier to enter the system with a definite structure. The same is not possible for a business that is very new to the industry, where the looming doubts are far greater.

So, microservices architecture is best suited to app development in cases where you know the industry inside out. If that is not the case, go with the monolithic approach.

How Prepared is Your Team?

Is your team familiar with the best practices for implementing microservices? Or are they more comfortable with the simplicity of a monolith? Will your team and your business offering expand over time? You will have to answer all these questions to gauge whether the people who have to work on a project are ready to migrate.

What is Your Infrastructure Like?

Everything from development to deployment of a microservices application requires cloud-based infrastructure; you will have to make use of services like Amazon AWS or Google Cloud for deploying even tiny elements. While cloud technologies make the process easier, the idea of setting up a database server for every other microservice and then scaling out is something a startup entrepreneur might not be comfortable with.

Have you Evaluated the Business Risk?

More often than not, businesses side with microservices in the microservices vs monolithic architecture debate, thinking it is the right thing for their business. What they forget to factor in is the chance that their application might not become as scalable as they optimistically expect, and that they might have to bear the risks of adding a highly scalable system to their process.

Here is a short list of pointers to help you decide between microservices and monolithic architecture for your software development process:

When to Choose Monolithic Architecture?

  • When your team is at a founding stage
  • When you are developing a proof of concept
  • When you have no experience in microservices
  • When you have experience developing on solid frameworks, such as Ruby on Rails, Laravel, etc.

When to Choose Microservices Architecture?

  • You need independent, quick delivery of services
  • You need to extend your team
  • Your platform needs to be extremely efficient
  • You don’t have a tight deadline to work with

Migrating from a Monolithic Architecture to a Microservice Ecosystem 


The right approach for migrating from a monolithic architecture to a microservices ecosystem is to divide the monolith's processes and turn them into microservices. The result is a two-step plan:

  1. Identify the existing monolithic elements which can be decoupled
  2. Validate that new functionality can be developed as a microservice

One of the main challenges that can emerge when initiating the migration from a monolithic architecture to a microservices architecture is designing and creating an integration between the existing system and the new microservices. A solution is to add glue code which allows them to connect, such as an API.

An API gateway can also help combine multiple individual service calls into one coarse-grained service, which in turn helps reduce the cost of integrating with the monolithic system.
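
A sketch of such an aggregating gateway using Flask: one coarse-grained endpoint fanning out to two fine-grained services. The downstream service URLs are hypothetical placeholders.

```python
import requests
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/customers/<int:cid>/overview")
def customer_overview(cid):
    # combine two fine-grained calls into one coarse-grained response
    profile = requests.get(f"http://profile-svc/customers/{cid}").json()
    orders = requests.get(f"http://order-svc/customers/{cid}/orders").json()
    return jsonify({"profile": profile, "orders": orders})

if __name__ == "__main__":
    app.run(port=8080)
```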

Conclusion

When you compare microservices architecture vs monolithic architecture, you will find the former to be a hot trend; every entrepreneur wants to say that their app is based on this architecture. But the temptation to focus only on the problems of monolithic architecture and abandon it should be measured against the actual value microservices architecture would deliver.

The right approach is to develop new apps using a monolithic approach and move to microservices only when the move is justified by proper metrics, such as performance monitoring.

For established businesses, microservices tend to be avenues for continuous deployment, team-based development, and agility in shifting to new technologies. But for startups, or companies that are just starting out, adopting microservices can very negatively impact a software project's success.

FAQs About Microservices vs Monolithic Architecture

Q. What is the Purpose of Microservices?

Microservices architecture allows you to divide an application into separate, independent services, each managed by a different group in the software development agency. This way, responsibility gets divided and the application is developed and deployed at a much faster rate.

Q. Does moving from a monolith to a Microservice architecture help with resilience?

Yes. Since microservices enable developers to handle multiple parts of the project at the same time in a streamlined manner, it becomes much easier to identify issues and solve them in time. This is next to impossible with a monolithic architecture, where adding new technologies or changing the process mid-project is practically impossible.

Q. What is the difference between Microservices vs Monolithic approach?

The difference between microservices and monolithic architecture is a difference of approach. While a monolithic architecture has a single build system, microservices come with multiple build systems, which makes development and deployment of an application faster.

Q. When To Choose Microservices Over Monolithic Architecture

The choice of going with microservices over monolithic architecture can be decided upon these factors:

  • When you require independent delivery of services
  • When you have to extend the team
  • When you have to build an efficient platform
  • When you do not have a tight deadline