Category: System Architecture

A system architecture is the conceptual model that defines the structure, behavior, and other views of a system. An architecture description is a formal description and representation of a system, organized in a way that supports reasoning about its structures and behaviors.

Event-Driven Architecture
14 May 2021

What is an Event-Driven Architecture?

An event is described as an alteration in the hardware or the software. An Event-Driven Architecture has two parts to the working equation: an event producer and an event consumer. Let us understand how this application architecture works: 

  • It all begins with the event producer, which detects the emergence of an event and labels it as a message. 
  • The event is then broadcast to an event consumer. 
  • The message travels through the respective channels and is interpreted by a centralized event-processing platform. 
  • The platform is programmed to decide on the follow-up action to be taken for the event. 
  • Once it matches the event to the corresponding response within its directory, it forwards the response to the respective consumer. 

This last step determines the final outcome of the event that has been generated. The clearest example of this pattern can be found on a web page. 

The moment you click a button, the browser interprets the event and surfaces the programmed action, such as video playback, matching the input with the right output. In contrast to the layered architecture, where the code must flow top-down and filter through all the layers, Event-Driven Architectures deploy modules that are activated only when an event connected to them is generated.
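To make the producer/consumer flow above concrete, here is a minimal in-process sketch in TypeScript. It is an illustration only, not taken from the article: the event names, payload, and handler are hypothetical, and a real system would typically put a message broker between the two sides.

```typescript
// Minimal in-process event bus: producers publish named events,
// consumers register handlers that run only when a matching event is generated.
type Handler = (payload: unknown) => void;

class EventBus {
  private handlers = new Map<string, Handler[]>();

  // Consumer side: register interest in an event type.
  subscribe(eventType: string, handler: Handler): void {
    const list = this.handlers.get(eventType) ?? [];
    list.push(handler);
    this.handlers.set(eventType, list);
  }

  // Producer side: announce that something happened.
  publish(eventType: string, payload: unknown): void {
    for (const handler of this.handlers.get(eventType) ?? []) {
      handler(payload);
    }
  }
}

// Usage: the playback module is activated only when its event arrives.
const bus = new EventBus();
bus.subscribe("button.clicked", (payload) => {
  console.log("starting video playback for", payload);
});
bus.publish("button.clicked", { videoId: "intro-101" }); // hypothetical payload
```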


Benefits

  • Among the different types of architecture in software engineering, the Event-Driven Architecture is suited to applications that have a tendency to scale. It improves the responsiveness of the system, eventually leading to better business outcomes. 
  • This system is very adaptable to real-time changes and is suited to asynchronous systems that run on asymmetric data flow. 
  • They play a huge role in defining how IoT works. They are widely applicable across networks and applications where devices that are part of the Internet of Things (IoT) must exchange information between event producers and consumers in real time.

Potential Drawbacks 

  • Developers might face bottlenecks while managing error handling, particularly in cases where multiple modules are responsible for a single event. 
  • You should use a recommended software architecture tool to back up the central event-processing platform, so that the failure of a single module does not result in system-wide collapse. 
  • The operational speed of the entire system could be slowed down if the processing platform is programmed to buffer messages as and when they come. 

Ideal For

Event-Driven Architecture can be deployed for applications that rely on instant data communication and that scale on demand, as in the case of website tracking or stream processing. 

Microkernel Architecture
14 May 2021

What is Microkernel Architecture?

Many third-party applications, in keeping with software architecture best practices, make software packages available as downloadable plug-ins or versions. The Microkernel Architecture is best suited to this type of application, which is why it is also called the plug-in architecture pattern. 

With this style, enterprise application development services can add pluggable features to an existing version of the software, providing for extensibility. The architecture is made up of two components: one part is dedicated to the core system and the other to the plug-ins. Minimalism is followed while designing the core of the architecture, which stores just the right proportion of components to render the system effective. 
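As a rough sketch of the idea (the plug-in interface and names below are hypothetical, not from the article), the core can expose a small registry that plug-ins attach to, so optional capabilities live entirely outside the core:

```typescript
// Hypothetical plug-in contract: every plug-in exposes a name and one operation.
interface Plugin {
  name: string;
  execute(input: string): string;
}

// The core system stays minimal and delegates optional behavior to registered plug-ins.
class CoreSystem {
  private plugins = new Map<string, Plugin>();

  register(plugin: Plugin): void {
    this.plugins.set(plugin.name, plugin);
  }

  process(pluginName: string, input: string): string {
    const plugin = this.plugins.get(pluginName);
    if (!plugin) {
      return `no plug-in installed for "${pluginName}"; falling back to core behavior`;
    }
    return plugin.execute(input);
  }
}

// Usage: capabilities are added without modifying the core.
const core = new CoreSystem();
core.register({ name: "spell-check", execute: (text) => `checked: ${text}` });
console.log(core.process("spell-check", "architecure")); // handled by the plug-in
console.log(core.process("pdf-export", "report"));       // no plug-in installed yet
```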


The most relatable example of the Microkernel Architecture would be any internet browser. You download a version of the application (essentially the core software) and, depending on the missing functionality, download and add plug-ins. Enterprise software development services rely on this pattern for designing large-scale, complex applications as well. An example of such a business application could be software for processing insurance claims. 

Benefits

  • This design has proven its worth as a highly flexible one. Plug-in capabilities make it possible to react to operational changes in near real time. Such changes can be dealt with in isolation, with the core system retaining its stable state for the most part, therefore requiring fewer development updates over time.
  • An enterprise software development company could face a downtime issue at the time of deployment, but this can be minimized or altogether avoided by adding plug-in modules to the core dynamically. 
  • A software development company could test plug-in prototypes in isolation and check for performance issues without affecting the core of the architecture. 
  • Microkernel Architecture is most appreciated for maintaining high-performance applications as the software can be customized to include only those capabilities that are needed the most. 

Potential Drawbacks 

  • Apps such as those conceptualized by enterprise mobile app development services have a non-negotiable need to scale. However, the Microkernel Architecture is grounded in the design of the product and is naturally suited to apps that are smaller in size. 
  • An enterprise app development company could find the Microkernel pattern rather hard to execute due to the vast number of plug-ins compatible with the core. This calls for drawing up governance contracts, updating plug-in registries, and so many other formalities that the implementation becomes a challenge. 

Ideal For

Microkernel Architecture is best suited for workflow applications in addition to those that need job scheduling. As pointed out above, like a web browser, any application that you want to release with just the right amount of functionality, while leaving room that can be filled in by installing additional plug-ins, can be built with this design pattern. 

3 Layer Automated Testing
26 Mar 2021


Birth of Quality Engineering

Quality Assurance practice was relatively simple when we built the Monolith systems with traditional waterfall development models. The Quality Assurance (QA) teams would start the GUI layer’s validation process after waiting for months for the product development. To enhance the testing process, we would have to spend a lot of effort and $$$ (commercial tools) in automating the GUI via various tools like Micro Focus UFT, Selenium, TestComplete, Coded UI, Ranorex, etc., and most often these tests are complex to maintain and scale. Thus, most QA teams would have to restrict their automated tests to smoke and partial regression, ending in inadequate test coverage.

With modern technology, the new-era tech companies, including Varo, have widely adopted Microservices-based architecture combined with the Agile/DevOps development model. This opens up a lot of opportunities for Quality Assurance practice, and in my opinion, this was the origin of the transformation from Quality Assurance to “Quality Engineering.”

The Common Pitfall

While automated testing gives us a massive benefit with the three R’s (Repeatable → Run any number of times, Reliable → Run with confidence, Reusable → Develop, and share), it also comes with maintenance costs. I like to quote Grady Booch: “A fool with a tool is still a fool.” Targeting inappropriate areas would not provide us the desired benefit. We should consider several factors to choose the right candidate for automation. A few to name are the lifespan of the product, the volatility of the requirements, the complexity of the tests, business criticality, and technical feasibility.

It’s well known that the cost of a bug increases toward the right of the software lifecycle. So it is necessary to implement several walls of defense to arrest these software bugs as early as possible (the Shift-Left Testing paradigm). By implementing an agile development model with a fail-fast mindset, we have taken care of our first wall of defense. But to move faster in this shorter development cycle, we must build robust automated test suites to take care of the rolled-out features and make room for testing the new features.

The 3 Layer Architecture

The Varo architecture comprises three essential layers.

  • The Frontend layer (Web, iOS, Mobile apps) — User experience
  • The Orchestration layer (GraphQL) — Makes multiple microservice calls and returns the decision and data to the frontend apps
  • The Microservice layer (gRPC, Kafka, Postgres) — Core business layer

While working through the microservice architecture for testing, several questions were posed.

  • Which layer to test?
  • What to test in these layers?
  • Does testing the frontend automatically validate the downstream services?
  • Does testing multiple layers introduce redundancies?

We will try to answer these by analyzing the table below, which provides an overview of what these layers mean for quality engineering.

Drawing on the table, we have loosely adopted the Testing Pyramid pattern to invest in automated testing as:

  • Full feature/ functional validations on Microservices layer
  • Business process validations on Orchestration layer
  • E2E validations on Frontend layer

The diagram below best represents our test strategy for each layer.

Note: Though we have automated white-box tests such as unit tests and integration tests, we exclude those from this discussion.

Use Case

Let’s take the example below to best illustrate how this pyramid works.

The user is presented with a form to submit. The form accepts three inputs — Field A takes the user identifier, Field B takes a drop-down value, and Field C accepts an integer value (within a defined range).

Once the user clicks the Submit button, the GraphQL API calls Microservice A to determine the customer type. It then calls Microservice B to validate the acceptable range of values for Field C (which depends on the values from Field A and Field B).
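A minimal sketch of what that orchestration step might look like is shown below. The service clients, field names, and range rule are assumptions made for illustration; the actual Varo GraphQL schema is not described in the article.

```typescript
// Hypothetical clients for the two downstream services.
interface CustomerService {           // "Microservice A"
  getCustomerType(userId: string): Promise<"regular" | "premium">;
}
interface RangeService {              // "Microservice B"
  getAllowedRange(customerType: string, fieldB: string): Promise<{ min: number; max: number }>;
}

// Orchestration-layer handler: fans out to both microservices and returns a decision.
async function submitForm(
  input: { fieldA: string; fieldB: string; fieldC: number },
  customers: CustomerService,
  ranges: RangeService,
): Promise<{ accepted: boolean; reason?: string }> {
  const customerType = await customers.getCustomerType(input.fieldA);
  const range = await ranges.getAllowedRange(customerType, input.fieldB);

  if (input.fieldC < range.min || input.fieldC > range.max) {
    return { accepted: false, reason: `Field C must be between ${range.min} and ${range.max}` };
  }
  return { accepted: true };
}
```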

Validations:

1. Feature Validations

✓ Positive behavior (Smoke, Functional, System Integration)

  • Validating behavior with a valid set of data combinations
  • Validating database
  • Validating integration — Impact on upstream/ downstream systems

✓ Negative behavior

  • Validating with invalid data (for example: Invalid authorization, Disqualified data)

2. Fluent Validations

✓ Evaluating field definitions — such as

  • Mandatory field (not empty/not null)
  • Invalid data types (for example: Int → negative value, String → Junk values with special characters or UUID → Invalid UUID formats)

Let’s look at how the “feature validations” can be written for the above use case by applying one of the test case authoring techniques — Boundary Value Analysis.
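As a sketch of Boundary Value Analysis for Field C, assuming a hypothetical allowed range of 1 to 100 (the real range is not given in the article), a small table-driven check might look like this:

```typescript
// Boundary Value Analysis for Field C, assuming an allowed range of 1..100.
const FIELD_C_MIN = 1;
const FIELD_C_MAX = 100;

const isFieldCValid = (value: number): boolean =>
  Number.isInteger(value) && value >= FIELD_C_MIN && value <= FIELD_C_MAX;

// Pick values at and around both boundaries: below min, at min, just above min,
// just below max, at max, above max.
const cases: Array<{ value: number; expected: boolean }> = [
  { value: FIELD_C_MIN - 1, expected: false },
  { value: FIELD_C_MIN,     expected: true  },
  { value: FIELD_C_MIN + 1, expected: true  },
  { value: FIELD_C_MAX - 1, expected: true  },
  { value: FIELD_C_MAX,     expected: true  },
  { value: FIELD_C_MAX + 1, expected: false },
];

for (const { value, expected } of cases) {
  const actual = isFieldCValid(value);
  console.log(`Field C = ${value}: ${actual === expected ? "PASS" : "FAIL"}`);
}
```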

To test the scenario above, it would require 54 different combinations of feature validations, and below is the rationale to pick the right candidate for each layer.

Microservice Layer: This is the layer delivered first, enabling us to invest in automated testing as early as possible (Shift-Left). And the scope for our automation would be 100% of all the above scenarios.

Orchestration Layer: This layer translates the information from the microservice to frontend layers; we try to select at least two tests (1 positive & 1 negative) for each scenario. The whole objective is to ensure the integration is working as expected.

Frontend Layer: In this layer, we focus on E2E validations, which means these validations would be a part of the complete user journey. But we would ensure that we have at least one or more positive and negative scenarios embedded in those E2E tests. Business priority (frequently used data by the real-time users) helps us to select the best scenario for our E2E validations.

Conclusion

There are always going to be sets of redundant tests across these layers. But that is the trade-off we had to take to ensure that we have correct quality gates on each of these layers. The pros of this approach are that we achieve safe and faster deployments to Production by enabling quicker testing cycles, better test coverage, and risk-free decisions. In addition, having these functional test suites spread across the layers helps us to isolate the failures in respective areas, thus saving us time to troubleshoot an issue.

However, one size does not always fit all. The decision has to be made based on an understanding of how the software architecture is built and the supporting infrastructure available to facilitate the testing efforts. One of the critical success factors for this implementation is building a good quality engineering team with the right skills and proper tools. But that is another story — Coming soon: “Quality Engineering: Redefined.”

Log Aggregation For Microservice Architecture
08 Mar 2021


In a microservices architecture, each service instance generates its own logs, which makes troubleshooting across services difficult. To solve such problems, a preferred approach is to take advantage of a centralized logging service that aggregates logs from each service instance.

Users can search through these logs from one centralized spot and configure alerts when certain messages appear.

Standard tools are available and widely used by various enterprises.

The ELK Stack is the most frequently used solution: the logging daemon, Logstash, collects and aggregates the logs, which are indexed by Elasticsearch and can be searched via a Kibana dashboard.
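As a minimal illustration (not part of the original post), each service instance can write structured JSON logs to stdout so that a shipper such as Logstash can collect, parse, and forward them for indexing; the field names used here are just a common convention:

```typescript
// One JSON object per line on stdout; a log shipper (e.g. Logstash) can then
// collect, parse, and forward these entries for indexing. Field names are illustrative.
function log(service: string, level: "info" | "warn" | "error", message: string, traceId?: string): void {
  console.log(JSON.stringify({
    timestamp: new Date().toISOString(),
    service,
    level,
    message,
    traceId,
  }));
}

log("order-service", "info", "order created", "trace-123");
log("payment-service", "error", "card declined", "trace-123");
// Searching the centralized store by traceId ties both entries to the same request.
```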

Microservices Architecture
14 May 2021

Microservices Architecture

Microservices are defined as self-regulating and independent codebases that can be written and maintained even by a small team of developers. Microservices Architecture consists of such loosely coupled services, with each service responsible for the execution of its associated business logic. 

The services are separated from each other based on the nature of their domains and belong to a mini-microservice pool. Enterprise mobile app developers leverage the capabilities of this architecture especially for complex applications. 

Microservices Architecture allows developers to release new versions of the software rapidly thanks to sophisticated automation of software building, testing, and deployment – something that acts as a prime differentiation point between Microservices and Monolithic architecture.
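To make the idea of a small, independently deployable unit concrete, here is a minimal sketch of a single service using Node's built-in HTTP module; the endpoint, port, and business logic are illustrative assumptions, not part of the article:

```typescript
// A single, self-contained service: it owns one piece of business logic and
// exposes it over HTTP, so it can be built, deployed, and scaled independently.
import { createServer } from "node:http";

const PORT = 8080; // illustrative port

const server = createServer((req, res) => {
  if (req.url === "/health") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ status: "ok" }));
    return;
  }
  if (req.url?.startsWith("/orders/")) {
    const orderId = req.url.split("/")[2];
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ orderId, status: "created" })); // stubbed business logic
    return;
  }
  res.writeHead(404).end();
});

server.listen(PORT, () => console.log(`order-service listening on ${PORT}`));
```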


Benefits

  • Since the services are divided into pools, the architecture design pattern makes the system highly fault-tolerant. In other words, the whole software won’t collapse even if some microservices cease to function. 
  • An enterprise mobile app development company working on such an architecture for clients can deploy multiple programming languages to build different microservices for their specific purpose. Therefore the technology stack can be kept updated with the latest upgrades in computing. 
  • This architecture is a perfect fit for applications that need to scale. Since the services are already independent of each other, they can scale individually rather than overloading the entire system with the need to expand. 
  • Services can be integrated into any application depending upon the scope of work. 

Potential Drawbacks 

  • Since each service is unique in its contribution to the whole codebase, it could be challenging for an enterprise mobile application development company to interlink and operate so many distinct services seamlessly. 
  • Developers must define a standard protocol for all services to adhere to. It is important to do so, as the decentralized approach towards coding microservices in multiple languages can pose serious issues while debugging. 
  • Each microservice, with its limited environment, is responsible for maintaining the integrity of its data. It is up to the architects of such a system to come up with a universally consistent data integrity protocol, wherever possible. 
  • You definitely need best-of-breed professionals to design such a system for you, as the technology stack keeps changing. 

Ideal For

Use Microservices Architecture for apps in which a specific segment will be used more heavily than the others and would need sporadic bursts of scaling. Instead of a standalone application, you may also deploy this for a service that provides functionality to other applications of the system. 

Micro-frontends: The path to a scalable future — part 1
26 Mar 2021


Introduction

We have all heard the term microservices and perhaps have worked with them in the backend world. Before microservices, there were monolith applications. Back then, team and application growth was leading monolithic applications into a non-scalable dead-end. The codebase was growing, and technologies were getting older. All of this has made migrations and upgrades incredibly painful and frustrating.

An increase in project growth meant that multiple teams would unwittingly create further bottlenecks and inter-team dependencies. Deployments and releases were being micro-managed, but the growing amount of unmanaged technical debt would cause problems further down the line.

The same problems were present in the front-end world too. Again, you can pick the fanciest framework/library of the time and build your application with the best intentions. However, several years later, your tiny little application will inevitably become huge and force you to increase the size of your team. Eventually, this work will be split between multiple teams that will find themselves spending hours defining a release management schedule.

It’s around this time that you will hear about another cool library trending in the industry. Exploring the complexities of how to migrate to a new library while maintaining the old one typically leads to paranoia and eventually makes you give up on the idea because it’s just too much hassle. But you are also faced with a shrinking number of engineers who are willing to work with deprecated technologies. The chosen path has led you to a non-scalable dead-end.

A path to scalability

Try to imagine what the outcome would have looked like had we bravely decided to choose a different path. First, we should know that everything comes with a price, and with great power comes great responsibility. That power is the implementation of micro-frontends, which is

an architectural pattern to develop and maintain a web application as a composition of small front-end applications

With back-end microservices, we had completely separate services with their separate DB and API. Microservices can be divided by business logic. For example, in an e-commerce app, we can have different microservices for a product catalog, user’s shopping cart, orders, etc. Without micro-frontend applications, it would have been a monolithic front-end application communicating with different back-end microservices (Fig. 1).

Figure 1. Application with monolith front-end and back-end microservices.

By splitting up the front-end into micro-frontends, we can align front-end and back-end architecture approaches and have a more robust and elegant full-stack architecture (Fig. 2).

Figure 2. End-to-end front-end and back-end microservices.

With this approach, we can reach several significant benefits:

  • Team and technology autonomy: each team would have its own mission on the product and could select its own technology stack;
  • Small, maintainable, and de-coupled codebases: each team’s codebase would remain small and isolated from others’ codebases;
  • Independent release management: each team would be able to independently release its own part of the application saving a lot of time on inter-team communication and release schedule;
  • Painless upgrades and migrations: having small codebases and being free of inter-team dependency would lead to painless and independent upgrades of application technologies as well as migrations from the old one to a new one;

Alongside the mentioned benefits, there are also challenges attached to implementing the micro-frontends architecture. Some issues and concerns will still be shared between teams, such as web performance, a common design system, etc.

Even though the idea behind micro-services and micro-frontends is very similar, the implementation challenges are slightly different, and we will discuss them in the next section.

Micro-frontends integration concepts

There are three main concepts and problems to be taken into consideration when implementing the micro-frontend architecture.

Routing and page transition

This is one of the most important concepts. Regardless of how many micro-frontends are implemented in our application, it is crucial to handle transitions between micro-frontends (Fig. 3) properly. That is, users must never notice a switch from one micro-frontend application to another when navigating the application.

Figure 3. The transition from one micro-frontend to another.

Page transition, which is also known as routing, can be either server or client-side. Let’s briefly go over each of them.

  • Server routing: on each transition (usually triggered by a click), there is a request to a server to obtain and serve the whole HTML document to the user. This method eventually leads to a full page reload, which is also known as a hard transition. It has several pros and cons and was a traditional way to handle transitions before the client-side routing evolution.
  • Client routing: on each transition, only some part of the application is reloaded and operated by JavaScript in the browser on the client side. Compared to server routing, there is no full page reload; this is also known as a soft transition. It brings a much better user experience and is implemented in almost all modern web applications.

There is an excellent article that covers server and client routing in more detail. With micro-frontends, the problem has various solutions, ranging from simple link transitions (the way routing was handled in the 90’s and early 2000’s) to multi-layered client-side routing (a very modern way of having one top-level router for the whole application and several low-level routers per micro-frontend). They differ in their implementation complexity, so the selection should be made based on the needs of our application.
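As a rough sketch of the top-level layer of such a multi-layered setup (the prefixes and mount functions are hypothetical, and real implementations usually rely on a framework or a dedicated router), the shell application can map path prefixes to the micro-frontend that owns them:

```typescript
// Top-level router of the shell application: maps a path prefix to the
// micro-frontend that owns it. Each micro-frontend handles its own sub-routes.
type MountFn = (container: HTMLElement, subPath: string) => void;

const microFrontends: Array<{ prefix: string; mount: MountFn }> = [
  { prefix: "/catalog", mount: (el, path) => { el.textContent = `catalog app at ${path}`; } },
  { prefix: "/cart",    mount: (el, path) => { el.textContent = `cart app at ${path}`; } },
];

function route(pathname: string): void {
  const container = document.getElementById("app");
  if (!container) return;
  const target = microFrontends.find((mf) => pathname.startsWith(mf.prefix));
  if (target) {
    target.mount(container, pathname.slice(target.prefix.length));
  } else {
    container.textContent = "not found";
  }
}

// Soft transitions: react to history changes instead of reloading the whole page.
window.addEventListener("popstate", () => route(window.location.pathname));
route(window.location.pathname);
```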

Composition

It might be necessary to have a page owned by one team that includes fragments from other teams. This means we should consider the composition of multiple micro-frontends within each other (Fig. 4). For example, we could have an e-commerce web application with a product detail page owned by one team. We should also show the shopping cart content, which is owned by another team.

Figure 4. Composition of multiple micro-frontends.

Composition is a rendering concern which, similarly to routing, can be client and/or server-side:

  • Server-side rendering (SSR): with SSR or Universal rendering, our page document is being constructed and rendered on the server and served to the user. It is beneficial on the initial page load of an application as it is much more performant than rendering it on the client-side.
  • Client-side rendering (CSR): With CSR, our application page content is being constructed, rendered, and loaded on the client-side, operated by JavaScript. This type of rendering is used with all modern libraries and frameworks.

There are many cases when having just SSR or CSR will not be enough to meet our app needs (for example, when we need good SEO, a smooth user experience, and a fast content load). In these cases, the application uses both server and client-side rendering.

In most cases, the most relevant and essential content is rendered server-side. It is served to the user first, and afterward, the rest of the more dynamic content is handed over to JavaScript, which renders it on the client side. There is much more to say about CSR and SSR, but it is out of scope for now. I highly recommend reading this article to learn more about these concepts and differences.

Tip: Again, most of the applications use both client and server-side rendering. But there is an easy life-hack that can help you check which parts of an application are rendered on the server-side and which parts on the client-side. Simply go to your browser settings and turn off JavaScript. Then open your desired website, and you will see the content which is loaded on the server-side. Then by turning on JS again, reloading the page will allow JS to render the other parts handed over to the client-side.

As with routing, there are also many different techniques, both server-side (SSI, Zalando, Podium, etc.) and client-side (iframes, Ajax, Web Components, etc.), to handle micro-frontends composition. Which of these to go with depends on our application requirements.

Communication

The last concept of micro-frontend integration is the communication between our micro-frontends (Fig. 5).

Figure 5. Communication between micro-frontends.

Remember that we can have multiple micro-frontends on one page, and interaction with one can eventually lead to others’ changes. For example, let’s imagine we are on the product detail page of an e-commerce application, and we want to add a product to our shopping cart.

On the header (which is another micro-frontend fragment application), there is an icon of our cart, which also has an indicator — the number of added items. Adding the product on the details page should also update the number of items on our header’s shopping cart icon. There can be multiple communication scenarios like parent to fragment, fragment to parent, as well as fragment to fragment, and each communication is being handled differently.

  • Parent to fragment: This case is similar to handling communication between a parent component and its child components. It can be done by passing the data to children via props/attributes. So if our fragment/child is implemented with web-components technology, it should receive some attributes when it is loaded, and a data change in the parent micro-frontend can trigger updated data being passed to the fragment/child micro-frontends;
  • Fragment to parent: In this case, we can use the browser’s native CustomEvents API. So, on the parent side, there can be a subscribed event listener, while inside a fragment, we emit/publish an event on data change. The event emitter will bubble up the data to the listener on the parent side, which will then be handled on the parent micro-frontend.
  • Fragment to fragment: In this case, we can use the combination of the previous two techniques by emitting events to the parent fragment, which will listen to the changes and pass them to another fragment via props/attributes.

Another way of micro-frontends communication is Broadcast Channel API, which is an implementation of the Publisher/Subscriber design pattern just like the Custom Events API.
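A minimal sketch of these communication paths is shown below, using the browser's standard CustomEvent and Broadcast Channel APIs; the element IDs, event names, and payload shapes are illustrative only:

```typescript
// Parent application: subscribes once and updates the header's cart badge.
window.addEventListener("item-added-to-cart", (event) => {
  const { quantity } = (event as CustomEvent<{ productId: string; quantity: number }>).detail;
  console.log(`increase cart badge by ${quantity}`);
});

// Fragment -> parent: the product fragment emits a bubbling CustomEvent when an item is added.
const productDetail = document.getElementById("product-detail");
productDetail?.dispatchEvent(
  new CustomEvent("item-added-to-cart", {
    detail: { productId: "sku-42", quantity: 1 },
    bubbles: true, // let the event bubble up through the parent document
  }),
);

// Alternative: the Broadcast Channel API as a publish/subscribe bus between fragments.
// In practice the subscriber and publisher live in different micro-frontend bundles.
const headerChannel = new BroadcastChannel("cart-events");   // header fragment subscribes
headerChannel.onmessage = (event) => console.log("cart update received", event.data);

const productChannel = new BroadcastChannel("cart-events");  // product fragment publishes
productChannel.postMessage({ type: "item-added", productId: "sku-42" });
```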

What’s next?

So, we have learned how beneficial micro-frontends can be and what problems they can solve. We also learned what challenges and problems we might face when implementing the micro-frontend architecture. We saw that, based on the three concepts described, various technologies and patterns can be used to achieve a full micro-frontend integration.

This should be enough to understand the main concepts and the big picture of micro-frontend architecture. This article is the first part of a series of three articles. The next article will focus on high-level micro-frontend architecture types, starting from the simplest and ending with the most complex one. We will also learn the concepts that will help us choose the architecture type that best fits our needs. So, stay tuned.

Clarified CQRS
28 Mar 2021


After listening to how the community has interpreted Command-Query Responsibility Segregation, I think the time has come for some clarification. Some have been tying it together with Event Sourcing. Most have been overlaying their previous layered architecture assumptions on it. Here I hope to identify CQRS itself and describe in which places it can connect to other patterns.
This is quite a long post.
Why CQRS

Before describing the details of CQRS we need to understand the two main driving forces behind it: collaboration and staleness.
Collaboration refers to circumstances under which multiple actors will be using/modifying the same set of data – whether or not the intention of the actors is actually to collaborate with each other. There are often rules which indicate which user can perform which kind of modification, and modifications that may have been acceptable in one case may not be acceptable in others. We’ll give some examples shortly. Actors can be human, like normal users, or automated, like software.

Staleness refers to the fact that in a collaborative environment, once data has been shown to a user, that same data may have been changed by another actor – it is stale. Almost any system which makes use of a cache is serving stale data – often for performance reasons. What this means is that we cannot entirely trust our users’ decisions, as they could have been made based on out-of-date information.

Standard layered architectures don’t explicitly deal with either of these issues. While putting everything in the same database may be one step in the direction of handling collaboration, staleness is usually exacerbated in those architectures by the use of caches as a performance-improving afterthought.

A picture for reference

I’ve given some talks about CQRS using this diagram to explain it:

The boxes named AC are Autonomous Components. We’ll describe what makes them autonomous when discussing commands. But before we go into the complicated parts, let’s start with queries:

Queries

If the data we’re going to be showing users is stale anyway, is it really necessary to go to the master database and get it from there? Why transform those 3rd normal form structures to domain objects if we just want data – not any rule-preserving behaviors? Why transform those domain objects to DTOs to transfer them across a wire, and who said that wire has to be exactly there? Why transform those DTOs to view model objects?

In short, it looks like we’re doing a heck of a lot of unnecessary work based on the assumption that reusing code that has already been written will be easier than just solving the problem at hand. Let’s try a different approach:

How about we create an additional data store whose data can be a bit out of sync with the master database – I mean, the data we’re showing the user is stale anyway, so why not reflect that in the data store itself. We’ll come up with an approach later to keep this data store more or less in sync.

Now, what would be the correct structure for this data store? How about just like the view model? One table for each view. Then our client could simply SELECT * FROM MyViewTable (or possibly pass in an ID in a where clause), and bind the result to the screen. That would be just as simple as can be. You could wrap that up with a thin facade if you feel the need, or with stored procedures, or using AutoMapper which can simply map from a data reader to your view model class. The thing is that the view model structures are already wire-friendly, so you don’t need to transform them to anything else.

You could even consider taking that data store and putting it in your web tier. It’s just as secure as an in-memory cache in your web tier. Give your web servers SELECT-only permissions on those tables and you should be fine.

Query Data Storage

While you can use a regular database as your query data store, it isn’t the only option. Consider that the query schema is in essence identical to your view model. 
You don’t have any relationships between your various view model classes, so you shouldn’t need any relationships between the tables in the query data store.

So do you actually need a relational database? The answer is no, but for all practical purposes and due to organizational inertia, it is probably your best choice (for now).

Scaling Queries

Since your queries are now being performed off of a separate data store than your master database, and there is no assumption that the data that’s being served is 100% up to date, you can easily add more instances of these stores without worrying that they don’t contain the exact same data. The same mechanism that updates one instance can be used for many instances, as we’ll see later.

This gives you cheap horizontal scaling for your queries. Also, since you’re not doing nearly as much transformation, the latency per query goes down as well. Simple code is fast code.

Data modifications

Since our users are making decisions based on stale data, we need to be more discerning about which things we let through. Here’s a scenario explaining why:

Let’s say we have a customer service representative who is on the phone with a customer. This user is looking at the customer’s details on the screen and wants to make them a ‘preferred’ customer, as well as modifying their address, changing their title from Ms to Mrs, changing their last name, and indicating that they’re now married. What the user doesn’t know is that after opening the screen, an event arrived from the billing department indicating that this same customer doesn’t pay their bills – they’re delinquent. At this point, our user submits their changes.

Should we accept their changes?

Well, we should accept some of them, but not the change to ‘preferred’, since the customer is delinquent. But writing those kinds of checks is a pain – we need to do a diff on the data, infer what the changes mean, which ones are related to each other (name change, title change) and which are separate, identify which data to check against – not just compared to the data the user retrieved, but compared to the current state in the database – and then reject or accept.

Unfortunately for our users, we tend to reject the whole thing if any part of it is off. At that point, our users have to refresh their screen to get the up-to-date data, and retype all the previous changes, hoping that this time we won’t yell at them because of an optimistic concurrency conflict.

As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts.

If only there was some way for our users to provide us with the right level of granularity and intent when modifying data. That’s what commands are all about.

Commands

A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent, such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above.

We could even consider allowing our users to submit a new command even before they’ve received confirmation on the previous one. 
We could have a little widget on the side showing the user their pending commands, checking them off asynchronously as we receive confirmation from the server, or marking them with an X if they fail. The user could then double-click that failed task to find information about what happened.

Note that the client sends commands to the server – it doesn’t publish them. Publishing is reserved for events which state a fact – that something has happened, and that the publisher has no concern about what receivers of that event do with it.

Commands and Validation

In thinking through what could make a command fail, one topic that comes up is validation. Validation is different from business rules in that it states a context-independent fact about a command. Either a command is valid, or it isn’t. Business rules, on the other hand, are context dependent.

In the example we saw before, the data our customer service rep submitted was valid; it was only due to the billing event arriving earlier that the command had to be rejected. Had that billing event not arrived, the data would have been accepted.

Even though a command may be valid, there still may be reasons to reject it.

As such, validation can be performed on the client, checking that all fields required for that command are there, number and date ranges are OK, that kind of thing. The server would still validate all commands that arrive, not trusting clients to do the validation.

Rethinking UIs and commands in light of validation

The client can make use of the query data store when validating commands. For example, before submitting a command that the customer has moved, we can check that the street name exists in the query data store.

At that point, we may rethink the UI and have an auto-completing text box for the street name, thus ensuring that the street name we’ll pass in the command will be valid. But why not take things a step further? Why not pass in the street ID instead of its name? Have the command represent the street not as a string, but as an ID (int, guid, whatever).

On the server side, the only reason that such a command would fail would be due to concurrency – that someone had deleted that street and that that hadn’t been reflected in the query store yet; a fairly exceptional set of circumstances.

Reasons valid commands fail and what to do about it

So we’ve got a well-behaved client that is sending valid commands, yet the server still decides to reject them. Often the circumstances for the rejection are related to other actors changing state relevant to the processing of that command.

In the CRM example above, it is only because the billing event arrived first. But “first” could be a millisecond before our command. What if our user pressed the button a millisecond earlier? Should that actually change the business outcome? Shouldn’t we expect our system to behave the same when observed from the outside?

So, if the billing event arrived second, shouldn’t that revert preferred customers to regular ones? Not only that, but shouldn’t the customer be notified of this, like by sending them an email? In which case, why not have this be the behavior for the case where the billing event arrives first? And if we’ve already got a notification model set up, do we really need to return an error to the customer service rep? 
I mean, it’s not like they can do anything about it other than notifying the customer.

So, if we’re not returning errors to the client (who is already sending us valid commands), maybe all we need to do on the client when sending a command is to tell the user “thank you, you will receive confirmation via email shortly”. We don’t even need the UI widget showing pending commands.

Commands and Autonomy

What we see is that in this model, commands don’t need to be processed immediately – they can be queued. How fast they get processed is a question of Service-Level Agreement (SLA) and not architecturally significant. This is one of the things that makes the node that processes commands autonomous from a runtime perspective – we don’t require an always-on connection to the client.

Also, we shouldn’t need to access the query store to process commands – any state that is needed should be managed by the autonomous component – that’s part of the meaning of autonomy.

Another part is the issue of failed message processing due to the database being down or hitting a deadlock. There is no reason that such errors should be returned to the client – we can just roll back and try again. When an administrator brings the database back up, all the messages waiting in the queue will then be processed successfully and our users receive confirmation.

The system as a whole is quite a bit more robust to any error conditions.

Also, since we don’t have queries going through this database any more, the database itself is able to keep more rows/pages in memory which serve commands, improving performance. When both commands and queries were being served off of the same tables, the database server was always juggling rows between the two.

Autonomous Components

While in the picture above we see all commands going to the same AC, we could logically have each command processed by a different AC, each with its own queue. That would give us visibility into which queue was the longest, letting us see very easily which part of the system was the bottleneck. While this is interesting for developers, it is critical for system administrators.

Since commands wait in queues, we can now add more processing nodes behind those queues (using the distributor with NServiceBus) so that we’re only scaling the part of the system that’s slow. No need to waste servers on any other requests.

Service Layers

Our command processing objects in the various autonomous components actually make up our service layer. The reason you don’t see this layer explicitly represented in CQRS is that it isn’t really there, at least not as an identifiable logical collection of related objects – here’s why:

In the layered architecture (AKA 3-Tier) approach, there is no statement about dependencies between objects within a layer, or rather it is implied to be allowed. However, when taking a command-oriented view on the service layer, what we see are objects handling different types of commands. 
Each command is independent of the other, so why should we allow the objects which handle them to depend on each other?

Dependencies are things which should be avoided, unless there is good reason for them. Keeping the command handling objects independent of each other will allow us to more easily version our system, one command at a time, not needing even to bring down the entire system, given that the new version is backwards compatible with the previous one.

Therefore, keep each command handler in its own VS project, or possibly even in its own solution, thus guiding developers away from introducing dependencies in the name of reuse (it’s a fallacy). If you do decide, as a deployment concern, that you want to put them all in the same process feeding off of the same queue, you can ILMerge those assemblies and host them together, but understand that you will be undoing much of the benefits of your autonomous components.

Whither the domain model?

Although in the diagram above you can see the domain model beside the command-processing autonomous components, it’s actually an implementation detail. There is nothing that states that all commands must be processed by the same domain model. Arguably, you could have some commands be processed by transaction script, others using table module (AKA active record), as well as those using the domain model. Event sourcing is another possible implementation.

Another thing to understand about the domain model is that it now isn’t used to serve queries. So the question is, why do you need to have so many relationships between entities in your domain model?

(You may want to take a second to let that sink in.)

Do we really need a collection of orders on the customer entity? In what command would we need to navigate that collection? In fact, what kind of command would need any one-to-many relationship? And if that’s the case for one-to-many, many-to-many would definitely be out as well. I mean, most commands only contain one or two IDs in them anyway.

Any aggregate operations that may have been calculated by looping over child entities could be pre-calculated and stored as properties on the parent entity. Following this process across all the entities in our domain would result in isolated entities needing nothing more than a couple of properties for the IDs of their related entities – “children” holding the parent ID, like in databases.

In this form, commands could be entirely processed by a single entity – voilà, an aggregate root that is a consistency boundary.

Persistence for command processing

Given that the database used for command processing is not used for querying, and that most (if not all) commands contain the IDs of the rows they’re going to affect, do we really need to have a column for every single domain object property? What if we just serialized the domain entity and put it into a single column, and had another column containing the ID? This sounds quite similar to key-value storage that is available in the various cloud providers. 
In which case, would you really need an object-relational mapper to persist to this kind of storage? You could also pull out an additional property per piece of data where you’d want the “database” to enforce uniqueness.

I’m not suggesting that you do this in all cases – rather just trying to get you to rethink some basic assumptions.

Let me reiterate

How you process the commands is an implementation detail of CQRS.

Keeping the query store in sync

After the command-processing autonomous component has decided to accept a command, modifying its persistent store as needed, it publishes an event notifying the world about it. This event often is the “past tense” of the command submitted:

MakeCustomerPreferredCommand -> CustomerHasBeenMadePreferredEvent

The publishing of the event is done transactionally together with the processing of the command and the changes to its database. That way, any kind of failure on commit will result in the event not being sent. This is something that should be handled by default by your message bus, and if you’re using MSMQ as your underlying transport, it requires the use of transactional queues.

The autonomous component which processes those events and updates the query data store is fairly simple, translating from the event structure to the persistent view model structure. I suggest having an event handler per view model class (AKA per table).

Here’s the picture of all the pieces again:
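As a minimal, illustrative sketch of that command -> event -> view-model flow (the types, stores, and wiring below are assumptions for demonstration; a real system would use a message bus, durable storage, and transactional queues as described above):

```typescript
// Command and event types mirroring the example above.
interface MakeCustomerPreferredCommand { customerId: string; }
interface CustomerHasBeenMadePreferredEvent { customerId: string; occurredAt: string; }

// Write side (command store) and read side (one "table" per view), both in memory here.
const commandStore = new Map<string, { preferred: boolean; delinquent: boolean }>();
const customerViewTable = new Map<string, { customerId: string; status: string }>();

// Command handler in the autonomous component: applies the business rule, then publishes an event.
function handleMakeCustomerPreferred(
  cmd: MakeCustomerPreferredCommand,
  publish: (e: CustomerHasBeenMadePreferredEvent) => void,
): void {
  const customer = commandStore.get(cmd.customerId);
  if (!customer || customer.delinquent) return; // rejected by a business rule, not by validation
  customer.preferred = true;
  publish({ customerId: cmd.customerId, occurredAt: new Date().toISOString() });
}

// Event handler on the query side: one handler per view model table.
function onCustomerMadePreferred(e: CustomerHasBeenMadePreferredEvent): void {
  customerViewTable.set(e.customerId, { customerId: e.customerId, status: "preferred" });
}

// Direct wiring for the sketch; a real system would publish via a message bus.
commandStore.set("c-1", { preferred: false, delinquent: false });
handleMakeCustomerPreferred({ customerId: "c-1" }, onCustomerMadePreferred);
console.log(customerViewTable.get("c-1")); // { customerId: 'c-1', status: 'preferred' }
```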
Bounded Contexts

While CQRS touches on many pieces of software architecture, it is still not at the top of the food chain. CQRS, if used, is employed within a bounded context (DDD) or a business component (SOA) – a cohesive piece of the problem domain. The events published by one BC are subscribed to by other BCs, each updating their query and command data stores as needed.

UIs from the CQRS found in each BC can be “mashed up” in a single application, providing users a single composite view on all parts of the problem domain. Composite UI frameworks are very useful for these cases.

Summary

CQRS is about coming up with an appropriate architecture for multi-user collaborative applications. It explicitly takes into account factors like data staleness and volatility and exploits those characteristics for creating simpler and more scalable constructs.

One cannot truly enjoy the benefits of CQRS without considering the user interface, making it capture user intent explicitly. When taking into account client-side validation, command structures may be somewhat adjusted. Thinking through the order in which commands and events are processed can lead to notification patterns which make returning errors unnecessary.

While the result of applying CQRS to a given project is a more maintainable and performant code base, this simplicity and scalability require understanding the detailed business requirements and are not the result of any technical “best practice”. If anything, we can see a plethora of approaches to apparently similar problems being used together – data readers and domain models, one-way messaging and synchronous calls.
Persistence in Event Driven Architectures
28 Mar 2021


The importance of being persistent in event driven architectures.

Enterprises have to constantly adapt and evolve their enterprise architecture strategies in order to deliver the desired business outcomes. The evolving architecture patterns may involve business processing of sales transactions with a human in the loop or they may involve machine to machine data processing using automation. Enterprises earlier adopted a request-driven model where microservices made a call to a service and the service responded to the request being made. In this request-driven model, you run into challenges around flexibility as you try to scale your global deployment footprint.

A new approach that is quickly gaining adoption in enterprises is called event-driven architecture. In the new approach, you are able to increase application agility and flexibility by allowing multiple data producers to coexist with multiple data consumers, and you process data only after an event or state change. In this enterprise architecture, the producers and consumers of data can be quickly extended to deliver better flexibility and agility as you scale your operation globally. Examples of event-driven solutions are available in a hosted, managed format from most cloud providers today. In this blog post, we will look at a Kafka solution running in a Kubernetes cluster and how you can make sure persistence is achieved for the solution. This approach is running in production at customers today, supported by Confluent and Portworx.

Why Use Event-Driven Architecture?

With the advent of 5G technology, a vast amount of data will be generated by sensors, devices, systems, and humans in the loop to track, manage, and achieve business outcomes. Use cases for event-driven architectures include the following:

  • Business Process state changes – You want to notify the change of state between a purchase order and accounts receivable with an event. The event-based approach allows a human or a decision engine to take the next appropriate step in the process.
  • Log and Metrics processing – The event-driven model allows for multiple actions to be triggered based on a single metric. The ability to send messages to multiple event handlers in different subsystems offers the scalability and resiliency required by certain business applications.

Why Use Kafka?

Apache Kafka is a scalable, fault-tolerant messaging system that enables you to build distributed real-time applications with an event-driven architecture. Kafka delivers events, with fast ingestion rates, and provides persistence and in-order guarantees. Kafka adoption in your solution will depend on your specific use case. Below are some important concepts about Apache Kafka, with a minimal client sketch following the list:

  • Kafka organizes messages into “topics”.
  • The process that does the work in Kafka is called the “broker”. A producer pushes data into a topic hosted on a broker and a consumer pulls messages from a topic via a broker.
  • Kafka topics can be divided into “partitions”. This allows for parallelizing a topic across multiple brokers and increasing message ingests and throughput.
  • Brokers can hold multiple partitions, but at any given time only one broker acts as the leader for a given partition. The leader is responsible for updating any replicas with new data.
  • Brokers are responsible for storing messages to disk. The messages are stored with unique offsets. Messages in Kafka do not have a unique ID.
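Here is the minimal client sketch mentioned above, assuming the kafkajs client library; the broker address, topic, and consumer group names are illustrative:

```typescript
// Minimal produce/consume round trip with the kafkajs client.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "orders-app", brokers: ["localhost:9092"] });

async function run(): Promise<void> {
  // Producer pushes messages into a topic hosted on a broker.
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "order-events",
    messages: [{ key: "order-1", value: JSON.stringify({ status: "created" }) }],
  });

  // Consumer in a group pulls messages from the topic via the broker;
  // offsets track how far this group has read.
  const consumer = kafka.consumer({ groupId: "billing-service" });
  await consumer.connect();
  await consumer.subscribe({ topic: "order-events", fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ partition, message }) => {
      console.log(`partition ${partition}, offset ${message.offset}:`, message.value?.toString());
    },
  });
}

run().catch(console.error);
```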

Persistence With Portworx and Kafka on Kubernetes

Kafka needs ZooKeeper, which should be deployed as a StatefulSet in Kubernetes. The Kafka brokers, which maintain the state of the topics and partitions, also need to be deployed as StatefulSets backed by persistent volumes.

  • Kafka offers replication of topics between different brokers. In the case of a node failure, Kafka can recover using the replicated topics. This recovery mechanism does create additional network calls in order to synchronize with the replica on a different broker. The recovery time of the failed node and its broker depends on the amount of data that needs to be rehydrated and the network latencies in the cluster.
  • Portworx offers data replication using the replication parameter in the Kubernetes storage class. In this scenario, the storage system is responsible for maintaining copies of the topic on different nodes. In the case of a node failure, the Kafka broker is rescheduled on a node that already contains the replicated topic data. The rebuild time of the broker is reduced because it uses the storage system to rehydrate the topic data without any network latency. Once the data is rehydrated using the storage system, the broker can quickly catch up on the topic offset from an existing broker, thus reducing the overall recovery time.

Let’s walk through a Kubernetes node failure scenario on a Kubernetes cluster where a Kafka application is running and is backed by Portworx volumes.

Figure 1: We will describe the deployment of Kafka and Portworx on a 5 node Kubernetes cluster. In the image below, you can see that the Kafka deployment has 3 Brokers, each with 2 partitions and a replication factor of 2. For the Portworx data platform, we have a volume replication factor set to 3 for each volume.

Figure 2: We will describe a node failure event. In the diagram below, we have simulated a node failure and identified that Kubernetes worker node 2 has been taken out of production. Kubernetes worker node 2 contains a Kafka broker with 2 partitions and 2 Portworx persistent volumes.
Figure 3: Once the Kubernetes node failure is detected by the Kubernetes API server, the Kafka broker is deployed to a node with available resources. The Portworx platform has the ability to influence the placement of the broker on a Kubernetes node. Since Kubernetes node 4 already has a copy of the Kafka broker’s persistent volumes, Portworx makes sure that the Kafka broker is deployed on Kubernetes node 4. On the Portworx platform’s recovery side, the volumes are also created on other nodes in order to maintain the defined replication factor of 3.

Figure 4: Once the failed node is back in production use, the Kafka application does not get scheduled on the recovered node since the application is already at the desired number of Kafka brokers. On the Portworx platform side, replicated volumes are placed on the recovered Kubernetes node to maintain the desired replication factor for the volumes.


Why Kafka Is so Fast
26 Mar 2021


Discover the deliberate design decisions that have made Kafka the performance powerhouse it is today.

The last few years have brought about immense changes in the software architecture landscape. The notion of a single monolithic application or even several coarse-grained services sharing a common data store has been all but erased from the hearts and minds of software practitioners world-wide. Autonomous microservices, event-driven architecture, and CQRS are the dominant tools in the construction of contemporary business-centric applications. To top it off, the proliferation of device connectivity — IoT, mobile, wearables — is creating an upward pressure on the number of events a system must handle in near-real-time.

Let’s start by acknowledging that the term ‘fast’ is multi-faceted, complex, and highly ambiguous. Latency, throughput, and jitter are metrics that shape and influence one’s interpretation of the term. It is also inherently contextual: the industry and application domains in themselves set the norms and expectations around performance. Whether or not something is fast depends largely on one’s frame of reference.

Apache Kafka is optimized for throughput at the expense of latency and jitter, while preserving other desirable qualities, such as durability, strict record order, and at-least-once delivery semantics. When someone says ‘Kafka is fast’, and assuming they are at least mildly competent, you can assume they are referring to Kafka’s ability to safely accumulate and distribute a very high number of records in a short amount of time.

Historically, Kafka was born out of LinkedIn’s need to move a very large number of messages efficiently, amounting to multiple terabytes of data on an hourly basis. The individual message propagation delay was deemed of secondary importance, as was the variability of that time. After all, LinkedIn is not a financial institution that engages in high-frequency trading, nor is it an industrial control system that operates within deterministic deadlines. Kafka can be used to implement near-real-time (otherwise known as soft real-time) systems.

Note: For those unfamiliar with the term, ‘real-time’ does not mean ‘fast’, it means ‘predictable’. Specifically, real-time implies a hard upper bound, otherwise known as a deadline, on the time taken to complete an action. If the system as a whole is unable to meet this deadline each and every time, it cannot be classed as real-time. Systems that are able to perform within a probabilistic tolerance are labelled as ‘near-real-time’. In terms of sheer throughput, real-time systems are often slower than their near-real-time or non-real-time counterparts.

There are two significant areas that Kafka draws upon for its speed, and they need to be discussed separately. The first relates to the low-level efficiency of the client and broker implementations. The second derives from the opportunistic parallelism of stream processing.

Broker performance

Log-structured persistence

Kafka utilizes a segmented, append-only log, largely limiting itself to sequential I/O for both reads and writes, which is fast across a wide variety of storage media. There is a widespread misconception that disks are slow; however, the performance of storage media (particularly rotating media) is greatly dependent on access patterns. The performance of random I/O on a typical 7,200 RPM SATA disk is between three and four orders of magnitude slower than that of sequential I/O. Furthermore, a modern operating system provides read-ahead and write-behind techniques that prefetch data in large block multiples and group smaller logical writes into large physical writes. Because of this, the difference between sequential I/O and random I/O is still evident in flash and other forms of solid-state non-volatile media, although it is far less dramatic compared to rotating media.
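
As a rough illustration of the append-only idea (a hypothetical sketch, not Kafka’s actual storage code), a segment writer built on Java NIO only ever appends to the end of a file, so the underlying device sees purely sequential writes:

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Illustrative append-only "segment" writer; Kafka's real log format is more involved.
public class AppendOnlySegment implements AutoCloseable {
    private final FileChannel channel;

    public AppendOnlySegment(Path segmentFile) throws IOException {
        // APPEND guarantees every write lands at the current end of the file (sequential I/O).
        this.channel = FileChannel.open(segmentFile,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE, StandardOpenOption.APPEND);
    }

    public void append(String record) throws IOException {
        ByteBuffer buf = ByteBuffer.wrap((record + "\n").getBytes(StandardCharsets.UTF_8));
        while (buf.hasRemaining()) {
            channel.write(buf); // sequential write into the OS page cache
        }
    }

    @Override
    public void close() throws IOException {
        channel.close();
    }
}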

Record batching

Sequential I/O is blazingly fast on most media types, comparable to the peak performance of network I/O. In practice, this means that a well-designed log-structured persistence layer will keep up with the network traffic. In fact, quite often the bottleneck with Kafka isn’t the disk, but the network. So in addition to the low-level batching provided by the OS, Kafka clients and brokers will accumulate multiple records in a batch — for both reading and writing — before sending them over the network. Batching of records amortizes the overhead of the network round-trip, using larger packets and improving bandwidth efficiency.

Batch compression

The impact of batching is particularly obvious when compression is enabled, as compression becomes generally more effective as the data size increases. Especially when using text-based formats such as JSON, the effects of compression can be quite pronounced, with compression ratios typically ranging from 5x to 7x. Furthermore, record batching is largely done as a client-side operation, which transfers the load onto the client and has a positive effect not only on the network bandwidth but also on the brokers’ disk I/O utilization.
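
In the Java producer, batching and compression are driven by a handful of configuration settings. The sketch below is illustrative only; the broker address and the values are placeholders rather than recommendations:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class BatchingProducerConfig {
    public static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 64 * 1024);   // accumulate up to 64 KB per partition batch
        props.put(ProducerConfig.LINGER_MS_CONFIG, 10);           // wait up to 10 ms for a batch to fill
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // compress whole batches on the client
        return new KafkaProducer<>(props);
    }
}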

Cheap consumers

Unlike traditional MQ-style brokers which remove messages at the point of consumption (incurring the penalty of random I/O), Kafka doesn’t remove messages after they are consumed — instead, it independently tracks offsets at the level of each consumer group. The offsets themselves are published on an internal Kafka topic, __consumer_offsets. Again, being an append-only operation, this is fast. The contents of this topic are further reduced in the background (using Kafka’s compaction feature) to retain only the last known offsets for any given consumer group.
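
As a hedged sketch (the broker address, group id, and topic name are placeholders), a consumer that commits its offsets explicitly after processing each batch looks roughly like this:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OffsetTrackingConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "audit");                   // offsets are tracked per group
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");         // commit explicitly below
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("page-views")); // placeholder topic
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    System.out.println(record.offset() + ": " + record.value());
                }
                // Committing is itself just an append to the internal __consumer_offsets topic.
                consumer.commitSync();
            }
        }
    }
}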

Compare this model with more traditional message brokers which typically offer several contrasting message distribution topologies. On one hand is the message queue — a durable transport for point-to-point messaging, with no point-to-multipoint ability. On the other hand, a pub-sub topic allows for point-to-multipoint messaging but does so at the expense of durability. Implementing a durable point-to-multipoint messaging model in a traditional MQ requires maintaining a dedicated message queue for each stateful consumer. This creates both read and write amplification. On one hand, the publisher is forced to write to multiple queues. Alternatively, a fan-out relay may consume records from one queue and write to several others, but this only defers the point of amplification. On the other hand, several consumers are generating load on the broker — being a mixture of read and write I/O, both sequential and random.

Consumers in Kafka are ‘cheap’, insofar as they don’t mutate the log files (only the producer or internal Kafka processes are permitted to do that). This means that a large number of consumers may concurrently read from the same topic without overwhelming the cluster. There is still some cost in adding a consumer, but it is mostly sequential reads with a low rate of sequential writes. So it’s fairly normal to see a single topic being shared across a diverse consumer ecosystem.

Unflushed buffered writes

Another fundamental reason for Kafka’s performance, and one that is worth exploring further: Kafka doesn’t actually call fsync when writing to the disk before acknowledging the write; the only requirement for an ACK is that the record has been written to the I/O buffer. This is a little-known fact, but a crucial one: it is what actually makes Kafka perform as if it were an in-memory queue — because for all intents and purposes Kafka is a disk-backed in-memory queue (limited by the size of the buffer/pagecache).

On the flip side, this form of writing is unsafe, as the failure of a replica can lead to data loss even though the record has seemingly been acknowledged. In other words, unlike say a relational database, acknowledging a write alone does not imply durability. What makes Kafka durable is running several in-sync replicas; even if one were to fail, the others (assuming there is more than one) will remain operational — provided that the failures are uncorrelated (i.e. multiple replicas do not fail simultaneously due to a common upstream cause). So the combination of a non-blocking approach to I/O with no fsync and redundant in-sync replicas gives Kafka its combination of high throughput, durability, and availability.
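
On the producer side, this trade-off surfaces through the acks setting; paired with a replicated topic (and, typically, a min.insync.replicas topic setting of 2 or more), an acknowledged write has reached the buffers of several replicas even though none of them has necessarily flushed it to disk. A minimal, hedged sketch of the relevant producer configuration, with placeholder values:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class DurableProducerConfig {
    public static KafkaProducer<String, String> create() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // acks=all: the leader acknowledges only once every in-sync replica has the record
        // in its buffers, not necessarily on stable storage.
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        return new KafkaProducer<>(props);
    }
}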

Client-side optimisations

Most databases, queues, and other forms of persistent middleware are designed around the notion of an all-mighty server (or a cluster of servers), and fairly thin clients that communicate with the server(s) over a well-known wire protocol. Client implementations are generally considered to be significantly simpler than the server. As a result, the server will absorb the bulk of the load — the clients merely act as interfaces between the application code and the server.

Kafka takes a different approach to client design. A significant amount of work is performed on the client before records get to the server. This includes staging records in an accumulator, hashing the record keys to arrive at the correct partition index, checksumming the records, and compressing the record batch. The client is aware of the cluster metadata and periodically refreshes this metadata to keep abreast of any changes to the broker topology. This lets the client make low-level forwarding decisions; rather than sending a record blindly to the cluster and relying on the latter to forward it to the appropriate broker node, a producer client will forward writes directly to partition masters. Similarly, consumer clients are able to make intelligent decisions when sourcing records, potentially using replicas that are geographically closer to the client when issuing read queries. (This feature is a more recent addition to Kafka, available as of version 2.4.0.)

Zero-copy

One of the typical sources of inefficiencies is copying byte data between buffers. Kafka uses a binary message format that is shared by the producer, the broker, and the consumer parties so that data chunks can flow end-to-end without modification, even when compressed. While eliminating structural differences between communicating parties is an important step, it doesn’t in itself avoid the copying of data.

Kafka solves this problem on Linux and UNIX systems by using Java’s NIO framework, specifically, the transferTo() method of a java.nio.channels.FileChannel. This method permits the transfer of bytes from a source channel to a sink channel without involving the application as a transfer intermediary. To appreciate the difference that NIO makes, consider the traditional approach where a source channel is read into a byte buffer, then written to a sink channel as two separate operations:

File.read(fileDesc, buf, len);
Socket.send(socket, buf, len);

Diagrammatically, this can be represented using the following.

Although this looks simple enough, internally, the copy operation requires four context switches between user mode and kernel mode, and the data is copied four times before the operation is complete. The diagram below outlines the context switches at each step.

Looking at this in more detail —

  1. The initial read() causes a context switch from user mode to kernel mode. The file is read, and its contents are copied to a buffer in the kernel address space by the DMA (Direct Memory Access) engine. This is not the same buffer that was used in the code snippet.
  2. Prior to returning from read(), the kernel buffer is copied into the user-space buffer. At this point, our application can read the contents of the file.
  3. The subsequent send() will switch back into kernel mode, copying the user-space buffer into the kernel address space — this time into a different buffer associated with the destination socket. Behind the scenes, the DMA engine takes over, asynchronously copying the data from the kernel buffer to the protocol stack. The send() method does not wait for this prior to returning.
  4. The send() call returns, switching back to the user-space context.

In spite of its mode-switching inefficiencies and additional copying, the intermediate kernel buffer can actually improve performance in many cases. It can act as a read-ahead cache, asynchronously prefetching blocks, thereby front-running requests from the application. However, when the amount of requested data is significantly larger than the kernel buffer size, the kernel buffer becomes a performance bottleneck. Rather than copying the data directly, it forces the system to oscillate between user and kernel modes until all the data is transferred.

By contrast, the zero-copy approach is handled in a single operation. The snippet from the earlier example can be rewritten as a one-liner:

fileDesc.transferTo(offset, len, socket);

The zero-copy approach is illustrated below.

Under this model, the number of system calls is reduced to one. Specifically, the transferTo() method instructs the block device to read data into a read buffer by the DMA engine. This buffer is then copied to another kernel buffer for staging to the socket. Finally, the socket buffer is copied to the NIC buffer by DMA.

As a result, we have reduced the number of copies from four to three, and only one of those copies involves the CPU. We have also reduced the number of context switches from four to two.

This is a massive improvement, but it’s not quite zero-copy yet. True zero-copy can be achieved as a further optimization when running Linux kernels 2.4 and later, and on network interface cards that support the gather operation. This is illustrated below.

Calling the transferTo() method causes the device to read data into a kernel read buffer by the DMA engine, as per the previous example. However, with the gather operation, there is no copying between the read buffer and the socket buffer. Instead, the NIC is given a pointer to the read buffer, along with the offset and the length, which is vacuumed up by DMA. At no point is the CPU involved in copying buffers.

Comparisons of traditional and zero-copy on file sizes ranging from a few megabytes to a gigabyte show performance gains by a factor of two to three in favor of zero-copy. But what’s more impressive, is that Kafka achieves this using a plain JVM with no native libraries or JNI code.
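
The same mechanism is available to any JVM application, not just Kafka. The following is a minimal sketch, with a placeholder file, host, and port, that streams a file to a socket via transferTo():

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopySend {
    public static void main(String[] args) throws IOException {
        Path file = Path.of("segment.log"); // placeholder file
        try (FileChannel source = FileChannel.open(file, StandardOpenOption.READ);
             SocketChannel sink = SocketChannel.open(new InetSocketAddress("localhost", 9000))) {
            long position = 0;
            long remaining = source.size();
            while (remaining > 0) {
                // transferTo() may send fewer bytes than requested, so loop until done.
                long sent = source.transferTo(position, remaining, sink);
                position += sent;
                remaining -= sent;
            }
        }
    }
}

This is essentially what a broker does when serving fetch requests straight out of the page cache: the file contents never pass through user space.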

Avoiding the GC

The heavy use of channels, native buffers, and the page cache has one additional benefit — reducing the load on the garbage collector (GC). For example, running Kafka on a machine with 32 GB of RAM will result in 28–30 GB usable for the page cache, completely outside of the GC’s scope. The difference in throughput is minimal — in the region of several percentage points — as the throughput of a correctly-tuned GC can be quite high, especially when dealing with short-lived objects. The real gains are in the reduction of jitter; by avoiding the GC, the brokers are less likely to experience a pause that may impact the client, extending the end-to-end propagation delay of records.

To be fair, the avoidance of GC is less of a problem now, compared to what it used to be when Kafka was conceived. Modern GCs like Shenandoah and ZGC scale to huge, multi-terabyte heaps, and have tunable worst-case pause times, down to single-digit milliseconds. It is not uncommon these days to see JVM-based applications using large heap-based caches outperform off-heap designs.

Stream parallelism

The efficiency of log-structured I/O is one crucial aspect of performance, mostly affecting writes; Kafka’s treatment of parallelism in the topic structure and the consumer ecosystem is fundamental to its read performance. The combination produces an overall very high end-to-end messaging throughput. Concurrency is ingrained into its partitioning scheme and the operation of consumer groups, which is effectively a load-balancing mechanism within Kafka — distributing partition assignments approximately evenly among the individual consumer instances within the group. Compare this to a more traditional MQ: in an equivalent RabbitMQ setup, multiple concurrent consumers may read from a queue in a round-robin fashion, but in doing so they forfeit the notion of message ordering.

The partitioning mechanism also allows for the horizontal scalability of Kafka brokers. Every partition has a dedicated leader; any nontrivial topic (with multiple partitions) can, therefore, utilize the entire cluster of brokers for writes. This is yet another point of distinction between Kafka and a message queue; where the latter utilizes clustering for availability, Kafka will genuinely balance the load across the brokers for availability, durability, and throughput.
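
For context, the partition and replica counts are fixed when a topic is created. A minimal sketch using Kafka’s admin client follows; the broker address, topic name, and counts are placeholders rather than recommendations:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicCreator {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address

        try (AdminClient admin = AdminClient.create(props)) {
            // 12 partitions spread writes across the brokers; replication factor 3 for durability.
            NewTopic topic = new NewTopic("orders", 12, (short) 3);
            admin.createTopics(List.of(topic)).all().get(); // block until the topic is created
        }
    }
}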

The producer specifies the partition when publishing a record, assuming that you are publishing to a topic with multiple partitions. (One may have a single-partition topic, in which case this is a non-issue.) This may be accomplished either directly — by specifying a partition index, or indirectly — by way of a record key, which deterministically hashes to a consistent (i.e. the same every time) partition index. Records sharing the same key are guaranteed to occupy the same partition. Assuming a topic with multiple partitions, records with different keys will likely end up in different partitions. However, because the hash space is mapped onto a limited number of partitions, records with different keys may also land in the same partition. Such is the nature of hashing; if you understand how a hash table works, this is no different.
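
As an illustration (the topic name, key, and partition index below are made up), a producer can either let the record key determine the partition or target a partition explicitly:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedPublisher {
    public static void publish(KafkaProducer<String, String> producer) {
        // Keyed record: "customer-42" always hashes to the same partition,
        // so all of that customer's events retain their relative order.
        producer.send(new ProducerRecord<>("orders", "customer-42", "order-created"));

        // Explicit partition index (here 3): the key is still stored, but no hashing occurs.
        producer.send(new ProducerRecord<>("orders", 3, "customer-42", "order-shipped"));
    }
}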

The actual processing of records is done by consumers, operating within an (optional) consumer group. Kafka guarantees that a partition may only be assigned to at most one consumer within its consumer group. (We say ‘at most’ to cover the case when all consumers are offline.) When the first consumer in a group subscribes to the topic, it will receive all partitions on that topic. When a second consumer subsequently joins, it will get approximately half of the partitions, relieving the first consumer of half of its prior load. This enables you to process an event stream in parallel, adding consumers as necessary (ideally, using an auto-scaling mechanism), providing that you have adequately partitioned your event stream.

Control of record throughput is accomplished in two ways:

  1. The topic partitioning scheme. Topics should be partitioned to maximize the number of independent event sub-streams. In other words, record order should only be preserved where it is absolutely necessary. If any two records are not legitimately related in a causal sense, they shouldn’t be bound to the same partition. This implies the use of different keys, as Kafka will use a record’s key as a hashing source to derive its consistent partition mapping.
  2. The number of consumers in the group. You can increase the number of consumers to match the load of inbound records, up to the number of partitions in the topic. (You can have more consumers if you wish, but the partition count will place an upper bound on the number of active consumers which get at least one partition assignment; the remaining consumers will remain idle.) Note that a consumer could be a process or a thread. Depending on the type of workload that the consumer performs, you may be able to employ multiple individual consumer threads or process records in a thread pool, as the sketch after this list illustrates.
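
The following is a hedged sketch of the thread-per-consumer approach; the broker address, group id, and topic name are placeholders, and each thread owns its own KafkaConsumer instance because the client is not safe for concurrent use:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ConsumerGroupThreads {
    public static void main(String[] args) {
        int threads = 4; // scale up to the topic's partition count; extra consumers sit idle
        for (int i = 0; i < threads; i++) {
            new Thread(ConsumerGroupThreads::runConsumer, "consumer-" + i).start();
        }
    }

    private static void runConsumer() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors");        // same group, shared partitions
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders")); // placeholder topic
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    System.out.printf("%s partition=%d offset=%d%n",
                            Thread.currentThread().getName(), record.partition(), record.offset());
                }
            }
        }
    }
}

Running additional copies of this process with the same group id has the same effect as adding threads: Kafka rebalances the partitions across all live consumers in the group.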

If you were wondering whether Kafka is fast, how it achieves its renowned performance characteristics, or if it can scale to your use cases, you should hopefully by now have all the answers you need.

To make things abundantly clear, Kafka is not the fastest (that is, most throughput-capable) messaging middleware — there are other platforms capable of greater throughput — some are software-based and some are implemented in hardware. Nor is it the best throughput-latency compromise — Apache Pulsar is a promising technology that is scalable and achieves a better throughput-latency profile while offering identical ordering and durability guarantees. The rationale for adopting Kafka is that as a complete ecosystem, it remains unmatched overall. It exhibits excellent performance while offering an environment that is abundant and mature, but also involving — in spite of its size, Kafka is still growing at an enviable pace.

The designers and maintainers of Kafka have done an amazing job at devising a solution that is performance-oriented at its core. Few of its design elements feel like an afterthought or a bolt-on. From offloading of work to clients to the log-structured persistence on the broker, batching, compression, zero-copy I/O, and stream-level parallelism — Kafka throws down the gauntlet to just about any other message-oriented middleware, commercial or open-source. And most impressively, it does so without compromising on qualities such as durability, record order, and at-least-once delivery semantics.

Kafka is not the simplest of messaging platforms, and there is a fair bit to learn. One must come to grips with the concepts of a total and partial order, topics, partitions, consumers and consumer groups, before comfortably designing and building high-performance event-driven systems. And while the knowledge curve is substantial, the results will certainly be worth your while. If you are keen on taking the proverbial ‘red pill’, read the Introduction to Event Streaming with Kafka and Kafdrop.

Server-side vs Client-side Routing
26
Mar
2021

Server-side vs Client-side Routing

Almost every website or web-application uses routing. Discovering a website by changing its URL is a very powerful feature that comes standard with the web. How all of this is handled can vary a lot between different websites and web-applications.

All websites and web-applications, whether they use server-side or client-side routing, are accessed from a server. How a website or web-application responds to different URLs is commonly handled server-side, although with the rising popularity of JavaScript frameworks, other ways have been found to manage routing.

Routing

Routing is the mechanism by which requests are connected to some code. It is essentially the way you navigate through a website or web-application. When you click on a link, the URL changes, providing the user with some new data or a new webpage.

Server-side

When browsing, changing the URL can make a lot of things happen. Most commonly this happens by clicking on a link, which in turn requests a new page from the server. This is what we call a server-side route. A whole new document is served to the user.

A server-side request causes the whole page to refresh. This is because a new GET request is sent to the server, which responds with a new document, discarding the old page altogether.
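
To make this concrete, here is a bare-bones sketch of server-side routing using the JDK’s built-in HTTP server (the paths and responses are invented for illustration); every route returns a complete document, so each navigation replaces the page:

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class SimpleRouter {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        // Each route returns a whole HTML document; the browser replaces the page on navigation.
        server.createContext("/", exchange -> respond(exchange, "<html><body><h1>Home</h1></body></html>"));
        server.createContext("/about", exchange -> respond(exchange, "<html><body><h1>About</h1></body></html>"));
        server.start();
    }

    private static void respond(HttpExchange exchange, String html) throws IOException {
        byte[] body = html.getBytes(StandardCharsets.UTF_8);
        exchange.getResponseHeaders().set("Content-Type", "text/html; charset=utf-8");
        exchange.sendResponseHeaders(200, body.length);
        try (OutputStream out = exchange.getResponseBody()) {
            out.write(body);
        }
    }
}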

Pros

  • A server-side route will only request the data that’s needed. No more, no less.
  • Because server-side routing has been the standard for a long time, search engines are optimised for webpages that come from the server.

Cons

  • Every request results in a full-page refresh. That means that unnecessary data is being requested. The header and footer of a webpage often stay the same, and aren’t something you would want to request from the server again. 
  • It can take a while for the page to be rendered, although this is only the case when the document is very large or the internet connection is slow. 

Client-side

A client-side route happens when the route is handled internally by the JavaScript that is loaded on the page. When a user clicks on a link, the URL changes but the request to the server is prevented. The adjustment to the URL will result in a changed state of the application. The changed state will ultimately result in a different view of the webpage. This could be the rendering of a new component, or even a request to a server for some data that the application will turn into some HTML elements.

It is important to note that the whole page won’t refresh when using client-side routing. There are just some elements inside the application that will change.

Pros

  • Because less data is processed, routing between views is generally faster.
  • Smooth transitions and animations between views are easier to implement.

Cons

  • The whole website or web-application needs to be loaded on the first request. That’s why the initial loading time usually takes longer.
  • Because the whole website or web-application is loaded initially, there is a possibility that there is data downloaded for views you won’t even come across.
  • It requires more setup work or even a library. Because server-side is the standard, extra code must be written to make client-side routing possible.
  • Search engine crawling is less optimised. Google is making good progress on crawling single-page apps, but it isn’t nearly as efficient as crawling server-side routed websites. 

Summary

There is no best method to manage your routing. Server-side and client-side routing both have their advantages and weaknesses. It is important to make your decision based on the needs of your website or web-application, or heck, even combine the two.