
What is Auto-Configuration? How to Enable or Disable Certain Configuration?
30 Mar 2021

Spring Boot auto-configuration scans the classpath, finds the libraries on it, attempts to guess the best configuration for them, and finally configures all such beans.

Auto-configuration tries to be as intelligent as possible and backs away as we define more of our own custom configuration. It is always applied after user-defined beans have been registered.

Auto-configuration works with help of @Conditional annotations such as @ConditionalOnBean and @ConditionalOnClass.

For example, look at AopAutoConfiguration class.

If classpath scanning finds the EnableAspectJAutoProxy, Aspect, Advice, and AnnotatedElement classes, and spring.aop.auto=false is not present in the properties file, then Spring Boot will configure the Spring AOP module for us.

@Configuration
@ConditionalOnClass({ EnableAspectJAutoProxy.class, Aspect.class, Advice.class, AnnotatedElement.class })
@ConditionalOnProperty(prefix = "spring.aop", name = "auto", havingValue = "true", matchIfMissing = true)
public class AopAutoConfiguration {
    // auto-configuration code
}
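
To disable a specific auto-configuration, you can list it in the exclude attribute of @SpringBootApplication (or @EnableAutoConfiguration). Here is a minimal sketch, assuming you want to switch off the JDBC DataSource auto-configuration:

package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

// Excluding DataSourceAutoConfiguration stops Spring Boot from auto-configuring a DataSource bean
@SpringBootApplication(exclude = { DataSourceAutoConfiguration.class })
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

The same exclusion can alternatively be expressed in application.properties through the spring.autoconfigure.exclude property.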

How to Schedule Tasks with Spring Boot
06 Feb 2021

In this article, you’ll learn how to schedule tasks in Spring Boot using the @Scheduled annotation. You’ll also learn how to use a custom thread pool for executing all the scheduled tasks.

The @Scheduled annotation is added to a method along with some information about when to execute it, and Spring Boot takes care of the rest.

Spring Boot internally uses the TaskScheduler interface for scheduling the annotated methods for execution.
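
The article builds on the @Scheduled annotation, but if you ever need to schedule a task programmatically, you can work with a TaskScheduler implementation directly. A minimal sketch, assuming a ThreadPoolTaskScheduler that you create and manage yourself:

import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;
import org.springframework.scheduling.support.CronTrigger;

public class ProgrammaticSchedulingExample {
    public static void main(String[] args) {
        // Create and initialize the backing thread pool (defaults to a single thread)
        ThreadPoolTaskScheduler scheduler = new ThreadPoolTaskScheduler();
        scheduler.initialize();

        // Run a task every 2000 milliseconds
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("Fixed rate task at " + java.time.LocalDateTime.now()),
                2000L);

        // Or schedule a task with a cron trigger
        scheduler.schedule(
                () -> System.out.println("Cron task at " + java.time.LocalDateTime.now()),
                new CronTrigger("0 * * * * ?"));
    }
}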

The purpose of this article is to build a simple project demonstrating all the concepts related to task scheduling.

Create the Project

Let’s use Spring Boot CLI to create the Project. Fire up your terminal and type the following command to generate the project –

$ spring init --name=scheduler-demo scheduler-demo 

Alternatively, you can generate the project using the Spring Initializr web app. Just go to http://start.spring.io/, enter the Artifact value as “scheduler-demo”, and click Generate to generate and download the project.

Once the project is generated, import it into your favorite IDE. The project’s directory structure should look like this –

Spring Boot Scheduled Annotation Example Directory Structure

Enable Scheduling

You can enable scheduling simply by adding the @EnableScheduling annotation to the main application class or one of the Configuration classes.

Open SchedulerDemoApplication.java and add @EnableScheduling annotation like so –

package com.example.schedulerdemo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;

@SpringBootApplication
@EnableScheduling
public class SchedulerDemoApplication {

	public static void main(String[] args) {
		SpringApplication.run(SchedulerDemoApplication.class, args);
	}
}

Scheduling Tasks

Scheduling a task with Spring Boot is as simple as annotating a method with the @Scheduled annotation and providing a few parameters that determine when the task will run.

Before adding tasks, let’s first create the container for all the scheduled tasks. Create a new class called ScheduledTasks inside the com.example.schedulerdemo package with the following contents –

package com.example.schedulerdemo;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.concurrent.TimeUnit;

@Component
public class ScheduledTasks {
    private static final Logger logger = LoggerFactory.getLogger(ScheduledTasks.class);
    private static final DateTimeFormatter dateTimeFormatter = DateTimeFormatter.ofPattern("HH:mm:ss");

    public void scheduleTaskWithFixedRate() {}

    public void scheduleTaskWithFixedDelay() {}

    public void scheduleTaskWithInitialDelay() {}

    public void scheduleTaskWithCronExpression() {}
}

The class contains four empty methods. We’ll look at the implementation of all the methods one by one.

All the scheduled methods should meet the following two criteria –

  • The method should have a void return type.
  • The method should not accept any arguments.

Cool! Let’s now jump into the implementation.

1. Scheduling a Task with Fixed Rate

You can schedule a method to be executed at a fixed interval by using the fixedRate parameter in the @Scheduled annotation. In the following example, the annotated method will be executed every 2 seconds.

@Scheduled(fixedRate = 2000)
public void scheduleTaskWithFixedRate() {
    logger.info("Fixed Rate Task :: Execution Time - {}", dateTimeFormatter.format(LocalDateTime.now()) );
}
# Sample Output
Fixed Rate Task :: Execution Time - 10:26:58
Fixed Rate Task :: Execution Time - 10:27:00
Fixed Rate Task :: Execution Time - 10:27:02
....
....

The fixedRate task is invoked at the specified interval even if the previous invocation of the task is not finished.

2. Scheduling a Task with Fixed Delay

You can execute a task with a fixed delay between the completion of the last invocation and the start of the next, using the fixedDelay parameter.

The fixedDelay parameter counts the delay after the completion of the last invocation.

Consider the following example –

@Scheduled(fixedDelay = 2000)
public void scheduleTaskWithFixedDelay() {
    logger.info("Fixed Delay Task :: Execution Time - {}", dateTimeFormatter.format(LocalDateTime.now()));
    try {
        TimeUnit.SECONDS.sleep(5);
    } catch (InterruptedException ex) {
        logger.error("Ran into an error {}", ex);
        throw new IllegalStateException(ex);
    }
}

Since the task itself takes 5 seconds to complete and we have specified a delay of 2 seconds between the completion of the last invocation and the start of the next, there will be a delay of 7 seconds between each invocation –

# Sample Output
Fixed Delay Task :: Execution Time - 10:30:01
Fixed Delay Task :: Execution Time - 10:30:08
Fixed Delay Task :: Execution Time - 10:30:15
....
....

3. Scheduling a Task With Fixed Rate and Initial Delay

You can use the initialDelay parameter with fixedRate and fixedDelay to delay the first execution of the task by the specified number of milliseconds.

In the following example, the first execution of the task will be delayed by 5 seconds and then it will be executed normally at a fixed interval of 2 seconds –

@Scheduled(fixedRate = 2000, initialDelay = 5000)
public void scheduleTaskWithInitialDelay() {
    logger.info("Fixed Rate Task with Initial Delay :: Execution Time - {}", dateTimeFormatter.format(LocalDateTime.now()));
}
# Sample output (Server Started at 10:48:46)
Fixed Rate Task with Initial Delay :: Execution Time - 10:48:51
Fixed Rate Task with Initial Delay :: Execution Time - 10:48:53
Fixed Rate Task with Initial Delay :: Execution Time - 10:48:55
....
....

4. Scheduling a Task using Cron Expression

If the simple parameters above cannot fulfill your needs, then you can use cron expressions to schedule the execution of your tasks.

In the following example, I have scheduled the task to be executed every minute –

@Scheduled(cron = "0 * * * * ?")
public void scheduleTaskWithCronExpression() {
    logger.info("Cron Task :: Execution Time - {}", dateTimeFormatter.format(LocalDateTime.now()));
}
# Sample Output
Cron Task :: Execution Time - 11:03:00
Cron Task :: Execution Time - 11:04:00
Cron Task :: Execution Time - 11:05:00
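
The cron expression doesn’t have to be hard-coded; Spring resolves property placeholders in the cron attribute. A minimal sketch, assuming a property named cron.expression is defined in application.properties:

// application.properties (assumed): cron.expression=0 * * * * ?
@Scheduled(cron = "${cron.expression}")
public void scheduleTaskWithExternalizedCronExpression() {
    logger.info("Externalized Cron Task :: Execution Time - {}", dateTimeFormatter.format(LocalDateTime.now()));
}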

Running @Scheduled Tasks in a Custom Thread Pool

By default, all the @Scheduled tasks are executed in a default thread pool of size one created by Spring.

You can verify that by logging the name of the current thread in all the methods –

logger.info("Current Thread : {}", Thread.currentThread().getName());

All the methods will print the following –

Current Thread : pool-1-thread-1

But hey, you can create your own thread pool and configure Spring to use it for executing all the scheduled tasks.

Create a new package called config inside com.example.schedulerdemo, and then create a new class called SchedulerConfig inside the config package with the following contents –

package com.example.schedulerdemo.config;

import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.SchedulingConfigurer;
import org.springframework.scheduling.concurrent.ThreadPoolTaskScheduler;
import org.springframework.scheduling.config.ScheduledTaskRegistrar;

@Configuration
public class SchedulerConfig implements SchedulingConfigurer {
    private final int POOL_SIZE = 10;

    @Override
    public void configureTasks(ScheduledTaskRegistrar scheduledTaskRegistrar) {
        ThreadPoolTaskScheduler threadPoolTaskScheduler = new ThreadPoolTaskScheduler();

        threadPoolTaskScheduler.setPoolSize(POOL_SIZE);
        threadPoolTaskScheduler.setThreadNamePrefix("my-scheduled-task-pool-");
        threadPoolTaskScheduler.initialize();

        scheduledTaskRegistrar.setTaskScheduler(threadPoolTaskScheduler);
    }
}

That’s all you need to do for configuring Spring to use your own thread pool instead of the default one.

If you log the name of the current thread in the scheduled methods now, you’ll get the output like so –

Current Thread : my-scheduled-task-pool-1
Current Thread : my-scheduled-task-pool-2

# etc...

Conclusion

In this article, you learned how to schedule tasks in Spring Boot using @Scheduled annotation. You also learned how to use a custom thread pool for running these tasks.

The scheduling abstraction provided by Spring Boot works pretty well for simple use cases. But if you have more advanced use cases like persistent jobs, clustering, or dynamically adding and triggering new jobs, then check out the following article –

Spring Boot Quartz Scheduler Example: Building an Email Scheduling App

Thank you for reading. See you in the next Post!

Spring Boot Starter Parent Example
30 Mar 2021

In this Spring Boot tutorial, we will learn about the spring-boot-starter-parent dependency, which Spring Boot applications typically inherit from. We will also learn which configurations this dependency provides and how to override them.

What is spring-boot-starter-parent dependency?

The spring-boot-starter-parent dependency is the parent POM providing dependency and plugin management for Spring Boot-based applications. It contains the default version of Java to use, the default versions of the dependencies that Spring Boot uses, and the default configuration of the Maven plugins.

A few important configurations provided by this file are shown below. Please refer to this link to read the complete configuration.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-dependencies</artifactId>
        <version>${revision}</version>
        <relativePath>../../spring-boot-dependencies</relativePath>
    </parent>
    <artifactId>spring-boot-starter-parent</artifactId>
    <packaging>pom</packaging>
    <name>Spring Boot Starter Parent</name>
    <description>Parent pom providing dependency and plugin management for applications built with Maven</description>
    <properties>
        <java.version>1.8</java.version>
        <resource.delimiter>@</resource.delimiter> <!-- delimiter that doesn't clash with Spring ${} placeholders -->
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <maven.compiler.source>${java.version}</maven.compiler.source>
        <maven.compiler.target>${java.version}</maven.compiler.target>
    </properties>
    ...
    <resource>
        <directory>${basedir}/src/main/resources</directory>
        <filtering>true</filtering>
        <includes>
            <include>**/application*.yml</include>
            <include>**/application*.yaml</include>
            <include>**/application*.properties</include>
        </includes>
    </resource>
    ...
</project>

The spring-boot-starter-parent dependency further inherits from spring-boot-dependencies, which is declared in the parent section near the top of the above POM file.

spring-boot-dependencies is the file that actually contains the default versions to use for all libraries. The following code shows the different versions of various dependencies that are configured in spring-boot-dependencies:

<properties>
    <!-- Dependency versions -->
    <activemq.version>5.15.3</activemq.version>
    <antlr2.version>2.7.7</antlr2.version>
    <appengine-sdk.version>1.9.63</appengine-sdk.version>
    <artemis.version>2.4.0</artemis.version>
    <aspectj.version>1.8.13</aspectj.version>
    <assertj.version>3.9.1</assertj.version>
    <atomikos.version>4.0.6</atomikos.version>
    <bitronix.version>2.1.4</bitronix.version>
    <byte-buddy.version>1.7.11</byte-buddy.version>
    <caffeine.version>2.6.2</caffeine.version>
    <cassandra-driver.version>3.4.0</cassandra-driver.version>
    <classmate.version>1.3.4</classmate.version>
    ...
</properties>

The above list is very long, and you can read the complete list at this link.

How to override default dependency version?

As you can see, Spring Boot defines a default version for most dependencies. You can override the version your project needs in the properties tag of your project’s pom.xml file.

For example, Spring Boot uses 2.8.2 as the default version of the Google Gson library.

<groovy.version>2.4.14</groovy.version>
<gson.version>2.8.2</gson.version>
<h2.version>1.4.197</h2.version>

Say I want to use version 2.7 of the gson dependency. I will declare it in the properties tag like this.

<properties>
    <java.version>1.8</java.version>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <gson.version>2.7</gson.version>
</properties>

Now, in your eclipse editor, you will see the message: “The managed version is 2.7. The artifact is managed in org.springframework.boot:spring-boot-dependencies:2.0.0.RELEASE.”

GSON resolved dependency

Drop me your questions in comments section.

Happy Learning !!

Java Add/subtract years, months, days, hours, minutes, or seconds to a Date & Time
03 Mar 2021

In this article, you’ll find several ways of adding or subtracting years, months, days, hours, minutes, or seconds to a Date in Java.

Add/Subtract years, months, days, hours, minutes, seconds to a Date using Calendar class

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;

public class CalendarAddSubtractExample {
    public static void main(String[] args) {
        SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        Date date = new Date();
        System.out.println("Current Date " + dateFormat.format(date));

        // Convert Date to Calendar
        Calendar c = Calendar.getInstance();
        c.setTime(date);

        // Perform addition/subtraction
        c.add(Calendar.YEAR, 2);
        c.add(Calendar.MONTH, 1);
        c.add(Calendar.DATE, -10);
        c.add(Calendar.HOUR, -4);
        c.add(Calendar.MINUTE, 30);
        c.add(Calendar.SECOND, 50);

        // Convert calendar back to Date
        Date currentDatePlusOne = c.getTime();

        System.out.println("Updated Date " + dateFormat.format(currentDatePlusOne));
    }
}
# Output
Current Date 2020-02-24 19:35:00
Updated Date 2022-03-14 16:05:50

Add/Subtract years, months, weeks, days, hours, minutes, seconds to LocalDateTime

import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;

public class LocalDateTimeAddSubtractExample {
    public static void main(String[] args) {
        LocalDateTime dateTime = LocalDateTime.of(2020, 1, 26, 10, 30, 45);

        // Add years, months, weeks, days, hours, minutes, seconds to LocalDateTime
        LocalDateTime newDateTime = dateTime.plusYears(3).plusMonths(1).plusWeeks(4).plusDays(2).plusHours(2);
        System.out.println(newDateTime);


        // Subtract years, months, weeks, days, hours, minutes, seconds to LocalDateTime
        newDateTime = dateTime.minusYears(3).minusMonths(1).minusWeeks(4).minusDays(2).minusMinutes(30);
        System.out.println(newDateTime);

        // Add/Subtract using the generic plus/minus method
        System.out.println(dateTime.plus(2, ChronoUnit.DAYS));

    }
}
# Output
2023-03-28T12:30:45
2016-11-26T10:00:45
2020-01-28T10:30:45

Add/Subtract days, weeks, months, years to LocalDate

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;

public class LocalDateAddSubtractExample {
    public static void main(String[] args) {
        LocalDate date = LocalDate.of(2020, 1, 26);

        // Add years, months, weeks, days to LocalDate
        LocalDate newDate = date.plusYears(3).plusMonths(1).plusWeeks(4).plusDays(2);
        System.out.println(newDate);


        // Subtract years, months, weeks, days to LocalDate
        newDate = date.minusYears(3).minusMonths(1).minusWeeks(4).minusDays(2);
        System.out.println(newDate);

        // Add/Subtract using the generic plus/minus method
        System.out.println(date.plus(2, ChronoUnit.DAYS));

    }
}
# Output
2023-03-28
2016-11-26
2020-01-28

Add/Subtract hours, minutes, seconds to LocalTime

import java.time.LocalTime;
import java.time.temporal.ChronoUnit;

public class LocalTimeAddSubtractExample {
    public static void main(String[] args) {
        LocalTime time = LocalTime.of(11, 45, 20);

        // Add hours, minutes, or seconds
        LocalTime newTime = time.plusHours(2).plusMinutes(30).plusSeconds(80);
        System.out.println(newTime);

        // Subtract hours, minutes, or seconds
        LocalTime updatedTime = time.minusHours(2).minusMinutes(30).minusSeconds(30);
        System.out.println(updatedTime);

        // Add/Subtract using the generic plus/minus method
        System.out.println(time.plus(1, ChronoUnit.HOURS));

    }
}
# Output
14:16:40
09:14:50
12:45:20

Client-side vs. Server-side vs. Pre-rendering for Web Apps
26 Mar 2021

There is something going on within the front-end community recently. Server-side rendering is getting more and more traction thanks to React and its built-in server-side hydration feature. But it’s not the only solution to deliver a fast experience to the user with a super fast time-to-first-byte (TTFB) score: Pre-rendering is also a pretty good strategy. What’s the difference between these solutions and a fully client-rendered application?

Client-rendered Application

Ever since frameworks like Angular, Ember.js, and Backbone came along, front-end developers have tended to render everything client-side. Thanks to Google and its ability to “read” JavaScript, it works pretty well, and it’s even SEO friendly.

With a client-side rendering solution, you redirect every request to a single HTML file, and the server delivers it without any content (or with a loading screen) until you fetch all the JavaScript and let the browser compile everything before rendering the content.

Under a good and reliable internet connection, it’s pretty fast and works well. But it can be a lot better, and it doesn’t have to be difficult to make it that way. That’s what we will see in the following sections.

Server-side Rendering (SSR)

An SSR solution is something we used to do a lot, many years ago, but tend to forget in favor of a client-side rendering solution.

With old server-side rendering solutions, you built a web page—with PHP for example—the server compiled everything, included the data, and delivered a fully populated HTML page to the client. It was fast and effective.

But… every time you navigated to another route, the server had to do the work all over again: Get the PHP file, compile it, and deliver the HTML, with all the CSS and JS delaying the page load to a few hundred ms or even whole seconds.

What if you could do the first page load with the SSR solution, and then use a framework to do dynamic routing with AJAX, fetching only the necessary data?

This is why SSR is getting more and more traction within the community: React popularized a solution to this problem with an easy-to-use method, renderToString.

This new kind of web application is called a universal app or an isomorphic app. There’s still some controversy over the exact meanings of these terms and the relationship between them, but many people use them interchangeably.

Anyway, the advantage of this solution is being able to develop an app server-side and client-side with the same code and deliver a really fast experience to the user with custom data. The disadvantage is that you need to run a server.

SSR is used to fetch data and pre-populate a page with custom content, leveraging the server’s reliable internet connection. That is, the server’s own internet connection is better than that of a user with lie-fi, so it’s able to prefetch and amalgamate data before delivering it to the user.

With the pre-populated data, using an SSR app can also fix an issue that client-rendered apps have with social sharing and the OpenGraph system. For example, if you have only one index.html file to deliver to the client, they will only have one type of metadata—most likely your homepage metadata. This won’t be contextualized when you want to share a different route, so none of your routes will be shown on other sites with their proper user content (description and preview picture) that users would want to share with the world.

Pre-rendering

The mandatory server for a universal app can be a deterrent for some and may be overkill for a small application. This is why pre-rendering can be a really nice alternative.

I discovered this solution with Preact and its own CLI that allows you to compile all pre-selected routes so you can store a fully populated HTML file to a static server. This lets you deliver a super-fast experience to the user, thanks to the Preact/React hydration function, without the need for Node.js.

The catch is, because this isn’t SSR, you don’t have user-specific data to show at this point—it’s just a static (and somewhat generic) file sent directly on the first request, as-is. So if you have user-specific data, here is where you can integrate a beautifully designed skeleton to show the user their data is coming, to avoid some frustration on their part:

Using a document skeleton as part of a loading indicator

There is another catch: In order for this technique to work, you still need to have a proxy or something to redirect the user to the right file.

Why?

With a single-page application, you need to redirect all requests to the root file, and then the framework redirects the user with its built-in routing system. So the first page load is always the same root file.

In order for a pre-rendering solution to work, you need to tell your proxy that some routes need specific files and not always the root index.html file.

For example, say you have four routes (/, /about, /jobs, and /blog) and all of them have different layouts. You need four different HTML files to deliver the skeleton to the user that will then let React/Preact/etc. rehydrate it with data. So if you redirect all those routes to the root index.html file, the page will have an unpleasant, glitchy feel during loading, whereby the user will see the skeleton of the wrong page until it finishes loading and replaces the layout. For example, the user might see a homepage skeleton with only one column, when they had asked for a different page with a Pinterest-like gallery.

The solution is to tell your proxy that each of those four routes needs a specific file:

  • https://my-website.com → Redirect to the root index.html file
  • https://my-website.com/about → Redirect to the /about/index.html file
  • https://my-website.com/jobs → Redirect to the /jobs/index.html file
  • https://my-website.com/blog → Redirect to the /blog/index.html file

This is why this solution can be useful for small applications—you can see how painful it would be if you had a few hundred pages.

Strictly speaking, it’s not mandatory to do it this way—you could just use a static file directly. For example, https://my-website.com/about/ will work without any redirection because it will automatically search for an index.html inside its directory. But you need this proxy if you have param urls—https://my-website.com/profile/guillaume will need to redirect the request to /profile/index.html with its own layout, because profile/guillaume/index.html doesn’t exist and will trigger a 404 error.

A flowchart showing how a proxy makes a difference in a pre-rendering solution, as described in the previous paragraph

In short, there are three basic views at play with the rendering strategies described above: A loading screen, a skeleton, and the full page once it’s finally rendered.

Comparing a loading screen, a skeleton, and a fully-rendered page

Depending on the strategy, sometimes we use all three of these views, and sometimes we jump straight to a fully-rendered page. Only in one use case are we forced to use a different approach:

Method                  | Landing (e.g. /)  | Static (e.g. /about) | Fixed Dynamic (e.g. /news) | Parameterized Dynamic (e.g. /users/:user-id)
Client-rendered         | Loading → Full    | Loading → Full       | Loading → Skeleton → Full  | Loading → Skeleton → Full
Pre-rendered            | Full              | Full                 | Skeleton → Full            | HTTP 404 (page not found)
Pre-rendered With Proxy | Full              | Full                 | Skeleton → Full            | Skeleton → Full
SSR                     | Full              | Full                 | Full                       | Full

Client-only Rendering is Often Not Enough

Client-rendered applications are something we should avoid now because we can do better for the user. And doing better, in this case, is as easy as the pre-rendering solution. It’s definitely an improvement over client-only rendering and easier to implement than a fully server-side-rendered application.

An SSR/universal application can be really powerful if you have a large application with a lot of different pages. It allows your content to be focused and relevant when talking to a social crawler. This is also true for search engine robots, which now take your site’s performance into account when ranking it.

Stay tuned for a follow-up tutorial, where I will walk through the transformation of an SPA into pre-rendered and SSR versions, and compare their performance.

Stop this Microservices Madness
26 Mar 2021

I’m a big fan of Microservices.
Does it sound weird, given the title? Well, it should not.

Every now and then, a new paradigm or technology arises. It brings hope and hype. This will save us all!
Yeah, it’s true. Maybe it is ground-breaking. Extremely beneficial. However, like everything in life, every solution has its scope.

Have you ever tried to apply Newtonian laws to subatomic particles? Or, have you ever tried to squeeze a television into a sock? Pretty much the same here.
Hype makes you forget this little clause at the end of the contract. And results are catastrophic.

Proper usage of socks and televisions. Photo by Mollie Sivaram on Unsplash

Splitting is the way

In a normal web application, you have a single process serving all requests. Well, you can replicate it for balancing reasons, but in any case there’s a single package/software you have to run. And it bundles all the logic together.

When you use Microservices, instead, you have several, distinct processes, each one serving just a subset of requests. Each process runs a distinct package, with just the code it needs.

Is it really new?

No, it isn’t.

The Microservice pattern is nothing new, actually, like almost every emergent design (besides, it has been around for some years now).
It is Object-Oriented Programming applied on a higher level: application architecture.

No coincidence that it often makes use of HTTP APIs, particularly RESTful, which are in turn OOP principles applied to API design.
P.S. I hate acronyms.

Let me explain.

When you create a program using Object-Oriented Programming, you split it into several pieces, Classes. Each Class represents a component of the solution and has a specific role. It also works just on its own subset of data. Does it sound familiar?

This is not the right place to discuss Object-Oriented Programming in general, but I have to point out some aspects. If you are not familiar with it, you can reference online documentation and, soon, my other article about it!

The Two Principles — The Good Tailor

When you design Classes, you must follow two main principles: loose coupling and high cohesion.

To be honest, these should be the guiding stars of each software project, not just OOP ones. But that’s another story. Or is it?

High cohesion is the situation where each component is strongly consistent. All its data and methods refer to a very specific area, or meaning, or role.

Loose coupling is the situation where each component has the least possible amount of interactions with other components. It can work on its own, hiding low-level details, and exposes the essential interfaces.
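
As a minimal Java sketch of the two principles (the class names are mine, purely illustrative): OrderService is cohesive because everything in it is about orders, and it is loosely coupled because it talks to storage only through a small interface.

// Loose coupling: OrderService depends on a narrow interface, not on a concrete storage technology.
interface OrderRepository {
    void save(String orderId, double amount);
}

// One possible implementation; it can be swapped without touching OrderService.
class InMemoryOrderRepository implements OrderRepository {
    private final java.util.Map<String, Double> store = new java.util.HashMap<>();

    @Override
    public void save(String orderId, double amount) {
        store.put(orderId, amount);
    }
}

// High cohesion: all data and methods here are about placing orders, nothing else.
class OrderService {
    private final OrderRepository repository;

    OrderService(OrderRepository repository) {
        this.repository = repository;
    }

    void placeOrder(String orderId, double amount) {
        repository.save(orderId, amount);
    }
}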

For Microservices it is the same.
I can’t stress this enough. A good OOP design leads to an optimal Microservice design. There is no middle ground.

This should be almost enough to answer why and when to use Microservices. But since there is more, let’s break it down.

Be a Good Tailor. Loosely sew highly-entangled (coherent) fabrics. Photo by Jeff Wade on Unsplash

When to split?

This is exactly as it is for Object-Oriented Programming.

When you can respect high cohesion and loose coupling.

No differently.

You know how to split code into Classes in the right way. Why would you split your Microservices randomly?
If a bad Class diagram is painful to maintain and manage because of the sparsity of code across different files, think of a randomly split Microservices application, where you have different processes altogether.

But why on Earth would I split my application into sub-processes?

Imagine how easy it is to make two Objects interact. You just have to call an Object’s method and you are good to go. All within a single program.

Microservices, instead, live in different processes. Possibly, on different machines. You have to communicate over a network, using APIs.

It adds complexity!
You must have excellent reasons to do so.
Not for trend, not for fun. Because, I assure you, if you use Microservices randomly, their management is not fun at all.

Actually, you must have the same reasons you’d have when you decide to use APIs. It’s the same.

APIs let you hide all the implementation details behind them. That’s essential in some cases, and it offers incredible benefits.

1. Scalability for non-uniform traffic

Scenario: you have a Supermarket application.
You have a Stock Microservice, that just shows you the amounts of articles in stock, and a Vision Microservice, which uses GPUs to detect your articles given a picture of them.

If you receive thousands of calls for your Stock APIs and just a bunch of requests for your Vision APIs, you could just replicate your Stock Microservice.

No need to double your GPU resources!

Think of it. If you kept everything in a single machine or application, and you still wanted to scale to match all the incoming requests, you would have to replicate it entirely. A waste.

2. Error resilience

Suppose you have a Bank application. Your Transfer Microservice keeps crashing, maybe for a temporary error or, worse, for a newly introduced bug (damn untested releases…).

Why should the entire Bank application be down? Maybe some customers wouldn’t even notice, because they just want to see their movements or they want to use their cards.

3. Separate deployments

Similarly to the reason above, if you have a new feature to add or a bug to fix, you can just deploy the right Microservice. Your build and deployment times should be smaller.

Also, you could put your Microservices on different machines, in different locations, maybe.

4. Complete isolation

Maybe you need complete isolation between Microservices. In this way, you can use different DBs (SQL and noSQL) or different technologies altogether. This will give you complete freedom!
And it is of course strictly related to the other points as well, about error resilience or requirements.

5. Different requirements

Do you have a heavy computation to perform? Maybe a Machine Learning one, where you need GPUs, their libraries…
Maybe that part would be much better in Python, while the other one could benefit from Java.
Or maybe libraries for the same language would create some conflicts, but you desperately need two specific versions for two distinct tasks.
And finally, perhaps you want to use two different Cloud products to host your two distinct pieces.

There are plenty of reasons, some more common than others.

In those cases, split the application!
Probably it’s easier. But remember to respect the Two Principles! Be a Good Tailor, or you would have hard times sharing data between your Microservices…

Even the highest of Monoliths is made by blocks. It might be atoms, but it is. Photo by Zoltan Tasi on Unsplash

Is it not your case? Go for a Monolith.

If your system is tightly coupled, use a single application. A Monolith.

You’ll have cleaner dependency management, and you won’t need complex orchestrations or distributed systems to trace errors, share data, collect logs, sync calls, secure network interactions… and I can go on.

Is Object-Oriented Programming not suitable for your application architecture? Or, better, is RESTful design not suitable? Microservices won’t be either!

If your application serves a long sequence of API calls, maybe because an API requires the result of the previous ones, this is not ideal. It would also introduce network delays.
If your application is a single procedure that requires user interaction, well, the sentence is self-explaining. It is a Procedure!
You have to share a state between calls, each component is not independent and a single error in the sequence will block it entirely. There is hardly a benefit in splitting the application.

And remember, Monolith doesn’t mean Mess

Monolith.

Mess.

They are also spelled differently.

Someone sees Monolith as a bunch of unordered code.
They claim that Microservice pattern is the only way to have splits, order and all.

Well, I’ll reveal something.

It’s — not — true.

Your code design does not depend on process divisions. Object-Oriented Programming doesn’t either. If you split your code right, if you manage your dependencies in a clear hierarchy, as well as your files, you can achieve the same level of cohesion and decoupling.

And another secret: if you divide your APIs into separate packages/modules/controllers/blueprints/whatever in your code, then you can decide to serve them in a single process or split them in Microservices at build-time, or at run-time too, for free.

Wow.

Conclusions

I’m a big fan of Microservices, as I said.

If used in the right context, for the right reasons, they solve issues and improve performances or costs.

But please, stop making them the one and only solution for all problems.
Because they can also be the root of all evil.

Server-side vs Client-side Routing
26 Mar 2021

Almost every website or web-application uses routing. Discovering a website by changing its URL is a very powerful feature that comes standard with the web. How all of this is handled can vary a lot between different websites and web-applications.

All websites and web-applications, whether they use server-side or client-side routing, are accessed from a server. How a website or web-application responds to different URLs is commonly handled server-side, although with the rising popularity of JavaScript frameworks, other ways have been found to manage routing.

Routing

Routing is the mechanism by which requests are connected to some code. It is essentially the way you navigate through a website or web-application. By clicking on a link, the URL changes which provides the user with some new data or a new webpage.

Server-side

When browsing, the adjustment of a URL can make a lot of things happen. This will happen regularly by clicking on a link, which in turn will request a new page from the server. This is what we call a server-side route. A whole new document is served to the user.

A server-side request causes the whole page to refresh. This is because a new GET request is sent to the server which responds with a new document, completely discarding the old page altogether.

Pros

  • A server-side route will only request the data that’s needed. No more, no less.
  • Because server-side routing has been the standard for a long time, search engines are optimised for webpages that come from the server.

Cons

  • Every request results in a full-page refresh. That means that unnecessary data is being requested. The header and footer of a webpage often stay the same, and this isn’t something you would want to request from the server again.
  • It can take a while for the page to be rendered. However, this is only the case when the document to be rendered is very large or when you have slow internet speed.

Client-side

A client-side route happens when the route is handled internally by the JavaScript that is loaded on the page. When a user clicks on a link, the URL changes but the request to the server is prevented. The adjustment to the URL will result in a changed state of the application. The changed state will ultimately result in a different view of the webpage. This could be the rendering of a new component, or even a request to a server for some data that the application will turn into some HTML elements.

It is important to note that the whole page won’t refresh when using client-side routing. There are just some elements inside the application that will change.

Pros

  • Because less data is processed, routing between views is generally faster.
  • Smooth transitions and animations between views are easier to implement.

Cons

  • The whole website or web-application needs to be loaded on the first request. That’s why the initial loading time usually takes longer.
  • Because the whole website or web-application is loaded initially, there is a possibility that there is data downloaded for views you won’t even come across.
  • It requires more setup work or even a library. Because server-side is the standard, extra code must be written to make client-side routing possible.
  • Search engine crawling is less optimised. Google is making good progress on crawling single-page apps, but it isn’t nearly as efficient as with server-side routed websites.

Summary

There is no best method to manage your routing. Server-side and client-side routing both have their advantages and weaknesses. It is important to make your decision based on the needs of your website or web-application, or heck, even combine the two.

Create Maven Web Project In Eclipse
30 Mar 2021

Learn to create a maven web project in eclipse which we can then import into the eclipse IDE for further development.

To create an eclipse-supported web project, we will first create a normal maven web application and then make it compatible with the eclipse IDE.

1. Create maven web project in eclipse

Run this maven command to create a maven web project named ‘demoWebApplication‘. The maven archetype used is ‘maven-archetype-webapp‘.

$ mvn archetype:generate -DgroupId=com.howtodoinjava -DartifactId=demoWebApplication -DarchetypeArtifactId=maven-archetype-webapp -DinteractiveMode=false

This will create the maven web project structure and web-application-specific files like web.xml.


2. Convert to eclipse dynamic web project

To convert the created maven web project to an eclipse dynamic web project, run the following maven command.

$ mvn eclipse:eclipse -Dwtpversion=2.0

Please remember that adding “-Dwtpversion=2.0” is necessary; otherwise, using only “mvn eclipse:eclipse” will convert it to a normal Java project (without web support), and you will not be able to run it as a web application.

3. Import web project in Eclipse

  1. Click on the File menu and click on the Import option.
  2. Now, click on “Existing project..” in the General section.
  3. Now, browse to the project root folder, click OK, and then Finish.
  4. The above steps will import the project into the eclipse workspace. You can verify the project structure like this.

In this maven tutorial, we learned how to create a maven dynamic web project in eclipse. In this example, I used eclipse oxygen; you may have a different eclipse version, but the steps to follow will be the same.

Happy Learning !!

Introduction to Event Streaming with Kafka and Kafdrop
26 Mar 2021

Event sourcing, eventual consistency, microservices, CQRS… These are quickly becoming household names in mainstream application development. But do you know what makes them tick? What are the basic building blocks required to assemble complex, business-centric applications from fine-grained services without turning the lot into a big ball of mud?

This article examines a fundamental building block — event streaming. Leading the charge will be Apache Kafka — the de facto standard in event streaming platforms, which we’ll observe through Kafdrop — a feature-packed web UI.

A Brief Intro

Event streaming platforms reside in the broader class of Message-oriented Middleware (MoM) and are similar to traditional message queues and topics but offer stronger temporal guarantees and typically order-of-magnitude performance gains due to log-structured immutability. In simple terms, write operations are mostly limited to sequential appends, which make them fast. Really fast.

Whereas messages in a traditional Message Queue (MQ) tend to be arbitrarily ordered and generally independent of one another, events (or records) in a stream tend to be strongly ordered, often chronologically or causally. Also, a stream persists its records, whereas an MQ will discard a message as soon as it has been read.

For this reason, event streaming tends to be a better fit for Event-Driven Architectures, encompassing event sourcing, eventual consistency, and CQRS concepts. (Of course, there are FIFO message queues too, but the differences between FIFO queues and fully-fledged event streaming platforms are quite substantial, not limited to ordering alone.)

Event streaming platforms are a comparatively recent paradigm within the broader MoM domain. There are only a handful of mainstream implementations available, compared to hundreds of MQ-style brokers, some going back to the 1980s (e.g. Tuxedo). Compared to established standards such as AMQP, MQTT, XMPP, and JMS, there are no equivalent standards in the streaming space.

Event streaming platforms are an active area of continuous research and experimentation. In spite of this, streaming platforms aren’t just a niche concept or an academic idea with a few esoteric use cases; they can be applied effectively to a broad range of messaging and eventing scenarios, routinely displacing their more traditional counterparts.


Architecture Overview

The diagram below offers a brief overview of the Kafka component architecture. While the intention isn’t to indoctrinate you with all of Kafka’s inner workings, some appreciation of its design will go a long way in explaining the key concepts that we will cover shortly.

Kafka Architecture Overview

Kafka is a distributed system comprising several key components:

  • Broker nodes: Responsible for the bulk of I/O operations and durable persistence within the cluster. Brokers accommodate the append-only log files that comprise the topic partitions hosted by the cluster. Partitions can be replicated across multiple brokers for both horizontal scalability and increased durability — these are called replicas. A broker node may act as the leader for certain replicas, while being a follower for others. A single broker node will also be elected as the cluster controller — responsible for the internal management of partition states. This includes the arbitration of the leader-follower roles for any given partition.
  • ZooKeeper nodes: Under the hood, Kafka needs a way to manage the overall controller status in the cluster. Should the controller drop out for whatever reason, there is a protocol in place to elect another controller from the set of remaining brokers. The actual mechanics of controller election, heart-beating, and so forth, are largely implemented in ZooKeeper. ZooKeeper also acts as a configuration repository of sorts, maintaining cluster metadata, leader-follower states, quotas, user information, ACLs, and other housekeeping items. Due to the underlying gossiping and consensus protocol, the number of ZooKeeper nodes must be odd.
  • Producers: These are client applications responsible for appending records to Kafka topics. Because of the log-structured nature of Kafka and the ability to share topics across multiple consumer ecosystems, only producers are able to modify the data in the underlying log files. The actual I/O is performed by the broker nodes on behalf of the producer clients. Any number of producers may publish to the same topic, selecting the partitions used to persist the records.
  • Consumers: These are client applications that read from topics. Any number of consumers may read from the same topic; however, depending on the configuration and grouping of consumers, there are rules governing the distribution of records among the consumers.

Topics, Partitions, Records, and Offsets

A partition is a totally ordered sequence of records and is fundamental to Kafka. A record has an ID — a 64-bit integer offset and a millisecond-precise timestamp. Also, it may have a key and a value; both are byte arrays and both are optional. The term “totally ordered” simply means that for any given producer, records will be written in the order they were emitted by the application. If record P was published before Q, then P will precede Q in the partition. (Assuming P and Q share a partition.)

Furthermore, they will be read in the same order by all consumers; P will always be read before Q, for every possible consumer. This ordering guarantee is vital in a large majority of use cases. Published records will generally correspond to some real-life events, and preserving the timeline of these events is often essential.

Note: Kafka uses the term “record,” where others might use “message” or “event.” In this article, we will use the terms interchangeably, depending on the context. Similarly, you might see the term “stream” as a generic substitute for “topic.”

There is no recognized ordering across producers. If two (or more) producers emit records simultaneously, those records may materialize in arbitrary order. That said, this order will still be observed uniformly across all consumers.

A record’s offset uniquely identifies it in the partition. The offset is a strictly monotonically-increasing integer in a sparse address space, meaning that each successive offset is always higher than its predecessor and there may be varying gaps between neighboring offsets. Gaps might legitimately appear if compaction is enabled or as a result of transactions; we don’t need to delve into the details at this stage. Suffice it to say that offsets need not be contiguous.

Your application shouldn’t attempt to literally interpret an offset or guess what the next offset might be. It may, however, infer the relative order of any record pair based on their offsets, sort the records by their offset, and so forth.

The diagram below shows what a partition looks like on the inside.

     start of partition
+--------+-----------------+
|0..00000|First record     |
+--------+-----------------+
|0..00001|Second record    |
+--------+-----------------+
|0..00002|Third record     |
+--------+-----------------+
|0..00003|Fourth record    |
+--------+-----------------+
|0..00007|Fifth record     |
+--------+-----------------+
|0..00008|Sixth record     |
+--------+-----------------+
|0..00010|Seventh record   |
+--------+-----------------+
            ...
+--------+-----------------+
|0..56789|Last record      |
+--------+-----------------+
       end of partition

The beginning offset, also called the low-water mark, is the first message that will be presented to a consumer. Due to Kafka’s bounded retention, this is not necessarily the first message that was published. Records may be pruned on the basis of time and/or partition size. When this occurs, the low-water mark will appear to advance, and records earlier than the low-water mark will be truncated.

Conversely, the high-water mark is the offset immediately following the last record in the partition, also known as the end offset. It is the offset that will be assigned to the next record that will be published. It is not the offset of the last record.

A topic is a logical composition of partitions. A topic may have one or more partitions, and a partition must be a part of exactly one topic. Topics are fundamental to Kafka, allowing for both parallelism and load balancing.

Earlier, we said that partitions exhibit total order. Because partitions within a topic are mutually independent, the topic is said to exhibit partial order. In simple terms, this means that certain records may be ordered in relation to one another while being unordered with respect to certain other records. The concepts of total and partial order, while sounding somewhat academic, are hugely important in the construction of performant event streaming pipelines. They enable us to process records in parallel where we can, while maintaining order where we must. We’ll explore the concepts of record order, consumer parallelism, and topic sizing in a short while.

Example: Publishing Messages

Let’s put some of this theory into practice. We are going to spin up a pair of Docker containers — one for Kafka and another for Kafdrop. Rather than launching them individually, we’ll use Docker Compose.

Create a docker-compose.yaml file in a directory of your choice, containing the following:

version: "2"
services:
  kafdrop:
    image: obsidiandynamics/kafdrop
    restart: "no"
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: "kafka:29092"
    depends_on:
      - "kafka"
  kafka:
    image: obsidiandynamics/kafka
    restart: "no"
    ports:
      - "2181:2181"
      - "9092:9092"
    environment:
      KAFKA_LISTENERS: "INTERNAL://:29092,EXTERNAL://:9092"
      KAFKA_ADVERTISED_LISTENERS: "INTERNAL://kafka:29092,EXTERNAL://localhost:9092"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"
      KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"

Note: We’re using the obsidiandynamics/kafka image for convenience because it neatly bundles Kafka and ZooKeeper into a single image. If you wanted to, you could replace this with images from Confluent or Wurstmeister, but then you’d have to wire it all up properly. The obsidiandynamics/kafka image does all this for you, so it’s highly recommended for beginners (and lazy pros).

Then, start it with docker-compose up. Once it boots, navigate to localhost:9000 in your browser. You should see the Kafdrop landing screen.

Kafdrop landing page

You should see our single-broker cluster. It’s a promising start, but there are no topics. Not a problem; let’s create a topic and publish some messages using Kafka’s command-line tools. Conveniently, we already have a Kafka image running as part of our docker-compose stack, so we can shell into it to use the built-in CLI tools.

docker exec -it kafka-kafdrop_kafka_1 bash

This gets you into a Bash shell. The tools are in the /opt/kafka/bin directory, so let’s cd into it:

cd /opt/kafka/bin

Create a topic named streams-intro with three partitions:

./kafka-topics.sh --bootstrap-server localhost:9092 \
    --create --partitions 3 --replication-factor 1 \
    --topic streams-intro

Switching back to Kafdrop, we should now see the new topic in the list.

Kafdrop topics list

Time to publish stuff. We are going to use the kafka-console-producer tool:

./kafka-console-producer.sh --broker-list localhost:9092 \
    --topic streams-intro --property "parse.key=true" \
    --property "key.separator=:"

Note: kafka-topics uses the --bootstrap-server argument to configure the Kafka broker list, while kafka-console-producer uses the --broker-list argument for the same purpose. Also, --property arguments are largely undocumented; be prepared to Google your way around.

Records are separated by newlines. The key and the value parts are delimited by colons, as indicated by the key.separator property. For the sake of an example, type in the following (a copy-paste will do):

foo:first message
foo:second message
bar:first message
foo:third message
bar:second message

Press CTRL+D when done. Then, switch back to Kafdrop and click on the streams-intro topic. You’ll see an overview of the topic, along with a detailed breakdown of the underlying partitions:

Kafdrop topic overview

Let’s pause for a moment and dissect what’s been done. We created a topic with three partitions. We then published five records using two unique keys — foo and bar. Kafka uses keys to map records to partitions, such that all records with the same key will always appear on the same partition. Handy, but also important because it lets the publisher dictate the precise order of records. We’ll discuss key hashing and partition assignments in more detail later; in the meanwhile, sit back and enjoy the ride.
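
For completeness, here is a rough Java equivalent of the console-producer session above, using the standard Kafka client library (the topic name matches our example; the rest is a plain-vanilla producer setup):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SampleProducer {
    public static void main(String[] args) {
        Properties config = new Properties();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(config)) {
            // Records sharing a key ("foo" or "bar") will land on the same partition
            producer.send(new ProducerRecord<>("streams-intro", "foo", "first message"));
            producer.send(new ProducerRecord<>("streams-intro", "bar", "first message"));
            producer.send(new ProducerRecord<>("streams-intro", "foo", "second message"));
        }
    }
}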

Looking at the partitions table, partition #0 has the first and last offsets at zero and two, respectively. Partition #2 has them at zero and three, while partition #1 appears to be blank. Clicking on #0 in the Kafdrop web UI sends us to a topic viewer:

Kafdrop topic viewer

We can see the two records published under the bar key. Note, they are completely unrelated to the foo records. Other than being collated within the same topic, there is nothing that binds records across partitions.

Note: In case you were wondering, the arrow to the left of the message lets you expand and pretty-print JSON-encoded messages. As our examples didn’t use JSON, there’s nothing to pretty-print.

It can be said without exaggeration that Kafka’s built-in tooling is an abomination. There is no consistency in the naming of command arguments and the simple act of publishing keyed messages requires you to jump through hoops — passing in obscure, undocumented properties. The usability of the built-in tools is a well-known heartache within the Kafka community. This is a real shame. It’s like buying a Ferrari, only to have it delivered with plastic hub caps. Fortunately, there are alternatives — both commercial and open source — that can fill the glaring gaps in tooling and observability.

Consumers and Consumer Groups

So far we have learned that producers emit records into the stream; these records are organized into nicely ordered partitions. Kafka’s pub-sub topology adheres to a flexible multipoint-to-multipoint model, meaning that there may be any number of producers and consumers simultaneously interacting with a stream. Depending on the actual solution context, stream topologies may also be point-to-multipoint, multipoint-to-point, and point-to-point. It’s about time we looked at how records are consumed.

A consumer is a process or a thread that attaches to a Kafka cluster via a client library. (One is available for most languages.) A consumer generally, but not necessarily, operates as part of an encompassing consumer group. The group is specified by the group.id property. Consumer groups are effectively a load-balancing mechanism within Kafka — distributing partition assignments approximately evenly among the individual consumer instances within the group.

When the first consumer in a group joins the topic, it will receive all partitions in that topic. When a second consumer subsequently joins, it will get approximately half of the partitions, relieving the first consumer of half of its prior load. The process runs in reverse when consumers leave (by disconnecting or timing out) — the remaining consumers will absorb a greater number of partitions.

So, a consumer siphons records from a topic, pulling from the share of partitions that have been assigned to it by Kafka, alongside the other consumers in its group. As far as load-balancing goes, this should be fairly straightforward. But here’s the kicker — the act of consuming a record does not remove it. This might seem contradictory at first, especially if you associate the act of consuming with depletion. (If anything, a consumer should have been called a ‘reader’, but let’s not dwell on the choice of terminology.)

The simple fact is, consumers have absolutely no impact on topics and their partitions. A topic is an append-only ledger that may only be mutated by the producer, or by Kafka itself (as part of compaction or cleanup). Consumers are “cheap,” so you can have quite a number of them tail the logs without stressing the cluster. This is yet another point of distinction between an event stream and a traditional message queue, and it’s a crucial one.

A consumer internally maintains an offset that points to the next record in a partition, advancing the offset for every successive read. When a consumer first subscribes to a topic, it may elect to start at either the head-end or the tail-end of the topic. This behavior is controlled by setting the auto.offset.reset property to one of latest, earliest, or none. In the last case, an exception will be thrown if no previous offset exists for the consumer group.

Consumers retain their offset state vector locally. Since consumers across different consumer groups do not interfere, there may be any number of them reading concurrently from the same topic. Consumers run at their own pace — a slow or backlogged consumer has no impact on its peers.
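To make this concrete, here is a minimal sketch of a grouped Java consumer (not from the original article), assuming a broker on localhost:9092 and the streams-intro topic created earlier. The group.id property places it in a consumer group, and auto.offset.reset chooses where to begin when no committed offset exists.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupedConsumerExample {
    public static void main(String[] args) {
        Properties config = new Properties();
        config.put("bootstrap.servers", "localhost:9092");
        config.put("group.id", "streams-intro-group");   // membership of a consumer group
        config.put("auto.offset.reset", "earliest");     // start from the head if no committed offset exists
        config.put("key.deserializer", StringDeserializer.class.getName());
        config.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config)) {
            consumer.subscribe(List.of("streams-intro"));  // Kafka assigns partitions to group members
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d, offset=%d, key=%s, value=%s%n",
                            record.partition(), record.offset(), record.key(), record.value());
                }
            }
        }
    }
}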

To illustrate this concept, consider a scenario involving a topic with two partitions. Two consumer groups, A and B, are subscribed to the topic. Each group has three instances, the consumers being named A1, A2, A3, B1, B2, and B3. The diagram below illustrates how the two groups might share the topic and how the consumers advance through the records independently of one another.

               Partition 0                       Partition 1

               +--------+                        +--------+
               |0..00000|                        |0..00000|
               +--------+                        +--------+
               |0..00001| <= consumer A2         |0..00001|
               +--------+                        +--------+
               |0..00002|                        |0..00002| <= consumer A1
               +--------+                        +--------+
               |0..00003|                        |0..00003|
               +--------+                        +--------+
                  ...                               ...
               +--------+                        +--------+
               |0..00008| <= consumer B3         |0..00008| <= consumer B2
               +--------+                        +--------+
               |0..00009|                        |0..00009|
               +--------+                        +--------+
producer P1 => |0..00010|                        |0..00010|
               +--------+                        +--------+
                                  producer P1 => |0..00011|
                                                 +--------+

Look carefully and you’ll notice something is missing. Consumers A3 and B1 aren’t there. That’s because Kafka guarantees that a partition may only be assigned to at most one consumer within its consumer group. (We say ‘at most’ to cover the case when all consumers are offline.) Because there are three consumers in each group, but only two partitions, one consumer will remain idle — waiting for another consumer in its respective group to depart before being assigned a partition.

In this manner, consumer groups are not only a load-balancing mechanism, but also a fence-like exclusivity control, used to build highly performant pipelines without sacrificing safety, particularly when there is a requirement that a record may only be handled by one thread or process at any given time.

Consumer groups are also used to ensure availability. By periodically pulling records from a topic, the consumer implicitly signals to the cluster that it’s in a ‘healthy’ state, thereby extending the lease over its partition assignment. However, should the consumer fail to read again within the allowable deadline, it will be deemed faulty and its partitions will be reassigned — apportioned among the remaining ‘healthy’ consumers within its group. This deadline is controlled by the max.poll.interval.ms consumer client property, set to five minutes by default.

To use a transportation analogy, a topic is like a highway, while a partition is a lane. A record is the equivalent of a car, and its occupants correspond to the record’s value. Several cars can safely travel on the same highway, providing they keep to their lane. Cars sharing the same lane ride in a sequence, forming a queue. Now, suppose each lane leads to an off-ramp, diverting its traffic to some location. If one off-ramp gets banked up, other off-ramps may still flow smoothly.

It’s precisely this highway-lane metaphor that Kafka exploits to achieve its end-to-end throughput, easily reaching millions of records per second on commodity hardware. When creating a topic, one can choose the partition count — the number of lanes, if you will.

The partitions are divided approximately evenly among the individual consumers in a consumer group, with a guarantee that no partition will be assigned to two (or more) consumers at the same time, providing that these consumers are part of the same consumer group. Referring to our analogy, a car will never end up in two off-ramps simultaneously; however, two lanes might conceivably lead to the same off-ramp.

Note: A topic may be resized after creation by increasing the number of partitions. It is not possible, however, to decrease the partition count without recreating the topic.
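Topic creation and partition expansion can also be performed programmatically via the Java AdminClient. The following is a rough sketch (not from the original article) under the same localhost assumptions; the topic name and counts are illustrative only.

import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewPartitions;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicAdminExample {
    public static void main(String[] args) throws Exception {
        Properties config = new Properties();
        config.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(config)) {
            // Create a topic with three partitions and a replication factor of one
            // (suitable for a single-broker test setup only).
            admin.createTopics(List.of(new NewTopic("streams-intro", 3, (short) 1))).all().get();

            // Later, the partition count can be increased (but never decreased).
            admin.createPartitions(Map.of("streams-intro", NewPartitions.increaseTo(6))).all().get();
        }
    }
}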

Records correspond to events, messages, commands, or any other streamable content. Precisely how records are partitioned is left to the discretion of the producer(s). A producer may explicitly assign a partition index when publishing a record, although this approach is rarely used. A much more common approach is to assign a key to a record, as we have done in our earlier example. The key is completely opaque to Kafka. In other words, Kafka doesn’t attempt to interpret the contents of the key, treating it as an array of bytes. These bytes are hashed to derive a partition index, using a consistent hashing technique.

Records sharing the same hash are guaranteed to occupy the same partition. Assuming a topic with multiple partitions, records with a different key will likely end up in different partitions. However, due to hash collisions, records with different hashes may also end up in the same partition. Such is the nature of hashing. If you understand how a hash table works, this is no different.
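The following toy sketch illustrates the idea of key hashing. It is not Kafka's actual partitioner (which hashes the serialized key bytes with a Murmur2 function), but the principle of hashing the key and taking the result modulo the partition count is the same.

import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class KeyToPartitionSketch {
    // Maps a record key to a partition index by hashing the key's bytes and
    // taking the result modulo the partition count.
    static int partitionFor(String key, int numPartitions) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        int hash = Arrays.hashCode(keyBytes);
        return (hash & 0x7fffffff) % numPartitions;  // mask the sign bit to keep the index non-negative
    }

    public static void main(String[] args) {
        int partitions = 3;
        for (String key : new String[] {"foo", "bar", "foo"}) {
            System.out.printf("key=%s -> partition %d%n", key, partitionFor(key, partitions));
        }
        // "foo" always maps to the same partition; "bar" may or may not collide with it.
    }
}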

Producers rarely care which specific partition the records will map to, only that related records end up in the same partition, and that their order is preserved. Similarly, consumers are largely indifferent to their assigned partitions, so long as they receive the records in the same order as they were published, and their partition assignment does not overlap with any other consumer in their group.

Committing Offsets

We already said that consumers maintain an internal state with respect to their partition offsets. At some point, that state must be shared with Kafka, so that when a partition is reassigned, the new consumer can resume processing from where the outgoing consumer left off. Similarly, if the consumers were to disconnect, upon reconnection they would ideally skip over those records that have already been processed.

Persisting the consumer state back to the Kafka cluster is called committing an offset. Typically, a consumer will read a record (or a batch of records) and commit the offset of the last record, plus one. If a new consumer takes over the topic, it will commence processing from the last committed offset, hence the plus-one step is essential. (Otherwise, the last processed record would be handled a second time.)

Fun fact: Kafka employs a recursive approach to managing committed offsets, elegantly utilising itself to persist and track offsets. When an offset is committed, Kafka will publish a binary record on the internal __consumer_offsets topic. The contents of this topic are compacted in the background, creating an efficient event store that progressively reduces to only the last known commit points for any given consumer group.

Controlling the point when an offset is committed provides a great deal of flexibility around delivery guarantees, handing Kafka yet another trump card. The term ‘delivery’ assumes not just reading a record, but the full processing cycle, complete with any side-effects. One can shift from an at-most-once to an at-least-once delivery model by simply moving the commit operation from a point before the processing of a record is commenced, to a point sometime after the processing is complete. With this model, should the consumer fail midway through processing a record, the record will be re-read following the subsequent partition reassignment.

By default, a Kafka consumer will automatically commit offsets every five seconds, regardless of whether the consumer has finished processing the record. Often, this is not what you want, as it may lead to mixed delivery semantics. For example, in the event of consumer failure, some records might be delivered twice, while others might not be delivered at all. To enable manual offset committing, set the enable.auto.commit property to false.
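Putting the last few points together, here is a hedged sketch of an at-least-once consumer: auto-commit is disabled, each record is processed first, and only then is its offset (plus one) committed. It assumes the same local broker and topic as before, with a trivial handle() method standing in for real processing.

import java.time.Duration;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitExample {
    public static void main(String[] args) {
        Properties config = new Properties();
        config.put("bootstrap.servers", "localhost:9092");
        config.put("group.id", "manual-commit-group");
        config.put("enable.auto.commit", "false");   // take control of when offsets are committed
        config.put("key.deserializer", StringDeserializer.class.getName());
        config.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config)) {
            consumer.subscribe(List.of("streams-intro"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    handle(record);                           // process first ...
                    consumer.commitSync(Map.of(               // ... then commit offset + 1
                            new TopicPartition(record.topic(), record.partition()),
                            new OffsetAndMetadata(record.offset() + 1)));
                }
            }
        }
    }

    private static void handle(ConsumerRecord<String, String> record) {
        System.out.printf("processed offset %d: %s%n", record.offset(), record.value());
    }
}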

Note: There are a few gotchas like this in Kafka. Pay close attention to the (producer and consumer) client properties in the official Kafka documentation, particularly to the stated defaults. Don’t assume for a moment that the defaults are sensible, insofar as they ought to favour safety over other competing qualities. Kafka defaults tend to be optimised for performance, and will need to be explicitly overridden on the client when safety is a critical objective. Fortunately, setting the properties to ensure safety has only a minor impact on performance — Kafka is still a beast. Remember the first rule of optimisation: Don’t do it. Kafka would have been even better, had its creators given this more thought.

Getting offset committing right can be tricky, and routinely catches out beginners. A committed offset implies that the record one below that offset and all prior records have been dealt with by the consumer. When designing at-least-once or exactly-once applications, an offset should only be committed when the application has dealt with the record in question and all records before it.

In other words, the record has been processed to the point that any actions that would have resulted from the record have been carried out and finalized. This may include calling other APIs, updating a database, committing transactions, persisting the record’s payload, or publishing more records. Stated otherwise, if the consumer were to fail after committing the record, then not ever seeing this record again must not be detrimental to its correctness.

In the at-least-once (and by extension, the exactly-once) scenario, a typical consumer implementation will commit its offset linearly, in tandem with the processing of the records. That is, read a record, commit it (plus-one), read the next, commit it (plus-one), and so on. A common tactic is to process a batch of records concurrently (where this makes sense), using a thread pool, and only commit the last record when the entire batch is done. The commit process in Kafka is very efficient: the client library sends commit requests asynchronously to the cluster using an in-memory queue, without blocking the consumer. The client application can register an optional callback, notifying it when the commit has been acknowledged by the cluster.
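As a small illustration of the asynchronous path, the sketch below commits a map of offsets via commitAsync() with an acknowledgement callback. The consumer instance and the offsets map are assumed to exist in the surrounding application code.

import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class AsyncCommitSketch {
    // Sends the given offsets to the cluster without blocking the polling thread;
    // the callback fires once the cluster acknowledges (or rejects) the commit.
    static void commitAsynchronously(KafkaConsumer<String, String> consumer,
                                     Map<TopicPartition, OffsetAndMetadata> offsets) {
        consumer.commitAsync(offsets, (committedOffsets, exception) -> {
            if (exception != null) {
                System.err.println("Commit failed: " + exception.getMessage());
            } else {
                System.out.println("Committed: " + committedOffsets);
            }
        });
    }
}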

The consumer group is a somewhat understated concept that is pivotal to the versatility of an event streaming platform. By simply varying the affinity of consumers with their groups, one can arrive at vastly different distribution topologies — from a topic-like, pub-sub behavior to an MQ-style, point-to-point model. Because records are never truly consumed (the advancing offset only creates the illusion of consumption), one can concurrently superimpose disparate distribution topologies over a single event stream.

Free Consumers

Consumer groups are completely optional; a consumer does not need to be encompassed in a consumer group to pull messages from a topic. A free consumer omits the group.id property. Doing so allows it to operate under relaxed rules, entirely transferring the responsibility for consumer management to the application.

Note: The use of the term ‘free’ to denote a consumer without an encompassing group is not part of the standard Kafka nomenclature. As Kafka lacks a canonical term to describe this, the term ‘free’ was adopted here.

Free consumers do not subscribe to a topic. Instead, the consuming application is responsible for manually assigning a set of topic-partitions to the consumer, individually specifying the starting offset for each topic-partition pair. Free consumers do not commit their offsets to Kafka; it is up to the application to track the progress of such consumers and persist their state as appropriate, using a datastore of their choosing. The concepts of automatic partition assignment, rebalancing, offset persistence, partition exclusivity, consumer heart-beating and failure detection, and other so-called niceties accorded to consumer groups cease to exist in this mode.
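A minimal sketch of a free consumer is shown below (not taken from the original article): no group.id is set, a partition is assigned manually, and the starting offset is chosen explicitly with seek(). The broker address, topic, partition, and offset are illustrative assumptions.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FreeConsumerExample {
    public static void main(String[] args) {
        Properties config = new Properties();
        config.put("bootstrap.servers", "localhost:9092");
        // No group.id: this consumer is not part of a consumer group.
        config.put("key.deserializer", StringDeserializer.class.getName());
        config.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config)) {
            TopicPartition partition = new TopicPartition("streams-intro", 0);
            consumer.assign(List.of(partition));   // manual assignment, no subscription
            consumer.seek(partition, 0);           // explicitly choose the starting offset

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d, key=%s, value=%s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}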

Free consumers are not observed in the wild as often as their grouped counterparts. There are predominantly two use cases where a free consumer is an appropriate choice. The first is when you genuinely need full control of the partition assignment scheme and/or you require an alternative place to store consumer offsets. This is very rare.

Needless to say, it’s also very difficult to implement correctly, given the multitude of scenarios one must account for. The second, more commonly seen use case, is when you have a stateless or ephemeral consumer that needs to monitor a topic. For example, you might be interested in tailing a topic to identify specific records, or just as a debugging tool. You might only care about records that were published when your stateless consumer was online, so concerns such as persisting offsets and resuming from the last processed record are completely irrelevant.

A good example of where this is used routinely is the Kafdrop web UI, which we’ve already seen. When you click on a topic to view the messages, Kafdrop creates a free consumer and assigns the requested partition to it, reading the records from the supplied offsets. Navigating to a different topic or partition will reset the consumer, discarding any prior state.

The illustration below outlines the relationship between producers, topics, partitions, consumers, and consumer groups.

+----------+          +----------+
|PRODUCER 1|          |PRODUCER 2|
+-----v----+          +-----v----+
      |                     |
      |                     |
      |                     |
 +----V---------------------V-----------------------------------------+
 |                            >>> TOPIC >>>                           |
 |            +---------------------------------------------------+   |
 | PARTITION 0|record 0..00|record 0..01|record 0..02|record 0..03|   |
 |            +--------------------v------------------------------+   |
 |                                 |                                  |
 |            +--------------------|------------------------------+   |
 | PARTITION 1|record 0..00|       |    |record 0..02|record 0..03|   |
 |            +--------------------|-------------v----------------+   |
 |                                 |             |                    |
 +----------v----------------------|-------------|--------------------+
            |                      |             |
            |                      |             |
            |                      |             |
            |              +-------|-------------|----------------------+
            |              |       |             |                      |
       +----V-----+        | +-----V----+   +----V-----+   +----------+ |
       |CONSUMER 1|        | |CONSUMER 2|   |CONSUMER 3|   |CONSUMER 4| |
       +----------+        | +----------+   +----------+   +----------+ |
                           |               CONSUMER GROUP               |
                           +--------------------------------------------+

The key takeaways are:

  • Topics are subdivided into partitions, each forming an independent, totally-ordered sequence within a wider, partially-ordered stream.
  • Multiple producers are able to publish to a topic, picking a partition at will. This may be accomplished either directly, by specifying a partition index, or indirectly, by way of a record key, which deterministically hashes to a consistent partition index. (In the diagram above, both Producer 1 and Producer 2 publish to the same topic.)
  • Partitions in a topic can be load-balanced across a population of consumers in a consumer group, allocating partitions approximately evenly among the members of that group. (Consumer 2 and Consumer 3 each get one partition.)
  • A consumer in a group is not guaranteed a partition assignment. Where the group’s population outnumbers the partitions, some consumers will remain idle until this balance equalizes or tips in favor of the other side. (Consumer 4 remains partition-less.)
  • Partitions may be manually assigned to free consumers. If necessary, an entire topic may be assigned to a single free consumer — this is done by individually assigning all partitions. (Consumer 1 can be freely assigned any partition.)

Exactly-Once Delivery

When contrasting at-least-once with at-most-once delivery semantics, an often-asked question is: Why can’t we have it exactly once?

Without delving into the academic details, which involve conjectures and impossibility proofs, it is sufficient to say that exactly-once semantics are not possible without collaboration with the consumer application. What does this mean in practice?

Consumers in event streaming applications must be idempotent. In other words, processing the same record repeatedly should have no net effect on the consumer ecosystem. If a record has no additive effects, the consumer is inherently idempotent. (For example, if the consumer simply overwrites an existing database entry with a new one, then the update is naturally idempotent.) Otherwise, the consumer must check whether a record has already been processed, and to what extent, prior to processing a record. The combination of at-least-once delivery and consumer idempotence collectively leads to exactly-once semantics.
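A trivial sketch of the idempotence check might look like the following, with an in-memory set of already-processed identifiers standing in for what would normally be durable state (for instance, a database table updated in the same transaction as the side-effect).

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class IdempotentHandlerSketch {
    // Remembers the identifiers of records that have already been processed.
    // A production system would typically persist this state durably.
    private final Set<String> processedIds = ConcurrentHashMap.newKeySet();

    void handle(String recordId, Runnable sideEffect) {
        // add() returns false if the ID was already present, so a redelivered
        // record is silently skipped, giving the "no net effect" property.
        if (processedIds.add(recordId)) {
            sideEffect.run();
        }
    }
}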

Example: A Trading Platform

With all this theory looming over us like Kubrick’s Monolith, it would be inappropriate to conclude without offering the reader a practical scenario.

Let’s say you were looking for specific price patterns in listed stocks, emitting trading signals when a particular pattern is identified. There are a large number of stocks, and understandably you’d like them processed in parallel. However, the time series for any given ticker code must be processed sequentially on a single consumer.

Kafka makes this use case, and others like it, almost trivial to implement. We would create a pair of topics: prices for the raw price data, and orders for any resulting orders. We can be fairly generous with our partition counts, as the nature of the data gives us ample opportunities for parallelism.

At the feed source, we could publish a record for each price on the prices topic, keyed by the ticker code. Kafka’s automatic partition assignment will ensure that every ticker code is handled by (at most) one consumer in its group. The consumer instances are free to scale in and out to match the processing load. Consumer groups should be meaningfully named, ideally reflecting the purpose of the consuming application. A good example might be trading-strategy.abc, for a fictitious trading strategy named ‘ABC’.
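At the feed-source end, a sketch of such a publisher might look like this. The topic name prices matches the example; the ticker codes and price values are made up purely for illustration.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PriceFeedPublisher {
    public static void main(String[] args) {
        Properties config = new Properties();
        config.put("bootstrap.servers", "localhost:9092");
        config.put("key.serializer", StringSerializer.class.getName());
        config.put("value.serializer", StringSerializer.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(config)) {
            // Keying by ticker code ensures each ticker's time series stays in one
            // partition, preserving its order for a single consumer in the group.
            producer.send(new ProducerRecord<>("prices", "AAPL", "100.00"));
            producer.send(new ProducerRecord<>("prices", "MSFT", "200.00"));
            producer.send(new ProducerRecord<>("prices", "AAPL", "100.05"));
        }   // closing the producer flushes any buffered records
    }
}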

Once a price pattern is identified by the consumer, it can publish another message — the order request — on the orders topic. We’ll muster up another consumer group — order-execution — responsible for reading the orders and forwarding them to the broker.

In this simple example, we have created an end-to-end trading pipeline that is entirely event-driven and highly scalable — at least theoretically, assuming there are no other bottlenecks. We can dynamically add more processing nodes to the individual stages to cope with the increased load where it’s called for.

Now, let’s spice things up a bit. Suppose you need several trading strategies operating concurrently, driven by a common data feed. Furthermore, the trading strategies will be developed by different teams; the objective being to decouple these implementations as much as possible, allowing the teams to operate autonomously — develop and deploy at their individual cadence, perhaps even using different programming languages and tool-chains. That said, you’d ideally want to reuse as much of what’s already been written. So, how would we pull this off? 

Trading Platform

Kafka’s flexible multipoint-to-multipoint pub-sub architecture combines stateful consumption with broadcast semantics. Using distinct consumer groups, Kafka allows disparate applications to share input topics, processing events at their own pace. The second trading strategy would need a dedicated consumer group — trading-strategy.xyz — applying its specific business logic to the common pricing stream, publishing the resulting orders to the same orders topic. In this fashion, Kafka enables you to construct modular event processing pipelines from discrete elements that are readily reusable and composable.

Note: In the days of service buses and traditional ‘enterprisey’ message brokers, before event sourcing entered the mainstream, you would have had to choose between persistent message queues or transient broadcast topics. In our example, you would likely have created multiple FIFO queues, using the fan-out pattern. Because Kafka generalises pub-sub topics and persistent message queues into a unified model, a single source topic can power a diverse range of consumers without incurring duplication.

In Conclusion

Event streaming platforms are a highly effective building block in the construction of modular, loosely-coupled, event-driven applications. Within the world of event streaming, Kafka has solidified its position as the go-to open-source solution that is both amazingly flexible and highly performant. Concurrency and parallelism are at the heart of Kafka’s architecture, forming partially-ordered event streams that can be load-balanced across a scalable consumer ecosystem. A simple reconfiguration of consumers and their encompassing groups can bring about vastly different event distribution and processing semantics; shifting the offset commit point can invert the delivery guarantee from an at-most-once to an at-least-once model.

Of course, Kafka isn’t without its flaws. The tooling is sub-par, to put it mildly; most Kafka practitioners have long abandoned the out-of-the-box CLI utilities in favour of other open-source tools such as Kafdrop and Kafkacat, and third-party commercial offerings like Kafka Tool. The breadth of Kafka’s configuration options is overwhelming, with defaults that are riddled with gotchas, ready to shock the unsuspecting first-time user.

All in all, Kafka represents a paradigm shift in how we architect and build complex systems. Its benefits are anything but superficial, and they dwarf any of the niggles that are bound to exist in a technology that has undergone such aggressive adoption. Crucially, it paves the way for further progress in its space; Apache Pulsar is a prime example of an alternative platform that has improved on many of Kafka’s shortcomings, yet owes a great deal to its predecessor for laying the cornerstone and bringing the genre to the mainstream.

How to copy a File or Directory in Java
03
Mar
2021

How to copy a File or Directory in Java

In this article, you’ll learn how to copy a file or directory in Java using various methods like Files.copy() or using BufferedInputStream and BufferedOutputStream.

Java Copy File using Files.copy()

Java NIO’s Files.copy() method is the simplest way of copying a file in Java.

import java.io.IOException;
import java.nio.file.*;

public class CopyFileExample {
    public static void main(String[] args) {

        Path sourceFilePath = Paths.get("./bar.txt");
        Path targetFilePath = Paths.get(System.getProperty("user.home") + "/Desktop/bar-copy.txt");

        try {
            Files.copy(sourceFilePath, targetFilePath);
        } catch (FileAlreadyExistsException ex) {
            System.out.println("File already exists");
        } catch (IOException ex) {
            System.out.format("I/O error: %s%n", ex);
        }
    }
}

The Files.copy() method will throw FileAlreadyExistsException if the target file already exists. If you want to replace the target file then you can use the REPLACE_EXISTING option like this –

Files.copy(sourceFilePath, targetFilePath, StandardCopyOption.REPLACE_EXISTING);

Note that directories can be copied using the same method. However, files inside the directory are not copied, so the new directory will be empty even when the original directory contains files.

Read: How to copy Directories recursively in Java
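As a quick taste of the recursive approach covered in the linked article, the sketch below walks the source tree with Files.walk() and re-creates each entry under the target directory. It is deliberately simplistic (for instance, it assumes the target’s parent directory exists and will fail if an existing target directory is non-empty).

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.*;
import java.util.stream.Stream;

public class CopyDirectoryRecursively {

    public static void copyRecursively(Path source, Path target) throws IOException {
        try (Stream<Path> paths = Files.walk(source)) {
            paths.forEach(path -> {
                try {
                    // Re-root each path under the target directory and copy it.
                    Files.copy(path, target.resolve(source.relativize(path)),
                            StandardCopyOption.REPLACE_EXISTING);
                } catch (IOException ex) {
                    throw new UncheckedIOException(ex);
                }
            });
        }
    }

    public static void main(String[] args) throws IOException {
        copyRecursively(Paths.get("./source-dir"), Paths.get("./target-dir"));
    }
}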

Java Copy File using BufferedInputStream and BufferedOutputStream

You can also copy a file byte-by-byte using a byte-stream I/O. The following example uses BufferedInputStream to read a file into a byte array and then write the byte array using BufferedOutputStream.

You can also use a FileInputStream and a FileOutputStream directly for performing the reading and writing. But buffered I/O is more performant because it buffers the data and reads/writes it in larger chunks.

import java.io.*;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CopyFileExample1 {
    public static void main(String[] args) {
        Path sourceFilePath = Paths.get("./bar.txt");
        Path targetFilePath = Paths.get(System.getProperty("user.home") + "/Desktop/bar-copy.txt");

        try(InputStream inputStream = Files.newInputStream(sourceFilePath);
            BufferedInputStream bufferedInputStream = new BufferedInputStream(inputStream);

            OutputStream outputStream = Files.newOutputStream(targetFilePath);
            BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(outputStream)) {

            byte[] buffer = new byte[4096];
            int numBytes;
            while ((numBytes = bufferedInputStream.read(buffer)) != -1) {
                bufferedOutputStream.write(buffer, 0, numBytes);
            }
        } catch (IOException ex) {
            System.out.format("I/O error: %s%n", ex);
        }
    }
}