Kotlin Infix Notation - Make function calls more intuitive
07 Mar 2021

Kotlin supports method calls of a special kind, called infix calls.

You can mark any member function or extension function with the infix modifier to allow it to be called using infix notation. The only requirement is that the function must have a single parameter, and the parameter must not be a vararg and must have no default value.

Infix notations are used extensively in Kotlin. If you’ve been programming in Kotlin, chances are that you’ve already used infix notations.

Following are a few common examples of infix notation in Kotlin –

1. Infix Notation Example – Creating a Map

val map = mapOf(1 to "one", 2 to "two", 3 to "three")

In the above example, the expressions 1 to "one", 2 to "two", etc. are infix notations of the function calls 1.to("one"), 2.to("two"), etc.

to() is an infix function that creates a Pair<A, B> from two values.

2. Infix Notation Example – Range Operators (until, downTo, step)

Kotlin provides various range operators that are usually called using infix notation –

for(i in 1 until 10) {	// Same as - for(i in 1.until(10))
    print("$i ")
}
for(i in 10 downTo 1) {	 // Same as - for(i in 10.downTo(1))
    print("$i ")
}
for(i in 1 until 10 step 2) { // Same as - for(i in 1.until(10).step(2))
    print("$i ")
}

3. Infix Notation Example – String.matches()

The String.matches() function in Kotlin, which matches a String against a Regex, is an infix function –

val regex = Regex("[tT]rue|[yY]es")
val str = "yes"

str.matches(regex)

// Infix notation of the above function call -
str matches regex

Creating an Infix Function

You can make a single-parameter member function or extension function an infix function by marking it with the infix modifier.

Check out the following example where I have created an infix member function called add() for adding two Complex numbers –

data class ComplexNumber(val realPart: Double, val imaginaryPart: Double) {
	// Infix function for adding two complex numbers
    infix fun add(c: ComplexNumber): ComplexNumber {
        return ComplexNumber(realPart + c.realPart, imaginaryPart + c.imaginaryPart)
    }
}

You can now call the add() method using infix notation –

val c1 = ComplexNumber(3.0, 5.0)
val c2 = ComplexNumber(4.0, 7.0)

// Usual call
c1.add(c2) // produces - ComplexNumber(realPart=7.0, imaginaryPart=12.0)

// Infix call
c1 add c2  // produces - ComplexNumber(realPart=7.0, imaginaryPart=12.0)

Conclusion

That’s all folks! In this article, you learned what infix notation is and how it works. You saw several examples of infix notation in Kotlin and also learned how to create an infix function.

How to read a File in Java
03 Mar 2021

In this article, you’ll learn how to read a text file or binary (image) file in Java using various classes and utility methods provided by Java like:

  • BufferedReader
  • LineNumberReader
  • Files.readAllLines
  • Files.lines
  • BufferedInputStream
  • Files.readAllBytes, etc.

Let’s look at each of the different ways of reading a file in Java with the help of examples.

Java read file using BufferedReader

BufferedReader is a simple and performant way of reading text files in Java. It reads text from a character-input stream. It buffers characters to provide efficient reading.

import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class BufferedReaderExample {
    public static void main(String[] args) {
        Path filePath = Paths.get("demo.txt");
        Charset charset = StandardCharsets.UTF_8;

        try (BufferedReader bufferedReader = Files.newBufferedReader(filePath, charset)) {
            String line;
            while ((line = bufferedReader.readLine()) != null) {
                System.out.println(line);
            }
        } catch (IOException ex) {
            System.out.format("I/O error: %s%n", ex);
        }
    }
}

Java read file line by line using Files.readAllLines()

Files.readAllLines() is a utility method of the Java NIO’s Files class that reads all the lines of a file and returns a List<String> containing each line. It internally uses BufferedReader to read the file.

import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

public class FilesReadAllLinesExample {
    public static void main(String[] args) {
        Path filePath = Paths.get("demo.txt");
        Charset charset = StandardCharsets.UTF_8;
        try {
            List<String> lines = Files.readAllLines(filePath, charset);
            for(String line: lines) {
                System.out.println(line);
            }
        } catch (IOException ex) {
            System.out.format("I/O error: %s%n", ex);
        }
    }
}

Java read file line by line using Files.lines()

Files.lines() method reads all the lines from a file as a Stream. You can use Stream API methods like forEach and map to work with each line of the file. Note that the returned Stream holds an open file handle, so it should be closed once you’re done, for example via try-with-resources as shown below.

import java.io.IOException;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.stream.Stream;

public class FilesLinesExample {
    public static void main(String[] args) {
        Path filePath = Paths.get("demo.txt");
        Charset charset = StandardCharsets.UTF_8;

        // Files.lines returns a Stream that should be closed to release the file handle
        try (Stream<String> lines = Files.lines(filePath, charset)) {
            lines.forEach(System.out::println);
        } catch (IOException ex) {
            System.out.format("I/O error: %s%n", ex);
        }
    }
}

Java read file line by line using LineNumberReader

LineNumberReader is a buffered character-stream reader that keeps track of line numbers. You can use this class to read a text file line by line.

import java.io.*;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class LineNumberReaderExample {
    public static void main(String[] args) {
        Path filePath = Paths.get("demo.txt");
        Charset charset = StandardCharsets.UTF_8;

        try(BufferedReader bufferedReader = Files.newBufferedReader(filePath, charset);
            LineNumberReader lineNumberReader = new LineNumberReader(bufferedReader)) {

            String line;
            while ((line = lineNumberReader.readLine()) != null) {
                System.out.format("Line %d: %s%n", lineNumberReader.getLineNumber(), line);
            }
        } catch (IOException ex) {
            System.out.format("I/O error: %s%n", ex);
        }
    }
}

Java read binary file (image file) using BufferedInputStream

All the examples presented in this article so far read textual data from a character-input stream. If you’re reading binary data such as an image file, then you need to use a byte-input stream.

BufferedInputStream lets you read a raw stream of bytes. It also buffers the input to improve performance.

import java.io.*;
import java.nio.file.Files;
import java.nio.file.Paths;

public class BufferedInputStreamImageCopyExample {
    public static void main(String[] args) {
        try(InputStream inputStream = Files.newInputStream(Paths.get("sample.jpg"));
            BufferedInputStream bufferedInputStream = new BufferedInputStream(inputStream);

            OutputStream outputStream = Files.newOutputStream(Paths.get("sample-copy.jpg"));
            BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(outputStream)) {

            byte[] buffer = new byte[4096];
            int numBytes;
            while ((numBytes = bufferedInputStream.read(buffer)) != -1) {
                bufferedOutputStream.write(buffer, 0, numBytes);
            }
        } catch (IOException ex) {
            System.out.format("I/O error: %s%n", ex);
        }
    }
}

Java read file into byte[] using Files.readAllBytes()

If you want to read the entire contents of a file into a byte array, then you can use the Files.readAllBytes() method.

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class FilesReadAllBytesExample {
    public static void main(String[] args) {
        try {
            byte[] data = Files.readAllBytes(Paths.get("demo.txt"));

            // Use the byte data, e.g., decode the file contents into a String
            String content = new String(data, StandardCharsets.UTF_8);
            System.out.println(content);
        } catch (IOException ex) {
            System.out.format("I/O error: %s%n", ex);
        }

    }
}

Microservices vs Monolithic Architecture: Which is Right for Startups?
14 May 2021

Microservices architecture has become a hot topic in the software backend development world. The ecosystem has a profound impact not just on an enterprise’s IT function but also on the digital transformation of the entire app business.

The debate of microservices vs monolithic architecture marks a revolutionary shift in how an IT team approaches its software development cycle: whether to go with the approach that brands like Google, Amazon, and Netflix chose, or with the simplicity that a startup at the development stage demands.

In this article, we are going to give startups an answer to which backend architecture they should choose when starting their journey.

Table of Contents:

  1. What is Microservices Architecture?
  2. What is Monolithic Architecture?
  3. Microservices vs Monolithic Architecture: Advantages and Disadvantages
  4. How to Choose Between Monolithic and Microservice Architecture?
  5. Migrating from a Monolithic Architecture to a Microservice Ecosystem
  6. Conclusion

What is Microservices Architecture?

Microservices architecture is a mix of small, autonomous services where every service is self-contained and implements a single business capability. It is a distinct approach to developing software systems that focuses on building several single-function modules with clearly defined operations and interfaces. The approach has become a popular trend in the past several years as more and more enterprises look to become Agile and make a shift towards DevOps.

Components of microservices architecture that make it one of the best enterprise architectures:

  • The services are independent, small, and loosely coupled
  • Encapsulates a business or customer scenario
  • Every service has its own codebase
  • Services can be independently deployed 
  • Services interact with each other using APIs

With the question of what microservices architecture is now answered, let us move on to look into what monolithic architecture is.

What is Monolithic Architecture? 

A monolithic application has a single codebase with multiple modules. The modules, in turn, are divided into either technical features or business features. The architecture comes with a single build system that builds the complete application, along with a single deployable or executable binary.

Now that we have looked into what monolithic and microservices architectures are, let us look into the advantages and disadvantages that both backend systems offer, to get an understanding of what separates them from each other.

Microservices vs Monolithic Architecture: Advantages and Disadvantages

Advantages of Monolithic Architecture

A. Zero Deployment Dependencies

An organized and well-documented monolithic architecture makes it possible for backend developers to not worry about which version would be compatible with which service, how to find which services are present, what they do, and so on.

B. Error Tracing

One of the biggest benefits of a monolith is that all transactions are logged in one place, making error tracing a breeze.

C. No Silos

One factor that works in favour of the monolith in the microservices vs monolithic architecture debate is the absence of silos. It becomes very easy for developers to work on multiple parts of the app, for they are all structured similarly and use the same tools, which means no prior distributed-computing knowledge is needed.

D. Cross-Cutting Concerns:

The time you would otherwise spend defining services that do not bleed into each other is time you can actually spend developing things that help the customers.

E. Shared Code:

Code can be shared across the whole application, with no need for shared libraries or for sending the complete scope a service needs to operate along with each request.

Limitations of Monolithic Architecture

A. Lack of Flexibility:

Monolithic architectures are not flexible. You cannot use different technologies once you have incorporated a monolith. The technology stack decided at the beginning has to be followed throughout the project, making upgrades a next to impossible task.

B. Development Speed:

Microservices are famous for speeding up the development process when you compare microservices architecture vs monolithic architecture. Development is very slow in monolithic architecture: it can be very difficult for team members to understand and then modify the code of large monolithic applications. Additionally, as the size of the codebase increases, the IDE gets overloaded and slows down. All of this results in slowed-down app development.

C. Difficult Scalability:

Scaling monolithic applications becomes difficult when the app grows large. While developers can spin up new instances of the monolith behind a load balancer to distribute traffic, monolithic architecture cannot scale selectively with increasing load.

Benefits of Microservices Architecture

  1. The biggest factor in favor of microservices in the difference between microservices and monolithic architecture is that it handles complexity issues by decomposing the app into a manageable set of services that are faster to develop and easier to maintain and understand.
  2. It enables independent service development through a team focused on that particular service, which makes it the ideal choice for businesses that work with an Agile development approach.
  3. It lowers the barrier to adopting newer technologies, as developers have the freedom to choose whatever technology makes sense for their project.
  4. It makes it possible for every microservice to be deployed individually, as a result of which continuous deployment of complex applications becomes possible.

Drawbacks of Microservices Architecture

  1. Microservices add complexity to a project simply because a microservices application is a distributed system. To handle the complexity, developers have to select and implement an inter-process communication mechanism based on either RPC or messaging.
  2. They work with a partitioned database architecture. Business transactions that update multiple business entities inside a microservices application also have to update different databases owned by multiple services.
  3. It is a lot more difficult to implement changes that span multiple services. In the case of monolithic architecture, an app development agency only has to change the corresponding modules, integrate the changes, and deploy them all in one go.
  4. Deployment of a microservices application is very complex. It consists of a number of services, each of which has multiple runtime instances. In contrast, a monolithic application is deployed on a set of identical servers behind a load balancer.

Benefits and limitations exist in both monolithic and microservices architecture. This makes it extremely difficult for a startup to gauge which backend architecture to incorporate into their journey.

Let us help you. 

How to Choose Between Monolithic and Microservice Architecture? 

The fact that both approaches come with their own set of pros and cons is a sign that there is no one-size-fits-all methodology when it comes to choosing a backend architecture. But there are a few questions that can help you decide which direction to head in.

Are You Working in a Familiar Sector?

When you work in an industry where you know the veins of the sector, along with the demands and the needs of the customers, it becomes easier to enter the system with a definite structure. The same, however, is not possible for a business that is very new to the industry, for the looming doubts are much greater.

So, the use of microservices architecture in app development is best suited to cases where you know the industry inside out. If that is not the case, go with the monolithic approach to develop your app.

How Prepared is Your Team?

Is your team familiar with the best practices for implementing microservices? Or are they more comfortable working with the simplicity of a monolith? Will your team and your business offering expand in the coming time? You will have to find answers to all these questions to gauge whether the people who have to work on the project are even ready to migrate.

What is Your Infrastructure Like?

Everything from the development to the deployment of a microservices-based web application requires cloud infrastructure. You will have to make use of Amazon AWS or Google Cloud for deploying even tiny elements. While cloud technologies make the process easier, the idea of setting up a database server for every other microservice and then scaling out is something that a startup entrepreneur might not be comfortable with.

Have you Evaluated the Business Risk?

More often than not, businesses take microservices’ side in the microservices vs monolithic architecture debate, thinking it is the right thing for their business. What they forget to factor in is the chance that their application might not become as scalable as they optimistically expect, and they might have to bear the risks of adding a highly scalable system to their process.

Here is a short list of pointers to help you decide between microservices and monolithic architecture for your software development process:

When to Choose Monolithic Architecture?

  • When your team is at a founding stage
  • When you are developing a proof of concept
  • When you have no experience in microservices
  • When you have experience developing on solid frameworks, like Ruby on Rails, Laravel, etc.

When to Choose Microservices Architecture?

  • You need quick, independent service delivery
  • You need to extend your team
  • Your platform needs to be extremely efficient
  • You don’t have a tight deadline to work with

Migrating from a Monolithic Architecture to a Microservice Ecosystem 

The right approach for migrating a monolithic architecture to a microservices ecosystem is to divide the monolith’s processes and turn them into microservices. The result is a two-step plan:

  1. Identification of existing monolithic elements which can get decoupled
  2. A validation that the new functionality can be developed as a microservice

One of the main challenges that can emerge when initiating the migration from a monolithic architecture to a microservices architecture is designing and creating an integration between the existing system and the new microservices. A solution for this is to add glue code that allows them to connect, something like an API.

An API gateway can also help combine multiple individual service calls into one coarse-grained service, which in turn helps reduce the integration cost with the monolithic system.
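
To make that concrete, here is a rough sketch of a gateway-style aggregation endpoint in Spring; the service URLs, endpoint path, and response shape are invented for illustration, not taken from any particular system:

import java.util.HashMap;
import java.util.Map;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class OrderSummaryGatewayController {

    private final RestTemplate restTemplate = new RestTemplate();

    // Hypothetical downstream services - substitute your real service locations
    private static final String ORDER_SERVICE_URL = "http://order-service/orders/";
    private static final String CUSTOMER_SERVICE_URL = "http://customer-service/customers/by-order/";

    // One coarse-grained call for the client, two fine-grained calls underneath
    @GetMapping("/orderSummary/{orderId}")
    public Map<String, Object> getOrderSummary(@PathVariable String orderId) {
        String order = restTemplate.getForObject(ORDER_SERVICE_URL + orderId, String.class);
        String customer = restTemplate.getForObject(CUSTOMER_SERVICE_URL + orderId, String.class);

        Map<String, Object> summary = new HashMap<>();
        summary.put("order", order);
        summary.put("customer", customer);
        return summary;
    }
}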

Conclusion

When you compare microservices architecture vs monolithic architecture, you will find the former to be a hot trend. Every entrepreneur wants to say that their app is based on this architecture. But the temptation to focus only on the problems of monolithic architecture and abandon it should be measured against the actual value of microservices architecture.

The right approach would be to develop new apps using a monolithic approach and move to microservices only when the justification for the move is backed by proper metrics like performance monitoring.

For established businesses, microservices tend to be avenues for continuous deployment, team-based development, and agility in shifting to new technologies. But for startups, or companies that are just starting out, adopting microservices can impact software project success very negatively.

FAQs About Microservices vs Monolithic Architecture

Q. What is the Purpose of Microservices?

Microservices architecture allows you to divide an application into separate, independent services, each managed by different groups in the software development agency. This way, the responsibility gets divided and the application is developed and deployed at a much faster rate.

Q. Does moving from a monolith to a Microservice architecture help with resilience?

Yes. Since microservices enable developers to handle multiple parts of the project at the same time in a streamlined manner, it becomes much easier to identify issues and solve them in time. That is next to impossible in the case of monolithic architecture, where adding new technologies or changing the process mid-project is practically impossible.

Q. What is the difference between Microservices vs Monolithic approach?

The difference between microservices and monolithic architecture is a difference of approaches. While monolithic architecture comes with a single build system, microservices come with multiple build systems, which makes the development and deployment of an application faster.

Q. When To Choose Microservices Over Monolithic Architecture

The choice of going with microservices over monolithic architecture can be decided based on these factors:

  • When you require quick, independent service delivery
  • When you have to extend the team
  • When you have to build an efficient platform
  • When you do not have a tight deadline

Spring Boot Quartz Scheduler Example: Building an Email Scheduling App
06 Feb 2021

Quartz is an open source Java library for scheduling Jobs. It has a very rich set of features including but not limited to persistent Jobs, transactions, and clustering.

You can schedule Jobs to be executed at a certain time of day, or periodically at a certain interval, and much more. Quartz provides a fluent API for creating jobs and scheduling them.

Quartz Jobs can be persisted into a database, or a cache, or in-memory.

In this article, you’ll learn how to schedule Jobs in Spring Boot using Quartz Scheduler by building a simple email scheduling application. The application will have a REST API that allows clients to schedule emails to be sent at a later time.

We’ll use MySQL to persist all the jobs and other job-related data.

Creating the Application

Let’s bootstrap the application using Spring Boot CLI. Open your terminal and type the following command –

spring init -d=web,jpa,mysql,quartz,mail -n=quartz-demo quartz-demo

The above command will generate the project with all the specified dependencies in a folder named quartz-demo.

Note that you can also use the Spring Initializr web tool to bootstrap the project by following the instructions below –

  • Open http://start.spring.io
  • Enter quartz-demo in the Artifact field.
  • Add Web, JPA, MySQL, Quartz, and Mail in the dependencies section.
  • Click Generate to generate and download the project.

That’s it! You may now import the project into your favorite IDE and start working.

Directory Structure

Following is the directory structure of the complete application for your reference. We’ll create all the required folders and classes one-by-one in this article –

Spring Boot Quartz Scheduler Email Scheduling App directory structure

Configuring MySQL database, Quartz Scheduler, and Mail Sender

Let’s configure Quartz Scheduler, MySQL database, and Spring Mail. MySQL database will be used for storing Quartz Jobs, and Spring Mail will be used to send emails.

Open src/main/resources/application.properties file and add the following properties –

## Spring DATASOURCE (DataSourceAutoConfiguration & DataSourceProperties)
spring.datasource.url = jdbc:mysql://localhost:3306/quartz_demo?useSSL=false
spring.datasource.username = root
spring.datasource.password = password

## QuartzProperties
spring.quartz.job-store-type = jdbc
spring.quartz.properties.org.quartz.threadPool.threadCount = 5

## MailProperties
spring.mail.host=smtp.gmail.com
spring.mail.port=587
spring.mail.username=testme@gmail.com
spring.mail.password=

spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.starttls.enable=true

You’ll need to create a MySQL database named quartz_demo. Also, don’t forget to change the spring.datasource.username and spring.datasource.password properties as per your MySQL installation.

We’ll be using Gmail’s SMTP server for sending emails. Please add your password in the spring.mail.password property. You may also pass this property at runtime as a command line argument, or set it in an environment variable.

Note that Gmail’s SMTP access is disabled by default. To allow this app to send emails using your Gmail account, you’ll need to enable SMTP access in your Google account’s security settings.

All the quartz specific properties are prefixed with spring.quartz. You can refer to the complete set of configurations supported by Quartz in its official documentation. To directly set configurations for Quartz scheduler, you can use the format spring.quartz.properties.<quartz_configuration_name>=<value>.

Creating Quartz Tables

Since we have configured Quartz to store Jobs in the database, we’ll need to create the tables that Quartz uses to store Jobs and other job-related meta-data.

Please download the following SQL script and run it in your MySQL database to create all the Quartz specific tables.

After downloading the above SQL script, login to MySQL and run the script like this –

mysql> source <PATH_TO_QUARTZ_TABLES.sql> 

Overview of Quartz Scheduler’s APIs and Terminologies

1. Scheduler

The primary API for scheduling, unscheduling, adding, and removing Jobs.

2. Job

The interface to be implemented by classes that represent a ‘job’ in Quartz. It has a single method called execute() where you write the work that needs to be performed by the Job.
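
For illustration only, a bare-bones Job implementation (unrelated to the email example later in this article; the class name and message are made up) could look like this:

import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class HelloJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        // The work to be performed by the Job goes here
        System.out.println("Hello from job " + context.getJobDetail().getKey());
    }
}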

3. JobDetail

A JobDetail represents an instance of a Job. It also contains additional data in the form of a JobDataMap that is passed to the Job when it is executed.

Every JobDetail is identified by a JobKey that consists of a name and a group. The name must be unique within a group.

4. Trigger

A Trigger, as the name suggests, defines the schedule at which a given Job will be executed. A Job can have many Triggers, but a Trigger can only be associated with one Job.

Every Trigger is identified by a TriggerKey that comprises a name and a group. The name must be unique within a group.

Just like JobDetails, Triggers can also send parameters/data to the Job.

5. JobBuilder

JobBuilder is a fluent builder-style API to construct JobDetail instances.

6. TriggerBuilder

TriggerBuilder is used to instantiate Triggers.

Creating a REST API to schedule Email Jobs dynamically in Quartz

All right! Let’s now create a REST API to schedule email Jobs in Quartz dynamically. All the Jobs will be persisted in the database and executed at the specified schedule.

Before writing the API, let’s create the DTO classes that will be used as request and response payloads for the scheduleEmail API –

ScheduleEmailRequest

package com.example.quartzdemo.payload;

import javax.validation.constraints.Email;
import javax.validation.constraints.NotEmpty;
import javax.validation.constraints.NotNull;
import java.time.LocalDateTime;
import java.time.ZoneId;

public class ScheduleEmailRequest {
    @Email
    @NotEmpty
    private String email;

    @NotEmpty
    private String subject;

    @NotEmpty
    private String body;

    @NotNull
    private LocalDateTime dateTime;

    @NotNull
    private ZoneId timeZone;
	
	// Getters and Setters (Omitted for brevity)
}

ScheduleEmailResponse

package com.example.quartzdemo.payload;

import com.fasterxml.jackson.annotation.JsonInclude;

@JsonInclude(JsonInclude.Include.NON_NULL)
public class ScheduleEmailResponse {
    private boolean success;
    private String jobId;
    private String jobGroup;
    private String message;

    public ScheduleEmailResponse(boolean success, String message) {
        this.success = success;
        this.message = message;
    }

    public ScheduleEmailResponse(boolean success, String jobId, String jobGroup, String message) {
        this.success = success;
        this.jobId = jobId;
        this.jobGroup = jobGroup;
        this.message = message;
    }

    // Getters and Setters (Omitted for brevity)
}

ScheduleEmail Rest API

The following controller defines the /scheduleEmail REST API that schedules email Jobs in Quartz –

package com.example.quartzdemo.controller;

import com.example.quartzdemo.job.EmailJob;
import com.example.quartzdemo.payload.ScheduleEmailRequest;
import com.example.quartzdemo.payload.ScheduleEmailResponse;
import org.quartz.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

import javax.validation.Valid;
import java.time.ZonedDateTime;
import java.util.Date;
import java.util.UUID;

@RestController
public class EmailJobSchedulerController {
    private static final Logger logger = LoggerFactory.getLogger(EmailJobSchedulerController.class);

    @Autowired
    private Scheduler scheduler;

    @PostMapping("/scheduleEmail")
    public ResponseEntity<ScheduleEmailResponse> scheduleEmail(@Valid @RequestBody ScheduleEmailRequest scheduleEmailRequest) {
        try {
            ZonedDateTime dateTime = ZonedDateTime.of(scheduleEmailRequest.getDateTime(), scheduleEmailRequest.getTimeZone());
            if(dateTime.isBefore(ZonedDateTime.now())) {
                ScheduleEmailResponse scheduleEmailResponse = new ScheduleEmailResponse(false,
                        "dateTime must be after current time");
                return ResponseEntity.badRequest().body(scheduleEmailResponse);
            }

            JobDetail jobDetail = buildJobDetail(scheduleEmailRequest);
            Trigger trigger = buildJobTrigger(jobDetail, dateTime);
            scheduler.scheduleJob(jobDetail, trigger);

            ScheduleEmailResponse scheduleEmailResponse = new ScheduleEmailResponse(true,
                    jobDetail.getKey().getName(), jobDetail.getKey().getGroup(), "Email Scheduled Successfully!");
            return ResponseEntity.ok(scheduleEmailResponse);
        } catch (SchedulerException ex) {
            logger.error("Error scheduling email", ex);

            ScheduleEmailResponse scheduleEmailResponse = new ScheduleEmailResponse(false,
                    "Error scheduling email. Please try later!");
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(scheduleEmailResponse);
        }
    }

    private JobDetail buildJobDetail(ScheduleEmailRequest scheduleEmailRequest) {
        JobDataMap jobDataMap = new JobDataMap();

        jobDataMap.put("email", scheduleEmailRequest.getEmail());
        jobDataMap.put("subject", scheduleEmailRequest.getSubject());
        jobDataMap.put("body", scheduleEmailRequest.getBody());

        return JobBuilder.newJob(EmailJob.class)
                .withIdentity(UUID.randomUUID().toString(), "email-jobs")
                .withDescription("Send Email Job")
                .usingJobData(jobDataMap)
                .storeDurably()
                .build();
    }

    private Trigger buildJobTrigger(JobDetail jobDetail, ZonedDateTime startAt) {
        return TriggerBuilder.newTrigger()
                .forJob(jobDetail)
                .withIdentity(jobDetail.getKey().getName(), "email-triggers")
                .withDescription("Send Email Trigger")
                .startAt(Date.from(startAt.toInstant()))
                .withSchedule(SimpleScheduleBuilder.simpleSchedule().withMisfireHandlingInstructionFireNow())
                .build();
    }
}

Spring Boot has built-in support for Quartz. It automatically creates a Quartz Scheduler bean with the configuration that we supplied in the application.properties file. That’s why we could directly inject the Scheduler in the controller.

In the /scheduleEmail API,

  • We first validate the request body
  • Then, we build a JobDetail instance with a JobDataMap that contains the recipient email, subject, and body. The JobDetail that we create is of type EmailJob. We’ll define EmailJob in the next section.
  • Next, we build a Trigger instance that defines when the Job should be executed.
  • Finally, we schedule the Job using the scheduler.scheduleJob() API.

Creating the Quartz Job to send emails

Let’s now define the Job that sends the actual emails. Spring Boot provides a wrapper around Quartz Scheduler’s Job interface called QuartzJobBean. This allows you to create Quartz Jobs as Spring beans where you can autowire other beans.

Let’s create our EmailJob by extending QuartzJobBean –

package com.example.quartzdemo.job;

import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.mail.MailProperties;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.mail.javamail.MimeMessageHelper;
import org.springframework.scheduling.quartz.QuartzJobBean;
import org.springframework.stereotype.Component;

import javax.mail.MessagingException;
import javax.mail.internet.MimeMessage;
import java.nio.charset.StandardCharsets;

@Component
public class EmailJob extends QuartzJobBean {
    private static final Logger logger = LoggerFactory.getLogger(EmailJob.class);

    @Autowired
    private JavaMailSender mailSender;

    @Autowired
    private MailProperties mailProperties;
    
    @Override
    protected void executeInternal(JobExecutionContext jobExecutionContext) throws JobExecutionException {
        logger.info("Executing Job with key {}", jobExecutionContext.getJobDetail().getKey());

        JobDataMap jobDataMap = jobExecutionContext.getMergedJobDataMap();
        String subject = jobDataMap.getString("subject");
        String body = jobDataMap.getString("body");
        String recipientEmail = jobDataMap.getString("email");

        sendMail(mailProperties.getUsername(), recipientEmail, subject, body);
    }

    private void sendMail(String fromEmail, String toEmail, String subject, String body) {
        try {
            logger.info("Sending Email to {}", toEmail);
            MimeMessage message = mailSender.createMimeMessage();

            MimeMessageHelper messageHelper = new MimeMessageHelper(message, StandardCharsets.UTF_8.toString());
            messageHelper.setSubject(subject);
            messageHelper.setText(body, true);
            messageHelper.setFrom(fromEmail);
            messageHelper.setTo(toEmail);

            mailSender.send(message);
        } catch (MessagingException ex) {
            logger.error("Failed to send email to {}", toEmail);
        }
    }
}

Running the Application and Testing the API

It’s time to run the application and watch the live action. Open your terminal, go to the root directory of the project and type the following command to run it –

mvn spring-boot:run -Dspring.mail.password=<YOUR_SMTP_PASSWORD>

You don’t need to pass the spring.mail.password command line argument if you have already set the password in the application.properties file.

The application will start on port 8080 by default. Let’s now schedule an email using the /scheduleEmail API –
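
For example, a request to the API could look like the following; the recipient address and schedule values are placeholders, and the dateTime/timeZone formats follow the LocalDateTime and ZoneId fields of ScheduleEmailRequest:

curl -H "Content-Type: application/json" \
     -d '{"email": "someone@example.com", "subject": "Scheduled Email", "body": "Hello from Quartz!", "dateTime": "2021-02-06T18:30:00", "timeZone": "America/New_York"}' \
     http://localhost:8080/scheduleEmail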

Spring Boot Quartz Scheduler Email Job Scheduler API

And here I get the email at the specified time 🙂

Spring Boot Quartz Scheduler Dynamic Email Job Scheduler API Example

Conclusion

That’s all folks! I hope you enjoyed the article. You can find the complete source code of the project in the GitHub repository. Consider giving the project a star on GitHub if you find it useful.

Microkernel Architecture
14 May 2021

What is Microkernel Architecture?

Many third-party applications, in view of software architecture best practices, make software packages available as downloadable plug-ins or versions. It is to this particular type of application that the Microkernel Architecture is most suited, as a result of which it is also called the plug-in architecture pattern.

With this style, enterprise application development services can add pluggable features to an existing version of the software, providing for extensibility. The architecture is made up of two kinds of components: one part dedicated to the core system and the other to the plug-ins. Minimalism is followed while designing the core of the architecture, which stores just the right proportion of components to render the system effective.

The most relatable example of the Microkernel Architecture would be any internet browser. You download a version of the application, which is essentially the core software, and depending on the missing functionalities, download and add plug-ins. Enterprise software development services rely on this pattern for designing large-scale, complex applications as well. An example of such a business application could be software for processing insurance claims.
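
To make the pattern concrete, here is a minimal sketch of a claims-processing core with pluggable claim handlers; all class, interface, and method names are invented for illustration:

import java.util.HashMap;
import java.util.Map;

public class MicrokernelDemo {

    // The minimal contract the core system exposes to its plug-ins
    interface ClaimPlugin {
        String handles();              // the claim type this plug-in processes
        String process(String claim);  // plug-in specific processing logic
    }

    // The core system: just registration and routing; all business
    // capability lives in the plug-ins
    static class ClaimProcessorCore {
        private final Map<String, ClaimPlugin> registry = new HashMap<>();

        void register(ClaimPlugin plugin) {
            registry.put(plugin.handles(), plugin);
        }

        String process(String type, String claim) {
            ClaimPlugin plugin = registry.get(type);
            return (plugin == null)
                    ? "No plug-in installed for claim type: " + type
                    : plugin.process(claim);
        }
    }

    public static void main(String[] args) {
        ClaimProcessorCore core = new ClaimProcessorCore();

        // Installing a plug-in extends the core without modifying it
        core.register(new ClaimPlugin() {
            public String handles() { return "auto"; }
            public String process(String claim) { return "Auto claim processed: " + claim; }
        });

        System.out.println(core.process("auto", "windshield damage"));
        System.out.println(core.process("home", "roof leak"));
    }
}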

Benefits

  • This design has proven its worth as a highly flexible one. Plug-ins make it possible to react to changes in near real-time, which is critical to sustenance. Such changes can be dealt with in isolation, with the core system staying in its stable state for the most part, therefore requiring fewer developmental updates over time.
  • An enterprise software development company could face a downtime issue at the time of deployment, but that can be minimized or altogether avoided by adding plug-in modules to the core dynamically.
  • A software development company could test plug-in prototypes in isolation and check for performance issues without affecting the core of the architecture.
  • Microkernel Architecture is most appreciated for maintaining high-performance applications as the software can be customized to include only those capabilities that are needed the most. 

Potential Drawbacks 

  • Apps such as those conceptualized by enterprise mobile app development services have a non-negotiable scope to scale. However, the Microkernel Architecture is grounded in the design of the product and naturally suited to apps that are smaller in size.
  • An enterprise app development company could find the Microkernel pattern rather hard to execute due to the vast number of plug-ins compatible with the core. This calls for drawing out governance contracts, updating plug-in registries, and so many formalities that the implementation becomes a challenge.

Ideal For

Microkernel Architecture is best suited for workflow applications, in addition to those that need job scheduling. As pointed out above, like a web browser, any application that you want to release with just the right amount of specs, but with room that can be filled by installing additional plug-ins, can be built with this design pattern.

Spring Boot Starter Parent Example
30 Mar 2021

In this Spring Boot tutorial, we will learn about the spring-boot-starter-parent dependency, which is used internally by all Spring Boot dependencies. We will also learn what configurations this dependency provides, and how to override them.

What is the spring-boot-starter-parent dependency?

The spring-boot-starter-parent dependency is the parent POM providing dependency and plugin management for Spring Boot-based applications. It contains the default versions of Java to use, the default versions of dependencies that Spring Boot uses, and the default configuration of the Maven plugins.
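
To use it, you declare it as the parent of your own project’s pom.xml. A typical declaration (using the 2.0.0.RELEASE version that appears later in this article) looks like this:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.0.RELEASE</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>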

A few important configurations provided by this file are shown below. Please refer to this link to read the complete configuration.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-dependencies</artifactId>
        <version>${revision}</version>
        <relativePath>../../spring-boot-dependencies</relativePath>
    </parent>
    <artifactId>spring-boot-starter-parent</artifactId>
    <packaging>pom</packaging>
    <name>Spring Boot Starter Parent</name>
    <description>Parent pom providing dependency and plugin management for applications built with Maven</description>
    <properties>
        <java.version>1.8</java.version>
        <resource.delimiter>@</resource.delimiter> <!-- delimiter that doesn't clash with Spring ${} placeholders -->
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <maven.compiler.source>${java.version}</maven.compiler.source>
        <maven.compiler.target>${java.version}</maven.compiler.target>
    </properties>
    ...
    <resource>
        <directory>${basedir}/src/main/resources</directory>
        <filtering>true</filtering>
        <includes>
            <include>**/application*.yml</include>
            <include>**/application*.yaml</include>
            <include>**/application*.properties</include>
        </includes>
    </resource>
</project>

The spring-boot-starter-parent dependency further inherits from spring-boot-dependencies, which is defined in the parent section near the top of the above POM file.

This is the file that actually contains the default versions to use for all libraries. The following code shows the different versions of various dependencies that are configured in spring-boot-dependencies:

<properties>
    <!-- Dependency versions -->
    <activemq.version>5.15.3</activemq.version>
    <antlr2.version>2.7.7</antlr2.version>
    <appengine-sdk.version>1.9.63</appengine-sdk.version>
    <artemis.version>2.4.0</artemis.version>
    <aspectj.version>1.8.13</aspectj.version>
    <assertj.version>3.9.1</assertj.version>
    <atomikos.version>4.0.6</atomikos.version>
    <bitronix.version>2.1.4</bitronix.version>
    <byte-buddy.version>1.7.11</byte-buddy.version>
    <caffeine.version>2.6.2</caffeine.version>
    <cassandra-driver.version>3.4.0</cassandra-driver.version>
    <classmate.version>1.3.4</classmate.version>
    ...
</properties>

The above list is very long, and you can read the complete list at this link.

How to override default dependency version?

As you can see, Spring Boot has a default version to use for most dependencies. You can override the version as per your choice or project need in the properties tag of your project’s pom.xml file.

e.g., Spring Boot uses 2.8.2 as the default version of the Google GSON library.

<groovy.version>2.4.14</groovy.version>
<gson.version>2.8.2</gson.version>
<h2.version>1.4.197</h2.version>

Say I want to use version 2.7 of the gson dependency. I will give this information in the properties tag like this:

<properties>
    <java.version>1.8</java.version>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <gson.version>2.7</gson.version>
</properties>

Now in your Eclipse editor, you can see the message: The managed version is 2.7. The artifact is managed in org.springframework.boot:spring-boot-dependencies:2.0.0.RELEASE.

GSON resolved dependency

Drop me your questions in the comments section.

Happy Learning !!

Intro to Spring Boot Starters
26 Mar 2021

1. Overview

Dependency management is a critical aspect of any complex project. And doing this manually is less than ideal; the more time you spend on it, the less time you have for the other important aspects of the project.

Spring Boot starters were built to address exactly this problem. Starter POMs are a set of convenient dependency descriptors that you can include in your application. You get a one-stop-shop for all the Spring and related technology that you need, without having to hunt through sample code and copy-paste loads of dependency descriptors.

We have more than 30 Boot starters available – let’s see some of them in the following sections.

2. The Web Starter

First, let’s look at developing a REST service; we can use libraries like Spring MVC, Tomcat and Jackson – a lot of dependencies for a single application.

Spring Boot starters can help to reduce the number of manually added dependencies just by adding one dependency. So instead of manually specifying the dependencies just add one starter as in the following example:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

Now we can create a REST controller. For the sake of simplicity we won’t use a database and will focus on the REST controller:

@RestController
public class GenericEntityController {
    private List<GenericEntity> entityList = new ArrayList<>();

    @RequestMapping("/entity/all")
    public List<GenericEntity> findAll() {
        return entityList;
    }

    @RequestMapping(value = "/entity", method = RequestMethod.POST)
    public GenericEntity addEntity(GenericEntity entity) {
        entityList.add(entity);
        return entity;
    }

    @RequestMapping("/entity/findby/{id}")
    public GenericEntity findById(@PathVariable Long id) {
        return entityList.stream().
                 filter(entity -> entity.getId().equals(id)).
                   findFirst().get();
    }
}

The GenericEntity is a simple bean with id of type Long and value of type String.
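
The listing doesn’t show the bean itself; a minimal sketch consistent with how it is used here and in the JPA test later (where it would also carry @Entity and @Id annotations) might be:

public class GenericEntity {
    private Long id;
    private String value;

    public GenericEntity() {}

    public GenericEntity(String value) {
        this.value = value;
    }

    public GenericEntity(Long id, String value) {
        this.id = id;
        this.value = value;
    }

    public Long getId() { return id; }
    public String getValue() { return value; }
}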

That’s it – with the application running, you can access http://localhost:8080/entity/all and check the controller is working.

We have created a REST application with quite a minimal configuration.

3. The Test Starter

For testing we usually use the following set of libraries: Spring Test, JUnit, Hamcrest, and Mockito. We can include all of these libraries manually, but Spring Boot starter can be used to automatically include these libraries in the following way:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>

Notice that you don’t need to specify the version number of an artifact. Spring Boot will figure out what version to use – all you need to specify is the version of the spring-boot-starter-parent artifact. If later on you need to upgrade the Boot library and dependencies, just upgrade the Boot version in one place and it will take care of the rest.

Let’s actually test the controller we created in the previous example.

There are two ways to test the controller:

  • Using the mock environment
  • Using the embedded Servlet container (like Tomcat or Jetty)

In this example we’ll use a mock environment:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
public class SpringBootApplicationIntegrationTest {
    @Autowired
    private WebApplicationContext webApplicationContext;
    private MockMvc mockMvc;

    @Before
    public void setupMockMvc() {
        mockMvc = MockMvcBuilders.webAppContextSetup(webApplicationContext).build();
    }

    @Test
    public void givenRequestHasBeenMade_whenMeetsAllOfGivenConditions_thenCorrect()
      throws Exception { 
        MediaType contentType = new MediaType(MediaType.APPLICATION_JSON.getType(),
        MediaType.APPLICATION_JSON.getSubtype(), Charset.forName("utf8"));
        mockMvc.perform(MockMvcRequestBuilders.get("/entity/all")).
        andExpect(MockMvcResultMatchers.status().isOk()).
        andExpect(MockMvcResultMatchers.content().contentType(contentType)).
        andExpect(jsonPath("$", hasSize(4))); 
    } 
}

The above test calls the /entity/all endpoint and verifies that the JSON response contains 4 elements. For this test to pass, we also have to initialize our list in the controller class:

public class GenericEntityController {
    private List<GenericEntity> entityList = new ArrayList<>();

    {
        entityList.add(new GenericEntity(1l, "entity_1"));
        entityList.add(new GenericEntity(2l, "entity_2"));
        entityList.add(new GenericEntity(3l, "entity_3"));
        entityList.add(new GenericEntity(4l, "entity_4"));
    }
    //...
}

What is important here is that the @WebAppConfiguration annotation and MockMvc are part of the spring-test module, hasSize is a Hamcrest matcher, and @Before is a JUnit annotation. These are all available by importing this one starter dependency.

4. The Data JPA Starter

Most web applications have some sort of persistence – and that’s quite often JPA.

Instead of defining all of the associated dependencies manually – let’s go with the starter instead:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>

Notice that out of the box we have automatic support for at least the following databases: H2, Derby and Hsqldb. In our example, we’ll use H2.

Now let’s create the repository for our entity:

public interface GenericEntityRepository extends JpaRepository<GenericEntity, Long> {}

Time to test the code. Here is the JUnit test:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
public class SpringBootJPATest {
    
    @Autowired
    private GenericEntityRepository genericEntityRepository;

    @Test
    public void givenGenericEntityRepository_whenSaveAndRetreiveEntity_thenOK() {
        GenericEntity genericEntity = 
          genericEntityRepository.save(new GenericEntity("test"));
        GenericEntity foundedEntity = 
          genericEntityRepository.findOne(genericEntity.getId());
        
        assertNotNull(foundedEntity);
        assertEquals(genericEntity.getValue(), foundedEntity.getValue());
    }
}

We didn’t spend time on specifying the database vendor, URL connection, and credentials. No extra configuration is necessary as we’re benefiting from the solid Boot defaults; but of course all of these details can still be configured if necessary.

5. The Mail Starter

A very common task in enterprise development is sending email, and dealing directly with the Java Mail API can usually be difficult.

Spring Boot starter hides this complexity – mail dependencies can be specified in the following way:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-mail</artifactId>
</dependency>

Now we can directly use the JavaMailSender, so let’s write some tests.

For testing purposes, we need a simple SMTP server. In this example, we’ll use Wiser. This is how we can include it in our POM:

<dependency>
    <groupId>org.subethamail</groupId>
    <artifactId>subethasmtp</artifactId>
    <version>3.1.7</version>
    <scope>test</scope>
</dependency>

Here is the source code for the test:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
public class SpringBootMailTest {
    @Autowired
    private JavaMailSender javaMailSender;

    private Wiser wiser;

    private String userTo = "user2@localhost";
    private String userFrom = "user1@localhost";
    private String subject = "Test subject";
    private String textMail = "Text subject mail";

    @Before
    public void setUp() throws Exception {
        final int TEST_PORT = 25;
        wiser = new Wiser(TEST_PORT);
        wiser.start();
    }

    @After
    public void tearDown() throws Exception {
        wiser.stop();
    }

    @Test
    public void givenMail_whenSendAndReceived_thenCorrect() throws Exception {
        SimpleMailMessage message = composeEmailMessage();
        javaMailSender.send(message);
        List<WiserMessage> messages = wiser.getMessages();

        assertThat(messages, hasSize(1));
        WiserMessage wiserMessage = messages.get(0);
        assertEquals(userFrom, wiserMessage.getEnvelopeSender());
        assertEquals(userTo, wiserMessage.getEnvelopeReceiver());
        assertEquals(subject, getSubject(wiserMessage));
        assertEquals(textMail, getMessage(wiserMessage));
    }

    private String getMessage(WiserMessage wiserMessage)
      throws MessagingException, IOException {
        return wiserMessage.getMimeMessage().getContent().toString().trim();
    }

    private String getSubject(WiserMessage wiserMessage) throws MessagingException {
        return wiserMessage.getMimeMessage().getSubject();
    }

    private SimpleMailMessage composeEmailMessage() {
        SimpleMailMessage mailMessage = new SimpleMailMessage();
        mailMessage.setTo(userTo);
        mailMessage.setReplyTo(userFrom);
        mailMessage.setFrom(userFrom);
        mailMessage.setSubject(subject);
        mailMessage.setText(textMail);
        return mailMessage;
    }
}

In the test, the @Before and @After methods are in charge of starting and stopping the mail server.

Notice that we’re wiring in the JavaMailSender bean – the bean was automatically created by Spring Boot.

Just like any other defaults in Boot, the email settings for the JavaMailSender can be customized in application.properties:

spring.mail.host=localhost
spring.mail.port=25
spring.mail.properties.mail.smtp.auth=false

So we configured the mail server on localhost:25 and we didn’t require authentication.

6. Conclusion

In this article we have given an overview of Starters, explained why we need them, and provided examples of how to use them in your projects.

Let’s recap the benefits of using Spring Boot starters:

  • increase pom manageability
  • production-ready, tested & supported dependency configurations
  • decrease the overall configuration time for the project

Spring Boot Logging Best Practices Guide
05 May 2021

Logging in Spring Boot can be confusing, and the wide range of tools and frameworks makes it a challenge to even know where to start. This guide talks through the most common best practices for Spring Boot logging and gives five key suggestions to add to your logging tool kit.

What’s in the Spring Boot Box?

The Spring Boot Starters all depend on spring-boot-starter-logging. This is where the majority of the logging dependencies for your application come from. The dependencies involve a facade (SLF4J) and frameworks (Logback). It’s important to know what these are and how they fit together.

SLF4J is a simple front-facing facade supported by several logging frameworks. Its main advantage is that you can easily switch from one logging framework to another. In our case, we can easily switch our logging from Logback to Log4j, Log4j2 or JUL.

The dependencies we use will also write logs. For example, Hibernate uses SLF4J, which fits perfectly as we have that available. However, the AWS SDK for Java uses Apache Commons Logging (JCL). spring-boot-starter-logging includes the necessary bridges to ensure those logs are delegated to our logging framework out of the box.

SLF4J usage:

At a high level, all the application code has to worry about is:

  1. Getting an instance of an SLF4J logger (Regardless of the underlying framework):
    private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);
  2. Writing some logs:
    LOG.info("My message set at info level");

Logback or Log4j2?

Spring Boot’s default logging framework is Logback. Your application code should interface only with the SLF4J facade so that it’s easy to switch to an alternative framework if necessary.

Log4j2 is newer and claims to improve on the performance of Logback. Log4j2 also supports a wide range of appenders so it can log to files, HTTP, databases, Cassandra, Kafka, as well as supporting asynchronous loggers. If logging performance is of high importance, switching to log4j2 may improve your metrics. Otherwise, for simplicity, you may want to stick with the default Logback implementation.

This guide will provide configuration examples for both frameworks.

Want to use Log4j2? You’ll need to exclude spring-boot-starter-logging and include spring-boot-starter-log4j2.
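
In Maven terms, that swap might look like the following sketch (the exclusion goes on whichever starter pulls in the default logging; spring-boot-starter-web is used here as an example):

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>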


5 Tips for Getting the Most Out of Your Spring Boot Logging

With your initial setup out of the way, here are 5 top tips for Spring Boot logging.

1. Configuring Your Log Format

Spring Boot Logging provides default configurations for logback and log4j2. These specify the logging level, the appenders (where to log) and the format of the log messages.

For all but a few specific packages, the default log level is set to INFO, and by default, the only appender used is the Console Appender, so logs will be directed only to the console.

The default format for the logs using logback looks like this:

(Screenshot: the default Logback console output)

Let’s take a look at that last line of log, which was a statement created from within a controller with the message “My message set at info level”.

It looks simple, yet the default log pattern for logback seems “off” at first glance. As much as it looks like it could be, it’s not regex, it doesn’t parse email addresses, and actually, when we break it down it’s not so bad.

%clr(%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}}){faint}
%clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint}
%clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint}
%m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}

Understanding the Default Logback Pattern

The variables that are available for the log format allow you to create meaningful logs, so let’s look a bit deeper at the ones in the default log pattern example.

  • %clr – specifies a colour. By default, it is based on log levels, e.g. INFO is green. If you want to specify specific colours, you can do that too. The format is %clr(Your message){your colour}. So, for example, if we wanted to add “Demo” to the start of every log message, in green, we would write %clr(Demo){green}.
  • %d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}} – %d is the current date, and the part in curly braces is the format. ${VARIABLE:-default} is a way of specifying that we should use the $VARIABLE environment variable for the format, if it is available, and if not, fall back to default. This is handy if you want to override these values in your properties files, by providing arguments, or by setting environment variables. In this example, the default format is yyyy-MM-dd HH:mm:ss.SSS unless we specify a variable named LOG_DATEFORMAT_PATTERN. In the logs above, we can see 2020-10-19 10:09:58.152 matches the default pattern, meaning we did not specify a custom LOG_DATEFORMAT_PATTERN.
  • ${LOG_LEVEL_PATTERN:-%5p} – uses the LOG_LEVEL_PATTERN if it is defined, else pads the log level to 5 characters (e.g. “INFO” becomes “ INFO”, while “TRACE” needs no padding). This keeps the rest of the log aligned, as the level will always occupy 5 characters.
  • ${PID:- } – the process ID, taken from the PID environment variable if it exists; otherwise a space.
  • %t – the name of the thread triggering the log message.
  • %logger – the name of the logger (up to 39 characters); in our case this is the class name.
  • %m – the log message.
  • %n – the platform-specific line separator.
  • %wEx – if one exists, the stack trace of any exception, formatted using Spring Boot’s ExtendedWhitespaceThrowableProxyConverter.

Customising the Log Format

You can customise the ${} variables found in the default configuration by passing in properties or environment variables. For example, you may set logging.pattern.console to override the whole of the console log pattern.

However, for more control, including adding additional appenders, it is recommended to create your own logback-spring.xml and place it inside your resources folder. You can do the same with log4j2 by adding log4j2-spring.xml to your resources folder.
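
For example, overriding the date format and the whole console pattern from application.properties might look like this (the pattern itself is only an illustration):

logging.pattern.dateformat=yyyy-MM-dd HH:mm:ss
logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss} %5p %logger{36} : %m%n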

Armed with the ability to customise your logs, you should consider adding:

  • Application name.
  • A request ID.
  • The endpoint being requested (e.g. /health).

There are a few items in the default log that I would remove unless you have a specific use case for them:

  • The ‘---’ separator.
  • The thread name.
  • The process ID.

With the ability to customise these through the use of the logback-spring.xml or log4j2-spring.xml, the format of your logs is fully within your control.

2. Configuring the Destination for Your Logs (Appenders and Loggers)

An appender is just a fancy name for the part of the logging framework that sends your logs to a particular target. Both frameworks can output to console, over HTTP, to databases, or over a TCP socket, as well as to many other targets. The way we configure the destination for the logs is by adding, removing and configuring these appenders. 

You have more control over which appenders you use, and the configuration of them, if you create your own custom .xml configuration. However, the default logging configuration does make use of environment properties that allow you to override some parts of it, for example, the date format.

Preset configurations for logging to files are available within Spring Boot Logging. You can use the logback configuration with a file appender, or the log4j2 configuration with a file appender, if you specify logging.file or logging.path in your application properties.

The official docs for logback appenders and log4j2 appenders detail the parameters required for each of the appenders available, and how to configure them in your XML file. One tip for choosing the destination for your logs is to have a plan for rotating them. Writing logs to a file always feels like a great idea, until the storage used for that file runs out and brings down the whole service. 

Log4j and logback both have a RollingFileAppender which handles rotating these log files based on file size, or time, and it’s exactly that which Spring Boot Logging uses if you set the logging.file property. 
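
For example, a single property is enough to switch on file logging backed by that rolling behaviour (the path is illustrative):

logging.file=logs/application.log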

3. Logging as a Cross-Cutting Concern to Keep Your Code Clean (Using Filters and Aspects)

You might want to log every HTTP request your API receives. That’s a fairly normal requirement, but putting a log statement into every controller is unnecessary duplication, and it’s easy to forget one and make mistakes. A requirement to log every method call within your packages would be even more cumbersome.

I’ve seen developers use this style of logging at trace level so that they can turn it on to see exactly what is happening in a production environment. Adding log statements to the start and end of every method is messy, and there is a better way. This is where filters and aspects save the day and avoid the code duplication.

When to Use a Filter Vs When to Use Aspect-Oriented Programming

If you are looking to create log statements related to specific requests, you should opt for using filters, as they are part of the handling chain that your application already goes through for each request. They are easier to write, easier to test and usually more performant than using aspects. If you are considering more cross-cutting concerns, for example, audit logging, or logging every method that causes an exception to be thrown, use AOP. 

Using a Filter to Log Every Request

Filters can be registered with your web container by creating a class implementing javax.servlet.Filter and annotating it with @Component, or adding it as an @Bean in one of your configuration classes. When your spring-boot-starter application starts up, it will create the Filter and register it with the container.

You can choose to create your own Filter, or to use an existing one. To make use of the existing Filter, you need to supply a CommonsRequestLoggingFilter bean and set your logging level to debug. You’ll get something that looks like:

2020-10-27 18:50:50.427 DEBUG 24168 --- [nio-8080-exec-2] o.a.coyote.http11.Http11InputBuffer      : Received [GET /health HTTP/1.1
tracking-header: my-tracking
User-Agent: PostmanRuntime/7.26.5
Accept: */*
Postman-Token: 04a661b7-209c-43c3-83ea-e09466cf3d92
Host: localhost:8080
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
]
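
Supplying that bean takes only a few lines. Here’s a minimal sketch (the configuration class name is illustrative); remember to also set logging.level.org.springframework.web.filter.CommonsRequestLoggingFilter=DEBUG:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.filter.CommonsRequestLoggingFilter;

@Configuration
public class RequestLoggingConfig {

    // Registers Spring's built-in request logging filter
    @Bean
    public CommonsRequestLoggingFilter requestLoggingFilter() {
        CommonsRequestLoggingFilter filter = new CommonsRequestLoggingFilter();
        filter.setIncludeQueryString(true); // log query parameters
        filter.setIncludeHeaders(true);     // log request headers
        filter.setIncludePayload(true);     // log the request body
        filter.setMaxPayloadLength(1000);   // cap how much of the body is logged
        return filter;
    }
}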

If you use the existing one, you have little control over the message that gets logged. 

If you want more control, create your own Filter; you then have full control over the content of the log message.

Using Aspects for Cross-Cutting Concerns

Aspect-oriented programming enables you to fulfill cross-cutting concerns, like logging for example, in one place. You can do this without your logging code needing to sprawl across every class.

This approach is great for use cases such as:

  • Logging any exceptions thrown from any method within your packages (See @AfterThrowing)
  • Logging performance metrics by timing before/after each method is run (See @Around)
  • Audit logging. You can log calls to methods that carry a custom annotation, such as @Audit. You only need to create a pointcut matching calls to methods with that annotation

Let’s start with a simple example – we want to log the name of every public method that we call within our package, com.example.demo. There are only a few steps to writing an Aspect that will run before every public method in a package that you specify.

  1. Include spring-boot-starter-aop in your pom.xml or build.gradle.
  2. Add @EnableAspectJAutoProxy to one of your configuration classes. This line tells spring-boot that you want to enable AspectJ support.
  3. Add your pointcut, which defines a pattern that is matched against method signatures as they run. You can find more about how to construct your matching pattern in the spring boot documentation for AOP. In our example, we match any method inside the com.example.demo package.
  4. Add your Aspect. This defines when you want to run your code in relation to the pointcut (E.g, before, after or around the methods that it matches). In this example, the @Before annotation causes the method to be executed before any methods that match the pointcut. 
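
Putting those four steps together, a minimal sketch of the aspect (assuming the com.example.demo package from our example) could look like this:

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class MyAspect {

    private static final Logger LOG = LoggerFactory.getLogger(MyAspect.class);

    // Step 3: match every public method in the com.example.demo package
    @Pointcut("execution(public * com.example.demo..*.*(..))")
    public void publicMethods() {
    }

    // Step 4: run before each matched method and log its name
    @Before("publicMethods()")
    public void logMethodCall(JoinPoint joinPoint) {
        LOG.info("Called {}", joinPoint.getSignature().getName());
    }
}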

That’s all there is to logging every method call. The logs will appear as:

2020-10-27 19:26:33.269  INFO 2052 --- [nio-8080-exec-2]
com.example.demo.MyAspect                : Called checkHealth

By making changes to your pointcut, you can write logs for every method annotated with a specific annotation. For example, consider what you can do with:

@annotation(com.example.demo.Audit)

(This would run for every method annotated with a custom annotation, @Audit.)

4. Applying Context to Your Logs Using MDC

MDC (Mapped Diagnostic Context) is a complex-sounding name for a map of key-value pairs, associated with a single thread. Each thread has its own map. You can add keys/values to the map at runtime, and then reference the keys from that map in your logging pattern. 

The approach comes with a warning that threads may be reused, and so you’ll need to make sure to clear your MDC after each request to avoid your context leaking from one request to the next.

MDC is accessible through SLF4J and supported by both Logback and Log4j2, so we don’t need to worry about the specifics of the underlying implementation. 

The MDC section in the SLF4J documentation gives the simplest examples.

Tracking Requests Through Your Application Using Filters and MDC

Want to be able to group logs for a specific request? The Mapped Diagnostic Context (MDC) will help. 

The steps are:

  1. Add a header to each request going to your API, for example, ‘tracking-id’. You can generate this on the fly (I suggest using a UUID) if your client cannot provide one.
  2. Create a filter that runs once per request and stores that value in the MDC.
  3. Update your logging pattern to reference the key in the MDC to retrieve the value.

Using a Filter, you can read values from the request and set them on the MDC. Make sure to clean up after the request by calling MDC.clear(), preferably in a finally block so that it always runs.
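
A minimal sketch of such a filter (the header name and MDC key match the examples in this section):

import java.io.IOException;
import java.util.UUID;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class TrackingIdFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        // Use the client-supplied tracking id, or generate one on the fly
        String trackingId = request.getHeader("tracking-id");
        if (trackingId == null || trackingId.isEmpty()) {
            trackingId = UUID.randomUUID().toString();
        }
        try {
            MDC.put("tracking", trackingId);
            filterChain.doFilter(request, response);
        } finally {
            // Threads are reused, so always clear the MDC after the request
            MDC.clear();
        }
    }
}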

After setting the value on your MDC, just add %X{tracking} to your logging pattern (replacing “tracking” with the key you have put in the MDC) and your logs will contain the value in every log message for that request.

If a client reports a problem, as long as you can get a unique tracking-id from your client, then you’ll be able to search your logs and pull up every log statement generated from that specific request.
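
For example, extending the earlier console-pattern sketch with the MDC key:

logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss} %5p [%X{tracking}] %logger{36} : %m%n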

Other values that you may want to put into your MDC and include on every log message:

  • The application version.
  • Details of the request, for example, the path.
  • Details of the logged-in user, for example, the username.

5. Unit Testing Your Log Statements

Why Test Your Logs?

You can unit test your logging code. Too often this is overlooked because the log statements return void. For example, logger.info("foo"); does not return a value that you can assert against.

It’s easy to make mistakes. Log statements usually involve parameters or formatted strings, and it’s easy to put log statements in the wrong place. Unit testing reassures you that your logs do what you expect and that you’re covered when refactoring to avoid any accidental modifications to your logging behaviour.

The Approach to Testing Your Logs

The Problem

SLF4J’s LoggerFactory.getLogger is static, making it difficult to mock, and searching through outputted log files in our unit tests is error-prone (e.g. we need to consider resetting the log files between each unit test). So how do we assert against the logs?

The Solution

The trick is to add your own test appender to the logging framework (e.g. Logback or Log4j2) that captures the logs from your application in memory, allowing you to assert against the output later. The steps are:

  1. Before each test case, add an appender to your logger.
  2. Within the test, call your application code that logs some output.
  3. The logger will delegate to your test appender.
  4. Assert that your expected logs have been received by your test appender.
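
As a concrete illustration, here’s a minimal JUnit 5 sketch using Logback’s ListAppender as the test appender (MyService and its “Work completed” message are hypothetical):

import static org.assertj.core.api.Assertions.assertThat;

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.read.ListAppender;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.slf4j.LoggerFactory;

class MyServiceLoggingTest {

    private Logger logger;
    private ListAppender<ILoggingEvent> listAppender;

    @BeforeEach
    void setUp() {
        // Attach an in-memory appender to the logger under test
        logger = (Logger) LoggerFactory.getLogger(MyService.class);
        listAppender = new ListAppender<>();
        listAppender.start();
        logger.addAppender(listAppender);
    }

    @AfterEach
    void tearDown() {
        logger.detachAppender(listAppender);
    }

    @Test
    void logsAtInfoLevel() {
        new MyService().doWork();

        // The logger delegated to our test appender, so we can assert on what it captured
        ILoggingEvent event = listAppender.list.get(0);
        assertThat(event.getLevel()).isEqualTo(Level.INFO);
        assertThat(event.getFormattedMessage()).isEqualTo("Work completed");
    }
}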

Each logging framework has suitable appenders, but referencing those concrete appenders in our tests means we need to depend on the specific framework rather than SLF4J. That’s not ideal, but the alternatives (searching through logged output in files, or implementing our own SLF4J implementation) are overkill, making this the pragmatic choice.

Here are a couple of tricks for unit testing using JUnit 4 rules or JUnit 5 extensions that will keep your test classes clean, and reduce the coupling with the logging framework.

Testing Log Statements Using JUnit 5 Extensions in Two Steps

JUnit 5 extensions help to avoid code duplication between your tests. Here’s how to set up your logging tests in two steps:

Step 1: Create your JUnit extension

Create your extension for Logback

Create your extension for Log4j2

Step 2: Use that rule to assert against your log statement with logback or log4j2

Testing Log Statements Using JUnit 4 Rules in Two Steps

JUnit 4 rules help to avoid code duplication by extracting the common test code away from the test classes. In our example, we don’t want to duplicate the code for adding a test appender to our logger in every test class.

Step 1: Create your JUnit rule. 

Create your rule for Logback

Create your rule for Log4j2

Step 2: Use that rule to assert against your log statements using logback or log4j2.

With these approaches, you can assert that your log statements have been called with a message and level that you expect. 

Conclusion

The Spring Boot Logging Starter provides everything you need to quickly get started, whilst allowing full control when you need it. We’ve looked at how most logging concerns (formatting, destinations, cross-cutting logging, context and unit tests) can be abstracted away from your core application code.

Any global changes to your logging can be done in one place, and the classes for the rest of your application don’t need to change. At the same time, unit tests for your log statements provide you with reassurance that your log statements are being fired after making any alterations to your business logic.

These are my top 5 tips for configuring Spring Boot Logging. However, when your logging configuration is set up, remember that your logs are only ever as good as the content you put in them. Be mindful of the content you are logging, and make sure you are using the right logging levels.

Two Number Sum Problem Solution
06
Feb
2021

Two Number Sum Problem Solution

Two Number Sum Problem Statement

Given an array of integers, return the indices of the two numbers whose sum is equal to a given target.

You may assume that each input would have exactly one solution, and you may not use the same element twice.

Example:

Given nums = [2, 7, 11, 15], target = 9.

The output should be [0, 1]. 
Because nums[0] + nums[1] = 2 + 7 = 9.

Two Number Sum Problem solution in Java

METHOD 1. Naive approach: Use two for loops

The naive approach is to just use two nested for loops and check if the sum of any two elements in the array is equal to the given target.

Time complexity: O(n^2)

import java.util.Scanner;

class TwoSum {

    // Time complexity: O(n^2)
    private static int[] findTwoSum_BruteForce(int[] nums, int target) {
        for (int i = 0; i < nums.length; i++) {
            for (int j = i + 1; j < nums.length; j++) {
                if (nums[i] + nums[j] == target) {
                    return new int[] { i, j };
                }
            }
        }
        return new int[] {};
    }


    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);

        int n = keyboard.nextInt();
        int[] nums = new int[n];

        for(int i = 0; i < n; i++) {
            nums[i] = keyboard.nextInt();
        }
        int target = keyboard.nextInt();

        keyboard.close();

        int[] indices = findTwoSum_BruteForce(nums, target);

        if (indices.length == 2) {
            System.out.println(indices[0] + " " + indices[1]);
        } else {
            System.out.println("No solution found!");
        }
    }
}
# Output
$ javac TwoSum.java
$ java TwoSum
4 2 7 11 15
9
0 1

METHOD 2. Use a HashMap (Most efficient)

You can use a HashMap to solve the problem in O(n) time complexity. Here are the steps:

  1. Initialize an empty HashMap.
  2. Iterate over the elements of the array.
  3. For every element in the array –
    • Check whether the element’s complement (target - element) already exists in the Map. If it does, return the indices of the complement and the current element.
    • Otherwise, put the element in the Map, and move to the next iteration.

Time complexity: O(n)

import java.util.HashMap;
import java.util.Map;

class TwoSum {
    
    // Time complexity: O(n)
    private static int[] findTwoSum(int[] nums, int target) {
        Map<Integer, Integer> numMap = new HashMap<>();
        for (int i = 0; i < nums.length; i++) {
            int complement = target - nums[i];
            if (numMap.containsKey(complement)) {
                return new int[] { numMap.get(complement), i };
            } else {
                numMap.put(nums[i], i);
            }
        }
        return new int[] {};
    }
}

METHOD 3. Use Sorting along with the two-pointer sliding window approach

There is another approach which works when you need to return the numbers themselves instead of their indices. Here is how it works:

  1. Sort the array.
  2. Initialize two variables, one pointing to the beginning of the array (left) and another pointing to the end of the array (right).
  3. Loop until left < right, and for each iteration
    • if arr[left] + arr[right] == target, then return the two numbers.
    • if arr[left] + arr[right] < target, increment the left index.
    • else, decrement the right index.

This approach is called the two-pointer sliding window approach. It is a very common pattern for solving array-related problems.

Time complexity: O(n*log(n))

import java.util.Arrays;

class TwoSum {

    // Time complexity: O(n*log(n))
    private static int[] findTwoSum_Sorting(int[] nums, int target) {
        Arrays.sort(nums);
        int left = 0;
        int right = nums.length - 1;
        while(left < right) {
            if(nums[left] + nums[right] == target) {
                return new int[] {nums[left], nums[right]};
            } else if (nums[left] + nums[right] < target) {
                left++;
            } else {
                right--;
            }
        }
        return new int[] {};
    }
}


Building Reactive Rest APIs with Spring WebFlux and Reactive MongoDB
05
Mar
2021

Building Reactive Rest APIs with Spring WebFlux and Reactive MongoDB

Spring 5 has embraced the reactive programming paradigm by introducing a brand new reactive framework called Spring WebFlux.

Spring WebFlux is an asynchronous framework from the bottom up. It can run on Servlet containers using the Servlet 3.1 non-blocking IO API, as well as on other async runtimes such as Netty and Undertow.

It will be available for use alongside Spring MVC. Yes, Spring MVC is not going anywhere. It’s a popular web framework that developers have been using for a long time.

But you now have a choice between the new reactive framework and the traditional Spring MVC. You can choose either of them depending on your use case.

Spring WebFlux uses a library called Reactor for its reactive support. Reactor is an implementation of the Reactive Streams specification.

Reactor provides two main types called Flux and Mono. Both of these types implement the Publisher interface provided by Reactive Streams. Flux is used to represent a stream of 0..N elements, and Mono is used to represent a stream of 0..1 elements.

Although Spring uses Reactor as a core dependency for most of its internal APIs, it also supports the use of RxJava at the application level.

Programming models supported by Spring WebFlux

Spring WebFlux supports two types of programming models:

  1. A traditional annotation-based model with @Controller, @RequestMapping, and the other annotations that you have been using in Spring MVC.
  2. A brand new functional-style model based on Java 8 lambdas for routing and handling requests.

In this article, we’ll be using the traditional annotation-based programming model. I will write about the functional-style model in a future article.

Let’s build a Reactive Restful Service in Spring Boot

In this article, we’ll build a Restful API for a mini Twitter application. The application will only have a single domain model called Tweet. Every Tweet will have a text and a createdAt field.

We’ll use MongoDB as our data store along with the reactive mongodb driver. We’ll build REST APIs for creating, retrieving, updating and deleting a Tweet. All the REST APIs will be asynchronous and will return a Publisher.

We’ll also learn how to stream data from the database to the client.

Finally, we’ll write integration tests to test all the APIs using the new asynchronous WebTestClient provided by Spring 5.

Creating the Project

Let’s use the Spring Initializr web app to generate our application. Follow the steps below to generate the project –

  1. Head over to http://start.spring.io
  2. Enter artifact’s value as webflux-demo
  3. Add Reactive Web and Reactive MongoDB dependencies
  4. Click Generate to generate and download the Project.

Once the project is downloaded, unzip it and import it into your favorite IDE. The project’s directory structure should look like this –

(Screenshot: the generated project’s directory structure)

Configuring MongoDB

You can configure MongoDB by simply adding the following property to the application.properties file –

spring.data.mongodb.uri=mongodb://localhost:27017/webflux_demo

Spring Boot will read this configuration on startup and automatically configure the data source.

Creating the Domain Model

Let’s create our domain model – Tweet. Create a new package called model inside com.example.webfluxdemo package and then create a file named Tweet.java with the following contents –

package com.example.webfluxdemo.model;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;
import java.util.Date;

@Document(collection = "tweets")
public class Tweet {
    @Id
    private String id;

    @NotBlank
    @Size(max = 140)
    private String text;

    @NotNull
    private Date createdAt = new Date();

    public Tweet() {

    }

    public Tweet(String text) {
        this.text = text;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getText() {
        return text;
    }

    public void setText(String text) {
        this.text = text;
    }

    public Date getCreatedAt() {
        return createdAt;
    }

    public void setCreatedAt(Date createdAt) {
        this.createdAt = createdAt;
    }
}

Simple enough! The Tweet model contains a text and a createdAt field. The text field is annotated with @NotBlank and @Size annotations to ensure that it is not blank and has a maximum of 140 characters.

Creating the Repository

Next, we’re going to create the data access layer which will be used to access the MongoDB database. Create a new package called repository inside com.example.webfluxdemo and then create a new file called TweetRepository.java with the following contents –

package com.example.webfluxdemo.repository;

import com.example.webfluxdemo.model.Tweet;
import org.springframework.data.mongodb.repository.ReactiveMongoRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface TweetRepository extends ReactiveMongoRepository<Tweet, String> {

}

The TweetRepository interface extends from ReactiveMongoRepository which exposes various CRUD methods on the Document.

Spring Boot automatically plugs in an implementation of this interface called SimpleReactiveMongoRepository at runtime.

So you get all the CRUD methods on the Document readily available to you without needing to write any code. Following are some of the methods available from SimpleReactiveMongoRepository –

reactor.core.publisher.Flux<T>  findAll(); 

reactor.core.publisher.Mono<T>  findById(ID id); 

<S extends T> reactor.core.publisher.Mono<S>  save(S entity); 

reactor.core.publisher.Mono<Void>   delete(T entity);

Notice that all the methods are asynchronous and return a publisher in the form of a Flux or a Mono type.

Creating the Controller Endpoints

Finally, Let’s write the APIs that will be exposed to the clients. Create a new package called controller inside com.example.webfluxdemo and then create a new file called TweetController.java with the following contents –

package com.example.webfluxdemo.controller;

import com.example.webfluxdemo.model.Tweet;
import com.example.webfluxdemo.repository.TweetRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import javax.validation.Valid;

@RestController
public class TweetController {

    @Autowired
    private TweetRepository tweetRepository;

    @GetMapping("/tweets")
    public Flux<Tweet> getAllTweets() {
        return tweetRepository.findAll();
    }

    @PostMapping("/tweets")
    public Mono<Tweet> createTweets(@Valid @RequestBody Tweet tweet) {
        return tweetRepository.save(tweet);
    }

    @GetMapping("/tweets/{id}")
    public Mono<ResponseEntity<Tweet>> getTweetById(@PathVariable(value = "id") String tweetId) {
        return tweetRepository.findById(tweetId)
                .map(savedTweet -> ResponseEntity.ok(savedTweet))
                .defaultIfEmpty(ResponseEntity.notFound().build());
    }

    @PutMapping("/tweets/{id}")
    public Mono<ResponseEntity<Tweet>> updateTweet(@PathVariable(value = "id") String tweetId,
                                                   @Valid @RequestBody Tweet tweet) {
        return tweetRepository.findById(tweetId)
                .flatMap(existingTweet -> {
                    existingTweet.setText(tweet.getText());
                    return tweetRepository.save(existingTweet);
                })
                .map(updatedTweet -> new ResponseEntity<>(updatedTweet, HttpStatus.OK))
                .defaultIfEmpty(new ResponseEntity<>(HttpStatus.NOT_FOUND));
    }

    @DeleteMapping("/tweets/{id}")
    public Mono<ResponseEntity<Void>> deleteTweet(@PathVariable(value = "id") String tweetId) {

        return tweetRepository.findById(tweetId)
                .flatMap(existingTweet ->
                        tweetRepository.delete(existingTweet)
                            .then(Mono.just(new ResponseEntity<Void>(HttpStatus.OK)))
                )
                .defaultIfEmpty(new ResponseEntity<>(HttpStatus.NOT_FOUND));
    }

    // Tweets are Sent to the client as Server Sent Events
    @GetMapping(value = "/stream/tweets", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Tweet> streamAllTweets() {
        return tweetRepository.findAll();
    }
}

All the controller endpoints return a Publisher in the form of a Flux or a Mono. The last endpoint is particularly interesting: there we set the content type to text/event-stream, and the tweets are sent to the browser in the form of Server-Sent Events like this –

data: {"id":"59ba5389d2b2a85ed4ebdafa","text":"tweet1","createdAt":1505383305602}
data: {"id":"59ba5587d2b2a85f93b8ece7","text":"tweet2","createdAt":1505383814847}

Now that we’re talking about event streams, you might ask: doesn’t the following endpoint also return a stream?

@GetMapping("/tweets")
public Flux<Tweet> getAllTweets() {
    return tweetRepository.findAll();
}

And the answer is yes. Flux<Tweet> represents a stream of tweets. But, by default, it will produce a JSON array, because if a stream of individual JSON objects were sent to the browser, the response would not be a valid JSON document as a whole. A browser client has no way to consume a stream other than using Server-Sent Events or WebSocket.

However, non-browser clients can request a stream of JSON by setting the Accept header to application/stream+json, and the response will be a stream of JSON similar to Server-Sent Events but without the extra formatting:

{"id":"59ba5389d2b2a85ed4ebdafa","text":"tweet1","createdAt":1505383305602}
{"id":"59ba5587d2b2a85f93b8ece7","text":"tweet2","createdAt":1505383814847}

Integration Test with WebTestClient

Spring 5 also provides an asynchronous, reactive HTTP client called WebClient for working with asynchronous and streaming APIs. It is a reactive alternative to RestTemplate.

Moreover, you also get a WebTestClient for writing integration tests. The test client can either be run on a live server or used with mock requests and responses.
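
As a quick aside, here’s a minimal sketch of consuming the streaming endpoint with WebClient (not part of our project; it reuses our Tweet model, and the sleep is only there to keep the demo JVM alive):

import com.example.webfluxdemo.model.Tweet;
import org.springframework.http.MediaType;
import org.springframework.web.reactive.function.client.WebClient;

public class TweetStreamClient {

    public static void main(String[] args) throws InterruptedException {
        WebClient client = WebClient.create("http://localhost:8080");

        // Subscribe to the Server-Sent Events stream and print each tweet as it arrives
        client.get()
                .uri("/stream/tweets")
                .accept(MediaType.TEXT_EVENT_STREAM)
                .retrieve()
                .bodyToFlux(Tweet.class)
                .subscribe(tweet -> System.out.println(tweet.getText()));

        Thread.sleep(5000);
    }
}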

We’ll use WebTestClient to write integration tests for our REST APIs. Open WebfluxDemoApplicationTests.java file and add the following tests to it –

package com.example.webfluxdemo;

import com.example.webfluxdemo.model.Tweet;
import com.example.webfluxdemo.repository.TweetRepository;
import org.assertj.core.api.Assertions;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.http.MediaType;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.reactive.server.WebTestClient;
import reactor.core.publisher.Mono;

import java.util.Collections;

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class WebfluxDemoApplicationTests {

	@Autowired
	private WebTestClient webTestClient;

	@Autowired
    TweetRepository tweetRepository;

	@Test
	public void testCreateTweet() {
		Tweet tweet = new Tweet("This is a Test Tweet");

		webTestClient.post().uri("/tweets")
				.contentType(MediaType.APPLICATION_JSON_UTF8)
                .accept(MediaType.APPLICATION_JSON_UTF8)
                .body(Mono.just(tweet), Tweet.class)
				.exchange()
				.expectStatus().isOk()
				.expectHeader().contentType(MediaType.APPLICATION_JSON_UTF8)
				.expectBody()
                .jsonPath("$.id").isNotEmpty()
                .jsonPath("$.text").isEqualTo("This is a Test Tweet");
	}

	@Test
    public void testGetAllTweets() {
	    webTestClient.get().uri("/tweets")
                .accept(MediaType.APPLICATION_JSON_UTF8)
                .exchange()
                .expectStatus().isOk()
                .expectHeader().contentType(MediaType.APPLICATION_JSON_UTF8)
                .expectBodyList(Tweet.class);
    }

    @Test
    public void testGetSingleTweet() {
        Tweet tweet = tweetRepository.save(new Tweet("Hello, World!")).block();

        webTestClient.get()
                .uri("/tweets/{id}", Collections.singletonMap("id", tweet.getId()))
                .exchange()
                .expectStatus().isOk()
                .expectBody()
                .consumeWith(response ->
                        Assertions.assertThat(response.getResponseBody()).isNotNull());
    }

    @Test
    public void testUpdateTweet() {
        Tweet tweet = tweetRepository.save(new Tweet("Initial Tweet")).block();

        Tweet newTweetData = new Tweet("Updated Tweet");

        webTestClient.put()
                .uri("/tweets/{id}", Collections.singletonMap("id", tweet.getId()))
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .accept(MediaType.APPLICATION_JSON_UTF8)
                .body(Mono.just(newTweetData), Tweet.class)
                .exchange()
                .expectStatus().isOk()
                .expectHeader().contentType(MediaType.APPLICATION_JSON_UTF8)
                .expectBody()
                .jsonPath("$.text").isEqualTo("Updated Tweet");
    }

    @Test
    public void testDeleteTweet() {
	    Tweet tweet = tweetRepository.save(new Tweet("To be deleted")).block();

	    webTestClient.delete()
                .uri("/tweets/{id}", Collections.singletonMap("id",  tweet.getId()))
                .exchange()
                .expectStatus().isOk();
    }
}

In the above example, I have written tests for all the CRUD APIs. You can run the tests by going to the root directory of the project and typing mvn test.

Conclusion

In this article, we learned the basics of reactive programming with Spring and built a simple Restful service with the reactive support provided by the Spring WebFlux framework. We also tested all the Rest APIs using WebTestClient.

Thanks for reading, folks! Let me know what you think about the new Spring WebFlux framework in the comments section below.