Welcome To Fusebes - Dev & Programming Blog

@Scope – How to get Scope of Bean from Code
14 May 2021

When we create a bean, we are creating actual instances of the class defined by that bean definition. We can also control the scope of the objects created from a particular bean definition.

There are five bean scopes:

  • singleton (default scope)
  • prototype
  • request
  • session
  • global-session

Singleton:
A single bean instance per Spring IoC container. This is the default scope.

Prototype:
A single bean definition produces any number of object instances; a new instance is created every time the bean is requested.

Request:
A new bean instance is created for each HTTP request. Only valid in a web-aware Spring ApplicationContext.

Session:
A new bean instance is created for each HTTP session. Only valid in a web-aware Spring ApplicationContext.

Global-Session:
Similar to session, but it only makes sense in the context of portlet-based web applications. Only valid in a web-aware Spring ApplicationContext.
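
The scope declared on a bean can also be read back from its bean definition at runtime. Here is a minimal sketch, assuming annotation-based configuration; the names AppConfig and MyPrototypeBean are made up for illustration.

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;

class MyPrototypeBean { }

@Configuration
class AppConfig {

    // Declare a prototype-scoped bean; omit @Scope to get the default singleton scope
    @Bean
    @Scope("prototype")
    public MyPrototypeBean myPrototypeBean() {
        return new MyPrototypeBean();
    }
}

public class ScopeExample {
    public static void main(String[] args) {
        AnnotationConfigApplicationContext context =
                new AnnotationConfigApplicationContext(AppConfig.class);

        // Read the scope string ("singleton", "prototype", ...) from the bean definition
        String scope = context.getBeanFactory()
                .getBeanDefinition("myPrototypeBean")
                .getScope();
        System.out.println("Scope of myPrototypeBean: " + scope); // prototype

        context.close();
    }
}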

Three Number Sum Solution
06 Feb 2021

Three Number Sum Problem Statement

Given an array of integers, find all triplets in the array that sum up to a given target value.

In other words, given an array arr and a target value target, return all triplets a, b, c such that a + b + c = target.

Example:

Input array: [7, 12, 3, 1, 2, -6, 5, -8, 6]
Target sum: 0

Output: [[2, -8, 6], [3, 5, -8], [1, -6, 5], [7, 1, -8]]

Three Number Sum Problem solution in Java

METHOD 1. Naive approach: Use three for loops

The naive approach is to just use three nested for loops and check if the sum of any three elements in the array is equal to the given target.

Time complexity: O(n^3)

import java.util.Scanner;
import java.util.List;
import java.util.ArrayList;
import java.util.Arrays;

class ThreeSum {

  // Time complexity: O(n^3)
  private static List<Integer[]> findThreeSum_BruteForce(int[] nums, int target) {
    List<Integer[]> result = new ArrayList<>();
    for (int i = 0; i < nums.length; i++) {
      for (int j = i + 1; j < nums.length; j++) {
        for (int k = j + 1; k < nums.length; k++) {
          if (nums[i] + nums[j] + nums[k] == target) {
            result.add(new Integer[] { nums[i], nums[j], nums[k] });
          }
        }
      }
    }
    return result;
  }

  public static void main(String[] args) {
    Scanner keyboard = new Scanner(System.in);

    int n = keyboard.nextInt();
    int[] nums = new int[n];

    for (int i = 0; i < n; i++) {
      nums[i] = keyboard.nextInt();
    }
    int target = keyboard.nextInt();

    keyboard.close();

    List<Integer[]> result = findThreeSum_BruteForce(nums, target);

    for(Integer[] triplets: result) {
      for(int num: triplets) {
        System.out.print(num + " ");
      }
      System.out.println();
    }
  }
}

METHOD 2. Use Sorting along with the two-pointer sliding window approach

Another approach is to first sort the array, then –

  • Iterate through each element of the array and for every iteration,
    • Fix the first element (nums[i])
    • Try to find the other two elements whose sum along with nums[i] gives target. This boils down to the two sum problem.

Time complexity: O(n^2)

import java.util.Scanner;
import java.util.List;
import java.util.ArrayList;
import java.util.Arrays;

class ThreeSum {

  // Time complexity: O(n^2)
  private static List<Integer[]> findThreeSum_Sorting(int[] nums, int target) {
    List<Integer[]> result = new ArrayList<>();
    Arrays.sort(nums);
    for (int i = 0; i < nums.length; i++) {
      int left = i + 1;
      int right = nums.length - 1;
      while (left < right) {
        if (nums[i] + nums[left] + nums[right] == target) {
          result.add(new Integer[] { nums[i], nums[left], nums[right] });
          left++;
          right--;
        } else if (nums[i] + nums[left] + nums[right] < target) {
          left++;
        } else {
          right--;
        }
      }
    }
    return result;
  }
}

METHOD 3. Use a Map/Set

Finally, you can also solve the problem using a Map/Set. You just need to iterate through the array, fix the first element, and then try to find the other two elements using the approach similar to the two sum problem.

I’m using a Set in the following solution instead of a Map (as used in the two-sum problem) because in the two-sum problem we had to keep track of the indexes of the elements as well, whereas in this problem we only care about the elements themselves, not their indexes.

Time complexity: O(n^2)

import java.util.List;
import java.util.ArrayList;
import java.util.Set;
import java.util.HashSet;

class ThreeSum {

  // Time complexity: O(n^2)
  private static List<Integer[]> findThreeSum(int[] nums, int target) {
    List<Integer[]> result = new ArrayList<>();
    for (int i = 0; i < nums.length; i++) {
      int currentTarget = target - nums[i];
      Set<Integer> existingNums = new HashSet<>();
      for (int j = i + 1; j < nums.length; j++) {
        if (existingNums.contains(currentTarget - nums[j])) {
          result.add(new Integer[] { nums[i], nums[j], currentTarget - nums[j] });
        } else {
          existingNums.add(nums[j]);
        }
      }
    }
    return result;
  }
}
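
For completeness, here is a small driver method, mirroring the main method from Method 1, that you can drop into the ThreeSum class above to run the Set-based version against the example input from the problem statement:

  public static void main(String[] args) {
    int[] nums = {7, 12, 3, 1, 2, -6, 5, -8, 6};
    int target = 0;

    // Print each triplet that sums to the target, one per line
    for (Integer[] triplet : findThreeSum(nums, target)) {
      for (int num : triplet) {
        System.out.print(num + " ");
      }
      System.out.println();
    }
  }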


3 Layer Automated Testing
26 Mar 2021

Birth of Quality Engineering

Quality Assurance practice was relatively simple when we built monolith systems with traditional waterfall development models. The Quality Assurance (QA) teams would start validating the GUI layer after waiting months for product development to finish. To enhance the testing process, we would have to spend a lot of effort and $$$ (commercial tools) automating the GUI via tools like Micro Focus UFT, Selenium, TestComplete, Coded UI, Ranorex, etc., and most often these tests are complex to maintain and scale. Thus, most QA teams would have to restrict their automated tests to smoke and partial regression, ending in inadequate test coverage.

With modern technology, new-era tech companies, including Varo, have widely adopted microservices-based architecture combined with the Agile/DevOps development model. This opens up a lot of opportunities for the Quality Assurance practice, and in my opinion, this was the origin of the transformation from Quality Assurance to “Quality Engineering.”

The Common Pitfall

While automated testing gives us a massive benefit with the three R’s (Repeatable → run any number of times, Reliable → run with confidence, Reusable → develop and share), it also comes with maintenance costs. I like to quote Grady Booch — “A fool with a tool is still a fool.” Targeting inappropriate areas would not provide us the desired benefit. We should consider several factors to choose the right candidate for automation; a few to name are the lifespan of the product, the volatility of the requirements, the complexity of the tests, business criticality, and technical feasibility.

It’s well known that the cost of a bug increases toward the right of the software lifecycle, so it is necessary to implement several walls of defense to catch these bugs as early as possible (the Shift-Left testing paradigm). By implementing an agile development model with a fail-fast mindset, we have taken care of our first wall of defense. But to move faster in this shorter development cycle, we must build robust automated test suites to take care of the already rolled-out features and make room for testing the new ones.

The 3 Layer Architecture

The Varo architecture comprises three essential layers.

  • The Frontend layer (Web, iOS, Mobile apps) — User experience
  • The Orchestration layer (GraphQL) — Makes multiple microservice calls and returns the decision and data to the frontend apps
  • The Microservice layer (gRPC, Kafka, Postgres) — Core business layer

While mapping the microservice architecture to a testing strategy, several questions came up.

  • Which layer to test?
  • What to test in these layers?
  • Does testing frontend automatically validate downstream service?
  • Does testing multiple layers introduce redundancies?

We will try to answer these by analyzing the table below, which provides an overview of what these layers mean for quality engineering.

Based on this table, we have loosely adopted the Testing Pyramid pattern and invest in automated testing as follows:

  • Full feature/ functional validations on Microservices layer
  • Business process validations on Orchestration layer
  • E2E validations on Frontend layer

The diagram below best represents our test strategy for each layer.

Note: Though we have automated white-box tests such as unit tests and integration tests, we exclude those from this discussion.

Use Case

Let’s take the example below to illustrate how this pyramid works.

The user is presented with a form to submit. The form accepts three inputs — Field A takes the user identifier, Field B a drop-down value, and Field C an integer value (within a defined range).

Once the user clicks the Submit button, the GraphQL API calls Microservice A to determine the customer type. It then calls Microservice B to validate the acceptable range of values for Field C (which depends on the values from Field A and Field B).

Validations:

1. Feature Validations

✓ Positive behavior (Smoke, Functional, System Integration)

  • Validating behavior with a valid set of data combinations
  • Validating database
  • Validating integration — Impact on upstream/ downstream systems

✓ Negative behavior

  • Validating with invalid data (for example: Invalid authorization, Disqualified data)

2. Fluent Validations

✓ Evaluating field definitions — such as

  • Mandatory field (not empty/not null)
  • Invalid data types (for example: Int → negative value, String → Junk values with special characters or UUID → Invalid UUID formats)

Let’s look at how the “feature validations” can be written for the above use case by applying one of the test case authoring techniques — Boundary Value Analysis.

To test the scenario above, it would require 54 different combinations of feature validations, and below is the rationale to pick the right candidate for each layer.

Microservice Layer: This is the layer delivered first, enabling us to invest in automated testing as early as possible (Shift-Left). And the scope for our automation would be 100% of all the above scenarios.

Orchestration Layer: This layer translates the information from the microservice to frontend layers; we try to select at least two tests (1 positive & 1 negative) for each scenario. The whole objective is to ensure the integration is working as expected.

Frontend Layer: In this layer, we focus on E2E validations, which means these validations would be a part of the complete user journey. But we would ensure that we have at least one or more positive and negative scenarios embedded in those E2E tests. Business priority (frequently used data by the real-time users) helps us to select the best scenario for our E2E validations.
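
As an illustration of the boundary-value cases described above, here is a minimal sketch of how one microservice-layer feature validation for Field C could be written as a parameterized JUnit 5 test. The client class, the accepted range of 1..100, and the test values are all assumptions made for this example; real tests would call the actual service.

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class FieldCRangeValidationTest {

    // Stub standing in for the real microservice call (assumed accepted range: 1..100)
    static class RangeValidationClient {
        boolean isAccepted(int value) {
            return value >= 1 && value <= 100;
        }
    }

    private final RangeValidationClient client = new RangeValidationClient();

    // Boundary Value Analysis: values just below, on, and just above each boundary
    @ParameterizedTest
    @CsvSource({
            "0,   false",  // below lower boundary
            "1,   true",   // lower boundary
            "2,   true",   // just above lower boundary
            "99,  true",   // just below upper boundary
            "100, true",   // upper boundary
            "101, false"   // above upper boundary
    })
    void validatesFieldCBoundaries(int fieldC, boolean expectedValid) {
        assertEquals(expectedValid, client.isAccepted(fieldC));
    }
}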

Conclusion

There are always going to be sets of redundant tests across these layers. But that is the trade-off we had to take to ensure that we have correct quality gates on each of these layers. The pros of this approach are that we achieve safe and faster deployments to Production by enabling quicker testing cycles, better test coverage, and risk-free decisions. In addition, having these functional test suites spread across the layers helps us to isolate the failures in respective areas, thus saving us time to troubleshoot an issue.

However, one size does not fit all. The decision has to be made based on an understanding of how the software architecture is built and the supporting infrastructure available to facilitate the testing efforts. One of the critical success factors for this implementation is building a good quality engineering team with the right skills and proper tools. But that is another story — coming soon: “Quality Engineering: Redefined.”

How to copy Directories recursively in Java
03 Mar 2021

In this article, you’ll learn how to copy a non-empty directory recursively with all its sub-directories and files to another location in Java.

Java copy directory recursively

import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Stream;

public class CopyDirectoryRecursively {
    public static void main(String[] args) throws IOException {
        Path sourceDir = Paths.get("/Users/project/Desktop/new-media");
        Path destinationDir = Paths.get("/Users/project/Desktop/media");

        // Traverse the file tree and copy each file/directory to the destination.
        // Files.walk returns a Stream that should be closed, hence the try-with-resources.
        try (Stream<Path> paths = Files.walk(sourceDir)) {
            paths.forEach(sourcePath -> {
                try {
                    Path targetPath = destinationDir.resolve(sourceDir.relativize(sourcePath));
                    System.out.printf("Copying %s to %s%n", sourcePath, targetPath);
                    Files.copy(sourcePath, targetPath, StandardCopyOption.REPLACE_EXISTING);
                } catch (IOException ex) {
                    System.out.format("I/O error: %s%n", ex);
                }
            });
        }
    }
}

Split a PDF file into many using the Java PDFBox API
05 Mar 2021

Following is an example program that splits a PDF into multiple single-page PDFs using Java and the Apache PDFBox API.

import org.apache.pdfbox.multipdf.Splitter;
import org.apache.pdfbox.pdmodel.PDDocument;

import java.io.File;
import java.io.IOException;

import java.util.List;
import java.util.Iterator;

public class SplittingPDF {
   public static void main(String[] args) throws IOException {

      // Loading an existing PDF document
      File file = new File("C:/pdfBox/splitpdf_IP.pdf");
      PDDocument doc = PDDocument.load(file);

      // Instantiating the Splitter class
      Splitter splitter = new Splitter();

      // Splitting the pages of the PDF document into individual documents
      List<PDDocument> pages = splitter.split(doc);

      // Creating an iterator over the split pages
      Iterator<PDDocument> iterator = pages.listIterator();

      // Saving each page as an individual document
      int i = 1;
      while (iterator.hasNext()) {
         PDDocument pd = iterator.next();
         pd.save("C:/pdfBox/splitOP" + i++ + ".pdf");
         pd.close();
      }
      doc.close();

      System.out.println("PDF split successfully");
   }
}

How to get current Date and Time in Java
03 Mar 2021

In this article, you’ll find several examples to get the current date, current time, current date & time, current date & time in a specific timezone in Java.

Get current Date in Java

import java.time.LocalDate;

public class CurrentDateTimeExample {
    public static void main(String[] args) {
        // Current Date
        LocalDate currentDate = LocalDate.now();
        System.out.println("Current Date: " + currentDate);
    }
}

Get current Time in Java

import java.time.LocalTime;

public class CurrentDateTimeExample {
    public static void main(String[] args) {
        // Current Time
        LocalTime currentTime = LocalTime.now();
        System.out.println("Current Time: " + currentTime);
    }
}

Get current Date and Time in Java

import java.time.LocalDateTime;

public class CurrentDateTimeExample {
    public static void main(String[] args) {
        // Current Date and Time
        LocalDateTime currentDateTime = LocalDateTime.now();
        System.out.println("Current Date & time: " + currentDateTime);
    }
}

Get current Date and Time in a specific Timezone in Java

import java.time.ZoneId;
import java.time.ZonedDateTime;

public class CurrentDateTimeExample {
    public static void main(String[] args) {
        // Current Date and Time in a given Timezone
        ZonedDateTime currentNewYorkDateTime = ZonedDateTime.now(ZoneId.of("America/New_York"));
        System.out.println(currentNewYorkDateTime);
    }
}

Persistence in Event Driven Architectures
28 Mar 2021

The importance of being persistent in event driven architectures.

Enterprises have to constantly adapt and evolve their enterprise architecture strategies in order to deliver the desired business outcomes. The evolving architecture patterns may involve business processing of sales transactions with a human in the loop, or they may involve machine-to-machine data processing using automation. Enterprises earlier adopted a request-driven model where a microservice made a call to a service and the service responded to the request. In this request-driven model, you run into challenges around flexibility as you try to scale your global deployment footprint.

A new approach that is quickly gaining adoption in enterprises is event-driven architecture. In this approach, you are able to increase application agility and flexibility by allowing multiple data producers to coexist with multiple data consumers, and you process data only after an event or state change. In this enterprise architecture, the producers and consumers of data can be quickly extended to deliver better flexibility and agility as you scale your operation globally. Examples of event-driven solutions are available in a hosted, managed format from most cloud providers today. In this blog post, we will look at a Kafka solution running in a Kubernetes cluster and how you can make sure persistence is achieved for the solution. This approach is running in production at customers today, supported by Confluent and Portworx.

Why Use Event-Driven Architecture?

With the advent of 5G technology, a vast amount of data will be generated by sensors, devices, systems, and humans in the loop to track, manage, and achieve business outcomes. Use cases for event-driven architectures include the following:

  • Business Process state changes – You want to notify the change of state between a purchase order and accounts receivable with an event. The event-based approach allows a human or a decision engine to take the next appropriate step in the process.
  • Log and Metrics processing – The event-driven model allows for multiple actions to be triggered based on a single metric. The ability to send messages to multiple event handlers in different subsystems offers the scalability and resiliency required by certain business applications.

Why Use Kafka?

Apache Kafka is a scalable, fault-tolerant messaging system that enables you to build distributed real-time applications with an event-driven architecture. Kafka delivers events with fast ingestion rates and provides persistence and in-order guarantees. Kafka adoption in your solution will depend on your specific use case. Below are some important concepts about Apache Kafka, followed by a small producer sketch:

  • Kafka organizes messages into “topics”.
  • The process that does the work in Kafka is called the “broker”. A producer pushes data into a topic hosted on a broker and a consumer pulls messages from a topic via a broker.
  • Kafka topics can be divided into “partitions”. This allows for parallelizing a topic across multiple brokers and increasing message ingests and throughput.
  • Brokers can hold multiple partitions but at any given time, only one partition can act as leader of a topic. A leader is responsible for updating any replicas with new data.
  • Brokers are responsible for storing messages to disk. The messages are stored with unique offsets. Messages in Kafka do not have a unique ID.
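
To make the topic, broker, and partition terminology concrete, here is a minimal sketch of a Java producer publishing a few messages to a topic. It assumes the kafka-clients library is on the classpath and a broker is reachable at localhost:9092; the topic name order-events is made up for illustration.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed local broker
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 3; i++) {
                // The key determines the partition; messages with the same key land on
                // the same partition, which preserves their relative order.
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("order-events", "order-" + i, "state-changed");
                producer.send(record);
            }
        }
    }
}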

Persistence With Portworx and Kafka on Kubernetes

Kafka needs ZooKeeper to be deployed as a StatefulSet in Kubernetes. Kafka brokers, which maintain the state of the topics and partitions, also need to be deployed as StatefulSets backed by persistent volumes.

  • Kafka offers replication of topics between different brokers. In the case of a node failure, Kafka can recover from failure using the replicated topics. This recovery mechanism does create additional network calls in order to synchronize with the replica on a different broker. The recovery time of the failed node and its broker depend on the amount of data that needs to be rehydrated and network latencies in the cluster.
  • Portworx offers data replication using the replication parameter in the Kubernetes storage class. In this scenario, the storage system is responsible for maintaining copies of the topic on different nodes. In the case of a node failure, the Kafka broker is rescheduled on a node that already contains the replicated topic data. The rebuild time of the broker is reduced because it uses the storage system to rehydrate the topic data without any network latencies. Once the data is rehydrated using the storage system, the broker can quickly catch up on the topic offset from an existing broker, thus reducing the overall recovery time.

Let’s walk through a Kubernetes node failure scenario on a Kubernetes cluster where a Kafka application is running and is backed by Portworx volumes.

Figure 1: We will describe the deployment of Kafka and Portworx on a 5 node Kubernetes cluster. In the image below, you can see that the Kafka deployment has 3 Brokers, each with 2 partitions and a replication factor of 2. For the Portworx data platform, we have a volume replication factor set to 3 for each volume.

Figure 2: We will describe a node failure event. In the diagram below, we have simulated a node failure: Kubernetes worker node 2 has been taken out of production. Kubernetes worker node 2 contains a Kafka broker with 2 partitions and 2 Portworx persistent volumes.
Figure 3: Once the Kubernetes node failure is detected by the Kubernetes API server, the Kafka broker is deployed to a node with available resources. The Portworx platform has the ability to influence the placement of the broker on a Kubernetes node. Since Kubernetes node 4 already has a copy of the Kafka broker's persistent volumes, Portworx makes sure that the Kafka broker is deployed on Kubernetes node 4. On the Portworx platform's recovery side, the volumes are also created on other nodes in order to maintain the defined replication factor of 3.

Figure 4: Once the failed node is back in production use, the Kafka application does not get scheduled on the recovered node since the application is already at the desired number of Kafka brokers. On the Portworx platform side, replicated volumes are placed on the recovered Kubernetes node to maintain the desired replication factor for the volumes.

Building Reactive Rest APIs with Spring WebFlux and Reactive MongoDB
05 Mar 2021

Spring 5 has embraced the reactive programming paradigm by introducing a brand new reactive framework called Spring WebFlux.

Spring WebFlux is an asynchronous framework from the bottom up. It can run on Servlet containers using the Servlet 3.1 non-blocking I/O API, as well as on other async runtimes such as Netty and Undertow.

It is available for use alongside Spring MVC. Yes, Spring MVC is not going anywhere; it's a popular web framework that developers have been using for a long time.

But you now have a choice between the new reactive framework and the traditional Spring MVC, and you can pick either depending on your use case.

Spring WebFlux uses a library called Reactor for its reactive support. Reactor is an implementation of the Reactive Streams specification.

Reactor provides two main types called Flux and Mono. Both of these types implement the Publisher interface provided by Reactive Streams. Flux is used to represent a stream of 0..N elements, and Mono is used to represent a stream of 0..1 elements.

Although Spring uses Reactor as a core dependency for most of its internal APIs, it also supports the use of RxJava at the application level.

Programming models supported by Spring WebFlux

Spring WebFlux supports two types of programming models :

  1. The traditional annotation-based model with @Controller, @RequestMapping, and the other annotations that you have been using in Spring MVC.
  2. A brand new Functional style model based on Java 8 lambdas for routing and handling requests.

In this article, we’ll be using the traditional annotation-based programming model. I will write about the functional-style model in a future article.

Let’s build a Reactive Restful Service in Spring Boot

In this article, we’ll build a Restful API for a mini twitter application. The application will only have a single domain model called Tweet. Every Tweet will have a text and a createdAt field.

We’ll use MongoDB as our data store along with the reactive mongodb driver. We’ll build REST APIs for creating, retrieving, updating and deleting a Tweet. All the REST APIs will be asynchronous and will return a Publisher.

We’ll also learn how to stream data from the database to the client.

Finally, we’ll write integration tests to test all the APIs using the new asynchronous WebTestClient provided by Spring 5.

Creating the Project

Let’s use Spring Initializr web app to generate our application. Follow the steps below to generate the Project –

  1. Head over to http://start.spring.io
  2. Enter artifact’s value as webflux-demo
  3. Add Reactive Web and Reactive MongoDB dependencies
  4. Click Generate to generate and download the Project.

Once the project is downloaded, unzip it and import it into your favorite IDE.


Configuring MongoDB

You can configure MongoDB by simply adding the following property to the application.properties file –

spring.data.mongodb.uri=mongodb://localhost:27017/webflux_demo

Spring Boot will read this configuration on startup and automatically configure the data source.

Creating the Domain Model

Let’s create our domain model – Tweet. Create a new package called model inside com.example.webfluxdemo package and then create a file named Tweet.java with the following contents –

package com.example.webfluxdemo.model;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;
import java.util.Date;

@Document(collection = "tweets")
public class Tweet {
    @Id
    private String id;

    @NotBlank
    @Size(max = 140)
    private String text;

    @NotNull
    private Date createdAt = new Date();

    public Tweet() {

    }

    public Tweet(String text) {
        this.text = text;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getText() {
        return text;
    }

    public void setText(String text) {
        this.text = text;
    }

    public Date getCreatedAt() {
        return createdAt;
    }

    public void setCreatedAt(Date createdAt) {
        this.createdAt = createdAt;
    }
}

Simple enough! The Tweet model contains a text and a createdAt field. The text field is annotated with @NotBlank and @Size annotations to ensure that it is not blank and has a maximum of 140 characters.

Creating the Repository

Next, we’re going to create the data access layer which will be used to access the MongoDB database. Create a new package called repository inside com.example.webfluxdemo and then create a new file called TweetRepository.java with the following contents –

package com.example.webfluxdemo.repository;

import com.example.webfluxdemo.model.Tweet;
import org.springframework.data.mongodb.repository.ReactiveMongoRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface TweetRepository extends ReactiveMongoRepository<Tweet, String> {

}

The TweetRepository interface extends from ReactiveMongoRepository which exposes various CRUD methods on the Document.

Spring Boot automatically plugs in an implementation of this interface called SimpleReactiveMongoRepository at runtime.

So you get all the CRUD methods on the Document readily available to you without needing to write any code. Following are some of the methods available from SimpleReactiveMongoRepository –

reactor.core.publisher.Flux<T>  findAll(); 

reactor.core.publisher.Mono<T>  findById(ID id); 

<S extends T> reactor.core.publisher.Mono<S>  save(S entity); 

reactor.core.publisher.Mono<Void>   delete(T entity);

Notice that all the methods are asynchronous and return a publisher in the form of a Flux or a Mono type.

Creating the Controller Endpoints

Finally, Let’s write the APIs that will be exposed to the clients. Create a new package called controller inside com.example.webfluxdemo and then create a new file called TweetController.java with the following contents –

package com.example.webfluxdemo.controller;

import com.example.webfluxdemo.model.Tweet;
import com.example.webfluxdemo.repository.TweetRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import javax.validation.Valid;

@RestController
public class TweetController {

    @Autowired
    private TweetRepository tweetRepository;

    @GetMapping("/tweets")
    public Flux<Tweet> getAllTweets() {
        return tweetRepository.findAll();
    }

    @PostMapping("/tweets")
    public Mono<Tweet> createTweets(@Valid @RequestBody Tweet tweet) {
        return tweetRepository.save(tweet);
    }

    @GetMapping("/tweets/{id}")
    public Mono<ResponseEntity<Tweet>> getTweetById(@PathVariable(value = "id") String tweetId) {
        return tweetRepository.findById(tweetId)
                .map(savedTweet -> ResponseEntity.ok(savedTweet))
                .defaultIfEmpty(ResponseEntity.notFound().build());
    }

    @PutMapping("/tweets/{id}")
    public Mono<ResponseEntity<Tweet>> updateTweet(@PathVariable(value = "id") String tweetId,
                                                   @Valid @RequestBody Tweet tweet) {
        return tweetRepository.findById(tweetId)
                .flatMap(existingTweet -> {
                    existingTweet.setText(tweet.getText());
                    return tweetRepository.save(existingTweet);
                })
                .map(updatedTweet -> new ResponseEntity<>(updatedTweet, HttpStatus.OK))
                .defaultIfEmpty(new ResponseEntity<>(HttpStatus.NOT_FOUND));
    }

    @DeleteMapping("/tweets/{id}")
    public Mono<ResponseEntity<Void>> deleteTweet(@PathVariable(value = "id") String tweetId) {

        return tweetRepository.findById(tweetId)
                .flatMap(existingTweet ->
                        tweetRepository.delete(existingTweet)
                            .then(Mono.just(new ResponseEntity<Void>(HttpStatus.OK)))
                )
                .defaultIfEmpty(new ResponseEntity<>(HttpStatus.NOT_FOUND));
    }

    // Tweets are Sent to the client as Server Sent Events
    @GetMapping(value = "/stream/tweets", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Tweet> streamAllTweets() {
        return tweetRepository.findAll();
    }
}

All the controller endpoints return a Publisher in the form of a Flux or a Mono. The last endpoint is very interesting where we set the content-type to text/event-stream. It sends the tweets in the form of Server Sent Events to a browser like this –

data: {"id":"59ba5389d2b2a85ed4ebdafa","text":"tweet1","createdAt":1505383305602}
data: {"id":"59ba5587d2b2a85f93b8ece7","text":"tweet2","createdAt":1505383814847}

Now that we’re talking about event streams, you might ask: doesn’t the following endpoint also return a stream?

@GetMapping("/tweets")
public Flux<Tweet> getAllTweets() {
    return tweetRepository.findAll();
}

And the answer is yes. Flux<Tweet> represents a stream of tweets. But, by default, it will produce a JSON array, because if a stream of individual JSON objects were sent to the browser, it would not be a valid JSON document as a whole. A browser client has no way to consume a stream other than using Server-Sent Events or WebSocket.

However, Non-browser clients can request a stream of JSON by setting the Accept header to application/stream+json, and the response will be a stream of JSON similar to Server-Sent-Events but without extra formatting :

{"id":"59ba5389d2b2a85ed4ebdafa","text":"tweet1","createdAt":1505383305602}
{"id":"59ba5587d2b2a85f93b8ece7","text":"tweet2","createdAt":1505383814847}

Integration Test with WebTestClient

Spring 5 also provides an asynchronous and reactive http client called WebClient for working with asynchronous and streaming APIs. It is a reactive alternative to RestTemplate.

Moreover, You also get a WebTestClient for writing integration tests. The test client can be either run on a live server or used with mock request and response.

We’ll use WebTestClient to write integration tests for our REST APIs. Open WebfluxDemoApplicationTests.java file and add the following tests to it –

package com.example.webfluxdemo;

import com.example.webfluxdemo.model.Tweet;
import com.example.webfluxdemo.repository.TweetRepository;
import org.assertj.core.api.Assertions;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.http.MediaType;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.reactive.server.WebTestClient;
import reactor.core.publisher.Mono;

import java.util.Collections;

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class WebfluxDemoApplicationTests {

	@Autowired
	private WebTestClient webTestClient;

	@Autowired
    TweetRepository tweetRepository;

	@Test
	public void testCreateTweet() {
		Tweet tweet = new Tweet("This is a Test Tweet");

		webTestClient.post().uri("/tweets")
				.contentType(MediaType.APPLICATION_JSON_UTF8)
                .accept(MediaType.APPLICATION_JSON_UTF8)
                .body(Mono.just(tweet), Tweet.class)
				.exchange()
				.expectStatus().isOk()
				.expectHeader().contentType(MediaType.APPLICATION_JSON_UTF8)
				.expectBody()
                .jsonPath("$.id").isNotEmpty()
                .jsonPath("$.text").isEqualTo("This is a Test Tweet");
	}

	@Test
    public void testGetAllTweets() {
	    webTestClient.get().uri("/tweets")
                .accept(MediaType.APPLICATION_JSON_UTF8)
                .exchange()
                .expectStatus().isOk()
                .expectHeader().contentType(MediaType.APPLICATION_JSON_UTF8)
                .expectBodyList(Tweet.class);
    }

    @Test
    public void testGetSingleTweet() {
        Tweet tweet = tweetRepository.save(new Tweet("Hello, World!")).block();

        webTestClient.get()
                .uri("/tweets/{id}", Collections.singletonMap("id", tweet.getId()))
                .exchange()
                .expectStatus().isOk()
                .expectBody()
                .consumeWith(response ->
                        Assertions.assertThat(response.getResponseBody()).isNotNull());
    }

    @Test
    public void testUpdateTweet() {
        Tweet tweet = tweetRepository.save(new Tweet("Initial Tweet")).block();

        Tweet newTweetData = new Tweet("Updated Tweet");

        webTestClient.put()
                .uri("/tweets/{id}", Collections.singletonMap("id", tweet.getId()))
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .accept(MediaType.APPLICATION_JSON_UTF8)
                .body(Mono.just(newTweetData), Tweet.class)
                .exchange()
                .expectStatus().isOk()
                .expectHeader().contentType(MediaType.APPLICATION_JSON_UTF8)
                .expectBody()
                .jsonPath("$.text").isEqualTo("Updated Tweet");
    }

    @Test
    public void testDeleteTweet() {
	    Tweet tweet = tweetRepository.save(new Tweet("To be deleted")).block();

	    webTestClient.delete()
                .uri("/tweets/{id}", Collections.singletonMap("id",  tweet.getId()))
                .exchange()
                .expectStatus().isOk();
    }
}

In the above example, I have written tests for all the CRUD APIs. You can run the tests by going to the root directory of the project and typing mvn test.

Conclusion

In this article, we learned the basics of reactive programming with Spring and built a simple Restful service with the reactive support provided by Spring WebFlux framework. We also tested all the Rest APIs using WebTestClient.

Thanks for reading, folks! Let me know what you think about the new Spring WebFlux framework in the comments section below.

Kotlin Properties, Backing Fields, Getters and Setters
07 Mar 2021

You can declare properties inside Kotlin classes in the same way you declare any other variable. These properties can be mutable (declared using var) or immutable (declared using val).

Here is an example –

// User class with a Primary constructor that accepts three parameters
class User(_id: Int, _name: String, _age: Int) {
    // Properties of User class
    val id: Int = _id         // Immutable (Read only)
    var name: String = _name  // Mutable
    var age: Int = _age       // Mutable
}

You can get or set the properties of an object using the dot(.) notation like so –

val user = User(1, "Jack Sparrow", 44)

// Getting a Property
val name = user.name

// Setting a Property
user.age = 46

// You cannot set read-only properties
user.id = 2	// Error: Val cannot be assigned

Getters and Setters

Kotlin internally generates a default getter and setter for mutable properties, and a getter (only) for read-only properties.

It calls these getters and setters internally whenever you access or modify a property using the dot(.) notation.

Here is how the User class that we saw in the previous section looks like with the default getters and setters –

class User(_id: Int, _name: String, _age: Int) {
    val id: Int = _id
        get() = field
    
    var name: String = _name
        get() = field
        set(value) {
            field = value
        }
    
    var age: Int = _age
        get() = field
        set(value) {
            field = value
        }
}

You might have noticed two strange identifiers in all the getter and setter methods – field and value.

Let’s understand what these identifiers are for –

1. value

We use value as the name of the setter parameter. This is the default convention in Kotlin but you’re free to use any other name if you want.

The value parameter contains the value that a property is assigned to. For example, when you write user.name = "Bill Gates", the value parameter contains the assigned value “Bill Gates”.

2. Backing Field (field)

Backing field helps you refer to the property inside the getter and setter methods. This is required because if you use the property directly inside the getter or setter then you’ll run into a recursive call which will generate a StackOverflowError.

Let’s see that in action. Let’s modify the User class and refer to the properties directly inside the getters and setters instead of using the field identifier –

class User(_name: String, _age: Int) {
    var name: String = _name
        get() = name		  // Calls the getter recursively

    var age: Int = _age
        set(value) {
            age = value		  // Calls the setter recursively
        }
}

fun main(args: Array<String>) {
    val user = User("Jack Sparrow", 44)

    // Getting a Property
    println("${user.name}") // StackOverflowError

    // Setting a Property
    user.age = 45           // StackOverflowError

}

The above program will generate a StackOverflowError due to the recursive calls in the getter and setter methods. This is why Kotlin has the concept of backing fields. It makes storing the property value in memory possible. When you initialize a property with a value in the class, the initializer value is written to the backing field of that property –

class User(_id: Int) {
    val id: Int	= _id   // The initializer value is written to the backing field of the property
}

A backing field is generated for a property if

  • A custom getter/setter references it through the field identifier or,
  • It uses the default implementation of at least one of the accessors (getter or setter). (Remember, the default getter and setter reference the field identifier themselves.)

For example, a backing field is not generated for the id property in the following class because it neither uses the default implementation of the getter nor refers to the field identifier in the custom getter –

class User() {
    val id	// No backing field is generated
        get() = 0
}

Since a backing field is not generated, you won’t be able to initialize the above property like so –

class User(_id: Int) {
    val id: Int = _id	//Error: Initialization not possible because no backing field is generated which could store the initialized value in memory
        get() = 0
}

Creating Custom Getters and Setters

You can ditch Kotlin’s default getter/setter and define a custom getter and setter for the properties of your class.

Here is an example –

class User(_id: Int, _name: String, _age: Int) {
    val id: Int = _id

    var name: String = _name 
        // Custom Getter
        get() {     
            return field.toUpperCase()
        }     

    var age: Int = _age
        // Custom Setter
        set(value) {
            field = if(value > 0) value else throw IllegalArgumentException("Age must be greater than zero")
        }
}

fun main(args: Array<String>) {
    val user = User(1, "Jack Sparrow", 44)

    println("${user.name}") // JACK SPARROW

    user.age = -1           // Throws IllegalArgumentException: Age must be greater than zero
}

Conclusion

Thanks for reading, folks. In this article, you learned how Kotlin’s getters and setters work and how to define a custom getter and setter for the properties of a class.

Find the pair with the smallest difference in two unsorted arrays
06 Feb 2021

Given two non-empty arrays of integers, find the pair of values (one value from each array) with the smallest (non-negative) difference.

Example

Input: [1, 3, 15, 11, 2], [23, 127, 235, 19, 8]

Output: [11, 8]; this pair has the smallest difference.

Solution 1. Brute Force approach: Use two for loops

The naive way to solve this problem is to use two for loops and compare the difference of every pair to find the pair with the smallest difference:

Time complexity: O(n^2)

import java.util.Arrays;

class SmallestDifference {

  public static int[] findSmallestDifferencePair_Naive(int[] a1, int[] a2) {
    double smallestDiff = Double.MAX_VALUE;
    int[] smallestDiffPair = new int[2];

    for(int i = 0; i < a1.length; i++) {
      for(int j = 0; j < a2.length; j++) {
        int currentDiff = Math.abs(a1[i] - a2[j]);
        if(currentDiff < smallestDiff) {
          smallestDiff = currentDiff;
          smallestDiffPair[0] = a1[i];
          smallestDiffPair[1] = a2[j];  
        }
      }
    }
    return smallestDiffPair;
  }

  public static void main(String[] args) {
    int[] a1 = new int[] {-1, 5, 10, 20, 28, 3};
    int[] a2 = new int[] {26, 134, 135, 15, 17};

    int[] pair = findSmallestDifferencePair_Naive(a1, a2);
    System.out.println(pair[0] + " " + pair[1]);
  }
}

Solution 2. Use Sorting along with the two-pointer sliding window approach

You can improve upon the brute force solution by first sorting the array and then using the two-pointer sliding window pattern.

Here is how it will work in this case:

  • Initialize a variable to keep track of the smallest difference found so far (smallestDiff).
  • Sort both the arrays
  • Initialize two indexes (one for each array): i = 0 and j = 0.
  • Loop until we reach the end of any of the arrays.
  • For every iteration:
    • Compare the smallestDiff with abs(a1[i] - a2[j]) and reset it if the new difference is smaller.
    • If a1[i] < a2[j], increment i.
    • Otherwise, increment j

Time complexity: O(m log m + n log n)

import java.util.Arrays;

class SmallestDifference {
  public static int[] findSmallestDifferencePair(int[] a1, int[] a2) {
    Arrays.sort(a1);
    Arrays.sort(a2);

    double smallestDiff = Double.MAX_VALUE;
    int[] smallestDiffPair = new int[2];
    int i = 0, j = 0;

    while(i < a1.length && j < a2.length) {
      double currentDiff = Math.abs(a1[i] - a2[j]);
      if(currentDiff < smallestDiff) {
        smallestDiff = currentDiff;
        smallestDiffPair[0] = a1[i];
        smallestDiffPair[1] = a2[j];
      }
      if(a1[i] < a2[j]) {
        i++;
      } else {
        j++;
      }
    }
    return smallestDiffPair;
  }

  public static void main(String[] args) {
    int[] a1 = new int[] {-1, 5, 10, 20, 28, 3};
    int[] a2 = new int[] {26, 134, 135, 15, 17};

    int[] pair = findSmallestDifferencePair(a1, a2);
    System.out.println(pair[0] + " " + pair[1]);
  }
}
