Kotlin Operators with Examples

07 Mar 2021

In the previous article, you learned how to create variables and what basic data types are available in Kotlin for creating variables.

In this article, you’ll learn about the various operators that Kotlin provides for performing operations on basic data types.

Operations on Numeric Types

Just like other languages, Kotlin provides various operators to perform computations on numbers –

  • Arithmetic operators (+, -, *, /, %)
  • Comparison operators (==, !=, <, >, <=, >=)
  • Assignment operators (+=, -=, *=, /=, %=)
  • Increment & Decrement operators (++, --)

Following are a few examples that demonstrate the usage of the above operators –

var a = 10
var b = 20
var c = ((a + b) * ( a + b))/2   // 450

var isALessThanB = a < b   // true

a++     // a now becomes 11
b += 5  // b is now 25

Understanding how operators work in Kotlin

Everything in Kotlin is an object, even the basic data types like Int, Char, Double, and Boolean. Kotlin doesn’t have separate primitive types and corresponding boxed types like Java does.

Note that Kotlin may represent basic types like Int, Char, and Boolean as primitive values at runtime to improve performance, but for end users, all of them are objects.

Since all the data types are objects, the operations on these types are internally represented as function calls.

For example, the addition operation a + b between two numbers a and b is represented as a function call a.plus(b) –

var a = 4
var b = 5

println(a + b)

// equivalent to
println(a.plus(b))

All the operators that we looked at in the previous section have a symbolic name which is used to translate any expression containing those operators into the corresponding function calls –

Expression    Translates to
a + b         a.plus(b)
a - b         a.minus(b)
a * b         a.times(b)
a / b         a.div(b)
a % b         a.rem(b)
a++           a.inc()
a--           a.dec()
a > b         a.compareTo(b) > 0
a < b         a.compareTo(b) < 0
a += b        a.plusAssign(b)

You can check out other expressions and their corresponding function calls on Kotlin’s reference page.

The concept of translating such expressions to function calls enables operator overloading in Kotlin. For example, you can provide an implementation of the plus function in a class defined by you, and then you’ll be able to add objects of that class using the + operator like this – object1 + object2.

Kotlin will automatically convert the addition operation object1 + object2 into the corresponding function call object1.plus(object2) (Think of a ComplexNumber class with the + operator overloaded).

You’ll learn more about operator overloading in a future article.

Note that the operations on basic types like Int, Char, Double, and Boolean are optimized and do not incur the overhead of function calls.

Bitwise Operators

Unlike C, C++ and Java, Kotlin doesn’t have bitwise operators like | (bitwise or), & (bitwise and), ^ (bitwise xor), << (signed left shift), >> (signed right shift), etc.

For performing bitwise operations, Kotlin provides the following functions that work for Int and Long types –

  • shl – signed shift left (equivalent of << operator)
  • shr – signed shift right (equivalent of >> operator)
  • ushr – unsigned shift right (equivalent of >>> operator)
  • and – bitwise and (equivalent of & operator)
  • or – bitwise or (equivalent of | operator)
  • xor – bitwise xor (equivalent of ^ operator)
  • inv – bitwise complement (equivalent of ~ operator)

Here are a few examples demonstrating how to use the above functions –

1 shl 2   // Equivalent to 1.shl(2), Result = 4
16 shr 2  // Result = 4
2 and 4   // Result = 0
2 or 3    // Result = 3
4 xor 5   // Result = 1
4.inv()   // Result = -5

All the bitwise functions, except inv(), can be called using infix notation. The infix notation of 2.and(4) is 2 and 4. Infix notation allows you to write function calls in a more intuitive way.

Operations on Boolean Types

Kotlin supports the following logical operators for performing operations on Boolean types –

  • || – Logical OR
  • && – Logical AND
  • !   – Logical NOT

Here are a few examples of logical operators –

2 == 2 && 4 != 5  // true
4 > 5 && 2 < 7    // false
!(7 > 12 || 14 < 18)  // false

Logical operators are generally used in control flow statements like if, if-else, while, etc., to test the validity of a condition.

Operations on Strings

String Concatenation

The + operator is overloaded for String types. It performs String concatenation –

var firstName = "Rajeev"
var lastName = "Singh"
var fullName = firstName + " " + lastName	// "Rajeev Singh"

String Interpolation

Kotlin has an amazing feature called String Interpolation. This feature allows you to directly insert a template expression inside a String. Template expressions are tiny pieces of code that are evaluated and their results are concatenated with the original String.

A template expression is prefixed with the $ symbol. Following is an example of String interpolation –

var a = 12
var b = 18
println("Avg of $a and $b is equal to ${ (a + b)/2 }") 

// Prints - Avg of 12 and 18 is equal to 15

If the template expression is a simple variable, you can write it like $variableName. If it is an expression then you need to insert it inside a ${} block.

Conclusion

That’s all folks! In this article, you learned about the various operators that Kotlin provides for performing operations on Numbers, Booleans, and Strings. You also learned how expressions containing operators are translated into function calls internally.

As always, thank you for reading.

Find the pair with the smallest difference in two unsorted arrays

06 Feb 2021

Given two non-empty arrays of integers, find the pair of values (one value from each array) with the smallest (non-negative) difference.

Example

Input: [1, 3, 15, 11, 2], [23, 127, 235, 19, 8]

Output: [11, 8]; this pair has the smallest difference.

Solution 1. Brute Force approach: Use two for loops

The naive way to solve this problem is to use two for loops and compare the difference of every pair to find the pair with the smallest difference:

Time complexity: O(m * n), where m and n are the lengths of the two arrays

import java.util.Arrays;

class SmallestDifference {

  public static int[] findSmallestDifferencePair_Naive(int[] a1, int[] a2) {
    double smallestDiff = Double.MAX_VALUE;
    int[] smallestDiffPair = new int[2];

    for(int i = 0; i < a1.length; i++) {
      for(int j = 0; j < a2.length; j++) {
        int currentDiff = Math.abs(a1[i] - a2[j]);
        if(currentDiff < smallestDiff) {
          smallestDiff = currentDiff;
          smallestDiffPair[0] = a1[i];
          smallestDiffPair[1] = a2[j];  
        }
      }
    }
    return smallestDiffPair;
  }

  public static void main(String[] args) {
    int[] a1 = new int[] {-1, 5, 10, 20, 28, 3};
    int[] a2 = new int[] {26, 134, 135, 15, 17};

    int[] pair = findSmallestDifferencePair_Naive(a1, a2);
    System.out.println(pair[0] + " " + pair[1]);
  }
}

Solution 2. Use Sorting along with the two-pointer sliding window approach

You can improve upon the brute force solution by first sorting the array and then using the two-pointer sliding window pattern.

Here is how it will work in this case:

  • Initialize a variable to keep track of the smallest difference found so far (smallestDiff).
  • Sort both the arrays
  • Initialize two indexes (one for each array): i = 0 and j = 0.
  • Loop until we reach the end of either of the arrays.
  • For every iteration:
    • Compare the smallestDiff with abs(a1[i] - a2[j]) and reset it if the new difference is smaller.
    • If a1[i] < a2[j], increment i.
    • Otherwise, increment j.

Time complexity: O(m log(m) + n log(n))

import java.util.Arrays;

class SmallestDifference {
  public static int[] findSmallestDifferencePair(int[] a1, int[] a2) {
    Arrays.sort(a1);
    Arrays.sort(a2);

    double smallestDiff = Double.MAX_VALUE;
    int[] smallestDiffPair = new int[2];
    int i = 0, j = 0;

    while(i < a1.length && j < a2.length) {
      double currentDiff = Math.abs(a1[i] - a2[j]);
      if(currentDiff < smallestDiff) {
        smallestDiff = currentDiff;
        smallestDiffPair[0] = a1[i];
        smallestDiffPair[1] = a2[j];
      }
      if(a1[i] < a2[j]) {
        i++;
      } else {
        j++;
      }
    }
    return smallestDiffPair;
  }

  public static void main(String[] args) {
    int[] a1 = new int[] {-1, 5, 10, 20, 28, 3};
    int[] a2 = new int[] {26, 134, 135, 15, 17};

    int[] pair = findSmallestDifferencePair(a1, a2);
    System.out.println(pair[0] + " " + pair[1]);
  }
}


Java Optional Tutorial with Examples

03 Mar 2021

If you’re a Java programmer, then you must have heard about or experienced NullPointerExceptions in your programs.

NullPointerExceptions are runtime exceptions thrown by the JVM. Null checks in programs are often overlooked by developers, causing serious bugs in code.

Java 8 introduced a new type called Optional<T> to help developers deal with null values properly.

The concept of Optional is not new, and other programming languages have similar constructs. For example, Scala has Option[T] and Haskell has the Maybe type.

In this blog post, I’ll explain Java 8’s Optional type and show you how to use it with simple examples.

What is Optional?

Optional is a container type for a value which may be absent. Confused? Let me explain.

Consider the following function which takes a user id, fetches the user’s details with the given id from the database and returns it –

User findUserById(String userId) { ... };

If userId is not present in the database then the above function returns null. Now, let’s consider the following code written by a client –

User user = findUserById("667290");
System.out.println("User's Name = " + user.getName());

A common NullPointerException situation, right? The developer forgot to add the null check in his code. If userId is not present in the database, then the above code snippet will throw a NullPointerException.

Now, let’s understand how Optional will help you mitigate the risk of running into NullPointerException here –

Optional<User> findUserById(String userId) { ... };

By returning Optional<User> from the function, we have made it clear to the clients of this function that there might not be a User with the given userId. Now the clients of this function are explicitly forced to handle this fact.

The client code can now be written as –

Optional<User> optional = findUserById("667290");

optional.ifPresent(user -> {
    System.out.println("User's name = " + user.getName());    
});

Once you have an Optional object, you can use various utility methods to work with the Optional. The ifPresent() method in the above example calls the supplied lambda expression if the user is present, otherwise it does nothing.

Well! You get the idea here right? The client is now forced by the type system to write the Optional check in his code.

Creating an Optional object

1. Create an empty Optional

An empty Optional object describes the absence of a value.

Optional<User> user = Optional.empty();

2. Create an Optional with a non-null value –

User user = new User("667290", "Rajeev Kumar Singh");
Optional<User> userOptional = Optional.of(user);

If the argument supplied to Optional.of() is null, then it will throw a NullPointerException immediately and the Optional object won’t be created.

3. Create an Optional with a value which may or may not be null –

Optional<User> userOptional = Optional.ofNullable(user);

If the argument passed to Optional.ofNullable() is non-null, then it returns an Optional containing the specified value, otherwise it returns an empty Optional.

Checking the presence of a value

1. isPresent()

The isPresent() method returns true if the Optional contains a non-null value, otherwise it returns false.

if(optional.isPresent()) {
    // value is present inside Optional
    System.out.println("Value found - " + optional.get());
} else {
    // value is absent
    System.out.println("Optional is empty");
}	

2. ifPresent()

The ifPresent() method allows you to pass a Consumer function that is executed if a value is present inside the Optional object.

It does nothing if the Optional is empty.

optional.ifPresent(value -> {
    System.out.println("Value found - " + value);
});

Note that I have supplied a lambda expression to the ifPresent() method. This makes the code more readable and concise.

Retrieving the value using get() method

Optional’s get() method returns a value if it is present, otherwise it throws NoSuchElementException.

User user = optional.get();

You should avoid calling the get() method on your Optionals without first checking whether a value is present, because it throws an exception if the value is absent.

Returning default value using orElse()

orElse() is great when you want to return a default value if the Optional is empty. Consider the following example –

// return "Unknown User" if user is null
User finalUser = (user != null) ? user : new User("0", "Unknown User");

Now, let’s see how we can write the above logic using Optional’s orElse() construct –

// return "Unknown User" if user is null
User finalUser = optionalUser.orElse(new User("0", "Unknown User"));

Returning default value using orElseGet()

Unlike orElse(), which returns a default value directly if the Optional is empty, orElseGet() allows you to pass a Supplier function which is invoked when the Optional is empty. The result of the Supplier function becomes the default value of the Optional –

User finalUser = optionalUser.orElseGet(() -> {
    return new User("0", "Unknown User");
});
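
A subtle but important difference: orElse() constructs the default value even when the Optional contains a value, whereas orElseGet() invokes the Supplier only when the Optional is empty. Here is a minimal sketch that makes this observable; the expensiveDefault() helper is hypothetical and stands in for any costly default-value computation, and User is the class used throughout this article –

import java.util.Optional;

public class OrElseVsOrElseGet {

    // Hypothetical helper standing in for any costly default-value computation.
    static User expensiveDefault() {
        System.out.println("Computing default...");
        return new User("0", "Unknown User");
    }

    public static void main(String[] args) {
        Optional<User> optionalUser = Optional.of(new User("667290", "Rajeev Kumar Singh"));

        // orElse(): the default is constructed eagerly, even though a value is present.
        optionalUser.orElse(expensiveDefault());          // prints "Computing default..."

        // orElseGet(): the Supplier is invoked only when the Optional is empty.
        optionalUser.orElseGet(() -> expensiveDefault()); // prints nothing
    }
}

So prefer orElseGet() over orElse() when constructing the default value is expensive.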

Throw an exception on absence of a value

You can use orElseThrow() to throw an exception if the Optional is empty. A typical scenario where this might be useful is returning a custom ResourceNotFoundException from your REST API if the object with the specified request parameters does not exist.

@GetMapping("/users/{userId}")
public User getUser(@PathVariable("userId") String userId) {
    return userRepository.findByUserId(userId).orElseThrow(
        () -> new ResourceNotFoundException("User not found with userId " + userId)
    );
}

Filtering values using filter() method

Let’s say you have an Optional object of User. You want to check the user’s gender and call a function if it’s MALE. Here is how you would do it using the old-school approach –

if(user != null && user.getGender().equalsIgnoreCase("MALE")) {
    // call a function
}

Now, let’s use Optional along with filter to achieve the same –

userOptional.filter(user -> user.getGender().equalsIgnoreCase("MALE"))
            .ifPresent(user -> {
                // Your function
            });

The filter() method takes a predicate as an argument. If the Optional contains a non-null value and the value matches the given predicate, then filter() method returns an Optional with that value, otherwise it returns an empty Optional.

So, the function inside ifPresent() in the above example will be called if and only if the Optional contains a user and user is a MALE.

Extracting and transforming values using map()

Let’s say that you want to get the address of a user if it is present and print it if the user is from India.

Consider the following getAddress() method inside the User class –

Address getAddress() {
    return this.address;
}

Here is how you would achieve the desired result –

if(user != null) {
    Address address = user.getAddress();
    if(address != null && address.getCountry().equalsIgnoreCase("India")) {
	    System.out.println("User belongs to India");
    }
}

Now, let’s see how we can get the same result using map() method –

userOptional.map(User::getAddress)
            .filter(address -> address.getCountry().equalsIgnoreCase("India"))
            .ifPresent(address -> {
                System.out.println("User belongs to India");
            });

You see how concise and readable the above code is? Let’s break the above code snippet and understand it in detail –

// Extract the user’s address using the map() method.
Optional<Address> addressOptional = userOptional.map(User::getAddress);

// Filter addresses from India.
Optional<Address> indianAddressOptional = addressOptional.filter(address -> address.getCountry().equalsIgnoreCase("India"));

// Print, if the country is India.
indianAddressOptional.ifPresent(address -> {
    System.out.println("User belongs to India");
});

In the above example, the map() method returns an empty Optional in the following cases – 1. the user is absent in userOptional, or 2. the user is present but getAddress() returns null.

Otherwise, it returns an Optional<Address> containing the user’s address.

Cascading Optionals using flatMap()

Let’s consider the above map() example again. You might ask: if the user’s address can be null, then why the heck aren’t you returning an Optional<Address> instead of a plain Address from the getAddress() method?

And you’re right! Let’s correct that: assume now that getAddress() returns Optional<Address>. Do you think the above code will still work?

The answer is no! The problem is the following line –

Optional<Address> addressOptional = userOptional.map(User::getAddress);

Since getAddress() returns Optional<Address>, the return type of userOptional.map() will be Optional<Optional<Address>>

Optional<Optional<Address>> addressOptional = userOptional.map(User::getAddress);

Oops! We certainly don’t want that nested Optional. Let’s use flatMap() to correct that –

Optional<Address> addressOptional = userOptional.flatMap(User::getAddress);

Cool! So, rule of thumb here – if the mapping function returns an Optional, use flatMap() instead of map() to get the flattened result from your Optional.

Conclusion

Thank you for reading. If you Optional<Liked> this blog post, give an Optional<HighFive> in the comment section below.

Remove Duplicates from Sorted Array II

06 Feb 2021

Given a sorted array, remove the duplicates from the array in-place such that each element appears at most twice, and return the new length.

Do not allocate extra space for another array, you must do this by modifying the input array in-place with O(1) extra memory.

Example

Given array [1, 1, 1, 3, 5, 5, 7]

The output should be 6, with the first six elements of the array being [1, 1, 3, 5, 5, 7]

Remove Duplicates from Sorted Array II solution in Java

This problem can be solved in O(n) time complexity by using two pointers (indexes).

class RemoveDuplicatesSortedArrayII {
  private static int removeDuplicates(int[] nums) {
    int n = nums.length;

    /*
     * This index will move when we modify the array in-place to include an element
     * so that it is not repeated more than twice.
     */
    int j = 0;

    for (int i = 0; i < n; i++) {
      /*
       * If the current element is equal to the element at index i+2, then skip the
       * current element because it is repeated more than twice.
       */
      if (i < n - 2 && nums[i] == nums[i + 2]) {
        continue;
      }

      nums[j++] = nums[i];
    }

    return j;
  }

  public static void main(String[] args) {
    int[] nums = new int[] { 1, 1, 1, 3, 5, 5, 7 };
    int newLength = removeDuplicates(nums);

    System.out.println("Length of array after removing duplicates = " + newLength);

    System.out.print("Array = ");
    for (int i = 0; i < newLength; i++) {
      System.out.print(nums[i] + " ");
    }
    System.out.println();
  }
}
# Output
Length of array after removing duplicates = 6
Array = 1 1 3 5 5 7 

Why We Use Spring Boot Maven Plugin?

30 Mar 2021

The Spring Boot Maven Plugin provides Spring Boot support in Maven, letting us package executable jar or war archives and run an application “in-place”. To use it, we need Maven 3.2 (or later).

The plugin provides several goals to work with a Spring Boot application:

  • spring-boot:repackage: create a jar or war file that is auto-executable. It can replace the regular artifact or can be attached to the build lifecycle with a separate classifier.
  • spring-boot:run: run your Spring Boot application with several options to pass parameters to it.
  • spring-boot:start and spring-boot:stop: integrate your Spring Boot application into the integration-test phase so that the application starts before it.
  • spring-boot:build-info: generate build information that can be used by the Actuator.
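
For illustration, here is a minimal sketch of how the plugin is declared in the <build> section of pom.xml. The version is omitted here on the assumption that it is inherited from spring-boot-starter-parent; add an explicit <version> otherwise –

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>

With this in place (and the starter parent configuring the executions), mvn package produces the auto-executable archive via the repackage goal, and mvn spring-boot:run starts the application in-place.
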
Spring Boot Starter Maven Templates

30 Mar 2021

Not very long ago, with the exponential increase in the number of libraries and their dependencies, dependency management was becoming a very complex task that required a good amount of technical expertise to do correctly. With the introduction of Spring Boot starter templates, you can get a lot of help in identifying the correct dependencies to use in your project if you want to use any popular library.

Spring Boot comes with over 50 different starter modules, which provide ready-to-use integration libraries for many different frameworks, such as database connections (both relational and NoSQL), web services, social network integration, monitoring libraries, logging, template rendering, and the list just keeps going on.

How do starter templates work?

Spring Boot starters are templates that contain a collection of all the relevant transitive dependencies that are needed to start a particular functionality. Each starter has a special file which contains the list of all the dependencies it provides.

These files can be found inside the pom.xml file of the respective starter module, e.g. the spring-boot-starter-data-jpa starter pom file can be found on GitHub.

This tells us that by including spring-boot-starter-data-jpa in our build as a dependency, we will automatically get spring-orm, hibernate-entity-manager, and spring-data-jpa. These libraries provide us all the basic things needed to start writing JPA/DAO code.
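
For illustration, pulling all of that in is then a single dependency declaration in your pom.xml; the version is assumed to be inherited from the Spring Boot parent or its dependency management –

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>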

So the next time you want to give your project a specific functionality, I suggest checking for an existing starter template to see if you can use it directly. Community additions are ongoing, so this list is already growing, and you can contribute to it as well.

Popular templates and their transitive dependencies

I am listing some very frequently used Spring starters and the dependencies they bring along, for information only.

STARTER                            DEPENDENCIES
spring-boot-starter                spring-boot, spring-context, spring-beans
spring-boot-starter-jersey         jersey-container-servlet-core, jersey-container-servlet, jersey-server
spring-boot-starter-actuator       spring-boot-actuator, micrometer-core
spring-boot-starter-aop            spring-aop, aspectjrt, aspectjweaver
spring-boot-starter-data-rest      spring-hateoas, spring-data-rest-webmvc
spring-boot-starter-hateoas        spring-hateoas
spring-boot-starter-logging        logback-classic, jcl-over-slf4j, jul-to-slf4j
spring-boot-starter-log4j2         log4j2, log4j-slf4j-impl
spring-boot-starter-security       spring-security-web, spring-security-config
spring-boot-starter-test           spring-test, spring-boot, junit, mockito, hamcrest-library, assertj, jsonassert, json-path
spring-boot-starter-web-services   spring-ws-core

Drop me your questions in the comments section.

Happy Learning !!


3 Layer Automated Testing

26 Mar 2021

Birth of Quality Engineering

Quality Assurance practice was relatively simple when we built monolith systems with traditional waterfall development models. The Quality Assurance (QA) teams would start the GUI layer’s validation process after waiting months for the product development. To enhance the testing process, we would have to spend a lot of effort and $$$ (commercial tools) in automating the GUI via various tools like Micro Focus UFT, Selenium, TestComplete, Coded UI, Ranorex, etc., and most often these tests are complex to maintain and scale. Thus, most QA teams would have to restrict their automated tests to smoke and partial regression, ending in inadequate test coverage.

With modern technology, the new era tech companies, including Varo, have widely adopted Microservices-based architecture combined with the Agile/DevOps development model. This opens up a lot of opportunities for Quality Assurance practice, and in my opinion, this was the origin of the transformation from Quality Assurance to “Quality Engineering.”

The Common Pitfall

While automated testing gives us a massive benefit with the three R’s (Repeatable → run any number of times, Reliable → run with confidence, Reusable → develop once and share), it also comes with maintenance costs. I like to quote Grady Booch — “A fool with a tool is still a fool.” Targeting inappropriate areas would not provide us the desired benefit. We should consider several factors to choose the right candidates for automation; a few to name are the lifespan of the product, the volatility of the requirements, the complexity of the tests, business criticality, and technical feasibility.

It’s well known that the cost of a bug increases toward the right of the software lifecycle. So it is necessary to implement several walls of defense to arrest these software bugs as early as possible (the Shift-Left Testing paradigm). By implementing an agile development model with a fail-fast mindset, we have taken care of our first wall of defense. But to move faster in this shorter development cycle, we must build robust automated test suites to take care of the rolled-out features and make room for testing the new features.

The 3 Layer Architecture

The Varo architecture comprises three essential layers.

  • The Frontend layer (Web, iOS, Mobile apps) — User experience
  • The Orchestration layer (GraphQL) — Makes multiple microservice calls and returns the decision and data to the frontend apps
  • The Microservice layer (gRPC, Kafka, Postgres) — Core business layer

While understanding the microservice architecture for testing, there were several questions posed.

  • Which layer to test?
  • What to test in these layers?
  • Does testing frontend automatically validate downstream service?
  • Does testing multiple layers introduce redundancies?

We will try to answer these by analyzing the table below, which provides an overview of what these layers mean for quality engineering.

After inferring from the table, we have loosely adopted the Testing Pyramid pattern to invest in automated testing as:

  • Full feature/ functional validations on Microservices layer
  • Business process validations on Orchestration layer
  • E2E validations on Frontend layer

The diagram below best represents our test strategy for each layer.

Note: Though we have automated white-box tests such as unit tests and integration tests, we exclude those from this discussion.

Use Case

Let’s take the example below for illustration to understand best how this Pyramid works.

The user is presented with a form to submit. The form accepts three inputs — Field A to get the user identifier, Field B a drop-down value, and Field C accepts an Integer value (based on a defined range).

Once the user clicks the Submit button, the GraphQL API calls Microservice A to determine the type of customer. Then it calls the next Microservice B to validate the acceptable range of values for Field C (which depends on the values of Field A and Field B).

Validations:

1. Feature Validations

✓ Positive behavior (Smoke, Functional, System Integration)

  • Validating behavior with a valid set of data combinations
  • Validating database
  • Validating integration — Impact on upstream/ downstream systems

✓ Negative behavior

  • Validating with invalid data (for example: Invalid authorization, Disqualified data)

2. Fluent Validations

✓ Evaluating field definitions — such as

  • Mandatory field (not empty/not null)
  • Invalid data types (for example: Int → negative value, String → Junk values with special characters or UUID → Invalid UUID formats)

Let’s look at how the “feature validations” can be written for the above use case by applying one of the test case authoring techniques — Boundary Value Analysis.
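
To make the technique concrete before we count the combinations, here is a small illustrative sketch; the range used for Field C is hypothetical, since the exact values are not spelled out here –

import java.util.List;

public class BoundaryValueAnalysisExample {

    // Classic Boundary Value Analysis for a closed range [min, max]:
    // test just below, on, and just above each boundary.
    static List<Integer> boundaryCandidates(int min, int max) {
        return List.of(min - 1, min, min + 1, max - 1, max, max + 1);
    }

    public static void main(String[] args) {
        // Hypothetical acceptable range for Field C: 10..50
        System.out.println(boundaryCandidates(10, 50)); // [9, 10, 11, 49, 50, 51]
    }
}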

Testing the scenario above would require 54 different combinations of feature validations, and below is the rationale for picking the right candidates for each layer.

Microservice Layer: This is the layer delivered first, enabling us to invest in automated testing as early as possible (Shift-Left). And the scope for our automation would be 100% of all the above scenarios.

Orchestration Layer: This layer translates the information from the microservice to frontend layers; we try to select at least two tests (1 positive & 1 negative) for each scenario. The whole objective is to ensure the integration is working as expected.

Frontend Layer: In this layer, we focus on E2E validations, which means these validations would be a part of the complete user journey. But we would ensure that we have at least one or more positive and negative scenarios embedded in those E2E tests. Business priority (frequently used data by the real-time users) helps us to select the best scenario for our E2E validations.

Conclusion

There are always going to be sets of redundant tests across these layers. But that is the trade-off we had to take to ensure that we have correct quality gates on each of these layers. The pros of this approach are that we achieve safe and faster deployments to Production by enabling quicker testing cycles, better test coverage, and risk-free decisions. In addition, having these functional test suites spread across the layers helps us to isolate the failures in respective areas, thus saving us time to troubleshoot an issue.

However, one size does not always fit all. The decision has to be made based on an understanding of how the software architecture is built and the supporting infrastructure available to facilitate the testing efforts. One of the critical success factors for this implementation is building a good quality engineering team with the right skills and proper tools. But that is another story — Coming soon: “Quality Engineering: Redefined.”

How to copy a File or Directory in Java

03 Mar 2021

In this article, you’ll learn how to copy a file or directory in Java using various methods like Files.copy() or using BufferedInputStream and BufferedOutputStream.

Java Copy File using Files.copy()

Java NIO’s Files.copy() method is the simplest way of copying a file in Java.

import java.io.IOException;
import java.nio.file.*;

public class CopyFileExample {
    public static void main(String[] args) {

        Path sourceFilePath = Paths.get("./bar.txt");
        Path targetFilePath = Paths.get(System.getProperty("user.home") + "/Desktop/bar-copy.txt");

        try {
            Files.copy(sourceFilePath, targetFilePath);
        } catch (FileAlreadyExistsException ex) {
            System.out.println("File already exists");
        } catch (IOException ex) {
            System.out.format("I/O error: %s%n", ex);
        }
    }
}

The Files.copy() method will throw FileAlreadyExistsException if the target file already exists. If you want to replace the target file then you can use the REPLACE_EXISTING option like this –

Files.copy(sourceFilePath, targetFilePath, StandardCopyOption.REPLACE_EXISTING);

Note that directories can be copied using the same method. However, files inside the directory are not copied, so the new directory will be empty even if the original directory contains files.

Read: How to copy Directories recursively in Java
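
As a quick preview, here is a minimal sketch of recursive copying built on Files.walk(). It assumes the target directory does not already exist and skips the finer-grained error handling a production version would need –

import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.*;
import java.util.stream.Stream;

public class CopyDirectoryRecursively {
    public static void copyDirectory(Path source, Path target) throws IOException {
        try (Stream<Path> paths = Files.walk(source)) {
            paths.forEach(sourcePath -> {
                // Re-root each source path under the target directory.
                Path targetPath = target.resolve(source.relativize(sourcePath));
                try {
                    Files.copy(sourcePath, targetPath, StandardCopyOption.REPLACE_EXISTING);
                } catch (IOException ex) {
                    throw new UncheckedIOException(ex);
                }
            });
        }
    }
}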

Java Copy File using BufferedInputStream and BufferedOutputStream

You can also copy a file byte-by-byte using byte-stream I/O. The following example reads chunks of bytes from the source file using a BufferedInputStream and writes them to the target file using a BufferedOutputStream.

You can also use a FileInputStream and a FileOutputStream directly for performing the reading and writing. But buffered I/O is more performant because it buffers the data and reads/writes it in chunks.

import java.io.*;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CopyFileExample1 {
    public static void main(String[] args) {
        Path sourceFilePath = Paths.get("./bar.txt");
        Path targetFilePath = Paths.get(System.getProperty("user.home") + "/Desktop/bar-copy.txt");

        try(InputStream inputStream = Files.newInputStream(sourceFilePath);
            BufferedInputStream bufferedInputStream = new BufferedInputStream(inputStream);

            OutputStream outputStream = Files.newOutputStream(targetFilePath);
            BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(outputStream)) {

            byte[] buffer = new byte[4096];
            int numBytes;
            while ((numBytes = bufferedInputStream.read(buffer)) != -1) {
                bufferedOutputStream.write(buffer, 0, numBytes);
            }
        } catch (IOException ex) {
            System.out.format("I/O error: %s%n", ex);
        }
    }
}

Advantages and Disadvantages of Kafka

31 Mar 2021

1. Advantages and Disadvantages of Kafka

Today, we will discuss the advantages and disadvantages of Kafka, because it is very important to know the limitations of any technology before using it, and the same holds for its advantages.
So, let’s discuss Kafka’s advantages and disadvantages in detail.

2. Advantages of Kafka

So, here we are listing some of the advantages of Kafka. Basically, these advantages make Kafka ideal for our data lake implementation. So, let’s start learning the advantages of Kafka in detail:

a. High-throughput
Kafka is capable of handling high-velocity, high-volume data without requiring especially large hardware. It can support a message throughput of thousands of messages per second.
b. Low Latency
It is capable of handling these messages with very low latency, in the range of milliseconds, as demanded by most new use cases.
c. Fault-Tolerant
One of the best advantages is fault tolerance. Kafka has an inherent capability to be resistant to node/machine failure within a cluster.
d. Durability
Here, durability refers to the persistence of data/messages on disk. Message replication is also one of the reasons behind durability; hence, messages are never lost.
e. Scalability
Kafka can be scaled out on the fly by adding additional nodes, without incurring any downtime. Moreover, message handling inside the Kafka cluster is fully transparent and seamless.
f. Distributed
The distributed architecture of Kafka makes it scalable, using capabilities like replication and partitioning.
g. Message Broker Capabilities
Kafka tends to work very well as a replacement for a more traditional message broker. Here, a message broker refers to an intermediary program which translates messages from the formal messaging protocol of the publisher to the formal messaging protocol of the receiver.
h. High Concurrency
Kafka is able to handle thousands of messages per second in low-latency conditions with high throughput. In addition, it permits reading and writing messages at high concurrency.
i. By Default Persistent
As discussed above, messages are persistent by default, which makes Kafka durable and reliable.
j. Consumer Friendly
It is possible to integrate Kafka with a variety of consumers. The best part of Kafka is that it can behave or act differently according to the consumer it integrates with, because each consumer has a different ability to handle the messages coming out of Kafka. Moreover, Kafka integrates well with consumers written in a variety of languages.
k. Batch Handling Capable (ETL-like functionality)
Kafka can also be employed for batch-like use cases and can do the work of a traditional ETL, thanks to its capability of persisting messages.
l. Variety of Use Cases
It is able to manage the variety of use cases commonly required for a data lake, for example, log aggregation, web activity tracking, and so on.
m. Real-Time Handling
Kafka can handle real-time data pipelines. Since we needed a technology to handle real-time messages from applications, this is one of the core reasons for choosing Kafka.

3. Disadvantages of Kafka

It is good to know Kafka’s limitations, even if its advantages appear more prominent than its disadvantages. However, consider using it only when the advantages are too compelling to omit. Note also that some disadvantages might be more relevant for a particular use case and not really apply to yours. So, here we are listing some of the disadvantages associated with Kafka:
a. No Complete Set of Monitoring Tools
Kafka lacks a full set of management and monitoring tools. Hence, enterprise support staff have often felt anxious or fearful about choosing Kafka and supporting it in the long run.
b. Issues with Message Tweaking
As we know, the broker uses certain system calls to deliver messages to the consumer. Kafka performs quite well when the message is unchanged, because it can use the capabilities of the system. However, its performance reduces significantly if the message needs some tweaking.
c. No Support for Wildcard Topic Selection
Kafka only matches exact topic names, which means it does not support wildcard topic selection. This makes it incapable of addressing certain use cases.
d. Lack of Pace
There can be a problem with the pace of development, as the client APIs needed for other languages are maintained by different individuals and companies.
e. Reduced Performance
In general, there are no issues with individual message size. However, brokers and consumers start compressing these messages as the size increases. Due to this, when decompressed, the node memory gets slowly used up. Compression also happens as the data flows in the pipeline, which affects throughput and performance.
f. Behaves Clumsily
Sometimes, Kafka starts behaving a bit clumsy and slow when the number of queues in a cluster increases.
g. Lacks Some Messaging Paradigms
Some messaging paradigms are missing in Kafka, like request/reply, point-to-point queues, and so on. Not always, but for certain use cases, this can be problematic.
So, this was all about the advantages and disadvantages of Kafka. Hope you like our explanation.

4. Conclusion: Advantages and Disadvantages of Kafka

Hence, we have seen the advantages and disadvantages of Kafka in detail. Knowing them will help you a lot before using Kafka. However, if any doubt occurs regarding Kafka’s pros and cons, feel free to ask through the comment section.

Server-side vs Client-side Routing

26 Mar 2021

Almost every website or web-application uses routing. Discovering a website by changing its URL is a very powerful feature that comes standard with the web. How all of this is handled can vary a lot between different websites and web-applications.

All websites and web-applications, whether they use server-side or client-side routing, are accessed from a server. How a website or web-application responds to different URLs is commonly handled server-side, although with the rising popularity of JavaScript frameworks, other ways have been found to manage routing.

Routing

Routing is the mechanism by which requests are connected to some code. It is essentially the way you navigate through a website or web-application. By clicking on a link, the URL changes which provides the user with some new data or a new webpage.

Server-side

When browsing, the adjustment of a URL can make a lot of things happen. This will happen regularly by clicking on a link, which in turn will request a new page from the server. This is what we call a server-side route. A whole new document is served to the user.

A server-side request causes the whole page to refresh. This is because a new GET request is sent to the server which responds with a new document, completely discarding the old page altogether.
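
To illustrate, here is a minimal sketch of server-side routes built on the JDK’s com.sun.net.httpserver; the paths and markup are made up for the example. Every URL is answered with a complete document, so following a link triggers a full-page refresh –

import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class ServerSideRoutingSketch {
    public static void main(String[] args) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Each context is a server-side route that serves a whole new document.
        server.createContext("/", exchange -> respond(exchange, "<html><body><h1>Home</h1></body></html>"));
        server.createContext("/about", exchange -> respond(exchange, "<html><body><h1>About</h1></body></html>"));

        server.start();
    }

    private static void respond(HttpExchange exchange, String body) throws IOException {
        byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
        exchange.getResponseHeaders().set("Content-Type", "text/html; charset=utf-8");
        exchange.sendResponseHeaders(200, bytes.length);
        try (OutputStream os = exchange.getResponseBody()) {
            os.write(bytes);
        }
    }
}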

Pros

  • A server-side route will only request the data that’s needed. No more, no less.
  • Because server-side routing has been the standard for a long time, search engines are optimised for webpages that come from the server.

Cons

  • Every request results in a full-page refresh, which means unnecessary data is requested. The header and footer of a webpage often stay the same, so they aren’t something you would want to request from the server again.
  • It can take a while for the page to be rendered. However, this is only the case when the document to be rendered is very large or when you have slow internet speed.

Client-side

A client-side route happens when the route is handled internally by the JavaScript that is loaded on the page. When a user clicks on a link, the URL changes but the request to the server is prevented. The adjustment to the URL will result in a changed state of the application. The changed state will ultimately result in a different view of the webpage. This could be the rendering of a new component, or even a request to a server for some data that the application will turn into some HTML elements.

It is important to note that the whole page won’t refresh when using client-side routing. There are just some elements inside the application that will change.

Pros

  • Because less data is processed, routing between views is generally faster.
  • Smooth transitions and animations between views are easier to implement.

Cons

  • The whole website or web-application needs to be loaded on the first request. That’s why the initial loading time usually takes longer.
  • Because the whole website or web-application is loaded initially, there is a possibility that there is data downloaded for views you won’t even come across.
  • It requires more setup work or even a library. Because server-side is the standard, extra code must be written to make client-side routing possible.
  • Search engine crawling is less optimised. Google is making good progress on crawling single-page apps, but it isn’t nearly as efficient as with server-side routed websites.

Summary

There is no best method to manage your routing. Server-side and client-side routing both have their advantages and weaknesses. It is important to make your decision based on the needs of your website or web-application, or heck, even combine the two.