
Spring Boot Starter Maven Templates
30 Mar 2021

Not very long ago, with the exponential increase in the number of libraries and their dependencies, dependency management was becoming a very complex task that required a good amount of technical expertise to do correctly. With the introduction of Spring Boot starter templates, you get a lot of help in identifying the correct dependencies to use in your project when you want to use a popular library.

Spring Boot comes with more than 50 different starter modules, which provide ready-to-use integration libraries for many different frameworks: relational and NoSQL database connections, web services, social network integration, monitoring libraries, logging, template rendering, and the list just keeps going on.

How do starter templates work?

Spring Boot starters are templates that contain a collection of all the relevant transitive dependencies needed to start a particular functionality. Each starter has a special pom file that lists all the dependencies it provides.

These files can be found inside the pom.xml file of the respective starter module. For example, the spring-boot-starter-data-jpa starter pom file can be found on GitHub.

This tells us that by including spring-boot-starter-data-jpa in our build as a dependency, we automatically get spring-orm, hibernate-entitymanager, and spring-data-jpa. These libraries provide all the basics needed to start writing JPA/DAO code.
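For illustration, pulling all of this in takes a single dependency declaration in your pom.xml. The sketch below assumes your project inherits from the Spring Boot parent POM, which is why no version is specified:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>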

So the next time you want to give your project a specific functionality, I suggest checking for an existing starter template to see if you can use it directly. Community additions are ongoing, so this list keeps growing, and you can contribute to it as well.

Popular templates and their transitive dependencies

I am listing down some frequently used Spring starters and the dependencies they bring along, for information only.

STARTER                           DEPENDENCIES
spring-boot-starter               spring-boot, spring-context, spring-beans
spring-boot-starter-jersey        jersey-container-servlet-core, jersey-container-servlet, jersey-server
spring-boot-starter-actuator      spring-boot-actuator, micrometer-core
spring-boot-starter-aop           spring-aop, aspectjrt, aspectjweaver
spring-boot-starter-data-rest     spring-hateoas, spring-data-rest-webmvc
spring-boot-starter-hateoas       spring-hateoas
spring-boot-starter-logging       logback-classic, jcl-over-slf4j, jul-to-slf4j
spring-boot-starter-log4j2        log4j2, log4j-slf4j-impl
spring-boot-starter-security      spring-security-web, spring-security-config
spring-boot-starter-test          spring-test, spring-boot, junit, mockito, hamcrest-library, assertj, jsonassert, json-path
spring-boot-starter-web-services  spring-ws-core

Drop me your questions in the comments section.

Happy Learning !!


Find the pair with the smallest difference in two unsorted arrays
06 Feb 2021

Given two non-empty arrays of integers, find the pair of values (one value from each array) with the smallest (non-negative) difference.

Example

Input: [1, 3, 15, 11, 2], [23, 127, 235, 19, 8]

Output: [11, 8]; this pair has the smallest difference.

Solution 1. Brute Force approach: Use two for loops

The naive way to solve this problem is to use two for loops and compare the difference of every pair to find the pair with the smallest difference:

Time complexity: O(n^2)

import java.util.Arrays;

class SmallestDifference {

  public static int[] findSmallestDifferencePair_Naive(int[] a1, int[] a2) {
    long smallestDiff = Long.MAX_VALUE;
    int[] smallestDiffPair = new int[2];

    // Compare every pair (a1[i], a2[j]) and keep the one with the smallest difference
    for(int i = 0; i < a1.length; i++) {
      for(int j = 0; j < a2.length; j++) {
        // long arithmetic avoids int overflow when the two values are far apart
        long currentDiff = Math.abs((long) a1[i] - a2[j]);
        if(currentDiff < smallestDiff) {
          smallestDiff = currentDiff;
          smallestDiffPair[0] = a1[i];
          smallestDiffPair[1] = a2[j];
        }
      }
    }
    return smallestDiffPair;
  }

  public static void main(String[] args) {
    int[] a1 = new int[] {-1, 5, 10, 20, 28, 3};
    int[] a2 = new int[] {26, 134, 135, 15, 17};

    int[] pair = findSmallestDifferencePair_Naive(a1, a2);
    System.out.println(pair[0] + " " + pair[1]);
  }
}

Solution 2. Use Sorting along with the two-pointer sliding window approach

You can improve upon the brute force solution by first sorting both arrays and then using the two-pointer sliding window pattern.

Here is how it works in this case:

  • Initialize a variable to keep track of the smallest difference found so far (smallestDiff).
  • Sort both arrays.
  • Initialize two indexes (one for each array): i = 0 and j = 0.
  • Loop until we reach the end of either array. For every iteration:
    • Compare smallestDiff with abs(a1[i] - a2[j]) and reset it if the new difference is smaller.
    • If a1[i] < a2[j], increment i.
    • Otherwise, increment j.

Time complexity: O(m log m + n log n)

import java.util.Arrays;

class SmallestDifference {
  public static int[] findSmallestDifferencePair(int[] a1, int[] a2) {
    Arrays.sort(a1);
    Arrays.sort(a2);

    long smallestDiff = Long.MAX_VALUE;
    int[] smallestDiffPair = new int[2];
    int i = 0, j = 0;

    while(i < a1.length && j < a2.length) {
      // long arithmetic avoids int overflow when the two values are far apart
      long currentDiff = Math.abs((long) a1[i] - a2[j]);
      if(currentDiff < smallestDiff) {
        smallestDiff = currentDiff;
        smallestDiffPair[0] = a1[i];
        smallestDiffPair[1] = a2[j];
      }
      // Advance the pointer at the smaller value; moving the larger one can only grow the gap
      if(a1[i] < a2[j]) {
        i++;
      } else {
        j++;
      }
    }
    return smallestDiffPair;
  }

  public static void main(String[] args) {
    int[] a1 = new int[] {-1, 5, 10, 20, 28, 3};
    int[] a2 = new int[] {26, 134, 135, 15, 17};

    int[] pair = findSmallestDifferencePair(a1, a2);
    System.out.println(pair[0] + " " + pair[1]);
  }
}


How to create a new file in Java
03 Mar 2021

There are various ways to create a new file in Java. In this article, I’ve outlined the two most recommended ways of creating new files.

Create New File using Java NIO

You can use the Files.createFile(path) method to create a new file in Java:

import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CreateNewFile {

    public static void main(String[] args)  {
        // New file path
        Path filePath = Paths.get("./bar.txt");

        try {
            // Create a file at the specified file path
            Files.createFile(filePath);
            System.out.println("File created successfully!");

        } catch (FileAlreadyExistsException e) {
            System.out.println("File already exists");
        } catch (IOException e) {
            System.out.println("An I/O error occurred: " + e.getMessage());
        } catch (SecurityException e) {
            System.out.println("No permission to create file: " + e.getMessage());
        }
    }
}

Create New File with missing parent directories using Java NIO

There are scenarios in which you may want to create any missing parent directories while creating a file. You can use the Files.createDirectories(path) method to create the missing parent directories before creating the file.

import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CreateNewFile {

    public static void main(String[] args)  {
        // New file path
        Path filePath = Paths.get("java/io/bar.txt");

        try {
            // Create missing parent directories
            if(filePath.getParent() != null) {
                Files.createDirectories(filePath.getParent());
            }

            // Create a file at the specified file path
            Files.createFile(filePath);
            System.out.println("File created successfully!");

        } catch (FileAlreadyExistsException e) {
            System.out.println("File already exists");
        } catch (IOException e) {
            System.out.println("An I/O error occurred: " + e.getMessage());
        } catch (SecurityException e) {
            System.out.println("No permission to create file: " + e.getMessage());
        }
    }
}

Create New File in Java using java.io.File class – JDK 6+

You can also use the File.createNewFile() method to create a new file in Java. It returns a boolean value which is –

  • true, if the file does not exist and was created successfully
  • false, if the file already exists

import java.io.File;
import java.io.IOException;

public class CreateNewFile {

    public static void main(String[] args) {
        // Instantiate a File object with a file path
        File file = new File("./foo.txt");

        try {
            // Create the file in the filesystem
            boolean success = file.createNewFile();

            if (success) {
                System.out.println("File created successfully!");
            } else {
                System.out.println("File already exists!");
            }
        } catch (IOException e) {
            System.out.println("An I/O error occurred: " + e.getMessage());
        } catch (SecurityException e) {
            System.out.println("No sufficient permission to create file: " + e.getMessage());
        }
    }
}

Create New File along with missing parent directories with java.io.File class

If you want to create missing parent directories while creating a file, you can explicitly create the directories by calling the file.getParentFile().mkdirs() method:

import java.io.File;
import java.io.IOException;

public class CreateNewFile {

    public static void main(String[] args) {
        // Instantiate a File object with a file path
        File file = new File("java/io/foo.txt");

        try {
            // Create missing parent directories
            if(file.getParentFile() != null) {
                file.getParentFile().mkdirs();
            }

            // Create the file
            boolean success = file.createNewFile();

            if (success) {
                System.out.println("File created successfully!");
            } else {
                System.out.println("File already exists!");
            }
        } catch (IOException e) {
            System.out.println("An I/O error occurred: " + e.getMessage());
        } catch (SecurityException e) {
            System.out.println("No sufficient permission to create file: " + e.getMessage());
        }
    }
}

How to copy Directories recursively in Java
03 Mar 2021

In this article, you’ll learn how to copy a non-empty directory recursively with all its sub-directories and files to another location in Java.

Java copy directory recursively

import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Stream;

public class CopyDirectoryRecursively {
    public static void main(String[] args) throws IOException {
        Path sourceDir = Paths.get("/Users/project/Desktop/new-media");
        Path destinationDir = Paths.get("/Users/project/Desktop/media");

        // Traverse the file tree and copy each file/directory.
        // Files.walk returns a lazily populated Stream holding open directory
        // handles, so it should be closed via try-with-resources.
        try (Stream<Path> paths = Files.walk(sourceDir)) {
            paths.forEach(sourcePath -> {
                try {
                    Path targetPath = destinationDir.resolve(sourceDir.relativize(sourcePath));
                    System.out.printf("Copying %s to %s%n", sourcePath, targetPath);
                    Files.copy(sourcePath, targetPath, StandardCopyOption.REPLACE_EXISTING);
                } catch (IOException ex) {
                    System.out.format("I/O error: %s%n", ex);
                }
            });
        }
    }
}

Writing your first Kotlin program
07 Mar 2021

The first program that we typically write in any programming language is the “Hello, World” program. Let’s write the “Hello, World” program in Kotlin and understand its internals.

The “Hello, World!” program in Kotlin

Open your favorite editor or an IDE and create a file named hello.kt with the following code –

// Kotlin Hello World Program
fun main(args: Array<String>) {
    println("Hello, World!")
}

You can compile and run the program using Kotlin’s compiler like so –

$ kotlinc hello.kt
$ kotlin HelloKt
Hello, World!

You can also run the program in an IDE like Eclipse or IntelliJ. Learn how to Setup and run Kotlin programs in your system.

Internals of the “Hello, World!” program

  • Line 1. The first line is a comment. Comments are ignored by the compiler; they are there for your own sake (and for others who read your code). Kotlin supports two styles of comments, similar to other languages like C, C++, and Java:
    1. Single-line comment: // This is a single-line comment
    2. Multi-line comment: /* This is an example of a multi-line comment. It can span over multiple lines. */
  • Line 2. The second line defines a function called main:

fun main(args: Array<String>) { // ... }

Functions are the building blocks of a Kotlin program. All functions in Kotlin start with the keyword fun, followed by the name of the function (main in this case), a list of zero or more comma-separated parameters, an optional return type, and a body. The main() function takes one argument, an Array of strings, and returns Unit. The Unit type corresponds to void in Java; it indicates that the function doesn’t return any meaningful value. The Unit type is optional, i.e. you don’t need to declare it explicitly in the function declaration:

fun main(args: Array<String>): Unit { // `Unit` is optional }

If you don’t specify a return type, Unit is assumed. The main() function is not just any ordinary function; it is the entry point of your program and the first thing that gets called when you run a Kotlin program.

  • Line 3. The third line is a statement. It prints the String “Hello, World!” to standard output:

println("Hello, World!") // No semicolons needed :)

Note that we can directly use println() to print to standard output, unlike Java where we need to use System.out.println(). Kotlin provides several wrappers over standard Java library functions; println is one of them. Also, semicolons are optional in Kotlin, just like in many other modern languages.

Conclusion

Congratulations! You just wrote your first program in Kotlin. You learned about the main method, single-line and multi-line comments, and the println library function.

In the next article, you’ll learn how to create variables in Kotlin and also look at various data types that are supported for creating variables.

Microkernel Architecture
14 May 2021

What is Microkernel Architecture?

Many third-party applications, following software architecture best practices, make software packages available as downloadable plug-ins or versions. The Microkernel Architecture is best suited to this particular type of application, which is why it is also called the plug-in architecture pattern.

With this style, enterprise application development teams can add pluggable features to an existing version of the software, providing for extensibility. The architecture consists of two kinds of components: the core system and the plug-ins. The core is designed minimally, holding just the right proportion of components to render the system effective.


The most relatable example of the Microkernel Architecture is an internet browser: you download a version of the application and, depending on the missing functionality, download and add plug-ins. Enterprise software teams rely on this pattern for designing large-scale, complex applications as well. An example of such a business application could be software for processing insurance claims.
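To make the pattern concrete, here is a minimal, hypothetical Java sketch that uses the JDK’s ServiceLoader mechanism as a plug-in registry. The ClaimPlugin contract and all names are illustrative, not part of any specific framework: the core knows only the contract, and concrete plug-ins are discovered at runtime from the classpath (via META-INF/services entries).

import java.util.ServiceLoader;

// Hypothetical plug-in contract: every pluggable feature implements this
// interface and is registered on the classpath via META-INF/services.
interface ClaimPlugin {
    String name();
    void process(String claimId);
}

// Minimal core system: it contains no business rules of its own and simply
// discovers and dispatches to whatever plug-ins are present.
public class InsuranceCore {

    public static void main(String[] args) {
        ServiceLoader<ClaimPlugin> plugins = ServiceLoader.load(ClaimPlugin.class);
        for (ClaimPlugin plugin : plugins) {
            System.out.println("Loaded plug-in: " + plugin.name());
            plugin.process("claim-42");
        }
    }
}

Adding or removing a feature then becomes a matter of adding or removing a jar from the classpath; the core itself never changes.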

Benefits

  • This design has proven to be highly flexible. Plug-ins make it possible to react to change in near real time: changes can be dealt with in isolation while the core system, for the most part, retains its stable state, therefore requiring fewer development updates over time.
  • An enterprise software development company could face downtime at deployment, but that can be minimized or altogether avoided by adding plug-in modules to the core dynamically.
  • A software development company can test plug-in prototypes in isolation and check for performance issues without affecting the core of the architecture.
  • Microkernel Architecture is most appreciated for maintaining high-performance applications, as the software can be customized to include only those capabilities that are needed the most.

Potential Drawbacks 

  • Apps such as those conceptualized by enterprise mobile app development services have a non-negotiable need to scale. However, the Microkernel Architecture is grounded in the design of the product and is naturally suited to apps that are smaller in size.
  • An enterprise app development company could find the Microkernel pattern rather hard to execute due to the vast number of plug-ins compatible with the core. This calls for drawing up governance contracts, updating plug-in registries, and so many other formalities that the implementation becomes a challenge.

Ideal For

Microkernel Architecture is best suited for workflow applications and for those that need job scheduling. As pointed out above, any application that, like a web browser, you want to release with just the right amount of functionality while leaving room to be filled by additional plug-ins can be built with this design pattern.

JPA / Hibernate ElementCollection Example with Spring Boot
03 Mar 2021

In this article, you’ll learn how to map a collection of basic as well as embeddable types using JPA’s @ElementCollection and @CollectionTable annotations.

Let’s say that the users of your application can have multiple phone numbers and addresses. To map this requirement into the database schema, you need to create separate tables for storing the phone numbers and addresses –

Hibernate Spring Boot JPA @ElementCollection example table structure

Both the tables user_phone_numbers and user_addresses contain a foreign key to the users table.

You can implement such a relationship at the object level using JPA’s one-to-many mapping. But for basic and embeddable types like the ones in the above schema, JPA has a simpler solution in the form of @ElementCollection.

Let’s create a project from scratch and learn how to use an ElementCollection to map the above schema in your application using Hibernate.

Creating the Application

If you have Spring Boot CLI installed, then simply type the following command in your terminal to generate the application –

spring init -n=jpa-element-collection-demo -d=web,jpa,mysql --package-name=com.example.jpa jpa-element-collection-demo

Alternatively, you can use the Spring Initializr web app to generate the application by following the instructions below –

  1. Open http://start.spring.io
  2. Enter Artifact as “jpa-element-collection-demo”
  3. Click Options dropdown to see all the options related to project metadata.
  4. Change Package Name to “com.example.jpa”
  5. Select Web, JPA and MySQL dependencies.
  6. Click Generate to generate and download the project.

Configuring MySQL database and Hibernate Log levels

Let’s first configure the database URL, username, password, hibernate log levels and other properties.

Open src/main/resources/application.properties and add the following properties to it –

# DATASOURCE (DataSourceAutoConfiguration & DataSourceProperties)
spring.datasource.url=jdbc:mysql://localhost:3306/jpa_element_collection_demo?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false
spring.datasource.username=root
spring.datasource.password=root

# Hibernate

# The SQL dialect makes Hibernate generate better SQL for the chosen database
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5InnoDBDialect

# Hibernate ddl auto (create, create-drop, validate, update)
spring.jpa.hibernate.ddl-auto = update

logging.level.org.hibernate.SQL=DEBUG
logging.level.org.hibernate.type=TRACE

Please change spring.datasource.username and spring.datasource.password as per your MySQL installation. Also, create a database named jpa_element_collection_demo before proceeding to the next section.

Defining the Entity classes

Next, we’ll define the entity classes that will be mapped to the database tables we saw earlier.

Before defining the User entity, let’s first define the Address type which will be embedded inside the User entity.

All the domain models will go inside the package com.example.jpa.model.

1. Address – Embeddable Type

package com.example.jpa.model;

import javax.persistence.Embeddable;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

@Embeddable
public class Address {
    @NotNull
    @Size(max = 100)
    private String addressLine1;

    @NotNull
    @Size(max = 100)
    private String addressLine2;

    @NotNull
    @Size(max = 100)
    private String city;

    @NotNull
    @Size(max = 100)
    private String state;

    @NotNull
    @Size(max = 100)
    private String country;

    @NotNull
    @Size(max = 100)
    private String zipCode;

    public Address() {

    }

    public Address(String addressLine1, String addressLine2, String city, 
                   String state, String country, String zipCode) {
        this.addressLine1 = addressLine1;
        this.addressLine2 = addressLine2;
        this.city = city;
        this.state = state;
        this.country = country;
        this.zipCode = zipCode;
    }

    // Getters and Setters (Omitted for brevity)
}

2. User Entity

Let’s now see how we can map a collection of basic types (phone numbers) and embeddable types (addresses) using Hibernate –

package com.example.jpa.model;

import javax.persistence.*;
import javax.validation.constraints.Email;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;
import java.util.HashSet;
import java.util.Set;

@Entity
@Table(name = "users")
public class User {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @NotNull
    @Size(max = 100)
    private String name;

    @NotNull
    @Email
    @Size(max = 100)
    @Column(unique = true)
    private String email;

    @ElementCollection
    @CollectionTable(name = "user_phone_numbers", joinColumns = @JoinColumn(name = "user_id"))
    @Column(name = "phone_number")
    private Set<String> phoneNumbers = new HashSet<>();

    @ElementCollection(fetch = FetchType.LAZY)
    @CollectionTable(name = "user_addresses", joinColumns = @JoinColumn(name = "user_id"))
    @AttributeOverrides({
            @AttributeOverride(name = "addressLine1", column = @Column(name = "house_number")),
            @AttributeOverride(name = "addressLine2", column = @Column(name = "street"))
    })
    private Set<Address> addresses = new HashSet<>();


    public User() {

    }

    public User(String name, String email, Set<String> phoneNumbers, Set<Address> addresses) {
        this.name = name;
        this.email = email;
        this.phoneNumbers = phoneNumbers;
        this.addresses = addresses;
    }

    // Getters and Setters (Omitted for brevity)
}

We use the @ElementCollection annotation to declare an element-collection mapping. All the records of the collection are stored in a separate table, whose configuration is specified using the @CollectionTable annotation.

The @CollectionTable annotation specifies the name of the table that stores all the records of the collection, and the JoinColumn that refers to the primary table.

Moreover, when you’re using an embeddable type with an element collection, you can use the @AttributeOverrides and @AttributeOverride annotations to override/customize the fields of the embeddable type.

Defining the Repository

Next, let’s create the repository for accessing the users’ data from the database. Create a new package called repository inside the com.example.jpa package and add the following interface inside it –

package com.example.jpa.repository;

import com.example.jpa.model.User;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface UserRepository extends JpaRepository<User, Long> {

}

Testing the ElementCollection Setup

Finally, let’s write the code to test our setup in the main class JpaElementCollectionDemoApplication.java –

package com.example.jpa;

import com.example.jpa.model.Address;
import com.example.jpa.model.User;
import com.example.jpa.repository.UserRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import java.util.HashSet;
import java.util.Set;

@SpringBootApplication
public class JpaElementCollectionDemoApplication implements CommandLineRunner {

    @Autowired
    private UserRepository userRepository;

    public static void main(String[] args) {
        SpringApplication.run(JpaElementCollectionDemoApplication.class, args);
    }

    @Override
    public void run(String... args) throws Exception {
        // Cleanup database tables.
        userRepository.deleteAll();

        // Insert a user with multiple phone numbers and addresses.
        Set<String> phoneNumbers = new HashSet<>();
        phoneNumbers.add("+91-9999999999");
        phoneNumbers.add("+91-9898989898");

        Set<Address> addresses = new HashSet<>();
        addresses.add(new Address("747", "Golf View Road", "Bangalore",
                "Karnataka", "India", "560008"));
        addresses.add(new Address("Plot No 44", "Electronic City", "Bangalore",
                "Karnataka", "India", "560001"));

        User user = new User("Yaniv Levy", "yaniv@fusebes.com",
                phoneNumbers, addresses);

        userRepository.save(user);
    }
}

The main class implements the CommandLineRunner interface and provides the implementation of its run() method.

The run() method is executed after application startup. In the run() method, we first clean up all the tables and then insert a new user with multiple phone numbers and addresses.

You can run the application by typing mvn spring-boot:run from the root directory of the project.

After running the application, go ahead and check all the tables in MySQL. The users table will have a new entry, and the user_phone_numbers and user_addresses tables will each have two new entries –

mysql> select * from users;
+----+-------------------+------------+
| id | email             | name       |
+----+-------------------+------------+
|  3 | yaniv@fusebes.com | Yaniv Levy |
+----+-------------------+------------+
1 row in set (0.01 sec)
mysql> select * from user_phone_numbers;
+---------+----------------+
| user_id | phone_number   |
+---------+----------------+
|       3 | +91-9898989898 |
|       3 | +91-9999999999 |
+---------+----------------+
2 rows in set (0.00 sec)
mysql> select * from user_addresses;
+---------+--------------+-----------------+-----------+---------+-----------+----------+
| user_id | house_number | street          | city      | country | state     | zip_code |
+---------+--------------+-----------------+-----------+---------+-----------+----------+
|       3 | 747          | Golf View Road  | Bangalore | India   | Karnataka | 560008   |
|       3 | Plot No 44   | Electronic City | Bangalore | India   | Karnataka | 560001   |
+---------+--------------+-----------------+-----------+---------+-----------+----------+
2 rows in set (0.00 sec)

Conclusion

That’s all in this article, folks. I hope you learned how to use Hibernate’s element collection with Spring Boot.

Thanks for reading guys. Happy Coding! 🙂

How to Create and Bootstrap a Simple Boot Application?
30 Mar 2021

  • To create a simple Spring Boot application, we start by creating a pom.xml file. Add spring-boot-starter-parent as the parent of the project, which is what makes it a Spring Boot application.

pom.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>myproject</artifactId>
    <version>0.0.1-SNAPSHOT</version>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.2.1.RELEASE</version>
    </parent>
</project>

This is the simplest possible Spring Boot application, and it can be packaged as a jar file. Now import the project into your IDE (optional).

  • Next, we can add the other starter dependencies that we are likely to need when developing a specific type of application. For example, for a web application, we add the spring-boot-starter-web dependency.

pom.xml

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>

  • Start adding the application business logic, views and domain data at this step. By default, Maven compiles sources from src/main/java, so create your application code inside this folder. In the very initial package, add the application bootstrap class annotated with @SpringBootApplication. This class will be used to run the application; its main() method acts as the application entry point.

MyApplication.java

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MyApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}

  • The SpringApplication class used in the above main() method bootstraps our application, starting Spring, which in turn starts the auto-configured Tomcat web server. MyApplication (the method argument) indicates the primary Spring component.

  • Since we used the spring-boot-starter-parent POM, we have a useful “run” goal that we can use to start the application. Type mvn spring-boot:run from the root project directory to start the application. You can verify it in the console logs as well as in the browser by hitting the URL localhost:8080.

Maximum Contiguous Subarray Sum
06 Feb 2021

Maximum Contiguous Subarray Sum problem statement

Given an array of integers, find the contiguous subarray (containing at least one number) which has the largest sum and return its sum.

Example:

Input: [-2,1,-3,4,-1,2,1,-5,4],
Output: 6
Explanation: The subarray [4,-1,2,1] has the largest sum = 6.

Maximum Contiguous Subarray Sum solution in Java

This problem can be solved using Kadane’s algorithm in O(n) time complexity.

The algorithm iterates over all the elements of the array (nums) and computes the maximum sum ending at every index (maxEndingHere). Here is how maxEndingHere is calculated:

  • Initialize maxEndingHere to nums[0]
  • Iterate through the array (1..n). For every iteration:
    • If maxEndingHere + nums[i] < nums[i], that means the previous max was negative and therefore we reset maxEndingHere to nums[i].
    • Otherwise, we set maxEndingHere = maxEndingHere + nums[i]
  • We also keep a variable maxSoFar which indicates the maximum sum found so far. And, with every iteration, we check if the current max (maxEndingHere) is greater than maxSoFar. If yes, then we update the value of maxSoFar.

Here is the complete solution:

import java.util.Scanner;

class MaximumSubarray {

  private static int findMaxSubarraySum(int[] nums) {
    int n = nums.length;
    int maxSoFar = nums[0];
    int maxEndingHere = nums[0];

    for(int i = 1; i < n; i++) {
      if(maxEndingHere + nums[i] < nums[i]) {
        maxEndingHere = nums[i];
      } else {
        maxEndingHere = maxEndingHere + nums[i];
      }

      if(maxEndingHere > maxSoFar) {
        maxSoFar = maxEndingHere;
      }
    }
    return maxSoFar;
  }

  public static void main(String[] args) {
    Scanner keyboard = new Scanner(System.in);
    int n = keyboard.nextInt();
    int[] nums = new int[n];
    for(int i = 0; i < n; i++) {
      nums[i] = keyboard.nextInt();
    }

    System.out.println(findMaxSubarraySum(nums));
  }
}
# Output
$ javac MaximumSubarray.java
$ java MaximumSubarray
9 -2 1 -3 4 -1 2 1 -5 4
6

Maximum Contiguous Subarray with Indices

It’s also possible to find out the start and end indices of the maximum contiguous subarray. Here is a modification of the above program which does that and prints the subarray –

import java.util.Scanner;

class MaximumSubarray {

  private static int[] findMaxSubarrayIndices(int[] nums) {
    int n = nums.length;
    int maxSoFar = nums[0];
    int maxEndingHere = nums[0];
    int currentStart = 0, maxStart = 0, maxEnd = 0;

    for(int i = 1; i < n; i++) {
      if(maxEndingHere + nums[i] < nums[i]) {
        maxEndingHere = nums[i];
        currentStart = i;
      } else {
        maxEndingHere = maxEndingHere + nums[i];
      }

      if(maxEndingHere > maxSoFar) {
        maxSoFar = maxEndingHere;
        maxStart = currentStart;
        maxEnd = i;
      }
    }

    return new int[]{maxStart, maxEnd};
  }

  public static void main(String[] args) {
    Scanner keyboard = new Scanner(System.in);
    int n = keyboard.nextInt();
    int[] nums = new int[n];
    for(int i = 0; i < n; i++) {
      nums[i] = keyboard.nextInt();
    }

    int[] indices = findMaxSubarrayIndices(nums);
    int sum = 0;
    System.out.print("Max contiguous subarray: ");
    for(int i = indices[0]; i <= indices[1]; i++) {
      System.out.print(nums[i] + " ");
      sum += nums[i];
    }
    System.out.println();
    System.out.println("Max contiguous subarray sum: " + sum);
  }
}
# Output
$ javac MaximumSubarray.java
$ java MaximumSubarray
9 -2 1 -3 4 -1 2 1 -5 4
Max contiguous subarray: 4 -1 2 1
Max contiguous subarray sum: 6

Apache Kafka For Beginners
31 Mar 2021

What is Kafka?

We use Apache Kafka to enable communication between producers and consumers using message-based topics. Apache Kafka is a fast, scalable, fault-tolerant, publish-subscribe messaging system that provides a platform for high-end, new-generation distributed applications. It also supports a large number of permanent or ad-hoc consumers. One of Kafka’s best features is that it is highly available, resilient to node failures, and supports automatic recovery. This makes Apache Kafka ideal for communication and integration between components of large-scale data systems in real-world deployments.

Moreover, this technology replaces conventional message brokers such as JMS and AMQP, with the ability to give higher throughput, reliability, and replication. The core abstractions Kafka offers are the Kafka Broker, the Kafka Producer, and the Kafka Consumer. A Kafka Broker is a node in the Kafka cluster whose job is to persist and replicate the data. A Kafka Producer pushes messages into a message container called a Kafka Topic, whereas a Kafka Consumer pulls messages from a Kafka Topic.

Before moving forward in this Kafka tutorial, let’s understand what the term Messaging System actually means in Kafka.

a. Messaging System in Kafka

We use a messaging system to transfer data from one application to another. As a result, applications can focus on the data itself without worrying about how to share it. Distributed messaging is based on the concept of reliable message queuing: messages are queued asynchronously between the client applications and the messaging system. Two types of messaging patterns are available, point-to-point and publish-subscribe (pub-sub), and most messaging follows pub-sub.

  • Point to Point Messaging System

Here, messages are persisted in a queue. A particular message can be consumed by at most one consumer, even if multiple consumers can read from the queue. As soon as a consumer reads a message in the queue, it disappears from that queue.

  • Publish-Subscribe Messaging System

Here, messages are persisted in a topic. In this system, Kafka consumers can subscribe to one or more topics and consume all the messages in those topics. Moreover, message producers are called publishers and message consumers are called subscribers.

History of Apache Kafka

Previously, LinkedIn was facing the problem of low-latency ingestion of huge amounts of data from the website into a lambda architecture able to process real-time events. Apache Kafka was developed as the solution in 2010, since none of the existing solutions addressed this drawback.

Technologies were available for batch processing, but the deployment details of those technologies were exposed to downstream users, and they were not well suited to real-time processing. Kafka was then made public in 2011.

Why Should we use Apache Kafka Cluster?

As we all know, Big Data involves an enormous volume of data, and with it come two main challenges: collecting the large volume of data and analyzing it. To overcome those challenges, we need a messaging system, and Apache Kafka has proved its utility here. There are numerous benefits of Apache Kafka, such as:

  • Tracking web activities by storing/sending the events for real-time processes.
  • Alerting and reporting the operational metrics.
  • Transforming data into the standard format.
  • Continuous processing of streaming data to the topics.

Therefore, because of its wide use, this technology gives tough competition to some of the most popular messaging applications like ActiveMQ, RabbitMQ, AWS, etc.

Kafka Tutorial — Audience

Professionals aspiring to make a career in Big Data analytics using the Apache Kafka messaging system should refer to this Kafka tutorial article. It will give you a complete understanding of Apache Kafka.

Kafka Tutorial — Prerequisites

You must have a good understanding of Java or Scala, distributed messaging systems, and the Linux environment before proceeding with this Apache Kafka tutorial.

Kafka Architecture

Below we are discussing four core APIs in this Apache Kafka tutorial:


a. Kafka Producer API

This Kafka Producer API permits an application to publish a stream of records to one or more Kafka topics.

b. Kafka Consumer API

The Kafka Consumer API lets an application subscribe to one or more topics and process the stream of records produced to them.

c. Kafka Streams API

The Kafka Streams API allows an application to act as a stream processor: it consumes an input stream from one or more topics and produces an output stream to one or more output topics, effectively transforming input streams into output streams.

d. Kafka Connector API

This Kafka Connector API allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems. For example, a connector to a relational database might capture every change to a table.
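As a minimal illustration of the first two APIs, here is a hedged sketch using the official kafka-clients Java library. The broker address localhost:9092, the topic user-events, and the group id demo-group are assumptions for this example, not values from the article:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ProducerConsumerSketch {

    public static void main(String[] args) {
        // Producer: publish one record to the assumed topic "user-events"
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            producer.send(new ProducerRecord<>("user-events", "user-1", "signed-up"));
        }

        // Consumer: subscribe to the topic and poll once for new records
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "demo-group");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("auto.offset.reset", "earliest");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList("user-events"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d, key=%s, value=%s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}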

Kafka Components

Kafka achieves messaging using the following components:

a. Kafka Topic

A topic is essentially a collection of messages; it is how Kafka stores and organizes messages across its system. Topics can be replicated (copied) and partitioned (divided). Visualize them as logs in which Kafka stores messages. This ability to replicate and partition topics is one of the factors that enable Kafka’s fault tolerance and scalability.
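Partition and replication settings are typically specified when a topic is created. For illustration, here is a hedged sketch using the AdminClient from the kafka-clients library; the topic name and counts are made-up values for a single-broker development setup:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class CreateTopicSketch {

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions for parallelism, replication factor 1 (no redundancy)
            NewTopic topic = new NewTopic("user-events", 3, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}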


b. Kafka Producer

It publishes messages to a Kafka topic.

c. Kafka Consumer

This component subscribes to one or more topics, and reads and processes messages from them.

d. Kafka Broker

Kafka Broker manages the storage of messages in the topic(s). If Kafka has more than one broker, that is what we call a Kafka cluster.

e. Kafka Zookeeper

Kafka uses ZooKeeper to provide the brokers with metadata about the processes running in the system and to facilitate health checking and broker leadership election.

Kafka Tutorial — Log Anatomy

In this Kafka tutorial, we view the log as the partitions. Basically, a data source writes messages to the log, and at any time one or more consumers can read from the log, each at an offset they select. The diagram below shows a log being written by the data source and being read by consumers at different offsets.

Apache Kafka Tutorial — Log Anatomy

Kafka Tutorial — Data Log

Kafka retains messages for a considerable, configurable amount of time, and consumers can read them at their convenience. However, if Kafka is configured to keep messages for 24 hours and a consumer is down for longer than 24 hours, the consumer will lose messages. If the downtime on the part of the consumer is only 60 minutes, messages can be read from the last known offset. Kafka doesn’t keep state on what consumers are reading from a topic.
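To illustrate reading from a last known offset, here is a hedged sketch with the Java consumer; the topic name, partition number, and stored offset are assumptions, since a real application would persist the offset itself:

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.util.Collections;
import java.util.Properties;

public class ResumeFromOffsetSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "demo-group");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        long lastKnownOffset = 42L; // illustrative value stored by the application
        TopicPartition partition = new TopicPartition("user-events", 0);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Manual assignment (instead of subscribe) lets us seek explicitly
            consumer.assign(Collections.singletonList(partition));
            consumer.seek(partition, lastKnownOffset);
            // Subsequent poll() calls now resume from offset 42
        }
    }
}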

Kafka Tutorial — Partition in Kafka

Every Kafka broker holds a few partitions, and each partition can be either a leader or a replica of a topic. The leader is responsible for all writes and reads to a topic and for updating the replicas with new data. If the leader fails, a replica takes over as the new leader.


Importance of Java in Apache Kafka

Apache Kafka is written in pure Java, and Kafka’s native API is Java. However, many other languages like C++, Python, .NET, and Go also support Kafka. Still, Java is the one platform where no third-party library is needed, and writing code in languages other than Java incurs a little overhead.

In addition, we can use the Java language if we need the high processing rates that come standard with Kafka. Java also provides good community support for Kafka consumer clients. Hence, Java is the right choice for implementing Kafka.

Kafka Use Cases

There are several use cases of Kafka that show why we actually use Apache Kafka.

  • Messaging

Kafka works well as a replacement for a more traditional message broker. It has better throughput, built-in partitioning, replication, and fault tolerance, which makes it a good solution for large-scale message-processing applications.

  • Metrics

Kafka is a good fit for operational monitoring data. This involves aggregating statistics from distributed applications to produce centralized feeds of operational data.

  • Event Sourcing

Since it supports very large stored log data, Kafka is an excellent backend for event-sourcing applications.

Kafka Tutorial — Comparisons in Kafka

Many applications offer the same functionality as Kafka, like ActiveMQ, RabbitMQ, Apache Flume, Storm, and Spark. Then why should you go for Apache Kafka instead of the others?

Let’s see the comparisons below:

a. Apache Kafka vs Apache Flume


i. Types of tool

Apache Kafka– It is a general-purpose tool for multiple producers and consumers.

Apache Flume– It is a special-purpose tool for specific applications.

ii. Replication feature

Apache Kafka– It replicates events using ingest pipelines.

Apache Flume– It does not replicate events.

b. RabbitMQ vs Apache Kafka

One among the foremost Apache Kafka alternatives is RabbitMQ. So, let’s see how they differ from one another:


i. Features

Apache Kafka– Basically, Kafka is distributed, and the data is shared and replicated with guaranteed durability and availability.

RabbitMQ– It offers relatively less support for these features.

ii. Performance rate

Apache Kafka — Its performance rate is high to the tune of 100,000 messages/second.

RabbitMQ — Whereas, the performance rate of RabbitMQ is around 20,000 messages/second.

iii. Processing

Apache Kafka — It allows reliable, distributed log processing, with stream-processing semantics built into Kafka Streams.

RabbitMQ — Here, the consumer is just FIFO-based, reading from the HEAD and processing messages one by one.

c. Traditional queuing systems vs Apache Kafka


i. Messages Retaining

Traditional queuing systems — Most queuing systems remove a message once it has been processed, typically from the end of the queue.

Apache Kafka — Here, messages persist even after being processed. They don’t get removed as consumers receive them.

ii. Logic-based processing

Traditional queuing systems — They do not allow processing logic based on similar messages or events.

Apache Kafka — It allows processing logic based on similar messages or events.

So, this was all about the Apache Kafka tutorial. Hope you liked our explanation.

Conclusion: Kafka Tutorial

Hence, in this Kafka tutorial, we have seen the whole concept of Apache Kafka and what Kafka is. Moreover, we discussed the Kafka components, use cases, and Kafka architecture. At last, we compared Kafka with other messaging tools. If you have any query regarding this Kafka tutorial, feel free to ask in the comments section. Also, keep visiting Data Flair for more knowledgeable articles on Apache Kafka.