
Scaffolding your Spring Boot Application
06 Feb 2021

Scaffolding, in modern web apps, has been made popular by tools like Yeoman. These tools generate a standard prototype for your project with everything you need to get up and running, including the directory structure, the build system, and the project dependencies.

Spring Boot has its own scaffolding tool. There are two ways to quickly bootstrap your Spring Boot application –

  1. Using the Spring Initializr web app hosted at http://start.spring.io
  2. Using Spring Boot CLI

Spring Initializr

Spring Initializr is a web tool that helps you quickly bootstrap Spring Boot projects.

The tool is very simple to use. You just need to enter your project details and the required dependencies. The tool will generate and download a zip file containing a standard Spring Boot project based on the details you have entered.

Spring Initializr Web Page

Follow these steps to generate a new project using the Spring Initializr web app –

Step 1. Go to http://start.spring.io.

Step 2. Select Maven Project or Gradle Project in the Project section. Maven Project is the default.

Step 3. Select the language of your choice: Java, Kotlin, or Groovy. Java is the default.

Step 4. Select the Spring Boot version. The current stable release is the default.

Step 5. Enter group and artifact details in Project metadata.

Step 5.1. Click Options to customize more specific details about your project like Name, Description, Package name, etc.

Step 6. Search for dependencies in the Dependencies section and press enter to add them to your project. You can also click the menu icon beside the search icon to see the list of all the available dependencies and select the ones that you want to include in your project.

Step 7. Once you have selected all the dependencies and entered the project details, click Generate to generate your project.

The tool will generate and download a zip file containing your project's prototype. The generated prototype will have all the project directories, the main application class, and the pom.xml or build.gradle file with all the dependencies you selected.

Just unzip the file, import it into an IDE of your choice, and start working.
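For reference, the generated main application class will look something like this (the package and class name below are illustrative – Spring Initializr derives them from your project metadata):

package com.example.demo;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}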

I created a Maven project using Spring Initializr while writing this post, and here is what I got –

Spring Boot Project Scaffolding Directory Structure

You see! I've got everything that is needed to get started: a standard Spring Boot project structure containing all the required directories, along with a pom.xml file containing all the dependencies.

You can go to the project’s root directory and type the following command in your terminal to run the application.

$ mvn spring-boot:run

If you have generated a gradle project, you can run it using –

$ gradle bootRun

Spring Boot CLI

Spring Boot CLI is a command line tool that helps in quickly prototyping with Spring. You can also run Groovy scripts using this tool.

Check out the official Spring Boot documentation for instructions on how to install Spring Boot CLI for your operating system. Once installed, type the following command to see what you can do with Spring Boot CLI –

$ spring --help

usage: spring [--help] [--version]
       <command> [<args>]

Available commands are:

  run [options] <files> [--] [args]
    Run a spring groovy script

  grab
    Download a spring groovy scripts dependencies to ./repository

  jar [options] <jar-name> <files>
    Create a self-contained executable jar file from a Spring Groovy script

  war [options] <war-name> <files>
    Create a self-contained executable war file from a Spring Groovy script

  install [options] <coordinates>
    Install dependencies to the lib/ext directory

  uninstall [options] <coordinates>
    Uninstall dependencies from the lib/ext directory

  init [options] [location]
    Initialize a new project using Spring Initializr (start.spring.io)

  encodepassword [options] <password to encode>
    Encode a password for use with Spring Security

  shell
    Start a nested shell

Common options:

  --debug Verbose mode
    Print additional status information for the command you are running


See 'spring help <command>' for more information on a specific command.

In this post, we’ll look at spring init command to generate a new spring boot project. The spring init command internally uses Spring Initializr web app to generate and download the project.

Open your terminal and type –

$ spring help init

to get all the information about init command.

Following are a few examples of how to use the command –

To list all the capabilities of the service:
    $ spring init --list

To create a default project:
    $ spring init

To create a web my-app.zip:
    $ spring init -d=web my-app.zip

To create a web/data-jpa gradle project unpacked:
    $ spring init -d=web,jpa --build=gradle my-dir

Let's generate a Maven web application with JPA, MySQL, and Thymeleaf dependencies –

$ spring init -n=blog-sample -d=web,data-jpa,mysql,devtools,thymeleaf -g=com.test -a=blog-sample

Using service at https://start.spring.io
Content saved to 'blog-sample.zip'

This command will create a new spring boot project with all the specified settings.

Quick and simple, isn't it?

Conclusion

Congratulations folks! We explored two ways of quickly bootstrapping Spring Boot applications. I always use one of these two methods whenever I start working on a new Spring Boot app. It saves me a lot of time.

Java Optional Tutorial with Examples
03 Mar 2021

If you’re a Java programmer, then you must have heard about or experienced NullPointerExceptions in your programs.

NullPointerExceptions are runtime exceptions thrown by the JVM at runtime. Null checks are often overlooked by developers, causing serious bugs in code.

Java 8 introduced a new type called Optional<T> to help developers deal with null values properly.

The concept of Optional is not new, and other programming languages have similar constructs. For example, Scala has Option[T], and Haskell has the Maybe type.

In this blog post, I'll explain Java 8's Optional type and show you how to use it with simple examples.

What is Optional?

Optional is a container type for a value which may be absent. Confused? Let me explain.

Consider the following function which takes a user id, fetches the user’s details with the given id from the database and returns it –

User findUserById(String userId) { ... };

If userId is not present in the database then the above function returns null. Now, let’s consider the following code written by a client –

User user = findUserById("667290");
System.out.println("User's Name = " + user.getName());

A common NullPointerException situation, right? The developer forgot to add the null check. If userId is not present in the database, then the above code snippet will throw a NullPointerException.

Now, let’s understand how Optional will help you mitigate the risk of running into NullPointerException here –

Optional<User> findUserById(String userId) { ... };

By returning Optional<User> from the function, we have made it clear to the clients of this function that there might not be a User with the given userId. Now the clients of this function are explicitly forced to handle this fact.

The client code can now be written as –

Optional<User> optional = findUserById("667290");

optional.ifPresent(user -> {
    System.out.println("User's name = " + user.getName());
});

Once you have an Optional object, you can use various utility methods to work with the Optional. The ifPresent() method in the above example calls the supplied lambda expression if the user is present, otherwise it does nothing.

Well! You get the idea, right? The client is now forced by the type system to deal with the Optional in their code.

Creating an Optional object

1. Create an empty Optional

An empty Optional object describes the absence of a value.

Optional<User> user = Optional.empty();

2. Create an Optional with a non-null value –

User user = new User("667290", "Rajeev Kumar Singh");
Optional<User> userOptional = Optional.of(user);

If the argument supplied to Optional.of() is null, then it will throw a NullPointerException immediately and the Optional object won’t be created.

3. Create an Optional with a value which may or may not be null –

Optional<User> userOptional = Optional.ofNullable(user);

If the argument passed to Optional.ofNullable() is non-null, then it returns an Optional containing the specified value, otherwise it returns an empty Optional.
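For instance, assuming the User class from the earlier examples, the following snippet creates an empty Optional instead of throwing –

User user = null; // perhaps the result of a lookup that found nothing
Optional<User> userOptional = Optional.ofNullable(user);
System.out.println(userOptional.isPresent()); // false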

Checking the presence of a value

1. isPresent()

isPresent() method returns true if the Optional contains a non-null value, otherwise it returns false.

if(optional.isPresent()) {
    // value is present inside Optional
    System.out.println("Value found - " + optional.get());
} else {
    // value is absent
    System.out.println("Optional is empty");
}	

2. ifPresent()

ifPresent() method allows you to pass a Consumer function that is executed if a value is present inside the Optional object.

It does nothing if the Optional is empty.

optional.ifPresent(value -> {
    System.out.println("Value found - " + value);
});

Note that I have supplied a lambda expression to the ifPresent() method. This makes the code more readable and concise.

Retrieving the value using get() method

Optional’s get() method returns a value if it is present, otherwise it throws NoSuchElementException.

User user = optional.get();

You should avoid calling get() on your Optionals without first checking whether a value is present, because it throws an exception if the value is absent.

Returning default value using orElse()

orElse() is great when you want to return a default value if the Optional is empty. Consider the following example –

// return "Unknown User" if user is null
User finalUser = (user != null) ? user : new User("0", "Unknown User");

Now, let’s see how we can write the above logic using Optional’s orElse() construct –

// return "Unknown User" if user is null
User finalUser = optionalUser.orElse(new User("0", "Unknown User"));

Returning default value using orElseGet()

Unlike orElse(), which returns a default value directly if the Optional is empty, orElseGet() allows you to pass a Supplier function which is invoked when the Optional is empty. The result of the Supplier function becomes the default value of the Optional –

User finalUser = optionalUser.orElseGet(() -> {
    return new User("0", "Unknown User");
});
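The difference matters when constructing the default value is expensive: orElse() always evaluates its argument, even when the Optional contains a value, whereas orElseGet() invokes the Supplier only when the Optional is empty. A quick sketch (the createDefaultUser() helper below is hypothetical) –

User u1 = optionalUser.orElse(createDefaultUser());          // createDefaultUser() runs even if a user is present
User u2 = optionalUser.orElseGet(() -> createDefaultUser()); // createDefaultUser() runs only if the Optional is empty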

Throw an exception on absence of a value

You can use orElseThrow() to throw an exception if the Optional is empty. A typical scenario in which this might be useful is returning a custom ResourceNotFoundException from your REST API if the object with the specified request parameters does not exist.

@GetMapping("/users/{userId}")
public User getUser(@PathVariable("userId") String userId) {
    return userRepository.findByUserId(userId).orElseThrow(
            () -> new ResourceNotFoundException("User not found with userId " + userId));
}

Filtering values using filter() method

Let's say you have an Optional object of User. You want to check its gender and call a function if it's MALE. Here is how you would do it the old-school way –

if(user != null && user.getGender().equalsIgnoreCase("MALE")) {
    // call a function
}

Now, let’s use Optional along with filter to achieve the same –

userOptional.filter(user -> user.getGender().equalsIgnoreCase("MALE"))
.ifPresent(user -> {
    // Your function
});

The filter() method takes a predicate as an argument. If the Optional contains a non-null value and the value matches the given predicate, then filter() method returns an Optional with that value, otherwise it returns an empty Optional.

So, the function inside ifPresent() in the above example will be called if and only if the Optional contains a user and user is a MALE.

Extracting and transforming values using map()

Let’s say that you want to get the address of a user if it is present and print it if the user is from India.

Considering the following getAddress() method inside User class –

Address getAddress() {
    return this.address;
}

Here is how you would achieve the desired result –

if(user != null) {
    Address address = user.getAddress();
    if(address != null && address.getCountry().equalsIgnoreCase("India")) {
	    System.out.println("User belongs to India");
    }
}

Now, let’s see how we can get the same result using map() method –

userOptional.map(User::getAddress)
.filter(address -> address.getCountry().equalsIgnoreCase("India"))
.ifPresent(address -> {
    System.out.println("User belongs to India");
});

You see how concise and readable the above code is? Let's break down the above code snippet and understand it in detail –

// Extract the user's address using the map() method.
Optional<Address> addressOptional = userOptional.map(User::getAddress);

// Filter addresses from India
Optional<Address> indianAddressOptional = addressOptional.filter(address -> address.getCountry().equalsIgnoreCase("India"));

// Print, if the country is India
indianAddressOptional.ifPresent(address -> {
    System.out.println("User belongs to India");
});

In the above example, the map() method returns an empty Optional in the following cases – 1. the user is absent in userOptional, or 2. the user is present but getAddress() returns null.

Otherwise, it returns an Optional<Address> containing the user's address.

Cascading Optionals using flatMap()

Let's consider the above map() example again. You might ask: if the user's address can be null, then why the heck aren't we returning an Optional<Address> instead of a plain Address from the getAddress() method?

And you're right! Let's correct that. Assume now that getAddress() returns Optional<Address>. Do you think the above code will still work?

The answer is no! The problem is the following line –

Optional<Address> addressOptional = userOptional.map(User::getAddress);

Since getAddress() returns Optional<Address>, the return type of userOptional.map() will be Optional<Optional<Address>>

Optional<Optional<Address>> addressOptional = userOptional.map(User::getAddress);

Oops! We certainly don’t want that nested Optional. Let’s use flatMap() to correct that –

Optional<Address> addressOptional = userOptional.flatMap(User::getAddress);

Cool! So, the rule of thumb here: if the mapping function returns an Optional, use flatMap() instead of map() to get the flattened result from your Optional.

Conclusion

Thank you for reading. If you Optional<Liked> this blog post, give an Optional<HighFive> in the comment section below.

Spring Boot Database Migrations with Flyway
05 Mar 2021

Nothing is perfect and complete when it is built the first time. The database schema of your brand new application is no exception. It is bound to change over time when you try to fit new requirements or add new features.

Flyway is a tool that lets you version control incremental changes to your database so that you can migrate it to a new version easily and confidently.

In this article, you’ll learn how to use Flyway in Spring Boot applications to manage changes to your database.

We’ll build a simple Spring Boot application with MySQL Database & Spring Data JPA, and learn how to integrate Flyway in the app.

Let’s get started!

Creating the Application

Let’s use Spring Boot CLI to generate the application. Fire up your terminal and type the following command to generate the project.

spring init --name=flyway-demo --dependencies=web,mysql,data-jpa,flyway flyway-demo

Once the project is generated, import it into your favorite IDE. The directory structure of the application would look like this –

Spring Boot Flyway Database Migration Example Directory Structure

Configuring MySQL and Hibernate

First create a new MySQL database named flyway_demo.

Then open the src/main/resources/application.properties file and add the following properties to it –

## Spring DATASOURCE (DataSourceAutoConfiguration & DataSourceProperties)
spring.datasource.url = jdbc:mysql://localhost:3306/flyway_demo?useSSL=false
spring.datasource.username = root
spring.datasource.password = root

## Hibernate Properties
# The SQL dialect makes Hibernate generate better SQL for the chosen database
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5InnoDBDialect

## This is important
# Hibernate ddl auto (create, create-drop, validate, update)
spring.jpa.hibernate.ddl-auto = validate

Please change spring.datasource.username and spring.datasource.password as per your MySQL installation.

The property spring.jpa.hibernate.ddl-auto is important. With the value validate, Hibernate validates the database schema against the entities that you have created in the application and throws an error if the schema doesn't match the entity specifications.

Creating a Domain Entity

Let's create a simple entity in our application so that we can create and test a Flyway migration for it.

First, create a new package called domain inside the com.example.flywaydemo package. Then, create the following User.java file inside the com.example.flywaydemo.domain package –

package com.example.flywaydemo.domain;

import javax.persistence.*;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.Size;

@Entity
@Table(name = "users")
public class User {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @NotBlank
    @Column(unique = true)
    @Size(min = 1, max = 100)
    private String username;

    @NotBlank
    @Size(max = 50)
    private String firstName;

    @Size(max = 50)
    private String lastName;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getUsername() {
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
}

Creating a Flyway Migration script

Flyway reads database migration scripts from the classpath:db/migration folder by default.

All the migration scripts must follow a particular naming convention – V<VERSION_NUMBER>__<NAME>.sql. Check out the official Flyway documentation to learn more about the naming convention.
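For example, the following file names (the descriptions after the version number are illustrative) all follow the default convention –

V1__init.sql
V2__add_last_login_column.sql
V3__seed_test_data.sql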

Let's create our very first database migration script. First, create the db/migration folder inside the src/main/resources directory –

mkdir -p src/main/resources/db/migration

Now, create a new file named V1__init.sql inside the src/main/resources/db/migration directory and add the following SQL script –

CREATE TABLE users (
  id bigint(20) NOT NULL AUTO_INCREMENT,
  username varchar(100) NOT NULL,
  first_name varchar(50) NOT NULL,
  last_name varchar(50) DEFAULT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY UK_username (username)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Running the Application

When you run the application, Flyway will automatically check the current database version and apply any pending migrations.

Run the app by typing the following command in terminal –

mvn spring-boot:run

When running the app for the first time, you’ll see the following logs pertaining to Flyway, which says that it has migrated the schema to version 1 – init.

Flyway Migration Logs

How does Flyway manage migrations?

Flyway creates a table called flyway_schema_history when it runs the migration for the first time and stores all the meta-data required for versioning the migrations in this table.

mysql> show tables;
+-----------------------+
| Tables_in_flyway_demo |
+-----------------------+
| flyway_schema_history |
| users                 |
+-----------------------+
2 rows in set (0.00 sec)

You can check the contents of flyway_schema_history table in your mysql database –

mysql> select * from flyway_schema_history;
+----------------+---------+-------------+------+--------------+------------+--------------+---------------------+----------------+---------+
| installed_rank | version | description | type | script       | checksum   | installed_by | installed_on        | execution_time | success |
+----------------+---------+-------------+------+--------------+------------+--------------+---------------------+----------------+---------+
|              1 | 1       | init        | SQL  | V1__init.sql | 1952043475 | root         | 2018-03-06 11:25:58 |             16 |       1 |
+----------------+---------+-------------+------+--------------+------------+--------------+---------------------+----------------+---------+
1 row in set (0.00 sec)

It stores the current migration’s version, script file name, and checksum among other details in the table.

When you run the app, Flyway first validates the already applied migration scripts by calculating their checksum and matching it with the checksum stored in the meta-data table.

So, if you change V1__init.sql after the migration is applied, Flyway will throw an error saying that the checksum doesn't match.

Therefore, for doing any changes to the schema, you need to create another migration script file with a new version V2__<NAME>.sql and let Flyway apply that when you run the app.

Adding multiple migrations

Let's create another migration script and see how Flyway migrates the database to the new version automatically.

Create a new script V2__testdata.sql inside src/main/resources/db/migration with the following contents –

INSERT INTO users(username, first_name, last_name) 
VALUES('fusebes', 'Yaniv', 'Levy');
INSERT INTO users(username, first_name, last_name) VALUES('flywaytest', 'Flyway', 'Test');

If you run the app now, Flyway will detect the new migration script and migrate the database to this version.

Open mysql and check the users table. You’ll see that the above two entries are automatically created in the users table –

mysql> select * from users;
+----+------------+-----------+------------+
| id | first_name | last_name | username   |
+----+------------+-----------+------------+
|  4 | Yaniv      | Levy      | fusebes    |
|  5 | Flyway     | Test      | flywaytest |
+----+------------+-----------+------------+
2 rows in set (0.01 sec)

Also, Flyway stores this new schema version in its meta-data table –

mysql> select * from flyway_schema_history;
+----------------+---------+-------------+------+------------------+-------------+--------------+---------------------+----------------+---------+
| installed_rank | version | description | type | script           | checksum    | installed_by | installed_on        | execution_time | success |
+----------------+---------+-------------+------+------------------+-------------+--------------+---------------------+----------------+---------+
|              1 | 1       | init        | SQL  | V1__init.sql     |  1952043475 | root         | 2018-03-06 11:25:58 |             16 |       1 |
|              2 | 2       | testdata    | SQL  | V2__testdata.sql | -1926058189 | root         | 2018-03-06 11:25:58 |              6 |       1 |
+----------------+---------+-------------+------+------------------+-------------+--------------+---------------------+----------------+---------+
2 rows in set (0.00 sec)

Conclusion

That’s all folks! In this article, You learned how to integrate Flyway in a Spring Boot application for versioning database changes.

Thank you for reading. See you in the next blog post.

Spring Boot Quartz Scheduler Example: Building an Email Scheduling App
06 Feb 2021

Quartz is an open source Java library for scheduling Jobs. It has a very rich set of features including but not limited to persistent Jobs, transactions, and clustering.

You can schedule Jobs to be executed at a certain time of day, or periodically at a certain interval, and much more. Quartz provides a fluent API for creating jobs and scheduling them.

Quartz Jobs can be persisted into a database, or a cache, or in-memory.

In this article, you’ll learn how to schedule Jobs in spring boot using Quartz Scheduler by building a simple Email Scheduling application. The application will have a Rest API that allows clients to schedule Emails at a later time.

We’ll use MySQL to persist all the jobs and other job-related data.

Creating the Application

Let’s bootstrap the application using Spring Boot CLI. Open your terminal and type the following command –

spring init -d=web,jpa,mysql,quartz,mail -n=quartz-demo quartz-demo

The above command will generate the project with all the specified dependencies in a folder named quartz-demo.

Note that you can also use the Spring Initializr web tool to bootstrap the project by following the instructions below –

  • Open http://start.spring.io
  • Enter quartz-demo in the Artifact field.
  • Add Web, JPA, MySQL, Quartz, and Mail in the dependencies section.
  • Click Generate to generate and download the project.

That’s it! You may now import the project into your favorite IDE and start working.

Directory Structure

Following is the directory structure of the complete application for your reference. We’ll create all the required folders and classes one-by-one in this article –

Spring Boot Quartz Scheduler Email Scheduling App directory structure

Configuring MySQL database, Quartz Scheduler, and Mail Sender

Let’s configure Quartz Scheduler, MySQL database, and Spring Mail. MySQL database will be used for storing Quartz Jobs, and Spring Mail will be used to send emails.

Open src/main/resources/application.properties file and add the following properties –

## Spring DATASOURCE (DataSourceAutoConfiguration & DataSourceProperties)
spring.datasource.url = jdbc:mysql://localhost:3306/quartz_demo?useSSL=false
spring.datasource.username = root
spring.datasource.password = password

## QuartzProperties
spring.quartz.job-store-type = jdbc
spring.quartz.properties.org.quartz.threadPool.threadCount = 5

## MailProperties
spring.mail.host=smtp.gmail.com
spring.mail.port=587
spring.mail.username=testme@gmail.com
spring.mail.password=

spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.starttls.enable=true

You’ll need to create a MySQL database named quartz_demo. Also, don’t forget to change the spring.datasource.username and spring.datasource.password properties as per your MySQL installation.

We'll be using Gmail's SMTP server for sending emails. Please add your password in the spring.mail.password property. You may also pass this property at runtime as a command line argument, or set it in an environment variable.

Note that Gmail's SMTP access is disabled by default. To allow this app to send emails using your Gmail account, you'll need to allow access in your Google account's security settings, for example by enabling access for less secure apps or by creating an app password.

All the Quartz-specific properties are prefixed with spring.quartz. You can refer to the complete set of configurations supported by Quartz in its official documentation. To set Quartz scheduler configurations directly, use the format spring.quartz.properties.<quartz_configuration_name>=<value>.

Creating Quartz Tables

Since we have configured Quartz to store Jobs in the database, we’ll need to create the tables that Quartz uses to store Jobs and other job-related meta-data.

Please download the Quartz table-creation SQL script for MySQL (it ships with the official Quartz distribution) and run it in your MySQL database to create all the Quartz-specific tables.

After downloading the above SQL script, login to MySQL and run the script like this –

mysql> source <PATH_TO_QUARTZ_TABLES.sql> 

Overview of Quartz Scheduler’s APIs and Terminologies

1. Scheduler

The Primary API for scheduling, unscheduling, adding, and removing Jobs.

2. Job

The interface to be implemented by classes that represent a ‘job’ in Quartz. It has a single method called execute() where you write the work that needs to be performed by the Job.

3. JobDetail

A JobDetail represents an instance of a Job. It also contains additional data in the form of a JobDataMap that is passed to the Job when it is executed.

Every JobDetail is identified by a JobKey that consists of a name and a group. The name must be unique within a group.

4. Trigger

A Trigger, as the name suggests, defines the schedule at which a given Job will be executed. A Job can have many Triggers, but a Trigger can only be associated with one Job.

Every Trigger is identified by a TriggerKey that comprises a name and a group. The name must be unique within a group.

Just like JobDetails, Triggers can also send parameters/data to the Job.

5. JobBuilder

JobBuilder is a fluent builder-style API to construct JobDetail instances.

6. TriggerBuilder

TriggerBuilder is used to instantiate Triggers.
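To see how these pieces fit together, here is a minimal sketch (the HelloJob class and the 10-second schedule are illustrative, not part of the app we build below) that wires a JobDetail and a Trigger into the Scheduler –

// HelloJob is an illustrative class implementing org.quartz.Job
JobDetail jobDetail = JobBuilder.newJob(HelloJob.class)
        .withIdentity("hello-job", "examples")      // JobKey: name + group
        .build();

Trigger trigger = TriggerBuilder.newTrigger()
        .forJob(jobDetail)
        .withIdentity("hello-trigger", "examples")  // TriggerKey: name + group
        .startNow()
        .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                .withIntervalInSeconds(10)
                .repeatForever())
        .build();

scheduler.scheduleJob(jobDetail, trigger);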

Creating a REST API to schedule Email Jobs dynamically in Quartz

All right! Let’s now create a REST API to schedule email Jobs in Quartz dynamically. All the Jobs will be persisted in the database and executed at the specified schedule.

Before writing the API, Let’s create the DTO classes that will be used as request and response payloads for the scheduleEmail API –

ScheduleEmailRequest

package com.example.quartzdemo.payload;

import javax.validation.constraints.Email;
import javax.validation.constraints.NotEmpty;
import javax.validation.constraints.NotNull;
import java.time.LocalDateTime;
import java.time.ZoneId;

public class ScheduleEmailRequest {
    @Email
    @NotEmpty
    private String email;

    @NotEmpty
    private String subject;

    @NotEmpty
    private String body;

    @NotNull
    private LocalDateTime dateTime;

    @NotNull
    private ZoneId timeZone;
	
	// Getters and Setters (Omitted for brevity)
}

ScheduleEmailResponse

package com.example.quartzdemo.payload;

import com.fasterxml.jackson.annotation.JsonInclude;

@JsonInclude(JsonInclude.Include.NON_NULL)
public class ScheduleEmailResponse {
    private boolean success;
    private String jobId;
    private String jobGroup;
    private String message;

    public ScheduleEmailResponse(boolean success, String message) {
        this.success = success;
        this.message = message;
    }

    public ScheduleEmailResponse(boolean success, String jobId, String jobGroup, String message) {
        this.success = success;
        this.jobId = jobId;
        this.jobGroup = jobGroup;
        this.message = message;
    }

    // Getters and Setters (Omitted for brevity)
}

ScheduleEmail Rest API

The following controller defines the /scheduleEmail REST API that schedules email Jobs in Quartz –

package com.example.quartzdemo.controller;

import com.example.quartzdemo.job.EmailJob;
import com.example.quartzdemo.payload.ScheduleEmailRequest;
import com.example.quartzdemo.payload.ScheduleEmailResponse;
import org.quartz.*;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

import javax.validation.Valid;
import java.time.ZonedDateTime;
import java.util.Date;
import java.util.UUID;

@RestController
public class EmailJobSchedulerController {
    private static final Logger logger = LoggerFactory.getLogger(EmailJobSchedulerController.class);

    @Autowired
    private Scheduler scheduler;

    @PostMapping("/scheduleEmail")
    public ResponseEntity<ScheduleEmailResponse> scheduleEmail(@Valid @RequestBody ScheduleEmailRequest scheduleEmailRequest) {
        try {
            ZonedDateTime dateTime = ZonedDateTime.of(scheduleEmailRequest.getDateTime(), scheduleEmailRequest.getTimeZone());
            if(dateTime.isBefore(ZonedDateTime.now())) {
                ScheduleEmailResponse scheduleEmailResponse = new ScheduleEmailResponse(false,
                        "dateTime must be after current time");
                return ResponseEntity.badRequest().body(scheduleEmailResponse);
            }

            JobDetail jobDetail = buildJobDetail(scheduleEmailRequest);
            Trigger trigger = buildJobTrigger(jobDetail, dateTime);
            scheduler.scheduleJob(jobDetail, trigger);

            ScheduleEmailResponse scheduleEmailResponse = new ScheduleEmailResponse(true,
                    jobDetail.getKey().getName(), jobDetail.getKey().getGroup(), "Email Scheduled Successfully!");
            return ResponseEntity.ok(scheduleEmailResponse);
        } catch (SchedulerException ex) {
            logger.error("Error scheduling email", ex);

            ScheduleEmailResponse scheduleEmailResponse = new ScheduleEmailResponse(false,
                    "Error scheduling email. Please try later!");
            return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(scheduleEmailResponse);
        }
    }

    private JobDetail buildJobDetail(ScheduleEmailRequest scheduleEmailRequest) {
        JobDataMap jobDataMap = new JobDataMap();

        jobDataMap.put("email", scheduleEmailRequest.getEmail());
        jobDataMap.put("subject", scheduleEmailRequest.getSubject());
        jobDataMap.put("body", scheduleEmailRequest.getBody());

        return JobBuilder.newJob(EmailJob.class)
                .withIdentity(UUID.randomUUID().toString(), "email-jobs")
                .withDescription("Send Email Job")
                .usingJobData(jobDataMap)
                .storeDurably()
                .build();
    }

    private Trigger buildJobTrigger(JobDetail jobDetail, ZonedDateTime startAt) {
        return TriggerBuilder.newTrigger()
                .forJob(jobDetail)
                .withIdentity(jobDetail.getKey().getName(), "email-triggers")
                .withDescription("Send Email Trigger")
                .startAt(Date.from(startAt.toInstant()))
                .withSchedule(SimpleScheduleBuilder.simpleSchedule().withMisfireHandlingInstructionFireNow())
                .build();
    }
}

Spring Boot has built-in support for Quartz. It automatically creates a Quartz Scheduler bean with the configuration that we supplied in the application.properties file. That’s why we could directly inject the Scheduler in the controller.

In the /scheduleEmail API,

  • We first validate the request body.
  • Then, we build a JobDetail instance with a JobDataMap that contains the recipient email, subject, and body. The JobDetail that we create is of type EmailJob. We'll define EmailJob in the next section.
  • Next, we build a Trigger instance that defines when the Job should be executed.
  • Finally, we schedule the Job using the scheduler.scheduleJob() API.

Creating the Quartz Job to send emails

Let’s now define the Job that sends the actual emails. Spring Boot provides a wrapper around Quartz Scheduler’s Job interface called QuartzJobBean. This allows you to create Quartz Jobs as Spring beans where you can autowire other beans.

Let’s create our EmailJob by extending QuartzJobBean –

package com.example.quartzdemo.job;

import org.quartz.JobDataMap;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.autoconfigure.mail.MailProperties;
import org.springframework.mail.javamail.JavaMailSender;
import org.springframework.mail.javamail.MimeMessageHelper;
import org.springframework.scheduling.quartz.QuartzJobBean;
import org.springframework.stereotype.Component;

import javax.mail.MessagingException;
import javax.mail.internet.MimeMessage;
import java.nio.charset.StandardCharsets;

@Component
public class EmailJob extends QuartzJobBean {
    private static final Logger logger = LoggerFactory.getLogger(EmailJob.class);

    @Autowired
    private JavaMailSender mailSender;

    @Autowired
    private MailProperties mailProperties;
    
    @Override
    protected void executeInternal(JobExecutionContext jobExecutionContext) throws JobExecutionException {
        logger.info("Executing Job with key {}", jobExecutionContext.getJobDetail().getKey());

        JobDataMap jobDataMap = jobExecutionContext.getMergedJobDataMap();
        String subject = jobDataMap.getString("subject");
        String body = jobDataMap.getString("body");
        String recipientEmail = jobDataMap.getString("email");

        sendMail(mailProperties.getUsername(), recipientEmail, subject, body);
    }

    private void sendMail(String fromEmail, String toEmail, String subject, String body) {
        try {
            logger.info("Sending Email to {}", toEmail);
            MimeMessage message = mailSender.createMimeMessage();

            MimeMessageHelper messageHelper = new MimeMessageHelper(message, StandardCharsets.UTF_8.toString());
            messageHelper.setSubject(subject);
            messageHelper.setText(body, true);
            messageHelper.setFrom(fromEmail);
            messageHelper.setTo(toEmail);

            mailSender.send(message);
        } catch (MessagingException ex) {
            logger.error("Failed to send email to {}", toEmail, ex);
        }
    }
}

Running the Application and Testing the API

It’s time to run the application and watch the live action. Open your terminal, go to the root directory of the project and type the following command to run it –

mvn spring-boot:run -Dspring.mail.password=<YOUR_SMTP_PASSWORD>

You don’t need to pass the spring.mail.password command line argument if you have already set the password in the application.properties file.

The application will start on port 8080 by default. Let’s now schedule an email using the /scheduleEmail API –

Spring Boot Quartz Scheduler Email Job Scheduler API
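If you prefer the command line, a request like the following does the same thing (the recipient, date, and time zone below are placeholders – dateTime must be in the future) –

curl -X POST http://localhost:8080/scheduleEmail \
  -H "Content-Type: application/json" \
  -d '{"email": "someone@example.com", "subject": "Scheduled Email", "body": "Hello from Quartz!", "dateTime": "2021-12-30T10:00:00", "timeZone": "America/New_York"}'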

And, Here I get the email at the specified time 🙂

Spring Boot Quartz Scheduler Dynamic Email Job Scheduler API Example

Conclusion

That’s all folks! I hope you enjoyed the article. You can find the complete source code of the project in the Github Repository. Consider giving the project a star on Github if you find it useful.


10 Common Software Architectural Patterns in a nutshell
18 Mar 2021

Ever wondered how large enterprise-scale systems are designed? Before major software development starts, we have to choose a suitable architecture that will provide us with the desired functionality and quality attributes. Hence, we should understand different architectures before applying them to our design.

What is an Architectural Pattern?

According to Wikipedia,

An architectural pattern is a general, reusable solution to a commonly occurring problem in software architecture within a given context. Architectural patterns are similar to software design patterns but have a broader scope.

In this article, I will briefly explain the following 10 common architectural patterns with their usage, pros, and cons.

  1. Layered pattern
  2. Client-server pattern
  3. Master-slave pattern
  4. Pipe-filter pattern
  5. Broker pattern
  6. Peer-to-peer pattern
  7. Event-bus pattern
  8. Model-view-controller pattern
  9. Blackboard pattern
  10. Interpreter pattern

1. Layered pattern

This pattern can be used to structure programs that can be decomposed into groups of subtasks, each of which is at a particular level of abstraction. Each layer provides services to the next higher layer.

The most commonly found 4 layers of a general information system are as follows.

  • Presentation layer (also known as UI layer)
  • Application layer (also known as service layer)
  • Business logic layer (also known as domain layer)
  • Data access layer (also known as persistence layer)

Usage

  • General desktop applications.
  • E-commerce web applications.
Layered pattern

2. Client-server pattern

This pattern consists of two parties: a server and multiple clients. The server component provides services to multiple client components. Clients request services from the server, and the server provides the relevant services to those clients. Furthermore, the server continues to listen to client requests.

Usage

  • Online applications such as email, document sharing and banking.
Client-server pattern

3. Master-slave pattern

This pattern consists of two parties: a master and slaves. The master component distributes the work among identical slave components and computes a final result from the results which the slaves return.

Usage

  • In database replication, the master database is regarded as the authoritative source, and the slave databases are synchronized to it.
  • Peripherals connected to a bus in a computer system (master and slave drives).
Master-slave pattern

4. Pipe-filter pattern

This pattern can be used to structure systems which produce and process a stream of data. Each processing step is enclosed within a filter component. Data to be processed is passed through pipes. These pipes can be used for buffering or for synchronization purposes.

Usage

  • Compilers. The consecutive filters perform lexical analysis, parsing, semantic analysis, and code generation.
  • Workflows in bioinformatics.
Pipe-filter pattern
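To make the pipe-filter idea concrete, here is a tiny Java sketch (the filters and input are illustrative): each function acts as a filter, and function composition acts as the pipe carrying data from one filter to the next –

import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class PipeFilterDemo {
    public static void main(String[] args) {
        // Two filters: trim whitespace, then normalize to lower case.
        Function<String, String> trim = String::trim;
        Function<String, String> lowercase = String::toLowerCase;

        // The pipe: compose the filters into one processing pipeline.
        Function<String, String> pipeline = trim.andThen(lowercase);

        List<String> output = List.of("  Hello ", " WORLD ").stream()
                .map(pipeline)
                .collect(Collectors.toList());

        System.out.println(output); // [hello, world]
    }
}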

5. Broker pattern

This pattern is used to structure distributed systems with decoupled components. These components can interact with each other by remote service invocations. A broker component is responsible for the coordination of communication among components.

Servers publish their capabilities (services and characteristics) to a broker. Clients request a service from the broker, and the broker then redirects the client to a suitable service from its registry.

Usage

  • Message broker software such as Apache ActiveMQ, Apache Kafka, RabbitMQ, and JBoss Messaging.
Broker pattern

6. Peer-to-peer pattern

In this pattern, individual components are known as peers. Peers may function both as a client, requesting services from other peers, and as a server, providing services to other peers. A peer may act as a client or as a server or as both, and it can change its role dynamically with time.

Usage

  • File-sharing networks such as Gnutella and G2.
  • Multimedia protocols such as P2PTV and PDTP.
Peer-to-peer pattern

7. Event-bus pattern

This pattern primarily deals with events and has 4 major components: event source, event listener, channel, and event bus. Sources publish messages to particular channels on an event bus. Listeners subscribe to particular channels. Listeners are notified of messages that are published to a channel to which they have subscribed before.

Usage

  • Android development
  • Notification services
Event-bus pattern

8. Model-view-controller pattern

This pattern, also known as the MVC pattern, divides an interactive application into 3 parts:

  1. model — contains the core functionality and data
  2. view — displays the information to the user (more than one view may be defined)
  3. controller — handles the input from the user

This is done to separate internal representations of information from the ways information is presented to, and accepted from, the user. It decouples components and allows efficient code reuse.

Usage

  • Architecture for World Wide Web applications in major programming languages.
  • Web frameworks such as Django and Rails.
Model-view-controller pattern

9. Blackboard pattern

This pattern is useful for problems for which no deterministic solution strategies are known. The blackboard pattern consists of 3 main components.

  • blackboard — a structured global memory containing objects from the solution space
  • knowledge source — specialized modules with their own representation
  • control component — selects, configures and executes modules.

All the components have access to the blackboard. Components may produce new data objects that are added to the blackboard. Components look for particular kinds of data on the blackboard, and may find these by pattern matching with the existing knowledge source.

Usage

  • Speech recognition
  • Vehicle identification and tracking
  • Protein structure identification
  • Sonar signals interpretation.
Blackboard pattern

10. Interpreter pattern

This pattern is used for designing a component that interprets programs written in a dedicated language. It mainly specifies how to evaluate lines of programs, known as sentences or expressions, written in a particular language. The basic idea is to have a class for each symbol of the language.

Usage

  • Database query languages such as SQL.
  • Languages used to describe communication protocols.
Interpreter pattern

Comparison of Architectural Patterns

The table given below summarizes the pros and cons of each architectural pattern.

Comparison of Architectural Patterns

Hope you found this article useful. I would love to hear your thoughts. 😇

Thanks for reading. 😊

Cheers! 😃

Generating unique IDs in a distributed environment at high scale
07 Mar 2021

I was recently working on an application that had a sharded MySQL database. While working on this application, I needed to generate unique IDs that could be used as the primary keys in the tables.

When you’re working with a single MySQL database, you can simply use an auto-increment ID as the primary key, But this won’t work in a sharded MySQL database.

So I looked at various existing solutions for this, and finally wrote a simple 64-bit unique ID generator that was inspired by a similar service by Twitter called Twitter snowflake.

In this article, I’ll share a simplified version of the unique ID generator that will work for any use-case of generating unique IDs in a distributed environment, not just sharded databases.

I’ll also outline other existing solutions and discuss their pros and cons.

Existing Solutions

UUID

UUIDs are 128-bit hexadecimal numbers that are globally unique. The chances of the same UUID getting generated twice are negligible.

The problem with UUIDs is that they are very big in size and don’t index well. When your dataset increases, the index size increases as well and the query performance takes a hit.

MongoDB’s ObjectId

MongoDB’s ObjectIDs are 12-byte (96-bit) hexadecimal numbers that are made up of –

  • a 4-byte epoch timestamp in seconds,
  • a 3-byte machine identifier,
  • a 2-byte process id, and
  • a 3-byte counter, starting with a random value.

This is smaller than the earlier 128-bit UUID, but again the size is larger than what we normally have in a single MySQL auto-increment field (a 64-bit bigint value).

Database Ticket Servers

This approach uses a centralized database server to generate unique incrementing IDs. It’s like a centralized auto-increment. This approach is used by Flickr.

The problem with this approach is that the ticket server can become a write bottleneck. Moreover, you introduce one more component in your infrastructure that you need to manage and scale.

Twitter Snowflake

Twitter snowflake is a dedicated network service for generating 64-bit unique IDs at high scale. The IDs generated by this service are roughly time sortable.

The IDs are made up of the following components:

  • Epoch timestamp in millisecond precision – 41 bits (gives us 69 years with a custom epoch)
  • Configured machine id – 10 bits (gives us up to 1024 machines)
  • Sequence number – 12 bits (A local counter per machine that rolls over every 4096)

The extra 1 bit is reserved for future purposes. Since the IDs use timestamp as the first component, they are time sortable.

The IDs generated by Twitter snowflake fit in 64 bits and are time sortable, which is great. That's what we want.

But If we use Twitter snowflake, we’ll again be introducing another component in our infrastructure that we need to maintain.

Distributed 64-bit unique ID generator inspired by Twitter Snowflake

Finally, I wrote a simple sequence generator that generates 64-bit IDs based on the concepts outlined in the Twitter snowflake service.

The IDs generated by this sequence generator are composed of –

  • Epoch timestamp in milliseconds precision – 41 bits. The maximum timestamp that can be represented using 41 bits is 2^41 – 1, or 2199023255551, which comes out to be Wednesday, September 7, 2039 3:47:35.551 PM. That gives us 69 years with respect to a custom epoch.
  • Node ID – 10 bits. This gives us 1024 nodes/machines.
  • Local counter per machine – 12 bits. The counter’s max value would be 4095.

The remaining 1 bit is the signed bit, and it is always set to 0.

Your microservices can use this Sequence Generator to generate IDs independently. This is efficient and fits in the size of a bigint.

Here is the complete program –

import java.net.NetworkInterface;
import java.security.SecureRandom;
import java.time.Instant;
import java.util.Enumeration;

/**
 * Distributed Sequence Generator.
 * Inspired by Twitter snowflake: https://github.com/twitter/snowflake/tree/snowflake-2010
 *
 * This class should be used as a Singleton.
 * Make sure that you create and reuse a Single instance of SequenceGenerator per node in your distributed system cluster.
 */
public class SequenceGenerator {
    private static final int UNUSED_BITS = 1; // Sign bit, Unused (always set to 0)
    private static final int EPOCH_BITS = 41;
    private static final int NODE_ID_BITS = 10;
    private static final int SEQUENCE_BITS = 12;

    private static final int maxNodeId = (int)(Math.pow(2, NODE_ID_BITS) - 1);
    private static final int maxSequence = (int)(Math.pow(2, SEQUENCE_BITS) - 1);

    // Custom Epoch (January 1, 2015 Midnight UTC = 2015-01-01T00:00:00Z)
    private static final long CUSTOM_EPOCH = 1420070400000L;

    private final int nodeId;

    private volatile long lastTimestamp = -1L;
    private volatile long sequence = 0L;

    // Create SequenceGenerator with a nodeId
    public SequenceGenerator(int nodeId) {
        if(nodeId < 0 || nodeId > maxNodeId) {
            throw new IllegalArgumentException(String.format("NodeId must be between %d and %d", 0, maxNodeId));
        }
        this.nodeId = nodeId;
    }

    // Let SequenceGenerator generate a nodeId
    public SequenceGenerator() {
        this.nodeId = createNodeId();
    }

    public synchronized long nextId() {
        long currentTimestamp = timestamp();

        if(currentTimestamp < lastTimestamp) {
            throw new IllegalStateException("Invalid System Clock!");
        }

        if (currentTimestamp == lastTimestamp) {
            sequence = (sequence + 1) & maxSequence;
            if(sequence == 0) {
                // Sequence Exhausted, wait till next millisecond.
                currentTimestamp = waitNextMillis(currentTimestamp);
            }
        } else {
            // reset sequence to start with zero for the next millisecond
            sequence = 0;
        }

        lastTimestamp = currentTimestamp;

        long id = currentTimestamp << (NODE_ID_BITS + SEQUENCE_BITS);
        id |= (nodeId << SEQUENCE_BITS);
        id |= sequence;
        return id;
    }


    // Get current timestamp in milliseconds, adjust for the custom epoch.
    private static long timestamp() {
        return Instant.now().toEpochMilli() - CUSTOM_EPOCH;
    }

    // Block and wait till next millisecond
    private long waitNextMillis(long currentTimestamp) {
        while (currentTimestamp == lastTimestamp) {
            currentTimestamp = timestamp();
        }
        return currentTimestamp;
    }

    private int createNodeId() {
        int nodeId;
        try {
            StringBuilder sb = new StringBuilder();
            Enumeration<NetworkInterface> networkInterfaces = NetworkInterface.getNetworkInterfaces();
            while (networkInterfaces.hasMoreElements()) {
                NetworkInterface networkInterface = networkInterfaces.nextElement();
                byte[] mac = networkInterface.getHardwareAddress();
                if (mac != null) {
                    for(int i = 0; i < mac.length; i++) {
                        sb.append(String.format("%02X", mac[i]));
                    }
                }
            }
            nodeId = sb.toString().hashCode();
        } catch (Exception ex) {
            nodeId = (new SecureRandom().nextInt());
        }
        nodeId = nodeId & maxNodeId;
        return nodeId;
    }
}

The above generator uses the system’s MAC address to create a unique identifier for the Node. You can also supply a NodeID to the sequence generator. That will guarantee uniqueness.

Let’s now understand how it works. Let’s say it’s June 9, 2018 10:00:00 AM GMT. The epoch timestamp for this particular time is 1528538400000.

First of all, we adjust our timestamp with respect to the custom epoch-

currentTimestamp = 1528538400000 - 1420070400000 // 108468000000 (Adjust for custom epoch)

Now, the first 41 bits of the ID (after the signed bit) will be filled with the epoch timestamp. Let’s do that using a left-shift –

id = currentTimestamp << (10 + 12)

Next, we take the configured node ID and fill the next 10 bits with the node ID. Let’s say that the nodeId is 786 –

id |= nodeId << 12

Finally, we fill the last 12 bits with the local counter. Considering the counter’s next value is 3450, i.e. sequence = 3450, the final ID is obtained like so –

id |= sequence  // 454947766275222906

That gives us our final ID.
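Using the generator is straightforward – create a single instance per node and call nextId() whenever you need a new ID (node ID 786 matches the walkthrough above) –

SequenceGenerator sequenceGenerator = new SequenceGenerator(786);
long id = sequenceGenerator.nextId();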


What are Spring Boot Starter Dependencies?
30 Mar 2021

Spring Boot starters are Maven templates that contain a collection of all the relevant transitive dependencies that are needed to start a particular functionality.

For example, if we want to create a Spring WebMVC application, then in a traditional setup we would have included all the required dependencies ourselves. That leaves room for version conflicts, which ultimately result in more runtime exceptions.

With Spring Boot, to create a web MVC application all we need to import is the spring-boot-starter-web dependency. Transitively, it brings in all the other dependencies required to build a web application, e.g. spring-webmvc, spring-web, hibernate-validator, tomcat-embed-core, tomcat-embed-el, tomcat-embed-websocket, jackson-databind, jackson-datatype-jdk8, jackson-datatype-jsr310 and jackson-module-parameter-names.

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

What are Spring Boot Common Annotations?
30 Mar 2021

The most commonly used and important Spring Boot annotations are listed below:

  • @EnableAutoConfiguration – enables the auto-configuration mechanism.
  • @ComponentScan – enables component scanning in the application classpath.
  • @SpringBootApplication – enables all three of the above in one step, i.e. it enables the auto-configuration mechanism, enables component scanning, and registers extra beans in the context.
  • @ImportAutoConfiguration
  • @AutoConfigureBefore, @AutoConfigureAfter, @AutoConfigureOrder – used if the configuration needs to be applied in a specific order (before or after).
  • @Conditional – annotations such as @ConditionalOnBean, @ConditionalOnWebApplication, or @ConditionalOnClass allow a bean to be registered only when the condition is met.
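As a quick illustration, a typical Spring Boot entry point relies on the first three of these through a single annotation (a minimal sketch with an assumed class name):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// @SpringBootApplication = @EnableAutoConfiguration + @ComponentScan + @SpringBootConfiguration
@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}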

Kotlin Functions, Default and Named Arguments, Varargs and Function Scopes
07 Mar 2021

Functions are the basic building block of any program. In this article, you’ll learn how to declare and call functions in Kotlin. You’ll also learn about Function scopes, Default arguments, Named Arguments, and Varargs.

Defining and Calling Functions

You can declare a function in Kotlin using the fun keyword. Following is a simple function that calculates the average of two numbers –

fun avg(a: Double, b: Double): Double {
    return  (a + b)/2
}

Calling a function is simple. You just need to pass the required arguments in the function call like this –

avg(4.6, 9.0)  // 6.8

Following is the general syntax of declaring a function in Kotlin.

fun functionName(param1: Type1, param2: Type2,..., paramN: TypeN): Type {
	// Method Body
}

Every function declaration has a function name, a list of comma-separated parameters, an optional return type, and a method body. The function parameters must be explicitly typed.

Single Expression Functions

You can omit the return type and the curly braces if the function returns a single expression. The return type is inferred by the compiler from the expression –

fun avg(a: Double, b: Double) = (a + b)/2
avg(10.0, 20.0)  // 15.0

Note that, unlike other statically typed languages like Scala, Kotlin does not infer return types for functions with block bodies. Therefore, functions with a block body must always specify return types explicitly.

Unit returning Functions

Functions which don’t return anything have a return type of Unit. The Unit type corresponds to void in Java.

fun printAverage(a: Double, b: Double): Unit {
    println("Avg of ($a, $b) = ${(a + b)/2}")
}

printAverage(10.0, 30.0)  // Avg of (10.0, 30.0) = 20.0

Note that the Unit type declaration is completely optional. So you can also write the above function declaration like this –

fun printAverage(a: Double, b: Double) {
    println("Avg of ($a, $b) = ${(a + b)/2}")
}

Function Default Arguments

Kotlin supports default arguments in function declarations. You can specify a default value for a function parameter. The default value is used when the corresponding argument is omitted from the function call.

Consider the following function for example –

fun displayGreeting(message: String, name: String = "Guest") {
    println("Hello $name, $message")
}

If you call the above function with two arguments, it works just like any other function and uses the values passed in the arguments –

displayGreeting("Welcome to the CalliCoder Blog", "John") // Hello John, Welcome to the CalliCoder Blog

However, if you omit the argument that has a default value from the function call, then the default value is used in the function body –

displayGreeting("Welcome to the CalliCoder Blog") // Hello Guest, Welcome to the CalliCoder Blog

If the function declaration has a default parameter preceding a non-default parameter, then the default value cannot be used while calling the function with position-based arguments.

Consider the following function –

fun arithmeticSeriesSum(a: Int = 1, n: Int, d: Int = 1): Int {
    return n/2 * (2*a + (n-1)*d)
}

While calling the above function, you cannot omit the argument a from the function call and selectively pass a value for the non-default parameter n –

arithmeticSeriesSum(10) // error: no value passed for parameter n

When you call a function with position-based arguments, the first argument corresponds to the first parameter, the second argument corresponds to the second parameter, and so on.

So for passing a value for the 2nd parameter, you need to specify a value for the first parameter as well –

arithmeticSeriesSum(1, 10)  // Result = 55

However, the above use case of selectively passing a value for a parameter is solved by another Kotlin feature called Named Arguments.

Function Named Arguments

Kotlin allows you to specify the names of arguments that you’re passing to the function. This makes the function calls more readable. It also allows you to pass the value of a parameter selectively if other parameters have default values.

Consider the following arithmeticSeriesSum() function that we defined in the previous section –

fun arithmeticSeriesSum(a: Int = 1, n: Int, d: Int = 1): Int {
    return n/2 * (2*a + (n-1)*d)
}

You can specify the names of arguments while calling the function like this –

arithmeticSeriesSum(n=10)  // Result = 55

The above function call will use the default values for parameters a and d.

Similarly, you can call the function with all the parameters like this –

arithmeticSeriesSum(a=3, n=10, d=2)  // Result = 120

You can also reorder the arguments if you’re specifying the names –

arithmeticSeriesSum(n=10, d=2, a=3)  // Result = 120

You can use a mix of named arguments and position-based arguments as long as all the position-based arguments are placed before the named arguments –

arithmeticSeriesSum(3, n=10)  // Result = 75

The following function call is not allowed since it contains position-based arguments after named arguments –

arithmeticSeriesSum(n=10, 2) // error: mixing named and positioned arguments is not allowed

Variable Number of Arguments (Varargs)

You can pass a variable number of arguments to a function by declaring the function with a vararg parameter.

Consider the following sumOfNumbers() function which accepts a vararg of numbers –

fun sumOfNumbers(vararg numbers: Double): Double {
    var sum: Double = 0.0
    for(number in numbers) {
        sum += number
    }
    return sum
}

You can call the above function with any number of arguments –

sumOfNumbers(1.5, 2.0)  // Result = 3.5

sumOfNumbers(1.5, 2.0, 3.5, 4.0, 5.8, 6.2)  // Result = 23.0

sumOfNumbers(1.5, 2.0, 3.5, 4.0, 5.8, 6.2, 8.1, 12.4, 16.5)  // Result = 60.0

In Kotlin, a vararg parameter of type T is internally represented as an array of type T (Array<T>, or the corresponding specialized array such as DoubleArray for primitive types) inside the function body.
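To see this array representation in action, here is a small illustrative sketch (describe is just a hypothetical example) –

// Inside the body, `numbers` is a DoubleArray (Double being a primitive
// type on the JVM), so array members like size and indexing are available
fun describe(vararg numbers: Double): String {
    if (numbers.isEmpty()) return "no numbers"
    return "${numbers.size} numbers, first = ${numbers[0]}"
}

describe(1.5, 2.0, 3.5)  // 3 numbers, first = 1.5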

A function may have only one vararg parameter. If there are other parameters following the vararg parameter, then the values for those parameters can be passed using the named argument syntax –

fun sumOfNumbers(vararg numbers: Double, initialSum: Double): Double {
    var sum = initialSum
    for(number in numbers) {
        sum += number
    }
    return sum
}
sumOfNumbers(1.5, 2.5, initialSum=100.0) // Result = 104.0

Spread Operator

Usually, we pass the arguments to a vararg function one by one. But if you already have an array and want to pass the elements of the array to the vararg function, then you can use the spread operator like this –

val a = doubleArrayOf(1.5, 2.6, 5.4)
sumOfNumbers(*a)  // Result = 9.5
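You can also mix individual arguments with a spread array in the same call. Continuing with the array a from above –

sumOfNumbers(1.5, *a)  // Result = 11.0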

Function Scope

Kotlin supports functional programming. Functions are first-class citizens in the language.

Unlike Java where every function needs to be encapsulated inside a class, Kotlin functions can be defined at the top level in a source file.

In addition to top-level functions, you also have the ability to define member functions, local functions, and extension functions.

1. Top Level Functions

Top level functions in Kotlin are defined in a source file outside of any class. They are also called package-level functions because they are members of the package in which they are defined.

The main() function itself is a top-level function in Kotlin since it is defined outside of any class.

Let’s now see an example of a top-level function. Check out the following findNthFibonacciNo() function which is defined inside a package named maths –

package maths

fun findNthFibonacciNo(n: Int): Int {
    var a = 0
    var b = 1
    var c: Int

    if(n == 0) {
        return a
    }

    for(i in 2..n) {
        c = a+b
        a = b
        b = c
    }
    return b
}

You can access the above function directly inside the maths package –

package maths

fun main(args: Array<String>) {
    println("10th fibonacci number is - ${findNthFibonacciNo(10)}")
}
//Outputs - 10th fibonacci number is - 55

However, if you want to call the findNthFibonacciNo() function from other packages, then you need to import it as in the following example –

package test
import maths.findNthFibonacciNo

fun main(args: Array<String>) {
    println("10th fibonacci number is - ${findNthFibonacciNo(10)}")
}

2. Member Functions

Member functions are functions which are defined inside a class or an object.

class User(val firstName: String, val lastName: String) {

    // Member function
    fun getFullName(): String {
        return firstName + " " + lastName
    }
}

Member functions are called on the objects of the class using the dot (.) notation –

val user = User("Bill", "Gates") // Create an object of the class
println("Display Name : ${user.getFullName()}") // Call the member function

3. Local/Nested Functions

Kotlin allows you to nest function definitions. These nested functions are called Local functions. Local functions bring more encapsulation and readability to your program –

fun findBodyMassIndex(weightInKg: Double, heightInCm: Double): Double {
    // Validate the arguments
    if(weightInKg <= 0) {
        throw IllegalArgumentException("Weight must be greater than zero")
    }
    if(heightInCm <= 0) {
        throw IllegalArgumentException("Height must be greater than zero")
    }

    fun calculateBMI(weightInKg: Double, heightInCm: Double): Double {
        val heightInMeter = heightInCm / 100
        return weightInKg / (heightInMeter * heightInMeter)
    }

    // Calculate BMI using the nested function
    return calculateBMI(weightInKg, heightInCm)
}

Local functions can access local variables of the outer function. So the above function is equivalent to the following –

fun findBodyMassIndex(weightInKg: Double, heightInCm: Double): Double {
    if(weightInKg <= 0) {
        throw IllegalArgumentException("Weight must be greater than zero")
    }
    if(heightInCm <= 0) {
        throw IllegalArgumentException("Height must be greater than zero")
    }

    // Nested function has access to the local variables of the outer function
    fun calculateBMI(): Double {
        val heightInMeter = heightInCm / 100
        return weightInKg / (heightInMeter * heightInMeter)
    }

    return calculateBMI()
}

Conclusion

Congratulations folks! In this article, you learned how to define and call functions in Kotlin, how to use default and named arguments, how to define and call functions with a variable number of arguments, and how to define top-level functions, member functions, and local/nested functions.

In future articles, I’ll write about extension functions, higher-order functions, and lambdas. So stay tuned!

As always, thank you for reading. Happy Kotlin Koding 🙂

Find the pair with the smallest difference in two unsorted arrays
06
Feb
2021

Find the pair with the smallest difference in two unsorted arrays

Given two non-empty arrays of integers, find the pair of values (one value from each array) with the smallest (non-negative) difference.

Example

Input: [1, 3, 15, 11, 2], [23, 127, 235, 19, 8]

Output: [11, 8]; this pair has the smallest difference.

Solution 1. Brute Force approach: Use two for loops

The naive way to solve this problem is to use two for loops and compare the difference of every pair to find the pair with the smallest difference:

Time complexity: O(m * n), where m and n are the lengths of the two arrays

class SmallestDifference {

  public static int[] findSmallestDifferencePair_Naive(int[] a1, int[] a2) {
    // Track the difference as a long to avoid int overflow on extreme inputs
    long smallestDiff = Long.MAX_VALUE;
    int[] smallestDiffPair = new int[2];

    // Compare every pair and keep the one with the smallest absolute difference
    for (int i = 0; i < a1.length; i++) {
      for (int j = 0; j < a2.length; j++) {
        long currentDiff = Math.abs((long) a1[i] - a2[j]);
        if (currentDiff < smallestDiff) {
          smallestDiff = currentDiff;
          smallestDiffPair[0] = a1[i];
          smallestDiffPair[1] = a2[j];
        }
      }
    }
    return smallestDiffPair;
  }

  public static void main(String[] args) {
    int[] a1 = new int[] {-1, 5, 10, 20, 28, 3};
    int[] a2 = new int[] {26, 134, 135, 15, 17};

    int[] pair = findSmallestDifferencePair_Naive(a1, a2);
    System.out.println(pair[0] + " " + pair[1]);
  }
}

Solution 2. Use Sorting along with the two-pointer sliding window approach

You can improve upon the brute force solution by first sorting the array and then using the two-pointer sliding window pattern.

Here is how it will work in this case:

  • Initialize a variable to keep track of the smallest difference found so far (smallestDiff).
  • Sort both arrays.
  • Initialize two indexes (one for each array): i = 0 and j = 0.
  • Loop until we reach the end of either array.
  • For every iteration:
    • Compare smallestDiff with abs(a1[i] - a2[j]) and reset it if the new difference is smaller.
    • If a1[i] < a2[j], increment i.
    • Otherwise, increment j.

Because both arrays are sorted, advancing the index that points to the smaller value is the only move that can possibly shrink the current difference.

Time complexity: O(m log m + n log n)

import java.util.Arrays;

class SmallestDifference {
  public static int[] findSmallestDifferencePair(int[] a1, int[] a2) {
    Arrays.sort(a1);
    Arrays.sort(a2);

    // Track the difference as a long to avoid int overflow on extreme inputs
    long smallestDiff = Long.MAX_VALUE;
    int[] smallestDiffPair = new int[2];
    int i = 0, j = 0;

    while (i < a1.length && j < a2.length) {
      long currentDiff = Math.abs((long) a1[i] - a2[j]);
      if (currentDiff < smallestDiff) {
        smallestDiff = currentDiff;
        smallestDiffPair[0] = a1[i];
        smallestDiffPair[1] = a2[j];
      }
      // Advance the pointer at the smaller value; only that can reduce the difference
      if (a1[i] < a2[j]) {
        i++;
      } else {
        j++;
      }
    }
    return smallestDiffPair;
  }

  public static void main(String[] args) {
    int[] a1 = new int[] {-1, 5, 10, 20, 28, 3};
    int[] a2 = new int[] {26, 134, 135, 15, 17};

    int[] pair = findSmallestDifferencePair(a1, a2);
    System.out.println(pair[0] + " " + pair[1]);
  }
}
