Kotlin Abstract Classes with Examples

07 Mar 2021

An abstract class is a class that cannot be instantiated. We create abstract classes to provide a common template for other classes to extend and use.

Declaring an Abstract Class

You can declare an abstract class using the abstract keyword –

abstract class Vehicle
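
Since an abstract class can't be instantiated, trying to construct one fails at compile time –

val vehicle = Vehicle() // Error: Cannot create an instance of an abstract class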

An abstract class may contain both abstract and non-abstract properties and functions. You need to explicitly use the abstract keyword to declare a property or function as abstract –

abstract class Vehicle(val name: String,
                       val color: String,
                       val weight: Double) {   // Concrete (Non Abstract) Properties

    // Abstract Property (Must be overridden by Subclasses)
    abstract var maxSpeed: Double

    // Abstract Methods (Must be implemented by Subclasses)
    abstract fun start()
    abstract fun stop()

    // Concrete (Non Abstract) Method
    fun displayDetails() {
        println("Name: $name, Color: $color, Weight: $weight, Max Speed: $maxSpeed")
    }
}

Any subclass that extends the abstract class must override all of its abstract methods and properties; otherwise, the subclass itself must be declared abstract.
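
For example, here's a hypothetical sketch of a subclass that implements only some of Vehicle's abstract members, and must therefore be declared abstract itself –

abstract class PartialVehicle(name: String,
                              color: String,
                              weight: Double) : Vehicle(name, color, weight) {

    override var maxSpeed: Double = 100.0

    override fun start() {
        println("Started")
    }

    // stop() is not implemented here, so PartialVehicle must stay abstract
}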

If you recall from the Kotlin Inheritance tutorial, you need to annotate a class as open to allow other classes to inherit from it. But, you don’t need to do that with abstract classes. Abstract classes are open for extension by default.

Similarly, abstract methods and properties are open for overriding by default.

But if you need to override a non-abstract method or property, you must mark it with the open modifier.
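
Here is a small hypothetical sketch demonstrating this –

abstract class Machine {
    abstract fun start()

    // Non-abstract method marked open so that subclasses may override it
    open fun describe() {
        println("A generic machine")
    }
}

class Robot : Machine() {
    override fun start() {
        println("Robot Started")
    }

    // Allowed only because describe() is declared open
    override fun describe() {
        println("A robot")
    }
}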

Extending from an Abstract Class

Following are two concrete classes that extend the Vehicle abstract class and override its abstract methods and properties –

class Car(name: String,
          color: String,
          weight: Double,
          override var maxSpeed: Double): Vehicle(name, color, weight) {

    override fun start() {
        // Code to start a Car
        println("Car Started")
    }

    override fun stop() {
        // Code to stop a Car
        println("Car Stopped")
    }
}

class Motorcycle(name: String,
                 color: String,
                 weight: Double,
                 override var maxSpeed: Double): Vehicle(name, color, weight) {

    override fun start() {
        // Code to Start the Motorcycle
        println("Bike Started")
    }

    override fun stop() {
        // Code to Stop the Motorcycle
        println("Bike Stopped")
    }
}

Let’s now write some code to test our abstract and concrete classes in the main method –

fun main(args: Array<String>) {

    val car = Car("Ferrari 812 Superfast", "red", 1525.0, 339.60)
    val motorCycle = Motorcycle("Ducati 1098s", "red", 173.0, 271.0)

    car.displayDetails()
    motorCycle.displayDetails()

    car.start()
    motorCycle.start()
}

Here is the output of the above main() method –

# Output
Name: Ferrari 812 Superfast, Color: red, Weight: 1525.0, Max Speed: 339.6
Name: Ducati 1098s, Color: red, Weight: 173.0, Max Speed: 271.0
Car Started
Bike Started

Conclusion

Abstract classes help you extract common functionality into a base class. An abstract class may contain both abstract and non-abstract properties and methods. An abstract class is useless on its own because you cannot create objects from it. But other concrete (non-abstract) classes can extend it and build upon it to provide the desired functionality.

Spring Boot Starter Parent Example

30 Mar 2021

In this Spring Boot tutorial, we will learn about the spring-boot-starter-parent dependency, which most Spring Boot applications inherit from. We will also learn which configurations this dependency provides, and how to override them.

What is the spring-boot-starter-parent dependency?

The spring-boot-starter-parent dependency is the parent POM that provides dependency and plugin management for Spring Boot-based applications. It defines the default Java version, the default versions of the dependencies that Spring Boot uses, and the default configuration of the Maven plugins.
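
To use it, you declare it as the parent POM in your project's pom.xml (the version shown here matches the one referenced later in this article) –

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.0.RELEASE</version>
</parent>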

A few important configurations provided by this POM are shown below. Please refer to this link to read the complete configuration.

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-dependencies</artifactId>
        <version>${revision}</version>
        <relativePath>../../spring-boot-dependencies</relativePath>
    </parent>
    <artifactId>spring-boot-starter-parent</artifactId>
    <packaging>pom</packaging>
    <name>Spring Boot Starter Parent</name>
    <description>Parent pom providing dependency and plugin management for applications built with Maven</description>
    <properties>
        <java.version>1.8</java.version>
        <!-- delimiter that doesn't clash with Spring ${} placeholders -->
        <resource.delimiter>@</resource.delimiter>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
        <maven.compiler.source>${java.version}</maven.compiler.source>
        <maven.compiler.target>${java.version}</maven.compiler.target>
    </properties>
    ...
    <resource>
        <directory>${basedir}/src/main/resources</directory>
        <filtering>true</filtering>
        <includes>
            <include>**/application*.yml</include>
            <include>**/application*.yaml</include>
            <include>**/application*.properties</include>
        </includes>
    </resource>
</project>

The spring-boot-starter-parent dependency in turn inherits from spring-boot-dependencies, which is declared in the <parent> section at the top of the above POM file.

spring-boot-dependencies is the POM that actually contains the default version to use for each library. The following code shows the different versions of various dependencies that are configured in spring-boot-dependencies:

<properties>
    <!-- Dependency versions -->
    <activemq.version>5.15.3</activemq.version>
    <antlr2.version>2.7.7</antlr2.version>
    <appengine-sdk.version>1.9.63</appengine-sdk.version>
    <artemis.version>2.4.0</artemis.version>
    <aspectj.version>1.8.13</aspectj.version>
    <assertj.version>3.9.1</assertj.version>
    <atomikos.version>4.0.6</atomikos.version>
    <bitronix.version>2.1.4</bitronix.version>
    <byte-buddy.version>1.7.11</byte-buddy.version>
    <caffeine.version>2.6.2</caffeine.version>
    <cassandra-driver.version>3.4.0</cassandra-driver.version>
    <classmate.version>1.3.4</classmate.version>
    ......
</properties>

The above list is very long; you can read the complete list at this link.

How to override a default dependency version?

As you can see, Spring Boot defines a default version for most dependencies. You can override the version of your choice, as your project requires, in the properties tag of your project's pom.xml file.

For example, Spring Boot uses 2.8.2 as the default version of Google's Gson library:

<groovy.version>2.4.14</groovy.version>
<gson.version>2.8.2</gson.version>
<h2.version>1.4.197</h2.version>

Say I want to use version 2.7 of the gson dependency. I simply provide this information in the properties tag like this:

<properties>
    <java.version>1.8</java.version>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <gson.version>2.7</gson.version>
</properties>

Now, in your Eclipse editor, you can see the message: "The managed version is 2.7. The artifact is managed in org.springframework.boot:spring-boot-dependencies:2.0.0.RELEASE".

GSON resolved dependency

Drop me your questions in comments section.

Happy Learning !!

Kotlin Properties, Backing Fields, Getters and Setters

07 Mar 2021

You can declare properties inside Kotlin classes in the same way you declare any other variable. These properties can be mutable (declared using var) or immutable (declared using val).

Here is an example –

// User class with a primary constructor that accepts three parameters
class User(_id: Int, _name: String, _age: Int) {
    // Properties of User class
    val id: Int = _id         // Immutable (Read only)
    var name: String = _name  // Mutable
    var age: Int = _age       // Mutable
}

You can get or set the properties of an object using the dot(.) notation like so –

val user = User(1, "Jack Sparrow", 44)

// Getting a Property
val name = user.name

// Setting a Property
user.age = 46

// You cannot set read-only properties
user.id = 2	// Error: Val cannot be assigned

Getters and Setters

Kotlin internally generates a default getter and setter for mutable properties, and a getter (only) for read-only properties.

It calls these getters and setters internally whenever you access or modify a property using the dot(.) notation.

Here is how the User class that we saw in the previous section looks with its default getters and setters –

class User(_id: Int, _name: String, _age: Int) {
    val id: Int = _id
        get() = field
    
    var name: String = _name
        get() = field
        set(value) {
            field = value
        }
    
    var age: Int = _age
        get() = field
        set(value) {
            field = value
        }
}

You might have noticed two strange identifiers in all the getter and setter methods – field and value.

Let’s understand what these identifiers are for –

1. value

We use value as the name of the setter parameter. This is the default convention in Kotlin but you’re free to use any other name if you want.

The value parameter contains the value that is assigned to the property. For example, when you write user.name = "Bill Gates", the value parameter contains the assigned value “Bill Gates”.

2. Backing Field (field)

The field identifier lets you refer to the property's backing storage inside the getter and setter methods. It is required because using the property directly inside its own getter or setter triggers a recursive call, which generates a StackOverflowError.

Let’s see that in action. Let’s modify the User class and refer to the properties directly inside the getters and setters instead of using the field identifier –

class User(_name: String, _age: Int) {
    var name: String = _name
        get() = name		  // Calls the getter recursively

    var age: Int = _age
        set(value) {
            age = value		  // Calls the setter recursively
        }
}

fun main(args: Array<String>) {
    val user = User("Jack Sparrow", 44)

    // Getting a Property
    println("${user.name}") // StackOverflowError

    // Setting a Property
    user.age = 45           // StackOverflowError

}

The above program will generate a StackOverflowError due to the recursive calls in the getter and setter methods. This is why Kotlin has the concept of backing fields: a backing field is what makes it possible to store the property's value in memory. When you initialize a property with a value in the class, the initializer value is written to the backing field of that property –

class User(_id: Int) {
    val id: Int	= _id   // The initializer value is written to the backing field of the property
}

A backing field is generated for a property if

  • A custom getter/setter references it through the field identifier, or
  • It uses the default implementation of at least one of the accessors (getter or setter). (Remember, the default getter and setter reference the field identifier themselves.)

For example, a backing field is not generated for the id property in the following class because it neither uses the default implementation of the getter nor refers to the field identifier in the custom getter –

class User() {
    val id	// No backing field is generated
        get() = 0
}

Since a backing field is not generated, you won’t be able to initialize the above property like so –

class User(_id: Int) {
    val id: Int = _id	//Error: Initialization not possible because no backing field is generated which could store the initialized value in memory
        get() = 0
}

Creating Custom Getters and Setters

You can ditch Kotlin’s default getter/setter and define a custom getter and setter for the properties of your class.

Here is an example –

class User(_id: Int, _name: String, _age: Int) {
    val id: Int = _id

    var name: String = _name 
        // Custom Getter
        get() {     
            return field.toUpperCase()
        }     

    var age: Int = _age
        // Custom Setter
        set(value) {
            field = if(value > 0) value else throw IllegalArgumentException("Age must be greater than zero")
        }
}

fun main(args: Array<String>) {
    val user = User(1, "Jack Sparrow", 44)

    println("${user.name}") // JACK SPARROW

    user.age = -1           // Throws IllegalArgumentException: Age must be greater than zero
}

Conclusion

Thanks for reading, folks. In this article, you learned how Kotlin's getters and setters work and how to define a custom getter and setter for the properties of a class.

A Comparison Between Spring and Spring Boot

26 Mar 2021

2. What Is Spring?

Simply put, the Spring framework provides comprehensive infrastructure support for developing Java applications.

It’s packed with some nice features like Dependency Injection, and out of the box modules like:

  • Spring JDBC
  • Spring MVC
  • Spring Security
  • Spring AOP
  • Spring ORM
  • Spring Test

These modules can drastically reduce the development time of an application.

For example, in the early days of Java web development, we needed to write a lot of boilerplate code to insert a record into a data source. By using the JdbcTemplate of the Spring JDBC module, we can reduce it to a few lines of code with only a few configurations.
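
For illustration, here is a minimal sketch of such an insert using JdbcTemplate – the EmployeeDao class, the employees table, and its columns are hypothetical –

import javax.sql.DataSource;

import org.springframework.jdbc.core.JdbcTemplate;

public class EmployeeDao {

    private final JdbcTemplate jdbcTemplate;

    public EmployeeDao(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    // A parameterized insert in a single call – no manual connection,
    // statement, or exception handling required
    public int insert(String firstName, String lastName) {
        return jdbcTemplate.update(
                "INSERT INTO employees (first_name, last_name) VALUES (?, ?)",
                firstName, lastName);
    }
}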

3. What Is Spring Boot?

Spring Boot is basically an extension of the Spring framework, which eliminates the boilerplate configurations required for setting up a Spring application.

It takes an opinionated view of the Spring platform, which paves the way for a faster and more efficient development ecosystem.

Here are just a few of the features in Spring Boot:

  • Opinionated ‘starter’ dependencies to simplify the build and application configuration
  • Embedded server to avoid complexity in application deployment
  • Metrics, Health check, and externalized configuration
  • Automatic config for Spring functionality – whenever possible

Let’s get familiar with both of these frameworks step by step.

4. Maven Dependencies

First of all, let’s look at the minimum dependencies required to create a web application using Spring:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>5.3.5</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>5.3.5</version>
</dependency>

Unlike Spring, Spring Boot requires only one dependency to get a web application up and running:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>2.4.4</version>
</dependency>

All other dependencies are added automatically to the final archive during build time.

Another good example is testing libraries. We usually use the set of Spring Test, JUnit, Hamcrest, and Mockito libraries. In a Spring project, we should add all of these libraries as dependencies.

Alternatively, in Spring Boot we only need the starter dependency for testing to automatically include these libraries.
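
For instance, declaring just the test starter (no version needed, since the parent POM manages it) brings in Spring Test, JUnit, Hamcrest, and Mockito –

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>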

Spring Boot provides a number of starter dependencies for different Spring modules. Some of the most commonly used ones are:

  • spring-boot-starter-data-jpa
  • spring-boot-starter-security
  • spring-boot-starter-test
  • spring-boot-starter-web
  • spring-boot-starter-thymeleaf

For the full list of starters, also check out the Spring documentation.

5. MVC Configuration

Let’s explore the configuration required to create a JSP web application using both Spring and Spring Boot.

Spring requires defining the dispatcher servlet, mappings, and other supporting configurations. We can do this using either the web.xml file or an Initializer class:

public class MyWebAppInitializer implements WebApplicationInitializer {
 
    @Override
    public void onStartup(ServletContext container) {
        AnnotationConfigWebApplicationContext context
          = new AnnotationConfigWebApplicationContext();
        context.setConfigLocation("com.baeldung");
 
        container.addListener(new ContextLoaderListener(context));
 
        ServletRegistration.Dynamic dispatcher = container
          .addServlet("dispatcher", new DispatcherServlet(context));
         
        dispatcher.setLoadOnStartup(1);
        dispatcher.addMapping("/");
    }
}

We also need to add the @EnableWebMvc annotation to a @Configuration class, and define a view-resolver to resolve the views returned from the controllers:

@EnableWebMvc
@Configuration
public class ClientWebConfig implements WebMvcConfigurer { 
   @Bean
   public ViewResolver viewResolver() {
      InternalResourceViewResolver bean
        = new InternalResourceViewResolver();
      bean.setViewClass(JstlView.class);
      bean.setPrefix("/WEB-INF/view/");
      bean.setSuffix(".jsp");
      return bean;
   }
}

By comparison, Spring Boot only needs a couple of properties to make things work once we add the web starter:

spring.mvc.view.prefix=/WEB-INF/jsp/
spring.mvc.view.suffix=.jsp

All of the Spring configuration above is automatically included by adding the Boot web starter through a process called auto-configuration.

What this means is that Spring Boot will look at the dependencies, properties, and beans that exist in the application and enable configuration based on these.

Of course, if we want to add our own custom configuration, then the Spring Boot auto-configuration will back away.
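
For example, as a minimal sketch, declaring our own ViewResolver bean is enough for Boot to back off from the auto-configured one (the prefix here is hypothetical) –

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.ViewResolver;
import org.springframework.web.servlet.view.InternalResourceViewResolver;

@Configuration
public class CustomViewResolverConfig {

    // Because this bean exists, Spring Boot's auto-configured view resolver backs away
    @Bean
    public ViewResolver viewResolver() {
        InternalResourceViewResolver resolver = new InternalResourceViewResolver();
        resolver.setPrefix("/WEB-INF/custom-views/");
        resolver.setSuffix(".jsp");
        return resolver;
    }
}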

5.1. Configuring Template Engine

Now let’s learn how to configure a Thymeleaf template engine in both Spring and Spring Boot.

In Spring, we need to add the thymeleaf-spring5 dependency and some configurations for the view resolver:

@Configuration
@EnableWebMvc
public class MvcWebConfig implements WebMvcConfigurer {

    @Autowired
    private ApplicationContext applicationContext;

    @Bean
    public SpringResourceTemplateResolver templateResolver() {
        SpringResourceTemplateResolver templateResolver = 
          new SpringResourceTemplateResolver();
        templateResolver.setApplicationContext(applicationContext);
        templateResolver.setPrefix("/WEB-INF/views/");
        templateResolver.setSuffix(".html");
        return templateResolver;
    }

    @Bean
    public SpringTemplateEngine templateEngine() {
        SpringTemplateEngine templateEngine = new SpringTemplateEngine();
        templateEngine.setTemplateResolver(templateResolver());
        templateEngine.setEnableSpringELCompiler(true);
        return templateEngine;
    }

    @Override
    public void configureViewResolvers(ViewResolverRegistry registry) {
        ThymeleafViewResolver resolver = new ThymeleafViewResolver();
        resolver.setTemplateEngine(templateEngine());
        registry.viewResolver(resolver);
    }
}

Spring Boot 1 only required the dependency of spring-boot-starter-thymeleaf to enable Thymeleaf support in a web application. But due to the new features in Thymeleaf 3.0, we also have to add thymeleaf-layout-dialect as a dependency in a Spring Boot 2 web application. Alternatively, we can choose to add a spring-boot-starter-thymeleaf dependency that'll take care of all of this for us.

Once the dependencies are in place, we can add the templates to the src/main/resources/templates folder, and Spring Boot will render them automatically.
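
For instance, a minimal hypothetical controller is enough to serve one of those templates – returning "greeting" resolves to src/main/resources/templates/greeting.html –

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;

@Controller
public class GreetingController {

    // The returned view name is resolved against the templates folder
    @GetMapping("/greeting")
    public String greeting() {
        return "greeting";
    }
}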

6. Spring Security Configuration

For the sake of simplicity, we’ll see how the default HTTP Basic authentication is enabled using these frameworks.

Let’s start by looking at the dependencies and configuration we need to enable Security using Spring.

Spring requires both the standard spring-security-web and spring-security-config dependencies to set up Security in an application.

Next we need to add a class that extends the WebSecurityConfigurerAdapter and makes use of the @EnableWebSecurity annotation:

@Configuration
@EnableWebSecurity
public class CustomWebSecurityConfigurerAdapter extends WebSecurityConfigurerAdapter {
 
    @Autowired
    public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
          .withUser("user1")
            .password(passwordEncoder()
            .encode("user1Pass"))
          .authorities("ROLE_USER");
    }
 
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
          .anyRequest().authenticated()
          .and()
          .httpBasic();
    }
    
    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }
}

Here we’re using inMemoryAuthentication to set up the authentication.

Spring Boot also requires these dependencies to make it work, but we only need to define the dependency of spring-boot-starter-security as this will automatically add all the relevant dependencies to the classpath.

The security configuration in Spring Boot is the same as the one above.

To see how the JPA configuration can be achieved in both Spring and Spring Boot, we can check out our article A Guide to JPA with Spring.

7. Application Bootstrap

The basic difference in bootstrapping an application in Spring and Spring Boot lies with the servlet. Spring uses either the web.xml or SpringServletContainerInitializer as its bootstrap entry point.

On the other hand, Spring Boot uses only Servlet 3 features to bootstrap an application. Let’s talk about this in detail.

7.1. How Does Spring Bootstrap?

Spring supports both the legacy web.xml way of bootstrapping as well as the latest Servlet 3+ method.

Let’s see the web.xml approach in steps:

  1. Servlet container (the server) reads web.xml.
  2. The DispatcherServlet defined in the web.xml is instantiated by the container.
  3. DispatcherServlet creates WebApplicationContext by reading WEB-INF/{servletName}-servlet.xml.
  4. Finally, the DispatcherServlet registers the beans defined in the application context.

Here’s how Spring bootstraps using the Servlet 3+ approach:

  1. The servlet container searches for classes implementing ServletContainerInitializer and executes them.
  2. The SpringServletContainerInitializer finds all classes implementing WebApplicationInitializer.
  3. The WebApplicationInitializer creates the context with XML or @Configuration classes.
  4. The WebApplicationInitializer creates the DispatcherServlet with the previously created context.

7.2. How Does Spring Boot Bootstrap?

The entry point of a Spring Boot application is the class which is annotated with @SpringBootApplication:

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

By default, Spring Boot uses an embedded container to run the application. In this case, Spring Boot uses the public static void main entry point to launch an embedded web server.

It also takes care of the binding of the Servlet, Filter, and ServletContextInitializer beans from the application context to the embedded servlet container.

Another feature of Spring Boot is that it automatically scans all the classes in the same package or subpackages of the main class for components.

Additionally, Spring Boot provides the option of deploying it as a web archive in an external container. In this case, we have to extend the SpringBootServletInitializer:

@SpringBootApplication
public class Application extends SpringBootServletInitializer {
    // ...
}

Here the external servlet container looks for the Main-class defined in the META-INF file of the web archive, and the SpringBootServletInitializer will take care of binding the Servlet, Filter, and ServletContextInitializer.

8. Packaging and Deployment

Finally, let’s see how an application can be packaged and deployed. Both of these frameworks support common build technologies like Maven and Gradle; however, when it comes to deployment, these frameworks differ a lot.

For instance, the Spring Boot Maven Plugin provides Spring Boot support in Maven. It also allows packaging executable jar or war archives and running an application “in-place.”

Some of the advantages of Spring Boot over Spring in the context of deployment include:

  • Provides embedded container support
  • Provision to run the jars independently using the command java -jar (see the example after this list)
  • Option to exclude dependencies to avoid potential jar conflicts when deploying in an external container
  • Option to specify active profiles when deploying
  • Random port generation for integration tests
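
For example (the jar name here is hypothetical), a packaged application can be run independently, with an active profile specified at launch –

java -jar target/myapp-0.0.1-SNAPSHOT.jar --spring.profiles.active=prod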

9. Conclusion

In this article, we learned about the differences between Spring and Spring Boot.

In a few words, we can say that Spring Boot is simply an extension of Spring itself to make development, testing, and deployment more convenient.

How to delete a directory recursively with all its subdirectories and files in Java

03 Mar 2021

In this short article, you’ll learn how to delete a directory recursively along with all its subdirectories and files.

There are two examples that demonstrate how to achieve this task. The idea behind both of the examples is to traverse the file tree, and delete the files in any directory before deleting the directory itself.

Delete directory recursively – Java 8+

This example makes use of the Files.walk(Path) method, which returns a Stream<Path> populated with Path objects by walking the file tree in depth-first order.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;

public class DeleteDirectoryRecursively {
    public static void main(String[] args) throws IOException {
        Path dir = Paths.get("java");

        // Walk the file tree, sorting paths in reverse order so that files and
        // subdirectories are deleted before their parent directories.
        Files.walk(dir)
                .sorted(Comparator.reverseOrder())
                .forEach(path -> {
                    try {
                        System.out.println("Deleting: " + path);
                        Files.delete(path);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                });
    }
}

Delete directory recursively – Java 7

The following example uses the Files.walkFileTree(Path, FileVisitor) method, which traverses a file tree and invokes the supplied FileVisitor for each file.

We use a SimpleFileVisitor to perform the delete operation.

import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

public class DeleteDirectoryRecursively1 {
    public static void main(String[] args) throws IOException {
        Path dir = Paths.get("java");

        // Traverse the file tree and delete each file/directory.
        Files.walkFileTree(dir, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                System.out.println("Deleting file: " + file);
                Files.delete(file);
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
                System.out.println("Deleting dir: " + dir);
                if (exc == null) {
                    Files.delete(dir);
                    return FileVisitResult.CONTINUE;
                } else {
                    throw exc;
                }
            }
        });
    }
}

Spring Boot Database Migrations with Flyway

05 Mar 2021

Nothing is perfect and complete when it is built the first time. The database schema of your brand new application is no exception. It is bound to change over time when you try to fit new requirements or add new features.

Flyway is a tool that lets you version control incremental changes to your database so that you can migrate it to a new version easily and confidently.

In this article, you’ll learn how to use Flyway in Spring Boot applications to manage changes to your database.

We’ll build a simple Spring Boot application with MySQL Database & Spring Data JPA, and learn how to integrate Flyway in the app.

Let’s get started!

Creating the Application

Let’s use Spring Boot CLI to generate the application. Fire up your terminal and type the following command to generate the project.

spring init --name=flyway-demo --dependencies=web,mysql,data-jpa,flyway flyway-demo

Once the project is generated, import it into your favorite IDE. The directory structure of the application would look like this –

Spring Boot Flyway Database Migration Example Directory Structure

Configuring MySQL and Hibernate

First, create a new MySQL database named flyway_demo.

Then open the src/main/resources/application.properties file and add the following properties to it –

## Spring DATASOURCE (DataSourceAutoConfiguration & DataSourceProperties)
spring.datasource.url = jdbc:mysql://localhost:3306/flyway_demo?useSSL=false
spring.datasource.username = root
spring.datasource.password = root

## Hibernate Properties
# The SQL dialect makes Hibernate generate better SQL for the chosen database
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5InnoDBDialect

## This is important
# Hibernate ddl auto (create, create-drop, validate, update)
spring.jpa.hibernate.ddl-auto = validate

Please change spring.datasource.username and spring.datasource.password as per your MySQL installation.

The property spring.jpa.hibernate.ddl-auto is important. Setting it to validate makes Hibernate only validate the database schema against the entities that you have created in the application, and throw an error if the schema doesn’t match the entity specifications. Since Flyway, not Hibernate, will be managing schema changes, we don’t want Hibernate to create or alter any tables.

Creating a Domain Entity

Let’s create a simple entity in our application so that we can create and test a Flyway migration for it.

First, create a new package called domain inside the com.example.flywaydemo package. Then, create the following User.java file inside the com.example.flywaydemo.domain package –

package com.example.flywaydemo.domain;

import javax.persistence.*;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.Size;

@Entity
@Table(name = "users")
public class User {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @NotBlank
    @Column(unique = true)
    @Size(min = 1, max = 100)
    private String username;

    @NotBlank
    @Size(max = 50)
    private String firstName;

    @Size(max = 50)
    private String lastName;

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getUsername() {
        return username;
    }

    public void setUsername(String username) {
        this.username = username;
    }

    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }
}

Creating a Flyway Migration script

Flyway reads database migration scripts from the classpath:db/migration folder by default.

All the migration scripts must follow a particular naming convention – V<VERSION_NUMBER>__<NAME>.sql. Check out the official Flyway documentation to learn more about the naming convention.

Let’s create our very first database migration script. First, create the db/migration folder inside the src/main/resources directory –

mkdir -p src/main/resources/db/migration

Now, create a new file named V1__init.sql inside the src/main/resources/db/migration directory and add the following SQL script –

CREATE TABLE users (
  id bigint(20) NOT NULL AUTO_INCREMENT,
  username varchar(100) NOT NULL,
  first_name varchar(50) NOT NULL,
  last_name varchar(50) DEFAULT NULL,
  PRIMARY KEY (id),
  UNIQUE KEY UK_username (username)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

Running the Application

When you run the application, Flyway will automatically check the current database version and apply any pending migrations.

Run the app by typing the following command in terminal –

mvn spring-boot:run

When running the app for the first time, you’ll see the following logs pertaining to Flyway, which say that it has migrated the schema to version 1 – init.

Flyway Migration Logs

How does Flyway manage migrations?

Flyway creates a table called flyway_schema_history when it runs the migration for the first time and stores all the meta-data required for versioning the migrations in this table.

mysql> show tables;
+-----------------------+
| Tables_in_flyway_demo |
+-----------------------+
| flyway_schema_history |
| users                 |
+-----------------------+
2 rows in set (0.00 sec)

You can check the contents of the flyway_schema_history table in your MySQL database –

mysql> select * from flyway_schema_history;
+----------------+---------+-------------+------+--------------+------------+--------------+---------------------+----------------+---------+
| installed_rank | version | description | type | script       | checksum   | installed_by | installed_on        | execution_time | success |
+----------------+---------+-------------+------+--------------+------------+--------------+---------------------+----------------+---------+
|              1 | 1       | init        | SQL  | V1__init.sql | 1952043475 | root         | 2018-03-06 11:25:58 |             16 |       1 |
+----------------+---------+-------------+------+--------------+------------+--------------+---------------------+----------------+---------+
1 row in set (0.00 sec)

It stores the current migration’s version, script file name, and checksum among other details in the table.

When you run the app, Flyway first validates the already applied migration scripts by calculating their checksum and matching it with the checksum stored in the meta-data table.

So, if you change V1__init.sql after the migration is applied, Flyway will throw an error saying that the checksum doesn’t match.

Therefore, for doing any changes to the schema, you need to create another migration script file with a new version V2__<NAME>.sql and let Flyway apply that when you run the app.

Adding multiple migrations

Let’s create another migration script and see how Flyway migrates the database to the new version automatically.

Create a new script V2__testdata.sql inside src/main/resources/db/migration with the following contents –

INSERT INTO users(username, first_name, last_name)
VALUES('fusebes', 'Yaniv', 'Levy');

INSERT INTO users(username, first_name, last_name)
VALUES('flywaytest', 'Flyway', 'Test');

If you run the app now, Flyway will detect the new migration script and migrate the database to this version.

Open MySQL and check the users table. You’ll see that the above two entries have been automatically created in the users table –

mysql> select * from users;
+----+------------+-----------+------------+
| id | first_name | last_name | username   |
+----+------------+-----------+------------+
|  4 | Yaniv      | Levy      | fusebes    |
|  5 | Flyway     | Test      | flywaytest |
+----+------------+-----------+------------+
2 rows in set (0.01 sec)

Also, Flyway stores this new schema version in its meta-data table –

mysql> select * from flyway_schema_history;
+----------------+---------+-------------+------+------------------+-------------+--------------+---------------------+----------------+---------+
| installed_rank | version | description | type | script           | checksum    | installed_by | installed_on        | execution_time | success |
+----------------+---------+-------------+------+------------------+-------------+--------------+---------------------+----------------+---------+
|              1 | 1       | init        | SQL  | V1__init.sql     |  1952043475 | root         | 2018-03-06 11:25:58 |             16 |       1 |
|              2 | 2       | testdata    | SQL  | V2__testdata.sql | -1926058189 | root         | 2018-03-06 11:25:58 |              6 |       1 |
+----------------+---------+-------------+------+------------------+-------------+--------------+---------------------+----------------+---------+
2 rows in set (0.00 sec)

Conclusion

That’s all, folks! In this article, you learned how to integrate Flyway in a Spring Boot application for versioning database changes.

Thank you for reading. See you in the next blog post.

Remove Duplicates from Sorted Array II

06 Feb 2021

Given a sorted array, remove the duplicates from the array in-place such that each element appears at most twice, and return the new length.

Do not allocate extra space for another array; you must do this by modifying the input array in-place with O(1) extra memory.

Example

Given array [1, 1, 1, 3, 5, 5, 7]

The output should be 6, with the first six elements of the array being [1, 1, 3, 5, 5, 7]

Remove Duplicates from Sorted Array II solution in Java

The problem can be solved in O(n) time complexity by using two pointers (indexes).

class RemoveDuplicatesSortedArrayII {
  private static int removeDuplicates(int[] nums) {
    int n = nums.length;

    /*
     * This index will move when we modify the array in-place to include an element
     * so that it is not repeated more than twice.
     */
    int j = 0;

    for (int i = 0; i < n; i++) {
      /*
       * If the current element is equal to the element at index i+2, then skip the
       * current element because it is repeated more than twice.
       */
      if (i < n - 2 && nums[i] == nums[i + 2]) {
        continue;
      }

      nums[j++] = nums[i];
    }

    return j;
  }

  public static void main(String[] args) {
    int[] nums = new int[] { 1, 1, 1, 3, 5, 5, 7 };
    int newLength = removeDuplicates(nums);

    System.out.println("Length of array after removing duplicates = " + newLength);

    System.out.print("Array = ");
    for (int i = 0; i < newLength; i++) {
      System.out.print(nums[i] + " ");
    }
    System.out.println();
  }
}
# Output
Length of array after removing duplicates = 6
Array = 1 1 3 5 5 7 

How to format Date and Time in Java

03 Mar 2021

In this article, you’ll learn how to format Date and Time represented using Date, LocalDate, LocalDateTime, or ZonedDateTime to a readable String in Java.

Format LocalDate using DateTimeFormatter

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class LocalDateFormatExample {
    public static void main(String[] args) {
        DateTimeFormatter dateTimeFormatter = DateTimeFormatter.ofPattern("dd/MM/yyyy");

        LocalDate localDate = LocalDate.of(2020, 1, 31);

        System.out.println(localDate.format(dateTimeFormatter));

    }
}
# Output
31/01/2020

Format LocalDateTime using DateTimeFormatter

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class LocalDateTimeFormatExample {
    public static void main(String[] args) {
        DateTimeFormatter dateTimeFormatter = DateTimeFormatter.ofPattern("E, MMM dd yyyy, hh:mm:ss a");

        LocalDateTime localDateTime = LocalDateTime.of(2020, 1, 31, 10, 45, 30);

        System.out.println(localDateTime.format(dateTimeFormatter));

    }
}
# Output
Fri, Jan 31 2020, 10:45:30 AM

Format ZonedDateTime using DateTimeFormatter

import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class ZonedDateTimeFormatExample {
    public static void main(String[] args) {
        DateTimeFormatter dateTimeFormatter = DateTimeFormatter.ofPattern("E, MMM dd yyyy, hh:mm:ss a (VV)");

        ZonedDateTime zonedDateTime = ZonedDateTime.of(
                LocalDateTime.of(2020, 1, 31, 10, 30, 45),
                ZoneId.of("America/New_York"));
        
        System.out.println(zonedDateTime.format(dateTimeFormatter));

    }
}
# Output
Fri, Jan 31 2020, 10:30:45 AM (America/New_York)

Format Date and Time using SimpleDateFormat

import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatExample {
    public static void main(String[] args) {
        SimpleDateFormat sdf = new SimpleDateFormat("dd/MM/yyyy");

        Date date = new Date();

        System.out.println(sdf.format(date));

    }
}
# Output
24/02/2020

Let’s see another example with a more complex pattern:

import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;

public class DateFormatExample {
    public static void main(String[] args) {
        SimpleDateFormat sdf = new SimpleDateFormat("E, MMM dd yyyy, hh:mm:ss a");

        Calendar calendar = Calendar.getInstance();
        calendar.set(2020, 1, 26, 15, 30, 45);

        Date date = calendar.getTime();

        System.out.println(sdf.format(date));

    }
}
# Output
Wed, Feb 26 2020, 03:30:45 PM

Clarified CQRS

28 Mar 2021

After listening to how the community has interpreted Command-Query Responsibility Segregation, I think the time has come for some clarification. Some have been tying it to Event Sourcing. Most have been overlaying their previous layered-architecture assumptions on it. Here I hope to identify CQRS itself, and describe where it can connect to other patterns.
This is quite a long post.
Why CQRS

Before describing the details of CQRS we need to understand the two main driving forces behind it: collaboration and staleness.
Collaboration refers to circumstances under which multiple actors will be using/modifying the same set of data – whether or not the intention of the actors is actually to collaborate with each other. There are often rules which indicate which user can perform which kind of modification, and modifications that may have been acceptable in one case may not be acceptable in others. We’ll give some examples shortly. Actors can be human, like normal users, or automated, like software.

Staleness refers to the fact that in a collaborative environment, once data has been shown to a user, that same data may have been changed by another actor – it is stale. Almost any system which makes use of a cache is serving stale data – often for performance reasons. What this means is that we cannot entirely trust our users’ decisions, as they could have been made based on out-of-date information.

Standard layered architectures don’t explicitly deal with either of these issues. While putting everything in the same database may be one step in the direction of handling collaboration, staleness is usually exacerbated in those architectures by the use of caches as a performance-improving afterthought.

A picture for reference

I’ve given some talks about CQRS using this diagram to explain it:

The boxes named AC are Autonomous Components. We’ll describe what makes them autonomous when discussing commands. But before we go into the complicated parts, let’s start with queries:

Queries

If the data we’re going to be showing users is stale anyway, is it really necessary to go to the master database and get it from there? Why transform those 3rd normal form structures to domain objects if we just want data – not any rule-preserving behaviors? Why transform those domain objects to DTOs to transfer them across a wire, and who said that wire has to be exactly there? Why transform those DTOs to view model objects?

In short, it looks like we’re doing a heck of a lot of unnecessary work based on the assumption that reusing code that has already been written will be easier than just solving the problem at hand. Let’s try a different approach:

How about we create an additional data store whose data can be a bit out of sync with the master database – I mean, the data we’re showing the user is stale anyway, so why not reflect that in the data store itself. We’ll come up with an approach later to keep this data store more or less in sync.

Now, what would be the correct structure for this data store? How about just like the view model? One table for each view. Then our client could simply SELECT * FROM MyViewTable (or possibly pass in an ID in a where clause), and bind the result to the screen. That would be just as simple as can be. You could wrap that up with a thin facade if you feel the need, or with stored procedures, or using AutoMapper which can simply map from a data reader to your view model class. The thing is that the view model structures are already wire-friendly, so you don’t need to transform them to anything else.

You could even consider taking that data store and putting it in your web tier. It’s just as secure as an in-memory cache in your web tier. Give your web servers SELECT-only permissions on those tables and you should be fine.

Query Data Storage

While you can use a regular database as your query data store, it isn’t the only option. Consider that the query schema is in essence identical to your view model.
You don’t have any relationships between your various view model classes, so you shouldn’t need any relationships between the tables in the query data store.

So do you actually need a relational database? The answer is no, but for all practical purposes and due to organizational inertia, it is probably your best choice (for now).

Scaling Queries

Since your queries are now being performed off of a separate data store than your master database, and there is no assumption that the data that’s being served is 100% up to date, you can easily add more instances of these stores without worrying that they don’t contain the exact same data. The same mechanism that updates one instance can be used for many instances, as we’ll see later.

This gives you cheap horizontal scaling for your queries. Also, since you’re not doing nearly as much transformation, the latency per query goes down as well. Simple code is fast code.

Data modifications

Since our users are making decisions based on stale data, we need to be more discerning about which things we let through. Here’s a scenario explaining why:

Let’s say we have a customer service representative who is on the phone with a customer. This user is looking at the customer’s details on the screen and wants to make them a ‘preferred’ customer, as well as modifying their address, changing their title from Ms to Mrs, changing their last name, and indicating that they’re now married. What the user doesn’t know is that after opening the screen, an event arrived from the billing department indicating that this same customer doesn’t pay their bills – they’re delinquent. At this point, our user submits their changes.

Should we accept their changes?

Well, we should accept some of them, but not the change to ‘preferred’, since the customer is delinquent. But writing those kinds of checks is a pain – we need to do a diff on the data, infer what the changes mean, which ones are related to each other (name change, title change) and which are separate, identify which data to check against – not just compared to the data the user retrieved, but compared to the current state in the database – and then reject or accept.

Unfortunately for our users, we tend to reject the whole thing if any part of it is off. At that point, our users have to refresh their screen to get the up-to-date data, and retype all the previous changes, hoping that this time we won’t yell at them because of an optimistic concurrency conflict.

As we get larger entities with more fields on them, we also get more actors working with those same entities, and the higher the likelihood that something will touch some attribute of them at any given time, increasing the number of concurrency conflicts.

If only there were some way for our users to provide us with the right level of granularity and intent when modifying data. That’s what commands are all about.

Commands

A core element of CQRS is rethinking the design of the user interface to enable us to capture our users’ intent, such that making a customer preferred is a different unit of work for the user than indicating that the customer has moved or that they’ve gotten married. Using an Excel-like UI for data changes doesn’t capture intent, as we saw above.

We could even consider allowing our users to submit a new command even before they’ve received confirmation on the previous one.
We could have a little widget on the side showing the user their pending commands, checking them off asynchronously as we receive confirmation from the server, or marking them with an X if they fail. The user could then double-click that failed task to find information about what happened.

Note that the client sends commands to the server – it doesn’t publish them. Publishing is reserved for events which state a fact – that something has happened, and that the publisher has no concern about what receivers of that event do with it.

Commands and Validation

In thinking through what could make a command fail, one topic that comes up is validation. Validation is different from business rules in that it states a context-independent fact about a command. Either a command is valid, or it isn’t. Business rules, on the other hand, are context dependent.

In the example we saw before, the data our customer service rep submitted was valid; it was only the billing event arriving earlier which required the command to be rejected. Had that billing event not arrived, the data would have been accepted.

Even though a command may be valid, there still may be reasons to reject it.

As such, validation can be performed on the client, checking that all fields required for that command are there, number and date ranges are OK, that kind of thing. The server would still validate all commands that arrive, not trusting clients to do the validation.

Rethinking UIs and commands in light of validation

The client can make use of the query data store when validating commands. For example, before submitting a command that the customer has moved, we can check that the street name exists in the query data store.

At that point, we may rethink the UI and have an auto-completing text box for the street name, thus ensuring that the street name we’ll pass in the command will be valid. But why not take things a step further? Why not pass in the street ID instead of its name? Have the command represent the street not as a string, but as an ID (int, guid, whatever).

On the server side, the only reason that such a command would fail would be due to concurrency – that someone had deleted that street and that this hadn’t been reflected in the query store yet; a fairly exceptional set of circumstances.

Reasons valid commands fail and what to do about it

So we’ve got a well-behaved client that is sending valid commands, yet the server still decides to reject them. Often the circumstances for the rejection are related to other actors changing state relevant to the processing of that command.

In the CRM example above, it is only because the billing event arrived first. But “first” could be a millisecond before our command. What if our user pressed the button a millisecond earlier? Should that actually change the business outcome? Shouldn’t we expect our system to behave the same when observed from the outside?

So, if the billing event arrived second, shouldn’t that revert preferred customers to regular ones? Not only that, but shouldn’t the customer be notified of this, like by sending them an email? In which case, why not have this be the behavior for the case where the billing event arrives first? And if we’ve already got a notification model set up, do we really need to return an error to the customer service rep?
I mean, it’s not like they can do anything about it other than notifying the customer.

So, if we’re not returning errors to the client (who is already sending us valid commands), maybe all we need to do on the client when sending a command is to tell the user “thank you, you will receive confirmation via email shortly”. We don’t even need the UI widget showing pending commands.

Commands and Autonomy

What we see is that in this model, commands don’t need to be processed immediately – they can be queued. How fast they get processed is a question of Service-Level Agreement (SLA) and not architecturally significant. This is one of the things that makes the node that processes commands autonomous from a runtime perspective – we don’t require an always-on connection to the client.

Also, we shouldn’t need to access the query store to process commands – any state that is needed should be managed by the autonomous component – that’s part of the meaning of autonomy.

Another part is the issue of failed message processing due to the database being down or hitting a deadlock. There is no reason that such errors should be returned to the client – we can just roll back and try again. When an administrator brings the database back up, all the messages waiting in the queue will then be processed successfully and our users receive confirmation.

The system as a whole is quite a bit more robust to any error conditions. Also, since we don’t have queries going through this database any more, the database itself is able to keep more rows/pages in memory which serve commands, improving performance. When both commands and queries were being served off of the same tables, the database server was always juggling rows between the two.

Autonomous Components

While in the picture above we see all commands going to the same AC, we could logically have each command processed by a different AC, each with its own queue. That would give us visibility into which queue was the longest, letting us see very easily which part of the system was the bottleneck. While this is interesting for developers, it is critical for system administrators.

Since commands wait in queues, we can now add more processing nodes behind those queues (using the distributor with NServiceBus) so that we’re only scaling the part of the system that’s slow. No need to waste servers on any other requests.

Service Layers

Our command processing objects in the various autonomous components actually make up our service layer. The reason you don’t see this layer explicitly represented in CQRS is that it isn’t really there, at least not as an identifiable logical collection of related objects – here’s why:

In the layered architecture (AKA 3-Tier) approach, there is no statement about dependencies between objects within a layer, or rather it is implied to be allowed. However, when taking a command-oriented view on the service layer, what we see are objects handling different types of commands.
Each command is independent of the others, so why should we allow the objects which handle them to depend on each other?

Dependencies are things which should be avoided, unless there is good reason for them.

Keeping the command handling objects independent of each other will allow us to more easily version our system, one command at a time, without even needing to bring down the entire system, given that the new version is backwards compatible with the previous one.

Therefore, keep each command handler in its own VS project, or possibly even in its own solution, thus guiding developers away from introducing dependencies in the name of reuse (it’s a fallacy). If you do decide, as a deployment concern, that you want to put them all in the same process feeding off of the same queue, you can ILMerge those assemblies and host them together, but understand that you will be undoing much of the benefits of your autonomous components.

Whither the domain model?

Although in the diagram above you can see the domain model beside the command-processing autonomous components, it’s actually an implementation detail. There is nothing that states that all commands must be processed by the same domain model. Arguably, you could have some commands be processed by transaction script, others using table module (AKA active record), as well as those using the domain model. Event-sourcing is another possible implementation.

Another thing to understand about the domain model is that it now isn’t used to serve queries. So the question is, why do you need to have so many relationships between entities in your domain model?

(You may want to take a second to let that sink in.)

Do we really need a collection of orders on the customer entity? In what command would we need to navigate that collection? In fact, what kind of command would need any one-to-many relationship? And if that’s the case for one-to-many, many-to-many would definitely be out as well. I mean, most commands only contain one or two IDs in them anyway.

Any aggregate operations that may have been calculated by looping over child entities could be pre-calculated and stored as properties on the parent entity. Following this process across all the entities in our domain would result in isolated entities needing nothing more than a couple of properties for the IDs of their related entities – “children” holding the parent ID, like in databases.

In this form, commands could be entirely processed by a single entity – voilà, an aggregate root that is a consistency boundary.

Persistence for command processing

Given that the database used for command processing is not used for querying, and that most (if not all) commands contain the IDs of the rows they’re going to affect, do we really need to have a column for every single domain object property? What if we just serialized the domain entity and put it into a single column, and had another column containing the ID? This sounds quite similar to key-value storage that is available in the various cloud providers.
In which case, would you really need an object-relational mapper to persist to this kind of storage?

You could also pull out an additional property per piece of data where you’d want the “database” to enforce uniqueness.

I’m not suggesting that you do this in all cases – rather just trying to get you to rethink some basic assumptions.

Let me reiterate

How you process the commands is an implementation detail of CQRS.

Keeping the query store in sync

After the command-processing autonomous component has decided to accept a command, modifying its persistent store as needed, it publishes an event notifying the world about it. This event often is the “past tense” of the command submitted:

MakeCustomerPreferredCommand -> CustomerHasBeenMadePreferredEvent

The publishing of the event is done transactionally together with the processing of the command and the changes to its database. That way, any kind of failure on commit will result in the event not being sent. This is something that should be handled by default by your message bus, and if you’re using MSMQ as your underlying transport, it requires the use of transactional queues.

The autonomous component which processes those events and updates the query data store is fairly simple, translating from the event structure to the persistent view model structure. I suggest having an event handler per view model class (AKA per table).

Here’s the picture of all the pieces again:
Bounded Contexts

While CQRS touches on many pieces of software architecture, it is still not at the top of the food chain. CQRS, if used, is employed within a bounded context (DDD) or a business component (SOA) – a cohesive piece of the problem domain. The events published by one BC are subscribed to by other BCs, each updating their query and command data stores as needed.

UIs from the CQRS found in each BC can be “mashed up” in a single application, providing users a single composite view on all parts of the problem domain. Composite UI frameworks are very useful for these cases.

Summary

CQRS is about coming up with an appropriate architecture for multi-user collaborative applications. It explicitly takes into account factors like data staleness and volatility and exploits those characteristics for creating simpler and more scalable constructs.

One cannot truly enjoy the benefits of CQRS without considering the user interface, making it capture user intent explicitly. When taking into account client-side validation, command structures may be somewhat adjusted. Thinking through the order in which commands and events are processed can lead to notification patterns which make returning errors unnecessary.

While the result of applying CQRS to a given project is a more maintainable and performant code base, this simplicity and scalability require understanding the detailed business requirements, and are not the result of any technical “best practice”. If anything, we can see a plethora of approaches to apparently similar problems being used together – data readers and domain models, one-way messaging and synchronous calls.