Kotlin Operators with Examples
07 Mar 2021

In the previous article, you learned how to create variables and what basic data types are available in Kotlin for creating variables.

In this article, you’ll learn about the various operators Kotlin provides to perform operations on these basic data types.

Operations on Numeric Types

Just like other languages, Kotlin provides various operators to perform computations on numbers –

  • Arithmetic operators (+, -, *, /, %)
  • Comparison operators (==, !=, <, >, <=, >=)
  • Assignment operators (+=, -=, *=, /=, %=)
  • Increment & Decrement operators (++, --)

Following are a few examples that demonstrate the usage of the above operators –

var a = 10
var b = 20
var c = ((a + b) * ( a + b))/2   // 450

var isALessThanB = a < b   // true

a++     // a now becomes 11
b += 5  // b equals to 25 now

Understanding how operators work in Kotlin

Everything in Kotlin is an object, even the basic data types like Int, Char, Double, Boolean etc. Kotlin doesn’t have separate primitive types and their corresponding boxed types like Java.

Note that Kotlin may represent basic types like Int, Char, Boolean etc. as primitive values at runtime to improve performance, but for the end users, all of them are objects.

Since all the data types are objects, the operations on these types are internally represented as function calls.

For example, the addition operation a + b between two numbers a and b is represented as a function call a.plus(b) –

var a = 4
var b = 5

println(a + b)

// equivalent to
println(a.plus(b))

All the operators that we looked at in the previous section have a symbolic name which is used to translate any expression containing those operators into the corresponding function calls –

Expression   Translates to
a + b        a.plus(b)
a - b        a.minus(b)
a * b        a.times(b)
a / b        a.div(b)
a % b        a.rem(b)
a++          a.inc()
a--          a.dec()
a > b        a.compareTo(b) > 0
a < b        a.compareTo(b) < 0
a += b       a.plusAssign(b)

You can check out other expressions and their corresponding function calls on Kotlin’s reference page.

The concept of translating such expressions to function calls enables operator overloading in Kotlin. For example, you can provide an implementation for the plus function in a class defined by you, and then you’ll be able to add objects of that class using the + operator like this – object1 + object2.

Kotlin will automatically convert the addition operation object1 + object2 into the corresponding function call object1.plus(object2) (Think of a ComplexNumber class with the + operator overloaded).
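
Here is a minimal sketch of such a class (the ComplexNumber below is hypothetical, written just for illustration):

data class ComplexNumber(val real: Double, val imaginary: Double) {
    // Implementing plus lets Kotlin translate c1 + c2 into c1.plus(c2)
    operator fun plus(other: ComplexNumber) =
        ComplexNumber(real + other.real, imaginary + other.imaginary)
}

fun main() {
    val c1 = ComplexNumber(3.0, 4.0)
    val c2 = ComplexNumber(1.0, 2.0)
    println(c1 + c2)  // Prints ComplexNumber(real=4.0, imaginary=6.0)
}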

You’ll learn more about operator overloading in a future article.

Note that the operations on basic types like Int, Char, Double, Boolean etc. are optimized and do not include the overhead of function calls.

Bitwise Operators

Unlike C, C++ and Java, Kotlin doesn’t have bitwise operators like | (bitwise or), & (bitwise and), ^ (bitwise xor), << (signed left shift), >> (signed right shift) etc.

For performing bitwise operations, Kotlin provides the following methods that work for Int and Long types –

  • shl – signed shift left (equivalent of << operator)
  • shr – signed shift right (equivalent of >> operator)
  • ushr – unsigned shift right (equivalent of >>> operator)
  • and – bitwise and (equivalent of & operator)
  • or – bitwise or (equivalent of | operator)
  • xor – bitwise xor (equivalent of ^ operator)
  • inv – bitwise complement (equivalent of ~ operator)

Here are a few examples demonstrating how to use the above functions –

1 shl 2   // Equivalent to 1.shl(2), Result = 4
16 shr 2  // Result = 4
2 and 4   // Result = 0
2 or 3    // Result = 3
4 xor 5   // Result = 1
4.inv()   // Result = -5

All the bitwise functions, except inv(), can be called using infix notation. The infix notation of 2.and(4) is 2 and 4. Infix notation allows you to write function calls in a more intuitive way.
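
Infix notation is not limited to built-in functions; you can declare your own infix functions as well. A small sketch (the pow function below is hypothetical, defined just for this example):

infix fun Int.pow(exponent: Int): Long {
    var result = 1L
    repeat(exponent) { result *= this }
    return result
}

fun main() {
    println(2 pow 10)    // Infix notation, prints 1024
    println(2.pow(10))   // Equivalent regular function call
}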

Operations on Boolean Types

Kotlin supports the following logical operators for performing operations on boolean types –

  • || – Logical OR
  • && – Logical AND
  • !   – Logical NOT

Here are a few examples of logical operators –

2 == 2 && 4 != 5  // true
4 > 5 && 2 < 7    // false
!(7 > 12 || 14 < 18)  // false

Logical operators are generally used in control flow statements like if, if-else, while etc., to test the validity of a condition.

Operations on Strings

String Concatenation

The + operator is overloaded for String types. It performs String concatenation –

var firstName = "Rajeev"
var lastName = "Singh"
var fullName = firstName + " " + lastName	// "Rajeev Singh"

String Interpolation

Kotlin has an amazing feature called String Interpolation. This feature allows you to directly insert a template expression inside a String. Template expressions are tiny pieces of code that are evaluated and their results are concatenated with the original String.

A template expression is prefixed with the $ symbol. Following is an example of String interpolation –

var a = 12
var b = 18
println("Avg of $a and $b is equal to ${ (a + b)/2 }") 

// Prints - Avg of 12 and 18 is equal to 15

If the template expression is a simple variable, you can write it like $variableName. If it is an expression then you need to insert it inside a ${} block.

Conclusion

That’s all folks! In this article, you learned about the various operators Kotlin provides to perform operations on Numbers, Booleans, and Strings. You also learned how expressions containing operators are translated to function calls internally.

As always, Thank you for reading.

Configuring Spring Boot to use Gson instead of Jackson
03 Mar 2021

Spring Boot uses Jackson by default for serializing and deserializing request and response objects in your REST APIs.

If you want to use Gson instead of Jackson, then it’s just a matter of adding the Gson dependency to your pom.xml file and specifying a property in the application.properties file to tell Spring Boot to use Gson as your preferred JSON mapper.

Force Spring Boot to use GSON instead of Jackson

1. Add Gson dependency

Open your pom.xml file and add the GSON dependency like so –

<!-- Include GSON dependency -->
<dependency>
	<groupId>com.google.code.gson</groupId>
	<artifactId>gson</artifactId>
	<version>2.8.4</version>
</dependency>

Once you do that, Spring Boot will detect the Gson dependency on the classpath and automatically create a Gson bean with sensible default configurations. You can also autowire Gson directly in your Spring components like so –

@Autowired
private Gson gson;
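
If your project happens to use Kotlin, constructor injection works just as well. Here is a hypothetical controller sketch (class name and endpoint are made up) that uses the auto-configured Gson bean:

import com.google.gson.Gson
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RestController

// Hypothetical component: Spring injects the auto-configured Gson bean
@RestController
class GreetingController(private val gson: Gson) {

    @GetMapping("/greeting")
    fun greeting(): String = gson.toJson(mapOf("message" to "Hello, World"))
}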

If you’re curious how Spring Boot does that, then take a look at this GsonAutoConfiguration class. Notice how it uses @ConditionalOnClass(Gson.class) annotation to trigger the auto-configuration when Gson is available on the classpath.

Jackson is also configured in a similar fashion with JacksonAutoConfiguration class.

2. Set the preferred json mapper to gson

You can now ask Spring Boot to use Gson as your preferred json mapper by specifying the following property in the application.properties file –

# Preferred JSON mapper to use for HTTP message conversion.
spring.http.converters.preferred-json-mapper=gson

That’s all you need to do to force Spring Boot to use Gson instead of Jackson.

Configure GSON in Spring Boot

Now that your Spring Boot application is using Gson, you can configure it by specifying various properties in the application.properties file. The following properties are taken from the Spring Boot Common Application Properties index page –

# GSON (GsonProperties)

# Format to use when serializing Date objects.
spring.gson.date-format= 

# Whether to disable the escaping of HTML characters such as '<', '>', etc.
spring.gson.disable-html-escaping= 

# Whether to exclude inner classes during serialization.
spring.gson.disable-inner-class-serialization= 

# Whether to enable serialization of complex map keys (i.e. non-primitives).
spring.gson.enable-complex-map-key-serialization= 

# Whether to exclude all fields from consideration for serialization or deserialization that do not have the "Expose" annotation.
spring.gson.exclude-fields-without-expose-annotation= 

# Naming policy that should be applied to an object's field during serialization and deserialization.
spring.gson.field-naming-policy= 

# Whether to generate non executable JSON by prefixing the output with some special text.
spring.gson.generate-non-executable-json= 

# Whether to be lenient about parsing JSON that doesn't conform to RFC 4627.
spring.gson.lenient= 

# Serialization policy for Long and long types.
spring.gson.long-serialization-policy= 

# Whether to output serialized JSON that fits in a page for pretty printing.
spring.gson.pretty-printing= 

# Whether to serialize null fields.
spring.gson.serialize-nulls= 

All the above properties are bound to a class called GsonProperties defined in Spring Boot. The GsonAutoConfiguration class uses these properties to configure Gson.
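
For example, to enable pretty printing, serialize null fields, and set a date format (illustrative values; pick whatever your application needs):

# Illustrative Gson configuration
spring.gson.pretty-printing=true
spring.gson.serialize-nulls=true
spring.gson.date-format=yyyy-MM-dd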

Excluding Jackson completely

If you want to get rid of Jackson completely then you can exclude it from spring-boot-starter-web dependency in the pom.xml file like so –

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
	<!-- Exclude the default Jackson dependency -->
	<exclusions>
		<exclusion>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-json</artifactId>
		</exclusion>
	</exclusions>
</dependency>

I hope you enjoyed this article. Thanks for reading. See you next time!

What are Spring Boot Common Annotations?
30 Mar 2021

The most commonly used and important Spring Boot annotations are listed below:

  • @EnableAutoConfiguration – enables Spring Boot’s auto-configuration mechanism.
  • @ComponentScan – enables component scanning on the application classpath.
  • @SpringBootApplication – enables all three of the above in one step, i.e. it enables the auto-configuration mechanism, enables component scanning, and allows registering extra beans in the context.
  • @ImportAutoConfiguration – imports and applies only the specified auto-configuration classes.
  • @AutoConfigureBefore, @AutoConfigureAfter, @AutoConfigureOrder – used if the configuration needs to be applied in a specific order (before or after).
  • @Conditional – annotations such as @ConditionalOnBean, @ConditionalOnWebApplication or
    @ConditionalOnClass allow a bean to be registered only when the condition is met.
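
As a quick illustration, here is a minimal Spring Boot entry point in Kotlin that relies on @SpringBootApplication (the class name is hypothetical, and a standard Kotlin Spring setup with the kotlin-spring compiler plugin is assumed):

import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.runApplication

// Combines @EnableAutoConfiguration, @ComponentScan and @Configuration
@SpringBootApplication
class DemoApplication

fun main(args: Array<String>) {
    runApplication<DemoApplication>(*args)
}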

Nullable Types and Null Safety in Kotlin
07 Mar 2021

If you have been programming in Java or any other language that has the concept of null references, then you must have heard about or experienced NullPointerExceptions in your programs.

NullPointerExceptions are runtime exceptions thrown by the program at runtime, causing application failures and system crashes.

Wouldn’t it be nice if we could detect possible NullPointerException errors at compile time itself and guard against them?

Well, Enter Kotlin!

Nullability and Nullable Types in Kotlin

Kotlin supports nullability as part of its type system. That means you have the ability to declare whether a variable can hold a null value or not.

By supporting nullability in the type system, the compiler can detect possible NullPointerException errors at compile time and reduce the possibility of having them thrown at runtime.

Let’s understand how it works!

All variables in Kotlin are non-nullable by default. So if you try to assign a null value to a regular variable, the compiler will throw an error –

var greeting: String = "Hello, World"
greeting = null // Compilation Error

To allow null values, you have to declare a variable as nullable by appending a question mark to its type declaration –

var nullableGreeting: String? = "Hello, World"
nullableGreeting = null // Works

We know that NullPointerException occurs when we try to call a method or access a property on a variable which is null. Kotlin disallows method calls and property access on nullable variables and thereby prevents many possible NullPointerExceptions.

For example, the following method access works because Kotlin knows that the variable greeting can never be null –

val len = greeting.length 
val upper = greeting.toUpperCase() 

But the same method call won’t work with nullableGreeting variable –

val len = nullableGreeting.length // Compilation Error
val upper = nullableGreeting.toUpperCase()  // Compilation Error

Since Kotlin knows beforehand which variables can be null and which cannot, it can detect and disallow calls that could result in a NullPointerException at compile time itself.

Working with Nullable Types

All right, it’s nice that Kotlin disallows method calls and property access on nullable variables to guard against NullPointerException errors. But we still need to call methods and access properties on those variables, right?

Well, there are several ways of safely doing that in Kotlin.

1. Adding a null Check

The most trivial way to work with nullable variables is to perform a null check before accessing a property or calling a method on them –

val nullableName: String? = "John"

if(nullableName != null) {
    println("Hello, ${nullableName.toUpperCase()}.")
    println("Your name is ${nullableName.length} characters long.")
} else {
    println("Hello, Guest")
}

Once you perform a null comparison, the compiler remembers that and allows calls to toUpperCase() and length inside the if branch.

2. Safe call operator: ?.

Null comparisons are simple but too verbose. Kotlin provides a safe call operator, ?., that reduces this verbosity. It allows you to combine a null check and a method call in a single expression.

For example, the following expression –

nullableName?.toUpperCase()

is the same as –

if(nullableName != null) 
    nullableName.toUpperCase()
else
    null    

Wow! That saves a lot of keystrokes, right? 🙂

So if you were to print the name in uppercase and its length safely, you could do the following –

val nullableName: String? = null

println(nullableName?.toUpperCase())
println(nullableName?.length)
// Prints 
null
null

That printed null since the variable nullableName is null; otherwise, it would have printed the name in uppercase and its length.

But what if you don’t want to print anything if the variable is null?

Well, to perform an operation only if the variable is not null, you can use the safe call operator with let –

val nullableName: String? = null

nullableName?.let { println(it.toUpperCase()) }
nullableName?.let { println(it.length) }

// Prints nothing

The lambda expression inside let is executed only if the variable nullableName is not null.

That’s great, but that’s not all. The safe call operator is even more powerful than you think. For example, you can chain multiple safe calls like this –

val currentCity: String? = user?.address?.city

The variable currentCity will be null if any of user, address, or city is null. (Imagine doing that using null checks; the sketch below shows the equivalent.)
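
For comparison, here is roughly what that chained expression expands to with explicit null checks (the User and Address classes below are hypothetical, with nullable properties):

class Address(val city: String?)
class User(val address: Address?)

fun currentCityOf(user: User?): String? {
    // Equivalent of: user?.address?.city
    if (user != null) {
        val address = user.address
        if (address != null) {
            return address.city
        }
    }
    return null
}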

3. Elvis operator: ?:

The Elvis operator is used to provide a default value when the original variable is null –

val name = nullableName ?: "Guest"

The above expression is the same as –

val name = if(nullableName != null) nullableName else "Guest"

In other words, the Elvis operator takes two values and returns the first value if it is not null; otherwise, it returns the second value.

The Elvis operator is often used with Safe call operator to provide a default value other than null when the variable on which a method or property is called is null –

val len = nullableName?.length ?: -1

You can have more complex expressions on the left side of Elvis operator –

val currentCity = user?.address?.city ?: "Unknown"

Moreover, you can use throw and return expressions on the right side of the Elvis operator. This is very useful for checking preconditions in a function. So instead of providing a default value on the right side of the Elvis operator, you can throw an exception like this –

val name = nullableName ?: throw IllegalArgumentException("Name can not be null")
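
Similarly, here is a minimal sketch of using return on the right side of the Elvis operator to exit a function early:

fun printName(nullableName: String?) {
    val name = nullableName ?: return  // Exits the function if nullableName is null
    println(name.toUpperCase())
}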

4. Not-null assertion operator: !!

The !! operator converts a nullable type to a non-null type, and throws a NullPointerException if the nullable type holds a null value.

So it’s a way of asking for a NullPointerException explicitly. Please don’t use this operator.

val nullableName: String? = null
nullableName!!.toUpperCase() // Results in NullPointerException

Null Safety and Java Interoperability

Kotlin is fully interoperable with Java but Java doesn’t support nullability in its type system. So what happens when you call Java code from Kotlin?

Well, Java types are treated specially in Kotlin. They are called platform types. Since Kotlin doesn’t have any information about the nullability of a type declared in Java, it relaxes compile-time null checks for these types.

So you don’t get any null safety guarantee for types declared in Java, and you have full responsibility for the operations you perform on these types. The compiler will allow all operations. If you know that a Java variable can be null, you should compare it with null before use; otherwise, just like in Java, you’ll get a NullPointerException at runtime if the value is null.

Consider the following User class declared in Java –

public class User {
    private final String name;

    public User(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }
}

Since Kotlin doesn’t know about the nullability of the member variable name, it allows all operations on this variable. You can treat it as nullable or non-nullable, but the compiler won’t enforce anything.

In the following example, We simply treat the variable name as non-nullable and call methods and properties on it –

val javaUser = User(null)

println(javaUser.name.toUpperCase()) // Allowed (Throws NullPointerException)
println(javaUser.name.length) // Allowed (Throws NullPointerException)

The other option is to treat the member variable name as nullable and use the safe call operator for calling methods or accessing properties –

val javaUser = User(null)

println(javaUser.name?.toUpperCase()) // Allowed (Prints null)
println(javaUser.name?.length) // Allowed (Prints null)

Nullability Annotations

Although Java doesn’t support nullability in its type system, you can use annotations like @Nullable and @NotNull provided by external packages like javax.validation.constraints, org.jetbrains.annotations etc. to mark a variable as nullable or not-null.

The Java compiler doesn’t use these annotations, but they are used by IDEs, ORM libraries and other external tools to provide assistance when working with null values.

Kotlin also respects these annotations when they are present in Java code. Java types which have these nullability annotations are represented as actual nullable or non-null Kotlin types instead of platform types.

Nullability and Collections

Kotlin’s collection API is built on top of Java’s collection API but it fully supports nullability on Collections.

Just as regular variables are non-null by default, a normal collection also can’t hold null values –

val regularList: List<Int> = listOf(1, 2, null, 3) // Compiler Error

1. Collection of Nullable Types

Here is how you can declare a Collection of Nullable Types in Kotlin –

val listOfNullableTypes: List<Int?> = listOf(1, 2, null, 3) // Works

To filter non-null values from a list of nullable types, you can use the filterNotNull() function –

val notNullList: List<Int> = listOfNullableTypes.filterNotNull()

2. Nullable Collection

Note that there is a difference between a collection of nullable types and a nullable collection.

A collection of nullable types can hold null values but the collection itself cannot be null –

var listOfNullableTypes: List<Int?> = listOf(1, 2, null, 3) // Works
listOfNullableTypes = null // Compilation Error

You can declare a nullable collection like this –

var nullableList: List<Int>? = listOf(1, 2, 3)
nullableList = null // Works

3. Nullable Collection of Nullable Types

Finally, you can declare a nullable collection of nullable types like this –

var nullableListOfNullableTypes: List<Int?>? = listOf(1, 2, null, 3) // Works
nullableListOfNullableTypes = null // Works

Conclusion

That’s all in this article, folks. I hope you understood how Kotlin helps you avoid NullPointerException errors with its nullable type concept.

Thanks for reading. See you in the next post.

Kotlin Properties, Backing Fields, Getters and Setters
07 Mar 2021

You can declare properties inside Kotlin classes in the same way you declare any other variable. These properties can be mutable (declared using var) or immutable (declared using val).

Here is an example –

// User class with a Primary constructor that accepts three parameters
class User(_id: Int, _name: String, _age: Int) {
    // Properties of User class
    val id: Int = _id         // Immutable (Read only)
    var name: String = _name  // Mutable
    var age: Int = _age       // Mutable
}

You can get or set the properties of an object using the dot(.) notation like so –

val user = User(1, "Jack Sparrow", 44)

// Getting a Property
val name = user.name

// Setting a Property
user.age = 46

// You cannot set read-only properties
user.id = 2	// Error: Val cannot be assigned

Getters and Setters

Kotlin internally generates a default getter and setter for mutable properties, and a getter (only) for read-only properties.

It calls these getters and setters internally whenever you access or modify a property using the dot(.) notation.

Here is what the User class that we saw in the previous section looks like with the default getters and setters –

class User(_id: Int, _name: String, _age: Int) {
    val id: Int = _id
        get() = field
    
    var name: String = _name
        get() = field
        set(value) {
            field = value
        }
    
    var age: Int = _age
        get() = field
        set(value) {
            field = value
        }
}

You might have noticed two strange identifiers in all the getter and setter methods – field and value.

Let’s understand what these identifiers are for –

1. value

We use value as the name of the setter parameter. This is the default convention in Kotlin but you’re free to use any other name if you want.

The value parameter contains the value that a property is assigned to. For example, when you write user.name = "Bill Gates", the value parameter contains the assigned value “Bill Gates”.

2. Backing Field (field)

The backing field helps you refer to the property inside the getter and setter methods. This is required because if you use the property directly inside the getter or setter, you’ll run into a recursive call which will generate a StackOverflowError.

Let’s see that in action. Let’s modify the User class and refer to the properties directly inside the getters and setters instead of using the field identifier –

class User(_name: String, _age: Int) {
    var name: String = _name
        get() = name		  // Calls the getter recursively

    var age: Int = _age
        set(value) {
            age = value		  // Calls the setter recursively
        }
}

fun main(args: Array<String>) {
    val user = User("Jack Sparrow", 44)

    // Getting a Property
    println("${user.name}") // StackOverflowError

    // Setting a Property
    user.age = 45           // StackOverflowError

}

The above program will generate a StackOverflowError due to the recursive calls in the getter and setter methods. This is why Kotlin has the concept of backing fields: a backing field makes it possible to store the property value in memory. When you initialize a property with a value in the class, the initializer value is written to the backing field of that property –

class User(_id: Int) {
    val id: Int	= _id   // The initializer value is written to the backing field of the property
}

A backing field is generated for a property if

  • a custom getter/setter references it through the field identifier, or
  • it uses the default implementation of at least one of the accessors (getter or setter). (Remember, the default getter and setter reference the field identifier themselves.)

For example, a backing field is not generated for the id property in the following class because it neither uses the default implementation of the getter nor refers to the field identifier in the custom getter –

class User() {
    val id	// No backing field is generated
        get() = 0
}

Since a backing field is not generated, you won’t be able to initialize the above property like so –

class User(_id: Int) {
    val id: Int = _id	//Error: Initialization not possible because no backing field is generated which could store the initialized value in memory
        get() = 0
}

Creating Custom Getters and Setters

You can ditch Kotlin’s default getter/setter and define a custom getter and setter for the properties of your class.

Here is an example –

class User(_id: Int, _name: String, _age: Int) {
    val id: Int = _id

    var name: String = _name 
        // Custom Getter
        get() {     
            return field.toUpperCase()
        }     

    var age: Int = _age
        // Custom Setter
        set(value) {
            field = if(value > 0) value else throw IllegalArgumentException("Age must be greater than zero")
        }
}

fun main(args: Array<String>) {
    val user = User(1, "Jack Sparrow", 44)

    println("${user.name}") // JACK SPARROW

    user.age = -1           // Throws IllegalArgumentException: Age must be greater than zero
}

Conclusion

Thanks for reading, folks. In this article, you learned how Kotlin’s getters and setters work and how to define custom getters and setters for the properties of a class.

Micro-frontends: The path to a scalable future — part 1
26 Mar 2021

Introduction

We have all heard the term microservices and perhaps have worked with them in the backend world. Before microservices, there were monolith applications. Back then, team and application growth was leading monolithic applications into a non-scalable dead-end. The codebase was growing, and technologies were getting older. All of this has made migrations and upgrades incredibly painful and frustrating.

An increase in project growth meant that multiple teams would unwittingly create further bottlenecks and inter-team dependencies. Deployments and releases were being micro-managed, but the growing amount of unmanaged technical debt would cause problems further down the line.

The same problems were present in the front-end world too. Again, you could pick the fanciest framework/library of the time and build your application with the best intentions. However, several years later, your tiny little application will inevitably become huge and force you to increase the size of your team. Eventually, this work will be split between multiple teams that will find themselves spending hours defining a release management schedule.

It’s around this time that you will hear about another cool library trending in the industry. Exploring the complexities of how to migrate to a new library while maintaining the old one typically leads to paranoia and eventually makes you give up on the idea because it’s just too much hassle. But you are also faced with a shrinking number of engineers who are willing to work with deprecated technologies. The chosen path has led you to a non-scalable dead-end.

A path to scalability

Try to imagine what the outcome would have looked like had we bravely decided to choose a different path. First, we should know that everything comes with a price, and with great power comes great responsibility. That power is the implementation of micro-frontends, which is

an architectural pattern to develop and maintain web application as a composition of small front-end applications

With back-end microservices, we had completely separate services with their separate DB and API. Microservices can be divided by business logic. For example, in an e-commerce app, we can have different microservices for a product catalog, user’s shopping cart, orders, etc. Without micro-frontend applications, it would have been a monolithic front-end application communicating with different back-end microservices (Fig. 1).

Figure 1. Application with monolith front-end and back-end microservices.

By splitting up the front-end into micro-frontends, we can align front-end and back-end architecture approaches and have a more robust and elegant full-stack architecture (Fig. 2).

Figure 2. End-to-end front-end and back-end microservices.

With this approach, we can reach several significant benefits:

  • Team and technology autonomy: each team would have its own mission on the product and could select its own technology stack;
  • Small, maintainable, and de-coupled codebases: each team’s codebase would remain small and isolated from others’ codebases;
  • Independent release management: each team would be able to independently release its own part of the application saving a lot of time on inter-team communication and release schedule;
  • Painless upgrades and migrations: having small codebases and being free of inter-team dependency would lead to painless and independent upgrades of application technologies as well as migrations from the old one to a new one;

With the mentioned benefits, there are also challenges attached to implementing the micro-frontends architecture. There will still remain some issues and concerns that will be shared between teams such as web performance, a common design system, etc.

Even though the idea behind micro-services and micro-frontends is very similar, the implementation challenges are slightly different, and we will discuss them in the next section.

Micro-frontends integration concepts

There are three main concepts and problems to be taken into consideration when implementing the micro-frontend architecture.

Routing and page transition

This is one of the most important concepts. Regardless of how many micro-frontends are implemented in our application, it is crucial to handle transitions between micro-frontends (Fig. 3) properly. That said, users must never notice a switch from one micro-frontend application to another when navigating in the application.

Figure 3. The transition from one micro-frontend to another.

Page transition, which is also known as routing, can be either server or client-side. Let’s briefly go over each of them.

  • Server routing: on each transition (usually triggered by a click), there is a request to a server to obtain and serve the whole HTML document to the user. This method eventually leads to a full page reload, which is also known as a hard transition. It has several pros and cons and was a traditional way to handle transitions before the client-side routing evolution.
  • Client routing: on each transition, only some part of the application is reloaded and operated by JavaScript in the browser on the client-side. Compared to server routing, there is no full page reload, which is why this is also known as a soft transition. It brings a much better user experience and is implemented in almost all modern web applications.

There is an excellent article that covers server and client routing in more detail. With micro-frontends, the problem has various solutions, ranging from simple link transitions (the way routing was handled in the 90’s and early 2000’s) to multi-layered client-side routing (a very modern way of having one top-level routing for the whole application, and several low-level routings per micro-frontend). The solutions differ in implementation complexity, so the selection should be made based on the needs of our application.

Composition

It might be necessary to have a page owned by one team, but it will include fragments from other teams. This means we should consider the composition of multiple micro-frontends within each other (Fig. 4). For example, we could have an e-commerce web application and a product detail page owned by one team. We should also show the shopping cart content, which is owned by another team.

Figure 4. Composition of multiple micro-frontends.

Composition is a rendering concern which, similarly to routing, can be client and/or server-side:

  • Server-side rendering (SSR): with SSR or Universal rendering, our page document is being constructed and rendered on the server and served to the user. It is beneficial on the initial page load of an application as it is much more performant than rendering it on the client-side.
  • Client-side rendering (CSR): With CSR, our application page content is being constructed, rendered, and loaded on the client-side, operated by JavaScript. This type of rendering is used with all modern libraries and frameworks.

There are many cases when having just SSR or CSR will not be enough to meet our app needs (for example, when we need good SEO, a smooth user experience, and a fast content load). In these cases, the application uses both server and client-side rendering.

In most cases, the most relevant and essential content is rendered server-side. It is served to the user first, and afterward, the rest of the more dynamic content is handed over to JavaScript, which renders it on the client-side. There is much more to say about CSR and SSR, but it is out of scope for now. I highly recommend reading this article to learn more about these concepts and differences.

Tip: Again, most of the applications use both client and server-side rendering. But there is an easy life-hack that can help you check which parts of an application are rendered on the server-side and which parts on the client-side. Simply go to your browser settings and turn off JavaScript. Then open your desired website, and you will see the content which is loaded on the server-side. Then by turning on JS again, reloading the page will allow JS to render the other parts handed over to the client-side.

As with routing, there are also many different techniques, both server-side (SSI, Zalando, Podium, etc.) and client-side (iframe, Ajax, Web Components, etc.), to handle micro-frontends composition. Which of these to go with depends on our application requirements.

Communication

The last concept of micro-frontend integration is the communication between our micro-frontends (Fig. 5).

Figure 5. Communication between micro-frontends.

Remember that we can have multiple micro-frontends on one page, and interaction with one can eventually lead to others’ changes. For example, let’s imagine we are on the product detail page of an e-commerce application, and we want to add a product to our shopping cart.

On the header (which is another micro-frontend fragment application), there is an icon of our cart, which also has an indicator — the number of added items. Adding the product on the details page should also update the number of items on our header’s shopping cart icon. There can be multiple communication scenarios like parent to fragment, fragment to parent, as well as fragment to fragment, and each communication is being handled differently.

  • Parent to fragment: This case is similar to handling communication between the parent component to the child components. It can be done by passing the data to children via props/attributes. So if our fragment/child is implemented with web components technology, it should be getting some attributes when they are being loaded, and data change in parent micro-frontend can trigger changed data passing to the fragment/child micro-frontends;
  • Fragment to parent: In this case, we can use the browser’s native CustomEvents API. So, on the parent side, there can be a subscribed event listener, while inside a fragment, we emit/publish an event on data change. The event emitter will bubble up the data to the listener on the parent side, which will then be handled on the parent micro-frontend.
  • Fragment to fragment: In this case, we can use the combination of the previous two techniques by emitting events to the parent fragment, which will listen to the changes and pass them to another fragment via props/attributes.

Another way of micro-frontends communication is Broadcast Channel API, which is an implementation of the Publisher/Subscriber design pattern just like the Custom Events API.

What’s next?

So, we have learned how beneficial micro-frontends can be and what problems they can solve. We also learned what challenges and problems we might face when implementing the micro-frontend architecture. We saw that based on the described three concepts, some various technologies and patterns could be used to achieve a full micro-frontend integration.

This should be enough to understand the main concepts and the big picture of micro-frontend architecture. This article is the first part of the series of 3 articles. The next article will be focused on high-level micro-frontend architecture types starting from the simplest and ending with the most complex one. We will also learn the concepts that will help us go with the architecture type that best fits our needs. So, stay tuned.

What are Spring Boot Starter Dependencies?
30 Mar 2021

Spring Boot starters are Maven dependency descriptors that contain a collection of all the relevant transitive dependencies needed to start a particular functionality.

For example, if we wanted to create a Spring WebMVC application with a traditional setup, we would have to include all the required dependencies ourselves. That leaves room for version conflicts, which ultimately result in more runtime exceptions.

With Spring Boot, to create a web MVC application, all we need to import is the spring-boot-starter-web dependency. Transitively, it brings in all the other dependencies required to build a web application, e.g. spring-webmvc, spring-web, hibernate-validator, tomcat-embed-core, tomcat-embed-el, tomcat-embed-websocket, jackson-databind, jackson-datatype-jdk8, jackson-datatype-jsr310 and jackson-module-parameter-names.

<dependency>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-starter-web</artifactId>
</dependency>

Advantages and Disadvantages of Kafka
31 Mar 2021

1. Advantages and Disadvantages of Kafka

Today, we will discuss the advantages and disadvantages of Kafka. It is very important to know the limitations of any technology before using it, and the same goes for its advantages. So, let’s discuss Kafka’s advantages and disadvantages in detail.


2. Advantages of Kafka

So, here we list some of the advantages of Kafka. Basically, these advantages make Kafka ideal for a data lake implementation. Let’s go through the advantages of Kafka in detail:


a. High-throughput
Even without very large hardware, Kafka is capable of handling high-velocity and high-volume data. It is also able to support a message throughput of thousands of messages per second.
b. Low Latency
It is capable of handling these messages with the very low latency, in the range of milliseconds, demanded by most new use cases.
c. Fault-Tolerant
One of the best advantages is fault tolerance. Kafka is inherently resistant to node/machine failure within a cluster.
d. Durability
Here, durability refers to the persistence of data/messages on disk. Message replication is one of the reasons behind this durability; hence, messages are never lost.
e. Scalability
Kafka can be scaled out on the fly by adding additional nodes, without incurring any downtime. Moreover, message handling inside the Kafka cluster is fully transparent and seamless.
f. Distributed
The distributed architecture of Kafka makes it scalable, using capabilities like replication and partitioning.
g. Message Broker Capabilities
Kafka tends to work very well as a replacement for a more traditional message broker. Here, a message broker refers to an intermediary program that translates messages from the formal messaging protocol of the publisher to the formal messaging protocol of the receiver.
h. High Concurrency
Kafka is able to handle thousands of messages per second in low-latency conditions with high throughput. In addition, it permits reading and writing messages at high concurrency.
i. Persistent by Default
As discussed above, messages are persisted, which makes Kafka durable and reliable.
j. Consumer Friendly
It is possible to integrate Kafka with a variety of consumers. The best part of Kafka is that it can behave or act differently according to the consumer it integrates with, because each consumer has a different ability to handle the messages coming out of Kafka. Moreover, Kafka integrates well with a variety of consumers written in a variety of languages.
k. Batch Handling Capable (ETL-like functionality)
Kafka can also be employed for batch-like use cases and can do the work of a traditional ETL, due to its capability of persisting messages.
l. Variety of Use Cases
It is able to manage the variety of use cases commonly required for a data lake, for example log aggregation, web activity tracking, and so on.
m. Real-Time Handling
Kafka can handle real-time data pipelines. Since we need a technology piece to handle real-time messages from applications, this is one of the core reasons for choosing Kafka.

3. Disadvantages of Kafka


It is good to know Kafka’s limitations, even if its advantages appear more prominent than its disadvantages. However, consider using it only when the advantages are too compelling to omit. Also note that some disadvantages might be more relevant for a particular use case but not really apply to ours. So, here we list some of the disadvantages associated with Kafka:
a. No Complete Set of Monitoring Tools
Kafka lacks a full set of management and monitoring tools. Hence, enterprise support staff can feel anxious or fearful about choosing Kafka and supporting it in the long run.
b. Issues with Message Tweaking
As we know, the broker uses certain system calls to deliver messages to the consumer. However, Kafka’s performance reduces significantly if a message needs some tweaking. It performs quite well when the message is unchanged, because it can use the capabilities of the system.
c. No Support for Wildcard Topic Selection
Kafka only matches the exact topic name; it does not support wildcard topic selection. This makes it incapable of addressing certain use cases.
d. Lack of Pace
There can be a problem because of a lack of pace, as the APIs needed by other languages are maintained by different individuals and corporations.
e. Reduces Performance
In general, there are no issues with the individual message size. However, the brokers and consumers start compressing messages as the size increases. Due to this, when decompressed, node memory gets slowly used up. Compression also happens as data flows through the pipeline, which affects throughput and performance.
f. Behaves Clumsily
Sometimes, Kafka starts behaving a bit clumsily and slowly when the number of queues in a cluster increases.
g. Lacks Some Messaging Paradigms
Some messaging paradigms are missing in Kafka, such as request/reply and point-to-point queues. This is not always an issue, but it can be problematic for certain use cases.
So, this was all about the advantages and disadvantages of Kafka. Hope you like our explanation.

4. Conclusion: Advantages and Disadvantages of Kafka

Hence, we have seen all the advantages and disadvantages of Kafka in detail, which should help you a lot before using it. However, if any doubt occurs regarding Kafka’s pros and cons, feel free to ask through the comment section.

Composite Primary Keys in JPA
14 May 2021

1. Introduction

In this tutorial, we’ll learn about Composite Primary Keys and the corresponding annotations in JPA.

2. Composite Primary Keys

A composite primary key – also called a composite key – is a combination of two or more columns to form a primary key for a table.

In JPA, we have two options for defining composite keys: the @IdClass and @EmbeddedId annotations.

In order to define the composite primary keys, we should follow some rules:

  • The composite primary key class must be public
  • It must have a no-arg constructor
  • It must define equals() and hashCode() methods
  • It must be Serializable

3. The IdClass Annotation

Let’s say we have a table called Account and it has two columns – accountNumber, accountType – that form the composite key. Now we have to map it in JPA.

As per the JPA specification, let’s create an AccountId class with these primary key fields:

public class AccountId implements Serializable {
    private String accountNumber;

    private String accountType;

    // default constructor

    public AccountId(String accountNumber, String accountType) {
        this.accountNumber = accountNumber;
        this.accountType = accountType;
    }

    // equals() and hashCode()
}

Next, let’s associate the AccountId class with the entity Account.

In order to do that, we need to annotate the entity with the @IdClass annotation. We must also declare the fields from the AccountId class in the entity Account and annotate them with @Id:

@Entity
@IdClass(AccountId.class)
public class Account {
    @Id
    private String accountNumber;

    @Id
    private String accountType;

    // other fields, getters and setters
}

4. The EmbeddedId Annotation

@EmbeddedId is an alternative to the @IdClass annotation.

Let’s consider another example where we have to persist some information of a Book with title and language as the primary key fields.

In this case, the primary key class, BookId, must be annotated with @Embeddable:

@Embeddable
public class BookId implements Serializable {
    private String title;
    private String language;

    // default constructor

    public BookId(String title, String language) {
        this.title = title;
        this.language = language;
    }

    // getters, equals() and hashCode() methods
}

Then, we need to embed this class in the Book entity using @EmbeddedId:

@Entity
public class Book {
    @EmbeddedId
    private BookId bookId;

    // constructors, other fields, getters and setters
}
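
In both cases, an instance of the key class is what we pass as the identifier when looking up an entity. Here is a minimal Kotlin sketch, assuming a configured EntityManager (em) and the entities above; the key values are made up:

import javax.persistence.EntityManager

fun lookupExamples(em: EntityManager) {
    // With @IdClass, the identifier is an AccountId instance
    val account: Account? = em.find(Account::class.java, AccountId("1234567890", "SAVINGS"))

    // With @EmbeddedId, the identifier is the embedded BookId instance
    val book: Book? = em.find(Book::class.java, BookId("War and Peace", "English"))
}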

5. @IdClass vs @EmbeddedId

As we just saw, the difference on the surface between these two is that with @IdClass, we had to specify the columns twice – once in AccountId and again in Account. But, with @EmbeddedId we didn’t.

There are some other tradeoffs, though.

For example, these different structures affect the JPQL queries that we write.

For example, with @IdClass, the query is a bit simpler:

SELECT account.accountNumber FROM Account account

With @EmbeddedId, we have to do one extra traversal:

SELECT book.bookId.title FROM Book book

Also, @IdClass can be quite useful in places where we are using a composite key class that we can’t modify.

Finally, if we’re going to access parts of the composite key individually, we can make use of @IdClass, but in places where we frequently use the complete identifier as an object, @EmbeddedId is preferred.

6. Conclusion

In this quick article, we explored composite primary keys in JPA.

How to delete a directory recursively with all its subdirectories and files in Java
03 Mar 2021

In this short article, you’ll learn how to delete a directory recursively along with all its subdirectories and files.

There are two examples that demonstrate how to achieve this task. The idea behind both of the examples is to traverse the file tree, and delete the files in any directory before deleting the directory itself.

Delete directory recursively – Java 8+

This example makes use of the Files.walk(Path) method, which returns a Stream<Path> populated with Path objects by walking the file tree in depth-first order.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;

public class DeleteDirectoryRecursively {
    public static void main(String[] args) throws IOException {
        Path dir = Paths.get("java");

        // Traverse the file tree in depth-first fashion and delete each file/directory.
        Files.walk(dir)
                .sorted(Comparator.reverseOrder())
                .forEach(path -> {
                    try {
                        System.out.println("Deleting: " + path);
                        Files.delete(path);
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                });
    }
}

Delete directory recursively – Java 7

The following example uses Files.walkFileTree(Path, FileVisitor) method that traverses a file tree and invokes the supplied FileVisitor for each file.

We use a SimpleFileVisitor to perform the delete operation.

import java.io.IOException;
import java.nio.file.*;
import java.nio.file.attribute.BasicFileAttributes;

public class DeleteDirectoryRecursively1 {
    public static void main(String[] args) throws IOException {
        Path dir = Paths.get("java");

        // Traverse the file tree and delete each file/directory.
        Files.walkFileTree(dir, new SimpleFileVisitor<Path>() {
            @Override
            public FileVisitResult visitFile(Path file, BasicFileAttributes attrs) throws IOException {
                System.out.println("Deleting file: " + file);
                Files.delete(file);
                return FileVisitResult.CONTINUE;
            }

            @Override
            public FileVisitResult postVisitDirectory(Path dir, IOException exc) throws IOException {
                System.out.println("Deleting dir: " + dir);
                if (exc == null) {
                    Files.delete(dir);
                    return FileVisitResult.CONTINUE;
                } else {
                    throw exc;
                }
            }
        });
    }
}
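
As an aside, if you happen to be on Kotlin, the standard library ships a deleteRecursively() extension on java.io.File that does the same job in a single call:

import java.io.File

fun main() {
    // Deletes the directory contents first, then the directory itself.
    // Returns true only if everything was removed successfully.
    val deleted = File("java").deleteRecursively()
    println(if (deleted) "Deleted successfully" else "Deletion failed")
}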