Welcome To Fusebes - Dev & Programming Blog

3 Layer Automated Testing
26 Mar 2021

Birth of Quality Engineering

Quality Assurance practice was relatively simple when we built monolith systems with traditional waterfall development models. The Quality Assurance (QA) teams would start validating the GUI layer after waiting months for product development to finish. To enhance the testing process, we would have to spend a lot of effort and money (commercial tools) automating the GUI via various tools like Micro Focus UFT, Selenium, TestComplete, Coded UI, Ranorex, etc., and most often these tests are complex to maintain and scale. Thus, most QA teams would have to restrict their automated tests to smoke and partial regression, ending in inadequate test coverage.

With modern technology, the new-era tech companies, including Varo, have widely adopted a microservices-based architecture combined with the Agile/DevOps development model. This opens up a lot of opportunities for the Quality Assurance practice, and in my opinion, this was the origin of the transformation from Quality Assurance to “Quality Engineering.”

The Common Pitfall

While automated testing gives us a massive benefit with the three R’s (Repeatable → run any number of times, Reliable → run with confidence, Reusable → develop once and share), it also comes with maintenance costs. I like to quote Grady Booch — “A fool with a tool is still a fool.” Targeting inappropriate areas would not provide us the desired benefit. We should consider several factors to choose the right candidate for automation; a few to name are the lifespan of the product, the volatility of the requirements, the complexity of the tests, business criticality, and technical feasibility.

It’s well known that the cost of a bug increases toward the right of the software lifecycle. So it is necessary to implement several walls of defense to arrest these software bugs as early as possible (the Shift-Left Testing paradigm). By implementing an agile development model with a fail-fast mindset, we have taken care of our first wall of defense. But to move faster in this shorter development cycle, we must build robust automated test suites to take care of the rolled-out features and make room for testing the new features.

The 3 Layer Architecture

The Varo architecture comprises three essential layers.

  • The Frontend layer (Web, iOS, Mobile apps) — User experience
  • The Orchestration layer (GraphQL) — Makes multiple microservice calls and returns the decision and data to the frontend apps
  • The Microservice layer (gRPC, Kafka, Postgres) — Core business layer

While studying the microservice architecture for testing, several questions were posed:

  • Which layer to test?
  • What to test in these layers?
  • Does testing the frontend automatically validate the downstream services?
  • Does testing multiple layers introduce redundancies?

We will try to answer these by analyzing the table below, which provides an overview of what these layers mean for quality engineering.

Drawing on the table, we have loosely adopted the Testing Pyramid pattern and invest in automated testing as follows:

  • Full feature/ functional validations on Microservices layer
  • Business process validations on Orchestration layer
  • E2E validations on Frontend layer

The diagram below best represents our test strategy for each layer.

Note: Though we have automated white-box tests such as unit tests and integration tests, we exclude those from this discussion.

Use Case

Let’s take the example below to illustrate how this pyramid works.

The user is presented with a form to submit. The form accepts three inputs — Field A takes the user identifier, Field B takes a drop-down value, and Field C accepts an integer value (within a defined range).

Once the user clicks the Submit button, the GraphQL API calls Microservice A to determine the customer type. It then calls Microservice B to validate the acceptable range of values for Field C (which depends on the values from Fields A and B).

Validations:

1. Feature Validations

✓ Positive behavior (Smoke, Functional, System Integration)

  • Validating behavior with a valid set of data combinations
  • Validating database
  • Validating integration — Impact on upstream/ downstream systems

✓ Negative behavior

  • Validating with invalid data (for example: Invalid authorization, Disqualified data)

2. Fluent Validations

✓ Evaluating field definitions — such as

  • Mandatory field (not empty/not null)
  • Invalid data types (for example: Int → negative value, String → Junk values with special characters or UUID → Invalid UUID formats)

Let’s look at how the “feature validations” can be written for the above use case by applying one of the test case authoring techniques — Boundary Value Analysis.

Testing the scenario above would require 54 different combinations of feature validations; below is the rationale for picking the right candidates for each layer.

Microservice Layer: This is the layer delivered first, enabling us to invest in automated testing as early as possible (Shift-Left). The scope for our automation here is 100% of the scenarios above.
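
For illustration, a boundary-value check at this layer could look like the minimal JUnit 5 sketch below. The accepted range of 1 to 100 and the in-test validation method are made-up stand-ins for the real gRPC call to Microservice B:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class FieldCBoundaryTest {

    /*
     * Stand-in for the call to Microservice B. In a real test this would be a
     * gRPC client call; the accepted range 1..100 is a hypothetical example.
     */
    private boolean isWithinAcceptedRange(int fieldC) {
        return fieldC >= 1 && fieldC <= 100;
    }

    @ParameterizedTest
    @CsvSource({
            "0,   false", // just below the lower boundary
            "1,   true",  // lower boundary
            "100, true",  // upper boundary
            "101, false"  // just above the upper boundary
    })
    void validatesFieldCBoundaries(int fieldC, boolean expected) {
        assertEquals(expected, isWithinAcceptedRange(fieldC));
    }
}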

Orchestration Layer: This layer translates the information from the microservices to the frontend layer; we select at least two tests (one positive and one negative) for each scenario. The whole objective is to ensure the integration works as expected.

Frontend Layer: In this layer, we focus on E2E validations, which means these validations are part of the complete user journey. But we ensure that we have at least one or more positive and negative scenarios embedded in those E2E tests. Business priority (data frequently used by real users) helps us select the best scenarios for our E2E validations.

Conclusion

There are always going to be sets of redundant tests across these layers. But that is the trade-off we had to make to ensure that we have correct quality gates on each of these layers. The pros of this approach are that we achieve safe and faster deployments to production by enabling quicker testing cycles, better test coverage, and risk-free decisions. In addition, having these functional test suites spread across the layers helps us isolate failures in their respective areas, saving time when troubleshooting an issue.

However, one size does not fit all. The decision has to be made based on an understanding of how the software architecture is built and the supporting infrastructure available to facilitate the testing efforts. One of the critical success factors for this implementation is building a good quality engineering team with the right skills and proper tools. But that is another story — coming soon: “Quality Engineering: Redefined.”

JPA / Hibernate ElementCollection Example with Spring Boot
03 Mar 2021

In this article, you’ll learn how to map a collection of basic as well as embeddable types using JPA’s @ElementCollection and @CollectionTable annotations.

Let’s say that the users of your application can have multiple phone numbers and addresses. To map this requirement into the database schema, you need to create separate tables for storing the phone numbers and addresses –

(Figure: table structure showing the users, user_phone_numbers and user_addresses tables)

Both the tables user_phone_numbers and user_addresses contain a foreign key to the users table.

You can implement such a relationship at the object level using JPA’s one-to-many mapping. But for basic and embeddable types like the ones in the above schema, JPA has a simpler solution in the form of an element collection.

Let’s create a project from scratch and learn how to use an ElementCollection to map the above schema in your application using Hibernate.

Creating the Application

If you have Spring Boot CLI installed, then simply type the following command in your terminal to generate the application –

spring init -n=jpa-element-collection-demo -d=web,jpa,mysql --package-name=com.example.jpa jpa-element-collection-demo

Alternatively, you can use the Spring Initializr web app to generate the application by following the instructions below –

  1. Open http://start.spring.io
  2. Enter Artifact as “jpa-element-collection-demo”
  3. Click Options dropdown to see all the options related to project metadata.
  4. Change Package Name to “com.example.jpa”
  5. Select Web, JPA and MySQL dependencies.
  6. Click Generate to generate and download the project.

Configuring MySQL database and Hibernate Log levels

Let’s first configure the database URL, username, password, hibernate log levels and other properties.

Open src/main/resources/application.properties and add the following properties to it –

# DATASOURCE (DataSourceAutoConfiguration & DataSourceProperties)
spring.datasource.url=jdbc:mysql://localhost:3306/jpa_element_collection_demo?useSSL=false&serverTimezone=UTC&useLegacyDatetimeCode=false
spring.datasource.username=root
spring.datasource.password=root

# Hibernate

# The SQL dialect makes Hibernate generate better SQL for the chosen database
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5InnoDBDialect

# Hibernate ddl auto (create, create-drop, validate, update)
spring.jpa.hibernate.ddl-auto = update

logging.level.org.hibernate.SQL=DEBUG
logging.level.org.hibernate.type=TRACE

Please change spring.datasource.username and spring.datasource.password as per your MySQL installation. Also, create a database named jpa_element_collection_demo before proceeding to the next section.

Defining the Entity classes

Next, we’ll define the entity classes that will be mapped to the database tables we saw earlier.

Before defining the User entity, let’s first define the Address type which will be embedded inside the User entity.

All the domain models will go inside the package com.example.jpa.model.

1. Address – Embeddable Type

package com.example.jpa.model;

import javax.persistence.Embeddable;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

@Embeddable
public class Address {
    @NotNull
    @Size(max = 100)
    private String addressLine1;

    @NotNull
    @Size(max = 100)
    private String addressLine2;

    @NotNull
    @Size(max = 100)
    private String city;

    @NotNull
    @Size(max = 100)
    private String state;

    @NotNull
    @Size(max = 100)
    private String country;

    @NotNull
    @Size(max = 100)
    private String zipCode;

    public Address() {

    }

    public Address(String addressLine1, String addressLine2, String city, 
                   String state, String country, String zipCode) {
        this.addressLine1 = addressLine1;
        this.addressLine2 = addressLine2;
        this.city = city;
        this.state = state;
        this.country = country;
        this.zipCode = zipCode;
    }

    // Getters and Setters (Omitted for brevity)
}

2. User Entity

Let’s now see how we can map a collection of basic types (phone numbers) and embeddable types (addresses) using Hibernate –

package com.example.jpa.model;

import javax.persistence.*;
import javax.validation.constraints.Email;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;
import java.util.HashSet;
import java.util.Set;

@Entity
@Table(name = "users")
public class User {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @NotNull
    @Size(max = 100)
    private String name;

    @NotNull
    @Email
    @Size(max = 100)
    @Column(unique = true)
    private String email;

    @ElementCollection
    @CollectionTable(name = "user_phone_numbers", joinColumns = @JoinColumn(name = "user_id"))
    @Column(name = "phone_number")
    private Set<String> phoneNumbers = new HashSet<>();

    @ElementCollection(fetch = FetchType.LAZY)
    @CollectionTable(name = "user_addresses", joinColumns = @JoinColumn(name = "user_id"))
    @AttributeOverrides({
            @AttributeOverride(name = "addressLine1", column = @Column(name = "house_number")),
            @AttributeOverride(name = "addressLine2", column = @Column(name = "street"))
    })
    private Set<Address> addresses = new HashSet<>();


    public User() {

    }

    public User(String name, String email, Set<String> phoneNumbers, Set<Address> addresses) {
        this.name = name;
        this.email = email;
        this.phoneNumbers = phoneNumbers;
        this.addresses = addresses;
    }

    // Getters and Setters (Omitted for brevity)
}

We use the @ElementCollection annotation to declare an element-collection mapping. All the records of the collection are stored in a separate table. The configuration for this table is specified using the @CollectionTable annotation.

The @CollectionTable annotation is used to specify the name of the table that stores all the records of the collection, and the JoinColumn that refers to the primary table.

Moreover, when you’re using an embeddable type with an element collection, you can use the @AttributeOverrides and @AttributeOverride annotations to override/customize the fields of the embeddable type.

Defining the Repository

Next, let’s create the repository for accessing the user’s data from the database. You need to create a new package called repository inside the com.example.jpa package and add the following interface inside the repository package –

package com.example.jpa.repository;

import com.example.jpa.model.User;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface UserRepository extends JpaRepository<User, Long> {

}

Testing the ElementCollection Setup

Finally, let’s write the code to test our setup in the main class, JpaElementCollectionDemoApplication.java –

package com.example.jpa;

import com.example.jpa.model.Address;
import com.example.jpa.model.User;
import com.example.jpa.repository.UserRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import java.util.HashSet;
import java.util.Set;

@SpringBootApplication
public class JpaElementCollectionDemoApplication implements CommandLineRunner {

    @Autowired
    private UserRepository userRepository;

    public static void main(String[] args) {
        SpringApplication.run(JpaElementCollectionDemoApplication.class, args);
    }

    @Override
    public void run(String... args) throws Exception {
        // Cleanup database tables.
        userRepository.deleteAll();

        // Insert a user with multiple phone numbers and addresses.
        Set<String> phoneNumbers = new HashSet<>();
        phoneNumbers.add("+91-9999999999");
        phoneNumbers.add("+91-9898989898");

        Set<Address> addresses = new HashSet<>();
        addresses.add(new Address("747", "Golf View Road", "Bangalore",
                "Karnataka", "India", "560008"));
        addresses.add(new Address("Plot No 44", "Electronic City", "Bangalore",
                "Karnataka", "India", "560001"));

        User user = new User("Yaniv Levy", "yaniv@fusebes.com",
                phoneNumbers, addresses);

        userRepository.save(user);
    }
}

The main class implements the CommandLineRunner interface and provides the implementation of its run() method.

The run() method is executed after application startup. In the run() method, we first clean up all the tables and then insert a new user with multiple phone numbers and addresses.

You can run the application by typing mvn spring-boot:run from the root directory of the project.

After running the application, go ahead and check all the tables in MySQL. The users table will have a new entry, and the user_phone_numbers and user_addresses tables will each have two new entries –

mysql> select * from users;
+----+-------------------+------------+
| id | email             | name       |
+----+-------------------+------------+
|  3 | yaniv@fusebes.com | Yaniv Levy |
+----+-------------------+------------+
1 row in set (0.01 sec)
mysql> select * from user_phone_numbers;
+---------+----------------+
| user_id | phone_number   |
+---------+----------------+
|       3 | +91-9898989898 |
|       3 | +91-9999999999 |
+---------+----------------+
2 rows in set (0.00 sec)
mysql> select * from user_addresses;
+---------+--------------+-----------------+-----------+---------+-----------+----------+
| user_id | house_number | street          | city      | country | state     | zip_code |
+---------+--------------+-----------------+-----------+---------+-----------+----------+
|       3 | 747          | Golf View Road  | Bangalore | India   | Karnataka | 560008   |
|       3 | Plot No 44   | Electronic City | Bangalore | India   | Karnataka | 560001   |
+---------+--------------+-----------------+-----------+---------+-----------+----------+
2 rows in set (0.00 sec)

Conclusion

That’s all in this article, folks. I hope you learned how to use Hibernate’s element collection with Spring Boot.

Thanks for reading, guys. Happy coding! 🙂

Microkernel Architecture
14 May 2021

What is Microkernel Architecture?

Many third-party applications, in view of software architecture best practices, make their software packages available as downloadable plug-ins or versions. It is to this particular type of application that the Microkernel Architecture is most suited, as a result of which it is also called the plug-in architecture pattern.

With this style, enterprise application development services can add pluggable features to an existing version of the software, providing for extensibility. The architecture consists of two components: one part dedicated to the core system and the other to the plug-ins. Minimalism is followed while designing the core of the architecture, which stores just the minimal set of components required to render the system effective.


The most relatable example of the Microkernel Architecture is an internet browser. You download a version of the application (essentially the core software) and, depending upon the missing functionalities, download and add plug-ins. Enterprise software development services rely on this pattern for designing large-scale, complex applications as well. An example of such a business application could be software for processing insurance claims.
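
To make the core-plus-plug-ins idea concrete, here is a minimal sketch in Java. The Plugin interface and CoreSystem class are illustrative names, not a prescribed API:

import java.util.ArrayList;
import java.util.List;

// Contract that every plug-in must fulfil
interface Plugin {
    void execute(String input);
}

// Minimal core system: it only knows how to register and invoke plug-ins
class CoreSystem {
    private final List<Plugin> plugins = new ArrayList<>();

    void register(Plugin plugin) {
        plugins.add(plugin);
    }

    void process(String input) {
        // The core stays unchanged no matter which plug-ins are installed
        for (Plugin plugin : plugins) {
            plugin.execute(input);
        }
    }
}

public class MicrokernelDemo {
    public static void main(String[] args) {
        CoreSystem core = new CoreSystem();
        // A plug-in can be added (or omitted) without touching the core
        core.register(input -> System.out.println("Spell-checking: " + input));
        core.process("hello wrld");
    }
}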

Benefits

  • This design has proven its worth as a highly flexible one. Reacting to changing requirements in near real-time is critical to sustenance, and plug-ins make that possible: such changes can be dealt with in isolation while the core system retains its stable state, for the most part, therefore requiring fewer development updates over time.
  • An enterprise software development company could face downtime at the time of deployment, but that can be minimized or altogether avoided by adding plug-in modules to the core dynamically.
  • A software development company could test plug-in prototypes in isolation and check for performance issues without affecting the core of the architecture.
  • Microkernel Architecture is most appreciated for maintaining high-performance applications, as the software can be customized to include only those capabilities that are needed the most.

Potential Drawbacks 

  • Apps such as those conceptualized by enterprise mobile app development services have a non-negotiable need to scale. However, the Microkernel Architecture is grounded in the design of the product and naturally suited to apps that are smaller in size.
  • An enterprise app development company could find the Microkernel pattern rather hard to execute due to the vast number of plug-ins compatible with the core. This calls for drawing out governance contracts, updating plug-in registries and so many other formalities that the implementation becomes a challenge.

Ideal For

Microkernel Architecture is best suited for workflow applications in addition to those that need job scheduling. As pointed out above, any application that, like a web browser, you want to release with just the right amount of specs while leaving room that can be filled by installing additional plug-ins can be built with this design pattern.

Introduction to Data Classes in Kotlin
07 Mar 2021

While building any application, we often need to create classes whose primary purpose is to hold data/state. These classes generally contain the same old boilerplate code in the form of getters, setters, equals(), hashCode() and toString() methods.

Motivation

Consider the following example of a Customer class in Java that just holds data about a Customer and doesn’t have any functionality whatsoever –

public class Customer {
    private String id;
    private String name;

    public Customer(String id, String name) {
        this.id = id;
        this.name = name;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (o == null || getClass() != o.getClass()) return false;

        Customer customer = (Customer) o;

        if (id != null ? !id.equals(customer.id) : customer.id != null) return false;
        return name != null ? name.equals(customer.name) : customer.name == null;
    }

    @Override
    public int hashCode() {
        int result = id != null ? id.hashCode() : 0;
        result = 31 * result + (name != null ? name.hashCode() : 0);
        return result;
    }
}

You see, for creating a simple class with only two member fields, we had to write almost 50 lines of code.

Yes, I know that you don’t need to write that code yourself and any good IDE can generate all that boilerplate code for you.

But that code will still be there in your source file and clutter it. Moreover, whenever you add a new member field to the Class, you’ll need to regenerate/modify the constructors, getters/setters and equals()/hashcode() methods.

You can also use a third-party library like Project Lombok to generate getters/setters, equals()/hashCode(), toString() methods and more. But there is no out-of-the-box solution without any library that can help us avoid this boilerplate code in our application.

Kotlin Data Classes

Kotlin has a better solution for classes that are used to hold data/state. It’s called a Data Class. A Data Class is like a regular class but with some additional functionalities.

With Kotlin’s data classes, you don’t need to write/generate all the lengthy boilerplate code yourself. The compiler automatically generates a default getter and setter for all the mutable properties, and a getter (only) for all the read-only properties of the data class. Moreover, it also derives the implementation of standard methods like equals(), hashCode() and toString() from the properties declared in the data class’s primary constructor.

For example, the Customer class that we wrote in the previous section in Java can be written in Kotlin in just one line –

data class Customer(val id: Long, val name: String)

Accessing the properties of the data class

The following example shows how you can access the properties of the data class –

val customer = Customer(1, "Sachin")

// Getting a property
val name = customer.name

Since all the properties of the Customer class are immutable, there is no default setter generated by the compiler. Therefore, if you try to set a property, the compiler will give an error –

// Setting a Property

// You cannot set read-only properties
customer.id = 2	// Error: Val cannot be assigned

Let’s now see how we can use the equals(), hashCode(), and toString() methods of the data class –

1. Data class’s equals() method

val customer1 = Customer(1, "John")
val customer2 = Customer(1, "John")

println(customer1.equals(customer2))  // Prints true

You can also use Kotlin’s Structural equality operator == to check for equality. The == operator internally calls the equals() method –

println(customer1 == customer2)  // Prints true

2. Data class’s toString() method

The toString() method converts the object to a String in the form of "ClassName(field1=value1, field2=value2)" –

val customer = Customer(2, "Robert")
println("Customer Details : $customer")
# Output
Customer Details : Customer(id=2, name=Robert)

3. Data class’s hashCode() method

val customer = Customer(2, "Robert")
println("Customer HashCode : ${customer.hashCode()}") // Prints -1841845792

Apart from the standard methods like equals(), hashCode() and toString(), Kotlin also generates a copy() function and componentN() functions for all the data classes. Let’s understand what these functions do and how to use them –

Data Classes and Immutability: The copy() function

Although the properties of a data class can be mutable (declared using var), it’s strongly recommended to use immutable properties (declared using val) so as to keep the instances of the data class immutable.

Immutable objects are easier to work with and reason about in multi-threaded applications. Since they cannot be modified after creation, you don’t need to worry about the concurrency issues that arise when multiple threads try to modify an object at the same time.

Kotlin makes working with immutable data objects easier by automatically generating a copy() function for all the data classes. You can use the copy() function to copy an existing object into a new object and modify some of the properties while keeping the existing object unchanged.

The following example shows how copy() function can be used –

val customer = Customer(3, "James")

/* 
   Copies the customer object into a separate Object and updates the name. 
   The existing customer object remains unchanged.
*/
val updatedCustomer = customer.copy(name = "James Altucher")
println("Customer : $customer")
println("Updated Customer : $updatedCustomer")
# Output
Customer : Customer(id=3, name=James)
Updated Customer : Customer(id=3, name=James Altucher)

Data Classes and Destructuring Declarations: The componentN() functions

Kotlin also generates componentN() functions corresponding to all the properties declared in the primary constructor of the data class.

For the Customer data class that we defined in the previous section, Kotlin generates two componentN() functions – component1() and component2() corresponding to the id and name properties –

val customer = Customer(4, "Joseph")

println(customer.component1()) // Prints 4
println(customer.component2()) // Prints "Joseph"

The component functions enable us to use the so-called Destructuring Declaration in Kotlin. The Destructuring declaration syntax helps you destructure an object into a number of variables like this –

val customer = Customer(4, "Joseph")

// Destructuring Declaration
val (id, name) = customer
println("id = $id, name = $name") // Prints "id = 4, name = Joseph"

Requirements for Data Classes

Every Data Class in Kotlin needs to fulfill the following requirements –

  • The primary constructor must have at least one parameter
  • All the parameters declared in the primary constructor need to be marked as val or var.
  • Data classes cannot be abstract, open, sealed or inner.

Conclusion

Data classes help us avoid a lot of common boilerplate code and make our classes clean and concise. In this article, you learned how data classes work and how to use them. I hope you understood all the concepts presented in this article.

Persistence in Event Driven Architectures
28 Mar 2021

The importance of being persistent in event driven architectures.

Enterprises have to constantly adapt and evolve their enterprise architecture strategies in order to deliver the desired business outcomes. The evolving architecture patterns may involve business processing of sales transactions with a human in the loop or they may involve machine to machine data processing using automation. Enterprises earlier adopted a request-driven model where microservices made a call to a service and the service responded to the request being made. In this request-driven model, you run into challenges around flexibility as you try to scale your global deployment footprint.

A new approach that is quickly gaining adoption in enterprises is the event-driven architecture. In this approach, you are able to increase application agility and flexibility by allowing multiple data producers to coexist with multiple data consumers, and you process data only after an event or state change. In this enterprise architecture, the producers and consumers of data can be quickly extended to deliver better flexibility and agility as you scale your operation globally. Examples of event-driven solutions are available in a hosted managed format from most cloud providers today. In this blog post, we will look at a Kafka solution running in a Kubernetes cluster and how you can make sure persistence is achieved for the solution. This approach is running at customers in production today, supported by Confluent and Portworx.

Why Use Event-Driven Architecture?

With the advent of 5G technology, a vast amount of data will be generated by sensors, devices, systems, and humans in the loop to track, manage and achieve business outcomes. Use cases for event-driven architectures include the following:

  • Business Process state changes – You want to notify the change of state between a purchase order and accounts receivable with an event. The event-based approach allows a human or a decision engine to take the next appropriate step in the process.
  • Log and Metrics processing – The event-driven model allows for multiple actions to be triggered based on a single metric. The ability to send messages to multiple event handlers in different subsystems offers the scalability and resiliency required by certain business applications.

Why Use Kafka?

Apache Kafka is a scalable, fault-tolerant messaging system that enables you to build distributed real-time applications with an event-driven architecture. Kafka delivers events with fast ingestion rates and provides persistence and in-order guarantees. Kafka adoption in your solution will depend on your specific use case. Below are some important concepts about Apache Kafka, followed by a minimal producer sketch after the list:

  • Kafka organizes messages into “topics”.
  • The process that does the work in Kafka is called the “broker”. A producer pushes data into a topic hosted on a broker and a consumer pulls messages from a topic via a broker.
  • Kafka topics can be divided into “partitions”. This allows for parallelizing a topic across multiple brokers and increasing message ingests and throughput.
  • Brokers can hold multiple partitions but at any given time, only one partition can act as leader of a topic. A leader is responsible for updating any replicas with new data.
  • Brokers are responsible for storing messages to disk. The messages are stored with unique offsets. Messages in Kafka do not have a unique ID.
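
Here is the minimal producer sketch mentioned above, using the Kafka Java client. The broker address and the purchase-orders topic are made-up examples:

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // The producer pushes an event (a business state change) into a topic
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("purchase-orders", "order-42", "APPROVED"));
        }
    }
}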

Persistence With Portworx and Kafka on Kubernetes

Kafka needs ZooKeeper to be deployed as a StatefulSet in Kubernetes. Kafka brokers, which maintain the state of the topics and partitions, also need to be deployed as StatefulSets backed by persistent volumes.

  • Kafka offers replication of topics between different brokers. In the case of a node failure, Kafka can recover from failure using the replicated topics. This recovery mechanism does create additional network calls in order to synchronize with the replica on a different broker. The recovery time of the failed node and its broker depend on the amount of data that needs to be rehydrated and network latencies in the cluster.
  • Portworx offers data replication using the replication parameter in the Kubernetes storage class. In this scenario, the storage system is responsible for maintaining copies of the topic on different nodes. In the case of a node failure, the Kafka broker is rescheduled on a node that already contains the replicated topic data. The rebuild time of the broker is reduced because it uses the storage system to rehydrate the topic data without any network latencies. Once the data is rehydrated using the storage system, the broker can quickly catch up on the topic offset from an existing broker, thus reducing the overall recovery time.

Let’s walk through a Kubernetes node failure scenario on a Kubernetes cluster where a Kafka application is running and is backed by Portworx volumes.

Figure 1: We will describe the deployment of Kafka and Portworx on a 5 node Kubernetes cluster. In the image below, you can see that the Kafka deployment has 3 Brokers, each with 2 partitions and a replication factor of 2. For the Portworx data platform, we have a volume replication factor set to 3 for each volume.

Figure 2: We will describe a node failure event. In the diagram below, we have simulated a node failure: Kubernetes worker node 2 has been taken out of production. Kubernetes worker node 2 contains a Kafka broker with 2 partitions and 2 Portworx persistent volumes.

Figure 3: Once a Kubernetes node failure is detected by the Kubernetes API server, the Kafka broker is deployed to a node with available resources. The Portworx platform has the ability to influence the placement of the broker on a Kubernetes node. Since Kubernetes node 4 already has a copy of the Kafka broker’s persistent volumes, Portworx makes sure that the Kafka broker is deployed on Kubernetes node 4. On the Portworx platform’s recovery side, the volumes are also created on other nodes in order to maintain the defined replication factor of 3.

Figure 4: Once the failed node is back in production use, the Kafka application does not get scheduled on the recovered node, since the application is already at the desired number of Kafka brokers. On the Portworx platform side, replicated volumes are placed on the recovered Kubernetes node to maintain the desired replication factor for the volumes.

How to copy a File or Directory in Java
03 Mar 2021

In this article, you’ll learn how to copy a file or directory in Java using various methods like Files.copy() or using BufferedInputStream and BufferedOutputStream.

Java Copy File using Files.copy()

Java NIO’s Files.copy() method is the simplest way of copying a file in Java.

import java.io.IOException;
import java.nio.file.*;

public class CopyFileExample {
    public static void main(String[] args) {

        Path sourceFilePath = Paths.get("./bar.txt");
        Path targetFilePath = Paths.get(System.getProperty("user.home") + "/Desktop/bar-copy.txt");

        try {
            Files.copy(sourceFilePath, targetFilePath);
        } catch (FileAlreadyExistsException ex) {
            System.out.println("File already exists");
        } catch (IOException ex) {
            System.out.format("I/O error: %s%n", ex);
        }
    }
}

The Files.copy() method will throw FileAlreadyExistsException if the target file already exists. If you want to replace the target file then you can use the REPLACE_EXISTING option like this –

Files.copy(sourceFilePath, targetFilePath, StandardCopyOption.REPLACE_EXISTING);

Note that directories can be copied using the same method. However, the files inside the directory are not copied, so the new directory will be empty even when the original directory contains files.

Read: How to copy Directories recursively in Java
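
For reference, here is a minimal sketch of a recursive copy using Files.walk(). It assumes the hypothetical paths below and simply replicates every entry of the source tree under the target root:

import java.io.IOException;
import java.nio.file.*;
import java.util.stream.Stream;

public class CopyDirectoryExample {
    public static void main(String[] args) throws IOException {
        Path sourceDir = Paths.get("./source-dir");
        Path targetDir = Paths.get("./target-dir");

        // Walk the source tree; for each entry, compute the mirrored target path and copy
        try (Stream<Path> paths = Files.walk(sourceDir)) {
            paths.forEach(source -> {
                Path target = targetDir.resolve(sourceDir.relativize(source));
                try {
                    Files.copy(source, target, StandardCopyOption.REPLACE_EXISTING);
                } catch (IOException ex) {
                    System.out.format("I/O error: %s%n", ex);
                }
            });
        }
    }
}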

Java Copy File using BufferedInputStream and BufferedOutputStream

You can also copy a file byte by byte using byte-stream I/O. The following example uses a BufferedInputStream to read the file into a byte buffer and then writes the buffer using a BufferedOutputStream.

You can also use a FileInputStream and a FileOutputStream directly to perform the reading and writing. But buffered I/O is more performant because it buffers data and reads/writes it in chunks.

import java.io.*;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CopyFileExample1 {
    public static void main(String[] args) {
        Path sourceFilePath = Paths.get("./bar.txt");
        Path targetFilePath = Paths.get(System.getProperty("user.home") + "/Desktop/bar-copy.txt");

        try(InputStream inputStream = Files.newInputStream(sourceFilePath);
            BufferedInputStream bufferedInputStream = new BufferedInputStream(inputStream);

            OutputStream outputStream = Files.newOutputStream(targetFilePath);
            BufferedOutputStream bufferedOutputStream = new BufferedOutputStream(outputStream)) {

            byte[] buffer = new byte[4096];
            int numBytes;
            while ((numBytes = bufferedInputStream.read(buffer)) != -1) {
                bufferedOutputStream.write(buffer, 0, numBytes);
            }
        } catch (IOException ex) {
            System.out.format("I/O error: %s%n", ex);
        }
    }
}

Intro to Spring Boot Starters
26 Mar 2021

1. Overview

Dependency management is a critical aspect of any complex project. Doing this manually is less than ideal; the more time you spend on it, the less time you have for the other important aspects of the project.

Spring Boot starters were built to address exactly this problem. Starter POMs are a set of convenient dependency descriptors that you can include in your application. You get a one-stop-shop for all the Spring and related technology that you need, without having to hunt through sample code and copy-paste loads of dependency descriptors.

More than 30 Boot starters are available – let’s see some of them in the following sections.

2. The Web Starter

First, let’s look at developing a REST service; we can use libraries like Spring MVC, Tomcat and Jackson – a lot of dependencies for a single application.

Spring Boot starters can help to reduce the number of manually added dependencies. So instead of specifying the dependencies by hand, just add one starter as in the following example:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

Now we can create a REST controller. For the sake of simplicity, we won’t use a database and will focus on the REST controller:

@RestController
public class GenericEntityController {
    private List<GenericEntity> entityList = new ArrayList<>();

    @RequestMapping("/entity/all")
    public List<GenericEntity> findAll() {
        return entityList;
    }

    @RequestMapping(value = "/entity", method = RequestMethod.POST)
    public GenericEntity addEntity(GenericEntity entity) {
        entityList.add(entity);
        return entity;
    }

    @RequestMapping("/entity/findby/{id}")
    public GenericEntity findById(@PathVariable Long id) {
        return entityList.stream().
                 filter(entity -> entity.getId().equals(id)).
                   findFirst().get();
    }
}

The GenericEntity is a simple bean with id of type Long and value of type String.
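
The GenericEntity class itself isn’t shown here, but a minimal sketch of it could look like this (the constructors match the usages in this article; in the Data JPA section below it would additionally need @Entity and @Id annotations):

public class GenericEntity {
    private Long id;
    private String value;

    public GenericEntity() {
    }

    public GenericEntity(Long id, String value) {
        this.id = id;
        this.value = value;
    }

    // Convenience constructor for when the id is generated elsewhere
    public GenericEntity(String value) {
        this.value = value;
    }

    public Long getId() {
        return id;
    }

    public String getValue() {
        return value;
    }
}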

That’s it – with the application running, you can access http://localhost:8080/entity/all and check the controller is working.

We have created a REST application with quite a minimal configuration.

3. The Test Starter

For testing, we usually use the following set of libraries: Spring Test, JUnit, Hamcrest, and Mockito. We can include all of these libraries manually, but a Spring Boot starter can be used to automatically include them in the following way:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>

Notice that you don’t need to specify the version number of an artifact. Spring Boot will figure out which version to use – all you need to specify is the version of the spring-boot-starter-parent artifact. If later on you need to upgrade the Boot library and dependencies, just upgrade the Boot version in one place and it will take care of the rest.
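
For example, the parent is declared in the POM like this (the version shown here is just an illustration; use whichever Boot version your project is built on):

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>1.5.2.RELEASE</version>
</parent>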

Let’s actually test the controller we created in the previous example.

There are two ways to test the controller:

  • Using the mock environment
  • Using the embedded Servlet container (like Tomcat or Jetty)

In this example we’ll use a mock environment:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
public class SpringBootApplicationIntegrationTest {
    @Autowired
    private WebApplicationContext webApplicationContext;
    private MockMvc mockMvc;

    @Before
    public void setupMockMvc() {
        mockMvc = MockMvcBuilders.webAppContextSetup(webApplicationContext).build();
    }

    @Test
    public void givenRequestHasBeenMade_whenMeetsAllOfGivenConditions_thenCorrect()
      throws Exception { 
        MediaType contentType = new MediaType(MediaType.APPLICATION_JSON.getType(),
        MediaType.APPLICATION_JSON.getSubtype(), Charset.forName("utf8"));
        mockMvc.perform(MockMvcRequestBuilders.get("/entity/all")).
        andExpect(MockMvcResultMatchers.status().isOk()).
        andExpect(MockMvcResultMatchers.content().contentType(contentType)).
        andExpect(jsonPath("$", hasSize(4))); 
    } 
}

The above test calls the /entity/all endpoint and verifies that the JSON response contains 4 elements. For this test to pass, we also have to initialize our list in the controller class:

public class GenericEntityController {
    private List<GenericEntity> entityList = new ArrayList<>();

    {
        entityList.add(new GenericEntity(1l, "entity_1"));
        entityList.add(new GenericEntity(2l, "entity_2"));
        entityList.add(new GenericEntity(3l, "entity_3"));
        entityList.add(new GenericEntity(4l, "entity_4"));
    }
    //...
}

What is important here is that the @WebAppConfiguration annotation and MockMvc are part of the spring-test module, hasSize is a Hamcrest matcher, and @Before is a JUnit annotation. These are all available by importing this one starter dependency.

4. The Data JPA Starter

Most web applications have some sort of persistence – and that’s quite often JPA.

Instead of defining all of the associated dependencies manually – let’s go with the starter instead:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>

Notice that out of the box we have automatic support for at least the following databases: H2, Derby and HSQLDB. In our example, we’ll use H2.

Now let’s create the repository for our entity:

public interface GenericEntityRepository extends JpaRepository<GenericEntity, Long> {}

Time to test the code. Here is the JUnit test:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
public class SpringBootJPATest {
    
    @Autowired
    private GenericEntityRepository genericEntityRepository;

    @Test
    public void givenGenericEntityRepository_whenSaveAndRetrieveEntity_thenOK() {
        GenericEntity genericEntity = 
          genericEntityRepository.save(new GenericEntity("test"));
        GenericEntity foundEntity = 
          genericEntityRepository.findOne(genericEntity.getId());
        
        assertNotNull(foundEntity);
        assertEquals(genericEntity.getValue(), foundEntity.getValue());
    }
}

We didn’t spend time on specifying the database vendor, URL connection, and credentials. No extra configuration is necessary as we’re benefiting from the solid Boot defaults; but of course all of these details can still be configured if necessary.

5. The Mail Starter

A very common task in enterprise development is sending email, and dealing directly with the Java Mail API can be difficult.

Spring Boot starter hides this complexity – mail dependencies can be specified in the following way:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-mail</artifactId>
</dependency>

Now we can directly use the JavaMailSender, so let’s write some tests.

For testing purposes, we need a simple SMTP server. In this example, we’ll use Wiser. This is how we can include it in our POM:

<dependency>
    <groupId>org.subethamail</groupId>
    <artifactId>subethasmtp</artifactId>
    <version>3.1.7</version>
    <scope>test</scope>
</dependency>

Here is the source code for the test:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
public class SpringBootMailTest {
    @Autowired
    private JavaMailSender javaMailSender;

    private Wiser wiser;

    private String userTo = "user2@localhost";
    private String userFrom = "user1@localhost";
    private String subject = "Test subject";
    private String textMail = "Text subject mail";

    @Before
    public void setUp() throws Exception {
        final int TEST_PORT = 25;
        wiser = new Wiser(TEST_PORT);
        wiser.start();
    }

    @After
    public void tearDown() throws Exception {
        wiser.stop();
    }

    @Test
    public void givenMail_whenSendAndReceived_thenCorrect() throws Exception {
        SimpleMailMessage message = composeEmailMessage();
        javaMailSender.send(message);
        List<WiserMessage> messages = wiser.getMessages();

        assertThat(messages, hasSize(1));
        WiserMessage wiserMessage = messages.get(0);
        assertEquals(userFrom, wiserMessage.getEnvelopeSender());
        assertEquals(userTo, wiserMessage.getEnvelopeReceiver());
        assertEquals(subject, getSubject(wiserMessage));
        assertEquals(textMail, getMessage(wiserMessage));
    }

    private String getMessage(WiserMessage wiserMessage)
      throws MessagingException, IOException {
        return wiserMessage.getMimeMessage().getContent().toString().trim();
    }

    private String getSubject(WiserMessage wiserMessage) throws MessagingException {
        return wiserMessage.getMimeMessage().getSubject();
    }

    private SimpleMailMessage composeEmailMessage() {
        SimpleMailMessage mailMessage = new SimpleMailMessage();
        mailMessage.setTo(userTo);
        mailMessage.setReplyTo(userFrom);
        mailMessage.setFrom(userFrom);
        mailMessage.setSubject(subject);
        mailMessage.setText(textMail);
        return mailMessage;
    }
}

In the test, the @Before and @After methods are in charge of starting and stopping the mail server.

Notice that we’re wiring in the JavaMailSender bean – the bean was automatically created by Spring Boot.

Just like any other defaults in Boot, the email settings for the JavaMailSender can be customized in application.properties:

spring.mail.host=localhost
spring.mail.port=25
spring.mail.properties.mail.smtp.auth=false

So we configured the mail server on localhost:25 and we didn’t require authentication.

6. Conclusion

In this article we have given an overview of starters, explained why we need them, and provided examples of how to use them in your projects.

Let’s recap the benefits of using Spring Boot starters:

  • increased POM manageability
  • production-ready, tested and supported dependency configurations
  • decreased overall configuration time for the project

Remove Duplicates from Sorted Array
06 Feb 2021

Given a sorted array, remove the duplicates from the array in-place such that each element appears only once, and return the new length.

Do not allocate extra space for another array; you must do this by modifying the input array in-place with O(1) extra memory.

Example

Given array [1, 1, 1, 3, 5, 5, 7]

The output should be 4, with the first four elements of the array being [1, 3, 5, 7]

Remove Duplicates from Sorted Array solution in Java

Given that the array is sorted, all the duplicate elements will appear together.

We can solve the problem in O(n) time complexity by using the two-pointer sliding window pattern. We’ll keep two pointers (indexes) –

  • One index i, to iterate over the array, and
  • Another index j, to keep track of the number of unique elements found so far. This index will move only when we modify the array in-place to include a new non-duplicate element.

class RemoveDuplicatesSortedArray {

  private static int removeDuplicates(int[] nums) {
    int n = nums.length;

    /*
     * This index will move only when we modify the array in-place to include a new
     * non-duplicate element.
     */
    int j = 0;

    for (int i = 0; i < n; i++) {
      /*
       * If the current element is equal to the next element, then skip the current
       * element because it's a duplicate.
       */
      if (i < n - 1 && nums[i] == nums[i + 1]) {
        continue;
      }

      nums[j++] = nums[i];
    }

    return j;
  }

  public static void main(String[] args) {
    int[] nums = new int[] { 1, 1, 1, 3, 5, 5, 7 };
    int newLength = removeDuplicates(nums);

    System.out.println("Length of array after removing duplicates = " + newLength);

    System.out.print("Array = ");
    for (int i = 0; i < newLength; i++) {
      System.out.print(nums[i] + " ");
    }
    System.out.println();
  }
}
# Output
Length of array after removing duplicates = 4
Array = 1 3 5 7 


How to check if a File or Directory exists in Java
03 Mar 2021

In this short article, you’ll find two examples demonstrating how to check whether a file or directory exists at a given path in Java, using both the classic Java IO API and the newer Java NIO API.

Check if a File/Directory exists using Java IO’s File.exists()

import java.io.File;

public class CheckFileExists1 {
    public static void main(String[] args) {

        File file = new File("/Users/callicoder/demo.txt");

        if(file.exists()) {
            System.out.printf("File %s exists%n", file);
        } else {
            System.out.printf("File %s doesn't exist%n", file);
        }

    }
}

Check if a File/Directory exists using Java NIO’s Files.exists() or Files.notExists()

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CheckFileExists {
    public static void main(String[] args) {

        Path filePath = Paths.get("/Users/callicoder/demo.txt");

        // Checking existence using Files.exists
        if(Files.exists(filePath)) {
            System.out.printf("File %s exists%n", filePath);
        } else {
            System.out.printf("File %s doesn't exist%n", filePath);
        }


        // Checking existence using Files.notExists
        if(Files.notExists(filePath)) {
            System.out.printf("File %s doesn't exist%n", filePath);
        } else {
            System.out.printf("File %s exists%n", filePath);
        }
    }
}
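
The same Files.exists() check works for directories too. To additionally verify that an existing path is a directory, a sketch like the following can be used (the /Users/callicoder/demo path is a hypothetical example):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CheckDirectoryExists {
    public static void main(String[] args) {

        Path dirPath = Paths.get("/Users/callicoder/demo");

        // Files.isDirectory returns true only if the path exists and is a directory
        if (Files.isDirectory(dirPath)) {
            System.out.printf("Directory %s exists%n", dirPath);
        } else {
            System.out.printf("Directory %s doesn't exist%n", dirPath);
        }
    }
}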

Reading and Writing Environment Variables in Go
08 Mar 2021

An environment variable is a way to supply dynamic configuration information to programs at runtime. Environment variables are often used to make the same program work in different environments like Local, QA, or Production.

Get, Set, Unset, and Expand environment variables in Go

The following program demonstrates how to work with environment variables in Go. It makes use of the following functions provided by the os package:

  • os.Setenv(key, value): Set an environment variable.
  • os.Getenv(key): Get the value of an environment variable. If the environment variable is not present, it returns an empty string. To distinguish between an empty value and an unset value, use os.LookupEnv.
  • os.Unsetenv(key): Unset an environment variable.
  • os.LookupEnv(key): Get the value of an environment variable along with a boolean indicating whether the variable is present. The boolean will be false if the environment variable is not present.
  • os.ExpandEnv(str): Expand a string by replacing ${var} or $var in the string according to the values of the current environment variables.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Set an Environment Variable
	os.Setenv("DB_HOST", "localhost")
	os.Setenv("DB_PORT", "5432")
	os.Setenv("DB_USERNAME", "root")
	os.Setenv("DB_PASSWORD", "admin")
	os.Setenv("DB_NAME", "test")

	// Get the value of an Environment Variable
	host := os.Getenv("DB_HOST")
	port := os.Getenv("DB_PORT")
	fmt.Printf("Host: %s, Port: %s\n", host, port)

	// Unset an Environment Variable
	os.Unsetenv("DB_PORT")
	fmt.Printf("After unset, Port: %s\n", os.Getenv("DB_PORT"))

	/*
		Get the value of an environment variable and a boolean indicating whether the
		environment variable is present or not.
	*/
	driver, ok := os.LookupEnv("DB_DRIVER")
	if !ok {
		fmt.Println("DB_DRIVER is not present")
	} else {
		fmt.Printf("Driver: %s\n", driver)
	}

	// Expand a string containing environment variables in the form of $var or ${var}
	dbURL := os.ExpandEnv("postgres://$DB_USERNAME:$DB_PASSWORD@$DB_HOST:$DB_PORT/$DB_NAME")
	fmt.Println("DB URL: ", dbURL)

}
# Output
Host: localhost, Port: 5432
After unset, Port:
DB_DRIVER is not present
DB URL:  postgres://root:admin@localhost:/test

List and Clear all environment variables in Go

  • os.Environ(): This function returns a []string containing all the environment variables in the form of key=value.
  • os.Clearenv(): This function deletes all the environment variables. It may come in handy while writing tests to start with a clean environment.

The following example demonstrates how to use these two functions:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {

	// Environ returns a copy of strings representing the environment,
	// in the form "key=value".
	for _, env := range os.Environ() {
		// env is a "key=value" string; split it into the key and the value
		envPair := strings.SplitN(env, "=", 2)
		key := envPair[0]
		value := envPair[1]

		fmt.Printf("%s : %s\n", key, value)
	}

	// Delete all environment variables
	os.Clearenv()

	fmt.Println("Number of environment variables: ", len(os.Environ()))
}
# Output
TERM_SESSION_ID : w0t0p1:70C49068-9C87-4032-9C9B-49FB6B86687B
PATH : /Users/fusebes/.nvm/versions/node/v10.0.0/bin:/usr/local/sbin:/usr/local/sbin:/Users/fusebes/protobuf/bin:/Users/fusebes/go/bin:/Users/fusebes/vaultproject:/Users/fusebes/google-cloud-sdk/bin:/Users/fusebes/.rbenv/bin:/Users/fusebes/.rbenv/shims:/Users/fusebes/anaconda3/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Users/fusebes/Library/Android/sdk/platform-tools:/opt/flutter/bin
.... # Output truncated for brevity

Number of environment variables:  0