How to get current Date and Time in Java

03 Mar 2021

In this article, you’ll find several examples showing how to get the current date, the current time, the current date and time, and the current date and time in a specific timezone in Java.

Get current Date in Java

import java.time.LocalDate;

public class CurrentDateTimeExample {
    public static void main(String[] args) {
        // Current Date
        LocalDate currentDate = LocalDate.now();
        System.out.println("Current Date: " + currentDate);
    }
}

Get current Time in Java

import java.time.LocalTime;

public class CurrentDateTimeExample {
    public static void main(String[] args) {
        // Current Time
        LocalTime currentTime = LocalTime.now();
        System.out.println("Current Time: " + currentTime);
    }
}

Get current Date and Time in Java

import java.time.LocalDateTime;

public class CurrentDateTimeExample {
    public static void main(String[] args) {
        // Current Date and Time
        LocalDateTime currentDateTime = LocalDateTime.now();
        System.out.println("Current Date & time: " + currentDateTime);
    }
}

Get current Date and Time in a specific Timezone in Java

import java.time.ZoneId;
import java.time.ZonedDateTime;

public class CurrentDateTimeExample {
    public static void main(String[] args) {
        // Current Date and Time in a given Timezone
        ZonedDateTime currentNewYorkDateTime = ZonedDateTime.now(ZoneId.of("America/New_York"));
        System.out.println(currentNewYorkDateTime);
    }
}
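
By default, the java.time classes print dates and times in ISO-8601 format. If you need a custom format, you can use java.time.format.DateTimeFormatter; here is a small sketch (the pattern shown is just an example):

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class FormattedDateTimeExample {
    public static void main(String[] args) {
        // Format the current date and time using a custom pattern
        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("dd MMM yyyy, HH:mm:ss");
        String formatted = LocalDateTime.now().format(formatter);
        System.out.println("Formatted Date & Time: " + formatted);
    }
}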
What are Spring Boot Common Annotations?

30 Mar 2021

The most commonly used and important Spring Boot annotations are listed below:

  • @EnableAutoConfiguration – enables Spring Boot’s auto-configuration mechanism.
  • @ComponentScan – enables component scanning on the application classpath.
  • @SpringBootApplication – combines the above in one step, i.e., it enables the auto-configuration mechanism, enables component scanning, and allows registering extra beans in the context.
  • @ImportAutoConfiguration – imports and applies only the specified auto-configuration classes.
  • @AutoConfigureBefore, @AutoConfigureAfter, @AutoConfigureOrder – used when the auto-configuration needs to be applied in a specific order (before or after).
  • @Conditional – annotations such as @ConditionalOnBean, @ConditionalOnWebApplication, or @ConditionalOnClass allow a bean to be registered only when the condition is met, as shown in the sketch below.
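
Here is a minimal sketch (the class and bean names are hypothetical) showing how these annotations typically appear together:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Enables auto-configuration, component scanning, and bean registration in one step
@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}

@Configuration
class ConditionalConfig {
    // Registered only when the (hypothetical) class com.example.SomeClient is on the classpath
    @Bean
    @ConditionalOnClass(name = "com.example.SomeClient")
    public String someClientMarker() {
        return "someClient is available";
    }
}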
Microservices Architecture

14 May 2021

Microservices are self-regulating, independent services, each with a codebase that can be written and maintained even by a small team of developers. Microservices Architecture consists of such loosely coupled services, with each service responsible for the execution of its associated business logic.

The services are separated from each other based on the nature of their domains and belong to a mini-microservice pool. Enterprise mobile app developers leverage the capabilities of this architecture especially for complex applications. 

Microservices Architecture allows developers to release new versions of software frequently, thanks to sophisticated automation of software building, testing, and deployment – something that acts as a prime differentiation point between Microservices and Monolithic architecture.


Benefits

  • Since the services are bifurcated into pools, the architecture design pattern makes the system highly fault-tolerant. In other words, the whole system won’t collapse even if some microservices cease to function.
  • An enterprise mobile app development company working on such an architecture for clients can use multiple programming languages to build different microservices for their specific purposes. The technology stack can therefore be kept up to date with the latest upgrades in computing.
  • This architecture is a perfect fit for applications that need to scale. Since the services are already independent of each other, they can scale individually rather than forcing the entire system to expand.
  • Services can be integrated into any application depending upon the scope of work. 

Potential Drawbacks 

  • Since each service is unique in its contribution to the whole codebase, it can be challenging for an enterprise mobile application development company to interlink all of these distinct services and operate them seamlessly.
  • Developers must define a standard protocol for all services to adhere to. It is important to do so, as the decentralized approach towards coding microservices in multiple languages can pose serious issues while debugging. 
  • Each microservice, with its limited environment, is responsible for maintaining the integrity of the data. It is up to the architects of such a system to come up with a universally consistent data integrity protocol, wherever possible.
  • You definitely need the best of breed professionals to design such a system for you as the technology stack keeps changing. 

Ideal For

Use Microservices Architecture for apps in which a specific segment will be used more heavily than the others and would need sporadic bursts of scaling. Instead of a standalone application, you may also deploy this for a service that provides functionality to other applications of the system.

Remove Duplicates from Sorted Array II

06 Feb 2021

Given a sorted array, remove the duplicates from the array in-place such that each element appears at most twice, and return the new length.

Do not allocate extra space for another array; you must do this by modifying the input array in-place with O(1) extra memory.

Example

Given array [1, 1, 1, 3, 5, 5, 7]

The output should be 6, with the first six elements of the array being [1, 1, 3, 5, 5, 7]

Remove Duplicates from Sorted Array II solution in Java

The problem can be solved in O(n) time with O(1) extra space by using two pointers (indexes): one to read through the array and one to mark where the next kept element should be written.

class RemoveDuplicatesSortedArrayII {
  private static int removeDuplicates(int[] nums) {
    int n = nums.length;

    /*
     * This index will move when we modify the array in-place to include an element
     * so that it is not repeated more than twice.
     */
    int j = 0;

    for (int i = 0; i < n; i++) {
      /*
       * If the current element is equal to the element at index i+2, then skip the
       * current element because it is repeated more than twice.
       */
      if (i < n - 2 && nums[i] == nums[i + 2]) {
        continue;
      }

      nums[j++] = nums[i];
    }

    return j;
  }

  public static void main(String[] args) {
    int[] nums = new int[] { 1, 1, 1, 3, 5, 5, 7 };
    int newLength = removeDuplicates(nums);

    System.out.println("Length of array after removing duplicates = " + newLength);

    System.out.print("Array = ");
    for (int i = 0; i < newLength; i++) {
      System.out.print(nums[i] + " ");
    }
    System.out.println();
  }
}
# Output
Length of array after removing duplicates = 6
Array = 1 1 3 5 5 7 
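
As a side note, the same two-pointer idea generalizes to “at most k occurrences”; here is a brief sketch (with k = 2 reproducing the solution above):

class RemoveDuplicatesSortedArrayAtMostK {
  private static int removeDuplicates(int[] nums, int k) {
    int j = 0;
    for (int num : nums) {
      // Keep num unless the last k kept elements already equal it
      if (j < k || nums[j - k] != num) {
        nums[j++] = num;
      }
    }
    return j;
  }

  public static void main(String[] args) {
    int[] nums = new int[] { 1, 1, 1, 3, 5, 5, 7 };
    System.out.println(removeDuplicates(nums, 2)); // prints 6
  }
}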
Spring Boot File Upload / Download Rest API Example

03 Mar 2021

Uploading and downloading files are very common tasks for which developers need to write code in their applications.

In this article, You’ll learn how to upload and download files in a RESTful spring boot web service.

We’ll first build the REST APIs for uploading and downloading files, and then test those APIs using Postman. We’ll also write front-end code in JavaScript to upload files.

Following is the final version of our application –

Spring Boot File Upload and Download AJAX Rest API Web Service

All right! Let’s get started.

Creating the Application

Let’s generate our application using Spring Boot CLI. Fire up your terminal, and type the following command to generate the app –

$ spring init --name=file-demo --dependencies=web file-demo
Using service at https://start.spring.io
Project extracted to '/Users/rajeevkumarsingh/spring-boot/file-demo'

You may also generate the application through the Spring Initializr web tool by following the instructions below –

  1. Open http://start.spring.io
  2. Enter file-demo in the “Artifact” field.
  3. Add Web in the dependencies section.
  4. Click Generate to generate and download the project.

That’s it! You may now unzip the downloaded application archive and import it into your favorite IDE.

Configuring Server and File Storage Properties

First things first! Let’s configure our Spring Boot application to enable multipart file uploads and define the maximum file size that can be uploaded. We’ll also configure the directory into which all the uploaded files will be stored.

Open src/main/resources/application.properties file, and add the following properties to it –

## MULTIPART (MultipartProperties)
# Enable multipart uploads
spring.servlet.multipart.enabled=true
# Threshold after which files are written to disk.
spring.servlet.multipart.file-size-threshold=2KB
# Max file size.
spring.servlet.multipart.max-file-size=200MB
# Max Request Size
spring.servlet.multipart.max-request-size=215MB

## File Storage Properties
# All files uploaded through the REST API will be stored in this directory
file.upload-dir=/Users/folder/uploads

Note: Please change the file.upload-dir property to the path where you want the uploaded files to be stored.

Automatically binding properties to a POJO class

Spring Boot has an awesome feature called @ConfigurationProperties that lets you automatically bind the properties defined in the application.properties file to a POJO class.

package com.example.filedemo.property;

import org.springframework.boot.context.properties.ConfigurationProperties;

@ConfigurationProperties(prefix = "file")
public class FileStorageProperties {
    private String uploadDir;

    public String getUploadDir() {
        return uploadDir;
    }

    public void setUploadDir(String uploadDir) {
        this.uploadDir = uploadDir;
    }
}

The @ConfigurationProperties(prefix = "file") annotation does its job on application startup and binds all the properties with prefix file to the corresponding fields of the POJO class.

If you define additional file properties in the future, you may simply add a corresponding field in the above class, and Spring Boot will automatically bind the field with the property value.

Enable Configuration Properties

To enable the ConfigurationProperties feature, you need to add the @EnableConfigurationProperties annotation to any configuration class.

Open the main class src/main/java/com/example/filedemo/FileDemoApplication.java, and add the @EnableConfigurationProperties annotation to it like so –

package com.example.filedemo;

import com.example.filedemo.property.FileStorageProperties;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.properties.EnableConfigurationProperties;

@SpringBootApplication
@EnableConfigurationProperties({
        FileStorageProperties.class
})
public class FileDemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(FileDemoApplication.class, args);
    }
}

Writing APIs for File Upload and Download

Let’s now write the REST APIs for uploading and downloading files. Create a new controller class called FileController inside com.example.filedemo.controller package.

Here is the complete code for FileController –

package com.example.filedemo.controller;

import com.example.filedemo.payload.UploadFileResponse;
import com.example.filedemo.service.FileStorageService;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.io.Resource;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import org.springframework.web.multipart.MultipartFile;
import org.springframework.web.servlet.support.ServletUriComponentsBuilder;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

@RestController
public class FileController {

    private static final Logger logger = LoggerFactory.getLogger(FileController.class);

    @Autowired
    private FileStorageService fileStorageService;
    
    @PostMapping("/uploadFile")
    public UploadFileResponse uploadFile(@RequestParam("file") MultipartFile file) {
        String fileName = fileStorageService.storeFile(file);

        String fileDownloadUri = ServletUriComponentsBuilder.fromCurrentContextPath()
                .path("/downloadFile/")
                .path(fileName)
                .toUriString();

        return new UploadFileResponse(fileName, fileDownloadUri,
                file.getContentType(), file.getSize());
    }

    @PostMapping("/uploadMultipleFiles")
    public List<UploadFileResponse> uploadMultipleFiles(@RequestParam("files") MultipartFile[] files) {
        return Arrays.stream(files)
                .map(this::uploadFile)
                .collect(Collectors.toList());
    }

    @GetMapping("/downloadFile/{fileName:.+}")
    public ResponseEntity<Resource> downloadFile(@PathVariable String fileName, HttpServletRequest request) {
        // Load file as Resource
        Resource resource = fileStorageService.loadFileAsResource(fileName);

        // Try to determine file's content type
        String contentType = null;
        try {
            contentType = request.getServletContext().getMimeType(resource.getFile().getAbsolutePath());
        } catch (IOException ex) {
            logger.info("Could not determine file type.");
        }

        // Fallback to the default content type if type could not be determined
        if(contentType == null) {
            contentType = "application/octet-stream";
        }

        return ResponseEntity.ok()
                .contentType(MediaType.parseMediaType(contentType))
                .header(HttpHeaders.CONTENT_DISPOSITION, "attachment; filename=\"" + resource.getFilename() + "\"")
                .body(resource);
    }
}

The FileController class uses FileStorageService for storing files in the file system and retrieving them. It returns a payload of type UploadFileResponse after the upload is completed. Let’s define these classes one by one.

UploadFileResponse

As the name suggests, this class is used to return the response from the /uploadFile and /uploadMultipleFiles APIs.

Create UploadFileResponse class inside com.example.filedemo.payload package with the following contents –

package com.example.filedemo.payload;

public class UploadFileResponse {
    private String fileName;
    private String fileDownloadUri;
    private String fileType;
    private long size;

    public UploadFileResponse(String fileName, String fileDownloadUri, String fileType, long size) {
        this.fileName = fileName;
        this.fileDownloadUri = fileDownloadUri;
        this.fileType = fileType;
        this.size = size;
    }

	// Getters and Setters (Omitted for brevity)
}

Service for Storing Files in the File System and Retrieving Them

Let’s now write the service for storing files in the file system and retrieving them. Create a new class called FileStorageService.java inside com.example.filedemo.service package with the following contents –

package com.example.filedemo.service;

import com.example.filedemo.exception.FileStorageException;
import com.example.filedemo.exception.MyFileNotFoundException;
import com.example.filedemo.property.FileStorageProperties;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.io.Resource;
import org.springframework.core.io.UrlResource;
import org.springframework.stereotype.Service;
import org.springframework.util.StringUtils;
import org.springframework.web.multipart.MultipartFile;
import java.io.IOException;
import java.net.MalformedURLException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

@Service
public class FileStorageService {

    private final Path fileStorageLocation;

    @Autowired
    public FileStorageService(FileStorageProperties fileStorageProperties) {
        this.fileStorageLocation = Paths.get(fileStorageProperties.getUploadDir())
                .toAbsolutePath().normalize();

        try {
            Files.createDirectories(this.fileStorageLocation);
        } catch (Exception ex) {
            throw new FileStorageException("Could not create the directory where the uploaded files will be stored.", ex);
        }
    }

    public String storeFile(MultipartFile file) {
        // Normalize file name
        String fileName = StringUtils.cleanPath(file.getOriginalFilename());

        try {
            // Check if the file's name contains invalid characters
            if(fileName.contains("..")) {
                throw new FileStorageException("Sorry! Filename contains invalid path sequence " + fileName);
            }

            // Copy file to the target location (Replacing existing file with the same name)
            Path targetLocation = this.fileStorageLocation.resolve(fileName);
            Files.copy(file.getInputStream(), targetLocation, StandardCopyOption.REPLACE_EXISTING);

            return fileName;
        } catch (IOException ex) {
            throw new FileStorageException("Could not store file " + fileName + ". Please try again!", ex);
        }
    }

    public Resource loadFileAsResource(String fileName) {
        try {
            Path filePath = this.fileStorageLocation.resolve(fileName).normalize();
            Resource resource = new UrlResource(filePath.toUri());
            if(resource.exists()) {
                return resource;
            } else {
                throw new MyFileNotFoundException("File not found " + fileName);
            }
        } catch (MalformedURLException ex) {
            throw new MyFileNotFoundException("File not found " + fileName, ex);
        }
    }
}

Exception Classes

The FileStorageService class throws some exceptions in case of unexpected situations. Following are the definitions of those exception classes (All the exception classes go inside the package com.example.filedemo.exception).

1. FileStorageException

It’s thrown when an unexpected situation occurs while storing a file in the file system –

package com.example.filedemo.exception;

public class FileStorageException extends RuntimeException {
    public FileStorageException(String message) {
        super(message);
    }

    public FileStorageException(String message, Throwable cause) {
        super(message, cause);
    }
}

2. MyFileNotFoundException

It’s thrown when a file that the user is trying to download is not found.

package com.example.filedemo.exception;

import org.springframework.http.HttpStatus;
import org.springframework.web.bind.annotation.ResponseStatus;

@ResponseStatus(HttpStatus.NOT_FOUND)
public class MyFileNotFoundException extends RuntimeException {
    public MyFileNotFoundException(String message) {
        super(message);
    }

    public MyFileNotFoundException(String message, Throwable cause) {
        super(message, cause);
    }
}

Note that we’ve annotated the above exception class with @ResponseStatus(HttpStatus.NOT_FOUND). This ensures that Spring Boot responds with a 404 Not Found status when this exception is thrown.

Running the Application and Testing the APIs via Postman

We’re done developing our backend APIs. Let’s run the application and test the APIs via Postman. Type the following command from the root directory of the project to run the application –

mvn spring-boot:run

Once started, the application can be accessed at http://localhost:8080.

1. Upload File

Spring Boot File Upload Rest API Example

2. Upload Multiple Files

Spring Boot Upload Multiple Files Rest API Example

3. Download File

Spring Boot Download File Rest API Example
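
If you’d rather exercise the download endpoint from code instead of Postman, here is a minimal client sketch using Java 11’s built-in HttpClient (the file name myfile.txt is just an assumed example of a previously uploaded file):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Path;

public class DownloadFileClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Assumes a file named "myfile.txt" was uploaded earlier via /uploadFile
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/downloadFile/myfile.txt"))
                .build();

        // Stream the response body straight to a local file
        HttpResponse<Path> response =
                client.send(request, HttpResponse.BodyHandlers.ofFile(Path.of("downloaded-myfile.txt")));

        System.out.println("Status: " + response.statusCode() + ", saved to: " + response.body());
    }
}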

Developing the Front End

Our backend APIs are working fine. Let’s now write the front end code to let users upload and download files from our web app.

All the front-end files will go inside the src/main/resources/static folder. Following is the directory structure of our front-end code –

static
  ├── css
  │    └── main.css
  ├── js
  │    └── main.js
  └── index.html

A bit of HTML

<!DOCTYPE html>
<html>
    <head>
        <meta name="viewport" content="width=device-width, initial-scale=1.0, minimum-scale=1.0">
        <title>Spring Boot File Upload / Download Rest API Example</title>
        <link rel="stylesheet" href="/css/main.css" />
    </head>
    <body>
        <noscript>
            <h2>Sorry! Your browser doesn't support Javascript</h2>
        </noscript>
        <div class="upload-container">
            <div class="upload-header">
                <h2>Spring Boot File Upload / Download Rest API Example</h2>
            </div>
            <div class="upload-content">
                <div class="single-upload">
                    <h3>Upload Single File</h3>
                    <form id="singleUploadForm" name="singleUploadForm">
                        <input id="singleFileUploadInput" type="file" name="file" class="file-input" required />
                        <button type="submit" class="primary submit-btn">Submit</button>
                    </form>
                    <div class="upload-response">
                        <div id="singleFileUploadError"></div>
                        <div id="singleFileUploadSuccess"></div>
                    </div>
                </div>
                <div class="multiple-upload">
                    <h3>Upload Multiple Files</h3>
                    <form id="multipleUploadForm" name="multipleUploadForm">
                        <input id="multipleFileUploadInput" type="file" name="files" class="file-input" multiple required />
                        <button type="submit" class="primary submit-btn">Submit</button>
                    </form>
                    <div class="upload-response">
                        <div id="multipleFileUploadError"></div>
                        <div id="multipleFileUploadSuccess"></div>
                    </div>
                </div>
            </div>
        </div>
        <script src="/js/main.js" ></script>
    </body>
</html>

Some Javascript

'use strict';

var singleUploadForm = document.querySelector('#singleUploadForm');
var singleFileUploadInput = document.querySelector('#singleFileUploadInput');
var singleFileUploadError = document.querySelector('#singleFileUploadError');
var singleFileUploadSuccess = document.querySelector('#singleFileUploadSuccess');

var multipleUploadForm = document.querySelector('#multipleUploadForm');
var multipleFileUploadInput = document.querySelector('#multipleFileUploadInput');
var multipleFileUploadError = document.querySelector('#multipleFileUploadError');
var multipleFileUploadSuccess = document.querySelector('#multipleFileUploadSuccess');

function uploadSingleFile(file) {
    var formData = new FormData();
    formData.append("file", file);

    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/uploadFile");

    xhr.onload = function() {
        console.log(xhr.responseText);
        var response = JSON.parse(xhr.responseText);
        if(xhr.status == 200) {
            singleFileUploadError.style.display = "none";
            singleFileUploadSuccess.innerHTML = "<p>File Uploaded Successfully.</p><p>DownloadUrl : <a href='" + response.fileDownloadUri + "' target='_blank'>" + response.fileDownloadUri + "</a></p>";
            singleFileUploadSuccess.style.display = "block";
        } else {
            singleFileUploadSuccess.style.display = "none";
            singleFileUploadError.innerHTML = (response && response.message) || "Some Error Occurred";
        }
    }

    xhr.send(formData);
}

function uploadMultipleFiles(files) {
    var formData = new FormData();
    for(var index = 0; index < files.length; index++) {
        formData.append("files", files[index]);
    }

    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/uploadMultipleFiles");

    xhr.onload = function() {
        console.log(xhr.responseText);
        var response = JSON.parse(xhr.responseText);
        if(xhr.status == 200) {
            multipleFileUploadError.style.display = "none";
            var content = "<p>All Files Uploaded Successfully</p>";
            for(var i = 0; i < response.length; i++) {
                content += "<p>DownloadUrl : <a href='" + response[i].fileDownloadUri + "' target='_blank'>" + response[i].fileDownloadUri + "</a></p>";
            }
            multipleFileUploadSuccess.innerHTML = content;
            multipleFileUploadSuccess.style.display = "block";
        } else {
            multipleFileUploadSuccess.style.display = "none";
            multipleFileUploadError.innerHTML = (response && response.message) || "Some Error Occurred";
        }
    }

    xhr.send(formData);
}


singleUploadForm.addEventListener('submit', function(event){
    event.preventDefault();
    var files = singleFileUploadInput.files;
    if(files.length === 0) {
        singleFileUploadError.innerHTML = "Please select a file";
        singleFileUploadError.style.display = "block";
        return; // don't attempt an upload when no file is selected
    }
    uploadSingleFile(files[0]);
}, true);


multipleUploadForm.addEventListener('submit', function(event){
    event.preventDefault();
    var files = multipleFileUploadInput.files;
    if(files.length === 0) {
        multipleFileUploadError.innerHTML = "Please select at least one file";
        multipleFileUploadError.style.display = "block";
        return; // don't attempt an upload when no files are selected
    }
    uploadMultipleFiles(files);
}, true);

The above code is mostly self-explanatory. I’m using XMLHttpRequest along with a FormData object to upload file(s) as multipart/form-data.

And some CSS

* {
    -webkit-box-sizing: border-box;
    -moz-box-sizing: border-box;
    box-sizing: border-box;
}

body {
    margin: 0;
    padding: 0;
    font-weight: 400;
    font-family: "Helvetica Neue", Helvetica, Arial, sans-serif;
    font-size: 1rem;
    line-height: 1.58;
    color: #333;
    background-color: #f4f4f4;
}

body:before {
    height: 50%;
    width: 100%;
    position: absolute;
    top: 0;
    left: 0;
    background: #128ff2;
    content: "";
    z-index: 0;
}

.clearfix:after {
    display: block;
    content: "";
    clear: both;
}


h1, h2, h3, h4, h5, h6 {
    margin-top: 20px;
    margin-bottom: 20px;
}

h1 {
    font-size: 1.7em;
}

a {
    color: #128ff2;
}

button {
    box-shadow: none;
    border: 1px solid transparent;
    font-size: 14px;
    outline: none;
    line-height: 100%;
    white-space: nowrap;
    vertical-align: middle;
    padding: 0.6rem 1rem;
    border-radius: 2px;
    transition: all 0.2s ease-in-out;
    cursor: pointer;
    min-height: 38px;
}

button.primary {
    background-color: #128ff2;
    box-shadow: 0 2px 2px 0 rgba(0, 0, 0, 0.12);
    color: #fff;
}

input {
    font-size: 1rem;
}

input[type="file"] {
    border: 1px solid #128ff2;
    padding: 6px;
    max-width: 100%;
}

.file-input {
    width: 100%;
}

.submit-btn {
    display: block;
    margin-top: 15px;
    min-width: 100px;
}

@media screen and (min-width: 500px) {
    .file-input {
        width: calc(100% - 115px);
    }

    .submit-btn {
        display: inline-block;
        margin-top: 0;
        margin-left: 10px;
    }
}

.upload-container {
      max-width: 700px;
      margin-left: auto;
      margin-right: auto;
      background-color: #fff;
      box-shadow: 0 1px 11px rgba(0, 0, 0, 0.27);
      margin-top: 60px;
      min-height: 400px;
      position: relative;
      padding: 20px;
}

.upload-header {
    border-bottom: 1px solid #ececec;
}

.upload-header h2 {
    font-weight: 500;
}

.single-upload {
    padding-bottom: 20px;
    margin-bottom: 20px;
    border-bottom: 1px solid #e8e8e8;
}

.upload-response {
    overflow-x: hidden;
    word-break: break-all;
}

If you use jQuery

If you prefer using jQuery instead of vanilla JavaScript, you can upload files using the jQuery.ajax() method like so –

$('#singleUploadForm').submit(function(event) {
    var formElement = this;
    // You can directly create form data from the form element
    // (Or you could get the files from input element and append them to FormData as we did in vanilla javascript)
    var formData = new FormData(formElement);

    $.ajax({
        type: "POST",
        enctype: 'multipart/form-data',
        url: "/uploadFile",
        data: formData,
        processData: false,
        contentType: false,
        success: function (response) {
            console.log(response);
            // process response
        },
        error: function (error) {
            console.log(error);
            // process error
        }
    });

    event.preventDefault();
});

Conclusion

All right folks! In this article, we learned how to upload single as well as multiple files via REST APIs written in Spring Boot. We also learned how to download files in Spring Boot. Finally, we wrote code to upload files by calling the APIs through JavaScript.

Thank you for reading! See you in the next post!

Deploying / Hosting Spring Boot applications on Heroku

06 Feb 2021

Heroku is a cloud platform as a service (PaaS) that you can use to deploy, manage, and scale your applications. It supports several programming languages and has a very simple and convenient deployment model.

In this article, you’ll learn how to deploy Spring Boot applications on Heroku with step-by-step instructions.

There are several ways of deploying Spring Boot apps to Heroku. I’ll take you through them one by one, and I’ll also show you the pros and cons of using one method over the other.

Creating a Simple Spring Boot app for deployment

Before learning about deployment, let’s first create a simple Spring Boot app that we’ll deploy on Heroku.

  1. Bootstrapping the app

     The easiest way to bootstrap a new Spring Boot app is using Spring Boot CLI. Just enter the following command in your terminal to generate the project –

     $ spring init -n=heroku-demo -d=web heroku-demo
     Using service at https://start.spring.io
     Project extracted to '/Users/rajeevkumarsingh/heroku-demo'

     The command will generate the application with all the necessary configurations in a directory named heroku-demo.

     You can also use the Spring Initializr web app to generate the application if you don’t want to install Spring Boot CLI:

     • Open http://start.spring.io.
     • Enter heroku-demo in the artifact field.
     • Add Web in the dependencies section.
     • Click Generate Project to generate and download the project.

     This will download a zip archive of the project with all the directories and configurations.

  2. Building a demo endpoint

     Let’s add a controller endpoint to the application so that it responds with a message whenever we access the root (/) endpoint. Create a file named IndexController.java inside src/main/java/com/example/herokudemo/, and add the following code to it –

     package com.example.herokudemo;

     import org.springframework.web.bind.annotation.GetMapping;
     import org.springframework.web.bind.annotation.RestController;

     @RestController
     public class IndexController {

         @GetMapping("/")
         public String index() {
             return "Hello there! I'm running.";
         }
     }

  3. Running the app

     You can run the app using the following command –

     $ mvn spring-boot:run

     The app will start on port 8080 by default. You can access it by going to http://localhost:8080. You should see the message Hello there! I'm running..

Deploying a Spring Boot app on Heroku using Git and Heroku CLI

All right! Now that we have a sample app, let’s learn how to deploy it to Heroku. In this section we’ll deploy the app using Git and Heroku CLI.

You need to have a Heroku account. If you don’t have one, you can create it from Heroku’s Signup Page. Once you’re registered with Heroku, follow the steps below to deploy the Spring Boot app.

  1. Download and install Heroku CLI

     Heroku CLI is a command line application that lets you create, deploy, and manage Heroku apps from the command line. You can download Heroku CLI from the Heroku Dev Center. You’ll also find the instructions for installing Heroku CLI on your platform on that page.

  2. Login using your email and password

     Enter the following command to log in to Heroku using the Heroku CLI –

     heroku login

     You will be prompted to enter your Heroku account’s email and password. Enter all the details –

     Enter your Heroku credentials:
     Email: youremail@example.com
     Password: **************
     Logged in as youremail@example.com

     That’s it! You’re logged in to Heroku. We can now proceed with the deployment.

  3. Set up Git and create a Heroku app

     Execute the following commands from the root directory of the project to create a local Git repository for the project –

     git init
     git add .
     git commit -m "initial commit"

     Now, create a new Heroku app using the heroku create command like so –

     $ heroku create
     Creating app... done, ⬢ apple-custard-73702
     https://apple-custard-73702.herokuapp.com/ | https://git.heroku.com/apple-custard-73702.git

     The above command creates a new Heroku app and also makes a remote repository for the app at Heroku.

     Heroku chooses a random unique name for the app by default. If you want to specify a custom name, you can pass the app name in the heroku create command like this –

     heroku create <YOUR_UNIQUE_APP_NAME>

     You can also rename an existing app using the heroku apps:rename command –

     heroku apps:rename --app <OLD_NAME> <NEW_NAME>

     Let’s use the above command to rename our app –

     $ heroku apps:rename --app apple-custard-73702 projectname-heroku-demo
     Renaming apple-custard-73702 to projectname-heroku-demo... done
     Git remote heroku updated
     ▸ Don't forget to update git remotes for all other local checkouts of the app.

  4. Deploy the app to Heroku

     Finally, you can deploy the app by simply pushing the code to the remote repository named heroku, which was created by the heroku create command –

     git push heroku master

     Heroku automatically detects that the project is a Maven/Java app and triggers the build accordingly. The deployment will take some time. You should see the build logs on the console. Once the application is deployed, you can open it using the following command –

     heroku open

     The app should open in the system’s default browser and you should see the message Hello there! I'm running..

     You can see the logs of the application anytime by running the following command –

     heroku logs --tail

This is the easiest method of deploying Spring Boot apps to Heroku. But it’s not the ideal option if –

  • Your project doesn’t use Git.
  • Your project is closed source and you don’t want to push the source code to Heroku.
  • Your project uses some dependencies that are internal to your company. In this case the source code won’t compile at Heroku because it won’t be able to download the internal dependencies.

Deploying a Spring Boot app on Heroku using Heroku CLI Deploy Plugin

Unlike the previous method, this Heroku deployment method doesn’t require you to set up a Git repository for your project and push the source code to Heroku.

You can build and package the app locally and deploy the packaged jar file to Heroku. This is done using the Heroku CLI Deploy Plugin, which allows you to deploy a pre-built jar or war file to Heroku.

Follow the steps below to deploy a Spring Boot app on Heroku using Heroku CLI Deploy plugin:

  1. Install the Heroku Deploy CLI plugin

     Enter the following command in your terminal to install the heroku-cli-deploy plugin:

     heroku plugins:install heroku-cli-deploy

  2. Package your Spring Boot application

     Next, you’ll need to package the application in the form of a jar file. You can do that using the mvn package command –

     mvn clean package

     This command will create a packaged jar file of the application in the target folder of the project.

  3. Create a Heroku app

     Now let’s create a Heroku app using the heroku create command –

     heroku create <APP-NAME> --no-remote

     Notice the use of the --no-remote option. This tells Heroku not to set up any Git remote repository.

  4. Deploy the jar file on Heroku

     Finally, you can deploy the jar file using the following command –

     heroku deploy:jar target/heroku-demo-0.0.1-SNAPSHOT.jar --app <APP-NAME>

     The above command should have deployed the app successfully, but there is a caveat here. Let’s check the logs of the application to understand that –

     heroku logs --tail --app <APP-NAME>

     If you check the logs using the above command, you’ll notice an error like this –

     Error R10 (Boot timeout) -> Web process failed to bind to $PORT within 90 seconds of launch

     Heroku requires our application to listen on the port passed through the $PORT environment variable, but the Spring Boot application listens on port 8080 by default.

     Fixing the port binding error

     There are two ways of fixing the port binding error:

     • You can specify the port that the application should listen on in a property named server.port in the src/main/resources/application.properties file –

       server.port=${PORT:8080}

       The above property will evaluate to the PORT environment variable if it is available; otherwise it will fall back to the default port 8080.

     • Alternatively, you can use a Procfile to customize the command that is used to run the application. The customized command would include the server.port configuration that will be used by Spring Boot to bind the app. Create the Procfile in the root directory of the project with the following configuration –

       # Procfile
       web: java $JAVA_OPTS -jar target/heroku-demo-0.0.1-SNAPSHOT.jar -Dserver.port=$PORT $JAR_OPTS

     After doing one of the above two, you can deploy the app using the same heroku deploy:jar command –

     heroku deploy:jar target/heroku-demo-0.0.1-SNAPSHOT.jar --app <APP-NAME>

     This time the application should start successfully. You can verify that by opening the app in the browser using the heroku open command.

Deploying a Spring Boot app on Heroku using Heroku Maven Plugin

The Heroku Maven Plugin allows you to deploy any Maven application without having to install the Heroku CLI. It integrates the deployment into your build system, and works out of the box with any CI/CD server like Jenkins or Travis CI.

Follow the steps below to deploy your Spring Boot application using the Heroku Maven Plugin.

  1. Adding and configuring the Heroku Maven Plugin

     Add the heroku-maven-plugin to the plugins section of your pom.xml file –

     <build>
         <plugins>
             <!-- Heroku Maven Plugin Configuration -->
             <plugin>
                 <groupId>com.heroku.sdk</groupId>
                 <artifactId>heroku-maven-plugin</artifactId>
                 <version>2.0.3</version>
                 <configuration>
                     <appName>project-heroku-maven-demo</appName>
                     <includeTarget>false</includeTarget>
                     <includes>
                         <include>${project.build.directory}/${project.build.finalName}.jar</include>
                     </includes>
                     <jdkVersion>${java.version}</jdkVersion>
                     <processTypes>
                         <web>java $JAVA_OPTS -jar ${project.build.directory}/${project.build.finalName}.jar</web>
                     </processTypes>
                 </configuration>
             </plugin>
         </plugins>
     </build>

     Change the <appName> configuration to an app name of your choice.

  2. Creating the app

     You can create an app either using the Heroku CLI or from the Heroku dashboard. To create the app using the Heroku CLI, type the following command in your terminal –

     heroku create project-heroku-maven-demo

     Don’t forget to change the name project-heroku-maven-demo to the app name that you’ve set in the pom.xml file. If you don’t want to install the Heroku CLI, then create the app from the dashboard.

  3. Deploying the app using the Heroku Maven Plugin

     If you have the Heroku CLI installed, then use the following Maven command to deploy the application –

     mvn clean heroku:deploy

     If you don’t have the Heroku CLI installed, then you need to set the HEROKU_API_KEY environment variable before running heroku:deploy –

     HEROKU_API_KEY="YOUR HEROKU API KEY" mvn clean heroku:deploy

     You can find the API key from the Heroku dashboard.

Thanks for reading. In the next article, you’ll learn how to deploy Spring Boot applications on AWS using Elastic Beanstalk. Also, don’t forget to subscribe to my newsletter to get notified whenever I publish new articles on the blog.

Client-side vs. Server-side vs. Pre-rendering for Web Apps

26 Mar 2021

There has been something going on within the front-end community recently. Server-side rendering is getting more and more traction thanks to React and its built-in server-side hydration feature. But it’s not the only solution to deliver a fast experience to the user with a super-fast time-to-first-byte (TTFB) score: pre-rendering is also a pretty good strategy. What’s the difference between these solutions and a fully client-rendered application?

Client-rendered Application

Since frameworks like Angular, Ember.js, and Backbone came into existence, front-end developers have tended to render everything client-side. Thanks to Google and its ability to “read” JavaScript, it works pretty well, and it’s even SEO-friendly.

With a client-side rendering solution, you redirect the request to a single HTML file and the server will deliver it without any content (or with a loading screen) until you fetch all the JavaScript and let the browser compile everything before rendering the content.

Under a good and reliable internet connection, it’s pretty fast and works well. But it can be a lot better, and it doesn’t have to be difficult to make it that way. That’s what we will see in the following sections.

Server-side Rendering (SSR)

An SSR solution is something we used to do a lot, many years ago, but tend to forget in favor of a client-side rendering solution.

With old server-side rendering solutions, you built a web page—with PHP for example—the server compiled everything, included the data, and delivered a fully populated HTML page to the client. It was fast and effective.

But… every time you navigated to another route, the server had to do the work all over again: Get the PHP file, compile it, and deliver the HTML, with all the CSS and JS delaying the page load to a few hundred ms or even whole seconds.

What if you could do the first page load with the SSR solution, and then use a framework to do dynamic routing with AJAX, fetching only the necessary data?

This is why SSR is getting more and more traction within the community: React popularized this approach with an easy-to-use solution, the renderToString method.

This new kind of web application is called a universal app or an isomorphic app. There’s still some controversy over the exact meanings of these terms and the relationship between them, but many people use them interchangeably.

Anyway, the advantage of this solution is being able to develop an app server-side and client-side with the same code and deliver a really fast experience to the user with custom data. The disadvantage is that you need to run a server.

SSR is used to fetch data and pre-populate a page with custom content, leveraging the server’s reliable internet connection. That is, the server’s own internet connection is better than that of a user with lie-fi, so it’s able to prefetch and amalgamate data before delivering it to the user.

With the pre-populated data, using an SSR app can also fix an issue that client-rendered apps have with social sharing and the OpenGraph system. For example, if you have only one index.html file to deliver to the client, they will only have one type of metadata—most likely your homepage metadata. This won’t be contextualized when you want to share a different route, so none of your routes will be shown on other sites with their proper user content (description and preview picture) that users would want to share with the world.

Pre-rendering

The mandatory server for a universal app can be a deterrent for some and may be overkill for a small application. This is why pre-rendering can be a really nice alternative.

I discovered this solution with Preact and its own CLI, which allows you to compile all pre-selected routes so you can store fully populated HTML files on a static server. This lets you deliver a super-fast experience to the user, thanks to the Preact/React hydration function, without the need for Node.js.

The catch is, because this isn’t SSR, you don’t have user-specific data to show at this point—it’s just a static (and somewhat generic) file sent directly on the first request, as-is. So if you have user-specific data, here is where you can integrate a beautifully designed skeleton to show the user their data is coming, to avoid some frustration on their part:

Using a document skeleton as part of a loading indicator

There is another catch: In order for this technique to work, you still need to have a proxy or something to redirect the user to the right file.

Why?

With a single-page application, you need to redirect all requests to the root file, and then the framework redirects the user with its built-in routing system. So the first page load is always the same root file.

In order for a pre-rendering solution to work, you need to tell your proxy that some routes need specific files and not always the root index.html file.

For example, say you have four routes (/, /about, /jobs, and /blog) and all of them have different layouts. You need four different HTML files to deliver the skeleton to the user that will then let React/Preact/etc. rehydrate it with data. So if you redirect all those routes to the root index.html file, the page will have an unpleasant, glitchy feel during loading, whereby the user will see the skeleton of the wrong page until it finishes loading and replaces the layout. For example, the user might see a homepage skeleton with only one column, when they had asked for a different page with a Pinterest-like gallery.

The solution is to tell your proxy that each of those four routes needs a specific file:

  • https://my-website.com → Redirect to the root index.html file
  • https://my-website.com/about → Redirect to the /about/index.html file
  • https://my-website.com/jobs → Redirect to the /jobs/index.html file
  • https://my-website.com/blog → Redirect to the /blog/index.html file

This is why this solution can be useful for small applications—you can see how painful it would be if you had a few hundred pages.

Strictly speaking, it’s not mandatory to do it this way—you could just use a static file directly. For example, https://my-website.com/about/ will work without any redirection because it will automatically search for an index.html inside its directory. But you need this proxy if you have parameterized URLs—https://my-website.com/profile/guillaume will need to redirect the request to /profile/index.html with its own layout, because profile/guillaume/index.html doesn’t exist and will trigger a 404 error.

A flowchart showing how a proxy makes a difference in a pre-rendering solution, as described in the previous paragraph

In short, there are three basic views at play with the rendering strategies described above: A loading screen, a skeleton, and the full page once it’s finally rendered.

Comparing a loading screen, a skeleton, and a fully-rendered page

Depending on the strategy, sometimes we use all three of these views, and sometimes we jump straight to a fully-rendered page. Only in one use case are we forced to use a different approach:

Method                    | Landing (e.g. /)  | Static (e.g. /about)  | Fixed Dynamic (e.g. /news)  | Parameterized Dynamic (e.g. /users/:user-id)
Client-rendered           | Loading → Full    | Loading → Full        | Loading → Skeleton → Full   | Loading → Skeleton → Full
Pre-rendered              | Full              | Full                  | Skeleton → Full             | HTTP 404 (page not found)
Pre-rendered With Proxy   | Full              | Full                  | Skeleton → Full             | Skeleton → Full
SSR                       | Full              | Full                  | Full                        | Full

Client-only Rendering is Often Not Enough

Client-rendered applications are something we should avoid now because we can do better for the user. And doing better, in this case, is as easy as the pre-rendering solution. It’s definitely an improvement over client-only rendering and easier to implement than a fully server-side-rendered application.

An SSR/universal application can be really powerful if you have a large application with a lot of different pages. It allows your content to be focused and relevant when talking to a social crawler. This is also true for search engine robots, which now take your site’s performance into account when ranking it.

Stay tuned for a follow-up tutorial, where I will walk through the transformation of an SPA into pre-rendered and SSR versions, and compare their performance.

Log Aggregation For Microservice Architecture

08 Mar 2021

In a microservice architecture, log messages are scattered across many service instances, which makes troubleshooting difficult. To solve this problem, a preferred approach is to take advantage of a centralized logging service that aggregates logs from each service instance.

Users can search through these logs from one centralized spot and configure alerts when certain messages appear.

Standard tools are available and widely used by various enterprises.

The ELK Stack is the most frequently used solution, where the logging daemon, Logstash, collects and aggregates logs, which are indexed by Elasticsearch and can be searched via a Kibana dashboard.
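
As an illustration, here is a minimal sketch (assuming SLF4J with a backend such as Logback; the service and field names are hypothetical) of how a service can tag each log line with a correlation ID so that aggregated logs can be traced across services in Kibana:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class OrderService {

    private static final Logger logger = LoggerFactory.getLogger(OrderService.class);

    public void processOrder(String orderId, String correlationId) {
        // Put the correlation ID into the Mapped Diagnostic Context so the
        // logging backend can include it in every log line for this request
        MDC.put("correlationId", correlationId);
        try {
            logger.info("Processing order {}", orderId);
            // ... business logic ...
        } finally {
            MDC.clear(); // avoid leaking context across threads
        }
    }
}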

Persistence in Event Driven Architectures

28 Mar 2021

The importance of being persistent in event driven architectures.

Enterprises have to constantly adapt and evolve their enterprise architecture strategies in order to deliver the desired business outcomes. The evolving architecture patterns may involve business processing of sales transactions with a human in the loop, or they may involve machine-to-machine data processing using automation. Enterprises earlier adopted a request-driven model, where one service made a call to another service and that service responded to the request being made. In this request-driven model, you run into challenges around flexibility as you try to scale your global deployment footprint.

A new approach that is quickly gaining adoption in enterprises is called the event-driven architecture. In the new approach, you are able to increase application agility and flexibility by allowing for multiple data producers to coexist with multiple data consumers, and you process data only after an event or state change. In this enterprise architecture, the producers and consumers of data can be quickly extended to deliver better flexibility and agility as you scale your operation globally. Examples of event-driven solutions are available in a hosted managed format from most cloud providers today. In this blog post, we will look at a Kafka solution running in a Kubernetes cluster and how you can make sure persistence is achieved for the solution. This approach is running at customers in production today, supported by Confluent and Portworx.

Why Use Event-Driven Architecture?

With the advent of 5G technology, a vast amount of data will be generated by sensors, devices, systems, and humans in the loop to track, manage, and achieve business outcomes. Use cases for event-driven architectures may include the following:

  • Business Process state changes – You want to notify the change of state between a purchase order and accounts receivable with an event. The event-based approach allows a human or a decision engine to take the next appropriate step in the process.
  • Log and Metrics processing – The event-driven model allows for multiple actions to be triggered based on a single metric. The ability to send messages to multiple event handlers in different subsystems offers the scalability and resiliency required by certain business applications.

Why Use Kafka?

Apache Kafka is a scalable, fault-tolerant messaging system that enables you to build distributed real-time applications with an event-driven architecture. Kafka delivers events with fast ingestion rates, and provides persistence and in-order guarantees. Kafka adoption in your solution will depend on your specific use case. Below are some important concepts about Apache Kafka, followed by a short producer sketch:

  • Kafka organizes messages into “topics”.
  • The process that does the work in Kafka is called the “broker”. A producer pushes data into a topic hosted on a broker and a consumer pulls messages from a topic via a broker.
  • Kafka topics can be divided into “partitions”. This allows for parallelizing a topic across multiple brokers and increasing message ingests and throughput.
  • Brokers can hold multiple partitions but at any given time, only one partition can act as leader of a topic. A leader is responsible for updating any replicas with new data.
  • Brokers are responsible for storing messages to disk. The messages are stored with unique offsets. Messages in Kafka do not have a unique ID.
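
To make these concepts concrete, here is a minimal Java producer sketch (the topic name, key, and broker address are illustrative assumptions):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // broker address (assumption)
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Records with the same key land on the same partition, preserving per-key order
            producer.send(new ProducerRecord<>("order-events", "order-42", "PURCHASE_ORDER_CREATED"));
        }
    }
}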

Persistence With Portworx and Kafka on Kubernetes

Kafka needs ZooKeeper, which should be deployed as a StatefulSet in Kubernetes. Kafka brokers, which maintain the state of the topics and partitions, also need to be deployed as StatefulSets backed by persistent volumes.

  • Kafka offers replication of topics between different brokers. In the case of a node failure, Kafka can recover from failure using the replicated topics. This recovery mechanism does create additional network calls in order to synchronize with the replica on a different broker. The recovery time of the failed node and its broker depend on the amount of data that needs to be rehydrated and network latencies in the cluster.
  • Portworx offers data replication using the replication parameter in the Kubernetes storage class. In this scenario, the storage system is responsible for maintaining copies of the topic on different nodes. In the case of a node failure, the Kafka broker is rescheduled on a node that already contains the replicated topic data. The rebuild time of the broker is reduced because it uses the storage system to rehydrate the topic data without any network latencies. Once the data is rehydrated using the storage system, the broker can quickly catch up on the topic offset from an existing broker, thus reducing the overall recovery time.

Let’s walk through a Kubernetes node failure scenario on a Kubernetes cluster where a Kafka application is running and is backed by Portworx volumes.

Figure 1: We will describe the deployment of Kafka and Portworx on a 5 node Kubernetes cluster. In the image below, you can see that the Kafka deployment has 3 Brokers, each with 2 partitions and a replication factor of 2. For the Portworx data platform, we have a volume replication factor set to 3 for each volume.

Figure 2: We will describe a node failure event. In the diagram below, we have simulated a node failure and identified the Kubernetes worker node 2 has been taken out of production. Kubernetes worker node 2 contains a Kafka broker with 2 partitions and 2 Portworx persistent volumes.
Blog-Tech: Persistence_in_Event_Driven_Architectures figure 2
Figure 3: Once the Kubernetes node failure is detected by the Kubernetes API server, the Kafka broker is deployed to a node with available resources. The Portworx platform has the ability to influence the placement of the broker on a Kubernetes node. Since Kubernetes worker node 4 already has a copy of the Kafka broker’s persistent volumes, Portworx makes sure that the Kafka broker is deployed on Kubernetes worker node 4. On the Portworx platform’s recovery side, the volumes are also created on other nodes in order to maintain the defined replication factor of 3.

Figure 4: Once the failed node is back in production use, the Kafka application does not get scheduled on the recovered node, since the application is already at the desired number of Kafka brokers. On the Portworx platform side, replicated volumes are placed on the recovered Kubernetes node to maintain the desired replication factor for the volumes.


3 Layer Automated Testing

26 Mar 2021

Birth of Quality Engineering

Quality Assurance practice was relatively simple when we built monolith systems with traditional waterfall development models. The Quality Assurance (QA) teams would start validating the GUI layer after waiting months for product development to finish. To enhance the testing process, we would have to spend a lot of effort and $$$ (commercial tools) on automating the GUI via various tools like Micro Focus UFT, Selenium, TestComplete, Coded UI, Ranorex, etc., and most often these tests are complex to maintain and scale. Thus, most QA teams would have to restrict their automated tests to smoke and partial regression suites, ending in inadequate test coverage.

With modern technology, the new era tech companies, including Varo, have widely adopted Microservices-based architecture combined with the Agile/DevOps development model. This opens up a lot of opportunities for Quality Assurance practice, and in my opinion, this was the origin of the transformation from Quality Assurance to “Quality Engineering.”

The Common Pitfall

While automated testing gives us a massive benefit with the three R’s (Repeatable → run any number of times, Reliable → run with confidence, Reusable → develop once and share), it also comes with maintenance costs. I like to quote Grady Booch: “A fool with a tool is still a fool.” Targeting inappropriate areas would not provide us the desired benefit. We should consider several factors to choose the right candidates for automation; a few to name are the lifespan of the product, the volatility of the requirements, the complexity of the tests, business criticality, and technical feasibility.

It’s well known that the cost of a bug increases toward the right of the software lifecycle. So it is necessary to implement several walls of defense to catch these software bugs as early as possible (the Shift-Left Testing paradigm). By implementing an agile development model with a fail-fast mindset, we have taken care of our first wall of defense. But to move faster in this shorter development cycle, we must build robust automated test suites to take care of the rolled-out features and make room for testing the new features.

The 3 Layer Architecture

The Varo architecture comprises three essential layers.

  • The Frontend layer (Web, iOS, Mobile apps) — User experience
  • The Orchestration layer (GraphQL) — Makes multiple microservice calls and returns the decision and data to the frontend apps
  • The Microservice layer (gRPC, Kafka, Postgres) — Core business layer

While understanding the microservice architecture for testing, there were several questions posed.

  • Which layer to test?
  • What to test in these layers?
  • Does testing the frontend automatically validate the downstream services?
  • Does testing multiple layers introduce redundancies?

We will try to answer these by analyzing the table below, which provides an overview of what these layers mean for quality engineering.

After analyzing the table, we loosely adopted the Testing Pyramid pattern to invest in automated testing as follows:

  • Full feature/ functional validations on Microservices layer
  • Business process validations on Orchestration layer
  • E2E validations on Frontend layer

The diagram below best represents our test strategy for each layer.

Note: Though we have automated white-box tests such as unit-test and integration-test, we exclude those in this discussion.

Use Case

Let’s take the example below for illustration to understand best how this Pyramid works.

The user is presented with a form to submit. The form accepts three inputs — Field A to get the user identifier, Field B a drop-down value, and Field C accepts an Integer value (based on a defined range).

Once the user clicks the Submit button, the GraphQL API calls Microservice A to determine the type of customer. It then calls the next microservice, Microservice B, to validate the acceptable range of values for Field C (which depends on the values from Fields A and B).

Validations:

1. Feature Validations

✓ Positive behavior (Smoke, Functional, System Integration)

  • Validating behavior with a valid set of data combinations
  • Validating database
  • Validating integration — Impact on upstream/ downstream systems

✓ Negative behavior

  • Validating with invalid data (for example: Invalid authorization, Disqualified data)

2. Fluent Validations

✓ Evaluating field definitions — such as

  • Mandatory field (not empty/not null)
  • Invalid data types (for example: Int → negative value, String → Junk values with special characters or UUID → Invalid UUID formats)

Let’s look at how the “feature validations” can be written for the above use case by applying one of the test case authoring techniques — Boundary Value Analysis.

To test the scenario above would require 54 different combinations of feature validations; below is the rationale for picking the right candidates for each layer.

Microservice Layer: This is the layer delivered first, enabling us to invest in automated testing as early as possible (Shift-Left). And the scope for our automation would be 100% of all the above scenarios.

Orchestration Layer: This layer translates the information from the microservice to frontend layers; we try to select at least two tests (1 positive & 1 negative) for each scenario. The whole objective is to ensure the integration is working as expected.

Frontend Layer: In this layer, we focus on E2E validations, which means these validations are part of the complete user journey. But we ensure that we have one or more positive and negative scenarios embedded in those E2E tests. Business priority (data frequently used by real-time users) helps us select the best scenarios for our E2E validations.
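
To make this concrete, here is a hedged sketch of how a few boundary-value feature validations for Field C’s range check might look at the microservice layer (the validator logic, bounds, and test values are hypothetical stand-ins for the real service):

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class FieldCRangeValidationTest {

    // Hypothetical validator: accepts values in [min, max] inclusive
    static boolean isWithinRange(int value, int min, int max) {
        return value >= min && value <= max;
    }

    @ParameterizedTest
    // Boundary Value Analysis: just below, at, and just above each boundary
    @CsvSource({
        "9, false",   // min - 1
        "10, true",   // min
        "11, true",   // min + 1
        "99, true",   // max - 1
        "100, true",  // max
        "101, false"  // max + 1
    })
    void validatesFieldCBoundaries(int value, boolean expected) {
        assertEquals(expected, isWithinRange(value, 10, 100));
    }
}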

Conclusion

There are always going to be sets of redundant tests across these layers. But that is the trade-off we had to take to ensure that we have correct quality gates on each of these layers. The pros of this approach are that we achieve safe and faster deployments to Production by enabling quicker testing cycles, better test coverage, and risk-free decisions. In addition, having these functional test suites spread across the layers helps us to isolate the failures in respective areas, thus saving us time to troubleshoot an issue.

However, one size does not always fit all. The decision has to be made based on an understanding of how the software architecture is built and the supporting infrastructure available to facilitate the testing efforts. One of the critical success factors for this implementation is building a good quality engineering team with the right skills and proper tools. But that is another story — coming soon: “Quality Engineering: Redefined.”