Tagged: Spring Boot

Composite Primary Keys in JPA
14 May 2021

1. Introduction

In this tutorial, we’ll learn about Composite Primary Keys and the corresponding annotations in JPA.

2. Composite Primary Keys

A composite primary key – also called a composite key – is a combination of two or more columns to form a primary key for a table.

In JPA, we have two options for defining composite keys: the @IdClass and @EmbeddedId annotations.

In order to define the composite primary keys, we should follow some rules:

  • The composite primary key class must be public
  • It must have a no-arg constructor
  • It must define equals() and hashCode() methods
  • It must be Serializable

3. The IdClass Annotation

Let’s say we have a table called Account and it has two columns – accountNumber, accountType – that form the composite key. Now we have to map it in JPA.

As per the JPA specification, let’s create an AccountId class with these primary key fields:

public class AccountId implements Serializable {
    private String accountNumber;

    private String accountType;

    // default constructor

    public AccountId(String accountNumber, String accountType) {
        this.accountNumber = accountNumber;
        this.accountType = accountType;
    }

    // equals() and hashCode()
}

Next, let’s associate the AccountId class with the entity Account.

In order to do that, we need to annotate the entity with the @IdClass annotation. We must also declare the fields from the AccountId class in the entity Account and annotate them with @Id:

@Entity
@IdClass(AccountId.class)
public class Account {
    @Id
    private String accountNumber;

    @Id
    private String accountType;

    // other fields, getters and setters
}
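
With this mapping in place, the composite key class also serves as the identifier we hand to the persistence context. Below is a minimal lookup sketch; the persistence unit name "demo-pu" and the key values are illustrative assumptions, not part of the original example:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class AccountLookup {
    public static void main(String[] args) {
        // Assumes a persistence unit named "demo-pu" is configured in persistence.xml
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("demo-pu");
        EntityManager em = emf.createEntityManager();

        // With @IdClass, an AccountId instance acts as the primary key for the lookup
        AccountId accountId = new AccountId("ACC-1001", "SAVINGS");
        Account account = em.find(Account.class, accountId);

        em.close();
        emf.close();
    }
}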

4. The EmbeddedId Annotation

@EmbeddedId is an alternative to the @IdClass annotation.

Let’s consider another example where we have to persist some information of a Book with title and language as the primary key fields.

In this case, the primary key class, BookId, must be annotated with @Embeddable:

@Embeddable
public class BookId implements Serializable {
    private String title;
    private String language;

    // default constructor

    public BookId(String title, String language) {
        this.title = title;
        this.language = language;
    }

    // getters, equals() and hashCode() methods
}

Then, we need to embed this class in the Book entity using @EmbeddedId:

@Entity
public class Book {
    @EmbeddedId
    private BookId bookId;

    // constructors, other fields, getters and setters
}
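
As a hedged sketch (assuming Spring Data JPA is on the classpath; the repository name is illustrative), the embedded id class simply becomes the ID type of the repository:

import org.springframework.data.jpa.repository.JpaRepository;

// Illustrative Spring Data repository: BookId is used as the ID type of the Book entity
public interface BookRepository extends JpaRepository<Book, BookId> {
}

A whole-key lookup then looks like bookRepository.findById(new BookId("Clean Code", "English")), where the title and language values are only examples.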

5. @IdClass vs @EmbeddedId

As we just saw, the surface-level difference between these two is that with @IdClass we had to specify the columns twice – once in AccountId and again in Account – but with @EmbeddedId we didn't.

There are some other tradeoffs, though.

These different structures also affect the JPQL queries that we write.

For example, with @IdClass, the query is a bit simpler:

SELECT account.accountNumber FROM Account account

With @EmbeddedId, we have to do one extra traversal:

SELECT book.bookId.title FROM Book book

Also, @IdClass can be quite useful in places where we are using a composite key class that we can’t modify.

Finally, if we’re going to access parts of the composite key individually, we can make use of @IdClass, but in places where we frequently use the complete identifier as an object, @EmbeddedId is preferred.

6. Conclusion

In this quick article, we explored composite primary keys in JPA.

@Scope – How to get Scope of Bean from Code
14 May 2021

When we create a bean, we are creating actual instances of the class defined by that bean definition. We can also control the scope of the objects created from a particular bean definition.

There are five types of bean scopes:

  • singleton (default scope)
  • prototype
  • request
  • session
  • global-session

Singleton:
A single bean instance per Spring IoC container.

Prototype:
A single bean definition scoped to any number of object instances.

Request:
A single bean instance per HTTP request. Only valid in a web-aware Spring ApplicationContext.

Session:
A single bean instance per HTTP session. Only valid in a web-aware Spring ApplicationContext.

Global-Session:
Similar to session scope, but it only makes sense in the context of portlet-based web applications. Only valid in a web-aware Spring ApplicationContext.
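
To actually read a bean's scope from code, as the title suggests, here is a minimal sketch (the configuration class and bean names are illustrative assumptions): the scope is declared with @Scope and read back from the bean definition.

import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;

@Configuration
class AppConfig {

    // Declare a prototype-scoped bean; "singleton" is the default if @Scope is omitted
    @Bean
    @Scope("prototype")
    public MyService myService() {
        return new MyService();
    }
}

class MyService { }

public class ScopeDemo {
    public static void main(String[] args) {
        try (AnnotationConfigApplicationContext ctx =
                 new AnnotationConfigApplicationContext(AppConfig.class)) {
            // Read the scope back from the bean definition
            String scope = ctx.getBeanDefinition("myService").getScope();
            System.out.println("Scope of myService: " + scope); // prints "prototype"
        }
    }
}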

Advantages and Disadvantages of Kafka
31 Mar 2021

1. Advantages and Disadvantages of Kafka

In this post, we will discuss the advantages and disadvantages of Kafka, because it is important to know the limitations of a technology before using it, just as it is important to know its strengths.
So, let's discuss the Kafka advantages and disadvantages in detail.


2. Advantages of Kafka

Here we list some of the advantages of Kafka. These advantages are what make Kafka well suited for a data lake implementation. Let's go through them in detail:


a. High-throughput
Even without very large hardware, Kafka is capable of handling high-velocity, high-volume data. It can also support a message throughput of thousands of messages per second.
b. Low Latency
It is capable of handling these messages with very low latency, in the range of milliseconds, as demanded by most new use cases.
c. Fault-Tolerant
One of the best advantages is fault tolerance: Kafka is inherently resistant to node/machine failure within a cluster.
d. Durability
Here, durability refers to the persistence of data/messages on disk. Message replication also contributes to durability, so messages are never lost.
e. Scalability
Kafka can be scaled out on the fly by adding additional nodes, without incurring any downtime. Moreover, inside the Kafka cluster, message handling is fully transparent and seamless.
f. Distributed
The distributed architecture of Kafka makes it scalable using capabilities like replication and partitioning.
g. Message Broker Capabilities
Kafka tends to work very well as a replacement for a more traditional message broker. Here, a message broker refers to an intermediary program, which translates messages from the formal messaging protocol of the publisher to the formal messaging protocol of the receiver.
h. High Concurrency
Kafka is able to handle thousands of messages per second, in low-latency conditions and with high throughput. In addition, it permits reading and writing messages at high concurrency.
i. By Default Persistent
As discussed above, messages are persisted to disk, which makes Kafka durable and reliable.
j. Consumer Friendly
It is possible to integrate a variety of consumers with Kafka. Kafka can behave or act differently according to the consumer it integrates with, because each consumer has a different ability to handle the messages coming out of Kafka. Moreover, Kafka integrates well with consumers written in a variety of languages.
k. Batch Handling Capable (ETL like functionality)
Kafka can also be employed for batch-like use cases and can do the work of a traditional ETL tool, thanks to its capability of persisting messages.
l. Variety of Use Cases
It is able to manage a variety of use cases commonly required for a data lake, for example log aggregation, web activity tracking, and so on.
m. Real-Time Handling
Kafka can handle real-time data pipelines. The need for a technology that handles real-time messages from applications is one of the core reasons for choosing Kafka.

3. Disadvantages of Kafka

It is good to know Kafka's limitations, even if its advantages appear more prominent than its disadvantages. Consider Kafka only when its advantages are too compelling to ignore, and keep in mind that some disadvantages may be relevant for a particular use case but not really linked to ours. So, here we list some of the disadvantages associated with Kafka:
a. No Complete Set of Monitoring Tools
Kafka lacks a full set of management and monitoring tools. As a result, enterprise support teams can feel anxious about choosing Kafka and supporting it in the long run.
b. Issues with Message Tweaking
The broker uses certain system calls to deliver messages to the consumer. However, Kafka's performance drops significantly if the message needs some tweaking. It performs well when the message is unchanged, because it can then fully use the capabilities of the operating system.
c. No Wildcard Topic Selection
Kafka only matches the exact topic name; it does not support wildcard topic selection. This makes it incapable of addressing certain use cases.
d. Lack of Pace
There can be a problem with the pace of development, since the client APIs needed for other languages are maintained by different individuals and companies.
e. Reduced Performance with Large Messages
In general, there are no issues with individual message size. However, brokers and consumers start compressing messages as the size increases. When these are decompressed, node memory gets gradually used up. Compression also happens as data flows through the pipeline, which affects throughput and performance.
f. Behaves Clumsily
Sometimes Kafka starts behaving clumsily and slowly when the number of queues in a cluster increases.
g. Lacks Some Messaging Paradigms
Some messaging paradigms are missing in Kafka, such as request/reply and point-to-point queues. This is not always a problem, but it can be for certain use cases.
So, this was all about the advantages and disadvantages of Kafka. Hope you like our explanation.

4. Conclusion: Advantages and Disadvantages of Kafka

Hence, we have seen the advantages and disadvantages of Kafka in detail, which should help you before you start using it. If any doubt occurs regarding Kafka pros and cons, feel free to ask in the comment section.

Apache Kafka For Beginners
31 Mar 2021

What is Kafka?

We use Apache Kafka to enable communication between producers and consumers using message-based topics. Apache Kafka is a fast, scalable, fault-tolerant, publish-subscribe messaging system. Basically, it provides a platform for high-end, new-generation distributed applications. It also allows a large number of permanent or ad-hoc consumers. One of the best features of Kafka is that it is highly available, resilient to node failures, and supports automatic recovery. This makes Apache Kafka ideal for communication and integration between components of large-scale, real-world data systems.

Moreover, this technology can replace conventional message brokers such as JMS and AMQP, with the ability to give higher throughput, reliability, and replication. As its core abstractions, Kafka offers the Kafka broker, the Kafka producer, and the Kafka consumer. A Kafka broker is a node in the Kafka cluster whose job is to persist and replicate the data. A Kafka producer pushes messages into a message container called a Kafka topic, whereas a Kafka consumer pulls messages from a Kafka topic.

Before moving forward in this Kafka tutorial, let's understand what the term messaging system means in Kafka.

a. Messaging System in Kafka

We use a messaging system when we transfer data from one application to another. As a result, applications can focus on the data itself without worrying about how to share it. Distributed messaging is based on the concept of reliable message queuing: messages are queued asynchronously between client applications and the messaging system. There are two messaging patterns available, i.e. point-to-point and publish-subscribe (pub-sub); however, most messaging systems follow pub-sub.

  • Point to Point Messaging System

Here, messages are persisted in a queue. Although one or more consumers can consume the messages in the queue, a particular message can be consumed by at most one consumer. As soon as a consumer reads a message from the queue, it disappears from that queue.

  • Publish-Subscribe Messaging System

Here, messages are persisted in a topic. In this system, Kafka consumers can subscribe to one or more topics and consume all the messages in those topics. Moreover, message producers are called publishers and message consumers are called subscribers.

History of Apache Kafka

LinkedIn was facing the problem of low-latency ingestion of a huge amount of data from its website into a lambda architecture that could process real-time events. As a solution, Apache Kafka was developed in 2010, since no existing solution was available to deal with this problem.

There were technologies available for batch processing, but the deployment details of those technologies were exposed to downstream users, and when it came to real-time processing they were not suitable enough. Then, in 2011, Kafka was made public.

Why Should we use Apache Kafka Cluster?

As we all know, Big Data involves an enormous volume of data, and with it come two main challenges: collecting the large volume of data and analyzing the collected data. To overcome those challenges, we need a messaging system, and this is where Apache Kafka has proved its utility. There are numerous benefits of Apache Kafka, such as:

  • Tracking web activities by storing/sending the events for real-time processes.
  • Alerting and reporting the operational metrics.
  • Transforming data into the standard format.
  • Continuous processing of streaming data to the topics.

Therefore, because of its wide use, this technology gives tough competition to some of the most popular solutions such as ActiveMQ, RabbitMQ, AWS, etc.

Kafka Tutorial — Audience

Professionals aspiring to make a career in Big Data analytics using the Apache Kafka messaging system should refer to this Kafka tutorial article. It will give you a complete understanding of Apache Kafka.

Kafka Tutorial — Prerequisites

You should have a good understanding of Java, Scala, distributed messaging systems, and the Linux environment before proceeding with this Apache Kafka tutorial.

Kafka Architecture

Below we are discussing four core APIs in this Apache Kafka tutorial:


a. Kafka Producer API

This Kafka Producer API permits an application to publish a stream of records to one or more Kafka topics.
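
As a minimal sketch of the Producer API (assuming a local broker on localhost:9092 and a topic named demo-topic, both illustrative), publishing a record with the Java client could look like this:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class SimpleProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");            // assumed broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Publish one record to the (assumed) topic "demo-topic"
            producer.send(new ProducerRecord<>("demo-topic", "key-1", "Hello Kafka"));
        }
    }
}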

b. Kafka Consumer API

To subscribe to one or more topics and process the stream of records produced to them in an application, we use this Kafka Consumer API.
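
A matching Consumer API sketch (again assuming the illustrative localhost:9092 broker, the demo-topic topic, and a demo-group consumer group) might be:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SimpleConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");             // assumed broker address
        props.put("group.id", "demo-group");                          // assumed consumer group
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("demo-topic"));
            // Poll once and print whatever records arrived in that interval
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d, key=%s, value=%s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}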

c. Kafka Streams API

The Kafka Streams API allows an application to act as a stream processor: it consumes an input stream from one or more topics, effectively transforms the input streams into output streams, and produces an output stream to one or more output topics.
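
As a small sketch of the Streams API (the application id, broker address, and topic names are illustrative assumptions), a topology that uppercases each value could look like this:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class SimpleStream {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "demo-streams-app");   // assumed app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read from an input topic, transform each value, and write to an output topic
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(value -> value.toUpperCase()).to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
    }
}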

d. Kafka Connector API

This Kafka Connector API allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems. For example, a connector to a relational database might capture every change to a table.

Kafka Components

Kafka achieves messaging using the following components:

a. Kafka Topic

A topic is essentially a collection of messages; it is how Kafka stores and organizes messages across its system. In addition, we can replicate and partition topics. Here, replication refers to making copies and partitioning refers to dividing a topic. You can visualize a topic as a log in which Kafka stores messages. This ability to replicate and partition topics is one of the factors that enable Kafka's fault tolerance and scalability.

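As an illustrative sketch of partitioning and replication (the broker address, topic name, partition count, and replication factor are assumptions), a topic can be created programmatically with the admin client:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // Create a topic with 3 partitions and a replication factor of 2 (illustrative values)
            NewTopic topic = new NewTopic("demo-topic", 3, (short) 2);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}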

b. Kafka Producer

It publishes messages to a Kafka topic.

c. Kafka Consumer

This component subscribes to a topic(s), reads and processes messages from the topic(s).

d. Kafka Broker

Kafka Broker manages the storage of messages in the topic(s). If Kafka has more than one broker, that is what we call a Kafka cluster.

e. Kafka Zookeeper

Kafka uses ZooKeeper to provide the brokers with metadata about the processes running in the system and to facilitate health checking and broker leadership election.

Kafka Tutorial — Log Anatomy

In this Kafka tutorial, we view the log in terms of its partitions. Basically, a data source writes messages to the log, and one of the advantages is that, at any time, one or more consumers can read from the log they select: the log is written by the data source and read by consumers at different offsets.


Kafka Tutorial — Data Log

Kafka retains messages for a considerable amount of time, and consumers can read them at their convenience. However, if Kafka is configured to keep messages for 24 hours and a consumer is down for more than 24 hours, the consumer will lose messages. If the consumer's downtime is only 60 minutes, messages can be read from the last known offset. Kafka doesn't keep state on what consumers are reading from a topic.

Kafka Tutorial — Partition in Kafka

Every Kafka broker has a few partitions, and each partition can be either a leader or a replica of a topic. The leader is responsible for all writes and reads to a topic, as well as for updating the replicas with new data. If the leader fails, a replica takes over as the new leader.


Importance of Java in Apache Kafka

Apache Kafka is written in pure Java, and Kafka's native API is Java. However, many other languages such as C++, Python, .NET, and Go also support Kafka. Still, Java is the one platform where no third-party library is needed, and we can say that writing code in languages other than Java adds a little overhead.

In addition, we can use the Java language if we need the high processing rates that come standard with Kafka. Java also provides good community support for Kafka consumer clients. Hence, implementing Kafka in Java is the right choice.

Kafka Use Cases

There are several use cases of Kafka that show why we actually use Apache Kafka.

  • Messaging

Kafka works well as a replacement for a more traditional message broker. It has better throughput, built-in partitioning, replication, and fault tolerance, which makes it a good solution for large-scale message-processing applications.

  • Metrics

Kafka is a good fit for operational monitoring data. This includes aggregating statistics from distributed applications to produce centralized feeds of operational data.

  • Event Sourcing

Since it supports very large stored log data, Kafka is an excellent backend for event-sourcing applications.

Kafka Tutorial — Comparisons in Kafka

Many applications, such as ActiveMQ, RabbitMQ, Apache Flume, Storm, and Spark, offer functionality similar to Kafka. So why should you go for Apache Kafka instead of the others?

Let’s see the comparisons below:

a. Apache Kafka vs Apache Flume


i. Types of tool

Apache Kafka – It is a general-purpose tool for multiple producers and consumers.

Apache Flume – In contrast, it is a special-purpose tool for specific applications.

ii. Replication feature

Apache Kafka – It replicates the events using ingest pipelines.

Apache Flume – It does not replicate the events.

b. RabbitMQ vs Apache Kafka

RabbitMQ is one of the foremost Apache Kafka alternatives. So, let's see how they differ from one another:


i. Features

Apache Kafka – Kafka is distributed, and the data is shared and replicated with guaranteed durability and availability.

RabbitMQ – It offers relatively less support for these features.

ii. Performance rate

Apache Kafka — Its performance rate is high to the tune of 100,000 messages/second.

RabbitMQ — Whereas, the performance rate of RabbitMQ is around 20,000 messages/second.

iii. Processing

Apache Kafka — It allows reliable, distributed log processing. Stream-processing semantics are also built into Kafka Streams.

RabbitMQ — Here, the consumer is simply FIFO-based, reading from the head of the queue and processing messages one by one.

c. Traditional queuing systems vs Apache Kafka


i. Messages Retaining

Traditional queuing systems — Most queuing systems remove a message from the queue once it has been processed.

Apache Kafka — Here, messages persist even after being processed. They don’t get removed as consumers receive them.

ii. Logic-based processing

Traditional queuing systems — They do not allow processing logic based on similar messages or events.

Apache Kafka — It allows processing logic based on similar messages or events.

So, this was all about Apache Kafka Tutorials. Hope you like our explanation.

Conclusion: Kafka Tutorial

Hence, in this Kafka tutorial, we have seen the whole concept of Apache Kafka and what Kafka is. Moreover, we discussed Kafka components, use cases, and Kafka architecture, and finally compared Kafka with other messaging tools. If you have any query regarding this Kafka tutorial, feel free to ask in the comment section.

What is Spring Actuator? What are its advantages?
30 Mar 2021

Spring Boot's Actuator module allows us to monitor and manage application usage in a production environment, without having to code or configure any of this ourselves. This monitoring and management information is exposed via REST-like endpoint URLs.

The simplest way to enable these features is to add a dependency on the spring-boot-starter-actuator starter in the pom file:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

Spring Boot includes a number of built-in endpoints and lets us add our own. Further, each individual endpoint can be enabled or disabled.

Some of the important and widely used actuator endpoints are given below:

ENDPOINT        USAGE
/env            Returns the list of properties in the current environment.
/health         Returns application health information.
/auditevents    Exposes audit events information for the current application.
/beans          Returns a complete list of all the Spring beans in your application.
/trace          Returns trace logs (by default, the last 100 HTTP requests).
/dump           Performs a thread dump.
/metrics        Shows several useful metrics such as JVM memory used, system CPU usage, open files, and much more.
Spring Boot Actuator
30 Mar 2021

In this Spring Boot Actuator tutorial, we learn about the built-in HTTP endpoints available to any Boot application for various monitoring and management purposes. Before Spring Boot, if we had to introduce this type of monitoring functionality in our applications, we had to manually develop all those components, and they were very specific to our needs. With Spring Boot, we have the Actuator module, which makes this very easy.

We just need to configure a few things and we are done – all the management and monitoring related information is easily available. Let’s learn to configure Spring boot 2 actuator endpoints.

1. Spring Boot Actuator Module

Spring Boot's Actuator module allows you to monitor and manage application usage in a production environment, without having to code or configure any of this yourself. This monitoring and management information is exposed via REST-like endpoint URLs.

1.1. Actuator Maven Dependency

Maven:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

Gradle:

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
}

1.2. Important Actuator Endpoints

Most applications expose endpoints via HTTP, where the ID of the endpoint, along with a prefix of /actuator, is mapped to a URL. For example, by default, the health endpoint is mapped to /actuator/health.

By default, only /health and /info are exposed via Web APIs. The rest are exposed via JMX. Use management.endpoints.web.exposure.include=* to expose all endpoints through the Web APIs.

management.endpoints.web.exposure.include=*

# To expose only selected endpoints
#management.endpoints.jmx.exposure.include=health,info,env,beans

Some of the important and widely used actuator endpoints are given below:

ENDPOINT           USAGE
/auditevents       Exposes audit events information for the current application.
/beans             Returns a complete list of all the Spring beans in your application.
/mappings          Displays a collated list of all @RequestMapping paths.
/env               Returns the list of properties in the current environment.
/health            Returns application health information.
/caches            Exposes the available caches.
/conditions        Shows the conditions that were evaluated on configuration and auto-configuration classes.
/configprops       Displays a collated list of all @ConfigurationProperties.
/integrationgraph  Shows the Spring Integration graph. Requires a dependency on spring-integration-core.
/loggers           Shows the configuration of loggers in the application.
/scheduledtasks    Displays the scheduled tasks in the application.
/sessions          Allows retrieval and deletion of user sessions from a Spring Session-backed session store. Requires a Servlet-based web application using Spring Session.
/httptrace         Displays HTTP trace information (by default, the last 100 HTTP request-response exchanges). Requires an HttpTraceRepository bean.
/shutdown          Lets the application be gracefully shut down. Disabled by default.
/threaddump        Performs a thread dump.
/metrics           Shows several useful metrics such as JVM memory used, system CPU usage, open files, and much more.

If the application is a Spring web application (Spring MVC, Spring WebFlux, or Jersey), the following additional endpoints are available:

ENDPOINT   USAGE
/heapdump  Returns an hprof heap dump file.
/logfile   Returns the contents of the logfile if the logging.file.name or logging.file.path property has been set.

1.3. Securing Endpoints

By default, Spring Security is enabled for all actuator endpoints if it is available on the classpath.

If you wish to configure custom security for the HTTP endpoints, for example to allow only users with a certain role to access them, configure a WebSecurityConfigurerAdapter in the following manner:

@Configuration(proxyBeanMethods = false)
public class ActuatorSecurity extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.requestMatcher(EndpointRequest.toAnyEndpoint())
            .authorizeRequests((requests) -> requests.anyRequest().hasRole("ENDPOINT_ADMIN"));
        http.httpBasic();
    }
}

The above configuration ensures that only users with the role ENDPOINT_ADMIN have access to the actuator endpoints.

1.4. Enabling Endpoints

By default, all endpoints (except /shutdown) are enabled. To disable all endpoints by default, use the property:

management.endpoints.enabled-by-default=false

Then enable only the endpoints that the application needs to expose, using the pattern management.endpoint.<id>.enabled:

management.endpoint.health.enabled=true
management.endpoint.loggers.enabled=true

1.5. CORS support

CORS support is disabled by default and is only enabled once the management.endpoints.web.cors.allowed-origins property has been set.

management.endpoints.web.cors.allowed-origins=https://example.com
management.endpoints.web.cors.allowed-methods=GET,POST


1.6. Caching the Response

Actuator endpoints automatically cache the responses to read operations that do not take any parameters. Use the cache.time-to-live property to configure how long an endpoint caches its response.

management.endpoint.beans.cache.time-to-live=20s

2. Spring Boot Actuator Endpoint Example

In this example, we will create a simple spring boot application and access the actuator endpoints to know more about them.

2.1. Development environment

  • JDK 1.8, Eclipse, Maven – Development environment
  • Spring-boot – Underlying application framework
  • Spring-boot Actuator – Management endpoints

2.2. Create Maven Project

Start by creating a Spring Boot project on the Spring Initializr site with the Web, Rest Repositories, and Actuator dependencies. Download the project in zipped format, unzip it, and then import the project into Eclipse as a Maven project.


2.3. Add simple Rest endpoint

Now add a simple REST endpoint, /example, to the application.

package com.example.springbootmanagementexample;

import java.util.Date;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class SimpleRestController {

    @GetMapping("/example")
    public String example() {
        return "Hello User !! " + new Date();
    }
}

3. Spring Boot Actuator Endpoints Demo

I have added the management.security.enabled=false entry to the application.properties file to disable actuator security, since here I am more interested in the actuator endpoint responses.

Do a Maven build using mvn clean install and start the application using the java -jar target\spring-boot-actuator-example-0.0.1-SNAPSHOT.jar command. This will bring up a Tomcat server on the default port 8080, with the application deployed on it.

Access the /example API in the browser to generate some monitoring information on the server.

  • http://localhost:8080/actuator/env – gives all the environment configuration of the server.
  • http://localhost:8080/actuator/beans – gives all the Spring beans loaded in the context.
  • http://localhost:8080/actuator/threaddump – gives the current server thread dump.

These endpoints display standard information in the browser. They are the basic, important endpoints we generally refer to, but Spring Boot provides many more endpoints, as mentioned in this link.

4. Actuator Advance Configuration Options

4.1. Change the Management endpoint context path

By default, all endpoints are available on the application's default context path, suffixed with /actuator. If, for some reason, we have existing endpoints in the application starting with /actuator, then we can customize the base path to something else.

All we need to do is specify the new base path in application.properties:

management.endpoints.web.base-path=/manage

Now you will be able to access all actuator endpoints under a new URL. e.g.

  • /manage/health
  • /manage/dump
  • /manage/env
  • /manage/beans

4.2. Customize the management server port

To customize the management endpoint port, we need to add this entry in the application.properties file.

management.server.port=8081

5. Summary

In this Spring Boot Actuator example, we learned to configure management and monitoring endpoints with a few easy configurations. So next time you need to add application health checks or monitoring support, consider adding the Spring Boot Actuator module and using these endpoints.

Feel free to drop your questions in the comments section.

Happy Learning !!

How to Create and Bootstrap a Simple Boot Application?
30 Mar 2021

  • To create any simple Spring Boot application, we need to start by creating a pom.xml file. Add spring-boot-starter-parent as the parent of the project, which makes it a Spring Boot application.

    pom.xml:

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>

        <groupId>com.example</groupId>
        <artifactId>myproject</artifactId>
        <version>0.0.1-SNAPSHOT</version>

        <parent>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-parent</artifactId>
            <version>2.2.1.RELEASE</version>
        </parent>
    </project>

    This is the simplest Spring Boot application, and it can be packaged as a jar file. Now import the project into your IDE (optional).

  • Now we can start adding other starter dependencies that we are likely to need when developing a specific type of application. For example, for a web application, we add a spring-boot-starter-web dependency.

    pom.xml:

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>

  • Start adding application business logic, views, and domain data at this step. By default, Maven compiles sources from src/main/java, so create your application code inside this folder only. In the root package, add the application bootstrap class and add the @SpringBootApplication annotation. This class will be used to run the application; its main() method acts as the application entry point.

    MyApplication.java:

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;

    @SpringBootApplication
    public class MyApplication {
        public static void main(String[] args) {
            SpringApplication.run(MyApplication.class, args);
        }
    }

  • The SpringApplication class used in the above main() method bootstraps our application, starting Spring, which in turn starts the auto-configured Tomcat web server. MyApplication (the method argument) indicates the primary Spring component.

  • Since we used the spring-boot-starter-parent POM, we have a useful "run" goal that we can use to start the application. Type 'mvn spring-boot:run' from the root project directory to start the application. It will start the application, which we can verify in the console logs as well as in the browser by hitting the URL: localhost:8080.
Why We Use Spring Boot Maven Plugin?
30 Mar 2021

It provides Spring Boot support in Maven, letting us package executable jar or war archives and run an application “in-place”. To use it, we must use Maven 3.2 (or later).
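
As a minimal sketch, the plugin is declared in the build section of the pom.xml; when the project also inherits from spring-boot-starter-parent (as in the previous post), the repackage goal then runs as part of mvn package to produce the executable archive:

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>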

The plugin provides several goals to work with a Spring Boot application:

  • spring-boot:repackage: creates a jar or war file that is auto-executable. It can replace the regular artifact or can be attached to the build lifecycle with a separate classifier.
  • spring-boot:run: runs your Spring Boot application, with several options for passing parameters to it.
  • spring-boot:start and spring-boot:stop: integrate your Spring Boot application into the integration-test phase so that the application starts before it.
  • spring-boot:build-info: generates build information that can be used by the Actuator.
What are Spring Boot Common Annotations?
30 Mar 2021

The most commonly used and important Spring Boot annotations are listed below:

  • @EnableAutoConfiguration – enables the auto-configuration mechanism.
  • @ComponentScan – enables component scanning on the application classpath.
  • @SpringBootApplication – enables all three of the above in one step, i.e. the auto-configuration mechanism, component scanning, and the ability to register extra beans in the context.
  • @ImportAutoConfiguration – imports and applies only the specified auto-configuration classes.
  • @AutoConfigureBefore, @AutoConfigureAfter, @AutoConfigureOrder – used if the configuration needs to be applied in a specific order (before or after).
  • @Conditional – annotations such as @ConditionalOnBean, @ConditionalOnWebApplication, or @ConditionalOnClass allow a bean to be registered only when the condition is met (see the sketch after this list).
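
As a small illustrative sketch (the configuration class, bean, and class name being checked are assumptions, not from the original list), a conditional bean registration could look like this:

import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class JsonConfig {

    // Illustrative bean: registered only when Jackson's ObjectMapper is on the classpath
    @Bean
    @ConditionalOnClass(name = "com.fasterxml.jackson.databind.ObjectMapper")
    public JsonSupport jsonSupport() {
        return new JsonSupport();
    }
}

class JsonSupport { }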