Getting started with Spring Boot

26 Mar 2021

1. Overview

Spring Boot is an opinionated, convention-over-configuration focused addition to the Spring platform – highly useful to get started with minimum effort and create stand-alone, production-grade applications.

This tutorial is a starting point for Boot – a way to get started in a simple manner, with a basic web application.

We’ll go over some core configuration, a front-end, quick data manipulation, and exception handling.

2. Setup

First, let’s use Spring Initializr to generate the base for our project.

The generated project relies on the Boot parent:

<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.4.0</version>
    <relativePath />
</parent>

The initial dependencies are going to be quite simple:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
</dependency>

3. Application Configuration

Next, we’ll configure a simple main class for our application:

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

Notice how we’re using @SpringBootApplication as our primary application configuration class; behind the scenes, that’s equivalent to @Configuration, @EnableAutoConfiguration, and @ComponentScan together.

Finally, we’ll define a simple application.properties file – which for now only has one property:

server.port=8081

server.port changes the server port from the default 8080 to 8081; there are of course many more Spring Boot properties available.

4. Simple MVC View

Let’s now add a simple front end using Thymeleaf.

First, we need to add the spring-boot-starter-thymeleaf dependency to our pom.xml:

<dependency> 
    <groupId>org.springframework.boot</groupId> 
    <artifactId>spring-boot-starter-thymeleaf</artifactId> 
</dependency>

That enables Thymeleaf by default – no extra configuration is necessary.

We can now configure it in our application.properties:

spring.thymeleaf.cache=false
spring.thymeleaf.enabled=true 
spring.thymeleaf.prefix=classpath:/templates/
spring.thymeleaf.suffix=.html

spring.application.name=Bootstrap Spring Boot

Next, we’ll define a simple controller and a basic home page – with a welcome message:

@Controller
public class SimpleController {
    @Value("${spring.application.name}")
    String appName;

    @GetMapping("/")
    public String homePage(Model model) {
        model.addAttribute("appName", appName);
        return "home";
    }
}

Finally, here is our home.html:

<html xmlns:th="http://www.thymeleaf.org">
<head><title>Home Page</title></head>
<body>
<h1>Hello !</h1>
<p>Welcome to <span th:text="${appName}">Our App</span></p>
</body>
</html>

Note how we read the application name from the property we defined earlier and injected it into the controller, so that we can show it on our home page.

5. Security

Next, let’s add security to our application – by first including the security starter:

<dependency> 
    <groupId>org.springframework.boot</groupId> 
    <artifactId>spring-boot-starter-security</artifactId> 
</dependency>

By now, you’re hopefully noticing a pattern – most Spring libraries are easily imported into our project with the use of simple Boot starters.

Once the spring-boot-starter-security dependency is on the classpath of the application, all endpoints are secured by default, using either httpBasic or formLogin based on Spring Security’s content-negotiation strategy.

That’s why, if we have the starter on the classpath, we should usually define our own custom Security configuration by extending the WebSecurityConfigurerAdapter class:

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
            .anyRequest()
            .permitAll()
            .and().csrf().disable();
    }
}

In our example, we’re allowing unrestricted access to all endpoints.

Of course, Spring Security is an extensive topic and one not easily covered in a couple of lines of configuration – so I definitely encourage you to go deeper into the topic.
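
If we did want to lock things down instead, a minimal sketch might look like the following. It isn’t part of the original example, and the BasicAuthSecurityConfig name, user, and password are made up for illustration; it protects every endpoint with HTTP Basic and a single in-memory user, and it would replace the permissive SecurityConfig above rather than sit alongside it:

@Configuration
@EnableWebSecurity
public class BasicAuthSecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
          .withUser("user")
          .password("{noop}password") // {noop} keeps the demo password in plain text
          .roles("USER");
    }

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
            .anyRequest().authenticated()
            .and().httpBasic()
            .and().csrf().disable();
    }
}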

6. Simple Persistence

Let’s start by defining our data model – a simple Book entity:

@Entity
public class Book {
 
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private long id;

    @Column(nullable = false, unique = true)
    private String title;

    @Column(nullable = false)
    private String author;
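
    // standard constructors, getters and setters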
}

And its repository, making good use of Spring Data here:

public interface BookRepository extends CrudRepository<Book, Long> {
    List<Book> findByTitle(String title);
}

Finally, we need to of course configure our new persistence layer:

@EnableJpaRepositories("com.baeldung.persistence.repo") 
@EntityScan("com.baeldung.persistence.model")
@SpringBootApplication 
public class Application {
   ...
}

Note that we’re using:

  • @EnableJpaRepositories to scan the specified package for repositories
  • @EntityScan to pick up our JPA entities

To keep things simple, we’re using an H2 in-memory database here – so that we don’t have any external dependencies when we run the project.

Once we include the H2 dependency, Spring Boot auto-detects it and sets up our persistence with no need for extra configuration, other than the data source properties:

spring.datasource.driver-class-name=org.h2.Driver
spring.datasource.url=jdbc:h2:mem:bootapp;DB_CLOSE_DELAY=-1
spring.datasource.username=sa
spring.datasource.password=
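
As an optional extra that isn’t part of the original setup, we could also seed the in-memory database at startup with a CommandLineRunner bean, declared for instance in the Application class. The sample title and author are arbitrary, and we assume the standard setters on Book:

@Bean
CommandLineRunner seedBooks(BookRepository repository) {
    return args -> {
        Book book = new Book();
        book.setTitle("The Hobbit");
        book.setAuthor("J. R. R. Tolkien");
        repository.save(book);
    };
}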

Of course, like security, persistence is a broader topic than this basic setup covers, and one you should certainly explore further.

7. Web and the Controller

Next, let’s have a look at a web tier – and we’ll start that by setting up a simple controller – the BookController.

We’ll implement basic CRUD operations exposing Book resources with some simple validation:

@RestController
@RequestMapping("/api/books")
public class BookController {

    @Autowired
    private BookRepository bookRepository;

    @GetMapping
    public Iterable<Book> findAll() {
        return bookRepository.findAll();
    }

    @GetMapping("/title/{bookTitle}")
    public List<Book> findByTitle(@PathVariable String bookTitle) {
        return bookRepository.findByTitle(bookTitle);
    }

    @GetMapping("/{id}")
    public Book findOne(@PathVariable Long id) {
        return bookRepository.findById(id)
          .orElseThrow(BookNotFoundException::new);
    }

    @PostMapping
    @ResponseStatus(HttpStatus.CREATED)
    public Book create(@RequestBody Book book) {
        return bookRepository.save(book);
    }

    @DeleteMapping("/{id}")
    public void delete(@PathVariable Long id) {
        bookRepository.findById(id)
          .orElseThrow(BookNotFoundException::new);
        bookRepository.deleteById(id);
    }

    @PutMapping("/{id}")
    public Book updateBook(@RequestBody Book book, @PathVariable Long id) {
        if (book.getId() != id) {
          throw new BookIdMismatchException();
        }
        bookRepository.findById(id)
          .orElseThrow(BookNotFoundException::new);
        return bookRepository.save(book);
    }
}

Given this aspect of the application is an API, we made use of the @RestController annotation here – which is equivalent to a @Controller along with @ResponseBody – so that each method marshals the returned resource right to the HTTP response.

Just one note worth pointing out – we’re exposing our Book entity as our external resource here. That’s fine for our simple application here, but in a real-world application, you will likely want to separate these two concepts.
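
As a hypothetical sketch of what that separation might look like (not something the original application does), a simple BookDto could be exposed over HTTP instead of the entity, with the mapping done by hand or via a mapper library:

public class BookDto {

    private Long id;
    private String title;
    private String author;

    public static BookDto fromEntity(Book book) {
        BookDto dto = new BookDto();
        dto.id = book.getId();
        dto.title = book.getTitle();
        dto.author = book.getAuthor();
        return dto;
    }
    // standard getters and setters
}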

8. Error Handling

Now that the core application is ready to go, let’s focus on a simple centralized error handling mechanism using @ControllerAdvice:

@ControllerAdvice
public class RestExceptionHandler extends ResponseEntityExceptionHandler {

    @ExceptionHandler({ BookNotFoundException.class })
    protected ResponseEntity<Object> handleNotFound(
      Exception ex, WebRequest request) {
        return handleExceptionInternal(ex, "Book not found", 
          new HttpHeaders(), HttpStatus.NOT_FOUND, request);
    }

    @ExceptionHandler({ BookIdMismatchException.class, 
      ConstraintViolationException.class, 
      DataIntegrityViolationException.class })
    public ResponseEntity<Object> handleBadRequest(
      Exception ex, WebRequest request) {
        return handleExceptionInternal(ex, ex.getLocalizedMessage(), 
          new HttpHeaders(), HttpStatus.BAD_REQUEST, request);
    }
}

Beyond the standard exceptions we’re handling here, we’re also using a custom exception:

BookNotFoundException:

public class BookNotFoundException extends RuntimeException {

    public BookNotFoundException() {
        super();
    }

    public BookNotFoundException(String message, Throwable cause) {
        super(message, cause);
    }
    // ...
}
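
The BookIdMismatchException referenced in the controller and in the handler above isn’t shown in the original listing; a minimal version, assumed here for completeness, is just another unchecked exception:

public class BookIdMismatchException extends RuntimeException {

    public BookIdMismatchException() {
        super();
    }

    public BookIdMismatchException(String message, Throwable cause) {
        super(message, cause);
    }
}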

This should give you an idea of what’s possible with this global exception handling mechanism. If you’d like to see a full implementation, have a look at the in-depth tutorial.

Note that Spring Boot also provides an /error mapping by default. We can customize its view by creating a simple error.html:

<html lang="en" xmlns:th="http://www.thymeleaf.org">
<head><title>Error Occurred</title></head>
<body>
    <h1>Error Occurred!</h1>    
    <b>[<span th:text="${status}">status</span>]
        <span th:text="${error}">error</span>
    </b>
    <p th:text="${message}">message</p>
</body>
</html>

Like most other aspects in Boot, we can control that with a simple property:

server.error.path=/error2

9. Testing

Finally, let’s test our new Books API.

We can make use of @SpringBootTest to load the application context and verify there are no errors when running the app:

@RunWith(SpringRunner.class)
@SpringBootTest
public class SpringContextTest {

    @Test
    public void contextLoads() {
    }
}

Next, let’s add a JUnit test that verifies the calls to the API we’ve written, using RestAssured:

public class SpringBootBootstrapLiveTest {

    private static final String API_ROOT
      = "http://localhost:8081/api/books";

    private Book createRandomBook() {
        Book book = new Book();
        book.setTitle(randomAlphabetic(10));
        book.setAuthor(randomAlphabetic(15));
        return book;
    }

    private String createBookAsUri(Book book) {
        Response response = RestAssured.given()
          .contentType(MediaType.APPLICATION_JSON_VALUE)
          .body(book)
          .post(API_ROOT);
        return API_ROOT + "/" + response.jsonPath().get("id");
    }
}

First, let’s try to find books using the different lookup endpoints:

@Test
public void whenGetAllBooks_thenOK() {
    Response response = RestAssured.get(API_ROOT);
 
    assertEquals(HttpStatus.OK.value(), response.getStatusCode());
}

@Test
public void whenGetBooksByTitle_thenOK() {
    Book book = createRandomBook();
    createBookAsUri(book);
    Response response = RestAssured.get(
      API_ROOT + "/title/" + book.getTitle());
    
    assertEquals(HttpStatus.OK.value(), response.getStatusCode());
    assertTrue(response.as(List.class)
      .size() > 0);
}

@Test
public void whenGetCreatedBookById_thenOK() {
    Book book = createRandomBook();
    String location = createBookAsUri(book);
    Response response = RestAssured.get(location);
    
    assertEquals(HttpStatus.OK.value(), response.getStatusCode());
    assertEquals(book.getTitle(), response.jsonPath()
      .get("title"));
}

@Test
public void whenGetNotExistBookById_thenNotFound() {
    Response response = RestAssured.get(API_ROOT + "/" + randomNumeric(4));
    
    assertEquals(HttpStatus.NOT_FOUND.value(), response.getStatusCode());
}

Next, we’ll test creating a new book:

@Test
public void whenCreateNewBook_thenCreated() {
    Book book = createRandomBook();
    Response response = RestAssured.given()
      .contentType(MediaType.APPLICATION_JSON_VALUE)
      .body(book)
      .post(API_ROOT);
    
    assertEquals(HttpStatus.CREATED.value(), response.getStatusCode());
}

@Test
public void whenInvalidBook_thenError() {
    Book book = createRandomBook();
    book.setAuthor(null);
    Response response = RestAssured.given()
      .contentType(MediaType.APPLICATION_JSON_VALUE)
      .body(book)
      .post(API_ROOT);
    
    assertEquals(HttpStatus.BAD_REQUEST.value(), response.getStatusCode());
}

Update an existing book:

@Test
public void whenUpdateCreatedBook_thenUpdated() {
    Book book = createRandomBook();
    String location = createBookAsUri(book);
    book.setId(Long.parseLong(location.split("api/books/")[1]));
    book.setAuthor("newAuthor");
    Response response = RestAssured.given()
      .contentType(MediaType.APPLICATION_JSON_VALUE)
      .body(book)
      .put(location);
    
    assertEquals(HttpStatus.OK.value(), response.getStatusCode());

    response = RestAssured.get(location);
    
    assertEquals(HttpStatus.OK.value(), response.getStatusCode());
    assertEquals("newAuthor", response.jsonPath()
      .get("author"));
}

And delete a book:

@Test
public void whenDeleteCreatedBook_thenOk() {
    Book book = createRandomBook();
    String location = createBookAsUri(book);
    Response response = RestAssured.delete(location);
    
    assertEquals(HttpStatus.OK.value(), response.getStatusCode());

    response = RestAssured.get(location);
    assertEquals(HttpStatus.NOT_FOUND.value(), response.getStatusCode());
}

10. Conclusion

This was a quick but comprehensive intro to Spring Boot.

We of course barely scratched the surface here – there’s a lot more to this framework than we can cover in a single intro article.

Why Kafka Is so Fast

26 Mar 2021

Discover the deliberate design decisions that have made Kafka the performance powerhouse it is today.

The last few years have brought about immense changes in the software architecture landscape. The notion of a single monolithic application or even several coarse-grained services sharing a common data store has been all but erased from the hearts and minds of software practitioners world-wide. Autonomous microservices, event-driven architecture, and CQRS are the dominant tools in the construction of contemporary business-centric applications. To top it off, the proliferation of device connectivity — IoT, mobile, wearables — is creating an upward pressure on the number of events a system must handle in near-real-time.

Let’s start by acknowledging that the term ‘fast’ is multi-faceted, complex, and highly ambiguous. Latency, throughput, and jitter are metrics that shape and influence one’s interpretation of the term. It is also inherently contextual: the industry and application domains in themselves set the norms and expectations around performance. Whether or not something is fast depends largely on one’s frame of reference.

Apache Kafka is optimized for throughput at the expense of latency and jitter, while preserving other desirable qualities, such as durability, strict record order, and at-least-once delivery semantics. When someone says ‘Kafka is fast’, and assuming they are at least mildly competent, you can assume they are referring to Kafka’s ability to safely accumulate and distribute a very high number of records in a short amount of time.

Historically, Kafka was born out of LinkedIn’s need to move a very large number of messages efficiently, amounting to multiple terabytes of data on an hourly basis. The individual message propagation delay was deemed of secondary importance, as was the variability of that time. After all, LinkedIn is not a financial institution that engages in high-frequency trading, nor is it an industrial control system that operates within deterministic deadlines. Kafka can be used to implement near-real-time (otherwise known as soft real-time) systems.

Note: For those unfamiliar with the term, ‘real-time’ does not mean ‘fast’; it means ‘predictable’. Specifically, real-time implies a hard upper bound, otherwise known as a deadline, on the time taken to complete an action. If the system as a whole is unable to meet this deadline each and every time, it cannot be classed as real-time. Systems that are able to perform within a probabilistic tolerance are labelled as ‘near-real-time’. In terms of sheer throughput, real-time systems are often slower than their near-real-time or non-real-time counterparts.

There are two significant areas that Kafka draws upon for its speed, and they need to be discussed separately. The first relates to the low-level efficiency of the client and broker implementations. The second derives from the opportunistic parallelism of stream processing.

Broker performance

Log-structured persistence

Kafka utilizes a segmented, append-only log, largely limiting itself to sequential I/O for both reads and writes, which is fast across a wide variety of storage media. There is a widespread misconception that disks are slow; however, the performance of storage media (particularly rotating media) is greatly dependent on access patterns. The performance of random I/O on a typical 7,200 RPM SATA disk is between three and four orders of magnitude slower when compared to sequential I/O. Furthermore, a modern operating system provides read-ahead and write-behind techniques that prefetch data in large block multiples and group smaller logical writes into large physical writes. That said, the difference between sequential I/O and random I/O is still evident in flash and other forms of solid-state non-volatile media, although it is far less dramatic compared to rotating media.

Record batching

Sequential I/O is blazingly fast on most media types, comparable to the peak performance of network I/O. In practice, this means that a well-designed log-structured persistence layer will keep up with the network traffic. In fact, quite often the bottleneck with Kafka isn’t the disk, but the network. So in addition to the low-level batching provided by the OS, Kafka clients and brokers will accumulate multiple records in a batch — for both reading and writing — before sending them over the network. Batching of records amortizes the overhead of the network round-trip, using larger packets and improving bandwidth efficiency.

Batch compression

The impact of batching is particularly obvious when compression is enabled, as compression becomes generally more effective as the data size increases. Especially when using text-based formats such as JSON, the effects of compression can be quite pronounced, with compression ratios typically ranging from 5x to 7x. Furthermore, record batching is largely done as a client-side operation, which transfers the load onto the client and has a positive effect not only on the network bandwidth but also on the brokers’ disk I/O utilization.
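
To make the batching and compression knobs concrete, here is a small illustrative sketch of a producer configured with the standard batch.size, linger.ms, and compression.type properties; the broker address and the values themselves are placeholders rather than recommendations:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;

public class BatchingProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("batch.size", "65536");     // accumulate up to 64 KiB per partition batch
        props.put("linger.ms", "20");         // wait up to 20 ms for a batch to fill
        props.put("compression.type", "lz4"); // compress whole batches on the client

        Producer<String, String> producer = new KafkaProducer<>(props);
        producer.close();
    }
}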

Cheap consumers

Unlike traditional MQ-style brokers which remove messages at point of consumption (incurring the penalty of random I/O), Kafka doesn’t remove messages after they are consumed — instead, it independently tracks offsets at each consumer group level. The progression of offsets themselves is published on an internal Kafka topic __consumer_offsets. Again, being an append-only operation, this is fast. The contents of this topic are further reduced in the background (using Kafka’s compaction feature) to only retain the last known offsets for any given consumer group.

Compare this model with more traditional message brokers which typically offer several contrasting message distribution topologies. On one hand is the message queue — a durable transport for point-to-point messaging, with no point-to-multipoint ability. On the other hand, a pub-sub topic allows for point-to-multipoint messaging but does so at the expense of durability. Implementing a durable point-to-multipoint messaging model in a traditional MQ requires maintaining a dedicated message queue for each stateful consumer. This creates both read and write amplification. On one hand, the publisher is forced to write to multiple queues. Alternatively, a fan-out relay may consume records from one queue and write to several others, but this only defers the point of amplification. On the other hand, several consumers are generating load on the broker — being a mixture of read and write I/O, both sequential and random.

Consumers in Kafka are ‘cheap’, insofar as they don’t mutate the log files (only the producer or internal Kafka processes are permitted to do that). This means that a large number of consumers may concurrently read from the same topic without overwhelming the cluster. There is still some cost in adding a consumer, but it is mostly sequential reads with a low rate of sequential writes. So it’s fairly normal to see a single topic being shared across a diverse consumer ecosystem.

Unflushed buffered writes

Another fundamental reason for Kafka’s performance, and one that is worth exploring further: Kafka doesn’t actually call fsync when writing to the disk before acknowledging the write; the only requirement for an ACK is that the record has been written to the I/O buffer. This is a little-known but crucial fact: it is what makes Kafka perform as if it were an in-memory queue — because for all intents and purposes Kafka is a disk-backed in-memory queue (limited by the size of the buffer/pagecache).

On the flip side, this form of writing is unsafe, as the failure of a replica can lead to data loss even though the record has seemingly been acknowledged. In other words, unlike say a relational database, acknowledging a write alone does not imply durability. What makes Kafka durable is running several in-sync replicas; even if one were to fail, the others (assuming there is more than one) will remain operational — providing that the failures are uncorrelated (that is, multiple replicas don’t fail simultaneously due to a common upstream failure). So the combination of a non-blocking approach to I/O with no fsync, and redundant in-sync replicas gives Kafka its combination of high throughput, durability, and availability.

Client-side optimisations

Most databases, queues, and other forms of persistent middleware are designed around the notion of an all-mighty server (or a cluster of servers), and fairly thin clients that communicate with the server(s) over a well-known wire protocol. Client implementations are generally considered to be significantly simpler than the server. As a result, the server will absorb the bulk of the load — the clients merely act as interfaces between the application code and the server.

Kafka takes a different approach to client design. A significant amount of work is performed on the client before records get to the server. This includes the staging of records in an accumulator, hashing the record keys to arrive at the correct partition index, checksumming the records and the compression of the record batch. The client is aware of the cluster metadata and periodically refreshes this metadata to keep abreast of any changes to the broker topology. This lets the client make low-level forwarding decisions; rather than sending a record blindly to the cluster and relying on the latter to forward it to the appropriate broker node, a producer client will forward writes directly to partition masters. Similarly, consumer clients are able to make intelligent decisions when sourcing records, potentially using replicas that are geographically closer to the client when issuing read queries. (This feature is a more recent addition to Kafka, available as of version 2.4.0.)

Zero-copy

One of the typical sources of inefficiencies is copying byte data between buffers. Kafka uses a binary message format that is shared by the producer, the broker, and the consumer parties so that data chunks can flow end-to-end without modification, even if it’s compressed. While eliminating structural differences between communicating parties is an important step, it doesn’t in itself avoid the copying of data.

Kafka solves this problem on Linux and UNIX systems by using Java’s NIO framework, specifically, the transferTo() method of a java.nio.channels.FileChannel. This method permits the transfer of bytes from a source channel to a sink channel without involving the application as a transfer intermediary. To appreciate the difference that NIO makes, consider the traditional approach where a source channel is read into a byte buffer, then written to a sink channel as two separate operations:

File.read(fileDesc, buf, len);
Socket.send(socket, buf, len);

Diagrammatically, this can be represented using the following.

Although this looks simple enough, internally, the copy operation requires four context switches between user mode and kernel mode, and the data is copied four times before the operation is complete. The diagram below outlines the context switches at each step.

Looking at this in more detail —

  1. The initial read() causes a context switch from user mode to kernel mode. The file is read, and its contents are copied to a buffer in the kernel address space by the DMA (Direct Memory Access) engine. This is not the same buffer that was used in the code snippet.
  2. Prior to returning from read(), the kernel buffer is copied into the user-space buffer. At this point, our application can read the contents of the file.
  3. The subsequent send() will switch back into kernel mode, copying the user-space buffer into the kernel address space — this time into a different buffer associated with the destination socket. Behind the scenes, the DMA engine takes over, asynchronously copying the data from the kernel buffer to the protocol stack. The send() method does not wait for this prior to returning.
  4. The send() call returns, switching back to the user-space context.

In spite of its mode-switching inefficiencies and additional copying, the intermediate kernel buffer can actually improve performance in many cases. It can act as a read-ahead cache, asynchronously prefetching blocks, thereby front-running requests from the application. However, when the amount of requested data is significantly larger than the kernel buffer size, the kernel buffer becomes a performance bottleneck. Rather than copying the data directly, it forces the system to oscillate between user and kernel modes until all the data is transferred.

By contrast, the zero-copy approach is handled in a single operation. The snippet from the earlier example can be rewritten as a one-liner:

fileDesc.transferTo(offset, len, socket);

The zero-copy approach is illustrated below.

Under this model, the entire transfer is handled by a single call. Specifically, the transferTo() method instructs the block device to read data into a read buffer by the DMA engine. This buffer is then copied to another kernel buffer for staging to the socket. Finally, the socket buffer is copied to the NIC buffer by DMA.

As a result, we have reduced the number of copies from four to three, and only one of those copies involves the CPU. We have also reduced the number of context switches from four to two.

This is a massive improvement, but it’s not quite zero-copy yet. The latter can be achieved as a further optimization when running Linux kernels 2.4 and later, and on network interface cards that support the gather operation. This is illustrated below.

Calling the transferTo() method causes the device to read data into a kernel read buffer by the DMA engine, as per the previous example. However, with the gather operation, there is no copying between the read buffer and the socket buffer. Instead, the NIC is given a pointer to the read buffer, along with the offset and the length, which is vacuumed up by DMA. At no point is the CPU involved in copying buffers.

Comparisons of traditional and zero-copy on file sizes ranging from a few megabytes to a gigabyte show performance gains by a factor of two to three in favor of zero-copy. But what’s more impressive, is that Kafka achieves this using a plain JVM with no native libraries or JNI code.
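
For reference, a self-contained sketch of the transferTo() pattern described above might look like this; the file name and socket address are placeholders, and error handling is omitted for brevity:

import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.FileChannel;
import java.nio.channels.SocketChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class ZeroCopySend {
    public static void main(String[] args) throws IOException {
        try (FileChannel file = FileChannel.open(Path.of("segment.log"), StandardOpenOption.READ);
             SocketChannel socket = SocketChannel.open(new InetSocketAddress("localhost", 9092))) {
            long position = 0;
            long remaining = file.size();
            // transferTo() may transfer fewer bytes than requested, so loop until done
            while (remaining > 0) {
                long sent = file.transferTo(position, remaining, socket);
                position += sent;
                remaining -= sent;
            }
        }
    }
}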

Avoiding the GC

The heavy use of channels, native buffers, and the page cache has one additional benefit — reducing the load on the garbage collector (GC). For example, running Kafka on a machine with 32 GB of RAM will result in 28–30 GB usable for the page cache, completely outside of the GC’s scope. The difference in throughput is minimal — in the region of several percentage points — as the throughput of a correctly-tuned GC can be quite high, especially when dealing with short-lived objects. The real gains are in the reduction of jitter; by avoiding the GC, the brokers are less likely to experience a pause that may impact the client, extending the end-to-end propagation delay of records.

To be fair, the avoidance of GC is less of a problem now, compared to what it used to be when Kafka was conceived. Modern GCs like Shenandoah and ZGC scale to huge, multi-terabyte heaps, and have tunable worst-case pause times, down to single-digit milliseconds. It is not uncommon these days to see JVM-based applications using large heap-based caches outperform off-heap designs.

Stream parallelism

The efficiency of log-structured I/O is one crucial aspect of performance, mostly affecting writes; Kafka’s treatment of parallelism in the topic structure and the consumer ecosystem is fundamental to its read performance. The combination produces an overall very high end-to-end messaging throughput. Concurrency is ingrained into its partitioning scheme and the operation of consumer groups, which is effectively a load-balancing mechanism within Kafka — distributing partition assignments approximately evenly among the individual consumer instances within the group. Compare this to a more traditional MQ: in an equivalent RabbitMQ setup, multiple concurrent consumers may read from a queue in a round-robin fashion, but in doing so they forfeit the notion of message ordering.

The partitioning mechanism also allows for the horizontal scalability of Kafka brokers. Every partition has a dedicated leader; any nontrivial topic (with multiple partitions) can, therefore, utilize the entire cluster of brokers for writes. This is yet another point of distinction between Kafka and a message queue; where the latter utilizes clustering for availability, Kafka will genuinely balance the load across the brokers for availability, durability, and throughput.

The producer specifies the partition when publishing a record, assuming that you are publishing to a topic with multiple partitions. (One may have a single-partition topic, in which case this is a non-issue.) This may be accomplished either directly — by specifying a partition index, or indirectly — by way of a record key, which deterministically hashes to a consistent (i.e. same every time) partition index. Records sharing the same hash are guaranteed to occupy the same partition. Assuming a topic with multiple partitions, records with a different key will likely end up in different partitions. However, due to hash collisions, records with different hashes may also end up in the same partition. Such is the nature of hashing. If you understand how a hash table works, this is no different.

The actual processing of records is done by consumers, operating within an (optional) consumer group. Kafka guarantees that a partition may only be assigned to at most one consumer within its consumer group. (We say ‘at most’ to cover the case when all consumers are offline.) When the first consumer in a group subscribes to the topic, it will receive all partitions on that topic. When a second consumer subsequently joins, it will get approximately half of the partitions, relieving the first consumer of half of its prior load. This enables you to process an event stream in parallel, adding consumers as necessary (ideally, using an auto-scaling mechanism), providing that you have adequately partitioned your event stream.
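
As a rough illustration of the consumer-group mechanics (the topic and group names below are made up), each instance running this code with the same group.id receives its own share of the partitions and is rebalanced automatically as instances join or leave:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupedConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "orders-processors"); // all instances sharing this ID split the partitions
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                      record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}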

Control of record throughput is accomplished in two ways:

  1. The topic partitioning scheme. Topics should be partitioned to maximize the number of independent event sub-streams. In other words, record order should only be preserved where it is absolutely necessary. If any two records are not legitimately related in a causal sense, they shouldn’t be bound to the same partition. This implies the use of different keys, as Kafka will use a record’s key as a hashing source to derive its consistent partition mapping.
  2. The number of consumers in the group. You can increase the number of consumers to match the load of inbound records, up to the number of partitions in the topic. (You can have more consumers if you wish, but the partition count will place an upper bound on the number of active consumers which get at least one partition assignment; the remaining consumers will remain idle.) Note that a consumer could be a process or a thread. Depending on the type of workload that the consumer performs, you may be able to employ multiple individual consumer threads or process records in a thread pool.

If you were wondering whether Kafka is fast, how it achieves its renowned performance characteristics, or if it can scale to your use cases, you should hopefully by now have all the answers you need.

To make things abundantly clear, Kafka is not the fastest (that is, most throughput-capable) messaging middleware — there are other platforms capable of greater throughput — some are software-based and some are implemented in hardware. Nor is it the best throughput-latency compromise — Apache Pulsar is a promising technology that is scalable and achieves a better throughput-latency profile while offering identical ordering and durability guarantees. The rationale for adopting Kafka is that as a complete ecosystem, it remains unmatched overall. It exhibits excellent performance while offering an environment that is abundant and mature, but also involving — in spite of its size, Kafka is still growing at an enviable pace.

The designers and maintainers of Kafka have done an amazing job at devising a solution that is performance-oriented at its core. Few of its design elements feel like an afterthought or a bolt-on. From offloading of work to clients to the log-structured persistence on the broker, batching, compression, zero-copy I/O, and stream-level parallelism — Kafka throws down the gauntlet to just about any other message-oriented middleware, commercial or open-source. And most impressively, it does so without compromising on qualities such as durability, record order, and at-least-once delivery semantics.

Kafka is not the simplest of messaging platforms, and there is a fair bit to learn. One must come to grips with the concepts of a total and partial order, topics, partitions, consumers and consumer groups, before comfortably designing and building high-performance event-driven systems. And while the knowledge curve is substantial, the results will certainly be worth your while. If you are keen on taking the proverbial ‘red pill’, read the Introduction to Event Streaming with Kafka and Kafdrop.

Introduction to Event Streaming with Kafka and Kafdrop

26 Mar 2021

Event sourcing, eventual consistency, microservices, CQRS… These are quickly becoming household names in mainstream application development. But do you know what makes them tick? What are the basic building blocks required to assemble complex, business-centric applications from fine-grained services without turning the lot into a big ball of mud?

This article examines a fundamental building block — event streaming. Leading the charge will be Apache Kafka — the de facto standard in event streaming platforms, which we’ll observe through Kafdrop — a feature-packed web UI.

A Brief Intro

Event streaming platforms reside in the broader class of Message-oriented Middleware (MoM) and are similar to traditional message queues and topics but offer stronger temporal guarantees and typically order-of-magnitude performance gains due to log-structured immutability. In simple terms, write operations are mostly limited to sequential appends, which make them fast. Really fast.

Whereas messages in a traditional Message Queue (MQ) tend to be arbitrarily ordered and generally independent of one another, events (or records) in a stream tend to be strongly ordered, often chronologically or causally. Also, a stream persists its records, whereas an MQ will discard a message as soon as it has been read.

For this reason, event streaming tends to be a better fit for Event-Driven Architectures, encompassing event sourcing, eventual consistency, and CQRS concepts. (Of course, there are FIFO message queues too, but the differences between FIFO queues and fully-fledged event streaming platforms are quite substantial, not limited to ordering alone.)

Event streaming platforms are a comparatively recent paradigm within the broader MoM domain. There are only a handful of mainstream implementations available, compared to hundreds of MQ-style brokers, some going back to the 1980s (e.g. Tuxedo). Compared to established standards such as AMQP, MQTT, XMPP, and JMS, there are no equivalent standards in the streaming space.

Event streaming platforms are an active area of continuous research and experimentation. In spite of this, streaming platforms aren’t just a niche concept or an academic idea with a few esoteric use cases; they can be applied effectively to a broad range of messaging and eventing scenarios, routinely displacing their more traditional counterparts.

Architecture Overview

The diagram below offers a brief overview of the Kafka component architecture. While the intention isn’t to indoctrinate you with all of Kafka’s inner workings, some appreciation of its design will go a long way in explaining the key concepts that we will cover shortly.

Kafka Architecture Overview

Kafka is a distributed system comprising several key components:

  • Broker nodes: Responsible for the bulk of I/O operations and durable persistence within the cluster. Brokers accommodate the append-only log files that comprise the topic partitions hosted by the cluster. Partitions can be replicated across multiple brokers for both horizontal scalability and increased durability — these are called replicas. A broker node may act as the leader for certain replicas, while being a follower for others. A single broker node will also be elected as the cluster controller — responsible for the internal management of partition states. This includes the arbitration of the leader-follower roles for any given partition.
  • ZooKeeper nodes: Under the hood, Kafka needs a way to manage the overall controller status in the cluster. Should the controller drop out for whatever reason, there is a protocol in place to elect another controller from the set of remaining brokers. The actual mechanics of controller election, heart-beating, and so forth, are largely implemented in ZooKeeper. ZooKeeper also acts as a configuration repository of sorts, maintaining cluster metadata, leader-follower states, quotas, user information, ACLs, and other housekeeping items. Due to the underlying gossiping and consensus protocol, the number of ZooKeeper nodes must be odd.
  • Producers: These are client applications responsible for appending records to Kafka topics. Because of the log-structured nature of Kafka and the ability to share topics across multiple consumer ecosystems, only producers are able to modify the data in the underlying log files. The actual I/O is performed by the broker nodes on behalf of the producer clients. Any number of producers may publish to the same topic, selecting the partitions used to persist the records.
  • Consumers: These are client applications that read from topics. Any number of consumers may read from the same topic; however, depending on the configuration and grouping of consumers, there are rules governing the distribution of records among the consumers.

Topics, Partitions, Records, and Offsets

A partition is a totally ordered sequence of records and is fundamental to Kafka. A record has an ID — a 64-bit integer offset and a millisecond-precise timestamp. Also, it may have a key and a value; both are byte arrays and both are optional. The term “totally ordered” simply means that for any given producer, records will be written in the order they were emitted by the application. If record P was published before Q, then P will precede Q in the partition. (Assuming P and Q share a partition.)

Furthermore, they will be read in the same order by all consumers; P will always be read before Q, for every possible consumer. This ordering guarantee is vital in a large majority of use cases. Published records will generally correspond to some real-life events, and preserving the timeline of these events is often essential.

Note: Kafka uses the term “record,” where others might use “message” or “event.” In this article, we will use the terms interchangeably, depending on the context. Similarly, you might see the term “stream” as a generic substitute for “topic.”

There is no recognized ordering across producers. If two (or more) producers emit records simultaneously, those records may materialize in arbitrary order. That said, this order will still be observed uniformly across all consumers.

A record’s offset uniquely identifies it in the partition. The offset is a strictly monotonically-increasing integer in a sparse address space, meaning that each successive offset is always higher than its predecessor and there may be varying gaps between neighboring offsets. Gaps might legitimately appear if compaction is enabled or as a result of transactions; we don’t need to delve into the details at this stage. Suffice it to say that offsets need not be contiguous.

Your application shouldn’t attempt to literally interpret an offset or guess what the next offset might be. It may, however, infer the relative order of any record pair based on their offsets, sort the records by their offset, and so forth.

The diagram below shows what a partition looks like on the inside.

     start of partition
+--------+-----------------+
|0..00000|First record     |
+--------+-----------------+
|0..00001|Second record    |
+--------+-----------------+
|0..00002|Third record     |
+--------+-----------------+
|0..00003|Fourth record    |
+--------+-----------------+
|0..00007|Fifth record     |
+--------+-----------------+
|0..00008|Sixth record     |
+--------+-----------------+
|0..00010|Seventh record   |
+--------+-----------------+
            ...
+--------+-----------------+
|0..56789|Last record      |
+--------+-----------------+
       end of partition

The beginning offset, also called the low-water mark, is the first message that will be presented to a consumer. Due to Kafka’s bounded retention, this is not necessarily the first message that was published. Records may be pruned on the basis of time and/or partition size. When this occurs, the low-water mark will appear to advance, and records earlier than the low-water mark will be truncated.

Conversely, the high-water mark is the offset immediately following the last record in the partition, also known as the end offset. It is the offset that will be assigned to the next record that will be published. It is not the offset of the last record.

A topic is a logical composition of partitions. A topic may have one or more partitions, and a partition must be a part of exactly one topic. Topics are fundamental to Kafka, allowing for both parallelism and load balancing.

Earlier, we said that partitions exhibit total order. Because partitions within a topic are mutually independent, the topic is said to exhibit partial order. In simple terms, this means that certain records may be ordered in relation to one another while being unordered with respect to certain other records. The concepts of total and partial order, while sounding somewhat academic, are hugely important in the construction of performant event streaming pipelines. They enable us to process records in parallel where we can, while maintaining order where we must. We’ll explore the concepts of record order, consumer parallelism, and topic sizing in a short while.

Example: Publishing Messages

Let’s put some of this theory into practice. We are going to spin up a pair of Docker containers — one for Kafka and another for Kafdrop. Rather than launching them individually, we’ll use Docker Compose.

Create a docker-compose.yaml file in a directory of your choice, containing the following:

version: "2"
services:
  kafdrop:
    image: obsidiandynamics/kafdrop
    restart: "no"
    ports:
      - "9000:9000"
    environment:
      KAFKA_BROKERCONNECT: "kafka:29092"
    depends_on:
      - "kafka"
  kafka:
    image: obsidiandynamics/kafka
    restart: "no"
    ports:
      - "2181:2181"
      - "9092:9092"
    environment:
      KAFKA_LISTENERS: "INTERNAL://:29092,EXTERNAL://:9092"
      KAFKA_ADVERTISED_LISTENERS: "INTERNAL://kafka:29092,EXTERNAL://localhost:9092"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT"
      KAFKA_INTER_BROKER_LISTENER_NAME: "INTERNAL"

Note: We’re using the obsidiandynamics/kafka image for convenience because it neatly bundles Kafka and ZooKeeper into a single image. If you wanted to, you could replace this with images from Confluent or Wurstmeister, but then you’d have to wire it all up properly. The obsidiandynamics/kafka image does all this for you, so it’s highly recommended for beginners (and lazy pros).

Then, start it with docker-compose up. Once it boots, navigate to localhost:9000 in your browser. You should see the Kafdrop landing screen.

Kafdrop landing page

You should see our single-broker cluster. It’s a promising start, but there are no topics. Not a problem; let’s create a topic and publish some messages using Kafka’s command-line tools. Conveniently, we already have a Kafka image running as part of our docker-compose stack, so we can shell into it to use the built-in CLI tools.

docker exec -it kafka-kafdrop_kafka_1 bash

This gets you into a Bash shell. The tools are in the /opt/kafka/bin directory, so let’s cd into it:

cd /opt/kafka/bin

Create a topic named streams-intro with three partitions:

./kafka-topics.sh --bootstrap-server localhost:9092 \
    --create --partitions 3 --replication-factor 1 \
    --topic streams-intro

Switching back to Kafdrop, we should now see the new topic in the list.

Kafdrop topics list

Time to publish stuff. We are going to use the kafka-console-producer tool:

./kafka-console-producer.sh --broker-list localhost:9092 \
    --topic streams-intro --property "parse.key=true" \
    --property "key.separator=:"

Note: kafka-topics uses the --bootstrap-server argument to configure the Kafka broker list, while kafka-console-producer uses the --broker-list argument for the same purpose. Also, --property arguments are largely undocumented; be prepared to Google your way around.

Records are separated by newlines. The key and the value parts are delimited by colons, as indicated by the key.separator property. For the sake of an example, type in the following (a copy-paste will do):

foo:first message
foo:second message
bar:first message
foo:third message
bar:second message

Press CTRL+D when done. Then, switch back to Kafdrop and click on the streams-intro topic. You’ll see an overview of the topic, along with a detailed breakdown of the underlying partitions:

Kafdrop topic overview

Let’s pause for a moment and dissect what’s been done. We created a topic with three partitions. We then published five records using two unique keys — foo and bar. Kafka uses keys to map records to partitions, such that all records with the same key will always appear on the same partition. Handy, but also important because it lets the publisher dictate the precise order of records. We’ll discuss key hashing and partition assignments in more detail later; in the meanwhile, sit back and enjoy the ride.

Looking at the partitions table, partition #0 has the first and last offsets at zero and two respectively. Partition #2 has them at zero and three, while partition #1 appears to be blank. Clicking on #0 in the Kafdrop web UI sends us to a topic viewer:

Kafdrop topic viewer

We can see the two records published under the bar key. Note, they are completely unrelated to the foo records. Other than being collated within the same topic, there is nothing that binds records across partitions.

Note: In case you were wondering, the arrow to the left of the message lets you expand and pretty-print JSON-encoded messages. As our examples didn’t use JSON, there’s nothing to pretty-print.

It can be said without exaggeration that Kafka’s built-in tooling is an abomination. There is no consistency in the naming of command arguments and the simple act of publishing keyed messages requires you to jump through hoops — passing in obscure, undocumented properties. The usability of the built-in tools is a well-known heartache within the Kafka community. This is a real shame. It’s like buying a Ferrari, only to have it delivered with plastic hub caps. Fortunately, there are alternatives — both commercial and open source — that can fill the glaring gaps in tooling and observability.

Consumers and Consumer Groups

So far we have learned that producers emit records into the stream; these records are organized into nicely ordered partitions. Kafka’s pub-sub topology adheres to a flexible multipoint-to-multipoint model, meaning that there may be any number of producers and consumers simultaneously interacting with a stream. Depending on the actual solution context, stream topologies may also be point-to-multipoint, multipoint-to-point, and point-to-point. It’s about time we looked at how records are consumed.

A consumer is a process or a thread that attaches to a Kafka cluster via a client library. (One is available for most languages.) A consumer generally, but not necessarily, operates as part of an encompassing consumer group. The group is specified by the group.id property. Consumer groups are effectively a load-balancing mechanism within Kafka — distributing partition assignments approximately evenly among the individual consumer instances within the group.

When the first consumer in a group joins the topic, it will receive all partitions in that topic. When a second consumer subsequently joins, it will get approximately half of the partitions, relieving the first consumer of half of its prior load. The process runs in reverse when consumers leave (by disconnecting or timing out) — the remaining consumers will absorb a greater number of partitions.

So, a consumer siphons records from a topic, pulling from the share of partitions that have been assigned to it by Kafka, alongside the other consumers in its group. As far as load-balancing goes, this should be fairly straightforward. But here’s the kicker — the act of consuming a record does not remove it. This might seem contradictory at first, especially if you associate the act of consuming with depletion. (If anything, a consumer should have been called a ‘reader’, but let’s not dwell on the choice of terminology.)

The simple fact is, consumers have absolutely no impact on topics and their partitions. A topic is an append-only ledger that may only be mutated by the producer, or by Kafka itself (as part of compaction or cleanup). Consumers are “cheap,” so you can have quite a number of them tail the logs without stressing the cluster. This is yet another point of distinction between an event stream and a traditional message queue, and it’s a crucial one.

A consumer internally maintains an offset that points to the next record in a partition, advancing the offset for every successive read. When a consumer first subscribes to a topic, it may elect to start at either the head-end or the tail-end of the topic. This behavior is controlled by setting the auto.offset.reset property to one of latest, earliest, or none. In the latter case, an exception will be thrown if no previous offset exists for the consumer group.

Consumers retain their offset state vector locally. Since consumers across different consumer groups do not interfere, there may be any number of them reading concurrently from the same topic. Consumers run at their own pace — a slow or backlogged consumer has no impact on its peers.

To illustrate this concept, consider a scenario involving a topic with two partitions. Two consumer groups, A and B, are subscribed to the topic. Each group has three instances, the consumers being named A1, A2, A3, B1, B2, and B3. The diagram below illustrates how the two groups might share the topic and how the consumers advance through the records independently of one another.

               Partition 0                       Partition 1
               +--------+                        +--------+
               |0..00000|                        |0..00000|
               +--------+                        +--------+
               |0..00001| <= consumer A2         |0..00001|
               +--------+                        +--------+
               |0..00002|                        |0..00002| <= consumer A1
               +--------+                        +--------+
               |0..00003|                        |0..00003|
               +--------+                        +--------+
                  ...                               ...
               +--------+                        +--------+
               |0..00008| <= consumer B3         |0..00008| <= consumer B2
               +--------+                        +--------+
               |0..00009|                        |0..00009|
               +--------+                        +--------+
producer P1 => |0..00010|                        |0..00010|
               +--------+                        +--------+
                                  producer P1 => |0..00011|
                                                 +--------+

Look carefully and you’ll notice something is missing. Consumers A3 and B1 aren’t there. That’s because Kafka guarantees that a partition may only be assigned to at most one consumer within its consumer group. (We say ‘at most’ to cover the case when all consumers are offline.) Because there are three consumers in each group, but only two partitions, one consumer will remain idle — waiting for another consumer in its respective group to depart before being assigned a partition.

In this manner, consumer groups are not only a load-balancing mechanism, but also a fence-like exclusivity control, used to build highly performant pipelines without sacrificing safety, particularly when there is a requirement that a record may only be handled by one thread or process at any given time.

Consumer groups are also used to ensure availability. By periodically pulling records from a topic, the consumer implicitly signals to the cluster that it’s in a ‘healthy’ state, thereby extending the lease over its partition assignment. However, should the consumer fail to read again within the allowable deadline, it will be deemed faulty and its partitions will be reassigned — apportioned among the remaining ‘healthy’ consumers within its group. This deadline is controlled by the max.poll.interval.ms consumer client property, set to five minutes by default.

To use a transportation analogy, a topic is like a highway, while a partition is a lane. A record is the equivalent of a car, and its occupants correspond to the record’s value. Several cars can safely travel on the same highway, providing they keep to their lane. Cars sharing the same lane ride in a sequence, forming a queue. Now, suppose each lane leads to an off-ramp, diverting its traffic to some location. If one off-ramp gets banked up, other off-ramps may still flow smoothly.

It’s precisely this highway-lane metaphor that Kafka exploits to achieve its end-to-end throughput, easily reaching millions of records per second on commodity hardware. When creating a topic, one can choose the partition count — the number of lanes, if you will.

The partitions are divided approximately evenly among the individual consumers in a consumer group, with a guarantee that no partition will be assigned to two (or more) consumers at the same time, providing that these consumers are part of the same consumer group. Referring to our analogy, a car will never end up in two off-ramps simultaneously; however, two lanes might conceivably lead to the same off-ramp.

Note: A topic may be resized after creation by increasing the number of partitions. It is not possible, however, to decrease the partition count without recreating the topic.

Records correspond to events, messages, commands, or any other streamable content. Precisely how records are partitioned is left to the discretion of the producer(s). A producer may explicitly assign a partition index when publishing a record, although this approach is rarely used. A much more common approach is to assign a key to a record, as we have done in our earlier example. The key is completely opaque to Kafka. In other words, Kafka doesn’t attempt to interpret the contents of the key, treating it as an array of bytes. These bytes are hashed to derive a partition index, using a consistent hashing technique.

Records sharing the same hash are guaranteed to occupy the same partition. Assuming a topic with multiple partitions, records with a different key will likely end up in different partitions. However, due to hash collisions, records with different hashes may also end up in the same partition. Such is the nature of hashing. If you understand how a hash table works, this is no different.

Producers rarely care which specific partition the records will map to, only that related records end up in the same partition, and that their order is preserved. Similarly, consumers are largely indifferent to their assigned partitions, so long that they receive the records in the same order as they were published, and their partition assignment does not overlap with any other consumer in their group.
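To make this concrete, here is a minimal sketch of publishing a keyed record with the Java producer client. The prices topic and ticker-style key anticipate the trading example later in this article and are purely illustrative.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Records sharing the key "AAPL" hash to the same partition,
            // preserving their relative order.
            producer.send(new ProducerRecord<>("prices", "AAPL", "145.09"));
        }
    }
}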

Committing Offsets

We already said that consumers maintain an internal state with respect to their partition offsets. At some point, that state must be shared with Kafka, so that when a partition is reassigned, the new consumer can resume processing from where the outgoing consumer left off. Similarly, if the consumers were to disconnect, upon reconnection they would ideally skip over those records that have already been processed.

Persisting the consumer state back to the Kafka cluster is called committing an offset. Typically, a consumer will read a record (or a batch of records) and commit the offset of the last record, plus one. If a new consumer takes over the topic, it will commence processing from the last committed offset, hence the plus-one step is essential. (Otherwise, the last processed record would be handled a second time.)

Fun fact: Kafka employs a recursive approach to managing committed offsets, elegantly utilising itself to persist and track offsets. When an offset is committed, Kafka will publish a binary record on the internal __consumer_offsets topic. The contents of this topic are compacted in the background, creating an efficient event store that progressively reduces to only the last known commit points for any given consumer group.

Controlling the point when an offset is committed provides a great deal of flexibility around delivery guarantees, handing Kafka yet another trump card. The term ‘delivery’ assumes not just reading a record, but the full processing cycle, complete with any side-effects. One can shift from an at-most-once to an at-least-once delivery model by simply moving the commit operation from a point before the processing of a record is commenced, to a point sometime after the processing is complete. With this model, should the consumer fail midway through processing a record, the record will be re-read following the next partition reassignment.

By default, a Kafka consumer will automatically commit offsets every five seconds, regardless of whether the consumer has finished processing the record. Often, this is not what you want, as it may lead to mixed delivery semantics. For example, in the event of consumer failure, some records might be delivered twice, while others might not be delivered at all. To enable manual offset committing, set the enable.auto.commit property to false.
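As a rough sketch, an at-least-once consumer with auto-commit disabled might look like the following; the topic and group names are borrowed from the trading example later in the article, and process() stands in for your own handling logic.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtLeastOnceConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "trading-strategy.abc");
        props.put("enable.auto.commit", "false");   // take control of the commit point
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (Consumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("prices"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    process(record);                // at-least-once: process first...
                }
                consumer.commitSync();              // ...then commit the offsets of the polled batch
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        // hypothetical business logic goes here
    }
}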

Note: There are a few gotchas like this in Kafka. Pay close attention to the (producer and consumer) client properties in the official Kafka documentation, particularly to the stated defaults. Don’t assume for a moment that the defaults are sensible, insofar as they ought to favour safety over other competing qualities. Kafka defaults tend to be optimised for performance, and will need to be explicitly overridden on the client when safety is a critical objective. Fortunately, setting the properties to ensure safety has only a minor impact on performance — Kafka is still a beast. Remember the first rule of optimisation: Don’t do it. Kafka would have been even better, had its creators given this more thought.

Getting offset committing right can be tricky, and routinely catches out beginners. A committed offset implies that the record immediately below that offset and all prior records have been dealt with by the consumer. When designing at-least-once or exactly-once applications, an offset should only be committed once the application has dealt with the record in question and all records before it.

In other words, the record has been processed to the point that any actions that would have resulted from the record have been carried out and finalized. This may include calling other APIs, updating a database, committing transactions, persisting the record’s payload, or publishing more records. Stated otherwise, if the consumer were to fail after committing the record, then not ever seeing this record again must not be detrimental to its correctness.

In the at-least-once (and by extension, the exactly-once) scenario, a typical consumer implementation will commit its offset linearly, in tandem with the processing of the records. That is, read a record, commit it (plus-one), read the next, commit it (plus-one), and so on. A common tactic is to process a batch of records concurrently (where this makes sense), using a thread pool, and only commit the last record when the entire batch is done. The commit process in Kafka is very efficient: the client library sends commit requests asynchronously to the cluster using an in-memory queue, without blocking the consumer. The client application can register an optional callback, notifying it when the commit has been acknowledged by the cluster.
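For the callback-based variant, a hedged sketch might look like the following, where record is assumed to be the last record processed and LOG an SLF4J logger.

// Commit the offset of the last processed record asynchronously, with a callback.
consumer.commitAsync(
    Collections.singletonMap(
        new TopicPartition(record.topic(), record.partition()),
        new OffsetAndMetadata(record.offset() + 1)),          // note the plus-one step
    (offsets, exception) -> {
        if (exception != null) {
            LOG.warn("Commit of {} failed", offsets, exception);  // notified without blocking the poll loop
        }
    });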

The consumer group is a somewhat understated concept that is pivotal to the versatility of an event streaming platform. By simply varying the affinity of consumers with their groups, one can arrive at vastly different distribution topologies — from a topic-like, pub-sub behavior to an MQ-style, point-to-point model. Because records are never truly consumed (the advancing offset only creates the illusion of consumption), one can concurrently superimpose disparate distribution topologies over a single event stream.

Free Consumers

Consumer groups are completely optional; a consumer does not need to be encompassed in a consumer group to pull messages from a topic. A free consumer omits the group.id property. Doing so allows it to operate under relaxed rules, entirely transferring the responsibility for consumer management to the application.

Note: The use of the term ‘free’ to denote a consumer without an encompassing group is not part of the standard Kafka nomenclature. As Kafka lacks a canonical term to describe this, the term ‘free’ was adopted here.

Free consumers do not subscribe to a topic. Instead, the consuming application is responsible for manually assigning a set of topic-partitions to the consumer, individually specifying the starting offset for each topic-partition pair. Free consumers do not commit their offsets to Kafka; it is up to the application to track the progress of such consumers and persist their state as appropriate, using a datastore of their choosing. The concepts of automatic partition assignment, rebalancing, offset persistence, partition exclusivity, consumer heart-beating and failure detection, and other so-called niceties accorded to consumer groups cease to exist in this mode.
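To illustrate, here is a minimal sketch of a free consumer using the Java client; note the absence of group.id, and the use of assign() and seek() in place of subscribe(). The topic, partition and offsets are illustrative.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class FreeConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // no group.id: this is a free consumer
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (Consumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition partition = new TopicPartition("prices", 0);
            consumer.assign(Collections.singletonList(partition));  // manual assignment, no rebalancing
            consumer.seek(partition, 0);                             // the application picks the starting offset
            consumer.poll(Duration.ofSeconds(1))
                    .forEach(record -> System.out.println(record.value()));
        }
    }
}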

Free consumers are not observed in the wild as often as their grouped counterparts. There are predominantly two use cases where a free consumer is an appropriate choice. The first is when you genuinely need full control of the partition assignment scheme and/or you require an alternative place to store consumer offsets. This is very rare.

Needless to say, it’s also very difficult to implement correctly, given the multitude of scenarios one must account for. The second, more commonly seen use case, is when you have a stateless or ephemeral consumer that needs to monitor a topic. For example, you might be interested in tailing a topic to identify specific records, or just as a debugging tool. You might only care about records that were published when your stateless consumer was online, so concerns such as persisting offsets and resuming from the last processed record are completely irrelevant.

A good example of where this is used routinely is the Kafdrop web UI, which we’ve already seen. When you click on a topic to view the messages, Kafdrop creates a free consumer and assigns the requested partition to it, reading the records from the supplied offsets. Navigating to a different topic or partition will reset the consumer, discarding any prior state.

The illustration below outlines the relationship between producers, topics, partitions, consumers, and consumer groups.

+----------+          +----------+

|PRODUCER 1|          |PRODUCER 2|

+-----v----+          +-----v----+
      |                     |
      |                     |
      |                     |
 +----V---------------------V-----------------------------------------+
 |                            >>> TOPIC >>>                           |
 |            +---------------------------------------------------+   |
 | PARTITION 0|record 0..00|record 0..01|record 0..02|record 0..03|   |
 |            +--------------------v------------------------------+   |
 |                                 |                                  |
 |            +--------------------|------------------------------+   |
 | PARTITION 1|record 0..00|       |    |record 0..02|record 0..03|   |
 |            +--------------------|-------------v----------------+   |
 |                                 |             |                    |
 +----------v----------------------|-------------|--------------------+
            |                      |             |
            |                      |             |
            |                      |             |
            |              +-------|-------------|----------------------+
            |              |       |             |                      |
       +----V-----+        | +-----V----+   +----V-----+   +----------+ |
       |CONSUMER 1|        | |CONSUMER 2|   |CONSUMER 3|   |CONSUMER 4| |
       +----------+        | +----------+   +----------+   +----------+ |
                           |               CONSUMER GROUP               |
                           +--------------------------------------------+

The key takeaways are:

  • Topics are subdivided into partitions, each forming an independent, totally-ordered sequence within a wider, partially-ordered stream.
  • Multiple producers are able to publish to a topic, picking a partition at will. This may be accomplished either directly, by specifying a partition index, or indirectly, by way of a record key, which deterministically hashes to a consistent partition index. (In the diagram above, both Producer 1 and Producer 2 publish to the same topic.)
  • Partitions in a topic can be load-balanced across a population of consumers in a consumer group, allocating partitions approximately evenly among the members of that group. (Consumer 2 and Consumer 3 each get one partition.)
  • A consumer in a group is not guaranteed a partition assignment. Where the group’s population outnumbers the partitions, some consumers will remain idle until this balance equalizes or tips in favor of the other side. (Consumer 4 remains partition-less.)
  • Partitions may be manually assigned to free consumers. If necessary, an entire topic may be assigned to a single free consumer — this is done by individually assigning all partitions. (Consumer 1 can be freely assigned any partition.)

Exactly-Once Delivery

When contrasting at-least-once with at-most-once delivery semantics, an often-asked question is: Why can’t we have it exactly once?

Without delving into the academic details, which involve conjectures and impossibility proofs, it is sufficient to say that exactly-once semantics are not possible without collaboration with the consumer application. What does this mean in practice?

Consumers in event streaming applications must be idempotent. In other words, processing the same record repeatedly should have no net effect on the consumer ecosystem. If a record has no additive effects, the consumer is inherently idempotent. (For example, if the consumer simply overwrites an existing database entry with a new one, then the update is naturally idempotent.) Otherwise, the consumer must check whether a record has already been processed, and to what extent, prior to processing a record. The combination of at-least-once delivery and consumer idempotence collectively leads to exactly-once semantics.
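As a loose sketch of what consumer-side idempotence might look like, consider the following; the IdempotentHandler class, the applySideEffects() method, the use of the record key as a unique identifier, and the in-memory set of processed IDs are all assumptions for illustration (in practice, the processed IDs would live in a durable store).

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.kafka.clients.consumer.ConsumerRecord;

public class IdempotentHandler {
    // In practice this would be a durable store (e.g. a database table), not an in-memory set.
    private final Set<String> processedIds = ConcurrentHashMap.newKeySet();

    public void handle(ConsumerRecord<String, String> record) {
        String recordId = record.key();            // assumes the key uniquely identifies the event
        if (processedIds.contains(recordId)) {
            return;                                // already applied; reprocessing has no net effect
        }
        applySideEffects(record);                  // hypothetical business logic
        processedIds.add(recordId);                // mark as processed only once its effects are final
    }

    private void applySideEffects(ConsumerRecord<String, String> record) {
        // update a database, call an API, publish further records, etc.
    }
}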

Example: A Trading Platform

With all this theory looming over us like Kubrick’s Monolith, it would be inappropriate to conclude without offering the reader a practical scenario.

Let’s say you were looking for specific price patterns in listed stocks, emitting trading signals when a particular pattern is identified. There are a large number of stocks, and understandably you’d like them processed in parallel. However, the time series for any given ticker code must be processed sequentially on a single consumer.

Kafka makes this use case, and others like it, almost trivial to implement. We would create a pair of topics: prices for the raw price data, and orders for any resulting orders. We can be fairly generous with our partition counts, as the nature of the data gives us ample opportunities for parallelism.

At the feed source, we could publish a record for each price on the prices topic, keyed by the ticker code. Kafka’s automatic partition assignment will ensure that every ticker code is handled by (at most) one consumer in its group. The consumer instances are free to scale in and out to match the processing load. Consumer groups should be meaningfully named, ideally reflecting the purpose of the consuming application. A good example might be trading-strategy.abc, for a fictitious trading strategy named ‘ABC’.

Once a price pattern is identified by the consumer, it can publish another message — the order request — on the orders topic. We’ll muster up another consumer group — order-execution — responsible for reading the orders and forwarding them to the broker.

In this simple example, we have created an end-to-end trading pipeline that is entirely event-driven and highly scalable — at least theoretically, assuming there are no other bottlenecks. We can dynamically add more processing nodes to the individual stages to cope with the increased load where it’s called for.

Now, let’s spice things up a bit. Suppose you need several trading strategies operating concurrently, driven by a common data feed. Furthermore, the trading strategies will be developed by different teams; the objective being to decouple these implementations as much as possible, allowing the teams to operate autonomously — develop and deploy at their individual cadence, perhaps even using different programming languages and tool-chains. That said, you’d ideally want to reuse as much of what’s already been written. So, how would we pull this off? 

Trading Platform

Kafka’s flexible multipoint-to-multipoint pub-sub architecture combines stateful consumption with broadcast semantics. Using distinct consumer groups, Kafka allows disparate applications to share input topics, processing events at their own pace. The second trading strategy would need a dedicated consumer group — trading-strategy.xyz — applying its specific business logic to the common pricing stream, publishing the resulting orders to the same orders topic. In this fashion, Kafka enables you to construct modular event processing pipelines from discrete elements that are readily reusable and composable.

Note: In the days of service buses and traditional ‘enterprisey’ message brokers, before event sourcing entered the mainstream, you would have had to choose between persistent message queues or transient broadcast topics. In our example, you would likely have created multiple FIFO queues, using the fan-out pattern. Because Kafka generalises pub-sub topics and persistent message queues into a unified model, a single source topic can power a diverse range of consumers without incurring duplication.

In Conclusion

Event streaming platforms are a highly effective building block in the construction of modular, loosely-coupled, event-driven applications. Within the world of event streaming, Kafka has solidified its position as the go-to open-source solution that is both amazingly flexible and highly performant. Concurrency and parallelism are at the heart of Kafka’s architecture, forming partially-ordered event streams that can be load-balanced across a scalable consumer ecosystem. A simple reconfiguration of consumers and their encompassing groups can bring about vastly different event distribution and processing semantics; shifting the offset commit point can invert the delivery guarantee from an at-most-once to an at-least-once model.

Of course, Kafka isn’t without its flaws. The tooling is sub-par, to put it mildly; most Kafka practitioners have long abandoned the out-of-the-box CLI utilities in favour of other open-source tools such as Kafdrop, Kafkacat and third-party commercial offerings like Kafka Tool. The breadth of Kafka’s configuration options is overwhelming, with defaults that are riddled with gotchas, ready to shock the unsuspecting first-time user.

All in all, Kafka represents a paradigm shift in how we architect and build complex systems. Its benefits go beyond the superfluous, and they dwarf any of the niggles that are bound to exist in a technology that has undergone such aggressive adoption. Crucially, it paves the way for further progress in its space; Apache Pulsar is a prime example of an alternative platform that has improved on much of Kafka’s shortcomings, yet owes a great deal to its predecessor for laying the cornerstone and bringing the genre to the mainstream.

Building Restful APIs with Kotlin, Spring Boot, Mysql, JPA and Hibernate
05
Mar
2021

Building Restful APIs with Kotlin, Spring Boot, MySQL, JPA and Hibernate

Kotlin has gained a lot of popularity in recent times due to its productivity features and a first class support in Android.

Owing to the increasing popularity of Kotlin, Spring framework 5 has also introduced a dedicated support for Kotlin in Spring applications.

In this article, you’ll learn how to build a Restful CRUD API with Kotlin and Spring Boot 2.x, which is based on Spring framework 5.

So Stay tuned!

What will we build?

In this blog post, we’ll build Restful APIs for a mini blog application. The blog has a list of Articles. We’ll write APIs for creating, retrieving, updating and deleting an Article. An Article has an id, a title and some content.

We’ll use MySQL as our data source and JPA & Hibernate to access the data from the database.

All right, Let’s now create the application.

Creating the Application

We’ll use Spring initializr web tool to bootstrap our application. Follow the steps below to generate the application :

  1. Go to http://start.spring.io
  2. Select Kotlin in the language section.
  3. Enter Artifact as kotlin-demo
  4. Add Web, JPA, and MySQL dependencies.
  5. Click Generate to generate and download the project.
Kotlin Spring Boot Restful CRUD API Example

Once the project is generated, unzip it and import it into your favorite IDE. Here is the Project’s directory Structure for your reference.

Kotlin Spring Boot Restful CRUD API Example Directory Structure

Configure MySQL

We’ll need to configure MySQL database url, username, and password so that Spring Boot can create a Data source.

Open src/main/resources/application.properties file and add the following properties to it –

## Spring DATASOURCE (DataSourceAutoConfiguration & DataSourceProperties)
spring.datasource.url = jdbc:mysql://localhost:3306/kotlin_demo_app?autoReconnect=true&useUnicode=true&characterEncoding=UTF-8&allowMultiQueries=true&useSSL=false
spring.datasource.username = root
spring.datasource.password = root


## Hibernate Properties

# The SQL dialect makes Hibernate generate better SQL for the chosen database
spring.jpa.properties.hibernate.dialect = org.hibernate.dialect.MySQL5InnoDBDialect

# Hibernate ddl auto (create, create-drop, validate, update)
spring.jpa.hibernate.ddl-auto = update

Please don’t forget to change spring.datasource.username and spring.datasource.password as per your MySQL installation.

Note that, I’ve set spring.jpa.hibernate.ddl-auto property to update. This property updates the database schema whenever you create or modify the domain models in your application.

Creating the Domain Model

Let’s now create the Article domain entity. Create a new package called model inside com.example.kotlindemo package, and then create a new Kotlin file called Article.kt with the following contents –

package com.example.kotlindemo.model

import javax.persistence.Entity
import javax.persistence.GeneratedValue
import javax.persistence.GenerationType
import javax.persistence.Id
import javax.validation.constraints.NotBlank

@Entity
data class Article (
    @Id @GeneratedValue(strategy = GenerationType.IDENTITY)
    val id: Long = 0,

    @get: NotBlank
    val title: String = "",

    @get: NotBlank
    val content: String = ""
)

The Entity class is so small and concise, right? That’s because a Kotlin class doesn’t need getters and setters like Java. Moreover, I have used a data class here. A data class automatically generates equals(), hashCode(), toString() and copy() methods.

Note that, I’ve assigned a default value for all the fields in the Article class. This is needed because Hibernate requires an entity to have a no-arg constructor.

Assigning default values to all the member fields lets Hibernate instantiate an Article without passing any arguments. It works because Kotlin supports default arguments 🙂

Creating the Repository

Let’s now create the repository for accessing the data from the database. First, create a package called repository inside com.example.kotlindemo package, and then create a Kotlin file named ArticleRepository.kt with the following contents –

package com.example.kotlindemo.repository

import com.example.kotlindemo.model.Article
import org.springframework.data.jpa.repository.JpaRepository
import org.springframework.stereotype.Repository

@Repository
interface ArticleRepository : JpaRepository<Article, Long>

That’s all we need to do here. Since we’ve extended ArticleRepository from the JpaRepository interface, all the CRUD methods on the Article entity are readily available to us. Spring Boot automatically plugs in a default implementation of JpaRepository called SimpleJpaRepository at runtime.

Creating the controller End-points

Finally, Let’s create the controller end-points for all the CRUD operations on Article entity.

First, create a new package called controller inside com.example.kotlindemo package and then create a new kotlin file called ArticleController.kt inside controller package with the following contents –

package com.example.kotlindemo.controller

import com.example.kotlindemo.model.Article
import com.example.kotlindemo.repository.ArticleRepository
import org.springframework.http.HttpStatus
import org.springframework.http.ResponseEntity
import org.springframework.web.bind.annotation.*
import java.util.*
import javax.validation.Valid

@RestController
@RequestMapping("/api")
class ArticleController(private val articleRepository: ArticleRepository) {

    @GetMapping("/articles")
    fun getAllArticles(): List<Article> =
            articleRepository.findAll()


    @PostMapping("/articles")
    fun createNewArticle(@Valid @RequestBody article: Article): Article =
            articleRepository.save(article)


    @GetMapping("/articles/{id}")
    fun getArticleById(@PathVariable(value = "id") articleId: Long): ResponseEntity<Article> {
        return articleRepository.findById(articleId).map { article -> 
            ResponseEntity.ok(article)
        }.orElse(ResponseEntity.notFound().build())
    }

    @PutMapping("/articles/{id}")
    fun updateArticleById(@PathVariable(value = "id") articleId: Long,
                          @Valid @RequestBody newArticle: Article): ResponseEntity<Article> {

        return articleRepository.findById(articleId).map { existingArticle ->
            val updatedArticle: Article = existingArticle
                    .copy(title = newArticle.title, content = newArticle.content)
            ResponseEntity.ok().body(articleRepository.save(updatedArticle))
        }.orElse(ResponseEntity.notFound().build())

    }

    @DeleteMapping("/articles/{id}")
    fun deleteArticleById(@PathVariable(value = "id") articleId: Long): ResponseEntity<Void> {

        return articleRepository.findById(articleId).map { article  ->
            articleRepository.delete(article)
            ResponseEntity<Void>(HttpStatus.OK)
        }.orElse(ResponseEntity.notFound().build())

    }
}

The controller defines APIs for all the CRUD operations. I have used Kotlin’s functional style syntax in all the methods to make them short and concise.

Running the Application

You can run the application by typing the following command in the terminal –

mvn spring-boot:run

The application will start at Spring Boot’s default port 8080.

Exploring the Rest APIs

1. POST /api/articles – Create an Article

curl -i -H "Content-Type: application/json" -X POST \
-d '{"title": "How to learn Spring framework", "content": "Resources to learn Spring framework"}' \
http://localhost:8080/api/articles

# Output
HTTP/1.1 200 
Content-Type: application/json;charset=UTF-8
Transfer-Encoding: chunked
Date: Fri, 06 Oct 2017 03:25:59 GMT

{"id":1,"title":"How to learn Spring framework","content":"Resources to learn Spring framework"}

2. GET /api/articles – Get all Articles

curl -i -H 'Accept: application/json' http://localhost:8080/api/articles

# Output
HTTP/1.1 200 
Content-Type: application/json;charset=UTF-8
Transfer-Encoding: chunked
Date: Fri, 06 Oct 2017 03:25:29 GMT

[{"id":1,"title":"How to learn Spring framework","content":"Resources to learn Spring framework"}]

3. GET /api/articles/{id} – Get an Article by id

curl -i -H 'Accept: application/json' http://localhost:8080/api/articles/1

# Output
HTTP/1.1 200 
Content-Type: application/json;charset=UTF-8
Transfer-Encoding: chunked
Date: Fri, 06 Oct 2017 03:27:51 GMT

{"id":1,"title":"How to learn Spring framework","content":"Resources to learn Spring framework"}

4. PUT /api/articles/{id} – Update an Article

curl -i -H "Content-Type: application/json" -X PUT \
-d '{"title": "Learning Spring Boot", "content": "Some resources to learn Spring Boot"}' \
http://localhost:8080/api/articles/1

# Output
HTTP/1.1 200 
Content-Type: application/json;charset=UTF-8
Transfer-Encoding: chunked
Date: Fri, 06 Oct 2017 03:33:15 GMT

{"id":1,"title":"Learning Spring Boot","content":"Some resources to learn Spring Boot"}

5. DELETE /api/articles/{id} – Delete an Article

curl -i -X DELETE http://localhost:8080/api/articles/1

# Output
HTTP/1.1 200 
Content-Length: 0
Date: Fri, 06 Oct 2017 03:34:22 GMT

Conclusion

That’s all folks! In this article, You learned how to use Kotlin with Spring Boot for building restful web services.

You can find the entire code for the application that we built in this article in my github repository. Consider giving a star on github if you find the project useful.

Thank you for reading folks! See you next time 🙂

Spring Boot Logging Best Practices Guide
05
May
2021

Spring Boot Logging Best Practices Guide

Logging in Spring Boot can be confusing, and the wide range of tools and frameworks make it a challenge to even know where to start. This guide talks through the most common best practices for Spring Boot logging and gives five key suggestions to add to your logging tool kit.

What’s in the Spring Boot Box?

The Spring Boot Starters all depend on spring-boot-starter-logging. This is where the majority of the logging dependencies for your application come from. The dependencies involve a facade (SLF4J) and frameworks (Logback). It’s important to know what these are and how they fit together.

SLF4J is a simple front-facing facade supported by several logging frameworks. Its main advantage is that you can easily switch from one logging framework to another. In our case, we can easily switch our logging from Logback to Log4j, Log4j2 or JUL.

The dependencies we use will also write logs. For example, Hibernate uses SLF4J, which fits perfectly as we have that available. However, the AWS SDK for Java uses Apache Commons Logging (JCL). Spring-boot-starter-logging includes the necessary bridges to ensure those logs are delegated to our logging framework out of the box.

SLF4J usage:

At a high level, all the application code has to worry about is:

  1. Getting an instance of an SLF4J logger (regardless of the underlying framework):
    private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);
  2. Writing some logs:
    LOG.info("My message set at info level");

Logback or Log4j2?

Spring Boot’s default logging framework is Logback. Your application code should interface only with the SLF4J facade so that it’s easy to switch to an alternative framework if necessary.

Log4j2 is newer and claims to improve on the performance of Logback. Log4j2 also supports a wide range of appenders so it can log to files, HTTP, databases, Cassandra, Kafka, as well as supporting asynchronous loggers. If logging performance is of high importance, switching to log4j2 may improve your metrics. Otherwise, for simplicity, you may want to stick with the default Logback implementation.

This guide will provide configuration examples for both frameworks.

Want to use Log4j2? You’ll need to exclude spring-boot-starter-logging and include spring-boot-starter-log4j2.
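As a rough sketch, the Maven change might look like this — excluding the default logging starter from spring-boot-starter-web (or whichever starter pulls it in) and adding the Log4j2 starter instead:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <exclusions>
        <exclusion>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-logging</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-log4j2</artifactId>
</dependency>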

spring boot logging frameworks

5 Tips for Getting the Most Out of Your Spring Boot Logging

With your initial set up out of the way, here are 5 top tips for spring boot logging.

1. Configuring Your Log Format

Spring Boot Logging provides default configurations for logback and log4j2. These specify the logging level, the appenders (where to log) and the format of the log messages.

For all but a few specific packages, the default log level is set to INFO, and by default, the only appender used is the Console Appender, so logs will be directed only to the console.

The default format for the logs using logback looks like this:

logback default logging format

Let’s take a look at that last line of log, which was a statement created from within a controller with the message “My message set at info level”.

It looks simple, yet the default log pattern for logback seems “off” at first glance. As much as it looks like it could be, it’s not regex, it doesn’t parse email addresses, and actually, when we break it down it’s not so bad.

%clr(%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}}){faint}
%clr(${LOG_LEVEL_PATTERN:-%5p}) %clr(${PID:- }){magenta} %clr(---){faint}
%clr([%15.15t]){faint} %clr(%-40.40logger{39}){cyan} %clr(:){faint}
%m%n${LOG_EXCEPTION_CONVERSION_WORD:-%wEx}

Understanding the Default Logback Pattern

The variables that are available for the log format allow you to create meaningful logs, so let’s look a bit deeper at the ones in the default log pattern example.

  • %clr specifies a colour. By default, it is based on the log level, e.g. INFO is green. If you want to specify specific colours, you can do that too. The format is %clr(Your message){your colour}. So, for example, if we wanted to add “Demo” to the start of every log message, in green, we would write %clr(Demo){green}.
  • %d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}} is the current date, and the part in curly braces is the format. ${VARIABLE:-default} is a way of specifying that we should use the $VARIABLE environment variable for the format, if it is available, and if not, fall back to default. This is handy if you want to override these values in your properties files, by providing arguments, or by setting environment variables. In this example, the default format is yyyy-MM-dd HH:mm:ss.SSS unless we specify a variable named LOG_DATEFORMAT_PATTERN. In the logs above, we can see 2020-10-19 10:09:58.152 matches the default pattern, meaning we did not specify a custom LOG_DATEFORMAT_PATTERN.
  • ${LOG_LEVEL_PATTERN:-%5p} uses the LOG_LEVEL_PATTERN if it is defined, else prints the log level padded to 5 characters (e.g. “INFO” becomes “ INFO”, whereas “TRACE” needs no padding). This keeps the rest of the log aligned, as the level is always 5 characters wide.
  • ${PID:- } is the $PID environment variable, if it exists; if not, a space.
  • %t is the name of the thread triggering the log message.
  • %logger{39} is the name of the logger (up to 39 characters); in our case this is the class name.
  • %m is the log message.
  • %n is the platform-specific line separator.
  • %wEx is the stack trace of any exception, if one exists, formatted using Spring Boot’s ExtendedWhitespaceThrowableProxyConverter.

Customising the log format

You can customise the ${} variables that are found in the logback-spring.xml by passing in properties or environment variables. For example, you may set logging.pattern.console to override the whole of the console log pattern. 
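For instance, a simplified console pattern could be set in application.properties like this; the pattern itself is just an illustration:

logging.pattern.console=%d{yyyy-MM-dd HH:mm:ss.SSS} %5p [%t] %logger{39} : %m%n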

However, for more control, including adding additional appenders, it is recommended to create your logback-spring.xml and place it inside your resources folder. You can do the same with log4j2 by adding log4j2-spring.xml to your resources folder.

Armed with the ability to customise your logs, you should consider adding:

  • Application name.
  • A request ID.
  • The endpoint being requested (E.g /health).

There are a few items in the default log that I would remove unless you have a specific use case for them:

  • The ‘—’ separator.
  • The thread name.
  • The process ID.

With the ability to customise these through the use of the logback-spring.xml or log4j2-spring.xml, the format of your logs is fully within your control.

2. Configuring the Destination for Your Logs (Appenders and Loggers)

An appender is just a fancy name for the part of the logging framework that sends your logs to a particular target. Both frameworks can output to console, over HTTP, to databases, or over a TCP socket, as well as to many other targets. The way we configure the destination for the logs is by adding, removing and configuring these appenders. 

You have more control over which appenders you use, and the configuration of them, if you create your own custom .xml configuration. However, the default logging configuration does make use of environment properties that allow you to override some parts of it, for example, the date format.

Preset configurations for logging to files are available within Spring Boot Logging. You can use the logback configuration with a file appender or the log4j2 configuration with a file appender if you specify logging.file or logging.path in your application properties.
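For example, directing logs to a file via application.properties might look like this (the path is illustrative; the console appender remains active as well):

# Enables the file appender in addition to the console appender
logging.file=logs/application.log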

The official docs for logback appenders and log4j2 appenders detail the parameters required for each of the appenders available, and how to configure them in your XML file. One tip for choosing the destination for your logs is to have a plan for rotating them. Writing logs to a file always feels like a great idea, until the storage used for that file runs out and brings down the whole service. 

Log4j and logback both have a RollingFileAppender which handles rotating these log files based on file size, or time, and it’s exactly that which Spring Boot Logging uses if you set the logging.file property. 

3. Logging as a Cross-Cutting Concern to Keep Your Code Clean (Using Filters and Aspects)

You might want to log every HTTP request your API receives. That’s a fairly normal requirement, but putting a log statement into every controller is unnecessary duplication. It’s easy to forget and make mistakes. A requirement that you want to log every method within your packages that your application calls would be even more cumbersome. 

I’ve seen developers use this style of logging at trace level so that they can turn it on to see exactly what is happening in a production environment. Adding log statements to the start and end of every method is messy, and there is a better way. This is where filters and aspects save the day and avoid the code duplication.

When to Use a Filter Vs When to Use Aspect-Oriented Programming

If you are looking to create log statements related to specific requests, you should opt for using filters, as they are part of the handling chain that your application already goes through for each request. They are easier to write, easier to test and usually more performant than using aspects. If you are considering more cross-cutting concerns, for example, audit logging, or logging every method that causes an exception to be thrown, use AOP. 

Using a Filter to Log Every Request

Filters can be registered with your web container by creating a class implementing javax.servlet.Filter and annotating it with @Component, or adding it as an @Bean in one of your configuration classes. When your spring-boot-starter application starts up, it will create the Filter and register it with the container.

You can choose to create your own Filter, or to use an existing one. To make use of the existing Filter, you need to supply a CommonsRequestLoggingFilter bean and set your logging level to debug. You’ll get something that looks like:

2020-10-27 18:50:50.427 DEBUG 24168 --- [nio-8080-exec-2] o.a.coyote.http11.Http11InputBuffer      : Received [GET /health HTTP/1.1
tracking-header: my-tracking
User-Agent: PostmanRuntime/7.26.5
Accept: */*
Postman-Token: 04a661b7-209c-43c3-83ea-e09466cf3d92
Host: localhost:8080
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
]Copy

If you use the existing one, you have little control over the message that gets logged. 
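For reference, a hedged sketch of supplying that CommonsRequestLoggingFilter bean might look like the following; the configuration class name is arbitrary, and you still need to set the filter’s logger to DEBUG for the messages to appear.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.filter.CommonsRequestLoggingFilter;

@Configuration
public class RequestLoggingConfig {

    @Bean
    public CommonsRequestLoggingFilter requestLoggingFilter() {
        CommonsRequestLoggingFilter filter = new CommonsRequestLoggingFilter();
        filter.setIncludeQueryString(true);   // append the query string to the logged URI
        filter.setIncludeHeaders(true);       // log request headers
        filter.setIncludePayload(true);       // log the request body...
        filter.setMaxPayloadLength(1000);     // ...up to a sensible limit
        return filter;
    }
}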

If you want more control, create your own Filter using this example, and you then have full control over the content of the log message.

Using Aspects for Cross-Cutting Concerns

Aspect-oriented programming enables you to fulfill cross-cutting concerns, like logging for example, in one place. You can do this without your logging code needing to sprawl across every class.

This approach is great for use cases such as:

  • Logging any exceptions thrown from any method within your packages (See @AfterThrowing)
  • Logging performance metrics by timing before/after each method is run (See @Around)
  • Audit logging. You can log calls to methods that have a custom annotation on them, such as @Audit. You only need to create a pointcut matching calls to methods with that annotation

Let’s start with a simple example – we want to log the name of every public method that we call within our package, com.example.demo. There are only a few steps to writing an Aspect that will run before every public method in a package that you specify.

  1. Include spring-boot-starter-aop in your pom.xml or build.gradle.
  2. Add @EnableAspectJAutoProxy to one of your configuration classes. This line tells spring-boot that you want to enable AspectJ support.
  3. Add your pointcut, which defines a pattern that is matched against method signatures as they run. You can find more about how to construct your matching pattern in the spring boot documentation for AOP. In our example, we match any method inside the com.example.demo package.
  4. Add your Aspect. This defines when you want to run your code in relation to the pointcut (e.g. before, after or around the methods that it matches). In this example, the @Before annotation causes the method to be executed before any methods that match the pointcut, as sketched just below this list.
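Putting steps 3 and 4 together, a minimal sketch of such an aspect might look like this, assuming the com.example.demo package and the MyAspect name shown in the log output below:

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class MyAspect {

    private static final Logger LOG = LoggerFactory.getLogger(MyAspect.class);

    // Pointcut: any public method inside the com.example.demo package (and its sub-packages)
    @Before("execution(public * com.example.demo..*.*(..))")
    public void logMethodCall(JoinPoint joinPoint) {
        LOG.info("Called {}", joinPoint.getSignature().getName());
    }
}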

That’s all there is to logging every method call. The logs will appear as:

2020-10-27 19:26:33.269  INFO 2052 --- [nio-8080-exec-2]
com.example.demo.MyAspect                : Called checkHealth

By making changes to your pointcut, you can write logs for every method annotated with a specific annotation. For example, consider what you can do with:

@annotation(com.example.demo.Audit)

(This would run for every method annotated with a custom annotation, @Audit.)

4. Applying Context to Your Logs Using MDC

MDC (Mapped Diagnostic Context) is a complex-sounding name for a map of key-value pairs, associated with a single thread. Each thread has its own map. You can add keys/values to the map at runtime, and then reference the keys from that map in your logging pattern. 

The approach comes with a warning that threads may be reused, and so you’ll need to make sure to clear your MDC after each request to avoid your context leaking from one request to the next.

MDC is accessible through SLF4J and supported by both Logback and Log4j2, so we don’t need to worry about the specifics of the underlying implementation. 

The MDC section in the SLF4J documentation gives the simplest examples.

Tracking Requests Through Your Application Using Filters and MDC

Want to be able to group logs for a specific request? The Mapped Diagnostic Context (MDC) will help. 

The steps are:

  1. Add a header to each request going to your API, for example, ‘tracking-id’. You can generate this on the fly (I suggest using a UUID) if your client cannot provide one.
  2. Create a filter that runs once per request and stores that value in the MDC.
  3. Update your logging pattern to reference the key in the MDC to retrieve the value.

Using a Filter, this is how you can read values from the request and set them on the MDC. Make sure to clear up after the request by calling MDC.clear(), preferably in a finally block so that it always runs. 
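A hedged sketch of such a filter is shown below; the header name, the MDC key and the class name are all illustrative.

import java.io.IOException;
import java.util.UUID;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class TrackingIdFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain filterChain) throws ServletException, IOException {
        String trackingId = request.getHeader("tracking-id");
        if (trackingId == null) {
            trackingId = UUID.randomUUID().toString();   // generate one if the client didn't supply it
        }
        MDC.put("tracking", trackingId);                 // referenced as %X{tracking} in the log pattern
        try {
            filterChain.doFilter(request, response);
        } finally {
            MDC.clear();                                 // threads are reused; always clear the context
        }
    }
}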

After setting the value on your MDC, just add %X{tracking} to your logging pattern (replacing the word “tracking” with the key you have put in MDC) and your logs will contain the value in every log message for that request. 

If a client reports a problem, as long as you can get a unique tracking-id from your client, then you’ll be able to search your logs and pull up every log statement generated from that specific request.

Other use cases that you may want to put into your MDC and include on every log message include:

  • The application version.
  • Details of the request, for example, the path.
  • Details of the logged-in user, for example, the username.

5. Unit Testing Your Log Statements

Why Test Your Logs?

You can unit test your logging code. Too often this is overlooked because the log statements return void. For example, logger.info(“foo”);  does not return a value that you can assert against. 

It’s easy to make mistakes. Log statements usually involve parameters or formatted strings, and it’s easy to put log statements in the wrong place. Unit testing reassures you that your logs do what you expect and that you’re covered when refactoring to avoid any accidental modifications to your logging behaviour.

The Approach to Testing Your Logs

The Problem

SLF4J’s LoggerFactory.getLogger is static, making it difficult to mock. Searching through any outputted log files in our unit tests is error-prone (E.g we need to consider resetting the log files between each unit test). How do we assert against the logs?

The Solution

The trick is to add your own test appender to the logging framework (e.g Logback or Log4j2) that captures the logs from your application in memory, allowing us to assert against the output later. The steps are:

  1. Before each test case, add an appender to your logger.
  2. Within the test, call your application code that logs some output.
  3. The logger will delegate to your test appender.
  4. Assert that your expected logs have been received by your test appender.

Each logging framework has suitable appenders, but referencing those concrete appenders in our tests means we need to depend on the specific framework rather than SLF4J. That’s not ideal, but the alternatives of searching through logged output in files, or implementing our own SLF4J implementation, are overkill, making this the pragmatic choice.
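To make the approach concrete, here is a hedged, Logback-specific sketch using its in-memory ListAppender; MyService and its doWork() method are hypothetical stand-ins for your own class under test.

import static org.junit.jupiter.api.Assertions.assertEquals;

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.read.ListAppender;
import org.junit.jupiter.api.AfterEach;
import org.junit.jupiter.api.BeforeEach;
import org.junit.jupiter.api.Test;
import org.slf4j.LoggerFactory;

class MyServiceLogTest {

    private final ListAppender<ILoggingEvent> testAppender = new ListAppender<>();
    // MyService is a hypothetical class under test
    private final Logger logger = (Logger) LoggerFactory.getLogger(MyService.class);

    @BeforeEach
    void setUp() {
        testAppender.start();
        logger.addAppender(testAppender);     // capture log events in memory
    }

    @AfterEach
    void tearDown() {
        logger.detachAppender(testAppender);  // leave the logger as we found it
    }

    @Test
    void logsAtInfoLevel() {
        new MyService().doWork();

        ILoggingEvent event = testAppender.list.get(0);
        assertEquals(Level.INFO, event.getLevel());
        assertEquals("My message set at info level", event.getFormattedMessage());
    }
}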

Here are a couple of tricks for unit testing using JUnit 4 rules or JUnit 5 extensions that will keep your test classes clean, and reduce the coupling with the logging framework.

Testing Log Statements Using Junit 5 Extensions in Two Steps

JUnit 5 extensions help to avoid code duplicates between your tests. Here’s how to set up your logging tests in two steps:

Step 1: Create your JUnit extension

Create your extension for Logback

Create your extension for Log4j2

Step 2: Use that rule to assert against your log statement with logback or log4j2

Testing Log Statements Using Junit 4 Rules in Two Steps

JUnit 4 rules help to avoid code duplication by extracting the common test code away from the test classes. In our example, we don’t want to duplicate the code for adding a test appender to our logger in every test class.

Step 1: Create your JUnit rule. 

Create your rule for Logback

Create your rule for Log4j2

Step 2: Use that rule to assert against your log statements using logback or log4j2.

With these approaches, you can assert that your log statements have been called with a message and level that you expect. 

Conclusion

The Spring Boot Logging Starter provides everything you need to quickly get started, whilst allowing full control when you need it. We’ve looked at how most logging concerns (formatting, destinations, cross-cutting logging, context and unit tests) can be abstracted away from your core application code.

Any global changes to your logging can be done in one place, and the classes for the rest of your application don’t need to change. At the same time, unit tests for your log statements provide you with reassurance that your log statements are being fired after making any alterations to your business logic.

These are my top 5 tips for configuring Spring Boot Logging. However, when your logging configuration is set up, remember that your logs are only ever as good as the content you put in them. Be mindful of the content you are logging, and make sure you are using the right logging levels.

Simple Way to Monitor the Spring Boot Apps
18
Mar
2021

Simple Way to Monitor the Spring Boot Apps

Administration of spring boot applications using spring boot admin.

This includes health status, various metrics, log level management, JMX-Beans interaction, thread dumps and traces, and much more. Spring Boot Admin is a community project initiated and maintained by codecentric.

Spring Boot Admin provides a UI to monitor and do some administrative work for your Spring Boot applications.

This project has been started by codecentric and is open source. You can do your own customization if you want to.

Git Repo: Link

That should give you a better idea of what this project is, so we will start directly with an example.

Spring Boot provides actuator endpoints to monitor the metrics of individual microservices. These endpoints are very helpful for getting information about applications, such as whether they are up and whether their components, like the database, are working well. A major drawback of using actuator endpoints alone, however, is that we have to hit the endpoints of each application individually to know its status or health. Imagine a system involving 150 microservices; the admin would have to hit the actuator endpoints of all 150 applications. To help us deal with this situation, we use the Spring Boot Admin app.

Sample Code:

To implement this we will create two projects one is server and another is the client.

  1. Spring Boot Admin server.
  2. Spring Boot Admin client.

Spring Boot Admin Server:

The project structure should look like any spring boot application:

POM.xml

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>

<groupId>com.techwasti</groupId>
<artifactId>spring-boot-admin</artifactId>
<version>0.0.1-SNAPSHOT</version>
<packaging>jar</packaging>

<name>spring-boot-admin</name>
<description>Demo project for Spring Boot</description>

<parent>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-parent</artifactId>
<version>1.5.4.RELEASE</version>
<relativePath /> <!-- lookup parent from repository -->
</parent>

<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<java.version>1.8</java.version>
</properties>

<dependencies>
<!-- admin dependency-->
<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-server-ui-login</artifactId>
<version>1.5.1</version>
</dependency>
<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-server</artifactId>
<version>1.5.1</version>
</dependency>
<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-server-ui</artifactId>
<version>1.5.1</version>
</dependency>
<!-- end admin dependency-->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-security</artifactId>
</dependency>

</dependencies>

<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>


</project>

We need to configure security as well since we are accessing sensitive information:

package com.techwasti;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

import de.codecentric.boot.admin.config.EnableAdminServer;

@EnableAdminServer
@Configuration
@SpringBootApplication
public class SpringBootAdminApplication {

public static void main(String[] args) {
SpringApplication.run(SpringBootAdminApplication.class, args);
}

@Configuration
public static class SecurityConfig extends WebSecurityConfigurerAdapter {
@Override
protected void configure(HttpSecurity http) throws Exception {
http.formLogin().loginPage("/login.html").loginProcessingUrl("/login").permitAll();
http.logout().logoutUrl("/logout");
http.csrf().disable();

http.authorizeRequests().antMatchers("/login.html", "/**/*.css", "/img/**", "/third-party/**").permitAll();
http.authorizeRequests().antMatchers("/**").authenticated();

http.httpBasic();
}
}

}

application.properties file content

spring.application.name=SpringBootAdminEx
server.port=8081
security.user.name=admin
security.user.password=admin

Run the app and open localhost:8081 in your browser.

Enter the username and password, and click the login button.

As this is a sample example, we have hardcoded the username and password, but you can use Spring Security to integrate LDAP or any other security mechanism.

Spring Boot Admin can be configured to display only the information that we consider useful.

spring.boot.admin.routes.endpoints=env, metrics, trace, info, configprops

Notifications and Alerts:

We can notify and send alerts using any below channels.

  • Email
  • PagerDuty
  • OpsGenie
  • Hipchat
  • Slack
  • Let’s Chat

Spring Boot Admin Client:

Now we are ready with the admin server application let us create the client application. Create any HelloWorld spring boot application or if you have any existing spring boot app you can use the same as a client application.

Add below Maven dependency

<dependency>
<groupId>de.codecentric</groupId>
<artifactId>spring-boot-admin-starter-client</artifactId>
<version>1.5.1</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-actuator</artifactId>
</dependency>

Next, update application.properties and add the following properties

spring.boot.admin.url=http://localhost:8081
spring.boot.admin.username=admin
spring.boot.admin.password=admin

Those are all the changes needed in the client application, so run it now. Once the client application is up and running, go and check your admin server application; it will show all your registered applications.

Beautiful Dashboards:

Spring Boot Admin provides a wide range of features; as part of this article we have listed only a very few, so dig deeper and explore more.

Intro to Spring Boot Starters
26
Mar
2021

Intro to Spring Boot Starters

1. Overview

Dependency management is a critical aspect of any complex project. And doing this manually is less than ideal; the more time you spend on it, the less time you have for the other important aspects of the project.

Spring Boot starters were built to address exactly this problem. Starter POMs are a set of convenient dependency descriptors that you can include in your application. You get a one-stop-shop for all the Spring and related technology that you need, without having to hunt through sample code and copy-paste loads of dependency descriptors.

We have more than 30 Boot starters available – let’s see some of them in the following sections.

2. The Web Starter

First, let’s look at developing the REST service; we can use libraries like Spring MVC, Tomcat and Jackson – a lot of dependencies for a single application.

Spring Boot starters can help to reduce the number of manually added dependencies just by adding one dependency. So instead of manually specifying the dependencies just add one starter as in the following example:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>

Now we can create a REST controller. For the sake of simplicity we won’t use the database and focus on the REST controller:

@RestController
public class GenericEntityController {
    private List<GenericEntity> entityList = new ArrayList<>();

    @RequestMapping("/entity/all")
    public List<GenericEntity> findAll() {
        return entityList;
    }

    @RequestMapping(value = "/entity", method = RequestMethod.POST)
    public GenericEntity addEntity(GenericEntity entity) {
        entityList.add(entity);
        return entity;
    }

    @RequestMapping("/entity/findby/{id}")
    public GenericEntity findById(@PathVariable Long id) {
        return entityList.stream().
                 filter(entity -> entity.getId().equals(id)).
                   findFirst().get();
    }
}

The GenericEntity is a simple bean with id of type Long and value of type String.
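For reference, a minimal sketch of what such a bean might look like is shown below; the constructors mirror the way the entity is used later in this article, and the JPA annotations would only be added once we introduce persistence in section 4.

public class GenericEntity {

    private Long id;
    private String value;

    // A no-arg constructor keeps the bean framework-friendly
    public GenericEntity() {
    }

    public GenericEntity(String value) {
        this.value = value;
    }

    public GenericEntity(Long id, String value) {
        this.id = id;
        this.value = value;
    }

    public Long getId() {
        return id;
    }

    public String getValue() {
        return value;
    }
}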

That’s it – with the application running, you can access http://localhost:8080/entity/all and check the controller is working.

We have created a REST application with quite a minimal configuration.

3. The Test Starter

For testing we usually use the following set of libraries: Spring Test, JUnit, Hamcrest, and Mockito. We can include all of these libraries manually, but Spring Boot starter can be used to automatically include these libraries in the following way:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
    <scope>test</scope>
</dependency>

Notice that you don’t need to specify the version number of an artifact. Spring Boot will figure out what version to use – all you need to specify is the version of spring-boot-starter-parent artifact. If later on you need to upgrade the Boot library and dependencies, just upgrade the Boot version in one place and it will take care of the rest.

Let’s actually test the controller we created in the previous example.

There are two ways to test the controller:

  • Using the mock environment
  • Using the embedded Servlet container (like Tomcat or Jetty)

In this example we’ll use a mock environment:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@WebAppConfiguration
public class SpringBootApplicationIntegrationTest {
    @Autowired
    private WebApplicationContext webApplicationContext;
    private MockMvc mockMvc;

    @Before
    public void setupMockMvc() {
        mockMvc = MockMvcBuilders.webAppContextSetup(webApplicationContext).build();
    }

    @Test
    public void givenRequestHasBeenMade_whenMeetsAllOfGivenConditions_thenCorrect()
      throws Exception { 
        MediaType contentType = new MediaType(MediaType.APPLICATION_JSON.getType(),
          MediaType.APPLICATION_JSON.getSubtype(), Charset.forName("utf8"));

        mockMvc.perform(MockMvcRequestBuilders.get("/entity/all"))
          .andExpect(MockMvcResultMatchers.status().isOk())
          .andExpect(MockMvcResultMatchers.content().contentType(contentType))
          .andExpect(jsonPath("$", hasSize(4)));
    } 
}

The above test calls the /entity/all endpoint and verifies that the JSON response contains 4 elements. For this test to pass, we also have to initialize our list in the controller class:

public class GenericEntityController {
    private List<GenericEntity> entityList = new ArrayList<>();

    {
        entityList.add(new GenericEntity(1L, "entity_1"));
        entityList.add(new GenericEntity(2L, "entity_2"));
        entityList.add(new GenericEntity(3L, "entity_3"));
        entityList.add(new GenericEntity(4L, "entity_4"));
    }
    //...
}

What is important here is that the @WebAppConfiguration annotation and MockMvc are part of the spring-test module, hasSize is a Hamcrest matcher, and @Before is a JUnit annotation. These are all available by importing this one starter dependency.

4. The Data JPA Starter

Most web applications have some sort of persistence – and that’s quite often JPA.

Instead of defining all of the associated dependencies manually – let’s go with the starter instead:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <scope>runtime</scope>
</dependency>

Notice that out of the box we have automatic support for at least the following databases: H2, Derby and Hsqldb. In our example, we’ll use H2.

Now let’s create the repository for our entity:

public interface GenericEntityRepository extends JpaRepository<GenericEntity, Long> {}

Time to test the code. Here is the JUnit test:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
public class SpringBootJPATest {
    
    @Autowired
    private GenericEntityRepository genericEntityRepository;

    @Test
    public void givenGenericEntityRepository_whenSaveAndRetrieveEntity_thenOK() {
        GenericEntity genericEntity = 
          genericEntityRepository.save(new GenericEntity("test"));
        GenericEntity foundEntity = 
          genericEntityRepository.findOne(genericEntity.getId());
        
        assertNotNull(foundEntity);
        assertEquals(genericEntity.getValue(), foundEntity.getValue());
    }
    }
}

We didn’t spend time on specifying the database vendor, URL connection, and credentials. No extra configuration is necessary as we’re benefiting from the solid Boot defaults; but of course all of these details can still be configured if necessary.
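
If we did want to point the application at an external database instead of the in-memory H2, a handful of properties (plus the matching JDBC driver on the classpath) is typically all that’s needed. Here is a sketch with placeholder values, assuming a local MySQL instance:

spring.datasource.url=jdbc:mysql://localhost:3306/starter_demo
spring.datasource.username=demo_user
spring.datasource.password=demo_pass
spring.jpa.hibernate.ddl-auto=update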

5. The Mail Starter

A very common task in enterprise development is sending email, and dealing with the Java Mail API directly can be difficult.

The Spring Boot starter hides this complexity – the mail dependencies can be specified in the following way:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-mail</artifactId>
</dependency>

Now we can directly use the JavaMailSender, so let’s write some tests.

For the testing purpose, we need a simple SMTP server. In this example, we’ll use Wiser. This is how we can include it in our POM:

<dependency>
    <groupId>org.subethamail</groupId>
    <artifactId>subethasmtp</artifactId>
    <version>3.1.7</version>
    <scope>test</scope>
</dependency>

Here is the source code for the test:

@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
public class SpringBootMailTest {
    @Autowired
    private JavaMailSender javaMailSender;

    private Wiser wiser;

    private String userTo = "user2@localhost";
    private String userFrom = "user1@localhost";
    private String subject = "Test subject";
    private String textMail = "Text subject mail";

    @Before
    public void setUp() throws Exception {
        final int TEST_PORT = 25;
        wiser = new Wiser(TEST_PORT);
        wiser.start();
    }

    @After
    public void tearDown() throws Exception {
        wiser.stop();
    }

    @Test
    public void givenMail_whenSendAndReceived_thenCorrect() throws Exception {
        SimpleMailMessage message = composeEmailMessage();
        javaMailSender.send(message);
        List<WiserMessage> messages = wiser.getMessages();

        assertThat(messages, hasSize(1));
        WiserMessage wiserMessage = messages.get(0);
        assertEquals(userFrom, wiserMessage.getEnvelopeSender());
        assertEquals(userTo, wiserMessage.getEnvelopeReceiver());
        assertEquals(subject, getSubject(wiserMessage));
        assertEquals(textMail, getMessage(wiserMessage));
    }

    private String getMessage(WiserMessage wiserMessage)
      throws MessagingException, IOException {
        return wiserMessage.getMimeMessage().getContent().toString().trim();
    }

    private String getSubject(WiserMessage wiserMessage) throws MessagingException {
        return wiserMessage.getMimeMessage().getSubject();
    }

    private SimpleMailMessage composeEmailMessage() {
        SimpleMailMessage mailMessage = new SimpleMailMessage();
        mailMessage.setTo(userTo);
        mailMessage.setReplyTo(userFrom);
        mailMessage.setFrom(userFrom);
        mailMessage.setSubject(subject);
        mailMessage.setText(textMail);
        return mailMessage;
    }
}

In the test, the @Before and @After methods are in charge of starting and stopping the mail server.

Notice that we’re wiring in the JavaMailSender bean – the bean was automatically created by Spring Boot.

Just like any other defaults in Boot, the email settings for the JavaMailSender can be customized in application.properties:

spring.mail.host=localhost
spring.mail.port=25
spring.mail.properties.mail.smtp.auth=false

So we configured the mail server on localhost:25 and we didn’t require authentication.
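
For a real SMTP server we would typically also supply credentials and enable TLS; here is a sketch with placeholder values:

spring.mail.host=smtp.example.com
spring.mail.port=587
spring.mail.username=no-reply@example.com
spring.mail.password=change-me
spring.mail.properties.mail.smtp.auth=true
spring.mail.properties.mail.smtp.starttls.enable=true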

6. Conclusion

In this article, we gave an overview of starters, explained why we need them, and provided examples of how to use them in your projects.

Let’s recap the benefits of using Spring Boot starters:

  • increase pom manageability
  • production-ready, tested & supported dependency configurations
  • decrease the overall configuration time for the project

A Comparison Between Spring and Spring Boot
26
Mar
2021

A Comparison Between Spring and Spring Boot

2. What Is Spring?

Simply put, the Spring framework provides comprehensive infrastructure support for developing Java applications.

It’s packed with some nice features like Dependency Injection, and out of the box modules like:

  • Spring JDBC
  • Spring MVC
  • Spring Security
  • Spring AOP
  • Spring ORM
  • Spring Test

These modules can drastically reduce the development time of an application.

For example, in the early days of Java web development, we needed to write a lot of boilerplate code to insert a record into a data source. By using the JdbcTemplate of the Spring JDBC module, we can reduce it to a few lines of code with only a few configurations.
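
As a rough sketch (the table and class names here are made up for illustration, and a JdbcTemplate bean is assumed to be configured), such an insert boils down to a single call:

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Repository;

@Repository
public class EmployeeDao {

    private final JdbcTemplate jdbcTemplate;

    public EmployeeDao(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // One line replaces the manual connection, statement, and cleanup boilerplate
    public int insert(String firstName, String lastName) {
        return jdbcTemplate.update(
          "INSERT INTO employee (first_name, last_name) VALUES (?, ?)",
          firstName, lastName);
    }
}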

3. What Is Spring Boot?

Spring Boot is basically an extension of the Spring framework, which eliminates the boilerplate configurations required for setting up a Spring application.

It takes an opinionated view of the Spring platform, which paves the way for a faster and more efficient development ecosystem.

Here are just a few of the features in Spring Boot:

  • Opinionated ‘starter’ dependencies to simplify the build and application configuration
  • Embedded server to avoid complexity in application deployment
  • Metrics, Health check, and externalized configuration
  • Automatic config for Spring functionality – whenever possible

Let’s get familiar with both of these frameworks step by step.

4. Maven Dependencies

First of all, let’s look at the minimum dependencies required to create a web application using Spring:

<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-web</artifactId>
    <version>5.3.5</version>
</dependency>
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-webmvc</artifactId>
    <version>5.3.5</version>
</dependency>

Unlike Spring, Spring Boot requires only one dependency to get a web application up and running:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
    <version>2.4.4</version>
</dependency>

All other dependencies are added automatically to the final archive during build time.

Another good example is testing libraries. We usually use the set of Spring Test, JUnit, Hamcrest, and Mockito libraries. In a Spring project, we should add all of these libraries as dependencies.

Alternatively, in Spring Boot we only need the starter dependency for testing to automatically include these libraries.

Spring Boot provides a number of starter dependencies for different Spring modules. Some of the most commonly used ones are:

  • spring-boot-starter-data-jpa
  • spring-boot-starter-security
  • spring-boot-starter-test
  • spring-boot-starter-web
  • spring-boot-starter-thymeleaf

For the full list of starters, also check out the Spring documentation.

5. MVC Configuration

Let’s explore the configuration required to create a JSP web application using both Spring and Spring Boot.

Spring requires defining the dispatcher servlet, mappings, and other supporting configurations. We can do this using either the web.xml file or an Initializer class:

public class MyWebAppInitializer implements WebApplicationInitializer {
 
    @Override
    public void onStartup(ServletContext container) {
        AnnotationConfigWebApplicationContext context
          = new AnnotationConfigWebApplicationContext();
        context.setConfigLocation("com.baeldung");
 
        container.addListener(new ContextLoaderListener(context));
 
        ServletRegistration.Dynamic dispatcher = container
          .addServlet("dispatcher", new DispatcherServlet(context));
         
        dispatcher.setLoadOnStartup(1);
        dispatcher.addMapping("/");
    }
}

We also need to add the @EnableWebMvc annotation to a @Configuration class, and define a view-resolver to resolve the views returned from the controllers:

@EnableWebMvc
@Configuration
public class ClientWebConfig implements WebMvcConfigurer { 
   @Bean
   public ViewResolver viewResolver() {
      InternalResourceViewResolver bean
        = new InternalResourceViewResolver();
      bean.setViewClass(JstlView.class);
      bean.setPrefix("/WEB-INF/view/");
      bean.setSuffix(".jsp");
      return bean;
   }
}

By comparison, Spring Boot only needs a couple of properties to make things work once we add the web starter:

spring.mvc.view.prefix=/WEB-INF/jsp/
spring.mvc.view.suffix=.jsp

All of the Spring configuration above is automatically included by adding the Boot web starter through a process called auto-configuration.

What this means is that Spring Boot will look at the dependencies, properties, and beans that exist in the application and enable configuration based on these.

Of course, if we want to add our own custom configuration, then the Spring Boot auto-configuration will back away.

5.1. Configuring Template Engine

Now let’s learn how to configure a Thymeleaf template engine in both Spring and Spring Boot.

In Spring, we need to add the thymeleaf-spring5 dependency and some configurations for the view resolver:

@Configuration
@EnableWebMvc
public class MvcWebConfig implements WebMvcConfigurer {

    @Autowired
    private ApplicationContext applicationContext;

    @Bean
    public SpringResourceTemplateResolver templateResolver() {
        SpringResourceTemplateResolver templateResolver = 
          new SpringResourceTemplateResolver();
        templateResolver.setApplicationContext(applicationContext);
        templateResolver.setPrefix("/WEB-INF/views/");
        templateResolver.setSuffix(".html");
        return templateResolver;
    }

    @Bean
    public SpringTemplateEngine templateEngine() {
        SpringTemplateEngine templateEngine = new SpringTemplateEngine();
        templateEngine.setTemplateResolver(templateResolver());
        templateEngine.setEnableSpringELCompiler(true);
        return templateEngine;
    }

    @Override
    public void configureViewResolvers(ViewResolverRegistry registry) {
        ThymeleafViewResolver resolver = new ThymeleafViewResolver();
        resolver.setTemplateEngine(templateEngine());
        registry.viewResolver(resolver);
    }
}
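
For completeness, the template-engine dependency referenced above looks like this (the version shown is just an example):

<dependency>
    <groupId>org.thymeleaf</groupId>
    <artifactId>thymeleaf-spring5</artifactId>
    <version>3.0.12.RELEASE</version>
</dependency>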

Spring Boot 1 only required the spring-boot-starter-thymeleaf dependency to enable Thymeleaf support in a web application. Due to the new features in Thymeleaf 3.0, we also have to add thymeleaf-layout-dialect as a dependency in a Spring Boot 2 web application. Alternatively, we can choose to add the spring-boot-starter-thymeleaf dependency, which will take care of all of this for us.

Once the dependencies are in place, we can add the templates to the src/main/resources/templates folder, and Spring Boot will display them automatically.

6. Spring Security Configuration

For the sake of simplicity, we’ll see how the default HTTP Basic authentication is enabled using these frameworks.

Let’s start by looking at the dependencies and configuration we need to enable Security using Spring.

Spring requires both the standard spring-security-web and spring-security-config dependencies to set up Security in an application.

Next, we need to add a class that extends WebSecurityConfigurerAdapter and makes use of the @EnableWebSecurity annotation:

@Configuration
@EnableWebSecurity
public class CustomWebSecurityConfigurerAdapter extends WebSecurityConfigurerAdapter {
 
    @Autowired
    public void configureGlobal(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
          .withUser("user1")
            .password(passwordEncoder()
            .encode("user1Pass"))
          .authorities("ROLE_USER");
    }
 
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.authorizeRequests()
          .anyRequest().authenticated()
          .and()
          .httpBasic();
    }
    
    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }
}

Here we’re using inMemoryAuthentication to set up the authentication.

Spring Boot also requires these dependencies to make it work, but we only need to define the spring-boot-starter-security dependency, as this will automatically add all the relevant dependencies to the classpath.

The security configuration in Spring Boot is the same as the one above.

To see how the JPA configuration can be achieved in both Spring and Spring Boot, we can check out our article A Guide to JPA with Spring.

7. Application Bootstrap

The basic difference in bootstrapping an application in Spring and Spring Boot lies with the servlet. Spring uses either the web.xml or SpringServletContainerInitializer as its bootstrap entry point.

On the other hand, Spring Boot uses only Servlet 3 features to bootstrap an application. Let’s talk about this in detail.

7.1. How Does Spring Bootstrap?

Spring supports both the legacy web.xml way of bootstrapping as well as the latest Servlet 3+ method.

Let’s see the web.xml approach in steps:

  1. Servlet container (the server) reads web.xml.
  2. The DispatcherServlet defined in the web.xml is instantiated by the container.
  3. DispatcherServlet creates WebApplicationContext by reading WEB-INF/{servletName}-servlet.xml.
  4. Finally, the DispatcherServlet registers the beans defined in the application context.
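
As a rough sketch of steps 1–3, the relevant web.xml fragment might look like this (the servlet name dispatcher is just the conventional choice):

<servlet>
    <servlet-name>dispatcher</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>
<servlet-mapping>
    <servlet-name>dispatcher</servlet-name>
    <url-pattern>/</url-pattern>
</servlet-mapping>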

Here’s how Spring bootstraps using the Servlet 3+ approach:

  1. The container searches for classes implementing ServletContainerInitializer and executes them.
  2. The SpringServletContainerInitializer finds all classes implementing WebApplicationInitializer.
  3. The WebApplicationInitializer creates the context with XML or @Configuration classes.
  4. The WebApplicationInitializer creates the DispatcherServlet with the previously created context.

7.2. How Does Spring Boot Bootstrap?

The entry point of a Spring Boot application is the class which is annotated with @SpringBootApplication:

@SpringBootApplication
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}

By default, Spring Boot uses an embedded container to run the application. In this case, Spring Boot uses the public static void main entry point to launch an embedded web server.

It also takes care of the binding of the Servlet, Filter, and ServletContextInitializer beans from the application context to the embedded servlet container.

Another feature of Spring Boot is that it automatically scans all the classes in the same package or sub-packages of the main class for components.

Additionally, Spring Boot provides the option of deploying it as a web archive in an external container. In this case, we have to extend the SpringBootServletInitializer:

@SpringBootApplication
public class Application extends SpringBootServletInitializer {
    // ...
}

Here the external servlet container looks for the Main-class defined in the META-INF file of the web archive, and the SpringBootServletInitializer will take care of binding the Servlet, Filter, and ServletContextInitializer.
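
In practice, the omitted body usually just overrides configure so the external container knows which sources to bootstrap; a common sketch:

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;

@SpringBootApplication
public class Application extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
        // Point the external container's bootstrap at our primary configuration class
        return builder.sources(Application.class);
    }
}

When going this route, we would also typically switch the Maven packaging to war.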

8. Packaging and Deployment

Finally, let’s see how an application can be packaged and deployed. Both of these frameworks support common build tools like Maven and Gradle; however, when it comes to deployment, they differ a lot.

For instance, the Spring Boot Maven Plugin provides Spring Boot support in Maven. It also allows packaging executable jar or war archives and running an application “in-place.”
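
For reference, enabling the plugin is a short addition to the build section (no version is needed when the project inherits from the Boot parent):

<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>

With that in place, mvn spring-boot:run starts the application in-place, and (when inheriting from the Boot parent) mvn package produces the executable archive.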

Some of the advantages of Spring Boot over Spring in the context of deployment include:

  • Provides embedded container support
  • Provision to run the jars independently using the command java -jar
  • Option to exclude dependencies to avoid potential jar conflicts when deploying in an external container
  • Option to specify active profiles when deploying
  • Random port generation for integration tests

9. Conclusion

In this article, we learned about the differences between Spring and Spring Boot.

In a few words, we can say that Spring Boot is simply an extension of Spring itself to make development, testing, and deployment more convenient.

How to Change the Default Port in Spring Boot
26
Mar
2021

How to Change the Default Port in Spring Boot

Spring Boot provides sensible defaults for many configuration properties. But we sometimes need to customize these with our case-specific values.

And a common use case is changing the default port for the embedded server.

In this quick tutorial, we’ll cover several ways to achieve this.

2. Using Property Files

The fastest and easiest way to customize Spring Boot is by overriding the values of the default properties.

For the server port, the property we want to change is server.port.

By default, the embedded server starts on port 8080.

So, let’s see how to provide a different value in an application.properties file:

server.port=8081

Now the server will start on port 8081.

And we can do the same if we’re using an application.yml file:

server:
  port: 8081

Both files are loaded automatically by Spring Boot if placed in the src/main/resources directory of a Maven application.

2.1. Environment-Specific Ports

If we have an application deployed in different environments, we may want it to run on different ports on each system.

We can easily achieve this by combining the property files approach with Spring profiles. Specifically, we can create a property file for each environment.

For example, we’ll have an application-dev.properties file with this content:

server.port=8081

Then we’ll add another application-qa.properties file with a different port:

server.port=8082
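
Each file is only picked up when its profile is active; for example, we could start the packaged application with the qa profile like this (the jar name is just a placeholder):

java -jar app.jar --spring.profiles.active=qa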

Now, the property files configuration should be sufficient for most cases. However, there are other options for this goal, so let’s explore them as well.

3. Programmatic Configuration

We can configure the port programmatically either by setting the specific property when starting the application or by customizing the embedded server configuration.

First, let’s see how to set the property in the main @SpringBootApplication class:

@SpringBootApplication
public class CustomApplication {
    public static void main(String[] args) {
        SpringApplication app = new SpringApplication(CustomApplication.class);
        app.setDefaultProperties(Collections
          .singletonMap("server.port", "8083"));
        app.run(args);
    }
}

Next, to customize the server configuration, we have to implement the WebServerFactoryCustomizer interface:

@Component
public class ServerPortCustomizer 
  implements WebServerFactoryCustomizer<ConfigurableWebServerFactory> {
 
    @Override
    public void customize(ConfigurableWebServerFactory factory) {
        factory.setPort(8086);
    }
}

Note that this applies to the Spring Boot 2.x version.

For Spring Boot 1.x, we can similarly implement the EmbeddedServletContainerCustomizer interface.

4. Using Command-Line Arguments

When packaging and running our application as a jar, we can set the server.port argument with the java command:

java -jar spring-5.jar --server.port=8083

or by using the equivalent syntax:

java -jar -Dserver.port=8083 spring-5.jar

5. Order of Evaluation

As a final note, let’s look at the order in which these approaches are evaluated by Spring Boot.

Basically, the configuration priority is:

  • embedded server configuration
  • command-line arguments
  • property files
  • main @SpringBootApplication configuration

6. Conclusion

In this article, we saw how to configure the server port in a Spring Boot application.

Building Reactive Rest APIs with Spring WebFlux and Reactive MongoDB
05
Mar
2021

Building Reactive Rest APIs with Spring WebFlux and Reactive MongoDB

Spring 5 has embraced the reactive programming paradigm by introducing a brand new reactive framework called Spring WebFlux.

Spring WebFlux is an asynchronous framework from the bottom up. It can run on Servlet containers using the Servlet 3.1 non-blocking IO API, as well as on other async runtimes such as Netty or Undertow.

It will be available for use alongside Spring MVC. Yes, Spring MVC is not going anywhere. It’s a popular web framework that developers have been using for a long time.

But you now have a choice between the new reactive framework and the traditional Spring MVC. You can choose either of them depending on your use case.

Spring WebFlux uses a library called Reactor for its reactive support. Reactor is an implementation of the Reactive Streams specification.

Reactor provides two main types, Flux and Mono. Both of these types implement the Publisher interface provided by Reactive Streams. Flux is used to represent a stream of 0..N elements, and Mono is used to represent a stream of 0..1 element.
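
As a tiny, self-contained illustration of the two types (this isn’t part of the application we’ll build below):

import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

public class ReactorTypesDemo {
    public static void main(String[] args) {
        // A Flux can emit 0..N elements
        Flux<String> tweets = Flux.just("tweet1", "tweet2", "tweet3");

        // A Mono emits at most one element
        Mono<String> singleTweet = Mono.just("just one tweet");

        // Nothing is emitted until someone subscribes
        tweets.map(String::toUpperCase)
              .subscribe(System.out::println);
        singleTweet.subscribe(System.out::println);
    }
}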

Although Spring uses Reactor as a core dependency for most of its internal APIs, it also supports the use of RxJava at the application level.

Programming models supported by Spring WebFlux

Spring WebFlux supports two types of programming models :

  1. The traditional annotation-based model with @Controller, @RequestMapping, and the other annotations that you have been using in Spring MVC.
  2. A brand new Functional style model based on Java 8 lambdas for routing and handling requests.

In this article, we’ll be using the traditional annotation-based programming model. I will write about the functional style model in a future article.

Let’s build a Reactive Restful Service in Spring Boot

In this article, we’ll build a Restful API for a mini twitter application. The application will only have a single domain model called Tweet. Every Tweet will have a text and a createdAt field.

We’ll use MongoDB as our data store along with the reactive mongodb driver. We’ll build REST APIs for creating, retrieving, updating and deleting a Tweet. All the REST APIs will be asynchronous and will return a Publisher.

We’ll also learn how to stream data from the database to the client.

Finally, we’ll write integration tests to test all the APIs using the new asynchronous WebTestClient provided by Spring 5.

Creating the Project

Let’s use Spring Initializr web app to generate our application. Follow the steps below to generate the Project –

  1. Head over to http://start.spring.io
  2. Enter artifact’s value as webflux-demo
  3. Add Reactive Web and Reactive MongoDB dependencies
  4. Click Generate to generate and download the Project.

Once the project is downloaded, unzip it and import it into your favorite IDE. The project’s directory structure should look like this –

Spring WebFlux Reactive MongoDB REST API Application Directory Structure

Configuring MongoDB

You can configure MongoDB by simply adding the following property to the application.properties file –

spring.data.mongodb.uri=mongodb://localhost:27017/webflux_demo

Spring Boot will read this configuration on startup and automatically configure the data source.

Creating the Domain Model

Let’s create our domain model – Tweet. Create a new package called model inside com.example.webfluxdemo package and then create a file named Tweet.java with the following contents –

package com.example.webfluxdemo.model;

import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import javax.validation.constraints.NotBlank;
import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;
import java.util.Date;

@Document(collection = "tweets")
public class Tweet {
    @Id
    private String id;

    @NotBlank
    @Size(max = 140)
    private String text;

    @NotNull
    private Date createdAt = new Date();

    public Tweet() {

    }

    public Tweet(String text) {
        this.text = text;
    }

    public String getId() {
        return id;
    }

    public void setId(String id) {
        this.id = id;
    }

    public String getText() {
        return text;
    }

    public void setText(String text) {
        this.text = text;
    }

    public Date getCreatedAt() {
        return createdAt;
    }

    public void setCreatedAt(Date createdAt) {
        this.createdAt = createdAt;
    }
}

Simple enough! The Tweet model contains a text and a createdAt field. The text field is annotated with @NotBlank and @Size annotations to ensure that it is not blank and has a maximum of 140 characters.

Creating the Repository

Next, we’re going to create the data access layer which will be used to access the MongoDB database. Create a new package called repository inside com.example.webfluxdemo and then create a new file called TweetRepository.java with the following contents –

package com.example.webfluxdemo.repository;

import com.example.webfluxdemo.model.Tweet;
import org.springframework.data.mongodb.repository.ReactiveMongoRepository;
import org.springframework.stereotype.Repository;

@Repository
public interface TweetRepository extends ReactiveMongoRepository<Tweet, String> {

}

The TweetRepository interface extends from ReactiveMongoRepository which exposes various CRUD methods on the Document.

Spring Boot automatically plugs in an implementation of this interface called SimpleReactiveMongoRepository at runtime.

So you get all the CRUD methods on the Document readily available to you without needing to write any code. Following are some of the methods available from SimpleReactiveMongoRepository –

reactor.core.publisher.Flux<T>  findAll(); 

reactor.core.publisher.Mono<T>  findById(ID id); 

<S extends T> reactor.core.publisher.Mono<S>  save(S entity); 

reactor.core.publisher.Mono<Void>   delete(T entity);

Notice that all the methods are asynchronous and return a publisher in the form of a Flux or a Mono type.

Creating the Controller Endpoints

Finally, let’s write the APIs that will be exposed to the clients. Create a new package called controller inside com.example.webfluxdemo and then create a new file called TweetController.java with the following contents –

package com.example.webfluxdemo.controller;

import com.example.webfluxdemo.model.Tweet;
import com.example.webfluxdemo.repository.TweetRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;

import javax.validation.Valid;

@RestController
public class TweetController {

    @Autowired
    private TweetRepository tweetRepository;

    @GetMapping("/tweets")
    public Flux<Tweet> getAllTweets() {
        return tweetRepository.findAll();
    }

    @PostMapping("/tweets")
    public Mono<Tweet> createTweets(@Valid @RequestBody Tweet tweet) {
        return tweetRepository.save(tweet);
    }

    @GetMapping("/tweets/{id}")
    public Mono<ResponseEntity<Tweet>> getTweetById(@PathVariable(value = "id") String tweetId) {
        return tweetRepository.findById(tweetId)
                .map(savedTweet -> ResponseEntity.ok(savedTweet))
                .defaultIfEmpty(ResponseEntity.notFound().build());
    }

    @PutMapping("/tweets/{id}")
    public Mono<ResponseEntity<Tweet>> updateTweet(@PathVariable(value = "id") String tweetId,
                                                   @Valid @RequestBody Tweet tweet) {
        return tweetRepository.findById(tweetId)
                .flatMap(existingTweet -> {
                    existingTweet.setText(tweet.getText());
                    return tweetRepository.save(existingTweet);
                })
                .map(updatedTweet -> new ResponseEntity<>(updatedTweet, HttpStatus.OK))
                .defaultIfEmpty(new ResponseEntity<>(HttpStatus.NOT_FOUND));
    }

    @DeleteMapping("/tweets/{id}")
    public Mono<ResponseEntity<Void>> deleteTweet(@PathVariable(value = "id") String tweetId) {

        return tweetRepository.findById(tweetId)
                .flatMap(existingTweet ->
                        tweetRepository.delete(existingTweet)
                            .then(Mono.just(new ResponseEntity<Void>(HttpStatus.OK)))
                )
                .defaultIfEmpty(new ResponseEntity<>(HttpStatus.NOT_FOUND));
    }

    // Tweets are Sent to the client as Server Sent Events
    @GetMapping(value = "/stream/tweets", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<Tweet> streamAllTweets() {
        return tweetRepository.findAll();
    }
}

All the controller endpoints return a Publisher in the form of a Flux or a Mono. The last endpoint is particularly interesting: there we set the content type to text/event-stream, so the tweets are sent to a browser in the form of Server-Sent Events like this –

data: {"id":"59ba5389d2b2a85ed4ebdafa","text":"tweet1","createdAt":1505383305602}
data: {"id":"59ba5587d2b2a85f93b8ece7","text":"tweet2","createdAt":1505383814847}

Now that we’re talking about event streams, you might ask: doesn’t the following endpoint also return a stream?

@GetMapping("/tweets")
public Flux<Tweet> getAllTweets() {
    return tweetRepository.findAll();
}

And the answer is yes: Flux<Tweet> represents a stream of tweets. But, by default, it will produce a JSON array, because a stream of individual JSON objects sent to the browser would not be a valid JSON document as a whole, and a browser client has no way to consume a stream other than Server-Sent Events or WebSocket.

However, non-browser clients can request a stream of JSON by setting the Accept header to application/stream+json, and the response will be a stream of JSON similar to Server-Sent Events but without the extra formatting:

{"id":"59ba5389d2b2a85ed4ebdafa","text":"tweet1","createdAt":1505383305602}
{"id":"59ba5587d2b2a85f93b8ece7","text":"tweet2","createdAt":1505383814847}

Integration Test with WebTestClient

Spring 5 also provides an asynchronous, reactive HTTP client called WebClient for working with asynchronous and streaming APIs. It is a reactive alternative to RestTemplate.
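
For example, here is a small sketch of how WebClient could consume the /tweets endpoint we built above (assuming the application is running locally on port 8080):

import com.example.webfluxdemo.model.Tweet;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;

public class TweetClientDemo {
    public static void main(String[] args) {
        WebClient client = WebClient.create("http://localhost:8080");

        // Fetch all tweets as a reactive stream
        Flux<Tweet> tweets = client.get()
                .uri("/tweets")
                .retrieve()
                .bodyToFlux(Tweet.class);

        // Blocking iteration is used here only to keep the demo simple
        tweets.toIterable().forEach(tweet -> System.out.println(tweet.getText()));
    }
}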

Moreover, you also get a WebTestClient for writing integration tests. The test client can either run against a live server or be used with mock requests and responses.

We’ll use WebTestClient to write integration tests for our REST APIs. Open WebfluxDemoApplicationTests.java file and add the following tests to it –

package com.example.webfluxdemo;

import com.example.webfluxdemo.model.Tweet;
import com.example.webfluxdemo.repository.TweetRepository;
import org.assertj.core.api.Assertions;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.http.MediaType;
import org.springframework.test.context.junit4.SpringRunner;
import org.springframework.test.web.reactive.server.WebTestClient;
import reactor.core.publisher.Mono;

import java.util.Collections;

@RunWith(SpringRunner.class)
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT)
public class WebfluxDemoApplicationTests {

	@Autowired
	private WebTestClient webTestClient;

	@Autowired
    TweetRepository tweetRepository;

	@Test
	public void testCreateTweet() {
		Tweet tweet = new Tweet("This is a Test Tweet");

		webTestClient.post().uri("/tweets")
				.contentType(MediaType.APPLICATION_JSON_UTF8)
                .accept(MediaType.APPLICATION_JSON_UTF8)
                .body(Mono.just(tweet), Tweet.class)
				.exchange()
				.expectStatus().isOk()
				.expectHeader().contentType(MediaType.APPLICATION_JSON_UTF8)
				.expectBody()
                .jsonPath("$.id").isNotEmpty()
                .jsonPath("$.text").isEqualTo("This is a Test Tweet");
	}

	@Test
    public void testGetAllTweets() {
	    webTestClient.get().uri("/tweets")
                .accept(MediaType.APPLICATION_JSON_UTF8)
                .exchange()
                .expectStatus().isOk()
                .expectHeader().contentType(MediaType.APPLICATION_JSON_UTF8)
                .expectBodyList(Tweet.class);
    }

    @Test
    public void testGetSingleTweet() {
        Tweet tweet = tweetRepository.save(new Tweet("Hello, World!")).block();

        webTestClient.get()
                .uri("/tweets/{id}", Collections.singletonMap("id", tweet.getId()))
                .exchange()
                .expectStatus().isOk()
                .expectBody()
                .consumeWith(response ->
                        Assertions.assertThat(response.getResponseBody()).isNotNull());
    }

    @Test
    public void testUpdateTweet() {
        Tweet tweet = tweetRepository.save(new Tweet("Initial Tweet")).block();

        Tweet newTweetData = new Tweet("Updated Tweet");

        webTestClient.put()
                .uri("/tweets/{id}", Collections.singletonMap("id", tweet.getId()))
                .contentType(MediaType.APPLICATION_JSON_UTF8)
                .accept(MediaType.APPLICATION_JSON_UTF8)
                .body(Mono.just(newTweetData), Tweet.class)
                .exchange()
                .expectStatus().isOk()
                .expectHeader().contentType(MediaType.APPLICATION_JSON_UTF8)
                .expectBody()
                .jsonPath("$.text").isEqualTo("Updated Tweet");
    }

    @Test
    public void testDeleteTweet() {
	    Tweet tweet = tweetRepository.save(new Tweet("To be deleted")).block();

	    webTestClient.delete()
                .uri("/tweets/{id}", Collections.singletonMap("id",  tweet.getId()))
                .exchange()
                .expectStatus().isOk();
    }
}

In the above example, I have written tests for all the CRUD APIs. You can run the tests by going to the root directory of the project and typing mvn test.

Conclusion

In this article, we learned the basics of reactive programming with Spring and built a simple Restful service with the reactive support provided by Spring WebFlux framework. We also tested all the Rest APIs using WebTestClient.

Thanks for reading, folks! Let me know what you think about the new Spring WebFlux framework in the comments section below.