Following is an example program to add pages to a PDF document using Java.
import java.io.File;
import java.io.IOException;
import org.apache.pdfbox.pdmodel.PDDocument;
import org.apache.pdfbox.pdmodel.PDPage;
public class AddingPagesToPdf {
    public static void main(String args[]) throws IOException {
        //Loading an existing PDF document
        File file = new File("C:/pdfBox/AddPages.pdf");
        PDDocument document = PDDocument.load(file);
        for (int i = 0; i < 10; i++) {
            //Creating a blank page
            PDPage blankPage = new PDPage();
            //Adding the blank page to the document
            document.addPage(blankPage);
        }
        //Saving the document
        document.save("C:/pdfBox/AddPages_OP.pdf");
        System.out.println("PDF created");
        //Closing the document
        document.close();
    }
}
Following is an example program to split a PDF into multiple documents using Java.
import org.apache.pdfbox.multipdf.Splitter;
import org.apache.pdfbox.pdmodel.PDDocument;
import java.io.File;
import java.io.IOException;
import java.util.List;
import java.util.Iterator;
public class SplittingPDF {
    public static void main(String[] args) throws IOException {
        //Loading an existing PDF document
        File file = new File("C:/pdfBox/splitpdf_IP.pdf");
        PDDocument doc = PDDocument.load(file);
        //Instantiating the Splitter class
        Splitter splitter = new Splitter();
        //Splitting the pages of the PDF document
        List<PDDocument> pages = splitter.split(doc);
        //Creating an iterator
        Iterator<PDDocument> iterator = pages.listIterator();
        //Saving each page as an individual document
        int i = 1;
        while (iterator.hasNext()) {
            PDDocument pd = iterator.next();
            pd.save("C:/pdfBox/splitOP" + i++ + ".pdf");
            pd.close();
        }
        doc.close();
        System.out.println("PDF split");
    }
}
The Spring Boot annotations are mostly placed in the org.springframework.boot.autoconfigure and org.springframework.boot.autoconfigure.condition packages. Let’s learn about some frequently used Spring Boot annotations and how they work behind the scenes.
1. @SpringBootApplication
Spring Boot is mostly about auto-configuration. This auto-configuration is done by component scanning, i.e. finding all classes on the classpath annotated with @Component. It also involves scanning for the @Configuration annotation and initializing some extra beans.
The @SpringBootApplication annotation enables all of this in one step. It combines three annotations: @SpringBootConfiguration, @EnableAutoConfiguration and @ComponentScan.
2. @EnableAutoConfiguration
This annotation enables auto-configuration of the Spring Application Context, attempting to guess and configure beans that we are likely to need based on the presence of predefined classes in classpath.
For example, if we have tomcat-embedded.jar on the classpath, we are likely to want a TomcatServletWebServerFactory.
As this annotation is already included via @SpringBootApplication, adding it again on the main class has no impact. It is advised to include this annotation only once, via @SpringBootApplication.
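For reference, here is a minimal sketch of a main class using this annotation (class and package names are illustrative, not from the original article):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// @SpringBootApplication enables auto-configuration, component scanning
// and Spring Boot configuration in a single annotation
@SpringBootApplication
public class DemoApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
```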
Auto-configuration classes are regular Spring Configuration beans. They are located using the SpringFactoriesLoader mechanism (keyed against this class). Generally auto-configuration beans are @Conditional beans (most often using @ConditionalOnClass and @ConditionalOnMissingBean annotations).
3. @SpringBootConfiguration
It indicates that a class provides Spring Boot application configuration. It can be used as an alternative to the Spring’s standard @Configuration annotation so that configuration can be found automatically.
An application should only ever include one @SpringBootConfiguration, and most idiomatic Spring Boot applications will inherit it from @SpringBootApplication.
The main difference between the two annotations is that @SpringBootConfiguration allows configuration to be automatically located. This can be especially useful for unit or integration tests.
4. @ImportAutoConfiguration
It imports and applies only the specified auto-configuration classes. The difference between @ImportAutoConfiguration and @EnableAutoConfiguration is that the latter attempts to configure beans that are found on the classpath during scanning, whereas @ImportAutoConfiguration only runs the configuration classes that we provide in the annotation.
We should use @ImportAutoConfiguration when we don’t want to enable the default auto-configuration.
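As a sketch, importing a single auto-configuration explicitly might look like this (DataSourceAutoConfiguration is just one example of an auto-configuration class; package names assume Spring Boot 2.x):

```java
import org.springframework.boot.autoconfigure.ImportAutoConfiguration;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;
import org.springframework.context.annotation.Configuration;

@Configuration
// Only the listed auto-configuration is applied; nothing else is auto-configured
@ImportAutoConfiguration(DataSourceAutoConfiguration.class)
public class JdbcOnlyConfiguration {
}
```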
We can use the @AutoConfigureAfter or @AutoConfigureBefore annotations if our configuration needs to be applied in a specific order (before or after).
If we want to order certain auto-configurations that should not have any direct knowledge of each other, we can also use @AutoConfigureOrder. That annotation has the same semantic as the regular @Order annotation but provides a dedicated order for auto-configuration classes.
5. Condition Annotations
All auto-configuration classes generally have one or more @Conditional annotations. These allow a bean to be registered only when the condition is met. Following are some useful conditional annotations.
5.1. @ConditionalOnBean and @ConditionalOnMissingBean
These annotations let a bean be included based on the presence or absence of specific beans.
Its value attribute is used to specify beans by type or by name. The search attribute lets us limit the ApplicationContext hierarchy that should be considered when searching for beans.
Using these annotations at the class level prevents registration of the @Configuration class as a bean if the condition does not match.
In the below example, the JpaTransactionManager bean will only be loaded if a bean of type JpaTransactionManager is not already defined in the application context.
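A sketch of such a conditional bean definition (the configuration class and method names are illustrative):

```java
import javax.persistence.EntityManagerFactory;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.JpaTransactionManager;

@Configuration
public class TransactionManagerConfiguration {

    @Bean
    @ConditionalOnMissingBean(JpaTransactionManager.class)
    public JpaTransactionManager transactionManager(EntityManagerFactory entityManagerFactory) {
        // Registered only when no JpaTransactionManager bean is already defined
        return new JpaTransactionManager(entityManagerFactory);
    }
}
```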
5.2. @ConditionalOnClass and @ConditionalOnMissingClass
These annotations let configuration classes be included based on the presence or absence of specific classes. Note that annotation metadata is parsed using the Spring ASM module; even if a class is not present at runtime, you can still refer to it in the annotation.
We can use the value attribute to refer to the real class, or the name attribute to specify the class name as a String.
The below configuration will create EmbeddedAcmeService only if this class is available at runtime and no other bean of the same type is present in the application context.
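A sketch of such a configuration (EmbeddedAcmeService is a placeholder service class used for illustration, not a real library type):

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnClass;
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@ConditionalOnClass(EmbeddedAcmeService.class)
public class EmbeddedServiceConfiguration {

    @Bean
    @ConditionalOnMissingBean
    public EmbeddedAcmeService embeddedAcmeService() {
        // Created only when EmbeddedAcmeService is on the classpath
        // and no bean of this type has been defined elsewhere
        return new EmbeddedAcmeService();
    }
}
```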
5.3. @ConditionalOnNotWebApplication and @ConditionalOnWebApplication
These annotations let configuration be included depending on whether the application is a “web application” or not. In Spring, a web application is one that meets at least one of the following three requirements:
uses a Spring WebApplicationContext
defines a session scope
has a StandardServletEnvironment
5.4. @ConditionalOnProperty
This annotation lets configuration be included based on the presence and value of a Spring Environment property.
For example, if we have different datasource definitions for different environments, we can use this annotation.
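For example, a sketch where the property name and values are assumptions made for illustration:

```java
import javax.sql.DataSource;
import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfiguration {

    @Bean
    @ConditionalOnProperty(name = "app.datasource.type", havingValue = "h2", matchIfMissing = true)
    public DataSource h2DataSource() {
        // Used when app.datasource.type=h2, or when the property is absent
        return buildH2DataSource(); // hypothetical helper method
    }

    private DataSource buildH2DataSource() {
        throw new UnsupportedOperationException("details omitted in this sketch");
    }
}
```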
5.5. @ConditionalOnResource
This annotation lets configuration be included only when a specific resource is present in the classpath. Resources can be specified by using the usual Spring conventions.
5.6. @ConditionalOnExpression
This annotation lets configuration be included based on the result of a SpEL expression. Use this annotation when the condition to evaluate is complex and shall be evaluated as a single condition.
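A sketch using assumed property names to illustrate a SpEL condition:

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnExpression;
import org.springframework.context.annotation.Configuration;

@Configuration
// Loaded only when both (assumed) properties evaluate to true
@ConditionalOnExpression("${app.feature.enabled:false} and ${app.feature.async:false}")
public class AsyncFeatureConfiguration {
}
```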
Inheritance is one of the key concepts of Object Oriented Programming (OOP). Inheritance enables re-usability. It allows a class to inherit features (properties and methods) from another class.
The class that inherits the features of another class is called the Child class or Derived class or Sub class, and the class whose features are inherited is called the Parent class or Base class or Super class.
All the classes in Kotlin have a common base class called Any. It corresponds to the Object class in Java. Every class that you create in Kotlin implicitly inherits from Any –
class Person // Implicitly inherits from the default Super class - Any
The Any class contains three methods namely equals(), hashCode() and toString(). All the classes in Kotlin inherit these three methods from Any, and can override them to provide their own implementation.
Inheritance (Creating Base and Derived classes)
Here is how you declare a base class and a derived class in Kotlin –
// Base class (Super class)
open class Computer {
}
// Derived class (Sub class)
class Laptop: Computer() {
}
Notice the use of open keyword in the base class. By default, all the classes in Kotlin are final (non-inheritable).
To allow a class to be inherited by others, you must mark it with the open modifier.
Note that the child class has the responsibility to initialize the parent class. If the child class has a primary constructor, then it must initialize the parent class right in the class header with the parameters passed to its primary constructor –
// Parent class
open class Computer(val name: String,
val brand: String) {
}
// Child class (initializes the parent class)
class Laptop(name: String,
brand: String,
val batteryLife: Double) : Computer(name, brand) {
}
If the child class doesn’t have a primary constructor, then all of its secondary constructors have to initialize the parent class either by calling the super keyword directly or by delegating to another constructor that does that –
class Laptop : Computer {
val batteryLife: Double
// Calls super() to initialize the Parent class
constructor(name: String, brand: String, batteryLife: Double): super(name, brand) {
this.batteryLife = batteryLife
}
// Calls another constructor (which calls super())
constructor(name: String, brand: String): this(name, brand, 0.0) {
}
}
In the above examples, we initialized the parent class using its primary constructor. If the parent class contains one or more secondary constructors, the child class can initialize it using either the primary constructor or any of the secondary constructors.
Just keep in mind that the parent class needs to be initialized; it doesn’t matter which of its constructors is used.
Inheritance Example with Properties and Member Functions
Let’s now see a complete example of Inheritance in Kotlin. Consider a banking application where people can have several types of Bank accounts like SavingsAccount, CurrentAccount etc.
In such cases, it makes sense to create a base class called BankAccount and let other classes like SavingsAccount and CurrentAccount inherit from the BankAccount class.
Following is a simple BankAccount class for our Banking application –
/**
* BankAccount (Base Class)
* @property accountNumber - Account Number (read-only)
* @property accountName - Account Name (read-only)
* @property balance - Current Balance (Mutable)
*/
open class BankAccount(val accountNumber: String, val accountName: String) {
var balance : Double = 0.0
fun depositeMoney(amount: Double): Boolean {
if(amount > 0) {
balance += amount
return true
} else {
return false
}
}
fun withdrawMoney(amount: Double): Boolean {
if(amount > balance) {
return false
} else {
balance -= amount
return true
}
}
}
A Savings account is a Bank account with some interest rate on the balance amount. We can model the SavingsAccount class in the following way –
/**
* SavingsAccount (Derived Class)
* @property interestRate - Interest Rate for SavingsAccount (read-only)
* @constructor - Primary constructor for creating a Savings Account
* @param accountNumber - Account Number (used to initialize BankAccount)
* @param accountName - Account Name (used to initialize BankAccount)
*/
class SavingsAccount (accountNumber: String, accountName: String, val interestRate: Double) :
BankAccount(accountNumber, accountName) {
fun depositInterest() {
val interest = balance * interestRate / 100
this.depositeMoney(interest)
}
}
The SavingsAccount class inherits the following features from the base class –
Properties – accountNumber, accountName, balance
Methods – depositeMoney, withdrawMoney
Let’s now write some code to test the above classes and methods –
fun main(args: Array<String>) {
// Create a Savings Account with 6% interest rate
val savingsAccount = SavingsAccount("64524627", "Rajeev Kumar Singh", 6.0)
savingsAccount.depositeMoney(1000.0)
savingsAccount.depositInterest()
println("Current Balance = ${savingsAccount.balance}")
}
Overriding Member Functions
Just like Kotlin classes, members of a Kotlin class are also final by default. To allow a member function to be overridden, you need to mark it with the open modifier.
Moreover, the derived class that overrides a base class function must use the override modifier; otherwise, the compiler will generate an error –
open class Teacher {
// Must use "open" modifier to allow child classes to override it
open fun teach() {
println("Teaching...")
}
}
class MathsTeacher : Teacher() {
// Must use "override" modifier to override a base class function
override fun teach() {
println("Teaching Maths...")
}
}
Let’s test the above classes by defining the main method –
fun main(args: Array<String>) {
val teacher = Teacher()
val mathsTeacher = MathsTeacher()
teacher.teach() // Teaching...
mathsTeacher.teach() // Teaching Maths..
}
Dynamic Polymorphism
Polymorphism is an important concept in Object Oriented Programming. There are two types of polymorphism –
Static (compile-time) Polymorphism
Dynamic (run-time) Polymorphism
Static polymorphism occurs when you define multiple overloaded functions with the same name but different signatures. It is called compile-time polymorphism because the compiler decides which function to call at compile time itself.
Dynamic polymorphism occurs in case of function overriding. In this case, the function that is called is decided at run-time.
Here is an example –
fun main(args: Array<String>) {
val teacher1: Teacher = Teacher() // Teacher reference and object
val teacher2: Teacher = MathsTeacher() // Teacher reference but MathsTeacher object
teacher1.teach() // Teaching...
teacher2.teach() // Teaching Maths..
}
The line teacher2.teach() calls teach() function of MathsTeacher class even if teacher2 is of type Teacher. This is because teacher2 refers to a MathsTeacher object.
Overriding Properties
Just like functions, you can override the properties of a super class as well. To allow child classes to override a property of a parent class, you must annotate it with the open modifier.
Moreover, the child class must use the override keyword when overriding a property of a parent class –
open class Employee {
// Use "open" modifier to allow child classes to override this property
open val baseSalary: Double = 30000.0
}
class Programmer : Employee() {
// Use "override" modifier to override the property of base class
override val baseSalary: Double = 50000.0
}
fun main(args: Array<String>) {
val employee = Employee()
println(employee.baseSalary) // 30000.0
val programmer = Programmer()
println(programmer.baseSalary) // 50000.0
}
Overriding Property’s Getter/Setter method
You can override a super class property either using an initializer or using a custom getter/setter.
In the example below, we’re overriding the age property by defining a custom setter method –
open class Person {
open var age: Int = 1
}
class CheckedPerson: Person() {
override var age: Int = 1
set(value) {
field = if(value > 0) value else throw IllegalArgumentException("Age can not be negative")
}
}
fun main(args: Array<String>) {
val person = Person()
person.age = -5 // Works
val checkedPerson = CheckedPerson()
checkedPerson.age = -5 // Throws IllegalArgumentException : Age can not be negative
}
Calling properties and functions of Super class
When you override a property or a member function of a super class, the super class implementation is shadowed by the child class implementation.
You can access the properties and functions of the super class using the super keyword.
Here is an example –
open class Employee {
open val baseSalary: Double = 10000.0
open fun displayDetails() {
println("I am an Employee")
}
}
class Developer: Employee() {
override var baseSalary: Double = super.baseSalary + 10000.0
override fun displayDetails() {
super.displayDetails()
println("I am a Developer")
}
}
Conclusion
That’s all in this article folks. I hope you understood how inheritance works in Kotlin.
Spring Boot uses Jackson by default for serializing and deserializing request and response objects in your REST APIs.
If you want to use GSON instead of Jackson then it’s just a matter of adding Gson dependency in your pom.xml file and specifying a property in the application.properties file to tell Spring Boot to use Gson as your preferred json mapper.
Force Spring Boot to use GSON instead of Jackson
1. Add Gson dependency
Open your pom.xml file and add the GSON dependency like so –
<!-- Include GSON dependency -->
<dependency>
<groupId>com.google.code.gson</groupId>
<artifactId>gson</artifactId>
<version>2.8.4</version>
</dependency>
Once you do that, Spring Boot will detect Gson on the classpath and automatically create a Gson bean with sensible default configurations. You can also autowire Gson in your Spring components directly like so –
@Autowired
private Gson gson;
If you’re curious how Spring Boot does that, then take a look at this GsonAutoConfiguration class. Notice how it uses @ConditionalOnClass(Gson.class) annotation to trigger the auto-configuration when Gson is available on the classpath.
You can now ask Spring Boot to use Gson as your preferred json mapper by specifying the following property in the application.properties file –
# Preferred JSON mapper to use for HTTP message conversion.
spring.http.converters.preferred-json-mapper=gson
That’s all you need to do to force Spring Boot to use Gson instead of Jackson.
Configure GSON in Spring Boot
Now that your Spring Boot application is using Gson, you can configure Gson by specifying various properties in the application.properties file. The following properties are taken from Spring Boot Common Application Properties index page –
# GSON (GsonProperties)
# Format to use when serializing Date objects.
spring.gson.date-format=
# Whether to disable the escaping of HTML characters such as '<', '>', etc.
spring.gson.disable-html-escaping=
# Whether to exclude inner classes during serialization.
spring.gson.disable-inner-class-serialization=
# Whether to enable serialization of complex map keys (i.e. non-primitives).
spring.gson.enable-complex-map-key-serialization=
# Whether to exclude all fields from consideration for serialization or deserialization that do not have the "Expose" annotation.
spring.gson.exclude-fields-without-expose-annotation=
# Naming policy that should be applied to an object's field during serialization and deserialization.
spring.gson.field-naming-policy=
# Whether to generate non executable JSON by prefixing the output with some special text.
spring.gson.generate-non-executable-json=
# Whether to be lenient about parsing JSON that doesn't conform to RFC 4627.
spring.gson.lenient=
# Serialization policy for Long and long types.
spring.gson.long-serialization-policy=
# Whether to output serialized JSON that fits in a page for pretty printing.
spring.gson.pretty-printing=
# Whether to serialize null fields.
spring.gson.serialize-nulls=
All the above properties are bound to a class called GsonProperties defined in Spring Boot. The GsonAutoConfiguration class uses these properties to configure Gson.
Excluding Jackson completely
If you want to get rid of Jackson completely then you can exclude it from spring-boot-starter-web dependency in the pom.xml file like so –
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
<!-- Exclude the default Jackson dependency -->
<exclusions>
<exclusion>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-json</artifactId>
</exclusion>
</exclusions>
</dependency>
I hope you enjoyed this article. Thanks for reading. See you next time!
Logging in Spring Boot can be confusing, and the wide range of tools and frameworks make it a challenge to even know where to start. This guide talks through the most common best practices for Spring Boot logging and gives five key suggestions to add to your logging tool kit.
What’s in the Spring Boot Box?
The Spring Boot Starters all depend on spring-boot-starter-logging. This is where the majority of the logging dependencies for your application come from. The dependencies involve a facade (SLF4J) and frameworks (Logback). It’s important to know what these are and how they fit together.
SLF4J is a simple front-facing facade supported by several logging frameworks. Its main advantage is that you can easily switch from one logging framework to another. In our case, we can easily switch our logging from Logback to Log4j, Log4j2 or JUL.
The dependencies we use will also write logs. For example, Hibernate uses SLF4J, which fits perfectly as we have that available. However, the AWS SDK for Java uses Apache Commons Logging (JCL). spring-boot-starter-logging includes the necessary bridges to ensure those logs are delegated to our logging framework out of the box.
SLF4J usage:
At a high level, all the application code has to worry about is:
Getting an instance of an SLF4J logger (regardless of the underlying framework): private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);
Writing some logs: LOG.info("My message set at info level");
Logback or Log4j2?
Spring Boot’s default logging framework is Logback. Your application code should interface only with the SLF4J facade so that it’s easy to switch to an alternative framework if necessary.
Log4j2 is newer and claims to improve on the performance of Logback. Log4j2 also supports a wide range of appenders so it can log to files, HTTP, databases, Cassandra, Kafka, as well as supporting asynchronous loggers. If logging performance is of high importance, switching to log4j2 may improve your metrics. Otherwise, for simplicity, you may want to stick with the default Logback implementation.
This guide will provide configuration examples for both frameworks.
5 Tips for Getting the Most Out of Your Spring Boot Logging
With your initial set up out of the way, here are 5 top tips for Spring Boot logging.
1. Configuring Your Log Format
Spring Boot Logging provides default configurations for logback and log4j2. These specify the logging level, the appenders (where to log) and the format of the log messages.
For all but a few specific packages, the default log level is set to INFO, and by default, the only appender used is the Console Appender, so logs will be directed only to the console.
The default format for the logs using logback looks like this:
Let’s take a look at that last line of log, which was a statement created from within a controller with the message “My message set at info level”.
It looks simple, yet the default log pattern for logback seems “off” at first glance. As much as it looks like it could be, it’s not regex, it doesn’t parse email addresses, and actually, when we break it down it’s not so bad.
The variables that are available for the log format allow you to create meaningful logs, so let’s look a bit deeper at the ones in the default log pattern example.
Pattern Part / What it Means

%clr
%clr specifies a colour. By default, it is based on log levels, e.g., INFO is green. If you want to specify specific colours, you can do that too.
The format is: %clr(Your message){your colour}
So for example, if we wanted to add “Demo” to the start of every log message, in green, we would write: %clr(Demo){green}

%d{${LOG_DATEFORMAT_PATTERN:-yyyy-MM-dd HH:mm:ss.SSS}}
%d is the current date, and the part in curly braces is the format. ${VARIABLE:-default} is a way of specifying that we should use the $VARIABLE environment variable for the format, if it is available, and if not, fall back to default. This is handy if you want to override these values in your properties files, by providing arguments, or by setting environment variables.
In this example, the default format is yyyy-MM-dd HH:mm:ss.SSS unless we specify a variable named LOG_DATEFORMAT_PATTERN. In the logs above, we can see 2020-10-19 10:09:58.152 matches the default pattern, meaning we did not specify a custom LOG_DATEFORMAT_PATTERN.

${LOG_LEVEL_PATTERN:-%5p}
Uses the LOG_LEVEL_PATTERN if it is defined, else prints the log level with right padding up to 5 characters (e.g. “INFO” becomes “INFO ” but “TRACE” will not have the trailing space). This keeps the rest of the log aligned as it’ll always be 5 characters.

${PID:- }
The environment variable $PID, if it exists. If not, a space.

%t
The name of the thread triggering the log message.

%logger{39}
The name of the logger (up to 39 characters), in our case this is the class name.

%m
The log message.

%n
The platform-specific line separator.

%wEx
If one exists, %wEx is the stack trace of any exception, formatted using Spring Boot’s ExtendedWhitespaceThrowableProxyConverter.
You can customise the ${} variables that are found in the logback-spring.xml by passing in properties or environment variables. For example, you may set logging.pattern.console to override the whole of the console log pattern.
Armed with the ability to customise your logs, you should consider adding:
Application name.
A request ID.
The endpoint being requested (E.g /health).
There are a few items in the default log that I would remove unless you have a specific use case for them:
The ‘—’ separator.
The thread name.
The process ID.
With the ability to customise these through the use of the logback-spring.xml or log4j2-spring.xml, the format of your logs is fully within your control.
2. Configuring the Destination for Your Logs (Appenders and Loggers)
An appender is just a fancy name for the part of the logging framework that sends your logs to a particular target. Both frameworks can output to console, over HTTP, to databases, or over a TCP socket, as well as to many other targets. The way we configure the destination for the logs is by adding, removing and configuring these appenders.
You have more control over which appenders you use, and the configuration of them, if you create your own custom .xml configuration. However, the default logging configuration does make use of environment properties that allow you to override some parts of it, for example, the date format.
The official docs for logback appenders and log4j2 appenders detail the parameters required for each of the appenders available, and how to configure them in your XML file. One tip for choosing the destination for your logs is to have a plan for rotating them. Writing logs to a file always feels like a great idea, until the storage used for that file runs out and brings down the whole service.
Log4j and logback both have a RollingFileAppender which handles rotating these log files based on file size, or time, and it’s exactly that which Spring Boot Logging uses if you set the logging.file property.
3. Logging as a Cross-Cutting Concern to Keep Your Code Clean (Using Filters and Aspects)
You might want to log every HTTP request your API receives. That’s a fairly normal requirement, but putting a log statement into every controller is unnecessary duplication. It’s easy to forget and make mistakes. A requirement that you want to log every method within your packages that your application calls would be even more cumbersome.
I’ve seen developers use this style of logging at trace level so that they can turn it on to see exactly what is happening in a production environment. Adding log statements to the start and end of every method is messy, and there is a better way. This is where filters and aspects save the day and avoid the code duplication.
When to Use a Filter Vs When to Use Aspect-Oriented Programming
If you are looking to create log statements related to specific requests, you should opt for using filters, as they are part of the handling chain that your application already goes through for each request. They are easier to write, easier to test and usually more performant than using aspects. If you are considering more cross-cutting concerns, for example, audit logging, or logging every method that causes an exception to be thrown, use AOP.
Using a Filter to Log Every Request
Filters can be registered with your web container by creating a class implementing javax.servlet.Filter and annotating it with @Component, or adding it as an @Bean in one of your configuration classes. When your spring-boot-starter application starts up, it will create the Filter and register it with the container.
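A minimal sketch of such a request-logging filter (assuming the javax.servlet API used by Spring Boot 2.x; on newer stacks the packages are jakarta.servlet):

```java
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Component
public class RequestLoggingFilter implements Filter {

    private static final Logger LOG = LoggerFactory.getLogger(RequestLoggingFilter.class);

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest httpRequest = (HttpServletRequest) request;
        // Log the method and path of every incoming request, then continue the chain
        LOG.info("Request: {} {}", httpRequest.getMethod(), httpRequest.getRequestURI());
        chain.doFilter(request, response);
    }
}
```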
Aspect-oriented programming enables you to fulfill cross-cutting concerns, like logging for example, in one place. You can do this without your logging code needing to sprawl across every class.
This approach is great for use cases such as:
Logging any exceptions thrown from any method within your packages (See @AfterThrowing)
Logging performance metrics by timing before/after each method is run (See @Around)
Audit logging. You can log calls to methods that have a custom annotation on them, such as @Audit. You only need to create a pointcut matching calls to methods with that annotation
Let’s start with a simple example – we want to log the name of every public method that we call within our package, com.example.demo. There are only a few steps to writing an Aspect that will run before every public method in a package that you specify.
Add @EnableAspectJAutoProxy to one of your configuration classes. This line tells spring-boot that you want to enable AspectJ support.
Add your pointcut, which defines a pattern that is matched against method signatures as they run. You can find more about how to construct your matching pattern in the spring boot documentation for AOP. In our example, we match any method inside the com.example.demo package.
Add your Aspect. This defines when you want to run your code in relation to the pointcut (E.g, before, after or around the methods that it matches). In this example, the @Before annotation causes the method to be executed before any methods that match the pointcut.
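Putting those steps together, a sketch of the aspect could look like this (the pointcut and class names mirror the com.example.demo package used above):

```java
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.stereotype.Component;

@Aspect
@Component
public class MyAspect {

    private static final Logger LOG = LoggerFactory.getLogger(MyAspect.class);

    // Runs before every public method in com.example.demo and its sub-packages
    @Before("execution(public * com.example.demo..*.*(..))")
    public void logMethodCall(JoinPoint joinPoint) {
        LOG.info("Called {}", joinPoint.getSignature().getName());
    }
}
```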
That’s all there is to logging every method call. The logs will appear as:
2020-10-27 19:26:33.269 INFO 2052 --- [nio-8080-exec-2] com.example.demo.MyAspect : Called checkHealth
By making changes to your pointcut, you can write logs for every method annotated with a specific annotation. For example, consider what you can do with:
@annotation(com.example.demo.Audit)
(This would run for every method annotated with a custom annotation, @Audit.)
4. Applying Context to Your Logs Using MDC
MDC (Mapped Diagnostic Context) is a complex-sounding name for a map of key-value pairs, associated with a single thread. Each thread has its own map. You can add keys/values to the map at runtime, and then reference the keys from that map in your logging pattern.
The approach comes with a warning that threads may be reused, and so you’ll need to make sure to clear your MDC after each request to avoid your context leaking from one request to the next.
MDC is accessible through SLF4J and supported by both Logback and Log4j2, so we don’t need to worry about the specifics of the underlying implementation.
Tracking Requests Through Your Application Using Filters and MDC
Want to be able to group logs for a specific request? The Mapped Diagnostic Context (MDC) will help.
The steps are:
Add a header to each request going to your API, for example, ‘tracking-id’. You can generate this on the fly (I suggest using a UUID) if your client cannot provide one.
Create a filter that runs once per request and stores that value in the MDC.
Update your logging pattern to reference the key in the MDC to retrieve the value.
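The steps above can be sketched as a once-per-request filter; the header name tracking-id and the MDC key tracking follow the example in this section:

```java
import java.io.IOException;
import java.util.UUID;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.slf4j.MDC;
import org.springframework.stereotype.Component;
import org.springframework.web.filter.OncePerRequestFilter;

@Component
public class TrackingIdFilter extends OncePerRequestFilter {

    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response,
                                    FilterChain chain) throws ServletException, IOException {
        String trackingId = request.getHeader("tracking-id");
        if (trackingId == null) {
            trackingId = UUID.randomUUID().toString(); // generate one if the client did not
        }
        MDC.put("tracking", trackingId);
        try {
            chain.doFilter(request, response);
        } finally {
            // Threads are reused: clear the MDC so context does not leak between requests
            MDC.clear();
        }
    }
}
```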
After setting the value on your MDC, just add %X{tracking} to your logging pattern (Replacing the word “tracking” with the key you have put in MDC) and your logs will contain the value in every log message for that request.
If a client reports a problem, as long as you can get a unique tracking-id from your client, then you’ll be able to search your logs and pull up every log statement generated from that specific request.
Other use cases that you may want to put into your MDC and include on every log message include:
The application version.
Details of the request, for example, the path.
Details of the logged-in user, for example, the username.
5. Unit Testing Your Log Statements
Why Test Your Logs?
You can unit test your logging code. Too often this is overlooked because log statements return void. For example, logger.info("foo"); does not return a value that you can assert against.
It’s easy to make mistakes. Log statements usually involve parameters or formatted strings, and it’s easy to put log statements in the wrong place. Unit testing reassures you that your logs do what you expect and that you’re covered when refactoring to avoid any accidental modifications to your logging behaviour.
The Approach to Testing Your Logs
The Problem
SLF4J’s LoggerFactory.getLogger is static, making it difficult to mock. Searching through any outputted log files in our unit tests is error-prone (e.g. we need to consider resetting the log files between each unit test). How do we assert against the logs?
The Solution
The trick is to add your own test appender to the logging framework (e.g. Logback or Log4j2) that captures the logs from your application in memory, allowing us to assert against the output later. The steps are:
Before each test case, add an appender to your logger.
Within the test, call your application code that logs some output.
The logger will delegate to your test appender.
Assert that your expected logs have been received by your test appender.
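Assuming Logback as the underlying framework, the four steps above can be sketched like this (the logger name and messages are illustrative):

```java
import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.read.ListAppender;
import org.slf4j.LoggerFactory;

public class LogCaptureExample {

    public static void main(String[] args) {
        // 1. Add a test appender to the logger (cast is safe when Logback is the SLF4J binding)
        Logger logger = (Logger) LoggerFactory.getLogger("MyService");
        ListAppender<ILoggingEvent> appender = new ListAppender<>();
        appender.start();
        logger.addAppender(appender);

        // 2. Call the application code that logs some output
        logger.info("processing order {}", 42);

        // 3-4. The logger delegated to the test appender; assert on what it captured
        //      (in a real test these would be JUnit/AssertJ assertions)
        ILoggingEvent event = appender.list.get(0);
        if (event.getLevel() != Level.INFO
                || !"processing order 42".equals(event.getFormattedMessage())) {
            throw new AssertionError("Log statement did not match expectations");
        }
        System.out.println("Log assertion passed");
    }
}
```

ListAppender simply accumulates events in its public `list` field, which is what makes the final assertions possible.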
Each logging framework has suitable appenders, but referencing those concrete appenders in our tests means we need to depend on the specific framework rather than SLF4J. That’s not ideal, but the alternatives (searching through logged output in files, or implementing our own SLF4J implementation) are overkill, making this the pragmatic choice.
Here are a couple of tricks for unit testing using JUnit 4 rules or JUnit 5 extensions that will keep your test classes clean, and reduce the coupling with the logging framework.
Testing Log Statements Using JUnit 5 Extensions in Two Steps
JUnit 5 extensions help to avoid code duplication between your tests. Here’s how to set up your logging tests in two steps:
Step 1: Create a JUnit 5 extension that adds a test appender to your logger before each test.
Step 2: Use that extension to assert against your log statements with Logback or Log4j2.
Testing Log Statements Using JUnit 4 Rules in Two Steps
JUnit 4 rules help to avoid code duplication by extracting the common test code away from the test classes. In our example, we don’t want to duplicate the code for adding a test appender to our logger in every test class.
Step 1: Create a JUnit 4 rule that adds a test appender to your logger before each test.
Step 2: Use that rule to assert against your log statements using Logback or Log4j2.
With these approaches, you can assert that your log statements have been called with a message and level that you expect.
Conclusion
The Spring Boot Logging Starter provides everything you need to quickly get started, whilst allowing full control when you need it. We’ve looked at how most logging concerns (formatting, destinations, cross-cutting logging, context and unit tests) can be abstracted away from your core application code.
Any global changes to your logging can be done in one place, and the classes for the rest of your application don’t need to change. At the same time, unit tests for your log statements provide you with reassurance that your log statements are being fired after making any alterations to your business logic.
These are my top 5 tips for configuring Spring Boot Logging. However, when your logging configuration is set up, remember that your logs are only ever as good as the content you put in them. Be mindful of the content you are logging, and make sure you are using the right logging levels.
Learn to create and configure a Spring Boot JSP view resolver, which uses JSP template files to render the view layer. This example uses the embedded Tomcat server to run the application.
Maven dependencies – pom.xml
This application uses the dependencies given below.
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.fusebes</groupId>
    <artifactId>spring-boot-demo</artifactId>
    <packaging>war</packaging>
    <version>0.0.1-SNAPSHOT</version>
    <name>spring-boot-demo Maven Webapp</name>
    <url>http://maven.apache.org</url>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>1.5.1.RELEASE</version>
    </parent>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <!-- Web -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <!-- Tomcat Embed -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-tomcat</artifactId>
            <scope>provided</scope>
        </dependency>
        <!-- JSTL -->
        <dependency>
            <groupId>javax.servlet</groupId>
            <artifactId>jstl</artifactId>
        </dependency>
        <!-- To compile JSP files -->
        <dependency>
            <groupId>org.apache.tomcat.embed</groupId>
            <artifactId>tomcat-embed-jasper</artifactId>
            <scope>provided</scope>
        </dependency>
    </dependencies>
</project>
Spring Boot Application Initializer
The first step in producing a deployable war file is to provide a SpringBootServletInitializer subclass and override its configure() method. This makes use of Spring Framework’s Servlet 3.0 support and allows you to configure your application when it’s launched by the servlet container.
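A minimal sketch of such a subclass (the class name is an assumption; note that in Boot 1.5.x, SpringBootServletInitializer lives in org.springframework.boot.web.support):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.support.SpringBootServletInitializer;

@SpringBootApplication
public class SpringBootWebApplication extends SpringBootServletInitializer {

    // Called by the servlet container when the war is deployed
    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(SpringBootWebApplication.class);
    }

    // Still allows running with the embedded Tomcat during development
    public static void main(String[] args) {
        SpringApplication.run(SpringBootWebApplication.class, args);
    }
}
```

Keeping both configure() and main() means the same class works for war deployment and for local `java -jar` runs.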
Controller classes can have methods mapped to specific URLs in the application. The given application has two views, served from the “/” and “/next” URLs.
package com.fusebes.app.controller;

import java.util.Map;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class IndexController {

    @RequestMapping("/")
    public String home(Map<String, Object> model) {
        model.put("message", "Fusebes Reader !!");
        return "index";
    }

    @RequestMapping("/next")
    public String next(Map<String, Object> model) {
        model.put("message", "You are in new page !!");
        return "next";
    }
}
Spring Boot JSP ViewResolver Configuration
To resolve the JSP files location, you can use one of two approaches.
1) Add entries in application.properties
spring.mvc.view.prefix=/WEB-INF/view/
spring.mvc.view.suffix=.jsp

# For detailed logging during development
logging.level.org.springframework=TRACE
logging.level.com=TRACE
2) Configure InternalResourceViewResolver to serve JSP pages
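The configuration itself isn't shown above; a minimal sketch of the Java-config alternative (class and bean method names are assumptions) might look like:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.view.InternalResourceViewResolver;

@Configuration
public class MvcConfiguration {

    @Bean
    public InternalResourceViewResolver jspViewResolver() {
        InternalResourceViewResolver resolver = new InternalResourceViewResolver();
        // Same prefix/suffix as the application.properties approach above
        resolver.setPrefix("/WEB-INF/view/");
        resolver.setSuffix(".jsp");
        return resolver;
    }
}
```

Use one approach or the other, not both; the properties-based configuration is usually enough.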
Functions are the basic building block of any program. In this article, you’ll learn how to declare and call functions in Kotlin. You’ll also learn about Function scopes, Default arguments, Named Arguments, and Varargs.
Defining and Calling Functions
You can declare a function in Kotlin using the fun keyword. Following is a simple function that calculates the average of two numbers –
fun avg(a: Double, b: Double): Double {
return (a + b)/2
}
Calling a function is simple. You just need to pass the required arguments in the function call like this –
avg(4.6, 9.0) // 6.8
Following is the general syntax of declaring a function in Kotlin.
fun functionName(param1: Type1, param2: Type2,..., paramN: TypeN): Type {
// Method Body
}
Every function declaration has a function name, a list of comma-separated parameters, an optional return type, and a method body. The function parameters must be explicitly typed.
Single Expression Functions
You can omit the return type and the curly braces if the function returns a single expression. The return type is inferred by the compiler from the expression –
fun avg(a: Double, b: Double) = (a + b)/2
avg(10.0, 20.0) // 15.0
Note that, unlike other statically typed languages like Scala, Kotlin does not infer return types for functions with block bodies. Therefore, functions with a block body must always specify return types explicitly.
Unit returning Functions
Functions which don’t return anything have a return type of Unit. The Unit type corresponds to void in Java.
fun printAverage(a: Double, b: Double): Unit {
println("Avg of ($a, $b) = ${(a + b)/2}")
}
printAverage(10.0, 30.0) // Avg of (10.0, 30.0) = 20.0
Note that the Unit type declaration is completely optional, so you can also write the above function declaration like this –
fun printAverage(a: Double, b: Double) {
println("Avg of ($a, $b) = ${(a + b)/2}")
}
Function Default Arguments
Kotlin supports default arguments in function declarations. You can specify a default value for a function parameter. The default value is used when the corresponding argument is omitted from the function call.
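The displayGreeting() function used in the calls below is not shown above; reconstructed from those calls (the parameter names are assumptions), it would look something like this:

```kotlin
// Reconstructed declaration (assumed): `name` has the default value "Guest"
fun displayGreeting(message: String, name: String = "Guest") {
    println("Hello $name, $message")
}

fun main() {
    displayGreeting("Welcome to the CalliCoder Blog", "John") // Hello John, Welcome to the CalliCoder Blog
    displayGreeting("Welcome to the CalliCoder Blog")         // Hello Guest, Welcome to the CalliCoder Blog
}
```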
If you call the above function with two arguments, it works just like any other function and uses the values passed in the arguments –
displayGreeting("Welcome to the CalliCoder Blog", "John") // Hello John, Welcome to the CalliCoder Blog
However, If you omit the argument that has a default value from the function call, then the default value is used in the function body –
displayGreeting("Welcome to the CalliCoder Blog") // Hello Guest, Welcome to the CalliCoder Blog
If the function declaration has a default parameter preceding a non-default parameter, then the default value cannot be used while calling the function with position-based arguments.
Consider the following function –
fun arithmeticSeriesSum(a: Int = 1, n: Int, d: Int = 1): Int {
return n/2 * (2*a + (n-1)*d)
}
While calling the above function, you cannot omit the argument a from the function call and selectively pass a value for the non-default parameter n –
arithmeticSeriesSum(10) // error: no value passed for parameter n
When you call a function with position-based arguments, the first argument corresponds to the first parameter, the second argument corresponds to the second parameter, and so on.
So for passing a value for the 2nd parameter, you need to specify a value for the first parameter as well –
arithmeticSeriesSum(1, 10) // Result = 55
However, the above use-case of selectively passing a value for a parameter is solved by another feature of Kotlin called Named Arguments.
Function Named Arguments
Kotlin allows you to specify the names of arguments that you’re passing to the function. This makes the function calls more readable. It also allows you to pass the value of a parameter selectively if other parameters have default values.
Consider the following arithmeticSeriesSum() function that we defined in the previous section –
fun arithmeticSeriesSum(a: Int = 1, n: Int, d: Int = 1): Int {
return n/2 * (2*a + (n-1)*d)
}
You can specify the names of arguments while calling the function like this –
arithmeticSeriesSum(n=10) // Result = 55
The above function call will use the default values for parameters a and d.
Similarly, you can call the function with all the parameters like this –
arithmeticSeriesSum(a=3, n=10, d=2) // Result = 120
You can also reorder the arguments if you’re specifying the names –
arithmeticSeriesSum(n=10, d=2, a=3) // Result = 120
You can use a mix of named arguments and position-based arguments as long as all the position-based arguments are placed before the named arguments –
arithmeticSeriesSum(3, n=10) // Result = 75
The following function call is not allowed since it contains position-based arguments after named arguments –
arithmeticSeriesSum(n=10, 2) // error: mixing named and positioned arguments is not allowed
Variable Number of Arguments (Varargs)
You can pass a variable number of arguments to a function by declaring the function with a vararg parameter.
Consider the following sumOfNumbers() function which accepts a vararg of numbers –
fun sumOfNumbers(vararg numbers: Double): Double {
var sum: Double = 0.0
for(number in numbers) {
sum += number
}
return sum
}
You can call the above function with any number of arguments –
sumOfNumbers(1.5, 2.0) // Result = 3.5
sumOfNumbers(1.5, 2.0, 3.5, 4.0, 5.8, 6.2) // Result = 23.0
sumOfNumbers(1.5, 2.0, 3.5, 4.0, 5.8, 6.2, 8.1, 12.4, 16.5) // Result = 60.0
In Kotlin, a vararg parameter of type T is internally represented as an array of type T (Array<T>) inside the function body.
A function may have only one vararg parameter. If there are other parameters following the vararg parameter, then the values for those parameters can be passed using the named argument syntax –
fun sumOfNumbers(vararg numbers: Double, initialSum: Double): Double {
var sum = initialSum
for(number in numbers) {
sum += number
}
return sum
}
sumOfNumbers(1.5, 2.5, initialSum=100.0) // Result = 104.0
Spread Operator
Usually, we pass the arguments to a vararg function one-by-one. But if you already have an array and want to pass the elements of the array to the vararg function, then you can use the spread operator like this –
val a = doubleArrayOf(1.5, 2.6, 5.4)
sumOfNumbers(*a) // Result = 9.5
Function Scope
Kotlin supports functional programming. Functions are first-class citizens in the language.
Unlike Java where every function needs to be encapsulated inside a class, Kotlin functions can be defined at the top level in a source file.
In addition to top-level functions, you also have the ability to define member functions, local functions, and extension functions.
1. Top Level Functions
Top level functions in Kotlin are defined in a source file outside of any class. They are also called package level functions because they are a member of the package in which they are defined.
The main() function itself is a top-level function in Kotlin since it is defined outside of any class.
Let’s now see an example of a top-level function. Check out the following findNthFibonacciNo() function which is defined inside a package named maths –
package maths
fun findNthFibonacciNo(n: Int): Int {
var a = 0
var b = 1
var c: Int
if(n == 0) {
return a
}
for(i in 2..n) {
c = a+b
a = b
b = c
}
return b
}
You can access the above function directly inside the maths package –
package maths
fun main(args: Array<String>) {
println("10th fibonacci number is - ${findNthFibonacciNo(10)}")
}
//Outputs - 10th fibonacci number is - 55
However, if you want to call the findNthFibonacciNo() function from other packages, then you need to import it as in the following example –
package test
import maths.findNthFibonacciNo
fun main(args: Array<String>) {
println("10th fibonacci number is - ${findNthFibonacciNo(10)}")
}
2. Member Functions
Member functions are functions which are defined inside a class or an object.
class User(val firstName: String, val lastName: String) {
// Member function
fun getFullName(): String {
return firstName + " " + lastName
}
}
Member functions are called on the objects of the class using the dot(.) notation –
val user = User("Bill", "Gates") // Create an object of the class
println("Display Name : ${user.getFullName()}") // Call the member function
3. Local/Nested Functions
Kotlin allows you to nest function definitions. These nested functions are called Local functions. Local functions bring more encapsulation and readability to your program –
fun findBodyMassIndex(weightInKg: Double, heightInCm: Double): Double {
// Validate the arguments
if(weightInKg <= 0) {
throw IllegalArgumentException("Weight must be greater than zero")
}
if(heightInCm <= 0) {
throw IllegalArgumentException("Height must be greater than zero")
}
fun calculateBMI(weightInKg: Double, heightInCm: Double): Double {
val heightInMeter = heightInCm / 100
return weightInKg / (heightInMeter * heightInMeter)
}
// Calculate BMI using the nested function
return calculateBMI(weightInKg, heightInCm)
}
Local functions can access local variables of the outer function. So the above function is equivalent to the following –
fun findBodyMassIndex(weightInKg: Double, heightInCm: Double): Double {
if(weightInKg <= 0) {
throw IllegalArgumentException("Weight must be greater than zero")
}
if(heightInCm <= 0) {
throw IllegalArgumentException("Height must be greater than zero")
}
// Nested function has access to the local variables of the outer function
fun calculateBMI(): Double {
val heightInMeter = heightInCm / 100
return weightInKg / (heightInMeter * heightInMeter)
}
return calculateBMI()
}
Conclusion
Congratulations folks! In this article, you learned how to define and call functions in Kotlin, how to use default and named arguments, how to define and call functions with a variable number of arguments, and how to define top-level functions, member functions, and local/nested functions.
In future articles, I’ll write about extension functions, higher order functions and lambdas. So Stay tuned!
As always, Thank you for reading. Happy Kotlin Koding 🙂
In this article, we’ll go a step further and deploy a stateless Go web app with Redis on Kubernetes. You’ll get to understand how the deployment of multiple distinct Pods work and how two Pods can communicate with each other in the cluster.
Building a sample Go app that uses Redis
We’ll create a simple web application in Go that contains an API to display the “Quote of the day”.
The app fetches the quote of the day from a public API hosted at http://quotes.rest/, then it caches the result in Redis until the end of the day. For subsequent API calls, the app will return the result from Redis cache instead of fetching it from the public API.
Open your terminal and type the following commands to create the project and initialize Go modules
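The commands themselves are not shown above; assuming the module is named go-redis-kubernetes (inferred from the binary name ./go-redis-kubernetes used later), they would look something like:

```shell
# Create the project directory (the module name is an assumption)
mkdir go-redis-kubernetes
cd go-redis-kubernetes

# Initialize Go modules
go mod init go-redis-kubernetes
```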
Next, create a file called main.go with the following code:
package main
import (
"context"
"encoding/json"
"errors"
"log"
"net/http"
"os"
"os/signal"
"syscall"
"time"
"github.com/go-redis/redis"
"github.com/gorilla/mux"
)
func indexHandler(w http.ResponseWriter, r *http.Request) {
w.Write([]byte("Welcome! Please hit the `/qod` API to get the quote of the day."))
}
func quoteOfTheDayHandler(client *redis.Client) http.HandlerFunc {
return func(w http.ResponseWriter, r *http.Request) {
currentTime := time.Now()
date := currentTime.Format("2006-01-02")
val, err := client.Get(date).Result()
if err == redis.Nil {
log.Println("Cache miss for date ", date)
quoteResp, err := getQuoteFromAPI()
if err != nil {
w.Write([]byte("Sorry! We could not get the Quote of the Day. Please try again."))
return
}
quote := quoteResp.Contents.Quotes[0].Quote
client.Set(date, quote, 24*time.Hour)
w.Write([]byte(quote))
} else {
log.Println("Cache Hit for date ", date)
w.Write([]byte(val))
}
}
}
func main() {
// Create Redis Client
var (
host = getEnv("REDIS_HOST", "localhost")
port = getEnv("REDIS_PORT", "6379")
password = getEnv("REDIS_PASSWORD", "")
)
client := redis.NewClient(&redis.Options{
Addr: host + ":" + port,
Password: password,
DB: 0,
})
_, err := client.Ping().Result()
if err != nil {
log.Fatal(err)
}
// Create Server and Route Handlers
r := mux.NewRouter()
r.HandleFunc("/", indexHandler)
r.HandleFunc("/qod", quoteOfTheDayHandler(client))
srv := &http.Server{
Handler: r,
Addr: ":8080",
ReadTimeout: 10 * time.Second,
WriteTimeout: 10 * time.Second,
}
// Start Server
go func() {
log.Println("Starting Server")
if err := srv.ListenAndServe(); err != nil {
log.Fatal(err)
}
}()
// Graceful Shutdown
waitForShutdown(srv)
}
func waitForShutdown(srv *http.Server) {
interruptChan := make(chan os.Signal, 1)
signal.Notify(interruptChan, os.Interrupt, syscall.SIGINT, syscall.SIGTERM)
// Block until we receive our signal.
<-interruptChan
// Create a deadline to wait for.
ctx, cancel := context.WithTimeout(context.Background(), time.Second*10)
defer cancel()
srv.Shutdown(ctx)
log.Println("Shutting down")
os.Exit(0)
}
func getQuoteFromAPI() (*QuoteResponse, error) {
API_URL := "http://quotes.rest/qod.json"
resp, err := http.Get(API_URL)
if err != nil {
return nil, err
}
defer resp.Body.Close()
log.Println("Quote API Returned: ", resp.StatusCode, http.StatusText(resp.StatusCode))
if resp.StatusCode >= 200 && resp.StatusCode <= 299 {
quoteResp := &QuoteResponse{}
json.NewDecoder(resp.Body).Decode(quoteResp)
return quoteResp, nil
} else {
return nil, errors.New("Could not get quote from API")
}
}
func getEnv(key, defaultValue string) string {
value := os.Getenv(key)
if value == "" {
return defaultValue
}
return value
}
Also, create the following structs in a file named quote.go to parse the JSON response returned from http://quotes.rest/ API.
package main
type QuoteData struct {
Id string `json:"id"`
Quote string `json:"quote"`
Length string `json:"length"`
Author string `json:"author"`
Tags []string `json:"tags"`
Category string `json:"category"`
Date string `json:"date"`
Permalink string `json:"permalink"`
Title string `json:"title"`
Background string `json:"Background"`
}
type QuoteResponse struct {
Success APISuccess `json:"success"`
Contents QuoteContent `json:"contents"`
}
type QuoteContent struct {
Quotes []QuoteData `json:"quotes"`
Copyright string `json:"copyright"`
}
type APISuccess struct {
Total string `json:"total"`
}
Let’s now build and run the app locally:
$ go build
$ ./go-redis-kubernetes
2019/07/28 13:32:05 Starting Server
$ curl localhost:8080
Welcome! Please hit the `/qod` API to get the quote of the day.
$ curl localhost:8080/qod
I’ve missed more than 9000 shots in my career. I’ve lost almost 300 games. 26 times, I’ve been trusted to take the game winning shot and missed. I’ve failed over and over and over again in my life. And that is why I succeed.
Containerizing the Go app
Let’s now containerize our Go app by creating a Dockerfile with the following configurations:
# Dockerfile References: https://docs.docker.com/engine/reference/builder/
# Start from the latest golang base image
FROM golang:latest as builder
# Add Maintainer Info
LABEL maintainer="Rajeev Singh <rajeevhub@gmail.com>"
# Set the Current Working Directory inside the container
WORKDIR /app
# Copy go mod and sum files
COPY go.mod go.sum ./
# Download all dependencies. Dependencies will be cached if the go.mod and go.sum files are not changed
RUN go mod download
# Copy the source from the current directory to the Working Directory inside the container
COPY . .
# Build the Go app
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main .
######## Start a new stage from scratch #######
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
# Copy the Pre-built binary file from the previous stage
COPY --from=builder /app/main .
# Expose port 8080 to the outside world
EXPOSE 8080
# Command to run the executable
CMD ["./main"]
I’ve already built and published the docker image for our app on Docker Hub. You can use the following commands to build and publish it yourself –
# Build the image
$ docker build -t go-redis-kubernetes .
# Tag the image
$ docker tag go-redis-kubernetes project/go-redis-app:1.0.0
# Login to docker with your docker Id
$ docker login
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username (username): username
Password:
Login Succeeded
# Push the image to docker hub
$ docker push project/go-redis-app:1.0.0
Creating the Kubernetes deployment and service manifest for Redis
Let’s now create the configuration for deploying our Redis app on Kubernetes. We’ll need to create a deployment for managing the Redis instance and a Service to proxy traffic from our Go app to the Redis Pod.
Create a folder called deployments inside the project’s root directory to store all the deployment manifests. And then, create a file called redis-master.yml with the following configurations:
---
apiVersion: apps/v1 # API version
kind: Deployment
metadata:
name: redis-master # Unique name for the deployment
labels:
app: redis # Labels to be applied to this deployment
spec:
selector:
matchLabels: # This deployment applies to the Pods matching these labels
app: redis
role: master
tier: backend
replicas: 1 # Run a single pod in the deployment
template: # Template for the pods that will be created by this deployment
metadata:
labels: # Labels to be applied to the Pods in this deployment
app: redis
role: master
tier: backend
spec: # Spec for the container which will be run inside the Pod.
containers:
- name: master
image: redis
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 6379
---
apiVersion: v1
kind: Service # Type of Kubernetes resource
metadata:
name: redis-master # Name of the Kubernetes resource
labels: # Labels that will be applied to this resource
app: redis
role: master
tier: backend
spec:
ports:
- port: 6379 # Map incoming connections on port 6379 to the target port 6379 of the Pod
targetPort: 6379
selector: # Map any Pod with the specified labels to this service
app: redis
role: master
tier: backend
The redis-master Service is only accessible within the container cluster because the default type for a Service is ClusterIP. ClusterIP provides a single IP address for the set of Pods the Service is pointing to. This IP address is accessible only within the cluster.
Kubernetes deployment manifest for the Go app
Let’s now create a deployment and a service for our Go app. We’ll run 3 Pods for the Go app and the Pods will be exposed via a Service to the outside world:
---
apiVersion: apps/v1
kind: Deployment # Type of Kubernetes resource
metadata:
name: go-redis-app # Unique name of the Kubernetes resource
spec:
replicas: 3 # Number of pods to run at any given time
selector:
matchLabels:
app: go-redis-app # This deployment applies to any Pods matching the specified label
template: # This deployment will create a set of pods using the configurations in this template
metadata:
labels: # The labels that will be applied to all of the pods in this deployment
app: go-redis-app
spec:
containers:
- name: go-redis-app
image: project/go-redis-app:1.0.0
imagePullPolicy: IfNotPresent
resources:
requests:
cpu: 100m
memory: 100Mi
ports:
- containerPort: 8080 # Should match the port number that the Go application listens on
env: # Environment variables passed to the container
- name: REDIS_HOST
value: redis-master
- name: REDIS_PORT
value: "6379"
---
apiVersion: v1
kind: Service # Type of kubernetes resource
metadata:
name: go-redis-app-service # Unique name of the resource
spec:
type: NodePort # Expose the Pods by opening a port on each Node and proxying it to the service.
ports: # Take incoming HTTP requests on port 9090 and forward them to the targetPort of 8080
- name: http
port: 9090
targetPort: 8080
selector:
app: go-redis-app # Map any pod with label `app=go-redis-app` to this service
The Golang app can communicate with Redis using the hostname redis-master. This is automatically resolved by Kubernetes to point to the IP address of the service redis-master.
Deploying the Go app and Redis on Kubernetes
We’ll deploy the Go web app and Redis on a local kubernetes cluster created using Minikube.
Please install Minikube and Kubectl if you haven’t installed them already. Check out the Kubernetes official documentation for instructions.
Start a Kubernetes cluster using minikube
$ minikube start
Deploy Redis
$ kubectl apply -f deployments/redis-master.yml
deployment.apps/redis-master created
service/redis-master created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
redis-master-7b44998456-pl8h9 1/1 Running 0 34s
Deploy the Go app
$ kubectl apply -f deployments/go-redis-app.yml
deployment.apps/go-redis-app created
service/go-redis-app-service created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
go-redis-app-57b7d4d4cd-fkddw 1/1 Running 0 27s
go-redis-app-57b7d4d4cd-l9wg9 1/1 Running 0 27s
go-redis-app-57b7d4d4cd-m9t8b 1/1 Running 0 27s
redis-master-7b44998456-pl8h9 1/1 Running 0 82s
Accessing the application
The Go app is exposed as NodePort via the service. You can get the service URL using minikube like this –
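The minikube command is not shown above; it would look something like this (the URL shown is the one used in the curl calls that follow):

```shell
$ minikube service go-redis-app-service --url
http://192.168.99.100:30435
```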
You can use the above endpoint to access the application:
$ curl http://192.168.99.100:30435
Welcome! Please hit the `/qod` API to get the quote of the day.
$ curl http://192.168.99.100:30435/qod
I’ve missed more than 9000 shots in my career. I’ve lost almost 300 games. 26 times, I’ve been trusted to take the game winning shot and missed. I’ve failed over and over and over again in my life. And that is why I succeed.
Conclusion
In this article, you learned how to deploy a stateless Go web app with Redis on a local Kubernetes cluster created using Minikube.
While building any application, we often need to create classes whose primary purpose is to hold data/state. These classes generally contain the same old boilerplate code in the form of getters, setters, equals(), hashcode() and toString() methods.
Motivation
Consider the following example of a Customer class in Java that just holds data about a Customer and doesn’t have any functionality whatsoever –
public class Customer {
private String id;
private String name;
public Customer(String id, String name) {
this.id = id;
this.name = name;
}
public String getId() {
return id;
}
public void setId(String id) {
this.id = id;
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
@Override
public boolean equals(Object o) {
if (this == o) return true;
if (o == null || getClass() != o.getClass()) return false;
Customer customer = (Customer) o;
if (id != null ? !id.equals(customer.id) : customer.id != null) return false;
return name != null ? name.equals(customer.name) : customer.name == null;
}
@Override
public int hashCode() {
int result = id != null ? id.hashCode() : 0;
result = 31 * result + (name != null ? name.hashCode() : 0);
return result;
}
}
You see, for creating a simple class with only two member fields, we had to write almost 50 lines of code.
Yes, I know that you don’t need to write that code yourself and any good IDE can generate all that boilerplate code for you.
But that code will still be there in your source file and clutter it. Moreover, whenever you add a new member field to the Class, you’ll need to regenerate/modify the constructors, getters/setters and equals()/hashcode() methods.
You can also use a third party library like Project Lombok to generate getters/setters, equals()/hashCode(), toString() methods and more. But there is no out-of-the-box solution without any library that can help us avoid this boilerplate code in our application.
Kotlin Data Classes
Kotlin has a better solution for classes that are used to hold data/state. It’s called a Data Class. A Data Class is like a regular class but with some additional functionalities.
With Kotlin’s data classes, you don’t need to write/generate all the lengthy boilerplate code yourself. The compiler automatically generates a default getter and setter for all the mutable properties, and a getter (only) for all the read-only properties of the data class. Moreover, It also derives the implementation of standard methods like equals(), hashCode() and toString() from the properties declared in the data class’s primary constructor.
For example, The Customer class that we wrote in the previous section in Java can be written in Kotlin in just one line –
data class Customer(val id: Long, val name: String)
Accessing the properties of the data class
The following example shows how you can access the properties of the data class –
val customer = Customer(1, "Sachin")
// Getting a property
val name = customer.name
Since all the properties of the Customer class are immutable, there is no default setter generated by the compiler. Therefore, if you try to set a property, the compiler will give an error –
// Setting a Property
// You cannot set read-only properties
customer.id = 2 // Error: Val cannot be assigned
Let’s now see how we can use the equals(), hashCode(), and toString() methods of the data class-
1. Data class’s equals() method
val customer1 = Customer(1, "John")
val customer2 = Customer(1, "John")
println(customer1.equals(customer2)) // Prints true
You can also use Kotlin’s structural equality operator == to check for equality. The == operator internally calls the equals() method –
println(customer1 == customer2) // Prints true
2. Data class’s toString() method
The toString() method converts the object to a String in the form of "ClassName(field1=value1, field2=value)" –
val customer = Customer(2, "Robert")
println("Customer Details : $customer") // Prints Customer Details : Customer(id=2, name=Robert)
3. Data class’s hashCode() method
val customer = Customer(2, "Robert")
println("Customer HashCode : ${customer.hashCode()}") // Prints -1841845792
Apart from the standard methods like equals(), hashCode() and toString(), Kotlin also generates a copy() function and componentN() functions for all the data classes. Let’s understand what these functions do and how to use them –
Data Classes and Immutability: The copy() function
Although the properties of a data class can be mutable (declared using var), it’s strongly recommended to use immutable properties (declared using val) so as to keep instances of the data class immutable.
Immutable objects are easier to work with and reason about in multi-threaded applications. Since they cannot be modified after creation, you don’t need to worry about the concurrency issues that arise when multiple threads try to modify an object at the same time.
Kotlin makes working with immutable data objects easier by automatically generating a copy() function for all the data classes. You can use the copy() function to copy an existing object into a new object and modify some of the properties while keeping the existing object unchanged.
The following example shows how copy() function can be used –
val customer = Customer(3, "James")
/*
Copies the customer object into a separate Object and updates the name.
The existing customer object remains unchanged.
*/
val updatedCustomer = customer.copy(name = "James Altucher")
println("Customer : $customer")
println("Updated Customer : $updatedCustomer")
Data Classes and Destructuring Declarations: The componentN() functions
Kotlin also generates componentN() functions corresponding to all the properties declared in the primary constructor of the data class.
For the Customer data class that we defined in the previous section, Kotlin generates two componentN() functions – component1() and component2() corresponding to the id and name properties –
val customer = Customer(4, "Joseph")
println(customer.component1()) // Prints 4
println(customer.component2()) // Prints "Joseph"
The component functions enable us to use the so-called Destructuring Declaration in Kotlin. The Destructuring declaration syntax helps you destructure an object into a number of variables like this –
val customer = Customer(4, "Joseph")
// Destructuring Declaration
val (id, name) = customer
println("id = $id, name = $name") // Prints "id = 4, name = Joseph"
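Destructuring declarations also work in for loops and in lambda parameters, which is often where they are most convenient. A short sketch, using a hypothetical list of customers:

```kotlin
data class Customer(val id: Long, val name: String)

fun main() {
    val customers = listOf(Customer(1, "Sachin"), Customer(2, "Robert"))

    // Destructuring directly in a for loop
    for ((id, name) in customers) {
        println("id = $id, name = $name")
    }

    // Destructuring a lambda parameter; underscore skips a component
    val names = customers.map { (_, name) -> name }
    println(names) // [Sachin, Robert]
}
```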
Requirements for Data Classes
Every Data Class in Kotlin needs to fulfill the following requirements –
The primary constructor must have at least one parameter
All the parameters declared in the primary constructor need to be marked as val or var.
Data classes cannot be abstract, open, sealed or inner.
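The requirements above can be illustrated with declarations the compiler accepts and rejects (the rejected ones are shown as comments, since they would not compile):

```kotlin
// Valid: at least one primary-constructor parameter, all marked val or var
data class Point(val x: Int, val y: Int)

// Invalid declarations (each is a compile-time error):
// data class Empty()                          // needs at least one parameter
// data class Tagged(val id: Int, tag: String) // 'tag' must be val or var
// abstract data class Shape(val name: String) // data class cannot be abstract

fun main() {
    println(Point(1, 2)) // Point(x=1, y=2)
}
```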
Conclusion
Data classes help us avoid a lot of common boilerplate code and keep our classes clean and concise. In this article, you learned how data classes work and how to use them. I hope you understood all the concepts presented in this article.