Guide to Clear Java Developer Interview
OVERVIEW
Please tell me about your project and its architecture. Please explain it and draw the architecture, framework, and technology used.
What are the best practices to follow while developing an application?
What challenging tasks have you accomplished so far? Could you provide examples?
Please explain the code and the flow of the project, both within it and outside of it.
Explain the diamond problem in Java and how to resolve it.
Which classes are eligible to be used inside the resource block?
Can we insert a null key in HashMap and Hashtable?
What is the Object class in Java? What are the methods in it?
Where do strings get stored, and where does the reference get stored?
What is a runtime exception, and how is it implemented?
How does HashMap behave when it reaches its maximum capacity?
What is the difference between them, which one will compile, and what is the best way to declare?
Why does HashMap not maintain order like LinkedHashMap?
How is LinkedHashMap able to maintain the insertion order?
Write a hashCode implementation. What is its return type?
Where are static methods and static variables stored in Java memory?
What is the difference between class and instance variables?
What is the difference between a try/catch block and throws?
How to create a thread pool, and how to use it in a database connection pool?
How to check if there is a deadlock, and how to prevent it?
Which exceptions can be thrown from a thread's run method?
What are the features of Java 8 and Java 11?
What are lambda expressions, and what is their use in Java 8?
What are the intermediate and terminal operations in Java 8?
What is parallel processing in Java 8, and what are its uses?
What is the difference between the map and flatMap methods in Java 8?
What are the memory changes that happened in Java 8?
Why are variables used inside a lambda effectively final in Java?
What are the types of dependency injection, and what benefits do we get from using them?
How does inversion of control work inside the Spring container?
What is the difference between application context and bean context?
What are bean scopes? What are the prototype and request bean scopes?
What is a stateless bean in Spring? Name it and explain it.
Tell me the design patterns used inside the Spring framework.
What is the difference between Spring Boot and Spring?
How does the Spring Boot auto-detect feature work?
How to make the POST method idempotent in Spring Boot?
How to set properties across different environments like Dev, QA, and PROD?
Describe the AOP concept and which annotations are used. How do you define the pointcuts?
What is a JWT token, and how does Spring Boot fetch that information?
How to ensure that a token has not been tampered with?
How to handle exceptions in Spring Boot applications? What are the best practices for doing so?
Write an endpoint in Spring Boot for getting and saving employees, with syntax.
Which microservice design patterns have you used so far, and why?
Which design patterns are used for database design in microservices?
Which microservice pattern will you use for read-heavy and write-heavy applications?
What is the circuit breaker pattern? What are examples of it?
Which library have you used to implement a circuit breaker in Spring Boot?
How to restrict a microservice from calling another microservice?
What is a memory leak in Java? How to rectify it in Java?
What is the difference between the POST and PUT methods?
What is sent in headers? Can we intercept the header? If yes, how?
Design a REST API for a tiny URL application. How many endpoints does it require?
Which design pattern is used by Spring AOP? Explain with logic.
What are the Adapter design pattern and the Proxy design pattern?
Write a SQL query to find the 5th-highest salary from the employee table.
Write a SQL query to find students who are enrolled in courses whose price is above 50000.
What does the JDBC forName() method do for you when you connect to any DB?
Write a query to find duplicate entries in a table against a column.
Explain Entity in JPA and all annotations used to create an Entity class.
How can we define a composite key in the Entity class?
What are the JPA annotations used for a composite attribute?
How to handle the parent and child relationship in JPA?
Write a program to find the duplicates in an array using the Stream API.
How to sort the employee list in ascending and descending order using the Java 8 Streams API?
Find the average of even numbers using the Java 8 Stream API.
Write a program to find the sum of the entire array using Java 8 streams.
Write a program to find even numbers from a list of integers and multiply them by 2 using Java 8 streams.
Write a program to convert a string to an integer in Java without any API.
Write a program to find the missing number in an array in Java.
Can you write down a Spring Boot REST API for the addition of two integers?
Design an application where you are getting millions of requests. How will you design it?
Suppose you have an application where the user wants the order history to be generated, and that history PDF generation takes almost 15 minutes. How will you optimise this solution? How can this be reduced?
How to persist data directly from a Kafka topic. Is it possible or not?
What is the difference between a container and a virtual machine?
How to resolve the Mockito exception "Mockito cannot mock this class"?
This guide covers a wide range of topics to make sure you're well-prepared. From the basics like Object-Oriented Programming and Core Java to more advanced topics like Java 8, the Spring Framework, Spring Boot, microservice architecture, memory management in Java, REST principles, design patterns, system design, SQL and Hibernate-JPA, and various coding and programming questions – it's all covered!
By the end of this guide, you'll walk into your interview with confidence
and expertise. The knowledge you gain here will set you apart from the
competition.
So, embrace this opportunity and start your journey toward interview
success with enthusiasm. Best of luck!
Best Regards,
Ajay Rathod
• Manager Round
• HR round
Usually, if you can clear the technical rounds, you are well on your way to
receiving an offer letter, as the manager and HR rounds primarily involve
discussions. Hence, our focus should be on preparing for the technical
rounds.
• Garbage Collection
• Java Generics
Key Lessons:
4. Clean Code Practices: Influence from "Clean Code" and "Clean Coder"
by Uncle Bob contributed to cleaner coding habits and improved code
reviews.
In Closing:
When you introduce yourself, it's crucial to only talk about things you're
very sure about, things you know 100 percent. For example, if you're not
familiar with a technology like JavaScript, it's best not to mention it
because the interviewer might ask you about it.
Be concise: Keep your answers brief and to the point. Don't ramble on or
share irrelevant information. Stick to the main points and be clear and
concise.
Highlight your skills: When asked about your skills, provide specific
examples of how you have used them in the past to achieve success. Talk
about your strengths and how they will benefit the company. Be sure to
include both technical and soft skills, such as problem-solving,
communication, and teamwork.
Start with an overview: Begin by giving a brief overview of the project and
the business problem it was designed to solve. This will help provide
context for the architecture you will describe.
Describe the design decisions: Talk about the design decisions that
were made during the project. This could include how the architecture was
designed to meet specific performance requirements, how the system was
designed to be scalable, or how it was designed to be maintainable.
Highlight your role: Be sure to discuss your role in the project and how
you contributed to the architecture design and implementation. This could
include any specific tasks you performed or any technical challenges you
helped overcome.
Use visual aids: If possible, use diagrams or other visual aids to help
illustrate the architecture and design decisions. This can help the
interviewer better understand your explanation and provide a more
comprehensive answer.
Plan and prioritize: Before starting development, make sure to plan the
project thoroughly and prioritize tasks based on their importance and
urgency.
Test early and often: Test the software early and often to catch bugs
and errors before they become more difficult to fix.
Write clean and modular code: Write clean, modular, and maintainable
code to make it easier to maintain and extend the software over time.
Please explain the code and the flow of the project, both within
it and outside of it.
When asked to explain the code and flow of a project, it's important to
provide a clear and concise overview of the project and how it works. Here
are some tips to help you answer this question:
Describe the code flow: Describe how the code flows through the
different components of the project. This should include an explanation of
the different modules, functions, and classes that make up the codebase.
Explain the logic: Explain the logic behind the code and how it
implements the functionality of the project. This should include an
explanation of the algorithms and data structures used in the code.
Use visual aids: If possible, use diagrams or other visual aids to help
illustrate the code flow and architecture. This can help the listener better
understand your explanation and provide a more comprehensive answer.
Instantiation: Neither an abstract class nor an interface can be instantiated directly.
Main method: An abstract class can have a main method; an interface cannot have a main method.
When to Use:
Abstract classes:
Interfaces:
• Constructors can enforce rules and constraints that must hold true
for all objects in the hierarchy. This ensures data integrity and
validity.
• Example: A BankAccount abstract class might require a non-
negative initial balance in its constructor.
Controlling Instantiation:
Key Points:
• Abstract class constructors are not used for object creation directly,
but they are invoked when a subclass object is created.
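The BankAccount idea above can be sketched as follows; the class and method names are illustrative assumptions, not from a specific codebase.

```java
abstract class BankAccount {
    protected final double balance;

    // The constructor enforces the non-negative balance rule for every subclass.
    protected BankAccount(double initialBalance) {
        if (initialBalance < 0) {
            throw new IllegalArgumentException("Initial balance must be non-negative");
        }
        this.balance = initialBalance;
    }

    public double getBalance() {
        return balance;
    }
}

class SavingsAccount extends BankAccount {
    // The subclass constructor implicitly invokes the abstract class constructor
    // via super(), so the rule is enforced for every SavingsAccount as well.
    SavingsAccount(double initialBalance) {
        super(initialBalance);
    }
}
```

Creating `new SavingsAccount(-1.0)` fails at construction time, so no object in the hierarchy can ever exist with an invalid balance.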
What is it?
Advantages of abstraction:
Encapsulation:
• Focus: How the object's data and behavior are bundled together.
• Goal: Protecting data integrity and controlling access.
• Mechanisms: Access modifiers (public, private, protected), getters
and setters.
Key Differences:
Benefits:
Mechanisms:
Polymorphism
Benefits:
Mechanisms:
For example, if we have an Animal class, a Mammal class that extends Animal, and a Cat class that extends Mammal, the Cat class inherits properties and behaviors from both Animal and Mammal through the single-inheritance chain, while adding its own specific methods.
Benefits:
Benefits:
Inheritance: Look for classes that extend or inherit from other classes.
This is typically indicated by the extends keyword in Java, for example:
public class Car extends Vehicle {...}. Inheritance is used to create a
hierarchy of classes where subclasses inherit properties and methods from
their parent classes.
for example:
class Animal {
    void eat() {
        System.out.println("Eating...");
    }
}

class Dog extends Animal {
    void bark() {
        System.out.println("Barking...");
    }
}

class Bulldog extends Dog {
    void guard() {
        System.out.println("Guarding...");
    }
}
In this example, Animal is the base class, Dog is a derived class from
Animal, and Bulldog is a derived class from Dog.
Animal has a single method eat(). Dog inherits eat() from Animal and
adds a new method bark(). Bulldog inherits both eat() and bark() from
Dog and adds a new method guard().
Private: Private variables and methods can only be accessed within the
same class.
class Person {
    private String name;
    private int age;

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        if (age < 0) {
            throw new IllegalArgumentException("Age cannot be negative");
        }
        this.age = age;
    }
}
In this example, the Person class has two private variables, name and
age. These variables are not directly accessible from outside the class,
which means that other classes cannot modify or access them directly.
Note that we can also add validation logic to the setter methods to ensure
that the values being set are valid. In this example, the setAge method
throws an exception if the age is negative.
By using access modifiers and getter and setter methods, we can achieve
encapsulation in Java. This allows us to protect the data and behavior of
our objects and prevent other objects from accessing or modifying them
directly, which makes our code more robust and maintainable.
Method overloading is when a class has two or more methods with the
same name, but different parameters. When a method is called, the
compiler determines which method to call based on the number and types
of the arguments passed to it.
class Animal {
    public void speak() {
        System.out.println("Animal speaks");
    }
}

class Dog extends Animal {
    @Override
    public void speak() {
        System.out.println("Dog barks");
    }
}
In this example, the Animal class has a method named speak. The Dog
class extends the Animal class and provides its own implementation of the
speak method. When we call the speak method on a Dog object, the Dog
version of the method is called instead of the Animal version.
class Calculator {
    int add(int x, int y) {
        return x + y;
    }

    int add(int x, int y, int z) {
        return x + y + z;
    }

    double add(double x, double y) {
        return x + y;
    }
}
In this example, Calculator defines three different add() methods with the
same name but different parameters. The first method takes two int
arguments, the second takes three int arguments, and the third takes two
double arguments. The compiler decides which method to call based on
the number and type of arguments passed to it.
Access specifiers determine the visibility of a method, and they also matter when overriding methods. The access specifier of the overriding method cannot be more restrictive than the access specifier of the overridden method. In other words, if the overridden method is public, the overriding method must also be public.
class Animal {
    public void speak() {
        System.out.println("Animal speaks");
    }

    protected void eat() {
        System.out.println("Animal eats");
    }
}

class Dog extends Animal {
    @Override
    public void speak() {
        System.out.println("Dog barks");
    }

    @Override
    protected void eat() {
        System.out.println("Dog eats");
    }
}
In this example, the Animal class has a method named speak that is
public, and a method named eat that is protected. The Dog class extends
the Animal class and provides its own implementations of the speak and
eat methods.
The speak method in the Dog class overrides the speak method in the
Animal class and is also public. The eat method in the Dog class overrides
the eat method in the Animal class and is also protected. Since the eat
method in the Animal class is also protected, the access specifier of the
eat method in the Dog class can be the same or less restrictive, but not
more restrictive.
class Animal {
    public void speak() throws Exception {
        System.out.println("Animal speaks");
    }

    public void eat() throws Exception {
        System.out.println("Animal eats");
    }
}

class Dog extends Animal {
    @Override
    public void speak() throws IOException {
        System.out.println("Dog barks");
    }

    @Override
    public void eat() {
        System.out.println("Dog eats");
    }
}
In this example, the Animal class has two methods: speak and eat. Both
methods are declared to throw an Exception.
The Dog class extends the Animal class and overrides both the speak and
eat methods.
The speak method in the Dog class overrides the speak method in the
Animal class and throws an IOException. The IOException is a subclass of
Exception, so this is allowed.
The eat method in the Dog class overrides the eat method in the Animal
class but does not throw any exceptions.
Animal animal = new Dog();

try {
    animal.speak();
} catch (IOException e) {
    System.out.println(e.getMessage());
} catch (Exception e) {
    System.out.println(e.getMessage());
}

try {
    animal.eat();
} catch (Exception e) {
    System.out.println(e.getMessage());
}
Since the speak method in the Dog class throws an IOException, we catch
that exception specifically and print out its message. If the speak method
in the Dog class threw a different type of exception, such as
RuntimeException, it would not be caught by this catch block.
The eat method in the Dog class does not throw any exceptions, so the
catch block for Exception will not be executed. If the eat method in the
Dog class did throw an exception, it would be caught by this catch block.
class Calculator {
    int add(int x, int y) {
        return x + y;
    }

    double add(int x, int y) { // compilation error: duplicate method signature
        return x + y;
    }
}
In this example, we have two add() methods with the same parameters,
but different return types (int and double). This will cause a compilation
error because the compiler cannot determine which method to call based
on the return type alone.
In general, software design principles strive for high cohesion and low
coupling, as this leads to code that is more modular, maintainable, and
easier to understand and change.
Static methods: A static method is a method that belongs to the class and
can be called without creating an instance of the class. Static methods are
declared using the static keyword and are often used for utility methods
that do not depend on the state of an instance.
public class Utils {
    // Utility method that does not depend on any instance state
    public static void printMessage(String message) {
        System.out.println(message);
    }
}
Static blocks: A static block is a block of code that is executed when the
class is loaded. Static blocks are used to initialize static variables or to
perform other one-time initialization tasks.
public class Example {
    static {
        System.out.println("Example class loaded");
    }
}
In this example, the static block is executed when the Example class is
loaded and prints a message to the console.
Static variables are declared using the "static" keyword and are shared
across all instances of a class. They are initialized only once, when the
class is loaded, and retain their value throughout the execution of the
program. Static variables are typically used to store data that is common
to all instances of a class, such as a constant or a count of objects
created.
Instance variables, on the other hand, are declared without the "static"
keyword and are unique to each instance of a class. They are initialized
when an object is created and are destroyed when the object is destroyed.
Instance variables are typically used to store data that is specific to each
instance of a class, such as the name or age of a person.
Scope: Static variables have class scope, while instance variables have
object scope.
Lifetime: Static variables are initialized once and retain their value
throughout the execution of the program, while instance variables are
created and destroyed with the objects they belong to.
Usage: Static variables are used to store data that is common to all
instances of a class, while instance variables are used to store data that is
specific to each instance of a class.
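A minimal sketch contrasting the two (the Counter class here is illustrative):

```java
class Counter {
    // Static variable: shared by all Counter objects, initialized once
    // when the class is loaded
    static int instancesCreated = 0;

    // Instance variable: unique to each Counter object
    final int id;

    Counter() {
        instancesCreated++;
        this.id = instancesCreated;
    }
}
```

Creating two Counter objects gives each its own id, while instancesCreated reflects the running total across all instances of the class.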
class A { }
class B extends A { }
A a = new A();
B b = new B();
a = b; // valid
In the above example, the variable "a" is of type A, and the variable "b" is
of type B. However, the assignment "a = b" is valid, because B is a
subclass of A.
class A { }
class B extends A { }

class C {
    A getA() {
        return new A();
    }
}

class D extends C {
    @Override
    B getA() { // covariant return type
        return new B();
    }
}

In the above example, the method getA() in class C returns an object of type A. Because B is a subclass of A, the overriding getA() in class D can narrow the return type and return a B instead of an A; this is known as a covariant return type.
However, classes that implement the Serializable interface gain the ability
to be serialized and deserialized, which is the behavior that the
Serializable marker interface indicates.
The overriding method can throw the same exceptions as the overridden
method, or any subset of those exceptions.
The overriding method can also throw unchecked exceptions, even if the
overridden method does not.
The overriding method cannot throw checked exceptions that are not in
the same class hierarchy as the exceptions thrown by the overridden
method. This means that the overriding method cannot throw checked
exceptions that are more general than those thrown by the overridden
method. However, it can throw more specific checked exceptions or
unchecked exceptions.
If the overridden method does not throw any exceptions, the overriding
method cannot throw checked exceptions.
void foo() throws IOException { }           // in the superclass

// Possible overrides in a subclass:
void foo() throws IOException { }           // 1. valid: same exception
void foo() throws FileNotFoundException { } // 2. valid: subclass of IOException
void foo() throws RuntimeException { }      // 3. valid: unchecked exception
void foo() throws SQLException { }          // 4. not valid
void foo() { }                              // 5. valid: no exceptions

The fourth override is not valid, since SQLException is not in the same class hierarchy as IOException.
The fifth override is valid, since it does not throw any exceptions.
public: The public access modifier is the most permissive access level,
and it allows access to a class, method, or variable from any other class,
regardless of whether they are in the same package or not.
private: The private access modifier is the most restrictive access level,
and it allows access to a class, method, or variable only from within the
same class. It cannot be accessed from any other class, even if they are
in the same package.
public class MyClass {
    public int publicVar;
    protected int protectedVar;
    int defaultVar;
    private int privateVar;

    public void publicMethod() { }
    protected void protectedMethod() { }
    void defaultMethod() { }
    private void privateMethod() { }
}
In this example, the MyClass class has four instance variables and four
instance methods, each with a different access modifier. The publicVar
and publicMethod() can be accessed from any other class, while
protectedVar and protectedMethod() can be accessed from any subclass
and from within the same package. defaultVar and defaultMethod() can be
accessed from within the same package only, while privateVar and
privateMethod() can be accessed only from within the same class.
Visibility: Private members are only visible within the same class, while
protected members are visible within the same class and its subclasses,
as well as within the same package.
The protected access modifier is useful when you want to expose certain
methods or variables to subclasses, while still hiding them from other
classes in the same package or outside the package. For example, you
might have a superclass with some variables and methods that are not
intended to be used outside the class or package, but should be accessible
to subclasses. In this case, you can declare those variables and methods
as protected.
package com.example.package1;

public class Superclass {
    protected int protectedVar;

    protected void protectedMethod() {
        // ...
    }
}

package com.example.package2;

import com.example.package1.Superclass;

public class Subclass extends Superclass {
    void useParentMembers() {
        protectedVar = 42;     // accessible: inherited protected member
        protectedMethod();     // accessible from a subclass in another package
    }
}
Even from the questions alone you will get an idea of which topics are important and which are not. As an interviewee, you should focus on the hot topics to prepare better.
• String
• Collection framework (HashMap, Concurrent HashMap)
• Concepts of immutability
• Exception
• Serialization
• Garbage collectors
• Multithreading
• Executor Framework
• Lambdas
• Stream
• Java 8 all new features
• Optional
• Functional interface
Here are some key differences between static and default methods in Java
interfaces:
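Since the list of differences is not reproduced here, a brief illustrative sketch (the Greeter interface is an assumption): a default method is inherited by implementing classes and invoked on an instance, while a static method belongs to the interface itself and is not inherited.

```java
interface Greeter {
    // Default method: inherited by implementing classes, can be overridden,
    // and is called on an instance.
    default String greet() {
        return "Hello from " + name();
    }

    // Static method: belongs to the interface itself, is not inherited,
    // and must be called as Greeter.defaultName().
    static String defaultName() {
        return "Greeter";
    }

    String name();
}

class EnglishGreeter implements Greeter {
    public String name() {
        return "English";
    }
}
```

Calling `new EnglishGreeter().greet()` uses the inherited default method, whereas `Greeter.defaultName()` must be qualified with the interface name.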
MyClass myObject = new MyClass();

In this example, MyClass is the class and myObject is the object that is created using the new keyword and a call to the constructor of the class, MyClass(). Once an object is created, you can use it to call methods and access fields of the class.
It's worth mentioning that when an object is created, the memory for the
object is allocated on the heap, and the object's constructor is called to
initialize the object's state. When the object is no longer being used, the
memory for the object is reclaimed by the garbage collector.
public MyClass(String name, int age) {
    this.name = name;
    this.age = age;
}

public String getName() {
    return name;
}

public int getAge() {
    return age;
}
class MyClass {
    private MyClass() {
        // constructor implementation
    }
}
class MySingleton {
    private static MySingleton instance;

    private MySingleton() {
        // constructor implementation
    }

    public static MySingleton getInstance() {
        if (instance == null) {
            instance = new MySingleton();
        }
        return instance;
    }
}
interface MyInterface {
    // interface methods
}

class MyClass implements MyInterface {
    // class implementation
}
To use the custom class loader, you can create an instance of the
MyClassLoader class and use the loadClass method to load the class:
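Since the MyClassLoader definition is not shown above, the sketch below assumes a loader that reads .class bytes from a base directory; the directory and class names used are hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Assumed definition: a custom class loader that loads class bytes from disk.
class MyClassLoader extends ClassLoader {
    private final Path baseDir;

    MyClassLoader(Path baseDir) {
        this.baseDir = baseDir;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        try {
            // Map com.example.MyClass -> <baseDir>/com/example/MyClass.class
            Path classFile = baseDir.resolve(name.replace('.', '/') + ".class");
            byte[] bytes = Files.readAllBytes(classFile);
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}
```

Usage (with hypothetical paths): `new MyClassLoader(Paths.get("build/classes")).loadClass("com.example.MyClass")`. Note that loadClass first delegates to the parent loader and only calls findClass if the parent cannot locate the class.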
It's not typical to declare the main method without the static keyword, as that is not the standard way of running a program, and it will cause confusion among developers.
interface A {
    default void method1() {
        System.out.println("A's method1");
    }
}

interface B extends A {
    public void method2();
}

interface C extends A {
    public void method3();
}

class D implements B, C {
    // implementation
}
Here, class D inherits from both interfaces B and C, which both extend interface A. If A defines a default method named method1, it is unclear which inherited version class D should use when B and C each provide their own override. The ambiguity is resolved by overriding method1 in D and delegating explicitly:
class D implements B, C {
    public void method1() {
        // Choose one parent's default implementation explicitly.
        // The qualifier must be a direct superinterface that provides
        // the default method; the syntax is InterfaceName.super.method().
        B.super.method1();
    }
    // implementation of method2() and method3() omitted
}
class IntWrapper {
    private int value;

    public IntWrapper(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    public void setValue(int value) {
        this.value = value;
    }

    public void increment() {
        value++;
    }

    public void decrement() {
        value--;
    }

    @Override
    public String toString() {
        return Integer.toString(value);
    }
}
In the above example, the IntWrapper class wraps an int variable, called
"value", and provides additional functionality such as increment() and
decrement() methods that can be used to increment or decrement the
value, and a toString() method that returns a string representation of the
value.
IntWrapper iw = new IntWrapper(5);
System.out.println(iw.getValue()); // 5
iw.increment();
System.out.println(iw.getValue()); // 6
System.out.println(iw); // 6
It's worth noting that Java provides wrapper classes for all of the primitive data types: int, char, double, boolean, etc. These classes are called Integer, Character, Double, Boolean, and so on. They provide additional functionality, such as the parsing methods parseInt and parseDouble.
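For instance, the parsing methods and autoboxing can be used like this:

```java
public class WrapperParseDemo {
    public static void main(String[] args) {
        // Parse strings into primitive values using the wrapper classes
        int i = Integer.parseInt("42");
        double d = Double.parseDouble("3.14");
        boolean b = Boolean.parseBoolean("true");

        // Autoboxing: the primitive int is wrapped into an Integer automatically
        Integer boxed = i;

        System.out.println(i + " " + d + " " + b + " " + boxed);
    }
}
```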
When a key-value pair is retrieved from the HashMap, the hash function is
used to calculate the hash code of the key, which is used to determine the
index of the bucket in the array. The linked list of Entry objects at that
index is then searched for the key-value pair that has the same key as the
one being retrieved.
The HashMap uses an array and linked list, which allows for constant-time
O(1) performance for basic operations like put() and get() in an average
case, and O(n) in the worst case when there's a high number of collisions.
The load factor is a metric that determines when the HashMap should resize the array to maintain good performance. The default load factor is 0.75, meaning the HashMap resizes once it is 75 percent full.
Key Components:
put(key, value):
• Calculate the hashcode of the key.
• Use the hash function to find the corresponding index in the hash
table.
• If no nodes exist at that index, add a new node with the key-value
pair.
• If a collision occurs:
1. Chaining: Add the new node to the linked list at the index.
2. Open addressing: Probe for an empty slot nearby using a
specific strategy.
get(key):
• Calculate the hashcode of the key.
• Use the hash function to find the corresponding index in the hash
table.
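The put and get steps above can be sketched with a toy chaining hash table. This is an illustration only, not HashMap's actual implementation (the real HashMap also resizes based on the load factor and, since Java 8, converts long chains into trees).

```java
import java.util.LinkedList;

// A toy hash table using chaining to resolve collisions.
// Fixed capacity for simplicity; a real HashMap resizes itself.
class SimpleHashMap<K, V> {
    private static class Node<K, V> {
        final K key;
        V value;
        Node(K key, V value) { this.key = key; this.value = value; }
    }

    private final LinkedList<Node<K, V>>[] buckets;

    @SuppressWarnings("unchecked")
    SimpleHashMap(int capacity) {
        buckets = new LinkedList[capacity];
        for (int i = 0; i < capacity; i++) {
            buckets[i] = new LinkedList<>();
        }
    }

    // Hash function: map the key's hashcode to a bucket index.
    private int indexFor(K key) {
        return Math.abs(key.hashCode() % buckets.length);
    }

    public void put(K key, V value) {
        LinkedList<Node<K, V>> bucket = buckets[indexFor(key)];
        for (Node<K, V> node : bucket) {
            if (node.key.equals(key)) { // key already present: update value
                node.value = value;
                return;
            }
        }
        bucket.add(new Node<>(key, value)); // empty slot or collision: chain
    }

    public V get(K key) {
        // Search the linked list at the computed index for a matching key.
        for (Node<K, V> node : buckets[indexFor(key)]) {
            if (node.key.equals(key)) {
                return node.value;
            }
        }
        return null;
    }
}
```

Two keys that land in the same bucket simply end up on the same chain, which is why a high number of collisions degrades get() from O(1) toward O(n).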
The following classes are eligible to be used inside the resource block:
Any class that implements the AutoCloseable interface, such as:
FileInputStream, FileOutputStream, BufferedReader, BufferedWriter,
Scanner, PrintWriter, Connection, Statement.
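For example, a BufferedReader declared in the resource block is closed automatically when the try block exits; here it reads from an in-memory StringReader to keep the sketch self-contained.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

public class TryWithResourcesDemo {
    public static void main(String[] args) throws IOException {
        // The reader is closed automatically when the try block exits,
        // even if an exception is thrown inside it.
        try (BufferedReader reader = new BufferedReader(new StringReader("hello"))) {
            System.out.println(reader.readLine());
        }
    }
}
```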
When an element is retrieved from the HashSet, the hash code of the element is calculated and used to look up the corresponding entry in the underlying HashMap.
The HashSet class uses the equals() method to compare the elements for
equality. When an element is added to the HashSet, the HashSet calls the
equals() method to check if the element is already present in the HashSet.
If the element is already present, it is not added to the HashSet,
otherwise, it is added to the HashSet.
The HashSet class also uses the hashCode() method to determine the
position of an element in the underlying HashMap. The hashCode()
method returns an integer value, which is used as the index of the
element in the HashMap.
It's also worth mentioning that HashSet is not thread-safe. If multiple threads access a HashSet at the same time, it's necessary to use synchronization or a thread-safe alternative, such as the set returned by ConcurrentHashMap.newKeySet() or Collections.synchronizedSet().
Null Keys and Values: Hashtable does not allow null keys or values, it will
throw NullPointerException if you try to insert a null key or value into a
Hashtable. While HashMap allows one null key and multiple null values.
Legacy: Hashtable is a legacy class that was introduced in the first version of Java, while HashMap was added in Java 1.2 as part of the Collections Framework and ConcurrentHashMap was introduced in Java 5.
When a null key is used in a HashMap, the hashCode() method is not invoked at all (calling it on null would throw a NullPointerException). Instead, the HashMap treats null as a special case and stores the entry in bucket 0. Since keys must be unique, only one null key can exist in the map.
Hashtable, by contrast, calls hashCode() on the key directly, so attempting to insert a null key (or a null value) throws a NullPointerException, as noted above.
It's important to note that even though a null key is allowed in a HashMap, using one is not good practice, since it can lead to unexpected behavior and errors. Instead, it's recommended to use a sentinel value or a special object as the key to represent the absence of a key, and handle it in a specific way.
Map<String, String> map = new HashMap<>();
map.put("key1", "value1");
map = Collections.unmodifiableMap(map);
Map<String, String> map = Map.ofEntries(
    Map.entry("key1", "value1"),
    Map.entry("key2", "value2")
);
Using the ImmutableMap.Builder class from Guava library: You can also
use the ImmutableMap.Builder class to build an immutable map by adding
key-value pairs to it.
ImmutableMap<String, String> map = ImmutableMap.<String, String>builder()
    .put("key1", "value1")
    .put("key2", "value2")
    .build();
All of the above methods will create an immutable map, which means that
any attempt to modify the map will result in an exception being thrown.
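The "exception being thrown" is UnsupportedOperationException, which a short sketch can confirm (class name ImmutableMapDemo is illustrative):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class ImmutableMapDemo {
    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("key1", "value1");
        Map<String, String> immutable = Collections.unmodifiableMap(map);
        try {
            immutable.put("key2", "value2"); // any mutation is rejected
        } catch (UnsupportedOperationException e) {
            System.out.println("modification rejected");
        }
    }
}
```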
Integer, Long, Short, Byte, Character, Double, Float, Boolean: These are
the wrapper classes for the primitive types int, long, short, byte, char,
double, float and boolean. They are immutable: once an object of one of
these classes is created, its value cannot be changed.
Enum: Enum types are classes that represent a fixed set of named
constants. Enum instances are also immutable; once created, their value
cannot be changed.
Note that although these classes are immutable, a field of such a type
can still be reassigned to point at a different object. To prevent that,
declare the field private and final.
There are several types of indexes that can be used in a database, such
as:
Primary key index: This is a unique index that is used to enforce the
integrity of the primary key constraint.
First, you will need to add the Jackson library to your project. You can do
this by adding the following dependency to your build file:
<dependency>
<groupId>com.fasterxml.jackson.dataformat</groupId>
<artifactId>jackson-dataformat-xml</artifactId>
<version>2.11.3</version>
</dependency>
Next, you will need to read the XML file and convert it to an object using
the XmlMapper class.
Finally, you can convert the JsonNode object to a JSON string using the
writeValueAsString() method.
It's important to note that this is not a direct XML-to-JSON conversion:
the XML is first parsed into a JsonNode tree, and that tree is then
serialized to a JSON string, which is why the method used is
writeValueAsString().
It's also worth mentioning that some of the information from the XML file
may not be preserved during this process, such as comments, processing
instructions, and text nodes that do not have a corresponding element.
What is the object class in Java? what are the methods in it?
The Object class (java.lang.Object) is the root of the Java class
hierarchy: every class implicitly extends it, so the methods it defines
are available on every object. The most commonly used Object methods are:
equals(Object obj): This method compares the current object with the
specified object and returns true if they are equal, else it returns
false. The default implementation compares references.
hashCode(): This method returns the hash code of the object, an int used
by hash-based data structures such as HashMap, HashSet, etc. Hash codes
are not required to be unique, but equal objects must return equal hash
codes.
toString(): This method returns a string representation of the object;
the default is the class name followed by the hash code in hexadecimal.
clone(): This method creates a new object that is a copy of the current
object. The clone() method is used to create a copy of an object without
modifying the original object.
getClass(): This method returns the class of the current object. It is used
to get the runtime class of the object.
wait(), notify(), notifyAll(): These methods coordinate threads. wait()
causes the current thread to release the object's monitor and wait until
another thread calls notify() or notifyAll() on the same object. They are
generally used in multi-threaded applications.
finalize(): This method is called by the garbage collector before the
object's memory is reclaimed. It has been deprecated since Java 9 and
should not be relied on.
These are the commonly used methods defined by the Object class. Because
every class inherits them, they can be overridden to customize equality,
hashing, and string representation; see the java.lang.Object
documentation for the full list.
Here are some reasons why you may want to use the clone() method:
To create a copy of an object: clone() produces a new object with the
same field values as the original, so the copy can be modified without
affecting the original object.
Inside the clone() method, call the super.clone() method to create a copy
of the object.
Here's an example that demonstrates how to use the clone() method:
class MyObject implements Cloneable {
    private int value;
    MyObject(int value) { this.value = value; }
    void setValue(int value) { this.value = value; }
    @Override
    protected Object clone() throws CloneNotSupportedException {
        return super.clone();
    }
}
MyObject obj1 = new MyObject(10);
MyObject obj2 = (MyObject) obj1.clone();
obj2.setValue(20); // obj1 still holds 10
Gson: Gson is another popular JSON parser for Java. It can parse JSON
data into Java objects and vice versa. It also provides support for custom
serialization and deserialization of Java objects.
These are some of the popular parsing libraries in Java that can be used
to parse various types of data. The choice of a parsing library depends on
the specific requirements and the data format being parsed.
1. Comparable:
A class defines its natural ordering by implementing Comparable<T> and
overriding its compareTo() method, for example ordering Person by age:
@Override
public int compareTo(Person other) {
    return Integer.compare(this.age, other.age);
}
2. Comparator:
For instance, let's say you want to sort instances of the Person class not
only by age but also by name. You can create a separate
NameComparator class that implements the Comparator<Person>
interface:
import java.util.Comparator;

class NameComparator implements Comparator<Person> {
    @Override
    public int compare(Person person1, Person person2) {
        return person1.getName().compareTo(person2.getName());
    }
}
A classpath error (typically a ClassNotFoundException or
NoClassDefFoundError) occurs when the JVM is unable to find the required
class file in the specified classpath locations. This can happen for
several reasons, such as:
Missing class files: The required class file may be missing or may have
been moved or deleted from the Classpath.
Permissions issues: The user running the application may not have
sufficient permissions to access the required Classpath locations.
Verify the Classpath: Verify that the Classpath specified is correct and
includes the required directory or JAR file.
Check for missing class files: Check if the required class file is missing or
has been moved or deleted.
Check permissions: Ensure that the user running the application has
sufficient permissions to access the required Classpath locations.
Throwable
├── Error
│ ├── AssertionError
│ ├── OutOfMemoryError
│ └── StackOverflowError
└── Exception
├── RuntimeException
│ ├── NullPointerException
│ ├── IndexOutOfBoundsException
│ ├── IllegalArgumentException
│ ├── IllegalStateException
│ └── ArithmeticException
├── IOException
├── SQLException
└── ClassNotFoundException
The "Throwable" class is the parent class of all exceptions and errors in
Java. It is the superclass of all classes that can be thrown by the Java
Virtual Machine (JVM) or the user. The Throwable class defines several
methods that can be used to print the stack trace of an exception, get the
message of an exception, and get the cause of an exception.
In summary, the "throw" keyword is used to throw an exception, the
"throws" keyword declares that a method may throw an exception, and
"Throwable" is the base class of all exceptions and errors in Java.
A String object can be created using a string literal or by using the new
keyword and a constructor. For example:
String s1 = "hello";              // literal, placed in the string pool
String s2 = new String("hello");  // constructor, always a new heap object
The String class provides many useful methods for manipulating strings,
such as charAt(), indexOf(), substring(), toUpperCase(), toLowerCase(),
trim(), length(), and many others.
String objects are also widely used in Java as parameters to method calls,
in concatenation operations, and as return values from methods.
Strings are often used to store sensitive information like passwords, URLs,
and database connection strings. Immutability makes them tamper-proof,
preventing accidental or malicious modification of sensitive data. This
enhances security and reduces the risk of vulnerabilities.
Java caches String literals in a String pool for efficient reuse. If Strings
were mutable, changes to one String could affect other Strings using the
same literal, leading to unpredictable behavior. Immutability ensures that
String values remain consistent and predictable, even when shared across
multiple references.
4. Class Loading:
Java uses String objects for class names and resource paths. Immutability
ensures that these references remain stable and reliable throughout the
application's lifecycle, preventing issues with class loading and resource
access.
6. Substring Operations:
For scenarios where you need to modify String content frequently, Java
offers mutable alternatives: StringBuilder (for non-thread-safe operations)
and StringBuffer (for thread-safe operations). These classes are designed
for efficient String manipulation and modification.
When you create a String using a literal (e.g., String str = "Hello";), Java
checks the String pool first.
If an identical String already exists in the pool, it reuses that object for
efficiency.
This means multiple variables can refer to the same String object in
memory, saving space.
2. Heap Memory:
If the String literal doesn't exist in the pool, a new String object is created
and stored in the heap memory.
When you create a String using the new keyword: String str = new
String("Hello");
3. String Interning:
You can explicitly place a String in the pool using the intern() method:
String internedStr = "Hello".intern();
This ensures that all String variables with the same content refer to the
same object in the pool, even if they were created with new.
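The pooling and interning behavior described above can be verified with reference comparisons (class name InternDemo is illustrative):

```java
public class InternDemo {
    public static void main(String[] args) {
        String a = "Hello";                  // pooled literal
        String b = new String("Hello");      // distinct heap object
        System.out.println(a == b);          // false: different objects
        System.out.println(a == b.intern()); // true: intern() returns the pooled instance
        System.out.println(a.equals(b));     // true: same contents
    }
}
```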
Where do strings get stored and where does the reference get
stored?
Strings in Java are stored in the heap memory. The heap memory is a
region of memory that is used to store objects. When you create a string
object, the Java Virtual Machine (JVM) allocates space for the object in the
heap memory.
The reference to the string object is stored on the stack. The stack is a
region of memory that is used to store local variables and method
parameters. When you assign a string object to a variable, the JVM stores
the reference to the object in the stack memory.
The JVM manages the heap memory and the stack memory automatically.
This means that you do not need to worry about allocating or freeing
memory for string objects.
If you don’t want to use the String class then what is the alternative?
If you don't want to use the String class in Java, you can use the following
alternatives:
To create a custom immutable string class, you can follow these steps:
Define a private final field of type String to store the string value.
Do not provide any setter methods that can modify the value of the
private field.
Override the toString() method to return the value of the private field.
this.value = value;
return value;
if (this == obj) {
return true;
return false;
return Objects.hash(value);
By following this approach, you can create a custom class that behaves
similarly to the String class but is immutable.
In summary:
Therefore, String is best used for situations where the string value will not
change frequently, while StringBuffer or StringBuilder should be used for
situations where frequent string manipulations are required. If the code is
running in a multi-threaded environment, StringBuffer should be used to
avoid concurrency issues. If the code is running in a single-threaded
environment, StringBuilder can be used for even better performance.
append()
insert()
delete()
deleteCharAt()
replace()
substring()
charAt()
setCharAt()
length()
capacity()
ensureCapacity()
trimToSize()
toString()
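A few of the methods listed above in action (class name BuilderDemo is illustrative; StringBuilder shares the same API):

```java
public class BuilderDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder("Hello");
        sb.append(", ").append("world"); // append() returns the builder, so calls chain
        sb.insert(0, ">> ");             // insert at an arbitrary position
        System.out.println(sb.toString()); // >> Hello, world
        System.out.println(sb.length());   // 15
    }
}
```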
It's worth noting that in Java 5, a new class called StringBuilder was
introduced, which is similar to StringBuffer but is not synchronized. If you
do not need thread-safety, you can use StringBuilder instead of
StringBuffer, as it can be faster in some cases. However, if you need to
access a mutable string from multiple threads concurrently, you should
use StringBuffer to ensure thread-safety.
One common issue with the substring() method is that it creates a new
string object every time it is called. This can lead to performance
problems if it is called repeatedly in a loop or in performance-critical code.
To avoid this, you can use the StringBuilder or StringBuffer classes to
build a string gradually instead of using substring().
It's also worth knowing how substring() is implemented. In Java 6 and
earlier, the returned string shared the original string's backing
character array, which could cause memory leaks: a tiny substring could
keep a huge original array alive. Since Java 7 update 6, substring()
copies the required characters into a new array, so this problem no
longer exists. In no version can modifying a substring affect the
original, because String objects are immutable.
try {
    // code that may throw
} catch (NullPointerException e) {
    // handle the null reference
} catch (ArrayIndexOutOfBoundsException e) {
    // handle the bad index
} catch (Exception e) {
    // catch-all; must come after the more specific handlers
}
Set: This interface also extends the Collection interface, but it represents
a collection of unique elements. Set does not allow duplicates and
provides methods for testing whether a particular element is present in
the set.
There are also several classes in the Collection Framework that provide
implementations of the various interfaces, such as ArrayList and
LinkedList for the List interface, HashSet and TreeSet for the Set
interface, and HashMap and TreeMap for the Map interface. These classes
provide different performance characteristics and are designed for
different use cases.
Both syntaxes create an instance of the ArrayList class in Java, but they
differ in the type of reference variable that is used to store the reference
to the object.
In general, it's recommended to use the first syntax (List list = new
ArrayList<>();) to create instances of collection classes in Java, unless
there is a specific reason to use the implementation class directly
(ArrayList alist = new ArrayList<>();). This allows for greater flexibility
and maintainability of the code.
hasNext(): Returns true if there are more elements in the collection, and
false otherwise.
List<String> list = new ArrayList<>();
list.add("apple");
list.add("banana");
list.add("cherry");
Iterator<String> iterator = list.iterator();
while (iterator.hasNext()) {
    String element = iterator.next();
    System.out.println(element);
}
This code creates an ArrayList of strings, adds some elements to it, and
then creates an Iterator for the list. The while loop uses the hasNext() and
next() methods of the Iterator to iterate over the elements in the list and
print them to the console.
During rehashing, a new internal array is created with twice the capacity
of the original array. Each key-value pair from the old array is then
hashed again and added to the new array at a new index, based on the
new array size and the hash code of the key.
class Person {
    private final String name;
    private final int age;
    Person(String name, int age) { this.name = name; this.age = age; }
    @Override public int hashCode() { return Objects.hash(name, age); }
    @Override public boolean equals(Object o) {
        return o instanceof Person && ((Person) o).name.equals(name)
                && ((Person) o).age == age;
    }
}
Person john = new Person("John", 30);
Person sarah = new Person("Sarah", 25);
Map<Person, String> people = new HashMap<>();
people.put(john, "555-1234");
people.put(sarah, "555-5678");
Set<String> hashSet = new HashSet<>();
hashSet.add("apple");
hashSet.add("banana");
hashSet.add("orange");

Set<String> treeSet = new TreeSet<>();
treeSet.add("apple");
treeSet.add("banana");
treeSet.add("orange");
In general, you should choose a HashSet when you don't care about the
order of the elements and need fast performance for basic operations, and
a TreeSet when you need to maintain a specific ordering of the elements
or perform range queries over the elements.
Set<String> set = new HashSet<>();
set.add("apple");
set.add("banana");
set.add("orange");
Iterator<String> iterator = set.iterator();
while (iterator.hasNext()) {
    String value = iterator.next();
    System.out.println(value);
}
Alternatively, you can use a for-each loop to iterate over the elements of
the HashSet:
Set<String> set = new HashSet<>();
set.add("apple");
set.add("banana");
set.add("orange");
for (String value : set) {
    System.out.println(value);
}
The first line of code is not valid in Java, as List is an interface and cannot
be directly instantiated.
The second line of code creates an ArrayList object and assigns it to a List
reference variable:
This code creates an empty ArrayList that can store objects of any type,
and assigns it to the ls reference variable of type List. This is a common
practice in Java, as it allows for greater flexibility in the code, since you
can switch to a different List implementation (such as LinkedList) without
changing the rest of the code.
List<String> arrayList = new ArrayList<>();
arrayList.add("one");
arrayList.add("two");
arrayList.add("three");
arrayList.add("four");

List<String> linkedList = new LinkedList<>();
linkedList.add("one");
linkedList.add("two");
linkedList.add("three");
linkedList.add("four");

for (String element : arrayList) {
    System.out.println(element);
}
for (String element : linkedList) {
    System.out.println(element);
}
Duplicates: Set does not allow duplicate elements, while List does. If you
try to add a duplicate element to a Set, it will not be added, while in a List
it will be added as a new element.
Order: List maintains the order of elements as they are added to the list,
while Set does not guarantee any specific order of elements.
Iteration: Both List and Set provide ways to iterate over their elements,
but the order of iteration is guaranteed for List and not for Set.
List<String> list = new ArrayList<>();
list.add("one");
list.add("two");
list.add("three");
list.add("two");

Set<String> set = new HashSet<>();
set.add("one");
set.add("two");
set.add("three");
set.add("two");
In this example, we create a List and a Set with the same elements and
add a duplicate element ("two") to both collections. When we print the
collections, we see that the List contains the duplicate element, while
the Set silently ignores it and keeps only one occurrence.
Overall, you should use a List when you need to maintain the order of
elements and allow duplicates, and a Set when you don't care about the
order of elements and need to ensure that there are no duplicates.
Ordering: Neither HashMap nor HashSet guarantees any ordering of its
elements. If you need to preserve insertion order, use LinkedHashMap or
LinkedHashSet; if you need sorted order, use TreeMap or TreeSet.
Map<String, Integer> hashMap = new HashMap<>();
hashMap.put("one", 1);
hashMap.put("two", 2);
hashMap.put("three", 3);

Set<Integer> hashSet = new HashSet<>();
hashSet.add(1);
hashSet.add(2);
hashSet.add(3);
Overall, you should use HashMap when you need to associate a value with
a key and allow duplicate values, and use HashSet when you need to
store unique values and don't need to associate them with keys.
HashMap uses a hash table to store its entries. A hash table is a data
structure that maps keys to values by using a hash function to convert
each key to a unique index. This allows HashMap to quickly find the value
for a given key.
Internally, a LinkedHashMap combines two structures: a hash table, which
gives constant-time lookup by key, and a doubly-linked list threaded
through all of its entries, which records the order in which they were
inserted.
Map<String, Integer> linkedHashMap = new LinkedHashMap<>();
linkedHashMap.put("one", 1);
linkedHashMap.put("two", 2);
linkedHashMap.put("three", 3);
Queue<Integer> priorityQueue = new PriorityQueue<>();
priorityQueue.add(3);
priorityQueue.add(2);
Overall, you should use LinkedHashMap when you need to maintain the
insertion order of elements and access them by key, and use PriorityQueue
when you need elements ordered by priority: its head is always the
smallest element according to natural ordering (or a supplied
Comparator), regardless of insertion order.
@Override
public int hashCode() {
    int result = 1;
    result = 31 * result + name.hashCode();
    result = 31 * result + age;
    return result;
}
The return type of hashCode() is int. The returned hash code should
ideally be unique for each object, but it is not required to be unique.
Instead, it should be consistent with the equals() method of the class,
which is used to determine if two objects are equal. If two objects are
equal according to equals(), they should have the same hash code. If two
objects are not equal according to equals(), they can have the same hash
code (this is called a hash code collision), but this can decrease the
performance of hash-based data structures.
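The equals/hashCode contract can be checked directly: two logically equal objects must report the same hash code (class name HashContractDemo and the Point class are illustrative):

```java
import java.util.Objects;

public class HashContractDemo {
    static class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
        @Override public boolean equals(Object o) {
            return o instanceof Point && ((Point) o).x == x && ((Point) o).y == y;
        }
        @Override public int hashCode() { return Objects.hash(x, y); }
    }

    public static void main(String[] args) {
        Point a = new Point(1, 2);
        Point b = new Point(1, 2);
        System.out.println(a.equals(b));                  // true
        System.out.println(a.hashCode() == b.hashCode()); // true: required by the contract
    }
}
```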
What is ConcurrentHashmap?
ConcurrentHashmap is a thread-safe implementation of the Map interface
in Java. It was introduced in Java 5 to provide a high-performance,
scalable, and concurrent hash table that can be used in multi-threaded
environments.
Map<String, Integer> map = new ConcurrentHashMap<>();
map.put("one", 1);
map.put("two", 2);
map.put("three", 3);
System.out.println(map.get("one")); // prints 1
map.remove("two");
Note that ConcurrentHashMap does not provide any guarantees about the
order in which elements are inserted or accessed, so if you need to
maintain ordering, you should use a different type of map, such as
LinkedHashMap.
Access Modifiers: Static methods in a class can have any access modifier
(public, private, protected, or package-private). Default methods, being
interface members, are implicitly public and cannot have any other
access modifier.
Use case: Static methods are useful when you want to provide utility
methods that are not tied to any particular instance of a class. Default
methods are useful when you want to provide a default implementation
for a method that is common to all classes that implement the interface,
but can be overridden if needed.
It's important to note that ConcurrentHashMap does not allow null keys
or null values at all: inserting either throws a NullPointerException.
This is deliberate; in a concurrent map, get() returning null must
unambiguously mean "key absent", which would be impossible if null
values were permitted.
synchronized (list) {
    list.add("item");
    list.remove("item");
}
With a concurrent collection such as CopyOnWriteArrayList, the same
operations are safe without any external synchronization:
list.add("item");
list.remove("item");
Iterator<String> it = list.iterator();
while (it.hasNext()) {
    String item = it.next();
    if (item.equals("item")) {
        it.remove();
    }
}
Using a for-each loop: iterating over the collection with a for-each
loop and modifying it directly inside the loop will throw a
ConcurrentModificationException:
for (String item : list) {
    if (item.equals("item")) {
        list.remove(item); // throws ConcurrentModificationException
    }
}
It's important to note that all of the above solutions are based on the
collection type, the number of threads and the read-write operations that
you are going to perform. The best solution is the one that fits the best
with the project requirements.
What is serialization?
Serialization is the process of converting an object into a stream of bytes
that can be stored or transmitted over a network, and later reconstructed
to create a new object with the same properties as the original. The
process of serialization is commonly used in Java for data persistence,
inter-process communication, and distributed computing.
import java.io.*;

class Person implements Serializable {
    private final String name;
    private final int age;
    Person(String name, int age) { this.name = name; this.age = age; }
    public String getName() { return name; }
    public int getAge() { return age; }
}

// Serializing a Person instance to a file:
try {
    FileOutputStream fileOut = new FileOutputStream("person.ser");
    ObjectOutputStream out = new ObjectOutputStream(fileOut);
    out.writeObject(person);
    out.close();
    fileOut.close();
} catch (IOException e) {
    e.printStackTrace();
}
Serialization is a powerful tool in Java, but it also has some limitations and
potential drawbacks. Serialized objects can take up a lot of disk space or
network bandwidth, and they may not be compatible with different
versions of the same class. It's important to carefully design and test
serialization code to ensure that it meets the requirements of the
application.
Deep Cloning: Object streams can also be used to create deep clones of
objects. By serializing and deserializing an object, we can create a new
object with the same state, but with new references to all of its fields.
Overall, object streams are a powerful and flexible tool in Java, with many
potential use cases beyond just serialization.
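The deep-cloning idea can be sketched with in-memory streams: serialize the object to a byte array, then deserialize a fully independent copy (class and method names are illustrative):

```java
import java.io.*;
import java.util.ArrayList;
import java.util.Arrays;

public class DeepCloneDemo {
    @SuppressWarnings("unchecked")
    static <T extends Serializable> T deepClone(T obj) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(obj); // serialize the whole object graph
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()))) {
            return (T) in.readObject(); // rebuild it as a new, independent graph
        }
    }

    public static void main(String[] args) throws Exception {
        ArrayList<String> original = new ArrayList<>(Arrays.asList("a", "b"));
        ArrayList<String> copy = deepClone(original);
        copy.add("c");
        System.out.println(original); // [a, b] — unaffected by changes to the copy
        System.out.println(copy);     // [a, b, c]
    }
}
```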
ArrayList:
Random access is fast since array provides constant time for get and set
operations.
Insertions and deletions are slow since arrays are of fixed size and when
an element is inserted or deleted, all the elements after the insertion or
deletion point have to be shifted.
It can be used when the number of read operations are more than the
write operations.
LinkedList:
Each element in a linked list contains a reference to the next and previous
element.
Insertions and deletions are faster since only the references need to be
updated.
Random access is slow since it requires traversing the linked list starting
from the head or tail.
It can be used when the number of write operations are more than the
read operations.
It's important to note that, the choice between ArrayList and LinkedList
depends on the use case and the operations that will be performed on the
collection. If you need fast random access, go for ArrayList, if you need
fast insertions and deletions and don't mind slower random access, go for
LinkedList.
Also, it's worth mentioning that, LinkedList also implements the Deque
interface and can be used as a double-ended queue, while ArrayList
doesn't have that capability.
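The Deque capability mentioned above looks like this in practice (class name DequeDemo is illustrative):

```java
import java.util.Deque;
import java.util.LinkedList;

public class DequeDemo {
    public static void main(String[] args) {
        Deque<String> deque = new LinkedList<>(); // LinkedList implements Deque
        deque.addFirst("first");                  // O(1) insertion at the head
        deque.addLast("last");                    // O(1) insertion at the tail
        System.out.println(deque.peekFirst()); // first
        System.out.println(deque.peekLast());  // last
    }
}
```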
The garbage collector works by identifying all objects that are no longer
being used, marking them as garbage, and then reclaiming the memory
occupied by those objects. The garbage collector uses various algorithms
to identify and reclaim memory, such as reference counting, mark-and-
sweep, and copying.
It's worth noting that calling System.gc() unnecessarily can actually have
a negative impact on performance, as it can cause the JVM to spend time
running the garbage collector when it's not necessary. Therefore, it's
generally best to let the JVM manage memory automatically and only use
System.gc() when there is a specific need to force garbage collection,
such as when profiling or debugging a program.
Fail-fast iterators:
ArrayList iterator
HashMap iterator
Fail-safe iterators:
CopyOnWriteArrayList iterator
ConcurrentHashMap iterator
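The difference is observable: a fail-safe iterator such as CopyOnWriteArrayList's works over a snapshot, so modifying the list during iteration does not throw (class name FailSafeDemo is illustrative):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class FailSafeDemo {
    public static void main(String[] args) {
        List<String> list = new CopyOnWriteArrayList<>(new String[]{"a", "b"});
        for (String s : list) {
            list.add("c"); // no ConcurrentModificationException: iterator sees a snapshot
        }
        System.out.println(list.size()); // 4 — two elements were added during iteration
    }
}
```

Doing the same with an ArrayList would throw a ConcurrentModificationException on the next iteration step.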
e.g.
class MyException extends Exception {
    public MyException(String message) {
        super(message);
    }
}
The equals() method in Java is used to compare the logical equality of two
objects. This means that it compares the values of the two objects. If the
two objects have the same values, then the equals() method will return
true, otherwise it will return false.
Primitive values can only be compared with the == operator, which
compares their values directly. For object types, == and equals() are
not equivalent: == returns true only if the two references point to the
same object in memory, while equals() (when properly overridden) returns
true if the two objects have the same values.
The throws declaration in the myMethod() method tells the caller of the
method that it may throw a MyException. The caller of the method must
then handle the exception if it occurs.
class Animal {
    private String name;
    Animal(String name) { this.name = name; }
    public String getName() { return name; }
}

class Dog extends Animal {
    Dog(String name) { super(name); }
    void bark() { System.out.println("Woof!"); }
}
The main advantage of using a thread pool is that it allows multiple tasks
to be executed concurrently by reusing threads from a pool, rather than
creating a new thread for each task. This reduces the overhead of creating
and destroying threads, which can be expensive in terms of memory and
CPU usage.
ExecutorService executor = Executors.newFixedThreadPool(10);
executor.submit(task);
executor.shutdown();
The above code creates a fixed-size thread pool with 10 worker threads.
The Executors.newFixedThreadPool() method takes an integer argument
that specifies the number of worker threads in the thread pool.
Once the thread pool is created, you can submit tasks to be executed by
the worker threads using the Executor.execute() method:
executor.execute(new MyTask());
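Putting it together, a minimal runnable sketch (class name PoolDemo and the lambda tasks are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService executor = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 5; i++) {
            final int id = i;
            // tasks are distributed over the pool's 10 reusable worker threads
            executor.execute(() ->
                System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        executor.shutdown();                           // stop accepting new tasks
        executor.awaitTermination(5, TimeUnit.SECONDS); // wait for submitted tasks
        System.out.println("done");
    }
}
```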
BasicDataSource dataSource = new BasicDataSource();
dataSource.setDriverClassName("com.mysql.jdbc.Driver");
dataSource.setUrl("jdbc:mysql://localhost:3306/mydb");
dataSource.setUsername("user");
dataSource.setPassword("password");
dataSource.setInitialSize(10);
dataSource.setMaxTotal(50);
Once the connection pool is created, you can retrieve a connection from
the pool using the BasicDataSource.getConnection() method.
New: The thread is in the new state when it is first created using the new
Thread() constructor, but before the start() method is called.
Runnable: The thread is in the runnable state when the start() method is
called. It's now eligible to run and can be scheduled by the JVM to
execute.
Waiting: The thread is in the waiting state when it is waiting for another
thread to perform a specific action.
The life cycle of a thread in a thread pool includes the following states:
Idle: The thread is in the idle state when it is first created and is waiting
for a task to be assigned.
jstack <pid>
Analyze the thread dump: Once you have a thread dump, you can analyze
it to identify potential issues such as deadlocks or high CPU usage. Look
for threads that are blocked or waiting, as these can indicate potential
issues, and pay attention to the stack trace of each thread, as it shows
exactly what that thread was doing when the dump was taken.
Identify the root cause: Based on the information gathered from the
thread dump analysis, you can identify the root cause of the issue and
take appropriate action to address it. For example, if you identify a
deadlock, you may need to modify the code to avoid acquiring locks in a
circular order or use a timeout on locks to avoid indefinite blocking.
Overall, thread pools are an important tool for optimizing the performance
and scalability of concurrent programs, and are widely used in Java and
other programming languages.
Test and debug your code: Testing and debugging your code using tools
such as jUnit, JMeter, or debuggers can help you identify and fix potential
issues before they cause deadlocks in production.
By using these techniques, you can prevent deadlocks and ensure the
reliability and performance of your Java program.
CPU usage spikes: A deadlock can cause a spike in CPU usage, as the
program may be using more CPU resources than necessary to execute a
task.
If you suspect that your Java program is experiencing a deadlock, you can
use various tools and techniques, such as thread dump analysis or
profiling, to identify and fix the issue.
synchronized (MyClass.class) {
    myStaticVariable += value;
}
It's always good practice to include a try-catch block in the run()
method; it will handle any unexpected exception and prevent the thread
from terminating abruptly.
What is thread-local?
ThreadLocal is a Java class that lets you store data that is isolated
per thread: each thread that reads or writes a ThreadLocal variable sees
its own, independently initialized copy of the value.
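The per-thread isolation can be demonstrated in a few lines (class name ThreadLocalDemo is illustrative):

```java
public class ThreadLocalDemo {
    private static final ThreadLocal<Integer> counter = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            counter.set(counter.get() + 1); // modifies this thread's copy only
            System.out.println(Thread.currentThread().getName() + ": " + counter.get());
        };
        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("main: " + counter.get()); // 0 — main's copy was never touched
    }
}
```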
Streams: A new API for processing collections of data that allows for
operations such as filtering, mapping, and reducing to be performed in a
more functional and readable way.
Date and time API: A new API for working with date and time, which
replaces the legacy java.util.Date and java.util.Calendar classes.
Lambda expressions can also be used with other features of Java 8 such
as streams and the new date and time API to perform operations such as
filtering, mapping and reducing collections of data, in a more functional
and readable way.
It's worth noting that, although lambda expressions can help make your
code more concise and readable, they can also make it more difficult to
understand if they are not used correctly. It's important to use them in a
way that makes the code easy to understand and maintain.
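A small lambda-plus-streams sketch illustrating the point (class name LambdaDemo and the sample names are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LambdaDemo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("alice", "bob", "carol");
        List<String> upper = names.stream()
                .filter(n -> n.length() > 3)   // lambda as a Predicate
                .map(String::toUpperCase)      // method reference as a Function
                .collect(Collectors.toList());
        System.out.println(upper); // [ALICE, CAROL]
    }
}
```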
Static methods: Java 8 also introduced the ability for interfaces to have
static methods, which are methods that can be called on the interface
itself, rather than on an instance of the interface.
For example,
@FunctionalInterface
interface MyFunctionalInterface {
public void myMethod();
}
These are some of the most common functional interfaces, but there are
many others in the Java standard library, each with its own specific use
case.
ClassName::methodName
For example, if you have a class called "MyClass" with a method called
"myMethod", you could use a method reference to invoke that method like
this:
MyClass::myMethod
You can also use method references with constructors and array
constructors. The basic syntax for a constructor reference is:
ClassName::new
For example, if you have a class called "MyClass", you could use a
constructor reference to create a new instance of that class like this:
MyClass::new
TypeName[]::new
int[]::new
For example,
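A minimal sketch of a static interface method being called on the interface itself (interface and method names are illustrative):

```java
public class StaticIfaceDemo {
    interface MathUtil {
        static int square(int x) { // belongs to the interface, not to instances
            return x * x;
        }
    }

    public static void main(String[] args) {
        System.out.println(MathUtil.square(4)); // 16 — invoked on the interface
    }
}
```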
Intermediate operations are lazy, meaning that they are not executed
until a terminal operation is called. This allows multiple intermediate
operations to be chained together, with the result of one operation being
passed as the input to the next.
For example,
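Laziness can be made visible by adding a side effect to an intermediate operation: nothing runs until the terminal operation is invoked (class name LazyDemo is illustrative):

```java
import java.util.stream.Stream;

public class LazyDemo {
    public static void main(String[] args) {
        Stream<String> s = Stream.of("a", "bb", "ccc")
                .filter(x -> {
                    System.out.println("filtering " + x); // side effect to show execution
                    return x.length() > 1;
                });
        System.out.println("no filtering has happened yet");
        long count = s.count(); // the terminal operation triggers the pipeline
        System.out.println(count); // 2
    }
}
```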
A parallel stream automatically splits the data into smaller chunks and
assigns each chunk to a separate thread for processing. The results from
each thread are then combined to produce the final result.
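A parallel stream needs only a .parallel() call; the splitting and combining described above happen on the common fork-join pool (class name ParallelDemo is illustrative):

```java
import java.util.stream.IntStream;

public class ParallelDemo {
    public static void main(String[] args) {
        long sum = IntStream.rangeClosed(1, 1000)
                .parallel() // chunks are summed on separate threads, then combined
                .sum();
        System.out.println(sum); // 500500
    }
}
```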
For example,
List<List<Integer>> nested = Arrays.asList(
    Arrays.asList(1, 2),
    Arrays.asList(3, 4),
    Arrays.asList(5, 6)
);
List<Integer> flattened = nested.stream()
    .flatMap(Collection::stream)
    .collect(Collectors.toList());
In this example, we start with a List of List objects. We use the flatMap
method to convert each inner List into a stream of integers, and then
concatenate all the streams into a single stream of integers. Finally, we
collect the resulting stream into a new List object.
The flat method, on the other hand, is not a standard method in Java
Streams. It may be implemented as a custom method or library method,
but its behavior would depend on the implementation.
interface MyCollection<T> { // illustrative interface name
    void add(T item);
    boolean contains(T item);
    int size();

    default boolean isEmpty() {
        return size() == 0;
    }
}
This interface defines three methods for adding items to the collection,
checking if an item is contained in the collection, and getting the size of
the collection. It also defines a default method, isEmpty(), that returns
true if the size of the collection is 0.
Classes that implement this interface are not required to provide their own
implementation for isEmpty(), because a default implementation is
already provided in the interface. However, they can override the default
implementation if they need to provide a different behaviour.
interface Animal {
    default void eat() {
        System.out.println("I am eating.");
    }
}
Any class that implements the Animal interface will have access to the
eat() method, even if the class does not explicitly implement the eat()
method.
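A runnable sketch of a class inheriting a default method without overriding it (class and method names are illustrative):

```java
public class DefaultDemo {
    interface Greeter {
        default String greet() { return "hello"; } // default implementation
    }

    static class EnglishGreeter implements Greeter {} // inherits greet() for free

    public static void main(String[] args) {
        System.out.println(new EnglishGreeter().greet()); // hello
    }
}
```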
Default and static methods can be used to improve the design of Java
applications in a number of ways. For example, default methods can be
used to add new functionality to existing interfaces, and static methods
can be used to provide utility methods that are available to all classes that
implement a particular interface.
Overall, the memory changes in Java 8 have made Java applications more
memory-efficient. This is important for applications that run on devices
with limited memory, such as mobile devices and embedded systems.
Improved collision handling in HashMap: Java 8 changed how HashMap deals
with heavily colliding keys; once a bucket's chain grows past a
threshold, it is converted into a balanced tree, improving worst-case
lookup from O(n) to O(log n). This particularly helps when many String
keys hash to the same bucket.
int x = 0;
Runnable task = () -> System.out.println(x); // compiles only because x is never modified
Thread thread1 = new Thread(task);
Thread thread2 = new Thread(task);
thread1.start();
thread2.start();
If captured local variables did not have to be final, the lambda and the
enclosing method (possibly running on different threads) could each be
mutating the variable concurrently, and the final value of x would be
unpredictable.
By requiring variables captured by a lambda to be final or effectively
final, Java guarantees that the captured value is fixed, so there is no
shared mutable local state for threads to race on.
final int x = 0;
Runnable task = () -> {
    int y = x + 1; // x is captured by the lambda and must be (effectively) final
    System.out.println(y);
};
Thread thread1 = new Thread(task);
Thread thread2 = new Thread(task);
thread1.start();
thread2.start();
Each type of dependency injection has its own benefits, and the choice of
which one to use will depend on the specific requirements of the
application.
Here are some benefits and considerations for each type of dependency
injection:
Constructor injection:
Constructor injection makes dependencies explicit and mandatory, allows
fields to be declared final, and guarantees the object is fully initialized
before use. It also makes the code easy to unit test, because a test can
pass fake dependencies directly to the constructor.
Setter injection:
Setter injection suits optional dependencies and allows a dependency to be
changed or reconfigured after the object has been constructed.
Field injection:
Field injection requires the least code, but it hides the dependencies
inside the class, which can make the code more difficult to read and
understand.
Field injection can also make the code more difficult to test, as the
dependencies are not explicit and cannot be supplied without reflection
or a container.
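The contrast is easiest to see without a framework. Below is a minimal sketch of constructor injection, using hypothetical names (PaymentGateway, OrderService — not Spring types), showing why a test can simply hand in a fake dependency:

```java
// Framework-free constructor injection: the dependency arrives through the
// constructor and is stored in a final field, so the object can never exist
// in a half-wired state.
public class InjectionDemo {
    interface PaymentGateway {
        String charge(int amount);
    }

    static class OrderService {
        private final PaymentGateway gateway; // explicit, immutable dependency

        OrderService(PaymentGateway gateway) {
            this.gateway = gateway;
        }

        String placeOrder(int amount) {
            return gateway.charge(amount);
        }
    }
}
```

A test (or a container) constructs the service with whatever implementation it likes; nothing about the class is tied to Spring.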
IoC containers are responsible for creating and managing objects, and
they do this by relying on a set of configuration rules that define how
objects are created and wired together.
Overall, the IoC container is responsible for managing object creation and
lifecycle management, while the application code is responsible for
defining the rules that govern how objects are created and wired together.
This separation of concerns allows for greater flexibility and modularity in
the application, as the application code can be easily modified without
affecting the underlying object creation and management processes.
The Spring bean lifecycle can be divided into three phases: instantiation,
configuration, and destruction.
@PostConstruct: Invoked after the bean has been constructed and all
dependencies have been injected
init-method: Specifies a method to be called after the bean has been
constructed and all dependencies have been injected
destroy-method: Specifies a method to be called just before the bean is
destroyed.
@PreDestroy: Invoked before the bean is destroyed.
The Spring bean lifecycle is controlled by the Spring IoC container, which
creates, configures, and manages the lifecycle of the beans. Developers
can take advantage of the bean lifecycle callbacks to add custom
initialization and destruction logic to their beans, making it easier to
manage the lifecycle of their objects and to ensure that resources are
properly released.
The following are the most commonly used bean scopes in Spring
Framework:
It's important to note that the scope of a bean affects the lifecycle and
visibility of that bean within the Spring IoC container. By choosing the
appropriate scope for a bean, developers can control how and when the
bean is created and how it interacts with other beans in the application.
Stateless beans also have the advantage of being simpler and easier to
reason about, since they do not have to worry about maintaining state
between invocations. Additionally, since stateless beans do not maintain
any state, they can be easily serialized and replicated for high availability
and scalability.
Spring provides several ways to inject beans into other beans, including:
It's important to note that, you can use any combination of the above
methods, but you should choose the appropriate one depending on your
use case.
By default, Spring will try to autowire beans by type, but if there are
multiple beans of the same type, it will try to autowire by name using the
bean's name defined in the configuration file.
A cyclic dependency between beans occurs when two or more beans have
a mutual dependency on each other, which can cause issues with the
creation and initialization of these beans.
Use @Lazy: Annotating one of the injection points with @Lazy tells Spring
to inject a lazily-initialized proxy, which breaks the cycle:
@Component
public class BeanB {
    @Lazy
    @Autowired
    private BeanA beanA;
}
Use a proxy: A proxy can be used to break the cycle by delaying the
initialization of one of the beans until it is actually needed. Spring AOP can
be used to create a proxy for one of the beans involved in the cycle.
Use BeanFactory: Instead of injecting the bean directly, you can use
BeanFactory to retrieve the bean when it's actually needed.
@Component
public class BeanA {
    private BeanB beanB;

    @Autowired
    public BeanA(BeanFactory beanFactory) {
        this.beanB = beanFactory.getBean(BeanB.class);
    }
}
It's important to note that, the best way to handle cyclic dependencies will
depend on the specific requirements of your application. Therefore, you
should carefully analyze the problem and choose the approach that best
suits your needs.
main() method: The main() method is typically the entry point of a Spring
Boot application. It is used to start the Spring Boot application by calling
the SpringApplication.run() method.
It's important to note that the choice of method will depend on the
specific requirements of the application, such as whether the method
needs to be called after the application context has been loaded or after
specific Application events.
try-catch block: You can use a try-catch block to catch and handle
exceptions in the method where they occur. This approach is useful for
handling specific exceptions that are likely to occur within a particular
method.
Controller processes the request: The controller handles the request and
performs the necessary processing logic. It may interact with the model
component to retrieve data or update the data.
Model updates the data: The model component manages the data and
provides an interface for the controller to retrieve or update the data.
View renders the response: The view template is rendered to generate the
response. It may include data from the model component.
The Spring MVC flow is a cyclical process, as the client may send
additional requests to the application, and the cycle repeats.
However, it's important to note that if the singleton bean is stateful, and
the state is shared among requests, this could lead to race conditions and
other concurrency issues. For example, if two requests are trying to
modify the same piece of data at the same time, it could lead to data
inconsistencies.
To avoid these issues, it's important to make sure that any stateful
singleton beans are designed to be thread-safe, for example by using
synchronization or other concurrency control mechanisms.
These are just a few examples of the design patterns used in Spring,
there are many more. Spring framework makes use of these patterns to
provide a consistent and simple way to build applications, making it easier
to manage complex systems.
If the singleton bean is stateful, and the state is shared among requests,
it could lead to race conditions and other concurrency issues if not
designed properly. For example, if two requests are trying to modify the
same piece of data at the same time, it could lead to data inconsistencies.
To avoid these issues, it's important to make sure that any stateful
singleton beans are designed to be thread-safe by using synchronization
or other concurrency control mechanisms such as the synchronized
keyword, Lock or ReentrantLock classes, or the @Transactional annotation
if the bean is performing database operations.
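As a framework-free illustration of one such mechanism, a stateful singleton can guard its shared state with an atomic variable instead of a plain, unsynchronized field (VisitCounter is a hypothetical name):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of a thread-safe stateful singleton: the shared counter uses
// AtomicLong, so concurrent requests cannot lose or duplicate increments.
public class VisitCounter {
    private final AtomicLong visits = new AtomicLong();

    public long recordVisit() {
        return visits.incrementAndGet(); // atomic read-modify-write
    }

    public long total() {
        return visits.get();
    }
}
```

With a plain `long visits` field and `visits++`, two threads could read the same value and both write back the same result, silently dropping a visit.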
The IoC container is responsible for creating and managing the lifecycle of
beans. When you define a bean in the configuration, the IoC container will
use the factory pattern to create an instance of the bean. The IoC
container will then manage the lifecycle of the bean, including injecting
dependencies, initializing the bean, and destroying the bean when it is no
longer needed.
Here's an example of how you can define a bean in Spring using the
factory design pattern:
@Configuration
public class MyConfig {
@Bean
public MyService myService() {
return new MyService();
}
}
In Spring, AOP proxies are created by the IoC container, and they are
used to intercept method calls made to the target bean. This allows you to
add additional behaviour, such as logging or security checks, before or
after the method call is made to the target bean.
AOP proxies are created using one of three proxy types: JDK dynamic
proxies, CGLIB proxies, or AspectJ proxies.
JDK dynamic proxies: This is the default proxy type in Spring, and it is
used to proxy interfaces.
CGLIB proxies: This proxy type is used to proxy classes, and it works by
creating a subclass of the target bean.
AspectJ proxies: This proxy type uses the AspectJ library to create
proxies, and it allows you to use AspectJ pointcuts and advice in your
application.
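The first mechanism, a JDK dynamic proxy, can be sketched with nothing but the standard library: java.lang.reflect.Proxy builds an object that funnels every interface call through an InvocationHandler, which is where cross-cutting behaviour such as logging goes (GreetingService and the log format here are illustrative):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {
    public interface GreetingService {
        String greet(String name);
    }

    // Wraps any GreetingService in a JDK dynamic proxy that records
    // "before" and "after" entries around every method call.
    public static GreetingService withLogging(GreetingService target, StringBuilder log) {
        InvocationHandler handler = (proxy, method, args) -> {
            log.append("before ").append(method.getName()).append("; ");
            Object result = method.invoke(target, args); // delegate to the real bean
            log.append("after ").append(method.getName());
            return result;
        };
        return (GreetingService) Proxy.newProxyInstance(
                GreetingService.class.getClassLoader(),
                new Class<?>[] { GreetingService.class },
                handler);
    }
}
```

This is the same shape Spring uses for interface-based AOP: the caller holds the proxy, never the target, so the advice always runs.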
Spring uses the proxy pattern to provide AOP functionality by generating a
proxy object that wraps the target bean. The proxy object will intercept
method calls made to the target bean, and it will invoke additional
behavior, such as logging or security checks, before or after the method
call is made to the target bean.
Here's an example of how you can use Spring AOP to add logging to a
bean:
@Aspect
@Component
public class LoggingAspect {
    private static final Logger log = LoggerFactory.getLogger(LoggingAspect.class);

    @Before("execution(* com.example.service.*.*(..))")
    public void logBefore(JoinPoint joinPoint) {
        log.info("Started method: " + joinPoint.getSignature().getName());
    }
}
If a singleton bean is injected into a prototype bean, then each time the
prototype bean is created, it will receive the same instance of the
singleton bean. This is because the singleton bean is only created once
during the startup of the application context, and that same instance is
then injected into the prototype bean each time it is created.
@Component
@Scope("singleton")
public class SingletonBean { }

@Component
@Scope("prototype")
public class PrototypeBean {
    @Autowired
    private SingletonBean singletonBean;
}
In this example, every new prototype bean instance receives the same
instance of the singleton bean. Note that the reverse is not symmetric:
if a prototype bean is injected into a singleton bean, the singleton
receives a single prototype instance when it is created and keeps using
that same instance. To get a fresh prototype instance on each call, you
need a scoped proxy or a lookup method.
You want to build a modular application where you can pick and choose
only the components that you need.
Overall, both Spring and Spring Boot are powerful frameworks that can be
used to build enterprise-level applications. The choice between them
depends on the specific needs of your application and the level of
flexibility and customization that you require.
Here's an example of how you can create a prototype bean using Java
configuration:
@Configuration
public class AppConfig {
@Bean(name="prototypeBean")
@Scope("prototype")
public PrototypeBean prototypeBean() {
return new PrototypeBean();
}
}
Each time you request the prototype bean from the application context,
you will get a new instance of the PrototypeBean class.
1. Declare the class as final. This will prevent other classes from
extending it.
2. Make the constructor private. This will prevent other classes from
creating instances of the class.
@SpringBootApplication
public class MyApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}
@Component
public class MyComponent { /* class implementation */ }
@Service
public class MyService {
    private final MyRepository myRepository;

    @Autowired
    public MyService(MyRepository myRepository) {
        this.myRepository = myRepository;
    }
    /* Class implementation */
}
When Spring Boot starts up, it scans the application context for beans that
have dependencies and automatically wires them together. This allows
you to write modular and maintainable code by decoupling the
components of your application.
@GetMapping:
@RestController
public class HelloController {
    @GetMapping("/hello")
    public String sayHello() {
        return "Hello";
    }
    // Other methods
}
@PostMapping:
@RestController
public class SubmitController {
    @PostMapping("/submit/{id}")
    public String submit(@PathVariable Long id) {
        return "Submitted " + id;
    }
    // Other methods
}
@Repository:
@Repository
public class MyRepository {
    public MyObject findById(Long id) {
        return null; // replace with a real data-store query
    }
    public void save(MyObject myObject) { /* persist */ }
}
The @Autowired annotation can be used to inject the repository into other
Spring-managed components. For example, here's a service class that
uses the MyRepository class:
@Service
public class MyService {
    @Autowired
    private MyRepository myRepository;
    public void save(MyObject myObject) { myRepository.save(myObject); }
    public MyObject find(Long id) { return myRepository.findById(id); }
}
@Service:
@Service
public class MyService {
    @Autowired
    private MyRepository myRepository;
    public void save(MyObject myObject) { myRepository.save(myObject); }
    public MyObject find(Long id) { return myRepository.findById(id); }
}
@Controller:
@Controller
public class MyController {
    @Autowired
    private MyService myService;

    @GetMapping("/my-path")
    public String myPage(Model model) {
        model.addAttribute("myObject", myService.find(1L));
        return "my-view";
    }
}
The @Autowired annotation is used to inject the MyService bean into the
MyController class. This allows the controller to use the service to perform
business logic operations.
It's worth noting that these annotations are not mutually exclusive; you
can use multiple annotations on a single class to provide more context
about the class and its function. Also, these annotations are part of the
org.springframework.stereotype package, a package of stereotypes for
annotating classes that play a specific role within your application.
@Configuration
@ComponentScan("com.example.myapp.services")
public class MyConfig {
// ...
}
In this example, the @ComponentScan annotation is used to scan the
package "com.example.myapp.services" for classes annotated with
stereotype annotations, and register them as beans in the application
context.
When Spring Boot starts up, it automatically scans the classpath of the
application for certain annotations, such as @Component, @Service, and
@Repository, and registers any classes that are annotated with these
annotations as beans in the Spring application context.
This feature allows developers to quickly and easily set up a new Spring
application without having to manually configure each individual
component. It also makes it easy to add or remove features from the
application by simply adding or removing the appropriate libraries from
the classpath.
However, it's important to note that if you want to customize the auto-
configured feature or want to use a different version of a library, you can
override the auto-configured settings by providing your own configuration.
To summarize:
Both annotations can be used to handle incoming HTTP requests, but the
choice between them depends on the use case and the type of response
that is required.
@Controller
public class MyController {
@ResponseBody
@RequestMapping("/example")
public Object example() {
return new Object();
}
}
It's important to note that, if you use this annotation on a method, Spring
will not try to resolve a view for the request and instead it will directly
return the response body.
@SpringBootApplication(exclude = {SecurityAutoConfiguration.class})
public class MyApplication {
    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}
Alternatively, the same exclusion can be set in application.properties:
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.security.servlet.SecurityAutoConfiguration
@PostMapping("/resource")
public ResponseEntity<Resource> createResource(@RequestBody Resource resource) {
    Resource existingResource = resourceService.findByUniqueId(resource.getUniqueId());
    if (existingResource != null) {
        return new ResponseEntity<>(existingResource, HttpStatus.OK);
    } else {
        Resource createdResource = resourceService.create(resource);
        return new ResponseEntity<>(createdResource, HttpStatus.CREATED);
    }
}
Note that this is just one way to make a POST method idempotent, and
other methods can be used depending on the requirements of your
specific application.
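Another common approach, sketched here without any framework, is a client-supplied idempotency key: the server remembers which keys it has already processed, so replaying the same POST returns the original result instead of creating a duplicate (IdempotentStore and the "resource:" format are hypothetical):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Framework-free sketch of idempotent creation via an idempotency key.
public class IdempotentStore {
    private final Map<String, String> created = new ConcurrentHashMap<>();

    // First call with a key creates the resource; every retry with the
    // same key returns the original result, making the operation idempotent.
    public String create(String idempotencyKey, String payload) {
        return created.computeIfAbsent(idempotencyKey, k -> "resource:" + payload);
    }
}
```

computeIfAbsent is atomic on ConcurrentHashMap, so even concurrent retries with the same key create the resource exactly once.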
Spring Boot allows you to define different profiles for your application,
each with its own set of configuration properties. For example, you can
keep a dev profile and a prod profile, each backed by its own
application-{profile}.properties file.
Profiles in Spring Boot are a powerful tool for managing the configuration
of your application in different environments. They make it easy to switch
between different configurations and ensure that your application is
properly configured for each environment.
Using profiles: Spring Boot allows you to define different sets of properties
for different environments using profiles.
Using environment variables: Spring Boot can also read properties from
environment variables. You can set environment variables for different
environments and reference them in the application.properties file.
Using Config Server: You can use Spring Cloud Config Server to manage
externalized configuration; it allows you to store configuration in a
central location, such as a Git repository, and serve it to every
environment.
It's important to note that, the best approach to set properties across
different environments will depend on the specific requirements of your
application and the infrastructure you have. Therefore, you should
carefully analyze the problem and choose the approach that best suits
your needs.
@Service
public class MyService {
@Transactional
public void updateData() {
// Perform database updates
}
}
@Service
public class MyService {
@Autowired
private MyRepository myRepository;
@Transactional
public void updateData(Long id, String data) {
MyEntity myEntity = myRepository.findById(id);
myEntity.setData(data);
myRepository.save(myEntity);
}
}
@Service
public class MyService {
    @Autowired
    private MyRepository myRepository;
    @Autowired
    private TransactionTemplate transactionTemplate;

    public void updateData(MyEntity entity) {
        transactionTemplate.execute(status -> myRepository.save(entity));
    }
}
@Service
public class ExampleService {
@Transactional
public void exampleMethod() {
// Code that needs to be executed in a transaction
}
}
READ UNCOMMITTED: A transaction can read data that has not been
committed by other transactions. This is the lowest level of isolation.
READ COMMITTED: A transaction can only read data that has been
committed by other transactions. This is a higher level of isolation.
REPEATABLE READ: A transaction can read data that has been committed
by other transactions, but other transactions cannot modify or insert data
that the current transaction has read.
SERIALIZABLE: Transactions are completely isolated from one another, as
if they executed one after another. This is the highest level of isolation.
@Transactional(isolation = Isolation.READ_COMMITTED)
public void exampleMethod() {
// Code that needs to be executed in a transaction
}
It's important to note that the isolation level that you choose will depend
on your application's specific requirements, and that different isolation
levels can have different performance impacts.
@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {
@Autowired
private UserDetailsService userDetailsService;
@Autowired
private PasswordEncoder passwordEncoder;
@Override
protected void configure(AuthenticationManagerBuilder auth) throws Exception {
    auth.userDetailsService(userDetailsService).passwordEncoder(passwordEncoder);
}
@Override
protected void configure(HttpSecurity http) throws Exception {
http.authorizeRequests()
.antMatchers("/admin/**").hasRole("ADMIN")
.antMatchers("/user/**").hasRole("USER")
.anyRequest().permitAll()
.and()
.formLogin();
}
}
You can also use other authentication methods such as OAuth2, JWT, etc.
It's important to note that the security configuration will only be effective
if the spring-security-web and spring-security-config modules are on the
classpath. When using Spring Boot, these modules are included by default
in the spring-boot-starter-security starter.
Also, it's important to keep in mind that security is a complex topic and
it's important to always keep the system updated and to test the security
measures in place, to ensure that the system is secure and to fix any
vulnerabilities that may arise.
What is a JWT token and how does spring boot fetch that
information?
JWT (JSON Web Token) is a compact, URL-safe means of representing
claims to be transferred between two parties. It is often used to
authenticate and authorize users. A JWT looks like this:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
In Spring Boot, a common approach is to write a JwtTokenProvider
component (for example on top of the jjwt library) to generate and
validate JWT tokens.
@Service
public class JwtTokenProvider {
@Value("${security.jwt.token.secret-key}")
private String secretKey;
@Value("${security.jwt.token.expire-length}")
private long validityInMilliseconds;
Payload: The payload contains the claims. Claims are statements about an
entity (typically, the user) and additional metadata. There are three types
of claims: registered, public, and private claims.
HMACSHA256(
base64UrlEncode(header) + "." +
base64UrlEncode(payload),
secret)
The JWT token is passed to the client, typically in the Authorization
header as a Bearer token or in an HTTP-only, secure cookie, and it is sent
back to the server with every request. The server can then use the
signature to verify the authenticity of the token and the claims.
It's important to note that, JWT tokens are stateless, which means that
the server does not need to maintain a session or other state for the
client. This makes JWT tokens an attractive option for RESTful APIs and
Single Page Applications (SPAs) that need to authenticate and authorize
users.
1. You want to use an application: Let's say you want to connect your
Facebook account to a new fitness app.
2. Application redirects you: The fitness app directs you to the Facebook
login page.
3. You log in: You sign in to your Facebook account (if you are not
already logged in).
4. Facebook asks for consent: Facebook shows which permissions the
fitness app is requesting.
5. You grant or deny access: You review the requested permissions and
choose to grant or deny the app access.
6. Facebook sends token to app: If you grant access, Facebook sends the
fitness app an access token, which is like a temporary key that allows the
app to access your data on Facebook.
7. Application uses token: The fitness app uses the access token to read
your Facebook data, such as your profile information and activities, and
integrate it into your fitness experience.
Improved security: You don't share your password with the application,
reducing the risk of it being compromised.
Granular control: You can choose what data the application can access.
Widely supported: Many popular websites and services use OAuth 2.0.
The signature of a JWT token is created by taking the encoded header, the
encoded payload, a secret, and the algorithm specified in the header, and
signing that. The signature is then added to the JWT token as the third
part.
When the token is received by the server, the server will decode the token
to retrieve the header and payload, and then it will re-create the signature
using the same secret and algorithm specified in the header.
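This signing step needs nothing beyond the JDK: javax.crypto.Mac computes the HMAC-SHA256, and URL-safe Base64 produces the token's third segment. The header, payload, and secret below are illustrative:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtSignatureDemo {
    // HMAC-SHA256 over "header.payload", base64url-encoded without padding:
    // exactly the third segment of an HS256 JWT.
    static String sign(String encodedHeader, String encodedPayload, String secret) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        byte[] sig = mac.doFinal((encodedHeader + "." + encodedPayload).getBytes(StandardCharsets.UTF_8));
        return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
    }

    // Verification recomputes the signature and compares it with the third part.
    static boolean verify(String token, String secret) throws Exception {
        String[] parts = token.split("\\.");
        return parts.length == 3 && sign(parts[0], parts[1], secret).equals(parts[2]);
    }
}
```

Because only the holder of the secret can produce a matching signature, any tampering with the header or payload makes verification fail.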
@ControllerAdvice
public class GlobalExceptionHandler {
    @ExceptionHandler(CustomException.class)
    public ResponseEntity<ErrorResponse> handleCustomException(CustomException ex) {
        ErrorResponse error = new ErrorResponse("Error", ex.getMessage());
        return new ResponseEntity<>(error, HttpStatus.BAD_REQUEST);
    }
}
@ControllerAdvice(basePackages = "com.example.myapp.controllers")
This will only apply the exception handling logic to controllers in the
specified package, while all other controllers will not be affected by this.
@ControllerAdvice
public class GlobalExceptionAdvice {
    @ExceptionHandler(value = Exception.class)
    public void handleAll(Exception ex) { /* handling logic */ }
}
Local Exception Handling: You can handle exceptions within the controller
methods where they occur. You can use the try-catch block or the
@ExceptionHandler annotation to handle exceptions locally.
@RestController
public class EmployeeController {
    @GetMapping("/employees/{id}")
    public Employee getEmployeeById(@PathVariable Long id) {
        try {
            return employeeService.getEmployeeById(id);
        } catch (Exception ex) { /* handle locally */ throw ex; }
    }
}
@ControllerAdvice
public class CustomExceptionHandler extends ResponseEntityExceptionHandler {
    @ExceptionHandler(CustomException.class)
    public final ResponseEntity<ErrorResponse> handleCustomException(CustomException ex, WebRequest request) {
@Configuration
public class RestConfiguration {
@Autowired
private CustomExceptionHandler customExceptionHandler;
@Bean
public HandlerExceptionResolver handlerExceptionResolver() {
return customExceptionHandler;
}
}
With this, your custom exception handler will be registered and used by
Spring to handle exceptions in your application.
EmployeeController: This class defines the REST endpoints for getting and
saving employees.
@RestController
@RequestMapping("/employees")
public class EmployeeController {
    @Autowired
    private EmployeeService employeeService;
@GetMapping
public List<Employee> getAllEmployees() {
return employeeService.getAllEmployees();
}
@GetMapping("/{id}")
public Employee getEmployeeById(@PathVariable Long id) {
return employeeService.getEmployeeById(id);
}
@PostMapping
public Employee saveEmployee(@RequestBody Employee employee) {
return employeeService.saveEmployee(employee);
}
}
The @RestController annotation is used to indicate that the class is a
controller and will handle HTTP requests. The @RequestMapping
annotation is used to map the endpoint to a specific URL.
EmployeeService: This class defines the business logic for getting and
saving employees.
@Service
public class EmployeeService {
    // business logic for getting and saving employees goes here
}
Features of Microservice:
Decentralized Governance - The focus is on using the right tool for the
right job. That means there is no standardized pattern or any technology
pattern. Developers have the freedom to choose the best useful tools to
solve their problems.
Fault Isolation - Unlike a monolith, where one broken feature can make
the entire system unavailable, a failure in one microservice can be
contained so that the rest of the system keeps working.
API Gateway: An API gateway acts as a single entry point for all external
requests, forwarding them to the appropriate microservice for handling.
The gateway can also perform tasks such as authentication, rate limiting,
and caching.
Each of these patterns can help to improve the scalability, reliability, and
maintainability of a microservice architecture. The choice of pattern will
depend on the specific requirements and constraints of the system being
developed.
Database per Service: Each service has its own database, allowing for a
high degree of independence and autonomy.
Shared Database: A shared database is used by multiple services to
store data that is commonly used across the system.
Event Sourcing: The state of the system is stored as a series of events,
allowing for better scalability and fault tolerance.
Command Query Responsibility Segregation (CQRS): Queries and
commands are separated, allowing for improved scalability and
performance.
Saga: A long-running transaction is broken down into smaller,
autonomous transactions that can be executed by different services.
Materialized View: A pre-computed view of data is used to provide fast
access to commonly used data.
API Composition: APIs are composed to provide a unified view of data
from multiple services.
Read Replicas: Read replicas are used to offload read requests from the
primary database, improving performance and scalability.
For write-heavy applications, you may use the Event Sourcing pattern.
This pattern involves storing every change to the state of your application
as an event. Each microservice can subscribe to these events and update
its own state accordingly. This allows multiple microservices to collaborate
and ensures that all changes are captured and recorded.
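A toy sketch of the idea (an illustrative Account class, not a full event store): state is never updated in place; it is derived by replaying an append-only log of events.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of event sourcing: the only stored data is the event log;
// current state is a fold over all recorded events.
public class Account {
    private final List<Integer> events = new ArrayList<>(); // deposits (+) and withdrawals (-)

    public void deposit(int amount) { events.add(amount); }
    public void withdraw(int amount) { events.add(-amount); }

    // Replay every event to compute the current balance
    public int balance() {
        return events.stream().mapToInt(Integer::intValue).sum();
    }

    public List<Integer> eventLog() { return events; }
}
```

In a real system the log would be persisted and published, so other microservices could subscribe and rebuild their own views from the same events.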
In both cases, you can also consider using a message queue to handle
asynchronous communication between the microservices, and a cache to
improve performance for read-heavy applications.
By using these fault tolerance mechanisms, you can build robust and
resilient Microservice with Spring framework. This can help you to deliver
high-quality applications that are able to withstand failures and handle
high levels of traffic and load.
Here's a simple example of how you can implement the Circuit Breaker
pattern in the Spring framework:
@Service
public class MyClient {
    @Autowired
    private MyService myService;

    @HystrixCommand(fallbackMethod = "defaultResponse")
    public String callService() {
        return myService.doSomething();
    }

    public String defaultResponse() {
        return "fallback response";
    }
}
@Configuration
@EnableCircuitBreaker
public class CircuitBreakerConfiguration { }
When the callService method is called, the Hystrix Circuit Breaker will
monitor the health of the MyService class and, if necessary, open the
circuit and fall back to the defaultResponse method. This helps to prevent
failures from propagating throughout the system and causing widespread
disruption.
To use the circuit breaker in Spring Boot, you can use the following
annotations:
Polly (.NET): A library for .NET that provides support for circuit breakers,
timeouts, and retries.
Ruby Circuit Breaker (Ruby): A library for Ruby that implements the
circuit breaker pattern.
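Stripped of any library, the pattern itself fits in a few lines. This illustrative sketch (not production code) counts consecutive failures and, once a threshold is crossed, short-circuits straight to the fallback:

```java
import java.util.function.Supplier;

// Minimal circuit breaker sketch: closed until failureThreshold consecutive
// failures, then open (all calls go straight to the fallback).
public class CircuitBreaker {
    private final int failureThreshold;
    private int failures = 0;
    private boolean open = false;

    public CircuitBreaker(int failureThreshold) {
        this.failureThreshold = failureThreshold;
    }

    public String call(Supplier<String> service, Supplier<String> fallback) {
        if (open) {
            return fallback.get(); // circuit open: don't even try the service
        }
        try {
            String result = service.get();
            failures = 0; // a success resets the failure count
            return result;
        } catch (RuntimeException e) {
            if (++failures >= failureThreshold) {
                open = true; // too many failures: trip the breaker
            }
            return fallback.get();
        }
    }

    public boolean isOpen() { return open; }
}
```

Real implementations add a half-open state that periodically lets one trial request through to probe whether the downstream service has recovered.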
@Service
public class NotificationService {
    @Async
    public void sendNotification() { /* runs on a taskExecutor thread */ }
}
@Configuration
@EnableAsync
public class AsyncConfig {
    @Bean(name = "taskExecutor")
    public Executor taskExecutor() {
        return Executors.newFixedThreadPool(10);
    }
}
Resilience: If one microservice fails, the message queue can hold the
messages until the other microservice is able to process them.
Finally, you can also use code-level access control in each microservice,
for example by using role-based access control (RBAC), to control which
microservices can call which APIs and what actions they can perform.
External Service: You can also use external services like AWS Secrets
Manager, Azure KeyVault, Google Cloud Key Management Service (KMS)
to store and manage your credentials.
It's important to note that, whatever the approach you choose, you should
make sure that the sensitive information is encrypted and stored in a
secure location that is only accessible to authorized users and services.
Also, you should use the best practice of never hardcoding the sensitive
information in the code, this could help to avoid security issues, and make
it easy to rotate and manage your credentials.
Java has several different garbage collectors that can be used, each with
its own set of features and trade-offs. The Serial GC is a basic collector
that runs serially on a single thread and suits small heaps. The Parallel
GC uses multiple threads to speed up garbage collection and was the
default collector up to Java 8, while the G1 GC, the default since Java 9,
is designed for large heap sizes and can help reduce the frequency of
long pauses caused by garbage collection.
It's worth noting that Metaspace itself replaced the older PermGen space
in Java 8. Separately, the Class Data Sharing (CDS) feature, extended to
application classes in Java 10 (AppCDS), allows read-only class metadata
to be shared across multiple JVMs, reducing the memory footprint and the
time required to start the JVM.
Use command-line tools: The Java Virtual Machine (JVM) provides several
command-line tools that can be used to profile memory usage, such as
jmap, jstat, and jhat. These tools can be used to create heap dumps,
monitor garbage collection statistics, and analyze memory usage.
To use a profiler, you will need to first run your application in profiling
mode, and then analyze the data that the profiler collects. This may
include analyzing heap dumps, looking at object references, and
identifying patterns of memory usage. Once you have identified the
source of the leak, you can then take steps to fix the problem.
It's worth noting that, Profilers can be quite complex, it's advisable to
have some familiarity with Java Memory Model, Garbage Collection and
JVM internals to effectively use them.
Insufficient heap size: The heap is the area of memory where the JVM
stores objects. If the heap size is not large enough, the JVM may not be
able to allocate enough memory for the application's needs, resulting in
an Out of Memory error.
High usage of non-heap memory: The JVM also uses non-heap memory
for things like class metadata, JIT compilation data and native resources.
If the non-heap memory usage is high, it can cause Out of Memory error.
It's worth noting that, Out of Memory errors can be challenging to debug,
it's advisable to use a profiler and analyse the heap dump or thread dump
to understand the root cause of the error.
POST: Used to create a new resource. The POST method can be used to
submit data to the server, such as form data or JSON payloads.
PUT: Used to update an existing resource. The PUT method can be used
to submit data to the server, and it should completely replace the
resource if it exists.
GET, PUT, and DELETE are idempotent: repeating the same request any
number of times leaves the server in the same state as a single request.
Use HTTP Verbs: RESTful services should use HTTP verbs such as GET,
POST, PUT, and DELETE to perform operations on resources.
Use HTTP Status Codes: RESTful services should use appropriate HTTP
status codes to indicate the result of an operation, such as 200 OK for
success, 404 Not Found for a missing resource, and 500 Internal Server
Error for a server-side error.
PUT Method:
Purpose: PUT is used to update a resource or create it if it doesn't exist at
a specified URL. It is often used when you want to fully replace an existing
resource with new data.
Idempotence: It is idempotent, meaning that multiple identical requests
will have the same effect as a single request. If the resource doesn't exist,
PUT will create it, and if it does, it will update it.
Safety: PUT is not "safe" in the strict HTTP sense, because it modifies
server state, but its idempotence makes retries harmless: replaying the
same PUT does not create duplicate resources.
Validation: This is the process of validating the input data to ensure that
it meets certain criteria. REST APIs can use input validation libraries to
validate data before processing it.
Rate Limiting: This is the process of limiting the number of requests that
a client can make to an API within a certain time period. This can help
prevent Denial of Service (DoS) attacks.
Logging and Auditing: This is the process of recording API access and
activity, which can be used to detect and investigate security incidents.
Via URL: The URL query string can be used to pass parameters in a
request. The parameters are appended to the URL after the "?" character,
and multiple parameters are separated by "&". For example,
http://example.com/api/users?name=john&age=30.
Also, you can use Spring Security to secure your REST API.
Request
POST /tinyurls
Body:
{
  "long_url": "https://www.example.com/very/long/url"
}
Response
Body:
{
  "short_url": "http://tiny.url/abc123",
  "long_url": "https://www.example.com/very/long/url"
}
Request
GET /tinyurls/{short_url_id}
Response
Location: https://www.example.com/very/long/url
Request
Response
Status: 200 OK
Body:
{
  "click_count": 42,
  "last_clicked_at": "2022-01-01T12:00:00Z"
}
Note: The implementation details (such as the format of the short URL
and the storage backend) and the error handling are omitted for
simplicity.
private Singleton() {}
The singleton pattern is useful when you need to ensure that a class has
only one instance, and you need to provide a global access point to that
instance. It is also useful when you need to control the number of objects
that are created because you can ensure that only one object is created.
However, the singleton pattern can make it difficult to test your code,
because you cannot use the constructor to create new instances of the
singleton class for testing purposes.
Use a Subclass: You can also create a subclass of the Singleton class
and override its behavior. The subclass can then be used to create
multiple instances with different behavior, while the original Singleton
class continues to provide a single instance.
Here are some techniques to help prevent breaking the Singleton pattern:
By using these techniques, you can help prevent the Singleton pattern
from being broken and ensure that the class behaves consistently and as
intended.
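As a concrete sketch of these ideas, here is a minimal lazy, thread-safe Singleton using double-checked locking (the class name is illustrative):

```java
public class Singleton {
    // the single, lazily created instance; volatile for safe publication
    private static volatile Singleton instance;

    // private constructor prevents direct instantiation from outside
    private Singleton() {}

    // double-checked locking keeps the common path lock-free
    public static Singleton getInstance() {
        if (instance == null) {
            synchronized (Singleton.class) {
                if (instance == null) {
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```

Every call to `getInstance()` returns the same object, which is exactly the global access point the pattern promises.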
The Builder design pattern is a creational design pattern that allows for
the step-by-step construction of complex objects using a specific
construction process. It separates the construction of an object from its
representation so that the same construction process can create different
representations.
The Builder pattern is useful when you want to create complex objects,
but the construction process for these objects is relatively simple. It
allows you to create the object step by step and provides a way to
retrieve the final object once it has been constructed.
class ComplexObject {
// fields for the complex object
}
class Director {
public void construct(Builder builder) {
// use the builder to construct the complex object
}
}
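Filling in the skeleton above, a common self-contained variant uses a nested Builder; the `name` and `size` fields are illustrative, not from the original example:

```java
public class ComplexObject {
    private final String name; // illustrative fields for the complex object
    private final int size;

    private ComplexObject(Builder builder) {
        this.name = builder.name;
        this.size = builder.size;
    }

    public String getName() { return name; }
    public int getSize() { return size; }

    // the Builder collects the parts step by step, then produces the final object
    public static class Builder {
        private String name;
        private int size;

        public Builder name(String name) { this.name = name; return this; }
        public Builder size(int size) { this.size = size; return this; }
        public ComplexObject build() { return new ComplexObject(this); }
    }
}
```

The fluent calls replace a long constructor parameter list, and the built object can stay immutable.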
Target Interface: This is the interface that the client expects to work with.
Adaptee: This is the class that has the interface that does not match the
target interface.
The Adapter pattern can be implemented in two ways: Class Adapter and
Object Adapter.
In the Class Adapter approach, the Adapter class extends the Adaptee
class and implements the Target Interface. The Adapter class inherits the
behavior of the Adaptee class and adds the behavior required to match
the Target Interface.
Suppose you have a client that expects a simple interface for a printer
that only has a print() method. However, you have an existing class called
AdvancedPrinter that has a complex interface with multiple methods. You
can create an Adapter class that adapts the interface of the
AdvancedPrinter to the simple interface expected by the client. The
Adapter class would have a print() method that calls the appropriate
methods on the AdvancedPrinter to accomplish the print operation. The
client can then use the Adapter class to print documents without having to
know about the complex interface of the AdvancedPrinter.
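A minimal sketch of that printer example as an Object Adapter; since the original code is not shown, the `loadDocument`/`render` methods on `AdvancedPrinter` are assumed names:

```java
// Target interface the client expects
interface Printer {
    String print(String document);
}

// Adaptee with an incompatible, multi-step interface (method names assumed)
class AdvancedPrinter {
    String loadDocument(String document) { return document; }
    String render(String loaded) { return "printed: " + loaded; }
}

// Object Adapter: wraps the Adaptee and exposes the Target interface
public class PrinterAdapter implements Printer {
    private final AdvancedPrinter advancedPrinter = new AdvancedPrinter();

    @Override
    public String print(String document) {
        // translate the single simple call into the Adaptee's protocol
        String loaded = advancedPrinter.loadDocument(document);
        return advancedPrinter.render(loaded);
    }
}
```

The client only ever sees `Printer.print()`.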
Subject: This is the interface that both the Real Subject and the Proxy
implement. It defines the common interface for the Real Subject and the
Proxy so that the client can work with both objects interchangeably.
Real Subject: This is the object that the client wants to access. It
implements the Subject interface and provides the real implementation of
the operations.
Protection Proxy: This is a type of Proxy that checks whether the client
has the necessary permissions to access the Real Subject. If the client has
the required permissions, the Proxy forwards the request to the Real
Subject. Otherwise, it denies the request.
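A small sketch of a protection proxy along these lines; the `Service` interface and the admin-only check are illustrative assumptions:

```java
// Subject: common interface for Real Subject and Proxy
interface Service {
    String request(String user);
}

// Real Subject: the object the client actually wants to use
class RealService implements Service {
    @Override
    public String request(String user) { return "data for " + user; }
}

// Protection Proxy: checks permissions before forwarding to the Real Subject
public class ProtectionProxy implements Service {
    private final Service realService = new RealService();

    @Override
    public String request(String user) {
        if (!"admin".equals(user)) {      // illustrative permission check
            return "access denied";
        }
        return realService.request(user); // forward to the Real Subject
    }
}
```

Because both classes implement `Service`, the client can hold either one through the same reference type.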
In the Decorator pattern, a set of decorator classes are created that add
new functionality to the original object. These decorators conform to the
same interface as the object being decorated, and they contain a
reference to the object they are decorating. The decorators can add new behavior before or after delegating calls to the wrapped object.
The original object can remain unchanged, which helps to ensure that
existing code and unit tests still work as expected.
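For instance, a decorator that wraps a `Message` and upper-cases its text (all names here are illustrative):

```java
// Component interface shared by the plain object and its decorators
interface Message {
    String text();
}

class PlainMessage implements Message {
    @Override
    public String text() { return "hello"; }
}

// Decorator: same interface, holds a reference to the object it decorates
public class UpperCaseDecorator implements Message {
    private final Message inner;

    public UpperCaseDecorator(Message inner) { this.inner = inner; }

    @Override
    public String text() {
        // add behavior while delegating to the wrapped object
        return inner.text().toUpperCase();
    }
}
```

`PlainMessage` is untouched, so existing callers and tests keep working.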
The facade pattern can also be used to decouple clients from the
implementation of a subsystem. This can make the code more flexible and
easier to maintain. For example, if a client needs to use multiple classes in
a subsystem, the facade can provide a single interface that the client can
use. This allows the client to be independent of the implementation of the
subsystem.
this.connection = connection;
this.logger = logger;
connection.open();
connection.close();
The facade pattern is a useful design pattern for simplifying the use of
complex subsystems and decoupling clients from the implementation of
subsystems.
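A compact sketch of a facade that coordinates the connection and logger seen in the fragment above; the subsystem classes are simplified stand-ins, not a real database API:

```java
// Subsystem classes (illustrative stand-ins)
class Connection {
    boolean open;
    void open() { open = true; }
    void close() { open = false; }
}

class Logger {
    final StringBuilder log = new StringBuilder();
    void info(String msg) { log.append(msg).append('\n'); }
}

// Facade: one simple entry point that hides the subsystem choreography
public class DatabaseFacade {
    private final Connection connection = new Connection();
    private final Logger logger = new Logger();

    public String runQuery(String sql) {
        connection.open();
        logger.info("running: " + sql);
        String result = "result of " + sql; // stand-in for real query execution
        connection.close();
        return result;
    }
}
```

The client calls one method instead of opening, logging, querying, and closing itself.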
SELECT salary
FROM (
SELECT salary, ROW_NUMBER() OVER (ORDER BY salary DESC) AS
row_num
FROM employee
) AS sub_query
WHERE row_num = 5;
This query uses a subquery to assign a row number to each salary in the
employee table, ordered by salary in descending order. The outer query
then selects the salary where the row number is equal to 5, which gives
the 5th highest salary.
In this query, the subquery selects the minimum id for each group of
duplicate records. The outer query then deletes all records whose id is not
in the subquery result, effectively removing all duplicate records except
one.
SELECT department, COUNT(*) AS num_employees
FROM employees
GROUP BY department;
In this query, we select the department column from the employees table
and count the number of employees in each department using the
COUNT(*) aggregate function. We also give the count column an alias
num_employees.
The output of this query would be a list of departments with the number
of employees in each department.
 id | name    | department | salary
----+---------+------------+-------
  1 | Alice   | HR         | 50000
  2 | Bob     | IT         | 60000
  3 | Charlie | IT         | 55000
  6 | Frank   | IT         | 65000
  8 | Harry   | HR         | 55000
  9 | Ivan    | IT         | 70000

department | num_employees
-----------+--------------
HR         | 3
IT         | 4
Marketing  | 2
In this output, we can see that there are three employees in the HR
department, four employees in the IT department, and two employees in
the Marketing department.
What does the JDBC forName() method do for you when you
connect to any DB?
The forName() method is a static method in the java.lang.Class class that
is used to load a JDBC driver at runtime. In the context of JDBC, a driver
is a software component that provides the necessary functionality to
connect to a specific type of database.
The forName() method takes a string parameter that specifies the fully
qualified name of the class that implements the JDBC driver. For example,
to load the JDBC driver for MySQL, you would use the following code:
Class.forName("com.mysql.jdbc.Driver");
This code loads the class com.mysql.jdbc.Driver using the current class
loader. The class loader then initializes the driver, which registers itself
with the DriverManager class. Once the driver is registered, it can be used
to establish a connection to the database.
It's worth noting that the forName() method is used less frequently in
modern JDBC code, as many JDBC drivers now include a static
initialization block that registers the driver automatically when the class is
loaded. In such cases, you can simply include the JDBC driver JAR file in
your project's classpath and use the DriverManager.getConnection()
method to establish a connection to the database.
Triggers are associated with a specific table or view and can be set to
execute either before or after an event occurs. The events that can trigger
a trigger include inserts, updates, and deletes to the table. When a trigger
is fired, it can execute one or more SQL statements or call a stored
procedure.
CREATE TRIGGER trigger_name
BEFORE INSERT ON table_name
FOR EACH ROW
BEGIN
    -- SQL statements or a stored procedure call
END;
Inner Join: An inner join returns only the rows that have matching values
in both tables being joined. The join is performed by comparing values in
the columns that are common to both tables.
Left Join: A left join returns all the rows from the left table and the
matching rows from the right table. If there is no matching row in the
right table, the result will contain null values for the right table
columns.
Right Join: A right join is similar to a left join, but it returns all the rows
from the right table and the matching rows from the left table. If there is
no matching row in the left table, the result will contain null values for the
left table columns.
Full Outer Join: A full outer join returns all the rows from both tables,
including those that do not have matching values in the other table. If
there is no matching row in one of the tables, the result will contain null
values for the columns of the missing table.
Cross Join: A cross join returns the Cartesian product of the two tables,
which means that every row from one table is combined with every row
from the other table.
Suppose you have two tables: Customers and Orders. The Customers
table has columns for CustomerID, Name, and Address, while the Orders
table has columns for OrderID, CustomerID, and OrderDate. You can join
these two tables to get a result set that contains information about
customers and their orders.
To perform an inner join on these tables, you would match the rows in the
Customers table with the rows in the Orders table based on the
CustomerID column. The resulting joined table would contain only the
rows where there is a matching value in both tables.
To perform a left join on these tables, you would return all the rows from
the Customers table and the matching rows from the Orders table based
on the CustomerID column. If there is no matching row in the Orders
table, the result would contain null values for the OrderID and OrderDate
columns.
To perform a right join on these tables, you would return all the rows from
the Orders table and the matching rows from the Customers table based
on the CustomerID column. If there is no matching row in the Customers
table, the result would contain null values for the Name and Address
columns.
These are just a few examples of how joins can be used to combine data
from multiple tables in a relational database.
Here's an example of how you can use the Criteria API to perform an inner
join:
In this example, the JOIN clause is used to create an inner join between
the Order and Customer entities on the customer property of the Order
entity. The WHERE clause is used to add a filter on the name property of
the Customer entity, and the ORDER BY clause is used to sort the results
by the total property of the Order entity.
Adjacency List Model: In this model, each node in the hierarchy is stored
as a record in a database table, with a column that references the parent
node. This makes it easy to navigate the hierarchy, as you can simply
perform a recursive query to retrieve all the descendants of a given node.
However, this approach can be inefficient for very large hierarchies, as it
requires multiple database queries.
In general, the index data type should match the data type of the column
being indexed. This ensures that the index can be used efficiently to
speed up queries that search for specific values in the column. It's also
important to consider the length of the indexed values, as longer values
may require more storage space and may affect the performance of the
index.
It's worth noting that in addition to the data type of the indexed column,
the database may also use a specific data structure for the index, such as
a B-tree or hash table, to improve the efficiency of index lookups. The
specific data structure used will depend on the database system and the
configuration options used when creating the index.
SELECT my_column, COUNT(*) AS entry_count
FROM my_table
GROUP BY my_column
HAVING COUNT(*) > 1;
This query will return a list of values in the my_column column that have
more than one entry in the table, along with the count of entries for each
value. You can use this information to identify the duplicate entries in the
table and take appropriate action, such as deleting the duplicates or
modifying the schema to prevent future duplicates.
Partitioning, on the other hand, is the process of dividing a large table into
smaller, more manageable pieces called partitions, based on some criteria
such as date range or geographic location. Partitioning can improve the
performance of both read and write operations on the table by allowing
the database to process only the necessary partitions, instead of the
entire table. Partitioning can also help with data management, as it allows
administrators to more easily archive or delete old data.
@SqlResultSetMapping(
name="EmployeeResult",
entities={
@EntityResult(
entityClass=Employee.class,
fields={
@FieldResult(name="id", column="emp_id"),
@FieldResult(name="name", column="emp_name"),
@FieldResult(name="salary", column="emp_salary")
}
)
}
)
Here are some of the main annotations used to create an entity class in
JPA:
@Id: This annotation is used to mark a field as the primary key for the
entity. The field that is annotated with @Id will be mapped to the primary
key column of the table.
@Entity
@IdClass(CompositeKey.class)
public class Employee {
@Id
private int id;
@Id
private String name;
// ...
}
In this example, the Employee entity class has a composite key made up
of the id and name fields. The CompositeKey class is used as the primary
key class, and it has the same fields as the composite key in the
Employee entity class.
@Entity
public class Employee {
@EmbeddedId
private CompositeKey key;
// ...
}
@Embeddable
public class CompositeKey implements Serializable {
private int id;
private String name;
// ...
}
In this example, the Employee entity class has a composite key made up
of the id and name fields, which are embedded within.
Here's an example of how you can use the @Embedded annotation to map
a composite attribute:
@Entity
public class Employee {
@Embedded
private Address address;
// ...
}
@Embeddable
public class Address {
private String street;
private String city;
private String state;
private String zip;
}
How do you handle unidirectional join and bidirectional join at the Entity
level?
@Entity
@OneToMany(mappedBy = "department")
@Entity
@ManyToOne
Here, the "Department" entity is the owner of the relationship and the
"Employee" entity is the inverse end of the relationship.
It's important to note that for bidirectional relationship, it's crucial to keep
both sides of the relationship in sync.
@Entity
@OneToOne
@JoinColumn(name = "address_id")
@Entity
@OneToOne(mappedBy = "address")
@Entity
@OneToMany(mappedBy = "employee")
@Entity
@ManyToOne
@JoinColumn(name = "employee_id")
For example, if you have a parent entity Department and a child entity Employee, you can cascade operations from the parent to its children:
@Entity
public class Department {
@Id
private Long id;
private String name;
@OneToMany(mappedBy = "department", cascade = CascadeType.ALL,
orphanRemoval = true)
private List<Employee> employees;
//getters and setters
}
@Entity
public class Employee {
@Id
private Long id;
private String name;
@ManyToOne
@JoinColumn(name = "department_id")
private Department department;
//getters and setters
}
It is important to note that you should always test your JPA configurations
and use appropriate indexes to improve performance.
list.stream()
    .filter(i -> Collections.frequency(list, i) > 1)
    .distinct()
    .forEach(System.out::println);
employees.stream()
.sorted(Comparator.comparing(Employee::getSalary))
And here's an example of how you can sort the same list in descending
order based on the employee's name:
employees.stream()
.sorted(Comparator.comparing(Employee::getName).reversed())
.forEach(System.out::println);
You can use the filter and max methods of the Stream API in Java 8 to
find the highest salary of an employee from the HR department. Here's an
example:
.max(Comparator.comparing(Employee::getSalary));
if (highestPaidHrEmployee.isPresent()) {
} else {
In this example, the filter method is used to select only the employees
from the HR department, and the max method is used to find the
employee with the highest salary. The max method returns an Optional
Find all employees who live in ‘Pune’ city, sort them by their
name, and print the names of employees using Stream API.
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
this.name = name;
this.city = city;
return name;
return city;
.sorted(Comparator.comparing(Employee::getName))
.toList();
puneEmployees.stream()
.map(Employee::getName)
.forEach(System.out::println);
The code uses the filter method to filter out employees who live in Pune,
then the sorted method to sort them by name, and finally, the map
method to extract the names of the employees. The toList method is used
to convert the filtered and sorted stream of employees to a list.
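Put together as a runnable sketch; the `Employee` record and sample data are assumptions, since the original class is only partially shown:

```java
import java.util.Comparator;
import java.util.List;

public class PuneEmployees {
    // record stands in for the partially shown Employee class
    public record Employee(String name, String city) {}

    public static List<String> namesInCity(List<Employee> employees, String city) {
        return employees.stream()
                .filter(e -> e.city().equals(city))           // keep only the given city
                .sorted(Comparator.comparing(Employee::name)) // sort by name
                .map(Employee::name)                          // extract the names
                .toList();
    }
}
```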
.filter(n -> n % 2 == 0)
.mapToDouble(n -> n)
.average()
numbers.stream()
.sorted()
.forEach(System.out::println);
And here's an example of how you can sort a list of strings in descending
order based on their length:
words.stream()
.sorted(Comparator.comparingInt(String::length).reversed())
.forEach(System.out::println);
Note that the sorted method returns a new stream with the elements
sorted in the specified order, and it does not modify the original stream or
collection.
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
this.name = name;
this.department = department;
return name;
return department;
);
.collect(Collectors.groupingBy(Employee::getDepartment,
Collectors.counting()));
System.out.println(employeeCountByDepartment);
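A self-contained version of this grouping; the record and field names are assumed, as the original class body was lost:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class DepartmentCount {
    public record Employee(String name, String department) {}

    public static Map<String, Long> countByDepartment(List<Employee> employees) {
        // group employees by department, counting the members of each group
        return employees.stream()
                .collect(Collectors.groupingBy(Employee::department,
                        Collectors.counting()));
    }
}
```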
import java.util.*;
import java.util.stream.Collectors;
);
.sorted(Comparator.comparing(Employee::getName))
.sorted(Comparator.comparing(Employee::getSalary).reversed())
.collect(Collectors.toList());
this.name = name;
this.location = location;
this.salary = salary;
return name;
return location;
return salary;
@Override
return "Employee{" +
'}';
We use a stream to filter the employees based on the given location using
the filter() method, and then sort them in alphabetical order by name
using the sorted() method with
Comparator.comparing(Employee::getName). Finally, we sort each city
employee’s salary from highest to lowest using the sorted() method with
Comparator.comparing(Employee::getSalary).reversed().
);
nameFrequencyMap.put(name,
nameFrequencyMap.getOrDefault(name, 0) + 1);
this.name = name;
this.location = location;
this.salary = salary;
return name;
return location;
return salary;
@Override
return "Employee{" +
'}';
We iterate over the employees using a for-each loop, and for each
employee, we extract the name using the getName() method. We then
put the name in the nameFrequencyMap map and increment its frequency
using nameFrequencyMap.getOrDefault(name, 0) + 1.
import java.util.Arrays;
.chars()
.filter(Character::isDigit)
.map(Character::getNumericValue)
.toArray();
In this program, we first define an integer array arr with some elements.
Then we use the Arrays.stream() method to create a stream of integers
from the array, and then we call the sum() method on the stream to find
the sum of all the elements in the array.
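The complete program described here is short; a runnable sketch:

```java
import java.util.Arrays;

public class ArraySum {
    public static int sum(int[] arr) {
        // stream over the array and reduce with sum()
        return Arrays.stream(arr).sum();
    }
}
```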
import java.util.List;
import java.util.stream.Collectors;
.filter(n -> n % 2 == 0)
.map(n -> n * 2)
.collect(Collectors.toList());
We then use the filter() method on the stream to filter out all the odd
numbers, and then use the map() method to multiply each even number
by 2.
Finally, we collect the result of the stream into a list using the collect()
method with Collectors.toList() and print the result using the
System.out.println() statement.
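A runnable version of this pipeline (the class and method names are illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;

public class EvenDoubler {
    public static List<Integer> doubleEvens(List<Integer> numbers) {
        return numbers.stream()
                .filter(n -> n % 2 == 0)          // keep even numbers
                .map(n -> n * 2)                  // double each one
                .collect(Collectors.toList());    // collect into a list
    }
}
```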
import java.util.Map;
wordCounts.put(word, 1);
} else {
The program takes a string as input and uses a HashMap to store the
count of each word. It splits the string into words using the split method,
and then iterates through each word, incrementing its count in the
HashMap. Finally, it prints the occurrence of each word.
The output of the program for the input string "Hello world hello java
world" would be:
world: 2
java: 1
Hello: 1
hello: 1
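A compact version of the program, using getOrDefault instead of an explicit contains check:

```java
import java.util.HashMap;
import java.util.Map;

public class WordCount {
    public static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new HashMap<>();
        for (String word : text.split("\\s+")) {
            // getOrDefault collapses the "first occurrence vs. increment" branch
            counts.put(word, counts.getOrDefault(word, 0) + 1);
        }
        return counts;
    }
}
```

Note that "Hello" and "hello" count separately, matching the output shown above.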
import java.util.Arrays;
In this program, we first define three ArrayList of integers arr1, arr2, and
arr3 with some elements.
We then iterate over the elements of the first list arr1 using a for-each
loop, and for each element, we check if it is present in the other two lists
arr2 and arr3 using the contains() method. If the element is present in
both lists, we print it as a common element.
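A runnable sketch of that loop, collecting the common elements instead of printing them (class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class CommonElements {
    public static List<Integer> common(List<Integer> a, List<Integer> b, List<Integer> c) {
        List<Integer> result = new ArrayList<>();
        for (Integer value : a) {
            // an element of the first list is common if both other lists contain it
            if (b.contains(value) && c.contains(value)) {
                result.add(value);
            }
        }
        return result;
    }
}
```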
int num = 0;
if (len == 0) {
int i = 0;
if (firstChar == '-') {
negative = true;
i++;
i++;
char ch = str.charAt(i);
} else {
System.out.println(num);
char ch = 'o';
if (index == -1) {
} else {
In this example, we define a string str and a character ch. We then use
the indexOf method of the String class to find the first occurrence of the
character in the string. If the character is not found, the indexOf method
returns -1. Otherwise, it returns the index of the first occurrence of the
character.
The output of this program for the string "Hello World" and the character
'o' would be:
int n = arr.length + 1;
int actualSum = 0;
actualSum += arr[i];
The output of this program for the array {1, 2, 3, 4, 6, 7, 8, 9, 10} would
be 5, the missing number.
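A runnable sketch of the missing-number program; it assumes the numbers run from 1 to n with exactly one value absent:

```java
public class MissingNumber {
    public static int find(int[] arr) {
        int n = arr.length + 1;            // the full range is 1..n, one value missing
        int expectedSum = n * (n + 1) / 2; // sum of 1..n
        int actualSum = 0;
        for (int value : arr) {
            actualSum += value;
        }
        return expectedSum - actualSum;    // the gap is the missing number
    }
}
```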
int n = str.length();
combinations("", str);
int n = str.length();
if (n == 0) {
System.out.println(prefix);
} else {
In this example, we define a string str with the value "GOD". We then call
the combinations method with an empty prefix and the string str. The
combinations method takes a prefix and a string as arguments. If the
length of the string is 0, it prints the prefix. Otherwise, it recursively calls
itself with each character of the string added to the prefix, and the
character removed from the string.
The output of this program for the string "GOD" would be:
GOD
GDO
OGD
ODG
DOG
DGO
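A self-contained version of the recursion described above, collecting the permutations into a list instead of printing them:

```java
import java.util.ArrayList;
import java.util.List;

public class Permutations {
    public static List<String> of(String str) {
        List<String> result = new ArrayList<>();
        permute("", str, result);
        return result;
    }

    private static void permute(String prefix, String rest, List<String> result) {
        if (rest.isEmpty()) {
            result.add(prefix);  // a complete permutation
            return;
        }
        for (int i = 0; i < rest.length(); i++) {
            // move character i from rest into the prefix and recurse on the remainder
            permute(prefix + rest.charAt(i),
                    rest.substring(0, i) + rest.substring(i + 1),
                    result);
        }
    }
}
```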
import java.util.*;
stack.push(c);
stack.pop();
stack.pop();
stack.pop();
} else {
return false;
return stack.isEmpty();
String s1 = "()";
String s2 = "()[]{}";
String s4 = "([)]";
String s5 = "{[]}";
The main method in this example demonstrates how to use the isValid
method with some sample inputs. The output of running this program
would be:
() is valid: true
(] is valid: false
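A runnable sketch of isValid; it uses ArrayDeque as the stack, an idiomatic substitute for the java.util.Stack calls in the fragment above:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ValidParentheses {
    public static boolean isValid(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            if (c == '(' || c == '[' || c == '{') {
                stack.push(c);                     // openers go on the stack
            } else {
                if (stack.isEmpty()) {
                    return false;                  // a closer with nothing open
                }
                char open = stack.pop();
                if ((c == ')' && open != '(')
                        || (c == ']' && open != '[')
                        || (c == '}' && open != '{')) {
                    return false;                  // mismatched pair
                }
            }
        }
        return stack.isEmpty();  // everything opened must have been closed
    }
}
```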
import java.util.*;
if (!set.add(i)) {
duplicates.add(i);
return duplicates;
int i = left - 1;
i++;
swap(array, i, j);
swap(array, i + 1, right);
return i + 1;
array[i] = array[j];
array[j] = temp;
char ch = scanner.nextLine().charAt(0);
if (string.charAt(i) == ch) {
int count = 0;
if (string.charAt(j) == ch) {
count++;
minCount = count;
if (minCount == Integer.MAX_VALUE) {
} else {
Finally, we check if minCount has been updated from its initial value. If it
hasn't, we know that the specified character was not present in the string.
Enter a character: l
Enter a character: z
int left_product = 1;
result[i] = left_product;
left_product *= arr[i];
int right_product = 1;
result[i] *= right_product;
right_product *= arr[i];
We then initialize a new array called result with the same length as the
input array. This array will store the final result.
We then compute the product of all elements to the left of each element
in the input array. We do this by maintaining a variable called left_product
that keeps track of the product of all elements seen so far. We iterate
Next, we compute the product of all elements to the right of each element
in the input array. We do this in a similar way to the previous step, but
iterating through the input array from right to left. We maintain a variable
called right_product that keeps track of the product of all elements seen
so far from the right side of the array. We multiply each element in result
by right_product and update right_product to include the current element.
The result shows that the product of all elements in the array except the
first one is 3 x 4 x 5 x 6 = 360. Similarly, the product of all elements
except the second one is 2 x 4 x 5 x 6 = 240, and so on. The output
matches these products with the corresponding element removed.
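The two passes described above, as a runnable sketch; the sample array {2, 3, 4, 5, 6} is inferred from the products quoted:

```java
public class ProductExceptSelf {
    public static int[] compute(int[] arr) {
        int n = arr.length;
        int[] result = new int[n];

        // pass 1: result[i] = product of everything to the left of i
        int leftProduct = 1;
        for (int i = 0; i < n; i++) {
            result[i] = leftProduct;
            leftProduct *= arr[i];
        }

        // pass 2: multiply in the product of everything to the right of i
        int rightProduct = 1;
        for (int i = n - 1; i >= 0; i--) {
            result[i] *= rightProduct;
            rightProduct *= arr[i];
        }
        return result;
    }
}
```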
Can you write down a Spring boot rest API for addition of two
integers?
@RestController
public class AdditionController {
    @GetMapping("/addition/{a}/{b}")
    public int addition(@PathVariable int a, @PathVariable int b) {
        return a + b;
    }
}
This REST API creates a simple Spring MVC @RestController that listens
for GET requests to the URL path /addition/{a}/{b} where {a} and {b}
are placeholders for integer values. The @PathVariable annotation maps
these values to the int parameters a and b, and the addition() method
simply returns the sum of a and b. This API can be easily extended to
handle other HTTP methods or to perform more complex operations.
.boxed()
.collect(Collectors.groupingBy(Function.identity(),
Collectors.counting()));
output:
if (charSet[val]) {
return false;
charSet[val] = true;
return true;
Explanation:
We first check if the input string is null or if its length is greater than 128,
which is the maximum number of unique ASCII characters. If either of
these conditions is true, we return false.
We create a boolean array of size 128, which will be used to keep track of
whether a character has been seen before or not.
We iterate through each character of the input string and convert it to its
ASCII value. We check if the corresponding index in the boolean array is
already true. If it is, it means that the character has been seen before and
we return false. Otherwise, we mark the corresponding index in the
boolean array as true.
If we have iterated through the entire string without finding any repeated
characters, we return true.
Here's how you can call this function with the input string "india" and print
the result:
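A self-contained version of the function described above:

```java
public class UniqueChars {
    public static boolean isUnique(String str) {
        if (str == null || str.length() > 128) {
            return false;  // more than 128 chars must repeat some ASCII character
        }
        boolean[] seen = new boolean[128];
        for (char c : str.toCharArray()) {
            if (seen[c]) {
                return false;  // this character was encountered before
            }
            seen[c] = true;
        }
        return true;  // no repeats found
    }
}
```

For "india" the method returns false, because 'i' appears twice.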
import java.util.List;
spikes.add(i);
return spikes;
int[] stockPrices = {100, 110, 120, 130, 140, 150, 160, 170, 180, 190, 200};
System.out.println(spikes);
import java.util.Map;
map.put(e2,"M2");
System.out.println(map.get(e1));
System.out.println(map.get(e2));
M2
M2
Explanation:
In Java, String literals are interned, meaning all String objects with the
same content share the same memory space.
Therefore, e1, e2, and e3 refer to the same String object "AJAY" in the
memory.
When entries are added to the map, HashMap compares keys using
hashCode() and equals(), so two String keys with the same content count
as the same key.
The second call, map.put(e2, "M2"), therefore overwrites the value that
was previously stored under e1.
As a result, both map.get(e1) and map.get(e2) return "M2", because the
keys have the same content "AJAY".
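A runnable sketch of the scenario; the first put (storing "M1") is an assumption, since that part of the original code was lost:

```java
import java.util.HashMap;
import java.util.Map;

public class InternedKeys {
    public static String lookup() {
        String e1 = "AJAY";  // string literals with the same content are interned,
        String e2 = "AJAY";  // so e1 and e2 even refer to the same object

        Map<String, String> map = new HashMap<>();
        map.put(e1, "M1");   // assumed first mapping (elided in the original)
        map.put(e2, "M2");   // equal key content, so this overwrites "M1"

        return map.get(e1);  // same result for map.get(e2)
    }
}
```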
Determining the number of threads needed for a cached thread pool can
be a challenging task, as it depends on various factors such as the nature
of the tasks, the resources they require, and the overall system
performance.
Here are some general guidelines that can help you determine the number
of threads needed for a cached thread pool:
Start with a small number of threads: You can start with a small number
of threads and increase it gradually until you find the optimal number.
Keep in mind that having too many threads can lead to thread context
switching overhead and degrade performance, while having too few
threads can limit the utilization of available resources.
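As a concrete starting point for the guideline above, this sketch sizes a fixed pool to the CPU count and measures completed work; it is a simplification of the cached-pool tuning the text describes, and all names are illustrative:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolSizing {
    public static int runTasks(int taskCount) {
        // a common starting point: one thread per available CPU core
        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < taskCount; i++) {
            pool.execute(completed::incrementAndGet); // stand-in for real work
        }
        pool.shutdown();
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return completed.get();
    }
}
```

From this baseline you would increase or decrease the thread count while watching throughput and context-switch overhead.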
Compression: You can compress the data before sending it back in the
response. This can reduce the amount of data sent over the network and
improve the response time. You can use algorithms such as GZIP to
compress the data.
Streaming: You can stream the data from the database directly to the
response without storing it in memory. This can reduce the memory usage
and allow you to handle large data sets more efficiently. You can use the
Java API for Streaming XML (StAX) to stream the data.
Keep in mind that these solutions may not be suitable for all use cases
and that the best solution depends on your specific requirements and
constraints.
Database-based-scenario:1
How would you design a binary tree kind of data structure in
database design? Basically, the interviewer wants to know how
you would design a database in a hierarchical way.
CREATE TABLE node (
    id INT PRIMARY KEY,
    data VARCHAR(255),
    parent_id INT,
    FOREIGN KEY (parent_id) REFERENCES node(id)
);
In this example, the node table has four columns: id, data, parent_id, and
a self-referencing foreign key parent_id that refers to the id of the parent
node. The root node of the tree will have a NULL value in the parent_id
column, and the other nodes will have a reference to their parent node.
You can use SQL queries to traverse the tree and perform various
operations, such as inserting, updating, and deleting nodes. You can also
use recursion to traverse the tree and retrieve the data for all nodes in a
specific order, such as pre-order, in-order, or post-order.
Database-based-scenario:2
How would you store millions of records in a table? How many
tables does it require, any database pattern can you use here?
Indexing: Indexing the columns used in the queries can improve the
query performance and reduce the response time. You can use either
In addition, you can use database patterns, such as the Star Schema, to
design the database and improve the performance and scalability of the
system. The Star Schema is a data warehousing pattern that uses a
central fact table to store the data and dimension tables to store the
metadata. This pattern can improve the query performance and reduce
the response time for large data sets.
If one of the microservices has high latency, how can you handle
that, and in which direction would you think to resolve this problem?
Monitor the Latency: You need to monitor the latency of the microservice
to identify the root cause of the problem. You can use tools like
Application Performance Management (APM) software or log analysis tools
to monitor the microservice.
Identify the Root Cause: You need to identify the root cause of the high
latency. The root cause can be anything from slow database queries to
resource constraints on the server. Once you identify the root cause, you
can take the necessary steps to resolve it.
Optimize the Code: You can optimize the code to reduce the latency. For
example, you can improve the algorithms used in the microservice, reduce
the number of database queries, or improve the caching mechanism.
Scale the Microservice: You can scale the microservice to handle the
increased load. You can scale the microservice horizontally, by adding
more instances, or vertically, by increasing the resources of the existing
instances.
Use Caching: You can use caching to reduce the latency. Caching can help
reduce the load on the database and improve the response time of the
microservice.
Load Balancing: You can use load balancing to distribute the load across
multiple instances of the microservice. Load balancing can help improve
the performance and availability of the microservice.
Microservice-based-scenario
Service A is calling Service B, C, D. I want to log or handle specific
conditions before calling B, C, and D but in a generic way. How can
you handle this situation?
In your case, you can define an aspect to handle specific conditions before
calling Service B, C, and D. The aspect can include code to log the
conditions or perform error handling, and you can apply the aspect to the
methods in Service A that call Service B, C, and D.
To implement this in Java, you can use a framework like Spring AOP. In
Spring AOP, you can define aspects using annotations and apply them to
your code using pointcuts. For example:
@Aspect
@Component
public class LoggingAspect {
    @Before("execution(* com.example.ServiceA.*(..))")
    public void beforeCall() { /* log or check conditions here */ }
}
Inheritance Scenario-Based
If a child class overrides the parent where the singleton pattern is
implemented, then will it break the same? If Yes/No, why?
The Singleton pattern is used to ensure that a class has only one instance,
and provides a global point of access to that instance. If a child class
overrides the parent where the Singleton pattern is implemented, then it
may break the Singleton pattern, depending on how the overriding is
done.
If the child class simply inherits the parent's Singleton instance, and does
not override any Singleton-specific methods or properties, then the
Singleton pattern will still be maintained. The child class will have access
to the same instance as the parent, and any modifications made to that
instance in the parent or child class will be visible to both classes.
Increase the initial capacity of the HashMap: By setting the initial capacity
to a higher value, you can reduce the number of times the array needs to
be resized. However, this will increase the memory consumption.
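A quick illustration of the initial-capacity trade-off (the sizes are arbitrary):

```java
import java.util.HashMap;
import java.util.Map;

class CapacityDemo {
    public static void main(String[] args) {
        // Default: capacity 16, load factor 0.75 -> first resize after the 12th entry.
        Map<String, Integer> small = new HashMap<>();

        // If we know we will store ~10,000 entries, sizing up front avoids
        // several intermediate resize-and-rehash passes:
        // 10,000 / 0.75 is about 13,334, which HashMap rounds up to a power of two.
        Map<String, Integer> large = new HashMap<>(16_384);

        for (int i = 0; i < 10_000; i++) {
            large.put("key" + i, i);
        }
        System.out.println(large.size()); // 10000
    }
}
```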
Use a distributed cache: A distributed cache can help reduce the load on
the database by storing frequently accessed data in memory. This can
speed up the application's response time and reduce the number of
database queries.
Use containers and orchestration tools: Containers like Docker can help to
package applications and their dependencies, allowing them to be
deployed and scaled quickly. Orchestration tools like Kubernetes or
Docker Swarm can automate the deployment and management of
containers.
Use a distributed file system: A distributed file system can help with
scalability and redundancy by distributing files across multiple servers.
Optimize the database queries: The slow generation of a PDF could be due
to slow database queries. You can optimize the database queries by using
indexing, caching, and partitioning. This will help the queries execute
faster, and the PDF generation time will be reduced.
Generate the PDF asynchronously: You can generate the PDF in the
background while the user continues to use the application. This way, the
user will not have to wait for the PDF to be generated. You can also notify
the user when the PDF is ready to be downloaded.
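The asynchronous approach above can be sketched with CompletableFuture; generatePdf here is a hypothetical stand-in for a real PDF library call:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class AsyncPdfDemo {
    // Hypothetical stand-in for a real PDF library call.
    static byte[] generatePdf(String reportId) {
        return ("PDF for " + reportId).getBytes();
    }

    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4);

        // The request thread returns immediately; the heavy work runs in the pool.
        CompletableFuture<byte[]> pdf =
                CompletableFuture.supplyAsync(() -> generatePdf("report-42"), pool)
                                 .thenApply(bytes -> {
                                     // e.g. store to a cache and notify the user here
                                     return bytes;
                                 });

        System.out.println("Request accepted, PDF is being generated...");
        System.out.println(pdf.join().length + " bytes generated");
        pool.shutdown();
    }
}
```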
Use a caching system: You can cache the generated PDF and serve it to
subsequent requests. This way, if the same user requests the same PDF,
you can serve it from the cache, and the user will not have to wait for the
PDF to be generated.
Optimize the PDF generation code: You can optimize the PDF generation
code to make it more efficient. This may involve changing the libraries or
tools you are using or optimizing the code itself.
Use a distributed system: You can distribute the PDF generation task
across multiple servers to reduce the time it takes to generate the PDF.
This is especially useful if you have a large number of users requesting
the PDF.
Optimize the server: You can optimize the server to handle the load
better. This may involve increasing the server's processing power,
memory, or storage.
We'll cover what happened in Java since its update in 2014 to the most
recent developments. Instead of just sticking to Java 8 topics, we'll
explore the significant improvements and new tools introduced in the later
versions.
This will help us answer interview questions more effectively. I've focused
on the important features that matter in interviews, so let's jump in and
see what's new in Java!
Java 8 Features
Features:
1. Lambda Expressions
2. Functional Interfaces
3. Stream API
4. Default and Static Methods in Interfaces
5. Optional class
6. Method References
7. New Date/Time API (java.time)
8. Parallel Streams
9. Collectors
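Several of these features compose naturally in a single stream pipeline; a small sketch:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

class Java8Demo {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Ajay", "Maria", "Tom", "Anita");

        // Lambda (filter), method reference (map), and a collector in one pipeline:
        List<String> aNames = names.stream()
                .filter(n -> n.startsWith("A"))
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(aNames); // [AJAY, ANITA]
    }
}
```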
Java 9 added new static factory methods to the collection interfaces (List,
Set, Map, etc.), making it more convenient to create immutable instances
of these collections.
List<String> colors = List.of("red", "green", "blue");
System.out.println(colors); // [red, green, blue]
// List.of returns an immutable list; colors.add("yellow") would throw
// UnsupportedOperationException.
The Stream API was enhanced with several new methods, such as
takeWhile, dropWhile, and ofNullable, which improve the flexibility and
functionality of working with streams.
// Example 1: takeWhile takes elements while the predicate holds, then stops
List<Integer> taken = Stream.of(1, 2, 3, 4, 1, 2)
        .takeWhile(n -> n < 4)
        .collect(Collectors.toList()); // [1, 2, 3]
// Example 2: dropWhile drops elements while the predicate holds, keeps the rest
List<Integer> dropped = Stream.of(1, 2, 3, 4, 1, 2)
        .dropWhile(n -> n < 4)
        .collect(Collectors.toList()); // [4, 1, 2]
// Example 3: ofNullable produces an empty stream for null, a one-element
// stream otherwise, which makes null-safe flat-mapping straightforward
List<String> names = Stream.of("Ajay", null, "Maria")
        .flatMap(Stream::ofNullable)
        .collect(Collectors.toList()); // [Ajay, Maria]
In these examples, takeWhile and dropWhile operate on the encounter order of
the stream, and ofNullable removes the need for explicit null checks.
5.HTTP/2 Client:
Java 9 introduced a new lightweight HTTP client that supports HTTP/2 and
WebSocket. This client is designed to be more efficient and flexible than
the old HttpURLConnection API.
// Incubating in Java 9; standardized as java.net.http in Java 11.
HttpClient client = HttpClient.newHttpClient();
HttpRequest request = HttpRequest.newBuilder()
        .uri(new URI("https://www.example.com"))
        .GET()
        .build();
HttpResponse<String> response =
        client.send(request, HttpResponse.BodyHandlers.ofString());
System.out.println(response.statusCode());
Java 10 Features
1.Local-Variable Type Inference (var):
Java 10 introduced the ability to use the var keyword for local variable
type inference. This allows developers to declare local variables without
explicitly specifying the type, letting the compiler infer it based on the
assigned value.
/**
 * Use var when the type is obvious from the right-hand side.
 * Don't use var when it hides the type and hurts readability.
 */
var b = "b"; // String
var c = 5; // int
// the benefit becomes more evident with types with long names:
var entry = new AbstractMap.SimpleImmutableEntry<>("key", "value");
// vs.
// AbstractMap.SimpleImmutableEntry<String, String> entry =
//         new AbstractMap.SimpleImmutableEntry<>("key", "value");
/**
 * var also pairs well with fluent chains that end in an Optional,
 * e.g. with .orElseThrow():
 */
var earliestFlight = flights
        .stream()
        .min(comparing(Flight::date));
earliestFlight.orElseThrow(FlightNotFoundException::new);
Java 11 Features
1.HTTP client:
The HTTP client that incubated in Java 9 was standardized in Java 11 under
java.net.http:
HttpClient client = HttpClient.newHttpClient();
HttpRequest request =
HttpRequest.newBuilder(URI.create("https://github.com/"))
.build();
HttpResponse<String> response =
client.send(request, HttpResponse.BodyHandlers.ofString());
print(response.headers().map());
These methods simplify reading and writing the contents of a file as a list
of strings. The readAllLines method reads all lines from a file into a list,
and the write method writes a collection of strings to a file.
These methods create buffered readers and writers for efficient reading
and writing of files. They simplify the process of working with character
streams.
This method compares the content of two files and returns the position of
the first mismatched byte. If the files are identical, it returns -1.
/**
 * Java 11 also added Files.readString and Files.writeString for reading and
 * writing a whole file as a single String:
 */
Path file = Path.of("example.txt");
Files.writeString(file, "Hello, Java 11");
String content = Files.readString(file);
print(content);
Path newFile = Path.of("copy.txt");
if(!Files.exists(newFile)) {
    Files.copy(file, newFile);
} else {
    print("copy.txt already exists");
}
Java 12 Features
1.Compact Number Formatting:
NumberFormat compactFormatter =
        NumberFormat.getCompactNumberInstance(Locale.US,
                NumberFormat.Style.SHORT);
NumberFormat compactLongFormatter =
        NumberFormat.getCompactNumberInstance(Locale.US,
                NumberFormat.Style.LONG);
print(compactFormatter.format(1000)); // 1K
print(compactLongFormatter.format(1000)); // 1 thousand
The String class in Java 12 introduced a new method called indent(int n).
This method is used to adjust the indentation of each line in a string by a
specified number of spaces.
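A small illustration of indent (the strings are arbitrary):

```java
class IndentDemo {
    public static void main(String[] args) {
        String text = "line1\nline2";

        // indent(4) prefixes every line with 4 spaces and normalizes line
        // endings: each line, including the last, ends with '\n'.
        String indented = text.indent(4);
        System.out.print(indented);

        // A negative argument removes up to that many leading spaces:
        System.out.print(indented.indent(-2));
    }
}
```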
The Collectors utility class in Java 12 introduced new collectors like teeing,
which allows combining two collectors into a single collector.
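For example, teeing can feed every element to two collectors at once, a sum and a count, and then merge the two results into an average:

```java
import java.util.stream.Collectors;
import java.util.stream.Stream;

class TeeingDemo {
    public static void main(String[] args) {
        // teeing(downstream1, downstream2, merger): each element goes to both
        // collectors, and the merger combines their two results at the end.
        double average = Stream.of(2, 4, 6, 8)
                .collect(Collectors.teeing(
                        Collectors.summingInt(i -> i),   // total = 20
                        Collectors.counting(),            // count = 4
                        (sum, count) -> (double) sum / count));

        System.out.println(average); // 5.0
    }
}
```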
// Files.mismatch finds and returns the position of the first mismatched
// byte in the content of two files, or -1 if the files are identical:
long result = Files.mismatch(Path.of("a.txt"), Path.of("b.txt"));
print(result); // -1 when the contents are equal
Java-13 Features
Nothing much interesting happened: — API update to ByteBuffer — Update
to localization (support for new chars and emojis) — GC updates
Java 14 Features
1.Switch Expressions:
Switch expressions, standardized in Java 14, let a switch produce a value
and use the concise arrow (->) syntax without fall-through:
int dayOfWeek = 2;
String dayType = switch (dayOfWeek) {
    case 1, 2, 3, 4, 5 -> "Weekday";
    case 6, 7 -> "Weekend";
    default -> "Invalid day";
};
“Yield” Statement:
When a case needs a full block of statements, yield returns the value of
that block:
String dayType2 = switch (dayOfWeek) {
    case 1, 2, 3, 4, 5 -> {
        System.out.println("Working day");
        yield "Weekday";
    }
    case 6, 7 -> {
        System.out.println("Weekend");
        yield "Weekend";
    }
    default -> "Invalid day";
};
/**
 * Old statement style vs. the new expression style:
 */
// old style with break:
switch (fruit) {
    case APPLE, PEAR:
        print("Common fruit");
        break;
    case PINEAPPLE:
        print("Exotic fruit");
        break;
    default:
        print("Undefined fruit");
}
// switch expression: the switch itself produces the value:
String text = switch (fruit) {
    case APPLE, PEAR -> "Common fruit";
    case PINEAPPLE -> "Exotic fruit";
    default -> "Undefined fruit";
};
print(text);
/**
 * More on yield:
 * https://stackoverflow.com/questions/58049131/what-does-the-new-keyword-yield-mean-in-java-13
 */
Java 15 Features:
1.Text-block:
Text blocks are a new kind of string literals that span multiple lines. They
aim to simplify the task of writing and maintaining strings that span
several lines of source code while avoiding escape sequences.
String oldStyle = "<html>\n" +
        "  <body>\n" +
        "    <p>Hello, world</p>\n" +
        "  </body>\n" +
        "</html>";
String textBlock = """
        <html>
          <body>
            <p>Hello, world</p>
          </body>
        </html>
        """;
Escape Sequences: Escape sequences are still valid within text blocks,
allowing the inclusion of special characters.
/**
* Use cases for TextBlocks (What's New in Java 15 > Text Blocks in
Practice)
* - Simple templating
*/
// Old style concatenation:
String oldJson = "{\n" +
        "  \"name\": \"John\",\n" +
        "  \"age\": 45\n" +
        "}";
print(oldJson);
// Text block: the position of the closing quotes controls the trailing
// newline, and the common leading whitespace is stripped automatically:
String json = """
        {
          "name": "John",
          "age": 45
        }
        """;
print(json);
Java 16 Features
1.Pattern matching for instanceof:
Java 16’s pattern matching for instanceof is a nifty feature that improves
type checking and extraction. Here's a rundown of its key aspects:
What it does:
Combines type checking and casting into a single, more concise and
readable expression.
Benefits:
Syntax:
if (obj instanceof Book book) {
    // 'book' is in scope here, already typed as Book
} else {
    // 'book' is not in scope here
}
Example
// old way
if (o instanceof Book) {
    Book book = (Book) o;
    print(book.getTitle());
}
// new way
if (o instanceof Book book) {
    print(book.getTitle());
}
Record:
Records in Java are a special type of class specifically designed for holding
immutable data. They help reduce boilerplate code and improve
readability and maintainability when dealing with simple data structures.
1. Conciseness:
Unlike traditional classes, records require minimal code to define. You just
specify the data fields (components) in the record declaration, and the
compiler automatically generates essential methods like:
2. Immutability:
Record fields are declared as final, making the data stored within them
unmodifiable after the record is created. This ensures data consistency
and simplifies thread safety concerns.
3. Readability:
4. Reduced Errors:
Overall, records are a valuable tool for Java developers to create concise,
immutable, and readable data structures, leading to cleaner, more
maintainable code.
/**
 * Records are data-only immutable classes (thus have specific use cases).
 * Not suitable for objects that are meant to change state, etc.
 * <p>
 * Use cases: DTOs, API responses, value objects, compound map keys.
 */
record Product(String name, double price) {
    /**
     * A compact constructor runs before the fields are assigned and is the
     * natural place for validation:
     */
    public Product {
        if (price < 0) {
            throw new IllegalArgumentException("price must not be negative");
        }
    }
}
General usage and features of the DateTimeFormatter API in Java 16: This
includes understanding format patterns, creating custom formats, parsing
dates and times, and available formatting options.
For example, Java 16 added day-period support
(DateTimeFormatterBuilder.appendDayPeriodText), which renders times as
"in the morning", "in the afternoon", and so on, per locale:
Map<TextStyle, Locale> styles = Map.of(
        TextStyle.FULL, Locale.US,
        TextStyle.SHORT, Locale.FRENCH,
        TextStyle.NARROW, Locale.GERMAN
);
for (var entry : styles.entrySet()) {
    DateTimeFormatter formatter = new DateTimeFormatterBuilder()
            .appendPattern("hh:mm ")
            .appendDayPeriodText(entry.getKey())
            .toFormatter(entry.getValue());
    String formattedDateTime = LocalTime.now().format(formatter);
    print(formattedDateTime);
}
Java 16 brought some exciting changes to the Stream API, making it even
more powerful and convenient to use. Here are the key highlights:
Stream.toList() gives a direct terminal operation that collects a stream
into an unmodifiable list, replacing the more verbose
collect(Collectors.toList()).
The new mapMulti method is an imperative alternative to flatMap: each input
element can be replaced with zero or more output elements, which avoids
creating an intermediate stream per element.
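Two notable Stream API additions in Java 16 are Stream.toList() and mapMulti; a short sketch of both:

```java
import java.util.List;
import java.util.stream.Stream;

class Java16StreamDemo {
    public static void main(String[] args) {
        // Stream.toList(): shorthand terminal operation, returns an unmodifiable list
        List<Integer> squares = Stream.of(1, 2, 3)
                .map(n -> n * n)
                .toList();
        System.out.println(squares); // [1, 4, 9]

        // mapMulti: push zero or more replacement elements per input element
        List<Integer> expanded = Stream.of(1, 2, 3)
                .<Integer>mapMulti((n, consumer) -> {
                    consumer.accept(n);
                    consumer.accept(-n);
                })
                .toList();
        System.out.println(expanded); // [1, -1, 2, -2, 3, -3]
    }
}
```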
Java 17 Features
1.Sealed classes(Subclassing):
You declare a class or interface as sealed using the sealed keyword. Then,
you use the permits clause to specify a list of classes that are allowed to
extend or implement it. Only these permitted classes can directly inherit,
while all other classes are prohibited.
Benefits:
Rules:
1. A sealed class or interface must declare its permitted subclasses,
either with a permits clause or by defining them in the same source file.
2. A child class MUST be either final, sealed or non-sealed (or the code
won’t compile).
3. A permitted child class MUST actually extend the parent sealed class.
Permitting a class that never uses extends is not allowed.
4. Permitted classes must live close to the sealed class: either in the
same module (if the superclass is in a named module) (see Java 9
modularity), or in the same package (if it is in the unnamed module).
More on point 4:
The motivation is that a sealed class and its (direct) subclasses are tightly
coupled since they must be compiled and maintained together.
If you use modules, you get some additional flexibility, because of the
safety boundaries modules give you.
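A minimal sealed hierarchy sketch (the type names are illustrative):

```java
// A sealed interface: only the three listed classes may implement it.
sealed interface Shape permits Circle, Square, Triangle { }

// Each permitted subclass must be final, sealed, or non-sealed:
final class Circle implements Shape {
    double radius;
}

final class Square implements Shape {
    double side;
}

// non-sealed re-opens the hierarchy below this point.
non-sealed class Triangle implements Shape { }

class SealedDemo {
    public static void main(String[] args) {
        Shape s = new Circle();
        // The compiler knows the full set of subtypes, which also enables
        // exhaustive switch pattern matching in later Java versions.
        System.out.println(s instanceof Circle); // true
    }
}
```

Any class outside the permits list that tries `implements Shape` fails to compile, which is the whole point of sealing.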
Java 18 Features
1.UTF-8 by Default:
Java 18 makes UTF-8 the default character encoding for the platform,
aligning with modern standards and simplifying character handling.
// Problem: before Java 18, APIs such as FileReader, FileWriter and
// String.getBytes() used the platform default charset, which differs
// between operating systems and caused subtle cross-platform bugs.
// Solution before Java 18: always specify the charset (and good luck
// not forgetting it!):
var reader = new FileReader("data.txt", StandardCharsets.UTF_8);
// As of Java 18, UTF-8 is the default, so behavior is identical on
// every platform.
This new API provides a basic web server for serving static files, ideal for
quick prototyping and embedded applications.
Have your static files (HTML, CSS, JavaScript, images, etc.) ready in a
specific directory.
var fileServer = SimpleFileServer.createFileHandler(
        Path.of("/path/to/static/files"));
var server = HttpServer.create(new InetSocketAddress(8000), 0);
server.createContext("/", fileServer);
server.start();
<!DOCTYPE html>
<html>
<head>
</head>
<body>
This page can be served with Java's Simple Web Server using the
"jwebserver" command
</body>
</html>
You should see the default file (usually index.html) from your static files
directory being served.
To stop the server, press Ctrl+C in the terminal where it’s running.
It should output:
Binding to loopback by default. For all interfaces use “-b 0.0.0.0” or “-b ::”.
URL http://127.0.0.1:8000/
/**
 * Java 18 added a HEAD() convenience method to HttpRequest.Builder:
 */
HttpClient client = HttpClient.newHttpClient();
HttpRequest head =
        HttpRequest.newBuilder(URI.create("https://api.github.com/"))
        .HEAD()
        .build();
HttpResponse<Void> response =
        client.send(head, HttpResponse.BodyHandlers.discarding());
print(response);
Java 19 Features
Java 19 shipped mainly preview and incubator features: virtual threads
(preview), pattern matching for switch (preview), record patterns
(preview), and structured concurrency (incubator). See the release notes on
www.oracle.com for details.
Java 20 Features
Java 20 again consisted mainly of preview and incubator features,
continuing the previews of virtual threads, record patterns, and pattern
matching for switch. See the release notes on www.oracle.com for details.
Java 21 Features
1.Virtual Threads:
Virtual threads are lightweight threads scheduled by the JVM rather than
the operating system, which makes it cheap to create many thousands of
them.
var vThread1 = Thread.ofVirtual().start(() -> {
    for (int i = 0; i < 10; i++) {
        System.out.println("Virtual Thread 1: " + i);
    }
});
var vThread2 = Thread.ofVirtual().start(() -> {
    for (int i = 0; i < 10; i++) {
        System.out.println("Virtual Thread 2: " + i);
    }
});
vThread1.join();
vThread2.join();
Output:
This will interleave the outputs from both virtual threads, demonstrating
concurrent execution without the overhead of full OS threads. You might
see something like:
Virtual Thread 1: 0
Virtual Thread 2: 0
Virtual Thread 1: 1
Virtual Thread 2: 1
...
Virtual Thread 1: 9
Virtual Thread 2: 9
// With an executor, each submitted task runs on its own virtual thread:
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    IntStream.range(0, 10_000).forEach(i -> {
        executor.submit(() -> {
            Thread.sleep(Duration.ofSeconds(1));
            return i;
        });
    });
}
Records were introduced as a preview feature in Java 14 and became standard
in Java 16. record is another special type in Java (like enum), and its
purpose is to ease the process of developing classes that act as data
carriers only.
In JDK 21, record patterns and type patterns can be nested to enable a
declarative and composable form of data navigation and processing.
// To create a record:
record Todo(String title, boolean completed) { }
// To create an Object:
var todo = new Todo("Prepare for the interview", true);
System.out.print(todo.title());
System.out.print(todo.completed());
// Nested record patterns deconstruct several levels in one pattern:
record Point(int x, int y) { }
enum Color { RED, GREEN, BLUE }
record ColoredPoint(Point p, Color c) { }
record Rectangle(ColoredPoint upperLeft, ColoredPoint lowerRight) { }
Object r = new Rectangle(new ColoredPoint(new Point(1, 2), Color.RED),
        new ColoredPoint(new Point(3, 4), Color.BLUE));
if (r instanceof Rectangle(ColoredPoint(Point(var x, var y), var c),
        ColoredPoint lr)) {
    System.out.println(c);
}
2.Sequenced collections:
In JDK 21, a new set of collection interfaces are introduced to enhance the
experience of using collections. For example, if one needs to get a reverse
order of elements from a collection, depending on which collection is in
use, it can be tedious. There can be inconsistencies retrieving the
encounter order depending on which collection is being used; for example,
SortedSet implements one, but HashSet doesn't, making it cumbersome
to achieve this on different data sets.
// new method
SequencedCollection<E> reversed();
void addFirst(E);
void addLast(E);
E getFirst();
E getLast();
E removeFirst();
E removeLast();
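The new SequencedCollection methods in action (List extends the new interface in Java 21, so an ordinary ArrayList gets them all):

```java
import java.util.ArrayList;
import java.util.List;

class SequencedDemo {
    public static void main(String[] args) {
        List<String> list = new ArrayList<>(List.of("a", "b", "c"));

        list.addFirst("start");              // [start, a, b, c]
        list.addLast("end");                 // [start, a, b, c, end]

        System.out.println(list.getFirst()); // start
        System.out.println(list.getLast());  // end

        // reversed() gives the reverse encounter order with the same API
        // for every sequenced collection (List, Deque, LinkedHashSet, ...):
        System.out.println(list.reversed()); // [end, c, b, a, start]
    }
}
```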
3.String templates:
// As of Java 21 (preview feature; requires --enable-preview):
String name = "Ajay";
String greeting = STR."Hello \{name}";
System.out.println(greeting);
In this case, the second line is the expression, and upon invoking, it
should render Hello Ajay. Furthermore, in cases where there is a chance
of illegal Strings—for example, SQL statements or HTML that can cause
security issues—the template rules only allow escaped quotes and no
illegal entities in HTML documents.
Kafka Broker: Kafka brokers are the core components of the Kafka
system. They store and manage the data that is produced by the
producers, and make it available for consumption by the consumers.
Kafka brokers are distributed and can be scaled horizontally to handle
large amounts of data.
Topics: Topics are logical channels or streams of data in Kafka. Each topic
is partitioned into one or more partitions, and each partition is replicated
across multiple Kafka brokers for fault tolerance. Producers publish data
to topics, and consumers subscribe to one or more topics to receive the
data.
One way to persist data from a Kafka topic is to use a Kafka consumer to
read data from the topic and write it to a database or file system. This can
be done using a Kafka consumer application that reads data from the
topic and writes it to a file or database.
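A hedged sketch of that consumer approach (the topic name, group id, and the saveToDatabase helper are hypothetical; the client calls are the standard kafka-clients consumer API):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

class TopicToDatabase {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "persist-group");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("orders")); // hypothetical topic name
            while (true) {
                ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    saveToDatabase(record.key(), record.value());
                }
                // Commit offsets only after the writes succeed, so a crash
                // re-delivers unpersisted messages instead of losing them.
                consumer.commitSync();
            }
        }
    }

    static void saveToDatabase(String key, String value) {
        // placeholder: insert into a table via JDBC/JPA, write to a file, etc.
    }
}
```

Note the ordering: persisting first and committing afterwards gives at-least-once delivery, which is usually what you want for this pattern.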
Another way to persist data from a Kafka topic is to use Kafka Connect,
which is a framework for streaming data between Kafka and other data
systems. Kafka Connect can be used to move data from a Kafka topic to a
database or other storage system, and can also be used to move data
from a database or other storage system to a Kafka topic.
Additionally, some databases have their own Kafka connectors that allow
you to persist data directly to the database from a Kafka topic. For
example, Confluent provides a Kafka connector for PostgreSQL that can
be used to write data from Kafka to a PostgreSQL database.
If a consumer fails or crashes, and then comes alive after some time, it
can continue consuming messages from the last committed offset for each
partition it is subscribed to. When the consumer restarts, it retrieves the
last committed offset for each partition from the "__consumer_offsets"
topic and starts consuming messages from that point.
Each consumer in a consumer group keeps track of its own offset, which
represents the position of the last message it has processed in each
partition it is consuming. These offsets are periodically committed to the
"__consumer_offsets" topic, which serves as a centralized store for the
committed offsets of all consumers in the group.
Install Kafka: First, you need to download and install Kafka. You can
download Kafka from the Apache Kafka website or from a cloud provider
like Confluent or Amazon Web Services (AWS).
These are the basic steps to configure Kafka. Depending on your use case
and requirements, you may need to configure additional settings or use
additional components, such as Kafka Connect, Kafka Streams, or Kafka
Schema Registry.
Consider the retention period of data: If you need to retain data for a long
period of time, you may want to increase the replication factor to ensure
that the data is not lost in the event of a broker failure or network outage.
@Configuration
@EnableKafka
public class KafkaProducerConfig {
    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "localhost:9092");
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(config);
    }
    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
This configuration class will enable the Kafka support in your Spring Boot
application and it also creates a KafkaTemplate bean that you can use to
send messages to a Kafka topic.
Also, you should have Kafka and ZooKeeper running on your local machine
or on the server.
Here are some key differences between containers and virtual machines:
In summary, containers and virtual machines are both useful tools for
running applications in isolated environments, but they have different
strengths and weaknesses. Containers are more lightweight, efficient, and
portable, while VMs provide stronger isolation and security but require
more resources. The choice between the two will depend on the specific
requirements of the application and the environment in which it will be
deployed.
A pod is a logical host for one or more containers, and it provides a shared
environment for those containers to run in. Containers within a pod can
communicate with each other using local hostnames and ports, and they
can share the same storage volumes.
Pods are usually not deployed directly in Kubernetes, but rather as part of
a higher-level deployment or replica set. These higher-level objects define
the desired state of the pods and manage their creation, scaling, and
termination.
To write a JUnit test for a static method, you can use the @Test
annotation and call the static method directly from the test method.
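A minimal sketch with JUnit 5 (the class and method names are hypothetical):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Class under test, with a static utility method:
class MathUtils {
    static int square(int n) {
        return n * n;
    }
}

// No mocking needed; the test simply calls the static method directly.
class MathUtilsTest {
    @Test
    void squareReturnsProductOfNumberWithItself() {
        assertEquals(25, MathUtils.square(5));
        assertEquals(0, MathUtils.square(0));
    }
}
```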
Here are a few reasons why you might be getting this exception:
The class is final: Mockito cannot create mocks of final classes. You can
either remove the "final" modifier from the class, or use a different
mocking library that supports mocking final classes, such as PowerMock or
JMockit.
You are passing an instance instead of a class literal: Mockito.mock()
expects the type to mock, for example Mockito.mock(MyService.class), not an
already-constructed object. Interfaces themselves are perfectly mockable.
In general, if you are getting the "Mockito cannot mock this class"
exception, it is a sign that you may need to refactor your code or your
test in order to make it more testable. You may also want to consider
using a different mocking library that supports mocking the specific class
or scenario you are working with.
Binary search trees are useful data structures for storing and searching
large sets of data efficiently. They are often used in computer science
applications such as database indexing, file system organization, and
network routing algorithms.
Inserting a new value into a binary search tree involves traversing the
tree from the root node to a leaf node, comparing the value to be inserted
with the value of each node along the way, and choosing the appropriate
child node to continue the traversal. Searching for a value in a binary
search tree follows a similar process, starting at the root node and
traversing the tree until the desired value is found or it is determined that
the value is not in the tree.
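The insert and search traversals described above can be sketched as:

```java
// A minimal binary search tree with insert and search, illustrating the
// root-to-leaf traversal: go left for smaller values, right for larger.
class BinarySearchTree {
    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    Node root;

    void insert(int value) {
        root = insert(root, value);
    }

    private Node insert(Node node, int value) {
        if (node == null) return new Node(value);        // free leaf position found
        if (value < node.value) node.left = insert(node.left, value);
        else if (value > node.value) node.right = insert(node.right, value);
        return node;                                      // duplicates are ignored
    }

    // Same traversal: compare at each node and descend until found or null.
    boolean search(int value) {
        Node current = root;
        while (current != null) {
            if (value == current.value) return true;
            current = value < current.value ? current.left : current.right;
        }
        return false;
    }

    public static void main(String[] args) {
        BinarySearchTree tree = new BinarySearchTree();
        for (int v : new int[] {50, 30, 70, 20, 40}) tree.insert(v);
        System.out.println(tree.search(40)); // true
        System.out.println(tree.search(99)); // false
    }
}
```

On a balanced tree both operations visit O(log n) nodes, which is what makes BSTs attractive for the indexing and routing use cases mentioned above.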