Spring Retryable – Retry to consume Restful Services

Spring Retryable – Retry to consume Restful Services:

Generally, our application might face a 404 Not Found error whenever a REST endpoint is not available. This response is returned by the application server or web server.

But to make processing more robust and less prone to failure, it is sometimes best to retry the same endpoint, as the call might succeed on a subsequent attempt.

This is where Spring Retry comes to our rescue!

What is Spring Retry?
Spring Retry helps us automatically retry an operation – most commonly a call to a REST endpoint – after the operation has failed.

Spring provides an annotation – @Retryable – to achieve the above operation.

Most commonly used elements of the @Retryable annotation:

backoff – specifies the backoff properties (delay) for retrying the operation
exclude – exception types that must not trigger a retry
include – exception types that must trigger a retry
maxAttempts – the maximum number of retry attempts to be made
value – the exceptions that are retryable

@Retryable Annotation Elements


We need to add the @EnableRetry annotation to our Spring Boot main class to indicate that this application uses Spring Retry.
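A minimal sketch of such a main class (the class and package names are illustrative):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.retry.annotation.EnableRetry;

// @EnableRetry switches on Spring Retry processing for the whole application
@SpringBootApplication
@EnableRetry
public class SpringRetryableApplication {

    public static void main(String[] args) {
        SpringApplication.run(SpringRetryableApplication.class, args);
    }
}
```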

Our Project Structure:


First, let us write a simple REST endpoint to perform the retry operation.

We are going to write code that throws an ArithmeticException; once the exception is thrown, we make up to 4 retry attempts against the endpoint, with a time gap of 5 seconds between attempts.
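A sketch of such an endpoint (the controller and path names are illustrative):

```java
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Retryable;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RetryController {

    // Retry on any exception: up to 4 attempts, 5 seconds apart
    @GetMapping("/retry")
    @Retryable(value = Exception.class, maxAttempts = 4,
               backoff = @Backoff(delay = 5000))
    public String retry() {
        System.out.println("Retry attempt made...");
        int result = 10 / 0; // deliberately throws ArithmeticException
        return "Result: " + result;
    }
}
```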

From the above code,

We have used the @Retryable annotation to indicate that the retry operation is enabled for this method.

value = Exception.class –> whenever an exception is thrown, the retry operation is performed with maxAttempts = 4 and a delay of 5 seconds, i.e. up to 4 retries are made, 5 seconds apart.


Now let us write an endpoint that excludes retries when an ArithmeticException occurs.
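A sketch of such a method, inside the same controller (the path name is illustrative):

```java
// Retries on any exception EXCEPT ArithmeticException
@GetMapping("/retry-exclude")
@Retryable(value = Exception.class, exclude = ArithmeticException.class,
           maxAttempts = 4, backoff = @Backoff(delay = 5000))
public String retryExclude() {
    int result = 10 / 0; // ArithmeticException -> no retry is performed
    return "Result: " + result;
}
```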

In the above code we have specified exclude = ArithmeticException.class, so whenever an ArithmeticException occurs, the retry will not be executed.


So, can we handle the exception after all retries are exhausted?




Yes – we have the @Recover annotation, which marks a method to be executed once all the retries are made and the endpoint still cannot be executed successfully.

Now let us see an example with Recover,
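A sketch of a recover method in the same controller (the message text is illustrative):

```java
import org.springframework.retry.annotation.Recover;

// Invoked after all 4 attempts of /retry have failed.
// The first parameter is the exception that caused the failure,
// and the return type must match the @Retryable method's return type.
@Recover
public String recover(Exception e) {
    return "All retries exhausted, recovered from: " + e.getMessage();
}
```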

So what will this recover method do?

Once all the attempts for the /retry endpoint are exhausted, this recover block of code gets executed.

IMPORTANT: The return type of the /retry endpoint method and the @Recover method must be the same in order for the recovery to be executed!


Complete Code:


Code can be downloaded at : Spring-Retryable

Java 8 Streams – Streams in Java With Examples

Java 8 Streams – Streams in Java With Examples

What are Streams?

  •  A sequence of elements supporting multiple operations.


  • A stream is not a data structure and does not store elements.
  • The source of a stream remains unmodified after operations are performed.
  • Streams perform operations over a collection but do not actually modify the underlying collection.
  • Laziness: evaluation happens only when required (when a terminal operation is identified).
  • Elements are visited only once; to revisit them we need a new stream. Reusing a consumed stream throws IllegalStateException.


  • Intermediate operations are not evaluated until a terminal operation is invoked.
  • Each intermediate operation creates and returns a new stream.
  • When the terminal operation is invoked, traversal begins and the operations are performed one by one.
  • Parallel streams do not evaluate elements one by one but simultaneously, based on the available threads.

Intermediate and Terminal Operations:

  • Every stream pipeline consists of intermediate and terminal operations.
  • There can be any number of intermediate operations but only one terminal operation.
  • A stream will not process anything until the terminal operation is reached; only then does stream processing start.
  • It is mandatory for a stream to have a terminal operation, or the stream will not be processed.
Intermediate Operations: filter, map, flatMap, distinct, sorted, limit, skip, peek
Terminal Operations: toArray(), collect(), count(), reduce(), forEach(), forEachOrdered(), min(), max()



Let us see some examples of Intermediate and Terminal operations.

Let us create a main method to populate the objects.
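Since the original listing is not shown here, a minimal, self-contained sketch that produces the output below (the Student class shape is inferred from the printed output):

```java
import java.util.Arrays;
import java.util.List;

public class StudentDemo {

    static class Student {
        String studentName;
        List<String> studentSubjects;

        Student(String studentName, List<String> studentSubjects) {
            this.studentName = studentName;
            this.studentSubjects = studentSubjects;
        }

        @Override
        public String toString() {
            return "Student [studentName=" + studentName
                    + ", studentSubjects=" + studentSubjects + "]";
        }
    }

    // Builds the sample list used by the stream examples below
    static List<Student> studList() {
        return Arrays.asList(
                new Student("Alpha Student",
                        Arrays.asList("Computers", "Information Technology")),
                new Student("Beta Student",
                        Arrays.asList("Java", "Spring")));
    }

    public static void main(String[] args) {
        System.out.println("==================== STUDLIST========================");
        System.out.println(studList());
    }
}
```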


==================== STUDLIST========================
[Student [studentName=Alpha Student, studentSubjects=[Computers, Information Technology]], Student [studentName=Beta Student, studentSubjects=[Java, Spring]]]


Map vs FlatMap:
map produces exactly one output value for each input value.

flatMap takes one input value and produces an arbitrary number of output values; in short, it helps to flatten a Collection<Collection<T>> into a Collection<T>.

Now let us see an example,
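A self-contained sketch of the difference (the sample data is illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MapFlatMapDemo {

    // map: one output per input - here, the length of each word
    static List<Integer> lengths(List<String> words) {
        return words.stream()
                .map(String::length)
                .collect(Collectors.toList());
    }

    // flatMap: flattens a List<List<String>> into a List<String>
    static List<String> flatten(List<List<String>> nested) {
        return nested.stream()
                .flatMap(List::stream)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(lengths(Arrays.asList("Java", "Spring")));   // [4, 6]
        System.out.println(flatten(Arrays.asList(
                Arrays.asList("Computers", "Information Technology"),
                Arrays.asList("Java", "Spring"))));
    }
}
```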


distinct – returns the unique values from the collection.


limit – restricts the result to a certain number of records. In the below example we limit the total number of records to 3.


peek – when we have to view values in the middle of a chain of stream operations, we can use peek.


skip – skips the specified number of elements. In the below example, the first 3 elements are skipped and values are taken from the 4th element onward.
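The distinct, limit, peek and skip operations described above can be sketched together on a simple list of numbers (the data is illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class SliceOpsDemo {

    static final List<Integer> NUMS = Arrays.asList(1, 2, 2, 3, 4, 4, 5, 6);

    static List<Integer> distinctValues() {   // unique values only
        return NUMS.stream().distinct().collect(Collectors.toList());
    }

    static List<Integer> firstThree() {       // limit the result to 3 records
        return NUMS.stream().limit(3).collect(Collectors.toList());
    }

    static List<Integer> skipThree() {        // skip the first 3 elements
        return NUMS.stream().skip(3).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(distinctValues()); // [1, 2, 3, 4, 5, 6]
        System.out.println(firstThree());     // [1, 2, 2]
        System.out.println(skipThree());      // [3, 4, 4, 5, 6]

        // peek lets us observe elements mid-pipeline without consuming them
        List<Integer> doubled = NUMS.stream()
                .peek(n -> System.out.println("before map: " + n))
                .map(n -> n * 2)
                .collect(Collectors.toList());
        System.out.println(doubled);          // [2, 4, 4, 6, 8, 8, 10, 12]
    }
}
```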


sorted – we can sort using streams, in both reverse and natural order. The below example shows natural-order and reverse sorting.
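A sketch of natural-order and reverse sorting (the names are illustrative):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class SortDemo {

    // Natural (alphabetical) order
    static List<String> natural(List<String> names) {
        return names.stream().sorted().collect(Collectors.toList());
    }

    // Reverse order via a reversed comparator
    static List<String> reversed(List<String> names) {
        return names.stream()
                .sorted(Comparator.reverseOrder())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("Beta", "Charlie", "Alpha");
        System.out.println(natural(names));   // [Alpha, Beta, Charlie]
        System.out.println(reversed(names));  // [Charlie, Beta, Alpha]
    }
}
```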


count – counts the number of elements in the collection.


Min And Max:
To find the minimum and maximum elements of a collection. Below are a few examples.
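count, min and max can be sketched as follows (min and max return an Optional, so the sample list is assumed non-empty):

```java
import java.util.Arrays;
import java.util.List;

public class CountMinMaxDemo {

    static final List<Integer> NUMS = Arrays.asList(5, 3, 9, 1, 7);

    static long count() {
        return NUMS.stream().count();
    }

    static int min() {
        // min() returns an Optional; .get() is safe here because NUMS is non-empty
        return NUMS.stream().min(Integer::compare).get();
    }

    static int max() {
        return NUMS.stream().max(Integer::compare).get();
    }

    public static void main(String[] args) {
        System.out.println(count()); // 5
        System.out.println(min());   // 1
        System.out.println(max());   // 9
    }
}
```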



forEach vs forEachOrdered:
Both are used for iterating over a stream. The difference between them:

forEach – does not guarantee encounter order on parallel streams
forEachOrdered – always maintains encounter order


reduce – used to perform an operation over multiple values and return a single result. In the below example we have used addition.
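A sketch of reduce using addition (the data is illustrative):

```java
import java.util.Arrays;
import java.util.List;

public class ReduceDemo {

    // Combines all values into a single result, starting from the identity 0
    static int sum(List<Integer> nums) {
        return nums.stream().reduce(0, Integer::sum);
    }

    public static void main(String[] args) {
        System.out.println(sum(Arrays.asList(1, 2, 3, 4, 5))); // 15
    }
}
```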


Filter, FindAny, FindFirst, AnyMatch and AllMatch:
filter – similar to an if condition; selects the elements that match a given condition
findAny – returns an Optional, which may or may not contain a value
findFirst – returns the first element that matches the condition
anyMatch – returns true if at least one element matches the condition
allMatch – returns true if all elements in the stream match (satisfy the condition)
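These operations can be sketched together (the data and thresholds are illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class MatchDemo {

    static final List<Integer> NUMS = Arrays.asList(2, 4, 6, 7);

    // filter: keep only elements satisfying the condition
    static List<Integer> evens() {
        return NUMS.stream().filter(n -> n % 2 == 0).collect(Collectors.toList());
    }

    // findFirst: the first element matching the condition (wrapped in an Optional)
    static int firstAbove(int k) {
        return NUMS.stream().filter(n -> n > k).findFirst().get();
    }

    // anyMatch: true if at least one element matches
    static boolean anyAbove(int k) {
        return NUMS.stream().anyMatch(n -> n > k);
    }

    // allMatch: true only if every element matches
    static boolean allEven() {
        return NUMS.stream().allMatch(n -> n % 2 == 0);
    }

    public static void main(String[] args) {
        System.out.println(evens());        // [2, 4, 6]
        System.out.println(firstAbove(3));  // 4
        // findAny behaves like findFirst here, but may pick any element on parallel streams
        System.out.println(NUMS.stream().filter(n -> n > 3).findAny().isPresent()); // true
        System.out.println(anyAbove(6));    // true
        System.out.println(allEven());      // false, because 7 is odd
    }
}
```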


 Complete Code:




Integrate Shell with SpringBoot – SpringBoot Shell

Integrate Shell with SpringBoot – SpringBoot Shell:

What is a Shell?
A shell is an environment where the user provides input, the program executes it, and the result is returned. We can run a shell with commands or scripts.

Spring Boot provides a way to integrate this shell environment within our application. We can easily build a shell application just by including a dependency which brings in all the Spring Shell jars. We can also declare our own commands in the application and execute them.

Dependency for Shell in SpringBoot:
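The dependency looks roughly like the following (the version shown is an assumption; use the release matching your Spring Boot version):

```xml
<dependency>
    <groupId>org.springframework.shell</groupId>
    <artifactId>spring-shell-starter</artifactId>
    <version>2.0.1.RELEASE</version>
</dependency>
```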

The above dependency will download the Spring Shell jars.

Let us create a main class for the Spring Boot application.

Now let us create our shell class,

@ShellComponent is a stereotype annotation, similar to @Component, @Service, etc., which makes Spring Boot understand that this class is a shell component.

Now let us write a method for shell to execute,

@ShellMethod – this annotation is used to explicitly define a custom shell command.
We have written a method which takes a String as a parameter and returns the String “Shell Command Executed” followed by the parameter String.
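A sketch of the shell class with this method (the class and command names are illustrative):

```java
import org.springframework.shell.standard.ShellComponent;
import org.springframework.shell.standard.ShellMethod;

@ShellComponent
public class ShellCommands {

    // Invoked from the shell as: display Success
    @ShellMethod(value = "Display the given text")
    public String display(String text) {
        return "Shell Command Executed " + text;
    }
}
```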

Let us check this by running our SpringBoot application,

Once we deploy the application, we get a CLI for entering shell commands in our console. When we type help,


We can see the available built-in commands and the method which we have created.

display – method name
value – Description for the method which we have provided along with @ShellMethod annotation.

Now let us execute the Shell Command,

We have to give the method name – display – followed by the parameter. Here we gave display with the parameter ‘Success’ and got the output – Shell Command Executed Success.

Let us see another example to perform addition of 2 numbers,

Here we have written a method that takes 2 integer parameters and returns the sum of the 2 values.
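A sketch of such a method, inside the same shell component:

```java
// Invoked as: add 1 2  (positional) or  add --b 2 --a 1  (named)
@ShellMethod(value = "Add two numbers")
public int add(int a, int b) {
    return a + b;
}
```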

Let us try to execute our shell command,

We have executed our add method in 2 different ways.

First we used positional parameters: without giving the parameter names, we directly passed the values for a and b. If we have to pass the values by parameter name, we can use the second way of executing the method:

add --b 2 --a 1

Now let us see another method that takes a String parameter with a default value.

We have to declare it with a @ShellOption value.
Here we have mentioned a defaultValue for the parameter s, so if we execute this command even without any parameter, it will return the default value.
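A sketch of such a method (Spring Shell derives the command name default-display from the camelCase method name; the default text is illustrative):

```java
import org.springframework.shell.standard.ShellOption;

// Invoked as: default-display         -> returns the default value
//             default-display Hello   -> returns Hello
@ShellMethod(value = "Display text with a default value")
public String defaultDisplay(@ShellOption(defaultValue = "Default Text") String s) {
    return s;
}
```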

Here we called default-display without any parameter, and it returned the default value. With a parameter, it returns the parameter value instead of the default value.

We can also perform validation for the parameters. Let us see an example below.

Here we have 2 methods: one checks that an integer is within a minimum of 2 and a maximum of 4, and another checks that a String is not null.
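A sketch of such validated methods, using Bean Validation annotations (the exact constraints are an assumption based on the description above):

```java
import javax.validation.constraints.Max;
import javax.validation.constraints.Min;
import javax.validation.constraints.NotNull;

// Rejects values outside the range 2..4 before the method runs
@ShellMethod(value = "Validate an integer between 2 and 4")
public String checkNumber(@Min(2) @Max(4) int number) {
    return "Valid number: " + number;
}

// Rejects a null argument
@ShellMethod(value = "Validate that the text is not null")
public String checkText(@NotNull String text) {
    return "Valid text: " + text;
}
```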

Now let us execute the commands in shell,

From the above screenshot, we can confirm the validation is done.

There is also a way to change the text displayed in the console – shell:> – we can replace this prompt. Let us see how to change it.

We need to implement the PromptProvider interface and override its method – getPrompt().

We return a new AttributedString(prompt text, style).

In our code we have used the String – “ShellCommandPrompt:>”. Now let us run our Spring Boot application and see our changes,
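A sketch of the prompt configuration (the class name and colour are illustrative):

```java
import org.jline.utils.AttributedString;
import org.jline.utils.AttributedStyle;
import org.springframework.shell.jline.PromptProvider;
import org.springframework.stereotype.Component;

@Component
public class ShellPromptConfig implements PromptProvider {

    @Override
    public AttributedString getPrompt() {
        // Replaces the default shell:> prompt
        return new AttributedString("ShellCommandPrompt:>",
                AttributedStyle.DEFAULT.foreground(AttributedStyle.YELLOW));
    }
}
```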

The shell:> prompt is changed to ShellCommandPrompt:> by our configuration.

Please find the complete code below,






What happens if we do not override hashcode() and equals() in hashmap?

What happens if we do not override hashcode() and equals() in hashmap?

In this post, let us understand what equals() and hashCode() are used for, why it is important to override them, and what happens if we don't override them.

To understand this post, it is expected to have knowledge of Internal Working of HashMap.

If you are not aware of the internal working of HashMap, let us have a quick overview.

Important Points of HashMap:

  • HashMap has Key and Value pairs.
  • HashMap is not Synchronized (i.e. Not Thread Safe)
  • HashMap has no guarantees as to the order of the map
  • HashMap does not guarantee that the order will remain constant over time.
  • HashMap allows only one null key, which is always stored at index 0 of the bucket array
  • HashMap can have multiple null values
  • HashMap can be Synchronized externally using Collections.synchronizedMap(HashMap)

Internal Working of HashMap: (Quick Summary)
So How does HashMap works Internally?
HashMap has the implementation of Map Interface.  HashMap has key and value pairs.

The default initial capacity of HashMap is 16, whereas the default load factor is 0.75.

What is Initial Capacity and LoadFactor?
The initial capacity is the number of buckets in the hashtable and the load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased.

When we try to put a key and value into a HashMap, the key's hashCode() is used first to find the bucket where the entry should be inserted; then equals() checks whether an equal key already exists in that bucket. If an equal key is found, the existing value in the bucket is overwritten.
In short, equals() helps us identify whether the key is unique, and hashCode() helps us identify the bucket in which the value has to be stored.

Now Let us see this with an Example,

For the below Example, we are going to create Employee class and another Main class.

We have created an Employee class, Now let us create a main method

equals() and hashcode() – Not Overridden:

In the above code, we haven’t implemented the equals() and hashCode() methods. Let us see what output our code gives,
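Since the original listing is not shown here, the behaviour can be sketched with a minimal, self-contained version (the class and field names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class NoOverrideDemo {

    // equals() and hashCode() are NOT overridden
    static class Employee {
        String name;
        Employee(String name) { this.name = name; }
    }

    static int mapSize() {
        Map<Employee, String> map = new HashMap<>();
        Employee e1 = new Employee("Alpha");
        Employee e2 = new Employee("Alpha"); // logically the same employee
        map.put(e1, "One");
        map.put(e2, "Two");
        return map.size();
    }

    public static void main(String[] args) {
        Employee e1 = new Employee("Alpha");
        Employee e2 = new Employee("Alpha");
        System.out.println("Both Objects are Equal: " + e1.equals(e2)); // false
        System.out.println("Map size: " + mapSize());  // 2 - duplicate keys!
    }
}
```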

Though both objects are logically equal, since we haven’t overridden equals(), the map considers both objects as unique keys. We also haven’t overridden hashCode(), so even when both objects hold the same data, their hash codes are different.

How it works without Overriding equals() and hashcode()?

From the output: first the two objects are compared, and (even though they are logically equal, since we have not overridden equals()) they are reported as not equal, so they are treated as unique keys, each with its own value.

Because the hash codes differ, the entries go to different buckets – duplicate entries for the logically same object.

Override equals() but not hashcode():

Now let us override only the equals() method.


Now, after overriding the equals() method, let us slightly modify our main method.

After overriding only the equals() method, we can see both objects are compared and identified as equal (Both Objects are Equal: true).

But here the problem is: though both objects are equal, their hash codes are different, so the two objects will be stored in different buckets.

Only if both objects are equal and point to the same bucket will the value be overridden.
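A sketch with only equals() overridden (names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class EqualsOnlyDemo {

    // Only equals() is overridden; hashCode() is inherited from Object
    static class Employee {
        String name;
        Employee(String name) { this.name = name; }

        @Override
        public boolean equals(Object o) {
            return o instanceof Employee && Objects.equals(name, ((Employee) o).name);
        }
    }

    static int mapSize() {
        Map<Employee, String> map = new HashMap<>();
        map.put(new Employee("Alpha"), "One");
        // Equal key, but a different (identity) hash code -> stored separately
        map.put(new Employee("Alpha"), "Two");
        return map.size();
    }

    public static void main(String[] args) {
        System.out.println(new Employee("Alpha").equals(new Employee("Alpha"))); // true
        System.out.println("Map size: " + mapSize()); // 2
    }
}
```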

Override hashcode() but not equals():

When we run the code,

Since we have overridden only hashCode() and not equals() –> the object comparison returns false, which means the objects are treated as unique. So even though the hash code points to the same bucket, the objects are considered unique keys and both values are stored in that bucket.
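A sketch with only hashCode() overridden:

```java
import java.util.HashMap;
import java.util.Map;

public class HashCodeOnlyDemo {

    // Only hashCode() is overridden; equals() is inherited from Object
    static class Employee {
        String name;
        Employee(String name) { this.name = name; }

        @Override
        public int hashCode() { return name.hashCode(); }
    }

    static int mapSize() {
        Map<Employee, String> map = new HashMap<>();
        map.put(new Employee("Alpha"), "One");
        // Same hash code -> same bucket, but equals() is false -> both are kept
        map.put(new Employee("Alpha"), "Two");
        return map.size();
    }

    public static void main(String[] args) {
        System.out.println("Map size: " + mapSize()); // 2 - both stored in the same bucket
    }
}
```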

Override both equals() and hashcode():

Now let us run the above code and see,

Since we have overridden both equals() and hashCode(), when inserting into the map it identifies that both objects are equal and have the same hash code. So when inserting into the bucket, only one entry is kept; that is why we have only one element in the HashMap (Key is: Employee@13665 Value is: Two).
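A sketch with both methods overridden consistently:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class BothOverriddenDemo {

    // Both equals() and hashCode() are overridden, based on the same field
    static class Employee {
        String name;
        Employee(String name) { this.name = name; }

        @Override
        public boolean equals(Object o) {
            return o instanceof Employee && Objects.equals(name, ((Employee) o).name);
        }

        @Override
        public int hashCode() { return name.hashCode(); }
    }

    static Map<Employee, String> buildMap() {
        Map<Employee, String> map = new HashMap<>();
        map.put(new Employee("Alpha"), "One");
        map.put(new Employee("Alpha"), "Two"); // equal key -> "One" is replaced
        return map;
    }

    public static void main(String[] args) {
        Map<Employee, String> map = buildMap();
        System.out.println("Map size: " + map.size());                  // 1
        System.out.println("Value: " + map.get(new Employee("Alpha"))); // Two
    }
}
```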

Complete Code:


How does hashmap work Internally | Internal Working of HashMap

How does hashmap work Internally – Explained:

In this post, let us see how hashmap works internally – How the elements are added and retrieved from buckets.

Features of HashMap:

  • Implementation of Map Interface – With Key and Value pairs
  • No guaranteed order of the map; the order can also change over time
  • Accepts one null key and multiple null values
  • Keys should be unique and cannot have duplicates
  • Unsynchronized i.e. Not Thread Safe
  • Initial capacity is 16 and load factor is 0.75
  • Can be synchronized externally using Collections.synchronizedMap(hashMap)

What is Initial Capacity and Load factor?

Initial capacity is the number of buckets in the hash table; by default, HashMap provides 16 buckets when created, and we can define the number of buckets required when creating the HashMap.

Load factor decides how much of the capacity should be filled before the size of the HashMap is increased; i.e. when a certain proportion of the buckets is populated, the HashMap is resized.


HashMap Creation:

  • public HashMap()
  • public HashMap(int initialCapacity)
  • public HashMap(int initialCapacity, float loadFactor)
  • public HashMap(Map<? extends K, ? extends V> m) –> creates a new HashMap with the mappings of the specified Map.

To understand how hashmap works internally, let us consider we have an employee model class with a variable employee name,

Let us create employee objects, put them as keys in a HashMap, and see how they get inserted into the HashMap.
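A minimal sketch of this setup (assuming Employee overrides equals() and hashCode() based on the employee name, as the bucket discussion below requires):

```java
import java.util.HashMap;
import java.util.Objects;

public class HashMapInternalsDemo {

    static class Employee {
        String employeeName;
        Employee(String employeeName) { this.employeeName = employeeName; }

        @Override
        public boolean equals(Object o) {
            return o instanceof Employee
                    && Objects.equals(employeeName, ((Employee) o).employeeName);
        }

        @Override
        public int hashCode() { return employeeName.hashCode(); }
    }

    // Three employee objects, inserted with the Employee object as key
    static HashMap<Employee, String> buildMap() {
        Employee e1 = new Employee("Alpha");
        Employee e2 = new Employee("Beta");
        Employee e3 = new Employee("Charlie");

        HashMap<Employee, String> hm = new HashMap<>();
        hm.put(e1, "One");
        hm.put(e2, "Two");
        hm.put(e3, "Three");
        return hm;
    }

    public static void main(String[] args) {
        System.out.println(buildMap().get(new Employee("Charlie"))); // Three
    }
}
```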


From the above,

We have created 3 employee objects – e1, e2 and e3 with names Alpha, Beta and Charlie

Then we have put these values in hashmap with employee object as Key

HashMap<Employee, String> hm = new HashMap<Employee, String>();

hm.put(e1, "One");
hm.put(e2, "Two");
hm.put(e3, "Three");

Now that we have inserted the values into the HashMap, we can see how they are populated in the buckets.

For our example, let us assume the hash code values are:

e1 – 756475, which maps to bucket 2
e2 – 897865, which maps to bucket 14
e3 – 756909, which maps to bucket 2 again

First the map checks whether the bucket already contains any entries. Since bucket 2 is empty, the entry is inserted there.
The node stores the hash code first, i.e. 756475, then the key of the entry – here e1 – then the value "One", and finally the pointer to the next node, which for now is null, as no other element is present in the bucket.

Second, we have the entry with key e2 and value "Two". The map checks bucket 14 and, since no value is already present, populates the entry in bucket 14:
the hash code of e2 – 897865, then the key, then the value, followed by the next-node pointer, which is null for now.

Third, we have the entry with key e3 and value "Three". The map checks bucket 2, which already has a value in it.

Here comes the key role of the equals() method: it checks whether the key e3 is unique compared to e1 (e3.equals(e1)). It returns false, as the two objects are not equal. Then the next pointer, which was set to null when the first element was inserted, is made to point to the new element, e3.

In the same bucket, as the second entry, e3 is stored with hash code 756909, key e3, value "Three", and its next pointer kept as null until another value arrives.

The e1 entry, whose next pointer was null, now points to the e3 entry.


What happens when we insert duplicate Key?

Here we have created another employee object e4, with same value as of e1.

Employee e4 = new Employee("Alpha");

hm.put(e4, "Four");

Now, as we discussed, the map calculates the hash code for the object e4. The hash code is 756475 and it points to bucket 2.

But bucket 2 already has records in it. So the map first checks the hash code; if the hash code of the record being inserted matches an existing record's hash code, it then compares the keys. This is where equals() plays an important role.

In this case, e1.equals(e4) returns true, as both objects are equal. The value stored for e1 is then replaced with the value of e4.

So now the first entry in the bucket holds the e4 value.

Important Keypoints:

  • HashMap uses hashcode() to identify the bucket where the record to be inserted
  • If a record is already available in the bucket, the key is checked for uniqueness with the equals() method; if it returns true, the record's value is replaced, and if false, a new record is inserted.
  • When more than one record is available in the bucket, it will have a next node which will point to the next available record in the bucket.

How does get() method work in HashMap?

Now let us try to retrieve an element from hashmap, let us retrieve e3,


Now the HashMap uses hashCode() to identify the bucket in which the element might be present. Next it goes to that bucket (in this case, bucket 2) and finds that more than one record is available.

Now it compares the keys; if the keys match, the value is retrieved.



So when we try to get e3, the map first calculates the hash code:

hashCode() for key e3: 756909, and based on indexing, the bucket is calculated to be bucket 2.

But bucket 2 already has more than one record. The calculated hash code is compared with the records stored in bucket 2. First, 756909 is compared with the first record's hash code, 756475. They are not equal, so the next pointer is checked. Here the next pointer refers to another record, so the map moves to that record and compares hash codes: 756909 and the record's hash code 756909 are equal. The map then takes the key of that record from bucket 2 and compares it with the requested key; in this case, e3.equals(e3) returns true.

Since the objects are equal and the hash codes match, that record is retrieved from the bucket and returned to the user. This is how the get() method works.

Key Note:

  • If two objects are equal, their hash codes will always be the same.
  • If two hash codes are the same, that does not mean both objects are equal.