Spring Data Redis Usage Across Microservices

Alexander Ang
FAUN — Developer Community 🐾
5 min read · Mar 18, 2021


Prerequisite: more about Redis here

In this post, I will show how to use Redis caching across microservices.

I have 2 simple projects:

  • learn-redis-api (I will call it front from now on), which acts as the external-facing part of the microservices, handling incoming requests.
  • learn-redis-core (I will call it core), which acts as the internal part of the microservices, handling database operations.

First, the dependencies. The important ones are spring-boot-starter-data-redis on the front side and the persistence stack (spring-boot-starter-data-jpa, H2, Flyway) on the core side; the rest are the usual basics.

Dependencies for the API:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-redis</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
</dependencies>

For core:

<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-test</artifactId>
        <scope>test</scope>
    </dependency>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>

    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-jpa</artifactId>
    </dependency>

    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>

    <dependency>
        <groupId>com.h2database</groupId>
        <artifactId>h2</artifactId>
    </dependency>

    <dependency>
        <groupId>org.flywaydb</groupId>
        <artifactId>flyway-core</artifactId>
    </dependency>
</dependencies>

To simulate Redis usage, the flow is as follows:

1. Front handles inquiry requests and passes them on to core.

Inquiry Request
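
The snippet itself did not survive the export, so here is a minimal sketch of what the inquiry request and response models might look like. The field names follow the JSON payloads used later in this post; the class names, Lombok annotations, and the Serializable choice (so the response can be stored as a Redis cache value with the default JDK serializer) are assumptions:

import java.io.Serializable;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

// InquiryRequest.java -- hypothetical request model for the inquiry endpoint
@Data
public class InquiryRequest implements Serializable {
    private String inquiryCode;
    private String productName;
    private Integer amount;
}

// InquiryResponse.java -- hypothetical response model; Serializable so it can be
// cached in Redis with the default JDK serializer
@Data
@NoArgsConstructor
@AllArgsConstructor
public class InquiryResponse implements Serializable {
    private String status; // "SUCCESS" or "FAILED"
}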

2. Core validates whether an inquiry with the specified code already exists in the DB.
a. If it exists and is already completed, return a failed response; otherwise update the existing transaction and return a success response.
b. If no such inquiry code exists, create a new transaction and return a success response.

Core: TrxService — Inquiry
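
The core service was embedded as a snippet too, so here is a rough, hypothetical sketch of how the inquiry handling could look, matching the rules and log lines in this post. The Transaction entity, the repository, and the PENDING/COMPLETED status values are assumptions, not the original code:

// Imports used across the classes below (they would live in separate files in a real project)
import javax.persistence.Entity;
import javax.persistence.Id;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Service;

// Transaction.java -- hypothetical JPA entity keyed by the inquiry code
@Entity
@Data
@NoArgsConstructor
@AllArgsConstructor
public class Transaction {
    @Id
    private String inquiryCode;
    private String productName;
    private Integer amount;
    private String status; // assumed values: "PENDING", "COMPLETED"
}

// TransactionRepository.java
public interface TransactionRepository extends JpaRepository<Transaction, String> {
}

// TrxService.java (core) -- inquiry handling only; payment is sketched further below
@Service
@RequiredArgsConstructor
@Slf4j
public class TrxService {

    private final TransactionRepository repository;

    public InquiryResponse inquiry(InquiryRequest request) {
        log.info("Received inquiry with inquiry code: {}", request.getInquiryCode());
        return repository.findById(request.getInquiryCode())
                .map(trx -> {
                    if ("COMPLETED".equals(trx.getStatus())) {
                        // 2a: inquiry exists and is already completed -> failed response
                        return new InquiryResponse("FAILED");
                    }
                    // 2a: inquiry exists but is still pending -> update it
                    log.info("Updating existing transaction");
                    trx.setProductName(request.getProductName());
                    trx.setAmount(request.getAmount());
                    repository.save(trx);
                    return new InquiryResponse("SUCCESS");
                })
                .orElseGet(() -> {
                    // 2b: no transaction with this code yet -> create a new one
                    log.info("Creating new transaction");
                    repository.save(new Transaction(request.getInquiryCode(),
                            request.getProductName(), request.getAmount(), "PENDING"));
                    return new InquiryResponse("SUCCESS");
                });
    }
}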

3. Front handles payment requests and passes them on to core.

Payment Request
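
As with the inquiry models, a hypothetical sketch of the payment models; the payment payload shown later in this post carries only an inquiryCode:

import java.io.Serializable;
import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;

// PaymentRequest.java -- hypothetical payment request model
@Data
public class PaymentRequest implements Serializable {
    private String inquiryCode;
}

// PaymentResponse.java -- hypothetical payment response model
@Data
@NoArgsConstructor
@AllArgsConstructor
public class PaymentResponse implements Serializable {
    private String status; // "SUCCESS" or "FAILED"
}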

4. Core validates whether a payment with the specified code already exists in the DB.
a. If it exists and is already completed, return a failed response; otherwise update the existing transaction to success and return a success response.
b. If no transaction with that inquiry code is found in the DB, return a failed response.

Core: TrxService — Payment
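
Continuing the same hypothetical core TrxService from the inquiry sketch above, the payment handling could look like this:

// Payment handling in the same hypothetical core TrxService sketched above
public PaymentResponse payment(PaymentRequest request) {
    log.info("Received payment with inquiry code: {}", request.getInquiryCode());
    return repository.findById(request.getInquiryCode())
            .map(trx -> {
                if ("COMPLETED".equals(trx.getStatus())) {
                    // 4a: already completed -> failed response
                    log.info("Transaction is already completed!");
                    return new PaymentResponse("FAILED");
                }
                // 4a: pending transaction -> mark it completed
                trx.setStatus("COMPLETED");
                repository.save(trx);
                log.info("Transaction completed");
                return new PaymentResponse("SUCCESS");
            })
            // 4b: no transaction with that inquiry code -> failed response
            .orElseGet(() -> new PaymentResponse("FAILED"));
}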

Without Redis, this system works fine. But when the volume of requests is huge, trouble starts: every time front receives a request it has to pass it on to core, even if the request is exactly the same as one it has already handled. The same applies to payment requests with the same code sent multiple times.

Using Redis, we can filter out requests with the same content, whose response we already know, without passing them on to the other application, in this case core.

Where do we put Redis caching to work? Right here, in the front service class:

Front: TrxService
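
Below is a hypothetical sketch of that front service. The cache names inquiryResponse and paymentResponse and the key built from inquiryCode, productName, and amount come from the description in this post; the RestTemplate call and the core URL (localhost:8081 here) are assumptions about how front talks to core:

import lombok.RequiredArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

// TrxService.java (front) -- hypothetical sketch of the caching layer
@Service
@RequiredArgsConstructor
@Slf4j
public class TrxService {

    private final RestTemplate restTemplate; // assumed way of calling core

    @Cacheable(value = "inquiryResponse",
            key = "#request.inquiryCode + '-' + #request.productName + '-' + #request.amount")
    public InquiryResponse inquiry(InquiryRequest request) {
        // Only runs on a cache miss; on a hit, the cached InquiryResponse is returned directly
        log.info("Sending inquiry request to core with inquiry code: {}", request.getInquiryCode());
        return restTemplate.postForObject("http://localhost:8081/inquiry", request, InquiryResponse.class);
    }

    @Cacheable(value = "paymentResponse", key = "#request.inquiryCode")
    public PaymentResponse payment(PaymentRequest request) {
        log.info("Sending payment request to core with inquiry code: {}", request.getInquiryCode());
        return restTemplate.postForObject("http://localhost:8081/payment", request, PaymentResponse.class);
    }
}

Remember that @Cacheable only kicks in when caching is enabled, for example with @EnableCaching on the front application class, and that a RestTemplate bean (or whatever HTTP client you prefer) has to be declared somewhere.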

By using @Cacheable on the inquiry and payment methods, we filter identical requests by using the request parameters as the cache key. So when a request with the same key comes in, we can simply return the stored value (inquiryResponse and paymentResponse) without even executing the method or sending a request to core.

Redis configuration is as follows:

Front: application.properties
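
A minimal equivalent of that configuration, using standard Spring Boot 2.x property names (the host, port, and TTL values match the setup described below):

# Use Redis as the cache provider
spring.cache.type=redis

# Redis server running locally via Docker (see below)
spring.redis.host=localhost
spring.redis.port=6379

# Cache entries expire after 5 seconds (testing only)
spring.cache.redis.time-to-live=5s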

For testing purposes, I set the cache time-to-live to 5 seconds.

I run the Redis server in a Docker container on localhost, port 6379.
To run Redis locally with Docker, just do:

docker pull redis

Wait until the pull finishes, then run the Redis image with:

docker run --rm -it -p 6379:6379 redis

Now we can test our Redis caching implementation by sending an inquiry request:

{
    "inquiryCode": "T002",
    "productName": "Tote Bag",
    "amount": 1
}

The applications will produce these lines in the log:

// front
Sending inquiry request to core with inquiry code: T002
// core
Received inquiry with inquiry code: T002
Creating new transaction

And we will receive this response:

{
    "status": "SUCCESS"
}

If we send the request again and it is not served from the cache, or after the cache entry has expired:

// front
Sending inquiry request to core with inquiry code: T002
// core
Received inquiry with inquiry code: T002
Updating existing transaction

Changing “productName” or “amount” will also result in the transaction being updated, because the new key (built from inquiryCode, productName, and amount) is not yet stored on the Redis server, so the request still reaches core.

But if we send the same request within cache time-to-live:

...

Yes, it’s empty. Nothing shows up in the log because the method on front is never executed, yet we still get the same response, and we get it faster, because there is no round trip of the request being passed to core, processed, and passed back to front.

The same goes for the payment request:

{
    "inquiryCode": "T002"
}

Log produced:

// front
Sending payment request to core with inquiry code: T002
// core
Received payment with inquiry code: T002
Transaction completed

If we try the payment again with the same inquiryCode, it logs the following and returns a failed response:

// front
Sending payment request to core with inquiry code: T002
// core
Received payment with inquiry code: T002
Transaction is already completed!

With the cache in place, the next time that same payment request is sent, front immediately returns the failed response without passing the request to core.
