ELK with SpringCloud Sleuth and Zipkin (GitCode Available)

Vivek Singh
6 min readApr 5, 2023


Git Link : https://github.com/Viveksingh1313/elk-sleuth-springboot

Prerequisites: Java 1.8 and Maven 3.9. Screenshot below:

Java and Maven versions

The README file in the GitHub repo has been updated with instructions on how to run the application. If you face any issues, follow this blog post: it is more explanatory, with the theoretical concepts included.

What is Sleuth?

Spring Cloud Sleuth enables effective logging in a microservice architecture by adding unique identifiers to your logs. It adds two types of identifiers: a TraceId, which identifies a complete request/task, and a SpanId, which identifies a specific unit of work within that request or task. So if we make a request that travels through 3 microservices, the TraceId stays the same across the entire request, while the SpanId is different for every individual microservice. In this example there is only 1 TraceId value, but 3 individual SpanId values are generated.
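To make this concrete, below is a minimal sketch (not the exact controller from the repo; the class name and the payment-service URL are illustrative) of how order-service might log and call payment-service. With spring-cloud-starter-sleuth on the classpath, every log line automatically carries the traceId and spanId:

// Minimal sketch, illustrative only - not the exact controller from the repo.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

@RestController
public class OrderController {

    private static final Logger log = LoggerFactory.getLogger(OrderController.class);

    // Injected as a Spring bean so that Sleuth can instrument outgoing calls.
    private final RestTemplate restTemplate;

    public OrderController(RestTemplate restTemplate) {
        this.restTemplate = restTemplate;
    }

    @PostMapping("/order/bookOrder")
    public String bookOrder() {
        // With Sleuth on the classpath this prints something like:
        //   INFO [order-service,5e9d1c3f2a7b4d18,5e9d1c3f2a7b4d18] Booking order
        // where the bracketed values are the application name, traceId and spanId.
        log.info("Booking order");

        // Sleuth adds the trace context to the HTTP headers of this call, so
        // payment-service logs the same traceId but a new spanId.
        // (Hypothetical URL - the real payment-service address lives in the repo's config.)
        return restTemplate.postForObject("http://localhost:9193/payment/doPayment", null, String.class);
    }
}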

What is Zipkin?

Zipkin is a distributed tracing tool for a microservices ecosystem. Tracing here means tracking the latency of each microservice in a distributed system: Zipkin gives metrics on how much time each individual microservice takes to serve an API call. This helps in debugging, because you can pinpoint the specific service that is slow when many underlying systems are involved and the application becomes slow.
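For the services to report spans to the Zipkin server, they need a Zipkin reporter dependency (for example spring-cloud-sleuth-zipkin; the exact starter depends on the Spring Cloud version) and the Zipkin base URL. A minimal sketch of the client-side wiring, assuming the repo does something similar:

// Minimal sketch of wiring a service for tracing - the repo may wire this differently.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.client.RestTemplate;

@Configuration
public class TracingConfig {

    // Sleuth only instruments RestTemplates that are Spring beans, so calls made
    // through this bean carry the trace headers and are reported to Zipkin as spans.
    @Bean
    public RestTemplate restTemplate() {
        return new RestTemplate();
    }

    // In application.yml / application.properties (or on the config server):
    //   spring.zipkin.base-url: http://localhost:9411    (this is also the default)
    //   spring.sleuth.sampler.probability: 1.0           (trace every request; default is 0.1)
}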

What is the ELK Stack?

ELK is an acronym for three open-source projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine that also acts as the data store in the ELK stack. Logstash is a server-side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a “stash” like Elasticsearch. Kibana lets users visualize the data with charts and graphs by fetching it from Elasticsearch.

Image Source : https://www.guru99.com/elk-stack-tutorial.html

Setup Guide with Explanation

  1. Git clone https://github.com/Viveksingh1313/elk-sleuth-springboot
  2. Start the Eureka Server. This service acts as a discovery server where all the client applications register themselves with their IP and port details (a minimal sketch of the server class is shown after the commands below).
cd elk-sleuth-springboot
cd modules
cd service-registry
mvn clean install
mvn spring-boot:run

Eureka Server : http://localhost:8761/

Screenshot for Eureka Server running on local
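For reference, a Eureka discovery server is essentially a Spring Boot application with one extra annotation. A minimal sketch (the repo's actual class may differ):

// Minimal sketch of a Eureka discovery server - the repo's actual class may differ.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class ServiceRegistryApplication {
    public static void main(String[] args) {
        SpringApplication.run(ServiceRegistryApplication.class, args);
    }
}

Client applications register themselves by adding the Eureka client dependency and pointing eureka.client.service-url.defaultZone at http://localhost:8761/eureka.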

3. Start the Spring Cloud Config Server. Config server Git repo link: https://github.com/Viveksingh1313/cloud-config-server

The Spring Cloud Config Server stores configuration for the client applications; properties defined in a client's application.yml file can be moved to the config server. A sketch of how a client then reads such a property is shown after the commands below.

cd modules
cd cloud-config-server
mvn clean install
mvn spring-boot:run

Cloud Server address : http://localhost:9196/
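As an illustration of moving a property to the config server, a client reads it exactly as if it were defined locally. A minimal sketch (the property name order.default-qty and the controller are hypothetical, only to show the mechanism):

// Minimal sketch - property name and class are hypothetical.
import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class ConfigDemoController {

    // Resolved from the config server (http://localhost:9196) when the client is
    // pointed at it, e.g. via spring.config.import=configserver:http://localhost:9196
    @Value("${order.default-qty:1}")
    private int defaultQty;

    @GetMapping("/config/default-qty")
    public int getDefaultQty() {
        return defaultQty;
    }
}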

4. Start Zipkin Server

docker run -d -p 9411:9411 openzipkin/zipkin

Link for more details : https://zipkin.io/pages/quickstart

Zipkin tracing dashboard : http://localhost:9411/zipkin/

5. Start the client application order-service. We will make a POST request to order-service, which in turn calls payment-service. Order-service and payment-service only contain the endpoint implementations.

cd modules
cd order-service
mvn clean install
mvn spring-boot:run

6. Start client application payment-service.

cd modules
cd payment-service
mvn clean install
mvn spring-boot:run

7. Start the ELK stack. Using the Dockerfile in the repo, we can build an image and start ELK with the commands below. Kibana runs on port 5601, Elasticsearch on 9200, and Logstash uses port 5044 to read messages over a TCP connection.

docker build . --tag local-elk
docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk local-elk

Hit localhost:5601 to check that Kibana is running and localhost:9200 for Elasticsearch.
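If you would rather verify from code than from the browser, here is a quick sketch in plain Java (HttpURLConnection, so it runs on Java 1.8) that pings Elasticsearch and prints its JSON banner:

// Quick sketch: ping Elasticsearch on localhost:9200 and print the response.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ElasticsearchPing {
    public static void main(String[] args) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL("http://localhost:9200").openConnection();
        conn.setRequestMethod("GET");
        System.out.println("HTTP status: " + conn.getResponseCode());
        try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // cluster name, version info, etc.
            }
        }
    }
}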

All the setup is done. We are good to make an endpoint call to see whether everything works as expected.

ELK stack configuration explanation

Logstash reads the logs from TCP port 5044 and sends them to Elasticsearch. Kibana internally connects to Elasticsearch to visualize the logs.

To read the logs and then send them to Elasticsearch, Logstash uses two files, described in logstash/input.conf and logstash/output.conf (check the git repo).

# input.conf
input {
  tcp {
    port => 5044
    codec => json
  }
}

We configure a TCP input on port 5044 for Logstash to read the log data; everything arriving on that port is parsed as JSON (codec => json).

# output.conf
filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "logstash-local1"
  }
}

The filter plugin then transforms the messages as required (here it parses the message field as JSON), and the output plugin sends them to Elasticsearch via the hosts parameter, writing them into the logstash-local1 index.

These files are then placed in the Logstash config folder, which is done inside the Dockerfile.

FROM sebp/elk
# overwrite existing file
RUN rm /etc/logstash/conf.d/30-output.conf
COPY /logstash/conf/output.conf /etc/logstash/conf.d/30-output.conf
RUN rm /etc/logstash/conf.d/02-beats-input.conf
COPY /logstash/conf/input.conf /etc/logstash/conf.d/02-beats-input.conf

With this, Logstash is set up to read the log data from the applications and forward it to Elasticsearch.

We now have to configure a way to send the logs to port 5044 over a TCP connection. This is done inside the resources/logback.xml file, using the LogstashTcpSocketAppender from the logstash-logback-encoder library.

For brevity, only the main lines are shown below:

<include resource="org/springframework/boot/logging/logback/base.xml"/>
<appender name="logstash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
  <destination>localhost:5044</destination>
  <encoder class="net.logstash.logback.encoder.LoggingEventCompositeJsonEncoder">
    <!-- encoder providers and closing tags omitted here; see the complete file in the repo -->

Complete code

Let’s Test

Now we can hit this endpoint: http://localhost:9192/order/bookOrder

Make a POST request with the JSON body below:

{
  "order": {
    "id": 100,
    "name": "Mobile",
    "qty": 1,
    "price": 1000
  },
  "payment": {}
}

Screenshot for Postman call :

After this, visualize the data in Kibana at localhost:5601.

As soon as you hit the endpoint and reload Kibana, you will see a popup message like the one below:

You have data in Elasticsearch. Now, create a data view.

Go ahead and create a data view as below:

The name can be anything. The index-pattern field should match the index name used in your Logstash output configuration (/logstash/output.conf), i.e. logstash-local1.

Index Pattern name :

Screenshot of logs visible on localhost Kibana server :

You can filter the logs based on the different IDs, such as the traceId. Screenshot below for filtering:

Where does Elasticsearch store logs?

Elasticsearch uses both RAM and the hard disk to store data. Data is read from disk when required, and the heap (RAM) is basically used as working memory. The heap size should be at most 50% of the available RAM.

https://stackoverflow.com/questions/33303786/where-does-elasticsearch-store-its-data

On the hard disk, the Elasticsearch data (including the indexed logs) can be found in the directory:
/var/lib/elasticsearch

Zipkin Screenshot to trace latency:

We are done. Thanks.

Ending Notes:

I am active on Medium, so please reach out to me if you face any issues while running the application locally or on a server. Happy to help.
For any freelancing work or a healthy conversation, reach out to me at
vivek.sinless@gmail.com or LinkedIn

Written by Vivek Singh

Software Developer. I write about Full Stack, NLP and Blockchain. Buy me a coffee - buymeacoffee.com/viveksinless
