In this tutorial, we will discuss the installation of Elasticsearch, Logstash and Kibana (the ELK Stack) on CentOS/RHEL. The ELK stack is mainly used to centralize and visualize logs from multiple servers for log analysis. The ELK stack setup has the following four main components:
- Elasticsearch is a distributed, NoSQL data store used to store logs.
- Logstash is a log collection tool that accepts input from various sources (such as Filebeat), applies filtering and formatting, and writes the data to Elasticsearch.
- Kibana is a graphical user interface (GUI) for visualizing Elasticsearch data.
- Filebeat is installed on the client servers that send their logs to Logstash. It serves as a log-shipping agent that uses the lumberjack networking protocol to communicate with Logstash.

Let’s get started on setting up our ELK Server!
Prerequisites
Elasticsearch is built using Java and requires at least Java 8 to run. Only Oracle's Java and the OpenJDK are supported, and the same JVM version should be used on all Elasticsearch nodes and clients. Please visit our previous article to install and configure Java 8.
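If Java 8 is not present yet, a quick way to get it on CentOS/RHEL is the OpenJDK package from the base repositories (a sketch; your environment may use Oracle's JDK instead):

```shell
# Install OpenJDK 8 from the base CentOS/RHEL repositories,
# then confirm the version string reports 1.8.x
yum install -y java-1.8.0-openjdk
java -version
```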
root@devops# java -version
java version "1.8.0_151"
Java(TM) SE Runtime Environment (build 1.8.0_151-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.151-b12, mixed mode)
Install and Configure Elasticsearch
Step 1 – Elasticsearch can be installed with a package manager by adding Elastic’s package repository. Run the following command to import the Elasticsearch public GPG key into rpm:
root@devops# rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
Step 2 – Create a new yum repository file for Elasticsearch. Insert the following lines into the repository configuration file elasticsearch.repo
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
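As a convenience, the same file can be written in one step with a heredoc; /etc/yum.repos.d/ is the standard yum repository location (run as root):

```shell
# Write the Elasticsearch repository definition in one step
cat > /etc/yum.repos.d/elasticsearch.repo <<'EOF'
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
```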
Step 3 – Install the Elasticsearch package.
root@devops# yum install elasticsearch
Step 4 – Elasticsearch is now installed. Next, edit its configuration in /etc/elasticsearch/elasticsearch.yml; for the full list of configuration settings, visit the official website.
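As an illustration, a minimal single-node setup often only needs a handful of settings in /etc/elasticsearch/elasticsearch.yml (the values below are assumptions for this example, not required):

```
cluster.name: elk-demo
node.name: devops
network.host: localhost
http.port: 9200
```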
Step 5 – Start and enable the elasticsearch service.
root@devops# systemctl daemon-reload
root@devops# systemctl enable elasticsearch
root@devops# systemctl start elasticsearch
Step 6 – Allow traffic through TCP port 9200 in your firewall
root@devops# firewall-cmd --add-port=9200/tcp
root@devops# firewall-cmd --add-port=9200/tcp --permanent
Step 7 – Test the Elasticsearch installation
root@devops# curl -X GET http://localhost:9200
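If the node responds, you can also check overall cluster state through the _cluster/health API (a quick sanity check, not part of the original steps):

```shell
# Query the cluster health API; a single-node setup typically
# reports a "status" of green or yellow
curl -XGET 'http://localhost:9200/_cluster/health?pretty'
```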
If you are familiar with the Ansible IT automation tool, you can install Elasticsearch with a single command. We have automated all of the above Elasticsearch installation steps in an Ansible role; for that role, please visit our article to install Elasticsearch with Ansible.
Install and Configure Logstash
Step 1 – Create a new yum repository file for Logstash (the Elastic GPG key imported earlier also covers this repository). Insert the following lines into the repository configuration file logstash.repo
[logstash-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Step 2 – Install the Logstash package.
root@devops# yum install logstash
Step 3 – Logstash is now installed. Before configuring Logstash, we need to generate an SSL certificate so that Filebeat can communicate with it securely. Follow the steps listed below to generate the SSL certificate.
Add the following line to the [ v3_ca ] section in /etc/pki/tls/openssl.cnf (use your Logstash server's IP address):

[ v3_ca ]
subjectAltName = IP: 192.168.0.1
Generate a self-signed SSL certificate with OpenSSL:
root@devops# cd /etc/pki/tls
root@devops# openssl req -config /etc/pki/tls/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt
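If you want to sanity-check the OpenSSL invocation before touching /etc/pki/tls, the same command can be dry-run in a scratch directory; here -subj stands in for the openssl.cnf edit, and the IP and paths are assumptions for this quick check only:

```shell
# Scratch-directory dry run of the certificate generation (hypothetical paths)
mkdir -p /tmp/elk-cert-test && cd /tmp/elk-cert-test
openssl req -x509 -days 3650 -batch -nodes -newkey rsa:2048 \
  -subj "/CN=192.168.0.1" \
  -keyout logstash-forwarder.key -out logstash-forwarder.crt
# Inspect the result: subject and validity window
openssl x509 -in logstash-forwarder.crt -noout -subject -dates
```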
Step 4 – Configure the Logstash input, output, and filter files. For detailed configuration settings, visit the official website.
Input: Create /etc/logstash/conf.d/input.conf and insert the following lines into it.
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
Output: Create /etc/logstash/conf.d/output.conf and insert the following lines into it.
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Filter: Create /etc/logstash/conf.d/filter.conf and insert the following lines into it.
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    date {
      match => [ "timestamp", "MMM d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
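Before starting the service, Logstash can validate the pipeline files; in Logstash 5.x the rpm install ships the binary under /usr/share/logstash and takes a --config.test_and_exit flag (the invocation below is a sketch assuming that layout):

```shell
# Parse-check the pipeline files under conf.d without starting Logstash;
# --path.settings points at the rpm install's settings directory
/usr/share/logstash/bin/logstash --path.settings /etc/logstash \
  -f /etc/logstash/conf.d --config.test_and_exit
```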
Step 5 – Start and enable the logstash service.
root@devops# systemctl daemon-reload
root@devops# systemctl enable logstash
root@devops# systemctl start logstash
Step 6 – Allow traffic through TCP port 5044 in your firewall
root@devops# firewall-cmd --add-port=5044/tcp
root@devops# firewall-cmd --add-port=5044/tcp --permanent
If you are familiar with the Ansible IT automation tool, you can install Logstash with a single command. We have automated all of the above Logstash installation steps in an Ansible role; for that role, please visit our article to install Logstash with Ansible.
Install and Configure Kibana
Step 1 – Create a new yum repository file for Kibana. Insert the following lines into the repository configuration file kibana.repo
[kibana-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Step 2 – Install the Kibana package.
root@devops# yum install kibana
Step 3 – Kibana is now installed. For configuration settings, visit the official website.
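For reference, a typical minimal /etc/kibana/kibana.yml for this setup might look as follows; Kibana 5.x uses the elasticsearch.url setting, and the values below are assumptions for this example:

```
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"
```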
Step 4 – Start and enable the Kibana service.
root@devops# systemctl daemon-reload
root@devops# systemctl enable kibana
root@devops# systemctl start kibana
Step 5 – Make sure you can access Kibana's web interface (allow traffic on TCP port 5601)
root@devops# firewall-cmd --add-port=5601/tcp
root@devops# firewall-cmd --add-port=5601/tcp --permanent
Step 6 – Launch Kibana (http://localhost:5601) to verify that you can access the web interface.
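You can also check Kibana from the shell via its status API (the /api/status endpoint exists in Kibana 5.x):

```shell
# Query Kibana's status API; a healthy instance returns a JSON
# document describing its overall state
curl -s http://localhost:5601/api/status
```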
If you are familiar with the Ansible IT automation tool, you can install Kibana with a single command. We have automated all of the above Kibana installation steps in an Ansible role; for that role, please visit our article to install Kibana with Ansible.
Install Filebeat on the Client Servers
We will show you how to install and configure Filebeat on the client servers.
Step 1 – Copy the SSL certificate from the Logstash server to the clients
root@devops# scp /etc/pki/tls/certs/logstash-forwarder.crt root@192.168.0.2:/etc/pki/tls/certs/
Step 2 – Import the same Elastic package signing key. Run the following command to import the Elasticsearch public GPG key into rpm:
root@devops# rpm --import http://packages.elastic.co/GPG-KEY-elasticsearch
Step 3 – Create a new yum repository file for Filebeat. Insert the following lines into the repository configuration file filebeat.repo
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Step 4 – Install the Filebeat package.
root@devops# yum install filebeat
Step 5 – Configure Filebeat. For detailed configuration settings, visit the official website.
Filebeat configuration is stored in a YAML file, which requires strict indentation. You can configure Filebeat to send its output to either Logstash or Elasticsearch.
Edit /etc/filebeat/filebeat.yml as follows:
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
#----------------------------- Logstash output --------------------------------
output.logstash:
  hosts: ["192.168.0.1:5044"]
  # Trust the certificate copied from the Logstash server (Step 1),
  # since the Logstash beats input has ssl => true
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
#------------------- OR Elasticsearch output (enable only one) ----------------
#output.elasticsearch:
#  hosts: ["192.168.0.1:9200"]
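Before starting the service, Filebeat can check the YAML for mistakes; Filebeat 5.x accepts a -configtest flag, and the binary path below assumes the rpm install's wrapper location:

```shell
# Parse-check filebeat.yml without shipping any logs;
# -e writes the result to stderr instead of the log file
/usr/bin/filebeat -configtest -c /etc/filebeat/filebeat.yml -e
```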
Step 6 – Start and enable the filebeat service.
root@devops# systemctl start filebeat
root@devops# systemctl enable filebeat
Step 7 – Test Filebeat by querying Elasticsearch (on the ELK server) for the shipped data
root@devops# curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'
Testing and Setting Up a Kibana Index Pattern
Once we have verified that logs are being shipped by the clients and received successfully on the server, the first thing to do in Kibana is to configure an index pattern and set it as the default. For more details, please visit the official website.
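To confirm which Filebeat indices exist before creating the pattern, Elasticsearch's _cat API can list them from the shell:

```shell
# List Filebeat indices with headers (v) so names, document counts,
# and sizes are easy to read
curl -XGET 'http://localhost:9200/_cat/indices/filebeat-*?v'
```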
That’s it for the manual step-by-step ELK stack installation on CentOS 7. Being able to install the ELK stack with a single command is a wondrous thing, so to automate the installation with one click using Ansible, please visit our GitHub project – ELK Stack.
References:
https://www.elastic.co/
https://www.tecmint.com/install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-rhel-7/