Managing your own servers and services can be a fun and rewarding hobby. You get to know the software you are using better and can help developers by providing feedback and bug reports. Also, you learn a ton about the OS of your choice! Even better than having your own servers is knowing what is going on: not only on the machines, but on the network, and maybe who is trying to log in as “printserver” on port 22 ;)

In this post I will take you with me while installing ElasticSearch, Kibana and MetricBeat on my Ubuntu 18.04 VPS’s, to which I just recently migrated from a big clunky root server. (Logstash, the L in ELK, will follow in a later post.)

Preparing and updating Ubuntu

In general it is a good idea to keep your OS up to date and secure. Since my VPS’s are installed image-based within a few seconds, I can invest the few extra seconds in doing an update right away:

$ sudo apt update
$ sudo apt upgrade
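If you want your VPS’s to keep patching themselves between logins, the unattended-upgrades package is worth a look. A quick sketch (package and tool names are from stock Ubuntu 18.04):

```shell
# Install automatic security updates and turn them on interactively
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades
```

This only covers security updates by default; anything else still goes through your regular `apt upgrade` routine.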

Installing a Java runtime environment

Since ElasticSearch is written in Java, we will have to install a compatible runtime environment for it to work. (In the first version of this post I installed the default-jre, which in Ubuntu 18.04 is version 10 before September 2018 and version 11 after September 2018. Unfortunately, neither is fully compatible with Logstash as of Oct. 2018.)

Let’s go ahead and install it, then check for the version to be sure everything runs as expected:

$ sudo apt install openjdk-8-jre
$ java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-1ubuntu0.18.04.1-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)

(If your output differs, you might have another JRE installed alongside; make sure version 8 is the one being used.)
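If several Java versions do end up installed side by side, you can pick which one `java` points to with update-alternatives (the list it shows depends on what is installed on your machine):

```shell
# List the installed Java runtimes and choose the default interactively
sudo update-alternatives --config java
```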

Installing ElasticSearch

The next thing to do is to get ElasticSearch. For this we first add the Elastic apt signing key, so we can verify that the downloaded packages are the ones we want. Then we create an apt repository file that holds the description of the Elastic repository. Finally, we update the package lists and install the latest version of ElasticSearch.

wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

echo "deb stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list

sudo apt-get update 
sudo apt-get install elasticsearch

You can then always verify which package got installed with apt policy like this:

$ apt policy elasticsearch
  Installed: 6.4.2
  Candidate: 6.4.2
  Version table:
 *** 6.4.2 500
        500 https://artifacts.elastic.co/packages/6.x/apt stable/main amd64 Packages
        500 https://artifacts.elastic.co/packages/6.x/apt stable/main i386 Packages
        100 /var/lib/dpkg/status
     6.4.1 500
        500 https://artifacts.elastic.co/packages/6.x/apt stable/main amd64 Packages
        500 https://artifacts.elastic.co/packages/6.x/apt stable/main i386 Packages

Start ElasticSearch and check its service status:

$ sudo systemctl start elasticsearch
$ systemctl status elasticsearch
● elasticsearch.service - Elasticsearch
   Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; disabled; vendor preset: enabled)
   Active: active (running) since Tue 2018-10-23 21:15:13 CEST; 7s ago
 Main PID: 7259 (java)
    Tasks: 14 (limit: 2299)
   CGroup: /system.slice/elasticsearch.service
           └─7259 /usr/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+A

Oct 23 21:15:13 xyz systemd[1]: Started Elasticsearch.
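Note the “disabled” in the Loaded line above: ElasticSearch will not come back up after a reboot yet. To have systemd start it at boot:

```shell
# Pick up the freshly installed unit file and enable it at boot
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
```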

You can also easily check whether it is really working with curl:

$ curl -XGET 'http://localhost:9200/'
{
  "name" : "yuAmJnx",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "ctg7ChZfReu5pZKZxpEcAQ",
  "version" : {
    "number" : "6.4.2",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "04711c2",
    "build_date" : "2018-09-26T13:34:09.098244Z",
    "build_snapshot" : false,
    "lucene_version" : "7.4.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

Installing Kibana - a JavaScript SuperMachine

Having ElasticSearch as a blazing fast “database” to store all our logs and metrics is only the first step. We now install Kibana, to analyze all that data in our browser. Since Kibana is written in JavaScript, it is THE tool to use :)

After installing kibana, you will have to change some settings in its config file /etc/kibana/kibana.yml.

$ sudo apt install kibana
$ sudo vim /etc/kibana/kibana.yml

# I changed the following values in this file:
server.port: 5601
server.host: "the.ip.of.your.server" # put your server's IP here, or Kibana will only bind to localhost
elasticsearch.url: "http://localhost:9200"
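After saving the config, enable and start the Kibana service so it picks up these settings:

```shell
# Start Kibana now and make it survive reboots
sudo systemctl enable kibana
sudo systemctl start kibana
```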

So there we go. Now we have ElasticSearch and Kibana ready at our hands for first discovery. Clearly, in this state there is only very little we can do with it. First, we need some data. Let’s get our hands dirty on that too!

Installing MetricBeat

MetricBeat, also part of the Elastic repository, is installed just as easily as ElasticSearch or Kibana. After installing it, you already have a plethora of predefined config files sitting in /etc/metricbeat/modules.d/, waiting for you to set them up to your liking:

$ sudo apt install metricbeat
$ vim /etc/metricbeat/metricbeat.yml

Default settings and first peek into indices

For the sake of the test I kept /etc/metricbeat/metricbeat.yml as it was shipped and only uncommented the diskio, core and socket metrics in /etc/metricbeat/modules.d/system.yml.
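Instead of editing files under modules.d/ by hand, you can also manage modules with MetricBeat’s own CLI (the system module ships enabled by default; nginx below is just an example, enable whatever fits your setup):

```shell
# Show which modules are enabled and which are available
metricbeat modules list

# Enable an additional module, e.g. nginx
sudo metricbeat modules enable nginx
```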

Once we’ve set up MetricBeat and started the service, events start pouring into ElasticSearch. We can now check whether ElasticSearch has created an index for us:

$ sudo systemctl start metricbeat
$ curl http://localhost:9200/_cat/indices?v
health status index                       uuid                     pri rep docs.count docs.deleted store.size pri.store.size
yellow open   metricbeat-6.4.2-2018.10.23 IZWI1KB7RE28k-G5eHfmtA     1   1        261            0    344.9kb        344.9kb

Nice! We have an index for metricbeat and there are already some documents in it! Let’s go and explore!
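To peek at one of the documents MetricBeat has shipped, you can query the index directly (output not shown here; your field values will differ):

```shell
# Fetch a single document from the metricbeat index, pretty-printed
curl -s 'http://localhost:9200/metricbeat-*/_search?size=1&pretty'
```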

First view into Kibana

Now that we have installed and set up the software part, let’s start our browser and take a look at Kibana. Navigate to http://hostname.of-your.server:5601 and watch Kibana load for the first time.

Kibana dashboard

On your first visit, you will be greeted by the dashboard. Since we have not configured anything yet, let’s quickly head over to the Management section in the navigation on the left and create our first index pattern.

Kibana: choosing your first index

As you can see in this picture, there is an index called metricbeat-6.4.2... right below the input field. This is a list of indices that might match what you entered. Since we only have the one index from our local MetricBeat monitoring, just type metricbeat-* into the input.

The pattern will match our index and you will be taken to the next step.

After you have set up your first index pattern, you can change to the Discover tab.

Here you can now add your first field, @timestamp, to filter your data by time. It should then look like the following screenshot:

Kibana: Discovering first data

Since this is heavily unstructured data and we want to aggregate multiple VPS instances into a general overview of all servers, let’s add some more fields, namely beat.hostname and metricset.module. This gives us the opportunity to visualize the data with regard to these fields.

Now that we have some simple structured data (timestamp, hostname, module), change to the Visualize tab in the navigation. You will be greeted with a screen like the following, because there is no visualization set up yet. That is what we are going to do now.

The powers of Kibana: Creating informative visualizations

Kibana offers a vast amount of different visualization options: area charts, heat maps, pie charts, gauges and many, many others for you to play around with. This is especially nice if you want to create your own panels with different information. For the time being we will only create an area chart, to get a better understanding of what Kibana does.

Once you hit “Create a visualization”, you will find all the available options listed.

Kibana: Create a new search screen

On this screen you can select one of the many visualization types I mentioned before. Choose “Area” now, as the first visualization we are going to set up is an overview of CPU usage on the local machine.


Configuring your first visualization

Knowing how many resources your machines use at any given time helps you recognize bottlenecks and adjust the power your VPS’s get to use. (That is, if you have the option to scale the machines.)

To create such an overview, an area chart is a pretty nice fit, since you can easily layer multiple measurements, which creates a good, comparable view of the data.

For the first of our metrics, we need to add a Y-axis metric. Use system.cpu.system.pct as the field here and select Average as the aggregation type. This will give you an overview of the average share of time your core(s) spent in system space at any given moment.

Visualizations: Configure Metrics

Now, to add more metrics to the Y-axis, hit the “Add metrics” button below your first entry twice and configure the two new entries:

  • Add an Average aggregation of the field system.cpu.user.pct.
  • Add an Average aggregation of the field system.cpu.iowait.pct.

NOTE: You can hit the “Play” button above your metrics at any time to take a look at the view you are creating.

Now there is just one piece missing for the chart to actually show something usable: the X-axis.

Scroll down a little to the Buckets section below your metrics.

  • Select Date Histogram aggregation.
  • Select @timestamp field.
  • Select Auto interval.
  • Hit the “Play” button again.

Voilà, you now have a view of your CPU usage. It should look a little like this:

Visualizations: Your first area map visualization

Perfect! Now hit “Save” at the top of the screen, give your baby a name and return to it, whenever you need some insight!

Using pre-built dashboards

You might have guessed it: Kibana is a great tool to visualize nearly any data you can imagine. MetricBeat ships with some pre-built modules (one of which we already use to send data: the system module). There are also a couple of pre-built dashboards you can use, so you do not have to build them yourself.

To “install” these dashboards into Kibana, we can use MetricBeat's command line interface.

Since we set server.host to our host's IP when setting up Kibana, but MetricBeat expects to reach Kibana on localhost, we have two options:

  • change the setting
  • declare the host, when installing the dashboards
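For the record, the second option would look roughly like this (192.0.2.10 is a placeholder IP; the -E flag overrides settings from metricbeat.yml for this one run):

```shell
# Import the dashboards into a Kibana instance at a specific address
sudo metricbeat setup --dashboards -E setup.kibana.host="192.0.2.10:5601"
```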

For this tutorial, let us change the setting to keep it simple. In /etc/kibana/kibana.yml, change the setting like this: server.host: "0.0.0.0"

Now we have to restart Kibana, so that it binds to all available IPs (which includes loopback as well).

$ sudo systemctl restart kibana
$ systemctl status kibana
● kibana.service - Kibana
   Loaded: loaded (/etc/systemd/system/kibana.service; disabled; vendor preset: enabled)
   Active: active (running) since Wed 2018-10-31 16:37:58 CET; 22s ago
 Main PID: 10326 (node)
    Tasks: 10 (limit: 2299)
   CGroup: /system.slice/kibana.service
           └─10326 /usr/share/kibana/bin/../node/bin/node --no-warnings /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

Oct 31 16:37:58 xyz systemd[1]: Started Kibana.

Now that Kibana listens on all IPs, we can tell MetricBeat to install the dashboards.

$ metricbeat setup
Loaded index template
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards

Now head over to your Kibana instance, click on “Dashboard” and search for “host”. The only search result will be [Metricbeat System] Host overview; click on it and you will find a screen like the following:

Visualizations: Premade Kibana Dashboard

Tah-dah. Instead of going through the hassle of creating your own dashboards, you can now use these to see what is going on on your machines.

If you want to learn more about how these dashboards were made, check out each individual visualization in the Visualize section of the Kibana navigation. It’s a TON of information to digest.


What we have learned in this post:

  • Adding ElasticSearch repositories
  • Installing java-jre, ElasticSearch, Kibana and MetricBeat
  • Configuring ElasticSearch, Kibana and MetricBeat
  • Setting up and saving our first Visualization.

As you can surely imagine, this was just scratching the surface very slightly. As of yet, there are still some things missing:

  • Configuring nginx as a reverse proxy for Kibana
  • Setting up SSL certificates to encrypt our connection.
  • Setting up a firewall and some rules to make the setup somewhat “safe” to use.
  • Setting up multiple input streams to create views for all the machines
  • Installing Logstash to complete our ELK setup!

There is much more you can do with ElasticSearch and Kibana. Also, for this to be a proper “ELK” stack setup, Logstash will still have to be installed.

I will cover all these in separate posts, so we can digest everything in small portions.

Thank you for reading!

If you feel like I made a mistake somewhere, or you’d like me to cover something else, shoot me a message!


  • 2018-10-31: Added information about pre-built dashboards and visualizations.
  • 2018-10-31: Changed the installed JRE to Version 8, since Logstash is not ready for Version 10.