Managing your own servers and services can be a fun and rewarding hobby. You get to know the software you are using better, you can help developers by providing feedback and bug reports, and you learn a ton about the OS of your choice! Even better than having your own servers is knowing what is going on — not only on the machines themselves, but on the network, and maybe who is trying to log in as “printserver” on port 22 ;)
In this post I will take you with me while installing ElasticSearch, Logstash and Kibana on my Ubuntu 18.04 VPS’s to which I just recently migrated from a big clunky root server.
Preparing and updating Ubuntu
In general it is a good idea to keep your OS up to date and secure. Since my VPS’s are installed image-based within a few seconds, I can invest the few extra seconds in doing an update right away:
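On a fresh image-based Ubuntu 18.04 install, that amounts to the usual two commands (a minimal sketch; run as root or prefix with sudo):

```shell
# Refresh the package lists and apply all pending upgrades
sudo apt update && sudo apt upgrade -y
```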
Installing java runtime environment
Since ElasticSearch is written in Java, we will have to install a compatible runtime environment for it to work. (In the first version of this post I installed the default-jre, which in Ubuntu 18.04 is version 10 before September 2018 and version 11 after September 2018. Unfortunately, neither is fully compatible with Logstash as of October 2018, so we will install version 8 instead.)
Let’s go ahead and install it, then check for the version to be sure everything runs as expected:
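A sketch using the OpenJDK 8 package as shipped in Ubuntu 18.04:

```shell
# Install the Java 8 runtime and verify the version
sudo apt install -y openjdk-8-jre
java -version
# Should report something along the lines of: openjdk version "1.8.0_..."
```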
(If your version differs, you might have a different version of the jre installed)
Next thing to do is get ElasticSearch. For this we have to get the apt key, so we can verify the downloaded packages are the ones we want. Then we create an apt repository file that holds the description of the ES repository. Finally, we update the package repositories and install the latest version of ElasticSearch.
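A sketch of those steps, following Elastic's Debian-package instructions for the 6.x repository that was current at the time of writing:

```shell
# apt needs this to fetch from an https repository
sudo apt install -y apt-transport-https

# Import Elastic's signing key so downloaded packages can be verified
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

# Create the apt repository file for the Elastic 6.x packages
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | \
  sudo tee /etc/apt/sources.list.d/elastic-6.x.list

# Update the package lists and install ElasticSearch
sudo apt update && sudo apt install -y elasticsearch
```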
You can then always verify which package got installed with apt policy like this:
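For example:

```shell
# Shows the installed version and the candidate from the Elastic repository
apt policy elasticsearch
```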
Start elasticsearch and get its system status:
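Via systemd (enabling the unit as well, so it survives reboots):

```shell
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch

# Check that the service came up cleanly
systemctl status elasticsearch
```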
Also, you can easily find out whether or not it is really working with a quick HTTP request:
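ElasticSearch listens on port 9200 by default; a healthy node answers with a small JSON document containing its name and version:

```shell
curl http://localhost:9200/
```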
After installing Kibana, you will have to change some settings in its config file:
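A sketch of the install plus the settings touched in this setup — the IP is of course yours to fill in, and `elasticsearch.url` is the Kibana 6.x name for the ES endpoint setting:

```shell
# Kibana comes from the same Elastic repository
sudo apt install -y kibana

# In /etc/kibana/kibana.yml, set the address Kibana should listen on
# and point it at the local ElasticSearch node:
#   server.host: "<ip-of-your-host>"
#   elasticsearch.url: "http://localhost:9200"

sudo systemctl enable kibana
sudo systemctl start kibana
```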
So there we go. Now we have ElasticSearch and Kibana at hand for a first discovery. Clearly, in this state there is very little we can do with them. First, we need some data. Let’s get our hands dirty on that too!
MetricBeat, also part of the ES repository, is installed just as easily as ElasticSearch or Kibana.
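One more apt call:

```shell
sudo apt install -y metricbeat
```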
After installing it, you already have a plethora of predefined config files sitting in /etc/metricbeat/modules.d/, waiting for you to set them up to your liking:
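In the 6.x packages, modules that are not active carry a .disabled suffix; a quick listing shows what is available:

```shell
ls -1 /etc/metricbeat/modules.d/
# system.yml is enabled by default; everything else ends in .disabled
```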
Default settings and first peek into indices
For the sake of the test I kept /etc/metricbeat/metricbeat.yml as it was shipped and only uncommented the socket metrics in the system module config.
Once we’ve set up MetricBeat and started the service, metrics start pouring into ElasticSearch. We can now also check whether or not ElasticSearch has an index for us:
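Starting the service and querying the index list might look like this:

```shell
# Start MetricBeat and make it survive reboots
sudo systemctl enable metricbeat
sudo systemctl start metricbeat

# Ask ElasticSearch which indices it currently holds
curl 'http://localhost:9200/_cat/indices?v'
```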
Nice! We have an index for metricbeat and there are already some documents in it! Let’s go and explore!
First view into kibana
Now that we have installed and set up the software part, let’s start a browser, point it to http://hostname.of-your.server:5601 and watch Kibana load for the first time.
On your first visit, you will be greeted with the dashboard.
Since we have not configured anything as of yet, let’s quickly head over to Management in the navigation on the left and create our first index pattern.
As you can see in this picture, there is an index called metricbeat-6.2.4.... right below the input field. This is a list of indices that might match what you entered.
Since we only have one index available from our local MetricBeat monitoring, just type metricbeat-* into the input. The pattern will match our index and you will be taken to the next step.
After you have set up your first index pattern, you can change to the Discover tab.
Here you can now select your first field, @timestamp, and add it to filter your data. It should then look like the following screenshot:
Since this is heavily unstructured data and we want to aggregate multiple VPS instances to get a general overview of all servers, let’s add some other fields, namely metricset.name. This gives us the opportunity to visualize the data with regard to these fields.
Now that we have some simple structured data (timestamp, hostname, module, metricset.name), change to the Visualize tab in the navigation. You will be presented with a screen like the following, because there is not yet any search set up. That is what we are going to do now.
The powers of Kibana: Creating informative visualizations
Kibana offers a vast amount of different visualization options — area charts, heat maps, pie charts, gauges and many others for you to play around with. This is especially nice if you want to create your own panels with different information. For the time being we will only create an area chart, to get a better understanding of what Kibana does.
Once you have selected “create a new search”, you will find all the available visualization types listed. Choose “area” for now, as the first visualization we are going to set up is an overview of CPU usage on the local machine.
Configuring your first visualization
Knowing how many resources your machines use at any given time helps you recognize bottlenecks and adjust the power your VPSs get to use. (That is, if you have the option to scale the machines.)
To create such an overview, an area chart is a pretty nice fit, since you can easily layer multiple measurements, which creates a good, comparable view of the data.
For the first part of our metrics, we need to add a Y-Axis metric. Use the field value here and select Average as the aggregation type.
This will give you an overview of the average time your core(s) spent in system space at a given time.
Now, to add more metrics to the Y-Axis, hit the add metrics button below your first entry twice, then configure the two new entries:

- Add an Average aggregation of the second CPU field
- Add an Average aggregation of the third CPU field
NOTE: You can hit the “play” button above your metrics at any time to take a look at the view you are creating.
Now there is just one metric missing for the chart to actually show something usable: the X-Axis.
Scroll down a little to the Buckets section of your metrics, set up the X-Axis there, and hit the “Play” button again.
Voilà, you now have a view of your CPU usage. It should look a little like this:
Perfect! Now hit “Save” at the top of the screen, give your baby a name and return to it, whenever you need some insight!
Using pre-built dashboards
You might have guessed: Kibana is a great tool to visualize nearly any data you can imagine. MetricBeat ships with some pre-built modules (one of which we already use to send data: the system module). There are also a couple of pre-built dashboards that can be used, so you do not have to build them yourself.
To “install” these dashboards in Kibana, we can use the command line tool of metricbeat.
Since we set the server.host variable to our host’s IP when setting up Kibana, we have two options:

- change the setting
- declare the host when installing the dashboards
For this tutorial, let us change the setting to keep it simple.
In /etc/kibana/kibana.yml, change the setting like this:
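Binding to 0.0.0.0 makes Kibana listen on every interface, loopback included:

```yaml
# /etc/kibana/kibana.yml
server.host: "0.0.0.0"
```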
Now we have to restart Kibana, so that it binds to all available IPs (which includes loopback as well).
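Again via systemd:

```shell
sudo systemctl restart kibana
```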
Now that Kibana listens on all IPs, we can tell MetricBeat to install the dashboards:
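The setup subcommand of the MetricBeat 6.x package takes care of loading the dashboards into Kibana:

```shell
sudo metricbeat setup --dashboards
# If you had gone with option two instead, the Kibana host can be
# overridden on the command line, e.g.:
#   sudo metricbeat setup --dashboards -E setup.kibana.host=<ip>:5601
```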
Now head over to your Kibana instance and click on “Dashboards”, then search for “host”.
The only search result will be [Metricbeat System] Host overview; click on it and find a screen like the following:
Tah-dah. Instead of going through the hassle of creating your own dashboards, you can now use these to see what is going on on your machines.
If you want to learn more about how these dashboards were made, check out each individual visualization in the Visualize section of the Kibana navigation. It’s a TON of information to digest.
What we have learned in this post:
- Adding ElasticSearch repositories
- Installing java-jre, ElasticSearch, Kibana and MetricBeat
- Configuring ElasticSearch, Kibana and MetricBeat
- Setting up and saving our first Visualization.
As you can surely imagine, this was just scratching the surface very slightly. As of yet, there are still some things missing:
- Configuring nginx as a reverse proxy for Kibana
- Setting up SSL-Certificates to encrypt our connection.
- Setting up a firewall and some rules to make the setup somewhat “safe” to use.
- Setting up multiple input-Streams to create views for all the machines
- Installing Logstash to complete our elk Setup!
There is much more you can do with ElasticSearch and Kibana. Also, for this to be an “elk” Stack setup, Logstash will have to be installed.
I will cover all these in separate posts, so we can digest everything in small portions.
Thank you for reading!
If you feel like I made a mistake somewhere, or you’d like me to cover something else, shoot me a message!
- 2018-10-31: Added information about pre-built dashboards and visualizations.
- 2018-10-31: Changed the installed JRE to Version 8, since Logstash is not ready for Version 10.