
Monitoring a Docker Container with Elasticsearch, Kibana, and Metricbeat

Yashadatt Sawant

Cloud & DevOps

Since you are on this page, you have probably already started using Docker to deploy your applications and prefer it to virtual machines because it is lightweight, easy to deploy, and offers strong isolation and security controls.

And once the applications are deployed, monitoring your containers and tracking their activity in real time becomes essential. Imagine a scenario where you are managing one or a few virtual machines. You can log into a node directly and, if you face any problem in production, troubleshoot it with a handful of commands such as top, htop, and iotop, using flags like -o %CPU or -o %MEM to sort the output.

On the other hand, consider a scenario where the same workloads are spread across 100-200 containers. You need to see all activity in one place and be able to query it to find out what happened. This is where monitoring comes into the picture. We will discuss more of its benefits as we move further.
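To make the contrast concrete, here is a quick sketch of the single-host approach described above (docker stats, the container-level counterpart, is an addition not mentioned in the text; the rest are the standard tools named earlier):

    # Sort running processes by CPU or memory usage on a single host
    top -o %CPU
    htop              # interactive process viewer
    sudo iotop        # per-process disk I/O (requires root)

    # Container-level counterpart: live CPU, memory, network, and block I/O per container
    docker stats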

This blog covers Docker monitoring with Elasticsearch, Kibana, and Metricbeat. In short, Elasticsearch is a platform for distributed search and analysis of data in real time; Kibana is an open-source interface used mainly to visualize the data stored in Elasticsearch; and Metricbeat is a lightweight shipper that collects metrics from your system and sends them to the target of your choice (Elasticsearch in this case). We'll discuss how these work together as we move ahead.

What is Docker Monitoring?

In simple terms, monitoring containers means keeping track of metrics such as CPU, memory, network, and disk usage and analyzing them to ensure the performance of applications built on microservices, and to keep track of issues so that they can be resolved more easily. This monitoring is vital for performance improvement and optimization and for finding the root cause (RCA) of various issues.

There is a lot of software available for monitoring Docker containers, both open-source and proprietary, such as Prometheus, AppOptics, Metricbeat, Datadog, Sumo Logic, etc.

You can choose any of these based on convenience. 

Why is Docker Monitoring needed?

  1. Monitoring helps detect and fix issues early, avoiding breakdowns in production.
  2. New features and updates can be rolled out safely because the entire application is being monitored.
  3. Docker monitoring benefits developers, IT pros, and enterprises alike.
  • For developers, Docker monitoring tracks bugs and helps resolve them quickly, while also enhancing security.
  • For IT pros, it allows flexible integration with existing processes and enterprise systems and satisfies their requirements.
  • For enterprises, it helps build applications in certified containers within a secure ecosystem that runs smoothly.

Elasticsearch:

Elasticsearch is a free and open-source platform that provides distributed search and analysis of data in real time, along with visualization. It integrates well with a large number of technologies, such as Metricbeat, Kibana, etc. Let's move on to the installation of Elasticsearch.

Installation of Elasticsearch:

Prerequisite: Elasticsearch is written in Java, so make sure your system has at least Java 8 installed to run Elasticsearch.

For installing Elasticsearch for your OS, please follow the steps at Installing Elasticsearch | Elasticsearch Reference [7.11].

After installing, check the status of Elasticsearch by sending an HTTP request to port 9200 on localhost:

http://localhost:9200/

This will give you a response as below:

Elasticsearch connect output
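You can also check from the command line; for example (the exact fields in the response depend on your installation and version):

    # Quick check: Elasticsearch answers on port 9200 with cluster, node, and version info
    curl http://localhost:9200/

    # Cluster health summary (status is typically yellow or green on a single dev node)
    curl "http://localhost:9200/_cluster/health?pretty"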

You can configure Elasticsearch by editing $ES_HOME/config/elasticsearch.yml.

Learn more about configuring Elasticsearch here.
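As a minimal sketch, a single-node development setup might use settings like these in elasticsearch.yml (all values below are illustrative; the defaults are often fine):

    # Illustrative elasticsearch.yml settings for a single-node dev cluster
    cluster.name: docker-monitoring
    node.name: node-1
    network.host: 127.0.0.1
    http.port: 9200
    discovery.type: single-node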

Now, we are done with the Elasticsearch setup and are ready to move on to Kibana.

Kibana:

Like Elasticsearch, Kibana is also open-source software. Kibana is an interface mainly used to visualize the data stored in Elasticsearch. Kibana lets you query that data and generate numerous visuals as per your requirements, presenting enormous amounts of data as line charts, bar charts, gauges, and many other kinds of graphs.

Let's cover the installation steps of Kibana.

Installing Kibana

Prerequisites: 

  • Elasticsearch up and running (use the same version as Kibana, e.g., 7.11)
  • A web browser such as Chrome or Firefox

(Kibana ships with its own Node.js runtime, so no separate Java installation is needed.)

For installing Kibana with respect to your OS, please follow the steps at Install Kibana | Kibana Guide [7.11]

Kibana runs on port 5601 by default. Just send an HTTP request to port 5601 on localhost by opening http://localhost:5601/ in your browser.

You should land on the Kibana dashboard, and it is now ready to use:

Kibana homescreen
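If you prefer the command line, Kibana also exposes a status endpoint (the response format may differ across versions):

    # Returns the overall Kibana status and per-plugin statuses as JSON
    curl -s http://localhost:5601/api/status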

You can configure Kibana by editing $KIBANA_HOME/config/kibana.yml. For more about configuring Kibana, visit here.
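A minimal sketch of common kibana.yml settings (the defaults usually work for a local setup; the values below are illustrative):

    # Illustrative kibana.yml settings for a local setup
    server.port: 5601
    server.host: "localhost"
    elasticsearch.hosts: ["http://localhost:9200"]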

Let's move on to the final part: setting up Metricbeat.

Metricbeat

Metricbeat is a lightweight shipper that periodically collects metrics from your system and forwards them to a destination of your choice.

You can simply install Metricbeat on your systems or servers to periodically collect metrics from the OS and from the services running on them. The collected metrics are shipped to the output you specify, e.g., Elasticsearch or Logstash.

Installing Metricbeat

For installing Metricbeat according to your OS, follow the steps at Install Metricbeat | Metricbeat Reference [7.11].
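Before starting the service, you typically enable Metricbeat's docker module and point the output at Elasticsearch. A rough sketch of the module config in modules.d/docker.yml (the metricsets and period shown are illustrative; the Docker socket path is the default):

    # modules.d/docker.yml -- enable it with: metricbeat modules enable docker
    - module: docker
      metricsets: ["container", "cpu", "memory", "network", "diskio"]
      hosts: ["unix:///var/run/docker.sock"]
      period: 10s

With output.elasticsearch.hosts in metricbeat.yml pointing at http://localhost:9200, you can optionally run metricbeat setup to load the index templates and sample dashboards, and then start the Metricbeat service.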

As soon as we start the Metricbeat service, it sends Docker metrics to Elasticsearch indices, which can be confirmed by curling the Elasticsearch indices with the command:

CODE: https://gist.github.com/velotiotech/4e3bf8528e0e90ceed30d9b7973aed50.js

How Are They Internally Connected?

We have now installed all three, and they are up and running. At the interval configured in the docker module (docker.yml), Metricbeat polls the Docker API and sends the Docker metrics to Elasticsearch. Those metrics are then available in the various Elasticsearch indices. As mentioned earlier, Kibana queries the data in Elasticsearch and visualizes it in the form of graphs. This is how all three are connected.

Please refer to the flow chart for more clarification:

Internal connection

How to Create Dashboards?

Now that we are aware of how these three tools work together, let's create dashboards to monitor our containers and understand them.

First of all, open the Dashboards section on Kibana (localhost:5601/) and click the Create dashboard button:

Available dashboards


You will be directed to the next page:

Choose the type of visualization you want from all options:

New visualization

For example, let's go with Lens.

(Learn more about Kibana Lens)

Here, we will plot the number of containers against time by selecting the timestamp on the X-axis and the unique count of docker.container.created on the Y-axis.

As soon as we have selected both parameters, it will generate a graph as shown in the snapshot, and we get the count of created containers (here Count=1). If you create more containers on your system, the graph and the counter will update once those metrics reach Elasticsearch. In this way, you can monitor how many containers are created over time. In similar fashion, depending on your monitoring needs, you can choose a parameter from the left panel listing the available fields, such as:

activemq.broker.connections.count

docker.container.status

docker.container.tags

Stacked bar output
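Under the hood, that Lens chart corresponds roughly to an Elasticsearch aggregation like the one below (the metricbeat-* index pattern and the one-minute interval are assumptions; adjust them for your setup):

    # Unique count of docker.container.created per minute, similar to the Lens chart above
    curl -s -X POST "localhost:9200/metricbeat-*/_search?size=0&pretty" \
      -H 'Content-Type: application/json' -d'
    {
      "aggs": {
        "per_minute": {
          "date_histogram": { "field": "@timestamp", "fixed_interval": "1m" },
          "aggs": {
            "containers": { "cardinality": { "field": "docker.container.created" } }
          }
        }
      }
    }'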

Now, we will show one more example of how to create a bar graph:

Vertical bar

As mentioned above, to create a bar graph, just choose "Vertical bar" from the options shown in the snapshot. Here, I'm building a bar graph of document count vs. metricset name (network, filesystem, cpu, etc.). As shown in the snapshot, choose Count as the Y-axis parameter and metricset.name as the X-axis parameter.

Using parameters

After hitting enter, a graph will be generated: 

Vertical graph output
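The equivalent query behind this visualization is roughly a terms aggregation (again assuming the metricbeat-* index pattern):

    # Document count per metricset name, similar to the vertical bar chart above
    curl -s -X POST "localhost:9200/metricbeat-*/_search?size=0&pretty" \
      -H 'Content-Type: application/json' -d'
    {
      "aggs": {
        "by_metricset": { "terms": { "field": "metricset.name" } }
      }
    }'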

Similarly, you can combine multiple parameters with different types of graphs for monitoring. Now, we will move on to the most important and widely used tool for tracking warnings, errors, and more: Discover.

Discover for Monitoring:

Basically, Discover provides deep insight into your data and lets you apply searches and filters on top of it. With it, you can show only the processes that are taking the most time, filter out errors by querying the message field for the value ERROR, check the health of a container, or check for logged-in users. These kinds of queries, much like SQL queries against a database, return exactly the data you need, which makes for effective monitoring of containers.

(More about Discover here.)

To apply filters, just click "Filter by type" in the left panel, and you will see all the available filtering options. From there, you can select one as per your requirements and view the results in the central panel.

Discover for monitoring

Similar to filters, you can choose which fields are shown in the document table from the left panel, under "Selected fields" right below the filters. (Here, we have selected only the Source field.)

Discover logs

Now, if you take a look at the top part of the snapshot, you will find the search bar. This is the most useful part of Discover for monitoring.

Discover Searchbar

In that bar, you just need to enter a query, and the logs will be filtered accordingly. For example, I will query for error messages equal to "No memory stats data available".

When we hit the Update button on the right side, only the logs containing that error message remain, highlighted for differentiation, as shown in the snapshot. All other logs are hidden. In this way, you can track a particular error and confirm that it no longer occurs after you have fixed it.

Discover search results
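A few example queries you could type into this bar, in KQL syntax: an exact phrase match on the error message, a filter on a single metricset, and a wildcard on the container name (the field names below are typical Metricbeat fields and may differ in your setup):

    error.message : "No memory stats data available"
    metricset.name : "memory"
    docker.container.name : nginx*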

In addition to structured queries, Discover also provides keyword search. If you enter a word like warning, error, memory, or user, it will return the logs containing that word, like "memory" in the snapshot:

Search result example


Besides Kibana, logs are also available in the terminal. For example, the following highlighted portion is about the state of your cluster. In the terminal, you can use a simple grep command to pull out the logs you need.

Terminal output
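For example (log paths vary by installation method, and my-container is a placeholder name):

    # Cluster health transitions logged by Elasticsearch (path for a package install)
    grep -i "health status changed" /var/log/elasticsearch/*.log

    # Filter a specific container's own logs for warnings and errors
    docker logs my-container 2>&1 | grep -iE "warn|error"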

With this setup, you can monitor Docker containers using a wide range of queries, including nested queries in Discover. There are many different graphs you can try, depending on your requirements, to keep your application running smoothly.

Conclusion

Monitoring requires a lot of time and effort. What we have seen here is a drop in the ocean. For some next steps, try:

  1. Monitoring the network
  2. Aggregating logs from your different applications
  3. Aggregating logs from multiple containers
  4. Setting up and monitoring alerts
  5. Nested queries for logs

