This repository has been archived by the owner on Aug 1, 2024. It is now read-only.
How to install the Elastic Stack using Docker Compose

Prerequisites

You need to have Docker and Docker Compose running on a Linux box.

If you are on a Windows 10/11 machine, the Linux box can be a WSL2 instance.

If you are not familiar with the procedure, see this article 📗 on how to install WSL2 on Windows 10/11.

The remainder of this document shows how to install the Elastic Stack on Windows 11 using WSL2 and Docker Desktop running on it.

The installation

There are several ways to install and run the Elastic Stack on a development machine. We will install the Elastic Stack using Docker Compose as inspired by the articles of Eddie Mitchell.

The environment used for this installation is a Windows 11 machine with WSL2 and Docker Desktop installed.

The necessary files have been copied into the /docker folder so that we can run them directly without cloning Mitchell's repository, and so that we can alter them as needed in the future.

The files we might need to access and change have also been grouped under the solution folder docker:

  • .env: this file contains the environment variables that will be used by Docker Compose.

    1. ⚠️ The .env file needs to be created manually, as it is not part of the repository. Without this file, Docker Compose will not work.
    2. To create a valid .env file, copy the contents of the .env.example file and paste them into a new file named .env. Then, update the values of the variables as needed.
  • .env.example: this file contains a complete example of all the options available.

    • It is not used by Docker Compose; treat it only as a reference when creating the .env file.

⚠️ The default username/password for Kibana and Elasticsearch is elastic/changeme. You can change such values in your .env file.
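The steps above can be sketched as a short shell snippet, run from the docker folder. The variable names ELASTIC_PASSWORD and KIBANA_PASSWORD are assumptions based on Mitchell's example setup; check your own .env.example and adjust accordingly:

```shell
# Create .env from the example file (it is not tracked in the repository)
cp .env.example .env

# Replace the default 'changeme' password with one of your own.
# ELASTIC_PASSWORD / KIBANA_PASSWORD are assumed variable names; verify them in .env.example.
sed -i 's/^ELASTIC_PASSWORD=.*/ELASTIC_PASSWORD=MyS3cret/' .env
sed -i 's/^KIBANA_PASSWORD=.*/KIBANA_PASSWORD=MyS3cret/' .env
```
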

  • docker-compose.yml: this file contains the configuration for Docker Compose. The compose file will allow us to run the Elastic Stack on a single machine and get access to the Elasticsearch, Kibana, Logstash, Filebeat, and Metricbeat.

The remaining files let you configure options of the individual services. It's unlikely that you will need to change any of them at the beginning.

  • filebeat.yml: This file contains the configuration for Filebeat. It is used by the Filebeat container to collect and ship logs to Elasticsearch. It is unlikely you will need to change this file.

  • kibana.yml: This file contains the configuration for Kibana. It is used by the Kibana container to connect to Elasticsearch.

  • logstash.yml: This file contains the configuration for Logstash. It is used by the Logstash container to connect to Elasticsearch. It is unlikely you will need to change this file.

  • metricbeat.yml: This file contains the configuration for Metricbeat. It is used by the Metricbeat container to collect and ship metrics to Elasticsearch. It is unlikely you will need to change this file.

  • README.md: this file.
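For orientation, here is a heavily trimmed sketch of what one service entry in docker-compose.yml looks like. The values shown are illustrative placeholders, not the actual file contents; refer to the real docker-compose.yml in the docker folder:

```yaml
# Illustrative fragment only -- see the real docker-compose.yml for the full definition
services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    environment:
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
    ports:
      - ${ES_PORT}:9200
```

Note how the service reads its version, password, and port from the variables defined in the .env file, which is why that file must exist before you run Docker Compose.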

High level overview

From a high-level perspective, the Elastic Stack we will launch is composed of the following components:

  • Elasticsearch (es01): This is the heart of the Elastic Stack, acting as a powerful and scalable search engine. It stores, indexes, and retrieves data, allowing for fast and efficient searching and analysis.

  • Kibana (kibana): It serves as the visualization layer in the Elastic Stack. It offers a user-friendly interface to visualize data stored in Elasticsearch. With Kibana, you can create and share dashboards, charts, and reports, making data analysis accessible and insightful, even for those new to data analytics.

  • Logstash (logstash01): A data processing pipeline that ingests, transforms, and sends data to Elasticsearch. Logstash allows you to collect data from various sources, process it with a wide range of filters, and enhance it before it gets indexed in Elasticsearch.

  • Filebeat (filebeat01): Acting as a lightweight log shipper, Filebeat forwards log data from multiple sources directly to Elasticsearch or Logstash. It simplifies data collection, is resource-efficient, and is ideal for collecting and aggregating log data in real time.

  • Metricbeat (metricbeat01): Similar to Filebeat but focused on metrics, Metricbeat collects various system and service metrics. It's essential for real-time monitoring of servers and services, providing valuable insights into their performance and health.

The running environment

Once you launch the Docker Compose file, you will have access to Kibana, from where you will do most of your work:

The following sections will guide you through the installation process.

The configuration files

Make sure you created the .env file as explained above.

Step 1/3: Ensure the vm.max_map_count setting is set to at least 262144

As explained at the beginning of this document, in these instructions we are using Docker Desktop on top of WSL2. Elasticsearch will run in a container on the Linux host, not on Windows.

When setting up Elasticsearch on Linux, it's essential to configure the vm.max_map_count kernel setting on the Linux host to at least 262144. This setting is critical for Elasticsearch to start up and function.

⚠️ Once again: this change has to be made on the Linux machine running Docker, not inside the container nor on Windows.

There are two ways to set vm.max_map_count:

  1. Temporary:

    • This method is quick and useful for testing purposes. The setting can be changed temporarily by executing a command on your Docker host. It's an immediate change but won't persist after a system reboot. Here's how to do it:
    # Set vm.max_map_count temporarily
    $ sudo sysctl -w vm.max_map_count=262144

    This approach is ideal when you need to quickly set up Elasticsearch for short-term use or testing, without the need for the setting to persist after a reboot.

  2. Permanent (recommended):

    • For long-term use, especially in containerized environments like Docker, you'll want this setting to be permanent. This requires editing a system configuration file to ensure the setting persists across reboots and container restarts. Follow these steps:
    # Edit the sysctl configuration file for persistent changes
    $ echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
    
    # Apply the changes without rebooting
    $ sudo sysctl -p
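With either method, you can verify the active value on the Linux host; reading /proc works even where the sysctl binary is not on the PATH:

```shell
# Print the currently active value; it should be 262144 or higher
cat /proc/sys/vm/max_map_count
```
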

Additional information can be found here and in Eddie Mitchell's original article.

Step 2/3: Launch the docker compose file

Launch a terminal and navigate to the docker directory of this repo. Then run the following command:

$ docker-compose up

Be prepared to wait a minute or two for the containers to start up. In the end your terminal should display something like this:

Once the containers are up and running and have finished their initial setup, you will be able to access the Kibana UI at https://localhost:5601 and the Elasticsearch API at https://localhost:9200.

And, from inside Docker Desktop, our Compose stack should look like this:

Step 3/3: Adjust the Settings of Elastic Agent

Now that Elasticsearch and Kibana are running we can apply our last configuration step: adjust the settings of the Elastic Agent which is currently not working as expected.

To see the problem, click on 'Management -> Fleet':

In the Fleet management screen you should now see the following issues: CPU and Memory are not reading correctly. This is because, by default, our Elastic Agent is attempting to log data to a local Elasticsearch instance, which is not correct for our Docker environment.

We will need to perform a couple of updates in the Fleet -> Settings UI in order to resolve this. Click on the 'Settings' tab and then the edit action (green circle):

This should display the following. Notice the red circles.

We now need to change three values:

  1. Hosts:

    • These should point at the Elasticsearch container over HTTPS (e.g. https://es01:9200) rather than at localhost.
  2. The CA fingerprint:

    • We'll need to get the CA fingerprint from the cluster, as explained in the next section.
  3. Advanced YAML configuration:

    • We'll need to get the CA certificate from the cluster, as explained in the next section.

How to get the CA certificate from the cluster?

Run the following command to pull the CA certificate from the cluster:

docker cp es-cluster-es01-1:/usr/share/elasticsearch/config/certs/ca/ca.crt /tmp/.

Note: The container name in this command will differ based on the directory you're running the docker-compose.yml file from, or on the COMPOSE_PROJECT_NAME variable specified in the .env file.

Next, we will need to get the fingerprint of the certificate. For this, we can use an OpenSSL command:

openssl x509 -fingerprint -sha256 -noout -in /tmp/ca.crt | awk -F"=" '{print $2}' | sed 's/://g'

This will produce a value similar to:

C8EEE11A0713CF5E3E49979A548F1D133DE0ED4A9263DA43AE039A883F94A726

Finally, we need to get the whole certificate into YAML format. We can do this with a cat command or just by opening the certificate in a text editor:

cat /tmp/ca.crt        
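If you want to sanity-check the fingerprint pipeline without a running cluster, you can run the same commands against a throwaway self-signed certificate (assumes openssl is installed; the /tmp paths are arbitrary):

```shell
# Generate a throwaway self-signed certificate (2048-bit RSA, valid 1 day)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=test" \
  -keyout /tmp/test-key.pem -out /tmp/test-cert.pem -days 1 2>/dev/null

# Compute its SHA-256 fingerprint and strip the colons, as done for ca.crt above
openssl x509 -fingerprint -sha256 -noout -in /tmp/test-cert.pem \
  | awk -F"=" '{print $2}' | sed 's/://g'
```

The output should be a 64-character hexadecimal string, the same shape as the cluster fingerprint shown above.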

The correct settings

The final settings should look like this (ignore the fingerprint):

Don't forget to click “Save and Apply Settings” -> “Save and Deploy.”

Your agent should now be running and reporting data to Elasticsearch correctly.

And dashboards should work properly:

Final considerations

  • These instructions have been tested on Windows, using WSL2 and Docker Desktop.

Resources

  1. Getting started with the Elastic Stack and Docker Compose: Part 1

    1. The GitHub repo for this article can be found here

  2. Getting started with the Elastic Stack and Docker Compose: Part 2

    1. The Github repo for this article can be found here
  3. Install Elasticsearch with Docker

  4. Install Kibana with Docker