How To Set Up ELK & Filebeat with Authorization

What is ELK?

ELK stands for Elasticsearch, Logstash, and Kibana.

  • Elasticsearch is a search and analytics engine.
  • Logstash is a data processing pipeline that ingests data from various sources, transforms it, and sends it to Elasticsearch (or any other "stash").
  • Kibana is a visualization tool.

Filebeat

Filebeat is a lightweight tool used for forwarding and centralizing log data. Logs can be forwarded to Elasticsearch or Logstash.

In this post we will set up the ELK stack in Docker and ship the logs of all containers running on the server to it using Filebeat.

Flow Diagram

The diagram below depicts the flow of getting the Docker logs into Elasticsearch and eventually visualizing them in Kibana.

Docker writes each container's logs to a location on disk, and Filebeat picks them up and ships them to Elasticsearch. Here Logstash is replaced with Filebeat, since Filebeat is lightweight whereas Logstash tends to consume a lot of resources. Despite being lightweight, Filebeat still supports the processing features we need for this setup.

Installation

There are many ways to install Filebeat, Elasticsearch, and Kibana. To make things as simple as possible, we will use docker-compose to set them up. There will be a single Elasticsearch node and we will use the official Docker images.

The docker compose file docker-compose.yml looks like this:

version: "3"
services:
    elasticsearch:
        image: "docker.elastic.co/elasticsearch/elasticsearch:7.2.0"
        environment:
            - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
            - "discovery.type=single-node"
        ports:
            - "9200:9200"
        volumes:
            - elasticsearch_data:/usr/share/elasticsearch/data

    kibana:
        image: "docker.elastic.co/kibana/kibana:7.2.0"
        ports:
            - "5601:5601"

    filebeat:
        image: "docker.elastic.co/beats/filebeat:7.2.0"
        user: root
        volumes:
            - MY_WORKDIR/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
            - /var/lib/docker:/var/lib/docker:ro
            - /var/run/docker.sock:/var/run/docker.sock

volumes:
    elasticsearch_data:

Next we need to create the Filebeat config file, filebeat.yml:

filebeat.inputs:
- type: container
  paths:
    - '/var/lib/docker/containers/*/*.log'

processors:
- add_docker_metadata:
    host: "unix:///var/run/docker.sock"

- decode_json_fields:
    fields: ["message"]
    target: "json"
    overwrite_keys: true

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  indices:
    - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

logging.json: true
logging.metrics.enabled: false

Now we can bring the stack up with Docker Compose so that the containers become available.
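Assuming docker-compose.yml and filebeat.yml are in the current working directory, a minimal sequence to start the stack and confirm that the containers are running looks like this:

# start all three services in the background
docker-compose up -d

# list the running containers; elasticsearch, kibana and filebeat should appear
docker ps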

Kibana can then be opened in the web browser at http://localhost:5601.

Once the UI loads, Kibana has to be configured with an index pattern.

Out of the list of indices, filebeat-* needs to be selected in order to see the logs shipped by Filebeat.
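If you prefer to script this step instead of clicking through the Kibana UI, the index pattern can also be created through Kibana's saved objects API; a minimal sketch, assuming Kibana is reachable at localhost:5601:

# create a filebeat-* index pattern with @timestamp as the time field
curl -X POST "http://localhost:5601/api/saved_objects/index-pattern" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"attributes": {"title": "filebeat-*", "timeFieldName": "@timestamp"}}'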

Authorization

What is xpack?

The Elastic Stack extension X-Pack offers a variety of features, including security, alerting, monitoring, reporting, machine learning, and many others. Elasticsearch comes with X-Pack pre-installed by default. With the free basic license, we have access to the core security features (available from versions 6.8 and 7.1 onwards).

Generation of SSL Certificate with Key

We need to create an SSL certificate in order to use X-Pack security. To do so, we will run a temporary Elasticsearch container and destroy it once the certificate has been generated.

Run the command below to start an Elasticsearch container:

docker run -d -e "discovery.type=single-node" \
    docker.elastic.co/elasticsearch/elasticsearch:7.8.0

We can check whether the container is running by executing the docker ps command. After that, we will execute the following command to get into the container through an interactive bash shell:

docker exec -it container_name bash

Check the present working directory by running the pwd command. It should be /usr/share/elasticsearch.

Run the following command to generate the certificate using the certutil utility inside the container:

bin/elasticsearch-certutil ca

Here ca stands for certificate authority.

Ignore any warnings; the utility will prompt for an output filename and a password. You can provide them if needed, but here we will keep the defaults by simply pressing Enter at each prompt. This leaves the certificate file named elastic-stack-ca.p12.

Check that the certificate has been generated using the ls command. Once that is confirmed, we can leave the container with the exit command.

Next, we need to copy the certificate to the local directory by executing the following:

docker cp container_name:/usr/share/elasticsearch/elastic-stack-ca.p12 .

Then you can remove the temporary Elasticsearch container.
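For reference, removing the temporary container can be done with docker rm; container_name below stands for whatever name Docker assigned to (or you gave) the container:

# stop and remove the container used only for certificate generation
docker rm -f container_name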

Modifying the docker-compose.yml file

We need to modify the docker-compose.yml like below:

version: "3"
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.8.0
    container_name: elasticsearch
    environment:
      - node.name=elasticsearch
      - discovery.seed_hosts=elasticsearch
      - cluster.initial_master_nodes=elasticsearch
      - cluster.name=docker-cluster
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms2g -Xmx2g"
      - xpack.security.enabled=true
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.keystore.type=PKCS12
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.keystore.path=elastic-stack-ca.p12
      - xpack.security.transport.ssl.truststore.path=elastic-stack-ca.p12
      - xpack.security.transport.ssl.truststore.type=PKCS12
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./elastic-stack-ca.p12:/usr/share/elasticsearch/config/elastic-stack-ca.p12
      - esdata1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200

  kibana:
    image: docker.elastic.co/kibana/kibana:7.8.0
    container_name: kibana
    environment:
      ELASTICSEARCH_URL: "http://elasticsearch:9200"
      ELASTICSEARCH_USERNAME: "kibana_system"
      ELASTICSEARCH_PASSWORD: "changeme"
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch

  filebeat:
    image: "docker.elastic.co/beats/filebeat:7.8.0"
    user: root
    environment:
      - "ELASTICSEARCH_HOST=http://elasticsearch:9200"
      - "KIBANA_HOST=http://kibana:5601"
      - "ELASTICSEARCH_USERNAME=elastic"
      - "ELASTICSEARCH_PASSWORD=changeme"
    volumes:
      - /home/decoders/Documents/fileBeat/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml:ro
      - /var/lib/docker:/var/lib/docker:ro
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - elasticsearch

volumes:
  esdata1:
    driver: local
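Before starting the stack, the modified file can be sanity-checked; docker-compose expands and validates it without starting any containers:

# validate the compose file; indentation or quoting mistakes will surface here
docker-compose config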


Modifying the filebeat.yml file

filebeat.inputs:
  - type: container
    paths:
      - "/var/lib/docker/containers/*/*.log"

processors:
  - add_docker_metadata:
      host: "unix:///var/run/docker.sock"

  - decode_json_fields:
      fields: ["message"]
      target: "json"
      overwrite_keys: true

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  username: "elastic"
  password: "changeme"
  indices:
    - index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

logging.json: true
logging.metrics.enabled: true
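Once the whole stack is running (see the next section), the Filebeat configuration and its connection to the secured Elasticsearch can optionally be verified from inside the container; a small sketch using Filebeat's built-in test commands:

# check the syntax of the mounted filebeat.yml
docker-compose exec filebeat filebeat test config

# check that Filebeat can reach and authenticate against Elasticsearch
docker-compose exec filebeat filebeat test output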

Generating passwords for the default users

In order to do this, we need to run only Elasticsearch, which can be achieved through the following command:

docker-compose up -d elasticsearch

After it is running, we need to go inside the container to generate the passwords.

Execute the previous command we used for getting into the container.
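Since the compose file sets container_name: elasticsearch, the command is:

docker exec -it elasticsearch bash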

After getting into the bash shell, execute the following:

bin/elasticsearch-setup-passwords interactive

The command will prompt for passwords for the different built-in users. You can choose the passwords as you need. (Remember that the passwords for the elastic and kibana_system users must match the ones given in the docker-compose.yml and filebeat.yml files.)

Then exit the container, go to http://localhost:9200 and check whether it prompts for a username and password.
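This can also be checked from the command line; a quick sketch, assuming the elastic user's password was set to changeme as in the compose file:

# without credentials this should return a 401 security_exception
curl -i http://localhost:9200

# with valid credentials it should return the cluster information
curl -u elastic:changeme http://localhost:9200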

If it does, then start Kibana and Filebeat too.
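With the same compose file, the remaining services can be started like this:

docker-compose up -d kibana filebeat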

Logging in to Kibana

If you go to Kibana (http://localhost:5601), you'll now see a login page.

You need to log in with the elastic user's username and password.

If you click on the Stack Management section in the left panel, you'll see the management window.

Under the Security menu in the left panel, you can create, update, and delete users and roles as required.
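The same can also be done through Elasticsearch's security API instead of the UI; a minimal sketch that creates a hypothetical user (the username, password, and role below are only examples):

# create a user named log_viewer with the built-in kibana_user role
curl -u elastic:changeme -X POST "http://localhost:9200/_security/user/log_viewer" \
  -H "Content-Type: application/json" \
  -d '{"password": "a-strong-password", "roles": ["kibana_user"], "full_name": "Log Viewer"}'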
