
Kubernetes - Using ELK Stack for Logging

Monitoring and Logging in Kubernetes

Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers. This guide explains how to use the ELK Stack (Elasticsearch, Logstash, and Kibana) for logging in Kubernetes, which is essential for maintaining the observability and debuggability of your applications.

Key Points:

  • The ELK Stack (Elasticsearch, Logstash, and Kibana) is a powerful set of tools for collecting, storing, and visualizing log data.
  • Elasticsearch is a search and analytics engine, Logstash is a data processing pipeline, and Kibana is a visualization tool.
  • Integrating the ELK Stack with Kubernetes provides a comprehensive logging solution for monitoring and troubleshooting your applications.

What is the ELK Stack?

The ELK Stack consists of three open-source tools: Elasticsearch, Logstash, and Kibana. Elasticsearch is used for storing and searching log data, Logstash processes and transforms the log data before sending it to Elasticsearch, and Kibana is used for visualizing the log data and creating dashboards.

# Example of an Elasticsearch deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:7.10.1
        env:
        - name: discovery.type
          value: single-node
        ports:
        - containerPort: 9200
          name: http
        - containerPort: 9300
          name: transport
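The Logstash and Fluentd examples in this guide reach Elasticsearch by the host name elasticsearch, which assumes a Service in front of the Deployment above. A minimal sketch of such a Service (field values mirror the Deployment's labels and ports):

```yaml
# Example of a Service exposing the Elasticsearch deployment
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: logging
spec:
  selector:
    app: elasticsearch
  ports:
  - name: http
    port: 9200
  - name: transport
    port: 9300
```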

# Example of a Logstash configuration
input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}
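In Kubernetes, a pipeline definition like the one above is usually stored in a ConfigMap and mounted into the Logstash container rather than baked into the image. A minimal sketch, with illustrative names (the ConfigMap and key names are assumptions, not required values):

```yaml
# Example of storing the Logstash pipeline in a ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-pipeline
  namespace: logging
data:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    output {
      elasticsearch {
        hosts => ["elasticsearch:9200"]
      }
    }
```

The Logstash Deployment would mount this ConfigMap at its pipeline directory (by default /usr/share/logstash/pipeline in the official images).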

# Example of a Kibana deployment configuration
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana:7.10.1
        ports:
        - containerPort: 5601
          name: http
        env:
        - name: ELASTICSEARCH_HOSTS
          value: "http://elasticsearch:9200"
                

Installing the ELK Stack

The ELK Stack can be installed using various methods, including raw manifest files applied with kubectl and the official Elastic Helm charts. Here is an example of installing Elasticsearch and Kibana using Helm (Logstash and a log shipper such as Fluentd are deployed separately):

# Add the Elastic Helm repository
helm repo add elastic https://helm.elastic.co

# Update Helm repositories
helm repo update

# Create a namespace for logging
kubectl create namespace logging

# Install Elasticsearch using Helm
helm install elasticsearch elastic/elasticsearch --namespace logging

# Install Kibana using Helm
helm install kibana elastic/kibana --namespace logging
                

Using Fluentd with ELK Stack

Fluentd is commonly used with the ELK Stack to collect and forward logs from Kubernetes clusters to Elasticsearch. Here is an example of setting up Fluentd:

# Example of a Fluentd configuration (ConfigMap)
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: logging
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/containers/fluentd.pos
      tag kubernetes.*
      <parse>
        @type json
        time_key time
        time_format %Y-%m-%dT%H:%M:%S.%N%:z
      </parse>
    </source>

    <match kubernetes.**>
      @type elasticsearch
      host elasticsearch.logging.svc.cluster.local
      port 9200
      logstash_format true
      logstash_prefix kubernetes
      logstash_dateformat %Y.%m.%d
      include_tag_key true
      type_name access_log
    </match>

Configuring Kibana Dashboards

Kibana provides a powerful interface for querying and visualizing log data stored in Elasticsearch. You can create custom dashboards to monitor the health and performance of your Kubernetes applications. Here is an example of creating a Kibana dashboard:

# Steps to create a Kibana dashboard
1. Open Kibana in your browser (use port-forwarding or an ingress resource):
   kubectl port-forward --namespace logging service/kibana 5601:5601

2. Go to the "Dashboard" section in Kibana.

3. Click "Create dashboard" and add visualizations by querying log data from Elasticsearch.

4. Save the dashboard and share it with your team.
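Step 1 mentions an ingress resource as an alternative to port-forwarding. A minimal sketch, assuming an ingress controller is already installed in the cluster and using kibana.example.com as a placeholder host name:

```yaml
# Example of an Ingress routing to the Kibana service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kibana
  namespace: logging
spec:
  rules:
  - host: kibana.example.com   # placeholder host name
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana
            port:
              number: 5601
```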
                

Best Practices

Follow these best practices when using the ELK Stack for logging in Kubernetes:

  • Centralize Logs: Use centralized logging to collect logs from all parts of your system for easier management and analysis.
  • Use Structured Logging: Use structured logging (e.g., JSON) to make logs easier to parse and query.
  • Set Log Retention Policies: Define log retention policies to manage the volume of stored logs and comply with regulatory requirements.
  • Monitor Log Storage: Regularly monitor log storage usage to ensure you have sufficient capacity and to avoid performance issues.
  • Secure Log Data: Implement access controls and encryption to protect sensitive log data from unauthorized access.

Conclusion

This guide provided an overview of using the ELK Stack for logging in Kubernetes, including installation, configuration, and best practices. By implementing the ELK Stack, you can improve the observability, debuggability, and reliability of your Kubernetes applications.