Kubernetes - Using Metrics Server
Monitoring and Logging in Kubernetes
Kubernetes is an open-source platform designed to automate deploying, scaling, and operating application containers. This guide explains how to use Metrics Server in Kubernetes, which is essential for monitoring the resource usage of your clusters and workloads.
Key Points:
- Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.
- It provides CPU and memory usage metrics to Kubernetes for use by the Horizontal Pod Autoscaler (HPA) and the Kubernetes dashboard.
- Metrics Server is a cluster-wide aggregator of resource usage data.
What is Metrics Server?
Metrics Server is a cluster-wide aggregator of resource usage data in Kubernetes. It collects metrics from the Kubelet on each node and provides aggregated metrics through the Kubernetes API. These metrics are used by components like the Horizontal Pod Autoscaler (HPA) and the Kubernetes dashboard to make decisions based on current resource usage.
# Example of deploying Metrics Server using a manifest file
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:aggregated-metrics-reader
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:aggregated-metrics-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      containers:
      - name: metrics-server
        image: k8s.gcr.io/metrics-server/metrics-server:v0.4.4
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP
        - --kubelet-use-node-status-port
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /livez
            port: 4443
            scheme: HTTPS
        readinessProbe:
          httpGet:
            path: /readyz
            port: 4443
            scheme: HTTPS
---
apiVersion: v1
kind: Service
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  ports:
  - port: 443
    targetPort: 4443
  selector:
    k8s-app: metrics-server
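To deploy this example, save it to a file and apply it with kubectl. The file name metrics-server.yaml below is only an example. Note that the upstream project also publishes a complete components.yaml release manifest, which additionally registers the v1beta1.metrics.k8s.io APIService that exposes the Metrics API; the verification commands below assume that API has been registered.
# Apply the manifest above (metrics-server.yaml is an example file name)
kubectl apply -f metrics-server.yaml
# Or apply the complete manifest published with each Metrics Server release
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Check that the Metrics Server deployment is up
kubectl get deployment metrics-server -n kube-system
# Confirm that the metrics.k8s.io API group is available
kubectl api-resources --api-group=metrics.k8s.io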
Installing Metrics Server
Metrics Server can be installed in several ways, including applying manifest files with kubectl or using a Helm chart. Here is an example of installing Metrics Server using Helm:
# Add the Metrics Server Helm repository
helm repo add metrics-server https://kubernetes-sigs.github.io/metrics-server/
# Update Helm repositories
helm repo update
# Install Metrics Server using Helm
helm install metrics-server metrics-server/metrics-server --namespace kube-system
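On clusters where the kubelets serve self-signed certificates (for example, local kind or minikube clusters), Metrics Server may fail to scrape nodes. The sketch below assumes the chart exposes an args value for passing extra container arguments (check the values.yaml for your chart version); --kubelet-insecure-tls disables kubelet certificate verification and should only be used in test environments.
# Verify the Helm release
helm status metrics-server --namespace kube-system
# Example only: pass extra arguments through the chart's args value (test clusters only)
helm upgrade --install metrics-server metrics-server/metrics-server \
  --namespace kube-system \
  --set "args={--kubelet-insecure-tls}"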
Using Metrics Server
Once Metrics Server is installed, you can use kubectl commands to view resource usage metrics. Here are some examples:
# View metrics for nodes
kubectl top nodes
# View metrics for pods
kubectl top pods
# View metrics for pods in a specific namespace
kubectl top pods -n my-namespace
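kubectl top is a convenience wrapper around the Metrics API (metrics.k8s.io/v1beta1), which you can also query directly; my-pod below is a placeholder name.
# Query the Metrics API directly (the same data kubectl top consumes)
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
# Metrics for a single pod in a specific namespace
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/my-namespace/pods/my-pod"
# Sort pod metrics by CPU usage
kubectl top pods --sort-by=cpu
# Show per-container usage within each pod
kubectl top pods --containers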
Integrating with Horizontal Pod Autoscaler
Metrics Server is commonly used with the Horizontal Pod Autoscaler (HPA) to automatically scale pods based on resource usage. Here is an example of setting up an HPA:
# Example of a Horizontal Pod Autoscaler definition
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-deployment
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
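For the HPA to compute utilization, the containers in my-deployment must declare CPU resource requests. The commands below apply the definition above (hpa.yaml is an example file name) and inspect the autoscaler; an equivalent HPA can also be created imperatively with kubectl autoscale.
# Apply the HPA definition above
kubectl apply -f hpa.yaml
# Inspect current vs. target CPU utilization
kubectl get hpa my-hpa -n default
kubectl describe hpa my-hpa -n default
# Or create an equivalent HPA imperatively
kubectl autoscale deployment my-deployment --cpu-percent=50 --min=2 --max=10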
Best Practices
Follow these best practices when using Metrics Server in Kubernetes:
- Ensure Compatibility: Verify that your Kubernetes version is compatible with the Metrics Server version you are deploying.
- Monitor Metrics Server: Regularly monitor the health and performance of Metrics Server to ensure it is providing accurate metrics.
- Secure Metrics Data: Use TLS and authentication to secure the communication between Metrics Server and the Kubelets.
- Use Resource Requests and Limits: Define resource requests and limits for Metrics Server to ensure it has sufficient resources to operate efficiently (see the example after this list).
- Integrate with Other Tools: Integrate Metrics Server with other monitoring and observability tools for comprehensive cluster monitoring.
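As a sketch of the resource requests and limits recommendation, the fragment below shows how requests and limits could be set on the metrics-server container from the Deployment above; the values are illustrative and should be tuned to your cluster size.
# Illustrative fragment of the metrics-server container spec (values are examples)
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server/metrics-server:v0.4.4
  resources:
    requests:
      cpu: 100m
      memory: 200Mi
    limits:
      cpu: 200m
      memory: 400Mi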
Conclusion
This guide provided an overview of using Metrics Server in Kubernetes, including its installation, usage, and best practices. By implementing Metrics Server, you can effectively monitor the resource usage of your Kubernetes clusters and workloads, enabling better resource management and autoscaling capabilities.