Kibana
In this step, we will create a custom deployment file for Kibana and then deploy it to visualize your Elasticsearch data. We will build on the example provided here.
Step 1: Configure YAML¶
Copy the following YAML document into a file called "kibana.yaml".
Important
This YAML file includes an Ingress for exposing the Kibana service externally. If you would prefer to expose the Kibana service via a LoadBalancer or NodePort service type instead, remove the Ingress resource from the YAML.
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: kibana
spec:
  version: 7.8.1
  count: 1
  elasticsearchRef:
    name: elasticsearch
  ## Uncomment below if you would like to expose the Kibana service with a LoadBalancer instead of an Ingress
  #http:
  #  service:
  #    spec:
  #      type: LoadBalancer
  podTemplate:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        resources:
          limits:
            memory: 1Gi
            cpu: 1
          requests:
            memory: 1Gi
            cpu: 1
---
# This sample sets up an Ingress for exposing the Kibana service externally.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # Add annotation to use Rafay's built-in nginx ingress controller
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    # Add annotation to use cert-manager for generating and maintaining the cert for the kibana ingress
    cert-manager.io/cluster-issuer: "letsencrypt-http"
  name: kibana-ingress
spec:
  rules:
  - host: kibana.infra.gorafay.net
    http:
      paths:
      - backend:
          serviceName: kibana-kb-http
          servicePort: 5601
        path: /
  tls:
  - hosts:
    - kibana.infra.gorafay.net
    secretName: kibana-ingress-tls
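This workload will be deployed through the Rafay Console in the next step. If you are working directly against the cluster instead, a minimal sketch of applying the same manifest with kubectl (assuming the "elk" namespace created in the earlier steps already exists) would look like this:

# Apply the Kibana and Ingress manifests into the existing "elk" namespace
kubectl apply -f kibana.yaml -n elk

# Watch the Kibana pod come up
kubectl get pods -n elk -w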
Step 2: Deploy Kibana¶
- In the Rafay Console, navigate to your Project as an Org Admin or Project Admin
- Go to Applications > Workloads
- Select "New Workload" to create a new workload called "kibana"
- Ensure that you select "NativeYaml" as the Package Type and "elk" as the Namespace
- Click CONTINUE to go to the next step
- Upload the kibana.yaml file from above to NativeYaml > Choose File
- Save and Go to Placement
- Select the same cluster where you deployed the elastic-operator and elasticsearch workloads
- Publish workload
Step 3: Verify Deployment¶
You will now verify whether the required resources for Kibana have been created on the cluster.
After publishing the kibana workload:
- Click on the Debug button
- Click on the Kubectl button to open a virtual terminal with kubectl access scoped to the "elk" namespace of the cluster
First, we will verify the status of the Kibana pod
kubectl get pod
NAME READY STATUS RESTARTS AGE
elasticsearch-es-default-0 1/1 Running 0 48m
kibana-kb-59c8f5df99-mm52c 1/1 Running 0 3m14s
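If the Kibana pod is not yet in the Running state, a quick way to investigate is to look at its events and logs. The label selector below assumes the standard labels that the ECK operator applies to Kibana pods:

# Show scheduling and startup events for the Kibana pod
kubectl describe pod -l kibana.k8s.elastic.co/name=kibana

# Tail the Kibana container logs
kubectl logs -l kibana.k8s.elastic.co/name=kibana --tail=50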
Second, we will verify the status of the Kibana service
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch-es-default ClusterIP None <none> 9200/TCP 49m
elasticsearch-es-http ClusterIP 10.100.112.53 <none> 9200/TCP 49m
elasticsearch-es-transport ClusterIP None <none> 9300/TCP 49m
kibana-kb-http ClusterIP 10.98.173.238 <none> 5601/TCP 3m48s
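The kibana-kb-http service is of type ClusterIP, so it is only reachable from inside the cluster. If you want to sanity-check Kibana before the Ingress is working, one option (a sketch, run from a machine with kubectl access to the cluster) is to port-forward the service locally:

# Forward local port 5601 to the Kibana service (ECK serves Kibana over HTTPS with a self-signed cert by default)
kubectl port-forward service/kibana-kb-http 5601 -n elk
# Then browse to https://localhost:5601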
Next, we will verify the status of the Kibana resource
kubectl get kibana
NAME HEALTH NODES VERSION AGE
kibana green 1 7.8.1 4m25s
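If the health does not reach green, the Kibana custom resource itself usually explains why; for example, you can describe it to see the association status with Elasticsearch and any recent events:

# Inspect the Kibana custom resource for conditions and events
kubectl describe kibana kibana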
Next, we will verify the status of the Kibana Ingress
kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
elasticsearch-ingress elasticsearch.infra.gorafay.net 10.108.234.72 80, 443 50m
kibana-ingress kibana.infra.gorafay.net 10.108.234.72 80, 443 4m52s
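Once DNS for the Ingress hostname resolves to the ingress controller's address, you can optionally confirm that Kibana responds over HTTPS. The sketch below uses the example hostname from the manifest; the -k flag skips certificate validation in case the cert-manager certificate has not been issued yet:

# Check that Kibana answers over HTTPS (typically a redirect to the login page)
curl -k -I https://kibana.infra.gorafay.net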
Step 4: Access Kibana¶
Now, we will access the Kibana service via the Ingress hostname and log in with the default user "elastic" and the password obtained by base64 decoding the default superuser credentials retrieved earlier.
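If you no longer have the decoded password at hand, it can be retrieved again from the secret that the ECK operator creates for the default superuser. The secret name below assumes the Elasticsearch resource is named "elasticsearch", as in the earlier step:

# Retrieve and base64-decode the password for the "elastic" user
kubectl get secret elasticsearch-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'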