Elasticsearch

We will now create a custom Elasticsearch YAML file to deploy an Elasticsearch cluster, based on the sample provided with the Elasticsearch operator.


Step 1: Configure YAML

Copy the YAML document below into a file named "elasticsearch.yaml".

Important

This YAML file includes an ingress for exposing the Elasticsearch service externally. If you would prefer to expose the Elasticsearch service via a LoadBalancer or NodePort service type instead, remove the ingress from the YAML.
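For reference, here is a minimal sketch of what the NodePort alternative would look like: the http.service.spec block in the Elasticsearch resource accepts a standard Kubernetes ServiceSpec, so overriding the service type is enough.

# Sketch only: expose the Elasticsearch HTTP service as NodePort instead of via ingress
spec:
  http:
    service:
      spec:
        type: NodePort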

# This sample sets up an Elasticsearch cluster with elasticsearch-operator.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
spec:
  version: 7.8.1
  nodeSets:
  - name: default
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.ml: true
    podTemplate:
      metadata:
        labels:
          app: elk
      spec:
        initContainers:
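        # Elasticsearch requires vm.max_map_count >= 262144; this privileged
        # init container raises it on the host before Elasticsearch starts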
        - name: sysctl
          securityContext:
            privileged: true
          command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
        containers:
        - name: elasticsearch
          # Specify resource limits and requests
          resources:
            limits:
              memory: 4Gi
              cpu: 1
            requests:
              memory: 4Gi
              cpu: 1
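          # ES_JAVA_OPTS pins the JVM heap; the usual guidance is to keep the
          # heap at roughly half of the container memory limit set above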
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms2g -Xmx2g"
    count: 1
    # Request 2Gi of persistent data storage for pods in this topology element
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 2Gi
  http:
  ## Uncomment below if you would like to expose the Elasticsearch cluster service with a LoadBalancer instead of an ingress
  #   service:
  #     spec:
  #       type: LoadBalancer
    tls:
      selfSignedCertificate:
        disabled: false
---
# This sample sets up an ingress for the Elasticsearch cluster.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    # Add annotation to use Rafay's built-in nginx ingress controller
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    # Add annotation to use cert-manager for generating and maintaining the cert for elasticsearch ingress
    cert-manager.io/cluster-issuer: "letsencrypt-http"
  name: elasticsearch-ingress
spec:
  rules:
  - host: elasticsearch.infra.gorafay.net
    http:
      paths:
      - backend:
          serviceName: elasticsearch-es-http
          servicePort: 9200
        path: /
  tls:
  - hosts:
    - elasticsearch.infra.gorafay.net
    secretName: elasticsearch-ingress-tls
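Before uploading, you can optionally check that the manifests parse cleanly with a client-side dry run (assumes kubectl 1.18 or later is installed locally; this validates syntax only, not the cluster-side schema):

kubectl apply --dry-run=client -f elasticsearch.yaml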

Step 2: Deploy Workload

  • In the Rafay Console, navigate to your Project as an Org Admin or Project Admin
  • Under Infrastructure (or Applications if accessed with Project Admin role), select "Namespaces" and create a new namespace called "elk"
  • Go to Applications > Workloads
  • Select "New Workload" to create a new workload called "elasticsearch"
  • Ensure that you select "NativeYaml" as the Package Type and "elk" as the namespace
  • Click CONTINUE to next step

Create Elasticsearch workload

  • Upload the elasticsearch.yaml file from above to NativeYaml > Choose File

Create Elasticsearch workload

  • Save and Go to Placement
  • Select the same cluster where you deployed the elastic-operator
  • Publish the elasticsearch workload to this cluster

Step 3: Verify Deployment

Now, we will verify that the required resources for Elasticsearch have been created on the cluster. After publishing the elasticsearch workload:

  • Click on the Debug button
  • Click on the Kubectl button to open a virtual terminal with kubectl access scoped to the "elk" namespace context of the cluster

First, we will verify the status of the Elasticsearch pods.

kubectl get pod

NAME                         READY   STATUS    RESTARTS   AGE
elasticsearch-es-default-0   1/1     Running   0          13m

Next, we will verify the status of the PVCs for elasticsearch

kubectl get pvc

NAME                                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
elasticsearch-data-elasticsearch-es-default-0   Bound    pvc-5dd079ed-ddb4-11ea-b75b-000d3af96464   2Gi        RWO            glusterfs-storage   15m

Next, we will verify the status of the elasticsearch services

kubectl get svc

NAME                                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
elasticsearch-es-default                                 ClusterIP   None            <none>        9200/TCP   16m
elasticsearch-es-http                                    ClusterIP   10.100.112.53   <none>        9200/TCP   16m
elasticsearch-es-transport                               ClusterIP   None            <none>        9300/TCP   17m
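
If you want to reach the cluster without going through the ingress, you can also port-forward the HTTP service from a kubectl session with access to the cluster (a quick sketch using the service name above):

kubectl port-forward service/elasticsearch-es-http 9200

The cluster is then reachable at https://localhost:9200; note that it serves a self-signed certificate by default, so curl needs the -k flag.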

Next, we will verify the status of the elasticsearch cluster.

kubectl get elasticsearch

NAME            HEALTH   NODES   VERSION   PHASE   AGE
elasticsearch   green    1       7.8.1     Ready   16m

Next, we will verify the status of ingress for Elasticsearch.

kubectl get ingress

NAME                    HOSTS                            ADDRESS         PORTS     AGE
elasticsearch-ingress   elasticsearch.infra.gorafay.net   10.108.234.72   80, 443   18m

Step 4: Access Credentials

We will now retrieve the default Elasticsearch superuser credentials from the Kubernetes secret elasticsearch-es-elastic-user. We will need these credentials again when we configure and deploy Kibana in the next step.

kubectl get secret elasticsearch-es-elastic-user -o yaml

apiVersion: v1
data:
  elastic: xxxxxxxxxxxxxxxxxxxxx
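
To decode the password in a single step, you can use a go-template (the secret name follows the <cluster-name>-es-elastic-user convention):

kubectl get secret elasticsearch-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'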

Step 5: Verify Credentials

Access the Elasticsearch service via the ingress hostname and log in with the default user "elastic" and the base64-decoded password retrieved from the superuser secret above.

You should see something similar to the following image.

Access elasticsearch workload
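
Alternatively, the same credential check can be done from a terminal with curl (a sketch; substitute your ingress hostname and the decoded password from Step 4):

curl -u "elastic:<password>" https://elasticsearch.infra.gorafay.net/

A successful login returns the Elasticsearch version banner as JSON.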