
Kong

Overview

Kong is an Ingress Controller for Kubernetes clusters. It provides Ingress routing, low-latency proxying, and API management through a range of plugins. These plugins add capabilities such as monitoring, TLS termination, transformations, and Deep Packet Inspection. In addition, users can take advantage of health checking, load balancing, and authentication functionality.


What Will You Do

In this exercise you will:

  • Install the Kong Ingress Controller on a managed cluster using a Workload
  • Enable the Kong Prometheus plugin so that Managed Prometheus can scrape Kong metrics
  • Install the Grafana Helm chart with a preconfigured Kong dashboard
  • Deploy sample services, route traffic to them through Kong, and view the collected metrics in Grafana

Important

This tutorial describes the steps to create and use a custom cluster blueprint using the Web Console. The entire workflow can also be fully automated and embedded into an automation pipeline.


Assumptions

  • A Kubernetes cluster running the Kong controller. This cluster must be imported into the Console. Follow these import steps to import a cluster.
  • The Managed Prometheus add-on components installed in the rafay-infra namespace by the Controller on the Kubernetes cluster. See Managed Prometheus for more information.
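
To confirm that the Managed Prometheus components are present before starting, a rough check such as the following can be used (pod names vary with the add-on version):

bash
# Managed Prometheus components run in the rafay-infra namespace
kubectl get pods -n rafay-infra | grep -i prometheus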

Install Kong

Use Kong's Helm chart to install the Kong Controller using a Workload.

Create a namespace

Create a namespace named kong on the cluster; the Kong workload will be deployed into this namespace.

To allow Managed Prometheus to collect metrics data from Kong (or any other application), add the following Annotations to the kong-custom-values.yaml file.

# Additional annotations to be added to Kong pods
# so they will be scraped by Managed Prometheus
# Ref doc: https://docs.rafay.co/workloads/k8s_yaml/#pod-metrics
podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: /metrics
  prometheus.io/port: "8100"

# Uncomment the section below for bare-metal based k8s clusters where
# a load balancer is not available
# proxy:
#   # Enable creating a Kubernetes service for the proxy
#   enabled: true
#   type: NodePort
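
As an optional local check (assuming the Helm CLI is installed on your workstation), the chart can be rendered with these values to confirm the annotations land on the Kong pods:

bash
# Add the Kong chart repository locally and render the chart with the custom values
helm repo add kong https://charts.konghq.com
helm template kong kong/kong -f kong-custom-values.yaml | grep -B 2 -A 4 'prometheus.io'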

Integrate the repository

Before the Controller can connect to and download a Helm chart, the chart's repository must be integrated with the Console under Repositories.

  1. In the Console, select Integrations > Repositories.
  2. Click New Repository.
  3. Add a name and description to the repository.

    New Repository

  4. Make sure Helm is selected for Type.

  5. Click Create.
  6. Edit the new repository.
  7. For Endpoint, add https://charts.konghq.com to add the Kong Helm chart endpoint.

    Add Kong Helm Chart

  8. Make sure Internet is selected for Reachability.

  9. Click Save.
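
The endpoint added above must be reachable from the Controller. A quick reachability check from any machine with internet access (a Helm repository serves an index.yaml file at its endpoint) is:

bash
# The Helm repository index should return HTTP 200
curl -sI https://charts.konghq.com/index.yaml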

Install using Workloads

After Repository integration, install the Kong Helm chart with the kong-custom-values.yaml file.

  1. In the Console, select Applications > Workloads.
  2. Select New Workload > Create New Workload.
  3. Add the following details to the workload:

    • Name: kong
    • Package Type: Helm 3
    • Artifact Sync: Pull files from repository
    • Repository Type: Helm
    • Namespace: kong
  4. Click Continue.

  5. Add the following details to the repository configuration:

    • Repository: kong
    • Chart Name: kong
  6. Click Upload Files, select the kong-custom-values.yaml file, then click Open. This adds the file to the configuration.

  7. Click SAVE AND GO TO PLACEMENT.
  8. Select the cluster.
  9. Click SAVE AND GO TO PUBLISH.
  10. Click PUBLISH.
  11. After the Helm chart is published, verify that the Kong Helm chart installed correctly by running the following kubectl command:

    sh
    kubectl get all -n kong
    

    Results should be similar to the following:

    Kong Helm Results

  12. Verify the Prometheus pod Annotations by running the following kubectl command, replacing kong-pod-name with the name of a Kong pod:

    sh
    kubectl describe pod kong-pod-name -n kong|grep -i -A 10 'Annotations'
    

    Results should be similar to the following:

    Prometheus Annotations

    Because of these Annotations, Managed Prometheus can collect metrics from the Kong pod on port 8100 at the path /metrics. A quick manual check of the metrics endpoint is shown below.
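
    To confirm manually that the Kong pod serves metrics on that port and path, you can port-forward to it and inspect the output (replace kong-pod-name with the name of a Kong pod; the kong_ series appear once the Prometheus plugin is enabled in the next section):

    bash
    kubectl --namespace kong port-forward kong-pod-name 8100:8100 &
    curl -s http://localhost:8100/metrics | head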


Enable Prometheus in Kong

There are two options for installing the KongClusterPlugin.

Install using Workloads

Enable the Prometheus plugin in Kong at the global level so that every request proxied through Kong on the Kubernetes cluster is tracked by Prometheus.

To create a workload using the Kubernetes YAML approach, follow the Create Workloads process and use the KongClusterPlugin.yaml example below.

KongClusterPlugin.yaml

apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: prometheus
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: "true"
plugin: prometheus
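
After the workload is published, the plugin can be verified from the command line with a check such as:

bash
# KongClusterPlugin is a cluster-scoped resource; the global label makes it apply to all services
kubectl get kongclusterplugins.configuration.konghq.com prometheus -o yaml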

Install using Helm charts

Create a Kong umbrella Helm chart that deploys the KongClusterPlugin along with the Kong installation in a single chart.

  1. Run the following command to create the Kong umbrella chart.

    sh
    helm create kong-umbrella-chart
    

    Note

    Warning messages about group-readable and world-readable file permissions might be displayed.

  2. To check the scaffolding, run cd kong-umbrella-chart. There should be a charts folder, a templates folder, and some YAML files.

  3. Install the Kong Helm chart as a dependency of the umbrella chart, with the KongClusterPlugin deployed as part of the umbrella chart's templates.
  4. Amend the Chart.yaml file to declare Kong as a dependency, as shown below.

    apiVersion: v2
    name: kong-umbrella
    description: A Helm chart for Kubernetes
    type: application
    version: 0.1.0
    # This is the version number of the application being deployed. This version number should be
    # incremented each time you make changes to the application. Versions are not expected to
    # follow Semantic Versioning. They should reflect the version the application is using.
    appVersion: "2.8"
    dependencies:
      - name: kong
        version: 2.9.1
        repository: https://charts.konghq.com
    
  5. Remove the content of the values.yaml file and keep it as a blank file. The values.yaml file will not be used for any customization, but the file is still needed.

  6. Remove most of the default files from the kong-umbrella-chart/templates/ folder. Keep the KongClusterPlugin.yaml file, adding it to this folder if it is not already there.

    sh
    ls -ltrh templates/
    total 32K
    drwxr-xr-x 2 infracloud infracloud 4.0K Jun 21 19:47 tests
    -rw-r--r-- 1 infracloud infracloud 397 Jun 21 19:47 service.yaml
    -rw-r--r-- 1 infracloud infracloud 344 Jun 21 19:47 serviceaccount.yaml
    -rw-r--r-- 1 infracloud infracloud 1.8K Jun 21 19:47 NOTES.txt
    -rw-r--r-- 1 infracloud infracloud 2.1K Jun 21 19:47 ingress.yaml
    -rw-r--r-- 1 infracloud infracloud 952 Jun 21 19:47 hpa.yaml
    -rw-r--r-- 1 infracloud infracloud 1.9K Jun 21 19:47 _helpers.tpl
    -rw-r--r-- 1 infracloud infracloud 1.9K Jun 21 19:47 deployment.yaml
    
  7. Follow the Helm Charts instructions to install the Helm chart. The equivalent local Helm CLI steps are sketched below.
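
    If you want to validate the umbrella chart locally before publishing it through the Console, a minimal sketch with the Helm CLI might look like this (the release name kong-umbrella is illustrative):

    bash
    # Download the Kong dependency declared in Chart.yaml into the charts/ folder
    helm dependency update kong-umbrella-chart
    # Install (or upgrade) the umbrella chart; Kong and the KongClusterPlugin are deployed together
    helm upgrade --install kong-umbrella ./kong-umbrella-chart --namespace kong --create-namespace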


Install Grafana Helm chart

Installing the Grafana Helm chart is similar to installing the Kong Helm chart as a Workload.

Install Summary

  • Create a monitoring namespace.
  • Use the grafana-custom-values.yaml file (see below).
  • Integrate the Grafana Helm chart repository.
  • Install the Grafana Helm chart using a Workload.

Grafana YAML file

The grafana-custom-values.yaml file does the following:

  • Uses the Managed Prometheus service as a data source.
  • Provides the Kong Grafana dashboard for visualization.

grafana-custom-values.yaml

## Custom values for Grafana
## Test framework configuration
testFramework:
  enabled: false

## Pod Annotations
podAnnotations: {}

## Deployment annotations
annotations: {}

## Service - set to type: LoadBalancer to expose service via load balancing instead of using ingress
service:
  enabled: true
  type: ClusterIP
  annotations: {}
  labels: {}

## Ingress configuration to expose Grafana to external using ingress
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: kong

## Resource Limits and Requests settings
resources: {}
#  limits:
#    cpu: 100m
#    memory: 128Mi
#  requests:
#    cpu: 100m
#    memory: 128Mi

## Node labels for pod assignment
nodeSelector: {}

## Tolerations for pod assignment
tolerations: []

## Affinity for pod assignment
affinity: {}

## Enable persistence using Persistent Volume Claims
persistence:
  type: pvc
  enabled: true
#  storageClassName: default
  accessModes:
  - ReadWriteOnce
  size: 10Gi
#  annotations: {}
#  existingClaim:

#  Administrator credentials when not using an existing secret (see below)
adminUser: admin
#  adminPassword: strongpassword

# Use an existing secret for the admin user.
admin:
  existingSecret: ""
  userKey: admin-user
  passwordKey: admin-password

## Extra environment variables
env: {}
envValueFrom: {}
envFromSecret: ""

## Configure Grafana datasources to point to Rafay Prometheus Service
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
    - name: Rafay-Prometheus
      type: prometheus
      url: http://rafay-prometheus-server.rafay-infra.svc.cluster.local:9090
      access: proxy
      isDefault: true

## Configure Grafana dashboard providers for importing dashboards by defaults
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: 'default'
      orgId: 1
      folder: ''
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/default

## Configure grafana dashboard to import by default. gnetId is dashboard ID from https://grafana.com/grafana/dashboards
dashboards:
  default:
    k8sClusterDashboard:
      gnetId: 7249
      datasource: Rafay-Prometheus
    k8sClusterResource:
      gnetId: 12114
      datasource: Rafay-Prometheus
    k8sNamespaceResource:
      gnetId: 12117
      datasource: Rafay-Prometheus
    k8sPodResource:
      gnetId: 12120
      datasource: Rafay-Prometheus
    k8sNodeResource:
      gnetId: 12119
      datasource: Rafay-Prometheus
    k8sNodeExporter:
      gnetId: 11074
      datasource: Rafay-Prometheus
    k8sDeployStsDs:
      gnetId: 8588
      datasource: Rafay-Prometheus
    k8sAppMetrics:
      gnetId: 1471
      datasource: Rafay-Prometheus
    k8sNetworkingCluster:
      gnetId: 12124
      datasource: Rafay-Prometheus
    k8sNetworkingNamespace:
      gnetId: 12125
      datasource: Rafay-Prometheus
    k8sNetworkingPod:
      gnetId: 12661
      datasource: Rafay-Prometheus

    # New Grafana dashboard for Kong monitoring: https://grafana.com/dashboards/7424
    kong-dash:
      gnetId: 7424
      revision: 5
      datasource: Rafay-Prometheus

After publishing the Grafana Helm workload, verify the installation by running the following command.

sh
kubectl get all -n monitoring
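
The Grafana datasource configured in grafana-custom-values.yaml points at the Managed Prometheus service in the rafay-infra namespace. A quick check that this service exists (so the dashboards have data to show) could look like:

bash
# Confirm the Managed Prometheus service referenced by the Grafana datasource is present
kubectl get svc rafay-prometheus-server -n rafay-infra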

Set up Port Forwards

For the purposes of this exercise, port forwarding is used to access Grafana, Managed Prometheus, and the Kong proxy. This is not advisable in production; in a production environment, use a Kubernetes Service with an external IP address or a load balancer.

  1. Open a new terminal and run the following command to allow access to the Managed Prometheus server using localhost:9090.

    bash
    kubectl --namespace rafay-infra port-forward svc/rafay-prometheus-server 9090 &
    
  2. Run the following command to allow access to Grafana using localhost:3000.

    POD_NAME=$(kubectl get pods --namespace monitoring -l "app.kubernetes.io/name=grafana" -o jsonpath="{.items[0].metadata.name}")
    kubectl --namespace monitoring port-forward $POD_NAME 3000 &
    
  3. Run the following command to allow access to the Kong proxy using localhost:8000. For this exercise, a plain-text HTTP proxy is used. Use the IP address of a LoadBalancer if running this in a cloud environment.

    POD_NAME=$(kubectl get pods --namespace kong -o jsonpath="{.items[0].metadata.name}")
    kubectl --namespace kong port-forward $POD_NAME 8000 &
    

Access the Grafana Dashboard

Accessing Grafana requires the Admin user password.

  1. Run the following command to read the Admin user password.

    bash
    kubectl get secret --namespace monitoring grafana-helm -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
    
  2. Using a web browser, go to http://localhost:3000.

  3. Use admin for the username and use the password obtained earlier.

Set Up Sample Services

The next part of this exercise is to set up some sample services, along with an Ingress for routing traffic to them.

Install Services

Set up three services: Billing, Invoice, and Comments.

  1. Execute the following command to set up the services. Replace {{page.kong_version}} in the URL with the Kong Kubernetes Ingress Controller documentation version you are using.

    bash
    curl -s https://docs.konghq.com/kubernetes-ingress-controller/{{page.kong_version}}/examples/001_multiple-services.yaml | kubectl apply -f -
    

Install Ingress for the Services

After the services are running, create Ingress routing rules in Kubernetes.

  1. Execute the following command.

    bash
    echo '
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: sample-ingresses
      annotations:
        konghq.com/strip-path: "true"
    spec:
      ingressClassName: kong
      rules:
      - http:
          paths:
          - path: /billing
            pathType: ImplementationSpecific
            backend:
              service:
                name: billing
                port:
                  number: 80
          - path: /comments
            pathType: ImplementationSpecific
            backend:
              service:
                name: comments
                port:
                  number: 80
          - path: /invoice
            pathType: ImplementationSpecific
            backend:
              service:
                name: invoice
                port:
                  number: 80
    ' | kubectl apply -f -
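
    Before generating sustained traffic, you can confirm that the upstream services exist (this sketch assumes they were applied to the default namespace) and that a single request routes through the Kong proxy port-forward set up earlier:

    bash
    # The Ingress above routes to these three services
    kubectl get svc billing comments invoice
    # Expect an HTTP 200 response routed through Kong to the billing service
    curl -i http://localhost:8000/billing/status/200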
    

Create Some Traffic

After configuring the services and proxies, create some traffic and view the results.

  1. Execute the following command. Also try adjusting the script to send different traffic patterns and see how the metrics change.

    bash
    while true;
    do
    curl http://localhost:8000/billing/status/200
    curl http://localhost:8000/billing/status/501
    curl http://localhost:8000/invoice/status/201
    curl http://localhost:8000/invoice/status/404
    curl http://localhost:8000/comments/status/200
    curl http://localhost:8000/comments/status/200
    sleep 0.01
    done
    

With the Prometheus plugin enabled, Kong collects metrics for the requests it proxies. Metrics related to traffic flowing through the services should now be visible in the Kong Grafana dashboard. The upstream services are httpbin instances, so a variety of endpoints can be used to shape the traffic, as shown below.
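
For example, assuming the upstreams expose the standard httpbin endpoints, requests like the following vary latency, error rates, and response sizes, which show up in the latency, status-code, and bandwidth panels respectively:

bash
curl http://localhost:8000/billing/delay/1        # adds roughly 1 second of upstream latency
curl http://localhost:8000/invoice/status/503     # simulates upstream server errors
curl http://localhost:8000/comments/bytes/10240   # returns a ~10 KiB response body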


Metrics Collected

Request Latencies of Services

Upstream Time

Kong collects latency data on how long a service takes to respond to requests. This data can be used to alert the on-call engineer if the latency goes beyond a certain threshold. For example, if there is a Service Level Agreement (SLA) that the APIs will respond with a latency of less than 20 milliseconds for 95% of the requests, Prometheus can be configured to alert based on the following query:

text
histogram_quantile(0.95, sum(rate(kong_latency_bucket{type="request"}[1m])) by (le,service)) > 20

The query calculates the 95th percentile of the total request latency (or duration) for all of the services, and alerts if it is more than 20 milliseconds. The type label in this query is request, which tracks the latency added by Kong and the service.

Switch this to upstream to track latency added by the service only.
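
These queries can also be run ad hoc against the Prometheus HTTP API, for example through the localhost:9090 port-forward set up earlier (drop the > 20 comparison to see the raw percentile values):

bash
# 95th percentile of request latency per service over the last minute
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=histogram_quantile(0.95, sum(rate(kong_latency_bucket{type="request"}[1m])) by (le,service))'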

Prometheus is highly flexible and well documented. See the Prometheus documentation for more information about setting up alerts.

Kong Proxy Latency

Kong also collects metrics about its own performance. The following query is similar to the previous one, but gives insight into the latency added by Kong itself.

text
histogram_quantile(0.90, sum(rate(kong_latency_bucket{type="kong"}[1m])) by (le,service)) > 2

Error Rates

HTTP Status

Another important metric to track is the rate of errors and requests the services are serving. The time series kong_http_status collects HTTP status code metrics for each service.

This metric helps track the rate of errors for each of the services.

text
sum(rate(kong_http_status{code=~"5[0-9]{2}"}[1m])) by (service)

It is also possible to calculate the percentage of requests, over any duration, that are errors. Because all HTTP status codes are indexed, it is possible to learn more about typical traffic patterns and identify problems. For example, a sudden rise in 404 response codes could indicate client code requesting an endpoint that was removed in a recent deployment.
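
As an illustration, an error-percentage query per service (again runnable through the Prometheus HTTP API port-forward) might be written like this:

bash
# Percentage of requests per service that returned a 5xx status over the last minute
curl -sG http://localhost:9090/api/v1/query \
  --data-urlencode 'query=100 * sum(rate(kong_http_status{code=~"5[0-9]{2}"}[1m])) by (service) / sum(rate(kong_http_status[1m])) by (service)'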

Request Rate and Bandwidth

It is possible to derive the total request rate for each of the services or across the Kubernetes cluster using the kong_http_status time series.

Total Requests

Another metric that Kong keeps track of is the amount of network bandwidth (kong_bandwidth) being consumed. This gives you an estimate of how request/response sizes correlate with other behaviors in your infrastructure.

Total Bandwidth

The metrics for services running inside the Kubernetes cluster are now available. This provides more visibility into the applications without making any modifications to the services. Use Alertmanager or Grafana to configure alerts based on the observed metrics and Service Level Objectives (SLOs).


Recap

Congratulations! You have successfully created a custom cluster blueprint with the "kong" addon and applied it to a cluster. You can now use this blueprint on as many clusters as you require.