Kong is an Ingress Controller for Kubernetes clusters. Its features include Ingress routing, low latency, and API management through a variety of plugins. These plugins provide additional capabilities such as monitoring, TLS termination, transformations, and Deep Packet Inspection. Users can also take advantage of health checking, load balancing, and authentication functionality.

What Will You Do

In this exercise, you will create and use a custom cluster blueprint with Kong using the Web Console. The entire workflow can also be fully automated and embedded into an automation pipeline.


Assumptions

  • A Kubernetes cluster running the Kong controller. This cluster must be imported into the Console. Follow these import steps to import a cluster.
  • The Managed Prometheus add-on components installed in the rafay-infra namespace by the Controller on the Kubernetes cluster. See Managed Prometheus for more information.

Install Kong

Use Kong's Helm chart to install the Kong Controller using a Workload.

Create a namespace
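If the kong namespace referenced later in this tutorial does not already exist, it can be created up front; a minimal sketch using kubectl (the namespace name kong matches the Workload settings used below):

```shell
# Create the namespace the Kong workload will be deployed into
kubectl create namespace kong

# Confirm the namespace exists
kubectl get namespace kong
```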

To allow Managed Prometheus to collect metrics data from Kong (or any other application), add the following annotations to the kong-custom-values.yaml file.

# Additional annotations to be added to Kong pods,
# so they will be scraped by Managed Prometheus
# Ref doc:
podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/path: "/metrics"
  prometheus.io/port: "8100"

# Uncomment the section below for a baremetal based k8s cluster where
# a loadbalancer is not available
# proxy:
#   # Enable creating a Kubernetes service for the proxy
#   enabled: true
#   type: NodePort
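For reference, the Workload-based installation described below is roughly equivalent to the following Helm CLI steps; a sketch, assuming Kong's official chart repository at https://charts.konghq.com and the kong-custom-values.yaml file:

```shell
# Add the official Kong Helm repository and refresh the index
helm repo add kong https://charts.konghq.com
helm repo update

# Install the chart into the kong namespace with the custom values file
helm install kong kong/kong --namespace kong -f kong-custom-values.yaml
```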

Integrate the repository

For the Controller to be able to connect and download any Helm chart, it needs to be integrated with Repositories in the Console.

  1. In the Console, select Integrations > Repositories.
  2. Click New Repository.
  3. Add a name and description to the repository.

    New Repository

  4. Make sure Helm is selected for Type.

  5. Click Create.
  6. Edit the new repository.
  7. For Endpoint, add the Kong Helm chart endpoint.

    Add Kong Helm Chart

  8. Make sure Internet is selected for Reachability.

  9. Click Save.

Install using Workloads

After Repository integration, install the Kong Helm chart with the kong-custom-values.yaml file.

  1. In the Console, select Applications > Workloads.
  2. Select New Workload > Create New Workload.
  3. Add the following details to the workload:

    • Name: kong
    • Package Type: Helm 3
    • Artifact Sync: Pull files from repository
    • Repository Type: Helm
    • Namespace: kong
  4. Click Continue.

  5. Add the following details to the repository configuration:

    • Repository: kong
    • Chart Name: kong
  6. Click Upload Files, select the kong-custom-values.yaml file, then click Open. This adds the file to the configuration.

  7. Select the cluster.
  8. Click PUBLISH.
  9. After the Helm chart is published, verify that the Kong Helm chart installed correctly by running the following kubectl command:

    kubectl get all -n kong

    Results should be similar to the following:

    Kong Helm Results

  10. Verify the Prometheus pod Annotations by running the following kubectl command:

    kubectl describe pod kong-pod-name -n kong | grep -i -A 10 'Annotations'

    Results should be similar to the following:

    Prometheus Annotations

    Because of these annotations, Managed Prometheus can collect metrics from the Kong pod at port 8100 and the path /metrics.
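To double-check the scrape endpoint directly, the metrics port can be port-forwarded and queried; a sketch (replace kong-pod-name with an actual pod name, and note the port and path come from the pod annotations):

```shell
# Forward the Kong metrics port to localhost
kubectl --namespace kong port-forward kong-pod-name 8100 &

# Fetch the Prometheus metrics that Managed Prometheus will scrape
curl -s http://localhost:8100/metrics | grep kong_ | head
```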

Enable Prometheus in Kong

There are two options for installing the KongClusterPlugin.

Install using Workloads

Enable the Prometheus plugin in Kong at the global level so that each request to the Kubernetes cluster is tracked by Prometheus.

To create a workload using the Kubernetes YAML approach, follow the Create Workloads process and use the KongClusterPlugin.yaml example below.


apiVersion: configuration.konghq.com/v1
kind: KongClusterPlugin
metadata:
  name: prometheus
  annotations:
    kubernetes.io/ingress.class: kong
  labels:
    global: "true"
plugin: prometheus
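Once the workload is published, the cluster-wide plugin resource can be verified with kubectl; a sketch (KongClusterPlugin is a cluster-scoped CRD installed with the Kong chart):

```shell
# List cluster-scoped Kong plugins; 'prometheus' should appear
kubectl get kongclusterplugins

# Inspect the plugin definition
kubectl describe kongclusterplugin prometheus
```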

Install using Helm charts

Create a Kong umbrella Helm chart that deploys the KongClusterPlugin along with the Kong installation.

  1. Run the following command to create the Kong umbrella chart.

    helm create kong-umbrella-chart


    Warning messages about group-readable and world-readable file permissions might be displayed.

  2. To check the installation, run cd kong-umbrella-chart. There should be a charts folder, a templates folder, and some YAML files.

  3. Install the Kong Helm chart as a dependency chart, with the KongClusterPlugin deployed as part of the umbrella chart.
  4. Amend the Chart.yaml file to declare Kong as a dependency, as shown below.

    apiVersion: v2
    name: kong-umbrella
    description: A Helm chart for Kubernetes
    type: application
    version: 0.1.0
    # This is the version number of the application being deployed. This version number should be
    # incremented each time you make changes to the application. Versions are not expected to
    # follow Semantic Versioning. They should reflect the version the application is using.
    appVersion: "2.8"
    dependencies:
      - name: kong
        version: 2.9.1
        repository: https://charts.konghq.com
  5. Remove the content of the values.yaml file and keep it as a blank file. The values.yaml file will not be used for any customization, but the file is still needed.

  6. Remove most of the default files from the /kong-umbrella-chart/templates/ folder. Do not delete the KongClusterPlugin.yaml file.

    ls -ltrh templates/
    total 32K
    drwxr-xr-x 2 infracloud infracloud 4.0K Jun 21 19:47 tests
    -rw-r--r-- 1 infracloud infracloud 397 Jun 21 19:47 service.yaml
    -rw-r--r-- 1 infracloud infracloud 344 Jun 21 19:47 serviceaccount.yaml
    -rw-r--r-- 1 infracloud infracloud 1.8K Jun 21 19:47 NOTES.txt
    -rw-r--r-- 1 infracloud infracloud 2.1K Jun 21 19:47 ingress.yaml
    -rw-r--r-- 1 infracloud infracloud 952 Jun 21 19:47 hpa.yaml
    -rw-r--r-- 1 infracloud infracloud 1.9K Jun 21 19:47 _helpers.tpl
    -rw-r--r-- 1 infracloud infracloud 1.9K Jun 21 19:47 deployment.yaml
  7. Follow the Helm Charts instructions to install the Helm chart.
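Before publishing, the dependency declared in Chart.yaml can be resolved and the chart sanity-checked locally; a sketch using standard Helm commands:

```shell
# Download the Kong dependency chart into the charts/ folder
helm dependency update kong-umbrella-chart

# Lint and render the chart locally to catch template errors early
helm lint kong-umbrella-chart
helm template kong-umbrella-chart
```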

Install Grafana Helm chart

Installing the Grafana Helm chart is similar to installing the Kong Helm chart as a Workload.

Install Summary

  • Create a monitoring namespace.
  • Use the grafana-custom-values.yaml file (see below).
  • Integrate the Grafana Helm chart repository.
  • Install the Grafana Helm chart using a Workload.

Grafana YAML file

The grafana-custom-values.yaml file does the following:

  • Uses the Managed Prometheus service as a data source.
  • Provides the Kong Grafana dashboard for visualization.


## Custom values for Grafana

## Test framework configuration
testFramework:
  enabled: false

## Pod Annotations
podAnnotations: {}

## Deployment annotations
annotations: {}

## Service - set to type: LoadBalancer to expose service via load balancing instead of using ingress
service:
  enabled: true
  type: ClusterIP
  annotations: {}
  labels: {}

## Ingress configuration to expose Grafana externally using ingress
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: kong

## Resource Limits and Requests settings
resources: {}
#  limits:
#    cpu: 100m
#    memory: 128Mi
#  requests:
#    cpu: 100m
#    memory: 128Mi

## Node labels for pod assignment
nodeSelector: {}

## Tolerations for pod assignment
tolerations: []

## Affinity for pod assignment
affinity: {}

## Enable persistence using Persistent Volume Claims
persistence:
  type: pvc
  enabled: true
#  storageClassName: default
  accessModes:
    - ReadWriteOnce
  size: 10Gi
#  annotations: {}
#  existingClaim:

# Administrator credentials when not using an existing secret (see below)
adminUser: admin
# adminPassword: strongpassword

# Use an existing secret for the admin user.
admin:
  existingSecret: ""
  userKey: admin-user
  passwordKey: admin-password

## Extra environment variables
env: {}
envValueFrom: {}
envFromSecret: ""

## Configure Grafana datasources to point to Rafay Prometheus Service
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      - name: Rafay-Prometheus
        type: prometheus
        url: http://rafay-prometheus-server.rafay-infra.svc.cluster.local:9090
        access: proxy
        isDefault: true

## Configure Grafana dashboard providers for importing dashboards by default
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: 'default'
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/default

## Configure Grafana dashboards to import by default. gnetId is the dashboard ID
## from grafana.com. The map keys below are arbitrary local names.
dashboards:
  default:
    dashboard-7249:
      gnetId: 7249
      datasource: Rafay-Prometheus
    dashboard-12114:
      gnetId: 12114
      datasource: Rafay-Prometheus
    dashboard-12117:
      gnetId: 12117
      datasource: Rafay-Prometheus
    dashboard-12120:
      gnetId: 12120
      datasource: Rafay-Prometheus
    dashboard-12119:
      gnetId: 12119
      datasource: Rafay-Prometheus
    dashboard-11074:
      gnetId: 11074
      datasource: Rafay-Prometheus
    dashboard-8588:
      gnetId: 8588
      datasource: Rafay-Prometheus
    dashboard-1471:
      gnetId: 1471
      datasource: Rafay-Prometheus
    dashboard-12124:
      gnetId: 12124
      datasource: Rafay-Prometheus
    dashboard-12125:
      gnetId: 12125
      datasource: Rafay-Prometheus
    dashboard-12661:
      gnetId: 12661
      datasource: Rafay-Prometheus
    # New Grafana dashboard for Kong monitoring
    kong-dashboard:
      gnetId: 7424
      revision: 5
      datasource: Rafay-Prometheus

After publishing the Grafana Helm workload, verify the installation by running the following command.

kubectl get all -n monitoring

Set up Port Forwards

For the purposes of this exercise, port-forwarding is used to get access to the Grafana, Managed Prometheus, and Kong proxy. It is not advisable to do this in production. In a production environment, use a Kubernetes Service with an external IP address or a load balancer.

  1. Open a new terminal and run the following command to allow access to Prometheus using localhost:9090.

    POD_NAME=$(kubectl get pods --namespace monitoring -l "" -o jsonpath="{.items[0].metadata.name}")
    kubectl --namespace monitoring port-forward $POD_NAME 9090 &
  2. Run the following command to allow access to Grafana using localhost:3000.

    POD_NAME=$(kubectl get pods --namespace monitoring -l "" -o jsonpath="{.items[0].metadata.name}")
    kubectl --namespace monitoring port-forward $POD_NAME 3000 &
  3. Run the following command to allow access to the Kong proxy using localhost:8000. For this exercise, a plain-text HTTP proxy is used. Use the IP address of a LoadBalancer if running this in a cloud environment.

    POD_NAME=$(kubectl get pods --namespace kong -o jsonpath="{.items[0].metadata.name}")
    kubectl --namespace kong port-forward $POD_NAME 8000 &
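The three port-forwards can be spot-checked from another terminal; a sketch using the standard Prometheus and Grafana health endpoints (/-/healthy and /api/health):

```shell
# Prometheus health check
curl -s http://localhost:9090/-/healthy

# Grafana health endpoint
curl -s http://localhost:3000/api/health

# Kong proxy; before any routes exist, expect a "no Route matched" response
curl -i http://localhost:8000/
```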

Access the Grafana Dashboard

Accessing Grafana requires the Admin user password.

  1. Run the following command to read the Admin user password.

    kubectl get secret --namespace monitoring grafana-helm -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
  2. Using a web browser, go to http://localhost:3000.

  3. Use admin for the username and use the password obtained earlier.

Set up Sample Services

The next part of this exercise is to set up some services, along with an Ingress for routing to them.

Install Services

Set up three services: Billing, Invoice, and Comments.

  1. Execute the following command to set up the services.

    curl -s{{page.kong_version}}/examples/001_multiple-services.yaml | kubectl apply -f -

Install Ingress for the Services

After the services are running, create Ingress routing rules in Kubernetes.

  1. Execute the following command.

    echo '
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: sample-ingresses
      annotations:
        konghq.com/strip-path: "true"
    spec:
      ingressClassName: kong
      rules:
      - http:
          paths:
          - path: /billing
            pathType: ImplementationSpecific
            backend:
              service:
                name: billing
                port:
                  number: 80
          - path: /comments
            pathType: ImplementationSpecific
            backend:
              service:
                name: comments
                port:
                  number: 80
          - path: /invoice
            pathType: ImplementationSpecific
            backend:
              service:
                name: invoice
                port:
                  number: 80
    ' | kubectl apply -f -
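After applying the Ingress, routing can be verified through the Kong proxy port-forward; a sketch (the /billing path is one of the routes defined in the Ingress):

```shell
# Confirm the Ingress resource was created
kubectl get ingress sample-ingresses

# Exercise one of the routes through the proxy port-forward
curl -i http://localhost:8000/billing/status/200
```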

Create Some Traffic

After configuring the services and proxies, create some traffic and view the results.

  1. Execute the following command. Also try adjusting the script to send different traffic patterns and see how the metrics change.

    while true; do
      curl http://localhost:8000/billing/status/200
      curl http://localhost:8000/billing/status/501
      curl http://localhost:8000/invoice/status/201
      curl http://localhost:8000/invoice/status/404
      curl http://localhost:8000/comments/status/200
      curl http://localhost:8000/comments/status/200
      sleep 0.01
    done

With the Prometheus plugin enabled, Kong collects metrics for the requests it proxies. Metrics related to traffic flowing through the services should be visible in the Kong Grafana dashboard. The upstream services are httpbin instances, so a variety of endpoints can be used to shape the traffic.

Metrics Collected

Request Latencies of Services

Upstream Time

Kong collects latency data of how long a service takes to respond to requests. This data can be used to alert the on-call engineer if the latency goes beyond a certain threshold. For example, if there is a Service Level Agreement (SLA) that the APIs will respond with a latency of less than 20 milliseconds for 95% of the requests, Prometheus can be configured to alert based on the following query:

histogram_quantile(0.95, sum(rate(kong_latency_bucket{type="request"}[1m])) by (le,service)) > 20

The query calculates the 95th percentile of the total request latency (or duration) for all of the services, and alerts if it is more than 20 milliseconds. The type label in this query is request, which tracks the latency added by Kong and the service.

Switch this to upstream to track latency added by the service only.
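As a sketch, the same alert restricted to upstream latency only would switch the type label:

```text
histogram_quantile(0.95, sum(rate(kong_latency_bucket{type="upstream"}[1m])) by (le,service)) > 20
```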

Prometheus is highly flexible and well documented. See the Prometheus documentation for more information about setting up alerts.

Kong Proxy Latency

Kong also collects metrics about its own performance. The following query is similar to the previous one, but gives insight into the latency added by Kong.

histogram_quantile(0.90, sum(rate(kong_latency_bucket{type="kong"}[1m])) by (le,service)) > 2

Error Rates

HTTP Status

Another important metric to track is the rate of errors and requests the services are serving. The time series kong_http_status collects HTTP status code metrics for each service.

This metric helps track the rate of errors for each of the services.

sum(rate(kong_http_status{code=~"5[0-9]{2}"}[1m])) by (service)

It is also possible to calculate the percentage of requests, in any duration, that are errors. All HTTP status codes are indexed, meaning it is possible to learn more about typical traffic patterns and identify problems. For example, a sudden rise in 404 response codes could be indicative of client code requesting an endpoint that was removed in a recent deployment.
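As a sketch, the 5xx error percentage per service over the last minute can be computed by dividing the error rate by the total request rate:

```text
sum(rate(kong_http_status{code=~"5[0-9]{2}"}[1m])) by (service)
  / sum(rate(kong_http_status[1m])) by (service) * 100
```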

Request Rate and Bandwidth

It is possible to derive the total request rate for each of the services or across the Kubernetes cluster using the kong_http_status time series.
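A sketch of the total per-service request rate derived from that time series:

```text
sum(rate(kong_http_status[1m])) by (service)
```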

Total Requests

Another metric that Kong keeps track of is the amount of network bandwidth (kong_bandwidth) being consumed. This gives you an estimate of how request/response sizes correlate with other behaviors in your infrastructure.

Total Bandwidth

The metrics for services running inside the Kubernetes cluster are now available. This provides more visibility into the applications without making any modifications to the services. Use Alertmanager or Grafana to configure alerts based on the metrics observed and Service Level Objectives (SLO).


Congratulations! You have successfully created a custom cluster blueprint with the "kong" addon and applied it to a cluster. You can now use this blueprint on as many clusters as you require.