Overview
Although namespaces logically partition a cluster, they cannot enforce true isolation. Users or resources operating in a shared Kubernetes cluster can potentially access any other resource in the cluster, regardless of the namespace they operate in.
Even if compensating controls such as Network Policies are implemented to block or control "namespace-to-namespace" communication (an illustrative policy is sketched below), the "noisy neighbor" problem still remains because compute, memory, and other cluster resources are shared.
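For reference, here is a minimal sketch of such a compensating control: a standard Kubernetes NetworkPolicy that only admits ingress traffic from pods in the same namespace. The namespace name is a placeholder, and enforcement assumes a CNI plugin that supports NetworkPolicies. Note that this governs network traffic only and does nothing to address resource contention.

```yaml
# Illustrative sketch only. Blocks ingress from other namespaces by allowing
# traffic solely from pods within the same namespace. Requires a CNI plugin
# that enforces NetworkPolicies. The namespace "team-a" is a placeholder.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: team-a
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # empty selector = any pod in this same namespace
```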
For scenarios where this can be problematic, the only practical solution is to use "dedicated" Kubernetes clusters to guarantee true separation across operational boundaries.
A commonly used term for this single tenant model is Cluster as a Service.
A common example is the use of dedicated clusters for "Production" and "Staging". Users can use "Projects" to achieve this isolation, as illustrated in the example below.
| Environment | Project |
|---|---|
| Production | "Production" Project with a dedicated Kubernetes cluster. Project configured with least-privilege RBAC so that only required users can access the project. |
| Staging | "Staging" Project with a dedicated Kubernetes cluster. Project configured with RBAC so that only required users can access the project. |
A Project can contain one or more Kubernetes clusters. In the example below, an Amazon EKS cluster has been provisioned in the Project.
Users within a project can switch to another project using the search bar. When a user enters one or more letters (for example, qe), the system auto-completes the search and lists all project names that match the entered text, as shown in the example below.
Deployment Options
The matrix below shows which project lifecycle actions (create, update, and delete) are supported by each deployment method: the interactive UI, declarative RCTL commands, API-driven automation, and Terraform.
| Action | UI | CLI | API | Terraform |
|---|---|---|---|---|
| Create | Yes | Yes | Yes | Yes |
| Update | Yes | Yes | Yes | Yes |
| Delete | Yes | Yes | Yes | Yes |
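As an illustration of the declarative path, a project could be described in a spec file and applied with RCTL. The sketch below is hypothetical: the field names and file layout shown are assumptions for illustration only, so consult the RCTL reference for the exact schema and command syntax.

```yaml
# Hypothetical illustration of a declarative project spec. Field names are
# assumptions, not confirmed schema; see the RCTL reference for the real one.
kind: Project
metadata:
  name: production                                    # placeholder project name
  description: Dedicated project for the production environment
```

The spec file would then be applied with an RCTL command such as `rctl apply -f project.yaml` (the command shown is likewise an assumption).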
Access Control
Organizations can use role-based access control (RBAC) to restrict which users can access a particular project and with what privileges (e.g. read or read/write).
Resource Quotas
Organizations can leverage "Resource Quotas" to restrict how much of a cluster's resources a namespace can consume.
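For example, a standard Kubernetes ResourceQuota of the following shape caps the CPU, memory, and pod count available to a single namespace. The namespace name and limit values below are placeholders chosen for illustration.

```yaml
# Illustrative sketch of a standard Kubernetes ResourceQuota. The namespace
# and limit values are placeholders; adjust to your own capacity planning.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"        # total CPU requested across all pods
    requests.memory: 8Gi     # total memory requested across all pods
    limits.cpu: "8"          # total CPU limits across all pods
    limits.memory: 16Gi      # total memory limits across all pods
    pods: "20"               # maximum number of pods in the namespace
```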
Configurations
On this configuration page, users can enable the drift webhook at the project level. When drift configuration is enabled for a project, the webhook can be deployed via blueprints to all clusters in that project, provided it is also enabled at the organization level. If the webhook is disabled at the project level, it cannot be deployed to any cluster in that project, even when the organization-level webhook remains enabled. By default, this option is enabled.