In the previous blog, we discussed why tracking and reporting the GPU SM Clock metric matters. In this blog, we will dive deeper into another critical GPU metric: GPU Power.
In the previous blog, we discussed why tracking and reporting the GPU Memory Utilization metric matters. In this blog, we will dive deeper into another critical GPU metric: the GPU SM Clock. The GPU SM clock (Streaming Multiprocessor clock) metric refers to the clock speed at which the GPU's cores (SMs) are running.
The SM is the main processing unit of the GPU, responsible for executing compute tasks such as deep learning operations, simulations, and graphics rendering. Monitoring the SM clock speed can help users assess the performance and health of their GPUs during workloads and detect potential bottlenecks related to clock-speed throttling.
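To make this concrete, here is a minimal sketch of how the SM clock can be sampled programmatically. It assumes the pynvml Python bindings and an NVIDIA driver are installed; it is an illustration, not a Rafay-specific integration.

```python
# Minimal sketch: sample the SM clock via NVML (assumes the pynvml
# package and an NVIDIA driver are installed).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

# Current SM clock vs. the maximum the hardware supports, in MHz.
sm_clock = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_SM)
sm_max = pynvml.nvmlDeviceGetMaxClockInfo(handle, pynvml.NVML_CLOCK_SM)
print(f"SM clock: {sm_clock} MHz (max {sm_max} MHz)")

# A sustained gap between the current and max clock while under load
# can indicate thermal or power throttling.
pynvml.nvmlShutdown()
```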
In the introductory blog on GPU metrics, we discussed the GPU metrics that matter and why they matter. In this blog, we will dive deeper into one of the critical GPU metrics: GPU Memory Utilization.
GPU memory utilization refers to the percentage of the GPU’s dedicated memory (i.e. framebuffer) that is currently in use. It measures how much of the available GPU memory is occupied by data such as models, textures, tensors, or intermediate results during computation.
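As a quick illustration (again assuming the pynvml bindings and an NVIDIA driver), this percentage can be computed directly from NVML's framebuffer counters:

```python
# Minimal sketch: compute GPU memory utilization via NVML (assumes
# the pynvml package and an NVIDIA driver are installed).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

# nvmlDeviceGetMemoryInfo reports framebuffer usage in bytes.
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
used_pct = 100.0 * mem.used / mem.total
print(f"framebuffer: {mem.used / 2**20:.0f} MiB / "
      f"{mem.total / 2**20:.0f} MiB ({used_pct:.1f}% used)")

pynvml.nvmlShutdown()
```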
With the increasing reliance on GPUs for compute-intensive tasks such as machine learning, deep learning, data processing, and rendering, both infrastructure administrators and users of GPUs (i.e. data scientists, ML engineers, and GenAI app developers) require timely insight into the performance, efficiency, and overall health of their GPU resources.
To make data-driven decisions, it is critical for these users to have access to key metrics for their GPUs. This is the first blog in a series where we will describe the GPU metrics that you should track and monitor. In subsequent blogs, we will do a deep dive into each metric, why it matters, and how to use it effectively.
When it comes to deep learning frameworks, PyTorch and TensorFlow are two of the most prominent tools in the field. Both have been widely adopted by researchers and developers alike, and while they share many similarities, they also have key differences that make them suitable for different use cases.
We thought this blog would be timely, especially with the PyTorch 2024 Conference right around the corner.
In this blog, we’ll explore the main differences between PyTorch and TensorFlow across several dimensions such as ease of use, dynamic vs. static computation, ecosystem, deployment, community, and industry adoption. In a follow-on blog, we will describe how Rafay’s customers use both PyTorch and TensorFlow for their AI/ML projects.
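To give a flavor of the "dynamic computation" dimension, here is a small illustrative PyTorch snippet (a sketch for this blog, not taken from either framework's documentation): because the graph is built as the code runs, ordinary Python control flow can depend on tensor values.

```python
# Illustrative sketch of PyTorch's dynamic (eager) computation: the
# graph is built as the code runs, so Python control flow can branch
# on runtime tensor values.
import torch

def f(x: torch.Tensor) -> torch.Tensor:
    # The branch taken depends on the runtime value of x.
    if x.sum() > 0:
        return x * 2
    return x - 1

x = torch.randn(3, requires_grad=True)
y = f(x).sum()
y.backward()       # autograd traces whichever branch actually executed
print(x.grad)
```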
Although Azure Kubernetes Service (AKS) allows you to deploy containerized workloads in a managed Kubernetes environment, developers still need to deal with the challenge of securely managing access to Azure resources (e.g. Key Vault or Azure Storage). Traditionally, secrets like API keys or service account credentials are used to authenticate and authorize workloads, but this approach presents security risks and operational overhead.
For AKS clusters, Azure offers Workload Identity: a modern, secure, and scalable way to manage access without the hassle of managing secrets. In this blog post, we'll dive deep into what Workload Identity is, how it works in AKS, and why it's a game-changer for Kubernetes clusters on Azure.
Note
In a related blog, we will see how users can achieve something similar in Amazon EKS clusters using EKS Pod Identity.
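To make the benefit concrete, here is an illustrative sketch of application code running in a pod that has been configured with an AKS workload identity. It assumes the azure-identity and azure-keyvault-secrets Python packages; the vault URL and secret name are placeholders, not real resources.

```python
# Illustrative sketch: reading a Key Vault secret from a pod that has
# an AKS workload identity. The vault URL and secret name below are
# placeholders for this example.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential picks up the federated token that AKS
# projects into the pod -- no API keys or client secrets in code.
credential = DefaultAzureCredential()
client = SecretClient(
    vault_url="https://example-vault.vault.azure.net",
    credential=credential,
)

secret = client.get_secret("example-secret")
print(secret.name)
```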
Access reviews are mandated by regulations such as SOX, HIPAA, GLBA, PCI, NYDFS, and SOC-2, and are critical to help organizations maintain a strong risk management posture and uphold compliance. These reviews are typically conducted on a periodic basis (e.g. monthly, quarterly, or annually) depending on the organization's policies and risk tolerance.
Providing auditors with periodic user access reports for Kubernetes is a critical task for any platform team. This becomes especially onerous for organizations that operate tens or hundreds of Kubernetes clusters used by hundreds of app developers and SREs. Doing this via manual processes is impractical.
In this blog, we will look at why user access reports are critical for organizations and how Rafay's customers implement this with very high levels of automation.
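As a rough sketch of the kind of data such a report aggregates, the snippet below enumerates cluster-wide RBAC grants using the official kubernetes Python client. It assumes a valid kubeconfig; a production report would also cover namespaced RoleBindings and repeat this across every cluster in the fleet.

```python
# Rough sketch: dump cluster-wide RBAC grants for an access review.
# Assumes the official kubernetes Python client and a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

# Each ClusterRoleBinding maps subjects (users, groups, service
# accounts) to a ClusterRole -- exactly what an auditor asks for.
for binding in rbac.list_cluster_role_binding().items:
    for subject in binding.subjects or []:
        print(f"{subject.kind}/{subject.name} -> "
              f"{binding.role_ref.kind}/{binding.role_ref.name}")
```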
When it comes to running workloads on Amazon Web Services (AWS), two popular choices are Amazon Elastic Compute Cloud (EC2) and AWS Fargate. Both have their merits, but understanding their cost implications is crucial for making an informed decision.
In this blog, we'll dive into a cost comparison of EC2 and Fargate configurations within an Amazon Elastic Kubernetes Service (EKS) cluster.
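As a back-of-the-envelope sketch of how such a comparison works, consider the arithmetic below. The hourly rates are placeholders for illustration only; consult the AWS pricing pages for current, region-specific numbers.

```python
# Back-of-the-envelope cost comparison sketch. The rates below are
# placeholders for illustration -- not current AWS pricing.
EC2_HOURLY = 0.0832            # e.g. a 2 vCPU / 8 GiB instance, on-demand
FARGATE_VCPU_HOURLY = 0.04048  # per vCPU-hour (placeholder)
FARGATE_GB_HOURLY = 0.004445   # per GB-hour (placeholder)

hours = 730             # roughly one month
vcpus, mem_gb = 2, 8    # a pod sized like the EC2 instance above

ec2_cost = EC2_HOURLY * hours
fargate_cost = (FARGATE_VCPU_HOURLY * vcpus
                + FARGATE_GB_HOURLY * mem_gb) * hours

print(f"EC2:     ${ec2_cost:,.2f}/month")
print(f"Fargate: ${fargate_cost:,.2f}/month")
# Fargate bills per-pod resources; EC2 bills the whole instance whether
# or not it is fully utilized -- utilization drives which is cheaper.
```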
This is part of a multi-part blog series on GPUs and how they intersect with Kubernetes and containers. In this blog, we will discuss how CPUs and GPUs are architecturally similar and different. We will also review when it is ideal to use a CPU versus a GPU.
Our upcoming release, scheduled for June in our Preview environment, adds support for a number of new features and enhancements. We will write about these in separate blogs. This blog focuses on support for Kubernetes v1.30 with Rafay MKS (i.e. upstream Kubernetes for bare metal and VM-based environments).
Both new cluster provisioning and in-place upgrades of existing clusters are supported. As with most Kubernetes releases, this version also deprecates and removes a number of features. To ensure zero impact to our customers, we have validated every feature in the Rafay Kubernetes Operations Platform on this Kubernetes version. This release will be promoted from Preview to Production in a few days and made available to all customers.