Quick Facts
- Category: Cloud Computing
- Published: 2026-05-01 00:30:27
The Memory QoS feature in Kubernetes v1.36 introduces significant enhancements to how container memory is managed using the cgroup v2 memory controller. Building on its alpha debut in v1.22 and subsequent update in v1.27, the new release adds opt-in memory reservation, tiered protection by Quality of Service (QoS) class, observability metrics, and a kernel-version warning for memory.high. These changes give administrators finer control over memory allocation and throttling, reducing the risk of system-wide out-of-memory (OOM) events while improving workload reliability. Below, we answer key questions about what's new in v1.36 and how to leverage these features.
1. What is Memory QoS in Kubernetes and how does it work in v1.36?
Memory QoS is a feature that enables the kubelet to use cgroup v2 memory controllers to guide the Linux kernel on how to handle container memory. In Kubernetes v1.36, the feature gate MemoryQoS (still alpha) activates two core behaviors: throttling via memory.high and optional reservation via memory.min or memory.low. The throttling factor, controlled by memoryThrottlingFactor (default 0.9), sets a soft limit that triggers reclaim before the container’s memory limit is reached. The major innovation in v1.36 is the separation of throttling from reservation through a new kubelet configuration field: memoryReservationPolicy. This allows operators to enable throttling without automatically locking memory, reducing the chance of OOM kills caused by excessive hard reservations.
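As a concrete starting point, a minimal KubeletConfiguration sketch that enables throttling only (no reservation) might look like the following. The field names match those described in this article; MemoryQoS is an alpha feature gate, so details may change between releases.

```yaml
# KubeletConfiguration sketch: throttling only, no memory reservation.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  MemoryQoS: true           # alpha feature gate
memoryThrottlingFactor: 0.9 # memory.high = limit * 0.9 (the default)
# memoryReservationPolicy defaults to None: no memory.min/memory.low is written
```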
2. What is the new opt-in memory reservation with memoryReservationPolicy?
The memoryReservationPolicy field in the kubelet configuration controls how memory is reserved for containers. It offers two settings:
- `None` (default): No `memory.min` or `memory.low` values are written to the cgroup. Throttling via `memory.high` still works as before.
- `TieredReservation`: The kubelet writes tiered memory protection based on the Pod's QoS class (see next question).
This opt-in design lets operators first enable throttling to observe workload behavior, then gradually introduce reservation when they are confident the node has enough memory headroom. It avoids the earlier pitfall where enabling the feature gate immediately set memory.min for all containers with memory requests, which could lock large amounts of memory and increase OOM risk.
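Once throttling behavior looks healthy under observation, reservation can be switched on. A sketch of that second step, assuming the v1.36 field described above:

```yaml
# KubeletConfiguration sketch: opt in to tiered reservation after observing throttling.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  MemoryQoS: true
memoryThrottlingFactor: 0.9
memoryReservationPolicy: TieredReservation # writes memory.min/memory.low by QoS class
```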
3. How does tiered protection by QoS class work in v1.36?
When memoryReservationPolicy is set to TieredReservation, the kubelet assigns different levels of memory protection based on the Pod's QoS class:
- Guaranteed Pods receive hard protection via `memory.min`. The kernel will never reclaim this memory; if it cannot honor the guarantee, it triggers the OOM killer on other processes. For example, a Guaranteed Pod requesting 512 MiB sets `memory.min` to 536870912 bytes.
- Burstable Pods get soft protection via `memory.low`. The kernel avoids reclaiming this memory under normal pressure, but may reclaim it under extreme pressure to prevent a system-wide OOM.
- BestEffort Pods receive neither `memory.min` nor `memory.low`, so their memory remains fully reclaimable.
This tiered approach ensures that only Guaranteed workloads get unconditional protection, while Burstable workloads can yield memory in emergencies, improving overall node stability.
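The byte value the kubelet writes for a given request is plain MiB-to-bytes arithmetic, as the 512 MiB example above shows. A quick shell sketch (the helper function name is our own):

```shell
# Convert a MiB memory request to the byte value written to memory.min.
mib_to_bytes() {
  echo $(( $1 * 1024 * 1024 ))
}

mib_to_bytes 512   # 512 MiB Guaranteed request -> prints 536870912
```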
4. How does the v1.36 behavior differ from v1.27 regarding memory.min and memory.low?
In Kubernetes v1.27, enabling the MemoryQoS feature gate immediately set memory.min for every container with a memory request, regardless of QoS class. This hard reservation was problematic: if Burstable Pods requested a large total memory (e.g., 7 GiB on an 8 GiB node), that memory would be locked as memory.min, leaving little room for the kernel, system daemons, or BestEffort workloads. It increased the risk of OOM kills. In v1.36, with TieredReservation, only Guaranteed Pods use memory.min; Burstable Pods map to memory.low. This means under normal pressure, Burstable memory is still protected, but under extreme pressure it can be reclaimed to avoid system-wide OOM. This separation reduces the total hard reservation and provides more flexibility.
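The article's 8 GiB node example works out as follows; a rough sketch that ignores kubelet and system reservations:

```shell
node_mib=$(( 8 * 1024 ))           # 8 GiB node
burstable_min_mib=$(( 7 * 1024 ))  # v1.27: 7 GiB of Burstable requests locked via memory.min
# Headroom left for the kernel, system daemons, and BestEffort workloads:
echo $(( node_mib - burstable_min_mib ))   # prints 1024 (MiB)
```

Under v1.36's TieredReservation, that 7 GiB would map to `memory.low` instead, so the kernel could still reclaim it under extreme pressure rather than OOM-killing other processes.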
5. What observability metrics are introduced in v1.36?
Kubernetes v1.36 exposes two new alpha-stability metrics on the kubelet /metrics endpoint to help administrators monitor memory protection:
- `kubelet_memory_qos_node_memory_min_bytes` – Reports the total amount of memory (in bytes) protected by `memory.min` across all cgroups on the node.
- `kubelet_memory_qos_node_memory_low_bytes` – Reports the total amount of memory (in bytes) protected by `memory.low`.
These metrics allow operators to observe how much memory is subject to hard or soft protection, aiding capacity planning and debugging. They are particularly useful when evaluating how changes to memoryReservationPolicy affect the node’s memory pressure profile.
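One way to pull these gauges out of a scraped /metrics dump is a small awk one-liner. The sample values below are illustrative, not from a real node; a real dump would be fetched from the kubelet's metrics endpoint.

```shell
# Sample /metrics lines (illustrative values only).
cat > /tmp/metrics.txt <<'EOF'
kubelet_memory_qos_node_memory_min_bytes 5.36870912e+08
kubelet_memory_qos_node_memory_low_bytes 1.073741824e+09
EOF

# Extract the hard-protected (memory.min) total in bytes.
awk '/^kubelet_memory_qos_node_memory_min_bytes/ { printf "%.0f\n", $2 }' /tmp/metrics.txt
# prints 536870912
```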
6. Why is the kernel-version warning for memory.high important?
In v1.36, the kubelet emits a warning if the host kernel does not properly support the memory.high cgroup v2 interface. This matters because memory.high is the mechanism used to implement throttling (via the kubelet's memory throttling factor). Kernels older than 5.0, as well as some distribution kernels, may have bugs in or lack full support for this interface. Without an adequate kernel, setting memory.high can lead to unintended OOM kills or ineffective reclaim. The warning helps operators identify incompatible environments early, so they can decide whether to disable the MemoryQoS feature gate or upgrade the kernel. This is especially important when adopting the new tiered reservation policy, as throttling is a prerequisite for proper protection.
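A pre-flight check along the lines of the kubelet's warning can be scripted with `sort -V`. The 5.0 threshold here is the one mentioned above; the exact set of affected kernels varies by distribution, so treat this as a rough sanity check, not an authoritative compatibility test.

```shell
# Return success if the given kernel version is at least the required minimum.
kernel_at_least() {
  local required="$1" actual="$2"
  # sort -V puts the lowest version first; if that is $required, $actual >= $required.
  [ "$(printf '%s\n%s\n' "$required" "$actual" | sort -V | head -n1)" = "$required" ]
}

if kernel_at_least "5.0" "$(uname -r)"; then
  echo "kernel ok for memory.high"
else
  echo "warning: kernel may not fully support memory.high"
fi
```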