TopologySpreadConstraintV1 Properties
Class Properties (8 members)
Represents the topology spread constraints for distributing pods across a Kubernetes cluster. These constraints define policies for balancing pods across different node topologies such as zones or regions. The constraints aim to ensure high availability and resilience by spreading pods evenly based on the specified rules.
Gets or sets the label selector used for dynamically filtering Kubernetes resources based on their labels. This property enables the selection of specific objects by defining criteria that must match the labels within the resource set.
public LabelSelectorV1 LabelSelector { get; set; }

Remarks
LabelSelector provides mechanisms to define filtering using key-value pairs (MatchLabels) or complex expressions (MatchExpressions). This helps in tailoring resource selection based on the requirements of the topology spread constraints.
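A minimal sketch of building a selector for the constraint. The MatchLabels key-value form is taken from the remarks above; the exact construction of LabelSelectorV1 (for example, whether MatchLabels is an assignable dictionary) may differ by library version:

```csharp
// Hypothetical usage: restrict the spread calculation to pods labeled app=web.
var constraint = new TopologySpreadConstraintV1
{
    LabelSelector = new LabelSelectorV1
    {
        // Assumed here to be an assignable Dictionary<string, string>.
        MatchLabels = new Dictionary<string, string> { ["app"] = "web" }
    }
};
```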
Defines a list of pod label keys that are used, together with the label selector, to identify the group of pods over which topology spreading is calculated.
public List<string> MatchLabelKeys { get; }

Remarks
This property specifies which label keys should be included for matching when determining the topology spread constraints of a Kubernetes resource. By specifying these keys, users can focus on specific labels that must be present for enforcing the desired topological spread among nodes.
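Because the signature above exposes only a getter, keys are appended to the existing list rather than assigned. A short sketch, using the pod-template-hash label that Kubernetes adds to Deployment pods as an illustrative key:

```csharp
// Spread is calculated per Deployment revision by matching on the
// controller-managed pod-template-hash label of the incoming pod.
constraint.MatchLabelKeys.Add("pod-template-hash");
```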
Represents the maximum allowed skew for a topology spread constraint in Kubernetes. Skew is defined as the maximum difference between the number of matching pods in any topology domain (as identified by the topologyKey) and the global minimum number of matching pods across all eligible domains.
public int? MaxSkew { get; set; }

Remarks
This property is used to control the dispersion of pods across domains, ensuring that they are evenly balanced based on the specified topologyKey. A lower value enforces stricter balancing, while a higher value allows more skew in the pod distribution.
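A small worked example may make the skew arithmetic concrete; the property names follow this page, and the zone counts are illustrative:

```csharp
// Suppose zone A currently runs 2 matching pods and zone B runs 1:
// skew = 2 - 1 = 1, which satisfies MaxSkew = 1.
// Placing another pod in zone A would raise the skew to 3 - 1 = 2,
// so the scheduler favors zone B to keep the distribution balanced.
var constraint = new TopologySpreadConstraintV1
{
    MaxSkew = 1,
    TopologyKey = "topology.kubernetes.io/zone"
};
```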
Specifies the minimum number of topology domains that must have at least one replica present for resource scheduling. This is used to ensure that replicas are distributed across a certain number of domains to achieve a balanced topology spread for better availability and fault tolerance.
public int? MinDomains { get; set; }

Remarks
The value of this property helps enforce the desired distribution of replicas across different topology domains. For example, the topology domains could represent zones, regions, or any other logical grouping defined by the cluster. If this property is not set, the behavior may default to the Kubernetes scheduler's default logic for topology constraints.
Defines the policy for node affinity within a Kubernetes topology spread constraint. Determines how nodes are selected to satisfy the specified affinity rules.
public string? NodeAffinityPolicy { get; set; }

Remarks

The policy controls how a pod's node affinity and node selector are treated when calculating the topology spread: in Kubernetes, "Honor" includes only the nodes that match the pod's nodeAffinity/nodeSelector, while "Ignore" considers all nodes. This property is optional and may be null if no specific node affinity policy is set, in which case the scheduler behaves as if "Honor" were specified.
Gets or sets the policy for handling node taints within a topology spread constraint.
public string? NodeTaintsPolicy { get; set; }

Remarks

The NodeTaintsPolicy determines how node taints are considered when applying topology constraints: in Kubernetes, "Honor" includes untainted nodes plus tainted nodes for which the incoming pod has a matching toleration, while "Ignore" considers all nodes regardless of taints (the behavior when this property is null). This allows fine-grained control over the interaction between taints and topology spread requirements.
Represents the key used to indicate the topology domain of a Kubernetes resource. The TopologyKey defines the scope for spreading or balancing resources across nodes based on the specified topology rule. It is typically matched to a node label, such as failure-domain.beta.kubernetes.io/zone or kubernetes.io/hostname, to align resources with the desired domain or affinity policy.

public string? TopologyKey { get; set; }

Remarks

This property is used in conjunction with topology constraints to evenly distribute resources across the specified topology domains or to avoid overloading a single topology domain. The specified TopologyKey must correspond to a valid label key present on the cluster nodes.

Specifies the behavior when the constraints defined for topology spreading cannot be satisfied. This is used to handle scenarios where node placement does not meet the desired topology spread constraints.
public string? WhenUnsatisfiable { get; set; }

Remarks
Possible values are "DoNotSchedule", which prevents the scheduler from placing the pod on a node that would violate the topology constraint, and "ScheduleAnyway", which allows the pod to be scheduled even when the constraint cannot be met, while still prioritizing nodes that minimize the skew. The exact semantics depend on the Kubernetes version and configuration.
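Pulling the members on this page together, a fully populated constraint might look like the following sketch; every property name comes from this page, while the label values and the construction of LabelSelectorV1 are assumptions:

```csharp
// Spread pods labeled app=web evenly across zones: tolerate a skew of at
// most one pod, require at least two zones, and refuse to schedule rather
// than violate the constraint.
var constraint = new TopologySpreadConstraintV1
{
    MaxSkew = 1,
    MinDomains = 2,
    TopologyKey = "topology.kubernetes.io/zone",
    WhenUnsatisfiable = "DoNotSchedule",
    NodeAffinityPolicy = "Honor",
    NodeTaintsPolicy = "Honor",
    LabelSelector = new LabelSelectorV1
    {
        // Assumed to be an assignable Dictionary<string, string>.
        MatchLabels = new Dictionary<string, string> { ["app"] = "web" }
    }
};
```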