class NodeGroupDeploymentStrategy(pb_classes.Message):
Constructor: NodeGroupDeploymentStrategy(initial_message, max_unavailable, max_surge, drain_timeout)
Undocumented
Method | __dir__ | Undocumented
Method | __init__ | Undocumented
Method | drain_timeout | Undocumented
Method | max_surge | Undocumented
Method | max_unavailable | Undocumented
Constant | __PB2 | Undocumented
Constant | __PY | Undocumented
Class Variable | __mask | Undocumented
Property | drain_timeout | Maximum amount of time the service will spend attempting to gracefully drain a node (evicting its pods) before falling back to pod deletion. By default, a node can be drained for an unlimited time...
Property | max_surge | The maximum number of additional nodes that can be provisioned above the desired number of nodes during the update process. This value can be specified either as an absolute number (for example 3) or as a percentage of the desired number of nodes (for example 5%)...
Property | max_unavailable | The maximum number of nodes that can be simultaneously unavailable during the update process. This value can be specified either as an absolute number (for example 3) or as a percentage of the desired number of nodes (for example 5%)...
Inherited from Message:
Class Method | get | Undocumented
Class Method | is | Undocumented
Class Method | is | Undocumented
Method | __repr__ | Undocumented
Method | check | Undocumented
Method | get | Undocumented
Method | get | Undocumented
Method | is | Undocumented
Method | set | Undocumented
Method | which | Undocumented
Class Variable | __PB2 | Undocumented
Instance Variable | __pb2 | Undocumented
Method | _clear | Undocumented
Method | _get | Undocumented
Method | _set | Undocumented
Class Variable | __credentials | Undocumented
Class Variable | __default | Undocumented
Class Variable | __sensitive | Undocumented
Instance Variable | __recorded | Undocumented
def __init__(
    self,
    initial_message: message_1.Message | None = None,
    *,
    max_unavailable: PercentOrCount | node_group_pb2.PercentOrCount | None | unset.UnsetType = unset.Unset,
    max_surge: PercentOrCount | node_group_pb2.PercentOrCount | None | unset.UnsetType = unset.Unset,
    drain_timeout: duration_pb2.Duration | datetime.timedelta | None | unset.UnsetType = unset.Unset,
):

Undocumented
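A minimal construction sketch under stated assumptions: the import path is hypothetical (adjust it to wherever the SDK exposes these classes), and the PercentOrCount field names `percent` and `count` are inferred from the type name rather than confirmed by this page.

import datetime

# Hypothetical import path; adjust to where the SDK exposes these classes.
from nebius.api.nebius.mk8s.v1 import NodeGroupDeploymentStrategy, PercentOrCount

strategy = NodeGroupDeploymentStrategy(
    # `percent` / `count` are assumed PercentOrCount fields (not confirmed here).
    max_surge=PercentOrCount(percent=25),
    max_unavailable=PercentOrCount(count=1),
    # The signature above shows drain_timeout also accepts a plain timedelta.
    drain_timeout=datetime.timedelta(minutes=15),
)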
def drain_timeout(self, value: duration_pb2.Duration | datetime.timedelta | None):

Undocumented
def max_surge(self, value: PercentOrCount | node_group_pb2.PercentOrCount | None):

Undocumented
def max_unavailable(self, value: PercentOrCount | node_group_pb2.PercentOrCount | None):

Undocumented
Maximum amount of time the service will spend attempting to gracefully drain a node (evicting its pods) before falling back to pod deletion. By default, a node can be drained for an unlimited time. An important consequence is that if a PodDisruptionBudget does not allow a pod to be evicted, a NodeGroup update with node re-creation will hang on that pod's eviction. Note that this is different from `kubectl drain --timeout`.
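For instance, to cap draining via the property setter shown above (a sketch; `strategy` is a hypothetical existing NodeGroupDeploymentStrategy instance):

import datetime

# Cap graceful draining at 10 minutes; after that the service falls back to
# pod deletion instead of hanging on a blocked eviction.
strategy.drain_timeout = datetime.timedelta(minutes=10)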
The maximum number of additional nodes that can be provisioned above the desired number of nodes during the update process. This value can be specified either as an absolute number (for example 3) or as a percentage of the desired number of nodes (for example 5%). When specified as a percentage, the actual number is calculated by rounding up to the nearest whole number. This value cannot be 0 if `max_unavailable` is also set to 0. Defaults to 1. Example: if set to 25%, the node group can scale up by an additional 25% during the update, allowing new nodes to be added before old nodes are removed, which helps minimize workload disruption. NOTE: it is the user's responsibility to ensure there is enough quota to provision nodes above the desired number; the available quota effectively limits `max_surge`. If there is not enough quota for even one extra node, the update operation will hang with a quota-exhausted error. Such an error will be visible in Operation.progress_data.
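A worked example of the round-up rule, using only standard-library arithmetic (the node counts are illustrative):

import math

desired_nodes = 10   # illustrative desired group size
surge_percent = 25   # max_surge = "25%"

# Per the docstring, percentages round UP to the nearest whole number:
surge_nodes = math.ceil(desired_nodes * surge_percent / 100)
print(surge_nodes)  # 3 -> up to 13 nodes may exist during the update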
The maximum number of nodes that can be simultaneously unavailable during the update process. This value can be specified either as an absolute number (for example 3) or as a percentage of the desired number of nodes (for example 5%). When specified as a percentage, the actual number is calculated by rounding down to the nearest whole number. This value cannot be 0 if `max_surge` is also set to 0. Defaults to 0. Example: If set to 20%, up to 20% of the nodes can be taken offline at once during the update, ensuring that at least 80% of the desired nodes remain operational.
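And the matching round-down rule for `max_unavailable` (again with illustrative numbers):

import math

desired_nodes = 10        # illustrative desired group size
unavailable_percent = 20  # max_unavailable = "20%"

# Per the docstring, percentages round DOWN to the nearest whole number:
unavailable_nodes = math.floor(desired_nodes * unavailable_percent / 100)
print(unavailable_nodes)  # 2 -> at least 8 nodes remain operational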