class NodeGroupDeploymentStrategy(pb_classes.Message):
Constructor: NodeGroupDeploymentStrategy(initial_message, max_unavailable, max_surge, drain_timeout)
Undocumented
| Kind | Name | Summary |
| --- | --- | --- |
| Method | __dir__ | Undocumented |
| Method | __init__ | Create a wrapper around a protobuf message instance. |
| Method | drain_timeout | Undocumented |
| Method | max_surge | Undocumented |
| Method | max_unavailable | Undocumented |
| Constant | __PB2 | Undocumented |
| Constant | __PY | Undocumented |
| Class Variable | __mask | Undocumented |
| Property | drain_timeout | Maximum time the service spends attempting to gracefully drain a node (evicting its pods) before falling back to pod deletion. |
| Property | max_surge | The maximum number of additional nodes that can be provisioned above the desired number of nodes during the update process. |
| Property | max_unavailable | The maximum number of nodes that can be simultaneously unavailable during the update process. |
Inherited from Message:
| Kind | Name | Summary |
| --- | --- | --- |
| Class Method | get | Return the protobuf descriptor for this message class. |
| Class Method | is | Return True if the field contains credentials. |
| Class Method | is | Return True if the field is marked as sensitive. |
| Method | __repr__ | Return a human-readable representation of the message, sanitizing sensitive fields. |
| Method | check | Check explicit presence for a field in the protobuf message. |
| Method | get | Build a reset mask for a full update of this message. |
| Method | get | Return the tracked reset mask. |
| Method | is | Return True if a field equals its default value. |
| Method | set | Replace the tracked reset mask. |
| Method | which | Return the set field name for a given oneof. |
| Instance Variable | __PB2 | Protobuf message class associated with this wrapper. |
| Instance Variable | __pb2 | Underlying protobuf message instance. |
| Method | _clear | Clear a field and record it in the reset mask. |
| Method | _get | Return a field value with optional wrapping and presence handling. |
| Method | _set | Set a field value and update the reset mask. |
| Class Variable | __credentials | Undocumented |
| Class Variable | __default | Undocumented |
| Class Variable | __sensitive | Undocumented |
| Instance Variable | __recorded | Mask tracking fields cleared or set to default. |
def __init__(
    self,
    initial_message: message_1.Message | None = None,
    *,
    max_unavailable: PercentOrCount | node_group_pb2.PercentOrCount | None | unset.UnsetType = unset.Unset,
    max_surge: PercentOrCount | node_group_pb2.PercentOrCount | None | unset.UnsetType = unset.Unset,
    drain_timeout: duration_pb2.Duration | datetime.timedelta | None | unset.UnsetType = unset.Unset,
):

Create a wrapper around a protobuf message instance.

| Raises | |
| AttributeError | If the wrapper is missing required class metadata. |
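A minimal construction sketch, assuming the keyword-only signature above. The import path is not shown on this page, and the PercentOrCount field name used here (percent) is hypothetical:

import datetime

# Placeholder import; this page does not show the package path for these classes.
# from <package>.mk8s.v1 import NodeGroupDeploymentStrategy, PercentOrCount

strategy = NodeGroupDeploymentStrategy(
    max_unavailable=PercentOrCount(percent=20),    # "percent" is a hypothetical field name
    max_surge=PercentOrCount(percent=25),          # likewise hypothetical
    drain_timeout=datetime.timedelta(minutes=15),  # timedelta is accepted per the signature
)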
def drain_timeout(self, value: duration_pb2.Duration | datetime.timedelta | None):

Undocumented

def max_surge(self, value: PercentOrCount | node_group_pb2.PercentOrCount | None):

Undocumented

def max_unavailable(self, value: PercentOrCount | node_group_pb2.PercentOrCount | None):

Undocumented
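These pair with the drain_timeout, max_surge, and max_unavailable Property entries above, so they appear to be property setters used via assignment. A sketch, assuming the strategy instance from the constructor example and assuming that assigning None clears the field, as the | None in each signature suggests:

strategy.drain_timeout = datetime.timedelta(minutes=10)
strategy.max_surge = PercentOrCount(count=2)  # "count" is a hypothetical field name
strategy.drain_timeout = None                 # assumption: None clears the field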
Property drain_timeout

Maximum amount of time the service will spend attempting to gracefully drain a node (evicting its pods) before falling back to pod deletion. By default, a node can be drained for an unlimited time. An important consequence is that if a PodDisruptionBudget does not allow a pod to be evicted, a NodeGroup update that re-creates nodes will hang on that pod's eviction. Note that this is different from kubectl drain --timeout.
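The setter accepts either value type. A small sketch; duration_pb2 is assumed to be the standard google.protobuf duration module, since the page does not show the import:

import datetime
from google.protobuf import duration_pb2

as_timedelta = datetime.timedelta(minutes=10)     # plain Python form
as_duration = duration_pb2.Duration(seconds=600)  # protobuf form, also 10 minutes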
Property max_surge

The maximum number of additional nodes that can be provisioned above the desired number of nodes during the update process.

This value can be specified either as an absolute number (for example, 3) or as a percentage of the desired number of nodes (for example, 5%). When specified as a percentage, the actual number is calculated by rounding up to the nearest whole number, as shown in the sketch below. This value cannot be 0 if max_unavailable is also set to 0. Defaults to 1.

Example: If set to 25%, the node group can scale up by an additional 25% during the update, allowing new nodes to be added before old nodes are removed, which helps minimize workload disruption.

NOTE: It is the user's responsibility to ensure there is enough quota to provision nodes above the desired number; the available quota effectively limits max_surge. If there is not enough quota for even one extra node, the update operation will hang with a quota-exhausted error. The error will be visible in Operation.progress_data.
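A worked example of the round-up rule in plain Python; the SDK is not needed for the arithmetic:

import math

desired_nodes = 10
max_surge_percent = 25
# Percentage values round up to the nearest whole number.
surge_nodes = math.ceil(desired_nodes * max_surge_percent / 100)
print(surge_nodes)  # 3, so the group may briefly run 13 nodes during the update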
Property max_unavailable

The maximum number of nodes that can be simultaneously unavailable during the update process.

This value can be specified either as an absolute number (for example, 3) or as a percentage of the desired number of nodes (for example, 5%). When specified as a percentage, the actual number is calculated by rounding down to the nearest whole number. This value cannot be 0 if max_surge is also set to 0. Defaults to 0.

Example: If set to 20%, up to 20% of the nodes can be taken offline at once during the update, ensuring that at least 80% of the desired nodes remain operational.
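And the matching round-down rule, again in plain Python:

import math

desired_nodes = 13
max_unavailable_percent = 20
# Percentage values round down to the nearest whole number.
unavailable_nodes = math.floor(desired_nodes * max_unavailable_percent / 100)
print(unavailable_nodes)  # 2, so at least 11 of the 13 nodes stay operational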