class documentation

Undocumented

Method __dir__ Undocumented
Method __init__ Undocumented
Method drain_timeout.setter Undocumented
Method max_surge.setter Undocumented
Method max_unavailable.setter Undocumented
Constant __PB2_DESCRIPTOR__ Undocumented
Constant __PY_TO_PB2__ Undocumented
Class Variable __mask_functions__ Undocumented
Property drain_timeout Maximum amount of time that the service will spend attempting to gracefully drain a node (evicting its pods) before falling back to pod deletion. By default, a node can be drained for an unlimited time. An important consequence is that if a PodDisruptionBudget doesn't allow a pod to be evicted, a NodeGroup update with node re-creation will hang on that pod eviction...
Property max_surge The maximum number of additional nodes that can be provisioned above the desired number of nodes during the update process. This value can be specified either as an absolute number (for example 3) or as a percentage of the desired number of nodes (for example 5%)...
Property max_unavailable The maximum number of nodes that can be simultaneously unavailable during the update process. This value can be specified either as an absolute number (for example 3) or as a percentage of the desired number of nodes (for example 5%)...

Inherited from Message:

Class Method get_descriptor Undocumented
Class Method is_credentials Undocumented
Class Method is_sensitive Undocumented
Method __repr__ Undocumented
Method check_presence Undocumented
Method get_full_update_reset_mask Undocumented
Method get_mask Undocumented
Method is_default Undocumented
Method set_mask Undocumented
Method which_field_in_oneof Undocumented
Class Variable __PB2_CLASS__ Undocumented
Instance Variable __pb2_message__ Undocumented
Method _clear_field Undocumented
Method _get_field Undocumented
Method _set_field Undocumented
Class Variable __credentials_fields Undocumented
Class Variable __default Undocumented
Class Variable __sensitive_fields Undocumented
Instance Variable __recorded_reset_mask Undocumented
def __dir__(self) -> abc.Iterable[builtins.str]: (source)

Undocumented

def __init__(self, initial_message: message_1.Message | None = None, *, max_unavailable: PercentOrCount | node_group_pb2.PercentOrCount | None | unset.UnsetType = unset.Unset, max_surge: PercentOrCount | node_group_pb2.PercentOrCount | None | unset.UnsetType = unset.Unset, drain_timeout: duration_pb2.Duration | datetime.timedelta | None | unset.UnsetType = unset.Unset): (source)
@drain_timeout.setter
def drain_timeout(self, value: duration_pb2.Duration | datetime.timedelta | None): (source)

Undocumented

@max_surge.setter
def max_surge(self, value: PercentOrCount | node_group_pb2.PercentOrCount | None): (source)

Undocumented

@max_unavailable.setter
def max_unavailable(self, value: PercentOrCount | node_group_pb2.PercentOrCount | None): (source)

Undocumented

__PB2_DESCRIPTOR__ = (source)

Undocumented

Value
descriptor.DescriptorWrap[descriptor_1.Descriptor]('.nebius.mk8s.v1.NodeGroupDeploymentStrategy',
                                                   node_group_pb2.DESCRIPTOR,
                                                   descriptor_1.Descriptor)
__PY_TO_PB2__: builtins.dict[builtins.str, builtins.str] = (source)

Undocumented

Value
{'max_unavailable': 'max_unavailable',
 'max_surge': 'max_surge',
 'drain_timeout': 'drain_timeout'}
@builtins.property
drain_timeout: datetime.timedelta = (source)

Maximum amount of time that the service will spend attempting to gracefully drain a node (evicting its pods) before falling back to pod deletion. By default, a node can be drained for an unlimited time. An important consequence is that if a PodDisruptionBudget doesn't allow a pod to be evicted, a NodeGroup update with node re-creation will hang on that pod eviction. Note that this is different from `kubectl drain --timeout`.

@builtins.property
max_surge: PercentOrCount = (source)

The maximum number of additional nodes that can be provisioned above the desired number of nodes during the update process. This value can be specified either as an absolute number (for example 3) or as a percentage of the desired number of nodes (for example 5%). When specified as a percentage, the actual number is calculated by rounding up to the nearest whole number. This value cannot be 0 if `max_unavailable` is also set to 0. Defaults to 1. Example: If set to 25%, the node group can scale up by an additional 25% during the update, allowing new nodes to be added before old nodes are removed, which helps minimize workload disruption. NOTE: it is the user's responsibility to ensure that there is enough quota to provision nodes above the desired number. The available quota effectively limits `max_surge`. If there is not enough quota for even one extra node, the update operation will hang with a quota-exhausted error. This error will be visible in Operation.progress_data.

@builtins.property
max_unavailable: PercentOrCount = (source)

The maximum number of nodes that can be simultaneously unavailable during the update process. This value can be specified either as an absolute number (for example 3) or as a percentage of the desired number of nodes (for example 5%). When specified as a percentage, the actual number is calculated by rounding down to the nearest whole number. This value cannot be 0 if `max_surge` is also set to 0. Defaults to 0. Example: If set to 20%, up to 20% of the nodes can be taken offline at once during the update, ensuring that at least 80% of the desired nodes remain operational.