Class: Google::Apis::DataprocV1::SparkStandaloneAutoscalingConfig
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  lib/google/apis/dataproc_v1/classes.rb,
  lib/google/apis/dataproc_v1/representations.rb
Overview
Basic autoscaling configurations for Spark Standalone.
Instance Attribute Summary collapse
- #graceful_decommission_timeout ⇒ String
  Required.
- #remove_only_idle_workers ⇒ Boolean (also: #remove_only_idle_workers?)
  Optional.
- #scale_down_factor ⇒ Float
  Required.
- #scale_down_min_worker_fraction ⇒ Float
  Optional.
- #scale_up_factor ⇒ Float
  Required.
- #scale_up_min_worker_fraction ⇒ Float
  Optional.
Instance Method Summary collapse
- #initialize(**args) ⇒ SparkStandaloneAutoscalingConfig (constructor)
  A new instance of SparkStandaloneAutoscalingConfig.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ SparkStandaloneAutoscalingConfig
Returns a new instance of SparkStandaloneAutoscalingConfig.
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5694

def initialize(**args)
  update!(**args)
end
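The constructor simply forwards its keyword arguments to update!, which assigns only the keys that are present. A minimal self-contained sketch of that pattern, using a hypothetical stand-in class rather than the gem itself:

```ruby
# Stand-in sketch of the initialize/update! pattern used by this class
# (hypothetical ConfigSketch, not the actual gem class).
class ConfigSketch
  attr_accessor :scale_up_factor, :scale_down_factor

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    # Assign only the keys actually passed; absent keys leave the
    # current value untouched.
    @scale_up_factor = args[:scale_up_factor] if args.key?(:scale_up_factor)
    @scale_down_factor = args[:scale_down_factor] if args.key?(:scale_down_factor)
  end
end

cfg = ConfigSketch.new(scale_up_factor: 0.5)
cfg.scale_up_factor          # => 0.5
cfg.update!(scale_down_factor: 1.0)
cfg.scale_up_factor          # => 0.5 (unchanged by the partial update)
```

Because update! checks args.key? rather than truthiness, it supports partial updates and can deliberately set an attribute to nil or false.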
Instance Attribute Details
#graceful_decommission_timeout ⇒ String
Required. Timeout for Spark graceful decommissioning of Spark workers.
Specifies the duration to wait for Spark workers to complete Spark
decommissioning tasks before forcefully removing workers. Only applicable to
downscaling operations. Bounds: 0s, 1d.
Corresponds to the JSON property gracefulDecommissionTimeout
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5650

def graceful_decommission_timeout
  @graceful_decommission_timeout
end
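The timeout is carried as a String rather than a numeric type; assuming it follows the standard protobuf Duration JSON form used across Google APIs (a decimal number of seconds with an "s" suffix, e.g. "600s"), a hypothetical helper for reading such a value looks like:

```ruby
# Hypothetical parser for protobuf-style duration strings such as "600s"
# or "3.5s" (the format is an assumption based on how Google APIs
# serialize google.protobuf.Duration fields to JSON).
def duration_seconds(str)
  raise ArgumentError, "expected a trailing 's'" unless str.end_with?("s")
  Float(str.delete_suffix("s"))
end

duration_seconds("600s")  # => 600.0
duration_seconds("3.5s")  # => 3.5
```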
#remove_only_idle_workers ⇒ Boolean Also known as: remove_only_idle_workers?
Optional. Remove only idle workers when scaling down the cluster.
Corresponds to the JSON property removeOnlyIdleWorkers
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5655

def remove_only_idle_workers
  @remove_only_idle_workers
end
#scale_down_factor ⇒ Float
Required. Fraction of required executors to remove from Spark Standalone
clusters. A scale-down factor of 1.0 will result in scaling down so that there
are no more executors for the Spark Job (more aggressive scaling). A
scale-down factor closer to 0 will result in a smaller magnitude of scaling
down (less aggressive scaling). Bounds: 0.0, 1.0.
Corresponds to the JSON property scaleDownFactor
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5665

def scale_down_factor
  @scale_down_factor
end
#scale_down_min_worker_fraction ⇒ Float
Optional. Minimum scale-down threshold as a fraction of total cluster size
before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1
means the autoscaler must recommend at least a 2-worker scale-down for the
cluster to scale. A threshold of 0 means the autoscaler will scale down on any
recommended change. Bounds: 0.0, 1.0. Default: 0.0.
Corresponds to the JSON property scaleDownMinWorkerFraction
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5674

def scale_down_min_worker_fraction
  @scale_down_min_worker_fraction
end
#scale_up_factor ⇒ Float
Required. Fraction of required workers to add to Spark Standalone clusters. A
scale-up factor of 1.0 will result in scaling up so that there are no more
required workers for the Spark Job (more aggressive scaling). A scale-up
factor closer to 0 will result in a smaller magnitude of scaling up (less
aggressive scaling). Bounds: 0.0, 1.0.
Corresponds to the JSON property scaleUpFactor
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5683

def scale_up_factor
  @scale_up_factor
end
#scale_up_min_worker_fraction ⇒ Float
Optional. Minimum scale-up threshold as a fraction of total cluster size
before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1
means the autoscaler must recommend at least a 2-worker scale-up for the
cluster to scale. A threshold of 0 means the autoscaler will scale up on any
recommended change. Bounds: 0.0, 1.0. Default: 0.0.
Corresponds to the JSON property scaleUpMinWorkerFraction
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5692

def scale_up_min_worker_fraction
  @scale_up_min_worker_fraction
end
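The factor and minimum-worker-fraction attributes above combine as described in their docstrings: the factor scales the raw worker delta, and the fraction suppresses recommendations smaller than that share of the current cluster. A self-contained sketch of that arithmetic (a hypothetical helper with an assumed rounding direction, not the server-side autoscaler's actual code):

```ruby
# Hypothetical sketch of the scaling arithmetic described above; the real
# Dataproc autoscaler runs server-side and is not part of this gem.
# factor scales the raw worker delta (rounding down is an assumption);
# min_fraction drops recommendations below that fraction of the cluster.
def scaled_delta(current_workers, raw_delta, factor, min_fraction)
  delta = (raw_delta * factor).floor
  threshold = (current_workers * min_fraction).ceil
  delta >= threshold ? delta : 0
end

# 20-worker cluster needing 10 more workers, scale_up_factor 1.0:
scaled_delta(20, 10, 1.0, 0.0)  # => 10 (all missing workers added)
# scale_up_factor 0.5 adds half as aggressively:
scaled_delta(20, 10, 0.5, 0.0)  # => 5
# min_fraction 0.1 on 20 workers requires at least a 2-worker change:
scaled_delta(20, 1, 1.0, 0.1)   # => 0 (a 1-worker change is ignored)
```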
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/dataproc_v1/classes.rb', line 5699

def update!(**args)
  @graceful_decommission_timeout = args[:graceful_decommission_timeout] if args.key?(:graceful_decommission_timeout)
  @remove_only_idle_workers = args[:remove_only_idle_workers] if args.key?(:remove_only_idle_workers)
  @scale_down_factor = args[:scale_down_factor] if args.key?(:scale_down_factor)
  @scale_down_min_worker_fraction = args[:scale_down_min_worker_fraction] if args.key?(:scale_down_min_worker_fraction)
  @scale_up_factor = args[:scale_up_factor] if args.key?(:scale_up_factor)
  @scale_up_min_worker_fraction = args[:scale_up_min_worker_fraction] if args.key?(:scale_up_min_worker_fraction)
end