Class: Google::Apis::DataprocV1::SparkStandaloneAutoscalingConfig
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  - lib/google/apis/dataproc_v1/classes.rb
  - lib/google/apis/dataproc_v1/representations.rb
Overview
Basic autoscaling configurations for Spark Standalone.
Instance Attribute Summary
- #graceful_decommission_timeout ⇒ String
  Required.
- #remove_only_idle_workers ⇒ Boolean (also: #remove_only_idle_workers?)
  Optional.
- #scale_down_factor ⇒ Float
  Required.
- #scale_down_min_worker_fraction ⇒ Float
  Optional.
- #scale_up_factor ⇒ Float
  Required.
- #scale_up_min_worker_fraction ⇒ Float
  Optional.
Instance Method Summary
- #initialize(**args) ⇒ SparkStandaloneAutoscalingConfig (constructor)
  A new instance of SparkStandaloneAutoscalingConfig.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ SparkStandaloneAutoscalingConfig
Returns a new instance of SparkStandaloneAutoscalingConfig.
# File 'lib/google/apis/dataproc_v1/classes.rb', line 9464

def initialize(**args)
  update!(**args)
end
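The constructor simply forwards its keyword arguments to #update!, which assigns only the properties that were explicitly passed. A minimal, self-contained sketch of that pattern (using a hypothetical stand-in class, not the real gem):

```ruby
# Hypothetical stand-in mirroring the **args-to-update! pattern shown above;
# the real class lives in the google-apis-dataproc_v1 gem.
class AutoscalingConfigSketch
  attr_reader :scale_up_factor, :graceful_decommission_timeout

  def initialize(**args)
    update!(**args)
  end

  # Assign only the properties that were explicitly passed.
  def update!(**args)
    @scale_up_factor = args[:scale_up_factor] if args.key?(:scale_up_factor)
    if args.key?(:graceful_decommission_timeout)
      @graceful_decommission_timeout = args[:graceful_decommission_timeout]
    end
  end
end

config = AutoscalingConfigSketch.new(scale_up_factor: 0.5,
                                     graceful_decommission_timeout: "600s")
```

With the real class, the same keyword style applies for all six attributes; because update! checks each key before assigning, a later update! call leaves unmentioned properties untouched.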
Instance Attribute Details
#graceful_decommission_timeout ⇒ String
Required. Timeout for graceful decommissioning of Spark workers. Specifies the
duration to wait for a Spark worker to complete Spark decommissioning tasks
before it is forcefully removed. Only applicable to downscaling operations.
Bounds: 0s, 1d.
Corresponds to the JSON property gracefulDecommissionTimeout

# File 'lib/google/apis/dataproc_v1/classes.rb', line 9420

def graceful_decommission_timeout
  @graceful_decommission_timeout
end
#remove_only_idle_workers ⇒ Boolean Also known as: remove_only_idle_workers?
Optional. Remove only idle workers when scaling down the cluster.
Corresponds to the JSON property removeOnlyIdleWorkers

# File 'lib/google/apis/dataproc_v1/classes.rb', line 9425

def remove_only_idle_workers
  @remove_only_idle_workers
end
#scale_down_factor ⇒ Float
Required. Fraction of required executors to remove from Spark Serverless
clusters. A scale-down factor of 1.0 will result in scaling down so that there
are no more executors for the Spark Job (more aggressive scaling). A
scale-down factor closer to 0 will result in a smaller magnitude of scaling
down (less aggressive scaling). Bounds: 0.0, 1.0.
Corresponds to the JSON property scaleDownFactor

# File 'lib/google/apis/dataproc_v1/classes.rb', line 9435

def scale_down_factor
  @scale_down_factor
end
#scale_down_min_worker_fraction ⇒ Float
Optional. Minimum scale-down threshold as a fraction of total cluster size
before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1
means the autoscaler must recommend at least a 2-worker scale-down for the
cluster to scale. A threshold of 0 means the autoscaler will scale down on any
recommended change. Bounds: 0.0, 1.0. Default: 0.0.
Corresponds to the JSON property scaleDownMinWorkerFraction

# File 'lib/google/apis/dataproc_v1/classes.rb', line 9444

def scale_down_min_worker_fraction
  @scale_down_min_worker_fraction
end
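The worked numbers above can be sketched as plain arithmetic. This is an illustrative helper of ours, not part of the gem, and it assumes the fractional threshold rounds up to a whole worker count:

```ruby
# Does a recommended removal meet the scaleDownMinWorkerFraction threshold?
# Hypothetical helper; rounding up to whole workers is our assumption.
def scale_down_triggers?(cluster_size, recommended_removal, min_worker_fraction)
  recommended_removal >= (cluster_size * min_worker_fraction).ceil
end

scale_down_triggers?(20, 2, 0.1)  # 20 * 0.1 = 2 workers, so true
scale_down_triggers?(20, 1, 0.1)  # below the 2-worker threshold, so false
scale_down_triggers?(20, 1, 0.0)  # threshold 0: any recommendation triggers
```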
#scale_up_factor ⇒ Float
Required. Fraction of required workers to add to Spark Standalone clusters. A
scale-up factor of 1.0 will result in scaling up so that there are no more
required workers for the Spark Job (more aggressive scaling). A scale-up
factor closer to 0 will result in a smaller magnitude of scaling up (less
aggressive scaling). Bounds: 0.0, 1.0.
Corresponds to the JSON property scaleUpFactor

# File 'lib/google/apis/dataproc_v1/classes.rb', line 9453

def scale_up_factor
  @scale_up_factor
end
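As a rough illustration of the factor's effect (our own arithmetic sketch, not the service's algorithm; ceiling rounding is an assumption): given a gap between the workers a job requires and the workers it has, the factor scales how much of that gap one upscale step closes.

```ruby
# How many workers one upscale step adds, given the gap between required
# and current workers. Hypothetical helper; ceiling rounding is our assumption.
def upscale_step(required_workers, current_workers, scale_up_factor)
  gap = [required_workers - current_workers, 0].max
  (gap * scale_up_factor).ceil
end

upscale_step(100, 60, 1.0)   # factor 1.0 closes the whole 40-worker gap
upscale_step(100, 60, 0.5)   # factor 0.5 adds 20 workers this step
upscale_step(100, 100, 0.5)  # no gap, nothing to add
```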
#scale_up_min_worker_fraction ⇒ Float
Optional. Minimum scale-up threshold as a fraction of total cluster size
before scaling occurs. For example, in a 20-worker cluster, a threshold of 0.1
means the autoscaler must recommend at least a 2-worker scale-up for the
cluster to scale. A threshold of 0 means the autoscaler will scale up on any
recommended change. Bounds: 0.0, 1.0. Default: 0.0.
Corresponds to the JSON property scaleUpMinWorkerFraction

# File 'lib/google/apis/dataproc_v1/classes.rb', line 9462

def scale_up_min_worker_fraction
  @scale_up_min_worker_fraction
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.

# File 'lib/google/apis/dataproc_v1/classes.rb', line 9469

def update!(**args)
  @graceful_decommission_timeout = args[:graceful_decommission_timeout] if args.key?(:graceful_decommission_timeout)
  @remove_only_idle_workers = args[:remove_only_idle_workers] if args.key?(:remove_only_idle_workers)
  @scale_down_factor = args[:scale_down_factor] if args.key?(:scale_down_factor)
  @scale_down_min_worker_fraction = args[:scale_down_min_worker_fraction] if args.key?(:scale_down_min_worker_fraction)
  @scale_up_factor = args[:scale_up_factor] if args.key?(:scale_up_factor)
  @scale_up_min_worker_fraction = args[:scale_up_min_worker_fraction] if args.key?(:scale_up_min_worker_fraction)
end