Class: Google::Apis::DataflowV1b3::Environment
- Inherits: Object
  - Object
  - Google::Apis::DataflowV1b3::Environment
- Includes:
- Core::Hashable, Core::JsonObjectSupport
- Defined in:
- lib/google/apis/dataflow_v1b3/classes.rb,
lib/google/apis/dataflow_v1b3/representations.rb
Overview
Describes the environment in which a Dataflow Job runs.
Instance Attribute Summary
- #cluster_manager_api_service ⇒ String
  The type of cluster manager API to use.
- #dataset ⇒ String
  Optional.
- #debug_options ⇒ Google::Apis::DataflowV1b3::DebugOptions
  Describes any options that have an effect on the debugging of pipelines.
- #experiments ⇒ Array<String>
  The list of experiments to enable.
- #flex_resource_scheduling_goal ⇒ String
  Optional.
- #internal_experiments ⇒ Hash<String,Object>
  Experimental settings.
- #sdk_pipeline_options ⇒ Hash<String,Object>
  The Cloud Dataflow SDK pipeline options specified by the user.
- #service_account_email ⇒ String
  Optional.
- #service_kms_key_name ⇒ String
  Optional.
- #service_options ⇒ Array<String>
  Optional.
- #shuffle_mode ⇒ String
  Output only.
- #streaming_mode ⇒ String
  Optional.
- #temp_storage_prefix ⇒ String
  The prefix of the resources the system should use for temporary storage.
- #use_streaming_engine_resource_based_billing ⇒ Boolean
  (also: #use_streaming_engine_resource_based_billing?)
  Output only.
- #user_agent ⇒ Hash<String,Object>
  A description of the process that generated the request.
- #version ⇒ Hash<String,Object>
  A structure describing which components and their versions of the service are required in order to run the job.
- #worker_pools ⇒ Array<Google::Apis::DataflowV1b3::WorkerPool>
  The worker pools.
- #worker_region ⇒ String
  Optional.
- #worker_zone ⇒ String
  Optional.
Instance Method Summary
- #initialize(**args) ⇒ Environment (constructor)
  A new instance of Environment.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ Environment
Returns a new instance of Environment.
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1492

def initialize(**args)
  update!(**args)
end
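The constructor simply forwards its keyword arguments to #update!. A minimal standalone sketch of this construction pattern, in plain Ruby with two illustrative attributes (not the gem's actual Core::Hashable mixin):

```ruby
# Sketch of the Environment construction pattern: keyword arguments are
# forwarded to update!, which assigns only the keys actually supplied.
# The class name and attribute subset here are illustrative, not the gem's.
class EnvironmentSketch
  attr_accessor :dataset, :worker_region

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @dataset = args[:dataset] if args.key?(:dataset)
    @worker_region = args[:worker_region] if args.key?(:worker_region)
  end
end

env = EnvironmentSketch.new(dataset: 'bigquery.googleapis.com/my_dataset')
```

With the real class the call shape is the same, e.g. `Google::Apis::DataflowV1b3::Environment.new(worker_region: 'us-west1')`.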
Instance Attribute Details
#cluster_manager_api_service ⇒ String
The type of cluster manager API to use. If unknown or unspecified, the service
will attempt to choose a reasonable default. This should be in the form of the
API service name, e.g. "compute.googleapis.com".
Corresponds to the JSON property clusterManagerApiService
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1368

def cluster_manager_api_service
  @cluster_manager_api_service
end
#dataset ⇒ String
Optional. The dataset for the current project where various workflow related
tables are stored. The supported resource type is: Google BigQuery:
bigquery.googleapis.com/dataset
Corresponds to the JSON property dataset
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1375

def dataset
  @dataset
end
#debug_options ⇒ Google::Apis::DataflowV1b3::DebugOptions
Describes any options that have an effect on the debugging of pipelines.
Corresponds to the JSON property debugOptions
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1380

def debug_options
  @debug_options
end
#experiments ⇒ Array<String>
The list of experiments to enable. This field should be used for SDK related
experiments and not for service related experiments. The proper field for
service related experiments is service_options.
Corresponds to the JSON property experiments
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1387

def experiments
  @experiments
end
#flex_resource_scheduling_goal ⇒ String
Optional. Which Flexible Resource Scheduling mode to run in.
Corresponds to the JSON property flexResourceSchedulingGoal
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1392

def flex_resource_scheduling_goal
  @flex_resource_scheduling_goal
end
#internal_experiments ⇒ Hash<String,Object>
Experimental settings.
Corresponds to the JSON property internalExperiments
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1397

def internal_experiments
  @internal_experiments
end
#sdk_pipeline_options ⇒ Hash<String,Object>
The Cloud Dataflow SDK pipeline options specified by the user. These options
are passed through the service and are used to recreate the SDK pipeline
options on the worker in a language agnostic and platform independent way.
Corresponds to the JSON property sdkPipelineOptions
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1404

def sdk_pipeline_options
  @sdk_pipeline_options
end
#service_account_email ⇒ String
Optional. Identity to run virtual machines as. Defaults to the default account.
Corresponds to the JSON property serviceAccountEmail
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1409

def service_account_email
  @service_account_email
end
#service_kms_key_name ⇒ String
Optional. If set, contains the Cloud KMS key identifier used to encrypt data
at rest, AKA a Customer Managed Encryption Key (CMEK). Format:
projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
Corresponds to the JSON property serviceKmsKeyName
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1416

def service_kms_key_name
  @service_kms_key_name
end
#service_options ⇒ Array<String>
Optional. The list of service options to enable. This field should be used for
service related experiments only. These experiments, when graduating to GA,
should be replaced by dedicated fields or become default (i.e. always on).
Corresponds to the JSON property serviceOptions
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1423

def service_options
  @service_options
end
#shuffle_mode ⇒ String
Output only. The shuffle mode used for the job.
Corresponds to the JSON property shuffleMode
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1428

def shuffle_mode
  @shuffle_mode
end
#streaming_mode ⇒ String
Optional. Specifies the Streaming Engine message processing guarantees.
Reduces cost and latency but might result in duplicate messages committed to
storage. Designed to run simple mapping streaming ETL jobs at the lowest cost.
For example, Change Data Capture (CDC) to BigQuery is a canonical use case.
For more information, see Set the pipeline streaming mode.
Corresponds to the JSON property streamingMode
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1438

def streaming_mode
  @streaming_mode
end
#temp_storage_prefix ⇒ String
The prefix of the resources the system should use for temporary storage. The
system will append the suffix "/temp-JOBNAME" to this resource prefix, where
JOBNAME is the value of the job_name field. The resulting bucket and object
prefix is used as the prefix of the resources used to store temporary data
needed during the job execution. NOTE: This will override the value in
taskrunner_settings. The supported resource type is: Google Cloud Storage:
storage.googleapis.com/bucket/object or bucket.storage.googleapis.com/object
Corresponds to the JSON property tempStoragePrefix
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1449

def temp_storage_prefix
  @temp_storage_prefix
end
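Per the description above, the service appends "/temp-JOBNAME" to this prefix to derive the actual temporary location. A hypothetical helper (not part of the gem) showing the resulting path:

```ruby
# Hypothetical helper: derives the temporary storage location the way the
# description above states, by appending "/temp-JOBNAME" to the prefix.
def temp_location(temp_storage_prefix, job_name)
  "#{temp_storage_prefix}/temp-#{job_name}"
end

temp_location('storage.googleapis.com/my-bucket/path', 'wordcount')
# => "storage.googleapis.com/my-bucket/path/temp-wordcount"
```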
#use_streaming_engine_resource_based_billing ⇒ Boolean Also known as: use_streaming_engine_resource_based_billing?
Output only. Whether the job uses the Streaming Engine resource-based billing
model.
Corresponds to the JSON property useStreamingEngineResourceBasedBilling
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1455

def use_streaming_engine_resource_based_billing
  @use_streaming_engine_resource_based_billing
end
#user_agent ⇒ Hash<String,Object>
A description of the process that generated the request.
Corresponds to the JSON property userAgent
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1461

def user_agent
  @user_agent
end
#version ⇒ Hash<String,Object>
A structure describing which components and their versions of the service are
required in order to run the job.
Corresponds to the JSON property version
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1467

def version
  @version
end
#worker_pools ⇒ Array<Google::Apis::DataflowV1b3::WorkerPool>
The worker pools. At least one "harness" worker pool must be specified in
order for the job to have workers.
Corresponds to the JSON property workerPools
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1473

def worker_pools
  @worker_pools
end
#worker_region ⇒ String
Optional. The Compute Engine region (https://cloud.google.com/compute/docs/
regions-zones/regions-zones) in which worker processing should occur, e.g.
"us-west1". Mutually exclusive with worker_zone. If neither worker_region nor
worker_zone is specified, defaults to the control plane's region.
Corresponds to the JSON property workerRegion
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1481

def worker_region
  @worker_region
end
#worker_zone ⇒ String
Optional. The Compute Engine zone (https://cloud.google.com/compute/docs/
regions-zones/regions-zones) in which worker processing should occur, e.g.
"us-west1-a". Mutually exclusive with worker_region. If neither worker_region
nor worker_zone is specified, a zone in the control plane's region is chosen
based on available capacity.
Corresponds to the JSON property workerZone
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1490

def worker_zone
  @worker_zone
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1497

def update!(**args)
  @cluster_manager_api_service = args[:cluster_manager_api_service] if args.key?(:cluster_manager_api_service)
  @dataset = args[:dataset] if args.key?(:dataset)
  @debug_options = args[:debug_options] if args.key?(:debug_options)
  @experiments = args[:experiments] if args.key?(:experiments)
  @flex_resource_scheduling_goal = args[:flex_resource_scheduling_goal] if args.key?(:flex_resource_scheduling_goal)
  @internal_experiments = args[:internal_experiments] if args.key?(:internal_experiments)
  @sdk_pipeline_options = args[:sdk_pipeline_options] if args.key?(:sdk_pipeline_options)
  @service_account_email = args[:service_account_email] if args.key?(:service_account_email)
  @service_kms_key_name = args[:service_kms_key_name] if args.key?(:service_kms_key_name)
  @service_options = args[:service_options] if args.key?(:service_options)
  @shuffle_mode = args[:shuffle_mode] if args.key?(:shuffle_mode)
  @streaming_mode = args[:streaming_mode] if args.key?(:streaming_mode)
  @temp_storage_prefix = args[:temp_storage_prefix] if args.key?(:temp_storage_prefix)
  @use_streaming_engine_resource_based_billing = args[:use_streaming_engine_resource_based_billing] if args.key?(:use_streaming_engine_resource_based_billing)
  @user_agent = args[:user_agent] if args.key?(:user_agent)
  @version = args[:version] if args.key?(:version)
  @worker_pools = args[:worker_pools] if args.key?(:worker_pools)
  @worker_region = args[:worker_region] if args.key?(:worker_region)
  @worker_zone = args[:worker_zone] if args.key?(:worker_zone)
end
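Because every assignment in #update! is guarded by args.key?, only the attributes whose keys appear in the call are overwritten; everything else keeps its current value. A standalone sketch of that guard semantics (EnvSketch and the attribute subset are illustrative stand-ins, not gem API; the string values are example enum names):

```ruby
# Demonstrates the args.key? guard used by update!: a second call that
# passes only streaming_mode leaves shuffle_mode untouched.
class EnvSketch
  attr_accessor :shuffle_mode, :streaming_mode

  def update!(**args)
    @shuffle_mode = args[:shuffle_mode] if args.key?(:shuffle_mode)
    @streaming_mode = args[:streaming_mode] if args.key?(:streaming_mode)
    self
  end
end

env = EnvSketch.new
env.update!(shuffle_mode: 'SERVICE_BASED', streaming_mode: 'STREAMING_MODE_EXACTLY_ONCE')
env.update!(streaming_mode: 'STREAMING_MODE_AT_LEAST_ONCE')
# shuffle_mode is still 'SERVICE_BASED' after the second call
```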