Class: Google::Apis::DataflowV1b3::FlexTemplateRuntimeEnvironment
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  lib/google/apis/dataflow_v1b3/classes.rb,
  lib/google/apis/dataflow_v1b3/representations.rb
Overview
The environment values to be set at runtime for a flex template.
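As a quick orientation, the sketch below (not taken from the library's own docs) shows how this class is typically filled in when launching a flex template with the google-apis-dataflow_v1b3 client. The machine type, worker count, experiment flag, and bucket paths are placeholder values.

require 'google/apis/dataflow_v1b3'

# Sketch: build a runtime environment for a flex template launch.
# Attribute names mirror the properties documented below; the concrete
# values here are placeholders, not recommendations.
env = Google::Apis::DataflowV1b3::FlexTemplateRuntimeEnvironment.new(
  machine_type: 'n1-standard-2',
  max_workers: 10,
  staging_location: 'gs://example-bucket/staging', # placeholder bucket
  temp_location: 'gs://example-bucket/temp',       # placeholder bucket
  additional_experiments: ['enable_prime']         # placeholder experiment flag
)

The keyword arguments are forwarded to #update!, so any subset of the attributes listed below can be set at construction time.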
Instance Attribute Summary
-
#additional_experiments ⇒ Array<String>
Additional experiment flags for the job.
-
#additional_user_labels ⇒ Hash<String,String>
Additional user labels to be specified for the job.
-
#autoscaling_algorithm ⇒ String
The algorithm to use for autoscaling.
-
#disk_size_gb ⇒ Fixnum
Worker disk size, in gigabytes.
-
#dump_heap_on_oom ⇒ Boolean
(also: #dump_heap_on_oom?)
If true, when processing time is spent almost entirely on garbage collection (GC), saves a heap dump before ending the thread or process.
-
#enable_launcher_vm_serial_port_logging ⇒ Boolean
(also: #enable_launcher_vm_serial_port_logging?)
If true, serial port logging will be enabled for the launcher VM.
-
#enable_streaming_engine ⇒ Boolean
(also: #enable_streaming_engine?)
Whether to enable Streaming Engine for the job.
-
#flexrs_goal ⇒ String
Set FlexRS goal for the job.
-
#ip_configuration ⇒ String
Configuration for VM IPs.
-
#kms_key_name ⇒ String
Name for the Cloud KMS key for the job.
-
#launcher_machine_type ⇒ String
The machine type to use for launching the job.
-
#machine_type ⇒ String
The machine type to use for the job.
-
#max_workers ⇒ Fixnum
The maximum number of Google Compute Engine instances to be made available to your pipeline during execution, from 1 to 1000.
-
#network ⇒ String
Network to which VMs will be assigned.
-
#num_workers ⇒ Fixnum
The initial number of Google Compute Engine instances for the job.
-
#save_heap_dumps_to_gcs_path ⇒ String
Cloud Storage bucket (directory) to upload heap dumps to.
-
#sdk_container_image ⇒ String
Docker registry location of the container image to use for the worker harness.
-
#service_account_email ⇒ String
The email address of the service account to run the job as.
-
#staging_location ⇒ String
The Cloud Storage path for staging local files.
-
#streaming_mode ⇒ String
Optional. Specifies the Streaming Engine message processing guarantees.
-
#subnetwork ⇒ String
Subnetwork to which VMs will be assigned, if desired.
-
#temp_location ⇒ String
The Cloud Storage path to use for temporary files.
-
#worker_region ⇒ String
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1".
-
#worker_zone ⇒ String
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a".
-
#zone ⇒ String
The Compute Engine availability zone for launching worker instances to run your pipeline.
Instance Method Summary
-
#initialize(**args) ⇒ FlexTemplateRuntimeEnvironment
constructor
A new instance of FlexTemplateRuntimeEnvironment.
-
#update!(**args) ⇒ Object
Update properties of this object.
Constructor Details
#initialize(**args) ⇒ FlexTemplateRuntimeEnvironment
Returns a new instance of FlexTemplateRuntimeEnvironment.
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1844

def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#additional_experiments ⇒ Array<String>
Additional experiment flags for the job.
Corresponds to the JSON property additionalExperiments
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1680

def additional_experiments
  @additional_experiments
end
#additional_user_labels ⇒ Hash<String,String>
Additional user labels to be specified for the job. Keys and values must
follow the restrictions specified in the labeling restrictions page. An object
containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1kg", "count": "3" }.
Corresponds to the JSON property additionalUserLabels
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1689

def additional_user_labels
  @additional_user_labels
end
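For illustration, the JSON example above corresponds to a plain Ruby hash of string keys and string values on the env object from the Overview sketch; the label names are sample data, not required keys.

# Sample labels matching the JSON example above.
env.additional_user_labels = {
  'name'  => 'wrench',
  'mass'  => '1kg',
  'count' => '3'
}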
#autoscaling_algorithm ⇒ String
The algorithm to use for autoscaling.
Corresponds to the JSON property autoscalingAlgorithm
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1694

def autoscaling_algorithm
  @autoscaling_algorithm
end
#disk_size_gb ⇒ Fixnum
Worker disk size, in gigabytes.
Corresponds to the JSON property diskSizeGb
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1699

def disk_size_gb
  @disk_size_gb
end
#dump_heap_on_oom ⇒ Boolean Also known as: dump_heap_on_oom?
If true, when processing time is spent almost entirely on garbage collection (
GC), saves a heap dump before ending the thread or process. If false, ends the
thread or process without saving a heap dump. Does not save a heap dump when
the Java Virtual Machine (JVM) has an out of memory error during processing.
The location of the heap file is either echoed back to the user, or the user
is given the opportunity to download the heap file.
Corresponds to the JSON property dumpHeapOnOom
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1709

def dump_heap_on_oom
  @dump_heap_on_oom
end
#enable_launcher_vm_serial_port_logging ⇒ Boolean Also known as: enable_launcher_vm_serial_port_logging?
If true, serial port logging will be enabled for the launcher VM.
Corresponds to the JSON property enableLauncherVmSerialPortLogging
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1715

def enable_launcher_vm_serial_port_logging
  @enable_launcher_vm_serial_port_logging
end
#enable_streaming_engine ⇒ Boolean Also known as: enable_streaming_engine?
Whether to enable Streaming Engine for the job.
Corresponds to the JSON property enableStreamingEngine
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1721

def enable_streaming_engine
  @enable_streaming_engine
end
#flexrs_goal ⇒ String
Set FlexRS goal for the job. https://cloud.google.com/dataflow/docs/guides/flexrs
Corresponds to the JSON property flexrsGoal
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1728

def flexrs_goal
  @flexrs_goal
end
#ip_configuration ⇒ String
Configuration for VM IPs.
Corresponds to the JSON property ipConfiguration
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1733

def ip_configuration
  @ip_configuration
end
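The accepted values are not listed on this page; assuming the WORKER_IP_PUBLIC / WORKER_IP_PRIVATE values documented elsewhere for the Dataflow API, usage would look like:

# Assumption: WORKER_IP_PRIVATE keeps worker VMs off public IP addresses.
env.ip_configuration = 'WORKER_IP_PRIVATE'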
#kms_key_name ⇒ String
Name for the Cloud KMS key for the job. Key format is:
projects/<project>/locations/<location>/keyRings/<key_ring>/cryptoKeys/<key>
Corresponds to the JSON property kmsKeyName
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1739

def kms_key_name
  @kms_key_name
end
#launcher_machine_type ⇒ String
The machine type to use for launching the job. The default is n1-standard-1.
Corresponds to the JSON property launcherMachineType
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1744

def launcher_machine_type
  @launcher_machine_type
end
#machine_type ⇒ String
The machine type to use for the job. Defaults to the value from the template
if not specified.
Corresponds to the JSON property machineType
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1750

def machine_type
  @machine_type
end
#max_workers ⇒ Fixnum
The maximum number of Google Compute Engine instances to be made available to
your pipeline during execution, from 1 to 1000.
Corresponds to the JSON property maxWorkers
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1756

def max_workers
  @max_workers
end
#network ⇒ String
Network to which VMs will be assigned. If empty or unspecified, the service
will use the network "default".
Corresponds to the JSON property network
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1762

def network
  @network
end
#num_workers ⇒ Fixnum
The initial number of Google Compute Engine instances for the job.
Corresponds to the JSON property numWorkers
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1767

def num_workers
  @num_workers
end
#save_heap_dumps_to_gcs_path ⇒ String
Cloud Storage bucket (directory) to upload heap dumps to. Enabling this field
implies that dump_heap_on_oom
is set to true.
Corresponds to the JSON property saveHeapDumpsToGcsPath
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1773

def save_heap_dumps_to_gcs_path
  @save_heap_dumps_to_gcs_path
end
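A small sketch combining this field with #dump_heap_on_oom as described above; the bucket path is a placeholder.

# Enable OOM heap dumps and choose where they are uploaded.
# Per the description above, setting this path implies dump_heap_on_oom.
env.dump_heap_on_oom = true
env.save_heap_dumps_to_gcs_path = 'gs://example-bucket/heap-dumps'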
#sdk_container_image ⇒ String
Docker registry location of the container image to use for the worker harness.
Default is the container for the version of the SDK. Note this field is only
valid for portable pipelines.
Corresponds to the JSON property sdkContainerImage
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1780

def sdk_container_image
  @sdk_container_image
end
#service_account_email ⇒ String
The email address of the service account to run the job as.
Corresponds to the JSON property serviceAccountEmail
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1785

def service_account_email
  @service_account_email
end
#staging_location ⇒ String
The Cloud Storage path for staging local files. Must be a valid Cloud Storage
URL, beginning with gs://.
Corresponds to the JSON property stagingLocation
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1791

def staging_location
  @staging_location
end
#streaming_mode ⇒ String
Optional. Specifies the Streaming Engine message processing guarantees.
Reduces cost and latency but might result in duplicate messages committed to
storage. Designed to run simple mapping streaming ETL jobs at the lowest cost.
For example, Change Data Capture (CDC) to BigQuery is a canonical use case.
For more information, see Set the pipeline streaming mode.
Corresponds to the JSON property streamingMode
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1801

def streaming_mode
  @streaming_mode
end
#subnetwork ⇒ String
Subnetwork to which VMs will be assigned, if desired. You can specify a
subnetwork using either a complete URL or an abbreviated path. Expected to be
of the form "https://www.googleapis.com/compute/v1/projects/HOST_PROJECT_ID/
regions/REGION/subnetworks/SUBNETWORK" or "regions/REGION/subnetworks/
SUBNETWORK". If the subnetwork is located in a Shared VPC network, you must
use the complete URL.
Corresponds to the JSON property subnetwork
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1811

def subnetwork
  @subnetwork
end
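For reference, the two accepted forms described above, using placeholder project, region, and subnetwork names:

# Abbreviated path (subnetwork in the same project):
env.subnetwork = 'regions/us-central1/subnetworks/example-subnet'

# Complete URL (required when the subnetwork is in a Shared VPC host project):
env.subnetwork = 'https://www.googleapis.com/compute/v1/projects/example-host-project/regions/us-central1/subnetworks/example-subnet'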
#temp_location ⇒ String
The Cloud Storage path to use for temporary files. Must be a valid Cloud
Storage URL, beginning with gs://.
Corresponds to the JSON property tempLocation
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1817

def temp_location
  @temp_location
end
#worker_region ⇒ String
The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
Corresponds to the JSON property workerRegion
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1825

def worker_region
  @worker_region
end
#worker_zone ⇒ String
The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity. If both worker_zone and zone are set, worker_zone takes precedence.
Corresponds to the JSON property workerZone
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1835

def worker_zone
  @worker_zone
end
#zone ⇒ String
The Compute Engine availability zone for launching worker instances to run your
pipeline. In the future, worker_zone will take precedence.
Corresponds to the JSON property zone
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1842

def zone
  @zone
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/dataflow_v1b3/classes.rb', line 1849

def update!(**args)
  @additional_experiments = args[:additional_experiments] if args.key?(:additional_experiments)
  @additional_user_labels = args[:additional_user_labels] if args.key?(:additional_user_labels)
  @autoscaling_algorithm = args[:autoscaling_algorithm] if args.key?(:autoscaling_algorithm)
  @disk_size_gb = args[:disk_size_gb] if args.key?(:disk_size_gb)
  @dump_heap_on_oom = args[:dump_heap_on_oom] if args.key?(:dump_heap_on_oom)
  @enable_launcher_vm_serial_port_logging = args[:enable_launcher_vm_serial_port_logging] if args.key?(:enable_launcher_vm_serial_port_logging)
  @enable_streaming_engine = args[:enable_streaming_engine] if args.key?(:enable_streaming_engine)
  @flexrs_goal = args[:flexrs_goal] if args.key?(:flexrs_goal)
  @ip_configuration = args[:ip_configuration] if args.key?(:ip_configuration)
  @kms_key_name = args[:kms_key_name] if args.key?(:kms_key_name)
  @launcher_machine_type = args[:launcher_machine_type] if args.key?(:launcher_machine_type)
  @machine_type = args[:machine_type] if args.key?(:machine_type)
  @max_workers = args[:max_workers] if args.key?(:max_workers)
  @network = args[:network] if args.key?(:network)
  @num_workers = args[:num_workers] if args.key?(:num_workers)
  @save_heap_dumps_to_gcs_path = args[:save_heap_dumps_to_gcs_path] if args.key?(:save_heap_dumps_to_gcs_path)
  @sdk_container_image = args[:sdk_container_image] if args.key?(:sdk_container_image)
  @service_account_email = args[:service_account_email] if args.key?(:service_account_email)
  @staging_location = args[:staging_location] if args.key?(:staging_location)
  @streaming_mode = args[:streaming_mode] if args.key?(:streaming_mode)
  @subnetwork = args[:subnetwork] if args.key?(:subnetwork)
  @temp_location = args[:temp_location] if args.key?(:temp_location)
  @worker_region = args[:worker_region] if args.key?(:worker_region)
  @worker_zone = args[:worker_zone] if args.key?(:worker_zone)
  @zone = args[:zone] if args.key?(:zone)
end
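A brief usage sketch, assuming an existing instance such as the env object from the Overview: only the keys passed to #update! are changed, and all other attributes are left untouched.

# Adjust only the worker-count settings; other attributes keep their values.
env.update!(num_workers: 5, max_workers: 50)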