Class: Google::Apis::BigqueryV2::SparkStatistics
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
- lib/google/apis/bigquery_v2/classes.rb,
  lib/google/apis/bigquery_v2/representations.rb
Overview
Statistics for a BigSpark query. Populated as part of JobStatistics2.
Instance Attribute Summary
-
#endpoints ⇒ Hash<String,String>
Output only. Endpoints returned from Dataproc.
-
#gcs_staging_bucket ⇒ String
Output only. The Google Cloud Storage bucket used as the default file system by the Spark application.
-
#kms_key_name ⇒ String
Output only. The Cloud KMS encryption key used to protect the resources created by the Spark job.
-
#logging_info ⇒ Google::Apis::BigqueryV2::SparkLoggingInfo
Spark job logs can be filtered by these fields in Cloud Logging.
-
#spark_job_id ⇒ String
Output only. Spark job ID if a Spark job is created successfully.
-
#spark_job_location ⇒ String
Output only. Location where the Spark job is executed.
Instance Method Summary
-
#initialize(**args) ⇒ SparkStatistics
constructor
A new instance of SparkStatistics.
-
#update!(**args) ⇒ Object
Update properties of this object.
Constructor Details
#initialize(**args) ⇒ SparkStatistics
Returns a new instance of SparkStatistics.
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9601

def initialize(**args)
  update!(**args)
end
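Construction simply forwards every keyword argument to #update!. A minimal self-contained sketch of that pattern, using a hypothetical stand-in class rather than the generated one:

```ruby
# Stand-in class mirroring the generated constructor: all keyword
# arguments are delegated straight to #update!.
class StatsSketch
  attr_accessor :spark_job_id

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @spark_job_id = args[:spark_job_id] if args.key?(:spark_job_id)
  end
end

stats = StatsSketch.new(spark_job_id: "job-1234") # hypothetical ID
puts stats.spark_job_id # => job-1234
```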
Instance Attribute Details
#endpoints ⇒ Hash<String,String>
Output only. Endpoints returned from Dataproc. Key list:
- history_server_endpoint: A link to the Spark job UI.
Corresponds to the JSON property endpoints
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9559

def endpoints
  @endpoints
end
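Since endpoints is a plain String-keyed hash, the Spark UI link can be read directly when Dataproc returns one. A sketch with a hypothetical hash value (the URL is a placeholder):

```ruby
# Hypothetical endpoints hash as it might appear on a populated
# SparkStatistics object; the URL below is a placeholder.
endpoints = {
  "history_server_endpoint" => "https://example.com/spark-job-ui"
}

spark_ui = endpoints["history_server_endpoint"]
puts spark_ui if spark_ui # guard against jobs with no UI endpoint
```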
#gcs_staging_bucket ⇒ String
Output only. The Google Cloud Storage bucket that is used as the default file
system by the Spark application. This field is only filled when the Spark
procedure uses the invoker security mode. The gcsStagingBucket bucket is
inferred from the @@spark_proc_properties.staging_bucket system variable (if
it is provided). Otherwise, BigQuery creates a default staging bucket for the
job and returns the bucket name in this field. Example: * gs://[bucket_name]
Corresponds to the JSON property gcsStagingBucket
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9569

def gcs_staging_bucket
  @gcs_staging_bucket
end
#kms_key_name ⇒ String
Output only. The Cloud KMS encryption key that is used to protect the
resources created by the Spark job. If the Spark procedure uses the invoker
security mode, the Cloud KMS encryption key is either inferred from the
provided system variable, @@spark_proc_properties.kms_key_name, or the
default key of the BigQuery job's project (if the CMEK organization policy is
enforced). Otherwise, the Cloud KMS key is either inferred from the Spark
connection associated with the procedure (if it is provided), or from the
default key of the Spark connection's project if the CMEK organization policy
is enforced. Example: * projects/[kms_project_id]/locations/[region]/keyRings/
[key_region]/cryptoKeys/[key]
Corresponds to the JSON property kmsKeyName
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9583

def kms_key_name
  @kms_key_name
end
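The documented resource-name format can be assembled from its path segments. A sketch with a hypothetical helper and placeholder values (the generated class only ever reads this field; it never builds it):

```ruby
# Hypothetical helper that assembles a Cloud KMS key resource name in
# the documented format; every argument below is a placeholder.
def kms_key_resource_name(project_id, region, key_ring, key)
  "projects/#{project_id}/locations/#{region}/keyRings/#{key_ring}/cryptoKeys/#{key}"
end

puts kms_key_resource_name("my-project", "us-central1", "my-ring", "my-key")
# => projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key
```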
#logging_info ⇒ Google::Apis::BigqueryV2::SparkLoggingInfo
Spark job logs can be filtered by these fields in Cloud Logging.
Corresponds to the JSON property loggingInfo
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9588

def logging_info
  @logging_info
end
#spark_job_id ⇒ String
Output only. Spark job ID if a Spark job is created successfully.
Corresponds to the JSON property sparkJobId
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9593

def spark_job_id
  @spark_job_id
end
#spark_job_location ⇒ String
Output only. Location where the Spark job is executed. A location is selected
by BigQuery for jobs configured to run in a multi-region.
Corresponds to the JSON property sparkJobLocation
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9599

def spark_job_location
  @spark_job_location
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9606

def update!(**args)
  @endpoints = args[:endpoints] if args.key?(:endpoints)
  @gcs_staging_bucket = args[:gcs_staging_bucket] if args.key?(:gcs_staging_bucket)
  @kms_key_name = args[:kms_key_name] if args.key?(:kms_key_name)
  @logging_info = args[:logging_info] if args.key?(:logging_info)
  @spark_job_id = args[:spark_job_id] if args.key?(:spark_job_id)
  @spark_job_location = args[:spark_job_location] if args.key?(:spark_job_location)
end
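The args.key? guard means #update! only overwrites properties whose keys are actually supplied; omitted keys leave existing values untouched. A self-contained sketch of that merge behavior, again with a stand-in class and hypothetical values:

```ruby
# Stand-in mirroring the #update! pattern above: only keys present in
# args overwrite the corresponding attribute; others are left alone.
class StatsSketch
  attr_accessor :spark_job_id, :spark_job_location

  def update!(**args)
    @spark_job_id = args[:spark_job_id] if args.key?(:spark_job_id)
    @spark_job_location = args[:spark_job_location] if args.key?(:spark_job_location)
  end
end

s = StatsSketch.new
s.update!(spark_job_id: "job-1234", spark_job_location: "us-central1")
s.update!(spark_job_location: "europe-west1") # spark_job_id untouched
puts s.spark_job_id        # => job-1234
puts s.spark_job_location  # => europe-west1
```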