Class: Google::Apis::BigqueryV2::SparkStatistics
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  - lib/google/apis/bigquery_v2/classes.rb
  - lib/google/apis/bigquery_v2/representations.rb
Overview
Statistics for a BigSpark query. Populated as part of JobStatistics2.
Instance Attribute Summary collapse
- #endpoints ⇒ Hash<String,String>
  Output only.
- #gcs_staging_bucket ⇒ String
  Output only.
- #kms_key_name ⇒ String
  Output only.
- #logging_info ⇒ Google::Apis::BigqueryV2::SparkLoggingInfo
  Spark job logs can be filtered by these fields in Cloud Logging.
- #spark_job_id ⇒ String
  Output only.
- #spark_job_location ⇒ String
  Output only.
Instance Method Summary collapse
- #initialize(**args) ⇒ SparkStatistics (constructor)
  A new instance of SparkStatistics.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ SparkStatistics
Returns a new instance of SparkStatistics.
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9571

def initialize(**args)
  update!(**args)
end
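The constructor simply forwards its keyword arguments to update!, so any subset of the attributes can be supplied at construction time and the rest default to nil. A minimal, self-contained sketch of this idiom, assuming a hypothetical stand-in class (SparkStatisticsSketch is not the class from the google-apis-bigquery_v2 gem, and only two of the six attributes are modeled):

```ruby
# Hypothetical stand-in that reproduces the initialize/update! idiom
# used by SparkStatistics; attributes are reduced to two for brevity.
class SparkStatisticsSketch
  attr_accessor :spark_job_id, :spark_job_location

  # The constructor just delegates to update!, exactly as the
  # generated class does.
  def initialize(**args)
    update!(**args)
  end

  # Assign only the keys actually present in args.
  def update!(**args)
    @spark_job_id = args[:spark_job_id] if args.key?(:spark_job_id)
    @spark_job_location = args[:spark_job_location] if args.key?(:spark_job_location)
  end
end

# Construct with a partial set of attributes; the rest stay nil.
stats = SparkStatisticsSketch.new(spark_job_id: "job-123")
```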
Instance Attribute Details
#endpoints ⇒ Hash<String,String>
Output only. Endpoints returned from Dataproc. Key list: -
history_server_endpoint: A link to Spark job UI.
Corresponds to the JSON property endpoints
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9529

def endpoints
  @endpoints
end
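Since endpoints is a plain Hash<String,String> keyed by the names listed above, the Spark job UI link can be read by the documented history_server_endpoint key. A sketch with an illustrative hash (the URL value is hypothetical, not a real endpoint):

```ruby
# Illustrative endpoints hash shaped like the one Dataproc returns;
# the URL value here is hypothetical.
endpoints = { "history_server_endpoint" => "https://example.com/spark-job-ui" }

# Look up the Spark job UI link by its documented key.
spark_ui_link = endpoints["history_server_endpoint"]
```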
#gcs_staging_bucket ⇒ String
Output only. The Google Cloud Storage bucket that is used as the default file
system by the Spark application. This field is only filled when the Spark
procedure uses the invoker security mode. The gcsStagingBucket bucket is
inferred from the @@spark_proc_properties.staging_bucket system variable (if
it is provided). Otherwise, BigQuery creates a default staging bucket for the
job and returns the bucket name in this field. Example: * gs://[bucket_name]
Corresponds to the JSON property gcsStagingBucket
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9539

def gcs_staging_bucket
  @gcs_staging_bucket
end
#kms_key_name ⇒ String
Output only. The Cloud KMS encryption key that is used to protect the
resources created by the Spark job. If the Spark procedure uses the invoker
security mode, the Cloud KMS encryption key is either inferred from the
provided system variable, @@spark_proc_properties.kms_key_name, or the
default key of the BigQuery job's project (if the CMEK organization policy is
enforced). Otherwise, the Cloud KMS key is either inferred from the Spark
connection associated with the procedure (if it is provided), or from the
default key of the Spark connection's project if the CMEK organization policy
is enforced. Example: * projects/[kms_project_id]/locations/[region]/keyRings/
[key_region]/cryptoKeys/[key]
Corresponds to the JSON property kmsKeyName
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9553

def kms_key_name
  @kms_key_name
end
#logging_info ⇒ Google::Apis::BigqueryV2::SparkLoggingInfo
Spark job logs can be filtered by these fields in Cloud Logging.
Corresponds to the JSON property loggingInfo
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9558

def logging_info
  @logging_info
end
#spark_job_id ⇒ String
Output only. Spark job ID if a Spark job is created successfully.
Corresponds to the JSON property sparkJobId
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9563

def spark_job_id
  @spark_job_id
end
#spark_job_location ⇒ String
Output only. Location where the Spark job is executed. A location is selected
by BigQuery for jobs configured to run in a multi-region.
Corresponds to the JSON property sparkJobLocation
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9569

def spark_job_location
  @spark_job_location
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/bigquery_v2/classes.rb', line 9576

def update!(**args)
  @endpoints = args[:endpoints] if args.key?(:endpoints)
  @gcs_staging_bucket = args[:gcs_staging_bucket] if args.key?(:gcs_staging_bucket)
  @kms_key_name = args[:kms_key_name] if args.key?(:kms_key_name)
  @logging_info = args[:logging_info] if args.key?(:logging_info)
  @spark_job_id = args[:spark_job_id] if args.key?(:spark_job_id)
  @spark_job_location = args[:spark_job_location] if args.key?(:spark_job_location)
end
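Because each assignment in update! is guarded by args.key?, only the keys actually passed are overwritten; attributes omitted from a call keep their previous values. A self-contained sketch of that behavior, assuming a hypothetical two-attribute stand-in (StatsSketch is not the gem's class):

```ruby
# Hypothetical stand-in demonstrating update!'s key?-guarded
# assignment: keys absent from args leave the old value alone.
class StatsSketch
  attr_accessor :spark_job_id, :spark_job_location

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @spark_job_id = args[:spark_job_id] if args.key?(:spark_job_id)
    @spark_job_location = args[:spark_job_location] if args.key?(:spark_job_location)
  end
end

stats = StatsSketch.new(spark_job_id: "job-1", spark_job_location: "us-central1")

# Partial update: spark_job_location is not mentioned, so it survives.
stats.update!(spark_job_id: "job-2")
```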