Class: Google::Apis::BigqueryV2::ExternalDataConfiguration

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/bigquery_v2/classes.rb,
lib/google/apis/bigquery_v2/representations.rb

Instance Attribute Summary collapse

Instance Method Summary collapse

Constructor Details

#initialize(**args) ⇒ ExternalDataConfiguration

Returns a new instance of ExternalDataConfiguration.



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3466

def initialize(**args)
   update!(**args)
end
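The constructor simply forwards its keyword arguments to `update!`. A minimal stand-in in plain Ruby (no gem required; `ExampleConfig` and its two attributes are hypothetical names) illustrates the pattern and the `args.key?` guard it relies on:

```ruby
# Minimal sketch of the initialize/update! pattern used by this class.
# The real class defines one reader per JSON property in the same way.
class ExampleConfig
  attr_reader :source_format, :autodetect

  def initialize(**args)
    update!(**args)
  end

  # Only touches attributes whose keys were actually passed, so an
  # explicit nil is distinguishable from "not provided".
  def update!(**args)
    @source_format = args[:source_format] if args.key?(:source_format)
    @autodetect = args[:autodetect] if args.key?(:autodetect)
  end
end

config = ExampleConfig.new(source_format: "CSV", autodetect: true)
config.update!(autodetect: false)   # source_format is left untouched
```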

Instance Attribute Details

#autodetect ⇒ Boolean (also known as: autodetect?)

Try to detect schema and format options automatically. Any option specified explicitly will be honored. Corresponds to the JSON property autodetect

Returns:

  • (Boolean)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3260

def autodetect
  @autodetect
end

#avro_options ⇒ Google::Apis::BigqueryV2::AvroOptions

Options for Avro external data sources. Corresponds to the JSON property avroOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3266

def avro_options
  @avro_options
end

#bigtable_options ⇒ Google::Apis::BigqueryV2::BigtableOptions

Options specific to Google Cloud Bigtable data sources. Corresponds to the JSON property bigtableOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3271

def bigtable_options
  @bigtable_options
end

#compression ⇒ String

Optional. The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups, Avro, ORC and Parquet formats. An empty string is an invalid value. Corresponds to the JSON property compression

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3279

def compression
  @compression
end

#connection_id ⇒ String

Optional. The connection specifying the credentials to be used to read external storage, such as Azure Blob, Cloud Storage, or S3. The connection_id can have the form `<project_id>.<location_id>;<connection_id>` or `projects/<project_id>/locations/<location_id>/connections/<connection_id>`. Corresponds to the JSON property connectionId

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3287

def connection_id
  @connection_id
end
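The two accepted shapes of connection_id can be sketched as regular expressions. These are illustrative patterns inferred from the description above, not an official grammar:

```ruby
# Hypothetical patterns for the two connection_id forms described above.
SHORT_FORM = %r{\A[\w-]+\.[\w-]+;[\w-]+\z}
FULL_FORM  = %r{\Aprojects/[\w-]+/locations/[\w-]+/connections/[\w-]+\z}

def plausible_connection_id?(id)
  id.match?(SHORT_FORM) || id.match?(FULL_FORM)
end

plausible_connection_id?("my-project.us;my-conn")                          # short form
plausible_connection_id?("projects/my-project/locations/us/connections/c") # full form
```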

#csv_options ⇒ Google::Apis::BigqueryV2::CsvOptions

Information related to a CSV data source. Corresponds to the JSON property csvOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3292

def csv_options
  @csv_options
end

#date_format ⇒ String

Optional. Format used to parse DATE values. Supports C-style and SQL-style values. Corresponds to the JSON property dateFormat

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3298

def date_format
  @date_format
end

#datetime_format ⇒ String

Optional. Format used to parse DATETIME values. Supports C-style and SQL-style values. Corresponds to the JSON property datetimeFormat

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3304

def datetime_format
  @datetime_format
end

#decimal_target_types ⇒ Array<String>

Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision, scale) is: * (38,9) -> NUMERIC; * (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); * (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); * (76,38) -> BIGNUMERIC; * (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"], and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats. Corresponds to the JSON property decimalTargetTypes

Returns:

  • (Array<String>)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3325

def decimal_target_types
  @decimal_target_types
end
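The selection rule above can be sketched in plain Ruby. This is an illustrative model of the documented behavior, assuming NUMERIC holds up to 29 integer and 9 fractional digits and BIGNUMERIC up to 38 of each; it is not the actual BigQuery implementation:

```ruby
# Does the SQL type fit the given decimal precision and scale?
def supports?(type, precision, scale)
  case type
  when "NUMERIC"    then scale <= 9  && precision - scale <= 29
  when "BIGNUMERIC" then scale <= 38 && precision - scale <= 38
  when "STRING"     then true
  end
end

def pick_decimal_target_type(types, precision, scale)
  # Canonical order NUMERIC -> BIGNUMERIC -> STRING; input order is ignored.
  candidates = ["NUMERIC", "BIGNUMERIC", "STRING"] & types
  # First type that fits, else the widest listed (reads may raise
  # an out-of-range error at load time).
  candidates.find { |t| supports?(t, precision, scale) } || candidates.last
end

pick_decimal_target_type(["NUMERIC", "BIGNUMERIC"], 38, 9)   # -> "NUMERIC"
pick_decimal_target_type(["NUMERIC", "BIGNUMERIC"], 39, 9)   # -> "BIGNUMERIC"
pick_decimal_target_type(["BIGNUMERIC", "NUMERIC"], 38, 10)  # -> "BIGNUMERIC"
```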

#file_set_spec_type ⇒ String

Optional. Specifies how source URIs are interpreted for constructing the file set to load. By default source URIs are expanded against the underlying storage. Other options include specifying manifest files. Only applicable to object storage systems. Corresponds to the JSON property fileSetSpecType

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3333

def file_set_spec_type
  @file_set_spec_type
end

#google_sheets_options ⇒ Google::Apis::BigqueryV2::GoogleSheetsOptions

Options specific to Google Sheets data sources. Corresponds to the JSON property googleSheetsOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3338

def google_sheets_options
  @google_sheets_options
end

#hive_partitioning_options ⇒ Google::Apis::BigqueryV2::HivePartitioningOptions

Options for configuring hive partitioning detection. Corresponds to the JSON property hivePartitioningOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3343

def hive_partitioning_options
  @hive_partitioning_options
end

#ignore_unknown_values ⇒ Boolean (also known as: ignore_unknown_values?)

Optional. Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value: CSV: trailing columns; JSON: named values that don't match any column names; Google Cloud Bigtable, Google Cloud Datastore backups, Avro, ORC, Parquet: this setting is ignored. Corresponds to the JSON property ignoreUnknownValues

Returns:

  • (Boolean)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3356

def ignore_unknown_values
  @ignore_unknown_values
end

#json_extension ⇒ String

Optional. Load option to be used together with source_format newline-delimited JSON to indicate that a variant of JSON is being loaded. To load newline-delimited GeoJSON, specify GEOJSON (and source_format must be set to NEWLINE_DELIMITED_JSON). Corresponds to the JSON property jsonExtension

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3365

def json_extension
  @json_extension
end

#json_options ⇒ Google::Apis::BigqueryV2::JsonOptions

JSON options for load jobs and external tables. Corresponds to the JSON property jsonOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3370

def json_options
  @json_options
end

#max_bad_records ⇒ Fixnum

Optional. The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups, Avro, ORC and Parquet formats. Corresponds to the JSON property maxBadRecords

Returns:

  • (Fixnum)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3379

def max_bad_records
  @max_bad_records
end

#metadata_cache_mode ⇒ String

Optional. Metadata cache mode for the table. Set this to enable caching of metadata from the external data source. Corresponds to the JSON property metadataCacheMode

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3385

def metadata_cache_mode
  @metadata_cache_mode
end

#object_metadata ⇒ String

Optional. ObjectMetadata is used to create Object Tables. Object Tables contain a listing of objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format should be omitted. Currently SIMPLE is the only supported Object Metadata type. Corresponds to the JSON property objectMetadata

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3393

def object_metadata
  @object_metadata
end

#parquet_options ⇒ Google::Apis::BigqueryV2::ParquetOptions

Parquet options for load jobs and external tables. Corresponds to the JSON property parquetOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3398

def parquet_options
  @parquet_options
end

#reference_file_schema_uri ⇒ String

Optional. When creating an external table, the user can provide a reference file with the table schema. This is enabled for the following formats: AVRO, PARQUET, ORC. Corresponds to the JSON property referenceFileSchemaUri

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3405

def reference_file_schema_uri
  @reference_file_schema_uri
end

#schema ⇒ Google::Apis::BigqueryV2::TableSchema

Schema of a table. Corresponds to the JSON property schema



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3410

def schema
  @schema
end

#source_format ⇒ String

[Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". For Apache Iceberg tables, specify "ICEBERG". For ORC files, specify "ORC". For Parquet files, specify "PARQUET". [Beta] For Google Cloud Bigtable, specify "BIGTABLE". Corresponds to the JSON property sourceFormat

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3420

def source_format
  @source_format
end

#source_uris ⇒ Array<String>

[Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: each URI can contain one '*' wildcard character, and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: exactly one URI can be specified, and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed. Corresponds to the JSON property sourceUris

Returns:

  • (Array<String>)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3431

def source_uris
  @source_uris
end
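For Cloud Storage, the wildcard rule above (at most one '*', placed after the bucket name) can be sketched as follows. This is an illustrative check, not an exhaustive URI validator:

```ruby
# Sketch of the Cloud Storage wildcard rule for source URIs.
def valid_gcs_source_uri?(uri)
  return false unless uri.start_with?("gs://")
  stars = uri.count("*")
  return true if stars.zero?
  return false if stars > 1
  bucket_end = uri.index("/", "gs://".length)  # first "/" after the bucket
  !bucket_end.nil? && uri.index("*") > bucket_end
end

valid_gcs_source_uri?("gs://my-bucket/data/*.csv")  # -> true
valid_gcs_source_uri?("gs://my-*/data/file.csv")    # -> false (wildcard in bucket name)
valid_gcs_source_uri?("gs://my-bucket/a*/b*.csv")   # -> false (two wildcards)
```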

#time_format ⇒ String

Optional. Format used to parse TIME values. Supports C-style and SQL-style values. Corresponds to the JSON property timeFormat

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3437

def time_format
  @time_format
end

#time_zone ⇒ String

Optional. Time zone used when parsing timestamp values that do not have specific time zone information (e.g. 2024-04-20 12:34:56). The expected format is an IANA timezone string (e.g. America/Los_Angeles). Corresponds to the JSON property timeZone

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3444

def time_zone
  @time_zone
end

#timestamp_format ⇒ String

Optional. Format used to parse TIMESTAMP values. Supports C-style and SQL-style values. Corresponds to the JSON property timestampFormat

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3450

def timestamp_format
  @timestamp_format
end

#timestamp_target_precision ⇒ Array<Fixnum>

Precisions (maximum number of total digits in base 10) for seconds of TIMESTAMP types that are allowed to the destination table for autodetection mode. Available for the formats CSV, PARQUET, and AVRO. Possible values: not specified, [], or [6]: timestamp(6) for all auto-detected TIMESTAMP columns. [6, 12]: timestamp(6) for auto-detected TIMESTAMP columns with up to 6 digits of subseconds, timestamp(12) for those with more than 6 digits of subseconds. [12]: timestamp(12) for all auto-detected TIMESTAMP columns. The order of the elements in this array is ignored. Inputs that have higher precision than the highest target precision in this array will be truncated. Corresponds to the JSON property timestampTargetPrecision

Returns:

  • (Array<Fixnum>)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3464

def timestamp_target_precision
  @timestamp_target_precision
end
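The precision selection described above can be modeled in a few lines. This is a sketch of the documented rule, not the actual implementation:

```ruby
# Pick the target timestamp precision for a column, given the allowed
# targets and the number of subsecond digits seen during autodetection.
def target_timestamp_precision(targets, subsecond_digits)
  sorted = targets.sort                        # element order is ignored
  # Smallest target that can hold the input; values beyond the highest
  # target are truncated to it.
  sorted.find { |p| subsecond_digits <= p } || sorted.max
end

target_timestamp_precision([6, 12], 3)    # -> 6
target_timestamp_precision([6, 12], 9)    # -> 12
target_timestamp_precision([12, 6], 15)   # -> 12 (input truncated)
```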

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3471

def update!(**args)
  @autodetect = args[:autodetect] if args.key?(:autodetect)
  @avro_options = args[:avro_options] if args.key?(:avro_options)
  @bigtable_options = args[:bigtable_options] if args.key?(:bigtable_options)
  @compression = args[:compression] if args.key?(:compression)
  @connection_id = args[:connection_id] if args.key?(:connection_id)
  @csv_options = args[:csv_options] if args.key?(:csv_options)
  @date_format = args[:date_format] if args.key?(:date_format)
  @datetime_format = args[:datetime_format] if args.key?(:datetime_format)
  @decimal_target_types = args[:decimal_target_types] if args.key?(:decimal_target_types)
  @file_set_spec_type = args[:file_set_spec_type] if args.key?(:file_set_spec_type)
  @google_sheets_options = args[:google_sheets_options] if args.key?(:google_sheets_options)
  @hive_partitioning_options = args[:hive_partitioning_options] if args.key?(:hive_partitioning_options)
  @ignore_unknown_values = args[:ignore_unknown_values] if args.key?(:ignore_unknown_values)
  @json_extension = args[:json_extension] if args.key?(:json_extension)
  @json_options = args[:json_options] if args.key?(:json_options)
  @max_bad_records = args[:max_bad_records] if args.key?(:max_bad_records)
  @metadata_cache_mode = args[:metadata_cache_mode] if args.key?(:metadata_cache_mode)
  @object_metadata = args[:object_metadata] if args.key?(:object_metadata)
  @parquet_options = args[:parquet_options] if args.key?(:parquet_options)
  @reference_file_schema_uri = args[:reference_file_schema_uri] if args.key?(:reference_file_schema_uri)
  @schema = args[:schema] if args.key?(:schema)
  @source_format = args[:source_format] if args.key?(:source_format)
  @source_uris = args[:source_uris] if args.key?(:source_uris)
  @time_format = args[:time_format] if args.key?(:time_format)
  @time_zone = args[:time_zone] if args.key?(:time_zone)
  @timestamp_format = args[:timestamp_format] if args.key?(:timestamp_format)
  @timestamp_target_precision = args[:timestamp_target_precision] if args.key?(:timestamp_target_precision)
end