Class: Google::Apis::BigqueryV2::ExternalDataConfiguration

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/bigquery_v2/classes.rb,
lib/google/apis/bigquery_v2/representations.rb

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ ExternalDataConfiguration

Returns a new instance of ExternalDataConfiguration.



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3425

def initialize(**args)
   update!(**args)
end

Instance Attribute Details

#autodetect ⇒ Boolean Also known as: autodetect?

Try to detect schema and format options automatically. Any option specified explicitly will be honored. Corresponds to the JSON property autodetect

Returns:

  • (Boolean)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3219

def autodetect
  @autodetect
end

#avro_options ⇒ Google::Apis::BigqueryV2::AvroOptions

Options specific to Avro data sources. Corresponds to the JSON property avroOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3225

def avro_options
  @avro_options
end

#bigtable_options ⇒ Google::Apis::BigqueryV2::BigtableOptions

Options specific to Google Cloud Bigtable data sources. Corresponds to the JSON property bigtableOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3230

def bigtable_options
  @bigtable_options
end

#compression ⇒ String

Optional. The compression type of the data source. Possible values include GZIP and NONE. The default value is NONE. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups, Avro, ORC and Parquet formats. An empty string is an invalid value. Corresponds to the JSON property compression

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3238

def compression
  @compression
end

#connection_id ⇒ String

Optional. The connection specifying the credentials to be used to read external storage, such as Azure Blob, Cloud Storage, or S3. The connection_id can have the form `project_id.location_id;connection_id` or `projects/project_id/locations/location_id/connections/connection_id`. Corresponds to the JSON property `connectionId`

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3246

def connection_id
  @connection_id
end
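The two accepted forms can be assembled with a small helper. This is an illustrative sketch only; the helper name is hypothetical and not part of the google-apis-bigquery_v2 gem:

```ruby
# Hypothetical helper showing the two connection_id forms BigQuery
# accepts; not part of the google-apis-bigquery_v2 gem.
def connection_id_for(project_id, location_id, connection_id, short: false)
  if short
    # Compact form: project_id.location_id;connection_id
    "#{project_id}.#{location_id};#{connection_id}"
  else
    # Fully qualified resource-name form.
    "projects/#{project_id}/locations/#{location_id}/connections/#{connection_id}"
  end
end
```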

#csv_options ⇒ Google::Apis::BigqueryV2::CsvOptions

Information related to a CSV data source. Corresponds to the JSON property csvOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3251

def csv_options
  @csv_options
end

#date_format ⇒ String

Optional. Format used to parse DATE values. Supports C-style and SQL-style values. Corresponds to the JSON property dateFormat

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3257

def date_format
  @date_format
end

#datetime_format ⇒ String

Optional. Format used to parse DATETIME values. Supports C-style and SQL-style values. Corresponds to the JSON property datetimeFormat

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3263

def datetime_format
  @datetime_format
end

#decimal_target_types ⇒ Array<String>

Defines the list of possible SQL data types to which the source decimal values are converted. This list and the precision and the scale parameters of the decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is picked if it is in the specified list and if it supports the precision and the scale. STRING supports all precision and scale values. If none of the listed types supports the precision and the scale, the type supporting the widest range in the specified list is picked, and if a value exceeds the supported range when reading the data, an error will be thrown. Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (precision, scale) is: * (38,9) -> NUMERIC; * (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits); * (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits); * (76,38) -> BIGNUMERIC; * (77,38) -> BIGNUMERIC (error if value exceeds supported range). This field cannot contain duplicate types. The order of the types in this field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file formats. Corresponds to the JSON property decimalTargetTypes

Returns:

  • (Array<String>)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3284

def decimal_target_types
  @decimal_target_types
end
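The selection rule above can be sketched in plain Ruby. This is a hedged illustration of the documented logic, not gem code; the fit bounds (NUMERIC: scale ≤ 9 and at most 29 integer digits; BIGNUMERIC: scale ≤ 38 and at most 38 integer digits) follow BigQuery's decimal type ranges:

```ruby
# Illustrative sketch of the decimal target-type selection described
# above. Picks the first fitting type in NUMERIC, BIGNUMERIC, STRING
# order; falls back to the widest allowed type when nothing fits.
def pick_decimal_target(precision, scale, allowed)
  fits = {
    "NUMERIC"    => scale <= 9  && precision - scale <= 29,
    "BIGNUMERIC" => scale <= 38 && precision - scale <= 38,
    "STRING"     => true
  }
  candidates = ["NUMERIC", "BIGNUMERIC", "STRING"].select { |t| allowed.include?(t) }
  candidates.find { |t| fits[t] } || candidates.last
end
```

Running the documented examples with `["NUMERIC", "BIGNUMERIC"]` reproduces the mapping: (38,9) selects NUMERIC, while (39,9), (38,10), and (77,38) all select BIGNUMERIC.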

#file_set_spec_type ⇒ String

Optional. Specifies how source URIs are interpreted for constructing the file set to load. By default source URIs are expanded against the underlying storage. Other options include specifying manifest files. Only applicable to object storage systems. Corresponds to the JSON property fileSetSpecType

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3292

def file_set_spec_type
  @file_set_spec_type
end

#google_sheets_options ⇒ Google::Apis::BigqueryV2::GoogleSheetsOptions

Options specific to Google Sheets data sources. Corresponds to the JSON property googleSheetsOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3297

def google_sheets_options
  @google_sheets_options
end

#hive_partitioning_options ⇒ Google::Apis::BigqueryV2::HivePartitioningOptions

Options for configuring hive partitioning detection. Corresponds to the JSON property hivePartitioningOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3302

def hive_partitioning_options
  @hive_partitioning_options
end

#ignore_unknown_values ⇒ Boolean Also known as: ignore_unknown_values?

Optional. Indicates if BigQuery should allow extra values that are not represented in the table schema. If true, the extra values are ignored. If false, records with extra columns are treated as bad records, and if there are too many bad records, an invalid error is returned in the job result. The default value is false. The sourceFormat property determines what BigQuery treats as an extra value:

  • CSV: trailing columns
  • JSON: named values that don't match any column names
  • Google Cloud Bigtable, Google Cloud Datastore backups, Avro, ORC, Parquet: this setting is ignored

Corresponds to the JSON property ignoreUnknownValues

Returns:

  • (Boolean)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3315

def ignore_unknown_values
  @ignore_unknown_values
end

#json_extension ⇒ String

Optional. Load option to be used together with source_format newline-delimited JSON to indicate that a variant of JSON is being loaded. To load newline-delimited GeoJSON, specify GEOJSON (and source_format must be set to NEWLINE_DELIMITED_JSON). Corresponds to the JSON property jsonExtension

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3324

def json_extension
  @json_extension
end

#json_options ⇒ Google::Apis::BigqueryV2::JsonOptions

JSON options for loading and creating external tables. Corresponds to the JSON property jsonOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3329

def json_options
  @json_options
end

#max_bad_records ⇒ Fixnum

Optional. The maximum number of bad records that BigQuery can ignore when reading data. If the number of bad records exceeds this value, an invalid error is returned in the job result. The default value is 0, which requires that all records are valid. This setting is ignored for Google Cloud Bigtable, Google Cloud Datastore backups, Avro, ORC and Parquet formats. Corresponds to the JSON property maxBadRecords

Returns:

  • (Fixnum)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3338

def max_bad_records
  @max_bad_records
end

#metadata_cache_mode ⇒ String

Optional. Metadata cache mode for the table. Set this to enable caching of metadata from the external data source. Corresponds to the JSON property metadataCacheMode

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3344

def metadata_cache_mode
  @metadata_cache_mode
end

#object_metadata ⇒ String

Optional. ObjectMetadata is used to create Object Tables. Object Tables contain a listing of objects (with their metadata) found at the source_uris. If ObjectMetadata is set, source_format should be omitted. Currently SIMPLE is the only supported Object Metadata type. Corresponds to the JSON property objectMetadata

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3352

def object_metadata
  @object_metadata
end

#parquet_options ⇒ Google::Apis::BigqueryV2::ParquetOptions

Parquet options for loading and creating external tables. Corresponds to the JSON property parquetOptions



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3357

def parquet_options
  @parquet_options
end

#reference_file_schema_uri ⇒ String

Optional. When creating an external table, the user can provide a reference file with the table schema. This is enabled for the following formats: AVRO, PARQUET, ORC. Corresponds to the JSON property referenceFileSchemaUri

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3364

def reference_file_schema_uri
  @reference_file_schema_uri
end

#schema ⇒ Google::Apis::BigqueryV2::TableSchema

Schema of a table. Corresponds to the JSON property schema



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3369

def schema
  @schema
end

#source_format ⇒ String

[Required] The data format. For CSV files, specify "CSV". For Google Sheets, specify "GOOGLE_SHEETS". For newline-delimited JSON, specify "NEWLINE_DELIMITED_JSON". For Avro files, specify "AVRO". For Google Cloud Datastore backups, specify "DATASTORE_BACKUP". For Apache Iceberg tables, specify "ICEBERG". For ORC files, specify "ORC". For Parquet files, specify "PARQUET". [Beta] For Google Cloud Bigtable, specify "BIGTABLE". Corresponds to the JSON property sourceFormat

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3379

def source_format
  @source_format
end

#source_uris ⇒ Array<String>

[Required] The fully-qualified URIs that point to your data in Google Cloud. For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character and it must come after the 'bucket' name. Size limits related to load jobs apply to external data sources. For Google Cloud Bigtable URIs: Exactly one URI can be specified and it has to be a fully specified and valid HTTPS URL for a Google Cloud Bigtable table. For Google Cloud Datastore backups, exactly one URI can be specified. Also, the '*' wildcard character is not allowed. Corresponds to the JSON property sourceUris

Returns:

  • (Array<String>)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3390

def source_uris
  @source_uris
end
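The Cloud Storage wildcard rule (at most one '*', and only after the bucket name) can be checked with a small validator. This is a hypothetical sketch covering gs:// URIs only, not gem code and not an exhaustive implementation of BigQuery's URI validation:

```ruby
# Hypothetical check of the documented gs:// wildcard rule:
# at most one '*', and never inside the bucket name.
def valid_gcs_source_uri?(uri)
  return false unless uri.start_with?("gs://")
  return false if uri.count("*") > 1
  bucket, _, object = uri.delete_prefix("gs://").partition("/")
  !bucket.include?("*") && !object.empty?
end
```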

#time_format ⇒ String

Optional. Format used to parse TIME values. Supports C-style and SQL-style values. Corresponds to the JSON property timeFormat

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3396

def time_format
  @time_format
end

#time_zone ⇒ String

Optional. Time zone used when parsing timestamp values that do not have specific time zone information (e.g. 2024-04-20 12:34:56). The expected format is an IANA timezone string (e.g. America/Los_Angeles). Corresponds to the JSON property timeZone

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3403

def time_zone
  @time_zone
end

#timestamp_format ⇒ String

Optional. Format used to parse TIMESTAMP values. Supports C-style and SQL-style values. Corresponds to the JSON property timestampFormat

Returns:

  • (String)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3409

def timestamp_format
  @timestamp_format
end

#timestamp_target_precision ⇒ Array<Fixnum>

Precisions (maximum number of total digits in base 10) for seconds of TIMESTAMP types that are allowed in the destination table for autodetection mode. Available for the formats: CSV. For the CSV format, possible values include:

  • Not specified, [], or [6]: timestamp(6) for all auto-detected TIMESTAMP columns
  • [6, 12]: timestamp(6) for auto-detected TIMESTAMP columns with at most 6 digits of subseconds; timestamp(12) for auto-detected TIMESTAMP columns with more than 6 digits of subseconds
  • [12]: timestamp(12) for all auto-detected TIMESTAMP columns

The order of the elements in this array is ignored. Inputs that have higher precision than the highest target precision in this array will be truncated. Corresponds to the JSON property timestampTargetPrecision

Returns:

  • (Array<Fixnum>)


# File 'lib/google/apis/bigquery_v2/classes.rb', line 3423

def timestamp_target_precision
  @timestamp_target_precision
end
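The precision selection above can be sketched as: pick the smallest allowed precision that holds the input's subsecond digits, else fall back to the largest allowed (truncating). This is an illustrative interpretation of the documented CSV behavior, not gem code:

```ruby
# Illustrative sketch: choose the target TIMESTAMP precision for a
# column given its observed subsecond digits and the allowed set.
def pick_timestamp_precision(subsecond_digits, allowed)
  allowed = [6] if allowed.nil? || allowed.empty?  # "not specified" default
  # Smallest precision that fits, else the widest allowed (input truncated).
  allowed.sort.find { |p| p >= subsecond_digits } || allowed.max
end
```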

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/bigquery_v2/classes.rb', line 3430

def update!(**args)
  @autodetect = args[:autodetect] if args.key?(:autodetect)
  @avro_options = args[:avro_options] if args.key?(:avro_options)
  @bigtable_options = args[:bigtable_options] if args.key?(:bigtable_options)
  @compression = args[:compression] if args.key?(:compression)
  @connection_id = args[:connection_id] if args.key?(:connection_id)
  @csv_options = args[:csv_options] if args.key?(:csv_options)
  @date_format = args[:date_format] if args.key?(:date_format)
  @datetime_format = args[:datetime_format] if args.key?(:datetime_format)
  @decimal_target_types = args[:decimal_target_types] if args.key?(:decimal_target_types)
  @file_set_spec_type = args[:file_set_spec_type] if args.key?(:file_set_spec_type)
  @google_sheets_options = args[:google_sheets_options] if args.key?(:google_sheets_options)
  @hive_partitioning_options = args[:hive_partitioning_options] if args.key?(:hive_partitioning_options)
  @ignore_unknown_values = args[:ignore_unknown_values] if args.key?(:ignore_unknown_values)
  @json_extension = args[:json_extension] if args.key?(:json_extension)
  @json_options = args[:json_options] if args.key?(:json_options)
  @max_bad_records = args[:max_bad_records] if args.key?(:max_bad_records)
  @metadata_cache_mode = args[:metadata_cache_mode] if args.key?(:metadata_cache_mode)
  @object_metadata = args[:object_metadata] if args.key?(:object_metadata)
  @parquet_options = args[:parquet_options] if args.key?(:parquet_options)
  @reference_file_schema_uri = args[:reference_file_schema_uri] if args.key?(:reference_file_schema_uri)
  @schema = args[:schema] if args.key?(:schema)
  @source_format = args[:source_format] if args.key?(:source_format)
  @source_uris = args[:source_uris] if args.key?(:source_uris)
  @time_format = args[:time_format] if args.key?(:time_format)
  @time_zone = args[:time_zone] if args.key?(:time_zone)
  @timestamp_format = args[:timestamp_format] if args.key?(:timestamp_format)
  @timestamp_target_precision = args[:timestamp_target_precision] if args.key?(:timestamp_target_precision)
end
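The `args.key?` guard above gives update! partial-update semantics: only keys actually passed overwrite the stored values, so an omitted attribute is left untouched (and an explicit nil still clears it). A minimal stand-in class (not the gem class, two attributes only) demonstrating the pattern:

```ruby
# Minimal stand-in showing the update!(**args) pattern used by this
# class: only keys present in args overwrite existing values.
class SketchConfig
  attr_accessor :compression, :source_format

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @compression   = args[:compression]   if args.key?(:compression)
    @source_format = args[:source_format] if args.key?(:source_format)
  end
end

cfg = SketchConfig.new(compression: "GZIP", source_format: "CSV")
cfg.update!(source_format: "PARQUET")  # compression is left untouched
```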