Class: Google::Apis::BigqueryV2::JobConfigurationLoad
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  lib/google/apis/bigquery_v2/classes.rb,
  lib/google/apis/bigquery_v2/representations.rb
Overview
JobConfigurationLoad contains the configuration properties for loading data into a destination table.
Instance Attribute Summary
- #allow_jagged_rows ⇒ Boolean (also: #allow_jagged_rows?): Optional.
- #allow_quoted_newlines ⇒ Boolean (also: #allow_quoted_newlines?): Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.
- #autodetect ⇒ Boolean (also: #autodetect?): Optional.
- #clustering ⇒ Google::Apis::BigqueryV2::Clustering: Configures table clustering.
- #column_name_character_map ⇒ String: Optional.
- #connection_properties ⇒ Array<Google::Apis::BigqueryV2::ConnectionProperty>: Optional.
- #copy_files_only ⇒ Boolean (also: #copy_files_only?): Optional.
- #create_disposition ⇒ String: Optional.
- #create_session ⇒ Boolean (also: #create_session?): Optional.
- #date_format ⇒ String: Optional.
- #datetime_format ⇒ String: Optional.
- #decimal_target_types ⇒ Array<String>: Defines the list of possible SQL data types to which the source decimal values are converted.
- #destination_encryption_configuration ⇒ Google::Apis::BigqueryV2::EncryptionConfiguration: Configuration for Cloud KMS encryption settings.
- #destination_table ⇒ Google::Apis::BigqueryV2::TableReference: [Required] The destination table to load the data into.
- #destination_table_properties ⇒ Google::Apis::BigqueryV2::DestinationTableProperties: Properties for the destination table.
- #encoding ⇒ String: Optional.
- #field_delimiter ⇒ String: Optional.
- #file_set_spec_type ⇒ String: Optional.
- #hive_partitioning_options ⇒ Google::Apis::BigqueryV2::HivePartitioningOptions: Options for configuring hive partitioning detection.
- #ignore_unknown_values ⇒ Boolean (also: #ignore_unknown_values?): Optional.
- #json_extension ⇒ String: Optional.
- #max_bad_records ⇒ Fixnum: Optional.
- #null_marker ⇒ String: Optional.
- #null_markers ⇒ Array<String>: Optional.
- #parquet_options ⇒ Google::Apis::BigqueryV2::ParquetOptions: Parquet options for load and external table creation.
- #preserve_ascii_control_characters ⇒ Boolean (also: #preserve_ascii_control_characters?): Optional.
- #projection_fields ⇒ Array<String>: If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup.
- #quote ⇒ String: Optional.
- #range_partitioning ⇒ Google::Apis::BigqueryV2::RangePartitioning: Range partitioning specification for the destination table.
- #reference_file_schema_uri ⇒ String: Optional.
- #schema ⇒ Google::Apis::BigqueryV2::TableSchema: Schema of a table.
- #schema_inline ⇒ String: [Deprecated] The inline schema.
- #schema_inline_format ⇒ String: [Deprecated] The format of the schemaInline property.
- #schema_update_options ⇒ Array<String>: Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration.
- #skip_leading_rows ⇒ Fixnum: Optional.
- #source_column_match ⇒ String: Optional.
- #source_format ⇒ String: Optional.
- #source_uris ⇒ Array<String>: [Required] The fully-qualified URIs that point to your data in Google Cloud.
- #time_format ⇒ String: Optional.
- #time_partitioning ⇒ Google::Apis::BigqueryV2::TimePartitioning: Time-based partitioning specification for the destination table.
- #time_zone ⇒ String: Optional.
- #timestamp_format ⇒ String: Optional.
- #timestamp_target_precision ⇒ Array<Fixnum>: Precisions (maximum number of total digits in base 10) for seconds of TIMESTAMP types that are allowed in the destination table for autodetection mode.
- #use_avro_logical_types ⇒ Boolean (also: #use_avro_logical_types?): Optional.
- #write_disposition ⇒ String: Optional.
Instance Method Summary
- #initialize(**args) ⇒ JobConfigurationLoad (constructor): A new instance of JobConfigurationLoad.
- #update!(**args) ⇒ Object: Update properties of this object.
Constructor Details
#initialize(**args) ⇒ JobConfigurationLoad
Returns a new instance of JobConfigurationLoad.
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5507
def initialize(**args)
  update!(**args)
end
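As a sketch of how these attributes map to the underlying JSON job configuration (the camelCase names listed under each attribute below), here is a plain-Ruby hash for a hypothetical CSV load; the project, dataset, and bucket names are made up, and the gem's `JobConfigurationLoad` would accept the same values through its snake_case attribute writers:

```ruby
require "json"

# Hypothetical CSV load configuration expressed as the raw JSON job
# configuration. Keys are the documented JSON property names; the
# JobConfigurationLoad attribute readers use snake_case equivalents.
load_config = {
  "sourceUris" => ["gs://example-bucket/data/*.csv"],  # assumed bucket
  "destinationTable" => {
    "projectId" => "example-project",                  # assumed project
    "datasetId" => "example_dataset",
    "tableId"   => "example_table"
  },
  "sourceFormat"      => "CSV",
  "skipLeadingRows"   => 1,                            # skip a header row
  "fieldDelimiter"    => ",",
  "createDisposition" => "CREATE_IF_NEEDED",
  "writeDisposition"  => "WRITE_APPEND"
}

puts JSON.pretty_generate(load_config)
```

This hash is what would appear under `configuration.load` in a jobs.insert request body.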
Instance Attribute Details
#allow_jagged_rows ⇒ Boolean Also known as: allow_jagged_rows?
Optional. Accept rows that are missing trailing optional columns. The missing
values are treated as nulls. If false, records with missing trailing columns
are treated as bad records, and if there are too many bad records, an invalid
error is returned in the job result. The default value is false. Only
applicable to CSV, ignored for other formats.
Corresponds to the JSON property allowJaggedRows
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5118
def allow_jagged_rows
  @allow_jagged_rows
end
#allow_quoted_newlines ⇒ Boolean Also known as: allow_quoted_newlines?
Indicates if BigQuery should allow quoted data sections that contain newline
characters in a CSV file. The default value is false.
Corresponds to the JSON property allowQuotedNewlines
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5125
def allow_quoted_newlines
  @allow_quoted_newlines
end
#autodetect ⇒ Boolean Also known as: autodetect?
Optional. Indicates if we should automatically infer the options and schema
for CSV and JSON sources.
Corresponds to the JSON property autodetect
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5132
def autodetect
  @autodetect
end
#clustering ⇒ Google::Apis::BigqueryV2::Clustering
Configures table clustering.
Corresponds to the JSON property clustering
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5138
def clustering
  @clustering
end
#column_name_character_map ⇒ String
Optional. Character map supported for column names in CSV/Parquet loads.
Defaults to STRICT and can be overridden by Project Config Service. Using this
option with unsupported load formats will result in an error.
Corresponds to the JSON property columnNameCharacterMap
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5145
def column_name_character_map
  @column_name_character_map
end
#connection_properties ⇒ Array<Google::Apis::BigqueryV2::ConnectionProperty>
Optional. Connection properties which can modify the load job behavior.
Currently, only the 'session_id' connection property is supported, and is used
to resolve _SESSION appearing as the dataset id.
Corresponds to the JSON property connectionProperties
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5152
def connection_properties
  @connection_properties
end
#copy_files_only ⇒ Boolean Also known as: copy_files_only?
Optional. [Experimental] Configures the load job to copy files directly to the
destination BigLake managed table, bypassing file content reading and
rewriting. Copying files only is supported when all the following are true: *
source_uris are located in the same Cloud Storage location as the destination
table's storage_uri location. * source_format is PARQUET. *
destination_table is an existing BigLake managed table. * The table's schema
does not have flexible column names. * The table's columns do not have type
parameters other than precision and scale. * No options other than the above
are specified.
Corresponds to the JSON property copyFilesOnly
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5165
def copy_files_only
  @copy_files_only
end
#create_disposition ⇒ String
Optional. Specifies whether the job is allowed to create new tables. The
following values are supported: * CREATE_IF_NEEDED: If the table does not
exist, BigQuery creates the table. * CREATE_NEVER: The table must already
exist. If it does not, a 'notFound' error is returned in the job result. The
default value is CREATE_IF_NEEDED. Creation, truncation and append actions
occur as one atomic update upon job completion.
Corresponds to the JSON property createDisposition
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5176
def create_disposition
  @create_disposition
end
#create_session ⇒ Boolean Also known as: create_session?
Optional. If this property is true, the job creates a new session using a
randomly generated session_id. To continue using a created session with
subsequent queries, pass the existing session identifier as a
ConnectionProperty value. The session identifier is returned as part of the
SessionInfo message within the query statistics. The new session's location
will be set to Job.JobReference.location if it is present, otherwise it's
set to the default location based on existing routing logic.
Corresponds to the JSON property createSession
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5187
def create_session
  @create_session
end
#date_format ⇒ String
Optional. Date format used for parsing DATE values.
Corresponds to the JSON property dateFormat
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5193
def date_format
  @date_format
end
#datetime_format ⇒ String
Optional. Date format used for parsing DATETIME values.
Corresponds to the JSON property datetimeFormat
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5198
def datetime_format
  @datetime_format
end
#decimal_target_types ⇒ Array<String>
Defines the list of possible SQL data types to which the source decimal values
are converted. This list and the precision and the scale parameters of the
decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC,
and STRING, a type is picked if it is in the specified list and if it supports
the precision and the scale. STRING supports all precision and scale values.
If none of the listed types supports the precision and the scale, the type
supporting the widest range in the specified list is picked, and if a value
exceeds the supported range when reading the data, an error will be thrown.
Example: Suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If (
precision,scale) is: * (38,9) -> NUMERIC; * (39,9) -> BIGNUMERIC (NUMERIC
cannot hold 30 integer digits); * (38,10) -> BIGNUMERIC (NUMERIC cannot hold
10 fractional digits); * (76,38) -> BIGNUMERIC; * (77,38) -> BIGNUMERIC (error
if value exceeds supported range). This field cannot contain duplicate types.
The order of the types in this field is ignored. For example, ["BIGNUMERIC", "
NUMERIC"] is the same as ["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes
precedence over BIGNUMERIC. Defaults to ["NUMERIC", "STRING"] for ORC and ["
NUMERIC"] for the other file formats.
Corresponds to the JSON property decimalTargetTypes
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5219
def decimal_target_types
  @decimal_target_types
end
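The selection rule described above can be sketched in plain Ruby. The fit thresholds (scale ≤ 9 and at most 29 integer digits for NUMERIC; scale ≤ 38 and at most 38 integer digits for BIGNUMERIC) are inferred from the worked examples in the description, not taken from the library itself:

```ruby
# Pick the target type for a decimal column, per the documented rule:
# in the order NUMERIC, BIGNUMERIC, STRING, choose the first type that
# is in the given list and supports the precision and scale; otherwise
# fall back to the widest type in the list.
def pick_decimal_target(types, precision, scale)
  fits = {
    "NUMERIC"    => scale <= 9  && precision - scale <= 29,
    "BIGNUMERIC" => scale <= 38 && precision - scale <= 38,
    "STRING"     => true  # STRING supports all precision and scale values
  }
  # Order within `types` is ignored; NUMERIC always takes precedence.
  candidates = %w[NUMERIC BIGNUMERIC STRING].select { |t| types.include?(t) }
  candidates.find { |t| fits[t] } || candidates.last  # widest in the list
end

p pick_decimal_target(%w[NUMERIC BIGNUMERIC], 38, 9)   # => "NUMERIC"
p pick_decimal_target(%w[NUMERIC BIGNUMERIC], 39, 9)   # => "BIGNUMERIC"
p pick_decimal_target(%w[NUMERIC BIGNUMERIC], 38, 10)  # => "BIGNUMERIC"
```

For (77, 38) the same function returns "BIGNUMERIC" as the widest listed type, matching the documented case where values may exceed the supported range.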
#destination_encryption_configuration ⇒ Google::Apis::BigqueryV2::EncryptionConfiguration
Configuration for Cloud KMS encryption settings.
Corresponds to the JSON property destinationEncryptionConfiguration
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5224
def destination_encryption_configuration
  @destination_encryption_configuration
end
#destination_table ⇒ Google::Apis::BigqueryV2::TableReference
[Required] The destination table to load the data into.
Corresponds to the JSON property destinationTable
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5229
def destination_table
  @destination_table
end
#destination_table_properties ⇒ Google::Apis::BigqueryV2::DestinationTableProperties
Properties for the destination table.
Corresponds to the JSON property destinationTableProperties
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5234
def destination_table_properties
  @destination_table_properties
end
#encoding ⇒ String
Optional. The character encoding of the data. The supported values are UTF-8,
ISO-8859-1, UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is
UTF-8. BigQuery decodes the data after the raw, binary data has been split
using the values of the quote and fieldDelimiter properties. If you don't
specify an encoding, or if you specify a UTF-8 encoding when the CSV file is
not UTF-8 encoded, BigQuery attempts to convert the data to UTF-8. Generally,
your data loads successfully, but it may not match byte-for-byte what you
expect. To avoid this, specify the correct encoding by using the --encoding
flag. If BigQuery can't convert a character other than the ASCII 0 character,
BigQuery converts the character to the standard Unicode replacement character:
�.
Corresponds to the JSON property encoding
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5249
def encoding
  @encoding
end
#field_delimiter ⇒ String
Optional. The separator character for fields in a CSV file. The separator is
interpreted as a single byte. For files encoded in ISO-8859-1, any single
character can be used as a separator. For files encoded in UTF-8, characters
represented in decimal range 1-127 (U+0001-U+007F) can be used without any
modification. UTF-8 characters encoded with multiple bytes (i.e. U+0080 and
above) will have only the first byte used for separating fields. The remaining
bytes will be treated as a part of the field. BigQuery also supports the
escape sequence "\t" (U+0009) to specify a tab separator. The default value is
comma (",", U+002C).
Corresponds to the JSON property fieldDelimiter
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5262
def field_delimiter
  @field_delimiter
end
#file_set_spec_type ⇒ String
Optional. Specifies how source URIs are interpreted for constructing the file
set to load. By default, source URIs are expanded against the underlying
storage. You can also specify manifest files to control how the file set is
constructed. This option is only applicable to object storage systems.
Corresponds to the JSON property fileSetSpecType
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5270
def file_set_spec_type
  @file_set_spec_type
end
#hive_partitioning_options ⇒ Google::Apis::BigqueryV2::HivePartitioningOptions
Options for configuring hive partitioning detection.
Corresponds to the JSON property hivePartitioningOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5275
def hive_partitioning_options
  @hive_partitioning_options
end
#ignore_unknown_values ⇒ Boolean Also known as: ignore_unknown_values?
Optional. Indicates if BigQuery should allow extra values that are not
represented in the table schema. If true, the extra values are ignored. If
false, records with extra columns are treated as bad records, and if there are
too many bad records, an invalid error is returned in the job result. The
default value is false. The sourceFormat property determines what BigQuery
treats as an extra value: * CSV: trailing columns. * JSON: named values that
don't match any column names in the table schema. * Avro, Parquet, ORC: fields
in the file schema that don't exist in the table schema.
Corresponds to the JSON property ignoreUnknownValues
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5287
def ignore_unknown_values
  @ignore_unknown_values
end
#json_extension ⇒ String
Optional. Load option to be used together with source_format newline-delimited
JSON to indicate that a variant of JSON is being loaded. To load newline-
delimited GeoJSON, specify GEOJSON (and source_format must be set to
NEWLINE_DELIMITED_JSON).
Corresponds to the JSON property jsonExtension
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5296
def json_extension
  @json_extension
end
#max_bad_records ⇒ Fixnum
Optional. The maximum number of bad records that BigQuery can ignore when
running the job. If the number of bad records exceeds this value, an invalid
error is returned in the job result. The default value is 0, which requires
that all records are valid. This is only supported for CSV and
NEWLINE_DELIMITED_JSON file formats.
Corresponds to the JSON property maxBadRecords
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5305
def max_bad_records
  @max_bad_records
end
#null_marker ⇒ String
Optional. Specifies a string that represents a null value in a CSV file. For
example, if you specify "\N", BigQuery interprets "\N" as a null value when
loading a CSV file. The default value is the empty string. If you set this
property to a custom value, BigQuery throws an error if an empty string is
present for all data types except for STRING and BYTE. For STRING and BYTE
columns, BigQuery interprets the empty string as an empty value.
Corresponds to the JSON property nullMarker
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5315
def null_marker
  @null_marker
end
#null_markers ⇒ Array<String>
Optional. A list of strings that are interpreted as SQL NULL values in a CSV
file. null_marker and null_markers can't be set at the same time: if one is
set, the other must be unset, and setting both results in a user error. Any
string listed in null_markers, including the empty string, is interpreted as
SQL NULL. This applies to all column types.
Corresponds to the JSON property nullMarkers
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5326
def null_markers
  @null_markers
end
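Since null_marker and null_markers are mutually exclusive, client code may want to check this before submitting the job. A minimal sketch, assuming nothing beyond the documented constraint (the helper name is hypothetical, not part of the gem):

```ruby
# Hypothetical pre-submit check mirroring the documented constraint:
# setting both null_marker and null_markers is a user error. Returns the
# effective list of null markers.
def check_null_marker_options!(null_marker, null_markers)
  if null_marker && null_markers && !null_markers.empty?
    raise ArgumentError, "null_marker and null_markers can't be set at the same time"
  end
  null_markers || (null_marker ? [null_marker] : [])
end

p check_null_marker_options!("\\N", nil)      # => ["\\N"]
p check_null_marker_options!(nil, ["", "NA"]) # => ["", "NA"]
```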
#parquet_options ⇒ Google::Apis::BigqueryV2::ParquetOptions
Parquet options for load and external table creation.
Corresponds to the JSON property parquetOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5331
def parquet_options
  @parquet_options
end
#preserve_ascii_control_characters ⇒ Boolean Also known as: preserve_ascii_control_characters?
Optional. When sourceFormat is set to "CSV", this indicates whether the
embedded ASCII control characters (the first 32 characters in the ASCII-table,
from '\x00' to '\x1F') are preserved.
Corresponds to the JSON property preserveAsciiControlCharacters
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5338
def preserve_ascii_control_characters
  @preserve_ascii_control_characters
end
#projection_fields ⇒ Array<String>
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity
properties to load into BigQuery from a Cloud Datastore backup. Property names
are case sensitive and must be top-level properties. If no properties are
specified, BigQuery loads all properties. If any named property isn't found in
the Cloud Datastore backup, an invalid error is returned in the job result.
Corresponds to the JSON property projectionFields
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5348
def projection_fields
  @projection_fields
end
#quote ⇒ String
Optional. The value that is used to quote data sections in a CSV file.
BigQuery converts the string to ISO-8859-1 encoding, and then uses the first
byte of the encoded string to split the data in its raw, binary state. The
default value is a double-quote ('"'). If your data does not contain quoted
sections, set the property value to an empty string. If your data contains
quoted newline characters, you must also set the allowQuotedNewlines property
to true. To include the specific quote character within a quoted value,
precede it with an additional matching quote character. For example, to escape
the default character '"', use '""'.
Corresponds to the JSON property quote
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5361
def quote
  @quote
end
#range_partitioning ⇒ Google::Apis::BigqueryV2::RangePartitioning
Range partitioning specification for the destination table. Only one of
timePartitioning and rangePartitioning should be specified.
Corresponds to the JSON property rangePartitioning
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5367
def range_partitioning
  @range_partitioning
end
#reference_file_schema_uri ⇒ String
Optional. The user can provide a reference file with the reader schema. This
file is only loaded if it is part of source URIs, but is not loaded otherwise.
It is enabled for the following formats: AVRO, PARQUET, ORC.
Corresponds to the JSON property referenceFileSchemaUri
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5374
def reference_file_schema_uri
  @reference_file_schema_uri
end
#schema ⇒ Google::Apis::BigqueryV2::TableSchema
Schema of a table
Corresponds to the JSON property schema
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5379
def schema
  @schema
end
#schema_inline ⇒ String
[Deprecated] The inline schema. For CSV schemas, specify as "Field1:Type1[,
Field2:Type2]*". For example, "foo:STRING, bar:INTEGER, baz:FLOAT".
Corresponds to the JSON property schemaInline
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5385
def schema_inline
  @schema_inline
end
#schema_inline_format ⇒ String
[Deprecated] The format of the schemaInline property.
Corresponds to the JSON property schemaInlineFormat
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5390
def schema_inline_format
  @schema_inline_format
end
#schema_update_options ⇒ Array<String>
Allows the schema of the destination table to be updated as a side effect of
the load job if a schema is autodetected or supplied in the job configuration.
Schema update options are supported in three cases: when writeDisposition is
WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE_DATA; and when
writeDisposition is WRITE_TRUNCATE and the destination table is a partition of
a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE
will always overwrite the schema. One or more of the following values are
specified: * ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema.
* ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original
schema to nullable.
Corresponds to the JSON property schemaUpdateOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5404
def schema_update_options
  @schema_update_options
end
#skip_leading_rows ⇒ Fixnum
Optional. The number of rows at the top of a CSV file that BigQuery will skip
when loading the data. The default value is 0. This property is useful if you
have header rows in the file that should be skipped. When autodetect is on,
the behavior is the following: * skipLeadingRows unspecified - Autodetect
tries to detect headers in the first row. If they are not detected, the row is
read as data. Otherwise data is read starting from the second row. *
skipLeadingRows is 0 - Instructs autodetect that there are no headers and data
should be read starting from the first row. * skipLeadingRows = N > 0 -
Autodetect skips N-1 rows and tries to detect headers in row N. If headers are
not detected, row N is just skipped. Otherwise row N is used to extract column
names for the detected schema.
Corresponds to the JSON property skipLeadingRows
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5419
def skip_leading_rows
  @skip_leading_rows
end
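The autodetect interaction described above can be sketched as a small decision function; the function and key names are illustrative, not part of the API:

```ruby
# Which row autodetect inspects for headers, and where data starts, for
# a given skipLeadingRows setting (nil means unspecified), following the
# three documented cases.
def autodetect_plan(skip_leading_rows)
  case skip_leading_rows
  when nil
    # Autodetect tries to detect headers in the first row; data starts at
    # row 1 if no headers are detected, otherwise at row 2.
    { detect_headers_in_row: 1, data_starts_at: "row 1 or 2" }
  when 0
    # No headers: data is read starting from the first row.
    { detect_headers_in_row: nil, data_starts_at: "row 1" }
  else
    n = skip_leading_rows
    # Skip N-1 rows, then try to detect headers in row N.
    { skipped_rows: n - 1, detect_headers_in_row: n, data_starts_at: "row #{n + 1}" }
  end
end

p autodetect_plan(0)
p autodetect_plan(3)  # skips rows 1-2, tries to detect headers in row 3
```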
#source_column_match ⇒ String
Optional. Controls the strategy used to match loaded columns to the schema. If
not set, a sensible default is chosen based on how the schema is provided. If
autodetect is used, then columns are matched by name. Otherwise, columns are
matched by position. This is done to keep the behavior backward-compatible.
Corresponds to the JSON property sourceColumnMatch
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5427
def source_column_match
  @source_column_match
end
#source_format ⇒ String
Optional. The format of the data files. For CSV files, specify "CSV". For
datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON,
specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". For parquet,
specify "PARQUET". For orc, specify "ORC". The default value is CSV.
Corresponds to the JSON property sourceFormat
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5435
def source_format
  @source_format
end
#source_uris ⇒ Array<String>
[Required] The fully-qualified URIs that point to your data in Google Cloud.
For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character
and it must come after the 'bucket' name. Size limits related to load jobs
apply to external data sources. For Google Cloud Bigtable URIs: Exactly one
URI can be specified and it has to be a fully specified and valid HTTPS URL
for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly
one URI can be specified. Also, the '*' wildcard character is not allowed.
Corresponds to the JSON property sourceUris
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5446
def source_uris
  @source_uris
end
#time_format ⇒ String
Optional. Date format used for parsing TIME values.
Corresponds to the JSON property timeFormat
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5451
def time_format
  @time_format
end
#time_partitioning ⇒ Google::Apis::BigqueryV2::TimePartitioning
Time-based partitioning specification for the destination table. Only one of
timePartitioning and rangePartitioning should be specified.
Corresponds to the JSON property timePartitioning
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5457
def time_partitioning
  @time_partitioning
end
#time_zone ⇒ String
Optional. Default time zone that will apply when parsing timestamp values that
have no specific time zone.
Corresponds to the JSON property timeZone
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5463
def time_zone
  @time_zone
end
#timestamp_format ⇒ String
Optional. Date format used for parsing TIMESTAMP values.
Corresponds to the JSON property timestampFormat
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5468
def timestamp_format
  @timestamp_format
end
#timestamp_target_precision ⇒ Array<Fixnum>
Precisions (maximum number of total digits in base 10) for seconds of
TIMESTAMP types that are allowed to the destination table for autodetection
mode. Available for the formats: CSV, PARQUET, and AVRO. Possible values
include: * Not specified, [], or [6]: timestamp(6) for all auto detected
TIMESTAMP columns. * [6, 12]: timestamp(6) for all auto detected TIMESTAMP
columns that have less than 6 digits of subseconds, and timestamp(12) for all
auto detected TIMESTAMP columns that have more than 6 digits of subseconds. *
[12]: timestamp(12) for all auto detected TIMESTAMP columns. The order of the
elements in this array is ignored. Inputs that have higher precision than the
highest target precision in this array will be truncated.
Corresponds to the JSON property timestampTargetPrecision
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5482
def timestamp_target_precision
  @timestamp_target_precision
end
#use_avro_logical_types ⇒ Boolean Also known as: use_avro_logical_types?
Optional. If sourceFormat is set to "AVRO", indicates whether to interpret
logical types as the corresponding BigQuery data type (for example, TIMESTAMP),
instead of using the raw type (for example, INTEGER).
Corresponds to the JSON property useAvroLogicalTypes
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5489
def use_avro_logical_types
  @use_avro_logical_types
end
#write_disposition ⇒ String
Optional. Specifies the action that occurs if the destination table already
exists. The following values are supported: * WRITE_TRUNCATE: If the table
already exists, BigQuery overwrites the data, removes the constraints and uses
the schema from the load job. * WRITE_TRUNCATE_DATA: If the table already
exists, BigQuery overwrites the data, but keeps the constraints and schema of
the existing table. * WRITE_APPEND: If the table already exists, BigQuery
appends the data to the table. * WRITE_EMPTY: If the table already exists and
contains data, a 'duplicate' error is returned in the job result. The default
value is WRITE_APPEND. Each action is atomic and only occurs if BigQuery is
able to complete the job successfully. Creation, truncation and append actions
occur as one atomic update upon job completion.
Corresponds to the JSON property writeDisposition
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5505
def write_disposition
  @write_disposition
end
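Taken together with createDisposition, the outcome for an existing or missing table can be sketched as follows. This is a simplification for illustration (the real job also enforces atomicity and other constraints), and the function and symbol names are made up:

```ruby
# Simplified outcome table for create/write dispositions. table_state is
# :missing, :empty, or :has_data.
def load_outcome(table_state, create_disposition: "CREATE_IF_NEEDED",
                 write_disposition: "WRITE_APPEND")
  if table_state == :missing
    # CREATE_NEVER on a missing table yields a 'notFound' error.
    return create_disposition == "CREATE_IF_NEEDED" ? :create_and_load : :not_found_error
  end
  case write_disposition
  when "WRITE_TRUNCATE"      then :overwrite_data_and_schema
  when "WRITE_TRUNCATE_DATA" then :overwrite_data_keep_schema
  when "WRITE_APPEND"        then :append
  when "WRITE_EMPTY"
    # 'duplicate' error only if the table already contains data.
    table_state == :empty ? :append : :duplicate_error
  end
end

p load_outcome(:missing, create_disposition: "CREATE_NEVER")  # => :not_found_error
p load_outcome(:has_data, write_disposition: "WRITE_EMPTY")   # => :duplicate_error
```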
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5512
def update!(**args)
  @allow_jagged_rows = args[:allow_jagged_rows] if args.key?(:allow_jagged_rows)
  @allow_quoted_newlines = args[:allow_quoted_newlines] if args.key?(:allow_quoted_newlines)
  @autodetect = args[:autodetect] if args.key?(:autodetect)
  @clustering = args[:clustering] if args.key?(:clustering)
  @column_name_character_map = args[:column_name_character_map] if args.key?(:column_name_character_map)
  @connection_properties = args[:connection_properties] if args.key?(:connection_properties)
  @copy_files_only = args[:copy_files_only] if args.key?(:copy_files_only)
  @create_disposition = args[:create_disposition] if args.key?(:create_disposition)
  @create_session = args[:create_session] if args.key?(:create_session)
  @date_format = args[:date_format] if args.key?(:date_format)
  @datetime_format = args[:datetime_format] if args.key?(:datetime_format)
  @decimal_target_types = args[:decimal_target_types] if args.key?(:decimal_target_types)
  @destination_encryption_configuration = args[:destination_encryption_configuration] if args.key?(:destination_encryption_configuration)
  @destination_table = args[:destination_table] if args.key?(:destination_table)
  @destination_table_properties = args[:destination_table_properties] if args.key?(:destination_table_properties)
  @encoding = args[:encoding] if args.key?(:encoding)
  @field_delimiter = args[:field_delimiter] if args.key?(:field_delimiter)
  @file_set_spec_type = args[:file_set_spec_type] if args.key?(:file_set_spec_type)
  @hive_partitioning_options = args[:hive_partitioning_options] if args.key?(:hive_partitioning_options)
  @ignore_unknown_values = args[:ignore_unknown_values] if args.key?(:ignore_unknown_values)
  @json_extension = args[:json_extension] if args.key?(:json_extension)
  @max_bad_records = args[:max_bad_records] if args.key?(:max_bad_records)
  @null_marker = args[:null_marker] if args.key?(:null_marker)
  @null_markers = args[:null_markers] if args.key?(:null_markers)
  @parquet_options = args[:parquet_options] if args.key?(:parquet_options)
  @preserve_ascii_control_characters = args[:preserve_ascii_control_characters] if args.key?(:preserve_ascii_control_characters)
  @projection_fields = args[:projection_fields] if args.key?(:projection_fields)
  @quote = args[:quote] if args.key?(:quote)
  @range_partitioning = args[:range_partitioning] if args.key?(:range_partitioning)
  @reference_file_schema_uri = args[:reference_file_schema_uri] if args.key?(:reference_file_schema_uri)
  @schema = args[:schema] if args.key?(:schema)
  @schema_inline = args[:schema_inline] if args.key?(:schema_inline)
  @schema_inline_format = args[:schema_inline_format] if args.key?(:schema_inline_format)
  @schema_update_options = args[:schema_update_options] if args.key?(:schema_update_options)
  @skip_leading_rows = args[:skip_leading_rows] if args.key?(:skip_leading_rows)
  @source_column_match = args[:source_column_match] if args.key?(:source_column_match)
  @source_format = args[:source_format] if args.key?(:source_format)
  @source_uris = args[:source_uris] if args.key?(:source_uris)
  @time_format = args[:time_format] if args.key?(:time_format)
  @time_partitioning = args[:time_partitioning] if args.key?(:time_partitioning)
  @time_zone = args[:time_zone] if args.key?(:time_zone)
  @timestamp_format = args[:timestamp_format] if args.key?(:timestamp_format)
  @timestamp_target_precision = args[:timestamp_target_precision] if args.key?(:timestamp_target_precision)
  @use_avro_logical_types = args[:use_avro_logical_types] if args.key?(:use_avro_logical_types)
  @write_disposition = args[:write_disposition] if args.key?(:write_disposition)
end