Class: Google::Apis::BigqueryV2::JobConfigurationLoad
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  - lib/google/apis/bigquery_v2/classes.rb
  - lib/google/apis/bigquery_v2/representations.rb
Overview
JobConfigurationLoad contains the configuration properties for loading data into a destination table.
Instance Attribute Summary collapse
-
#allow_jagged_rows ⇒ Boolean
(also: #allow_jagged_rows?)
Optional.
-
#allow_quoted_newlines ⇒ Boolean
(also: #allow_quoted_newlines?)
Indicates if BigQuery should allow quoted data sections that contain newline characters in a CSV file.
-
#autodetect ⇒ Boolean
(also: #autodetect?)
Optional.
-
#clustering ⇒ Google::Apis::BigqueryV2::Clustering
Configures table clustering.
-
#column_name_character_map ⇒ String
Optional.
-
#connection_properties ⇒ Array<Google::Apis::BigqueryV2::ConnectionProperty>
Optional.
-
#copy_files_only ⇒ Boolean
(also: #copy_files_only?)
Optional.
-
#create_disposition ⇒ String
Optional.
-
#create_session ⇒ Boolean
(also: #create_session?)
Optional.
-
#date_format ⇒ String
Optional.
-
#datetime_format ⇒ String
Optional.
-
#decimal_target_types ⇒ Array<String>
Defines the list of possible SQL data types to which the source decimal values are converted.
-
#destination_encryption_configuration ⇒ Google::Apis::BigqueryV2::EncryptionConfiguration
Configuration for Cloud KMS encryption settings.
-
#destination_table ⇒ Google::Apis::BigqueryV2::TableReference
[Required] The destination table to load the data into.
-
#destination_table_properties ⇒ Google::Apis::BigqueryV2::DestinationTableProperties
Properties for the destination table.
-
#encoding ⇒ String
Optional.
-
#field_delimiter ⇒ String
Optional.
-
#file_set_spec_type ⇒ String
Optional.
-
#hive_partitioning_options ⇒ Google::Apis::BigqueryV2::HivePartitioningOptions
Options for configuring hive partitioning detection.
-
#ignore_unknown_values ⇒ Boolean
(also: #ignore_unknown_values?)
Optional.
-
#json_extension ⇒ String
Optional.
-
#max_bad_records ⇒ Fixnum
Optional.
-
#null_marker ⇒ String
Optional.
-
#null_markers ⇒ Array<String>
Optional.
-
#parquet_options ⇒ Google::Apis::BigqueryV2::ParquetOptions
Parquet options for loading data and creating external tables.
-
#preserve_ascii_control_characters ⇒ Boolean
(also: #preserve_ascii_control_characters?)
Optional.
-
#projection_fields ⇒ Array<String>
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity properties to load into BigQuery from a Cloud Datastore backup.
-
#quote ⇒ String
Optional.
-
#range_partitioning ⇒ Google::Apis::BigqueryV2::RangePartitioning
Range partitioning specification for the destination table.
-
#reference_file_schema_uri ⇒ String
Optional.
-
#schema ⇒ Google::Apis::BigqueryV2::TableSchema
Schema of a table.
-
#schema_inline ⇒ String
[Deprecated] The inline schema.
-
#schema_inline_format ⇒ String
[Deprecated] The format of the schemaInline property.
-
#schema_update_options ⇒ Array<String>
Allows the schema of the destination table to be updated as a side effect of the load job if a schema is autodetected or supplied in the job configuration.
-
#skip_leading_rows ⇒ Fixnum
Optional.
-
#source_column_match ⇒ String
Optional.
-
#source_format ⇒ String
Optional.
-
#source_uris ⇒ Array<String>
[Required] The fully-qualified URIs that point to your data in Google Cloud.
-
#time_format ⇒ String
Optional.
-
#time_partitioning ⇒ Google::Apis::BigqueryV2::TimePartitioning
Time-based partitioning specification for the destination table.
-
#time_zone ⇒ String
Optional.
-
#timestamp_format ⇒ String
Optional.
-
#timestamp_target_precision ⇒ Array<Fixnum>
Precisions (maximum number of total digits in base 10) for seconds of TIMESTAMP types that are allowed to the destination table for autodetection mode.
-
#use_avro_logical_types ⇒ Boolean
(also: #use_avro_logical_types?)
Optional.
-
#write_disposition ⇒ String
Optional.
Instance Method Summary collapse
-
#initialize(**args) ⇒ JobConfigurationLoad
constructor
A new instance of JobConfigurationLoad.
-
#update!(**args) ⇒ Object
Update properties of this object.
Constructor Details
#initialize(**args) ⇒ JobConfigurationLoad
Returns a new instance of JobConfigurationLoad.
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5267

def initialize(**args)
  update!(**args)
end
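The constructor simply forwards its keyword arguments to `update!`, which copies only the keys that were actually passed. A minimal standalone sketch of this pattern (illustrative only, not the gem's code; the two attributes shown are a hypothetical subset):

```ruby
# Sketch of the **args constructor pattern used by JobConfigurationLoad:
# initialize delegates to update!, and update! assigns only keys present
# in args, so unset attributes keep their previous values.
class LoadConfigSketch
  attr_accessor :source_format, :write_disposition

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @source_format = args[:source_format] if args.key?(:source_format)
    @write_disposition = args[:write_disposition] if args.key?(:write_disposition)
  end
end

cfg = LoadConfigSketch.new(source_format: 'CSV')
cfg.update!(write_disposition: 'WRITE_APPEND')  # source_format is untouched
```

Because assignment is guarded by `args.key?`, passing an explicit `nil` still overwrites the attribute, while omitting the key leaves it alone.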
Instance Attribute Details
#allow_jagged_rows ⇒ Boolean Also known as: allow_jagged_rows?
Optional. Accept rows that are missing trailing optional columns. The missing
values are treated as nulls. If false, records with missing trailing columns
are treated as bad records, and if there are too many bad records, an invalid
error is returned in the job result. The default value is false. Only
applicable to CSV, ignored for other formats.
Corresponds to the JSON property allowJaggedRows
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4878

def allow_jagged_rows
  @allow_jagged_rows
end
#allow_quoted_newlines ⇒ Boolean Also known as: allow_quoted_newlines?
Indicates if BigQuery should allow quoted data sections that contain newline
characters in a CSV file. The default value is false.
Corresponds to the JSON property allowQuotedNewlines
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4885

def allow_quoted_newlines
  @allow_quoted_newlines
end
#autodetect ⇒ Boolean Also known as: autodetect?
Optional. Indicates if we should automatically infer the options and schema
for CSV and JSON sources.
Corresponds to the JSON property autodetect
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4892

def autodetect
  @autodetect
end
#clustering ⇒ Google::Apis::BigqueryV2::Clustering
Configures table clustering.
Corresponds to the JSON property clustering
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4898

def clustering
  @clustering
end
#column_name_character_map ⇒ String
Optional. Character map supported for column names in CSV/Parquet loads.
Defaults to STRICT and can be overridden by Project Config Service. Using this
option with unsupported load formats will result in an error.
Corresponds to the JSON property columnNameCharacterMap
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4905

def column_name_character_map
  @column_name_character_map
end
#connection_properties ⇒ Array<Google::Apis::BigqueryV2::ConnectionProperty>
Optional. Connection properties which can modify the load job behavior.
Currently, only the 'session_id' connection property is supported, and is used
to resolve _SESSION appearing as the dataset id.
Corresponds to the JSON property connectionProperties
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4912

def connection_properties
  @connection_properties
end
#copy_files_only ⇒ Boolean Also known as: copy_files_only?
Optional. [Experimental] Configures the load job to copy files directly to the
destination BigLake managed table, bypassing file content reading and
rewriting. Copying files only is supported when all the following are true:
- source_uris are located in the same Cloud Storage location as the destination table's storage_uri location.
- source_format is PARQUET.
- destination_table is an existing BigLake managed table.
- The table's schema does not have flexible column names.
- The table's columns do not have type parameters other than precision and scale.
- No options other than the above are specified.
Corresponds to the JSON property copyFilesOnly
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4925

def copy_files_only
  @copy_files_only
end
#create_disposition ⇒ String
Optional. Specifies whether the job is allowed to create new tables. The
following values are supported:
- CREATE_IF_NEEDED: If the table does not exist, BigQuery creates the table.
- CREATE_NEVER: The table must already exist. If it does not, a 'notFound' error is returned in the job result.
The default value is CREATE_IF_NEEDED. Creation, truncation and append actions
occur as one atomic update upon job completion.
Corresponds to the JSON property createDisposition
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4936

def create_disposition
  @create_disposition
end
#create_session ⇒ Boolean Also known as: create_session?
Optional. If this property is true, the job creates a new session using a
randomly generated session_id. To continue using a created session with
subsequent queries, pass the existing session identifier as a
ConnectionProperty value. The session identifier is returned as part of the
SessionInfo message within the query statistics. The new session's location
will be set to Job.JobReference.location if it is present, otherwise it's
set to the default location based on existing routing logic.
Corresponds to the JSON property createSession
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4947

def create_session
  @create_session
end
#date_format ⇒ String
Optional. Date format used for parsing DATE values.
Corresponds to the JSON property dateFormat
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4953

def date_format
  @date_format
end
#datetime_format ⇒ String
Optional. Date format used for parsing DATETIME values.
Corresponds to the JSON property datetimeFormat
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4958

def datetime_format
  @datetime_format
end
#decimal_target_types ⇒ Array<String>
Defines the list of possible SQL data types to which the source decimal values
are converted. This list and the precision and the scale parameters of the
decimal field determine the target type. In the order of NUMERIC, BIGNUMERIC,
and STRING, a type is picked if it is in the specified list and if it supports
the precision and the scale. STRING supports all precision and scale values.
If none of the listed types supports the precision and the scale, the type
supporting the widest range in the specified list is picked, and if a value
exceeds the supported range when reading the data, an error will be thrown.
Example: suppose the value of this field is ["NUMERIC", "BIGNUMERIC"]. If
(precision,scale) is:
- (38,9) -> NUMERIC
- (39,9) -> BIGNUMERIC (NUMERIC cannot hold 30 integer digits)
- (38,10) -> BIGNUMERIC (NUMERIC cannot hold 10 fractional digits)
- (76,38) -> BIGNUMERIC
- (77,38) -> BIGNUMERIC (error if value exceeds supported range)
This field cannot contain duplicate types. The order of the types in this
field is ignored. For example, ["BIGNUMERIC", "NUMERIC"] is the same as
["NUMERIC", "BIGNUMERIC"] and NUMERIC always takes precedence over BIGNUMERIC.
Defaults to ["NUMERIC", "STRING"] for ORC and ["NUMERIC"] for the other file
formats.
Corresponds to the JSON property decimalTargetTypes
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4979

def decimal_target_types
  @decimal_target_types
end
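The selection rule above can be sketched as a small helper. This is a hypothetical illustration, not a gem API; the precision/scale bounds encoded here (NUMERIC: at most 9 fractional and 29 integer digits; BIGNUMERIC: at most 38 of each) are assumptions derived from the examples in the description.

```ruby
# Hypothetical helper illustrating decimalTargetTypes selection: in the
# fixed order NUMERIC, BIGNUMERIC, STRING, pick the first listed type
# that supports the field's (precision, scale); otherwise fall back to
# the widest-range listed type (reads may still error on overflow).
def pick_decimal_target_type(listed_types, precision, scale)
  supports = {
    # NUMERIC: scale <= 9, integer digits (precision - scale) <= 29
    'NUMERIC'    => ->(p, s) { s <= 9 && (p - s) <= 29 },
    # BIGNUMERIC: scale <= 38, integer digits <= 38
    'BIGNUMERIC' => ->(p, s) { s <= 38 && (p - s) <= 38 },
    # STRING supports all precision and scale values
    'STRING'     => ->(_p, _s) { true }
  }
  # Array#& keeps the receiver's order, so this enforces the fixed order
  # regardless of how listed_types is ordered.
  ordered = %w[NUMERIC BIGNUMERIC STRING] & listed_types
  ordered.find { |t| supports[t].call(precision, scale) } || ordered.last
end
```

Run against the documented examples with `["NUMERIC", "BIGNUMERIC"]`, this reproduces the (38,9) -> NUMERIC, (39,9) -> BIGNUMERIC, and (77,38) -> BIGNUMERIC fallback cases.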
#destination_encryption_configuration ⇒ Google::Apis::BigqueryV2::EncryptionConfiguration
Configuration for Cloud KMS encryption settings.
Corresponds to the JSON property destinationEncryptionConfiguration
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4984

def destination_encryption_configuration
  @destination_encryption_configuration
end
#destination_table ⇒ Google::Apis::BigqueryV2::TableReference
[Required] The destination table to load the data into.
Corresponds to the JSON property destinationTable
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4989

def destination_table
  @destination_table
end
#destination_table_properties ⇒ Google::Apis::BigqueryV2::DestinationTableProperties
Properties for the destination table.
Corresponds to the JSON property destinationTableProperties
# File 'lib/google/apis/bigquery_v2/classes.rb', line 4994

def destination_table_properties
  @destination_table_properties
end
#encoding ⇒ String
Optional. The character encoding of the data. The supported values are UTF-8,
ISO-8859-1, UTF-16BE, UTF-16LE, UTF-32BE, and UTF-32LE. The default value is
UTF-8. BigQuery decodes the data after the raw, binary data has been split
using the values of the quote and fieldDelimiter properties. If you don't
specify an encoding, or if you specify a UTF-8 encoding when the CSV file is
not UTF-8 encoded, BigQuery attempts to convert the data to UTF-8. Generally,
your data loads successfully, but it may not match byte-for-byte what you
expect. To avoid this, specify the correct encoding by using the --encoding
flag. If BigQuery can't convert a character other than the ASCII 0 character,
BigQuery converts the character to the standard Unicode replacement character:
�.
Corresponds to the JSON property encoding
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5009

def encoding
  @encoding
end
#field_delimiter ⇒ String
Optional. The separator character for fields in a CSV file. The separator is
interpreted as a single byte. For files encoded in ISO-8859-1, any single
character can be used as a separator. For files encoded in UTF-8, characters
represented in decimal range 1-127 (U+0001-U+007F) can be used without any
modification. UTF-8 characters encoded with multiple bytes (i.e. U+0080 and
above) will have only the first byte used for separating fields. The remaining
bytes will be treated as a part of the field. BigQuery also supports the
escape sequence "\t" (U+0009) to specify a tab separator. The default value is
comma (",", U+002C).
Corresponds to the JSON property fieldDelimiter
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5022

def field_delimiter
  @field_delimiter
end
#file_set_spec_type ⇒ String
Optional. Specifies how source URIs are interpreted for constructing the file
set to load. By default, source URIs are expanded against the underlying
storage. You can also specify manifest files to control how the file set is
constructed. This option is only applicable to object storage systems.
Corresponds to the JSON property fileSetSpecType
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5030

def file_set_spec_type
  @file_set_spec_type
end
#hive_partitioning_options ⇒ Google::Apis::BigqueryV2::HivePartitioningOptions
Options for configuring hive partitioning detection.
Corresponds to the JSON property hivePartitioningOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5035

def hive_partitioning_options
  @hive_partitioning_options
end
#ignore_unknown_values ⇒ Boolean Also known as: ignore_unknown_values?
Optional. Indicates if BigQuery should allow extra values that are not
represented in the table schema. If true, the extra values are ignored. If
false, records with extra columns are treated as bad records, and if there are
too many bad records, an invalid error is returned in the job result. The
default value is false. The sourceFormat property determines what BigQuery
treats as an extra value:
- CSV: Trailing columns.
- JSON: Named values that don't match any column names in the table schema.
- Avro, Parquet, ORC: Fields in the file schema that don't exist in the table schema.
Corresponds to the JSON property ignoreUnknownValues
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5047

def ignore_unknown_values
  @ignore_unknown_values
end
#json_extension ⇒ String
Optional. Load option to be used together with source_format newline-delimited
JSON to indicate that a variant of JSON is being loaded. To load newline-
delimited GeoJSON, specify GEOJSON (and source_format must be set to
NEWLINE_DELIMITED_JSON).
Corresponds to the JSON property jsonExtension
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5056

def json_extension
  @json_extension
end
#max_bad_records ⇒ Fixnum
Optional. The maximum number of bad records that BigQuery can ignore when
running the job. If the number of bad records exceeds this value, an invalid
error is returned in the job result. The default value is 0, which requires
that all records are valid. This is only supported for CSV and
NEWLINE_DELIMITED_JSON file formats.
Corresponds to the JSON property maxBadRecords
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5065

def max_bad_records
  @max_bad_records
end
#null_marker ⇒ String
Optional. Specifies a string that represents a null value in a CSV file. For
example, if you specify "\N", BigQuery interprets "\N" as a null value when
loading a CSV file. The default value is the empty string. If you set this
property to a custom value, BigQuery throws an error if an empty string is
present for all data types except for STRING and BYTE. For STRING and BYTE
columns, BigQuery interprets the empty string as an empty value.
Corresponds to the JSON property nullMarker
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5075

def null_marker
  @null_marker
end
#null_markers ⇒ Array<String>
Optional. A list of strings that represent a SQL NULL value in a CSV file.
null_marker and null_markers can't be set at the same time; if both are set,
a user error is thrown. Any string listed in null_markers, including the
empty string, is interpreted as SQL NULL. This applies to all column types.
Corresponds to the JSON property nullMarkers
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5086

def null_markers
  @null_markers
end
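The mutual-exclusion rule is enforced server-side, but it can be checked before submitting a job. A hypothetical client-side guard (the function name and signature are made up for this sketch, not a gem API):

```ruby
# Hypothetical pre-submit check mirroring the server rule: null_marker
# (a single string) and null_markers (a list) cannot both be set.
def validate_null_marker_options!(null_marker: nil, null_markers: nil)
  if null_marker && null_markers
    raise ArgumentError, 'null_marker and null_markers cannot both be set'
  end
  true
end

validate_null_marker_options!(null_marker: '\N')          # ok
validate_null_marker_options!(null_markers: ['\N', 'NA']) # ok
```

Note that an empty string inside null_markers is still a set value, and per the description it is interpreted as SQL NULL for all column types.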
#parquet_options ⇒ Google::Apis::BigqueryV2::ParquetOptions
Parquet options for loading data and creating external tables.
Corresponds to the JSON property parquetOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5091

def parquet_options
  @parquet_options
end
#preserve_ascii_control_characters ⇒ Boolean Also known as: preserve_ascii_control_characters?
Optional. When sourceFormat is set to "CSV", this indicates whether the
embedded ASCII control characters (the first 32 characters in the ASCII-table,
from '\x00' to '\x1F') are preserved.
Corresponds to the JSON property preserveAsciiControlCharacters
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5098

def preserve_ascii_control_characters
  @preserve_ascii_control_characters
end
#projection_fields ⇒ Array<String>
If sourceFormat is set to "DATASTORE_BACKUP", indicates which entity
properties to load into BigQuery from a Cloud Datastore backup. Property names
are case sensitive and must be top-level properties. If no properties are
specified, BigQuery loads all properties. If any named property isn't found in
the Cloud Datastore backup, an invalid error is returned in the job result.
Corresponds to the JSON property projectionFields
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5108

def projection_fields
  @projection_fields
end
#quote ⇒ String
Optional. The value that is used to quote data sections in a CSV file.
BigQuery converts the string to ISO-8859-1 encoding, and then uses the first
byte of the encoded string to split the data in its raw, binary state. The
default value is a double-quote ('"'). If your data does not contain quoted
sections, set the property value to an empty string. If your data contains
quoted newline characters, you must also set the allowQuotedNewlines property
to true. To include the specific quote character within a quoted value,
precede it with an additional matching quote character. For example, if you
want to escape the default character ' " ', use ' "" '.
Corresponds to the JSON property quote
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5121

def quote
  @quote
end
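The quote-doubling convention described above is standard CSV quoting, so Ruby's stdlib CSV can be used to produce data that BigQuery parses with the default quote character:

```ruby
require 'csv'

# Ruby's stdlib CSV escapes an embedded quote character by doubling it,
# the same convention the quote property describes for BigQuery's CSV
# parser with its default double-quote character.
row  = ['say "hi"', 'plain']
line = CSV.generate_line(row)   # '"say ""hi""",plain' plus a trailing newline

# Parsing the doubled form recovers the original field values.
parsed = CSV.parse_line(line)
```

If the data instead uses a nonstandard quote character, set this property accordingly, and remember that quoted newlines additionally require allowQuotedNewlines to be true.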
#range_partitioning ⇒ Google::Apis::BigqueryV2::RangePartitioning
Range partitioning specification for the destination table. Only one of
timePartitioning and rangePartitioning should be specified.
Corresponds to the JSON property rangePartitioning
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5127

def range_partitioning
  @range_partitioning
end
#reference_file_schema_uri ⇒ String
Optional. The user can provide a reference file with the reader schema. This
file is only loaded if it is part of source URIs, but is not loaded otherwise.
It is enabled for the following formats: AVRO, PARQUET, ORC.
Corresponds to the JSON property referenceFileSchemaUri
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5134

def reference_file_schema_uri
  @reference_file_schema_uri
end
#schema ⇒ Google::Apis::BigqueryV2::TableSchema
Schema of a table
Corresponds to the JSON property schema
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5139

def schema
  @schema
end
#schema_inline ⇒ String
[Deprecated] The inline schema. For CSV schemas, specify as "Field1:Type1[,
Field2:Type2]*". For example, "foo:STRING, bar:INTEGER, baz:FLOAT".
Corresponds to the JSON property schemaInline
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5145

def schema_inline
  @schema_inline
end
#schema_inline_format ⇒ String
[Deprecated] The format of the schemaInline property.
Corresponds to the JSON property schemaInlineFormat
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5150

def schema_inline_format
  @schema_inline_format
end
#schema_update_options ⇒ Array<String>
Allows the schema of the destination table to be updated as a side effect of
the load job if a schema is autodetected or supplied in the job configuration.
Schema update options are supported in three cases: when writeDisposition is
WRITE_APPEND; when writeDisposition is WRITE_TRUNCATE_DATA; and when
writeDisposition is WRITE_TRUNCATE and the destination table is a partition of
a table, specified by partition decorators. For normal tables, WRITE_TRUNCATE
will always overwrite the schema. One or more of the following values are
specified:
- ALLOW_FIELD_ADDITION: allow adding a nullable field to the schema.
- ALLOW_FIELD_RELAXATION: allow relaxing a required field in the original schema to nullable.
Corresponds to the JSON property schemaUpdateOptions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5164

def schema_update_options
  @schema_update_options
end
#skip_leading_rows ⇒ Fixnum
Optional. The number of rows at the top of a CSV file that BigQuery will skip
when loading the data. The default value is 0. This property is useful if you
have header rows in the file that should be skipped. When autodetect is on,
the behavior is the following:
- skipLeadingRows unspecified: Autodetect tries to detect headers in the first row. If they are not detected, the row is read as data. Otherwise, data is read starting from the second row.
- skipLeadingRows is 0: Instructs autodetect that there are no headers and data should be read starting from the first row.
- skipLeadingRows = N > 0: Autodetect skips N-1 rows and tries to detect headers in row N. If headers are not detected, row N is just skipped. Otherwise, row N is used to extract column names for the detected schema.
Corresponds to the JSON property skipLeadingRows
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5179

def skip_leading_rows
  @skip_leading_rows
end
#source_column_match ⇒ String
Optional. Controls the strategy used to match loaded columns to the schema. If
not set, a sensible default is chosen based on how the schema is provided. If
autodetect is used, then columns are matched by name. Otherwise, columns are
matched by position. This is done to keep the behavior backward-compatible.
Corresponds to the JSON property sourceColumnMatch
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5187

def source_column_match
  @source_column_match
end
#source_format ⇒ String
Optional. The format of the data files. For CSV files, specify "CSV". For
datastore backups, specify "DATASTORE_BACKUP". For newline-delimited JSON,
specify "NEWLINE_DELIMITED_JSON". For Avro, specify "AVRO". For parquet,
specify "PARQUET". For orc, specify "ORC". The default value is CSV.
Corresponds to the JSON property sourceFormat
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5195

def source_format
  @source_format
end
#source_uris ⇒ Array<String>
[Required] The fully-qualified URIs that point to your data in Google Cloud.
For Google Cloud Storage URIs: Each URI can contain one '*' wildcard character
and it must come after the 'bucket' name. Size limits related to load jobs
apply to external data sources. For Google Cloud Bigtable URIs: Exactly one
URI can be specified and it has to be a fully specified and valid HTTPS URL
for a Google Cloud Bigtable table. For Google Cloud Datastore backups: Exactly
one URI can be specified. Also, the '*' wildcard character is not allowed.
Corresponds to the JSON property sourceUris
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5206

def source_uris
  @source_uris
end
#time_format ⇒ String
Optional. Date format used for parsing TIME values.
Corresponds to the JSON property timeFormat
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5211

def time_format
  @time_format
end
#time_partitioning ⇒ Google::Apis::BigqueryV2::TimePartitioning
Time-based partitioning specification for the destination table. Only one of
timePartitioning and rangePartitioning should be specified.
Corresponds to the JSON property timePartitioning
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5217

def time_partitioning
  @time_partitioning
end
#time_zone ⇒ String
Optional. Default time zone that will apply when parsing timestamp values that
have no specific time zone.
Corresponds to the JSON property timeZone
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5223

def time_zone
  @time_zone
end
#timestamp_format ⇒ String
Optional. Date format used for parsing TIMESTAMP values.
Corresponds to the JSON property timestampFormat
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5228

def timestamp_format
  @timestamp_format
end
#timestamp_target_precision ⇒ Array<Fixnum>
Precisions (maximum number of total digits in base 10) for seconds of
TIMESTAMP types that are allowed to the destination table for autodetection
mode. Available for the CSV format. Possible values include:
- Not specified, [], or [6]: timestamp(6) for all auto-detected TIMESTAMP columns.
- [6, 12]: timestamp(6) for auto-detected TIMESTAMP columns that have fewer than 6 digits of subseconds; timestamp(12) for those that have more than 6 digits of subseconds.
- [12]: timestamp(12) for all auto-detected TIMESTAMP columns.
The order of the elements in this array is ignored. Inputs that have higher
precision than the highest target precision in this array will be truncated.
Corresponds to the JSON property timestampTargetPrecision
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5242

def timestamp_target_precision
  @timestamp_target_precision
end
#use_avro_logical_types ⇒ Boolean Also known as: use_avro_logical_types?
Optional. If sourceFormat is set to "AVRO", indicates whether to interpret
logical types as the corresponding BigQuery data type (for example, TIMESTAMP),
instead of using the raw type (for example, INTEGER).
Corresponds to the JSON property useAvroLogicalTypes
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5249

def use_avro_logical_types
  @use_avro_logical_types
end
#write_disposition ⇒ String
Optional. Specifies the action that occurs if the destination table already
exists. The following values are supported:
- WRITE_TRUNCATE: If the table already exists, BigQuery overwrites the data, removes the constraints, and uses the schema from the load job.
- WRITE_TRUNCATE_DATA: If the table already exists, BigQuery overwrites the data, but keeps the constraints and schema of the existing table.
- WRITE_APPEND: If the table already exists, BigQuery appends the data to the table.
- WRITE_EMPTY: If the table already exists and contains data, a 'duplicate' error is returned in the job result.
The default value is WRITE_APPEND. Each action is atomic and only occurs if
BigQuery is able to complete the job successfully. Creation, truncation and
append actions occur as one atomic update upon job completion.
Corresponds to the JSON property writeDisposition
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5265

def write_disposition
  @write_disposition
end
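Putting several of these properties together, this is roughly what the JSON body of a jobs.insert request with a load configuration looks like. The field names come from the "Corresponds to the JSON property" notes on this page; the project, dataset, table, and bucket values are made up for the example:

```ruby
require 'json'

# Illustrative load configuration for a jobs.insert request body.
# camelCase keys match the JSON property names documented above;
# 'example-project' / 'example-bucket' etc. are hypothetical values.
load_config = {
  'sourceUris'   => ['gs://example-bucket/data/*.csv'],  # hypothetical bucket
  'sourceFormat' => 'CSV',
  'destinationTable' => {
    'projectId' => 'example-project',    # hypothetical project
    'datasetId' => 'example_dataset',
    'tableId'   => 'example_table'
  },
  'createDisposition' => 'CREATE_IF_NEEDED',
  'writeDisposition'  => 'WRITE_APPEND',
  'skipLeadingRows'   => 1,
  'autodetect'        => true
}
request_body = JSON.generate('configuration' => { 'load' => load_config })
```

When using this gem instead of raw JSON, the same shape is expressed with `Google::Apis::BigqueryV2::JobConfigurationLoad.new(...)` and snake_case keyword arguments, and the representation layer handles the camelCase mapping.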
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object
# File 'lib/google/apis/bigquery_v2/classes.rb', line 5272

def update!(**args)
  @allow_jagged_rows = args[:allow_jagged_rows] if args.key?(:allow_jagged_rows)
  @allow_quoted_newlines = args[:allow_quoted_newlines] if args.key?(:allow_quoted_newlines)
  @autodetect = args[:autodetect] if args.key?(:autodetect)
  @clustering = args[:clustering] if args.key?(:clustering)
  @column_name_character_map = args[:column_name_character_map] if args.key?(:column_name_character_map)
  @connection_properties = args[:connection_properties] if args.key?(:connection_properties)
  @copy_files_only = args[:copy_files_only] if args.key?(:copy_files_only)
  @create_disposition = args[:create_disposition] if args.key?(:create_disposition)
  @create_session = args[:create_session] if args.key?(:create_session)
  @date_format = args[:date_format] if args.key?(:date_format)
  @datetime_format = args[:datetime_format] if args.key?(:datetime_format)
  @decimal_target_types = args[:decimal_target_types] if args.key?(:decimal_target_types)
  @destination_encryption_configuration = args[:destination_encryption_configuration] if args.key?(:destination_encryption_configuration)
  @destination_table = args[:destination_table] if args.key?(:destination_table)
  @destination_table_properties = args[:destination_table_properties] if args.key?(:destination_table_properties)
  @encoding = args[:encoding] if args.key?(:encoding)
  @field_delimiter = args[:field_delimiter] if args.key?(:field_delimiter)
  @file_set_spec_type = args[:file_set_spec_type] if args.key?(:file_set_spec_type)
  @hive_partitioning_options = args[:hive_partitioning_options] if args.key?(:hive_partitioning_options)
  @ignore_unknown_values = args[:ignore_unknown_values] if args.key?(:ignore_unknown_values)
  @json_extension = args[:json_extension] if args.key?(:json_extension)
  @max_bad_records = args[:max_bad_records] if args.key?(:max_bad_records)
  @null_marker = args[:null_marker] if args.key?(:null_marker)
  @null_markers = args[:null_markers] if args.key?(:null_markers)
  @parquet_options = args[:parquet_options] if args.key?(:parquet_options)
  @preserve_ascii_control_characters = args[:preserve_ascii_control_characters] if args.key?(:preserve_ascii_control_characters)
  @projection_fields = args[:projection_fields] if args.key?(:projection_fields)
  @quote = args[:quote] if args.key?(:quote)
  @range_partitioning = args[:range_partitioning] if args.key?(:range_partitioning)
  @reference_file_schema_uri = args[:reference_file_schema_uri] if args.key?(:reference_file_schema_uri)
  @schema = args[:schema] if args.key?(:schema)
  @schema_inline = args[:schema_inline] if args.key?(:schema_inline)
  @schema_inline_format = args[:schema_inline_format] if args.key?(:schema_inline_format)
  @schema_update_options = args[:schema_update_options] if args.key?(:schema_update_options)
  @skip_leading_rows = args[:skip_leading_rows] if args.key?(:skip_leading_rows)
  @source_column_match = args[:source_column_match] if args.key?(:source_column_match)
  @source_format = args[:source_format] if args.key?(:source_format)
  @source_uris = args[:source_uris] if args.key?(:source_uris)
  @time_format = args[:time_format] if args.key?(:time_format)
  @time_partitioning = args[:time_partitioning] if args.key?(:time_partitioning)
  @time_zone = args[:time_zone] if args.key?(:time_zone)
  @timestamp_format = args[:timestamp_format] if args.key?(:timestamp_format)
  @timestamp_target_precision = args[:timestamp_target_precision] if args.key?(:timestamp_target_precision)
  @use_avro_logical_types = args[:use_avro_logical_types] if args.key?(:use_avro_logical_types)
  @write_disposition = args[:write_disposition] if args.key?(:write_disposition)
end