Class: Google::Apis::BigqueryV2::TrainingOptions
- Inherits: Object
  - Object
  - Google::Apis::BigqueryV2::TrainingOptions
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  lib/google/apis/bigquery_v2/classes.rb,
  lib/google/apis/bigquery_v2/representations.rb
Overview
Options used in model training.
Instance Attribute Summary
- #activation_fn ⇒ String
  Activation function of the neural nets.
- #adjust_step_changes ⇒ Boolean (also: #adjust_step_changes?)
  If true, detect step changes and make data adjustment in the input time series.
- #approx_global_feature_contrib ⇒ Boolean (also: #approx_global_feature_contrib?)
  Whether to use approximate feature contribution method in XGBoost model explanation for global explain.
- #auto_arima ⇒ Boolean (also: #auto_arima?)
  Whether to enable auto ARIMA or not.
- #auto_arima_max_order ⇒ Fixnum
  The max value of the sum of non-seasonal p and q.
- #auto_arima_min_order ⇒ Fixnum
  The min value of the sum of non-seasonal p and q.
- #auto_class_weights ⇒ Boolean (also: #auto_class_weights?)
  Whether to calculate class weights automatically based on the popularity of each label.
- #batch_size ⇒ Fixnum
  Batch size for dnn models.
- #booster_type ⇒ String
  Booster type for boosted tree models.
- #budget_hours ⇒ Float
  Budget in hours for AutoML training.
- #calculate_p_values ⇒ Boolean (also: #calculate_p_values?)
  Whether or not p-value test should be computed for this model.
- #category_encoding_method ⇒ String
  Categorical feature encoding method.
- #clean_spikes_and_dips ⇒ Boolean (also: #clean_spikes_and_dips?)
  If true, clean spikes and dips in the input time series.
- #color_space ⇒ String
  Enums for color space, used for processing images in Object Table.
- #colsample_bylevel ⇒ Float
  Subsample ratio of columns for each level for boosted tree models.
- #colsample_bynode ⇒ Float
  Subsample ratio of columns for each node (split) for boosted tree models.
- #colsample_bytree ⇒ Float
  Subsample ratio of columns when constructing each tree for boosted tree models.
- #contribution_metric ⇒ String
  The contribution metric.
- #dart_normalize_type ⇒ String
  Type of normalization algorithm for boosted tree models using dart booster.
- #data_frequency ⇒ String
  The data frequency of a time series.
- #data_split_column ⇒ String
  The column to split data with.
- #data_split_eval_fraction ⇒ Float
  The fraction of evaluation data over the whole input data.
- #data_split_method ⇒ String
  The data split type for training and evaluation, e.g. RANDOM.
- #decompose_time_series ⇒ Boolean (also: #decompose_time_series?)
  If true, perform decompose time series and save the results.
- #dimension_id_columns ⇒ Array<String>
  Optional. Names of the columns to slice on.
- #distance_type ⇒ String
  Distance type for clustering models.
- #dropout ⇒ Float
  Dropout probability for dnn models.
- #early_stop ⇒ Boolean (also: #early_stop?)
  Whether to stop early when the loss doesn't improve significantly any more (compared to min_relative_progress).
- #enable_global_explain ⇒ Boolean (also: #enable_global_explain?)
  If true, enable global explanation during training.
- #endpoint_idle_ttl ⇒ String
  The idle TTL of the endpoint before the resources get destroyed.
- #feedback_type ⇒ String
  Feedback type that specifies which algorithm to run for matrix factorization.
- #fit_intercept ⇒ Boolean (also: #fit_intercept?)
  Whether the model should include intercept during model training.
- #forecast_limit_lower_bound ⇒ Float
  The forecast limit lower bound that was used during ARIMA model training with limits.
- #forecast_limit_upper_bound ⇒ Float
  The forecast limit upper bound that was used during ARIMA model training with limits.
- #hidden_units ⇒ Array<Fixnum>
  Hidden units for dnn models.
- #holiday_region ⇒ String
  The geographical region based on which the holidays are considered in time series modeling.
- #holiday_regions ⇒ Array<String>
  A list of geographical regions that are used for time series modeling.
- #horizon ⇒ Fixnum
  The number of periods ahead that need to be forecasted.
- #hparam_tuning_objectives ⇒ Array<String>
  The target evaluation metrics to optimize the hyperparameters for.
- #hugging_face_model_id ⇒ String
  The id of a Hugging Face model.
- #include_drift ⇒ Boolean (also: #include_drift?)
  Include drift when fitting an ARIMA model.
- #initial_learn_rate ⇒ Float
  Specifies the initial learning rate for the line search learn rate strategy.
- #input_label_columns ⇒ Array<String>
  Name of input label columns in training data.
- #instance_weight_column ⇒ String
  Name of the instance weight column for training data.
- #integrated_gradients_num_steps ⇒ Fixnum
  Number of integral steps for the integrated gradients explain method.
- #is_test_column ⇒ String
  Name of the column used to determine the rows corresponding to control and test.
- #item_column ⇒ String
  Item column specified for matrix factorization models.
- #kmeans_initialization_column ⇒ String
  The column used to provide the initial centroids for kmeans algorithm when kmeans_initialization_method is CUSTOM.
- #kmeans_initialization_method ⇒ String
  The method used to initialize the centroids for kmeans algorithm.
- #l1_reg_activation ⇒ Float
  L1 regularization coefficient to activations.
- #l1_regularization ⇒ Float
  L1 regularization coefficient.
- #l2_regularization ⇒ Float
  L2 regularization coefficient.
- #label_class_weights ⇒ Hash<String,Float>
  Weights associated with each label class, for rebalancing the training data.
- #learn_rate ⇒ Float
  Learning rate in training.
- #learn_rate_strategy ⇒ String
  The strategy to determine learn rate for the current iteration.
- #loss_type ⇒ String
  Type of loss function used during training run.
- #machine_type ⇒ String
  The type of the machine used to deploy and serve the model.
- #max_iterations ⇒ Fixnum
  The maximum number of iterations in training.
- #max_parallel_trials ⇒ Fixnum
  Maximum number of trials to run in parallel.
- #max_replica_count ⇒ Fixnum
  The maximum number of machine replicas that will be deployed on an endpoint.
- #max_time_series_length ⇒ Fixnum
  The maximum number of time points in a time series that can be used in modeling the trend component of the time series.
- #max_tree_depth ⇒ Fixnum
  Maximum depth of a tree for boosted tree models.
- #min_apriori_support ⇒ Float
  The apriori support minimum.
- #min_relative_progress ⇒ Float
  When early_stop is true, stops training when accuracy improvement is less than 'min_relative_progress'.
- #min_replica_count ⇒ Fixnum
  The minimum number of machine replicas that will always be deployed on an endpoint.
- #min_split_loss ⇒ Float
  Minimum split loss for boosted tree models.
- #min_time_series_length ⇒ Fixnum
  The minimum number of time points in a time series that are used in modeling the trend component of the time series.
- #min_tree_child_weight ⇒ Fixnum
  Minimum sum of instance weight needed in a child for boosted tree models.
- #model_garden_model_name ⇒ String
  The name of a Vertex model garden publisher model.
- #model_registry ⇒ String
  The model registry.
- #model_uri ⇒ String
  Google Cloud Storage URI from which the model was imported.
- #non_seasonal_order ⇒ Google::Apis::BigqueryV2::ArimaOrder
  Arima order, can be used for both non-seasonal and seasonal parts.
- #num_clusters ⇒ Fixnum
  Number of clusters for clustering models.
- #num_factors ⇒ Fixnum
  Num factors specified for matrix factorization models.
- #num_parallel_tree ⇒ Fixnum
  Number of parallel trees constructed during each iteration for boosted tree models.
- #num_principal_components ⇒ Fixnum
  Number of principal components to keep in the PCA model.
- #num_trials ⇒ Fixnum
  Number of trials to run this hyperparameter tuning job.
- #optimization_strategy ⇒ String
  Optimization strategy for training linear regression models.
- #optimizer ⇒ String
  Optimizer used for training the neural nets.
- #pca_explained_variance_ratio ⇒ Float
  The minimum ratio of cumulative explained variance that needs to be given by the PCA model.
- #pca_solver ⇒ String
  The solver for PCA.
- #reservation_affinity_key ⇒ String
  Corresponds to the label key of a reservation resource used by Vertex AI.
- #reservation_affinity_type ⇒ String
  Specifies the reservation affinity type used to configure a Vertex AI resource.
- #reservation_affinity_values ⇒ Array<String>
  Corresponds to the label values of a reservation resource used by Vertex AI.
- #sampled_shapley_num_paths ⇒ Fixnum
  Number of paths for the sampled Shapley explain method.
- #scale_features ⇒ Boolean (also: #scale_features?)
  If true, scale the feature values by dividing the feature standard deviation.
- #standardize_features ⇒ Boolean (also: #standardize_features?)
  Whether to standardize numerical features.
- #subsample ⇒ Float
  Subsample fraction of the training data to grow tree to prevent overfitting for boosted tree models.
- #tf_version ⇒ String
  Based on the selected TF version, the corresponding docker image is used to train external models.
- #time_series_data_column ⇒ String
  Column to be designated as time series data for ARIMA model.
- #time_series_id_column ⇒ String
  The time series id column that was used during ARIMA model training.
- #time_series_id_columns ⇒ Array<String>
  The time series id columns that were used during ARIMA model training.
- #time_series_length_fraction ⇒ Float
  The fraction of the interpolated length of the time series that's used to model the time series trend component.
- #time_series_timestamp_column ⇒ String
  Column to be designated as time series timestamp for ARIMA model.
- #tree_method ⇒ String
  Tree construction algorithm for boosted tree models.
- #trend_smoothing_window_size ⇒ Fixnum
  Smoothing window size for the trend component.
- #user_column ⇒ String
  User column specified for matrix factorization models.
- #vertex_ai_model_version_aliases ⇒ Array<String>
  The version aliases to apply in Vertex AI model registry.
- #wals_alpha ⇒ Float
  Hyperparameter for matrix factorization when implicit feedback type is specified.
- #warm_start ⇒ Boolean (also: #warm_start?)
  Whether to train a model from the last checkpoint.
- #xgboost_version ⇒ String
  User-selected XGBoost versions for training of XGBoost models.
Instance Method Summary
- #initialize(**args) ⇒ TrainingOptions (constructor)
  A new instance of TrainingOptions.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ TrainingOptions
Returns a new instance of TrainingOptions.
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11937

def initialize(**args)
  update!(**args)
end
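The keyword-splat constructor follows the Core::Hashable pattern used throughout this gem: initialize forwards its keywords to update!, which assigns each one to the matching attribute. A minimal self-contained sketch of that pattern (TrainingOptionsSketch and its three attributes are illustrative stand-ins, not the real generated class):

```ruby
# Stand-in for the Core::Hashable initializer pattern: initialize(**args)
# forwards to update!, which assigns any keyword that matches an attribute.
class TrainingOptionsSketch
  attr_accessor :learn_rate, :max_iterations, :early_stop

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    args.each do |key, value|
      setter = "#{key}="
      # Silently skip unknown keys, mirroring hash-backed assignment.
      send(setter, value) if respond_to?(setter)
    end
  end
end

opts = TrainingOptionsSketch.new(learn_rate: 0.1, max_iterations: 20)
```

Because update! is also public, the same keywords can be applied to an existing instance after construction.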
Instance Attribute Details
#activation_fn ⇒ String
Activation function of the neural nets.
Corresponds to the JSON property activationFn
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11356

def activation_fn
  @activation_fn
end
#adjust_step_changes ⇒ Boolean Also known as: adjust_step_changes?
If true, detect step changes and make data adjustment in the input time series.
Corresponds to the JSON property adjustStepChanges
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11361

def adjust_step_changes
  @adjust_step_changes
end
#approx_global_feature_contrib ⇒ Boolean Also known as: approx_global_feature_contrib?
Whether to use approximate feature contribution method in XGBoost model
explanation for global explain.
Corresponds to the JSON property approxGlobalFeatureContrib
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11368

def approx_global_feature_contrib
  @approx_global_feature_contrib
end
#auto_arima ⇒ Boolean Also known as: auto_arima?
Whether to enable auto ARIMA or not.
Corresponds to the JSON property autoArima
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11374

def auto_arima
  @auto_arima
end
#auto_arima_max_order ⇒ Fixnum
The max value of the sum of non-seasonal p and q.
Corresponds to the JSON property autoArimaMaxOrder
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11380

def auto_arima_max_order
  @auto_arima_max_order
end
#auto_arima_min_order ⇒ Fixnum
The min value of the sum of non-seasonal p and q.
Corresponds to the JSON property autoArimaMinOrder
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11385

def auto_arima_min_order
  @auto_arima_min_order
end
#auto_class_weights ⇒ Boolean Also known as: auto_class_weights?
Whether to calculate class weights automatically based on the popularity of
each label.
Corresponds to the JSON property autoClassWeights
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11391

def auto_class_weights
  @auto_class_weights
end
#batch_size ⇒ Fixnum
Batch size for dnn models.
Corresponds to the JSON property batchSize
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11397

def batch_size
  @batch_size
end
#booster_type ⇒ String
Booster type for boosted tree models.
Corresponds to the JSON property boosterType
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11402

def booster_type
  @booster_type
end
#budget_hours ⇒ Float
Budget in hours for AutoML training.
Corresponds to the JSON property budgetHours
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11407

def budget_hours
  @budget_hours
end
#calculate_p_values ⇒ Boolean Also known as: calculate_p_values?
Whether or not p-value test should be computed for this model. Only available
for linear and logistic regression models.
Corresponds to the JSON property calculatePValues
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11413

def calculate_p_values
  @calculate_p_values
end
#category_encoding_method ⇒ String
Categorical feature encoding method.
Corresponds to the JSON property categoryEncodingMethod
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11419

def category_encoding_method
  @category_encoding_method
end
#clean_spikes_and_dips ⇒ Boolean Also known as: clean_spikes_and_dips?
If true, clean spikes and dips in the input time series.
Corresponds to the JSON property cleanSpikesAndDips
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11424

def clean_spikes_and_dips
  @clean_spikes_and_dips
end
#color_space ⇒ String
Enums for color space, used for processing images in Object Table. See more
details at https://www.tensorflow.org/io/tutorials/colorspace.
Corresponds to the JSON property colorSpace
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11431

def color_space
  @color_space
end
#colsample_bylevel ⇒ Float
Subsample ratio of columns for each level for boosted tree models.
Corresponds to the JSON property colsampleBylevel
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11436

def colsample_bylevel
  @colsample_bylevel
end
#colsample_bynode ⇒ Float
Subsample ratio of columns for each node(split) for boosted tree models.
Corresponds to the JSON property colsampleBynode
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11441

def colsample_bynode
  @colsample_bynode
end
#colsample_bytree ⇒ Float
Subsample ratio of columns when constructing each tree for boosted tree models.
Corresponds to the JSON property colsampleBytree
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11446

def colsample_bytree
  @colsample_bytree
end
#contribution_metric ⇒ String
The contribution metric. Applies to contribution analysis models. The supported
formats are summable and summable ratio contribution metrics. These include
expressions such as SUM(x) or SUM(x)/SUM(y), where x and y are column names
from the base table.
Corresponds to the JSON property contributionMetric
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11454

def contribution_metric
  @contribution_metric
end
#dart_normalize_type ⇒ String
Type of normalization algorithm for boosted tree models using dart booster.
Corresponds to the JSON property dartNormalizeType
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11459

def dart_normalize_type
  @dart_normalize_type
end
#data_frequency ⇒ String
The data frequency of a time series.
Corresponds to the JSON property dataFrequency
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11464

def data_frequency
  @data_frequency
end
#data_split_column ⇒ String
The column to split data with. This column won't be used as a feature. 1. When
data_split_method is CUSTOM, the corresponding column should be boolean. The
rows with true value tag are eval data, and the false are training data. 2.
When data_split_method is SEQ, the first DATA_SPLIT_EVAL_FRACTION rows (from
smallest to largest) in the corresponding column are used as training data,
and the rest are eval data. It respects the order in Orderable data types:
https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#data_type_properties
Corresponds to the JSON property dataSplitColumn
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11476

def data_split_column
  @data_split_column
end
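The CUSTOM and SEQ behaviors described above can be illustrated with a short sketch. split_rows is a hypothetical helper (not part of the BigQuery API): CUSTOM treats a boolean column as the eval flag, while SEQ sorts by the split column and takes the first data_split_eval_fraction of rows as training data and the rest as eval data.

```ruby
# Hypothetical illustration of the data_split_column rules:
#   :custom - rows where the split column is true become eval data.
#   :seq    - rows are ordered by the split column; the first
#             eval_fraction of them become training data.
def split_rows(rows, split_col, method:, eval_fraction: 0.2)
  case method
  when :custom
    eval_rows, train_rows = rows.partition { |r| r[split_col] }
  when :seq
    sorted = rows.sort_by { |r| r[split_col] }
    cutoff = (rows.size * eval_fraction).round
    train_rows = sorted.first(cutoff)
    eval_rows = sorted.drop(cutoff)
  end
  [train_rows, eval_rows]
end
```

The sketch mirrors the documented asymmetry: for SEQ it is the *first* fraction of rows (smallest split-column values) that becomes training data.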
#data_split_eval_fraction ⇒ Float
The fraction of evaluation data over the whole input data. The rest of data
will be used as training data. The format should be double. Accurate to two
decimal places. Default value is 0.2.
Corresponds to the JSON property dataSplitEvalFraction
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11483

def data_split_eval_fraction
  @data_split_eval_fraction
end
#data_split_method ⇒ String
The data split type for training and evaluation, e.g. RANDOM.
Corresponds to the JSON property dataSplitMethod
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11488

def data_split_method
  @data_split_method
end
#decompose_time_series ⇒ Boolean Also known as: decompose_time_series?
If true, perform decompose time series and save the results.
Corresponds to the JSON property decomposeTimeSeries
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11493

def decompose_time_series
  @decompose_time_series
end
#dimension_id_columns ⇒ Array<String>
Optional. Names of the columns to slice on. Applies to contribution analysis
models.
Corresponds to the JSON property dimensionIdColumns
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11500

def dimension_id_columns
  @dimension_id_columns
end
#distance_type ⇒ String
Distance type for clustering models.
Corresponds to the JSON property distanceType
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11505

def distance_type
  @distance_type
end
#dropout ⇒ Float
Dropout probability for dnn models.
Corresponds to the JSON property dropout
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11510

def dropout
  @dropout
end
#early_stop ⇒ Boolean Also known as: early_stop?
Whether to stop early when the loss doesn't improve significantly any more
(compared to min_relative_progress). Used only for iterative training
algorithms.
Corresponds to the JSON property earlyStop
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11517

def early_stop
  @early_stop
end
#enable_global_explain ⇒ Boolean Also known as: enable_global_explain?
If true, enable global explanation during training.
Corresponds to the JSON property enableGlobalExplain
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11523

def enable_global_explain
  @enable_global_explain
end
#endpoint_idle_ttl ⇒ String
The idle TTL of the endpoint before the resources get destroyed. The default
value is 6.5 hours.
Corresponds to the JSON property endpointIdleTtl
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11530

def endpoint_idle_ttl
  @endpoint_idle_ttl
end
#feedback_type ⇒ String
Feedback type that specifies which algorithm to run for matrix factorization.
Corresponds to the JSON property feedbackType
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11535

def feedback_type
  @feedback_type
end
#fit_intercept ⇒ Boolean Also known as: fit_intercept?
Whether the model should include intercept during model training.
Corresponds to the JSON property fitIntercept
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11540

def fit_intercept
  @fit_intercept
end
#forecast_limit_lower_bound ⇒ Float
The forecast limit lower bound that was used during ARIMA model training with
limits. To see more details of the algorithm:
https://otexts.com/fpp2/limits.html
Corresponds to the JSON property forecastLimitLowerBound
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11548

def forecast_limit_lower_bound
  @forecast_limit_lower_bound
end
#forecast_limit_upper_bound ⇒ Float
The forecast limit upper bound that was used during ARIMA model training with
limits.
Corresponds to the JSON property forecastLimitUpperBound
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11554

def forecast_limit_upper_bound
  @forecast_limit_upper_bound
end
#hidden_units ⇒ Array<Fixnum>
Hidden units for dnn models.
Corresponds to the JSON property hiddenUnits
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11559

def hidden_units
  @hidden_units
end
#holiday_region ⇒ String
The geographical region based on which the holidays are considered in time
series modeling. If a valid value is specified, then holiday effects modeling
is enabled.
Corresponds to the JSON property holidayRegion
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11566

def holiday_region
  @holiday_region
end
#holiday_regions ⇒ Array<String>
A list of geographical regions that are used for time series modeling.
Corresponds to the JSON property holidayRegions
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11571

def holiday_regions
  @holiday_regions
end
#horizon ⇒ Fixnum
The number of periods ahead that need to be forecasted.
Corresponds to the JSON property horizon
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11576

def horizon
  @horizon
end
#hparam_tuning_objectives ⇒ Array<String>
The target evaluation metrics to optimize the hyperparameters for.
Corresponds to the JSON property hparamTuningObjectives
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11581

def hparam_tuning_objectives
  @hparam_tuning_objectives
end
#hugging_face_model_id ⇒ String
The id of a Hugging Face model. For example, google/gemma-2-2b-it.
Corresponds to the JSON property huggingFaceModelId
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11586

def hugging_face_model_id
  @hugging_face_model_id
end
#include_drift ⇒ Boolean Also known as: include_drift?
Include drift when fitting an ARIMA model.
Corresponds to the JSON property includeDrift
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11591

def include_drift
  @include_drift
end
#initial_learn_rate ⇒ Float
Specifies the initial learning rate for the line search learn rate strategy.
Corresponds to the JSON property initialLearnRate
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11597

def initial_learn_rate
  @initial_learn_rate
end
#input_label_columns ⇒ Array<String>
Name of input label columns in training data.
Corresponds to the JSON property inputLabelColumns
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11602

def input_label_columns
  @input_label_columns
end
#instance_weight_column ⇒ String
Name of the instance weight column for training data. This column isn't used
as a feature.
Corresponds to the JSON property instanceWeightColumn
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11608

def instance_weight_column
  @instance_weight_column
end
#integrated_gradients_num_steps ⇒ Fixnum
Number of integral steps for the integrated gradients explain method.
Corresponds to the JSON property integratedGradientsNumSteps
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11613

def integrated_gradients_num_steps
  @integrated_gradients_num_steps
end
#is_test_column ⇒ String
Name of the column used to determine the rows corresponding to control and
test. Applies to contribution analysis models.
Corresponds to the JSON property isTestColumn
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11619

def is_test_column
  @is_test_column
end
#item_column ⇒ String
Item column specified for matrix factorization models.
Corresponds to the JSON property itemColumn
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11624

def item_column
  @item_column
end
#kmeans_initialization_column ⇒ String
The column used to provide the initial centroids for kmeans algorithm when
kmeans_initialization_method is CUSTOM.
Corresponds to the JSON property kmeansInitializationColumn
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11630

def kmeans_initialization_column
  @kmeans_initialization_column
end
#kmeans_initialization_method ⇒ String
The method used to initialize the centroids for kmeans algorithm.
Corresponds to the JSON property kmeansInitializationMethod
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11635

def kmeans_initialization_method
  @kmeans_initialization_method
end
#l1_reg_activation ⇒ Float
L1 regularization coefficient to activations.
Corresponds to the JSON property l1RegActivation
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11640

def l1_reg_activation
  @l1_reg_activation
end
#l1_regularization ⇒ Float
L1 regularization coefficient.
Corresponds to the JSON property l1Regularization
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11645

def l1_regularization
  @l1_regularization
end
#l2_regularization ⇒ Float
L2 regularization coefficient.
Corresponds to the JSON property l2Regularization
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11650

def l2_regularization
  @l2_regularization
end
#label_class_weights ⇒ Hash<String,Float>
Weights associated with each label class, for rebalancing the training data.
Only applicable for classification models.
Corresponds to the JSON property labelClassWeights
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11656

def label_class_weights
  @label_class_weights
end
#learn_rate ⇒ Float
Learning rate in training. Used only for iterative training algorithms.
Corresponds to the JSON property learnRate
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11661

def learn_rate
  @learn_rate
end
#learn_rate_strategy ⇒ String
The strategy to determine learn rate for the current iteration.
Corresponds to the JSON property learnRateStrategy
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11666

def learn_rate_strategy
  @learn_rate_strategy
end
#loss_type ⇒ String
Type of loss function used during training run.
Corresponds to the JSON property lossType
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11671

def loss_type
  @loss_type
end
#machine_type ⇒ String
The type of the machine used to deploy and serve the model.
Corresponds to the JSON property machineType
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11676

def machine_type
  @machine_type
end
#max_iterations ⇒ Fixnum
The maximum number of iterations in training. Used only for iterative training
algorithms.
Corresponds to the JSON property maxIterations
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11682

def max_iterations
  @max_iterations
end
#max_parallel_trials ⇒ Fixnum
Maximum number of trials to run in parallel.
Corresponds to the JSON property maxParallelTrials
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11687

def max_parallel_trials
  @max_parallel_trials
end
#max_replica_count ⇒ Fixnum
The maximum number of machine replicas that will be deployed on an endpoint.
The default value is equal to min_replica_count.
Corresponds to the JSON property maxReplicaCount
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11693

def max_replica_count
  @max_replica_count
end
#max_time_series_length ⇒ Fixnum
The maximum number of time points in a time series that can be used in
modeling the trend component of the time series. Don't use this option with
the timeSeriesLengthFraction or minTimeSeriesLength options.
Corresponds to the JSON property maxTimeSeriesLength
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11700

def max_time_series_length
  @max_time_series_length
end
#max_tree_depth ⇒ Fixnum
Maximum depth of a tree for boosted tree models.
Corresponds to the JSON property maxTreeDepth
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11705

def max_tree_depth
  @max_tree_depth
end
#min_apriori_support ⇒ Float
The apriori support minimum. Applies to contribution analysis models.
Corresponds to the JSON property minAprioriSupport
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11710

def min_apriori_support
  @min_apriori_support
end
#min_relative_progress ⇒ Float
When early_stop is true, stops training when accuracy improvement is less than
'min_relative_progress'. Used only for iterative training algorithms.
Corresponds to the JSON property minRelativeProgress
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11716

def min_relative_progress
  @min_relative_progress
end
#min_replica_count ⇒ Fixnum
The minimum number of machine replicas that will always be deployed on an
endpoint. This value must be greater than or equal to 1. The default value is
1.
Corresponds to the JSON property minReplicaCount
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11723

def min_replica_count
  @min_replica_count
end
#min_split_loss ⇒ Float
Minimum split loss for boosted tree models.
Corresponds to the JSON property minSplitLoss
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11728

def min_split_loss
  @min_split_loss
end
#min_time_series_length ⇒ Fixnum
The minimum number of time points in a time series that are used in modeling
the trend component of the time series. If you use this option you must also
set the timeSeriesLengthFraction option. This training option ensures that
enough time points are available when you use timeSeriesLengthFraction in
trend modeling. This is particularly important when forecasting multiple time
series in a single query using timeSeriesIdColumn. If the total number of
time points is less than the minTimeSeriesLength value, then the query uses
all available time points.
Corresponds to the JSON property minTimeSeriesLength
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11740

def min_time_series_length
  @min_time_series_length
end
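The interplay described above (a fraction of the series length, floored at the minimum, with a fallback to all points when the series is short) can be sketched as a hypothetical helper. The exact server-side formula is not documented here, so trend_window is an illustrative assumption only:

```ruby
# Illustrative only: one plausible reading of how minTimeSeriesLength,
# timeSeriesLengthFraction, and the total number of points could interact.
# The exact BigQuery ML behavior is not specified in this reference.
def trend_window(total_points, fraction:, min_length:)
  # If the series is shorter than the minimum, use all available points.
  return total_points if total_points < min_length

  # Otherwise take the fraction of the series, but never fewer than
  # min_length points and never more than the series actually has.
  [(total_points * fraction).round, min_length].max.clamp(0, total_points)
end
```

The floor at min_length is the point of this option: it keeps timeSeriesLengthFraction from starving the trend model when many short series share one query via timeSeriesIdColumn.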
#min_tree_child_weight ⇒ Fixnum
Minimum sum of instance weight needed in a child for boosted tree models.
Corresponds to the JSON property minTreeChildWeight
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11745

def min_tree_child_weight
  @min_tree_child_weight
end
#model_garden_model_name ⇒ String
The name of a Vertex model garden publisher model. Format is
publishers/publisher/models/model@optional_version_id.
Corresponds to the JSON property modelGardenModelName
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11751

def model_garden_model_name
  @model_garden_model_name
end
#model_registry ⇒ String
The model registry.
Corresponds to the JSON property modelRegistry
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11756

def model_registry
  @model_registry
end
#model_uri ⇒ String
Google Cloud Storage URI from which the model was imported. Only applicable
for imported models.
Corresponds to the JSON property modelUri
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11762
def model_uri
  @model_uri
end
#non_seasonal_order ⇒ Google::Apis::BigqueryV2::ArimaOrder
ARIMA order; can be used for both non-seasonal and seasonal parts.
Corresponds to the JSON property nonSeasonalOrder
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11767
def non_seasonal_order
  @non_seasonal_order
end
#num_clusters ⇒ Fixnum
Number of clusters for clustering models.
Corresponds to the JSON property numClusters
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11772
def num_clusters
  @num_clusters
end
#num_factors ⇒ Fixnum
Number of factors specified for matrix factorization models.
Corresponds to the JSON property numFactors
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11777
def num_factors
  @num_factors
end
#num_parallel_tree ⇒ Fixnum
Number of parallel trees constructed during each iteration for boosted tree
models.
Corresponds to the JSON property numParallelTree
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11783
def num_parallel_tree
  @num_parallel_tree
end
#num_principal_components ⇒ Fixnum
Number of principal components to keep in the PCA model. Must be <= the number
of features.
Corresponds to the JSON property numPrincipalComponents
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11789
def num_principal_components
  @num_principal_components
end
#num_trials ⇒ Fixnum
Number of trials to run this hyperparameter tuning job.
Corresponds to the JSON property numTrials
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11794
def num_trials
  @num_trials
end
#optimization_strategy ⇒ String
Optimization strategy for training linear regression models.
Corresponds to the JSON property optimizationStrategy
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11799
def optimization_strategy
  @optimization_strategy
end
#optimizer ⇒ String
Optimizer used for training the neural nets.
Corresponds to the JSON property optimizer
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11804
def optimizer
  @optimizer
end
#pca_explained_variance_ratio ⇒ Float
The minimum ratio of cumulative explained variance that needs to be given by
the PCA model.
Corresponds to the JSON property pcaExplainedVarianceRatio
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11810
def pca_explained_variance_ratio
  @pca_explained_variance_ratio
end
#pca_solver ⇒ String
The solver for PCA.
Corresponds to the JSON property pcaSolver
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11815
def pca_solver
  @pca_solver
end
#reservation_affinity_key ⇒ String
Corresponds to the label key of a reservation resource used by Vertex AI. To
target a SPECIFIC_RESERVATION by name, use compute.googleapis.com/reservation-name
as the key and specify the name of your reservation as its value.
Corresponds to the JSON property reservationAffinityKey
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11822
def reservation_affinity_key
  @reservation_affinity_key
end
#reservation_affinity_type ⇒ String
Specifies the reservation affinity type used to configure a Vertex AI resource.
The default value is NO_RESERVATION.
Corresponds to the JSON property reservationAffinityType
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11828
def reservation_affinity_type
  @reservation_affinity_type
end
#reservation_affinity_values ⇒ Array<String>
Corresponds to the label values of a reservation resource used by Vertex AI.
This must be the full resource name of the reservation or reservation block.
Corresponds to the JSON property reservationAffinityValues
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11834
def reservation_affinity_values
  @reservation_affinity_values
end
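Taken together, the three reservation-affinity attributes above describe one targeting rule. A hypothetical sketch of the documented shapes as a plain hash (the project, zone, and reservation names are made up for illustration):

```ruby
# Hypothetical values illustrating the documented reservation-affinity format.
reservation_affinity = {
  reservation_affinity_type:   'SPECIFIC_RESERVATION',
  reservation_affinity_key:    'compute.googleapis.com/reservation-name',
  reservation_affinity_values: [
    # Full resource name of the reservation, as the documentation requires.
    'projects/my-project/zones/us-central1-a/reservations/my-reservation'
  ]
}
```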
#sampled_shapley_num_paths ⇒ Fixnum
Number of paths for the sampled Shapley explain method.
Corresponds to the JSON property sampledShapleyNumPaths
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11839
def sampled_shapley_num_paths
  @sampled_shapley_num_paths
end
#scale_features ⇒ Boolean Also known as: scale_features?
If true, scales the feature values by dividing by the feature's standard
deviation. Currently only applies to PCA.
Corresponds to the JSON property scaleFeatures
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11845
def scale_features
  @scale_features
end
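The scaling described above (dividing each value by the feature's standard deviation) can be sketched in plain Ruby. This illustrates the arithmetic only, not BigQuery's implementation; the helper name scale_feature is hypothetical.

```ruby
# Divide each value by the population standard deviation of the feature.
def scale_feature(values)
  mean = values.sum.to_f / values.size
  sd = Math.sqrt(values.sum { |v| (v - mean)**2 } / values.size)
  values.map { |v| v / sd }
end

scale_feature([2.0, 4.0, 6.0]) # standard deviation here is ~1.633
```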
#standardize_features ⇒ Boolean Also known as: standardize_features?
Whether to standardize numerical features. Defaults to true.
Corresponds to the JSON property standardizeFeatures
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11851
def standardize_features
  @standardize_features
end
#subsample ⇒ Float
Subsample fraction of the training data used to grow trees, to prevent
overfitting, for boosted tree models.
Corresponds to the JSON property subsample
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11858
def subsample
  @subsample
end
#tf_version ⇒ String
Based on the selected TF version, the corresponding docker image is used to
train external models.
Corresponds to the JSON property tfVersion
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11864
def tf_version
  @tf_version
end
#time_series_data_column ⇒ String
Column to be designated as time series data for ARIMA model.
Corresponds to the JSON property timeSeriesDataColumn
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11869
def time_series_data_column
  @time_series_data_column
end
#time_series_id_column ⇒ String
The time series id column that was used during ARIMA model training.
Corresponds to the JSON property timeSeriesIdColumn
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11874
def time_series_id_column
  @time_series_id_column
end
#time_series_id_columns ⇒ Array<String>
The time series id columns that were used during ARIMA model training.
Corresponds to the JSON property timeSeriesIdColumns
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11879
def time_series_id_columns
  @time_series_id_columns
end
#time_series_length_fraction ⇒ Float
The fraction of the interpolated length of the time series that's used to
model the time series trend component. All of the time points of the time
series are used to model the non-trend component. This training option
accelerates model training without sacrificing much forecasting accuracy.
You can use this option with minTimeSeriesLength but not with
maxTimeSeriesLength.
Corresponds to the JSON property timeSeriesLengthFraction
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11889
def time_series_length_fraction
  @time_series_length_fraction
end
#time_series_timestamp_column ⇒ String
Column to be designated as time series timestamp for ARIMA model.
Corresponds to the JSON property timeSeriesTimestampColumn
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11894
def time_series_timestamp_column
  @time_series_timestamp_column
end
#tree_method ⇒ String
Tree construction algorithm for boosted tree models.
Corresponds to the JSON property treeMethod
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11899
def tree_method
  @tree_method
end
#trend_smoothing_window_size ⇒ Fixnum
Smoothing window size for the trend component. When a positive value is
specified, a center moving average smoothing is applied on the history trend.
When the smoothing window is out of the boundary at the beginning or the end
of the trend, the first element or the last element is padded to fill the
smoothing window before the average is applied.
Corresponds to the JSON property trendSmoothingWindowSize
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11908
def trend_smoothing_window_size
  @trend_smoothing_window_size
end
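The edge handling described above can be sketched in plain Ruby. This is an illustration of the documented padding behavior, not BigQuery's implementation; the helper name smooth_trend is hypothetical and assumes an odd window size.

```ruby
# Centered moving average over the trend; the first/last element is repeated
# to fill the window at the boundaries, as described above.
def smooth_trend(series, window)
  half = window / 2
  padded = [series.first] * half + series + [series.last] * half
  series.each_index.map { |i| padded[i, window].sum / window.to_f }
end

smooth_trend([1.0, 2.0, 3.0, 4.0, 6.0], 3) # interior points become local means
```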
#user_column ⇒ String
User column specified for matrix factorization models.
Corresponds to the JSON property userColumn
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11913
def user_column
  @user_column
end
#vertex_ai_model_version_aliases ⇒ Array<String>
The version aliases to apply in the Vertex AI Model Registry. If the version
aliases exist on an existing model, they are always overwritten.
Corresponds to the JSON property vertexAiModelVersionAliases
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11919
def vertex_ai_model_version_aliases
  @vertex_ai_model_version_aliases
end
#wals_alpha ⇒ Float
Hyperparameter for matrix factorization when the implicit feedback type is
specified.
Corresponds to the JSON property walsAlpha
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11924
def wals_alpha
  @wals_alpha
end
#warm_start ⇒ Boolean Also known as: warm_start?
Whether to train a model from the last checkpoint.
Corresponds to the JSON property warmStart
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11929
def warm_start
  @warm_start
end
#xgboost_version ⇒ String
User-selected XGBoost version for training XGBoost models.
Corresponds to the JSON property xgboostVersion
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11935
def xgboost_version
  @xgboost_version
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object
# File 'lib/google/apis/bigquery_v2/classes.rb', line 11942
def update!(**args)
  @activation_fn = args[:activation_fn] if args.key?(:activation_fn)
  @adjust_step_changes = args[:adjust_step_changes] if args.key?(:adjust_step_changes)
  @approx_global_feature_contrib = args[:approx_global_feature_contrib] if args.key?(:approx_global_feature_contrib)
  @auto_arima = args[:auto_arima] if args.key?(:auto_arima)
  @auto_arima_max_order = args[:auto_arima_max_order] if args.key?(:auto_arima_max_order)
  @auto_arima_min_order = args[:auto_arima_min_order] if args.key?(:auto_arima_min_order)
  @auto_class_weights = args[:auto_class_weights] if args.key?(:auto_class_weights)
  @batch_size = args[:batch_size] if args.key?(:batch_size)
  @booster_type = args[:booster_type] if args.key?(:booster_type)
  @budget_hours = args[:budget_hours] if args.key?(:budget_hours)
  @calculate_p_values = args[:calculate_p_values] if args.key?(:calculate_p_values)
  @category_encoding_method = args[:category_encoding_method] if args.key?(:category_encoding_method)
  @clean_spikes_and_dips = args[:clean_spikes_and_dips] if args.key?(:clean_spikes_and_dips)
  @color_space = args[:color_space] if args.key?(:color_space)
  @colsample_bylevel = args[:colsample_bylevel] if args.key?(:colsample_bylevel)
  @colsample_bynode = args[:colsample_bynode] if args.key?(:colsample_bynode)
  @colsample_bytree = args[:colsample_bytree] if args.key?(:colsample_bytree)
  @contribution_metric = args[:contribution_metric] if args.key?(:contribution_metric)
  @dart_normalize_type = args[:dart_normalize_type] if args.key?(:dart_normalize_type)
  @data_frequency = args[:data_frequency] if args.key?(:data_frequency)
  @data_split_column = args[:data_split_column] if args.key?(:data_split_column)
  @data_split_eval_fraction = args[:data_split_eval_fraction] if args.key?(:data_split_eval_fraction)
  @data_split_method = args[:data_split_method] if args.key?(:data_split_method)
  @decompose_time_series = args[:decompose_time_series] if args.key?(:decompose_time_series)
  @dimension_id_columns = args[:dimension_id_columns] if args.key?(:dimension_id_columns)
  @distance_type = args[:distance_type] if args.key?(:distance_type)
  @dropout = args[:dropout] if args.key?(:dropout)
  @early_stop = args[:early_stop] if args.key?(:early_stop)
  @enable_global_explain = args[:enable_global_explain] if args.key?(:enable_global_explain)
  @endpoint_idle_ttl = args[:endpoint_idle_ttl] if args.key?(:endpoint_idle_ttl)
  @feedback_type = args[:feedback_type] if args.key?(:feedback_type)
  @fit_intercept = args[:fit_intercept] if args.key?(:fit_intercept)
  @forecast_limit_lower_bound = args[:forecast_limit_lower_bound] if args.key?(:forecast_limit_lower_bound)
  @forecast_limit_upper_bound = args[:forecast_limit_upper_bound] if args.key?(:forecast_limit_upper_bound)
  @hidden_units = args[:hidden_units] if args.key?(:hidden_units)
  @holiday_region = args[:holiday_region] if args.key?(:holiday_region)
  @holiday_regions = args[:holiday_regions] if args.key?(:holiday_regions)
  @horizon = args[:horizon] if args.key?(:horizon)
  @hparam_tuning_objectives = args[:hparam_tuning_objectives] if args.key?(:hparam_tuning_objectives)
  @hugging_face_model_id = args[:hugging_face_model_id] if args.key?(:hugging_face_model_id)
  @include_drift = args[:include_drift] if args.key?(:include_drift)
  @initial_learn_rate = args[:initial_learn_rate] if args.key?(:initial_learn_rate)
  @input_label_columns = args[:input_label_columns] if args.key?(:input_label_columns)
  @instance_weight_column = args[:instance_weight_column] if args.key?(:instance_weight_column)
  @integrated_gradients_num_steps = args[:integrated_gradients_num_steps] if args.key?(:integrated_gradients_num_steps)
  @is_test_column = args[:is_test_column] if args.key?(:is_test_column)
  @item_column = args[:item_column] if args.key?(:item_column)
  @kmeans_initialization_column = args[:kmeans_initialization_column] if args.key?(:kmeans_initialization_column)
  @kmeans_initialization_method = args[:kmeans_initialization_method] if args.key?(:kmeans_initialization_method)
  @l1_reg_activation = args[:l1_reg_activation] if args.key?(:l1_reg_activation)
  @l1_regularization = args[:l1_regularization] if args.key?(:l1_regularization)
  @l2_regularization = args[:l2_regularization] if args.key?(:l2_regularization)
  @label_class_weights = args[:label_class_weights] if args.key?(:label_class_weights)
  @learn_rate = args[:learn_rate] if args.key?(:learn_rate)
  @learn_rate_strategy = args[:learn_rate_strategy] if args.key?(:learn_rate_strategy)
  @loss_type = args[:loss_type] if args.key?(:loss_type)
  @machine_type = args[:machine_type] if args.key?(:machine_type)
  @max_iterations = args[:max_iterations] if args.key?(:max_iterations)
  @max_parallel_trials = args[:max_parallel_trials] if args.key?(:max_parallel_trials)
  @max_replica_count = args[:max_replica_count] if args.key?(:max_replica_count)
  @max_time_series_length = args[:max_time_series_length] if args.key?(:max_time_series_length)
  @max_tree_depth = args[:max_tree_depth] if args.key?(:max_tree_depth)
  @min_apriori_support = args[:min_apriori_support] if args.key?(:min_apriori_support)
  @min_relative_progress = args[:min_relative_progress] if args.key?(:min_relative_progress)
  @min_replica_count = args[:min_replica_count] if args.key?(:min_replica_count)
  @min_split_loss = args[:min_split_loss] if args.key?(:min_split_loss)
  @min_time_series_length = args[:min_time_series_length] if args.key?(:min_time_series_length)
  @min_tree_child_weight = args[:min_tree_child_weight] if args.key?(:min_tree_child_weight)
  @model_garden_model_name = args[:model_garden_model_name] if args.key?(:model_garden_model_name)
  @model_registry = args[:model_registry] if args.key?(:model_registry)
  @model_uri = args[:model_uri] if args.key?(:model_uri)
  @non_seasonal_order = args[:non_seasonal_order] if args.key?(:non_seasonal_order)
  @num_clusters = args[:num_clusters] if args.key?(:num_clusters)
  @num_factors = args[:num_factors] if args.key?(:num_factors)
  @num_parallel_tree = args[:num_parallel_tree] if args.key?(:num_parallel_tree)
  @num_principal_components = args[:num_principal_components] if args.key?(:num_principal_components)
  @num_trials = args[:num_trials] if args.key?(:num_trials)
  @optimization_strategy = args[:optimization_strategy] if args.key?(:optimization_strategy)
  @optimizer = args[:optimizer] if args.key?(:optimizer)
  @pca_explained_variance_ratio = args[:pca_explained_variance_ratio] if args.key?(:pca_explained_variance_ratio)
  @pca_solver = args[:pca_solver] if args.key?(:pca_solver)
  @reservation_affinity_key = args[:reservation_affinity_key] if args.key?(:reservation_affinity_key)
  @reservation_affinity_type = args[:reservation_affinity_type] if args.key?(:reservation_affinity_type)
  @reservation_affinity_values = args[:reservation_affinity_values] if args.key?(:reservation_affinity_values)
  @sampled_shapley_num_paths = args[:sampled_shapley_num_paths] if args.key?(:sampled_shapley_num_paths)
  @scale_features = args[:scale_features] if args.key?(:scale_features)
  @standardize_features = args[:standardize_features] if args.key?(:standardize_features)
  @subsample = args[:subsample] if args.key?(:subsample)
  @tf_version = args[:tf_version] if args.key?(:tf_version)
  @time_series_data_column = args[:time_series_data_column] if args.key?(:time_series_data_column)
  @time_series_id_column = args[:time_series_id_column] if args.key?(:time_series_id_column)
  @time_series_id_columns = args[:time_series_id_columns] if args.key?(:time_series_id_columns)
  @time_series_length_fraction = args[:time_series_length_fraction] if args.key?(:time_series_length_fraction)
  @time_series_timestamp_column = args[:time_series_timestamp_column] if args.key?(:time_series_timestamp_column)
  @tree_method = args[:tree_method] if args.key?(:tree_method)
  @trend_smoothing_window_size = args[:trend_smoothing_window_size] if args.key?(:trend_smoothing_window_size)
  @user_column = args[:user_column] if args.key?(:user_column)
  @vertex_ai_model_version_aliases = args[:vertex_ai_model_version_aliases] if args.key?(:vertex_ai_model_version_aliases)
  @wals_alpha = args[:wals_alpha] if args.key?(:wals_alpha)
  @warm_start = args[:warm_start] if args.key?(:warm_start)
  @xgboost_version = args[:xgboost_version] if args.key?(:xgboost_version)
end
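The update! body follows one pattern throughout: assign an attribute only when its key was explicitly passed, so an omitted key leaves the current value untouched while an explicit nil clears it. A minimal standalone sketch of that pattern (the Options class here is illustrative, not part of the gem):

```ruby
# Minimal sketch of the args.key? guard pattern used by update!.
class Options
  attr_accessor :num_trials, :learn_rate

  def update!(**args)
    # key? distinguishes "not passed" from "passed as nil".
    @num_trials = args[:num_trials] if args.key?(:num_trials)
    @learn_rate = args[:learn_rate] if args.key?(:learn_rate)
    self
  end
end

o = Options.new
o.update!(num_trials: 10)  # learn_rate left untouched (still nil)
o.update!(learn_rate: 0.1) # num_trials stays 10
o.update!(num_trials: nil) # explicit nil clears the value
```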