Class: Google::Apis::AiplatformV1::GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics

Inherits: Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/aiplatform_v1/classes.rb,
lib/google/apis/aiplatform_v1/representations.rb

Overview

Metrics for general pairwise text generation evaluation results.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics

Returns a new instance of GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics.



# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22421

def initialize(**args)
   update!(**args)
end

Instance Attribute Details

#accuracy ⇒ Float

Fraction of cases where the autorater agreed with the human raters. Corresponds to the JSON property accuracy

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22349

def accuracy
  @accuracy
end

#baseline_model_win_rate ⇒ Float

Percentage of time the autorater decided the baseline model had the better response. Corresponds to the JSON property baselineModelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22355

def baseline_model_win_rate
  @baseline_model_win_rate
end

#cohens_kappa ⇒ Float

A measurement of agreement between the autorater and human raters that takes the likelihood of random agreement into account. Corresponds to the JSON property cohensKappa

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22361

def cohens_kappa
  @cohens_kappa
end
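The API does not spell out how cohens_kappa is computed; as an illustration only, the sketch below shows the standard Cohen's kappa formula applied to this class's fields, assuming binary preferences (no ties) so that the baseline win rate is the complement of the model win rate. All numbers are made up for the example.

```ruby
# Hypothetical values for the corresponding attributes of this class.
observed_agreement       = 0.80  # accuracy
autorater_model_win_rate = 0.60  # model_win_rate
human_model_win_rate     = 0.70  # human_preference_model_win_rate

# Agreement expected by chance: both pick the model, or both pick the
# baseline (assuming binary preferences, so baseline rate = 1 - model rate).
chance_agreement =
  autorater_model_win_rate * human_model_win_rate +
  (1 - autorater_model_win_rate) * (1 - human_model_win_rate)

# Cohen's kappa: observed agreement corrected for chance agreement.
kappa = (observed_agreement - chance_agreement) / (1 - chance_agreement)
```

A kappa near 0 means the autorater agrees with humans no more often than chance would predict, even if raw accuracy looks high.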

#f1_score ⇒ Float

Harmonic mean of precision and recall. Corresponds to the JSON property f1Score

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22366

def f1_score
  @f1_score
end

#false_negative_count ⇒ Fixnum

Number of examples where the autorater chose the baseline model, but humans preferred the model. Corresponds to the JSON property falseNegativeCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22372

def false_negative_count
  @false_negative_count
end

#false_positive_count ⇒ Fixnum

Number of examples where the autorater chose the model, but humans preferred the baseline model. Corresponds to the JSON property falsePositiveCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22378

def false_positive_count
  @false_positive_count
end

#human_preference_baseline_model_win_rate ⇒ Float

Percentage of time humans decided the baseline model had the better response. Corresponds to the JSON property humanPreferenceBaselineModelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22383

def human_preference_baseline_model_win_rate
  @human_preference_baseline_model_win_rate
end

#human_preference_model_win_rate ⇒ Float

Percentage of time humans decided the model had the better response. Corresponds to the JSON property humanPreferenceModelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22388

def human_preference_model_win_rate
  @human_preference_model_win_rate
end

#model_win_rate ⇒ Float

Percentage of time the autorater decided the model had the better response. Corresponds to the JSON property modelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22393

def model_win_rate
  @model_win_rate
end

#precision ⇒ Float

Fraction of cases where the autorater and humans thought the model had a better response out of all cases where the autorater thought the model had a better response. True positives divided by all positives flagged by the autorater. Corresponds to the JSON property precision

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22400

def precision
  @precision
end

#recall ⇒ Float

Fraction of cases where the autorater and humans thought the model had a better response out of all cases where the humans thought the model had a better response. Corresponds to the JSON property recall

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22407

def recall
  @recall
end
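The descriptions above imply a standard confusion matrix over autorater and human preferences. As an illustrative sketch (made-up counts, not from the API), precision, recall, f1_score, and accuracy follow from the four count fields like this:

```ruby
# Hypothetical counts for the corresponding *_count attributes.
true_positive_count  = 40  # autorater and humans both preferred the model
false_positive_count = 10  # autorater preferred the model; humans did not
false_negative_count = 20  # humans preferred the model; autorater did not
true_negative_count  = 30  # both preferred the baseline model

# precision: autorater's model picks that humans agreed with.
precision = true_positive_count.to_f /
            (true_positive_count + false_positive_count)

# recall: human model preferences that the autorater also caught.
recall = true_positive_count.to_f /
         (true_positive_count + false_negative_count)

# f1_score: harmonic mean of precision and recall.
f1_score = 2 * precision * recall / (precision + recall)

# accuracy: agreements (either way) over all examples.
accuracy = (true_positive_count + true_negative_count).to_f /
           (true_positive_count + false_positive_count +
            false_negative_count + true_negative_count)
```

Here "positive" means the autorater preferred the model over the baseline, which is why false_negative_count counts cases where the autorater chose the baseline but humans preferred the model.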

#true_negative_count ⇒ Fixnum

Number of examples where both the autorater and humans decided that the model had the worse response. Corresponds to the JSON property trueNegativeCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22413

def true_negative_count
  @true_negative_count
end

#true_positive_count ⇒ Fixnum

Number of examples where both the autorater and humans decided that the model had the better response. Corresponds to the JSON property truePositiveCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22419

def true_positive_count
  @true_positive_count
end

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/aiplatform_v1/classes.rb', line 22426

def update!(**args)
  @accuracy = args[:accuracy] if args.key?(:accuracy)
  @baseline_model_win_rate = args[:baseline_model_win_rate] if args.key?(:baseline_model_win_rate)
  @cohens_kappa = args[:cohens_kappa] if args.key?(:cohens_kappa)
  @f1_score = args[:f1_score] if args.key?(:f1_score)
  @false_negative_count = args[:false_negative_count] if args.key?(:false_negative_count)
  @false_positive_count = args[:false_positive_count] if args.key?(:false_positive_count)
  @human_preference_baseline_model_win_rate = args[:human_preference_baseline_model_win_rate] if args.key?(:human_preference_baseline_model_win_rate)
  @human_preference_model_win_rate = args[:human_preference_model_win_rate] if args.key?(:human_preference_model_win_rate)
  @model_win_rate = args[:model_win_rate] if args.key?(:model_win_rate)
  @precision = args[:precision] if args.key?(:precision)
  @recall = args[:recall] if args.key?(:recall)
  @true_negative_count = args[:true_negative_count] if args.key?(:true_negative_count)
  @true_positive_count = args[:true_positive_count] if args.key?(:true_positive_count)
end
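Because update! assigns only the keys present in args, passing a partial hash updates those attributes and leaves the rest untouched. A minimal self-contained sketch of the same pattern, using a hypothetical stand-in class (not the real generated class, which lives in the google-apis-aiplatform_v1 gem):

```ruby
# PairwiseMetrics is a hypothetical stand-in illustrating the
# initialize(**args) / update!(**args) pattern shown above.
class PairwiseMetrics
  attr_accessor :accuracy, :model_win_rate

  def initialize(**args)
    update!(**args)
  end

  # Assign only the keys present in args; other attributes keep
  # their current values.
  def update!(**args)
    @accuracy = args[:accuracy] if args.key?(:accuracy)
    @model_win_rate = args[:model_win_rate] if args.key?(:model_win_rate)
  end
end

m = PairwiseMetrics.new(accuracy: 0.8)
m.update!(model_win_rate: 0.6)  # accuracy is left untouched
```

The args.key? guard is what distinguishes "key absent" from "key present with a nil value": update!(accuracy: nil) clears the attribute, while omitting the key preserves it.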