Class: Google::Apis::AiplatformV1::GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/aiplatform_v1/classes.rb,
lib/google/apis/aiplatform_v1/representations.rb

Overview

Metrics for general pairwise text generation evaluation results.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics

Returns a new instance of GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics.



# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23541

def initialize(**args)
   update!(**args)
end

Instance Attribute Details

#accuracy ⇒ Float

Fraction of cases where the autorater agreed with the human raters. Corresponds to the JSON property accuracy

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23469

def accuracy
  @accuracy
end

#baseline_model_win_rate ⇒ Float

Percentage of time the autorater decided the baseline model had the better response. Corresponds to the JSON property baselineModelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23475

def baseline_model_win_rate
  @baseline_model_win_rate
end

#cohens_kappa ⇒ Float

A measurement of agreement between the autorater and human raters that takes the likelihood of random agreement into account. Corresponds to the JSON property cohensKappa

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23481

def cohens_kappa
  @cohens_kappa
end
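Cohen's kappa corrects raw agreement for the agreement expected by chance. A worked sketch of the standard formula, using a hypothetical pairwise confusion matrix (the counts are illustrative only, not values returned by the API):

```ruby
# Hypothetical counts; "positive" means the rater preferred the candidate model.
tp, fp, fn, tn = 40, 10, 5, 45
total = (tp + fp + fn + tn).to_f

observed = (tp + tn) / total                       # raw agreement (accuracy)
# Chance agreement: product of the marginal rates for each label, summed.
p_model    = ((tp + fp) / total) * ((tp + fn) / total)  # both say "model"
p_baseline = ((fn + tn) / total) * ((fp + tn) / total)  # both say "baseline"
expected = p_model + p_baseline

cohens_kappa = (observed - expected) / (1 - expected)
puts cohens_kappa # ~0.70
```

With these counts, observed agreement is 0.85 and chance agreement is 0.5, so kappa lands at 0.7: substantial, but noticeably below the raw accuracy.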

#f1_score ⇒ Float

Harmonic mean of precision and recall. Corresponds to the JSON property f1Score

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23486

def f1_score
  @f1_score
end

#false_negative_count ⇒ Fixnum

Number of examples where the autorater chose the baseline model, but humans preferred the model. Corresponds to the JSON property falseNegativeCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23492

def false_negative_count
  @false_negative_count
end

#false_positive_count ⇒ Fixnum

Number of examples where the autorater chose the model, but humans preferred the baseline model. Corresponds to the JSON property falsePositiveCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23498

def false_positive_count
  @false_positive_count
end

#human_preference_baseline_model_win_rate ⇒ Float

Percentage of time humans decided the baseline model had the better response. Corresponds to the JSON property humanPreferenceBaselineModelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23503

def human_preference_baseline_model_win_rate
  @human_preference_baseline_model_win_rate
end

#human_preference_model_win_rate ⇒ Float

Percentage of time humans decided the model had the better response. Corresponds to the JSON property humanPreferenceModelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23508

def human_preference_model_win_rate
  @human_preference_model_win_rate
end

#model_win_rate ⇒ Float

Percentage of time the autorater decided the model had the better response. Corresponds to the JSON property modelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23513

def model_win_rate
  @model_win_rate
end

#precision ⇒ Float

Fraction of cases where the autorater and humans thought the model had a better response out of all cases where the autorater thought the model had a better response: true positives divided by all predicted positives. Corresponds to the JSON property precision

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23520

def precision
  @precision
end

#recall ⇒ Float

Fraction of cases where the autorater and humans thought the model had a better response out of all cases where the humans thought the model had a better response: true positives divided by all actual positives. Corresponds to the JSON property recall

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23527

def recall
  @recall
end

#true_negative_count ⇒ Fixnum

Number of examples where both the autorater and humans decided that the model had the worse response. Corresponds to the JSON property trueNegativeCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23533

def true_negative_count
  @true_negative_count
end

#true_positive_count ⇒ Fixnum

Number of examples where both the autorater and humans decided that the model had the better response. Corresponds to the JSON property truePositiveCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23539

def true_positive_count
  @true_positive_count
end
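The four count attributes form a confusion matrix over rater preference, and the rate metrics above follow from them by the usual definitions. A self-contained sketch with hypothetical counts (plain Ruby, not calls into the gem):

```ruby
# Hypothetical counts; "positive" means the rater preferred the candidate model.
true_positive_count  = 40  # autorater and humans both preferred the model
false_positive_count = 10  # autorater preferred the model, humans the baseline
false_negative_count = 5   # autorater preferred the baseline, humans the model
true_negative_count  = 45  # autorater and humans both preferred the baseline

precision = true_positive_count.to_f /
            (true_positive_count + false_positive_count)
recall    = true_positive_count.to_f /
            (true_positive_count + false_negative_count)
f1_score  = 2 * precision * recall / (precision + recall)
accuracy  = (true_positive_count + true_negative_count).to_f /
            (true_positive_count + false_positive_count +
             false_negative_count + true_negative_count)

puts precision # 0.8
puts recall    # ~0.889
puts accuracy  # 0.85
```

Note that accuracy here is agreement between the autorater and the human raters, not correctness against a ground-truth label.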

Instance Method Details

#update!(**args) ⇒ Object

Updates the properties of this object.



# File 'lib/google/apis/aiplatform_v1/classes.rb', line 23546

def update!(**args)
  @accuracy = args[:accuracy] if args.key?(:accuracy)
  @baseline_model_win_rate = args[:baseline_model_win_rate] if args.key?(:baseline_model_win_rate)
  @cohens_kappa = args[:cohens_kappa] if args.key?(:cohens_kappa)
  @f1_score = args[:f1_score] if args.key?(:f1_score)
  @false_negative_count = args[:false_negative_count] if args.key?(:false_negative_count)
  @false_positive_count = args[:false_positive_count] if args.key?(:false_positive_count)
  @human_preference_baseline_model_win_rate = args[:human_preference_baseline_model_win_rate] if args.key?(:human_preference_baseline_model_win_rate)
  @human_preference_model_win_rate = args[:human_preference_model_win_rate] if args.key?(:human_preference_model_win_rate)
  @model_win_rate = args[:model_win_rate] if args.key?(:model_win_rate)
  @precision = args[:precision] if args.key?(:precision)
  @recall = args[:recall] if args.key?(:recall)
  @true_negative_count = args[:true_negative_count] if args.key?(:true_negative_count)
  @true_positive_count = args[:true_positive_count] if args.key?(:true_positive_count)
end
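Each assignment in update! is guarded by args.key?, so only keys actually present in the call are written and all other attributes keep their current values; the constructor simply delegates to update!. The merge semantics can be sketched in plain Ruby (a standalone mock with two of the attributes, not the gem class itself):

```ruby
# Standalone sketch of the key-presence semantics used by update!.
class PairwiseMetricsSketch
  attr_accessor :accuracy, :model_win_rate

  def initialize(**args)
    update!(**args)
  end

  # Only keys present in args are assigned, so a later partial
  # update! leaves the other attributes untouched.
  def update!(**args)
    @accuracy = args[:accuracy] if args.key?(:accuracy)
    @model_win_rate = args[:model_win_rate] if args.key?(:model_win_rate)
  end
end

m = PairwiseMetricsSketch.new(accuracy: 0.85, model_win_rate: 0.6)
m.update!(accuracy: 0.9)  # model_win_rate is not in args, so it is kept
m.accuracy       # => 0.9
m.model_win_rate # => 0.6
```

This is why a partial update! never resets omitted fields to nil, which matters when patching a deserialized response in place.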