Class: Google::Apis::AiplatformV1::GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics

Inherits:
Object
Includes:
Core::Hashable, Core::JsonObjectSupport
Defined in:
lib/google/apis/aiplatform_v1/classes.rb,
lib/google/apis/aiplatform_v1/representations.rb

Overview

Metrics for general pairwise text generation evaluation results.

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(**args) ⇒ GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics

Returns a new instance of GoogleCloudAiplatformV1SchemaModelevaluationMetricsPairwiseTextGenerationEvaluationMetrics.



# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19735

def initialize(**args)
   update!(**args)
end
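The construction pattern above can be sketched stand-alone. The class name and attribute subset below are illustrative stand-ins, not the gem's actual internals; real usage would go through the google-apis-aiplatform_v1 gem.

```ruby
# Minimal stand-in for the generated class's keyword-argument pattern:
# initialize(**args) delegates to update!, which only overwrites the
# attributes whose keys are actually present in args.
class PairwiseMetricsSketch
  attr_accessor :accuracy, :model_win_rate

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @accuracy = args[:accuracy] if args.key?(:accuracy)
    @model_win_rate = args[:model_win_rate] if args.key?(:model_win_rate)
  end
end

m = PairwiseMetricsSketch.new(accuracy: 0.85, model_win_rate: 0.6)
m.update!(accuracy: 0.9) # model_win_rate is left untouched
```

Because `update!` checks `args.key?`, omitted keys leave existing values in place rather than resetting them to nil.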

Instance Attribute Details

#accuracy ⇒ Float

Fraction of cases where the autorater agreed with the human raters. Corresponds to the JSON property accuracy

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19663

def accuracy
  @accuracy
end

#baseline_model_win_rate ⇒ Float

Percentage of time the autorater decided the baseline model had the better response. Corresponds to the JSON property baselineModelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19669

def baseline_model_win_rate
  @baseline_model_win_rate
end

#cohens_kappa ⇒ Float

A measurement of agreement between the autorater and human raters that takes the likelihood of random agreement into account. Corresponds to the JSON property cohensKappa

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19675

def cohens_kappa
  @cohens_kappa
end
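As an illustration of how cohens_kappa relates to the raw agreement counts documented below (the true/false positive/negative counts), a chance-corrected agreement score can be computed from a 2×2 confusion matrix. The helper below is a hypothetical sketch, not part of the gem; the API returns the value precomputed, and the exact formula it uses is an assumption here.

```ruby
# Cohen's kappa for a 2x2 autorater-vs-human confusion matrix.
# tp/fp/fn/tn follow this class's convention: "positive" means the
# candidate model (not the baseline) was preferred.
def cohens_kappa(tp:, fp:, fn:, tn:)
  n  = (tp + fp + fn + tn).to_f
  po = (tp + tn) / n # observed agreement (same quantity as #accuracy)
  # chance agreement: P(both say positive) + P(both say negative)
  pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
  (po - pe) / (1 - pe)
end

cohens_kappa(tp: 40, fp: 10, fn: 5, tn: 45) # ≈ 0.7
```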

#f1_score ⇒ Float

Harmonic mean of precision and recall. Corresponds to the JSON property f1Score

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19680

def f1_score
  @f1_score
end

#false_negative_count ⇒ Fixnum

Number of examples where the autorater chose the baseline model, but humans preferred the model. Corresponds to the JSON property falseNegativeCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19686

def false_negative_count
  @false_negative_count
end

#false_positive_count ⇒ Fixnum

Number of examples where the autorater chose the model, but humans preferred the baseline model. Corresponds to the JSON property falsePositiveCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19692

def false_positive_count
  @false_positive_count
end

#human_preference_baseline_model_win_rate ⇒ Float

Percentage of time humans decided the baseline model had the better response. Corresponds to the JSON property humanPreferenceBaselineModelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19697

def human_preference_baseline_model_win_rate
  @human_preference_baseline_model_win_rate
end

#human_preference_model_win_rate ⇒ Float

Percentage of time humans decided the model had the better response. Corresponds to the JSON property humanPreferenceModelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19702

def human_preference_model_win_rate
  @human_preference_model_win_rate
end

#model_win_rate ⇒ Float

Percentage of time the autorater decided the model had the better response. Corresponds to the JSON property modelWinRate

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19707

def model_win_rate
  @model_win_rate
end

#precision ⇒ Float

Fraction of cases where the autorater and humans thought the model had a better response out of all cases where the autorater thought the model had a better response. True positives divided by all predicted positives. Corresponds to the JSON property precision

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19714

def precision
  @precision
end

#recall ⇒ Float

Fraction of cases where the autorater and humans thought the model had a better response out of all cases where the humans thought the model had a better response. Corresponds to the JSON property recall

Returns:

  • (Float)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19721

def recall
  @recall
end

#true_negative_count ⇒ Fixnum

Number of examples where both the autorater and humans decided that the model had the worse response. Corresponds to the JSON property trueNegativeCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19727

def true_negative_count
  @true_negative_count
end

#true_positive_count ⇒ Fixnum

Number of examples where both the autorater and humans decided that the model had the better response. Corresponds to the JSON property truePositiveCount

Returns:

  • (Fixnum)


# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19733

def true_positive_count
  @true_positive_count
end
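Taken together, the four count attributes above determine the accuracy, precision, recall, and f1_score values as documented. A small sketch of those relationships, assuming this class's convention that "positive" means the model (not the baseline) was preferred; the helper name is hypothetical, and the API returns these values precomputed:

```ruby
# Derive the ratio metrics from the confusion counts.
def derived_pairwise_metrics(tp:, fp:, fn:, tn:)
  accuracy  = (tp + tn).to_f / (tp + fp + fn + tn)
  precision = tp.to_f / (tp + fp) # of the autorater's "model wins", the fraction humans agreed with
  recall    = tp.to_f / (tp + fn) # of the humans' "model wins", the fraction the autorater caught
  f1        = 2 * precision * recall / (precision + recall) # harmonic mean
  { accuracy: accuracy, precision: precision, recall: recall, f1_score: f1 }
end

derived_pairwise_metrics(tp: 40, fp: 10, fn: 5, tn: 45)
# accuracy 0.85, precision 0.8, recall ≈ 0.889, f1_score ≈ 0.842
```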

Instance Method Details

#update!(**args) ⇒ Object

Update properties of this object



# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19740

def update!(**args)
  @accuracy = args[:accuracy] if args.key?(:accuracy)
  @baseline_model_win_rate = args[:baseline_model_win_rate] if args.key?(:baseline_model_win_rate)
  @cohens_kappa = args[:cohens_kappa] if args.key?(:cohens_kappa)
  @f1_score = args[:f1_score] if args.key?(:f1_score)
  @false_negative_count = args[:false_negative_count] if args.key?(:false_negative_count)
  @false_positive_count = args[:false_positive_count] if args.key?(:false_positive_count)
  @human_preference_baseline_model_win_rate = args[:human_preference_baseline_model_win_rate] if args.key?(:human_preference_baseline_model_win_rate)
  @human_preference_model_win_rate = args[:human_preference_model_win_rate] if args.key?(:human_preference_model_win_rate)
  @model_win_rate = args[:model_win_rate] if args.key?(:model_win_rate)
  @precision = args[:precision] if args.key?(:precision)
  @recall = args[:recall] if args.key?(:recall)
  @true_negative_count = args[:true_negative_count] if args.key?(:true_negative_count)
  @true_positive_count = args[:true_positive_count] if args.key?(:true_positive_count)
end