Class: Google::Apis::AiplatformV1::GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in:
  - lib/google/apis/aiplatform_v1/classes.rb
  - lib/google/apis/aiplatform_v1/representations.rb
Instance Attribute Summary

- #confidence_threshold ⇒ Float
  Metrics are computed under the assumption that the Model never returns predictions with a score lower than this value.
- #confusion_matrix ⇒ Google::Apis::AiplatformV1::GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrix
  Confusion matrix of the evaluation for this confidence_threshold.
- #f1_score ⇒ Float
  The harmonic mean of recall and precision.
- #f1_score_at1 ⇒ Float
  The harmonic mean of recallAt1 and precisionAt1.
- #f1_score_macro ⇒ Float
  Macro-averaged F1 Score.
- #f1_score_micro ⇒ Float
  Micro-averaged F1 Score.
- #false_negative_count ⇒ Fixnum
  The number of ground truth labels that are not matched by a Model-created label.
- #false_positive_count ⇒ Fixnum
  The number of Model-created labels that do not match a ground truth label.
- #false_positive_rate ⇒ Float
  False Positive Rate for the given confidence threshold.
- #false_positive_rate_at1 ⇒ Float
  The False Positive Rate when considering only the label with the highest prediction score, and not below the confidence threshold, for each DataItem.
- #max_predictions ⇒ Fixnum
  Metrics are computed under the assumption that the Model always returns at most this many predictions (ordered by their score, in descending order), all of which still need to meet the confidenceThreshold.
- #precision ⇒ Float
  Precision for the given confidence threshold.
- #precision_at1 ⇒ Float
  The precision when considering only the label with the highest prediction score, and not below the confidence threshold, for each DataItem.
- #recall ⇒ Float
  Recall (True Positive Rate) for the given confidence threshold.
- #recall_at1 ⇒ Float
  The Recall (True Positive Rate) when considering only the label with the highest prediction score, and not below the confidence threshold, for each DataItem.
- #true_negative_count ⇒ Fixnum
  The number of labels that the Model did not create, and which, had it created them, would not have matched a ground truth label.
- #true_positive_count ⇒ Fixnum
  The number of Model-created labels that match a ground truth label.
Instance Method Summary

- #initialize(**args) ⇒ GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics (constructor)
  A new instance of GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics
Returns a new instance of GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics.
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19774

def initialize(**args)
  update!(**args)
end
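As a quick orientation, here is a minimal, hypothetical sketch of building one of these objects by hand; the keyword arguments mirror the accessors documented below, and all metric values are made up for illustration (in practice these objects usually come back from an evaluation response rather than being constructed directly):

require 'google/apis/aiplatform_v1'

# Shorten the long class name for readability.
klass = Google::Apis::AiplatformV1::GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics

# Hypothetical metric values for a single threshold.
metrics = klass.new(
  confidence_threshold: 0.5,
  precision: 0.82,
  recall: 0.74
)

puts metrics.confidence_threshold # => 0.5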
Instance Attribute Details
#confidence_threshold ⇒ Float
Metrics are computed under the assumption that the Model never returns
predictions with a score lower than this value.
Corresponds to the JSON property confidenceThreshold
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19683

def confidence_threshold
  @confidence_threshold
end
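In the parent ClassificationEvaluationMetrics message, these entries typically arrive as an array with one element per threshold. A hedged sketch of selecting the entry nearest a desired operating point; the array built here is a hypothetical stand-in for one returned by the service:

require 'google/apis/aiplatform_v1'

klass = Google::Apis::AiplatformV1::GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics

# Hypothetical per-threshold entries.
confidence_metrics = [0.1, 0.5, 0.9].map do |t|
  klass.new(confidence_threshold: t, precision: t, recall: 1.0 - t)
end

target = 0.5
nearest = confidence_metrics.min_by { |m| (m.confidence_threshold - target).abs }
puts "threshold=#{nearest.confidence_threshold} precision=#{nearest.precision}"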
#confusion_matrix ⇒ Google::Apis::AiplatformV1::GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrix
Confusion matrix of the evaluation for this confidence_threshold.
Corresponds to the JSON property confusionMatrix
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19688

def confusion_matrix
  @confusion_matrix
end
#f1_score ⇒ Float
The harmonic mean of recall and precision. For summary metrics, it computes
the micro-averaged F1 score.
Corresponds to the JSON property f1Score
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19694

def f1_score
  @f1_score
end
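For reference, the harmonic mean can be recomputed from the precision and recall reported on the same entry; a minimal sketch with hypothetical values (the stored f1_score should agree up to rounding):

precision = 0.82 # hypothetical value of #precision
recall    = 0.74 # hypothetical value of #recall

# Harmonic mean: 2pr / (p + r), guarding against division by zero.
f1 = (precision + recall).zero? ? 0.0 : 2 * precision * recall / (precision + recall)
puts f1.round(4) # => 0.7779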
#f1_score_at1 ⇒ Float
The harmonic mean of recallAt1 and precisionAt1.
Corresponds to the JSON property f1ScoreAt1
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19699

def f1_score_at1
  @f1_score_at1
end
#f1_score_macro ⇒ Float
Macro-averaged F1 Score.
Corresponds to the JSON property f1ScoreMacro
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19704

def f1_score_macro
  @f1_score_macro
end
#f1_score_micro ⇒ Float
Micro-averaged F1 Score.
Corresponds to the JSON property f1ScoreMicro
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19709

def f1_score_micro
  @f1_score_micro
end
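The macro/micro distinction: macro averages the per-class F1 scores equally, while micro pools the counts across classes before computing a single F1. A small illustrative sketch with hypothetical per-class counts, showing how an imbalanced minority class pulls the two apart:

# Hypothetical per-class confusion counts.
per_class = [
  { tp: 90, fp: 10, fn: 10 }, # majority class
  { tp: 5,  fp: 5,  fn: 15 }  # minority class
]

f1 = lambda do |tp:, fp:, fn:|
  prec = tp.to_f / (tp + fp)
  rec  = tp.to_f / (tp + fn)
  (prec + rec).zero? ? 0.0 : 2 * prec * rec / (prec + rec)
end

# Macro: average the per-class F1 scores equally.
macro = per_class.sum { |c| f1.call(**c) } / per_class.size

# Micro: pool the counts across classes, then compute one F1.
totals = per_class.each_with_object(Hash.new(0)) do |c, h|
  c.each { |k, v| h[k] += v }
end
micro = f1.call(**totals)

puts "macro=#{macro.round(3)} micro=#{micro.round(3)}" # macro=0.617 micro=0.826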
#false_negative_count ⇒ Fixnum
The number of ground truth labels that are not matched by a Model-created
label.
Corresponds to the JSON property falseNegativeCount
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19715

def false_negative_count
  @false_negative_count
end
#false_positive_count ⇒ Fixnum
The number of Model-created labels that do not match a ground truth label.
Corresponds to the JSON property falsePositiveCount
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19720

def false_positive_count
  @false_positive_count
end
#false_positive_rate ⇒ Float
False Positive Rate for the given confidence threshold.
Corresponds to the JSON property falsePositiveRate
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19725

def false_positive_rate
  @false_positive_rate
end
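Assuming the standard definition, FPR = FP / (FP + TN), the rate can be cross-checked from the count fields on the same entry; a hedged sketch with hypothetical counts:

# Hypothetical counts from one ConfidenceMetrics entry.
fp = 15 # false_positive_count
tn = 85 # true_negative_count

fpr = fp.to_f / (fp + tn) # FP / (FP + TN)
puts fpr # => 0.15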
#false_positive_rate_at1 ⇒ Float
The False Positive Rate when considering only the label with the highest
prediction score, and not below the confidence threshold, for each DataItem.
Corresponds to the JSON property falsePositiveRateAt1
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19731

def false_positive_rate_at1
  @false_positive_rate_at1
end
#max_predictions ⇒ Fixnum
Metrics are computed under the assumption that the Model always returns at
most this many predictions (ordered by their score, in descending order), all
of which still need to meet the confidenceThreshold.
Corresponds to the JSON property maxPredictions
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19738

def max_predictions
  @max_predictions
end
#precision ⇒ Float
Precision for the given confidence threshold.
Corresponds to the JSON property precision
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19743

def precision
  @precision
end
#precision_at1 ⇒ Float
The precision when considering only the label with the highest prediction
score, and not below the confidence threshold, for each DataItem.
Corresponds to the JSON property precisionAt1
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19749

def precision_at1
  @precision_at1
end
#recall ⇒ Float
Recall (True Positive Rate) for the given confidence threshold.
Corresponds to the JSON property recall
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19754

def recall
  @recall
end
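Under the usual definitions, precision = TP / (TP + FP) and recall = TP / (TP + FN), so both values can be cross-checked from the count fields on the same entry; a minimal sketch with hypothetical counts:

# Hypothetical counts from one ConfidenceMetrics entry.
tp = 80 # true_positive_count
fp = 20 # false_positive_count
fn = 10 # false_negative_count

precision = tp.to_f / (tp + fp) # TP / (TP + FP)
recall    = tp.to_f / (tp + fn) # TP / (TP + FN)
puts format('precision=%.3f recall=%.3f', precision, recall) # precision=0.800 recall=0.889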
#recall_at1 ⇒ Float
The Recall (True Positive Rate) when considering only the label with the
highest prediction score, and not below the confidence threshold, for each
DataItem.
Corresponds to the JSON property recallAt1
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19761

def recall_at1
  @recall_at1
end
#true_negative_count ⇒ Fixnum
The number of labels that the Model did not create, and which, had it created
them, would not have matched a ground truth label.
Corresponds to the JSON property trueNegativeCount
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19767

def true_negative_count
  @true_negative_count
end
#true_positive_count ⇒ Fixnum
The number of Model-created labels that match a ground truth label.
Corresponds to the JSON property truePositiveCount
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19772

def true_positive_count
  @true_positive_count
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 19779

def update!(**args)
  @confidence_threshold = args[:confidence_threshold] if args.key?(:confidence_threshold)
  @confusion_matrix = args[:confusion_matrix] if args.key?(:confusion_matrix)
  @f1_score = args[:f1_score] if args.key?(:f1_score)
  @f1_score_at1 = args[:f1_score_at1] if args.key?(:f1_score_at1)
  @f1_score_macro = args[:f1_score_macro] if args.key?(:f1_score_macro)
  @f1_score_micro = args[:f1_score_micro] if args.key?(:f1_score_micro)
  @false_negative_count = args[:false_negative_count] if args.key?(:false_negative_count)
  @false_positive_count = args[:false_positive_count] if args.key?(:false_positive_count)
  @false_positive_rate = args[:false_positive_rate] if args.key?(:false_positive_rate)
  @false_positive_rate_at1 = args[:false_positive_rate_at1] if args.key?(:false_positive_rate_at1)
  @max_predictions = args[:max_predictions] if args.key?(:max_predictions)
  @precision = args[:precision] if args.key?(:precision)
  @precision_at1 = args[:precision_at1] if args.key?(:precision_at1)
  @recall = args[:recall] if args.key?(:recall)
  @recall_at1 = args[:recall_at1] if args.key?(:recall_at1)
  @true_negative_count = args[:true_negative_count] if args.key?(:true_negative_count)
  @true_positive_count = args[:true_positive_count] if args.key?(:true_positive_count)
end
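A brief usage sketch: update! takes the same keyword arguments as the constructor and, as the guards above show, only overwrites the keys you pass (all values here are hypothetical):

require 'google/apis/aiplatform_v1'

klass = Google::Apis::AiplatformV1::GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics

# Hypothetical starting values.
metrics = klass.new(confidence_threshold: 0.5, precision: 0.82)

metrics.update!(precision: 0.9)   # only the keys passed are overwritten
puts metrics.confidence_threshold # => 0.5 (untouched)
puts metrics.precision            # => 0.9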