Class: Google::Apis::AiplatformV1::GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics
- Inherits: Object
- Includes: Core::Hashable, Core::JsonObjectSupport
- Defined in: lib/google/apis/aiplatform_v1/classes.rb,
  lib/google/apis/aiplatform_v1/representations.rb
Instance Attribute Summary
-
#confidence_threshold ⇒ Float
Metrics are computed with an assumption that the Model never returns predictions with score lower than this value.
-
#confusion_matrix ⇒ Google::Apis::AiplatformV1::GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrix
Confusion matrix of the evaluation for this confidence_threshold.
-
#f1_score ⇒ Float
The harmonic mean of recall and precision.
-
#f1_score_at1 ⇒ Float
The harmonic mean of recallAt1 and precisionAt1.
-
#f1_score_macro ⇒ Float
Macro-averaged F1 Score.
-
#f1_score_micro ⇒ Float
Micro-averaged F1 Score.
-
#false_negative_count ⇒ Fixnum
The number of ground truth labels that are not matched by a Model created label.
-
#false_positive_count ⇒ Fixnum
The number of Model created labels that do not match a ground truth label.
-
#false_positive_rate ⇒ Float
False Positive Rate for the given confidence threshold.
-
#false_positive_rate_at1 ⇒ Float
The False Positive Rate when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem.
-
#max_predictions ⇒ Fixnum
Metrics are computed with an assumption that the Model always returns at most this many predictions (ordered by score, descending), but they all still need to meet the confidenceThreshold.
-
#precision ⇒ Float
Precision for the given confidence threshold.
-
#precision_at1 ⇒ Float
The precision when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem.
-
#recall ⇒ Float
Recall (True Positive Rate) for the given confidence threshold.
-
#recall_at1 ⇒ Float
The Recall (True Positive Rate) when only considering the label that has the highest prediction score and not below the confidence threshold for each DataItem.
-
#true_negative_count ⇒ Fixnum
The number of labels that were not created by the Model, but which, had they been created, would not have matched a ground truth label.
-
#true_positive_count ⇒ Fixnum
The number of Model created labels that match a ground truth label.
Instance Method Summary
-
#initialize(**args) ⇒ GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics
constructor
A new instance of GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics.
-
#update!(**args) ⇒ Object
Update properties of this object.
Constructor Details
#initialize(**args) ⇒ GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics
Returns a new instance of GoogleCloudAiplatformV1SchemaModelevaluationMetricsClassificationEvaluationMetricsConfidenceMetrics.
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28116

def initialize(**args)
  update!(**args)
end
Instance Attribute Details
#confidence_threshold ⇒ Float
Metrics are computed with an assumption that the Model never returns
predictions with score lower than this value.
Corresponds to the JSON property confidenceThreshold
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28025

def confidence_threshold
  @confidence_threshold
end
#confusion_matrix ⇒ Google::Apis::AiplatformV1::GoogleCloudAiplatformV1SchemaModelevaluationMetricsConfusionMatrix
Confusion matrix of the evaluation for this confidence_threshold.
Corresponds to the JSON property confusionMatrix
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28030

def confusion_matrix
  @confusion_matrix
end
#f1_score ⇒ Float
The harmonic mean of recall and precision. For summary metrics, it computes
the micro-averaged F1 score.
Corresponds to the JSON property f1Score
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28036

def f1_score
  @f1_score
end
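As a quick illustration of the harmonic-mean relationship described above (the precision and recall values below are made up, not real evaluation output):

```ruby
# F1 is the harmonic mean of precision and recall.
# Illustrative values, not output from the API.
precision = 0.8
recall    = 0.6

f1_score = 2 * precision * recall / (precision + recall)
puts f1_score  # harmonic mean, ≈ 0.686
```

The harmonic mean penalizes imbalance: it is always at most the arithmetic mean of the two values, and drops sharply when either precision or recall is low.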
#f1_score_at1 ⇒ Float
The harmonic mean of recallAt1 and precisionAt1.
Corresponds to the JSON property f1ScoreAt1
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28041

def f1_score_at1
  @f1_score_at1
end
#f1_score_macro ⇒ Float
Macro-averaged F1 Score.
Corresponds to the JSON property f1ScoreMacro
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28046

def f1_score_macro
  @f1_score_macro
end
#f1_score_micro ⇒ Float
Micro-averaged F1 Score.
Corresponds to the JSON property f1ScoreMicro
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28051

def f1_score_micro
  @f1_score_micro
end
#false_negative_count ⇒ Fixnum
The number of ground truth labels that are not matched by a Model created
label.
Corresponds to the JSON property falseNegativeCount
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28057

def false_negative_count
  @false_negative_count
end
#false_positive_count ⇒ Fixnum
The number of Model created labels that do not match a ground truth label.
Corresponds to the JSON property falsePositiveCount
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28062

def false_positive_count
  @false_positive_count
end
#false_positive_rate ⇒ Float
False Positive Rate for the given confidence threshold.
Corresponds to the JSON property falsePositiveRate
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28067

def false_positive_rate
  @false_positive_rate
end
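The False Positive Rate relates to the count fields on this same object as FP / (FP + TN). A minimal sketch of that relationship, using illustrative counts:

```ruby
# False Positive Rate = FP / (FP + TN).
# The counts below are illustrative, not real evaluation output.
false_positive_count = 25
true_negative_count  = 975

false_positive_rate = false_positive_count.to_f /
                      (false_positive_count + true_negative_count)
puts false_positive_rate  # 25 / 1000
```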
#false_positive_rate_at1 ⇒ Float
The False Positive Rate when only considering the label that has the highest
prediction score and not below the confidence threshold for each DataItem.
Corresponds to the JSON property falsePositiveRateAt1
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28073

def false_positive_rate_at1
  @false_positive_rate_at1
end
#max_predictions ⇒ Fixnum
Metrics are computed with an assumption that the Model always returns at most
this many predictions (ordered by score, descending), but they all still need
to meet the confidenceThreshold.
Corresponds to the JSON property maxPredictions
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28080

def max_predictions
  @max_predictions
end
#precision ⇒ Float
Precision for the given confidence threshold.
Corresponds to the JSON property precision
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28085

def precision
  @precision
end
#precision_at1 ⇒ Float
The precision when only considering the label that has the highest prediction
score and not below the confidence threshold for each DataItem.
Corresponds to the JSON property precisionAt1
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28091

def precision_at1
  @precision_at1
end
#recall ⇒ Float
Recall (True Positive Rate) for the given confidence threshold.
Corresponds to the JSON property recall
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28096

def recall
  @recall
end
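Precision and recall both derive from the count fields exposed by this object: precision = TP / (TP + FP) and recall (True Positive Rate) = TP / (TP + FN). A sketch with illustrative counts:

```ruby
# Precision = TP / (TP + FP); Recall = TP / (TP + FN).
# Illustrative counts, not real evaluation output.
true_positive_count  = 90
false_positive_count = 10
false_negative_count = 30

precision = true_positive_count.to_f /
            (true_positive_count + false_positive_count)  # 90 / 100
recall    = true_positive_count.to_f /
            (true_positive_count + false_negative_count)  # 90 / 120
```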
#recall_at1 ⇒ Float
The Recall (True Positive Rate) when only considering the label that has the
highest prediction score and not below the confidence threshold for each
DataItem.
Corresponds to the JSON property recallAt1
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28103

def recall_at1
  @recall_at1
end
#true_negative_count ⇒ Fixnum
The number of labels that were not created by the Model, but which, had they
been created, would not have matched a ground truth label.
Corresponds to the JSON property trueNegativeCount
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28109

def true_negative_count
  @true_negative_count
end
#true_positive_count ⇒ Fixnum
The number of Model created labels that match a ground truth label.
Corresponds to the JSON property truePositiveCount
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28114

def true_positive_count
  @true_positive_count
end
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/aiplatform_v1/classes.rb', line 28121

def update!(**args)
  @confidence_threshold = args[:confidence_threshold] if args.key?(:confidence_threshold)
  @confusion_matrix = args[:confusion_matrix] if args.key?(:confusion_matrix)
  @f1_score = args[:f1_score] if args.key?(:f1_score)
  @f1_score_at1 = args[:f1_score_at1] if args.key?(:f1_score_at1)
  @f1_score_macro = args[:f1_score_macro] if args.key?(:f1_score_macro)
  @f1_score_micro = args[:f1_score_micro] if args.key?(:f1_score_micro)
  @false_negative_count = args[:false_negative_count] if args.key?(:false_negative_count)
  @false_positive_count = args[:false_positive_count] if args.key?(:false_positive_count)
  @false_positive_rate = args[:false_positive_rate] if args.key?(:false_positive_rate)
  @false_positive_rate_at1 = args[:false_positive_rate_at1] if args.key?(:false_positive_rate_at1)
  @max_predictions = args[:max_predictions] if args.key?(:max_predictions)
  @precision = args[:precision] if args.key?(:precision)
  @precision_at1 = args[:precision_at1] if args.key?(:precision_at1)
  @recall = args[:recall] if args.key?(:recall)
  @recall_at1 = args[:recall_at1] if args.key?(:recall_at1)
  @true_negative_count = args[:true_negative_count] if args.key?(:true_negative_count)
  @true_positive_count = args[:true_positive_count] if args.key?(:true_positive_count)
end
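The `update!` body follows a simple keyword-splat pattern: each property is overwritten only when its key is present in `args`, so passing a subset of keys leaves the other attributes untouched. A minimal self-contained sketch of that pattern, using a stand-in class (not the real gem class, which requires the `google-apis-aiplatform_v1` gem):

```ruby
# Stand-in class mimicking the initialize/update! pattern used by this
# gem's generated classes. Only two of the attributes are shown.
class ConfidenceMetricsSketch
  attr_accessor :precision, :recall

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    # Each attribute is assigned only if its key was actually passed.
    @precision = args[:precision] if args.key?(:precision)
    @recall    = args[:recall]    if args.key?(:recall)
  end
end

m = ConfidenceMetricsSketch.new(precision: 0.9)
m.update!(recall: 0.75)  # sets recall; precision is left untouched
```

The `args.key?` guard is what distinguishes "key absent" from "key present with a nil value": calling `update!` with no arguments changes nothing, while `update!(precision: nil)` would explicitly clear the attribute.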