Class: OpenAI::Models::Evals::RunRetrieveResponse

Inherits: Internal::Type::BaseModel
  - Object
  - Internal::Type::BaseModel
  - OpenAI::Models::Evals::RunRetrieveResponse

Defined in: lib/openai/models/evals/run_retrieve_response.rb
Overview
Defined Under Namespace
Modules: DataSource
Classes: PerModelUsage, PerTestingCriteriaResult, ResultCounts
Instance Attribute Summary collapse
-
#created_at ⇒ Integer
Unix timestamp (in seconds) when the evaluation run was created.
-
#data_source ⇒ OpenAI::Models::Evals::CreateEvalJSONLRunDataSource, ...
Information about the run’s data source.
-
#error ⇒ OpenAI::Models::Evals::EvalAPIError
An object representing an error response from the Eval API.
-
#eval_id ⇒ String
The identifier of the associated evaluation.
-
#id ⇒ String
Unique identifier for the evaluation run.
-
#metadata ⇒ Hash{Symbol=>String}?
Set of 16 key-value pairs that can be attached to an object.
-
#model ⇒ String
The model that is evaluated, if applicable.
-
#name ⇒ String
The name of the evaluation run.
-
#object ⇒ Symbol, :"eval.run"
The type of the object.
-
#per_model_usage ⇒ Array<OpenAI::Models::Evals::RunRetrieveResponse::PerModelUsage>
Usage statistics for each model during the evaluation run.
-
#per_testing_criteria_results ⇒ Array<OpenAI::Models::Evals::RunRetrieveResponse::PerTestingCriteriaResult>
Results per testing criteria applied during the evaluation run.
-
#report_url ⇒ String
The URL to the rendered evaluation run report on the UI dashboard.
-
#result_counts ⇒ OpenAI::Models::Evals::RunRetrieveResponse::ResultCounts
Counters summarizing the outcomes of the evaluation run.
-
#status ⇒ String
The status of the evaluation run.
Class Method Summary collapse
Instance Method Summary collapse
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, #inspect, inspect, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(failed:, passed:, testing_criteria:) ⇒ Object

# File 'lib/openai/models/evals/run_retrieve_response.rb', line 99
Instance Attribute Details
#created_at ⇒ Integer
Unix timestamp (in seconds) when the evaluation run was created.
# File 'lib/openai/models/evals/run_retrieve_response.rb', line 18
required :created_at, Integer
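Because #created_at is a plain Integer Unix timestamp, converting it for display is a one-liner with the standard library (the timestamp below is illustrative, not from a real run):

```ruby
require "time" # for Time#iso8601

# created_at arrives as an Integer Unix timestamp (seconds since epoch).
created_at = 1_700_000_000 # illustrative value

# Convert to a Time in UTC for display or comparison.
run_time = Time.at(created_at).utc
puts run_time.iso8601 # => "2023-11-14T22:13:20Z"
```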
#data_source ⇒ OpenAI::Models::Evals::CreateEvalJSONLRunDataSource, ...
Information about the run’s data source.
# File 'lib/openai/models/evals/run_retrieve_response.rb', line 24
required :data_source, union: -> { OpenAI::Models::Evals::RunRetrieveResponse::DataSource }
#error ⇒ OpenAI::Models::Evals::EvalAPIError
An object representing an error response from the Eval API.
# File 'lib/openai/models/evals/run_retrieve_response.rb', line 30
required :error, -> { OpenAI::Evals::EvalAPIError }
#eval_id ⇒ String
The identifier of the associated evaluation.
# File 'lib/openai/models/evals/run_retrieve_response.rb', line 36
required :eval_id, String
#id ⇒ String
Unique identifier for the evaluation run.
# File 'lib/openai/models/evals/run_retrieve_response.rb', line 12
required :id, String
#metadata ⇒ Hash{Symbol=>String}?
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
# File 'lib/openai/models/evals/run_retrieve_response.rb', line 47
required :metadata, OpenAI::Internal::Type::HashOf[String], nil?: true
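The documented limits (at most 16 pairs, keys up to 64 characters, values up to 512 characters) are enforced by the API server-side; a hypothetical client-side pre-check mirroring them could look like this:

```ruby
# Hypothetical helper (not part of the gem) that mirrors the documented
# metadata constraints: <= 16 pairs, keys <= 64 chars, values <= 512 chars.
def valid_metadata?(metadata)
  return false if metadata.size > 16

  metadata.all? do |key, value|
    key.to_s.length <= 64 && value.to_s.length <= 512
  end
end

valid_metadata?({ team: "evals", ticket: "EV-1234" }) # => true
valid_metadata?({ note: "x" * 513 })                  # => false
```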
#model ⇒ String
The model that is evaluated, if applicable.
# File 'lib/openai/models/evals/run_retrieve_response.rb', line 53
required :model, String
#name ⇒ String
The name of the evaluation run.
# File 'lib/openai/models/evals/run_retrieve_response.rb', line 59
required :name, String
#object ⇒ Symbol, :"eval.run"
The type of the object. Always “eval.run”.
# File 'lib/openai/models/evals/run_retrieve_response.rb', line 65
required :object, const: :"eval.run"
#per_model_usage ⇒ Array<OpenAI::Models::Evals::RunRetrieveResponse::PerModelUsage>
Usage statistics for each model during the evaluation run.
# File 'lib/openai/models/evals/run_retrieve_response.rb', line 71
required :per_model_usage,
         -> { OpenAI::Internal::Type::ArrayOf[OpenAI::Models::Evals::RunRetrieveResponse::PerModelUsage] }
#per_testing_criteria_results ⇒ Array<OpenAI::Models::Evals::RunRetrieveResponse::PerTestingCriteriaResult>
Results per testing criteria applied during the evaluation run.
# File 'lib/openai/models/evals/run_retrieve_response.rb', line 78
required :per_testing_criteria_results,
         -> { OpenAI::Internal::Type::ArrayOf[OpenAI::Models::Evals::RunRetrieveResponse::PerTestingCriteriaResult] }
#report_url ⇒ String
The URL to the rendered evaluation run report on the UI dashboard.
# File 'lib/openai/models/evals/run_retrieve_response.rb', line 85
required :report_url, String
#result_counts ⇒ OpenAI::Models::Evals::RunRetrieveResponse::ResultCounts
Counters summarizing the outcomes of the evaluation run.
# File 'lib/openai/models/evals/run_retrieve_response.rb', line 91
required :result_counts, -> { OpenAI::Models::Evals::RunRetrieveResponse::ResultCounts }
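A common use of these counters is computing a pass rate. The sketch below stands in for ResultCounts with a plain Struct and assumes the errored/failed/passed/total field names from the Evals API response shape; the illustrative numbers are made up:

```ruby
# Stand-in for RunRetrieveResponse::ResultCounts (assumed fields:
# errored, failed, passed, total, matching the Evals API response shape).
ResultCounts = Struct.new(:errored, :failed, :passed, :total, keyword_init: true)

counts = ResultCounts.new(errored: 1, failed: 4, passed: 45, total: 50)

# Guard against an empty run before dividing.
pass_rate = counts.total.zero? ? 0.0 : counts.passed.to_f / counts.total
puts format("passed %d/%d (%.1f%%)", counts.passed, counts.total, pass_rate * 100)
```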
#status ⇒ String
The status of the evaluation run.
# File 'lib/openai/models/evals/run_retrieve_response.rb', line 97
required :status, String
Class Method Details
.variants ⇒ Array(OpenAI::Models::Evals::CreateEvalJSONLRunDataSource, OpenAI::Models::Evals::CreateEvalCompletionsRunDataSource, OpenAI::Models::Evals::RunRetrieveResponse::DataSource::Responses)
# File 'lib/openai/models/evals/run_retrieve_response.rb', line 640
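On the wire, the data_source union variants are distinguished by a type discriminator in the payload. The following is a hypothetical, gem-free sketch of that dispatch over plain hashes; the :jsonl/:completions/:responses tag values are assumptions about the serialized form, not taken from this page:

```ruby
# Hypothetical dispatch over the data_source union, assuming each variant
# is tagged by a :type field in the raw payload.
def data_source_kind(payload)
  case payload[:type]
  in :jsonl       then "CreateEvalJSONLRunDataSource"
  in :completions then "CreateEvalCompletionsRunDataSource"
  in :responses   then "DataSource::Responses"
  else "unknown"
  end
end

data_source_kind({ type: :completions }) # => "CreateEvalCompletionsRunDataSource"
```

In the gem itself, coercion into the right variant class is handled by the union declared on the attribute, so this kind of manual dispatch is only needed when working with raw payloads.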