Class: Google::Apis::DialogflowV2::GoogleCloudDialogflowV2InferenceParameter
- Inherits: Object
  - Object
  - Google::Apis::DialogflowV2::GoogleCloudDialogflowV2InferenceParameter
- Includes:
- Core::Hashable, Core::JsonObjectSupport
- Defined in:
- lib/google/apis/dialogflow_v2/classes.rb,
  lib/google/apis/dialogflow_v2/representations.rb
Overview
The parameters of inference.
Instance Attribute Summary
- #max_output_tokens ⇒ Fixnum
  Optional.
- #temperature ⇒ Float
  Optional.
- #top_k ⇒ Fixnum
  Optional.
- #top_p ⇒ Float
  Optional.
Instance Method Summary
- #initialize(**args) ⇒ GoogleCloudDialogflowV2InferenceParameter (constructor)
  A new instance of GoogleCloudDialogflowV2InferenceParameter.
- #update!(**args) ⇒ Object
  Update properties of this object.
Constructor Details
#initialize(**args) ⇒ GoogleCloudDialogflowV2InferenceParameter
Returns a new instance of GoogleCloudDialogflowV2InferenceParameter.
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11853

def initialize(**args)
  update!(**args)
end
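As a sketch, the constructor simply forwards its keyword arguments to update!, so only the keys you pass are assigned. The class below is an illustrative stand-in mirroring that pattern, not the gem's actual class; with the google-apis-dialogflow_v2 gem installed you would build Google::Apis::DialogflowV2::GoogleCloudDialogflowV2InferenceParameter the same way.

```ruby
# Minimal stand-in for GoogleCloudDialogflowV2InferenceParameter,
# reproducing the initialize(**args) -> update!(**args) pattern shown above.
class InferenceParameterSketch
  attr_accessor :max_output_tokens, :temperature, :top_k, :top_p

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @max_output_tokens = args[:max_output_tokens] if args.key?(:max_output_tokens)
    @temperature = args[:temperature] if args.key?(:temperature)
    @top_k = args[:top_k] if args.key?(:top_k)
    @top_p = args[:top_p] if args.key?(:top_p)
  end
end

# Only the supplied keys are set; later update! calls merge in new values.
params = InferenceParameterSketch.new(temperature: 0.2, top_k: 40, top_p: 0.95)
params.update!(max_output_tokens: 256)
```

Because update! only touches keys present in args, calling it with a partial hash leaves the other attributes unchanged.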
Instance Attribute Details
#max_output_tokens ⇒ Fixnum
Optional. Maximum number of the output tokens for the generator.
Corresponds to the JSON property maxOutputTokens
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11819

def max_output_tokens
  @max_output_tokens
end
#temperature ⇒ Float
Optional. Controls the randomness of LLM predictions. Low temperature = less
random. High temperature = more random. If unset (or 0), uses a default value
of 0.
Corresponds to the JSON property temperature
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11826

def temperature
  @temperature
end
#top_k ⇒ Fixnum
Optional. Top-k changes how the model selects tokens for output. A top-k of 1
means the selected token is the most probable among all tokens in the model's
vocabulary (also called greedy decoding), while a top-k of 3 means that the
next token is selected from among the 3 most probable tokens (using
temperature). For each token selection step, the top K tokens with the highest
probabilities are sampled. Then tokens are further filtered based on topP with
the final token selected using temperature sampling. Specify a lower value for
less random responses and a higher value for more random responses. Acceptable
values are in [1, 40]; the default is 40.
Corresponds to the JSON property topK
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11839

def top_k
  @top_k
end
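The top-k step described above can be sketched as a simple filter: keep only the k highest-probability tokens before any further top-p or temperature sampling. This is an illustrative sketch, not the service's actual implementation.

```ruby
# Keep the k most probable tokens from a token => probability hash.
# With k = 1 this is greedy decoding: only the single most probable
# token survives; with k = 3 the next token is drawn from the top 3.
def top_k_filter(probs, k)
  probs.sort_by { |_token, p| -p }.first(k).to_h
end

probs = { 'a' => 0.5, 'b' => 0.3, 'c' => 0.15, 'd' => 0.05 }
top_k_filter(probs, 1).keys  # => ["a"] (greedy decoding)
top_k_filter(probs, 3).keys  # => ["a", "b", "c"]
```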
#top_p ⇒ Float
Optional. Top-p changes how the model selects tokens for output. Tokens are
selected from the K most probable (see the topK parameter) down to the least
probable until the sum of their probabilities equals the top-p value. For
example, if tokens A, B, and C
their probabilities equals the top-p value. For example, if tokens A, B, and C
have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the
model will select either A or B as the next token (using temperature) and
doesn't consider C. The default top-p value is 0.95. Specify a lower value for
less random responses and a higher value for more random responses. Acceptable
values are in [0.0, 1.0]; the default is 0.95.
Corresponds to the JSON property topP
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11851

def top_p
  @top_p
end
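The nucleus (top-p) selection described above can likewise be sketched: accumulate tokens from most probable to least until the running sum of probabilities reaches the top-p value, dropping everything after the cutoff. This is an illustrative sketch, not the service's implementation; it reproduces the A/B/C example from the description.

```ruby
# Keep tokens, most probable first, until the cumulative probability
# reaches top_p; remaining tokens are never considered.
def top_p_filter(probs, top_p)
  kept = {}
  total = 0.0
  probs.sort_by { |_token, p| -p }.each do |token, p|
    break if total >= top_p
    kept[token] = p
    total += p
  end
  kept
end

# The example from the description: A = 0.3, B = 0.2, C = 0.1, top-p = 0.5.
probs = { 'A' => 0.3, 'B' => 0.2, 'C' => 0.1 }
top_p_filter(probs, 0.5).keys  # => ["A", "B"]; C is not considered
```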
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/dialogflow_v2/classes.rb', line 11858

def update!(**args)
  @max_output_tokens = args[:max_output_tokens] if args.key?(:max_output_tokens)
  @temperature = args[:temperature] if args.key?(:temperature)
  @top_k = args[:top_k] if args.key?(:top_k)
  @top_p = args[:top_p] if args.key?(:top_p)
end