Class: Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1GenerationConfig
- Inherits: Object
- Includes:
- Core::Hashable, Core::JsonObjectSupport
- Defined in:
- lib/google/apis/aiplatform_v1beta1/classes.rb,
lib/google/apis/aiplatform_v1beta1/representations.rb
Overview
Configuration for content generation. This message contains all the parameters that control how the model generates content. It allows you to influence the randomness, length, and structure of the output.
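As a sketch of what these parameters look like together, here is an illustrative set of generation settings expressed as a plain Ruby hash using the snake_case attribute names this class exposes (the values are examples chosen for illustration, not recommended defaults):

```ruby
# Illustrative generation parameters as a plain Ruby hash. The keys
# mirror the attributes documented below; the values are examples only.
generation_config = {
  temperature:       0.7,     # moderate randomness
  top_p:             0.9,     # nucleus sampling threshold
  max_output_tokens: 1024,    # cap on response length
  stop_sequences:    ["###"], # end generation at this marker
  seed:              42       # mostly deterministic output
}
```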
Instance Attribute Summary
-
#audio_timestamp ⇒ Boolean
(also: #audio_timestamp?)
Optional.
-
#candidate_count ⇒ Fixnum
Optional.
-
#enable_affective_dialog ⇒ Boolean
(also: #enable_affective_dialog?)
Optional.
-
#frequency_penalty ⇒ Float
Optional.
-
#image_config ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1ImageConfig
Configuration for image generation.
-
#logprobs ⇒ Fixnum
Optional.
-
#max_output_tokens ⇒ Fixnum
Optional.
-
#media_resolution ⇒ String
Optional.
-
#model_config ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1GenerationConfigModelConfig
Config for model selection.
-
#presence_penalty ⇒ Float
Optional.
-
#response_json_schema ⇒ Object
Optional.
-
#response_logprobs ⇒ Boolean
(also: #response_logprobs?)
Optional.
-
#response_mime_type ⇒ String
Optional.
-
#response_modalities ⇒ Array<String>
Optional.
-
#response_schema ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1Schema
Defines the schema of input and output data.
-
#routing_config ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1GenerationConfigRoutingConfig
The configuration for routing the request to a specific model.
-
#seed ⇒ Fixnum
Optional.
-
#speech_config ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1SpeechConfig
Configuration for speech generation.
-
#stop_sequences ⇒ Array<String>
Optional.
-
#temperature ⇒ Float
Optional.
-
#thinking_config ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1GenerationConfigThinkingConfig
Configuration for the model's thinking features.
-
#top_k ⇒ Float
Optional.
-
#top_p ⇒ Float
Optional.
Instance Method Summary
-
#initialize(**args) ⇒ GoogleCloudAiplatformV1beta1GenerationConfig
constructor
A new instance of GoogleCloudAiplatformV1beta1GenerationConfig.
-
#update!(**args) ⇒ Object
Update properties of this object.
Constructor Details
#initialize(**args) ⇒ GoogleCloudAiplatformV1beta1GenerationConfig
Returns a new instance of GoogleCloudAiplatformV1beta1GenerationConfig.
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19915

def initialize(**args)
  update!(**args)
end
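The constructor simply forwards its keyword arguments to update!. A minimal sketch of this pattern in plain Ruby (a hypothetical ConfigSketch class, not the actual gem code) shows why only the keys you pass are touched:

```ruby
# Minimal sketch of the initialize/update! pattern used throughout this
# gem: each keyword argument is copied into an instance variable, and
# absent keys leave the existing values untouched.
class ConfigSketch
  attr_accessor :temperature, :seed

  def initialize(**args)
    update!(**args)
  end

  def update!(**args)
    @temperature = args[:temperature] if args.key?(:temperature)
    @seed        = args[:seed]        if args.key?(:seed)
  end
end

cfg = ConfigSketch.new(temperature: 0.2)
cfg.update!(seed: 7) # later updates only touch the keys provided
```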
Instance Attribute Details
#audio_timestamp ⇒ Boolean Also known as: audio_timestamp?
Optional. If enabled, audio timestamps will be included in the request to the
model. This can be useful for synchronizing audio with other modalities in the
response.
Corresponds to the JSON property audioTimestamp
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19741

def audio_timestamp
  @audio_timestamp
end
#candidate_count ⇒ Fixnum
Optional. The number of candidate responses to generate. A higher
candidate_count can provide more options to choose from, but it also consumes
more resources. This can be useful for generating a variety of responses and
selecting the best one.
Corresponds to the JSON property candidateCount
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19750

def candidate_count
  @candidate_count
end
#enable_affective_dialog ⇒ Boolean Also known as: enable_affective_dialog?
Optional. If enabled, the model will detect emotions and adapt its responses
accordingly. For example, if the model detects that the user is frustrated, it
may provide a more empathetic response.
Corresponds to the JSON property enableAffectiveDialog
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19757

def enable_affective_dialog
  @enable_affective_dialog
end
#frequency_penalty ⇒ Float
Optional. Penalizes tokens based on their frequency in the generated text. A
positive value helps to reduce the repetition of words and phrases. Valid
values range over [-2.0, 2.0].
Corresponds to the JSON property frequencyPenalty
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19765

def frequency_penalty
  @frequency_penalty
end
#image_config ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1ImageConfig
Configuration for image generation. This message allows you to control various
aspects of image generation, such as the output format, aspect ratio, and
whether the model can generate images of people.
Corresponds to the JSON property imageConfig
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19772

def image_config
  @image_config
end
#logprobs ⇒ Fixnum
Optional. The number of top log probabilities to return for each token. This
can be used to see which other tokens were considered likely candidates for a
given position. A higher value will return more options, but it will also
increase the size of the response.
Corresponds to the JSON property logprobs
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19780

def logprobs
  @logprobs
end
#max_output_tokens ⇒ Fixnum
Optional. The maximum number of tokens to generate in the response. A token is
approximately four characters. The default value varies by model. This
parameter can be used to control the length of the generated text and prevent
overly long responses.
Corresponds to the JSON property maxOutputTokens
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19788

def max_output_tokens
  @max_output_tokens
end
#media_resolution ⇒ String
Optional. The token resolution at which input media content is sampled. This
is used to control the trade-off between the quality of the response and the
number of tokens used to represent the media. A higher resolution allows the
model to perceive more detail, which can lead to a more nuanced response, but
it will also use more tokens. This does not affect the image dimensions sent
to the model.
Corresponds to the JSON property mediaResolution
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19798

def media_resolution
  @media_resolution
end
#model_config ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1GenerationConfigModelConfig
Config for model selection.
Corresponds to the JSON property modelConfig
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19803

def model_config
  @model_config
end
#presence_penalty ⇒ Float
Optional. Penalizes tokens that have already appeared in the generated text. A
positive value encourages the model to generate more diverse and less
repetitive text. Valid values range over [-2.0, 2.0].
Corresponds to the JSON property presencePenalty
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19810

def presence_penalty
  @presence_penalty
end
#response_json_schema ⇒ Object
Optional. When this field is set, response_schema must be omitted and
response_mime_type must be set to application/json.
Corresponds to the JSON property responseJsonSchema
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19816

def response_json_schema
  @response_json_schema
end
#response_logprobs ⇒ Boolean Also known as: response_logprobs?
Optional. If set to true, the log probabilities of the output tokens are
returned. Log probabilities are the logarithm of the probability of a token
appearing in the output. A higher log probability means the token is more
likely to be generated. This can be useful for analyzing the model's
confidence in its own output and for debugging.
Corresponds to the JSON property responseLogprobs
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19825

def response_logprobs
  @response_logprobs
end
#response_mime_type ⇒ String
Optional. The IANA standard MIME type of the response. The model will generate
output that conforms to this MIME type. Supported values include 'text/plain' (
default) and 'application/json'. The model needs to be prompted to output the
appropriate response type, otherwise the behavior is undefined.
Corresponds to the JSON property responseMimeType
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19834

def response_mime_type
  @response_mime_type
end
#response_modalities ⇒ Array<String>
Optional. The modalities of the response. The model will generate a response
that includes all the specified modalities. For example, if this is set to [
TEXT, IMAGE], the response will include both text and an image.
Corresponds to the JSON property responseModalities
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19841

def response_modalities
  @response_modalities
end
#response_schema ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1Schema
Defines the schema of input and output data. This is a subset of the OpenAPI
3.0 Schema Object.
Corresponds to the JSON property responseSchema
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19847

def response_schema
  @response_schema
end
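A response schema is a subset of the OpenAPI 3.0 Schema Object. As a hedged illustration (field names follow OpenAPI conventions; this hash is a sketch, not a verified request body), a schema describing a JSON object with a required string field might look like:

```ruby
# Illustrative OpenAPI-style schema for a JSON object with one required
# field. When supplying a schema like this, response_mime_type is
# typically set to 'application/json'.
response_schema = {
  type: "OBJECT",
  properties: {
    summary: { type: "STRING" },
    score:   { type: "NUMBER" }
  },
  required: ["summary"]
}
```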
#routing_config ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1GenerationConfigRoutingConfig
The configuration for routing the request to a specific model. This can be
used to control which model is used for the generation, either automatically
or by specifying a model name.
Corresponds to the JSON property routingConfig
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19854

def routing_config
  @routing_config
end
#seed ⇒ Fixnum
Optional. A seed for the random number generator. Setting a seed makes the
model's output mostly deterministic: for a given prompt and parameters (such as
temperature and top_p), the model will produce the same response on every run,
although strict determinism is not guaranteed. This is different from
parameters like temperature, which control the level of randomness; seed
ensures that the "random" choices the model makes are repeatable, which is
useful for testing and for reproducible results.
Corresponds to the JSON property seed
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19866

def seed
  @seed
end
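The effect of a fixed seed can be illustrated with Ruby's own Random class: two generators built from the same seed produce identical "random" sequences. This is an analogy for the repeatability a seed gives you, not the model's actual sampler:

```ruby
# Two generators created with the same seed produce identical sequences,
# which is the property a fixed `seed` provides for model sampling.
a = Random.new(42)
b = Random.new(42)

seq_a = Array.new(5) { a.rand(100) }
seq_b = Array.new(5) { b.rand(100) }
# seq_a == seq_b holds for any pair of equal seeds
```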
#speech_config ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1SpeechConfig
Configuration for speech generation.
Corresponds to the JSON property speechConfig
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19871

def speech_config
  @speech_config
end
#stop_sequences ⇒ Array<String>
Optional. A list of character sequences that will stop the model from
generating further tokens. If a stop sequence is generated, the output will
end at that point. This is useful for controlling the length and structure of
the output. For example, you can use ["\n", "###"] to stop generation at a new
line or a specific marker.
Corresponds to the JSON property stopSequences
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19880

def stop_sequences
  @stop_sequences
end
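The truncation behavior can be sketched in plain Ruby: output ends at the first occurrence of any configured stop sequence. The truncate_at_stop helper below is hypothetical, and the API's exact behavior (for example, whether the stop sequence itself is included) may differ:

```ruby
# Sketch of stop-sequence truncation: generation ends at the first
# occurrence of any configured stop sequence. Here the sequence itself
# is dropped from the result.
def truncate_at_stop(text, stop_sequences)
  cut = stop_sequences.map { |s| text.index(s) }.compact.min
  cut ? text[0...cut] : text
end

truncate_at_stop("Step 1 done.### Step 2", ["###", "\n\n"])
```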
#temperature ⇒ Float
Optional. Controls the randomness of the output. A higher temperature results
in more creative and diverse responses, while a lower temperature makes the
output more predictable and focused. The valid range is (0.0, 2.0].
Corresponds to the JSON property temperature
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19887

def temperature
  @temperature
end
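Conceptually, temperature rescales token logits before the softmax: a low temperature sharpens the distribution toward the most likely token, while a high temperature flattens it. A plain-Ruby sketch of this standard formulation (an illustration of the general technique, not the service's internal sampler):

```ruby
# Temperature scaling: logits are divided by the temperature before the
# softmax, so low temperatures concentrate probability mass on the top
# token and high temperatures spread it out.
def softmax_with_temperature(logits, temperature)
  scaled = logits.map { |l| l / temperature }
  m = scaled.max
  exps = scaled.map { |l| Math.exp(l - m) } # subtract max for stability
  total = exps.sum
  exps.map { |e| e / total }
end

sharp = softmax_with_temperature([2.0, 1.0, 0.5], 0.2)
flat  = softmax_with_temperature([2.0, 1.0, 0.5], 2.0)
# sharp.max > flat.max: low temperature is more predictable
```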
#thinking_config ⇒ Google::Apis::AiplatformV1beta1::GoogleCloudAiplatformV1beta1GenerationConfigThinkingConfig
Configuration for the model's thinking features. "Thinking" is a process where
the model breaks down a complex task into smaller, manageable steps. This
allows the model to reason about the task, plan its approach, and execute the
plan to generate a high-quality response.
Corresponds to the JSON property thinkingConfig
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19895

def thinking_config
  @thinking_config
end
#top_k ⇒ Float
Optional. Specifies the top-k sampling threshold. The model considers only the
top k most probable tokens for the next token. This can be useful for
generating more coherent and less random text. For example, a top_k of 40
means the model will choose the next word from the 40 most likely words.
Corresponds to the JSON property topK
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19903

def top_k
  @top_k
end
#top_p ⇒ Float
Optional. Specifies the nucleus sampling threshold. The model considers only
the smallest set of tokens whose cumulative probability is at least top_p.
This helps generate more diverse and less repetitive responses. For example, a
top_p of 0.9 means the model samples from the smallest set of tokens whose
cumulative probability reaches 0.9. It is recommended to adjust either
temperature or top_p, but not both.
Corresponds to the JSON property topP
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19913

def top_p
  @top_p
end
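The two sampling filters above can be sketched over a token distribution given as a probability hash. The helper names are hypothetical and the service's internal sampler may differ in detail, but the filtering logic is the standard formulation of top-k and nucleus (top-p) sampling:

```ruby
# top-k: keep only the k most probable tokens.
def top_k_filter(probs, k)
  probs.sort_by { |_, p| -p }.first(k).to_h
end

# top-p (nucleus): keep the smallest set of most probable tokens whose
# cumulative probability is at least top_p.
def top_p_filter(probs, top_p)
  kept = {}
  cumulative = 0.0
  probs.sort_by { |_, p| -p }.each do |token, p|
    kept[token] = p
    cumulative += p
    break if cumulative >= top_p
  end
  kept
end

dist = { "the" => 0.5, "a" => 0.3, "an" => 0.15, "zyx" => 0.05 }
top_k_filter(dist, 2)   # keeps the 2 most probable tokens
top_p_filter(dist, 0.9) # keeps tokens until cumulative prob >= 0.9
```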
Instance Method Details
#update!(**args) ⇒ Object
Update properties of this object.
# File 'lib/google/apis/aiplatform_v1beta1/classes.rb', line 19920

def update!(**args)
  @audio_timestamp = args[:audio_timestamp] if args.key?(:audio_timestamp)
  @candidate_count = args[:candidate_count] if args.key?(:candidate_count)
  @enable_affective_dialog = args[:enable_affective_dialog] if args.key?(:enable_affective_dialog)
  @frequency_penalty = args[:frequency_penalty] if args.key?(:frequency_penalty)
  @image_config = args[:image_config] if args.key?(:image_config)
  @logprobs = args[:logprobs] if args.key?(:logprobs)
  @max_output_tokens = args[:max_output_tokens] if args.key?(:max_output_tokens)
  @media_resolution = args[:media_resolution] if args.key?(:media_resolution)
  @model_config = args[:model_config] if args.key?(:model_config)
  @presence_penalty = args[:presence_penalty] if args.key?(:presence_penalty)
  @response_json_schema = args[:response_json_schema] if args.key?(:response_json_schema)
  @response_logprobs = args[:response_logprobs] if args.key?(:response_logprobs)
  @response_mime_type = args[:response_mime_type] if args.key?(:response_mime_type)
  @response_modalities = args[:response_modalities] if args.key?(:response_modalities)
  @response_schema = args[:response_schema] if args.key?(:response_schema)
  @routing_config = args[:routing_config] if args.key?(:routing_config)
  @seed = args[:seed] if args.key?(:seed)
  @speech_config = args[:speech_config] if args.key?(:speech_config)
  @stop_sequences = args[:stop_sequences] if args.key?(:stop_sequences)
  @temperature = args[:temperature] if args.key?(:temperature)
  @thinking_config = args[:thinking_config] if args.key?(:thinking_config)
  @top_k = args[:top_k] if args.key?(:top_k)
  @top_p = args[:top_p] if args.key?(:top_p)
end