Class: Telnyx::Resources::AI::Chat

Inherits:
Object
Defined in:
lib/telnyx/resources/ai/chat.rb

Overview

Generate text with LLMs

Instance Method Summary

Constructor Details

#initialize(client:) ⇒ Chat

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.

Returns a new instance of Chat.

Parameters:

  • client (Telnyx::Client)


# File 'lib/telnyx/resources/ai/chat.rb', line 84

def initialize(client:)
  @client = client
end

Instance Method Details

#create_completion(messages:, api_key_ref: nil, best_of: nil, early_stopping: nil, enable_thinking: nil, frequency_penalty: nil, guided_choice: nil, guided_json: nil, guided_regex: nil, length_penalty: nil, logprobs: nil, max_tokens: nil, min_p: nil, model: nil, n: nil, presence_penalty: nil, response_format: nil, stream: nil, temperature: nil, tool_choice: nil, tools: nil, top_logprobs: nil, top_p: nil, use_beam_search: nil, request_options: {}) ⇒ Hash{Symbol=>Object}

Some parameter documentation has been truncated; see Models::AI::ChatCreateCompletionParams for more details.

Chat with a language model. This endpoint is consistent with the [OpenAI Chat Completions API](platform.openai.com/docs/api-reference/chat) and may be used with the OpenAI JS or Python SDK.
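A minimal sketch of a request, assuming the `client.ai.chat` accessor exposed by `Telnyx::Client`; the model name is illustrative, and the actual network call is commented out because it requires a valid API key:

```ruby
# Build the request parameters; only :messages is required.
params = {
  messages: [
    {role: "system", content: "You are a concise assistant."},
    {role: "user", content: "Summarize what an eSIM is in one sentence."}
  ],
  model: "meta-llama/Meta-Llama-3.1-8B-Instruct", # illustrative model name
  max_tokens: 128,
  temperature: 0.2
}

# With a configured client this performs the HTTP request:
# client = Telnyx::Client.new(api_key: ENV["TELNYX_API_KEY"])
# completion = client.ai.chat.create_completion(**params)
# puts completion
```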

Parameters:

  • messages (Array<Telnyx::Models::AI::ChatCreateCompletionParams::Message>)

    A list of the previous chat messages for context.

  • api_key_ref (String)

    If you are using an external inference provider like xAI or OpenAI, this field a

  • best_of (Integer)

    This is used with `use_beam_search` to determine how many candidate beams to exp

  • early_stopping (Boolean)

    This is used with `use_beam_search`. If `true`, generation stops as soon as ther

  • enable_thinking (Boolean)

    Whether to enable the thinking/reasoning phase for models that support it (e.g.,

  • frequency_penalty (Float)

    Higher values will penalize the model from repeating the same output tokens.

  • guided_choice (Array<String>)

    If specified, the output will be exactly one of the choices.

  • guided_json (Hash{Symbol=>Object})

    Must be a valid JSON schema. If specified, the output will follow the JSON schem

  • guided_regex (String)

    If specified, the output will follow the regex pattern.

  • length_penalty (Float)

    This is used with `use_beam_search` to prefer shorter or longer completions.

  • logprobs (Boolean)

    Whether to return log probabilities of the output tokens or not. If true, return

  • max_tokens (Integer)

    Maximum number of completion tokens the model should generate.

  • min_p (Float)

    This is an alternative to `top_p` that [many prefer](github.com/huggingf

  • model (String)

    The language model to chat with.

  • n (Float)

    This will return multiple choices for you instead of a single chat completion.

  • presence_penalty (Float)

    Higher values will penalize the model from repeating the same output tokens.

  • response_format (Telnyx::Models::AI::ChatCreateCompletionParams::ResponseFormat)

    Use this if you want to guarantee a JSON output without defining a schema. For c

  • stream (Boolean)

    Whether or not to stream data-only server-sent events as they become available.

  • temperature (Float)

    Adjusts the “creativity” of the model. Lower values make the model more deterministic

  • tool_choice (Symbol, Telnyx::Models::AI::ChatCreateCompletionParams::ToolChoice)
  • tools (Array<Telnyx::Models::AI::ChatCreateCompletionParams::Tool::Function, Telnyx::Models::AI::ChatCreateCompletionParams::Tool::Retrieval>)

    The `function` tool type follows the same schema as the [OpenAI Chat Completions

  • top_logprobs (Integer)

    This is used with `logprobs`. An integer between 0 and 20 specifying the number

  • top_p (Float)

    An alternative or complement to `temperature`. This adjusts how many of the top

  • use_beam_search (Boolean)

    Setting this to `true` will allow the model to [explore more completion options]

  • request_options (Telnyx::RequestOptions, Hash{Symbol=>Object}, nil)

Returns:

  • (Hash{Symbol=>Object})
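The guided decoding parameters (`guided_json`, `guided_regex`, `guided_choice`) constrain the raw model output. A hedged sketch of a `guided_json` request body follows; the exact schema conventions accepted by the endpoint are an assumption here, so consult the Telnyx API reference for the authoritative format:

```ruby
# A JSON schema the completion output must conform to.
city_schema = {
  type: "object",
  properties: {
    city: {type: "string"},
    country: {type: "string"}
  },
  required: ["city", "country"]
}

params = {
  messages: [{role: "user", content: "Name a city in Japan, as JSON."}],
  guided_json: city_schema
}

# With a configured client:
# completion = client.ai.chat.create_completion(**params)
```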



# File 'lib/telnyx/resources/ai/chat.rb', line 70

def create_completion(params)
  parsed, options = Telnyx::AI::ChatCreateCompletionParams.dump_request(params)
  @client.request(
    method: :post,
    path: "ai/chat/completions",
    body: parsed,
    model: Telnyx::Internal::Type::HashOf[Telnyx::Internal::Type::Unknown],
    options: options
  )
end
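Since the `tools` documentation states that the `function` tool type follows the OpenAI schema, a hedged sketch of a function-calling request body; the field names mirror the OpenAI format and the string form of `tool_choice` is an assumption:

```ruby
# A function tool definition in the OpenAI-style schema.
weather_tool = {
  type: "function",
  function: {
    name: "get_weather",
    description: "Look up the current weather for a city.",
    parameters: {
      type: "object",
      properties: {city: {type: "string"}},
      required: ["city"]
    }
  }
}

params = {
  messages: [{role: "user", content: "What's the weather in Chicago?"}],
  tools: [weather_tool],
  tool_choice: "auto" # assumption: string form mirrors the OpenAI API
}

# With a configured client:
# completion = client.ai.chat.create_completion(**params)
```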