Class: OpenAI::Resources::Responses

Inherits:
Object
Defined in:
lib/openai/resources/responses.rb,
lib/openai/resources/responses/input_items.rb,
lib/openai/resources/responses/input_tokens.rb

Defined Under Namespace

Classes: InputItems, InputTokens

Instance Attribute Summary

Instance Method Summary

Constructor Details

#initialize(client:) ⇒ Responses

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.

Returns a new instance of Responses.

Parameters:

  • client (OpenAI::Client)

# File 'lib/openai/resources/responses.rb', line 509

def initialize(client:)
  @client = client
  @input_items = OpenAI::Resources::Responses::InputItems.new(client: client)
  @input_tokens = OpenAI::Resources::Responses::InputTokens.new(client: client)
end

Instance Attribute Details

#input_items ⇒ OpenAI::Resources::Responses::InputItems (readonly)



# File 'lib/openai/resources/responses.rb', line 7

def input_items
  @input_items
end

#input_tokens ⇒ OpenAI::Resources::Responses::InputTokens (readonly)



# File 'lib/openai/resources/responses.rb', line 10

def input_tokens
  @input_tokens
end

Instance Method Details

#cancel(response_id, request_options: {}) ⇒ OpenAI::Models::Responses::Response

Cancels a model response with the given ID. Only responses created with the `background` parameter set to `true` can be cancelled. [Learn more](platform.openai.com/docs/guides/background).

Parameters:

  • response_id (String)

    The ID of the response to cancel.

  • request_options (OpenAI::RequestOptions, Hash{Symbol=>Object}, nil)

Returns:

See Also:



# File 'lib/openai/resources/responses.rb', line 459

def cancel(response_id, params = {})
  @client.request(
    method: :post,
    path: ["responses/%1$s/cancel", response_id],
    model: OpenAI::Responses::Response,
    options: params[:request_options]
  )
end
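
A minimal usage sketch (illustrative only; the API key setup and the "resp_..." ID are placeholders, and printing `status` assumes the returned Response exposes that field):

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# Cancel a response that was created with `background: true`.
response = client.responses.cancel("resp_abc123")
puts response.status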

#compact(model:, input: nil, instructions: nil, previous_response_id: nil, prompt_cache_key: nil, request_options: {}) ⇒ OpenAI::Models::Responses::CompactedResponse

Some parameter documentation has been truncated; see Models::Responses::ResponseCompactParams for more details.

Compacts a conversation and returns a compacted response object.

Learn when and how to compact long-running conversations in the [conversation state guide](platform.openai.com/docs/guides/conversation-state#managing-the-context-window). For ZDR-compatible compaction details, see [Compaction (advanced)](platform.openai.com/docs/guides/conversation-state#compaction-advanced).

Parameters:

Returns:

See Also:



# File 'lib/openai/resources/responses.rb', line 495

def compact(params)
  parsed, options = OpenAI::Responses::ResponseCompactParams.dump_request(params)
  @client.request(
    method: :post,
    path: "responses/compact",
    body: parsed,
    model: OpenAI::Responses::CompactedResponse,
    options: options
  )
end
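
A hedged sketch of compacting an existing conversation thread (the model name and previous response ID are placeholders, and the printed field assumes the compacted response exposes an `id`):

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# Compact the conversation that produced an earlier response.
compacted = client.responses.compact(
  model: "gpt-4.1",
  previous_response_id: "resp_abc123"
)
puts compacted.id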

#create(background: nil, context_management: nil, conversation: nil, include: nil, input: nil, instructions: nil, max_output_tokens: nil, max_tool_calls: nil, metadata: nil, model: nil, parallel_tool_calls: nil, previous_response_id: nil, prompt: nil, prompt_cache_key: nil, prompt_cache_retention: nil, reasoning: nil, safety_identifier: nil, service_tier: nil, store: nil, stream_options: nil, temperature: nil, text: nil, tool_choice: nil, tools: nil, top_logprobs: nil, top_p: nil, truncation: nil, user: nil, request_options: {}) ⇒ OpenAI::Models::Responses::Response

See #stream_raw for the streaming counterpart.

Some parameter documentation has been truncated; see Models::Responses::ResponseCreateParams for more details.

Creates a model response. Provide [text](platform.openai.com/docs/guides/text) or [image](platform.openai.com/docs/guides/images) inputs to generate [text](platform.openai.com/docs/guides/text) or [JSON](platform.openai.com/docs/guides/structured-outputs) outputs. Have the model call your own [custom code](platform.openai.com/docs/guides/function-calling) or use built-in [tools](platform.openai.com/docs/guides/tools) like [web search](platform.openai.com/docs/guides/tools-web-search) or [file search](platform.openai.com/docs/guides/tools-file-search) to use your own data as input for the model’s response.

Parameters:

Returns:

See Also:



# File 'lib/openai/resources/responses.rb', line 92

def create(params = {})
  parsed, options = OpenAI::Responses::ResponseCreateParams.dump_request(params)
  if parsed[:stream]
    message = "Please use `#stream` for the streaming use case."
    raise ArgumentError.new(message)
  end

  model, tool_models = get_structured_output_models(parsed)

  unwrap = ->(raw) do
    parse_structured_outputs!(raw, model, tool_models)
  end

  @client.request(
    method: :post,
    path: "responses",
    body: parsed,
    unwrap: unwrap,
    model: OpenAI::Responses::Response,
    options: options
  )
end
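
A minimal non-streaming sketch (the model name is a placeholder):

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

response = client.responses.create(
  model: "gpt-4.1",
  input: "Write a haiku about Ruby."
)

# Inspect the generated output items on the returned Response.
puts response.id
pp response.output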

#delete(response_id, request_options: {}) ⇒ nil

Deletes a model response with the given ID.

Parameters:

  • response_id (String)

    The ID of the response to delete.

  • request_options (OpenAI::RequestOptions, Hash{Symbol=>Object}, nil)

Returns:

  • (nil)

See Also:



# File 'lib/openai/resources/responses.rb', line 437

def delete(response_id, params = {})
  @client.request(
    method: :delete,
    path: ["responses/%1$s", response_id],
    model: NilClass,
    options: params[:request_options]
  )
end
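
A short sketch (illustrative; the response ID is a placeholder):

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# Returns nil on success.
client.responses.delete("resp_abc123")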

#retrieve(response_id, include: nil, include_obfuscation: nil, starting_after: nil, request_options: {}) ⇒ OpenAI::Models::Responses::Response

See #retrieve_streaming for the streaming counterpart.

Some parameter documentation has been truncated; see Models::Responses::ResponseRetrieveParams for more details.

Retrieves a model response with the given ID.

Parameters:

  • response_id (String)

    The ID of the response to retrieve.

  • include (Array<Symbol, OpenAI::Models::Responses::ResponseIncludable>)

Additional fields to include in the response. See the `include`

  • include_obfuscation (Boolean)

    When true, stream obfuscation will be enabled. Stream obfuscation adds

  • starting_after (Integer)

    The sequence number of the event after which to start streaming.

  • request_options (OpenAI::RequestOptions, Hash{Symbol=>Object}, nil)

Returns:

See Also:



# File 'lib/openai/resources/responses.rb', line 354

def retrieve(response_id, params = {})
  parsed, options = OpenAI::Responses::ResponseRetrieveParams.dump_request(params)
  query = OpenAI::Internal::Util.encode_query_params(parsed)
  if parsed[:stream]
    message = "Please use `#retrieve_streaming` for the streaming use case."
    raise ArgumentError.new(message)
  end
  @client.request(
    method: :get,
    path: ["responses/%1$s", response_id],
    query: query,
    model: OpenAI::Responses::Response,
    options: options
  )
end
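
A minimal retrieval sketch (the response ID is a placeholder, and printing `status` assumes the returned Response exposes that field):

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

response = client.responses.retrieve("resp_abc123")
puts response.status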

#retrieve_streaming(response_id, include: nil, include_obfuscation: nil, starting_after: nil, request_options: {}) ⇒ OpenAI::Internal::Stream<OpenAI::Models::Responses::ResponseAudioDeltaEvent, OpenAI::Models::Responses::ResponseAudioDoneEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDeltaEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDeltaEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCompletedEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInProgressEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInterpretingEvent, OpenAI::Models::Responses::ResponseCompletedEvent, OpenAI::Models::Responses::ResponseContentPartAddedEvent, OpenAI::Models::Responses::ResponseContentPartDoneEvent, OpenAI::Models::Responses::ResponseCreatedEvent, OpenAI::Models::Responses::ResponseErrorEvent, OpenAI::Models::Responses::ResponseFileSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseFileSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseFileSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseInProgressEvent, OpenAI::Models::Responses::ResponseFailedEvent, OpenAI::Models::Responses::ResponseIncompleteEvent, OpenAI::Models::Responses::ResponseOutputItemAddedEvent, OpenAI::Models::Responses::ResponseOutputItemDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartAddedEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDeltaEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDoneEvent, OpenAI::Models::Responses::ResponseReasoningTextDeltaEvent, OpenAI::Models::Responses::ResponseReasoningTextDoneEvent, OpenAI::Models::Responses::ResponseRefusalDeltaEvent, OpenAI::Models::Responses::ResponseRefusalDoneEvent, OpenAI::Models::Responses::ResponseTextDeltaEvent, OpenAI::Models::Responses::ResponseTextDoneEvent, OpenAI::Models::Responses::ResponseWebSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseWebSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseWebSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseImageGenCallCompletedEvent, OpenAI::Models::Responses::ResponseImageGenCallGeneratingEvent, OpenAI::Models::Responses::ResponseImageGenCallInProgressEvent, OpenAI::Models::Responses::ResponseImageGenCallPartialImageEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseMcpCallCompletedEvent, OpenAI::Models::Responses::ResponseMcpCallFailedEvent, OpenAI::Models::Responses::ResponseMcpCallInProgressEvent, OpenAI::Models::Responses::ResponseMcpListToolsCompletedEvent, OpenAI::Models::Responses::ResponseMcpListToolsFailedEvent, OpenAI::Models::Responses::ResponseMcpListToolsInProgressEvent, OpenAI::Models::Responses::ResponseOutputTextAnnotationAddedEvent, OpenAI::Models::Responses::ResponseQueuedEvent, OpenAI::Models::Responses::ResponseCustomToolCallInputDeltaEvent, OpenAI::Models::Responses::ResponseCustomToolCallInputDoneEvent>

See #retrieve for the non-streaming counterpart.

Some parameter documentation has been truncated; see Models::Responses::ResponseRetrieveParams for more details.

Retrieves a model response with the given ID.

Parameters:

  • response_id (String)

    The ID of the response to retrieve.

  • include (Array<Symbol, OpenAI::Models::Responses::ResponseIncludable>)

Additional fields to include in the response. See the `include`

  • include_obfuscation (Boolean)

    When true, stream obfuscation will be enabled. Stream obfuscation adds

  • starting_after (Integer)

    The sequence number of the event after which to start streaming.

  • request_options (OpenAI::RequestOptions, Hash{Symbol=>Object}, nil)

Returns:

See Also:



# File 'lib/openai/resources/responses.rb', line 392

def retrieve_streaming(response_id, params = {})
  parsed, options = OpenAI::Responses::ResponseRetrieveParams.dump_request(params)
  query = OpenAI::Internal::Util.encode_query_params(parsed)
  unless parsed.fetch(:stream, true)
    message = "Please use `#retrieve` for the non-streaming use case."
    raise ArgumentError.new(message)
  end
  parsed.store(:stream, true)
  @client.request(
    method: :get,
    path: ["responses/%1$s", response_id],
    query: query,
    headers: {"accept" => "text/event-stream"},
    stream: OpenAI::Internal::Stream,
    model: OpenAI::Responses::ResponseStreamEvent,
    options: options
  )
end
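
A hedged sketch of resuming a stored response as an event stream (the response ID and starting_after value are placeholders, and printing `event.type` assumes each yielded stream event exposes a type field):

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

stream = client.responses.retrieve_streaming("resp_abc123", starting_after: 10)
stream.each do |event|
  puts event.type
end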

#stream(background: nil, context_management: nil, conversation: nil, include: nil, input: nil, instructions: nil, max_output_tokens: nil, max_tool_calls: nil, metadata: nil, model: nil, parallel_tool_calls: nil, previous_response_id: nil, prompt: nil, prompt_cache_key: nil, prompt_cache_retention: nil, reasoning: nil, safety_identifier: nil, service_tier: nil, store: nil, stream_options: nil, temperature: nil, text: nil, tool_choice: nil, tools: nil, top_logprobs: nil, top_p: nil, truncation: nil, user: nil, request_options: {}) ⇒ OpenAI::Internal::Stream<OpenAI::Models::Responses::ResponseAudioDeltaEvent, OpenAI::Models::Responses::ResponseAudioDoneEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDeltaEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDeltaEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCompletedEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInProgressEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInterpretingEvent, OpenAI::Models::Responses::ResponseCompletedEvent, OpenAI::Models::Responses::ResponseContentPartAddedEvent, OpenAI::Models::Responses::ResponseContentPartDoneEvent, OpenAI::Models::Responses::ResponseCreatedEvent, OpenAI::Models::Responses::ResponseErrorEvent, OpenAI::Models::Responses::ResponseFileSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseFileSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseFileSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseInProgressEvent, OpenAI::Models::Responses::ResponseFailedEvent, OpenAI::Models::Responses::ResponseIncompleteEvent, OpenAI::Models::Responses::ResponseOutputItemAddedEvent, OpenAI::Models::Responses::ResponseOutputItemDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartAddedEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDeltaEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDoneEvent, OpenAI::Models::Responses::ResponseRefusalDeltaEvent, OpenAI::Models::Responses::ResponseRefusalDoneEvent, OpenAI::Models::Responses::ResponseTextDeltaEvent, OpenAI::Models::Responses::ResponseTextDoneEvent, OpenAI::Models::Responses::ResponseWebSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseWebSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseWebSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseImageGenCallCompletedEvent, OpenAI::Models::Responses::ResponseImageGenCallGeneratingEvent, OpenAI::Models::Responses::ResponseImageGenCallInProgressEvent, OpenAI::Models::Responses::ResponseImageGenCallPartialImageEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseMcpCallCompletedEvent, OpenAI::Models::Responses::ResponseMcpCallFailedEvent, OpenAI::Models::Responses::ResponseMcpCallInProgressEvent, OpenAI::Models::Responses::ResponseMcpListToolsCompletedEvent, OpenAI::Models::Responses::ResponseMcpListToolsFailedEvent, OpenAI::Models::Responses::ResponseMcpListToolsInProgressEvent, OpenAI::Models::Responses::ResponseOutputTextAnnotationAddedEvent, OpenAI::Models::Responses::ResponseQueuedEvent, 
OpenAI::Models::Responses::ResponseCustomToolCallInputDeltaEvent, OpenAI::Models::Responses::ResponseCustomToolCallInputDoneEvent>

See #create for the non-streaming counterpart.

Some parameter documentation has been truncated; see Models::Responses::ResponseCreateParams for more details.

Creates a model response. Provide [text](platform.openai.com/docs/guides/text) or [image](platform.openai.com/docs/guides/images) inputs to generate [text](platform.openai.com/docs/guides/text) or [JSON](platform.openai.com/docs/guides/structured-outputs) outputs. Have the model call your own [custom code](platform.openai.com/docs/guides/function-calling) or use built-in [tools](platform.openai.com/docs/guides/tools) like [web search](platform.openai.com/docs/guides/tools-web-search) or [file search](platform.openai.com/docs/guides/tools-file-search) to use your own data as input for the model’s response.

Parameters:

Returns:

See Also:



# File 'lib/openai/resources/responses.rb', line 191

def stream(params)
  parsed, options = OpenAI::Responses::ResponseCreateParams.dump_request(params)
  starting_after, response_id = parsed.values_at(:starting_after, :response_id)

  if starting_after && !response_id
    raise ArgumentError, "starting_after can only be used with response_id"
  end
  model, tool_models = get_structured_output_models(parsed)

  unwrap = ->(raw) do
    if raw[:type] == "response.completed" && raw[:response]
      parse_structured_outputs!(raw[:response], model, tool_models)
    end
    raw
  end

  if response_id
    retrieve_params = params.slice(:include, :request_options)

    raw_stream = retrieve_streaming_internal(
      response_id,
      params: retrieve_params,
      unwrap: unwrap
    )
  else
    parsed[:stream] = true

    raw_stream = @client.request(
      method: :post,
      path: "responses",
      headers: {"accept" => "text/event-stream"},
      body: parsed,
      stream: OpenAI::Internal::Stream,
      model: OpenAI::Models::Responses::ResponseStreamEvent,
      unwrap: unwrap,
      options: options
    )
  end

  OpenAI::Streaming::ResponseStream.new(
    raw_stream: raw_stream,
    text_format: model,
    starting_after: starting_after
  )
end
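
A hedged sketch of the higher-level #stream helper (the model name is a placeholder; the event handling assumes yielded events expose a `type` and, for text deltas, a `delta` field):

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

stream = client.responses.stream(
  model: "gpt-4.1",
  input: "Stream a short poem."
)

stream.each do |event|
  # Print incremental text as it arrives; other event types are skipped here.
  print(event.delta) if event.type.to_s == "response.output_text.delta"
end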

#stream_raw(background: nil, include: nil, input: nil, instructions: nil, max_output_tokens: nil, max_tool_calls: nil, metadata: nil, model: nil, parallel_tool_calls: nil, previous_response_id: nil, prompt: nil, prompt_cache_key: nil, reasoning: nil, safety_identifier: nil, service_tier: nil, store: nil, temperature: nil, text: nil, tool_choice: nil, tools: nil, top_logprobs: nil, top_p: nil, truncation: nil, user: nil, request_options: {}) ⇒ OpenAI::Internal::Stream<OpenAI::Models::Responses::ResponseAudioDeltaEvent, OpenAI::Models::Responses::ResponseAudioDoneEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDeltaEvent, OpenAI::Models::Responses::ResponseAudioTranscriptDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDeltaEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCodeDoneEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallCompletedEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInProgressEvent, OpenAI::Models::Responses::ResponseCodeInterpreterCallInterpretingEvent, OpenAI::Models::Responses::ResponseCompletedEvent, OpenAI::Models::Responses::ResponseContentPartAddedEvent, OpenAI::Models::Responses::ResponseContentPartDoneEvent, OpenAI::Models::Responses::ResponseCreatedEvent, OpenAI::Models::Responses::ResponseErrorEvent, OpenAI::Models::Responses::ResponseFileSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseFileSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseFileSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseFunctionCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseInProgressEvent, OpenAI::Models::Responses::ResponseFailedEvent, OpenAI::Models::Responses::ResponseIncompleteEvent, OpenAI::Models::Responses::ResponseOutputItemAddedEvent, OpenAI::Models::Responses::ResponseOutputItemDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartAddedEvent, OpenAI::Models::Responses::ResponseReasoningSummaryPartDoneEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDeltaEvent, OpenAI::Models::Responses::ResponseReasoningSummaryTextDoneEvent, OpenAI::Models::Responses::ResponseReasoningTextDeltaEvent, OpenAI::Models::Responses::ResponseReasoningTextDoneEvent, OpenAI::Models::Responses::ResponseRefusalDeltaEvent, OpenAI::Models::Responses::ResponseRefusalDoneEvent, OpenAI::Models::Responses::ResponseTextDeltaEvent, OpenAI::Models::Responses::ResponseTextDoneEvent, OpenAI::Models::Responses::ResponseWebSearchCallCompletedEvent, OpenAI::Models::Responses::ResponseWebSearchCallInProgressEvent, OpenAI::Models::Responses::ResponseWebSearchCallSearchingEvent, OpenAI::Models::Responses::ResponseImageGenCallCompletedEvent, OpenAI::Models::Responses::ResponseImageGenCallGeneratingEvent, OpenAI::Models::Responses::ResponseImageGenCallInProgressEvent, OpenAI::Models::Responses::ResponseImageGenCallPartialImageEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDeltaEvent, OpenAI::Models::Responses::ResponseMcpCallArgumentsDoneEvent, OpenAI::Models::Responses::ResponseMcpCallCompletedEvent, OpenAI::Models::Responses::ResponseMcpCallFailedEvent, OpenAI::Models::Responses::ResponseMcpCallInProgressEvent, OpenAI::Models::Responses::ResponseMcpListToolsCompletedEvent, OpenAI::Models::Responses::ResponseMcpListToolsFailedEvent, OpenAI::Models::Responses::ResponseMcpListToolsInProgressEvent, OpenAI::Models::Responses::ResponseOutputTextAnnotationAddedEvent, OpenAI::Models::Responses::ResponseQueuedEvent, 
OpenAI::Models::Responses::ResponseCustomToolCallInputDeltaEvent, OpenAI::Models::Responses::ResponseCustomToolCallInputDoneEvent>

See #create for the non-streaming counterpart.

Some parameter documentation has been truncated; see Models::Responses::ResponseCreateParams for more details.

Creates a model response. Provide [text](platform.openai.com/docs/guides/text) or [image](platform.openai.com/docs/guides/images) inputs to generate [text](platform.openai.com/docs/guides/text) or [JSON](platform.openai.com/docs/guides/structured-outputs) outputs. Have the model call your own [custom code](platform.openai.com/docs/guides/function-calling) or use built-in [tools](platform.openai.com/docs/guides/tools) like [web search](platform.openai.com/docs/guides/tools-web-search) or [file search](platform.openai.com/docs/guides/tools-file-search) to use your own data as input for the model’s response.

Parameters:

Returns:

See Also:



# File 'lib/openai/resources/responses.rb', line 313

def stream_raw(params = {})
  parsed, options = OpenAI::Responses::ResponseCreateParams.dump_request(params)
  unless parsed.fetch(:stream, true)
    message = "Please use `#create` for the non-streaming use case."
    raise ArgumentError.new(message)
  end
  parsed.store(:stream, true)

  @client.request(
    method: :post,
    path: "responses",
    headers: {"accept" => "text/event-stream"},
    body: parsed,
    stream: OpenAI::Internal::Stream,
    model: OpenAI::Responses::ResponseStreamEvent,
    options: options
  )
end
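
A low-level sketch using #stream_raw, which yields server-sent events directly (the model name is a placeholder, and the delta handling assumes text delta events carry a `delta` string):

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

raw = client.responses.stream_raw(
  model: "gpt-4.1",
  input: "Say hello."
)

raw.each do |event|
  case event.type.to_s
  when "response.output_text.delta"
    print event.delta   # incremental text chunk (assumed field)
  when "response.completed"
    puts
  end
end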