Class: LlmGateway::Adapters::OpenAICodex::InputMapper

Inherits:
LlmGateway::Adapters::OpenAI::Responses::InputMapper
Defined in:
lib/llm_gateway/adapters/openai_codex/input_mapper.rb

Overview

Custom input mapper for the Codex backend.

The Codex Responses endpoint rejects several content block types that the standard OpenAI Responses InputMapper passes through:

- "reasoning" and "summary_text" blocks are never accepted as input.
- "thinking" blocks are only valid when they carry an encrypted
  `signature`; unsigned thinking blocks must be dropped.
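The filtering rule above can be sketched as a standalone predicate. This is a hypothetical illustration (the gem's actual `strip_reasoning_blocks` helper is defined elsewhere in the file and its exact field names are not shown here); it assumes content blocks are hashes with a `:type` key and that signed thinking blocks carry a non-empty `:signature`:

```ruby
# Block types Codex never accepts as input (per the overview above).
CODEX_REJECTED_TYPES = %w[reasoning summary_text].freeze

# Returns true if the block may be forwarded to the Codex endpoint.
# Hypothetical helper for illustration only.
def codex_acceptable?(block)
  type = block[:type].to_s
  return false if CODEX_REJECTED_TYPES.include?(type)
  # Thinking blocks are only valid when they carry an encrypted signature.
  return !block[:signature].to_s.empty? if type == "thinking"
  true
end
```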

Additional normalisation:

- Tool-result output is coerced to recognised Responses input types
  (input_text / input_image).
- Assistant text content is always sent as "output_text" (not
  "input_text") because Codex is strict about directionality.
- function_call / tool_use blocks inside an assistant turn are
  promoted to top-level function_call items so that Codex can match
  them against the subsequent function_call_output items.
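The promotion rule in the last bullet can be sketched as follows. This is a simplified, hypothetical version of the mapper's behavior (the real `map_assistant_content` is shown only by name below): assistant text stays inside an assistant message as `output_text`, while `tool_use` blocks become top-level `function_call` items whose field names follow the OpenAI Responses item shape:

```ruby
require "json"

# Hypothetical sketch: split an assistant turn into an output_text
# message plus top-level function_call items.
def promote_assistant_content(content)
  content.each_with_object([]) do |part, items|
    case part[:type]
    when "tool_use"
      # Promote to a top-level function_call item so Codex can match it
      # against the subsequent function_call_output item.
      items << { type: "function_call",
                 call_id: part[:id],
                 name: part[:name],
                 arguments: (part[:input] || {}).to_json }
    when "text"
      # Assistant text is always output_text, never input_text.
      items << { role: "assistant",
                 content: [{ type: "output_text", text: part[:text] }] }
    end
  end
end
```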

Class Method Summary

Methods inherited from LlmGateway::Adapters::OpenAI::Responses::InputMapper

map_assistant_history_message, map_tools, message_mapper

Methods inherited from LlmGateway::Adapters::OpenAI::ChatCompletions::InputMapper

map, map_system, map_tools

Class Method Details

.map_messages(messages) ⇒ Object



# File 'lib/llm_gateway/adapters/openai_codex/input_mapper.rb', line 26

def self.map_messages(messages)
  return messages unless messages.is_a?(Array)

  mapper   = message_mapper
  stripped = strip_reasoning_blocks(messages)

  mapped = stripped.each_with_object([]) do |msg, acc|
    next unless msg.is_a?(Hash)

    role    = msg[:role]
    content = msg[:content]

    if %w[user developer].include?(role) && tool_result_message?(content)
      # Responses API expects tool results as top-level input items.
      # Also normalise nested tool_result output blocks to Responses
      # input types (text → input_text, image → input_image).
      content.each { |part| acc << map_tool_result_for_responses(part, mapper) }
      next
    end

    if role == "assistant" && content.is_a?(Array)
      acc.concat(map_assistant_content(content, mapper))
      next
    end

    mapped_content =
      if content.is_a?(Array)
        content.map { |part| mapper.map_content(part) }
      else
        [ mapper.map_content(content) ]
      end

    acc << { role: role, content: mapped_content }
  end

  normalize_assistant_content_types(mapped)
end
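The tool-result normalisation referenced in the comment above (text → input_text, image → input_image) can be sketched in isolation. Field names such as `:source` and `:image_url` are assumptions for illustration; the real coercion lives in `map_tool_result_for_responses`:

```ruby
# Hypothetical sketch: coerce nested tool_result output blocks to
# recognised Responses input types.
def coerce_tool_result_output(blocks)
  blocks.map do |block|
    case block[:type]
    when "text"
      { type: "input_text", text: block[:text] }
    when "image"
      # Field names are illustrative; the real mapper's image handling
      # may differ.
      { type: "input_image", image_url: block[:source] }
    else
      block
    end
  end
end
```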