Class: Message
- Inherits: ApplicationRecord
  - Object
  - ActiveRecord::Base
  - ApplicationRecord
  - Message
- Includes:
- TokenEstimation
- Defined in:
- app/models/message.rb
Overview
A persisted record of what was said during a session — by whom and when. Messages are the single source of truth for conversation history — there is no separate chat log, only messages attached to a session.
Not to be confused with Events::Base (transient bus signals). Messages persist to SQLite; events flow through the bus and are gone.
After commit, emits Events::MessageCreated and Events::MessageUpdated lifecycle events so subscribers (Events::Subscribers::MessageBroadcaster, Events::Subscribers::MnemeScheduler) can react without coupling.
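The emit-and-subscribe flow can be illustrated with a minimal stand-in bus. The real Events::Bus interface is not shown on this page, so the subscribe method below is an assumption for illustration only; it only demonstrates how emitters stay decoupled from subscribers.

```ruby
# Hypothetical miniature of the bus pattern described above; not the
# real Events::Bus API. Emitters call #emit without knowing who listens.
class MiniBus
  def initialize
    @subscribers = []
  end

  def subscribe(&handler)
    @subscribers << handler
  end

  def emit(event)
    @subscribers.each { |h| h.call(event) }
  end
end

bus = MiniBus.new
received = []
bus.subscribe { |event| received << event }    # e.g. a broadcaster
bus.emit({type: :message_created, id: 1})      # fired from after_create_commit
```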
Constant Summary
- TYPES =
%w[system_message user_message agent_message tool_call tool_response].freeze
- LLM_TYPES =
%w[user_message agent_message].freeze
- CONVERSATION_TYPES =
%w[user_message agent_message system_message].freeze
- THINK_TOOL =
"think"
- TOOL_TYPES =
Message types that require a tool_use_id to pair call with response.
%w[tool_call tool_response].freeze
- ROLE_MAP =
{"user_message" => "user", "agent_message" => "assistant"}.freeze
- SYSTEM_PROMPT_ID =
Synthetic ID for system prompt entries in the TUI message store. Real message IDs are positive integers from the database, so 0 is safe for deduplication without collision risk.
0
Constants included from TokenEstimation
TokenEstimation::BYTES_PER_TOKEN
Instance Attribute Summary
-
#message_type ⇒ String
One of TYPES: system_message, user_message, agent_message, tool_call, tool_response.
-
#payload ⇒ Hash
Message-specific data (content, tool_name, tool_input, etc.).
-
#timestamp ⇒ Integer
Nanoseconds since epoch (Process::CLOCK_REALTIME).
-
#token_count ⇒ Integer
Token count for this message’s payload.
-
#tool_use_id ⇒ String
ID correlating tool_call and tool_response messages (Anthropic-assigned, or a SecureRandom.uuid fallback when the API returns nil; required for tool_call and tool_response messages).
Class Method Summary
-
.conversation_or_think ⇒ ActiveRecord::Relation
Conversation messages (user/agent/system) and think tool_calls — the messages Mneme treats as boundary-eligible.
-
.llm_messages ⇒ ActiveRecord::Relation
Messages that represent conversation turns sent to the LLM API.
Instance Method Summary
-
#api_role ⇒ String
Maps message_type to the Anthropic Messages API role.
-
#conversation_or_think? ⇒ Boolean
True if this is a conversation message (user/agent/system) or a think tool_call — the messages Mneme treats as “conversation” for boundary tracking.
-
#decorator_class ⇒ Class
Draper hook: picks the concrete decorator subclass based on #message_type.
-
#tokenization_text ⇒ String
String fed to the token estimator and the remote tokenizer.
Methods included from TokenEstimation
estimate_token_count, #estimate_tokens
Instance Attribute Details
#message_type ⇒ String
Returns one of TYPES: system_message, user_message, agent_message, tool_call, tool_response.
# File 'app/models/message.rb', line 29

class Message < ApplicationRecord
  include TokenEstimation

  TYPES = %w[system_message user_message agent_message tool_call tool_response].freeze
  LLM_TYPES = %w[user_message agent_message].freeze
  CONVERSATION_TYPES = %w[user_message agent_message system_message].freeze
  THINK_TOOL = "think"
  # Message types that require a tool_use_id to pair call with response.
  TOOL_TYPES = %w[tool_call tool_response].freeze
  ROLE_MAP = {"user_message" => "user", "agent_message" => "assistant"}.freeze
  # Synthetic ID for system prompt entries in the TUI message store.
  # Real message IDs are positive integers from the database, so 0
  # is safe for deduplication without collision risk.
  SYSTEM_PROMPT_ID = 0

  belongs_to :session
  has_many :pinned_messages, dependent: :destroy

  validates :message_type, presence: true, inclusion: {in: TYPES}
  validates :payload, presence: true
  validates :timestamp, presence: true
  # Anthropic requires every tool_use to have a matching tool_result with the same ID
  validates :tool_use_id, presence: true, if: -> { message_type.in?(TOOL_TYPES) }

  after_create_commit :emit_created_event
  after_update_commit :emit_updated_event

  # @!method self.llm_messages
  #   Messages that represent conversation turns sent to the LLM API.
  #   @return [ActiveRecord::Relation]
  scope :llm_messages, -> { where(message_type: LLM_TYPES) }

  # @!method self.conversation_or_think
  #   Conversation messages (user/agent/system) and think tool_calls —
  #   the messages Mneme treats as boundary-eligible.
  #   @return [ActiveRecord::Relation]
  scope :conversation_or_think, -> {
    where(message_type: CONVERSATION_TYPES)
      .or(where(message_type: "tool_call")
        .where("json_extract(payload, '$.tool_name') = ?", THINK_TOOL))
  }

  # Maps message_type to the Anthropic Messages API role.
  # @return [String] "user" or "assistant"
  def api_role
    ROLE_MAP.fetch(message_type)
  end

  # @return [Boolean] true if this is a conversation message (user/agent/system)
  #   or a think tool_call — the messages Mneme treats as "conversation" for boundary tracking
  def conversation_or_think?
    message_type.in?(CONVERSATION_TYPES) ||
      (message_type == "tool_call" && payload["tool_name"] == THINK_TOOL)
  end

  # String fed to the token estimator and the remote tokenizer. Tool
  # messages serialize the full payload as JSON so +tool_name+, +tool_input+,
  # and +tool_use_id+ contribute to the count; conversation messages use
  # the content field only.
  #
  # @return [String]
  def tokenization_text
    if message_type.in?(TOOL_TYPES)
      payload.to_json
    else
      payload["content"].to_s
    end
  end

  # Draper hook: picks the concrete decorator subclass based on
  # {#message_type}. Overrides {Draper::Decoratable#decorator_class},
  # which would otherwise default to the abstract {MessageDecorator}
  # base class. Called implicitly by +message.decorate+.
  #
  # @return [Class] a {MessageDecorator} subclass
  def decorator_class
    case message_type
    when "user_message" then UserMessageDecorator
    when "agent_message" then AgentMessageDecorator
    when "system_message" then SystemMessageDecorator
    when "tool_call" then ToolCallDecorator
    when "tool_response" then ToolResponseDecorator
    end
  end

  private

  def emit_created_event
    Events::Bus.emit(Events::MessageCreated.new(self))
  end

  def emit_updated_event
    Events::Bus.emit(Events::MessageUpdated.new(self))
  end
end
#payload ⇒ Hash
Returns message-specific data (content, tool_name, tool_input, etc.).
#timestamp ⇒ Integer
Returns nanoseconds since epoch (Process::CLOCK_REALTIME).
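The Integer storage described above can be produced and read back with Ruby's standard clock API; this is a sketch of the representation, not necessarily how the model assigns the value.

```ruby
# Capture a nanosecond CLOCK_REALTIME timestamp as an Integer, matching
# the storage format described above.
ns = Process.clock_gettime(Process::CLOCK_REALTIME, :nanosecond)

# Recover a Time for display: whole seconds plus the nanosecond remainder.
t = Time.at(ns / 1_000_000_000, ns % 1_000_000_000, :nanosecond)
```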
#token_count ⇒ Integer
Returns token count for this message’s payload. Seeded with a local estimate on create and later refined by CountTokensJob using the real Anthropic tokenizer. Always positive — never zero or nil.
#tool_use_id ⇒ String
Returns ID correlating tool_call and tool_response messages (Anthropic-assigned, or a SecureRandom.uuid fallback when the API returns nil; required for tool_call and tool_response messages).
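The fallback described above can be sketched as follows; the variable names are hypothetical, but SecureRandom.uuid is the stated fallback when the API returns nil, which keeps the presence validation on tool messages satisfied.

```ruby
require "securerandom"

# Hedged sketch: prefer the Anthropic-assigned id, else mint a UUID so
# the tool_use_id presence validation still passes.
api_id = nil # simulate the API returning no tool_use id
tool_use_id = api_id || SecureRandom.uuid
```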
Class Method Details
.conversation_or_think ⇒ ActiveRecord::Relation
Conversation messages (user/agent/system) and think tool_calls — the messages Mneme treats as boundary-eligible.
# File 'app/models/message.rb', line 67

scope :conversation_or_think, -> {
  where(message_type: CONVERSATION_TYPES)
    .or(where(message_type: "tool_call")
      .where("json_extract(payload, '$.tool_name') = ?", THINK_TOOL))
}
.llm_messages ⇒ ActiveRecord::Relation
Messages that represent conversation turns sent to the LLM API.
# File 'app/models/message.rb', line 61

scope :llm_messages, -> { where(message_type: LLM_TYPES) }
Instance Method Details
#api_role ⇒ String
Maps message_type to the Anthropic Messages API role.
# File 'app/models/message.rb', line 75

def api_role
  ROLE_MAP.fetch(message_type)
end
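Because ROLE_MAP only covers the two LLM-facing types, Hash#fetch raises KeyError for anything else — which is why only user and agent messages go through this mapping. A standalone illustration:

```ruby
# ROLE_MAP as defined on this page; fetch raises for unmapped types.
ROLE_MAP = {"user_message" => "user", "agent_message" => "assistant"}.freeze

role = ROLE_MAP.fetch("agent_message")
begin
  ROLE_MAP.fetch("tool_call") # not an LLM conversation turn
  raised = false
rescue KeyError
  raised = true
end
```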
#conversation_or_think? ⇒ Boolean
Returns true if this is a conversation message (user/agent/system) or a think tool_call — the messages Mneme treats as “conversation” for boundary tracking.
# File 'app/models/message.rb', line 81

def conversation_or_think?
  message_type.in?(CONVERSATION_TYPES) ||
    (message_type == "tool_call" && payload["tool_name"] == THINK_TOOL)
end
#decorator_class ⇒ Class
Draper hook: picks the concrete decorator subclass based on #message_type. Overrides Draper::Decoratable#decorator_class, which would otherwise default to the abstract MessageDecorator base class. Called implicitly by message.decorate.
# File 'app/models/message.rb', line 106

def decorator_class
  case message_type
  when "user_message" then UserMessageDecorator
  when "agent_message" then AgentMessageDecorator
  when "system_message" then SystemMessageDecorator
  when "tool_call" then ToolCallDecorator
  when "tool_response" then ToolResponseDecorator
  end
end
#tokenization_text ⇒ String
String fed to the token estimator and the remote tokenizer. Tool messages serialize the full payload as JSON so tool_name, tool_input, and tool_use_id contribute to the count; conversation messages use the content field only.
# File 'app/models/message.rb', line 92

def tokenization_text
  if message_type.in?(TOOL_TYPES)
    payload.to_json
  else
    payload["content"].to_s
  end
end