Class: Rdkafka::Producer
- Inherits: Object
- Includes:
- Helpers::OAuth, Helpers::Time
- Defined in:
- lib/rdkafka/producer.rb,
lib/rdkafka/producer/delivery_handle.rb,
lib/rdkafka/producer/delivery_report.rb,
lib/rdkafka/producer/partitions_count_cache.rb
Defined Under Namespace
Classes: DeliveryHandle, DeliveryReport, PartitionsCountCache, TopicHandleCreationError
Constant Summary
- @@partitions_count_cache = PartitionsCountCache.new
Instance Attribute Summary
-
#delivery_callback ⇒ Proc?
Returns the current delivery callback; by default this is nil.
-
#delivery_callback_arity ⇒ Integer?
readonly
Returns the number of arguments accepted by the callback; by default this is nil.
Class Method Summary
-
.partitions_count_cache ⇒ Rdkafka::Producer::PartitionsCountCache
Global (process-wide) partitions cache.
- .partitions_count_cache=(partitions_count_cache) ⇒ Object
Instance Method Summary
-
#abort_transaction(timeout_ms = -1) ⇒ true
Abort the current transaction.
-
#arity(callback) ⇒ Integer
Figures out the arity of a given block/method.
-
#begin_transaction ⇒ true
Begin a new transaction. Requires #init_transactions to have been called first.
-
#call_delivery_callback(delivery_report, delivery_handle) ⇒ Object
Calls (if registered) the delivery callback.
-
#close ⇒ Object
Close this producer and wait for the internal poll queue to empty.
-
#closed? ⇒ Boolean
Whether this producer has closed.
-
#commit_transaction(timeout_ms = -1) ⇒ true
Commit the current transaction.
-
#flush(timeout_ms = Defaults::PRODUCER_FLUSH_TIMEOUT_MS) ⇒ Boolean
Wait until all outstanding producer requests are completed, with the given timeout in milliseconds.
-
#init_transactions ⇒ true
Initialize transactions for the producer. Must be called once before any transactional operations.
-
#initialize(native_kafka, partitioner) ⇒ Producer
constructor
A new instance of Producer.
-
#name ⇒ String
Producer name.
-
#partition_count(topic) ⇒ Integer
Partition count for a given topic.
-
#produce(topic:, payload: nil, key: nil, partition: nil, partition_key: nil, timestamp: nil, headers: nil, label: nil, topic_config: EMPTY_HASH, partitioner: @partitioner) ⇒ DeliveryHandle
Produces a message to a Kafka topic.
-
#purge ⇒ Object
Purges the outgoing queue and releases all resources.
-
#queue_size ⇒ Integer
(also: #queue_length)
Returns the number of messages and requests waiting to be sent to the broker as well as delivery reports queued for the application.
-
#send_offsets_to_transaction(consumer, tpl, timeout_ms = Defaults::PRODUCER_SEND_OFFSETS_TIMEOUT_MS) ⇒ Object
Sends provided offsets of a consumer to the transaction for collective commit.
-
#set_topic_config(topic, config, config_hash) ⇒ Object
Sets an alternative set of configuration details that can be applied per topic.
-
#start ⇒ Object
Starts the native Kafka polling thread and kicks off the init polling.
Methods included from Helpers::OAuth
#oauthbearer_set_token, #oauthbearer_set_token_failure
Methods included from Helpers::Time
Constructor Details
#initialize(native_kafka, partitioner) ⇒ Producer
Returns a new instance of Producer.
# File 'lib/rdkafka/producer.rb', line 55

def initialize(native_kafka, partitioner)
  @topics_refs_map = {}
  @topics_configs = {}
  @native_kafka = native_kafka
  @partitioner = partitioner || "consistent_random"

  # Makes sure that native kafka gets closed before it gets GCed by Ruby
  ObjectSpace.define_finalizer(self, native_kafka.finalizer)
end
Instance Attribute Details
#delivery_callback ⇒ Proc?
Returns the current delivery callback; by default this is nil.
# File 'lib/rdkafka/producer.rb', line 43

def delivery_callback
  @delivery_callback
end
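For illustration, a minimal sketch of registering a callback through the #delivery_callback= writer that backs this attribute (the fields used are those of DeliveryReport):

producer.delivery_callback = ->(report) do
  # Runs once per message after the broker confirms (or rejects) delivery
  puts "Delivered to #{report.topic_name}/#{report.partition} at offset #{report.offset}"
end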
#delivery_callback_arity ⇒ Integer? (readonly)
Returns the number of arguments accepted by the callback; by default this is nil.
# File 'lib/rdkafka/producer.rb', line 49

def delivery_callback_arity
  @delivery_callback_arity
end
Class Method Details
.partitions_count_cache ⇒ Rdkafka::Producer::PartitionsCountCache
It is critical to remember that not all users have statistics callbacks enabled, hence we should not assume that this cache is always updated from the stats.
Global (process-wide) partitions cache. We use it to store the number of partitions per topic, obtained either from the librdkafka statistics (if enabled) or via direct inline calls every now and then. Since the partitions count can only grow and should be the same for all consumers and producers, we can use a global cache as long as we ensure that updates only move up.
# File 'lib/rdkafka/producer.rb', line 20

def self.partitions_count_cache
  @@partitions_count_cache
end
.partitions_count_cache=(partitions_count_cache) ⇒ Object
# File 'lib/rdkafka/producer.rb', line 25

def self.partitions_count_cache=(partitions_count_cache)
  @@partitions_count_cache = partitions_count_cache
end
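A short sketch of swapping the process-wide cache, e.g. to isolate tests (purely illustrative):

# Replace the global cache so partition counts cached elsewhere do not leak in
Rdkafka::Producer.partitions_count_cache = Rdkafka::Producer::PartitionsCountCache.new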
Instance Method Details
#abort_transaction(timeout_ms = -1) ⇒ true
Abort the current transaction.
# File 'lib/rdkafka/producer.rb', line 186

def abort_transaction(timeout_ms = -1)
  closed_producer_check(__method__)
  @native_kafka.with_inner do |inner|
    response_ptr = Rdkafka::Bindings.rd_kafka_abort_transaction(inner, timeout_ms)
    Rdkafka::RdkafkaError.validate!(response_ptr, client_ptr: inner) || true
  end
end
#arity(callback) ⇒ Integer
Figures out the arity of a given block/method.
# File 'lib/rdkafka/producer.rb', line 533

def arity(callback)
  return callback.arity if callback.respond_to?(:arity)

  callback.method(:call).arity
end
#begin_transaction ⇒ true
Begin a new transaction. Requires #init_transactions to have been called first.
# File 'lib/rdkafka/producer.rb', line 156

def begin_transaction
  closed_producer_check(__method__)
  @native_kafka.with_inner do |inner|
    response_ptr = Rdkafka::Bindings.rd_kafka_begin_transaction(inner)
    Rdkafka::RdkafkaError.validate!(response_ptr, client_ptr: inner) || true
  end
end
#call_delivery_callback(delivery_report, delivery_handle) ⇒ Object
Calls (if registered) the delivery callback.
# File 'lib/rdkafka/producer.rb', line 516

def call_delivery_callback(delivery_report, delivery_handle)
  return unless @delivery_callback

  case @delivery_callback_arity
  when 0
    @delivery_callback.call
  when 1
    @delivery_callback.call(delivery_report)
  else
    @delivery_callback.call(delivery_report, delivery_handle)
  end
end
#close ⇒ Object
Close this producer and wait for the internal poll queue to empty.
# File 'lib/rdkafka/producer.rb', line 223

def close
  return if closed?

  ObjectSpace.undefine_finalizer(self)

  @native_kafka.close do
    # We need to remove the topic reference objects before we destroy the producer,
    # otherwise they would leak out
    @topics_refs_map.each_value do |refs|
      refs.each_value do |ref|
        Rdkafka::Bindings.rd_kafka_topic_destroy(ref)
      end
    end
  end

  @topics_refs_map.clear
end
#closed? ⇒ Boolean
Whether this producer has closed.
# File 'lib/rdkafka/producer.rb', line 241

def closed?
  @native_kafka.closed?
end
#commit_transaction(timeout_ms = -1) ⇒ true
Commit the current transaction.
# File 'lib/rdkafka/producer.rb', line 171

def commit_transaction(timeout_ms = -1)
  closed_producer_check(__method__)
  @native_kafka.with_inner do |inner|
    response_ptr = Rdkafka::Bindings.rd_kafka_commit_transaction(inner, timeout_ms)
    Rdkafka::RdkafkaError.validate!(response_ptr, client_ptr: inner) || true
  end
end
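Taken together with #init_transactions, #begin_transaction and #abort_transaction, a minimal transactional flow might look like this sketch (the producer is assumed to be configured with a transactional.id; the topic name is illustrative):

producer.init_transactions

begin
  producer.begin_transaction
  producer.produce(topic: "events", payload: "in-transaction")
  producer.commit_transaction
rescue Rdkafka::RdkafkaError
  # Roll back everything produced since begin_transaction
  producer.abort_transaction
  raise
end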
#flush(timeout_ms = Defaults::PRODUCER_FLUSH_TIMEOUT_MS) ⇒ Boolean
We raise an exception for other errors because, based on the librdkafka docs, there should be no other errors.
For `timed_out` we do not raise an error, to keep it backwards compatible.
Wait until all outstanding producer requests are completed, with the given timeout in milliseconds. Call this before closing a producer to ensure delivery of all messages.
# File 'lib/rdkafka/producer.rb', line 256

def flush(timeout_ms = Defaults::PRODUCER_FLUSH_TIMEOUT_MS)
  closed_producer_check(__method__)

  error = @native_kafka.with_inner do |inner|
    response = Rdkafka::Bindings.rd_kafka_flush(inner, timeout_ms)
    Rdkafka::RdkafkaError.build(response)
  end

  # Early skip not to build the error message
  return true unless error
  return false if error.code == :timed_out

  raise(error)
end
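A hedged usage sketch: flush returns false rather than raising when the timeout is exceeded, which lets shutdown code react (topic and timeout are placeholders):

producer.produce(topic: "events", payload: "last message")

unless producer.flush(10_000)
  warn("flush timed out; some messages may still be queued")
end

producer.close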
#init_transactions ⇒ true
Initialize transactions for the producer. Must be called once before any transactional operations.
# File 'lib/rdkafka/producer.rb', line 141

def init_transactions
  closed_producer_check(__method__)
  @native_kafka.with_inner do |inner|
    response_ptr = Rdkafka::Bindings.rd_kafka_init_transactions(inner, -1)
    Rdkafka::RdkafkaError.validate!(response_ptr, client_ptr: inner) || true
  end
end
#name ⇒ String
Returns producer name.
# File 'lib/rdkafka/producer.rb', line 119

def name
  @name ||= @native_kafka.with_inner do |inner|
    ::Rdkafka::Bindings.rd_kafka_name(inner)
  end
end
#partition_count(topic) ⇒ Integer
If `allow.auto.create.topics` is set to true on the broker, the topic will be auto-created after returning nil.
We cache the partition count for a given topic for a given time. If statistics are enabled for any producer or consumer, they take precedence over per-instance fetching.
This prevents us from querying for the count on each message when someone uses `partition_key`. Instead we query at most once every 30 seconds if we have a valid partition count, or every 5 seconds if we were not able to obtain the number of partitions.
Partition count for a given topic.
# File 'lib/rdkafka/producer.rb', line 335

def partition_count(topic)
  closed_producer_check(__method__)

  self.class.partitions_count_cache.get(topic) do
    topic_metadata = nil

    @native_kafka.with_inner do |inner|
      topic_metadata = ::Rdkafka::Metadata.new(inner, topic).topics&.first
    end

    topic_metadata ? topic_metadata[:partition_count] : Rdkafka::Bindings::RD_KAFKA_PARTITION_UA
  end
rescue Rdkafka::RdkafkaError => e
  # If the topic does not exist, it will be created or, if not allowed, another error will
  # be raised. We return RD_KAFKA_PARTITION_UA here so this can happen without an early
  # error on metadata discovery.
  return Rdkafka::Bindings::RD_KAFKA_PARTITION_UA if e.code == :unknown_topic_or_part

  raise(e)
end
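For illustration (the topic name is a placeholder):

# Returns the cached or freshly fetched partition count;
# RD_KAFKA_PARTITION_UA (-1) signals that the topic is not (yet) known
puts producer.partition_count("events")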
#produce(topic:, payload: nil, key: nil, partition: nil, partition_key: nil, timestamp: nil, headers: nil, label: nil, topic_config: EMPTY_HASH, partitioner: @partitioner) ⇒ DeliveryHandle
Produces a message to a Kafka topic. The message is added to rdkafka's queue; call wait on the returned delivery handle to make sure it is delivered.
When no partition is specified the underlying Kafka library picks a partition based on the key. If no key is specified, a random partition will be used. When a timestamp is provided this is used instead of the auto-generated timestamp.
# File 'lib/rdkafka/producer.rb', line 375

def produce(
  topic:,
  payload: nil,
  key: nil,
  partition: nil,
  partition_key: nil,
  timestamp: nil,
  headers: nil,
  label: nil,
  topic_config: EMPTY_HASH,
  partitioner: @partitioner
)
  closed_producer_check(__method__)

  # Start by checking and converting the input

  # Get payload length
  payload_size = if payload.nil?
                   0
                 else
                   payload.bytesize
                 end

  # Get key length
  key_size = if key.nil?
               0
             else
               key.bytesize
             end

  topic_config_hash = topic_config.hash

  # Checks if we have the rdkafka topic reference object ready. It saves us on object
  # allocation and allows to use custom config on demand.
  set_topic_config(topic, topic_config, topic_config_hash) unless @topics_refs_map.dig(topic, topic_config_hash)
  topic_ref = @topics_refs_map.dig(topic, topic_config_hash)

  if partition_key
    partition_count = partition_count(topic)

    # Check if there are no overrides for the partitioner and use the default one only
    # when no per-topic one is present.
    selected_partitioner = @topics_configs.dig(topic, topic_config_hash, :partitioner) || partitioner

    # If the topic is not present, set to -1
    partition = Rdkafka::Bindings.partitioner(topic_ref, partition_key, partition_count, selected_partitioner) if partition_count.positive?
  end

  # If partition is nil, use RD_KAFKA_PARTITION_UA to let librdkafka set the partition
  # randomly or based on the key when present.
  partition ||= Rdkafka::Bindings::RD_KAFKA_PARTITION_UA

  # If timestamp is nil use 0 and let Kafka set one. If an Integer or Time, use it.
  raw_timestamp = if timestamp.nil?
                    0
                  elsif timestamp.is_a?(Integer)
                    timestamp
                  elsif timestamp.is_a?(Time)
                    (timestamp.to_i * 1000) + (timestamp.usec / 1000)
                  else
                    raise TypeError.new("Timestamp has to be nil, an Integer or a Time")
                  end

  delivery_handle = DeliveryHandle.new
  delivery_handle.label = label
  delivery_handle.topic = topic
  delivery_handle[:pending] = true
  delivery_handle[:response] = Rdkafka::Bindings::RD_KAFKA_PARTITION_UA
  delivery_handle[:partition] = Rdkafka::Bindings::RD_KAFKA_PARTITION_UA
  delivery_handle[:offset] = Rdkafka::Bindings::RD_KAFKA_PARTITION_UA
  DeliveryHandle.register(delivery_handle)

  args = [
    :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_RKT, :pointer, topic_ref,
    :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_MSGFLAGS, :int, Rdkafka::Bindings::RD_KAFKA_MSG_F_COPY,
    :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_VALUE, :buffer_in, payload, :size_t, payload_size,
    :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_KEY, :buffer_in, key, :size_t, key_size,
    :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_PARTITION, :int32, partition,
    :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_TIMESTAMP, :int64, raw_timestamp,
    :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_OPAQUE, :pointer, delivery_handle,
  ]

  if headers && !headers.empty?
    headers.each do |key0, value0|
      key = key0.to_s

      case value0
      when Array
        # Handle array of values per KIP-82
        value0.each do |v|
          value = v.to_s
          args.push(
            :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_HEADER,
            :string, key,
            :pointer, value,
            :size_t, value.bytesize
          )
        end
      else
        # Handle single value
        value = value0.to_s
        args.push(
          :int, Rdkafka::Bindings::RD_KAFKA_VTYPE_HEADER,
          :string, key,
          :pointer, value,
          :size_t, value.bytesize
        )
      end
    end
  end

  args.push(:int, Rdkafka::Bindings::RD_KAFKA_VTYPE_END)

  # Produce the message
  response = @native_kafka.with_inner do |inner|
    Rdkafka::Bindings.rd_kafka_producev(inner, *args)
  end

  # Raise error if the produce call was not successful
  if response != Rdkafka::Bindings::RD_KAFKA_RESP_ERR_NO_ERROR
    DeliveryHandle.remove(delivery_handle.to_ptr.address)

    @native_kafka.with_inner do |inner|
      Rdkafka::RdkafkaError.validate!(response, client_ptr: inner)
    end
  end

  delivery_handle
end
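A minimal usage sketch (topic, payload, key and header values are placeholders):

handle = producer.produce(
  topic: "events",
  payload: "body",
  key: "user-1",
  headers: { "source" => "docs" }
)

# Blocks until the broker confirms delivery or the wait times out
report = handle.wait(max_wait_timeout: 5)
puts "Delivered to partition #{report.partition} at offset #{report.offset}"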
#purge ⇒ Object
Purges the outgoing queue and releases all resources.
Useful when closing the producer with outgoing messages to unstable clusters, or when for any other reason waiting cannot continue. This purges both the queue and all in-flight requests and updates the delivery handle statuses so they can be materialized into `purge_queue` errors.
# File 'lib/rdkafka/producer.rb', line 277

def purge
  closed_producer_check(__method__)

  @native_kafka.with_inner do |inner|
    response = Bindings.rd_kafka_purge(
      inner,
      Bindings::RD_KAFKA_PURGE_F_QUEUE | Bindings::RD_KAFKA_PURGE_F_INFLIGHT
    )

    Rdkafka::RdkafkaError.validate!(response, client_ptr: inner)
  end

  # Wait for the purge to affect everything
  sleep(Defaults::PRODUCER_PURGE_SLEEP_INTERVAL_MS / 1_000.0) until flush(Defaults::PRODUCER_PURGE_FLUSH_TIMEOUT_MS)

  true
end
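A hedged sketch of the shutdown scenario described above (the flush timeout is arbitrary):

# Give pending messages a bounded chance to deliver, then drop whatever remains
producer.purge unless producer.flush(5_000)
producer.close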
#queue_size ⇒ Integer Also known as: queue_length
This method is thread-safe as it uses the @native_kafka.with_inner synchronization.
Returns the number of messages and requests waiting to be sent to the broker as well as delivery reports queued for the application.
This provides visibility into the producer's internal queue depth, useful for:
- Monitoring producer backpressure
- Implementing custom flow control
- Debugging message delivery issues
- Graceful shutdown logic (wait until the queue is empty), as sketched below
# File 'lib/rdkafka/producer.rb', line 311

def queue_size
  closed_producer_check(__method__)

  @native_kafka.with_inner do |inner|
    Rdkafka::Bindings.rd_kafka_outq_len(inner)
  end
end
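A sketch of the graceful-shutdown use listed above (the polling interval is arbitrary):

# Crude drain loop: wait until the internal queue is empty before closing
sleep(0.05) until producer.queue_size.zero?
producer.close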
#send_offsets_to_transaction(consumer, tpl, timeout_ms = Defaults::PRODUCER_SEND_OFFSETS_TIMEOUT_MS) ⇒ Object
Use only in the context of an active transaction.
Sends provided offsets of a consumer to the transaction for collective commit.
# File 'lib/rdkafka/producer.rb', line 201

def send_offsets_to_transaction(consumer, tpl, timeout_ms = Defaults::PRODUCER_SEND_OFFSETS_TIMEOUT_MS)
  closed_producer_check(__method__)

  return if tpl.empty?

  cgmetadata = consumer.consumer_group_metadata_pointer
  native_tpl = tpl.to_native_tpl

  @native_kafka.with_inner do |inner|
    response_ptr = Bindings.rd_kafka_send_offsets_to_transaction(inner, native_tpl, cgmetadata, timeout_ms)

    Rdkafka::RdkafkaError.validate!(response_ptr, client_ptr: inner)
  end
ensure
  if cgmetadata && !cgmetadata.null?
    Bindings.rd_kafka_consumer_group_metadata_destroy(cgmetadata)
  end

  Rdkafka::Bindings.rd_kafka_topic_partition_list_destroy(native_tpl) unless native_tpl.nil?
end
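A hedged exactly-once sketch combining this with the transactional API (assumes a transactional producer, a consumer with auto-commit disabled, and illustrative topic names; TopicPartitionList and its add_topic_and_partitions_with_offsets come from this library's consumer side):

producer.init_transactions
producer.begin_transaction

message = consumer.poll(1_000)
producer.produce(topic: "output", payload: message.payload.upcase)

# Commit the consumed offset atomically with the produced message
tpl = Rdkafka::Consumer::TopicPartitionList.new
tpl.add_topic_and_partitions_with_offsets(
  message.topic,
  message.partition => message.offset + 1
)

producer.send_offsets_to_transaction(consumer, tpl)
producer.commit_transaction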
#set_topic_config(topic, config, config_hash) ⇒ Object
It is not allowed to re-set the same topic config twice because of the underlying librdkafka caching.
Sets an alternative set of configuration details that can be applied per topic.
# File 'lib/rdkafka/producer.rb', line 73

def set_topic_config(topic, config, config_hash)
  # Ensure lock on topic reference just in case
  @native_kafka.with_inner do |inner|
    @topics_refs_map[topic] ||= {}
    @topics_configs[topic] ||= {}

    return if @topics_configs[topic].key?(config_hash)

    # If config is empty, we create an empty reference that will be used with defaults
    rd_topic_config = if config.empty?
      nil
    else
      Rdkafka::Bindings.rd_kafka_topic_conf_new.tap do |topic_config|
        config.each do |key, value|
          error_buffer = FFI::MemoryPointer.new(:char, 256)

          result = Rdkafka::Bindings.rd_kafka_topic_conf_set(
            topic_config,
            key.to_s,
            value.to_s,
            error_buffer,
            256
          )

          unless result == :config_ok
            raise Config::ConfigError.new(error_buffer.read_string)
          end
        end
      end
    end

    topic_handle = Bindings.rd_kafka_topic_new(inner, topic, rd_topic_config)

    raise TopicHandleCreationError.new("Error creating topic handle for topic #{topic}") if topic_handle.null?

    @topics_configs[topic][config_hash] = config
    @topics_refs_map[topic][config_hash] = topic_handle
  end
end
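In practice this is exercised through #produce's topic_config: parameter rather than called directly; a hedged sketch (the keys shown are standard librdkafka topic-level settings):

# The first produce with a given topic_config creates and caches the topic handle
producer.produce(
  topic: "events",
  payload: "critical",
  topic_config: { "acks" => "all", "message.timeout.ms" => 30_000 }
)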
#start ⇒ Object
Does not need to be run unless explicit start was disabled.
Starts the native Kafka polling thread and kicks off the init polling.
# File 'lib/rdkafka/producer.rb', line 114

def start
  @native_kafka.start
end