Class: Rdkafka::Config

Inherits: Object

Defined in: lib/rdkafka/config.rb

Overview

Configuration for a Kafka consumer or producer. You can create an instance and use the consumer and producer methods to create a client. Documentation of the available configuration options is available at https://github.com/confluentinc/librdkafka/blob/master/CONFIGURATION.md.
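A minimal usage sketch, assuming the rdkafka gem is installed and a broker is reachable at localhost:9092 (both assumptions, adjust for your environment):

```ruby
require "rdkafka"

config = Rdkafka::Config.new(
  "bootstrap.servers" => "localhost:9092",
  "group.id"          => "example-group" # required by the consumer
)

producer = config.producer  # Rdkafka::Producer
consumer = config.consumer  # Rdkafka::Consumer
```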

Defined Under Namespace

Classes: ClientCreationError, ConfigError, NoLoggerError

Constant Summary

DEFAULT_CONFIG =

Default config that can be overwritten.

{
  # Request api version so advanced features work
  :"api.version.request" => true
}.freeze
REQUIRED_CONFIG =

Required config that cannot be overwritten.

{
  # Enable log queues so we get callbacks in our own Ruby threads
  :"log.queue" => true
}.freeze

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(config_hash = {}) ⇒ Config

Returns a new config with the provided options, which are merged with DEFAULT_CONFIG.

Parameters:

  • config_hash (Hash{String,Symbol => String}) (defaults to: {})

    The config options for rdkafka



# File 'lib/rdkafka/config.rb', line 129

def initialize(config_hash = {})
  @config_hash = DEFAULT_CONFIG.merge(config_hash)
  @consumer_rebalance_listener = nil
  @consumer_poll_set = true
end
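The merge in the constructor means user-supplied keys override the defaults. A pure-Ruby sketch of that behavior, using the same hash shape as DEFAULT_CONFIG above:

```ruby
# Same shape as DEFAULT_CONFIG above.
defaults = { :"api.version.request" => true }.freeze

# User-supplied keys win over defaults, exactly as in Config#initialize.
user   = { :"api.version.request" => false, :"bootstrap.servers" => "localhost:9092" }
merged = defaults.merge(user)

merged[:"api.version.request"] # => false (the user value wins)
merged[:"bootstrap.servers"]   # => "localhost:9092"
```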

Class Method Details

.ensure_log_thread ⇒ Object

Makes sure that there is a thread for consuming logs. The thread is not spawned immediately; it is checked for liveness on each call so that forked processes, where threads do not survive, get a fresh thread.



# File 'lib/rdkafka/config.rb', line 34

def self.ensure_log_thread
  return if @@log_thread && @@log_thread.alive?

  @@log_mutex.synchronize do
    # Restart if dead (fork, crash)
    @@log_thread = nil if @@log_thread && !@@log_thread.alive?

    @@log_thread ||= Thread.start do
      loop do
        severity, msg = @@log_queue.pop
        @@logger.add(severity, msg)
      end
    end
  end
end
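The restart-if-dead check exists because threads do not survive Process.fork. A generic sketch of the same lazy-start-and-restart pattern (the class and method names here are illustrative, not the gem's API):

```ruby
# Illustrative class: lazily starts a worker thread and restarts it
# when dead -- the same pattern Config.ensure_log_thread uses to stay
# safe across Process.fork.
class LogPump
  @mutex  = Mutex.new
  @thread = nil
  @queue  = Queue.new

  class << self
    attr_reader :queue

    def ensure_thread
      # Fast path: worker already running.
      return if @thread && @thread.alive?

      @mutex.synchronize do
        # Restart if dead (fork, crash) -- same check as the gem.
        @thread = nil if @thread && !@thread.alive?

        @thread ||= Thread.start do
          loop { queue.pop } # a real pump would hand entries to a logger
        end
      end
    end

    def running?
      @thread ? @thread.alive? : false
    end
  end
end

LogPump.ensure_thread
LogPump.queue.push([:info, "hello"])
```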

.error_callback ⇒ Proc?

Returns the current error callback; by default this is nil.

Returns:

  • (Proc, nil)


# File 'lib/rdkafka/config.rb', line 103

def self.error_callback
  @@error_callback
end

.error_callback=(callback) ⇒ nil

Set a callback that will be called every time the underlying client emits an error. If this callback is not set, global errors such as brokers becoming unavailable will only be sent to the logger, as defined by librdkafka. The callback is called with an instance of RdKafka::Error.

Parameters:

  • callback (Proc, #call)

    The callback

Returns:

  • (nil)

Raises:

  • (TypeError)


# File 'lib/rdkafka/config.rb', line 95

def self.error_callback=(callback)
  raise TypeError.new("Callback has to be callable") unless callback.respond_to?(:call)
  @@error_callback = callback
end
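The setter accepts anything that responds to call. The standalone helper below mirrors the guard inside the setter (the helper name and error-handling body are illustrative, not the gem's API):

```ruby
# Mirrors the guard inside Config.error_callback=: anything
# responding to #call is accepted, everything else raises TypeError.
def assert_callable!(callback)
  raise TypeError, "Callback has to be callable" unless callback.respond_to?(:call)
  callback
end

on_error = ->(error) { warn "rdkafka error: #{error}" }

assert_callable!(on_error) # a lambda is fine

begin
  assert_callable!("not a proc")
rescue TypeError
  # expected: "Callback has to be callable"
end
```

With the gem loaded, the same callable would then be installed with `Rdkafka::Config.error_callback = on_error`.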

.log_queue ⇒ Queue

Returns a queue whose contents will be passed to the configured logger. Each entry should follow the format [Logger::Severity, String]. The benefit over calling the logger directly is that this is safe to use from trap contexts.

Returns:

  • (Queue)


# File 'lib/rdkafka/config.rb', line 55

def self.log_queue
  @@log_queue
end
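Why a queue is trap-safe: Logger takes a mutex internally, which Ruby forbids inside signal handlers, while Queue#push is permitted there. A stdlib-only sketch of the same pattern (the gem wires its own queue to the configured logger; this example is self-contained and does not use the gem):

```ruby
require "logger"
require "stringio"

log_queue = Queue.new
out       = StringIO.new
logger    = Logger.new(out)

# Worker thread drains [severity, message] pairs into the logger --
# the same entry format Config.log_queue expects.
pump = Thread.start do
  severity, msg = log_queue.pop
  logger.add(severity, msg)
end

# Queue#push is legal inside Signal.trap handlers, where calling the
# logger directly would raise ThreadError (mutex use is forbidden).
log_queue.push([Logger::Severity::INFO, "shutting down"])
pump.join
```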

.logger ⇒ Logger

Returns the current logger; by default this is a logger to stdout.

Returns:

  • (Logger)


# File 'lib/rdkafka/config.rb', line 28

def self.logger
  @@logger
end

.logger=(logger) ⇒ nil

Set the logger that will be used for all logging output by this library.

Parameters:

  • logger (Logger)

    The logger to be used

Returns:

  • (nil)

Raises:

  • (NoLoggerError)

# File 'lib/rdkafka/config.rb', line 64

def self.logger=(logger)
  raise NoLoggerError if logger.nil?
  @@logger = logger
end

.statistics_callback ⇒ Proc?

Returns the current statistics callback; by default this is nil.

Returns:

  • (Proc, nil)


# File 'lib/rdkafka/config.rb', line 84

def self.statistics_callback
  @@statistics_callback
end

.statistics_callback=(callback) ⇒ nil

Set a callback that will be called every time the underlying client emits statistics. You can configure if and how often this happens using statistics.interval.ms. The callback is called with a hash that's documented here: https://github.com/confluentinc/librdkafka/blob/master/STATISTICS.md

Parameters:

  • callback (Proc, #call)

    The callback

Returns:

  • (nil)

Raises:

  • (TypeError)


# File 'lib/rdkafka/config.rb', line 76

def self.statistics_callback=(callback)
  raise TypeError.new("Callback has to be callable") unless callback.respond_to?(:call) || callback.nil?
  @@statistics_callback = callback
end
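A sketch of a statistics callback. The keys used here ("name", "msg_cnt") are top-level fields documented in STATISTICS.md; the sample hash is illustrative, not real librdkafka output:

```ruby
# A statistics callback is any callable taking the parsed stats hash.
stats_callback = lambda do |stats|
  "#{stats["name"]}: #{stats["msg_cnt"]} messages in internal queues"
end

# Illustrative sample of the shape librdkafka emits.
sample = { "name" => "rdkafka#producer-1", "msg_cnt" => 3 }
puts stats_callback.call(sample)
```

With the gem loaded you would install it via `Rdkafka::Config.statistics_callback = stats_callback` and control emission frequency with the statistics.interval.ms config option.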

Instance Method Details

#[](key) ⇒ String?

Get a config option with the specified key

Parameters:

  • key (String)

    The config option's key

Returns:

  • (String, nil)

    The config option or nil if it is not present



# File 'lib/rdkafka/config.rb', line 150

def [](key)
  @config_hash[key]
end

#[]=(key, value) ⇒ nil

Set a config option.

Parameters:

  • key (String)

    The config option's key

  • value (String)

    The config option's value

Returns:

  • (nil)


# File 'lib/rdkafka/config.rb', line 141

def []=(key, value)
  @config_hash[key] = value
end

#admin ⇒ Admin

Creates an admin instance with this configuration.

Returns:

  • (Admin)

    The created admin instance

Raises:



# File 'lib/rdkafka/config.rb', line 241

def admin
  opaque = Opaque.new
  config = native_config(opaque)
  Rdkafka::Bindings.rd_kafka_conf_set_background_event_cb(config, Rdkafka::Callbacks::BackgroundEventCallbackFunction)
  Rdkafka::Admin.new(
    Rdkafka::NativeKafka.new(
      native_kafka(config, :rd_kafka_producer),
      run_polling_thread: true,
      opaque: opaque
    )
  )
end

#consumer ⇒ Consumer

Creates a consumer with this configuration.

Returns:

  • (Consumer)

    The created consumer

Raises:



# File 'lib/rdkafka/config.rb', line 183

def consumer
  opaque = Opaque.new
  config = native_config(opaque)

  if @consumer_rebalance_listener
    opaque.consumer_rebalance_listener = @consumer_rebalance_listener
    Rdkafka::Bindings.rd_kafka_conf_set_rebalance_cb(config, Rdkafka::Bindings::RebalanceCallback)
  end

  # Create native client
  kafka = native_kafka(config, :rd_kafka_consumer)

  # Redirect the main queue to the consumer queue
  Rdkafka::Bindings.rd_kafka_poll_set_consumer(kafka) if @consumer_poll_set

  # Return consumer with Kafka client
  Rdkafka::Consumer.new(
    Rdkafka::NativeKafka.new(
      kafka,
      run_polling_thread: false,
      opaque: opaque
    )
  )
end
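A hedged sketch of creating and using a consumer (broker address, group id, and topic name are assumptions; requires a reachable broker):

```ruby
require "rdkafka"

config = Rdkafka::Config.new(
  "bootstrap.servers" => "localhost:9092",
  "group.id"          => "example-group"
)

consumer = config.consumer
consumer.subscribe("example-topic")

# Blocks, yielding each message as it arrives.
consumer.each do |message|
  puts "#{message.topic}/#{message.partition}@#{message.offset}: #{message.payload}"
end
```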

#consumer_poll_set=(poll_set) ⇒ Object

Should we use a single queue for the underlying consumer and events.

This is an advanced API that allows for more granular control of the polling process. When this value is set to false (true by default), there will be two queues that need to be polled:

  • main librdkafka queue for events
  • consumer queue with messages and rebalances

It is recommended to use the default and only set it to false in advanced multi-threaded and complex cases where granular control of event handling is needed.

Parameters:

  • poll_set (Boolean)


# File 'lib/rdkafka/config.rb', line 173

def consumer_poll_set=(poll_set)
  @consumer_poll_set = poll_set
end

#consumer_rebalance_listener=(listener) ⇒ Object

Sets a listener that receives notifications on partition assignment/revocation for the subscribed topics.

Parameters:

  • listener (Object, #on_partitions_assigned, #on_partitions_revoked)

    listener instance



# File 'lib/rdkafka/config.rb', line 157

def consumer_rebalance_listener=(listener)
  @consumer_rebalance_listener = listener
end
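A listener is any object responding to on_partitions_assigned and on_partitions_revoked, per the parameter documentation above. A minimal sketch (the class name and method bodies are illustrative):

```ruby
# Illustrative listener: any object with these two methods works.
class RebalanceLogger
  attr_reader :events

  def initialize
    @events = []
  end

  # Called with the assigned topic partition list.
  def on_partitions_assigned(list)
    @events << [:assigned, list]
  end

  # Called with the revoked topic partition list.
  def on_partitions_revoked(list)
    @events << [:revoked, list]
  end
end

listener = RebalanceLogger.new
# config.consumer_rebalance_listener = listener  # set before calling #consumer
```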

#producer ⇒ Producer

Creates a producer with this configuration.

Returns:

  • (Producer)

    The created producer

Raises:



# File 'lib/rdkafka/config.rb', line 214

def producer
  # Create opaque
  opaque = Opaque.new
  # Create Kafka config
  config = native_config(opaque)
  # Set callback to receive delivery reports on config
  Rdkafka::Bindings.rd_kafka_conf_set_dr_msg_cb(config, Rdkafka::Callbacks::DeliveryCallbackFunction)
  # Return producer with Kafka client
  partitioner_name = self[:partitioner] || self["partitioner"]
  Rdkafka::Producer.new(
    Rdkafka::NativeKafka.new(
      native_kafka(config, :rd_kafka_producer),
      run_polling_thread: true,
      opaque: opaque
    ),
    partitioner_name
  ).tap do |producer|
    opaque.producer = producer
  end
end
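A hedged sketch of producing a message (broker address and topic name are assumptions; requires a reachable broker):

```ruby
require "rdkafka"

config   = Rdkafka::Config.new("bootstrap.servers" => "localhost:9092")
producer = config.producer

# produce is asynchronous; the returned delivery handle can be
# waited on for the delivery report.
handle = producer.produce(
  topic:   "example-topic",
  payload: "hello world",
  key:     "key-1"
)
handle.wait

producer.close
```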