Class: Aws::Kafka::Client

Inherits:
Seahorse::Client::Base
  • Object
Includes:
ClientStubs
Defined in:
lib/aws-sdk-kafka/client.rb

Overview

An API client for Kafka. To construct a client, you need to configure a `:region` and `:credentials`.

client = Aws::Kafka::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

For details on configuring region and credentials see the [developer guide](/sdk-for-ruby/v3/developer-guide/setup-config.html).

See #initialize for a full list of supported configuration options.

Class Attribute Summary

API Operations

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(options) ⇒ Client

Returns a new instance of Client.

Parameters:

  • options (Hash)

Options Hash (options):

  • :plugins (Array<Seahorse::Client::Plugin>) — default: []

    A list of plugins to apply to the client. Each plugin is either a class name or an instance of a plugin class.

  • :credentials (required, Aws::CredentialProvider)

    Your AWS credentials used for authentication. This can be any class that includes and implements `Aws::CredentialProvider`, or an instance of one of the following classes:

    • `Aws::Credentials` - Used for configuring static, non-refreshing credentials.

    • `Aws::SharedCredentials` - Used for loading static credentials from a shared file, such as `~/.aws/config`.

    • `Aws::AssumeRoleCredentials` - Used when you need to assume a role.

    • `Aws::AssumeRoleWebIdentityCredentials` - Used when you need to assume a role after providing credentials via the web.

    • `Aws::SSOCredentials` - Used for loading credentials from AWS SSO using an access token generated from `aws login`.

    • `Aws::ProcessCredentials` - Used for loading credentials from a process that outputs to stdout.

    • `Aws::InstanceProfileCredentials` - Used for loading credentials from an EC2 IMDS on an EC2 instance.

    • `Aws::ECSCredentials` - Used for loading credentials from instances running in ECS.

    • `Aws::CognitoIdentityCredentials` - Used for loading credentials from the Cognito Identity service.

    When `:credentials` are not configured directly, the following locations will be searched for credentials:

    • `Aws.config`

    • The `:access_key_id`, `:secret_access_key`, `:session_token`, and `:account_id` options.

    • `ENV['AWS_ACCESS_KEY_ID']`, `ENV['AWS_SECRET_ACCESS_KEY']`, `ENV['AWS_SESSION_TOKEN']`, and `ENV['AWS_ACCOUNT_ID']`.

    • `~/.aws/credentials`

    • `~/.aws/config`

    • EC2/ECS IMDS instance profile - When used by default, the timeouts are very aggressive. Construct and pass an instance of `Aws::InstanceProfileCredentials` or `Aws::ECSCredentials` to enable retries and extended timeouts. Instance profile credential fetching can be disabled by setting `ENV['AWS_EC2_METADATA_DISABLED']` to `true`.

  • :region (required, String)

    The AWS region to connect to. The configured `:region` is used to determine the service `:endpoint`. When not passed, a default `:region` is searched for in the following locations:

  • :access_key_id (String)
  • :account_id (String)
  • :active_endpoint_cache (Boolean) — default: false

    When set to `true`, a thread polling for endpoints will be running in the background every 60 secs (default). Defaults to `false`.

  • :adaptive_retry_wait_to_fill (Boolean) — default: true

    Used only in `adaptive` retry mode. When true, the request will sleep until there is sufficient client-side capacity to retry the request. When false, the request will raise a `RetryCapacityNotAvailableError` and will not retry instead of sleeping.

  • :auth_scheme_preference (Array<String>)

    A list of preferred authentication schemes to use when making a request. Supported values are: `sigv4`, `sigv4a`, `httpBearerAuth`, and `noAuth`. When set using `ENV['AWS_AUTH_SCHEME_PREFERENCE']` or in shared config as `auth_scheme_preference`, the value should be a comma-separated list.

  • :client_side_monitoring (Boolean) — default: false

    When `true`, client-side metrics will be collected for all API requests from this client.

  • :client_side_monitoring_client_id (String) — default: ""

    Allows you to provide an identifier for this client which will be attached to all generated client side metrics. Defaults to an empty string.

  • :client_side_monitoring_host (String) — default: "127.0.0.1"

    Allows you to specify the DNS hostname or IPv4 or IPv6 address that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_port (Integer) — default: 31000

    Required for publishing client metrics. The port that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_publisher (Aws::ClientSideMonitoring::Publisher) — default: Aws::ClientSideMonitoring::Publisher

    Allows you to provide a custom client-side monitoring publisher class. By default, will use the Client Side Monitoring Agent Publisher.

  • :convert_params (Boolean) — default: true

    When `true`, an attempt is made to coerce request parameters into the required types.

  • :correct_clock_skew (Boolean) — default: true

    Used only in `standard` and `adaptive` retry modes. Specifies whether to apply a clock skew correction and retry requests with skewed client clocks.

  • :defaults_mode (String) — default: "legacy"

    See DefaultsModeConfiguration for a list of the accepted modes and the configuration defaults that are included.

  • :disable_host_prefix_injection (Boolean) — default: false

    When `true`, the SDK will not prepend the modeled host prefix to the endpoint.

  • :disable_request_compression (Boolean) — default: false

    When set to `true`, the request body will not be compressed for supported operations.

  • :endpoint (String, URI::HTTPS, URI::HTTP)

    Normally you should not configure the `:endpoint` option directly. This is normally constructed from the `:region` option. Configuring `:endpoint` is normally reserved for connecting to test or custom endpoints. The endpoint should be a URI formatted like:

    'http://example.com'
    'https://example.com'
    'http://example.com:123'
    
  • :endpoint_cache_max_entries (Integer) — default: 1000

    Used for the maximum size limit of the LRU cache storing endpoints data for endpoint discovery enabled operations. Defaults to 1000.

  • :endpoint_cache_max_threads (Integer) — default: 10

    The maximum number of threads used for polling endpoints to be cached. Defaults to 10.

  • :endpoint_cache_poll_interval (Integer) — default: 60

    When `:endpoint_discovery` and `:active_endpoint_cache` are enabled, use this option to configure the time interval in seconds for requests that fetch endpoint information. Defaults to 60 seconds.

  • :endpoint_discovery (Boolean) — default: false

    When set to `true`, endpoint discovery will be enabled for operations when available.

  • :ignore_configured_endpoint_urls (Boolean)

    Setting to true disables use of endpoint URLs provided via environment variables and the shared configuration file.

  • :log_formatter (Aws::Log::Formatter) — default: Aws::Log::Formatter.default

    The log formatter.

  • :log_level (Symbol) — default: :info

    The log level to send messages to the `:logger` at.

  • :logger (Logger)

    The Logger instance to send log messages to. If this option is not set, logging will be disabled.

  • :max_attempts (Integer) — default: 3

    An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. For example, setting this value to 5 will result in a request being retried up to 4 times. Used in `standard` and `adaptive` retry modes.

  • :profile (String) — default: "default"

    Used when loading credentials from the shared credentials file at `~/.aws/credentials`. When not specified, `'default'` is used.

  • :request_checksum_calculation (String) — default: "when_supported"

    Determines when a checksum will be calculated for request payloads. Values are:

    • `when_supported` - (default) When set, a checksum will be calculated for all request payloads of operations modeled with the `httpChecksum` trait where `requestChecksumRequired` is `true` and/or a `requestAlgorithmMember` is modeled.

    • `when_required` - When set, a checksum will only be calculated for request payloads of operations modeled with the `httpChecksum` trait where `requestChecksumRequired` is `true` or where a `requestAlgorithmMember` is modeled and supplied.

  • :request_min_compression_size_bytes (Integer) — default: 10240

    The minimum size in bytes that triggers compression for request bodies. The value must be a non-negative integer between 0 and 10485780 bytes, inclusive.

  • :response_checksum_validation (String) — default: "when_supported"

    Determines when checksum validation will be performed on response payloads. Values are:

    • `when_supported` - (default) When set, checksum validation is performed on all response payloads of operations modeled with the `httpChecksum` trait where `responseAlgorithms` is modeled, except when no modeled checksum algorithms are supported.

    • `when_required` - When set, checksum validation is not performed on response payloads of operations unless the checksum algorithm is supported and the `requestValidationModeMember` member is set to `ENABLED`.

  • :retry_backoff (Proc)

    A proc or lambda used for backoff. Defaults to 2**retries * retry_base_delay. This option is only used in the `legacy` retry mode.
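As a sketch of the default behaviour described above (the context object and base delay below are hypothetical stand-ins; the real proc receives the request context, which exposes a `retries` count):

```ruby
# A hypothetical exponential backoff proc for the legacy retry mode.
# It sleeps for 2**retries * BASE_DELAY seconds and returns the delay
# so the behaviour is easy to inspect.
BASE_DELAY = 0.3 # seconds, mirrors the :retry_base_delay default

backoff = lambda do |context|
  delay = 2**context.retries * BASE_DELAY
  Kernel.sleep(delay)
  delay
end

# It would be passed as, e.g.:
#   Aws::Kafka::Client.new(region: 'us-east-1', retry_mode: 'legacy',
#                          retry_backoff: backoff)
```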

  • :retry_base_delay (Float) — default: 0.3

    The base delay in seconds used by the default backoff function. This option is only used in the `legacy` retry mode.

  • :retry_jitter (Symbol) — default: :none

    A delay randomiser function used by the default backoff function. Some predefined functions can be referenced by name - :none, :equal, :full, otherwise a Proc that takes and returns a number. This option is only used in the `legacy` retry mode.

    @see www.awsarchitectureblog.com/2015/03/backoff.html

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only ~500-level server errors and certain ~400-level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, auth errors, endpoint discovery errors, and errors from expired credentials. This option is only used in the `legacy` retry mode.

  • :retry_max_delay (Integer) — default: 0

    The maximum number of seconds to delay between retries (0 for no limit) used by the default backoff function. This option is only used in the `legacy` retry mode.

  • :retry_mode (String) — default: "legacy"

    Specifies which retry algorithm to use. Values are:

    • `legacy` - The pre-existing retry behavior. This is the default value if no retry mode is provided.

    • `standard` - A standardized set of retry rules across the AWS SDKs. This includes support for retry quotas, which limit the number of unsuccessful retries a client can make.

    • `adaptive` - An experimental retry mode that includes all the functionality of `standard` mode along with automatic client-side throttling. This is a provisional mode that may change behavior in the future.

  • :sdk_ua_app_id (String)

    A unique and opaque application ID that is appended to the User-Agent header as `app/sdk_ua_app_id`. It should have a maximum length of 50. This variable is sourced from the environment variable `AWS_SDK_UA_APP_ID` or the shared config profile attribute `sdk_ua_app_id`.

  • :secret_access_key (String)
  • :session_token (String)
  • :sigv4a_signing_region_set (Array)

    A list of regions that should be signed with SigV4a signing. When not passed, a default `:sigv4a_signing_region_set` is searched for in the following locations:

  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default, fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information.

    Please note: when response stubbing is enabled, no HTTP requests are made, and retries are disabled.

  • :telemetry_provider (Aws::Telemetry::TelemetryProviderBase) — default: Aws::Telemetry::NoOpTelemetryProvider

    Allows you to provide a telemetry provider, which is used to emit telemetry data. By default, uses `NoOpTelemetryProvider`, which will not record or emit any telemetry data. The SDK supports the following telemetry providers:

    • OpenTelemetry (OTel) - To use the OTel provider, install and require the `opentelemetry-sdk` gem and then pass in an instance of `Aws::Telemetry::OTelProvider` for the telemetry provider.

  • :token_provider (Aws::TokenProvider)

    Your Bearer token used for authentication. This can be any class that includes and implements `Aws::TokenProvider`, or an instance of one of the following classes:

    • `Aws::StaticTokenProvider` - Used for configuring static, non-refreshing tokens.

    • `Aws::SSOTokenProvider` - Used for loading tokens from AWS SSO using an access token generated from `aws login`.

    When `:token_provider` is not configured directly, the `Aws::TokenProviderChain` will be used to search for tokens configured for your profile in shared configuration files.

  • :use_dualstack_endpoint (Boolean)

    When set to `true`, dualstack enabled endpoints (with `.aws` TLD) will be used if available.

  • :use_fips_endpoint (Boolean)

    When set to `true`, FIPS-compatible endpoints will be used if available. When a `fips` region is used, the region is normalized and this config is set to `true`.

  • :validate_params (Boolean) — default: true

    When `true`, request parameters are validated before sending the request.

  • :endpoint_provider (Aws::Kafka::EndpointProvider)

    The endpoint provider used to resolve endpoints. Any object that responds to `#resolve_endpoint(parameters)` where `parameters` is a Struct similar to `Aws::Kafka::EndpointParameters`.

  • :http_continue_timeout (Float) — default: 1

    The number of seconds to wait for a 100-continue response before sending the request body. This option has no effect unless the request has an "Expect" header set to "100-continue". Defaults to `nil`, which disables this behaviour. This value can safely be set per request on the session.

  • :http_idle_timeout (Float) — default: 5

    The number of seconds a connection is allowed to sit idle before it is considered stale. Stale connections are closed and removed from the pool before making a request.

  • :http_open_timeout (Float) — default: 15

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_proxy (URI::HTTP, String)

    A proxy to send requests through. Formatted like `http://proxy.com:123`.

  • :http_read_timeout (Float) — default: 60

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_wire_trace (Boolean) — default: false

    When `true`, HTTP debug output will be sent to the `:logger`.

  • :on_chunk_received (Proc)

    When a Proc object is provided, it will be used as callback when each chunk of the response body is received. It provides three arguments: the chunk, the number of bytes received, and the total number of bytes in the response (or nil if the server did not send a `content-length`).

  • :on_chunk_sent (Proc)

    When a Proc object is provided, it will be used as callback when each chunk of the request body is sent. It provides three arguments: the chunk, the number of bytes read from the body, and the total number of bytes in the body.
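The callback shape for both options can be sketched in plain Ruby (the proc below is hypothetical; it returns a status string so the behaviour is easy to inspect, where a real callback might log or update a progress bar):

```ruby
# A progress callback matching the :on_chunk_received signature:
# (chunk, bytes_received, total_bytes). total_bytes may be nil when the
# server sends no content-length header.
on_chunk_received = proc do |_chunk, bytes_received, total_bytes|
  if total_bytes
    pct = (100.0 * bytes_received / total_bytes).round(1)
    "received #{bytes_received}/#{total_bytes} bytes (#{pct}%)"
  else
    "received #{bytes_received} bytes (total unknown)"
  end
end

# It would be passed as, e.g.:
#   Aws::Kafka::Client.new(region: 'us-east-1',
#                          on_chunk_received: on_chunk_received)
```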

  • :raise_response_errors (Boolean) — default: true

    When `true`, response errors are raised.

  • :ssl_ca_bundle (String)

    Full path to the SSL certificate authority bundle file that should be used when verifying peer certificates. If you do not pass `:ssl_ca_bundle` or `:ssl_ca_directory`, the system default will be used if available.

  • :ssl_ca_directory (String)

    Full path of the directory that contains the unbundled SSL certificate authority files for verifying peer certificates. If you do not pass `:ssl_ca_bundle` or `:ssl_ca_directory`, the system default will be used if available.

  • :ssl_ca_store (String)

    Sets the `OpenSSL::X509::Store` used to verify peer certificates.

  • :ssl_cert (OpenSSL::X509::Certificate)

    Sets a client certificate when creating http connections.

  • :ssl_key (OpenSSL::PKey)

    Sets a client key when creating http connections.

  • :ssl_timeout (Float)

    Sets the SSL timeout in seconds.

  • :ssl_verify_peer (Boolean) — default: true

    When `true`, SSL peer certificates are verified when establishing a connection.



# File 'lib/aws-sdk-kafka/client.rb', line 473

def initialize(*args)
  super
end

Class Attribute Details

.identifierObject (readonly)

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.



# File 'lib/aws-sdk-kafka/client.rb', line 3875

def identifier
  @identifier
end

Class Method Details

.errors_moduleObject

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.



# File 'lib/aws-sdk-kafka/client.rb', line 3878

def errors_module
  Errors
end

Instance Method Details

#batch_associate_scram_secret(params = {}) ⇒ Types::BatchAssociateScramSecretResponse

Associates one or more Scram Secrets with an Amazon MSK cluster.

Examples:

Request syntax with placeholder values


resp = client.batch_associate_scram_secret({
  cluster_arn: "__string", # required
  secret_arn_list: ["__string"], # required
})

Response structure


resp.cluster_arn #=> String
resp.unprocessed_scram_secrets #=> Array
resp.unprocessed_scram_secrets[0].error_code #=> String
resp.unprocessed_scram_secrets[0].error_message #=> String
resp.unprocessed_scram_secrets[0].secret_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :cluster_arn (required, String)
  • :secret_arn_list (required, Array<String>)

    List of AWS Secrets Manager secret ARNs.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 510

def batch_associate_scram_secret(params = {}, options = {})
  req = build_request(:batch_associate_scram_secret, params)
  req.send_request(options)
end

#batch_disassociate_scram_secret(params = {}) ⇒ Types::BatchDisassociateScramSecretResponse

Disassociates one or more Scram Secrets from an Amazon MSK cluster.

Examples:

Request syntax with placeholder values


resp = client.batch_disassociate_scram_secret({
  cluster_arn: "__string", # required
  secret_arn_list: ["__string"], # required
})

Response structure


resp.cluster_arn #=> String
resp.unprocessed_scram_secrets #=> Array
resp.unprocessed_scram_secrets[0].error_code #=> String
resp.unprocessed_scram_secrets[0].error_message #=> String
resp.unprocessed_scram_secrets[0].secret_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :cluster_arn (required, String)
  • :secret_arn_list (required, Array<String>)

    List of AWS Secrets Manager secret ARNs.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2067

def batch_disassociate_scram_secret(params = {}, options = {})
  req = build_request(:batch_disassociate_scram_secret, params)
  req.send_request(options)
end

#build_request(operation_name, params = {}) ⇒ Object

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.

Parameters:

  • params (Hash) (defaults to: {})


# File 'lib/aws-sdk-kafka/client.rb', line 3848

def build_request(operation_name, params = {})
  handlers = @handlers.for(operation_name)
  tracer = config.telemetry_provider.tracer_provider.tracer(
    Aws::Telemetry.module_to_tracer_name('Aws::Kafka')
  )
  context = Seahorse::Client::RequestContext.new(
    operation_name: operation_name,
    operation: config.api.operation(operation_name),
    client: self,
    params: params,
    config: config,
    tracer: tracer
  )
  context[:gem_name] = 'aws-sdk-kafka'
  context[:gem_version] = '1.110.0'
  Seahorse::Client::Request.new(handlers, context)
end

#create_cluster(params = {}) ⇒ Types::CreateClusterResponse

Creates a new MSK cluster.

Examples:

Request syntax with placeholder values


resp = client.create_cluster({
  broker_node_group_info: { # required
    broker_az_distribution: "DEFAULT", # accepts DEFAULT
    client_subnets: ["__string"], # required
    instance_type: "__stringMin5Max32", # required
    security_groups: ["__string"],
    storage_info: {
      ebs_storage_info: {
        provisioned_throughput: {
          enabled: false,
          volume_throughput: 1,
        },
        volume_size: 1,
      },
    },
    connectivity_info: {
      public_access: {
        type: "__string",
      },
      vpc_connectivity: {
        client_authentication: {
          sasl: {
            scram: {
              enabled: false,
            },
            iam: {
              enabled: false,
            },
          },
          tls: {
            enabled: false,
          },
        },
      },
      network_type: "IPV4", # accepts IPV4, DUAL
    },
    zone_ids: ["__string"],
  },
  client_authentication: {
    sasl: {
      scram: {
        enabled: false,
      },
      iam: {
        enabled: false,
      },
    },
    tls: {
      certificate_authority_arn_list: ["__string"],
      enabled: false,
    },
    unauthenticated: {
      enabled: false,
    },
  },
  cluster_name: "__stringMin1Max64", # required
  configuration_info: {
    arn: "__string", # required
    revision: 1, # required
  },
  encryption_info: {
    encryption_at_rest: {
      data_volume_kms_key_id: "__string", # required
    },
    encryption_in_transit: {
      client_broker: "TLS", # accepts TLS, TLS_PLAINTEXT, PLAINTEXT
      in_cluster: false,
    },
  },
  enhanced_monitoring: "DEFAULT", # accepts DEFAULT, PER_BROKER, PER_TOPIC_PER_BROKER, PER_TOPIC_PER_PARTITION
  kafka_version: "__stringMin1Max128", # required
  logging_info: {
    broker_logs: { # required
      cloud_watch_logs: {
        enabled: false, # required
        log_group: "__string",
      },
      firehose: {
        delivery_stream: "__string",
        enabled: false, # required
      },
      s3: {
        bucket: "__string",
        enabled: false, # required
        prefix: "__string",
      },
    },
  },
  number_of_broker_nodes: 1, # required
  open_monitoring: {
    prometheus: { # required
      jmx_exporter: {
        enabled_in_broker: false, # required
      },
      node_exporter: {
        enabled_in_broker: false, # required
      },
    },
  },
  tags: {
    "__string" => "__string",
  },
  rebalancing: {
    status: "PAUSED", # required, accepts PAUSED, ACTIVE
  },
  storage_mode: "LOCAL", # accepts LOCAL, TIERED
})

Response structure


resp.cluster_arn #=> String
resp.cluster_name #=> String
resp.state #=> String, one of "ACTIVE", "CREATING", "DELETING", "FAILED", "HEALING", "MAINTENANCE", "REBOOTING_BROKER", "UPDATING"

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :broker_node_group_info (required, Types::BrokerNodeGroupInfo)

    Information about the brokers.

  • :client_authentication (Types::ClientAuthentication)

    Includes all client authentication related information.

  • :cluster_name (required, String)

    The name of the cluster.

  • :configuration_info (Types::ConfigurationInfo)

    Represents the configuration that you want MSK to use for the cluster.

  • :encryption_info (Types::EncryptionInfo)

    Includes all encryption-related information.

  • :enhanced_monitoring (String)

    Specifies the level of monitoring for the MSK cluster. The possible values are DEFAULT, PER_BROKER, PER_TOPIC_PER_BROKER, and PER_TOPIC_PER_PARTITION.

  • :kafka_version (required, String)

    The version of Apache Kafka.

  • :logging_info (Types::LoggingInfo)

    LoggingInfo details.

  • :number_of_broker_nodes (required, Integer)

    The number of Apache Kafka broker nodes in the Amazon MSK cluster.

  • :open_monitoring (Types::OpenMonitoringInfo)

    The settings for open monitoring.

  • :tags (Hash<String,String>)

    Create tags when creating the cluster.

  • :rebalancing (Types::Rebalancing)

    Specifies if intelligent rebalancing should be turned on for the new MSK Provisioned cluster with Express brokers. By default, intelligent rebalancing status is ACTIVE for all new clusters.

  • :storage_mode (String)

    This controls storage mode for supported storage tiers.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 686

def create_cluster(params = {}, options = {})
  req = build_request(:create_cluster, params)
  req.send_request(options)
end

#create_cluster_v2(params = {}) ⇒ Types::CreateClusterV2Response

Creates a new Amazon MSK cluster of either the provisioned or the serverless type.

Examples:

Request syntax with placeholder values


resp = client.create_cluster_v2({
  cluster_name: "__stringMin1Max64", # required
  tags: {
    "__string" => "__string",
  },
  provisioned: {
    broker_node_group_info: { # required
      broker_az_distribution: "DEFAULT", # accepts DEFAULT
      client_subnets: ["__string"], # required
      instance_type: "__stringMin5Max32", # required
      security_groups: ["__string"],
      storage_info: {
        ebs_storage_info: {
          provisioned_throughput: {
            enabled: false,
            volume_throughput: 1,
          },
          volume_size: 1,
        },
      },
      connectivity_info: {
        public_access: {
          type: "__string",
        },
        vpc_connectivity: {
          client_authentication: {
            sasl: {
              scram: {
                enabled: false,
              },
              iam: {
                enabled: false,
              },
            },
            tls: {
              enabled: false,
            },
          },
        },
        network_type: "IPV4", # accepts IPV4, DUAL
      },
      zone_ids: ["__string"],
    },
    client_authentication: {
      sasl: {
        scram: {
          enabled: false,
        },
        iam: {
          enabled: false,
        },
      },
      tls: {
        certificate_authority_arn_list: ["__string"],
        enabled: false,
      },
      unauthenticated: {
        enabled: false,
      },
    },
    configuration_info: {
      arn: "__string", # required
      revision: 1, # required
    },
    encryption_info: {
      encryption_at_rest: {
        data_volume_kms_key_id: "__string", # required
      },
      encryption_in_transit: {
        client_broker: "TLS", # accepts TLS, TLS_PLAINTEXT, PLAINTEXT
        in_cluster: false,
      },
    },
    enhanced_monitoring: "DEFAULT", # accepts DEFAULT, PER_BROKER, PER_TOPIC_PER_BROKER, PER_TOPIC_PER_PARTITION
    open_monitoring: {
      prometheus: { # required
        jmx_exporter: {
          enabled_in_broker: false, # required
        },
        node_exporter: {
          enabled_in_broker: false, # required
        },
      },
    },
    kafka_version: "__stringMin1Max128", # required
    logging_info: {
      broker_logs: { # required
        cloud_watch_logs: {
          enabled: false, # required
          log_group: "__string",
        },
        firehose: {
          delivery_stream: "__string",
          enabled: false, # required
        },
        s3: {
          bucket: "__string",
          enabled: false, # required
          prefix: "__string",
        },
      },
    },
    number_of_broker_nodes: 1, # required
    storage_mode: "LOCAL", # accepts LOCAL, TIERED
    rebalancing: {
      status: "PAUSED", # required, accepts PAUSED, ACTIVE
    },
  },
  serverless: {
    vpc_configs: [ # required
      {
        subnet_ids: ["__string"], # required
        security_group_ids: ["__string"],
      },
    ],
    client_authentication: {
      sasl: {
        iam: {
          enabled: false,
        },
      },
    },
  },
})

Response structure


resp.cluster_arn #=> String
resp.cluster_name #=> String
resp.state #=> String, one of "ACTIVE", "CREATING", "DELETING", "FAILED", "HEALING", "MAINTENANCE", "REBOOTING_BROKER", "UPDATING"
resp.cluster_type #=> String, one of "PROVISIONED", "SERVERLESS"

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :cluster_name (required, String)

    The name of the cluster.

  • :tags (Hash<String,String>)

    A map of tags that you want the cluster to have.

  • :provisioned (Types::ProvisionedRequest)

    Creates a provisioned cluster.

  • :serverless (Types::ServerlessRequest)

    Creates a serverless cluster.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 851

def create_cluster_v2(params = {}, options = {})
  req = build_request(:create_cluster_v2, params)
  req.send_request(options)
end

#create_configuration(params = {}) ⇒ Types::CreateConfigurationResponse

Creates a new MSK configuration.

Examples:

Request syntax with placeholder values


resp = client.create_configuration({
  description: "__string",
  kafka_versions: ["__string"],
  name: "__string", # required
  server_properties: "data", # required
})

Response structure


resp.arn #=> String
resp.creation_time #=> Time
resp.latest_revision.creation_time #=> Time
resp.latest_revision.description #=> String
resp.latest_revision.revision #=> Integer
resp.name #=> String
resp.state #=> String, one of "ACTIVE", "DELETING", "DELETE_FAILED"

Parameters:

  • params (Hash) (defaults to: {})

Options Hash (params):

  • :description (String)

    The description of the configuration.

  • :kafka_versions (Array<String>)

    The versions of Apache Kafka with which you can use this MSK configuration.

  • :name (required, String)

    The name of the configuration. Configuration names are strings that match the regex `"^[0-9A-Za-z-]+$"`.

  • :server_properties (required, String, StringIO, File)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 902

def create_configuration(params = {}, options = {})
  req = build_request(:create_configuration, params)
  req.send_request(options)
end

#create_replicator(params = {}) ⇒ Types::CreateReplicatorResponse

Creates a new Kafka Replicator.

Examples:

Request syntax with placeholder values


resp = client.create_replicator({
  description: "__stringMax1024",
  kafka_clusters: [ # required
    {
      amazon_msk_cluster: {
        msk_cluster_arn: "__string", # required
      },
      apache_kafka_cluster: {
        apache_kafka_cluster_id: "__string", # required
        bootstrap_broker_string: "__string", # required
      },
      vpc_config: {
        security_group_ids: ["__string"],
        subnet_ids: ["__string"], # required
      },
      client_authentication: {
        sasl_scram: { # required
          mechanism: "SHA256", # required, accepts SHA256, SHA512
          secret_arn: "__string", # required
        },
      },
      encryption_in_transit: {
        encryption_type: "TLS", # required, accepts TLS
        root_ca_certificate: "__string",
      },
    },
  ],
  log_delivery: {
    replicator_log_delivery: {
      cloud_watch_logs: {
        enabled: false, # required
        log_group: "__string",
      },
      firehose: {
        delivery_stream: "__string",
        enabled: false, # required
      },
      s3: {
        bucket: "__string",
        enabled: false, # required
        prefix: "__string",
      },
    },
  },
  replication_info_list: [ # required
    {
      consumer_group_replication: { # required
        consumer_groups_to_exclude: ["__stringMax256"],
        consumer_groups_to_replicate: ["__stringMax256"], # required
        detect_and_copy_new_consumer_groups: false,
        synchronise_consumer_group_offsets: false,
        consumer_group_offset_sync_mode: "LEGACY", # accepts LEGACY, ENHANCED
      },
      source_kafka_cluster_arn: "__string",
      source_kafka_cluster_id: "__string",
      target_compression_type: "NONE", # required, accepts NONE, GZIP, SNAPPY, LZ4, ZSTD
      target_kafka_cluster_arn: "__string",
      target_kafka_cluster_id: "__string",
      topic_replication: { # required
        copy_access_control_lists_for_topics: false,
        copy_topic_configurations: false,
        detect_and_copy_new_topics: false,
        starting_position: {
          type: "LATEST", # accepts LATEST, EARLIEST
        },
        topic_name_configuration: {
          type: "PREFIXED_WITH_SOURCE_CLUSTER_ALIAS", # accepts PREFIXED_WITH_SOURCE_CLUSTER_ALIAS, IDENTICAL
        },
        topics_to_exclude: ["__stringMax249"],
        topics_to_replicate: ["__stringMax249"], # required
      },
    },
  ],
  replicator_name: "__stringMin1Max128Pattern09AZaZ09AZaZ0", # required
  service_execution_role_arn: "__string", # required
  tags: {
    "__string" => "__string",
  },
})

Response structure


resp.replicator_arn #=> String
resp.replicator_name #=> String
resp.replicator_state #=> String, one of "RUNNING", "CREATING", "UPDATING", "DELETING", "FAILED"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :description (String)

    A summary description of the replicator.

  • :kafka_clusters (required, Array<Types::KafkaCluster>)

    The Kafka clusters to use as sources and targets for replication.

  • :log_delivery (Types::LogDelivery)

    Configuration for delivering replicator logs to customer destinations.

  • :replication_info_list (required, Array<Types::ReplicationInfo>)

    A list of replication configurations, one for each source-to-target cluster replication flow.

  • :replicator_name (required, String)

    The name of the replicator. Alphanumeric characters and the hyphen ('-') are allowed.

  • :service_execution_role_arn (required, String)

    The ARN of the IAM role that the replicator uses to access resources in the customer's account (for example, the source and target clusters).

  • :tags (Hash<String,String>)

    A map of tags to attach to the created replicator.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1031

def create_replicator(params = {}, options = {})
  req = build_request(:create_replicator, params)
  req.send_request(options)
end

#create_topic(params = {}) ⇒ Types::CreateTopicResponse

Creates a topic in the specified MSK cluster.

Examples:

Request syntax with placeholder values


resp = client.create_topic({
  cluster_arn: "__string", # required
  topic_name: "__string", # required
  partition_count: 1, # required
  replication_factor: 1, # required
  configs: "__string",
})

Response structure


resp.topic_arn #=> String
resp.topic_name #=> String
resp.status #=> String, one of "CREATING", "UPDATING", "DELETING", "ACTIVE"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :topic_name (required, String)
  • :partition_count (required, Integer)
  • :replication_factor (required, Integer)
  • :configs (String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1074

def create_topic(params = {}, options = {})
  req = build_request(:create_topic, params)
  req.send_request(options)
end

#create_vpc_connection(params = {}) ⇒ Types::CreateVpcConnectionResponse

Creates a new Amazon MSK VPC connection.

Examples:

Request syntax with placeholder values


resp = client.create_vpc_connection({
  target_cluster_arn: "__string", # required
  authentication: "__string", # required
  vpc_id: "__string", # required
  client_subnets: ["__string"], # required
  security_groups: ["__string"], # required
  tags: {
    "__string" => "__string",
  },
})

Response structure


resp.vpc_connection_arn #=> String
resp.state #=> String, one of "CREATING", "AVAILABLE", "INACTIVE", "DEACTIVATING", "DELETING", "FAILED", "REJECTED", "REJECTING"
resp.authentication #=> String
resp.vpc_id #=> String
resp.client_subnets #=> Array
resp.client_subnets[0] #=> String
resp.security_groups #=> Array
resp.security_groups[0] #=> String
resp.creation_time #=> Time
resp.tags #=> Hash
resp.tags["__string"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :target_cluster_arn (required, String)

    The Amazon Resource Name (ARN) of the cluster.

  • :authentication (required, String)
  • :vpc_id (required, String)

    The VPC ID of the VPC connection.

  • :client_subnets (required, Array<String>)

    The list of subnets in the client VPC.

  • :security_groups (required, Array<String>)

    The list of security groups to attach to the VPC connection.

  • :tags (Hash<String,String>)

    A map of tags to attach when creating the VPC connection.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1140

def create_vpc_connection(params = {}, options = {})
  req = build_request(:create_vpc_connection, params)
  req.send_request(options)
end

#delete_cluster(params = {}) ⇒ Types::DeleteClusterResponse

Deletes the MSK cluster specified by the Amazon Resource Name (ARN) in the request.

Examples:

Request syntax with placeholder values


resp = client.delete_cluster({
  cluster_arn: "__string", # required
  current_version: "__string",
})

Response structure


resp.cluster_arn #=> String
resp.state #=> String, one of "ACTIVE", "CREATING", "DELETING", "FAILED", "HEALING", "MAINTENANCE", "REBOOTING_BROKER", "UPDATING"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :current_version (String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1173

def delete_cluster(params = {}, options = {})
  req = build_request(:delete_cluster, params)
  req.send_request(options)
end

#delete_cluster_policy(params = {}) ⇒ Struct

Deletes the MSK cluster policy specified by the Amazon Resource Name (ARN) in your request.

Examples:

Request syntax with placeholder values


resp = client.delete_cluster_policy({
  cluster_arn: "__string", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3008

def delete_cluster_policy(params = {}, options = {})
  req = build_request(:delete_cluster_policy, params)
  req.send_request(options)
end

#delete_configuration(params = {}) ⇒ Types::DeleteConfigurationResponse

Deletes the specified MSK configuration. The configuration must be in the ACTIVE or DELETE_FAILED state.

Examples:

Request syntax with placeholder values


resp = client.delete_configuration({
  arn: "__string", # required
})

Response structure


resp.arn #=> String
resp.state #=> String, one of "ACTIVE", "DELETING", "DELETE_FAILED"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :arn (required, String)

    The Amazon Resource Name (ARN) of the configuration.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1204

def delete_configuration(params = {}, options = {})
  req = build_request(:delete_configuration, params)
  req.send_request(options)
end

#delete_replicator(params = {}) ⇒ Types::DeleteReplicatorResponse

Deletes a replicator.

Examples:

Request syntax with placeholder values


resp = client.delete_replicator({
  current_version: "__string",
  replicator_arn: "__string", # required
})

Response structure


resp.replicator_arn #=> String
resp.replicator_state #=> String, one of "RUNNING", "CREATING", "UPDATING", "DELETING", "FAILED"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :current_version (String)
  • :replicator_arn (required, String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1236

def delete_replicator(params = {}, options = {})
  req = build_request(:delete_replicator, params)
  req.send_request(options)
end

#delete_topic(params = {}) ⇒ Types::DeleteTopicResponse

Deletes a topic in the specified MSK cluster.

Examples:

Request syntax with placeholder values


resp = client.delete_topic({
  cluster_arn: "__string", # required
  topic_name: "__string", # required
})

Response structure


resp.topic_arn #=> String
resp.topic_name #=> String
resp.status #=> String, one of "CREATING", "UPDATING", "DELETING", "ACTIVE"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :topic_name (required, String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1270

def delete_topic(params = {}, options = {})
  req = build_request(:delete_topic, params)
  req.send_request(options)
end

#delete_vpc_connection(params = {}) ⇒ Types::DeleteVpcConnectionResponse

Deletes the Amazon MSK VPC connection specified in your request.

Examples:

Request syntax with placeholder values


resp = client.delete_vpc_connection({
  arn: "__string", # required
})

Response structure


resp.vpc_connection_arn #=> String
resp.state #=> String, one of "CREATING", "AVAILABLE", "INACTIVE", "DEACTIVATING", "DELETING", "FAILED", "REJECTED", "REJECTING"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :arn (required, String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1299

def delete_vpc_connection(params = {}, options = {})
  req = build_request(:delete_vpc_connection, params)
  req.send_request(options)
end

#describe_cluster(params = {}) ⇒ Types::DescribeClusterResponse

Returns a description of the MSK cluster whose Amazon Resource Name (ARN) is specified in the request.

Examples:

Request syntax with placeholder values


resp = client.describe_cluster({
  cluster_arn: "__string", # required
})

Response structure


resp.cluster_info.active_operation_arn #=> String
resp.cluster_info.broker_node_group_info.broker_az_distribution #=> String, one of "DEFAULT"
resp.cluster_info.broker_node_group_info.client_subnets #=> Array
resp.cluster_info.broker_node_group_info.client_subnets[0] #=> String
resp.cluster_info.broker_node_group_info.instance_type #=> String
resp.cluster_info.broker_node_group_info.security_groups #=> Array
resp.cluster_info.broker_node_group_info.security_groups[0] #=> String
resp.cluster_info.broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.enabled #=> Boolean
resp.cluster_info.broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.volume_throughput #=> Integer
resp.cluster_info.broker_node_group_info.storage_info.ebs_storage_info.volume_size #=> Integer
resp.cluster_info.broker_node_group_info.connectivity_info.public_access.type #=> String
resp.cluster_info.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_info.broker_node_group_info.connectivity_info.network_type #=> String, one of "IPV4", "DUAL"
resp.cluster_info.broker_node_group_info.zone_ids #=> Array
resp.cluster_info.broker_node_group_info.zone_ids[0] #=> String
resp.cluster_info.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_info.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_info.client_authentication.tls.enabled #=> Boolean
resp.cluster_info.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_info.cluster_arn #=> String
resp.cluster_info.cluster_name #=> String
resp.cluster_info.creation_time #=> Time
resp.cluster_info.current_broker_software_info.configuration_arn #=> String
resp.cluster_info.current_broker_software_info.configuration_revision #=> Integer
resp.cluster_info.current_broker_software_info.kafka_version #=> String
resp.cluster_info.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_info.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_info.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_info.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_info.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_info.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_info.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_info.current_version #=> String
resp.cluster_info.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_info.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_info.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_info.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_info.number_of_broker_nodes #=> Integer
resp.cluster_info.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_info.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_info.state #=> String, one of "ACTIVE", "CREATING", "DELETING", "FAILED", "HEALING", "MAINTENANCE", "REBOOTING_BROKER", "UPDATING"
resp.cluster_info.state_info.code #=> String
resp.cluster_info.state_info.message #=> String
resp.cluster_info.tags #=> Hash
resp.cluster_info.tags["__string"] #=> String
resp.cluster_info.zookeeper_connect_string #=> String
resp.cluster_info.zookeeper_connect_string_tls #=> String
resp.cluster_info.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_info.rebalancing.status #=> String, one of "PAUSED", "ACTIVE"
resp.cluster_info.customer_action_status #=> String, one of "CRITICAL_ACTION_REQUIRED", "ACTION_RECOMMENDED", "NONE"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1380

def describe_cluster(params = {}, options = {})
  req = build_request(:describe_cluster, params)
  req.send_request(options)
end

#describe_cluster_operation(params = {}) ⇒ Types::DescribeClusterOperationResponse

Returns a description of the cluster operation specified by the ARN.

Examples:

Request syntax with placeholder values


resp = client.describe_cluster_operation({
  cluster_operation_arn: "__string", # required
})

Response structure


resp.cluster_operation_info.client_request_id #=> String
resp.cluster_operation_info.cluster_arn #=> String
resp.cluster_operation_info.creation_time #=> Time
resp.cluster_operation_info.end_time #=> Time
resp.cluster_operation_info.error_info.error_code #=> String
resp.cluster_operation_info.error_info.error_string #=> String
resp.cluster_operation_info.operation_steps #=> Array
resp.cluster_operation_info.operation_steps[0].step_info.step_status #=> String
resp.cluster_operation_info.operation_steps[0].step_name #=> String
resp.cluster_operation_info.operation_arn #=> String
resp.cluster_operation_info.operation_state #=> String
resp.cluster_operation_info.operation_type #=> String
resp.cluster_operation_info.source_cluster_info.broker_ebs_volume_info #=> Array
resp.cluster_operation_info.source_cluster_info.broker_ebs_volume_info[0].kafka_broker_node_id #=> String
resp.cluster_operation_info.source_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.volume_throughput #=> Integer
resp.cluster_operation_info.source_cluster_info.broker_ebs_volume_info[0].volume_size_gb #=> Integer
resp.cluster_operation_info.source_cluster_info.configuration_info.arn #=> String
resp.cluster_operation_info.source_cluster_info.configuration_info.revision #=> Integer
resp.cluster_operation_info.source_cluster_info.number_of_broker_nodes #=> Integer
resp.cluster_operation_info.source_cluster_info.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.source_cluster_info.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.source_cluster_info.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_operation_info.source_cluster_info.kafka_version #=> String
resp.cluster_operation_info.source_cluster_info.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_operation_info.source_cluster_info.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_operation_info.source_cluster_info.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_operation_info.source_cluster_info.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_operation_info.source_cluster_info.instance_type #=> String
resp.cluster_operation_info.source_cluster_info.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_operation_info.source_cluster_info.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_operation_info.source_cluster_info.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_operation_info.source_cluster_info.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_operation_info.source_cluster_info.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_operation_info.source_cluster_info.connectivity_info.public_access.type #=> String
resp.cluster_operation_info.source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.connectivity_info.network_type #=> String, one of "IPV4", "DUAL"
resp.cluster_operation_info.source_cluster_info.zookeeper_access.enabled #=> Boolean
resp.cluster_operation_info.source_cluster_info.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_operation_info.source_cluster_info.broker_count_update_info.created_broker_ids #=> Array
resp.cluster_operation_info.source_cluster_info.broker_count_update_info.created_broker_ids[0] #=> Float
resp.cluster_operation_info.source_cluster_info.broker_count_update_info.deleted_broker_ids #=> Array
resp.cluster_operation_info.source_cluster_info.broker_count_update_info.deleted_broker_ids[0] #=> Float
resp.cluster_operation_info.source_cluster_info.rebalancing.status #=> String, one of "PAUSED", "ACTIVE"
resp.cluster_operation_info.target_cluster_info.broker_ebs_volume_info #=> Array
resp.cluster_operation_info.target_cluster_info.broker_ebs_volume_info[0].kafka_broker_node_id #=> String
resp.cluster_operation_info.target_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.volume_throughput #=> Integer
resp.cluster_operation_info.target_cluster_info.broker_ebs_volume_info[0].volume_size_gb #=> Integer
resp.cluster_operation_info.target_cluster_info.configuration_info.arn #=> String
resp.cluster_operation_info.target_cluster_info.configuration_info.revision #=> Integer
resp.cluster_operation_info.target_cluster_info.number_of_broker_nodes #=> Integer
resp.cluster_operation_info.target_cluster_info.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.target_cluster_info.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.target_cluster_info.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_operation_info.target_cluster_info.kafka_version #=> String
resp.cluster_operation_info.target_cluster_info.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_operation_info.target_cluster_info.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_operation_info.target_cluster_info.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_operation_info.target_cluster_info.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_operation_info.target_cluster_info.instance_type #=> String
resp.cluster_operation_info.target_cluster_info.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_operation_info.target_cluster_info.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_operation_info.target_cluster_info.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_operation_info.target_cluster_info.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_operation_info.target_cluster_info.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_operation_info.target_cluster_info.connectivity_info.public_access.type #=> String
resp.cluster_operation_info.target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.connectivity_info.network_type #=> String, one of "IPV4", "DUAL"
resp.cluster_operation_info.target_cluster_info.zookeeper_access.enabled #=> Boolean
resp.cluster_operation_info.target_cluster_info.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_operation_info.target_cluster_info.broker_count_update_info.created_broker_ids #=> Array
resp.cluster_operation_info.target_cluster_info.broker_count_update_info.created_broker_ids[0] #=> Float
resp.cluster_operation_info.target_cluster_info.broker_count_update_info.deleted_broker_ids #=> Array
resp.cluster_operation_info.target_cluster_info.broker_count_update_info.deleted_broker_ids[0] #=> Float
resp.cluster_operation_info.target_cluster_info.rebalancing.status #=> String, one of "PAUSED", "ACTIVE"
resp.cluster_operation_info.vpc_connection_info.vpc_connection_arn #=> String
resp.cluster_operation_info.vpc_connection_info.owner #=> String
resp.cluster_operation_info.vpc_connection_info.user_identity.type #=> String, one of "AWSACCOUNT", "AWSSERVICE"
resp.cluster_operation_info.vpc_connection_info.user_identity.principal_id #=> String
resp.cluster_operation_info.vpc_connection_info.creation_time #=> Time

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_operation_arn (required, String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1596

def describe_cluster_operation(params = {}, options = {})
  req = build_request(:describe_cluster_operation, params)
  req.send_request(options)
end

#describe_cluster_operation_v2(params = {}) ⇒ Types::DescribeClusterOperationV2Response

Returns a description of the cluster operation specified by the ARN.

Examples:

Request syntax with placeholder values


resp = client.describe_cluster_operation_v2({
  cluster_operation_arn: "__string", # required
})

Response structure


resp.cluster_operation_info.cluster_arn #=> String
resp.cluster_operation_info.cluster_type #=> String, one of "PROVISIONED", "SERVERLESS"
resp.cluster_operation_info.start_time #=> Time
resp.cluster_operation_info.end_time #=> Time
resp.cluster_operation_info.operation_arn #=> String
resp.cluster_operation_info.operation_state #=> String
resp.cluster_operation_info.operation_type #=> String
resp.cluster_operation_info.provisioned.operation_steps #=> Array
resp.cluster_operation_info.provisioned.operation_steps[0].step_info.step_status #=> String
resp.cluster_operation_info.provisioned.operation_steps[0].step_name #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.broker_ebs_volume_info #=> Array
resp.cluster_operation_info.provisioned.source_cluster_info.broker_ebs_volume_info[0].kafka_broker_node_id #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.volume_throughput #=> Integer
resp.cluster_operation_info.provisioned.source_cluster_info.broker_ebs_volume_info[0].volume_size_gb #=> Integer
resp.cluster_operation_info.provisioned.source_cluster_info.configuration_info.arn #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.configuration_info.revision #=> Integer
resp.cluster_operation_info.provisioned.source_cluster_info.number_of_broker_nodes #=> Integer
resp.cluster_operation_info.provisioned.source_cluster_info.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_operation_info.provisioned.source_cluster_info.kafka_version #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.instance_type #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_operation_info.provisioned.source_cluster_info.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_operation_info.provisioned.source_cluster_info.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.connectivity_info.public_access.type #=> String
resp.cluster_operation_info.provisioned.source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.connectivity_info.network_type #=> String, one of "IPV4", "DUAL"
resp.cluster_operation_info.provisioned.source_cluster_info.zookeeper_access.enabled #=> Boolean
resp.cluster_operation_info.provisioned.source_cluster_info.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_operation_info.provisioned.source_cluster_info.broker_count_update_info.created_broker_ids #=> Array
resp.cluster_operation_info.provisioned.source_cluster_info.broker_count_update_info.created_broker_ids[0] #=> Float
resp.cluster_operation_info.provisioned.source_cluster_info.broker_count_update_info.deleted_broker_ids #=> Array
resp.cluster_operation_info.provisioned.source_cluster_info.broker_count_update_info.deleted_broker_ids[0] #=> Float
resp.cluster_operation_info.provisioned.source_cluster_info.rebalancing.status #=> String, one of "PAUSED", "ACTIVE"
resp.cluster_operation_info.provisioned.target_cluster_info.broker_ebs_volume_info #=> Array
resp.cluster_operation_info.provisioned.target_cluster_info.broker_ebs_volume_info[0].kafka_broker_node_id #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.volume_throughput #=> Integer
resp.cluster_operation_info.provisioned.target_cluster_info.broker_ebs_volume_info[0].volume_size_gb #=> Integer
resp.cluster_operation_info.provisioned.target_cluster_info.configuration_info.arn #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.configuration_info.revision #=> Integer
resp.cluster_operation_info.provisioned.target_cluster_info.number_of_broker_nodes #=> Integer
resp.cluster_operation_info.provisioned.target_cluster_info.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_operation_info.provisioned.target_cluster_info.kafka_version #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.instance_type #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_operation_info.provisioned.target_cluster_info.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_operation_info.provisioned.target_cluster_info.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.connectivity_info.public_access.type #=> String
resp.cluster_operation_info.provisioned.target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.connectivity_info.network_type #=> String, one of "IPV4", "DUAL"
resp.cluster_operation_info.provisioned.target_cluster_info.zookeeper_access.enabled #=> Boolean
resp.cluster_operation_info.provisioned.target_cluster_info.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_operation_info.provisioned.target_cluster_info.broker_count_update_info.created_broker_ids #=> Array
resp.cluster_operation_info.provisioned.target_cluster_info.broker_count_update_info.created_broker_ids[0] #=> Float
resp.cluster_operation_info.provisioned.target_cluster_info.broker_count_update_info.deleted_broker_ids #=> Array
resp.cluster_operation_info.provisioned.target_cluster_info.broker_count_update_info.deleted_broker_ids[0] #=> Float
resp.cluster_operation_info.provisioned.target_cluster_info.rebalancing.status #=> String, one of "PAUSED", "ACTIVE"
resp.cluster_operation_info.provisioned.vpc_connection_info.vpc_connection_arn #=> String
resp.cluster_operation_info.provisioned.vpc_connection_info.owner #=> String
resp.cluster_operation_info.provisioned.vpc_connection_info.user_identity.type #=> String, one of "AWSACCOUNT", "AWSSERVICE"
resp.cluster_operation_info.provisioned.vpc_connection_info.user_identity.principal_id #=> String
resp.cluster_operation_info.provisioned.vpc_connection_info.creation_time #=> Time
resp.cluster_operation_info.serverless.source_cluster_info.network_type #=> String, one of "IPV4", "DUAL"
resp.cluster_operation_info.serverless.target_cluster_info.network_type #=> String, one of "IPV4", "DUAL"
resp.cluster_operation_info.serverless.vpc_connection_info.creation_time #=> Time
resp.cluster_operation_info.serverless.vpc_connection_info.owner #=> String
resp.cluster_operation_info.serverless.vpc_connection_info.user_identity.type #=> String, one of "AWSACCOUNT", "AWSSERVICE"
resp.cluster_operation_info.serverless.vpc_connection_info.user_identity.principal_id #=> String
resp.cluster_operation_info.serverless.vpc_connection_info.vpc_connection_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_operation_arn (required, String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1726

def describe_cluster_operation_v2(params = {}, options = {})
  req = build_request(:describe_cluster_operation_v2, params)
  req.send_request(options)
end

#describe_cluster_v2(params = {}) ⇒ Types::DescribeClusterV2Response

Returns a description of the MSK cluster, either provisioned or serverless, whose Amazon Resource Name (ARN) is specified in the request.

Examples:

Request syntax with placeholder values


resp = client.describe_cluster_v2({
  cluster_arn: "__string", # required
})

Response structure


resp.cluster_info.active_operation_arn #=> String
resp.cluster_info.cluster_type #=> String, one of "PROVISIONED", "SERVERLESS"
resp.cluster_info.cluster_arn #=> String
resp.cluster_info.cluster_name #=> String
resp.cluster_info.creation_time #=> Time
resp.cluster_info.current_version #=> String
resp.cluster_info.state #=> String, one of "ACTIVE", "CREATING", "DELETING", "FAILED", "HEALING", "MAINTENANCE", "REBOOTING_BROKER", "UPDATING"
resp.cluster_info.state_info.code #=> String
resp.cluster_info.state_info.message #=> String
resp.cluster_info.tags #=> Hash
resp.cluster_info.tags["__string"] #=> String
resp.cluster_info.provisioned.broker_node_group_info.broker_az_distribution #=> String, one of "DEFAULT"
resp.cluster_info.provisioned.broker_node_group_info.client_subnets #=> Array
resp.cluster_info.provisioned.broker_node_group_info.client_subnets[0] #=> String
resp.cluster_info.provisioned.broker_node_group_info.instance_type #=> String
resp.cluster_info.provisioned.broker_node_group_info.security_groups #=> Array
resp.cluster_info.provisioned.broker_node_group_info.security_groups[0] #=> String
resp.cluster_info.provisioned.broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.enabled #=> Boolean
resp.cluster_info.provisioned.broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.volume_throughput #=> Integer
resp.cluster_info.provisioned.broker_node_group_info.storage_info.ebs_storage_info.volume_size #=> Integer
resp.cluster_info.provisioned.broker_node_group_info.connectivity_info.public_access.type #=> String
resp.cluster_info.provisioned.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info.provisioned.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info.provisioned.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_info.provisioned.broker_node_group_info.connectivity_info.network_type #=> String, one of "IPV4", "DUAL"
resp.cluster_info.provisioned.broker_node_group_info.zone_ids #=> Array
resp.cluster_info.provisioned.broker_node_group_info.zone_ids[0] #=> String
resp.cluster_info.provisioned.current_broker_software_info.configuration_arn #=> String
resp.cluster_info.provisioned.current_broker_software_info.configuration_revision #=> Integer
resp.cluster_info.provisioned.current_broker_software_info.kafka_version #=> String
resp.cluster_info.provisioned.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info.provisioned.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info.provisioned.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_info.provisioned.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_info.provisioned.client_authentication.tls.enabled #=> Boolean
resp.cluster_info.provisioned.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_info.provisioned.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_info.provisioned.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_info.provisioned.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_info.provisioned.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_info.provisioned.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_info.provisioned.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_info.provisioned.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_info.provisioned.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_info.provisioned.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_info.provisioned.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_info.provisioned.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_info.provisioned.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_info.provisioned.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_info.provisioned.number_of_broker_nodes #=> Integer
resp.cluster_info.provisioned.zookeeper_connect_string #=> String
resp.cluster_info.provisioned.zookeeper_connect_string_tls #=> String
resp.cluster_info.provisioned.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_info.provisioned.rebalancing.status #=> String, one of "PAUSED", "ACTIVE"
resp.cluster_info.provisioned.customer_action_status #=> String, one of "CRITICAL_ACTION_REQUIRED", "ACTION_RECOMMENDED", "NONE"
resp.cluster_info.serverless.vpc_configs #=> Array
resp.cluster_info.serverless.vpc_configs[0].subnet_ids #=> Array
resp.cluster_info.serverless.vpc_configs[0].subnet_ids[0] #=> String
resp.cluster_info.serverless.vpc_configs[0].security_group_ids #=> Array
resp.cluster_info.serverless.vpc_configs[0].security_group_ids[0] #=> String
resp.cluster_info.serverless.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info.serverless.connectivity_info.network_type #=> String, one of "IPV4", "DUAL"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)

    The Amazon Resource Name (ARN) that uniquely identifies the cluster.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1471

def describe_cluster_v2(params = {}, options = {})
  req = build_request(:describe_cluster_v2, params)
  req.send_request(options)
end

#describe_configuration(params = {}) ⇒ Types::DescribeConfigurationResponse

Returns a description of this MSK configuration.

Examples:

Request syntax with placeholder values


resp = client.describe_configuration({
  arn: "__string", # required
})

Response structure


resp.arn #=> String
resp.creation_time #=> Time
resp.description #=> String
resp.kafka_versions #=> Array
resp.kafka_versions[0] #=> String
resp.latest_revision.creation_time #=> Time
resp.latest_revision.description #=> String
resp.latest_revision.revision #=> Integer
resp.name #=> String
resp.state #=> String, one of "ACTIVE", "DELETING", "DELETE_FAILED"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :arn (required, String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1768

def describe_configuration(params = {}, options = {})
  req = build_request(:describe_configuration, params)
  req.send_request(options)
end

#describe_configuration_revision(params = {}) ⇒ Types::DescribeConfigurationRevisionResponse

Returns a description of this revision of the configuration.

Examples:

Request syntax with placeholder values


resp = client.describe_configuration_revision({
  arn: "__string", # required
  revision: 1, # required
})

Response structure


resp.arn #=> String
resp.creation_time #=> Time
resp.description #=> String
resp.revision #=> Integer
resp.server_properties #=> String
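
The `server_properties` field above carries the Kafka configuration for this revision. Assuming the SDK returns it as plain `key=value` lines (the underlying blob is decoded for you), a hypothetical helper can turn it into a Hash for inspection; the property values below are sample data, not output from a real cluster:

```ruby
# Parse "key=value" server properties into a Hash, skipping blank lines
# and "#" comments. The input format is an assumption based on standard
# Kafka server.properties files.
def parse_server_properties(text)
  text.each_line.with_object({}) do |line, props|
    line = line.strip
    next if line.empty? || line.start_with?("#")
    key, _, value = line.partition("=")
    props[key.strip] = value.strip
  end
end

# With a real client this would be fed from the response, e.g.:
#   resp = client.describe_configuration_revision(arn: arn, revision: 1)
#   parse_server_properties(resp.server_properties)
props = parse_server_properties(<<~PROPS)
  # broker defaults
  auto.create.topics.enable=false
  default.replication.factor=3
PROPS
props["default.replication.factor"] # => "3"
```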

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :arn (required, String)
  • :revision (required, Integer)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1806

def describe_configuration_revision(params = {}, options = {})
  req = build_request(:describe_configuration_revision, params)
  req.send_request(options)
end

#describe_replicator(params = {}) ⇒ Types::DescribeReplicatorResponse

Returns a description of the Kafka Replicator whose Amazon Resource Name (ARN) is specified in the request.

Examples:

Request syntax with placeholder values


resp = client.describe_replicator({
  replicator_arn: "__string", # required
})

Response structure


resp.creation_time #=> Time
resp.current_version #=> String
resp.is_replicator_reference #=> Boolean
resp.kafka_clusters #=> Array
resp.kafka_clusters[0].amazon_msk_cluster.msk_cluster_arn #=> String
resp.kafka_clusters[0].apache_kafka_cluster.apache_kafka_cluster_id #=> String
resp.kafka_clusters[0].apache_kafka_cluster.bootstrap_broker_string #=> String
resp.kafka_clusters[0].kafka_cluster_alias #=> String
resp.kafka_clusters[0].vpc_config.security_group_ids #=> Array
resp.kafka_clusters[0].vpc_config.security_group_ids[0] #=> String
resp.kafka_clusters[0].vpc_config.subnet_ids #=> Array
resp.kafka_clusters[0].vpc_config.subnet_ids[0] #=> String
resp.kafka_clusters[0].client_authentication.sasl_scram.mechanism #=> String, one of "SHA256", "SHA512"
resp.kafka_clusters[0].client_authentication.sasl_scram.secret_arn #=> String
resp.kafka_clusters[0].encryption_in_transit.encryption_type #=> String, one of "TLS"
resp.kafka_clusters[0].encryption_in_transit.root_ca_certificate #=> String
resp.log_delivery.replicator_log_delivery.cloud_watch_logs.enabled #=> Boolean
resp.log_delivery.replicator_log_delivery.cloud_watch_logs.log_group #=> String
resp.log_delivery.replicator_log_delivery.firehose.delivery_stream #=> String
resp.log_delivery.replicator_log_delivery.firehose.enabled #=> Boolean
resp.log_delivery.replicator_log_delivery.s3.bucket #=> String
resp.log_delivery.replicator_log_delivery.s3.enabled #=> Boolean
resp.log_delivery.replicator_log_delivery.s3.prefix #=> String
resp.replication_info_list #=> Array
resp.replication_info_list[0].consumer_group_replication.consumer_groups_to_exclude #=> Array
resp.replication_info_list[0].consumer_group_replication.consumer_groups_to_exclude[0] #=> String
resp.replication_info_list[0].consumer_group_replication.consumer_groups_to_replicate #=> Array
resp.replication_info_list[0].consumer_group_replication.consumer_groups_to_replicate[0] #=> String
resp.replication_info_list[0].consumer_group_replication.detect_and_copy_new_consumer_groups #=> Boolean
resp.replication_info_list[0].consumer_group_replication.synchronise_consumer_group_offsets #=> Boolean
resp.replication_info_list[0].consumer_group_replication.consumer_group_offset_sync_mode #=> String, one of "LEGACY", "ENHANCED"
resp.replication_info_list[0].source_kafka_cluster_alias #=> String
resp.replication_info_list[0].target_compression_type #=> String, one of "NONE", "GZIP", "SNAPPY", "LZ4", "ZSTD"
resp.replication_info_list[0].target_kafka_cluster_alias #=> String
resp.replication_info_list[0].topic_replication.copy_access_control_lists_for_topics #=> Boolean
resp.replication_info_list[0].topic_replication.copy_topic_configurations #=> Boolean
resp.replication_info_list[0].topic_replication.detect_and_copy_new_topics #=> Boolean
resp.replication_info_list[0].topic_replication.starting_position.type #=> String, one of "LATEST", "EARLIEST"
resp.replication_info_list[0].topic_replication.topic_name_configuration.type #=> String, one of "PREFIXED_WITH_SOURCE_CLUSTER_ALIAS", "IDENTICAL"
resp.replication_info_list[0].topic_replication.topics_to_exclude #=> Array
resp.replication_info_list[0].topic_replication.topics_to_exclude[0] #=> String
resp.replication_info_list[0].topic_replication.topics_to_replicate #=> Array
resp.replication_info_list[0].topic_replication.topics_to_replicate[0] #=> String
resp.replicator_arn #=> String
resp.replicator_description #=> String
resp.replicator_name #=> String
resp.replicator_resource_arn #=> String
resp.replicator_state #=> String, one of "RUNNING", "CREATING", "UPDATING", "DELETING", "FAILED"
resp.service_execution_role_arn #=> String
resp.state_info.code #=> String
resp.state_info.message #=> String
resp.tags #=> Hash
resp.tags["__string"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :replicator_arn (required, String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1899

def describe_replicator(params = {}, options = {})
  req = build_request(:describe_replicator, params)
  req.send_request(options)
end

#describe_topic(params = {}) ⇒ Types::DescribeTopicResponse

Returns details of the specified topic on an MSK cluster.

Examples:

Request syntax with placeholder values


resp = client.describe_topic({
  cluster_arn: "__string", # required
  topic_name: "__string", # required
})

Response structure


resp.topic_arn #=> String
resp.topic_name #=> String
resp.replication_factor #=> Integer
resp.partition_count #=> Integer
resp.configs #=> String
resp.status #=> String, one of "CREATING", "UPDATING", "DELETING", "ACTIVE"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :topic_name (required, String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1939

def describe_topic(params = {}, options = {})
  req = build_request(:describe_topic, params)
  req.send_request(options)
end

#describe_topic_partitions(params = {}) ⇒ Types::DescribeTopicPartitionsResponse

Returns partition details of the specified topic on an MSK cluster.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.describe_topic_partitions({
  cluster_arn: "__string", # required
  topic_name: "__string", # required
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.partitions #=> Array
resp.partitions[0].partition #=> Integer
resp.partitions[0].leader #=> Integer
resp.partitions[0].replicas #=> Array
resp.partitions[0].replicas[0] #=> Integer
resp.partitions[0].isr #=> Array
resp.partitions[0].isr[0] #=> Integer
resp.next_token #=> String
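
Each entry in the `partitions` array above pairs a partition's replica set with its in-sync replicas (`isr`), which makes it easy to spot under-replicated partitions. A minimal sketch, using a local Struct shaped like the documented response; the real client call is shown only in a comment:

```ruby
# A stand-in for the SDK's partition entries, which respond to
# :partition, :replicas, and :isr per the response structure above.
Partition = Struct.new(:partition, :replicas, :isr)

# Return the ids of partitions whose in-sync replica set is smaller
# than the full replica set.
def under_replicated(partitions)
  partitions.select { |p| p.isr.size < p.replicas.size }.map(&:partition)
end

# With a real client the pageable response enumerates transparently, e.g.:
#   client.describe_topic_partitions(cluster_arn: arn, topic_name: "orders")
#         .each { |page| under_replicated(page.partitions) }
sample = [
  Partition.new(0, [1, 2, 3], [1, 2, 3]), # fully in sync
  Partition.new(1, [1, 2, 3], [1, 2])     # one replica lagging
]
under_replicated(sample) # => [1]
```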

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :topic_name (required, String)
  • :max_results (Integer)
  • :next_token (String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 1985

def describe_topic_partitions(params = {}, options = {})
  req = build_request(:describe_topic_partitions, params)
  req.send_request(options)
end

#describe_vpc_connection(params = {}) ⇒ Types::DescribeVpcConnectionResponse

Displays information about the specified Amazon MSK VPC connection.

Examples:

Request syntax with placeholder values


resp = client.describe_vpc_connection({
  arn: "__string", # required
})

Response structure


resp.vpc_connection_arn #=> String
resp.target_cluster_arn #=> String
resp.state #=> String, one of "CREATING", "AVAILABLE", "INACTIVE", "DEACTIVATING", "DELETING", "FAILED", "REJECTED", "REJECTING"
resp.authentication #=> String
resp.vpc_id #=> String
resp.subnets #=> Array
resp.subnets[0] #=> String
resp.security_groups #=> Array
resp.security_groups[0] #=> String
resp.creation_time #=> Time
resp.tags #=> Hash
resp.tags["__string"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :arn (required, String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2031

def describe_vpc_connection(params = {}, options = {})
  req = build_request(:describe_vpc_connection, params)
  req.send_request(options)
end

#get_bootstrap_brokers(params = {}) ⇒ Types::GetBootstrapBrokersResponse

A list of brokers that a client application can use to bootstrap. This list doesn't necessarily include all of the brokers in the cluster. If you don't know the ARN of your cluster, you can use the `ListClusters` operation to get the ARNs of all the clusters in this account and Region.

Examples:

Request syntax with placeholder values


resp = client.get_bootstrap_brokers({
  cluster_arn: "__string", # required
})

Response structure


resp.bootstrap_broker_string #=> String
resp.bootstrap_broker_string_public_sasl_iam #=> String
resp.bootstrap_broker_string_public_sasl_scram #=> String
resp.bootstrap_broker_string_public_tls #=> String
resp.bootstrap_broker_string_tls #=> String
resp.bootstrap_broker_string_sasl_scram #=> String
resp.bootstrap_broker_string_sasl_iam #=> String
resp.bootstrap_broker_string_vpc_connectivity_tls #=> String
resp.bootstrap_broker_string_vpc_connectivity_sasl_scram #=> String
resp.bootstrap_broker_string_vpc_connectivity_sasl_iam #=> String
resp.bootstrap_broker_string_ipv_6 #=> String
resp.bootstrap_broker_string_tls_ipv_6 #=> String
resp.bootstrap_broker_string_sasl_scram_ipv_6 #=> String
resp.bootstrap_broker_string_sasl_iam_ipv_6 #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2126

def get_bootstrap_brokers(params = {}, options = {})
  req = build_request(:get_bootstrap_brokers, params)
  req.send_request(options)
end

#get_cluster_policy(params = {}) ⇒ Types::GetClusterPolicyResponse

Retrieves the contents of the specified MSK cluster policy.

Examples:

Request syntax with placeholder values


resp = client.get_cluster_policy({
  cluster_arn: "__string", # required
})

Response structure


resp.current_version #=> String
resp.policy #=> String
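
The `policy` field above is an IAM-style resource policy returned as a JSON document in string form; parsing it makes the statements inspectable. A sketch with a sample policy body (the statement content is illustrative, not a recommended policy):

```ruby
require "json"

# Collect every Action named across the policy's statements.
# Array() normalizes the case where Action is a single string.
def policy_actions(policy_json)
  JSON.parse(policy_json)
      .fetch("Statement", [])
      .flat_map { |stmt| Array(stmt["Action"]) }
end

# With a real client:
#   resp = client.get_cluster_policy(cluster_arn: arn)
#   policy_actions(resp.policy)
sample_policy = {
  "Version"   => "2012-10-17",
  "Statement" => [{ "Effect" => "Allow", "Action" => ["kafka:CreateVpcConnection"] }]
}.to_json
policy_actions(sample_policy) # => ["kafka:CreateVpcConnection"]
```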

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3037

def get_cluster_policy(params = {}, options = {})
  req = build_request(:get_cluster_policy, params)
  req.send_request(options)
end

#get_compatible_kafka_versions(params = {}) ⇒ Types::GetCompatibleKafkaVersionsResponse

Gets the Apache Kafka versions to which you can update the MSK cluster.

Examples:

Request syntax with placeholder values


resp = client.get_compatible_kafka_versions({
  cluster_arn: "__string",
})

Response structure


resp.compatible_kafka_versions #=> Array
resp.compatible_kafka_versions[0].source_version #=> String
resp.compatible_kafka_versions[0].target_versions #=> Array
resp.compatible_kafka_versions[0].target_versions[0] #=> String
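
Each `compatible_kafka_versions` entry pairs a `source_version` with the `target_versions` it can be upgraded to. When choosing an upgrade target, versions should be compared numerically rather than lexically (`"3.10.0"` sorts after `"3.4.0"`). A sketch using a local Struct shaped like the documented response; the version numbers are sample data:

```ruby
# A stand-in for the SDK's compatible-version entries, which respond to
# :source_version and :target_versions per the response structure above.
CompatibleVersion = Struct.new(:source_version, :target_versions)

# Pick the newest valid upgrade target for a given source version,
# using Gem::Version for numeric (not lexical) comparison.
def newest_upgrade_target(compatible, source)
  entry = compatible.find { |c| c.source_version == source }
  return nil unless entry
  entry.target_versions.max_by { |v| Gem::Version.new(v) }
end

# With a real client:
#   resp = client.get_compatible_kafka_versions(cluster_arn: arn)
#   newest_upgrade_target(resp.compatible_kafka_versions, current_version)
sample = [CompatibleVersion.new("2.8.1", ["3.3.1", "3.10.0", "3.4.0"])]
newest_upgrade_target(sample, "2.8.1") # => "3.10.0"
```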

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2157

def get_compatible_kafka_versions(params = {}, options = {})
  req = build_request(:get_compatible_kafka_versions, params)
  req.send_request(options)
end

#list_client_vpc_connections(params = {}) ⇒ Types::ListClientVpcConnectionsResponse

Displays a list of client VPC connections.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_client_vpc_connections({
  cluster_arn: "__string", # required
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.next_token #=> String
resp.client_vpc_connections #=> Array
resp.client_vpc_connections[0].authentication #=> String
resp.client_vpc_connections[0].creation_time #=> Time
resp.client_vpc_connections[0].state #=> String, one of "CREATING", "AVAILABLE", "INACTIVE", "DEACTIVATING", "DELETING", "FAILED", "REJECTED", "REJECTING"
resp.client_vpc_connections[0].vpc_connection_arn #=> String
resp.client_vpc_connections[0].owner #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :max_results (Integer)
  • :next_token (String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2879

def list_client_vpc_connections(params = {}, options = {})
  req = build_request(:list_client_vpc_connections, params)
  req.send_request(options)
end

#list_cluster_operations(params = {}) ⇒ Types::ListClusterOperationsResponse

Returns a list of all the operations that have been performed on the specified MSK cluster.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_cluster_operations({
  cluster_arn: "__string", # required
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.cluster_operation_info_list #=> Array
resp.cluster_operation_info_list[0].client_request_id #=> String
resp.cluster_operation_info_list[0].cluster_arn #=> String
resp.cluster_operation_info_list[0].creation_time #=> Time
resp.cluster_operation_info_list[0].end_time #=> Time
resp.cluster_operation_info_list[0].error_info.error_code #=> String
resp.cluster_operation_info_list[0].error_info.error_string #=> String
resp.cluster_operation_info_list[0].operation_steps #=> Array
resp.cluster_operation_info_list[0].operation_steps[0].step_info.step_status #=> String
resp.cluster_operation_info_list[0].operation_steps[0].step_name #=> String
resp.cluster_operation_info_list[0].operation_arn #=> String
resp.cluster_operation_info_list[0].operation_state #=> String
resp.cluster_operation_info_list[0].operation_type #=> String
resp.cluster_operation_info_list[0].source_cluster_info.broker_ebs_volume_info #=> Array
resp.cluster_operation_info_list[0].source_cluster_info.broker_ebs_volume_info[0].kafka_broker_node_id #=> String
resp.cluster_operation_info_list[0].source_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.volume_throughput #=> Integer
resp.cluster_operation_info_list[0].source_cluster_info.broker_ebs_volume_info[0].volume_size_gb #=> Integer
resp.cluster_operation_info_list[0].source_cluster_info.configuration_info.arn #=> String
resp.cluster_operation_info_list[0].source_cluster_info.configuration_info.revision #=> Integer
resp.cluster_operation_info_list[0].source_cluster_info.number_of_broker_nodes #=> Integer
resp.cluster_operation_info_list[0].source_cluster_info.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_operation_info_list[0].source_cluster_info.kafka_version #=> String
resp.cluster_operation_info_list[0].source_cluster_info.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_operation_info_list[0].source_cluster_info.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_operation_info_list[0].source_cluster_info.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_operation_info_list[0].source_cluster_info.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_operation_info_list[0].source_cluster_info.instance_type #=> String
resp.cluster_operation_info_list[0].source_cluster_info.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_operation_info_list[0].source_cluster_info.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_operation_info_list[0].source_cluster_info.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_operation_info_list[0].source_cluster_info.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_operation_info_list[0].source_cluster_info.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.connectivity_info.public_access.type #=> String
resp.cluster_operation_info_list[0].source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.connectivity_info.network_type #=> String, one of "IPV4", "DUAL"
resp.cluster_operation_info_list[0].source_cluster_info.zookeeper_access.enabled #=> Boolean
resp.cluster_operation_info_list[0].source_cluster_info.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_operation_info_list[0].source_cluster_info.broker_count_update_info.created_broker_ids #=> Array
resp.cluster_operation_info_list[0].source_cluster_info.broker_count_update_info.created_broker_ids[0] #=> Float
resp.cluster_operation_info_list[0].source_cluster_info.broker_count_update_info.deleted_broker_ids #=> Array
resp.cluster_operation_info_list[0].source_cluster_info.broker_count_update_info.deleted_broker_ids[0] #=> Float
resp.cluster_operation_info_list[0].source_cluster_info.rebalancing.status #=> String, one of "PAUSED", "ACTIVE"
resp.cluster_operation_info_list[0].target_cluster_info.broker_ebs_volume_info #=> Array
resp.cluster_operation_info_list[0].target_cluster_info.broker_ebs_volume_info[0].kafka_broker_node_id #=> String
resp.cluster_operation_info_list[0].target_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.broker_ebs_volume_info[0].provisioned_throughput.volume_throughput #=> Integer
resp.cluster_operation_info_list[0].target_cluster_info.broker_ebs_volume_info[0].volume_size_gb #=> Integer
resp.cluster_operation_info_list[0].target_cluster_info.configuration_info.arn #=> String
resp.cluster_operation_info_list[0].target_cluster_info.configuration_info.revision #=> Integer
resp.cluster_operation_info_list[0].target_cluster_info.number_of_broker_nodes #=> Integer
resp.cluster_operation_info_list[0].target_cluster_info.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_operation_info_list[0].target_cluster_info.kafka_version #=> String
resp.cluster_operation_info_list[0].target_cluster_info.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_operation_info_list[0].target_cluster_info.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_operation_info_list[0].target_cluster_info.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_operation_info_list[0].target_cluster_info.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_operation_info_list[0].target_cluster_info.instance_type #=> String
resp.cluster_operation_info_list[0].target_cluster_info.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_operation_info_list[0].target_cluster_info.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_operation_info_list[0].target_cluster_info.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_operation_info_list[0].target_cluster_info.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_operation_info_list[0].target_cluster_info.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.connectivity_info.public_access.type #=> String
resp.cluster_operation_info_list[0].target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.connectivity_info.network_type #=> String, one of "IPV4", "DUAL"
resp.cluster_operation_info_list[0].target_cluster_info.zookeeper_access.enabled #=> Boolean
resp.cluster_operation_info_list[0].target_cluster_info.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_operation_info_list[0].target_cluster_info.broker_count_update_info.created_broker_ids #=> Array
resp.cluster_operation_info_list[0].target_cluster_info.broker_count_update_info.created_broker_ids[0] #=> Float
resp.cluster_operation_info_list[0].target_cluster_info.broker_count_update_info.deleted_broker_ids #=> Array
resp.cluster_operation_info_list[0].target_cluster_info.broker_count_update_info.deleted_broker_ids[0] #=> Float
resp.cluster_operation_info_list[0].target_cluster_info.rebalancing.status #=> String, one of "PAUSED", "ACTIVE"
resp.cluster_operation_info_list[0].vpc_connection_info.vpc_connection_arn #=> String
resp.cluster_operation_info_list[0].vpc_connection_info.owner #=> String
resp.cluster_operation_info_list[0].vpc_connection_info.user_identity.type #=> String, one of "AWSACCOUNT", "AWSSERVICE"
resp.cluster_operation_info_list[0].vpc_connection_info.user_identity.principal_id #=> String
resp.cluster_operation_info_list[0].vpc_connection_info.creation_time #=> Time
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :max_results (Integer)
  • :next_token (String)

Returns:

  • (Types::ListClusterOperationsResponse)

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2294

def list_cluster_operations(params = {}, options = {})
  req = build_request(:list_cluster_operations, params)
  req.send_request(options)
end

#list_cluster_operations_v2(params = {}) ⇒ Types::ListClusterOperationsV2Response

Returns a list of all the operations that have been performed on the specified MSK cluster.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_cluster_operations_v2({
  cluster_arn: "__string", # required
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.cluster_operation_info_list #=> Array
resp.cluster_operation_info_list[0].cluster_arn #=> String
resp.cluster_operation_info_list[0].cluster_type #=> String, one of "PROVISIONED", "SERVERLESS"
resp.cluster_operation_info_list[0].start_time #=> Time
resp.cluster_operation_info_list[0].end_time #=> Time
resp.cluster_operation_info_list[0].operation_arn #=> String
resp.cluster_operation_info_list[0].operation_state #=> String
resp.cluster_operation_info_list[0].operation_type #=> String
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :max_results (Integer)
  • :next_token (String)

Returns:

  • (Types::ListClusterOperationsV2Response)

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2339

def list_cluster_operations_v2(params = {}, options = {})
  req = build_request(:list_cluster_operations_v2, params)
  req.send_request(options)
end

#list_clusters(params = {}) ⇒ Types::ListClustersResponse

Returns a list of all the MSK clusters in the current Region.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_clusters({
  cluster_name_filter: "__string",
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.cluster_info_list #=> Array
resp.cluster_info_list[0].active_operation_arn #=> String
resp.cluster_info_list[0].broker_node_group_info.broker_az_distribution #=> String, one of "DEFAULT"
resp.cluster_info_list[0].broker_node_group_info.client_subnets #=> Array
resp.cluster_info_list[0].broker_node_group_info.client_subnets[0] #=> String
resp.cluster_info_list[0].broker_node_group_info.instance_type #=> String
resp.cluster_info_list[0].broker_node_group_info.security_groups #=> Array
resp.cluster_info_list[0].broker_node_group_info.security_groups[0] #=> String
resp.cluster_info_list[0].broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.enabled #=> Boolean
resp.cluster_info_list[0].broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.volume_throughput #=> Integer
resp.cluster_info_list[0].broker_node_group_info.storage_info.ebs_storage_info.volume_size #=> Integer
resp.cluster_info_list[0].broker_node_group_info.connectivity_info.public_access.type #=> String
resp.cluster_info_list[0].broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info_list[0].broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info_list[0].broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_info_list[0].broker_node_group_info.connectivity_info.network_type #=> String, one of "IPV4", "DUAL"
resp.cluster_info_list[0].broker_node_group_info.zone_ids #=> Array
resp.cluster_info_list[0].broker_node_group_info.zone_ids[0] #=> String
resp.cluster_info_list[0].client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info_list[0].client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info_list[0].client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_info_list[0].client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_info_list[0].client_authentication.tls.enabled #=> Boolean
resp.cluster_info_list[0].client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_info_list[0].cluster_arn #=> String
resp.cluster_info_list[0].cluster_name #=> String
resp.cluster_info_list[0].creation_time #=> Time
resp.cluster_info_list[0].current_broker_software_info.configuration_arn #=> String
resp.cluster_info_list[0].current_broker_software_info.configuration_revision #=> Integer
resp.cluster_info_list[0].current_broker_software_info.kafka_version #=> String
resp.cluster_info_list[0].logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_info_list[0].logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_info_list[0].logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_info_list[0].logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_info_list[0].logging_info.broker_logs.s3.bucket #=> String
resp.cluster_info_list[0].logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_info_list[0].logging_info.broker_logs.s3.prefix #=> String
resp.cluster_info_list[0].current_version #=> String
resp.cluster_info_list[0].encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_info_list[0].encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_info_list[0].encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_info_list[0].enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_info_list[0].number_of_broker_nodes #=> Integer
resp.cluster_info_list[0].open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_info_list[0].open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_info_list[0].state #=> String, one of "ACTIVE", "CREATING", "DELETING", "FAILED", "HEALING", "MAINTENANCE", "REBOOTING_BROKER", "UPDATING"
resp.cluster_info_list[0].state_info.code #=> String
resp.cluster_info_list[0].state_info.message #=> String
resp.cluster_info_list[0].tags #=> Hash
resp.cluster_info_list[0].tags["__string"] #=> String
resp.cluster_info_list[0].zookeeper_connect_string #=> String
resp.cluster_info_list[0].zookeeper_connect_string_tls #=> String
resp.cluster_info_list[0].storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_info_list[0].rebalancing.status #=> String, one of "PAUSED", "ACTIVE"
resp.cluster_info_list[0].customer_action_status #=> String, one of "CRITICAL_ACTION_REQUIRED", "ACTION_RECOMMENDED", "NONE"
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_name_filter (String)
  • :max_results (Integer)
  • :next_token (String)

Returns:

  • (Types::ListClustersResponse)

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2430

def list_clusters(params = {}, options = {})
  req = build_request(:list_clusters, params)
  req.send_request(options)
end

#list_clusters_v2(params = {}) ⇒ Types::ListClustersV2Response

Returns a list of all the MSK clusters in the current Region.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_clusters_v2({
  cluster_name_filter: "__string",
  cluster_type_filter: "__string",
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.cluster_info_list #=> Array
resp.cluster_info_list[0].active_operation_arn #=> String
resp.cluster_info_list[0].cluster_type #=> String, one of "PROVISIONED", "SERVERLESS"
resp.cluster_info_list[0].cluster_arn #=> String
resp.cluster_info_list[0].cluster_name #=> String
resp.cluster_info_list[0].creation_time #=> Time
resp.cluster_info_list[0].current_version #=> String
resp.cluster_info_list[0].state #=> String, one of "ACTIVE", "CREATING", "DELETING", "FAILED", "HEALING", "MAINTENANCE", "REBOOTING_BROKER", "UPDATING"
resp.cluster_info_list[0].state_info.code #=> String
resp.cluster_info_list[0].state_info.message #=> String
resp.cluster_info_list[0].tags #=> Hash
resp.cluster_info_list[0].tags["__string"] #=> String
resp.cluster_info_list[0].provisioned.broker_node_group_info.broker_az_distribution #=> String, one of "DEFAULT"
resp.cluster_info_list[0].provisioned.broker_node_group_info.client_subnets #=> Array
resp.cluster_info_list[0].provisioned.broker_node_group_info.client_subnets[0] #=> String
resp.cluster_info_list[0].provisioned.broker_node_group_info.instance_type #=> String
resp.cluster_info_list[0].provisioned.broker_node_group_info.security_groups #=> Array
resp.cluster_info_list[0].provisioned.broker_node_group_info.security_groups[0] #=> String
resp.cluster_info_list[0].provisioned.broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.broker_node_group_info.storage_info.ebs_storage_info.provisioned_throughput.volume_throughput #=> Integer
resp.cluster_info_list[0].provisioned.broker_node_group_info.storage_info.ebs_storage_info.volume_size #=> Integer
resp.cluster_info_list[0].provisioned.broker_node_group_info.connectivity_info.public_access.type #=> String
resp.cluster_info_list[0].provisioned.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.broker_node_group_info.connectivity_info.vpc_connectivity.client_authentication.tls.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.broker_node_group_info.connectivity_info.network_type #=> String, one of "IPV4", "DUAL"
resp.cluster_info_list[0].provisioned.broker_node_group_info.zone_ids #=> Array
resp.cluster_info_list[0].provisioned.broker_node_group_info.zone_ids[0] #=> String
resp.cluster_info_list[0].provisioned.current_broker_software_info.configuration_arn #=> String
resp.cluster_info_list[0].provisioned.current_broker_software_info.configuration_revision #=> Integer
resp.cluster_info_list[0].provisioned.current_broker_software_info.kafka_version #=> String
resp.cluster_info_list[0].provisioned.client_authentication.sasl.scram.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.client_authentication.tls.certificate_authority_arn_list #=> Array
resp.cluster_info_list[0].provisioned.client_authentication.tls.certificate_authority_arn_list[0] #=> String
resp.cluster_info_list[0].provisioned.client_authentication.tls.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.client_authentication.unauthenticated.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.encryption_info.encryption_at_rest.data_volume_kms_key_id #=> String
resp.cluster_info_list[0].provisioned.encryption_info.encryption_in_transit.client_broker #=> String, one of "TLS", "TLS_PLAINTEXT", "PLAINTEXT"
resp.cluster_info_list[0].provisioned.encryption_info.encryption_in_transit.in_cluster #=> Boolean
resp.cluster_info_list[0].provisioned.enhanced_monitoring #=> String, one of "DEFAULT", "PER_BROKER", "PER_TOPIC_PER_BROKER", "PER_TOPIC_PER_PARTITION"
resp.cluster_info_list[0].provisioned.open_monitoring.prometheus.jmx_exporter.enabled_in_broker #=> Boolean
resp.cluster_info_list[0].provisioned.open_monitoring.prometheus.node_exporter.enabled_in_broker #=> Boolean
resp.cluster_info_list[0].provisioned.logging_info.broker_logs.cloud_watch_logs.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.logging_info.broker_logs.cloud_watch_logs.log_group #=> String
resp.cluster_info_list[0].provisioned.logging_info.broker_logs.firehose.delivery_stream #=> String
resp.cluster_info_list[0].provisioned.logging_info.broker_logs.firehose.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.logging_info.broker_logs.s3.bucket #=> String
resp.cluster_info_list[0].provisioned.logging_info.broker_logs.s3.enabled #=> Boolean
resp.cluster_info_list[0].provisioned.logging_info.broker_logs.s3.prefix #=> String
resp.cluster_info_list[0].provisioned.number_of_broker_nodes #=> Integer
resp.cluster_info_list[0].provisioned.zookeeper_connect_string #=> String
resp.cluster_info_list[0].provisioned.zookeeper_connect_string_tls #=> String
resp.cluster_info_list[0].provisioned.storage_mode #=> String, one of "LOCAL", "TIERED"
resp.cluster_info_list[0].provisioned.rebalancing.status #=> String, one of "PAUSED", "ACTIVE"
resp.cluster_info_list[0].provisioned.customer_action_status #=> String, one of "CRITICAL_ACTION_REQUIRED", "ACTION_RECOMMENDED", "NONE"
resp.cluster_info_list[0].serverless.vpc_configs #=> Array
resp.cluster_info_list[0].serverless.vpc_configs[0].subnet_ids #=> Array
resp.cluster_info_list[0].serverless.vpc_configs[0].subnet_ids[0] #=> String
resp.cluster_info_list[0].serverless.vpc_configs[0].security_group_ids #=> Array
resp.cluster_info_list[0].serverless.vpc_configs[0].security_group_ids[0] #=> String
resp.cluster_info_list[0].serverless.client_authentication.sasl.iam.enabled #=> Boolean
resp.cluster_info_list[0].serverless.connectivity_info.network_type #=> String, one of "IPV4", "DUAL"
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_name_filter (String)

    Specify a prefix of the names of the clusters that you want to list. The service lists all the clusters whose names start with this prefix.

  • :cluster_type_filter (String)

    Specify either PROVISIONED or SERVERLESS.

  • :max_results (Integer)

    The maximum number of results to return in the response. If there are more results, the response includes a NextToken parameter.

  • :next_token (String)

    The paginated results marker. When the result of the operation is truncated, the call returns NextToken in the response. To get the next batch, provide this token in your next request.

Returns:

  • (Types::ListClustersV2Response)

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2540

def list_clusters_v2(params = {}, options = {})
  req = build_request(:list_clusters_v2, params)
  req.send_request(options)
end
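The NextToken contract described above (call the operation, read `next_token` from the response, and pass it back until it comes back `nil`) can be sketched without a live cluster. The `FakeKafkaClient` below is a hypothetical stand-in for `Aws::Kafka::Client` that serves two fixed pages; the loop itself is the same one you would run against the real client (or you can let the SDK's pageable response enumerate pages for you).

```ruby
require 'ostruct'

# Hypothetical stand-in for Aws::Kafka::Client#list_clusters_v2:
# serves two fixed pages, keyed by the next_token the caller passes back.
class FakeKafkaClient
  PAGES = {
    nil      => OpenStruct.new(cluster_info_list: %w[alpha beta], next_token: 'page-2'),
    'page-2' => OpenStruct.new(cluster_info_list: %w[gamma],      next_token: nil),
  }.freeze

  def list_clusters_v2(params = {})
    PAGES.fetch(params[:next_token])
  end
end

# The NextToken loop: keep calling until the response omits the token.
client = FakeKafkaClient.new
names  = []
token  = nil
loop do
  resp  = client.list_clusters_v2(cluster_type_filter: 'PROVISIONED', next_token: token)
  names.concat(resp.cluster_info_list)
  token = resp.next_token
  break if token.nil?
end

puts names.inspect  # => ["alpha", "beta", "gamma"]
```

With the real client the SDK also exposes this as auto-pagination (`resp.each_page`), so the manual loop is only needed when you want to checkpoint the token between calls.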

#list_configuration_revisions(params = {}) ⇒ Types::ListConfigurationRevisionsResponse

Returns a list of all the revisions of an MSK configuration.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_configuration_revisions({
  arn: "__string", # required
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.next_token #=> String
resp.revisions #=> Array
resp.revisions[0].creation_time #=> Time
resp.revisions[0].description #=> String
resp.revisions[0].revision #=> Integer

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :arn (required, String)
  • :max_results (Integer)
  • :next_token (String)

Returns:

  • (Types::ListConfigurationRevisionsResponse)

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2580

def list_configuration_revisions(params = {}, options = {})
  req = build_request(:list_configuration_revisions, params)
  req.send_request(options)
end

#list_configurations(params = {}) ⇒ Types::ListConfigurationsResponse

Returns a list of all the MSK configurations in this Region.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_configurations({
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.configurations #=> Array
resp.configurations[0].arn #=> String
resp.configurations[0].creation_time #=> Time
resp.configurations[0].description #=> String
resp.configurations[0].kafka_versions #=> Array
resp.configurations[0].kafka_versions[0] #=> String
resp.configurations[0].latest_revision.creation_time #=> Time
resp.configurations[0].latest_revision.description #=> String
resp.configurations[0].latest_revision.revision #=> Integer
resp.configurations[0].name #=> String
resp.configurations[0].state #=> String, one of "ACTIVE", "DELETING", "DELETE_FAILED"
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)
  • :next_token (String)

Returns:

  • (Types::ListConfigurationsResponse)

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2624

def list_configurations(params = {}, options = {})
  req = build_request(:list_configurations, params)
  req.send_request(options)
end

#list_kafka_versions(params = {}) ⇒ Types::ListKafkaVersionsResponse

Returns a list of Apache Kafka versions.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_kafka_versions({
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.kafka_versions #=> Array
resp.kafka_versions[0].version #=> String
resp.kafka_versions[0].status #=> String, one of "ACTIVE", "DEPRECATED"
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)
  • :next_token (String)

Returns:

  • (Types::ListKafkaVersionsResponse)

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2660

def list_kafka_versions(params = {}, options = {})
  req = build_request(:list_kafka_versions, params)
  req.send_request(options)
end

#list_nodes(params = {}) ⇒ Types::ListNodesResponse

Returns a list of the broker nodes in the cluster.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_nodes({
  cluster_arn: "__string", # required
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.next_token #=> String
resp.node_info_list #=> Array
resp.node_info_list[0].added_to_cluster_time #=> String
resp.node_info_list[0].broker_node_info.attached_eni_id #=> String
resp.node_info_list[0].broker_node_info.broker_id #=> Float
resp.node_info_list[0].broker_node_info.client_subnet #=> String
resp.node_info_list[0].broker_node_info.client_vpc_ip_address #=> String
resp.node_info_list[0].broker_node_info.current_broker_software_info.configuration_arn #=> String
resp.node_info_list[0].broker_node_info.current_broker_software_info.configuration_revision #=> Integer
resp.node_info_list[0].broker_node_info.current_broker_software_info.kafka_version #=> String
resp.node_info_list[0].broker_node_info.endpoints #=> Array
resp.node_info_list[0].broker_node_info.endpoints[0] #=> String
resp.node_info_list[0].controller_node_info.endpoints #=> Array
resp.node_info_list[0].controller_node_info.endpoints[0] #=> String
resp.node_info_list[0].instance_type #=> String
resp.node_info_list[0].node_arn #=> String
resp.node_info_list[0].node_type #=> String, one of "BROKER"
resp.node_info_list[0].zookeeper_node_info.attached_eni_id #=> String
resp.node_info_list[0].zookeeper_node_info.client_vpc_ip_address #=> String
resp.node_info_list[0].zookeeper_node_info.endpoints #=> Array
resp.node_info_list[0].zookeeper_node_info.endpoints[0] #=> String
resp.node_info_list[0].zookeeper_node_info.zookeeper_id #=> Float
resp.node_info_list[0].zookeeper_node_info.zookeeper_version #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :max_results (Integer)
  • :next_token (String)

Returns:

  • (Types::ListNodesResponse)

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2718

def list_nodes(params = {}, options = {})
  req = build_request(:list_nodes, params)
  req.send_request(options)
end

#list_replicators(params = {}) ⇒ Types::ListReplicatorsResponse

Lists the replicators.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_replicators({
  max_results: 1,
  next_token: "__string",
  replicator_name_filter: "__string",
})

Response structure


resp.next_token #=> String
resp.replicators #=> Array
resp.replicators[0].creation_time #=> Time
resp.replicators[0].current_version #=> String
resp.replicators[0].is_replicator_reference #=> Boolean
resp.replicators[0].kafka_clusters_summary #=> Array
resp.replicators[0].kafka_clusters_summary[0].amazon_msk_cluster.msk_cluster_arn #=> String
resp.replicators[0].kafka_clusters_summary[0].apache_kafka_cluster.apache_kafka_cluster_id #=> String
resp.replicators[0].kafka_clusters_summary[0].apache_kafka_cluster.bootstrap_broker_string #=> String
resp.replicators[0].kafka_clusters_summary[0].kafka_cluster_alias #=> String
resp.replicators[0].replication_info_summary_list #=> Array
resp.replicators[0].replication_info_summary_list[0].source_kafka_cluster_alias #=> String
resp.replicators[0].replication_info_summary_list[0].target_kafka_cluster_alias #=> String
resp.replicators[0].replicator_arn #=> String
resp.replicators[0].replicator_name #=> String
resp.replicators[0].replicator_resource_arn #=> String
resp.replicators[0].replicator_state #=> String, one of "RUNNING", "CREATING", "UPDATING", "DELETING", "FAILED"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)
  • :next_token (String)
  • :replicator_name_filter (String)

Returns:

  • (Types::ListReplicatorsResponse)

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2770

def list_replicators(params = {}, options = {})
  req = build_request(:list_replicators, params)
  req.send_request(options)
end

#list_scram_secrets(params = {}) ⇒ Types::ListScramSecretsResponse

Returns a list of the SCRAM secrets associated with an Amazon MSK cluster.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_scram_secrets({
  cluster_arn: "__string", # required
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.next_token #=> String
resp.secret_arn_list #=> Array
resp.secret_arn_list[0] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :max_results (Integer)
  • :next_token (String)

Returns:

  • (Types::ListScramSecretsResponse)

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2809

def list_scram_secrets(params = {}, options = {})
  req = build_request(:list_scram_secrets, params)
  req.send_request(options)
end

#list_tags_for_resource(params = {}) ⇒ Types::ListTagsForResourceResponse

Returns a list of the tags associated with the specified resource.

Examples:

Request syntax with placeholder values


resp = client.list_tags_for_resource({
  resource_arn: "__string", # required
})

Response structure


resp.tags #=> Hash
resp.tags["__string"] #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)

Returns:

  • (Types::ListTagsForResourceResponse)

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2837

def list_tags_for_resource(params = {}, options = {})
  req = build_request(:list_tags_for_resource, params)
  req.send_request(options)
end

#list_topics(params = {}) ⇒ Types::ListTopicsResponse

Lists the topics in an MSK cluster.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_topics({
  cluster_arn: "__string", # required
  max_results: 1,
  next_token: "__string",
  topic_name_filter: "__string",
})

Response structure


resp.topics #=> Array
resp.topics[0].topic_arn #=> String
resp.topics[0].topic_name #=> String
resp.topics[0].replication_factor #=> Integer
resp.topics[0].partition_count #=> Integer
resp.topics[0].out_of_sync_replica_count #=> Integer
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :max_results (Integer)
  • :next_token (String)
  • :topic_name_filter (String)

Returns:

  • (Types::ListTopicsResponse)

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2964

def list_topics(params = {}, options = {})
  req = build_request(:list_topics, params)
  req.send_request(options)
end

#list_vpc_connections(params = {}) ⇒ Types::ListVpcConnectionsResponse

Displays a list of Amazon MSK VPC connections.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

Examples:

Request syntax with placeholder values


resp = client.list_vpc_connections({
  max_results: 1,
  next_token: "__string",
})

Response structure


resp.next_token #=> String
resp.vpc_connections #=> Array
resp.vpc_connections[0].vpc_connection_arn #=> String
resp.vpc_connections[0].target_cluster_arn #=> String
resp.vpc_connections[0].creation_time #=> Time
resp.vpc_connections[0].authentication #=> String
resp.vpc_connections[0].vpc_id #=> String
resp.vpc_connections[0].state #=> String, one of "CREATING", "AVAILABLE", "INACTIVE", "DEACTIVATING", "DELETING", "FAILED", "REJECTED", "REJECTING"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :max_results (Integer)
  • :next_token (String)

Returns:

  • (Types::ListVpcConnectionsResponse)

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2919

def list_vpc_connections(params = {}, options = {})
  req = build_request(:list_vpc_connections, params)
  req.send_request(options)
end

#put_cluster_policy(params = {}) ⇒ Types::PutClusterPolicyResponse

Creates or updates the specified MSK cluster policy. If updating the policy, the currentVersion field is required in the request payload.

Examples:

Request syntax with placeholder values


resp = client.put_cluster_policy({
  cluster_arn: "__string", # required
  current_version: "__string",
  policy: "__string", # required
})

Response structure


resp.current_version #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :current_version (String)
  • :policy (required, String)

Returns:

  • (Types::PutClusterPolicyResponse)

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3071

def put_cluster_policy(params = {}, options = {})
  req = build_request(:put_cluster_policy, params)
  req.send_request(options)
end

#reboot_broker(params = {}) ⇒ Types::RebootBrokerResponse

Reboots the specified brokers in the cluster.

Examples:

Request syntax with placeholder values


resp = client.reboot_broker({
  broker_ids: ["__string"], # required
  cluster_arn: "__string", # required
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :broker_ids (required, Array<String>)

    The list of broker ids to be rebooted.

  • :cluster_arn (required, String)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3104

def reboot_broker(params = {}, options = {})
  req = build_request(:reboot_broker, params)
  req.send_request(options)
end

#reject_client_vpc_connection(params = {}) ⇒ Struct

Rejects the specified client VPC connection request. Returns an empty response.

Examples:

Request syntax with placeholder values


resp = client.reject_client_vpc_connection({
  cluster_arn: "__string", # required
  vpc_connection_arn: "__string", # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :vpc_connection_arn (required, String)

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 2986

def reject_client_vpc_connection(params = {}, options = {})
  req = build_request(:reject_client_vpc_connection, params)
  req.send_request(options)
end

#tag_resource(params = {}) ⇒ Struct

Adds tags to the specified MSK resource.

Examples:

Request syntax with placeholder values


resp = client.tag_resource({
  resource_arn: "__string", # required
  tags: { # required
    "__string" => "__string",
  },
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)
  • :tags (required, Hash<String,String>)

    The key-value pairs for the resource tags.

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3131

def tag_resource(params = {}, options = {})
  req = build_request(:tag_resource, params)
  req.send_request(options)
end

#untag_resource(params = {}) ⇒ Struct

Removes the tags associated with the keys that are provided in the query.

Examples:

Request syntax with placeholder values


resp = client.untag_resource({
  resource_arn: "__string", # required
  tag_keys: ["__string"], # required
})

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_arn (required, String)
  • :tag_keys (required, Array<String>)

Returns:

  • (Struct)

    Returns an empty response.

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3156

def untag_resource(params = {}, options = {})
  req = build_request(:untag_resource, params)
  req.send_request(options)
end

#update_broker_count(params = {}) ⇒ Types::UpdateBrokerCountResponse

Updates the number of broker nodes in the cluster. You can use this operation to increase the number of brokers in an existing cluster. You can’t decrease the number of brokers.

Examples:

Request syntax with placeholder values


resp = client.update_broker_count({
  cluster_arn: "__string", # required
  current_version: "__string", # required
  target_number_of_broker_nodes: 1, # required
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :current_version (required, String)

    The current version of the cluster.

  • :target_number_of_broker_nodes (required, Integer)

    The number of broker nodes that you want the cluster to have after this operation completes successfully.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3196

def update_broker_count(params = {}, options = {})
  req = build_request(:update_broker_count, params)
  req.send_request(options)
end

#update_broker_storage(params = {}) ⇒ Types::UpdateBrokerStorageResponse

Updates the EBS storage associated with MSK brokers.

Examples:

Request syntax with placeholder values


resp = client.update_broker_storage({
  cluster_arn: "__string", # required
  current_version: "__string", # required
  target_broker_ebs_volume_info: [ # required
    {
      kafka_broker_node_id: "__string", # required
      provisioned_throughput: {
        enabled: false,
        volume_throughput: 1,
      },
      volume_size_gb: 1,
    },
  ],
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :current_version (required, String)

    The version of cluster to update from. A successful operation will then generate a new version.

  • :target_broker_ebs_volume_info (required, Array<Types::BrokerEBSVolumeInfo>)

    Describes the target volume size and the ID of the broker to apply the update to.

    The volume size you specify (volume_size_gb) must be a whole number greater than 100 GiB.

    The storage per broker after the update operation can’t exceed 16384 GiB.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3288

def update_broker_storage(params = {}, options = {})
  req = build_request(:update_broker_storage, params)
  req.send_request(options)
end

#update_broker_type(params = {}) ⇒ Types::UpdateBrokerTypeResponse

Updates all the brokers in the cluster to the specified type.

Examples:

Request syntax with placeholder values


resp = client.update_broker_type({
  cluster_arn: "__string", # required
  current_version: "__string", # required
  target_instance_type: "__string", # required
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :current_version (required, String)

    The current version of the cluster.

  • :target_instance_type (required, String)

    The Amazon MSK broker instance type that you want all of the brokers in this cluster to use.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3234

def update_broker_type(params = {}, options = {})
  req = build_request(:update_broker_type, params)
  req.send_request(options)
end

#update_cluster_configuration(params = {}) ⇒ Types::UpdateClusterConfigurationResponse

Updates the cluster with the configuration that is specified in the request body.

Examples:

Request syntax with placeholder values


resp = client.update_cluster_configuration({
  cluster_arn: "__string", # required
  configuration_info: { # required
    arn: "__string", # required
    revision: 1, # required
  },
  current_version: "__string", # required
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :configuration_info (required, Types::ConfigurationInfo)

    Represents the configuration that you want MSK to use for the cluster.

  • :current_version (required, String)

    The version of the cluster that you want to update.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3369

def update_cluster_configuration(params = {}, options = {})
  req = build_request(:update_cluster_configuration, params)
  req.send_request(options)
end

#update_cluster_kafka_version(params = {}) ⇒ Types::UpdateClusterKafkaVersionResponse

Updates the Apache Kafka version for the cluster.

Examples:

Request syntax with placeholder values


resp = client.update_cluster_kafka_version({
  cluster_arn: "__string", # required
  configuration_info: {
    arn: "__string", # required
    revision: 1, # required
  },
  current_version: "__string", # required
  target_kafka_version: "__string", # required
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :configuration_info (Types::ConfigurationInfo)

    Specifies the configuration to use for the brokers.

  • :current_version (required, String)

    Current cluster version.

  • :target_kafka_version (required, String)

    Target Apache Kafka version.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3413

def update_cluster_kafka_version(params = {}, options = {})
  req = build_request(:update_cluster_kafka_version, params)
  req.send_request(options)
end

#update_configuration(params = {}) ⇒ Types::UpdateConfigurationResponse

Updates an existing MSK configuration. The configuration must be in the Active state.

Examples:

Request syntax with placeholder values


resp = client.update_configuration({
  arn: "__string", # required
  description: "__string",
  server_properties: "data", # required
})

Response structure


resp.arn #=> String
resp.latest_revision.creation_time #=> Time
resp.latest_revision.description #=> String
resp.latest_revision.revision #=> Integer

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :arn (required, String)

    The Amazon Resource Name (ARN) of the configuration.

  • :description (String)

    The description of the configuration.

  • :server_properties (required, String, StringIO, File)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3328

def update_configuration(params = {}, options = {})
  req = build_request(:update_configuration, params)
  req.send_request(options)
end

#update_connectivity(params = {}) ⇒ Types::UpdateConnectivityResponse

Updates the connectivity configuration for the MSK cluster.

Examples:

Request syntax with placeholder values


resp = client.update_connectivity({
  cluster_arn: "__string", # required
  connectivity_info: {
    public_access: {
      type: "__string",
    },
    vpc_connectivity: {
      client_authentication: {
        sasl: {
          scram: {
            enabled: false,
          },
          iam: {
            enabled: false,
          },
        },
        tls: {
          enabled: false,
        },
      },
    },
    network_type: "IPV4", # accepts IPV4, DUAL
  },
  current_version: "__string", # required
  zookeeper_access: {
    enabled: false,
  },
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :connectivity_info (Types::ConnectivityInfo)

    Information about the broker access configuration.

  • :current_version (required, String)

    The current version of the cluster.

  • :zookeeper_access (Types::ZookeeperAccess)

    Access control settings for ZooKeeper.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3476

def update_connectivity(params = {}, options = {})
  req = build_request(:update_connectivity, params)
  req.send_request(options)
end

#update_monitoring(params = {}) ⇒ Types::UpdateMonitoringResponse

Updates the monitoring settings for the cluster. You can use this operation to specify which Apache Kafka metrics you want Amazon MSK to send to Amazon CloudWatch. You can also specify settings for open monitoring with Prometheus.

Examples:

Request syntax with placeholder values


resp = client.update_monitoring({
  cluster_arn: "__string", # required
  current_version: "__string", # required
  enhanced_monitoring: "DEFAULT", # accepts DEFAULT, PER_BROKER, PER_TOPIC_PER_BROKER, PER_TOPIC_PER_PARTITION
  open_monitoring: {
    prometheus: { # required
      jmx_exporter: {
        enabled_in_broker: false, # required
      },
      node_exporter: {
        enabled_in_broker: false, # required
      },
    },
  },
  logging_info: {
    broker_logs: { # required
      cloud_watch_logs: {
        enabled: false, # required
        log_group: "__string",
      },
      firehose: {
        delivery_stream: "__string",
        enabled: false, # required
      },
      s3: {
        bucket: "__string",
        enabled: false, # required
        prefix: "__string",
      },
    },
  },
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :current_version (required, String)

    The version of cluster to update from. A successful operation will then generate a new version.

  • :enhanced_monitoring (String)

    Specifies which Apache Kafka metrics Amazon MSK gathers and sends to Amazon CloudWatch for this cluster.

  • :open_monitoring (Types::OpenMonitoringInfo)

    The settings for open monitoring.

  • :logging_info (Types::LoggingInfo)

    LoggingInfo details.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3551

def update_monitoring(params = {}, options = {})
  req = build_request(:update_monitoring, params)
  req.send_request(options)
end

#update_rebalancing(params = {}) ⇒ Types::UpdateRebalancingResponse

Use this operation to update the intelligent rebalancing status of an Amazon MSK Provisioned cluster with Express brokers.

Examples:

Request syntax with placeholder values


resp = client.update_rebalancing({
  cluster_arn: "__string", # required
  current_version: "__string", # required
  rebalancing: { # required
    status: "PAUSED", # required, accepts PAUSED, ACTIVE
  },
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)

    The Amazon Resource Name (ARN) of the cluster.

  • :current_version (required, String)

    The current version of the cluster.

  • :rebalancing (required, Types::Rebalancing)

    Includes all rebalancing-related information for the cluster.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3592

def update_rebalancing(params = {}, options = {})
  req = build_request(:update_rebalancing, params)
  req.send_request(options)
end

#update_replication_info(params = {}) ⇒ Types::UpdateReplicationInfoResponse

Updates the replication information of a replicator.

Examples:

Request syntax with placeholder values


resp = client.update_replication_info({
  consumer_group_replication: {
    consumer_groups_to_exclude: ["__stringMax256"], # required
    consumer_groups_to_replicate: ["__stringMax256"], # required
    detect_and_copy_new_consumer_groups: false, # required
    synchronise_consumer_group_offsets: false, # required
  },
  current_version: "__string", # required
  log_delivery: {
    replicator_log_delivery: {
      cloud_watch_logs: {
        enabled: false, # required
        log_group: "__string",
      },
      firehose: {
        delivery_stream: "__string",
        enabled: false, # required
      },
      s3: {
        bucket: "__string",
        enabled: false, # required
        prefix: "__string",
      },
    },
  },
  replicator_arn: "__string", # required
  source_kafka_cluster_arn: "__string",
  source_kafka_cluster_id: "__string",
  target_kafka_cluster_arn: "__string",
  target_kafka_cluster_id: "__string",
  topic_replication: {
    copy_access_control_lists_for_topics: false, # required
    copy_topic_configurations: false, # required
    detect_and_copy_new_topics: false, # required
    topics_to_exclude: ["__stringMax249"], # required
    topics_to_replicate: ["__stringMax249"], # required
  },
})

Response structure


resp.replicator_arn #=> String
resp.replicator_state #=> String, one of "RUNNING", "CREATING", "UPDATING", "DELETING", "FAILED"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :consumer_group_replication (Types::ConsumerGroupReplicationUpdate)

    Updated consumer group replication information.

  • :current_version (required, String)

    Current replicator version.

  • :log_delivery (Types::LogDelivery)

    Configuration for delivering replicator logs to customer destinations.

  • :replicator_arn (required, String)
  • :source_kafka_cluster_arn (String)

    The ARN of the source Kafka cluster.

  • :source_kafka_cluster_id (String)

    The ID of the source Kafka cluster.

  • :target_kafka_cluster_arn (String)

    The ARN of the target Kafka cluster.

  • :target_kafka_cluster_id (String)

    The ID of the target Kafka cluster.

  • :topic_replication (Types::TopicReplicationUpdate)

    Updated topic replication information.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3680

def update_replication_info(params = {}, options = {})
  req = build_request(:update_replication_info, params)
  req.send_request(options)
end

#update_security(params = {}) ⇒ Types::UpdateSecurityResponse

You can use this operation to update the encryption and authentication settings for an existing cluster.

Examples:

Request syntax with placeholder values


resp = client.update_security({
  client_authentication: {
    sasl: {
      scram: {
        enabled: false,
      },
      iam: {
        enabled: false,
      },
    },
    tls: {
      certificate_authority_arn_list: ["__string"],
      enabled: false,
    },
    unauthenticated: {
      enabled: false,
    },
  },
  cluster_arn: "__string", # required
  current_version: "__string", # required
  encryption_info: {
    encryption_at_rest: {
      data_volume_kms_key_id: "__string", # required
    },
    encryption_in_transit: {
      client_broker: "TLS", # accepts TLS, TLS_PLAINTEXT, PLAINTEXT
      in_cluster: false,
    },
  },
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :client_authentication (Types::ClientAuthentication)

    Includes all client authentication related information.

  • :cluster_arn (required, String)
  • :current_version (required, String)

    You can use the DescribeCluster operation to get the current version of the cluster. After the security update is complete, the cluster will have a new version.

  • :encryption_info (Types::EncryptionInfo)

    Includes all encryption-related information.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3748

def update_security(params = {}, options = {})
  req = build_request(:update_security, params)
  req.send_request(options)
end

#update_storage(params = {}) ⇒ Types::UpdateStorageResponse

Updates the cluster broker volume size, or sets the cluster storage mode to TIERED.

Examples:

Request syntax with placeholder values


resp = client.update_storage({
  cluster_arn: "__string", # required
  current_version: "__string", # required
  provisioned_throughput: {
    enabled: false,
    volume_throughput: 1,
  },
  storage_mode: "LOCAL", # accepts LOCAL, TIERED
  volume_size_gb: 1,
})

Response structure


resp.cluster_arn #=> String
resp.cluster_operation_arn #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :current_version (required, String)

    The version of cluster to update from. A successful operation will then generate a new version.

  • :provisioned_throughput (Types::ProvisionedThroughput)

    EBS volume provisioned throughput information.

  • :storage_mode (String)

    Controls storage mode for supported storage tiers.

  • :volume_size_gb (Integer)

    The size, in GiB, of the EBS volume to update.

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3798

def update_storage(params = {}, options = {})
  req = build_request(:update_storage, params)
  req.send_request(options)
end

#update_topic(params = {}) ⇒ Types::UpdateTopicResponse

Updates the topic configuration or partition count in the specified MSK cluster.

Examples:

Request syntax with placeholder values


resp = client.update_topic({
  cluster_arn: "__string", # required
  topic_name: "__string", # required
  configs: "__string",
  partition_count: 1,
})

Response structure


resp.topic_arn #=> String
resp.topic_name #=> String
resp.status #=> String, one of "CREATING", "UPDATING", "DELETING", "ACTIVE"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :cluster_arn (required, String)
  • :topic_name (required, String)
  • :configs (String)
  • :partition_count (Integer)

Returns:

See Also:



# File 'lib/aws-sdk-kafka/client.rb', line 3839

def update_topic(params = {}, options = {})
  req = build_request(:update_topic, params)
  req.send_request(options)
end

#waiter_namesObject

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.

Deprecated.


# File 'lib/aws-sdk-kafka/client.rb', line 3868

def waiter_names
  []
end