Class: Aws::MachineLearning::Client

Inherits:
Seahorse::Client::Base
Includes:
ClientStubs
Defined in:
lib/aws-sdk-machinelearning/client.rb

Overview

An API client for MachineLearning. To construct a client, you need to configure a `:region` and `:credentials`.

client = Aws::MachineLearning::Client.new(
  region: region_name,
  credentials: credentials,
  # ...
)

For details on configuring region and credentials see the [developer guide](/sdk-for-ruby/v3/developer-guide/setup-config.html).

See #initialize for a full list of supported configuration options.
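
As a minimal sketch (the region and credential values are placeholders, not working values), a client with explicit static credentials might be constructed like this:

require 'aws-sdk-machinelearning'

# Static credentials; any of the credential provider classes listed under
# #initialize can be substituted here.
client = Aws::MachineLearning::Client.new(
  region: 'us-east-1',
  credentials: Aws::Credentials.new('EXAMPLE_ACCESS_KEY_ID', 'EXAMPLE_SECRET_ACCESS_KEY')
)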

Class Attribute Summary

API Operations

Class Method Summary

Instance Method Summary

Constructor Details

#initialize(options) ⇒ Client

Returns a new instance of Client.

Parameters:

  • options (Hash)

Options Hash (options):

  • :plugins (Array<Seahorse::Client::Plugin>) — default: []

    A list of plugins to apply to the client. Each plugin is either a class name or an instance of a plugin class.

  • :credentials (required, Aws::CredentialProvider)

    Your AWS credentials. This can be an instance of any one of the following classes:

    • `Aws::Credentials` - Used for configuring static, non-refreshing credentials.

    • `Aws::SharedCredentials` - Used for loading static credentials from a shared file, such as `~/.aws/config`.

    • `Aws::AssumeRoleCredentials` - Used when you need to assume a role.

    • `Aws::AssumeRoleWebIdentityCredentials` - Used when you need to assume a role after providing credentials via the web.

    • `Aws::SSOCredentials` - Used for loading credentials from AWS SSO using an access token generated from `aws login`.

    • `Aws::ProcessCredentials` - Used for loading credentials from a process that outputs to stdout.

    • `Aws::InstanceProfileCredentials` - Used for loading credentials from an EC2 IMDS on an EC2 instance.

    • `Aws::ECSCredentials` - Used for loading credentials from instances running in ECS.

    • `Aws::CognitoIdentityCredentials` - Used for loading credentials from the Cognito Identity service.

    When `:credentials` are not configured directly, the following locations will be searched for credentials:

    • `Aws.config[:credentials]`

    • The `:access_key_id`, `:secret_access_key`, `:session_token`, and `:account_id` options.

    • `ENV['AWS_ACCESS_KEY_ID']`, `ENV['AWS_SECRET_ACCESS_KEY']`, `ENV['AWS_SESSION_TOKEN']`, and `ENV['AWS_ACCOUNT_ID']`

    • `~/.aws/credentials`

    • `~/.aws/config`

    • EC2/ECS IMDS instance profile - When used by default, the timeouts are very aggressive. Construct and pass an instance of `Aws::InstanceProfileCredentials` or `Aws::ECSCredentials` to enable retries and extended timeouts. Instance profile credential fetching can be disabled by setting `ENV['AWS_EC2_METADATA_DISABLED']` to `true`.

  • :region (required, String)

    The AWS region to connect to. The configured `:region` is used to determine the service `:endpoint`. When not passed, a default `:region` is searched for in the environment (for example, `ENV['AWS_REGION']`) and in the shared configuration files.

  • :access_key_id (String)
  • :account_id (String)
  • :active_endpoint_cache (Boolean) — default: false

    When set to `true`, a thread polling for endpoints will be running in the background every 60 secs (default). Defaults to `false`.

  • :adaptive_retry_wait_to_fill (Boolean) — default: true

    Used only in `adaptive` retry mode. When true, the request will sleep until there is sufficient client-side capacity to retry the request. When false, the request will raise a `RetryCapacityNotAvailableError` and will not retry instead of sleeping.

  • :client_side_monitoring (Boolean) — default: false

    When `true`, client-side metrics will be collected for all API requests from this client.

  • :client_side_monitoring_client_id (String) — default: ""

    Allows you to provide an identifier for this client which will be attached to all generated client side metrics. Defaults to an empty string.

  • :client_side_monitoring_host (String) — default: "127.0.0.1"

    Allows you to specify the DNS hostname or IPv4 or IPv6 address that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_port (Integer) — default: 31000

    Required for publishing client metrics. The port that the client side monitoring agent is running on, where client metrics will be published via UDP.

  • :client_side_monitoring_publisher (Aws::ClientSideMonitoring::Publisher) — default: Aws::ClientSideMonitoring::Publisher

    Allows you to provide a custom client-side monitoring publisher class. By default, will use the Client Side Monitoring Agent Publisher.

  • :convert_params (Boolean) — default: true

    When `true`, an attempt is made to coerce request parameters into the required types.

  • :correct_clock_skew (Boolean) — default: true

    Used only in `standard` and `adaptive` retry modes. Specifies whether to apply a clock skew correction and retry requests with skewed client clocks.

  • :defaults_mode (String) — default: "legacy"

    See DefaultsModeConfiguration for a list of the accepted modes and the configuration defaults that are included.

  • :disable_host_prefix_injection (Boolean) — default: false

    Set to true to disable SDK automatically adding host prefix to default service endpoint when available.

  • :disable_request_compression (Boolean) — default: false

    When set to `true` the request body will not be compressed for supported operations.

  • :endpoint (String, URI::HTTPS, URI::HTTP)

    Normally you should not configure the `:endpoint` option directly. This is normally constructed from the `:region` option. Configuring `:endpoint` is normally reserved for connecting to test or custom endpoints. The endpoint should be a URI formatted like:

    'http://example.com'
    'https://example.com'
    'http://example.com:123'
    
  • :endpoint_cache_max_entries (Integer) — default: 1000

    Used for the maximum size limit of the LRU cache storing endpoints data for endpoint discovery enabled operations. Defaults to 1000.

  • :endpoint_cache_max_threads (Integer) — default: 10

    Used for the maximum threads in use for polling endpoints to be cached, defaults to 10.

  • :endpoint_cache_poll_interval (Integer) — default: 60

    When `:endpoint_discovery` and `:active_endpoint_cache` are enabled, use this option to configure the time interval, in seconds, between requests that fetch endpoint information. Defaults to 60 seconds.

  • :endpoint_discovery (Boolean) — default: false

    When set to `true`, endpoint discovery will be enabled for operations when available.

  • :ignore_configured_endpoint_urls (Boolean)

    Setting to true disables use of endpoint URLs provided via environment variables and the shared configuration file.

  • :log_formatter (Aws::Log::Formatter) — default: Aws::Log::Formatter.default

    The log formatter.

  • :log_level (Symbol) — default: :info

    The log level to send messages to the `:logger` at.

  • :logger (Logger)

    The Logger instance to send log messages to. If this option is not set, logging will be disabled.

  • :max_attempts (Integer) — default: 3

    An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. For example, setting this value to 5 will result in a request being retried up to 4 times. Used in `standard` and `adaptive` retry modes.

  • :profile (String) — default: "default"

    Used when loading credentials from the shared credentials file at HOME/.aws/credentials. When not specified, `default` is used.

  • :request_min_compression_size_bytes (Integer) — default: 10240

    The minimum size in bytes that triggers compression for request bodies. The value must be a non-negative integer between 0 and 10485780 bytes, inclusive.

  • :retry_backoff (Proc)

    A proc or lambda used for backoff. Defaults to 2**retries * retry_base_delay. This option is only used in the `legacy` retry mode.

  • :retry_base_delay (Float) — default: 0.3

    The base delay in seconds used by the default backoff function. This option is only used in the `legacy` retry mode.

  • :retry_jitter (Symbol) — default: :none

    A delay randomiser function used by the default backoff function. Some predefined functions can be referenced by name - :none, :equal, :full, otherwise a Proc that takes and returns a number. This option is only used in the `legacy` retry mode.

    @see www.awsarchitectureblog.com/2015/03/backoff.html

  • :retry_limit (Integer) — default: 3

    The maximum number of times to retry failed requests. Only ~ 500 level server errors and certain ~ 400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, auth errors, endpoint discovery, and errors from expired credentials. This option is only used in the `legacy` retry mode.

  • :retry_max_delay (Integer) — default: 0

    The maximum number of seconds to delay between retries (0 for no limit) used by the default backoff function. This option is only used in the `legacy` retry mode.

  • :retry_mode (String) — default: "legacy"

    Specifies which retry algorithm to use. Values are:

    • `legacy` - The pre-existing retry behavior. This is the default value if no retry mode is provided.

    • `standard` - A standardized set of retry rules across the AWS SDKs. This includes support for retry quotas, which limit the number of unsuccessful retries a client can make.

    • `adaptive` - An experimental retry mode that includes all the functionality of `standard` mode along with automatic client side throttling. This is a provisional mode that may change behavior in the future.

  • :sdk_ua_app_id (String)

    A unique and opaque application ID that is appended to the User-Agent header as app/sdk_ua_app_id. It should have a maximum length of 50. This variable is sourced from environment variable AWS_SDK_UA_APP_ID or the shared config profile attribute sdk_ua_app_id.

  • :secret_access_key (String)
  • :session_token (String)
  • :sigv4a_signing_region_set (Array)

    A list of regions that should be signed with SigV4a signing. When not passed, a default `:sigv4a_signing_region_set` is searched for in the environment and in the shared configuration files.

  • :simple_json (Boolean) — default: false

    Disables request parameter conversion, validation, and formatting. Also disables response data type conversions. The request parameters hash must be formatted exactly as the API expects. This option is useful when you want to ensure the highest level of performance by avoiding overhead of walking request parameters and response data structures.

  • :stub_responses (Boolean) — default: false

    Causes the client to return stubbed responses. By default fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information.

    Please note: When response stubbing is enabled, no HTTP requests are made, and retries are disabled.

  • :telemetry_provider (Aws::Telemetry::TelemetryProviderBase) — default: Aws::Telemetry::NoOpTelemetryProvider

    Allows you to provide a telemetry provider, which is used to emit telemetry data. By default, uses `NoOpTelemetryProvider` which will not record or emit any telemetry data. The SDK supports the following telemetry providers:

    • OpenTelemetry (OTel) - To use the OTel provider, install and require the `opentelemetry-sdk` gem and then pass in an instance of `Aws::Telemetry::OTelProvider` as the telemetry provider.

  • :token_provider (Aws::TokenProvider)

    A Bearer Token Provider. This can be an instance of any one of the following classes:

    • `Aws::StaticTokenProvider` - Used for configuring static, non-refreshing tokens.

    • `Aws::SSOTokenProvider` - Used for loading tokens from AWS SSO using an access token generated from `aws login`.

    When `:token_provider` is not configured directly, the `Aws::TokenProviderChain` will be used to search for tokens configured for your profile in shared configuration files.

  • :use_dualstack_endpoint (Boolean)

    When set to `true`, dualstack enabled endpoints (with `.aws` TLD) will be used if available.

  • :use_fips_endpoint (Boolean)

    When set to `true`, fips compatible endpoints will be used if available. When a `fips` region is used, the region is normalized and this config is set to `true`.

  • :validate_params (Boolean) — default: true

    When `true`, request parameters are validated before sending the request.

  • :endpoint_provider (Aws::MachineLearning::EndpointProvider)

    The endpoint provider used to resolve endpoints. Any object that responds to `#resolve_endpoint(parameters)` where `parameters` is a Struct similar to `Aws::MachineLearning::EndpointParameters`.

  • :http_continue_timeout (Float) — default: 1

    The number of seconds to wait for a 100-continue response before sending the request body. This option has no effect unless the request has the “Expect” header set to “100-continue”. Defaults to `nil` which disables this behaviour. This value can safely be set per request on the session.

  • :http_idle_timeout (Float) — default: 5

    The number of seconds a connection is allowed to sit idle before it is considered stale. Stale connections are closed and removed from the pool before making a request.

  • :http_open_timeout (Float) — default: 15

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_proxy (URI::HTTP, String)

    A proxy to send requests through. Formatted like `proxy.com:123`.

  • :http_read_timeout (Float) — default: 60

    The default number of seconds to wait for response data. This value can safely be set per-request on the session.

  • :http_wire_trace (Boolean) — default: false

    When `true`, HTTP debug output will be sent to the `:logger`.

  • :on_chunk_received (Proc)

    When a Proc object is provided, it will be used as callback when each chunk of the response body is received. It provides three arguments: the chunk, the number of bytes received, and the total number of bytes in the response (or nil if the server did not send a `content-length`).

  • :on_chunk_sent (Proc)

    When a Proc object is provided, it will be used as callback when each chunk of the request body is sent. It provides three arguments: the chunk, the number of bytes read from the body, and the total number of bytes in the body.

  • :raise_response_errors (Boolean) — default: true

    When `true`, response errors are raised.

  • :ssl_ca_bundle (String)

    Full path to the SSL certificate authority bundle file that should be used when verifying peer certificates. If you do not pass `:ssl_ca_bundle` or `:ssl_ca_directory` the system default will be used if available.

  • :ssl_ca_directory (String)

    Full path of the directory that contains the unbundled SSL certificate authority files for verifying peer certificates. If you do not pass `:ssl_ca_bundle` or `:ssl_ca_directory` the system default will be used if available.

  • :ssl_ca_store (String)

    Sets the X509::Store to verify peer certificate.

  • :ssl_cert (OpenSSL::X509::Certificate)

    Sets a client certificate when creating http connections.

  • :ssl_key (OpenSSL::PKey)

    Sets a client key when creating http connections.

  • :ssl_timeout (Float)

    Sets the SSL timeout in seconds

  • :ssl_verify_peer (Boolean) — default: true

    When `true`, SSL peer certificates are verified when establishing a connection.



# File 'lib/aws-sdk-machinelearning/client.rb', line 453

def initialize(*args)
  super
end
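
The sketch below combines several of the options documented above; the values are illustrative choices, not recommendations, and the credentials are placeholders.

require 'aws-sdk-machinelearning'
require 'logger'

client = Aws::MachineLearning::Client.new(
  region: 'us-east-1',
  credentials: Aws::Credentials.new('EXAMPLE_ACCESS_KEY_ID', 'EXAMPLE_SECRET_ACCESS_KEY'),
  retry_mode: 'standard',       # opt out of the default 'legacy' retry behavior
  max_attempts: 5,              # initial attempt plus up to 4 retries
  http_read_timeout: 30,        # seconds to wait for response data
  logger: Logger.new($stdout),  # API calls are logged when a logger is configured
  log_level: :info
)

# With :stub_responses enabled, no HTTP requests are made and
# ClientStubs#stub_responses can supply canned data for tests.
test_client = Aws::MachineLearning::Client.new(stub_responses: true)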

Class Attribute Details

.identifier ⇒ Object (readonly)

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.



# File 'lib/aws-sdk-machinelearning/client.rb', line 2623

def identifier
  @identifier
end

Class Method Details

.errors_module ⇒ Object

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.



# File 'lib/aws-sdk-machinelearning/client.rb', line 2626

def errors_module
  Errors
end

Instance Method Details

#add_tags(params = {}) ⇒ Types::AddTagsOutput

Adds one or more tags to an object, up to a limit of 10. Each tag consists of a key and an optional value. If you add a tag using a key that is already associated with the ML object, `AddTags` updates the tag’s value.

Examples:

Request syntax with placeholder values


resp = client.add_tags({
  tags: [ # required
    {
      key: "TagKey",
      value: "TagValue",
    },
  ],
  resource_id: "EntityId", # required
  resource_type: "BatchPrediction", # required, accepts BatchPrediction, DataSource, Evaluation, MLModel
})

Response structure


resp.resource_id #=> String
resp.resource_type #=> String, one of "BatchPrediction", "DataSource", "Evaluation", "MLModel"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :tags (required, Array<Types::Tag>)

    The key-value pairs to use to create tags. If you specify a key without specifying a value, Amazon ML creates a tag with the specified key and a value of null.

  • :resource_id (required, String)

    The ID of the ML object to tag. For example, `exampleModelId`.

  • :resource_type (required, String)

    The type of the ML object to tag.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 500

def add_tags(params = {}, options = {})
  req = build_request(:add_tags, params)
  req.send_request(options)
end

#build_request(operation_name, params = {}) ⇒ Object

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.

Parameters:

  • params ({}) (defaults to: {})


# File 'lib/aws-sdk-machinelearning/client.rb', line 2477

def build_request(operation_name, params = {})
  handlers = @handlers.for(operation_name)
  tracer = config.telemetry_provider.tracer_provider.tracer(
    Aws::Telemetry.module_to_tracer_name('Aws::MachineLearning')
  )
  context = Seahorse::Client::RequestContext.new(
    operation_name: operation_name,
    operation: config.api.operation(operation_name),
    client: self,
    params: params,
    config: config,
    tracer: tracer
  )
  context[:gem_name] = 'aws-sdk-machinelearning'
  context[:gem_version] = '1.63.0'
  Seahorse::Client::Request.new(handlers, context)
end

#create_batch_prediction(params = {}) ⇒ Types::CreateBatchPredictionOutput

Generates predictions for a group of observations. The observations to process exist in one or more data files referenced by a `DataSource`. This operation creates a new `BatchPrediction`, and uses an `MLModel` and the data files referenced by the `DataSource` as information sources.

`CreateBatchPrediction` is an asynchronous operation. In response to `CreateBatchPrediction`, Amazon Machine Learning (Amazon ML) immediately returns and sets the `BatchPrediction` status to `PENDING`. After the `BatchPrediction` completes, Amazon ML sets the status to `COMPLETED`.

You can poll for status updates by using the GetBatchPrediction operation and checking the `Status` parameter of the result. After the `COMPLETED` status appears, the results are available in the location specified by the `OutputUri` parameter.

Examples:

Request syntax with placeholder values


resp = client.create_batch_prediction({
  batch_prediction_id: "EntityId", # required
  batch_prediction_name: "EntityName",
  ml_model_id: "EntityId", # required
  batch_prediction_data_source_id: "EntityId", # required
  output_uri: "S3Url", # required
})

Response structure


resp.batch_prediction_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :batch_prediction_id (required, String)

    A user-supplied ID that uniquely identifies the `BatchPrediction`.

  • :batch_prediction_name (String)

    A user-supplied name or description of the `BatchPrediction`. `BatchPredictionName` can only use the UTF-8 character set.

  • :ml_model_id (required, String)

    The ID of the `MLModel` that will generate predictions for the group of observations.

  • :batch_prediction_data_source_id (required, String)

    The ID of the `DataSource` that points to the group of observations to predict.

  • :output_uri (required, String)

    The location of an Amazon Simple Storage Service (Amazon S3) bucket or directory to store the batch prediction results. The following substrings are not allowed in the `s3 key` portion of the `outputURI` field: ':', '//', '/./', '/../'.

    Amazon ML needs permissions to store and retrieve the logs on your behalf. For information about how to set permissions, see the [Amazon Machine Learning Developer Guide].

    [1]: docs.aws.amazon.com/machine-learning/latest/dg

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 571

def create_batch_prediction(params = {}, options = {})
  req = build_request(:create_batch_prediction, params)
  req.send_request(options)
end
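
Since `CreateBatchPrediction` is asynchronous, a common pattern is to create the batch prediction and then poll `#get_batch_prediction` until it leaves the `PENDING`/`INPROGRESS` states, as described above. A hedged sketch, with hypothetical IDs and bucket name:

resp = client.create_batch_prediction({
  batch_prediction_id: "example-bp-id",
  ml_model_id: "example-model-id",
  batch_prediction_data_source_id: "example-ds-id",
  output_uri: "s3://example-bucket/batch-output/",
})

# Poll for completion; GetBatchPrediction returns the current Status.
loop do
  status = client.get_batch_prediction(batch_prediction_id: resp.batch_prediction_id).status
  break if %w[COMPLETED FAILED DELETED].include?(status)
  sleep 30
end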

#create_data_source_from_rds(params = {}) ⇒ Types::CreateDataSourceFromRDSOutput

Creates a `DataSource` object from an [ Amazon Relational Database Service] (Amazon RDS). A `DataSource` references data that can be used to perform `CreateMLModel`, `CreateEvaluation`, or `CreateBatchPrediction` operations.

`CreateDataSourceFromRDS` is an asynchronous operation. In response to `CreateDataSourceFromRDS`, Amazon Machine Learning (Amazon ML) immediately returns and sets the `DataSource` status to `PENDING`. After the `DataSource` is created and ready for use, Amazon ML sets the `Status` parameter to `COMPLETED`. `DataSource` in the `COMPLETED` or `PENDING` state can be used only to perform `CreateMLModel`, `CreateEvaluation`, or `CreateBatchPrediction` operations.

If Amazon ML cannot accept the input source, it sets the `Status` parameter to `FAILED` and includes an error message in the `Message` attribute of the `GetDataSource` operation response.

[1]: aws.amazon.com/rds/

Examples:

Request syntax with placeholder values


resp = client.create_data_source_from_rds({
  data_source_id: "EntityId", # required
  data_source_name: "EntityName",
  rds_data: { # required
    database_information: { # required
      instance_identifier: "RDSInstanceIdentifier", # required
      database_name: "RDSDatabaseName", # required
    },
    select_sql_query: "RDSSelectSqlQuery", # required
    database_credentials: { # required
      username: "RDSDatabaseUsername", # required
      password: "RDSDatabasePassword", # required
    },
    s3_staging_location: "S3Url", # required
    data_rearrangement: "DataRearrangement",
    data_schema: "DataSchema",
    data_schema_uri: "S3Url",
    resource_role: "EDPResourceRole", # required
    service_role: "EDPServiceRole", # required
    subnet_id: "EDPSubnetId", # required
    security_group_ids: ["EDPSecurityGroupId"], # required
  },
  role_arn: "RoleARN", # required
  compute_statistics: false,
})

Response structure


resp.data_source_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :data_source_id (required, String)

    A user-supplied ID that uniquely identifies the `DataSource`. Typically, an Amazon Resource Number (ARN) becomes the ID for a `DataSource`.

  • :data_source_name (String)

    A user-supplied name or description of the `DataSource`.

  • :rds_data (required, Types::RDSDataSpec)

    The data specification of an Amazon RDS `DataSource`:

    • DatabaseInformation -

      • `DatabaseName` - The name of the Amazon RDS database.

      • `InstanceIdentifier` - A unique identifier for the Amazon RDS database instance.

    • DatabaseCredentials - AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon RDS database.

    • ResourceRole - A role (DataPipelineDefaultResourceRole) assumed by an EC2 instance to carry out the copy task from Amazon RDS to Amazon Simple Storage Service (Amazon S3). For more information, see [Role templates] for data pipelines.

    • ServiceRole - A role (DataPipelineDefaultRole) assumed by the AWS Data Pipeline service to monitor the progress of the copy task from Amazon RDS to Amazon S3. For more information, see [Role templates] for data pipelines.

    • SecurityInfo - The security information to use to access an RDS DB instance. You need to set up appropriate ingress rules for the security entity IDs provided to allow access to the Amazon RDS instance. Specify a [`SubnetId`, `SecurityGroupIds`] pair for a VPC-based RDS DB instance.

    • SelectSqlQuery - A query that is used to retrieve the observation data for the `Datasource`.

    • S3StagingLocation - The Amazon S3 location for staging Amazon RDS data. The data retrieved from Amazon RDS using `SelectSqlQuery` is stored in this location.

    • DataSchemaUri - The Amazon S3 location of the `DataSchema`.

    • DataSchema - A JSON string representing the schema. This is not required if `DataSchemaUri` is specified.

    • DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the `Datasource`.

      Sample - `"{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"`

    [1]: docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-iam-roles.html

  • :role_arn (required, String)

    The role that Amazon ML assumes on behalf of the user to create and activate a data pipeline in the user’s account and copy data using the `SelectSqlQuery` query from Amazon RDS to Amazon S3.

  • :compute_statistics (Boolean)

    The compute statistics for a `DataSource`. The statistics are generated from the observation data referenced by a `DataSource`. Amazon ML uses the statistics internally during `MLModel` training. This parameter must be set to `true` if the `DataSource` needs to be used for `MLModel` training.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 707

def create_data_source_from_rds(params = {}, options = {})
  req = build_request(:create_data_source_from_rds, params)
  req.send_request(options)
end

#create_data_source_from_redshift(params = {}) ⇒ Types::CreateDataSourceFromRedshiftOutput

Creates a `DataSource` from a database hosted on an Amazon Redshift cluster. A `DataSource` references data that can be used to perform either `CreateMLModel`, `CreateEvaluation`, or `CreateBatchPrediction` operations.

`CreateDataSourceFromRedshift` is an asynchronous operation. In response to `CreateDataSourceFromRedshift`, Amazon Machine Learning (Amazon ML) immediately returns and sets the `DataSource` status to `PENDING`. After the `DataSource` is created and ready for use, Amazon ML sets the `Status` parameter to `COMPLETED`. `DataSource` in `COMPLETED` or `PENDING` states can be used to perform only `CreateMLModel`, `CreateEvaluation`, or `CreateBatchPrediction` operations.

If Amazon ML can’t accept the input source, it sets the `Status` parameter to `FAILED` and includes an error message in the `Message` attribute of the `GetDataSource` operation response.

The observations should be contained in the database hosted on an Amazon Redshift cluster and should be specified by a `SelectSqlQuery` query. Amazon ML executes an `Unload` command in Amazon Redshift to transfer the result set of the `SelectSqlQuery` query to `S3StagingLocation`.

After the `DataSource` has been created, it’s ready for use in evaluations and batch predictions. If you plan to use the `DataSource` to train an `MLModel`, the `DataSource` also requires a recipe. A recipe describes how each input variable will be used in training an `MLModel`. Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it be combined with another variable or will it be split apart into word combinations? The recipe provides answers to these questions.

You can’t change an existing datasource, but you can copy and modify the settings from an existing Amazon Redshift datasource to create a new datasource. To do so, call `GetDataSource` for an existing datasource and copy the values to a `CreateDataSource` call. Change the settings that you want to change and make sure that all required fields have the appropriate values.

Examples:

Request syntax with placeholder values


resp = client.create_data_source_from_redshift({
  data_source_id: "EntityId", # required
  data_source_name: "EntityName",
  data_spec: { # required
    database_information: { # required
      database_name: "RedshiftDatabaseName", # required
      cluster_identifier: "RedshiftClusterIdentifier", # required
    },
    select_sql_query: "RedshiftSelectSqlQuery", # required
    database_credentials: { # required
      username: "RedshiftDatabaseUsername", # required
      password: "RedshiftDatabasePassword", # required
    },
    s3_staging_location: "S3Url", # required
    data_rearrangement: "DataRearrangement",
    data_schema: "DataSchema",
    data_schema_uri: "S3Url",
  },
  role_arn: "RoleARN", # required
  compute_statistics: false,
})

Response structure


resp.data_source_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :data_source_id (required, String)

    A user-supplied ID that uniquely identifies the `DataSource`.

  • :data_source_name (String)

    A user-supplied name or description of the `DataSource`.

  • :data_spec (required, Types::RedshiftDataSpec)

    The data specification of an Amazon Redshift `DataSource`:

    • DatabaseInformation -

      • `DatabaseName` - The name of the Amazon Redshift database.

      • `ClusterIdentifier` - The unique ID for the Amazon Redshift cluster.

    • DatabaseCredentials - The AWS Identity and Access Management (IAM) credentials that are used to connect to the Amazon Redshift database.

    • SelectSqlQuery - The query that is used to retrieve the observation data for the `Datasource`.

    • S3StagingLocation - The Amazon Simple Storage Service (Amazon S3) location for staging Amazon Redshift data. The data retrieved from Amazon Redshift using the `SelectSqlQuery` query is stored in this location.

    • DataSchemaUri - The Amazon S3 location of the `DataSchema`.

    • DataSchema - A JSON string representing the schema. This is not required if `DataSchemaUri` is specified.

    • DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the `DataSource`.

      Sample - `"{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"`

  • :role_arn (required, String)

    A fully specified role Amazon Resource Name (ARN). Amazon ML assumes the role on behalf of the user to create the following:

    • A security group to allow Amazon ML to execute the `SelectSqlQuery` query on an Amazon Redshift cluster

    • An Amazon S3 bucket policy to grant Amazon ML read/write permissions on the `S3StagingLocation`

  • :compute_statistics (Boolean)

    The compute statistics for a `DataSource`. The statistics are generated from the observation data referenced by a `DataSource`. Amazon ML uses the statistics internally during `MLModel` training. This parameter must be set to `true` if the `DataSource` needs to be used for `MLModel` training.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 842

def create_data_source_from_redshift(params = {}, options = {})
  req = build_request(:create_data_source_from_redshift, params)
  req.send_request(options)
end

#create_data_source_from_s3(params = {}) ⇒ Types::CreateDataSourceFromS3Output

Creates a `DataSource` object. A `DataSource` references data that can be used to perform `CreateMLModel`, `CreateEvaluation`, or `CreateBatchPrediction` operations.

`CreateDataSourceFromS3` is an asynchronous operation. In response to `CreateDataSourceFromS3`, Amazon Machine Learning (Amazon ML) immediately returns and sets the `DataSource` status to `PENDING`. After the `DataSource` has been created and is ready for use, Amazon ML sets the `Status` parameter to `COMPLETED`. `DataSource` in the `COMPLETED` or `PENDING` state can be used to perform only `CreateMLModel`, `CreateEvaluation` or `CreateBatchPrediction` operations.

If Amazon ML can’t accept the input source, it sets the `Status` parameter to `FAILED` and includes an error message in the `Message` attribute of the `GetDataSource` operation response.

The observation data used in a `DataSource` should be ready to use; that is, it should have a consistent structure, and missing data values should be kept to a minimum. The observation data must reside in one or more .csv files in an Amazon Simple Storage Service (Amazon S3) location, along with a schema that describes the data items by name and type. The same schema must be used for all of the data files referenced by the `DataSource`.

After the `DataSource` has been created, it’s ready to use in evaluations and batch predictions. If you plan to use the `DataSource` to train an `MLModel`, the `DataSource` also needs a recipe. A recipe describes how each input variable will be used in training an `MLModel`. Will the variable be included or excluded from training? Will the variable be manipulated; for example, will it be combined with another variable or will it be split apart into word combinations? The recipe provides answers to these questions.

Examples:

Request syntax with placeholder values


resp = client.create_data_source_from_s3({
  data_source_id: "EntityId", # required
  data_source_name: "EntityName",
  data_spec: { # required
    data_location_s3: "S3Url", # required
    data_rearrangement: "DataRearrangement",
    data_schema: "DataSchema",
    data_schema_location_s3: "S3Url",
  },
  compute_statistics: false,
})

Response structure


resp.data_source_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :data_source_id (required, String)

    A user-supplied identifier that uniquely identifies the `DataSource`.

  • :data_source_name (String)

    A user-supplied name or description of the `DataSource`.

  • :data_spec (required, Types::S3DataSpec)

    The data specification of a `DataSource`:

    • DataLocationS3 - The Amazon S3 location of the observation data.

    • DataSchemaLocationS3 - The Amazon S3 location of the `DataSchema`.

    • DataSchema - A JSON string representing the schema. This is not required if `DataSchemaUri` is specified.

    • DataRearrangement - A JSON string that represents the splitting and rearrangement requirements for the `Datasource`.

      Sample - `"{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}"`

  • :compute_statistics (Boolean)

    The compute statistics for a `DataSource`. The statistics are generated from the observation data referenced by a `DataSource`. Amazon ML uses the statistics internally during `MLModel` training. This parameter must be set to `true` if the `DataSource` needs to be used for `MLModel` training.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 935

def create_data_source_from_s3(params = {}, options = {})
  req = build_request(:create_data_source_from_s3, params)
  req.send_request(options)
end
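
A hedged sketch of an S3-backed `DataSource` that applies the `DataRearrangement` splitting sample shown above and computes statistics so the `DataSource` can later train an `MLModel`; the IDs, bucket, and schema location are hypothetical:

resp = client.create_data_source_from_s3({
  data_source_id: "example-s3-ds-id",
  data_source_name: "Holiday mailer observations",
  data_spec: {
    data_location_s3: "s3://example-bucket/observations.csv",
    data_schema_location_s3: "s3://example-bucket/observations.csv.schema",
    data_rearrangement: "{\"splitting\":{\"percentBegin\":10,\"percentEnd\":60}}",
  },
  compute_statistics: true, # required when the DataSource will be used for MLModel training
})

resp.data_source_id #=> "example-s3-ds-id"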

#create_evaluation(params = {}) ⇒ Types::CreateEvaluationOutput

Creates a new `Evaluation` of an `MLModel`. An `MLModel` is evaluated on a set of observations associated to a `DataSource`. Like a `DataSource` for an `MLModel`, the `DataSource` for an `Evaluation` contains values for the `Target Variable`. The `Evaluation` compares the predicted result for each observation to the actual outcome and provides a summary so that you know how effective the `MLModel` functions on the test data. Evaluation generates a relevant performance metric, such as BinaryAUC, RegressionRMSE or MulticlassAvgFScore based on the corresponding `MLModelType`: `BINARY`, `REGRESSION` or `MULTICLASS`.

`CreateEvaluation` is an asynchronous operation. In response to `CreateEvaluation`, Amazon Machine Learning (Amazon ML) immediately returns and sets the evaluation status to `PENDING`. After the `Evaluation` is created and ready for use, Amazon ML sets the status to `COMPLETED`.

You can use the `GetEvaluation` operation to check progress of the evaluation during the creation operation.

Examples:

Request syntax with placeholder values


resp = client.create_evaluation({
  evaluation_id: "EntityId", # required
  evaluation_name: "EntityName",
  ml_model_id: "EntityId", # required
  evaluation_data_source_id: "EntityId", # required
})

Response structure


resp.evaluation_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :evaluation_id (required, String)

    A user-supplied ID that uniquely identifies the `Evaluation`.

  • :evaluation_name (String)

    A user-supplied name or description of the `Evaluation`.

  • :ml_model_id (required, String)

    The ID of the `MLModel` to evaluate.

    The schema used in creating the `MLModel` must match the schema of the `DataSource` used in the `Evaluation`.

  • :evaluation_data_source_id (required, String)

    The ID of the `DataSource` for the evaluation. The schema of the `DataSource` must match the schema used to create the `MLModel`.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 995

def create_evaluation(params = {}, options = {})
  req = build_request(:create_evaluation, params)
  req.send_request(options)
end

#create_ml_model(params = {}) ⇒ Types::CreateMLModelOutput

Creates a new `MLModel` using the `DataSource` and the recipe as information sources.

An `MLModel` is nearly immutable. Users can update only the `MLModelName` and the `ScoreThreshold` in an `MLModel` without creating a new `MLModel`.

`CreateMLModel` is an asynchronous operation. In response to `CreateMLModel`, Amazon Machine Learning (Amazon ML) immediately returns and sets the `MLModel` status to `PENDING`. After the `MLModel` has been created and is ready for use, Amazon ML sets the status to `COMPLETED`.

You can use the `GetMLModel` operation to check the progress of the `MLModel` during the creation operation.

`CreateMLModel` requires a `DataSource` with computed statistics, which can be created by setting `ComputeStatistics` to `true` in `CreateDataSourceFromRDS`, `CreateDataSourceFromS3`, or `CreateDataSourceFromRedshift` operations.

Examples:

Request syntax with placeholder values


resp = client.create_ml_model({
  ml_model_id: "EntityId", # required
  ml_model_name: "EntityName",
  ml_model_type: "REGRESSION", # required, accepts REGRESSION, BINARY, MULTICLASS
  parameters: {
    "StringType" => "StringType",
  },
  training_data_source_id: "EntityId", # required
  recipe: "Recipe",
  recipe_uri: "S3Url",
})

Response structure


resp.ml_model_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :ml_model_id (required, String)

    A user-supplied ID that uniquely identifies the `MLModel`.

  • :ml_model_name (String)

    A user-supplied name or description of the `MLModel`.

  • :ml_model_type (required, String)

    The category of supervised learning that this `MLModel` will address. Choose from the following types:

    • Choose `REGRESSION` if the `MLModel` will be used to predict a numeric value.

    • Choose `BINARY` if the `MLModel` result has two possible values.

    • Choose `MULTICLASS` if the `MLModel` result has a limited number of values.

    For more information, see the [Amazon Machine Learning Developer Guide].

    [1]: docs.aws.amazon.com/machine-learning/latest/dg

  • :parameters (Hash<String,String>)

    A list of the training parameters in the `MLModel`. The list is implemented as a map of key-value pairs.

    The following is the current set of training parameters:

    • `sgd.maxMLModelSizeInBytes` - The maximum allowed size of the model. Depending on the input data, the size of the model might affect its performance.

      The value is an integer that ranges from `100000` to `2147483648`. The default value is `33554432`.

    • `sgd.maxPasses` - The number of times that the training process traverses the observations to build the `MLModel`. The value is an integer that ranges from `1` to `10000`. The default value is `10`.

    • `sgd.shuffleType` - Whether Amazon ML shuffles the training data. Shuffling the data improves a model’s ability to find the optimal solution for a variety of data types. The valid values are `auto` and `none`. The default value is `none`. We strongly recommend that you shuffle your data.

    • `sgd.l1RegularizationAmount` - The coefficient regularization L1 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to zero, resulting in a sparse feature set. If you use this parameter, start by specifying a small value, such as `1.0E-08`.

      The value is a double that ranges from `0` to `MAX_DOUBLE`. The default is to not use L1 normalization. This parameter can’t be used when `L2` is specified. Use this parameter sparingly.

    • `sgd.l2RegularizationAmount` - The coefficient regularization L2 norm. It controls overfitting the data by penalizing large coefficients. This tends to drive coefficients to small, nonzero values. If you use this parameter, start by specifying a small value, such as `1.0E-08`.

      The value is a double that ranges from `0` to `MAX_DOUBLE`. The default is to not use L2 normalization. This parameter can’t be used when `L1` is specified. Use this parameter sparingly.

  • :training_data_source_id (required, String)

    The `DataSource` that points to the training data.

  • :recipe (String)

    The data recipe for creating the `MLModel`. You must specify either the recipe or its URI. If you don’t specify a recipe or its URI, Amazon ML creates a default.

  • :recipe_uri (String)

    The Amazon Simple Storage Service (Amazon S3) location and file name that contains the `MLModel` recipe. You must specify either the recipe or its URI. If you don’t specify a recipe or its URI, Amazon ML creates a default.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 1127

def create_ml_model(params = {}, options = {})
  req = build_request(:create_ml_model, params)
  req.send_request(options)
end
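
A hedged sketch that ties together a few of the training parameters documented above; the IDs are hypothetical and the referenced `DataSource` is assumed to have been created with `ComputeStatistics` set to `true`:

resp = client.create_ml_model({
  ml_model_id: "example-model-id",
  ml_model_name: "Holiday mailer response model",
  ml_model_type: "BINARY",
  parameters: {
    "sgd.maxPasses"              => "20",
    "sgd.shuffleType"            => "auto",
    "sgd.l2RegularizationAmount" => "1.0E-08",
  },
  training_data_source_id: "example-s3-ds-id",
})

resp.ml_model_id #=> "example-model-id"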

#create_realtime_endpoint(params = {}) ⇒ Types::CreateRealtimeEndpointOutput

Creates a real-time endpoint for the `MLModel`. The endpoint contains the URI of the `MLModel`; that is, the location to send real-time prediction requests for the specified `MLModel`.

Examples:

Request syntax with placeholder values


resp = client.create_realtime_endpoint({
  ml_model_id: "EntityId", # required
})

Response structure


resp.ml_model_id #=> String
resp.realtime_endpoint_info.peak_requests_per_second #=> Integer
resp.realtime_endpoint_info.created_at #=> Time
resp.realtime_endpoint_info.endpoint_url #=> String
resp.realtime_endpoint_info.endpoint_status #=> String, one of "NONE", "READY", "UPDATING", "FAILED"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :ml_model_id (required, String)

    The ID assigned to the `MLModel` during creation.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 1160

def create_realtime_endpoint(params = {}, options = {})
  req = build_request(:create_realtime_endpoint, params)
  req.send_request(options)
end
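
A hedged sketch, using a hypothetical model ID, of requesting a real-time endpoint and inspecting the returned endpoint information:

resp = client.create_realtime_endpoint(ml_model_id: "example-model-id")

info = resp.realtime_endpoint_info
info.endpoint_status #=> one of "NONE", "READY", "UPDATING", "FAILED"
info.endpoint_url    #=> the URI to send real-time prediction requests to, once ready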

#delete_batch_prediction(params = {}) ⇒ Types::DeleteBatchPredictionOutput

Assigns the DELETED status to a `BatchPrediction`, rendering it unusable.

After using the `DeleteBatchPrediction` operation, you can use the GetBatchPrediction operation to verify that the status of the `BatchPrediction` changed to DELETED.

Caution: The result of the `DeleteBatchPrediction` operation is irreversible.

Examples:

Request syntax with placeholder values


resp = client.delete_batch_prediction({
  batch_prediction_id: "EntityId", # required
})

Response structure


resp.batch_prediction_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :batch_prediction_id (required, String)

    A user-supplied ID that uniquely identifies the `BatchPrediction`.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 1194

def delete_batch_prediction(params = {}, options = {})
  req = build_request(:delete_batch_prediction, params)
  req.send_request(options)
end

#delete_data_source(params = {}) ⇒ Types::DeleteDataSourceOutput

Assigns the DELETED status to a `DataSource`, rendering it unusable.

After using the `DeleteDataSource` operation, you can use the GetDataSource operation to verify that the status of the `DataSource` changed to DELETED.

Caution: The results of the `DeleteDataSource` operation are irreversible.

Examples:

Request syntax with placeholder values


resp = client.delete_data_source({
  data_source_id: "EntityId", # required
})

Response structure


resp.data_source_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :data_source_id (required, String)

    A user-supplied ID that uniquely identifies the `DataSource`.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 1227

def delete_data_source(params = {}, options = {})
  req = build_request(:delete_data_source, params)
  req.send_request(options)
end

#delete_evaluation(params = {}) ⇒ Types::DeleteEvaluationOutput

Assigns the `DELETED` status to an `Evaluation`, rendering it unusable.

After invoking the `DeleteEvaluation` operation, you can use the `GetEvaluation` operation to verify that the status of the `Evaluation` changed to `DELETED`.

Caution: The results of the `DeleteEvaluation` operation are irreversible.

Examples:

Request syntax with placeholder values


resp = client.delete_evaluation({
  evaluation_id: "EntityId", # required
})

Response structure


resp.evaluation_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :evaluation_id (required, String)

    A user-supplied ID that uniquely identifies the `Evaluation` to delete.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 1262

def delete_evaluation(params = {}, options = {})
  req = build_request(:delete_evaluation, params)
  req.send_request(options)
end

#delete_ml_model(params = {}) ⇒ Types::DeleteMLModelOutput

Assigns the `DELETED` status to an `MLModel`, rendering it unusable.

After using the `DeleteMLModel` operation, you can use the `GetMLModel` operation to verify that the status of the `MLModel` changed to DELETED.

Caution: The result of the `DeleteMLModel` operation is irreversible.

Examples:

Request syntax with placeholder values


resp = client.delete_ml_model({
  ml_model_id: "EntityId", # required
})

Response structure


resp.ml_model_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :ml_model_id (required, String)

    A user-supplied ID that uniquely identifies the `MLModel`.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 1295

def delete_ml_model(params = {}, options = {})
  req = build_request(:delete_ml_model, params)
  req.send_request(options)
end

#delete_realtime_endpoint(params = {}) ⇒ Types::DeleteRealtimeEndpointOutput

Deletes a real time endpoint of an `MLModel`.

Examples:

Request syntax with placeholder values


resp = client.delete_realtime_endpoint({
  ml_model_id: "EntityId", # required
})

Response structure


resp.ml_model_id #=> String
resp.realtime_endpoint_info.peak_requests_per_second #=> Integer
resp.realtime_endpoint_info.created_at #=> Time
resp.realtime_endpoint_info.endpoint_url #=> String
resp.realtime_endpoint_info.endpoint_status #=> String, one of "NONE", "READY", "UPDATING", "FAILED"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :ml_model_id (required, String)

    The ID assigned to the `MLModel` during creation.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 1326

def delete_realtime_endpoint(params = {}, options = {})
  req = build_request(:delete_realtime_endpoint, params)
  req.send_request(options)
end

#delete_tags(params = {}) ⇒ Types::DeleteTagsOutput

Deletes the specified tags associated with an ML object. After this operation is complete, you can’t recover deleted tags.

If you specify a tag that doesn’t exist, Amazon ML ignores it.

Examples:

Request syntax with placeholder values


resp = client.delete_tags({
  tag_keys: ["TagKey"], # required
  resource_id: "EntityId", # required
  resource_type: "BatchPrediction", # required, accepts BatchPrediction, DataSource, Evaluation, MLModel
})

Response structure


resp.resource_id #=> String
resp.resource_type #=> String, one of "BatchPrediction", "DataSource", "Evaluation", "MLModel"

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :tag_keys (required, Array<String>)

    One or more tags to delete.

  • :resource_id (required, String)

    The ID of the tagged ML object. For example, `exampleModelId`.

  • :resource_type (required, String)

    The type of the tagged ML object.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 1365

def delete_tags(params = {}, options = {})
  req = build_request(:delete_tags, params)
  req.send_request(options)
end

#describe_batch_predictions(params = {}) ⇒ Types::DescribeBatchPredictionsOutput

Returns a list of `BatchPrediction` operations that match the search criteria in the request.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

The following waiters are defined for this operation (see #wait_until for detailed usage):

* batch_prediction_available

Examples:

Request syntax with placeholder values


resp = client.describe_batch_predictions({
  filter_variable: "CreatedAt", # accepts CreatedAt, LastUpdatedAt, Status, Name, IAMUser, MLModelId, DataSourceId, DataURI
  eq: "ComparatorValue",
  gt: "ComparatorValue",
  lt: "ComparatorValue",
  ge: "ComparatorValue",
  le: "ComparatorValue",
  ne: "ComparatorValue",
  prefix: "ComparatorValue",
  sort_order: "asc", # accepts asc, dsc
  next_token: "StringType",
  limit: 1,
})

Response structure


resp.results #=> Array
resp.results[0].batch_prediction_id #=> String
resp.results[0].ml_model_id #=> String
resp.results[0].batch_prediction_data_source_id #=> String
resp.results[0].input_data_location_s3 #=> String
resp.results[0].created_by_iam_user #=> String
resp.results[0].created_at #=> Time
resp.results[0].last_updated_at #=> Time
resp.results[0].name #=> String
resp.results[0].status #=> String, one of "PENDING", "INPROGRESS", "FAILED", "COMPLETED", "DELETED"
resp.results[0].output_uri #=> String
resp.results[0].message #=> String
resp.results[0].compute_time #=> Integer
resp.results[0].finished_at #=> Time
resp.results[0].started_at #=> Time
resp.results[0].total_record_count #=> Integer
resp.results[0].invalid_record_count #=> Integer
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :filter_variable (String)

    Use one of the following variables to filter a list of `BatchPrediction`:

    • `CreatedAt` - Sets the search criteria to the `BatchPrediction` creation date.

    • `Status` - Sets the search criteria to the `BatchPrediction` status.

    • `Name` - Sets the search criteria to the contents of the `BatchPrediction` `Name`.

    • `IAMUser` - Sets the search criteria to the user account that invoked the `BatchPrediction` creation.

    • `MLModelId` - Sets the search criteria to the `MLModel` used in the `BatchPrediction`.

    • `DataSourceId` - Sets the search criteria to the `DataSource` used in the `BatchPrediction`.

    • `DataURI` - Sets the search criteria to the data file(s) used in the `BatchPrediction`. The URL can identify either a file or an Amazon Simple Storage Solution (Amazon S3) bucket or directory.

  • :eq (String)

    The equal to operator. The `BatchPrediction` results will have `FilterVariable` values that exactly match the value specified with `EQ`.

  • :gt (String)

    The greater than operator. The `BatchPrediction` results will have `FilterVariable` values that are greater than the value specified with `GT`.

  • :lt (String)

    The less than operator. The `BatchPrediction` results will have `FilterVariable` values that are less than the value specified with `LT`.

  • :ge (String)

    The greater than or equal to operator. The `BatchPrediction` results will have `FilterVariable` values that are greater than or equal to the value specified with `GE`.

  • :le (String)

    The less than or equal to operator. The `BatchPrediction` results will have `FilterVariable` values that are less than or equal to the value specified with `LE`.

  • :ne (String)

    The not equal to operator. The `BatchPrediction` results will have `FilterVariable` values not equal to the value specified with `NE`.

  • :prefix (String)

    A string that is found at the beginning of a variable, such as `Name` or `Id`.

    For example, a `Batch Prediction` operation could have the `Name` `2014-09-09-HolidayGiftMailer`. To search for this `BatchPrediction`, select `Name` for the `FilterVariable` and any of the following strings for the `Prefix`:

    • 2014-09

    • 2014-09-09

    • 2014-09-09-Holiday

  • :sort_order (String)

    A two-value parameter that determines the sequence of the resulting list of `MLModel`s.

    • `asc` - Arranges the list in ascending order (A-Z, 0-9).

    • `dsc` - Arranges the list in descending order (Z-A, 9-0).

    Results are sorted by `FilterVariable`.

  • :next_token (String)

    An ID of the page in the paginated results.

  • :limit (Integer)

    The number of pages of information to include in the result. The range of acceptable values is `1` through `100`. The default value is `100`.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 1510

def describe_batch_predictions(params = {}, options = {})
  req = build_request(:describe_batch_predictions, params)
  req.send_request(options)
end
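
Because the response is pageable and Enumerable, results can be consumed without manual `next_token` handling; the `batch_prediction_available` waiter mentioned above can also be driven through `#wait_until`. A hedged sketch with hypothetical filter values:

# Enumerate every page of COMPLETED batch predictions.
client.describe_batch_predictions(filter_variable: "Status", eq: "COMPLETED").each do |page|
  page.results.each do |bp|
    puts "#{bp.batch_prediction_id} -> #{bp.output_uri}"
  end
end

# Block until batch predictions for a hypothetical model become available.
client.wait_until(:batch_prediction_available, filter_variable: "MLModelId", eq: "example-model-id")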

#describe_data_sources(params = {}) ⇒ Types::DescribeDataSourcesOutput

Returns a list of `DataSource` that match the search criteria in the request.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

The following waiters are defined for this operation (see #wait_until for detailed usage):

* data_source_available

Examples:

Request syntax with placeholder values


resp = client.describe_data_sources({
  filter_variable: "CreatedAt", # accepts CreatedAt, LastUpdatedAt, Status, Name, DataLocationS3, IAMUser
  eq: "ComparatorValue",
  gt: "ComparatorValue",
  lt: "ComparatorValue",
  ge: "ComparatorValue",
  le: "ComparatorValue",
  ne: "ComparatorValue",
  prefix: "ComparatorValue",
  sort_order: "asc", # accepts asc, dsc
  next_token: "StringType",
  limit: 1,
})

Response structure


resp.results #=> Array
resp.results[0].data_source_id #=> String
resp.results[0].data_location_s3 #=> String
resp.results[0].data_rearrangement #=> String
resp.results[0].created_by_iam_user #=> String
resp.results[0].created_at #=> Time
resp.results[0].last_updated_at #=> Time
resp.results[0].data_size_in_bytes #=> Integer
resp.results[0].number_of_files #=> Integer
resp.results[0].name #=> String
resp.results[0].status #=> String, one of "PENDING", "INPROGRESS", "FAILED", "COMPLETED", "DELETED"
resp.results[0].message #=> String
resp.results[0].redshift_metadata.redshift_database.database_name #=> String
resp.results[0].redshift_metadata.redshift_database.cluster_identifier #=> String
resp.results[0].redshift_metadata.database_user_name #=> String
resp.results[0].redshift_metadata.select_sql_query #=> String
resp.results[0].rds_metadata.database.instance_identifier #=> String
resp.results[0].rds_metadata.database.database_name #=> String
resp.results[0].rds_metadata.database_user_name #=> String
resp.results[0].rds_metadata.select_sql_query #=> String
resp.results[0].rds_metadata.resource_role #=> String
resp.results[0].rds_metadata.service_role #=> String
resp.results[0].rds_metadata.data_pipeline_id #=> String
resp.results[0].role_arn #=> String
resp.results[0].compute_statistics #=> Boolean
resp.results[0].compute_time #=> Integer
resp.results[0].finished_at #=> Time
resp.results[0].started_at #=> Time
resp.next_token #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :filter_variable (String)

    Use one of the following variables to filter a list of ‘DataSource`:

    • ‘CreatedAt` - Sets the search criteria to `DataSource` creation dates.

    • ‘Status` - Sets the search criteria to `DataSource` statuses.

    • ‘Name` - Sets the search criteria to the contents of `DataSource` `Name`.

    • ‘DataUri` - Sets the search criteria to the URI of data files used to create the `DataSource`. The URI can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.

    • ‘IAMUser` - Sets the search criteria to the user account that invoked the `DataSource` creation.

  • :eq (String)

    The equal to operator. The ‘DataSource` results will have `FilterVariable` values that exactly match the value specified with `EQ`.

  • :gt (String)

    The greater than operator. The ‘DataSource` results will have `FilterVariable` values that are greater than the value specified with `GT`.

  • :lt (String)

    The less than operator. The ‘DataSource` results will have `FilterVariable` values that are less than the value specified with `LT`.

  • :ge (String)

    The greater than or equal to operator. The ‘DataSource` results will have `FilterVariable` values that are greater than or equal to the value specified with `GE`.

  • :le (String)

    The less than or equal to operator. The ‘DataSource` results will have `FilterVariable` values that are less than or equal to the value specified with `LE`.

  • :ne (String)

    The not equal to operator. The ‘DataSource` results will have `FilterVariable` values not equal to the value specified with `NE`.

  • :prefix (String)

    A string that is found at the beginning of a variable, such as ‘Name` or `Id`.

    For example, a ‘DataSource` could have the `Name` `2014-09-09-HolidayGiftMailer`. To search for this `DataSource`, select `Name` for the `FilterVariable` and any of the following strings for the `Prefix`:

    • 2014-09

    • 2014-09-09

    • 2014-09-09-Holiday

  • :sort_order (String)

    A two-value parameter that determines the sequence of the resulting list of ‘DataSource`.

    • ‘asc` - Arranges the list in ascending order (A-Z, 0-9).

    • ‘dsc` - Arranges the list in descending order (Z-A, 9-0).

    Results are sorted by ‘FilterVariable`.

  • :next_token (String)

    The ID of the page in the paginated results.

  • :limit (Integer)

    The maximum number of ‘DataSource` to include in the result.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 1658

def describe_data_sources(params = {}, options = {})
  req = build_request(:describe_data_sources, params)
  req.send_request(options)
end

#describe_evaluations(params = {}) ⇒ Types::DescribeEvaluationsOutput

Returns a list of ‘Evaluation` objects that match the search criteria in the request.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

The following waiters are defined for this operation (see #wait_until for detailed usage):

* evaluation_available

Examples:

Request syntax with placeholder values


resp = client.describe_evaluations({
  filter_variable: "CreatedAt", # accepts CreatedAt, LastUpdatedAt, Status, Name, IAMUser, MLModelId, DataSourceId, DataURI
  eq: "ComparatorValue",
  gt: "ComparatorValue",
  lt: "ComparatorValue",
  ge: "ComparatorValue",
  le: "ComparatorValue",
  ne: "ComparatorValue",
  prefix: "ComparatorValue",
  sort_order: "asc", # accepts asc, dsc
  next_token: "StringType",
  limit: 1,
})

Response structure


resp.results #=> Array
resp.results[0].evaluation_id #=> String
resp.results[0].ml_model_id #=> String
resp.results[0].evaluation_data_source_id #=> String
resp.results[0].input_data_location_s3 #=> String
resp.results[0].created_by_iam_user #=> String
resp.results[0].created_at #=> Time
resp.results[0].last_updated_at #=> Time
resp.results[0].name #=> String
resp.results[0].status #=> String, one of "PENDING", "INPROGRESS", "FAILED", "COMPLETED", "DELETED"
resp.results[0].performance_metrics.properties #=> Hash
resp.results[0].performance_metrics.properties["PerformanceMetricsPropertyKey"] #=> String
resp.results[0].message #=> String
resp.results[0].compute_time #=> Integer
resp.results[0].finished_at #=> Time
resp.results[0].started_at #=> Time
resp.next_token #=> String
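
Waiting for matching evaluations with the ‘evaluation_available` waiter (a sketch; the parameters are the same ones accepted by this operation, and the prefix value is a placeholder):


begin
  client.wait_until(:evaluation_available, filter_variable: "Name", prefix: "2014-09")
rescue Aws::Waiters::Errors::WaiterFailed
  # the evaluations did not become available in time
end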

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :filter_variable (String)

    Use one of the following variables to filter a list of ‘Evaluation` objects:

    • ‘CreatedAt` - Sets the search criteria to the `Evaluation` creation date.

    • ‘Status` - Sets the search criteria to the `Evaluation` status.

    • ‘Name` - Sets the search criteria to the contents of `Evaluation` `Name`.

    • ‘IAMUser` - Sets the search criteria to the user account that invoked an `Evaluation`.

    • ‘MLModelId` - Sets the search criteria to the `MLModel` that was evaluated.

    • ‘DataSourceId` - Sets the search criteria to the `DataSource` used in `Evaluation`.

    • ‘DataUri` - Sets the search criteria to the data file(s) used in `Evaluation`. The URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.

  • :eq (String)

    The equal to operator. The ‘Evaluation` results will have `FilterVariable` values that exactly match the value specified with `EQ`.

  • :gt (String)

    The greater than operator. The ‘Evaluation` results will have `FilterVariable` values that are greater than the value specified with `GT`.

  • :lt (String)

    The less than operator. The ‘Evaluation` results will have `FilterVariable` values that are less than the value specified with `LT`.

  • :ge (String)

    The greater than or equal to operator. The ‘Evaluation` results will have `FilterVariable` values that are greater than or equal to the value specified with `GE`.

  • :le (String)

    The less than or equal to operator. The ‘Evaluation` results will have `FilterVariable` values that are less than or equal to the value specified with `LE`.

  • :ne (String)

    The not equal to operator. The ‘Evaluation` results will have `FilterVariable` values not equal to the value specified with `NE`.

  • :prefix (String)

    A string that is found at the beginning of a variable, such as ‘Name` or `Id`.

    For example, an ‘Evaluation` could have the `Name` `2014-09-09-HolidayGiftMailer`. To search for this `Evaluation`, select `Name` for the `FilterVariable` and any of the following strings for the `Prefix`:

    • 2014-09

    • 2014-09-09

    • 2014-09-09-Holiday

  • :sort_order (String)

    A two-value parameter that determines the sequence of the resulting list of ‘Evaluation`.

    • ‘asc` - Arranges the list in ascending order (A-Z, 0-9).

    • ‘dsc` - Arranges the list in descending order (Z-A, 9-0).

    Results are sorted by ‘FilterVariable`.

  • :next_token (String)

    The ID of the page in the paginated results.

  • :limit (Integer)

    The maximum number of ‘Evaluation` to include in the result.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 1801

def describe_evaluations(params = {}, options = {})
  req = build_request(:describe_evaluations, params)
  req.send_request(options)
end

#describe_ml_models(params = {}) ⇒ Types::DescribeMLModelsOutput

Returns a list of ‘MLModel` objects that match the search criteria in the request.

The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.

The following waiters are defined for this operation (see #wait_until for detailed usage):

* ml_model_available

Examples:

Request syntax with placeholder values


resp = client.describe_ml_models({
  filter_variable: "CreatedAt", # accepts CreatedAt, LastUpdatedAt, Status, Name, IAMUser, TrainingDataSourceId, RealtimeEndpointStatus, MLModelType, Algorithm, TrainingDataURI
  eq: "ComparatorValue",
  gt: "ComparatorValue",
  lt: "ComparatorValue",
  ge: "ComparatorValue",
  le: "ComparatorValue",
  ne: "ComparatorValue",
  prefix: "ComparatorValue",
  sort_order: "asc", # accepts asc, dsc
  next_token: "StringType",
  limit: 1,
})

Response structure


resp.results #=> Array
resp.results[0].ml_model_id #=> String
resp.results[0].training_data_source_id #=> String
resp.results[0].created_by_iam_user #=> String
resp.results[0].created_at #=> Time
resp.results[0].last_updated_at #=> Time
resp.results[0].name #=> String
resp.results[0].status #=> String, one of "PENDING", "INPROGRESS", "FAILED", "COMPLETED", "DELETED"
resp.results[0].size_in_bytes #=> Integer
resp.results[0].endpoint_info.peak_requests_per_second #=> Integer
resp.results[0].endpoint_info.created_at #=> Time
resp.results[0].endpoint_info.endpoint_url #=> String
resp.results[0].endpoint_info.endpoint_status #=> String, one of "NONE", "READY", "UPDATING", "FAILED"
resp.results[0].training_parameters #=> Hash
resp.results[0].training_parameters["StringType"] #=> String
resp.results[0].input_data_location_s3 #=> String
resp.results[0].algorithm #=> String, one of "sgd"
resp.results[0].ml_model_type #=> String, one of "REGRESSION", "BINARY", "MULTICLASS"
resp.results[0].score_threshold #=> Float
resp.results[0].score_threshold_last_updated_at #=> Time
resp.results[0].message #=> String
resp.results[0].compute_time #=> Integer
resp.results[0].finished_at #=> Time
resp.results[0].started_at #=> Time
resp.next_token #=> String
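
Filtering by model type with the ‘EQ` comparator (a minimal sketch; the printed fields are illustrative):


resp = client.describe_ml_models({
  filter_variable: "MLModelType",
  eq: "BINARY",
  sort_order: "dsc",
  limit: 25,
})
resp.results.each { |m| puts "#{m.ml_model_id}: #{m.name} (#{m.status})" }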

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :filter_variable (String)

    Use one of the following variables to filter a list of ‘MLModel`:

    • ‘CreatedAt` - Sets the search criteria to `MLModel` creation date.

    • ‘Status` - Sets the search criteria to `MLModel` status.

    • ‘Name` - Sets the search criteria to the contents of `MLModel` `Name`.

    • ‘IAMUser` - Sets the search criteria to the user account that invoked the `MLModel` creation.

    • ‘TrainingDataSourceId` - Sets the search criteria to the `DataSource` used to train one or more `MLModel`.

    • ‘RealtimeEndpointStatus` - Sets the search criteria to the `MLModel` real-time endpoint status.

    • ‘MLModelType` - Sets the search criteria to `MLModel` type: binary, regression, or multi-class.

    • ‘Algorithm` - Sets the search criteria to the algorithm that the `MLModel` uses.

    • ‘TrainingDataURI` - Sets the search criteria to the data file(s) used in training an `MLModel`. The URL can identify either a file or an Amazon Simple Storage Service (Amazon S3) bucket or directory.

  • :eq (String)

    The equal to operator. The ‘MLModel` results will have `FilterVariable` values that exactly match the value specified with `EQ`.

  • :gt (String)

    The greater than operator. The ‘MLModel` results will have `FilterVariable` values that are greater than the value specified with `GT`.

  • :lt (String)

    The less than operator. The ‘MLModel` results will have `FilterVariable` values that are less than the value specified with `LT`.

  • :ge (String)

    The greater than or equal to operator. The ‘MLModel` results will have `FilterVariable` values that are greater than or equal to the value specified with `GE`.

  • :le (String)

    The less than or equal to operator. The ‘MLModel` results will have `FilterVariable` values that are less than or equal to the value specified with `LE`.

  • :ne (String)

    The not equal to operator. The ‘MLModel` results will have `FilterVariable` values not equal to the value specified with `NE`.

  • :prefix (String)

    A string that is found at the beginning of a variable, such as ‘Name` or `Id`.

    For example, an ‘MLModel` could have the `Name` `2014-09-09-HolidayGiftMailer`. To search for this `MLModel`, select `Name` for the `FilterVariable` and any of the following strings for the `Prefix`:

    • 2014-09

    • 2014-09-09

    • 2014-09-09-Holiday

  • :sort_order (String)

    A two-value parameter that determines the sequence of the resulting list of ‘MLModel`.

    • ‘asc` - Arranges the list in ascending order (A-Z, 0-9).

    • ‘dsc` - Arranges the list in descending order (Z-A, 9-0).

    Results are sorted by ‘FilterVariable`.

  • :next_token (String)

    The ID of the page in the paginated results.

  • :limit (Integer)

    The number of pages of information to include in the result. The range of acceptable values is ‘1` through `100`. The default value is `100`.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 1957

def describe_ml_models(params = {}, options = {})
  req = build_request(:describe_ml_models, params)
  req.send_request(options)
end

#describe_tags(params = {}) ⇒ Types::DescribeTagsOutput

Describes one or more of the tags for your Amazon ML object.

Examples:

Request syntax with placeholder values


resp = client.describe_tags({
  resource_id: "EntityId", # required
  resource_type: "BatchPrediction", # required, accepts BatchPrediction, DataSource, Evaluation, MLModel
})

Response structure


resp.resource_id #=> String
resp.resource_type #=> String, one of "BatchPrediction", "DataSource", "Evaluation", "MLModel"
resp.tags #=> Array
resp.tags[0].key #=> String
resp.tags[0].value #=> String
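
Collecting the returned tags into a plain Hash (a small usage sketch; the resource ID below is a placeholder):


resp = client.describe_tags({
  resource_id: "exampleModelId",
  resource_type: "MLModel",
})
tags = resp.tags.each_with_object({}) { |t, h| h[t.key] = t.value }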

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :resource_id (required, String)

    The ID of the ML object. For example, ‘exampleModelId`.

  • :resource_type (required, String)

    The type of the ML object.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 1993

def describe_tags(params = {}, options = {})
  req = build_request(:describe_tags, params)
  req.send_request(options)
end

#get_batch_prediction(params = {}) ⇒ Types::GetBatchPredictionOutput

Returns a ‘BatchPrediction` that includes detailed metadata, status, and data file information for a `Batch Prediction` request.

Examples:

Request syntax with placeholder values


resp = client.get_batch_prediction({
  batch_prediction_id: "EntityId", # required
})

Response structure


resp.batch_prediction_id #=> String
resp.ml_model_id #=> String
resp.batch_prediction_data_source_id #=> String
resp.input_data_location_s3 #=> String
resp.created_by_iam_user #=> String
resp.created_at #=> Time
resp.last_updated_at #=> Time
resp.name #=> String
resp.status #=> String, one of "PENDING", "INPROGRESS", "FAILED", "COMPLETED", "DELETED"
resp.output_uri #=> String
resp.log_uri #=> String
resp.message #=> String
resp.compute_time #=> Integer
resp.finished_at #=> Time
resp.started_at #=> Time
resp.total_record_count #=> Integer
resp.invalid_record_count #=> Integer
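
Reporting the share of invalid records for a finished job (a sketch; it assumes the record counts are populated for completed jobs, and the ID is a placeholder):


resp = client.get_batch_prediction(batch_prediction_id: "bp-exampleId")
if resp.status == "COMPLETED" && resp.total_record_count.to_i > 0
  pct = 100.0 * resp.invalid_record_count.to_i / resp.total_record_count
  puts format("%.1f%% of %d records were invalid", pct, resp.total_record_count)
end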

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :batch_prediction_id (required, String)

    An ID assigned to the ‘BatchPrediction` at creation.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 2052

def get_batch_prediction(params = {}, options = {})
  req = build_request(:get_batch_prediction, params)
  req.send_request(options)
end

#get_data_source(params = {}) ⇒ Types::GetDataSourceOutput

Returns a ‘DataSource` that includes metadata and data file information, as well as the current status of the `DataSource`.

‘GetDataSource` provides results in normal or verbose format. The verbose format adds the schema description and the list of files pointed to by the DataSource to the normal format.

Examples:

Request syntax with placeholder values


resp = client.get_data_source({
  data_source_id: "EntityId", # required
  verbose: false,
})

Response structure


resp.data_source_id #=> String
resp.data_location_s3 #=> String
resp.data_rearrangement #=> String
resp.created_by_iam_user #=> String
resp.created_at #=> Time
resp.last_updated_at #=> Time
resp.data_size_in_bytes #=> Integer
resp.number_of_files #=> Integer
resp.name #=> String
resp.status #=> String, one of "PENDING", "INPROGRESS", "FAILED", "COMPLETED", "DELETED"
resp.log_uri #=> String
resp.message #=> String
resp.redshift_metadata.redshift_database.database_name #=> String
resp.redshift_metadata.redshift_database.cluster_identifier #=> String
resp.redshift_metadata.database_user_name #=> String
resp.redshift_metadata.select_sql_query #=> String
resp.rds_metadata.database.instance_identifier #=> String
resp.rds_metadata.database.database_name #=> String
resp.rds_metadata.database_user_name #=> String
resp.rds_metadata.select_sql_query #=> String
resp.rds_metadata.resource_role #=> String
resp.rds_metadata.service_role #=> String
resp.rds_metadata.data_pipeline_id #=> String
resp.role_arn #=> String
resp.compute_statistics #=> Boolean
resp.compute_time #=> Integer
resp.finished_at #=> Time
resp.started_at #=> Time
resp.data_source_schema #=> String
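
Requesting the verbose form and inspecting the schema (a sketch; it assumes `DataSourceSchema` is a JSON string and uses a placeholder ID):


require "json"

resp = client.get_data_source(data_source_id: "ds-exampleId", verbose: true)
schema = JSON.parse(resp.data_source_schema) if resp.data_source_schema
puts schema.keys.inspect if schema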

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :data_source_id (required, String)

    The ID assigned to the ‘DataSource` at creation.

  • :verbose (Boolean)

    Specifies whether the ‘GetDataSource` operation should return `DataSourceSchema`.

    If true, ‘DataSourceSchema` is returned.

    If false, ‘DataSourceSchema` is not returned.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 2139

def get_data_source(params = {}, options = {})
  req = build_request(:get_data_source, params)
  req.send_request(options)
end

#get_evaluation(params = {}) ⇒ Types::GetEvaluationOutput

Returns an ‘Evaluation` that includes metadata as well as the current status of the `Evaluation`.

Examples:

Request syntax with placeholder values


resp = client.get_evaluation({
  evaluation_id: "EntityId", # required
})

Response structure


resp.evaluation_id #=> String
resp.ml_model_id #=> String
resp.evaluation_data_source_id #=> String
resp.input_data_location_s3 #=> String
resp.created_by_iam_user #=> String
resp.created_at #=> Time
resp.last_updated_at #=> Time
resp.name #=> String
resp.status #=> String, one of "PENDING", "INPROGRESS", "FAILED", "COMPLETED", "DELETED"
resp.performance_metrics.properties #=> Hash
resp.performance_metrics.properties["PerformanceMetricsPropertyKey"] #=> String
resp.log_uri #=> String
resp.message #=> String
resp.compute_time #=> Integer
resp.finished_at #=> Time
resp.started_at #=> Time

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :evaluation_id (required, String)

    The ID of the ‘Evaluation` to retrieve. The evaluation of each `MLModel` is recorded and cataloged. The ID provides the means to access the information.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 2197

def get_evaluation(params = {}, options = {})
  req = build_request(:get_evaluation, params)
  req.send_request(options)
end

#get_ml_model(params = {}) ⇒ Types::GetMLModelOutput

Returns an ‘MLModel` that includes detailed metadata, data source information, and the current status of the `MLModel`.

‘GetMLModel` provides results in normal or verbose format.

Examples:

Request syntax with placeholder values


resp = client.get_ml_model({
  ml_model_id: "EntityId", # required
  verbose: false,
})

Response structure


resp.ml_model_id #=> String
resp.training_data_source_id #=> String
resp.created_by_iam_user #=> String
resp.created_at #=> Time
resp.last_updated_at #=> Time
resp.name #=> String
resp.status #=> String, one of "PENDING", "INPROGRESS", "FAILED", "COMPLETED", "DELETED"
resp.size_in_bytes #=> Integer
resp.endpoint_info.peak_requests_per_second #=> Integer
resp.endpoint_info.created_at #=> Time
resp.endpoint_info.endpoint_url #=> String
resp.endpoint_info.endpoint_status #=> String, one of "NONE", "READY", "UPDATING", "FAILED"
resp.training_parameters #=> Hash
resp.training_parameters["StringType"] #=> String
resp.input_data_location_s3 #=> String
resp.ml_model_type #=> String, one of "REGRESSION", "BINARY", "MULTICLASS"
resp.score_threshold #=> Float
resp.score_threshold_last_updated_at #=> Time
resp.log_uri #=> String
resp.message #=> String
resp.compute_time #=> Integer
resp.finished_at #=> Time
resp.started_at #=> Time
resp.recipe #=> String
resp.schema #=> String
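
Checking that the model's real-time endpoint is ready before calling #predict (a minimal sketch with a placeholder ID):


resp = client.get_ml_model(ml_model_id: "ml-exampleId")
if resp.endpoint_info && resp.endpoint_info.endpoint_status == "READY"
  predict_endpoint = resp.endpoint_info.endpoint_url
  # pass predict_endpoint to #predict
end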

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :ml_model_id (required, String)

    The ID assigned to the ‘MLModel` at creation.

  • :verbose (Boolean)

    Specifies whether the ‘GetMLModel` operation should return `Recipe`.

    If true, ‘Recipe` is returned.

    If false, ‘Recipe` is not returned.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 2278

def get_ml_model(params = {}, options = {})
  req = build_request(:get_ml_model, params)
  req.send_request(options)
end

#predict(params = {}) ⇒ Types::PredictOutput

Generates a prediction for the observation using the specified ‘MLModel`.

Note: Not all response parameters will be populated. Whether a response parameter is populated depends on the type of model requested.

Examples:

Request syntax with placeholder values


resp = client.predict({
  ml_model_id: "EntityId", # required
  record: { # required
    "VariableName" => "VariableValue",
  },
  predict_endpoint: "VipURL", # required
})

Response structure


resp.prediction.predicted_label #=> String
resp.prediction.predicted_value #=> Float
resp.prediction.predicted_scores #=> Hash
resp.prediction.predicted_scores["Label"] #=> Float
resp.prediction.details #=> Hash
resp.prediction.details["DetailsAttributes"] #=> String
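
Interpreting the prediction by model type, as noted above (a sketch; the model ID, record keys, and endpoint lookup are placeholders):


model = client.get_ml_model(ml_model_id: "ml-exampleId")
resp = client.predict({
  ml_model_id: "ml-exampleId",
  record: { "Feature1" => "value1" },
  predict_endpoint: model.endpoint_info.endpoint_url,
})

case model.ml_model_type
when "BINARY", "MULTICLASS" then puts resp.prediction.predicted_label
when "REGRESSION"           then puts resp.prediction.predicted_value
end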

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :ml_model_id (required, String)

    A unique identifier of the ‘MLModel`.

  • :record (required, Hash<String,String>)

    A map of variable name-value pairs that represent an observation.

  • :predict_endpoint (required, String)

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 2323

def predict(params = {}, options = {})
  req = build_request(:predict, params)
  req.send_request(options)
end

#update_batch_prediction(params = {}) ⇒ Types::UpdateBatchPredictionOutput

Updates the ‘BatchPredictionName` of a `BatchPrediction`.

You can use the ‘GetBatchPrediction` operation to view the contents of the updated data element.

Examples:

Request syntax with placeholder values


resp = client.update_batch_prediction({
  batch_prediction_id: "EntityId", # required
  batch_prediction_name: "EntityName", # required
})

Response structure


resp.batch_prediction_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :batch_prediction_id (required, String)

    The ID assigned to the ‘BatchPrediction` during creation.

  • :batch_prediction_name (required, String)

    A new user-supplied name or description of the ‘BatchPrediction`.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 2356

def update_batch_prediction(params = {}, options = {})
  req = build_request(:update_batch_prediction, params)
  req.send_request(options)
end

#update_data_source(params = {}) ⇒ Types::UpdateDataSourceOutput

Updates the ‘DataSourceName` of a `DataSource`.

You can use the ‘GetDataSource` operation to view the contents of the updated data element.

Examples:

Request syntax with placeholder values


resp = client.update_data_source({
  data_source_id: "EntityId", # required
  data_source_name: "EntityName", # required
})

Response structure


resp.data_source_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :data_source_id (required, String)

    The ID assigned to the ‘DataSource` during creation.

  • :data_source_name (required, String)

    A new user-supplied name or description of the ‘DataSource` that will replace the current description.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 2390

def update_data_source(params = {}, options = {})
  req = build_request(:update_data_source, params)
  req.send_request(options)
end

#update_evaluation(params = {}) ⇒ Types::UpdateEvaluationOutput

Updates the ‘EvaluationName` of an `Evaluation`.

You can use the ‘GetEvaluation` operation to view the contents of the updated data element.

Examples:

Request syntax with placeholder values


resp = client.update_evaluation({
  evaluation_id: "EntityId", # required
  evaluation_name: "EntityName", # required
})

Response structure


resp.evaluation_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :evaluation_id (required, String)

    The ID assigned to the ‘Evaluation` during creation.

  • :evaluation_name (required, String)

    A new user-supplied name or description of the ‘Evaluation` that will replace the current content.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 2424

def update_evaluation(params = {}, options = {})
  req = build_request(:update_evaluation, params)
  req.send_request(options)
end

#update_ml_model(params = {}) ⇒ Types::UpdateMLModelOutput

Updates the ‘MLModelName` and the `ScoreThreshold` of an `MLModel`.

You can use the ‘GetMLModel` operation to view the contents of the updated data element.

Examples:

Request syntax with placeholder values


resp = client.update_ml_model({
  ml_model_id: "EntityId", # required
  ml_model_name: "EntityName",
  score_threshold: 1.0,
})

Response structure


resp.ml_model_id #=> String

Parameters:

  • params (Hash) (defaults to: {})

    ({})

Options Hash (params):

  • :ml_model_id (required, String)

    The ID assigned to the ‘MLModel` during creation.

  • :ml_model_name (String)

    A user-supplied name or description of the ‘MLModel`.

  • :score_threshold (Float)

    The ‘ScoreThreshold` used in binary classification `MLModel` that marks the boundary between a positive prediction and a negative prediction.

    Output values greater than or equal to the ‘ScoreThreshold` receive a positive result from the `MLModel`, such as `true`. Output values less than the `ScoreThreshold` receive a negative response from the `MLModel`, such as `false`.

Returns:



# File 'lib/aws-sdk-machinelearning/client.rb', line 2468

def update_ml_model(params = {}, options = {})
  req = build_request(:update_ml_model, params)
  req.send_request(options)
end

#wait_until(waiter_name, params = {}, options = {}) {|w.waiter| ... } ⇒ Boolean

Polls an API operation until a resource enters a desired state.

## Basic Usage

A waiter will call an API operation until:

  • It is successful

  • It enters a terminal state

  • It makes the maximum number of attempts

In between attempts, the waiter will sleep.

# polls in a loop, sleeping between attempts
client.wait_until(waiter_name, params)

## Configuration

You can configure the maximum number of polling attempts, and the delay (in seconds) between each polling attempt. You can pass configuration as the final arguments hash.

# poll for ~25 seconds
client.wait_until(waiter_name, params, {
  max_attempts: 5,
  delay: 5,
})

## Callbacks

You can be notified before each polling attempt and before each delay. If you throw ‘:success` or `:failure` from these callbacks, it will terminate the waiter.

started_at = Time.now
client.wait_until(waiter_name, params, {

  # disable max attempts
  max_attempts: nil,

  # poll for 1 hour, instead of a number of attempts
  before_wait: -> (attempts, response) do
    throw :failure if Time.now - started_at > 3600
  end
})

## Handling Errors

When a waiter is unsuccessful, it will raise an error. All of the failure errors extend from Waiters::Errors::WaiterFailed.

begin
  client.wait_until(...)
rescue Aws::Waiters::Errors::WaiterFailed
  # resource did not enter the desired state in time
end

## Valid Waiters

The following table lists the valid waiter names, the operations they call, and the default ‘:delay` and `:max_attempts` values.

| waiter_name                | params                      | :delay | :max_attempts |
| -------------------------- | --------------------------- | ------ | ------------- |
| batch_prediction_available | #describe_batch_predictions | 30     | 60            |
| data_source_available      | #describe_data_sources      | 30     | 60            |
| evaluation_available       | #describe_evaluations       | 30     | 60            |
| ml_model_available         | #describe_ml_models         | 30     | 60            |
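
For example, using this client's ‘ml_model_available` waiter with #describe_ml_models parameters (a sketch; the prefix value is a placeholder):

# polls #describe_ml_models roughly every 30 seconds, up to 60 attempts (the defaults above)
client.wait_until(:ml_model_available, filter_variable: "Name", prefix: "2014-09")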

Parameters:

  • waiter_name (Symbol)
  • params (Hash) (defaults to: {})

    ({})

  • options (Hash) (defaults to: {})

    ({})

Options Hash (options):

  • :max_attempts (Integer)
  • :delay (Integer)
  • :before_attempt (Proc)
  • :before_wait (Proc)

Yields:

  • (w.waiter)

Returns:

  • (Boolean)

    Returns ‘true` if the waiter was successful.

Raises:

  • (Errors::FailureStateError)

    Raised when the waiter terminates because the waiter has entered a state that it will not transition out of, preventing success.

  • (Errors::TooManyAttemptsError)

    Raised when the configured maximum number of attempts have been made, and the waiter is not yet successful.

  • (Errors::UnexpectedError)

    Raised when an unexpected error is encountered while polling for a resource.

  • (Errors::NoSuchWaiterError)

    Raised when you request to wait for an unknown state.



# File 'lib/aws-sdk-machinelearning/client.rb', line 2586

def wait_until(waiter_name, params = {}, options = {})
  w = waiter(waiter_name, options)
  yield(w.waiter) if block_given? # deprecated
  w.wait(params)
end

#waiter_namesObject

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.

Deprecated.


# File 'lib/aws-sdk-machinelearning/client.rb', line 2594

def waiter_names
  waiters.keys
end