Class: Aws::Batch::Client
- Inherits:
-
Seahorse::Client::Base
- Object
- Seahorse::Client::Base
- Aws::Batch::Client
- Includes:
- ClientStubs
- Defined in:
- lib/aws-sdk-batch/client.rb
Overview
An API client for Batch. To construct a client, you need to configure a `:region` and `:credentials`.
client = Aws::Batch::Client.new(
region: region_name,
credentials: credentials,
# ...
)
For details on configuring region and credentials see the [developer guide](/sdk-for-ruby/v3/developer-guide/setup-config.html).
See #initialize for a full list of supported configuration options.
Class Attribute Summary collapse
- .identifier ⇒ Object readonly private
API Operations collapse
- #cancel_job(params = {}) ⇒ Struct
  Cancels a job in a Batch job queue.
- #create_compute_environment(params = {}) ⇒ Types::CreateComputeEnvironmentResponse
  Creates a Batch compute environment.
- #create_job_queue(params = {}) ⇒ Types::CreateJobQueueResponse
  Creates a Batch job queue.
- #create_scheduling_policy(params = {}) ⇒ Types::CreateSchedulingPolicyResponse
  Creates a Batch scheduling policy.
- #delete_compute_environment(params = {}) ⇒ Struct
  Deletes a Batch compute environment.
- #delete_job_queue(params = {}) ⇒ Struct
  Deletes the specified job queue.
- #delete_scheduling_policy(params = {}) ⇒ Struct
  Deletes the specified scheduling policy.
- #deregister_job_definition(params = {}) ⇒ Struct
  Deregisters a Batch job definition.
- #describe_compute_environments(params = {}) ⇒ Types::DescribeComputeEnvironmentsResponse
  Describes one or more of your compute environments.
- #describe_job_definitions(params = {}) ⇒ Types::DescribeJobDefinitionsResponse
  Describes a list of job definitions.
- #describe_job_queues(params = {}) ⇒ Types::DescribeJobQueuesResponse
  Describes one or more of your job queues.
- #describe_jobs(params = {}) ⇒ Types::DescribeJobsResponse
  Describes a list of Batch jobs.
- #describe_scheduling_policies(params = {}) ⇒ Types::DescribeSchedulingPoliciesResponse
  Describes one or more of your scheduling policies.
- #get_job_queue_snapshot(params = {}) ⇒ Types::GetJobQueueSnapshotResponse
  Provides a list of the first 100 `RUNNABLE` jobs associated to a single job queue.
- #list_jobs(params = {}) ⇒ Types::ListJobsResponse
  Returns a list of Batch jobs.
- #list_scheduling_policies(params = {}) ⇒ Types::ListSchedulingPoliciesResponse
  Returns a list of Batch scheduling policies.
- #list_tags_for_resource(params = {}) ⇒ Types::ListTagsForResourceResponse
  Lists the tags for a Batch resource.
- #register_job_definition(params = {}) ⇒ Types::RegisterJobDefinitionResponse
  Registers a Batch job definition.
- #submit_job(params = {}) ⇒ Types::SubmitJobResponse
  Submits a Batch job from a job definition.
- #tag_resource(params = {}) ⇒ Struct
  Associates the specified tags to a resource with the specified `resourceArn`.
- #terminate_job(params = {}) ⇒ Struct
  Terminates a job in a job queue.
- #untag_resource(params = {}) ⇒ Struct
  Deletes specified tags from a Batch resource.
- #update_compute_environment(params = {}) ⇒ Types::UpdateComputeEnvironmentResponse
  Updates a Batch compute environment.
- #update_job_queue(params = {}) ⇒ Types::UpdateJobQueueResponse
  Updates a job queue.
- #update_scheduling_policy(params = {}) ⇒ Struct
  Updates a scheduling policy.
Class Method Summary collapse
- .errors_module ⇒ Object private
Instance Method Summary collapse
- #build_request(operation_name, params = {}) ⇒ Object private
- #initialize(options) ⇒ Client constructor
  A new instance of Client.
- #waiter_names ⇒ Object deprecated private
Constructor Details
#initialize(options) ⇒ Client
Returns a new instance of Client.
Parameters:
- options (Hash)
Options Hash (options):
-
:plugins
(Array<Seahorse::Client::Plugin>)
— default:
[]
—
A list of plugins to apply to the client. Each plugin is either a class name or an instance of a plugin class.
-
:credentials
(required, Aws::CredentialProvider)
—
Your AWS credentials. This can be an instance of any one of the following classes:
-
`Aws::Credentials` - Used for configuring static, non-refreshing credentials.
-
`Aws::SharedCredentials` - Used for loading static credentials from a shared file, such as `~/.aws/config`.
-
`Aws::AssumeRoleCredentials` - Used when you need to assume a role.
-
`Aws::AssumeRoleWebIdentityCredentials` - Used when you need to assume a role after providing credentials via the web.
-
`Aws::SSOCredentials` - Used for loading credentials from AWS SSO using an access token generated from `aws login`.
-
`Aws::ProcessCredentials` - Used for loading credentials from a process that outputs to stdout.
-
`Aws::InstanceProfileCredentials` - Used for loading credentials from an EC2 IMDS on an EC2 instance.
-
`Aws::ECSCredentials` - Used for loading credentials from instances running in ECS.
-
`Aws::CognitoIdentityCredentials` - Used for loading credentials from the Cognito Identity service.
When `:credentials` are not configured directly, the following locations will be searched for credentials:
-
The `:access_key_id`, `:secret_access_key`, `:session_token`, and `:account_id` options.
-
`~/.aws/credentials`
-
`~/.aws/config`
-
EC2/ECS IMDS instance profile - When used by default, the timeouts are very aggressive. Construct and pass an instance of `Aws::InstanceProfileCredentials` or `Aws::ECSCredentials` to enable retries and extended timeouts. Instance profile credential fetching can be disabled by setting `ENV['AWS_EC2_METADATA_DISABLED']` to `true`.
-
- :region (required, String) — The AWS region to connect to.
- :access_key_id (String)
- :account_id (String)
-
:active_endpoint_cache
(Boolean)
— default:
false
—
When set to `true`, a thread polling for endpoints will be running in the background every 60 secs (default). Defaults to `false`.
-
:adaptive_retry_wait_to_fill
(Boolean)
— default:
true
—
Used only in `adaptive` retry mode. When true, the request will sleep until there is sufficient client-side capacity to retry the request. When false, the request will raise a `RetryCapacityNotAvailableError` and will not retry instead of sleeping.
-
:client_side_monitoring
(Boolean)
— default:
false
—
When `true`, client-side metrics will be collected for all API requests from this client.
-
:client_side_monitoring_client_id
(String)
— default:
""
—
Allows you to provide an identifier for this client which will be attached to all generated client side metrics. Defaults to an empty string.
-
:client_side_monitoring_host
(String)
— default:
"127.0.0.1"
—
Allows you to specify the DNS hostname or IPv4 or IPv6 address that the client side monitoring agent is running on, where client metrics will be published via UDP.
-
:client_side_monitoring_port
(Integer)
— default:
31000
—
Required for publishing client metrics. The port that the client side monitoring agent is running on, where client metrics will be published via UDP.
-
:client_side_monitoring_publisher
(Aws::ClientSideMonitoring::Publisher)
— default:
Aws::ClientSideMonitoring::Publisher
—
Allows you to provide a custom client-side monitoring publisher class. By default, will use the Client Side Monitoring Agent Publisher.
-
:convert_params
(Boolean)
— default:
true
—
When `true`, an attempt is made to coerce request parameters into the required types.
-
:correct_clock_skew
(Boolean)
— default:
true
—
Used only in `standard` and `adaptive` retry modes. Specifies whether to apply a clock skew correction and retry requests with skewed client clocks.
-
:defaults_mode
(String)
— default:
"legacy"
—
See DefaultsModeConfiguration for a list of the accepted modes and the configuration defaults that are included.
-
:disable_host_prefix_injection
(Boolean)
— default:
false
—
Set to true to disable SDK automatically adding host prefix to default service endpoint when available.
-
:disable_request_compression
(Boolean)
— default:
false
—
When set to `true` the request body will not be compressed for supported operations.
-
:endpoint
(String, URI::HTTPS, URI::HTTP)
—
Normally you should not configure the `:endpoint` option directly. This is normally constructed from the `:region` option. Configuring `:endpoint` is normally reserved for connecting to test or custom endpoints. The endpoint should be a URI formatted like:
'http://example.com' 'https://example.com' 'http://example.com:123'
-
:endpoint_cache_max_entries
(Integer)
— default:
1000
—
Used for the maximum size limit of the LRU cache storing endpoints data for endpoint discovery enabled operations. Defaults to 1000.
-
:endpoint_cache_max_threads
(Integer)
— default:
10
—
Used for the maximum threads in use for polling endpoints to be cached, defaults to 10.
-
:endpoint_cache_poll_interval
(Integer)
— default:
60
—
When `:endpoint_discovery` and `:active_endpoint_cache` are enabled, use this option to configure the time interval in seconds for making requests fetching endpoints information. Defaults to 60 sec.
-
:endpoint_discovery
(Boolean)
— default:
false
—
When set to `true`, endpoint discovery will be enabled for operations when available.
-
:ignore_configured_endpoint_urls
(Boolean)
—
Setting to true disables use of endpoint URLs provided via environment variables and the shared configuration file.
-
:log_formatter
(Aws::Log::Formatter)
— default:
Aws::Log::Formatter.default
—
The log formatter.
-
:log_level
(Symbol)
— default:
:info
—
The log level to send messages to the `:logger` at.
-
:logger
(Logger)
—
The Logger instance to send log messages to. If this option is not set, logging will be disabled.
-
:max_attempts
(Integer)
— default:
3
—
An integer representing the maximum number of attempts that will be made for a single request, including the initial attempt. For example, setting this value to 5 will result in a request being retried up to 4 times. Used in `standard` and `adaptive` retry modes.
-
:profile
(String)
— default:
"default"
—
Used when loading credentials from the shared credentials file at HOME/.aws/credentials. When not specified, `default` is used.
-
:request_min_compression_size_bytes
(Integer)
— default:
10240
—
The minimum size in bytes that triggers compression for request bodies. The value must be a non-negative integer between 0 and 10485780 bytes inclusive.
-
:retry_backoff
(Proc)
—
A proc or lambda used for backoff. Defaults to 2**retries * retry_base_delay. This option is only used in the `legacy` retry mode.
-
:retry_base_delay
(Float)
— default:
0.3
—
The base delay in seconds used by the default backoff function. This option is only used in the `legacy` retry mode.
-
:retry_jitter
(Symbol)
— default:
:none
—
A delay randomiser function used by the default backoff function. Some predefined functions can be referenced by name - :none, :equal, :full, otherwise a Proc that takes and returns a number. This option is only used in the `legacy` retry mode.
-
:retry_limit
(Integer)
— default:
3
—
The maximum number of times to retry failed requests. Only ~ 500 level server errors and certain ~ 400 level client errors are retried. Generally, these are throttling errors, data checksum errors, networking errors, timeout errors, auth errors, endpoint discovery, and errors from expired credentials. This option is only used in the `legacy` retry mode.
-
:retry_max_delay
(Integer)
— default:
0
—
The maximum number of seconds to delay between retries (0 for no limit) used by the default backoff function. This option is only used in the `legacy` retry mode.
-
:retry_mode
(String)
— default:
"legacy"
—
Specifies which retry algorithm to use. Values are:
-
`legacy` - The pre-existing retry behavior. This is the default value if no retry mode is provided.
-
`standard` - A standardized set of retry rules across the AWS SDKs. This includes support for retry quotas, which limit the number of unsuccessful retries a client can make.
-
`adaptive` - An experimental retry mode that includes all the functionality of `standard` mode along with automatic client-side throttling. This is a provisional mode that may change behavior in the future.
-
:sdk_ua_app_id
(String)
—
A unique and opaque application ID that is appended to the User-Agent header as app/sdk_ua_app_id. It should have a maximum length of 50. This variable is sourced from environment variable AWS_SDK_UA_APP_ID or the shared config profile attribute sdk_ua_app_id.
- :secret_access_key (String)
- :session_token (String)
-
:sigv4a_signing_region_set
(Array)
—
A list of regions that should be signed with SigV4a signing. When not passed, a default `:sigv4a_signing_region_set` is searched for in the following locations:
-
`ENV['AWS_SIGV4A_SIGNING_REGION_SET']`
-
`~/.aws/config`
-
:stub_responses
(Boolean)
— default:
false
—
Causes the client to return stubbed responses. By default fake responses are generated and returned. You can specify the response data to return or errors to raise by calling ClientStubs#stub_responses. See ClientStubs for more information.
** Please note ** When response stubbing is enabled, no HTTP requests are made, and retries are disabled.
-
:telemetry_provider
(Aws::Telemetry::TelemetryProviderBase)
— default:
Aws::Telemetry::NoOpTelemetryProvider
—
Allows you to provide a telemetry provider, which is used to emit telemetry data. By default, uses `NoOpTelemetryProvider` which will not record or emit any telemetry data. The SDK supports the following telemetry providers:
-
OpenTelemetry (OTel) - To use the OTel provider, install and require the `opentelemetry-sdk` gem and then pass in an instance of an `Aws::Telemetry::OTelProvider` for the telemetry provider.
-
:token_provider
(Aws::TokenProvider)
—
A Bearer Token Provider. This can be an instance of any one of the following classes:
-
`Aws::StaticTokenProvider` - Used for configuring static, non-refreshing tokens.
-
`Aws::SSOTokenProvider` - Used for loading tokens from AWS SSO using an access token generated from `aws login`.
When `:token_provider` is not configured directly, the `Aws::TokenProviderChain` will be used to search for tokens configured for your profile in shared configuration files.
-
:use_dualstack_endpoint
(Boolean)
—
When set to `true`, dualstack enabled endpoints (with `.aws` TLD) will be used if available.
-
:use_fips_endpoint
(Boolean)
—
When set to `true`, fips compatible endpoints will be used if available. When a `fips` region is used, the region is normalized and this config is set to `true`.
-
:validate_params
(Boolean)
— default:
true
—
When `true`, request parameters are validated before sending the request.
-
:endpoint_provider
(Aws::Batch::EndpointProvider)
—
The endpoint provider used to resolve endpoints. Any object that responds to `#resolve_endpoint(parameters)` where `parameters` is a Struct similar to `Aws::Batch::EndpointParameters`.
-
:http_continue_timeout
(Float)
— default:
1
—
The number of seconds to wait for a 100-continue response before sending the request body. This option has no effect unless the request has "Expect" header set to "100-continue". Defaults to `nil` which disables this behaviour. This value can safely be set per request on the session.
-
:http_idle_timeout
(Float)
— default:
5
—
The number of seconds a connection is allowed to sit idle before it is considered stale. Stale connections are closed and removed from the pool before making a request.
-
:http_open_timeout
(Float)
— default:
15
—
The number of seconds to wait when opening a HTTP session before raising a `Timeout::Error`.
-
:http_proxy
(URI::HTTP, String)
—
A proxy to send requests through. Formatted like 'proxy.com:123'.
-
:http_read_timeout
(Float)
— default:
60
—
The default number of seconds to wait for response data. This value can safely be set per-request on the session.
-
:http_wire_trace
(Boolean)
— default:
false
—
When `true`, HTTP debug output will be sent to the `:logger`.
-
:on_chunk_received
(Proc)
—
When a Proc object is provided, it will be used as callback when each chunk of the response body is received. It provides three arguments: the chunk, the number of bytes received, and the total number of bytes in the response (or nil if the server did not send a `content-length`).
-
:on_chunk_sent
(Proc)
—
When a Proc object is provided, it will be used as callback when each chunk of the request body is sent. It provides three arguments: the chunk, the number of bytes read from the body, and the total number of bytes in the body.
-
:raise_response_errors
(Boolean)
— default:
true
—
When `true`, response errors are raised.
-
:ssl_ca_bundle
(String)
—
Full path to the SSL certificate authority bundle file that should be used when verifying peer certificates. If you do not pass `:ssl_ca_bundle` or `:ssl_ca_directory` the system default will be used if available.
-
:ssl_ca_directory
(String)
—
Full path of the directory that contains the unbundled SSL certificate authority files for verifying peer certificates. If you do not pass `:ssl_ca_bundle` or `:ssl_ca_directory` the system default will be used if available.
-
:ssl_ca_store
(String)
—
Sets the X509::Store to verify peer certificate.
-
:ssl_cert
(OpenSSL::X509::Certificate)
—
Sets a client certificate when creating http connections.
-
:ssl_key
(OpenSSL::PKey)
—
Sets a client key when creating http connections.
-
:ssl_timeout
(Float)
—
Sets the SSL timeout in seconds.
-
:ssl_verify_peer
(Boolean)
— default:
true
—
When `true`, SSL peer certificates are verified when establishing a connection.
# File 'lib/aws-sdk-batch/client.rb', line 444
def initialize(*args)
  super
end
Class Attribute Details
.identifier ⇒ Object (readonly)
This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.
# File 'lib/aws-sdk-batch/client.rb', line 4936
def identifier
  @identifier
end
Class Method Details
.errors_module ⇒ Object
This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.
# File 'lib/aws-sdk-batch/client.rb', line 4939
def errors_module
  Errors
end
Instance Method Details
#build_request(operation_name, params = {}) ⇒ Object
This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.
Parameters:
- params ({}) (defaults to: {})
# File 'lib/aws-sdk-batch/client.rb', line 4909
def build_request(operation_name, params = {})
  handlers = @handlers.for(operation_name)
  tracer = config.telemetry_provider.tracer_provider.tracer(
    Aws::Telemetry.module_to_tracer_name('Aws::Batch')
  )
  context = Seahorse::Client::RequestContext.new(
    operation_name: operation_name,
    operation: config.api.operation(operation_name),
    client: self,
    params: params,
    config: config,
    tracer: tracer
  )
  context[:gem_name] = 'aws-sdk-batch'
  context[:gem_version] = '1.103.0'
  Seahorse::Client::Request.new(handlers, context)
end
#cancel_job(params = {}) ⇒ Struct
Cancels a job in a Batch job queue. Jobs that are in a `SUBMITTED`, `PENDING`, or `RUNNABLE` state are cancelled and the job status is updated to `FAILED`.
<note markdown="1"> A `PENDING` job is canceled after all dependency jobs are completed. Therefore, it may take longer than expected to cancel a job in `PENDING` status.
When you try to cancel an array parent job in `PENDING`, Batch attempts to cancel all child jobs. The array parent job is canceled when all child jobs are completed.
</note>
Jobs that progressed to the `STARTING` or `RUNNING` state aren't canceled. However, the API operation still succeeds, even if no job is canceled. These jobs must be terminated with the TerminateJob operation.
Examples:
Example: To cancel a job
# This example cancels a job with the specified job ID.
resp = client.cancel_job({
job_id: "1d828f65-7a4d-42e8-996d-3b900ed59dc4",
reason: "Cancelling job.",
})
resp.to_h outputs the following:
{
}
Request syntax with placeholder values
resp = client.cancel_job({
job_id: "String", # required
reason: "String", # required
})
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:job_id
(required, String)
—
The Batch job ID of the job to cancel.
-
:reason
(required, String)
—
A message to attach to the job that explains the reason for canceling it. This message is returned by future DescribeJobs operations on the job. It is also recorded in the Batch activity logs.
This parameter has a limit of 1024 characters.
Returns:
-
(Struct)
—
Returns an empty response.
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 506
def cancel_job(params = {}, options = {})
  req = build_request(:cancel_job, params)
  req.send_request(options)
end
#create_compute_environment(params = {}) ⇒ Types::CreateComputeEnvironmentResponse
Creates a Batch compute environment. You can create `MANAGED` or `UNMANAGED` compute environments. `MANAGED` compute environments can use Amazon EC2 or Fargate resources. `UNMANAGED` compute environments can only use EC2 resources.
In a managed compute environment, Batch manages the capacity and instance types of the compute resources within the environment. This is based on the compute resource specification that you define or the [launch template][1] that you specify when you create the compute environment. You can choose to use EC2 On-Demand Instances and EC2 Spot Instances, or you can use Fargate and Fargate Spot capacity in your managed compute environment. You can optionally set a maximum price so that Spot Instances only launch when the Spot Instance price is less than a specified percentage of the On-Demand price.
<note markdown="1"> Multi-node parallel jobs aren't supported on Spot Instances.
</note>
In an unmanaged compute environment, you can manage your own EC2 compute resources and have flexibility with how you configure your compute resources. For example, you can use custom AMIs. However, you must verify that each of your AMIs meets the Amazon ECS container instance AMI specification. For more information, see [container instance AMIs][2] in the *Amazon Elastic Container Service Developer Guide*. After you create your unmanaged compute environment, you can use the DescribeComputeEnvironments operation to find the Amazon ECS cluster that's associated with it. Then, launch your container instances into that Amazon ECS cluster. For more information, see [Launching an Amazon ECS container instance][3] in the *Amazon Elastic Container Service Developer Guide*.
<note markdown="1"> To create a compute environment that uses EKS resources, the caller must have permissions to call `eks:DescribeCluster`.
</note>
<note markdown="1"> Batch doesn't automatically upgrade the AMIs in a compute environment after it's created. For example, it also doesn't update the AMIs in your compute environment when a newer version of the Amazon ECS optimized AMI is available. You're responsible for the management of the guest operating system. This includes any updates and security patches. You're also responsible for any additional application software or utilities that you install on the compute resources. There are two ways to use a new AMI for your Batch jobs. The original method is to complete these steps:
1. Create a new compute environment with the new AMI.
2. Add the compute environment to an existing job queue.
3. Remove the earlier compute environment from your job queue.
4. Delete the earlier compute environment.
In April 2022, Batch added enhanced support for updating compute environments. For more information, see [Updating compute environments][4]. To use the enhanced updating of compute environments to update AMIs, follow these rules:
* Either don't set the service role (`serviceRole`) parameter or set it to the **AWSBatchServiceRole** service-linked role.
-
Set the allocation strategy (`allocationStrategy`) parameter to `BEST_FIT_PROGRESSIVE`, `SPOT_CAPACITY_OPTIMIZED`, or `SPOT_PRICE_CAPACITY_OPTIMIZED`.
-
Set the update to latest image version (`updateToLatestImageVersion`) parameter to `true`. The `updateToLatestImageVersion` parameter is used when you update a compute environment. This parameter is ignored when you create a compute environment.
-
Don't specify an AMI ID in `imageId`, `imageIdOverride` (in [ `ec2Configuration` ][5]), or in the launch template (`launchTemplate`). In that case, Batch selects the latest Amazon ECS optimized AMI that's supported by Batch at the time the infrastructure update is initiated. Alternatively, you can specify the AMI ID in the `imageId` or `imageIdOverride` parameters, or the launch template identified by the `LaunchTemplate` properties. Changing any of these properties starts an infrastructure update. If the AMI ID is specified in the launch template, it can't be replaced by specifying an AMI ID in either the `imageId` or `imageIdOverride` parameters. It can only be replaced by specifying a different launch template, or if the launch template version is set to `$Default` or `$Latest`, by setting either a new default version for the launch template (if `$Default`) or by adding a new version to the launch template (if `$Latest`).
If these rules are followed, any update that starts an infrastructure update causes the AMI ID to be re-selected. If the `version` setting in the launch template (`launchTemplate`) is set to `$Latest` or `$Default`, the latest or default version of the launch template is evaluated at the time of the infrastructure update, even if the `launchTemplate` wasn't updated.
</note>
[1]: docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-launch-templates.html [2]: docs.aws.amazon.com/AmazonECS/latest/developerguide/container_instance_AMIs.html [3]: docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html [4]: docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html [5]: docs.aws.amazon.com/batch/latest/APIReference/API_Ec2Configuration.html
Examples:
Example: To create a managed EC2 compute environment
# This example creates a managed compute environment with specific C4 instance types that are launched on demand. The
# compute environment is called C4OnDemand.
resp = client.create_compute_environment({
type: "MANAGED",
compute_environment_name: "C4OnDemand",
compute_resources: {
type: "EC2",
desiredv_cpus: 48,
ec2_key_pair: "id_rsa",
instance_role: "ecsInstanceRole",
instance_types: [
"c4.large",
"c4.xlarge",
"c4.2xlarge",
"c4.4xlarge",
"c4.8xlarge",
],
maxv_cpus: 128,
minv_cpus: 0,
security_group_ids: [
"sg-cf5093b2",
],
subnets: [
"subnet-220c0e0a",
"subnet-1a95556d",
"subnet-978f6dce",
],
tags: {
"Name" => "Batch Instance - C4OnDemand",
},
},
service_role: "arn:aws:iam::012345678910:role/AWSBatchServiceRole",
state: "ENABLED",
})
resp.to_h outputs the following:
{
compute_environment_arn: "arn:aws:batch:us-east-1:012345678910:compute-environment/C4OnDemand",
compute_environment_name: "C4OnDemand",
}
Example: To create a managed EC2 Spot compute environment
# This example creates a managed compute environment with the M4 instance type that is launched when the Spot bid price is
# at or below 20% of the On-Demand price for the instance type. The compute environment is called M4Spot.
resp = client.create_compute_environment({
type: "MANAGED",
compute_environment_name: "M4Spot",
compute_resources: {
type: "SPOT",
bid_percentage: 20,
desiredv_cpus: 4,
ec2_key_pair: "id_rsa",
instance_role: "ecsInstanceRole",
instance_types: [
"m4",
],
maxv_cpus: 128,
minv_cpus: 0,
security_group_ids: [
"sg-cf5093b2",
],
spot_iam_fleet_role: "arn:aws:iam::012345678910:role/aws-ec2-spot-fleet-role",
subnets: [
"subnet-220c0e0a",
"subnet-1a95556d",
"subnet-978f6dce",
],
tags: {
"Name" => "Batch Instance - M4Spot",
},
},
service_role: "arn:aws:iam::012345678910:role/AWSBatchServiceRole",
state: "ENABLED",
})
resp.to_h outputs the following:
{
compute_environment_arn: "arn:aws:batch:us-east-1:012345678910:compute-environment/M4Spot",
compute_environment_name: "M4Spot",
}
Request syntax with placeholder values
resp = client.create_compute_environment({
compute_environment_name: "String", # required
type: "MANAGED", # required, accepts MANAGED, UNMANAGED
state: "ENABLED", # accepts ENABLED, DISABLED
unmanagedv_cpus: 1,
compute_resources: {
type: "EC2", # required, accepts EC2, SPOT, FARGATE, FARGATE_SPOT
allocation_strategy: "BEST_FIT", # accepts BEST_FIT, BEST_FIT_PROGRESSIVE, SPOT_CAPACITY_OPTIMIZED, SPOT_PRICE_CAPACITY_OPTIMIZED
minv_cpus: 1,
maxv_cpus: 1, # required
desiredv_cpus: 1,
instance_types: ["String"],
image_id: "String",
subnets: ["String"], # required
security_group_ids: ["String"],
ec2_key_pair: "String",
instance_role: "String",
tags: {
"String" => "String",
},
placement_group: "String",
bid_percentage: 1,
spot_iam_fleet_role: "String",
launch_template: {
launch_template_id: "String",
launch_template_name: "String",
version: "String",
},
ec2_configuration: [
{
image_type: "ImageType", # required
image_id_override: "ImageIdOverride",
image_kubernetes_version: "KubernetesVersion",
},
],
},
service_role: "String",
tags: {
"TagKey" => "TagValue",
},
eks_configuration: {
eks_cluster_arn: "String", # required
kubernetes_namespace: "String", # required
},
context: "String",
})
Response structure
resp.compute_environment_name #=> String
resp.compute_environment_arn #=> String
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:compute_environment_name
(required, String)
—
The name for your compute environment. It can be up to 128 characters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
-
:type
(required, String)
—
The type of the compute environment: `MANAGED` or `UNMANAGED`. For more information, see [Compute Environments][1] in the *Batch User Guide*.
[1]: docs.aws.amazon.com/batch/latest/userguide/compute_environments.html
-
:state
(String)
—
The state of the compute environment. If the state is `ENABLED`, then the compute environment accepts jobs from a queue and can scale out automatically based on queues.
If the state is `ENABLED`, then the Batch scheduler can attempt to place jobs from an associated job queue on the compute resources within the environment. If the compute environment is managed, then it can scale its instances out or in automatically, based on the job queue demand.
If the state is `DISABLED`, then the Batch scheduler doesn't attempt to place jobs within the environment. Jobs in a `STARTING` or `RUNNING` state continue to progress normally. Managed compute environments in the `DISABLED` state don't scale out.
<note markdown="1"> Compute environments in a `DISABLED` state may continue to incur billing charges. To prevent additional charges, turn off and then delete the compute environment. For more information, see [State] in the *Batch User Guide*.
</note>
When an instance is idle, the instance scales down to the `minvCpus` value. However, the instance size doesn't change. For example, consider a `c5.8xlarge` instance with a `minvCpus` value of `4` and a `desiredvCpus` value of `36`. This instance doesn't scale down to a `c5.large` instance.
-
:unmanagedv_cpus
(Integer)
—
The maximum number of vCPUs for an unmanaged compute environment. This parameter is only used for fair share scheduling to reserve vCPU capacity for new share identifiers. If this parameter isn't provided for a fair share job queue, no vCPU capacity is reserved.
<note markdown="1"> This parameter is only supported when the `type` parameter is set to `UNMANAGED`.
</note>
-
:compute_resources
(Types::ComputeResource)
—
Details about the compute resources managed by the compute environment. This parameter is required for managed compute environments. For more information, see [Compute Environments][1] in the *Batch User Guide*.
[1]: docs.aws.amazon.com/batch/latest/userguide/compute_environments.html
-
:service_role
(String)
—
The full Amazon Resource Name (ARN) of the IAM role that allows Batch to make calls to other Amazon Web Services services on your behalf. For more information, see [Batch service IAM role] in the *Batch User Guide*.
If your account already created the Batch service-linked role, that role is used by default for your compute environment unless you specify a different role here. If the Batch service-linked role doesn’t exist in your account, and no role is specified here, the service attempts to create the Batch service-linked role in your account.
If your specified role has a path other than `/`, then you must specify either the full role ARN (recommended) or prefix the role name with the path. For example, if a role with the name `bar` has a path of `/foo/`, specify `/foo/bar` as the role name. For more information, see [Friendly names and paths] in the *IAM User Guide*.
<note markdown="1"> Depending on how you created your Batch service role, its ARN might contain the `service-role` path prefix. When you only specify the name of the service role, Batch assumes that your ARN doesn’t use the `service-role` path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.
</note>
[1]: docs.aws.amazon.com/batch/latest/userguide/service_IAM_role.html [2]: docs.aws.amazon.com/IAM/latest/UserGuide/reference_identifiers.html#identifiers-friendly-names
-
:tags
(Hash<String,String>)
—
The tags that you apply to the compute environment to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see [Tagging Amazon Web Services Resources] in *Amazon Web Services General Reference*.
These tags can be updated or removed using the [TagResource][2] and [UntagResource][3] API operations. These tags don’t propagate to the underlying compute resources.
[1]: docs.aws.amazon.com/general/latest/gr/aws_tagging.html [2]: docs.aws.amazon.com/batch/latest/APIReference/API_TagResource.html [3]: docs.aws.amazon.com/batch/latest/APIReference/API_UntagResource.html
-
:eks_configuration
(Types::EksConfiguration)
—
The details for the Amazon EKS cluster that supports the compute environment.
-
:context
(String)
—
Reserved.
Returns:
-
(Types::CreateComputeEnvironmentResponse)
—
Returns a response object which responds to the following methods:
-
#compute_environment_name => String
-
#compute_environment_arn => String
-
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 891

def create_compute_environment(params = {}, options = {})
  req = build_request(:create_compute_environment, params)
  req.send_request(options)
end
#create_job_queue(params = {}) ⇒ Types::CreateJobQueueResponse
Creates a Batch job queue. When you create a job queue, you associate one or more compute environments with the queue and assign an order of preference for the compute environments.
You also set a priority for the job queue that determines the order in which the Batch scheduler places jobs onto its associated compute environments. For example, if a compute environment is associated with more than one job queue, the job queue with a higher priority is given preference for scheduling jobs to that compute environment.
Examples:
Example: To create a job queue with a single compute environment
# This example creates a job queue called LowPriority that uses the M4Spot compute environment.
resp = client.create_job_queue({
compute_environment_order: [
{
compute_environment: "M4Spot",
order: 1,
},
],
job_queue_name: "LowPriority",
priority: 1,
state: "ENABLED",
})
resp.to_h outputs the following:
{
job_queue_arn: "arn:aws:batch:us-east-1:012345678910:job-queue/LowPriority",
job_queue_name: "LowPriority",
}
Example: To create a job queue with multiple compute environments
# This example creates a job queue called HighPriority that uses the C4OnDemand compute environment with an order of 1 and
# the M4Spot compute environment with an order of 2.
resp = client.create_job_queue({
compute_environment_order: [
{
compute_environment: "C4OnDemand",
order: 1,
},
{
compute_environment: "M4Spot",
order: 2,
},
],
job_queue_name: "HighPriority",
priority: 10,
state: "ENABLED",
})
resp.to_h outputs the following:
{
job_queue_arn: "arn:aws:batch:us-east-1:012345678910:job-queue/HighPriority",
job_queue_name: "HighPriority",
}
Request syntax with placeholder values
resp = client.create_job_queue({
job_queue_name: "String", # required
state: "ENABLED", # accepts ENABLED, DISABLED
scheduling_policy_arn: "String",
priority: 1, # required
compute_environment_order: [ # required
{
order: 1, # required
compute_environment: "String", # required
},
],
tags: {
"TagKey" => "TagValue",
},
job_state_time_limit_actions: [
{
reason: "String", # required
state: "RUNNABLE", # required, accepts RUNNABLE
max_time_seconds: 1, # required
action: "CANCEL", # required, accepts CANCEL
},
],
})
Response structure
resp.job_queue_name #=> String
resp.job_queue_arn #=> String
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:job_queue_name
(required, String)
—
The name of the job queue. It can be up to 128 letters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
-
:state
(String)
—
The state of the job queue. If the job queue state is `ENABLED`, it is able to accept jobs. If the job queue state is `DISABLED`, new jobs can’t be added to the queue, but jobs already in the queue can finish.
-
:scheduling_policy_arn
(String)
—
The Amazon Resource Name (ARN) of the fair share scheduling policy. Job queues that don’t have a scheduling policy are scheduled in a first-in, first-out (FIFO) model. After a job queue has a scheduling policy, it can be replaced but can’t be removed.
The format is `aws:Partition:batch:Region:Account:scheduling-policy/Name`.
An example is `aws:aws:batch:us-west-2:123456789012:scheduling-policy/MySchedulingPolicy`.
A job queue without a scheduling policy is scheduled as a FIFO job queue and can’t have a scheduling policy added. Job queues with a scheduling policy can have a maximum of 500 active fair share identifiers. When the limit has been reached, submissions of any jobs that add a new fair share identifier fail.
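As a quick illustration of the format above, the ARN can be assembled from its components; the partition, region, account, and policy name below are placeholder values.

```ruby
# Assemble a scheduling policy ARN following the documented format.
# All component values here are illustrative placeholders.
partition = "aws"
region    = "us-west-2"
account   = "123456789012"
name      = "MySchedulingPolicy"

scheduling_policy_arn =
  "aws:#{partition}:batch:#{region}:#{account}:scheduling-policy/#{name}"
# => "aws:aws:batch:us-west-2:123456789012:scheduling-policy/MySchedulingPolicy"
```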
-
:priority
(required, Integer)
—
The priority of the job queue. Job queues with a higher priority (or a higher integer value for the `priority` parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order. For example, a job queue with a priority value of `10` is given scheduling preference over a job queue with a priority value of `1`. All of the compute environments must be either EC2 (`EC2` or `SPOT`) or Fargate (`FARGATE` or `FARGATE_SPOT`); EC2 and Fargate compute environments can’t be mixed.
-
:compute_environment_order
(required, Array<Types::ComputeEnvironmentOrder>)
—
The set of compute environments mapped to a job queue and their order relative to each other. The job scheduler uses this parameter to determine which compute environment runs a specific job. Compute environments must be in the `VALID` state before you can associate them with a job queue. You can associate up to three compute environments with a job queue. All of the compute environments must be either EC2 (`EC2` or `SPOT`) or Fargate (`FARGATE` or `FARGATE_SPOT`); EC2 and Fargate compute environments can’t be mixed.
<note markdown="1"> All compute environments that are associated with a job queue must share the same architecture. Batch doesn’t support mixing compute environment architecture types in a single job queue.
</note>
-
:tags
(Hash<String,String>)
—
The tags that you apply to the job queue to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see [Tagging your Batch resources] in the *Batch User Guide*.
[1]: docs.aws.amazon.com/batch/latest/userguide/using-tags.html
-
:job_state_time_limit_actions
(Array<Types::JobStateTimeLimitAction>)
—
The set of actions that Batch performs on jobs that remain at the head of the job queue in the specified state longer than specified times. Batch performs each action after `maxTimeSeconds` has passed. The minimum value for `maxTimeSeconds` is 600 (10 minutes) and the maximum value is 86,400 (24 hours).
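For illustration, here is a hedged sketch of one `job_state_time_limit_actions` entry that cancels jobs left at the head of the queue in `RUNNABLE` for one hour; the reason text and surrounding parameter values are placeholders.

```ruby
# Illustrative job_state_time_limit_actions entry: cancel jobs that sit at
# the head of the queue in RUNNABLE for one hour. Values are placeholders;
# max_time_seconds must be between 600 (10 minutes) and 86_400 (24 hours).
job_state_time_limit_actions = [
  {
    reason: "Insufficient capacity",  # recorded as the cancellation reason
    state: "RUNNABLE",                # the only accepted state
    max_time_seconds: 3600,           # one hour at the head of the queue
    action: "CANCEL",                 # the only accepted action
  },
]

# Passed to create_job_queue alongside the other required parameters, e.g.:
# client.create_job_queue(job_queue_name: "MyQueue", priority: 1,
#                         compute_environment_order: [...],
#                         job_state_time_limit_actions: job_state_time_limit_actions)
```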
Returns:
-
(Types::CreateJobQueueResponse)
—
Returns a response object which responds to the following methods:
-
#job_queue_name => String
-
#job_queue_arn => String
-
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 1068

def create_job_queue(params = {}, options = {})
  req = build_request(:create_job_queue, params)
  req.send_request(options)
end
#create_scheduling_policy(params = {}) ⇒ Types::CreateSchedulingPolicyResponse
Creates a Batch scheduling policy.
Examples:
Request syntax with placeholder values
resp = client.create_scheduling_policy({
name: "String", # required
fairshare_policy: {
share_decay_seconds: 1,
compute_reservation: 1,
share_distribution: [
{
share_identifier: "String", # required
weight_factor: 1.0,
},
],
},
tags: {
"TagKey" => "TagValue",
},
})
Response structure
resp.name #=> String
resp.arn #=> String
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:name
(required, String)
—
The name of the scheduling policy. It can be up to 128 letters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
-
:fairshare_policy
(Types::FairsharePolicy)
—
The fair share policy of the scheduling policy.
-
:tags
(Hash<String,String>)
—
The tags that you apply to the scheduling policy to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see [Tagging Amazon Web Services Resources] in *Amazon Web Services General Reference*.
These tags can be updated or removed using the [TagResource][2] and [UntagResource][3] API operations.
[1]: docs.aws.amazon.com/general/latest/gr/aws_tagging.html [2]: docs.aws.amazon.com/batch/latest/APIReference/API_TagResource.html [3]: docs.aws.amazon.com/batch/latest/APIReference/API_UntagResource.html
Returns:
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 1131

def create_scheduling_policy(params = {}, options = {})
  req = build_request(:create_scheduling_policy, params)
  req.send_request(options)
end
#delete_compute_environment(params = {}) ⇒ Struct
Deletes a Batch compute environment.
Before you can delete a compute environment, you must set its state to `DISABLED` with the UpdateComputeEnvironment API operation and disassociate it from any job queues with the UpdateJobQueue API operation. Compute environments that use Fargate resources must terminate all active jobs on that compute environment before deleting the compute environment. If this isn’t done, the compute environment enters an invalid state.
Examples:
Example: To delete a compute environment
# This example deletes the P2OnDemand compute environment.
resp = client.delete_compute_environment({
compute_environment: "P2OnDemand",
})
resp.to_h outputs the following:
{
}
Request syntax with placeholder values
resp = client.delete_compute_environment({
compute_environment: "String", # required
})
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:compute_environment
(required, String)
—
The name or Amazon Resource Name (ARN) of the compute environment to delete.
Returns:
-
(Struct)
—
Returns an empty response.
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 1175

def delete_compute_environment(params = {}, options = {})
  req = build_request(:delete_compute_environment, params)
  req.send_request(options)
end
#delete_job_queue(params = {}) ⇒ Struct
Deletes the specified job queue. You must first disable submissions for a queue with the UpdateJobQueue operation. All jobs in the queue are eventually terminated when you delete a job queue. The jobs are terminated at a rate of about 16 jobs each second.
It’s not necessary to disassociate compute environments from a queue before submitting a `DeleteJobQueue` request.
Examples:
Example: To delete a job queue
# This example deletes the GPGPU job queue.
resp = client.delete_job_queue({
job_queue: "GPGPU",
})
resp.to_h outputs the following:
{
}
Request syntax with placeholder values
resp = client.delete_job_queue({
job_queue: "String", # required
})
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:job_queue
(required, String)
—
The short name or full Amazon Resource Name (ARN) of the queue to delete.
Returns:
-
(Struct)
—
Returns an empty response.
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 1217

def delete_job_queue(params = {}, options = {})
  req = build_request(:delete_job_queue, params)
  req.send_request(options)
end
#delete_scheduling_policy(params = {}) ⇒ Struct
Deletes the specified scheduling policy.
You can’t delete a scheduling policy that’s used in any job queues.
Examples:
Request syntax with placeholder values
resp = client.delete_scheduling_policy({
arn: "String", # required
})
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:arn
(required, String)
—
The Amazon Resource Name (ARN) of the scheduling policy to delete.
Returns:
-
(Struct)
—
Returns an empty response.
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 1241

def delete_scheduling_policy(params = {}, options = {})
  req = build_request(:delete_scheduling_policy, params)
  req.send_request(options)
end
#deregister_job_definition(params = {}) ⇒ Struct
Deregisters a Batch job definition. Job definitions are permanently deleted after 180 days.
Examples:
Example: To deregister a job definition
# This example deregisters a job definition called sleep10.
resp = client.deregister_job_definition({
job_definition: "sleep10",
})
resp.to_h outputs the following:
{
}
Request syntax with placeholder values
resp = client.deregister_job_definition({
job_definition: "String", # required
})
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:job_definition
(required, String)
—
The name and revision (`name:revision`) or full Amazon Resource Name (ARN) of the job definition to deregister.
Returns:
-
(Struct)
—
Returns an empty response.
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 1278

def deregister_job_definition(params = {}, options = {})
  req = build_request(:deregister_job_definition, params)
  req.send_request(options)
end
#describe_compute_environments(params = {}) ⇒ Types::DescribeComputeEnvironmentsResponse
Describes one or more of your compute environments.
If you’re using an unmanaged compute environment, you can use the `DescribeComputeEnvironments` operation to determine the `ecsClusterArn` into which you launch your Amazon ECS container instances.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
Examples:
Example: To describe a compute environment
# This example describes the P2OnDemand compute environment.
resp = client.describe_compute_environments({
compute_environments: [
"P2OnDemand",
],
})
resp.to_h outputs the following:
{
compute_environments: [
{
type: "MANAGED",
compute_environment_arn: "arn:aws:batch:us-east-1:012345678910:compute-environment/P2OnDemand",
compute_environment_name: "P2OnDemand",
compute_resources: {
type: "EC2",
desiredv_cpus: 48,
ec2_key_pair: "id_rsa",
instance_role: "ecsInstanceRole",
instance_types: [
"p2",
],
maxv_cpus: 128,
minv_cpus: 0,
security_group_ids: [
"sg-cf5093b2",
],
subnets: [
"subnet-220c0e0a",
"subnet-1a95556d",
"subnet-978f6dce",
],
tags: {
"Name" => "Batch Instance - P2OnDemand",
},
},
ecs_cluster_arn: "arn:aws:ecs:us-east-1:012345678910:cluster/P2OnDemand_Batch_2c06f29d-d1fe-3a49-879d-42394c86effc",
service_role: "arn:aws:iam::012345678910:role/AWSBatchServiceRole",
state: "ENABLED",
status: "VALID",
status_reason: "ComputeEnvironment Healthy",
},
],
}
Request syntax with placeholder values
resp = client.describe_compute_environments({
compute_environments: ["String"],
max_results: 1,
next_token: "String",
})
Response structure
resp.compute_environments #=> Array
resp.compute_environments[0].compute_environment_name #=> String
resp.compute_environments[0].compute_environment_arn #=> String
resp.compute_environments[0].unmanagedv_cpus #=> Integer
resp.compute_environments[0].ecs_cluster_arn #=> String
resp.compute_environments[0].tags #=> Hash
resp.compute_environments[0].tags["TagKey"] #=> String
resp.compute_environments[0].type #=> String, one of "MANAGED", "UNMANAGED"
resp.compute_environments[0].state #=> String, one of "ENABLED", "DISABLED"
resp.compute_environments[0].status #=> String, one of "CREATING", "UPDATING", "DELETING", "DELETED", "VALID", "INVALID"
resp.compute_environments[0].status_reason #=> String
resp.compute_environments[0].compute_resources.type #=> String, one of "EC2", "SPOT", "FARGATE", "FARGATE_SPOT"
resp.compute_environments[0].compute_resources.allocation_strategy #=> String, one of "BEST_FIT", "BEST_FIT_PROGRESSIVE", "SPOT_CAPACITY_OPTIMIZED", "SPOT_PRICE_CAPACITY_OPTIMIZED"
resp.compute_environments[0].compute_resources.minv_cpus #=> Integer
resp.compute_environments[0].compute_resources.maxv_cpus #=> Integer
resp.compute_environments[0].compute_resources.desiredv_cpus #=> Integer
resp.compute_environments[0].compute_resources.instance_types #=> Array
resp.compute_environments[0].compute_resources.instance_types[0] #=> String
resp.compute_environments[0].compute_resources.image_id #=> String
resp.compute_environments[0].compute_resources.subnets #=> Array
resp.compute_environments[0].compute_resources.subnets[0] #=> String
resp.compute_environments[0].compute_resources.security_group_ids #=> Array
resp.compute_environments[0].compute_resources.security_group_ids[0] #=> String
resp.compute_environments[0].compute_resources.ec2_key_pair #=> String
resp.compute_environments[0].compute_resources.instance_role #=> String
resp.compute_environments[0].compute_resources.tags #=> Hash
resp.compute_environments[0].compute_resources.tags["String"] #=> String
resp.compute_environments[0].compute_resources.placement_group #=> String
resp.compute_environments[0].compute_resources.bid_percentage #=> Integer
resp.compute_environments[0].compute_resources.spot_iam_fleet_role #=> String
resp.compute_environments[0].compute_resources.launch_template.launch_template_id #=> String
resp.compute_environments[0].compute_resources.launch_template.launch_template_name #=> String
resp.compute_environments[0].compute_resources.launch_template.version #=> String
resp.compute_environments[0].compute_resources.ec2_configuration #=> Array
resp.compute_environments[0].compute_resources.ec2_configuration[0].image_type #=> String
resp.compute_environments[0].compute_resources.ec2_configuration[0].image_id_override #=> String
resp.compute_environments[0].compute_resources.ec2_configuration[0].image_kubernetes_version #=> String
resp.compute_environments[0].service_role #=> String
resp.compute_environments[0].update_policy.terminate_jobs_on_update #=> Boolean
resp.compute_environments[0].update_policy.job_execution_timeout_minutes #=> Integer
resp.compute_environments[0].eks_configuration.eks_cluster_arn #=> String
resp.compute_environments[0].eks_configuration.kubernetes_namespace #=> String
resp.compute_environments[0].container_orchestration_type #=> String, one of "ECS", "EKS"
resp.compute_environments[0].uuid #=> String
resp.compute_environments[0].context #=> String
resp.next_token #=> String
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:compute_environments
(Array<String>)
—
A list of up to 100 compute environment names or full Amazon Resource Name (ARN) entries.
-
:max_results
(Integer)
—
The maximum number of compute environment results returned by `DescribeComputeEnvironments` in paginated output. When this parameter is used, `DescribeComputeEnvironments` only returns `maxResults` results in a single page along with a `nextToken` response element. The remaining results of the initial request can be seen by sending another `DescribeComputeEnvironments` request with the returned `nextToken` value. This value can be between 1 and 100. If this parameter isn’t used, then `DescribeComputeEnvironments` returns up to 100 results and a `nextToken` value if applicable.
-
:next_token
(String)
—
The `nextToken` value returned from a previous paginated `DescribeComputeEnvironments` request where `maxResults` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value. This value is `null` when there are no more results to return.
<note markdown="1"> Treat this token as an opaque identifier that’s only used to retrieve the next items in a list and not for other programmatic purposes.
</note>
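The token protocol described above is the standard paginate-until-nil loop. Here is a minimal pure-Ruby sketch with a stand-in `fetch_page` (a hypothetical placeholder for `client.describe_compute_environments(next_token: token)`):

```ruby
# A stand-in for client.describe_compute_environments(next_token: token):
# returns two pages of stub data, the second with next_token: nil.
PAGES = {
  nil     => { compute_environments: ["P2OnDemand"], next_token: "tok-1" },
  "tok-1" => { compute_environments: ["M4Spot"],     next_token: nil },
}.freeze

def fetch_page(next_token)
  PAGES.fetch(next_token)
end

# Collect results across pages by following next_token until it is nil.
environments = []
token = nil
loop do
  page = fetch_page(token)
  environments.concat(page[:compute_environments])
  token = page[:next_token]
  break if token.nil?
end

environments # => ["P2OnDemand", "M4Spot"]
```

With the real client you rarely need to write this loop yourself: the returned response is Enumerable, so `client.describe_compute_environments.each { |page| ... }` walks the pages for you.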
Returns:
-
(Types::DescribeComputeEnvironmentsResponse)
—
Returns a response object which responds to the following methods:
-
#compute_environments => Array<Types::ComputeEnvironmentDetail>
-
#next_token => String
-
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 1434

def describe_compute_environments(params = {}, options = {})
  req = build_request(:describe_compute_environments, params)
  req.send_request(options)
end
#describe_job_definitions(params = {}) ⇒ Types::DescribeJobDefinitionsResponse
Describes a list of job definitions. You can specify a `status` (such as `ACTIVE`) to only return job definitions that match that status.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
Examples:
Example: To describe active job definitions
# This example describes all of your active job definitions.
resp = client.describe_job_definitions({
status: "ACTIVE",
})
resp.to_h outputs the following:
{
job_definitions: [
{
type: "container",
container_properties: {
command: [
"sleep",
"60",
],
environment: [
],
image: "busybox",
mount_points: [
],
resource_requirements: [
{
type: "MEMORY",
value: "128",
},
{
type: "VCPU",
value: "1",
},
],
ulimits: [
],
volumes: [
],
},
job_definition_arn: "arn:aws:batch:us-east-1:012345678910:job-definition/sleep60:1",
job_definition_name: "sleep60",
revision: 1,
status: "ACTIVE",
},
],
}
Request syntax with placeholder values
resp = client.describe_job_definitions({
job_definitions: ["String"],
max_results: 1,
job_definition_name: "String",
status: "String",
next_token: "String",
})
Response structure
resp.job_definitions #=> Array
resp.job_definitions[0].job_definition_name #=> String
resp.job_definitions[0].job_definition_arn #=> String
resp.job_definitions[0].revision #=> Integer
resp.job_definitions[0].status #=> String
resp.job_definitions[0].type #=> String
resp.job_definitions[0].scheduling_priority #=> Integer
resp.job_definitions[0].parameters #=> Hash
resp.job_definitions[0].parameters["String"] #=> String
resp.job_definitions[0].retry_strategy.attempts #=> Integer
resp.job_definitions[0].retry_strategy.evaluate_on_exit #=> Array
resp.job_definitions[0].retry_strategy.evaluate_on_exit[0].on_status_reason #=> String
resp.job_definitions[0].retry_strategy.evaluate_on_exit[0].on_reason #=> String
resp.job_definitions[0].retry_strategy.evaluate_on_exit[0].on_exit_code #=> String
resp.job_definitions[0].retry_strategy.evaluate_on_exit[0].action #=> String, one of "RETRY", "EXIT"
resp.job_definitions[0].container_properties.image #=> String
resp.job_definitions[0].container_properties.vcpus #=> Integer
resp.job_definitions[0].container_properties.memory #=> Integer
resp.job_definitions[0].container_properties.command #=> Array
resp.job_definitions[0].container_properties.command[0] #=> String
resp.job_definitions[0].container_properties.job_role_arn #=> String
resp.job_definitions[0].container_properties.execution_role_arn #=> String
resp.job_definitions[0].container_properties.volumes #=> Array
resp.job_definitions[0].container_properties.volumes[0].host.source_path #=> String
resp.job_definitions[0].container_properties.volumes[0].name #=> String
resp.job_definitions[0].container_properties.volumes[0].efs_volume_configuration.file_system_id #=> String
resp.job_definitions[0].container_properties.volumes[0].efs_volume_configuration.root_directory #=> String
resp.job_definitions[0].container_properties.volumes[0].efs_volume_configuration.transit_encryption #=> String, one of "ENABLED", "DISABLED"
resp.job_definitions[0].container_properties.volumes[0].efs_volume_configuration.transit_encryption_port #=> Integer
resp.job_definitions[0].container_properties.volumes[0].efs_volume_configuration.authorization_config.access_point_id #=> String
resp.job_definitions[0].container_properties.volumes[0].efs_volume_configuration.authorization_config.iam #=> String, one of "ENABLED", "DISABLED"
resp.job_definitions[0].container_properties.environment #=> Array
resp.job_definitions[0].container_properties.environment[0].name #=> String
resp.job_definitions[0].container_properties.environment[0].value #=> String
resp.job_definitions[0].container_properties.mount_points #=> Array
resp.job_definitions[0].container_properties.mount_points[0].container_path #=> String
resp.job_definitions[0].container_properties.mount_points[0].read_only #=> Boolean
resp.job_definitions[0].container_properties.mount_points[0].source_volume #=> String
resp.job_definitions[0].container_properties.readonly_root_filesystem #=> Boolean
resp.job_definitions[0].container_properties.privileged #=> Boolean
resp.job_definitions[0].container_properties.ulimits #=> Array
resp.job_definitions[0].container_properties.ulimits[0].hard_limit #=> Integer
resp.job_definitions[0].container_properties.ulimits[0].name #=> String
resp.job_definitions[0].container_properties.ulimits[0].soft_limit #=> Integer
resp.job_definitions[0].container_properties.user #=> String
resp.job_definitions[0].container_properties.instance_type #=> String
resp.job_definitions[0].container_properties.resource_requirements #=> Array
resp.job_definitions[0].container_properties.resource_requirements[0].value #=> String
resp.job_definitions[0].container_properties.resource_requirements[0].type #=> String, one of "GPU", "VCPU", "MEMORY"
resp.job_definitions[0].container_properties.linux_parameters.devices #=> Array
resp.job_definitions[0].container_properties.linux_parameters.devices[0].host_path #=> String
resp.job_definitions[0].container_properties.linux_parameters.devices[0].container_path #=> String
resp.job_definitions[0].container_properties.linux_parameters.devices[0].permissions #=> Array
resp.job_definitions[0].container_properties.linux_parameters.devices[0].permissions[0] #=> String, one of "READ", "WRITE", "MKNOD"
resp.job_definitions[0].container_properties.linux_parameters.init_process_enabled #=> Boolean
resp.job_definitions[0].container_properties.linux_parameters.shared_memory_size #=> Integer
resp.job_definitions[0].container_properties.linux_parameters.tmpfs #=> Array
resp.job_definitions[0].container_properties.linux_parameters.tmpfs[0].container_path #=> String
resp.job_definitions[0].container_properties.linux_parameters.tmpfs[0].size #=> Integer
resp.job_definitions[0].container_properties.linux_parameters.tmpfs[0].mount_options #=> Array
resp.job_definitions[0].container_properties.linux_parameters.tmpfs[0].mount_options[0] #=> String
resp.job_definitions[0].container_properties.linux_parameters.max_swap #=> Integer
resp.job_definitions[0].container_properties.linux_parameters.swappiness #=> Integer
resp.job_definitions[0].container_properties.log_configuration.log_driver #=> String, one of "json-file", "syslog", "journald", "gelf", "fluentd", "awslogs", "splunk"
resp.job_definitions[0].container_properties.log_configuration.options #=> Hash
resp.job_definitions[0].container_properties.log_configuration.options["String"] #=> String
resp.job_definitions[0].container_properties.log_configuration.secret_options #=> Array
resp.job_definitions[0].container_properties.log_configuration.secret_options[0].name #=> String
resp.job_definitions[0].container_properties.log_configuration.secret_options[0].value_from #=> String
resp.job_definitions[0].container_properties.secrets #=> Array
resp.job_definitions[0].container_properties.secrets[0].name #=> String
resp.job_definitions[0].container_properties.secrets[0].value_from #=> String
resp.job_definitions[0].container_properties.network_configuration.assign_public_ip #=> String, one of "ENABLED", "DISABLED"
resp.job_definitions[0].container_properties.fargate_platform_configuration.platform_version #=> String
resp.job_definitions[0].container_properties.ephemeral_storage.size_in_gi_b #=> Integer
resp.job_definitions[0].container_properties.runtime_platform.operating_system_family #=> String
resp.job_definitions[0].container_properties.runtime_platform.cpu_architecture #=> String
resp.job_definitions[0].container_properties.repository_credentials.credentials_parameter #=> String
resp.job_definitions[0].timeout.attempt_duration_seconds #=> Integer
resp.job_definitions[0].node_properties.num_nodes #=> Integer
resp.job_definitions[0].node_properties.main_node #=> Integer
resp.job_definitions[0].node_properties.node_range_properties #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].target_nodes #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.image #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.vcpus #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].container.memory #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].container.command #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].container.command[0] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.job_role_arn #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.execution_role_arn #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.volumes #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].container.volumes[0].host.source_path #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.volumes[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.volumes[0].efs_volume_configuration.file_system_id #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.volumes[0].efs_volume_configuration.root_directory #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.volumes[0].efs_volume_configuration.transit_encryption #=> String, one of "ENABLED", "DISABLED"
resp.job_definitions[0].node_properties.node_range_properties[0].container.volumes[0].efs_volume_configuration.transit_encryption_port #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].container.volumes[0].efs_volume_configuration.authorization_config.access_point_id #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.volumes[0].efs_volume_configuration.authorization_config.iam #=> String, one of "ENABLED", "DISABLED"
resp.job_definitions[0].node_properties.node_range_properties[0].container.environment #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].container.environment[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.environment[0].value #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.mount_points #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].container.mount_points[0].container_path #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.mount_points[0].read_only #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].container.mount_points[0].source_volume #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.readonly_root_filesystem #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].container.privileged #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].container.ulimits #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].container.ulimits[0].hard_limit #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].container.ulimits[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.ulimits[0].soft_limit #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].container.user #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.instance_type #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.resource_requirements #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].container.resource_requirements[0].value #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.resource_requirements[0].type #=> String, one of "GPU", "VCPU", "MEMORY"
resp.job_definitions[0].node_properties.node_range_properties[0].container.linux_parameters.devices #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].container.linux_parameters.devices[0].host_path #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.linux_parameters.devices[0].container_path #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.linux_parameters.devices[0].permissions #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].container.linux_parameters.devices[0].permissions[0] #=> String, one of "READ", "WRITE", "MKNOD"
resp.job_definitions[0].node_properties.node_range_properties[0].container.linux_parameters.init_process_enabled #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].container.linux_parameters.shared_memory_size #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].container.linux_parameters.tmpfs #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].container.linux_parameters.tmpfs[0].container_path #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.linux_parameters.tmpfs[0].size #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].container.linux_parameters.tmpfs[0].mount_options #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].container.linux_parameters.tmpfs[0].mount_options[0] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.linux_parameters.max_swap #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].container.linux_parameters.swappiness #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].container.log_configuration.log_driver #=> String, one of "json-file", "syslog", "journald", "gelf", "fluentd", "awslogs", "splunk"
resp.job_definitions[0].node_properties.node_range_properties[0].container.log_configuration.options #=> Hash
resp.job_definitions[0].node_properties.node_range_properties[0].container.log_configuration.options["String"] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.log_configuration.secret_options #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].container.log_configuration.secret_options[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.log_configuration.secret_options[0].value_from #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.secrets #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].container.secrets[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.secrets[0].value_from #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.network_configuration.assign_public_ip #=> String, one of "ENABLED", "DISABLED"
resp.job_definitions[0].node_properties.node_range_properties[0].container.fargate_platform_configuration.platform_version #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.ephemeral_storage.size_in_gi_b #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].container.runtime_platform.operating_system_family #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.runtime_platform.cpu_architecture #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].container.repository_credentials.credentials_parameter #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].instance_types #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].instance_types[0] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].command #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].command[0] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].depends_on #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].depends_on[0].container_name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].depends_on[0].condition #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].environment #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].environment[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].environment[0].value #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].essential #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].image #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].host_path #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].container_path #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].permissions #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].permissions[0] #=> String, one of "READ", "WRITE", "MKNOD"
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.init_process_enabled #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.shared_memory_size #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].container_path #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].size #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].mount_options #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].mount_options[0] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.max_swap #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.swappiness #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].log_configuration.log_driver #=> String, one of "json-file", "syslog", "journald", "gelf", "fluentd", "awslogs", "splunk"
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].log_configuration.options #=> Hash
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].log_configuration.options["String"] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].log_configuration.secret_options #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].log_configuration.secret_options[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].log_configuration.secret_options[0].value_from #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].mount_points #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].mount_points[0].container_path #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].mount_points[0].read_only #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].mount_points[0].source_volume #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].privileged #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].readonly_root_filesystem #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].repository_credentials.credentials_parameter #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].resource_requirements #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].resource_requirements[0].value #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].resource_requirements[0].type #=> String, one of "GPU", "VCPU", "MEMORY"
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].secrets #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].secrets[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].secrets[0].value_from #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].ulimits #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].ulimits[0].hard_limit #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].ulimits[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].ulimits[0].soft_limit #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].user #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].ephemeral_storage.size_in_gi_b #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].execution_role_arn #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].platform_version #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].ipc_mode #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].task_role_arn #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].pid_mode #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].network_configuration.assign_public_ip #=> String, one of "ENABLED", "DISABLED"
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].runtime_platform.operating_system_family #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].runtime_platform.cpu_architecture #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].host.source_path #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.file_system_id #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.root_directory #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.transit_encryption #=> String, one of "ENABLED", "DISABLED"
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.transit_encryption_port #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.authorization_config.access_point_id #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.authorization_config.iam #=> String, one of "ENABLED", "DISABLED"
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.service_account_name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.host_network #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.dns_policy #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.image_pull_secrets #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.image_pull_secrets[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].image #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].image_pull_policy #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].command #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].command[0] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].args #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].args[0] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].env #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].env[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].env[0].value #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].resources.limits #=> Hash
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].resources.limits["String"] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].resources.requests #=> Hash
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].resources.requests["String"] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].volume_mounts #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].volume_mounts[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].volume_mounts[0].mount_path #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].volume_mounts[0].read_only #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].security_context.run_as_user #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].security_context.run_as_group #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].security_context.privileged #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].security_context.allow_privilege_escalation #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].security_context.read_only_root_filesystem #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].security_context.run_as_non_root #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].image #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].image_pull_policy #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].command #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].command[0] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].args #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].args[0] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].env #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].env[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].env[0].value #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].resources.limits #=> Hash
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].resources.limits["String"] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].resources.requests #=> Hash
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].resources.requests["String"] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].volume_mounts #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].volume_mounts[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].volume_mounts[0].mount_path #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].volume_mounts[0].read_only #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].security_context.run_as_user #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].security_context.run_as_group #=> Integer
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].security_context.privileged #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].security_context.allow_privilege_escalation #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].security_context.read_only_root_filesystem #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].security_context.run_as_non_root #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.volumes #=> Array
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.volumes[0].name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.volumes[0].host_path.path #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.volumes[0].empty_dir.medium #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.volumes[0].empty_dir.size_limit #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.volumes[0].secret.secret_name #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.volumes[0].secret.optional #=> Boolean
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.metadata.labels #=> Hash
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.metadata.labels["String"] #=> String
resp.job_definitions[0].node_properties.node_range_properties[0].eks_properties.pod_properties.share_process_namespace #=> Boolean
resp.job_definitions[0].tags #=> Hash
resp.job_definitions[0].tags["TagKey"] #=> String
resp.job_definitions[0].propagate_tags #=> Boolean
resp.job_definitions[0].platform_capabilities #=> Array
resp.job_definitions[0].platform_capabilities[0] #=> String, one of "EC2", "FARGATE"
resp.job_definitions[0].ecs_properties.task_properties #=> Array
resp.job_definitions[0].ecs_properties.task_properties[0].containers #=> Array
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].command #=> Array
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].command[0] #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].depends_on #=> Array
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].depends_on[0].container_name #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].depends_on[0].condition #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].environment #=> Array
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].environment[0].name #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].environment[0].value #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].essential #=> Boolean
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].image #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices #=> Array
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].host_path #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].container_path #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].permissions #=> Array
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].permissions[0] #=> String, one of "READ", "WRITE", "MKNOD"
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].linux_parameters.init_process_enabled #=> Boolean
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].linux_parameters.shared_memory_size #=> Integer
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs #=> Array
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].container_path #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].size #=> Integer
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].mount_options #=> Array
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].mount_options[0] #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].linux_parameters.max_swap #=> Integer
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].linux_parameters.swappiness #=> Integer
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].log_configuration.log_driver #=> String, one of "json-file", "syslog", "journald", "gelf", "fluentd", "awslogs", "splunk"
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].log_configuration.options #=> Hash
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].log_configuration.options["String"] #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].log_configuration.secret_options #=> Array
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].log_configuration.secret_options[0].name #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].log_configuration.secret_options[0].value_from #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].mount_points #=> Array
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].mount_points[0].container_path #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].mount_points[0].read_only #=> Boolean
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].mount_points[0].source_volume #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].name #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].privileged #=> Boolean
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].readonly_root_filesystem #=> Boolean
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].repository_credentials.credentials_parameter #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].resource_requirements #=> Array
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].resource_requirements[0].value #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].resource_requirements[0].type #=> String, one of "GPU", "VCPU", "MEMORY"
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].secrets #=> Array
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].secrets[0].name #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].secrets[0].value_from #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].ulimits #=> Array
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].ulimits[0].hard_limit #=> Integer
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].ulimits[0].name #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].ulimits[0].soft_limit #=> Integer
resp.job_definitions[0].ecs_properties.task_properties[0].containers[0].user #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].ephemeral_storage.size_in_gi_b #=> Integer
resp.job_definitions[0].ecs_properties.task_properties[0].execution_role_arn #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].platform_version #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].ipc_mode #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].task_role_arn #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].pid_mode #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].network_configuration.assign_public_ip #=> String, one of "ENABLED", "DISABLED"
resp.job_definitions[0].ecs_properties.task_properties[0].runtime_platform.operating_system_family #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].runtime_platform.cpu_architecture #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].volumes #=> Array
resp.job_definitions[0].ecs_properties.task_properties[0].volumes[0].host.source_path #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].volumes[0].name #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.file_system_id #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.root_directory #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.transit_encryption #=> String, one of "ENABLED", "DISABLED"
resp.job_definitions[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.transit_encryption_port #=> Integer
resp.job_definitions[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.authorization_config.access_point_id #=> String
resp.job_definitions[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.authorization_config.iam #=> String, one of "ENABLED", "DISABLED"
resp.job_definitions[0].eks_properties.pod_properties.service_account_name #=> String
resp.job_definitions[0].eks_properties.pod_properties.host_network #=> Boolean
resp.job_definitions[0].eks_properties.pod_properties.dns_policy #=> String
resp.job_definitions[0].eks_properties.pod_properties.image_pull_secrets #=> Array
resp.job_definitions[0].eks_properties.pod_properties.image_pull_secrets[0].name #=> String
resp.job_definitions[0].eks_properties.pod_properties.containers #=> Array
resp.job_definitions[0].eks_properties.pod_properties.containers[0].name #=> String
resp.job_definitions[0].eks_properties.pod_properties.containers[0].image #=> String
resp.job_definitions[0].eks_properties.pod_properties.containers[0].image_pull_policy #=> String
resp.job_definitions[0].eks_properties.pod_properties.containers[0].command #=> Array
resp.job_definitions[0].eks_properties.pod_properties.containers[0].command[0] #=> String
resp.job_definitions[0].eks_properties.pod_properties.containers[0].args #=> Array
resp.job_definitions[0].eks_properties.pod_properties.containers[0].args[0] #=> String
resp.job_definitions[0].eks_properties.pod_properties.containers[0].env #=> Array
resp.job_definitions[0].eks_properties.pod_properties.containers[0].env[0].name #=> String
resp.job_definitions[0].eks_properties.pod_properties.containers[0].env[0].value #=> String
resp.job_definitions[0].eks_properties.pod_properties.containers[0].resources.limits #=> Hash
resp.job_definitions[0].eks_properties.pod_properties.containers[0].resources.limits["String"] #=> String
resp.job_definitions[0].eks_properties.pod_properties.containers[0].resources.requests #=> Hash
resp.job_definitions[0].eks_properties.pod_properties.containers[0].resources.requests["String"] #=> String
resp.job_definitions[0].eks_properties.pod_properties.containers[0].volume_mounts #=> Array
resp.job_definitions[0].eks_properties.pod_properties.containers[0].volume_mounts[0].name #=> String
resp.job_definitions[0].eks_properties.pod_properties.containers[0].volume_mounts[0].mount_path #=> String
resp.job_definitions[0].eks_properties.pod_properties.containers[0].volume_mounts[0].read_only #=> Boolean
resp.job_definitions[0].eks_properties.pod_properties.containers[0].security_context.run_as_user #=> Integer
resp.job_definitions[0].eks_properties.pod_properties.containers[0].security_context.run_as_group #=> Integer
resp.job_definitions[0].eks_properties.pod_properties.containers[0].security_context.privileged #=> Boolean
resp.job_definitions[0].eks_properties.pod_properties.containers[0].security_context.allow_privilege_escalation #=> Boolean
resp.job_definitions[0].eks_properties.pod_properties.containers[0].security_context.read_only_root_filesystem #=> Boolean
resp.job_definitions[0].eks_properties.pod_properties.containers[0].security_context.run_as_non_root #=> Boolean
resp.job_definitions[0].eks_properties.pod_properties.init_containers #=> Array
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].name #=> String
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].image #=> String
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].image_pull_policy #=> String
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].command #=> Array
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].command[0] #=> String
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].args #=> Array
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].args[0] #=> String
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].env #=> Array
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].env[0].name #=> String
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].env[0].value #=> String
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].resources.limits #=> Hash
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].resources.limits["String"] #=> String
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].resources.requests #=> Hash
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].resources.requests["String"] #=> String
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].volume_mounts #=> Array
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].volume_mounts[0].name #=> String
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].volume_mounts[0].mount_path #=> String
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].volume_mounts[0].read_only #=> Boolean
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].security_context.run_as_user #=> Integer
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].security_context.run_as_group #=> Integer
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].security_context.privileged #=> Boolean
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].security_context.allow_privilege_escalation #=> Boolean
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].security_context.read_only_root_filesystem #=> Boolean
resp.job_definitions[0].eks_properties.pod_properties.init_containers[0].security_context.run_as_non_root #=> Boolean
resp.job_definitions[0].eks_properties.pod_properties.volumes #=> Array
resp.job_definitions[0].eks_properties.pod_properties.volumes[0].name #=> String
resp.job_definitions[0].eks_properties.pod_properties.volumes[0].host_path.path #=> String
resp.job_definitions[0].eks_properties.pod_properties.volumes[0].empty_dir.medium #=> String
resp.job_definitions[0].eks_properties.pod_properties.volumes[0].empty_dir.size_limit #=> String
resp.job_definitions[0].eks_properties.pod_properties.volumes[0].secret.secret_name #=> String
resp.job_definitions[0].eks_properties.pod_properties.volumes[0].secret.optional #=> Boolean
resp.job_definitions[0].eks_properties.pod_properties.metadata.labels #=> Hash
resp.job_definitions[0].eks_properties.pod_properties.metadata.labels["String"] #=> String
resp.job_definitions[0].eks_properties.pod_properties.share_process_namespace #=> Boolean
resp.job_definitions[0].container_orchestration_type #=> String, one of "ECS", "EKS"
resp.next_token #=> String
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:job_definitions
(Array<String>)
—
A list of up to 100 job definitions. Each entry in the list can either be an ARN in the format `arn:aws:batch:$Region:$Account:job-definition/$JobDefinitionName:$Revision` or a short version using the form `$JobDefinitionName:$Revision`. This parameter can't be used with other parameters.
-
:max_results
(Integer)
—
The maximum number of results returned by `DescribeJobDefinitions` in paginated output. When this parameter is used, `DescribeJobDefinitions` only returns `maxResults` results in a single page and a `nextToken` response element. The remaining results of the initial request can be seen by sending another `DescribeJobDefinitions` request with the returned `nextToken` value. This value can be between 1 and 100. If this parameter isn't used, then `DescribeJobDefinitions` returns up to 100 results and a `nextToken` value if applicable.
-
:job_definition_name
(String)
—
The name of the job definition to describe.
-
:status
(String)
—
The status used to filter job definitions.
-
:next_token
(String)
—
The `nextToken` value returned from a previous paginated `DescribeJobDefinitions` request where `maxResults` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value. This value is `null` when there are no more results to return.
<note markdown="1"> Treat this token as an opaque identifier that's only used to retrieve the next items in a list and not for other programmatic purposes.
</note>
Returns:
-
(Types::DescribeJobDefinitionsResponse)
—
Returns a response object which responds to the following methods:
-
#job_definitions => Array<Types::JobDefinition>
-
#next_token => String
-
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 1973

def describe_job_definitions(params = {}, options = {})
  req = build_request(:describe_job_definitions, params)
  req.send_request(options)
end
#describe_job_queues(params = {}) ⇒ Types::DescribeJobQueuesResponse
Describes one or more of your job queues.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
Examples:
Example: To describe a job queue
# This example describes the HighPriority job queue.
resp = client.describe_job_queues({
job_queues: [
"HighPriority",
],
})
resp.to_h outputs the following:
{
job_queues: [
{
compute_environment_order: [
{
compute_environment: "arn:aws:batch:us-east-1:012345678910:compute-environment/C4OnDemand",
order: 1,
},
],
job_queue_arn: "arn:aws:batch:us-east-1:012345678910:job-queue/HighPriority",
job_queue_name: "HighPriority",
priority: 1,
state: "ENABLED",
status: "VALID",
status_reason: "JobQueue Healthy",
},
],
}
Request syntax with placeholder values
resp = client.describe_job_queues({
job_queues: ["String"],
max_results: 1,
next_token: "String",
})
Response structure
resp.job_queues #=> Array
resp.job_queues[0].job_queue_name #=> String
resp.job_queues[0].job_queue_arn #=> String
resp.job_queues[0].state #=> String, one of "ENABLED", "DISABLED"
resp.job_queues[0].scheduling_policy_arn #=> String
resp.job_queues[0].status #=> String, one of "CREATING", "UPDATING", "DELETING", "DELETED", "VALID", "INVALID"
resp.job_queues[0].status_reason #=> String
resp.job_queues[0].priority #=> Integer
resp.job_queues[0].compute_environment_order #=> Array
resp.job_queues[0].compute_environment_order[0].order #=> Integer
resp.job_queues[0].compute_environment_order[0].compute_environment #=> String
resp.job_queues[0].tags #=> Hash
resp.job_queues[0].tags["TagKey"] #=> String
resp.job_queues[0].job_state_time_limit_actions #=> Array
resp.job_queues[0].job_state_time_limit_actions[0].reason #=> String
resp.job_queues[0].job_state_time_limit_actions[0].state #=> String, one of "RUNNABLE"
resp.job_queues[0].job_state_time_limit_actions[0].max_time_seconds #=> Integer
resp.job_queues[0].job_state_time_limit_actions[0].action #=> String, one of "CANCEL"
resp.next_token #=> String
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:job_queues
(Array<String>)
—
A list of up to 100 queue names or full queue Amazon Resource Name (ARN) entries.
-
:max_results
(Integer)
—
The maximum number of results returned by `DescribeJobQueues` in paginated output. When this parameter is used, `DescribeJobQueues` only returns `maxResults` results in a single page and a `nextToken` response element. The remaining results of the initial request can be seen by sending another `DescribeJobQueues` request with the returned `nextToken` value. This value can be between 1 and 100. If this parameter isn't used, then `DescribeJobQueues` returns up to 100 results and a `nextToken` value if applicable.
-
:next_token
(String)
—
The `nextToken` value returned from a previous paginated `DescribeJobQueues` request where `maxResults` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value. This value is `null` when there are no more results to return.
<note markdown="1"> Treat this token as an opaque identifier that's only used to retrieve the next items in a list and not for other programmatic purposes.
</note>
Returns:
-
(Types::DescribeJobQueuesResponse)
—
Returns a response object which responds to the following methods:
-
#job_queues => Array<Types::JobQueueDetail>
-
#next_token => String
-
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 2078

def describe_job_queues(params = {}, options = {})
  req = build_request(:describe_job_queues, params)
  req.send_request(options)
end
#describe_jobs(params = {}) ⇒ Types::DescribeJobsResponse
Describes a list of Batch jobs.
Examples:
Example: To describe a specific job
# This example describes a job with the specified job ID.
resp = client.describe_jobs({
jobs: [
"24fa2d7a-64c4-49d2-8b47-f8da4fbde8e9",
],
})
resp.to_h outputs the following:
{
jobs: [
{
container: {
command: [
"sleep",
"60",
],
container_instance_arn: "arn:aws:ecs:us-east-1:012345678910:container-instance/5406d7cd-58bd-4b8f-9936-48d7c6b1526c",
environment: [
],
exit_code: 0,
image: "busybox",
memory: 128,
mount_points: [
],
ulimits: [
],
vcpus: 1,
volumes: [
],
},
created_at: 1480460782010,
depends_on: [
],
job_definition: "sleep60",
job_id: "24fa2d7a-64c4-49d2-8b47-f8da4fbde8e9",
job_name: "example",
job_queue: "arn:aws:batch:us-east-1:012345678910:job-queue/HighPriority",
parameters: {
},
started_at: 1480460816500,
status: "SUCCEEDED",
stopped_at: 1480460880699,
},
],
}
Request syntax with placeholder values
resp = client.describe_jobs({
jobs: ["String"], # required
})
Response structure
resp.jobs #=> Array
resp.jobs[0].job_arn #=> String
resp.jobs[0].job_name #=> String
resp.jobs[0].job_id #=> String
resp.jobs[0].job_queue #=> String
resp.jobs[0].status #=> String, one of "SUBMITTED", "PENDING", "RUNNABLE", "STARTING", "RUNNING", "SUCCEEDED", "FAILED"
resp.jobs[0].share_identifier #=> String
resp.jobs[0].scheduling_priority #=> Integer
resp.jobs[0].attempts #=> Array
resp.jobs[0].attempts[0].container.container_instance_arn #=> String
resp.jobs[0].attempts[0].container.task_arn #=> String
resp.jobs[0].attempts[0].container.exit_code #=> Integer
resp.jobs[0].attempts[0].container.reason #=> String
resp.jobs[0].attempts[0].container.log_stream_name #=> String
resp.jobs[0].attempts[0].container.network_interfaces #=> Array
resp.jobs[0].attempts[0].container.network_interfaces[0].attachment_id #=> String
resp.jobs[0].attempts[0].container.network_interfaces[0].ipv6_address #=> String
resp.jobs[0].attempts[0].container.network_interfaces[0].private_ipv_4_address #=> String
resp.jobs[0].attempts[0].started_at #=> Integer
resp.jobs[0].attempts[0].stopped_at #=> Integer
resp.jobs[0].attempts[0].status_reason #=> String
resp.jobs[0].attempts[0].task_properties #=> Array
resp.jobs[0].attempts[0].task_properties[0].container_instance_arn #=> String
resp.jobs[0].attempts[0].task_properties[0].task_arn #=> String
resp.jobs[0].attempts[0].task_properties[0].containers #=> Array
resp.jobs[0].attempts[0].task_properties[0].containers[0].exit_code #=> Integer
resp.jobs[0].attempts[0].task_properties[0].containers[0].name #=> String
resp.jobs[0].attempts[0].task_properties[0].containers[0].reason #=> String
resp.jobs[0].attempts[0].task_properties[0].containers[0].log_stream_name #=> String
resp.jobs[0].attempts[0].task_properties[0].containers[0].network_interfaces #=> Array
resp.jobs[0].attempts[0].task_properties[0].containers[0].network_interfaces[0].attachment_id #=> String
resp.jobs[0].attempts[0].task_properties[0].containers[0].network_interfaces[0].ipv6_address #=> String
resp.jobs[0].attempts[0].task_properties[0].containers[0].network_interfaces[0].private_ipv_4_address #=> String
resp.jobs[0].status_reason #=> String
resp.jobs[0].created_at #=> Integer
resp.jobs[0].retry_strategy.attempts #=> Integer
resp.jobs[0].retry_strategy.evaluate_on_exit #=> Array
resp.jobs[0].retry_strategy.evaluate_on_exit[0].on_status_reason #=> String
resp.jobs[0].retry_strategy.evaluate_on_exit[0].on_reason #=> String
resp.jobs[0].retry_strategy.evaluate_on_exit[0].on_exit_code #=> String
resp.jobs[0].retry_strategy.evaluate_on_exit[0].action #=> String, one of "RETRY", "EXIT"
resp.jobs[0].started_at #=> Integer
resp.jobs[0].stopped_at #=> Integer
resp.jobs[0].depends_on #=> Array
resp.jobs[0].depends_on[0].job_id #=> String
resp.jobs[0].depends_on[0].type #=> String, one of "N_TO_N", "SEQUENTIAL"
resp.jobs[0].job_definition #=> String
resp.jobs[0].parameters #=> Hash
resp.jobs[0].parameters["String"] #=> String
resp.jobs[0].container.image #=> String
resp.jobs[0].container.vcpus #=> Integer
resp.jobs[0].container.memory #=> Integer
resp.jobs[0].container.command #=> Array
resp.jobs[0].container.command[0] #=> String
resp.jobs[0].container.job_role_arn #=> String
resp.jobs[0].container.execution_role_arn #=> String
resp.jobs[0].container.volumes #=> Array
resp.jobs[0].container.volumes[0].host.source_path #=> String
resp.jobs[0].container.volumes[0].name #=> String
resp.jobs[0].container.volumes[0].efs_volume_configuration.file_system_id #=> String
resp.jobs[0].container.volumes[0].efs_volume_configuration.root_directory #=> String
resp.jobs[0].container.volumes[0].efs_volume_configuration.transit_encryption #=> String, one of "ENABLED", "DISABLED"
resp.jobs[0].container.volumes[0].efs_volume_configuration.transit_encryption_port #=> Integer
resp.jobs[0].container.volumes[0].efs_volume_configuration.authorization_config.access_point_id #=> String
resp.jobs[0].container.volumes[0].efs_volume_configuration.authorization_config.iam #=> String, one of "ENABLED", "DISABLED"
resp.jobs[0].container.environment #=> Array
resp.jobs[0].container.environment[0].name #=> String
resp.jobs[0].container.environment[0].value #=> String
resp.jobs[0].container.mount_points #=> Array
resp.jobs[0].container.mount_points[0].container_path #=> String
resp.jobs[0].container.mount_points[0].read_only #=> Boolean
resp.jobs[0].container.mount_points[0].source_volume #=> String
resp.jobs[0].container.readonly_root_filesystem #=> Boolean
resp.jobs[0].container.ulimits #=> Array
resp.jobs[0].container.ulimits[0].hard_limit #=> Integer
resp.jobs[0].container.ulimits[0].name #=> String
resp.jobs[0].container.ulimits[0].soft_limit #=> Integer
resp.jobs[0].container.privileged #=> Boolean
resp.jobs[0].container.user #=> String
resp.jobs[0].container.exit_code #=> Integer
resp.jobs[0].container.reason #=> String
resp.jobs[0].container.container_instance_arn #=> String
resp.jobs[0].container.task_arn #=> String
resp.jobs[0].container.log_stream_name #=> String
resp.jobs[0].container.instance_type #=> String
resp.jobs[0].container.network_interfaces #=> Array
resp.jobs[0].container.network_interfaces[0].attachment_id #=> String
resp.jobs[0].container.network_interfaces[0].ipv6_address #=> String
resp.jobs[0].container.network_interfaces[0].private_ipv_4_address #=> String
resp.jobs[0].container.resource_requirements #=> Array
resp.jobs[0].container.resource_requirements[0].value #=> String
resp.jobs[0].container.resource_requirements[0].type #=> String, one of "GPU", "VCPU", "MEMORY"
resp.jobs[0].container.linux_parameters.devices #=> Array
resp.jobs[0].container.linux_parameters.devices[0].host_path #=> String
resp.jobs[0].container.linux_parameters.devices[0].container_path #=> String
resp.jobs[0].container.linux_parameters.devices[0].permissions #=> Array
resp.jobs[0].container.linux_parameters.devices[0].permissions[0] #=> String, one of "READ", "WRITE", "MKNOD"
resp.jobs[0].container.linux_parameters.init_process_enabled #=> Boolean
resp.jobs[0].container.linux_parameters.shared_memory_size #=> Integer
resp.jobs[0].container.linux_parameters.tmpfs #=> Array
resp.jobs[0].container.linux_parameters.tmpfs[0].container_path #=> String
resp.jobs[0].container.linux_parameters.tmpfs[0].size #=> Integer
resp.jobs[0].container.linux_parameters.tmpfs[0].mount_options #=> Array
resp.jobs[0].container.linux_parameters.tmpfs[0].mount_options[0] #=> String
resp.jobs[0].container.linux_parameters.max_swap #=> Integer
resp.jobs[0].container.linux_parameters.swappiness #=> Integer
resp.jobs[0].container.log_configuration.log_driver #=> String, one of "json-file", "syslog", "journald", "gelf", "fluentd", "awslogs", "splunk"
resp.jobs[0].container.log_configuration.options #=> Hash
resp.jobs[0].container.log_configuration.options["String"] #=> String
resp.jobs[0].container.log_configuration.secret_options #=> Array
resp.jobs[0].container.log_configuration.secret_options[0].name #=> String
resp.jobs[0].container.log_configuration.secret_options[0].value_from #=> String
resp.jobs[0].container.secrets #=> Array
resp.jobs[0].container.secrets[0].name #=> String
resp.jobs[0].container.secrets[0].value_from #=> String
resp.jobs[0].container.network_configuration.assign_public_ip #=> String, one of "ENABLED", "DISABLED"
resp.jobs[0].container.fargate_platform_configuration.platform_version #=> String
resp.jobs[0].container.ephemeral_storage.size_in_gi_b #=> Integer
resp.jobs[0].container.runtime_platform.operating_system_family #=> String
resp.jobs[0].container.runtime_platform.cpu_architecture #=> String
resp.jobs[0].container.repository_credentials.credentials_parameter #=> String
resp.jobs[0].node_details.node_index #=> Integer
resp.jobs[0].node_details.is_main_node #=> Boolean
resp.jobs[0].node_properties.num_nodes #=> Integer
resp.jobs[0].node_properties.main_node #=> Integer
resp.jobs[0].node_properties.node_range_properties #=> Array
resp.jobs[0].node_properties.node_range_properties[0].target_nodes #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.image #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.vcpus #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].container.memory #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].container.command #=> Array
resp.jobs[0].node_properties.node_range_properties[0].container.command[0] #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.job_role_arn #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.execution_role_arn #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.volumes #=> Array
resp.jobs[0].node_properties.node_range_properties[0].container.volumes[0].host.source_path #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.volumes[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.volumes[0].efs_volume_configuration.file_system_id #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.volumes[0].efs_volume_configuration.root_directory #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.volumes[0].efs_volume_configuration.transit_encryption #=> String, one of "ENABLED", "DISABLED"
resp.jobs[0].node_properties.node_range_properties[0].container.volumes[0].efs_volume_configuration.transit_encryption_port #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].container.volumes[0].efs_volume_configuration.authorization_config.access_point_id #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.volumes[0].efs_volume_configuration.authorization_config.iam #=> String, one of "ENABLED", "DISABLED"
resp.jobs[0].node_properties.node_range_properties[0].container.environment #=> Array
resp.jobs[0].node_properties.node_range_properties[0].container.environment[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.environment[0].value #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.mount_points #=> Array
resp.jobs[0].node_properties.node_range_properties[0].container.mount_points[0].container_path #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.mount_points[0].read_only #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].container.mount_points[0].source_volume #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.readonly_root_filesystem #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].container.privileged #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].container.ulimits #=> Array
resp.jobs[0].node_properties.node_range_properties[0].container.ulimits[0].hard_limit #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].container.ulimits[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.ulimits[0].soft_limit #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].container.user #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.instance_type #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.resource_requirements #=> Array
resp.jobs[0].node_properties.node_range_properties[0].container.resource_requirements[0].value #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.resource_requirements[0].type #=> String, one of "GPU", "VCPU", "MEMORY"
resp.jobs[0].node_properties.node_range_properties[0].container.linux_parameters.devices #=> Array
resp.jobs[0].node_properties.node_range_properties[0].container.linux_parameters.devices[0].host_path #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.linux_parameters.devices[0].container_path #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.linux_parameters.devices[0].permissions #=> Array
resp.jobs[0].node_properties.node_range_properties[0].container.linux_parameters.devices[0].permissions[0] #=> String, one of "READ", "WRITE", "MKNOD"
resp.jobs[0].node_properties.node_range_properties[0].container.linux_parameters.init_process_enabled #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].container.linux_parameters.shared_memory_size #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].container.linux_parameters.tmpfs #=> Array
resp.jobs[0].node_properties.node_range_properties[0].container.linux_parameters.tmpfs[0].container_path #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.linux_parameters.tmpfs[0].size #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].container.linux_parameters.tmpfs[0].mount_options #=> Array
resp.jobs[0].node_properties.node_range_properties[0].container.linux_parameters.tmpfs[0].mount_options[0] #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.linux_parameters.max_swap #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].container.linux_parameters.swappiness #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].container.log_configuration.log_driver #=> String, one of "json-file", "syslog", "journald", "gelf", "fluentd", "awslogs", "splunk"
resp.jobs[0].node_properties.node_range_properties[0].container.log_configuration.options #=> Hash
resp.jobs[0].node_properties.node_range_properties[0].container.log_configuration.options["String"] #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.log_configuration.secret_options #=> Array
resp.jobs[0].node_properties.node_range_properties[0].container.log_configuration.secret_options[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.log_configuration.secret_options[0].value_from #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.secrets #=> Array
resp.jobs[0].node_properties.node_range_properties[0].container.secrets[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.secrets[0].value_from #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.network_configuration.assign_public_ip #=> String, one of "ENABLED", "DISABLED"
resp.jobs[0].node_properties.node_range_properties[0].container.fargate_platform_configuration.platform_version #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.ephemeral_storage.size_in_gi_b #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].container.runtime_platform.operating_system_family #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.runtime_platform.cpu_architecture #=> String
resp.jobs[0].node_properties.node_range_properties[0].container.repository_credentials.credentials_parameter #=> String
resp.jobs[0].node_properties.node_range_properties[0].instance_types #=> Array
resp.jobs[0].node_properties.node_range_properties[0].instance_types[0] #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties #=> Array
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers #=> Array
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].command #=> Array
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].command[0] #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].depends_on #=> Array
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].depends_on[0].container_name #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].depends_on[0].condition #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].environment #=> Array
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].environment[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].environment[0].value #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].essential #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].image #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices #=> Array
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].host_path #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].container_path #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].permissions #=> Array
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].permissions[0] #=> String, one of "READ", "WRITE", "MKNOD"
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.init_process_enabled #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.shared_memory_size #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs #=> Array
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].container_path #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].size #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].mount_options #=> Array
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].mount_options[0] #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.max_swap #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].linux_parameters.swappiness #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].log_configuration.log_driver #=> String, one of "json-file", "syslog", "journald", "gelf", "fluentd", "awslogs", "splunk"
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].log_configuration.options #=> Hash
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].log_configuration.options["String"] #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].log_configuration.secret_options #=> Array
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].log_configuration.secret_options[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].log_configuration.secret_options[0].value_from #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].mount_points #=> Array
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].mount_points[0].container_path #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].mount_points[0].read_only #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].mount_points[0].source_volume #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].privileged #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].readonly_root_filesystem #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].repository_credentials.credentials_parameter #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].resource_requirements #=> Array
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].resource_requirements[0].value #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].resource_requirements[0].type #=> String, one of "GPU", "VCPU", "MEMORY"
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].secrets #=> Array
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].secrets[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].secrets[0].value_from #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].ulimits #=> Array
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].ulimits[0].hard_limit #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].ulimits[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].ulimits[0].soft_limit #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].containers[0].user #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].ephemeral_storage.size_in_gi_b #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].execution_role_arn #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].platform_version #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].ipc_mode #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].task_role_arn #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].pid_mode #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].network_configuration.assign_public_ip #=> String, one of "ENABLED", "DISABLED"
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].runtime_platform.operating_system_family #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].runtime_platform.cpu_architecture #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes #=> Array
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].host.source_path #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.file_system_id #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.root_directory #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.transit_encryption #=> String, one of "ENABLED", "DISABLED"
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.transit_encryption_port #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.authorization_config.access_point_id #=> String
resp.jobs[0].node_properties.node_range_properties[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.authorization_config.iam #=> String, one of "ENABLED", "DISABLED"
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.service_account_name #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.host_network #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.dns_policy #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.image_pull_secrets #=> Array
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.image_pull_secrets[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers #=> Array
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].image #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].image_pull_policy #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].command #=> Array
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].command[0] #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].args #=> Array
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].args[0] #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].env #=> Array
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].env[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].env[0].value #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].resources.limits #=> Hash
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].resources.limits["String"] #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].resources.requests #=> Hash
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].resources.requests["String"] #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].volume_mounts #=> Array
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].volume_mounts[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].volume_mounts[0].mount_path #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].volume_mounts[0].read_only #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].security_context.run_as_user #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].security_context.run_as_group #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].security_context.privileged #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].security_context.allow_privilege_escalation #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].security_context.read_only_root_filesystem #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.containers[0].security_context.run_as_non_root #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers #=> Array
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].image #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].image_pull_policy #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].command #=> Array
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].command[0] #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].args #=> Array
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].args[0] #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].env #=> Array
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].env[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].env[0].value #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].resources.limits #=> Hash
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].resources.limits["String"] #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].resources.requests #=> Hash
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].resources.requests["String"] #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].volume_mounts #=> Array
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].volume_mounts[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].volume_mounts[0].mount_path #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].volume_mounts[0].read_only #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].security_context.run_as_user #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].security_context.run_as_group #=> Integer
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].security_context.privileged #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].security_context.allow_privilege_escalation #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].security_context.read_only_root_filesystem #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.init_containers[0].security_context.run_as_non_root #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.volumes #=> Array
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.volumes[0].name #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.volumes[0].host_path.path #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.volumes[0].empty_dir.medium #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.volumes[0].empty_dir.size_limit #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.volumes[0].secret.secret_name #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.volumes[0].secret.optional #=> Boolean
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.metadata.labels #=> Hash
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.metadata.labels["String"] #=> String
resp.jobs[0].node_properties.node_range_properties[0].eks_properties.pod_properties.share_process_namespace #=> Boolean
resp.jobs[0].array_properties.status_summary #=> Hash
resp.jobs[0].array_properties.status_summary["String"] #=> Integer
resp.jobs[0].array_properties.size #=> Integer
resp.jobs[0].array_properties.index #=> Integer
resp.jobs[0].timeout.attempt_duration_seconds #=> Integer
resp.jobs[0].tags #=> Hash
resp.jobs[0].tags["TagKey"] #=> String
resp.jobs[0].propagate_tags #=> Boolean
resp.jobs[0].platform_capabilities #=> Array
resp.jobs[0].platform_capabilities[0] #=> String, one of "EC2", "FARGATE"
resp.jobs[0].eks_properties.pod_properties.service_account_name #=> String
resp.jobs[0].eks_properties.pod_properties.host_network #=> Boolean
resp.jobs[0].eks_properties.pod_properties.dns_policy #=> String
resp.jobs[0].eks_properties.pod_properties.image_pull_secrets #=> Array
resp.jobs[0].eks_properties.pod_properties.image_pull_secrets[0].name #=> String
resp.jobs[0].eks_properties.pod_properties.containers #=> Array
resp.jobs[0].eks_properties.pod_properties.containers[0].name #=> String
resp.jobs[0].eks_properties.pod_properties.containers[0].image #=> String
resp.jobs[0].eks_properties.pod_properties.containers[0].image_pull_policy #=> String
resp.jobs[0].eks_properties.pod_properties.containers[0].command #=> Array
resp.jobs[0].eks_properties.pod_properties.containers[0].command[0] #=> String
resp.jobs[0].eks_properties.pod_properties.containers[0].args #=> Array
resp.jobs[0].eks_properties.pod_properties.containers[0].args[0] #=> String
resp.jobs[0].eks_properties.pod_properties.containers[0].env #=> Array
resp.jobs[0].eks_properties.pod_properties.containers[0].env[0].name #=> String
resp.jobs[0].eks_properties.pod_properties.containers[0].env[0].value #=> String
resp.jobs[0].eks_properties.pod_properties.containers[0].resources.limits #=> Hash
resp.jobs[0].eks_properties.pod_properties.containers[0].resources.limits["String"] #=> String
resp.jobs[0].eks_properties.pod_properties.containers[0].resources.requests #=> Hash
resp.jobs[0].eks_properties.pod_properties.containers[0].resources.requests["String"] #=> String
resp.jobs[0].eks_properties.pod_properties.containers[0].exit_code #=> Integer
resp.jobs[0].eks_properties.pod_properties.containers[0].reason #=> String
resp.jobs[0].eks_properties.pod_properties.containers[0].volume_mounts #=> Array
resp.jobs[0].eks_properties.pod_properties.containers[0].volume_mounts[0].name #=> String
resp.jobs[0].eks_properties.pod_properties.containers[0].volume_mounts[0].mount_path #=> String
resp.jobs[0].eks_properties.pod_properties.containers[0].volume_mounts[0].read_only #=> Boolean
resp.jobs[0].eks_properties.pod_properties.containers[0].security_context.run_as_user #=> Integer
resp.jobs[0].eks_properties.pod_properties.containers[0].security_context.run_as_group #=> Integer
resp.jobs[0].eks_properties.pod_properties.containers[0].security_context.privileged #=> Boolean
resp.jobs[0].eks_properties.pod_properties.containers[0].security_context.allow_privilege_escalation #=> Boolean
resp.jobs[0].eks_properties.pod_properties.containers[0].security_context.read_only_root_filesystem #=> Boolean
resp.jobs[0].eks_properties.pod_properties.containers[0].security_context.run_as_non_root #=> Boolean
resp.jobs[0].eks_properties.pod_properties.init_containers #=> Array
resp.jobs[0].eks_properties.pod_properties.init_containers[0].name #=> String
resp.jobs[0].eks_properties.pod_properties.init_containers[0].image #=> String
resp.jobs[0].eks_properties.pod_properties.init_containers[0].image_pull_policy #=> String
resp.jobs[0].eks_properties.pod_properties.init_containers[0].command #=> Array
resp.jobs[0].eks_properties.pod_properties.init_containers[0].command[0] #=> String
resp.jobs[0].eks_properties.pod_properties.init_containers[0].args #=> Array
resp.jobs[0].eks_properties.pod_properties.init_containers[0].args[0] #=> String
resp.jobs[0].eks_properties.pod_properties.init_containers[0].env #=> Array
resp.jobs[0].eks_properties.pod_properties.init_containers[0].env[0].name #=> String
resp.jobs[0].eks_properties.pod_properties.init_containers[0].env[0].value #=> String
resp.jobs[0].eks_properties.pod_properties.init_containers[0].resources.limits #=> Hash
resp.jobs[0].eks_properties.pod_properties.init_containers[0].resources.limits["String"] #=> String
resp.jobs[0].eks_properties.pod_properties.init_containers[0].resources.requests #=> Hash
resp.jobs[0].eks_properties.pod_properties.init_containers[0].resources.requests["String"] #=> String
resp.jobs[0].eks_properties.pod_properties.init_containers[0].exit_code #=> Integer
resp.jobs[0].eks_properties.pod_properties.init_containers[0].reason #=> String
resp.jobs[0].eks_properties.pod_properties.init_containers[0].volume_mounts #=> Array
resp.jobs[0].eks_properties.pod_properties.init_containers[0].volume_mounts[0].name #=> String
resp.jobs[0].eks_properties.pod_properties.init_containers[0].volume_mounts[0].mount_path #=> String
resp.jobs[0].eks_properties.pod_properties.init_containers[0].volume_mounts[0].read_only #=> Boolean
resp.jobs[0].eks_properties.pod_properties.init_containers[0].security_context.run_as_user #=> Integer
resp.jobs[0].eks_properties.pod_properties.init_containers[0].security_context.run_as_group #=> Integer
resp.jobs[0].eks_properties.pod_properties.init_containers[0].security_context.privileged #=> Boolean
resp.jobs[0].eks_properties.pod_properties.init_containers[0].security_context.allow_privilege_escalation #=> Boolean
resp.jobs[0].eks_properties.pod_properties.init_containers[0].security_context.read_only_root_filesystem #=> Boolean
resp.jobs[0].eks_properties.pod_properties.init_containers[0].security_context.run_as_non_root #=> Boolean
resp.jobs[0].eks_properties.pod_properties.volumes #=> Array
resp.jobs[0].eks_properties.pod_properties.volumes[0].name #=> String
resp.jobs[0].eks_properties.pod_properties.volumes[0].host_path.path #=> String
resp.jobs[0].eks_properties.pod_properties.volumes[0].empty_dir.medium #=> String
resp.jobs[0].eks_properties.pod_properties.volumes[0].empty_dir.size_limit #=> String
resp.jobs[0].eks_properties.pod_properties.volumes[0].secret.secret_name #=> String
resp.jobs[0].eks_properties.pod_properties.volumes[0].secret.optional #=> Boolean
resp.jobs[0].eks_properties.pod_properties.pod_name #=> String
resp.jobs[0].eks_properties.pod_properties.node_name #=> String
resp.jobs[0].eks_properties.pod_properties.metadata.labels #=> Hash
resp.jobs[0].eks_properties.pod_properties.metadata.labels["String"] #=> String
resp.jobs[0].eks_properties.pod_properties.share_process_namespace #=> Boolean
resp.jobs[0].eks_attempts #=> Array
resp.jobs[0].eks_attempts[0].containers #=> Array
resp.jobs[0].eks_attempts[0].containers[0].name #=> String
resp.jobs[0].eks_attempts[0].containers[0].container_id #=> String
resp.jobs[0].eks_attempts[0].containers[0].exit_code #=> Integer
resp.jobs[0].eks_attempts[0].containers[0].reason #=> String
resp.jobs[0].eks_attempts[0].init_containers #=> Array
resp.jobs[0].eks_attempts[0].init_containers[0].name #=> String
resp.jobs[0].eks_attempts[0].init_containers[0].container_id #=> String
resp.jobs[0].eks_attempts[0].init_containers[0].exit_code #=> Integer
resp.jobs[0].eks_attempts[0].init_containers[0].reason #=> String
resp.jobs[0].eks_attempts[0].eks_cluster_arn #=> String
resp.jobs[0].eks_attempts[0].pod_name #=> String
resp.jobs[0].eks_attempts[0].pod_namespace #=> String
resp.jobs[0].eks_attempts[0].node_name #=> String
resp.jobs[0].eks_attempts[0].started_at #=> Integer
resp.jobs[0].eks_attempts[0].stopped_at #=> Integer
resp.jobs[0].eks_attempts[0].status_reason #=> String
resp.jobs[0].ecs_properties.task_properties #=> Array
resp.jobs[0].ecs_properties.task_properties[0].containers #=> Array
resp.jobs[0].ecs_properties.task_properties[0].containers[0].command #=> Array
resp.jobs[0].ecs_properties.task_properties[0].containers[0].command[0] #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].depends_on #=> Array
resp.jobs[0].ecs_properties.task_properties[0].containers[0].depends_on[0].container_name #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].depends_on[0].condition #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].environment #=> Array
resp.jobs[0].ecs_properties.task_properties[0].containers[0].environment[0].name #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].environment[0].value #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].essential #=> Boolean
resp.jobs[0].ecs_properties.task_properties[0].containers[0].image #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices #=> Array
resp.jobs[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].host_path #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].container_path #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].permissions #=> Array
resp.jobs[0].ecs_properties.task_properties[0].containers[0].linux_parameters.devices[0].permissions[0] #=> String, one of "READ", "WRITE", "MKNOD"
resp.jobs[0].ecs_properties.task_properties[0].containers[0].linux_parameters.init_process_enabled #=> Boolean
resp.jobs[0].ecs_properties.task_properties[0].containers[0].linux_parameters.shared_memory_size #=> Integer
resp.jobs[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs #=> Array
resp.jobs[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].container_path #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].size #=> Integer
resp.jobs[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].mount_options #=> Array
resp.jobs[0].ecs_properties.task_properties[0].containers[0].linux_parameters.tmpfs[0].mount_options[0] #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].linux_parameters.max_swap #=> Integer
resp.jobs[0].ecs_properties.task_properties[0].containers[0].linux_parameters.swappiness #=> Integer
resp.jobs[0].ecs_properties.task_properties[0].containers[0].log_configuration.log_driver #=> String, one of "json-file", "syslog", "journald", "gelf", "fluentd", "awslogs", "splunk"
resp.jobs[0].ecs_properties.task_properties[0].containers[0].log_configuration.options #=> Hash
resp.jobs[0].ecs_properties.task_properties[0].containers[0].log_configuration.options["String"] #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].log_configuration.secret_options #=> Array
resp.jobs[0].ecs_properties.task_properties[0].containers[0].log_configuration.secret_options[0].name #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].log_configuration.secret_options[0].value_from #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].mount_points #=> Array
resp.jobs[0].ecs_properties.task_properties[0].containers[0].mount_points[0].container_path #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].mount_points[0].read_only #=> Boolean
resp.jobs[0].ecs_properties.task_properties[0].containers[0].mount_points[0].source_volume #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].name #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].privileged #=> Boolean
resp.jobs[0].ecs_properties.task_properties[0].containers[0].readonly_root_filesystem #=> Boolean
resp.jobs[0].ecs_properties.task_properties[0].containers[0].repository_credentials.credentials_parameter #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].resource_requirements #=> Array
resp.jobs[0].ecs_properties.task_properties[0].containers[0].resource_requirements[0].value #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].resource_requirements[0].type #=> String, one of "GPU", "VCPU", "MEMORY"
resp.jobs[0].ecs_properties.task_properties[0].containers[0].secrets #=> Array
resp.jobs[0].ecs_properties.task_properties[0].containers[0].secrets[0].name #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].secrets[0].value_from #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].ulimits #=> Array
resp.jobs[0].ecs_properties.task_properties[0].containers[0].ulimits[0].hard_limit #=> Integer
resp.jobs[0].ecs_properties.task_properties[0].containers[0].ulimits[0].name #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].ulimits[0].soft_limit #=> Integer
resp.jobs[0].ecs_properties.task_properties[0].containers[0].user #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].exit_code #=> Integer
resp.jobs[0].ecs_properties.task_properties[0].containers[0].reason #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].log_stream_name #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].network_interfaces #=> Array
resp.jobs[0].ecs_properties.task_properties[0].containers[0].network_interfaces[0].attachment_id #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].network_interfaces[0].ipv6_address #=> String
resp.jobs[0].ecs_properties.task_properties[0].containers[0].network_interfaces[0].private_ipv_4_address #=> String
resp.jobs[0].ecs_properties.task_properties[0].container_instance_arn #=> String
resp.jobs[0].ecs_properties.task_properties[0].task_arn #=> String
resp.jobs[0].ecs_properties.task_properties[0].ephemeral_storage.size_in_gi_b #=> Integer
resp.jobs[0].ecs_properties.task_properties[0].execution_role_arn #=> String
resp.jobs[0].ecs_properties.task_properties[0].platform_version #=> String
resp.jobs[0].ecs_properties.task_properties[0].ipc_mode #=> String
resp.jobs[0].ecs_properties.task_properties[0].task_role_arn #=> String
resp.jobs[0].ecs_properties.task_properties[0].pid_mode #=> String
resp.jobs[0].ecs_properties.task_properties[0].network_configuration.assign_public_ip #=> String, one of "ENABLED", "DISABLED"
resp.jobs[0].ecs_properties.task_properties[0].runtime_platform.operating_system_family #=> String
resp.jobs[0].ecs_properties.task_properties[0].runtime_platform.cpu_architecture #=> String
resp.jobs[0].ecs_properties.task_properties[0].volumes #=> Array
resp.jobs[0].ecs_properties.task_properties[0].volumes[0].host.source_path #=> String
resp.jobs[0].ecs_properties.task_properties[0].volumes[0].name #=> String
resp.jobs[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.file_system_id #=> String
resp.jobs[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.root_directory #=> String
resp.jobs[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.transit_encryption #=> String, one of "ENABLED", "DISABLED"
resp.jobs[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.transit_encryption_port #=> Integer
resp.jobs[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.authorization_config.access_point_id #=> String
resp.jobs[0].ecs_properties.task_properties[0].volumes[0].efs_volume_configuration.authorization_config.iam #=> String, one of "ENABLED", "DISABLED"
resp.jobs[0].is_cancelled #=> Boolean
resp.jobs[0].is_terminated #=> Boolean
Parameters:
- params (Hash) (defaults to: {})
Options Hash (params):
- :jobs (required, Array&lt;String&gt;) — A list of up to 100 job IDs.
Returns:
- (Types::DescribeJobsResponse) — Returns a response object which responds to the following methods:
  - #jobs => Array&lt;Types::JobDetail&gt;
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 2660

def describe_jobs(params = {}, options = {})
  req = build_request(:describe_jobs, params)
  req.send_request(options)
end
#describe_scheduling_policies(params = {}) ⇒ Types::DescribeSchedulingPoliciesResponse
Describes one or more of your scheduling policies.
Examples:
Request syntax with placeholder values
resp = client.describe_scheduling_policies({
arns: ["String"], # required
})
Response structure
resp.scheduling_policies #=> Array
resp.scheduling_policies[0].name #=> String
resp.scheduling_policies[0].arn #=> String
resp.scheduling_policies[0].fairshare_policy.share_decay_seconds #=> Integer
resp.scheduling_policies[0].fairshare_policy.compute_reservation #=> Integer
resp.scheduling_policies[0].fairshare_policy.share_distribution #=> Array
resp.scheduling_policies[0].fairshare_policy.share_distribution[0].share_identifier #=> String
resp.scheduling_policies[0].fairshare_policy.share_distribution[0].weight_factor #=> Float
resp.scheduling_policies[0].tags #=> Hash
resp.scheduling_policies[0].tags["TagKey"] #=> String
Parameters:
- params (Hash) (defaults to: {})
Options Hash (params):
- :arns (required, Array&lt;String&gt;) — A list of up to 100 scheduling policy Amazon Resource Name (ARN) entries.
Returns:
- (Types::DescribeSchedulingPoliciesResponse) — Returns a response object which responds to the following methods:
  - #scheduling_policies => Array&lt;Types::SchedulingPolicyDetail&gt;
See Also:
2698 2699 2700 2701 |
# File 'lib/aws-sdk-batch/client.rb', line 2698 def describe_scheduling_policies(params = {}, options = {}) req = build_request(:describe_scheduling_policies, params) req.send_request(options) end |
#get_job_queue_snapshot(params = {}) ⇒ Types::GetJobQueueSnapshotResponse
Provides a list of the first 100 `RUNNABLE` jobs associated with a single job queue.
Examples:
Request syntax with placeholder values
resp = client.get_job_queue_snapshot({
job_queue: "String", # required
})
Response structure
resp.front_of_queue.jobs #=> Array
resp.front_of_queue.jobs[0].job_arn #=> String
resp.front_of_queue.jobs[0].earliest_time_at_position #=> Integer
resp.front_of_queue.last_updated_at #=> Integer
Parameters:
- params (Hash) (defaults to: {})
Options Hash (params):
- :job_queue (required, String) — The job queue’s name or full queue Amazon Resource Name (ARN).
Returns:
- (Types::GetJobQueueSnapshotResponse) — Returns a response object which responds to the following methods:
  - #front_of_queue => Types::FrontOfQueueDetail
See Also:

# File 'lib/aws-sdk-batch/client.rb', line 2730

def get_job_queue_snapshot(params = {}, options = {})
  req = build_request(:get_job_queue_snapshot, params)
  req.send_request(options)
end
#list_jobs(params = {}) ⇒ Types::ListJobsResponse
Returns a list of Batch jobs.
You must specify only one of the following items:
- A job queue ID to return a list of jobs in that job queue
- A multi-node parallel job ID to return a list of nodes for that job
- An array job ID to return a list of the children for that job
You can filter the results by job status with the `jobStatus` parameter. If you don’t specify a status, only `RUNNING` jobs are returned.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
Examples:
Example: To list running jobs
# This example lists the running jobs in the HighPriority job queue.
resp = client.list_jobs({
job_queue: "HighPriority",
})
resp.to_h outputs the following:
{
job_summary_list: [
{
job_id: "e66ff5fd-a1ff-4640-b1a2-0b0a142f49bb",
job_name: "example",
},
],
}
Example: To list submitted jobs
# This example lists jobs in the HighPriority job queue that are in the SUBMITTED job status.
resp = client.list_jobs({
job_queue: "HighPriority",
job_status: "SUBMITTED",
})
resp.to_h outputs the following:
{
job_summary_list: [
{
job_id: "68f0c163-fbd4-44e6-9fd1-25b14a434786",
job_name: "example",
},
],
}
Request syntax with placeholder values
resp = client.list_jobs({
job_queue: "String",
array_job_id: "String",
multi_node_job_id: "String",
job_status: "SUBMITTED", # accepts SUBMITTED, PENDING, RUNNABLE, STARTING, RUNNING, SUCCEEDED, FAILED
max_results: 1,
next_token: "String",
filters: [
{
name: "String",
values: ["String"],
},
],
})
Response structure
resp.job_summary_list #=> Array
resp.job_summary_list[0].job_arn #=> String
resp.job_summary_list[0].job_id #=> String
resp.job_summary_list[0].job_name #=> String
resp.job_summary_list[0].created_at #=> Integer
resp.job_summary_list[0].status #=> String, one of "SUBMITTED", "PENDING", "RUNNABLE", "STARTING", "RUNNING", "SUCCEEDED", "FAILED"
resp.job_summary_list[0].status_reason #=> String
resp.job_summary_list[0].started_at #=> Integer
resp.job_summary_list[0].stopped_at #=> Integer
resp.job_summary_list[0].container.exit_code #=> Integer
resp.job_summary_list[0].container.reason #=> String
resp.job_summary_list[0].array_properties.size #=> Integer
resp.job_summary_list[0].array_properties.index #=> Integer
resp.job_summary_list[0].node_properties.is_main_node #=> Boolean
resp.job_summary_list[0].node_properties.num_nodes #=> Integer
resp.job_summary_list[0].node_properties.node_index #=> Integer
resp.job_summary_list[0].job_definition #=> String
resp.next_token #=> String
Parameters:
- params (Hash) (defaults to: {})
Options Hash (params):
- :job_queue (String) — The name or full Amazon Resource Name (ARN) of the job queue used to list jobs.
- :array_job_id (String) — The job ID for an array job. Specifying an array job ID with this parameter lists all child jobs from within the specified array.
- :multi_node_job_id (String) — The job ID for a multi-node parallel job. Specifying a multi-node parallel job ID with this parameter lists all nodes that are associated with the specified job.
- :job_status (String) — The job status used to filter jobs in the specified queue. If the `filters` parameter is specified, the `jobStatus` parameter is ignored and jobs with any status are returned. If you don’t specify a status, only `RUNNING` jobs are returned.
- :max_results (Integer) — The maximum number of results returned by `ListJobs` in a paginated output. When this parameter is used, `ListJobs` returns up to `maxResults` results in a single page and a `nextToken` response element, if applicable. The remaining results of the initial request can be seen by sending another `ListJobs` request with the returned `nextToken` value.
  The following outlines key parameters and limitations:
  - The minimum value is 1.
  - When `--job-status` is used, Batch returns up to 1000 values.
  - When `--filters` is used, Batch returns up to 100 values.
  - If neither parameter is used, then `ListJobs` returns up to 1000 results (jobs that are in the `RUNNING` status) and a `nextToken` value, if applicable.
- :next_token (String) — The `nextToken` value returned from a previous paginated `ListJobs` request where `maxResults` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value. This value is `null` when there are no more results to return.
  <note markdown="1"> Treat this token as an opaque identifier that’s only used to retrieve the next items in a list and not for other programmatic purposes.
  </note>
- :filters (Array<Types::KeyValuesPair>) — The filter to apply to the query. Only one filter can be used at a time. When the filter is used, `jobStatus` is ignored. The filter doesn’t apply to child jobs in an array or multi-node parallel (MNP) jobs. The results are sorted by the `createdAt` field, with the most recent jobs being first.
  JOB_NAME
  : The value of the filter is a case-insensitive match for the job name. If the value ends with an asterisk (*), the filter matches any job name that begins with the string before the '*'. This corresponds to the `jobName` value. For example, `test1` matches both `Test1` and `test1`, and `test1*` matches both `test1` and `Test10`. When the `JOB_NAME` filter is used, the results are grouped by the job name and version.
  JOB_DEFINITION
  : The value for the filter is the name or Amazon Resource Name (ARN) of the job definition. This corresponds to the `jobDefinition` value. The value is case sensitive. When the value for the filter is the job definition name, the results include all the jobs that used any revision of that job definition name. If the value ends with an asterisk (*), the filter matches any job definition name that begins with the string before the '*'. For example, `jd1` matches only `jd1`, and `jd1*` matches both `jd1` and `jd1A`. The version of the job definition that's used doesn't affect the sort order. When the `JOB_DEFINITION` filter is used and the ARN is used (which is in the form `arn:$\{Partition\}:batch:$\{Region\}:$\{Account\}:job-definition/$\{JobDefinitionName\}:$\{Revision\}`), the results include jobs that used the specified revision of the job definition. Asterisk (*) isn't supported when the ARN is used.
  BEFORE_CREATED_AT
  : The value for the filter is the time that’s before the job was created. This corresponds to the `createdAt` value. The value is a string representation of the number of milliseconds since 00:00:00 UTC (midnight) on January 1, 1970.
  AFTER_CREATED_AT
  : The value for the filter is the time that’s after the job was created. This corresponds to the `createdAt` value. The value is a string representation of the number of milliseconds since 00:00:00 UTC (midnight) on January 1, 1970.
Returns:
- (Types::ListJobsResponse) — Returns a response object which responds to the following methods:
  - #job_summary_list => Array<Types::JobSummary>
  - #next_token => String
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 2935

def list_jobs(params = {}, options = {})
  req = build_request(:list_jobs, params)
  req.send_request(options)
end
#list_scheduling_policies(params = {}) ⇒ Types::ListSchedulingPoliciesResponse
Returns a list of Batch scheduling policies.
The returned response is a pageable response and is Enumerable. For details on usage see PageableResponse.
Examples:
Request syntax with placeholder values
resp = client.list_scheduling_policies({
max_results: 1,
next_token: "String",
})
Response structure
resp.scheduling_policies #=> Array
resp.scheduling_policies[0].arn #=> String
resp.next_token #=> String
Parameters:
- params (Hash) (defaults to: {})
Options Hash (params):
- :max_results (Integer) — The maximum number of results that’s returned by `ListSchedulingPolicies` in paginated output. When this parameter is used, `ListSchedulingPolicies` only returns `maxResults` results in a single page and a `nextToken` response element. You can see the remaining results of the initial request by sending another `ListSchedulingPolicies` request with the returned `nextToken` value. This value can be between 1 and 100. If this parameter isn’t used, `ListSchedulingPolicies` returns up to 100 results and a `nextToken` value if applicable.
- :next_token (String) — The `nextToken` value that’s returned from a previous paginated `ListSchedulingPolicies` request where `maxResults` was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the `nextToken` value. This value is `null` when there are no more results to return.
  <note markdown="1"> Treat this token as an opaque identifier that’s only used to retrieve the next items in a list and not for other programmatic purposes.
  </note>
Returns:
- (Types::ListSchedulingPoliciesResponse) — Returns a response object which responds to the following methods:
  - #scheduling_policies => Array<Types::SchedulingPolicyListingDetail>
  - #next_token => String
See Also:

# File 'lib/aws-sdk-batch/client.rb', line 2989

def list_scheduling_policies(params = {}, options = {})
  req = build_request(:list_scheduling_policies, params)
  req.send_request(options)
end
#list_tags_for_resource(params = {}) ⇒ Types::ListTagsForResourceResponse
Lists the tags for a Batch resource. Batch resources that support tags are compute environments, jobs, job definitions, job queues, and scheduling policies. ARNs for child jobs of array and multi-node parallel (MNP) jobs aren’t supported.
Examples:
Example: ListTagsForResource Example
# This demonstrates calling the ListTagsForResource action.
resp = client.list_tags_for_resource({
resource_arn: "arn:aws:batch:us-east-1:123456789012:job-definition/sleep30:1",
})
resp.to_h outputs the following:
{
tags: {
"Department" => "Engineering",
"Stage" => "Alpha",
"User" => "JaneDoe",
},
}
Request syntax with placeholder values
resp = client.list_tags_for_resource({
resource_arn: "String", # required
})
Response structure
resp.tags #=> Hash
resp.tags["TagKey"] #=> String
Parameters:
- params (Hash) (defaults to: {})
Options Hash (params):
- :resource_arn (required, String) — The Amazon Resource Name (ARN) that identifies the resource that tags are listed for. Batch resources that support tags are compute environments, jobs, job definitions, job queues, and scheduling policies. ARNs for child jobs of array and multi-node parallel (MNP) jobs aren’t supported.
Returns:
- (Types::ListTagsForResourceResponse) — Returns a response object which responds to the following methods:
  - #tags => Hash<String,String>
See Also:

# File 'lib/aws-sdk-batch/client.rb', line 3043

def list_tags_for_resource(params = {}, options = {})
  req = build_request(:list_tags_for_resource, params)
  req.send_request(options)
end
#register_job_definition(params = {}) ⇒ Types::RegisterJobDefinitionResponse
Registers a Batch job definition.
Examples:
Example: To register a job definition
# This example registers a job definition for a simple container job.
resp = client.register_job_definition({
type: "container",
container_properties: {
command: [
"sleep",
"10",
],
image: "busybox",
resource_requirements: [
{
type: "MEMORY",
value: "128",
},
{
type: "VCPU",
value: "1",
},
],
},
job_definition_name: "sleep10",
})
resp.to_h outputs the following:
{
job_definition_arn: "arn:aws:batch:us-east-1:012345678910:job-definition/sleep10:1",
job_definition_name: "sleep10",
revision: 1,
}
Example: RegisterJobDefinition with tags
# This demonstrates calling the RegisterJobDefinition action, including tags.
resp = client.register_job_definition({
type: "container",
container_properties: {
command: [
"sleep",
"30",
],
image: "busybox",
resource_requirements: [
{
type: "MEMORY",
value: "128",
},
{
type: "VCPU",
value: "1",
},
],
},
job_definition_name: "sleep30",
tags: {
"Department" => "Engineering",
"User" => "JaneDoe",
},
})
resp.to_h outputs the following:
{
job_definition_arn: "arn:aws:batch:us-east-1:012345678910:job-definition/sleep30:1",
job_definition_name: "sleep30",
revision: 1,
}
Request syntax with placeholder values
resp = client.register_job_definition({
job_definition_name: "String", # required
type: "container", # required, accepts container, multinode
parameters: {
"String" => "String",
},
scheduling_priority: 1,
container_properties: {
image: "String",
vcpus: 1,
memory: 1,
command: ["String"],
job_role_arn: "String",
execution_role_arn: "String",
volumes: [
{
host: {
source_path: "String",
},
name: "String",
efs_volume_configuration: {
file_system_id: "String", # required
root_directory: "String",
transit_encryption: "ENABLED", # accepts ENABLED, DISABLED
transit_encryption_port: 1,
authorization_config: {
access_point_id: "String",
iam: "ENABLED", # accepts ENABLED, DISABLED
},
},
},
],
environment: [
{
name: "String",
value: "String",
},
],
mount_points: [
{
container_path: "String",
read_only: false,
source_volume: "String",
},
],
readonly_root_filesystem: false,
privileged: false,
ulimits: [
{
hard_limit: 1, # required
name: "String", # required
soft_limit: 1, # required
},
],
user: "String",
instance_type: "String",
resource_requirements: [
{
value: "String", # required
type: "GPU", # required, accepts GPU, VCPU, MEMORY
},
],
linux_parameters: {
devices: [
{
host_path: "String", # required
container_path: "String",
permissions: ["READ"], # accepts READ, WRITE, MKNOD
},
],
init_process_enabled: false,
shared_memory_size: 1,
tmpfs: [
{
container_path: "String", # required
size: 1, # required
mount_options: ["String"],
},
],
max_swap: 1,
swappiness: 1,
},
log_configuration: {
log_driver: "json-file", # required, accepts json-file, syslog, journald, gelf, fluentd, awslogs, splunk
options: {
"String" => "String",
},
secret_options: [
{
name: "String", # required
value_from: "String", # required
},
],
},
secrets: [
{
name: "String", # required
value_from: "String", # required
},
],
network_configuration: {
assign_public_ip: "ENABLED", # accepts ENABLED, DISABLED
},
fargate_platform_configuration: {
platform_version: "String",
},
ephemeral_storage: {
size_in_gi_b: 1, # required
},
runtime_platform: {
operating_system_family: "String",
cpu_architecture: "String",
},
repository_credentials: {
credentials_parameter: "String", # required
},
},
node_properties: {
num_nodes: 1, # required
main_node: 1, # required
node_range_properties: [ # required
{
target_nodes: "String", # required
container: {
image: "String",
vcpus: 1,
memory: 1,
command: ["String"],
job_role_arn: "String",
execution_role_arn: "String",
volumes: [
{
host: {
source_path: "String",
},
name: "String",
efs_volume_configuration: {
file_system_id: "String", # required
root_directory: "String",
transit_encryption: "ENABLED", # accepts ENABLED, DISABLED
transit_encryption_port: 1,
authorization_config: {
access_point_id: "String",
iam: "ENABLED", # accepts ENABLED, DISABLED
},
},
},
],
environment: [
{
name: "String",
value: "String",
},
],
mount_points: [
{
container_path: "String",
read_only: false,
source_volume: "String",
},
],
readonly_root_filesystem: false,
privileged: false,
ulimits: [
{
hard_limit: 1, # required
name: "String", # required
soft_limit: 1, # required
},
],
user: "String",
instance_type: "String",
resource_requirements: [
{
value: "String", # required
type: "GPU", # required, accepts GPU, VCPU, MEMORY
},
],
linux_parameters: {
devices: [
{
host_path: "String", # required
container_path: "String",
permissions: ["READ"], # accepts READ, WRITE, MKNOD
},
],
init_process_enabled: false,
shared_memory_size: 1,
tmpfs: [
{
container_path: "String", # required
size: 1, # required
mount_options: ["String"],
},
],
max_swap: 1,
swappiness: 1,
},
log_configuration: {
log_driver: "json-file", # required, accepts json-file, syslog, journald, gelf, fluentd, awslogs, splunk
options: {
"String" => "String",
},
secret_options: [
{
name: "String", # required
value_from: "String", # required
},
],
},
secrets: [
{
name: "String", # required
value_from: "String", # required
},
],
network_configuration: {
assign_public_ip: "ENABLED", # accepts ENABLED, DISABLED
},
fargate_platform_configuration: {
platform_version: "String",
},
ephemeral_storage: {
size_in_gi_b: 1, # required
},
runtime_platform: {
operating_system_family: "String",
cpu_architecture: "String",
},
repository_credentials: {
credentials_parameter: "String", # required
},
},
instance_types: ["String"],
ecs_properties: {
task_properties: [ # required
{
containers: [ # required
{
command: ["String"],
depends_on: [
{
container_name: "String",
condition: "String",
},
],
environment: [
{
name: "String",
value: "String",
},
],
essential: false,
image: "String", # required
linux_parameters: {
devices: [
{
host_path: "String", # required
container_path: "String",
permissions: ["READ"], # accepts READ, WRITE, MKNOD
},
],
init_process_enabled: false,
shared_memory_size: 1,
tmpfs: [
{
container_path: "String", # required
size: 1, # required
mount_options: ["String"],
},
],
max_swap: 1,
swappiness: 1,
},
log_configuration: {
log_driver: "json-file", # required, accepts json-file, syslog, journald, gelf, fluentd, awslogs, splunk
options: {
"String" => "String",
},
secret_options: [
{
name: "String", # required
value_from: "String", # required
},
],
},
mount_points: [
{
container_path: "String",
read_only: false,
source_volume: "String",
},
],
name: "String",
privileged: false,
readonly_root_filesystem: false,
repository_credentials: {
credentials_parameter: "String", # required
},
resource_requirements: [
{
value: "String", # required
type: "GPU", # required, accepts GPU, VCPU, MEMORY
},
],
secrets: [
{
name: "String", # required
value_from: "String", # required
},
],
ulimits: [
{
hard_limit: 1, # required
name: "String", # required
soft_limit: 1, # required
},
],
user: "String",
},
],
ephemeral_storage: {
size_in_gi_b: 1, # required
},
execution_role_arn: "String",
platform_version: "String",
ipc_mode: "String",
task_role_arn: "String",
pid_mode: "String",
network_configuration: {
assign_public_ip: "ENABLED", # accepts ENABLED, DISABLED
},
runtime_platform: {
operating_system_family: "String",
cpu_architecture: "String",
},
volumes: [
{
host: {
source_path: "String",
},
name: "String",
efs_volume_configuration: {
file_system_id: "String", # required
root_directory: "String",
transit_encryption: "ENABLED", # accepts ENABLED, DISABLED
transit_encryption_port: 1,
authorization_config: {
access_point_id: "String",
iam: "ENABLED", # accepts ENABLED, DISABLED
},
},
},
],
},
],
},
eks_properties: {
pod_properties: {
service_account_name: "String",
host_network: false,
dns_policy: "String",
image_pull_secrets: [
{
name: "String", # required
},
],
containers: [
{
name: "String",
image: "String", # required
image_pull_policy: "String",
command: ["String"],
args: ["String"],
env: [
{
name: "String", # required
value: "String",
},
],
resources: {
limits: {
"String" => "Quantity",
},
requests: {
"String" => "Quantity",
},
},
volume_mounts: [
{
name: "String",
mount_path: "String",
read_only: false,
},
],
security_context: {
run_as_user: 1,
run_as_group: 1,
privileged: false,
allow_privilege_escalation: false,
read_only_root_filesystem: false,
run_as_non_root: false,
},
},
],
init_containers: [
{
name: "String",
image: "String", # required
image_pull_policy: "String",
command: ["String"],
args: ["String"],
env: [
{
name: "String", # required
value: "String",
},
],
resources: {
limits: {
"String" => "Quantity",
},
requests: {
"String" => "Quantity",
},
},
volume_mounts: [
{
name: "String",
mount_path: "String",
read_only: false,
},
],
security_context: {
run_as_user: 1,
run_as_group: 1,
privileged: false,
allow_privilege_escalation: false,
read_only_root_filesystem: false,
run_as_non_root: false,
},
},
],
volumes: [
{
name: "String", # required
host_path: {
path: "String",
},
empty_dir: {
medium: "String",
size_limit: "Quantity",
},
secret: {
secret_name: "String", # required
optional: false,
},
},
],
metadata: {
labels: {
"String" => "String",
},
},
share_process_namespace: false,
},
},
},
],
},
retry_strategy: {
attempts: 1,
evaluate_on_exit: [
{
on_status_reason: "String",
on_reason: "String",
on_exit_code: "String",
action: "RETRY", # required, accepts RETRY, EXIT
},
],
},
propagate_tags: false,
timeout: {
attempt_duration_seconds: 1,
},
tags: {
"TagKey" => "TagValue",
},
platform_capabilities: ["EC2"], # accepts EC2, FARGATE
eks_properties: {
pod_properties: {
service_account_name: "String",
host_network: false,
dns_policy: "String",
image_pull_secrets: [
{
name: "String", # required
},
],
containers: [
{
name: "String",
image: "String", # required
image_pull_policy: "String",
command: ["String"],
args: ["String"],
env: [
{
name: "String", # required
value: "String",
},
],
resources: {
limits: {
"String" => "Quantity",
},
requests: {
"String" => "Quantity",
},
},
volume_mounts: [
{
name: "String",
mount_path: "String",
read_only: false,
},
],
security_context: {
run_as_user: 1,
run_as_group: 1,
privileged: false,
allow_privilege_escalation: false,
read_only_root_filesystem: false,
run_as_non_root: false,
},
},
],
init_containers: [
{
name: "String",
image: "String", # required
image_pull_policy: "String",
command: ["String"],
args: ["String"],
env: [
{
name: "String", # required
value: "String",
},
],
resources: {
limits: {
"String" => "Quantity",
},
requests: {
"String" => "Quantity",
},
},
volume_mounts: [
{
name: "String",
mount_path: "String",
read_only: false,
},
],
security_context: {
run_as_user: 1,
run_as_group: 1,
privileged: false,
allow_privilege_escalation: false,
read_only_root_filesystem: false,
run_as_non_root: false,
},
},
],
volumes: [
{
name: "String", # required
host_path: {
path: "String",
},
empty_dir: {
medium: "String",
size_limit: "Quantity",
},
secret: {
secret_name: "String", # required
optional: false,
},
},
],
metadata: {
labels: {
"String" => "String",
},
},
share_process_namespace: false,
},
},
ecs_properties: {
task_properties: [ # required
{
containers: [ # required
{
command: ["String"],
depends_on: [
{
container_name: "String",
condition: "String",
},
],
environment: [
{
name: "String",
value: "String",
},
],
essential: false,
image: "String", # required
linux_parameters: {
devices: [
{
host_path: "String", # required
container_path: "String",
permissions: ["READ"], # accepts READ, WRITE, MKNOD
},
],
init_process_enabled: false,
shared_memory_size: 1,
tmpfs: [
{
container_path: "String", # required
size: 1, # required
mount_options: ["String"],
},
],
max_swap: 1,
swappiness: 1,
},
log_configuration: {
log_driver: "json-file", # required, accepts json-file, syslog, journald, gelf, fluentd, awslogs, splunk
options: {
"String" => "String",
},
secret_options: [
{
name: "String", # required
value_from: "String", # required
},
],
},
mount_points: [
{
container_path: "String",
read_only: false,
source_volume: "String",
},
],
name: "String",
privileged: false,
readonly_root_filesystem: false,
repository_credentials: {
credentials_parameter: "String", # required
},
resource_requirements: [
{
value: "String", # required
type: "GPU", # required, accepts GPU, VCPU, MEMORY
},
],
secrets: [
{
name: "String", # required
value_from: "String", # required
},
],
ulimits: [
{
hard_limit: 1, # required
name: "String", # required
soft_limit: 1, # required
},
],
user: "String",
},
],
ephemeral_storage: {
size_in_gi_b: 1, # required
},
execution_role_arn: "String",
platform_version: "String",
ipc_mode: "String",
task_role_arn: "String",
pid_mode: "String",
network_configuration: {
assign_public_ip: "ENABLED", # accepts ENABLED, DISABLED
},
runtime_platform: {
operating_system_family: "String",
cpu_architecture: "String",
},
volumes: [
{
host: {
source_path: "String",
},
name: "String",
efs_volume_configuration: {
file_system_id: "String", # required
root_directory: "String",
transit_encryption: "ENABLED", # accepts ENABLED, DISABLED
transit_encryption_port: 1,
authorization_config: {
access_point_id: "String",
iam: "ENABLED", # accepts ENABLED, DISABLED
},
},
},
],
},
],
},
})
Response structure
resp.job_definition_name #=> String
resp.job_definition_arn #=> String
resp.revision #=> Integer
Parameters:
- params (Hash) (defaults to: {})
Options Hash (params):
- :job_definition_name (required, String) — The name of the job definition to register. It can be up to 128 letters long. It can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
- :type (required, String) — The type of job definition. For more information about multi-node parallel jobs, see [Creating a multi-node parallel job definition][1] in the *Batch User Guide*.
  - If the value is `container`, then one of the following is required: `containerProperties`, `ecsProperties`, or `eksProperties`.
  - If the value is `multinode`, then `nodeProperties` is required.
  <note markdown="1"> If the job is run on Fargate resources, then `multinode` isn’t supported.
  </note>
  [1]: docs.aws.amazon.com/batch/latest/userguide/multi-node-job-def.html
- :parameters (Hash<String,String>) — Default parameter substitution placeholders to set in the job definition. Parameters are specified as a key-value pair mapping. Parameters in a `SubmitJob` request override any corresponding parameter defaults from the job definition.
- :scheduling_priority (Integer) — The scheduling priority for jobs that are submitted with this job definition. This only affects jobs in job queues with a fair share policy. Jobs with a higher scheduling priority are scheduled before jobs with a lower scheduling priority.
  The minimum supported value is 0 and the maximum supported value is 9999.
- :container_properties (Types::ContainerProperties) — An object with properties specific to Amazon ECS-based single-node container-based jobs. If the job definition’s `type` parameter is `container`, then you must specify either `containerProperties` or `nodeProperties`. This must not be specified for Amazon EKS-based job definitions.
  <note markdown="1"> If the job runs on Fargate resources, then you must not specify `nodeProperties`; use only `containerProperties`.
  </note>
- :node_properties (Types::NodeProperties) — An object with properties specific to multi-node parallel jobs. If you specify node properties for a job, it becomes a multi-node parallel job. For more information, see [Multi-node Parallel Jobs][1] in the *Batch User Guide*.
  <note markdown="1"> If the job runs on Fargate resources, then you must not specify `nodeProperties`; use `containerProperties` instead.
  </note>
  <note markdown="1"> If the job runs on Amazon EKS resources, then you must not specify `nodeProperties`.
  </note>
  [1]: docs.aws.amazon.com/batch/latest/userguide/multi-node-parallel-jobs.html
- :retry_strategy (Types::RetryStrategy) — The retry strategy to use for failed jobs that are submitted with this job definition. Any retry strategy that’s specified during a SubmitJob operation overrides the retry strategy defined here. If a job is terminated due to a timeout, it isn’t retried.
- :propagate_tags (Boolean) — Specifies whether to propagate the tags from the job or job definition to the corresponding Amazon ECS task. If no value is specified, the tags are not propagated. Tags can only be propagated to the tasks during task creation. For tags with the same name, job tags are given priority over job definition tags. If the total number of combined tags from the job and job definition is over 50, the job is moved to the `FAILED` state.
  <note markdown="1"> If the job runs on Amazon EKS resources, then you must not specify `propagateTags`.
  </note>
- :timeout (Types::JobTimeout) — The timeout configuration for jobs that are submitted with this job definition, after which Batch terminates your jobs if they have not finished. If a job is terminated due to a timeout, it isn’t retried. The minimum value for the timeout is 60 seconds. Any timeout configuration that’s specified during a SubmitJob operation overrides the timeout configuration defined here. For more information, see [Job Timeouts][1] in the *Batch User Guide*.
  [1]: docs.aws.amazon.com/batch/latest/userguide/job_timeouts.html
- :tags (Hash<String,String>) — The tags that you apply to the job definition to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see [Tagging Amazon Web Services Resources][1] in the *Batch User Guide*.
  [1]: docs.aws.amazon.com/batch/latest/userguide/using-tags.html
- :platform_capabilities (Array<String>) — The platform capabilities required by the job definition. If no value is specified, it defaults to `EC2`. To run the job on Fargate resources, specify `FARGATE`.
  <note markdown="1"> If the job runs on Amazon EKS resources, then you must not specify `platformCapabilities`.
  </note>
- :eks_properties (Types::EksProperties) — An object with properties that are specific to Amazon EKS-based jobs. This must not be specified for Amazon ECS-based job definitions.
- :ecs_properties (Types::EcsProperties) — An object with properties that are specific to Amazon ECS-based jobs. This must not be specified for Amazon EKS-based job definitions.
Returns:
- (Types::RegisterJobDefinitionResponse) — Returns a response object which responds to the following methods:
  - #job_definition_name => String
  - #job_definition_arn => String
  - #revision => Integer
See Also:

# File 'lib/aws-sdk-batch/client.rb', line 3995

def register_job_definition(params = {}, options = {})
  req = build_request(:register_job_definition, params)
  req.send_request(options)
end
#submit_job(params = {}) ⇒ Types::SubmitJobResponse
Submits a Batch job from a job definition. Parameters that are specified during SubmitJob override parameters defined in the job definition. vCPU and memory requirements that are specified in the `resourceRequirements` objects in the job definition are the exception. They can’t be overridden this way using the `memory` and `vcpus` parameters. Rather, you must specify updates to job definition parameters in a `resourceRequirements` object that’s included in the `containerOverrides` parameter.
<note markdown="1"> Job queues with a scheduling policy are limited to 500 active fair share identifiers at a time.
</note>
Jobs that run on Fargate resources can't be guaranteed to run for more than 14 days. This is because, after 14 days, Fargate resources might become unavailable and the job might be terminated.
Examples:
Example: To submit a job to a queue
# This example submits a simple container job called example to the HighPriority job queue.
resp = client.submit_job({
job_definition: "sleep60",
job_name: "example",
job_queue: "HighPriority",
})
resp.to_h outputs the following:
{
job_id: "876da822-4198-45f2-a252-6cea32512ea8",
job_name: "example",
}
Request syntax with placeholder values
resp = client.submit_job({
job_name: "String", # required
job_queue: "String", # required
share_identifier: "String",
scheduling_priority_override: 1,
array_properties: {
size: 1,
},
depends_on: [
{
job_id: "String",
type: "N_TO_N", # accepts N_TO_N, SEQUENTIAL
},
],
job_definition: "String", # required
parameters: {
"String" => "String",
},
container_overrides: {
vcpus: 1,
memory: 1,
command: ["String"],
instance_type: "String",
environment: [
{
name: "String",
value: "String",
},
],
resource_requirements: [
{
value: "String", # required
type: "GPU", # required, accepts GPU, VCPU, MEMORY
},
],
},
node_overrides: {
num_nodes: 1,
node_property_overrides: [
{
target_nodes: "String", # required
container_overrides: {
vcpus: 1,
memory: 1,
command: ["String"],
instance_type: "String",
environment: [
{
name: "String",
value: "String",
},
],
resource_requirements: [
{
value: "String", # required
type: "GPU", # required, accepts GPU, VCPU, MEMORY
},
],
},
ecs_properties_override: {
task_properties: [
{
containers: [
{
command: ["String"],
environment: [
{
name: "String",
value: "String",
},
],
name: "String",
resource_requirements: [
{
value: "String", # required
type: "GPU", # required, accepts GPU, VCPU, MEMORY
},
],
},
],
},
],
},
instance_types: ["String"],
eks_properties_override: {
pod_properties: {
containers: [
{
name: "String",
image: "String",
command: ["String"],
args: ["String"],
env: [
{
name: "String", # required
value: "String",
},
],
resources: {
limits: {
"String" => "Quantity",
},
requests: {
"String" => "Quantity",
},
},
},
],
init_containers: [
{
name: "String",
image: "String",
command: ["String"],
args: ["String"],
env: [
{
name: "String", # required
value: "String",
},
],
resources: {
limits: {
"String" => "Quantity",
},
requests: {
"String" => "Quantity",
},
},
},
],
metadata: {
labels: {
"String" => "String",
},
},
},
},
},
],
},
retry_strategy: {
attempts: 1,
evaluate_on_exit: [
{
on_status_reason: "String",
on_reason: "String",
on_exit_code: "String",
action: "RETRY", # required, accepts RETRY, EXIT
},
],
},
propagate_tags: false,
timeout: {
attempt_duration_seconds: 1,
},
tags: {
"TagKey" => "TagValue",
},
eks_properties_override: {
pod_properties: {
containers: [
{
name: "String",
image: "String",
command: ["String"],
args: ["String"],
env: [
{
name: "String", # required
value: "String",
},
],
resources: {
limits: {
"String" => "Quantity",
},
requests: {
"String" => "Quantity",
},
},
},
],
init_containers: [
{
name: "String",
image: "String",
command: ["String"],
args: ["String"],
env: [
{
name: "String", # required
value: "String",
},
],
resources: {
limits: {
"String" => "Quantity",
},
requests: {
"String" => "Quantity",
},
},
},
],
metadata: {
labels: {
"String" => "String",
},
},
},
},
ecs_properties_override: {
task_properties: [
{
containers: [
{
command: ["String"],
environment: [
{
name: "String",
value: "String",
},
],
name: "String",
resource_requirements: [
{
value: "String", # required
type: "GPU", # required, accepts GPU, VCPU, MEMORY
},
],
},
],
},
],
},
})
Response structure
resp.job_arn #=> String
resp.job_name #=> String
resp.job_id #=> String
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:job_name
(required, String)
—
The name of the job. It can be up to 128 characters long. The first character must be alphanumeric, and it can contain uppercase and lowercase letters, numbers, hyphens (-), and underscores (_).
-
:job_queue
(required, String)
—
The job queue where the job is submitted. You can specify either the name or the Amazon Resource Name (ARN) of the queue.
-
:share_identifier
(String)
—
The share identifier for the job. Don’t specify this parameter if the job queue doesn’t have a scheduling policy. If the job queue has a scheduling policy, then this parameter must be specified.
This string is limited to 255 alphanumeric characters, and can be followed by an asterisk (*).
-
:scheduling_priority_override
(Integer)
—
The scheduling priority for the job. This only affects jobs in job queues with a fair share policy. Jobs with a higher scheduling priority are scheduled before jobs with a lower scheduling priority. This overrides any scheduling priority in the job definition and works only within a single share identifier.
The minimum supported value is 0 and the maximum supported value is 9999.
-
:array_properties
(Types::ArrayProperties)
—
The array properties for the submitted job, such as the size of the array. The array size can be between 2 and 10,000. If you specify array properties for a job, it becomes an array job. For more information, see [Array Jobs][1] in the *Batch User Guide*.
[1]: docs.aws.amazon.com/batch/latest/userguide/array_jobs.html
-
:depends_on
(Array<Types::JobDependency>)
—
A list of dependencies for the job. A job can depend upon a maximum of 20 jobs. You can specify a `SEQUENTIAL` type dependency without specifying a job ID for array jobs so that each child array job completes sequentially, starting at index 0. You can also specify an `N_TO_N` type dependency with a job ID for array jobs. In that case, each index child of this job must wait for the corresponding index child of each dependency to complete before it can begin.
-
:job_definition
(required, String)
—
The job definition used by this job. This value can be one of `definition-name`, `definition-name:revision`, or the Amazon Resource Name (ARN) for the job definition, with or without the revision (`arn:aws:batch:region:account:job-definition/definition-name:revision`, or `arn:aws:batch:region:account:job-definition/definition-name`).
If the revision is not specified, then the latest active revision is used.
-
:parameters
(Hash<String,String>)
—
Additional parameters passed to the job that replace parameter substitution placeholders that are set in the job definition. Parameters are specified as a key and value pair mapping. Parameters in a `SubmitJob` request override any corresponding parameter defaults from the job definition.
-
:container_overrides
(Types::ContainerOverrides)
—
An object with properties that override the defaults for the job definition that specify the name of a container in the specified job definition and the overrides it should receive. You can override the default command for a container, which is specified in the job definition or the Docker image, with a `command` override. You can also override existing environment variables on a container or add new environment variables to it with an `environment` override.
-
:node_overrides
(Types::NodeOverrides)
—
A list of node overrides in JSON format that specify the node range to target and the container overrides for that node range.
<note markdown="1"> This parameter isn't applicable to jobs that are running on Fargate resources; use `containerOverrides` instead.
</note>
-
:retry_strategy
(Types::RetryStrategy)
—
The retry strategy to use for failed jobs from this SubmitJob operation. When a retry strategy is specified here, it overrides the retry strategy defined in the job definition.
-
:propagate_tags
(Boolean)
—
Specifies whether to propagate the tags from the job or job definition to the corresponding Amazon ECS task. If no value is specified, the tags aren't propagated. Tags can only be propagated to the tasks during task creation. For tags with the same name, job tags are given priority over job definition tags. If the total number of combined tags from the job and job definition is over 50, the job is moved to the `FAILED` state. When specified, this overrides the tag propagation setting in the job definition.
-
:timeout
(Types::JobTimeout)
—
The timeout configuration for this SubmitJob operation. You can specify a timeout duration after which Batch terminates your jobs if they haven’t finished. If a job is terminated due to a timeout, it isn’t retried. The minimum value for the timeout is 60 seconds. This configuration overrides any timeout configuration specified in the job definition. For array jobs, child jobs have the same timeout configuration as the parent job. For more information, see [Job Timeouts] in the *Amazon Elastic Container Service Developer Guide*.
[1]: docs.aws.amazon.com/AmazonECS/latest/developerguide/job_timeouts.html
-
:tags
(Hash<String,String>)
—
The tags that you apply to the job request to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see [Tagging Amazon Web Services Resources] in the *Amazon Web Services General Reference*.
-
:eks_properties_override
(Types::EksPropertiesOverride)
—
An object with properties that override defaults for the job definition. This can only be specified for jobs that run on Amazon EKS resources.
-
:ecs_properties_override
(Types::EcsPropertiesOverride)
—
An object with properties that override defaults for the job definition. This can only be specified for jobs that run on Amazon ECS resources.
Returns:
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 4421

def submit_job(params = {}, options = {})
  req = build_request(:submit_job, params)
  req.send_request(options)
end
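The override and dependency rules above can be sketched as a plain params hash. This is an illustrative sketch only (the helper, job, queue, and definition names are hypothetical, not part of the SDK). Note that memory is overridden through a `resource_requirements` entry rather than the top-level `memory` shortcut, and that a `SEQUENTIAL` dependency with no `job_id` makes the array children run in index order:

```ruby
# Sketch: build a submit_job params hash for a 10-child array job whose
# children run sequentially, with a memory override expressed through
# resource_requirements (the memory/vcpus shortcuts cannot override
# resourceRequirements from the job definition).
def build_array_job_params(name:, queue:, definition:, size:, memory_mib:)
  {
    job_name: name,                       # up to 128 characters
    job_queue: queue,                     # queue name or ARN
    job_definition: definition,           # name, name:revision, or ARN
    array_properties: { size: size },     # 2..10_000 children
    depends_on: [{ type: "SEQUENTIAL" }], # no job_id: children run in index order
    container_overrides: {
      resource_requirements: [
        { type: "MEMORY", value: memory_mib.to_s } # value must be a String
      ]
    },
    timeout: { attempt_duration_seconds: 120 }     # minimum is 60 seconds
  }
end

params = build_array_job_params(
  name: "example-array", queue: "HighPriority",
  definition: "sleep60", size: 10, memory_mib: 2048
)
# With a configured client: Aws::Batch::Client.new.submit_job(params)
```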
#tag_resource(params = {}) ⇒ Struct
Associates the specified tags with a resource, identified by `resourceArn`. If existing tags on a resource aren't specified in the request parameters, they aren't changed. When a resource is deleted, the tags that are associated with that resource are deleted as well. Batch resources that support tags are compute environments, jobs, job definitions, job queues, and scheduling policies. ARNs for child jobs of array and multi-node parallel (MNP) jobs aren't supported.
Examples:
Example: TagResource Example
# This demonstrates calling the TagResource action.
resp = client.tag_resource({
resource_arn: "arn:aws:batch:us-east-1:123456789012:job-definition/sleep30:1",
tags: {
"Stage" => "Alpha",
},
})
resp.to_h outputs the following:
{
}
Request syntax with placeholder values
resp = client.tag_resource({
resource_arn: "String", # required
tags: { # required
"TagKey" => "TagValue",
},
})
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:resource_arn
(required, String)
—
The Amazon Resource Name (ARN) of the resource that tags are added to. Batch resources that support tags are compute environments, jobs, job definitions, job queues, and scheduling policies. ARNs for child jobs of array and multi-node parallel (MNP) jobs aren’t supported.
-
:tags
(required, Hash<String,String>)
—
The tags that you apply to the resource to help you categorize and organize your resources. Each tag consists of a key and an optional value. For more information, see [Tagging Amazon Web Services Resources] in *Amazon Web Services General Reference*.
Returns:
-
(Struct)
—
Returns an empty response.
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 4481

def tag_resource(params = {}, options = {})
  req = build_request(:tag_resource, params)
  req.send_request(options)
end
#terminate_job(params = {}) ⇒ Struct
Terminates a job in a job queue. Jobs that are in the `STARTING` or `RUNNING` state are terminated, which causes them to transition to `FAILED`. Jobs that have not progressed to the `STARTING` state are cancelled.
Examples:
Example: To terminate a job
# This example terminates a job with the specified job ID.
resp = client.terminate_job({
job_id: "61e743ed-35e4-48da-b2de-5c8333821c84",
reason: "Terminating job.",
})
resp.to_h outputs the following:
{
}
Request syntax with placeholder values
resp = client.terminate_job({
job_id: "String", # required
reason: "String", # required
})
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:job_id
(required, String)
—
The Batch job ID of the job to terminate.
-
:reason
(required, String)
—
A message to attach to the job that explains the reason for canceling it. This message is returned by future DescribeJobs operations on the job. It is also recorded in the Batch activity logs.
This parameter has a limit of 1024 characters.
Returns:
-
(Struct)
—
Returns an empty response.
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 4528

def terminate_job(params = {}, options = {})
  req = build_request(:terminate_job, params)
  req.send_request(options)
end
#untag_resource(params = {}) ⇒ Struct
Deletes specified tags from a Batch resource.
Examples:
Example: UntagResource Example
# This demonstrates calling the UntagResource action.
resp = client.untag_resource({
resource_arn: "arn:aws:batch:us-east-1:123456789012:job-definition/sleep30:1",
tag_keys: [
"Stage",
],
})
resp.to_h outputs the following:
{
}
Request syntax with placeholder values
resp = client.untag_resource({
resource_arn: "String", # required
tag_keys: ["TagKey"], # required
})
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:resource_arn
(required, String)
—
The Amazon Resource Name (ARN) of the resource from which to delete tags. Batch resources that support tags are compute environments, jobs, job definitions, job queues, and scheduling policies. ARNs for child jobs of array and multi-node parallel (MNP) jobs aren’t supported.
-
:tag_keys
(required, Array<String>)
—
The keys of the tags to be removed.
Returns:
-
(Struct)
—
Returns an empty response.
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 4574

def untag_resource(params = {}, options = {})
  req = build_request(:untag_resource, params)
  req.send_request(options)
end
#update_compute_environment(params = {}) ⇒ Types::UpdateComputeEnvironmentResponse
Updates a Batch compute environment.
Examples:
Example: To update a compute environment
# This example disables the P2OnDemand compute environment so it can be deleted.
resp = client.update_compute_environment({
compute_environment: "P2OnDemand",
state: "DISABLED",
})
resp.to_h outputs the following:
{
compute_environment_arn: "arn:aws:batch:us-east-1:012345678910:compute-environment/P2OnDemand",
compute_environment_name: "P2OnDemand",
}
Request syntax with placeholder values
resp = client.update_compute_environment({
compute_environment: "String", # required
state: "ENABLED", # accepts ENABLED, DISABLED
unmanagedv_cpus: 1,
compute_resources: {
minv_cpus: 1,
maxv_cpus: 1,
desiredv_cpus: 1,
subnets: ["String"],
security_group_ids: ["String"],
allocation_strategy: "BEST_FIT_PROGRESSIVE", # accepts BEST_FIT_PROGRESSIVE, SPOT_CAPACITY_OPTIMIZED, SPOT_PRICE_CAPACITY_OPTIMIZED
instance_types: ["String"],
ec2_key_pair: "String",
instance_role: "String",
tags: {
"String" => "String",
},
placement_group: "String",
bid_percentage: 1,
launch_template: {
launch_template_id: "String",
launch_template_name: "String",
version: "String",
},
ec2_configuration: [
{
image_type: "ImageType", # required
image_id_override: "ImageIdOverride",
image_kubernetes_version: "KubernetesVersion",
},
],
update_to_latest_image_version: false,
type: "EC2", # accepts EC2, SPOT, FARGATE, FARGATE_SPOT
image_id: "String",
},
service_role: "String",
update_policy: {
terminate_jobs_on_update: false,
job_execution_timeout_minutes: 1,
},
context: "String",
})
Response structure
resp.compute_environment_name #=> String
resp.compute_environment_arn #=> String
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:compute_environment
(required, String)
—
The name or full Amazon Resource Name (ARN) of the compute environment to update.
-
:state
(String)
—
The state of the compute environment. Compute environments in the `ENABLED` state can accept jobs from a queue and scale in or out automatically based on the workload demand of its associated queues.
If the state is `ENABLED`, then the Batch scheduler can attempt to place jobs from an associated job queue on the compute resources within the environment. If the compute environment is managed, then it can scale its instances out or in automatically, based on the job queue demand.
If the state is `DISABLED`, then the Batch scheduler doesn't attempt to place jobs within the environment. Jobs in a `STARTING` or `RUNNING` state continue to progress normally. Managed compute environments in the `DISABLED` state don't scale out.
<note markdown="1"> Compute environments in a `DISABLED` state may continue to incur billing charges. To prevent additional charges, turn off and then delete the compute environment. For more information, see [State] in the *Batch User Guide*.
</note>
When an instance is idle, the instance scales down to the `minvCpus` value. However, the instance size doesn't change. For example, consider a `c5.8xlarge` instance with a `minvCpus` value of `4` and a `desiredvCpus` value of `36`. This instance doesn't scale down to a `c5.large` instance.
-
:unmanagedv_cpus
(Integer)
—
The maximum number of vCPUs expected to be used for an unmanaged compute environment. Don’t specify this parameter for a managed compute environment. This parameter is only used for fair share scheduling to reserve vCPU capacity for new share identifiers. If this parameter isn’t provided for a fair share job queue, no vCPU capacity is reserved.
-
:compute_resources
(Types::ComputeResourceUpdate)
—
Details of the compute resources managed by the compute environment. Required for a managed compute environment. For more information, see [Compute Environments][1] in the *Batch User Guide*.
[1]: docs.aws.amazon.com/batch/latest/userguide/compute_environments.html
-
:service_role
(String)
—
The full Amazon Resource Name (ARN) of the IAM role that allows Batch to make calls to other Amazon Web Services services on your behalf. For more information, see [Batch service IAM role][1] in the *Batch User Guide*.
If the compute environment has a service-linked role, it can't be changed to use a regular IAM role. Likewise, if the compute environment has a regular IAM role, it can't be changed to use a service-linked role. To update the parameters for the compute environment that require an infrastructure update to change, the AWSServiceRoleForBatch service-linked role must be used. For more information, see [Updating compute environments][2] in the *Batch User Guide*.
If your specified role has a path other than `/`, then you must either specify the full role ARN (recommended) or prefix the role name with the path.
<note markdown="1"> Depending on how you created your Batch service role, its ARN might contain the `service-role` path prefix. When you only specify the name of the service role, Batch assumes that your ARN doesn't use the `service-role` path prefix. Because of this, we recommend that you specify the full ARN of your service role when you create compute environments.
</note>
[1]: docs.aws.amazon.com/batch/latest/userguide/service_IAM_role.html [2]: docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
-
:update_policy
(Types::UpdatePolicy)
—
Specifies the updated infrastructure update policy for the compute environment. For more information about infrastructure updates, see [Updating compute environments][1] in the *Batch User Guide*.
[1]: docs.aws.amazon.com/batch/latest/userguide/updating-compute-environments.html
-
:context
(String)
—
Reserved.
Returns:
-
(Types::UpdateComputeEnvironmentResponse)
—
Returns a response object which responds to the following methods:
-
#compute_environment_name => String
-
#compute_environment_arn => String
-
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 4755

def update_compute_environment(params = {}, options = {})
  req = build_request(:update_compute_environment, params)
  req.send_request(options)
end
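The disable-and-drain flow shown in the example above can be sketched as a single params hash. As an assumption here, the target is a managed EC2 environment whose vCPU floor is also pinned to zero so idle instances scale in before deletion; the helper name and environment name are hypothetical:

```ruby
# Sketch: drain a managed compute environment before deleting it.
# DISABLED stops new job placement; zeroing minv/desiredv cpus lets the
# environment scale its instances in while running jobs finish.
def drain_environment_params(name)
  {
    compute_environment: name, # name or full ARN
    state: "DISABLED",
    compute_resources: {
      minv_cpus: 0,
      desiredv_cpus: 0
    }
  }
end

params = drain_environment_params("P2OnDemand")
# With a configured client: Aws::Batch::Client.new.update_compute_environment(params)
```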
#update_job_queue(params = {}) ⇒ Types::UpdateJobQueueResponse
Updates a job queue.
Examples:
Example: To update a job queue
# This example disables a job queue so that it can be deleted.
resp = client.update_job_queue({
job_queue: "GPGPU",
state: "DISABLED",
})
resp.to_h outputs the following:
{
job_queue_arn: "arn:aws:batch:us-east-1:012345678910:job-queue/GPGPU",
job_queue_name: "GPGPU",
}
Request syntax with placeholder values
resp = client.update_job_queue({
job_queue: "String", # required
state: "ENABLED", # accepts ENABLED, DISABLED
scheduling_policy_arn: "String",
priority: 1,
compute_environment_order: [
{
order: 1, # required
compute_environment: "String", # required
},
],
job_state_time_limit_actions: [
{
reason: "String", # required
state: "RUNNABLE", # required, accepts RUNNABLE
max_time_seconds: 1, # required
action: "CANCEL", # required, accepts CANCEL
},
],
})
Response structure
resp.job_queue_name #=> String
resp.job_queue_arn #=> String
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:job_queue
(required, String)
—
The name or the Amazon Resource Name (ARN) of the job queue.
-
:state
(String)
—
Describes the queue's ability to accept new jobs. If the job queue state is `ENABLED`, it can accept jobs. If the job queue state is `DISABLED`, new jobs can't be added to the queue, but jobs already in the queue can finish.
-
:scheduling_policy_arn
(String)
—
Amazon Resource Name (ARN) of the fair share scheduling policy. Once a job queue is created, the fair share scheduling policy can be replaced but not removed. The format is `aws:Partition:batch:Region:Account:scheduling-policy/Name`. For example, `aws:aws:batch:us-west-2:123456789012:scheduling-policy/MySchedulingPolicy`.
-
:priority
(Integer)
—
The priority of the job queue. Job queues with a higher priority (or a higher integer value for the `priority` parameter) are evaluated first when associated with the same compute environment. Priority is determined in descending order. For example, a job queue with a priority value of `10` is given scheduling preference over a job queue with a priority value of `1`. All of the compute environments must be either EC2 (`EC2` or `SPOT`) or Fargate (`FARGATE` or `FARGATE_SPOT`). EC2 and Fargate compute environments can't be mixed.
-
:compute_environment_order
(Array<Types::ComputeEnvironmentOrder>)
—
Details the set of compute environments mapped to a job queue and their order relative to each other. This is one of the parameters used by the job scheduler to determine which compute environment runs a given job. Compute environments must be in the `VALID` state before you can associate them with a job queue. All of the compute environments must be either EC2 (`EC2` or `SPOT`) or Fargate (`FARGATE` or `FARGATE_SPOT`). EC2 and Fargate compute environments can't be mixed.
<note markdown="1"> All compute environments that are associated with a job queue must share the same architecture. Batch doesn't support mixing compute environment architecture types in a single job queue.
</note>
-
:job_state_time_limit_actions
(Array<Types::JobStateTimeLimitAction>)
—
The set of actions that Batch performs on jobs that remain at the head of the job queue in the specified state longer than the specified time. Batch performs each action after `maxTimeSeconds` has passed. (Note: the minimum value for `maxTimeSeconds` is 600 (10 minutes) and its maximum value is 86,400 (24 hours).)
Returns:
-
(Types::UpdateJobQueueResponse)
—
Returns a response object which responds to the following methods:
-
#job_queue_name => String
-
#job_queue_arn => String
-
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 4865

def update_job_queue(params = {}, options = {})
  req = build_request(:update_job_queue, params)
  req.send_request(options)
end
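The `job_state_time_limit_actions` bounds described above (600 to 86,400 seconds) can be encoded as a small guard. This is a sketch with a hypothetical helper and queue name, not part of the SDK:

```ruby
# Sketch: params that cancel jobs stuck at the head of the queue in
# RUNNABLE for longer than `seconds`. Batch bounds maxTimeSeconds to
# 600..86_400, so validate before building the request.
def stuck_job_guard_params(queue, seconds: 1_800)
  unless (600..86_400).cover?(seconds)
    raise ArgumentError, "max_time_seconds must be within 600..86_400"
  end
  {
    job_queue: queue,
    job_state_time_limit_actions: [
      {
        reason: "Exceeded RUNNABLE time limit",
        state: "RUNNABLE",        # the only state accepted here
        max_time_seconds: seconds,
        action: "CANCEL"          # the only action accepted here
      }
    ]
  }
end

params = stuck_job_guard_params("HighPriority", seconds: 1_800)
# With a configured client: Aws::Batch::Client.new.update_job_queue(params)
```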
#update_scheduling_policy(params = {}) ⇒ Struct
Updates a scheduling policy.
Examples:
Request syntax with placeholder values
resp = client.update_scheduling_policy({
arn: "String", # required
fairshare_policy: {
share_decay_seconds: 1,
compute_reservation: 1,
share_distribution: [
{
share_identifier: "String", # required
weight_factor: 1.0,
},
],
},
})
Parameters:
-
params
(Hash)
(defaults to: {})
—
({})
Options Hash (params):
-
:arn
(required, String)
—
The Amazon Resource Name (ARN) of the scheduling policy to update.
-
:fairshare_policy
(Types::FairsharePolicy)
—
The fair share policy.
Returns:
-
(Struct)
—
Returns an empty response.
See Also:
# File 'lib/aws-sdk-batch/client.rb', line 4900

def update_scheduling_policy(params = {}, options = {})
  req = build_request(:update_scheduling_policy, params)
  req.send_request(options)
end
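As a sketch of the fair share policy shape (the helper, ARN, identifiers, and numbers are hypothetical): `share_distribution` maps share identifiers, optionally ending in `*`, to weight factors, where, assuming the documented fair share semantics, a lower `weight_factor` entitles a share to a larger slice of compute:

```ruby
# Sketch: build update_scheduling_policy params from an identifier =>
# weight_factor mapping. share_decay_seconds is the usage look-back
# window; compute_reservation holds back capacity for inactive shares.
def fairshare_update_params(arn, weights)
  {
    arn: arn,
    fairshare_policy: {
      share_decay_seconds: 3_600,
      compute_reservation: 10,
      share_distribution: weights.map do |id, factor|
        { share_identifier: id, weight_factor: factor }
      end
    }
  }
end

params = fairshare_update_params(
  "arn:aws:batch:us-east-1:123456789012:scheduling-policy/Example",
  { "prod*" => 0.5, "dev*" => 1.0 }
)
# With a configured client: Aws::Batch::Client.new.update_scheduling_policy(params)
```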
#waiter_names ⇒ Object
This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.
# File 'lib/aws-sdk-batch/client.rb', line 4929

def waiter_names
  []
end