Class: Aws::S3::Object
- Inherits: Object
  - Object
  - Aws::S3::Object
- Extended by: Deprecations
- Defined in:
  - lib/aws-sdk-s3/customizations/object.rb
  - lib/aws-sdk-s3/object.rb
Defined Under Namespace
Classes: Collection
Read-Only Attributes collapse
- #accept_ranges ⇒ String
  Indicates that a range of bytes was specified.
- #archive_status ⇒ String
  The archive state of the head object.
- #bucket_key_enabled ⇒ Boolean
  Indicates whether the object uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).
- #bucket_name ⇒ String
- #cache_control ⇒ String
  Specifies caching behavior along the request/reply chain.
- #checksum_crc32 ⇒ String
  The Base64 encoded, 32-bit `CRC32` checksum of the object.
- #checksum_crc32c ⇒ String
  The Base64 encoded, 32-bit `CRC32C` checksum of the object.
- #checksum_crc64nvme ⇒ String
  The Base64 encoded, 64-bit `CRC64NVME` checksum of the object.
- #checksum_md5 ⇒ String
  The Base64 encoded, 128-bit `MD5` digest of the object.
- #checksum_sha1 ⇒ String
  The Base64 encoded, 160-bit `SHA1` digest of the object.
- #checksum_sha256 ⇒ String
  The Base64 encoded, 256-bit `SHA256` digest of the object.
- #checksum_sha512 ⇒ String
  The Base64 encoded, 512-bit `SHA512` digest of the object.
- #checksum_type ⇒ String
  The checksum type, which determines how part-level checksums are combined to create an object-level checksum for multipart objects.
- #checksum_xxhash128 ⇒ String
  The Base64 encoded, 128-bit `XXHASH128` checksum of the object.
- #checksum_xxhash3 ⇒ String
  The Base64 encoded, 64-bit `XXHASH3` checksum of the object.
- #checksum_xxhash64 ⇒ String
  The Base64 encoded, 64-bit `XXHASH64` checksum of the object.
- #content_disposition ⇒ String
  Specifies presentational information for the object.
- #content_encoding ⇒ String
  Indicates what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
- #content_language ⇒ String
  The language the content is in.
- #content_length ⇒ Integer
  Size of the body in bytes.
- #content_range ⇒ String
  The portion of the object returned in the response for a `GET` request.
- #content_type ⇒ String
  A standard MIME type describing the format of the object data.
- #delete_marker ⇒ Boolean
  Specifies whether the object retrieved was (true) or was not (false) a delete marker.
- #etag ⇒ String
  An entity tag (ETag) is an opaque identifier assigned by a web server to a specific version of a resource found at a URL.
- #expiration ⇒ String
  If the object expiration is configured (see `PutBucketLifecycleConfiguration`), the response includes this header.
- #expires ⇒ Time
  The date and time at which the object is no longer cacheable.
- #expires_string ⇒ String
- #key ⇒ String
- #last_modified ⇒ Time
  Date and time when the object was last modified.
- #metadata ⇒ Hash<String,String>
  A map of metadata to store with the object in S3.
- #missing_meta ⇒ Integer
  The number of metadata entries not returned in `x-amz-meta` headers.
- #object_lock_legal_hold_status ⇒ String
  Specifies whether a legal hold is in effect for this object.
- #object_lock_mode ⇒ String
  The Object Lock mode, if any, that's in effect for this object.
- #object_lock_retain_until_date ⇒ Time
  The date and time when the Object Lock retention period expires.
- #parts_count ⇒ Integer
  The count of parts this object has.
- #replication_status ⇒ String
  Amazon S3 can return this header if your request involves a bucket that is either a source or a destination in a replication rule.
- #request_charged ⇒ String
  If present, indicates that the requester was successfully charged for the request.
- #restore ⇒ String
  If the object is an archived object (an object whose storage class is GLACIER), the response includes this header if the archive restoration is in progress (see RestoreObject) or an archive copy is already restored.
- #server_side_encryption ⇒ String
  The server-side encryption algorithm used when you store this object in Amazon S3 or Amazon FSx.
- #sse_customer_algorithm ⇒ String
  If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that's used.
- #sse_customer_key_md5 ⇒ String
  If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.
- #ssekms_key_id ⇒ String
  If present, indicates the ID of the KMS key that was used for object encryption.
- #storage_class ⇒ String
  Provides storage class information of the object.
- #tag_count ⇒ Integer
  The number of tags, if any, on the object, when you have the relevant permission to read object tags.
- #version_id ⇒ String
  Version ID of the object.
- #website_redirect_location ⇒ String
  If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL.
Actions collapse
- #copy_from(options = {}) ⇒ Types::CopyObjectOutput
- #delete(options = {}) ⇒ Types::DeleteObjectOutput
- #get(options = {}, &block) ⇒ Types::GetObjectOutput
- #head(options = {}) ⇒ Types::HeadObjectOutput
- #initiate_multipart_upload(options = {}) ⇒ MultipartUpload
- #put(options = {}) ⇒ Types::PutObjectOutput
- #restore_object(options = {}) ⇒ Types::RestoreObjectOutput
Associations collapse
- #acl ⇒ ObjectAcl
- #bucket ⇒ Bucket
- #identifiers ⇒ Object (deprecated, private)
- #multipart_upload(id) ⇒ MultipartUpload
- #version(id) ⇒ ObjectVersion
Instance Method Summary collapse
- #client ⇒ Client
- #copy_to(target, options = {}) ⇒ Object
  Copies this object to another object.
- #data ⇒ Types::HeadObjectOutput
  Returns the data for this Object.
- #data_loaded? ⇒ Boolean
  Returns `true` if this resource is loaded.
- #download_file(destination, options = {}) ⇒ Boolean
  Downloads a file in S3 to a path on disk.
- #exists?(options = {}) ⇒ Boolean
  Returns `true` if the Object exists.
- #initialize(*args) ⇒ Object (constructor)
  A new instance of Object.
- #load ⇒ self (also: #reload)
- #move_to(target, options = {}) ⇒ void
  Copies and deletes the current object.
- #presigned_post(options = {}) ⇒ PresignedPost
  Creates a PresignedPost that makes it easy to upload a file from a web browser directly to Amazon S3 using an HTML post form with a file field.
- #presigned_request(method, params = {}) ⇒ String, Hash
  Allows you to create presigned URL requests for S3 operations.
- #presigned_url(method, params = {}) ⇒ String
  Generates a pre-signed URL for this object.
- #public_url(options = {}) ⇒ String
  Returns the public (un-signed) URL for this object.
- #size ⇒ Object
- #upload_file(source, options = {}) {|response| ... } ⇒ Boolean
  Uploads a file from disk to the current object in S3.
- #upload_stream(options = {}, &block) ⇒ Boolean
  Uploads a stream in a streaming fashion to the current object in S3.
- #wait_until(options = {}) {|resource| ... } ⇒ Resource (deprecated)
  Deprecated. Use Aws::S3::Client#wait_until instead.
- #wait_until_exists(options = {}, &block) ⇒ Object
- #wait_until_not_exists(options = {}, &block) ⇒ Object
Constructor Details
#initialize(bucket_name, key, options = {}) ⇒ Object
#initialize(options = {}) ⇒ Object

Returns a new instance of Object.

# File 'lib/aws-sdk-s3/object.rb', line 24
def initialize(*args)
  options = Hash === args.last ? args.pop.dup : {}
  @bucket_name = extract_bucket_name(args, options)
  @key = extract_key(args, options)
  @data = options.delete(:data)
  @client = options.delete(:client) || Client.new(options)
  @waiter_block_warned = false
end
Instance Method Details
#accept_ranges ⇒ String
Indicates that a range of bytes was specified.
# File 'lib/aws-sdk-s3/object.rb', line 59
def accept_ranges
  data[:accept_ranges]
end
#acl ⇒ ObjectAcl
# File 'lib/aws-sdk-s3/object.rb', line 3555
def acl
  ObjectAcl.new(
    bucket_name: @bucket_name,
    object_key: @key,
    client: @client
  )
end
#archive_status ⇒ String
The archive state of the head object.
Note: This functionality is not supported for directory buckets.

# File 'lib/aws-sdk-s3/object.rb', line 122
def archive_status
  data[:archive_status]
end
#bucket ⇒ Bucket
# File 'lib/aws-sdk-s3/object.rb', line 3564
def bucket
  Bucket.new(
    name: @bucket_name,
    client: @client
  )
end
#bucket_key_enabled ⇒ Boolean
Indicates whether the object uses an S3 Bucket Key for server-side encryption with Key Management Service (KMS) keys (SSE-KMS).
# File 'lib/aws-sdk-s3/object.rb', line 439
def bucket_key_enabled
  data[:bucket_key_enabled]
end
#bucket_name ⇒ String
# File 'lib/aws-sdk-s3/object.rb', line 36
def bucket_name
  @bucket_name
end
#cache_control ⇒ String
Specifies caching behavior along the request/reply chain.
# File 'lib/aws-sdk-s3/object.rb', line 326
def cache_control
  data[:cache_control]
end
#checksum_crc32 ⇒ String
The Base64 encoded, 32-bit `CRC32` checksum of the object. This checksum is only present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see [Checking object integrity][1] in the *Amazon S3 User Guide*.
[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums

# File 'lib/aws-sdk-s3/object.rb', line 151
def checksum_crc32
  data[:checksum_crc32]
end
#checksum_crc32c ⇒ String
The Base64 encoded, 32-bit `CRC32C` checksum of the object. This checksum is only present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see [Checking object integrity][1] in the *Amazon S3 User Guide*.
[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums

# File 'lib/aws-sdk-s3/object.rb', line 168
def checksum_crc32c
  data[:checksum_crc32c]
end
#checksum_crc64nvme ⇒ String
The Base64 encoded, 64-bit `CRC64NVME` checksum of the object. For more information, see [Checking object integrity][1] in the *Amazon S3 User Guide*.
[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html

# File 'lib/aws-sdk-s3/object.rb', line 180
def checksum_crc64nvme
  data[:checksum_crc64nvme]
end
#checksum_md5 ⇒ String
The Base64 encoded, 128-bit `MD5` digest of the object. For more information, see [Checking object integrity][1] in the *Amazon S3 User Guide*.
[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html

# File 'lib/aws-sdk-s3/object.rb', line 238
def checksum_md5
  data[:checksum_md5]
end
#checksum_sha1 ⇒ String
The Base64 encoded, 160-bit `SHA1` digest of the object. This checksum is only present if the checksum was uploaded with the object. When you use the API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see [Checking object integrity][1] in the *Amazon S3 User Guide*.
[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums

# File 'lib/aws-sdk-s3/object.rb', line 197
def checksum_sha1
  data[:checksum_sha1]
end
#checksum_sha256 ⇒ String
The Base64 encoded, 256-bit `SHA256` digest of the object. This checksum is only present if the checksum was uploaded with the object. When you use an API operation on an object that was uploaded using multipart uploads, this value may not be a direct checksum value of the full object. Instead, it's a calculation based on the checksum values of each individual part. For more information about how checksums are calculated with multipart uploads, see [Checking object integrity][1] in the *Amazon S3 User Guide*.
[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html#large-object-checksums

# File 'lib/aws-sdk-s3/object.rb', line 214
def checksum_sha256
  data[:checksum_sha256]
end
#checksum_sha512 ⇒ String
The Base64 encoded, 512-bit `SHA512` digest of the object. For more information, see [Checking object integrity][1] in the *Amazon S3 User Guide*.
[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html

# File 'lib/aws-sdk-s3/object.rb', line 226
def checksum_sha512
  data[:checksum_sha512]
end
#checksum_type ⇒ String
The checksum type, which determines how part-level checksums are combined to create an object-level checksum for multipart objects. You can use this header to verify that the checksum type received is the same checksum type that was specified in the `CreateMultipartUpload` request. For more information, see [Checking object integrity][1] in the *Amazon S3 User Guide*.
[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html

# File 'lib/aws-sdk-s3/object.rb', line 289
def checksum_type
  data[:checksum_type]
end
#checksum_xxhash128 ⇒ String
The Base64 encoded, 128-bit `XXHASH128` checksum of the object. For more information, see [Checking object integrity][1] in the *Amazon S3 User Guide*.
[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html

# File 'lib/aws-sdk-s3/object.rb', line 274
def checksum_xxhash128
  data[:checksum_xxhash128]
end
#checksum_xxhash3 ⇒ String
The Base64 encoded, 64-bit `XXHASH3` checksum of the object. For more information, see [Checking object integrity][1] in the *Amazon S3 User Guide*.
[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html

# File 'lib/aws-sdk-s3/object.rb', line 262
def checksum_xxhash3
  data[:checksum_xxhash3]
end
#checksum_xxhash64 ⇒ String
The Base64 encoded, 64-bit `XXHASH64` checksum of the object. For more information, see [Checking object integrity][1] in the *Amazon S3 User Guide*.
[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/checking-object-integrity.html

# File 'lib/aws-sdk-s3/object.rb', line 250
def checksum_xxhash64
  data[:checksum_xxhash64]
end
#content_disposition ⇒ String
Specifies presentational information for the object.
# File 'lib/aws-sdk-s3/object.rb', line 332
def content_disposition
  data[:content_disposition]
end
#content_encoding ⇒ String
Indicates what content encodings have been applied to the object and thus what decoding mechanisms must be applied to obtain the media-type referenced by the Content-Type header field.
# File 'lib/aws-sdk-s3/object.rb', line 340
def content_encoding
  data[:content_encoding]
end
#content_language ⇒ String
The language the content is in.
# File 'lib/aws-sdk-s3/object.rb', line 346
def content_language
  data[:content_language]
end
#content_length ⇒ Integer
Size of the body in bytes.
# File 'lib/aws-sdk-s3/object.rb', line 134
def content_length
  data[:content_length]
end
#content_range ⇒ String
The portion of the object returned in the response for a `GET` request.

# File 'lib/aws-sdk-s3/object.rb', line 359
def content_range
  data[:content_range]
end
#content_type ⇒ String
A standard MIME type describing the format of the object data.
# File 'lib/aws-sdk-s3/object.rb', line 352
def content_type
  data[:content_type]
end
#copy_from(options = {}) ⇒ Types::CopyObjectOutput
# File 'lib/aws-sdk-s3/customizations/object.rb', line 78
alias_method :copy_from, :copy_from
#copy_to(target, options = {}) ⇒ Object
If you need to copy to a bucket in a different region, use #copy_from.
Copies this object to another object. Use `multipart_copy: true` for large objects. This is required for objects that exceed 5GB.

# File 'lib/aws-sdk-s3/customizations/object.rb', line 121
def copy_to(target, options = {})
  Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    ObjectCopier.new(self, options).copy_to(target, options)
  end
end
#data ⇒ Types::HeadObjectOutput
Returns the data for this Aws::S3::Object. Calls Client#head_object if #data_loaded? is `false`.

# File 'lib/aws-sdk-s3/object.rb', line 631
def data
  load unless @data
  @data
end
#data_loaded? ⇒ Boolean
# File 'lib/aws-sdk-s3/object.rb', line 639
def data_loaded?
  !!@data
end
#delete(options = {}) ⇒ Types::DeleteObjectOutput
# File 'lib/aws-sdk-s3/object.rb', line 1667
def delete(options = {})
  options = options.merge(
    bucket: @bucket_name,
    key: @key
  )
  resp = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    @client.delete_object(options)
  end
  resp.data
end
#delete_marker ⇒ Boolean
Specifies whether the object retrieved was (true) or was not (false) a Delete Marker. If false, this response header does not appear in the response.
Note: This functionality is not supported for directory buckets.

# File 'lib/aws-sdk-s3/object.rb', line 53
def delete_marker
  data[:delete_marker]
end
#download_file(destination, options = {}) ⇒ Boolean
Downloads a file in S3 to a path on disk.
# small files (< 5MB) are downloaded in a single API call
obj.download_file('/path/to/file')
Files larger than 5MB are downloaded using the multipart method:
# large files are split into parts
# and the parts are downloaded in parallel
obj.download_file('/path/to/very_large_file')
You can provide a callback to monitor progress of the download:
# bytes and part_sizes are each an array with 1 entry per part
# part_sizes may not be known until the first bytes are retrieved
progress = proc do |bytes, part_sizes, file_size|
puts bytes.map.with_index { |b, i| "Part #{i + 1}: #{b} / #{part_sizes[i]}" }.join(' ') + "Total: #{100.0 * bytes.sum / file_size}%"
end
obj.download_file('/path/to/file', progress_callback: progress)
# File 'lib/aws-sdk-s3/customizations/object.rb', line 541
def download_file(destination, options = {})
  download_opts = options.merge(bucket: bucket_name, key: key)
  executor = DefaultExecutor.new(max_threads: download_opts.delete(:thread_count))
  downloader = FileDownloader.new(client: client, executor: executor)
  Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    downloader.download(destination, download_opts)
  end
  executor.shutdown
  true
end
#etag ⇒ String
An entity tag (ETag) is an opaque identifier assigned by a web server to a specific version of a resource found at a URL.
# File 'lib/aws-sdk-s3/object.rb', line 296
def etag
  data[:etag]
end
#exists?(options = {}) ⇒ Boolean
Returns `true` if the Object exists.

# File 'lib/aws-sdk-s3/object.rb', line 646
def exists?(options = {})
  begin
    wait_until_exists(options.merge(max_attempts: 1))
    true
  rescue Aws::Waiters::Errors::UnexpectedError => e
    raise e.error
  rescue Aws::Waiters::Errors::WaiterFailed
    false
  end
end
#expiration ⇒ String
If the object expiration is configured (see [`PutBucketLifecycleConfiguration`][1]), the response includes this header. It includes the `expiry-date` and `rule-id` key-value pairs providing object expiration information. The value of the `rule-id` is URL-encoded.

Note: Object expiration information is not returned in directory buckets and this header returns the value "`NotImplemented`" in all responses for directory buckets.

[1]: docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketLifecycleConfiguration.html

# File 'lib/aws-sdk-s3/object.rb', line 79
def expiration
  data[:expiration]
end
#expires ⇒ Time
The date and time at which the object is no longer cacheable.
# File 'lib/aws-sdk-s3/object.rb', line 365
def expires
  data[:expires]
end
#expires_string ⇒ String
# File 'lib/aws-sdk-s3/object.rb', line 370
def expires_string
  data[:expires_string]
end
#get(options = {}, &block) ⇒ Types::GetObjectOutput
# File 'lib/aws-sdk-s3/object.rb', line 1923
def get(options = {}, &block)
  options = options.merge(
    bucket: @bucket_name,
    key: @key
  )
  resp = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    @client.get_object(options, &block)
  end
  resp.data
end
#head(options = {}) ⇒ Types::HeadObjectOutput
# File 'lib/aws-sdk-s3/object.rb', line 3541
def head(options = {})
  options = options.merge(
    bucket: @bucket_name,
    key: @key
  )
  resp = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    @client.head_object(options)
  end
  resp.data
end
#identifiers ⇒ Object
This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.
# File 'lib/aws-sdk-s3/object.rb', line 3595
def identifiers
  {
    bucket_name: @bucket_name,
    key: @key
  }
end
#initiate_multipart_upload(options = {}) ⇒ MultipartUpload
# File 'lib/aws-sdk-s3/object.rb', line 2507
def initiate_multipart_upload(options = {})
  options = options.merge(
    bucket: @bucket_name,
    key: @key
  )
  resp = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    @client.create_multipart_upload(options)
  end
  MultipartUpload.new(
    bucket_name: @bucket_name,
    object_key: @key,
    id: resp.data.upload_id,
    client: @client
  )
end
#key ⇒ String
# File 'lib/aws-sdk-s3/object.rb', line 41
def key
  @key
end
#last_modified ⇒ Time
Date and time when the object was last modified.
# File 'lib/aws-sdk-s3/object.rb', line 128
def last_modified
  data[:last_modified]
end
#load ⇒ self Also known as: reload
Loads, or reloads #data for the current Aws::S3::Object. Returns `self`, making it possible to chain methods.

object.reload.data

# File 'lib/aws-sdk-s3/object.rb', line 616
def load
  resp = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    @client.head_object(
      bucket: @bucket_name,
      key: @key
    )
  end
  @data = resp.data
  self
end
#metadata ⇒ Hash<String,String>
A map of metadata to store with the object in S3.
# File 'lib/aws-sdk-s3/object.rb', line 400
def metadata
  data[:metadata]
end
#missing_meta ⇒ Integer
This is set to the number of metadata entries not returned in `x-amz-meta` headers. This can happen if you create metadata using an API like SOAP that supports more flexible metadata than the REST API. For example, using SOAP, you can create metadata whose values are not legal HTTP headers.

Note: This functionality is not supported for directory buckets.

# File 'lib/aws-sdk-s3/object.rb', line 310
def missing_meta
  data[:missing_meta]
end
#move_to(target, options = {}) ⇒ void
This method returns an undefined value.
Copies and deletes the current object. The object will only be deleted if the copy operation succeeds.
# File 'lib/aws-sdk-s3/customizations/object.rb', line 135
def move_to(target, options = {})
  copy_to(target, options)
  delete
end
#multipart_upload(id) ⇒ MultipartUpload
# File 'lib/aws-sdk-s3/object.rb', line 3573
def multipart_upload(id)
  MultipartUpload.new(
    bucket_name: @bucket_name,
    object_key: @key,
    id: id,
    client: @client
  )
end
#object_lock_legal_hold_status ⇒ String
Specifies whether a legal hold is in effect for this object. This header is only returned if the requester has the `s3:GetObjectLegalHold` permission. This header is not returned if the specified version of this object has never had a legal hold applied. For more information about S3 Object Lock, see [Object Lock][1].

Note: This functionality is not supported for directory buckets.

[1]: docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html

# File 'lib/aws-sdk-s3/object.rb', line 599
def object_lock_legal_hold_status
  data[:object_lock_legal_hold_status]
end
#object_lock_mode ⇒ String
The Object Lock mode, if any, that's in effect for this object. This header is only returned if the requester has the `s3:GetObjectRetention` permission. For more information about S3 Object Lock, see [Object Lock][1].

Note: This functionality is not supported for directory buckets.

[1]: docs.aws.amazon.com/AmazonS3/latest/dev/object-lock.html

# File 'lib/aws-sdk-s3/object.rb', line 569
def object_lock_mode
  data[:object_lock_mode]
end
#object_lock_retain_until_date ⇒ Time
The date and time when the Object Lock retention period expires. This header is only returned if the requester has the `s3:GetObjectRetention` permission.

Note: This functionality is not supported for directory buckets.

# File 'lib/aws-sdk-s3/object.rb', line 581
def object_lock_retain_until_date
  data[:object_lock_retain_until_date]
end
#parts_count ⇒ Integer
The count of parts this object has. This value is only returned if you specify `partNumber` in your request and the object was uploaded as a multipart upload.

# File 'lib/aws-sdk-s3/object.rb', line 534
def parts_count
  data[:parts_count]
end
#presigned_post(options = {}) ⇒ PresignedPost
Creates a PresignedPost that makes it easy to upload a file from a web browser directly to Amazon S3 using an HTML post form with a file field.
See the PresignedPost documentation for more information.

# File 'lib/aws-sdk-s3/customizations/object.rb', line 149
def presigned_post(options = {})
  PresignedPost.new(
    client.config.credentials,
    client.config.region,
    bucket_name,
    { key: key, url: bucket.url }.merge(options)
  )
end
#presigned_request(method, params = {}) ⇒ String, Hash
Allows you to create presigned URL requests for S3 operations. This method returns a tuple containing the URL and the signed `X-Amz-*` headers to be used with the presigned URL.

# File 'lib/aws-sdk-s3/customizations/object.rb', line 293
def presigned_request(method, params = {})
  presigner = Presigner.new(client: client)
  if %w(delete head get put).include?(method.to_s)
    method = "#{method}_object".to_sym
  end
  presigner.presigned_request(
    method.downcase,
    params.merge(bucket: bucket_name, key: key)
  )
end
#presigned_url(method, params = {}) ⇒ String
Generates a pre-signed URL for this object.
# File 'lib/aws-sdk-s3/customizations/object.rb', line 220
def presigned_url(method, params = {})
  presigner = Presigner.new(client: client)
  if %w(delete head get put).include?(method.to_s)
    method = "#{method}_object".to_sym
  end
  presigner.presigned_url(
    method.downcase,
    params.merge(bucket: bucket_name, key: key)
  )
end
#public_url(options = {}) ⇒ String
Returns the public (un-signed) URL for this object.
s3.bucket('bucket-name').object('obj-key').public_url
#=> "https://bucket-name.s3.amazonaws.com/obj-key"
To use a virtual-hosted bucket URL, pass `virtual_host: true`. HTTPS is used unless `secure: false` is set. If the bucket name contains dots (.), you will need to set `secure: false`.
s3.bucket('my-bucket.com').object('key')
.public_url(virtual_host: true)
#=> "https://my-bucket.com/key"
# File 'lib/aws-sdk-s3/customizations/object.rb', line 328
def public_url(options = {})
  url = URI.parse(bucket.url(options))
  url.path += '/' unless url.path[-1] == '/'
  url.path += key.gsub(/[^\/]+/) { |s| Seahorse::Util.uri_escape(s) }
  url.to_s
end
#put(options = {}) ⇒ Types::PutObjectOutput
# File 'lib/aws-sdk-s3/object.rb', line 3201
def put(options = {})
  options = options.merge(
    bucket: @bucket_name,
    key: @key
  )
  resp = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    @client.put_object(options)
  end
  resp.data
end
#replication_status ⇒ String
Amazon S3 can return this header if your request involves a bucket that is either a source or a destination in a replication rule.
In replication, you have a source bucket on which you configure replication and destination bucket or buckets where Amazon S3 stores object replicas. When you request an object (`GetObject`) or object metadata (`HeadObject`) from these buckets, Amazon S3 will return the `x-amz-replication-status` header in the response as follows:

- **If requesting an object from the source bucket**, Amazon S3 will return the `x-amz-replication-status` header if the object in your request is eligible for replication.
  For example, suppose that in your replication configuration, you specify object prefix `TaxDocs` requesting Amazon S3 to replicate objects with key prefix `TaxDocs`. Any objects you upload with this key name prefix, for example `TaxDocs/document1.pdf`, are eligible for replication. For any object request with this key name prefix, Amazon S3 will return the `x-amz-replication-status` header with value PENDING, COMPLETED or FAILED indicating object replication status.
- **If requesting an object from a destination bucket**, Amazon S3 will return the `x-amz-replication-status` header with value REPLICA if the object in your request is a replica that Amazon S3 created and there is no replica modification replication in progress.
- **When replicating objects to multiple destination buckets**, the `x-amz-replication-status` header acts differently. The header of the source object will only return a value of COMPLETED when replication is successful to all destinations. The header will remain at value PENDING until replication has completed for all destinations. If one or more destinations fails replication the header will return FAILED.

For more information, see [Replication][1].

Note: This functionality is not supported for directory buckets.

[1]: docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html

# File 'lib/aws-sdk-s3/object.rb', line 526
def replication_status
  data[:replication_status]
end
#request_charged ⇒ String
If present, indicates that the requester was successfully charged for the request. For more information, see [Using Requester Pays buckets for storage transfers and usage][1] in the *Amazon Simple Storage Service User Guide*.

Note: This functionality is not supported for directory buckets.

[1]: docs.aws.amazon.com/AmazonS3/latest/userguide/RequesterPaysBuckets.html

# File 'lib/aws-sdk-s3/object.rb', line 477
def request_charged
  data[:request_charged]
end
#restore ⇒ String
If the object is an archived object (an object whose storage class is GLACIER), the response includes this header if either the archive restoration is in progress (see [RestoreObject][1]) or an archive copy is already restored.

If an archive copy is already restored, the header value indicates when Amazon S3 is scheduled to delete the object copy. For example:

`x-amz-restore: ongoing-request="false", expiry-date="Fri, 21 Dec 2012 00:00:00 GMT"`

If the object restoration is in progress, the header returns the value `ongoing-request="true"`.

For more information about archiving objects, see [Transitioning Objects: General Considerations][2].

Note: This functionality is not supported for directory buckets. Directory buckets only support `EXPRESS_ONEZONE` (the S3 Express One Zone storage class) in Availability Zones and `ONEZONE_IA` (the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones.

[1]: docs.aws.amazon.com/AmazonS3/latest/API/API_RestoreObject.html
[2]: docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html#lifecycle-transition-general-considerations

# File 'lib/aws-sdk-s3/object.rb', line 112
def restore
  data[:restore]
end
#restore_object(options = {}) ⇒ Types::RestoreObjectOutput
# File 'lib/aws-sdk-s3/object.rb', line 3342
def restore_object(options = {})
  options = options.merge(bucket: @bucket_name, key: @key)
  resp = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    @client.restore_object(options)
  end
  resp.data
end
#server_side_encryption ⇒ String
The server-side encryption algorithm used when you store this object in Amazon S3 or Amazon FSx.
<note markdown="1"> When accessing data stored in Amazon FSx file systems using S3 access points, the only valid server side encryption option is `aws:fsx`.
</note>
# File 'lib/aws-sdk-s3/object.rb', line 394
def server_side_encryption
  data[:server_side_encryption]
end
#size ⇒ Object
# File 'lib/aws-sdk-s3/customizations/object.rb', line 6
alias size content_length
#sse_customer_algorithm ⇒ String
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to confirm the encryption algorithm that’s used.
<note markdown="1"> This functionality is not supported for directory buckets.
</note>
# File 'lib/aws-sdk-s3/object.rb', line 412
def sse_customer_algorithm
  data[:sse_customer_algorithm]
end
#sse_customer_key_md5 ⇒ String
If server-side encryption with a customer-provided encryption key was requested, the response will include this header to provide the round-trip message integrity verification of the customer-provided encryption key.
<note markdown="1"> This functionality is not supported for directory buckets.
</note>
# File 'lib/aws-sdk-s3/object.rb', line 425
def sse_customer_key_md5
  data[:sse_customer_key_md5]
end
#ssekms_key_id ⇒ String
If present, indicates the ID of the KMS key that was used for object encryption.
# File 'lib/aws-sdk-s3/object.rb', line 432
def ssekms_key_id
  data[:ssekms_key_id]
end
#storage_class ⇒ String
Provides storage class information of the object. Amazon S3 returns this header for all objects except for S3 Standard storage class objects.
For more information, see [Storage Classes].
<note markdown="1"> Directory buckets - Directory buckets only support `EXPRESS_ONEZONE` (the S3 Express One Zone storage class) in Availability Zones and `ONEZONE_IA` (the S3 One Zone-Infrequent Access storage class) in Dedicated Local Zones.
</note>
[1]: https://docs.aws.amazon.com/AmazonS3/latest/dev/storage-class-intro.html
# File 'lib/aws-sdk-s3/object.rb', line 460
def storage_class
  data[:storage_class]
end
#tag_count ⇒ Integer
The number of tags, if any, on the object, when you have the relevant permission to read object tags.
You can use [GetObjectTagging] to retrieve the tag set associated with an object.
<note markdown="1"> This functionality is not supported for directory buckets.
</note>
[1]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObjectTagging.html
# File 'lib/aws-sdk-s3/object.rb', line 552
def tag_count
  data[:tag_count]
end
#upload_file(source, options = {}) {|response| ... } ⇒ Boolean
Uploads a file from disk to the current object in S3.
# small files are uploaded in a single API call
obj.upload_file('/path/to/file')
Files larger than or equal to `:multipart_threshold` are uploaded using the Amazon S3 multipart upload APIs.
# large files are automatically split into parts
# and the parts are uploaded in parallel
obj.upload_file('/path/to/very_large_file')
The response of the S3 upload API is yielded if a block is given.
# API response will have etag value of the file
obj.upload_file('/path/to/file') do |response|
etag = response.etag
end
You can provide a callback to monitor progress of the upload:
# bytes and totals are each an array with 1 entry per part
progress = proc do |bytes, totals|
  puts bytes.map.with_index { |b, i| "Part #{i+1}: #{b} / #{totals[i]}" }.join(' ') +
       " Total: #{100.0 * bytes.sum / totals.sum}%"
end
obj.upload_file('/path/to/file', progress_callback: progress)
# File 'lib/aws-sdk-s3/customizations/object.rb', line 459
def upload_file(source, options = {})
  upload_opts = options.merge(bucket: bucket_name, key: key)
  executor = DefaultExecutor.new(max_threads: upload_opts.delete(:thread_count))
  uploader = FileUploader.new(
    client: client,
    executor: executor,
    multipart_threshold: upload_opts.delete(:multipart_threshold)
  )
  response = Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    uploader.upload(source, upload_opts)
  end
  yield response if block_given?
  executor.shutdown
  true
end
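The progress callback receives parallel arrays with one entry per part, so reducing them to an overall percentage is plain arithmetic. A minimal sketch; `overall_percent` is a local illustration, not an SDK API:

```ruby
# Reduce the per-part bytes/totals arrays a progress_callback receives
# into a single overall upload percentage. (Illustrative helper.)
def overall_percent(bytes, totals)
  (100.0 * bytes.sum / totals.sum).round(1)
end

bytes  = [5_242_880, 2_621_440, 0]         # bytes uploaded so far, per part
totals = [5_242_880, 5_242_880, 5_242_880] # total size of each part
overall_percent(bytes, totals) # => 50.0
```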
#upload_stream(options = {}, &block) ⇒ Boolean
Uploads a stream in a streaming fashion to the current object in S3.
Passed chunks are automatically split into multipart upload parts, and the parts are uploaded in parallel. This allows for streaming uploads that never touch the disk.
Note that this is known to have issues in JRuby until jruby-9.1.15.0, so avoid using this with older versions of JRuby.
# File 'lib/aws-sdk-s3/customizations/object.rb', line 385
def upload_stream(options = {}, &block)
  upload_opts = options.merge(bucket: bucket_name, key: key)
  executor = DefaultExecutor.new(max_threads: upload_opts.delete(:thread_count))
  uploader = MultipartStreamUploader.new(
    client: client,
    executor: executor,
    tempfile: upload_opts.delete(:tempfile),
    part_size: upload_opts.delete(:part_size)
  )
  Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    uploader.upload(upload_opts, &block)
  end
  executor.shutdown
  true
end
#version(id) ⇒ ObjectVersion
# File 'lib/aws-sdk-s3/object.rb', line 3584
def version(id)
  ObjectVersion.new(
    bucket_name: @bucket_name,
    object_key: @key,
    id: id,
    client: @client
  )
end
#version_id ⇒ String
Version ID of the object.
<note markdown="1"> This functionality is not supported for directory buckets.
</note>
# File 'lib/aws-sdk-s3/object.rb', line 320
def version_id
  data[:version_id]
end
#wait_until(options = {}) {|resource| ... } ⇒ Resource
Deprecated. Use [Aws::S3::Client]#wait_until instead.
The waiting operation is performed on a copy. The original resource remains unchanged.
Waiter polls an API operation until a resource enters a desired state.
## Basic Usage
The waiter polls until it succeeds, fails by entering a terminal state, or reaches the maximum number of attempts.
# polls in a loop until condition is true
resource.wait_until() {|resource| condition}
## Example
instance.wait_until(max_attempts: 10, delay: 5) do |instance|
  instance.state.name == 'running'
end
## Configuration
You can configure the maximum number of polling attempts, and the delay (in seconds) between each polling attempt. The waiting condition is set by passing a block to #wait_until:
# poll for ~25 seconds
resource.wait_until(max_attempts: 5, delay: 5) {|resource| ...}
## Callbacks
You can be notified before each polling attempt and before each delay. If you throw `:success` or `:failure` from these callbacks, it will terminate the waiter.
started_at = Time.now

# poll for 1 hour, instead of a number of attempts
proc = Proc.new do |attempts, response|
  throw :failure if Time.now - started_at > 3600
end

# disable max attempts
instance.wait_until(before_wait: proc, max_attempts: nil) {...}
## Handling Errors
When a waiter is successful, it returns the Resource. When a waiter fails, it raises an error.
begin
  resource.wait_until(...)
rescue Aws::Waiters::Errors::WaiterFailed
  # resource did not enter the desired state in time
end
Options:
- `:max_attempts` (Integer): maximum number of attempts
- `:delay` (Float): delay between each attempt in seconds
- `:before_attempt` (Proc): invoked before each attempt
- `:before_wait` (Proc): invoked before each wait
# File 'lib/aws-sdk-s3/object.rb', line 779
def wait_until(options = {}, &block)
  self_copy = self.dup
  attempts = 0
  options[:max_attempts] = 10 unless options.key?(:max_attempts)
  options[:delay] ||= 10
  options[:poller] = Proc.new do
    attempts += 1
    if block.call(self_copy)
      [:success, self_copy]
    else
      self_copy.reload unless attempts == options[:max_attempts]
      :retry
    end
  end
  Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    Aws::Waiters::Waiter.new(options).wait({})
  end
end
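The core polling semantics can be sketched outside the SDK: retry a condition block up to `max_attempts` times, sleeping `delay` seconds between attempts, and raise if the resource never reaches the desired state. The name `poll_until` is illustrative; the real waiter additionally supports `before_attempt`/`before_wait` callbacks and `:success`/`:failure` throws:

```ruby
# Minimal polling loop mirroring the wait_until semantics above.
# (Illustrative sketch, not an SDK API.)
def poll_until(max_attempts: 10, delay: 0)
  attempts = 0
  loop do
    attempts += 1
    return [:success, attempts] if yield(attempts)
    raise 'waiter failed' if attempts >= max_attempts
    sleep delay
  end
end

# Condition becomes true on the third poll:
status, tries = poll_until(max_attempts: 5) { |n| n == 3 }
status # => :success
tries  # => 3
```

Setting `max_attempts: nil` in the real waiter disables the attempt cap entirely, which is why the callback example above bounds the wait by elapsed time instead.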
#wait_until_exists(options = {}, &block) ⇒ Object
# File 'lib/aws-sdk-s3/object.rb', line 663
def wait_until_exists(options = {}, &block)
  options, params = separate_params_and_options(options)
  waiter = Waiters::ObjectExists.new(options)
  yield_waiter_and_warn(waiter, &block) if block_given?
  Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    waiter.wait(params.merge(bucket: @bucket_name, key: @key))
  end
  Object.new({
    bucket_name: @bucket_name,
    key: @key,
    client: @client
  })
end
#wait_until_not_exists(options = {}, &block) ⇒ Object
# File 'lib/aws-sdk-s3/object.rb', line 684
def wait_until_not_exists(options = {}, &block)
  options, params = separate_params_and_options(options)
  waiter = Waiters::ObjectNotExists.new(options)
  yield_waiter_and_warn(waiter, &block) if block_given?
  Aws::Plugins::UserAgent.metric('RESOURCE_MODEL') do
    waiter.wait(params.merge(bucket: @bucket_name, key: @key))
  end
  Object.new({
    bucket_name: @bucket_name,
    key: @key,
    client: @client
  })
end
#website_redirect_location ⇒ String
If the bucket is configured as a website, redirects requests for this object to another object in the same bucket or to an external URL. Amazon S3 stores the value of this header in the object metadata.
<note markdown="1"> This functionality is not supported for directory buckets.
</note>
# File 'lib/aws-sdk-s3/object.rb', line 382
def website_redirect_location
  data[:website_redirect_location]
end