Page topics
- General S3 FAQs
- AWS Regions
- Billing
- Amazon S3 and IPv6
- S3 Event Notifications
- Amazon S3 Transfer Acceleration
- Security
- S3 Access Grants
- S3 Access Points
- Durability & Data Protection
- Storage Classes
- S3 Intelligent-Tiering
- S3 Standard
- S3 Express One Zone
- S3 Standard-Infrequent Access (S3 Standard-IA)
- S3 One Zone-Infrequent Access (S3 One Zone-IA)
- Amazon S3 Glacier Instant Retrieval storage class
- Amazon S3 Glacier Flexible Retrieval storage class
- Amazon S3 Glacier Deep Archive
- S3 on Outposts
- Storage Management
- Storage Analytics & Insights
- Query in Place
- Replication
- Data processing
- Data Access
- Storage Browser for Amazon S3
General S3 FAQs
What is Amazon S3?
What can I do with Amazon S3?
How can I get started using Amazon S3?
What can I do with Amazon S3 that I cannot do with an on-premises solution?
What kind of data can I store in Amazon S3?
How much data can I store in Amazon S3?
What is an S3 general purpose bucket?
What is an S3 directory bucket?
What is an S3 table bucket?
A table bucket is purpose-built for storing tables using the Apache Iceberg format. Use Amazon S3 Tables to create table buckets and set up table-level permissions in just a few steps. S3 table buckets are specifically optimized for analytics and machine learning workloads. With built-in support for Apache Iceberg, you can query tabular data in S3 with popular query engines including Amazon Athena, Amazon Redshift, and Apache Spark. Use S3 table buckets to store tabular data such as daily purchase transactions, streaming sensor data, or ad impressions as an Iceberg table in Amazon S3, and then interact with that data using analytics capabilities.
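For example, here is a minimal sketch of creating a table bucket and an Iceberg table with the AWS SDK for Python, assuming a recent boto3 release that includes the s3tables client; the bucket, namespace, and table names are illustrative, not from this FAQ:

```python
# Minimal sketch (assumptions: a recent boto3 with the "s3tables" client;
# all names below are illustrative).
import boto3

s3tables = boto3.client("s3tables", region_name="us-east-1")

# Create the table bucket that will hold Iceberg tables.
bucket = s3tables.create_table_bucket(name="analytics-tables")
bucket_arn = bucket["arn"]

# Namespaces group related tables; create one, then an Iceberg table in it.
s3tables.create_namespace(tableBucketARN=bucket_arn, namespace=["sales"])
s3tables.create_table(
    tableBucketARN=bucket_arn,
    namespace="sales",
    name="daily_transactions",
    format="ICEBERG",  # Iceberg is the supported table format
)
```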
What is the difference between a general purpose bucket, a directory bucket, and a table bucket?
What does Amazon do with my data in Amazon S3?
Does Amazon store its own data in Amazon S3?
How is Amazon S3 data organized?
How do I interface with Amazon S3?
How reliable is Amazon S3?
How will Amazon S3 perform if traffic from my application suddenly spikes?
Does Amazon S3 offer a Service Level Agreement (SLA)?
What is the consistency model for Amazon S3?
Why does strong read-after-write consistency help me?
AWS Regions
Where is my data stored?
Why should I use Amazon S3 storage classes for AWS Dedicated Local Zones?
What is an AWS Region?
What is an AWS Availability Zone (AZ)?
The Amazon S3 One Zone-IA storage class replicates data within a single AZ. The data stored in S3 One Zone-IA is not resilient to the physical loss of an Availability Zone resulting from disasters, such as earthquakes, fires, and floods.
How do I decide which AWS Region to store my data in?
In which parts of the world is Amazon S3 available?
Billing
How much does Amazon S3 cost?
How will I be charged and billed for my use of Amazon S3?
Why do prices vary depending on which Amazon S3 Region I choose?
How am I charged for using Versioning?
1) Day 1 of the month: You perform a PUT of 4 GB (4,294,967,296 bytes) on your bucket.
2) Day 16 of the month: You perform a PUT of 5 GB (5,368,709,120 bytes) within the same bucket using the same key as the original PUT on Day 1.
When analyzing the storage costs of the above operations, note that the 4 GB object from Day 1 is not deleted from the bucket when the 5 GB object is written on Day 16. Instead, the 4 GB object is preserved as an older version and the 5 GB object becomes the most recently written version of the object within your bucket. At the end of the month:
Total Byte-Hour usage:
[4,294,967,296 bytes x 31 days x (24 hours/day)] + [5,368,709,120 bytes x 16 days x (24 hours/day)] = 5,257,039,970,304 Byte-Hours
Conversion to Total GB-Months:
5,257,039,970,304 Byte-Hours x (1 GB / 1,073,741,824 bytes) x (1 month / 744 hours) = 6.581 GB-Months
The cost is calculated based on the current rates for your Region on the Amazon S3 pricing page.
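The same arithmetic, as a short Python sketch you can adapt to your own object sizes (all values are taken from the example above):

```python
# Byte-hours for the two object versions over a 31-day (744-hour) month.
GB = 1_073_741_824  # bytes per GB, as used in the example above

v1_bytes = 4 * GB   # 4 GB version, stored for all 31 days
v2_bytes = 5 * GB   # 5 GB version, stored from Day 16 (16 days)

byte_hours = v1_bytes * 31 * 24 + v2_bytes * 16 * 24
gb_months = byte_hours / GB / 744
print(f"{byte_hours:,} Byte-Hours = {gb_months:.3f} GB-Months")
# -> 5,257,039,970,304 Byte-Hours = 6.581 GB-Months
```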
How am I charged for accessing Amazon S3 through the AWS Management Console?
How am I charged if my Amazon S3 buckets are accessed from another AWS account?
Do your prices include taxes?
Will I incur any data transfer out to the internet charges when I move my data out of AWS?
I want to move my data out of AWS. How do I request free data transfer out to the internet?
Why do I have to request AWS’ pre-approval for free data transfer out to the internet before moving my data out of AWS?
Amazon S3 and IPv6
What is IPv6?
What can I do with IPv6?
How do I get started with IPv6 on Amazon S3?
Should I expect a change in Amazon S3 performance when using IPv6?
S3 Event Notifications
What are Amazon S3 Event Notifications?
What can I do with Amazon S3 Event Notifications?
What is included in Amazon S3 Event Notifications?
How do I set up Amazon S3 Event Notifications?
What does it cost to use Amazon S3 Event Notifications?
Amazon S3 Transfer Acceleration
What is S3 Transfer Acceleration?
How do I get started with S3 Transfer Acceleration?
How fast is S3 Transfer Acceleration?
Who should use S3 Transfer Acceleration?
How secure is S3 Transfer Acceleration?
What if S3 Transfer Acceleration is not faster than a regular Amazon S3 transfer?
Can I use S3 Transfer Acceleration with multipart uploads?
How should I choose between S3 Transfer Acceleration and Amazon CloudFront’s PUT/POST?
Can S3 Transfer Acceleration complement AWS Direct Connect?
Can S3 Transfer Acceleration complement AWS Storage Gateway or a third-party gateway?
Visit the File section of the Storage Gateway FAQ to learn more about the AWS implementation.
Can S3 Transfer Acceleration complement third-party integrated software?
Is S3 Transfer Acceleration HIPAA eligible?
Security
How secure is my data in Amazon S3?
For more information on security in AWS, refer to the AWS security page, and for S3 security information, visit the S3 security page and the S3 security best practices guide.
How can I control access to my data stored on Amazon S3?
Does Amazon S3 support data access auditing?
What options do I have for encrypting data stored on Amazon S3?
Can I comply with European data privacy regulations using Amazon S3?
Where is my object and object metadata stored in AWS Dedicated Local Zones?
By default, your object data and object metadata stay within the single Dedicated Local Zone where you put the object. Bucket management and telemetry data, including bucket names, capacity metrics, CloudTrail logs, CloudWatch metrics, customer managed keys from AWS Key Management Service (KMS), and AWS Identity and Access Management (IAM) policies, are stored in the parent AWS Region. Optionally, other bucket management features, like S3 Batch Operations, store management metadata, including the bucket name and object names, in the parent AWS Region.
What is an Amazon VPC Endpoint for Amazon S3?
Can I allow a specific Amazon VPC Endpoint access to my Amazon S3 bucket?
What is AWS PrivateLink for Amazon S3?
How do I get started with interface VPC endpoints for S3?
You can create an interface VPC endpoint using the AWS VPC Management Console, AWS Command Line Interface (AWS CLI), AWS SDK, or API. To learn more, visit the documentation.
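As a sketch of the SDK route, this is roughly what creating an interface endpoint for S3 looks like with boto3; the VPC, subnet, and security group IDs are placeholders:

```python
# Minimal sketch: create an interface VPC endpoint for S3 with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",             # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",  # S3 interface endpoint service
    SubnetIds=["subnet-0123456789abcdef0"],    # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"], # placeholder
)
print(resp["VpcEndpoint"]["VpcEndpointId"])
```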
When should I choose gateway VPC endpoints versus AWS PrivateLink-based interface VPC endpoints?
Can I use both Interface Endpoints and Gateway Endpoints for S3 in the same VPC?
What is Amazon Macie and how can I use it to secure my data?
What is IAM Access Analyzer for Amazon S3 and how does it work?
For more information, visit the IAM Access Analyzer documentation.
S3 Access Grants
What are Amazon S3 Access Grants?
Why should I use S3 Access Grants?
How do I get started with S3 Access Grants?
What types of identity are supported for S3 Access Grants permission grants?
What are the different access levels that S3 Access Grants offers?
Can I customize my access levels?
Are there any quotas for S3 Access Grants?
Is there any performance impact for data access when I use S3 Access Grants?
What other AWS services are required to use S3 Access Grants?
Does S3 Access Grants require client-side modifications?
Since client-side modifications are necessary, what AWS services and third-party applications are integrated with S3 Access Grants out-of-box today?
Is S3 Access Grants a replacement for AWS IAM?
Does S3 Access Grants work with KMS?
How do I view and manage my S3 Access Grants permission grants?
Can you grant public access to data with S3 Access Grants?
How can I audit requests that were authorized via S3 Access Grants?
How is S3 Access Grants priced?
What is the relationship between S3 Access Grants and Lake Formation?
Is S3 Access Grants integrated with IAM Access Analyzer?
S3 Access Points
What are Amazon S3 Access Points?
Why should I use an access point?
How do S3 Access Points work?
Is there a quota on how many S3 Access Points I can create?
When using an access point, how are requests authorized?
How do I write access point policies?
How is restricting access to specific VPCs using network origin controls on access points different from restricting access to VPCs using the bucket policy?
Can I enforce a “No internet data access” policy for all access points in my organization?
Can I completely disable direct access to a bucket using the bucket hostname?
Can I replace or remove an access point from a bucket?
What is the cost of Amazon S3 Access Points?
How do I get started with S3 Access Points?
Durability & Data Protection
How durable is Amazon S3?
How is Amazon S3 designed for 99.999999999% durability?
Is data stored in a One Zone storage class protected against damage or loss of the Availability Zone?
How does Amazon S3 go beyond 99.999999999% durability?
With such high durability, do I still need to back up my critical data?
What capabilities does Amazon S3 provide to protect my data against accidental or malicious deletes?
What checksum algorithms does Amazon S3 support for data integrity checking?
Amazon S3 uses a combination of Content-MD5 checksums, secure hash algorithms (SHAs), and cyclic redundancy checks (CRCs) to verify data integrity. Amazon S3 performs these checksums on data at rest and repairs any disparity using redundant data. In addition, the latest AWS SDKs automatically calculate efficient CRC-based checksums for all uploads. S3 independently verifies that checksum and only accepts objects after confirming that data integrity was maintained in transit over the public internet. If a version of the SDK that does not provide pre-calculated checksums is used to upload an object, S3 calculates a CRC-based checksum of the whole object, even for multipart uploads. Checksums are stored in object metadata and are therefore available to verify data integrity at any time.

You can choose from five supported checksum algorithms (SHA-1, SHA-256, CRC32, CRC32C, or CRC64NVME) for data integrity checking on your upload and download requests, depending on your application needs. You can automatically calculate and verify checksums as you store or retrieve data from S3, and can access the checksum information at any time using the HeadObject S3 API, the GetObjectAttributes S3 API, or an S3 Inventory report. Calculating a checksum as you stream data into S3 saves you time because you can verify and transmit your data in a single pass, instead of as two sequential operations. Using checksums for data validation is a best practice for data durability, and these capabilities increase the performance and reduce the cost of doing so.
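A minimal boto3 sketch of the flow described above, requesting a SHA-256 checksum on upload and reading it back later; the bucket and key names are placeholders:

```python
# Minimal sketch: request a SHA-256 checksum on upload, then read it back.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="amzn-s3-demo-bucket",  # placeholder
    Key="data.bin",
    Body=b"example payload",
    ChecksumAlgorithm="SHA256",    # S3 verifies this checksum on receipt
)

attrs = s3.get_object_attributes(
    Bucket="amzn-s3-demo-bucket",
    Key="data.bin",
    ObjectAttributes=["Checksum"],
)
print(attrs["Checksum"]["ChecksumSHA256"])  # stored with the object metadata
```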
What is Versioning?
Why should I use Versioning?
How do I start using Versioning?
How does Versioning protect me from accidental deletion of my objects?
Can I set up a trash, recycle bin, or rollback window on my Amazon S3 objects to recover from deletes and overwrites?
How can I ensure maximum protection of my preserved versions?
How am I charged for using Versioning?
1) Day 1 of the month: You perform a PUT of 4 GB (4,294,967,296 bytes) on your bucket.
2) Day 16 of the month: You perform a PUT of 5 GB (5,368,709,120 bytes) within the same bucket using the same key as the original PUT on Day 1.
When analyzing the storage costs of the above operations, note that the 4 GB object from Day 1 is not deleted from the bucket when the 5 GB object is written on Day 16. Instead, the 4 GB object is preserved as an older version and the 5 GB object becomes the most recently written version of the object within your bucket. At the end of the month:
Total Byte-Hour usage:
[4,294,967,296 bytes x 31 days x (24 hours/day)] + [5,368,709,120 bytes x 16 days x (24 hours/day)] = 5,257,039,970,304 Byte-Hours
Conversion to Total GB-Months:
5,257,039,970,304 Byte-Hours x (1 GB / 1,073,741,824 bytes) x (1 month / 744 hours) = 6.581 GB-Months
The cost is calculated based on the current rates for your Region on the Amazon S3 pricing page.
What is Amazon S3 Object Lock?
Learn more by visiting the S3 Object Lock user guide.
How does Amazon S3 Object Lock work?
S3 Object Lock can be configured in one of two Modes. When deployed in Governance Mode, AWS accounts with specific IAM permissions are able to remove WORM protection from an object version. If you require stronger immutability in order to comply with regulations, you can use Compliance Mode. In Compliance Mode, WORM protection cannot be removed by any user, including the root account.
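As an illustration, this is roughly how an object is written with a Governance Mode retention period using boto3, assuming the bucket was created with Object Lock enabled; the bucket, key, and retention date are placeholders:

```python
# Minimal sketch: write an object with a governance-mode retention period.
from datetime import datetime, timezone
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="amzn-s3-demo-bucket",  # placeholder; Object Lock enabled at creation
    Key="records/report.pdf",
    Body=b"...",
    ObjectLockMode="GOVERNANCE",   # use "COMPLIANCE" for non-removable protection
    ObjectLockRetainUntilDate=datetime(2026, 1, 1, tzinfo=timezone.utc),
)
```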
How does enabling S3 Object Lock for existing buckets impact the objects already existing in the buckets?
Can I disable S3 Object Lock after I have enabled it?
No, you cannot disable S3 Object Lock or S3 Versioning for buckets once S3 Object Lock is enabled.
How do I get started with replicating objects from buckets with S3 Object Lock enabled?
To start replicating objects with S3 Replication from buckets with S3 Object Lock enabled, you can add a replication configuration on your source bucket by specifying a destination bucket in the same or a different AWS Region and in the same or a different AWS account. You can choose to replicate all objects at the S3 bucket level, or filter objects at a shared prefix level or at an object level using S3 object tags. You will also need to specify an AWS Identity and Access Management (IAM) role with the required permissions to perform the replication operation. You can use the S3 console, AWS API, AWS CLI, AWS SDKs, or AWS CloudFormation to enable replication, and you must have S3 Versioning enabled for both the source and destination buckets. Additionally, to replicate objects from S3 Object Lock enabled buckets, your destination bucket must also have S3 Object Lock enabled. For more information, see the documentation on setting up S3 Replication and using S3 Object Lock with S3 Replication.
Do I need additional permissions to replicate objects from buckets with S3 Object Lock enabled?
Yes, to replicate objects from S3 Object Lock enabled buckets you need to grant two new permissions, s3:GetObjectRetention and s3:GetObjectLegalHold, on the source bucket in the IAM role that you use to set up replication. Alternatively, if the IAM role has an s3:Get* permission, it satisfies the requirement. For more information see the documentation on using S3 Object Lock with S3 Replication.
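For illustration, here is the extra policy statement as it might appear in the replication role's policy document; the source bucket ARN is a placeholder:

```python
# Minimal sketch of the extra statement for the replication role's policy,
# covering the two permissions named above.
import json

statement = {
    "Effect": "Allow",
    "Action": [
        "s3:GetObjectRetention",  # read retention settings on source versions
        "s3:GetObjectLegalHold",  # read legal hold status on source versions
    ],
    "Resource": "arn:aws:s3:::amzn-s3-demo-source-bucket/*",  # placeholder
}
print(json.dumps(statement, indent=2))
```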
Are there any limitations for using S3 Replication while replicating from S3 Object Lock buckets?
No, all features of S3 Replication, such as S3 Same-Region Replication (S3 SRR), S3 Cross-Region Replication (S3 CRR), S3 Replication metrics to track progress, S3 Replication Time Control (S3 RTC), and S3 Batch Replication, are supported while replicating from S3 Object Lock buckets.
How can I replicate existing objects from S3 Object Lock enabled buckets?
You can use S3 Batch Replication to replicate existing objects from S3 Object Lock enabled buckets. For more information on replicating existing objects, see the documentation on S3 Batch Replication.
What is the retention status of the replicas of source objects protected with S3 Object Lock?
Storage Classes
What are the Amazon S3 storage classes?
How do I decide which S3 storage class to use?
In deciding which S3 storage class best fits your workload, consider the access patterns and retention time of your data to optimize for the lowest total cost over the lifetime of your data. Many workloads have changing (user-generated content), unpredictable (analytics, data lakes), or unknown (new applications) access patterns, which is why S3 Intelligent-Tiering should be the default storage class to automatically save on storage costs. If you know the access patterns of your data, you can follow this guidance.

The S3 Standard storage class is ideal for frequently accessed data; it is the best choice if you access data more than once a month. S3 Standard-Infrequent Access is ideal for data retained for at least a month and accessed once every month or two.

The Amazon S3 Glacier storage classes are purpose-built for data archiving, providing you with the highest performance, the most retrieval flexibility, and the lowest cost archive storage in the cloud. You can choose from three archive storage classes optimized for different access patterns and storage durations. For archive data that needs immediate access, such as medical images, news media assets, or genomics data, choose the S3 Glacier Instant Retrieval storage class, an archive storage class that delivers the lowest cost storage with milliseconds retrieval. For archive data that does not require immediate access but needs the flexibility to retrieve large sets of data at no cost, such as backup or disaster recovery use cases, choose S3 Glacier Flexible Retrieval, with retrieval in minutes or free bulk retrievals in 5 to 12 hours. To save even more on long-lived archive storage such as compliance archives and digital media preservation, choose S3 Glacier Deep Archive, the lowest cost storage in the cloud with data retrieval within 12 hours. All of these storage classes provide multi-Availability Zone (AZ) resiliency by redundantly storing data on multiple devices across physically separated AWS Availability Zones in an AWS Region.
For data that has a lower resiliency requirement, you can reduce costs by selecting a single-AZ storage class, like S3 One Zone-Infrequent Access. If you have data residency or isolation requirements that can’t be met by an existing AWS Region, you can use S3 storage classes for AWS Dedicated Local Zones or S3 on Outposts racks to store your data in a specific perimeter.
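To make this concrete, here is a minimal boto3 sketch showing how a storage class is selected per object at write time; the bucket and keys are placeholders, and the class choices follow the guidance above:

```python
# Minimal sketch: the storage class is chosen per object at write time.
import boto3

s3 = boto3.client("s3")
bucket = "amzn-s3-demo-bucket"  # placeholder

# Unknown or changing access patterns: let S3 tier automatically.
s3.put_object(Bucket=bucket, Key="datalake/events.parquet", Body=b"...",
              StorageClass="INTELLIGENT_TIERING")

# Accessed about once every month or two, retained at least a month.
s3.put_object(Bucket=bucket, Key="backups/weekly.tar", Body=b"...",
              StorageClass="STANDARD_IA")

# Archive data needing immediate (milliseconds) access.
s3.put_object(Bucket=bucket, Key="archive/scan.tif", Body=b"...",
              StorageClass="GLACIER_IR")

# Long-lived compliance archive, retrieval within hours.
s3.put_object(Bucket=bucket, Key="vault/records.zip", Body=b"...",
              StorageClass="DEEP_ARCHIVE")
```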
S3 Intelligent-Tiering
What is S3 Intelligent-Tiering?
How does S3 Intelligent-Tiering work?
There is no minimum object size for S3 Intelligent-Tiering, but objects smaller than 128 KB are not eligible for auto-tiering. These smaller objects may be stored in S3 Intelligent-Tiering, but will always be charged at the Frequent Access tier rates and do not incur the monitoring and automation charge. If you would like to standardize on S3 Intelligent-Tiering as the default storage class for newly created data, you can modify your applications by specifying INTELLIGENT-TIERING on your S3 PUT API request header. S3 Intelligent-Tiering is designed for 99.9% availability and 99.999999999% durability, and automatically offers the same low latency and high throughput performance as S3 Standard. You can use AWS Cost Explorer to measure the additional savings from the Archive Instant Access tier.
Why would I choose to use S3 Intelligent-Tiering?
What performance does S3 Intelligent-Tiering offer?
What performance do the optional Archive Access and Deep Archive Access tiers provide?
How durable and available is S3 Intelligent-Tiering?
How do I get my data into S3 Intelligent-Tiering?
How am I charged for S3 Intelligent-Tiering?
For a small monitoring and automation fee, S3 Intelligent-Tiering monitors access patterns and automatically moves objects through low latency and high throughput access tiers, as well as two opt-in asynchronous archive access tiers where customers get the lowest storage costs in the cloud for data that can be accessed asynchronously.
There is no minimum billable object size in S3 Intelligent-Tiering, but objects smaller than 128 KB are not eligible for auto-tiering. These small objects will not be monitored and will always be charged at the Frequent Access tier rates, with no monitoring and automation charge. For each object archived to the Archive Access tier or Deep Archive Access tier in S3 Intelligent-Tiering, Amazon S3 uses 8 KB of storage for the name of the object and other metadata (billed at S3 Standard storage rates) and 32 KB of storage for index and related metadata (billed at S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage rates).
Is there a charge to retrieve data from S3 Intelligent-Tiering?
How do I activate S3 Intelligent-Tiering archive access tiers?
How do I access an object from the Archive Access or Deep Archive Access tiers in the S3 Intelligent-Tiering storage class?
How do I know in which S3 Intelligent-Tiering access tier my objects are stored in?
Can I lifecycle objects from S3 Intelligent-Tiering to another storage class?
Is there a minimum duration for S3 Intelligent-Tiering?
Is there a minimum billable object size for S3 Intelligent-Tiering?
S3 Standard
What is S3 Standard?
Why would I choose to use S3 Standard?
S3 Express One Zone
What is the Amazon S3 Express One Zone storage class?
Why would I choose to use the Amazon S3 Express One Zone storage class?
How do I get started with the Amazon S3 Express One Zone storage class?
How can I import data into the Amazon S3 Express One Zone storage class?
You can import data from within the same AWS Region into the S3 Express One Zone storage class via the S3 console by using the Import option after you create a directory bucket. Import simplifies copying data into S3 directory buckets by letting you choose a prefix or bucket to import data from without having to specify all of the objects to copy individually. S3 Batch Operations copies the objects in the selected prefix or general purpose bucket and you can monitor the progress of the import copy job through the S3 Batch Operations job details page.
How many Availability Zones are Amazon S3 Express One Zone objects stored in?
What performance does the Amazon S3 Express One Zone storage class provide?
How does the Amazon S3 Express One Zone storage class achieve high performance?
How many transactions per second (TPS) does an S3 directory bucket support?
What happens to an S3 directory bucket with no request activity for an extended period of time?
S3 directory buckets that have no request activity for a period of at least 3 months will transition to an inactive state. While in an inactive state, a directory bucket is temporarily inaccessible for reads and writes. Inactive buckets retain all storage, object metadata, and bucket metadata. Existing storage charges will apply to inactive buckets. On an access request to an inactive bucket, the bucket will transition to an active state, typically within a few minutes. During this transition period, reads and writes will return a 503 SlowDown error code.
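A small defensive sketch for clients that might touch a directory bucket as it comes out of the inactive state, retrying on the SlowDown error code; the bucket, key, and backoff policy are assumptions for illustration, not AWS guidance:

```python
# Minimal sketch: retry reads while an inactive directory bucket transitions
# back to the active state (503 SlowDown during the transition window).
import time
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def get_with_retry(bucket, key, attempts=10, delay=5.0):
    for attempt in range(attempts):
        try:
            return s3.get_object(Bucket=bucket, Key=key)
        except ClientError as err:
            if err.response["Error"]["Code"] != "SlowDown":
                raise                               # unrelated error: surface it
            time.sleep(delay * (attempt + 1))       # back off while reactivating
    raise TimeoutError("bucket did not become active in time")
```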
How should I plan for my application’s throughput needs with the S3 Express One Zone storage class?
How is request authorization different with Amazon S3 Express One Zone compared to other S3 storage classes?
How reliable is the Amazon S3 Express One Zone storage class?
How is the Amazon S3 Express One Zone storage class designed to provide 99.95% availability?
How am I charged for Amazon S3 Express One Zone?
Example 1:
Assume you store 10 GB of data in S3 Express One Zone for 30 days, making a total of 1,000,000 writes and 9,000,000 reads, accessing with Athena with a request size of 10 KB. Then, you delete 1,000,000 files by the end of 30 days. Assuming your bucket is in the US East (Northern Virginia) Region, the storage and request charges are calculated below:
Storage charges:
Total Byte-Hour usage = 10 GB-Month
Total storage cost = 10 GB-Month x $0.11 = $1.10
Request charges:
1,000,000 PUT requests: 1,000,000 requests x $0.00113/1,000 = $1.13
9,000,000 GET requests: 9,000,000 requests x $0.00003/1,000 = $0.27
1,000,000 DELETE requests: 1,000,000 requests x $0.00 (no charge) = $0
Data upload charge: 10 KB / 1,048,576 x 1,000,000 x $0.0032 = $0.03
Data retrieval charge: 10 KB / 1,048,576 x 9,000,000 x $0.0006 = $0.05
Total charges = $1.10 + $1.13 + $0.27 + $0.03 + $0.05 = $2.58

Example 2:
Assume you store 10 TB of data for machine learning training for an 8-hour workload every day, and then delete it. During the 8-hour workload you make 5,242,880 writes and 10,485,760 reads for a 2 MB request size. Assume you do this for 30 days (a month).
Storage charges:
Total Byte-Hour usage = [10,995,116,277,760 bytes x 30 days x (8 hours/day)] = 2,638,827,906,662,400 Byte-Hours = 3,303.77 GB-Month
Total storage cost = 3,303.77 GB-Month x $0.11 = $363.41
Request charges:
5,242,880 PUT requests/day: 5,242,880 requests x 30 x $0.00113/1,000 = $177.73
10,485,760 GET requests/day: 10,485,760 requests x 30 x $0.00003/1,000 = $9.44
5,242,880 DELETE requests/day: 5,242,880 requests x $0.00 (no charge) = $0
Data upload charge: 2 MB / 1,024 x 5,242,880 x 30 x $0.0032 = $983.04
Data retrieval charge: 2 MB / 1,024 x 10,485,760 x 30 x $0.0006 = $368.64
Total charges = $363.41 + $177.73 + $9.44 + $983.04 + $368.64 = $1,902.26
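For reference, Example 1 above can be reproduced with a few lines of Python, using the rates as quoted for US East (N. Virginia):

```python
# Reproduces Example 1 above.
storage  = 10 * 0.11                             # 10 GB-Month x $0.11
puts     = 1_000_000 * 0.00113 / 1_000           # PUT request charge
gets     = 9_000_000 * 0.00003 / 1_000           # GET request charge
upload   = 10 / 1_048_576 * 1_000_000 * 0.0032   # 10 KB per PUT, converted to GB
retrieve = 10 / 1_048_576 * 9_000_000 * 0.0006   # 10 KB per GET, converted to GB

total = storage + puts + gets + upload + retrieve
print(f"${total:.2f}")  # -> $2.58
```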
Are there any additional Data Transfer charges for using the Amazon S3 Express One Zone storage class within the same Region?
Are there any additional networking charges for using Gateway VPC endpoints with the Amazon S3 Express One Zone storage class?
S3 Standard-Infrequent Access (S3 Standard-IA)
What is S3 Standard-Infrequent Access?
Why would I choose to use S3 Standard-IA?
What performance does S3 Standard-IA offer?
How do I get my data into S3 Standard-IA?
What charges will I incur if I change the storage class of an object from S3 Standard-IA to S3 Standard with a COPY request?
Is there a minimum storage duration charge for S3 Standard-IA?
Is there a minimum object storage charge for S3 Standard-IA?
Can I tier objects from S3 Standard-IA to S3 One Zone-IA or to the S3 Glacier Flexible Retrieval storage class?
S3 One Zone-Infrequent Access (S3 One Zone-IA)
What is the S3 One Zone-IA storage class?
What use cases are best suited for S3 One Zone-IA storage class?
What performance does S3 One Zone-IA storage offer?
How durable is the S3 One Zone-IA storage class?
Is an S3 One Zone-IA “Zone” the same thing as an AWS Availability Zone?
How much disaster recovery protection do I forgo by using S3 One Zone-IA?
Amazon S3 Glacier Instant Retrieval storage class
What is the S3 Glacier Instant Retrieval storage class?
Why would I choose to use S3 Glacier Instant Retrieval?
How available and durable is S3 Glacier Instant Retrieval?
What performance does S3 Glacier Instant Retrieval offer?
How do I get my data into S3 Glacier Instant Retrieval?
Is there a minimum storage duration charge for Amazon S3 Glacier Instant Retrieval?
Is there a minimum object size charge for Amazon S3 Glacier Instant Retrieval?
How am I charged for S3 Glacier Instant Retrieval?
Amazon S3 Glacier Flexible Retrieval storage class
What is the S3 Glacier Flexible Retrieval storage class?
Why would I choose to use S3 Glacier Flexible Retrieval storage class?
How do I get my data into S3 Glacier Flexible Retrieval?
Note: S3 Glacier Flexible Retrieval is also available through the original direct Glacier APIs and through the Amazon S3 Glacier Management Console. For an enhanced experience complete with access to the full S3 feature set including lifecycle management, S3 Replication, S3 Storage Lens, and more, we recommend using S3 APIs and the S3 Management Console to use S3 Glacier features.
How can I retrieve my objects that are archived in S3 Glacier Flexible Retrieval and will I be notified when the object is restored?
How long will it take to restore my objects archived in Amazon S3 Glacier Flexible Retrieval?
With S3 Glacier storage class provisioned capacity units, you can pay a fixed upfront fee for a given month to ensure the availability of retrieval capacity for expedited retrievals from S3 Glacier Flexible Retrieval. You can purchase two provisioned capacity units per month to increase the amount of data you can retrieve. Each unit of capacity ensures that at least three expedited retrievals can be performed every five minutes, and it provides up to 150 MB/s of retrieval throughput. If your workload requires highly reliable and predictable access to a subset of your data in minutes, you should purchase provisioned retrieval capacity. Without provisioned capacity, expedited retrievals might not be accepted during periods of high demand. If you require access to expedited retrievals under any circumstance, we recommend that you purchase provisioned retrieval capacity.
You can purchase provisioned capacity using the Amazon S3 console, the purchase provisioned capacity REST API, the AWS SDKs, or the AWS CLI. A provisioned capacity unit lasts for one month starting at the date and time of purchase, which is the start date. The unit expires on the expiration date, which is exactly one month after the start date to the nearest second. For provisioned capacity pricing information, see Amazon S3 pricing.
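Once capacity is in place, an expedited retrieval is requested per object; a minimal boto3 sketch, where the bucket, key, and 7-day restored-copy duration are placeholders:

```python
# Minimal sketch: request an expedited restore of an archived object.
import boto3

s3 = boto3.client("s3")
s3.restore_object(
    Bucket="amzn-s3-demo-bucket",    # placeholder
    Key="archive/2019/records.zip",  # placeholder
    RestoreRequest={
        "Days": 7,  # keep the restored copy available for 7 days
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)
```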
How is my storage charge calculated for Amazon S3 objects archived to S3 Glacier Flexible Retrieval?
For example, suppose you archive 100,000 objects that are 1 GB each. Your billable storage would be:
1.000032 gigabytes for each object x 100,000 objects = 100,003.2 gigabytes of S3 Glacier Flexible Retrieval storage.
0.000008 gigabytes for each object x 100,000 objects = 0.8 gigabytes of S3 Standard storage.
The fee is calculated based on the current rates for your AWS Region on the Amazon S3 pricing page. For additional Amazon S3 pricing examples, go to the S3 billing FAQs or use the AWS pricing calculator.
Are there minimum storage duration and minimum object storage charges for Amazon S3 Glacier Flexible Retrieval?
S3 Glacier Flexible Retrieval also requires 40 KB of additional metadata for each archived object: 32 KB of metadata, charged at the S3 Glacier Flexible Retrieval rate, required to identify and retrieve your data, and an additional 8 KB of data, charged at the S3 Standard rate, required to maintain the user-defined name and metadata for objects archived to S3 Glacier Flexible Retrieval. This allows you to get a real-time list of all of your S3 objects using the S3 LIST API or the S3 Inventory report. View the Amazon S3 pricing page for information about Amazon S3 Glacier Flexible Retrieval pricing.
How much does it cost to retrieve data from Amazon S3 Glacier Flexible Retrieval?
Does Amazon S3 provide capabilities for archiving objects to lower cost storage classes?
What is the backend infrastructure supporting the S3 Glacier Flexible Retrieval and S3 Glacier Deep Archive storage class?
Amazon S3 Glacier Deep Archive
What is the Amazon S3 Glacier Deep Archive storage class?
What use cases are best suited for the S3 Glacier Deep Archive storage class?
How does the S3 Glacier Deep Archive storage class differ from the S3 Glacier Instant Retrieval, and S3 Glacier Flexible Retrieval storage classes?
How do I get started using S3 Glacier Deep Archive?
How do you recommend migrating data from my existing tape archives to S3 Glacier Deep Archive?
You can also use AWS Snowball to migrate data. Snowball accelerates moving terabytes to petabytes of data into and out of AWS using physical storage devices designed to be secure for transport. Using Snowball helps to eliminate challenges that can be encountered with large-scale data transfers including high network costs, long transfer times, and security concerns. Finally, you can use AWS Direct Connect to establish dedicated network connections from your premises to AWS. In many cases, Direct Connect can reduce your network costs, increase bandwidth throughput, and provide a more consistent network experience than internet-based connections.
How can I retrieve my objects stored in S3 Glacier Deep Archive?
How am I charged for using S3 Glacier Deep Archive?
How will S3 Glacier Deep Archive usage show up on my AWS bill and in the AWS Cost Management tool?
Are there minimum storage duration and minimum object storage charges for S3 Glacier Deep Archive?
How does S3 Glacier Deep Archive integrate with other AWS Services?
S3 on Outposts
What is Amazon S3 on Outposts?
Storage Management
What are S3 object tags?
Learn more by visiting the S3 object tags user guide.
Why should I use object tags?
How can I update the object tags on my objects?
How much do object tags cost?
How do I get started with Storage Class Analysis?
Why should I use Amazon S3 Metadata?
How does S3 Metadata work?
How do I get started with S3 Metadata?
How much does S3 Metadata cost?
Where are my S3 Metadata tables stored?
If the metadata tables are read-only, how do I delete them?
Can I combine S3 Metadata tables with my own metadata?
How is S3 Metadata different from S3 Inventory?
What is S3 Inventory?
The S3 Inventory report provides a scheduled alternative to Amazon S3’s synchronous List API. You can configure S3 Inventory to provide a CSV, ORC, or Parquet file output of your objects and their corresponding metadata on a daily or weekly basis for an S3 bucket or prefix. You can simplify and speed up business workflows and big data jobs with S3 Inventory. You can also use S3 Inventory to verify the encryption and replication status of your objects to meet business, compliance, and regulatory needs. Learn more at the Amazon S3 Inventory user guide.
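A minimal boto3 sketch of configuring a daily Parquet inventory report; the bucket names and report ID are placeholders:

```python
# Minimal sketch: schedule a daily Parquet inventory report for a bucket.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_inventory_configuration(
    Bucket="amzn-s3-demo-bucket",  # placeholder source bucket
    Id="daily-inventory",
    InventoryConfiguration={
        "Id": "daily-inventory",
        "IsEnabled": True,
        "IncludedObjectVersions": "Current",
        "Schedule": {"Frequency": "Daily"},
        "Destination": {
            "S3BucketDestination": {
                "Bucket": "arn:aws:s3:::amzn-s3-demo-reports",  # placeholder
                "Format": "Parquet",
            }
        },
        # Per-object fields to include, e.g. to verify encryption/replication.
        "OptionalFields": ["Size", "EncryptionStatus", "ReplicationStatus"],
    },
)
```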
How do I get started with S3 Inventory?
How am I charged for using S3 Inventory?
What are Amazon S3 Tables?
Why should I use S3 Tables?
How do table buckets work?
How do I get started with S3 Tables?
How do I create and delete tables in my table bucket?
How do I query my tables?
What performance can I expect from S3 Tables?
Can I manually overwrite or delete an object in my table bucket?
How do table bucket permissions work?
Do table buckets support concurrent writes to a single table?
What table and data formats do table buckets support?
What table maintenance operations are offered by table buckets?
Can I track and audit changes made to my tables?
Do table buckets support encryption at rest for my table data?
How much does it cost to use S3 Tables?
How does compaction work for S3 Tables?
How does snapshot management work for S3 Tables?
How does unreferenced file removal work for S3 Tables?
What is S3 Batch Operations?
How do I get started with S3 Batch Operations?
If you are interested in learning more about S3 Batch Operations, watch the tutorial videos and visit the documentation.
What AWS electronic storage services have been assessed based on financial services regulations?
What AWS documentation supports the SEC 17a-4(f)(2)(i) and CFTC 1.31(c) requirement for notifying my regulator?
How do I get started with S3 CloudWatch Metrics?
What alarms can I set on my storage metrics?
How am I charged for using S3 CloudWatch Metrics?
What is S3 Lifecycle management?
How do I set up an S3 Lifecycle management policy?
How can I use Amazon S3 Lifecycle management to help lower my Amazon S3 storage costs?
You can also specify an S3 Lifecycle policy to delete objects after a specific period of time. You can use this policy-driven automation to quickly and easily reduce storage costs as well as save time. In each rule you can specify a prefix, a time period, a transition to S3 Standard-IA, S3 One Zone-IA, S3 Glacier Instant Retrieval, S3 Glacier Flexible Retrieval, S3 Glacier Deep Archive, and/or an expiration. For example, you could create a rule that archives into S3 Glacier Flexible Retrieval all objects with the common prefix “logs/” 30 days from creation and expires these objects after 365 days from creation.
You can also create a separate rule that only expires all objects with the prefix “backups/” 90 days from creation. S3 Lifecycle policies apply to both existing and new S3 objects, helping you optimize storage and maximize cost savings for all current data and any new data placed in S3 without time-consuming manual data review and migration.
Within a lifecycle rule, the prefix field identifies the objects subject to the rule. To apply the rule to an individual object, specify the key name. To apply the rule to a set of objects, specify their common prefix (e.g. “logs/”). You can specify a transition action to have your objects archived and an expiration action to have your objects removed. For time period, provide the creation date (e.g. January 31, 2015) or the number of days from creation date (e.g. 30 days) after which you want your objects to be archived or removed. You may create multiple rules for different prefixes.
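A minimal boto3 sketch of the two rules described above; the bucket name is a placeholder, and GLACIER is the API name for the S3 Glacier Flexible Retrieval storage class:

```python
# Minimal sketch: archive "logs/" after 30 days, expire after 365;
# expire "backups/" after 90 days.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="amzn-s3-demo-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                "Expiration": {"Days": 365},
            },
            {
                "ID": "expire-backups",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Expiration": {"Days": 90},
            },
        ]
    },
)
```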
How much does it cost to use S3 Lifecycle management?
Why would I use an S3 Lifecycle policy to expire incomplete multipart uploads?
Can I set up Amazon S3 Event Notifications to send notifications when S3 Lifecycle transitions or expires objects?
Storage Analytics & Insights
What features are available to analyze my storage usage on Amazon S3?
What is Amazon S3 Storage Lens?
How does S3 Storage Lens work?
What are the key questions that can be answered using S3 Storage Lens metrics?
The S3 Storage Lens dashboard is organized around four main types of questions that can be answered about your storage. With the Summary filter, top-level questions related to overall storage usage and activity trends can be explored. For example, “How rapidly is my overall byte count and request count increasing over time?” With the Cost Optimization filter, you can explore questions related to storage cost reduction, for example, “Is it possible for me to save money by retaining fewer non-current versions?” With the Data Protection and Access Management filters, you can answer questions about securing your data, for example, “Is my storage protected from accidental or intentional deletion?” Finally, with the Performance and Events filters, you can explore ways to improve the performance of workflows. Each of these questions represents a first layer of inquiry that would likely lead to drill-down analysis.
What metrics are available in S3 Storage Lens?
What are my dashboard configuration options?
A default dashboard is automatically configured for your entire account, and you have the option to create additional custom dashboards that can be scoped to your AWS organization, specific AWS Regions, or buckets within an account. You can set up multiple custom dashboards, which can be useful if you require some logical separation in your storage analysis, such as segmenting on buckets to represent various internal teams. By default, your dashboard receives the S3 Storage Lens free metrics, but you have the option to upgrade to receive S3 Storage Lens advanced metrics and recommendations (for an additional cost). S3 Storage Lens advanced metrics have 7 distinct options: Activity metrics, Advanced Cost Optimization metrics, Advanced Data Protection metrics, Detailed Status Code metrics, Prefix aggregation, CloudWatch publishing, and Storage Lens groups aggregation. Additionally, for each dashboard you can enable metrics export, with additional options to specify the destination bucket and encryption type.
How much historical data is available in S3 Storage Lens?
How will I be charged for S3 Storage Lens?
S3 Storage Lens is available in two tiers of metrics. The free metrics are enabled by default and available at no additional charge to all S3 customers. The S3 Storage Lens advanced metrics and recommendations pricing details are available on the S3 pricing page. With S3 Storage Lens free metrics you receive 28 metrics at the bucket level, and can access 14 days of historical data in the dashboard. With S3 Storage Lens advanced metrics and recommendations you receive 35 additional metrics, prefix-level aggregation, CloudWatch metrics support, custom object metadata filtering with S3 Storage Lens groups, and can access 15 months of historical data in the dashboard.
What is the difference between S3 Storage Lens and S3 Inventory?
What is the difference between S3 Storage Lens and S3 Storage Class Analysis (SCA)?
What is Storage Class Analysis?
How often is the Storage Class Analysis updated?
Query in Place
Open allWhat is "Query in Place" functionality?
How do I query my data in Amazon S3?
What is Amazon Athena?
What is Amazon Redshift Spectrum?
Replication
What is Amazon S3 Replication?
What is Amazon S3 Cross-Region Replication (CRR)?
What is Amazon S3 Same-Region Replication (SRR)?
What is Amazon S3 Batch Replication?
How do I enable Amazon S3 Replication (Cross-Region Replication and Same-Region Replication)?
How do I use S3 Batch Replication?
Can I use S3 Replication with S3 Lifecycle rules?
You can find more information about lifecycle configuration and replication in the S3 Replication documentation.
Can I use S3 Replication to replicate to more than one destination bucket?
Yes. S3 Replication allows customers to replicate their data to multiple destination buckets in the same or different AWS Regions. When setting up, you simply specify the new destination bucket in your existing replication configuration or create a new replication configuration with multiple destination buckets. For each new destination you specify, you have the flexibility to choose the storage class of the destination bucket, the encryption type, replication metrics and notifications, Replication Time Control (RTC), and other properties.
Can I use S3 Replication to set up two-way replication between S3 buckets?
Can I use replication across AWS accounts to protect against malicious or accidental deletion?
Will my object tags be replicated if I use Cross-Region Replication?
Can I replicate delete markers from one bucket to another?
Can I replicate data from other AWS Regions to China? Can a customer replicate from one China Region bucket outside of China Regions?
Can I replicate existing objects?
Can I retry replication if objects fail to replicate initially?
What encryption types does S3 Replication support?
What is the pricing for cross account data replication?
Visit the Amazon S3 pricing page for more details on S3 Replication pricing.
What is Amazon S3 Replication Time Control?
How do I enable Amazon S3 Replication Time Control?
Can I use S3 Replication Time Control to replicate data within and between China Regions?
What are Amazon S3 Replication metrics and events?
How do I enable Amazon S3 Replication metrics and events?
Can I use Amazon S3 Replication metrics and events to track S3 Batch Replication?
What is the Amazon S3 Replication Time Control Service Level Agreement (SLA)?
What is the pricing for S3 Replication and S3 Replication Time Control?
What are S3 Multi-Region Access Points?
Why should I use S3 Multi-Region Access Points?
How do S3 Multi-Region Access Points work?
In an active-active configuration, S3 Multi-Region Access Points consider factors like network congestion and the location of the requesting application to dynamically route your requests over the AWS network to the closest copy of your data. S3 Multi-Region Access Points route your requests through the closest AWS location to your client, and then over the global private AWS network to S3. In either configuration, S3 Multi-Region Access Points allow you to take advantage of the global infrastructure of AWS while maintaining a simple application architecture.
What is the difference between S3 Cross-Region Replication (S3 CRR) and S3 Multi-Region Access Points?
S3 CRR and S3 Multi-Region Access Points are complementary features that work together to replicate data across AWS Regions and then to automatically route requests to the replicated copy with the lowest latency. S3 Multi-Region Access Points help you to manage requests across AWS Regions, while CRR allows you to move data across AWS Regions to create isolated replicas. You use S3 Multi-Region Access Points and CRR together to create a replicated multi-Region dataset that is addressable by a single global endpoint.
How much do S3 Multi-Region Access Points cost?
When you use an S3 Multi-Region Access Point to route requests within AWS, you pay a data routing charge for each gigabyte (GB) processed, as well as standard charges for S3 requests, storage, data transfer, and replication. If your application runs outside of AWS and accesses S3 over the internet, S3 Multi-Region Access Points increase performance by automatically routing your requests through an AWS edge location, over the global private AWS network, to the closest copy of your data based on access latency. When you accelerate requests made over the internet, you pay the data routing charge and an internet acceleration charge. S3 Multi-Region Access Points internet acceleration pricing varies based on whether the source client is in the same or in a different location as the destination AWS Region, and is in addition to standard S3 data transfer pricing. To use S3 Multi-Region Access Points failover controls, you are charged only standard S3 API costs to view the current routing control status of each Region and to submit routing control changes for initiating a failover. See the Amazon S3 pricing page and the data transfer tab for more pricing information.
Can I use Requester Pays with S3 Multi-Region Access Points?
Yes, you can configure the underlying buckets of the S3 Multi-Region Access Point to be Requester Pays buckets. With Requester Pays, the requester pays all of the costs associated with the endpoint usage, including the cost for requests and the data transfer cost associated with both the bucket and the Multi-Region Access Point. Typically, you want to configure your buckets as Requester Pays buckets if you wish to share data but not incur charges associated with others accessing the data. In general, bucket owners pay for all Amazon S3 storage associated with their bucket. To learn more, please visit S3 Requester Pays.
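A minimal boto3 sketch of a requester-side read against a Requester Pays bucket; the bucket and key names are placeholders:

```python
# Minimal sketch: the requester acknowledges the charges on the request.
import boto3

s3 = boto3.client("s3")
obj = s3.get_object(
    Bucket="amzn-s3-demo-shared-bucket",  # placeholder Requester Pays bucket
    Key="public-dataset/part-0001.csv",   # placeholder
    RequestPayer="requester",  # requester accepts request/transfer charges
)
print(obj["ContentLength"])
```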
How is S3 Transfer Acceleration different than S3 Multi-Region Access Points?
How do I get started with S3 Multi-Region Access Points and failover controls?
The S3 console provides a simple guided workflow to set up everything you need to run multi-Region storage on S3 in three steps. First, create an Amazon S3 Multi-Region Access Point endpoint and specify the AWS Regions you want to replicate and fail over between. You can add buckets in multiple AWS accounts to a new S3 Multi-Region Access Point by entering the account IDs that own the buckets at the time of creation. Second, for each AWS Region and S3 bucket behind your S3 Multi-Region Access Point endpoint, specify whether their routing status is active or passive, where active AWS Regions accept S3 data request traffic and passive Regions are not routed to until you initiate a failover. Third, configure your S3 Cross-Region Replication rules to synchronize your data in S3 between the Regions and/or accounts. You can then initiate a failover at any time between the AWS Regions within minutes and monitor the shift of your S3 traffic to your new active AWS Region in Amazon CloudWatch. Alternatively, you can use AWS CloudFormation to automate your multi-Region storage configuration. All of the building blocks required to set up multi-Region storage on S3, including S3 Multi-Region Access Points, are supported by CloudFormation, allowing you to automate a repeatable setup process outside of the S3 console.