The resources given are a basic index.html page and an error.html page.
When creating a new bucket via the console, you specify that you want to host a static website and you set the object entry point and error page.
You could make the files individually public, but that is time-consuming. Instead, edit the bucket policy to allow public read access.
Bucket policies: make entire buckets public using bucket policies.
Static content: use S3 to host static content only (not dynamic).
Automatic scaling: S3 scales automatically with demand.
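The "edit the bucket policy" step above can be sketched as a policy document. This is a minimal public-read example; the bucket name is a placeholder:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}
```

The `/*` in the Resource ARN matters: it grants `s3:GetObject` on the objects in the bucket, not on the bucket itself.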
Versioning Objects in S3
You can enable versioning in S3 to allow multiple versions of objects within a bucket.
All versions: all versions stored.
Can be a great backup tool.
Cannot be disabled (only suspended).
Inside of a bucket, you go to Properties and you can edit to enable bucket versioning.
Once on, there is a List versions toggle that shows which versions of an object exist. The demo itself uploads a new version of the file.
Previous versions are not publicly accessible (even if the bucket policy allows public access). You can make the ACL for that object version public.
For deleted objects, the versions are still there (hidden behind a delete marker). Once you delete the delete marker, the object reverts to the previous version.
S3 Storage Classes
S3 Standard: high availability and durability. 4 9's availability, 11 9's durability.
Designed for Frequent Access.
Suitable for Most Workloads.
For Standard-IA (infrequent access):
Low per-GB storage price, but you pay a per-GB retrieval fee each time you access data.
Use cases: great for long-term storage, backups and as a data store for disaster recovery.
3 9's availability, 11 9's durability.
For S3 One Zone Infrequent Access (1ZIA):
Like Standard-IA, but data is stored redundantly within a single AZ.
Costs 20% less than regular S3 Standard-IA.
99.5% availability.
Glacier (cold storage):
Optimized for data that is very infrequently accessed.
You pay each time you access your data.
Use only for archiving data.
It comes in two options:
Option 1: S3 Glacier. Provides long-term data archiving with retrieval times that range from 1 minute to 12 hours (e.g., historical data accessed only a few times per year).
Option 2: Glacier Deep Archive. For archiving data that is rarely accessed; retrieval time is 12 hours. Only use if data is accessed once or twice a year (think financial records).
Glacier has 4 9's availability and 11 9's durability.
S3 Intelligent-Tiering: for data with frequent + infrequent access patterns. It automatically moves your data to the most cost-effective tier based on how frequently you access each object.
Monitoring and automation fee of $0.0025 per 1,000 objects per month.
3 9's availability, 11 9's durability.
Be sure to know the different use-cases in S3 storage tiers.
Lifecycle Management with S3
Automates the moving of objects between different storage tiers.
E.g., you may transition S3 Standard objects that are unused for 30 days to Standard-IA, and after 90 days to Glacier.
You can integrate this with versioning too!
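The 30-day/90-day example above can also be expressed as a lifecycle configuration document. A sketch (the rule ID is arbitrary, and the empty filter applies the rule to all objects in the bucket):

```json
{
  "Rules": [
    {
      "ID": "ArchiveOldObjects",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

With versioning on, a similar rule can use noncurrent-version transitions to move previous versions independently of the current one.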
In the bucket under Management, you can create your Lifecycle rule.
You can add to all objects.
You can add to individual objects.
With versioning turned on, you can also choose rule actions such as transitioning current and previous versions between storage classes.
For each action, you specify which objects transition to which storage class after a given number of days.
Once created, you can also see a Timeline summary.
Automates moving of objects between different storage tiers.
Can be used in conjunction with versioning.
Can be applied to current versions and previous versions.
S3 Object Lock And Glacier Vault Lock
You can use S3 Object Lock to store objects using a write once, read many (WORM) model. It can help prevent objects from being deleted or modified for a fixed amount of time or indefinitely.
You can use it to meet regulatory requirements that require WORM storage, or to add an extra layer of protection against object changes and deletion.
Governance mode: users can't overwrite or delete an object version or alter its lock settings unless they have special permissions.
Compliance mode: no one (including the root user) can overwrite or delete an object version or alter its lock settings.
A retention period protects an object version for a fixed amount of time.
After the period expires, the object version can be overwritten or deleted unless you also placed a legal hold on the object version.
S3 Object Lock also enables you to place a legal hold on an object version. Like a retention period, it prevents overwrite/deletion; however, it has no associated retention period and remains in effect until removed.
Those with the s3:PutObjectLegalHold permission can add/remove.
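The retention settings above boil down to a mode plus a retain-until date. A sketch of the retention body as AWS SDKs shape it (the mode and date here are placeholder values):

```json
{
  "Mode": "GOVERNANCE",
  "RetainUntilDate": "2030-01-01T00:00:00Z"
}
```

A legal hold, by contrast, is just a status toggle (`ON`/`OFF`) with no date, which is why it stays in effect until someone with `s3:PutObjectLegalHold` removes it.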
Glacier Vault Lock
Allows you to easily deploy and enforce compliance controls for individual Glacier Vaults with a Vault lock policy.
You can specify controls (such as WORM) in a Vault Lock policy and lock the policy from future edits. Once locked, the policy can no longer be changed.
S3 Object Lock Exam Tips
Use S3 Object Lock to store objects using a write once, read many (WORM) model.
Object locks can be applied to individual objects or applied across the bucket.
Object locks come in two modes: governance and compliance.
Encrypting S3 Objects
The types of encryption:
Encryption in transit: SSL/TLS, HTTPS.
Encryption at rest (server): SSE-C (Customer provided keys), SSE-KMS, SSE-S3 (S3-managed keys, using AES 256-bit encryption).
Encryption at rest (client): encrypting files before uploading them to S3.
You can enforce server-side encryption:
Through the console. This is the easiest way.
Through a bucket policy.
If the file is to be encrypted at upload time, the x-amz-server-side-encryption parameter will need to be included in the request header.
There are two options: AES256 or aws:kms.
It is put into the header to tell S3 to encrypt objects during the upload.
To enforce this, you can set a bucket policy to deny any object PUT request that does not include the encryption header.
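A sketch of such a deny policy, assuming you want to allow both header values (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedUploads",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-server-side-encryption": ["AES256", "aws:kms"]
        }
      }
    }
  ]
}
```

Any PUT whose `x-amz-server-side-encryption` header is missing or set to another value is denied; everything else is left to the bucket's other policy statements.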
In the console, there is a Default Encryption option when creating a bucket.
Optimizing S3 Performance
The S3 prefix for mybucketname/folder1/subfolder1/myfile.jpg is folder1/subfolder1.
S3 has low latency (100-200 ms to first byte).
You can achieve a high number of requests: 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second, per prefix.
You can get better performance by spreading reads across different prefixes.
If we used 4 prefixes, for example, we could achieve 22,000 GET requests per second.
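The prefix rule and the throughput math above can be sketched in a few lines. The per-prefix limit comes from the notes; the function names are illustrative:

```python
# Sketch: S3 prefixes and per-prefix request limits.
# Limits (3,500 writes / 5,500 reads per second per prefix) are from the
# section above; function names here are illustrative, not an AWS API.

def s3_prefix(key):
    """Return the prefix (the 'folders' part) of an object key."""
    return key.rsplit("/", 1)[0] if "/" in key else ""

def max_get_rps(num_prefixes, per_prefix_limit=5500):
    """Aggregate GET/HEAD throughput when reads are spread over prefixes."""
    return num_prefixes * per_prefix_limit

print(s3_prefix("folder1/subfolder1/myfile.jpg"))  # folder1/subfolder1
print(max_get_rps(4))                              # 22000
```

Spreading objects over more prefixes raises the aggregate ceiling linearly, which is why the 4-prefix example reaches 22,000 GET requests per second.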
KMS limitations:
If you use SSE-KMS to encrypt your objects in S3, you must be mindful of the KMS request limits.
When you upload a file, you will call GenerateDataKey in the KMS API.
When you download a file, you will call Decrypt.
The KMS request rate limit varies per region; it is either 5,500, 10,000, or 30,000 requests per second.
Currently you cannot request a quota increase for KMS.
Multipart uploads:
Recommended for files over 100 MB.
Required for files over 5 GB.
Parallelizes uploads (increases efficiency).
S3 Byte-Range Fetches
Parallelize downloads by specifying byte range.
If there's a failure in the download, it affects only a specific byte range, which can be retried on its own.
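Splitting an object into ranges for parallel download can be sketched as below. The object and chunk sizes are illustrative values, not S3 limits:

```python
# Sketch of S3 Byte-Range Fetches: split an object of known size into
# inclusive byte ranges that can be fetched in parallel via the HTTP
# Range request header.

def byte_ranges(total_size, chunk_size):
    """Return inclusive (start, end) byte ranges covering the object."""
    ranges = []
    start = 0
    while start < total_size:
        end = min(start + chunk_size, total_size) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

def range_header(r):
    """Format one range for the HTTP 'Range' request header."""
    return f"bytes={r[0]}-{r[1]}"

# A 1 GiB object split into 128 MiB chunks -> 8 parallel GETs.
chunks = byte_ranges(1024**3, 128 * 1024**2)
print(len(chunks))              # 8
print(range_header(chunks[0]))  # bytes=0-134217727
```

Each GET carries one such `Range` header; a failed chunk is simply re-requested without restarting the whole download.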
Back up Data With S3 Replication
Used to be called cross-region replication, but replication can now happen within the same region too.
A way of replicating objects from one bucket to another. Versioning must be enabled on both the source and destination buckets.
Objects in an existing bucket are not replicated automatically. Once replication is turned on, all subsequently uploaded or updated objects are replicated automatically.
Delete markers are not replicated by default. You can turn this on.
To demo: you can create replication rules under Management. You will need versioning on first. You can select the destination bucket from there.
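The rule created in the console corresponds to a replication configuration document. A sketch (the account ID, role name, and bucket names are placeholders; the role must allow S3 to read the source and write the destination):

```json
{
  "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
  "Rules": [
    {
      "ID": "ReplicateAll",
      "Status": "Enabled",
      "Priority": 1,
      "Filter": {},
      "DeleteMarkerReplication": { "Status": "Enabled" },
      "Destination": { "Bucket": "arn:aws:s3:::my-destination-bucket" }
    }
  ]
}
```

Note the explicit `DeleteMarkerReplication` block: this is the opt-in for replicating delete markers mentioned above.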
You can replicate objects from one bucket to another.
Existing objects are not replicated automatically.
Delete markers are not replicated by default. You can turn this on.