At the re:Invent 2023 conference, AI was a major focus, but AWS also introduced significant updates across Compute, Containers, Serverless, and Storage, underscoring its continued push at the limits of cloud computing.
Welcome to the second part of our AWS re:Invent 2023 coverage, focusing on the major announcements in Compute, Containers, Serverless, and Storage. If you haven't already, catch up on Part 01, where we explored the groundbreaking developments in Generative AI and Machine Learning.
Amazon Web Services (AWS) has announced a preview of the next generation of Amazon Elastic Compute Cloud (Amazon EC2) instances. Powered by the new Graviton4 processors, the forthcoming R8g instances are designed to deliver better price performance than any existing memory-optimized instances.
The R8g instances are tailor-made for handling your most resource-intensive memory workloads, such as big data analytics, high-performance databases, and in-memory caches.
New Graviton4
More Info: https://press.aboutamazon.com/2023/11/aws-unveils-next-generation-aws-designed-chips
R8g instance sizes
The 8th generation R8g instances will be available in multiple sizes with up to triple the number of vCPUs and triple the amount of memory of the 7th generation (R7g) of memory-optimized, Graviton3-powered instances.
More Info: https://aws.amazon.com/ec2/instance-types/r8g/
The recently launched U7i instances are purpose-built for large in-memory databases such as SAP HANA, Oracle, and SQL Server. Powered by custom fourth-generation Intel Xeon Scalable processors (code-named Sapphire Rapids), these instances are currently in preview in the US West (Oregon), Asia Pacific (Seoul), and Europe (Frankfurt) AWS Regions.
AWS has introduced a new feature, the Amazon Managed Service for Prometheus collector, designed to automatically discover and collect Prometheus metrics from Amazon Elastic Kubernetes Service (Amazon EKS) without the use of agents. The collector consists of a scraper that discovers and retrieves metrics from Amazon EKS applications and infrastructure, removing the need to run a collector inside the cluster.
When creating a new EKS cluster in the Amazon EKS console, there's an option to activate the AWS managed collector by selecting "Send Prometheus metrics to Amazon Managed Service for Prometheus." In the Destination section, choose an existing workspace or create a new one in Amazon Managed Service for Prometheus.
More Info: https://docs.aws.amazon.com/prometheus/latest/userguide/AMP-getting-started.html
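The same setup can also be driven through the CreateScraper API. A minimal sketch of the request parameters follows, assuming a boto3 `amp` client; the ARNs, subnet IDs, and scrape configuration below are placeholders, not values from the announcement.

```python
# Sketch: request parameters for the Amazon Managed Service for Prometheus
# CreateScraper API (boto3 "amp" client). ARNs and IDs are placeholders.

scrape_config_yaml = b"""
global:
  scrape_interval: 30s
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
"""

def build_create_scraper_params(cluster_arn, workspace_arn, subnet_ids):
    """Assemble a CreateScraper request for an agentless EKS collector."""
    return {
        "alias": "eks-managed-collector",
        "scrapeConfiguration": {"configurationBlob": scrape_config_yaml},
        "source": {"eksConfiguration": {
            "clusterArn": cluster_arn,
            "subnetIds": subnet_ids,
        }},
        "destination": {"ampConfiguration": {"workspaceArn": workspace_arn}},
    }

params = build_create_scraper_params(
    "arn:aws:eks:us-west-2:111122223333:cluster/demo",
    "arn:aws:aps:us-west-2:111122223333:workspace/ws-EXAMPLE",
    ["subnet-0abc", "subnet-0def"],
)
# With credentials configured, a call would look like:
# boto3.client("amp").create_scraper(**params)
```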
Amazon EKS Pod Identity streamlines application access to AWS services, offering a straightforward and easily configurable experience. This improvement allows for defining necessary IAM permissions for applications in Amazon Elastic Kubernetes Service (Amazon EKS) clusters, enabling connectivity with AWS services beyond the cluster boundaries.
As the cluster administrator, you no longer need to switch between the Amazon EKS and IAM services to authorize applications to access AWS resources.
More Info: https://docs.aws.amazon.com/eks/latest/userguide/pod-identities.html
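A Pod Identity association maps a Kubernetes service account to an IAM role. Here is a minimal sketch of the request parameters for the CreatePodIdentityAssociation API; the cluster name, namespace, service account, and role ARN are illustrative, not from the announcement.

```python
# Sketch: an EKS Pod Identity association, so pods running under a given
# Kubernetes service account can assume an IAM role without OIDC setup.
# All names and ARNs below are illustrative placeholders.

def build_pod_identity_association(cluster, namespace, service_account, role_arn):
    """Parameters for the EKS CreatePodIdentityAssociation API."""
    return {
        "clusterName": cluster,
        "namespace": namespace,
        "serviceAccount": service_account,
        "roleArn": role_arn,
    }

params = build_pod_identity_association(
    "demo-cluster", "default", "my-app-sa",
    "arn:aws:iam::111122223333:role/my-app-s3-reader",
)
# With credentials configured:
# boto3.client("eks").create_pod_identity_association(**params)
```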
Amazon OpenSearch Serverless introduces a straightforward, scalable, and high-performance similarity search feature. The vector engine simplifies the process of creating contemporary machine learning (ML)-enhanced search experiences and generative artificial intelligence (generative AI) applications, eliminating the need to handle the underlying vector database infrastructure.
This GA release also brings a number of new and improved features over the preview.
More Info: https://aws.amazon.com/opensearch-service/serverless-vector-engine/
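Under the hood, the vector engine stores embeddings in k-NN indexes. As a rough sketch, an index body for a vector collection might look like the following; the field names, dimension, and method settings are assumptions for illustration, not values from the announcement.

```python
# Sketch: a minimal k-NN index mapping of the kind used with the OpenSearch
# Serverless vector engine. Field names, dimension, and method parameters
# are illustrative.

def build_vector_index_body(dimension):
    """Index body with one knn_vector field plus a text field for metadata."""
    return {
        "settings": {"index": {"knn": True}},
        "mappings": {"properties": {
            "embedding": {
                "type": "knn_vector",
                "dimension": dimension,
                "method": {"name": "hnsw", "engine": "faiss",
                           "space_type": "l2"},
            },
            "text": {"type": "text"},
        }},
    }

body = build_vector_index_body(1536)  # match your embedding model's dimension
# An opensearch-py client would create it with:
# client.indices.create("docs", body=body)
```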
AWS Lambda now scales functions up to 12 times faster. Synchronously invoked functions scale by 1,000 concurrent executions every 10 seconds until the cumulative concurrency across all functions reaches the account's limit. Importantly, this scaling occurs independently for each function in the account, regardless of the invocation method.
These advancements are seamlessly integrated at no extra cost, requiring no additional configuration for existing functions. The traditional challenges of building scalable and high-performing applications, such as over-provisioning compute resources or implementing complex caching solutions for peak demands, are effectively addressed by Lambda's dynamic scaling capabilities. Developers often opt for Lambda due to its ability to scale on-demand, particularly when faced with unpredictable traffic patterns.
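To make the scaling rate concrete, here is a back-of-the-envelope calculation (plain Python, not an AWS API) of how long a single function would take to reach a target concurrency at 1,000 concurrent executions every 10 seconds.

```python
import math

# Back-of-the-envelope: time for one Lambda function to reach a target
# concurrency, given the scaling rate of 1,000 concurrent executions
# every 10 seconds described above.

RATE_PER_STEP = 1_000   # additional concurrent executions per step
STEP_SECONDS = 10

def seconds_to_reach(target_concurrency, account_limit):
    """Rough time for one function to scale to the target, capped by the
    account-level concurrency limit."""
    effective = min(target_concurrency, account_limit)
    steps = math.ceil(effective / RATE_PER_STEP)
    return steps * STEP_SECONDS

print(seconds_to_reach(10_000, 10_000))  # 100 (seconds)
print(seconds_to_reach(5_000, 3_000))    # 30 (capped by the account limit)
```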
The recently introduced Amazon S3 Express One Zone storage class is crafted to provide a performance boost of up to 10 times compared to the S3 Standard storage class. It excels in managing a substantial volume of requests, reaching several hundred thousand per second, all the while maintaining consistent single-digit millisecond latency. This storage class is an optimal choice for your frequently accessed data and the most demanding applications.
In this configuration, objects are stored and duplicated on specialized hardware within a specific AWS Availability Zone. This setup enables you to conveniently position storage alongside compute resources, such as Amazon EC2, Amazon ECS, and Amazon EKS, within the same zone. This co-location strategy not only enhances efficiency but also contributes to a reduction in latency for your storage and computing needs.
More Info: https://aws.amazon.com/s3/storage-classes/express-one-zone/
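S3 Express One Zone data lives in a new bucket type, the directory bucket, which is pinned to a single Availability Zone. A sketch of the CreateBucket request parameters follows; the bucket name and Availability Zone ID are placeholders, and the `--<az-id>--x-s3` naming suffix shown is part of the directory-bucket naming convention.

```python
# Sketch: request parameters for creating an S3 directory bucket in the
# Express One Zone storage class. The base name and AZ ID are placeholders;
# directory bucket names end in --<az-id>--x-s3.

def build_directory_bucket_params(base_name, az_id):
    """Parameters for the S3 CreateBucket API in directory-bucket form."""
    return {
        "Bucket": f"{base_name}--{az_id}--x-s3",
        "CreateBucketConfiguration": {
            "Location": {"Type": "AvailabilityZone", "Name": az_id},
            "Bucket": {"Type": "Directory",
                       "DataRedundancy": "SingleAvailabilityZone"},
        },
    }

params = build_directory_bucket_params("my-express-data", "usw2-az1")
# With credentials configured: boto3.client("s3").create_bucket(**params)
```

Pinning the bucket to a named AZ is what lets you co-locate storage with your EC2, ECS, or EKS compute in that same zone.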
Amazon Elastic Block Store (Amazon EBS) Snapshots Archive, now integrated with AWS Backup, is no longer accessible only through the Amazon EC2 console or Amazon Data Lifecycle Manager. This new functionality lets users move infrequently accessed Amazon EBS snapshots to a cost-effective archive tier, suited to long-term storage of snapshots that are rarely accessed and don't require frequent or rapid retrieval.
You can now manage Amazon EBS Snapshots Archive through the AWS Backup console. Note that this feature applies only to snapshots with a backup frequency of one month or longer (using a 28-day cron expression) and a retention period exceeding 90 days.
This precautionary measure ensures that only snapshots exhibiting characteristics conducive to the advantages of transitioning to the cold storage tier are eligible for archiving. Consequently, this excludes snapshots with higher frequencies, such as hourly snapshots, which would not derive significant benefits from the archival transition.
More Info: https://docs.aws.amazon.com/aws-backup/latest/devguide/whatisbackup.html
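The eligibility constraints above can be mirrored in a small local check (plain Python, not an AWS API): a rule qualifies when its frequency is at least 28 days and its retention exceeds 90 days.

```python
# Local eligibility check mirroring the archive-tier constraints above:
# backup frequency of at least 28 days and retention over 90 days.
# This is illustrative logic, not an AWS Backup API call.

def archive_eligible(frequency_days, retention_days):
    """True when a backup rule meets the EBS Snapshots Archive constraints."""
    return frequency_days >= 28 and retention_days > 90

print(archive_eligible(28, 120))   # monthly backups kept 120 days -> True
print(archive_eligible(1, 365))    # daily backups are excluded -> False
```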
Amazon Elastic File System (Amazon EFS) introduces two enhanced capabilities.
Utilizing Amazon EFS replication empowers users to create a duplicate of their file system within the same AWS Region or another. Once replication is activated, Amazon EFS automatically maintains synchronization between the primary (source) and secondary (destination) file systems. EFS replication is meticulously designed to align with compliance and business continuity objectives, offering a recovery point objective (RPO) and a recovery time objective (RTO) measured in minutes.
The newly introduced failback support further accelerates responses to disaster recovery events, facilitates planned business continuity tests, and efficiently manages other DR-related activities. Failback support allows users to reverse the direction of replication between primary and secondary file systems. By copying only incremental changes, EFS replication ensures synchronization without the need for full data copies or reliance on self-managed, custom solutions in completing recovery workflows.
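Replication is configured on the source file system with a list of destinations. A minimal sketch of the CreateReplicationConfiguration request follows; the file system ID and destination Region are placeholders.

```python
# Sketch: parameters for the EFS CreateReplicationConfiguration API, which
# starts replicating a source file system into another Region. The ID and
# Region below are placeholders.

def build_replication_params(source_fs_id, destination_region):
    """Replicate an EFS file system cross-Region; EFS keeps it in sync."""
    return {
        "SourceFileSystemId": source_fs_id,
        "Destinations": [{"Region": destination_region}],
    }

params = build_replication_params("fs-0123456789abcdef0", "eu-west-1")
# With credentials configured:
# boto3.client("efs").create_replication_configuration(**params)
# Failback reverses the roles, replicating from the former destination back
# to the original source and copying only incremental changes.
```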
More Info: https://aws.amazon.com/blogs/aws/replication-failback-and-increased-iops-are-new-for-amazon-efs/
Performing automated game day testing for critical resources is a fundamental step in assessing preparedness for potential threats such as ransomware or data loss incidents. This proactive approach allows for the identification of issues and the implementation of corrective actions based on test results. Monitoring outcomes, such as success or failure during these tests, helps organizations gauge whether restoration times align with recovery time objective (RTO) goals, contributing to the development of more robust recovery strategies.
The introduction of restore testing as a capability in AWS Backup provides users with the ability to seamlessly perform restore testing across storage, compute, and databases for AWS resources. This feature automates the entire restore testing process, ensuring organizations can successfully recover from data loss events, including ransomware attacks. Additionally, users have the option to leverage restore job results to demonstrate compliance with both organizational and regulatory data governance requirements.
The restore testing functionality in AWS Backup supports resources with recovery points created by AWS Backup.
The supported services include Amazon Elastic Block Store (Amazon EBS), Amazon Elastic Compute Cloud (Amazon EC2), Amazon Aurora, Amazon Relational Database Service (Amazon RDS), Amazon Elastic File System (Amazon EFS), Amazon Simple Storage Service (Amazon S3), Amazon DynamoDB, Amazon FSx, Amazon DocumentDB, and Amazon Neptune. Initiating restore testing is simple and can be done through the AWS Backup console, AWS CLI, or AWS SDK, offering flexibility for users to integrate restore testing into their existing workflows and enhance the overall resilience of their AWS environment.
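As a rough sketch, a restore testing plan for the CreateRestoreTestingPlan API might look like the following; the plan name, cron schedule, and selection settings are illustrative assumptions, not values from the announcement.

```python
# Sketch: a restore testing plan payload for the AWS Backup
# CreateRestoreTestingPlan API. Plan name, schedule, and vault selection
# are illustrative placeholders.

def build_restore_testing_plan(name, schedule_cron):
    """A plan that periodically restore-tests recent recovery points."""
    return {"RestoreTestingPlan": {
        "RestoreTestingPlanName": name,
        "ScheduleExpression": schedule_cron,
        "RecoveryPointSelection": {
            "Algorithm": "LATEST_WITHIN_WINDOW",
            "IncludeVaults": ["*"],           # consider all backup vaults
            "RecoveryPointTypes": ["SNAPSHOT"],
            "SelectionWindowDays": 7,
        },
    }}

plan = build_restore_testing_plan("weekly-gameday", "cron(0 2 ? * SUN *)")
# With credentials configured:
# boto3.client("backup").create_restore_testing_plan(**plan)
```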
Introducing EFS Archive, a new Amazon EFS storage class designed for long-lived, rarely accessed data. With EFS Archive, there are now three regional storage classes: EFS Standard, EFS Infrequent Access (IA), and EFS Archive.
All classes provide high throughput, hundreds of thousands of IOPS, and are built for eleven nines of durability. EFS lifecycle management automatically migrates files across classes based on access patterns. This simplifies storage, allowing a single shared file system for diverse data types.
EFS Archive is ideal for rarely accessed data, facilitating cost-effective storage within the same shared file system. This streamlined approach supports collaboration on large datasets, making it easy to set up and scale analytics workloads. Optimize costs for workloads with mixed active and inactive data, including user shares, ML training datasets, SaaS applications, and data retained for compliance.
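The lifecycle management mentioned above is expressed as a set of transition policies on the file system. A minimal sketch of the PutLifecycleConfiguration request follows; the file system ID and the chosen thresholds are illustrative assumptions.

```python
# Sketch: an EFS lifecycle configuration that tiers files across the three
# regional storage classes based on last access. The file system ID and the
# thresholds below are placeholders.

def build_lifecycle_params(file_system_id):
    """Move files to IA after 30 days idle and to Archive after 90 days."""
    return {
        "FileSystemId": file_system_id,
        "LifecyclePolicies": [
            {"TransitionToIA": "AFTER_30_DAYS"},
            {"TransitionToArchive": "AFTER_90_DAYS"},
            # Move a file back to Standard the first time it is accessed:
            {"TransitionToPrimaryStorageClass": "AFTER_1_ACCESS"},
        ],
    }

params = build_lifecycle_params("fs-0123456789abcdef0")
# With credentials configured:
# boto3.client("efs").put_lifecycle_configuration(**params)
```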
More Info: https://aws.amazon.com/efs/storage-classes/archive/
More Info: https://aws.amazon.com/blogs/aws/on-demand-data-replication-for-amazon-fsx-for-openzfs/
More Info: https://aws.amazon.com/blogs/aws/introducing-shared-vpc-support-for-amazon-fsx-for-netapp-ontap/
More Info: https://aws.amazon.com/blogs/aws/new-scale-out-file-systems-for-amazon-fsx-for-netapp-ontap/