Sponsored Feature Now that cloud computing has minimized capital expenditure for users, many of them face a new challenge: making operating costs more predictable. This month, AWS tackled the last frontier in price predictability for Amazon Aurora, its fully managed relational database: input/output (I/O) costs.
Aurora offers pay-per-use pricing for compute, storage, and I/O operations. It is designed to scale automatically to match the storage and I/O needs of the most demanding applications, without requiring customers to provision storage or I/O in advance. Although the vast majority of customers benefit from the cost-effectiveness of this pay-per-request I/O pricing under the Aurora Standard configuration, says AWS, the needs of individual businesses can vary widely as queries change and spikes in customer demand drive up I/O consumption, leading to variable bills. Now, with the new Amazon Aurora I/O-Optimized configuration, customers can fold I/O into their existing storage and compute payments as a consistent cost.
Fluctuating operational cost remains a challenge for businesses that cannot predict their workloads. They need predictable pricing to plan their costs and revenues more accurately, improving business resilience and allowing for growth. Cloud service providers have offered options such as reserved instances and serverless computing to help customers better plan their computing costs in the cloud. Aurora I/O-Optimized now creates more certainty for customers by eliminating I/O costs as a variable.
The new configuration fulfils two value propositions. First, it makes database pricing for AWS customers more predictable in the cloud, which enables them to plan their own costing, and therefore their pricing, more easily and accurately. Second, it improves price-performance for I/O-intensive applications. Instead of paying elevated costs during periods of high database throughput, customers pay a flat fee that lowers their overall cost. The savings can be considerable, AWS says: if I/O spend exceeds 25 percent of your database expenditure, your bill could be up to 40 percent lower.
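As a rough illustration of that rule of thumb, the short Python sketch below checks whether I/O charges make up more than a quarter of a monthly Aurora bill. The figures are entirely hypothetical placeholders; real numbers would come from your own billing data.

```python
# Hypothetical monthly Aurora Standard bill, in dollars; real figures come
# from your own AWS billing or Cost Explorer data.
compute_cost = 1_200.00   # instance hours
storage_cost = 300.00     # storage and backups
io_cost = 600.00          # pay-per-request I/O charges

total = compute_cost + storage_cost + io_cost
io_share = io_cost / total

# AWS's stated rule of thumb: if I/O exceeds 25 percent of total spend,
# the I/O-Optimized configuration can cut the bill by up to 40 percent.
if io_share > 0.25:
    print(f"I/O is {io_share:.0%} of spend: worth evaluating I/O-Optimized")
else:
    print(f"I/O is {io_share:.0%} of spend: Aurora Standard is likely cheaper")
```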
Continuing a pattern of cost savings
Driving down costs without sacrificing performance has been a priority since AWS released its managed relational database in 2014, explains Manbeen Kohli, senior manager of product management for Aurora. “We launched Aurora to give our customers the performance and availability of a commercial database at one-tenth of the cost,” she says.
The service is designed to deliver a range of benefits while keeping security top of mind, most recently with the launch of threat detection for Aurora databases through Amazon GuardDuty RDS Protection. In its Business Value of Amazon Aurora report, IDC categorized the benefits into four main groups: direct staff benefits, business enablement, IT cost savings, and business benefits. The database removed many mundane tasks from existing staff, reducing costs by an average of $700,000 (around 32 percent) per company. Better database performance and scalability boosted each customer’s net revenues by $1m per year on average, reckoned IDC, while saving around $200,000 in downtime-related revenue losses. Overall, the analyst firm estimated total discounted three-year benefits of $20.9m per customer.
AWS took several design decisions to save Aurora customers money and improve performance. For example, it decoupled storage and compute resources, enabling customers to pay only for the resources they need in one area without spending unnecessarily on the other. Aurora customers also enjoy capabilities common to many AWS services, including reserved instances that offer up to a 65 percent reduction over on-demand pricing. And the option of headless clusters in Amazon Aurora Global Database lets customers store their data in multiple regions for disaster recovery purposes without having to spin up database instances in the secondary region.
In 2018, AWS also launched a serverless option for Aurora, which allows customers to run databases without provisioning instances while quickly scaling their applications to peak usage. The latest version of this feature, Amazon Aurora Serverless v2, now offers cost savings of up to 90 percent compared to provisioning capacity for peak load, Kohli says. The company has kept flexibility in mind when designing these options. For example, customers can mix serverless and provisioned instances in the same cluster, serving different computing needs while minimizing cost.
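A minimal sketch of that mixed-cluster flexibility, using the boto3 SDK with hypothetical identifiers, credentials, and capacity settings (a real deployment would need networking, engine version, and security settings as well), might look like this:

```python
import boto3

rds = boto3.client("rds")

# Hypothetical cluster that defines Serverless v2 capacity bounds (in ACUs).
rds.create_db_cluster(
    DBClusterIdentifier="demo-aurora-cluster",
    Engine="aurora-mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16},
)

# A Serverless v2 instance that scales within the bounds above...
rds.create_db_instance(
    DBInstanceIdentifier="demo-serverless-writer",
    DBClusterIdentifier="demo-aurora-cluster",
    Engine="aurora-mysql",
    DBInstanceClass="db.serverless",
)

# ...alongside a provisioned Graviton2 reader in the same cluster.
rds.create_db_instance(
    DBInstanceIdentifier="demo-provisioned-reader",
    DBClusterIdentifier="demo-aurora-cluster",
    Engine="aurora-mysql",
    DBInstanceClass="db.r6g.large",
)
```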
Tackling the predictability problem
All of this helped customers cope with storage and compute costs, but one challenge remained: I/O. AWS has traditionally charged customers for sending data to an Aurora database and retrieving data from it. Until now, customers paid for this I/O in increments of a million requests.
Aurora I/O-Optimized moves away from incremental I/O pricing by allowing companies to pay only for their database instances and storage consumption, with no separate charges for read and write I/O operations.
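To make the difference between the two billing models concrete, the sketch below compares a hypothetical monthly bill under each configuration. Every unit price and request volume here is a placeholder, not an AWS list price; I/O-Optimized is represented simply as a higher instance and storage charge with no separate I/O line item.

```python
# Hypothetical comparison of the two billing models; all figures below are
# placeholders, not AWS list prices.
io_requests = 8_000_000_000          # monthly read/write I/O requests
price_per_million_io = 0.20          # Standard: I/O charged per million requests
standard_instance_storage = 1_500.00 # Standard: instances plus storage
iopt_instance_storage = 1_950.00     # I/O-Optimized: higher instance/storage rates

standard_bill = standard_instance_storage + (io_requests / 1_000_000) * price_per_million_io
iopt_bill = iopt_instance_storage    # no separate charge for read/write I/O

print(f"Aurora Standard:      ${standard_bill:,.2f}")
print(f"Aurora I/O-Optimized: ${iopt_bill:,.2f}")
```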
For some, though, the problem hasn’t been tracking their I/O expenditure so much as predicting it.
“Although Aurora Standard’s pay-per-request I/O pricing is cost-effective for the vast majority of applications, there are some customers who would like to more easily predict their database costs up front,” says Kohli. “Aurora I/O-Optimized makes database costs more predictable, regardless of the evolving data access patterns of database applications.”
Some customers run I/O-intensive applications in which reads and writes spike or are unevenly distributed. SaaS applications are one example: some SaaS operators with larger enterprise customers run a single database cluster per customer.
“That means their own customer workloads can vary,” Kohli explains. The variation doesn’t just stem from the volume of work, but from the type of work the customer is doing.
“A customer might run analytical queries one day that consume more I/O, while another customer might not do that,” she says. “Therefore, having a pricing structure that eliminates this variability enables SaaS application providers to have more predictable costs across customers.”
These problems are not specific to SaaS companies or to any one sector, points out Kohli, highlighting applications in industries ranging from education to finance as potential targets. If your software has a high write throughput, it is worth looking at the new configuration option, she adds. One example might be an app that consumes and processes time-series data coming in at high speed. Another might be software that conducts lots of analytics on transactional data.
Better price-performance in the numbers
Many companies might not have an I/O-intensive application now, but their applications could become I/O-intensive as their user base grows. In those cases, Aurora I/O-Optimized can improve query latency and throughput, and therefore the responsiveness and interactivity of their application, while providing better price-performance and cost savings, Kohli explains.
The new configuration is likely to offer better price-performance for many Aurora users, calculates Kohli, who cites typical cost savings of between 30 and 40 percent in pre-launch tests with select customers. And with the coinciding launch of AWS Graviton3-based R7g database instances for Aurora, customers can see up to 20 percent better price-performance compared with Graviton2-based R6g instances, according to AWS.
“Consider applications with high-write throughput, such as ecommerce applications, payment processing systems, or gaming applications,” says Kohli. “Such applications can gain up to 40 percent cost savings if I/O spend exceeds 25 percent of total database spend and up to 20 percent price-performance improvement with R7g instances.”
Aurora I/O-Optimized accompanies the existing configuration, which AWS is renaming Aurora Standard. “So now we are giving customers the flexibility to choose the configuration that best matches their price predictability and price-performance needs,” Kohli says. Because companies can implement the new configuration on a per-cluster basis, they choose the extent to which they use the Aurora I/O-Optimized model, depending on how they weigh price predictability against performance requirements for each workload.
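Because the choice is made per cluster, switching is a single API call. The sketch below, again using boto3 with a hypothetical cluster identifier, moves one cluster to the I/O-Optimized storage type while any other clusters in the account stay on Aurora Standard.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical cluster identifier; the storage type is set per cluster, so
# other clusters can remain on Aurora Standard (storage type "aurora").
rds.modify_db_cluster(
    DBClusterIdentifier="demo-aurora-cluster",
    StorageType="aurora-iopt1",   # Aurora I/O-Optimized
    ApplyImmediately=True,
)
```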
AWS believes that by offering two types of configurations – Aurora Standard and Aurora I/O-Optimized – Aurora is the first database service to offer customers cost-effective pricing regardless of their unique workload characteristics. Applications with moderate I/O usage can continue to use Aurora Standard to run their applications cost-effectively. Customers with I/O-intensive applications can switch to Aurora I/O-Optimized and gain cost savings.
The Aurora I/O-Optimized database configuration option is available now. For many cloud users, it represents a welcome layer of predictability in an economy crying out for certainty – not to mention a healthy saving on monthly cloud bills.
Sponsored by AWS.