
2023 could be the year of public cloud repatriation


Here’s a topic we don’t discuss as much as we should: public cloud repatriation. Many regard moving data and applications from a public cloud provider back to enterprise data centers as an admission that someone made a big mistake putting those workloads in the cloud in the first place.

I don’t consider this a failure so much as an adjustment of hosting platforms based on current economic realities. Many enterprises cite the high cost of cloud computing as the reason for moving back to more traditional platforms.

High cloud bills are rarely the fault of the cloud providers. They are often self-inflicted by enterprises that don’t refactor applications and data to run cost-efficiently on the new cloud platforms. Yes, the applications work as well as they did on the original platform, but you pay for the inefficiencies you chose not to deal with during the migration. Cloud bills come in higher than expected because lifted-and-shifted applications can’t take advantage of cloud-native capabilities, such as auto-scaling, security, and storage management, that allow workloads to run efficiently.
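To see why auto-scaling alone matters so much, consider a lifted-and-shifted application that keeps a fixed fleet of servers sized for peak load running around the clock, versus a refactored version that scales with demand. Here’s a minimal back-of-envelope sketch; the instance counts and hourly rate are hypothetical, not any provider’s actual pricing:

```python
# Back-of-envelope comparison: fixed fleet vs. auto-scaled fleet.
# All figures are hypothetical and for illustration only.

HOURLY_RATE = 0.20       # assumed cost per instance-hour
HOURS_PER_MONTH = 730

# Lifted-and-shifted: 10 instances sized for peak load, running 24/7.
fixed_fleet_cost = 10 * HOURLY_RATE * HOURS_PER_MONTH

# Refactored: auto-scaling runs 4 instances off-peak (16 hours a day)
# and 10 instances at peak (8 hours a day).
off_peak = 4 * HOURLY_RATE * HOURS_PER_MONTH * (16 / 24)
peak = 10 * HOURLY_RATE * HOURS_PER_MONTH * (8 / 24)
auto_scaled_cost = off_peak + peak

print(f"Fixed fleet: ${fixed_fleet_cost:,.0f} per month")   # ~$1,460
print(f"Auto-scaled: ${auto_scaled_cost:,.0f} per month")   # ~$876
```

Same workload, same platform; the only difference is whether the application was rearchitected to use what the cloud provides.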

It’s easy to point out the folly of not refactoring data and applications for cloud platforms during migration. The reality is that refactoring is time-consuming and expensive, and the pandemic put many enterprises under tight deadlines to get to the cloud. For enterprises that did not optimize systems during migration, it doesn’t make much economic sense to refactor those workloads now. Repatriation is often the more cost-effective option, even considering the hassle and expense of operating your own systems in your own data center.

In a happy coincidence, the prices of hard drive storage, networking hardware, compute hardware, power supplies, and other tech gear have dropped over the past 10 years, while cloud computing costs have stayed about the same or crept slightly higher.

Business is business. You can’t ignore the fact that it makes economic sense to move some workloads back to a traditional data center.

It makes the most sense to repatriate workloads that do much the same thing day after day, such as storing data for long periods without any special processing (no advanced artificial intelligence or business intelligence, for example). These workloads can often move back to owned hardware and show a net ROI gain. Even with the added costs of taking over and internalizing operations, the enterprise saves money, sometimes a lot of money, compared to equivalent public cloud hosting.
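A simple storage example shows how that math can play out. Again, the figures below are hypothetical placeholders rather than real quotes, and a genuine comparison would also need to account for staffing, retrieval fees, egress charges, and hardware refresh cycles:

```python
# Back-of-envelope: long-term "cold" data storage, cloud vs. owned hardware.
# All figures are hypothetical and for illustration only.

TB_STORED = 500
YEARS = 5

# Cloud: assume $0.01 per GB-month for infrequent-access object storage.
cloud_total = TB_STORED * 1_000 * 0.01 * 12 * YEARS

# On-premises: assume $25 per TB for raw drives, doubled for redundancy,
# plus a flat yearly estimate for power, rack space, and administration.
hardware = TB_STORED * 25 * 2
operations = 5_000 * YEARS
onprem_total = hardware + operations

print(f"Cloud over {YEARS} years:   ${cloud_total:,.0f}")   # $300,000
print(f"On-prem over {YEARS} years: ${onprem_total:,.0f}")  # $50,000
```

Change the assumptions and the gap narrows or widens, which is exactly why this should be a deliberate total-cost-of-ownership analysis rather than a gut call.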

However, don’t forget that many workloads depend on specialized cloud-based services. Those workloads typically cannot be repatriated because affordable analogs are unlikely to exist on traditional platforms. When advanced IT services are involved (AI, deep analytics, massive scaling, quantum computing, etc.), public clouds are typically more economical.

Many enterprises made a deliberate business decision at the time to absorb the additional costs of running lifted-and-shifted applications on public clouds. Now, based on today’s business environment and economics, many will make a simple decision to bring some workloads back into their data centers.

The overall goal is to find the most optimized architecture to support your business. Sometimes it’s on a public cloud; many times, it’s not. Or not yet. I learned a long time ago not to fall blindly in love with any technology, including cloud computing.

2023 may indeed be the year we begin repatriating applications and data stores that are more cost-effective to run inside a traditional enterprise data center. This is not a criticism of cloud computing. Like any technology, cloud computing is better for some uses than for others. That “fact” will evolve and change over time, and businesses will adjust again. No shame in that.

Copyright © 2023 IDG Communications, Inc.


