Microsoft is taking advantage of hardware-based security features in AMD’s Epyc processors for its confidential containers running in Azure, as part of its push into confidential computing.
Confidential containers on Azure Container Instances (ACI), Microsoft’s serverless confidential computing platform, were released to limited preview in May 2022, and this week the company moved them into public preview, giving a wider range of organizations access.
The service makes use of the Secure Encrypted Virtualization-Secure Nested Paging (SEV-SNP) technology in AMD’s server chips to secure containerized Linux workloads.
“Azure customers are increasingly turning to cloud-native, container-based applications to support their workloads,” Peter Pogorski, senior product manager for Azure Container Instances, wrote in a blog post. “However, these customers are also seeking cloud hosting options that offer the highest levels of data protection, which often require complex infrastructure management and expertise.”
Confidential computing aims to protect data during the vulnerable window when it is in use. Data at rest and in transit is typically encrypted; confidential computing extends that protection to data being processed. As we wrote last year, it isolates sensitive data and code, keeping them from being exposed not only to the rest of the host system but, just as importantly, to insider threats, outside attackers, and compromised kernels and hypervisors.
Hardware plays a central role in confidential computing, providing a trusted execution environment (TEE) for running computations in encrypted memory. SEV-SNP isolates the container from malicious hypervisors and provides a hardware-managed key, unique to each container group, to protect against threats such as data replay, corruption, remapping, and alias-based attacks.
On Microsoft’s ACI platform, AMD Epyc chips provide the hardware-based TEE, which delivers runtime protection for data in use and for the code that is initialized inside it.
“Customers can lift and shift their containerized Linux applications or build new confidential computing applications without needing to adopt specialized programming models,” Pogorski wrote. “Confidential containers on ACI can protect data-in-use by processing data in encrypted memory.”
Through ACI, enterprises can use verifiable execution policies to check workload integrity and ensure that untrusted code doesn’t run. Verifiable initialization policies give users control over the software and actions allowed when a container launches, protecting against miscreants slipping in application modifications that could lead to data leaks.
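For a sense of how the pieces fit together, here is a minimal sketch of what a confidential container group definition might look like, written as an ARM-template-style resource expressed in Python. The confidential SKU and the confidentialComputeProperties/ccePolicy fields follow Microsoft's public examples but should be checked against current ACI documentation, and the base64 policy blob is a placeholder rather than output from Microsoft's policy tooling.

```python
import base64
import json

# Illustrative only: a container group definition in the style of an ARM
# template, with the confidential SKU and an execution (CCE) policy attached.
# The policy would normally be generated by Microsoft's tooling from the
# container images; the base64 string below is just a stand-in.
placeholder_policy = base64.b64encode(b"package policy ...").decode()

container_group = {
    "type": "Microsoft.ContainerInstance/containerGroups",
    "apiVersion": "2023-05-01",          # assumption: any recent API version
    "name": "cc-demo",
    "location": "westeurope",
    "properties": {
        "sku": "Confidential",           # request an SEV-SNP backed container group
        "confidentialComputeProperties": {
            "ccePolicy": placeholder_policy   # enforced when the group launches
        },
        "osType": "Linux",
        "containers": [{
            "name": "app",
            "properties": {
                "image": "mcr.microsoft.com/azuredocs/aci-helloworld",
                "resources": {"requests": {"cpu": 1, "memoryInGB": 1.5}},
            },
        }],
    },
}

print(json.dumps(container_group, indent=2))
```

The point of pinning the policy into the deployment is that anything not covered by it, a swapped image, an unexpected command, simply isn't allowed to start inside the TEE.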
There is also remote guest attestation, which lets a relying party verify the trustworthiness of a container group before handing it sensitive data.
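The attestation flow itself isn't spelled out in the blog post, but conceptually it runs along these lines: the container group obtains a hardware-signed SEV-SNP report bound to a fresh nonce, and the relying party checks the signature chain and compares the reported measurement to the value it expects before releasing any secrets. The sketch below is a hedged illustration of those checks; the helper names and report fields are ours, not Azure or AMD APIs.

```python
import hashlib
import hmac
import secrets

# Conceptual sketch of relying-party checks on an SEV-SNP attestation report.
# In reality the report is a signed binary structure fetched inside the TEE and
# verified against AMD's certificate chain; here it is modelled as a dict and
# the signature check is stubbed out.

EXPECTED_MEASUREMENT = "a" * 96   # placeholder launch measurement (hex)

def verify_report_signature(report: dict) -> bool:
    # Stub: a real verifier walks the VCEK/ASK/ARK certificate chain from AMD.
    return report.get("signature") == "valid-for-demo"

def attest(report: dict, nonce: bytes) -> bool:
    if not verify_report_signature(report):
        return False
    # The nonce is bound into the report so stale reports can't be replayed.
    if not hmac.compare_digest(report["report_data"], hashlib.sha256(nonce).hexdigest()):
        return False
    # The measurement covers the launched guest, tying the report to the
    # container group's initial state.
    return hmac.compare_digest(report["measurement"], EXPECTED_MEASUREMENT)

nonce = secrets.token_bytes(32)
report = {
    "signature": "valid-for-demo",
    "report_data": hashlib.sha256(nonce).hexdigest(),
    "measurement": EXPECTED_MEASUREMENT,
}
print("release secrets:", attest(report, nonce))
```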
ACI is typically used for such workloads as continuous integration, data processing pipelines, and batch processing. With confidential containers, ACI can take on new jobs, including data clean rooms for multi-party analytics and machine-learning training, and confidential inferencing, Pogorski wrote.
The demand for confidential computing is expected to grow rapidly as the number, speed, and sophistication of cyberattacks increase. Everest Group analysts wrote [PDF] that the total addressable market in 2021 was $2 billion and could grow 90-95 percent every year through 2026, driven in large part by regulated industries like finance, banking, healthcare, and the public sector. ®