What does digital sovereignty look like when AI meets the data center? At its first conference since joining the Linux Foundation, the OpenInfra Foundation made the case that open infrastructure, anchored by OpenStack, can run serious AI at scale without surrendering control to a single vendor. The pitch is credible on security and performance, but the path away from incumbents like VMware is tougher than the headlines suggest.
Confidential GPUs and sandboxed containers get real
The most concrete news came from the stack-level plumbing. Kata Containers, lightweight virtual machines managed through container tooling, now support confidential computing flows, with an optimized Linux kernel and initramfs enabling attestation of workloads before they start. The design uses trusted execution environments (TEEs) on supported hosts (Intel SGX/TDX or AMD SEV-SNP) so the container runtime can verify integrity before handing over GPU access. Nvidia tooling handles attestation; the company's GPU Operator can simplify setup for Kubernetes clusters.
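The gating logic behind that flow can be illustrated with a simplified sketch in plain Python. This is a conceptual stand-in, not the actual Kata, TEE, or Nvidia attestation API: the boot components are measured (hashed), a verifier compares the measurements against known-good references, and only on a match is the GPU handed to the workload. All names and reference values here are hypothetical.

```python
import hashlib

# Known-good measurements the verifier trusts (hypothetical values for illustration).
REFERENCE_MEASUREMENTS = {
    "kernel": hashlib.sha256(b"kata-optimized-kernel").hexdigest(),
    "initramfs": hashlib.sha256(b"kata-initramfs-minimal").hexdigest(),
}

def measure(component: bytes) -> str:
    """Hash a boot component, standing in for a TEE measurement register."""
    return hashlib.sha256(component).hexdigest()

def attest(evidence: dict) -> bool:
    """Verifier check: every measured component must match its reference."""
    return all(
        evidence.get(name) == ref
        for name, ref in REFERENCE_MEASUREMENTS.items()
    )

def launch_workload(kernel: bytes, initramfs: bytes) -> str:
    """Only release GPU access when attestation of the boot chain succeeds."""
    evidence = {"kernel": measure(kernel), "initramfs": measure(initramfs)}
    if not attest(evidence):
        return "denied: measurement mismatch, GPU not attached"
    return "attested: GPU passthrough enabled"

print(launch_workload(b"kata-optimized-kernel", b"kata-initramfs-minimal"))
print(launch_workload(b"tampered-kernel", b"kata-initramfs-minimal"))
```

The point of the pattern is the ordering: the workload never sees the accelerator until the verifier has signed off on what booted underneath it.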
Security isn't only about attestation. Developers at Ant Group showed how extended Berkeley Packet Filter (eBPF) hooks inside the Kata-optimized kernel can enforce fine-grained policies, watching network events, processes, and system calls via Linux Security Module (LSM) integration. For regulated industries training or serving models with sensitive data, that combination of confidential VMs plus in-kernel policy matters more than marketing claims.
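Real eBPF programs are written against kernel hooks, but the default-deny policy logic they enforce fits in a few lines of plain Python (a conceptual stand-in, not eBPF or LSM code; the allowlisted paths and network prefix are made up for illustration):

```python
# Conceptual allowlist policy, mimicking what an in-kernel LSM/eBPF hook
# would enforce: only pre-approved binaries and destinations are permitted.
ALLOWED_EXEC = {"/usr/bin/python3", "/usr/bin/model-server"}  # hypothetical paths
ALLOWED_NET_PREFIX = "10."  # internal addresses only (crude stand-in for CIDR matching)

def allow_exec(path: str) -> bool:
    """Default-deny: a process may start only if its binary is allowlisted."""
    return path in ALLOWED_EXEC

def allow_connect(dest_ip: str) -> bool:
    """Default-deny: outbound connections only toward the internal range."""
    return dest_ip.startswith(ALLOWED_NET_PREFIX)

print(allow_exec("/usr/bin/model-server"))  # True
print(allow_exec("/tmp/dropper"))           # False
print(allow_connect("10.0.3.7"))            # True
print(allow_connect("93.184.216.34"))       # False
```

The value of doing this in the kernel rather than in userspace is that the workload cannot bypass the check: the hook fires on every exec and connect, regardless of what the container believes about itself.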
VMware exits: more than VM lift-and-shift
OpenStack is being positioned as a VMware alternative, but today's documentation mostly covers basic virtual machine migration. The harder part is replacing vSAN and NSX capabilities with OpenStack components (Cinder, Manila, Swift, Glance, Neutron), which bring different operational models with different failure modes. CIOs should expect multi-quarter migrations and the need for skilled integrators, particularly for network overlays, storage performance, and Day-2 operations.
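As a planning aid, the component pairing above can be captured in a small inventory script. The mappings are illustrative rather than authoritative (the vSphere-to-Nova and Content Library-to-Glance rows are common pairings not spelled out in the text), and a real migration needs per-workload assessment:

```python
# Rough VMware-to-OpenStack capability map. One-to-many rows flag where a
# single proprietary product splits into several OpenStack projects with
# distinct operational models, which is where migrations tend to stall.
VMWARE_TO_OPENSTACK = {
    "vSphere VMs": ["Nova"],                  # compute
    "vSAN": ["Cinder", "Manila", "Swift"],    # block / file / object storage
    "NSX": ["Neutron"],                       # networking and overlays
    "Content Library": ["Glance"],            # image management
}

def migration_checklist(inventory: list) -> list:
    """Expand a VMware feature inventory into the OpenStack services to stand up."""
    needed = []
    for feature in inventory:
        for svc in VMWARE_TO_OPENSTACK.get(feature, []):
            if svc not in needed:
                needed.append(svc)
    return needed

print(migration_checklist(["vSphere VMs", "vSAN", "NSX"]))
# prints ['Nova', 'Cinder', 'Manila', 'Swift', 'Neutron']
```

Even this toy version makes the article's point visible: replacing vSAN alone means standing up three services, each with its own scaling and failure behavior.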
Scale and sovereignty: not just theory
OpenStack's relevance isn't hypothetical. The platform underpins more than 300 public cloud data centers managing over 40 million compute cores, including major telcos and research facilities across Europe and Asia. Operators like Deutsche Telekom and StackIT run sovereign clouds where data residency and procurement flexibility are strategic advantages. CERN's environment alone exceeds 300,000 cores, underscoring that open infrastructure can handle heavy, bursty research workloads.
Open vs. proprietary AI data stacks
The OpenInfra Foundation also released a paper detailing OpenStack reference designs for five AI scenarios, from base-model training and GPU-as-a-Service to MLOps platforms, HPC research clusters, and edge/AIoT. That roadmap lands as enterprise vendors push tightly integrated alternatives: Oracle's AI Database 26ai bakes vectors and retrieval-augmented generation (RAG) into its core database and wraps it in a multicloud lakehouse plus cross-cloud credits. The appeal is obvious, fewer moving parts and one throat to choke, but it trades flexibility for lock-in. OpenStack's approach remains modular and portable, especially for GPU pooling across sites and clouds, but requires more architectural expertise up front.
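The GPU-pooling argument can be made concrete with a toy scheduler (entirely hypothetical; no OpenStack API is involved, and the site names are invented): jobs land on whichever site currently has enough free GPUs, which is exactly the cross-site flexibility a modular, portable stack preserves and an integrated single-vendor stack tends to forfeit.

```python
from dataclasses import dataclass

@dataclass
class Site:
    """A GPU pool at one site or cloud (illustrative, not an OpenStack object)."""
    name: str
    total_gpus: int
    used_gpus: int = 0

    @property
    def free_gpus(self) -> int:
        return self.total_gpus - self.used_gpus

def place_job(sites: list, gpus_needed: int):
    """Greedy placement: pick the site with the most free GPUs that fits the job."""
    candidates = [s for s in sites if s.free_gpus >= gpus_needed]
    if not candidates:
        return None  # no single pool can host the job
    best = max(candidates, key=lambda s: s.free_gpus)
    best.used_gpus += gpus_needed
    return best.name

pool = [Site("eu-sovereign", 16), Site("on-prem", 8), Site("research-hpc", 32)]
print(place_job(pool, 24))  # lands on the largest free pool
print(place_job(pool, 24))  # a second large job no longer fits anywhere
```

A real scheduler would account for data locality, interconnect, and residency constraints; the sketch only shows why pooling across sites beats a fixed allocation per silo.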
Follow the infrastructure, not the hype
Investors are tracking the same shift. Blackstone now puts AI disruption on the first page of deal memos and has steered away from acquisitions in categories exposed to AI-driven automation while doubling down on data center infrastructure. The lesson for IT leaders: capacity, energy, and location strategy are becoming board-level issues, not just cloud-team line items.
Meanwhile, the industry's tendency to over-claim persists. A recent controversy over AI "solving" long-standing math problems, later corrected as literature lookups rather than breakthroughs, shows why verifiable, auditable pipelines matter. If you can't attest who ran what, where, and under which policy, your AI risk and your compliance exposure both go up.
What this means for CIOs and architects
- If sovereignty is a requirement, OpenStack's GPUaaS designs and confidential Kata-based runtimes offer a credible path, especially for EU operators and telcos.
- Plan VMware exits as phased transformations, not lift-and-shift. Network (NSX to Neutron) and storage (vSAN to Cinder/Manila/Swift) will be the bottlenecks.
- Balance control against speed: proprietary AI databases reduce integration work but increase switching costs; open stacks require more engineering but keep multicloud options open.
- Budget for hardware trust: TEEs (SGX/TDX/SEV-SNP) and GPU attestation should be treated as a baseline for sensitive workloads.
The takeaway: open infrastructure for AI isn't about ideology. It's about proving security, portability, and cost control at the hardware boundary, and doing so with enough operational maturity to survive production. OpenInfra's latest moves push in that direction. The real test will be how fast the community closes the VMware gap and how many operators choose open designs over convenience when the next AI budget cycle hits.

