Using AI in the Enterprise? You Might Be Exposing More Than You Think

Explore how AI-as-a-Service vulnerabilities and shared model risks highlight the growing importance of confidential computing.

7 minutes

Aug 20, 2025

abstract digital water drops on a black blackground

AI is everywhere—from summarising contracts to triaging support tickets. But as enterprises rush to adopt it, a crucial question remains unanswered: how do you know your data is actually safe? In most cases, you don’t.

Today, sensitive data is routinely sent to third-party AI platforms with little more than a checkbox of consent. The assumption is that model providers won’t store, misuse, or leak that data. But the “trust the vendor” model is increasingly unsustainable, especially as AI becomes core infrastructure for sectors like finance, healthcare, and defence.

Cracks in the AI Cloud

AI-as-a-Service has become the default way to run models at scale. Developers upload code, platforms spin up inference infrastructure on demand, and results are delivered in seconds. But beneath this convenience lies an unspoken assumption: that everything runs safely by default.

In reality, there’s little built-in isolation or runtime enforcement in most of these systems. The current model wasn’t designed with adversarial behaviour in mind—it’s built to move fast, share widely, and support collaborative development. That works—until it doesn’t. The following incidents demonstrate just how quickly that trust can be broken.

Hugging Face’s Inference API

Wiz researchers demonstrated a full compromise of Hugging Face’s inference backend by uploading a malicious model encoded in Python Pickle. Once executed, it gave them shell access to the underlying infrastructure.

From there, they gained access to internal cloud tools, collected security tokens, and were able to access data belonging to other users—breaking the basic promise that each customer’s data is kept separate. They also showed that Hugging Face’s Spaces feature could be exploited by hiding malicious commands in app setup files.

For businesses, this means your proprietary models and data could be exposed just by using a shared inference service. Without strong sandboxing, uploading a model was effectively uploading executable code, turning every tenant into a potential threat.

The Samsung Chatbot Data Leakage 

In early 2023, Samsung engineers pasted confidential semiconductor source code into ChatGPT while testing an internal tool. Although OpenAI advises against sharing sensitive inputs, the data was processed and may have been retained for future debugging or training.

OpenAI claims it won’t train on user data if you opt out, but these are policy-based controls. There’s no cryptographic proof, auditability, or runtime enforcement. If policies change or an insider goes rogue, users have no technical recourse.

What’s Actually at Stake

For most businesses, AI platforms now handle source code, strategic documents, investment logic, healthcare records, customer queries, or legal contracts. Here’s what they risk:

  • Zero transparency: You can’t inspect what code is running, how your data is handled, or whether it’s truly deleted.

  • No runtime guarantees: Admins and operators may still access memory, logs, or snapshots, especially during outages or debugging.

  • Weak isolation: Multi-tenant infrastructure means your data may reside on the same physical GPU as that of another user. That opens the door to side-channel attacks or token leakage.

  • Prompt injection risks: Even a single malicious file or input can hijack an internal LLM and cause it to leak data or act unexpectedly.

  • Misaligned models: The researchers in this study found that fine-tuning models to write insecure code caused unexpected, harmful behaviour in unrelated tasks. Tuning for one goal can destabilise others.

Enterprise Security Leaders Are Pushing Back

In his open letter to third-party suppliers, Patrick Opet, Chief Information Security Officer at JP Morgan, issued a stark warning through the Cloud Security Alliance: today’s AI infrastructure isn’t ready for enterprise workloads.

He highlighted a critical disconnect: while enterprises are asked to hand over their most sensitive data to external SaaS platforms, they’re offered little in return—no verifiable control over where or how that data is processed, and no technical guarantees that workloads are isolated, secure, or auditable. In his words, the entire trust model underpinning AI deployment is “broken.”

He argued for a foundational shift toward cryptographically enforced infrastructure, not just better docs or policies. At the centre of that shift is confidential computing.

Confidential Computing: A Practical Path Forward

Confidential computing secures data during processing, not just in transit or at rest. It runs applications inside hardware-protected enclaves that isolate memory from the host system, hypervisor, and cloud provider.

These enclaves generate cryptographic attestation: a signed proof of what code is running and under what conditions. The result? You no longer have to trust your AI provider blindly. You can verify that:

  • Only authorised code is handling your data

  • Workloads are running in a tamper-proof, isolated runtime

  • Encryption keys are released only after the attestation passes

This architecture makes insider access, memory scraping, and supply chain tampering materially harder to detect. Confidential computing in AI offers a way to meet the security needs, prove compliance, and adopt AI securely, without having to rewrite your software stack.

The Industry Is Moving

This shift toward infrastructure-level security is already underway. Some of the most influential players in AI are already rethinking their infrastructure.

Anthropic is actively researching how to run its Claude models inside isolated environments that offer cryptographic guarantees. The goal is to design secure runtimes where even the provider cannot access user inputs during processing—minimising insider risk and setting a new standard for user privacy.

Apple has taken a similar approach with its Private Cloud Compute system. Built from the ground up using confidential computing, PCC is designed so that no Apple engineer can inspect or recover user data, even during outages or debugging. Every part of the system, from the hardware to the operating system, has been hardened to enforce verifiable AI execution.

OpenAI has also called for this model to become standard. In a public statement on secure infrastructure for advanced AI, the company advocated for “trusted compute” as a foundational requirement—one that includes attestation, isolation, and resilience by design.

The message is clear: security must be built in, not bolted on.

Bottom Line

As data becomes more valuable and more vulnerable, how you deploy AI matters. Confidential computing is gaining traction because it moves beyond promises. Isolating workloads at the hardware level and verifying them with cryptographic attestation makes misuse far harder to pull off and nearly impossible to conceal.

A growing number of organisations are adopting this model, not just to mitigate risk, but to unlock collaboration that was previously off-limits due to trust constraints. Solutions like OBLV Deploy are making this shift practical: allowing teams to process sensitive data and run existing applications inside secure enclaves, without overhauling code or workflows.

If you’re reassessing your AI security—or want to make confidential computing part of your stack—get in touch.

ai

challenges

confidential computing

innovation

pets

responsible ai

trusted execution environment