Understanding Confidential Computing

Tinfoil is built on top of hardware-level isolation technology that shields data processing from outside parties, including the operators of the hardware itself. This form of computation is known as confidential computing, and it preserves data privacy even when processing happens on remote cloud servers. In contrast, traditional cloud computing gives the cloud provider full visibility into all data being processed. Such secure processing environments are often called “secure enclaves,” formally known as Trusted Execution Environments (TEEs).

Trusted Execution Environments (TEEs)

TEEs provide an isolated execution environment directly at the hardware level. The TEE operates as a “computer within a computer,” with its own dedicated memory regions and processing capabilities that remain completely isolated from the rest of the system and from the operator (e.g., Tinfoil). When code runs within a TEE, it executes in a protected region where even privileged system software (the operating system and hypervisor) and system administrators cannot access or modify the data being processed. This is achieved through hardware-based isolation mechanisms and cryptographic protections built directly into modern processors. A minimal guest-side detection sketch follows the list of guarantees below.

[Diagram: TEE encryption architecture]

Key security guarantees provided by TEEs include:
  • Complete separation from the host operating system: The TEE memory cannot be accessed or modified by any system software, including the OS kernel and hypervisor.
  • Protection from cloud infrastructure access: Cloud providers and administrators have no visibility into the TEE’s operations or data.
  • Isolation from other processes and applications: Applications running outside the TEE cannot interfere with or access the TEE’s memory space.
  • Hardware-enforced memory encryption: In modern TEEs like Intel TDX and AMD SEV-SNP, all data in memory is automatically encrypted using keys that never leave the processor.
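For a concrete sense of what these guarantees look like from inside a confidential VM, the following sketch checks, on Linux, whether the guest exposes the usual TEE guest interfaces. The device paths and cpuinfo flag are assumptions based on the upstream sev-guest and tdx-guest drivers and vary by kernel version; treat this as a sketch, not a definitive check.

```python
# Sketch: detect, from inside a Linux guest, whether common TEE guest
# interfaces are present. Paths and flags are assumptions; they vary by kernel.
import os

def detect_tee_guest() -> str:
    # AMD SEV-SNP guests typically expose /dev/sev-guest via the sev-guest driver.
    if os.path.exists("/dev/sev-guest"):
        return "AMD SEV-SNP"
    # Intel TDX guests typically expose /dev/tdx_guest via the tdx-guest driver.
    if os.path.exists("/dev/tdx_guest"):
        return "Intel TDX"
    # Fallback: look for the TDX guest CPU feature flag.
    try:
        with open("/proc/cpuinfo") as f:
            if "tdx_guest" in f.read():
                return "Intel TDX"
    except OSError:
        pass
    return "no TEE guest interface detected"

if __name__ == "__main__":
    print(detect_tee_guest())
```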

Always-Encrypted Memory

Memory encryption ensures that all data outside the TEE’s processing unit remains encrypted, and thus inaccessible to software- and hardware-based attackers. This encryption is performed automatically by the TEE hardware using keys that are generated and stored securely within the processor. A sketch for checking that encryption is active appears after the lists below.
Automatic encryption/decryption: The TEE hardware automatically encrypts data when it is written to memory and decrypts it when read back by the processor, making the process transparent to applications while ensuring continuous protection.
Protection from memory attacks:
  • Memory dumps cannot reveal sensitive information.
  • Cold boot attacks are ineffective since memory contents remain encrypted.
  • Direct Memory Access (DMA) attacks are blocked.
  • Physical memory probing yields only encrypted data.
Secure key management:
  • Encryption keys are generated within the processor.
  • Keys never leave the TEE’s security boundary.
  • Each TEE instance has unique keys.
  • Keys are automatically destroyed when the TEE terminates.
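As a rough illustration of how a guest can confirm that memory encryption is active, the sketch below reads AMD’s SEV status register (MSR_AMD64_SEV, 0xC0010131) and decodes its feature bits. This is an AMD-specific assumption: it requires root, the msr kernel module, and an SEV-capable platform; Intel TDX guests expose equivalent information through different interfaces.

```python
# Sketch (AMD-specific assumption): read MSR_AMD64_SEV to see which
# memory-encryption features are active. Requires root and `modprobe msr`.
import os
import struct

MSR_AMD64_SEV = 0xC0010131  # SEV status register (AMD architecture manual)

def read_sev_status(cpu: int = 0) -> dict:
    fd = os.open(f"/dev/cpu/{cpu}/msr", os.O_RDONLY)
    try:
        raw = os.pread(fd, 8, MSR_AMD64_SEV)  # MSRs are read as 64-bit values
    finally:
        os.close(fd)
    value = struct.unpack("<Q", raw)[0]
    return {
        "sev": bool(value & 0x1),      # bit 0: memory encryption active
        "sev_es": bool(value & 0x2),   # bit 1: encrypted register state
        "sev_snp": bool(value & 0x4),  # bit 2: memory integrity protection
    }

if __name__ == "__main__":
    print(read_sev_status())
```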

Supported Hardware

There are several options for instantiating TEEs. While TEEs were traditionally restricted to CPU-only workloads, the latest NVIDIA GPUs can now be included by enabling a special “confidential computing mode” (a quick way to query this mode is sketched after the table).
Vendor    Platform                                   Feature
NVIDIA    Hopper H100/H200, Blackwell B200 GPUs      Confidential Computing Mode
AMD       EPYC (Milan, Genoa, Bergamo)               SEV-SNP
Intel     Xeon (Sapphire Rapids)                     TDX
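For the GPU row of the table, recent NVIDIA drivers ship a conf-compute subcommand in nvidia-smi that reports whether the GPU is running in confidential computing mode. The exact flags and output format depend on the driver version, so the sketch below should be treated as an assumption rather than a stable interface.

```python
# Sketch: ask nvidia-smi for the GPU's confidential computing state.
# The 'conf-compute' subcommand and '-f' flag are assumptions that depend on
# driver version; consult the driver's nvidia-smi documentation.
import subprocess

def gpu_cc_state() -> str:
    try:
        out = subprocess.run(
            ["nvidia-smi", "conf-compute", "-f"],
            capture_output=True, text=True, check=True,
        )
    except (FileNotFoundError, subprocess.CalledProcessError) as exc:
        return f"unable to query confidential computing state: {exc}"
    return out.stdout.strip()

if __name__ == "__main__":
    print(gpu_cc_state())
```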

Security Boundaries

TEEs create strict isolation between trusted and untrusted components of the system. This isolation is enforced at the hardware level, creating a clear delineation between what runs inside the protected environment and what remains outside. Understanding these boundaries is crucial for developers and security architects to properly design their applications and ensure sensitive operations are contained within the secure perimeter. The TEE establishes clear security boundaries (an illustrative boundary check is sketched after the lists below):

Outside TEE

  • OS and drivers
  • Other applications
  • System administrators
  • Cloud provider

Inside TEE

  • Application code
  • Processing data
  • Encryption keys
  • Temporary variables
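The lists above can be read as a simple design-review rule: anything that touches plaintext data or key material must live inside the TEE. The sketch below encodes that rule with hypothetical component names; it is illustrative only and not part of any Tinfoil API.

```python
# Sketch: flag components that handle plaintext but sit outside the TEE.
# Component names below are hypothetical examples, not a real deployment.
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    inside_tee: bool
    handles_plaintext: bool  # sees unencrypted data or key material

def boundary_violations(components: list[Component]) -> list[str]:
    # Anything that touches plaintext must live inside the enclave.
    return [c.name for c in components if c.handles_plaintext and not c.inside_tee]

deployment = [
    Component("inference server", inside_tee=True, handles_plaintext=True),
    Component("encryption keys", inside_tee=True, handles_plaintext=True),
    Component("hypervisor", inside_tee=False, handles_plaintext=False),
    Component("host log shipper", inside_tee=False, handles_plaintext=True),  # misplaced
]

print(boundary_violations(deployment))  # ['host log shipper']
```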

Limitations and Considerations

While TEEs provide strong security guarantees, it’s important to understand their limitations:
  • Side-channel vulnerabilities: TEEs may still be vulnerable to timing attacks, power analysis, and other side-channels. Side-channel attacks exploit information leaked through the physical implementation of a system rather than weaknesses in the algorithms themselves. Examples include observing power consumption patterns, electromagnetic emissions, or timing variations that could potentially reveal information about the data being processed.
  • I/O patterns: The host system can observe data access patterns and I/O behavior, which can reveal potentially sensitive metadata (though in most cases this is not an issue).
  • Denial of service: Cloud providers still control resource allocation and can restrict access (e.g., deny service).
These limitations, however, have minimal impact on AI inference workloads in particular. The beauty of AI inference is that it can easily be made stateless by default: unlike applications that maintain persistent state, it typically processes each request independently, without retaining information between requests. This statelessness offers several security advantages (a minimal stateless handler is sketched after the list):
  • Reduced side-channel exposure: Inference operations follow predictable, data-independent execution paths, making timing attacks less informative.
  • Uniform access patterns: Model operations typically have fixed, regular memory access patterns regardless of input data.
  • Resilience to restarts: Without persistent state, workloads can be easily restarted if interrupted, minimizing denial-of-service impacts.
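To make the stateless pattern concrete, the sketch below handles each request purely from its inputs and keeps nothing afterwards. The names (InferenceRequest, run_model) are hypothetical placeholders rather than a real inference API.

```python
# Sketch of a stateless request handler: no globals, no caches, no prompt logs.
# run_model stands in for the actual model forward pass inside the TEE.
from dataclasses import dataclass

@dataclass(frozen=True)
class InferenceRequest:
    prompt: str

@dataclass(frozen=True)
class InferenceResponse:
    completion: str

def run_model(prompt: str) -> str:
    # Placeholder for the real model inference.
    return prompt.upper()

def handle(request: InferenceRequest) -> InferenceResponse:
    # State lives only in local variables, which fall out of scope (and remain
    # encrypted in memory) once the response is returned.
    return InferenceResponse(completion=run_model(request.prompt))

if __name__ == "__main__":
    print(handle(InferenceRequest(prompt="hello enclave")))
```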