Understanding Confidential Computing

Tinfoil is built on hardware-level isolation technology that enables cloud-based confidential computing: data remains private even while it is being processed on the server. Traditional cloud computing, by contrast, gives the cloud provider full visibility into data while it is in use.

Trusted Execution Environments (TEEs)

TEEs provide an isolated execution environment directly at the hardware level. The TEE operates as a “computer within a computer,” with its own dedicated memory regions and processing capabilities that remain completely isolated from the rest of the system and the operator (e.g., Tinfoil).

When code runs within a TEE, it is executed in a protected region where even privileged system software like the operating system, hypervisor, and system administrators cannot access or modify the data being processed. This is achieved through hardware-based isolation mechanisms and cryptographic protections that are built directly into modern processors.

Key security guarantees provided by TEEs include:

  • Complete separation from the host operating system: The TEE memory cannot be accessed or modified by any system software, including the OS kernel and hypervisor.

  • Protection from cloud infrastructure access: Cloud providers and administrators have no visibility into the TEE’s operations or data.

  • Isolation from other processes and applications: Applications running outside the TEE cannot interfere with or access the TEE’s memory space.

  • Hardware-enforced memory encryption: In modern TEEs like Intel SGX/TDX, AMD SEV-SNP, and NVIDIA Confidential Computing, all data in memory is automatically encrypted using keys that never leave the processor.

Always-Encrypted Memory

Memory encryption ensures that all data outside of the TEE’s processing unit remains encrypted, and thus inaccessible to software and hardware-based attackers. This encryption is performed automatically by the TEE hardware using keys that are generated and stored securely within the processor.

Automatic encryption/decryption:

The TEE hardware automatically encrypts data when it’s written to memory and decrypts it when read by the processor, making the process transparent to applications while ensuring continuous protection.
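This transparent encrypt-on-write, decrypt-on-read behavior can be illustrated with a toy software model. Note that this is only a conceptual sketch: the hash-based stream cipher below stands in for the hardware AES engine a real TEE uses, and the `EncryptedMemory` class and its methods are hypothetical names for illustration, not any real API.

```python
import hashlib
import os

# Toy model of TEE transparent memory encryption. The cipher here is an
# illustrative hash-based keystream, NOT real cryptography -- actual TEEs
# use dedicated hardware AES engines with keys that never leave the processor.

class EncryptedMemory:
    def __init__(self):
        # Key generated "inside the processor"; software never observes it.
        self._key = os.urandom(32)
        self._ram = {}  # address -> ciphertext, i.e. what sits in DRAM

    def _keystream(self, addr, length):
        out = b""
        counter = 0
        while len(out) < length:
            out += hashlib.sha256(
                self._key + addr.to_bytes(8, "big") + counter.to_bytes(8, "big")
            ).digest()
            counter += 1
        return out[:length]

    def write(self, addr, plaintext):
        # Encryption happens automatically on every store.
        ks = self._keystream(addr, len(plaintext))
        self._ram[addr] = bytes(p ^ k for p, k in zip(plaintext, ks))

    def read(self, addr):
        # Decryption happens automatically on every load.
        ct = self._ram[addr]
        ks = self._keystream(addr, len(ct))
        return bytes(c ^ k for c, k in zip(ct, ks))

    def dump(self, addr):
        # What a memory dump, cold-boot, or DMA attacker observes.
        return self._ram[addr]


mem = EncryptedMemory()
secret = b"patient record"
mem.write(0x1000, secret)

assert mem.read(0x1000) == secret   # transparent to the application
assert mem.dump(0x1000) != secret   # raw DRAM holds only ciphertext
```

The application sees ordinary reads and writes, while anything that inspects physical memory directly sees only ciphertext, which is the property the protections below rely on.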

Protection from memory attacks:

  • Memory dumps cannot reveal sensitive information.
  • Cold boot attacks are ineffective since memory contents remain encrypted.
  • Direct Memory Access (DMA) attacks are blocked.
  • Physical memory probing yields only encrypted data.

Secure key management:

  • Encryption keys are generated within the processor.
  • Keys never leave the TEE’s security boundary.
  • Each enclave instance has unique keys.
  • Keys are automatically destroyed when the TEE terminates.
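The per-enclave key property can be sketched as a derivation from a per-chip root secret and the enclave's measurement (a hash of its code and initial state). This is loosely modeled on real schemes such as Intel SGX's EGETKEY instruction, but the function and names below are hypothetical simplifications, not an actual hardware interface.

```python
import hmac
import hashlib
import os

# Illustrative sketch: each enclave's key is derived inside the processor
# from a hardware root secret (fused at manufacture, never exported) and
# the enclave's measurement. Names here are hypothetical.

HARDWARE_ROOT_SECRET = os.urandom(32)  # stands in for a fused, per-chip secret

def derive_enclave_key(measurement: bytes, context: bytes = b"sealing") -> bytes:
    # HMAC-based derivation binds the key to this enclave's identity,
    # so a different (e.g. tampered) enclave derives a different key.
    return hmac.new(HARDWARE_ROOT_SECRET, context + measurement,
                    hashlib.sha256).digest()

enclave_a = hashlib.sha256(b"enclave A code").digest()
enclave_b = hashlib.sha256(b"enclave B code").digest()

key_a = derive_enclave_key(enclave_a)
key_b = derive_enclave_key(enclave_b)

assert key_a != key_b                          # unique per enclave identity
assert key_a == derive_enclave_key(enclave_a)  # stable for the same identity
```

Because the derivation is keyed by the root secret, the same enclave always recovers the same key on the same chip, while no software outside the processor can reproduce it.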

Supported Hardware

Several hardware options exist today for instantiating TEEs. While TEEs were traditionally restricted to CPU-only workloads, recent NVIDIA GPUs can themselves operate as TEEs when a dedicated "confidential compute mode" is enabled.

Vendor   Platform                                Feature                      Status
NVIDIA   Hopper H100, Blackwell B100/B200 GPUs   Confidential Computing Mode  EA/GA
AMD      EPYC (Milan, Genoa, Bergamo)            SEV-SNP                      GA
Intel    Xeon (Sapphire Rapids)                  TDX                          GA

Security Boundaries

Secure enclaves draw a strict boundary between the trusted and untrusted components of a system. This separation is enforced at the hardware level, creating a clear delineation between what runs inside the protected environment and what remains outside. Understanding these boundaries is crucial for developers and security architects: applications must be designed so that sensitive operations stay within the secure perimeter.

The enclave establishes clear security boundaries:

Outside Enclave

  • OS and drivers
  • Other applications
  • System administrators
  • Cloud provider

Inside Enclave

  • Application code
  • Processing data
  • Encryption keys
  • Temporary variables

Limitations and Considerations

While TEEs provide strong security guarantees, it’s important to understand their limitations:

  • Side-channel vulnerabilities: TEEs may still be vulnerable to timing attacks, power analysis, and other side-channels
  • I/O patterns: The host system can observe data access patterns and I/O behavior
  • Denial of service: Cloud providers still control resource allocation and can restrict access

These limitations, however, have reduced impact for AI inference workloads due to their stateless nature. Unlike applications that maintain persistent state, AI inference typically processes each request independently without retaining information between requests. This statelessness offers several security advantages:

  • Reduced side-channel exposure: Inference operations follow predictable, data-independent execution paths, making timing attacks less informative
  • Uniform access patterns: Model operations typically have fixed, regular memory access patterns regardless of input data
  • Resilience to restarts: Without persistent state, workloads can be easily restarted if interrupted, minimizing denial-of-service impacts