Overview

Every Tinfoil Container requires a tinfoil-config.yml file in the root of your GitHub repository. This file defines the enclave runtime, resource allocation, container configuration, and request routing. The easiest way to get started is by creating a repo from the tinfoil-containers-template, which includes a pre-configured tinfoil-config.yml.

File location

The file must be named tinfoil-config.yml and placed at the root of your repository. Tinfoil fetches this file from GitHub when you deploy or validate a container.

Top-level fields

| Field | Type | Required | Description |
|---|---|---|---|
| `cvm-version` | string | Yes | Confidential VM version (e.g. `0.7.1`) |
| `cpus` | integer | Yes | Number of CPU cores |
| `memory` | integer | Yes | RAM in megabytes |
| `containers` | list | Yes | One or more container definitions |
| `shim` | object | Yes | Port and path routing configuration |

Valid resource values

| Resource | Valid values |
|---|---|
| CPUs | 2, 4, 8, 16, 32 |
| Memory (MB) | 8192, 16384, 32768, 65536, 131072 |
| GPUs | 1 or 8 (requires `runtime: nvidia` and a `gpus` setting in the container spec) |

Memory values correspond to 8 GB, 16 GB, 32 GB, 64 GB, and 128 GB respectively.

Container spec

Each entry in the containers list defines a container to run inside the enclave.
| Field | Type | Required | Description |
|---|---|---|---|
| `name` | string | Yes | Container identifier |
| `image` | string | Yes | Docker image with SHA256 digest (e.g. `image:tag@sha256:...`) |
| `command` | list | No | Command arguments passed to the container |
| `entrypoint` | list | No | Override the container's entrypoint |
| `env` | list | No | Environment variables (see below) |
| `secrets` | list | No | Secret names — values are set in the dashboard |
| `runtime` | string | No | Container runtime — set to `nvidia` for GPU workloads |
| `gpus` | string | No | GPU allocation — use `all` for single-GPU deployments (see note below) |
| `ipc` | string | No | IPC mode — set to `host` |
| `volumes` | list | No | Bind mounts (e.g. `/mnt/ramdisk/data:/data`) |

GPU configuration

NVIDIA confidential computing restricts enclaves to either 1 or 8 GPUs. For single-GPU deployments, set gpus: all. This gives the container access to the one available GPU. Specifying individual GPU indices (e.g. gpus: "0,1") is only necessary when running multiple containers in an 8-GPU enclave and splitting GPUs between them. Most deployments should use gpus: all.

Image digests

Container images must include a SHA256 digest (e.g. `image:tag@sha256:...`). This pins the exact image binary and ensures it can be verified in the transparency log. To get the digest:

```shell
docker pull <image>
docker inspect --format='{{index .RepoDigests 0}}' <image>
```

Environment variable formats

Environment variables support two formats:
```yaml
env:
  - PORT: "8080"          # Hardcoded value (YAML map syntax)
  - LOG_LEVEL: "info"

secrets:
  - DATABASE_URL           # Looked up from your org's secrets store
  - API_KEY
```

Routing

The shim section controls which ports and paths your container exposes.
| Field | Type | Required | Description |
|---|---|---|---|
| `shim.upstream-port` | integer | Yes | Port your container listens on |
| `shim.paths` | list | Yes | URL paths to expose (supports `*` wildcards) |

Only listed paths are reachable from outside the enclave. Any request to an unlisted path is rejected with a 404.
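
The allow-list behavior can be pictured as a simple path filter. A sketch in Python, assuming `*` behaves like a glob-style wildcard (the matching logic here is an illustration, not Tinfoil's actual implementation):

```python
from fnmatch import fnmatch

# Patterns listed under shim.paths; '*' is treated here as a glob wildcard.
ALLOWED_PATHS = ["/health", "/api/*"]

def is_exposed(path: str) -> bool:
    """Return True if a request path matches any allow-listed pattern."""
    return any(fnmatch(path, pattern) for pattern in ALLOWED_PATHS)

print(is_exposed("/health"))     # exact match on a listed path
print(is_exposed("/api/users"))  # matches the /api/* wildcard
print(is_exposed("/admin"))      # unlisted, so the enclave returns 404
```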

Examples

Minimal config

tinfoil-config.yml

```yaml
cvm-version: 0.7.1
cpus: 2
memory: 8192

containers:
  - name: "api"
    image: "ghcr.io/myorg/api-server:v1.0.0@sha256:abc123..."
    command: ["--port", "8000"]

shim:
  upstream-port: 8000
  paths:
    - /health
    - /api/*
```

With environment variables and secrets

tinfoil-config.yml

```yaml
cvm-version: 0.7.1
cpus: 4
memory: 16384

containers:
  - name: "api"
    image: "ghcr.io/myorg/api-server:v2.1.0@sha256:def456..."
    env:
      - PORT: "8080"
      - LOG_LEVEL: "info"
      - NODE_ENV: "production"
    secrets:
      - DATABASE_URL
      - STRIPE_SECRET_KEY
    command: ["--port", "8080"]

shim:
  upstream-port: 8080
  paths:
    - /health
    - /api/*
    - /webhooks/*
```

GPU inference server (vLLM)

tinfoil-config.yml

```yaml
cvm-version: 0.7.1
cpus: 16
memory: 65536

containers:
  - name: "inference"
    image: "vllm/vllm-openai:v0.14.1@sha256:6fc52be4609fc19b09c163be2556976447cc844b8d0d817f19bc9e1f44b48d5a"
    runtime: nvidia
    gpus: all
    ipc: host
    command: [
      "--model", "/models/my-model",
      "--port", "8001"
    ]

shim:
  upstream-port: 8001
  paths:
    - /v1/chat/completions
    - /v1/completions
    - /health
    - /metrics
```

Multi-container setup

tinfoil-config.yml

```yaml
cvm-version: 0.7.1
cpus: 8
memory: 32768

containers:
  - name: "api"
    image: "ghcr.io/myorg/api-server:v1.0.0@sha256:abc123..."
    env:
      - PORT: "8080"
    secrets:
      - DATABASE_URL
    command: ["--port", "8080"]
  - name: "worker"
    image: "ghcr.io/myorg/worker:v1.0.0@sha256:def456..."
    env:
      - QUEUE_URL: "redis://localhost:6379"
    secrets:
      - API_SECRET

shim:
  upstream-port: 8080
  paths:
    - /health
    - /api/*
```

Validation

You can validate your config before deploying. In the dashboard, click Create Deployment and select your repo and tag. Tinfoil fetches your tinfoil-config.yml and checks that:
- CPU, memory, and GPU values are valid
- Resource usage is within your org's limits
- Referenced secrets exist in your org
- The container image is accessible and includes a SHA256 digest
If validation fails, the dashboard shows specific error messages explaining what needs to be fixed.
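
Secret existence and org limits can only be checked server-side, but the resource and digest rules above are easy to sanity-check locally before opening the dashboard. A minimal sketch (the `check_config` function and its error messages are illustrative, not part of any Tinfoil tooling):

```python
import re

# Valid values from the resource table above.
VALID_CPUS = {2, 4, 8, 16, 32}
VALID_MEMORY_MB = {8192, 16384, 32768, 65536, 131072}
DIGEST_RE = re.compile(r"@sha256:[0-9a-f]{64}$")

def check_config(config: dict) -> list[str]:
    """Return a list of validation errors for a parsed tinfoil-config.yml."""
    errors = []
    if config.get("cpus") not in VALID_CPUS:
        errors.append(f"cpus must be one of {sorted(VALID_CPUS)}")
    if config.get("memory") not in VALID_MEMORY_MB:
        errors.append(f"memory must be one of {sorted(VALID_MEMORY_MB)}")
    for container in config.get("containers", []):
        if not DIGEST_RE.search(container.get("image", "")):
            errors.append(f"container {container.get('name')!r}: image must be pinned with a sha256 digest")
    return errors

config = {
    "cvm-version": "0.7.1",
    "cpus": 2,
    "memory": 8192,
    "containers": [{"name": "api", "image": "ghcr.io/myorg/api:v1@sha256:" + "a" * 64}],
}
print(check_config(config))  # []
```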

Container naming constraints

| Constraint | Rule |
|---|---|
| Allowed characters | Lowercase letters, numbers, hyphens |
| Start/end | Cannot start or end with a hyphen |
| Max length | 64 characters |
| Uniqueness | Unique per organization (production and debug namespaces are separate) |

Examples

| Name | Valid? |
|---|---|
| `my-api` | Yes |
| `data-processor-v2` | Yes |
| `MyAPI` | No — uppercase not allowed |
| `my_api` | No — underscores not allowed |
| `my api` | No — spaces not allowed |
| `-my-api` | No — cannot start with a hyphen |
| `my-api-` | No — cannot end with a hyphen |
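
The character, start/end, and length constraints above can be condensed into a single regular expression. A sketch in Python (the regex is derived from the table, not taken from Tinfoil's source):

```python
import re

# One lowercase alphanumeric char, or a name that starts and ends with
# one, with up to 62 chars of [a-z0-9-] in between (64 chars total).
NAME_RE = re.compile(r"^[a-z0-9](?:[a-z0-9-]{0,62}[a-z0-9])?$")

def is_valid_name(name: str) -> bool:
    """Check a container name against the documented constraints."""
    return bool(NAME_RE.match(name))

for name in ["my-api", "data-processor-v2", "MyAPI", "my_api", "-my-api"]:
    print(f"{name!r}: {is_valid_name(name)}")
```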

Template

The tinfoil-containers-template repo contains a ready-to-use tinfoil-config.yml with the latest cvm-version value. Create a new repo from this template to get started quickly.