Configure Hermes Agent with Tinfoil
TL;DR — Configure Hermes Agent to use Tinfoil:
  • Install Hermes
  • Run hermes model and select Custom endpoint
  • Enter the base URL (https://inference.tinfoil.sh/v1/), your Tinfoil API key, and a model name (e.g., kimi-k2-5)
Verification Note: When using Hermes or other OpenAI-compatible clients, connection-time attestation verification is not performed automatically. However, all Tinfoil enclaves support audit-time verification through attestation transparency, creating an immutable audit trail. For applications requiring connection-time verification, use our official SDK clients. Learn more about the verification approaches.

Introduction

Hermes Agent is a self-improving AI assistant from Nous Research. This tutorial shows how to configure Hermes to route all inference requests through Tinfoil’s confidential computing enclaves.

Prerequisites

  • Tinfoil API Key: Get one at tinfoil.sh
  • Terminal: macOS, Linux, or WSL on Windows
You are billed for all Tinfoil Inference API usage. See Tinfoil pricing.
Never commit your API key to version control.
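One common way to keep the key out of your shell history and dotfiles is to store it in a file excluded from version control and export it as an environment variable. A minimal sketch — the path and variable name here are arbitrary conventions for illustration, not anything Hermes or Tinfoil requires:

```shell
# Illustrative only: the storage path and env var name are conventions,
# not required by Hermes or Tinfoil. Replace the placeholder with your key.
mkdir -p "$HOME/.config/tinfoil"
printf '%s' 'your-tinfoil-api-key' > "$HOME/.config/tinfoil/api_key"
chmod 600 "$HOME/.config/tinfoil/api_key"   # readable only by you
export TINFOIL_API_KEY="$(cat "$HOME/.config/tinfoil/api_key")"
```

Add the file (or its parent directory) to your `.gitignore` so it can never be committed by accident.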

Installation

Follow the Hermes Agent installation guide to install Hermes on your system. Verify installation:
hermes --version

Configure Tinfoil as the Provider

Step 1: Open Model Configuration

Run the interactive model selector:
$ hermes model
Select provider:

  GitHub Copilot ACP (spawns `copilot --acp --stdio`)
  Google AI Studio (Gemini models OpenAI-compatible endpoint)
  Z.AI / GLM (Zhipu AI direct API)
  Kimi / Moonshot (Moonshot AI direct API)
  MiniMax (global direct API)
  MiniMax China (domestic direct API)
  Kilo Code (Kilo Gateway API)
  OpenCode Zen (35+ curated models, pay-as-you-go)
  OpenCode Go (open models, $10/month subscription)
  AI Gateway (Vercel, 200+ models, pay-per-use)
  Alibaba Cloud / DashScope Coding (Qwen + multi-provider)
❯ Custom endpoint (enter URL manually)
  Cancel

Step 2: Enter Tinfoil Configuration

Use the arrow keys to highlight Custom endpoint, then press Enter. You will be prompted for three values:
  • Base URL: https://inference.tinfoil.sh/v1/
  • API key: your Tinfoil API key from tinfoil.sh
  • Model name: we recommend kimi-k2-5 (Kimi K2.5) or gemma4-31b (Gemma 4 31B); see the chat model catalog for all options
Enter base URL: https://inference.tinfoil.sh/v1/
Enter API key (or press Enter to skip): <your-tinfoil-api-key>
Enter model name: kimi-k2-5

Configuration saved to ~/.hermes/config.yaml
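The saved file should reflect the three values you entered. The exact schema is defined by Hermes and may differ between versions, so treat the following as an illustrative sketch rather than the actual file format:

```yaml
# Illustrative sketch only — the real schema is Hermes's and may differ.
provider: custom
base_url: https://inference.tinfoil.sh/v1/
api_key: <your-tinfoil-api-key>
model: kimi-k2-5
```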

Step 3: Verify Configuration

$ hermes doctor
✓ Config file exists
✓ Dependencies installed
✓ API connection successful
✓ Model kimi-k2-5 is accessible
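To sanity-check the endpoint independently of Hermes, you can construct the same OpenAI-compatible chat-completions request yourself. A minimal sketch using only the Python standard library — the API key is a placeholder, and the request body follows the standard OpenAI `/v1/chat/completions` schema:

```python
import json
import urllib.request

BASE_URL = "https://inference.tinfoil.sh/v1/"
API_KEY = "your-tinfoil-api-key"  # placeholder: substitute your real key

# Standard OpenAI-compatible chat completion request body.
payload = {
    "model": "kimi-k2-5",
    "messages": [{"role": "user", "content": "Say hello in one word."}],
}

req = urllib.request.Request(
    BASE_URL + "chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)

# Uncomment to actually send the request with a real key:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["choices"][0]["message"]["content"])
print(req.full_url)
```

If the request succeeds with your real key, Hermes should work with the same base URL, key, and model name.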

Resources