All models listed below are accessible through any of our SDKs. See the SDK pages for language-specific usage examples.
Available Models
Below is a list of all models currently supported on Tinfoil, including their model IDs and types.

Available models and capabilities are subject to change. If you require SLA guarantees, specific model availability, or long-term production usage, please contact us to discuss your needs. We're also happy to work with you to add support for your desired model.
Chat Models
Description: Chat models support conversational AI capabilities through the standard chat completions API. All chat models follow the OpenAI chat completion format.
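Because every chat model follows the OpenAI chat completion format, a request body can be sketched as below. The helper function and the model choice are illustrative, not part of any SDK:

```python
# Hedged sketch: building a request body in the OpenAI chat completion
# format. Any model ID from the cards below can be substituted.

def build_chat_request(model_id: str, user_message: str) -> dict:
    """Assemble an OpenAI-format chat completion request body."""
    return {
        "model": model_id,  # e.g. "deepseek-r1-0528" or "llama3-3-70b"
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }

request = build_chat_request("llama3-3-70b", "Summarize this paragraph.")
```

In the SDKs this body is passed to the chat completions method; see the SDK pages for language-specific clients.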
DeepSeek R1
deepseek-r1-0528
Kimi K2 Thinking
kimi-k2-thinking
🤖 Thinking Agent: End-to-end trained for interleaved reasoning and function calling. Maintains stable performance across extended tool orchestration sequences.

Kimi K2.5
kimi-k2-5
🎨 Vision + Language: Jointly trained on images, video, and text. Handles visual reasoning tasks and can spawn coordinated sub-agents for complex problems.

GPT-OSS 120B
gpt-oss-120b
Strengths: Powerful reasoning, configurable reasoning effort levels, full chain-of-thought access, native agentic abilities including function calling, web browsing, and Python code execution
Structured Outputs: Structured response formatting support
Best for: Production use cases requiring high reasoning capabilities, agentic operations, and specialized applications

GPT-OSS Safeguard 120B
gpt-oss-safeguard-120b
Safety Model: Classifies text content based on custom safety policies you provide.

Llama 3.3 70B
llama3-3-70b
Strengths: Multilingual understanding, dialogue optimization, strong reasoning
Structured Outputs: Structured response formatting support
Best for: Conversational AI applications and complex dialogue systems
Structured Outputs: All chat models support structured outputs for reliable data extraction and API integration. Full JSON schema validation available in Python, Node, and Go SDKs. See the Structured Outputs Guide for implementation examples.
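Assuming the OpenAI `response_format` JSON-schema convention (which the SDKs wrap for you), a structured-output request can be sketched as follows; the schema and field names are illustrative:

```python
# Hedged sketch: constraining a chat model's reply to a JSON schema
# using the OpenAI-style `response_format` field. The "person" schema
# here is a made-up example.

schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
    },
    "required": ["name", "age"],
    "additionalProperties": False,
}

request = {
    "model": "gpt-oss-120b",
    "messages": [{"role": "user", "content": "Extract: Ada Lovelace, 36."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "person", "strict": True, "schema": schema},
    },
}
```

The model's reply is then guaranteed to parse as JSON matching the schema, which is what makes structured outputs reliable for data extraction.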
Vision Models
Description: Vision models understand images and video for visual tasks including image analysis, video understanding, OCR, and screenshot-to-code generation.
Qwen3-VL 30B
qwen3-vl-30b
📸 Multimodal: Processes both images and video. Supports long videos and documents with up to 256K context. See Image Processing Guide for usage examples.
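A multimodal request in the OpenAI chat format pairs an image with a text prompt in the message content; a hedged sketch (the image URL is a placeholder):

```python
# Hedged sketch: an OpenAI-format chat message mixing text and an
# image. The URL below is a placeholder, not a real asset.

request = {
    "model": "qwen3-vl-30b",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this screenshot."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/screenshot.png"},
                },
            ],
        }
    ],
}
```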
Audio Models
Description: Audio models provide speech-to-text transcription and text-to-speech synthesis capabilities, supporting both audio file transcription and high-quality speech generation.
Whisper Large V3 Turbo
whisper-large-v3-turbo
Strengths: Fast processing, high accuracy, multiple language support
Best for: Audio transcription, voice-to-text applications
Audio Format: Supports .mp3 and .wav files
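Transcription requests follow the OpenAI-style audio endpoint, which the SDKs send as multipart form data; a hedged sketch of the fields involved (the file path is a placeholder):

```python
# Hedged sketch: the form fields of an OpenAI-style audio
# transcription request. The SDKs handle the multipart upload.

def build_transcription_fields(audio_path: str) -> dict:
    """Assemble the fields for a transcription request."""
    return {
        "model": "whisper-large-v3-turbo",
        "file": audio_path,  # .mp3 or .wav, per the supported formats above
    }

fields = build_transcription_fields("meeting.mp3")
```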

Voxtral Small 24B
voxtral-small-24b
Audio + Text: Built on Mistral Small 3.1 foundation, combining speech processing with strong text capabilities including function calling from voice commands.
Embedding Models
Description: Embedding models convert text into high-dimensional vectors for semantic search, similarity comparisons, and other vector-based operations.
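An embeddings request in the OpenAI format, plus a small cosine-similarity helper for comparing the returned vectors, can be sketched as below; the request shape is illustrative:

```python
import math

# Hedged sketch: an OpenAI-style embeddings request body, and the
# cosine-similarity computation typically used to compare the vectors
# it returns.

request = {
    "model": "nomic-embed-text",
    "input": ["semantic search query", "a candidate document"],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

Identical vectors score 1.0 and orthogonal vectors score 0.0, which is what makes cosine similarity a natural ranking function for semantic search.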
Nomic Embed Text v1.5
nomic-embed-text

Document Processing Models
Description: Document processing models handle file conversion, text extraction, and document parsing operations.
Docling Document Processing
docling
Strengths: PDF processing, Word document parsing, text extraction, format conversion with high accuracy
Best for: Document upload, processing, conversion, and text extraction workflows
📄 File Support: Supports PDF, Word documents, and other common document formats. See Document Processing Guide for usage examples.
Using Models
To use any of these models, you'll need:
- API Key: Get your key from the Tinfoil dashboard
- SDK: Install the SDK for your preferred language
- Model ID: Use the model ID from the cards above in your API requests
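Putting the three pieces together, a minimal request can be sketched with only the standard library. The base URL below is a placeholder (check your dashboard or the SDK pages for the real endpoint), and in practice you would use the official SDK for your language:

```python
import json
import os
import urllib.request

# Hedged sketch: assembling an authenticated chat completion request
# by hand. BASE_URL is a placeholder, not a documented endpoint.
BASE_URL = "https://api.example.com/v1"  # placeholder — see your dashboard

def make_request(model_id: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) a POST request to the chat endpoint."""
    body = json.dumps({
        "model": model_id,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {os.environ.get('TINFOIL_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = make_request("llama3-3-70b", "Hello")
```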