LangChain integration

Tinfoil’s SDKs expose verified, attested HTTP clients that can be injected into LangChain, letting you build chains, agents, and RAG pipelines with Tinfoil as the inference backend. The integration works by passing Tinfoil’s secure transport layer into LangChain’s OpenAI provider; all LangChain features work as before.
Supported languages: Python, JavaScript/TypeScript, and Go.

Installation

pip install tinfoil langchain-openai

Quick start

Create a Tinfoil-verified LangChain client in a few lines:
import os
from tinfoil import SecureClient
from langchain_openai import ChatOpenAI

# Create a SecureClient for Tinfoil's inference router
sc = SecureClient(
    enclave="inference.tinfoil.sh",
    repo="tinfoilsh/confidential-model-router",
)

# Inject the TLS-pinned httpx clients into LangChain
llm = ChatOpenAI(
    model="<MODEL_NAME>",
    api_key=os.getenv("TINFOIL_API_KEY"),
    base_url="https://inference.tinfoil.sh/v1/",
    http_client=sc.make_secure_http_client(),
    http_async_client=sc.make_secure_async_http_client(),
)

response = llm.invoke("What is confidential computing?")
print(response.content)

Streaming

import os
from tinfoil import SecureClient
from langchain_openai import ChatOpenAI

sc = SecureClient(
    enclave="inference.tinfoil.sh",
    repo="tinfoilsh/confidential-model-router",
)

llm = ChatOpenAI(
    model="<MODEL_NAME>",
    api_key=os.getenv("TINFOIL_API_KEY"),
    base_url="https://inference.tinfoil.sh/v1/",
    http_client=sc.make_secure_http_client(),
    http_async_client=sc.make_secure_async_http_client(),
    streaming=True,
)

for chunk in llm.stream("Explain hardware attestation step by step."):
    print(chunk.content, end="", flush=True)

How it works

Each Tinfoil SDK verifies the remote enclave’s hardware attestation and pins TLS certificates before any data is sent. The integration injects this verified transport into LangChain’s OpenAI provider:
| Language   | Tinfoil transport                     | Injected via                               |
|------------|---------------------------------------|--------------------------------------------|
| Python     | httpx.Client with pinned SSL context  | ChatOpenAI(http_client=...)                |
| JavaScript | Fetch function with EHBP encryption   | ChatOpenAI({ configuration: { fetch } })   |
| Go         | *http.Client with pinned TLS          | openai.WithHTTPClient(...)                 |
Once injected, every HTTP request LangChain makes — chat completions, embeddings, tool calls — goes through the verified connection. No application code changes are needed beyond the initial setup.

Python SDK

Full Python SDK reference and examples.

JavaScript SDK

Full JavaScript SDK reference and examples.

Go SDK

Full Go SDK reference and examples.

Tool calling

Use function calling with LangChain agents and Tinfoil.