INFERENCE_API_KEY, and the gateway forwards requests to the downstream provider using the provider key you supply in x-inference-provider-api-key.
The important detail is that LangChain’s wrapper surfaces are not identical to the direct OpenAI and Anthropic SDK constructors. This page documents the wrapper-specific patterns that we validated for Python and TypeScript.
Supported wrappers in this guide
| Language | Supported now | Notes |
|---|---|---|
| Python | ChatOpenAI, ChatAnthropic, init_chat_model(..., provider="openai" / "anthropic") | Validated against the local gateway and observability stack |
| TypeScript | ChatOpenAI, ChatAnthropic | Validated against the local gateway and observability stack |
Deferred and unsupported wrappers
These cases are not covered in this pass:

- `ChatGoogleGenerativeAI`
- native Google SDKs such as `genai.Client()`
- `ChatNVIDIA`
- config-only model strings in TOML or YAML when there is no concrete wrapper instantiation site to rewrite
The LangChain-specific differences
Compared with the direct SDK patterns, LangChain changes a few important things:

- **Anthropic wrapper auth is pragmatic, not pure.** The LangChain Anthropic wrappers still expect an Anthropic API key field, so the examples below keep `anthropic_api_key`/`apiKey` set to the real Anthropic key for wrapper compatibility. Gateway auth still goes through `Authorization: Bearer <INFERENCE_API_KEY>`, and downstream provider auth still goes through `x-inference-provider-api-key`.
- **OpenAI-compatible routing still uses the OpenAI wrappers.** For custom OpenAI-compatible providers, keep the LangChain OpenAI wrapper pointed at the Inference.net gateway and move the original provider URL into `x-inference-provider-url`.
- **Task IDs are still supported.** LangChain can pass request-level task IDs through the validated wrappers, but the exact call shape differs by wrapper. The precise task-ID call patterns are documented below.
Python
Python ChatOpenAI
Use ChatOpenAI for direct OpenAI routing by pointing the wrapper at the gateway, authenticating the gateway with INFERENCE_API_KEY, and forwarding the downstream OpenAI key in x-inference-provider-api-key.
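A minimal sketch of that setup. The gateway base URL shown here is an assumption for illustration; substitute your actual gateway URL:

```python
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",
    # Hypothetical gateway URL; use your deployment's base URL.
    base_url="https://gateway.inference.net/v1",
    # Gateway auth: sent as "Authorization: Bearer <INFERENCE_API_KEY>".
    api_key=os.environ["INFERENCE_API_KEY"],
    # Downstream provider auth: forwarded to OpenAI by the gateway.
    default_headers={
        "x-inference-provider-api-key": os.environ["OPENAI_API_KEY"],
    },
)

response = llm.invoke("Hello!")
```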
Python ChatAnthropic
Use the pragmatic LangChain pattern for Anthropic wrappers:
- keep `anthropic_api_key` set to the real Anthropic key
- point `anthropic_api_url` at the Inference.net gateway
- send gateway auth through `Authorization`
- send downstream provider auth through `x-inference-provider-api-key`
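A sketch of that pattern, assuming a hypothetical gateway URL and an illustrative model name:

```python
import os

from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-3-5-sonnet-latest",  # illustrative model name
    # Wrapper compatibility: the wrapper still requires an Anthropic key field.
    anthropic_api_key=os.environ["ANTHROPIC_API_KEY"],
    # Hypothetical gateway URL; use your deployment's base URL.
    anthropic_api_url="https://gateway.inference.net",
    default_headers={
        # Gateway auth.
        "Authorization": f"Bearer {os.environ['INFERENCE_API_KEY']}",
        # Downstream provider auth.
        "x-inference-provider-api-key": os.environ["ANTHROPIC_API_KEY"],
    },
)
```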
Python init_chat_model
For the validated OpenAI and Anthropic paths, init_chat_model passes the provider-specific kwargs through to the underlying LangChain wrapper.
OpenAI via init_chat_model
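A sketch of the OpenAI path. Note that current LangChain releases spell the kwarg `model_provider`; the gateway URL is an assumption for illustration:

```python
import os

from langchain.chat_models import init_chat_model

llm = init_chat_model(
    "gpt-4o-mini",
    model_provider="openai",
    # Kwargs below pass through to ChatOpenAI.
    base_url="https://gateway.inference.net/v1",  # hypothetical gateway URL
    api_key=os.environ["INFERENCE_API_KEY"],  # gateway auth
    default_headers={
        "x-inference-provider-api-key": os.environ["OPENAI_API_KEY"],
    },
)
```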
Anthropic via init_chat_model
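A sketch of the Anthropic path, using the same pragmatic wrapper-compatible auth as the direct `ChatAnthropic` pattern (gateway URL and model name are illustrative assumptions):

```python
import os

from langchain.chat_models import init_chat_model

llm = init_chat_model(
    "claude-3-5-sonnet-latest",  # illustrative model name
    model_provider="anthropic",
    # Kwargs below pass through to ChatAnthropic.
    anthropic_api_key=os.environ["ANTHROPIC_API_KEY"],  # wrapper compatibility
    anthropic_api_url="https://gateway.inference.net",  # hypothetical gateway URL
    default_headers={
        "Authorization": f"Bearer {os.environ['INFERENCE_API_KEY']}",  # gateway auth
        "x-inference-provider-api-key": os.environ["ANTHROPIC_API_KEY"],
    },
)
```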
TypeScript
TypeScript ChatOpenAI
For LangChain JS/TS, ChatOpenAI uses configuration.baseURL and configuration.defaultHeaders for gateway routing.
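A minimal sketch; the gateway URL is an assumption for illustration:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  // Gateway auth: sent as "Authorization: Bearer <INFERENCE_API_KEY>".
  apiKey: process.env.INFERENCE_API_KEY,
  configuration: {
    // Hypothetical gateway URL; use your deployment's base URL.
    baseURL: "https://gateway.inference.net/v1",
    defaultHeaders: {
      // Downstream provider auth, forwarded by the gateway.
      "x-inference-provider-api-key": process.env.OPENAI_API_KEY,
    },
  },
});
```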
TypeScript ChatAnthropic
For LangChain JS/TS, ChatAnthropic uses anthropicApiUrl and clientOptions.defaultHeaders.
Use the pragmatic wrapper-compatible pattern:
- keep `apiKey` set to the real Anthropic key
- point `anthropicApiUrl` at the Inference.net gateway
- send gateway auth through `Authorization`
- send downstream provider auth through `x-inference-provider-api-key`
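A sketch of that pattern (gateway URL and model name are illustrative assumptions):

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-5-sonnet-latest", // illustrative model name
  // Wrapper compatibility: the wrapper still requires an Anthropic key.
  apiKey: process.env.ANTHROPIC_API_KEY,
  // Hypothetical gateway URL; use your deployment's base URL.
  anthropicApiUrl: "https://gateway.inference.net",
  clientOptions: {
    defaultHeaders: {
      // Gateway auth.
      Authorization: `Bearer ${process.env.INFERENCE_API_KEY}`,
      // Downstream provider auth.
      "x-inference-provider-api-key": process.env.ANTHROPIC_API_KEY,
    },
  },
});
```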
Custom OpenAI-compatible providers
When you use a LangChain OpenAI wrapper against Gemini, Together AI, Groq, Fireworks, Mistral, OpenRouter, or another OpenAI-compatible provider, keep the wrapper pointed at the Inference.net gateway and move the original provider URL into `x-inference-provider-url`.
Python custom provider via ChatOpenAI
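A sketch using Groq as the downstream provider (the gateway URL and model name are illustrative assumptions):

```python
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="llama-3.1-70b-instruct",  # illustrative model name
    # The wrapper always points at the gateway, not the provider.
    base_url="https://gateway.inference.net/v1",  # hypothetical gateway URL
    api_key=os.environ["INFERENCE_API_KEY"],  # gateway auth
    default_headers={
        # Downstream provider auth.
        "x-inference-provider-api-key": os.environ["GROQ_API_KEY"],
        # The original provider base URL moves into this header.
        "x-inference-provider-url": "https://api.groq.com/openai/v1",
    },
)
```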
TypeScript custom provider via ChatOpenAI
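The same routing in TypeScript, again with Groq as an illustrative downstream provider and a hypothetical gateway URL:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "llama-3.1-70b-instruct", // illustrative model name
  apiKey: process.env.INFERENCE_API_KEY, // gateway auth
  configuration: {
    // The wrapper always points at the gateway, not the provider.
    baseURL: "https://gateway.inference.net/v1", // hypothetical gateway URL
    defaultHeaders: {
      // Downstream provider auth.
      "x-inference-provider-api-key": process.env.GROQ_API_KEY,
      // The original provider base URL moves into this header.
      "x-inference-provider-url": "https://api.groq.com/openai/v1",
    },
  },
});
```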
When `x-inference-provider-url` is present, you usually do not need `x-inference-provider`. The gateway can infer the provider protocol from the overridden URL.
Task IDs
The wrappers in this guide support request-level task IDs on a shared client. You do not need to create separate clients per task for the validated wrappers below. The important detail is that the call shape differs by wrapper.
Python task IDs
For the validated Python wrappers in this guide, pass task IDs through `extra_headers`.
Python ChatOpenAI and ChatAnthropic
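A sketch with a gateway-configured `ChatOpenAI`; the same `extra_headers` call shape applies to `ChatAnthropic`. Gateway URL and task ID are illustrative assumptions:

```python
import os

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-4o-mini",
    base_url="https://gateway.inference.net/v1",  # hypothetical gateway URL
    api_key=os.environ["INFERENCE_API_KEY"],  # gateway auth
    default_headers={
        "x-inference-provider-api-key": os.environ["OPENAI_API_KEY"],
    },
)

# Per-request task ID: extra_headers is forwarded to the underlying SDK call,
# so a single shared client can serve many tasks.
response = llm.invoke(
    "Summarize this document.",
    extra_headers={"x-inference-task-id": "task-123"},  # illustrative task ID
)
```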
The `extra_headers` pattern also works for the validated `init_chat_model(..., provider="openai")` and `init_chat_model(..., provider="anthropic")` paths.
TypeScript task IDs
For TypeScript, the exact call shape depends on the wrapper.
TypeScript ChatAnthropic
Use headers directly in the second argument to invoke().
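A sketch of that call shape (gateway URL, model name, and task ID are illustrative assumptions):

```typescript
import { ChatAnthropic } from "@langchain/anthropic";

const llm = new ChatAnthropic({
  model: "claude-3-5-sonnet-latest", // illustrative model name
  apiKey: process.env.ANTHROPIC_API_KEY, // wrapper compatibility
  anthropicApiUrl: "https://gateway.inference.net", // hypothetical gateway URL
  clientOptions: {
    defaultHeaders: {
      Authorization: `Bearer ${process.env.INFERENCE_API_KEY}`, // gateway auth
      "x-inference-provider-api-key": process.env.ANTHROPIC_API_KEY,
    },
  },
});

// Per-request task ID: `headers` sits directly in the call options.
const res = await llm.invoke("Summarize this document.", {
  headers: { "x-inference-task-id": "task-123" }, // illustrative task ID
});
```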
TypeScript ChatOpenAI
Use options.headers in the second argument to invoke().
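A sketch of the OpenAI-wrapper call shape; note the extra `options` nesting level compared with the Anthropic wrapper. Gateway URL and task ID are illustrative assumptions:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  apiKey: process.env.INFERENCE_API_KEY, // gateway auth
  configuration: {
    baseURL: "https://gateway.inference.net/v1", // hypothetical gateway URL
    defaultHeaders: {
      "x-inference-provider-api-key": process.env.OPENAI_API_KEY,
    },
  },
});

// Per-request task ID: headers nest under `options` for this wrapper.
const res = await llm.invoke("Summarize this document.", {
  options: { headers: { "x-inference-task-id": "task-123" } }, // illustrative
});
```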
Why this matters
LangChain does not normalize transport-level request options across wrappers. That is why the TypeScript OpenAI and Anthropic wrappers use different task-ID call shapes, even though both ultimately set the same `x-inference-task-id` header on the proxied request.
Framework-owned invocation loops
Some frameworks accept a LangChain model object and then own the actual `.invoke()` / `.ainvoke()` calls internally. In those cases you may not have a wrapper call site where you can attach per-request task IDs yourself.
Use a client-level fallback instead:
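One possible fallback, sketched here under the same assumptions as the earlier examples: bake the task ID into the client's default headers at construction time, so every request the framework issues through that client carries it.

```python
import os

from langchain_openai import ChatOpenAI

# Client-level fallback: the task ID rides on every request this client makes,
# so framework-owned .invoke()/.ainvoke() calls are tagged automatically.
llm = ChatOpenAI(
    model="gpt-4o-mini",
    base_url="https://gateway.inference.net/v1",  # hypothetical gateway URL
    api_key=os.environ["INFERENCE_API_KEY"],  # gateway auth
    default_headers={
        "x-inference-provider-api-key": os.environ["OPENAI_API_KEY"],
        "x-inference-task-id": "task-123",  # illustrative task ID
    },
)

# Hand `llm` to the framework; it owns the invocation loop from here.
```

The trade-off is granularity: all requests from this client share one task ID, so create one client per task if you need per-task attribution inside a framework-owned loop.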