GateCtr + LangChain
Route LangChain LLM calls through GateCtr for cost control
1. Install
No additional packages required. Use your existing LangChain installation.
2. Configure
Before
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o", api_key="sk-...")
After (with GateCtr)
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
model="gpt-4o",
api_key="sk-...",
openai_api_base="https://api.gatectr.com/v1"
)

3. Test
Make a test call and check the GateCtr dashboard for token savings and cost data.
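Because GateCtr exposes an OpenAI-compatible endpoint (that is what pointing `openai_api_base` at it implies), the test call LangChain makes on the wire is a standard chat-completions request. The sketch below builds such a request with the standard library only; the `build_chat_request` helper is hypothetical, not part of GateCtr or LangChain, and the payload shape is the usual OpenAI one.

```python
import json
import urllib.request

GATECTR_BASE = "https://api.gatectr.com/v1"  # GateCtr's OpenAI-compatible endpoint

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat completion request
    routed through the GateCtr base URL."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{GATECTR_BASE}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("sk-...", "gpt-4o", "Say hello")
print(req.full_url)  # https://api.gatectr.com/v1/chat/completions
```

This is only to show where the traffic goes; in practice you make the test call through your configured `ChatOpenAI` instance.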
What GateCtr does under the hood for LangChain
When you route LangChain calls through GateCtr, every request is automatically compressed (up to 40% fewer tokens), scored for complexity (to select the optimal model), and checked against your budget cap before reaching the LLM provider. You get full observability (tokens, cost, latency) in the GateCtr dashboard.
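The compress/score/gate pipeline above can be illustrated with a toy sketch. Everything here is hypothetical: the function names, the whitespace-collapsing "compression", the length-based complexity score, and the model names are stand-ins for GateCtr's internal logic, not its actual behavior.

```python
# Illustrative only: GateCtr's real compression, scoring, and budget logic
# is internal; all names and thresholds below are made up for this sketch.

def compress(prompt: str) -> str:
    # Stand-in for prompt compression: collapse runs of whitespace.
    return " ".join(prompt.split())

def complexity_score(prompt: str) -> float:
    # Toy proxy for complexity: longer prompts score higher (0.0 to 1.0).
    return min(len(prompt) / 2000, 1.0)

def route(prompt: str, spent_usd: float, budget_usd: float) -> str:
    """Return the model a request would be routed to, or raise if over budget."""
    if spent_usd >= budget_usd:
        # Budget cap is enforced BEFORE the request reaches the provider.
        raise RuntimeError("budget cap reached; request blocked")
    prompt = compress(prompt)
    # Simple prompts go to a cheaper model; complex ones to the flagship.
    return "gpt-4o" if complexity_score(prompt) > 0.5 else "gpt-4o-mini"

print(route("What is 2 + 2?", spent_usd=1.0, budget_usd=5.0))  # gpt-4o-mini
```

The point of the sketch is the ordering: the budget check happens first, so a blocked request never incurs provider cost.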