GateCtr + LiteLLM
Proxy LiteLLM calls through GateCtr for unified cost tracking
1. Install
No additional packages required. Use your existing LiteLLM installation.
2. Configure
Before:

```python
import litellm

response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
)
```

After GateCtr:

```python
import litellm

response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    api_base="https://api.gatectr.com/v1",
)
```

3. Test
Make a test call and check the GateCtr dashboard for token savings and cost data.
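If LiteLLM is called from many places in your codebase, a small wrapper keeps the GateCtr routing in one spot. This is a convenience sketch, not part of GateCtr or LiteLLM; `via_gatectr` is a name invented here, and the only assumption is that the wrapped completion function accepts an `api_base` keyword, as shown in the Configure step.

```python
from functools import partial

GATECTR_BASE = "https://api.gatectr.com/v1"

def via_gatectr(completion_fn, **defaults):
    """Wrap a LiteLLM-style completion function so every call
    is routed through the GateCtr proxy by default."""
    return partial(completion_fn, api_base=GATECTR_BASE, **defaults)

# Usage (with litellm installed):
#   completion = via_gatectr(litellm.completion, model="gpt-4o")
#   response = completion(messages=[{"role": "user", "content": "Hello"}])
```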
What GateCtr does under the hood for LiteLLM
When you route LiteLLM calls through GateCtr, every request is automatically compressed (up to 40% fewer tokens), scored for complexity (to select the optimal model), and checked against your budget cap before it reaches the LLM provider. You get full observability (tokens, cost, latency) in the GateCtr dashboard.
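The compress, score, and budget-check stages described above can be pictured with a toy sketch. Everything below is illustrative only: GateCtr's actual compression, scoring, and budgeting logic are internal to the service, and every function name here is invented for the example.

```python
# Toy pipeline: compress -> score complexity -> pick model -> check budget.
# All logic below is invented for illustration, not GateCtr's real behavior.

def compress(prompt: str) -> str:
    # Stand-in for prompt compression: collapse runs of whitespace.
    return " ".join(prompt.split())

def complexity_score(prompt: str) -> float:
    # Stand-in for complexity scoring: longer prompts score higher.
    return min(len(prompt) / 1000, 1.0)

def pick_model(score: float) -> str:
    # Route simple requests to a cheaper model, complex ones to a stronger one.
    return "gpt-4o" if score > 0.5 else "gpt-4o-mini"

def within_budget(spent_usd: float, cap_usd: float, est_cost_usd: float) -> bool:
    # Reject a request before it reaches the provider if it would exceed the cap.
    return spent_usd + est_cost_usd <= cap_usd

prompt = compress("What    is   2 + 2?")
model = pick_model(complexity_score(prompt))
allowed = within_budget(spent_usd=9.95, cap_usd=10.00, est_cost_usd=0.02)
```

In the real proxy these checks happen server-side per request, which is why no client-side code changes beyond `api_base` are needed.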