## Overview
CrewAI supports two paths for connecting to LLM providers:

- Native integrations — direct SDK connections to OpenAI, Anthropic, Google Gemini, Azure OpenAI, and AWS Bedrock
- LiteLLM fallback — a translation layer that supports 100+ additional providers
## Why Remove LiteLLM?
- Reduced dependency surface — fewer packages mean fewer potential supply-chain risks
- Better performance — native SDKs communicate directly with provider APIs, eliminating a translation layer
- Simpler debugging — one less abstraction layer between your code and the provider
- Smaller install footprint — LiteLLM brings in many transitive dependencies
## Native Providers (No LiteLLM Required)
These providers use their own SDKs and work without LiteLLM installed:

- **OpenAI** — GPT-4o, GPT-4o-mini, o1, o3-mini, and more
- **Anthropic** — Claude Sonnet, Claude Haiku, and more
- **Google Gemini** — Gemini 2.0 Flash, Gemini 2.0 Pro, and more
- **Azure OpenAI** — Azure-hosted OpenAI models
- **AWS Bedrock** — Claude, Llama, Titan, and more via AWS
If you only use native providers, you never need to install `crewai[litellm]`. The base `crewai` package plus your chosen provider extra is all you need.

## How to Check If You’re Using LiteLLM
### Check your model strings
If your code uses model prefixes like these, you’re routing through LiteLLM:

| Prefix | Provider | Uses LiteLLM? |
|---|---|---|
| `ollama/` | Ollama | ✅ Yes |
| `groq/` | Groq | ✅ Yes |
| `together_ai/` | Together AI | ✅ Yes |
| `mistral/` | Mistral | ✅ Yes |
| `cohere/` | Cohere | ✅ Yes |
| `huggingface/` | Hugging Face | ✅ Yes |
| `openai/` | OpenAI | ❌ Native |
| `anthropic/` | Anthropic | ❌ Native |
| `gemini/` | Google Gemini | ❌ Native |
| `azure/` | Azure OpenAI | ❌ Native |
| `bedrock/` | AWS Bedrock | ❌ Native |
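As a quick check, the table above can be turned into a small helper — a sketch using only the standard library; the prefix list mirrors the table and nothing here is CrewAI API:

```python
# LiteLLM-routed prefixes, taken from the table above.
LITELLM_PREFIXES = ("ollama/", "groq/", "together_ai/", "mistral/", "cohere/", "huggingface/")

def uses_litellm(model: str) -> bool:
    """Return True if a model string would route through LiteLLM."""
    return model.startswith(LITELLM_PREFIXES)

print(uses_litellm("groq/llama-3.1-8b-instant"))  # True
print(uses_litellm("openai/gpt-4o-mini"))         # False
```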
### Check if LiteLLM is installed
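One way to check from Python — a standard-library sketch, no CrewAI APIs involved:

```python
import importlib.util

# find_spec returns None when the package cannot be found in this environment
if importlib.util.find_spec("litellm") is None:
    print("LiteLLM is not installed")
else:
    print("LiteLLM is installed")
```

Equivalently, `pip show litellm` from the shell reports whether the package is present.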
### Check your dependencies

Look at your `pyproject.toml` for `crewai[litellm]`:
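For example, a dependency entry like the following is what pulls LiteLLM in (project name and version bound are illustrative):

```toml
[project]
name = "my-crew"  # illustrative
dependencies = [
    "crewai[litellm]>=0.80.0",  # this extra installs LiteLLM
]
```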
## Migration Guide
### Step 1: Identify your current provider
Find all `LLM()` calls and model strings in your code:
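A quick way to do this from the shell — the demo file and paths are illustrative; point the `grep` commands at your own source tree:

```shell
# Demo setup: create a sample file so the searches have something to match (illustrative)
mkdir -p /tmp/crew_demo/src
cat > /tmp/crew_demo/src/crew.py <<'EOF'
from crewai import LLM
llm = LLM(model="groq/llama-3.1-8b-instant")
EOF

# Find LLM() constructions
grep -rn "LLM(" /tmp/crew_demo/src
# Find LiteLLM-routed model prefixes
grep -rnE '(ollama|groq|together_ai|mistral|cohere|huggingface)/' /tmp/crew_demo/src
```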
### Step 2: Switch to a native provider
- Switch to OpenAI
- Switch to Anthropic
- Switch to Gemini
- Switch to Azure OpenAI
- Switch to AWS Bedrock
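In code, the switch is just the model string passed to `LLM()`. A sketch — guarded so it runs even where `crewai` isn’t installed; the model names and the provider extra name are illustrative:

```python
# Step 2 sketch: replace a LiteLLM-routed model string with a native one.
try:
    from crewai import LLM  # needs the relevant provider extra installed (name assumed)
    llm = LLM(model="openai/gpt-4o-mini")  # was e.g. "groq/llama-3.1-8b-instant"
    status = "native LLM constructed"
except Exception:  # crewai (or its provider extra) not available here
    status = "crewai not available"
print(status)
```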
### Step 3: Keep Ollama without LiteLLM
If you’re using Ollama and want to keep using it, you can connect via Ollama’s OpenAI-compatible API.

### Step 4: Update your YAML configs
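For Step 4, updating a YAML config means changing the `llm:` value to a native model string. A sketch — the agent fields and model ids are illustrative; only the `llm:` line matters here:

```yaml
# agents.yaml (illustrative)
researcher:
  role: Research Analyst
  goal: Summarize recent findings
  backstory: A careful, methodical analyst.
  # was: llm: groq/llama-3.1-8b-instant  (routed through LiteLLM)
  llm: openai/gpt-4o-mini                # native OpenAI route
```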
### Step 5: Remove LiteLLM
Once you’ve migrated all your model references, remove the `crewai[litellm]` extra from your dependencies.

### Step 6: Verify
Run your project and confirm everything works.

## Quick Reference: Model String Mapping
Common migration paths map a LiteLLM-routed prefix such as `groq/`, `together_ai/`, or `mistral/` onto one of the five native prefixes (`openai/`, `anthropic/`, `gemini/`, `azure/`, `bedrock/`).

## FAQ
### Do I lose any functionality by removing LiteLLM?
No, if you use one of the five natively supported providers (OpenAI, Anthropic, Gemini, Azure, Bedrock). These native integrations support all CrewAI features including streaming, tool calling, structured output, and more. You only lose access to providers that are exclusively available through LiteLLM (like Groq, Together AI, Mistral as first-class providers).
### Can I use multiple native providers at the same time?
Yes. Install multiple extras and use different providers for different agents:
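A sketch of mixing providers — guarded so it runs even where `crewai` isn’t installed; the model ids and the extra names are illustrative:

```python
# FAQ sketch: two native providers side by side, one per agent's LLM.
try:
    from crewai import LLM  # install with e.g. crewai[openai,anthropic] (extra names assumed)
    fast_llm = LLM(model="openai/gpt-4o-mini")           # illustrative model id
    deep_llm = LLM(model="anthropic/claude-sonnet-4-0")  # illustrative model id
    ok = True
except Exception:  # crewai or a provider extra not available here
    ok = False
print("constructed two native LLMs:", ok)
```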
### Is LiteLLM safe to use now?
Regardless of quarantine status, reducing your dependency surface is good security practice. If you only need providers that CrewAI supports natively, there’s no reason to keep LiteLLM installed.
### What about environment variables like OPENAI_API_KEY?
Native providers use the same environment variables you’re already familiar with. No changes needed for `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`, `GEMINI_API_KEY`, etc.

## Related Resources
- LLM Connections — Full guide to connecting CrewAI with any LLM
- LLM Concepts — Understanding LLMs in CrewAI
- LLM Selection Guide — Choosing the right model for your use case
