OpenAI API Compatibility
Learn how to integrate any OpenAI-compatible API provider with your Aikeedo platform.
Introduction
Aikeedo supports any API provider that implements the OpenAI API specification. This means you can integrate any service that follows OpenAI’s API format, whether it’s a cloud service, self-hosted model, or custom implementation.
If a service is compatible with OpenAI’s API specification, it will work with Aikeedo - there are no restrictions on which providers you can use.
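To make the idea concrete, here is a minimal sketch (not part of Aikeedo itself) of what "OpenAI-compatible" means: the standard OpenAI client and request shape work unchanged, with only the base URL, API key, and model name swapped out. All values below are placeholders.

```python
# Minimal sketch: an OpenAI-compatible provider accepts the same request
# shape as OpenAI itself, so only the base URL, key, and model change.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # hypothetical provider endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="provider/some-model",  # model identifier as defined by the provider
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```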
Common Providers
While you can use any OpenAI-compatible provider, here are some popular options:
- Together AI
- OpenRouter
- Groq
- DeepSeek
- Google AI (Gemini)
- Perplexity AI
- Hugging Face
- Nebius
- DeepInfra
- AI/ML API
- Azure OpenAI Service
- v0 by Vercel
- Self-hosted models (via compatible servers)
- Custom LLM implementations
Adding a New Provider
- Go to Settings → Integrations
- Scroll to the “Custom LLM Servers” section
- Click “New server”
Configuration Fields
API Server
- Name: A descriptive name for the provider
- Server address: The API endpoint URL
- API Key/Authorization token: Your authentication token for the service
The server address supports dynamic variables for flexible configuration.
Dynamic Variables
You can use the following variable in your server address:
- {model}: dynamically replaced with the current model name
Example for Hugging Face: when using the model “mistralai/Mistral-7B-Instruct-v0.1”, the {model} placeholder in the server address is replaced with that model name, so each request goes to that model’s endpoint.
Dynamic variables are particularly useful for services like Hugging Face where the model name is part of the API endpoint.
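A minimal sketch of how this substitution works; the Hugging Face-style address below is illustrative, so check your provider’s documentation for the exact endpoint.

```python
# Sketch: the {model} placeholder in the server address is replaced with the
# model key configured in Aikeedo before the request is sent.
server_address = "https://api-inference.huggingface.co/models/{model}/v1/chat/completions"
model_key = "mistralai/Mistral-7B-Instruct-v0.1"

request_url = server_address.replace("{model}", model_key)
print(request_url)
# -> https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.1/v1/chat/completions
```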
Headers
Aikeedo automatically adds these default headers:
- Content-Type: application/json
- Authorization: Bearer YOUR_API_KEY (added when the API key field is filled)
You can override these default headers or add custom headers as needed. This is useful when a provider requires specific header configurations.
Adding Custom Headers
- Click “Add header”
- Enter the header key (e.g., HTTP-Referer)
- Enter the header value
- Repeat for additional headers
Examples
Typical uses include overriding the default Authorization header with your own value, or adding provider-specific headers (for example, an attribution header such as HTTP-Referer).
If you need to override the default Content-Type or Authorization headers, simply add them with your desired values in the custom headers section.
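As a concrete sketch of what these settings amount to on the wire, the example below merges the default headers with custom ones (custom values win on duplicate keys) and sends a chat completion request. The endpoint, key, model name, and the X-Title attribution header are illustrative placeholders in the style of OpenRouter, not values Aikeedo requires.

```python
# Sketch: default headers merged with custom headers; custom values override
# defaults that share the same key (e.g., Authorization).
import requests

default_headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY",
}
custom_headers = {
    "Authorization": "Bearer YOUR_PROVIDER_KEY",      # overrides the default
    "HTTP-Referer": "https://your-app.example",       # provider-specific header
    "X-Title": "Your App Name",                       # provider-specific header
}
headers = {**default_headers, **custom_headers}

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",  # illustrative endpoint
    headers=headers,
    json={
        "model": "openai/gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```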
Models Configuration
For each model you want to use:
- Key: Unique identifier for the model (e.g., gpt-3.5-turbo)
- Name: Display name shown to users
- Provider: The model provider’s name
- Vision: Toggle if the model supports image analysis
- Tools: Toggle if the model supports function calling
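As an illustration (the values are hypothetical, not defaults), an entry for an OpenRouter-hosted model might be: Key meta-llama/llama-3.1-70b-instruct, Name Llama 3.1 70B, Provider Meta, Vision off, Tools on. The key must match the exact model identifier your provider expects in requests.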
Provider-Specific Setup
Together AI
Get your API key from Together AI Settings
OpenRouter
Get your API key from OpenRouter Settings
Google AI (Gemini)
Get your API key from Google AI Studio
Hugging Face
Get your API key from Hugging Face Settings
The {model} variable in the server address will be automatically replaced with your chosen model name.
Groq
Get your API key from Groq Console
Groq is known for its extremely fast inference speeds and competitive pricing.
DeepSeek
Required headers:
- Authorization: Bearer YOUR_API_KEY
Get your API key from DeepSeek Platform
DeepSeek offers both general-purpose and code-specialized models with competitive pricing.
Perplexity
Get your API key from Perplexity Settings
Perplexity offers high-quality models with strong reasoning capabilities and up-to-date knowledge.
Nebius
Get your API key from Nebius Studio
DeepInfra
Get your API key from DeepInfra Dashboard
DeepInfra provides access to a variety of open-source models with competitive pricing and low latency.
AI/ML API
Get your API key from AI/ML API Keys
Azure OpenAI Service
Get your API key and other details from Azure Portal
For Azure OpenAI Service:
- Replace :resource_name with your Azure OpenAI resource name
- Replace :deployment_name with {model} to use your configured model names as deployment names
- Replace :api_version with your desired API version (e.g., ‘2024-10-21’)
- Different endpoints (like completions, embeddings) will need their specific paths
Example configuration:
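Assuming a resource named my-resource and API version 2024-10-21 (illustrative values; substitute your own), the chat completions server address would look like:
https://my-resource.openai.azure.com/openai/deployments/{model}/chat/completions?api-version=2024-10-21
Note that Azure OpenAI typically expects the API key in an api-key header rather than the default Authorization: Bearer header; if needed, set this in the custom headers section described above.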
When you configure a model named “gpt-4o-mini”, it will automatically use that as the deployment name in the URL.
v0 by Vercel
Model name: v0-1.0-md
Get your API key from v0.dev Settings
The v0 API is currently in beta and requires a Premium or Team plan with usage-based billing enabled.
Tools Compatibility
When configuring models for your custom LLM server, you can enable them for:
- Chat: Interactive conversations and assistance
- Writer: Content generation and writing tasks
- Coder: Programming help and code generation
- Title Generation: Automatic title creation for content
Make sure to enable only the capabilities that your chosen model actually supports. Enabling unsupported features may result in unexpected behavior.
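If you are unsure whether a model genuinely supports function calling before enabling the Tools toggle, one way to check is to send a minimal tools request directly to the provider. This sketch uses the standard OpenAI client; the base URL, model name, and get_weather tool are hypothetical.

```python
# Sketch: probe an OpenAI-compatible endpoint with a minimal function-calling request.
from openai import OpenAI

client = OpenAI(base_url="https://api.example-provider.com/v1", api_key="YOUR_API_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool used only for this probe
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="provider/some-model",
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=tools,
)
# A tools-capable model should return a tool call rather than (or alongside) plain text.
print(response.choices[0].message.tool_calls)
```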
Best Practices
- Testing:
  - Test each model after configuration
  - Verify response formats
  - Check token limits and pricing
- Security:
  - Keep API keys secure
  - Use HTTPS for external providers
  - Regularly rotate API keys
- Monitoring:
  - Track API usage
  - Monitor response times
  - Check for error rates
Troubleshooting
Common issues and solutions:
- Authentication Errors:
  - Verify API key format
  - Check header configuration
  - Confirm the server address is correct
- Model Issues:
  - Ensure model names match the provider’s specifications
  - Verify model availability in your subscription
  - Check the provider’s status page
- Connection Problems:
  - Verify network connectivity
  - Check for firewall restrictions
  - Confirm the server address format
Always refer to your provider’s documentation for the most up-to-date configuration details and troubleshooting guides.
Rate Limits and Quotas
- Monitor your provider’s rate limits
- Check quota usage regularly
- Set up alerts for quota thresholds
- Consider implementing retry logic for rate limit errors (see the sketch below)
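If you call providers directly (for example from a custom integration or while testing a self-hosted server), a minimal retry-with-backoff sketch for HTTP 429 responses might look like the following; the timings and helper name are illustrative.

```python
# Sketch: retry an OpenAI-compatible request with exponential backoff when
# the provider answers 429 (rate limited), honoring Retry-After if present.
import time
import requests

def post_with_retries(url, headers, payload, max_retries=5):
    delay = 1.0
    response = None
    for _ in range(max_retries):
        response = requests.post(url, headers=headers, json=payload, timeout=60)
        if response.status_code != 429:
            return response                      # success or a non-rate-limit error
        delay = float(response.headers.get("Retry-After", delay))
        time.sleep(delay)
        delay *= 2                               # back off before the next attempt
    return response                              # still rate limited after all retries
```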
Need Help?
For additional support:
- Check the provider’s documentation
- Contact support@aikeedo.com
Keep your integrations updated as providers may change their API specifications or requirements.