Introduction
Aikeedo supports any API provider that implements the OpenAI API specification. This means you can integrate any service that follows OpenAI’s API format, whether it’s a cloud service, a self-hosted model, or a custom implementation. If a service is compatible with OpenAI’s API specification, it will work with Aikeedo; there are no restrictions on which providers you can use.
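In practice, “implements the OpenAI API specification” means the provider accepts the same endpoint paths and JSON request shape as OpenAI’s chat completions API. A minimal sketch of the request a compatible client sends (the base URL and model name below are placeholders, not real endpoints):

```python
import json

def build_chat_request(base_url: str, model: str, prompt: str) -> tuple[str, dict]:
    """Build the URL and JSON body of an OpenAI-style chat completion request."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return url, body

url, body = build_chat_request("https://api.example.com/v1", "my-model", "Hello!")
print(url)               # https://api.example.com/v1/chat/completions
print(json.dumps(body))
```

Any provider that answers this request shape at that path can be plugged into Aikeedo.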
Common Providers
While you can use any OpenAI-compatible provider, here are some popular options:
- Together AI
- OpenRouter
- Groq
- DeepSeek
- Google AI (Gemini)
- Perplexity AI
- Hugging Face
- Nebius
- DeepInfra
- AI/ML API
- Azure OpenAI Service
- v0 by Vercel
- Self-hosted models (via compatible servers)
- Custom LLM implementations
Adding a New Provider
- Go to Settings → Integrations
- Scroll to “Custom LLM Servers” section
- Click “New server”
Configuration Fields
API Server
- Name: A descriptive name for the provider
- Server address: The API endpoint URL
- API Key/Authorization token: Your authentication token for the service
The server address supports dynamic variables for flexible configuration.
Dynamic Variables
You can use the following variable in your server address:
- {model}: dynamically replaced with the current model name
Dynamic variables are particularly useful for services like Hugging Face where the model name is part of the API endpoint.
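The substitution itself behaves like a plain string replacement; a sketch (the Hugging Face-style address shown is illustrative):

```python
def resolve_server_address(address: str, model: str) -> str:
    """Replace the {model} placeholder with the active model name."""
    return address.replace("{model}", model)

# Illustrative Hugging Face-style address where the model is part of the path:
addr = "https://api-inference.huggingface.co/models/{model}/v1"
print(resolve_server_address(addr, "mistralai/Mistral-7B-Instruct-v0.3"))
```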
Headers
Aikeedo automatically adds these default headers:
- Content-Type: application/json
- Authorization: Bearer YOUR_API_KEY (when the API key field is filled)
You can override these default headers or add custom headers as needed. This is useful when a provider requires specific header configurations.
Adding Custom Headers
- Click “Add header”
- Enter the header key (e.g., HTTP-Referer)
- Enter the header value
- Repeat for additional headers
Examples
Override the default Authorization header: if you need to override the default Content-Type or Authorization headers, simply add them with your desired values in the custom headers section.
Models Configuration
For each model you want to use:
- Key: Unique identifier for the model (e.g., gpt-3.5-turbo)
- Name: Display name shown to users
- Provider: The model provider’s name
- Vision: Toggle if the model supports image analysis
- Tools: Toggle if the model supports function calling
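Conceptually, each model entry is a small record like the following (the field names mirror the list above; the dict shape is a sketch, not Aikeedo’s storage format):

```python
model_entry = {
    "key": "gpt-3.5-turbo",   # unique identifier sent to the provider
    "name": "GPT-3.5 Turbo",  # display name shown to users
    "provider": "OpenAI",     # the model provider's name
    "vision": False,          # supports image analysis?
    "tools": True,            # supports function calling?
}

# A quick sanity check that every required field is present:
required = {"key", "name", "provider", "vision", "tools"}
assert required <= model_entry.keys()
```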
Provider-Specific Setup
Together AI
Server address: https://api.together.xyz/v1
OpenRouter
Server address: https://openrouter.ai/api/v1
Google AI (Gemini)
Server address: https://generativelanguage.googleapis.com/v1beta/openai
Hugging Face
Server address: https://api-inference.huggingface.co/models/{model}/v1
The {model} variable in the server address will be automatically replaced with your chosen model name.
Groq
Server address: https://api.groq.com/openai/v1
Groq is known for its extremely fast inference speeds and competitive pricing.
DeepSeek
Server address: https://api.deepseek.com/v1
Authorization: Bearer YOUR_API_KEY
DeepSeek offers both general-purpose and code-specialized models with competitive pricing.
Perplexity
Server address: https://api.perplexity.ai
Perplexity offers high-quality models with strong reasoning capabilities and up-to-date knowledge.
Nebius
Server address: https://api.studio.nebius.ai/v1
DeepInfra
Server address: https://api.deepinfra.com/v1/openai
DeepInfra provides access to a variety of open-source models with competitive pricing and low latency.
AI/ML API
Server address: https://api.aimlapi.com/v1
Azure OpenAI Service
Server address: https://:resource_name.openai.azure.com/openai/deployments/:deployment_name/chat/completions?api-version=:api_version
For Azure OpenAI Service:
- Replace :resource_name with your Azure OpenAI resource name
- Replace :deployment_name with {model} to use your configured model names as deployment names
- Replace :api_version with your desired API version (e.g., 2024-10-21)
- Different endpoints (such as completions or embeddings) will need their specific paths
Example configuration: when you configure a model named “gpt-4o-mini”, it will automatically be used as the deployment name in the URL.
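The address template above can be sketched in code; note that {model} is left in the string so the deployment name can be substituted at request time (the helper name is illustrative):

```python
def azure_server_address(resource_name: str, api_version: str) -> str:
    """Compose an Azure OpenAI chat-completions address, keeping {model}
    as the placeholder that is later filled with the configured model key."""
    return (
        f"https://{resource_name}.openai.azure.com"
        "/openai/deployments/{model}/chat/completions"
        f"?api-version={api_version}"
    )

addr = azure_server_address("my-resource", "2024-10-21")
# With a model named "gpt-4o-mini", the final URL becomes:
print(addr.replace("{model}", "gpt-4o-mini"))
```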
v0 by Vercel
Server address: https://api.v0.dev/v1
Model key: v0-1.0-md
Get your API key from v0.dev Settings. The v0 API is currently in beta and requires a Premium or Team plan with usage-based billing enabled.
Tools Compatibility
When configuring models for your custom LLM server, you can enable them for:
- Chat: Interactive conversations and assistance
- Writer: Content generation and writing tasks
- Coder: Programming help and code generation
- Title Generation: Automatic title creation for content
Make sure to enable only the capabilities that your chosen model actually supports. Enabling unsupported features may result in unexpected behavior.
Best Practices
- Testing:
  - Test each model after configuration
  - Verify response formats
  - Check token limits and pricing
- Security:
  - Keep API keys secure
  - Use HTTPS for external providers
  - Regularly rotate API keys
- Monitoring:
  - Track API usage
  - Monitor response times
  - Check error rates
Troubleshooting
Common issues and solutions:
- Authentication Errors:
  - Verify API key format
  - Check header configuration
  - Confirm the server address is correct
- Model Issues:
  - Ensure model names match the provider’s specifications
  - Verify model availability in your subscription
  - Check the provider’s status page
- Connection Problems:
  - Verify network connectivity
  - Check for firewall restrictions
  - Confirm the server address format
Always refer to your provider’s documentation for the most up-to-date configuration details and troubleshooting guides.
Rate Limits and Quotas
- Monitor your provider’s rate limits
- Check quota usage regularly
- Set up alerts for quota thresholds
- Consider implementing retry logic for rate limit errors
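A minimal retry sketch with exponential backoff and jitter (RateLimitError stands in for whatever HTTP 429 error your client raises; names are illustrative):

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for an HTTP 429 error from the provider."""

def with_retries(call, max_attempts: int = 5, base_delay: float = 1.0):
    """Run `call`, retrying on rate-limit errors with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # 1s, 2s, 4s, ... plus a little jitter to avoid thundering herds
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

For example, a request function can be wrapped as `with_retries(lambda: send_request(payload))`.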
Need Help?
For additional support:
- Check the provider’s documentation
- Contact support@aikeedo.com
Keep your integrations updated as providers may change their API specifications or requirements.