OpenAI API Compatibility
Learn how to integrate any OpenAI-compatible API provider with your Aikeedo platform.
Introduction
Aikeedo supports any API provider that implements the OpenAI API specification. This means you can integrate any service that follows OpenAI’s API format, whether it’s a cloud service, self-hosted model, or custom implementation.
If a service is compatible with OpenAI’s API specification, it will work with Aikeedo - there are no restrictions on which providers you can use.
Common Providers
While you can use any OpenAI-compatible provider, here are some popular options:
- Together AI
- OpenRouter
- Groq
- DeepSeek
- Google AI (Gemini)
- Hugging Face
- Self-hosted models (via compatible servers)
- Custom LLM implementations
Adding a New Provider
- Go to Settings → Integrations
- Scroll to the “Custom LLM Servers” section
- Click “New server”
Configuration Fields
API Server
- Name: A descriptive name for the provider
- Server address: The API endpoint URL
- API Key/Authorization token: Your authentication token for the service
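As an illustration, a filled-in configuration might look like the following. The address shown is Groq's documented OpenAI-compatible base URL and the key is a placeholder; always verify the current endpoint and key format in your provider's documentation.

```
Name:           Groq
Server address: https://api.groq.com/openai/v1
API Key:        YOUR_GROQ_API_KEY
```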
The server address supports dynamic variables for flexible configuration.
Dynamic Variables
You can use the following variable in your server address:
- {model}: Dynamically replaced with the current model name
Example for Hugging Face: when the model “mistralai/Mistral-7B-Instruct-v0.1” is selected, {model} in the server address is replaced with that name, so the request is sent to the corresponding model-specific endpoint.
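A minimal sketch of the substitution, assuming a Hugging Face-style server address where the model name is part of the path (confirm the exact path in Hugging Face's own documentation):

```
Server address:  https://api-inference.huggingface.co/models/{model}/v1/chat/completions
Actual request:  https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.1/v1/chat/completions
```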
Dynamic variables are particularly useful for services like Hugging Face where the model name is part of the API endpoint.
Headers
Aikeedo automatically adds these default headers:
- Content-Type: application/json
- Authorization: Bearer YOUR_API_KEY (when the API key field is filled)
You can override these default headers or add custom headers as needed. This is useful when a provider requires specific header configurations.
Adding Custom Headers
- Click “Add header”
- Enter the header key (e.g., HTTP-Referer)
- Enter the header value
- Repeat for additional headers
Examples
Override default Authorization header:
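For instance, if a provider expects a different authentication scheme, adding your own Authorization header replaces the default Bearer token. The scheme and token below are placeholders:

```
Authorization: Api-Key YOUR_PROVIDER_TOKEN
```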
Add provider-specific headers:
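For example, OpenRouter recommends optional attribution headers that identify your application; the values below are placeholders:

```
HTTP-Referer: https://your-app-domain.example
X-Title: Your App Name
```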
If you need to override the default Content-Type or Authorization headers, simply add them with your desired values in the custom headers section.
Models Configuration
For each model you want to use:
- Key: Unique identifier for the model (e.g., gpt-3.5-turbo)
- Name: Display name shown to users
- Provider: The model provider’s name
- Vision: Toggle if the model supports image analysis
- Tools: Toggle if the model supports function calling
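As an illustration, a single model entry for the Hugging Face example above might look like this; the key must match the model ID your provider actually expects:

```
Key:      mistralai/Mistral-7B-Instruct-v0.1
Name:     Mistral 7B Instruct
Provider: Mistral AI
Vision:   Off
Tools:    Off
```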
Provider-Specific Setup
Each provider issues API keys from its own dashboard; for example, Together AI keys are available from the Together AI Settings page. Check your provider's documentation for the equivalent page and for its OpenAI-compatible base URL.
Tools Compatibility
When configuring models for your custom LLM server, you can enable them for:
- Chat: Interactive conversations and assistance
- Writer: Content generation and writing tasks
- Coder: Programming help and code generation
- Title Generation: Automatic title creation for content
Make sure to enable only the capabilities that your chosen model actually supports. Enabling unsupported features may result in unexpected behavior.
Best Practices
- Testing:
  - Test each model after configuration (see the sketch after this list)
  - Verify response formats
  - Check token limits and pricing
- Security:
  - Keep API keys secure
  - Use HTTPS for external providers
  - Regularly rotate API keys
- Monitoring:
  - Track API usage
  - Monitor response times
  - Check error rates
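One way to sanity-check a new server before enabling it for users is to call it directly with the OpenAI Python client, pointing base_url at the same address you configured in Aikeedo. A minimal sketch, with an illustrative base URL and model key:

```python
from openai import OpenAI

# Point the standard OpenAI client at your custom, OpenAI-compatible server.
# base_url and model are illustrative; use the values from your own configuration.
client = OpenAI(
    base_url="https://api.example-provider.com/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="your-model-key",
    messages=[{"role": "user", "content": "Reply with a short greeting."}],
)

# A well-formed OpenAI-compatible response exposes choices[0].message.content.
print(response.choices[0].message.content)
```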
Troubleshooting
Common issues and solutions:
- Authentication Errors:
  - Verify the API key format
  - Check the header configuration
  - Confirm the server address is correct
- Model Issues:
  - Ensure model names match the provider's specifications (see the model-listing sketch after this list)
  - Verify model availability in your subscription
  - Check the provider's status page
- Connection Problems:
  - Verify network connectivity
  - Check for firewall restrictions
  - Confirm the server address format
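Many OpenAI-compatible providers also expose the /models endpoint, which you can use to confirm the exact model IDs to enter as keys. A minimal sketch, again with an illustrative base URL and key:

```python
from openai import OpenAI

# List the model IDs the server actually advertises; useful when a model key is rejected.
client = OpenAI(
    base_url="https://api.example-provider.com/v1",
    api_key="YOUR_API_KEY",
)

for model in client.models.list():
    print(model.id)
```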
Always refer to your provider’s documentation for the most up-to-date configuration details and troubleshooting guides.
Rate Limits and Quotas
- Monitor your provider’s rate limits
- Check quota usage regularly
- Set up alerts for quota thresholds
- Consider implementing retry logic for rate limit errors
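If your provider returns rate-limit errors when quotas are exceeded, a simple exponential backoff around the request is often enough. A minimal sketch, assuming the same illustrative client configuration as above:

```python
import time

from openai import OpenAI, RateLimitError

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # illustrative
    api_key="YOUR_API_KEY",
)

def chat_with_retry(messages, model="your-model-key", max_retries=5):
    """Retry on rate-limit errors with exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(2 ** attempt)
```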
Need Help?
For additional support:
- Check the provider’s documentation
- Contact support@aikeedo.com
Keep your integrations updated as providers may change their API specifications or requirements.