Introduction

Aikeedo supports any API provider that implements the OpenAI API specification. This means you can integrate any service that follows OpenAI’s API format, whether it’s a cloud service, self-hosted model, or custom implementation.
If a service is compatible with OpenAI’s API specification, it will work with Aikeedo - there are no restrictions on which providers you can use.
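In practice, compatibility means the familiar chat-completions request shape works unchanged; only the base URL and API key differ per provider. A minimal sketch using the official OpenAI Python client (the base URL and model name below are examples; substitute your provider's values):

from openai import OpenAI

# Any OpenAI-compatible endpoint works; only the base URL and key change.
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # example: Groq's endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instant",  # a model name your provider exposes
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)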

Common Providers

While you can use any OpenAI-compatible provider, here are some popular options:
  • Together AI
  • OpenRouter
  • Groq
  • DeepSeek
  • Google AI (Gemini)
  • Perplexity AI
  • Hugging Face
  • Nebius
  • DeepInfra
  • AI/ML API
  • Azure OpenAI Service
  • v0 by Vercel
  • Self-hosted models (via compatible servers)
  • Custom LLM implementations

Adding a New Provider

  1. Go to Settings → Integrations
  2. Scroll to “Custom LLM Servers” section
  3. Click “New server”

Configuration Fields

API Server

  • Name: A descriptive name for the provider
  • Server address: The API endpoint URL
  • API Key/Authorization token: Your authentication token for the service
The server address supports dynamic variables for flexible configuration.
Dynamic Variables
You can use the following variable in your server address:
  • {model}: Will be dynamically replaced with the current model name
Example for Hugging Face:
https://api-inference.huggingface.co/models/{model}
When using model “mistralai/Mistral-7B-Instruct-v0.1”, the actual request will go to:
https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.1
Dynamic variables are particularly useful for services like Hugging Face where the model name is part of the API endpoint.
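Conceptually the substitution is plain string replacement performed before the request is sent; a sketch of the behavior (illustrative only, not Aikeedo's internals):

server_address = "https://api-inference.huggingface.co/models/{model}"
model = "mistralai/Mistral-7B-Instruct-v0.1"

# {model} is swapped for the configured model key at request time
endpoint = server_address.replace("{model}", model)
# endpoint == "https://api-inference.huggingface.co/models/mistralai/Mistral-7B-Instruct-v0.1"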

Headers

Aikeedo automatically adds these default headers:
  • Content-Type: application/json
  • Authorization: Bearer YOUR_API_KEY (when API key field is filled)
You can override these default headers or add custom headers as needed. This is useful when a provider requires specific header configurations.
Adding Custom Headers
  1. Click “Add header”
  2. Enter the header key (e.g., HTTP-Referer)
  3. Enter the header value
  4. Repeat for additional headers
Examples
Override default Authorization header:
Key: Authorization
Value: Basic YOUR_BASE64_CREDENTIALS
Add provider-specific headers:
Key: X-Title
Value: Your Application Name
If you need to override the default Content-Type or Authorization headers, simply add them with your desired values in the custom headers section.
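The effective header set is the defaults merged with your custom entries, with custom values winning on conflicts. A sketch of the equivalent raw request in Python (the endpoint URL and model name are placeholders):

import requests

default_headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY",  # added when the API key field is filled
}
custom_headers = {
    "Authorization": "Basic YOUR_BASE64_CREDENTIALS",  # overrides the default
    "X-Title": "Your Application Name",                # provider-specific extra
}

# Later entries win, so custom headers override defaults with the same key.
headers = {**default_headers, **custom_headers}

response = requests.post(
    "https://example-provider.com/v1/chat/completions",  # placeholder endpoint
    headers=headers,
    json={"model": "some-model", "messages": [{"role": "user", "content": "Hi"}]},
)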

Models Configuration

For each model you want to use:
  • Key: Unique identifier for the model (e.g., gpt-3.5-turbo)
  • Name: Display name shown to users
  • Provider: The model provider’s name
  • Vision: Toggle if the model supports image analysis
  • Tools: Toggle if the model supports function calling

Provider-Specific Setup

Together AI
Server
https://api.together.xyz/v1
Get your API key from Together AI Settings

OpenRouter
Server
https://openrouter.ai/api/v1
Get your API key from OpenRouter Settings

Google AI (Gemini)
Server
https://generativelanguage.googleapis.com/v1beta/openai
Get your API key from Google AI Studio

Hugging Face
Server
https://api-inference.huggingface.co/models/{model}/v1/
Get your API key from Hugging Face Settings
The {model} variable in the server address will be automatically replaced with your chosen model name.

Groq
Server
https://api.groq.com/openai/v1
Get your API key from Groq Console
Groq is known for its extremely fast inference speeds and competitive pricing.

DeepSeek
Server
https://api.deepseek.com
Required headers:
  • Authorization: Bearer YOUR_API_KEY
Get your API key from DeepSeek Platform
DeepSeek offers both general-purpose and code-specialized models with competitive pricing.

Perplexity AI
Server
https://api.perplexity.ai
Get your API key from Perplexity Settings
Perplexity offers high-quality models with strong reasoning capabilities and up-to-date knowledge.

Nebius
Server
https://api.studio.nebius.ai/v1/
Get your API key from Nebius Studio

DeepInfra
Server
https://api.deepinfra.com/v1/openai/
Get your API key from DeepInfra Dashboard
DeepInfra provides access to a variety of open-source models with competitive pricing and low latency.

AI/ML API
Server
https://api.aimlapi.com/v1/
Get your API key from AI/ML API Keys

Azure OpenAI Service
Server
https://:resource_name.openai.azure.com/openai/deployments/:deployment_name/chat/completions?api-version=:api_version
Get your API key and other details from Azure Portal
For Azure OpenAI Service:
  • Replace :resource_name with your Azure OpenAI resource name
  • Replace :deployment_name with {model} to use your configured model names as deployment names
  • Replace :api_version with your desired API version (e.g., 2024-10-21)
  • Different endpoints (such as completions or embeddings) require their own paths
Example configuration:
https://my-instance.openai.azure.com/openai/deployments/{model}/chat/completions?api-version=2024-10-21
When you configure a model named “gpt-4o-mini”, it will automatically use that as the deployment name in the URL.

v0 by Vercel
Server
https://api.v0.dev/v1
Model name: v0-1.0-md
Get your API key from v0.dev Settings
The v0 API is currently in beta and requires a Premium or Team plan with usage-based billing enabled.

Tools Compatibility

When configuring models for your custom LLM server, you can enable them for:
  1. Chat: Interactive conversations and assistance
  2. Writer: Content generation and writing tasks
  3. Coder: Programming help and code generation
  4. Title Generation: Automatic title creation for content
Make sure to enable only the capabilities that your chosen model actually supports. Enabling unsupported features may result in unexpected behavior.
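One way to check tool support before flipping the Tools toggle is to send a request with a throwaway tool definition and see whether the provider honors it. A hedged sketch (the tool, base URL, and model name here are made up for the test):

from openai import OpenAI

client = OpenAI(base_url="https://api.groq.com/openai/v1", api_key="YOUR_API_KEY")

# A trivial tool definition; models without function calling typically
# return an error or simply never emit tool_calls.
tools = [{
    "type": "function",
    "function": {
        "name": "get_time",
        "description": "Return the current time",
        "parameters": {"type": "object", "properties": {}},
    },
}]

response = client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # substitute the model you are testing
    messages=[{"role": "user", "content": "What time is it?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)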

Best Practices

  1. Testing (see the script after this list):
    • Test each model after configuration
    • Verify response formats
    • Check token limits and pricing
  2. Security:
    • Keep API keys secure
    • Use HTTPS for external providers
    • Regularly rotate API keys
  3. Monitoring:
    • Track API usage
    • Monitor response times
    • Check for error rates
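For the testing point above, a short script like the following can exercise every configured model once (a minimal sketch; the base URL, key, and model keys are examples to replace with your own):

from openai import OpenAI

client = OpenAI(base_url="https://api.together.xyz/v1", api_key="YOUR_API_KEY")

# Model keys exactly as configured in Aikeedo (examples only)
models = ["meta-llama/Llama-3.3-70B-Instruct-Turbo"]

for model in models:
    try:
        r = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": "Reply with OK."}],
            max_tokens=5,
        )
        print(model, "->", r.choices[0].message.content)
    except Exception as e:
        print(model, "failed:", e)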

Troubleshooting

Common issues and solutions:
  1. Authentication Errors (see the quick check after this list):
    • Verify API key format
    • Check header configuration
    • Confirm server address is correct
  2. Model Issues:
    • Ensure model names match provider’s specifications
    • Verify model availability in your subscription
    • Check provider’s status page
  3. Connection Problems:
    • Verify network connectivity
    • Check for firewall restrictions
    • Confirm server address format
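For authentication errors in particular, it helps to take Aikeedo out of the loop and call the provider directly; most OpenAI-compatible servers expose a model-listing endpoint under the server address. A sketch (the URL and key are placeholders):

import requests

base_url = "https://api.together.xyz/v1"  # your configured server address
api_key = "YOUR_API_KEY"

r = requests.get(
    f"{base_url}/models",
    headers={"Authorization": f"Bearer {api_key}"},
)
print(r.status_code)  # 200 confirms the address and key; 401/403 points to auth
print(r.json() if r.ok else r.text)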
Always refer to your provider’s documentation for the most up-to-date configuration details and troubleshooting guides.

Rate Limits and Quotas

  • Monitor your provider’s rate limits
  • Check quota usage regularly
  • Set up alerts for quota thresholds
  • Consider implementing retry logic for rate limit errors
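For the last point, a simple exponential-backoff wrapper is usually enough; a minimal sketch (retry counts and delays are illustrative):

import time
import requests

def post_with_retry(url, headers, payload, max_retries=5):
    # Retry on HTTP 429 (rate limited), backing off exponentially
    for attempt in range(max_retries):
        response = requests.post(url, headers=headers, json=payload)
        if response.status_code != 429:
            return response
        # Honor Retry-After when the provider sends it
        wait = float(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    return response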

Need Help?

For additional support:
  1. Check the provider’s documentation
  2. Contact support@aikeedo.com
Keep your integrations updated as providers may change their API specifications or requirements.