AI SDK
A unified AI model configuration center supporting all major AI models — configure once, use everywhere, no need to set up API Keys repeatedly.
Note: This feature requires Eagle 4.0 Build20 or later (not yet released — please visit the Eagle website for release announcements).
Introduction to the AI SDK Dependency Plugin

The "AI SDK Dependency Plugin" is a developer toolkit that provides a unified AI model configuration center. It supports all major AI models — configure once, use everywhere. By integrating the AI SDK, developers can easily implement text generation, structured object generation, and streaming capabilities in their own plugins.
Unified Configuration Center: Set Up Once, Use Everywhere
The AI SDK plugin supports the following 8 Providers:
Commercial Models:
OpenAI (GPT-5.2, GPT-5, o3, etc.)
Anthropic Claude (Claude Sonnet 4.6, Claude Opus 4.6, etc.)
Google Gemini (Gemini 3 Pro, Gemini 3 Flash, etc.)
DeepSeek (DeepSeek V3, DeepSeek R1, etc.)
Tongyi Qwen (Qwen3 series)
Local Models (fully offline):
Ollama (supports Llama 4, Qwen3, Gemma 3, etc.)
LM Studio (graphical interface, beginner-friendly)
OpenAI Compatible Protocol:
OpenAI Compatible — Connect to any OpenAI API-compatible endpoint (e.g., Groq, Together AI, Fireworks, vLLM, etc.) by simply providing a Base URL. API Key is optional.
Support for additional Model Providers will be added in future releases.
Once configured, all AI-related plugins can use it directly — no repeated setup required. For example, if you install both an "AI Translation" and an "AI Rename" plugin, they will automatically share the configuration you set up in AI SDK. They can even use different models individually, without requiring you to re-enter an API Key.
Open Development Environment
Built on the ai-sdk.dev standard (AI SDK v6), the AI SDK plugin provides developers with a clean, stable infrastructure. Developers no longer need to handle API Key storage, model switching, error handling, and other boilerplate configuration — they can focus on plugin feature innovation. The only difference is how Providers are obtained — we use custom-developed Providers to ensure better stability and user experience.
Version Note: This plugin is built on AI SDK v6 and stays in sync with the ai-sdk.dev official documentation.
Installation and Setup
Installation Steps
Go to the Eagle Plugin Center
Search for and find the "AI SDK" plugin
Click Install
After installation, open Preferences and find "AI Models" in the left sidebar
Configure your Model Providers in the settings panel on the right, and set up Default Models (Language Model, Vision Model)
When users install a plugin that depends on AI SDK, Eagle will automatically prompt them to install the "AI SDK Dependency Plugin." Therefore, developers do not need to write code for prompting users to install it — the system automatically ensures all dependencies are installed before allowing the plugin to run.
Declaring the Dependency in manifest.json
Add the dependencies field to your plugin's manifest.json:
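A minimal sketch of the manifest (the fields other than `dependencies` are illustrative placeholders; use your plugin's actual manifest fields):

```json
{
  "id": "my-ai-plugin",
  "name": "My AI Plugin",
  "version": "1.0.0",
  "dependencies": ["ai-sdk"]
}
```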
The key setting is "dependencies": ["ai-sdk"], which lets Eagle know that this plugin requires AI SDK to function.
Quick Start
Get the AI SDK module:
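A pseudocode sketch — the exact accessor for resolving a dependency plugin's module is an assumption here (the `eagle.extraModule` name is hypothetical; consult the Eagle plugin documentation for the actual API):

```
// Hypothetical accessor — the real API for obtaining the
// dependency module may differ.
const ai = await eagle.extraModule.get("ai-sdk");
```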
Recommended Approach: Use Default Models
Users typically pre-select their preferred Language Model and Vision Model in the "AI Models" section of Preferences. Using getDefaultModel() to directly inherit the user's selection is the simplest and most recommended approach:
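A minimal sketch, assuming `ai` is the AI SDK module obtained from the dependency plugin (passed in explicitly here so the example stays self-contained):

```javascript
// Translate a string with whichever language model the user picked in
// Preferences > AI Models. Returns null if no default model is set.
async function translate(ai, text) {
  const modelId = ai.getDefaultModel("chat"); // e.g. "openai::gpt-5"
  if (!modelId) return null;                  // user has not configured a default model

  const { text: translated } = await ai.generateText({
    model: ai.getModel(modelId),
    prompt: `Translate the following text to English:\n${text}`,
  });
  return translated;
}
```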
Best Practice: Prefer using getDefaultModel("chat") to get the user's preferred model rather than hardcoding a specific Provider and model name in your code. This has two major benefits:
Easier development — no need to implement your own model selector; just inherit the user's preferences from AI SDK.
Avoid configuration issues — if you hardcode openai("gpt-5") but the user hasn't configured OpenAI, it will fail. Using default models guarantees the user has already configured and verified the model.
Specifying a Specific Provider
If your plugin specifically needs to use a particular Provider (e.g., for OpenAI-only features), you can specify it directly:
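A sketch, assuming `ai` is the AI SDK module. Note that a Provider instance is itself callable: calling it with a model ID returns a Model.

```javascript
// Get a model from a specific Provider (here: OpenAI).
function getOpenAIModel(ai, modelId) {
  const openai = ai.getProvider("openai");
  if (!openai) throw new Error("OpenAI provider not registered");
  return openai(modelId); // e.g. openai("gpt-5")
}
```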
Core Concepts
Relationship Between Provider and Model
AI SDK uses a two-layer structure: Provider manages the API connection and authentication, while Model is the unit that actually executes AI tasks.
provider::model Format
Important: Provider and Model are separated by a double colon ::, not a single colon or slash.
This format is used in methods such as getModel() and getDefaultModel().
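For illustration, valid and invalid identifier strings (plain JavaScript string handling, no SDK calls):

```javascript
// Valid: Provider and Model separated by a double colon.
const good = "google::gemini-3-pro";
const [provider, model] = good.split("::");
// provider is "google", model is "gemini-3-pro"

// Invalid — these would be rejected by getModel():
// "google:gemini-3-pro"   (single colon)
// "google/gemini-3-pro"   (slash)
```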
Synchronous vs Asynchronous Methods
Methods in AI SDK are clearly divided into synchronous and asynchronous categories:
Synchronous methods (no await needed):
getProviders(), getProvider(), getAvailableProviders(), getModel(), getDefaultModel()
Asynchronous methods (require await):
generateText(), generateObject(), streamText(), streamObject()
Provider instance methods:
verify(), getModels(), hasModel()
Three Ways to Get a Model
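A sketch of the three routes, assuming `ai` is the AI SDK module:

```javascript
function threeWays(ai) {
  // 1. Default model (recommended) — inherit the user's preference.
  const a = ai.getModel(ai.getDefaultModel("chat"));

  // 2. provider::model string — explicit, still one call.
  const b = ai.getModel("openai::gpt-5");

  // 3. Provider instance called as a function.
  const c = ai.getProvider("openai")("gpt-5");

  return [a, b, c];
}
```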
Best Practice: Unless your plugin has special requirements (e.g., features only supported by a specific Provider), you should prefer using getDefaultModel() to get the user's preferred model.
generateText() — Basic Text Generation
Generate a text response using a specified model.
Basic Usage (prompt)
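The simplest form takes a single prompt string (a sketch, assuming `ai` is the AI SDK module; `generateText` returning `{ text }` follows the ai-sdk.dev convention):

```javascript
// One prompt string, user's default language model.
async function basicGenerate(ai) {
  const { text } = await ai.generateText({
    model: ai.getModel(ai.getDefaultModel("chat")),
    prompt: "Write a one-line caption for a desktop wallpaper.",
  });
  return text;
}
```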
Using the messages Array
Use messages to set system prompts and multi-turn conversations:
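A sketch, assuming `ai` is the AI SDK module; the `{ role, content }` message shape follows the ai-sdk.dev convention:

```javascript
// System prompt plus multi-turn history via the messages array.
async function chat(ai, history, userInput) {
  const { text } = await ai.generateText({
    model: ai.getModel(ai.getDefaultModel("chat")),
    messages: [
      { role: "system", content: "You are a concise assistant for an image library." },
      ...history, // earlier { role: "user" | "assistant", content } turns
      { role: "user", content: userInput },
    ],
  });
  return text;
}
```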
Multimodal (Text + Image)
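A sketch, assuming `ai` is the AI SDK module. The content-part shape (`{ type: "image", image: ... }`) follows the ai-sdk.dev convention and is assumed to apply here as well:

```javascript
// Send text and an image in one user message, using the Vision Model
// the user selected in Preferences.
async function describeImage(ai, imageData) {
  const { text } = await ai.generateText({
    model: ai.getModel(ai.getDefaultModel("image")),
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Describe this image in one sentence." },
          { type: "image", image: imageData }, // URL, base64, or binary data
        ],
      },
    ],
  });
  return text;
}
```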
For more advanced usage of generateText (such as maxTokens, temperature, and other parameters), refer to the AI SDK official documentation.
generateObject() — Structured Object Generation
Have the AI return a structured JSON object according to a specified Schema.
Using Zod Schema
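A sketch, assuming `ai` is the AI SDK module and `z` is the Zod schema builder (how Zod is exposed to your plugin is an assumption; ai-sdk.dev pairs generateObject with Zod schemas):

```javascript
// Ask the model for structured tags described by a Zod schema.
async function suggestTags(ai, z) {
  const schema = z.object({
    tags: z.array(z.string()).describe("3-5 short tags"),
    summary: z.string(),
  });
  const { object } = await ai.generateObject({
    model: ai.getModel(ai.getDefaultModel("chat")),
    schema,
    prompt: "Suggest tags for a photo of a mountain lake at sunrise.",
  });
  return object; // e.g. { tags: [...], summary: "..." }
}
```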
Using JSON Schema
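A sketch passing a plain JSON Schema object, assuming `ai` is the AI SDK module. Whether the schema must be wrapped in a helper (as `jsonSchema()` is in ai-sdk.dev) is an assumption; check the SDK for the exact form:

```javascript
// Structured output described with a plain JSON Schema.
async function ratePhoto(ai) {
  const { object } = await ai.generateObject({
    model: ai.getModel(ai.getDefaultModel("chat")),
    schema: {
      type: "object",
      properties: {
        score: { type: "number" },
        reason: { type: "string" },
      },
      required: ["score", "reason"],
    },
    prompt: "Rate this composition from 1 to 10 and explain briefly.",
  });
  return object;
}
```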
Image Analysis Example
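A sketch combining the Vision default model with a schema, assuming `ai` is the AI SDK module and `z` is the Zod builder; the schema shape is illustrative:

```javascript
// Extract structured metadata from an image.
async function analyzeImage(ai, z, imageUrl) {
  const { object } = await ai.generateObject({
    model: ai.getModel(ai.getDefaultModel("image")),
    schema: z.object({
      subjects: z.array(z.string()),
      dominantColor: z.string(),
    }),
    messages: [
      {
        role: "user",
        content: [
          { type: "text", text: "Extract the main subjects and the dominant color." },
          { type: "image", image: imageUrl },
        ],
      },
    ],
  });
  return object;
}
```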
For more advanced usage of generateObject, refer to the AI SDK official documentation.
streamText() — Streaming Text Generation
Receive AI responses incrementally via streaming — ideal for scenarios where results need to be displayed in real time.
Displaying in the UI in Real Time
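A sketch, assuming `ai` is the AI SDK module; the `textStream` async iterable follows the ai-sdk.dev convention and is assumed here. The UI update is injected as a callback so the example stays framework-agnostic:

```javascript
// Append each streamed chunk to the output as it arrives.
async function streamIntoUI(ai, prompt, onChunk) {
  const result = await ai.streamText({
    model: ai.getModel(ai.getDefaultModel("chat")),
    prompt,
  });
  let full = "";
  for await (const chunk of result.textStream) {
    full += chunk;
    onChunk(full); // e.g. (t) => outputEl.textContent = t
  }
  return full;
}
```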
For more advanced usage of streamText, refer to the AI SDK official documentation.
streamObject() — Streaming Object Generation
Receive structured objects incrementally via streaming. Each iteration yields the partially parsed object so far.
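A sketch, assuming `ai` is the AI SDK module; the `partialObjectStream` async iterable follows the ai-sdk.dev convention and is assumed here:

```javascript
// Each iteration yields the partially parsed object so far.
async function streamTags(ai, schema, onPartial) {
  const result = await ai.streamObject({
    model: ai.getModel(ai.getDefaultModel("chat")),
    schema,
    prompt: "Suggest tags for this item.",
  });
  let last;
  for await (const partial of result.partialObjectStream) {
    last = partial;     // e.g. { tags: ["sunset"] } then { tags: ["sunset", "lake"] }
    onPartial(partial); // progressively update the UI
  }
  return last; // the final complete object
}
```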
Best Practice: streamObject is ideal for scenarios where analysis results need to be progressively displayed in the UI, allowing users to see partial results before the AI finishes.
For more advanced usage of streamObject, refer to the AI SDK official documentation.
Provider Management Methods
All methods below are synchronous — no await required.
getProviders()
Get an array of all registered Providers.
Returns
ProviderFunction[] — an array of all Providers
Note: getProviders() returns an array, not an object. The following usage is incorrect:
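A sketch of the wrong and right usage, assuming `ai` is the AI SDK module:

```javascript
// getProviders() returns an array — indexing it by name yields undefined.
function listProviderNames(ai) {
  const providers = ai.getProviders();

  // Incorrect: const openai = providers["openai"]; // undefined, not keyed by name
  // Correct: iterate the array, or call ai.getProvider("openai") for a single one.
  return providers.map((p) => p.name);
}
```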
getProvider(providerName)
Get a specific Provider by name.
providerName string — Provider name (e.g., "openai", "google")
Returns
ProviderFunction | undefined — the matching Provider, or undefined if not found
If you need a specific Provider, use getProvider() instead of getProviders() for cleaner and more concise code. However, in most cases, using getDefaultModel() directly is the better choice.
getAvailableProviders()
Get all configured Providers (those the user has finished setting up).
Returns
ProviderFunction[] — an array of configured Providers
The difference between this method and getProviders() is that getProviders() returns all 8 Providers (including unconfigured ones), while getAvailableProviders() returns only those the user has finished configuring.
getModel(providerAndModel)
Get a model instance directly using the provider::model format.
providerAndModel string — format: "provider::model"
Returns
Model — a model object that can be passed directly to generateText() and other methods
Note: You must use the :: double colon to separate the Provider and Model name; otherwise, an error will be thrown.
Settings and Reload
open()
Open the "AI Models" settings panel in Preferences. This is useful for providing a "Model Settings" button in your plugin's interface, allowing users to quickly configure Model Providers and Default Models.
Returns
void
Best Practice: When getDefaultModel() returns undefined (the user has not yet set a default model), display a prompt in your plugin's interface with a button that calls open() to guide the user through the setup.
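A sketch of that fallback pattern, assuming `ai` is the AI SDK module; the hint-rendering callback is injected so the example stays UI-agnostic:

```javascript
// Return the default model, or show a setup hint wired to open().
function modelOrPrompt(ai, showSetupHint) {
  const modelId = ai.getDefaultModel("chat");
  if (!modelId) {
    showSetupHint(() => ai.open()); // render a hint with a "Model Settings" button
    return null;
  }
  return ai.getModel(modelId);
}
```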
reload()
Reload the AI SDK configuration. After the user opens the settings panel via open() and adjusts the configuration, calling this method reads the latest configuration.
Returns
void
open() does not block execution, so the system cannot know when the user finishes configuring. It is recommended to call reload() when you need to use the model to ensure you read the latest configuration.
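A sketch of that pattern, assuming `ai` is the AI SDK module:

```javascript
// Re-read the configuration right before use, in case the user just
// changed it via the settings panel.
function freshDefaultModel(ai) {
  ai.reload(); // pick up the latest configuration
  return ai.getDefaultModel("chat");
}
```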
Default Model Methods
AI SDK supports setting and reading default models, allowing users to specify their preferred models centrally in the "AI Models" section of Preferences.
getDefaultModel(type)
Get the default model for the specified type. This is a synchronous method.
type string — model type, possible values: "chat" (Language Model) or "image" (Vision Model)
Returns
string | undefined — the "provider::model" string of the default model, or undefined if not set
Users can separately select their preferred Language Model ("chat") and Vision Model ("image") in the "AI Models" section of Preferences, and plugins can retrieve whichever they need.
Provider Instance Methods
Provider instances obtained via getProvider() can be called as functions to get a Model, and they also provide the following methods.
verify()
Verify whether the Provider's connection and authentication are valid. Used to check if the user's current configuration can connect successfully.
Returns
Promise&lt;VerifyResult&gt; — verification result object
ok boolean — whether verification succeeded
error APIError (optional) — error details on failure
Note: verify() does not return a boolean; it returns an object containing ok and error.
getModels()
Get a list of all available models for this Provider.
Returns
Promise&lt;string[]&gt; — an array of model IDs
This method sends a request to the Provider's API. Make sure the user has configured this Provider. If not configured, an APIError will be thrown.
hasModel(modelId)
Check whether this Provider includes a specific model.
modelId string — the model ID (e.g., "gpt-5")
Returns
Promise&lt;boolean&gt; — whether the model exists
Provider Instance Properties
The following are read-only properties of Provider instances:
name string — The name of the Provider.
baseURL string | undefined — The currently configured API endpoint.
Supported Providers Reference
Provider — ID — Type — Default Base URL
OpenAI — "openai" — Commercial (Cloud) — Requires manual configuration
Anthropic — "anthropic" — Commercial (Cloud) — Requires manual configuration
Google Gemini — "google" — Commercial (Cloud) — Requires manual configuration
DeepSeek — "deepseek" — Commercial (Cloud) — Requires manual configuration
Tongyi Qwen — "tongyi" — Commercial (Cloud) — https://dashscope.aliyuncs.com/compatible-mode/v1
Ollama — "ollama" — Local — http://localhost:11434/v1
LM Studio — "lmstudio" — Local — http://localhost:1234/v1
OpenAI Compatible — "openai-compatible" — Custom (Compatible Protocol) — Requires manual configuration (API Key optional)
OpenAI Compatible works with any service that implements the OpenAI API protocol. Users only need to provide a Base URL; the API Key is optional (required for some cloud services, typically not needed for local servers). The system automatically appends /v1 to the URL if not already present.
Note: The Provider name for Google Gemini is "google", not "gemini".
Error Handling
APIError Class
AI SDK throws an APIError when an API request fails, containing complete error information.
Properties
message — string — Error message
status — number | undefined — HTTP status code (e.g., 401, 403, 500)
statusText — string | undefined — HTTP status text (e.g., "Unauthorized")
code — string | undefined — Error code (extracted from the response body)
provider — string | undefined — Provider name
url — string | undefined — Request URL
responseBody — unknown — Raw error content returned by the server
Methods
toJSON() — Returns a complete error details object (suitable for logging)
toString() — Returns the error message string
Error Handling Example
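A sketch, assuming `ai` is the AI SDK module. Instead of an instanceof check against the SDK's APIError class (not imported here), the handler duck-types on the documented `status` and `toJSON()` members:

```javascript
// Map common HTTP statuses to user-friendly messages.
async function safeGenerate(ai, prompt) {
  try {
    const { text } = await ai.generateText({
      model: ai.getModel(ai.getDefaultModel("chat")),
      prompt,
    });
    return { ok: true, text };
  } catch (err) {
    let hint = "Request failed.";
    if (err.status === 401) hint = "Invalid API Key — please re-check your configuration.";
    else if (err.status === 429) hint = "Rate limited — please try again later.";
    console.error(typeof err.toJSON === "function" ? err.toJSON() : err);
    return { ok: false, hint };
  }
}
```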
Network Errors
When the Provider is unreachable (e.g., local Ollama is not running), you will also receive an APIError:
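A sketch of distinguishing that case: when the server is unreachable there is no HTTP status, but the error still carries the documented `provider` and `url` fields:

```javascript
// Build a user-facing message for an APIError (duck-typed).
function describeNetworkError(err) {
  if (err.status === undefined) {
    return `Could not reach ${err.provider ?? "the provider"} at ${err.url ?? "its endpoint"}. Is the server running?`;
  }
  return err.toString();
}
```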
Best Practices
1. Prefer Using Default Models
This is the most important recommendation. Use getDefaultModel() to inherit the user's preferences from the AI SDK settings interface, rather than hardcoding a specific Provider and model in your code:
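A minimal sketch of the contrast, assuming `ai` is the AI SDK module:

```javascript
// Preferred: inherit the user's configured default model.
function preferredModel(ai) {
  const id = ai.getDefaultModel("chat");
  return id ? ai.getModel(id) : null;

  // Avoid: ai.getProvider("openai")("gpt-5") — fails for users
  // who never configured OpenAI.
}
```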
Why not hardcode models? If you specify ai.getProvider("openai")("gpt-5") in your code but the user hasn't configured OpenAI, it will fail. You would need to handle the "Provider not configured" edge case separately, which is cumbersome. Using default models guarantees the user has already configured and verified the model, saving a significant amount of defensive code.
2. Handle verify() Results Correctly
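A sketch, assuming `ai` is the AI SDK module. Remember that verify() resolves to a `{ ok, error }` object, not a boolean:

```javascript
// Check a Provider's configuration and report the result.
async function checkProvider(ai, name) {
  const provider = ai.getProvider(name);
  if (!provider) return `Provider "${name}" not found`;

  const result = await provider.verify();
  if (result.ok) return "ok";
  return result.error?.message ?? "verification failed";
}
```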
3. Use Streaming for Long Text Generation
When expecting a long response, use streamText() for a better user experience:
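A short sketch, assuming `ai` is the AI SDK module and the `textStream` iterable from ai-sdk.dev applies here:

```javascript
// Show text as it arrives instead of waiting for the full response.
async function generateLong(ai, prompt, onText) {
  const result = await ai.streamText({
    model: ai.getModel(ai.getDefaultModel("chat")),
    prompt,
  });
  for await (const chunk of result.textStream) onText(chunk);
}
```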
Full Example
Below is a comprehensive plugin example demonstrating how to properly use the main features of AI SDK:
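A comprehensive sketch combining the patterns above: check the default model, fall back to the settings panel, stream the result, and handle errors. `ai` is the AI SDK module and `ui` is a stand-in for your plugin's interface; both are injected so the sketch stays framework-agnostic, and error handling duck-types on the documented APIError fields:

```javascript
// "AI Describe" feature sketch.
async function runDescribe(ai, ui, itemText) {
  ai.reload(); // pick up any configuration changes since the plugin loaded

  const modelId = ai.getDefaultModel("chat");
  if (!modelId) {
    ui.showHint("No default model set.", () => ai.open());
    return null;
  }

  try {
    const result = await ai.streamText({
      model: ai.getModel(modelId),
      messages: [
        { role: "system", content: "You write short, factual descriptions." },
        { role: "user", content: `Describe: ${itemText}` },
      ],
    });

    let text = "";
    for await (const chunk of result.textStream) {
      text += chunk;
      ui.render(text); // progressively update the output area
    }
    return text;
  } catch (err) {
    ui.showError(err.status === 401 ? "Invalid API Key." : String(err));
    return null;
  }
}
```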
API Cheat Sheet
AI SDK Top-Level Methods
Method — Sync/Async — Returns — Description
getProviders() — Sync — ProviderFunction[] — Get all Providers
getProvider(name) — Sync — ProviderFunction | undefined — Get a specific Provider
getAvailableProviders() — Sync — ProviderFunction[] — Get configured Providers
getModel(provider::model) — Sync — Model — Get a model instance
getDefaultModel(type) — Sync — string | undefined — Get the default model
open() — Sync — void — Open the model settings panel in Preferences
reload() — Sync — void — Reload the latest configuration
generateText(options) — Async — Promise&lt;GenerateTextResult&gt; — Generate text
generateObject(options) — Async — Promise&lt;GenerateObjectResult&gt; — Generate a structured object
streamText(options) — Async — StreamTextResult — Streaming text generation
streamObject(options) — Async — StreamObjectResult — Streaming object generation
Provider Instance Methods
Method — Sync/Async — Returns — Description
provider(modelId) — Sync — Model — Get a model (function call)
verify() — Async — Promise&lt;VerifyResult&gt; — Verify connection and authentication
getModels() — Async — Promise&lt;string[]&gt; — Get the model list
hasModel(modelId) — Async — Promise&lt;boolean&gt; — Check if a model exists
Provider Instance Properties
name — string — Provider name (read-only)
baseURL — string | undefined — API endpoint (read-only)
VerifyResult
ok — boolean — Whether verification succeeded
error — APIError | undefined — Error object on failure