AI SDK

A unified AI model configuration center supporting all major AI models — configure once, use everywhere, no need to set up API Keys repeatedly.


Introduction to the AI SDK Dependency Plugin

The "AI SDK Dependency Plugin" is a developer toolkit that provides a unified AI model configuration center. It supports all major AI models — configure once, use everywhere. By integrating the AI SDK, developers can easily implement text generation, structured object generation, and streaming capabilities in their own plugins.

Unified Configuration Center: Set Up Once, Use Everywhere

The AI SDK plugin supports the following 8 Providers:

Commercial Models:

  • OpenAI (GPT-5.2, GPT-5, o3, etc.)

  • Anthropic Claude (Claude Sonnet 4.6, Claude Opus 4.6, etc.)

  • Google Gemini (Gemini 3 Pro, Gemini 3 Flash, etc.)

  • DeepSeek (DeepSeek V3, DeepSeek R1, etc.)

  • Tongyi Qwen (Qwen3 series)

Local Models (fully offline):

  • Ollama (supports Llama 4, Qwen3, Gemma 3, etc.)

  • LM Studio (graphical interface, beginner-friendly)

OpenAI Compatible Protocol:

  • OpenAI Compatible — Connect to any OpenAI API-compatible endpoint (e.g., Groq, Together AI, Fireworks, vLLM, etc.) by simply providing a Base URL. API Key is optional.

Support for additional Model Providers will be added in future releases.

Once configured, all AI-related plugins can use it directly — no repeated setup required. For example, if you install both an "AI Translation" and an "AI Rename" plugin, they will automatically share the configuration you set up in AI SDK. They can even use different models individually, without requiring you to re-enter an API Key.

Open Development Environment

Built on the ai-sdk.dev standard (AI SDK v6), the AI SDK plugin provides developers with a clean, stable infrastructure. Developers no longer need to handle API Key storage, model switching, error handling, and other boilerplate — they can focus on plugin feature innovation. The only difference from the upstream SDK is how Providers are obtained: this plugin uses custom-developed Providers to ensure better stability and user experience.


Version Note: This plugin is built on AI SDK v6 and stays in sync with the ai-sdk.dev official documentation.


Installation and Setup

Installation Steps

  1. Go to the Eagle Plugin Center

  2. Search for and find the "AI SDK" plugin

  3. Click Install

  4. After installation, open Preferences and find "AI Models" in the left sidebar

  5. Configure your Model Providers in the settings panel on the right, and set up Default Models (Language Model, Vision Model)


When users install a plugin that depends on AI SDK, Eagle will automatically prompt them to install the "AI SDK Dependency Plugin." Therefore, developers do not need to write code for prompting users to install it — the system automatically ensures all dependencies are installed before allowing the plugin to run.

Declaring the Dependency in manifest.json

Add the dependencies field to your plugin's manifest.json:

The key setting is "dependencies": ["ai-sdk"], which lets Eagle know that this plugin requires AI SDK to function.
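As a sketch, a manifest.json declaring the dependency might look like the following. Only the dependencies field is documented behavior here; the other fields are illustrative placeholders, not Eagle's required schema:

```json
{
  "id": "my-ai-plugin",
  "name": "My AI Plugin",
  "version": "1.0.0",
  "main": "index.html",
  "dependencies": ["ai-sdk"]
}
```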


Quick Start

Get the AI SDK module:

Users typically pre-select their preferred Language Model and Vision Model in the "AI Models" section of Preferences. Using getDefaultModel() to directly inherit the user's selection is the simplest and most recommended approach:
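The flow can be sketched as follows. The `ai` object below is a small mock standing in for the real AI SDK module (which your plugin receives via the "ai-sdk" dependency), so the example is self-contained; the model name is a placeholder:

```javascript
// Mock of the AI SDK module, for illustration only. In a real plugin,
// `ai` is the module provided through the "ai-sdk" dependency.
const ai = {
  // Returns the user's default model as a "provider::model" string.
  getDefaultModel: (type) => (type === "chat" ? "openai::gpt-5" : undefined),
  // Resolves a "provider::model" string to a model instance.
  getModel: (ref) => ({ ref }),
  // Asks the model for a completion (mock: echoes the prompt).
  generateText: async ({ model, prompt }) => ({
    text: `(${model.ref}) ${prompt}`,
  }),
};

async function run() {
  const ref = ai.getDefaultModel("chat"); // e.g. "openai::gpt-5"
  if (!ref) throw new Error("No default language model configured");
  const model = ai.getModel(ref);
  const { text } = await ai.generateText({ model, prompt: "hello" });
  return text;
}
```

Because the user has already picked and verified the default model in Preferences, this path needs almost no defensive code.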


Specifying a Specific Provider

If your plugin specifically needs to use a particular Provider (e.g., for OpenAI-only features), you can specify it directly:
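A sketch of pinning a Provider, again with a mocked SDK module and a placeholder model ID:

```javascript
// Mocked AI SDK module; the real one comes from the "ai-sdk" dependency.
const ai = {
  getProvider: (name) => {
    if (name !== "openai") return undefined; // only "openai" configured in this mock
    // Provider instances are callable: calling one with a model ID returns a Model.
    return (modelId) => ({ provider: name, modelId });
  },
};

const openai = ai.getProvider("openai");
if (!openai) throw new Error("OpenAI is not configured");
const model = openai("gpt-5"); // call the Provider instance to get a Model
```

Remember that getProvider() returns undefined for unconfigured Providers, so always check before calling the result.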


Core Concepts

Relationship Between Provider and Model

AI SDK uses a two-layer structure: Provider manages the API connection and authentication, while Model is the unit that actually executes AI tasks.

provider::model Format


This format is used in methods such as getModel() and getDefaultModel().
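To make the format concrete, here is a tiny helper that splits such a string. The helper itself is illustrative and not part of the SDK, and the model IDs in the comment are placeholders:

```javascript
// Illustrative parser for the "provider::model" format accepted by
// getModel() and returned by getDefaultModel(). Not part of the SDK.
function parseModelRef(ref) {
  const sep = ref.indexOf("::");
  if (sep === -1) throw new Error(`Invalid model reference: ${ref}`);
  return { provider: ref.slice(0, sep), model: ref.slice(sep + 2) };
}

// Examples of references in this format:
//   "openai::gpt-5", "ollama::llama4", "google::gemini-3-pro"
```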

Synchronous vs Asynchronous Methods

Methods in AI SDK are clearly divided into synchronous and asynchronous categories:

Synchronous methods (no await needed):

  • getProviders(), getProvider(), getAvailableProviders(), getModel()

  • getDefaultModel()

Asynchronous methods (require await):

  • generateText(), generateObject(), streamText(), streamObject()

  • Provider instance methods: verify(), getModels(), hasModel()

Three Ways to Get a Model

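The three routes can be sketched as follows (mocked SDK module, placeholder model names):

```javascript
// Mocked AI SDK module standing in for the real "ai-sdk" dependency.
const ai = {
  getDefaultModel: (type) => (type === "chat" ? "openai::gpt-5" : undefined),
  getModel: (ref) => ({ ref }),
  getProvider: (name) =>
    name === "openai" ? (id) => ({ ref: `${name}::${id}` }) : undefined,
};

// 1. Inherit the user's default model (recommended).
const modelA = ai.getModel(ai.getDefaultModel("chat"));

// 2. Resolve a "provider::model" string directly.
const modelB = ai.getModel("openai::gpt-5");

// 3. Call a Provider instance as a function.
const modelC = ai.getProvider("openai")("gpt-5");
```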

generateText() — Basic Text Generation

Generate a text response using a specified model.

Basic Usage (prompt)
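A minimal sketch of the prompt form, with the SDK module mocked so the example runs standalone (model name is a placeholder):

```javascript
// Mocked AI SDK module; the real one comes from the "ai-sdk" dependency.
const ai = {
  getModel: (ref) => ({ ref }),
  // Mock reply: echoes the prompt instead of calling a real model.
  generateText: async ({ model, prompt }) => ({ text: `echo: ${prompt}` }),
};

async function run() {
  const { text } = await ai.generateText({
    model: ai.getModel("openai::gpt-5"), // placeholder model
    prompt: "Write a one-line greeting.",
  });
  return text; // the generated text lives on result.text
}
```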

Using the messages Array

Use messages to set system prompts and multi-turn conversations:
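A sketch of the messages form (mocked module; the mock simply replies to the last user message):

```javascript
// Mocked module: replies based on the last user message, for illustration.
const ai = {
  getModel: (ref) => ({ ref }),
  generateText: async ({ model, messages }) => {
    const lastUser = [...messages].reverse().find((m) => m.role === "user");
    return { text: `reply to: ${lastUser.content}` };
  },
};

async function run() {
  const { text } = await ai.generateText({
    model: ai.getModel("openai::gpt-5"), // placeholder
    messages: [
      { role: "system", content: "You are a concise assistant." },
      { role: "user", content: "What is Eagle?" },
      { role: "assistant", content: "An asset management app." },
      { role: "user", content: "Summarize that in five words." },
    ],
  });
  return text;
}
```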

Multimodal (Text + Image)
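For image input, ai-sdk mixes text and image parts inside one user message. The sketch below mocks the module; the part shapes follow the ai-sdk convention, and the model name is a placeholder:

```javascript
// Mocked module: counts image parts, standing in for a real vision model.
const ai = {
  getDefaultModel: (type) => (type === "image" ? "google::gemini-3-pro" : undefined),
  getModel: (ref) => ({ ref }),
  generateText: async ({ model, messages }) => {
    const parts = messages[0].content;
    const images = parts.filter((p) => p.type === "image").length;
    return { text: `saw ${images} image(s)` };
  },
};

async function describeImage(imageData) {
  const model = ai.getModel(ai.getDefaultModel("image")); // user's Vision Model
  const { text } = await ai.generateText({
    model,
    messages: [{
      role: "user",
      content: [
        { type: "text", text: "Describe this image briefly." },
        { type: "image", image: imageData }, // raw image bytes or a URL
      ],
    }],
  });
  return text;
}
```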


For more advanced usage of generateText (such as maxTokens, temperature, and other parameters), refer to the AI SDK official documentation.


generateObject() — Structured Object Generation

Have the AI return a structured JSON object according to a specified Schema.

Using Zod Schema

Using JSON Schema
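A sketch with a plain JSON Schema object instead of Zod. Whether the real SDK takes the schema object directly or wrapped is an assumption here, and the module is mocked:

```javascript
// Mocked module that accepts a plain JSON Schema object; the mock just
// returns a value matching the schema's required fields.
const ai = {
  getModel: (ref) => ({ ref }),
  generateObject: async ({ model, schema, prompt }) => ({
    object: { title: "Sunset photo", tags: ["sky", "orange"] },
  }),
};

async function run() {
  const { object } = await ai.generateObject({
    model: ai.getModel("openai::gpt-5"), // placeholder
    schema: {
      type: "object",
      properties: {
        title: { type: "string" },
        tags: { type: "array", items: { type: "string" } },
      },
      required: ["title", "tags"],
    },
    prompt: "Suggest a title and tags for this asset.",
  });
  return object;
}
```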

Image Analysis Example
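Combining image input with structured output, sketched with a mocked module (the schema shape and canned result are assumptions for illustration):

```javascript
// Mocked module: accepts an image part and returns a canned structured result.
const ai = {
  getDefaultModel: (type) => (type === "image" ? "google::gemini-3-pro" : undefined),
  getModel: (ref) => ({ ref }),
  generateObject: async ({ model, schema, messages }) => ({
    object: { subject: "sunset", colors: ["orange", "purple"] },
  }),
};

async function analyzeImage(imageData) {
  const model = ai.getModel(ai.getDefaultModel("image")); // user's Vision Model
  const { object } = await ai.generateObject({
    model,
    schema: { type: "object" }, // schema shape is an assumption
    messages: [{
      role: "user",
      content: [
        { type: "text", text: "Identify the main subject and dominant colors." },
        { type: "image", image: imageData },
      ],
    }],
  });
  return object;
}
```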


For more advanced usage of generateObject, refer to the AI SDK official documentation.


streamText() — Streaming Text Generation

Receive AI responses incrementally via streaming — ideal for scenarios where results need to be displayed in real time.

Displaying in the UI in Real Time
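The pattern can be sketched as follows. The module is mocked (its textStream yields canned chunks, mirroring the ai-sdk async-iterable convention), and `el` stands for any DOM element:

```javascript
// Mocked module whose streamText result exposes an async-iterable
// `textStream`, as in ai-sdk.
const ai = {
  getModel: (ref) => ({ ref }),
  streamText: async ({ model, prompt }) => ({
    textStream: (async function* () {
      for (const chunk of ["Once ", "upon ", "a ", "time."]) yield chunk;
    })(),
  }),
};

// Append each chunk to an output element as it arrives.
async function streamIntoElement(el) {
  const result = await ai.streamText({
    model: ai.getModel("openai::gpt-5"), // placeholder
    prompt: "Tell me a story.",
  });
  for await (const chunk of result.textStream) {
    el.textContent += chunk; // real-time UI update per chunk
  }
  return el.textContent;
}
```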


For more advanced usage of streamText, refer to the AI SDK official documentation.


streamObject() — Streaming Object Generation

Receive structured objects incrementally via streaming. Each iteration yields the partially parsed object so far.
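A sketch of consuming the partial-object stream. The module is mocked; `partialObjectStream` follows the ai-sdk naming, and the schema shape is an assumption:

```javascript
// Mocked module: partialObjectStream yields progressively larger partials.
const ai = {
  getModel: (ref) => ({ ref }),
  streamObject: async ({ model, schema, prompt }) => ({
    partialObjectStream: (async function* () {
      yield { title: "Sun" };
      yield { title: "Sunset photo" };
      yield { title: "Sunset photo", tags: ["sky", "orange"] };
    })(),
  }),
};

async function run() {
  const result = await ai.streamObject({
    model: ai.getModel("openai::gpt-5"), // placeholder
    schema: { type: "object" },          // schema shape is an assumption
    prompt: "Suggest a title and tags.",
  });
  let latest;
  for await (const partial of result.partialObjectStream) {
    latest = partial; // render the partial object as it grows
  }
  return latest; // the final iteration holds the complete object
}
```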


For more advanced usage of streamObject, refer to the AI SDK official documentation.


Provider Management Methods

All methods below are synchronous — no await required.


getProviders()

Get an array of all registered Providers.

  • Returns ProviderFunction[] — an array of all Providers


getProvider(providerName)

Get a specific Provider by name.

  • providerName string — Provider name (e.g., "openai", "google")

  • Returns ProviderFunction | undefined — the matching Provider, or undefined if not found


If you need a specific Provider, use getProvider() instead of getProviders() for cleaner and more concise code. However, in most cases, using getDefaultModel() directly is the better choice.


getAvailableProviders()

Get all configured Providers (those the user has finished setting up).

  • Returns ProviderFunction[] — an array of configured Providers


The difference between this method and getProviders() is that getProviders() returns all 8 Providers (including unconfigured ones), while getAvailableProviders() returns only those the user has finished configuring.


getModel(providerAndModel)

Get a model instance directly using the provider::model format.

  • providerAndModel string — format: "provider::model"

  • Returns Model — a model object that can be passed directly to generateText() and other methods


Settings and Reload


open()

Open the "AI Models" settings panel in Preferences. This is useful for providing a "Model Settings" button in your plugin's interface, allowing users to quickly configure Model Providers and Default Models.

  • Returns void


reload()

Reload the AI SDK configuration. After the user opens the settings panel via open() and adjusts the configuration, calling this method reads the latest configuration.

  • Returns void


open() does not block execution, so the system cannot know when the user finishes configuring. It is recommended to call reload() when you need to use the model to ensure you read the latest configuration.
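The recommended pattern can be sketched as follows (mocked module; the mock's reload() simulates the user having just configured a default model):

```javascript
// Mocked settings surface; open() and reload() are synchronous and return void.
const ai = {
  _config: { defaultChat: undefined },
  open() { /* would open Preferences → "AI Models" */ },
  reload() { this._config = { defaultChat: "openai::gpt-5" }; }, // mock re-read
  getDefaultModel(type) {
    return type === "chat" ? this._config.defaultChat : undefined;
  },
};

// Typical pattern: re-read the configuration right before use.
function getChatModelRef() {
  ai.reload(); // pick up any changes made since the plugin loaded
  const ref = ai.getDefaultModel("chat");
  if (!ref) {
    ai.open(); // no default set: send the user to the settings panel
    return undefined;
  }
  return ref;
}
```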


Default Model Methods

AI SDK supports setting and reading default models, allowing users to specify their preferred models centrally in the "AI Models" section of Preferences.


getDefaultModel(type)

Get the default model for the specified type. This is a synchronous method.

  • type string — model type, possible values: "chat" (Language Model) or "image" (Vision Model)

  • Returns string | undefined — the "provider::model" string of the default model, or undefined if not set


Users can separately select their preferred Language Model ("chat") and Vision Model ("image") in the "AI Models" section of Preferences, and plugins can retrieve whichever they need.


Provider Instance Methods

Provider instances obtained via getProvider() can be called as functions to get a Model, and they also provide the following methods.


verify()

Verify whether the Provider's connection and authentication are valid. Used to check if the user's current configuration can connect successfully.

  • Returns Promise<VerifyResult> — verification result object

    • ok boolean — whether verification succeeded

    • error APIError (optional) — error details on failure


getModels()

Get a list of all available models for this Provider.

  • Returns Promise<string[]> — an array of model IDs


This method sends a request to the Provider's API. Make sure the user has configured this Provider. If not configured, an APIError will be thrown.


hasModel(modelId)

Check whether this Provider includes a specific model.

  • modelId string — the model ID (e.g., "gpt-5")

  • Returns Promise<boolean> — whether the model exists


Provider Instance Properties

The following are read-only properties of Provider instances:


name: string

The name of the Provider.


baseURL: string | undefined

The currently configured API endpoint.


Supported Providers Reference

| Provider | Name | Type | Default Base URL |
| --- | --- | --- | --- |
| OpenAI | "openai" | Commercial (Cloud) | Requires manual configuration |
| Anthropic | "anthropic" | Commercial (Cloud) | Requires manual configuration |
| Google Gemini | "google" | Commercial (Cloud) | Requires manual configuration |
| DeepSeek | "deepseek" | Commercial (Cloud) | Requires manual configuration |
| Tongyi Qwen | "tongyi" | Commercial (Cloud) | https://dashscope.aliyuncs.com/compatible-mode/v1 |
| Ollama | "ollama" | Local | http://localhost:11434/v1 |
| LM Studio | "lmstudio" | Local | http://localhost:1234/v1 |
| OpenAI Compatible | "openai-compatible" | Custom (Compatible Protocol) | Requires manual configuration (API Key optional) |


OpenAI Compatible works with any service that implements the OpenAI API protocol. Users only need to provide a Base URL; the API Key is optional (required for some cloud services, typically not needed for local servers). The system automatically appends /v1 to the URL if not already present.


Error Handling

APIError Class

AI SDK throws an APIError when an API request fails, containing complete error information.

Properties

| Property | Type | Description |
| --- | --- | --- |
| message | string | Error message |
| status | number \| undefined | HTTP status code (e.g., 401, 403, 500) |
| statusText | string \| undefined | HTTP status text (e.g., "Unauthorized") |
| code | string \| undefined | Error code (extracted from the response body) |
| provider | string \| undefined | Provider name |
| url | string \| undefined | Request URL |
| responseBody | unknown | Raw error content returned by the server |

Methods

| Method | Description |
| --- | --- |
| toJSON() | Returns a complete error details object (suitable for logging) |
| toString() | Returns the error message string |

Error Handling Example
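A sketch of catching and inspecting an APIError. The APIError class below is a minimal stand-in for the SDK's class (same documented properties), and the mocked module fails with a 401 as an invalid key would:

```javascript
// Minimal stand-in for the SDK's APIError, for illustration only.
class APIError extends Error {
  constructor(message, { status, statusText, code, provider, url } = {}) {
    super(message);
    this.name = "APIError";
    Object.assign(this, { status, statusText, code, provider, url });
  }
  toJSON() {
    const { message, status, statusText, code, provider, url } = this;
    return { message, status, statusText, code, provider, url };
  }
}

// Mocked module that rejects with a 401 authentication failure.
const ai = {
  generateText: async () => {
    throw new APIError("Invalid API key", {
      status: 401, statusText: "Unauthorized", provider: "openai",
    });
  },
};

async function runSafely() {
  try {
    const { text } = await ai.generateText({ prompt: "hi" });
    return text;
  } catch (err) {
    if (err instanceof APIError) {
      console.error("AI request failed:", err.toJSON()); // full details for logs
      return err.status === 401 ? "Please check your API Key." : "AI request failed.";
    }
    throw err; // not an API error: rethrow
  }
}
```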

Network Errors

When the Provider is unreachable (e.g., local Ollama is not running), you will also receive an APIError:


Best Practices

1. Prefer Using Default Models

This is the most important recommendation. Use getDefaultModel() to inherit the user's preferences from the AI SDK settings interface, rather than hardcoding a specific Provider and model in your code:

Why not hardcode models? If you specify ai.getProvider("openai")("gpt-5") in your code but the user hasn't configured OpenAI, it will fail. You would need to handle the "Provider not configured" edge case separately, which is cumbersome. Using default models guarantees the user has already configured and verified the model, saving a significant amount of defensive code.

2. Handle verify() Results Correctly
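A sketch of acting on a VerifyResult (mocked Provider instance; the { ok, error } shape matches the VerifyResult documented above):

```javascript
// Mocked Provider instance whose verify() reports an auth failure.
const provider = {
  name: "openai",
  verify: async () => ({ ok: false, error: { status: 401, message: "Invalid API key" } }),
};

async function checkProvider(p) {
  const result = await p.verify();
  if (result.ok) return `${p.name} is ready`;
  // Distinguish auth failures from other errors via the status code.
  if (result.error && result.error.status === 401) {
    return `${p.name}: invalid API Key`;
  }
  return `${p.name}: connection failed`;
}
```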

3. Use Streaming for Long Text Generation

When expecting a long response, use streamText() for a better user experience:


Full Example

Below is a comprehensive plugin example demonstrating how to properly use the main features of AI SDK:
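The sketch below ties the pieces together: reload configuration, fall back to the settings panel when no default model exists, call generateText, and handle failure. The `ai` object is again a mock of the SDK module, and all model names are placeholders:

```javascript
// Mocked AI SDK module; in a real plugin this arrives via the "ai-sdk" dependency.
const ai = {
  _default: "openai::gpt-5",
  reload() {},                          // re-read configuration (no-op in the mock)
  open() {},                            // open the "AI Models" settings panel
  getDefaultModel(type) { return type === "chat" ? this._default : undefined; },
  getModel(ref) { return { ref }; },
  async generateText({ model, messages }) {
    const user = messages.find((m) => m.role === "user");
    return { text: `summary of: ${user.content}` };
  },
};

async function summarizeSelection(selectionText) {
  ai.reload();                          // pick up the latest settings
  const ref = ai.getDefaultModel("chat");
  if (!ref) {
    ai.open();                          // let the user configure a model first
    return { ok: false, reason: "no-default-model" };
  }
  try {
    const { text } = await ai.generateText({
      model: ai.getModel(ref),
      messages: [
        { role: "system", content: "Summarize in one sentence." },
        { role: "user", content: selectionText },
      ],
    });
    return { ok: true, text };
  } catch (err) {
    console.error("AI request failed:", err);
    return { ok: false, reason: "api-error" };
  }
}
```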


API Cheat Sheet

AI SDK Top-Level Methods

| Method | Sync/Async | Return Type | Description |
| --- | --- | --- | --- |
| getProviders() | Sync | ProviderFunction[] | Get all Providers |
| getProvider(name) | Sync | ProviderFunction \| undefined | Get a specific Provider |
| getAvailableProviders() | Sync | ProviderFunction[] | Get configured Providers |
| getModel(provider::model) | Sync | Model | Get a model instance |
| getDefaultModel(type) | Sync | string \| undefined | Get the default model |
| open() | Sync | void | Open the model settings panel in Preferences |
| reload() | Sync | void | Reload the latest configuration |
| generateText(options) | Async | Promise<GenerateTextResult> | Generate text |
| generateObject(options) | Async | Promise<GenerateObjectResult> | Generate a structured object |
| streamText(options) | Async | StreamTextResult | Streaming text generation |
| streamObject(options) | Async | StreamObjectResult | Streaming object generation |

Provider Instance Methods

| Method | Sync/Async | Return Type | Description |
| --- | --- | --- | --- |
| provider(modelId) | Sync | Model | Get a model (function call) |
| verify() | Async | Promise<VerifyResult> | Verify connection and authentication |
| getModels() | Async | Promise<string[]> | Get the model list |
| hasModel(modelId) | Async | Promise<boolean> | Check if a model exists |

Provider Instance Properties

| Property | Type | Description |
| --- | --- | --- |
| name | string | Provider name (read-only) |
| baseURL | string \| undefined | API endpoint (read-only) |

VerifyResult

| Property | Type | Description |
| --- | --- | --- |
| ok | boolean | Whether verification succeeded |
| error | APIError \| undefined | Error object on failure |
