# AI SDK

{% hint style="warning" %}
Note: This feature requires Eagle 4.0 Build20 or later.
{% endhint %}

***

## Introduction to the AI SDK Dependency Plugin <a href="#introduction" id="introduction"></a>

<figure><img src="https://1590693372-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F8ag8XBIM3olHOU7WmBBx%2Fuploads%2Fz0rT7Kxok59esswHY5Z9%2Fimage.png?alt=media&#x26;token=9e9cb09a-88a2-4457-887c-b2325335de10" alt=""><figcaption></figcaption></figure>

The "AI SDK Dependency Plugin" is a developer toolkit that provides a unified AI model configuration center. It supports all major AI models — configure once, use everywhere. By integrating the AI SDK, developers can easily implement text generation, structured object generation, and streaming capabilities in their own plugins.

### Unified Configuration Center: Set Up Once, Use Everywhere

The AI SDK plugin supports the following 8 Providers:

**Commercial Models**:

* OpenAI (GPT-5.2, GPT-5, o3, etc.)
* Anthropic Claude (Claude Sonnet 4.6, Claude Opus 4.6, etc.)
* Google Gemini (Gemini 3 Pro, Gemini 3 Flash, etc.)
* DeepSeek (DeepSeek V3, DeepSeek R1, etc.)
* Tongyi Qwen (Qwen3 series)

**Local Models** (fully offline):

* Ollama (supports Llama 4, Qwen3, Gemma 3, etc.)
* LM Studio (graphical interface, beginner-friendly)

**OpenAI Compatible Protocol**:

* OpenAI Compatible — Connect to any OpenAI API-compatible endpoint (e.g., Groq, Together AI, Fireworks, vLLM, etc.) by simply providing a Base URL. API Key is optional.

Support for additional Model Providers will be added in future releases.

Once configured, all AI-related plugins can use it directly — no repeated setup required. For example, if you install both an "AI Translation" and an "AI Rename" plugin, they will automatically share the configuration you set up in AI SDK. Each plugin can even use a different model, without you having to re-enter an API Key.

### Open Development Environment

Built on the [ai-sdk.dev](https://ai-sdk.dev/) standard (AI SDK v6), the AI SDK plugin provides developers with a clean, stable infrastructure. Developers no longer need to handle API Key storage, model switching, error handling, and other boilerplate — they can focus on plugin feature innovation. The only difference from the official AI SDK is how Providers are obtained: we use custom-developed Providers to ensure better stability and user experience.

{% hint style="info" %}
**Version Note**: This plugin is built on AI SDK v6 and stays in sync with the [ai-sdk.dev](https://ai-sdk.dev/) official documentation.
{% endhint %}

***

## Installation and Setup <a href="#installation" id="installation"></a>

### Installation Steps

1. Go to the Eagle Plugin Center
2. Search for and find the "AI SDK" plugin
3. Click Install
4. After installation, open **Preferences** and find "**AI Models**" in the left sidebar
5. Configure your Model Providers in the settings panel on the right, and set up **Default Models** (Language Model, Vision Model)

{% hint style="info" %}
When users install a plugin that depends on AI SDK, Eagle will automatically prompt them to install the "AI SDK Dependency Plugin." Therefore, developers do not need to write code for prompting users to install it — the system automatically ensures all dependencies are installed before allowing the plugin to run.
{% endhint %}

### Declaring the Dependency in manifest.json

Add the `dependencies` field to your plugin's `manifest.json`:

```json
{
    "id": "YOUR_PLUGIN_ID",
    "version": "1.0.0",
    "platform": "all",
    "arch": "all",
    "name": "My AI Plugin",
    "logo": "/logo.png",
    "keywords": [],
    "dependencies": ["ai-sdk"],
    "devTools": false,
    "main": {
        "url": "index.html",
        "width": 640,
        "height": 480
    }
}
```

The key setting is `"dependencies": ["ai-sdk"]`, which lets Eagle know that this plugin requires AI SDK to function.

***

## Quick Start <a href="#quickstart" id="quickstart"></a>

Get the AI SDK module:

```javascript
const ai = eagle.extraModule.ai;
```

### Recommended Approach: Use Default Models

Users typically pre-select their preferred Language Model and Vision Model in the "AI Models" section of Preferences. Using `getDefaultModel()` to directly inherit the user's selection is the simplest and most recommended approach:

```javascript
eagle.onPluginCreate(async (plugin) => {
    const ai = eagle.extraModule.ai;
    const { generateText } = ai;

    // Get the user's default Language Model (synchronous, no await needed)
    // There is also getDefaultModel("image") to get the user's default Vision Model
    const defaultLLM = ai.getDefaultModel("chat");

    if (!defaultLLM) {
        console.log("Please select a default Language Model in AI SDK settings first");
        ai.open(); // Automatically opens the model settings panel in Preferences
        return;
    }

    // Get the model directly using the provider::model string
    const model = ai.getModel(defaultLLM);

    // Generate text
    const result = await generateText({
        model,
        prompt: "Describe digital art in one sentence",
    });

    console.log(result.text);
});
```

{% hint style="success" %}
**Best Practice:** Prefer using `getDefaultModel("chat")` to get the user's preferred model rather than hardcoding a specific Provider and model name in your code. This has two major benefits:

1. **Easier development** — no need to implement your own model selector; just inherit the user's preferences from AI SDK.
2. **Avoid configuration issues** — if you hardcode `openai("gpt-5")` but the user hasn't configured OpenAI, it will fail. Using default models guarantees the user has already configured and verified the model.

{% endhint %}

### Specifying a Specific Provider

If your plugin specifically needs to use a particular Provider (e.g., for OpenAI-only features), you can specify it directly:

```javascript
eagle.onPluginCreate(async (plugin) => {
    const ai = eagle.extraModule.ai;
    const { generateText } = ai;

    // Get a specific Provider (synchronous, no await needed)
    const openai = ai.getProvider("openai");

    const result = await generateText({
        model: openai("gpt-5"),
        prompt: "Describe digital art in one sentence",
    });

    console.log(result.text);
});
```

***

## Core Concepts <a href="#core-concepts" id="core-concepts"></a>

### Relationship Between Provider and Model

AI SDK uses a two-layer structure: **Provider** manages the API connection and authentication, while **Model** is the unit that actually executes AI tasks.

```
Provider (e.g., openai)
  └── Model (e.g., gpt-5)
  └── Model (e.g., gpt-5.2)
```

### provider::model Format

{% hint style="danger" %}
**Important:** Provider and Model are separated by a **double colon** `::`, not a single colon or slash.
{% endhint %}

```javascript
// ✅ Correct
"openai::gpt-5"
"anthropic::claude-sonnet-4-6-20250514"
"google::gemini-3-flash"

// ❌ Incorrect
"openai/gpt-5"
"openai:gpt-5"
```

This format is used in methods such as `getModel()` and `getDefaultModel()`.
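
If your plugin needs to work with the two parts separately (for display or logging, say), the string can be split on the double colon. A minimal sketch — `parseModelId` is a hypothetical helper for illustration, not part of the AI SDK API:

```javascript
// Hypothetical helper (not part of the AI SDK API): split a
// "provider::model" string into its two parts, rejecting malformed input.
function parseModelId(id) {
    const parts = id.split("::");
    if (parts.length !== 2 || !parts[0] || !parts[1]) {
        throw new Error(`Invalid model id "${id}" (expected "provider::model")`);
    }
    return { provider: parts[0], modelId: parts[1] };
}

parseModelId("openai::gpt-5"); // { provider: "openai", modelId: "gpt-5" }
```

Note that both incorrect forms from the list above (`"openai/gpt-5"`, `"openai:gpt-5"`) fail this check, since neither contains a `::` separator.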

### Synchronous vs Asynchronous Methods

Methods in AI SDK are clearly divided into synchronous and asynchronous categories:

**Synchronous methods** (no `await` needed):

* `getProviders()`, `getProvider()`, `getAvailableProviders()`, `getModel()`
* `getDefaultModel()`

**Asynchronous methods** (require `await`):

* `generateText()`, `generateObject()`, `streamText()`, `streamObject()`
* Provider instance methods: `verify()`, `getModels()`, `hasModel()`

### Three Ways to Get a Model

```javascript
const ai = eagle.extraModule.ai;

// ⭐ Recommended: Use default models, inheriting user preferences
const defaultLLM = ai.getModel(ai.getDefaultModel("chat"));   // Default Language Model
const defaultVLM = ai.getModel(ai.getDefaultModel("image"));  // Default Vision Model

// Method 2: Get directly using the provider::model format
const modelById = ai.getModel("openai::gpt-5");

// Method 3: Get the Provider first, then specify the Model
const openai = ai.getProvider("openai");
const modelFromProvider = openai("gpt-5");
```

{% hint style="success" %}
**Best Practice:** Unless your plugin has special requirements (e.g., features only supported by a specific Provider), you should prefer using `getDefaultModel()` to get the user's preferred model.
{% endhint %}

***

## generateText() — Basic Text Generation <a href="#generate-text" id="generate-text"></a>

Generate a text response using a specified model.

### Basic Usage (prompt)

```javascript
const ai = eagle.extraModule.ai;
const { generateText } = ai;

// Use the default Language Model
const model = ai.getModel(ai.getDefaultModel("chat"));

const result = await generateText({
    model,
    prompt: "Write a creative brief about digital art",
});

console.log(result.text);
```

### Using the messages Array

Use `messages` to set system prompts and multi-turn conversations:

```javascript
const model = ai.getModel(ai.getDefaultModel("chat"));

const result = await generateText({
    model,
    messages: [
        {
            role: "system",
            content: "You are a professional art critic.",
        },
        {
            role: "user",
            content: "Analyze the characteristics of color usage in Impressionism.",
        },
    ],
});

console.log(result.text);
```

### Multimodal (Text + Image)

```javascript
// Use the default Vision Model (supports image understanding)
const model = ai.getModel(ai.getDefaultModel("image"));

const result = await generateText({
    model,
    messages: [
        {
            role: "user",
            content: [
                {
                    type: "text",
                    text: "Describe the content of this image",
                },
                {
                    type: "image",
                    image: "https://example.com/sample-image.jpg",
                },
            ],
        },
    ],
});

console.log(result.text);
```

{% hint style="info" %}
For more advanced usage of `generateText` (such as `maxTokens`, `temperature`, and other parameters), refer to the [AI SDK official documentation](https://ai-sdk.dev/docs/ai-sdk-core/generating-text).
{% endhint %}

***

## generateObject() — Structured Object Generation <a href="#generate-object" id="generate-object"></a>

Have the AI return a structured JSON object according to a specified Schema.

### Using Zod Schema

```javascript
const ai = eagle.extraModule.ai;
const { generateObject } = ai;
const { z } = require("zod");

// Use the default Language Model for text-only tasks
const model = ai.getModel(ai.getDefaultModel("chat"));

const result = await generateObject({
    model,
    schema: z.object({
        tags: z.array(z.object({
            name: z.string(),
            reason: z.string(),
        })),
        description: z.string(),
    }),
    prompt: "Generate 5 tags and a description for a sunset beach photo",
});

console.log(result.object.tags);
console.log(result.object.description);
```

### Using JSON Schema

```javascript
const model = ai.getModel(ai.getDefaultModel("chat"));

const result = await generateObject({
    model,
    schema: {
        type: "object",
        properties: {
            tags: {
                type: "array",
                items: {
                    type: "object",
                    properties: {
                        name: { type: "string" },
                        reason: { type: "string" },
                    },
                },
            },
            description: { type: "string" },
        },
    },
    prompt: "Generate 5 tags and a description for a sunset beach photo",
});

console.log(result.object.tags);
console.log(result.object.description);
```

### Image Analysis Example

```javascript
// Use the default Vision Model for image understanding
const model = ai.getModel(ai.getDefaultModel("image"));

const result = await generateObject({
    model,
    schema: {
        type: "object",
        properties: {
            colors: { type: "array", items: { type: "string" } },
            style: { type: "string" },
            mood: { type: "string" },
        },
    },
    messages: [
        {
            role: "system",
            content: "You are a professional image analysis expert.",
        },
        {
            role: "user",
            content: [
                { type: "text", text: "Analyze the colors, style, and mood of this image." },
                { type: "image", image: "https://example.com/artwork.jpg" },
            ],
        },
    ],
});

console.log(result.object);
```

{% hint style="info" %}
For more advanced usage of `generateObject`, refer to the [AI SDK official documentation](https://ai-sdk.dev/docs/ai-sdk-core/generating-structured-data).
{% endhint %}

***

## streamText() — Streaming Text Generation <a href="#stream-text" id="stream-text"></a>

Receive AI responses incrementally via streaming — ideal for scenarios where results need to be displayed in real time.

```javascript
const ai = eagle.extraModule.ai;
const { streamText } = ai;
const model = ai.getModel(ai.getDefaultModel("chat"));

const { textStream } = streamText({
    model,
    prompt: "Give a detailed overview of the history of digital art",
});

// Use an async iterator to receive text incrementally
for await (const textPart of textStream) {
    console.log(textPart); // Display in real time
}
```

### Displaying in the UI in Real Time

```javascript
let fullText = "";

const { textStream } = streamText({
    model,
    prompt: "Write a short essay",
});

for await (const textPart of textStream) {
    fullText += textPart;
    // Update the UI element
    document.getElementById("output").textContent = fullText;
}
```

{% hint style="info" %}
For more advanced usage of `streamText`, refer to the [AI SDK official documentation](https://ai-sdk.dev/docs/ai-sdk-core/generating-text#streamtext).
{% endhint %}

***

## streamObject() — Streaming Object Generation <a href="#stream-object" id="stream-object"></a>

Receive structured objects incrementally via streaming. Each iteration yields the partially parsed object so far.

```javascript
const ai = eagle.extraModule.ai;
const { streamObject } = ai;
const model = ai.getModel(ai.getDefaultModel("chat"));

const { partialObjectStream } = streamObject({
    model,
    schema: {
        type: "object",
        properties: {
            analysis: {
                type: "object",
                properties: {
                    colors: { type: "array", items: { type: "string" } },
                    style: { type: "string" },
                    mood: { type: "string" },
                    suggestions: { type: "array", items: { type: "string" } },
                },
            },
        },
    },
    prompt: "Analyze this artwork's colors, style, and mood, and provide suggestions for improvement.",
});

for await (const partialObject of partialObjectStream) {
    console.log("Current result:", partialObject);
    // partialObject progressively gains more fields, for example:
    // { analysis: { colors: ["red"] } }
    // { analysis: { colors: ["red", "blue"], style: "Impressionism" } }
    // { analysis: { colors: ["red", "blue"], style: "Impressionism", mood: "serene" } }
}
```

{% hint style="success" %}
**Best Practice:** `streamObject` is ideal for scenarios where analysis results need to be progressively displayed in the UI, allowing users to see partial results before the AI finishes.
{% endhint %}

{% hint style="info" %}
For more advanced usage of `streamObject`, refer to the [AI SDK official documentation](https://ai-sdk.dev/docs/ai-sdk-core/generating-structured-data#streamobject).
{% endhint %}

***

## Provider Management Methods <a href="#provider-management" id="provider-management"></a>

All methods below are **synchronous** — no `await` required.

***

### getProviders() <a href="#get-providers" id="get-providers"></a>

Get an array of all registered Providers.

* Returns `ProviderFunction[]` — an array of all Providers

```javascript
const ai = eagle.extraModule.ai;

// ✅ Correct: returns an array
const providers = ai.getProviders();
console.log(providers.length); // 8

providers.forEach(provider => {
    console.log(provider.name); // "openai", "anthropic", "google", ...
});
```

{% hint style="danger" %}
**Note:** `getProviders()` returns an **array**, not an object. The following usage is incorrect:

```javascript
// ❌ Incorrect: cannot destructure as an object
const { openai, anthropic } = ai.getProviders();

// ❌ Incorrect: does not need await
const providers = await ai.getProviders();
```

{% endhint %}

***

### getProvider(providerName) <a href="#get-provider" id="get-provider"></a>

Get a specific Provider by name.

* `providerName` string — Provider name (e.g., `"openai"`, `"google"`)
* Returns `ProviderFunction | undefined` — the matching Provider, or `undefined` if not found

```javascript
const openai = ai.getProvider("openai");
const google = ai.getProvider("google");

if (openai) {
    const model = openai("gpt-5");
}
```

{% hint style="info" %}
If you need a specific Provider, use `getProvider()` instead of `getProviders()` for more concise code. However, in most cases, using `getDefaultModel()` directly is the better choice.
{% endhint %}

***

### getAvailableProviders() <a href="#get-available-providers" id="get-available-providers"></a>

Get all configured Providers (those the user has finished setting up).

* Returns `ProviderFunction[]` — an array of configured Providers

```javascript
const available = ai.getAvailableProviders();

if (available.length === 0) {
    console.log("No AI providers configured yet. Please configure one in AI SDK settings.");
} else {
    console.log(`${available.length} provider(s) configured:`);
    available.forEach(p => console.log(`- ${p.name}`));
}
```

{% hint style="info" %}
The difference between this method and `getProviders()` is that `getProviders()` returns all 8 Providers (including unconfigured ones), while `getAvailableProviders()` returns only those the user has finished configuring.
{% endhint %}
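
Combining the two lists, a plugin could work out which Providers remain unconfigured, e.g. to show setup hints in its UI. A sketch using a hypothetical helper (`ai` is `eagle.extraModule.ai`):

```javascript
// Hypothetical helper: names of Providers that exist but have not yet
// been configured, computed from the two lists described above.
function unconfiguredProviders(ai) {
    const configured = new Set(ai.getAvailableProviders().map(p => p.name));
    return ai.getProviders()
        .map(p => p.name)
        .filter(name => !configured.has(name));
}
```

For example, if the user has only configured `openai`, the result would contain the remaining seven Provider names.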

***

### getModel(providerAndModel) <a href="#get-model" id="get-model"></a>

Get a model instance directly using the `provider::model` format.

* `providerAndModel` string — format: `"provider::model"`
* Returns `Model` — a model object that can be passed directly to `generateText()` and other methods

```javascript
// Get a model directly
const model = ai.getModel("openai::gpt-5");

const result = await generateText({
    model: model,
    prompt: "Hello!",
});
```

{% hint style="danger" %}
**Note:** You must use the `::` double colon to separate the Provider and Model name; otherwise, an error will be thrown.
{% endhint %}

***

## Settings and Reload <a href="#settings" id="settings"></a>

***

### open() <a href="#open" id="open"></a>

Open the "AI Models" settings panel in Preferences. This is useful for providing a "Model Settings" button in your plugin's interface, allowing users to quickly configure Model Providers and Default Models.

* Returns `void`

```javascript
const ai = eagle.extraModule.ai;

// Example: place a "Model Settings" button in your plugin's interface
document.getElementById("settings-btn").addEventListener("click", () => {
    ai.open(); // Automatically opens the model settings section in Preferences
});
```

{% hint style="success" %}
**Best Practice:** When `getDefaultModel()` returns `undefined` (the user has not yet set a default model), display a prompt in your plugin's interface with a button that calls `open()` to guide the user through the setup.
{% endhint %}
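
That pattern might look like the following sketch (`resolveDefaultModel` is a hypothetical helper name; `ai` is `eagle.extraModule.ai`):

```javascript
// Hypothetical helper: return the user's default model, or open the
// settings panel and return null when no default has been set yet.
function resolveDefaultModel(ai, type = "chat") {
    const id = ai.getDefaultModel(type); // e.g. "openai::gpt-5", or undefined
    if (!id) {
        ai.open(); // guide the user to the "AI Models" settings panel
        return null;
    }
    return ai.getModel(id);
}
```

Your plugin can then treat a `null` return as "show the setup prompt" and skip the AI call.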

***

### reload() <a href="#reload" id="reload"></a>

Reload the AI SDK configuration. After the user adjusts settings in the panel opened via `open()`, call this method to read the latest configuration.

* Returns `void`

```javascript
const ai = eagle.extraModule.ai;

// Guide the user to configure settings, then reload
ai.open();

// After the user completes the setup, reload to get the latest configuration
ai.reload();

const defaultLLM = ai.getDefaultModel("chat"); // Get the model the user just configured
```

{% hint style="info" %}
`open()` does not block execution, so the system cannot know when the user finishes configuring. It is recommended to call `reload()` when you need to use the model to ensure you read the latest configuration.
{% endhint %}
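
One way to apply this is to reload right before reading the configuration, as in this sketch (`getFreshDefaultModel` is a hypothetical helper name):

```javascript
// Hypothetical helper: always re-read the configuration before using it,
// since open() returns immediately and the user may still be editing settings.
function getFreshDefaultModel(ai, type = "chat") {
    ai.reload(); // pick up any changes made in Preferences
    return ai.getDefaultModel(type); // "provider::model" string, or undefined
}
```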

***

## Default Model Methods <a href="#default-model" id="default-model"></a>

AI SDK supports setting and reading default models, allowing users to specify their preferred models centrally in the "AI Models" section of Preferences.

***

### getDefaultModel(type) <a href="#get-default-model" id="get-default-model"></a>

Get the default model for the specified type. This is a **synchronous method**.

* `type` string — model type, possible values: `"chat"` (Language Model) or `"image"` (Vision Model)
* Returns `string | undefined` — the `"provider::model"` string of the default model, or `undefined` if not set

```javascript
// Get the user's default Language Model
const defaultLLM = ai.getDefaultModel("chat");

if (defaultLLM) {
    console.log(`Default Language Model: ${defaultLLM}`); // "openai::gpt-5"
    const model = ai.getModel(defaultLLM);
    const result = await generateText({ model, prompt: "Hello!" });
}

// Get the user's default Vision Model
const defaultVLM = ai.getDefaultModel("image");

if (defaultVLM) {
    console.log(`Default Vision Model: ${defaultVLM}`); // "openai::dall-e-3"
}
```

{% hint style="info" %}
Users can separately select their preferred **Language Model** (`"chat"`) and **Vision Model** (`"image"`) in the "AI Models" section of Preferences, and plugins can retrieve whichever they need.
{% endhint %}

***

## Provider Instance Methods <a href="#provider-instance-methods" id="provider-instance-methods"></a>

Provider instances obtained via `getProvider()` can be called as functions to get a Model, and they also provide the following methods.

***

### verify() <a href="#verify" id="verify"></a>

Verify whether the Provider's connection and authentication are valid. Used to check if the user's current configuration can connect successfully.

* Returns `Promise<VerifyResult>` — verification result object
  * `ok` boolean — whether verification succeeded
  * `error` APIError (optional) — error details on failure

```javascript
const openai = ai.getProvider("openai");

const result = await openai.verify();

if (result.ok) {
    console.log("Connection successful!");
} else {
    console.error("Connection failed:", result.error.message);
    console.error("HTTP status code:", result.error.status);
}
```

{% hint style="danger" %}
**Note:** `verify()` does not return a `boolean`; it returns an object containing `ok` and `error`.

```javascript
// ❌ Incorrect
const isValid = await openai.verify();
if (isValid) { ... }

// ✅ Correct
const result = await openai.verify();
if (result.ok) { ... }
```

{% endhint %}

***

### getModels() <a href="#get-models" id="get-models"></a>

Get a list of all available models for this Provider.

* Returns `Promise<string[]>` — an array of model IDs

```javascript
const openai = ai.getProvider("openai");
const models = await openai.getModels();

console.log(models);
// ["gpt-5.2", "gpt-5", "o3", ...]
```

{% hint style="info" %}
This method sends a request to the Provider's API. Make sure the user has configured this Provider. If not configured, an `APIError` will be thrown.
{% endhint %}

***

### hasModel(modelId) <a href="#has-model" id="has-model"></a>

Check whether this Provider includes a specific model.

* `modelId` string — the model ID (e.g., `"gpt-5"`)
* Returns `Promise<boolean>` — whether the model exists

```javascript
const openai = ai.getProvider("openai");
const exists = await openai.hasModel("gpt-5");

if (exists) {
    console.log("This model is available");
}
```

***

## Provider Instance Properties <a href="#provider-instance-properties" id="provider-instance-properties"></a>

The following are read-only properties of Provider instances:

***

### `name` string

The name of the Provider.

```javascript
const openai = ai.getProvider("openai");
console.log(openai.name); // "openai"
```

***

### `baseURL` string | undefined

The currently configured API endpoint.

```javascript
const openai = ai.getProvider("openai");
console.log(openai.baseURL); // "https://api.openai.com" or undefined
```

***

## Supported Providers Reference <a href="#supported-providers" id="supported-providers"></a>

| Provider          | Name                  | Type                         | Default Base URL                                    |
| ----------------- | --------------------- | ---------------------------- | --------------------------------------------------- |
| OpenAI            | `"openai"`            | Commercial (Cloud)           | Requires manual configuration                       |
| Anthropic         | `"anthropic"`         | Commercial (Cloud)           | Requires manual configuration                       |
| Google Gemini     | `"google"`            | Commercial (Cloud)           | Requires manual configuration                       |
| DeepSeek          | `"deepseek"`          | Commercial (Cloud)           | Requires manual configuration                       |
| Tongyi Qwen       | `"tongyi"`            | Commercial (Cloud)           | `https://dashscope.aliyuncs.com/compatible-mode/v1` |
| Ollama            | `"ollama"`            | Local                        | `http://localhost:11434/v1`                         |
| LM Studio         | `"lmstudio"`          | Local                        | `http://localhost:1234/v1`                          |
| OpenAI Compatible | `"openai-compatible"` | Custom (Compatible Protocol) | Requires manual configuration (API Key optional)    |

{% hint style="info" %}
**OpenAI Compatible** works with any service that implements the OpenAI API protocol. Users only need to provide a Base URL; the API Key is optional (required for some cloud services, typically not needed for local servers). The system automatically appends `/v1` to the URL if not already present.
{% endhint %}
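
The `/v1` normalization described above behaves roughly like this sketch (an illustration of the rule, not the SDK's actual implementation):

```javascript
// Illustration only: append "/v1" to a Base URL unless it is already present.
function normalizeBaseURL(url) {
    const trimmed = url.replace(/\/+$/, ""); // drop trailing slashes
    return trimmed.endsWith("/v1") ? trimmed : `${trimmed}/v1`;
}

normalizeBaseURL("http://localhost:1234");    // "http://localhost:1234/v1"
normalizeBaseURL("http://localhost:1234/v1"); // "http://localhost:1234/v1"
```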

{% hint style="danger" %}
**Note:** The Provider name for Google Gemini is `"google"`, **not** `"gemini"`.

```javascript
// ✅ Correct
const google = ai.getProvider("google");
ai.getModel("google::gemini-3-flash");

// ❌ Incorrect
const gemini = ai.getProvider("gemini");
ai.getModel("gemini::gemini-3-flash");
```

{% endhint %}

***

## Error Handling <a href="#error-handling" id="error-handling"></a>

### APIError Class

AI SDK throws an `APIError` when an API request fails, containing complete error information.

#### Properties

| Property       | Type                  | Description                                   |
| -------------- | --------------------- | --------------------------------------------- |
| `message`      | `string`              | Error message                                 |
| `status`       | `number \| undefined` | HTTP status code (e.g., 401, 403, 500)        |
| `statusText`   | `string \| undefined` | HTTP status text (e.g., "Unauthorized")       |
| `code`         | `string \| undefined` | Error code (extracted from the response body) |
| `provider`     | `string \| undefined` | Provider name                                 |
| `url`          | `string \| undefined` | Request URL                                   |
| `responseBody` | `unknown`             | Raw error content returned by the server      |

#### Methods

| Method       | Description                                                    |
| ------------ | -------------------------------------------------------------- |
| `toJSON()`   | Returns a complete error details object (suitable for logging) |
| `toString()` | Returns the error message string                               |

### Error Handling Example

```javascript
const ai = eagle.extraModule.ai;
const { generateText, APIError } = ai;

try {
    const model = ai.getModel(ai.getDefaultModel("chat"));
    const result = await generateText({
        model,
        prompt: "Hello!",
    });
    console.log(result.text);
} catch (error) {
    if (error instanceof APIError) {
        console.error("API error:", error.message);
        console.error("Status code:", error.status);       // 401, 429, 500...
        console.error("Provider:", error.provider);         // "openai"

        // Handle based on status code
        switch (error.status) {
            case 401:
                console.error("Invalid API Key. Please check your AI SDK settings.");
                break;
            case 429:
                console.error("Rate limit exceeded. Please try again later.");
                break;
            case 500:
                console.error("Internal server error.");
                break;
        }

        // Log full details
        console.log(JSON.stringify(error.toJSON(), null, 2));
    } else {
        console.error("Unexpected error:", error);
    }
}
```

### Network Errors

When the Provider is unreachable (e.g., local Ollama is not running), you will also receive an `APIError`:

```javascript
const ollama = ai.getProvider("ollama");
const result = await ollama.verify();

if (!result.ok) {
    console.error(result.error.message);
    // "[ollama] Network error: fetch failed"
}
```

***

## Best Practices <a href="#best-practices" id="best-practices"></a>

### 1. Prefer Using Default Models

This is the most important recommendation. Use `getDefaultModel()` to inherit the user's preferences from the AI SDK settings interface, rather than hardcoding a specific Provider and model in your code:

```javascript
const ai = eagle.extraModule.ai;
const { generateText } = ai;

// ⭐ Recommended: the user has already set their preferred model in AI SDK
const defaultLLM = ai.getDefaultModel("chat");

if (!defaultLLM) {
    console.log("Please select a default Language Model in AI SDK settings first");
    return;
}

const model = ai.getModel(defaultLLM);
const result = await generateText({ model, prompt: "..." });
```

**Why not hardcode models?** If you specify `ai.getProvider("openai")("gpt-5")` in your code but the user hasn't configured OpenAI, it will fail. You would need to handle the "Provider not configured" edge case separately, which is cumbersome. Using default models guarantees the user has already configured and verified the model, saving a significant amount of defensive code.
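
If your plugin genuinely requires one specific Provider, at least verify that it is configured before issuing any requests. A sketch (`requireProvider` is a hypothetical helper name; `ai` is `eagle.extraModule.ai`):

```javascript
// Hypothetical helper: fail fast (and open the settings panel) when a
// required Provider has not been configured by the user.
function requireProvider(ai, name) {
    const configured = ai.getAvailableProviders().some(p => p.name === name);
    if (!configured) {
        ai.open(); // let the user configure the missing Provider
        throw new Error(`Provider "${name}" is not configured`);
    }
    return ai.getProvider(name);
}
```

This keeps the "Provider not configured" edge case in one place instead of scattering checks before every call.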

### 2. Handle verify() Results Correctly

```javascript
// ✅ Correct: check result.ok
const result = await openai.verify();
if (result.ok) {
    // Connection successful
} else {
    console.error(result.error.message);
}

// ❌ Incorrect: do not use directly as a boolean
if (await openai.verify()) { ... }
```

### 3. Use Streaming for Long Text Generation

When expecting a long response, use `streamText()` for a better user experience:

```javascript
const { textStream } = streamText({
    model: ai.getModel(ai.getDefaultModel("chat")),
    prompt: longPrompt,
});

for await (const chunk of textStream) {
    appendToUI(chunk);
}
```

***

## Full Example <a href="#full-example" id="full-example"></a>

Below is a comprehensive plugin example demonstrating how to properly use the main features of AI SDK:

```javascript
eagle.onPluginCreate(async (plugin) => {
    const ai = eagle.extraModule.ai;
    const { generateText, generateObject, streamText, APIError } = ai;

    // 1. Get the user's default Language Model from Preferences
    const defaultLLM = ai.getDefaultModel("chat");

    if (!defaultLLM) {
        console.log("Please select a default Language Model in AI SDK settings first");
        return;
    }

    const model = ai.getModel(defaultLLM);
    console.log(`Using default model: ${defaultLLM}`);

    // 2. Basic text generation
    try {
        const result = await generateText({
            model,
            prompt: "Describe this plugin's functionality in one sentence",
        });
        console.log("Generated result:", result.text);
    } catch (error) {
        if (error instanceof APIError) {
            console.error(`API error [${error.status}]: ${error.message}`);
        } else {
            console.error("Text generation failed:", error.message);
        }
    }

    // 3. Structured object generation
    try {
        const result = await generateObject({
            model,
            schema: {
                type: "object",
                properties: {
                    tags: { type: "array", items: { type: "string" } },
                    category: { type: "string" },
                },
                required: ["tags", "category"],
            },
            prompt: "Generate 3 tags and a category for a cat photo",
        });
        console.log("Tags:", result.object.tags);
        console.log("Category:", result.object.category);
    } catch (error) {
        console.error("Object generation failed:", error.message);
    }

    // 4. Streaming text generation
    try {
        const { textStream } = streamText({
            model,
            prompt: "List 5 creative techniques for digital art",
        });

        let fullText = "";
        for await (const chunk of textStream) {
            fullText += chunk;
        }
        console.log("Streaming result:", fullText);
    } catch (error) {
        console.error("Streaming generation failed:", error.message);
    }
});
```

***

## API Cheat Sheet <a href="#api-cheatsheet" id="api-cheatsheet"></a>

### AI SDK Top-Level Methods

| Method                      | Sync/Async | Return Type                     | Description                                  |
| --------------------------- | ---------- | ------------------------------- | -------------------------------------------- |
| `getProviders()`            | Sync       | `ProviderFunction[]`            | Get all Providers                            |
| `getProvider(name)`         | Sync       | `ProviderFunction \| undefined` | Get a specific Provider                      |
| `getAvailableProviders()`   | Sync       | `ProviderFunction[]`            | Get configured Providers                     |
| `getModel(provider::model)` | Sync       | `Model`                         | Get a model instance                         |
| `getDefaultModel(type)`     | Sync       | `string \| undefined`           | Get the default model                        |
| `open()`                    | Sync       | `void`                          | Open the model settings panel in Preferences |
| `reload()`                  | Sync       | `void`                          | Reload the latest configuration              |
| `generateText(options)`     | Async      | `Promise<GenerateTextResult>`   | Generate text                                |
| `generateObject(options)`   | Async      | `Promise<GenerateObjectResult>` | Generate a structured object                 |
| `streamText(options)`       | Sync       | `StreamTextResult`              | Streaming text generation (iterate the result asynchronously) |
| `streamObject(options)`     | Sync       | `StreamObjectResult`            | Streaming object generation (iterate the result asynchronously) |

### Provider Instance Methods

| Method              | Sync/Async | Return Type             | Description                          |
| ------------------- | ---------- | ----------------------- | ------------------------------------ |
| `provider(modelId)` | Sync       | `Model`                 | Get a model (function call)          |
| `verify()`          | Async      | `Promise<VerifyResult>` | Verify connection and authentication |
| `getModels()`       | Async      | `Promise<string[]>`     | Get the model list                   |
| `hasModel(modelId)` | Async      | `Promise<boolean>`      | Check if a model exists              |
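
`getModels()` and `hasModel()` make it easy to pick a model defensively: prefer a specific model if the Provider actually serves it, otherwise fall back to whatever it reports. A sketch, where `resolveModelId` and `preferred` are illustrative names:

```javascript
// Sketch: use `preferred` only if the Provider serves it; otherwise
// fall back to the first model the Provider reports via getModels().
async function resolveModelId(provider, preferred) {
    if (await provider.hasModel(preferred)) return preferred;
    const models = await provider.getModels();
    return models[0]; // undefined if the Provider reports no models
}
```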

### Provider Instance Properties

| Property  | Type                  | Description               |
| --------- | --------------------- | ------------------------- |
| `name`    | `string`              | Provider name (read-only) |
| `baseURL` | `string \| undefined` | API endpoint (read-only)  |

### VerifyResult

| Property | Type                    | Description                    |
| -------- | ----------------------- | ------------------------------ |
| `ok`     | `boolean`               | Whether verification succeeded |
| `error`  | `APIError \| undefined` | Error object on failure        |
