ai.escalayer #

Escalayer

Escalayer is a module for executing AI tasks with automatic escalation to more powerful models when needed. It provides a framework for creating complex AI workflows by breaking them down into sequential unit tasks.

Overview

Escalayer allows you to:

  1. Create complex AI tasks composed of multiple sequential unit tasks
  2. Execute each unit task with a cheap AI model first
  3. Automatically retry with a more powerful model if the task fails
  4. Process and validate AI responses with custom callback functions

Architecture

The module is organized into the following components:

  • Task: Represents a complete AI task composed of multiple sequential unit tasks
  • UnitTask: Represents a single step in the task with prompt generation and response processing
  • ModelConfig: Defines the configuration for an AI model
  • OpenRouter Integration: Uses OpenRouter to access a wide range of AI models

Usage

Basic Example

import freeflowuniverse.herolib.ai.mcp.aitools.escalayer

fn main() {
    // Create a new task
    mut task := escalayer.new_task(
        name: 'rhai_wrapper_creator'
        description: 'Create Rhai wrappers for Rust functions'
    )

    // Define a unit task
    task.new_unit_task(
        name: 'separate_functions'
        prompt_function: separate_functions
        callback_function: process_functions
    )

    // Initiate the task
    result := task.initiate('path/to/rust/file.rs') or {
        println('Task failed: ${err}')
        return
    }

    println('Task completed successfully')
    println(result)
}

// Define the prompt function
fn separate_functions(input string) string {
    return 'Read the Rust file and separate it into functions: ${input}'
}

// Define the callback function
fn process_functions(response string) !string {
    // Process the AI response
    // Return error if processing fails
    if response.contains('error') {
        return error('Failed to process functions: Invalid response format')
    }
    return response
}
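
Unit tasks run in the order they are added, and each one receives the previous task's result as its input. As a sketch of how the chain grows, a second unit task can be registered with another new_unit_task call before initiate; the create_wrappers and process_wrappers functions referenced here are assumed to be defined like the pair above:

// Chain a second unit task; it receives the output of 'separate_functions'
task.new_unit_task(
    name: 'create_wrappers'
    prompt_function: create_wrappers     // fn (string) string, assumed defined
    callback_function: process_wrappers  // fn (string) !string, assumed defined
)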

Advanced Configuration

You can configure each unit task with different models, retry counts, and other parameters:

// Configure with custom parameters
task.new_unit_task(
    name: 'create_wrappers'
    prompt_function: create_wrappers
    callback_function: process_wrappers
    retry_count: 2
    base_model: escalayer.ModelConfig{
        name: 'claude-3-haiku-20240307'
        provider: 'anthropic'
        temperature: 0.5
        max_tokens: 4000
    }
)
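
The escalation target can be overridden in the same way via retry_model. A minimal sketch, assuming Claude 3.5 Sonnet as the stronger model; the model name and parameter values here are illustrative, not library defaults:

// Override only the escalation model; omitted optional fields presumably
// fall back to the module defaults (see default_base_model / default_retry_model)
task.new_unit_task(
    name: 'create_wrappers'
    prompt_function: create_wrappers
    callback_function: process_wrappers
    retry_model: escalayer.ModelConfig{
        name: 'claude-3-5-sonnet-20240620' // illustrative escalation model
        provider: 'anthropic'
        temperature: 0.2
        max_tokens: 8000
    }
)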

How It Works

  1. When you call task.initiate(input), the first unit task is executed with its prompt function.
  2. The prompt is sent to the base AI model.
  3. The response is processed by the callback function.
  4. If the callback returns an error, the unit task is retried with the same model; the error message is prepended to the input so the next attempt can correct the failure.
  5. After the configured number of retries, the unit task escalates to the more powerful retry model.
  6. Once a unit task succeeds, its result is passed as input to the next unit task.
  7. This process continues until all unit tasks have completed (see the sketch below).
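
The retry and escalation loop can be pictured with the sketch below. It is a conceptual illustration of a unit task's execution, not the module's actual UnitTask.execute implementation, and call_model is a hypothetical helper standing in for the OpenRouter request:

// Conceptual sketch only: how a unit task retries and then escalates
fn execute_sketch(ut UnitTask, input string) !string {
    mut current_input := input
    // Try the cheap base model first, then the stronger retry model
    for model in [ut.base_model, ut.retry_model] {
        for _ in 0 .. ut.retry_count { // simplified: retry_count attempts per model
            prompt := ut.prompt_function(current_input)
            response := call_model(model, prompt)! // hypothetical OpenRouter call
            result := ut.callback_function(response) or {
                // On callback failure, prepend the error to the input and retry
                current_input = '${err.msg()}\n${input}'
                continue
            }
            return result
        }
    }
    return error('unit task ${ut.name} failed after escalation')
}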

Environment Setup

Escalayer uses OpenRouter for AI model access. Set the following environment variable:

OPENROUTER_API_KEY=your_api_key_here

You can get an API key from OpenRouter (https://openrouter.ai).
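
If you prefer to fail fast when the key is missing, you can check for it before initiating a task. This helper is a usage sketch (not part of the escalayer API) and uses V's standard os.getenv:

import os

// Usage sketch: abort early if the OpenRouter key is not configured
fn ensure_openrouter_key() ! {
    if os.getenv('OPENROUTER_API_KEY') == '' {
        return error('OPENROUTER_API_KEY is not set')
    }
}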

Original Requirements

This module was designed based on the following requirements:

  • Create a system for executing AI tasks with a retry mechanism
  • Escalate to more powerful models if cheaper models fail
  • Use an OpenAI-compatible client over OpenRouter for AI calls
  • Break down complex tasks into sequential unit tasks
  • Each unit task has a function that generates a prompt and a callback that processes the response
  • Retry if the callback returns an error, with the error message prepended to the input string

For a detailed architecture overview, see escalayer_architecture.md.

For a complete example, see example.v.

fn default_base_model #

fn default_base_model() ModelConfig

Default model configurations

fn default_retry_model #

fn default_retry_model() ModelConfig

fn new_task #

fn new_task(params TaskParams) &Task

Create a new task

struct ModelConfig #

struct ModelConfig {
pub mut:
	name        string
	provider    string
	temperature f32
	max_tokens  int
}

ModelConfig defines the configuration for an AI model

struct Task #

struct Task {
pub mut:
	name           string
	description    string
	unit_tasks     []UnitTask
	current_result string
}

Task represents a complete AI task composed of multiple sequential unit tasks

fn (Task) new_unit_task #

fn (mut t Task) new_unit_task(params UnitTaskParams) &UnitTask

Add a new unit task to the task

fn (Task) initiate #

fn (mut t Task) initiate(input string) !string

Initiate the task execution

struct TaskParams #

@[params]
struct TaskParams {
pub:
	name        string
	description string
}

TaskParams defines the parameters for creating a new task

struct UnitTask #

struct UnitTask {
pub mut:
	name              string
	prompt_function   fn (string) string
	callback_function fn (string) !string
	base_model        ModelConfig
	retry_model       ModelConfig
	retry_count       int
}

UnitTask represents a single step in the task

fn (UnitTask) execute #

fn (mut ut UnitTask) execute(input string) !string

Execute the unit task

struct UnitTaskParams #

@[params]
struct UnitTaskParams {
pub:
	name              string
	prompt_function   fn (string) string
	callback_function fn (string) !string
	base_model        ?ModelConfig
	retry_model       ?ModelConfig
	retry_count       ?int
}

UnitTaskParams defines the parameters for creating a new unit task