
Scheduling System

FYV implements a self-scheduling mechanism that enables perpetual automated rebalancing without relying on external bots or keepers. This document explains how the scheduling system works and how it ensures continuous vault optimization.

Overview

The scheduling system consists of three main components:

  1. FlowTransactionScheduler - Flow's native transaction scheduling infrastructure
  2. SchedulerRegistry - Tracks all vaults and their scheduling state
  3. Supervisor - Recovery mechanism for stuck vaults

Together, these components create a self-sustaining automation system where vaults schedule their own rebalancing indefinitely.

Self-Scheduling Mechanism

How It Works

Each AutoBalancer implements a self-perpetuating scheduling loop:

Initial Schedule (vault creation):


// During vault creation
autoBalancer.scheduleFirstRebalance()

// Internally, this schedules the first execution:
FlowTransactionScheduler.schedule(
    functionCall: "rebalance()",
    executeAt: currentTime + 60 seconds
)

Execution (scheduled time arrives):


// Scheduler calls
autoBalancer.rebalance()

// Perform rebalancing logic
checkRatio()
executeIfNeeded()

// Reschedule next execution
scheduleNextRebalance()

FlowTransactionScheduler.schedule(
    functionCall: "rebalance()",
    executeAt: currentTime + 60 seconds
)

Perpetual Loop:


Execute → Rebalance → Schedule Next → Wait 60s → Execute → ...

This creates an infinite loop where each rebalance execution schedules the next one, requiring no external coordination.

Atomic Registration

Vault creation and scheduling registration happen atomically to prevent orphaned vaults:


transaction createVault() {
    prepare(signer: AuthAccount) {
        // Create all components
        let vault <- createYieldVault(...)
        let autoBalancer <- createAutoBalancer(...)
        let position <- createPosition(...)

        // Register (all steps must succeed)
        registerInRegistry(autoBalancer)     // Step 1
        scheduleFirstRebalance(autoBalancer) // Step 2
        linkComponents(...)                  // Step 3

        // If ANY step fails → entire transaction reverts
        // No partial vaults created
    }
}

Atomicity guarantee: Either the vault is fully created with a working schedule, or the transaction fails and nothing is created.

SchedulerRegistry

The SchedulerRegistry maintains a global record of all active vaults and their scheduling state.

Registry Structure


pub contract FlowYieldVaultsSchedulerRegistry {
    // Maps vault ID → scheduling info
    access(contract) var registry: {UInt64: ScheduleInfo}

    pub struct ScheduleInfo {
        pub let vaultID: UInt64
        pub let autoBalancerCap: Capability<&AutoBalancer>
        pub let nextScheduledTime: UFix64
        pub let status: ScheduleStatus // Active, Pending, Stuck
    }

    pub enum ScheduleStatus: UInt8 {
        pub case Active  // Scheduling working normally
        pub case Pending // Awaiting schedule
        pub case Stuck   // Failed to reschedule
    }
}

Registration Lifecycle

On vault creation:


registry.register(
    vaultID: 42,
    autoBalancerCap: capability,
    status: ScheduleStatus.Pending
)

After first successful schedule:


registry.updateStatus(
    vaultID: 42,
    status: ScheduleStatus.Active,
    nextScheduledTime: currentTime + 60
)

If schedule fails:


registry.updateStatus(
    vaultID: 42,
    status: ScheduleStatus.Stuck
)
// Supervisor will attempt recovery

On vault liquidation:


registry.unregister(vaultID: 42)
// Vault removed from tracking

Supervisor Recovery System

The Supervisor handles vaults that become stuck or fail to self-schedule.

What Can Go Wrong?

Despite atomicity guarantees, vaults can become stuck for several reasons:

  1. Transaction failure during reschedule due to gas issues or network congestion
  2. Capability revocation if a user accidentally breaks the AutoBalancer capability
  3. Scheduler overload if too many transactions scheduled simultaneously
  4. Network issues during schedule transaction propagation

Supervisor Implementation


pub resource Supervisor {
    // Scan registry and recover stuck vaults
    pub fun recover() {
        let pending = registry.getPendingVaults(limit: 50)

        for vaultID in pending {
            let scheduleInfo = registry.getScheduleInfo(vaultID)

            // Attempt to reschedule
            if let autoBalancer = scheduleInfo.autoBalancerCap.borrow() {
                autoBalancer.scheduleNextRebalance()

                registry.updateStatus(
                    vaultID: vaultID,
                    status: ScheduleStatus.Active
                )
            }
        }

        // If more work remains, schedule next supervisor run
        if registry.hasPendingVaults() {
            self.scheduleSelf()
        }
    }

    access(self) fun scheduleSelf() {
        FlowTransactionScheduler.schedule(
            functionCall: "recover()",
            executeAt: currentTime + 120 seconds
        )
    }
}

Bounded Processing

The Supervisor processes a maximum of 50 vaults per execution to prevent transaction timeouts:


Iteration 1: Process vaults 1-50 → Reschedule supervisor
Iteration 2: Process vaults 51-100 → Reschedule supervisor
Iteration 3: Process vaults 101-120 → No more pending, stop

This ensures the recovery process can handle any number of stuck vaults without failing due to gas limits.
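For illustration, the registry-side bounded query that the Supervisor calls might be implemented along these lines. This is a sketch, not the actual contract: the getPendingVaults signature appears in the Supervisor code above, but its body here is an assumption.


// Hypothetical body for the registry-side bounded query used by the
// Supervisor. Returning at most `limit` IDs keeps a single supervisor
// run within transaction gas limits.
pub fun getPendingVaults(limit: Int): [UInt64] {
    let result: [UInt64] = []
    for vaultID in self.registry.keys {
        if result.length >= limit {
            break
        }
        let info = self.registry[vaultID]!
        // Pending and Stuck vaults both need (re)scheduling
        if info.status == ScheduleStatus.Pending || info.status == ScheduleStatus.Stuck {
            result.append(vaultID)
        }
    }
    return result
}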

Recovery Triggers

The Supervisor runs in two scenarios:

1. Scheduled Recovery (proactive):


Every 10 minutes:
→ Check for pending vaults
→ Attempt recovery
→ Reschedule if more work exists
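A minimal sketch of how this recurring run could be bootstrapped at deployment, using the same pseudocode scheduler API as the examples above. The storage path and setup transaction are assumptions for illustration:


transaction bootstrapSupervisor() {
    prepare(admin: AuthAccount) {
        // Borrow the admin-held Supervisor (storage path assumed)
        let supervisor = admin.borrow<&Supervisor>(from: /storage/FYVSupervisor)
            ?? panic("Supervisor not found")

        // Kick off the first run; recover() reschedules itself
        // whenever pending vaults remain
        supervisor.recover()

        // Schedule the regular proactive run
        FlowTransactionScheduler.schedule(
            functionCall: "recover()",
            executeAt: currentTime + 600 seconds // supervisorIntervalSeconds
        )
    }
}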

2. Manual Recovery (reactive):


transaction triggerSupervisor() {
    prepare(admin: AuthAccount) {
        let supervisor = admin.borrow<&Supervisor>(...)
        supervisor.recover()
    }
}

Scheduling Parameters

Key configuration parameters control scheduling behavior:


pub struct SchedulingConfig {
    // Rebalancing frequency
    pub let rebalanceIntervalSeconds: UInt64 // Default: 60

    // Supervisor recovery frequency
    pub let supervisorIntervalSeconds: UInt64 // Default: 600 (10 min)

    // Max vaults per supervisor run
    pub let maxSupervisorBatchSize: UInt64 // Default: 50

    // Stale threshold (mark as stuck)
    pub let staleThresholdSeconds: UInt64 // Default: 300 (5 min)
}

Tuning Considerations

Rebalance Interval:

  • Shorter (30s): More responsive, higher gas costs, better optimization
  • Longer (120s): Less responsive, lower gas costs, acceptable for stable vaults

Supervisor Interval:

  • Shorter (300s): Faster recovery, more frequent checks, higher overhead
  • Longer (1200s): Slower recovery, less overhead, acceptable for stable network

Batch Size:

  • Smaller (25): Lower gas per execution, more supervisor runs needed
  • Larger (100): Higher gas per execution, fewer runs needed, risk of timeout
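As a concrete example, a deployment favoring lower overhead over responsiveness might be tuned as follows. The initializer arguments are assumptions based on the SchedulingConfig fields above, not a documented constructor:


// Hypothetical construction of a tuned configuration:
// slower rebalancing and recovery, smaller supervisor batches
let config = SchedulingConfig(
    rebalanceIntervalSeconds: 120,   // lower gas, acceptable for stable vaults
    supervisorIntervalSeconds: 1200, // slower recovery, less overhead
    maxSupervisorBatchSize: 25,      // lower gas per supervisor run
    staleThresholdSeconds: 600       // mark stuck after several missed intervals
)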

Monitoring Scheduling Health

Users and administrators can monitor the scheduling system's health:

Check Vault Schedule Status


import FlowYieldVaultsSchedulerRegistry from 0xFYV

pub fun main(vaultID: UInt64): ScheduleStatus {
    let registry = FlowYieldVaultsSchedulerRegistry.getRegistry()
    let info = registry.getScheduleInfo(vaultID)

    return info.status
}
// Returns: Active, Pending, or Stuck

Get Next Scheduled Time


pub fun main(vaultID: UInt64): UFix64 {
    let registry = FlowYieldVaultsSchedulerRegistry.getRegistry()
    let info = registry.getScheduleInfo(vaultID)

    return info.nextScheduledTime
}
// Returns: Unix timestamp of next rebalance

Count Pending Vaults


pub fun main(): UInt64 {
    let registry = FlowYieldVaultsSchedulerRegistry.getRegistry()
    return registry.countPendingVaults()
}
// Returns: Number of vaults awaiting schedule

Failure Modes and Recovery

Scenario 1: Single Vault Fails to Reschedule

What happens:

  1. Vault executes rebalance successfully
  2. Reschedule transaction fails (network issue)
  3. Vault marked as "Stuck" in registry
  4. Supervisor detects stuck vault on next run
  5. Supervisor reschedules the vault
  6. Vault returns to "Active" status

User impact: Minor delay (up to 10 minutes) before next rebalance

Scenario 2: Scheduler Overload

What happens:

  1. Many vaults scheduled at same time
  2. Scheduler queue fills up
  3. Some reschedule transactions time out
  4. Multiple vaults marked "Stuck"
  5. Supervisor processes in batches of 50
  6. All vaults eventually recovered

User impact: Temporary scheduling delays, no loss of funds

Scenario 3: Capability Revocation

What happens:

  1. User accidentally unlinks AutoBalancer capability
  2. Vault can no longer be scheduled
  3. Vault marked "Stuck" permanently
  4. User must manually fix capability
  5. Call forceRebalance() to restart scheduling (see the sketch after this scenario)

User impact: Vault stops rebalancing until fixed
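A recovery transaction for this scenario might look like the following sketch. It uses the AuthAccount capability-linking style of the other examples in this document; the storage and private paths are assumptions, while forceRebalance() is the entry point named above:


transaction repairAutoBalancerCapability() {
    prepare(user: AuthAccount) {
        // Re-link the AutoBalancer capability (paths assumed for illustration)
        user.unlink(/private/FYVAutoBalancer)
        user.link<&AutoBalancer>(
            /private/FYVAutoBalancer,
            target: /storage/FYVAutoBalancer
        )

        // Restart the self-scheduling loop
        let autoBalancer = user.borrow<&AutoBalancer>(from: /storage/FYVAutoBalancer)
            ?? panic("AutoBalancer not found")
        autoBalancer.forceRebalance()
    }
}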

Scenario 4: Supervisor Failure

What happens:

  1. Supervisor itself fails to reschedule
  2. Stuck vaults accumulate
  3. Admin manually triggers supervisor
  4. Supervisor recovers all pending vaults
  5. Supervisor returns to normal operation

User impact: Longer delays (requires admin intervention)

Best Practices

Monitor Your Vault: Check scheduling status periodically to ensure it remains in the "Active" state.

Don't Revoke Capabilities: Avoid unlinking or destroying AutoBalancer capabilities as this breaks scheduling.

Use forceRebalance() Sparingly: Manual rebalancing bypasses scheduling logic; only use if truly stuck.

Track Rebalance History: Monitor rebalance frequency to detect scheduling issues early.
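For example, a monitoring script along these lines could flag a vault whose next rebalance is overdue. The timestamp comparison and the 300-second threshold (the staleThresholdSeconds default) are assumptions layered on the registry API shown earlier:


import FlowYieldVaultsSchedulerRegistry from 0xFYV

// Returns true if the vault's next rebalance is overdue by more
// than the stale threshold (300 seconds assumed, per the default)
pub fun main(vaultID: UInt64): Bool {
    let registry = FlowYieldVaultsSchedulerRegistry.getRegistry()
    let info = registry.getScheduleInfo(vaultID)
    let now = getCurrentBlock().timestamp

    return now > info.nextScheduledTime + 300.0
}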

Report Stuck Vaults: If your vault becomes stuck, report it so admins can investigate root cause.

Advanced: Custom Scheduling

Developers can implement custom scheduling logic for specialized use cases:


pub resource CustomAutoBalancer: AutoBalancerInterface {
    // Custom interval based on conditions
    pub fun getNextInterval(): UInt64 {
        let ratio = self.getCurrentRatio()

        if ratio > 1.10 || ratio < 0.90 {
            return 30 // More frequent when far from target
        } else {
            return 120 // Less frequent when stable
        }
    }

    pub fun scheduleNextRebalance() {
        let interval = self.getNextInterval()

        FlowTransactionScheduler.schedule(
            functionCall: "rebalance()",
            executeAt: currentTime + interval
        )
    }
}

This enables dynamic scheduling based on vault state, optimizing gas costs vs. responsiveness.

Summary

FYV's scheduling system achieves truly automated yield farming through self-scheduling (vaults schedule their own rebalancing), atomic registration (no orphaned vaults), Supervisor recovery (stuck vaults are revived), and bounded processing (recovery scales to any number of vaults).

Key guarantees:

  • Every vault either has a working schedule or doesn't exist (atomicity)
  • Stuck vaults automatically recovered (within 10 minutes)
  • No external dependencies (no bot infrastructure needed)
  • Scales to thousands of vaults (batched processing)

Key Takeaway

The self-scheduling mechanism is what makes FYV truly "set and forget." Vaults perpetually schedule themselves, the Supervisor recovers failures, and users never need to manually trigger rebalancing. It's automation all the way down.