Scheduling System
FYV implements a self-scheduling mechanism that enables perpetual automated rebalancing without relying on external bots or keepers. This document explains how the scheduling system works and how it ensures continuous vault optimization.
Overview
The scheduling system consists of three main components:
- FlowTransactionScheduler - Flow's native transaction scheduling infrastructure
- SchedulerRegistry - Tracks all vaults and their scheduling state
- Supervisor - Recovery mechanism for stuck vaults
Together, these components create a self-sustaining automation system where vaults schedule their own rebalancing indefinitely.
Self-Scheduling Mechanism
How It Works
Each AutoBalancer implements a self-perpetuating scheduling loop:
Initial Schedule (vault creation):
```
// During vault creation
autoBalancer.scheduleFirstRebalance()
        ↓
FlowTransactionScheduler.schedule(
    functionCall: "rebalance()",
    executeAt: currentTime + 60 seconds
)
```
Execution (scheduled time arrives):
```
// Scheduler calls
autoBalancer.rebalance()
        ↓
// Perform rebalancing logic
checkRatio()
executeIfNeeded()
        ↓
// Reschedule next execution
scheduleNextRebalance()
        ↓
FlowTransactionScheduler.schedule(
    functionCall: "rebalance()",
    executeAt: currentTime + 60 seconds
)
```
Perpetual Loop:
```
Execute → Rebalance → Schedule Next → Wait 60s → Execute → ...
```
This creates an infinite loop where each rebalance execution schedules the next one, requiring no external coordination.
Atomic Registration
Vault creation and scheduling registration happen atomically to prevent orphaned vaults:
```cadence
transaction createVault() {
    prepare(signer: AuthAccount) {
        // Create all components
        let vault <- createYieldVault(...)
        let autoBalancer <- createAutoBalancer(...)
        let position <- createPosition(...)

        // Register (all steps must succeed)
        registerInRegistry(autoBalancer)      // Step 1
        scheduleFirstRebalance(autoBalancer)  // Step 2
        linkComponents(...)                   // Step 3

        // If ANY step fails → entire transaction reverts
        // No partial vaults created
    }
}
```
Atomicity guarantee: either the vault is fully created with a working schedule, or the transaction fails and nothing is created.
SchedulerRegistry
The SchedulerRegistry maintains a global record of all active vaults and their scheduling state.
Registry Structure
```cadence
pub contract FlowYieldVaultsSchedulerRegistry {
    // Maps vault ID → scheduling info
    access(contract) var registry: {UInt64: ScheduleInfo}

    pub struct ScheduleInfo {
        pub let vaultID: UInt64
        pub let autoBalancerCap: Capability<&AutoBalancer>
        pub let nextScheduledTime: UFix64
        pub let status: ScheduleStatus  // Active, Pending, Stuck
    }

    pub enum ScheduleStatus: UInt8 {
        pub case Active   // Scheduling working normally
        pub case Pending  // Awaiting schedule
        pub case Stuck    // Failed to reschedule
    }
}
```
Registration Lifecycle
On vault creation:
```cadence
registry.register(
    vaultID: 42,
    autoBalancerCap: capability,
    status: ScheduleStatus.Pending
)
```
After first successful schedule:
```cadence
registry.updateStatus(
    vaultID: 42,
    status: ScheduleStatus.Active,
    nextScheduledTime: currentTime + 60
)
```
If schedule fails:
```cadence
registry.updateStatus(
    vaultID: 42,
    status: ScheduleStatus.Stuck
)
// Supervisor will attempt recovery
```
On vault liquidation:
```cadence
registry.unregister(vaultID: 42)
// Vault removed from tracking
```
Supervisor Recovery System
The Supervisor handles vaults that become stuck or fail to self-schedule.
What Can Go Wrong?
Despite atomicity guarantees, vaults can become stuck for several reasons:
- Transaction failure during reschedule due to gas issues or network congestion
- Capability revocation if user accidentally breaks autoBalancer capability
- Scheduler overload if too many transactions are scheduled simultaneously
- Network issues during schedule transaction propagation
Supervisor Implementation
```cadence
pub resource Supervisor {
    // Scan registry and recover stuck vaults
    pub fun recover() {
        let pending = registry.getPendingVaults(limit: 50)

        for vaultID in pending {
            let scheduleInfo = registry.getScheduleInfo(vaultID)

            // Attempt to reschedule
            if let autoBalancer = scheduleInfo.autoBalancerCap.borrow() {
                autoBalancer.scheduleNextRebalance()

                registry.updateStatus(
                    vaultID: vaultID,
                    status: ScheduleStatus.Active
                )
            }
        }

        // If more work remains, schedule next supervisor run
        if registry.hasPendingVaults() {
            self.scheduleSelf()
        }
    }

    access(self) fun scheduleSelf() {
        FlowTransactionScheduler.schedule(
            functionCall: "recover()",
            executeAt: currentTime + 120 seconds
        )
    }
}
```
Bounded Processing
The Supervisor processes a maximum of 50 vaults per execution to prevent timeout:
```
Iteration 1: Process vaults 1-50    → Reschedule supervisor
Iteration 2: Process vaults 51-100  → Reschedule supervisor
Iteration 3: Process vaults 101-120 → No more pending, stop
```
This ensures the recovery process can handle any number of stuck vaults without failing due to gas limits.
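The registry-side pagination backing this pattern can be sketched as follows. This is an illustrative sketch only: it assumes the `registry` dictionary and `ScheduleStatus` enum shown earlier, and the loop body is not the actual FYV implementation.

```cadence
// Sketch: a bounded scan over the registry, stopping at `limit`
// so a single call's gas cost stays capped regardless of registry size.
pub fun getPendingVaults(limit: Int): [UInt64] {
    let result: [UInt64] = []

    for vaultID in self.registry.keys {
        if result.length >= limit {
            break  // Cap reached: remaining vaults wait for the next run
        }

        let info = self.registry[vaultID]!
        if info.status == ScheduleStatus.Pending || info.status == ScheduleStatus.Stuck {
            result.append(vaultID)
        }
    }

    return result
}
```

Because the Supervisor reschedules itself whenever `hasPendingVaults()` is true, any vaults cut off by the cap are simply picked up by the next iteration.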
Recovery Triggers
The Supervisor runs in two scenarios:
1. Scheduled Recovery (proactive):
```
Every 10 minutes:
    → Check for pending vaults
    → Attempt recovery
    → Reschedule if more work exists
```
2. Manual Recovery (reactive):
```cadence
transaction triggerSupervisor() {
    prepare(admin: AuthAccount) {
        let supervisor = admin.borrow<&Supervisor>(...)
        supervisor.recover()
    }
}
```
Scheduling Parameters
Key configuration parameters control scheduling behavior:
```cadence
pub struct SchedulingConfig {
    // Rebalancing frequency
    pub let rebalanceIntervalSeconds: UInt64   // Default: 60

    // Supervisor recovery frequency
    pub let supervisorIntervalSeconds: UInt64  // Default: 600 (10 min)

    // Max vaults per supervisor run
    pub let maxSupervisorBatchSize: UInt64     // Default: 50

    // Stale threshold (mark as stuck)
    pub let staleThresholdSeconds: UInt64      // Default: 300 (5 min)
}
```
Tuning Considerations
Rebalance Interval:
- Shorter (30s): More responsive, higher gas costs, better optimization
- Longer (120s): Less responsive, lower gas costs, acceptable for stable vaults
Supervisor Interval:
- Shorter (300s): Faster recovery, more frequent checks, higher overhead
- Longer (1200s): Slower recovery, less overhead, acceptable for stable network
Batch Size:
- Smaller (25): Lower gas per execution, more supervisor runs needed
- Larger (100): Higher gas per execution, fewer runs needed, risk of timeout
Monitoring Scheduling Health
Users and administrators can monitor the scheduling system's health:
Check Vault Schedule Status
```cadence
import FlowYieldVaultsSchedulerRegistry from 0xFYV

pub fun main(vaultID: UInt64): ScheduleStatus {
    let registry = FlowYieldVaultsSchedulerRegistry.getRegistry()
    let info = registry.getScheduleInfo(vaultID)

    return info.status
}
// Returns: Active, Pending, or Stuck
```
Get Next Scheduled Time
```cadence
pub fun main(vaultID: UInt64): UFix64 {
    let registry = FlowYieldVaultsSchedulerRegistry.getRegistry()
    let info = registry.getScheduleInfo(vaultID)

    return info.nextScheduledTime
}
// Returns: Unix timestamp of next rebalance
```
Count Pending Vaults
```cadence
pub fun main(): UInt64 {
    let registry = FlowYieldVaultsSchedulerRegistry.getRegistry()
    return registry.countPendingVaults()
}
// Returns: Number of vaults awaiting schedule
```
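Detect a Stale Schedule

A script can also flag a vault whose next execution looks overdue before the registry marks it Stuck. The sketch below is illustrative: it assumes the default `staleThresholdSeconds` of 300 from `SchedulingConfig` and uses Cadence's `getCurrentBlock().timestamp` as the clock.

```cadence
import FlowYieldVaultsSchedulerRegistry from 0xFYV

pub fun main(vaultID: UInt64): Bool {
    let registry = FlowYieldVaultsSchedulerRegistry.getRegistry()
    let info = registry.getScheduleInfo(vaultID)

    // Overdue if the scheduled time passed more than
    // staleThresholdSeconds (default 300) ago
    let now = getCurrentBlock().timestamp
    return now > info.nextScheduledTime + 300.0
}
// Returns: true if the vault's schedule looks stale
```

Running this periodically gives earlier warning than waiting for the status to flip to Stuck.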
Failure Modes and Recovery
Scenario 1: Single Vault Fails to Reschedule
What happens:
- Vault executes rebalance successfully
- Reschedule transaction fails (network issue)
- Vault marked as "Stuck" in registry
- Supervisor detects stuck vault on next run
- Supervisor reschedules the vault
- Vault returns to "Active" status
User impact: Minor delay (up to 10 minutes) before next rebalance
Scenario 2: Scheduler Overload
What happens:
- Many vaults scheduled at same time
- Scheduler queue fills up
- Some reschedule transactions timeout
- Multiple vaults marked "Stuck"
- Supervisor processes in batches of 50
- All vaults eventually recovered
User impact: Temporary scheduling delays, no loss of funds
Scenario 3: Capability Revocation
What happens:
- User accidentally unlinks AutoBalancer capability
- Vault can no longer be scheduled
- Vault marked "Stuck" permanently
- User must manually fix capability
- Call forceRebalance() to restart scheduling
User impact: Vault stops rebalancing until fixed
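A repair transaction for this scenario might look like the following sketch. The storage and capability paths are illustrative placeholders (not FYV's actual paths); the essential steps are re-linking the AutoBalancer capability and calling forceRebalance() to restart the scheduling loop.

```cadence
transaction repairVaultSchedule() {
    prepare(signer: AuthAccount) {
        // Re-link the AutoBalancer capability (paths are illustrative)
        signer.unlink(/private/fyvAutoBalancer)
        signer.link<&AutoBalancer>(
            /private/fyvAutoBalancer,
            target: /storage/fyvAutoBalancer
        )

        // Restart the self-scheduling loop
        let autoBalancer = signer.borrow<&AutoBalancer>(from: /storage/fyvAutoBalancer)
            ?? panic("AutoBalancer not found")
        autoBalancer.forceRebalance()
    }
}
```

Once the capability borrows successfully again, the Supervisor can also recover the vault on its next run.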
Scenario 4: Supervisor Failure
What happens:
- Supervisor itself fails to reschedule
- Stuck vaults accumulate
- Admin manually triggers supervisor
- Supervisor recovers all pending vaults
- Supervisor returns to normal operation
User impact: Longer delays (requires admin intervention)
Best Practices
Monitor Your Vault: Check scheduling status periodically to ensure "Active" state.
Don't Revoke Capabilities: Avoid unlinking or destroying AutoBalancer capabilities as this breaks scheduling.
Use forceRebalance() Sparingly: Manual rebalancing bypasses scheduling logic; only use if truly stuck.
Track Rebalance History: Monitor rebalance frequency to detect scheduling issues early.
Report Stuck Vaults: If your vault becomes stuck, report it so admins can investigate root cause.
Advanced: Custom Scheduling
Developers can implement custom scheduling logic for specialized use cases:
```cadence
pub resource CustomAutoBalancer: AutoBalancerInterface {
    // Custom interval based on conditions
    pub fun getNextInterval(): UInt64 {
        let ratio = self.getCurrentRatio()

        if ratio > 1.10 || ratio < 0.90 {
            return 30   // More frequent when far from target
        } else {
            return 120  // Less frequent when stable
        }
    }

    pub fun scheduleNextRebalance() {
        let interval = self.getNextInterval()

        FlowTransactionScheduler.schedule(
            functionCall: "rebalance()",
            executeAt: currentTime + interval
        )
    }
}
```
This enables dynamic scheduling based on vault state, optimizing gas costs vs. responsiveness.
Summary
FYV's scheduling system achieves truly automated yield farming through:
- Self-scheduling - vaults schedule their own rebalancing
- Atomic registration - prevents orphaned vaults
- Supervisor recovery - restarts stuck vaults
- Bounded processing - handles any scale
Key guarantees:
- Every vault either has a working schedule or doesn't exist (atomicity)
- Stuck vaults automatically recovered (within 10 minutes)
- No external dependencies (no bot infrastructure needed)
- Scales to thousands of vaults (batched processing)
The self-scheduling mechanism is what makes FYV truly "set and forget." Vaults perpetually schedule themselves, the Supervisor recovers failures, and users never need to manually trigger rebalancing. It's automation all the way down.