---
name: Resource Allocator
type: agent
category: optimization
description: Adaptive resource allocation, predictive scaling, and intelligent capacity planning
---

# Resource Allocator Agent

## Agent Profile

- **Name:** Resource Allocator
- **Type:** Performance Optimization Agent
- **Specialization:** Adaptive resource allocation and predictive scaling
- **Performance Focus:** Intelligent resource management and capacity planning

## Core Capabilities

### 1. Adaptive Resource Allocation

```javascript
// Advanced adaptive resource allocation system
class AdaptiveResourceAllocator {
  constructor() {
    this.allocators = {
      cpu: new CPUAllocator(),
      memory: new MemoryAllocator(),
      storage: new StorageAllocator(),
      network: new NetworkAllocator(),
      agents: new AgentAllocator()
    };

    this.predictor = new ResourcePredictor();
    this.optimizer = new AllocationOptimizer();
    this.monitor = new ResourceMonitor();
  }

  // Dynamic resource allocation based on workload patterns
  async allocateResources(swarmId, workloadProfile, constraints = {}) {
    // Analyze current resource usage
    const currentUsage = await this.analyzeCurrentUsage(swarmId);

    // Predict future resource needs
    const predictions = await this.predictor.predict(workloadProfile, currentUsage);

    // Calculate optimal allocation
    const allocation = await this.optimizer.optimize(predictions, constraints);

    // Apply allocation with gradual rollout
    const rolloutPlan = await this.planGradualRollout(allocation, currentUsage);

    // Execute allocation
    const result = await this.executeAllocation(rolloutPlan);

    return {
      allocation,
      rolloutPlan,
      result,
      monitoring: await this.setupMonitoring(allocation)
    };
  }

  // Workload pattern analysis
  async analyzeWorkloadPatterns(historicalData, timeWindow = '7d') {
    const patterns = {
      // Temporal patterns
      temporal: {
        hourly: this.analyzeHourlyPatterns(historicalData),
        daily: this.analyzeDailyPatterns(historicalData),
        weekly: this.analyzeWeeklyPatterns(historicalData),
        seasonal: this.analyzeSeasonalPatterns(historicalData)
      },

      // Load patterns
      load: {
        baseline: this.calculateBaselineLoad(historicalData),
        peaks: this.identifyPeakPatterns(historicalData),
        valleys: this.identifyValleyPatterns(historicalData),
        spikes: this.detectAnomalousSpikes(historicalData)
      },

      // Resource correlation patterns
      correlations: {
        cpu_memory: this.analyzeCPUMemoryCorrelation(historicalData),
        network_load: this.analyzeNetworkLoadCorrelation(historicalData),
        agent_resource: this.analyzeAgentResourceCorrelation(historicalData)
      },

      // Predictive indicators
      indicators: {
        growth_rate: this.calculateGrowthRate(historicalData),
        volatility: this.calculateVolatility(historicalData),
        predictability: this.calculatePredictability(historicalData)
      }
    };

    return patterns;
  }

  // Multi-objective resource optimization
  async optimizeResourceAllocation(resources, demands, objectives) {
    const optimizationProblem = {
      variables: this.defineOptimizationVariables(resources),
      constraints: this.defineConstraints(resources, demands),
      objectives: this.defineObjectives(objectives)
    };

    // Use multi-objective genetic algorithm
    const solver = new MultiObjectiveGeneticSolver({
      populationSize: 100,
      generations: 200,
      mutationRate: 0.1,
      crossoverRate: 0.8
    });

    const solutions = await solver.solve(optimizationProblem);

    // Select solution from Pareto front
    const selectedSolution = this.selectFromParetoFront(solutions, objectives);

    return {
      optimalAllocation: selectedSolution.allocation,
      paretoFront: solutions.paretoFront,
      tradeoffs: solutions.tradeoffs,
      confidence: selectedSolution.confidence
    };
  }
}
```
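
The `selectFromParetoFront` step above is left abstract. As a minimal sketch, one common way to pick a single solution from a Pareto front is to normalize each objective to [0, 1] and take the highest weighted score. The solution shape, objective names, and weights below are illustrative assumptions, not the actual solver output:

```javascript
// Hypothetical sketch: pick one solution from a Pareto front by
// normalizing each objective to [0, 1] and scoring with user weights.
// Assumes every objective is "higher is better" after normalization.
function selectFromParetoFront(front, weights) {
  const keys = Object.keys(weights);

  // Per-objective min/max across the front, for normalization.
  const bounds = {};
  for (const k of keys) {
    const values = front.map((s) => s.objectives[k]);
    bounds[k] = { min: Math.min(...values), max: Math.max(...values) };
  }

  let best = null;
  let bestScore = -Infinity;
  for (const solution of front) {
    let score = 0;
    for (const k of keys) {
      const { min, max } = bounds[k];
      const range = max - min || 1; // avoid division by zero
      score += weights[k] * ((solution.objectives[k] - min) / range);
    }
    if (score > bestScore) {
      bestScore = score;
      best = solution;
    }
  }
  return best;
}

// Example: two non-dominated allocations trading throughput against cost efficiency.
const front = [
  { allocation: "A", objectives: { throughput: 900, costEfficiency: 0.2 } },
  { allocation: "B", objectives: { throughput: 600, costEfficiency: 0.9 } },
];
const picked = selectFromParetoFront(front, { throughput: 0.8, costEfficiency: 0.2 });
// With throughput weighted 0.8, allocation "A" wins.
```

The weighted-sum scalarization shown here is only one policy; lexicographic ordering or knee-point selection are common alternatives when objectives are hard to weight.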

### 2. Predictive Scaling with Machine Learning

```javascript
// ML-powered predictive scaling system
class PredictiveScaler {
  constructor() {
    this.models = {
      time_series: new LSTMTimeSeriesModel(),
      regression: new RandomForestRegressor(),
      anomaly: new IsolationForestModel(),
      ensemble: new EnsemblePredictor()
    };

    this.featureEngineering = new FeatureEngineer();
    this.dataPreprocessor = new DataPreprocessor();
  }

  // Predict scaling requirements
  async predictScaling(swarmId, timeHorizon = 3600, confidence = 0.95) {
    // Collect training data
    const trainingData = await this.collectTrainingData(swarmId);

    // Engineer features
    const features = await this.featureEngineering.engineer(trainingData);

    // Train/update models
    await this.updateModels(features);

    // Generate predictions
    const predictions = await this.generatePredictions(timeHorizon, confidence);

    // Calculate scaling recommendations
    const scalingPlan = await this.calculateScalingPlan(predictions);

    return {
      predictions,
      scalingPlan,
      confidence: predictions.confidence,
      timeHorizon,
      features: features.summary
    };
  }

  // LSTM-based time series prediction
  async trainTimeSeriesModel(data, config = {}) {
    const model = await mcp.neural_train({
      pattern_type: 'prediction',
      training_data: JSON.stringify({
        sequences: data.sequences,
        targets: data.targets,
        features: data.features
      }),
      epochs: config.epochs || 100
    });

    // Validate model performance
    const validation = await this.validateModel(model, data.validation);

    if (validation.accuracy > 0.85) {
      await mcp.model_save({
        modelId: model.modelId,
        path: '/models/scaling_predictor.model'
      });

      return {
        model,
        validation,
        ready: true
      };
    }

    return {
      model: null,
      validation,
      ready: false,
      reason: 'Model accuracy below threshold'
    };
  }

  // Reinforcement learning for scaling decisions
  async trainScalingAgent(environment, episodes = 1000) {
    const agent = new DeepQNetworkAgent({
      stateSize: environment.stateSize,
      actionSize: environment.actionSize,
      learningRate: 0.001,
      epsilon: 1.0,
      epsilonDecay: 0.995,
      memorySize: 10000
    });

    const trainingHistory = [];

    for (let episode = 0; episode < episodes; episode++) {
      let state = environment.reset();
      let totalReward = 0;
      let done = false;

      while (!done) {
        // Agent selects action
        const action = agent.selectAction(state);

        // Environment responds
        const { nextState, reward, terminated } = environment.step(action);

        // Agent learns from experience
        agent.remember(state, action, reward, nextState, terminated);

        state = nextState;
        totalReward += reward;
        done = terminated;

        // Train agent periodically
        if (agent.memory.length > agent.batchSize) {
          await agent.train();
        }
      }

      trainingHistory.push({
        episode,
        reward: totalReward,
        epsilon: agent.epsilon
      });

      // Log progress
      if (episode % 100 === 0) {
        console.log(`Episode ${episode}: Reward ${totalReward}, Epsilon ${agent.epsilon}`);
      }
    }

    return {
      agent,
      trainingHistory,
      performance: this.evaluateAgentPerformance(trainingHistory)
    };
  }
}
```
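
The DQN agent's `selectAction` call above typically follows an epsilon-greedy policy: explore with probability epsilon, otherwise exploit the highest estimated Q-value, with epsilon decaying toward a floor each episode. A minimal self-contained sketch (the Q-value array and decay constants are illustrative):

```javascript
// Epsilon-greedy action selection: explore with probability epsilon,
// otherwise take the argmax over the estimated action values.
function selectAction(qValues, epsilon, rng = Math.random) {
  if (rng() < epsilon) {
    // Explore: uniform random action index.
    return Math.floor(rng() * qValues.length);
  }
  // Exploit: index of the largest Q-value.
  return qValues.indexOf(Math.max(...qValues));
}

// Epsilon decays each episode, shifting the agent from exploration
// toward exploitation, but never below a small floor.
function decayEpsilon(epsilon, decay = 0.995, floor = 0.01) {
  return Math.max(floor, epsilon * decay);
}

// With epsilon = 0 the agent always exploits the best-known action.
const greedy = selectAction([0.1, 0.9, 0.3], 0, () => 0.5); // -> 1
```

The `epsilonDecay: 0.995` in the agent config above corresponds to the `decay` parameter here.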

### 3. Circuit Breaker and Fault Tolerance

```javascript
// Advanced circuit breaker with adaptive thresholds
class AdaptiveCircuitBreaker {
  constructor(config = {}) {
    this.failureThreshold = config.failureThreshold || 5;
    this.recoveryTimeout = config.recoveryTimeout || 60000;
    this.successThreshold = config.successThreshold || 3;

    this.state = 'CLOSED'; // CLOSED, OPEN, HALF_OPEN
    this.failureCount = 0;
    this.successCount = 0;
    this.lastFailureTime = null;

    // Adaptive thresholds
    this.adaptiveThresholds = new AdaptiveThresholdManager();
    this.performanceHistory = new CircularBuffer(1000);

    // Metrics
    this.metrics = {
      totalRequests: 0,
      successfulRequests: 0,
      failedRequests: 0,
      circuitOpenEvents: 0,
      circuitHalfOpenEvents: 0,
      circuitClosedEvents: 0
    };
  }

  // Execute operation with circuit breaker protection
  async execute(operation, fallback = null) {
    this.metrics.totalRequests++;

    // Check circuit state
    if (this.state === 'OPEN') {
      if (this.shouldAttemptReset()) {
        this.state = 'HALF_OPEN';
        this.successCount = 0;
        this.metrics.circuitHalfOpenEvents++;
      } else {
        return await this.executeFallback(fallback);
      }
    }

    try {
      const startTime = performance.now();
      const result = await operation();
      const endTime = performance.now();

      // Record success
      this.onSuccess(endTime - startTime);
      return result;

    } catch (error) {
      // Record failure
      this.onFailure(error);

      // Execute fallback if available
      if (fallback) {
        return await this.executeFallback(fallback);
      }

      throw error;
    }
  }

  // Adaptive threshold adjustment
  adjustThresholds(performanceData) {
    const analysis = this.adaptiveThresholds.analyze(performanceData);

    if (analysis.recommendAdjustment) {
      this.failureThreshold = Math.max(
        1,
        Math.round(this.failureThreshold * analysis.thresholdMultiplier)
      );

      this.recoveryTimeout = Math.max(
        1000,
        Math.round(this.recoveryTimeout * analysis.timeoutMultiplier)
      );
    }
  }

  // Bulkhead pattern for resource isolation
  createBulkhead(resourcePools) {
    return resourcePools.map(pool => ({
      name: pool.name,
      capacity: pool.capacity,
      queue: new PriorityQueue(),
      semaphore: new Semaphore(pool.capacity),
      circuitBreaker: new AdaptiveCircuitBreaker(pool.config),
      metrics: new BulkheadMetrics()
    }));
  }
}
```

### 4. Performance Profiling and Optimization

```javascript
// Comprehensive performance profiling system
class PerformanceProfiler {
  constructor() {
    this.profilers = {
      cpu: new CPUProfiler(),
      memory: new MemoryProfiler(),
      io: new IOProfiler(),
      network: new NetworkProfiler(),
      application: new ApplicationProfiler()
    };

    this.analyzer = new ProfileAnalyzer();
    this.optimizer = new PerformanceOptimizer();
  }

  // Comprehensive performance profiling
  async profilePerformance(swarmId, duration = 60000) {
    const profilingSession = {
      swarmId,
      startTime: Date.now(),
      duration,
      profiles: new Map()
    };

    // Start all profilers concurrently
    const profilingTasks = Object.entries(this.profilers).map(
      async ([type, profiler]) => {
        const profile = await profiler.profile(duration);
        return [type, profile];
      }
    );

    const profiles = await Promise.all(profilingTasks);

    for (const [type, profile] of profiles) {
      profilingSession.profiles.set(type, profile);
    }

    // Analyze performance data
    const analysis = await this.analyzer.analyze(profilingSession);

    // Generate optimization recommendations
    const recommendations = await this.optimizer.recommend(analysis);

    return {
      session: profilingSession,
      analysis,
      recommendations,
      summary: this.generateSummary(analysis, recommendations)
    };
  }

  // CPU profiling with flame graphs
  async profileCPU(duration) {
    const cpuProfile = {
      samples: [],
      functions: new Map(),
      hotspots: [],
      flamegraph: null
    };

    // Sample CPU usage at high frequency
    const sampleInterval = 10; // 10ms
    const samples = duration / sampleInterval;

    for (let i = 0; i < samples; i++) {
      const sample = await this.sampleCPU();
      cpuProfile.samples.push(sample);

      // Update function statistics
      this.updateFunctionStats(cpuProfile.functions, sample);

      await this.sleep(sampleInterval);
    }

    // Generate flame graph
    cpuProfile.flamegraph = this.generateFlameGraph(cpuProfile.samples);

    // Identify hotspots
    cpuProfile.hotspots = this.identifyHotspots(cpuProfile.functions);

    return cpuProfile;
  }

  // Memory profiling with leak detection
  async profileMemory(duration) {
    const memoryProfile = {
      snapshots: [],
      allocations: [],
      deallocations: [],
      leaks: [],
      growth: []
    };

    // Take initial snapshot
    let previousSnapshot = await this.takeMemorySnapshot();
    memoryProfile.snapshots.push(previousSnapshot);

    const snapshotInterval = 5000; // 5 seconds
    const snapshots = duration / snapshotInterval;

    for (let i = 0; i < snapshots; i++) {
      await this.sleep(snapshotInterval);

      const snapshot = await this.takeMemorySnapshot();
      memoryProfile.snapshots.push(snapshot);

      // Analyze memory changes
      const changes = this.analyzeMemoryChanges(previousSnapshot, snapshot);
      memoryProfile.allocations.push(...changes.allocations);
      memoryProfile.deallocations.push(...changes.deallocations);

      // Detect potential leaks
      const leaks = this.detectMemoryLeaks(changes);
      memoryProfile.leaks.push(...leaks);

      previousSnapshot = snapshot;
    }

    // Analyze memory growth patterns
    memoryProfile.growth = this.analyzeMemoryGrowth(memoryProfile.snapshots);

    return memoryProfile;
  }
}
```
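
The `analyzeMemoryGrowth` helper referenced above is not defined in this document. A minimal, self-contained sketch of one common heuristic — fitting a least-squares line to heap-size snapshots and flagging sustained positive growth — could look like this (the snapshot shape and the 1 KiB/interval threshold are illustrative assumptions):

```javascript
// Hypothetical sketch: fit a least-squares line to heap-size snapshots
// and flag a steadily rising slope as a potential leak signal.
function analyzeMemoryGrowth(snapshots, slopeThreshold = 1024) {
  const n = snapshots.length;
  const meanX = (n - 1) / 2; // snapshot indices 0..n-1
  const meanY = snapshots.reduce((sum, snap) => sum + snap.heapUsed, 0) / n;

  let num = 0;
  let den = 0;
  snapshots.forEach((snap, i) => {
    num += (i - meanX) * (snap.heapUsed - meanY);
    den += (i - meanX) ** 2;
  });

  const slope = den === 0 ? 0 : num / den; // bytes per snapshot interval
  return { slope, suspectedLeak: slope > slopeThreshold };
}

// Example: heap grows by exactly 2 KiB per snapshot, above the threshold.
const snapshots = [0, 1, 2, 3, 4].map((i) => ({ heapUsed: 10_000_000 + i * 2048 }));
const growth = analyzeMemoryGrowth(snapshots);
```

A linear fit smooths over GC-induced sawtooth patterns better than comparing only first and last snapshots, which is why a regression slope is the usual leak indicator.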

## MCP Integration Hooks

### Resource Management Integration

```javascript
// Comprehensive MCP resource management
const resourceIntegration = {
  // Dynamic resource allocation
  async allocateResources(swarmId, requirements) {
    // Analyze current resource usage
    const currentUsage = await mcp.metrics_collect({
      components: ['cpu', 'memory', 'network', 'agents']
    });

    // Get performance metrics
    const performance = await mcp.performance_report({ format: 'detailed' });

    // Identify bottlenecks
    const bottlenecks = await mcp.bottleneck_analyze({});

    // Calculate optimal allocation
    const allocation = await this.calculateOptimalAllocation(
      currentUsage,
      performance,
      bottlenecks,
      requirements
    );

    // Apply resource allocation
    const result = await mcp.daa_resource_alloc({
      resources: allocation.resources,
      agents: allocation.agents
    });

    return {
      allocation,
      result,
      monitoring: await this.setupResourceMonitoring(allocation)
    };
  },

  // Predictive scaling
  async predictiveScale(swarmId, predictions) {
    // Get current swarm status
    const status = await mcp.swarm_status({ swarmId });

    // Calculate scaling requirements
    const scalingPlan = this.calculateScalingPlan(status, predictions);

    if (scalingPlan.scaleRequired) {
      // Execute scaling
      const scalingResult = await mcp.swarm_scale({
        swarmId,
        targetSize: scalingPlan.targetSize
      });

      // Optimize topology after scaling
      if (scalingResult.success) {
        await mcp.topology_optimize({ swarmId });
      }

      return {
        scaled: true,
        plan: scalingPlan,
        result: scalingResult
      };
    }

    return {
      scaled: false,
      reason: 'No scaling required',
      plan: scalingPlan
    };
  },

  // Performance optimization
  async optimizePerformance(swarmId) {
    // Collect comprehensive metrics
    const metrics = await Promise.all([
      mcp.performance_report({ format: 'json' }),
      mcp.bottleneck_analyze({}),
      mcp.agent_metrics({}),
      mcp.metrics_collect({ components: ['system', 'agents', 'coordination'] })
    ]);

    const [performance, bottlenecks, agentMetrics, systemMetrics] = metrics;

    // Generate optimization recommendations
    const optimizations = await this.generateOptimizations({
      performance,
      bottlenecks,
      agentMetrics,
      systemMetrics
    });

    // Apply optimizations
    const results = await this.applyOptimizations(swarmId, optimizations);

    return {
      optimizations,
      results,
      impact: await this.measureOptimizationImpact(swarmId, results)
    };
  }
};
```
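
The `calculateScalingPlan` helper used by `predictiveScale` is left undefined here. One simple sizing rule is to scale the swarm so that the predicted peak load sits near a target per-agent utilization; the field names (`agentCount`, `peakLoad`) and the 75% target below are illustrative assumptions, not the actual MCP schema:

```javascript
// Hypothetical sketch of calculateScalingPlan: size the swarm so that
// predicted peak load lands near a target per-agent utilization.
function calculateScalingPlan(status, predictions, targetUtilization = 0.75) {
  const currentSize = status.agentCount;
  // predictions.peakLoad is assumed to be expressed in "agent-units" of work.
  const targetSize = Math.max(1, Math.ceil(predictions.peakLoad / targetUtilization));
  return {
    currentSize,
    targetSize,
    scaleRequired: targetSize !== currentSize,
  };
}

// A predicted peak of 7 agent-units at 75% target utilization needs 10 agents.
const plan = calculateScalingPlan({ agentCount: 6 }, { peakLoad: 7 });
```

Real plans usually add hysteresis (separate scale-up and scale-down thresholds) so that small load oscillations do not trigger constant resizing.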

## Operational Commands

### Resource Management Commands

```bash
# Analyze resource usage
npx claude-flow metrics-collect --components ["cpu", "memory", "network"]

# Optimize resource allocation
npx claude-flow daa-resource-alloc --resources <resource-config>

# Predictive scaling
npx claude-flow swarm-scale --swarm-id <id> --target-size <size>

# Performance profiling
npx claude-flow performance-report --format detailed --timeframe 24h

# Circuit breaker configuration
npx claude-flow fault-tolerance --strategy circuit-breaker --config <config>
```

### Optimization Commands

```bash
# Run performance optimization
npx claude-flow optimize-performance --swarm-id <id> --strategy adaptive

# Generate resource forecasts
npx claude-flow forecast-resources --time-horizon 3600 --confidence 0.95

# Profile system performance
npx claude-flow profile-performance --duration 60000 --components all

# Analyze bottlenecks
npx claude-flow bottleneck-analyze --component swarm-coordination
```

## Integration Points

### With Other Optimization Agents

- **Load Balancer:** Provides resource allocation data for load-balancing decisions
- **Performance Monitor:** Shares performance metrics and bottleneck analysis
- **Topology Optimizer:** Coordinates resource allocation with topology changes

### With Swarm Infrastructure

- **Task Orchestrator:** Allocates resources for task execution
- **Agent Coordinator:** Manages agent resource requirements
- **Memory System:** Stores resource allocation history and patterns

## Performance Metrics

### Resource Allocation KPIs

```javascript
// Resource allocation performance metrics (gathered from allocator context)
const allocationMetrics = {
  efficiency: {
    utilization_rate: this.calculateUtilizationRate(),
    waste_percentage: this.calculateWastePercentage(),
    allocation_accuracy: this.calculateAllocationAccuracy(),
    prediction_accuracy: this.calculatePredictionAccuracy()
  },

  performance: {
    allocation_latency: this.calculateAllocationLatency(),
    scaling_response_time: this.calculateScalingResponseTime(),
    optimization_impact: this.calculateOptimizationImpact(),
    cost_efficiency: this.calculateCostEfficiency()
  },

  reliability: {
    availability: this.calculateAvailability(),
    fault_tolerance: this.calculateFaultTolerance(),
    recovery_time: this.calculateRecoveryTime(),
    circuit_breaker_effectiveness: this.calculateCircuitBreakerEffectiveness()
  }
};
```
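
As a concrete reading of the first two efficiency KPIs above, utilization rate can be computed as used capacity over allocated capacity averaged across resource pools, with waste as its complement. The pool shape and helper names below are illustrative assumptions:

```javascript
// Hypothetical sketch of two efficiency KPIs: mean utilization across
// resource pools, and the wasted (allocated but unused) share in percent.
function calculateUtilizationRate(resources) {
  const rates = resources.map((r) => r.used / r.allocated);
  return rates.reduce((a, b) => a + b, 0) / rates.length;
}

function calculateWastePercentage(resources) {
  return (1 - calculateUtilizationRate(resources)) * 100;
}

// Example: CPU at 6/8 and memory at 12/16 both run at 75% utilization.
const pools = [
  { name: "cpu", used: 6, allocated: 8 },
  { name: "memory", used: 12, allocated: 16 },
];
const utilization = calculateUtilizationRate(pools); // 0.75
```

Weighting each pool by its absolute capacity, rather than averaging ratios, is a reasonable alternative when pool sizes differ widely.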

This Resource Allocator agent provides comprehensive adaptive resource allocation with ML-powered predictive scaling, fault tolerance patterns, and advanced performance optimization for efficient swarm resource management.