
Rate Limiting Configuration

Overview

The HSP system implements configurable rate limiting for HTTP polling operations to prevent overwhelming endpoint devices and ensure controlled data collection.

Requirement: Req-FR-16 (enhanced)
Phase: 1.1 - Foundation & Quick Wins
Implementation: RateLimitedHttpPollingAdapter

Configuration Schema

JSON Configuration

Add the following to your hsp-config.json:

{
  "http_polling": {
    "rate_limiting": {
      "enabled": true,
      "requests_per_second": 10.0,
      "per_endpoint": true
    }
  },
  "endpoints": [
    {
      "url": "http://device-1.local/diagnostics",
      "rate_limit_override": 5.0
    },
    {
      "url": "http://device-2.local/diagnostics",
      "rate_limit_override": 20.0
    }
  ]
}

Configuration Parameters

| Parameter | Type | Default | Description | Constraint |
|---|---|---|---|---|
| enabled | boolean | true | Enable/disable rate limiting | - |
| requests_per_second | double | 10.0 | Global rate limit | Must be > 0 |
| per_endpoint | boolean | true | Apply rate limit per endpoint or globally | - |
| rate_limit_override | double | (optional) | Per-endpoint rate limit override | Must be > 0 |
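
The constraints above can be enforced when the configuration is loaded. A minimal sketch (the `RateLimitSettings` class and its field names are illustrative, not the actual HSP configuration classes):

```java
// Illustrative settings holder enforcing the constraints from the table above.
// Field names mirror the JSON schema; the class itself is hypothetical.
final class RateLimitSettings {
    final boolean enabled;
    final double requestsPerSecond;
    final boolean perEndpoint;

    RateLimitSettings(boolean enabled, double requestsPerSecond, boolean perEndpoint) {
        if (requestsPerSecond <= 0) {
            throw new IllegalArgumentException(
                "requests_per_second must be > 0, got " + requestsPerSecond);
        }
        this.enabled = enabled;
        this.requestsPerSecond = requestsPerSecond;
        this.perEndpoint = perEndpoint;
    }
}
```

Rejecting non-positive rates at load time surfaces misconfiguration immediately instead of at the first poll.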

Implementation Details

Algorithm

The implementation uses Google Guava's RateLimiter, which implements a token bucket algorithm:

  1. Token Bucket: Tokens are added at a constant rate
  2. Request Processing: Each request consumes one token
  3. Blocking Behavior: If no tokens available, request blocks until token is available
  4. Smooth Rate: Distributes requests evenly over time (no bursts)
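
The four steps above can be sketched with a minimal stand-alone limiter. This is illustrative only: the production adapter delegates to Guava's RateLimiter rather than any class like this, and Guava sleeps outside its internal lock where this sketch sleeps inside it for simplicity.

```java
import java.util.concurrent.TimeUnit;

// Minimal illustration of the smooth token-bucket behaviour described above:
// permits become available at a fixed interval, and each acquire() blocks
// until its permit is due, spacing requests evenly over time.
class SimpleRateLimiter {
    private final long intervalNanos;   // time between consecutive permits
    private long nextFreeTime;          // earliest time the next permit is available

    SimpleRateLimiter(double permitsPerSecond) {
        if (permitsPerSecond <= 0) {
            throw new IllegalArgumentException("rate must be > 0");
        }
        this.intervalNanos = (long) (1_000_000_000L / permitsPerSecond);
        this.nextFreeTime = System.nanoTime();
    }

    // Blocks until a permit is available, then consumes it. Holding the
    // monitor while sleeping keeps the sketch simple and serves waiters
    // one at a time; a production limiter avoids sleeping under its lock.
    synchronized void acquire() throws InterruptedException {
        long now = System.nanoTime();
        long waitNanos = nextFreeTime - now;
        if (waitNanos > 0) {
            TimeUnit.NANOSECONDS.sleep(waitNanos);
        }
        // Schedule the next permit one interval later: evenly spaced, no bursts.
        nextFreeTime = Math.max(now, nextFreeTime) + intervalNanos;
    }
}
```

At 100 permits/s the interval is 10 ms, so three consecutive acquisitions take roughly 20 ms end to end (the first permit is immediately available).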

Thread Safety

  • Thread-Safe: RateLimiter supports concurrent acquisition from multiple threads
  • Low Overhead: permit accounting happens in a short internal critical section, so acquisition stays cheap under contention
  • Orderly Serving: when rate-limited, waiting requests obtain permits roughly in arrival order

Performance Characteristics

  • Memory: O(1) - minimal overhead per instance
  • CPU: O(1) - constant time acquire operation
  • Latency: consecutive requests are spaced at least 1 / requests_per_second apart (e.g. 100 ms at 10 req/s); actual wait time grows when callers contend for permits

Usage Examples

Example 1: Global Rate Limiting

// Create base HTTP adapter
IHttpPollingPort httpAdapter = new HttpPollingAdapter(httpConfig);

// Wrap with rate limiting (10 requests per second)
IHttpPollingPort rateLimited = new RateLimitedHttpPollingAdapter(
    httpAdapter,
    10.0  // 10 req/s
);

// Use the rate-limited adapter
CompletableFuture<byte[]> data = rateLimited.pollEndpoint("http://device.local/data");

Example 2: Per-Endpoint Rate Limiting

// Different rate limits for different endpoints
Map<String, IHttpPollingPort> adapters = new HashMap<>();

for (EndpointConfig endpoint : config.getEndpoints()) {
    IHttpPollingPort baseAdapter = new HttpPollingAdapter(endpoint);

    double rateLimit = endpoint.getRateLimitOverride()
        .orElse(config.getDefaultRateLimit());

    IHttpPollingPort rateLimited = new RateLimitedHttpPollingAdapter(
        baseAdapter,
        rateLimit
    );

    adapters.put(endpoint.getUrl(), rateLimited);
}

Example 3: Dynamic Rate Adjustment

// For future enhancement - dynamic rate adjustment
public class AdaptiveRateLimiter {
    private final RateLimitedHttpPollingAdapter adapter;

    public AdaptiveRateLimiter(RateLimitedHttpPollingAdapter adapter) {
        this.adapter = adapter;
    }

    public void adjustRate(double newRate) {
        // Not supported yet: RateLimitedHttpPollingAdapter fixes its rate at
        // construction, so changing it currently means creating a new
        // instance. Guava's RateLimiter.setRate(double) could back this
        // enhancement once the adapter exposes it.
    }
}

Testing

Test Coverage

The implementation includes comprehensive tests:

  1. Initialization Tests: Valid and invalid configuration
  2. Rate Limiting Tests: Within and exceeding limits
  3. Time Window Tests: Rate limit reset behavior
  4. Concurrency Tests: Thread safety with concurrent requests
  5. Burst Traffic Tests: Handling sudden request spikes
  6. Exception Tests: Error propagation from underlying adapter

Running Tests

# Run unit tests
mvn test -Dtest=RateLimitedHttpPollingAdapterTest

# Generate coverage report
mvn test jacoco:report

# Verify coverage thresholds (95% line, 90% branch)
mvn verify

Monitoring

Metrics to Monitor

  1. Rate Limit Wait Time: Time spent waiting for rate limiter permits
  2. Request Throughput: Actual requests per second achieved
  3. Queue Depth: Number of requests waiting for permits
  4. Rate Limit Violations: Attempts that were throttled

Example Monitoring Integration

public class MonitoredRateLimitedAdapter implements IHttpPollingPort {
    private final RateLimitedHttpPollingAdapter delegate;
    private final MetricsCollector metrics;

    public MonitoredRateLimitedAdapter(RateLimitedHttpPollingAdapter delegate,
                                       MetricsCollector metrics) {
        this.delegate = delegate;
        this.metrics = metrics;
    }

    @Override
    public CompletableFuture<byte[]> pollEndpoint(String url) {
        long startTime = System.nanoTime();

        // The rate limiter blocks inside pollEndpoint until a permit is
        // acquired, so the elapsed time when the call returns approximates
        // the rate-limit wait. (Measuring on future completion instead
        // would include the HTTP round trip in the recorded delay.)
        CompletableFuture<byte[]> result = delegate.pollEndpoint(url);

        long waitNanos = System.nanoTime() - startTime;
        metrics.recordRateLimitDelay(url, waitNanos);

        return result;
    }
}

Troubleshooting

Issue: Requests Too Slow

Symptom: Data collection takes longer than expected

Solution:

  1. Check rate limit setting: requests_per_second
  2. Increase rate limit if endpoints can handle it
  3. Monitor endpoint response times
  4. Consider per-endpoint rate limits

Issue: Endpoints Overwhelmed

Symptom: HTTP 429 (Too Many Requests) or timeouts

Solution:

  1. Decrease requests_per_second
  2. Implement exponential backoff (Phase 1, Task 3.2)
  3. Add per-endpoint rate limit overrides
  4. Monitor endpoint health
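
Step 2 above is not implemented yet (Phase 1, Task 3.2); as a hedged sketch, the retry delay schedule could look like the following (the class and parameter names are illustrative, not part of the current codebase):

```java
// Illustrative exponential backoff: the delay doubles with each failed
// attempt and is capped so retries never wait longer than maxMs.
final class Backoff {
    static long delayMs(int attempt, long baseMs, long maxMs) {
        long delay = baseMs * (1L << Math.min(attempt, 20)); // clamp shift to avoid overflow
        return Math.min(delay, maxMs);
    }
}
```

With a 100 ms base and a 5 s cap, attempt 0 waits 100 ms, attempt 1 waits 200 ms, and so on up to the cap; production retry loops typically add random jitter on top to avoid synchronized retries.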

Issue: Uneven Distribution

Symptom: Some endpoints polled more frequently than others

Solution:

  1. Enable per_endpoint: true in configuration
  2. Set appropriate rate_limit_override per endpoint
  3. Review polling schedule distribution

Future Enhancements

Planned Enhancements (Post-Phase 1)

  1. Dynamic Rate Adjustment: Adjust rate based on endpoint health
  2. Adaptive Rate Limiting: Auto-tune based on response times
  3. Token Bucket Size: Configure burst allowance
  4. Rate Limit Warm-up: Gradual ramp-up after restart
  5. Priority-Based Limiting: Different rates for different data priorities

Integration Points

  • Backpressure Controller (Phase 1.2): Adjust rate based on buffer usage
  • Health Check (Phase 3.6): Include rate limit statistics
  • Configuration Reload (Future): Hot-reload rate limit changes

References

Requirements Traceability

  • Req-FR-16: Rate limiting for HTTP requests (enhanced)
  • Req-Arch-6: Thread-safe concurrent operations
  • Req-NFR-2: Performance under load


Document Version: 1.0
Last Updated: 2025-11-20
Author: HSP Development Team
Status: Implemented (Phase 1.1)