# Rate Limiting Configuration

## Overview
The HSP system implements configurable rate limiting for HTTP polling operations to prevent overwhelming endpoint devices and ensure controlled data collection.
**Requirement**: Req-FR-16 (enhanced)
**Phase**: 1.1 - Foundation & Quick Wins
**Implementation**: `RateLimitedHttpPollingAdapter`
## Configuration Schema

### JSON Configuration

Add the following to your `hsp-config.json`:
```json
{
  "http_polling": {
    "rate_limiting": {
      "enabled": true,
      "requests_per_second": 10.0,
      "per_endpoint": true
    }
  },
  "endpoints": [
    {
      "url": "http://device-1.local/diagnostics",
      "rate_limit_override": 5.0
    },
    {
      "url": "http://device-2.local/diagnostics",
      "rate_limit_override": 20.0
    }
  ]
}
```
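The interaction between `rate_limit_override` and the global `requests_per_second` is a simple fallback: use the endpoint's override when present, otherwise the global value. A minimal sketch of that resolution logic (the `EndpointRateResolver` helper is hypothetical, not an actual HSP class):

```java
import java.util.Optional;

// Hypothetical helper: resolves the effective rate for one endpoint, preferring
// the endpoint's rate_limit_override over the global requests_per_second.
class EndpointRateResolver {
    static double effectiveRate(Optional<Double> rateLimitOverride,
                                double globalRequestsPerSecond) {
        return rateLimitOverride.orElse(globalRequestsPerSecond);
    }
}
```

With the example configuration above, `device-1` resolves to 5.0 req/s and an endpoint without an override resolves to the global 10.0 req/s.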
### Configuration Parameters
| Parameter | Type | Default | Description | Constraint |
|---|---|---|---|---|
| `enabled` | boolean | `true` | Enable/disable rate limiting | - |
| `requests_per_second` | double | `10.0` | Global rate limit | Must be > 0 |
| `per_endpoint` | boolean | `true` | Apply rate limit per endpoint or globally | - |
| `rate_limit_override` | double | (optional) | Per-endpoint rate limit override | Must be > 0 |
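The "Must be > 0" constraints in the table are natural to enforce when the configuration is loaded. A minimal validation sketch, assuming an immutable holder class (`RateLimitConfig` here is illustrative, not the actual HSP configuration type):

```java
// Hypothetical holder for the rate_limiting section of hsp-config.json.
class RateLimitConfig {
    final boolean enabled;
    final double requestsPerSecond;
    final boolean perEndpoint;

    RateLimitConfig(boolean enabled, double requestsPerSecond, boolean perEndpoint) {
        // Enforce the "Must be > 0" constraint from the configuration table.
        if (requestsPerSecond <= 0.0 || !Double.isFinite(requestsPerSecond)) {
            throw new IllegalArgumentException(
                "requests_per_second must be a positive finite number, got "
                    + requestsPerSecond);
        }
        this.enabled = enabled;
        this.requestsPerSecond = requestsPerSecond;
        this.perEndpoint = perEndpoint;
    }
}
```

Failing fast at load time turns a misconfigured rate into an immediate startup error rather than a silent runtime stall.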
## Implementation Details

### Algorithm

The implementation uses Google Guava's `RateLimiter`, which implements a token bucket algorithm:
- **Token Bucket**: Tokens are added at a constant rate
- **Request Processing**: Each request consumes one token
- **Blocking Behavior**: If no token is available, the request blocks until one becomes available
- **Smooth Rate**: Requests are spread evenly over time; note that Guava's default limiter does permit a small burst (up to roughly one second's worth of unused permits)
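The token-bucket steps above can be sketched in plain Java. This is a toy illustration of the algorithm only, not Guava's implementation and not the production adapter:

```java
// Toy token bucket: permits accrue at a fixed rate up to a burst capacity.
class TokenBucket {
    private final double ratePerSecond;  // tokens added per second
    private final double maxTokens;      // burst allowance (bucket capacity)
    private double tokens;
    private long lastRefillNanos;

    TokenBucket(double ratePerSecond, double maxTokens) {
        this.ratePerSecond = ratePerSecond;
        this.maxTokens = maxTokens;
        this.tokens = maxTokens;             // start full
        this.lastRefillNanos = System.nanoTime();
    }

    // Non-blocking variant: returns true if a token was available and consumed.
    synchronized boolean tryAcquire() {
        refill();
        if (tokens >= 1.0) {
            tokens -= 1.0;
            return true;
        }
        return false;
    }

    // Add tokens for the elapsed time, capped at the bucket capacity.
    private void refill() {
        long now = System.nanoTime();
        double elapsedSeconds = (now - lastRefillNanos) / 1e9;
        tokens = Math.min(maxTokens, tokens + elapsedSeconds * ratePerSecond);
        lastRefillNanos = now;
    }
}
```

A blocking `acquire()` (as in the real adapter) would simply loop or sleep until `tryAcquire()` succeeds.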
### Thread Safety

- **Thread-Safe**: `RateLimiter` supports concurrent access from multiple threads
- **Lightweight Synchronization**: Permit accounting is guarded by a short internal lock, so contention overhead is minimal
- **Ordering**: Waiting callers are generally served in arrival order, though Guava does not guarantee strict FIFO fairness under heavy contention
### Performance Characteristics

- **Memory**: O(1) - minimal overhead per limiter instance
- **CPU**: O(1) - constant-time acquire operation
- **Latency**: Under sustained load, permits are spaced 1 / `requests_per_second` seconds apart, which bounds the delay added per request
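These characteristics are easy to sanity-check with arithmetic: at 10 req/s the steady-state spacing is 100 ms, so N back-to-back requests need at least (N - 1) / rate seconds (the first request is served immediately). A small sketch of that lower bound, ignoring any burst allowance:

```java
class RateLimitMath {
    // Minimum wall-clock time (seconds) to issue n requests at the given rate,
    // assuming the first request is immediate and no burst allowance.
    static double minDurationSeconds(int n, double requestsPerSecond) {
        if (n <= 0) {
            return 0.0;
        }
        return (n - 1) / requestsPerSecond;
    }
}
```

For example, polling 11 endpoints through a single 10 req/s limiter takes at least one second end to end.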
## Usage Examples

### Example 1: Global Rate Limiting
```java
// Create base HTTP adapter
IHttpPollingPort httpAdapter = new HttpPollingAdapter(httpConfig);

// Wrap with rate limiting (10 requests per second)
IHttpPollingPort rateLimited = new RateLimitedHttpPollingAdapter(
    httpAdapter,
    10.0  // 10 req/s
);

// Use the rate-limited adapter
CompletableFuture<byte[]> data = rateLimited.pollEndpoint("http://device.local/data");
```
### Example 2: Per-Endpoint Rate Limiting
```java
// Different rate limits for different endpoints
Map<String, IHttpPollingPort> adapters = new HashMap<>();

for (EndpointConfig endpoint : config.getEndpoints()) {
    IHttpPollingPort baseAdapter = new HttpPollingAdapter(endpoint);

    double rateLimit = endpoint.getRateLimitOverride()
        .orElse(config.getDefaultRateLimit());

    IHttpPollingPort rateLimited = new RateLimitedHttpPollingAdapter(
        baseAdapter,
        rateLimit
    );

    adapters.put(endpoint.getUrl(), rateLimited);
}
```
### Example 3: Dynamic Rate Adjustment
```java
// For future enhancement - dynamic rate adjustment
public class AdaptiveRateLimiter {
    private RateLimitedHttpPollingAdapter adapter;

    public void adjustRate(double newRate) {
        // Would require enhancement to RateLimitedHttpPollingAdapter
        // to support dynamic rate changes.
        // Currently requires creating a new instance.
    }
}
```
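One plausible way to support dynamic adjustment without modifying `RateLimitedHttpPollingAdapter` itself is to swap an immutable limiter atomically. A sketch under that assumption (`FixedRateLimiter` and `AdjustableLimiter` are hypothetical illustrations, not HSP classes):

```java
import java.util.concurrent.atomic.AtomicReference;

// Hypothetical stand-in for a limiter whose rate cannot change in place.
class FixedRateLimiter {
    final double requestsPerSecond;

    FixedRateLimiter(double requestsPerSecond) {
        this.requestsPerSecond = requestsPerSecond;
    }
}

// Sketch: dynamic adjustment by atomically replacing the immutable limiter.
class AdjustableLimiter {
    private final AtomicReference<FixedRateLimiter> current;

    AdjustableLimiter(double initialRate) {
        this.current = new AtomicReference<>(new FixedRateLimiter(initialRate));
    }

    // Readers always observe a fully constructed limiter; no locking needed.
    double currentRate() {
        return current.get().requestsPerSecond;
    }

    void adjustRate(double newRate) {
        if (newRate <= 0) {
            throw new IllegalArgumentException("rate must be > 0");
        }
        current.set(new FixedRateLimiter(newRate));
    }
}
```

The design choice here mirrors copy-on-write: in-flight requests finish against the old limiter while new requests pick up the new rate, avoiding any lock on the hot path.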
## Testing

### Test Coverage

The implementation includes comprehensive tests:
- **Initialization Tests**: Valid and invalid configuration
- **Rate Limiting Tests**: Within and exceeding limits
- **Time Window Tests**: Rate limit reset behavior
- **Concurrency Tests**: Thread safety with concurrent requests
- **Burst Traffic Tests**: Handling sudden request spikes
- **Exception Tests**: Error propagation from the underlying adapter
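Rate-limiting behavior is typically asserted by timing calls and checking the elapsed wall-clock time against the configured rate. A self-contained sketch of that test style, using a toy blocking limiter (this is not `RateLimitedHttpPollingAdapterTest` or the production adapter):

```java
// Minimal blocking limiter used only to demonstrate the timing-based test style.
class BusyWaitLimiter {
    private final long intervalNanos;
    private long nextFreeNanos = System.nanoTime();

    BusyWaitLimiter(double requestsPerSecond) {
        this.intervalNanos = (long) (1e9 / requestsPerSecond);
    }

    // Blocks (by spinning) until the next permit time, then reserves the next slot.
    synchronized void acquire() {
        long now = System.nanoTime();
        while (now < nextFreeNanos) {
            Thread.onSpinWait();
            now = System.nanoTime();
        }
        nextFreeNanos = Math.max(now, nextFreeNanos) + intervalNanos;
    }
}
```

A test then acquires N permits in a loop and asserts that the total elapsed time is at least (N - 1) intervals, which is exactly the lower bound a correct limiter must enforce.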
### Running Tests

```bash
# Run unit tests
mvn test -Dtest=RateLimitedHttpPollingAdapterTest

# Generate coverage report
mvn test jacoco:report

# Verify coverage thresholds (95% line, 90% branch)
mvn verify
```
## Monitoring

### Metrics to Monitor
- **Rate Limit Wait Time**: Time spent waiting for rate limiter permits
- **Request Throughput**: Actual requests per second achieved
- **Queue Depth**: Number of requests waiting for permits
- **Rate Limit Violations**: Attempts that were throttled
### Example Monitoring Integration

```java
public class MonitoredRateLimitedAdapter implements IHttpPollingPort {
    private final RateLimitedHttpPollingAdapter delegate;
    private final MetricsCollector metrics;

    public MonitoredRateLimitedAdapter(RateLimitedHttpPollingAdapter delegate,
                                       MetricsCollector metrics) {
        this.delegate = delegate;
        this.metrics = metrics;
    }

    @Override
    public CompletableFuture<byte[]> pollEndpoint(String url) {
        long startTime = System.nanoTime();
        CompletableFuture<byte[]> result = delegate.pollEndpoint(url);
        // Record the duration once the poll completes successfully.
        result.thenRun(() -> {
            long duration = System.nanoTime() - startTime;
            metrics.recordRateLimitDelay(url, duration);
        });
        return result;
    }
}
```
## Troubleshooting

### Issue: Requests Too Slow

**Symptom**: Data collection takes longer than expected

**Solution**:
- Check the `requests_per_second` setting
- Increase the rate limit if endpoints can handle it
- Monitor endpoint response times
- Consider per-endpoint rate limits
### Issue: Endpoints Overwhelmed

**Symptom**: HTTP 429 (Too Many Requests) or timeouts

**Solution**:
- Decrease `requests_per_second`
- Implement exponential backoff (Phase 1, Task 3.2)
- Add per-endpoint rate limit overrides
- Monitor endpoint health
### Issue: Uneven Distribution

**Symptom**: Some endpoints polled more frequently than others

**Solution**:
- Enable `per_endpoint: true` in configuration
- Set an appropriate `rate_limit_override` per endpoint
- Review polling schedule distribution
## Future Enhancements

### Planned Enhancements (Post-Phase 1)
- **Dynamic Rate Adjustment**: Adjust rate based on endpoint health
- **Adaptive Rate Limiting**: Auto-tune based on response times
- **Token Bucket Size**: Configure burst allowance
- **Rate Limit Warm-up**: Gradual ramp-up after restart
- **Priority-Based Limiting**: Different rates for different data priorities
### Integration Points

- **Backpressure Controller (Phase 1.2)**: Adjust rate based on buffer usage
- **Health Check (Phase 3.6)**: Include rate limit statistics
- **Configuration Reload (Future)**: Hot-reload rate limit changes
## References

### Requirements Traceability
- Req-FR-16: Rate limiting for HTTP requests (enhanced)
- Req-Arch-6: Thread-safe concurrent operations
- Req-NFR-2: Performance under load
### Related Documentation
- PROJECT_IMPLEMENTATION_PLAN.md - Phase 1.1
- system-architecture.md - Adapter pattern
- test-strategy.md - TDD approach
**Document Version**: 1.0
**Last Updated**: 2025-11-20
**Author**: HSP Development Team
**Status**: Implemented (Phase 1.1)